Health + AI Tech Show: Data Trust & the Future of AI in Healthcare

The inaugural Health + AI Tech Show held on 29 April 2026 set out to do something different and succeeded. Bringing together more than 1,000 leaders from across the NHS, research, regulation and the startup ecosystem, the event focused not on vision or hype, but on what is actually working in practice. As highlighted on the event website, the central tension was not whether AI works, but whether healthcare systems are ready to support it at scale.

Across diagnostics, drug discovery and hospital operations, a consistent message emerged. Progress is real, but fragile. Pilots are plentiful, but scaling remains difficult. Trust is not a feature that can be added later. It is the foundation on which adoption depends.

It was within this context that the panel discussion titled “Who Owns the Model? Data, IP and the Public Private AI Partnership” took place. Chaired by Lopa Patel MBE, Digital Founder and Chair of Diversity UK and Non Executive Board Member, NHS South West Peninsula Cluster, the session brought together Anmol Arora, Technology and AI Lead at the Clinical Education Research Group, University of Cambridge, and Eleonor Duhs, Partner at Marks and Clerk. Together, they explored one of the most pressing and unresolved questions in healthcare AI: when models are built on shared NHS and research data, who holds the value, and who carries the risk?

Data governance as the starting point

A key theme from the discussion was that partnership needs clarity before code. Too often, technical development moves faster than governance frameworks. This creates uncertainty around ownership, accountability and long term value sharing.

The panel examined data stewardship in public private research and development within the NHS, alongside the ethical and intellectual property implications of shared model development. These are not abstract concerns. They directly influence whether organisations are willing to collaborate, how patients perceive the use of their data, and whether innovation can be scaled responsibly.

A particularly important perspective came from Eleonor Duhs, who highlighted the legal reality underpinning many of these debates. From a data governance standpoint, the distinction between data controller and data processor is often less separable in practice than technology companies might assume. In effect, the legal system can treat them as a single continuum of responsibility, making it difficult to ringfence an AI model from the data on which it is trained and operates. This has significant implications for claims around ownership and commercialisation, and it reinforces the need for clarity at the outset of any collaboration.

Eleonor also pointed to the higher regulatory thresholds emerging under the EU AI Act, which further raise expectations around transparency, accountability and risk management.

Together, these factors signal a shift towards more stringent oversight, where governance is not an afterthought but a core design principle of AI development in healthcare.

Data governance in this context is not simply about compliance. It is about defining what “fair” looks like. Who benefits when an AI model generates value from NHS data? How is that value reinvested into public services? And how are risks distributed when outcomes fall short?

The discussion made clear that existing frameworks are still evolving. While there is growing maturity in information governance, there remains ambiguity around model ownership and downstream rights. This is particularly acute when AI systems are co-developed across organisational and geographic boundaries.

The role of synthetic data

One area that generated significant interest was the role of synthetic data, a topic explored by Anmol Arora in the context of NHS data environments. Synthetic data offers a way to unlock innovation while addressing some of the most sensitive challenges around privacy and access.

In an NHS context, where patient data is both highly valuable and highly protected, synthetic data can act as a bridge. It enables developers and researchers to test models, explore use cases and iterate more quickly without exposing identifiable patient information.

However, the conversation also acknowledged that synthetic data is not a complete solution. Its usefulness depends on how well it reflects real world complexity and whether it can support clinically meaningful outcomes. Used well, it can accelerate innovation and reduce risk. Used poorly, it risks reinforcing bias or creating misleading results.

The key takeaway is clear. Synthetic data has a role, but it must sit within a broader governance framework that ensures quality, transparency and accountability.

Balancing innovation with fairness

A recurring tension throughout the session was the balance between innovation and fairness. The NHS offers a unique environment for clinical validation and real world testing, yet many innovators still look to other markets to scale their solutions.

This raises important questions. How can the UK create conditions that both protect public value and encourage innovation? What does a fair collaboration between the NHS and private sector actually look like in practice?

The panel explored the need for clearer frameworks that define value sharing from the outset. This includes not only financial returns, but also access to resulting technologies, improvements in patient outcomes and system wide benefits.

Importantly, fairness is not just a contractual issue. It is also about perception and trust. As Lopa Patel emphasised in her role as chair, if patients and the public do not believe that their data is being used responsibly, adoption will stall regardless of technical capability.

From pilots to systems change

The wider discussions at the Health + AI Tech Show reinforced the idea that the biggest barrier to AI adoption is not the technology itself, but the systems around it. As other sessions during the day noted, AI often fails not because the tools are ineffective, but because systems are not ready.

This is where governance, infrastructure and leadership intersect. A structured approach to adoption, including identifying high value problems, piloting with clear metrics and scaling with strong oversight, is essential.

Data governance sits at the centre of this. Without clear standards, interoperable systems and agreed principles for collaboration, even the most promising innovations will struggle to move beyond the pilot stage.

The question of ‘who owns the model’ is about more than legal rights

The Health + AI Tech Show demonstrated that the conversation around AI in healthcare is maturing. There is less focus on possibility and more on implementation. Less emphasis on disruption and more on integration.

The panel on data, IP and partnership highlighted that the next phase of progress will depend on getting the foundations right. Clear governance. Fair value sharing. Trusted use of data.

Synthetic data will play a part in enabling this future, particularly in balancing innovation with privacy. But it is only one piece of a much larger puzzle.

Ultimately, the question of who owns the model is about more than legal rights. It is about responsibility, trust and the collective effort required to ensure that AI delivers real value for patients and the health system as a whole.

The message from the day was clear. The technology is ready. The challenge now is to build the systems, partnerships and governance that allow it to succeed.

The next Health + AI Tech Show takes place on 28 April 2027. For further information, visit:
https://healthaiinsiders.com/event/health-ai-tech-show/