AI APPG Horizon Scanning

I’m on a train heading back from a crowded committee room in the House of Lords, where the AI APPG (all-party parliamentary group) held an evidence session on horizon scanning.

AI events always draw a crowd, and I suspect the presence of Yann LeCun was a big attraction. If you haven’t heard of him, he’s as big a name in AI as they come: widely recognised as one of the pioneers of modern AI and deep learning, he co-developed core technologies like convolutional neural networks and won the Turing Award for his work. Until recently he was Chief AI Scientist at Meta Platforms, and he has just left to found a new company, citing, amongst other things, Meta’s move away from open-source/open-weight AI. That’s not to take anything away from the other speakers, who are also leading experts in their fields.

Speakers and overview

  • Yann LeCun – AMI Labs
  • Professor Viv Kendon – University of Strathclyde
  • Professor Mihaela van der Schaar – University of Cambridge
  • Bob De Caux – IFS
  • Mark Taylor – Osborne Clarke

What did we cover? What might come after large language models; the practical, large-scale use of agentic AI; the intersection of quantum computing and AI; and the role of synthetic data in helping us move to the next phase while addressing privacy and data availability issues.

So, a fascinating session and one I thought well worth sharing as I head back to south Wales. One note of caution: the room was packed, and I took my notes standing up on my phone. I believe they are accurate, but please do fact-check anything critical!

Yann LeCun on world models and control

Yann LeCun, joining on Zoom, opened with a deliberate challenge to some of the dominant narratives around current AI systems. He argued that large language models are often mistaken for a path towards general intelligence, when in fact, language is comparatively easy. It is a sequence of discrete symbols, unlike the continuous, physical and social world that humans and animals inhabit. From that perspective, today’s systems are powerful but narrow tools, not stepping stones to intelligence in a stronger sense.

Instead, Yann pointed towards the development of “world models” as a longer-term direction of travel: systems trained not just on text, but on video, interaction and real-world experience. He was sceptical about near-term superintelligence narratives and runaway agentic systems, suggesting that much of the policy debate is getting ahead of the underlying technical reality.

A major concern for him was the concentration of capability. The biggest risk, he argued, is AI being captured by a small number of companies and individuals. Open source and open model weights were framed as essential counterbalances, alongside federated approaches to data that allow regions and languages to retain control of their own datasets and training while still contributing to global models. This could happen by sharing model parameters or vectors rather than the underlying data, something I will admit I had not thought about before.
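The federated idea Yann described can be made concrete with a small sketch. This is my own illustration, not anything presented in the session: each region trains on data that never leaves its site, and only the resulting parameters are shared and averaged into a global model (a simplified federated-averaging scheme; the `local_update` step is a toy stand-in for real training).

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    """One toy round of local training: nudge the weights toward the
    mean of the local data (a stand-in for real gradient steps)."""
    return weights + lr * (data.mean(axis=0) - weights)

def federated_average(local_weights, sizes):
    """Combine locally trained weights, weighted by dataset size.
    Only parameters leave each site; the raw data never does."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

# Two hypothetical regions train on private data that stays local...
global_w = np.zeros(3)
region_a = np.random.rand(100, 3)   # never leaves region A
region_b = np.random.rand(50, 3)    # never leaves region B

w_a = local_update(global_w, region_a)
w_b = local_update(global_w, region_b)

# ...and only the updated parameters are shared and averaged.
global_w = federated_average([w_a, w_b], [len(region_a), len(region_b)])
print(global_w.shape)  # (3,)
```

Real systems add encryption, differential privacy and many rounds of this loop, but the core point survives even in the toy version: the shared artefact is parameters, not data.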

Regulation, in his view, should focus on applications and outcomes, not on research and development itself.

Professor Viv Kendon on Quantum and AI

Professor Viv Kendon (University of Strathclyde) brought a perspective from the world of quantum technologies. She emphasised that while the UK is investing seriously in quantum computing, it remains at an early stage, comparable to classical computing several decades ago. Progress depends on sustained investment in foundational research rather than short-term expectations of impact.

Quantum machine learning, she explained, is still largely at the scientific stage. There is some evidence that quantum approaches might eventually support faster or more energy-efficient AI training, but this is far from certain, and quantum is not a shortcut to better AI. Practical constraints around storage, data handling and error correction remain significant, and AI may not ultimately be the dominant use case for quantum computing. Viv stressed the importance of continued investment in data curation, compression and storage capabilities regardless of how quantum technologies develop.

One interesting idea was that quantum computing may offer a route to lower power consumption, potentially challenging the data-centre-first approach we currently have in the UK. My main takeaway was that hybrid solutions are a distinct possibility, with quantum being used for training and classical computing for inference.

Mihaela van der Schaar on Agents and the NHS

Professor Mihaela van der Schaar (University of Cambridge) focused on healthcare, using the NHS as a concrete testbed for thinking about next-generation AI systems. She argued that progress in healthcare AI is often constrained less by technology than by social and institutional factors, particularly the fact that decisions remain firmly with clinicians. Delays, in this sense, are often human rather than technical.

She described the potential of more agentic and reasoning-based AI systems that can explore counterfactuals, answer “what if” questions, and support decision-making through approaches such as digital twins of patients. These were framed partly as decision-support tools rather than automation, with personalised treatment and interventions as longer-term possibilities. Mihaela suggested that such systems may be easier to regulate than is sometimes assumed, because their behaviour can be logged, tested and audited. She also cautioned against prematurely classifying emerging approaches as medical devices in ways that could limit experimentation.

Bob De Caux on agent orchestration

Bob De Caux (IFS) brought an applied organisational lens, grounded in real-world deployments of agentic systems. He described a shift towards multi-agent approaches, where smaller models work together, select tools dynamically, and operate within defined constraints. Tool discovery, interoperability and runtime control featured more prominently than raw model size.

He highlighted practical challenges such as auditing, determinism by design, and protection against issues like prompt injection as systems become more autonomous in practice. Bob also reflected on the organisational implications of agentic AI, including changes to job roles, the reshaping of entry-level work, and questions of accountability and ownership. One particularly useful framing was the idea that agents may need to be treated more like employees than traditional software, bringing issues of governance and change management into sharper focus.
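To make the "defined constraints" and auditing points tangible, here is a minimal sketch of my own (the tool names are purely illustrative): an agent can only invoke tools on an allow-list, and every call is logged, which is what makes the system auditable and its permitted behaviour deterministic by design.

```python
from typing import Callable

# Hypothetical tools an agent might call; names are illustrative.
TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_invoice": lambda q: f"invoice record for {q}",
    "draft_email":    lambda q: f"draft email about {q}",
}

# Runtime constraints: an allow-list plus an audit log, so every
# tool call is both permitted and recorded.
ALLOWED = {"lookup_invoice"}
AUDIT_LOG: list[tuple[str, str]] = []

def call_tool(name: str, query: str) -> str:
    """Dispatch a tool call, enforcing the allow-list and logging it."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    if name not in ALLOWED:
        raise PermissionError(f"tool not permitted: {name}")
    AUDIT_LOG.append((name, query))   # auditable trail of every action
    return TOOLS[name](query)

print(call_tool("lookup_invoice", "ACME-42"))
```

The employee analogy fits neatly here: the allow-list is the job description, and the audit log is the performance record.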

Mark Taylor on synthetic data

Mark Taylor (Osborne Clarke) approached the discussion from a legal and regulatory perspective. He explored the role of synthetic data as a way to reduce bias and address data access constraints, while also raising questions about intellectual property and copyright when synthetic data is derived from protected sources. He argued for reducing unnecessary legal friction around AI development and deployment, with synthetic data positioned as part of the solution rather than the problem.

The discussion that followed surfaced a range of questions and tensions, from concerns about centralised versus distributed agentic systems, to scepticism about whether we are anywhere near the kinds of AI risks that dominate some political debates. There was little appetite in the room for premature legislation aimed at speculative futures, and a noticeable frustration with how much attention is currently paid to superintelligence compared to more immediate and structural issues.

Closing thoughts

So what was most striking? The optimism about agentic AI (see my earlier primer if you want a refresher on what that means), combined with views of the future that felt grounded in plausible next steps in technical development. World models, hybrid quantum-classical approaches, and agents interacting at scale all featured as credible directions of travel rather than distant science fiction.

Yann LeCun was especially blunt about the need for open models and for control not to rest with a tiny handful of actors. That is something I broadly agree with, but the session also surfaced a new and interesting route towards this goal: models where elements are trained locally, with only parameters and weights contributing to a larger shared model.

Finally, despite repeated attempts by the speakers to steer the discussion elsewhere, questions from the audience inevitably drifted back to “superintelligence” and whether this is what we should now be worrying about. It felt as though a new imagined threat was emerging, involving armies of agentic AI systems acquiring consciousness and taking over the world.

