End of year reflections

First, full disclosure – I started writing this for our Jisc AI team blog, but I ran out of time to get my colleagues’ input before the Christmas break! So I’m reviving my Substack for this one, as it’s purely my thoughts and reflections.

As we approach the end of 2025, I thought I’d look back over what has felt like an extremely busy year, with no real let-up in AI interest. I did this last year at our HE meetup in December, where I grouped the review by a number of themes. I’m doing the same this year, but with slightly different themes as the conversation shifts. 

This is very much a personal reflection and, of course, is by no means intended to be exhaustive or definitive. Just because I haven’t mentioned something doesn’t mean I don’t think it matters!

If I were to summarise, I think this year has been characterised by four things:

  • Steady incremental technology gains – none of which seems that big on its own, but which cumulatively add up to quite substantial progress.
  • A growing backlash against AI, especially in creativity and the arts, and signs of AI boredom.
  • A growing pragmatic approach to generative AI – use it where it’s useful, embed institutional governance, don’t bet the house on it.
  • A jump in the political response to AI, framed both as an opportunity for economic growth and as a matter of protecting national interests.

Incremental Technology Gains

The year started with big news about the latest DeepSeek model, developed in China. The scale of the technical breakthrough was probably overstated, but the debate it generated around the neutrality of AI models was welcome.

Speaking of neutrality in AI models, we also saw the launch of Grok 3 and 4, Elon Musk’s contribution – an attempt to bring a GenAI model aligned with his particular worldview to the table, often with quickly comical results. Without any adversarial prompting, it quite happily told me Elon Musk was the best person in the world, as well as the best at drinking, carrying puppies, leaping over buildings, etc.

Perhaps the most obvious improvements were in image generation. In March, we saw the release of OpenAI’s 4o image generation model, which changed the underlying technical approach; the headline feature was probably that text in AI images now works properly. Social media was promptly flooded with pictures of AI-generated dolls. Google followed with a similar approach and the awfully named Nano Banana Pro in November.

We’ve seen similar gains in video generation, particularly with Sora 2 from OpenAI, and a flood of amusing or annoying videos across social media, as well as real challenges in misinformation, with AI-generated video increasingly used for political purposes.

Games of follow-the-leader among the main AI players have been a recurring theme. Understandable, perhaps, but I’d much prefer to see greater original thinking. In February, OpenAI and Perplexity followed Google’s lead, each releasing their own Deep Research feature. The naming of this feature is probably not that helpful – it’s a useful tool, but calling it deep research is an oversell.

We also saw a continued blurring of the lines between GenAI and search as Google introduced AI Mode. Meanwhile, the integration of AI into the main Google search is proving controversial, both for its accuracy and for its impact on traffic to websites.

I’ve not yet mentioned Anthropic or Meta. Anthropic have perhaps quietly and carefully improved their models – they remain many people’s favourite – focusing less on consumers and direct-to-user sales and more on backend integration, a strategy that seems to be paying off. Meta, meanwhile, have, at least from my perspective, dropped off the radar a bit.

And finally, agentic AI – promise (or threat) rather than reality at the moment, but I suspect things will look very different this time next year. Agentic browsers from Perplexity and OpenAI are probably more of a security threat than practically useful at the moment. If you’ve tried them, you’ll probably find they almost work for some tasks, very slowly. Remember what image generation was like a couple of years ago, though.


The AI Backlash and AI Boredom

The backlash against generative AI, and the public mood around it, are shifting and, I think, will continue to move and become more divided over the next few years.

‘AI slop’ has become one of the phrases of the year, and the backlash against the impact of AI on arts and artists is growing. Meanwhile, concerns about the environmental impact of AI continue to grow, driven by a mix of well-informed, sensible data and misinformation. Jisc’s AI team’s most-read blog post of the year was my colleague Catherine Barker’s ‘Artificial intelligence and the environment: Putting the numbers into perspective’.

It’s going to be interesting to see where this ends up. The backlash against AI isn’t resulting in wholesale rejection. Coca-Cola launched their second AI-generated Christmas ad (we had a great internal session on this from my colleague Cece), and the fact that they are repeating it implies the backlash doesn’t stop it being financially worthwhile for them.

Interestingly, a lot of the public seem quite happy to listen to AI-generated music. The AI-generated ‘band’ Velvet Sundown got over 1 million listens on Spotify, and an AI act ‘copied’ from Cardiff-based band Holding Absence ended up with more Spotify listeners than the band itself. The song I Run went viral with an AI-generated vocal, which most think was too close to the real vocals of Jorja Smith to be anything other than a clone – although it’s unclear whether it was, or whether it was simply a result of the training or prompting. The response from industry bodies has been pragmatic – the Musicians’ Union’s calls for consent, labelling, and payment/protection, for example.

The debate on social media, meanwhile, is of course much less nuanced, with many labelling any AI-generated content as AI slop and feelings running high.

Where do students stand on this? We’re seeing growing signs that students, particularly those on creative courses, are increasingly anti-GenAI because of its potential impact on their career paths, its theft of creative material, and its impact on the environment. This, of course, isn’t universal. One thing that struck me from a student panel at the Windsor College AI Summit was more a sense of boredom – as in, why are we talking about this very normal thing all the time? And that’s not surprising really – mainstream GenAI is now over three years old, and if you’re 16 or 17, that’s a big chunk of your education where it’s just been there.


AI Pragmatism

We’ve probably started to see the AI hype in education die down. As the discussion about AI and assessment moves towards a consensus around careful adoption, the focus has started to shift towards pragmatic use for efficiency, alongside building the right skills for the future. At Jisc, we encapsulated this in our strategic framework for AI and shared our own approach, including our legal team’s journey with AI.

Jisc’s latest staff and student perceptions of AI report showed that use was now firmly embedded, with most making their own pragmatic decisions about how GenAI could help them. However, it also showed growing concerns about the impact of AI on employability, and about negative impacts on learning.


National and Political Responses

This was a year for national AI plans – driven partly by the realisation that relying on a small number of nations to provide AI models was a significant risk, and partly by a fear of missing out on real or perceived economic (and, to a degree, social) impact.

We kicked off January with the release of the UK’s AI Opportunities Action Plan. This has turned out to be a living, actioned document, with activities happening around many of the action points over the year. We discussed the impact of this at a couple of roundtables, with the focus largely being on the skills elements.

The EU followed with its AI Continent plan – in reality, remarkably similar to the UK plan, albeit with slightly different terminology (AI Growth Zones vs AI Factories, for example). I won’t attempt to list them all; we ended the year with Australia’s National AI Plan. The common threads have been investment in data centres and skills, ‘home-grown’ AI tech, and varying degrees of social commitment.

Aligned to this were increasingly large financial investments. In the US, we saw the launch of the Stargate project – a $500 billion infrastructure project led by OpenAI. In the UK, we saw a lot of fanfare about a £31 billion UK/US tech deal in September, and then, at the time of writing, uncertainty over whether this was really happening after all.

Meanwhile, around the world, countries continue to grapple with AI legislation. In the UK, this was largely around copyright, framed as a battle between tech and creatives. The EU AI Act, meanwhile, continued to have a slightly bumpy ride – not surprising really, as it was designed for a pre-GenAI world.

And, of course, underpinning it all – debates about how nations were going to power their AI ambitions, along with a renewed interest in (or fears about) nuclear power as the solution, and intense lobbying from the nuclear industry.


So what next?

Were there any surprises this year? For me, probably just how good image generation got. I think the rest has followed the kind of trajectory we’ve come to expect from technology waves.

Next year, as I mentioned, I’m expecting to see agentic AI follow a similar path to image generation.

I think the backlash will continue next year, and that’s a good thing – it forces us to use AI in meaningful, justifiable ways (or not at all).

