Welcome back to the Interesting Reads Weekly Roundup!
This week, we explore how AI is reshaping tools, classrooms, and policy. OpenAI launches in-chat apps, turning ChatGPT into a platform for interactive tools from Spotify to Coursera. In Europe, Germany’s internal divisions over the EU’s “chat control” law have delayed decisions on scanning private messages for safety.
Anthropic, with the UK AI Safety Institute, shows that as few as 250 malicious documents can “poison” large language models, a finding that may prompt new approaches to AI security. Former UK prime minister Rishi Sunak also steps into the tech world, taking advisory roles with Microsoft and Anthropic while directing his compensation to a social mobility charity.
In education, Schools Week highlights a northern academy trust using deepfake teacher avatars and AI marking to support pupils and reduce teacher workload, while Bassem Nasir urges leaders to prioritise purpose and collaboration over top-down AI fixes in schools.
Happy reading!
OpenAI launches apps inside of ChatGPT, Maxwell Zeff for TechCrunch
OpenAI has introduced a new way for developers to build and integrate applications directly within ChatGPT, marking a shift towards a more interactive and connected AI ecosystem. Announced at DevDay 2025, the feature allows users to access apps such as Booking.com, Expedia, Spotify, Figma, Coursera, Zillow and Canva from within their conversations.
Rather than directing users to a separate GPT Store, the new system embeds apps in ChatGPT’s responses, letting people summon third-party tools by name or context. For example, asking Figma to turn a sketch into a diagram or Coursera to explain a topic. ChatGPT can also suggest relevant apps automatically, such as Spotify for playlists or Zillow for housing searches.
Developers will build using OpenAI’s new Apps SDK and Model Context Protocol, which enables apps to trigger actions, display interactive elements and connect external data sources. While monetisation and privacy policies are still being refined, OpenAI says apps will collect only minimal data and prioritise user experience over commercial placement.
Germany split on ‘chat control’ bill as EU talks stall, Hans Von Der Burchard and Sam Clark for Politico
Germany’s coalition government is divided over whether to back the EU’s proposed “chat control” law, which would require platforms such as WhatsApp, Signal and X to scan private messages for child sexual abuse material (CSAM). The disagreement has stalled negotiations in Brussels, despite strong pressure from Denmark and France to finalise the legislation.
The bill, which has been in limbo for years, aims to combat CSAM but has drawn fierce criticism from privacy advocates and tech firms, who warn it would undermine encryption and enable mass surveillance. Germany’s Justice Minister Stefanie Hubig has ruled out supporting any measure that enables “mass scanning”, while the Interior Ministry has stayed silent.
Divisions also run within parties: CDU figures are split between opposing and endorsing the proposal, while the SPD has called for it to be “clearly rejected”. With talks now delayed until at least December, Germany’s stance could prove decisive in determining whether the EU finds consensus or faces another prolonged deadlock.
Education Leadership in Times of Uncertainty: Why AI Demands We All Step Up, Bassem Nasir
In this piece, Bassem Nasir warns that governments rushing to adopt AI in education risk repeating past reform failures if they focus on technology rather than purpose. Drawing on experiences from the Arab region and global examples, he argues that AI represents not just a new tool but a paradigm shift that demands a rethinking of what education is for. Citing Peru’s One Laptop per Child programme and Uruguay’s Plan Ceibal, the article shows that reforms succeed only when they address both the technical and adaptive challenges of change. Nasir contends that AI integration must involve everyone in the system — from teachers and policymakers to parents and students — working collectively to redefine the values, practices and goals of learning. He outlines ten leadership principles, from building capacity and setting ethical guardrails to rewarding adaptive behaviour, urging education leaders to move beyond top-down fixes and instead foster dialogue, trust and shared responsibility for shaping education in an AI-driven world.
A small number of samples can poison LLMs of any size, Anthropic
In a joint study with the UK AI Safety Institute and the Alan Turing Institute, Anthropic researchers found that large language models can be “backdoored” with as few as 250 malicious documents, regardless of their size or training data volume. The finding challenges the common assumption that attackers need to control a proportion of training data to compromise a model.
In experiments across models from 600M to 13B parameters, the same small number of poisoned samples caused all to generate gibberish when a hidden trigger phrase appeared. This suggests that data-poisoning attacks may be more feasible than previously believed, since inserting hundreds of documents is far easier than millions. While the study used a low-stakes backdoor, its results highlight the need for new defences against poisoning at scale and raise questions about how such vulnerabilities might affect larger or more capable models.
Rishi Sunak takes advisory roles with Microsoft and AI firm Anthropic, Robert Booth for The Guardian
Former UK prime minister Rishi Sunak has joined Microsoft and AI firm Anthropic as a senior adviser, with approval from Westminster’s business appointments watchdog. Sunak told Acoba that his work will focus on high-level strategic issues and not UK policy or lobbying, while both companies said his compensation will go to his new social mobility charity, the Richmond Project. The appointments follow Sunak’s close engagement with the tech sector during his premiership, including a £2.5bn Microsoft investment deal and the UK’s AI Safety Summit at Bletchley Park. Acoba acknowledged there were reasonable concerns over perceived access and influence but found no evidence of prior decisions taken with these roles in mind.
‘Deepfake’ teacher avatar plan to help pupils catch up, Freddie Whittaker for Schools Week
Teachers at the Great Schools Trust are creating AI “avatars” of themselves using deepfake video technology to deliver catch-up content for pupils who have missed lessons. The avatars, made with HeyGen software, will introduce resources on Google Classroom and respond to written questions with pre-programmed answers. The trust says the project is designed to cut workload and support returning pupils, not replace teachers, and aims to expand the tool nationally with external funding. It will also begin using AI to mark GCSE mock exams, claiming faster and more accurate results after three years of trials.
Leaders stress that all avatars will require teacher consent, comply with data and copyright laws, and be deleted when staff leave. The initiative has raised questions over privacy and the use of deepfake tools in education, but the trust argues it will give teachers more time “to inspire, teach and lead.”
We hope you enjoyed this week’s roundup! Let us know any thoughts or feedback in the comments.
If you’d like to be notified when we post, subscribe for free using the button below, or connect with us on social media to let us know what you’d like to see next.