Executive summary
This week’s stories are all about the background assumptions changing.
Anthropic showed that a single model can sweep through decades of software and surface thousands of serious vulnerabilities across every major operating system and browser. Meta is trying to move past general‑purpose assistants with a dedicated superintelligence group for narrow, complex domains like finance and sustainability. In parallel, around 80,000 tech jobs disappeared in one quarter, with almost half of the cuts officially attributed to AI and automation, even as some companies lean into more explicitly human roles. And with Gemma 4, Google is pushing a family of relatively small, strong models that can run on everyday hardware, not just in hyperscale data centres.
None of this is “our product launch” news. It is the quieter kind of shift that changes baselines: what “secure by default” can mean, where useful intelligence can live in a system, and which skills and roles actually become scarce. That is why it is worth tracking, even if you never plan to train a model yourself.
Taken together, these are not four parts of one grand story, but four separate edges of the same trend: AI moving from a feature you bolt on at the end to something that rewrites the infrastructure, the tools and the talent you take for granted.
This week's articles
Anthropic's Glasswing moment
Anthropic has previewed Claude Mythos, a model that can scan code and binaries and find high-severity security vulnerabilities across every major operating system and browser. It uncovered thousands of issues, including bugs that had sat unpatched for decades. The software that underpins everyday computing was not quietly robust; it had simply never been examined at this scale.
To stop this from becoming an offensive toolkit, Anthropic launched Project Glasswing, working with Microsoft, Apple, Google and others to patch as much as possible, as fast as possible. This looks like a one‑off clean‑up, but it is probably the template from now on. Once models can sweep whole estates for issues humans never spotted, security stops being a one‑time exercise and becomes an ongoing race to use these tools before someone else does. Security through obscurity has been dead in theory for years; this is what it looks like when that becomes operationally true.
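What does sweeping a whole estate actually look like? At its simplest, it is a loop over source files and a review prompt. The sketch below is speculative: it uses Anthropic's real Python SDK, but the "claude-mythos" model name is a placeholder taken from the preview, not a published API identifier, and the file path is hypothetical.

```python
# Speculative sketch: one step of an automated vulnerability sweep.
# Uses the real Anthropic Python SDK; the model name and file path
# are placeholders, not published identifiers.
import pathlib

import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

source = pathlib.Path("src/parser.c").read_text()  # hypothetical target file

response = client.messages.create(
    model="claude-mythos",  # placeholder name from the preview
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Review this C code for memory-safety vulnerabilities "
                   "and report any findings with line references:\n\n" + source,
    }],
)
print(response.content[0].text)
```

Run something like that over every file in a decades-old codebase and you have the outline of a sweep; the hard parts are triage, deduplication and coordinated disclosure, which is presumably why Glasswing is a multi-vendor effort rather than a single company's patch queue.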
Meta's superintelligence pivot
Meta has rebranded its AI efforts around a Superintelligence team and announced the first specialised model on that path. The stated aim is to move beyond generic assistants and attack more complex, domain-specific problems in areas such as finance and sustainability.
The interesting part is not the label, which has been stretched to cover almost anything, but the direction of travel. The first phase of the AI race was mostly about scale: more data, more parameters, more compute. What Meta is signalling, and what others are exploring, is a shift toward models that trade universality for depth in constrained, heavily regulated domains. The question becomes less who owns the biggest model and more who can get reliable, explainable reasoning under real‑world constraints.
Layoffs and the AI scapegoat
Around 80,000 tech workers lost their jobs in the first quarter of 2026, with almost half of the cuts officially attributed to AI and automation. It is a convenient story: you can point at a technology shift instead of earlier overhiring, higher capital costs or more modest growth expectations.
Research from Harvard Business Review suggests much of this is anticipatory, not operational. Only a small share of recent layoffs is tied to AI systems that actually replace work. Most cuts are justified by AI’s potential, with executives reducing headcount on the assumption that future tools will take over tasks, even though many firms have yet to see meaningful returns on their AI investments.
Entry-level and repetitive roles are being compressed first, while demand rises for people who can design, constrain and govern AI-augmented workflows. Some firms explicitly hire for clearly human work such as judgement, trust and context, and treat AI as an amplifier. Others use AI as a line on a restructuring slide and a way to rebrand cost cutting as transformation.
The work does not disappear; it is redistributed. The risk is that organisations strip out the people who would have become the next generation of product and technical leadership before the promised AI systems are performing as advertised, only to discover later that no one inside really understands how the human and machine systems fit together.
Sources: Harvard Business Review, The Guardian
Gemma 4 and the quiet unbundling of AI
Google has released Gemma 4, a new family of open models that is probably more important than the branding suggests. Gemma 4 comes in four sizes, from small 2B and 4B variants designed for phones and browsers up to 26B Mixture-of-Experts and 31B dense models that run on a single high-end GPU. The shared idea is strong reasoning, long context and multimodality with far less hardware overhead than previous generations.
That combination shifts the story in two ways. First, it moves attention from “who has the biggest closed model in the cloud” to “what you can run on your own hardware, under your own rules”. Google claims the 31B model sits near the top of open‑model benchmarks while still fitting on one 80 GB GPU, and the smallest variants are tuned to run fully offline on laptops and phones. Serious capability no longer has to mean a proprietary API endpoint.
Second, it suggests a more granular future. Instead of a single giant model handling everything, you can embed capable, specialised models directly into products, internal tools and edge devices, with open weights and a permissive licence. That is less dramatic than another frontier‑model announcement, but it is often how technology actually spreads: as a library you depend on, not a keynote you watch.
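For readers who want to feel the difference, here is a minimal sketch of what "run it on your own hardware" looks like in practice, assuming Gemma 4 ships through Hugging Face like earlier Gemma releases; the model ID below is a hypothetical placeholder, not a confirmed name.

```python
# Minimal local-inference sketch using Hugging Face transformers.
# The model ID is hypothetical; substitute whatever Google publishes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-4-4b-it"  # placeholder, assumed Hub naming

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread across whatever GPU/CPU is available
    torch_dtype="auto",  # keep the checkpoint's native precision
)

prompt = "In two sentences, why run a language model on-device?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Nothing in that snippet touches a proprietary endpoint, which is exactly the point: the capability lives in weights you hold, on hardware you control.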