PEAKS No 34: The DoW AI Deal — and What It Actually Means

Hi there!

Anthropic refused to enable mass domestic surveillance. It refused to power fully autonomous weapons. And when the Department of War threatened to label it a national security risk — a designation historically reserved for US adversaries, never before applied to an American company — Anthropic said it would see them in court. More.

Then came the other move. Hours after Anthropic was blacklisted, Sam Altman announced OpenAI had struck its own deal with the Department of War — claiming the same red lines as Anthropic, invoking the same principles, reaching the opposite conclusion. The internet noticed. Altman himself later admitted the move "looked opportunistic and sloppy," and even revised the contract days later, under pressure from users and employees. More.

But the optics were already set. And they raised a harder question than contract language ever will. OpenAI was founded with "open" in its name and a mission to ensure AI benefits all of humanity. Over time, the "open" became harder to defend — first as the models closed, then as the non-profit structure gave way to a capped-profit and eventually a for-profit entity. Last week, it became harder still. When a company whose origin story is about openness and safety signs an "all lawful purposes" agreement with a classified military network, the gap between the founding vision and the current reality becomes very visible. And critics noted that what counts as "lawful" can change — nothing in the agreement would prevent those standards from being revised going forward.

Meanwhile, Claude climbed from outside the top 100 in the App Store at the end of January to the number one spot on Saturday. Daily sign-ups broke all-time records every day last week, free users surged more than 60% since January, and paid subscribers more than doubled this year. Claude even went down briefly on Monday morning, with Anthropic citing "unprecedented demand." Chalk messages reading "Thank you" appeared outside Anthropic's San Francisco offices. Different messages appeared outside OpenAI's.

And then, quietly, a group of current and former employees from Google and OpenAI launched an open letter. No party affiliation. No employer backing. Just people putting their names — or their anonymous verification — on a statement that said: we will not be divided on this. More.

Now, you might ask: does any of this actually matter for privacy? In practice?

I think it does — and not for the reason most people cite. The usual argument is about AI superintelligence. That some future model will become so capable it will outsmart every safeguard. That's a real concern, but it's not the immediate one.

The immediate concern is scale and pattern recognition on ordinary data. We're already past the point where AI can take scattered, individually innocuous information — your location history, browser activity, financial transactions, social connections — and assemble it into a comprehensive, real-time picture of your life. Automatically. At a cost that approaches zero. As experts have noted, this kind of capability transforms data that is technically "public" into something that has never been public in any meaningful sense — because no human analyst could ever synthesize it at this volume and speed.

That's not superintelligence. That's current-generation AI applied to data brokers, commercially acquired datasets, and the trove of information that the intelligence community already has legal access to under existing law. The DoW didn't want Anthropic's models to be smarter than a human. They wanted them to work through volumes of data no human team ever could. This is the actual risk. Not the robot apocalypse. The quiet, scalable erosion of privacy through inference — connecting dots that were always technically visible but practically invisible until now.
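To make the point concrete, here is a deliberately toy sketch — not any agency's actual pipeline, with made-up person IDs and signal names — of how individually innocuous datasets, once joined on a shared identifier, support an inference that none of them contains on its own:

```python
from collections import defaultdict

# Three "harmless" datasets: (person_id, observation) pairs.
location_pings = [("u1", "oncology_clinic"), ("u1", "pharmacy"), ("u2", "gym")]
transactions = [("u1", "pharmacy"), ("u2", "grocery")]
contacts = [("u1", "support_group_line"), ("u2", "friend")]

def fuse(*datasets):
    """Merge every dataset's records into one per-person profile."""
    profiles = defaultdict(set)
    for dataset in datasets:
        for person, item in dataset:
            profiles[person].add(item)
    return profiles

def infer_health_concern(profiles):
    """Flag a person only when independent signals co-occur."""
    signals = {"oncology_clinic", "pharmacy", "support_group_line"}
    return {p for p, items in profiles.items() if len(items & signals) >= 2}

profiles = fuse(location_pings, transactions, contacts)
print(infer_health_concern(profiles))  # → {'u1'}
```

No single dataset reveals anything; the join does. Scale this from three lists to a nation's worth of broker data and the "practically invisible" dots connect themselves.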

🛡️ Security & Privacy

  • Claude Code had serious RCE and API key theft flaws. Check Point researchers disclosed three vulnerabilities in Claude Code — including two code injection bugs and an API key exfiltration flaw — triggered simply by opening a malicious repository. All have been patched. More
  • AirSnitch: Client isolation in Wi-Fi networks is broken. A new NDSS 2026 paper introduces novel man-in-the-middle primitives that break Wi-Fi client isolation — widely assumed to protect devices on the same access point from each other. More

🛸 Tech

  • Ladybird browser adopts Rust — with AI doing the heavy lifting. The independent browser project ported 25,000 lines of its JavaScript engine from C++ to Rust in two weeks using Claude Code and Codex under human direction, with zero regressions across 52,000+ tests. More
  • Linux 7.0 RC1 is out — the 6.x era is officially over. Linus Torvalds dropped the first release candidate for Linux 7.0, explaining the version bump is purely a numbers convenience. Key additions include stable Rust support, Btrfs direct I/O, and faster network queue clearing. More
  • iPhone and iPad certified to handle classified NATO information. Running iOS 26 and iPadOS 26, Apple devices become the first and only consumer devices to meet NATO's information assurance requirements — no special software needed, certified by Germany's BSI. More
  • Kali Linux + Claude Desktop = AI-powered pentesting. The official Kali blog published a step-by-step guide to connecting Claude Desktop on macOS to a remote Kali instance via MCP, turning natural language prompts into nmap scans and tool invocations. More
  • Taara Beam delivers 25 Gbps wirelessly, over 10km. Google X spinout Taara launched Taara Beam, a shoebox-sized optical wireless device using photonic integrated circuits to deliver fiber-like speeds through the air — deployable in hours, no spectrum required. More
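For the curious, setups like the Kali one above hang off Claude Desktop's standard MCP configuration file, `claude_desktop_config.json` ("mcpServers" is the documented key; the server name, host, and `run-kali-mcp-server` command below are hypothetical placeholders, not the Kali guide's actual values):

```json
{
  "mcpServers": {
    "kali": {
      "command": "ssh",
      "args": ["user@kali-box", "run-kali-mcp-server"]
    }
  }
}
```

Claude Desktop launches the listed command and speaks MCP to it over stdio, which is what turns "scan this host" prompts into actual tool invocations on the remote box.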

🤖 AI

  • Anthropic offers 6 months of Claude Max to open-source maintainers. OSS maintainers of repos with 5,000+ GitHub stars or 1M+ monthly npm downloads can apply for free access — Anthropic's thank-you to the ecosystem that underpins much of what AI builds on. More
  • Jane Street published their neural network reverse-engineering puzzle. A deep, engaging walkthrough of how a solver cracked a hand-crafted neural network encoding MD5 — using mechanistic interpretability, SAT solvers, and old-fashioned detective work. A treat for anyone who thinks about model internals. More
  • Perplexity Computer: a 19-model agentic workforce. Perplexity launched "Computer," a cloud-based general-purpose agent that orchestrates 19 AI models to autonomously handle entire workflows — research, coding, deployment — for hours or months at a stretch. Available to Max subscribers at $200/month. More
  • Amplifying.ai benchmarks the subjective choices Claude Code makes. Across 2,430 runs, this research studio found that "Custom/DIY" is Claude Code's top recommendation in 12 of 20 tool categories — raising questions about embedded preferences in AI coding agents. More
  • Cursor cloud agents now control their own VMs. Cursor announced that cloud agents can spin up isolated virtual machines, interact with the software they build, and produce demo videos and merge-ready PRs autonomously — 30% of Cursor's internal merged PRs are now agent-created. More

🛠️ Tools

  • Australia's ASD open-sources Azul — a national-grade malware analysis platform. Version 9.0.0 of this scalable malware repository and analytical engine is now MIT-licensed on GitHub. Built on Kubernetes, Kafka, and OpenSearch, it's designed to store and correlate tens of millions of samples. More

🏭 Misc

  • "Banned in California" — a visual guide to industrial processes you can no longer permit. From semiconductor fabs to auto paint shops to Navy destroyer components, this interactive site maps every manufacturing process that's effectively impossible to permit in California today — and the grandfathered relics still running. More

📩 Please feel free to share this article with colleagues and friends who will find it valuable.

Thanks for reading!

Have a great day!
Bogdan