The presumption of AI
VS Code stamps Copilot as co-author on commits even with AI disabled. OpenAI turns on ad tracking for free users by default. The Academy now demands proof that a performance is 'demonstrably human.' The industry crossed a line this week: AI involvement is assumed until you prove otherwise, and the burden of proof just shifted to everyone who makes things.
The Decoder
Microsoft caught sneaking 'Co-Authored-by Copilot' into VS Code commits — even with AI turned off
A one-line change in VS Code 1.118 flipped the git.addAICoAuthor setting from 'off' to 'all', stamping a Copilot co-author line onto Git commits by default — including on machines where AI features were explicitly disabled. Microsoft reversed the default after 372 thumbs-down reactions and 654 Hacker News comments.
the-decoder.com

PR #310226 was a single line of code. It changed VS Code's git.addAICoAuthor setting from off to all, and with that, every commit made in VS Code 1.118 carried a Copilot co-author attribution by default: code written entirely by hand, on machines where AI features were explicitly disabled, all stamped with Copilot's name. Microsoft reversed the change after 372 thumbs-down reactions on GitHub. But the instinct behind it is worth examining.
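Mechanically, the attribution rides on Git's standard commit-trailer convention: a `Co-authored-by:` line in the message footer, which GitHub parses into a co-author credit. A minimal sketch of what such a commit looks like and how to inspect it (the Copilot name and email below are illustrative placeholders, not the exact string VS Code inserts):

```shell
# Work in a throwaway repo so the example is self-contained
tmp=$(mktemp -d) && cd "$tmp" && git init -q

# A commit whose second -m paragraph carries a co-author trailer
# (the identity below is a made-up placeholder, not Copilot's real string)
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty \
  -m 'Fix parser edge case' \
  -m 'Co-authored-by: Copilot <copilot@example.com>'

# Show only the Co-authored-by trailers on the last commit
git log -1 --format='%(trailers:key=Co-authored-by)'
```

The reversal Microsoft shipped is just the opposite default: per the article, `git.addAICoAuthor` set to `off` stops VS Code from appending that trailer in the first place.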
The conventional reading is corporate overreach: Microsoft got greedy with attribution, got caught, backed down. That reading is too small. The same week, OpenAI enabled marketing cookies by default for free ChatGPT users, sharing cookie IDs and email addresses with advertising partners as it chases $2.5 billion in ad revenue this year. And the Academy of Motion Picture Arts and Sciences announced that only performances "demonstrably performed by humans" and screenplays that are "human-authored" will qualify for Academy Awards from March 2027.
Three institutions. Three different contexts. The same structural move: AI involvement is now the default assumption, and the burden of proving otherwise falls on you.
This inversion happened fast. Twelve months ago, AI attribution was opt-in. You could credit Copilot in your commit or add "assisted by ChatGPT" to your paper if you chose to. The assumption was human authorship; AI disclosure was the exception. Microsoft's one-line change flipped that: your code is AI-assisted until a setting proves otherwise. The Oscars' new rule flips it from the other direction: your performance is potentially synthetic until the Academy is satisfied it isn't. OpenAI's cookie change completes the triangle: your browsing data is advertising inventory unless you navigate to Settings > Data Controls > Marketing Privacy and find the opt-out toggle yourself.
The architecture of defaults
In behavioural economics, this pattern has a name: the default effect. When organ donation is opt-in, countries see consent rates around 15%. Switch to opt-out, and rates exceed 90%. The underlying preference barely changes. The default does all the work.
The AI industry has learned this lesson thoroughly. OpenAI's cookie switch isn't a privacy violation in any technical sense. Users can opt out. Microsoft's attribution wasn't permanent. They reverted it. But defaults are architecture. They shape what's normal before anyone has a chance to object. And the direction of these defaults tells you what the industry considers its natural state: AI is involved, AI is attributed, AI is monetised. Opting out is your responsibility.
The Academy's response is the mirror image of the same logic. When the Academy demands proof that a performance is "demonstrably human," it accepts the new default and builds rules around it. The telling word isn't "human." It's "demonstrably." "Human" is an aspiration; "demonstrably" is a burden of proof. The Academy is asking filmmakers to certify the absence of AI the way a food producer certifies organic origin: document it and defend it on request.
For anyone building products, the shift is already operational. If you ship code, your commit history may carry AI attribution you didn't add. If you offer a free tier, your users' data may be monetised through defaults they'll never discover. And if you create anything that competes for recognition, expect to be asked to prove a human made it.
The question isn't whether AI is in your workflow. It's whether you'll be able to demonstrate, to anyone's satisfaction, when it wasn't.
Read the original on The Decoder
the-decoder.com