Welcome back to AI Coding.
Meta says interactions with Meta AI (text/voice/image) will start informing content and ad personalization on Dec 16, 2025. No global opt-out beyond not using Meta AI; UK/EU/South Korea are excluded at launch. For builders, treat assistant logs like ad-relevant telemetry: update privacy copy, consent flows, retention windows, and guardrails.
Also Today:
RTEB introduces a mixed open-and-private retrieval benchmark to measure real-world RAG generalization across domains like law, health, code, and finance.
ModernVBERT is a 250M-parameter, MIT-licensed visual document retriever that delivers SOTA-for-size performance, with ready-to-use checkpoints and recipes for lightweight PDF/image RAG.
Deep Dive
Meta AI will use your data for ads — here’s how and when
What developers need to know about Meta’s plan to fold AI-chat signals into ad targeting: scope, timeline, regions, and controls.

TL;DR
🔍 What this is:
Starting Dec 16, 2025, Meta will use interactions with Meta AI (text/voice/image) to personalize content and ads; notifications begin Oct 7. No opt-out — don’t use Meta AI if you don’t want this.
💡 Why you should read it:
If you build assistants or RAG features, treat conversational data like ad-relevant telemetry: update privacy copy, consent flows, and data-retention policies accordingly. (Rollout excludes the UK, EU, and South Korea initially.) A minimal log-handling sketch follows below.
🎯 Best takeaway:
AI chat isn’t “outside” the ad stack — expect downstream effects on measurement, attribution, and user expectations across Meta’s surfaces.
💰 Money quote:
“We will soon use your interactions with AI at Meta to personalize the content and ads you see.”
⚠️ One thing to remember:
Applies only to people who use Meta AI; sensitive categories are excluded, but “sensitive” boundaries can be fuzzy — ship clear guardrails and disclosures.
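To make that concrete, here is a minimal sketch of the write-time gate this implies: consent-checked, retention-bounded logging of assistant interactions with a crude sensitive-category screen. Everything here (AssistantEvent, SENSITIVE_TERMS, the 30-day window) is an illustrative assumption, not Meta's API or any particular framework.

```python
# Minimal sketch: treat assistant logs like ad-relevant telemetry.
# All names here (AssistantEvent, SENSITIVE_TERMS, RETENTION) are
# illustrative assumptions, not any vendor's real API.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # assumed retention window; set per your policy

# Crude keyword screen; a real system should use a proper classifier.
SENSITIVE_TERMS = {"diagnosis", "religion", "union", "sexuality"}

@dataclass
class AssistantEvent:
    user_id: str
    text: str
    consented_to_personalization: bool
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def is_sensitive(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in SENSITIVE_TERMS)

def should_log_for_personalization(event: AssistantEvent) -> bool:
    """Gate an assistant interaction before it enters ad-relevant storage."""
    if not event.consented_to_personalization:
        return False  # no consent, no telemetry
    if is_sensitive(event.text):
        return False  # exclude sensitive categories entirely
    return True

def purge_expired(events: list[AssistantEvent]) -> list[AssistantEvent]:
    """Drop events older than the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [e for e in events if e.created_at >= cutoff]
```

The design point: consent, sensitive-category exclusion, and expiry are enforced at write time, not retrofitted onto an existing log.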
Try Augment with Sonnet 4.5 for Free!

Signal vs. Noise
Separating useful AI developments from the hype cycle
Google ships a CLI and public API for its asynchronous coding agent (Gemini 2.5 Pro under the hood), targeting tightly scoped tasks and CI/CD/Slack integrations.
Hugging Face launches RTEB, a retrieval benchmark that mixes open and private datasets to better test real-world generalization for RAG/embedding models; see the sketch after this list.
AMD lays out a multi-year Instinct roadmap (starting with the MI450), with the first 1GW landing in H2 2026: a real signal on accelerator supply, pricing, and co-design with top model customers.
ModernVBERT, a 250M-parameter vision-language retriever, delivers strong document-retrieval results for its size, with checkpoints and recipes released under MIT.
An MIT-licensed UI-element detector and evaluation methodology designed for agentic systems, pitched as an open, commercially friendly alternative to restrictive licenses.
Stack Overflow’s convo with Kong’s CTO on agent-friendly APIs, the rise of MCP, and why security/observability must be first-class for agent integrations.
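For a feel of what a retrieval benchmark like RTEB actually measures, here is a self-contained sketch: embed a corpus and queries, rank documents by cosine similarity, and score recall@k. The model name and the toy dataset are placeholders, not RTEB's (partly private) data.

```python
# Minimal sketch of retrieval evaluation in the spirit of a benchmark
# like RTEB: embed corpus + queries, rank by cosine similarity, and
# compute recall@k. Model name and toy dataset are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in model

corpus = [
    "The statute of limitations for contract claims is six years.",
    "Metformin is a first-line treatment for type 2 diabetes.",
    "A hash map offers average O(1) lookups.",
]
queries = ["How long can I sue over a contract?", "diabetes medication"]
relevant = [0, 1]  # index of the relevant doc for each query (toy labels)

doc_emb = model.encode(corpus, normalize_embeddings=True)
query_emb = model.encode(queries, normalize_embeddings=True)

def recall_at_k(q_emb, d_emb, labels, k=1):
    """Fraction of queries whose relevant doc appears in the top-k results."""
    scores = q_emb @ d_emb.T  # cosine similarity (embeddings are normalized)
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = [labels[i] in topk[i] for i in range(len(labels))]
    return sum(hits) / len(hits)

print(f"recall@1 = {recall_at_k(query_emb, doc_emb, relevant, k=1):.2f}")
```

Private test sets matter because public ones leak into training data; a score like this only means something when the model hasn't already seen the documents.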
Best of the Rest
A curation of what’s trending in the AI and Engineering world
"The challenge isn’t building smarter AI; it’s building AI that aligns with human values and ethics."
- Sam Altman (CEO, OpenAI)

"We’re on the brink of creating AI that can think and reason like humans. The question is, are we ready for it?"
- Demis Hassabis (CEO, DeepMind)
That's a Wrap 🎬
Another week of separating AI signal from noise. If we saved you from a demo that would've crashed prod, we've done our job.
📧 Got a story? Reply with your AI tool wins, fails, or war crimes. Best stories get featured (with credit).
📤 Share the skepticism: Forward to an engineer who needs saving from the hype. They'll thank you.
✍️ Who's behind this? The Augment Code team—we build AI agents that ship real code. Started this newsletter because we're tired of the BS too.
🚀 Try Augment: Ready for AI that gets your whole codebase?