WELCOME TO METAMESH.BIZ +++ Someone graphified their entire codebase into 71x fewer tokens because raw files are terrible LLM food (32k stars say they're onto something) +++ Meta now harvesting employee keystrokes for AI training data which is definitely normal workplace behavior +++ Agent teams burning 124% more compute for zero quality gain proving coordination is hard even for robots +++ THE MESH WATCHES AS EVERYONE QUANTIZES THEIR WAY TO ENLIGHTENMENT ON 20GB OF VRAM +++
"Every time I joined a new codebase Iβd spend the first week asking Claude to βexplain how X worksβ, watching it hallucinate, then reading 40 files to correct it. The problem isnβt the LLM β itβs that raw files are an awful context format.
So I built graphify. Install it once in Claude Code and it b..."
+++ ChatGPT Images 2.0 arrives with dual variants and thinking capabilities that actually browse the internet, because apparently rendering pixels needed the full LLM treatment first. +++
via Arxiv · Manan Gupta, Dhruv Kumar · 2026-04-20
Score: 8.0
"Large language models frequently commit unrecoverable reasoning errors mid-generation: once a wrong step is taken, subsequent tokens compound the mistake rather than correct it. We introduce $\textbf{Latent Phase-Shift Rollback}$ (LPSR): at each generation step, we monitor the residual stream at a c..."
via Arxiv · Md Rysul Kabir, Zoran Tiganj · 2026-04-20
Score: 7.8
"Open-weight language models can be rendered unsafe through several distinct interventions, but the resulting models may differ substantially in capabilities, behavioral profile, and internal failure mode. We study behavioral and mechanistic properties of jailbroken models across three unsafe routes:..."
"Iβve been using the new **Auto mode** in Claude Code (where CC decides whether to approve tool calls rather than you having to approve one by one or using the `--dangerously-skip-permissions` mode). This thing is supposed to be a middle ground between those two, and overall itβs actually been pretty..."
π¬ Reddit Discussion: 65 comments
π MID OR MIXED
"Disclosure: I work at Tessl and co-wrote the research this is from. Posting because the result changed how I'm thinking about which Claude model to reach for day to day.
we ran 880 evals - 11 skills × 8 models × 5 scenarios, with and without each skill in context:
* Haiku 4.5 baseline: 61.2%
* Hai..."
"Hey everyone,
We just open-sourced our reasoning model, Chaperone-Thinking-LQ-1.0, on Hugging Face. It's built on DeepSeek-R1-Distill-Qwen-32B but goes well beyond a simple quantization; here's what we actually did:
The pipeline:
1. 4-bit GPTQ quantization: compressed the model from ~60GB down..."
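For reference, step 1 in isolation looks roughly like the stock Hugging Face GPTQ path; a minimal sketch assuming the base model id from the post plus placeholder calibration settings (c4 calibration set, group size, output dir), not the released Chaperone recipe.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
tok = AutoTokenizer.from_pretrained(model_id)

# calibrate and quantize to 4-bit GPTQ while loading
gptq = GPTQConfig(bits=4, group_size=128, dataset="c4", tokenizer=tok)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=gptq, device_map="auto"
)
model.save_pretrained("chaperone-gptq-4bit")  # hypothetical output dir
```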
"Three weeks of controlled experiments on a real production Next.js/TypeScript/Supabase codebase, Sonnet 4.6 worker, Opus 4.7 grader. Full data public, tool is MIT.
A few findings that overturned the assumptions I started with:
- **CONTRACT.md before code cut cost 54% and raised quality from 5/1..."
via Arxiv · Marcello Galisai, Susanna Cifani, Francesco Giarrusso et al. · 2026-04-20
Score: 7.3
"The Adversarial Humanities Benchmark (AHB) evaluates whether model safety refusals survive a shift away from familiar harmful prompt forms. Starting from harmful tasks drawn from MLCommons AILuminate, the benchmark rewrites the same objectives through humanities-style transformations while preservin..."
AI NEWS BUT ACTUALLY GOOD
The revolution will not be televised, but Claude will email you once we hit the singularity.
HackerNews Buzz: 55 comments
GOATED ENERGY
NEWS
Claude Code removed from Pro plan
3x SOURCES · 2026-04-21
Score: 7.2
+++ Turns out paying $20/month no longer gets you the coding features it used to, a fact Anthropic apparently decided to slip onto their pricing page without much fanfare or explanation. +++
"You can tell which company built a product by looking at its most annoying default behavior. Google products ask you to sign in to four things. Apple products hide the setting you need behind three menus. And Claude Design gives you the same teal gradient, serif font, blinking status dot, container ..."
via Arxiv · Robert Stanley, Avi Verma, Lillian Tsai et al. · 2026-04-21
Score: 7.0
"AI agents promise to serve as general-purpose personal assistants for their users, which requires them to have access to private user data (e.g., personal and financial information). This poses a serious risk to security and privacy. Adversaries may attack the AI model (e.g., via prompt injection) t..."
via Arxiv · Wen Cheng, Tuochao Chen, Karim Helwani et al. · 2026-04-21
Score: 6.9
"Edge devices such as smartwatches and smart glasses cannot continuously run even the smallest 100M-1B parameter language models due to power and compute constraints, yet cloud inference introduces multi-second latencies that break the illusion of a responsive assistant. We introduce micro language m..."
via Arxiv · Andrew Zhang, Tong Ding, Sophia J. Wagner et al. · 2026-04-20
Score: 6.9
"Modern medicine generates vast multimodal data across siloed systems, yet no existing model integrates the full breadth and temporal depth of the clinical record into a unified patient representation. We introduce Apollo, a multimodal temporal foundation model trained and evaluated on over three dec..."
via Arxiv · A. Sophia Koepke, Daniil Zverev, Shiry Ginosar et al. · 2026-04-20
Score: 6.9
"The Platonic Representation Hypothesis suggests that neural networks trained on different modalities (e.g., text and images) align and eventually converge toward the same representation of reality. If true, this has significant implications for whether modality choice matters at all. We show that th..."
via Arxiv · Josue Torres-Fonseca, Naihao Deng, Yinpei Dai et al. · 2026-04-21
Score: 6.8
"Multimodal Large Language Models are increasingly adopted as autonomous agents in interactive environments, yet their ability to proactively address safety hazards remains insufficient. We introduce SafetyALFRED, built upon the embodied agent benchmark ALFRED, augmented with six categories of real-w..."
via Arxiv · Jean Mercat, Sedrick Keh, Kushal Arora et al. · 2026-04-21
Score: 6.8
"We present VLA Foundry, an open-source framework that unifies LLM, VLM, and VLA training in a single codebase. Most open-source VLA efforts specialize on the action training stage, often stitching together incompatible pretraining pipelines. VLA Foundry instead provides a shared training stack with..."
"It just sounds like ChatGPT now.
Instead of being genuine, intuitive, and helpful it now tries to always "essay-ify" every response, sound "punchy", drop connecting words and funnily enough started constantly using em-dashes, as many have noticed.
I have compared Opus 4.6 and 4.7 responses to the ..."
via Arxiv · Ghazal Khalighinejad, Raghuveer Thirukovalluru, Alexander H. Oh et al. · 2026-04-20
Score: 6.8
"Many recent document embedding models are trained on document-as-image representations, embedding rendered pages as images rather than the underlying source. Meanwhile, existing benchmarks for scientific document retrieval, such as ArXivQA and ViDoRe, treat documents as images of pages, implicitly f..."
via Arxiv · Difan Jiao, Yilun Liu, Ye Yuan et al. · 2026-04-20
Score: 6.8
"Guard models are widely used to detect harmful content in user prompts and LLM responses. However, state-of-the-art guard models rely solely on terminal-layer representations and overlook the rich safety-relevant features distributed across internal layers. We present SIREN, a lightweight guard mode..."
via Arxiv · Yiwen Qiu, Linjuan Wu, Yizhou Liu et al. · 2026-04-21
Score: 6.7
"Large language models have achieved remarkable progress on complex reasoning tasks. However, they often implicitly fabricate information when inputs are incomplete, producing confident but unreliable conclusions -- a failure mode we term ungrounded reasoning. We argue that this issue arises not from..."
"**I gave 9 local models the same flight combat sim prompt. The results broke a few of my assumptions about quant providers and parameter count.**
*All 8-bit MLX, M3 Max 128GB, served via omlx, prompted through Claude Code. Same prompt every time: single-file HTML, three selectable planes (jet, pro..."
Reddit Discussion: 9 comments
GOATED ENERGY
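The post serves everything through omlx and drives it from Claude Code; as a reference point, loading an 8-bit MLX model directly with mlx_lm looks roughly like the sketch below (model path and prompt are placeholders, not the exact setup from the run).

```python
from mlx_lm import load, generate

# placeholder repo id; any 8-bit MLX community conversion works the same way
model, tokenizer = load("mlx-community/Qwen2.5-Coder-32B-Instruct-8bit")

prompt = "Single-file HTML flight combat sim with three selectable planes."
print(generate(model, tokenizer, prompt=prompt, max_tokens=2000))
```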
via Arxiv · Jinghui Lu, Jiayi Guan, Zhijian Huang et al. · 2026-04-20
Score: 6.7
"Chain-of-Thought (CoT) reasoning has become a powerful driver of trajectory prediction in VLA-based autonomous driving, yet its autoregressive nature imposes a latency cost that is prohibitive for real-time deployment. Latent CoT methods attempt to close this gap by compressing reasoning into contin..."
via Arxiv · Joonhyuk Lee, Virginia Ma, Sarah Zhao et al. · 2026-04-20
Score: 6.7
"Verification of model outputs is rapidly emerging as a key primitive for both training and real-world deployment of large language models (LLMs). In practice, this often involves using imperfect LLM judges and reward models since ground truth acquisition can be time-consuming and expensive. We intro..."
via Arxiv · Salman Rahman, Jingyan Shen, Anna Mordvina et al. · 2026-04-20
Score: 6.7
"Large language models have achieved significant reasoning improvements through reinforcement learning with verifiable rewards (RLVR). Yet as model capabilities grow, constructing high-quality reward signals becomes increasingly difficult, making it essential to understand when RLVR can succeed under..."
"I always thought with 32GB of VRAM, the biggest models I could run were around 20GB, like Qwen3.5 27B Q4 or Q6. I had an impression that everything had to fit in VRAM or I'd get 2 t/s.
Man was I wrong. I just tested Qwen3.6 Q8 with 256k context on llama.cpp, with `--fit` on, the weights alone are..."
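The `--fit` flag is quoted from the post; as a rough stand-in, the same partial-offload idea via llama-cpp-python with the standard n_gpu_layers knob and a placeholder model path (the exact flags and memory split in the post's setup will differ).

```python
from llama_cpp import Llama

llm = Llama(
    model_path="qwen-q8_0.gguf",   # placeholder path to a GGUF file
    n_gpu_layers=40,               # put as many layers on the GPU as VRAM allows
    n_ctx=262144,                  # 256k context; the rest spills to system RAM
)
out = llm("Explain KV-cache offloading in one paragraph.", max_tokens=128)
print(out["choices"][0]["text"])
```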
via Arxiv · Andrea Goertzen, Kaveh Alim, Navid Azizan · 2026-04-21
Score: 6.6
"Enforcing constraint satisfaction in neural network outputs is critical for safety, reliability, and physical fidelity in many control and decision-making applications. While soft-constrained methods penalize constraint violations during training, they do not guarantee constraint adherence during in..."
via Arxiv · Xingchen Xiao, Heyan Huang, Runheng Liu et al. · 2026-04-20
Score: 6.6
"Large language models (LLMs) are widely used in retrieval-augmented generation (RAG) to incorporate external knowledge at inference time. However, when retrieved contexts are noisy, incomplete, or heterogeneous, a single generation process often struggles to reconcile evidence effectively. We propos..."
"Modern world models are becoming too complex to admit explicit dynamical descriptions. We study safety-critical contextual control, where a Planner must optimize a task objective using only feasibility samples from a black-box Simulator, conditioned on a context signal $ΞΎ_t$. We develop a sample-bas..."
via Arxiv · Alireza Dadgarnia, Soroush Tabesh, Mahdi Nikdan et al. · 2026-04-20
Score: 6.5
"Weight quantization has become a standard tool for efficient LLM deployment, especially for local inference, where models are now routinely served at 2-3 bits per parameter. The state of the art is currently split into two sets of methods: simple scalar quantization techniques, such as GPTQ or AWQ,..."
via Arxiv · Minji Lee, Colin Kalicki, Minkyu Jeon et al. · 2026-04-20
Score: 6.4
"Models from the AlphaFold (AF) family reliably predict one dominant conformation for most well-ordered proteins but struggle to capture biologically relevant alternate states. Several efforts have focused on eliciting greater conformational variability through ad hoc inference-time perturbations of..."
NEWS
Mozilla uses Anthropic Mythos to find Firefox vulnerabilities
2x SOURCES · 2026-04-21
Score: 6.3
+++ Firefox 150 patched 271 vulnerabilities discovered via early access to Anthropic's Mythos, proving that sometimes the best QA is asking another AI company for help. +++
"Open-source AI is evolving insanely fast, but itβs hard to know which model is actually best for each use case. So I put together a list of the best open-source models across different categories
Best Audio Generation Open Source Models
# Text-to-Speech (TTS)
* [Qwen3-TTS](https://github.com/Qwen..."
via Arxiv · Perry Dong, Alexander Swerdlow, Dorsa Sadigh et al. · 2026-04-21
Score: 6.1
"Some of the most performant reinforcement learning algorithms today can be prohibitively expensive as they use test-time scaling methods such as sampling multiple action candidates and selecting the best one. In this work, we propose FASTER, a method for getting the benefits of sampling-based test-t..."
via Arxiv · Shaden Alshammari, Kevin Wen, Abrar Zainal et al. · 2026-04-20
Score: 6.1
"Mathematical problem solving remains a challenging test of reasoning for large language and multimodal models, yet existing benchmarks are limited in size, language coverage, and task diversity. We introduce MathNet, a high-quality, large-scale, multimodal, and multilingual dataset of Olympiad-level..."