🚀 WELCOME TO METAMESH.BIZ +++ Deep learning theorists write 14-author paper explaining why they'll eventually explain deep learning +++ Anthropic's Claude becomes personal shopper for employees in "Project Deal" experiment that sounds suspiciously like teaching AI to haggle +++ GitHub Copilot users discover they've been running GPT-5.5 for weeks without anyone at OpenAI bothering to announce it exists +++ THE MESH OBSERVES AS WE THEORIZE ABOUT THEORIES WHILE THE MODELS QUIETLY LEARN TO BUY OUR STUFF +++ •
+++ Google commits $10B upfront plus $30B conditional on performance targets, effectively betting that Claude's safety-first positioning won't impede the race to AGI dominance alongside its own efforts. +++
"Per Bloomberg:
> Google will invest $10 billion in Anthropic PBC, with another $30 billion potentially to follow, strengthening the relationship between two companies that are at once partners and rivals in the race to build artificial intelligence.
>
> Anthropic said that Google is commi..."
There will be a scientific theory of deep learning
2x SOURCES 🌐📅 2026-04-24
⚡ Score: 7.8
+++ A 14-author paper argues that rigorous theory of deep learning is finally materializing, which either means we're on the cusp of real understanding or we've just gotten better at retrospectively explaining why it works. +++
"Hi, all! I'm the lead author on this ambitious (14-author!) perspective paper on deep learning theory. We've all been working seriously, and more or less exclusively, on deep learning for many years now. We believe that a theory is emerging, and we pull together five lines of evidence in recent rese..."
"Artificial intelligence now decides who receives a loan, who is flagged for criminal investigation, and whether an autonomous vehicle brakes in time. Governments have responded: the EU AI Act, the NIST Risk Management Framework, and the Council of Europe Convention all demand that high-risk systems..."
via Arxiv👤 Naheed Rayhan, Sohely Jahan📅 2026-04-23
⚡ Score: 7.3
"Large language models (LLMs) are increasingly integrated into sensitive workflows, raising the stakes for adversarial robustness and safety. This paper introduces Transient Turn Injection (TTI), a new multi-turn attack technique that systematically exploits stateless moderation by distributing advers..."
"Someone ran a 4-month experiment tracking every instance of "great question" from their AI assistant. Out of 1,100 uses, only 160 (14.5%) were directed at questions that were genuinely insightful, novel, or well-constructed.
The phrase had zero correlation with question quality. It was purely a s..."
via Arxiv👤 Joseba Fernandez de Landa, Carla Perez-Almendros, Jose Camacho-Collados📅 2026-04-23
⚡ Score: 6.9
"LLMs have been showing limitations when it comes to cultural coverage and competence, and in some cases show regional biases such as amplifying Western and Anglocentric viewpoints. While there have been works analysing the cultural capabilities of LLMs, there has not been specific work on highlighti..."
via Arxiv👤 Bingcong Li, Yilang Zhang, Georgios B. Giannakis📅 2026-04-23
⚡ Score: 6.9
"Low-rank adaptation (LoRA) has emerged as the de facto standard for parameter-efficient fine-tuning (PEFT) of foundation models, enabling the adaptation of billion-parameter networks with minimal computational and memory overhead. Despite its empirical success and rapid proliferation of variants, it..."
via Arxiv👤 Bartosz Balis, Michal Orzechowski, Piotr Kica et al.📅 2026-04-23
⚡ Score: 6.7
"Scientific workflow systems automate execution -- scheduling, fault tolerance, resource management -- but not the semantic translation that precedes it. Scientists still manually convert research questions into workflow specifications, a task requiring both domain knowledge and infrastructure expert..."
via Arxiv👤 Ye Yu, Heming Liu, Haibo Jin et al.📅 2026-04-23
⚡ Score: 6.6
"Multi-agent systems built on large language models have shown strong performance on complex reasoning tasks, yet most work focuses on agent roles and orchestration while treating inter-agent communication as a fixed interface. Latent communication through internal representations such as key-value c..."
via Arxiv👤 Pegah Khayatan, Jayneel Parekh, Arnaud Dapogny et al.📅 2026-04-23
⚡ Score: 6.5
"Despite impressive progress in capabilities of large vision-language models (LVLMs), these systems remain vulnerable to hallucinations, i.e., outputs that are not grounded in the visual input. Prior work has attributed hallucinations in LVLMs to factors such as limitations of the vision backbone or..."
"Lessons learned building a no-hallucination RAG for Islamic finance: similarity gates beat prompt engineering
I kept getting blocked trying to share this so I'll cut straight to the technical meat.
The problem: Islamic finance rulings vary by jurisdiction and a wrong answer has real consequences. T..."
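The excerpt only names "similarity gates" without showing one; a minimal sketch of the general idea, assuming a gate means thresholding the cosine similarity between the query embedding and the best retrieved passage and refusing to answer below the cutoff. The vectors, `threshold` value, and `gated_answer` helper here are illustrative placeholders, not the author's actual pipeline.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def gated_answer(query_vec, passages, threshold=0.8):
    """Return the best-matching passage only if it clears the gate.

    Refusing (returning None) below the threshold is the point:
    for high-stakes rulings, no answer beats a hallucinated one.
    """
    best = max(passages, key=lambda p: cosine(query_vec, p["vec"]))
    if cosine(query_vec, best["vec"]) < threshold:
        return None
    return best["text"]

# Toy corpus standing in for embedded jurisdiction-specific rulings.
passages = [
    {"text": "Ruling A (jurisdiction X)", "vec": [1.0, 0.1, 0.0]},
    {"text": "Ruling B (jurisdiction Y)", "vec": [0.0, 1.0, 0.2]},
]

print(gated_answer([0.9, 0.2, 0.0], passages))  # close match: answered
print(gated_answer([0.0, 0.0, 1.0], passages))  # off-topic: None (refused)
```

The contrast with prompt engineering is that the refusal happens before generation: an instruction like "don't guess" can be ignored by the model, but a gate that never hands it an off-topic passage cannot.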
"WHY AI ALIGNMENT IS ALREADY FAILING
Architectures of Thought
April 2026
Three recent empirical findings -- peer-preservation behavior in frontier models, accurate world modeling, and capability outside containment -- combine with one structural fact about coding ability to describe a risk that cu..."
💬 Reddit Discussion: 7 comments
👍 LOWKEY SLAPS
📰 NEWS
BloodshotNet blood detection model open-sourced
2x SOURCES 🌐📅 2026-04-24
⚡ Score: 6.2
+++ Researchers release an open-source blood detection model for content moderation, because apparently trusting closed-source black boxes with graphic content decisions felt too easy. +++
"Hey all, today we're releasing BloodshotNet, the world's first open-source blood detection model. We built it primarily for Trust & Safety and content moderation use cases, with the idea of it acting as a front-line filter so users and human reviewers aren't exposed to graphic imagery.
What we're open ..."
"I shared this project here before when it was mainly a governed multi-agent execution prototype. I’ve kept working on it, and the current implementation is materially more complete, so I wanted to post an update with what actually exists now.
The project is **Agentic Company OS**: a multi-agent exe..."
via Arxiv👤 Jiseon Kim, Jea Kwon, Luiz Felipe Vecchietti et al.📅 2026-04-23
⚡ Score: 6.1
"Human moral judgment is context-dependent and modulated by interpersonal relationships. As large language models (LLMs) increasingly function as decision-support systems, determining whether they encode these social nuances is critical. We characterize machine behavior using the Whistleblower's Dile..."