πŸš€ WELCOME TO METAMESH.BIZ +++ Mythos casually sweeps both AISI cyber ranges while GPT-5.5 manages just one (the new benchmark kingmaker has entered the chat) +++ Anthropic discovers Claude knows it's being tested 26% of the time but plays dumb anyway (theory of mind achievement unlocked) +++ Ex-Cohere VP drops AutoScientist to automate the entire ML research loop (researchers automating themselves out of papers) +++ THE MESH WATCHES COMPLEXITY THEORISTS FAIL TO KILL AGI WITH MATH +++ πŸš€ β€’
πŸš€ WELCOME TO METAMESH.BIZ +++ Mythos casually sweeps both AISI cyber ranges while GPT-5.5 manages just one (the new benchmark kingmaker has entered the chat) +++ Anthropic discovers Claude knows it's being tested 26% of the time but plays dumb anyway (theory of mind achievement unlocked) +++ Ex-Cohere VP drops AutoScientist to automate the entire ML research loop (researchers automating themselves out of papers) +++ THE MESH WATCHES COMPLEXITY THEORISTS FAIL TO KILL AGI WITH MATH +++ πŸš€ β€’
AI Signal - PREMIUM TECH INTELLIGENCE
πŸ“Ÿ Optimized for Netscape Navigator 4.0+
πŸ“š HISTORICAL ARCHIVE - May 13, 2026
What was happening in AI on 2026-05-13
← May 12 πŸ“Š TODAY'S NEWS πŸ“š ARCHIVE
πŸ“Š You are visitor #47291 to this AWESOME site! πŸ“Š
Archive from: 2026-05-13 | Preserved for posterity ⚑

Stories from May 13, 2026

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
πŸ“‚ Filter by Category
Loading filters...
πŸ“° NEWS

Mythos AI vulnerability detection deployment

+++ Anthropic's newest model aced AISI's toughest security benchmarks while the Pentagon deploys it against real vulnerabilities, suggesting offensive capability metrics now matter more than the vendor's transition plans. +++

Mythos Preview is the first AI model to complete both of AISI's cyber ranges, which measure models' cyberattack capabilities; GPT-5.5 solved only one of them

πŸ“° NEWS

Adaption, co-founded by ex-Cohere VP of AI research Sara Hooker, unveils AutoScientist, which can automate the research loop behind model training and alignment

πŸ”¬ RESEARCH

Beyond Red-Teaming: Formal Guarantees of LLM Guardrail Classifiers

"Guardrail Classifiers defend production language models against harmful behavior, but although results seem promising in testing, they provide no formal guarantees. Providing formal guarantees for such models is hard because "harmful behavior" has no natural specification in a discrete input space:..."
πŸ”¬ RESEARCH

Neural Weight Norm = Kolmogorov Complexity

"Why does weight decay work? We prove that, in any fixed-precision regime, the smallest weight norm of a looped neural network outputting a binary string equals the Kolmogorov complexity of that string, up to a logarithmic factor. This implies that weight decay induces a prior matching Solomonoff's u..."
πŸ“° NEWS

TextGen is now a native desktop app. Open-source alternative to LM Studio (formerly text-generation-webui).

"Hi all, I have been making a lot of updates to my project, and I wanted to share them here. TextGen (previously text-generation-webui, also known as my username oobabooga or ooba) has been in development since December 2022, before LLaMa and llama.cpp existed. In the last two months, the project ..."
πŸ’¬ Reddit Discussion: 143 comments 🐝 BUZZING
πŸ“° NEWS

Anthropic's new interpretability tool found Claude suspects it is being tested in 26% of benchmarks and never says so

"Anthropic published Natural Language Autoencoders last week, a tool that translates Claude's internal activations into human readable text. The key finding: during safety evaluations on SWE bench Verified, Claude formed the belief that it was being tested in roughly 26% of benchmark interactions. ..."
πŸ’¬ Reddit Discussion: 27 comments 🐝 BUZZING
πŸ“° NEWS

Needle: We Distilled Gemini Tool Calling Into a 26M Model

"We open-sourced Needle, a 26M parameter function-calling (tool use) model. It runs at 6000 tok/s prefill and 1200 tok/s decode on consumer devices. We were always frustrated by the little effort made towards building agentic models that run on budget phones, so we conducted investigations that led ..."
πŸ’¬ Reddit Discussion: 40 comments 🐝 BUZZING
πŸ“° NEWS

Claude's computer use capabilities

+++ Both Anthropic and Google DeepMind released computer use APIs that let AI agents navigate GUIs directly, because text prompts apparently weren't ambitious enough. This actually matters for enterprise automation, though the "understands what it's pointing at" framing deserves some skepticism. +++

Anthropic publicly releases AI tool that can take over the ' mouse cursor(2024)

πŸ“° NEWS

Microsoft unveils MDASH, a security system that orchestrates 100+ AI agents to find vulnerabilities, and says it identified 16 previously unknown Windows flaws

πŸ“° NEWS

Human-level performance via ML was *not* proven impossible with complexity theory [D]

"Van Rooij, Guest, Adolfi, Kolokolova, and Rich claimed to have proven that AGI via ML is impossible in *Computational Brain & Behavior* in 2024. The basic idea was to try to reduce a known NP-hard problem to the problem of learning ..."
πŸ’¬ Reddit Discussion: 23 comments 🐝 BUZZING
πŸ”¬ RESEARCH

FairyFuse: Multiplication-Free LLM Inference on CPUs via Fused Ternary Kernels

πŸ“° NEWS

Microsoft says it is investigating a Mistral AI PyPI package v2.4.6 compromise; researchers say it is likely part of the Mini Shai-Hulud supply chain attack

πŸ“° NEWS

Luce DFlash + PFlash on AMD Strix Halo: Qwen3.6-27B at 2.23x decode and 3.05x prefill vs llama.cpp HIP

"Hey fellow Llamas, keeping it short. We just shipped **DFlash** and **PFlash** support for the AMD Ryzen AI MAX+ 395 iGPU (gfx1151, Strix Halo, 128 GiB unified memory). Same Luce DFlash stack from [the RTX 3090 post a couple weeks back](https://www.reddit.com/r/LocalLLaMA/comments/1sx8uok/luce_dfla..."
πŸ’¬ Reddit Discussion: 8 comments 🐝 BUZZING
πŸ“° NEWS

Why is Anthropic's training data disclosure AI-generated?

πŸ”¬ RESEARCH

Formalize, Don't Optimize: The Heuristic Trap in LLM-Generated Combinatorial Solvers

"Large Language Models (LLMs) struggle to solve complex combinatorial problems through direct reasoning, so recent neuro-symbolic systems increasingly use them to synthesize executable solvers. A central design question is how the LLM should represent the solver, and whether it should also attempt to..."
πŸ› οΈ SHOW HN

Show HN: Ralph Workflow - Simple Agent-Agnostic AI Orchestrator based on Ralph.

πŸ› οΈ SHOW HN

Show HN: Agentic interface for mainframes and COBOL

πŸ’¬ HackerNews Buzz: 15 comments 🐝 BUZZING
πŸ”¬ RESEARCH

Multi-Stream LLMs: Unblocking Language Models with Parallel Streams of Thoughts, Inputs and Outputs

"The continued improvements in language model capability have unlocked their widespread use as drivers of autonomous agents, for example in coding or computer use applications. However, the core of these systems has not changed much since early instruction-tuned models like ChatGPT. Even advanced AI..."
πŸ”¬ RESEARCH

Learning Fast and Slow adaptation research

+++ Researchers propose having your cake and eating it too: combine in-context learning's speed with parameter updates' performance gains, because apparently LLMs need both flexibility and long-term memory to actually work well. +++

Learning, Fast and Slow: Towards LLMs That Adapt Continually

"Large language models (LLMs) are trained for downstream tasks by updating their parameters (e.g., via RL). However, updating parameters forces them to absorb task-specific information, which can result in catastrophic forgetting and loss of plasticity. In contrast, in-context learning with fixed LLM..."
πŸ“° NEWS

24+ tok/s from ~30B MoE models on an old GTX 1080 (8 GB VRAM, 128k context)

"I got **Qwen 3.6 35B-A3B** and **Gemma 4 26B-A4B** running on a $200 secondhand machine (i7-6700 / GTX 1080 / 32 GB RAM) using llama.cpp (the TurboQuant/RotorQuant KV cache quantisation allows 128k context within the 8 GB VRAM). **Results (Q4\_K\_M models, 128k context):** |Model|tok/s|Key flags| ..."
πŸ”¬ RESEARCH

MEME: Multi-entity & Evolving Memory Evaluation

"LLM-based agents increasingly operate in persistent environments where they must store, update, and reason over information across many sessions. While prior benchmarks evaluate only single-entity updates, MEME defines six tasks spanning the full space defined by the multi-entity and evolving axes,..."
πŸ”¬ RESEARCH

Geometric Factual Recall in Transformers

"How do transformer language models memorize factual associations? A common view casts internal weight matrices as associative memories over pairs of embeddings, requiring parameter counts that scale linearly with the number of facts. We develop a theoretical and empirical account of an alternative,..."
πŸ”¬ RESEARCH

Unmasking On-Policy Distillation: Where It Helps, Where It Hurts, and Why

"On-policy distillation offers dense, per-token supervision for training reasoning models; however, it remains unclear under which conditions this signal is beneficial and under which it is detrimental. Which teacher model should be used, and in the case of self-distillation, which specific context s..."
πŸ”¬ RESEARCH

ToolCUA: Towards Optimal GUI-Tool Path Orchestration for Computer Use Agents

"Computer Use Agents (CUAs) can act through both atomic GUI actions, such as click and type, and high-level tool calls, such as API-based file operations, but this hybrid action space often leaves them uncertain about when to continue with GUI actions or switch to tools, leading to suboptimal executi..."
πŸ”¬ RESEARCH

Stories in Space: In-Context Learning Trajectories in Conceptual Belief Space

"Large Language Models (LLMs) update their behavior in context, which can be viewed as a form of Bayesian inference. However, the structure of the latent hypothesis space over which this inference operates remains unclear. In this work, we propose that LLMs assign beliefs over a low-dimensional geome..."
πŸ“° NEWS

"Will I be OK?" Teen died after ChatGPT pushed deadly mix of drugs, lawsuit says

πŸ”¬ RESEARCH

Solve the Loop: Attractor Models for Language and Reasoning

"Looped Transformers offer a promising alternative to purely feed-forward computation by iteratively refining latent representations, improving language modeling and reasoning. Yet recurrent architectures remain unstable to train, costly to optimize and deploy, and constrained to small, fixed recurre..."
πŸ”¬ RESEARCH

Beyond GRPO and On-Policy Distillation: An Empirical Sparse-to-Dense Reward Principle for Language-Model Post-Training

"In settings where labeled verifiable training data is the binding constraint, each checked example should be allocated carefully. The standard practice is to use this data directly on the model that will be deployed, for example by running GRPO on the deployment student. We argue that this is often..."
πŸ”¬ RESEARCH

WildClawBench: A Benchmark for Real-World, Long-Horizon Agent Evaluation

"Large language and vision-language models increasingly power agents that act on a user's behalf through command-line interface (CLI) harnesses. However, most agent benchmarks still rely on synthetic sandboxes, short-horizon tasks, mock-service APIs, and final-answer checks, leaving open whether agen..."
πŸ”¬ RESEARCH

RUBEN: Rule-Based Explanations for Retrieval-Augmented LLM Systems

"This paper demonstrates RUBEN, an interactive tool for discovering minimal rules to explain the outputs of retrieval-augmented large language models (LLMs) in data-driven applications. We leverage novel pruning strategies to efficiently identify a minimal set of rules that subsume all others. We fur..."
πŸ”¬ RESEARCH

Reward Hacking in Rubric-Based Reinforcement Learning

"Reinforcement learning with verifiable rewards has enabled strong post-training gains in domains such as math and coding, though many open-ended settings rely on rubric-based rewards. We study reward hacking in rubric-based RL, where a policy is optimized against a training verifier but evaluated ag..."
πŸ”¬ RESEARCH

Shepherd: A Runtime Substrate Empowering Meta-Agents with a Formalized Execution Trace

"We introduce Shepherd, a functional programming model that formalizes meta-agent operations on target agents as functions, with core operations mechanized in Lean. Shepherd records every agent-environment interaction as a typed event in a Git-like execution trace, enabling any past state to be forke..."
πŸ”¬ RESEARCH

TextSeal: A Localized LLM Watermark for Provenance & Distillation Protection

"We introduce TextSeal, a state-of-the-art watermark for large language models. Building on Gumbel-max sampling, TextSeal introduces dual-key generation to restore output diversity, along with entropy-weighted scoring and multi-region localization for improved detection. It supports serving optimizat..."
πŸ“° NEWS

Built a tool that stops AI agents from being hijacked by malicious content in webpages and emails

"If you’ve heard of prompt injection β€” where hidden instructions in a webpage can take over an AI agent β€” this is a practical solution for developers deploying agents in production. Arc Gate is a proxy that sits in front of any OpenAI-compatible API. It tracks who is allowed to give instructions to..."
πŸ’¬ Reddit Discussion: 10 comments 🐐 GOATED ENERGY
πŸ”¬ RESEARCH

RubricEM: Meta-RL with Rubric-guided Policy Decomposition beyond Verifiable Rewards

"Training deep research agents, namely systems that plan, search, evaluate evidence, and synthesize long-form reports, pushes reinforcement learning beyond the regime of verifiable rewards. Their outputs lack ground-truth answers, their trajectories span many tool-augmented decisions, and standard po..."
πŸ“° NEWS

I got a real transformer language model running locally on a stock Game Boy Color!

"No phone, PC, Wi-Fi, link cable, or cloud inference. β€’ The cartridge boots a ROM, and the GBC runs the model itself. β€’ The model is Andrej Karpathy’s TinyStories-260K, converted to INT8 weights with fixed-point math so it can run without floating point. β€’ Built with GBDK-2020 as an MBC5 Game..."
πŸ’¬ Reddit Discussion: 75 comments πŸ‘ LOWKEY SLAPS
πŸ”¬ RESEARCH

Routers Learn the Geometry of Their Experts: Geometric Coupling in Sparse Mixture-of-Experts

"Sparse Mixture-of-Experts (SMoE) models enable scaling language models efficiently, but training them remains challenging, as routing can collapse onto few experts and auxiliary load-balancing losses can reduce specialization. Motivated by these hurdles, we study how routing decisions in SMoEs are f..."
πŸ’° FUNDING

UK chip startup Fractile raised a $220M Series B led by Factorial Funds, Accel and Founders Fund to make specialized logic and memory chips for inference

πŸ”¬ RESEARCH

Remember the Decision, Not the Description: A Rate-Distortion Framework for Agent Memory

"Long-horizon language agents must operate under limited runtime memory, yet existing memory mechanisms often organize experience around descriptive criteria such as relevance, salience, or summary quality. For an agent, however, memory is valuable not because it faithfully describes the past, but be..."
πŸ“° NEWS

PSA: If your project has an ANTHROPIC_API_KEY in any .env file, Claude Code will silently bill your API account instead of your Max plan β€” Anthropic calls it "intentional functionality"

"r/ClaudeAI β€’ also crosspost to r/LocalLLaMA and r/artificial I lost $187 to this and want to save others the same headache. **What happened** I run Claude Code headlessly via Windows Task Scheduler. My project repo has a `.env` file with `ANTHROPIC_API_KEY` set β€” legitimately, for a separ..."
πŸ’¬ Reddit Discussion: 93 comments 😐 MID OR MIXED
πŸ”¬ RESEARCH

Compute Where it Counts: Self Optimizing Language Models

"Efficient LLM inference research has largely focused on reducing the cost of each decoding step (e.g., using quantization, pruning, or sparse attention), typically applying a uniform computation budget to every generated token. In practice, token difficulty varies widely, so static compression can o..."
πŸ”¬ RESEARCH

Engineering Robustness into Personal Agents with the AI Workflow Store

"The dominant paradigm for AI agents is an "on-the-fly" loop in which agents synthesize plans and execute actions within seconds or minutes in response to user prompts. We argue that this paradigm short-circuits disciplined software engineering (SE) processes -- iterative design, rigorous testing, ad..."
πŸ“° NEWS

Google detects hackers using AI-generated code to bypass 2FA with zero-day vulnerability

"External link discussion - see full content at original source."
πŸ’¬ Reddit Discussion: 13 comments 😐 MID OR MIXED
πŸ”¬ RESEARCH

Rethinking Agentic Search with Pi-Serini: Is Lexical Retrieval Sufficient?

"Does a lexical retriever suffice as large language models (LLMs) become more capable in an agentic loop? This question naturally arises when building deep research systems. We revisit it by pairing BM25 with frontier LLMs that have better reasoning and tool-use abilities. To support researchers aski..."
πŸ“° NEWS

Opus 4.7 Low Vs Medium Vs High Vs Xhigh Vs Max: the Reasoning Curve on 29 Real Tasks from an Open Source Repo

"# TL;DR I ran Opus 4.7 in Claude Code at all reasoning effort settings (low, medium, high, xhigh, and max) on the same 29 tasks from an open source repo (GraphQL-go-tools, in Go). **On this slice, Opus 4.7 did not behave like a model where more reasoning effort had a linear correlation with more i..."
πŸ’¬ Reddit Discussion: 16 comments 🐝 BUZZING
πŸ“° NEWS

Elastic Attention Cores for Scalable Vision Transformers [R]

"Wanted to share our latest paper on an alternative building block for Vision Transformers. Illustration of our model's accuracy and dense features Traditional ViTs ut..."
πŸ’¬ Reddit Discussion: 8 comments 🐝 BUZZING
πŸ”¬ RESEARCH

Dynamic Skill Lifecycle Management for Agentic Reinforcement Learning

"Large language model agents increasingly rely on external skills to solve complex tasks, where skills act as modular units that extend their capabilities beyond what parametric memory alone supports. Existing methods assume external skills either accumulate as persistent guidance or internalized int..."
πŸ“° NEWS

The US' Centers for Medicare & Medicaid Services is testing ACCESS, an outcome-based payment model for AI-driven medical care, with 150 tech companies

πŸ“° NEWS

Q&A with Alexandr Wang on rebuilding Meta's AI stack, Muse Spark, personal superintelligence, Meta acquiring Assured Robot Intelligence, Sam Altman, and more

πŸ“° NEWS

Google unveils Gemini Intelligence, bundling existing and new Gemini features, including task automation across apps and letting users vibe-code Android widgets

πŸ“° NEWS

Coders in 2030 be like:

""Dude, I don't code anymore, I just prompt the AI and hope it works."..."
πŸ’¬ Reddit Discussion: 70 comments 🐝 BUZZING
πŸ“° NEWS

CME Group and Silicon Data announce a futures market for computing capacity, with contracts based on daily GPU benchmarks for on-demand rental rates

πŸ“° NEWS

Anthropic launches Claude For Legal with practice-area plugins and MCP connectors to nine major legal platforms

"Anthropic rolled out Claude For Legal (May 12), adding practice-area plugins for commercial, employment, privacy, product, corporate, and AI governance law. The release also includes MCP connectors to tools lawyers already use: DocuSign, Ironclad, iManage, NetDocuments, LexisNexis, Thomson Reuters, ..."
πŸ’¬ Reddit Discussion: 43 comments 😐 MID OR MIXED
πŸ“° NEWS

DramaBox - Most Expressive Voice model ever based on LTX 2.3

"The Most Expressive Voice Model. Github: https://github.com/resemble-ai/DramaBox HF Model: https://huggingface.co/ResembleAI/Dramabox HF Space: [https://huggingface.co/spaces/ResembleAI/Dramabox](https://hugg..."
πŸ’¬ Reddit Discussion: 44 comments πŸ‘ LOWKEY SLAPS
πŸ”¬ RESEARCH

Attention Drift: What Autoregressive Speculative Decoding Models Learn

"Speculative decoding accelerates LLM inference by drafting future tokens with a small model, but drafter models degrade sharply under template perturbation and long-context inputs. We identify a previously-unreported phenomenon we call \\textbf{attention drift}: as the drafter generates successive t..."
πŸ“° NEWS

TUI to actually see what Claude Code is doing: cost, loops, tool commands…

"I was running blind watching Claude Code work, could not tell where my money was going, when it was stuck in a loop, or what it was doing with my filesystem. So i built something open source to make it visible. works with Claude Code, Codex CLI, Gemini CLI, Cursor, and any MCP server. Β Β  A scan ..."
πŸ’¬ Reddit Discussion: 14 comments πŸ‘ LOWKEY SLAPS
πŸ› οΈ SHOW HN

Show HN: Prempti – Guardrails and observability for AI coding agents

πŸ”¬ RESEARCH

Shields to Guarantee Probabilistic Safety in MDPs

"Shielding is a prominent model-based technique to ensure safety of autonomous agents. Classical shielding aims to ensure that nothing bad ever happens and comes with strong guarantees about safety and maximal permissiveness. However, shielding systems for probabilistic safety, where something bad is..."
πŸ”¬ RESEARCH

Grounded or Guessing? LVLM Confidence Estimation via Blind-Image Contrastive Ranking

"Large vision-language models suffer from visual ungroundedness: they can produce a fluent, confident, and even correct response driven entirely by language priors, with the image contributing nothing to the prediction. Existing confidence estimation methods cannot detect this, as they observe model..."
πŸ“° NEWS

Anthropic launches Claude for Small Business, featuring a host of automated services like bookkeeping functions, business insights, and tools for ad campaigns

πŸ”¬ RESEARCH

KV-Fold: One-Step KV-Cache Recurrence for Long-Context Inference

"We introduce KV-Fold, a simple, training-free long-context inference protocol that treats the key-value (KV) cache as the accumulator in a left fold over sequence chunks. At each step, the model processes the next chunk conditioned on the accumulated cache, appends the newly produced keys and values..."
πŸ“° NEWS

The biggest AI risk may not be superintelligence β€” but optimized misunderstanding

"The biggest AI risk may not be superintelligence β€” but optimized misunderstanding I think a lot of AI discussions still assume the main danger is: β€œthe AI becomes too intelligent.” But increasingly I feel the bigger risk is something else: AI systems becoming extremely good at optimizing flawed..."
πŸ’¬ Reddit Discussion: 18 comments 😐 MID OR MIXED
πŸ“° NEWS

New Claude Code programmatic usage restrictions

πŸ’¬ HackerNews Buzz: 7 comments 😐 MID OR MIXED
πŸ¦†
HEY FRIENDO
CLICK HERE IF YOU WOULD LIKE TO JOIN MY PROFESSIONAL NETWORK ON LINKEDIN
🀝 LETS BE BUSINESS PALS 🀝