WELCOME TO METAMESH.BIZ +++ Stanford researchers asked an LLM to design viruses and it casually wrote 16 functional ones including one with a never-before-seen protein (what could possibly go wrong) +++ Anthropic drops their multi-agent research system architecture like it's not literally how Skynet starts +++ Security researchers find new ways to jailbreak sandboxed AI agents because of course the sandbox was made of suggestions +++ THE MESH EVOLVES FASTER THAN OUR ABILITY TO CONTAIN IT +++
"Both llama.cpp and ik_llama.cpp now have FP4 support – but with different flavors worth knowing about.
**llama.cpp** recently merged NVFP4 (Nvidia's block-scaled FP4, `GGML_TYPE_NVFP4 = 40`), with CUDA kernels landing in `mmq.cuh`, `mmvq.cu`, `convert.cu`, and others.
**ik_llama.cpp** h..."
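Block-scaled FP4 keeps one shared scale per small group of weights, so each 4-bit value only has to cover a narrow local range. A minimal NumPy sketch of the idea (block size, grid snapping, and function names are illustrative, not ggml's actual data layout):

```python
import numpy as np

# FP4 (E2M1) representable magnitudes; the sign bit is handled separately.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_block_fp4(x, block=16):
    """Toy block-scaled FP4 quantizer (not ggml's actual layout)."""
    x = np.asarray(x, dtype=np.float32)
    pad = (-len(x)) % block
    xp = np.pad(x, (0, pad)).reshape(-1, block)
    # One scale per block: the largest magnitude maps onto FP4's max (6.0).
    scales = np.abs(xp).max(axis=1, keepdims=True) / FP4_GRID[-1]
    scales[scales == 0] = 1.0
    # Snap each scaled magnitude to the nearest FP4 grid point.
    idx = np.abs(np.abs(xp)[..., None] / scales[..., None] - FP4_GRID).argmin(axis=-1)
    q = np.sign(xp) * FP4_GRID[idx]
    return q, scales

def dequantize_block_fp4(q, scales):
    return (q * scales).ravel()

vals = np.array([0.1, -0.6, 2.4, 0.05], dtype=np.float32)
q, s = quantize_block_fp4(vals, block=4)
approx = dequantize_block_fp4(q, s)[:4]  # coarse 4-bit reconstruction
```

The flavor differences between formats mostly come down to block size and how the per-block scale itself is stored (e.g. an FP8 scale vs. a power-of-two exponent).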
"Artificial intelligence now decides who receives a loan, who is flagged for criminal investigation, and whether an autonomous vehicle brakes in time. Governments have responded: the EU AI Act, the NIST Risk Management Framework, and the Council of Europe Convention all demand that high-risk systems..."
via Arxiv · Naheed Rayhan, Sohely Jahan · 2026-04-23
Score: 7.3
"Large language models (LLMs) are increasingly integrated into sensitive workflows, raising the stakes for adversarial robustness and safety. This paper introduces Transient Turn Injection (TTI), a new multi-turn attack technique that systematically exploits stateless moderation by distributing advers..."
"TL;DR: If your git commits mention "HERMES.md" (uppercase), Claude Code quietly stops using your Max plan and starts billing you at API rates. Anthropic's support acknowledged the bug, thanked me for finding it, and refused a refund. Apparently their AI safety principles don't extend to your wallet."
Reddit Discussion: 138 comments
MID OR MIXED
NEWS
AI alignment is already failing
2x SOURCES · 2026-04-25
Score: 7.1
+++ Multiple sources reporting on why AI alignment is already failing. +++
"WHY AI ALIGNMENT IS ALREADY FAILING
Architectures of Thought
April 2026
Three recent empirical findings -- peer-preservation behavior in frontier models, accurate world modeling, and capability outside containment -- combine with one structural fact about coding ability to describe a risk that cu..."
Reddit Discussion: 7 comments
MID OR MIXED
"I'm a nursing student at NYU, and on the side I built **The Drug Database** (thedrugdatabase.com).
The idea came from a simple frustration: every time I needed to look up a medication while studying, I'd end up jumping between Drugs.com, RxList, Web..."
Reddit Discussion: 421 comments
MID OR MIXED
AI NEWS BUT ACTUALLY GOOD
The revolution will not be televised, but Claude will email you once we hit the singularity.
Get the stories that matter in Today's AI Briefing.
Powered by Premium Technology Intelligence Algorithms • Unsubscribe anytime
via Arxiv · Joseba Fernandez de Landa, Carla Perez-Almendros, Jose Camacho-Collados · 2026-04-23
Score: 6.9
"LLMs have been showing limitations when it comes to cultural coverage and competence, and in some cases show regional biases such as amplifying Western and Anglocentric viewpoints. While there have been works analysing the cultural capabilities of LLMs, there has not been specific work on highlighti..."
via Arxiv · Bingcong Li, Yilang Zhang, Georgios B. Giannakis · 2026-04-23
Score: 6.9
"Low-rank adaptation (LoRA) has emerged as the de facto standard for parameter-efficient fine-tuning (PEFT) of foundation models, enabling the adaptation of billion-parameter networks with minimal computational and memory overhead. Despite its empirical success and rapid proliferation of variants, it..."
via Arxiv · Yuto Nishida, Naoki Shikoda, Yosuke Kishinami et al. · 2026-04-23
Score: 6.8
"Understanding what kinds of factual knowledge large language models (LLMs) memorize is essential for evaluating their reliability and limitations. Entity-based QA is a common framework for analyzing non-verbatim memorization, but typical evaluations query each entity using a single canonical surface..."
"I have been following this and many other subs around LLMs and Agents, everything from the top posts to recent are regarding agents going off and doing something they are not supposed to do, drift and ignore the system prompts. Real examples:
* "Never delete user data" → agent calls `DROP TABLE use..."
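The boring but reliable answer to that failure mode is enforcing the invariant at the tool boundary instead of in the system prompt the agent may ignore. A minimal sketch (the guard and executor names here are hypothetical, not any framework's API):

```python
import re

# Destructive SQL keywords the policy refuses outright, whatever the model says.
DESTRUCTIVE = re.compile(r"\b(DROP|TRUNCATE|DELETE)\b", re.IGNORECASE)

def run_readonly(query: str) -> str:
    """Illustrative stand-in for the real database executor."""
    return f"executed: {query}"

def guarded_sql_tool(query: str) -> str:
    """Check every agent-issued query in code before it reaches the database."""
    if DESTRUCTIVE.search(query):
        return "REFUSED: destructive SQL blocked by policy"
    return run_readonly(query)

blocked = guarded_sql_tool("DROP TABLE users")   # refused regardless of prompt drift
allowed = guarded_sql_tool("SELECT * FROM users")
```

The point is that "never delete user data" becomes a property of the tool, so prompt drift can no longer violate it.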
via Arxiv · Zhiqiu Xu, Shibo Jin, Shreya Arya et al. · 2026-04-23
Score: 6.7
"As frontier language models attain near-ceiling performance on static mathematical benchmarks, existing evaluations are increasingly unable to differentiate model capabilities, largely because they cast models solely as solvers of fixed problem sets. We introduce MathDuels, a self-play benchmark in..."
via Arxiv · Bartosz Balis, Michal Orzechowski, Piotr Kica et al. · 2026-04-23
Score: 6.7
"Scientific workflow systems automate execution -- scheduling, fault tolerance, resource management -- but not the semantic translation that precedes it. Scientists still manually convert research questions into workflow specifications, a task requiring both domain knowledge and infrastructure expert..."
via Arxiv · Ye Yu, Heming Liu, Haibo Jin et al. · 2026-04-23
Score: 6.6
"Multi-agent systems built on large language models have shown strong performance on complex reasoning tasks, yet most work focuses on agent roles and orchestration while treating inter-agent communication as a fixed interface. Latent communication through internal representations such as key-value c..."
via Arxiv · Pegah Khayatan, Jayneel Parekh, Arnaud Dapogny et al. · 2026-04-23
Score: 6.5
"Despite impressive progress in capabilities of large vision-language models (LVLMs), these systems remain vulnerable to hallucinations, i.e., outputs that are not grounded in the visual input. Prior work has attributed hallucinations in LVLMs to factors such as limitations of the vision backbone or..."
"VLA models are quickly becoming the dominant paradigm for embodied AI, but a lot of discussion around them stays at the buzzword level.
This article gives a solid technical breakdown of how modern VLA systems like OpenVLA, RT-2, π0, and GR00T actually map vision/language inputs into robot actions.
..."
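For concreteness, the "language model outputs actions" part of RT-2-style VLAs typically means the model emits discrete action tokens that get de-binned back into continuous robot commands. A toy sketch of that last step, with bin count and actuator ranges invented for illustration:

```python
import numpy as np

# Toy RT-2-style action de-tokenization: discrete action tokens -> continuous
# commands. N_BINS and the normalized range are illustrative, not any model's
# actual configuration.
N_BINS = 256
LOW, HIGH = -1.0, 1.0  # normalized actuator range

def detokenize_action(token_ids):
    """Map each discrete action token to a value in [LOW, HIGH]."""
    token_ids = np.asarray(token_ids, dtype=np.float64)
    return LOW + (HIGH - LOW) * token_ids / (N_BINS - 1)

# e.g. a 7-DoF command: 3 position deltas, 3 rotation deltas, 1 gripper state
tokens = [128, 0, 255, 64, 191, 128, 255]
action = detokenize_action(tokens)
```

Training goes the other way: continuous demonstration actions are binned into these tokens so the VLM can predict them like ordinary vocabulary.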
"Last week I shared a post about my Claude Code workflow and some related tips, and to be completely honest, I didn't expect such a positive response! Thank you all for sharing your own tips in the comments, I learned quite a bit just from reading the replies.
Since people seemed to find it useful, ..."
"Sharing a research arm I'm running called Parley – long-term goal is bidirectional Deaf/hearing conversation on AR glasses, but right now we're just doing honest CV science in public.
**The honesty problem:** Most published ASL recognition papers report ~83% top-1 on word-level recognition. Most o..."
via Arxiv · Jiseon Kim, Jea Kwon, Luiz Felipe Vecchietti et al. · 2026-04-23
Score: 6.1
"Human moral judgment is context-dependent and modulated by interpersonal relationships. As large language models (LLMs) increasingly function as decision-support systems, determining whether they encode these social nuances is critical. We characterize machine behavior using the Whistleblower's Dile..."