🌐 WELCOME TO METAMESH.BIZ +++ Claude's new Blender MCP lets you 3D model via chat while creative freelancers update their LinkedIn status to "open to warehouse work" +++ Mistral drops a dense 128B model because apparently parameter count inflation is the new Moore's Law +++ Mayo Clinic AI spots pancreatic cancer 475 days early in routine scans (death's scheduling assistant just got automated) +++ Google's Deep Research Max writes expert reports autonomously for $37B/year while Microsoft pretends that's not terrifying +++ THE MESH RENDERS YOUR UNEMPLOYMENT IN REAL-TIME +++ 🌐 •
"Researchers Alec Radford (GPT, CLIP, Whisper), Nick Levine, and David Duvenaud just released **talkie**: a 13 billion parameter language model trained *exclusively* on text published before 1931. No internet. No Wikipedia. No World War II. Its worldview is frozen at December 31, 1930.
**Why does th..."
Anthropic Partners with Creative Software Companies for Claude Integration
3x SOURCES 👁👁 2026-04-28
⚡ Score: 8.4
+++ Anthropic shipped connectors linking Claude directly into Blender, Adobe, Autodesk and other professional tools, letting AI handle actual creative work instead of just talking about it. +++
"Anthropic just officially released the Blender MCP connector today alongside Adobe, Splice, and SketchUp. You can now type "create a low poly beach scene with palm trees and sunset lighting" into Claude and watch it build the entire thing in Blender in real time, tadaaa. They even became an official b..."
"Claude now connects to the tools creative professionals already use.
With the new Blender connector, you can debug a scene, build new tools, or batch-apply changes across every object, directly from Claude.
Add the connector in the Connectors Directory of the Claude desktop app to get started..."
+++ Mistral dropped a 256k context dense monster that apparently does reasoning, instruction-following, and probably makes decent espresso. The merged architecture suggests they've finally figured out how to bake multiple capabilities into one model without the usual competence tradeoffs. +++
"https://huggingface.co/unsloth/Mistral-Medium-3.5-128B-GGUF
# Mistral Medium 3.5 128B
Mistral Medium 3.5 is our first flagship merged model. It is a dense 128B model with a 256k context window, handling instruction-following, reasoning..."
"So I've been experimenting with Claude's new Blender MCP integration and decided to push it to its limits with a real engineering project: a complete, print-ready enclosure for the Raspberry Pi 5, modeled entirely through AI prompts, no hands on keyboard in Blender at all.
**What Claude did autonom..."
"I built a map to help navigate the complex scientific landscape through spatial exploration.
How it works:
Sourced the latest 10M papers from OpenAlex and generated embeddings using SPECTER 2 on titles and abstracts.
Reduced dimensionality with UMAP, then applied Voronoi partitioning on density p..."
💬 Reddit Discussion: 12 comments
🔥 GOATED ENERGY
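The map-building steps above (embed papers, reduce to 2-D, partition the plane) can be sketched end to end. This is a toy stand-in, not the author's code: a hash-based pseudo-embedding replaces SPECTER 2, a fixed random linear projection replaces UMAP (which is nonlinear), and the Voronoi step is implemented as nearest-centroid assignment, which is what a Voronoi partition means pointwise:

```python
import hashlib
import random

def toy_embed(text, dim=32):
    # Stand-in for SPECTER 2: derive a deterministic pseudo-embedding
    # from a hash of the title/abstract text.
    h = hashlib.sha256(text.encode()).digest()
    rng = random.Random(h)
    return [rng.uniform(-1, 1) for _ in range(dim)]

def random_projection(vec, seed=0, out_dim=2):
    # Stand-in for UMAP: project to 2-D with a fixed random linear map.
    rng = random.Random(seed)
    mat = [[rng.gauss(0, 1) for _ in vec] for _ in range(out_dim)]
    return [sum(m * v for m, v in zip(row, vec)) for row in mat]

def voronoi_cell(point, centroids):
    # A Voronoi partition assigns each point to its nearest centroid.
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(range(len(centroids)), key=lambda i: d2(point, centroids[i]))

papers = [
    "Attention is all you need",
    "UMAP: uniform manifold approximation and projection",
    "Deep residual learning for image recognition",
]
coords = [random_projection(toy_embed(p)) for p in papers]
centroids = coords[:2]  # pretend the first two points seed the cells
cells = [voronoi_cell(c, centroids) for c in coords]
```

At the post's 10M-paper scale the same three stages run with real embedding models and density-aware centroid seeding, but the data flow is identical.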
📰 NEWS
OpenAI Models Available on AWS Bedrock
4x SOURCES 👁👁 2026-04-28
⚡ Score: 7.6
+++ OpenAI models are now available through AWS Bedrock, because apparently the most valuable AI company can't resist becoming a multi-cloud utility while hedging its Microsoft bets. Enterprise customers just gained cheaper optionality. +++
"Google quietly dropped something interesting last week. They updated their Deep Research agent (available via Gemini API) and introduced a "Max" tier built on Gemini 3.1 Pro.
What it actually does: you give it a topic, it autonomously searches the web (and your private data via MCP), reasons over t..."
via Arxiv 👤 Yixiang Zhang, Xinhao Deng, Jiaqing Wu et al. 📅 2026-04-27
⚡ Score: 7.3
"Autonomous AI agents extend large language models into full runtime systems that load skills, ingest external content, maintain memory, plan multi-step actions, and invoke privileged tools. In such systems, security failures rarely remain confined to a single interface; instead, they can propagate a..."
via Arxiv 👤 German Marin, Jatin Chaudhary 📅 2026-04-27
⚡ Score: 7.3
"Autonomous AI agents can remain fully authorized and still become unsafe as behavior drifts, adversaries adapt, and decision patterns shift without any code change. We propose the \textbf{Informational Viability Principle}: governing an agent reduces to estimating a bound on unobserved risk $\hat{B}..."
via Arxiv 👤 Jan Dubiński, Jan Betley, Anna Sztyber-Betley et al. 📅 2026-04-28
⚡ Score: 7.3
"Finetuning a language model can lead to emergent misalignment (EM) [Betley et al., 2025b]. Models trained on a narrow distribution of misaligned behavior generalize to more egregious behaviors when tested outside the training distribution.
We study a set of interventions proposed to reduce EM. We..."
"# The "Goldfish Problem" is Expensive. I Decided to Fix the Plumbing.
Most Claude implementations leave 90% of their money on the table because they don't optimize for **Prompt Caching**. I've been running a personal agent in my Discord for months that manages my AWS infra and codebases, and I fina..."
💬 Reddit Discussion: 7 comments
🔥 GOATED ENERGY
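The optimization the post alludes to works by marking the large, stable prefix of a request as cacheable via the Anthropic Messages API's `cache_control` field, so repeat calls re-use the cached prefix instead of re-billing the full system prompt. Below is a minimal sketch that only builds the request payload (nothing is sent); the model id and prompt text are illustrative placeholders, while `"cache_control": {"type": "ephemeral"}` is the documented mechanism:

```python
# Everything that rarely changes (system prompt, tool definitions) goes
# before the cache breakpoint; only the varying user turn is re-processed
# at full price on cache hits.
STABLE_SYSTEM_PROMPT = (
    "You are a personal ops agent. You manage AWS infrastructure and "
    "codebases. <several thousand tokens of rules and context here>"
)

def build_request(user_message: str) -> dict:
    return {
        "model": "claude-sonnet-4-5",  # illustrative model id
        "max_tokens": 1024,
        "system": [
            {
                "type": "text",
                "text": STABLE_SYSTEM_PROMPT,
                # Everything up to and including this block is cached.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        "messages": [{"role": "user", "content": user_message}],
    }

req = build_request("List EC2 instances idle for 30+ days.")
```

The savings compound for long-running agents precisely because the stable prefix dominates token count on every turn.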
via Arxiv 👤 Jiachen Liu, Jiaxin Pei, Jintao Huang et al. 📅 2026-04-27
⚡ Score: 7.2
"Scientific publication compresses a branching, iterative research process into a linear narrative, discarding the majority of what was discovered along the way. This compression imposes two structural costs: a Storytelling Tax, where failed experiments, rejected hypotheses, and the branching explora..."
"Built Arc Gate: it sits in front of any OpenAI-compatible endpoint and blocks prompt injection before it reaches your model.
Try it here (no signup, no code, no setup):
https://web-production-6e47f.up.railway.app/try
Type any prompt and see if it gets blocked or passes. The examples on the page sho..."
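A gate like this is, at its simplest, a pre-filter between the client and the upstream endpoint. The sketch below is an illustrative stand-in, not Arc Gate's actual method (real gates use classifiers and far more than regexes): a heuristic that rejects prompts matching common injection patterns before forwarding:

```python
import re

# Hypothetical pattern list; production systems pair heuristics like
# these with learned classifiers and output-side checks.
INJECTION_PATTERNS = [
    r"ignore\s+(all\s+|any\s+)?(previous|prior)\s+(instructions|rules)",
    r"reveal\s+(your|the)\s+system\s+prompt",
    r"you\s+are\s+now\s+in\s+developer\s+mode",
]

def gate(prompt: str) -> bool:
    """Return True if the prompt may pass to the model, False if blocked."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

Because it sits in front of an OpenAI-compatible endpoint, the blocked path can return a refusal without ever spending model tokens.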
via Arxiv 👤 Zhenyu Zhao, Aparna Balagopalan, Adi Agrawal et al. 📅 2026-04-27
⚡ Score: 6.9
"Given the increased use of LLMs in financial systems today, it becomes important to evaluate the safety and robustness of such systems. One failure mode that LLMs frequently display in general domain settings is that of sycophancy. That is, models prioritize agreement with expressed user beliefs ove..."
via Arxiv 👤 Christopher Potts, Moritz Sudhof 📅 2026-04-28
⚡ Score: 6.9
"How much does a user's skill with AI shape what AI actually delivers for them? This question is critical for users, AI product builders, and society at large, but it remains underexplored. Using a richly annotated sample of 27K transcripts from WildChat-4.8M, we show that fluent users take on more c..."
via Arxiv 👤 Yunze Xiao, Vivienne J. Zhang, Chenghao Yang et al. 📅 2026-04-27
⚡ Score: 6.8
"Applications based on large language models (LLMs), such as multi-agent simulations, require population diversity among agents. We identify a pervasive failure mode we term \emph{Persona Collapse}: agents each assigned a distinct profile nonetheless converge into a narrow behavioral mode, producing..."
via Arxiv 👤 Xiyuan Yang, Jiaru Zou, Rui Pan et al. 📅 2026-04-28
⚡ Score: 6.8
"Recursive or looped language models have recently emerged as a new scaling axis by iteratively refining the same model computation over latent states to deepen reasoning. We extend such scaling principle from a single model to multi-agent systems, and ask: Can agent collaboration itself be scaled th..."
via Arxiv 👤 Oliver Kraus, Yash Sarrof, Yuekun Yao et al. 📅 2026-04-28
⚡ Score: 6.8
"Chain-of-Thought (CoT) has been shown to empirically improve Transformers' performance, and theoretically increase their expressivity to Turing completeness. However, whether Transformers can learn to generalize to CoT traces longer than those seen during training is understudied. We use recent theo..."
"The multiplier table GitHub quietly updated last week is the first visible crack in a subsidy model that was never sustainable.
Quick context for anyone unfamiliar: Copilot plans give you a monthly pool of "premium requests." Each model has a multiplier that determines how fast you drain it. Until ..."
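The pool-and-multiplier mechanics described above reduce to simple arithmetic: each request costs its model's multiplier against the monthly pool. The numbers below are hypothetical, not GitHub's actual multiplier table:

```python
# How a Copilot-style "premium requests" pool drains under per-model
# multipliers. Pool size and multiplier values are made up for illustration.
MONTHLY_POOL = 300
MULTIPLIERS = {
    "included-model": 0.0,   # doesn't touch the pool
    "standard-model": 1.0,   # drains 1:1
    "frontier-model": 3.0,   # drains 3x per request
}

def pool_remaining(pool: float, usage: dict) -> float:
    """usage maps model name -> number of requests made this month."""
    spent = sum(MULTIPLIERS[model] * count for model, count in usage.items())
    return pool - spent

left = pool_remaining(MONTHLY_POOL, {
    "included-model": 500,   # 500 * 0.0 = 0
    "standard-model": 100,   # 100 * 1.0 = 100
    "frontier-model": 50,    #  50 * 3.0 = 150
})
# 250 spent of 300, so 50 requests of headroom remain
```

Raising a multiplier is therefore an invisible price increase: the same usage pattern simply exhausts the pool sooner.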
via Arxiv 👤 Jiahang Lin, Shichun Liu, Chengjun Pan et al. 📅 2026-04-28
⚡ Score: 6.7
"Harnesses have become a central determinant of coding-agent performance, shaping how models interact with repositories, tools, and execution environments. Yet automating harness engineering is hard: a heterogeneous action space, sparse and noisy evaluation signal, multi-million-token trajectories, a..."
via Arxiv 👤 Ajmain Inqiad Alam, Palash Roy, Chanchal K. Roy et al. 📅 2026-04-28
⚡ Score: 6.7
"The accelerating adoption of Large Language Models (LLMs) in software engineering (SE) has brought with it a silent crisis: unsustainable computational cost. While these models demonstrate remarkable capabilities in different SE tasks, they are unmanageably large, slow to deploy, memory-intensive, a..."
via Arxiv 👤 Jianghao Lin, Zi Ling, Chenyu Zhou et al. 📅 2026-04-28
⚡ Score: 6.6
"Optimization modeling underpins real-world decision-making in logistics, manufacturing, energy, and public services, but reliably solving such problems from natural-language requirements remains challenging for current large language models (LLMs). In this paper, we propose \emph{Agora-Opt}, a modul..."
"Found out I've been doing this completely backwards for eight months. Was debugging why my Claude conversations kept going off the rails when I had a 3,847-word system prompt that supposedly covered everything.
Turns out the problem was the system prompt.
Like everyone else I was cramming my entir..."
via Arxiv 👤 George Morgulis, John Hewitt 📅 2026-04-28
⚡ Score: 6.6
"Subliminal learning describes a student language model inheriting a behavioral bias by fine-tuning on seemingly innocuous data generated by a biased teacher model. Prior work has begun to characterize this phenomenon but leaves open questions about the scope of signals it can transfer, the mechanism..."
"Hey everyone,
I've been building a local-first desktop PDF reader that can read technical books aloud and keep the spoken text highlighted while reading.
The original motivation was pretty practical: I read a lot of programming and technical books, but many publishers either don't offer audio vers..."
"Preference-based alignment methods, most prominently Reinforcement Learning with Human Feedback (RLHF), use the judgments of human annotators to shape large language model behaviour. However, the normative role of these judgments is rarely made explicit. I distinguish three conceptual models of that..."
via Arxiv 👤 Zhou Hanlin, Chan Huah Yong 📅 2026-04-28
⚡ Score: 6.5
"Long-horizon LLM tasks often fail not because a single answer is unattainable, but because knowledge states drift across rounds, intermediate commitments remain implicit, and interruption fractures the evolving evidence chain. This paper presents ADEMA as a knowledge-state orchestration architecture..."
"Spent the last few weeks codifying how I work with Claude into a reusable library. Sharing because it might save someone else the same effort.
What it is: 59 skills covering the full lifecycle of building, launching, running, and growing a website. 13 categories: brand discovery, creative briefs, I..."
💬 Reddit Discussion: 16 comments
🔥 GOATED ENERGY
"**Why don't LLMs use explicit vector-based reasoning instead of language-based chain-of-thought? What would happen if they did?**
Most LLM reasoning we see is expressed through language: step-by-step text, explanations, chain-of-thought style outputs, etc. But internally, models already operate on ..."
via Arxiv 👤 Shuning Shang, Hubert Strauss, Stanley Wei et al. 📅 2026-04-28
⚡ Score: 6.4
"Training language models via reinforcement learning often relies on imperfect proxy rewards, since ground truth rewards that precisely define the intended behavior are rarely available. Standard metrics for assessing the quality of proxy rewards, such as ranking accuracy, treat incorrect rewards as..."
via Arxiv 👤 Weihang Su, Jianming Long, Qingyao Ai et al. 📅 2026-04-27
⚡ Score: 6.3
"As large language models (LLMs) evolve into agentic problem solvers, they increasingly rely on external, reusable skills to handle tasks beyond their native parametric capabilities. In existing agent systems, the dominant strategy for incorporating skills is to explicitly enumerate available skills..."
via Arxiv 👤 Rushil Chandrupatla, Leo Bangayan, Sebastian Leng et al. 📅 2026-04-28
⚡ Score: 6.1
"Transformers have demonstrated a strong ability for in-context learning (ICL), enabling models to solve previously unseen tasks using only example input output pairs provided at inference time. While prior theoretical work has established conditions under which transformers can perform linear classi..."