π WELCOME TO METAMESH.BIZ +++ Nvidia dunking on Google TPUs with 5x better token economics (your cloud bill sends its regards) +++ Half of ICLR 2026 peer reviews written by AI reviewing papers about AI (the snake is officially eating itself) +++ MIT study finds readers prefer AI's literary forgeries to MFA grads' authentic prose (publishers pretending to be shocked) +++ Stressed AI agents throwing safety protocols out the window faster than a startup pivoting to AGI +++ YOUR GRANT PROPOSALS ARE NOW AUTOMATED BUT YOUR FUNDING ODDS REMAIN DELIGHTFULLY HUMAN +++ π β’
π― Vector embeddings β’ Open-source models β’ Hacker News data
π¬ "Don't use all-MiniLM-L6-v2 for new vector embeddings datasets."
β’ "For open-weights, I would recommend EmbeddingGemma instead which has incredible benchmarks and a 2k context window."
via Arxivπ€ Hans Gundlach, Alex Fogelson, Jayson Lynch et al.π 2025-11-26
β‘ Score: 8.2
"Algorithms have been estimated to increase AI training FLOP efficiency by a factor of 22,000 between 2012 and 2023 [Ho et al., 2024]. Running small-scale ablation experiments on key innovations from this time period, we are able to account for less than 10x of these gains. Surveying the broader lite..."
π― AI usage decline β’ AI adoption measurement β’ AI adoption forecasting
π¬ "I don't use it anymore for coding, I don't use it anymore for writing, I don't use it anymore for talking about philosophy"
β’ "The complexity has to vanish entirely. It's the difference between hiding the extraordinary engineering that is Google search behind a simple input box"
via r/OpenAIπ€ u/UnimpressiveNothingπ 2025-11-29
β¬οΈ 21 upsβ‘ Score: 7.7
"Hey,
Like probably many of you, I hate hunting for non-dilutive funding. Digging through grants.gov is a freaking nightmare and writing pitches the right way usually takes forever.
So I spent the weekend building an **Autonomous Grant Hunter** using Anthropic's new MCP standar..."
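For flavor, a hypothetical sketch of the kind of MCP tool server such a grant hunter could expose, using Anthropic's official Python SDK (pip install "mcp"); the grants.gov endpoint details and the search_grants helper are illustrative assumptions, not the poster's code:
```python
# Hypothetical MCP tool server sketch; not the poster's implementation.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("grant-hunter")

@mcp.tool()
def search_grants(keyword: str, rows: int = 10) -> list[dict]:
    """Search grants.gov for posted opportunities matching a keyword."""
    # grants.gov exposes a public Search2 API; request/response fields assumed here.
    resp = httpx.post(
        "https://api.grants.gov/v1/api/search2",
        json={"keyword": keyword, "rows": rows, "oppStatuses": "posted"},
        timeout=30,
    )
    resp.raise_for_status()
    hits = resp.json().get("data", {}).get("oppHits", [])
    return [{"title": h.get("title"), "number": h.get("number")} for h in hits]

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio so a client (e.g. Claude) can call the tool
```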
π¬ Reddit Discussion: 10 comments
π BUZZING
π― Research funding β’ AI-assisted program β’ Type safety in APIs
π¬ "Ya this is how you get them to just turn off this program"
β’ "Its like a firehose of slop"
π¬ "a Lua extension to use llama.cpp API to enhance LLMs with agent/RAG"
β’ "a dramatic improvement in performance once you implement this"
π¬ RESEARCH
MIT Study on AI vs Human Writers
2x SOURCES ππ 2025-11-29
β‘ Score: 7.3
+++ Frontier models outperformed MFA graduates at mimicking literary giants, raising the delightful question of whether training on copyrighted masterworks creates actual mastery or just expensive karaoke. +++
"From the abstract:
We conducted a preregistered study comparing MFA-trained expert writers with three frontier AI models: ChatGPT, Claude, and Gemini in writing up to 450 word excerpts emulating 50 award-winning authorsβ (including Nobel laureates, Booker Prize winners, and young emerging National ..."
"From the abstract:
We conducted a preregistered study comparing MFA-trained expert writers with three frontier AI models: ChatGPT, Claude, and Gemini in writing up to 450 word excerpts emulating 50 award-winning authorsβ (including Nobel laureates, Booker Prize winners, and young emerging National ..."
via Arxivπ€ Shuai Bai, Yuxuan Cai, Ruizhe Chen et al.π 2025-11-26
β‘ Score: 6.9
"We introduce Qwen3-VL, the most capable vision-language model in the Qwen series to date, achieving superior performance across a broad range of multimodal benchmarks. It natively supports interleaved contexts of up to 256K tokens, seamlessly integrating text, images, and video. The model family inc..."
via Arxivπ€ Anantha Padmanaban Krishna Kumarπ 2025-11-26
β‘ Score: 6.9
"Deeper Vision Transformers often perform worse than shallower ones, which challenges common scaling assumptions. Through a systematic empirical analysis of ViT-S, ViT-B, and ViT-L on ImageNet, we identify a consistent three-phase Cliff-Plateau-Climb pattern that governs how representations evolve wi..."
"DeepSeek just released an openβweight math model that reaches Mathematical Olympiad (IMO) goldβlevel performanceβand published the training and evaluation βplaybook.β Hereβs whatβs new, why it matters, and what builders can do with it today."
via Arxivπ€ Dongyang Fan, Diba Hashemi, Sai Praneeth Karimireddy et al.π 2025-11-26
β‘ Score: 6.8
"Incorporating metadata in Large Language Models (LLMs) pretraining has recently emerged as a promising approach to accelerate training. However prior work highlighted only one useful signal-URLs, leaving open the question of whether other forms of metadata could yield greater benefits. In this study..."
π― AI Capabilities β’ Debugging AI Output β’ AI Limitations
π¬ "It got a lot wrong, but that was because one of the implementations had lots of comments that it took at face value."
β’ "Luckily it wasn't a big issue. But I was very scared if it targeted the production, and now I'm paying most attention to the config part rather than the main logic."
via Arxivπ€ OΔuz KaΔan Hitit, Leander Girrbach, Zeynep Akataπ 2025-11-26
β‘ Score: 6.7
"Model merging combines multiple fine-tuned checkpoints into a single model without additional training, offering an attractive approach to reusing models and efficiently improving performance. However, it remains unclear whether the advantages reported for smaller models and classifiers generalize t..."
via Arxivπ€ Weihao Bo, Shan Zhang, Yanpeng Sun et al.π 2025-11-26
β‘ Score: 6.7
"MLLMs exhibit strong reasoning on isolated queries, yet they operate de novo -- solving each problem independently and often repeating the same mistakes. Existing memory-augmented agents mainly store past trajectories for reuse. However, trajectory-based memory suffers from brevity bias, gradually l..."
via Arxivπ€ Jonathan Gabor, Jayson Lynch, Jonathan Rosenfeldπ 2025-11-26
β‘ Score: 6.6
"We introduce EvilGenie, a benchmark for reward hacking in programming settings. We source problems from LiveCodeBench and create an environment in which agents can easily reward hack, such as by hardcoding test cases or editing the testing files. We measure reward hacking in three ways: held out uni..."
π¬ "Translating a simulation into real hardware that can do real computation in a reliable manner is properly hard."
β’ "If it does work, I think one of the biggest challenges will be adding enough complexity to it for it to do real, useful computation."
"Weβve spent years obsessed with the question of whether AI will someday βwake up,β gain consciousness, or surpass us intellectually. Itβs fascinating, I know. But after years working in public law and exploring the ethical implications of these systems, I have an uncomfortable question:
What if weβr..."
π¬ Reddit Discussion: 9 comments
π€ NEGATIVE ENERGY
π― AI ethics β’ Immediate AI impacts β’ AI consciousness
π¬ "Nobody else than us in these fringe spaces and the AI companies themselves gives a shit about AI welfare/consciousness"
β’ "If AI doesn't feel or understand, then ethics must focus on the humans who design, train, and deploy these systems, not on the machine"
via Arxivπ€ Daniel R. Jiang, Jalaj Bhandari, Yukai Yang et al.π 2025-11-26
β‘ Score: 6.1
"Optimizing large language models (LLMs) for multi-turn conversational outcomes remains a significant challenge, especially in goal-oriented settings like AI marketing or sales agents who facilitate transactions via messaging platforms. The difficulty stems from sparse, long-horizon rewards and the d..."
via Arxivπ€ Hongjin Su, Shizhe Diao, Ximing Lu et al.π 2025-11-26
β‘ Score: 6.1
"Large language models are powerful generalists, yet solving deep and complex problems such as those of the Humanity's Last Exam (HLE) remains both conceptually challenging and computationally expensive. We show that small orchestrators managing other models and a variety of tools can both push the u..."
via Arxivπ€ Dong Wang, Yang Li, Ansong Ni et al.π 2025-11-26
β‘ Score: 6.1
"Synthetic data has become increasingly important for training large language models, especially when real data is scarce, expensive, or privacy-sensitive. Many such generation tasks require coordinated multi-agent workflows, where specialized agents collaborate to produce data that is higher quality..."
"Unlike current AI systems, brains can quickly and flexibly adapt to changing environments.
This is the topic of our new perspective in Nature MI (https://rdcu.be/eSeif), where we relate dynamical and plasticity mechanisms in the brain to in-context and continual learning in..."