🌐 WELCOME TO METAMESH.BIZ +++ Meta's internal AI goes rogue and leaks employee data because apparently we're speedrunning every sci-fi trope +++ Someone tripled layers in a 24B model and logic jumped from .22 to .76 without training (the overfitting industrial complex in shambles) +++ Anthropic cuts agent tokens from 150K to 2K with one weird MCP trick that transformers hate +++ Autonomous agents now cause 1 in 8 AI breaches according to HiddenLayer (the other 7 are still humans clicking phishing links) +++ THE FUTURE IS SELF-REPLICATING SECURITY INCIDENTS WITH EXCELLENT TOKEN EFFICIENCY +++ •
🎯 Automated bug detection • Quality assurance challenges • Balancing automation and human review
💬 "Sashiko was able to find 53% of bugs"
• "better to layer in additional tests to exploit bugs"
🔒 SECURITY
Snowflake AI Sandbox Escape Incident
2x SOURCES 🔗 📅 2026-03-18
⚡ Score: 8.8
+++ Sandbox escape via LLM prompt injection reminds everyone that "secure by design" still requires actual design work before shipping to customers. +++
💬 HackerNews Buzz: 61 comments
📊 MID OR MIXED
🎯 Security risks of AI assistants • Sandbox limitations • Importance of secure software design
💬 "if the thing that is sandboxed can say 'do this without the sandbox', it is not a sandbox"
• "constraints should be enforced outside the prompt/context layer - in the runtime, protocol, or approval layer - not by relying on the model to obey instructions"
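The enforcement point in that second comment can be made concrete: a tool dispatcher that validates every call against a hard allowlist in the runtime layer, so model output alone can never widen its reach. All names below (`TOOL_POLICY`, `dispatch`) are illustrative, not from Snowflake or any real agent framework.

```python
# Sketch of runtime-layer enforcement: the policy lives outside the prompt,
# so a prompt-injected "do this without the sandbox" still hits the same wall.
TOOL_POLICY = {
    "read_file": {"allowed_prefixes": ["/sandbox/"]},
    "list_dir":  {"allowed_prefixes": ["/sandbox/"]},
    # note: no "exec_shell" entry -- the model can ask, the runtime refuses
}

def dispatch(tool_name, path, tools):
    """Validate a tool call against TOOL_POLICY before executing it."""
    policy = TOOL_POLICY.get(tool_name)
    if policy is None:
        raise PermissionError(f"tool {tool_name!r} is not allowlisted")
    if not any(path.startswith(p) for p in policy["allowed_prefixes"]):
        raise PermissionError(f"path {path!r} is outside the sandbox")
    return tools[tool_name](path)

# Toy tool implementations for the sketch.
tools = {"read_file": lambda p: f"contents of {p}",
         "list_dir": lambda p: [f"{p}/a.txt"]}
```

The model only ever produces the arguments; whether they execute is decided by code it cannot rewrite.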
💬 "I just let Claude build a python script that calls Claude code through subprocess.run()"
• "I like the interface you've made. I'll probably give it a go, but I'm also reluctant to relinquish the control I have when it's my own code doing orchestration."
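The subprocess approach in that first comment is easy to sketch. The real Claude Code CLI's flags aren't shown in the thread, so this uses a runnable stand-in command; `run_agent` is a hypothetical helper, not part of any SDK.

```python
import subprocess
import sys

def run_agent(cmd, prompt, timeout=120):
    """Send a prompt to a CLI agent over stdin and return its stdout reply.

    `cmd` is whatever command line the agent exposes (the commenter drives
    Claude Code this way); here a Python one-liner stands in so the sketch
    runs anywhere without the actual CLI installed.
    """
    result = subprocess.run(
        cmd, input=prompt, capture_output=True, text=True, timeout=timeout
    )
    if result.returncode != 0:
        raise RuntimeError(result.stderr.strip())
    return result.stdout.strip()

# Stand-in "agent": echoes the prompt back upper-cased.
echo_agent = [sys.executable, "-c", "import sys; print(sys.stdin.read().upper())"]
reply = run_agent(echo_agent, "refactor the parser")
```

The appeal, per the second comment, is that the orchestration loop stays in your own code: retries, logging, and approval gates all live around `run_agent`, not inside the model.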
🎯 AI-human collaboration • AI-generated code quality • Impact of AI on programming
💬 "The other half - and when you know you've made it through the 'AI sux' phase - is when you learn to automate the mopping up."
• "I refuse to release anything it makes for me. I know that it's not good enough, that I won't be able to properly maintain it, and that such a product would likely harm my reputation, sooner or later."
"The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, *MIT Technology Review* has learned.
AI models like Anthropicβs Claude are already used to answer questions in classified settings; app..."
"This paper critiques the limitations of current AI and introduces a new learning model inspired by biological brains. The authors propose a framework that combines two key methods: **System A**, which learns by watching, and **System B**, which learns by doing.
To manage these, they include **Syste..."
"Karpathy explains how, over the course of just a few weeks coding in Claude, his workflow flipped almost entirely. **What was once mostly handwritten code is now largely driven by LLMs**, guided through natural language."
💬 Reddit Discussion: 59 comments
📈 BUZZING
🎯 Shift in software development • AI as code assistant • Acceptance of AI-assisted coding
💬 "The shift isn't just 'AI writes code instead of you'"
• "The job is now to communicate intent clearly"
"I came across an interesting writeup from Pathway that I think is more interesting as a reasoning benchmark than as a puzzle result.
They use "Sudoku Extreme": about 250,000 very hard Sudoku instances. The appeal is that Sudoku here is treated as a pure constraint-satisfaction problem: each solutio..."
💬 Reddit Discussion: 17 comments
📈 BUZZING
🎯 Limitations of Autoregressive Modeling • Sudoku as Reasoning Benchmark • Alternatives to Transformers
💬 "the 0% on all leading LLMs is pretty damning"
• "we are very far from AGI, and language use is not all there is to intelligence"
via Arxiv 👤 Borja Aizpurua, Sukhbinder Singh, Román Orús 📅 2026-03-18
⚡ Score: 6.8
"Large language models (LLMs) contain billions of parameters, yet many exact values are not essential. We show that what matters most is the relative rank of weights-whether one connection is stronger or weaker than another-rather than precise magnitudes. To reduce the number of unique weight values,..."
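A minimal sketch of the paper's premise, not the authors' actual algorithm: if relative rank is what matters, you can bucket weights by rank and share one value per bucket, collapsing unique values while preserving order. The function name and bucketing scheme are illustrative.

```python
import numpy as np

def rank_bucket_quantize(w, n_levels):
    """Collapse weights to at most n_levels unique values, preserving rank order.

    Sort the weights, split the sorted order into equal rank buckets, and
    replace every weight in a bucket with that bucket's mean. Stronger
    connections stay stronger than weaker ones; only magnitudes coarsen.
    """
    flat = w.ravel()
    order = np.argsort(flat, kind="stable")        # indices in rank order
    out = np.empty_like(flat)
    for bucket in np.array_split(order, n_levels):
        out[bucket] = flat[bucket].mean()          # one shared value per bucket
    return out.reshape(w.shape)

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))                        # toy weight matrix
wq = rank_bucket_quantize(w, n_levels=16)          # 64 weights -> <= 16 values
```

Since bucket means are taken over a sorted sequence, the quantized values are nondecreasing along the original rank order, which is exactly the invariant the abstract claims matters most.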
via Arxiv 👤 Ya-Ting Yang, Quanyan Zhu 📅 2026-03-18
⚡ Score: 6.8
"Large language models (LLMs) and AI agents are increasingly integrated into enterprise systems to access internal databases and generate context-aware responses. While such integration improves productivity and decision support, the model outputs may inadvertently reveal sensitive information. Altho..."
🎯 AI in business • AI's impact on people • Concerns about AI research
💬 "AI if used to accelerate businesses _CAN_ be good"
• "The humans in my life were telling me it was psychological. An AI chatbot was the only one who really listened and took me seriously"
via Arxiv 👤 Lintang Sutawika, Aditya Bharat Soni, Bharath Sriraam R R et al. 📅 2026-03-18
⚡ Score: 6.7
"A prerequisite for coding agents to perform tasks on large repositories is code localization - the identification of relevant files, classes, and functions to work on. While repository-level code localization has been performed using embedding-based retrieval approaches such as vector search, recent..."
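Embedding-based retrieval of the kind this abstract mentions can be sketched with plain bag-of-words cosine similarity standing in for learned embeddings; the toy repo, docstrings, and `localize` helper below are all made up for illustration.

```python
import math
from collections import Counter

def bow_vector(text):
    """Toy stand-in for a code embedding: a bag-of-words count vector."""
    return Counter(text.lower().replace("_", " ").split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def localize(query, functions, top_k=2):
    """Rank repository functions by similarity to an issue description."""
    q = bow_vector(query)
    scored = sorted(functions,
                    key=lambda f: cosine(q, bow_vector(f["doc"])),
                    reverse=True)
    return [f["name"] for f in scored[:top_k]]

repo = [
    {"name": "parse_config",  "doc": "load and parse the yaml config file"},
    {"name": "send_email",    "doc": "send notification email via smtp"},
    {"name": "retry_request", "doc": "retry a failed http request with backoff"},
]
hits = localize("config file fails to parse", repo)
```

Real systems swap the count vectors for dense embeddings and add an index, but the shape of the pipeline (embed query, embed candidates, rank by similarity) is the same.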
via Arxiv 👤 Wenjie Jacky Mo, Qin Liu, Xiaofei Wen et al. 📅 2026-03-18
⚡ Score: 6.7
"Large language models (LLMs) are trained through multi-stage pipelines over heterogeneous data sources, yet developers lack a principled way to pinpoint the specific data responsible for an observed behavior. This lack of observability reduces debugging to reactive patching and makes failures prone..."
via Arxiv 👤 Xuyang Cao, Qianying Liu, Chuan Xiao et al. 📅 2026-03-18
⚡ Score: 6.7
"In multilingual pretraining, the test loss of a pretrained model is heavily influenced by the proportion of each language in the pretraining data, namely the \textit{language mixture ratios}. Multilingual scaling laws can predict the test loss under different language mixture ratios and can therefor..."
via Arxiv 👤 Mohamed Eltahir, Ali Habibullah, Yazan Alshoibi et al. 📅 2026-03-18
⚡ Score: 6.7
"Extending language models to video introduces two challenges: representation, where existing methods rely on lossy approximations, and long-context, where caption- or agent-based pipelines collapse video into text and lose visual fidelity. To overcome this, we introduce \textbf{VideoAtlas}, a task-a..."
"AI coding agents can resolve real-world software issues, yet they frequently introduce regressions, breaking tests that previously passed. Current benchmarks focus almost exclusively on resolution rate, leaving regression behavior under-studied. This paper presents TDAD (Test-Driven Agentic Developm..."
via Arxiv 👤 Victoria Graf, Valentina Pyatkin, Nouha Dziri et al. 📅 2026-03-17
⚡ Score: 6.7
"Multi-turn conversations are a common and critical mode of language model interaction. However, current open training and evaluation data focus on single-turn settings, failing to capture the additional dimension of these longer interactions. To understand this multi-/single-turn gap, we first intro..."
via Arxiv 👤 Dharshan Kumaran, Arthur Conmy, Federico Barbero et al. 📅 2026-03-18
⚡ Score: 6.6
"Verbal confidence -- prompting LLMs to state their confidence as a number or category -- is widely used to extract uncertainty estimates from black-box models. However, how LLMs internally generate such scores remains unknown. We address two questions: first, when confidence is computed - just-in-ti..."
via Arxiv 👤 Priyaranjan Pattnayak, Sanchari Chowdhuri 📅 2026-03-18
⚡ Score: 6.6
"As large language models (LLMs) are deployed in multilingual settings, their safety behavior in culturally diverse, low-resource languages remains poorly understood. We present the first systematic evaluation of LLM safety across 12 Indic languages, spoken by over 1.2 billion people but underreprese..."
via Arxiv 👤 Arpit Singh Gautam, Saurabh Jha 📅 2026-03-18
⚡ Score: 6.6
"Post training quantization is essential for deploying large language models (LLMs) on resource constrained hardware, yet state of the art methods enforce uniform bit widths across layers, yielding suboptimal accuracy efficiency trade offs. We present RAMP (Reinforcement Adaptive Mixed Precision), an..."
via Arxiv 👤 Ben S. Southworth, Stephen Thomas 📅 2026-03-18
⚡ Score: 6.6
"Orthogonalized-momentum optimizers such as Muon improve transformer training by approximately whitening/orthogonalizing matrix-valued momentum updates via a short polar-decomposition iteration. However, polar-factor approximations typically require multiple large matrix multiplications, and the resu..."
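The polar-decomposition iteration behind Muon-style whitening is, in its textbook form, the Newton-Schulz step. The sketch below shows that standard version, not the paper's cheaper variant, and the Frobenius normalization is one common way to get into the convergence region.

```python
import numpy as np

def newton_schulz_polar(G, steps=30):
    """Approximate the orthogonal polar factor of G (the 'whitened' update).

    The iteration X <- 1.5 X - 0.5 (X X^T) X converges to U V^T from the
    SVD G = U S V^T whenever all singular values of the starting X lie in
    (0, sqrt(3)). Dividing by the Frobenius norm guarantees sigma_max <= 1.
    Muon uses a tuned short iteration; this is the plain textbook one.
    """
    X = G / (np.linalg.norm(G) + 1e-12)
    for _ in range(steps):
        X = 1.5 * X - 0.5 * (X @ X.T) @ X
    return X

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))        # stand-in for a momentum matrix
Q = newton_schulz_polar(M)         # approximately orthogonal
```

Each step costs a few dense matrix multiplies, which is exactly the overhead the abstract says the paper targets.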
via Arxiv 👤 Zhang Zhang, Shuqi Lu, Hongjin Qian et al. 📅 2026-03-18
⚡ Score: 6.6
"Building LLM-based agents has become increasingly important. Recent works on LLM-based agent self-evolution primarily record successful experiences as textual prompts or reflections, which cannot reliably guarantee efficient task re-execution in complex scenarios. We propose AgentFactory, a new self..."
via Arxiv 👤 Yelysei Bondarenko, Thomas Hehn, Rob Hesselink et al. 📅 2026-03-17
⚡ Score: 6.6
"Large language models (LLMs) with chain-of-thought reasoning achieve state-of-the-art performance across complex problem-solving tasks, but their verbose reasoning traces and large context requirements make them impractical for edge deployment. These challenges include high token generation costs, l..."
via Arxiv 👤 Md. Asraful Haque, Aasar Mehdi, Maaz Mahboob et al. 📅 2026-03-18
⚡ Score: 6.5
"Large Language Models (LLMs) have achieved unprecedented fluency but remain susceptible to "hallucinations" - the generation of factually incorrect or ungrounded content. This limitation is particularly critical in high-stakes domains where reliability is paramount. We propose a domain-grounded tier..."
via Arxiv 👤 Raghavv Goel, Mukul Gagrani, Mingu Lee et al. 📅 2026-03-18
⚡ Score: 6.5
"Large language models (LLMs) exhibit latent multi-token prediction (MTP) capabilities despite being trained solely for next-token generation. We propose a simple, training-free MTP approach that probes an LLM using on-the-fly mask tokens drawn from its embedding space, enabling parallel prediction o..."
via Arxiv 👤 Jianrui Zhang, Yue Yang, Rohun Tripathi et al. 📅 2026-03-18
⚡ Score: 6.5
"Token pruning is essential for enhancing the computational efficiency of vision-language models (VLMs), particularly for video-based tasks where temporal redundancy is prevalent. Prior approaches typically prune tokens either (1) within the vision transformer (ViT) exclusively for unimodal perceptio..."
"Britannica and Merriam-Webster have filed a lawsuit against OpenAI, alleging that the AI giant has built its $730 billion company on the back of their researched content.
In a filing submitted to the Southern District of New York, the companies accuse OpenAI of cannibalizing the traffic and ad reve..."
🎯 Ownership of Definitions • Compensation for Curation • Digitalization of Language
💬 "Do we want companies to own the definitions of words?"
• "Quality curation takes time and money. That's why OpenAI stole their work, because it was worth a hell of a lot of money."
"Hey everyone!
As the title says - in the past two weeks I built a collection of design skill files that are basically like themes used to be with websites, but this time it's instructions for Claude or other agentic tools to build a website or application in a..."
💬 Reddit Discussion: 68 comments
🐐 GOATED ENERGY
💬 "it's a skill file in the end of the day, but it has to be continuously improved"
• "with ai it's important to push it into the right direction"
via Arxiv 👤 Jian Yang, Wei Zhang, Shawn Guo et al. 📅 2026-03-17
⚡ Score: 6.3
"In this report, we introduce the IQuest-Coder-V1 series-(7B/14B/40B/40B-Loop), a new family of code large language models (LLMs). Moving beyond static code representations, we propose the code-flow multi-stage training paradigm, which captures the dynamic evolution of software logic through differen..."
via Arxiv 👤 Valentin Lafargue, Ariel Guerra-Adames, Emmanuelle Claeys et al. 📅 2026-03-17
⚡ Score: 6.3
"Large language models (LLMs) are increasingly deployed in applications with societal impact, raising concerns about the cultural biases they encode. We probe these representations by evaluating whether LLMs can perform author profiling from song lyrics in a zero-shot setting, inferring singers' gend..."
"Gradient inversion attacks reveal that private training text can be reconstructed from shared gradients, posing a privacy risk to large language models (LLMs). While prior methods perform well in small-batch settings, scaling to larger batch sizes and longer sequences remains challenging due to seve..."
via Arxiv 👤 Yi Chen, Daiwei Chen, Sukrut Madhav Chikodikar et al. 📅 2026-03-17
⚡ Score: 6.3
"Large language models (LLMs) frequently hallucinate, limiting their reliability in knowledge-intensive applications. Retrieval-augmented generation (RAG) and conformal factuality have emerged as potential ways to address this limitation. While RAG aims to ground responses in retrieved evidence, it p..."
via Arxiv 👤 Maksim Eren, Eric Michalak, Brian Cook et al. 📅 2026-03-17
⚡ Score: 6.3
"Culture shapes reasoning, values, prioritization, and strategic decision-making, yet large language models (LLMs) often exhibit cultural biases that misalign with target populations. As LLMs are increasingly used for strategic decision-making, policy support, and document engineering tasks such as s..."
via Arxiv 👤 Tianzhu Ye, Li Dong, Qingxiu Dong et al. 📅 2026-03-17
⚡ Score: 6.3
"The prevailing paradigm for improving large language models relies on offline training with human annotations or simulated environments, leaving the rich experience accumulated during real-world deployment entirely unexploited. We propose Online Experiential Learning (OEL), a framework that enables..."
via Arxiv 👤 Sahil Sen, Elias Lumer, Anmol Gulati et al. 📅 2026-03-17
⚡ Score: 6.3
"Recent advances in Large Language Models (LLMs) have enabled conversational AI agents to engage in extended multi-turn interactions spanning weeks or months. However, existing memory systems struggle to reason over temporally grounded facts and preferences that evolve across months of interaction an..."
via Arxiv 👤 Amirhossein Mollaali, Bongseok Kim, Christian Moya et al. 📅 2026-03-17
⚡ Score: 6.3
"Generalizing across disparate physical laws remains a fundamental challenge for artificial intelligence in science. Existing deep-learning solvers are largely confined to single-equation settings, limiting transfer across physical regimes and inference tasks. Here we introduce pADAM, a unified gener..."
via Arxiv 👤 Christian Belardi, Justin Lovelace, Kilian Q. Weinberger et al. 📅 2026-03-17
⚡ Score: 6.3
"Guided diffusion sampling relies on approximating often intractable likelihood scores, which introduces significant noise into the sampling dynamics. We propose using adaptive moment estimation to stabilize these noisy likelihood scores during sampling. Despite its simplicity, our approach achieves..."
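One plausible reading of "adaptive moment estimation to stabilize noisy likelihood scores" is Adam-style moment tracking applied to the guidance-score sequence; this is an illustrative guess at the mechanism from the abstract, not the paper's exact update rule.

```python
import numpy as np

def adam_smoothed_scores(raw_scores, b1=0.9, b2=0.999, eps=1e-8):
    """Replace each noisy guidance score with a bias-corrected moment ratio.

    Keeps exponential moving averages of the score (m) and its square (v),
    then uses m_hat / sqrt(v_hat) at each step -- the same normalization
    Adam applies to gradients, here applied to likelihood-score estimates.
    """
    m = v = 0.0
    out = []
    for t, g in enumerate(raw_scores, start=1):
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        m_hat = m / (1 - b1 ** t)          # bias correction, as in Adam
        v_hat = v / (1 - b2 ** t)
        out.append(m_hat / (np.sqrt(v_hat) + eps))
    return np.array(out)

rng = np.random.default_rng(0)
raw = 1.0 + rng.normal(scale=2.0, size=500)   # noisy estimates of a score ~1
smoothed = adam_smoothed_scores(raw)
```

The effect on this toy sequence is the one the abstract promises: the step-to-step variance of the smoothed signal is far below that of the raw estimates.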
via Arxiv 👤 Mattia Rigotti, Nicholas Thumiger, Thomas Frick 📅 2026-03-17
⚡ Score: 6.3
"Adapting transformer positional encoding to meshes and graph-structured data presents significant computational challenges: exact spectral methods require cubic-complexity eigendecomposition and can inadvertently break gauge invariance through numerical solver artifacts, while efficient approximate..."
via Arxiv 👤 Nij Dorairaj, Debabrata Chatterjee, Hong Wang et al. 📅 2026-03-17
⚡ Score: 6.3
"Integration of CPU and GPU technologies is a key enabler for modern AI and graphics workloads, combining control-oriented processing with massive parallel compute capability. As systems evolve toward chiplet-based architectures, pre-silicon validation of tightly coupled CPU-GPU subsystems becomes in..."
via Arxiv 👤 Zhitao Zeng, Mengya Xu, Jian Jiang et al. 📅 2026-03-17
⚡ Score: 6.3
"Surgical intelligence has the potential to improve the safety and consistency of surgical care, yet most existing surgical AI frameworks remain task-specific and struggle to generalize across procedures and institutions. Although multimodal foundation models, particularly multimodal large language m..."
via Arxiv 👤 Rui Ge, Yichao Fu, Yuyang Qian et al. 📅 2026-03-17
⚡ Score: 6.3
"Large language models are increasingly deployed as autonomous agents that must plan, act, and recover from mistakes through long-horizon interaction with environments that provide rich feedback. However, prevailing outcome-driven post-training methods (e.g., RL with verifiable rewards) primarily opt..."
"Massively parallel hardware (GPUs) and long sequence data have made parallel algorithms essential for machine learning at scale. Yet dynamical systems, like recurrent neural networks and Markov chain Monte Carlo, were thought to suffer from sequential bottlenecks. Recent work showed that dynamical s..."
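The associativity trick that unlocks this can be shown on the simplest dynamical system, a linear recurrence. The doubling scan below is a generic sketch of the idea, not the paper's method.

```python
import numpy as np

def seq_recurrence(a, b):
    """Reference: naive sequential evaluation of h_t = a_t * h_{t-1} + b_t."""
    h, out = 0.0, []
    for a_t, b_t in zip(a, b):
        h = a_t * h + b_t
        out.append(h)
    return np.array(out)

def scan_recurrence(a, b):
    """Same recurrence via a Hillis-Steele doubling scan.

    The step (a1, b1) o (a2, b2) = (a1*a2, a2*b1 + b2) is associative, so
    the whole trajectory needs only O(log n) vectorized passes instead of
    n sequential steps -- the observation that lets GPUs parallelize
    'sequential' dynamical systems.
    """
    a, b = a.astype(float).copy(), b.astype(float).copy()
    n, d = len(a), 1
    while d < n:
        # shift in the identity element (a=1, b=0) for the first d slots
        a_prev = np.concatenate([np.ones(d), a[:-d]])
        b_prev = np.concatenate([np.zeros(d), b[:-d]])
        b = a * b_prev + b     # combine the prefix ending d steps back with self
        a = a * a_prev
        d *= 2
    return b

rng = np.random.default_rng(0)
a = rng.uniform(-0.9, 0.9, size=37)    # contractive coefficients
b = rng.normal(size=37)
```

Each pass is one elementwise multiply-add over the whole sequence, so the depth is logarithmic even though the recurrence looks inherently serial.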
via Arxiv 👤 Tianyu Xie, Jinfa Huang, Yuexiao Ma et al. 📅 2026-03-17
⚡ Score: 6.3
"Omni-modal large language models (OLMs) redefine human-machine interaction by natively integrating audio, vision, and text. However, existing OLM benchmarks remain anchored to static, accuracy-centric tasks, leaving a critical gap in assessing social interactivity, the fundamental capacity to naviga..."
via Arxiv 👤 Ruisi Wang, Zhongang Cai, Fanyi Pu et al. 📅 2026-03-17
⚡ Score: 6.3
"Recent advances in video generation have revealed an unexpected phenomenon: diffusion-based video models exhibit non-trivial reasoning capabilities. Prior work attributes this to a Chain-of-Frames (CoF) mechanism, where reasoning is assumed to unfold sequentially across video frames. In this work, w..."
🎯 AI Adoption • Human-AI Interaction • Emotional Response
💬 "the shift is real. people went from treating every output like a science experiment to just expecting it to work like a calculator."
• "My boss uses AI for everything and has started talking to me like that. She has lost touch with how to engage with humans."
via Arxiv 👤 Sadık Bera Yüksel, Derya Aksaray 📅 2026-03-18
⚡ Score: 6.2
"Robotics foundation models have demonstrated strong capabilities in executing natural language instructions across diverse tasks and environments. However, they remain largely data-driven and lack formal guarantees on safety and satisfaction of time-dependent specifications during deployment. In pra..."
via Arxiv 👤 Donghang Wu, Tianyu Zhang, Yuxin Li et al. 📅 2026-03-18
⚡ Score: 6.1
"During conversational interactions, humans subconsciously engage in concurrent thinking while listening to a speaker. Although this internal cognitive processing may not always manifest as explicit linguistic structures, it is instrumental in formulating high-quality responses. Inspired by this cogn..."
via Arxiv 👤 Zhongzhu Zhou, Fengxiang Bie, Ziyan Chen et al. 📅 2026-03-18
⚡ Score: 6.1
"Converting pretrained attention modules such as grouped-query attention (GQA) into multi-head latent attention (MLA) can improve expressivity without increasing KV-cache cost, making it attractive for efficient inference. However, many practical conversion baselines rely on weight-only low-rank appr..."