Last updated: 2026-02-20 | Server uptime: 99.9%
AI MODELS
99 pts
Score: 8.8
Practical diffusion models • Diffusion model performance • Diffusion model applications
"Is anyone doing any form of diffusion language models that are actually practical to run today on the actual machine under my desk?"
• "I wonder how far down they can scale a diffusion LM?"
TOOLS
1 pts
Score: 8.3
RESEARCH
via Arxiv
Chia-chi Hsieh, Zan Zong, Xinyang Chen et al.
2026-02-18
Score: 7.8
"The growing demand for large language models (LLMs) requires serving systems to handle many concurrent requests with diverse service level objectives (SLOs). This exacerbates head-of-line (HoL) blocking during the compute-intensive prefill phase, where long-running requests monopolize resources and..."
SECURITY
2475 ups
Score: 7.8
"The strangest thing just happened.
I asked Claude Cowork to summarize a document and it began describing a legal document that was totally unrelated to what I had provided. After asking Claude to generate a PDF of the legal document it referenced and I got a complete lease agreement contract in wh..."
AI Hallucination • Legal Document Provenance • Company Verification
"it probably regurgitated a half-hallucinated legal doc"
• "I don't believe it searched internet during this session"
RESEARCH
via Arxiv
Nils Palumbo, Sarthak Choudhary, Jihye Choi et al.
2026-02-18
Score: 7.3
"LLM-based agents are increasingly being deployed in contexts requiring complex authorization policies: customer service protocols, approval workflows, data access restrictions, and regulatory compliance. Embedding these policies in prompts provides no enforcement guarantees. We present PCAS, a Polic..."
RESEARCH
10 ups
Score: 7.1
"TL;DR: Two structural properties of virtual weight matrices, spectral concentration and downstream path weight, predict which edges in GPT-2 small's induction circuit are causally important, without any forward passes, ablations, or training data. Spearman ρ=0.623 with path patching ground truth (p ..."
Research process • Community feedback • Time management
"The process will give you some feedback and structure your work"
• "Don't just try to write it up, try to follow the process"
SECURITY
1 pts
Score: 7.1
RESEARCH
5 ups
Score: 7.0
"Wanted to understand how the core transformer papers actually connect at the concept level - not just "Paper B cites Paper A" but what specific methods, systems, and ideas flow between them.
I ran 12 foundational papers (Attention Is All You Need, BERT, GPT-2/3, Scaling Laws, ViT, LoRA, Chain-of-Th..."
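One lightweight way to capture the concept-level links the post describes is a directed graph whose edges are labeled with the method or idea that flows from one paper to another. A sketch using networkx; the edge labels are illustrative assumptions, not the post's actual output.

```python
# Sketch: a concept-flow graph between foundational papers,
# with edges labeled by the idea that carries over (labels are illustrative only).
import networkx as nx

G = nx.DiGraph()
G.add_edge("Attention Is All You Need", "BERT", concept="transformer encoder")
G.add_edge("Attention Is All You Need", "GPT-2", concept="decoder-only transformer")
G.add_edge("GPT-2", "GPT-3", concept="scaling model and data")
G.add_edge("Scaling Laws", "GPT-3", concept="compute-optimal sizing")
G.add_edge("GPT-3", "Chain-of-Thought", concept="in-context prompting")
G.add_edge("Attention Is All You Need", "ViT", concept="attention over image patches")
G.add_edge("GPT-3", "LoRA", concept="parameter-efficient adaptation")

for src, dst, data in G.edges(data=True):
    print(f"{src} -> {dst}: {data['concept']}")
```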
RESEARCH
1 pts
Score: 7.0
RESEARCH
"Some transformer attention heads appear to function as membership testers, dedicating themselves to answering the question "has this token appeared before in the context?" We identify these heads across four language models (GPT-2 small, medium, and large; Pythia-160M) and show that they form a spec..."
RESEARCH
via Arxiv
Dimitri Staufer, Kirsten Morehouse
2026-02-19
Score: 6.9
"Large language models (LLMs), and conversational agents based on them, are exposed to personal data (PD) during pre-training and during user interactions. Prior work shows that PD can resurface, yet users lack insight into how strongly models associate specific information to their identity. We audi..."
RESEARCH
via Arxiv
Jyotin Goel, Souvik Maji, Pratik Mazumder
2026-02-19
Score: 6.9
"Instruction-following language models are trained to be helpful and safe, yet their safety behavior can deteriorate under benign fine-tuning and worsen under adversarial updates. Existing defenses often offer limited protection or force a trade-off between safety and utility. We introduce a training..."
RESEARCH
via Arxiv
Lance Ying, Ryan Truong, Prafull Sharma et al.
2026-02-19
Score: 6.9
"Rigorously evaluating machine intelligence against the broad spectrum of human general intelligence has become increasingly important and challenging in this era of rapid technological advance. Conventional AI benchmarks typically assess only narrow capabilities in a limited range of human activity...."
RESEARCH
via Arxiv
Stephan Rabanser, Sayash Kapoor, Peter Kirgis et al.
2026-02-18
Score: 6.9
"AI agents are increasingly deployed to execute important tasks. While rising accuracy scores on standard benchmarks suggest rapid progress, many agents still continue to fail in practice. This discrepancy highlights a fundamental limitation of current evaluations: compressing agent behavior into a s..."
DATA
1 pts
Score: 6.8
RESEARCH
via Arxiv
Yue Liu, Zhiyuan Hu, Flood Sung et al.
2026-02-19
Score: 6.8
"This paper introduces KLong, an open-source LLM agent trained to solve extremely long-horizon tasks. The principle is to first cold-start the model via trajectory-splitting SFT, then scale it via progressive RL training. Specifically, we first activate basic agentic abilities of a base model with a..."
RESEARCH
via Arxiv
Jianda Du, Youran Sun, Haizhao Yang
2026-02-19
Score: 6.8
"PDEs are central to scientific and engineering modeling, yet designing accurate numerical solvers typically requires substantial mathematical expertise and manual tuning. Recent neural network-based approaches improve flexibility but often demand high computational cost and suffer from limited inter..."
RESEARCH
via Arxiv
Shayan Kiyani, Sima Noorani, George Pappas et al.
2026-02-19
Score: 6.8
"Reasoning with LLMs increasingly unfolds inside a broader verification loop. Internally, systems use cheap checks, such as self-consistency or proxy rewards, which we call weak verification. Externally, users inspect outputs and steer the model through feedback until results are trustworthy, which w..."
RESEARCH
via Arxiv
Payel Bhattacharjee, Osvaldo Simeone, Ravi Tandon
2026-02-19
Score: 6.8
"Reward modeling is a core component of modern alignment pipelines including RLHF and RLAIF, underpinning policy optimization methods including PPO and TRPO. However, training reliable reward models relies heavily on human-labeled preference data, which is costly and limited, motivating the use of da..."
RESEARCH
via Arxiv
Shruti Joshi, Aaron Mueller, David Klindt et al.
2026-02-18
Score: 6.8
"Interpretability research on large language models (LLMs) has yielded important insights into model behaviour, yet recurring pitfalls persist: findings that do not generalise, and causal interpretations that outrun the evidence. Our position is that causal inference specifies what constitutes a vali..."
OPEN SOURCE
143 ups
Score: 6.7
"Open source code repository or project related to AI/ML."
Quantization Improvements • Interpersonal Conflicts • Merging Efforts
"we desperately need better quants in mainline!"
• "The maintenance concern Georgi raised is legitimate"
AI MODELS
245 ups
Score: 6.7
"Hello everyone,
A fast inference hardware startup, Taalas, has released a free chatbot interface and API endpoint running on their chip. They chose a small model intentionally as proof of concept. Well, it worked out really well, it runs at 16k tps! I know this model is quite limited but there l..."
Hardware Capabilities • Model Scaling • Hardware Innovation
"Technically, this thing is way simpler than a graphics card."
• "Size. Size is the big issue."
PRODUCT
332 ups
Score: 6.7
"Community discussion on r/ClaudeAI."
AI Capabilities • Pricing Plans • Market Competition
"It's absolute bonkers that this is how Copilot should've been"
• "Can't keep track of plan names"
RESEARCH
via Arxiv
Shashank Aggarwal, Ram Vikas Mishra, Amit Awekar
2026-02-19
Score: 6.7
"In multi-agent IR pipelines for tasks such as search and ranking, LLM-based agents exchange intermediate reasoning in terms of Chain-of-Thought (CoT) with each other. Current CoT evaluation narrowly focuses on target task accuracy. However, this metric fails to assess the quality or utility of the r..."
RESEARCH
via Arxiv
Xiaohan Zhao, Zhaoyi Li, Yaxin Luo et al.
2026-02-19
Score: 6.7
"Black-box adversarial attacks on Large Vision-Language Models (LVLMs) are challenging due to missing gradients and complex multimodal boundaries. While prior state-of-the-art transfer-based approaches like M-Attack perform well using local crop-level matching between source and target images, we fin..."
RESEARCH
via Arxiv
Baihe Huang, Eric Xu, Kannan Ramchandran et al.
2026-02-19
Score: 6.7
"The proliferation of Large Language Models (LLMs) necessitates efficient mechanisms to distinguish machine-generated content from human text. While statistical watermarking has emerged as a promising solution, existing methods suffer from two critical limitations: the lack of a principled approach f..."
RESEARCH
via Arxiv
Sima Noorani, Shayan Kiyani, Hamed Hassani et al.
2026-02-19
Score: 6.7
"As humans increasingly rely on multiround conversational AI for high stakes decisions, principled frameworks are needed to ensure such interactions reliably improve decision quality. We adopt a human centric view governed by two principles: counterfactual harm, ensuring the AI does not undermine hum..."
BUSINESS
8 pts
Score: 6.7
RESEARCH
via Arxiv
Potsawee Manakul, Woody Haosheng Gan, Martijn Bartelds et al.
2026-02-18
Score: 6.7
"Current audio language models are predominantly text-first, either extending pre-trained text LLM backbones or relying on semantic-only audio tokens, limiting general audio modeling. This paper presents a systematic empirical study of native audio foundation models that apply next-token prediction t..."
RESEARCH
via Arxiv
Hee Seung Hwang, Xindi Wu, Sanghyuk Chun et al.
2026-02-18
Score: 6.7
"Fast weight architectures offer a promising alternative to attention-based transformers for long-context modeling by maintaining constant memory overhead regardless of context length. However, their potential is limited by the next-token prediction (NTP) training paradigm. NTP optimizes single-token..."
RESEARCH
via Arxiv
Faria Huq, Zora Zhiruo Wang, Zhanqiu Guo et al.
2026-02-19
Score: 6.6
"Despite rapid progress in autonomous web agents, human involvement remains essential for shaping preferences and correcting agent behavior as tasks unfold. However, current agentic systems lack a principled understanding of when and why humans intervene, often proceeding autonomously past critical d..."
RESEARCH
via Arxiv
Aidar Myrzakhan, Tianyi Li, Bowei Guo et al.
2026-02-19
Score: 6.6
"Diffusion Language Models (DLMs) incur high inference cost due to iterative denoising, motivating efficient pruning. Existing pruning heuristics largely inherited from autoregressive (AR) LLMs, typically preserve attention sink tokens because AR sinks serve as stable global anchors. We show that thi..."
RESEARCH
via Arxiv
Luke Huang, Zhuoyang Zhang, Qinghao Hu et al.
2026-02-19
Score: 6.6
"Reinforcement learning (RL) is widely used to improve large language models on reasoning tasks, and asynchronous RL training is attractive because it increases end-to-end throughput. However, for widely adopted critic-free policy-gradient methods such as REINFORCE and GRPO, high asynchrony makes the..."
RESEARCH
via Arxiv
Hojung Jung, Rodrigo Hormazabal, Jaehyeong Jo et al.
2026-02-19
Score: 6.6
"Molecular generation with diffusion models has emerged as a promising direction for AI-driven drug discovery and materials science. While graph diffusion models have been widely adopted due to the discrete nature of 2D molecular graphs, existing models suffer from low chemical validity and struggle..."
RESEARCH
via Arxiv
Yuyan Bu, Xiaohao Liu, ZhaoXing Ren et al.
2026-02-18
Score: 6.6
"The widespread deployment of large language models (LLMs) across linguistic communities necessitates reliable multilingual safety alignment. However, recent efforts to extend alignment to other languages often require substantial resources, either through large-scale, high-quality supervision in the..."
RESEARCH
via Arxiv
Yangjie Xu, Lujun Li, Lama Sleem et al.
2026-02-18
Score: 6.6
"Agent Skill framework, now widely and officially supported by major players such as GitHub Copilot, LangChain, and OpenAI, performs especially well with proprietary models by improving context engineering, reducing hallucinations, and boosting task accuracy. Based on these observations, an investiga..."
AI MODELS
240 pts
Score: 6.5
Model performance comparison • Deployment and sustainability • Prompt engineering
"Gemini is consistently the most frustrating model I've used"
• "These models are so powerful"
AI MODELS
23 ups
Score: 6.5
"
https://github.com/ggml-org/llama.cpp/releases/tag/b8110
So far this is the best performing open-source multilingual OCR model I've seen, would appreciate if other people can share their findings. It's 0.9b so it shouldn't brick our machin..."
RESEARCH
via Arxiv
Ferdinand Kapl, Emmanouil Angelis, Kaitlin Maile et al.
2026-02-18
Score: 6.5
"Looping, reusing a block of layers across depth, and depth growing, training shallow-to-deep models by duplicating middle layers, have both been linked to stronger reasoning, but their relationship remains unclear. We provide a mechanistic unification: looped and depth-grown models exhibit convergen..."
RESEARCH
via Arxiv
Shen Zhou Hong, Alex Kleinman, Alyssa Mathiowetz et al.
2026-02-18
Score: 6.5
"Large language models (LLMs) perform strongly on biological benchmarks, raising concerns that they may help novice actors acquire dual-use laboratory skills. Yet, whether this translates to improved human performance in the physical laboratory remains unclear. To address this, we conducted a pre-reg..."
POLICY
1 pts
Score: 6.3
ETHICS
358 pts
Score: 6.2
AI Misuse and Accountability • Responsible AI Development • Societal Implications of AI
"The interesting question isn't 'should AI agents be regulated' – it's who is liable when an autonomous agent publishes defamatory content?"
• "Don't let your dog run errand and use a good leash."
ETHICS
399 pts
Score: 6.2
Automation in Art • Accessibility of Creativity • Prompting and Laziness
"The creative has to hide their process. They lie about how they make their art, and gatekeep the most valuable secrets."
• "The boring output people complain about is a prompting problem, not an AI problem."
BREAKTHROUGH
5 ups
Score: 6.2
""By applying new methods of machine learning to quantum chemistry research, Heidelberg University scientists have made significant strides in computational chemistry. They have achieved a major breakthrough toward solving a decades-old dilemma in quantum chemistry: the precise and stable calculation..."
SHOW HN
1 pts
Score: 6.2
SECURITY
1 pts
Score: 6.2
SHOW HN
3 pts
Score: 6.1
RESEARCH
"Current speech LLMs largely perform implicit ASR: on tasks solvable from a transcript, they are behaviorally and mechanistically equivalent to simple Whisper→LLM cascades. We show this through matched-backbone testing across four speech LLMs and six tasks, controlling for the LLM backbone for th..."
SHOW HN
1 pts
Score: 6.1
TOOLS
1 pts
Score: 6.1
POLICY
1 pts
Score: 6.1
RESEARCH
via Arxiv
Aloni Cohen, Refael Kohen, Kobbi Nissim et al.
2026-02-18
Score: 6.1
"Machine unlearning aims to remove specific data points from a trained model, often striving to emulate "perfect retraining", i.e., producing the model that would have been obtained had the deleted data never been included. We demonstrate that this approach, and security definitions that enable it, c..."