🏛️ HISTORICAL ARCHIVE - January 16, 2026
What was happening in AI on 2026-01-16
🔬 RESEARCH
via Arxiv
👤 Xingjun Ma, Yixu Wang, Hengyuan Xu et al.
📅 2026-01-15
⚡ Score: 8.1
"The rapid evolution of Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) has produced substantial gains in reasoning, perception, and generative capability across language and vision. However, whether these advances yield commensurate improvements in safety remains unclear, i..."
🔬 RESEARCH
via Arxiv
👤 Ben Nassi, Bruce Schneier, Oleg Brodt
📅 2026-01-14
⚡ Score: 8.0
"The rapid adoption of large language model (LLM)-based systems -- from chatbots to autonomous agents capable of executing code and financial transactions -- has created a new attack surface that existing security frameworks inadequately address. The dominant framing of these threats as "prompt injec..."
🛠️ SHOW HN
🔺 71 pts
⚡ Score: 8.0
🎯 Agent reliability • Agent accountability • Predictable agent systems
💬 "Reliability came more from reducing degrees of freedom than from adding intelligence."
• "Each step had an explicit goal, explicit inputs, and a defined end."
🔬 RESEARCH
via Arxiv
👤 Christopher Clark, Jieyu Zhang, Zixian Ma et al.
📅 2026-01-15
⚡ Score: 7.9
"Today's strongest video-language models (VLMs) remain proprietary. The strongest open-weight models either rely on synthetic data from proprietary VLMs, effectively distilling from them, or do not disclose their training data or recipe. As a result, the open-source community lacks the foundations ne..."
🔒 SECURITY
🔺 9 pts
⚡ Score: 7.8
🔬 RESEARCH
via Arxiv
👤 Maissam Barkeshli, Alberto Alfarano, Andrey Gromov
📅 2026-01-15
⚡ Score: 7.8
"Scaling laws have played a major role in the modern AI revolution, providing practitioners predictive power over how the model performance will improve with increasing data, compute, and number of model parameters. This has spurred an intense interest in the origin of neural scaling laws, with a com..."
🛡️ SAFETY
🔺 5 pts
⚡ Score: 7.6
🔒 SECURITY
🔺 3 pts
⚡ Score: 7.5
🔬 RESEARCH
via Arxiv
👤 Hao Wang, Yanting Wang, Hao Li et al.
📅 2026-01-15
⚡ Score: 7.3
"Large Language Models (LLMs) have achieved remarkable capabilities but remain vulnerable to adversarial ``jailbreak'' attacks designed to bypass safety guardrails. Current safety alignment methods depend heavily on static external red teaming, utilizing fixed defense prompts or pre-collected adversa..."
🔬 RESEARCH
via Arxiv
👤 Shan Randhawa, Agha Ali Raza, Kentaro Toyama et al.
📅 2026-01-14
⚡ Score: 7.0
"LLMs are increasingly being integrated into clinical workflows, yet they often lack clinical empathy, an essential aspect of effective doctor-patient communication. Existing NLP frameworks focus on reactively labeling empathy in doctors' responses but offer limited support for anticipatory modeling..."
🔬 RESEARCH
via Arxiv
👤 Laura Ferrarotti, Gian Maria Campedelli, Roberto Dessì et al.
📅 2026-01-15
⚡ Score: 7.0
"In this article, we argue that understanding the collective behavior of agents based on large language models (LLMs) is an essential area of inquiry, with important implications in terms of risks and benefits, impacting us as a society at many levels. We claim that the distinctive nature of LLMs--na..."
🤖 AI MODELS
🔺 1 pt
⚡ Score: 7.0
🔬 RESEARCH
via Arxiv
👤 Sai Varun Kodathala, Rakesh Vunnam
📅 2026-01-14
⚡ Score: 7.0
"As Large Language Models (LLMs) continue to scale, post-training pruning has emerged as a promising approach to reduce computational costs while preserving performance. Existing methods such as SparseGPT and Wanda achieve high sparsity through layer-wise weight reconstruction or activation-aware mag..."
🔬 RESEARCH
via Arxiv
👤 Ge Lei, Ferran Brosa Planella, Sterling G. Baird et al.
📅 2026-01-14
⚡ Score: 6.9
"Efficiently optimizing battery charging protocols is challenging because each evaluation is slow, costly, and non-differentiable. Many existing approaches address this difficulty by heavily constraining the protocol search space, which limits the diversity of protocols that can be explored, preventi..."
🤖 AI MODELS
🔺 3 pts
⚡ Score: 6.9
🔬 RESEARCH
via Arxiv
👤 Andreea Dutulescu, Stefan Ruseti, Mihai Dascalu
📅 2026-01-14
⚡ Score: 6.8
"Transformer-based language models often achieve strong results on mathematical reasoning benchmarks while remaining fragile on basic numerical understanding and arithmetic operations. A central limitation is that numbers are processed as symbolic tokens whose embeddings do not explicitly encode nume..."
🔬 RESEARCH
via Arxiv
👤 Yiwen Gao, Ruochen Zhao, Yang Deng et al.
📅 2026-01-15
⚡ Score: 6.8
"As Large Language Models (LLMs) increasingly operate as Deep Research (DR) Agents capable of autonomous investigation and information synthesis, reliable evaluation of their task performance has become a critical bottleneck. Current benchmarks predominantly rely on static datasets, which suffer from..."
🔬 RESEARCH
via Arxiv
👤 Chi-Pin Huang, Yunze Man, Zhiding Yu et al.
📅 2026-01-14
⚡ Score: 6.8
"Vision-Language-Action (VLA) tasks require reasoning over complex visual scenes and executing adaptive actions in dynamic environments. While recent studies on reasoning VLAs show that explicit chain-of-thought (CoT) can improve generalization, they suffer from high inference latency due to lengthy..."
🔬 RESEARCH
via Arxiv
👤 Jiali Cheng, Ziheng Chen, Chirag Agarwal et al.
📅 2026-01-14
⚡ Score: 6.8
"Machine unlearning is becoming essential for building trustworthy and compliant language models. Yet unlearning success varies considerably across individual samples: some are reliably erased, while others persist despite the same procedure. We argue that this disparity is not only a data-side pheno..."
🔬 RESEARCH
via Arxiv
👤 Sara AlMahri, Liming Xu, Alexandra Brintrup
📅 2026-01-14
⚡ Score: 6.7
"Modern supply chains are increasingly exposed to disruptions from geopolitical events, demand shocks, trade restrictions, to natural disasters. While many of these disruptions originate deep in the supply network, most companies still lack visibility beyond Tier-1 suppliers, leaving upstream vulnera..."
🔬 RESEARCH
via Arxiv
👤 Xi Shi, Mengxin Zheng, Qian Lou
📅 2026-01-15
⚡ Score: 6.7
"Multi-agent systems (MAS) enable complex reasoning by coordinating multiple agents, but often incur high inference latency due to multi-step execution and repeated model invocations, severely limiting their scalability and usability in time-sensitive scenarios. Most existing approaches primarily opt..."
🔬 RESEARCH
via Arxiv
👤 Zhiyuan Hu, Yunhai Hu, Juncheng Liu et al.
📅 2026-01-14
⚡ Score: 6.7
"Multi-agent systems have evolved into practical LLM-driven collaborators for many applications, gaining robustness from diversity and cross-checking. However, multi-agent RL (MARL) training is resource-intensive and unstable: co-adapting teammates induce non-stationarity, and rewards are often spars..."
🔬 RESEARCH
via Arxiv
👤 Yibo Wang, Lei Wang, Yue Deng et al.
📅 2026-01-14
⚡ Score: 6.7
"Deep research systems are widely used for multi-step web research, analysis, and cross-source synthesis, yet their evaluation remains challenging. Existing benchmarks often require annotation-intensive task construction, rely on static evaluation dimensions, or fail to reliably verify facts when cit..."
🔬 RESEARCH
via Arxiv
👤 Zirui Ren, Ziming Liu
📅 2026-01-15
⚡ Score: 6.7
"Hierarchical reasoning model (HRM) achieves extraordinary performance on various reasoning tasks, significantly outperforming large language model-based reasoners. To understand the strengths and potential failure modes of HRM, we conduct a mechanistic study on its reasoning patterns and find three..."
🔬 RESEARCH
via Arxiv
👤 Kuo Liang, Yuhang Lu, Jianming Mao et al.
📅 2026-01-14
⚡ Score: 6.7
"Large-scale optimization is a key backbone of modern business decision-making. However, building these models is often labor-intensive and time-consuming. We address this by proposing LEAN-LLM-OPT, a LightwEight AgeNtic workflow construction framework for LLM-assisted large-scale OPTimization auto-f..."
🔬 RESEARCH
via Arxiv
👤 Changle Qu, Sunhao Dai, Hengyi Cai et al.
📅 2026-01-15
⚡ Score: 6.6
"Tool-Integrated Reasoning (TIR) empowers large language models (LLMs) to tackle complex tasks by interleaving reasoning steps with external tool interactions. However, existing reinforcement learning methods typically rely on outcome- or trajectory-level rewards, assigning uniform advantages to all..."
🔬 RESEARCH
via Arxiv
👤 Syed Naveed Mahmood, Md. Rezaur Rahman Bhuiyan, Tasfia Zaman et al.
📅 2026-01-15
⚡ Score: 6.6
"Selective knowledge erasure from LLMs is critical for GDPR compliance and model safety, yet current unlearning methods conflate behavioral suppression with true knowledge removal, allowing latent capabilities to persist beneath surface-level refusals. In this work, we address this challenge by intro..."
🔬 RESEARCH
via Arxiv
👤 Tianyi Niu, Justin Chih-Yao Chen, Genta Indra Winata et al.
📅 2026-01-14
⚡ Score: 6.6
"Large Language Model (LLM) routers dynamically select optimal models for given inputs. Existing approaches typically assume access to ground-truth labeled data, which is often unavailable in practice, especially when user request distributions are heterogeneous and unknown. We introduce Routing with..."
🔬 RESEARCH
via Arxiv
👤 Abhinaba Basu, Pavan Chakraborty
📅 2026-01-15
⚡ Score: 6.6
"A model that avoids stereotypes in a lab benchmark may not avoid them in deployment. We show that measured bias shifts dramatically when prompts mention different places, times, or audiences -- no adversarial prompting required.
We introduce Contextual StereoSet, a benchmark that holds stereotype..."
🛠️ TOOLS
🔺 247 pts
⚡ Score: 6.5
🎯 Browser development • AI-driven coding • Fundraising strategy
💬 "it's just plain slop"
• "these things will get better"
🔬 RESEARCH
via Arxiv
👤 Yinzhi Zhao, Ming Wang, Shi Feng et al.
📅 2026-01-15
⚡ Score: 6.5
"Large language models (LLMs) have achieved impressive performance across natural language tasks and are increasingly deployed in real-world applications. Despite extensive safety alignment efforts, recent studies show that such alignment is often shallow and remains vulnerable to jailbreak attacks...."
🔬 RESEARCH
via Arxiv
👤 Ruozhen Yang, Yucheng Jiang, Yueqi Jiang et al.
📅 2026-01-15
⚡ Score: 6.5
"Deploying large language models in long-horizon, goal-oriented interactions remains challenging because similar entities and facts recur under different latent goals and constraints, causing memory systems to retrieve context-mismatched evidence. We propose STITCH (Structured Intent Tracking in Cont..."
🔧 INFRASTRUCTURE
🔺 5 pts
⚡ Score: 6.5
🤖 AI MODELS
🔺 2 pts
⚡ Score: 6.2
🤖 AI MODELS
🔺 1 pt
⚡ Score: 6.2
🛠️ SHOW HN
🔺 9 pts
⚡ Score: 6.1
🎯 Deployment issues • Pricing concerns • Comparisons to existing tools
💬 "fyi: it does not build for me from the source code."
• "The pricing is ridiculous. It doesn't include the Claude subscription so $20/m is way out of league for a UI."
🛠️ TOOLS
🔺 4 pts
⚡ Score: 6.1
⚖️ ETHICS
🔺 15 pts
⚡ Score: 6.1
🎯 AI Risks • Algorithmic Bias • Responsibility of Tech Companies
💬 "these things come with their own tradeoffs"
• "the attention model (and its finite size) causes the suicidal person's discourse to slowly displace any constraints"
🔬 RESEARCH
via Arxiv
👤 Sicong Liu, Yanxian Huang, Mingwei Liu et al.
📅 2026-01-14
⚡ Score: 6.1
"Code generation tasks aim to automate the conversion of user requirements into executable code, significantly reducing manual development efforts and enhancing software productivity. The emergence of large language models (LLMs) has significantly advanced code generation, though their efficiency is..."
🔬 RESEARCH
via Arxiv
👤 Yuxi Xia, Loris Schoenegger, Benjamin Roth
📅 2026-01-15
⚡ Score: 6.1
"Large language models (LLMs) can increase users' perceived trust by verbalizing confidence in their outputs. However, prior work has shown that LLMs are often overconfident, making their stated confidence unreliable since it does not consistently align with factual accuracy. To better understand the..."