HISTORICAL ARCHIVE - January 18, 2026
What was happening in AI on 2026-01-18
Archive from: 2026-01-18 | Preserved for posterity ⚡
🔬 RESEARCH
via Arxiv
👤 Syed Naveed Mahmood, Md. Rezaur Rahman Bhuiyan, Tasfia Zaman et al.
📅 2026-01-15
⚡ Score: 8.1
"Selective knowledge erasure from LLMs is critical for GDPR compliance and model safety, yet current unlearning methods conflate behavioral suppression with true knowledge removal, allowing latent capabilities to persist beneath surface-level refusals. In this work, we address this challenge by intro..."
🔬 RESEARCH
via Arxiv
👤 Xingjun Ma, Yixu Wang, Hengyuan Xu et al.
📅 2026-01-15
⚡ Score: 8.1
"The rapid evolution of Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs) has produced substantial gains in reasoning, perception, and generative capability across language and vision. However, whether these advances yield commensurate improvements in safety remains unclear, i..."
🔬 RESEARCH
via Arxiv
👤 Christopher Clark, Jieyu Zhang, Zixian Ma et al.
📅 2026-01-15
⚡ Score: 7.8
"Today's strongest video-language models (VLMs) remain proprietary. The strongest open-weight models either rely on synthetic data from proprietary VLMs, effectively distilling from them, or do not disclose their training data or recipe. As a result, the open-source community lacks the foundations ne..."
🔬 RESEARCH
via Arxiv
👤 Maissam Barkeshli, Alberto Alfarano, Andrey Gromov
📅 2026-01-15
⚡ Score: 7.8
"Scaling laws have played a major role in the modern AI revolution, providing practitioners predictive power over how the model performance will improve with increasing data, compute, and number of model parameters. This has spurred an intense interest in the origin of neural scaling laws, with a com..."
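The predictive power the abstract describes can be sketched numerically: a power-law loss curve is linear in log-log space, so its exponent can be recovered by a straight-line fit. All numbers below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch: fit a neural scaling law L(N) = a * N**(-b) + c to
# synthetic loss measurements. a, b, c are invented for illustration.
N = np.array([1e6, 1e7, 1e8, 1e9, 1e10])   # parameter counts
true_a, true_b, true_c = 50.0, 0.3, 1.5    # c = irreducible loss
loss = true_a * N**-true_b + true_c

# After subtracting the irreducible loss, log(L - c) = log(a) - b*log(N)
# is linear in log(N), so a degree-1 polyfit recovers the exponent.
slope, intercept = np.polyfit(np.log(N), np.log(loss - true_c), 1)
b_hat = -slope
print(round(b_hat, 3))  # recovers the exponent 0.3
```

In practice the irreducible loss `c` is itself unknown and must be fit jointly, which is one reason extracting scaling exponents from real runs is harder than this sketch suggests.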
🔬 RESEARCH
🔺 80 pts
⚡ Score: 7.7
🎯 Skepticism of LLM capabilities • Conflict of interest concerns • Anthropic marketing criticism
💬 "Large language models are fundamentally not meant for tasks of this nature"
• "Confidence levels are suspect"
🧠 NEURAL NETWORKS
🔺 111 pts
⚡ Score: 7.5
🎯 Efficient token encoding • Alternative to attention • Geometric data representation
💬 "I am running an experiment of replacing discrete tokens with embeddings + small byte encoder/decoder"
• "If you want to prove a new alternative to attention without breaking the bank then one of the best ways to do that would probably be to retrain an already existing model"
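The quoted experiment can be sketched minimally: instead of a discrete token lookup table, pool per-byte embeddings into a single vector for the model to consume. The dimensions, mean-pooling, and nearest-neighbour "decoder" below are illustrative assumptions, not the commenter's actual design.

```python
import numpy as np

# Hedged sketch: byte-level embeddings in place of a token vocabulary.
D = 16
rng = np.random.default_rng(0)
byte_emb = rng.normal(size=(256, D))       # one embedding per byte value

def encode(text: str) -> np.ndarray:
    """Pool per-byte embeddings into one vector (a 'patch' embedding)."""
    b = np.frombuffer(text.encode("utf-8"), dtype=np.uint8)
    return byte_emb[b].mean(axis=0)        # shape (D,)

def decode_nearest_byte(vec: np.ndarray) -> int:
    """Toy decoder: nearest byte embedding by cosine similarity."""
    sims = byte_emb @ vec / (
        np.linalg.norm(byte_emb, axis=1) * np.linalg.norm(vec)
    )
    return int(sims.argmax())

patch = encode("hi")
print(patch.shape)  # (16,)
```

A real byte encoder/decoder would be small learned networks rather than fixed random embeddings; the point of the sketch is only the interface: variable-length bytes in, fixed-width embedding out, bytes back on decode.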
🔬 RESEARCH
via Arxiv
👤 Hao Wang, Yanting Wang, Hao Li et al.
📅 2026-01-15
⚡ Score: 7.2
"Large Language Models (LLMs) have achieved remarkable capabilities but remain vulnerable to adversarial 'jailbreak' attacks designed to bypass safety guardrails. Current safety alignment methods depend heavily on static external red teaming, utilizing fixed defense prompts or pre-collected adversa..."
🛡️ SAFETY
🔺 1 pts
⚡ Score: 7.1
🤖 AI MODELS
🔺 1 pts
⚡ Score: 7.1
🤖 AI MODELS
🔺 1 pts
⚡ Score: 7.0
🛠️ TOOLS
🔺 1 pts
⚡ Score: 7.0
🔬 RESEARCH
🔺 2 pts
⚡ Score: 7.0
🛠️ SHOW HN
🔺 1 pts
⚡ Score: 7.0
🔬 RESEARCH
via Arxiv
👤 Abhinaba Basu, Pavan Chakraborty
📅 2026-01-15
⚡ Score: 7.0
"A model that avoids stereotypes in a lab benchmark may not avoid them in deployment. We show that measured bias shifts dramatically when prompts mention different places, times, or audiences -- no adversarial prompting required.
We introduce Contextual StereoSet, a benchmark that holds stereotype..."
🔬 RESEARCH
via Arxiv
👤 Laura Ferrarotti, Gian Maria Campedelli, Roberto Dessì et al.
📅 2026-01-15
⚡ Score: 7.0
"In this article, we argue that understanding the collective behavior of agents based on large language models (LLMs) is an essential area of inquiry, with important implications in terms of risks and benefits, impacting us as a society at many levels. We claim that the distinctive nature of LLMs--na..."
🛠️ SHOW HN
🔺 3 pts
⚡ Score: 6.8
🔬 RESEARCH
via Arxiv
👤 Yiwen Gao, Ruochen Zhao, Yang Deng et al.
📅 2026-01-15
⚡ Score: 6.8
"As Large Language Models (LLMs) increasingly operate as Deep Research (DR) Agents capable of autonomous investigation and information synthesis, reliable evaluation of their task performance has become a critical bottleneck. Current benchmarks predominantly rely on static datasets, which suffer from..."
🛠️ TOOLS
🔺 1 pts
⚡ Score: 6.7
🔬 RESEARCH
via Arxiv
👤 Changle Qu, Sunhao Dai, Hengyi Cai et al.
📅 2026-01-15
⚡ Score: 6.6
"Tool-Integrated Reasoning (TIR) empowers large language models (LLMs) to tackle complex tasks by interleaving reasoning steps with external tool interactions. However, existing reinforcement learning methods typically rely on outcome- or trajectory-level rewards, assigning uniform advantages to all..."
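The credit-assignment problem the abstract raises can be stated in a few lines: trajectory-level RL spreads one scalar reward uniformly over every step of a tool-use rollout, so a decisive tool call and a useless one receive identical advantage. The function below is illustrative, not the paper's method.

```python
# Hedged sketch of the uniform-advantage scheme the paper argues is
# too coarse for Tool-Integrated Reasoning rollouts.
def trajectory_advantages(
    n_steps: int, trajectory_reward: float, baseline: float
) -> list[float]:
    """Assign the same trajectory-level advantage to every step."""
    adv = trajectory_reward - baseline
    return [adv] * n_steps  # no per-step credit differentiation

print(trajectory_advantages(4, 1.0, 0.25))  # every step gets 0.75
```

Step-level alternatives replace the `[adv] * n_steps` broadcast with per-step values, which is presumably the direction this work takes.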
🔬 RESEARCH
via Arxiv
👤 Zirui Ren, Ziming Liu
📅 2026-01-15
⚡ Score: 6.6
"Hierarchical reasoning model (HRM) achieves extraordinary performance on various reasoning tasks, significantly outperforming large language model-based reasoners. To understand the strengths and potential failure modes of HRM, we conduct a mechanistic study on its reasoning patterns and find three..."
🤖 AI MODELS
⬆️ 23 ups
⚡ Score: 6.5
"We have seen a cool counter-trend recently to the typical scaleup narrative (see Smol/Phi and ZIT most notably). I've been on a mission to push this to the limit (mainly for fun), moving LMs into environments where they have no business existing.
My thesis is that even the most primitive environmen..."
🔬 RESEARCH
via Arxiv
👤 Ruozhen Yang, Yucheng Jiang, Yueqi Jiang et al.
📅 2026-01-15
⚡ Score: 6.5
"Deploying large language models in long-horizon, goal-oriented interactions remains challenging because similar entities and facts recur under different latent goals and constraints, causing memory systems to retrieve context-mismatched evidence. We propose STITCH (Structured Intent Tracking in Cont..."
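The failure mode described, retrieving a fact about the right entity under the wrong latent goal, can be shown with a toy memory keyed on (entity, goal) pairs. STITCH itself is surely more involved; the entities and facts here are invented for illustration.

```python
# Hedged sketch: an entity-only memory cannot separate facts stored
# under different latent goals; keying on (entity, goal) keeps
# retrieval context-matched.
memory: dict[tuple[str, str], str] = {}

def store(entity: str, goal: str, fact: str) -> None:
    memory[(entity, goal)] = fact

def retrieve(entity: str, goal: str):
    return memory.get((entity, goal))

store("Hotel Roma", "book_trip", "has availability in May")
store("Hotel Roma", "file_complaint", "refund still pending")
print(retrieve("Hotel Roma", "file_complaint"))  # refund still pending
```

A memory keyed on the entity alone would have to pick one of the two facts blindly, which is exactly the context-mismatched retrieval the abstract describes.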
🔬 RESEARCH
via Arxiv
👤 Yinzhi Zhao, Ming Wang, Shi Feng et al.
📅 2026-01-15
⚡ Score: 6.5
"Large language models (LLMs) have achieved impressive performance across natural language tasks and are increasingly deployed in real-world applications. Despite extensive safety alignment efforts, recent studies show that such alignment is often shallow and remains vulnerable to jailbreak attacks...."
🛠️ SHOW HN
🔺 2 pts
⚡ Score: 6.2
🔬 RESEARCH
🔺 4 pts
⚡ Score: 6.2
🛠️ TOOLS
🔺 1 pts
⚡ Score: 6.1
🛠️ TOOLS
🔺 2 pts
⚡ Score: 6.1
🔧 INFRASTRUCTURE
"If we actually wanted 'model = function' to work, a few things seem fundamentally required:
• Fast scale from zero without keeping GPUs alive just to hold state
• Execution state reuse so models don't need full re-init and KV rebuild on every scale event
• Clear separation between orchestr..."
🎯 Serverless model deployment • Infrastructure challenges • Orchestration and state management
💬 "Lambda style LLM inference would be great"
• "CRIU style checkpointing seems to be the path for most of it"
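The "execution state reuse" bullet above can be illustrated with a file-based snapshot: a worker's warm state is checkpointed so a later scale-from-zero event restores it instead of re-initializing. Real CRIU checkpoints whole processes at the OS level; the dict, field names, and pickle round-trip below are only a stand-in for that idea.

```python
import os
import pickle
import tempfile

# Hedged sketch: snapshot/restore of warm worker state (stand-in for
# loaded weights + KV cache) to avoid full re-init on each scale event.
def checkpoint(state: dict, path: str) -> None:
    with open(path, "wb") as f:
        pickle.dump(state, f)

def restore(path: str) -> dict:
    with open(path, "rb") as f:
        return pickle.load(f)

warm = {"weights_loaded": True, "kv_cache": [[0.1] * 4] * 3}
path = os.path.join(tempfile.mkdtemp(), "worker.ckpt")
checkpoint(warm, path)       # worker scales to zero...
cold_start = restore(path)   # ...and comes back without re-init
print(cold_start["weights_loaded"])  # True
```

The hard parts the thread points at, GPU memory that pickle cannot capture and orchestration of where the snapshot is restored, are exactly what this sketch elides.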
🔬 RESEARCH
via Arxiv
👤 Amir Khurshid, Abhishek Sehgal
📅 2026-01-15
⚡ Score: 6.1
"Large language model (LLM) contexts are typically constructed using retrieval-augmented generation (RAG), which involves ranking and selecting the top-k passages. The approach causes fragmentation in information graphs in document structures, over-retrieval, and duplication of content alongside insu..."
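The standard RAG step this abstract critiques, rank passages against the query and keep the top-k, looks roughly like the sketch below. The term-frequency cosine scorer and the example passages are illustrative assumptions; production systems use learned embeddings or BM25.

```python
import math
from collections import Counter

# Hedged sketch of top-k passage selection for RAG context construction.
def tf_cosine(a: str, b: str) -> float:
    """Cosine similarity over raw term-frequency vectors."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query: str, passages: list, k: int = 2) -> list:
    """Rank all passages by similarity to the query, keep the best k."""
    return sorted(passages, key=lambda p: tf_cosine(query, p), reverse=True)[:k]

passages = [
    "LLM contexts are built with retrieval augmented generation",
    "the weather today is sunny",
    "retrieval selects the top k passages for the context",
]
print(top_k("retrieval augmented generation context", passages, k=2))
```

Because each passage is scored independently, nothing stops the top-k set from fragmenting a document or duplicating content, which is the shortcoming the paper targets.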