<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Theory on ICE-ICE-BEAR-BLOG</title><link>https://ice-ice-bear.github.io/tags/theory/</link><description>Recent content in Theory on ICE-ICE-BEAR-BLOG</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Sat, 09 May 2026 00:00:00 +0900</lastBuildDate><atom:link href="https://ice-ice-bear.github.io/tags/theory/index.xml" rel="self" type="application/rss+xml"/><item><title>Weekly arxiv digest — five papers that re-examine the interfaces we take for granted</title><link>https://ice-ice-bear.github.io/posts/2026-05-09-arxiv-papers-week-digest/</link><pubDate>Sat, 09 May 2026 00:00:00 +0900</pubDate><guid>https://ice-ice-bear.github.io/posts/2026-05-09-arxiv-papers-week-digest/</guid><description>&lt;h2 id="overview"&gt;Overview
&lt;/h2&gt;&lt;p&gt;Five &lt;a class="link" href="https://arxiv.org/" target="_blank" rel="noopener"
 &gt;arxiv&lt;/a&gt; papers that caught the eye over the past few days. The fields are scattered — &lt;a class="link" href="https://en.wikipedia.org/wiki/Information_retrieval" target="_blank" rel="noopener"
 &gt;information retrieval&lt;/a&gt;, an agentic workbench for mathematicians, &lt;a class="link" href="https://en.wikipedia.org/wiki/Attention_%28machine_learning%29" target="_blank" rel="noopener"
 &gt;attention&lt;/a&gt; architecture, &lt;a class="link" href="https://en.wikipedia.org/wiki/Fine-tuning_%28deep_learning%29" target="_blank" rel="noopener"
 &gt;SFT&lt;/a&gt;-induced &lt;a class="link" href="https://en.wikipedia.org/wiki/Hallucination_%28artificial_intelligence%29" target="_blank" rel="noopener"
 &gt;hallucinations&lt;/a&gt;, and &lt;a class="link" href="https://en.wikipedia.org/wiki/Feature_learning" target="_blank" rel="noopener"
 &gt;representation learning&lt;/a&gt; theory — but read together, one question keeps surfacing: &lt;strong&gt;&amp;ldquo;Are the interfaces and priors we accept without thought actually blocking the model&amp;rsquo;s real capability?&amp;rdquo;&lt;/strong&gt; &lt;a class="link" href="https://ice-ice-bear.github.io/en/p/2026-05-06-arxiv-papers-pick-multiagent-debate-mia-husserl/" &gt;The previous digest&lt;/a&gt; traced reasoning gains along three axes (cooperation, persistence, structure). This week drops one layer below — &lt;strong&gt;systematically questioning the abstractions already in place&lt;/strong&gt;.&lt;/p&gt;
&lt;pre class="mermaid" style="visibility:hidden"&gt;graph TD
 Theme["This week in one line: &amp;lt;br/&amp;gt; question the interface/prior already in place"]
 Theme --&gt; Retrieval["retrieval interface &amp;lt;br/&amp;gt; (top-k similarity)"]
 Theme --&gt; Workflow["math workflow &amp;lt;br/&amp;gt; (single-shot response)"]
 Theme --&gt; Arch["attention prior &amp;lt;br/&amp;gt; (uniform assumption)"]
 Theme --&gt; Training["SFT objective &amp;lt;br/&amp;gt; (factuality conflict)"]
 Theme --&gt; Repr["representation similarity metric &amp;lt;br/&amp;gt; (scale-confounded)"]

 Retrieval --&gt; P1["DCI (2605.05242)"]
 Workflow --&gt; P2["AI Co-Mathematician (2605.06651)"]
 Arch --&gt; P3["GOAT (2601.15380)"]
 Training --&gt; P4["Self-distillation SFT (2604.15574)"]
 Repr --&gt; P5["Aristotelian Repr. (2602.14486)"]&lt;/pre&gt;&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;#&lt;/th&gt;
 &lt;th&gt;Paper&lt;/th&gt;
 &lt;th&gt;Field&lt;/th&gt;
 &lt;th&gt;One-line summary&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;1&lt;/td&gt;
 &lt;td&gt;&lt;a class="link" href="https://arxiv.org/abs/2605.05242" target="_blank" rel="noopener"
 &gt;Direct Corpus Interaction (2605.05242)&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;cs.IR&lt;/td&gt;
 &lt;td&gt;An agent searching the raw corpus with &lt;code&gt;grep&lt;/code&gt; and shell tools beats strong retrievers — no embedding index needed&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;2&lt;/td&gt;
 &lt;td&gt;&lt;a class="link" href="https://arxiv.org/abs/2605.06651" target="_blank" rel="noopener"
 &gt;AI Co-Mathematician (2605.06651)&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;cs.AI&lt;/td&gt;
 &lt;td&gt;Async, stateful workbench for mathematicians; 48% on &lt;a class="link" href="https://epoch.ai/frontiermath" target="_blank" rel="noopener"
 &gt;FrontierMath Tier 4&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;3&lt;/td&gt;
 &lt;td&gt;&lt;a class="link" href="https://arxiv.org/abs/2601.15380" target="_blank" rel="noopener"
 &gt;GOAT — You Need Better Attention Priors (2601.15380)&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;cs.LG&lt;/td&gt;
 &lt;td&gt;Generalize attention via &lt;a class="link" href="https://optimaltransport.github.io/" target="_blank" rel="noopener"
 &gt;Entropic Optimal Transport&lt;/a&gt; with a learnable prior&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;4&lt;/td&gt;
 &lt;td&gt;&lt;a class="link" href="https://arxiv.org/abs/2604.15574" target="_blank" rel="noopener"
 &gt;Why Fine-Tuning Encourages Hallucinations (2604.15574)&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;cs.CL&lt;/td&gt;
 &lt;td&gt;&lt;a class="link" href="https://en.wikipedia.org/wiki/Knowledge_distillation" target="_blank" rel="noopener"
 &gt;Self-distillation&lt;/a&gt; reduces &lt;a class="link" href="https://en.wikipedia.org/wiki/Fine-tuning_%28deep_learning%29" target="_blank" rel="noopener"
 &gt;SFT&lt;/a&gt;-induced hallucinations&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;5&lt;/td&gt;
 &lt;td&gt;&lt;a class="link" href="https://arxiv.org/abs/2602.14486" target="_blank" rel="noopener"
 &gt;Aristotelian Representation Hypothesis (2602.14486)&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;cs.LG&lt;/td&gt;
 &lt;td&gt;The &lt;a class="link" href="https://phillipi.github.io/prh/" target="_blank" rel="noopener"
 &gt;Platonic Representation&lt;/a&gt; convergence is mostly a metric artifact; real convergence is local&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="1-direct-corpus-interaction--260505242"&gt;1. Direct Corpus Interaction — 2605.05242
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://arxiv.org/a/li_z_1" target="_blank" rel="noopener"
 &gt;Zhuofeng Li&lt;/a&gt;, Haoxiang Zhang, &lt;a class="link" href="https://lupantech.github.io/" target="_blank" rel="noopener"
 &gt;Pan Lu&lt;/a&gt;, &lt;a class="link" href="https://bunsenfeng.github.io/" target="_blank" rel="noopener"
 &gt;Shangbin Feng&lt;/a&gt;, &lt;a class="link" href="https://maszhongming.github.io/" target="_blank" rel="noopener"
 &gt;Ming Zhong&lt;/a&gt;, &lt;a class="link" href="https://homes.cs.washington.edu/~yejin/" target="_blank" rel="noopener"
 &gt;Yejin Choi&lt;/a&gt;, &lt;a class="link" href="https://www.james-zou.com/" target="_blank" rel="noopener"
 &gt;James Zou&lt;/a&gt;, &lt;a class="link" href="https://hanj.cs.illinois.edu/" target="_blank" rel="noopener"
 &gt;Jiawei Han&lt;/a&gt;, &lt;a class="link" href="https://wenhuchen.github.io/" target="_blank" rel="noopener"
 &gt;Wenhu Chen&lt;/a&gt;, &lt;a class="link" href="https://cs.uwaterloo.ca/~jimmylin/" target="_blank" rel="noopener"
 &gt;Jimmy Lin&lt;/a&gt;, et al. (2026-05-03, &lt;a class="link" href="https://arxiv.org/list/cs.IR/new" target="_blank" rel="noopener"
 &gt;cs.IR&lt;/a&gt;).&lt;/p&gt;
&lt;h3 id="core"&gt;Core
&lt;/h3&gt;&lt;p&gt;Modern &lt;a class="link" href="https://en.wikipedia.org/wiki/Information_retrieval" target="_blank" rel="noopener"
 &gt;retrieval&lt;/a&gt; systems, lexical or semantic, &lt;strong&gt;compress a corpus through a fixed similarity interface&lt;/strong&gt;. A single top-k step happens before any reasoning. As agents get stronger, this compression becomes the bottleneck — exact lexical constraints, sparse-clue conjunctions, local context checks, and multi-step hypothesis refinement are hard to express as retriever calls. Evidence filtered out early cannot be recovered by stronger downstream reasoning.&lt;/p&gt;
&lt;p&gt;The proposal is &lt;strong&gt;Direct Corpus Interaction (DCI)&lt;/strong&gt; — no embedding model, no &lt;a class="link" href="https://en.wikipedia.org/wiki/Vector_database" target="_blank" rel="noopener"
 &gt;vector index&lt;/a&gt;, no retrieval API. The agent searches the raw corpus directly with general-purpose terminal tools: &lt;a class="link" href="https://en.wikipedia.org/wiki/Grep" target="_blank" rel="noopener"
 &gt;grep&lt;/a&gt;, file reads, shell commands, lightweight scripts.&lt;/p&gt;
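&lt;p&gt;A toy sketch of the core contrast (my illustration, not the paper&amp;rsquo;s system): conjunctive lexical constraints are trivial to express as chained &lt;code&gt;grep&lt;/code&gt;-style filters over the raw corpus, but a single top-k similarity call cannot guarantee them. The corpus, clues, and function below are all hypothetical.&lt;/p&gt;

```python
import re

# Toy "direct corpus interaction": narrow a raw corpus with successive
# regex filters, the way a shell agent chains grep calls. The corpus is
# just a dict of filename -> text.
def direct_corpus_search(corpus, clues):
    """Return filenames matching EVERY clue (AND semantics) -- a constraint
    that one top-k similarity lookup cannot express exactly."""
    hits = set(corpus)
    for clue in clues:
        pattern = re.compile(clue, re.IGNORECASE)
        hits = {name for name in hits if pattern.search(corpus[name])}
    return sorted(hits)

corpus = {
    "a.txt": "entropic optimal transport regularizes attention",
    "b.txt": "sparse retrieval uses lexical overlap",
    "c.txt": "optimal transport for retrieval and attention sinks",
}
print(direct_corpus_search(corpus, [r"optimal transport", r"attention"]))
# ['a.txt', 'c.txt']
```

Each filter step is also a point where the agent can reason and revise its next query, which is exactly what a one-shot retriever interface forecloses.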
&lt;h3 id="contributions"&gt;Contributions
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;No offline indexing; adapts naturally to evolving local corpora&lt;/li&gt;
&lt;li&gt;Substantially outperforms sparse, dense, and reranking baselines on multiple &lt;a class="link" href="https://brightbenchmark.github.io/" target="_blank" rel="noopener"
 &gt;BRIGHT&lt;/a&gt; and &lt;a class="link" href="https://github.com/beir-cellar/beir" target="_blank" rel="noopener"
 &gt;BEIR&lt;/a&gt; datasets&lt;/li&gt;
&lt;li&gt;Strong accuracy on &lt;a class="link" href="https://browsecomp.github.io/" target="_blank" rel="noopener"
 &gt;BrowseComp-Plus&lt;/a&gt; and multi-hop QA without any conventional semantic retriever&lt;/li&gt;
&lt;li&gt;The takeaway: as agents grow stronger, retrieval quality depends not only on reasoning but on &lt;strong&gt;the resolution of the interface through which the model touches the corpus&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="why-it-matters-now"&gt;Why it matters now
&lt;/h3&gt;&lt;p&gt;This is not &amp;ldquo;RAG, but better.&amp;rdquo; It questions a &lt;a class="link" href="https://en.wikipedia.org/wiki/Dense_passage_retrieval" target="_blank" rel="noopener"
 &gt;decade-old default&lt;/a&gt;: retrieval = top-k similarity. The way &lt;a class="link" href="https://www.anthropic.com/claude-code" target="_blank" rel="noopener"
 &gt;Claude Code&lt;/a&gt; explores codebases with &lt;code&gt;grep&lt;/code&gt; and &lt;code&gt;find&lt;/code&gt; turns out to be a generalizable interface, not a coding-specific shortcut. The abstraction layer the search-index industry has assumed for a decade may become just one option among several.&lt;/p&gt;
&lt;h2 id="2-ai-co-mathematician--260506651"&gt;2. AI Co-Mathematician — 2605.06651
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://arxiv.org/a/zheng_d_3" target="_blank" rel="noopener"
 &gt;Daniel Zheng&lt;/a&gt;, &lt;a class="link" href="https://research.google/people/ingrid-von-glehn/" target="_blank" rel="noopener"
 &gt;Ingrid von Glehn&lt;/a&gt;, Yori Zwols, Lars Buesing, &lt;a class="link" href="http://danroy.org/" target="_blank" rel="noopener"
 &gt;Daniel M. Roy&lt;/a&gt;, &lt;a class="link" href="https://www.bewitched.com/" target="_blank" rel="noopener"
 &gt;Martin Wattenberg&lt;/a&gt;, &lt;a class="link" href="https://www.fernandaviegas.com/" target="_blank" rel="noopener"
 &gt;Fernanda Viégas&lt;/a&gt;, &lt;a class="link" href="https://research.google/people/alex-davies/" target="_blank" rel="noopener"
 &gt;Alex Davies&lt;/a&gt;, &lt;a class="link" href="https://research.google/people/PushmeetKohli/" target="_blank" rel="noopener"
 &gt;Pushmeet Kohli&lt;/a&gt;, et al. (&lt;a class="link" href="https://deepmind.google/" target="_blank" rel="noopener"
 &gt;Google DeepMind&lt;/a&gt;, 2026-05-07, &lt;a class="link" href="https://arxiv.org/list/cs.AI/new" target="_blank" rel="noopener"
 &gt;cs.AI&lt;/a&gt;).&lt;/p&gt;
&lt;h3 id="core-1"&gt;Core
&lt;/h3&gt;&lt;p&gt;A workbench where mathematicians &lt;strong&gt;interactively leverage &lt;a class="link" href="https://en.wikipedia.org/wiki/Intelligent_agent" target="_blank" rel="noopener"
 &gt;AI agents&lt;/a&gt; for open-ended research&lt;/strong&gt;. The key design choice is not single-shot Q&amp;amp;A but an &lt;strong&gt;asynchronous, stateful workspace&lt;/strong&gt;.&lt;/p&gt;
&lt;pre class="mermaid" style="visibility:hidden"&gt;flowchart LR
 User["mathematician"] --&gt;|"intent (often blurry)"| WS["stateful workspace"]
 WS --&gt; Idea["ideation"]
 WS --&gt; Lit["literature search"]
 WS --&gt; Comp["computational exploration"]
 WS --&gt; Proof["theorem proving"]
 WS --&gt; Theory["theory building"]
 WS -.-&gt;|"track failed hypotheses"| WS
 WS --&gt;|"native math artifacts"| User&lt;/pre&gt;&lt;h3 id="contributions-1"&gt;Contributions
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Manages uncertainty, refines user intent, tracks failed hypotheses, outputs native mathematical artifacts — bundled into one system&lt;/li&gt;
&lt;li&gt;In early tests, helped researchers &lt;strong&gt;solve open problems&lt;/strong&gt;, identify new research directions, and uncover overlooked &lt;a class="link" href="https://en.wikipedia.org/wiki/Literature_review" target="_blank" rel="noopener"
 &gt;literature&lt;/a&gt; references&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;48% on &lt;a class="link" href="https://epoch.ai/frontiermath" target="_blank" rel="noopener"
 &gt;FrontierMath&lt;/a&gt; Tier 4&lt;/strong&gt; — a new high among all evaluated AI systems&lt;/li&gt;
&lt;/ul&gt;
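&lt;p&gt;To make the design choice concrete, here is a deliberately tiny sketch (entirely hypothetical, far simpler than the actual system) of the one structural difference that matters: the workspace is stateful, so failed hypotheses persist as a record and open threads can be resumed asynchronously instead of restarting from a blank prompt.&lt;/p&gt;

```python
from dataclasses import dataclass, field

# Toy stateful workspace: every hypothesis, including dead ends, stays in
# the record, unlike single-shot Q-and-A where each exchange is amnesiac.
@dataclass
class Workspace:
    hypotheses: list = field(default_factory=list)

    def propose(self, statement):
        entry = {"statement": statement, "status": "open"}
        self.hypotheses.append(entry)
        return entry

    def resolve(self, entry, success):
        entry["status"] = "proved" if success else "failed"

    def open_threads(self):
        # an async agent resumes from here instead of starting over
        return [h["statement"] for h in self.hypotheses if h["status"] == "open"]

ws = Workspace()
a = ws.propose("conjecture holds for small n")
b = ws.propose("a counterexample exists for odd n")
ws.resolve(b, success=False)   # the dead end is recorded, not discarded
print(ws.open_threads())       # ['conjecture holds for small n']
```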
&lt;h3 id="why-it-matters-now-1"&gt;Why it matters now
&lt;/h3&gt;&lt;p&gt;This is a different bet than &lt;a class="link" href="https://deepmind.google/discover/blog/ai-solves-imo-problems-at-silver-medal-level/" target="_blank" rel="noopener"
 &gt;AlphaProof&lt;/a&gt;-style autonomous theorem proving. &lt;strong&gt;It does not aim to replace the mathematician; it interfaces the mathematician&amp;rsquo;s actual workflow — blurry intent, exploration, dead ends, retries — directly into the agent loop.&lt;/strong&gt; What &lt;a class="link" href="https://www.anthropic.com/news/skills" target="_blank" rel="noopener"
 &gt;Claude Skills&lt;/a&gt;-style async workflow infrastructure attempts in general domains, this validates first in math, a domain where success is verifiable. A likely reference design for the next generation of &amp;ldquo;agentic workbenches.&amp;rdquo;&lt;/p&gt;
&lt;h2 id="3-goat--you-need-better-attention-priors--260115380"&gt;3. GOAT — You Need Better Attention Priors — 2601.15380
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://arxiv.org/a/litman_e_1" target="_blank" rel="noopener"
 &gt;Elon Litman&lt;/a&gt;, &lt;a class="link" href="https://gabe-guo.github.io/" target="_blank" rel="noopener"
 &gt;Gabe Guo&lt;/a&gt; (2026-01-21, &lt;a class="link" href="https://arxiv.org/list/cs.LG/new" target="_blank" rel="noopener"
 &gt;cs.LG&lt;/a&gt;).&lt;/p&gt;
&lt;h3 id="core-2"&gt;Core
&lt;/h3&gt;&lt;p&gt;Viewed through &lt;a class="link" href="https://optimaltransport.github.io/" target="_blank" rel="noopener"
 &gt;Entropic Optimal Transport&lt;/a&gt;, standard &lt;a class="link" href="https://en.wikipedia.org/wiki/Softmax_function" target="_blank" rel="noopener"
 &gt;softmax attention&lt;/a&gt; is &lt;strong&gt;a transport problem regularized by an implicit uniform prior&lt;/strong&gt;. The authors propose &lt;strong&gt;GOAT (Generalized Optimal transport Attention with Trainable priors)&lt;/strong&gt; — replace that naive assumption with a learnable, continuous prior.&lt;/p&gt;
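&lt;p&gt;The EOT reading can be made concrete in a few lines. Under a one-step entropic-transport view, a uniform key prior contributes only a constant to the logits and vanishes after normalization, while a non-uniform prior enters as an additive log term. The sketch below is my paraphrase of that observation, not the paper&amp;rsquo;s implementation; GOAT&amp;rsquo;s actual prior is a learned continuous function.&lt;/p&gt;

```python
import numpy as np

# Softmax attention with an explicit key prior. With a uniform prior the
# log-prior is a constant and the result equals plain softmax attention;
# a non-uniform (learnable) prior shifts the logits additively.
def attention_with_prior(q, k, v, log_prior=None):
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)
    if log_prior is not None:
        logits = logits + log_prior          # log pi_j, broadcast over queries
    logits = logits - logits.max(axis=-1, keepdims=True)
    w = np.exp(logits)
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(2, 4)), rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
uniform = attention_with_prior(q, k, v, log_prior=np.log(np.ones(3) / 3))
plain = attention_with_prior(q, k, v)
print(np.allclose(uniform, plain))           # True: uniform prior is a no-op
```

A sharply skewed prior (for instance one that up-weights a designated key) reproduces sink-like mass concentration by design rather than as an emergent pathology.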
&lt;h3 id="contributions-2"&gt;Contributions
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Fully compatible&lt;/strong&gt; with optimized kernels like &lt;a class="link" href="https://github.com/Dao-AILab/flash-attention" target="_blank" rel="noopener"
 &gt;FlashAttention&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;An EOT-based explanation of &lt;a class="link" href="https://arxiv.org/abs/2309.17453" target="_blank" rel="noopener"
 &gt;attention sinks&lt;/a&gt;, plus a concrete fix that avoids the representational trade-offs of standard attention&lt;/li&gt;
&lt;li&gt;Absorbs spatial information into the core attention computation, learning an &lt;strong&gt;extrapolatable prior&lt;/strong&gt; — combines the flexibility of learned &lt;a class="link" href="https://en.wikipedia.org/wiki/Transformer_%28deep_learning_architecture%29#Positional_encoding" target="_blank" rel="noopener"
 &gt;positional embeddings&lt;/a&gt; with the length generalization of fixed encodings&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="why-it-matters-now-2"&gt;Why it matters now
&lt;/h3&gt;&lt;p&gt;Since &lt;a class="link" href="https://arxiv.org/abs/1706.03762" target="_blank" rel="noopener"
 &gt;the 2017 Transformer&lt;/a&gt;, attention&amp;rsquo;s uniform prior has gone almost entirely unchallenged. GOAT shows that phenomena practitioners patched around in production — attention sinks being the cleanest example — were actually prior-design issues. As &lt;a class="link" href="https://en.wikipedia.org/wiki/Mamba_%28deep_learning_architecture%29" target="_blank" rel="noopener"
 &gt;non-attention architectures&lt;/a&gt; like &lt;a class="link" href="https://arxiv.org/abs/2312.00752" target="_blank" rel="noopener"
 &gt;Mamba&lt;/a&gt; and &lt;a class="link" href="https://arxiv.org/abs/2305.13048" target="_blank" rel="noopener"
 &gt;RWKV&lt;/a&gt; arrive, this paper asks the reverse question: how far can we generalize attention itself?&lt;/p&gt;
&lt;h2 id="4-why-fine-tuning-encourages-hallucinations--260415574"&gt;4. Why Fine-Tuning Encourages Hallucinations — 2604.15574
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://arxiv.org/a/kaplan_g_1" target="_blank" rel="noopener"
 &gt;Guy Kaplan&lt;/a&gt;, &lt;a class="link" href="https://zorikg.github.io/" target="_blank" rel="noopener"
 &gt;Zorik Gekhman&lt;/a&gt;, Zhen Zhu, Lotem Rozner, Yuval Reif, &lt;a class="link" href="https://swabhs.com/" target="_blank" rel="noopener"
 &gt;Swabha Swayamdipta&lt;/a&gt;, &lt;a class="link" href="https://dhoiem.cs.illinois.edu/" target="_blank" rel="noopener"
 &gt;Derek Hoiem&lt;/a&gt;, &lt;a class="link" href="https://schwartz-lab-huji.github.io/" target="_blank" rel="noopener"
 &gt;Roy Schwartz&lt;/a&gt; (2026-04-16, &lt;a class="link" href="https://arxiv.org/list/cs.CL/new" target="_blank" rel="noopener"
 &gt;cs.CL&lt;/a&gt;).&lt;/p&gt;
&lt;h3 id="core-3"&gt;Core
&lt;/h3&gt;&lt;p&gt;A major source of &lt;a class="link" href="https://en.wikipedia.org/wiki/Large_language_model" target="_blank" rel="noopener"
 &gt;LLM&lt;/a&gt; &lt;a class="link" href="https://en.wikipedia.org/wiki/Hallucination_%28artificial_intelligence%29" target="_blank" rel="noopener"
 &gt;hallucinations&lt;/a&gt; is &lt;strong&gt;exposure to new factual information during &lt;a class="link" href="https://en.wikipedia.org/wiki/Fine-tuning_%28deep_learning%29" target="_blank" rel="noopener"
 &gt;supervised fine-tuning&lt;/a&gt; (SFT)&lt;/strong&gt; — the model hallucinates more about facts it already held from pre-training. The authors reframe this as a &lt;a class="link" href="https://en.wikipedia.org/wiki/Continual_learning" target="_blank" rel="noopener"
 &gt;continual-learning&lt;/a&gt; problem (knowledge degradation during training) and bring the tools of that field to bear.&lt;/p&gt;
&lt;h3 id="contributions-3"&gt;Contributions
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;A &lt;strong&gt;self-distillation-based SFT method&lt;/strong&gt; that regularizes output-distribution drift — effective factual learning while minimizing hallucinations w.r.t. existing knowledge&lt;/li&gt;
&lt;li&gt;When new knowledge acquisition is unnecessary: &lt;strong&gt;freezing parameter groups&lt;/strong&gt; to suppress factual plasticity preserves task performance while reducing hallucinations&lt;/li&gt;
&lt;li&gt;Investigates the mechanism through three hypotheses: capacity limits, &lt;a class="link" href="https://en.wikipedia.org/wiki/Imitation_learning#Behavioral_cloning" target="_blank" rel="noopener"
 &gt;behavior cloning&lt;/a&gt;, and localized interference&lt;/li&gt;
&lt;li&gt;Main driver: &lt;strong&gt;interference among overlapping semantic representations&lt;/strong&gt; — and self-distillation succeeds precisely by mitigating that interference&lt;/li&gt;
&lt;/ul&gt;
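&lt;p&gt;The mechanism in the first bullet can be sketched as a plain SFT cross-entropy plus a KL penalty toward the frozen pre-SFT model&amp;rsquo;s output distribution. The exact objective, weighting, and token handling in the paper may differ; this is an illustration of the regularization idea only, with all names hypothetical.&lt;/p&gt;

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Sketch of a self-distillation SFT loss: cross-entropy on the new labels
# plus KL(teacher || student), where the teacher is the frozen pre-SFT
# model, so the output distribution cannot drift freely.
def sft_loss(student_logits, target_ids, teacher_logits, alpha=0.5):
    p_s = softmax(student_logits)
    ce = -np.log(p_s[np.arange(len(target_ids)), target_ids]).mean()
    p_t = softmax(teacher_logits)
    kl = (p_t * (np.log(p_t) - np.log(p_s))).sum(axis=-1).mean()
    return ce + alpha * kl

logits = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])
targets = np.array([0, 1])
# when the student has not drifted from the teacher, the KL term is zero
print(np.isclose(sft_loss(logits, targets, logits, alpha=0.5),
                 sft_loss(logits, targets, logits, alpha=0.0)))  # True
```

The interference framing suggests why this helps: the penalty is strongest exactly where new training signal would overwrite overlapping representations of old facts.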
&lt;h3 id="why-it-matters-now-3"&gt;Why it matters now
&lt;/h3&gt;&lt;p&gt;&amp;ldquo;SFT causes hallucinations&amp;rdquo; was already observed in &lt;a class="link" href="https://arxiv.org/abs/2405.05904" target="_blank" rel="noopener"
 &gt;Gekhman et al. 2024&lt;/a&gt;. This paper pushes further by &lt;strong&gt;pinning the mechanism on representational interference and offering self-distillation as the fix&lt;/strong&gt;. The implication for the &lt;a class="link" href="https://en.wikipedia.org/wiki/AI_alignment" target="_blank" rel="noopener"
 &gt;alignment&lt;/a&gt; stack is large: SFT — the step before &lt;a class="link" href="https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback" target="_blank" rel="noopener"
 &gt;RLHF&lt;/a&gt; — is itself a safety/factuality liability. The era of running instruction tuning without thinking about its side effects is ending.&lt;/p&gt;
&lt;h2 id="5-aristotelian-representation-hypothesis--260214486"&gt;5. Aristotelian Representation Hypothesis — 2602.14486
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://fabian-groeger.com/" target="_blank" rel="noopener"
 &gt;Fabian Gröger&lt;/a&gt;, Shuo Wen, &lt;a class="link" href="https://people.epfl.ch/maria.brbic" target="_blank" rel="noopener"
 &gt;Maria Brbić&lt;/a&gt; (&lt;a class="link" href="https://www.epfl.ch/" target="_blank" rel="noopener"
 &gt;EPFL&lt;/a&gt;, 2026-02-16, &lt;a class="link" href="https://arxiv.org/list/cs.LG/new" target="_blank" rel="noopener"
 &gt;cs.LG&lt;/a&gt;).&lt;/p&gt;
&lt;h3 id="core-4"&gt;Core
&lt;/h3&gt;&lt;p&gt;The &lt;a class="link" href="https://phillipi.github.io/prh/" target="_blank" rel="noopener"
 &gt;Platonic Representation Hypothesis&lt;/a&gt; (Huh, Cheung, Wang, &lt;a class="link" href="http://web.mit.edu/phillipi/" target="_blank" rel="noopener"
 &gt;Isola&lt;/a&gt;, 2024) claims &lt;strong&gt;neural network representations are converging to a common statistical model of reality&lt;/strong&gt;. This paper challenges the measurement instrument used to support that claim.&lt;/p&gt;
&lt;h3 id="contributions-4"&gt;Contributions
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Existing representational similarity metrics are &lt;strong&gt;confounded by network scale&lt;/strong&gt; — increasing depth or width systematically inflates similarity scores&lt;/li&gt;
&lt;li&gt;A &lt;strong&gt;permutation-based null-calibration framework&lt;/strong&gt; transforms any such metric into a calibrated score with statistical guarantees&lt;/li&gt;
&lt;li&gt;After calibration: convergence reported by global &lt;a class="link" href="https://en.wikipedia.org/wiki/Spectral_theory" target="_blank" rel="noopener"
 &gt;spectral measures&lt;/a&gt; &lt;strong&gt;largely disappears&lt;/strong&gt;; however, &lt;strong&gt;local neighborhood similarity&lt;/strong&gt; (but not local distances) retains significant agreement across modalities&lt;/li&gt;
&lt;li&gt;Proposes the &lt;strong&gt;Aristotelian Representation Hypothesis&lt;/strong&gt;: representations converge to &lt;strong&gt;shared local neighborhood relationships&lt;/strong&gt; — not absolute distances (Platonic forms) but relational neighborhoods (Aristotelian categories)&lt;/li&gt;
&lt;/ul&gt;
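&lt;p&gt;The calibration logic, sketched under my own simplifying assumptions (the paper&amp;rsquo;s framework is more general and metric-agnostic): score two representation spaces by local k-NN neighborhood overlap, then subtract a permutation-null baseline so that chance-level agreement, which inflates with scale, cancels out.&lt;/p&gt;

```python
import numpy as np

# k-nearest-neighbor sets for each row of X (Euclidean, self excluded)
def knn_sets(X, k):
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    return [set(np.argsort(row)[:k]) for row in d]

# local neighborhood agreement between two representation spaces
def neighborhood_overlap(X, Y, k=3):
    a, b = knn_sets(X, k), knn_sets(Y, k)
    return float(np.mean([len(sa.intersection(sb)) / k for sa, sb in zip(a, b)]))

# permutation-null calibration: shuffle the row correspondence to estimate
# the chance-level overlap, and report the excess above it
def calibrated_score(X, Y, k=3, n_perm=100, seed=0):
    rng = np.random.default_rng(seed)
    obs = neighborhood_overlap(X, Y, k)
    null = [neighborhood_overlap(X, Y[rng.permutation(len(Y))], k)
            for _ in range(n_perm)]
    return obs - float(np.mean(null))

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))
Y = X * 10.0 + rng.normal(scale=0.1, size=(30, 5))  # same geometry, new scale
print(calibrated_score(X, Y) > 0.5)  # True: real local agreement survives
```

Because k-NN neighborhoods are invariant to rescaling, a genuinely shared local geometry clears the null easily, while a score that merely grows with width or depth would not.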
&lt;h3 id="why-it-matters-now-4"&gt;Why it matters now
&lt;/h3&gt;&lt;p&gt;This is a meta-paper. &lt;strong&gt;It attacks the measurement, not the result.&lt;/strong&gt; The Platonic hypothesis has been cited as theoretical justification for &lt;a class="link" href="https://en.wikipedia.org/wiki/Multimodal_learning" target="_blank" rel="noopener"
 &gt;multimodal alignment&lt;/a&gt; work since 2024. If this calibration framework becomes the standard, the &amp;ldquo;representation convergence&amp;rdquo; claims of the past two years all need re-examination. And what survives — local neighborhood convergence — gives a cleaner explanation for why &lt;a class="link" href="https://en.wikipedia.org/wiki/Self-supervised_learning#Contrastive_self-supervised_learning" target="_blank" rel="noopener"
 &gt;contrastive learning&lt;/a&gt; and similar &lt;a class="link" href="https://en.wikipedia.org/wiki/Word_embedding" target="_blank" rel="noopener"
 &gt;embedding&lt;/a&gt; methods work so well.&lt;/p&gt;
&lt;h2 id="reading-the-cluster"&gt;Reading the cluster
&lt;/h2&gt;&lt;p&gt;Five papers, one direction: &lt;strong&gt;interrogate the abstraction layer already in place.&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Layer questioned&lt;/th&gt;
 &lt;th&gt;Assumed default&lt;/th&gt;
 &lt;th&gt;Proposed upgrade&lt;/th&gt;
 &lt;th&gt;Paper&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Retrieval interface&lt;/td&gt;
 &lt;td&gt;top-k similarity is enough&lt;/td&gt;
 &lt;td&gt;agent searches raw corpus directly&lt;/td&gt;
 &lt;td&gt;DCI&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Math workflow&lt;/td&gt;
 &lt;td&gt;single-shot Q&amp;amp;A&lt;/td&gt;
 &lt;td&gt;async, stateful workbench&lt;/td&gt;
 &lt;td&gt;AI Co-Mathematician&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Attention prior&lt;/td&gt;
 &lt;td&gt;uniform distribution&lt;/td&gt;
 &lt;td&gt;learnable prior + EOT&lt;/td&gt;
 &lt;td&gt;GOAT&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;SFT objective&lt;/td&gt;
 &lt;td&gt;new knowledge = good&lt;/td&gt;
 &lt;td&gt;self-distillation against interference&lt;/td&gt;
 &lt;td&gt;Why FT Hallucinates&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Representation similarity metric&lt;/td&gt;
 &lt;td&gt;spectral measures are fine&lt;/td&gt;
 &lt;td&gt;scale-robust calibration&lt;/td&gt;
 &lt;td&gt;Aristotelian&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;pre class="mermaid" style="visibility:hidden"&gt;quadrantChart
 title Five papers — abstraction layer × scope of impact
 x-axis "Lower layer (structure/theory)" --&gt; "Higher layer (workflow)"
 y-axis "Narrow scope" --&gt; "Broad scope"
 quadrant-1 "redesign candidates"
 quadrant-2 "foundational recalibration"
 quadrant-3 "specialized"
 quadrant-4 "tooling"
 "DCI (retrieval)": [0.55, 0.85]
 "AI Co-Math": [0.85, 0.6]
 "GOAT (attention)": [0.15, 0.75]
 "SFT halluc.": [0.5, 0.7]
 "Aristotelian": [0.25, 0.55]&lt;/pre&gt;&lt;p&gt;&lt;a class="link" href="https://ice-ice-bear.github.io/en/p/2026-05-06-arxiv-papers-pick-multiagent-debate-mia-husserl/" &gt;The previous digest&lt;/a&gt; traced reasoning gains through cooperation, persistence, and structure. This week goes one layer below — &lt;strong&gt;are the interfaces and priors that support that reasoning even laid down correctly?&lt;/strong&gt; The two installments do not conflict; they look like consecutive stages of the same shift: scale-driven gains have plateaued, and the next round&amp;rsquo;s differentiation comes from &lt;strong&gt;agent cooperation topology (last week) plus abstraction-layer recalibration (this week)&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id="insights"&gt;Insights
&lt;/h2&gt;&lt;p&gt;What binds these five together is a single posture — &lt;strong&gt;question the default once more&lt;/strong&gt;. DCI questions &amp;ldquo;retrieval = top-k.&amp;rdquo; AI Co-Mathematician questions &amp;ldquo;response = single-shot text.&amp;rdquo; GOAT questions &amp;ldquo;attention prior = uniform.&amp;rdquo; The SFT hallucination paper questions the assumption that SFT delivers &lt;a class="link" href="https://en.wikipedia.org/wiki/Knowledge_injection" target="_blank" rel="noopener"
 &gt;knowledge injection&lt;/a&gt; for free. The Aristotelian paper questions whether representational similarity metrics are even trustworthy. Each of these five defaults is something the field has stacked layers on top of without seriously re-examining.&lt;/p&gt;
&lt;p&gt;Now that the scale-as-capability-driver round — roughly &lt;a class="link" href="https://en.wikipedia.org/wiki/GPT-4" target="_blank" rel="noopener"
 &gt;2020 through 2024&lt;/a&gt; — has tapered off, the next axis of differentiation is not parameter count but &lt;strong&gt;the resolution of the interface where the model meets the world&lt;/strong&gt;. DCI&amp;rsquo;s raw-corpus interface, AI Co-Mathematician&amp;rsquo;s stateful workspace, GOAT&amp;rsquo;s learned prior, self-distillation SFT, and neighborhood-based representation calibration are all the same meta-principle applied to different layers: &lt;strong&gt;an abstraction layer is not a free simplification, it is where information loss happens. To reduce the loss, redesign the layer.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If &lt;a class="link" href="https://ice-ice-bear.github.io/en/p/2026-05-06-arxiv-papers-pick-multiagent-debate-mia-husserl/" &gt;last week&amp;rsquo;s picks&lt;/a&gt; looked at the upper half of agent cognition — how they cooperate, persist, and structure experience — this week looks at the lower half — whether the retrieval, representations, and priors underneath are correctly laid down. Both halves converging at the same time is itself the signal: the next round is not about model size, it is about &lt;strong&gt;recalibrating the entire stack&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id="references"&gt;References
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Papers (this week)&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2605.05242" target="_blank" rel="noopener"
 &gt;Beyond Semantic Similarity: Rethinking Retrieval for Agentic Search via Direct Corpus Interaction (2605.05242)&lt;/a&gt; — Li, Zhang, Lu, Feng, Choi, Zou, Han, Chen, Lin, et al. (2026-05-03, &lt;a class="link" href="https://arxiv.org/list/cs.IR/new" target="_blank" rel="noopener"
 &gt;cs.IR&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2605.06651" target="_blank" rel="noopener"
 &gt;AI Co-Mathematician: Accelerating Mathematicians with Agentic AI (2605.06651)&lt;/a&gt; — Zheng, von Glehn, Buesing, Roy, Wattenberg, Viégas, Davies, Kohli, et al. (&lt;a class="link" href="https://deepmind.google/" target="_blank" rel="noopener"
 &gt;Google DeepMind&lt;/a&gt;, 2026-05-07, &lt;a class="link" href="https://arxiv.org/list/cs.AI/new" target="_blank" rel="noopener"
 &gt;cs.AI&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2601.15380" target="_blank" rel="noopener"
 &gt;You Need Better Attention Priors — GOAT (2601.15380)&lt;/a&gt; — Litman, Guo (2026-01-21, &lt;a class="link" href="https://arxiv.org/list/cs.LG/new" target="_blank" rel="noopener"
 &gt;cs.LG&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2604.15574" target="_blank" rel="noopener"
 &gt;Why Fine-Tuning Encourages Hallucinations and How to Fix It (2604.15574)&lt;/a&gt; — Kaplan, Gekhman, Zhu, Rozner, Reif, Swayamdipta, Hoiem, Schwartz (2026-04-16, &lt;a class="link" href="https://arxiv.org/list/cs.CL/new" target="_blank" rel="noopener"
 &gt;cs.CL&lt;/a&gt;)&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2602.14486" target="_blank" rel="noopener"
 &gt;Revisiting the Platonic Representation Hypothesis: An Aristotelian View (2602.14486)&lt;/a&gt; — Gröger, Wen, Brbić (&lt;a class="link" href="https://www.epfl.ch/" target="_blank" rel="noopener"
 &gt;EPFL&lt;/a&gt;, 2026-02-16, &lt;a class="link" href="https://arxiv.org/list/cs.LG/new" target="_blank" rel="noopener"
 &gt;cs.LG&lt;/a&gt;)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Background&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://phillipi.github.io/prh/" target="_blank" rel="noopener"
 &gt;The Platonic Representation Hypothesis&lt;/a&gt; — Huh, Cheung, Wang, &lt;a class="link" href="http://web.mit.edu/phillipi/" target="_blank" rel="noopener"
 &gt;Isola&lt;/a&gt; (2024) — the prior work paper 5 confronts&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/1706.03762" target="_blank" rel="noopener"
 &gt;Attention Is All You Need&lt;/a&gt; — Vaswani et al. (2017) — the baseline GOAT generalizes&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/Dao-AILab/flash-attention" target="_blank" rel="noopener"
 &gt;FlashAttention&lt;/a&gt; — &lt;a class="link" href="https://tridao.me/" target="_blank" rel="noopener"
 &gt;Tri Dao&lt;/a&gt; — the kernel GOAT preserves compatibility with&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2405.05904" target="_blank" rel="noopener"
 &gt;Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? (2405.05904)&lt;/a&gt; — Gekhman et al. (2024) — direct precursor to paper 4&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://optimaltransport.github.io/" target="_blank" rel="noopener"
 &gt;Entropic Optimal Transport&lt;/a&gt; — the mathematical frame behind GOAT&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://brightbenchmark.github.io/" target="_blank" rel="noopener"
 &gt;BRIGHT benchmark&lt;/a&gt; · &lt;a class="link" href="https://github.com/beir-cellar/beir" target="_blank" rel="noopener"
 &gt;BEIR&lt;/a&gt; · &lt;a class="link" href="https://browsecomp.github.io/" target="_blank" rel="noopener"
 &gt;BrowseComp&lt;/a&gt; · &lt;a class="link" href="https://epoch.ai/frontiermath" target="_blank" rel="noopener"
 &gt;FrontierMath&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2302.00487" target="_blank" rel="noopener"
 &gt;Continual Learning survey&lt;/a&gt; — the toolkit the SFT-hallucination paper borrows from&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2309.17453" target="_blank" rel="noopener"
 &gt;Attention Sink (Streaming LLM)&lt;/a&gt; — Xiao et al. (2023)&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://en.wikipedia.org/wiki/Society_of_Mind" target="_blank" rel="noopener"
 &gt;Society of Mind&lt;/a&gt; · &lt;a class="link" href="https://en.wikipedia.org/wiki/Free_energy_principle" target="_blank" rel="noopener"
 &gt;Active Inference&lt;/a&gt; — frames carried over from last week&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Related blog posts&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://ice-ice-bear.github.io/en/p/2026-05-06-arxiv-papers-pick-multiagent-debate-mia-husserl/" &gt;Weekly arxiv digest — multi-agent debate, MIA, Husserlian phenomenology&lt;/a&gt; — previous installment in this series&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/" target="_blank" rel="noopener"
 &gt;arxiv.org&lt;/a&gt; — preprint server&lt;/li&gt;
&lt;/ul&gt;</description></item></channel></rss>