<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Active Inference on ICE-ICE-BEAR-BLOG</title><link>https://ice-ice-bear.github.io/tags/active-inference/</link><description>Recent content in Active Inference on ICE-ICE-BEAR-BLOG</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Wed, 06 May 2026 00:00:00 +0900</lastBuildDate><atom:link href="https://ice-ice-bear.github.io/tags/active-inference/index.xml" rel="self" type="application/rss+xml"/><item><title>Three arxiv Papers That Drifted Through the Chat — Multiagent Debate, MIA, Husserlian Phenomenology</title><link>https://ice-ice-bear.github.io/posts/2026-05-06-arxiv-papers-pick-multiagent-debate-mia-husserl/</link><pubDate>Wed, 06 May 2026 00:00:00 +0900</pubDate><guid>https://ice-ice-bear.github.io/posts/2026-05-06-arxiv-papers-pick-multiagent-debate-mia-husserl/</guid><description>&lt;img src="https://ice-ice-bear.github.io/" alt="Featured image of post Three arxiv Papers That Drifted Through the Chat — Multiagent Debate, MIA, Husserlian Phenomenology" /&gt;&lt;h2 id="overview"&gt;Overview
&lt;/h2&gt;&lt;p&gt;Three &lt;a class="link" href="https://arxiv.org/" target="_blank" rel="noopener"
 &gt;arXiv&lt;/a&gt; papers landed within a few days of each other. Different eras, different topics, different methods — but read together they answer one question, &lt;strong&gt;&amp;ldquo;where do further gains in AI agent reasoning come from?&amp;rdquo;&lt;/strong&gt;, from three angles: cooperation, persistence, and structure. Right at the moment when single-model reasoning gains are visibly plateauing, this is a useful tour of where the next round&amp;rsquo;s keywords are coming from.&lt;/p&gt;
&lt;pre class="mermaid" style="visibility:hidden"&gt;graph TD
 Q["Where do reasoning gains come from?"] --&gt; Coop["Cooperation"]
 Q --&gt; Pers["Persistence"]
 Q --&gt; Struct["Structure"]

 Coop --&gt; P1["Multiagent Debate &amp;lt;br/&amp;gt; 2305.14325 (2023)"]
 Pers --&gt; P2["Memory Intelligence Agent &amp;lt;br/&amp;gt; 2604.04503 (2026)"]
 Struct --&gt; P3["Husserl + Active Inference &amp;lt;br/&amp;gt; 2208.09058 (2022)"]&lt;/pre&gt;&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;#&lt;/th&gt;
 &lt;th&gt;Paper&lt;/th&gt;
 &lt;th&gt;Year&lt;/th&gt;
 &lt;th&gt;One-line summary&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;1&lt;/td&gt;
 &lt;td&gt;&lt;a class="link" href="https://arxiv.org/abs/2305.14325" target="_blank" rel="noopener"
 &gt;Multiagent Debate&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;2023&lt;/td&gt;
 &lt;td&gt;Multiple LLM instances debating each other improve reasoning&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;2&lt;/td&gt;
 &lt;td&gt;&lt;a class="link" href="https://arxiv.org/abs/2604.04503" target="_blank" rel="noopener"
 &gt;Memory Intelligence Agent (MIA)&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;2026&lt;/td&gt;
 &lt;td&gt;Deep Research Agents need an evolving memory system&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;3&lt;/td&gt;
 &lt;td&gt;&lt;a class="link" href="https://arxiv.org/abs/2208.09058" target="_blank" rel="noopener"
 &gt;Husserlian Phenomenology + Active Inference&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;2022&lt;/td&gt;
 &lt;td&gt;The phenomenology of consciousness can be mapped to a computational model&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="1-multiagent-debate--230514325"&gt;1. Multiagent Debate — 2305.14325
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://yilundu.github.io/" target="_blank" rel="noopener"
 &gt;Yilun Du&lt;/a&gt;, Shuang Li, &lt;a class="link" href="https://groups.csail.mit.edu/vision/torralbalab/" target="_blank" rel="noopener"
 &gt;Antonio Torralba&lt;/a&gt;, &lt;a class="link" href="https://cocosci.mit.edu/josh" target="_blank" rel="noopener"
 &gt;Joshua B. Tenenbaum&lt;/a&gt;, &lt;a class="link" href="https://research.google/people/igor-mordatch/" target="_blank" rel="noopener"
 &gt;Igor Mordatch&lt;/a&gt; — &lt;a class="link" href="https://www.mit.edu/" target="_blank" rel="noopener"
 &gt;MIT&lt;/a&gt; (2023-05). Accepted at &lt;a class="link" href="https://icml.cc/Conferences/2024" target="_blank" rel="noopener"
 &gt;ICML 2024&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="the-idea"&gt;The idea
&lt;/h3&gt;&lt;p&gt;Instead of asking one LLM to reason harder, &lt;strong&gt;have several LLM instances propose answers and debate.&lt;/strong&gt; Across multiple rounds they converge on a shared answer. It is essentially &lt;a class="link" href="https://en.wikipedia.org/wiki/Marvin_Minsky" target="_blank" rel="noopener"
 &gt;Marvin Minsky&lt;/a&gt;&amp;rsquo;s &lt;a class="link" href="https://en.wikipedia.org/wiki/Society_of_Mind" target="_blank" rel="noopener"
 &gt;Society of Mind&lt;/a&gt; approach ported to LLMs.&lt;/p&gt;
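&lt;p&gt;To make the mechanism concrete, here is a minimal sketch of the debate loop. This is my paraphrase, not the paper&amp;rsquo;s code: &lt;code&gt;ask_llm&lt;/code&gt; is a placeholder for whatever chat-completion call you use, and the prompt wording only approximates the paper&amp;rsquo;s.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;# Multiagent debate, minimal sketch (illustrative, not the paper's code).
def ask_llm(prompt):
    # Placeholder: wire this to any chat-completion API.
    raise NotImplementedError


def debate(question, num_agents=3, num_rounds=2):
    # Round 0: every agent answers independently.
    answers = [ask_llm(question) for _ in range(num_agents)]
    for _ in range(num_rounds):
        revised = []
        for i in range(num_agents):
            others = [a for j, a in enumerate(answers) if j != i]
            # Each agent sees the others' answers and may revise its own.
            prompt = (
                question
                + "\n\nThese are solutions from other agents:\n"
                + "\n---\n".join(others)
                + "\n\nUsing them as additional advice, give an updated answer."
            )
            revised.append(ask_llm(prompt))
        answers = revised
    return answers  # typically reduced to a single answer by majority vote&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The whole trick lives in the prompt. Nothing touches the weights, which is why it runs on black-box models as-is.&lt;/p&gt;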
&lt;h3 id="contribution"&gt;Contribution
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;A multi-agent debate framework that improves mathematical and strategic reasoning&lt;/li&gt;
&lt;li&gt;Reduces hallucinations, improves factual validity&lt;/li&gt;
&lt;li&gt;Works on black-box LLMs as-is with the same prompt for every task — no fine-tuning required&lt;/li&gt;
&lt;li&gt;The first clean result that lifts reasoning by &lt;strong&gt;inter-instance cooperation&lt;/strong&gt; rather than single-model scaling&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="why-now"&gt;Why now
&lt;/h3&gt;&lt;p&gt;Although it is a May 2023 paper, the 2026 vantage point makes it more relevant. Single-model reasoning gains are visibly plateauing, and this dovetails with the &lt;strong&gt;parallel tool call&lt;/strong&gt; push in &lt;a class="link" href="https://openai.com/index/advancing-voice-intelligence-with-new-models-in-the-api" target="_blank" rel="noopener"
 &gt;GPT-Realtime-2&lt;/a&gt;. It is also the theoretical justification for why infrastructure tools like agent-skills are designed assuming &lt;strong&gt;many agents running concurrently.&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id="2-memory-intelligence-agent-mia--260404503"&gt;2. Memory Intelligence Agent (MIA) — 2604.04503
&lt;/h2&gt;&lt;p&gt;Jingyang Qiao et al. (2026-04). A memory architecture paper aimed squarely at the &lt;a class="link" href="https://openai.com/index/introducing-deep-research/" target="_blank" rel="noopener"
 &gt;Deep Research Agent&lt;/a&gt; family.&lt;/p&gt;
&lt;h3 id="the-idea-1"&gt;The idea
&lt;/h3&gt;&lt;p&gt;The weak link in Deep Research Agents — LLM reasoning combined with external tools — is memory. Conventional approaches (storing and retrieving raw past trajectories) are inefficient, with storage and retrieval costs blowing up. MIA addresses this with a &lt;strong&gt;Manager-Planner-Executor&lt;/strong&gt; three-tier architecture, plus non-parametric memory and two parametric agents.&lt;/p&gt;
&lt;pre class="mermaid" style="visibility:hidden"&gt;flowchart LR
 M["Manager &amp;lt;br/&amp;gt; (memory compression/management)"] --&gt; P["Planner &amp;lt;br/&amp;gt; (search planning)"]
 P --&gt; E["Executor &amp;lt;br/&amp;gt; (information analysis)"]
 E --&gt;|"trajectory"| M
 M -.-&gt;|"non-parametric ↔ parametric"| P
 M -.-&gt;|"non-parametric ↔ parametric"| E&lt;/pre&gt;&lt;h3 id="contribution-1"&gt;Contribution
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Non-parametric memory storing &lt;strong&gt;compressed search trajectories&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Alternating reinforcement learning&lt;/strong&gt; — Planner and Executor are reinforced in alternation, separating search-plan synthesis from information analysis&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Test-time learning&lt;/strong&gt; — the Planner updates on-the-fly without pausing inference&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Bidirectional conversion between parametric and non-parametric memory&lt;/strong&gt; for efficient memory evolution&lt;/li&gt;
&lt;li&gt;Strong results across eleven benchmarks&lt;/li&gt;
&lt;/ul&gt;
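&lt;p&gt;The control flow is easier to see in code than in prose. A minimal sketch under stated assumptions: &lt;code&gt;plan&lt;/code&gt;, &lt;code&gt;execute&lt;/code&gt;, &lt;code&gt;update&lt;/code&gt;, and &lt;code&gt;compress&lt;/code&gt; are names I made up, not the paper&amp;rsquo;s API, and the reinforcement-learning machinery is elided entirely.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-python"&gt;# Manager-Planner-Executor loop, illustrative only.
# planner/executor stand in for MIA's two parametric agents;
# all method names are placeholders, not the paper's API.
class MIASketch:
    def __init__(self, planner, executor):
        self.planner = planner    # parametric: synthesizes search plans
        self.executor = executor  # parametric: analyzes retrieved information
        self.memory = []          # non-parametric: compressed trajectories

    def compress(self, step, result):
        # Stand-in for learned compression: keep only a short summary,
        # so storage and retrieval costs stay bounded.
        return (str(step) + " :: " + str(result))[:200]

    def research(self, task, max_steps=8):
        findings = []
        for _ in range(max_steps):
            # Planner proposes the next search step, conditioned on memory.
            step = self.planner.plan(task, self.memory)
            if step is None:  # the planner decides the task is answered
                break
            result = self.executor.execute(step)
            findings.append(result)
            self.memory.append(self.compress(step, result))  # Manager role
            # Test-time learning: the planner updates without pausing inference.
            self.planner.update(step, result)
        return findings&lt;/code&gt;&lt;/pre&gt;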
&lt;h3 id="why-now-1"&gt;Why now
&lt;/h3&gt;&lt;p&gt;This is the academic background for tools like &lt;a class="link" href="https://github.com/elder-plinius/agentmemory" target="_blank" rel="noopener"
 &gt;agentmemory&lt;/a&gt;. The fact that agentmemory and this paper landed within days of each other reflects the industry consensus that &lt;strong&gt;memory is the key differentiator for the next round of agents.&lt;/strong&gt; The Manager-Planner-Executor split looks like a strong candidate for a de facto standard pattern in future multi-agent frameworks. It should be read alongside the rise of standard tool interfaces like &lt;a class="link" href="https://modelcontextprotocol.io/" target="_blank" rel="noopener"
 &gt;MCP&lt;/a&gt;.&lt;/p&gt;
&lt;h2 id="3-husserlian-phenomenology--active-inference--220809058"&gt;3. Husserlian Phenomenology + Active Inference — 2208.09058
&lt;/h2&gt;&lt;p&gt;Mahault Albarracin, Riddhi J. Pitliya, &lt;a class="link" href="https://maxwelljdramstead.com/" target="_blank" rel="noopener"
 &gt;Maxwell J. D. Ramstead&lt;/a&gt;, Jeffrey Yoshimi (2022-08). A mapping of &lt;a class="link" href="https://www.fil.ion.ucl.ac.uk/~karl/" target="_blank" rel="noopener"
 &gt;Karl Friston&lt;/a&gt;&amp;rsquo;s &lt;a class="link" href="https://en.wikipedia.org/wiki/Free_energy_principle" target="_blank" rel="noopener"
 &gt;active inference&lt;/a&gt; framework onto &lt;a class="link" href="https://plato.stanford.edu/entries/husserl/" target="_blank" rel="noopener"
 &gt;Edmund Husserl&lt;/a&gt;&amp;rsquo;s &lt;a class="link" href="https://plato.stanford.edu/entries/phenomenology/" target="_blank" rel="noopener"
 &gt;phenomenology&lt;/a&gt;.&lt;/p&gt;
&lt;h3 id="the-idea-2"&gt;The idea
&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Phenomenology&lt;/strong&gt; is the rigorous descriptive study of conscious experience. The paper maps Husserl&amp;rsquo;s descriptions of consciousness onto the mathematical building blocks of &lt;strong&gt;active inference&lt;/strong&gt; — the neuroscience framework in which the brain predicts the world through a generative model.&lt;/p&gt;
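&lt;p&gt;For orientation, the machinery being mapped onto is the standard active-inference objective; this is the textbook form, not something lifted from the paper. Perception minimizes variational free energy over beliefs about hidden states; action selects policies that minimize expected free energy:&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-latex"&gt;% Variational free energy over beliefs q(s) about hidden states s,
% given observations o (standard form):
F = \mathbb{E}_{q(s)}\left[ \ln q(s) - \ln p(o, s) \right]
  = D_{\mathrm{KL}}\left[ q(s) \,\Vert\, p(s \mid o) \right] - \ln p(o)

% Expected free energy of a policy \pi, driving action selection:
G(\pi) = \sum_{\tau} \mathbb{E}_{q(o_\tau, s_\tau \mid \pi)}
         \left[ \ln q(s_\tau \mid \pi) - \ln p(o_\tau, s_\tau) \right]

% Rough temporal reading behind the mapping (my gloss, not the paper's notation):
%   retention          ~ q(s_{t-1})           beliefs about just-past states
%   primal impression  ~ q(s_t)               the living present
%   protention         ~ q(s_{t+1} \mid \pi)  anticipation under a policy&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The paper&amp;rsquo;s contribution is the correspondence itself, not new math: Husserl&amp;rsquo;s retention/protention structure reads naturally as beliefs over past and future hidden states in a generative model of this kind.&lt;/p&gt;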
&lt;h3 id="contribution-2"&gt;Contribution
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;Connects Husserl&amp;rsquo;s theory of time consciousness — retention/protention — to active inference&lt;/li&gt;
&lt;li&gt;A theoretical bridge between phenomenological description and computational neuroscience models&lt;/li&gt;
&lt;li&gt;Reinterprets the structure of consciousness as components of a &lt;strong&gt;generative model&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;A push for &lt;strong&gt;computational phenomenology&lt;/strong&gt; as an interdisciplinary field&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="why-now-2"&gt;Why now
&lt;/h3&gt;&lt;p&gt;This is the most abstract of the three but possibly the most interesting. As AI agents acquire &amp;ldquo;memory&amp;rdquo; and &amp;ldquo;reasoning,&amp;rdquo; &lt;strong&gt;how an agent structures its experience&lt;/strong&gt; becomes a philosophical question again.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;MIA&amp;rsquo;s evolving memory ≈ Husserl&amp;rsquo;s retention/protention?&lt;/li&gt;
&lt;li&gt;Multiagent debate ≈ the self-reflective structure of consciousness?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The paper was shared as a direct PDF link (&lt;code&gt;/pdf/&lt;/code&gt;), which suggests &lt;strong&gt;somebody actually read the full text.&lt;/strong&gt; Most likely someone senior in the chat is betting that the next move for AI agents comes from cognitive science.&lt;/p&gt;
&lt;h2 id="reading-the-three-together"&gt;Reading the three together
&lt;/h2&gt;&lt;p&gt;The three papers point in the same direction: &lt;strong&gt;single-LLM limits → inter-instance cooperation + evolving memory + borrowed structure of consciousness.&lt;/strong&gt;&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Axis&lt;/th&gt;
 &lt;th&gt;Answer&lt;/th&gt;
 &lt;th&gt;Paper&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Cooperation&lt;/td&gt;
 &lt;td&gt;Multi-instance debate&lt;/td&gt;
 &lt;td&gt;Multiagent Debate (2023)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Persistence&lt;/td&gt;
 &lt;td&gt;Compressed/evolving memory&lt;/td&gt;
 &lt;td&gt;MIA (2026)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Structure&lt;/td&gt;
 &lt;td&gt;Time consciousness → generative model&lt;/td&gt;
 &lt;td&gt;Husserl + Active Inference (2022)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The chat&amp;rsquo;s pick of the week accidentally forms a clean three-layer stack. Set alongside agentmemory + agent-skills (previous post), it shows that &lt;strong&gt;research, tooling, and practice are converging in the same direction.&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id="insights"&gt;Insights
&lt;/h2&gt;&lt;p&gt;The three papers come from different years and different topics, but read together they point at the same consensus — the way past the single-LLM reasoning plateau is not one more size class of model, but &lt;strong&gt;inter-instance cooperation, evolving memory, and explicit modeling of the structure of experience.&lt;/strong&gt; Multiagent Debate is the first clean answer to &amp;ldquo;how do we get instances to cooperate&amp;rdquo;; MIA answers &amp;ldquo;how do we accumulate that cooperation across time&amp;rdquo;; the Husserl + Active Inference mapping sets a longer-range coordinate for &amp;ldquo;what structure that accumulation should ultimately resemble.&amp;rdquo; The fact that practical tools like &lt;a class="link" href="https://github.com/elder-plinius/agentmemory" target="_blank" rel="noopener"
 &gt;agentmemory&lt;/a&gt; and agent-skills surfaced alongside these three papers within days is itself a signal that &lt;strong&gt;research, tooling, and practice are pulling in the same direction.&lt;/strong&gt; The differentiator in the next round is much more likely to be cooperation topology, memory evolution policy, and experience-structure modeling than raw model size.&lt;/p&gt;
&lt;h2 id="references"&gt;References
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Papers&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2305.14325" target="_blank" rel="noopener"
 &gt;Improving Factuality and Reasoning in Language Models through Multiagent Debate (2305.14325)&lt;/a&gt; — Du, Li, Torralba, Tenenbaum, Mordatch (&lt;a class="link" href="https://www.mit.edu/" target="_blank" rel="noopener"
 &gt;MIT&lt;/a&gt;, 2023)&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2604.04503" target="_blank" rel="noopener"
 &gt;Memory Intelligence Agent (2604.04503)&lt;/a&gt; — Qiao et al. (2026)&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2208.09058" target="_blank" rel="noopener"
 &gt;Mapping Husserlian Phenomenology onto Active Inference (2208.09058)&lt;/a&gt; — Albarracin, Pitliya, Ramstead, Yoshimi (2022)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Related concepts&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://en.wikipedia.org/wiki/Society_of_Mind" target="_blank" rel="noopener"
 &gt;Society of Mind&lt;/a&gt; — &lt;a class="link" href="https://en.wikipedia.org/wiki/Marvin_Minsky" target="_blank" rel="noopener"
 &gt;Marvin Minsky&lt;/a&gt;&amp;rsquo;s multi-agent theory of cognition&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://openai.com/index/introducing-deep-research/" target="_blank" rel="noopener"
 &gt;Deep Research Agent&lt;/a&gt; — OpenAI&amp;rsquo;s tool-using agent system&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://en.wikipedia.org/wiki/Free_energy_principle" target="_blank" rel="noopener"
 &gt;Active Inference / Free Energy Principle&lt;/a&gt; — &lt;a class="link" href="https://www.fil.ion.ucl.ac.uk/~karl/" target="_blank" rel="noopener"
 &gt;Karl Friston&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://plato.stanford.edu/entries/husserl/" target="_blank" rel="noopener"
 &gt;Husserlian phenomenology (SEP)&lt;/a&gt; · &lt;a class="link" href="https://plato.stanford.edu/entries/phenomenology/" target="_blank" rel="noopener"
 &gt;Phenomenology (SEP)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://modelcontextprotocol.io/" target="_blank" rel="noopener"
 &gt;Model Context Protocol (MCP)&lt;/a&gt; — emerging tool-interface standard&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://iclr.cc/Conferences/2025" target="_blank" rel="noopener"
 &gt;ICLR 2025&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Background reading&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/" target="_blank" rel="noopener"
 &gt;arXiv.org&lt;/a&gt; — preprint server&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://yilundu.github.io/" target="_blank" rel="noopener"
 &gt;Yilun Du&lt;/a&gt; · &lt;a class="link" href="https://cocosci.mit.edu/josh" target="_blank" rel="noopener"
 &gt;Joshua Tenenbaum&lt;/a&gt; · &lt;a class="link" href="https://groups.csail.mit.edu/vision/torralbalab/" target="_blank" rel="noopener"
 &gt;Antonio Torralba&lt;/a&gt; · &lt;a class="link" href="https://research.google/people/igor-mordatch/" target="_blank" rel="noopener"
 &gt;Igor Mordatch&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://maxwelljdramstead.com/" target="_blank" rel="noopener"
 &gt;Maxwell J. D. Ramstead&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://openai.com/index/advancing-voice-intelligence-with-new-models-in-the-api" target="_blank" rel="noopener"
 &gt;GPT-Realtime-2 (parallel tool calls)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item></channel></rss>