<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Nous Research on ICE-ICE-BEAR-BLOG</title><link>https://ice-ice-bear.github.io/tags/nous-research/</link><description>Recent content in Nous Research on ICE-ICE-BEAR-BLOG</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Sun, 10 May 2026 00:00:00 +0900</lastBuildDate><atom:link href="https://ice-ice-bear.github.io/tags/nous-research/index.xml" rel="self" type="application/rss+xml"/><item><title>Two Agent-Memory Architectures — MemPalace's Structured Index vs Hermes Agent's Self-Curating Scratchpad</title><link>https://ice-ice-bear.github.io/posts/2026-05-10-agent-memory-architectures/</link><pubDate>Sun, 10 May 2026 00:00:00 +0900</pubDate><guid>https://ice-ice-bear.github.io/posts/2026-05-10-agent-memory-architectures/</guid><description>&lt;img src="https://ice-ice-bear.github.io/" alt="Featured image of post Two Agent-Memory Architectures — MemPalace's Structured Index vs Hermes Agent's Self-Curating Scratchpad" /&gt;&lt;h2 id="overview"&gt;Overview
&lt;/h2&gt;&lt;p&gt;Two repos surfaced alongside each other on 2026-05-10 — &lt;a class="link" href="https://github.com/MemPalace/mempalace" target="_blank" rel="noopener"
 &gt;MemPalace/mempalace&lt;/a&gt; and &lt;a class="link" href="https://github.com/NousResearch/hermes-agent" target="_blank" rel="noopener"
 &gt;NousResearch/hermes-agent&lt;/a&gt; — and they put two opposite primitives for agent memory in head-to-head contact. One is a &lt;strong&gt;structured index&lt;/strong&gt; (wings/rooms/drawers plus a temporal knowledge graph), the other is an &lt;strong&gt;emergent scratchpad + self-improving skills + FTS5 recall&lt;/strong&gt;. If &lt;a class="link" href="https://ice-ice-bear.github.io/posts/2026-05-08-agent-os-layer-memory-skills/" target="_blank" rel="noopener"
 &gt;the previous OS-layer post&lt;/a&gt; traced how the memory and workflow slots are forming, this post pulls on the &lt;strong&gt;memory slot itself and finds it splitting into two design philosophies&lt;/strong&gt;.&lt;/p&gt;
&lt;pre class="mermaid" style="visibility:hidden"&gt;graph TD
 Task["Agent task"] --&gt; Decision{"Memory design choice"}
 Decision --&gt; Structured["Structured — MemPalace"]
 Decision --&gt; Emergent["Emergent — Hermes Agent"]

 Structured --&gt; Wings["wings / rooms / drawers &amp;lt;br/&amp;gt; verbatim storage"]
 Structured --&gt; KG["temporal knowledge graph &amp;lt;br/&amp;gt; SQLite + validity window"]
 Structured --&gt; MCP29["29 MCP tools &amp;lt;br/&amp;gt; explicit index calls"]

 Emergent --&gt; Scratch["conversation + note scratchpad"]
 Emergent --&gt; Skills["self-authored skills &amp;lt;br/&amp;gt; improve during use"]
 Emergent --&gt; FTS["FTS5 session search &amp;lt;br/&amp;gt; + LLM summarization"]

 Wings --&gt; Retrieve["scope queries to a wing"]
 Scratch --&gt; Recall["LLM triggers recall via tools"]&lt;/pre&gt;&lt;h2 id="1-mempalace--push-structured-indexing-to-its-limit"&gt;1. MemPalace — push structured indexing to its limit
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://github.com/MemPalace/mempalace" target="_blank" rel="noopener"
 &gt;MemPalace/mempalace&lt;/a&gt; bills itself as &lt;em&gt;&amp;ldquo;the best-benchmarked open-source AI memory system.&amp;rdquo;&lt;/em&gt; Created 2026-04-05, MIT, &lt;a class="link" href="https://github.com/MemPalace/mempalace/commits/main" target="_blank" rel="noopener"
 &gt;51,879 stars at the 2026-05-11 push&lt;/a&gt;. Its bet collapses to one sentence — &lt;strong&gt;store the original text without summarizing, and let pre-existing structure narrow the semantic search.&lt;/strong&gt;&lt;/p&gt;
&lt;h3 id="the-palace-structure"&gt;The palace structure
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;wings&lt;/strong&gt; — one per person or project; queries scope into a wing.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;rooms&lt;/strong&gt; — topic groups inside a wing.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;drawers&lt;/strong&gt; — the smallest unit, &lt;strong&gt;the verbatim text itself.&lt;/strong&gt; No summarizing, no extraction, no paraphrase.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;knowledge graph&lt;/strong&gt; — local &lt;a class="link" href="https://www.sqlite.org/" target="_blank" rel="noopener"
 &gt;SQLite&lt;/a&gt; with entities, relationships, and validity windows. When a fact stops being true, the layer marks it explicitly instead of leaving the LLM to figure it out.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;agent diaries&lt;/strong&gt; — every specialist agent gets its own wing and journal, discoverable at runtime via &lt;a class="link" href="https://mempalaceofficial.com/concepts/agents.html" target="_blank" rel="noopener"
 &gt;&lt;code&gt;mempalace_list_agents&lt;/code&gt;&lt;/a&gt; so the system prompt stays small.&lt;/li&gt;
&lt;/ul&gt;
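&lt;p&gt;The hierarchy above can be sketched in a few lines. This is an illustrative model, not MemPalace&amp;rsquo;s actual internals; class and field names are assumptions:&lt;/p&gt;

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the wing/room/drawer hierarchy: names and
# fields are illustrative, not MemPalace's real data model.

@dataclass
class Drawer:
    text: str  # verbatim source text, never summarized or paraphrased

@dataclass
class Room:
    topic: str
    drawers: list = field(default_factory=list)

@dataclass
class Wing:
    name: str  # one wing per person or project
    rooms: dict = field(default_factory=dict)

    def add(self, topic, text):
        self.rooms.setdefault(topic, Room(topic)).drawers.append(Drawer(text))

    def search(self, needle):
        # Scoping a query to one wing shrinks the candidate set before
        # any semantic ranking has to run.
        return [d.text for r in self.rooms.values()
                for d in r.drawers if needle in d.text]

wing = Wing("myapp")
wing.add("architecture", "We switched to GraphQL because REST fan-out was too chatty.")
print(wing.search("GraphQL"))
```

&lt;p&gt;The point of the sketch is the scoping order: wing first, room second, text match last, so the expensive step always sees the smallest possible pool.&lt;/p&gt;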
&lt;h3 id="benchmarks"&gt;Benchmarks
&lt;/h3&gt;&lt;p&gt;&lt;a class="link" href="https://arxiv.org/abs/2410.10813" target="_blank" rel="noopener"
 &gt;LongMemEval&lt;/a&gt;, 500 questions:&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Mode&lt;/th&gt;
 &lt;th&gt;R@5&lt;/th&gt;
 &lt;th&gt;LLM required&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Raw semantic search (no heuristics, no LLM)&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;96.6%&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;None&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Hybrid v4, 450q held-out&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;98.4%&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;None&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Hybrid v4 + LLM rerank, 500q&lt;/td&gt;
 &lt;td&gt;≥99%&lt;/td&gt;
 &lt;td&gt;Any capable model&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Plus &lt;a class="link" href="https://arxiv.org/abs/2402.17753" target="_blank" rel="noopener"
 &gt;LoCoMo&lt;/a&gt; R@10 88.9% (hybrid v5, 1,986 questions), ConvoMem 92.9% recall across 250 items, &lt;a class="link" href="https://aclanthology.org/2025.acl-long.0/" target="_blank" rel="noopener"
 &gt;MemBench&lt;/a&gt; (ACL 2025) R@5 80.3% across 8,500 items. Compared with &lt;a class="link" href="https://github.com/rohitg00/agentmemory" target="_blank" rel="noopener"
 &gt;agentmemory&lt;/a&gt;&amp;rsquo;s 95.2% on the same LongMemEval cut, MemPalace&amp;rsquo;s raw mode is +1.4pp ahead — &lt;strong&gt;the clearest signal that the marginal value of pre-baked structure shows up as retrieval recall.&lt;/strong&gt;&lt;/p&gt;
&lt;h3 id="setup"&gt;Setup
&lt;/h3&gt;&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;uv tool install mempalace
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mempalace init ~/projects/myapp
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Mine&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mempalace mine ~/projects/myapp &lt;span class="c1"&gt;# project files&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mempalace mine ~/.claude/projects/ --mode convos &lt;span class="c1"&gt;# Claude Code sessions&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# Search / load&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mempalace search &lt;span class="s2"&gt;&amp;#34;why did we switch to GraphQL&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mempalace wake-up
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;No API key, no cloud call, ChromaDB as the default, with a pluggable interface at &lt;a class="link" href="https://github.com/MemPalace/mempalace/blob/main/mempalace/backends/base.py" target="_blank" rel="noopener"
 &gt;&lt;code&gt;mempalace/backends/base.py&lt;/code&gt;&lt;/a&gt;. 29 &lt;a class="link" href="https://modelcontextprotocol.io/" target="_blank" rel="noopener"
 &gt;MCP&lt;/a&gt; tools cover palace reads/writes, graph operations, cross-wing navigation, drawer management, and agent diaries.&lt;/p&gt;
&lt;h3 id="what-it-argues"&gt;What it argues
&lt;/h3&gt;&lt;p&gt;MemPalace bets that &lt;strong&gt;memory quality is index quality.&lt;/strong&gt; Compression and summarization lose information, so it keeps drawers verbatim and lets wing/room scope shrink what the LLM has to wade through. The &lt;a class="link" href="https://mempalaceofficial.com/concepts/knowledge-graph.html" target="_blank" rel="noopener"
 &gt;knowledge graph&lt;/a&gt;&amp;rsquo;s validity windows are the more interesting move — they push &lt;strong&gt;fact decay over time&lt;/strong&gt; out of LLM reasoning and into the index layer.&lt;/p&gt;
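&lt;p&gt;A minimal sketch of what a validity window buys: close the old fact instead of deleting it, and point-in-time queries become a pure index operation. The schema and column names here are assumptions for illustration, not MemPalace&amp;rsquo;s actual graph layout:&lt;/p&gt;

```python
import sqlite3

# Sketch of a temporal fact table with validity windows; illustrative
# schema, not MemPalace's real one.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE facts(subject TEXT, predicate TEXT, object TEXT,"
           " valid_from TEXT, valid_to TEXT)")

# The API choice changed over time; the old fact is closed, not deleted.
db.executemany("INSERT INTO facts VALUES (?,?,?,?,?)", [
    ("myapp", "uses_api", "REST",    "2024-01-01", "2025-06-01"),
    ("myapp", "uses_api", "GraphQL", "2025-06-01", None),
])

def facts_at(when):
    # A fact holds at `when` if its window covers it; NULL valid_to
    # means the fact is still true today.
    rows = db.execute(
        "SELECT object FROM facts WHERE ? >= valid_from "
        "AND (valid_to IS NULL OR valid_to > ?)", (when, when))
    return [r[0] for r in rows]

print(facts_at("2025-01-01"))  # ['REST']
print(facts_at("2026-05-10"))  # ['GraphQL']
```

&lt;p&gt;The design choice this illustrates: &amp;ldquo;what was true on date X&amp;rdquo; never reaches the LLM as a reasoning problem; it is answered by a &lt;code&gt;WHERE&lt;/code&gt; clause.&lt;/p&gt;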
&lt;h2 id="2-hermes-agent--push-the-emergent-scratchpad-to-its-limit"&gt;2. Hermes Agent — push the emergent scratchpad to its limit
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://github.com/NousResearch/hermes-agent" target="_blank" rel="noopener"
 &gt;NousResearch/hermes-agent&lt;/a&gt; bills itself as &lt;em&gt;&amp;ldquo;the agent that grows with you.&amp;rdquo;&lt;/em&gt; MIT, built by &lt;a class="link" href="https://nousresearch.com" target="_blank" rel="noopener"
 &gt;Nous Research&lt;/a&gt;, &lt;a class="link" href="https://github.com/NousResearch/hermes-agent" target="_blank" rel="noopener"
 &gt;created 2025-07-22&lt;/a&gt;, 142,575 stars by 2026-05-11 — the larger crowd in this comparison set. Its bet is the opposite — &lt;strong&gt;memory is not a separate index, it is an emergent product of the agent operating itself.&lt;/strong&gt;&lt;/p&gt;
&lt;h3 id="four-streams-that-make-up-its-memory"&gt;Four streams that make up its memory
&lt;/h3&gt;&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;agent-curated memory + periodic nudges&lt;/strong&gt; — the agent decides what is worth keeping; nudges enforce persistence.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;self-authored skills&lt;/strong&gt; — after a complex task, the agent can register a skill to the &lt;a class="link" href="https://agentskills.io" target="_blank" rel="noopener"
 &gt;Skills Hub&lt;/a&gt;. Skills self-improve in use. Compatible with the &lt;a class="link" href="https://agentskills.io" target="_blank" rel="noopener"
 &gt;agentskills.io&lt;/a&gt; open standard.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;FTS5 session search + LLM summarization&lt;/strong&gt; — past conversations are searched via &lt;a class="link" href="https://www.sqlite.org/fts5.html" target="_blank" rel="noopener"
 &gt;SQLite FTS5&lt;/a&gt;; the LLM summarizes hits for cross-session recall.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;user modeling&lt;/strong&gt; — &lt;a class="link" href="https://github.com/plastic-labs/honcho" target="_blank" rel="noopener"
 &gt;plastic-labs/honcho&lt;/a&gt; dialectic user modeling builds a deepening picture of who you are across sessions.&lt;/li&gt;
&lt;/ol&gt;
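&lt;p&gt;Stream 3 can be sketched with stock SQLite. Table and column names are illustrative, and this assumes a SQLite build with FTS5 compiled in (standard in recent Python distributions):&lt;/p&gt;

```python
import sqlite3

# Minimal sketch of FTS5-backed session recall (stream 3 above).
# Illustrative schema, not Hermes Agent's actual storage layout.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE sessions USING fts5(started, transcript)")
db.executemany("INSERT INTO sessions VALUES (?, ?)", [
    ("2026-05-01", "debugged the GraphQL resolver timeout with the user"),
    ("2026-05-03", "planned the Telegram gateway rollout"),
])

def recall(query, limit=3):
    # BM25 ranking comes free with FTS5; in the architecture described
    # above, these raw hits would then go to the LLM for summarization
    # before being threaded into the agent's context.
    rows = db.execute(
        "SELECT started, transcript FROM sessions WHERE sessions MATCH ? "
        "ORDER BY rank LIMIT ?", (query, limit))
    return list(rows)

print(recall("GraphQL"))
```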
&lt;h3 id="where-it-runs"&gt;Where it runs
&lt;/h3&gt;&lt;p&gt;&lt;a class="link" href="https://telegram.org/" target="_blank" rel="noopener"
 &gt;Telegram&lt;/a&gt; · &lt;a class="link" href="https://discord.com/" target="_blank" rel="noopener"
 &gt;Discord&lt;/a&gt; · &lt;a class="link" href="https://slack.com/" target="_blank" rel="noopener"
 &gt;Slack&lt;/a&gt; · &lt;a class="link" href="https://www.whatsapp.com/" target="_blank" rel="noopener"
 &gt;WhatsApp&lt;/a&gt; · &lt;a class="link" href="https://signal.org/" target="_blank" rel="noopener"
 &gt;Signal&lt;/a&gt; · Email · CLI, all from one gateway process. Seven terminal backends — local, &lt;a class="link" href="https://www.docker.com/" target="_blank" rel="noopener"
 &gt;Docker&lt;/a&gt;, SSH, &lt;a class="link" href="https://sylabs.io/singularity/" target="_blank" rel="noopener"
 &gt;Singularity&lt;/a&gt;, &lt;a class="link" href="https://modal.com/" target="_blank" rel="noopener"
 &gt;Modal&lt;/a&gt;, &lt;a class="link" href="https://www.daytona.io/" target="_blank" rel="noopener"
 &gt;Daytona&lt;/a&gt;, &lt;a class="link" href="https://vercel.com/docs/vercel-sandbox" target="_blank" rel="noopener"
 &gt;Vercel Sandbox&lt;/a&gt; — with Modal and Daytona offering hibernation between sessions so idle cost is nearly zero. Not tied to a laptop.&lt;/p&gt;
&lt;h3 id="model-freedom"&gt;Model freedom
&lt;/h3&gt;&lt;p&gt;A single &lt;code&gt;hermes model&lt;/code&gt; swaps between &lt;a class="link" href="https://portal.nousresearch.com" target="_blank" rel="noopener"
 &gt;Nous Portal&lt;/a&gt;, &lt;a class="link" href="https://openrouter.ai" target="_blank" rel="noopener"
 &gt;OpenRouter&lt;/a&gt;, &lt;a class="link" href="https://build.nvidia.com" target="_blank" rel="noopener"
 &gt;NVIDIA NIM&lt;/a&gt;, &lt;a class="link" href="https://platform.xiaomimimo.com" target="_blank" rel="noopener"
 &gt;Xiaomi MiMo&lt;/a&gt;, &lt;a class="link" href="https://z.ai" target="_blank" rel="noopener"
 &gt;z.ai/GLM&lt;/a&gt;, &lt;a class="link" href="https://platform.moonshot.ai" target="_blank" rel="noopener"
 &gt;Kimi/Moonshot&lt;/a&gt;, &lt;a class="link" href="https://www.minimax.io" target="_blank" rel="noopener"
 &gt;MiniMax&lt;/a&gt;, &lt;a class="link" href="https://huggingface.co" target="_blank" rel="noopener"
 &gt;Hugging Face&lt;/a&gt;, OpenAI, or any custom endpoint. Because memory is an emergent operational byproduct rather than a model artifact, it follows the agent across model swaps.&lt;/p&gt;
&lt;h3 id="what-it-argues-1"&gt;What it argues
&lt;/h3&gt;&lt;p&gt;Hermes bets that &lt;strong&gt;memory has to be invoked — by the LLM itself.&lt;/strong&gt; Retrieval correctness is not the index&amp;rsquo;s job; the LLM decides mid-turn what slice of the past it needs, calls the &lt;a class="link" href="https://www.sqlite.org/fts5.html" target="_blank" rel="noopener"
 &gt;FTS5 search&lt;/a&gt; tool, builds a summary, and threads it into its own context. Skills are not written once but &lt;strong&gt;rewritten while being used&lt;/strong&gt; — living procedural memory.&lt;/p&gt;
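&lt;p&gt;The invocation pattern reduces to a small loop: the model emits a tool call mid-turn, the runtime executes it, and the result is appended to context. Everything below is a hypothetical sketch, not Hermes&amp;rsquo; actual API:&lt;/p&gt;

```python
# Hedged sketch of LLM-invoked recall. Tool names, message shapes, and
# the stand-in model are all assumptions made for illustration.

TOOLS = {
    "search_sessions": lambda q: [f"2026-05-01: notes matching {q!r}"],
}

def run_turn(model_step, context):
    # model_step stands in for one LLM decoding step that may request a tool.
    action = model_step(context)
    if action["type"] == "tool_call":
        result = TOOLS[action["name"]](action["arg"])
        context.append({"role": "tool", "content": result})
    return context

def fake_model(context):
    # A real model decides this mid-turn; here the decision is hard-coded.
    return {"type": "tool_call", "name": "search_sessions", "arg": "GraphQL"}

ctx = run_turn(fake_model, [
    {"role": "user", "content": "what did we decide about GraphQL?"},
])
print(ctx[-1]["content"])
```

&lt;p&gt;The contrast with MemPalace is visible in where correctness lives: here it depends on &lt;code&gt;fake_model&lt;/code&gt; choosing to call the tool at all, not on the quality of an index.&lt;/p&gt;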
&lt;h2 id="3-head-to-head"&gt;3. Head-to-head
&lt;/h2&gt;&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Field&lt;/th&gt;
 &lt;th&gt;MemPalace&lt;/th&gt;
 &lt;th&gt;Hermes Agent&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Maker&lt;/td&gt;
 &lt;td&gt;&lt;a class="link" href="https://github.com/MemPalace" target="_blank" rel="noopener"
 &gt;MemPalace&lt;/a&gt;&lt;/td&gt;
 &lt;td&gt;&lt;a class="link" href="https://nousresearch.com" target="_blank" rel="noopener"
 &gt;Nous Research&lt;/a&gt;&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;License&lt;/td&gt;
 &lt;td&gt;MIT&lt;/td&gt;
 &lt;td&gt;MIT&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Created&lt;/td&gt;
 &lt;td&gt;2026-04-05&lt;/td&gt;
 &lt;td&gt;2025-07-22&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Stars (5/11)&lt;/td&gt;
 &lt;td&gt;51,879&lt;/td&gt;
 &lt;td&gt;142,575&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Memory model&lt;/td&gt;
 &lt;td&gt;structured index + KG&lt;/td&gt;
 &lt;td&gt;scratchpad + emergent skills + FTS&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Storage&lt;/td&gt;
 &lt;td&gt;verbatim drawers&lt;/td&gt;
 &lt;td&gt;conversations, notes, skills; summarize on demand&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Time handling&lt;/td&gt;
 &lt;td&gt;graph validity windows&lt;/td&gt;
 &lt;td&gt;LLM reconstructs by summarizing&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Retrieval owner&lt;/td&gt;
 &lt;td&gt;the index (96.6% raw R@5)&lt;/td&gt;
 &lt;td&gt;the LLM via tools&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Model coupling&lt;/td&gt;
 &lt;td&gt;model-agnostic (raw = 0 LLM calls)&lt;/td&gt;
 &lt;td&gt;model-agnostic (10+ providers)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Interface&lt;/td&gt;
 &lt;td&gt;29 MCP tools + CLI&lt;/td&gt;
 &lt;td&gt;TUI + 6 messaging gateways&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Atomic unit&lt;/td&gt;
 &lt;td&gt;&lt;code&gt;mempalace search&lt;/code&gt;&lt;/td&gt;
 &lt;td&gt;a &lt;code&gt;hermes&lt;/code&gt; session&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;h2 id="4-which-scales-for-which-task"&gt;4. Which scales for which task
&lt;/h2&gt;&lt;pre class="mermaid" style="visibility:hidden"&gt;flowchart LR
 A["Task profile"] --&gt; B{"retrieval recall is top KPI?"}
 B --&gt;|Yes| C["Structured index &amp;lt;br/&amp;gt; MemPalace"]
 B --&gt;|No| D{"long-lived, multi-channel ops?"}
 D --&gt;|Yes| E["Scratchpad + self-learning &amp;lt;br/&amp;gt; Hermes Agent"]
 D --&gt;|No| F["Both overkill — &amp;lt;br/&amp;gt; long context suffices"]
 C --&gt; G["fact accuracy, time decay, &amp;lt;br/&amp;gt; multi-agent sharing"]
 E --&gt; H["persona learning, procedural memory, &amp;lt;br/&amp;gt; channel continuity"]&lt;/pre&gt;&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;When fact recall is the KPI&lt;/strong&gt; — customer history, codebase decision logs, the &amp;ldquo;when and why did we switch X&amp;rdquo; class of questions — &lt;strong&gt;MemPalace is the better fit.&lt;/strong&gt; 96.6% raw R@5 is a number nobody else has matched without an LLM in the loop.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;When the agent has to live across days and modalities&lt;/strong&gt; — start on Telegram, continue on Slack, run a cron job at 3am that ships a report — &lt;strong&gt;Hermes wins.&lt;/strong&gt; You trade away some retrieval precision for operational continuity.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Single-session, single-task workloads&lt;/strong&gt; — both are overkill. Today&amp;rsquo;s Claude and GPT context windows (hundreds of thousands to a million tokens) already absorb most of this. That is the load-bearing point — &lt;strong&gt;at one human, one session, neither is needed.&lt;/strong&gt; The price tag only shows up at &lt;em&gt;agent-team scale.&lt;/em&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="where-the-design-split-pays-off-at-team-scale"&gt;Where the design split pays off at team scale
&lt;/h3&gt;&lt;ul&gt;
&lt;li&gt;N specialists must share the same fact pool → MemPalace&amp;rsquo;s wings + cross-wing navigation is the direct answer.&lt;/li&gt;
&lt;li&gt;N channels must hold the same persona → Hermes&amp;rsquo; &lt;a class="link" href="https://github.com/plastic-labs/honcho" target="_blank" rel="noopener"
 &gt;Honcho&lt;/a&gt; dialectic modeling is the direct answer.&lt;/li&gt;
&lt;li&gt;N days of evolving procedure → Hermes&amp;rsquo; self-improving skills are the direct answer.&lt;/li&gt;
&lt;li&gt;N years of fact decay → MemPalace&amp;rsquo;s temporal knowledge graph is the direct answer.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A one-line summary the community surfaced — &lt;strong&gt;MemPalace is &amp;ldquo;accuracy infrastructure,&amp;rdquo; Hermes is &amp;ldquo;operations infrastructure.&amp;rdquo;&lt;/strong&gt; They share a word (&amp;ldquo;memory&amp;rdquo;) but their responsibilities barely overlap.&lt;/p&gt;
&lt;h2 id="insights"&gt;Insights
&lt;/h2&gt;&lt;p&gt;The thing worth taking from this digest is that two projects sitting at 51K and 142K stars at the same moment have defined &amp;ldquo;memory&amp;rdquo; in opposite directions. MemPalace sees &lt;strong&gt;memory as a searchable factual index&lt;/strong&gt; and has spent its design budget on retrieval accuracy (96.6% raw R@5) plus a temporal graph with validity windows. Hermes sees &lt;strong&gt;memory as an operational flow the LLM invokes&lt;/strong&gt; and has spent the same budget on scratchpads, self-improving skills, and continuity across messaging channels. Both deliberately decouple from the model — same direction as &lt;a class="link" href="https://ice-ice-bear.github.io/posts/2026-05-08-agent-os-layer-memory-skills/" target="_blank" rel="noopener"
 &gt;the prior OS-layer reading&lt;/a&gt; — but they draw the boundary between &amp;ldquo;what counts as the index&amp;rdquo; and &amp;ldquo;what counts as the agent&amp;rdquo; in opposite places. With current context windows nearly swallowing a single-user session whole, neither tool feels urgent today. The moment agents start operating as &lt;em&gt;teams&lt;/em&gt;, the two designs convert directly into different cost, accuracy, and operational stability tradeoffs. The interesting question for the next quarter is whether the index camp absorbs emergent scratchpads into the index, or whether the scratchpad camp pulls explicit graphs in as just another tool. Convergence in one direction looks more likely than a stable equilibrium.&lt;/p&gt;
&lt;h2 id="references"&gt;References
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Core repos&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/MemPalace/mempalace" target="_blank" rel="noopener"
 &gt;MemPalace/mempalace&lt;/a&gt; · official site &lt;a class="link" href="https://mempalaceofficial.com" target="_blank" rel="noopener"
 &gt;mempalaceofficial.com&lt;/a&gt; · &lt;a class="link" href="https://mempalaceofficial.com/concepts/the-palace.html" target="_blank" rel="noopener"
 &gt;palace concepts&lt;/a&gt; · &lt;a class="link" href="https://mempalaceofficial.com/concepts/knowledge-graph.html" target="_blank" rel="noopener"
 &gt;knowledge graph&lt;/a&gt; · &lt;a class="link" href="https://mempalaceofficial.com/reference/mcp-tools.html" target="_blank" rel="noopener"
 &gt;MCP tool reference&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/NousResearch/hermes-agent" target="_blank" rel="noopener"
 &gt;NousResearch/hermes-agent&lt;/a&gt; · docs at &lt;a class="link" href="https://hermes-agent.nousresearch.com/docs/" target="_blank" rel="noopener"
 &gt;hermes-agent.nousresearch.com/docs&lt;/a&gt; · &lt;a class="link" href="https://hermes-agent.nousresearch.com/docs/user-guide/features/memory" target="_blank" rel="noopener"
 &gt;memory guide&lt;/a&gt; · &lt;a class="link" href="https://hermes-agent.nousresearch.com/docs/user-guide/features/skills" target="_blank" rel="noopener"
 &gt;skills system&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Adjacent memory tools / comparison set&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/rohitg00/agentmemory" target="_blank" rel="noopener"
 &gt;rohitg00/agentmemory&lt;/a&gt; — the immediately preceding design in the same LongMemEval comparison set&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/plastic-labs/honcho" target="_blank" rel="noopener"
 &gt;plastic-labs/honcho&lt;/a&gt; — the dialectic user modeling Hermes embeds&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://agentskills.io" target="_blank" rel="noopener"
 &gt;agentskills.io&lt;/a&gt; — the open skill standard Hermes and OpenClaw share&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Protocols and runtimes&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://modelcontextprotocol.io/" target="_blank" rel="noopener"
 &gt;Model Context Protocol (MCP)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.sqlite.org/fts5.html" target="_blank" rel="noopener"
 &gt;SQLite FTS5&lt;/a&gt; — Hermes&amp;rsquo; session-search backend&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.trychroma.com/" target="_blank" rel="noopener"
 &gt;ChromaDB&lt;/a&gt; — MemPalace&amp;rsquo;s default vector backend&lt;/li&gt;
&lt;li&gt;Runtimes: &lt;a class="link" href="https://modal.com/" target="_blank" rel="noopener"
 &gt;Modal&lt;/a&gt; · &lt;a class="link" href="https://www.daytona.io/" target="_blank" rel="noopener"
 &gt;Daytona&lt;/a&gt; · &lt;a class="link" href="https://vercel.com/docs/vercel-sandbox" target="_blank" rel="noopener"
 &gt;Vercel Sandbox&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Benchmarks and papers&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2410.10813" target="_blank" rel="noopener"
 &gt;LongMemEval (arXiv:2410.10813, ICLR 2025)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2402.17753" target="_blank" rel="noopener"
 &gt;LoCoMo (arXiv:2402.17753)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://aclanthology.org/2025.acl-long.0/" target="_blank" rel="noopener"
 &gt;MemBench (ACL 2025)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item></channel></rss>