<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Knowledge Graph on ICE-ICE-BEAR-BLOG</title><link>https://ice-ice-bear.github.io/tags/knowledge-graph/</link><description>Recent content in Knowledge Graph on ICE-ICE-BEAR-BLOG</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Thu, 16 Apr 2026 00:00:00 +0900</lastBuildDate><atom:link href="https://ice-ice-bear.github.io/tags/knowledge-graph/index.xml" rel="self" type="application/rss+xml"/><item><title>GBrain — Garry Tan's AI Agent Memory System</title><link>https://ice-ice-bear.github.io/posts/2026-04-16-gbrain/</link><pubDate>Thu, 16 Apr 2026 00:00:00 +0900</pubDate><guid>https://ice-ice-bear.github.io/posts/2026-04-16-gbrain/</guid><description>&lt;h2 id="overview"&gt;Overview
&lt;/h2&gt;&lt;p&gt;&amp;ldquo;Your AI agent is smart but forgetful. GBrain gives it a brain.&amp;rdquo;&lt;/p&gt;
&lt;p&gt;GBrain is an open-source AI agent memory system built by Garry Tan, President and CEO of Y Combinator. It is not a toy or a demo — Tan built it for the agents he actually uses in production. The repository, written primarily in TypeScript and PLpgSQL, has already gathered 8,349 stars and 931 forks on GitHub.&lt;/p&gt;
&lt;h2 id="production-scale"&gt;Production Scale
&lt;/h2&gt;&lt;p&gt;GBrain&amp;rsquo;s production deployment speaks for itself:&lt;/p&gt;
&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Metric&lt;/th&gt;
 &lt;th&gt;Count&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Pages ingested&lt;/td&gt;
 &lt;td&gt;17,888&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;People tracked&lt;/td&gt;
 &lt;td&gt;4,383&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Companies indexed&lt;/td&gt;
 &lt;td&gt;723&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Cron jobs running&lt;/td&gt;
 &lt;td&gt;21&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Time to build&lt;/td&gt;
 &lt;td&gt;12 days&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;This is not a proof-of-concept. It is a working knowledge graph that powers real agent workflows every day.&lt;/p&gt;
&lt;h2 id="architecture-the-signal-to-memory-loop"&gt;Architecture: The Signal-to-Memory Loop
&lt;/h2&gt;&lt;p&gt;The core loop is straightforward: every message is a signal, and every signal gets processed through the brain.&lt;/p&gt;
&lt;pre class="mermaid" style="visibility:hidden"&gt;graph TD
 A["Signal Arrives"] --&gt; B["Signal Detector &amp;lt;br/&amp;gt; runs on every message"]
 B --&gt; C["Brain-Ops &amp;lt;br/&amp;gt; check brain first"]
 B --&gt; D["Entity Extraction &amp;lt;br/&amp;gt; people, companies, topics"]
 C --&gt; E["Respond with &amp;lt;br/&amp;gt; brain context"]
 E --&gt; F["Write back &amp;lt;br/&amp;gt; to knowledge graph"]
 F --&gt; G["Sync &amp;lt;br/&amp;gt; cross-agent memory"]
 D --&gt; F&lt;/pre&gt;&lt;p&gt;The key insight is that the signal detector fires on &lt;strong&gt;every single message&lt;/strong&gt; in parallel, capturing the agent&amp;rsquo;s thinking and extracting entities before the main response even begins. This means the brain is always accumulating context, not just when explicitly asked.&lt;/p&gt;
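&lt;p&gt;The loop above can be sketched in a few lines of TypeScript. All names here are illustrative, not GBrain&amp;rsquo;s actual API: the point is only that detection and response run concurrently, and the write-back happens on every message.&lt;/p&gt;

```typescript
// Hypothetical sketch of the signal-to-memory loop: the detector fires on
// every message, in parallel with the reply, so the graph always grows.
type Entity = { name: string; kind: "person" | "company" | "topic" };

// Naive entity-extraction stand-in (the real system would use an LLM call):
// treat runs of capitalized words as candidate entities.
function extractEntities(message: string): Entity[] {
  const matches = message.match(/\b[A-Z][a-z]+(?: [A-Z][a-z]+)*\b/g) ?? [];
  return Array.from(new Set(matches)).map((name) => ({ name, kind: "topic" }));
}

const graph: Entity[] = []; // stand-in for the knowledge graph

async function respond(message: string): Promise<string> {
  return `ack: ${message}`; // stand-in for the brain-grounded response
}

// Core loop: detection and response run concurrently; extracted entities
// are written back to the graph regardless of what the reply contains.
async function onMessage(message: string): Promise<string> {
  const [entities, reply] = await Promise.all([
    Promise.resolve(extractEntities(message)),
    respond(message),
  ]);
  graph.push(...entities); // write-back step
  return reply;
}
```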
&lt;h2 id="philosophy-thin-harness-fat-skills"&gt;Philosophy: Thin Harness, Fat Skills
&lt;/h2&gt;&lt;p&gt;GBrain follows a distinctive design philosophy: &lt;strong&gt;intelligence lives in skills, not in the runtime&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The harness itself is deliberately thin — it handles message routing, database connections, and the signal detection loop. Everything else is pushed into 25 skill files organized by a central &lt;code&gt;RESOLVER.md&lt;/code&gt;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;signal-detector&lt;/strong&gt; — always-on, fires on every message&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;brain-ops&lt;/strong&gt; — the 5-step lookup protocol before any external call&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;ingest&lt;/strong&gt; — pull in pages, documents, feeds&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;enrich&lt;/strong&gt; — add metadata, classify, link entities&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;query&lt;/strong&gt; — structured retrieval from the knowledge graph&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;maintain&lt;/strong&gt; — garbage collection, deduplication, health checks&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;daily-task-manager&lt;/strong&gt; — recurring workflows&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;cron-scheduler&lt;/strong&gt; — 21 cron jobs and counting&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;soul-audit&lt;/strong&gt; — personality and behavior consistency checks&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The phrase &amp;ldquo;skill files are code&amp;rdquo; captures this well. Each skill is a fat markdown document that encodes an entire workflow — not just a prompt template, but a complete operational specification with decision trees, error handling, and output formats.&lt;/p&gt;
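&lt;p&gt;As a rough sketch of what the &lt;code&gt;RESOLVER.md&lt;/code&gt; routing amounts to (triggers and file names here are hypothetical; the real resolver is a markdown document the agent reads, not TypeScript): the thin harness only decides which fat skill files to load.&lt;/p&gt;

```typescript
// Hypothetical routing table in the spirit of RESOLVER.md: each message is
// matched against triggers, and signal-detector is always included.
const SKILLS: Record<string, { file: string; trigger: RegExp | "always" }> = {
  "signal-detector": { file: "skills/signal-detector.md", trigger: "always" },
  "brain-ops": { file: "skills/brain-ops.md", trigger: /who|what|when|lookup/i },
  "ingest": { file: "skills/ingest.md", trigger: /ingest|import|feed/i },
  "query": { file: "skills/query.md", trigger: /find|search|list/i },
};

// Resolve which skill files the agent should load for a given message.
function resolve(message: string): string[] {
  return Object.values(SKILLS)
    .filter((s) => s.trigger === "always" || s.trigger.test(message))
    .map((s) => s.file);
}
```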
&lt;h2 id="brain-first-convention"&gt;Brain-First Convention
&lt;/h2&gt;&lt;p&gt;Before any agent reaches for an external API, it follows a strict 5-step brain lookup:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Check the knowledge graph for existing information&lt;/li&gt;
&lt;li&gt;Check recent signals for context&lt;/li&gt;
&lt;li&gt;Check entity relationships&lt;/li&gt;
&lt;li&gt;Check temporal patterns&lt;/li&gt;
&lt;li&gt;Only then, if needed, call an external API&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This &amp;ldquo;brain-first&amp;rdquo; convention dramatically reduces redundant API calls and ensures the agent&amp;rsquo;s responses are grounded in accumulated knowledge rather than fresh (and potentially inconsistent) lookups.&lt;/p&gt;
&lt;h2 id="technical-stack"&gt;Technical Stack
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;PGLite&lt;/strong&gt; deserves special mention. Instead of requiring a Postgres server, GBrain uses PGLite for instant database setup — about 2 seconds from zero to a running knowledge graph. No Docker, no server provisioning, no connection strings.&lt;/p&gt;
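&lt;p&gt;A minimal sketch against PGLite&amp;rsquo;s public API (the schema here is illustrative, not GBrain&amp;rsquo;s actual one): an embedded Postgres comes up in-process, with no server to provision.&lt;/p&gt;

```typescript
import { PGlite } from "@electric-sql/pglite";

// In-memory Postgres in the same process: no Docker, no connection string.
// Pass a directory path instead (e.g. new PGlite("./brain")) to persist.
const db = new PGlite();

await db.exec(`
  CREATE TABLE IF NOT EXISTS entities (
    id   serial PRIMARY KEY,
    name text NOT NULL,
    kind text NOT NULL
  );
`);

await db.query("INSERT INTO entities (name, kind) VALUES ($1, $2)", [
  "Y Combinator",
  "company",
]);

const { rows } = await db.query("SELECT name FROM entities WHERE kind = $1", [
  "company",
]);
```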
&lt;p&gt;The system also ships as an &lt;strong&gt;MCP server&lt;/strong&gt;, meaning it integrates directly with Claude Code, Cursor, and Windsurf. Any MCP-compatible tool can tap into the brain.&lt;/p&gt;
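&lt;p&gt;Registering an MCP server usually means adding a small JSON entry to the client&amp;rsquo;s config. The command and path below are hypothetical; check the GBrain README for the real ones:&lt;/p&gt;

```json
{
  "mcpServers": {
    "gbrain": {
      "command": "node",
      "args": ["/path/to/gbrain/dist/mcp-server.js"]
    }
  }
}
```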
&lt;p&gt;Installation takes roughly 30 minutes, and the agent handles its own setup — you point it at the repo and it bootstraps the database, installs skills, and configures cron jobs.&lt;/p&gt;
&lt;h2 id="why-it-matters"&gt;Why It Matters
&lt;/h2&gt;&lt;p&gt;Most AI agent frameworks focus on orchestration: how to chain LLM calls, how to manage tool use, how to handle errors. GBrain addresses a different problem entirely — &lt;strong&gt;persistent, structured memory across sessions and across agents&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The fact that it was built in 12 days and is already running at production scale (17,888 pages, 4,383 people) suggests that the &amp;ldquo;thin harness, fat skills&amp;rdquo; approach is not just philosophically clean but practically effective.&lt;/p&gt;
&lt;p&gt;GitHub: &lt;a class="link" href="https://github.com/garrytan/gbrain" target="_blank" rel="noopener"
 &gt;garrytan/gbrain&lt;/a&gt;&lt;/p&gt;</description></item></channel></rss>