<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Harfbuzz on ICE-ICE-BEAR-BLOG</title><link>https://ice-ice-bear.github.io/tags/harfbuzz/</link><description>Recent content in Harfbuzz on ICE-ICE-BEAR-BLOG</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Wed, 06 May 2026 00:00:00 +0900</lastBuildDate><atom:link href="https://ice-ice-bear.github.io/tags/harfbuzz/index.xml" rel="self" type="application/rss+xml"/><item><title>Polaris MCFG — A License-Safe Metric-Compatible Font Generator, Plus the LLM Eval Rubric Thread Next to It</title><link>https://ice-ice-bear.github.io/posts/2026-05-06-polaris-mcfg-and-llm-eval-rubric/</link><pubDate>Wed, 06 May 2026 00:00:00 +0900</pubDate><guid>https://ice-ice-bear.github.io/posts/2026-05-06-polaris-mcfg-and-llm-eval-rubric/</guid><description>&lt;h2 id="overview"&gt;Overview
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://github.com/PolarisOffice/polaris_mcfg" target="_blank" rel="noopener"
 &gt;PolarisOffice/polaris_mcfg&lt;/a&gt; appeared on 2026-04-26 — a tool that looks like it came out of the Polaris Office product team. It extracts &lt;strong&gt;only the layout metrics&lt;/strong&gt; from restricted fonts (think Hancom fonts, internal commercial fonts) and grafts them onto freely-licensed fonts like &lt;a class="link" href="https://fonts.google.com/noto/specimen/Noto&amp;#43;Sans" target="_blank" rel="noopener"
 &gt;Noto Sans&lt;/a&gt; and &lt;a class="link" href="https://github.com/orioncactus/pretendard" target="_blank" rel="noopener"
 &gt;Pretendard&lt;/a&gt; to produce a new font. The result: &lt;strong&gt;original line breaks and page boundaries preserved, license now safe&lt;/strong&gt;. What makes the timing interesting is that the chatroom conversation immediately around this share was about &lt;strong&gt;LLM evaluation rubrics&lt;/strong&gt;: two topics that look unrelated but both belong to production-grade engineering practice.&lt;/p&gt;
&lt;pre class="mermaid" style="visibility:hidden"&gt;graph TD
 Source["Source font.ttf &amp;lt;br/&amp;gt; (commercial/restricted)"] --&gt; Extract["mcfg extract"]
 Extract --&gt; Metrics["metrics.json &amp;lt;br/&amp;gt; advance/ascender/descender"]
 Free["Free font.ttf &amp;lt;br/&amp;gt; (Noto Sans/Pretendard)"] --&gt; Generate["mcfg generate"]
 Metrics --&gt; Generate
 Generate --&gt; Output["Polaris font.ttf &amp;lt;br/&amp;gt; OFL-safe"]
 Output --&gt; Validate["mcfg validate &amp;lt;br/&amp;gt; HarfBuzz render regression"]
 Validate --&gt; Pass["PASS &amp;lt;br/&amp;gt; advance widths match &amp;lt;br/&amp;gt; render within ±0.5 percent"]&lt;/pre&gt;&lt;h2 id="the-problem-it-solves"&gt;The Problem It Solves
&lt;/h2&gt;&lt;p&gt;Open a Hancom-authored .hwp or .docx in another environment and &lt;strong&gt;line breaks and page splits drift&lt;/strong&gt;. The visible glyph shapes aren&amp;rsquo;t the issue — the &lt;strong&gt;numeric metrics are&lt;/strong&gt;: advance width, ascender, descender, line gap. polaris_mcfg solves this with one clean cut: never touch the outline, only graft the numbers from one font onto another&amp;rsquo;s design.&lt;/p&gt;
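&lt;p&gt;As a rough sketch of that cut (the field names here are illustrative, not polaris_mcfg&amp;rsquo;s actual JSON schema), the graft is a pure function: numbers from the restricted source, everything else from the free design font:&lt;/p&gt;

```python
# Sketch of the metric graft, assuming metrics are already extracted to dicts.
# Field names are invented for illustration, not polaris_mcfg's JSON schema.

def graft_metrics(source_metrics: dict, design_font: dict) -> dict:
    """Return a new font description: design outlines, source numbers."""
    out = dict(design_font)                        # visible design stays 100% free-font
    out["ascender"] = source_metrics["ascender"]   # global vertical metrics
    out["descender"] = source_metrics["descender"]
    out["line_gap"] = source_metrics["line_gap"]
    # Per-glyph advance widths drive line breaking, so they come from the source too.
    out["advance_widths"] = dict(source_metrics["advance_widths"])
    return out

source = {"ascender": 800, "descender": -200, "line_gap": 90,
          "advance_widths": {"A": 556, "B": 611}}
design = {"ascender": 728, "descender": -210, "line_gap": 0,
          "advance_widths": {"A": 600, "B": 620},
          "outlines": {"A": "design-outline-A", "B": "design-outline-B"}}

grafted = graft_metrics(source, design)
print(grafted["advance_widths"]["A"])  # 556: line breaks now match the source
print(grafted["outlines"]["A"])        # design-outline-A: outline untouched
```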
&lt;h2 id="the-clean-separation--license-safe-boundary"&gt;The Clean Separation — License-Safe Boundary
&lt;/h2&gt;&lt;p&gt;The data the tool handles is &lt;strong&gt;numbers only&lt;/strong&gt;. Glyph outlines are never extracted, never copied. The visible design of the output font is 100% from the free font, and so is its license. The standard there is the &lt;a class="link" href="https://openfontlicense.org/" target="_blank" rel="noopener"
 &gt;SIL Open Font License (OFL)&lt;/a&gt; 1.1 — finalized in 2007 by Victor Gaultney and Nicolas Spalinger at SIL International, untouched for nearly 20 years, the de facto free-license standard for the font industry. Both Noto Sans and Pretendard ship under OFL.&lt;/p&gt;
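&lt;p&gt;In OpenType terms, &amp;ldquo;numbers only&amp;rdquo; maps onto a clean table split: metrics live in tables like &lt;code&gt;hhea&lt;/code&gt;, &lt;code&gt;hmtx&lt;/code&gt; and &lt;code&gt;OS/2&lt;/code&gt;, while outlines live in &lt;code&gt;glyf&lt;/code&gt;/&lt;code&gt;CFF&lt;/code&gt;. A minimal guard over that split (a general OpenType illustration, not the tool&amp;rsquo;s code):&lt;/p&gt;

```python
# Metric data vs. outline data, by OpenType table. The table tags are standard
# OpenType; the guard function itself is an illustration, not polaris_mcfg code.
METRIC_TABLES = {"hhea", "hmtx", "OS/2", "vhea", "vmtx"}    # numeric layout data
OUTLINE_TABLES = {"glyf", "loca", "CFF ", "CFF2", "gvar"}   # copyrighted shapes
                                                            # (note: "CFF " has a
                                                            # trailing space by spec)

def is_license_safe(tables_read: set) -> bool:
    """True if an extraction plan touches no outline-bearing table."""
    return tables_read.isdisjoint(OUTLINE_TABLES)

print(is_license_safe({"hhea", "hmtx", "OS/2"}))  # True
print(is_license_safe({"hmtx", "glyf"}))          # False: outlines would be copied
```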
&lt;h2 id="cli"&gt;CLI
&lt;/h2&gt;&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Subcommand&lt;/th&gt;
 &lt;th&gt;Purpose&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;code&gt;mcfg extract &amp;lt;font.ttf&amp;gt;&lt;/code&gt;&lt;/td&gt;
 &lt;td&gt;Metrics → JSON&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;code&gt;mcfg compare a b&lt;/code&gt;&lt;/td&gt;
 &lt;td&gt;Diff two fonts (or two JSONs); text/json/html output&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;code&gt;mcfg generate --metrics … --design …&lt;/code&gt;&lt;/td&gt;
 &lt;td&gt;Produce the synthesized font&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;code&gt;mcfg validate &amp;lt;font&amp;gt; --against …&lt;/code&gt;&lt;/td&gt;
 &lt;td&gt;Verify the metrics actually match&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-bash" data-lang="bash"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mcfg extract NotoSansKR-Bold.ttf -o bold.json
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mcfg generate &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --metrics bold.json &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --design NotoSansKR-Regular.ttf &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --output PolarisBoldMetrics-Regular.ttf &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --apply global,advance &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --license-text &lt;span class="s2"&gt;&amp;#34;SIL Open Font License 1.1&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;mcfg validate PolarisBoldMetrics-Regular.ttf &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --against NotoSansKR-Bold.ttf &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --render-default &lt;span class="se"&gt;\
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; --render-tolerance-pct 0.5
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="c1"&gt;# → result: PASS (advance widths match, rendering within ±0.5%)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;Validation runs through &lt;a class="link" href="https://harfbuzz.github.io/" target="_blank" rel="noopener"
 &gt;HarfBuzz&lt;/a&gt;, the de facto OpenType shaping engine — the only way to confirm the metric graft really worked is to render real text and compare pixels.&lt;/p&gt;
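&lt;p&gt;The two checks that validation implies can be sketched without HarfBuzz itself: exact advance-width equality, plus a pixel-diff percentage under the tolerance. The shaped advances and rasters below are toy data, not real shaper output:&lt;/p&gt;

```python
# Sketch of the two validate checks the post describes. In the real tool the
# advances would come from HarfBuzz shaping; here they are just example lists.

def advances_match(a: list, b: list) -> bool:
    """Advance widths must match exactly for line breaks to be preserved."""
    return a == b

def render_within_tolerance(img_a, img_b, tolerance_pct: float = 0.5) -> bool:
    """Percentage of differing pixels must stay within the tolerance."""
    assert len(img_a) == len(img_b)
    diff = sum(1 for p, q in zip(img_a, img_b) if p != q)
    return not (100.0 * diff / len(img_a) > tolerance_pct)

shaped_src = [556, 611, 278]          # made-up shaped advances for the source font
shaped_out = [556, 611, 278]          # same text shaped with the generated font
pixels_src = [0] * 995 + [255] * 5    # toy rasters: 1000 pixels, 1 differs
pixels_out = [0] * 996 + [255] * 4

print(advances_match(shaped_src, shaped_out))           # True
print(render_within_tolerance(pixels_src, pixels_out))  # True: 0.1% is under 0.5%
```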
&lt;h2 id="milestones-and-license-responsibility"&gt;Milestones and License Responsibility
&lt;/h2&gt;&lt;p&gt;M1 (metric extractor + JSON schema) through M7 (packaging and docs) are all complete; 84 tests pass. Tool code is MIT; output fonts inherit the design font&amp;rsquo;s license (OFL or similar). One important caveat: &lt;strong&gt;whether the source font&amp;rsquo;s EULA permits metric extraction is the user&amp;rsquo;s responsibility&lt;/strong&gt; (Requirements.md §6). The tool is not an automated license-laundering machine — it&amp;rsquo;s an honest separation tool, and the README is explicit about that.&lt;/p&gt;
&lt;h2 id="the-llm-eval-rubric-thread-next-to-it"&gt;The LLM Eval Rubric Thread Next to It
&lt;/h2&gt;&lt;p&gt;Around the same time, an unexpectedly pointed take on LLM evaluation surfaced:&lt;/p&gt;

 &lt;blockquote&gt;
 &lt;p&gt;&amp;ldquo;Vector similarity and RAGAS metrics aren&amp;rsquo;t really suitable for grading. Free-form grading inevitably has to go through an LLM, and the standard practice is to write the evaluation rubric first and base everything on that.&amp;rdquo;&lt;/p&gt;

 &lt;/blockquote&gt;
&lt;p&gt;This single line compresses the production wisdom of LLM-as-Judge into three points. (1) &lt;a class="link" href="https://github.com/explodinggradients/ragas" target="_blank" rel="noopener"
 &gt;Vector similarity and RAGAS&lt;/a&gt; measure semantic overlap but don&amp;rsquo;t by themselves constitute a grading standard. (2) Free-form answers can only be graded by an LLM; rule-based scoring can&amp;rsquo;t cover them. (3) Write the rubric first. &amp;ldquo;Tell me if this answer is good&amp;rdquo; doesn&amp;rsquo;t work as a prompt; you need an &lt;strong&gt;explicit grading scheme&lt;/strong&gt; before you&amp;rsquo;ll get consistent judgments.&lt;/p&gt;
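&lt;p&gt;&amp;ldquo;Rubric first&amp;rdquo; can be made concrete as data plus a generated judge prompt. The criteria and weights below are invented for illustration and don&amp;rsquo;t come from any specific framework:&lt;/p&gt;

```python
# Sketch of rubric-first LLM-as-Judge: the grading criteria are written down as
# data, and the judge prompt is generated from them. Criteria, descriptions and
# weights are hypothetical, not taken from DeepEval, RAGAS or OpenAI Evals.
RUBRIC = [
    ("faithfulness", "Every claim is supported by the provided context.", 0.4),
    ("completeness", "All parts of the question are addressed.", 0.3),
    ("clarity", "The answer is direct and free of filler.", 0.3),
]

def build_judge_prompt(question: str, answer: str) -> str:
    criteria = "\n".join(f"- {name} (weight {w}): {desc}"
                         for name, desc, w in RUBRIC)
    return (
        "Grade the answer against each criterion below on a 1-5 scale,\n"
        "then report a weighted total. Justify each score in one sentence.\n\n"
        "Criteria:\n" + criteria +
        f"\n\nQuestion: {question}\nAnswer: {answer}"
    )

prompt = build_judge_prompt("What does mcfg extract do?",
                            "It dumps a font's layout metrics to JSON.")
print(prompt.splitlines()[0])  # the judge gets a grading scheme, not "is this good?"
```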
&lt;p&gt;This matches exactly where every modern LLM eval framework — &lt;a class="link" href="https://github.com/confident-ai/deepeval" target="_blank" rel="noopener"
 &gt;DeepEval&lt;/a&gt;, &lt;a class="link" href="https://github.com/evidentlyai/evidently" target="_blank" rel="noopener"
 &gt;Evidently&lt;/a&gt;, &lt;a class="link" href="https://github.com/openai/evals" target="_blank" rel="noopener"
 &gt;OpenAI Evals&lt;/a&gt; — is heading. &lt;strong&gt;Rubric-driven judging is now the standard.&lt;/strong&gt;&lt;/p&gt;
&lt;h2 id="insights"&gt;Insights
&lt;/h2&gt;&lt;p&gt;That a font metric extractor and an LLM evaluation rubric thread surfaced at the same moment says something about the audience: &lt;strong&gt;these are people who are actually shipping product&lt;/strong&gt;. The two topics look unrelated, but the underlying move is identical: both reduce intuition-dependent territory to explicit, verifiable rules. The font tool reduces &amp;ldquo;are these metrics compatible&amp;rdquo; to a HarfBuzz rendering regression. LLM-as-Judge reduces &amp;ldquo;is this answer good&amp;rdquo; to a rubric. Both demand an automated verification step before they&amp;rsquo;re production-ready, and that verification step ends up defining the tool&amp;rsquo;s identity. That polaris_mcfg ships a &lt;code&gt;validate&lt;/code&gt; subcommand at all, and that LLM eval frameworks treat rubrics as first-class objects, are expressions of the same engineering instinct. In production, &amp;ldquo;it just works&amp;rdquo; is not the finish line; &lt;strong&gt;explicit criteria + automated verification + regression tracking&lt;/strong&gt; is the new bar, and these two topics point to the same place from very different starting points.&lt;/p&gt;
&lt;h2 id="references"&gt;References
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Tool repo and demo&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/PolarisOffice/polaris_mcfg" target="_blank" rel="noopener"
 &gt;PolarisOffice/polaris_mcfg&lt;/a&gt; — Metric-Compatible Font Generator (MIT, Python, 4 stars)&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://polarisoffice.github.io/polaris_mcfg/" target="_blank" rel="noopener"
 &gt;Demo / docs site&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Font ecosystem&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://harfbuzz.github.io/" target="_blank" rel="noopener"
 &gt;HarfBuzz&lt;/a&gt; — OpenType shaping engine&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://openfontlicense.org/" target="_blank" rel="noopener"
 &gt;SIL Open Font License&lt;/a&gt; — de facto free-license standard (OFL 1.1, 2007)&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.sil.org/" target="_blank" rel="noopener"
 &gt;SIL International&lt;/a&gt; — OFL stewards&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://fonts.google.com/noto/specimen/Noto&amp;#43;Sans" target="_blank" rel="noopener"
 &gt;Noto Sans&lt;/a&gt; and &lt;a class="link" href="https://github.com/orioncactus/pretendard" target="_blank" rel="noopener"
 &gt;Pretendard&lt;/a&gt; — OFL-licensed Hangul fonts&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;LLM evaluation methodology&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/explodinggradients/ragas" target="_blank" rel="noopener"
 &gt;RAGAS&lt;/a&gt; — RAG evaluation framework&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/confident-ai/deepeval" target="_blank" rel="noopener"
 &gt;DeepEval&lt;/a&gt; — LLM-as-Judge + rubric-based eval&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/evidentlyai/evidently" target="_blank" rel="noopener"
 &gt;Evidently&lt;/a&gt; — ML/LLM monitoring and eval&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/openai/evals" target="_blank" rel="noopener"
 &gt;OpenAI Evals&lt;/a&gt; — OpenAI&amp;rsquo;s official eval framework&lt;/li&gt;
&lt;/ul&gt;</description></item></channel></rss>