<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Ai Finance on ICE-ICE-BEAR-BLOG</title><link>https://ice-ice-bear.github.io/tags/ai-finance/</link><description>Recent content in Ai Finance on ICE-ICE-BEAR-BLOG</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Sun, 10 May 2026 00:00:00 +0900</lastBuildDate><atom:link href="https://ice-ice-bear.github.io/tags/ai-finance/index.xml" rel="self" type="application/rss+xml"/><item><title>Microsoft qlib — The Quant Backbone LLM Agents Will Ride On</title><link>https://ice-ice-bear.github.io/posts/2026-05-10-microsoft-qlib-quant-ai/</link><pubDate>Sun, 10 May 2026 00:00:00 +0900</pubDate><guid>https://ice-ice-bear.github.io/posts/2026-05-10-microsoft-qlib-quant-ai/</guid><description>&lt;h2 id="overview"&gt;Overview
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://github.com/microsoft/qlib" target="_blank" rel="noopener"
 &gt;Microsoft qlib&lt;/a&gt; — first open-sourced in August 2020 — is an AI-oriented quantitative investment platform that just crossed 42K stars. It is not a new project, yet it is re-surfacing in 2026 for a specific reason: &lt;strong&gt;LLM-based financial agents&lt;/strong&gt; (notably &lt;a class="link" href="https://github.com/microsoft/RD-Agent" target="_blank" rel="noopener"
 &gt;microsoft/RD-Agent&lt;/a&gt; and its &lt;a class="link" href="https://arxiv.org/abs/2505.15155" target="_blank" rel="noopener"
 &gt;R&amp;amp;D-Agent-Quant&lt;/a&gt; paper) now automatically &lt;strong&gt;mine alpha factors and optimize models&lt;/strong&gt;, and the moment that loop becomes real, you need a &lt;strong&gt;reproducible quant workflow&lt;/strong&gt; underneath to score what the LLM proposes. qlib happens to be the most actively maintained open-source one. The framing shift matters: qlib is no longer &amp;ldquo;yet another backtesting library&amp;rdquo; — it has become the &lt;strong&gt;rails&lt;/strong&gt; the LLM agents are riding on.&lt;/p&gt;
&lt;pre class="mermaid"&gt;graph TD
 Data["Data ingestion &amp;lt;br/&amp;gt; Yahoo, China A-shares, CSV"] --&gt; Storage["Qlib binary storage &amp;lt;br/&amp;gt; columnar files"]
 Storage --&gt; Expr["Expression engine &amp;lt;br/&amp;gt; $close, Ref, Mean"]
 Expr --&gt; Factor["Alpha factor library &amp;lt;br/&amp;gt; Alpha158, Alpha360"]
 Factor --&gt; Model["Model training &amp;lt;br/&amp;gt; LightGBM, GRU, TRA"]
 Model --&gt; Signal["Forecast signal &amp;lt;br/&amp;gt; IC, Rank IC"]
 Signal --&gt; Strat["Portfolio strategy &amp;lt;br/&amp;gt; TopK Dropout"]
 Strat --&gt; Bt["Backtesting &amp;lt;br/&amp;gt; cost and slippage"]
 Bt --&gt; Report["Performance report &amp;lt;br/&amp;gt; IR, MDD, cumulative"]
 Report --&gt; RD["RD-Agent LLM &amp;lt;br/&amp;gt; auto factor proposal loop"]
 RD -.-&gt;|feedback| Factor&lt;/pre&gt;&lt;h2 id="1-what-qlib-actually-does"&gt;1. What qlib actually does
&lt;/h2&gt;&lt;p&gt;The &lt;a class="link" href="https://github.com/microsoft/qlib/blob/main/README.md" target="_blank" rel="noopener"
 &gt;qlib README&lt;/a&gt; phrases it as &amp;ldquo;exploring ideas to implementing productions&amp;rdquo;. Decomposed, it is four layers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Layer 1 — data infrastructure.&lt;/strong&gt; qlib uses its own &lt;a class="link" href="https://qlib.readthedocs.io/en/latest/component/data.html" target="_blank" rel="noopener"
 &gt;columnar binary format&lt;/a&gt; to store time-series data. Daily and minute bars that would blow up a pandas DataFrame get compressed into a form that supports &lt;strong&gt;fast slicing&lt;/strong&gt;. Data collectors cover both &lt;a class="link" href="https://github.com/microsoft/qlib/tree/main/scripts/data_collector/yahoo" target="_blank" rel="noopener"
 &gt;Yahoo Finance&lt;/a&gt; and the &lt;a class="link" href="https://github.com/microsoft/qlib/tree/main/scripts/data_collector" target="_blank" rel="noopener"
 &gt;China A-share ecosystem&lt;/a&gt;, and the community-maintained &lt;a class="link" href="https://github.com/chenditc/investment_data" target="_blank" rel="noopener"
 &gt;chenditc/investment_data&lt;/a&gt; mirror has become a standard fallback.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Layer 2 — expression engine.&lt;/strong&gt; Factors are declared with domain-specific syntax like &lt;code&gt;$close&lt;/code&gt;, &lt;code&gt;Ref($close, 1)&lt;/code&gt;, &lt;code&gt;Mean($close, 3)&lt;/code&gt;, &lt;code&gt;$high-$low&lt;/code&gt;. This looks trivial but is structurally important — factors are &lt;strong&gt;declared as functions, not as data&lt;/strong&gt;, which means an LLM can learn the natural-language-to-qlib-expression translation. That is the first contact surface with RD-Agent.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Layer 3 — model zoo.&lt;/strong&gt; Browse &lt;a class="link" href="https://github.com/microsoft/qlib/tree/main/examples/benchmarks" target="_blank" rel="noopener"
 &gt;examples/benchmarks&lt;/a&gt; and you find &lt;a class="link" href="https://lightgbm.readthedocs.io/" target="_blank" rel="noopener"
 &gt;LightGBM&lt;/a&gt;, &lt;a class="link" href="https://xgboost.readthedocs.io/" target="_blank" rel="noopener"
 &gt;XGBoost&lt;/a&gt;, &lt;a class="link" href="https://qlib.readthedocs.io/en/latest/component/model.html" target="_blank" rel="noopener"
 &gt;MLP&lt;/a&gt;, &lt;a class="link" href="https://qlib.readthedocs.io/en/latest/component/model.html" target="_blank" rel="noopener"
 &gt;GRU&lt;/a&gt;, &lt;a class="link" href="https://github.com/microsoft/qlib/pull/508" target="_blank" rel="noopener"
 &gt;Transformer / Localformer&lt;/a&gt;, &lt;a class="link" href="https://github.com/microsoft/qlib/pull/205" target="_blank" rel="noopener"
 &gt;TabNet&lt;/a&gt;, &lt;a class="link" href="https://github.com/microsoft/qlib/pull/286" target="_blank" rel="noopener"
 &gt;DoubleEnsemble&lt;/a&gt;, &lt;a class="link" href="https://github.com/microsoft/qlib/pull/1040" target="_blank" rel="noopener"
 &gt;HIST / IGMTF&lt;/a&gt;, &lt;a class="link" href="https://github.com/microsoft/qlib/pull/531" target="_blank" rel="noopener"
 &gt;TRA (Temporal Routing Adaptor)&lt;/a&gt;, &lt;a class="link" href="https://github.com/microsoft/qlib/pull/491" target="_blank" rel="noopener"
 &gt;TCTS&lt;/a&gt;, &lt;a class="link" href="https://github.com/microsoft/qlib/pull/689" target="_blank" rel="noopener"
 &gt;ADARNN&lt;/a&gt;, &lt;a class="link" href="https://github.com/microsoft/qlib/pull/704" target="_blank" rel="noopener"
 &gt;ADD&lt;/a&gt;, and &lt;a class="link" href="https://github.com/microsoft/qlib/pull/1414" target="_blank" rel="noopener"
 &gt;KRNN / Sandwich&lt;/a&gt; — most of the SOTA time-series architectures from academia sitting behind a single interface.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Layer 4 — backtest and execution.&lt;/strong&gt; The &lt;a class="link" href="https://qlib.readthedocs.io/en/latest/component/highfreq.html" target="_blank" rel="noopener"
 &gt;Nested Decision Framework&lt;/a&gt; lets you stack a daily strategy and a minute-level execution policy in the same decision tree. &lt;a class="link" href="https://github.com/microsoft/qlib/pull/290" target="_blank" rel="noopener"
 &gt;Online serving&lt;/a&gt; automates model rolling. The &lt;a class="link" href="https://qlib.readthedocs.io/en/latest/component/rl.html" target="_blank" rel="noopener"
 &gt;RL learning framework&lt;/a&gt; models order execution as a continuous decision problem.&lt;/p&gt;
&lt;h2 id="2-why-microsoft-open-sourced-it"&gt;2. Why Microsoft open-sourced it
&lt;/h2&gt;&lt;p&gt;The original &lt;a class="link" href="https://arxiv.org/abs/2009.11189" target="_blank" rel="noopener"
 &gt;qlib paper&lt;/a&gt; came out of the time-series and finance group at &lt;a class="link" href="https://www.microsoft.com/en-us/research/lab/microsoft-research-asia/" target="_blank" rel="noopener"
 &gt;Microsoft Research Asia (MSRA)&lt;/a&gt;. The surface reason is &amp;ldquo;open research&amp;rdquo;. In practice, three motivators stack on top of each other.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Research credibility capital.&lt;/strong&gt; Time-series ML papers — &lt;a class="link" href="https://arxiv.org/abs/2110.13716" target="_blank" rel="noopener"
 &gt;HIST&lt;/a&gt;, &lt;a class="link" href="https://arxiv.org/abs/2201.04038" target="_blank" rel="noopener"
 &gt;DDG-DA&lt;/a&gt;, &lt;a class="link" href="https://arxiv.org/abs/2108.04443" target="_blank" rel="noopener"
 &gt;ADARNN&lt;/a&gt;, &lt;a class="link" href="https://arxiv.org/abs/2106.12950" target="_blank" rel="noopener"
 &gt;TRA&lt;/a&gt; — are all reproducible on the same platform. The graphs in the paper match runnable code, so MSRA&amp;rsquo;s time-series papers escape the &amp;ldquo;is the implementation actually real&amp;rdquo; suspicion.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Talent pipeline.&lt;/strong&gt; Students and interns in &lt;a class="link" href="https://www.microsoft.com/en-us/research/people/jiabia/" target="_blank" rel="noopener"
 &gt;Jiang Bian&amp;rsquo;s group&lt;/a&gt; write papers on top of qlib and then disperse to Microsoft, hedge funds, and big tech post-graduation. The open-source is a recruiting funnel.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Azure ML adjacency.&lt;/strong&gt; qlib&amp;rsquo;s workflow manager hooks directly into &lt;a class="link" href="https://mlflow.org/" target="_blank" rel="noopener"
 &gt;MLflow&lt;/a&gt; experiment tracking. The moment Azure ML standardized on MLflow compatibility, qlib became the most natural domain-specific ML stack to run on Azure.&lt;/p&gt;
&lt;h2 id="3-how-it-compares-to-pyfolio--zipline--vectorbt"&gt;3. How it compares to pyfolio / zipline / vectorbt
&lt;/h2&gt;&lt;p&gt;The legacy open-source quant stack is pre-ML in design.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/quantopian/zipline" target="_blank" rel="noopener"
 &gt;zipline&lt;/a&gt; — Quantopian&amp;rsquo;s backtest engine, now kept alive via the &lt;a class="link" href="https://github.com/stefan-jansen/zipline-reloaded" target="_blank" rel="noopener"
 &gt;zipline-reloaded&lt;/a&gt; fork after Quantopian shut down in 2020. Centered on &lt;strong&gt;event-driven backtesting&lt;/strong&gt;; ML workflow lives outside.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/quantopian/pyfolio" target="_blank" rel="noopener"
 &gt;pyfolio&lt;/a&gt; — &lt;strong&gt;post-hoc analysis&lt;/strong&gt; of backtest results. IR, drawdown, factor exposure. Does not touch training.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://vectorbt.dev/" target="_blank" rel="noopener"
 &gt;vectorbt&lt;/a&gt; — vectorized backtesting, great for &lt;strong&gt;fast parameter sweeps&lt;/strong&gt;. Built for fast simulation of a single strategy, not ML-first.&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.backtrader.com/" target="_blank" rel="noopener"
 &gt;backtrader&lt;/a&gt; — event-driven, retail-friendly. Same constraint.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;qlib&amp;rsquo;s distinction is that it unifies the &lt;strong&gt;entire time-series ML pipeline&lt;/strong&gt; under one interface. Data ingestion → factor expressions → model training → signal evaluation → backtest → analysis → online serving, all driven by a single &lt;code&gt;qrun&lt;/code&gt; command against a &lt;a class="link" href="https://github.com/microsoft/qlib/blob/main/examples/benchmarks/LightGBM/workflow_config_lightgbm_Alpha158.yaml" target="_blank" rel="noopener"
 &gt;YAML workflow&lt;/a&gt;. This shape is &lt;strong&gt;easy for an LLM agent to call&lt;/strong&gt; — one natural-language command maps to one YAML, and the result metrics (IC, Rank IC, IR, MDD) come back as a single JSON.&lt;/p&gt;
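&lt;p&gt;For orientation, such a workflow file has roughly the following shape — an abbreviated, paraphrased sketch of the linked Alpha158 config, with field values shown for illustration rather than copied verbatim:&lt;/p&gt;

```yaml
# Abbreviated sketch of a qlib workflow config (illustrative, not verbatim).
qlib_init:
    provider_uri: "~/.qlib/qlib_data/cn_data"
    region: cn
task:
    model:
        class: LGBModel
        module_path: qlib.contrib.model.gbdt
        kwargs:
            loss: mse
            learning_rate: 0.2
    dataset:
        class: DatasetH
        module_path: qlib.data.dataset
        kwargs:
            handler:
                class: Alpha158
                module_path: qlib.contrib.data.handler
            segments:
                train: [2008-01-01, 2014-12-31]
                valid: [2015-01-01, 2016-12-31]
                test: [2017-01-01, 2020-08-01]
```

&lt;p&gt;Everything an agent needs to vary — model class, hyperparameters, factor handler, date splits — is a field in this one file.&lt;/p&gt;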
&lt;h2 id="4-llm-meets-quant--enter-rd-agent"&gt;4. LLM-meets-quant — enter RD-Agent
&lt;/h2&gt;&lt;p&gt;&lt;a class="link" href="https://github.com/microsoft/RD-Agent" target="_blank" rel="noopener"
 &gt;RD-Agent&lt;/a&gt; — released by Microsoft on Aug 8, 2024 and formalized in the &lt;a class="link" href="https://arxiv.org/abs/2505.15155" target="_blank" rel="noopener"
 &gt;R&amp;amp;D-Agent-Quant paper&lt;/a&gt; — is an &lt;strong&gt;LLM-based autonomous evolving agent&lt;/strong&gt; framework. The name sounds generic, but the first concrete use case is precisely &lt;strong&gt;automated alpha factor mining&lt;/strong&gt; on top of qlib.&lt;/p&gt;
&lt;p&gt;The loop looks like this.&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;An LLM reads financial domain text — papers, reports, news — and proposes &lt;strong&gt;factor hypotheses&lt;/strong&gt; in natural language&lt;/li&gt;
&lt;li&gt;Each hypothesis is compiled into a &lt;a class="link" href="https://qlib.readthedocs.io/en/latest/component/data.html#feature-engineering" target="_blank" rel="noopener"
 &gt;qlib expression&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;qlib applies the factor to historical data and computes &lt;strong&gt;IC / Rank IC&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Factors that score well survive; the rest go back to the LLM as feedback for the next round&lt;/li&gt;
&lt;li&gt;A similar loop exists at the model layer — hyperparameter and architecture search&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;What is interesting structurally is that the LLM is &lt;strong&gt;not imitating a human&lt;/strong&gt; — it sits in the slot where it can try orders of magnitude more candidates than a human quant. Where a human researcher might build and test five to ten factors per week, an LLM agent runs hundreds in the same time. It pushes the &lt;strong&gt;bias-variance frontier&lt;/strong&gt; of backtesting beyond what a person can mentally track.&lt;/p&gt;
&lt;p&gt;Microsoft has published three &lt;a class="link" href="https://www.youtube.com/watch?v=X4DK2QZKaKY" target="_blank" rel="noopener"
 &gt;RD-Agent demo videos&lt;/a&gt; — Quant Factor Mining, Factor Mining from Reports, and Quant Model Optimization. All three follow the same pattern: LLM generates hypotheses, qlib validates them, the evaluation signal feeds back into the LLM.&lt;/p&gt;
&lt;h2 id="5-why-now"&gt;5. Why now
&lt;/h2&gt;&lt;p&gt;Three signals overlap.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;First, the project is alive.&lt;/strong&gt; &lt;a class="link" href="https://github.com/microsoft/qlib/releases/tag/v0.9.7" target="_blank" rel="noopener"
 &gt;v0.9.7&lt;/a&gt; shipped in August 2025, and the main branch had pushes into April 2026. By contrast &lt;a class="link" href="https://github.com/quantopian/pyfolio" target="_blank" rel="noopener"
 &gt;pyfolio&lt;/a&gt; and the original &lt;a class="link" href="https://github.com/quantopian/zipline" target="_blank" rel="noopener"
 &gt;zipline&lt;/a&gt; are effectively frozen. Actively maintained open-source quant stacks are rare.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Second, BPQP for end-to-end learning&lt;/strong&gt; is en route as an &lt;a class="link" href="https://github.com/microsoft/qlib/pull/1863" target="_blank" rel="noopener"
 &gt;under-review PR&lt;/a&gt;. Making the &lt;strong&gt;quadratic-programming step of portfolio optimization differentiable&lt;/strong&gt; means alpha-to-position becomes a single trainable graph. This is not a routine library upgrade — it converts portfolio construction itself into a learnable layer.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Third, the LLM tool-use path is obvious.&lt;/strong&gt; RD-Agent calls qlib as a tool, gets JSON back, generates the next hypothesis. The pattern maps cleanly to &lt;a class="link" href="https://docs.claude.com/en/docs/agents-and-tools/tool-use/overview" target="_blank" rel="noopener"
 &gt;Anthropic tool use&lt;/a&gt; and the &lt;a class="link" href="https://platform.openai.com/docs/api-reference/responses" target="_blank" rel="noopener"
 &gt;OpenAI Responses API&lt;/a&gt;. The equation is simple: &lt;strong&gt;one qlib YAML workflow = one LLM function call&lt;/strong&gt;.&lt;/p&gt;
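&lt;p&gt;In tool-use terms the mapping is one schema. The definition below is hypothetical — the function name, parameters, and the metric JSON are invented for illustration, and are defined by neither RD-Agent nor the model vendors:&lt;/p&gt;

```python
import json

# Hypothetical tool definition for "one qlib YAML workflow = one function
# call". Name and schema are invented for this sketch.
run_qlib_workflow = {
    "name": "run_qlib_workflow",
    "description": "Run a qlib workflow (qrun) against a YAML config and "
                   "return evaluation metrics as JSON.",
    "input_schema": {
        "type": "object",
        "properties": {
            "config_yaml": {
                "type": "string",
                "description": "Full qlib workflow config in YAML.",
            },
        },
        "required": ["config_yaml"],
    },
}

# The agent side would parse a result of roughly this shape out of the
# tool response (metric values here are placeholders):
result = json.loads('{"IC": 0.041, "RankIC": 0.052, "IR": 1.8, "MDD": -0.09}')
print(sorted(result))
```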
&lt;h2 id="6-the-constraint--data-then-more-data"&gt;6. The constraint — data, then more data
&lt;/h2&gt;&lt;p&gt;The &lt;a class="link" href="https://github.com/microsoft/qlib#data-preparation" target="_blank" rel="noopener"
 &gt;⚠️ banner at the top of the README&lt;/a&gt; reads: &amp;ldquo;Due to more restrict data security policy. The official dataset is disabled temporarily.&amp;rdquo; The official dataset is paused, with a community mirror as the fallback. This is qlib&amp;rsquo;s largest structural weakness: &lt;strong&gt;good time-series data is not free&lt;/strong&gt;. Yahoo Finance is weak on minute bars and realtime data, and China A-share data is bound to exchange policy.&lt;/p&gt;
&lt;p&gt;Move to commercial data and the standards are &lt;a class="link" href="https://www.bloomberg.com/professional/products/bloomberg-terminal/" target="_blank" rel="noopener"
 &gt;Bloomberg&lt;/a&gt;, &lt;a class="link" href="https://www.lseg.com/en/data-analytics" target="_blank" rel="noopener"
 &gt;Refinitiv&lt;/a&gt;, and &lt;a class="link" href="https://wrds-www.wharton.upenn.edu/" target="_blank" rel="noopener"
 &gt;WRDS&lt;/a&gt;, but licensing is expensive. qlib&amp;rsquo;s &lt;a class="link" href="https://github.com/microsoft/qlib/pull/744" target="_blank" rel="noopener"
 &gt;Arctic backend&lt;/a&gt; and &lt;a class="link" href="https://github.com/microsoft/qlib/pull/343" target="_blank" rel="noopener"
 &gt;Point-in-Time database&lt;/a&gt; modules are designed so commercial data pipelines can be plugged in — but solving the data problem is on the user. What open-source can give you is &lt;strong&gt;the rails&lt;/strong&gt;, and nothing further.&lt;/p&gt;
&lt;h2 id="insight"&gt;Insight
&lt;/h2&gt;&lt;p&gt;Looked at in isolation, qlib reads as &amp;ldquo;a well-built time-series ML library&amp;rdquo;. Looked at next to RD-Agent, the picture changes. An LLM generating factor hypotheses in natural language, qlib scoring them via backtest, the score flowing back into the LLM — the &lt;strong&gt;automated alpha-mining loop&lt;/strong&gt; has landed in production-grade open source for the first time, and qlib is where it landed.&lt;/p&gt;
&lt;p&gt;Two consequences follow. First, &lt;strong&gt;the barrier to entry for solo quants drops again&lt;/strong&gt; — without a PhD in time-series ML you can tell an LLM &amp;ldquo;build a momentum factor from earnings-call transcripts of the last three months&amp;rdquo; and let only the candidates with IC above 0.05 through. Second, &lt;strong&gt;the differentiation axis for hedge funds moves up one level&lt;/strong&gt; — once factor discovery itself is automated, edge shifts to &lt;strong&gt;data (proprietary alternative datasets)&lt;/strong&gt;, &lt;strong&gt;compute (scale of parallel agents)&lt;/strong&gt;, and &lt;strong&gt;governance (meta-systems against overfitting)&lt;/strong&gt;. qlib is the baseline this shift sits on top of.&lt;/p&gt;
&lt;p&gt;Over 2026 the &amp;ldquo;alpha-mining LLM agent + qlib&amp;rdquo; combination has a high probability of becoming the standard setup for both hedge funds and independent research groups. The fastest entry point: &lt;code&gt;pip install pyqlib&lt;/code&gt;, pull data from &lt;a class="link" href="https://github.com/chenditc/investment_data/releases" target="_blank" rel="noopener"
 &gt;chenditc/investment_data&lt;/a&gt;, and run the &lt;a class="link" href="https://github.com/microsoft/qlib/blob/main/examples/benchmarks/LightGBM/workflow_config_lightgbm_Alpha158.yaml" target="_blank" rel="noopener"
 &gt;LightGBM Alpha158 workflow&lt;/a&gt; with &lt;code&gt;qrun&lt;/code&gt;. A single command gets you a baseline at roughly IR 2.0.&lt;/p&gt;
&lt;h2 id="references"&gt;References
&lt;/h2&gt;&lt;p&gt;&lt;strong&gt;Repository and docs&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/microsoft/qlib" target="_blank" rel="noopener"
 &gt;microsoft/qlib GitHub repository&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://qlib.readthedocs.io/en/latest/" target="_blank" rel="noopener"
 &gt;qlib official docs (Read the Docs)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://pypi.org/project/pyqlib/" target="_blank" rel="noopener"
 &gt;PyPI — pyqlib&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://qlib.readthedocs.io/en/latest/component/data.html" target="_blank" rel="noopener"
 &gt;Qlib data module docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://qlib.readthedocs.io/en/latest/component/workflow.html" target="_blank" rel="noopener"
 &gt;Qlib workflow docs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://qlib.readthedocs.io/en/latest/component/rl.html" target="_blank" rel="noopener"
 &gt;Qlib RL component&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/microsoft/qlib/releases/tag/v0.9.7" target="_blank" rel="noopener"
 &gt;Qlib v0.9.7 release notes&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Papers and related research&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2009.11189" target="_blank" rel="noopener"
 &gt;Qlib: An AI-oriented Quantitative Investment Platform (arXiv:2009.11189)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2505.15155" target="_blank" rel="noopener"
 &gt;R&amp;amp;D-Agent-Quant paper (arXiv:2505.15155)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2110.13716" target="_blank" rel="noopener"
 &gt;HIST time-series model paper (arXiv:2110.13716)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2201.04038" target="_blank" rel="noopener"
 &gt;DDG-DA paper (arXiv:2201.04038)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2106.12950" target="_blank" rel="noopener"
 &gt;TRA temporal routing paper (arXiv:2106.12950)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://arxiv.org/abs/2108.04443" target="_blank" rel="noopener"
 &gt;ADARNN paper (arXiv:2108.04443)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;LLM-meets-quant ecosystem&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/microsoft/RD-Agent" target="_blank" rel="noopener"
 &gt;microsoft/RD-Agent GitHub&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.youtube.com/watch?v=X4DK2QZKaKY" target="_blank" rel="noopener"
 &gt;RD-Agent Quant Factor Mining demo&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://docs.claude.com/en/docs/agents-and-tools/tool-use/overview" target="_blank" rel="noopener"
 &gt;Anthropic tool use guide&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://platform.openai.com/docs/api-reference/responses" target="_blank" rel="noopener"
 &gt;OpenAI Responses API&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Comparable open-source stacks&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/stefan-jansen/zipline-reloaded" target="_blank" rel="noopener"
 &gt;zipline-reloaded&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/quantopian/pyfolio" target="_blank" rel="noopener"
 &gt;pyfolio&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://vectorbt.dev/" target="_blank" rel="noopener"
 &gt;vectorbt&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://www.backtrader.com/" target="_blank" rel="noopener"
 &gt;backtrader&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a class="link" href="https://github.com/chenditc/investment_data" target="_blank" rel="noopener"
 &gt;chenditc/investment_data mirror&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</description></item></channel></rss>