Andrej Karpathy @karpathy 2026-04-02

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest:

I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.
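To make the "compile" step concrete, here is a minimal sketch of what it could look like, assuming a flat layout of raw/*.md sources and wiki/*.md outputs; summarize_with_llm() is a hypothetical stand-in for whatever LLM CLI or API actually does the summarizing, not the exact pipeline described above.

```python
# Minimal sketch of the raw/ -> wiki/ "compile" step, assuming a flat layout of
# raw/*.md sources and wiki/*.md outputs. summarize_with_llm() is a hypothetical
# stand-in for whatever LLM CLI or API actually does the summarizing.
from pathlib import Path

RAW, WIKI = Path("raw"), Path("wiki")

def summarize_with_llm(text: str) -> str:
    # Hypothetical: replace with a real call to your LLM agent or API client.
    raise NotImplementedError

def compile_wiki() -> None:
    WIKI.mkdir(exist_ok=True)
    for src in sorted(RAW.glob("*.md")):
        dst = WIKI / src.name
        # Incremental: only (re)compile pages whose raw source is newer.
        if dst.exists() and dst.stat().st_mtime >= src.stat().st_mtime:
            continue
        summary = summarize_with_llm(src.read_text(encoding="utf-8"))
        # Each wiki page keeps a backlink to its raw source, so both you (in
        # Obsidian) and the agent can jump back to the original document.
        page = f"# {src.stem}\n\n{summary}\n\nSource: [[raw/{src.stem}]]\n"
        dst.write_text(page, encoding="utf-8")

if __name__ == "__main__":
    compile_wiki()
```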

IDE:

I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A:

Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.
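A deterministic approximation of those auto-maintained index files is sketched below (the paths and format are assumptions for illustration, not the actual setup): it rebuilds a single index.md with one bullet per article plus a brief excerpt, which an agent can skim before deciding which articles to open in full.

```python
# Deterministic approximation of an auto-maintained index: rebuild wiki/index.md
# with one bullet per article plus its first prose line as a brief summary, so
# an agent can skim one file instead of needing a RAG pipeline.
from pathlib import Path

WIKI = Path("wiki")

def first_prose_line(path: Path) -> str:
    # First non-empty, non-heading line stands in for a one-line summary.
    for line in path.read_text(encoding="utf-8").splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            return line
    return ""

def rebuild_index() -> None:
    rows = [f"- [[{p.stem}]]: {first_prose_line(p)}"
            for p in sorted(WIKI.glob("*.md")) if p.name != "index.md"]
    (WIKI / "index.md").write_text("# Index\n\n" + "\n".join(rows) + "\n",
                                   encoding="utf-8")

if __name__ == "__main__":
    rebuild_index()
```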

Output:

Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.
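As one example of a non-text answer, here is a sketch of a chart written straight into an assets/ folder inside the wiki so it renders in Obsidian; the folder name and the numbers are placeholders for illustration, not real measurements.

```python
# Sketch of a chart-as-answer: write a matplotlib figure into an assets/ folder
# inside the wiki so it renders in Obsidian. The numbers below are placeholder
# demo data, not real measurements.
from pathlib import Path
import matplotlib
matplotlib.use("Agg")  # headless backend; we only write an image file
import matplotlib.pyplot as plt

OUT = Path("wiki/assets")
OUT.mkdir(parents=True, exist_ok=True)

months = ["Jan", "Feb", "Mar", "Apr"]
articles = [12, 19, 27, 41]  # placeholder counts for the example
plt.bar(months, articles)
plt.title("Wiki articles added per month")
plt.ylabel("articles")
plt.savefig(OUT / "articles_per_month.png", dpi=150, bbox_inches="tight")
```

A wiki page can then embed the image with Obsidian's ![[articles_per_month.png]] syntax, which is one way the output gets "filed" back into the wiki.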

Linting:

I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.
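One concrete health check in this spirit (an illustrative example assuming a flat wiki/ layout, not the author's actual check) is a broken-wikilink scan: find every [[link]] whose target article does not exist, so the LLM can fix the link or create the missing article.

```python
# Scan every wiki page for [[wikilinks]] whose target article does not exist,
# and print them so the LLM (or you) can fix the link or create the article.
# Assumes a flat wiki/ layout of .md files.
import re
from pathlib import Path

WIKI = Path("wiki")
LINK = re.compile(r"\[\[([^\]|#]+)")  # capture the target part of [[target|alias]]

def broken_links() -> list[tuple[str, str]]:
    pages = {p.stem for p in WIKI.glob("*.md")}
    missing = []
    for page in WIKI.glob("*.md"):
        for target in LINK.findall(page.read_text(encoding="utf-8")):
            if target.strip() not in pages:
                missing.append((page.name, target.strip()))
    return missing

if __name__ == "__main__":
    for page, target in broken_links():
        print(f"{page}: broken link to [[{target}]]")
```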

Extra tools:

I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I sometimes use directly (in a web ui), but more often hand off to the LLM via CLI as a tool for larger queries.
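For flavor, here is a guess at the shape of such a "small and naive search engine": plain keyword counting over the wiki's .md files, exposed as a CLI so the LLM can call it as a tool. This is not the actual vibe-coded implementation, just a minimal sketch.

```python
# Naive keyword search over the wiki's .md files, exposed as a CLI so an LLM
# agent can invoke it as a tool. Scoring is plain term-frequency counting.
import sys
from pathlib import Path

WIKI = Path("wiki")

def search(query: str, top_k: int = 10) -> list[tuple[int, Path]]:
    terms = query.lower().split()
    scored = []
    for page in WIKI.glob("*.md"):
        text = page.read_text(encoding="utf-8").lower()
        score = sum(text.count(t) for t in terms)  # naive term-frequency score
        if score:
            scored.append((score, page))
    return sorted(scored, key=lambda pair: pair[0], reverse=True)[:top_k]

if __name__ == "__main__":
    for score, page in search(" ".join(sys.argv[1:])):
        print(f"{score:4d}  {page}")
```

Invoked as something like `python search_wiki.py "sparse attention"` (filename and query are made up), it prints the top-scoring pages for the agent to open.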

Further explorations:

As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.
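A rough sketch of that direction, under the assumption that you want chat-format JSONL (which most finetuning stacks accept): turn each wiki article into question/answer pairs and write them out as training records; generate_qa_pairs() below is a hypothetical LLM call, not a real API.

```python
# Rough sketch of the synthetic-data direction: turn each wiki article into
# question/answer pairs and write them as chat-format JSONL for finetuning.
# generate_qa_pairs() is a hypothetical LLM call, not a real API.
import json
from pathlib import Path

WIKI = Path("wiki")

def generate_qa_pairs(article_text: str) -> list[dict]:
    # Hypothetical: ask an LLM for a handful of Q/A pairs grounded in the article.
    raise NotImplementedError

def build_dataset(out_path: str = "finetune.jsonl") -> None:
    with open(out_path, "w", encoding="utf-8") as f:
        for page in sorted(WIKI.glob("*.md")):
            for qa in generate_qa_pairs(page.read_text(encoding="utf-8")):
                record = {"messages": [
                    {"role": "user", "content": qa["question"]},
                    {"role": "assistant", "content": qa["answer"]},
                ]}
                f.write(json.dumps(record, ensure_ascii=False) + "\n")

if __name__ == "__main__":
    build_dataset()
```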

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by the LLM via various CLI tools to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.