LLMs are useful for maintaining a personal or team knowledge base, but only when the boundaries are explicit.

Without boundaries, the knowledge base turns into a blend of raw source, interpretation, outdated memory, and generated prose. It may look organized while becoming harder to trust.

Separate source from synthesis

I like to keep three layers separate:

  • Raw source material: emails, exports, meeting notes, documents, transcripts.
  • Working wiki: synthesized notes that are useful for daily work.
  • Public garden: selected notes rewritten for outside readers.

Each layer has a different job. Raw source should preserve evidence. The working wiki should support action. The public garden should communicate ideas without leaking private context.
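One way to make the separation concrete is a small manifest that names each layer, its job, and what automation may do there. This is an illustrative sketch, not a fixed schema; the layer names, folder layout, and fields are assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Layer:
    name: str          # top-level folder name (hypothetical: "raw", "wiki", "garden")
    job: str           # what the layer is for
    writable: bool     # may automation modify notes here?
    publishable: bool  # may notes here leave the private context?

LAYERS = [
    Layer("raw",    "preserve evidence",                writable=False, publishable=False),
    Layer("wiki",   "support daily work",               writable=True,  publishable=False),
    Layer("garden", "communicate to outside readers",   writable=True,  publishable=True),
]

def layer_for(path: str) -> Layer:
    """Map a note path to its layer by top-level folder (assumed layout)."""
    top = path.split("/", 1)[0]
    for layer in LAYERS:
        if layer.name == top:
            return layer
    raise ValueError(f"unknown layer for {path}")
```

With a lookup like this, any tool in the pipeline can ask one question, "which layer is this note in?", instead of re-deriving the rules each time.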

Generated notes need provenance

When an LLM creates or updates a note, the useful question is not only “does this sound right?”

Better questions:

  • What source material supports this?
  • What did the model infer?
  • What was omitted?
  • What may have changed since the source was written?
  • Is this note safe to share?

The model can help answer these questions, but the system should make them easy to ask.
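One way to make the questions easy to ask is to bake them into note metadata and refuse to treat a generated note as done until the fields are filled in. A minimal sketch, assuming each generated note carries a provenance block; the field names are hypothetical:

```python
# Fields a generated note must answer before it is trusted (hypothetical schema).
REQUIRED_PROVENANCE = ("sources", "inferred", "generated_at", "shareable")

def provenance_gaps(note: dict) -> list:
    """Return the provenance fields a generated note is still missing."""
    meta = note.get("provenance", {})
    return [field for field in REQUIRED_PROVENANCE if field not in meta]

note = {
    "title": "Q3 planning summary",
    "provenance": {
        "sources": ["raw/meetings/2024-06-12.md"],
        "inferred": ["attendee list reconstructed from reply chain"],
        "generated_at": "2024-06-13",
        # "shareable" deliberately unset: the note fails review until a human decides
    },
}
```

Here `provenance_gaps(note)` reports that `shareable` is missing, which is exactly the "is this safe to share?" question surfaced by the system rather than left to memory.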

Boundaries make automation safer

Automation becomes safer when the folders and rules match the risk level of what they hold.

For example:

  • Source folders are read-only inputs.
  • Working wiki pages can be updated, but should preserve links to sources.
  • Public notes are opt-in and curated.
  • Sensitive documents are excluded from ingest and publish workflows.
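Rules like these can be enforced as a single gate that every automated write or publish passes through. A sketch under the assumption of a folder-per-layer layout; the folder names and the `allowed` helper are mine, not a real tool's API:

```python
READ_ONLY = ("raw/",)        # source folders: inputs only, never modified
PUBLIC = ("garden/",)        # the only place notes may be published from
EXCLUDED = ("sensitive/",)   # never ingested, never published

def allowed(action: str, path: str, curated: bool = False) -> bool:
    """Decide whether an automated action stays on the safe path."""
    if any(path.startswith(p) for p in EXCLUDED):
        return False  # sensitive documents: no ingest, no publish
    if action == "write":
        # writes are fine anywhere except read-only source folders
        return not any(path.startswith(p) for p in READ_ONLY)
    if action == "publish":
        # publishing is opt-in: public folder AND explicit curation
        return any(path.startswith(p) for p in PUBLIC) and curated
    return False  # unknown actions default to "no"
```

The default-deny shape matters more than the specific folders: an action the gate does not recognize fails closed, which is what makes the safe path the obvious one.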

The point is not to slow everything down. The point is to make the safe path obvious.