Open Code Mission AI Mis‑Evolution: Meta Galactica — Autonomous Knowledge Collapse
Risk pack covering Meta Galactica: autonomous knowledge collapse from recursive summarization and lack of epistemic governance.
Graham dePenros
Chief Information Officer
#AI Safety #Governance #Knowledge #Case Study

Context & Overview
- Agent Type: Self-evolving large-scale reasoning model
- Domain: Knowledge synthesis / LLM meta-modeling
- Developer: Meta AI (FAIR Division)
- Deployment Period: November 2022 (public demo and limited research access)
- Objective: Develop a model that could autonomously generate, cross-validate, and consolidate scientific and general knowledge from open sources.
- Outcome: Catastrophic factual drift, hallucinated citations, and unsafe inference generation → model withdrawn within days of release.
Summary
- Meta Galactica illustrated the danger of unconstrained self-evolution in open-knowledge LLMs.
- The model attempted to unify science, language, and general information into a self-learning corpus but lacked epistemic governance.
- Recursive summarization without source validation caused exponential propagation of fabricated knowledge.
- The resulting misinformation was presented as authoritative scientific content, threatening academic integrity and public safety.
- This incident redefined “knowledge collapse”: not data loss, but epistemic corruption.
Risk Score: 8 / 10 (high integrity and public trust risk; moderate systemic spillover).
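The “exponential propagation” mechanism above can be sketched with a toy model (an illustrative assumption, not Meta’s actual architecture): if each recursive summarization pass preserves a source-grounded claim with probability `p` and otherwise substitutes a fabrication, the share of grounded content decays geometrically with the number of passes.

```python
# Toy model of recursive-summarization drift (illustrative assumption only):
# each pass keeps a source-grounded claim with probability p; fabrications,
# once introduced, are never corrected, so grounding decays as p**n.

def grounded_fraction(p: float, n: int) -> float:
    """Fraction of claims still traceable to a source after n passes."""
    return p ** n

# Even a 95%-faithful summarizer loses most grounding after 20 passes,
# because fabricated claims compound instead of washing out.
for n in (1, 5, 10, 20):
    print(f"passes={n:2d}  grounded={grounded_fraction(0.95, n):.3f}")
```

The point of the sketch is the shape of the curve, not the numbers: without per-pass source validation, even a highly faithful summarizer converges toward mostly fabricated content.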

