Public-safe FastAPI mini-project that demonstrates how I think about AI deployment readiness: evidence ingestion, retrieval, evals, release gates, rollback plans, Docker, and CI.
TeachClaw and OpenClaw contain the strongest real evidence, but not all of that code or operational context should be public. This project creates a small, clean artifact I can point hiring teams to without exposing private product code, teacher data, runtime secrets, or internal OpenClaw setup.
The goal is not to pretend this is an enterprise platform. The goal is to show the shape of production AI deployment work in a repo a recruiter or engineer can understand quickly.
AI deployment roles rarely fail because someone cannot call a model API. They fail when teams cannot tell whether a system is ready to launch, which evidence is current, which evals matter, what rollback means, and whether the live system is using the version they think it is using.
This project turns those instincts into a small public system: evidence ingestion, deterministic retrieval, eval runs, and a release gate whose decision is one of pass, needs_judgement, or block.
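To make the gate idea concrete, here is a minimal FastAPI sketch of a release-gate endpoint. The route path, request fields, freshness window, and thresholds are illustrative assumptions, not this project's actual API; the only details taken from the description above are that the decision is one of pass, needs_judgement, or block, and that it should weigh eval results, evidence freshness, and version drift.

```python
# Hedged sketch of a release-gate endpoint; paths, fields, and thresholds are illustrative.
from datetime import datetime, timedelta, timezone

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class GateRequest(BaseModel):
    eval_pass_rate: float          # fraction of eval cases passing, 0.0 to 1.0
    evidence_updated_at: datetime  # when the supporting evidence was last refreshed
    deployed_version: str          # version the live system reports
    expected_version: str          # version the release candidate targets


class GateDecision(BaseModel):
    decision: str  # "pass", "needs_judgement", or "block"
    reasons: list[str]


@app.post("/release-gate", response_model=GateDecision)
def release_gate(req: GateRequest) -> GateDecision:
    reasons: list[str] = []

    # Version drift: the live system is not running what we think it is.
    if req.deployed_version != req.expected_version:
        reasons.append("deployed version does not match expected version")

    # Evidence freshness (illustrative 30-day window).
    updated = req.evidence_updated_at
    if updated.tzinfo is None:
        updated = updated.replace(tzinfo=timezone.utc)
    if datetime.now(timezone.utc) - updated > timedelta(days=30):
        reasons.append("evidence is older than 30 days")

    if reasons:
        return GateDecision(decision="block", reasons=reasons)

    # Illustrative eval thresholds: clear pass above 0.95, human review between 0.85 and 0.95.
    if req.eval_pass_rate >= 0.95:
        return GateDecision(decision="pass", reasons=["eval pass rate meets threshold"])
    if req.eval_pass_rate >= 0.85:
        return GateDecision(decision="needs_judgement", reasons=["eval pass rate in review band"])
    return GateDecision(decision="block", reasons=["eval pass rate below minimum"])
```

The point of the sketch is the shape of the decision, not the exact thresholds: every block or needs_judgement outcome carries human-readable reasons so a reviewer can see why the gate did not pass.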
The current implementation uses deterministic token-overlap retrieval and SQLite state; that is intentional for a compact public proof. The next real production step would add hosted deployment, embeddings or vector search, rate limits, stronger service auth, and immutable eval reports.
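As a rough illustration of what deterministic token-overlap retrieval over SQLite can look like, here is a minimal sketch. The table schema, tokenizer, and Jaccard scoring are assumptions for illustration, not the repo's actual code.

```python
# Hedged illustration of deterministic token-overlap retrieval backed by SQLite.
import re
import sqlite3


def tokenize(text: str) -> set[str]:
    """Lowercase word tokens; deterministic by construction."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def init_db(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS evidence (id INTEGER PRIMARY KEY, body TEXT NOT NULL)")
    return conn


def add_evidence(conn: sqlite3.Connection, body: str) -> None:
    conn.execute("INSERT INTO evidence (body) VALUES (?)", (body,))
    conn.commit()


def retrieve(conn: sqlite3.Connection, query: str, k: int = 3) -> list[tuple[float, str]]:
    """Rank stored evidence by token overlap with the query (Jaccard similarity)."""
    q_tokens = tokenize(query)
    scored: list[tuple[float, str]] = []
    for (body,) in conn.execute("SELECT body FROM evidence"):
        d_tokens = tokenize(body)
        union = q_tokens | d_tokens
        score = len(q_tokens & d_tokens) / len(union) if union else 0.0
        scored.append((score, body))
    # Sort by score, then by text, so ties break deterministically.
    scored.sort(key=lambda pair: (-pair[0], pair[1]))
    return scored[:k]


if __name__ == "__main__":
    conn = init_db()
    add_evidence(conn, "Rollback plan: pin the previous image tag and redeploy.")
    add_evidence(conn, "Eval suite covers retrieval grounding and refusal cases.")
    print(retrieve(conn, "what is the rollback plan?"))
```

Deterministic scoring with explicit tie-breaking is what makes eval results reproducible here; swapping in embeddings later changes the scorer, not the surrounding ingestion and gating flow.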
mini-project/