Evidence At A Glance
The story is not years of employment. It is dense founder/operator reps in AI workflow design, deployment, evals, and runtime debugging.
Why This Fits
I am earlier-career by title, but the work shape is already close to AI deployment: user workflow, LLM system, runtime, evals, launch proof, and iteration.
Positioning
I am not claiming ten years of enterprise delivery. I am claiming unusually direct founder/operator reps in the exact loop FDE (forward-deployed engineering) and AI deployment teams care about: understand the workflow, build the AI path, ship it, debug it, and turn failures into evidence.
Deployment Loop
Case Studies
These are public-safe writeups. They avoid private teacher data, secrets, internal IPs, and TeachClaw source code.
TeachClaw Deployment
Chat-native AI assistant for UK teachers with document generation, marking support, teacher memory, VPS deployment, and eval loops.
OpenClaw Operator / Evals
Operator cockpit with gateway sessions, plugins, LCM memory, local validation lanes, promotion summaries, and runtime drift discipline.
Story Trials
Full-stack prototype for licensing synthetic health data, using Next.js, Express, Prisma/PostgreSQL, IPFS, and Story Protocol testnet flows.
SparkAssist
Telegram-based electrician assistant for notes, voice input, structured extraction, certificates, reports, quotes, invoices, and RAMS (risk assessments and method statements).
AI Deployment Gateway
Public FastAPI proof with retrieval, eval runs, SQLite persistence, API-key write protection, audit events, readiness gates, Docker, and CI.
Architecture Story
TeachClaw maps cleanly to AI deployment work: user workflow surfaces, a platform layer, an agent/runtime layer, deterministic builders, and a validation loop before live promotion.
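The "validation loop before live promotion" step can be sketched as a simple gate: a build is promoted only when every eval case clears its threshold, and any failure is surfaced as evidence. The names here (EvalCase, promote) are illustrative, not TeachClaw internals.

```python
# Sketch (assumed names): eval-gated promotion before a live release.
from dataclasses import dataclass

@dataclass
class EvalCase:
    name: str
    score: float      # 0.0-1.0 result from an eval run
    threshold: float  # minimum acceptable score for this case

def promote(cases: list[EvalCase]) -> tuple[bool, list[str]]:
    """Return (ok, failures): promote only when no case is below threshold."""
    failures = [c.name for c in cases if c.score < c.threshold]
    return (len(failures) == 0, failures)

# Usage: one failing case blocks promotion and names itself in the summary.
ok, failed = promote([
    EvalCase("lesson-plan-format", 0.92, 0.85),
    EvalCase("marking-rubric-accuracy", 0.78, 0.85),
])
# ok is False; failed == ["marking-rubric-accuracy"]
```

The failure list doubles as the "turn failures into evidence" artifact: it feeds the promotion summary rather than being discarded.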
Recruiter Assets
Fast links for applications, profile cleanup, and technical proof.