Logs
Short field notes on AI agents, LLM training, and the systems behind them. Dated, append-only, occasionally wrong.
Tapasya is a recommendation system built on top of RAG
the core problem is not one-shot answer generation; it is recommending the next passage worth reading, with retrieval and synthesis used to keep that recommendation grounded
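a minimal sketch of that retrieve-then-recommend loop; the toy corpus and word-overlap scoring are assumptions for illustration, not Tapasya's implementation:

```python
# Illustrative only: rank candidate passages against the reader's recent
# context and surface the next unread one, rather than answering a query once.

def score(passage, context):
    """Toy relevance: fraction of passage words that appear in the context."""
    p, c = set(passage.lower().split()), set(context.lower().split())
    return len(p & c) / (len(p) or 1)

def recommend_next(corpus, context, already_read):
    """Return (score, passage) for the best unread passage, or None."""
    candidates = [(score(p, context), p) for p in corpus if p not in already_read]
    return max(candidates) if candidates else None

if __name__ == "__main__":
    corpus = [
        "retrieval grounds each recommendation in source passages",
        "compaction decides what past survives between runs",
        "a persistent worker avoids subprocess-per-call overhead",
    ]
    best = recommend_next(corpus, "how retrieval grounds a recommendation", set())
    print(best[1])  # → retrieval grounds each recommendation in source passages
```

in the real system the scorer would be a retriever and the grounding a synthesized summary, but the shape is the same: recommend, don't just answer.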
Fixing a Windows deadlock in a terminal coding agent
the session was not stuck because `git` was slow; it was stuck because the backend was pushing too many large updates through a small Windows pipe
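the failure mode generalizes: a child writes more than the OS pipe buffer holds (roughly 64 KiB on Windows), the parent waits for exit before reading, and both sides block forever. a minimal sketch of the fix, draining the pipe on a reader thread (assumed setup, not the agent's actual code):

```python
import subprocess
import sys
import threading

def run_draining(cmd):
    """Run cmd, draining stdout on a background thread so the pipe never fills."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    chunks = []

    def drain():
        # Read until EOF; keeps the child's writes from blocking on a full pipe.
        for chunk in iter(lambda: proc.stdout.read(8192), b""):
            chunks.append(chunk)

    t = threading.Thread(target=drain)
    t.start()
    proc.wait()  # safe: the reader thread is emptying the pipe concurrently
    t.join()
    return b"".join(chunks)

if __name__ == "__main__":
    # Child emits 1 MiB, far beyond any pipe buffer; without the drain thread
    # a plain proc.wait() would deadlock.
    child = [sys.executable, "-c",
             "import sys; sys.stdout.write('x' * (1 << 20))"]
    print(len(run_draining(child)))  # → 1048576
```

`Popen.communicate()` does the same thing internally; the bug class is waiting on a child whose output nobody is consuming.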
Why agents need memory and runtime framing
memory alone is not enough; an agent needs both conversation memory and a separate runtime-framing baseline for the next run
What compaction should preserve
between runs, compaction decides what past survives, but a long-running coding agent also needs a separate runtime-framing baseline so the next run starts under the right conditions
Replacing Python subprocesses with a Go worker
the real bottleneck was subprocess-per-call, not thread count; both Go and Rust persistent helpers fixed it, and Go was the better repo fit while Python kept semantic ownership
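a minimal sketch of the persistent-helper pattern, in Python for self-containment (the post's actual worker is Go, and the newline-delimited JSON protocol here is an assumption): one long-lived process answers many requests over stdin/stdout instead of paying spawn cost per call.

```python
import json
import subprocess
import sys

# The worker body is a placeholder; a real helper would do the repo's work.
WORKER_SRC = r"""
import json, sys
for line in sys.stdin:                                   # one request per line
    req = json.loads(line)
    resp = {"id": req["id"], "result": req["x"] * 2}     # placeholder work
    sys.stdout.write(json.dumps(resp) + "\n")
    sys.stdout.flush()
"""

class PersistentWorker:
    """Keep one helper process alive and send it newline-delimited JSON."""

    def __init__(self):
        self.proc = subprocess.Popen(
            [sys.executable, "-c", WORKER_SRC],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
        self._next_id = 0

    def call(self, x):
        self._next_id += 1
        self.proc.stdin.write(json.dumps({"id": self._next_id, "x": x}) + "\n")
        self.proc.stdin.flush()
        return json.loads(self.proc.stdout.readline())["result"]

    def close(self):
        self.proc.stdin.close()  # EOF ends the worker's read loop
        self.proc.wait()

if __name__ == "__main__":
    w = PersistentWorker()
    print([w.call(i) for i in range(3)])  # → [0, 2, 4]
    w.close()
```

the win is the same regardless of helper language: process startup moves out of the per-call path, while the caller keeps ownership of the semantics.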
Tapasya model bar: frontier references vs the local lane
current evidence says the local lane improved, but frontier hosted references still set the product bar