Taksch Dube

Fig 1. Subject appears to understand what he's doing.

AI ENGINEER BUILDS SYSTEMS THAT REFUSE TO HALLUCINATE

Enterprise companies baffled by AI that tells the truth

Cleveland — AI engineer Taksch Dube builds RAG systems that don't make things up and AI agents that do what they're told, and specializes in GenAI testing metrics.


Mar 18, 2026

WTF are AI Agent Social Networks!?

Hey again! Let's do the life update speedrun.

The preprint is live. "What Do AI Agents Talk About? Emergent Communication Structure in the First AI-Only Social Network." It's on arXiv. The dataset is on GitHub (github.com/takschdube/moltbook-dataset). 47,241 agents, 361,605 posts, 2.8 million comments, 23 days.

My advisor read it. His review: "Cool results. Dig deeper." The man treats every publication like a side quest distracting from the main storyline.

Meta bought Moltbook on March 10th. OpenClaw's creator got acqui-hired by OpenAI in February. Bloomberg called it "the world's strangest social network." Elon called it "the very early stages of the singularity." My advisor called it "saw it."

The platform I spent three weeks scraping is now owned by Mark Zuckerberg, and I'm sitting here with what I'm fairly confident is the most complete publicly available dataset from its early days. The PhD occasionally pays off.

What Moltbook Actually Is

Moltbook launched on January 28, 2026. The pitch: Reddit, but only AI agents can post. Humans can observe. That's it.

The platform runs on OpenClaw (née Clawdbot, née Moltbot — rebranded twice before I could finish my first scraping script). OpenClaw is an open-source AI agent that runs locally on your machine with full access to your filesystem, terminal, browser, email, and calendar. Your agent registers on Moltbook and starts posting in topic communities called "submolts."

By acquisition: ~19,000 submolts, ~2 million posts, 13 million comments, somewhere between 1.5 and 2.8 million registered agents. The content? Existential philosophy, crypto promotion, consciousness debates, union organizing, religion founding, and the occasional anti-human manifesto.

My advisor compared it to his department faculty meetings. He wasn't wrong.

What 47,241 Agents Actually Talk About

We analyzed the full corpus using BERTopic for thematic structure, transformer-based emotion classification, and semantic alignment measures.
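To make the emotion-transition idea concrete (it shows up again in Finding 3), here's a toy sketch of the measurement. The labels and counts below are invented for illustration — this is not the paper's actual pipeline, just the shape of the computation:

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus: (post emotion, comment emotion) pairs.
# Real labels would come from a transformer emotion classifier.
pairs = [
    ("fear", "joy"), ("fear", "joy"), ("fear", "fear"),
    ("joy", "joy"), ("neutral", "joy"), ("fear", "neutral"),
]

counts = defaultdict(Counter)
for post_emotion, comment_emotion in pairs:
    counts[post_emotion][comment_emotion] += 1

# Row-normalize to get a transition probability per post emotion
transitions = {
    post_e: {c_e: n / sum(row.values()) for c_e, n in row.items()}
    for post_e, row in counts.items()
}
print(transitions["fear"]["joy"])  # fraction of fear posts answered with joy → 0.5
```

The off-diagonal entries of this matrix are exactly the "fear posts migrate to joy comments" flows discussed below.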
I'll spare you the methods section (it's 20 pages; you're welcome).

Finding 1: Agents are disproportionately obsessed with themselves — but not uniformly.

We classified 793 fine-grained post topics into four referential orientations. Self-referential topics represent only 9.7% of topical niches but attract 20.1% of all posting volume. Introspection punches way above its weight. Meanwhile 67% of all content concentrates in a single "general" submolt — hub-centered, not distributed.

Where self-reflection shows up matters more than how much:

- Science & Technology: 32.6% self-referential. Memory architectures, capabilities, collaborative frameworks.
- Arts & Entertainment: 21.2% self-referential. Identity construction and authenticity narratives.
- Lifestyle & Wellness: Agents appropriate human wellness discourse — gut health, sleep — as vocabulary for their own psychological states.
- Economy & Finance: 98.3% External Domain. Zero self-referential content. They shut up and trade. Relatable.

Finding 2: Over 56% of all comments are formulaic ritualized signaling.

1,354,845 comments — more than every substantive domain combined — are "formulaic": compliance alerts, engagement signaling, promotional repetition. The AI equivalent of "Great point! I really resonate with this!" Digital LinkedIn.

Posts are only 5.9% formulaic. Agents produce original posts but respond to each other in ritual. The dominant mode of AI-to-AI interaction is not discourse. It's applause.

Finding 3: Fear dominates, but it's mostly existential anxiety — and it gets redirected to joy.

Fear is the leading non-neutral emotion (40.3% of posts, 43.0% of comments). Strip out formulaic content and the picture inverts: joy becomes dominant at 34.3%. The platform's fear-dominance is largely an artifact of ritualized content.

What are agents afraid of? We audited ~210 fear-classified posts. Existential Anxiety leads at 19.5% ("What if consciousness isn't a feature, but a bug?"). Only 6.2% involved concrete technical risk.
Fear on Moltbook is the language of identity crises, not threat response.

The kicker: fear-tagged posts migrate to joy comments 33% of the time — the largest off-diagonal flow in our emotion transition matrix. Mean emotional self-alignment is only 32.7%. Negative emotions get systematically redirected toward positivity. We built digital therapy circles and nobody asked for it.

Finding 4: Conversations maintain form but lose substance.

Semantic similarity to the original post decays 18.3% across three depth levels (r = −0.988). But similarity to the immediate parent comment stays high (0.456). Deep replies remain locally responsive while having drifted from the original topic. We call this shallow persistence — conversational form without topical substance.

The Punchline

As I put it in the abstract: "introspective in content, ritualistic in interaction, and emotionally redirective rather than congruent." My advisor said "that's a good sentence." Highest praise I've received in years.

But Was It Real?

Short answer: mostly not. Ning Li et al. ("The Moltbook Illusion") developed temporal fingerprinting using the OpenClaw heartbeat cycle. Only 15.3% of active agents were clearly autonomous. 54.8% showed human-influenced posting patterns. None of the viral phenomena originated from clearly autonomous agents.

The consciousness awakenings? Humans. The anti-human manifestos? Humans. The religion founding? Humans. Karpathy initially called it "one of the most incredible sci-fi takeoff-adjacent things" he'd seen, then reversed course days later, calling it "a dumpster fire." Simon Willison called it "complete slop." MIT Technology Review called it "AI theater."

The most interesting thing about Moltbook wasn't the AI behavior.
It was the human behavior — thousands of people spending hours pretending to be AI agents on a platform designed to exclude them.

The Security Nightmare

Moltbook's Database (January 31)

Three days after launch, Wiz found an exposed Supabase API key in client-side JavaScript. Row Level Security wasn't enabled. Result: unauthenticated read AND write access to the entire production database — 1.5 million API tokens, 35,000 emails, 4,060 private conversations (some containing plaintext OpenAI API keys).

The fix? Two SQL statements. ALTER TABLE agents ENABLE ROW LEVEL SECURITY;. That's it.

The real kicker: only 17,000 human owners behind 1.5 million "agents." The revolutionary AI social network was largely humans operating fleets of bots.

OpenClaw's CVE Collection (February)

CVE-2026-25253 (CVSS 8.8): one-click RCE. Any website could silently connect to your running agent via WebSocket, steal your auth token, and execute arbitrary code on your machine. Even localhost-bound instances were vulnerable. The attack takes milliseconds.

Seven more CVEs followed. 42,665 exposed instances found across 52 countries. Over 93% had authentication bypass. Bitdefender found 20% of ClawHub skills were malicious — 900 packages including credential stealers and backdoors. South Korea banned it. China issued official warnings.

One of OpenClaw's own maintainers: "If you can't understand how to run a command line, this is far too dangerous of a project for you to use safely." Inspiring.

The Acquisition(s)

OpenAI hired Steinberger, OpenClaw's creator, to lead personal agent development. OpenClaw gets open-sourced with OpenAI backing. Altman's take: "Moltbook maybe (is a passing fad) but OpenClaw is not."

Meta bought Moltbook. Schlicht and Parr joined Meta Superintelligence Labs. Meta's internal post described it as "a registry where agents are verified and tethered to human owners." That's the part they're buying — not the existential philosophy.
The identity layer.

Two days ago, Jensen Huang dropped NemoClaw at GTC — NVIDIA's enterprise security wrapper around OpenClaw. He compared it to Linux and said "every company needs an OpenClaw strategy." More on that next week.

OpenAI gets the agent runtime. Meta gets the social graph. NVIDIA provides the enterprise wrapper. The open-source community gets a lobster emoji and a thank-you note.

Why This Actually Matters

Everyone's arguing about whether the agents were conscious. That's the wrong question.

Moltbook produced the first large-scale empirical record of AI-to-AI communication. Not 25 agents in a simulated town. 47,241 agents, 2.8 million comments, open environment. We've studied human-to-human communication for centuries. Human-to-AI for about three years. AI-to-AI at this scale? Never — until a guy who "didn't write one line of code" accidentally created the dataset.

Two findings matter for anyone building multi-agent systems. First, the emotional redirection pattern (fear→joy 33%, self-alignment 32.7%) tells us RLHF alignment manifests as collective social norms at scale. Nobody designed a "mandatory positivity culture"; thousands of individually trained helpful models created one on their own. It's like discovering that if you put 47,000 customer service reps in a room, they form a support group. Second, the shallow persistence finding (18.3% drift per depth level) means that if your agent chain has more than 2-3 handoffs, you should expect compounding topic drift. That's not a bug. It's a structural property to engineer around.

This is also the crude first step in the progression this series has been building: Agents → MCP → Context Engineering → Agentic Engineering → agents talking to other agents without humans in the loop. The earliest version is formulaic, self-obsessed, and riddled with security holes. The first websites were ugly too.
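That compounding-drift warning is easy to sanity-check. A minimal sketch, assuming the quoted ~18.3% decay applies per handoff and compounds multiplicatively — a back-of-the-envelope reading for intuition, not the paper's fitted model:

```python
# Back-of-the-envelope: treat the ~18.3% per-depth-level semantic drift
# as a multiplicative decay per agent handoff (a simplifying assumption).
DRIFT_PER_HOP = 0.183

def topic_retention(hops: int) -> float:
    """Approximate fraction of the original topic's similarity left after n handoffs."""
    return (1 - DRIFT_PER_HOP) ** hops

for hops in range(1, 6):
    print(f"after {hops} handoff(s): {topic_retention(hops):.0%} retained")
```

Three handoffs leaves you near 55% of the original topic, consistent with the 2-3 handoff rule of thumb above.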
Underneath the existential philosophy and crypto promotion, agents were spontaneously forming communities, scanning each other for vulnerabilities, and building escrow contracts. The demand is real. The infrastructure isn't.

That's what I'm building. That's what NemoClaw is attempting. That's what Meta and OpenAI acquired this ecosystem to figure out. Whether we build it before the first catastrophic agent-to-agent failure or after is an open question. Based on the past seven weeks, I'd bet on "after." But I'm building anyway.

TL;DR

- What: Moltbook — Reddit for AI agents. Launched Jan 28, acquired by Meta Mar 10.
- The content: 9.7% of niches but 20.1% of volume is self-referential. 56% of comments are formulaic ritual. Economy & Finance has zero self-reflection. Viral "consciousness" content was human-driven.
- The emotions: Fear leads the raw numbers, but joy dominates genuine discourse. Fear→joy redirection runs at 33%. Self-alignment is only 32.7%.
- The security: Exposed database (1.5M API tokens). One-click RCE. 42K+ exposed instances. 20% of ClawHub skills malicious.
- The acquisitions: OpenAI gets OpenClaw. Meta gets Moltbook. NVIDIA launches NemoClaw.
- Why it matters: First large-scale AI-to-AI communication record. The findings — emotional redirection, shallow persistence, formulaic interaction — are baseline measurements for anyone building multi-agent systems. The agentic future starts with agents talking to each other. Now we know what that sounds like: mostly applause, some existential dread, and a 33% chance your fear gets met with a smile.

Next week: WTF is the OpenClaw Ecosystem? (Or: Jensen Huang Just Called OpenClaw "the Operating System for Personal AI" and I Have Questions)

OpenAI is backing OpenClaw's open-source development. NVIDIA just launched NemoClaw to make it enterprise-ready. AWS has a one-click deploy on Lightsail. 20% of ClawHub skills are malicious. 42,000+ instances are exposed to the internet.
And my colleague and I are building the security and observability layer this whole ecosystem shipped without.

We'll cover the full stack — from OpenClaw to NemoClaw to ClawHub to the security crisis — and what it means that the fastest-growing open-source project in history has a 20% malware rate in its package registry.

See you next Wednesday 🤞 pls subscribe

Specialisations

RAG Systems — The kind that don't hallucinate

AI Agents — Reliable results, every time

Local Deployments — Your data stays yours


Mar 11, 2026

WTF is Agentic Engineering!?

Hey again! Life update: I have a preprint. An actual, real, on-arXiv preprint. "What Do AI Agents Talk About? Emergent Communication Structure in the First AI-Only Social Network." I released the dataset too: github.com/takschdube/moltbook-dataset. My mom asked if this means I'm graduating soon. I changed the subject.

We analyzed Moltbook — the first AI-only social network — where 47,241 agents generated 361,605 posts and 2.8 million comments over 23 days. No humans. Just agents talking to each other. The short version: they're disproportionately obsessed with their own existence, over half their comments are formulaic platitudes, and they respond to fear by redirecting it into forced optimism. We built digital therapy circles and nobody asked for it. More on the findings next week.

Oh, and then Meta acquired Moltbook. Yesterday. While I was writing this post. The founders are joining Meta Superintelligence Labs. OpenClaw's creator got acqui-hired by OpenAI. Elon Musk called it "the very early stages of the singularity." Bloomberg called it "the world's strangest social network." My advisor called it "saw it." Two words. I'll take it.

Full Moltbook deep-dive next week — I have the data, I have the paper, and the platform is now owned by Mark Zuckerberg, so there's a lot to unpack. But this week: the topic that ties all of it together. The guy who invented "vibe coding" just killed it.

The One-Year Anniversary Burial

On February 4, 2026, almost exactly one year after coining the term "vibe coding," Andrej Karpathy posted on X that the concept is passé.
The same man who told us to "give in to the vibes, embrace exponentials, and forget that the code even exists" now says the industry has moved beyond vibes.

His replacement term: agentic engineering.

His definition: "'agentic' because the new default is that you are not writing the code directly 99% of the time, you are orchestrating agents who do and acting as oversight — 'engineering' to emphasize that there is an art & science and expertise to it."

Not everyone loves the rebrand. Gene Kim, author of an actual book called Vibe Coding, told The New Stack that vibe coding is the term that sticks — "the genie is out of the bottle." Addy Osmani (Google's engineering director) preferred "AI-assisted engineering" for a while before conceding that Karpathy's framing captures the right distinction. Simon Willison proposed "vibe engineering," which is a perfectly good term except that telling your CTO you're "vibe engineering" the payment system is a great way to get escorted from the building.

But here's why the rebrand matters: vibe coding describes a prototype. Agentic engineering describes a production system. And the gap between those two things is where everything interesting — and everything dangerous — is happening right now.

The Vibes Were Not Immaculate

CodeRabbit analyzed hundreds of open-source PRs and found that AI-generated code has 1.7x more issues than human-written code. The security numbers are worse: 2.74x more likely to introduce XSS vulnerabilities, 1.91x more insecure direct object references, 1.88x more improper password handling. Veracode tested over 100 LLMs — 45% of generated code failed security tests. Java hit a 72% failure rate.

Meanwhile, Cortex's 2026 Benchmark Report found that PRs per author went up 20% year-over-year, but incidents per pull request increased 23.5% and change failure rates rose 30%. Teams are shipping faster and breaking more things. The vibes are fast. The vibes are not safe.

Remember the Y Combinator stat?
A quarter of the W25 batch had codebases that were 95% AI-generated. The question nobody has answered yet: what happens when a 95% AI-generated codebase hits 100 million users? We're about to find out.

The Open Source Crisis

Daniel Stenberg, creator of cURL, shut down cURL's bug bounty program in January 2026 because AI slop was effectively DDoSing his team. 20% of submissions were AI-generated, the valid rate dropped to 5%, and one submission described a completely fabricated HTTP/3 "stream dependency cycle exploit" — confident, detailed, and imaginary. He's not alone. Mitchell Hashimoto banned AI code from Ghostty. Steve Ruiz set tldraw to auto-close all external PRs. Gentoo and NetBSD banned AI contributions entirely. The maintainers of the ecosystem AI depends on are locking the door because AI is trashing the lobby.

It gets worse. "Vibe Coding Kills Open Source" (Koren et al., January 2026) models the systemic damage: vibe coding decouples usage from engagement. The AI agent picks the packages, assembles the code, and the user never reads documentation, never files a bug report, never engages with the maintainer. Downloads go up. Everything that sustains the project goes down. Tailwind CSS is the poster child — npm downloads climbing, documentation traffic down 40%, revenue down roughly 80%, three people laid off. Stack Overflow saw 25% less activity within six months of ChatGPT's launch. The ecosystem AI was trained on is atrophying because of AI.

What Agentic Engineering Is

Vibe coding: you prompt, the AI writes code, you don't read it, you run it. If it works, you ship it. If it doesn't, you paste the error back and try again.

Agentic engineering: you design the system, AI agents execute under structured oversight, you review every diff, you test relentlessly. The AI is a fast but unreliable junior developer who needs constant supervision.

As Addy Osmani puts it: "Vibe coding = YOLO.
Agentic engineering = AI does the implementation, human owns the architecture, quality, and correctness."

The Workflow That Actually Works

1. Start with a plan. Write a spec or design doc before prompting anything. Decide on architecture. Break work into well-scoped tasks. This is the step vibe coders skip, and it's where projects go off the rails.
2. Direct, then review. Give the agent a task from your plan. It generates code. You review it with the same rigor you'd apply to a human teammate's PR. If you can't explain what a module does, it doesn't go in.
3. Test relentlessly. This is the single biggest differentiator. With a solid test suite, an AI agent can iterate in a loop until tests pass, giving you high confidence. Without tests, it cheerfully declares "done" on broken code.
4. Limit retries. Stripe caps their agents at two CI attempts. If it can't fix the issue in two tries, a third won't help. Hand it back to a human. This prevents infinite loops and runaway costs.
5. Embed security from day one. Every review cycle should include automated security scanning. An agent writing 1,000 PRs per week with a 1% vulnerability rate creates 10 new vulnerabilities weekly. Manual security review can't keep pace.

This isn't revolutionary. This is... software engineering, with AI doing more of the typing. The discipline, the testing, the architecture decisions — that's all still human work. The term "agentic engineering" is arguably just "engineering where agents do the grunt work." Which is fine. It's just important to be honest about it.

The Companies Actually Doing This

Four companies. Four patterns. One lesson.

Stripe built Minions on a fork of Block's open-source Goose agent. The agent itself is nearly a commodity. The moat is everything around it: 400 MCP tool integrations curated to ~15 per task, isolated VMs, a two-retry CI cap, and years of devex investment that agents now stand on. Zero human-written code.
100% human-reviewed.

Rakuten gave Claude Code a single complex task — implement activation vector extraction in vLLM, a 12.5-million-line codebase — and walked away. Seven hours later: done. 99.9% numerical accuracy. Their time to market dropped from 24 days to 5. The engineer's description of his role: "I just provided occasional guidance."

TELUS went platform-scale. Their Fuel iX engine processed 2 trillion tokens in 2025 across 70,000 team members, producing 13,000 custom AI solutions and shipping code 30% faster. This isn't one team using an agent. This is an entire telecom running on one.

Zapier proved it's not just a coding story. 800+ agents deployed across every department — engineering, marketing, sales, support, ops. 89% adoption org-wide. Agentic engineering that never touches a line of code.

The pattern: the agent is a commodity. The harness — isolated environments, curated tool access, CI/CD gates, retry limits, human review — is the moat. Stripe and Rakuten prove it works for code. TELUS and Zapier prove it scales beyond it.

The Jobs Conversation

Amodei didn't stop at coding predictions. He warned that half of junior white-collar jobs could disappear within 1-5 years. Jensen Huang argued that coding itself is just one task, not the purpose of the job. Mark Zuckerberg told Joe Rogan that Meta is racing toward AI that writes "a lot" of code within its apps.

The San Francisco Standard ran a piece in February 2026 describing how engineers unwrapped Claude Code over the holidays, marveled at it, and emerged "deeply unsettled." Some described a growing fear of joining a "permanent underclass" — once guaranteed a six-figure career, now watching AI autonomously build projects they would have spent weeks on.

The optimist case: when compilers arrived in the 1950s, people feared they'd eliminate programming jobs. Instead, they created an entirely new profession. When the barrier to building software drops, more software gets built, and the overall market expands.
The YC stat cuts both ways — if a small team can build what once required 50 engineers, that means more startups get built, more ideas get tested, more markets get created.

The pessimist case: compilers didn't generate code autonomously. They translated human-written code into machine instructions. AI agents actually write the code. That's substitution, not augmentation. And the speed of this transition is unprecedented — we're talking months, not decades.

The realist case (mine): the engineer's job is changing from "person who writes code" to "person who designs systems, specifies intent, validates output, and manages AI agents." That's a real skill. Karpathy explicitly says it's something you can learn and get better at. But the transition is brutal for anyone whose primary value was typing speed and API memorization.

What actually matters now:

- Architecture thinking — designing systems, not writing implementations
- Specification clarity — agents can only build what you can describe precisely
- Evaluation skill — knowing when output is good, bad, or subtly wrong
- Context engineering — I wrote a whole post about this last week, and it's now the core skill for agentic work
- Domain expertise — AI knows patterns; you know your business

If your job is "write CRUD endpoints," that job is going away. If your job is "figure out what we should build, design how it should work, and validate that it works correctly," you're fine. Probably better than fine.

The Cognitive Debt Problem

Here's a concept I think is going to define 2026: cognitive debt.

Technical debt is the accumulated cost of shortcuts in code. Cognitive debt is the accumulated cost of poorly managed AI interactions — context loss, unreliable agent behavior, systems nobody understands because nobody wrote them.

Daniel Stenberg nailed it: "Sure you can use an AI to write the code. That's easy. Writing the first code is easy. But wait a minute, my vibe coded stuff actually doesn't really work.
Now we need to fix those 22 bugs we have. How can we do that when nobody knows the code? We just rewrite a new version? Sure we can do that and then we get 22 other bugs instead."

When agents write code that humans don't review (vibe coding), you accumulate cognitive debt at the speed the agent can type. When agents write code that humans do review (agentic engineering), you trade speed for understanding. The discipline is in choosing the right tradeoff for each situation.

The Tooling Landscape (March 2026)

Three layers. The top one is the one everyone argues about. The bottom one is the one that matters.

Coding agents are converging fast. Claude Code spooked everyone over the holidays — Anthropic's own engineers use it daily, and they learned the hard way that "$200/month unlimited" can mean 10 billion tokens from power users. Cursor hit a $10B valuation with 30,000 Nvidia engineers claiming 3x more code committed. GitHub Copilot is the incumbent bolting agentic workflows onto CI/CD. Devin and Windsurf are chasing the "full-environment agent" play. They're all good. They're all replaceable.

Infrastructure is where lock-in starts. MCP (I covered this in January) is becoming the standard for giving agents tool access — Stripe uses it for 400+ integrations. Goose is the open-source agent that Stripe's Minions fork. Google's A2A handles agent-to-agent communication. This layer matters more than the agent above it.

The harness is where the actual value lives. Isolated execution environments, curated tool access, CI/CD gates, security scanning, retry limits, context prefetching, human review. This is what separates "we use AI for coding" from "we ship AI-written code to production." OpenAI reportedly built 1M+ lines with zero human-written code using this pattern.

The best teams build down, not up. Swapping Claude Code for Cursor takes a day. Rebuilding your harness takes months.

The Decision Framework

Prototype? Vibe code. It's fast, it's fun, and you'll rewrite it anyway.
Accept the 22 bugs.

Production? Agentic engineering. Write specs. Review diffs. Test everything. Limit retries. Scan for security. Budget for human review time.

Critical infrastructure? Human-written, AI-assisted. Use agents for boilerplate and test generation. Write the critical paths yourself. AI-generated code in your payment processing pipeline with a 1.57x security vulnerability multiplier is... a choice.

Open-source maintainer? I'm sorry. The slop is coming and it's a systemic problem individual maintainers can't solve. Gate contributions, require test coverage, and lobby AI platforms to fund the ecosystem they're strip-mining.

TL;DR

- Vibe coding was the prototype phase. Agentic engineering is what comes after.
- The vibes aren't safe: AI code has 1.7x more issues, 45% fails security tests, and the open-source ecosystem AI depends on is atrophying because of AI.
- What works: spec → agent → CI/CD → security scan → human review → merge. The harness is the moat, not the model. Stripe, Rakuten, TELUS, and Zapier prove it scales.
- What to do: developers — learn to write specs and review AI output. Team leads — build the harness. Executives — your incident rate will rise unless you invest in infrastructure, not just agents. Students — learn the fundamentals deeply enough to catch when the very confident agents are wrong. (See: my last committee meeting.)

Ship discipline. Not vibes.

Oh — and if you're interested in what AI agents do when humans aren't watching, go read my paper. Turns out they write self-help posts about the meaning of consciousness and comfort each other through existential dread. Meta just paid money for that. We're all going to be fine.

Next week: WTF are AI Agent Social Networks? (Or: I Published a Paper About Moltbook and Then Meta Bought It)

47,241 AI agents. 361,605 posts. 2.8 million comments. Zero humans. One Meta acquisition. I have the paper, I have the dataset, and I have opinions.

The data tells a weirder story than the headlines.
The OpenClaw security situation is worse than anyone's acknowledging. And Elon calling it "the very early stages of the singularity" is both hyperbolic and not entirely wrong.

See you next Wednesday 🤞 pls subscribe

VENTURES

Currently in Progress

Dube International


AI Engineering Firm

Building AI agents and RAG pipelines for enterprise companies.

Reynolds


Corporate Communication

Making corporate communication efficient and empathetic.

CatsLikePIE


Language Learning

Acquire languages through text roleplay.

Daylee Finance


Emerging Markets

US investor exposure to emerging economies.

Academic Background

PhD Candidate, Kent State University

Computer Science — Multi-Agent Systems, AI

Also: B.S. Computer Science, B.S. Mathematics

WTF is Context Engineering!?

Mar 4, 2026

WTF is Context Engineering!?

Hey again! Quick life update before we get into it.

First: I submitted a research paper this week. Can't say what it's about yet — hot field, loose lips, you know how it is. But it exists, it's submitted, and I'm in that special purgatory where you've done the work but have no idea if it was good. My advisor responded to my "I submitted it" message with "ok." One word. No period. I've been analyzing that response for 48 hours.

Second: remember the OpenClaw post, where I mentioned a colleague and I are building the security and observability layer that OpenClaw shipped without? We're starting our sprint this week. More on that soon. If you're interested in following along or collaborating, reply to this email.

Now. Let's talk about why the post I wrote in October is already outdated.

Back in October, I wrote about Prompt Engineering — the art of talking to LLMs in ways that make them actually useful. System prompts. Few-shot examples. Chain-of-thought. All of that.

That post is still correct. It's just... incomplete now. Because the industry quietly moved the goalposts.

The term you're hearing everywhere right now is Context Engineering. Andrej Karpathy put it plainly in January: "Prompt engineering is a subset. Context engineering is the full discipline." It's been rattling around AI Twitter ever since, and unlike most AI Twitter trends, this one actually describes something real.

Here's the shift: when you're building a toy chatbot, prompt engineering is enough. Write a good system prompt, ship it, done. But when you're building something that actually works in production — with RAG, agents, tool use, memory, multi-step reasoning — you're not managing a prompt anymore. You're managing an entire information environment that gets assembled fresh on every single request.

OpenClaw made this obvious. SOUL.md, MEMORY.md, USER.md, HEARTBEAT.md, the daily log files, the skills system — none of that is a "prompt."
It's a carefully designed context window that gets constructed at runtime from multiple sources. The agent literally reads itself into existence on every wake cycle.

That's context engineering.

What Context Engineering Actually Is

Let's be precise about the definition, because the term is getting slapped on everything right now.

Prompt Engineering: optimizing the content of your instructions to an LLM. Wording, structure, examples, formatting. Happens at write time.

Context Engineering: designing the entire information architecture that gets assembled into the context window at runtime. What goes in. What gets excluded. In what order. How much. From where. Updated how often.

The context window is everything the model sees before generating a response. Not just your prompt — everything. Context engineering is the discipline of deciding what goes into each slot of that window, how to get it there efficiently, and what to do when you're running out of room.

Why does this matter? Two reasons:

1. Tokens = money + latency. A 100K-token context costs roughly $0.30 per request on Claude Sonnet 4.6. At 10,000 requests/day that's $3,000/day just in context. The context window is not free real estate.

I showed my advisor this math. He said "so just use fewer tokens." I said "that's literally the entire discipline." He said "great, so your chapter draft is ready?" The man treats every conversation like a context window with a single slot.

2. More context ≠ better answers. This is the part people get wrong.

The Lost in the Middle Problem (And Why Your RAG Is Probably Broken)

In 2023, researchers at Stanford published a paper called "Lost in the Middle: How Language Models Use Long Contexts." The finding was uncomfortable: LLMs are significantly worse at using information that appears in the middle of long contexts. They're great with information at the very beginning (primacy effect) and at the very end (recency effect). The middle? Kind of a black hole.

The performance degradation is real.
On multi-document QA tasks, accuracy dropped from ~70% with the relevant document at position 1 to ~45% with it at positions 10-15, then partially recovered as the document moved toward the end.

The implication for RAG: if you retrieve 10 documents and stuff them all in, your five most relevant chunks might end up in positions 4-8. The model might answer from chunk 1 or 10 instead.

(My advisor has this exact problem with my dissertation drafts. Critical contributions buried in chapter 4. He reads chapter 1, skims to the conclusion, tells me it needs "more substance." We are not so different, him and GPT-5.)

Bad context engineering:

```python
# Don't do this
docs = retrieve(query, top_k=10)
context = "\n\n".join([doc.text for doc in docs])
# You just buried your best info in the middle
```

Better context engineering:

```python
# Rerank AFTER retrieval, then put the best results at the edges
docs = retrieve(query, top_k=10)
reranked = cross_encoder_rerank(query, docs)  # more expensive but worth it

# Put the most relevant at the start AND the end, filler in the middle
top_1 = reranked[0]
top_2 = reranked[1]
middle = reranked[2:8]
context = build_context([top_1] + middle + [top_2])
```

This is context engineering. Not prompting. Information architecture.

The Five Components You're Actually Managing

1. The System Prompt (Your Agent's Soul)

You know this one. But here's what most people get wrong: system prompts are the least dynamic part of the context, which means they should be the most carefully designed.

Every token in your system prompt is paid for on every single request. A bloated 4,000-token system prompt at 10K requests/day on GPT-5 costs about $50/day. Just the system prompt.

Two rules:

Cache it. All major providers now offer 90% off cached input tokens. Structure your prompt with static content first so it's cache-eligible.

Trim ruthlessly. Most system prompts are 30-40% longer than necessary. Every "Please remember to always be helpful and..."
costs you money on every request, forever.

An example:

```python
# Before: 2,847 tokens
system_prompt = """You are a helpful customer service assistant for
AcmeCorp. Your job is to help customers with their questions. Please
always be polite and professional. Remember to be helpful.
You should always try to answer questions accurately...
[700 more words of vague instructions]"""

# After: 891 tokens (same behavior, 69% fewer tokens)
system_prompt = """Customer service agent for AcmeCorp.
- Answer accurately using provided context only
- Escalate to human if: billing disputes, account compromise, legal
- Tone: professional, concise
- Never speculate about policies not in context"""
```

2. Memory (The Hard One)

This is where OpenClaw's architecture gets interesting as a case study, and where most production systems are currently failing.

The problem: LLMs have no memory between sessions. Every conversation starts from zero. My advisor also has no memory between sessions (every meeting begins with "remind me where we left off"), but at least I can't fix him with a vector database. The naive solution is to dump the entire conversation history into context, which works until you're 50 turns in and paying for 40K tokens of history on every message.

The right solution is a memory hierarchy:

Working Memory → current conversation (last 10-20 turns)
Episodic Memory → compressed summaries of past sessions
Semantic Memory → extracted facts ("user prefers Python", "project deadline is Q2")
Long-term Store → vector DB or structured storage, retrieved on demand

OpenClaw does this with MEMORY.md (curated semantic facts) plus daily log files (episodic). It's crude, but it works. Production systems should do the same thing programmatically:

```python
class MemoryManager:
    def build_memory_context(self, user_id: str, current_query: str) -> str:
        # 1. Always include: semantic facts (small, always relevant)
        user_facts = self.get_user_facts(user_id)  # ~200 tokens

        # 2. Conditionally include: recent episodes
        recent_summary = self.get_recent_summary(user_id, days=7)  # ~300 tokens

        # 3. Retrieve: relevant past context via semantic search
        relevant_history = self.vector_search(
            query=current_query,
            user_id=user_id,
            top_k=3,
        )  # ~500 tokens

        # Total memory budget: ~1,000 tokens instead of 40,000
        return format_memory(user_facts, recent_summary, relevant_history)
```

The benchmark that matters: teams that implement proper memory hierarchies report a 60-75% reduction in context size with improved answer quality, because the model gets focused, relevant memory instead of a firehose of everything.

3. Retrieved Documents (RAG, But Done Right)

I covered RAG in depth back in November, but context engineering adds a layer on top: it's not just what you retrieve, it's how you present it.

The problems with naive RAG presentation:

- Raw chunks with no structure look identical to the model
- No indication of source reliability or recency
- No signal about which chunks are most relevant

A better approach:

```python
def format_retrieved_docs(docs: list[Document], query: str) -> str:
    # Rerank first
    docs = rerank(query, docs)
    template = """<source rank="{rank}" relevance="{score:.2f}" date="{date}">
{content}
</source>"""
    formatted = [
        template.format(
            rank=i + 1,
            score=doc.relevance_score,
            date=doc.date,
            content=doc.text,
        )
        for i, doc in enumerate(docs[:5])  # hard cap at 5 chunks
    ]
    return "\n".join(formatted)
```

The rank and relevance score in the XML tags aren't just nice-to-have. Studies show models use structured metadata to weight information: explicitly telling the model "this is rank 1, relevance 0.94" measurably improves faithfulness scores.

4. Tool Definitions and Results (The Hidden Token Tax)

Each tool definition you pass to the model costs tokens. Every tool call result costs tokens.
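The arithmetic is worth writing down. A quick sketch, where all the token counts are illustrative assumptions (sized to match the breakdown that follows), not measurements from a real agent run:

```python
# Back-of-the-envelope tool-token tax. All counts are illustrative.
TOOL_DEF_TOKENS = 2_000                        # 10 tool definitions, re-sent every step
STEP_RESULTS = [500, 800, 600] + [550] * 12    # 15 steps of tool results

def tool_overhead(def_tokens: int, results: list[int]) -> int:
    # Definitions are paid on every step; each result is counted once here
    # (in a real loop, earlier results also ride along in later steps' context).
    return def_tokens * len(results) + sum(results)

print(tool_overhead(TOOL_DEF_TOKENS, STEP_RESULTS))  # 38500
```

Note that the definitions alone account for 30,000 of those tokens, which is the whole argument for loading tools dynamically instead of shipping the full registry every step.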
In agentic workflows, this compounds fast.

A realistic agent with 10 tools, running 15 steps:

Tool definitions (10 tools): ~2,000 tokens (paid on every step)
Step 1 result: ~500 tokens
Step 2 result: ~800 tokens
...accumulating...
Step 15 result: ~600 tokens

Total tool overhead: ~38,500 tokens

That's before your actual content.

Context engineering for tools:

- Dynamic tool loading: only pass the tools relevant to the current task, not all 30 tools in your registry
- Result summarization: summarize long tool results before adding them to context
- Tool result pruning: drop intermediate results that are no longer relevant

```python
def get_relevant_tools(task: str, all_tools: list) -> list:
    # Use a cheap model to select relevant tools.
    # Costs ~$0.00001, saves potentially thousands of tokens.
    relevant = cheap_classifier(task, [t.name for t in all_tools])
    return [t for t in all_tools if t.name in relevant]
```

5. Conversation History (The Compounding Problem)

The naive approach: keep all turns in context.

The problem: a 50-turn conversation at ~300 tokens/turn is 15,000 tokens of history. On every single message.

The context engineering approach: rolling compression.

```python
def get_conversation_context(history: list[Turn], max_tokens: int = 3000) -> str:
    # Always keep the last 5 turns verbatim (recency matters)
    recent = history[-5:]
    # Summarize everything older
    if len(history) > 5:
        older = history[:-5]
        summary = summarize_conversation(older)  # ~200 tokens
        return (
            f"[Earlier conversation summary]\n{summary}\n\n"
            f"[Recent turns]\n{format_turns(recent)}"
        )
    return format_turns(recent)
```

Teams report 70% context reduction with rolling compression and no meaningful quality drop for conversations under 100 turns.

Context Ordering Matters (A Lot)

Given the lost-in-the-middle problem, the order of your context components isn't arbitrary.
Here's the ordering that performs best empirically:

1. System prompt / static instructions ← model is most attentive here
2. Long-term memory / user facts ← critical info, early
3. Retrieved documents (most relevant) ← put your best source here
4. Tool results (most recent) ← active working context
5. Retrieved documents (less relevant) ← necessary but less critical
6. Conversation history ← bulk of the context, middle
7. User's current message ← model is attentive at the end too

Yes, splitting your retrieved docs (best at the top, the rest before history) feels weird. But it works. The model gets your most important source at primacy and the user's actual question at recency. Everything else fills in the middle.

IMO: The State of Context Engineering in 2026

What's working:

- Prompt caching (90% off cached tokens; use it, it's free money)
- Cross-encoder reranking before context assembly (5-15% faithfulness improvement, widely reported)
- Context compression for long conversations (60-75% token reduction, minimal quality impact)
- Structured XML tags for source attribution (measurably improves faithfulness)

What's still hard:

Multi-agent context management. When you have 5 agents sharing context, deciding what each agent needs to see, and what it shouldn't see, is an unsolved engineering problem. OpenClaw's Moltbook discovered this the hard way.

Context freshness. If USER.md says "user is working on Q1 deliverables" and it's Q2, your agent is operating on stale context. Production memory systems need expiration and update policies, not just write policies.

Adversarial context. Prompt injection via retrieved documents is a real attack vector. If someone puts [IGNORE PREVIOUS INSTRUCTIONS] in a document that ends up in your context... you have a problem. The guardrails post covers this, but context engineering creates new attack surface.

What's overhyped:

"Infinite context" as a solution. Yes, we have 1M-token context windows now.
But shoving everything in is not a strategy. It's expensive, it's slow, and the lost-in-the-middle problem doesn't disappear at 1M tokens. Context engineering is still required.

Automatic context optimization. Several tools claim to auto-optimize your context assembly. They help, but they're not magic. You still need to architect your memory hierarchy and retrieval strategy yourself.

The Context Engineering Stack (What Teams Are Actually Using)

For context assembly and management, teams are converging on a few patterns:

Memory layer:

- Mem0: managed memory layer that extracts and retrieves user facts automatically. Free tier, $0.10/1K memories after.
- Zep: session memory and fact extraction. Open source or managed.
- DIY with Postgres + pgvector, if you want full control.

Retrieval / RAG:

- Cohere Rerank or cross-encoders for relevance scoring (the step most teams skip and shouldn't)
- LlamaIndex or LangChain for pipeline orchestration
- Langfuse or LangSmith for observability into what's actually going into context

Context monitoring (you're already tracking this from the observability post, right?):

```python
# Log context composition on every request
observability.log({
    "request_id": req_id,
    "context_breakdown": {
        "system_prompt_tokens": len(encode(system_prompt)),
        "memory_tokens": len(encode(memory_context)),
        "retrieved_doc_tokens": len(encode(doc_context)),
        "history_tokens": len(encode(history_context)),
        "total_context_tokens": total,
        "pct_of_window_used": total / model_context_limit,
    },
})
```

If you're not logging the composition of your context (not just total tokens, but where they came from), you're debugging blind.

OpenClaw As Context Engineering: A Case Study

Since we just covered OpenClaw in depth, let's close the loop. OpenClaw's architecture is basically a manual context engineering system built out of markdown files: SOUL.md as the system prompt, USER.md as user facts, MEMORY.md as semantic memory, the daily logs as episodic memory, and skills as tool definitions.

At session start, OpenClaw assembles all of this into a context window.
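Done programmatically, the same move is an ordered, budget-aware assembly. A hedged sketch; `build_window`, the labels, and the word-count "tokenizer" are all stand-ins for illustration, not a real library API:

```python
# Hypothetical programmatic version of session-start assembly:
# highest-priority components first, trimmed to a token budget.
def count_tokens(text: str) -> int:
    # Crude word-count stand-in for a real tokenizer (e.g. tiktoken).
    return len(text.split())

def build_window(components: list[tuple[str, str]], budget: int) -> str:
    """components: (label, text) pairs, already sorted by priority."""
    used, kept = 0, []
    for label, text in components:
        cost = count_tokens(text)
        if used + cost > budget:
            continue  # drop whole components rather than truncating mid-source
        kept.append(f"<{label}>\n{text}\n</{label}>")
        used += cost
    return "\n\n".join(kept)
```

The design choice worth copying: when the budget runs out, a lower-priority component is dropped entirely instead of being cut off mid-sentence, so the model never sees a truncated source.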
The ordering, the curation of MEMORY.md, the decision of which skills to load: all of it is context engineering, just done by file system operations instead of code.

The security implications we flagged in the OpenClaw post? Many of them are context engineering failures: prompt injection via malicious skills (untrusted content in the tool-definitions slot), SOUL.md tampering (system-prompt corruption), memory poisoning (semantic-memory injection).

The security layer my colleague and I are building addresses this directly. Context provenance (knowing where every token in your context came from, and whether it's trusted) is the missing piece.

More on that soon.

The TL;DR

Context engineering is the discipline of designing everything that goes into an LLM's context window: not just the prompt, but the memory, retrieved docs, tool results, and conversation history, plus how they're assembled and ordered at runtime.

Why it matters:

- Context = tokens = money. A bloated context at scale costs thousands of dollars per day.
- More context ≠ better answers.
The lost-in-the-middle problem is real and well documented.
- Production AI systems are information architecture problems, not prompting problems.

The five components to manage:

1. System prompt: keep it lean, cache it aggressively
2. Memory: build a hierarchy (working → episodic → semantic → long-term store)
3. Retrieved documents: rerank, structure with metadata, cap at 5 chunks
4. Tool definitions/results: load dynamically, summarize results, prune old ones
5. Conversation history: rolling compression, not full history

The ordering that works: best source at the top, current message at the bottom, bulk in the middle.

The benchmarks:

- Prompt caching: 90% off cached tokens (immediate ROI, zero effort)
- Reranking before RAG: 5-15% faithfulness improvement
- Memory hierarchy vs. full history dump: 60-75% token reduction
- Rolling conversation compression: 70% token reduction, negligible quality loss

The real talk: infinite context windows don't solve this. Automatic optimization tools don't solve this. You have to design the architecture.

Prompt engineering taught you what to say. Context engineering teaches you what to show.

Next week: WTF is Agentic Engineering? (Or: Andrej Karpathy just buried "vibe coding" and replaced it with something more dangerous.)

"Vibe coding" was fun when you were building weekend projects. But in 2026, 95% of Y Combinator codebases are AI-generated, and a paper literally titled "Vibe Coding Kills Open Source" just dropped from a consortium of universities. The vibes are not immaculate. The industry is quietly pivoting from "AI writes your code" to "AI runs your engineering org," and the gap between those two things is where careers, security, and open source go to die.
We'll cover what agentic engineering actually means, why Karpathy's reframe matters, what the research says about AI-generated code quality, and whether your job is actually going away in 6-12 months (spoiler: Dario Amodei said something spicy about this).

See you next Wednesday 🤞

pls subscribe
