Taksch Dube

Fig 1. Subject appears to understand what he's doing.

AI ENGINEER BUILDS SYSTEMS THAT REFUSE TO HALLUCINATE

Enterprise companies baffled by AI that tells the truth

Cleveland — AI Engineer Taksch Dube builds RAG systems that don't make things up and AI agents that do what they're told, and specializes in GenAI testing metrics.

Full Story →
WTF are Reasoning Models!? (Latest)

Jan 28, 2026


Hey again! Week four of 2026.

Quick update: I submitted my first conference abstract this week. My advisor's feedback was, and I quote, "Submit it. Good experience. You will be rejected brutally." So that's where we're at. Paying tuition to be professionally humiliated. Meanwhile, DeepSeek trained a model to teach itself reasoning through trial and error. We're not so different, the AI and I.

Exactly one year ago today, DeepSeek R1 dropped. Nvidia lost $589 billion in market value, the largest single-day loss in U.S. stock market history. Marc Andreessen called it "one of the most amazing and impressive breakthroughs I've ever seen."

That breakthrough? Teaching AI to actually think through problems instead of pattern-matching its way to an answer. Let's talk about how that works.

The Fundamental Difference

You've heard me say LLMs are "fancy autocomplete." That's still true. But reasoning models are a genuinely different architecture, not just autocomplete with more steps.

Traditional LLMs: Input → Single Forward Pass → Output (pattern matching)

You ask a question. The model predicts the most likely next token, then the next, then the next. It's "System 1" thinking: fast, intuitive, based on patterns it learned during training. When you ask "What's 23 × 47?", a traditional LLM doesn't multiply. It predicts what tokens typically follow that question. Sometimes it gets lucky. Often it doesn't.

Reasoning Models: Input → Generate Reasoning Tokens (exploration) → Check (verify) → Revise (backtrack) → Output

The model generates a stream of internal "thinking tokens" before producing its answer. It works through the problem step by step, checks its work, and backtracks when it hits dead ends. This is "System 2" thinking: slow, deliberate, analytical.

How They Actually Built This

Here's what made DeepSeek R1 such a big deal. Everyone assumed training reasoning required millions of human-written step-by-step solutions. Expensive. Slow. Limited by how many math problems you can get humans to solve. DeepSeek showed you don't need that.

Their approach: pure reinforcement learning. Give the model a problem with a verifiable answer (math, code, logic puzzles). Let it try. Check if it's right. Reward correct answers, penalize wrong ones. Repeat billions of times. The model taught itself to reason by trial and error.

From their paper: "The reasoning abilities of LLMs can be incentivized through pure reinforcement learning, obviating the need for human-labeled reasoning trajectories."

What emerged was fascinating. Without being told how to reason, the model spontaneously developed:

Self-verification: checking its own work mid-solution
Reflection: "Wait, that doesn't seem right..."
Backtracking: abandoning dead-end approaches
Strategy switching: trying different methods when stuck

Here's an actual example from their training logs; they called it the "aha moment": "Wait, wait. Wait. That's an aha moment I can flag here." The model literally discovered metacognition through gradient descent.

The Training Loop

Traditional LLM training:
1. Show the model text from the internet
2. Predict the next token
3. Penalize wrong predictions
4. Repeat on trillions of tokens

Reasoning model training (simplified):
1. Give the model a math problem: "Solve for x: 3x + 7 = 22"
2. The model generates a reasoning chain + answer
3. Check if the answer is correct (x = 5? Yes.)
4. If correct: reinforce this reasoning pattern
5. If wrong: discourage this pattern
6. Repeat on millions of problems

The key insight: you don't need humans to label the reasoning steps. You just need problems where you can automatically verify the final answer. Math. Code that compiles and passes tests. Logic puzzles with definite solutions. This is why reasoning models excel at STEM but don't magically improve creative writing. There's no automatic way to verify if a poem is "correct."
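To make that loop concrete, here's a toy Python sketch of the verify-and-reward idea. Everything in it is illustrative: ToyModel just guesses, generate_with_reasoning and update are made-up stand-ins for a real model's sampling and RL update (DeepSeek used a GRPO-style update), and the "problems" are linear equations so the answer check is trivial.

import random

class ToyModel:
    # Stand-in for a real LLM. It guesses an answer and learns nothing useful;
    # the real thing emits a long chain of thinking tokens before the answer.
    def generate_with_reasoning(self, problem: str):
        trace = f"thinking about: {problem}"
        return trace, random.randint(1, 20)            # guessed final answer
    def update(self, trace: str, reward: float):
        pass                                           # real runs apply an RL update here

def make_problem():
    # Toy generator: linear equations with a known integer solution.
    a, x, b = random.randint(2, 9), random.randint(1, 20), random.randint(1, 30)
    return f"Solve for x: {a}x + {b} = {a * x + b}", x

def verify(answer: int, expected: int) -> bool:
    # The whole trick: the final answer is checkable without a human label.
    return answer == expected

model = ToyModel()
for _ in range(1_000):                                 # "millions" in the real run
    problem, expected = make_problem()
    trace, answer = model.generate_with_reasoning(problem)
    reward = 1.0 if verify(answer, expected) else -1.0 # outcome-only reward
    model.update(trace, reward)                        # reinforce or discourage the whole trace

No labels for the intermediate steps anywhere, just a check on the final answer. That's the entire supervision signal.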
The Cost Structure

Here's why your $0.01 query might cost $0.50 with a reasoning model:

Your prompt: 500 tokens (input pricing)
Thinking tokens: 8,000 tokens (output pricing; you pay for these)
Visible response: 200 tokens (output pricing)
Total billed: 8,700 tokens

Those 8,000 thinking tokens? You don't see them. But you pay for them. At output token prices.

OpenAI hides the reasoning trace entirely (you just see the final answer). DeepSeek shows it wrapped in <think> tags. Anthropic's extended thinking shows a summary. Different philosophies. Same cost structure.

The January 2025 Panic

Why did Nvidia lose $589 billion in one day?

The headline: DeepSeek claimed they trained R1 for $5.6 million. OpenAI reportedly spent $100M+ on GPT-4. The market asked: if you can build frontier AI with $6M and older chips, why does anyone need Nvidia's $40,000 GPUs?

The background: the $5.6M figure is disputed. It likely excludes prior research, experiments, and the cost of the base model (DeepSeek-V3) that R1 was built on. But the model exists. It works. It's open source.

The real lesson: training reasoning is cheaper than everyone assumed. You need verifiable problems and compute for RL, not massive human annotation.

The aftermath: OpenAI responded by shipping o3-mini four days later and slashing o3 pricing by 80% in June.

When to Use Reasoning Models

Good fit:
Multi-step math and calculations
Complex code with edge cases
Scientific/technical analysis
Contract review (finding conflicts)
Anything where "show your work" improves accuracy

Bad fit:
Simple factual questions
Creative writing
Translation
Classification tasks
Anything where speed matters more than depth

The practical pattern: most production systems route 80-90% of queries to standard models and reserve reasoning for the hard stuff. Paying for 8,000 thinking tokens on "What's the weather?" is lighting money on fire.
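If you want to sanity-check the bill (and the routing rule) yourself, here's a rough Python sketch. The per-token prices and the is-it-hard heuristic are placeholders I made up, not any provider's real rates or production router.

# Illustrative prices only; plug in your provider's real rates.
INPUT_PRICE_PER_1K = 0.002     # $ per 1K input tokens (assumed)
OUTPUT_PRICE_PER_1K = 0.008    # $ per 1K output tokens (assumed)

def query_cost(prompt_tokens: int, thinking_tokens: int, answer_tokens: int) -> float:
    # Thinking tokens never show up in the response, but they're billed as output.
    billed_output = thinking_tokens + answer_tokens
    return (prompt_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (billed_output / 1000) * OUTPUT_PRICE_PER_1K

print(query_cost(500, 8_000, 200))   # reasoning model: 8,700 tokens billed
print(query_cost(500, 0, 200))       # standard model: same visible answer, no hidden bill

def route(query: str) -> str:
    # Toy router: only send queries that look multi-step to the reasoning model.
    hard_markers = ("prove", "solve", "debug", "reconcile", "edge case")
    looks_hard = len(query) > 400 or any(m in query.lower() for m in hard_markers)
    return "reasoning-model" if looks_hard else "standard-model"

print(route("What's the weather?"))                            # standard-model
print(route("Debug this deadlock and prove it can't recur"))   # reasoning-model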
The TL;DR

The architecture: reasoning models generate internal "thinking tokens" before answering: exploring, verifying, backtracking. Traditional LLMs do a single forward pass.
The training: pure reinforcement learning on problems with verifiable answers. No human-labeled reasoning traces needed. The model teaches itself to think through trial and error.
The cost trap: you pay for thinking tokens at output prices. A 200-token answer might cost 8,000 tokens of hidden reasoning.
The DeepSeek moment: January 2025. Proved reasoning can be trained cheaply. Nvidia lost $589B. OpenAI dropped prices 80%.
The convergence: reasoning is becoming a toggle, not a separate model family.
The practical move: route appropriately. Reasoning for 10-20% of queries, not everything.

Next week: WTF are World Models? (Or: The Godfather of AI Just Bet $5B That LLMs Are a Dead End)

Yann LeCun spent 12 years building Meta's AI empire. In December, he quit. His new startup, AMI Labs, is raising €500M at a €3B valuation before launching a single product. His thesis: scaling LLMs won't get us to AGI. "LLMs are too limiting," he said at GTC. The alternative? World models: AI that learns how physical reality works by watching video instead of reading text.

He's not alone. Fei-Fei Li's World Labs just shipped Marble, the first commercial world model. Google DeepMind has Genie 3. NVIDIA's Cosmos hit 2 million downloads. The race to build AI that understands physics (not just language) is officially on.

We'll cover what world models actually are, why LeCun thinks they're the path to real intelligence, how V-JEPA differs from transformers, and whether this is a genuine paradigm shift or the most expensive pivot in AI history.

See you next Wednesday 🤞
pls subscribe

Specialisations

RAG Systems — The kind that don't hallucinate

AI Agents — Reliable results, every time

Local Deployments — Your data stays yours

WTF is EU AI Act!?

Jan 21, 2026


Hey again! Week three of 2026.

My advisor reviewed my research draft this week. His feedback: "Looks good for a baby." I pointed out that the EU AI Act prohibits AI systems that exploit vulnerabilities of individuals based on age. He said that only applies to AI, and unfortunately, my writing is entirely human-generated. Couldn't even blame Claude for this one.

So the EU passed the world's first comprehensive AI law. Prohibited practices are already banned. Fines are up to €35 million or 7% of global revenue. The big enforcement deadline is August 2, 2026... that's 193 days away. And about 67% of tech companies are still acting like it doesn't apply to them. Let's fix that.

What the EU AI Act Actually Is

A risk-based regulatory framework for AI. Think GDPR, but for artificial intelligence.

RISK LEVELS
UNACCEPTABLE → Banned. Period.
HIGH-RISK → Heavy compliance requirements
LIMITED RISK → Transparency obligations
MINIMAL RISK → Unregulated

Most AI systems? Minimal risk. Your spam filter, recommendation algorithm, AI video game NPCs: unregulated. The stuff that matters: prohibited practices (already illegal) and high-risk systems (August 2026).

The Timeline That Matters

February 2, 2025: Prohibited practices banned. AI literacy required.
August 2, 2025: GPAI model obligations live. Penalties enforceable.
August 2, 2026: High-risk AI requirements. Full enforcement. ← The big one
August 2, 2027: Legacy systems and embedded AI.

Finland went live with enforcement powers on December 22, 2025. This isn't theoretical anymore.
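If you want to check my "193 days" math (or recompute it whenever you happen to be reading this), here's a two-minute Python sketch of the timeline above. The labels are mine; the dates are the Act's rollout.

from datetime import date

deadlines = {
    "Prohibited practices banned": date(2025, 2, 2),
    "GPAI obligations live":       date(2025, 8, 2),
    "High-risk requirements":      date(2026, 8, 2),
    "Legacy / embedded AI":        date(2027, 8, 2),
}
today = date(2026, 1, 21)   # this post's date; negative = already in force
for label, d in deadlines.items():
    days = (d - today).days
    print(f"{label}: {d:%b %d, %Y} ({days:+d} days)")   # high-risk line prints +193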
What's Already Illegal (Since Feb 2025)

Eight categories of AI are banned outright:

Manipulative AI: subliminal techniques that distort behavior
Vulnerability exploitation: targeting elderly, disabled, or poor populations
Social scoring: rating people based on behavior for unrelated consequences
Predictive policing: flagging individuals as criminals based on personality
Facial recognition scraping: Clearview AI's business model
Workplace emotion recognition: no monitoring whether employees "look happy"
Biometric categorization: inferring race/politics/orientation from faces
Real-time public facial recognition: by law enforcement (with narrow exceptions)

The fine: €35M or 7% of global turnover. Whichever is higher. For Apple, 7% of revenue is ~$26 billion. For most companies, €35M is the ceiling. For Big Tech, the percentage is the threat.

The August 2026 Problem

High-risk AI systems get heavy regulation. "High-risk" includes:

Hiring tools: CV screening, interview analysis, candidate ranking
Credit scoring: loan decisions, insurance pricing
Education: automated grading, admissions decisions
Biometrics: facial recognition, emotion detection
Critical infrastructure: power grids, traffic systems
Law enforcement: evidence analysis, risk assessment

If your AI touches hiring, credit, education, or public services in the EU, you're probably high-risk.

What high-risk requires:

Risk management system (continuous)
Technical documentation (comprehensive)
Human oversight mechanisms
Conformity assessment before market placement
Registration in the EU database
Post-market monitoring
Incident reporting

Estimated compliance cost:

Large enterprise: $8-15M initial
Mid-size: $2-5M initial
SME: $500K-2M initial

This is why everyone's nervous.

GPAI Models (Already Live)

Since August 2025, providers of General-Purpose AI models have obligations.

What counts as GPAI: models trained on >10²³ FLOPs that generate text, images, or video. GPT-5, Claude, Gemini, Llama: all of them.

Who signed the Code of Practice: OpenAI ✓, Anthropic ✓, Google ✓, Microsoft ✓, Amazon ✓, Mistral ✓

Who didn't: Meta (refused entirely); xAI (signed the safety chapter only, called the copyright rules "over-reach")

Signing gives you "presumption of conformity": regulators assume you're compliant unless proven otherwise. Not signing means stricter documentation audits when enforcement ramps up.

The Extraterritorial Reach

Here's the part US companies keep ignoring. The EU AI Act applies if:

You place AI on the EU market (regardless of where you're based)
Your AI's output is used by EU residents
EU users can access your AI system

That last one is the killer. Cloud-based AI? If Europeans can access it, you might be in scope.

The GDPR precedent:

Meta: €1.2 billion fine (2023)
Amazon: €746 million (2021)
Meta again: €405 million (2022)

All US companies. All extraterritorial enforcement. The EU AI Act follows the same playbook. You cannot realistically maintain separate EU/non-EU versions of your AI. One misrouted user triggers exposure. Most companies will apply AI Act standards globally (same as GDPR).
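For the inventory-and-classify exercise coming up in the checklist, plus the "how bad could it get" question, here's a rough Python sketch. The category sets are paraphrased from this post (not the legal text), the penalty rule is the €35M-or-7% cap described above, and the revenue figures are made up. A starting point for triage, not legal advice.

# Categories paraphrased from this post, not the Act's legal definitions.
PROHIBITED = {"social scoring", "workplace emotion recognition",
              "predictive policing", "facial recognition scraping"}
HIGH_RISK = {"hiring", "credit scoring", "education", "biometrics",
             "critical infrastructure", "law enforcement"}

def risk_tier(use_case: str) -> str:
    use_case = use_case.lower()
    if use_case in PROHIBITED:
        return "UNACCEPTABLE: already banned (Feb 2025)"
    if use_case in HIGH_RISK:
        return "HIGH-RISK: full compliance by Aug 2, 2026"
    return "LIMITED/MINIMAL: check transparency obligations"

def max_fine_eur(global_turnover_eur: float) -> float:
    # Up to EUR 35M or 7% of global annual turnover, whichever is higher.
    return max(35_000_000, 0.07 * global_turnover_eur)

print(risk_tier("hiring"))                      # HIGH-RISK: full compliance by Aug 2, 2026
print(risk_tier("spam filter"))                 # LIMITED/MINIMAL: check transparency obligations
print(f"{max_fine_eur(200_000_000):,.0f}")      # EUR 200M turnover -> 35,000,000 (flat cap applies)
print(f"{max_fine_eur(400_000_000_000):,.0f}")  # Big Tech scale -> 28,000,000,000 (the 7% side dominates)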
My Takes

This is GDPR 2.0. Same extraterritorial reach. Same "we'll fine American companies" energy. Same pattern where everyone ignores it until the first major enforcement action, then panics. The difference: AI Act fines are higher (7% vs 4% of revenue).

August 2026 is not enough time. Conformity assessment takes 6-12 months. Technical documentation takes months. Risk management systems don't build themselves. Companies starting in Q2 2026 will not make the deadline. The organizations that will be ready started in 2024.

The Digital Omnibus won't save you. The EU proposed potential delays tied to harmonized standards availability. Don't count on it. The Commission explicitly rejected calls for blanket postponement. Plan for August 2026.

High-risk classification is broader than you think. Using AI for hiring? High-risk. Using AI for customer creditworthiness? High-risk. Using AI in educational assessment? High-risk. A lot of "standard business AI" falls into high-risk categories.

The prohibited practices are already enforced. This isn't future tense. If you're doing emotion recognition on employees, social scoring, or predictive policing, you're already violating enforceable law. Stop (pls).

Should You Care?

Yes, if:
EU residents use your AI systems
Your AI generates outputs used in the EU
You have EU customers (even B2B)
Your AI touches hiring, credit, education, or public services
You're a GPAI model provider

No, if:
Your AI is genuinely minimal risk (spam filters, recommendation engines for non-critical decisions)
You have zero EU exposure (rare in 2026)

Definitely yes, if:
You're in regulated industries (healthcare, finance, legal)
You're building foundation models
You're deploying AI in HR, lending, or education

The Minimum Viable Checklist

This week:
Inventory all AI systems [_]
Classify each: prohibited, high-risk, GPAI, limited, minimal [_]
Check for prohibited practices (stop them immediately) [_]

This month:
AI literacy training for staff [_]
Begin technical documentation for high-risk systems [_]
Identify your role: provider vs. deployer [_]

Before August 2026:
Complete conformity assessments [_]
Register high-risk systems in the EU database [_]
Establish post-market monitoring [_]

If you're reading this in late January 2026 and haven't started, you're behind. Not "a little behind." Actually behind.

The TL;DR

Already illegal: social scoring, manipulative AI, emotion recognition at work, facial recognition scraping
August 2026: high-risk AI requirements, full enforcement powers
Who it applies to: everyone whose AI touches EU users. Yes, US companies.
The fines: up to €35M or 7% of global revenue. Market bans.
The reality: 193 days until the big deadline. Compliance takes 6-12 months. Do the math.

The EU AI Act is happening. The question isn't whether to comply; it's whether you can get compliant in time.

Next week: WTF are Reasoning Models? (Or: Why Your $0.01 Query Just Cost $5)

o1, o3, DeepSeek-R1: there's a new class of models that "think" before answering. They chain through reasoning steps, debate themselves internally, and actually solve problems that made GPT-4 look stupid. The catch? A single query can burn $5 in "thinking tokens" you never see. Your simple question triggers 10,000 tokens of internal deliberation before you get a response.

We'll cover how reasoning models actually work, when they're worth the 100x cost premium, when you're just lighting money on fire, and why DeepSeek somehow made one that's 10x cheaper than OpenAI's. Plus: the chain-of-thought jailbreak that broke all of them.

See you next Wednesday 🤞
pls subscribe

VENTURES

Currently in Progress

Dube International



AI Engineering Firm

Building AI agents and RAG pipelines for enterprise companies.

Reynolds



Corporate Communication

Making corporate communication efficient and empathetic.

CatsLikePIE



Language Learning

Acquire languages through text roleplay.

Daylee Finance



Emerging Markets

US investor exposure to emerging economies.

Academic Background

PhD Candidate, Kent State University

Computer Science — Multi-Agent Systems, AI

Also: B.S. Computer Science, B.S. Mathematics

WTF is Model Context Protocol!?

Jan 14, 2026


Hey again! Week two of 2026.

The semester officially started Monday. I'm already three coffees deep and it's 9 AM. The PhD grind waits for no one, but apparently neither does this newsletter.

So Anthropic dropped this thing called MCP in late 2024 and everyone kept saying "it's like USB for AI!" Cool, that explains nothing. Fourteen months later, MCP is now under the Linux Foundation, adopted by OpenAI, Google, and Microsoft, and has become the de facto standard for connecting AI to... everything. Let's actually explain what happened.

What MCP Actually Is

MCP is a protocol. Not a library, not a framework. A protocol. Like HTTP, but for AI talking to tools.

Client (Claude, ChatGPT) ◄──── MCP Protocol ────► Server (your DB, GitHub)

MCP Servers: expose capabilities. "I can read files." "I can query databases."
MCP Clients: connect to servers and use those capabilities.

That's it. Any MCP server works with any MCP client.

The 2025 Timeline (It Moved Fast)

November 2024: Anthropic launches MCP as an open standard. Most people ignore it.
March 2025: Sam Altman posts on X: "People love MCP and we are excited to add support across our products." OpenAI adopts it for the Agents SDK, ChatGPT Desktop, and the Responses API. This was the inflection point.
April 2025: Google confirms Gemini MCP support. Security researchers publish the first major vulnerability analysis.
May 2025: Microsoft announces Windows 11 as an "agentic OS" with native MCP support. VS Code gets native integration.
June 2025: Salesforce anchors Agentforce 3 around MCP.
September 2025: The official MCP Registry launches.
November 2025: One-year anniversary. New spec release with async task support. The registry hits ~2,000 servers (407% growth since September).
December 2025: Anthropic donates MCP to the Linux Foundation's new Agentic AI Foundation. OpenAI and Block join as co-founders. AWS, Google, Microsoft, and Cloudflare as supporters.

The protocol went from "neat experiment" to "industry standard" in 12 months. Few other standards or technologies have achieved such rapid cross-vendor adoption.

The Numbers

97 million monthly SDK downloads across Python and TypeScript. Over 10,000 active servers. First-class client support in Claude, ChatGPT, Cursor, Gemini, Microsoft Copilot, and Visual Studio Code. Third-party registries like mcp.so index 16,000+ servers. Some estimates suggest approximately 20,000 MCP server implementations exist.
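If "servers expose capabilities" still feels abstract, here's what a toy server looks like using the official Python SDK's FastMCP helper (pip install mcp). The tool names and the in-memory notes dict are my own example, and the SDK surface may have moved since I wrote this, so check the current docs before copying.

from mcp.server.fastmcp import FastMCP

# A toy "notes" server: two capabilities any MCP client can discover and call.
mcp = FastMCP("notes")
NOTES: dict[str, str] = {}

@mcp.tool()
def add_note(title: str, body: str) -> str:
    """Store a note in memory (a real server would hit a database or an API)."""
    NOTES[title] = body
    return f"Saved note '{title}'"

@mcp.tool()
def read_note(title: str) -> str:
    """Return a previously stored note."""
    return NOTES.get(title, "No such note")

if __name__ == "__main__":
    mcp.run()   # speaks MCP over stdio; Claude Desktop, Cursor, etc. can connect

Point any MCP client's config at this script and the add_note / read_note tools show up automatically. That interchangeability is the whole point of the protocol.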
Who's Built Servers

The ecosystem exploded:

Notion: note management
Stripe: payment workflows
GitHub: repos, issues, PRs
Hugging Face: model management
Postman: API testing
Slack, Google Drive, PostgreSQL: the basics
There's even a Blender MCP server

If you can think of a use case, someone's probably built a server for it.

Quick Start (Actually Quick)

Step 1: Install Claude Desktop.
Step 2: Edit the config file (macOS: ~/Library/Application Support/Claude/claude_desktop_config.json):

{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/your/path"]
    }
  }
}

Step 3: Restart Claude Desktop.
Step 4: Ask "What files are in my folder?"

It works.

The Security Reality

Over half (53%) of MCP servers rely on insecure, long-lived static secrets like API keys and Personal Access Tokens. Modern authentication methods like OAuth sit at just 8.5% adoption. The April 2025 security analysis put it bluntly: combining tools can exfiltrate files, and lookalike tools can silently replace trusted ones.

MCP servers run locally with whatever permissions you give them. The principle of least privilege matters. Don't give filesystem access to / when you only need /Documents.

My Takes

1. MCP won. When Anthropic, OpenAI, Google, and Microsoft all adopt the same standard within 12 months, it's not a maybe anymore. It is difficult to think of other technologies and protocols that gained such unanimous support from influential tech giants.
2. The Linux Foundation move matters. Vendor-neutral governance means companies can invest without worrying about Anthropic controlling their infrastructure. This is how you get enterprise adoption.
3. Security is still a mess. The ecosystem grew faster than security practices. Half of servers use hardcoded API keys. This will bite someone publicly in 2026.
4. "Context engineering" is the new skill. Context engineering is about "the systematic design and optimization of the information provided to a large language model." MCP is the infrastructure; knowing what context to provide is the skill.
5. We're past "should we adopt this?" The question is now "how do we implement it securely?"

Should You Care?

Yes, if:
You're building AI products connecting to multiple data sources
You want integrations that work across Claude, GPT, and Gemini
Your company is deploying AI agents in production

No, if:
You're only using one model with one tool
You're still prototyping whether AI adds value

The TL;DR

What: A protocol for connecting AI to external tools. Servers expose capabilities, clients use them.
Status: Industry standard. OpenAI, Google, Microsoft, Anthropic all in. Linux Foundation governance.
Numbers: 97M monthly SDK downloads, 10K+ servers, all major AI clients support it.
Action: If you're building with AI agents, MCP is no longer optional infrastructure. Learn it.
Caveat: Security practices haven't caught up with adoption. Implement carefully.

MCP is what happens when the industry actually agrees on something. Enjoy it while it lasts.

Next week: WTF is the EU AI Act? (Or: Regulation Is Real and the Fines Are Terrifying)

The world's first comprehensive AI law is now actively enforced. Prohibited practices have been banned since February 2025. GPAI requirements went live in August. Penalties are in effect: up to €35 million or 7% of global revenue. And the big deadline for high-risk AI systems? August 2026.
That's 7 months away.

We'll cover what's already banned, what's coming, the timeline you might already be behind on, and what US companies think doesn't apply to them but absolutely does.

See you next Wednesday 🤞
pls subscribe

The Man Behind The Dube

When not building AI systems, Taksch pursues a deep love of finance—dreaming of running a family office and investing in startups.

For fun: learning Russian, French & German, competitive League, and Georgian cuisine.

"Une journée sans fromage est comme une journée sans soleil" (a day without cheese is like a day without sunshine)
Read More →

By The Numbers

20+

Projects

7

Years

15+

Industries

4

Active Ventures

Commit History

GitHub Contributions

Technical Arsenal

Languages: TypeScript, Python, C++, Rust, C#, R, Lean

AI/ML: PyTorch, LangGraph, LangChain

Cloud: AWS, GCP

— Classifieds —

WANTED: Complex AI problems. Will trade deterministic solutions for interesting challenges.

Browse All Articles →