Labs
Working notes, prototypes, and evaluation systems behind the shipping work
This page collects technical experiments that sit one step before productization: agent control surfaces, evaluation kits, retrieval observability, and interface systems for technical storytelling. The goal is not volume. The goal is sharper systems, clearer artifacts, and faster trust.
Ongoing experiments that sharpen product AI, reliability, and technical communication
These labs are serious enough to build and document, but still early enough to stay lightweight. They are where product AI becomes more inspectable, where technical artifacts get clearer, and where reliability patterns get tested before they harden into larger systems.
In prototyping
Realtime agent review loops
Exploring how assistants expose state, surface uncertainty, and stay understandable once a human needs to step in. The focus is on operator checkpoints, live traces, fallback behavior, and response review rather than black-box chat.
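The operator-checkpoint idea above can be sketched in a few lines. This is a hypothetical illustration, not the lab's actual code: every name (`Trace`, `review_gate`, the confidence threshold) is invented for the example. The point is the shape of the loop — log a trace of what the assistant did, and hold low-confidence answers for a human instead of sending them to the user.

```python
from dataclasses import dataclass, field

# All names here are illustrative; this sketches the pattern, not a real system.

@dataclass
class Trace:
    """Append-only record of what the assistant did, for operator review."""
    events: list = field(default_factory=list)

    def log(self, kind: str, detail: str) -> None:
        self.events.append((kind, detail))


def review_gate(answer: str, confidence: float, trace: Trace,
                threshold: float = 0.7) -> str:
    """Route low-confidence answers to a human checkpoint instead of the user."""
    trace.log("answer", answer)
    trace.log("confidence", f"{confidence:.2f}")
    if confidence < threshold:
        # Fallback behavior: surface uncertainty rather than guessing.
        trace.log("checkpoint", "held for operator review")
        return "HELD_FOR_REVIEW"
    trace.log("released", "sent to user")
    return answer
```

The design choice this encodes is that the trace, not the chat transcript, is the unit of review: an operator stepping in sees every decision point, not just the final text.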
Building lightweight evaluation kits for retrieval and assistant systems: small benchmark sets, trace logging, regression checks, and failure review loops that make behavior easier to measure before rollout.
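A minimal version of such a kit fits in a handful of functions. This sketch is hypothetical (the benchmark cases, `run_eval`, and `regression_check` are all invented for illustration), but it shows the core loop named above: a small benchmark set, a scoring pass, and a regression gate against a stored baseline before rollout.

```python
# Hypothetical mini evaluation kit; names and cases are illustrative only.

BENCHMARK = [
    {"query": "capital of France", "expected": "Paris"},
    {"query": "2 + 2", "expected": "4"},
]


def run_eval(answer_fn, benchmark) -> float:
    """Score a system against the benchmark set; returns exact-match accuracy."""
    hits = sum(
        1 for case in benchmark
        if answer_fn(case["query"]).strip() == case["expected"]
    )
    return hits / len(benchmark)


def regression_check(score: float, baseline: float, tolerance: float = 0.0) -> bool:
    """Gate a rollout: fail if the new score drops below the stored baseline."""
    return score >= baseline - tolerance
```

Even a toy set like this makes behavior measurable: each failed case is a concrete trace to review, and the baseline comparison turns "does it feel better?" into a yes/no gate.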
Using diagrams, motion, WebGL, and structured AI narration to turn complex engineering work into something faster to evaluate without flattening the technical depth.
diagrams · motion · WebGL · authored interfaces
The target is faster understanding for recruiters, founders, and technical peers.
Trust improves when systems become more visible, measurable, and legible
Across product AI, backend systems, and research tooling, the same pattern keeps showing up: work gets stronger when behavior is easier to inspect, decisions are easier to explain, and artifacts are easier to hand off. The labs page should make that through-line visible instead of treating experiments as scraps.
Related artifacts
Notes that connect the experiments to real engineering work
These documents keep the experiments attached to real engineering signal instead of floating as abstract exploration.
Technical brief
Reliability Patterns for GPT-5 Product Assistants
A compact note on structured prompting, trust boundaries, feedback loops, and why user-facing assistants need product discipline as much as model capability.
Adversarial Robustness Evaluation for Practical LLM Systems
A brief on perturbation-based evaluation, model brittleness, and how failure analysis becomes a practical deployment question rather than a purely academic one.
Use the copilot to connect experiments back to shipped work
The fastest path is to move from the labs into the case studies, research notes, or the AI copilot so the experiments stay connected to product delivery and engineering signal.