Hi HN! We are building Syncause, an AI debugger extension for VS Code/Cursor/Antigravity.
We built this because we were frustrated that AI coding agents act "blindly"—they change code without knowing the actual runtime variable values.
The Technical Approach:
We realized that standard OpenTelemetry is designed for Ops (sampling, aggregation), but it misses the granular data needed for AI debugging (like full args and return values). So, we repurposed OTel to feed an always-on in-memory ring buffer inside the local process.
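For the curious, here is a minimal Python sketch of that idea: a custom OTel span exporter that keeps the last N spans in a bounded deque instead of shipping them to a backend. The class name and capacity are illustrative, not our actual implementation:

    from collections import deque
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import (
        SimpleSpanProcessor, SpanExporter, SpanExportResult,
    )

    class RingBufferExporter(SpanExporter):
        """Hold the most recent spans in memory; old ones fall off the back."""
        def __init__(self, capacity=10_000):
            self.buffer = deque(maxlen=capacity)

        def export(self, spans):
            self.buffer.extend(spans)        # no disk, no network I/O
            return SpanExportResult.SUCCESS

        def shutdown(self):
            self.buffer.clear()

    exporter = RingBufferExporter()
    provider = TracerProvider()
    provider.add_span_processor(SimpleSpanProcessor(exporter))
    trace.set_tracer_provider(provider)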
How it works under the hood:
* Capture: We instrument Python, Node/TS, and Java to record deep context into a circular buffer. This keeps overhead very low since we aren't constantly writing to disk or the network (there's a rough sketch of the capture shape after this list).
* No Repro Needed: You don't have to painstakingly reproduce the bug steps or wait for a crash. Just describe the issue (e.g., "cart total was calculated wrong") and we semantically search the buffer history for the relevant execution frames and variable states from when that logic ran (see the search sketch below).
* Privacy: Since this deals with real data, we run a local PII sanitizer before sending any context to the LLM. It is designed strictly for dev/test environments (see the redaction sketch below).
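To make the capture step concrete, here's a heavily simplified Python sketch of what ends up in the buffer per call. The real instrumentation hooks in at the tracer/agent level rather than via a per-function decorator, and the names here are purely illustrative:

    import functools
    import time
    from collections import deque

    frames = deque(maxlen=10_000)   # the in-memory ring buffer

    def captured(fn):
        """Snapshot args, return value, and a timestamp for each call."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            frames.append({
                "function": fn.__qualname__,
                "args": repr(args),
                "kwargs": repr(kwargs),
                "return": repr(result),
                "ts": time.time(),
            })
            return result
        return wrapper

    @captured
    def apply_discount(total, rate):
        return total * (1 - rate)

    apply_discount(100.0, 0.2)   # a frame with args and return value is now in the buffer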
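The "no repro" search then runs over those captured frames. Our actual ranking is embedding-based semantic search; this stand-in uses a trivial word-overlap score just to show the shape of the query:

    def search_frames(frames, query, top_k=5):
        """Rank captured frames by rough relevance to a natural-language bug report."""
        words = set(query.lower().split())

        def score(frame):
            text = " ".join(str(v) for v in frame.values()).lower()
            return sum(1 for w in words if w in text)

        return sorted(frames, key=score, reverse=True)[:top_k]

    frames = [
        {"function": "compute_cart_total", "args": "(cart_id=17,)", "return": "41.97"},
        {"function": "send_receipt", "args": "('order-42',)", "return": "None"},
    ]
    search_frames(frames, "cart total was calculated wrong")
    # the compute_cart_total frame ranks first, with its argument/return values attached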
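And the PII sanitizer, conceptually: redact known patterns locally before anything is attached to the LLM prompt. A toy version (the shipped sanitizer covers more categories and structured data):

    import re

    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def sanitize(text):
        """Replace likely PII with placeholders before context leaves the machine."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"<{label}>", text)
        return text

    sanitize("bob@example.com paid with 4111 1111 1111 1111")
    # -> '<email> paid with <card>'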
The video demo shows Java, but we support Python and Node.js with the same architecture.
Happy to answer questions about the ring buffer implementation or the trade-offs we made!