Essay
From Screenshot to Solution: Stop Forwarding Me Vague Tickets
Learn how AI Feedbacks leverages Google Gemini to turn visually useless bug reports into actionable prompts for coding agents.
The AI Sees Your Stack Trace and Judges You
"It's broken." — an actual Jira ticket, assigned to you, with no further context, a screenshot of the wrong screen, and the energy of someone who has never once thought about what a developer needs to fix a bug.
Bug reporting in 2026. Still a game of telephone. Still completely broken.
We're in a timeline where AI writes production code, but the standard bug report is still a screenshot cropped to a vague red box, pasted into Slack, followed by four words and the silent prayer that you're also a psychic. By the time that reaches an AI coding agent, the context has fully decayed. The agent writes the wrong fix. You spend 20 minutes explaining what was actually broken. Classic.
AI Feedbacks exists specifically because I got tired of receiving "it doesn't work" as actionable development context.
The Problem Has a Name: Context Decay
When a bug fires on the client side, useful information exists for exactly a few seconds: the DOM state at the moment of failure, the console errors, the failed network requests, the exact UI configuration. Then the user closes the tab, writes "the button did something weird," and submits.
By the time that reaches Cursor or Windsurf, the prompt is so thin it's basically a riddle. Agents hallucinate. Developers reconstruct the crash scene manually before they can even ask the agent to start fixing it.
The technical term is "context decay." The honest term is "why does this happen every single sprint."
The Fix: Capture Everything at the Moment It Exists
The Chrome Extension runs silently in the background. When the user crops and submits a bug report, it automatically:
- Grabs the exact area of the screen they flagged
- Captures every `console.error` and `console.warn` from the session (yes, including the ones you've been ignoring)
- Packages all failed network requests (4xx/5xx) from the last 60 seconds
- Captures the DOM state at the time of failure
- Sends the whole bundle to the backend — without the user doing anything else
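The network-request step above can be sketched as a pure filter. The types and names here are illustrative, not the extension's actual internals:

```typescript
// Illustrative shape — the extension's real payload schema isn't shown here.
interface CapturedRequest {
  url: string
  status: number
  timestamp: number // ms since epoch
}

// Keep only failed (4xx/5xx) requests from the last `windowMs` milliseconds.
function recentFailures(
  requests: CapturedRequest[],
  now: number,
  windowMs = 60_000,
): CapturedRequest[] {
  return requests.filter(
    (r) => r.status >= 400 && r.status < 600 && now - r.timestamp <= windowMs,
  )
}
```

The point of the time window: a 404 from ten minutes ago is noise; a 500 from ten seconds ago is probably the bug.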
The developer doesn't receive a blurry screenshot. They receive a structured, agent-ready prompt.
Google Gemini Does the Heavy Lifting
The backend is built with Next.js 15 and powered by Google Gemini 3 Flash Preview for multimodal analysis. Gemini sees the screenshot, reads the error logs, cross-references the network failures, and synthesizes everything into a prompt that actually gives your coding agent something to work with on the first pass.
```typescript
import { streamText } from 'ai'
import { google } from '@ai-sdk/google'

const result = streamText({
  model: google('gemini-3-flash-preview'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: `Analyze this bug: ${description}` },
        { type: 'image', image: screenshotBuffer },
      ],
    },
  ],
})
```

Feed it a screenshot and a vague description. Get back a prompt that Cursor can actually use. Wild concept.
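Assembling the text half of that multimodal message from the captured bundle might look like this — a minimal sketch with hypothetical field names, not the product's actual schema:

```typescript
// Hypothetical bundle shape; the real payload schema isn't public.
interface BugBundle {
  description: string
  consoleErrors: string[]
  failedRequests: { url: string; status: number }[]
}

// Flatten the captured context into the text part of the Gemini message,
// so the model sees errors and network failures alongside the screenshot.
function buildPromptText(bundle: BugBundle): string {
  return [
    `Analyze this bug: ${bundle.description}`,
    '',
    'Console errors:',
    ...bundle.consoleErrors.map((e) => `- ${e}`),
    '',
    'Failed requests (last 60s):',
    ...bundle.failedRequests.map((r) => `- ${r.status} ${r.url}`),
  ].join('\n')
}
```

Structured text beats "it's broken": the agent gets the stack trace and the failing endpoint in the same prompt as the screenshot.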
The Dashboard: Because You'll Forget About Last Tuesday's Bug
The web dashboard has natural language semantic search because nobody remembers the exact tag they used two weeks ago.
Search for:
- "that auth error"
- "the white flash on mobile"
- "whatever was broken with the chart last week"
It just finds it. Not because you tagged correctly in a 14-step triage process. Because you described it like a human.
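"Describe it like a human" search is typically embedding similarity under the hood. A toy sketch of the ranking step — the actual embeddings would come from a model and aren't shown here:

```typescript
// Toy semantic search: rank stored reports by cosine similarity to the
// query embedding. Real embeddings come from an embedding model; these
// helpers only demonstrate the ranking step.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    na += a[i] * a[i]
    nb += b[i] * b[i]
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

interface StoredReport { id: string; embedding: number[] }

// Return the report whose embedding is closest to the query's.
function topMatch(query: number[], reports: StoredReport[]): StoredReport {
  return reports.reduce((best, r) =>
    cosine(query, r.embedding) > cosine(query, best.embedding) ? r : best,
  )
}
```

No tags, no taxonomy: "that auth error" and the stored report about a 401 on `/login` land near each other in embedding space, so the search just works.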
The UI is monochrome, high-contrast, deliberately no-frills. No decorative gradients competing for attention while you're just trying to ship a fix. The #87ae73 accent color is the most exciting thing in the design system, and that's intentional.
Who This Is For
If you use Cursor, Windsurf, GitHub Copilot, or any agentic coding tool — you already know the dirty secret: output quality is 100% constrained by prompt quality. AI Feedbacks automates the hardest part of a good bug prompt: capturing and structuring all the context at the exact moment it exists, before the user closes the tab and forgets everything.
Better context in. First-pass fixes out. Feed your agents better.