The thing that kept me going yesterday was a simple question: what if I just kept going?
Not “what if I learn to code” or “what if I hire someone.” Just: what if I describe exactly what I want to build, trust the process, and don't stop until something is live on the internet?
That's what Day 3 was. About seven hours of building, debugging, dead ends, and eventually — a fully functional AI-powered product live at a real URL.
The idea was called Revenue Diagnostics. The pitch: SaaS operators fill out a structured intake form about their funnel metrics, and Claude analyzes their inputs and returns a diagnostic — revenue leaks, funnel health scores, a 30/60/90-day action roadmap. No consultant. No spreadsheet. Just a clean AI-driven output in minutes.
I wanted it to live under the tools section of Cliqology, my main site. I wanted real Google sign-in. Real user data saved to a real database. Real AI analysis from a real API. Not a mockup. Not a demo. A thing that actually works.
The stack I landed on: Next.js, Supabase for the database, NextAuth for Google OAuth, and the Anthropic API for the AI analysis. All of it built and deployed to Netlify — where my main site already lives.
I did not write a single line of code myself. Claude Code wrote all of it.
The first two hours were pure setup. Before a line of code got written, I had to create accounts, generate API keys, configure OAuth credentials, and set up a database. Google Cloud Console alone took longer than expected — I had old projects sitting around, the interface had changed since any tutorial I'd seen, and at one point I created an OAuth client inside my Google Workspace organization rather than a standalone project and had to start over.
The prompt I used to kick off the actual build was long and specific:
Build a full-stack web application called “Revenue Diagnostics” for Cliqology.com. This tool helps SaaS operators identify revenue leaks and prioritize growth opportunities across their funnel using a structured AI-driven diagnostic framework.
I included the full brand color palette, the exact database schema I wanted, the specific Claude model to use, every field in every form step, and the exact JSON structure I wanted the AI to return. The more specific I was upfront, the less I had to correct later.
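I won't reproduce the whole prompt, but the shape of that JSON contract matters more than the exact fields. Here's a TypeScript sketch of what such a contract can look like; every name below is invented for illustration, not copied from my prompt:

```typescript
// Hypothetical shape for the diagnostic the AI returns.
// All field names here are illustrative, not the exact schema I used.
interface RevenueLeak {
  title: string;
  priority: "high" | "medium" | "low";
  recommendation: string;
}

interface DiagnosticResult {
  funnelHealth: Record<string, number>; // stage name -> 0-100 score
  leaks: RevenueLeak[];                 // prioritized revenue leaks
  roadmap: { day30: string[]; day60: string[]; day90: string[] };
}

// A tiny runtime guard so a malformed AI response fails loudly
// instead of crashing the results page mid-render.
function isDiagnosticResult(v: unknown): v is DiagnosticResult {
  const o = v as DiagnosticResult;
  return (
    v !== null &&
    typeof v === "object" &&
    typeof o.funnelHealth === "object" &&
    Array.isArray(o.leaks) &&
    typeof o.roadmap === "object"
  );
}
```

Pinning the model to a structure like this is also what makes the results page easy to render: scores become bars, leaks become cards, roadmap arrays become a timeline.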
Claude Code generated the full project in one pass: Next.js scaffolding, Tailwind config, all the dependencies, a brand guide file it would reference throughout, and the basic file structure. It even baked the Cliqology brand colors directly into the app's globals on the first run, pulling from the CLAUDE.md file I'd had it create at the start. That part felt almost unreasonably good.
Then the Real Session Began
The first error hit as soon as I tried to run the app: Next.js 14 doesn't support .ts config files. I needed next.config.mjs. Small thing, but I didn't know that. I just described what I was seeing to Claude Code and it fixed it.
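For anyone hitting the same wall: the fix is just an ESM config file. A minimal next.config.mjs looks like this (my actual file had more in it; this is the bare skeleton):

```javascript
// next.config.mjs: Next.js 14 reads ESM (or CJS) config files;
// a TypeScript next.config.ts only became supported later, in Next.js 15.
/** @type {import('next').NextConfig} */
const nextConfig = {
  reactStrictMode: true,
};

export default nextConfig;
```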
The second issue was subtler. Google sign-in was working — the OAuth flow completed, redirected back to the app — but nothing was being saved to Supabase. The user table stayed empty. The error in the logs: PGRST204 Could not find the 'updated_at' column of 'users' in the schema cache. Claude Code had added a column to its upsert that I'd never created in the database. Easy fix, but it took reading the server logs to find it.
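The database-side fix, for the record, is a one-liner in the Supabase SQL editor (assuming a standard timestamp column is what the upsert expected):

```sql
-- Add the column Claude Code's upsert referenced but the table never had.
alter table public.users
  add column if not exists updated_at timestamptz not null default now();
```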
Then the diagnostic form was saving correctly but the AI analysis wouldn't run. The error this time: invalid input syntax for type uuid. The app was sending Google's numeric user ID — a long string of numbers — to a database column that expected a proper UUID. Claude Code had written a fallback that quietly used the wrong ID when the right one wasn't present. The fix required signing out completely and signing back in to refresh the session token; after that, it worked.
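The shape of that bug is easy to guard against once you've seen it: Google's numeric ID is twenty-odd digits, while a Postgres uuid column expects hyphenated 8-4-4-4-12 hex. A small check like this one (names invented; this is not the code in the app) fails loudly instead of quietly sending the wrong ID to the database:

```typescript
// Hypothetical guard, not the app's actual code.
// Postgres `uuid` columns reject anything that isn't 8-4-4-4-12 hex.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isUuid(id: string): boolean {
  return UUID_RE.test(id);
}

// Resolve the database user ID from the session, refusing to fall
// back to Google's numeric ID when the real UUID is missing.
function resolveUserId(sessionUserId: string | undefined): string {
  if (!sessionUserId || !isUuid(sessionUserId)) {
    throw new Error(
      "Session has no valid database UUID; sign out and back in to refresh the token."
    );
  }
  return sessionUserId;
}
```

Throwing here turns a silent wrong-ID fallback into an immediate, explicit error at the API route.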
The fourth failure was almost funny. I added Anthropic API credits, tried to run a diagnostic, and got back: “Your credit balance is too low to access the Anthropic API.” I had $5. Apparently that's below the activation threshold. Added another $5, still failed. Turned out the API key I was using was from a different Anthropic account than the one where I'd added the credits. Swapped the key, restarted the server, and the analysis ran.
Watching the results page load for the first time was genuinely strange.
I had put in fake test data — placeholder numbers, a made-up company, nonsense answers. And the AI came back with a real diagnostic. Funnel health scores rendered as color-coded bars. Three specific revenue leaks with priority badges. A 30/60/90-day roadmap laid out as a clean timeline. The analysis was actually good. It identified real patterns from the inputs.
I sat with that for a minute.
One More Before the Finish Line
Deployment surfaced one more issue. Netlify's secrets scanner blocked the first two deploy attempts — it found environment variables embedded in the build output. Claude Code had mixed the public Supabase client and the secret service role client in the same file, and the bundler was pulling secrets into client-side code. The fix was clean: separate the clients into different files, restrict the secret client to server-only modules, and add a netlify.toml configuration to exclude two safe public URLs from the scan.
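The netlify.toml piece of that fix is small. Netlify's secrets scanner can be told that specific keys are intentionally public via the SECRETS_SCAN_OMIT_KEYS build variable; which two keys got excluded is the part I'm reconstructing here, so treat the names as a guess:

```toml
# netlify.toml: tell the secrets scanner these values are meant
# to appear in the client bundle, so flagging them is a false positive.
[build.environment]
  SECRETS_SCAN_OMIT_KEYS = "NEXT_PUBLIC_SUPABASE_URL,NEXTAUTH_URL"
```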
After that, it deployed in 54 seconds.
What I'd Do Differently
I'd set up the .env.local file before writing a single prompt to Claude Code. Half the debugging session was caused by mismatched API keys, wrong Supabase URLs, or a service role client that wasn't initializing correctly. If I'd gotten all eight environment variables confirmed and correct before starting, at least two hours of debugging disappears.
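Concretely, that means writing and verifying something like this before the first prompt (variable names here are the typical ones for a NextAuth + Supabase + Anthropic stack; I'm reconstructing the list, not quoting my file):

```
# .env.local: confirm every one of these against its dashboard
# BEFORE prompting Claude Code. Values elided.
NEXT_PUBLIC_SUPABASE_URL=...
NEXT_PUBLIC_SUPABASE_ANON_KEY=...
SUPABASE_SERVICE_ROLE_KEY=...
GOOGLE_CLIENT_ID=...
GOOGLE_CLIENT_SECRET=...
NEXTAUTH_URL=http://localhost:3000
NEXTAUTH_SECRET=...
ANTHROPIC_API_KEY=...
```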
I'd also test the sign-in and database write in isolation before building the full form. I assumed the auth was working because the redirect completed. It wasn't. A simple check — sign in, open Supabase, see if a row appeared — would have caught the upsert bug on the first pass.
And I'd be more deliberate about what I ask Claude Code to log. The server logs saved me every time, but only because I thought to look. The prompt “please add detailed error logging to the signIn callback and the diagnostic API route” would have been worth sending at the very start.
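Even a crude version of that logging would have shortened every one of the hunts above. A sketch of the kind of helper I mean (hypothetical, not the app's code) that turns a database error into a greppable line like the PGRST204 one:

```typescript
// Hypothetical log formatter. supabase-js error objects carry a
// PostgREST/Postgres code plus a message, which is all you need to grep.
interface DbError {
  code?: string;
  message: string;
}

function describeDbError(where: string, e: DbError): string {
  return `[${where}] ${e.code ?? "unknown"}: ${e.message}`;
}

// In the NextAuth signIn callback, a failed upsert would then log e.g.:
// console.error(describeDbError("signIn upsert", error));
```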
The thing I keep coming back to is how long seven hours felt while it was happening, and how compressed the actual result is. A multi-step form. Google OAuth. A real database. AI analysis. A live URL. All of it built in a day by someone who has never written a Next.js route.
Most of that time wasn't building. It was debugging. And debugging, it turns out, is mostly just reading error messages carefully and describing them accurately. That's a skill. It just isn't the skill I expected to be developing.
The tool is live. I'm still not entirely sure I understand how all of it works.