The AI Code Finishing Checklist: 47 Things to Check Before You Ship
A prioritized, category-by-category checklist for taking your AI-built app from 'it works' to 'it's ready.' P0 items are ship blockers, P1 items get fixed within the first week, and P2 items before you scale.
You built something with AI. It works. The UI looks real. Auth flows are wired up. Data moves from the backend to the frontend and back. You could demo it right now and people would be impressed.
But can you ship it?
That's a different question. "It works" and "it's ready" are separated by a gap that kills most AI-built apps. A widely cited estimate suggests over 90% of AI-assisted projects never make it to production. Not because they don't function, but because they don't survive contact with real users, real attackers, and real infrastructure.
This is the checklist. 47 items across 7 categories, prioritized so you know what to fix first. You don't need to do all 47 before launch. You need to do every P0.
Priority Legend
P0: Ship blocker. Fix before going live. If you skip this, you will get burned.
P1: Fix within the first week of launch. These are real risks, but a few days of exposure won't sink you.
P2: Fix before scaling or marketing. Fine at 50 users. Dangerous at 5,000.
1. Secrets and Credentials
This is the category where AI-generated code fails most consistently. A 2025 Apiiro study across Fortune 50 enterprises found a 40% increase in secrets exposure in AI-generated code. AI tools pull patterns from training data, and training data is full of tutorials with hardcoded API keys. Check these first.
| # | Check | Priority | How to Verify | How to Fix |
|---|---|---|---|---|
| 1 | No API keys hardcoded in source files | P0 | grep -rnE "sk_live\|sk_test\|AKIA\|password.*=" --include="*.ts" --include="*.tsx" . | Move every key to environment variables |
| 2 | .env file is in .gitignore | P0 | grep "\.env" .gitignore | Add .env* to .gitignore immediately |
| 3 | No secrets in git history | P0 | git log --all -p -- "*.env" and search for key patterns in old commits | Rotate every exposed key. Use git-filter-repo to scrub history |
| 4 | Database URLs use environment variables | P0 | Search for connection strings: grep -rnE "postgresql://\|mysql://\|mongodb://" . | Move to env vars, never commit connection strings |
| 5 | Third-party tokens scoped to minimum permissions | P1 | Review each token's permissions in the provider dashboard | Regenerate tokens with the narrowest scope that still works |
| 6 | Secrets are different between dev and prod | P1 | Compare .env.local against production env vars | Generate separate keys for each environment |
| 7 | Secret rotation plan exists | P2 | Check your documentation | Document which keys rotate, how often, and who's responsible |
If items 1-4 fail, stop everything. Hardcoded secrets in a public repo mean your keys are already compromised. Rotate them before doing anything else. For a deeper dive, read the full security breakdown.
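Once every key lives in an environment variable, make the app refuse to boot without them. A minimal sketch in TypeScript covering checks 1 and 29; the variable names (`DATABASE_URL`, `STRIPE_SECRET_KEY`) are placeholders for whatever your app actually reads:

```typescript
// env.ts — load secrets from the environment and fail fast at startup
// if one is missing. Variable names below are examples; substitute your own.
const REQUIRED_ENV_VARS = ["DATABASE_URL", "STRIPE_SECRET_KEY"] as const;

export function loadEnv(
  env: Record<string, string | undefined> = process.env
): Record<string, string> {
  const missing = REQUIRED_ENV_VARS.filter((name) => !env[name]);
  if (missing.length > 0) {
    // Throwing here turns a missing production variable into a loud
    // deploy failure instead of a mysterious runtime 500.
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return Object.fromEntries(
    REQUIRED_ENV_VARS.map((name) => [name, env[name]!] as [string, string])
  );
}
```

Import `loadEnv` once at startup (or in your config module) so the check runs before any request handling.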
2. Authentication and Authorization
The most viral AI code failure story of 2025: a developer built an entire SaaS with Cursor, launched it, and within 72 hours users had bypassed the subscription by changing a single value in the browser console. All the authorization logic was client-side. The server trusted whatever the client sent.
This category catches that class of bug.
| # | Check | Priority | How to Verify | How to Fix |
|---|---|---|---|---|
| 8 | Every protected API route checks auth server-side | P0 | List all API routes, check each for getUser/getSession calls | Add server-side auth check to every route that returns user data |
| 9 | Auth middleware covers all protected routes | P0 | Review middleware config, test by hitting protected routes without a session | Configure middleware to match all /dashboard, /api/ (non-public) routes |
| 10 | Session tokens have expiry set | P0 | Check auth provider config for token lifetime | Set access token expiry (e.g. 1 hour) and refresh token expiry (e.g. 7 days) |
| 11 | RLS enabled on all Supabase tables with data | P0 | Check Supabase dashboard or query pg_tables for RLS status | Enable RLS and write policies. See the Supabase RLS guide |
| 12 | Role-based access control where needed | P1 | Verify admin-only features are gated on roles checked server-side | Add role column to profiles table, check in API routes |
| 13 | Password requirements meet minimum standards | P1 | Try creating an account with "123" | Enforce 8+ characters minimum. Better: use a managed auth provider |
| 14 | OAuth redirect URIs configured for production domain | P1 | Check auth provider settings for callback URLs | Add your production domain to the allowed redirect URIs list |
| 15 | Logout actually invalidates the session | P1 | Log out, then try hitting a protected API route with the old token | Call signOut() on the auth provider, clear cookies server-side |
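The fix for that class of bug is mechanical: every protected handler resolves the user on the server before touching data. A framework-agnostic sketch of check 8 — `requireUser` and `getUser` are illustrative names, with `getUser` standing in for your auth provider's call (e.g. Supabase's `auth.getUser()`):

```typescript
// A server-side auth guard. The client never decides whether it has
// access; the server resolves the session and rejects on failure.
type User = { id: string };

export async function requireUser(
  getUser: () => Promise<User | null>
): Promise<{ user: User } | { error: { status: number; message: string } }> {
  const user = await getUser();
  if (!user) {
    // Never trust a client-side flag or a value from the browser console.
    return { error: { status: 401, message: "Not authenticated" } };
  }
  return { user };
}
```

In a Next.js route handler, call this at the top and return a 401 response on the error branch before any data access runs.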
3. Error Handling
AI tools generate code for the scenario where everything goes right. Production is where everything goes wrong. Network requests fail. Tokens expire. Users submit unexpected input. Databases go down. If your app doesn't handle these cases, it shows a blank white screen, or worse, leaks a stack trace.
| # | Check | Priority | How to Verify | How to Fix |
|---|---|---|---|---|
| 16 | Every API call has try/catch with a meaningful response | P0 | Review all fetch/Supabase calls in API routes | Wrap in try/catch, return structured error JSON with appropriate status codes |
| 17 | React error boundaries on critical routes | P0 | Throw an error in a component, see if the app crashes entirely | Add ErrorBoundary components around dashboard, auth, and payment pages |
| 18 | Loading states on all async operations | P1 | Click every button that triggers an API call, check for visual feedback | Add loading spinners or skeleton screens to every async UI operation |
| 19 | Empty states for zero-data scenarios | P1 | Create a new account, check every page with no data | Design and implement "no data yet" states for lists, dashboards, charts |
| 20 | Auth token expiry handled gracefully | P1 | Let a session expire, then interact with the app | Detect 401 responses, redirect to login with a "session expired" message |
| 21 | Network failure handling | P2 | Turn off your network, interact with the app | Show offline indicators, queue retries, prevent data loss on form submissions |
| 22 | No console.log statements exposing internal state | P1 | Open browser DevTools console, use the app | Replace with structured logging or remove entirely. Check for logged tokens, user IDs, query results |
For a comprehensive approach to error handling, see Error Handling in AI-Built Apps.
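Item 16 in practice: a small wrapper that turns any thrown error into structured JSON with a status code, logging the details server-side instead of leaking a stack trace to the client. A sketch with illustrative names:

```typescript
// Wrap an async handler so failures produce a structured response
// instead of crashing or exposing internals.
type ApiResult = { status: number; body: unknown };

export async function withErrorHandling(
  handler: () => Promise<unknown>
): Promise<ApiResult> {
  try {
    return { status: 200, body: await handler() };
  } catch (err) {
    // Log the real error server-side; the client gets a generic message.
    console.error("API error:", err);
    return { status: 500, body: { error: "Something went wrong" } };
  }
}
```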
4. Testing
You don't need 100% coverage. You need enough tests to catch the things that will break when you push a change at 11 PM. A 2025 Pieces survey found that 63% of developers spend more time debugging AI-generated code than they would have spent writing it manually. Tests turn that debugging from detective work into reading a report.
| # | Check | Priority | How to Verify | How to Fix |
|---|---|---|---|---|
| 23 | Auth flow has integration tests | P0 | Check for test files covering sign-up, sign-in, and protected route access | Write 3-5 tests covering the auth happy path and common failures |
| 24 | Critical API endpoints have tests | P0 | Check for test files covering your core CRUD routes | Write at least one test per critical endpoint: correct response shape, auth rejection, bad input |
| 25 | Core user journey has an E2E test | P1 | Check for Playwright/Cypress tests | Write one E2E test for the primary flow users take through your app |
| 26 | Build passes with zero errors | P0 | Run npm run build locally | Fix every TypeScript error and warning. Do not ship with ignoreBuildErrors: true |
| 27 | CI pipeline runs tests on push | P1 | Check for GitHub Actions / Vercel CI config | Add a CI workflow that runs npm test and npm run build on every push |
| 28 | Payment/billing flow tested (if applicable) | P0 | Use Stripe test mode, run through the full purchase flow | Test subscription creation, cancellation, webhook handling, and edge cases |
AI is actually good at writing tests for existing code. Prompt it file by file: "Write integration tests for this auth flow that cover the happy path and three failure modes." For a full testing strategy, see Testing AI-Generated Code.
5. Deployment and Infrastructure
The gap between npm run dev and a production deploy is where most AI-built apps die. Your local setup has env vars loaded from .env.local, a dev database with seed data, and hot reload hiding build errors. Production has none of those luxuries.
| # | Check | Priority | How to Verify | How to Fix |
|---|---|---|---|---|
| 29 | All env vars set in production | P0 | Create a checklist of every process.env reference, verify each is set in production | Set every variable in your hosting provider's dashboard |
| 30 | Production build succeeds locally | P0 | Run npm run build on your machine | Fix all build errors before deploying. This catches 80% of deploy failures |
| 31 | ignoreBuildErrors removed from next.config | P0 | Open next.config.mjs, check for typescript.ignoreBuildErrors | Remove it and fix the underlying TypeScript errors |
| 32 | Database migrations tracked in files | P1 | Check for a migrations/ or supabase/migrations/ directory | Export your current schema, create migration files, use a migration tool |
| 33 | Health check endpoint exists | P1 | curl https://yourapp.com/api/health | Create a /api/health route that returns 200 and checks DB connectivity |
| 34 | CORS configured for production domain | P0 | Test API calls from your production frontend domain | Set Access-Control-Allow-Origin to your production domain, not * |
| 35 | Custom domain with SSL configured | P1 | Visit your production URL in a browser, check the lock icon | Configure your domain in your hosting provider, verify SSL certificate |
| 36 | CDN/caching configured for static assets | P2 | Check response headers for Cache-Control on static files | Configure your CDN or hosting provider to cache static assets with long TTLs |
For step-by-step deployment instructions, see Deploy Your Next.js App to Production.
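Item 33 is a few lines once you decide what "healthy" means. A sketch where `pingDatabase` stands in for a cheap real query (e.g. `SELECT 1`) against your database client:

```typescript
// A health check that reports DB connectivity. It always answers:
// a 503 tells your uptime monitor exactly which dependency failed.
export async function healthCheck(
  pingDatabase: () => Promise<void>
): Promise<{ status: number; body: { ok: boolean; db: "up" | "down" } }> {
  try {
    await pingDatabase();
    return { status: 200, body: { ok: true, db: "up" } };
  } catch {
    return { status: 503, body: { ok: false, db: "down" } };
  }
}
```

Expose it from a `/api/health` route and point your uptime monitor at it.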
6. Input Validation and Data
Veracode's 2025 report found that 45% of AI-generated code contains security vulnerabilities, with input validation being a top failure category. AI writes code that works with expected input. Attackers don't send expected input.
| # | Check | Priority | How to Verify | How to Fix |
|---|---|---|---|---|
| 37 | Server-side input validation on all endpoints | P0 | Send malformed JSON, missing fields, and wrong types to each endpoint | Use Zod or a similar library to validate every request body and query param |
| 38 | SQL injection prevented (parameterized queries) | P0 | Search for string interpolation in SQL queries: grep -rn "SELECT.*\${" | Use parameterized queries or your ORM's query builder. Never concatenate user input into SQL |
| 39 | XSS prevented (output encoding) | P0 | Search for dangerouslySetInnerHTML in your codebase | Remove dangerouslySetInnerHTML or sanitize with DOMPurify. React escapes by default, so don't opt out |
| 40 | File upload size and type limits | P1 | Try uploading a 500MB file or a .exe through your upload form | Set max file size (e.g. 10MB), whitelist allowed MIME types, validate server-side |
| 41 | Rate limiting on public endpoints | P1 | Hit a public endpoint 100 times in 10 seconds | Add rate limiting middleware (e.g. upstash/ratelimit, express-rate-limit) |
| 42 | Data sanitization before database writes | P0 | Review all .insert() and .update() calls for unsanitized user input | Validate and sanitize at the API boundary before any data touches the database |
Items 37-39 and 42 are non-negotiable. SQL injection, XSS, and unsanitized writes are the vulnerabilities that lead to data breaches. A 2025 study found 170 out of 1,645 Lovable-created apps had security vulnerabilities exposing personal data to anyone with a browser. Check the full security analysis for code examples and fixes.
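Here is what item 37 means concretely. Zod does this declaratively, but a hand-rolled equivalent makes the shape clear; the `title` and `dueDate` fields are hypothetical:

```typescript
// Validate an untrusted request body field by field, server-side,
// before anything touches the database.
type ValidationResult =
  | { ok: true; data: { title: string; dueDate: string | null } }
  | { ok: false; errors: string[] };

export function validateTodoInput(body: unknown): ValidationResult {
  if (typeof body !== "object" || body === null) {
    return { ok: false, errors: ["body must be a JSON object"] };
  }
  const b = body as Record<string, unknown>;
  const errors: string[] = [];
  if (typeof b.title !== "string" || b.title.trim().length === 0) {
    errors.push("title must be a non-empty string");
  } else if (b.title.length > 200) {
    errors.push("title must be 200 characters or fewer");
  }
  if (b.dueDate !== undefined && typeof b.dueDate !== "string") {
    errors.push("dueDate must be a string if present");
  }
  if (errors.length > 0) return { ok: false, errors };
  return {
    ok: true,
    data: { title: (b.title as string).trim(), dueDate: (b.dueDate as string) ?? null },
  };
}
```

With Zod, the whole function collapses to a schema and a `safeParse` call, but the validation it performs is the same.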
7. UX Polish
This is the difference between "it works" and "I'd actually use this." AI-generated UIs look polished at first glance because AI is trained on well-designed examples. But they fall apart at the edges: mobile viewports, empty states, accessibility, and the small interactions that make an app feel reliable.
| # | Check | Priority | How to Verify | How to Fix |
|---|---|---|---|---|
| 43 | Mobile responsive (tested on actual phone) | P1 | Open your app on a real phone, not just browser DevTools | Fix breakpoints, tap targets (44px minimum), and text overflow |
| 44 | Empty states designed and implemented | P1 | Create a fresh account, visit every page with zero data | Add "no data yet" messages, onboarding prompts, or placeholder content |
| 45 | Loading indicators on all async operations | P1 | Click every button that triggers an API call on a slow connection | Add spinners, skeleton screens, or progress bars. Disable buttons during submission |
| 46 | Basic accessibility (alt text, heading hierarchy, keyboard nav) | P1 | Run Lighthouse accessibility audit, tab through your app with keyboard only | Add alt text to images, fix heading order (h1 > h2 > h3), ensure forms are keyboard-navigable |
| 47 | Custom 404 page | P2 | Visit a URL that doesn't exist on your app | Create a not-found.tsx (Next.js) with a helpful message and navigation back |
Using This Checklist
Start with every P0 item. There are 22 of them. These are the items that will get your users' data leaked, break your app on launch day, or cost you money you didn't expect. They are non-negotiable.
Count your P0 failures. If more than 5 P0 items fail, block the launch. You're not ready. Shipping with 6+ P0 failures isn't bold. It's reckless.
The fastest path through this checklist:
- Run through all P0 items first (there are 22). Fix every one.
- Deploy to a staging environment and test the full flow.
- Launch. Start a timer.
- Fix all P1 items within the first week (there are 21).
- Fix P2 items before you start marketing or scaling (there are 4).
Here's the rough time estimate by category for a typical AI-built app:
- Secrets and credentials: 1-2 hours. Mostly grep and move to env vars.
- Auth and authorization: 2-4 hours. Server-side auth checks are mechanical but critical.
- Error handling: 2-3 hours. Tedious, but AI can help if you prompt it file by file.
- Testing: 3-4 hours. Write the minimum viable tests. See the testing guide.
- Deployment: 1-2 hours. Fix the build, set env vars, verify.
- Input validation: 2-3 hours. Add Zod schemas, parameterize queries.
- UX polish: 2-4 hours. Mobile testing, empty states, loading indicators.
Total: roughly 15-20 hours. A weekend and a few evenings. Not three weeks.
FinishKit automates this entire checklist. It scans your repo, identifies every gap across all 47 items, generates a prioritized Finish Plan, and creates pull requests with fixes. But whether you automate it or work through this list manually, the important thing is that you do it.
Ship With Confidence
47 items sounds like a lot. It's not. Most AI-built apps fail on the same 10-15 items: hardcoded secrets, missing server-side auth, no error handling, zero tests, and a build that only works locally. If that sounds familiar, you're not behind. You're normal. This is the default state of every AI-generated codebase.
The difference between projects that ship and projects that stall is whether someone sits down and works through the finishing pass. The production readiness guide covers the strategy. This checklist is the execution.
Fix the P0s this weekend. Fix the P1s next week. And ship with confidence.