Everyone writes about AI tools. Nobody shows how they actually use them day to day.
Here is my real workflow — which tools I open, when I use each one, what works, what fails, and what it actually costs. No sponsored content. Just what I do.
## My Daily Setup

Three screens. Two AI tools always running. One browser tab.

- Left screen: Cursor (main code editor)
- Right screen: terminal with Claude Code
- Browser: Claude.ai or ChatGPT for quick questions

That is it. No ten-tool stack. No complex setup. Two AI coding tools and a chat window.
## Morning: Planning and Architecture (30 min)

### What I Do

Before writing any code, I ask Claude.ai about architecture decisions:

    "I need to add offline support to this app. The app uses Retrofit
    for API calls and Room for local storage. Should I use the repository
    pattern with a single source of truth, or use WorkManager for
    background sync? What are the trade-offs?"
Claude gives a detailed answer with pros and cons. I make the decision, then move to implementation.
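For context, the single-source-of-truth option Claude tends to recommend looks roughly like this. This is a minimal sketch, not code from my project: `Article`, `ArticleDao`, and `ArticleApi` are hypothetical stand-ins for the real Room DAO and Retrofit service.

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.onStart

data class Article(val id: Long, val title: String)

// Stand-ins for the Room DAO and Retrofit service (hypothetical names).
interface ArticleDao {
    fun observeAll(): Flow<List<Article>>
    suspend fun upsertAll(articles: List<Article>)
}

interface ArticleApi {
    suspend fun fetchArticles(): List<Article>
}

// Single source of truth: the UI only ever observes the database.
// A network refresh writes into the database, and the Flow emits the update.
class ArticleRepository(private val dao: ArticleDao, private val api: ArticleApi) {
    val articles: Flow<List<Article>> = dao.observeAll()
        .onStart { runCatching { refresh() } } // offline-safe: a failed refresh is ignored

    suspend fun refresh() {
        dao.upsertAll(api.fetchArticles())
    }
}
```

The point of the pattern: the screen never cares whether the data came from the network or the cache, which is exactly what offline support needs.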
### Why Claude.ai (Not Claude Code)

For planning, I don’t need the AI to read my code. I need it to think about architecture. Claude.ai’s chat interface is better for back-and-forth discussion than a terminal.

### What Works

- Complex architecture questions get thoughtful answers
- I can upload diagrams and screenshots
- The conversation format helps me think through trade-offs

### What Doesn’t Work

- Claude.ai doesn’t know my specific codebase
- The advice is generic, so I have to adapt it to my project
- Daily usage limits can be tight on heavy planning days
## Midday: Feature Development (3-4 hours)

### What I Do

Open Cursor. Start building. Here is a typical feature development session.

### Step 1: Scaffold with Cursor Composer

I describe the feature in Cursor’s Composer panel:

    Create a SettingsScreen with these options:
    - Dark mode toggle (saves to DataStore)
    - Notification toggle
    - Cache size display with clear button
    - App version display
    Use our existing theme and MVI architecture.

Cursor creates three or four files: the screen, the ViewModel, and the state class, and wires them into navigation.
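The scaffold for the dark mode toggle usually boils down to a ViewModel like this. A sketch under assumed names (`SettingsState`, `DARK_MODE`); the code Composer actually generates will differ in the details.

```kotlin
import androidx.datastore.core.DataStore
import androidx.datastore.preferences.core.Preferences
import androidx.datastore.preferences.core.booleanPreferencesKey
import androidx.datastore.preferences.core.edit
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.flow.SharingStarted
import kotlinx.coroutines.flow.map
import kotlinx.coroutines.flow.stateIn
import kotlinx.coroutines.launch

// MVI-style state for the screen (hypothetical shape).
data class SettingsState(val darkMode: Boolean = false)

private val DARK_MODE = booleanPreferencesKey("dark_mode")

class SettingsViewModel(private val dataStore: DataStore<Preferences>) : ViewModel() {
    // State is derived from DataStore, so the toggle survives process death.
    val state = dataStore.data
        .map { prefs -> SettingsState(darkMode = prefs[DARK_MODE] ?: false) }
        .stateIn(viewModelScope, SharingStarted.WhileSubscribed(5_000), SettingsState())

    fun onToggleDarkMode(enabled: Boolean) {
        viewModelScope.launch { dataStore.edit { it[DARK_MODE] = enabled } }
    }
}
```

Deriving state from DataStore instead of keeping a separate boolean is the design choice worth checking in the generated code: it keeps one source of truth for the setting.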
### Step 2: Refine with Inline Edit (Cmd+K)

The generated code is 80% right. I select specific sections and refine:

- Select the toggle component → “Make this use a custom animated switch”
- Select the cache logic → “Add a confirmation dialog before clearing”
- Select the navigation → “Use our existing route pattern from Routes.kt”

### Step 3: Tab Completion for the Details

For small additions — writing a test, adding a parameter, completing a function — I just type and let Cursor autocomplete.
### Why Cursor (Not Claude Code)

For feature development, I need to SEE the code. Navigate files. Check layouts. Preview composables. Cursor is an editor — Claude Code is a terminal.

### What Works

- Composer generates multi-file features in seconds
- Inline edit (Cmd+K) is the fastest way to refine code
- Tab completion saves time on boilerplate

### What Doesn’t Work

- Composer sometimes creates files in the wrong location
- Generated code doesn’t always match my project’s patterns
- For very large features (50+ files), Cursor loses context
## Afternoon: Hard Problems (1-2 hours)

### What I Do

When I hit a bug that spans multiple files, or need a big refactor, I switch to Claude Code:

    cd ~/my-project
    claude

    "The search feature returns duplicate results when the user
    types fast. I think the issue is in SearchViewModel.kt where
    we collect the Flow. Find the bug and fix it."
Claude Code reads SearchViewModel.kt, follows the data flow to the repository and DAO, finds the race condition, and suggests a fix with flatMapLatest.
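The shape of that fix, sketched: `flatMapLatest` cancels the in-flight search whenever a new query arrives, so a slow earlier query can no longer emit after a faster later one. `SearchRepository` here is a hypothetical stand-in for the real repository, and the exact operators in the actual fix may differ.

```kotlin
import kotlinx.coroutines.ExperimentalCoroutinesApi
import kotlinx.coroutines.FlowPreview
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.debounce
import kotlinx.coroutines.flow.distinctUntilChanged
import kotlinx.coroutines.flow.flatMapLatest

interface SearchRepository {
    fun search(query: String): Flow<List<String>>
}

@OptIn(ExperimentalCoroutinesApi::class, FlowPreview::class)
class SearchViewModel(private val repository: SearchRepository) {
    private val query = MutableStateFlow("")

    val results: Flow<List<String>> = query
        .debounce(300)              // wait out fast typing
        .distinctUntilChanged()     // skip repeated queries
        .flatMapLatest { q ->       // cancel the previous search; no stale emissions
            repository.search(q)
        }

    fun onQueryChanged(text: String) {
        query.value = text
    }
}
```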
### Why Claude Code (Not Cursor)

Claude Code reads the ENTIRE codebase. It understands relationships between files that Cursor misses. For bugs that span the data layer, UI layer, and database — Claude Code is faster.

### What Works

- Finds bugs across multiple files quickly
- Refactoring (rename + update all references) is effortless
- Runs tests after fixing — catches regressions
- Git integration — creates clean commits

### What Doesn’t Work

- No visual preview — I can’t see the UI
- Sometimes over-refactors (changes things I didn’t ask for)
- Complex tasks can burn through API tokens fast
- Learning curve for writing good prompts
## Evening: Code Review and Tests (1 hour)

### What I Do

Before committing, I use Claude Code to review my day’s work:

    "Review the changes I made today (git diff). Look for:
    - Bugs or edge cases I missed
    - Performance issues
    - Security concerns
    - Code that doesn't match our architecture patterns"

Then I ask it to generate tests:

    "Write unit tests for SearchViewModel. Cover:
    - Normal search flow
    - Empty query returns all results
    - Fast typing doesn't cause duplicates
    - Error handling when API fails"
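The generated tests tend to look something like this. A sketch only: `FakeSearchRepository` is hypothetical, and the exact API of my SearchViewModel (a query callback and a results Flow) is assumed here.

```kotlin
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.first
import kotlinx.coroutines.flow.flowOf
import kotlinx.coroutines.test.runTest
import kotlin.test.Test
import kotlin.test.assertEquals

// Hypothetical fake: a blank query returns everything.
class FakeSearchRepository {
    val allItems = listOf("alpha", "beta")
    fun search(query: String): Flow<List<String>> =
        flowOf(if (query.isBlank()) allItems else allItems.filter { it.contains(query) })
}

class SearchViewModelTest {
    @Test
    fun `empty query returns all results`() = runTest {
        val repo = FakeSearchRepository()
        // Assumed ViewModel API: onQueryChanged(String) and a results Flow.
        val viewModel = SearchViewModel(repo)
        viewModel.onQueryChanged("")
        assertEquals(repo.allItems, viewModel.results.first())
    }
}
```

Note this test asserts on behavior (what comes out of `results`), not on internals — exactly the property AI-generated tests often get wrong.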
### What Works

- Catches bugs I miss (null checks, edge cases)
- Generates comprehensive tests faster than I could write them
- Reviews are objective — no ego, no shortcuts

### What Doesn’t Work

- Sometimes generates tests that test implementation details instead of behavior
- AI-generated tests need human review (weak assertions, missing business logic)
- Can’t test UI interactions (still needs manual testing)
## My Tool Stack

### Daily Tools (Every Session)
| Tool | What I Use It For | Cost |
|---|---|---|
| Cursor Pro | Main editor, feature development, inline editing | $20/month |
| Claude Code (Pro) | Hard bugs, refactoring, code review, test generation | $20/month |
| Claude.ai | Architecture questions, planning, quick answers | Included in Pro |
### Weekly Tools (A Few Times per Week)
| Tool | What I Use It For | Cost |
|---|---|---|
| GitHub Copilot | Autocomplete in Android Studio (for Android-specific work) | $10/month |
| Perplexity | Researching libraries, comparing tools, finding docs | Free |
| ChatGPT | Quick code snippets, regex help, data conversion | Free |
### Monthly Cost
| Tool | Cost |
|---|---|
| Cursor Pro | $20 |
| Claude Pro | $20 |
| GitHub Copilot | $10 |
| Total | $50/month |
$50/month for tools that save me 2-3 hours every day. At even two hours a day across 20 working days, that is roughly 40 hours a month back for $50. The math is simple.
## Productivity Numbers

### Before AI Tools (2023)

- Feature development: 4-6 hours per feature
- Bug fixing: 1-2 hours per bug
- Code review: 30-60 minutes per PR
- Test writing: 1-2 hours per module

### With AI Tools (2026)

- Feature development: 1-2 hours per feature (Cursor Composer)
- Bug fixing: 15-30 minutes per bug (Claude Code)
- Code review: 10-15 minutes per PR (Claude Code review)
- Test writing: 15-30 minutes per module (AI-generated + review)

Roughly 3x faster on most tasks. The biggest time saver is feature scaffolding with Cursor Composer — what used to take an afternoon now takes 30 minutes.
## What I Tried and Stopped Using

| Tool | Why I Stopped |
|---|---|
| Windsurf | Good, but not better than Cursor. Switched back. |
| Codeium | The free tier was nice, but Cursor’s AI quality is noticeably better. |
| Tabnine | Autocomplete was good, but Copilot + Cursor covers it better. |
| Amazon Q | Great for AWS work, but I don’t use AWS daily. |
## Tips from 2 Years of AI Coding

### 1. Don’t Fight the AI

If the AI suggests a different approach than yours, consider it. Often the AI knows a pattern you don’t. I’ve learned more Kotlin features from AI suggestions than from the documentation.

### 2. Review Everything

AI code looks correct but sometimes has subtle bugs. Always read the diff. Run the tests. Check edge cases. “It compiles” is not the same as “it works.”

### 3. Write Good CLAUDE.md Files

This is the single biggest productivity boost. A good CLAUDE.md file tells the AI your project’s architecture, build commands, and coding rules. Without it, the AI guesses. With it, the AI follows your patterns.

Read our CLAUDE.md Guide for templates and tips.
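To make the tip concrete, here is the kind of skeleton I mean. The contents are illustrative, not a template from my project or from any official documentation; swap in your own architecture and commands.

```markdown
# CLAUDE.md (illustrative example)

## Architecture
- MVI: Screen -> ViewModel -> Repository -> Room/Retrofit
- One state class per screen; no business logic in composables

## Build & test
- Build: ./gradlew assembleDebug
- Unit tests: ./gradlew testDebugUnitTest

## Rules
- Kotlin only, no Java
- New routes go in Routes.kt
- Never commit without running the tests
```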
### 4. Use the Right Tool for the Right Task

- Quick edit → Cursor (Cmd+K)
- New feature → Cursor Composer
- Hard bug → Claude Code
- Architecture → Claude.ai chat
- Quick question → ChatGPT or Perplexity

Don’t use Claude Code for a one-line change. Don’t use ChatGPT for a multi-file refactor.

### 5. Set a Token Budget

AI tools can be expensive if you’re not careful. Set a mental budget:

- Cursor Pro: effectively unlimited for typical daily use on the $20/month plan
- Claude Code: be mindful of long sessions with Opus — switch to Sonnet for routine tasks
## What’s Next for AI Coding

### What I Think Changes in 2027
- Agent teams become standard — multiple specialized AI agents working on one project
- Local models get good enough for daily coding (Ollama + 7B models)
- MCP integration connects AI tools directly to databases, APIs, and deployment pipelines
- AI-generated tests become reliable enough to trust without heavy review
- Voice coding — describe features by talking instead of typing
### What Stays the Same
- You still need to understand the code the AI writes
- Architecture decisions are still human decisions
- Domain knowledge (what the business actually needs) is irreplaceable
- Code review skills become MORE important, not less
The developer’s job is changing from “write code” to “direct AI and review output.” The ones who adapt fastest will build the most.
## Quick Summary
| Time | Tool | Task |
|---|---|---|
| Morning | Claude.ai | Architecture planning |
| Midday | Cursor | Feature development |
| Afternoon | Claude Code | Hard bugs, refactoring |
| Evening | Claude Code | Code review, test generation |
Monthly cost: $50. Time saved: 2-3 hours/day. Worth it: Absolutely.
## Related Articles
- Cursor vs Claude Code vs Copilot — detailed comparison of the tools I use
- How to Set Up Claude Code — get started with Claude Code
- CLAUDE.md Guide — the one file that makes AI coding 10x better
- What is Vibe Coding? — the broader trend my workflow is part of