This is an AI transcript/summary of the video above, where I show you how I used Claude Code to build a voice-activated timer app from scratch. This isn't a step-by-step tutorial, but a real-world test to see how Claude Code performs on a non-trivial Flutter project involving native integrations like speech recognition and permissions.
The app idea stemmed from my calisthenics training, where it's impractical to manually control a timer during exercises like the frog stand. A voice-activated timer that responds to "start" and "stop" commands solves this perfectly. This project was an excellent candidate to test Claude Code's ability to handle both UI and underlying logic, especially involving platform-specific features and permissions.
Having used Claude Code (specifically the Max plan with access to Opus 4 and Sonnet 4) for a few weeks, I've been impressed. In this crash course, I'll recap the key aspects of my workflow, highlight where Claude Code shined, where it struggled, and share my overall conclusions and tips for using it effectively.
Getting Started with Claude Code
(Based on 02:21 - 04:24)
If you're looking to try Claude Code yourself, the setup is straightforward. It's a command-line tool that integrates well with your existing IDE (like Cursor, which I use).
- Installation: You install it globally using npm: `npm install -g @anthropic-ai/claude-code`
- Running: Navigate to your project directory in the terminal and simply run `claude`.
- Authentication: You'll need to authenticate. The best value, especially if you plan to use it regularly and leverage the most powerful models like Opus 4, is a paid subscription ($20/month for Pro, $100/month for Max). Alternatively, you can use pay-as-you-go billing via an Anthropic API key. As I mentioned in the video, if you code daily and can afford it, the $100/month Max plan is absolutely worth it for Opus 4 access.
- IDE Integration: Running `claude` from your IDE's built-in terminal (like in Cursor) is ideal. It allows Claude Code to interact with your project files, while you can easily switch to your editor to review and manually adjust code when needed.
Claude Code supports different models. On the Max plan, you often have access to Opus 4 for a portion of your usage, after which it might fall back to Sonnet 4. This is important to note, as model performance can vary significantly.
Getting Started with the Flutter App & Initial Requirements
(Based on 04:24 - 07:34)
I started with a completely empty Flutter project – no extra dependencies, just the default "hello world" app.
Before writing any code, my most important piece of advice when using AI coding tools is this: Always start by writing detailed and specific instructions. This dramatically increases the chances that the AI will produce the code you want and significantly reduces later refactoring.
For this app, I created a `specs` folder with an `initial-requirements.md` file. This document outlined both functional and non-functional requirements:
Functional Requirements:
- Single-page iOS app built with Flutter.
- Voice-activated timer for calisthenics.
- Basic UI: Timer display showing elapsed time, Start/Stop button, Reset button (like the iOS stopwatch). Use a `Stopwatch` object.
- Voice Mode: Request microphone and speech recognition permissions on startup. Disable features if denied.
- Voice Commands: Respond to "start" and "stop". Start/continue timer on "start", stop on "stop".
- Audio Feedback: Play a beep sound when a voice command is recognized (initially planned as a TODO).
Non-Functional Requirements:
- Dark mode only, theming done in `MaterialApp` (no hardcoded colors/sizes).
- Proper separation of concerns (suitable folder structure like `features`, `shared`, `core`).
- Prefer small, composable widgets.
- Prefer flex values over hardcoded sizes for responsive UI.
- Use `log` from `dart:developer` for logging instead of `print` or `debugPrint` (a short example follows below).
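As a tiny illustration of that last guideline, this is the kind of call it asks for (the `voice_timer` name tag is just a hypothetical example):

```dart
import 'dart:developer';

void main() {
  // log() produces structured output that shows up in DevTools and can be
  // filtered by name, unlike print/debugPrint which just write plain text.
  log('Timer started', name: 'voice_timer');
}
```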
These opinionated requirements are crucial. Without them, Claude might produce code that functions but doesn't align with your preferred architecture, styling, or conventions, leading to tedious refactoring down the line. For larger projects, I'd add even more detail on linting, dependency injection, testing practices, etc.
Structuring the Workflow with Claude Code
(Based on 07:34 - 20:15)
Claude Code offers features that help structure your AI-assisted development workflow.
- `/init` and `CLAUDE.md`: Running `/init` analyzes your project and generates a `CLAUDE.md` file. This file acts as permanent context for Claude across sessions. It captures the project overview, key requirements it inferred, the proposed architecture, coding guidelines, and detected assets. It's a good idea to review and edit this file to ensure it accurately reflects your project's state and guidelines. While Claude might initially be a bit overzealous with bash commands during `/init`, you can guide it to focus on generating the `.md` file itself.
- Plan Mode: Accessible by pressing `Shift + Tab` twice, Plan Mode is specifically designed for reasoning and breaking down requirements into actionable steps. I used it, referencing my `initial-requirements.md`, to generate an implementation plan:
Think hard about how to implement all the requirements described in @specs/initial-requirements.md. Make a plan with all the proposed tasks and subtasks, focusing on the UI first. Write all the proposed tasks and subtasks to PLAN.md.
- Iterating on the Plan: Claude generated a detailed plan with phases, tasks, and subtasks in `PLAN.md`. I reviewed it and provided feedback to adjust the order, specify technical details (like using a `Ticker` for UI updates, formatting time as SS.S, using a `Stopwatch` directly in the UI, removing the "lap" button), and ensure the plan allowed for testing at each step. This iterative planning phase in Plan Mode is invaluable before writing any code.
- `PLAN.md` as Master Plan: Once the plan was satisfactory, I explicitly told Claude to write it to a file (`PLAN.md`). This document becomes your single source of truth for the project's implementation steps. Adding a line referencing this file in `CLAUDE.md` ensures Claude always has access to it.
- Custom Commands: A powerful feature is the ability to create custom, project-specific commands by adding files to the `.claude/commands` folder. I created an `update-plan-commit` command that automatically updates `PLAN.md` with completed tasks, stages all changes, and creates a clear Git commit message (a rough sketch follows after this list). This command streamlines the process of keeping the plan and version control in sync after completing each step.
- Executing the Plan: With the plan in place, you tell Claude to proceed step-by-step, stopping after each one for review.
- Using `@` References: When giving instructions or asking Claude to modify a specific file, using the `@` symbol followed by the file path (e.g., `@lib/theme/app_theme.dart`) is highly effective. It focuses Claude's attention, uses fewer tokens, and generally leads to better results than letting it guess which file you mean.
- Managing Context with `/compact`: Long conversation histories can consume tokens. The `/compact` command shrinks the existing chat history while retaining a summary of key information in the context, helping manage token usage without losing important project knowledge. Alternatively, use the `/clear` command, which resets the context completely; as long as `CLAUDE.md` and other referenced files like `PLAN.md` are present, Claude will pick up the project context.
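For reference, a custom command is simply a prompt file inside `.claude/commands` (here `.claude/commands/update-plan-commit.md`). Mine read roughly like this — a reconstruction of the idea, not the exact file:

```
Update PLAN.md by checking off the tasks and subtasks completed in this session.
Then stage all changes with git and create a single commit with a short,
descriptive message summarizing what was implemented.
```

Once that file exists in the project, it becomes available as a slash command you can run after each completed task.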
This structured workflow, leveraging planning, context management, and automation, is key to getting consistent results from Claude Code.
Building the Core Timer UI
(Based on 20:15 - 41:37)
Following the plan, Claude Code started implementing the core UI:
- Project Setup & Theming: It correctly set up the project structure, added the specified font to `pubspec.yaml`, and configured the basic dark theme in `MaterialApp` and `app_theme.dart`, following the non-functional requirements. It even caught a linter warning about a deprecated property and fixed it, demonstrating its ability to understand and act on analyzer feedback.
- Timer Display & Page: Claude created the `TimerDisplay` widget, correctly using a `Ticker` for smooth updates based on the screen refresh rate and implementing the requested SS.S time formatting (see the sketch after this list). It also created the `TimerPage` widget to hold the `Stopwatch` and pass it down to the `TimerDisplay`. This initial UI implementation was one-shotted perfectly based on the plan.
- Timer Controls (Buttons): It then implemented the Start/Stop and Reset buttons. Claude correctly used widget composition, creating a reusable `TimerControlButton` and then specific `StartStopButton` and `ResetButton` widgets that wrapped it. This demonstrated good code structure and saved manual refactoring.
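To give a sense of the pattern, here's a simplified sketch of a `Ticker`-driven display — a reconstruction rather than the exact code Claude generated (the real widget pulls its styling from the theme and handles zero-padding of the seconds):

```dart
import 'package:flutter/material.dart';
import 'package:flutter/scheduler.dart';

/// Shows the elapsed time of a [Stopwatch], repainting every frame via a [Ticker].
class TimerDisplay extends StatefulWidget {
  const TimerDisplay({super.key, required this.stopwatch});

  final Stopwatch stopwatch;

  @override
  State<TimerDisplay> createState() => _TimerDisplayState();
}

class _TimerDisplayState extends State<TimerDisplay>
    with SingleTickerProviderStateMixin {
  late final Ticker _ticker;

  @override
  void initState() {
    super.initState();
    // Rebuild on every frame; the Stopwatch itself is the source of truth.
    _ticker = createTicker((_) => setState(() {}))..start();
  }

  @override
  void dispose() {
    _ticker.dispose();
    super.dispose();
  }

  /// Formats the elapsed time as SS.S (e.g. 7.3, 42.8).
  String _format(Duration elapsed) =>
      (elapsed.inMilliseconds / 1000).toStringAsFixed(1);

  @override
  Widget build(BuildContext context) {
    return Text(
      _format(widget.stopwatch.elapsed),
      style: Theme.of(context).textTheme.displayLarge,
    );
  }
}
```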
At each step, I reviewed the generated code (often using the source control diff view in Cursor) and committed the changes using my custom `update-plan-commit` command.
Using Screenshots for UI Design (28:27):
Wanting a more visually appealing UI, I decided to give Claude Code a screenshot of the desired design from the production app. This is a fantastic feature – you can drag and drop an image into the prompt.
Using the provided screenshot as inspiration, update the styling and design of the timer display, start and stop button, reset button, and voice indicator widget (we'll use this later). Note the start and reset button should still appear side by side inside the row. Keep as much styling as possible in the top level theme, declaring constants with meaningful names in a separate file rather than hard coding values in the widget themselves.
- First Attempt (with Sonnet 4): Claude Code attempted to match the design. It created an `app_constants.dart` file for colors and sizes, updated the theme, and modified the UI widgets. However, the result wasn't perfect – it added an unwanted linear gradient, some constants were too widget-specific for a global file, the app bar title didn't use the correct font, and there was a layout overflow issue. This attempt ran after the Opus 4 usage limit was reached, potentially impacting the result.
- Decision Point: I had a choice: manually refactor, or discard and try again with a better prompt (and hopefully Opus 4). I chose the latter to see if Claude Code could nail it with clearer instructions.
- Second Attempt (with Opus 4): I discarded the changes, explicitly added a file with my desired `AppColors`, and modified the prompt. I emphasized using the provided colors, ensuring all text used the correct font, and adjusting layout factors. This time, running on Opus 4, the result was much better. Claude correctly used the provided colors, applied the font consistently via the theme, and produced a layout much closer to the screenshot, fixing the overflow issue. While some minor tweaks were still needed (like adding button labels or adjusting the voice indicator border), the result was good enough to proceed or make final adjustments manually.
Using the updated screenshot as inspiration, update the styling and design on the timer display, start stop button, reset button, and voice indicator widget. Keep as much styling as possible in the top level theme and ensure all text widgets use the Babas new font. When styling the UI, use the colors in @lib/constants/app_colors.dart.
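For context, the colors file I pointed Claude at is just a class of color constants, roughly like this (the hex values below are placeholders rather than the production palette):

```dart
import 'package:flutter/material.dart';

/// Centralized color palette referenced from the theme (values are placeholders).
class AppColors {
  AppColors._(); // no instances

  static const Color background = Color(0xFF101418);
  static const Color surface = Color(0xFF1C2228);
  static const Color accent = Color(0xFF38E07B);
  static const Color textPrimary = Color(0xFFF5F7F8);
  static const Color textSecondary = Color(0xFF9BA7B0);
}
```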
This demonstrated that providing clear guidance, managing context (by providing the color file), and leveraging the best model (Opus 4) significantly improves the quality of the output, especially for complex tasks like UI design from an image.
Implementing Permissions
(Based on 41:37 - 54:48)
Next, we moved on to integrating native permissions for microphone and speech recognition:
- Adding Dependencies & Platform Setup: Claude correctly identified the need for the `permission_handler` package and added it to `pubspec.yaml`. Crucially, it also figured out the necessary platform-specific configuration for iOS, adding the privacy usage descriptions to `Info.plist` and the required build configurations to the Podfile based on the package's README. This ability to read documentation and apply platform setup is a major strength.
- Requesting Permissions: Claude created a `PermissionService` class to encapsulate the permission logic. It correctly used the `permission_handler` API to request both microphone and speech recognition permissions simultaneously, returning true only if both are granted (a rough sketch follows after this list).
- Integrating into the UI Lifecycle: Initially, Claude placed the permission request logic in `main.dart`. I guided it to move this logic to the `initState` method of the `TimerPage` widget, which made more sense for this single-page app and kept UI-related state management within the page widget. Claude successfully refactored the code as requested.
- Handling Denials with Alerts: The requirements specified showing an alert if permissions are denied. Claude implemented this, but initially used a standard Material `AlertDialog`. To match the native iOS look, I introduced my own `show_alert_dialog.dart` helper file (which uses `showAdaptiveDialog`) and instructed Claude to use it instead. Claude successfully integrated my helper function, including wiring up the "Open Settings" button action.
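Boiled down, the request logic looks something like this — a sketch based on the `permission_handler` list-request API, not the exact generated code:

```dart
import 'package:permission_handler/permission_handler.dart';

/// Wraps the permission_handler calls needed for voice control.
class PermissionService {
  /// Requests microphone and speech recognition access together.
  /// Returns true only if both are granted.
  Future<bool> requestVoicePermissions() async {
    final statuses = await [
      Permission.microphone,
      Permission.speech,
    ].request();

    return statuses.values.every((status) => status.isGranted);
  }
}
```

On iOS this only works once `Info.plist` contains the `NSMicrophoneUsageDescription` and `NSSpeechRecognitionUsageDescription` entries and the Podfile enables the corresponding permission flags – which is exactly the setup Claude pulled from the package's README.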
Debugging the permission flow required running on a real device (simulator behavior can differ). We encountered an issue where the native permission dialog wasn't appearing on first launch. Claude Code, with some prompting ("Is some more platform-specific configuration needed?"), correctly identified the missing Podfile configuration requirement from the `permission_handler` README, which resolved the issue. This again showcased its strength in understanding documentation and platform-specific setup.
Building Voice Recognition
(Based on 54:48 - 1:10:52)
Implementing the continuous voice recognition loop was the most complex part and where Claude Code faced the biggest challenge in the live demo.
- Detailed Requirements: Given the complexity, I created a specific `voice-recognition.md` document outlining the desired behavior: initialize the service if permissions are granted, start a continuous listening loop, stop listening as soon as a command ("start" or "stop") is recognized, process the command, play a beep (TODO), and restart the listening loop. It also included guidelines for a simple, stateless implementation and graceful error recovery.
- Planning the Feature: Using Plan Mode and referencing the new requirements document, Claude generated a plan for implementing the voice recognition feature, including adding the necessary package (`speech_to_text`), creating a `VoiceCommandService`, integrating it with the `TimerPage`, and updating the voice indicator UI. Ensuring Claude took the detailed `voice-recognition.md` into account required a specific prompt in Plan Mode.
- Initial Implementation: Claude added the `speech_to_text` package and created the `VoiceCommandService`. The initial code had some issues: it held state variables (`isListening`, `isInitialized`) that ideally belonged in the UI layer, used non-idiomatic Dart (`setCallbacks`), and the continuous listening logic was complex, involving `Future.delayed` calls and recursive method calls (`startListeningLoop`). The `onSpeechResult` handling had duplicate logic.
- Iterating for Improvements: Recognizing the code wasn't ideal, I created a detailed `voice-recognition-improvements.md` document listing specific suggestions: make the service stateless, remove `setCallbacks` and `Future.delayed`, use an enum for commands, improve the `startListening` API to return a `Future<TimerCommand>`, use a `Completer` to signal command recognition and stop listening before completing, and remove unnecessary helper methods.
- Second Attempt: I provided this document to Claude Code. It made significant changes, resulting in a much simpler and cleaner `VoiceCommandService` that used a `Completer` as suggested. It successfully made the service stateless and updated the `TimerPage` to manage the state and the listening loop logic (a rough sketch follows after this list).
- Testing Challenges: Testing on a real device confirmed that voice recognition worked for the first command ("start" or "stop"), but the listening loop would get stuck if any other speech was detected. While Claude Code's code was much improved, this specific bug in the continuous listening loop logic was complex and proved difficult to resolve within the demo session, requiring manual trial and error later in the production app.
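For reference, the `Completer`-based shape we ended up with looks roughly like this — a simplified reconstruction assuming the `speech_to_text` listen/stop API, not the exact generated code (the real service adds error handling, and the loop-restart logic in `TimerPage` is where the stubborn bug lived):

```dart
import 'dart:async';

import 'package:speech_to_text/speech_to_text.dart';

enum TimerCommand { start, stop }

/// Stateless wrapper around speech_to_text: listens until a known command
/// is heard, then stops listening and completes with that command.
class VoiceCommandService {
  VoiceCommandService(this._speech);

  final SpeechToText _speech;

  Future<bool> initialize() => _speech.initialize();

  Future<TimerCommand> startListening() {
    final completer = Completer<TimerCommand>();

    _speech.listen(onResult: (result) async {
      final words = result.recognizedWords.toLowerCase();
      TimerCommand? command;
      if (words.contains('start')) command = TimerCommand.start;
      if (words.contains('stop')) command = TimerCommand.stop;

      if (command != null && !completer.isCompleted) {
        // Stop the current recognition session before handing the command
        // back, so the caller can restart the listening loop cleanly.
        await _speech.stop();
        completer.complete(command);
      }
    });

    return completer.future;
  }
}
```

The `TimerPage` then awaits `startListening()`, applies the returned command to the `Stopwatch`, and starts listening again – which is exactly the loop that got stuck when unrelated speech was recognized.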
This experience highlighted that while Claude Code is excellent at understanding requirements and generating complex patterns (like using a `Completer` or generating tests, as seen later), intricate state management and lifecycle issues (like a robust continuous listening loop) can still be challenging for it to perfect without extensive guidance or manual debugging.
Beyond the Demo: Production App Examples
(Based on 1:11:38 - 1:14:19)
While the live coding session showed the process and some challenges, the completed production app (available on the App Store) demonstrates Claude Code's ability to handle even more:
- Custom UI (`_StadiumBorderPainter`): The animated circling border around the voice indicator is a custom painter. I showed Claude a screenshot of a similar effect and described what I needed. It generated the entire `_StadiumBorderPainter` class with animation logic, saving significant time on complex drawing code.
- Info Button & Action Sheet: Implementing the info button that brings up a native-looking action sheet with voice commands was a one-shot success based on a single prompt.
- Advanced Time Formatting & Unit Tests: The ability to display and speak the time in different formats was a complex task. I defined a `TimeFormat` enum with `formatTime` (for UI) and `formatTimeForSpeech` methods. Generating the nuanced speech formatting logic for each format based on examples I provided was impressive. Even more so, Claude Code generated a comprehensive suite of unit tests for this logic and used its agentic capabilities to run them, find failures, and help me fix bugs in the generated code. This combination of complex logic generation and test-driven bug fixing was a major win (a simplified sketch follows after this list).
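To give an idea of the shape of that code and its tests, here's a heavily simplified sketch — the format names, formatting rules, and test below are illustrative, not the production implementation:

```dart
import 'package:flutter_test/flutter_test.dart';

/// Simplified stand-in for the production enum; the real one supports more
/// formats and more nuanced speech rules.
enum TimeFormat {
  seconds,
  minutesAndSeconds;

  /// Compact form for the UI, e.g. "75.0" or "1:15.0".
  String formatTime(Duration elapsed) {
    final totalSeconds = elapsed.inMilliseconds / 1000;
    switch (this) {
      case TimeFormat.seconds:
        return totalSeconds.toStringAsFixed(1);
      case TimeFormat.minutesAndSeconds:
        final minutes = elapsed.inMinutes;
        final seconds = totalSeconds - minutes * 60;
        return '$minutes:${seconds.toStringAsFixed(1).padLeft(4, '0')}';
    }
  }

  /// Natural-language form for text-to-speech, e.g. "1 minute 15 seconds".
  String formatTimeForSpeech(Duration elapsed) {
    switch (this) {
      case TimeFormat.seconds:
        return '${elapsed.inSeconds} seconds';
      case TimeFormat.minutesAndSeconds:
        final minutes = elapsed.inMinutes;
        final seconds = elapsed.inSeconds % 60;
        return '$minutes minute${minutes == 1 ? '' : 's'} $seconds seconds';
    }
  }
}

void main() {
  test('formats 75 seconds for speech', () {
    const elapsed = Duration(seconds: 75);
    expect(
      TimeFormat.minutesAndSeconds.formatTimeForSpeech(elapsed),
      '1 minute 15 seconds',
    );
  });
}
```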
These production examples show that when properly guided, Claude Code can generate complex UI elements, handle nuanced logic, and even write tests, significantly accelerating development.
Key Takeaways and Tips for Effective Use
(Based on 1:14:00 - 1:16:40)
So, what are the key lessons learned from using Claude Code to build this app? It boils down to this: Claude Code is a powerful skill multiplier, but it's not a magic wand. The value you get is directly proportional to how well you use it.
Here are my top tips for effective AI-assisted development with Claude Code:
- Be Specific & Write Detailed Requirements: Forget "vibe coding." Invest time upfront to write clear, detailed requirements covering functional needs, UI details, desired behavior, and crucial non-functional preferences like code style (in a spec document like `@docs/initial-requirements.md`). The more precise you are, the better Claude will understand your intent and the less refactoring you'll need later.
- Use Planning & Structure Your Workflow: Break the project down into smaller, manageable tasks using Claude's Plan Mode (`PLAN.md`). Keep the plan updated and follow it step by step. This structured approach, combined with reviewing and committing changes after each task, maintains a working state and makes it easy to revert if needed. Consider asking Claude to ensure your app is testable at each step.
- Actively Guide & Review Code Ruthlessly: Don't blindly accept Claude's output. Read the code, make sure you understand it, and be critical. If it's not what you want – maybe it's overly complex or doesn't fit your patterns – be prepared to discard the changes and provide more specific guidance or a better prompt. For example, when it first attempted the `VoiceCommandService`, I had to give specific feedback to steer it in the right direction. Remember: generating code is cheap; maintaining bad code is expensive.
- Optimize Context & Use `@` References: Be specific in your prompts and use `@` references to relevant files (`@docs/initial-requirements.md`, `@PLAN.md`, specific code files) to ensure Claude has the necessary context without wasting tokens.
- Leverage Opus 4: Use the most powerful model available. I found Opus 4 significantly better than Sonnet 4 for planning, complex problem-solving, and generating more accurate code upfront.
- Automate Repetitive Tasks: Identify recurring actions (like updating the plan, committing, or running specific checks) and create custom commands for them to streamline your workflow.
- Speed Up Verification with Tests: Use Claude to help write unit tests for the code it generates. Since Claude is agentic, it can then run these tests after making changes, automating part of the verification process and helping you catch bugs faster.
Applying these principles is what makes Claude Code such a powerful skill multiplier. It can handle boilerplate, figure out API usage from documentation, generate complex logic and tests, and write most of the code for you, which honestly feels quite magical sometimes. This structured, deliberate approach makes all the difference between "vibe coding" (which often only produces sloppy products) and "AI-assisted software development" (which helps you build robust and maintainable apps). I'd much rather be in that second category!
From Prototype to Production-Ready App
(Based on 1:16:40 - 1:17:33)
As we saw, the app built during the video was a working prototype, and a pretty good one thanks to Claude Code. However, a truly production-ready app requires more: polish, robust error handling, analytics, potentially backend integrations, continuous integration, etc.
The final app I published includes crucial features like analytics, user feedback mechanisms, and error monitoring with Sentry. Many of these are standard boilerplate or patterns I already had battle-tested in my other apps. For those, it was often quicker for me to copy-paste and adapt my existing code rather than asking Claude to implement them from scratch.
So, while Claude Code is fantastic for building core features and tackling new problems, don't hesitate to reuse your own proven code for standard infrastructure pieces. You could even explore writing custom Claude commands to help integrate your pre-existing code patterns more easily into new apps.
Conclusion
(Based on 1:17:33 - 1:18:42)
Ultimately, Claude Code can do a lot of the heavy lifting for you and significantly accelerate development. So, my advice is to give it a go. If you're a professional software developer and you can afford it, the Max plan is absolutely worth the investment, and Opus 4 is a big step forward compared to other models available in different tools.
Now, you might be thinking: What about Cursor? Should you still use it, or is it better to use Claude Code for everything? As you've seen, I've been using Claude Code extensively for planning, generating code, and tackling complex tasks. However, I still find Cursor incredibly useful for its deep IDE integration – things like seamless tab-autocompletion directly in the editor, and for making quick, small inline changes or refactors manually. So, at the moment, I'm actually still paying for both tools. This hybrid approach allows me to leverage Claude Code's powerful model and agentic capabilities for larger tasks and planning, while still benefiting from Cursor's tight IDE features for day-to-day coding and manual adjustments. Using them both strategically lets me maximize my productivity.
On a final note, we've only scratched the surface of what Claude Code can do in this video. If you want to learn more about its features or dive deeper into AI-assisted development, you can find some useful links and resources below.