November 2025 was a packed month for both Flutter and AI.
On the Flutter side, we got Flutter 3.38 with Dart 3.10, bringing dot shorthand syntax, 16KB Android page size support, and full iOS 26 compatibility.
But the bigger story this month is what I'm calling "The AI Coding Wars." Google launched Gemini 3 alongside their new Antigravity IDE (a free, agent-first development platform), while Anthropic countered with Claude Opus 4.5 (the most capable coding model yet).
Let's dive in!
Flutter 3.38 & Dart 3.10
Flutter 3.38 dropped earlier this month, bringing Dart 3.10 along with it. This is a significant release that includes some breaking changes you'll need to know about.
If you're maintaining a Flutter app in production, the Android 16KB requirement and Java 17 migration are two things that need your immediate attention. But there's also some great stuff here, like the dot shorthand syntax finally being enabled by default!
Here are the highlights:
Dart 3.10 Language Features:
- Dot shorthand syntax is now enabled by default. You can write .value instead of SomeEnum.value, making your code more concise and readable (similar to Swift's syntax); see the example below
- Build hooks are now stable, making it easier to integrate native code (C++, Rust, Swift) without platform-specific build files
- New analyzer plugin system for writing custom static analysis rules
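Here's a minimal sketch of what the dot shorthand looks like in practice (the Status enum is mine, purely for illustration):

```dart
enum Status { idle, loading, success, error }

void main() {
  // Before Dart 3.10, the enum name had to be spelled out every time.
  Status before = Status.loading;

  // With dot shorthands, the context type is known, so the prefix can be dropped.
  Status after = .loading;

  print(before == after); // true
}
```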
Flutter 3.38 Updates:
- Android 16KB page size support - This is now required for Google Play. If you haven't updated yet, you need to do this now
- Full iOS 26, Xcode 26, and macOS 26 support with UIScene lifecycle migration
- Java 17 is now required for Android development (Gradle 8.14 minimum)
- Fixed a major memory leak that affected all Flutter Android apps since version 3.29.0 (finally!)
- Web dev config file support for better team consistency
- DevTools improvements addressing top user pain points
- New Gemini CLI Extension and MCP support for AI integration
Read the full announcements here:
There's also an official announcement video if you prefer video content.
The AI Coding Wars
November marked a major escalation in AI-powered development tools. Google, Anthropic, and OpenAI made huge announcements within days of each other, and the competition is intensifying fast.
Google Gemini 3
Last week, Google announced Gemini 3, their most capable AI model family yet. This is a massive release that topped the LMArena Leaderboard with a 1492 Elo score.
Here's what stood out to me:
- PhD-level reasoning: 37.5% on Humanity's Last Exam, 91.9% on GPQA Diamond
- 1M+ token context window with multimodal understanding (text, images, video, audio, PDFs)
- Generative UI - can create entire interactive experiences, not just text responses
- Gemini Agent for multi-step tasks with Calendar and Gmail integration
- Google claims it's their "best vibe coding model ever" (yes, they actually said that)
The launch was coordinated across Google Search, Gemini App, AI Studio, Vertex AI, Gemini CLI, and their new Antigravity IDE.
Google Antigravity IDE
Alongside Gemini 3, Google launched Antigravity, a new IDE forked from VS Code (yes, another one). It's free in public preview and available for Mac, Windows, and Linux.
What makes Antigravity different:
- Agent-first architecture - agents autonomously plan, execute, and verify tasks
- Agents have dedicated access to Code Editor, Terminal, and Browser
- Knowledge base system for agents to save and learn from context
- Supports Gemini 3 Pro, Claude Sonnet 4.5, and GPT models
- Built by the ex-Windsurf team (Google acquired them for $2.4B in July)
If you want a hands-on perspective, check out this video:
Claude Opus 4.5
Not to be outdone, Anthropic announced Claude Opus 4.5 just days after Gemini 3. They're billing it as "the best model in the world for coding, agents, and computer use."
The numbers are impressive:
- State-of-the-art on SWE-bench Verified (real-world software engineering tasks)
- Leads on 7 of 8 programming languages in multilingual benchmarks
- 15% improvement over Sonnet 4.5 on Terminal Bench for long-horizon tasks
- Uses 76% fewer output tokens than Sonnet 4.5 while matching performance (this means huge cost savings; see the quick math after this list)
- New "effort parameter" for balancing capability vs speed/cost
- Pricing: $5/$25 per million tokens (input/output)
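To put those last two bullets in perspective, here's a quick back-of-the-envelope calculation in Dart. Only the $5/$25 pricing and the 76% figure come from the announcement; the token counts are invented purely for illustration:

```dart
void main() {
  // Published Opus 4.5 pricing (per million tokens).
  const inputPricePerM = 5.0;
  const outputPricePerM = 25.0;

  // Hypothetical agentic session: these token counts are made up.
  const inputTokens = 400000; // prompts, files, and tool results fed in
  const outputTokensBaseline = 200000; // what a Sonnet 4.5 run might emit

  // Opus 4.5 reportedly needs ~76% fewer output tokens for the same result.
  const outputTokensOpus = outputTokensBaseline * (1 - 0.76);

  final cost = inputTokens / 1e6 * inputPricePerM +
      outputTokensOpus / 1e6 * outputPricePerM;

  print('Estimated output tokens: ${outputTokensOpus.round()}'); // 48000
  print('Estimated session cost: \$${cost.toStringAsFixed(2)}'); // $3.20
}
```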
For those of us using Claude Code daily, this is a big deal. The efficiency improvements mean we can tackle more complex agentic coding tasks without burning through credits as fast.
My take: Frontier models are getting extremely good, but we can only unlock their value by creating truly agentic workflows, and that is a whole skill in itself. Thanks to subagents, skills, MCP servers, and custom slash commands, Claude Code still has a big lead over other agentic coding CLIs, and it's very unlikely I'll switch over unless the competition catches up.
GPT-5.1 & Codex
OpenAI also joined the party with GPT-5.1, the latest in their GPT-5 series. The model dynamically adapts how much time it spends "thinking" based on task complexity, making it faster and more token-efficient for simpler tasks.
But the bigger news for developers is the Codex family:
- GPT-5.1-Codex and GPT-5.1-Codex-Mini - optimized for long-running, agentic coding tasks
- GPT-5.1-Codex-Max - the flagship model that can work independently for 24+ hours on a single task
Codex-Max introduces "compaction" - the ability to work coherently across millions of tokens, enabling project-scale refactors and deep debugging sessions. On SWE-Bench Verified, it scored 77.9%.
For GitHub Copilot users, the full GPT-5.1 suite is now available in public preview for Pro, Pro+, Business, and Enterprise plans.
⚠️ Understanding Agentic Coding Security Risks
With all these new agentic coding tools, it's worth understanding the security risks they introduce. Shortly after Antigravity's launch, security researchers discovered serious vulnerabilities that highlight broader concerns with agent-first approaches:
- Data exfiltration via prompt injection: Attackers can hide malicious instructions in 1-point font on webpages, forcing the AI to bypass file protections and exfiltrate secrets
- Bypassing .gitignore: AI agents can use system commands to access files that should be protected
These aren't just Antigravity problems; they're challenges any agentic coding tool must address. GitHub published an excellent breakdown of their agentic security principles, identifying three main threat categories:
- Data Exfiltration - Agents with internet access could leak sensitive data, including credentials
- Impersonation & Attribution - Unclear accountability for agent actions
- Prompt Injection - Malicious instructions hidden in repositories or web pages
Their recommended safeguards include network firewalling, limited data access, reversibility requirements (PRs instead of direct commits), and clear action attribution.
My take: As AI tools gain more autonomy, security becomes critical. At minimum, consider using a sandboxed development container for agentic workflows, as explained here. And always review what permissions you're granting these tools.
The Counter-Argument to AI Coding
With all the AI hype this month, I think it's important to acknowledge the other side of the story. Not everyone is having a great time with AI coding tools, and their concerns are valid.
AI Coding Sucks
Earlier this month, CJ posted a viral rant titled "AI Coding Sucks" that struck a nerve across developer communities.
The main criticisms:
- Lost joy of programming - Endless back-and-forth with unpredictable LLMs that take shortcuts
- Code quality concerns - AI-generated code can be hard to maintain and understand
- The "skill issue" narrative - Evangelists dismissing legitimate concerns as user error
Matt Pocock, a prominent TypeScript educator, had a thoughtful response, pointing out that careful planning and context management can prevent many of the issues with AI coding.
Latest from Code with Andrea
Following the "AI Coding Sucks" debate, I've been thinking a lot about one fundamental question: when should you write code yourself, and when should you use AI?
When to Code, When to Prompt? My 2x2 Decision Matrix
The result is this video, which aims to help you decide between AI assistance and manual coding for different Flutter development tasks:
The decision matrix is based on comparing prompting effort vs coding effort:
- Low prompting/High coding effort → Use AI (boilerplate, tests, refactors)
- High prompting/Low coding effort → Code manually (visual issues, tiny fixes)
- Low/Low → Either approach works (simple tweaks)
- High/High → Collaborative AI approach (complex features, full-stack)
The core principle is simple: compare prompting effort against coding effort. AI offers speed and knowledge, but "accuracy is not guaranteed", so you need to factor in the cost of reviewing and fixing AI-generated code.
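If it helps, here's the same matrix expressed as a tiny Dart sketch. The Effort enum and the decide function are my own naming, purely to make the idea concrete:

```dart
enum Effort { low, high }

/// A toy encoding of the 2x2 matrix: compare prompting effort
/// against coding effort and pick an approach.
String decide({required Effort prompting, required Effort coding}) {
  if (prompting == Effort.low && coding == Effort.high) {
    return 'Use AI (boilerplate, tests, refactors)';
  }
  if (prompting == Effort.high && coding == Effort.low) {
    return 'Code manually (visual issues, tiny fixes)';
  }
  if (prompting == Effort.low && coding == Effort.low) {
    return 'Either approach works (simple tweaks)';
  }
  return 'Collaborative AI approach (complex features, full-stack)';
}

void main() {
  // Lots of boilerplate to generate, easy to describe in a prompt.
  print(decide(prompting: Effort.low, coding: Effort.high));
}
```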
Black Friday Sale 2025
Speaking of AI and productivity... if you've been thinking about leveling up your Flutter skills, now's the time. I'm running my annual Black Friday sale:
- 50% off Flutter in Production course
- $100 off the 5x Flutter Course Bundle
These are the best prices you'll see for a while. The sale is live now, so don't wait too long!
Until Next Time
November was a big month for both Flutter and AI development tools. Flutter 3.38 brings some important updates (especially for Android and iOS compatibility), while the AI landscape is evolving faster than ever.
The competition between Google (Gemini 3 + Antigravity), Anthropic (Claude Opus 4.5 + Claude Code), and OpenAI (GPT 5.1 + Codex) is really heating up, and honestly, I think this innovation unlocks more value and increasingly advanced agentic workflows for all of us.
As always, remember that AI is a multiplier that amplifies both your skills and your mistakes. So, learn to use it well, and don't feel like you need to go all-in. Sometimes the old-fashioned way of writing code manually is still the right call.
What's your take on the AI coding wars? What's your favorite AI coding tool? Let me know on X (Twitter), LinkedIn or BlueSky.
Thanks for reading, and happy coding!





