February 28, 2025
The Economics of Intelligence
February taught me that the most valuable investments aren't always the most obvious ones. Sometimes the highest ROI comes from understanding the true cost structure of emerging technologies and betting accordingly.
Unexpected Discovery: AI as Taste Curator
LLMs like Claude have become surprisingly sophisticated recommendation engines for entertainment. Feed them your preferences, and they'll surface genuinely interesting suggestions you'd never find through traditional algorithms. It's pattern recognition applied to taste – and it works remarkably well.
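The workflow amounts to structuring your likes and dislikes into a single prompt. A minimal sketch of that step – the function name, example titles, and prompt wording are all my own illustration, not part of any particular tool:

```python
# Hypothetical sketch: packaging taste data into a recommendation prompt.
def build_recommendation_prompt(liked, disliked, medium="films"):
    """Assemble a preference-based recommendation prompt for an LLM."""
    lines = [
        f"Recommend five {medium} I might enjoy.",
        "Things I loved: " + ", ".join(liked),
        "Things that fell flat: " + ", ".join(disliked),
        "For each pick, explain which of my preferences it matches.",
    ]
    return "\n".join(lines)

# Example titles are placeholders, not recommendations from the text.
prompt = build_recommendation_prompt(
    liked=["Arrival", "The Lives of Others"],
    disliked=["Transformers"],
)
print(prompt)
```

Sending that string to any chat model is the whole trick; the value comes from being explicit about the negative examples, which traditional recommenders rarely let you state.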
Setting an AI Learning Budget
This month I established a dedicated budget for LLM experimentation and learning. Here's why: the productivity gains compound, but only if you invest in understanding the tools deeply. The learning curve flattens dramatically when you can afford to experiment freely rather than optimizing every API call for cost.
The Subscription vs. API Economics
Pro subscriptions for chat interfaces remain remarkably cost-effective compared to API credits. When you factor in chat history, artifacts, canvas features, and voice interaction, the subscription model delivers superior value for exploratory work. The convenience features aren't just nice-to-haves – they're productivity multipliers.
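The economics are easy to check with back-of-envelope arithmetic. The prices below are illustrative assumptions, not current list prices for any provider:

```python
# Break-even sketch: flat chat subscription vs pay-per-token API.
# All prices are assumed for illustration, not quoted from a price list.
SUBSCRIPTION_USD = 20.00        # flat monthly fee (assumed)
INPUT_USD_PER_MTOK = 3.00       # API input price per million tokens (assumed)
OUTPUT_USD_PER_MTOK = 15.00     # API output price per million tokens (assumed)

def api_cost(input_tokens, output_tokens):
    """Pay-per-token cost in USD for one month of usage."""
    return (input_tokens * INPUT_USD_PER_MTOK +
            output_tokens * OUTPUT_USD_PER_MTOK) / 1_000_000

# A heavy month of exploratory chat: say 5M tokens in, 1M tokens out.
monthly = api_cost(5_000_000, 1_000_000)
print(f"API equivalent: ${monthly:.2f} vs ${SUBSCRIPTION_USD:.2f} subscription")
```

Under these assumptions the same exploratory volume costs $30 via the API against a $20 flat fee – before counting chat history, artifacts, and voice, which the API doesn't bundle at all.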
Sonnet 3.7: When Better Isn't Better Enough
Tested the newly released Sonnet 3.7 model extensively. Despite improvements, it didn't clear the high bar set by 3.5 Sonnet. More importantly, rate limits and slow token generation created bottlenecks that faster models don't have. Sometimes the marginal improvement doesn't justify the marginal cost. Reallocated that $20/month subscription toward API credits instead – a decision driven by practical performance rather than theoretical capability.
The Gemini Shift
Switched testing focus to Gemini Pro. The Flash model's speed is genuinely impressive, and the multimodal capabilities – especially voice input – are growing on me. Speed isn't just convenience; it changes how you think and work with AI. When responses are instant, you can maintain flow state.
MCP Tools: Promise vs. Reality
Exploring MCP (Model Context Protocol) tools reveals massive potential hampered by integration challenges. The concept is sound – connecting AI models to external tools and data sources. The execution remains inconsistent. Integration with note-taking apps like UpNote would be transformational for LLM input/output workflows, but UpNote lacks MCP support. Tested Todoist integrations but found execution unreliable. The infrastructure is developing, but we're still in the early innings.
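The sound part of the concept is visible in the wire format: MCP speaks JSON-RPC 2.0, and a tool invocation is a small, uniform message. A sketch of what such a request looks like, based on my reading of the protocol – the tool name and arguments are hypothetical, not a real Todoist integration:

```python
import json

# Sketch of an MCP tool-call request (JSON-RPC 2.0). The "add_task" tool
# and its arguments are invented for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add_task",  # hypothetical Todoist-style tool
        "arguments": {"content": "Review February notes", "due": "tomorrow"},
    },
}
print(json.dumps(request, indent=2))
```

The uniformity is the promise: any client can call any server's tools through this one shape. The inconsistency I hit is on the server side – which apps expose tools at all, and how reliably they execute.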
The Data Volume Explosion
Prediction: LLMs will cause an exponential increase in digital data volume. When creation becomes effortless, creation becomes endless. This has profound implications for storage, search, and information management systems that haven't fully materialized yet.
CLI-First AI Workflows
In line with the shift toward API usage, I'm testing command-line AI tools like aider and Claude Code. The CLI interface removes GUI friction and integrates better with developer workflows. Sometimes the oldest interfaces – text-based commands – work best with the newest technologies.
The Markdown Strategy
Prioritizing Markdown for all writing serves three strategic purposes: future-proofing content, avoiding application lock-in, and improving LLM interaction. When AI becomes your primary reading and writing partner, format compatibility becomes crucial infrastructure. Markdown isn't just a file format – it's a bet on interoperability.
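The interoperability bet is concrete: because Markdown is plain text, its structure stays machine-readable with nothing but a standard library. A minimal heading extractor – the regex covers ATX-style `#` headings only, and the sample document is invented:

```python
import re

def extract_headings(markdown_text):
    """Return (level, title) pairs for ATX-style (#) Markdown headings."""
    pattern = re.compile(r"^(#{1,6})\s+(.*)$", re.MULTILINE)
    return [(len(hashes), title.strip())
            for hashes, title in pattern.findall(markdown_text)]

# Invented sample document for illustration.
doc = """# February Notes
## AI Budget
Some prose.
## Markdown Strategy
"""
print(extract_headings(doc))
```

Ten lines of stdlib code give you an outline of any note – try that against a proprietary binary format. That greppability is exactly what makes Markdown convenient LLM input and output.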
February's key insight: the AI revolution isn't just about capability improvements – it's about economic models, workflow integration, and format standardization. The winners will be those who understand not just what AI can do, but how to structure their work and tools around AI collaboration. We're not just adopting new software; we're redesigning how knowledge work gets done.