Claro: Natural Language Task Management
Context and Motivation
Task management is fundamental to productivity, yet traditional applications introduce friction through context switching, rigid structures, and barriers between thought and capture. The rise of sophisticated language models presented an opportunity to reimagine this interaction—transforming task management from a separate application into a natural conversation with an intelligent assistant.
Beyond interaction design, there were practical considerations. Premium task management services cost $48-100 annually, while modern LLM CLI tools like Claude Code and Amazon Q CLI offer powerful capabilities that are underutilized for personal productivity. These tools support sophisticated customization through steering documents, integrate with local databases, and—when using max plans or internal tools—incur no incremental token costs. This created an opportunity to build a zero-marginal-cost system leveraging existing subscriptions while delivering a superior user experience.
The Claro Solution
Claro is a natural language task management system implementing the Getting Things Done (GTD) methodology through conversational interaction with LLM CLI tools. Rather than building another graphical interface, Claro exposes task management capabilities through the Model Context Protocol (MCP), allowing users to interact with tasks, projects, and routines using natural language.
The system comprises four integrated components:
- MCP Server: A Python-based server exposing 27 tools for task management, project organization, routine tracking, and analytics through a standardized protocol that LLM CLIs can invoke.
- SQLite Database: A file-based database storing all tasks, projects, routines, and metadata, designed to be git-syncable for cross-device workflows without cloud infrastructure.
- Steering Documents: Carefully crafted personality definitions (CLAUDE.md for Claude Code, AmazonQ.md for Amazon Q CLI) that transform the LLM into a proactive executive assistant who understands GTD principles.
- REST API and Web Frontend: A FastAPI backend with Next.js frontend providing traditional UI access when graphical interaction is preferred.
The core innovation is the interaction model itself. Users work with Claro through natural conversation: "show me what I need to focus on today," "let's process my inbox together," "which projects aren't moving as quickly as others," "write up a quick summary for my standup." This removes the friction of context switching and enables sophisticated batch operations, pattern analysis, and proactive suggestions that traditional task managers cannot provide.
Architecture and Technical Implementation
Database Design
At the foundation is a SQLite database implementing a comprehensive GTD schema. The tasks table captures the full lifecycle with fields for status (inbox, next_action, waiting, someday, completed, archived), project association, context tags (@computer, @phone, @errands), priority dimensions (urgency and importance on 1-4 scales for Eisenhower matrix), due dates, delegation tracking (person and date), and recurrence patterns (daily, weekly, monthly, yearly with configurable intervals). Each task tracks creation date, completion date, and last action date—the latter critical for identifying stale items during weekly reviews.
The projects table organizes related tasks into outcomes-based collections with area classification (Work, Personal, Health, Finance, Learning), status, and desired outcome statements. A key architectural decision was implementing routines as a separate system. The routines and routine_completions tables handle habit tracking with a fundamentally different philosophy—routines are about consistency and streaks rather than outcomes and completion. This separation prevents habit tracking from cluttering GTD next actions lists while still supporting daily habit reinforcement.
The notes table provides polymorphic note-taking capability, attaching markdown-formatted notes to tasks or projects. The config table stores system configuration as key-value pairs with JSON values. Performance is optimized through strategic indexes on frequently queried fields, while foreign key constraints maintain referential integrity.
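A condensed sketch of the schema described above can make the design concrete. The column names, CHECK constraints, and index choices here are illustrative assumptions for this write-up, not Claro's actual DDL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # the real system uses a file-based, git-syncable DB
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity

conn.executescript("""
CREATE TABLE projects (
    id      INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    area    TEXT CHECK (area IN ('Work','Personal','Health','Finance','Learning')),
    outcome TEXT                     -- desired outcome statement
);

CREATE TABLE tasks (
    id               INTEGER PRIMARY KEY,
    title            TEXT NOT NULL,
    status           TEXT NOT NULL DEFAULT 'inbox'
        CHECK (status IN ('inbox','next_action','waiting','someday','completed','archived')),
    project_id       INTEGER REFERENCES projects(id),
    context          TEXT,           -- e.g. '@computer', '@phone', '@errands'
    urgency          INTEGER CHECK (urgency BETWEEN 1 AND 4),
    importance       INTEGER CHECK (importance BETWEEN 1 AND 4),
    due_date         TEXT,
    delegated_to     TEXT,
    delegated_on     TEXT,
    recurrence       TEXT,           -- 'daily' | 'weekly' | 'monthly' | 'yearly'
    created_at       TEXT DEFAULT (datetime('now')),
    completed_at     TEXT,
    last_action_date TEXT            -- used to surface stale items in reviews
);

-- Indexes on frequently queried fields keep list/filter operations fast
CREATE INDEX idx_tasks_status ON tasks(status);
CREATE INDEX idx_tasks_last_action ON tasks(last_action_date);
""")
```

New captures default to the `inbox` status, matching the GTD capture step.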
MCP Server Implementation
The MCP server bridges business logic with LLM CLI tools through 27 carefully designed tools. Task management tools handle the complete lifecycle: add_task, update_task, complete_task with automatic recurrence handling, list_tasks with rich filtering (status, context, project, staleness, delegation), and archival operations. Project and routine management tools mirror this structure, providing comprehensive lifecycle management.
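The automatic recurrence handling on completion can be sketched as a small date-rolling helper. This is an assumed implementation for illustration (Claro's actual clamping and leap-year rules may differ):

```python
import calendar
from datetime import date, timedelta

def next_due(due: date, pattern: str, interval: int = 1) -> date:
    """Compute the next occurrence's due date when a recurring task completes."""
    if pattern == "daily":
        return due + timedelta(days=interval)
    if pattern == "weekly":
        return due + timedelta(weeks=interval)
    if pattern == "monthly":
        # roll the month forward, clamping the day to the target month's length
        month0 = due.month - 1 + interval
        year, month = due.year + month0 // 12, month0 % 12 + 1
        day = min(due.day, calendar.monthrange(year, month)[1])
        return date(year, month, day)
    if pattern == "yearly":
        try:
            return due.replace(year=due.year + interval)
        except ValueError:  # Feb 29 rolling onto a non-leap year
            return due.replace(year=due.year + interval, day=28)
    raise ValueError(f"unknown recurrence pattern: {pattern}")
```

On this model, completing a task simply inserts a copy with the new due date while preserving all other properties.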
Analytics tools provide higher-level insights: get_summary aggregates statistics for time periods, get_eisenhower_stats shows task distribution across priority quadrants, and get_delegation_report surfaces items waiting on others—all supporting the GTD review process. Routine analytics calculate current and longest streaks, completion rates, and daily checklists.
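The streak calculation behind the routine analytics can be sketched as a walk backward over completion dates. The grace rule here (a streak survives until a full day is missed) is an assumption for illustration:

```python
from datetime import date, timedelta

def current_streak(completions: set[date], today: date) -> int:
    """Count consecutive completion days ending today or yesterday.

    The streak is treated as alive if the routine was done today *or*
    yesterday, so a morning check-in doesn't report a broken streak.
    """
    start = today if today in completions else today - timedelta(days=1)
    streak = 0
    day = start
    while day in completions:
        streak += 1
        day -= timedelta(days=1)
    return streak
```

The longest streak follows the same idea applied over the full completion history, and completion rate is simply completions divided by elapsed days.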
The implementation is intentionally synchronous at the core. Business logic in core.py uses straightforward SQLite operations without async/await complexity, as SQLite's file-based nature doesn't benefit from asynchronous I/O. The MCP server wrapper in mcp_server.py provides the required asynchronous protocol interface while calling synchronous core functions, maintaining clean separation.
Dual CLI Support
Claro supports both Claude Code and Amazon Q CLI through project-scoped configuration files committed to the repository. Claude Code reads .mcp.json automatically, while Amazon Q CLI loads .amazonq/cli-agents/claro-agent.json. This zero-setup approach means new users can clone the repository and immediately begin conversational task management.
The personality definitions are identical across both platforms, defining a warm but professional tone, deep GTD methodology expertise, natural conversational style, proactive pattern recognition, and structured review processes (daily for next actions and routines, weekly for accomplishments and stale tasks, monthly for area balance). These steering documents transform the base LLM into an effective executive assistant that guides users through GTD workflows, recognizes when to suggest routines versus tasks, celebrates streak milestones, and identifies stale projects proactively.
The LLM's reasoning capabilities enable sophisticated batch processing—processing an entire inbox at once, identifying patterns across projects, or generating comprehensive standup summaries—operations that would be cumbersome in traditional UI-based task managers.
REST API and Web Frontend
While natural language interaction is the primary interface, Claro includes a comprehensive REST API built with FastAPI and a Next.js 15 frontend with React 19. The backend provides full CRUD operations with Pydantic models ensuring type safety. The frontend uses the App Router architecture with TypeScript throughout, Tailwind CSS for styling, and a type-safe API client wrapper with full error handling.
This dual-interface approach acknowledges that different scenarios call for different tools. Quick capture, daily planning, and batch processing excel in conversational mode, while bulk editing or visual project reviews may be more efficient in a graphical interface. The shared database ensures consistency regardless of interface choice.
Key Features and Workflows
Table 1: Core Features and Implementation
| Feature | Implementation Details | Key Benefits |
|---|---|---|
| GTD Methodology | Complete workflow: Capture (inbox), Clarify (guided conversation), Organize (automatic categorization, Eisenhower matrix), Review (daily/weekly/monthly), Engage (contextual queries) | Natural language removes friction; conversational clarification reduces decision fatigue; automated prioritization ensures focus on important work |
| Eisenhower Matrix | Automatic quadrant assignment based on urgency (1-4) and importance (1-4): Q1 (Do First), Q2 (Schedule), Q3 (Delegate), Q4 (Eliminate) | Visual priority distribution; prevents focus on merely urgent vs truly important work |
| Recurring Tasks | Pattern-based recurrence (daily, weekly, monthly, yearly) with configurable intervals; automatic next instance creation on completion preserving all properties | Eliminates manual recreation; maintains consistency across recurring work |
| Routine Tracking | Separate system from tasks focusing on consistency; streak calculation (current, longest); completion rate analysis; daily checklists | Prevents habit clutter in action lists; reinforces consistency over outcomes; celebrates milestones |
| Delegation Management | Waiting status + delegation fields (person, date); dedicated reports showing days waiting | Ensures follow-through on blocked items; prevents dropped balls |
| Git-Based Sync | SQLite database committed to repo; multi-device workflow via git push/pull; version history | Privacy-first local data; no cloud dependencies; full user control; free synchronization |
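The automatic quadrant assignment from Table 1 reduces to a pair of threshold tests. The cutoff of 3 on the 1-4 scales is an assumption for this sketch; Claro's actual boundary may differ:

```python
def eisenhower_quadrant(urgency: int, importance: int, threshold: int = 3) -> str:
    """Map 1-4 urgency/importance scores to an Eisenhower quadrant."""
    urgent, important = urgency >= threshold, importance >= threshold
    if urgent and important:
        return "Q1: Do First"
    if important:
        return "Q2: Schedule"
    if urgent:
        return "Q3: Delegate"
    return "Q4: Eliminate"
```

Aggregating this function over all open tasks yields the quadrant distribution that `get_eisenhower_stats` reports.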
Table 2: Conversational Interaction Patterns
| Workflow | Natural Language Query | System Response |
|---|---|---|
| Inbox Processing | "Let's process my inbox together" | Guided clarification of each item; determines actionability, next actions, context requirements; automatically organizes |
| Daily Planning | "What should I focus on today?" | Curated guidance considering due dates, priorities, context, project balance, and stale items needing attention |
| Progress Insights | "Which projects aren't moving as quickly as others?" | Identifies stale projects using last_action_date; calculates time since activity; recognizes patterns (e.g., all health projects stale) |
| Standup Preparation | "Write up a quick summary for my standup meeting" | Reviews recent completions; identifies key accomplishments; notes blockers/waiting items; generates formatted summary |
| Routine Check-ins | "How am I doing with my habits?" | Reports current streaks; calculates completion rates; celebrates milestones; identifies neglected routines |
| Context Filtering | "Show me what I can do @computer" | Lists all next actions tagged with @computer context; prioritized by urgency/importance |
User Experience Benefits
The fundamental shift Claro introduces is moving from transaction-based interaction to conversation-based collaboration. This interaction model removes the cognitive overhead of remembering commands, navigating menus, and manually synthesizing insights. The LLM handles the complexity, presenting a simple conversational interface that adapts to natural speech patterns and works seamlessly with speech-to-text workflows.
Batch operations become significantly more powerful—processing an entire inbox conversationally is faster and less fatiguing than clicking through items individually. The LLM's analytical capabilities surface insights that would require manual analysis in traditional systems. The git-based sync provides both backup and cross-device access without requiring trust in cloud providers or concerns about service continuity.
Technical Quality and Best Practices
Claro adheres to modern development standards. Python code uses comprehensive type hints, docstrings on public APIs, and SQL parameterization throughout, eliminating injection vulnerabilities. The frontend employs TypeScript with strict checking, follows Next.js 15 conventions, and uses modern React 19 features. Testing is comprehensive, with dedicated suites for core functionality, routines, API endpoints, and frontend components—all tests passing.
Package management follows modern best practices—uv for Python and pnpm for JavaScript. Documentation includes detailed README, setup guides for both CLIs, usage examples, architecture documentation, and implementation summaries.
Results and Impact
Claro replaced over a decade of Todoist Pro usage. The natural language interface proved more efficient for daily task management, particularly for capture (reduced friction), batch processing (conversational inbox processing), planning (intelligent recommendations), and insights (proactive pattern recognition).
The economics are compelling — zero marginal cost leveraging existing LLM CLI subscriptions. The git-based sync provides cross-device access with full privacy control and version history. The MCP architecture proved extensible, with new tools added incrementally without disrupting existing functionality.
Perhaps most significantly, the interaction model itself represents a meaningful productivity gain. Moving from transaction-based UI interaction to conversation-based collaboration with an intelligent assistant feels qualitatively different—less like using a tool and more like working with a trusted partner who understands your system and helps you navigate it effectively. Claro demonstrates that sophisticated LLM CLI tools combined with thoughtful architecture can deliver personal productivity systems that are both more capable and more natural to use than traditional GUI-based task managers, while costing nothing beyond existing subscriptions and maintaining full user control over data.