TerminalQ: Bloomberg-Style Financial Terminal for Claude Code
Impact Summary
Built a full-featured financial terminal as a Claude Code plugin that gives individual investors access to the same data and analysis workflows that power institutional decision-making, at zero cost, with all personal financial data stored locally.
Role
Creator & Maintainer
Timeline
2026-Present
Scale
- 30 MCP tools across 6 data providers
- 6 workflow skills with structured output contracts
- 29 slash commands for quick access
- 48 tests, full audit trail
Decision Summary
Constraints:
- Must work entirely within free-tier API limits
- Personal financial data must never leave the local machine
- Must produce structured, auditable output (not hallucinated numbers)
- Must integrate naturally with the Claude Code conversational workflow
Chosen: Claude Code plugin (MCP)
- + Native integration with Claude Code, no separate app
- + Claude handles reasoning, plugin handles data
- + Extensible: new providers are just new Python modules
- + Local-first: all computation happens on the user's machine
- − Requires Claude Code (not a standalone app)
- − Limited by MCP protocol constraints
Alternative: standalone web app
- + More visual, traditional finance UX
- + Accessible to non-Claude users
- − Requires hosting and authentication
- − Loses conversational analysis capability
- − Personal data would need cloud storage
Alternative: browser extension
- + Works alongside existing finance websites
- − Limited computation capability
- − No conversational reasoning
- − Complex cross-origin data challenges
The Problem
Individual investors are flying blind compared to institutions. A Bloomberg terminal costs $24,000 per year. The free alternatives (Yahoo Finance, Google Finance, various apps) each give you a slice of the picture but never the whole thing. Want to check your portfolio allocation, compare it against macro conditions, and evaluate whether to rebalance? That’s five tabs, manual calculations, and no way to ask follow-up questions.
Meanwhile, AI assistants like Claude can reason about financial data brilliantly. They can explain what a Sharpe ratio means, walk through a DCF analysis, or help evaluate earnings results. But they can’t see your portfolio. They can’t pull live quotes. They have no access to the data they’d need to give you actionable analysis.
TerminalQ bridges that gap.
The Architecture
Multi-Provider Data Layer
The core challenge was aggregating data from multiple free APIs into a coherent interface. Each provider has different rate limits, response formats, and failure modes:
- Finnhub (60 req/min): Real-time quotes, company profiles, news, earnings, analyst ratings
- FRED (120 req/min): Economic indicators, yield curves, forex, macro data
- SEC EDGAR (10 req/sec): Financial statements, quarterly/annual filings
- Yahoo Finance: Historical OHLCV data, dividends (via yfinance)
- CoinGecko (30 req/min): Cryptocurrency prices and market data
- Brave Search (2,000/month): Web research for qualitative analysis
Each provider is a standalone Python module behind a common interface. Rate limiting, caching, and error handling are built into the infrastructure layer so individual tools don’t need to worry about API constraints.
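As a minimal sketch of that pattern (class and method names here are hypothetical, not TerminalQ's actual interface), each provider subclasses a common base that handles short-lived caching in front of the raw API call; rate limiting and error handling hook into the same layer:

```python
import time
from abc import ABC, abstractmethod


class Provider(ABC):
    """Common interface each data-provider module sits behind.
    Hypothetical sketch; the real TerminalQ interface may differ."""

    def __init__(self, cache_ttl: float = 30.0):
        self.cache_ttl = cache_ttl
        self._cache = {}  # (endpoint, params) -> (timestamp, data)

    @abstractmethod
    def fetch(self, endpoint: str, **params) -> dict:
        """The raw API call; each provider handles its own auth and formats."""

    def get(self, endpoint: str, **params) -> dict:
        """What tools call: transparent caching in front of fetch(), so
        individual tools never touch rate limits or API quirks directly."""
        key = (endpoint, tuple(sorted(params.items())))
        hit = self._cache.get(key)
        now = time.monotonic()
        if hit is not None and now - hit[0] < self.cache_ttl:
            return hit[1]  # fresh cache hit, no API call
        data = self.fetch(endpoint, **params)
        self._cache[key] = (now, data)
        return data


class Finnhub(Provider):
    def fetch(self, endpoint: str, **params) -> dict:
        # The real module would issue the HTTP request here.
        return {"provider": "finnhub", "endpoint": endpoint, **params}
```

Because every provider exposes the same `get()` surface, adding a new data source is a matter of implementing one `fetch()` method.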
30 MCP Tools
The tools are organized by domain:
Quotes and market data: Real-time quotes (single and batch), historical OHLCV, dividends, crypto prices, forex rates.
Portfolio analytics: Live portfolio with P&L, static holdings, RSU vesting schedules, watchlist with quotes, risk metrics (Sharpe, Sortino, VaR, beta, max drawdown), asset allocation with concentration warnings.
Research: Company profiles, analyst ratings, earnings history and estimates, SEC filings, financial statements, stock screening, web search.
Macro: Economic indicators, macro dashboard, economic calendar, yield curve data.
Technical analysis: SMA, EMA, RSI, MACD, Bollinger Bands, ATR.
Visualization: Price charts, sector heatmaps, allocation breakdowns, yield curves, comparison charts.
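Each tool is essentially a typed Python function registered with the MCP server, returning structured data for Claude to interpret. The sketch below uses a simplified stand-in for the SDK's registration decorator; the function name, fields, and placeholder values are illustrative, not TerminalQ's actual schema:

```python
from typing import Callable

# Simplified stand-in for the MCP server's tool registry.
TOOLS: dict[str, Callable[..., dict]] = {}


def tool(fn: Callable[..., dict]) -> Callable[..., dict]:
    """Register a function as an MCP-exposed tool (stand-in for the
    real MCP SDK decorator)."""
    TOOLS[fn.__name__] = fn
    return fn


@tool
def get_quote(symbol: str) -> dict:
    """Return a structured quote. The tool only fetches and shapes data;
    Claude does the interpretation. Placeholder values stand in for a
    real provider call."""
    return {
        "symbol": symbol,
        "price": 0.0,  # would come from the Finnhub provider
        "currency": "USD",
        "as_of": "1970-01-01T00:00:00Z",  # data freshness timestamp
    }
```

The point of the shape: every tool returns a timestamped, structured payload rather than free text, which is what makes the output auditable.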
6 Workflow Skills
The real power isn’t individual tools. It’s the workflow skills that orchestrate multiple tools into comprehensive analyses:
- Market Overview takes a batch of index quotes, a sector heatmap, macro dashboard, economic calendar, and portfolio snapshot, then synthesizes them into a morning briefing.
- Portfolio Health runs risk metrics, allocation analysis, RSU exposure check, and benchmark comparison, then produces a portfolio scorecard with actionable recommendations.
- Company Research pulls the company profile, financials, earnings, analyst ratings, technicals, news, and portfolio fit analysis into a single research report.
- Trade Research builds an investment thesis with valuation analysis, technical entry points, position sizing, and risk management recommendations.
- Economic Outlook combines yield curve analysis, inflation trends, labor market data, and Fed policy signals into a macro brief with portfolio implications.
- Earnings Preview aggregates upcoming earnings dates, EPS estimates, historical beat rates, and pre-earnings technical setups for portfolio holdings.
Each skill produces structured output following defined contracts, with data freshness timestamps and disclaimers.
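One way such a contract could be expressed (the field names below are assumptions for illustration, not the plugin's actual schema) is a small dataclass that forces every skill to carry a freshness timestamp, its data sources, and a disclaimer:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class SkillReport:
    """Hypothetical shape of a workflow-skill output contract."""
    title: str
    sections: dict          # section name -> rendered markdown body
    data_as_of: datetime    # freshness timestamp for the underlying data
    sources: list = field(default_factory=list)
    disclaimer: str = "Informational only; not investment advice."


report = SkillReport(
    title="Morning Market Overview",
    sections={"Indexes": "...", "Macro": "...", "Portfolio": "..."},
    data_as_of=datetime.now(timezone.utc),
    sources=["finnhub", "fred"],
)
```

Making freshness and sourcing required fields, rather than optional prose, is what keeps the skills from emitting unattributed numbers.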
Privacy-First Design
All personal financial data lives in ~/.terminalq/ as markdown files. Holdings, RSU schedules, account information, and watchlists never leave the local machine. They’re never sent to external APIs, never stored in the cloud, never included in any request payload.
This was a deliberate architectural choice. The trust gap in personal finance tools is real. People are reluctant to hand their portfolio data to a third-party service. By keeping everything local and readable (it’s just markdown), TerminalQ avoids that problem entirely.
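Because the data files are plain markdown, reading them back is deliberately trivial. A sketch, assuming a simple three-column holdings table (the file layout shown is an assumption, not TerminalQ's documented format):

```python
from pathlib import Path

# All personal data lives under this directory, never in the cloud.
DATA_DIR = Path.home() / ".terminalq"


def load_holdings(text: str) -> list:
    """Parse a markdown table of holdings: | symbol | shares | cost |.
    Header and separator rows are skipped because their cells don't
    parse as numbers."""
    rows = []
    for line in text.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) != 3:
            continue
        try:
            rows.append({
                "symbol": cells[0],
                "shares": float(cells[1]),
                "cost": float(cells[2]),
            })
        except ValueError:
            continue  # header row ("shares") or separator row ("---")
    return rows
```

Keeping the on-disk format human-readable means users can inspect or edit their holdings with any text editor, with no export step and no lock-in.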
What I Learned
The MCP protocol is a natural fit for data-heavy domains. The pattern of “AI does the reasoning, tools provide the data” maps perfectly onto financial analysis. Claude doesn’t need to know how to call the Finnhub API. It just needs a get_quote tool that returns structured data, and it handles the interpretation.
Rate limiting is infrastructure, not an afterthought. With six different API providers and different rate limits on each, rate limiting had to be built into the foundation. A per-provider token bucket with backoff ensures the plugin stays within free-tier limits even during heavy research sessions.
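The token-bucket pattern itself is standard; a minimal version (details like refill granularity are my assumptions, not the plugin's exact implementation) looks like this, with one bucket per provider sized to its free-tier limit:

```python
import time


class TokenBucket:
    """Per-provider token bucket: `rate` tokens refill per second,
    up to `capacity`. Sketch of the pattern, not TerminalQ's exact code."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def acquire(self, tokens: float = 1.0) -> float:
        """Take tokens, sleeping (backoff) if the bucket is empty.
        Returns the time spent waiting."""
        now = time.monotonic()
        # Refill based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        waited = 0.0
        if self.tokens < tokens:
            waited = (tokens - self.tokens) / self.rate
            time.sleep(waited)       # back off until enough tokens refill
            self.tokens = tokens
        self.tokens -= tokens
        return waited


# One bucket per provider, sized from its documented free-tier limit:
finnhub_bucket = TokenBucket(rate=60 / 60, capacity=60)   # 60 req/min
fred_bucket = TokenBucket(rate=120 / 60, capacity=120)    # 120 req/min
```

Blocking inside `acquire()` rather than raising means a long research session simply slows down at the quota edge instead of failing.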
Workflow skills are the multiplier. Individual tools are useful but modest. A single quote lookup saves 10 seconds. A full market overview skill that orchestrates 5 tools and synthesizes the results into a briefing saves 30 minutes. The skill layer is where the real value compounds.
Audit trails build trust. Every tool invocation is logged with timestamps, parameters, and response sizes. When you’re dealing with financial data, being able to answer “where did this number come from?” matters. The /audit command shows the full trace.
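A JSON Lines append is one natural way to implement that trace (the file location and field names here are assumptions for illustration):

```python
import json
import time
from pathlib import Path

# Assumed location; lives alongside the other local-only data files.
AUDIT_LOG = Path.home() / ".terminalq" / "audit.jsonl"


def audit(tool: str, params: dict, response_bytes: int, log: Path = AUDIT_LOG) -> dict:
    """Append one structured record per tool invocation (JSON Lines),
    so /audit can later answer 'where did this number come from?'."""
    record = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "tool": tool,
        "params": params,
        "response_bytes": response_bytes,
    }
    log.parent.mkdir(parents=True, exist_ok=True)
    with log.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Logging parameters and response sizes, but not response bodies, keeps the trail useful without duplicating personal data on disk.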
This write-up was co-authored with AI, based on the author's working sessions and notes.
Explore the source
fakoli/terminalq
Star it, fork it, or open an issue — contributions and feedback welcome.
Related Projects
AWS Security Group Mapper: Visual Analysis Tool for Cloud Security
A Python tool for visualizing AWS security group relationships and generating interactive graphs to help understand complex security architectures.
Fighters Paradise: Modern Game Engine Reimplementation in Rust
A modern Rust reimplementation of the MUGEN 2D fighting game engine with full backward compatibility for existing community content.
Agent-Eval: CI Evaluation Harness for Multi-Agent Development
Behavioral regression testing framework for detecting drift in AI agent instruction files across multi-agent development environments.