AI Study Buddy: Full-Stack Learning Platform with Multi-Provider AI
Impact Summary
Built a full-featured learning management system from scratch with 74+ API endpoints across 13 resource groups, multi-provider AI integration for text and image generation, and a storage abstraction layer supporting JSON, SQLite, and Supabase backends. Designed for learners who want AI-assisted content creation without vendor lock-in.
Role
Creator & Maintainer
Timeline
2026-Present
Scale
- 74+ REST API endpoints
- 13 resource groups
- 3 storage backends
- 2 AI providers
Links
Decision Summary
Constraints
- Must support both text and image AI generation from different providers
- Users must own their own AI API keys to avoid platform-side costs
- Must work in local, single-user, and multi-tenant production deployments
- Content authoring must support Markdown for portability and version control
- Must implement proven spaced repetition for measurable learning outcomes
Chosen: multi-provider AI platform (FastAPI + React)
- + Claude handles text generation, Gemini handles images: best of both
- + Three storage backends cover every deployment scenario
- + User-supplied API keys eliminate platform operating costs
- + FastAPI + React gives a modern, maintainable stack
- − Higher initial complexity: two AI SDKs, three storage adapters
- − Requires users to obtain their own API keys
Alternative: manual flashcard app
- + Simple to build and deploy
- + No AI cost concerns
- + Large existing user base familiar with the model
- − No AI-generated content: users manually create everything
- − No image generation or visual learning support
- − No course structure, learning paths, or admin tools
Alternative: traditional institutional LMS
- + Mature ecosystem with extensive plugin libraries
- + Well-understood by educational institutions
- − Massive codebase, heavy infrastructure requirements
- − No native AI content generation
- − Designed for institutions, not individual learners
- − PHP-heavy stacks are harder to extend with modern AI SDKs
The Problem
I study a lot, and I kept running into the same frustration: the tools that help me learn best are scattered across different apps. Flashcards live in one app. Course materials live in another. Quizzes are somewhere else. And none of them can generate content for me. I’d spend more time creating study materials than actually studying them.
The bigger issue is that most learning platforms are text-first. If you’re a visual learner who retains information through diagrams and images alongside text, you’re out of luck unless you manually create those assets yourself. AI can generate both text and images now, but no learning platform I found actually wired that capability into the study workflow.
I wanted a single platform where I could create a course, have AI generate flashcards and quizzes from the material, get images to reinforce visual concepts, and then review everything through spaced repetition, all in one place.
The Approach
Choosing the Stack
I went with React and TypeScript on the frontend (using Vite for fast iteration) and Python with FastAPI on the backend. The Python choice was deliberate. The AI SDK ecosystem in Python is more mature, and FastAPI gives you automatic OpenAPI documentation for free, which matters when you’re building 74+ endpoints.
For data modeling, I used Pydantic v2 throughout the backend. Every request and response has a typed model, which catches integration issues at the validation layer instead of at runtime. On the frontend, TypeScript provides the same safety, so the entire data flow from React component to API endpoint to database is type-checked.
The AI Layer
Rather than committing to a single AI provider, I built adapters for two: Claude API for text generation and Gemini for image generation. Each provider does what it does best. Claude generates flashcard content, quiz questions, course outlines, and explanatory text. Gemini generates images that reinforce visual learning.
The key design decision was making users supply their own API keys through the settings page. This sounds like a friction point, but it solves a real problem: platform-managed AI keys mean the operator pays for every generation, which either limits usage or requires a subscription model. With user-supplied keys, there’s no artificial cap on how much content you can generate.
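The adapter layer can be sketched as a thin common interface with one wrapper per provider. The `EchoProvider` below is a test stand-in, not a real adapter; in the actual system the wrappers would call the Anthropic and Google SDKs behind the same signature, threading the user's own key through per request:

```python
from abc import ABC, abstractmethod

class TextProvider(ABC):
    """Common interface every AI provider adapter normalizes to."""
    @abstractmethod
    def generate(self, prompt: str, api_key: str) -> str: ...

class EchoProvider(TextProvider):
    # Stand-in used for illustration; real adapters wrap vendor SDK calls
    def generate(self, prompt: str, api_key: str) -> str:
        return f"generated: {prompt}"

PROVIDERS: dict[str, TextProvider] = {"echo": EchoProvider()}

def generate_text(provider: str, prompt: str, api_key: str) -> str:
    # Look up the adapter by name; the caller supplies the user's own key
    return PROVIDERS[provider].generate(prompt, api_key)
```

Because the registry is just a dict of adapters, adding a third provider is one new entry, not a rewrite.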
Storage That Scales With You
I didn’t want to force a database choice on anyone, so I built a storage abstraction layer with three backends:
- JSON files for local development and quick experimentation. Just clone the repo and start building. No database setup required.
- SQLite for single-user deployments. One file, zero configuration, good enough for personal use.
- Supabase for production multi-tenant deployments. Real authentication, row-level security, and all the features you need for multiple users.
Switching between backends is a configuration flag. The application code doesn’t change. Every storage operation goes through the same interface regardless of what’s behind it.
The API Surface
The backend exposes 74+ endpoints organized across 13 resource groups:
- auth: Registration, login, token management
- courses and modules: Course CRUD, module ordering, content management
- paths: Learning path composition from multiple courses
- decks and reviews: Flashcard decks with spaced repetition scheduling
- quiz: AI-generated quizzes with scoring and history
- generate: AI content generation endpoints (flashcards, quizzes, outlines, images)
- ai: Direct AI provider interaction and configuration
- progress: Streak tracking, mastery metrics, completion data
- admin: User management, system monitoring, bulk operations
- settings: User preferences, API key management, notification configuration
- uploads: File handling for course materials and generated images
Each group has its own FastAPI router, keeping the codebase navigable even at this scale.
Spaced Repetition
The flashcard review system implements a spaced repetition algorithm that adjusts intervals based on how well you recall each card. Cards you struggle with appear more frequently. Cards you know well get pushed further out. The system tracks streaks, mastery percentages, and review history so you can see your progress over time.
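For illustration, an SM-2-style scheduler captures the shape of this behavior: failed recalls reset the schedule, successful recalls grow the interval by a per-card ease factor. This is a sketch of the general technique, not necessarily the project's exact parameters:

```python
from dataclasses import dataclass

@dataclass
class CardState:
    interval_days: float = 1.0
    ease: float = 2.5        # multiplier grown/shrunk by recall quality
    repetitions: int = 0

def review(state: CardState, quality: int) -> CardState:
    """quality: 0 (forgot) .. 5 (perfect recall), SM-2 style."""
    if quality < 3:
        # Failed recall: card comes back tomorrow, ease takes a small hit
        return CardState(interval_days=1.0, ease=max(1.3, state.ease - 0.2))
    ease = max(1.3, state.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    reps = state.repetitions + 1
    if reps == 1:
        interval = 1.0
    elif reps == 2:
        interval = 6.0
    else:
        # Well-known cards get pushed further out each time
        interval = state.interval_days * ease
    return CardState(interval_days=interval, ease=ease, repetitions=reps)
```

The ease factor is what makes the scheduling adaptive: a card you keep rating low drifts toward short intervals even after the reset, while an easy card compounds quickly.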
Admin and Notifications
The admin console handles user management, API key oversight, and system health monitoring. I also built a notification system that supports both email and SMS channels for review reminders, keeping learners on track with their study schedules even when they’re not actively using the platform.
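The multi-channel dispatch can be sketched as a small router that checks which reminders are due and hands each to its channel's sender (the `Reminder` shape and `dispatch_due` helper are hypothetical names for illustration):

```python
import datetime as dt
from dataclasses import dataclass
from typing import Callable

@dataclass
class Reminder:
    user: str
    channel: str              # "email" or "sms"
    due_at: dt.datetime

def dispatch_due(reminders: list[Reminder],
                 senders: dict[str, Callable[[str], None]],
                 now: dt.datetime) -> int:
    # Route each due reminder to its channel's sender; returns count sent
    sent = 0
    for r in reminders:
        if r.due_at <= now:
            senders[r.channel](r.user)
            sent += 1
    return sent
```

Keeping the senders behind plain callables means the email and SMS integrations stay swappable, and tests can inject recording stubs instead of hitting real providers.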
What I Learned
Storage abstraction is worth the upfront cost. Writing three backend adapters felt like overkill initially. But being able to develop with JSON files (no database daemon, instant startup), test with SQLite, and deploy with Supabase made the development cycle dramatically faster. It also means anyone who clones the repo can be productive in under a minute.
Multi-provider AI is simpler than it sounds. I expected juggling two AI SDKs to be a headache. In practice, the adapter pattern makes it straightforward. Each provider has a thin wrapper that normalizes the interface. Adding a third provider would be a single new module, not a rewrite.
Pydantic v2 is transformative for API development. With 74+ endpoints, data validation bugs would have been constant without strong typing. Pydantic catches malformed requests before they reach business logic, and the auto-generated OpenAPI docs mean I never have to manually document the API surface. The TypeScript frontend can even generate its types from the OpenAPI spec.
User-supplied API keys change the economics. Every learning platform I’ve seen either limits AI features behind a paywall or absorbs the cost and eventually raises prices. Letting users bring their own keys removes that tension entirely. The tradeoff is onboarding friction (users need to get an API key), but for the audience that wants AI-powered studying, that bar is low.
Spaced repetition needs real data to tune. The algorithm works, but getting the intervals right required actual usage data. The initial parameters were based on published research, but individual learning patterns vary enough that the adaptive scheduling (adjusting based on per-user recall accuracy) matters more than the starting values.
Open source scaffolding signals intent. Including CONTRIBUTING.md, SECURITY.md, issue templates, and an Apache-2.0 license from the start isn’t just checkbox compliance. It tells potential contributors that the project is ready for collaboration, not just a personal experiment pushed to GitHub.
This write-up was co-authored with AI, based on the author's working sessions and notes.
Explore the source
fakoli/my-ai-study-buddy
Star it, fork it, or open an issue — contributions and feedback welcome.
Related Projects
AWS Security Group Mapper: Visual Analysis Tool for Cloud Security
A Python tool for visualizing AWS security group relationships and generating interactive graphs to help understand complex security architectures.
Fighters Paradise: Modern Game Engine Reimplementation in Rust
A modern Rust reimplementation of the MUGEN 2D fighting game engine with full backward compatibility for existing community content.
Agent-Eval: CI Evaluation Harness for Multi-Agent Development
Behavioral regression testing framework for detecting drift in AI agent instruction files across multi-agent development environments.