What I Learned Building the Stock Intelligence Dashboard
I recently built the Stock Intelligence Dashboard, a tool for tracking individual equities and relevant regulatory documents from the Federal Register, using Cursor as my coding assistant. The project taught me a lot about deployment architecture, monorepo management, and the nuances of running a full-stack application across multiple cloud platforms.
Why a Monorepo
I chose to structure the project as a monorepo, with backend/ (FastAPI) and frontend/ (Next.js) in a single repository. This keeps API contracts, shared environment examples, and deployment documentation in one place: one clone gets you everything, and a single PR can touch both sides. When building with coding agents, a single tree also means less setup friction, since the assistant can see both directories together without juggling two repos or cross-repo PRs. You can always split repos later if team or process needs demand it.
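The layout is straightforward (the file names below are illustrative, not an exact listing of the project):

```
stock-intelligence-dashboard/
├── backend/            # FastAPI service: API routes, pipeline jobs
│   ├── main.py
│   └── requirements.txt
├── frontend/           # Next.js app: dashboard UI
│   ├── package.json
│   └── ...
├── .env.example        # shared environment-variable reference
└── README.md           # deployment docs for both halves
```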
Deployment Architecture: Railway + Vercel
The goal was to host the app in production so users hit HTTPS URLs with the API running on a server, not my local machine. I chose Railway for the FastAPI backend and Vercel for the Next.js frontend. Railway is a good fit for a long-running FastAPI service with a predictable HTTP port and optional background work. Vercel is a good fit for Next.js (routing/SSR) and makes frontend deploys, previews, and rollbacks very easy.
Trying to force a "single provider" would either make the backend experience worse (shoehorning a long-running API into a frontend-first host) or make the frontend experience worse (losing Vercel's Next.js-native workflow). Sometimes the right answer is using the best tool for each job.
Monorepo Configuration Challenges
Monorepos introduce their own complexity. Railway's service root and Vercel's root directory are separate settings that must each point at the right subdirectory, and config-as-code paths relative to those roots can be non-obvious. I also learned that a Vercel project is tied to one Git repo, so if you end up with duplicate projects, check the Deployments tab to see which project actually holds the correct GitHub connection. And the GitHub App's repository access must include the repo you want to connect; otherwise it won't appear in the picker.
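Concretely, the settings that mattered in my setup looked roughly like this. Both are configured in the respective dashboards and are relative to the repo root; the start command shown is a typical uvicorn invocation, not necessarily the project's exact one:

```
Railway service   Root Directory:   backend/
                  Start Command:    uvicorn main:app --host 0.0.0.0 --port $PORT
Vercel project    Root Directory:   frontend/
                  Framework Preset: Next.js
```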
The Federal Register Pipeline
The data pipeline has two distinct phases: ingest and enrich. Ingest pulls raw Federal Register documents. Enrich adds summaries, severity ratings, and tags using Claude via the Anthropic API. Keeping these separate made it easier to reason about costs, rate limits, retries, and when to run scheduled jobs.
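The separation is easy to sketch. The function names and data shapes below are my illustration rather than the project's actual code; a real ingest would call the Federal Register API, and a real enrich would call the Anthropic API:

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Callable, Iterable

@dataclass
class Document:
    doc_id: str
    title: str
    raw_text: str
    summary: str | None = None      # filled in by enrich
    severity: str | None = None     # filled in by enrich
    tags: list[str] = field(default_factory=list)

def ingest(fetch: Callable[[], Iterable[dict]]) -> list[Document]:
    """Phase 1: pull raw documents. No LLM calls, so it is cheap to retry."""
    return [
        Document(
            doc_id=item["document_number"],
            title=item["title"],
            raw_text=item.get("abstract", ""),
        )
        for item in fetch()
    ]

def enrich(docs: list[Document], summarize: Callable[[str], dict]) -> list[Document]:
    """Phase 2: add summaries, severity, and tags via the LLM.
    Rate limiting and retry/backoff belong here, isolated from ingest."""
    for doc in docs:
        result = summarize(doc.raw_text)
        doc.summary = result["summary"]
        doc.severity = result["severity"]
        doc.tags = result["tags"]
    return docs
```

Because the phases only touch through the `Document` shape, ingest can run on a frequent schedule while enrich runs in batches, which is where the cost and rate-limit reasoning pays off.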
Persistence with SQLite
I used SQLite for data storage, which works out of the box locally; in production, a persisted volume is recommended. One hard-won lesson: without a mounted volume and an absolute DATABASE_URL path, the SQLite file lands in the container's ephemeral filesystem and resets on every redeploy.
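A small guard makes the failure mode explicit. This helper is my own sketch, assuming a volume mounted at `/data` and a `sqlite:///`-style URL; it is not from the project's codebase:

```python
import os

def sqlite_path(database_url: str) -> str:
    """Resolve a sqlite:/// URL to a filesystem path, insisting on an absolute one."""
    prefix = "sqlite:///"
    if not database_url.startswith(prefix):
        raise ValueError("expected a sqlite:/// URL")
    path = database_url[len(prefix):]
    if not os.path.isabs(path):
        # A relative path resolves inside the container's working directory,
        # which is wiped on every redeploy -- hence the "vanishing" database.
        raise ValueError(f"relative SQLite path {path!r} will not survive a redeploy")
    return path

# With a volume mounted at /data, the env var would be:
#   DATABASE_URL=sqlite:////data/app.db   (four slashes: scheme + absolute path)
```

Failing fast at startup beats discovering an empty database after the next deploy.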
What I'd Improve Next
Looking ahead, there's room to improve reliability, observability (logs, metrics, traces), security (token rotation, least privilege, secrets hygiene), and UX (empty states, progress indicators, error messages). Each of these would make the system more production-ready and easier to maintain over time.