Stack 4: React (Next.js) + Python Backend (FastAPI + SQLAlchemy)
Stack 4 combines the best of both worlds: React + Next.js (our team's core expertise for fastest frontend delivery) with Python + FastAPI (strongest ML/AI, data science, and quantum computing integration). This stack offers the highest long-term strategic value for innovation, experimentation, ML/AI-powered features, and quantum computing work, but requires the highest initial learning investment from TwinSpires engineers.
Full example repo: https://github.com/MillionOnMars/tech-stack-comparison/tree/main/stack4-react-python
Best use case: Ideal for quantum computing work, ML/AI projects, experimentation platforms, and innovation labs where the combination of React's ecosystem and Python's scientific computing capabilities is essential.
Note: This stack is not recommended for the initial BrisNet rebuild (Stack 1 or Stack 3 would be faster). Instead, Stack 4 is best suited for separate quantum computing and ML/AI initiatives.
1. Overview
- Frontend: Next.js (React + TypeScript), static export to S3/CloudFront
- Backend: FastAPI (Python)
- Database: PostgreSQL (or MySQL) via SQLAlchemy + Alembic
- Infra (example):
  - Frontend: S3 + CloudFront (static hosting)
  - Backend: Lambda + API Gateway (Serverless Framework) or containerized on ECS/Kubernetes
- CI/CD: GitHub Actions (separate pipelines for frontend and backend)
This stack combines a clean FE/BE separation with React's massive ecosystem on the frontend and Python's ML/AI ecosystem on the backend, making it the most flexible platform for innovation and experimentation.
2. Architecture
At a high level:
- Next.js app is built as static files and deployed to S3/CloudFront (no server-side rendering needed for initial load).
- FastAPI runs in a serverless function or container, exposing JSON endpoints with automatic OpenAPI documentation.
- SQLAlchemy manages database access with Alembic for migrations.
- Python ecosystem enables deep integration with ML/AI services, quantum computing frameworks (Qiskit, Cirq, PennyLane), data science tools, and research libraries.
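To make the wiring concrete, here is a minimal sketch of how the FastAPI application could be assembled and exposed to Lambda. The module name app/main.py, the /api mount prefix, and the use of Mangum as the ASGI adapter are assumptions for illustration; the example repo may wire this differently.

# backend/app/main.py (illustrative sketch, not necessarily the repo's exact wiring)
from fastapi import FastAPI

from .routes import todos

app = FastAPI(title="Stack 4 example API")

# Mounting routers under /api is assumed here; it matches the frontend's
# `${API_URL}/api/todos` calls shown later on this page.
app.include_router(todos.router, prefix="/api")

# For Lambda + API Gateway, an ASGI adapter such as Mangum can wrap the app;
# on ECS/Kubernetes the app would instead run directly under uvicorn.
try:
    from mangum import Mangum

    handler = Mangum(app)
except ImportError:  # Mangum is only needed for the Lambda deployment path
    handler = None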
3. Sample repo structure
Using the example repo in this project:
stack4-react-python/
  frontend/                # Next.js app
    src/
      app/
        components/
      services/
    pages/
    public/
  backend/                 # FastAPI + SQLAlchemy
    app/
      routes/
      services/
      repositories/
      models/
      schemas/
    alembic/               # Database migrations
      versions/
    tests/
  tech-stack-docs/         # Docusaurus documentation (this site)
Key points:
- components/: React components for UI.
- services/ (frontend): API service layer (fetch calls to the backend).
- routes/: FastAPI route handlers (APIRouter), thin controllers that map HTTP → service calls.
- services/ (backend): Business logic, orchestrating repositories and external APIs.
- repositories/: Data access layer using SQLAlchemy, isolated from services.
- models/: SQLAlchemy ORM models.
- schemas/: Pydantic schemas for request/response validation and OpenAPI generation.
- alembic/: Database migrations.
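The vertical slice in the next section keeps the service talking to SQLAlchemy directly for brevity. In a larger feature, data access would sit in repositories/; a minimal sketch of that layer is shown below (file, class, and method names are illustrative, not taken from the repo).

# backend/app/repositories/todo_repository.py (illustrative sketch)
from sqlalchemy import select
from sqlalchemy.ext.asyncio import AsyncSession

from ..models.todo import Todo


class TodoRepository:
    """Data access for Todo rows, kept separate from business logic."""

    def __init__(self, db: AsyncSession):
        self.db = db

    async def list(self) -> list[Todo]:
        result = await self.db.execute(select(Todo))
        return list(result.scalars().all())

    async def add(self, todo: Todo) -> Todo:
        self.db.add(todo)
        await self.db.commit()
        await self.db.refresh(todo)
        return todo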
4. Example vertical slice
This slice shows a simple "Todos" list: Next.js calls a REST endpoint implemented in FastAPI, which reads from the DB via SQLAlchemy.
Backend – FastAPI route
# backend/app/routes/todos.py
from fastapi import APIRouter, Depends

from ..schemas.todos import TodoOut, TodoCreate
from ..services.todo_service import TodoService
from ..dependencies import get_todo_service

router = APIRouter(prefix="/todos", tags=["todos"])


@router.get("/", response_model=list[TodoOut])
async def list_todos(service: TodoService = Depends(get_todo_service)):
    return await service.list_todos()


@router.post("/", response_model=TodoOut, status_code=201)
async def create_todo(
    todo: TodoCreate,
    service: TodoService = Depends(get_todo_service),
):
    return await service.create_todo(todo)
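The route above resolves its service through FastAPI's dependency injection (Depends(get_todo_service)). A sketch of what backend/app/dependencies.py might contain follows; the DATABASE_URL environment variable and the SQLite fallback are assumptions for illustration, not the repo's actual configuration.

# backend/app/dependencies.py (sketch of the wiring assumed by the route above)
import os
from collections.abc import AsyncIterator

from fastapi import Depends
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine

from .services.todo_service import TodoService

# Async engine and session factory; the env var name and SQLite fallback are illustrative.
engine = create_async_engine(os.environ.get("DATABASE_URL", "sqlite+aiosqlite:///./dev.db"))
SessionLocal = async_sessionmaker(engine, expire_on_commit=False)


async def get_db() -> AsyncIterator[AsyncSession]:
    # One session per request, closed automatically when the request ends.
    async with SessionLocal() as session:
        yield session


def get_todo_service(db: AsyncSession = Depends(get_db)) -> TodoService:
    return TodoService(db)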
Backend – Service + SQLAlchemy
# backend/app/services/todo_service.py
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select

from ..models.todo import Todo
from ..schemas.todos import TodoCreate, TodoOut


class TodoService:
    def __init__(self, db: AsyncSession):
        self.db = db

    async def list_todos(self) -> list[TodoOut]:
        result = await self.db.execute(select(Todo))
        todos = result.scalars().all()
        return [TodoOut.model_validate(todo) for todo in todos]

    async def create_todo(self, todo: TodoCreate) -> TodoOut:
        db_todo = Todo(**todo.model_dump())
        self.db.add(db_todo)
        await self.db.commit()
        await self.db.refresh(db_todo)
        return TodoOut.model_validate(db_todo)
Backend – Pydantic schemas
# backend/app/schemas/todos.py
from pydantic import BaseModel, ConfigDict


class TodoBase(BaseModel):
    title: str
    completed: bool = False


class TodoCreate(TodoBase):
    pass


class TodoOut(TodoBase):
    # Pydantic v2: allow validation directly from SQLAlchemy ORM objects.
    model_config = ConfigDict(from_attributes=True)

    id: int
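For completeness, the SQLAlchemy model the service imports might look like the following sketch (SQLAlchemy 2.0 declarative style). The table name and column details are inferred from the schemas above rather than copied from the repo.

# backend/app/models/todo.py (illustrative sketch)
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column


class Base(DeclarativeBase):
    pass


class Todo(Base):
    __tablename__ = "todos"

    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str]
    completed: Mapped[bool] = mapped_column(default=False)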
Frontend – Next.js service
// frontend/src/services/todos.ts
const API_URL = process.env.NEXT_PUBLIC_API_URL || "http://localhost:8000";

export interface Todo {
  id: number;
  title: string;
  completed: boolean;
}

export async function getTodos(): Promise<Todo[]> {
  const res = await fetch(`${API_URL}/api/todos`);
  if (!res.ok) throw new Error("Failed to fetch todos");
  return res.json();
}

export async function createTodo(todo: {
  title: string;
  completed?: boolean;
}): Promise<Todo> {
  const res = await fetch(`${API_URL}/api/todos`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(todo),
  });
  if (!res.ok) throw new Error("Failed to create todo");
  return res.json();
}
Frontend – React component
// frontend/src/app/components/TodoList.tsx
"use client";

import { useEffect, useState } from "react";
import { getTodos, createTodo, Todo } from "../../services/todos";

export default function TodoList() {
  const [todos, setTodos] = useState<Todo[]>([]);

  // Load existing todos once on mount.
  useEffect(() => {
    getTodos().then(setTodos);
  }, []);

  // Called from a form/input (omitted here for brevity) to add a new todo.
  const handleCreate = async (title: string) => {
    const newTodo = await createTodo({ title });
    setTodos([...todos, newTodo]);
  };

  return (
    <ul>
      {todos.map((todo) => (
        <li key={todo.id}>{todo.title}</li>
      ))}
    </ul>
  );
}
Type generation from OpenAPI
FastAPI automatically generates an OpenAPI/Swagger schema. TypeScript types can be generated from it:
# Generate TypeScript types from FastAPI OpenAPI schema
npx openapi-typescript http://localhost:8000/openapi.json -o src/app/models/api-types.ts
This pattern scales naturally; each vertical slice adds:
- One or more React components
- One frontend service method
- One FastAPI route (and perhaps service + repository methods)
- Pydantic schemas for validation
- Potential schema and migration changes in Alembic
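As an example of the last point, the migration for the todos table might look roughly like the sketch below. In practice it would be produced by `alembic revision --autogenerate`; the revision identifier here is a placeholder and the column details follow the model sketch above.

# backend/alembic/versions/0001_create_todos.py (illustrative sketch)
import sqlalchemy as sa
from alembic import op

revision = "0001"        # placeholder revision id
down_revision = None


def upgrade() -> None:
    op.create_table(
        "todos",
        sa.Column("id", sa.Integer(), primary_key=True),
        sa.Column("title", sa.String(), nullable=False),
        sa.Column("completed", sa.Boolean(), nullable=False, server_default=sa.false()),
    )


def downgrade() -> None:
    op.drop_table("todos")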
5. CI/CD outline (example with GitHub Actions)
Separate pipelines can be maintained for frontend and backend while still treating each feature as a vertical slice.
Frontend (Next.js)
- On pull request:
  - npm ci
  - npm run lint
  - npm test
  - npm run build (static export)
  - Deploy a preview environment (e.g., temporary S3/CloudFront distro).
- On merge to main:
  - Build production static assets.
  - Upload to S3.
  - Invalidate CloudFront cache.
Backend (FastAPI + SQLAlchemy)
- On pull request:
  - Set up the Python environment (python -m venv venv, pip install -r requirements.txt).
  - Run linting (ruff, mypy).
  - Run tests (pytest); see the example test after this list.
  - Build and run integration tests (optionally with a disposable database).
- On merge to main:
  - Run Alembic migrations against a staging database.
  - Build and publish a Docker image or Lambda package.
  - Deploy via Serverless Framework or IaC (CloudFormation/Terraform).
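To illustrate the pytest step, a route-level test can swap the real service for an in-memory fake via FastAPI's dependency overrides. The app and dependency module names below follow the sketches earlier on this page and are assumptions, not the repo's exact layout.

# backend/tests/test_todos.py (illustrative sketch)
from fastapi.testclient import TestClient

from app.dependencies import get_todo_service
from app.main import app
from app.schemas.todos import TodoCreate, TodoOut


class FakeTodoService:
    """In-memory stand-in so the route can be tested without a database."""

    def __init__(self) -> None:
        self.todos: list[TodoOut] = []

    async def list_todos(self) -> list[TodoOut]:
        return self.todos

    async def create_todo(self, todo: TodoCreate) -> TodoOut:
        created = TodoOut(id=len(self.todos) + 1, **todo.model_dump())
        self.todos.append(created)
        return created


def test_create_then_list_todos():
    fake = FakeTodoService()
    app.dependency_overrides[get_todo_service] = lambda: fake
    client = TestClient(app)

    created = client.post("/api/todos/", json={"title": "write docs"})
    assert created.status_code == 201
    assert created.json()["title"] == "write docs"

    listed = client.get("/api/todos/")
    assert listed.status_code == 200
    assert len(listed.json()) == 1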
Type generation step (optional but recommended):
- After backend deployment, generate TypeScript types from OpenAPI schema.
- Commit generated types to frontend repo or use in CI.
Vertical slice flow
- A feature branch can update both frontend/ and backend/.
- CI validates both sides together.
- Once merged, both pipelines deploy, resulting in an end-to-end slice being live.
6. Pros
- Fastest frontend development for our team:
- React + Next.js is our core expertise; we can build the frontend at maximum velocity.
- No learning curve on the frontend—we're already operating at full speed.
- Leverages the massive React + TypeScript ecosystem for faster feature development.
- Strongest ML/AI, data science, and quantum computing integration:
- Python is the dominant language for ML, data science, AI, and quantum computing.
- Easy to share libraries and tooling between product backend, data/ML teams, and quantum computing research.
- Can integrate ML models and quantum algorithms directly into the backend service.
- Access to Python's rich ecosystem:
- ML/AI: pandas, numpy, scikit-learn, PyTorch, TensorFlow
- Quantum computing: Qiskit, Cirq, PennyLane, Qiskit Nature
- Scientific computing: SciPy, SymPy, NumPy
- Best "experimentation platform":
- Combines React's flexibility for rapid UI iteration with Python's power for data/ML and quantum computing features.
- Ideal for building innovative features, A/B tests, research-heavy work, and quantum computing applications.
- Can quickly prototype and iterate on ML-powered features and quantum algorithms.
- Python is the clear winner for quantum computing: all major quantum computing frameworks (Qiskit, Cirq, PennyLane) are Python-based.
- Modern, typed Python backend:
- FastAPI + Pydantic offer strong typing, validation, and auto-generated OpenAPI docs.
- Async/await support for high performance.
- Excellent developer experience with automatic API documentation.
- Future-proof for innovation:
- Best positioned for quantum computing work: Python is the standard language for quantum computing frameworks and research.
- Ideal for advanced ML, cutting-edge AI features, and quantum algorithm development.
- Can leverage both React's ecosystem and Python's scientific computing libraries.
- Flexible platform for experimentation, innovation, and quantum computing research.
7. Cons & risks
- Highest learning curve for TwinSpires:
- Requires learning both React/Next.js (frontend) and Python/FastAPI (backend).
- Two languages and two new stacks for much of the team.
- Both frontend framework and backend language/runtime are new.
- Two-language stack:
- TypeScript on the frontend, Python on the backend.
- Requires clear API contracts and documentation; no shared types out of the box (must generate from OpenAPI).
- Different tooling, linters, and test frameworks across FE and BE.
- Build speed for delivery team:
- High on frontend (React is our expertise), but Medium on backend (Python learning curve).
- Overall: High frontend speed, but backend adds complexity compared to Node.js.
- Type generation overhead:
- Need to generate TypeScript types from OpenAPI schema (extra step in CI/CD).
- Types may be out of sync if OpenAPI schema changes but types aren't regenerated.
- Highest short-term delivery risk:
- More learning and coordination required up front.
- Slower initial delivery compared to stacks where our team has full-stack expertise.
- Operational complexity:
- Need to standardize observability, deployment, and security across both Next.js and Python services.
- Different debugging and profiling tools for frontend (React DevTools) and backend (Python debuggers).
8. Fit for BrisNet
How Stack 4 rates for the BrisNet rebuild:
- Build speed for our team (React/Node specialists):
- Frontend: Highest (React + Next.js is our core expertise).
- Backend: Medium (Python adds a learning step, but FastAPI is modern and well-documented).
- Overall: High frontend speed, but backend learning curve reduces overall velocity compared to Stack 3.
- Comfort for TwinSpires engineering (Angular/Java specialists):
- Frontend: Low initially (requires Angular → React shift), but AI-assisted development can help.
- Backend: Low/Medium (Python is a new language/runtime, though Java experience helps with OOP patterns).
- Overall: Highest learning investment required from TwinSpires.
- Strategic priorities:
- Quantum computing work is a strategic priority (Python is the clear winner here).
- ML/AI and data science integration is a core strategic priority.
- Building an experimentation platform or innovation lab.
- Planning quantum computing applications, advanced ML, or research-heavy features.
- Want to leverage both React's ecosystem and Python's scientific computing ecosystem.
- Long-term vision:
- Planning to build data-driven features, ML-powered recommendations, or AI-enhanced functionality.
- Want to share code/libraries with data science teams.
- Prioritize innovation and experimentation over immediate delivery speed.
When Stack 4 makes sense:
- Quantum computing work is planned (Python is the standard for quantum computing frameworks).
- Building an experimentation platform or innovation lab (separate from main BrisNet rebuild).
- ML/AI integration is a core requirement, not just a future possibility.
- Willing to invest in both React and Python learning for TwinSpires engineers.
- Planning research-heavy work, quantum algorithms, or scientific computing that benefits from Python's ecosystem.
- Want the most flexible platform for future innovation, quantum computing, and ML/AI work.
Trade-off:
Stack 4 offers the highest long-term strategic value (React ecosystem + Python scientific computing ecosystem) but requires the highest initial learning investment. It's best suited for:
- Quantum computing work (Python is the clear winner: all major quantum frameworks are Python-based).
- Separate innovation/experimentation initiatives (not blocking the main BrisNet rebuild).
- Future projects where ML/AI or quantum computing is central to the product.
- Research-heavy work that needs both React's UI flexibility and Python's scientific computing capabilities.
Not recommended for:
- Initial BrisNet rebuild if speed is the top priority (Stack 3 would be faster).
- Teams unwilling to invest in both React and Python learning.
- Simple CRUD applications without ML/AI requirements.
In other words: Stack 4 is the best choice for quantum computing work and as a long-term innovation stack, combining React's ecosystem with Python's scientific computing capabilities (ML/AI, quantum computing, data science). However, it has the highest learning curve and is not recommended for the initial BrisNet rebuild (Stack 1 or Stack 3 would be faster). Consider Stack 4 for quantum computing initiatives, ML/AI projects, and future innovation work where Python's scientific computing ecosystem is essential.