Second Brain

Architecture & Design

How Second Brain is built, the thinking behind it, and the principles that guide every decision.

Principle 01

Portable Architecture

The system is deliberately layered so that any component can be replaced without cascading changes. This isn't theoretical — every layer has a clear boundary and well-defined interface.

Presentation

Next.js App Router → React Components → Tailwind CSS → Framer Motion

API Layer

Next.js Route Handlers → RESTful endpoints → Input validation

AI Service

Google Gemini 2.5 Flash (swappable) → Centralized in lib/ai.ts → Provider-agnostic interface

Data Layer

Prisma ORM → PostgreSQL → Clean schema with migrations

Swappability in practice: The AI service lives entirely in src/lib/ai.ts. Switching from Gemini to Claude or OpenAI means changing one file. The database layer is abstracted through Prisma — switching from PostgreSQL to another SQL database requires only a connection string change. The frontend components consume typed interfaces, not raw database models, so the data layer and UI are fully decoupled.
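The provider-agnostic interface could be sketched as below. This is an illustrative sketch, not the actual contents of src/lib/ai.ts — the type and function names are assumptions:

```typescript
// Hypothetical shape of a provider-agnostic AI service like src/lib/ai.ts.
// Names (AIProvider, summarize, suggestTags) are illustrative assumptions.
interface AIProvider {
  summarize(text: string): Promise<string>;
  suggestTags(text: string): Promise<string[]>;
}

// Stub standing in for the Gemini-backed implementation.
const geminiProvider: AIProvider = {
  async summarize(text) {
    return text.slice(0, 100); // real version would call the Gemini API
  },
  async suggestTags(_text) {
    return ["placeholder"]; // real version would ask the model for tags
  },
};

// The rest of the app depends only on this object. Swapping Gemini for
// Claude or OpenAI means binding a different object with the same shape.
const ai: AIProvider = geminiProvider;
```

Because callers import only the interface, no component outside this one file knows which vendor is behind it.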

Principle 02

Principles-Based UX

Every AI interaction follows these five design principles. They were defined before writing any code and serve as the decision framework for all UX choices:

Graceful Degradation

AI features are optional enhancements, not requirements. The app works fully without an API key configured. When AI is unavailable, the UI adapts — showing manual tagging options instead of error states.
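A minimal sketch of that check, assuming the key lives in an environment variable (the variable name here is a guess, not the app's actual config):

```typescript
// Graceful degradation: AI is "on" only when a key is configured.
// GEMINI_API_KEY is an assumed variable name for illustration.
function aiEnabled(env: Record<string, string | undefined>): boolean {
  return Boolean(env.GEMINI_API_KEY?.trim());
}

// The UI branches on availability instead of rendering an error state.
function taggingMode(env: Record<string, string | undefined>): "auto" | "manual" {
  return aiEnabled(env) ? "auto" : "manual";
}
```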

Progressive Disclosure

AI results appear inline, not in modal interruptions. Summaries show up in a subtle accent card. Auto-tags merge seamlessly with manual tags. The AI is helpful without being overwhelming.

Transparency

AI-generated content is always labeled. Summary cards show an "AI Summary" badge. Conversational answers include source references so users can verify claims.

Non-Blocking Operations

AI processing never blocks the primary workflow. After creating a knowledge item, it's saved immediately — summarization and tagging happen asynchronously. The user sees a progress indicator but can navigate away.

Human Override

Users can always manually tag, edit, or delete AI-generated content. The AI suggests; the human decides. Auto-generated content only persists because the user chose to keep it — everything remains editable and removable.

Principle 03

Agent Thinking

The system includes automated behaviors that maintain and improve the knowledge base over time, reducing manual upkeep and surfacing connections the user might miss.

Automatic Post-Save Processing

When a new knowledge item is created, the capture form automatically triggers both summarization and auto-tagging in parallel. This runs as a fire-and-forget background operation — the item is saved first, AI enrichment follows.
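In sketch form, the flow looks like this — the helper names are hypothetical, but the ordering (save first, enrich after, never await) is the point:

```typescript
// Fire-and-forget enrichment after save. Helper names are illustrative.
type Item = { id: string; content: string };

async function saveItem(content: string): Promise<Item> {
  return { id: "1", content }; // stand-in for the real Prisma insert
}

async function summarizeInBackground(_item: Item) { /* calls the AI service */ }
async function autoTagInBackground(_item: Item) { /* calls the AI service */ }

async function createKnowledgeItem(content: string): Promise<Item> {
  const item = await saveItem(content); // save first — the user is unblocked here

  // Both AI tasks run in parallel; the result is deliberately not awaited,
  // and allSettled ensures a failed enrichment never rejects the save.
  void Promise.allSettled([
    summarizeInBackground(item),
    autoTagInBackground(item),
  ]);

  return item;
}
```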

Intelligent Tag Merging

When AI generates tags, they're merged with existing manual tags rather than replacing them. Tags are deduplicated and normalized (lowercased, trimmed) automatically. This means the tag taxonomy improves organically over time without user intervention.
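The merge step described above is small enough to sketch in full — this is a plausible implementation of the stated behavior (lowercase, trim, deduplicate, keep manual tags first), not the app's exact code:

```typescript
// Merge AI-suggested tags into existing manual tags: normalize
// (trim + lowercase), drop empties, and deduplicate while preserving
// first-seen order, so manual tags keep precedence.
function mergeTags(manual: string[], suggested: string[]): string[] {
  const normalized = [...manual, ...suggested]
    .map((t) => t.trim().toLowerCase())
    .filter((t) => t.length > 0);
  return [...new Set(normalized)]; // Set preserves insertion order
}
```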

Context-Aware Query Routing

The conversational query system extracts keywords from questions, fetches relevant items via database search, and feeds them as structured context to the AI. Source references are automatically extracted from the response and linked back to the originals. It's a lightweight RAG pipeline that works without vector embeddings.
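The retrieval side of that pipeline might look like the sketch below. The stopword list and overlap scoring are illustrative simplifications of whatever the real keyword search does:

```typescript
// Lightweight keyword-based retrieval (no vector embeddings):
// extract keywords, rank items by keyword overlap, build model context.
type Item = { id: string; title: string; content: string };

const STOPWORDS = new Set(["what", "do", "i", "know", "about", "the", "a", "is"]);

function extractKeywords(question: string): string[] {
  return question
    .toLowerCase()
    .split(/\W+/)
    .filter((w) => w.length > 2 && !STOPWORDS.has(w));
}

function rankItems(items: Item[], keywords: string[]): Item[] {
  const score = (it: Item) =>
    keywords.filter((k) => (it.title + " " + it.content).toLowerCase().includes(k)).length;
  return items
    .map((it) => [score(it), it] as const)
    .filter(([s]) => s > 0)
    .sort((a, b) => b[0] - a[0])
    .map(([, it]) => it);
}

// Matched items become structured context for the model; the [id] prefix
// is what lets source references be extracted from the answer afterwards.
function buildContext(items: Item[]): string {
  return items.map((it) => `[${it.id}] ${it.title}: ${it.content}`).join("\n");
}
```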

Principle 04

Infrastructure Mindset

Second Brain isn't just an app — it's a knowledge API. The system exposes its intelligence through a public endpoint that external systems can consume.

# Public API Endpoint

GET /api/public/brain/query?q=your+question

# Response format

{
  "question": "What do I know about...",
  "answer": "Based on your knowledge...",
  "sources": [
    { "id": "...", "title": "...", "type": "NOTE", "summary": "..." }
  ],
  "timestamp": "2025-02-17T..."
}

The endpoint includes CORS headers for cross-origin access, making it embeddable in any context — a personal website widget, a Slack bot, or a browser extension. The response format is self-documenting with typed source references.
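A handler in that shape could be sketched as follows — the answering logic is stubbed out, and the wide-open `Access-Control-Allow-Origin: *` is an assumption (a real deployment might restrict origins):

```typescript
// Sketch of the public query endpoint in the shape of a Next.js Route
// Handler, using the standard Request/Response web APIs (Node 18+).
const CORS_HEADERS = {
  "Access-Control-Allow-Origin": "*", // assumption: open cross-origin access
  "Access-Control-Allow-Methods": "GET, OPTIONS",
  "Content-Type": "application/json",
};

async function GET(request: Request): Promise<Response> {
  const q = new URL(request.url).searchParams.get("q");
  if (!q) {
    return new Response(JSON.stringify({ error: "missing q parameter" }), {
      status: 400,
      headers: CORS_HEADERS,
    });
  }
  const body = {
    question: q,
    answer: "stubbed answer", // real handler runs the query pipeline here
    sources: [] as { id: string; title: string; type: string; summary: string }[],
    timestamp: new Date().toISOString(),
  };
  return new Response(JSON.stringify(body), { status: 200, headers: CORS_HEADERS });
}
```

Because the CORS headers ride on every response, the same handler serves a browser widget, a bot, or an extension without extra configuration.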

Embeddable use cases: A portfolio site could iframe a query widget that lets visitors ask questions about your expertise. A Chrome extension could query your brain while browsing. A team dashboard could aggregate knowledge across multiple brains.