Evan Reichard 91d4202874
docs: update AGENTS.md and README.md with accurate project details
- Root AGENTS.md: add build pipeline, Makefile targets, full directory listing
- Backend AGENTS.md: add architecture layout, API routes table, streaming/store
  patterns, missing deps (jsonschema-go, values pkg, types pkg)
- Frontend AGENTS.md: add architecture layout, missing deps (marked, highlight.js),
  Alpine component pattern, build pipeline details
- README.md: add env var config table, Docker/Make workflows, dev setup,
  thinking support, token stats, structured output, llama.cpp timings
2026-04-28 23:27:08 -04:00


Aethera

A web dashboard for AI-powered conversations and image generation, backed by any OpenAI-compatible API.

Features

  • Chat Interface — streaming responses with Markdown rendering and syntax highlighting
  • Thinking Support — displays model reasoning/thinking content when available
  • Multiple Conversations — switch between threads with auto-generated titles
  • Image Generation & Editing — create and edit images with customizable prompts, masks, and seeds
  • Token Statistics — real-time prompt/generation throughput and timing metrics
  • Theme Support — light and dark mode toggle
  • Structured Output — JSON schema-based structured responses from models
  • Embedded Frontend — single binary deployment with assets compiled in

Quick Start

Prerequisites

  • Go 1.25.5+
  • Bun
  • An OpenAI-compatible API endpoint

Using Make

make all              # Build frontend + backend
./backend/dist/aethera

Using Docker

make docker
docker run -p 8080:8080 -v aethera-data:/app/data aethera

Manual Build

# Frontend
cd frontend && bun install && bun run build && cd ..

# Copy assets to backend
mkdir -p backend/web/static
cp -R frontend/public/. backend/web/static/

# Backend
cd backend && go build -o ./dist/aethera ./cmd
./dist/aethera

Open http://localhost:8080 in your browser.

Configuration

Configuration is available via CLI flags and environment variables (prefixed AETHERA_):

Flag           Env Var              Default      Description
--data-dir     AETHERA_DATA_DIR     ./data       Directory for chats, settings, and images
--static-dir   AETHERA_STATIC_DIR   (embedded)   Serve frontend from disk (for development)
--listen       AETHERA_LISTEN       localhost    Listen address
--port         AETHERA_PORT         8080         Listen port

Example:

./backend/dist/aethera --port 3000 --listen 0.0.0.0
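The same configuration can be supplied through environment variables (per the table above; the binary path matches the Make build output):

```shell
AETHERA_LISTEN=0.0.0.0 AETHERA_PORT=3000 ./backend/dist/aethera
```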

Development

A Nix flake is provided for the development environment:

nix develop   # or use direnv with .envrc

This provides Go, Bun, gopls, typescript-language-server, golangci-lint, and watchman.

For hot-reload development:

make dev

This starts the Go backend (serving frontend from disk) and the frontend in watch mode concurrently.

Getting Started

  1. Configure Your API — navigate to Settings and enter your OpenAI-compatible API endpoint URL
  2. Start Chatting — use the Chat interface to begin conversations
  3. Generate Images — visit the Images page to create or edit images
  4. Manage Content — view, delete, and organize conversations and images

Supported AI Services

Aethera works with any OpenAI-compatible API, including:

  • OpenAI
  • Local LLMs (Ollama, llama.cpp, LocalAI, etc.)
  • Any other compatible service

Features specific to llama.cpp, such as per-token timings, are detected automatically.
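For reference, llama.cpp's server exposes this timing data in its completion responses; a direct query looks roughly like the following (the host and port are assumptions for a local llama.cpp instance, not part of Aethera):

```shell
# Query a local llama.cpp server directly. The response JSON includes a
# "timings" object (prompt_per_second, predicted_per_second, ...) — the
# kind of data Aethera surfaces as token statistics.
curl http://localhost:8081/completion -d '{"prompt": "Hello", "n_predict": 16}'
```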

License

See LICENSE file for details.