
Aethera

A web dashboard for AI-powered conversations and image generation, backed by any OpenAI-compatible API.

Features

  • Chat Interface — streaming responses with Markdown rendering and syntax highlighting
  • Thinking Support — displays model reasoning/thinking content when available
  • Multiple Conversations — switch between threads with auto-generated titles
  • Image Generation & Editing — create and edit images with customizable prompts, masks, and seeds
  • Token Statistics — real-time prompt/generation throughput and timing metrics
  • Theme Support — light and dark mode toggle
  • Structured Output — JSON schema-based structured responses from models
  • Embedded Frontend — single binary deployment with assets compiled in

Quick Start

Prerequisites

  • Go 1.25.5+
  • Bun
  • An OpenAI-compatible API endpoint

Using Make

make all              # Build frontend + backend
./backend/dist/aethera

Using Docker

make docker
docker run -p 8080:8080 -v aethera-data:/app/data aethera

Manual Build

# Frontend
cd frontend && bun install && bun run build && cd ..

# Copy assets to backend
mkdir -p backend/web/static
cp -R frontend/public/. backend/web/static/

# Backend
cd backend && go build -o ./dist/aethera ./cmd
./dist/aethera

Open http://localhost:8080 in your browser.

Configuration

Aethera can be configured with CLI flags or environment variables (prefixed with AETHERA_):

Flag           Env Var              Default      Description
--data-dir     AETHERA_DATA_DIR     ./data       Directory for chats, settings, and images
--static-dir   AETHERA_STATIC_DIR   (embedded)   Serve frontend from disk (for development)
--listen       AETHERA_LISTEN       localhost    Listen address
--port         AETHERA_PORT         8080         Listen port

Example:

./backend/dist/aethera --port 3000 --listen 0.0.0.0

Development

A Nix flake is provided for the development environment:

nix develop   # or use direnv with .envrc

This provides Go, Bun, gopls, typescript-language-server, golangci-lint, and watchman.

For hot-reload development:

make dev

This starts the Go backend (serving frontend from disk) and the frontend in watch mode concurrently.

Getting Started

  1. Configure Your API — navigate to Settings and enter your OpenAI-compatible API endpoint URL
  2. Start Chatting — use the Chat interface to begin conversations
  3. Generate Images — visit the Images page to create or edit images
  4. Manage Content — view, delete, and organize conversations and images

Supported AI Services

Aethera works with any OpenAI-compatible API, including:

  • OpenAI
  • Local LLMs (Ollama, llama.cpp, LocalAI, etc.)
  • Any other compatible service

Features specific to llama.cpp, such as per-token timings, are detected automatically.
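Automatic detection can be as simple as probing for the extra data llama.cpp includes in its responses: its OpenAI-compatible server adds a "timings" object that other backends omit. A minimal sketch of that check, assuming a timings field with a predicted_per_second rate (the struct is illustrative, not Aethera's actual types):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// chatChunk models just enough of a response to detect llama.cpp's
// extra "timings" object; other backends leave Timings nil.
type chatChunk struct {
	Timings *struct {
		PredictedPerSecond float64 `json:"predicted_per_second"`
	} `json:"timings"`
}

// hasTimings reports whether a raw response chunk carries the
// llama.cpp-specific timings object.
func hasTimings(raw []byte) bool {
	var c chatChunk
	if err := json.Unmarshal(raw, &c); err != nil {
		return false
	}
	return c.Timings != nil
}

func main() {
	llamaCpp := []byte(`{"timings": {"predicted_per_second": 42.5}}`)
	other := []byte(`{}`)
	fmt.Println(hasTimings(llamaCpp), hasTimings(other)) // prints "true false"
}
```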

License

See LICENSE file for details.
