# Aethera

A web dashboard for AI-powered conversations and image generation, backed by any OpenAI-compatible API.

## Features

- **Chat Interface** — streaming responses with Markdown rendering and syntax highlighting
- **Thinking Support** — displays model reasoning/thinking content when available
- **Multiple Conversations** — switch between threads with auto-generated titles
- **Image Generation & Editing** — create and edit images with customizable prompts, masks, and seeds
- **Token Statistics** — real-time prompt/generation throughput and timing metrics
- **Theme Support** — light and dark mode toggle
- **Structured Output** — JSON schema-based structured responses from models
- **Embedded Frontend** — single binary deployment with assets compiled in

## Quick Start

### Prerequisites

- Go 1.25.5+
- Bun
- An OpenAI-compatible API endpoint

|
### Using Make

```bash
make all                # Build frontend + backend
./backend/dist/aethera
```

### Using Docker

```bash
make docker
docker run -p 8080:8080 \
  -e AETHERA_LLM_ENDPOINT=https://api.example.com/v1 \
  -e AETHERA_LLM_KEY=your-key \
  -v aethera-data:/app/data aethera
```

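Once the container is running, a quick smoke test against the published port (assuming the default `8080:8080` mapping above) confirms the dashboard is reachable:

```bash
# Expect an HTTP 2xx from the dashboard; -f makes curl fail on error statuses.
curl -fsS http://localhost:8080/ >/dev/null && echo "Aethera is up"
```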
### Manual Build

```bash
# Frontend
cd frontend && bun install && bun run build && cd ..

# Copy assets to backend
mkdir -p backend/web/static
cp -R frontend/public/. backend/web/static/

# Backend
cd backend && go build -o ./dist/aethera ./cmd
./dist/aethera
```

Open `http://localhost:8080` in your browser.

## Configuration

Configuration is available via CLI flags and environment variables (prefixed `AETHERA_`):

| Flag             | Env Var                | Default      | Description                                |
|------------------|------------------------|--------------|--------------------------------------------|
| `--llm-endpoint` | `AETHERA_LLM_ENDPOINT` | *(required)* | OpenAI-compatible API endpoint URL         |
| `--llm-key`      | `AETHERA_LLM_KEY`      |              | API key for authentication                 |
| `--data-dir`     | `AETHERA_DATA_DIR`     | `./data`     | Directory for chats, settings, and images  |
| `--static-dir`   | `AETHERA_STATIC_DIR`   | *(embedded)* | Serve frontend from disk (for development) |
| `--listen`       | `AETHERA_LISTEN`       | `localhost`  | Listen address                             |
| `--port`         | `AETHERA_PORT`         | `8080`       | Listen port                                |

Example:

```bash
AETHERA_LLM_ENDPOINT=https://api.example.com/v1 AETHERA_LLM_KEY=your-key ./backend/dist/aethera
```

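The same configuration can be passed as CLI flags instead of environment variables (values below are placeholders):

```bash
./backend/dist/aethera \
  --llm-endpoint https://api.example.com/v1 \
  --llm-key your-key \
  --data-dir ./data \
  --port 8080
```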
## Development

A Nix flake is provided for the development environment:

```bash
nix develop   # or use direnv with .envrc
```

This provides Go, Bun, `gopls`, `typescript-language-server`, `golangci-lint`, and `watchman`.

For hot-reload development:

```bash
make dev
```

This starts the Go backend (serving the frontend from disk) and the frontend in watch mode concurrently.

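The same loop can be run by hand; a rough sketch, assuming the frontend's watch script name and the `frontend/public` output directory (the actual `make dev` recipe may differ):

```bash
#!/usr/bin/env bash
# Start the frontend watcher in the background; Ctrl-C stops both processes.
(cd frontend && bun run watch) &   # "watch" script name is an assumption
WATCH_PID=$!
trap 'kill "$WATCH_PID"' EXIT

# Serve the frontend from disk instead of the embedded assets.
cd backend && go run ./cmd --static-dir ../frontend/public
```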
## Getting Started

1. **Configure Your API** — set the `AETHERA_LLM_ENDPOINT` and, optionally, `AETHERA_LLM_KEY` environment variables
2. **Start the Server** — run the binary and navigate to `http://localhost:8080`
3. **Configure Model Selectors** — open Settings to choose the models used for chat and image generation

## Supported AI Services

Aethera works with any OpenAI-compatible API, including:

- OpenAI
- Local LLMs (Ollama, llama.cpp, LocalAI, etc.)
- Any other compatible service

Llama.cpp-specific features like per-token timings are automatically detected.

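To check whether a service exposes a compatible API before pointing Aethera at it, listing its models is a quick probe (endpoint and key below are placeholders):

```bash
# An OpenAI-compatible server answers GET /v1/models with a JSON list of models.
curl -fsS -H "Authorization: Bearer your-key" https://api.example.com/v1/models
```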
## License

See the LICENSE file for details.