docs: update AGENTS.md and README.md with accurate project details
All checks were successful
continuous-integration/drone/push Build is passing
- Root AGENTS.md: add build pipeline, Makefile targets, full directory listing
- Backend AGENTS.md: add architecture layout, API routes table, streaming/store patterns, missing deps (jsonschema-go, values pkg, types pkg)
- Frontend AGENTS.md: add architecture layout, missing deps (marked, highlight.js), Alpine component pattern, build pipeline details
- README.md: add env var config table, Docker/Make workflows, dev setup, thinking support, token stats, structured output, llama.cpp timings
# Aethera

A web dashboard for AI-powered conversations and image generation, backed by any OpenAI-compatible API.

## Features

- **Chat Interface** — streaming responses with Markdown rendering and syntax highlighting
- **Thinking Support** — displays model reasoning/thinking content when available
- **Multiple Conversations** — switch between threads with auto-generated titles
- **Image Generation & Editing** — create and edit images with customizable prompts, masks, and seeds
- **Token Statistics** — real-time prompt/generation throughput and timing metrics
- **Theme Support** — light and dark mode toggle
- **Structured Output** — JSON schema-based structured responses from models
- **Embedded Frontend** — single binary deployment with assets compiled in

## Quick Start
### Prerequisites

- Go 1.25.5+
- Bun
- An OpenAI-compatible API endpoint (OpenAI, local LLM, etc.)

### Using Make

```bash
git clone <repository-url>
cd aethera
make all   # Build frontend + backend
./backend/dist/aethera
```

### Using Docker

```bash
make docker
docker run -p 8080:8080 -v aethera-data:/app/data aethera
```
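
If the published port is not reachable, the server inside the container may be bound to `localhost` only (the default listen address). The `AETHERA_LISTEN` variable from the Configuration section can be passed to bind all interfaces; whether the image already sets this is an assumption worth verifying:

```bash
docker run -p 8080:8080 \
  -e AETHERA_LISTEN=0.0.0.0 \
  -v aethera-data:/app/data \
  aethera
```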

### Manual Build

```bash
# Frontend
cd frontend && bun install && bun run build && cd ..

# Copy assets to backend
mkdir -p backend/web/static
cp -R frontend/public/. backend/web/static/

# Backend
cd backend && go build -o ./dist/aethera ./cmd
./dist/aethera
```

Open `http://localhost:8080` in your browser.

## Configuration

Configuration is available via CLI flags and environment variables (prefixed `AETHERA_`):

| Flag           | Env Var              | Default      | Description                                |
|----------------|----------------------|--------------|--------------------------------------------|
| `--data-dir`   | `AETHERA_DATA_DIR`   | `./data`     | Directory for chats, settings, and images  |
| `--static-dir` | `AETHERA_STATIC_DIR` | *(embedded)* | Serve frontend from disk (for development) |
| `--listen`     | `AETHERA_LISTEN`     | `localhost`  | Listen address                             |
| `--port`       | `AETHERA_PORT`       | `8080`       | Listen port                                |

Example:

```bash
./backend/dist/aethera --port 3000 --listen 0.0.0.0
```

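The same settings can be supplied through the environment instead of flags, per the table above (the binary path assumes the Make build shown in Quick Start):

```bash
AETHERA_LISTEN=0.0.0.0 AETHERA_PORT=3000 ./backend/dist/aethera
```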
## Development

A Nix flake is provided for the development environment:

```bash
nix develop   # or use direnv with .envrc
```

This provides Go, Bun, `gopls`, `typescript-language-server`, `golangci-lint`, and `watchman`.

For hot-reload development:

```bash
make dev
```

This starts the Go backend (serving frontend from disk) and the frontend in watch mode concurrently.

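During frontend iteration the backend can also be run by hand against the on-disk assets via `--static-dir`, so asset changes need no Go rebuild. The asset path here is an assumption based on the Manual Build steps, not confirmed by the Makefile:

```bash
cd backend && go run ./cmd --static-dir ../frontend/public
```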
## Getting Started

1. **Configure Your API** — navigate to Settings and enter your OpenAI-compatible API endpoint URL
2. **Start Chatting** — use the Chat interface to begin conversations
3. **Generate Images** — visit the Images page to create or edit images
4. **Manage Content** — view, delete, and organize conversations and images

## Supported AI Services

Aethera works with any OpenAI-compatible API, including:

- OpenAI
- Local LLMs (Ollama, llama.cpp, LocalAI, etc.)
- Any other compatible service

Configure your preferred service in the Settings page.

## Troubleshooting

### API Connection Issues

If you see authentication errors, verify your API endpoint URL is correct and accessible.

### Port Already in Use

Change the port using the `--port` flag if port 8080 is unavailable.

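Rather than guessing at a free port, the OS can be asked for one and the result passed to `--port`. This sketch assumes `python3` is available; the binary path comes from the Make build:

```bash
# Ask the OS for a currently free ephemeral port by binding to port 0
PORT=$(python3 -c 'import socket; s = socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')
echo "using free port: $PORT"

# Then start Aethera on it:
# ./backend/dist/aethera --port "$PORT"
```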
### llama.cpp Features

Llama.cpp-specific features like per-token timings are automatically detected.

## License