# Aethera

A sophisticated web dashboard for AI-powered conversations and image generation, featuring a chat interface, multiple conversations, and local storage.

## Features

- **Chat Interface**: Engage with AI models through a clean, responsive chat interface
- **Multiple Conversations**: Switch between different conversation threads
- **Image Generation**: Create and manage AI-generated images with customizable prompts
- **Theme Support**: Toggle between light and dark modes
- **Local Storage**: All conversations and images are stored locally on your system
- **Markdown Rendering**: View beautifully formatted responses with syntax highlighting

## Quick Start

### Prerequisites

- Go 1.25.5 or later
- Bun package manager
- An OpenAI-compatible API endpoint (OpenAI, a local LLM, etc.)

### Installation

1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd aethera
   ```

2. Build the backend:

   ```bash
   cd backend
   go build -o ./dist/aethera ./cmd
   ```

3. Build the frontend:

   ```bash
   cd ../frontend
   bun run build
   ```

### Running the Application

Start the server from the backend directory:

```bash
./dist/aethera
```

By default, the application runs at `http://localhost:8080`. Open your browser and navigate to that URL to begin using Aethera.

## Configuration Options

You can customize the server's behavior with these command-line flags:

- `--data-dir`: Directory for storing generated images (default: `data`)
- `--listen`: Address to listen on (default: `localhost`)
- `--port`: Port to listen on (default: `8080`)

Example:

```bash
./dist/aethera --port 3000 --listen 0.0.0.0
```

## Getting Started

1. **Configure Your API**: Navigate to the Settings page and enter your OpenAI-compatible API endpoint URL
2. **Start Chatting**: Use the Chat interface to begin conversations with your AI model
3. **Generate Images**: Visit the Images page to create images using text prompts
4. **Manage Your Content**: View and delete images, and organize conversations

## Supported AI Services

Aethera works with any OpenAI-compatible API, including:

- OpenAI
- Local LLMs (Ollama, LocalAI, etc.)
- Other compatible AI services

Configure your preferred service in the Settings page.

## Troubleshooting

### API Connection Issues

If you see authentication errors, verify that your API endpoint URL is correct and accessible.

### Port Already in Use

If port 8080 is unavailable, change the port with the `--port` flag.

## License

See the LICENSE file for details.
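For contributors wondering how the command-line flags described under Configuration Options might be wired up, here is a minimal, hypothetical sketch using Go's standard `flag` package. This is not Aethera's actual implementation, and the helper `serverAddr` is invented for illustration:

```go
package main

import (
	"flag"
	"fmt"
)

// serverAddr joins a listen address and a port into the host:port
// form expected by net/http. (Hypothetical helper, illustration only.)
func serverAddr(listen string, port int) string {
	return fmt.Sprintf("%s:%d", listen, port)
}

func main() {
	// Defaults mirror the documented flags: --data-dir, --listen, --port.
	dataDir := flag.String("data-dir", "data", "directory for storing generated images")
	listen := flag.String("listen", "localhost", "address to listen on")
	port := flag.Int("port", 8080, "port to listen on")
	flag.Parse()

	fmt.Printf("data dir: %s\n", *dataDir)
	fmt.Printf("listening on http://%s\n", serverAddr(*listen, *port))
}
```

Run with no flags, this prints `listening on http://localhost:8080`, matching the default address mentioned above; `--port 3000 --listen 0.0.0.0` yields `0.0.0.0:3000` instead.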