Commit Graph

8 Commits

74b8d43032 refactor!: move LLM configuration from in-app settings to CLI/env vars
- Remove `api_endpoint` from Settings model and settings UI
- Add `--llm-endpoint` / `AETHERA_LLM_ENDPOINT` and `--llm-key` /
  `AETHERA_LLM_KEY` CLI flags (endpoint is required)
- Update client constructor to accept API key parameter
- Update tests and documentation to reflect new configuration approach

BREAKING CHANGE: LLM endpoint and key must now be provided via
`AETHERA_LLM_ENDPOINT` and `AETHERA_LLM_KEY` environment variables or
CLI flags instead of the Settings page.
2026-05-01 23:30:34 -04:00
54e24cb304 test(client): update model and enable live SendMessage test
Some checks failed
continuous-integration/drone/push Build is failing
2026-05-01 21:55:57 -04:00
0dd9521419 fix: improve chat UI streaming feedback and fix test image path
All checks were successful
continuous-integration/drone/push Build is passing
- Add loading spinner with 'Thinking...' text during streaming when
  content is not yet available
- Fix :key binding to use message.id instead of message.content
- Remove unnecessary TypeScript cast in file reader handler
- Move test image into testdata/ directory for proper test organization
- Remove t.Skip and simplify TestSummarizeChat test message
2026-05-01 21:04:26 -04:00
2154a9f203 fix(client_test): unskip vision test with embedded test image
- Remove t.Skip for TestSendMessageWithImage (qwen3-8b-vision works with base64)
- Add test_image.jpg (400x300) to test directory alongside test file
- Load image at runtime, convert to base64 data URL for API
- Collect full response in bytes.Buffer and verify non-empty output
2026-05-01 20:41:18 -04:00
0007250e5e fix(client_test): use embedded test image and skip unstable vision backend
- Remove dependency on /tmp/dog.jpg, embed small PNG as base64 data URL
- Add t.Skip for qwen3-8b-vision tests (LlamaSwap returns 502 for vision)
- Remove unused encoding/base64 and os imports
- Use model constant for consistency across tests
2026-05-01 19:35:08 -04:00
e60b1ea8d5 feat(chat): add optional photo upload support
Add vision/multimodal support to chat, allowing users to send images
alongside or instead of text prompts. Images are transmitted and persisted
as base64 data URLs.

Backend:
- Add Images []string to Message struct for persistence
- Add Images []string to GenerateTextRequest with relaxed validation
- Build multimodal user messages using OpenAI SDK content parts
- Pass images through from handlers to client
- Deep-copy Images slice in message cloning

Frontend:
- Add images?: string[] to Message and GenerateTextRequest types
- Add image selection state and file input handler
- Add camera icon button, hidden file input, and image preview strip
- Render images in user message bubbles
- Pass images through to GenerateTextRequest

Tests:
- Add TestSendMessageWithImage for vision model testing
2026-05-01 18:27:09 -04:00
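The "content parts" structure the backend builds could be sketched as below. This models the OpenAI chat-completions wire format directly rather than using the SDK helper types the commit mentions; `contentPart` and `buildUserContent` are hypothetical names for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// contentPart mirrors one element of an OpenAI-style multimodal
// "content" array: either a text part or an image_url part.
type contentPart struct {
	Type     string    `json:"type"`
	Text     string    `json:"text,omitempty"`
	ImageURL *imageURL `json:"image_url,omitempty"`
}

type imageURL struct {
	URL string `json:"url"`
}

// buildUserContent combines an optional text prompt with any number of
// base64 data-URL images into a single content array, so a user message
// can carry images alongside or instead of text.
func buildUserContent(text string, images []string) []contentPart {
	var parts []contentPart
	if text != "" {
		parts = append(parts, contentPart{Type: "text", Text: text})
	}
	for _, img := range images {
		parts = append(parts, contentPart{Type: "image_url", ImageURL: &imageURL{URL: img}})
	}
	return parts
}

func main() {
	parts := buildUserContent("What is in this photo?", []string{"data:image/jpeg;base64,..."})
	out, _ := json.Marshal(parts)
	fmt.Println(string(out))
}
```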
c51c0ab070 fix(client): support vLLM "reasoning" field for thinking blocks
All checks were successful
continuous-integration/drone/push Build is passing
vLLM sends thinking content in a "reasoning" delta field, unlike
DeepSeek which uses "reasoning_content". Check both field names so
thinking blocks render for vLLM-hosted models like qwen3.6-27b-thinking.

Also update client tests to exercise thinking output and skip by default
so they don't run in Drone CI (require live LLM API).
2026-04-30 21:55:05 -04:00
89f2114b06 initial commit
2026-01-17 10:09:11 -05:00