- Update nvim to use qwen3-coder-next-80b-instruct model
- Add AGENTS.md with AI agent best practices for timeout handling and file writing
- Update pi config to include agent guidelines
- Refactor llama-swap: remove old models, update quantizations, add tensor splits (see the first sketch after this list),
  drop the GGML_CUDA_ENABLE_UNIFIED_MEMORY environment variable settings, and simplify the configuration
- Change opencode and pi model filtering to use the 'coding' type instead of the
  more generic 'text-generation' type, and update llama-swap model configs to
  include 'coding' in the metadata type list for the relevant models (deepseek-coder,
  qwen-coder, mistral, codellama, llama3-8b-instruct-q5); see the second sketch after this list
- Add new model "Qwen3 Coder Next (80B) - Instruct" with a 262144-token context window
  and parameters optimized for coding tasks; uses CUDA unified memory support (see the third sketch after this list)
- Update llama-cpp from b7867 to b7898
- Update opencode from v1.1.12 to v1.1.48 with improved build process:
- Replace custom bundle script with official script/build.ts
- Add shell completion support
- Add version check testing
- Simplify node_modules handling
- Update llama-swap service config with new llama.cpp options
- Clarify opencode agent testing workflow in developer and reviewer configs
- Update llama.cpp from b7789 to b7867
- Update llama-swap from v182 to v186
- Add OpenCode conventional commit command configuration
- Add moonshotai Kimi-K2.5 model to llama-swap
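
The llama-swap refactor above adds tensor splits to the model commands. Below is a minimal sketch of what one entry might look like, assuming a hypothetical GGUF path and an even split across two GPUs; llama-server's `--tensor-split` flag distributes the layers, and `${PORT}` is the placeholder llama-swap fills in at launch:

```yaml
# Hypothetical llama-swap entry; the model path and split ratios are illustrative.
models:
  "qwen-coder":
    cmd: |
      llama-server
        --port ${PORT}
        -m /models/qwen2.5-coder-32b-q4_k_m.gguf
        -ngl 99
        --tensor-split 0.5,0.5
```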
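For the 'coding' type filtering, here is a sketch of how a llama-swap model entry could advertise itself so that opencode and pi include it, assuming the metadata block carries a free-form type list as described in the item above (the exact field names in the real config may differ):

```yaml
# Hypothetical entry; only the metadata block matters for the filtering change.
models:
  "deepseek-coder":
    cmd: |
      llama-server --port ${PORT} -m /models/deepseek-coder-33b-q4_k_m.gguf
    metadata:
      type:
        - text-generation
        - coding   # opencode and pi now filter on this instead of 'text-generation'
```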
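The Qwen3 Coder Next item pairs the 262144-token context window with CUDA unified memory. A sketch under the same assumptions (hypothetical GGUF path and quantization): `-c` sets the context size, and the env entry enables ggml's unified-memory allocation as stated in the item above:

```yaml
# Hypothetical entry for the new model; path, quantization, and -ngl are illustrative.
models:
  "qwen3-coder-next-80b-instruct":
    env:
      - "GGML_CUDA_ENABLE_UNIFIED_MEMORY=1"
    cmd: |
      llama-server
        --port ${PORT}
        -m /models/qwen3-coder-next-80b-instruct-q4_k_m.gguf
        -c 262144
        -ngl 99
```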