Add qwen3.6-27b-mtp-thinking model config with 150K context, MTP speculative decoding, and thinking mode support. Bump llama-cpp from b9009 to b9045 and apply MTP patch from upstream PR #22673.
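A minimal sketch of what such a model config entry might look like. All field names here are assumptions for illustration; the actual schema of the config file is not shown in this commit summary.

```yaml
# Hypothetical config sketch -- field names assumed, not the project's real schema.
qwen3.6-27b-mtp-thinking:
  context_length: 150000     # 150K context window
  speculative_decoding:
    method: mtp              # multi-token-prediction draft head
  thinking_mode: true        # enable thinking/reasoning output
  backend: llama-cpp         # pinned at b9045, with the MTP patch from upstream PR #22673
```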