Upstream v0.74.0 lockfile omits resolved/integrity metadata needed by
buildNpmPackage's offline NPM cache. Add a package-local enriched lockfile,
a script to regenerate it from the npm registry, and a prePatch step to
copy it into the build sandbox.
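The enrichment step can be sketched as a small script. The function below is a hypothetical illustration, not the actual regeneration script: it fills in the missing fields from a pre-fetched metadata map, whereas the real script would query the npm registry (e.g. `https://registry.npmjs.org/<name>/<version>`) for each entry.

```python
import json

# Hypothetical helper: fill in the "resolved"/"integrity" fields that the
# upstream v3 lockfile omits, using metadata keyed by (name, version).
def enrich_lockfile(lockfile: dict, registry_meta: dict) -> dict:
    for path, entry in lockfile.get("packages", {}).items():
        if not path:  # the root project entry needs no registry metadata
            continue
        # "node_modules/foo" or "node_modules/a/node_modules/foo" -> "foo"
        name = path.rsplit("node_modules/", 1)[-1]
        meta = registry_meta.get((name, entry.get("version")))
        if meta is None:
            continue
        entry.setdefault("resolved", meta["resolved"])
        entry.setdefault("integrity", meta["integrity"])
    return lockfile

if __name__ == "__main__":
    lock = {
        "packages": {
            "": {"name": "app", "version": "0.74.0"},
            "node_modules/left-pad": {"version": "1.3.0"},
        }
    }
    meta = {
        ("left-pad", "1.3.0"): {
            "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
            "integrity": "sha512-...",
        }
    }
    print(json.dumps(enrich_lockfile(lock, meta)["packages"], sort_keys=True))
```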
Add ikawrakow/ik_llama.cpp as a new package with CUDA/Vulkan support,
enabling MTP (Multi-Token Prediction) and IQ4_KS quantization. Wire it
into llama-swap with a new 'ik-qwen3.6-27b-iq4ks-thinking' model config
and 'iq36' alias. Also add a chat template download to the vLLM setup
script and include the binary on lin-va-desktop.
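The llama-swap wiring might look like the following sketch; the binary path, model path, and server flags are illustrative assumptions, not the actual config.

```yaml
# Hypothetical llama-swap entry; paths and flags are illustrative only.
models:
  "ik-qwen3.6-27b-iq4ks-thinking":
    cmd: >
      ${ik_llama}/bin/llama-server --port ${PORT}
      -m /models/qwen3.6-27b-iq4ks.gguf
      --jinja --chat-template-file /models/qwen3.6.jinja
    aliases:
      - iq36
```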
Replace qwen3.6-27b-thinking and qwen3.6-27b-mtp-thinking with
qwen3.6-27b-udq4-thinking (single GPU) and qwen3.6-27b-udq6-thinking
(dual GPU). Update aliases and concurrent set accordingly.
Sync all three Qwen3.6 27B vLLM configs (tools-text, long-text,
long-vision) with club-3090 83bf73d. Add the disable-thinking flag
and introduce upstream hash tracking comments for future syncs.
Update update-vllm-3090-configs skill to use hash-based skip logic.
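The hash-based skip logic can be sketched as follows; the exact marker convention (a `# upstream: <hash>` comment near the top of each synced config) is an assumption for illustration.

```python
import re

# Assumed tracking-comment convention, e.g. "# upstream: 83bf73d";
# the real skill may use a different marker format.
UPSTREAM_RE = re.compile(r"#\s*upstream:\s*([0-9a-f]{7,40})")

def needs_sync(local_text: str, upstream_hash: str) -> bool:
    """Skip the sync when the tracked hash already matches upstream."""
    m = UPSTREAM_RE.search(local_text)
    return m is None or m.group(1) != upstream_hash
```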
Add qwen3.6-27b-mtp-thinking model config with 150K context, MTP
speculative decoding, and thinking mode support. Bump llama-cpp
from b9009 to b9045 and apply MTP patch from upstream PR #22673.
Condense the web-glimpse SKILL.md from a verbose multi-section format to a
compact quick-reference style. Key changes:
- Consolidate usage patterns into a single quick reference block
- Replace separate sections per command with a concise command table
- Simplify workflow guidance and error handling into scannable tables
- Update timeout values from milliseconds to seconds
- Document new --no-reader and --format options
- Remove redundant answering guidelines
Sync all three vLLM model configs from club-3090 master (ae4846f).
Update to Genesis v7.65 full PROD env set with new patches.
Update docker image to nightly-7a1eb8ac. Add torch_compile and
triton cache dirs. Add agent setup guide (AGENTS.md).
Add 'evan' API key to llama-swap sops secrets.
Allow one CUDA0 and one CUDA1 model to run simultaneously. Dual-GPU
models (using -ts splits) are excluded from the matrix so they evict
everything when loaded. vLLM docker models get evict_cost=50 to
discourage eviction due to slow cold starts.
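The coexistence and eviction rules above can be sketched as follows; the model record shape, field names, and victim-ordering policy are assumptions made for illustration, not the actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    gpus: frozenset       # e.g. {"CUDA0"}, or {"CUDA0", "CUDA1"} for -ts splits
    evict_cost: int = 0   # e.g. 50 for vLLM docker models (slow cold starts)

def can_coexist(a: Model, b: Model) -> bool:
    # Dual-GPU models are excluded from the matrix: they never share.
    if len(a.gpus) > 1 or len(b.gpus) > 1:
        return False
    # One CUDA0 and one CUDA1 model may run simultaneously.
    return a.gpus.isdisjoint(b.gpus)

def pick_victims(running: list, incoming: Model) -> list:
    """Models that must be evicted to load `incoming`, cheapest first."""
    victims = [m for m in running if not can_coexist(m, incoming)]
    return sorted(victims, key=lambda m: m.evict_cost)
```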
Extract a shared hasType helper for model filtering and add
vision (text + image) input capability to compatible models.
Also tag two llama-swap models with the vision type.
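The shared helper amounts to a membership test over a model's capability tags; this sketch assumes a `types` field on each model record, which is an illustrative guess at the schema.

```python
# Sketch of the shared type-filtering helper; the "types" field name is an
# assumption about how model capabilities are tagged.
def has_type(type_name: str, model: dict) -> bool:
    return type_name in model.get("types", ())

models = [
    {"name": "qwen3.6-27b-udq4-thinking", "types": ["text"]},
    {"name": "qwen3.6-27b-long-vision", "types": ["text", "vision"]},
]

vision_models = [m for m in models if has_type("vision", m)]
```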
- Add modules/home/security/pass-keyring with GPG agent, pass, and
python keyring backend config for headless credential storage
- Enable pass-keyring for lin-va-mbp-work-vm
- Update bash PATH from ~/.bin to ~/.local/bin
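On the keyring side, such a setup typically points the python keyring library at a pass-backed backend via its config file; the backend module name below assumes the third-party keyring-pass package and is illustrative, not taken from this change.

```ini
# ~/.config/python_keyring/keyringrc.cfg
# Assumes the third-party keyring-pass package provides this backend.
[backend]
default-keyring=keyring_pass.PasswordStoreBackend
```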