vLLM sends thinking content in a "reasoning" delta field, unlike
DeepSeek, which uses "reasoning_content". Check both field names so
thinking blocks render for vLLM-hosted models like qwen3.6-27b-thinking.
Also update the client tests to exercise thinking output, and mark them
skipped by default so they don't run in Drone CI (they require a live
LLM API).