AI · LLM · Agents · Multimodal · Open Source

Qwen3.6-Plus — Towards Real World Agents

Alibaba announces Qwen3.6-Plus with 1M context window, dramatically enhanced agentic coding capabilities, and sharper multimodal reasoning — setting new state-of-the-art standards in AI agents.

2026-04-07

Announcing Qwen3.6-Plus: A Leap Toward Real-World AI Agents

Following the release of the Qwen3.5 series in February 2026, Alibaba's Qwen Team has announced the official launch of Qwen3.6-Plus. Available immediately via Alibaba Cloud Model Studio API, this release represents a massive capability upgrade with a particular focus on agentic coding, multimodal reasoning, and real-world task execution.

Most notably, Qwen3.6-Plus dramatically enhances agentic coding. From frontend web development to complex, repository-level problem solving, it sets a new state of the art. It also perceives the world with greater accuracy and sharper multimodal reasoning.

Key Features

Qwen3.6-Plus is the hosted model available via Alibaba Cloud Model Studio, featuring:

1M Context Window

Process ultra-long documents, codebases, and conversations with a default 1 million token context window — enabling comprehensive understanding of complex, multi-document scenarios.
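To get a rough sense of what fits in a 1M-token window, a simple characters-per-token heuristic can size a codebase before sending it. The sketch below uses the common ~4-characters-per-token rule of thumb for English text and code; a real tokenizer would be more accurate, and the file extensions are just an illustrative default:

```python
import os

def rough_token_count(root: str, exts=(".py", ".js", ".ts"),
                      chars_per_token: float = 4.0) -> int:
    """Estimate tokens for source files under `root` using the
    ~4-characters-per-token heuristic (a tokenizer is more accurate)."""
    total_chars = 0
    for dirpath, _, files in os.walk(root):
        for name in files:
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    total_chars += len(f.read())
    return int(total_chars / chars_per_token)
```

If the estimate comes in well under 1M tokens, the whole repository can plausibly be sent in a single request.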

Enhanced Agentic Coding

Significantly improved capabilities for frontend web development, terminal operations, and repository-level problem solving. The model excels in complex automated task execution.

Better Multimodal Perception

Sharper visual reasoning across complex document understanding, physical world analysis, video reasoning, and visual coding tasks.

Performance Highlights

Qwen3.6-Plus achieves comprehensive improvements across coding agents, general agents, and tool usage by deeply integrating reasoning, memory, and execution capabilities.

Coding Agent Benchmarks

| Benchmark | Qwen3.5-Plus | Qwen3.6-Plus | Improvement |
|---|---|---|---|
| SWE-bench Verified | 76.2 | 78.8 | +2.6 |
| Terminal-Bench 2.0 | 52.5 | 61.6 | +9.1 |
| QwenWebBench | 1162.3 | 1501.7 | +339.4 |
| SWE-bench Pro | 50.9 | 56.6 | +5.7 |
| Claw-Eval Avg | 70.7 | 74.8 | +4.1 |

General Agent & Tool Usage

For general-purpose agents and tool usage, the model makes significant breakthroughs:

| Benchmark | Qwen3.5-Plus | Qwen3.6-Plus | Improvement |
|---|---|---|---|
| DeepPlanning | 37.6 | 41.5 | +3.9 |
| MCPMark | 46.1 | 48.2 | +2.1 |
| TAU3-Bench | 68.4 | 70.7 | +2.3 |
| WideSearch | 74.0 | 74.3 | +0.3 |

Multimodal Capabilities

Qwen3.6-Plus marks steady progress in multimodal capabilities across three core dimensions: advanced reasoning, enhanced applicability, and complex task execution.

| Benchmark | Qwen3.5-Plus | Qwen3.6-Plus | Improvement |
|---|---|---|---|
| MMMU (STEM) | 85.0 | 86.0 | +1.0 |
| MathVision | 88.6 | 88.0 | -0.6 |
| OmniDocBench 1.5 | 90.8 | 91.2 | +0.4 |
| VideoMME (w/ subs) | 87.5 | 87.8 | +0.3 |
| ScreenSpot Pro | 65.6 | 68.2 | +2.6 |

STEM & Reasoning

The model maintains leading performance across difficult STEM reasoning benchmarks:

| Benchmark | Qwen3.6-Plus | vs Qwen3.5-Plus |
|---|---|---|
| GPQA | 90.4 | +2.0 |
| AIME26 | 95.3 | +2.0 |
| HMMT Feb 25 | 96.7 | +1.9 |
| LiveCodeBench v6 | 87.1 | +3.5 |

New API Feature: preserve_thinking

This release introduces a new API feature designed to improve performance on complex, multistep tasks:

preserve_thinking: Preserve thinking content from all preceding turns in messages. Recommended for agentic tasks.

This capability is particularly beneficial for agent scenarios, where maintaining full reasoning context can enhance decision consistency and, in many cases, reduce overall token consumption by minimizing redundant reasoning. This feature is disabled by default.

Example Usage

from openai import OpenAI
import os

client = OpenAI(
    api_key=os.environ.get("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

completion = client.chat.completions.create(
    model="qwen3.6-plus",
    messages=[{"role": "user", "content": "Introduce vibe coding."}],
    extra_body={
        "enable_thinking": True,
        "preserve_thinking": True,  # NEW feature
    },
    stream=True,
)

# Print the streamed reply as it arrives
for chunk in completion:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)
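Because preserve_thinking operates on the message history you replay, a follow-up turn simply feeds the previous assistant reply, including its reasoning, back into `messages`. A minimal sketch of that replay step, assuming the reasoning is carried on the assistant message as a `reasoning_content` field (a hypothetical field name, not confirmed for this API):

```python
from typing import Dict, List

def append_turn(messages: List[Dict], user_text: str,
                assistant_text: str, reasoning: str) -> List[Dict]:
    """Replay a finished turn, keeping the assistant's reasoning so the
    next request can reuse it when preserve_thinking is enabled."""
    out = list(messages)  # don't mutate the caller's history
    out.append({"role": "user", "content": user_text})
    out.append({
        "role": "assistant",
        "content": assistant_text,
        "reasoning_content": reasoning,  # hypothetical field name
    })
    return out

# Build a two-turn history; pass it as `messages` on the next request,
# with extra_body={"enable_thinking": True, "preserve_thinking": True}.
history = append_turn([], "List the failing tests.",
                      "Two tests fail in test_io.py.",
                      "Checked the CI logs before answering.")
```

The point of the pattern is that each new request sees the full chain of prior reasoning, rather than only the final answers.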

Integration with Coding Assistants

Qwen3.6-Plus seamlessly integrates with popular third-party coding assistants:

OpenClaw

OpenClaw (formerly Moltbot/Clawdbot) is a self-hosted open-source AI coding agent. Connect it to Model Studio for a full agentic coding experience:

# Install (Node.js 22+)
curl -fsSL https://molt.bot/install.sh | bash

# Set API key
export DASHSCOPE_API_KEY=<your_api_key>

# Launch
openclaw dashboard

Qwen Code

Qwen Code is an open-source AI agent optimized for the Qwen series. Every user gets 1,000 free calls per day:

# Install (Node.js 20+)
npm install -g @qwen-code/qwen-code@latest

# Start interactive session
qwen

Claude Code

Qwen APIs support the Anthropic API protocol, enabling use with Claude Code:

# Install
npm install -g @anthropic-ai/claude-code

# Configure
export ANTHROPIC_MODEL="qwen3.6-plus"
export ANTHROPIC_BASE_URL=https://dashscope-intl.aliyuncs.com/apps/anthropic
export ANTHROPIC_AUTH_TOKEN=<your_api_key>

# Launch
claude

Real-World Applications

Frontend Development

Qwen3.6-Plus demonstrates superior performance on complex frontend projects:

  • 3D scenes and animations (Boids fish simulation, 3D aquarium)
  • Interactive games (first-person flight simulator)
  • Data visualizations with motion and micro-interactions
  • Dynamic UI effects (typewriter animations, floating elements)

Visual Agents

The model closes the loop from "understanding an interface" to "generating code" to "modifying with tools":

  • Visual reasoning with grounding: Locate specific persons/objects in images with bounding boxes
  • Visual coding: Generate frontend pages from UI screenshots or design mockups
  • Document understanding: Create presentations combining text and generated images
  • Video processing: Generate lecture notes from videos, edit video content
  • GUI agents: Navigate websites, complete multi-step interactive tasks
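As a sketch of how a visual-grounding task like the above could be sent through the OpenAI-compatible endpoint, the helper below packs a screenshot and a prompt into one multimodal user message. The data-URL image encoding is the standard pattern for this style of API; the model name comes from this release, and the prompt and filename are illustrative:

```python
import base64

def image_message(prompt: str, image_bytes: bytes,
                  mime: str = "image/png") -> dict:
    """Pack an image and a text prompt into one OpenAI-style
    multimodal user message using a base64 data URL."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:{mime};base64,{b64}"}},
            {"type": "text", "text": prompt},
        ],
    }

# Usage with the client from the earlier example:
# client.chat.completions.create(
#     model="qwen3.6-plus",
#     messages=[image_message("Draw a bounding box around the red button.",
#                             open("screenshot.png", "rb").read())],
# )
```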

Why Qwen3.6-Plus Matters

State-of-the-art coding agents — Leading performance on SWE-bench, Terminal-Bench, and real-world repository-level tasks.
1M context window — Process entire codebases, long documents, and extended conversations without losing context.
Native multimodal agent — Evolving beyond isolated task performance toward holistic workflow-oriented operations.
preserve_thinking feature — Maintain reasoning context across turns for better decision consistency and lower token costs.
Wide ecosystem integration — Works with OpenClaw, Qwen Code, Claude Code, and 10+ other AI coding assistants.

What's Next

The Qwen Team announces that smaller-scale variants of Qwen3.6 will be open-sourced in the coming days, reaffirming their commitment to accessibility and community-driven innovation. Future work will focus on:

  • Increasingly complex, long-horizon repository-level tasks
  • Enhanced model autonomy for real-world agent scenarios
  • Continued improvement in multimodal perception-reasoning-action loops

Note

Qwen3.6-Plus is available now via Alibaba Cloud Model Studio. Smaller open-source variants coming soon.

Citation

@misc{qwen36plus,
    title = {{Qwen3.6-Plus}: Towards Real World Agents},
    url = {https://qwen.ai/blog?id=qwen3.6},
    author = {{Qwen Team}},
    month = {April},
    year = {2026}
}
