Early-Preview Fine-tune · Round 2

Qwopus3.6-27B — v1-preview

Same 16-prompt suite as the Qwen3.6-27B base eval — 5 agentic, 5 web-design, 6 canvas/WebGL. Q4_K_M quant on a single RTX 5090 via llama.cpp. Headline metrics: 62.3 tok/s avg, 87.4 K tokens generated, 23.4 min total runtime.

Note: this is an early preview — not the final Qwopus 3.6. The v1-preview weights come from a small ~12 K-example training run. I'm currently working with Jackrong to land more compute for a full fine-tune pass that will be orders of magnitude larger and cleaner. Numbers and behaviour on this page will change when the full model ships.
62.3 avg tok/s (+12.7% vs base) · 16 runs · 87,394 completion tokens · ~20 GB VRAM used · 65K context window
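As a quick arithmetic check, aggregate throughput follows from the totals above — total completion tokens over total runtime:

```python
# Sanity-check the headline throughput from the totals on this page.
completion_tokens = 87_394
runtime_min = 23.4

aggregate_tps = completion_tokens / (runtime_min * 60)
print(f"{aggregate_tps:.1f} tok/s")  # ≈ 62.2
```

This agrees with the 62.3 tok/s figure to within rounding (the card value is a per-run average, so a small gap against the aggregate is expected).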

Web design

SaaS landing page · Prism — AI observability
36.7 KB · 9,961 tok · 160 s
Analytics dashboard · Light theme, emerald accent
37.4 KB · 13,190 tok · 213 s
Designer portfolio · Maya Chen — kinetic typography
23.1 KB · 7,356 tok · 118 s
Pricing page · 3 tiers + animated toggle + FAQ
24.3 KB · 8,061 tok · 129 s
Mobile app marketing · Stillwater — CSS-only iPhone mock
29.3 KB · 8,005 tok · 128 s

Canvas / WebGL · creative coding

Particle attractor · 3,000-particle fluid swarm
11.1 KB · 4,249 tok · 68 s
WebGL Mandelbulb · Raymarched fractal shader
11.5 KB · 4,364 tok · 70 s
Three.js crystal scene · Transmissive glass + bloom
17.9 KB · 6,375 tok · 102 s
Physics sandbox · Soft-body collisions, mouse fling
15.1 KB · 4,384 tok · 70 s
Audio-reactive visualizer · Mic input with oscillator fallback
12.0 KB · 3,018 tok · 48 s
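For context on what the particle-attractor prompt asks for: the core of such a demo is a per-frame integration step pulling each particle toward an attractor point. The sketch below is a minimal pure-Python illustration, not the model's actual output; the field strength, softening, and damping constants are illustrative assumptions:

```python
def step(particles, attractor, dt=0.016, g=50.0, damping=0.99):
    """Advance each particle one frame toward the attractor point.

    Semi-implicit Euler: update velocity from an inverse-square pull,
    apply damping, then update position. Constants are illustrative.
    """
    ax, ay = attractor
    for p in particles:
        dx, dy = ax - p["x"], ay - p["y"]
        d2 = dx * dx + dy * dy + 1e-4      # softening avoids a singularity at the attractor
        inv_d = d2 ** -0.5                 # 1 / distance, for normalizing the direction
        a = g / d2                         # inverse-square acceleration magnitude
        p["vx"] = (p["vx"] + a * dx * inv_d * dt) * damping
        p["vy"] = (p["vy"] + a * dy * inv_d * dt) * damping
        p["x"] += p["vx"] * dt
        p["y"] += p["vy"] * dt


swarm = [{"x": 0.0, "y": 0.0, "vx": 0.0, "vy": 0.0}]
step(swarm, (10.0, 0.0))   # the particle starts drifting toward x = 10
```

In the actual eval outputs the same update runs in JavaScript inside a `requestAnimationFrame` loop over a canvas; the physics is identical.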

Agentic reasoning · text output

Multi-step planning · URL shortener deploy plan
thinking: 3,158 tok · 50 s
Self-critique loop · Palindrome · O(n³) → O(n²)
thinking: 1,277 tok · 21 s
Code debug (4 bugs) · k-th smallest element
thinking: 1,628 tok · 26 s
Structured JSON extraction · Calendar + roster from prose
no-think rerun · 353 tok
Tool-use planning · Weather + flights + hotel
thinking: 1,174 tok · 19 s
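The O(n³) → O(n²) step in the self-critique task is presumably the classic longest-palindromic-substring optimization: replace the brute-force check of every substring with expand-around-center. The sketch below shows the O(n²) version as an illustration, not a transcript of the model's answer:

```python
def longest_palindrome(s: str) -> str:
    """Longest palindromic substring via expand-around-center, O(n²) time."""
    if not s:
        return ""
    best = s[0]
    for i in range(len(s)):
        # Two centers per index: odd-length (i, i) and even-length (i, i+1).
        for l, r in ((i, i), (i, i + 1)):
            while l >= 0 and r < len(s) and s[l] == s[r]:
                if r - l + 1 > len(best):
                    best = s[l:r + 1]
                l -= 1
                r += 1
    return best


print(longest_palindrome("babad"))  # prints "bab"
```

Each of the 2n − 1 centers expands at most O(n) steps, versus the O(n³) brute force that tests every substring with an O(n) palindrome check.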