Claude Opus 4.5 (Anthropic) vs. DeepSeek V3.1 Terminus (DeepSeek)
A direct head-to-head comparison of two frontier models. DeepSeek V3.1 Terminus wins 3 of 8 categories.
| Metric | Claude Opus 4.5 (Anthropic) | DeepSeek V3.1 Terminus (DeepSeek) |
|---|---|---|
| Quality Index | 49.7 ★ | 33.9 |
| Speed (tokens/s) | 57.0 ★ | 0 |
| Latency (TTFT) | 9.84 s | 0 ms ★ |
| Input price (USD/1M tokens) | $5.00 | $1.635 ★ |
| Output price (USD/1M tokens) | $25.00 | $2.75 ★ |
| Context window | — | — |
| Modalities | text | text |
| Release | 11/2025 | 09/2025 |
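To make the per-1M-token prices in the table concrete, here is a minimal cost sketch. The model names and the `cost_usd` helper are hypothetical illustrations; only the dollar figures come from the table above.

```python
# Per-1M-token prices taken from the comparison table above.
# The dictionary keys are illustrative labels, not official API model IDs.
PRICES = {
    "claude-opus-4.5":        {"input": 5.00,  "output": 25.00},
    "deepseek-v3.1-terminus": {"input": 1.635, "output": 2.75},
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Total USD cost of one request, given input and output token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example workload: 100k input tokens, 10k output tokens.
print(cost_usd("claude-opus-4.5", 100_000, 10_000))         # 0.75 USD
print(cost_usd("deepseek-v3.1-terminus", 100_000, 10_000))  # 0.191 USD
```

At these list prices, the example request costs roughly 4x less on DeepSeek V3.1 Terminus, which matches the two starred price rows in the table.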
Claude Opus 4.5
Claude Opus 4.5 is Anthropic’s frontier reasoning model optimized for complex software engineering, agentic workflows, and long-horizon computer use. It offers strong multimodal capabilities, competitive performance across real-world coding and...
DeepSeek V3.1 Terminus
DeepSeek-V3.1 Terminus is an update to [DeepSeek V3.1](/deepseek/deepseek-chat-v3.1) that maintains the model's original capabilities while addressing issues reported by users, including language consistency and agent capabilities, further optimizing the model's...