🧩 Model Card: DeepSeek-R1-Distill-Llama-8B
- Type: Text-to-Text
- Think: Yes
- Base Model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- Max Context Length: 128k tokens
- Default Context Length: 16k tokens (the default can be changed, and the context length can also be set at launch)
▶️ Run with FastFlowLM in PowerShell:

```powershell
flm run deepseek-r1:8b
```
🧩 Model Card: DeepSeek-R1-0528-Qwen3-8B
- Type: Text-to-Text
- Think: Yes
- Base Model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
- Max Context Length: 64k tokens
- Default Context Length: 16k tokens (the default can be changed, and the context length can also be set at launch)
▶️ Run with FastFlowLM in PowerShell:

```powershell
flm run deepseek-r1-0528:8b
```
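Both cards note that the context length can be set at launch. A hypothetical sketch of what that might look like is below; the `--ctx-len` flag name and the value shown are assumptions, not confirmed FastFlowLM options, so check `flm --help` or the FastFlowLM documentation for the actual syntax:

```powershell
# Hypothetical: launch with a larger context window than the 16k default.
# The --ctx-len flag name is an assumption; verify against `flm --help`.
flm run deepseek-r1:8b --ctx-len 32768
```

Keep the value within each model's maximum (128k tokens for DeepSeek-R1-Distill-Llama-8B, 64k for DeepSeek-R1-0528-Qwen3-8B); a larger context window also increases memory use.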