🧩 Model Card: Llama-3.2-1B-Instruct
- Type: Text-to-Text
- Think: No
- Base Model: meta-llama/Llama-3.2-1B-Instruct
- Max Context Length: 128k tokens
- Default Context Length: 128k tokens (default can be changed)
- Context length can also be set at launch
▶️ Run with FastFlowLM in PowerShell:
flm run llama3.2:1b
🧩 Model Card: Llama-3.2-3B-Instruct
- Type: Text-to-Text
- Think: No
- Base Model: meta-llama/Llama-3.2-3B-Instruct
- Max Context Length: 128k tokens
- Default Context Length: 64k tokens (default can be changed)
- Context length can also be set at launch
▶️ Run with FastFlowLM in PowerShell:
flm run llama3.2:3b
🧩 Model Card: Llama-3.1-8B-Instruct
- Type: Text-to-Text
- Think: No
- Base Model: meta-llama/Llama-3.1-8B-Instruct
- Max Context Length: 128k tokens
- Default Context Length: 16k tokens (default can be changed)
- Context length can also be set at launch
▶️ Run with FastFlowLM in PowerShell:
flm run llama3.1:8b