🧩 Model Card: LiquidAI/LFM2-1.2B
- Type: Text-to-Text
- Think: No
- Base Model: LiquidAI/LFM2-1.2B
- Max Context Length: 32k tokens
- Default Context Length: 32k tokens
▶️ Run with FastFlowLM in PowerShell:

```powershell
flm run lfm2:1.2b
```