Running Qwen3:30B MoE on an RTX 3070 laptop with Ollama


I think there is a lot of value in people documenting what it takes to make models run well on 8, 12, and 16 GB GPUs, and the tools they use to do it.
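As one sketch of that kind of documentation: with Ollama, partial GPU offload can be tuned through a Modelfile. The layer count and context size below are illustrative guesses for an 8 GB card, not measured values from this post, and would need tuning per machine:

```
# Modelfile — hypothetical partial-offload setup for an 8 GB GPU
FROM qwen3:30b
# Offload only some layers to VRAM; the remainder run from system RAM
# (24 is a placeholder — raise or lower until it fits without OOM)
PARAMETER num_gpu 24
# Keep the context window modest to leave VRAM for weights and KV cache
PARAMETER num_ctx 8192
```

This would be built with `ollama create my-qwen -f Modelfile` and then started with `ollama run my-qwen`; watching VRAM usage while adjusting `num_gpu` is the usual way to find the largest offload that fits.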