Find the local LLM that actually runs and performs best on your hardware. Ranked by real, recency-aware benchmarks, not parameter count. One command, run it instantly. - Andyyyy64/whichllm
I tried a super lightweight model on my CPU-only laptop, as recommended by whichllm.
The response to a basic question crawled out at roughly one word every 1-3 seconds, which was laughable.
Not that this was a huge surprise, but fair warning to anyone considering trying it on an underpowered system: it wasn't even in the neighborhood of usable.