Testing locally run language models (such as LLaMA and other open-source LLMs served through tools like Ollama) doesn't require the latest hardware. This processor handles CPU-based inference admirably for smaller models and for quantized versions of larger ones.
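As a quick way to verify CPU inference is working and gauge throughput, here is a minimal sketch that queries a local Ollama server over its HTTP API. It assumes Ollama is installed and running on its default port (11434); the model tag and prompt are placeholders, not recommendations.

```python
# Quick smoke test for CPU-based inference through a local Ollama server.
# Assumes Ollama is installed and serving on its default port (11434);
# the model tag below is an example small model, not a specific recommendation.
import json
import time
import urllib.request

MODEL = "llama3.2:3b"  # hypothetical choice: a small model suited to CPU inference

payload = json.dumps({
    "model": MODEL,
    "prompt": "Explain quantization in one sentence.",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)

start = time.perf_counter()
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)
elapsed = time.perf_counter() - start

# eval_count is the number of generated tokens reported by Ollama,
# so this gives a rough tokens-per-second figure for the CPU.
tokens = result.get("eval_count", 0)
print(result["response"].strip())
print(f"{tokens} tokens in {elapsed:.1f}s (~{tokens / elapsed:.1f} tok/s)")
```

The tokens-per-second figure printed at the end is a rough but practical benchmark for comparing how a given CPU handles different model sizes and quantization levels.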