Posts, a community app by Read.cv
For folks running LLMs or diffusion models locally, what models & hardware are you running, and are you happy with the performance?
Ollama + llama2 on an M2. It’s solid, but definitely not GPT-4.