Show HN: Llama.cpp Tutorial 2026: Run GGUF Models Locally on CPU and GPU
CableNinja 24 hours ago
I've been trying to run locally, effectively followed this guide (before the guide existed), and have not had any success. llama.cpp builds fine, and then when I start it up, it just spins its progress bar indefinitely. I left it sitting for 3 days and nada.

Running on an 8-core, 12 GB RAM VM with an AMD RX 5500 XT (8 GB) passed through. ROCm is built, and llama.cpp was built with the correct flags (roughly the steps sketched below).

What am I missing?
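For reference, this is roughly the HIP build and run I mean; exact CMake flag names vary between llama.cpp versions (older trees use -DLLAMA_HIPBLAS=ON instead of -DGGML_HIP=ON), gfx1012 for the RX 5500 XT is my guess since RDNA1 isn't an officially supported ROCm target, and the model path is just a placeholder:

    # build llama.cpp with HIP/ROCm support (flag names depend on the llama.cpp version)
    HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" \
      cmake -S . -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1012 -DCMAKE_BUILD_TYPE=Release
    cmake --build build --config Release -j 8

    # run a GGUF model with layers offloaded to the GPU (-ngl count depends on the model)
    ./build/bin/llama-cli -m models/your-model.gguf -p "Hello" -ngl 99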

washadjeffmad 23 hours ago
Logs to troubleshoot, for starters.
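Something like this would at least show whether ROCm sees the passed-through GPU and where llama.cpp hangs (assuming the current llama-cli binary; the model path is a placeholder):

    # confirm ROCm sees the GPU inside the VM
    rocminfo | grep -i gfx
    rocm-smi

    # capture llama.cpp's startup/device output; it logs to stderr
    ./build/bin/llama-cli -m models/your-model.gguf -p "test" -ngl 99 2> llama.log
    tail -f llama.log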