huggingface-local-models
by Hugging Face
Use this skill to select and run models locally with llama.cpp and GGUF on CPU, Mac Metal, CUDA, or ROCm. It covers finding GGUF repositories, choosing a quantization, looking up exact GGUF filenames, converting models, running servers, and OpenAI-compatible local serving.
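As a sketch of the workflow this skill covers, assuming llama.cpp is installed and using an illustrative repo and quantization tag (the repo name and Q4_K_M choice below are examples, not part of this page):

```shell
# Download a specific GGUF quantization from a Hugging Face repo
# (repo name and --include pattern are illustrative assumptions):
huggingface-cli download bartowski/Meta-Llama-3.1-8B-Instruct-GGUF \
  --include "*Q4_K_M*" --local-dir models

# Serve it with llama.cpp's OpenAI-compatible server on port 8080;
# clients can then point an OpenAI SDK at http://localhost:8080/v1
llama-server -m models/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf --port 8080
```

Q4_K_M is a common middle-ground quantization; smaller quants trade quality for memory, larger ones the reverse.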
Install this skill
Run this command in your terminal. No account is required; the installer auto-detects your AI tool and installs the skill file.
npx @skills-hub-ai/cli install huggingface-huggingface-local-models