training-llms-megatron
Trains large language models (2B-462B parameters) using NVIDIA Megatron-Core with advanced parallelism strategies. Use it when training models larger than 1B parameters, when you need maximum GPU efficiency (47% MFU on H100), or when you require tensor, pipeline, sequence, context, or expert parallelism. It is a production-ready framework used for models such as Nemotron, LLaMA, and DeepSeek.
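The parallelism dimensions listed above map onto Megatron-Core's process-group setup. Below is a minimal sketch of how those dimensions are typically initialized, assuming the upstream megatron-core package and a torchrun launch; the sizes are illustrative placeholders rather than settings prescribed by this skill, and sequence parallelism is toggled separately in the transformer config.

```python
# Minimal sketch: configuring Megatron-Core parallelism groups.
# Assumes the upstream megatron-core package, a CUDA-capable PyTorch build,
# and a torchrun launch (LOCAL_RANK set by the launcher). Sizes are
# illustrative, not recommendations from this skill.
import os

import torch
from megatron.core import parallel_state


def init_parallelism() -> None:
    # Megatron-Core builds its groups on top of torch.distributed,
    # so the default process group must exist first.
    torch.distributed.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # Carve the world size into tensor / pipeline / context / expert
    # dimensions; data parallelism absorbs the remaining ranks.
    parallel_state.initialize_model_parallel(
        tensor_model_parallel_size=4,
        pipeline_model_parallel_size=2,
        context_parallel_size=2,
        expert_model_parallel_size=1,
    )


if __name__ == "__main__":
    init_parallelism()
    print(f"data-parallel world: {parallel_state.get_data_parallel_world_size()}")
```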
Install this skill
Run this command in your terminal. No account required — it auto-detects your AI tool and installs the skill file.
npx @skills-hub-ai/cli install ai-research-training-llms-megatron