distributed-llm-pretraining-torchtitan
Provides PyTorch-native distributed LLM pretraining using torchtitan with 4D parallelism (FSDP2, TP, PP, CP). Use when pretraining Llama 3.1, DeepSeek V3, or custom models at scale from 8 to 512+ GPUs with Float8, torch.compile, and distributed checkpointing.
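For orientation, the sketch below shows the FSDP2 primitive (fully_shard over a DeviceMesh) that torchtitan's 4D parallelism builds on. It is an illustrative sketch, not torchtitan's training loop: the toy model, hyperparameters, and launch setup are assumptions, and it assumes PyTorch 2.6+ where fully_shard is exported from torch.distributed.fsdp, an NCCL backend, and a torchrun launch.

    # Minimal FSDP2 sharding sketch (illustrative only; not torchtitan's code).
    # Assumes PyTorch 2.6+ (public fully_shard), NCCL, and a torchrun launch.
    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.distributed.device_mesh import init_device_mesh
    from torch.distributed.fsdp import fully_shard

    def main():
        dist.init_process_group("nccl")
        torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

        # 1D data-parallel mesh; torchtitan extends this to a 4D mesh
        # spanning FSDP, tensor, pipeline, and context parallel dimensions.
        mesh = init_device_mesh("cuda", (dist.get_world_size(),), mesh_dim_names=("dp",))

        # Toy stand-in for a transformer stack.
        blocks = [nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
                  for _ in range(4)]
        model = nn.Sequential(*blocks).cuda()

        # Shard each block, then the root module, so parameter all-gathers
        # can overlap with compute layer by layer.
        for block in model:
            fully_shard(block, mesh=mesh)
        fully_shard(model, mesh=mesh)

        model = torch.compile(model)  # optional graph compilation

        optim = torch.optim.AdamW(model.parameters(), lr=3e-4)
        x = torch.randn(8, 128, 512, device="cuda")
        loss = model(x).pow(2).mean()  # dummy objective for the sketch
        loss.backward()
        optim.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Launch with, for example, torchrun --nproc_per_node=8 fsdp2_sketch.py (the filename is hypothetical). torchtitan composes tensor, pipeline, and context parallelism on top of the same DeviceMesh abstraction and adds Float8 training and distributed checkpointing.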
Install this skill
Run this command in your terminal. No account required — it auto-detects your AI tool and installs the skill file.
npx @skills-hub-ai/cli install ai-research-distributed-llm-pretraining-torchtitan