
distributed-llm-pretraining-torchtitan

Provides PyTorch-native distributed LLM pretraining using torchtitan with 4D parallelism: FSDP2 sharded data parallelism, tensor parallelism (TP), pipeline parallelism (PP), and context parallelism (CP). Use it when pretraining Llama 3.1, DeepSeek V3, or custom models at scale, from 8 to 512+ GPUs, with Float8 training, torch.compile, and distributed checkpointing.
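
For orientation, a typical single-node torchtitan pretraining launch looks like the sketch below. This is a hedged example, not the skill's own code: the torchtitan.train entry point, the config path, and the --job.* / --parallelism.* override flags follow the public pytorch/torchtitan repository layout and may differ between versions.

# Sketch: 8-GPU single-node run. The module path, config location, and
# override flag names are assumptions from the pytorch/torchtitan repo
# layout; verify them against your checkout before running.
CONFIG_FILE="./torchtitan/models/llama3/train_configs/llama3_8b.toml"
torchrun --nnodes 1 --nproc_per_node 8 \
    --rdzv_backend c10d --rdzv_endpoint "localhost:0" \
    -m torchtitan.train \
    --job.config_file "${CONFIG_FILE}" \
    --parallelism.data_parallel_shard_degree 4 \
    --parallelism.tensor_parallel_degree 2

Scaling out uses the same command with more nodes and larger degrees; the product of all parallelism degrees must equal the total GPU count.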

v1.0.0 · 1 install

Signing

Signed: SLSA L2
Signed by: skills-hub.ai distributor
Method: Distributor-signed by skills-hub.ai. Cryptographically signed by the skills-hub.ai distributor key at publish time.
Install this skill

Run this command in your terminal. No account required — it auto-detects your AI tool and installs the skill file.

npx @skills-hub-ai/cli install ai-research-distributed-llm-pretraining-torchtitan

Setup by platform

Claude Code

~/.claude/skills/<skill>/SKILL.md


Run in your project root

npx @skills-hub-ai/cli install ai-research-distributed-llm-pretraining-torchtitan --target claude-code
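
After the install completes, you can confirm the skill file landed at the path above. A minimal check; the <skill> placeholder stands for the directory the installer creates and is left as-is because the exact name depends on your install:

# Confirm the skill file exists where Claude Code looks for it,
# then inspect its header (the skill name and description sit at the top).
ls ~/.claude/skills/<skill>/SKILL.md
head ~/.claude/skills/<skill>/SKILL.md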

