

AI Research Skills skills

80 AI research skills — literature survey, ideation, experiment execution, paper writing.

skills-hub.ai mirrors 98 skills from AI Research Skills daily — every skill links back to its upstream GitHub source. Install with one command across Claude Code, Cursor, Codex, Windsurf, and any MCP-compatible tool.

Upstream: github.com/Orchestra-Research/AI-Research-SKILLs

Installing an AI Research Skills skill

Pick a skill below, then run the install command for your AI coding tool. The skills-hub CLI writes the SKILL.md to the right directory and tracks the install in .skills.json so your team gets reproducible installs.

# Install an AI Research Skills skill
npx @skills-hub-ai/cli install <skill-slug>

# Browse all AI Research Skills skills via API
curl "https://skills-hub.ai/api/v1/skills?source=ai-research"

# Browse all sources
open https://skills-hub.ai/sources
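The lockfile bookkeeping described above can be sketched roughly as follows. Note that the `.skills.json` schema used here (a top-level `"skills"` map with `version` and `source` fields) is an illustrative assumption, not the CLI's documented format:

```python
import json
from pathlib import Path

def record_install(lockfile: Path, slug: str, version: str) -> None:
    """Record an installed skill so teammates get reproducible installs.
    The schema (a top-level "skills" map) is hypothetical."""
    data = json.loads(lockfile.read_text()) if lockfile.exists() else {"skills": {}}
    data["skills"][slug] = {"version": version, "source": "ai-research"}
    lockfile.write_text(json.dumps(data, indent=2, sort_keys=True) + "\n")

# Example: pin a skill at a version in the project lockfile
record_install(Path(".skills.json"), "12faiss", "1.0.0")
```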

Top AI Research Skills skills


The most-installed skills from AI Research Skills, ranked by adoption.

  1. 01ml-paper-writing

    2 installs

    Write publication-ready ML/AI papers for NeurIPS, ICML, ICLR, ACL, AAAI, COLM. Use when drafting papers from research repos, structuring arguments, verifying citations, or preparing camera-ready submissions. For systems venues (OSDI, NSDI, ASPLOS, SOSP), use systems-paper-writing instead.

    Research · from AI Research Skills
  2. 02langchain

    2 installs

    Framework for building LLM-powered applications with agents, chains, and RAG. Supports multiple providers (OpenAI, Anthropic, Google), 500+ integrations, ReAct agents, tool calling, memory management, and vector store retrieval. Use for building chatbots, question-answering systems, autonomous agents, or RAG applications. Best for rapid prototyping and production deployments.

    Research · from AI Research Skills
  3. 03autogpt-agents

    2 installs

    Autonomous AI agent platform for building and deploying continuous agents. Use when creating visual workflow agents, deploying persistent autonomous agents, or building complex multi-step AI automation systems.

    Research · from AI Research Skills
  4. 04openrlhf-training

    1 install

    High-performance RLHF framework with Ray+vLLM acceleration. Use for PPO, GRPO, RLOO, DPO training of large models (7B-70B+). Built on Ray, vLLM, ZeRO-3. 2× faster than DeepSpeedChat with distributed architecture and GPU resource sharing.

    Research · from AI Research Skills
  5. 05unsloth

    1 install

    Expert guidance for fast fine-tuning with Unsloth: 2-5x faster training, 50-80% less memory, and LoRA/QLoRA optimization.

    Research · from AI Research Skills
  6. 06ray-data

    1 install

    Scalable data processing for ML workloads. Streaming execution across CPU/GPU, supports Parquet/CSV/JSON/images. Integrates with Ray Train, PyTorch, TensorFlow. Scales from single machine to 100s of nodes. Use for batch inference, data preprocessing, multi-modal data loading, or distributed ETL pipelines.

    Research · from AI Research Skills
  7. 07rwkv-architecture

    1 install

    RNN+Transformer hybrid with O(n) inference. Linear time, infinite context, no KV cache. Train like GPT (parallel), infer like an RNN (sequential). Linux Foundation AI project, in production in Windows, Office, and NeMo. Latest release: RWKV-7 (March 2025). Models up to 14B parameters.

    Research · from AI Research Skills
  8. 08tensorrt-llm

    1 install

    Optimizes LLM inference with NVIDIA TensorRT for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100), when you need 10-100x faster inference than PyTorch, or for serving models with quantization (FP8/INT4), in-flight batching, and multi-GPU scaling.

    Research · from AI Research Skills
  9. 09creative-thinking-for-research

    1 install

    Applies cognitive science frameworks for creative thinking to CS and AI research ideation. Use when seeking genuinely novel research directions by leveraging combinatorial creativity, analogical reasoning, constraint manipulation, and other empirically grounded creative strategies.

    Research · from AI Research Skills
  10. 10nanogpt

    1 install

    Educational GPT implementation in ~300 lines. Reproduces GPT-2 (124M) on OpenWebText. Clean, hackable code for learning transformers. By Andrej Karpathy. Perfect for understanding GPT architecture from scratch. Train on Shakespeare (CPU) or OpenWebText (multi-GPU).

    Research · from AI Research Skills
  11. 11sentencepiece

    1 install

    Language-independent tokenizer treating text as raw Unicode. Supports BPE and Unigram algorithms. Fast (50k sentences/sec), lightweight (6MB memory), deterministic vocabulary. Used by T5, ALBERT, XLNet, mBART. Train on raw text without pre-tokenization. Use when you need multilingual support, CJK languages, or reproducible tokenization.

    Research · from AI Research Skills
  12. 12faiss

    1 install

    Facebook's library for efficient similarity search and clustering of dense vectors. Supports billions of vectors, GPU acceleration, and various index types (Flat, IVF, HNSW). Use for fast k-NN search, large-scale vector retrieval, or when you need pure similarity search without metadata. Best for high-performance applications.

    Research · from AI Research Skills
  13. 13transformer-lens-interpretability

    1 install

    Provides guidance for mechanistic interpretability research using TransformerLens to inspect and manipulate transformer internals via HookPoints and activation caching. Use when reverse-engineering model algorithms, studying attention patterns, or performing activation patching experiments.

    Research · from AI Research Skills
  14. 14nemo-curator

    1 install

    GPU-accelerated data curation for LLM training. Supports text/image/video/audio. Features fuzzy deduplication (16× faster), quality filtering (30+ heuristics), semantic deduplication, PII redaction, NSFW detection. Scales across GPUs with RAPIDS. Use for preparing high-quality training datasets, cleaning web data, or deduplicating large corpora.

    Research · from AI Research Skills
  15. 15nnsight-remote-interpretability

    1 install

    Provides guidance for interpreting and manipulating neural network internals using nnsight with optional NDIF remote execution. Use when needing to run interpretability experiments on massive models (70B+) without local GPU resources, or when working with any PyTorch architecture.

    Research · from AI Research Skills
  16. 16pinecone

    1 install

    Managed vector database for production AI applications. Fully managed, auto-scaling, with hybrid search (dense + sparse), metadata filtering, and namespaces. Low latency (<100ms p95). Use for production RAG, recommendation systems, or semantic search at scale. Best for serverless, managed infrastructure.

    Research · from AI Research Skills
  17. 17chroma

    1 install

    Open-source embedding database for AI applications. Store embeddings and metadata, perform vector and full-text search, filter by metadata. Simple 4-function API. Scales from notebooks to production clusters. Use for semantic search, RAG applications, or document retrieval. Best for local development and open-source projects.

    Research · from AI Research Skills
  18. 18evaluating-code-models

    1 install

    Evaluates code generation models across HumanEval, MBPP, MultiPL-E, and 15+ benchmarks with pass@k metrics. Use when benchmarking code models, comparing coding abilities, testing multi-language support, or measuring code generation quality. Industry standard from BigCode Project used by HuggingFace leaderboards.

    Research · from AI Research Skills
  19. 19fine-tuning-openvla-oft

    1 install

    Fine-tunes and evaluates OpenVLA-OFT and OpenVLA-OFT+ policies for robot action generation with continuous action heads, LoRA adaptation, and FiLM conditioning on LIBERO simulation and ALOHA real-world setups. Use when reproducing OpenVLA-OFT paper results, training custom VLA action heads (L1 or diffusion), deploying server-client inference for ALOHA, or debugging normalization, LoRA merge, and cross-GPU issues.

    Research · from AI Research Skills
  20. 20lambda-labs-gpu-cloud

    1 install

    Reserved and on-demand GPU cloud instances for ML training and inference. Use when you need dedicated GPU instances with simple SSH access, persistent filesystems, or high-performance multi-node clusters for large-scale training.

    Research · from AI Research Skills
  21. 21qdrant-vector-search

    1 install

    High-performance vector similarity search engine for RAG and semantic search. Use when building production RAG systems requiring fast nearest neighbor search, hybrid search with filtering, or scalable vector storage with Rust-powered performance.

    Research · from AI Research Skills
  22. 22evolving-ai-agents

    1 install

    Provides guidance for automatically evolving and optimizing AI agents across any domain using LLM-driven evolution algorithms. Use when building self-improving agents, optimizing agent prompts and skills against benchmarks, or implementing automated agent evaluation loops.

    Build · from AI Research Skills
  23. 23whisper

    1 install

    OpenAI's general-purpose speech recognition model. Supports 99 languages, transcription, translation to English, and language identification. Six model sizes from tiny (39M params) to large (1550M params). Use for speech-to-text, podcast transcription, or multilingual audio processing. Best for robust, multilingual ASR.

    Research · from AI Research Skills
  24. 24huggingface-tokenizers

    1 install

    Fast tokenizers optimized for research and production. Rust-based implementation tokenizes 1GB in <20 seconds. Supports BPE, WordPiece, and Unigram algorithms. Train custom vocabularies, track alignments, handle padding/truncation. Integrates seamlessly with transformers. Use when you need high-performance tokenization or custom tokenizer training.

    Research · from AI Research Skills
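Several of the entries above are metric-oriented. As a concrete example, the pass@k metric mentioned under evaluating-code-models is conventionally computed with the unbiased estimator 1 - C(n-c, k) / C(n, k); a minimal sketch in plain Python:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of
    k samples is correct, given n generated samples of which c passed.
    Computed as 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer failing samples than draws: some draw must pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# 10 generations per task, 3 passing: pass@1 is the raw success rate
print(f"{pass_at_k(10, 3, 1):.2f}")  # 0.30
```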
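Several other entries above (faiss, pinecone, chroma, qdrant-vector-search) center on the same core operation: nearest-neighbour search over dense vectors. A toy, in-memory sketch of exact L2 search, the operation a flat index computes exactly and approximate indexes (IVF, HNSW) speed up at scale; the data here is illustrative:

```python
import math

def knn(query, vectors, k=2):
    """Exact brute-force k-nearest-neighbour search by L2 distance.
    Returns the indices of the k vectors closest to the query."""
    order = sorted(range(len(vectors)), key=lambda i: math.dist(query, vectors[i]))
    return order[:k]

# Tiny stand-in for a table of document embeddings
embeddings = [(0.0, 0.0), (1.0, 0.0), (0.9, 0.1), (5.0, 5.0)]
print(knn((1.0, 0.1), embeddings))  # [1, 2]
```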

About this source

skills-hub.ai mirrors skills from 90+ official GitHub repositories every day. Each imported skill is parsed from a SKILL.md file in the source repo, gets a security scan and quality score on import, and links back to its upstream source of truth.

Last sync: Apr 30, 2026, 10:13 PM (success).

AI Research Skills skills — frequently asked questions

What are AI Research Skills skills?

AI Research Skills skills are AI coding skills published by AI Research Skills (80 AI research skills — literature survey, ideation, experiment execution, paper writing) and mirrored daily on skills-hub.ai. They are SKILL.md files that follow the open Agent Skills standard, so they work in Claude Code, Cursor, Codex CLI, Windsurf, Copilot, and any MCP-compatible tool.
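Concretely, each of these skills is a markdown file with YAML frontmatter. A minimal sketch of the shape (the field values here are illustrative):

```markdown
---
name: my-skill
description: One-line summary the agent uses to decide when to load this skill.
---

Free-form instructions the agent follows once the skill is loaded.
```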

How many AI Research Skills skills are available?

skills-hub.ai indexes 98 skills from AI Research Skills, synced daily from the upstream GitHub repository (https://github.com/Orchestra-Research/AI-Research-SKILLs).

How do I install an AI Research Skills skill?

Run `npx @skills-hub-ai/cli install <skill-slug>` in your project. The CLI writes the SKILL.md to the right directory for your AI tool and adds it to your `.skills.json` lockfile so your team gets the same skills at the same versions.

Are these official AI Research Skills skills?

Yes. Every skill from this source is mirrored from AI Research Skills's own GitHub repository (https://github.com/Orchestra-Research/AI-Research-SKILLs). Each skill page links back to the upstream source of truth, so you can verify the original.