Getting Started
- Create an account at runpod.io and add credits via credit card or cryptocurrency.
- Launch a GPU pod by selecting your desired GPU type (A100, H100, RTX 4090, etc.) and a pre-built template.
- Connect to your pod via SSH, Jupyter Notebook, or VS Code for interactive development and training.
- Deploy a serverless endpoint for production inference by uploading your model and configuring autoscaling.
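The serverless deployment step above centers on a handler function that receives a job payload and returns a result. A minimal sketch is below; the model call is a stub, and in an actual deployment the function would be registered with RunPod's Python SDK (e.g. via `runpod.serverless.start`), which is not shown here so the sketch stays self-contained.

```python
# Minimal sketch of a serverless inference handler.
# The "model" here is a placeholder echo; a real handler would load
# weights once at startup and run inference inside this function.

def handler(event):
    """Receive a job payload and return an inference result.

    RunPod-style job payloads carry user data under the "input" key;
    the structure of "input" itself is up to the endpoint author.
    """
    prompt = event.get("input", {}).get("prompt", "")
    # Placeholder for the actual model call.
    result = f"echo: {prompt}"
    return {"output": result}
```

Keeping the handler a plain function makes it easy to unit-test locally before wiring it to the serverless runtime.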
Key Features
- Competitive GPU pricing offers A100s, H100s, and consumer GPUs at prices significantly below major cloud providers.
- Serverless GPU endpoints deploy inference APIs with automatic scaling, pay-per-second billing, and minimal cold-start latency.
- Pre-built templates provide one-click deployment of popular frameworks like PyTorch, TensorFlow, and Stable Diffusion.
- Spot and on-demand instances let you choose between cheaper interruptible spot capacity and guaranteed on-demand GPU access.
- Network storage provides persistent volumes that survive pod restarts, for storing datasets, checkpoints, and model weights.
- Community cloud gives access to a distributed network of independent GPU providers for additional capacity and competitive pricing.
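To illustrate the serverless-endpoint feature from the list above, here is a sketch of assembling a synchronous inference request. The endpoint ID and API key are placeholders, and the `api.runpod.ai/v2/.../runsync` URL shape is an assumption about the HTTP API; only request construction is shown, so no network call is made.

```python
import json

# Hypothetical credentials -- replace with your own endpoint ID and API key.
ENDPOINT_ID = "your-endpoint-id"
API_KEY = "your-api-key"

def build_sync_request(prompt):
    """Assemble URL, headers, and JSON body for a synchronous inference call.

    The /runsync path blocks until the job completes; an async variant
    would submit the job and poll for status separately.
    """
    url = f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"input": {"prompt": prompt}})
    return url, headers, body
```

The returned triple can be passed to any HTTP client (`requests.post(url, headers=headers, data=body)`, for instance).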
Related Tools
Lambda Labs
AI / AI Hardware & GPUs
GPU cloud and workstations purpose-built for AI/ML
paid
web
Aider
AI / AI Coding Tools
Terminal-based AI pair programmer that edits code in your git repo
oss
web git
Bolt.new
AI / AI Coding Tools
Full-stack web app builder — prompt to deployed app in minutes
freemium
web git