# Step-by-Step Setup Guide for Hugging Face Space
This guide will walk you through deploying your Neural Pong demo to Hugging Face Spaces.
## Prerequisites
- A Hugging Face account (sign up at https://huggingface.co/join)
- The model checkpoint file (`ckpt-step=053700-metric=0.00092727.pt`)
- Git installed on your machine (for uploading files)
## Step 1: Prepare Your Checkpoint File
First, you need to decide how to handle the model checkpoint. You have two main options:
### Option A: Include Checkpoint in Repository (Simplest)
1. Locate your checkpoint file:

   ```bash
   # Check if the file exists
   ls /share/u/wendler/code/toy-wm/experiments/radiant-forest-398/ckpt-step=053700-metric=0.00092727.pt
   ```

2. Copy it to the hf-space directory:

   ```bash
   mkdir -p /share/u/wendler/code/toy-wm/hf-space/checkpoints
   cp /share/u/wendler/code/toy-wm/experiments/radiant-forest-398/ckpt-step=053700-metric=0.00092727.pt \
      /share/u/wendler/code/toy-wm/hf-space/checkpoints/
   ```

3. Update the config file to point to the new location:

   ```yaml
   checkpoint: "checkpoints/ckpt-step=053700-metric=0.00092727.pt"
   ```
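Before pushing, a quick local sanity check can confirm that the configured path resolves. A minimal sketch, assuming PyYAML is installed, the config lives at `configs/inference.yaml`, and it has a top-level `checkpoint` key (the function name is illustrative):

```python
# Hypothetical sanity check (run from the hf-space directory): verify that
# the checkpoint path written into configs/inference.yaml points to a real file.
import os
import yaml  # PyYAML, assumed to be installed

def checkpoint_exists(config_path: str = "configs/inference.yaml") -> bool:
    """Return True if the 'checkpoint' path in the config exists on disk."""
    with open(config_path) as f:
        cfg = yaml.safe_load(f)
    return os.path.isfile(cfg["checkpoint"])
```

Running this before committing catches the most common build failure (a wrong checkpoint path) without waiting for a full Space rebuild.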
### Option B: Upload to Hugging Face Hub (Better for Large Files)
1. Install the Hugging Face Hub client:

   ```bash
   pip install huggingface-hub
   ```

2. Log in to Hugging Face:

   ```bash
   huggingface-cli login
   ```

3. Create a model repository and upload the checkpoint:

   ```bash
   # Create a repository (replace YOUR_USERNAME with your HF username)
   huggingface-cli repo create YOUR_USERNAME/neural-pong-checkpoint --type model

   # Upload the checkpoint
   huggingface-cli upload YOUR_USERNAME/neural-pong-checkpoint \
       /share/u/wendler/code/toy-wm/experiments/radiant-forest-398/ckpt-step=053700-metric=0.00092727.pt \
       ckpt-step=053700-metric=0.00092727.pt
   ```

4. Modify the checkpoint-loading code to download from the Hub (covered in Step 2)
## Step 2: Update Configuration Files
**If using Option A (checkpoint in repo):**

Update `configs/inference.yaml`:

```yaml
checkpoint: "checkpoints/ckpt-step=053700-metric=0.00092727.pt"
```
**If using Option B (HF Hub):**

You'll need to modify `app.py` to download the checkpoint from the Hub at startup.
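The change to `app.py` can be small. A hedged sketch, assuming the model repo from Step 1 Option B and `huggingface_hub` installed (the repo id, checkpoint name, and function name below are placeholders):

```python
import os

# Placeholders: the Option B model repo and checkpoint name from this guide.
REPO_ID = "YOUR_USERNAME/neural-pong-checkpoint"
CKPT_NAME = "ckpt-step=053700-metric=0.00092727.pt"

def resolve_checkpoint() -> str:
    """Return a local checkpoint path, downloading from the Hub if needed."""
    local = os.path.join("checkpoints", CKPT_NAME)
    if os.path.isfile(local):  # Option A layout: checkpoint shipped in the repo
        return local
    # Option B: fetch from the Hub; the file is cached after the first call
    from huggingface_hub import hf_hub_download
    return hf_hub_download(repo_id=REPO_ID, filename=CKPT_NAME)
```

`hf_hub_download` caches under `~/.cache/huggingface` by default, so restarts of the Space do not re-download unless the cache is wiped.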
## Step 3: Create a Hugging Face Space
1. Go to Hugging Face Spaces: https://huggingface.co/spaces
2. Click "Create new Space"
3. Fill in the details:
   - Space name: `neural-pong` (or your preferred name)
   - SDK: Select Docker
   - Hardware: Select GPU (T4 small or larger)
   - Visibility: Public or Private (your choice)
4. Click "Create Space"
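The same Space can also be created programmatically. A sketch assuming `huggingface_hub` is installed and you are logged in; the function name and defaults are illustrative, not part of the project:

```python
def create_pong_space(username: str, name: str = "neural-pong") -> str:
    """Create a Docker-SDK Space on the Hub and return its repo id."""
    from huggingface_hub import create_repo  # assumes huggingface_hub is installed
    repo_id = f"{username}/{name}"
    # repo_type="space" plus space_sdk="docker" mirrors the web-form choices above
    create_repo(repo_id, repo_type="space", space_sdk="docker")
    return repo_id
```

This is convenient when scripting multiple deployments; for a one-off setup the web form above is simpler.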
## Step 4: Upload Files to the Space
You have two options:
### Option A: Using Git (Recommended)
1. Initialize git in your `hf-space` directory:

   ```bash
   cd /share/u/wendler/code/toy-wm/hf-space
   git init
   git add .
   git commit -m "Initial commit"
   ```

2. Add the Hugging Face remote:

   ```bash
   # Replace YOUR_USERNAME and SPACE_NAME with your values
   git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/SPACE_NAME
   ```

3. Push to Hugging Face:

   ```bash
   git push -u origin main
   ```
### Option B: Using Web Interface
- Go to your Space page on Hugging Face
- Click the "Files" tab
- Click "Add file" → "Upload files"
- Drag and drop all files from the `hf-space` directory
- Click "Commit changes"
**Note:** For large checkpoint files, Git is recommended, as the web interface has upload size limits; files larger than about 10 MB must be stored via Git LFS on the Hub.
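Because of that LFS requirement, the checkpoint needs an LFS rule committed (via `.gitattributes`) before the first push. A sketch of the standard rule, assuming `git lfs install` has already been run:

```
*.pt filter=lfs diff=lfs merge=lfs -text
```

Running `git lfs track "*.pt"` inside the repository writes this line for you; remember to `git add .gitattributes` along with the checkpoint.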
## Step 5: Configure the Space
1. Go to your Space settings (click the gear icon)
2. Check the important settings:
   - Hardware: Ensure GPU is selected (T4 small minimum)
   - Environment variables: None needed for the basic setup
   - Storage: If using Option B, you may want persistent storage
3. Save the settings
## Step 6: Wait for Build and Deployment
After pushing files, Hugging Face will automatically:
- Build the Docker image
- Install dependencies
- Start your application
Monitor the build:
- Go to your Space page
- Click "Logs" tab to see build progress
- Look for any errors
Expected build time: 5-15 minutes depending on dependencies
## Step 7: Test Your Space
- Once the build completes, your Space will be live
- Visit your Space URL: `https://huggingface.co/spaces/YOUR_USERNAME/SPACE_NAME`
- Test the application:
  - Wait for the model to load (loading spinner)
  - Click "Start Stream"
  - Use the arrow keys or WASD to control the paddle
  - Verify frames are generating correctly
## Troubleshooting
### Build Fails
- Check the logs in the Space's "Logs" tab
- Common issues:
  - Missing dependencies in `requirements.txt`
  - Dockerfile syntax errors
  - Checkpoint file not found (check the path in `inference.yaml`)
### Model Won't Load
- Check the checkpoint path in `configs/inference.yaml`
- Verify the checkpoint file exists in the repository
- Check GPU availability in the Space settings
### Port Issues
- The app uses port 7860 (the HF Spaces default)
- If you see port errors, check the `PORT` environment variable
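A small sketch of how the app might resolve its port, falling back to the Spaces default of 7860 (the function name is illustrative, not from the project):

```python
import os

def get_port(default: int = 7860) -> int:
    """Read the PORT environment variable, falling back to the Spaces default."""
    return int(os.environ.get("PORT", default))
```

Honoring `PORT` while defaulting to 7860 keeps the app working both on Spaces and in local Docker runs with `-p` mapped to another port.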
### Out of Memory
- Reduce batch size or model size
- Upgrade to larger GPU in Space settings
- Check if checkpoint is too large (consider Option B)
## Quick Reference Commands

```bash
# Navigate to hf-space directory
cd /share/u/wendler/code/toy-wm/hf-space

# Check files are ready
ls -la

# Test Docker build locally (optional)
docker build -t neural-pong .
docker run -p 7860:7860 neural-pong

# Git setup (if using Git)
git init
git add .
git commit -m "Initial commit"
git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/SPACE_NAME
git push -u origin main
```
## Next Steps
After successful deployment:
- Share your Space with others
- Monitor usage in the Space analytics
- Update as needed by pushing new commits
- Consider adding:
  - Better error handling
  - More configuration options
  - Performance optimizations
## Need Help?
- Check Hugging Face Spaces docs: https://huggingface.co/docs/hub/spaces
- Review your Space logs for errors
- Test locally with Docker first to catch issues early