# Founder Game Classifier
A trained classifier that identifies which of 6 founder games a piece of content belongs to.
## Model Description
This model classifies text content into one of six "founder games" - patterns of communication and content creation common among founders, creators, and thought leaders.
### The 6 Games
| Game | Name | Description |
|---|---|---|
| G1 | Identity/Canon | Recruiting into identity, lineage, belonging, status, canon formation |
| G2 | Ideas/Play Mining | Extracting reusable plays, tactics, heuristics; "do this / steal this" |
| G3 | Models/Understanding | Building mental models, frameworks, mechanisms, explanations |
| G4 | Performance/Competition | Winning, dominance, execution, metrics, endurance, zero-sum edges |
| G5 | Meaning/Therapy | Healing, values, emotional processing, personal transformation |
| G6 | Network/Coordination | Community building, protocols, collaboration, collective action |
## Usage

### Installation

```bash
pip install founder-game-classifier
```
### Basic Usage

```python
from founder_game_classifier import GameClassifier

# Load the model (downloads from the Hub on first use)
classifier = GameClassifier.from_pretrained("leoguinan/founder-game-classifier")

# Classify a single text
result = classifier.predict("Here's a tactic you can steal for your next launch...")
print(result["primary_game"])   # "G2"
print(result["confidence"])     # 0.72
print(result["probabilities"])  # {"G1": 0.05, "G2": 0.72, "G3": 0.10, ...}
```
### Batch Classification

```python
texts = [
    "Here's the mental model I use for thinking about systems...",
    "Join our community of builders who are changing the world...",
    "I tried 47 different tactics. Here's what actually worked...",
]

results = classifier.predict_batch(texts)
for text, result in zip(texts, results):
    print(f"{result['primary_game']}: {text[:50]}...")
```
### Get Aggregate Signature

Useful for analyzing a corpus of content:

```python
texts = load_my_blog_posts()  # List of strings
signature = classifier.get_game_signature(texts)
print(signature)
# {'G1': 0.05, 'G2': 0.42, 'G3': 0.18, 'G4': 0.20, 'G5': 0.08, 'G6': 0.07}
```
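One straightforward way such a corpus-level signature could be computed is by averaging the per-text probability vectors. The sketch below is an illustration of that idea, not the package's actual implementation; `game_signature` here is a hypothetical stand-in for `classifier.get_game_signature`.

```python
# Hypothetical sketch: average per-text probability dicts into one
# corpus-level "game signature". Not the shipped implementation.
GAMES = ["G1", "G2", "G3", "G4", "G5", "G6"]

def game_signature(prob_dicts):
    """Average a list of per-text probability dicts into one signature."""
    totals = {g: 0.0 for g in GAMES}
    for probs in prob_dicts:
        for g in GAMES:
            totals[g] += probs.get(g, 0.0)
    n = len(prob_dicts)
    return {g: round(totals[g] / n, 4) for g in GAMES}
```

Because each input vector sums to 1, the averaged signature does too, so the values can be read directly as corpus-level proportions.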
## Model Architecture

- Embedding Model: all-MiniLM-L6-v2 (384 dimensions)
- Classifier: Logistic Regression (sklearn)
- Manifold System: Mahalanobis distance to game centroids (optional)
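As a rough illustration of the manifold idea: the Mahalanobis distance of an embedding to a game's centroid, under that game's covariance, measures how far the embedding sits from the game's typical region. The `mahalanobis` and `nearest_game` helpers below are illustrative sketches, not the package API.

```python
import numpy as np

def mahalanobis(x, mu, cov):
    """Mahalanobis distance sqrt((x - mu)^T cov^{-1} (x - mu))."""
    diff = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def nearest_game(x, centroids, covariances):
    """Return the game whose centroid is closest in Mahalanobis distance."""
    return min(centroids, key=lambda g: mahalanobis(x, centroids[g], covariances[g]))
```

With an identity covariance this reduces to plain Euclidean distance; the covariance term is what lets each game's region stretch differently along different embedding directions.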
## Training Data
The model was trained on labeled founder content spanning:
- Podcast transcripts
- Blog posts
- Twitter threads
- Newsletter content
Training used a multi-stage pipeline:

1. Text chunking and span extraction
2. LLM-assisted labeling with human verification
3. Embedding generation
4. Classifier training with cross-validation
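The final stage, training the logistic regression classifier on embeddings with cross-validation, can be sketched roughly as follows. This is an illustration of the approach using scikit-learn, with random vectors standing in for real all-MiniLM-L6-v2 embeddings; it is not the actual training script.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in data: one 384-dim embedding per labeled span, with
# game labels G1..G6 encoded as integers 0..5.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 384))
y = rng.integers(0, 6, size=120)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
clf.fit(X, y)                              # final fit on all data
```

Cross-validation gives an honest estimate of held-out performance before the final model is fit on the full labeled set.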
## Performance

Validated on a held-out test set:
| Metric | Score |
|---|---|
| Accuracy | 0.78 |
| Macro F1 | 0.74 |
| Top-2 Accuracy | 0.91 |
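Top-2 accuracy counts a prediction as correct when the true game appears among the two highest-probability classes, which is useful for mixed content that plausibly belongs to two games. A minimal sketch of the metric (the `top2_accuracy` helper is illustrative, not part of the package):

```python
def top2_accuracy(prob_dicts, labels):
    """Fraction of examples whose true label is in the two top-probability games."""
    hits = 0
    for probs, label in zip(prob_dicts, labels):
        top2 = sorted(probs, key=probs.get, reverse=True)[:2]
        hits += label in top2
    return hits / len(labels)
```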
The model performs best on clear examples of each game and may show lower confidence on boundary cases or mixed content.
## Limitations
- Trained primarily on English content from tech/startup domain
- May not generalize well to non-business contexts
- Short texts (<50 words) may have lower accuracy
- Cultural and domain biases from training data
## Citation

```bibtex
@misc{guinan2024foundergameclassifier,
  title={Founder Game Classifier},
  author={Leo Guinan},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/leoguinan/founder-game-classifier}
}
```
## License
MIT License - free for commercial and non-commercial use.
## Files

- `classifier.pkl` - Trained LogisticRegression model (19KB)
- `label_encoder.pkl` - Label encoder for game classes (375B)
- `metadata.json` - Model metadata and configuration (143B)
- `game_manifolds.json` - Manifold centroids and covariances for geometric analysis (29MB)