Datasets:
GSM8K: Grade School Math 8K (Instruction Format)
Dataset Description
Dataset Summary
This is a reformatted version of OpenAI's GSM8K (Grade School Math 8K) dataset, converted into an instruction-following format suitable for training and evaluating large language models on mathematical reasoning tasks.
GSM8K is a dataset of 8.5K high-quality, linguistically diverse grade school math word problems created by human problem writers. The problems require multi-step reasoning and basic arithmetic operations (addition, subtraction, multiplication, division) to solve. A bright middle school student should be able to solve every problem in this dataset.
Key Features:
- 📊 7,473 training examples + 1,319 test examples
- 🎯 Instruction-following format ready for LLM fine-tuning
- 📝 Step-by-step solutions with reasoning chains
- 🔢 Clean extraction of final numeric answers
- 🎓 Includes both standard and Socratic (guided questioning) formats
Original Source
This dataset is derived from:
- Original Repository: openai/grade-school-math
- Paper: Training Verifiers to Solve Math Word Problems
- Authors: Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, et al. (OpenAI)
- License: MIT License
Supported Tasks
- Instruction Following: Training models to follow mathematical problem-solving instructions
- Chain-of-Thought Reasoning: Learning to break down complex problems into steps
- Mathematical Reasoning: Multi-step arithmetic problem solving
- Question Answering: Extracting and computing numerical answers
- Educational AI: Building tutoring systems and educational assistants
Languages
The dataset is in English (en).
Dataset Structure
Data Format
Each example in the instruction format contains:
```json
{
  "id": "gsm8k_train_0001",
  "source": "GSM8K",
  "split": "train",
  "instruction": "Solve the following math word problem. Show your work step by step.",
  "input": "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?",
  "output": "Natalia sold 48/2 = 24 clips in May.\nNatalia sold 48+24 = 72 clips altogether in April and May.",
  "final_answer": "72"
}
```
Data Fields
- `id`: Unique identifier for the example (e.g., "gsm8k_train_0001")
- `source`: Dataset source ("GSM8K" or "GSM8K-Socratic")
- `split`: Data split ("train" or "test")
- `instruction`: The instruction prompt for the model
- `input`: The math word problem to solve
- `output`: Step-by-step solution showing the reasoning
- `final_answer`: The final numeric answer (extracted from the solution)
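As a quick sanity check, a record from any of the JSONL files can be validated against this schema. A minimal sketch, using the example record shown on this card:

```python
import json

# The example record from this card, as one line of a JSONL file.
record = json.loads(
    '{"id": "gsm8k_train_0001", "source": "GSM8K", "split": "train", '
    '"instruction": "Solve the following math word problem. Show your work step by step.", '
    '"input": "Natalia sold clips to 48 of her friends in April, and then she sold half as many '
    'clips in May. How many clips did Natalia sell altogether in April and May?", '
    '"output": "Natalia sold 48/2 = 24 clips in May.\\nNatalia sold 48+24 = 72 clips altogether in April and May.", '
    '"final_answer": "72"}'
)

# The seven fields documented above, and nothing else.
EXPECTED_FIELDS = {"id", "source", "split", "instruction", "input", "output", "final_answer"}
assert set(record) == EXPECTED_FIELDS
assert record["split"] in ("train", "test")
```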
Data Splits
| Split | Standard | Socratic | Total |
|---|---|---|---|
| Train | 7,473 | 7,473 | 14,946 |
| Test | 1,319 | 1,319 | 2,638 |
| Total | 8,792 | 8,792 | 17,584 |
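The totals in the table follow from simple arithmetic over the per-split counts:

```python
# Split sizes from the table above.
train, test = 7_473, 1_319

per_format_total = train + test     # standard (or Socratic) examples
grand_total = per_format_total * 2  # standard + Socratic combined

assert per_format_total == 8_792
assert grand_total == 17_584
```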
Files
Standard Format
- `gsm8k_train.jsonl` - Training set with standard solutions
- `gsm8k_test.jsonl` - Test set with standard solutions
Socratic Format
- `gsm8k_train_socratic.jsonl` - Training set with Socratic subquestions
- `gsm8k_test_socratic.jsonl` - Test set with Socratic subquestions
The Socratic format includes automatically generated subquestions before each reasoning step, designed to guide the problem-solving process more explicitly.
Dataset Creation
Source Data Curation
The original GSM8K dataset was created by OpenAI with human problem writers who crafted linguistically diverse word problems that:
- Take between 2 and 8 steps to solve
- Require only basic arithmetic operations
- Are appropriate for grade school level (ages 6-14)
- Test multi-step reasoning ability
Conversion Process
This instruction-format version was created by:
- Cleaning: Removed calculator annotations (e.g., `<<48/2=24>>`) from solutions
- Parsing: Extracted step-by-step reasoning and final numeric answers
- Reformatting: Structured data into instruction-input-output format
- Standardization: Added metadata (source, split, unique IDs)
The conversion preserves all original problems and solutions while making the format more suitable for modern instruction-tuned language models.
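For illustration, the Cleaning step above can be done with a single regular expression. This is a minimal sketch of the idea, not the exact conversion script:

```python
import re

def strip_calculator_annotations(solution: str) -> str:
    """Remove GSM8K calculator annotations such as <<48/2=24>>."""
    return re.sub(r"<<[^>]*>>", "", solution)

raw = "Natalia sold 48/2 = <<48/2=24>>24 clips in May."
print(strip_calculator_annotations(raw))
# → Natalia sold 48/2 = 24 clips in May.
```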
Usage
Loading the Dataset
```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("notefill/gsm8k-instruction")

# Load specific splits
train_data = load_dataset("notefill/gsm8k-instruction", data_files="gsm8k_train.jsonl")
test_data = load_dataset("notefill/gsm8k-instruction", data_files="gsm8k_test.jsonl")

# Load the Socratic version
train_socratic = load_dataset("notefill/gsm8k-instruction", data_files="gsm8k_train_socratic.jsonl")
```
Fine-tuning Example
```python
from datasets import load_dataset

# Load training data
dataset = load_dataset("notefill/gsm8k-instruction", data_files="gsm8k_train.jsonl")

# Format for instruction tuning
def format_prompt(example):
    return {
        "prompt": f"{example['instruction']}\n\n{example['input']}",
        "completion": f"{example['output']}\n\nFinal Answer: {example['final_answer']}",
    }

formatted_dataset = dataset.map(format_prompt)
```
Evaluation Example
```python
import re

def extract_final_answer(text):
    """Extract the last number in a model output as the final answer."""
    matches = re.findall(r"-?\d[\d,]*(?:\.\d+)?", text)
    return matches[-1].replace(",", "") if matches else None

def evaluate_accuracy(predictions, ground_truth):
    """Calculate exact-match accuracy of predictions."""
    correct = 0
    for pred, truth in zip(predictions, ground_truth):
        if extract_final_answer(pred) == truth["final_answer"]:
            correct += 1
    return correct / len(predictions)
```
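Putting the evaluation idea together on a couple of hypothetical model outputs (self-contained, with its own simple last-number extractor):

```python
import re

def last_number(text):
    """Return the last number in a string, with thousands separators stripped."""
    nums = re.findall(r"-?\d[\d,]*(?:\.\d+)?", text)
    return nums[-1].replace(",", "") if nums else None

# Hypothetical model outputs and the matching ground-truth records.
predictions = [
    "Natalia sold 24 clips in May, so 48 + 24 = 72.\n\nFinal Answer: 72",
    "She earned $12.\n\nFinal Answer: 12",
]
ground_truth = [{"final_answer": "72"}, {"final_answer": "10"}]

correct = sum(
    last_number(p) == t["final_answer"] for p, t in zip(predictions, ground_truth)
)
print(correct / len(predictions))  # → 0.5
```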
Problem Characteristics
Difficulty Distribution
- Steps Required: 2-8 steps per problem
- Operations: Addition, subtraction, multiplication, division
- Context: Real-world scenarios (shopping, cooking, travel, etc.)
- Complexity: Progressive difficulty suitable for grade school students
Example Problems
Simple (2-3 steps):
"Weng earns $12 an hour for babysitting. Yesterday, she just did 50 minutes of babysitting. How much did she earn?"
Medium (4-5 steps):
"Betty is saving money for a new wallet which costs $100. Betty has only half of the money she needs. Her parents decided to give her $15 for that purpose, and her grandparents twice as much as her parents. How much more money does Betty need to buy the wallet?"
Complex (6-8 steps):
"Mark has a garden with flowers. He planted plants of three different colors in it. Ten of them are yellow, and there are 80% more of those in purple. There are only 25% as many green flowers as there are yellow and purple flowers. How many flowers does Mark have in his garden?"
Considerations for Using the Data
Recommended Uses
✅ Training instruction-following language models
✅ Evaluating mathematical reasoning capabilities
✅ Developing chain-of-thought prompting strategies
✅ Building educational AI tutoring systems
✅ Research in multi-step reasoning
✅ Benchmarking model arithmetic abilities
Limitations
⚠️ Calculation Errors: Models may struggle with arithmetic despite correct reasoning
⚠️ Problem Complexity: Limited to grade school difficulty
⚠️ Domain Coverage: Focuses on word problems, not pure mathematics
⚠️ Language: English only
⚠️ Cultural Context: May reflect specific cultural references
Ethical Considerations
- The dataset is designed for research and educational purposes
- Solutions demonstrate reasoning processes that can be learned and replicated
- Care should be taken when deploying models trained on this data in educational settings
- Human oversight is recommended for student-facing applications
Benchmark Performance
From the original GSM8K paper, model performance on the test set:
| Model | Accuracy |
|---|---|
| GPT-3 175B | 34.2% |
| GPT-3 175B (Verification) | 55.0% |
| GPT-3 6B | 19.7% |
| GPT-3 6B (Verification) | 33.4% |
(Note: These are results from the original 2021 paper. Newer models may perform differently.)
Citation
Original GSM8K Dataset
```bibtex
@article{cobbe2021gsm8k,
  title={Training Verifiers to Solve Math Word Problems},
  author={Cobbe, Karl and Kosaraju, Vineet and Bavarian, Mohammad and Chen, Mark and Jun, Heewoo and Kaiser, Lukasz and Plappert, Matthias and Tworek, Jerry and Hilton, Jacob and Nakano, Reiichiro and Hesse, Christopher and Schulman, John},
  journal={arXiv preprint arXiv:2110.14168},
  year={2021}
}
```
This Instruction Format Version
```bibtex
@dataset{gsm8k_instruction2025,
  title={GSM8K: Grade School Math 8K (Instruction Format)},
  author={Kuyeso Rogers and Adiza Alhassan and Notefill},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/notefill/gsm8k-instruction}},
  note={Instruction-following format conversion of OpenAI's GSM8K dataset}
}
```
Licensing Information
This dataset retains the MIT License from the original GSM8K dataset.
MIT License
Copyright (c) 2021 OpenAI
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
Additional Resources
- Original Repository: github.com/openai/grade-school-math
- Original Paper: arxiv.org/abs/2110.14168
- OpenAI Blog Post: openai.com/blog/grade-school-math
Acknowledgments
We gratefully acknowledge:
- OpenAI for creating and releasing the original GSM8K dataset
- The original authors: Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman
- The human problem writers who created these high-quality educational problems
Contact
For questions or feedback about this instruction-format version, please open an issue on the dataset repository.
For questions about the original GSM8K dataset, please refer to the original repository.