---
license: apache-2.0
datasets:
- common-pile/comma_v0.1_training_dataset
language:
- en
base_model:
- common-pile/comma-v0.1-2t
pipeline_tag: text-generation
---

## Model Description


This repository contains an EXL2 quantization of Comma v0.1-2T.

- **Quantization:** EXL2, 4.0 bits per weight
- **max_seq_len:** 4096

Comma v0.1-2T is a 7 billion parameter language model trained on 2 trillion tokens from [the Comma v0.1 dataset](https://huggingface.co/datasets/common-pile/comma_v0.1_training_dataset), comprising openly licensed text from [the Common Pile](https://huggingface.co/collections/common-pile/common-pile-v01-68307d37df48e36f02717f21). Comma v0.1-2T is a "base model" that can be used as a starting point for fine-tuning and post-training.

### Model Sources

- **Base repository:** https://huggingface.co/common-pile/comma-v0.1-2t
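
The quantized weights are intended to be loaded with the ExLlamaV2 library. The snippet below is a minimal sketch using exllamav2's dynamic generator API; the local model path and the prompt are illustrative, and details may vary with your exllamav2 version and hardware.

```python
# Minimal sketch: load this EXL2 quant with the exllamav2 dynamic generator.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

# Hypothetical local path: point it at a download of this repository.
model_dir = "/path/to/comma-v0.1-2t-exl2-4.0bpw"

config = ExLlamaV2Config(model_dir)
config.max_seq_len = 4096  # matches the max_seq_len listed above

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

# Base model: plain text completion, no chat template.
output = generator.generate(
    prompt="The Common Pile is",
    max_new_tokens=128,
    add_bos=True,
)
print(output)
```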


## Citation

```bibtex
@article{kandpal2025common,
  title={{The Common Pile v0.1: An 8TB Dataset of Public Domain and Openly Licensed Text}},
  author={Nikhil Kandpal and Brian Lester and Colin Raffel and Sebastian Majstorovic and Stella Biderman and Baber Abbasi and Luca Soldaini and Enrico Shippole and A. Feder Cooper and Aviya Skowron and Shayne Longpre and Lintang Sutawika and Alon Albalak and Zhenlin Xu and Guilherme Penedo and Loubna Ben  and Elie Bakouch and John David  and Honglu Fan and Dashiell Stander and Guangyu Song and Aaron Gokaslan and John Kirchenbauer and Tom Goldstein and Brian R and Bhavya Kailkhura and Tyler Murray},
  journal={arXiv preprint},
  year={2025}
}
```