# Byte-Level BPE Tokenizer: numeric (10K)

A Byte-Level BPE tokenizer trained on numeric data from Fineweb-2-HQ.

## Training Details

| Parameter | Value |
|---|---|
| Algorithm | Byte-Level BPE |
| Language | numeric |
| Target Vocab Size | 10,007 |
| Final Vocab Size | 10,007 |
| Pre-tokenizer | byte_level |
| Number handling | rtl_4digit |
| Contraction handling | False |
| Normalizer | NONE |
| Special Tokens | `<s>`, `</s>`, `<pad>`, `<unk>` |
| Training Shards | 1 |
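
The `rtl_4digit` setting splits each run of digits into groups of at most four, working right to left, before BPE merges are applied, so `1234500119` becomes `12`, `3450`, `0119` (consistent with the Sample Encoding below). The following is a minimal sketch of that pre-tokenization step under this assumed grouping rule; the function name and implementation are illustrative, not the trainer's actual code:

```python
import re

def rtl_4digit_chunks(text: str) -> list[str]:
    """Illustrative sketch: split digit runs into <=4-digit groups, right to left.

    Example: "1234500119" -> ["12", "3450", "0119"].
    """
    out = []
    for piece in re.split(r"(\d+)", text):
        if piece.isdigit():
            # The leftmost group absorbs the remainder so groups align to the right.
            first = len(piece) % 4 or 4
            out.append(piece[:first])
            out.extend(piece[i:i + 4] for i in range(first, len(piece), 4))
        elif piece:
            out.append(piece)
    return out

print(rtl_4digit_chunks("1234500119 mod 67"))
# ['12', '3450', '0119', ' mod ', '67']
```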

## Usage

```python
from transformers import AutoTokenizer

# Replace the placeholder with this tokenizer's repository ID on the Hub.
tokenizer = AutoTokenizer.from_pretrained("<repo-id>")
tokens = tokenizer.encode("1234500119 mod 67")
```
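
A few optional sanity checks against the training details above; the expected values come from the table, and the exact contents and ordering of `all_special_tokens` may differ by configuration:

```python
# Assumes `tokenizer` was loaded as shown above.
print(len(tokenizer))                # expected: 10007 (final vocab size)
print(tokenizer.all_special_tokens)  # expected to include <s>, </s>, <pad>, <unk>

ids = tokenizer.encode("1234500119 mod 67")
# A byte-level tokenizer should recover the input text exactly.
print(tokenizer.decode(ids, skip_special_tokens=True))
```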

## Files

- `tokenizer.json`: full HuggingFace tokenizer
- `vocab.json`: vocabulary mapping
- `merges.txt`: BPE merge rules
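
The `tokenizer.json` file can also be loaded directly with the `tokenizers` library, bypassing `transformers` entirely. A minimal sketch, assuming a local copy of the file:

```python
from tokenizers import Tokenizer

# Load the standalone tokenizer definition (assumes tokenizer.json
# has been downloaded locally from this repository).
tok = Tokenizer.from_file("tokenizer.json")
enc = tok.encode("1234500119 mod 67")
print(enc.tokens)  # BPE tokens, e.g. the 4-digit groups shown below
print(enc.ids)     # corresponding token IDs
```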

## Sample Encoding

| Text | Tokens | Token IDs |
|---|---|---|
| `1234500119 mod 67` | `12`, `3450`, `0119`, *space*, `mod`, *space*, `67` | 18, 2396, 9974, 6, 4, 6, 53 |

The two blank entries in the token list are space tokens; both map to ID 6.