Felladrin committed on
Commit cdc13b0 · verified · 1 Parent(s): e77bc16

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,129 @@
---
library_name: transformers.js
tags:
- PyTorch
- LaTeX
- Math OCR
- Handwritten Math
metrics:
- cer
base_model:
- tjoab/latex_finetuned
pipeline_tag: image-to-text
---

# latex_finetuned (ONNX)

This is an ONNX version of [tjoab/latex_finetuned](https://huggingface.co/tjoab/latex_finetuned). It was automatically converted and uploaded using [this Hugging Face Space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).

## Usage with Transformers.js

See the pipeline documentation for `image-to-text`: https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ImageToTextPipeline

---

# TrOCR-LaTeX (fine-tuned on math handwriting)

Take your handwritten math and turn it into clean LaTeX code.
This is a fine-tuned version of [`microsoft/trocr-base-handwritten`](https://huggingface.co/microsoft/trocr-base-handwritten),
a transformer-based optical character recognition model, adapted to handwritten math images and structured math syntax.

## Data

Fine-tuned on Google's [`MathWriting`](https://github.com/google-research/google-research/tree/master/mathwriting) dataset, which contains over 500,000 digital inks of handwritten mathematical expressions obtained through either manual labelling or programmatic generation.

## Intended use & limitations

You can use this model for OCR on a **single** math expression.

Performance degrades on very long expressions; because of the image preprocessing, a roughly 3:2 aspect ratio works best.
- To work around this, split the image into sub-images with an expression chunking scheme and process each one separately (a rough sketch follows this list).
- To process **multiple** expressions, you likewise need to chunk them into single expressions first.
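
Below is a minimal sketch of such a chunking scheme, assuming fixed-width sub-images with a small horizontal overlap; the chunk width, overlap, and stitching strategy are illustrative and not part of the released model:

```python
from PIL import Image

def chunk_expression(image: Image.Image, target_ratio: float = 1.5, overlap: int = 32) -> list:
    """Split a wide expression image into sub-images close to a 3:2 (width:height) aspect ratio."""
    width, height = image.size
    chunk_width = int(height * target_ratio)
    if width <= chunk_width:
        return [image]
    chunks, left = [], 0
    while left < width:
        right = min(left + chunk_width, width)
        chunks.append(image.crop((left, 0, right, height)))
        if right == width:
            break
        # Keep a small overlap so symbols cut at a boundary appear in both chunks
        left = right - overlap
    return chunks
```

Each chunk can then be run through the processor and model exactly as in the PyTorch example below, with the decoded LaTeX fragments concatenated (deduplicating whatever falls in the overlap).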
## How to use (PyTorch)

```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

# Helper function (path to either a JPEG or a PNG)
def open_PIL_image(image_path: str) -> Image.Image:
    image = Image.open(image_path)
    if image_path.split('.')[-1].lower() == 'png':
        # Flatten transparent PNGs onto a white background
        image = Image.composite(image, Image.new('RGB', image.size, 'white'), image)
    return image


# Load model and processor from Hugging Face
processor = TrOCRProcessor.from_pretrained('tjoab/latex_finetuned')
model = VisionEncoderDecoderModel.from_pretrained('tjoab/latex_finetuned')


# Load all images as a batch (the file names below are placeholders)
paths = ['equation_1.png', 'equation_2.jpg']
images = [open_PIL_image(path) for path in paths]

# Preprocess the images
preproc_image = processor.image_processor(images=images, return_tensors="pt").pixel_values

# Generate and decode the tokens
# NOTE: the default max_length is very small and often truncates the output if not set
pred_ids = model.generate(preproc_image, max_length=128)
latex_preds = processor.batch_decode(pred_ids, skip_special_tokens=True)
```
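
The decoded strings can then be printed or written out, e.g.:

```python
for path, latex in zip(paths, latex_preds):
    print(f"{path} -> {latex}")
```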

## Training Details
- Mini-batch size: 8
- Optimizer: Adam
- LR scheduler: cosine
- **`fp16` mixed precision**
  - Trained with automatic mixed precision (AMP) via `torch.cuda.amp` for reduced memory usage.
- **Gradient accumulation**
  - Used to simulate a larger effective batch size while keeping per-step memory consumption low.
  - Optimizer steps occurred every 8 mini-batches, for an effective batch size of 64. A rough sketch of this setup follows below.
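
The following is a minimal sketch of that loop, assuming a standard PyTorch setup; `model` is the `VisionEncoderDecoderModel` from above, while `train_dataloader`, the learning rate, and the omitted cosine-scheduler wiring are illustrative rather than the exact training script:

```python
import torch

ACCUM_STEPS = 8  # optimizer step every 8 mini-batches
model.cuda().train()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-5)  # learning rate is illustrative
scaler = torch.cuda.amp.GradScaler()

for step, batch in enumerate(train_dataloader):
    with torch.cuda.amp.autocast():  # fp16 mixed precision
        outputs = model(pixel_values=batch["pixel_values"].cuda(),
                        labels=batch["labels"].cuda())
        # Divide so the accumulated gradient matches an average over the larger batch
        loss = outputs.loss / ACCUM_STEPS

    scaler.scale(loss).backward()

    if (step + 1) % ACCUM_STEPS == 0:
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()
```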

## Evaluation

Performance was evaluated using Character Error Rate (CER), defined as:

`CER = (Substitutions + Insertions + Deletions) / Total Characters in Ground Truth`

#### ✅ Why CER?
- Math expressions are structurally sensitive: changing even a single character can completely change the meaning.
  - `x^2` vs. `x_2`
  - `\frac{a}{b}` vs. `\frac{b}{a}`
- CER therefore penalizes even small errors in syntax. (A sketch of how it can be computed follows below.)

**Evaluation yielded a CER of 14.9%.**
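
A minimal sketch of how such a CER can be computed, using a plain Levenshtein edit-distance implementation so no extra dependency is needed (the example strings are illustrative):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of substitutions, insertions, and deletions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(predictions: list, references: list) -> float:
    edits = sum(levenshtein(p, r) for p, r in zip(predictions, references))
    total = sum(len(r) for r in references)
    return edits / total

print(cer([r"x_2 + \frac{b}{a}"], [r"x^2 + \frac{a}{b}"]))  # counts character-level edits
```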

## BibTeX and Citation

The original TrOCR model was introduced in this paper:

[TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Li et al.

You can find the source code in [their repository](https://github.com/microsoft/unilm/tree/master/trocr).

```bibtex
@misc{li2021trocr,
  title={TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models},
  author={Minghao Li and Tengchao Lv and Lei Cui and Yijuan Lu and Dinei Florencio and Cha Zhang and Zhoujun Li and Furu Wei},
  year={2021},
  eprint={2109.10282},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
config.json ADDED
@@ -0,0 +1,175 @@
{
  "_attn_implementation_autoset": true,
  "_name_or_path": "tjoab/latex_finetuned",
  "architectures": ["VisionEncoderDecoderModel"],
  "decoder": {
    "_attn_implementation_autoset": false,
    "_name_or_path": "",
    "activation_dropout": 0.0,
    "activation_function": "gelu",
    "add_cross_attention": true,
    "architectures": null,
    "attention_dropout": 0.0,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": 0,
    "chunk_size_feed_forward": 0,
    "classifier_dropout": 0.0,
    "cross_attention_hidden_size": 768,
    "d_model": 1024,
    "decoder_attention_heads": 16,
    "decoder_ffn_dim": 4096,
    "decoder_layerdrop": 0.0,
    "decoder_layers": 12,
    "decoder_start_token_id": 2,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "dropout": 0.1,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "eos_token_id": 2,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "id2label": { "0": "LABEL_0", "1": "LABEL_1" },
    "init_std": 0.02,
    "is_decoder": true,
    "is_encoder_decoder": false,
    "label2id": { "LABEL_0": 0, "LABEL_1": 1 },
    "layernorm_embedding": true,
    "length_penalty": 1.0,
    "max_length": 20,
    "max_position_embeddings": 512,
    "min_length": 0,
    "model_type": "trocr",
    "no_repeat_ngram_size": 0,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": 1,
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "scale_embedding": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": "float32",
    "torchscript": false,
    "typical_p": 1.0,
    "use_bfloat16": false,
    "use_cache": false,
    "use_learned_position_embeddings": true,
    "vocab_size": 50265
  },
  "decoder_start_token_id": 0,
  "encoder": {
    "_attn_implementation_autoset": false,
    "_name_or_path": "",
    "add_cross_attention": false,
    "architectures": null,
    "attention_probs_dropout_prob": 0.0,
    "bad_words_ids": null,
    "begin_suppress_tokens": null,
    "bos_token_id": null,
    "chunk_size_feed_forward": 0,
    "cross_attention_hidden_size": null,
    "decoder_start_token_id": null,
    "diversity_penalty": 0.0,
    "do_sample": false,
    "early_stopping": false,
    "encoder_no_repeat_ngram_size": 0,
    "encoder_stride": 16,
    "eos_token_id": null,
    "exponential_decay_length_penalty": null,
    "finetuning_task": null,
    "forced_bos_token_id": null,
    "forced_eos_token_id": null,
    "hidden_act": "gelu",
    "hidden_dropout_prob": 0.0,
    "hidden_size": 768,
    "id2label": { "0": "LABEL_0", "1": "LABEL_1" },
    "image_size": 384,
    "initializer_range": 0.02,
    "intermediate_size": 3072,
    "is_decoder": false,
    "is_encoder_decoder": false,
    "label2id": { "LABEL_0": 0, "LABEL_1": 1 },
    "layer_norm_eps": 1e-12,
    "length_penalty": 1.0,
    "max_length": 20,
    "min_length": 0,
    "model_type": "vit",
    "no_repeat_ngram_size": 0,
    "num_attention_heads": 12,
    "num_beam_groups": 1,
    "num_beams": 1,
    "num_channels": 3,
    "num_hidden_layers": 12,
    "num_return_sequences": 1,
    "output_attentions": false,
    "output_hidden_states": false,
    "output_scores": false,
    "pad_token_id": null,
    "patch_size": 16,
    "prefix": null,
    "problem_type": null,
    "pruned_heads": {},
    "qkv_bias": false,
    "remove_invalid_values": false,
    "repetition_penalty": 1.0,
    "return_dict": true,
    "return_dict_in_generate": false,
    "sep_token_id": null,
    "suppress_tokens": null,
    "task_specific_params": null,
    "temperature": 1.0,
    "tf_legacy_loss": false,
    "tie_encoder_decoder": false,
    "tie_word_embeddings": true,
    "tokenizer_class": null,
    "top_k": 50,
    "top_p": 1.0,
    "torch_dtype": "float32",
    "torchscript": false,
    "typical_p": 1.0,
    "use_bfloat16": false
  },
  "is_encoder_decoder": true,
  "model_type": "vision-encoder-decoder",
  "pad_token_id": 1,
  "processor_class": "TrOCRProcessor",
  "tie_word_embeddings": false,
  "torch_dtype": "float32",
  "transformers_version": "4.49.0",
  "use_cache": false
}
generation_config.json ADDED
@@ -0,0 +1,9 @@
{
  "_from_model_config": true,
  "bos_token_id": 0,
  "decoder_start_token_id": 2,
  "eos_token_id": 2,
  "pad_token_id": 1,
  "transformers_version": "4.49.0",
  "use_cache": false
}
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
onnx/decoder_model.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:697461342091573d56e8dd0d3414e52e01549a44a10801176efbc6e27b56f630
size 1195478750
onnx/decoder_model_bnb4.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f79a283c91944253b398064f05633b8212998feac7903877c1db62ce86f0c7f8
size 348131618
onnx/decoder_model_fp16.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ede0057f4f909941e38f764140c44f7965061cce15e52f3353513251024e53cc
size 597999543
onnx/decoder_model_int8.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:212218ae0e30939fd2a9ab9bc90a6c456f1fcd2c7dd122f89e7973f77673ce3c
size 300116959
onnx/decoder_model_q4.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5f7af5cfa624ffab4b3b8a7fe6cc80890576fdb1e38f8109fdf225437737fc08
size 363537318
onnx/decoder_model_q4f16.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2aa25f6a674738a243a2b9eb6c552e308876dca1be5322b673acb371505e9a72
size 243664494
onnx/decoder_model_quantized.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:212218ae0e30939fd2a9ab9bc90a6c456f1fcd2c7dd122f89e7973f77673ce3c
size 300116959
onnx/decoder_model_uint8.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:155f1ad0766c631397c35256db57b490241083261f5e3f8fd94751d11e459f71
size 300117018
onnx/encoder_model.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c254c606233a8cb5523ffb16eb92074713b79c9b52e77d1579cdcfaef172927e
size 344426599
onnx/encoder_model_bnb4.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:80b653054a6715c4ab4b5140a3a4d4df9f72cd25c3be8c734388ccd205ce6e79
size 52474921
onnx/encoder_model_fp16.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:25ea979038a4d38017c9d5fbbd347870d0462d0b4554672425816b9e3513c004
size 172301001
onnx/encoder_model_int8.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4357ff953e4b731c94c65ab1d7e043c6cdb3d06e681c7cdc2db863c0a2a0096d
size 87942932
onnx/encoder_model_q4.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:252058d59b4be0c0d180e0272c18478203e0b1327ff54d289ba97694da482edb
size 57782809
onnx/encoder_model_q4f16.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8ee5bbcf028dee9db5b3b225395092810e2cb7477b8751f353b748d10173c742
size 50218204
onnx/encoder_model_quantized.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4d169a795b252e04fc60083268b9b4dab3f6ffea262a26ec99643d9cca0f54a2
size 87942969
onnx/encoder_model_uint8.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4d169a795b252e04fc60083268b9b4dab3f6ffea262a26ec99643d9cca0f54a2
size 87942969
preprocessor_config.json ADDED
@@ -0,0 +1,24 @@
{
  "do_convert_rgb": null,
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [0.5, 0.5, 0.5],
  "image_processor_type": "ViTImageProcessor",
  "image_std": [0.5, 0.5, 0.5],
  "processor_class": "TrOCRProcessor",
  "resample": 2,
  "rescale_factor": 0.00392156862745098,
  "size": { "height": 384, "width": 384 }
}
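
For intuition, the preprocessing this config describes (resize to 384×384 with bilinear resampling, rescale by 1/255, then normalize with mean and std of 0.5) is roughly equivalent to the following manual sketch; in practice the `TrOCRProcessor` from the README does this for you:

```python
import numpy as np
from PIL import Image

def preprocess(image: Image.Image) -> np.ndarray:
    # do_resize: 384x384; resample=2 corresponds to PIL bilinear
    image = image.convert("RGB").resize((384, 384), resample=Image.BILINEAR)
    # do_rescale: rescale_factor = 1/255
    pixels = np.asarray(image, dtype=np.float32) * 0.00392156862745098
    # do_normalize: (x - mean) / std with mean = std = 0.5 per channel
    pixels = (pixels - 0.5) / 0.5
    # HWC -> CHW plus a batch dimension, matching pixel_values
    return pixels.transpose(2, 0, 1)[np.newaxis, ...]
```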
quantize_config.json ADDED
@@ -0,0 +1,18 @@
{
  "modes": ["fp16", "q8", "int8", "uint8", "q4", "q4f16", "bnb4"],
  "per_channel": false,
  "reduce_range": false,
  "block_size": null,
  "is_symmetric": true,
  "accuracy_level": null,
  "quant_type": 1,
  "op_block_list": null
}
special_tokens_map.json ADDED
@@ -0,0 +1,51 @@
{
  "bos_token": { "content": "<s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
  "cls_token": { "content": "<s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
  "eos_token": { "content": "</s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
  "mask_token": { "content": "<mask>", "lstrip": true, "normalized": true, "rstrip": false, "single_word": false },
  "pad_token": { "content": "<pad>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
  "sep_token": { "content": "</s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false },
  "unk_token": { "content": "<unk>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
{
  "add_prefix_space": false,
  "added_tokens_decoder": {
    "0": { "content": "<s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
    "1": { "content": "<pad>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
    "2": { "content": "</s>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
    "3": { "content": "<unk>", "lstrip": false, "normalized": true, "rstrip": false, "single_word": false, "special": true },
    "50264": { "content": "<mask>", "lstrip": true, "normalized": true, "rstrip": false, "single_word": false, "special": true }
  },
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "cls_token": "<s>",
  "eos_token": "</s>",
  "errors": "replace",
  "extra_special_tokens": {},
  "mask_token": "<mask>",
  "max_length": null,
  "model_max_length": 512,
  "pad_to_multiple_of": null,
  "pad_token": "<pad>",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "processor_class": "TrOCRProcessor",
  "sep_token": "</s>",
  "tokenizer_class": "RobertaTokenizer",
  "trim_offsets": true,
  "unk_token": "<unk>"
}
vocab.json ADDED
The diff for this file is too large to render. See raw diff