Mitchins committed on
Commit 53ec084 · verified · 1 Parent(s): 6f22e8b

Upload folder using huggingface_hub

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. .gitattributes +120 -0
  2. README.md +276 -0
  3. config.json +40 -0
  4. inference.py +122 -0
  5. model.safetensors +3 -0
  6. requirements.txt +4 -0
  7. tools/eval_ci.py +60 -0
  8. validation/annotations.json +14 -0
  9. validation/generate_index.py +72 -0
  10. validation/images/dark/dark_01.png +3 -0
  11. validation/images/dark/dark_02.png +3 -0
  12. validation/images/dark/dark_03.png +3 -0
  13. validation/images/dark/dark_04.png +3 -0
  14. validation/images/dark/dark_05.png +3 -0
  15. validation/images/dark/dark_06.png +3 -0
  16. validation/images/dark/dark_07.png +3 -0
  17. validation/images/dark/dark_08.png +3 -0
  18. validation/images/dark/dark_09.png +3 -0
  19. validation/images/dark/dark_10.png +3 -0
  20. validation/images/dark/dark_11.png +3 -0
  21. validation/images/dark/dark_12.png +3 -0
  22. validation/images/dark/dark_13.png +3 -0
  23. validation/images/dark/dark_14.png +3 -0
  24. validation/images/dark/dark_15.png +3 -0
  25. validation/images/dark/dark_16.png +3 -0
  26. validation/images/dark/dark_17.png +3 -0
  27. validation/images/dark/dark_18.png +3 -0
  28. validation/images/dark/dark_19.png +3 -0
  29. validation/images/dark/dark_20.png +3 -0
  30. validation/images/flat/flat_01.png +3 -0
  31. validation/images/flat/flat_02.png +3 -0
  32. validation/images/flat/flat_03.png +3 -0
  33. validation/images/flat/flat_04.png +3 -0
  34. validation/images/flat/flat_05.png +3 -0
  35. validation/images/flat/flat_06.png +3 -0
  36. validation/images/flat/flat_07.png +3 -0
  37. validation/images/flat/flat_08.png +3 -0
  38. validation/images/flat/flat_09.png +3 -0
  39. validation/images/flat/flat_10.png +3 -0
  40. validation/images/flat/flat_11.png +3 -0
  41. validation/images/flat/flat_12.png +3 -0
  42. validation/images/flat/flat_13.png +3 -0
  43. validation/images/flat/flat_14.png +3 -0
  44. validation/images/flat/flat_15.png +3 -0
  45. validation/images/flat/flat_16.png +3 -0
  46. validation/images/flat/flat_17.png +3 -0
  47. validation/images/flat/flat_18.png +3 -0
  48. validation/images/flat/flat_19.png +3 -0
  49. validation/images/flat/flat_20.png +3 -0
  50. validation/images/modern/modern_01.png +3 -0
.gitattributes CHANGED
@@ -33,3 +33,123 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_01.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_02.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_03.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_04.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_05.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_06.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_07.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_08.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_09.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_10.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_11.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_12.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_13.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_14.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_15.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_16.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_17.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_18.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_19.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/dark/dark_20.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_01.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_02.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_03.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_04.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_05.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_06.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_07.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_08.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_09.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_10.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_11.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_12.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_13.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_14.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_15.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_16.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_17.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_18.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_19.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/flat/flat_20.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_01.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_02.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_03.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_04.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_05.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_06.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_07.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_08.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_09.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_10.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_11.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_12.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_13.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_14.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_15.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_16.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_17.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_18.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_19.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/modern/modern_20.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_01.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_02.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_03.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_04.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_05.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_06.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_07.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_08.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_09.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_10.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_11.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_12.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_13.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_14.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_15.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_16.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_17.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_18.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_19.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/moe/moe_20.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_01.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_02.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_03.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_04.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_05.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_06.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_07.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_08.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_09.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_10.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_11.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_12.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_13.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_14.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_15.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_16.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_17.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_18.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_19.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/painterly/painterly_20.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_01.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_02.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_03.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_04.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_05.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_06.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_07.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_08.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_09.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_10.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_11.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_12.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_13.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_14.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_15.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_16.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_17.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_18.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_19.png filter=lfs diff=lfs merge=lfs -text
+ validation/images/retro/retro_20.png filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,276 @@
+ # Anime Style Classifier - EfficientNet-B0
+
+ A fine-tuned EfficientNet-B0 model for classifying anime/visual novel images into 6 distinct art styles.
+
+ ## Model Description
+
+ - **Model Architecture**: EfficientNet-B0 (~5.3M parameters)
+ - **Base Model**: ImageNet pretrained weights
+ - **Task**: Multi-class image classification (6 styles)
+ - **Input Resolution**: 224x224 RGB
+ - **Framework**: PyTorch
+ - **License**: MIT
+
+ ## Performance
+
+ ### Test Set Results (Holdout)
+
+ - **Accuracy**: 100.0%
+ - **Macro F1-Score**: 1.000
+ - **Validation Accuracy**: 98.18%
+
+ Perfect classification across all 120 holdout images (20 per class). Note: with n=120, the 95% Wilson confidence interval for this result is approximately 96.90%-100.00%, so the perfect score should be interpreted cautiously alongside the validation metrics. Taking both validation and holdout into account, a realistic estimate of the model's true accuracy is in the mid-to-high 90s (roughly 96-98%), which is still very strong and, for most applications, likely fit for purpose.
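+
+ These intervals can be reproduced with the included helper script, `tools/eval_ci.py`, which prints the Wilson and normal-approximation 95% CIs for a correct/total count:
+
+ ```bash
+ python tools/eval_ci.py --correct 120 --n 120   # holdout
+ python tools/eval_ci.py --correct 162 --n 165   # validation
+ ```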
+
+ ### Per-Class Performance
+
+ | Style | Precision | Recall | F1-Score | Support |
+ |-------|-----------|--------|----------|---------|
+ | **dark** | 1.000 | 1.000 | 1.000 | 20 |
+ | **flat** | 1.000 | 1.000 | 1.000 | 20 |
+ | **modern** | 1.000 | 1.000 | 1.000 | 20 |
+ | **moe** | 1.000 | 1.000 | 1.000 | 20 |
+ | **painterly** | 1.000 | 1.000 | 1.000 | 20 |
+ | **retro** | 1.000 | 1.000 | 1.000 | 20 |
+
+ ## Style Definitions
+
+ 1. **dark**: Low-key lighting, chiaroscuro, desaturated palette, high-contrast shadows, moody atmosphere
+ 2. **flat**: Minimalist flat colors, vector illustration, solid color blocks, no gradients or shading
+ 3. **modern**: Clean digital rendering, smooth gradients, glossy finish, contemporary anime aesthetic
+ 4. **moe**: Soft pastel colors, rounded features, cute/adorable character focus, gentle shading
+ 5. **painterly**: Watercolor or gouache appearance, visible brush strokes, paper texture, artistic feel
+ 6. **retro**: 80s/90s anime aesthetic, vintage color palette, classic cel animation style
+
+ ## Training Details
+
+ ### Dataset
+
+ - **Training Images**: 933 (scene-level split)
+ - **Validation Images**: 165
+ - **Holdout Images**: 120
+ - **Total Scenes**: 203 perfectly balanced scenes
+ - **Images per Style**: 183 training + 20 holdout = 203 each
+ - **Source Resolution**: 1920x1088
+ - **Training Resolution**: 224x224
+
+ **Data Split Strategy**: Scene-level 90/10 split to prevent data leakage. All 6 style variants of each scene are kept together in either the training or the holdout set.
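+
+ A minimal sketch of this split strategy (illustrative only; the actual training pipeline is not shipped in this repo):
+
+ ```python
+ # Hedged sketch: split at scene granularity so all 6 style variants
+ # of a scene land in the same fold, preventing leakage across styles.
+ import random
+
+ def split_scenes(scene_ids, holdout_frac=0.10, seed=0):
+     scenes = sorted(set(scene_ids))
+     random.Random(seed).shuffle(scenes)
+     n_holdout = round(len(scenes) * holdout_frac)
+     holdout = set(scenes[:n_holdout])
+     train = [s for s in scenes if s not in holdout]
+     return train, sorted(holdout)
+ ```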
57
+
58
+ **Data Generation**: Synthetic images generated via ComfyUI with Flux diffusion model, validated by Gemma-12B vision-language model. Only scenes with 6/6 style agreement (all variants correctly classified) were included.
59
+
60
+ ### Training Regime
61
+
62
+ ```yaml
63
+ Architecture: EfficientNet-B0
64
+ Pretrained: ImageNet weights
65
+ Optimizer: AdamW
66
+ Learning Rate: 0.001
67
+ Weight Decay: 1e-05
68
+ Batch Size: 16
69
+ Epochs: 30 (early stopping at ~12-15 epochs typical)
70
+ Scheduler: CosineAnnealingLR
71
+ Loss: CrossEntropyLoss
72
+ Early Stopping: 10 epochs patience (val accuracy)
73
+ ```
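+
+ In PyTorch terms, this configuration corresponds to roughly the following setup (a sketch of the settings in the YAML above, not the original training script):
+
+ ```python
+ # Hedged sketch of the optimizer/scheduler/loss implied by the YAML above.
+ import torch
+ from torchvision import models
+
+ model = models.efficientnet_b0(weights=None)  # backbone as in inference.py
+ optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-5)
+ scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=30)
+ criterion = torch.nn.CrossEntropyLoss()
+ ```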
+
+ ### Data Augmentation (Training Only)
+
+ - Resize to 256x256
+ - Random crop to 224x224
+ - Random horizontal flip (p=0.5)
+ - Color jitter (brightness=0.1, saturation=0.1, hue=0.05)
+ - ImageNet normalization (mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
83
+ ### Hardware
84
+
85
+ - GPU: NVIDIA GPU (CUDA)
86
+ - Training Time: ~15 minutes (with early stopping)
87
+
88
+ ## Usage
89
+
90
+ ### Installation
91
+
92
+ ```bash
93
+ pip install torch torchvision pillow
94
+ ```
95
+
96
+ ### Inference
97
+
98
+ This repository includes a small CLI inference script, `inference.py`, which auto-detects `.safetensors` (preferred) or a PyTorch `.pth` checkpoint and provides a convenient command-line interface for classification. Because `inference.py` already contains the full, tested loading and preprocessing logic, the README keeps only the minimal usage notes below and a short programmatic example that delegates to the script's functions.
99
+
100
+ Install (optional: include safetensors for safer loading):
101
+
102
+ ```bash
103
+ pip install torch torchvision pillow safetensors
104
+ ```
105
+
106
+ CLI usage (example):
107
+
108
+ ```bash
109
+ python inference.py --model model.safetensors --config config.json examples/retro_1.png
110
+ ```
111
+
112
+ This will print a ranked list of predictions and a top prediction summary (see the script output example above).
113
+
114
+ Programmatic usage (calls the same functions used by the CLI):
115
+
116
+ ```python
117
+ # Minimal programmatic example using functions from inference.py
118
+ from inference import load_model, classify_image
119
+
120
+ model, config = load_model('model.safetensors', 'config.json')
121
+ results = classify_image(model, config, 'examples/retro_1.png')
122
+ print(results[:3]) # top-3 predictions as (style, confidence)
123
+ ```
124
+
125
+ For the full implementation and additional options (e.g., --top-k), see `inference.py` in the repository.
126
+
127
+ ## Publishing the validation set
128
+
129
+ If you'd like to publish the 120 AI-generated validation images for transparency, you can place the images into `validation/images/` and provide an annotations JSON (`validation/annotations.json`) that maps filenames to styles and source scene ids. A small helper script is included to build a simple HTML index (SCENE | STYLE) with thumbnails:
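+
+ Each annotation entry pairs a published filename with its style label and the source scene render it came from, for example (taken from the shipped `validation/annotations.json`):
+
+ ```json
+ [
+   {"style": "retro", "filename": "retro_10.png", "source": "scene31_campus_courtyard_rain_retro_00001_.png"}
+ ]
+ ```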
+
+ ```bash
+ python validation/generate_index.py --annotations validation/annotations.json --out validation/validation_grid.md --images-dir validation/images
+ ```
+
+ The script writes the Markdown grid to the path given by `--out` (by default `validation/validation_grid.md`). It does not copy image files; place your images in `validation/images/` using the same filenames referenced in `validation/annotations.json`.
+
+ ### Canonical folder layout (`validation/`)
+
+ To avoid confusion between different folds, this repository now uses a canonical `validation/` folder to store the full published validation/holdout set (images + index). The previous `examples/` demo files have been removed to avoid duplication. Use the files under `validation/` as the canonical published dataset.
+
+ Consolidation summary:
+
+ 1. Images were moved to `validation/images/` (canonical image storage).
+ 2. Annotations and the generated Markdown index now live under `validation/annotations.json` and `validation/validation_grid.md`.
+
+ ### Showing examples in the README (recommended)
+
+ If you want to show example commands in the README that run against the full published directory, use the `validation/images/` path. For example:
+
+ ```bash
+ # Classify one of the published validation images
+ python inference.py --model model.safetensors --config config.json validation/images/retro/retro_10.png
+ ```
+
+ Or programmatically:
+
+ ```python
+ from inference import load_model, classify_image
+
+ model, config = load_model('model.safetensors', 'config.json')
+ results = classify_image(model, config, 'validation/images/retro/retro_10.png')
+ print(results[0])
+ ```
+
+ ## Limitations
+
+ - **Input Resolution**: The model processes images at 224x224, which may lose fine texture detail from high-resolution sources (1920x1088+)
+ - **Domain**: Trained on synthetically generated anime/visual novel images; it may not generalize perfectly to all anime art styles, manga, or hand-drawn artwork
+ - **Style Ambiguity**: Some real-world images may blend multiple styles (e.g., painterly with modern digital techniques)
+ - **Validation Bias**: Ground-truth labels come from the Gemma-12B vision model, so the classifier may inherit some of its biases
+
+ Small-sample caution: The internal validation set achieved 98.18% (162/165); the 95% Wilson confidence interval for this is approximately 94.79%-99.38%. Because the holdout set is relatively small (20 images per class, 120 total), perfect classification on that set is possible by chance and should be reported with its confidence interval (see above).
+
+ Decision rule note: The model uses the standard softmax + argmax decision rule by default (choose the class with the highest predicted probability). No abstain threshold is applied in the shipped `inference.py`; if you later want an abstain/human-review mode, adding a `--min-conf` option is straightforward, as sketched below.
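+
+ A minimal sketch of such a thresholded decision rule (the `MIN_CONF` value is a hypothetical placeholder, not a shipped option; tune it on validation data):
+
+ ```python
+ # Hedged sketch: abstain when the top softmax probability is low.
+ from inference import load_model, classify_image
+
+ MIN_CONF = 0.80  # hypothetical threshold, not part of the shipped CLI
+
+ model, config = load_model('model.safetensors', 'config.json')
+ results = classify_image(model, config, 'validation/images/retro/retro_10.png')
+ top = results[0]
+ label = top['style'] if top['confidence'] >= MIN_CONF else 'abstain'
+ print(label, f"({top['confidence']:.2%})")
+ ```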
+
+ ## Model Selection
+
+ This model was selected from a hyperparameter sweep of 144+ configurations across 6 architectures:
+ - ResNet-18
+ - MobileNetV3-Large
+ - MobileNetV3-Small
+ - EfficientNet-B0 ⭐ (winner)
+ - EfficientNetV2-S
+ - ViT-B/16
+
+ EfficientNet-B0 achieved perfect 100% holdout accuracy with:
+ - Excellent efficiency (~5.3M parameters)
+ - Fast inference
+ - Strong generalization (98.18% val → 100% holdout)
+
+ ## Citation
+
+ ```bibtex
+ @software{anime_style_classifier_2025,
+   author = {Your Name},
+   title = {Anime Style Classifier},
+   year = {2025},
+   url = {https://huggingface.co/Mitchins/anime-style-classifier-efficientnet-b0}
+ }
+ ```
+
+ ## Publishing to Hugging Face
+
+ If you'd like to publish this model under the name `Mitchins/anime-style-classifier-efficientnet-b0` on Hugging Face, here's a compact guide.
+
+ Recommended files to include in the model repo:
+ - `model.safetensors` (weights)
+ - `config.json` (model metadata)
+ - `inference.py` (inference CLI/helpers)
+ - `validation/validation_grid.md` (visual index)
+ - `README.md` (this file / model card)
+ - `LICENSE` (MIT is used in this repo)
+
+ Commands (zsh) to create the repo and push files using the Hugging Face CLI and git-lfs:
+
+ ```bash
+ # install the HF CLI and git-lfs if missing
+ pip install huggingface_hub
+ brew install git-lfs || true
+ git lfs install
+
+ # create the HF model repo (run once)
+ huggingface-cli repo create Mitchins/anime-style-classifier-efficientnet-b0 --type model
+
+ # initialize a git repo (or use your existing one)
+ git init
+ git remote add origin https://huggingface.co/Mitchins/anime-style-classifier-efficientnet-b0
+
+ # track safetensors with LFS
+ git lfs track "*.safetensors"
+
+ # add files, commit and push
+ git add .gitattributes model.safetensors config.json inference.py README.md validation/
+ git commit -m "Add model weights, inference script and validation index"
+ git branch -M main
+ git push origin main
+ ```
+
+ Model card snippet (place in the repo README or on the HF model card):
+
+ ```markdown
+ ## Mitchins/anime-style-classifier-efficientnet-b0
+
+ EfficientNet-B0 fine-tuned to classify anime/visual-novel images into 6 styles: dark, flat, modern, moe, painterly, retro.
+
+ - Architecture: EfficientNet-B0
+ - Framework: PyTorch
+ - Weights: `model.safetensors`
+ - Input resolution: 224x224
+ - License: MIT
+
+ ### How to use
+
+ See `inference.py` for a CLI and programmatic interface to run inference locally.
+
+ ### Validation set
+
+ A visual index of the published validation/holdout images is included in `validation/validation_grid.md` for transparency.
+
+ ### Tags
+
+ `image-classification`, `anime`, `art-style`, `efficientnet-b0`, `pytorch`, `safetensors`, `multi-class`
+ ```
+
+ ## Acknowledgments
+
+ - **Base Model**: EfficientNet-B0 from torchvision (ImageNet pretrained)
+ - **Synthetic Data Generation**: ComfyUI + Flux diffusion model
+ - **Data Validation**: Gemma-12B vision-language model
+ - **Framework**: PyTorch, torchvision
+
+ ## Contact
+
+ For questions or feedback, please open an issue on the GitHub repository.
config.json ADDED
@@ -0,0 +1,40 @@
+ {
+   "architectures": [
+     "EfficientNet"
+   ],
+   "model_type": "efficientnet_b0",
+   "num_classes": 6,
+   "id2label": {
+     "0": "dark",
+     "1": "flat",
+     "2": "modern",
+     "3": "moe",
+     "4": "painterly",
+     "5": "retro"
+   },
+   "label2id": {
+     "dark": 0,
+     "flat": 1,
+     "modern": 2,
+     "moe": 3,
+     "painterly": 4,
+     "retro": 5
+   },
+   "image_size": 224,
+   "pretrained_backbone": "imagenet",
+   "framework": "pytorch",
+   "performance": {
+     "holdout_accuracy": 100.0,
+     "holdout_f1": 1.0,
+     "val_accuracy": 98.18181818181819,
+     "best_val_acc": 98.18181818181819
+   },
+   "training": {
+     "batch_size": 16,
+     "epochs": 30,
+     "lr": 0.001,
+     "weight_decay": 1e-05,
+     "pretrained": true,
+     "best_epoch": 11
+   }
+ }
inference.py ADDED
@@ -0,0 +1,122 @@
+ #!/usr/bin/env python3
+ """
+ Simple inference script for anime style classification.
+ """
+
+ import torch
+ from torchvision import models, transforms
+ from PIL import Image
+ import json
+ from pathlib import Path
+
+
+ def load_model(model_path=None, config_path='config.json'):
+     """Load the trained model.
+
+     Args:
+         model_path: Path to model weights. If None, auto-detects (.safetensors preferred)
+         config_path: Path to config file
+     """
+     # Load config
+     with open(config_path, 'r') as f:
+         config = json.load(f)
+
+     # Create model (weights=None: the fine-tuned weights are loaded below)
+     model = models.efficientnet_b0(weights=None)
+     num_classes = config['num_classes']
+     model.classifier[-1] = torch.nn.Linear(model.classifier[-1].in_features, num_classes)
+
+     # Auto-detect model format if not specified
+     if model_path is None:
+         if Path('model.safetensors').exists():
+             model_path = 'model.safetensors'
+         elif Path('pytorch_model.pth').exists():
+             model_path = 'pytorch_model.pth'
+         else:
+             raise FileNotFoundError("No model weights found (model.safetensors or pytorch_model.pth)")
+
+     # Load weights based on format
+     if model_path.endswith('.safetensors'):
+         from safetensors.torch import load_file
+         state_dict = load_file(model_path)
+         model.load_state_dict(state_dict)
+     else:
+         checkpoint = torch.load(model_path, map_location='cpu')
+         model.load_state_dict(checkpoint['model_state_dict'])
+
+     model.eval()
+     return model, config
+
+
+ def preprocess_image(image_path, img_size=224):
+     """Preprocess image for model input."""
+     transform = transforms.Compose([
+         transforms.Resize((img_size, img_size)),
+         transforms.ToTensor(),
+         transforms.Normalize(mean=[0.485, 0.456, 0.406],
+                              std=[0.229, 0.224, 0.225])
+     ])
+
+     image = Image.open(image_path).convert('RGB')
+     return transform(image).unsqueeze(0)
+
+
+ def classify_image(model, config, image_path):
+     """Classify a single image. Returns a list of {'style', 'confidence'} dicts."""
+     # Preprocess
+     input_tensor = preprocess_image(image_path)
+
+     # Inference
+     with torch.no_grad():
+         output = model(input_tensor)
+         probabilities = torch.nn.functional.softmax(output[0], dim=0)
+
+     # Get predictions
+     results = []
+     for idx, prob in enumerate(probabilities):
+         style = config['id2label'][str(idx)]
+         results.append({
+             'style': style,
+             'confidence': float(prob)
+         })
+
+     # Sort by confidence
+     results.sort(key=lambda x: x['confidence'], reverse=True)
+
+     return results
+
+
+ def main():
+     import argparse
+
+     parser = argparse.ArgumentParser(description='Classify anime style')
+     parser.add_argument('image', type=str, help='Path to image')
+     parser.add_argument('--model', type=str, default=None,
+                         help='Path to model weights (auto-detects .safetensors or .pth if not specified)')
+     parser.add_argument('--config', type=str, default='config.json')
+     parser.add_argument('--top-k', type=int, default=3, help='Show top-K predictions')
+
+     args = parser.parse_args()
+
+     # Load model
+     print(f"Loading model from {args.model or 'auto-detected weights'}...")
+     model, config = load_model(args.model, args.config)
+
+     # Classify
+     print(f"Classifying {args.image}...")
+     results = classify_image(model, config, args.image)
+
+     # Display results
+     print()
+     print("=" * 60)
+     print("PREDICTIONS")
+     print("=" * 60)
+     for i, result in enumerate(results[:args.top_k], 1):
+         print(f"{i}. {result['style']:12s} {result['confidence']:>7.2%}")
+     print("=" * 60)
+     print()
+     print(f"Top prediction: {results[0]['style']} ({results[0]['confidence']:.2%})")
+
+
+ if __name__ == '__main__':
+     main()
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa35588f3171f47ea118f0b685a355b35109cd667ec3382590866f89a159016b
+ size 16264280
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ torch>=2.0.0
+ torchvision>=0.15.0
+ pillow>=9.0.0
+ safetensors>=0.4.0
tools/eval_ci.py ADDED
@@ -0,0 +1,60 @@
+ #!/usr/bin/env python3
+ """Compute simple confidence intervals for binary accuracy results.
+
+ Usage:
+     python tools/eval_ci.py --correct 120 --n 120
+     python tools/eval_ci.py --correct 162 --n 165
+
+ Outputs the Wilson score interval and the normal-approximation CI.
+ """
+ import math
+ import argparse
+ from statistics import NormalDist
+
+
+ def wilson_ci(k, n, alpha=0.05):
+     if n == 0:
+         return (0.0, 1.0)
+     z = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for 95%
+     phat = k / n
+     denom = 1 + z*z/n
+     centre = phat + z*z/(2*n)
+     margin = z * math.sqrt((phat*(1-phat) + z*z/(4*n)) / n)
+     lower = (centre - margin) / denom
+     upper = (centre + margin) / denom
+     return max(0.0, lower), min(1.0, upper)
+
+
+ def normal_approx(k, n, alpha=0.05):
+     if n == 0:
+         return (0.0, 1.0)
+     z = NormalDist().inv_cdf(1 - alpha / 2)
+     phat = k / n
+     se = math.sqrt(phat*(1-phat)/n)
+     lower = phat - z*se
+     upper = phat + z*se
+     return max(0.0, lower), min(1.0, upper)
+
+
+ def percent(x):
+     return f"{x*100:.2f}%"
+
+
+ def main():
+     parser = argparse.ArgumentParser()
+     parser.add_argument('--correct', type=int, required=True)
+     parser.add_argument('--n', type=int, required=True)
+     args = parser.parse_args()
+
+     k = args.correct
+     n = args.n
+     phat = k / n if n > 0 else 0.0
+     wlo, whi = wilson_ci(k, n)
+     nlo, nhi = normal_approx(k, n)
+
+     print(f"Results: {k}/{n} = {percent(phat)}")
+     print(f"Wilson 95% CI: {percent(wlo)} - {percent(whi)}")
+     print(f"Normal approx 95% CI: {percent(nlo)} - {percent(nhi)}")
+
+
+ if __name__ == '__main__':
+     main()
validation/annotations.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {"style": "dark", "filename": "dark_11.png", "source": "scene39_riverbank_spring_cycling_dark_00001_.png"},
+   {"style": "dark", "filename": "dark_15.png", "source": "scene57_sky_garden_walkway_dark_00001_.png"},
+   {"style": "flat", "filename": "flat_10.png", "source": "scene31_campus_courtyard_rain_flat_00001_.png"},
+   {"style": "flat", "filename": "flat_19.png", "source": "scene99_town_square_clock_rain_flat_00001_.png"},
+   {"style": "modern", "filename": "modern_20.png", "source": "scene99_town_square_clock_rain_modern_00001_.png"},
+   {"style": "modern", "filename": "modern_16.png", "source": "scene72_spirit_shrine_waterfall_modern_00001_.png"},
+   {"style": "moe", "filename": "moe_07.png", "source": "scene241_meteor_defense_bunker_moe_00001_.png"},
+   {"style": "moe", "filename": "moe_04.png", "source": "scene16_rain_on_train_crossing_moe_00001_.png"},
+   {"style": "painterly", "filename": "painterly_07.png", "source": "scene257_holo_library_steps_painterly_00001_.png"},
+   {"style": "painterly", "filename": "painterly_08.png", "source": "scene31_campus_courtyard_rain_painterly_00001_.png"},
+   {"style": "retro", "filename": "retro_10.png", "source": "scene31_campus_courtyard_rain_retro_00001_.png"},
+   {"style": "retro", "filename": "retro_02.png", "source": "scene133_fey_crossing_footbridge_retro_00001_.png"}
+ ]
validation/generate_index.py ADDED
@@ -0,0 +1,72 @@
+ #!/usr/bin/env python3
+ """Generate a compact grid (Markdown) of validation images arranged by scene and style.
+
+ This script is the canonical generator for the validation preview and lives under `validation/`.
+ """
+ import json
+ import re
+ from pathlib import Path
+ import argparse
+
+ STYLES = ['modern', 'painterly', 'retro', 'moe', 'flat', 'dark']
+
+ MD_HEADER = """
+ # Validation set — visual grid
+
+ A compact birds-eye preview. Columns: modern | painterly | retro | moe | flat | dark
+
+ """
+
+ CELL_HTML = '<img src="{img}" alt="{alt}" style="width:150px;height:auto;margin:2px;"/>'
+
+ # Matches the trailing "_<style>_<counter>_.png" part of a source filename,
+ # e.g. "scene31_campus_courtyard_rain_flat_00001_.png" -> "scene31_campus_courtyard_rain".
+ STYLE_SUFFIX = re.compile(r'_(?:' + '|'.join(STYLES) + r')_\d+_?\.png$')
+
+
+ def load_annotations(path):
+     return json.loads(Path(path).read_text())
+
+
+ def build_scene_map(annotations):
+     scene_map = {}
+     for ann in annotations:
+         src = ann.get('source') or ann['filename']
+         # Strip the style/counter suffix so all style variants of a scene
+         # share one key and end up in the same row of the grid.
+         scene = STYLE_SUFFIX.sub('', src)
+         scene_map.setdefault(scene, {})[ann['style']] = ann['filename']
+     return scene_map
+
+
+ def render_grid(scene_map, images_dir):
+     lines = [MD_HEADER]
+     # header row with style labels
+     header = '| Scene | ' + ' | '.join(s.capitalize() for s in STYLES) + ' |'
+     sep = '|---' + ('|---' * len(STYLES)) + '|'  # simple separators
+     lines.append(header)
+     lines.append(sep)
+
+     for scene, style_map in sorted(scene_map.items()):
+         cells = []
+         for s in STYLES:
+             if s in style_map:
+                 fn = style_map[s]
+                 # Images live in a per-style subfolder, e.g. validation/images/retro/retro_10.png
+                 img_path = f"{images_dir}/{s}/{fn}"
+                 cells.append(CELL_HTML.format(img=img_path, alt=fn))
+             else:
+                 cells.append('')
+         scene_cell = f'`{scene}`'
+         lines.append('| ' + scene_cell + ' | ' + ' | '.join(cells) + ' |')
+
+     return '\n'.join(lines)
+
+
+ def main():
+     parser = argparse.ArgumentParser()
+     parser.add_argument('--annotations', default='validation/annotations.json')
+     parser.add_argument('--out', default='validation/validation_grid.md')
+     parser.add_argument('--images-dir', default='validation/images')
+     args = parser.parse_args()
+
+     annotations = load_annotations(args.annotations)
+     scene_map = build_scene_map(annotations)
+     md = render_grid(scene_map, args.images_dir)
+     Path(args.out).write_text(md)
+     print(f"Wrote {args.out}")
+
+
+ if __name__ == '__main__':
+     main()
validation/images/dark/dark_01.png ADDED

Git LFS Details

  • SHA256: 234d9735d744008b71bb64c8fa33afab92b484ce1d34bfe9abd68526ca231aae
  • Pointer size: 132 Bytes
  • Size of remote file: 2 MB
validation/images/dark/dark_02.png ADDED

Git LFS Details

  • SHA256: 20a6c7eb8b0ee5d2d737a2750a12bc2b9653df47f62f8848227d9905ab3f3f8c
  • Pointer size: 132 Bytes
  • Size of remote file: 2.32 MB
validation/images/dark/dark_03.png ADDED

Git LFS Details

  • SHA256: 46736867a35859d78fd840a23e5a97a8d216c13c31bff9c8288cccb589f9024a
  • Pointer size: 132 Bytes
  • Size of remote file: 2.24 MB
validation/images/dark/dark_04.png ADDED

Git LFS Details

  • SHA256: 38011698dd1a59c698e23e162cb2f69c57d7be77812f340e0c80e2423fa77903
  • Pointer size: 132 Bytes
  • Size of remote file: 2.25 MB
validation/images/dark/dark_05.png ADDED

Git LFS Details

  • SHA256: ceb7907bb827a8a66f90ad718c1322f8e8987ecc561414a1de7ef334ea15a6e8
  • Pointer size: 132 Bytes
  • Size of remote file: 1.87 MB
validation/images/dark/dark_06.png ADDED

Git LFS Details

  • SHA256: 25cb0af5377b0436df0515abf8ef185cf882331c4c6acb40c3903982dc716ae3
  • Pointer size: 132 Bytes
  • Size of remote file: 2.04 MB
validation/images/dark/dark_07.png ADDED

Git LFS Details

  • SHA256: f744413b9f9c403436eabd04644ad16436af06fe7d1e79d8cc4d7cef1f0fd3ed
  • Pointer size: 132 Bytes
  • Size of remote file: 2.31 MB
validation/images/dark/dark_08.png ADDED

Git LFS Details

  • SHA256: b11e7c3613732da2433945a775e77f868b781d7ea74039873fc73b31ac788723
  • Pointer size: 132 Bytes
  • Size of remote file: 1.98 MB
validation/images/dark/dark_09.png ADDED

Git LFS Details

  • SHA256: 6da36b9b36f2f3a23cbffad2a39351a2d83bc7d902e6c85b6846717e639ee772
  • Pointer size: 132 Bytes
  • Size of remote file: 1.97 MB
validation/images/dark/dark_10.png ADDED

Git LFS Details

  • SHA256: d0b86a62fe860bf79dc901716544c51e05e243c56c89cdfa38362faa56b94273
  • Pointer size: 132 Bytes
  • Size of remote file: 2.34 MB
validation/images/dark/dark_11.png ADDED

Git LFS Details

  • SHA256: fc41128a553611b4ad0ab4d9b0069409896e26997b421c86c922157d824c36ad
  • Pointer size: 132 Bytes
  • Size of remote file: 2.19 MB
validation/images/dark/dark_12.png ADDED

Git LFS Details

  • SHA256: c3601739210c27c4eb811d5fe60163237aa8b81db4476ff25f7658fe6d2e12cc
  • Pointer size: 132 Bytes
  • Size of remote file: 1.98 MB
validation/images/dark/dark_13.png ADDED

Git LFS Details

  • SHA256: cab70efe2c011ac0496ba4df19c08f69e1d1f8c17b84692aaa75b7b209c324bb
  • Pointer size: 132 Bytes
  • Size of remote file: 1.92 MB
validation/images/dark/dark_14.png ADDED

Git LFS Details

  • SHA256: 8d4cd7b64360f08a29873bc413b42a20ec68a81ffffbf9765b4ce4f873832e88
  • Pointer size: 132 Bytes
  • Size of remote file: 2.21 MB
validation/images/dark/dark_15.png ADDED

Git LFS Details

  • SHA256: 9e68eaf46b182bb616b965b11b5af5d933346df9cb01f24171da024b2c024ee6
  • Pointer size: 132 Bytes
  • Size of remote file: 2.28 MB
validation/images/dark/dark_16.png ADDED

Git LFS Details

  • SHA256: 15d131ce589806cfc230df8e306884b25a5260e781dc84244dd53696ff8bd282
  • Pointer size: 132 Bytes
  • Size of remote file: 2.21 MB
validation/images/dark/dark_17.png ADDED

Git LFS Details

  • SHA256: c1a04d5b74ee32998c315cc2f12a459e55d99b786498620ec07a3385e69665c2
  • Pointer size: 132 Bytes
  • Size of remote file: 2.25 MB
validation/images/dark/dark_18.png ADDED

Git LFS Details

  • SHA256: 50df93d797fb2ee92f3fc043bf4d5f6fc519d1b02a3a933fbf5e837acf41690e
  • Pointer size: 132 Bytes
  • Size of remote file: 2.5 MB
validation/images/dark/dark_19.png ADDED

Git LFS Details

  • SHA256: 651d0cfdbb1008164a13228ed275b56cfc90c7fa099e4c859a70ce13f1a53b33
  • Pointer size: 132 Bytes
  • Size of remote file: 2.17 MB
validation/images/dark/dark_20.png ADDED

Git LFS Details

  • SHA256: 829b18a3d2ad2fe019b74577369dd3f3a1463e4456b986d847589da75462ac79
  • Pointer size: 132 Bytes
  • Size of remote file: 2.23 MB
validation/images/flat/flat_01.png ADDED

Git LFS Details

  • SHA256: c551eab2a366704e975fbc9327cfe3ebc83bfa195227e60043298687a2cfee50
  • Pointer size: 132 Bytes
  • Size of remote file: 1.47 MB
validation/images/flat/flat_02.png ADDED

Git LFS Details

  • SHA256: b15f2009ade1c658e18b154c3767cb6d77264417eded1e9f7b80c332da5abb14
  • Pointer size: 132 Bytes
  • Size of remote file: 1.78 MB
validation/images/flat/flat_03.png ADDED

Git LFS Details

  • SHA256: 89af27b2760cf5a5f0f564c6da4b2b093522b369480c06743cdd37c98efcf0a3
  • Pointer size: 132 Bytes
  • Size of remote file: 1.71 MB
validation/images/flat/flat_04.png ADDED

Git LFS Details

  • SHA256: 5f30dfb1f81a91382c708d7a9a0aceb8ae9998581d014b2271831921b5f05335
  • Pointer size: 132 Bytes
  • Size of remote file: 1.7 MB
validation/images/flat/flat_05.png ADDED

Git LFS Details

  • SHA256: b5b56284fc9501102afe98c9bc94593bc88d5a6edd10bde98448484aae263cd6
  • Pointer size: 132 Bytes
  • Size of remote file: 1.57 MB
validation/images/flat/flat_06.png ADDED

Git LFS Details

  • SHA256: 9d51e165a09973517598ddfc442a1cda117c8767b9fea21a04e086f8a5fb0b8e
  • Pointer size: 132 Bytes
  • Size of remote file: 1.69 MB
validation/images/flat/flat_07.png ADDED

Git LFS Details

  • SHA256: 443de87ae07d1d837db13811d512ebdcd7e104ed19a100f7f0de7bbba0c959cb
  • Pointer size: 132 Bytes
  • Size of remote file: 1.67 MB
validation/images/flat/flat_08.png ADDED

Git LFS Details

  • SHA256: 7dfee39db5661dbb8f116dfff1fa38d47346b98f2261384ad6586c7d44c8c7d9
  • Pointer size: 132 Bytes
  • Size of remote file: 1.53 MB
validation/images/flat/flat_09.png ADDED

Git LFS Details

  • SHA256: c515bae9c2ed44883a28d49bba2554552d2abf8277263210369ee8e010713fe7
  • Pointer size: 132 Bytes
  • Size of remote file: 1.6 MB
validation/images/flat/flat_10.png ADDED

Git LFS Details

  • SHA256: bfe88704938c86b1c01759979f77f0cf7cc080974e2b59c75139c6ffcdfb7f05
  • Pointer size: 132 Bytes
  • Size of remote file: 1.8 MB
validation/images/flat/flat_11.png ADDED

Git LFS Details

  • SHA256: bfb88d778236efa7f20f81bd79ac049e62941015fc3842b45e748825d919d454
  • Pointer size: 132 Bytes
  • Size of remote file: 1.57 MB
validation/images/flat/flat_12.png ADDED

Git LFS Details

  • SHA256: 9fd662ceabb9d94af7c2d18bf7026328c642e82b00d17ec0791a5e71b76abebb
  • Pointer size: 132 Bytes
  • Size of remote file: 1.57 MB
validation/images/flat/flat_13.png ADDED

Git LFS Details

  • SHA256: 67ca27498c147fadda69a317ccf59e8a9a861e097565da1f75a81c137e1ac139
  • Pointer size: 132 Bytes
  • Size of remote file: 1.82 MB
validation/images/flat/flat_14.png ADDED

Git LFS Details

  • SHA256: 76c5316be503956ee718c20e20d710b3e9ccead42ae06cd16c68eac6855ec0d9
  • Pointer size: 132 Bytes
  • Size of remote file: 1.82 MB
validation/images/flat/flat_15.png ADDED

Git LFS Details

  • SHA256: 1813bf8907fb3e8997d429fd65d7df8c4194ca1d49144b1faceb29257125539d
  • Pointer size: 132 Bytes
  • Size of remote file: 1.73 MB
validation/images/flat/flat_16.png ADDED

Git LFS Details

  • SHA256: d91f062e72707a6afe050143a55110bd67326fb34b93912d50ffef444ca6c05d
  • Pointer size: 132 Bytes
  • Size of remote file: 1.94 MB
validation/images/flat/flat_17.png ADDED

Git LFS Details

  • SHA256: a78c62f690797df28d38744e05337774b085918f91f82bef8d990b43b75c33db
  • Pointer size: 132 Bytes
  • Size of remote file: 1.67 MB
validation/images/flat/flat_18.png ADDED

Git LFS Details

  • SHA256: 99695b9a4b71769bf72120d9d69a0a9628a5486dbd5273796be8c34add7b7c9b
  • Pointer size: 132 Bytes
  • Size of remote file: 1.67 MB
validation/images/flat/flat_19.png ADDED

Git LFS Details

  • SHA256: 69119bf8f77f8567ce4c445d00958c58bec754dd58c1120b9ee39fc6e788a732
  • Pointer size: 132 Bytes
  • Size of remote file: 1.63 MB
validation/images/flat/flat_20.png ADDED

Git LFS Details

  • SHA256: 0f7be3066efa73398241293d34bcbd8ff762effeea38805128476a1cceb8419b
  • Pointer size: 132 Bytes
  • Size of remote file: 1.47 MB
validation/images/modern/modern_01.png ADDED

Git LFS Details

  • SHA256: 345ea9873b78476559d1a66534c4f9ae5f8b8c2d28bb0b493e7d9b2344085d01
  • Pointer size: 132 Bytes
  • Size of remote file: 1.59 MB