---
language: en
license: mit
tags:
- text-classification
- nlp
- transformers
- bert
- routing
- vision-task-classifier
model_name: ICM
base_model: bert-base-uncased
pipeline_tag: text-classification
datasets:
- synthetic
tasks:
- text-classification
library_name: transformers
---
# Task Classification Model (ICM)
## Model Description
ICM is a BERT-based sequence classification model that routes computer vision questions to the appropriate specialized module. It classifies each question into one of four task categories: VQA, Captioning, Grounding, or Geometry.
- **Repository:** beingamanforever/ICM
- **Base Model:** bert-base-uncased
- **Task:** 4-way Sequence Classification
## Labels
| ID | Label | Description |
|---|---|---|
| 0 | vqa | Visual Question Answering ("What color is the car?") |
| 1 | captioning | Image Description ("Describe the sunset.") |
| 2 | grounding | Object Localization ("Find the person in the image.") |
| 3 | geometry | Spatial/Metric Queries ("Calculate the area of the red box.") |
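For programmatic use, the table above would correspond to the following `id2label`/`label2id` dictionaries (a sketch; the exact mapping is assumed to match the checkpoint's `config.json`):

```python
# Assumed label mapping, mirroring the table above
id2label = {0: "vqa", 1: "captioning", 2: "grounding", 3: "geometry"}
label2id = {label: idx for idx, label in id2label.items()}
```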
## Architecture
BERT-Base encoder followed by a 3-layer MLP classifier over the [CLS] token (sketched in code below):
- Layer 1: Linear(768 → 256) + ReLU + Dropout(0.1)
- Layer 2: Linear(256 → 128) + ReLU + Dropout(0.1)
- Layer 3: Linear(128 → 4)
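A minimal PyTorch sketch of the architecture described above. The class and attribute names are illustrative and do not necessarily match the actual checkpoint layout:

```python
import torch.nn as nn
from transformers import BertModel

class ICMClassifier(nn.Module):
    """Illustrative reimplementation: BERT-Base encoder + 3-layer MLP
    classifier applied to the [CLS] token representation."""
    def __init__(self, num_labels: int = 4, dropout: float = 0.1):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        self.classifier = nn.Sequential(
            nn.Linear(768, 256), nn.ReLU(), nn.Dropout(dropout),  # Layer 1
            nn.Linear(256, 128), nn.ReLU(), nn.Dropout(dropout),  # Layer 2
            nn.Linear(128, num_labels),                           # Layer 3
        )

    def forward(self, input_ids, attention_mask=None):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = outputs.last_hidden_state[:, 0]  # [CLS] token embedding
        return self.classifier(cls)
```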
## Training
| Hyperparameter | Value |
|---|---|
| Samples | 1,600 (400 per class) |
| Epochs | 5 |
| Learning Rate | 2e-5 |
| Batch Size | 32 |
| Optimizer | AdamW |
| Loss | Cross Entropy |
**Data:** Synthetic questions from balanced JSON files (vqa_qs.json, captioning_qs.json, grounding_qs.json, geometry_qs.json)
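A condensed training sketch using the hyperparameters above. Data loading from the synthetic JSON files is omitted, `train_dataset` is a placeholder, and the stock classification head is used here instead of the custom 3-layer MLP:

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = torch.nn.CrossEntropyLoss()

# `train_dataset` is assumed to yield (question_text, label_id) pairs
# built from the four balanced JSON files described above.
loader = DataLoader(train_dataset, batch_size=32, shuffle=True)

model.train()
for epoch in range(5):
    for texts, labels in loader:
        batch = tokenizer(list(texts), return_tensors="pt",
                          padding=True, truncation=True)
        logits = model(**batch).logits
        loss = loss_fn(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```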
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_name = "beingamanforever/ICM"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

questions = [
    "What is the distance between the two trees?",
    "Describe what the child is wearing.",
    "Is the traffic light green?",
    "Box the location of the blue umbrella.",
]

# Tokenize the batch and run a forward pass without gradient tracking
inputs = tokenizer(questions, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predictions = torch.argmax(logits, dim=-1)

# Map predicted class IDs back to task labels
for q, pred in zip(questions, predictions):
    print(f"{q} → {model.config.id2label[pred.item()]}")
```
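The same routing can also be done with the `transformers` pipeline API; the printed score below is illustrative:

```python
from transformers import pipeline

router = pipeline("text-classification", model="beingamanforever/ICM")
print(router("Calculate the area of the red box."))
# e.g. [{'label': 'geometry', 'score': ...}]
```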
## Limitations
- **Synthetic Training Data:** May not generalize to complex real-world queries
- **Text-Only:** Processes questions without image context
- **Domain Scope:** Optimized for vision task routing, not general NLP classification
## Intended Use
- Automatic query routing in multimodal AI pipelines
- VQA dataset analysis and taxonomy studies
- Educational demonstrations of vision task classification