## Format
There is no single top-level `annotations.jsonl` in this directory. Instead, the annotations are split across these five subdirectories:

- `clean_TSQA/anomaly_detection/annotations.jsonl`
- `clean_TSQA/classification/annotations.jsonl`
- `clean_TSQA/open_ended_qa/annotations.jsonl`
- `clean_cats-bench_hard/annotations.jsonl`
- `clean_timeomni/annotations.jsonl`
Each line is one JSON object (JSONL format). The core fields are described below.
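Reading a file in this format is a one-liner per record. Below is a minimal sketch (the helper name `read_annotations` is illustrative, not part of the dataset tooling); blank lines are skipped defensively.

```python
import json

def read_annotations(path):
    """Yield one dict per non-empty line of a JSONL annotations file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)
```

Point it at any of the five `annotations.jsonl` files listed above.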
## Common Fields
### `id` (string)

Unique sample ID.

- TSQA examples: `ts_anomaly_0`, `ts_classif_1`, `ts_openqa_3`
- TimeOmni example: `1_scenario_understanding_test`
- CATS hard example: `ts_retrieval_perturbed__agriculture_100_test__0`
This field is generated by the preprocessing scripts and is used for deduplication, tracking, and evaluation alignment.
### `image` (string)

Relative path to the image file (relative to the current sub-dataset directory).

- TSQA / TimeOmni: usually `images/*.png`
- CATS hard: usually `plots/*.jpeg` (can also be `.jpg`/`.png`/`.webp`)
During training/inference, the model should load this image before answering the question.
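Because the path is relative, it must be resolved against the sub-dataset root before loading. A minimal sketch (the helper name and the error-handling choice are illustrative):

```python
from pathlib import Path

def resolve_image_path(dataset_root, record):
    """Resolve a record's relative `image` field against the sub-dataset root."""
    path = Path(dataset_root) / record["image"]
    if not path.is_file():
        raise FileNotFoundError(f"missing image: {path}")
    return path
```

The returned path can then be handed to any image loader (e.g. PIL) before the question is answered.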
### `answer_type` (string)

Answer type, which determines how outputs should be evaluated.

- `mcq`: multiple-choice (answer is typically an option letter such as `A`/`B`/`C`/`D`)
- `exact`: exact text match (e.g., `Yes`/`No`, `True`/`False`, numbers)
- `approximation`: approximate numeric answer (supported by the source scripts; rarely seen in this `hg_dataset` snapshot)
Source scripts:

- TSQA: `tsqa.py`
- TimeOmni: `testomni.py` (always `mcq`)
- CATS: `cats.py` / `cats_test.py`
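An evaluator can dispatch on `answer_type` as described above. The sketch below is an assumption about reasonable normalization rules (case-insensitive matching, a relative tolerance for `approximation`), not the exact logic of the source scripts:

```python
def is_correct(prediction, gold, answer_type, tol=0.05):
    """Score one prediction against the gold answer, keyed on answer_type."""
    pred = prediction.strip()
    gold = gold.strip()
    if answer_type == "mcq":
        # Compare option letters case-insensitively, tolerating a trailing dot.
        return pred.upper().rstrip(".") == gold.upper()
    if answer_type == "exact":
        return pred.lower() == gold.lower()
    if answer_type == "approximation":
        # Accept numbers within a relative tolerance of the gold value.
        try:
            return abs(float(pred) - float(gold)) <= tol * max(abs(float(gold)), 1e-9)
        except ValueError:
            return False
    raise ValueError(f"unknown answer_type: {answer_type}")
```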
### `conversations` (array[string, string])

Two-element array:

- `conversations[0]`: question/prompt
- `conversations[1]`: gold answer

This dataset uses a simplified two-turn format, not a role-tagged format like `{from: human/gpt}`.
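If your training tooling expects the role-tagged format, the two-element array converts in a few lines. A sketch, assuming the common `from`/`value` record shape:

```python
def to_role_tagged(record):
    """Convert the simplified [question, answer] pair to role-tagged turns."""
    question, answer = record["conversations"]
    return [
        {"from": "human", "value": question},
        {"from": "gpt", "value": answer},
    ]
```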
## Extra Field in CATS hard
### `task_type` (string, only in `clean_cats-bench_hard`)

Task subtype. This field exists only in the CATS hard annotations and distinguishes retrieval task variants.

The common value in the current data is `ts_retrieval_perturbed`.

Additional possible values supported by `cats_test.py`:

- `ts_retrieval_cross_domain`
- `ts_retrieval_same_domain`
- `caption_retrieval_cross_domain`
- `caption_retrieval_perturbed`
- `caption_retrieval_same_domain`
## Minimal Examples

```json
{"id": "ts_anomaly_0", "image": "images/ts_anomaly_0.png", "answer_type": "exact", "conversations": ["...question...", "No"]}
{"id": "ts_retrieval_perturbed__agriculture_100_test__0", "image": "plots/agriculture_100_test.jpeg", "answer_type": "mcq", "task_type": "ts_retrieval_perturbed", "conversations": ["...question...", "A"]}
```
## Mapping to Preprocessing Scripts

- TSQA subsets: `/home/xinyu/ChartModel/chart/app/data_process/rl/tsqa.py`
- CATS hard: `/home/xinyu/ChartModel/chart/app/data_process/rl/cats_test.py`
- TimeOmni: `/home/xinyu/ChartModel/chart/app/data_process/rl/testomni.py`
These scripts define the field generation logic, `answer_type` assignment, and question text cleaning rules.
## Inference Example
Below is an example workflow using the vLLM OpenAI-compatible server plus `bon_filter.py`.
### 1) Start the vLLM server
```bash
CUDA_VISIBLE_DEVICES=4,5,6,7 python -m vllm.entrypoints.openai.api_server \
    --model /path/Qwen3-VL-2B-Instruct \
    --host :: \
    --port 8003 \
    --max-model-len 8192 \
    --gpu-memory-utilization 0.85 \
    --limit-mm-per-prompt '{"image": 1}' \
    --data-parallel-size 4 \
    --trust-remote-code \
    --max-num-batched-tokens 8192
```
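Once the server is up, any OpenAI-compatible client can query it. A minimal sketch of building one multimodal chat request (the payload shape follows the OpenAI chat-completions API; the function name and `max_tokens` value are illustrative, and the model string should match the `--model` flag above):

```python
import base64

def build_request(question, image_bytes, model="/path/Qwen3-VL-2B-Instruct"):
    """Build a chat-completions payload with one inline base64 image."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
                {"type": "text", "text": question},
            ],
        }],
        "max_tokens": 256,
    }
```

POST the resulting JSON to `http://localhost:8003/v1/chat/completions` (matching the `--port` flag above).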
### 2) Run inference with `bon_filter.py`

Example on the TimeOmni subset:
```bash
python chart/app/rl/bon_filter.py \
    --image_dir /hg_dataset/clean_timeomni \
    --input_jsonl /hg_dataset/clean_timeomni/annotations.jsonl \
    --output_jsonl /hg_dataset/clean_timeomni/bon.jsonl \
    --model_name /path/Qwen3-VL-2B-Instruct \
    --n 1
```
You can switch `--image_dir` and `--input_jsonl` to other subsets in this dataset, for example:

- `/hg_dataset/clean_TSQA/anomaly_detection/`
- `/hg_dataset/clean_TSQA/classification/`
- `/hg_dataset/clean_TSQA/open_ended_qa/`
- `/hg_dataset/clean_cats-bench_hard`
Note: ensure the `image` field inside each JSONL is a valid relative path under `--image_dir`.
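A quick pre-flight check for this can be done in a few lines. The sketch below (helper name illustrative) returns every unresolvable image path in a JSONL:

```python
import json
from pathlib import Path

def find_missing_images(input_jsonl, image_dir):
    """Return image paths referenced in the JSONL that don't exist under image_dir."""
    missing = []
    with open(input_jsonl, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            path = Path(image_dir) / rec["image"]
            if not path.is_file():
                missing.append(str(path))
    return missing
```

An empty return value means the subset is safe to pass to `bon_filter.py`.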