OpenAI-Clip: Optimized for Qualcomm Devices

Contrastive Language-Image Pre-Training (CLIP) uses a ViT-like Transformer to extract visual features and a causal language model to extract text features. Both the text and visual features can then be used for a variety of zero-shot learning tasks.
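To make the zero-shot workflow concrete, here is a minimal sketch using the Hugging Face transformers CLIP implementation (the openai/clip-vit-base-patch16 checkpoint corresponds to the ViT-B/16 variant listed below; this illustrates the model's usage pattern, not the Qualcomm-exported runtime):

```python
# Minimal zero-shot classification sketch with Hugging Face transformers.
# The image path and label prompts are illustrative placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch16")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")

image = Image.open("cat.jpg")  # hypothetical input image
labels = ["a photo of a cat", "a photo of a dog"]

# Tokenize the text prompts and preprocess the image into model-ready tensors.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into per-label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```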

This model is based on the implementation of OpenAI-Clip found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can also use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.

Qualcomm® AI Hub Models uses Qualcomm® AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.
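For reference, a minimal sketch of submitting a compile-and-profile job through the qai_hub Python client; the model file, device name, and input spec below are illustrative assumptions, not the exact configuration behind the numbers in the performance table:

```python
# Sketch: compile and profile a model on a hosted device via Qualcomm AI Hub.
# The TorchScript path, device, and input spec are placeholder assumptions.
import qai_hub as hub

device = hub.Device("Samsung Galaxy S24 (Family)")  # placeholder hosted device

compile_job = hub.submit_compile_job(
    model="clip_image_encoder.pt",             # hypothetical TorchScript export
    device=device,
    input_specs=dict(image=(1, 3, 224, 224)),  # matches the 224x224 input below
)

profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),  # compiled on-device artifact
    device=device,
)
print(profile_job.job_id)
```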

Getting Started

There are two ways to deploy this model on your device:

Option 1: Download Pre-Exported Models

Download pre-exported model assets from OpenAI-Clip on Qualcomm® AI Hub.
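Once downloaded, the ONNX variant can be sanity-checked locally; a minimal sketch with onnxruntime (the file name is a placeholder for whatever asset AI Hub actually serves):

```python
# Inspect a downloaded ONNX export locally. "openai_clip.onnx" is a
# hypothetical file name; adjust it to the downloaded asset.
import onnxruntime as ort

session = ort.InferenceSession("openai_clip.onnx", providers=["CPUExecutionProvider"])

# Print the expected input names, shapes, and dtypes.
for inp in session.get_inputs():
    print(inp.name, inp.shape, inp.type)
```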

Option 2: Export with Custom Configurations

Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:

  • Custom weights (e.g., fine-tuned checkpoints)
  • Custom input shapes
  • Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here.
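As a sketch of what this looks like, assuming the model's package id in qai_hub_models is openai_clip (verify the exact module path, flags, and checkpoint hooks in the repository linked below):

```python
# Sketch of a custom export through the qai_hub_models library. The model id
# `openai_clip`, the device name, and the availability of a custom-checkpoint
# argument are assumptions to verify, not a documented contract.
#
# Typical CLI form:
#   python -m qai_hub_models.models.openai_clip.export --device "Snapdragon X Elite CRD"
from qai_hub_models.models.openai_clip import Model

# Load the default ViT-B/16 weights; a fine-tuned checkpoint would be
# substituted here for custom-weight exports.
model = Model.from_pretrained()
```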

See our OpenAI-Clip repository on GitHub for usage instructions.

Model Details

Model Type: Image classification

Model Stats:

  • Model checkpoint: ViT-B/16
  • Image input resolution: 224x224
  • Text context length: 77
  • Number of parameters: 150M
  • Model size (float): 571 MB
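To make these stats concrete, a small sketch of dummy inputs sized to match them (the input names, NCHW layout, and dtypes are assumptions; verify against the exported artifact's input metadata):

```python
# Dummy inputs matching the stats above: a 224x224 RGB image and a 77-token
# text sequence. Layout and dtypes are assumptions to verify.
import numpy as np

image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # 224x224 RGB image
text = np.zeros((1, 77), dtype=np.int32)                   # 77-token context
print(image.shape, text.shape)
```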

Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| OpenAI-Clip | ONNX | float | Snapdragon® X2 Elite | 7.205 | 291 - 291 | NPU |
| OpenAI-Clip | ONNX | float | Snapdragon® X Elite | 16.481 | 291 - 291 | NPU |
| OpenAI-Clip | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 11.304 | 1 - 564 | NPU |
| OpenAI-Clip | ONNX | float | Qualcomm® QCS8550 (Proxy) | 15.726 | 0 - 335 | NPU |
| OpenAI-Clip | ONNX | float | Qualcomm® QCS9075 | 20.388 | 0 - 4 | NPU |
| OpenAI-Clip | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 9.086 | 1 - 533 | NPU |
| OpenAI-Clip | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 6.999 | 1 - 496 | NPU |
| OpenAI-Clip | QNN_DLC | float | Snapdragon® X2 Elite | 9.095 | 1 - 1 | NPU |
| OpenAI-Clip | QNN_DLC | float | Snapdragon® X Elite | 18.846 | 1 - 1 | NPU |
| OpenAI-Clip | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 12.55 | 0 - 554 | NPU |
| OpenAI-Clip | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 56.004 | 1 - 505 | NPU |
| OpenAI-Clip | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 17.857 | 1 - 3 | NPU |
| OpenAI-Clip | QNN_DLC | float | Qualcomm® SA8775P | 20.945 | 1 - 504 | NPU |
| OpenAI-Clip | QNN_DLC | float | Qualcomm® QCS9075 | 21.217 | 3 - 5 | NPU |
| OpenAI-Clip | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 21.0 | 0 - 502 | NPU |
| OpenAI-Clip | QNN_DLC | float | Qualcomm® SA7255P | 56.004 | 1 - 505 | NPU |
| OpenAI-Clip | QNN_DLC | float | Qualcomm® SA8295P | 22.083 | 1 - 497 | NPU |
| OpenAI-Clip | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 10.522 | 1 - 516 | NPU |
| OpenAI-Clip | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 8.338 | 0 - 486 | NPU |
| OpenAI-Clip | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 11.082 | 0 - 559 | NPU |
| OpenAI-Clip | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 52.065 | 0 - 508 | NPU |
| OpenAI-Clip | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 15.604 | 0 - 3 | NPU |
| OpenAI-Clip | TFLITE | float | Qualcomm® SA8775P | 18.667 | 0 - 508 | NPU |
| OpenAI-Clip | TFLITE | float | Qualcomm® QCS9075 | 20.359 | 0 - 294 | NPU |
| OpenAI-Clip | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 20.309 | 0 - 502 | NPU |
| OpenAI-Clip | TFLITE | float | Qualcomm® SA7255P | 52.065 | 0 - 508 | NPU |
| OpenAI-Clip | TFLITE | float | Qualcomm® SA8295P | 21.252 | 0 - 495 | NPU |
| OpenAI-Clip | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 9.021 | 0 - 517 | NPU |
| OpenAI-Clip | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 6.928 | 0 - 496 | NPU |

License

  • The license for the original implementation of OpenAI-Clip can be found here.

References

  • Paper: Learning Transferable Visual Models From Natural Language Supervision (https://arxiv.org/abs/2103.00020)