---
library_name: transformers
license: apache-2.0
pipeline_tag: other
tags:
- benchmark
- training-data-detection
- membership-inference
- computer-security
---

# TDDBench: MLP Target Model (Student Dataset)

This repository hosts `mlp-student-0`, a target model checkpoint released as part of **TDDBench: A Benchmark for Training data detection**. TDDBench is a comprehensive benchmark designed to evaluate the effectiveness of Training Data Detection (TDD) methods, also known as Membership Inference Attacks (MIA). The paper introducing TDDBench highlights TDD's importance in assessing training-data breach risks, ensuring copyright authentication, and verifying model unlearning. The benchmark consists of 13 datasets spanning three data modalities (image, tabular, and text) and evaluates 21 different TDD methods across four detection paradigms.

This specific `mlp-student-0` model is a Multi-Layer Perceptron (MLP) trained on the "student" dataset, serving as one of the many target models for TDD evaluation within the benchmark. Through TDDBench, researchers can identify bottlenecks and areas for improvement in TDD algorithms, while practitioners can make informed trade-offs between effectiveness and efficiency. Extensive experiments also reveal the generally unsatisfactory performance of TDD algorithms across different datasets, indicating a need for continued research.

## Model Details

### Model Description

This model (`mlp-student-0`) is an MLP (Multi-Layer Perceptron) trained specifically on the `student` dataset. It is one of the target models released within the TDDBench framework. To reduce statistical error during evaluation, five different target models are trained for each combination of model architecture and training dataset; this particular model is the instance identified by `model_idx=0`.
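For illustration, the five sibling checkpoints for this architecture/dataset pair can be enumerated with a short loop. This is a minimal sketch, assuming the `TDDBench/mlp-student-{i}` repository naming used in the loading example further below; `trust_remote_code=True` is needed because the architecture is custom:

```python
from transformers import AutoModel

# Hypothetical sketch: load all five released instances (model_idx = 0..4) of the
# MLP architecture trained on the "student" dataset; repository names follow the
# f"TDDBench/{model_name}-{dataset_name}-{model_idx}" pattern used below.
models = []
for model_idx in range(5):
    m = AutoModel.from_pretrained(
        f"TDDBench/mlp-student-{model_idx}",
        trust_remote_code=True,  # the MLP architecture is defined by custom code on the Hub
    )
    m.eval()
    models.append(m)
```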
- **Developed by:** Zhihao Zhu, Yi Yang, Defu Lian
- **Model type:** Target Model for Training Data Detection Benchmark (MLP)
- **Language(s):** English (for the relevant text datasets)
- **License:** Apache 2.0

### Model Sources

- **Paper:** [TDDBench: A Benchmark for Training data detection](https://huggingface.co/papers/2411.03363)
- **GitHub Repository (Official Implementation):** [https://github.com/TDDBench/TDDBench](https://github.com/TDDBench/TDDBench)
- **Related Hugging Face Collection:** More models and datasets related to TDDBench are available on the [TDDBench Hugging Face organization page](https://huggingface.co/TDDBench).

## Uses

### Direct Use

This target model, `mlp-student-0`, is intended for researchers and practitioners to:

- Serve as a pre-trained target model for evaluating Training Data Detection (TDD) methods within the TDDBench framework.
- Facilitate the reproduction of experiments described in the TDDBench paper.
- Act as a component in developing and testing new TDD algorithms.

### Out-of-Scope Use

This model is not intended for:

- Direct deployment as a privacy auditing tool without further research, validation, and consideration of its limitations.
- General machine learning tasks outside the context of Training Data Detection benchmarking.
- Making definitive claims about data privacy risks without a thorough understanding of the limitations of TDD algorithms.

## Bias, Risks, and Limitations

Extensive experiments with TDDBench reveal that the performance of TDD algorithms is generally unsatisfactory across different datasets, which indicates that current TDD methods may not be universally robust or effective.

Key limitations noted in the paper include:

- **Performance Gaps:** Significant performance differences exist between different types of TDD algorithms.
- **Computational Costs:** Model-based TDD methods often outperform others but incur high computational costs due to the need for multiple reference models.
- **Architecture Dependency:** TDD performance depends heavily on knowledge of the underlying target model architecture, and degrades when the target model architecture is unknown.
- **No Universal Winner:** No single TDD algorithm consistently outperforms the others across all scenarios.

### Recommendations

Users of this model and the TDDBench benchmark should carefully consider these limitations. When selecting or developing TDD algorithms, it is crucial to balance detection performance against computational efficiency based on the specific real-world conditions. Further research is needed to develop more robust and generalizable TDD methods.

## How to Get Started with the Model

This model is designed to be loaded and used in conjunction with the TDDBench codebase to perform Training Data Detection evaluations.

First, ensure you have the `transformers` and `datasets` libraries installed, and that the `hfmodel.py` file from the TDDBench GitHub repository (which defines `MLPConfig` and `MLPHFModel`) is accessible in your Python environment. You may need to install additional dependencies as specified in the [TDDBench `requirements.txt`](https://raw.githubusercontent.com/TDDBench/TDDBench/main/requirements.txt).

```bash
pip install transformers datasets
# For custom model architectures from TDDBench:
# pip install -r https://raw.githubusercontent.com/TDDBench/TDDBench/main/requirements.txt
```

To load this target model and its corresponding training data detection labels, use the `transformers` library:

```python
import numpy as np
from datasets import load_dataset
from transformers import AutoConfig, AutoModel

# IMPORTANT: MLPConfig and MLPHFModel must be imported or defined. These custom
# classes are part of the TDDBench repository (e.g., in hfmodel.py). If you cloned
# the TDDBench repository, ensure the 'benchmark/basic' directory is on your
# Python path, or copy `hfmodel.py` into your working directory.
try:
    from hfmodel import MLPConfig, MLPHFModel, WRNConfig, WRNHFModel

    # Register the custom model architectures so AutoModel can load them.
    # This is crucial for models with custom architectures like MLPHFModel.
    AutoConfig.register("mlp", MLPConfig)
    AutoModel.register(MLPConfig, MLPHFModel)
    AutoConfig.register("wrn", WRNConfig)  # assuming WRN is also a model type
    AutoModel.register(WRNConfig, WRNHFModel)
except ImportError:
    # Fallback for demonstration if hfmodel.py is not locally available; in that
    # case, trust_remote_code=True below lets Hugging Face fetch the custom model
    # definition from the Hub. For actual usage, making hfmodel.py available is
    # recommended.
    print("Warning: hfmodel classes not found. Ensure TDDBench 'hfmodel.py' is accessible or rely on trust_remote_code=True.")

# Load the target model
dataset_name = "student"  # the training dataset name for this model
model_name = "mlp"        # the target model architecture (e.g., "mlp", "wrn")
model_idx = 0             # index of this specific model (0-4 available per architecture/dataset combination)

model_path = f"TDDBench/{model_name}-{dataset_name}-{model_idx}"

# Use trust_remote_code=True so that, if the custom model definition (e.g.,
# MLPHFModel) is not locally available, Hugging Face loads it from the Hub.
model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
model.eval()  # set the model to evaluation mode

# Load the training data detection labels
# (1 means the example is part of the model's training data, 0 means it is not)
config = AutoConfig.from_pretrained(model_path)
tdd_label = np.array(config.tdd_label)

print(f"Model loaded: {model_path}")
print(f"Shape of TDD label: {tdd_label.shape}")

# You can also load the corresponding dataset from the Hub
dataset_path = f"TDDBench/{dataset_name}"
dataset = load_dataset(dataset_path)["train"]
print(f"Sample dataset loaded: {len(dataset)} examples")

# Refer to the demo.ipynb file in the official TDDBench GitHub repository for a
# complete example of how to use these components to record model output loss for
# training and non-training data.
```
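As a rough illustration of that last step, here is a minimal, hedged sketch of recording per-example losses and splitting them by membership, continuing from the variables loaded above. The `"feature"` and `"label"` column names, the raw-logit model output, and the alignment of `tdd_label` with the dataset order are assumptions for illustration; the actual preprocessing is defined in the TDDBench codebase and `demo.ipynb`:

```python
import numpy as np
import torch
import torch.nn.functional as F

# Hypothetical sketch: record the model's per-example cross-entropy loss.
# Column names "feature" and "label", the raw-logit output, and the alignment
# of tdd_label with the dataset ordering are all assumptions here; demo.ipynb
# in the TDDBench repository shows the actual preprocessing.
losses = []
with torch.no_grad():
    for example in dataset:
        x = torch.tensor([example["feature"]], dtype=torch.float32)
        y = torch.tensor([example["label"]])
        logits = model(x)  # adapt if the model returns a ModelOutput instead of raw logits
        losses.append(F.cross_entropy(logits, y).item())

losses = np.array(losses)
member_losses = losses[tdd_label == 1]     # examples seen during training
nonmember_losses = losses[tdd_label == 0]  # held-out examples
print(f"Mean loss on training members: {member_losses.mean():.4f}")
print(f"Mean loss on non-members: {nonmember_losses.mean():.4f}")
```

Training members typically exhibit lower loss than non-members; this memorization signal is what metric-based TDD methods exploit.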
## Training Details

### Training Data

The TDDBench benchmark utilizes 13 datasets spanning three data modalities: image, tabular, and text. These datasets are sourced from torchvision, Hugging Face, the UCI Machine Learning Repository, and academic papers. Some datasets, particularly those from torchvision and UCI, can be downloaded automatically, while others may require manual download.

### Training Procedure

The TDDBench framework provides scripts (e.g., `train_base_model.sh`) to train the target, shadow, and reference models used in the TDD evaluation. The checkpoints of these models, along with the indexes of their training data, are stored in the `benchmark/meta_log` folder of the main repository.

The specific training parameters for both the target models (like this MLP model) and the TDD algorithms are detailed in the accompanying paper and can be adjusted in the `benchmark/configs` directory of the GitHub repository.

## Evaluation

### Testing Data, Factors & Metrics

TDDBench evaluates 21 different TDD algorithms across a variety of settings.

#### Testing Data

The benchmark includes 13 distinct datasets.

#### Factors

Evaluations are disaggregated by several factors:

- **Algorithm Type:** TDD algorithms are categorized into four types: metric-based, learning-based, model-based, and query-based.
- **Model Architecture:** Results are presented for 11 different model architectures.
- **Data Modality:** Evaluations span image, tabular, and text data.

#### Metrics

Performance is evaluated from five key perspectives:

- Average detection performance (e.g., AUC, accuracy)
- Best detection performance
- Memory consumption
- Computational efficiency (in terms of time)
- Computational efficiency (in terms of memory)

### Results

The paper "TDDBench: A Benchmark for Training data detection" provides extensive experimental results. Key findings include:

- Significant performance gaps exist between different TDD algorithm types, with model-based methods generally outperforming the others despite higher computational costs.
- Memorization of training data is crucial for TDD algorithm performance, with larger target models typically exhibiting higher detection success rates.
- Performance degrades when the underlying target model architecture is unknown.
- No single TDD method emerges as a clear "winner" across all scenarios, emphasizing the need for testers to balance performance and efficiency based on real-world conditions.
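As a concrete illustration of the detection-performance metric, the per-example losses recorded earlier can be turned into a simple loss-threshold detector and scored with AUC. This is a minimal sketch, assuming the `losses` and `tdd_label` arrays from the examples above and that `scikit-learn` is installed; it is an illustrative baseline, not one of the 21 TDD algorithms as implemented in TDDBench:

```python
from sklearn.metrics import roc_auc_score

# Hypothetical metric-based baseline: score each example by its negative loss
# (training members tend to have lower loss) and measure detection AUC against
# the ground-truth membership labels from config.tdd_label.
auc = roc_auc_score(tdd_label, -losses)
print(f"Loss-threshold detection AUC: {auc:.3f}")
```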
## Citation

If you find TDDBench or this model checkpoint useful for your research, please consider citing the original paper:

```bibtex
@article{zhu2024tddbench,
  title={TDDBench: A Benchmark for Training data detection},
  author={Zhu, Zhihao and Yang, Yi and Lian, Defu},
  journal={arXiv preprint arXiv:2411.03363},
  year={2024}
}
```