This model has 2 files scanned as unsafe.
- 1.52 kB initial commit
- 0 Bytes initial commit
- 729 Bytes Update README.md
- diffusion-model-3B.pth (7.6 GB)
Detected Pickle imports (33)
- "transformers.models.llama.modeling_llama.LlamaMLP",
- "peft.tuners.lora.model.LoraModel",
- "transformers.models.llama.modeling_llama.LlamaRotaryEmbedding",
- "__main__.CustomTransformerModel",
- "__builtin__.set",
- "__builtin__.getattr",
- "peft.peft_model.PeftModel",
- "collections.OrderedDict",
- "peft.tuners.lora.config.LoraConfig",
- "torch._utils._rebuild_parameter",
- "torch.HalfStorage",
- "torch.FloatStorage",
- "__main__.CustomTransformerConfig",
- "transformers.modeling_rope_utils._compute_llama3_parameters",
- "transformers.models.llama.configuration_llama.LlamaConfig",
- "torch._utils._rebuild_tensor_v2",
- "torch.float16",
- "peft.tuners.lora.layer.Linear",
- "peft.tuners.lora.config.LoraRuntimeConfig",
- "torch.nn.modules.sparse.Embedding",
- "torch.nn.modules.linear.Identity",
- "transformers.models.llama.modeling_llama.LlamaForCausalLM",
- "transformers.models.llama.modeling_llama.LlamaRMSNorm",
- "transformers.models.llama.modeling_llama.LlamaAttention",
- "torch.nn.modules.activation.SiLU",
- "peft.utils.peft_types.PeftType",
- "torch.nn.modules.linear.Linear",
- "transformers.models.llama.modeling_llama.LlamaModel",
- "transformers.generation.configuration_utils.GenerationConfig",
- "torch.nn.modules.container.ModuleList",
- "torch.nn.modules.container.ModuleDict",
- "transformers.models.llama.modeling_llama.LlamaDecoderLayer",
- "torch.nn.modules.container.ParameterDict"
  Rename diffusion-model.pth to diffusion-model-3B.pth
- diffusion-model-8B.pth (17.8 GB)
Detected Pickle imports (33)
- "torch.nn.modules.container.ParameterDict",
- "peft.utils.peft_types.PeftType",
- "transformers.models.llama.modeling_llama.LlamaForCausalLM",
- "torch.nn.modules.linear.Identity",
- "collections.OrderedDict",
- "__builtin__.getattr",
- "torch.float16",
- "peft.tuners.lora.config.LoraConfig",
- "torch.nn.modules.linear.Linear",
- "peft.tuners.lora.model.LoraModel",
- "torch.nn.modules.activation.SiLU",
- "transformers.modeling_rope_utils._compute_llama3_parameters",
- "transformers.models.llama.modeling_llama.LlamaRotaryEmbedding",
- "transformers.models.llama.modeling_llama.LlamaDecoderLayer",
- "transformers.models.llama.modeling_llama.LlamaRMSNorm",
- "peft.tuners.lora.layer.Linear",
- "peft.peft_model.PeftModel",
- "transformers.models.llama.modeling_llama.LlamaModel",
- "transformers.models.llama.modeling_llama.LlamaMLP",
- "__builtin__.set",
- "__main__.CustomTransformerConfig",
- "torch._utils._rebuild_parameter",
- "transformers.generation.configuration_utils.GenerationConfig",
- "__main__.CustomTransformerModel",
- "torch.nn.modules.container.ModuleList",
- "torch.nn.modules.sparse.Embedding",
- "peft.tuners.lora.config.LoraRuntimeConfig",
- "torch.nn.modules.container.ModuleDict",
- "transformers.models.llama.modeling_llama.LlamaAttention",
- "torch.HalfStorage",
- "transformers.models.llama.configuration_llama.LlamaConfig",
- "torch.FloatStorage",
- "torch._utils._rebuild_tensor_v2"
  Final version 8B model