---
base_model: AquilaX-AI/ai_scanner
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AquilaX-AI
- **License:** apache-2.0
- **Finetuned from model:** AquilaX-AI/ai_scanner
This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
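## Usage
The snippet below loads the `unsloth.Q8_0.gguf` file through `transformers` (GGUF loading relies on the `gguf` package) and streams a vulnerability report for whatever code you paste in place of `CODE FOR SCANNING`.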
```python
# Install the required packages first:
#   pip install gguf transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
import torch
import json

model_id = "AquilaX-AI/AI-Scanner-Quantized"
filename = "unsloth.Q8_0.gguf"

# Load the GGUF checkpoint through transformers
tokenizer = AutoTokenizer.from_pretrained(model_id, gguf_file=filename)
model = AutoModelForCausalLM.from_pretrained(model_id, gguf_file=filename)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)

# Qwen2 ChatML-style prompt: system instructions, then the code to scan
sys_prompt = """<|im_start|>system\nYou are Securitron, an AI assistant specialized in detecting vulnerabilities in source code. Analyze the provided code and provide a structured report on any security issues found.<|im_end|>"""

user_prompt = """
CODE FOR SCANNING
"""

prompt = f"""{sys_prompt}
<|im_start|>user
{user_prompt}<|im_end|>
<|im_start|>assistant
"""

encodeds = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.to(device)

# Stream generated tokens to stdout, skipping the echoed prompt
text_streamer = TextStreamer(tokenizer, skip_prompt=True)

response = model.generate(
    input_ids=encodeds,
    streamer=text_streamer,
    max_new_tokens=4096,
    use_cache=True,
    pad_token_id=151645,  # <|im_end|> token id for Qwen2
    eos_token_id=151645,
    num_return_sequences=1,
)

# Extract the assistant's reply and parse the structured JSON report
output = json.loads(
    tokenizer.decode(response[0])
    .split('<|im_start|>assistant')[-1]
    .split('<|im_end|>')[0]
    .strip()
)
```
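The final `json.loads` call assumes the model emits well-formed JSON between the assistant tag and the `<|im_end|>` token. A minimal sketch of guarding that parse, assuming the block above has already run (the fallback printing is an illustration, not part of the model card):

```python
# Hedged sketch: guard the JSON parse in case generation is truncated or malformed.
raw = (
    tokenizer.decode(response[0])
    .split('<|im_start|>assistant')[-1]
    .split('<|im_end|>')[0]
    .strip()
)
try:
    report = json.loads(raw)
    print(json.dumps(report, indent=2))  # pretty-print the structured report
except json.JSONDecodeError:
    report = None
    print("Model output was not valid JSON:\n", raw)
```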