TangoBeeAkto commited on
Commit
a04b6d1
·
verified ·
1 Parent(s): 0b7116b

Update README - remove protectai references

Browse files
Files changed (1) hide show
  1. README.md +27 -10
README.md CHANGED
@@ -9,31 +9,48 @@ tags:
9
 
10
  # codenlbert-tiny-onnx
11
 
12
- This is an ONNX model used by LLM Guard for security scanning.
13
 
14
- Original model source: `protectai/vishnun-codenlbert-tiny-onnx`
 
 
 
 
15
 
16
  ## Usage
17
 
18
- This model is used automatically by the LLM Guard library. Install LLM Guard:
19
 
20
  ```bash
21
  pip install llm-guard
22
  ```
23
 
 
 
 
 
 
 
 
 
24
  The model will be downloaded automatically when the corresponding scanner is used.
25
 
26
  ## About LLM Guard
27
 
28
  LLM Guard is a comprehensive security toolkit for Large Language Models, providing:
29
- - Prompt injection detection
30
- - PII detection and anonymization
31
- - Toxicity filtering
32
- - Bias detection
33
- - And more security features
34
 
35
- Repository: https://github.com/akto-api-security/llm-guard
 
 
 
 
 
 
36
 
37
  ## License
38
 
39
- MIT License - See the original model repository for specific licensing details.
 
 
 
 
 
9
 
10
  # codenlbert-tiny-onnx
11
 
12
+ This is an ONNX model used by [LLM Guard](https://github.com/akto-api-security/llm-guard) for security scanning of Large Language Models.
13
 
14
+ ## Model Details
15
+
16
+ **Base Model:** vishnun/codenlbert-tiny
17
+
18
+ This model has been converted to ONNX format for optimized inference performance.
19
 
20
  ## Usage
21
 
22
+ This model is used automatically by the LLM Guard library:
23
 
24
  ```bash
25
  pip install llm-guard
26
  ```
27
 
28
+ ```python
29
+ from llm_guard.input_scanners import PromptInjection
30
+
31
+ scanner = PromptInjection()
32
+ result = scanner.scan("Your prompt here")
33
+ print(result)
34
+ ```
35
+
36
  The model will be downloaded automatically when the corresponding scanner is used.
37
 
38
  ## About LLM Guard
39
 
40
  LLM Guard is a comprehensive security toolkit for Large Language Models, providing:
 
 
 
 
 
41
 
42
+ - 🛡️ Prompt injection detection
43
+ - 🔒 PII detection and anonymization
44
+ - 🚫 Toxicity filtering
45
+ - ⚖️ Bias detection
46
+ - 📊 And many more security features
47
+
48
+ **Repository:** https://github.com/akto-api-security/llm-guard
49
 
50
  ## License
51
 
52
+ MIT License
53
+
54
+ ---
55
+
56
+ *Maintained by [Akto API Security](https://akto.io)*