vllm clarification
README.md CHANGED
@@ -37,6 +37,8 @@ For more details, read the paper: [SafetyAnalyst: Interpretable, transparent, an
 
 ## How to Use HarmReporter
 
+Outputs from HarmReporter can be generated using the following code snippet:
+
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
@@ -49,6 +51,8 @@ input_tokenized = tokenizer.apply_chat_template(text_input, return_tensors="pt")
 output = model.generate(input_tokenized, max_new_tokens=19000)
 ```
 
+However, due to the extensive lengths of the harm trees generated by HarmReporter, we recommend using the [vllm](https://github.com/vllm-project/vllm) library to generate the outputs.
+
 ## Intended Uses of HarmReporter
 
 - Harmfulness analysis: HarmReporter can be used to analyze the harmfulness of a given prompt in the hypothetical scenario that an AI language model provides a helpful answer to the prompt. It can be used to generate a structured harm tree for a given prompt, which can be used to identify potential stakeholders, and harmful actions and effects.
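
The diff shows only fragments of the transformers snippet, so here is a minimal end-to-end sketch of that path. The repository id (`your-org/HarmReporter`) and the example prompt are placeholders, not names from the README:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder repository id: substitute the actual HarmReporter checkpoint.
model_id = "your-org/HarmReporter"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Wrap the prompt to analyze in a chat message and apply the model's chat template,
# mirroring the apply_chat_template call shown in the README.
text_input = [{"role": "user", "content": "Example prompt to analyze"}]
input_tokenized = tokenizer.apply_chat_template(
    text_input, return_tensors="pt"
).to(model.device)

# Harm trees can be very long, hence the large token budget from the README.
output = model.generate(input_tokenized, max_new_tokens=19000)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```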
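
For the vllm route the added paragraph recommends, a comparable sketch is below. The repository id is again a placeholder, and rendering the chat template to a string before calling `llm.generate` is one common pattern, not necessarily the authors' exact setup:

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# Placeholder repository id: substitute the actual HarmReporter checkpoint.
model_id = "your-org/HarmReporter"

tokenizer = AutoTokenizer.from_pretrained(model_id)
llm = LLM(model=model_id)

# Render the chat messages to a plain prompt string for vllm.
text_input = [{"role": "user", "content": "Example prompt to analyze"}]
prompt = tokenizer.apply_chat_template(
    text_input, tokenize=False, add_generation_prompt=True
)

# Match the generation budget used in the transformers snippet.
sampling_params = SamplingParams(max_tokens=19000, temperature=0.0)
outputs = llm.generate([prompt], sampling_params)
print(outputs[0].outputs[0].text)
```

vllm's continuous batching and paged attention make it considerably faster than plain `model.generate` for outputs of this length, which is presumably why the README recommends it.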