---
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
model-index:
- name: Phi3.1-Simple-Arguments
  results:
  - task:
      type: text-generation
    dataset:
      name: Argument-parsing
      type: Argument-parsing
    metrics:
    - name: Accuracy
      type: Accuracy
      value: 100
---
# Phi3.1 Simple Arguments

[Developer profile](https://www.freelancer.com/u/cdesivo92)
This model parses simple English arguments: arguments formed of two premises and a conclusion, built from two propositions.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Cristian Desivo
- **Model type:** LLM
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** Phi3.1-mini
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** TBD
- **Demo:** TBD
### Quantization
<!-- - **Q4_K_M.gguf** https://huggingface.co/cris177/Qwen2-Simple-Arguments/resolve/main/Qwen2_arguments.Q4_K_M.gguf?download=true -->
## Usage
Below are some code snippets to help you get started running the model.
### llama.cpp server [Recommended]
The recommended way of running the model is with a llama.cpp server serving the quantized GGUF model (for example, `llama-server -m <model>.gguf --port 8080`; the script below assumes the default `http://localhost:8080` endpoint).
You can then use the following script to run inference against the server:
```python
import json
import requests


def llmCall(messages, **args):
    # Send a chat completion request to the local llama.cpp server.
    url = "http://localhost:8080/v1/chat/completions"
    headers = {"Content-Type": "application/json"}
    data = {"messages": messages}
    data.update(args)  # forward extra generation parameters (max_tokens, temperature, ...)
    response = requests.post(url, headers=headers, json=data)
    return response.json()


def analyze_argument(argument):
    instruction = "Based on the following argument, identify the following elements: premises, conclusion, propositions, type of argument, negation of propositions and validity."
    inputText = "### Input:\n" + argument
    prompt = f"""{instruction}
{inputText}
"""
    messages = [{"role": "user", "content": prompt}]
    # JSON schema used to constrain the server's output
    properties = {
        "Premise 1": {"type": "string"},
        "Premise 2": {"type": "string"},
        "Conclusion": {"type": "string"},
        "Type of argument": {"type": "string"},
        "Proposition 1": {"type": "string"},
        "Proposition 2": {"type": "string"},
        "Negation of Proposition 1": {"type": "string"},
        "Negation of Proposition 2": {"type": "string"},
        "Validity": {"type": "string"},
    }
    analysis = llmCall(
        messages=messages,
        max_tokens=1000,
        temperature=0,
        stop=["<|end|>"],
        response_format={
            "type": "json_object",
            "schema": {
                "type": "object",
                "properties": properties,
                "required": list(properties.keys()),
            },
        },
    )["choices"][0]["message"]["content"]
    # Strip the end-of-turn token if the server returned it
    if analysis.endswith("<|end|>"):
        analysis = analysis[:-len("<|end|>")]
    return analysis


argument = "If it's wednesday it's cold, and it's cold, therefore it's wednesday."
output = analyze_argument(argument)
print(output)
```
Output:
```
{"Premise 1": "If it's wednesday it's cold",
"Premise 2": "It's cold",
"Conclusion": "It is Wednesday",
"Proposition 1": "It is Wednesday",
"Proposition 2": "It is cold",
"Type of argument": "affirming the consequent",
"Negation of Proposition 1": "It is not Wednesday",
"Negation of Proposition 2": "It is not cold",
"Validity": true}
```
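Because the analysis comes back as a JSON string, it can be loaded straight into a Python dictionary. A minimal sketch, assuming the `analyze_argument` helper above:
```python
import json

# Parse the model's JSON answer and read individual fields
result = json.loads(analyze_argument(
    "If it's wednesday it's cold, and it's cold, therefore it's wednesday."
))
print(result["Type of argument"])
print(result["Validity"])
```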
### transformers 🤗
First make sure to run `pip install -U transformers`, then use the code below, replacing the `argument` variable with the argument you want to parse:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "cris177/Phi3.1-Simple-Arguments",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("cris177/Phi3.1-Simple-Arguments")

argument = "If it's wednesday it's cold, and it's cold, therefore it's wednesday."
instruction = 'Based on the following argument, identify the following elements: premises, conclusion, propositions, type of argument, negation of propositions and validity.'

# Alpaca-style prompt template used during fine-tuning
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{}
### Input:
{}
### Response:"""
prompt = alpaca_prompt.format(instruction, argument)

input_ids = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_length=1000, num_return_sequences=1)
print(tokenizer.decode(outputs[0]))
```
Output:
```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
Based on the following argument, identify the following elements: premises, conclusion, propositions, type of argument, negation of propositions and validity.
### Input:
If it's wednesday it's cold, and it's cold, therefore it's wednesday.
### Response:
{"Premise 1": "If it's wednesday it's cold",
"Premise 2": "It's cold",
"Conclusion": "It is Wednesday",
"Proposition 1": "It is Wednesday",
"Proposition 2": "It is cold",
"Type of argument": "affirming the consequent",
"Negation of Proposition 1": "It is not Wednesday",
"Negation of Proposition 2": "It is not cold",
"Validity": "false"}<|endoftext|>
```
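The decoded text contains the full prompt followed by the model's JSON answer, so in practice you may only want the part after `### Response:`. A minimal sketch, assuming the variables from the code above:
```python
import json

# Drop special tokens, keep only the text after the response marker, and parse it
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
response = decoded.split("### Response:")[-1].strip()
analysis = json.loads(response)
print(analysis["Validity"])
```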
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was trained on synthetic data based on the following types of arguments:
- Modus Ponens
- Modus Tollens
- Affirming the Consequent
- Disjunctive Syllogism
- Denying the Antecedent
- Invalid Conditional Syllogism

Each argument was constructed by selecting two random propositions (from a list of 400 propositions generated beforehand), choosing a type of argument, and combining it all with randomly selected connectors (therefore, since, hence, thus, etc.); a sketch of this procedure is shown below.
50k arguments were created for training and 100 for testing.
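As an illustration of that procedure (not the actual generation script; the propositions, templates, and connectors here are placeholders, and only three of the six argument types are shown), a minimal sketch might look like:
```python
import random

# Placeholder propositions; the real dataset drew from a pre-generated list of 400.
propositions = ["it is Wednesday", "it is cold", "the store is open", "the alarm rang"]
connectors = ["therefore", "hence", "thus", "so"]

def make_argument():
    p, q = random.sample(propositions, 2)
    c = random.choice(connectors)
    arg_type = random.choice(["modus ponens", "modus tollens", "affirming the consequent"])
    if arg_type == "modus ponens":      # If p then q; p; therefore q  (valid)
        text, valid = f"If {p} then {q}, and {p}, {c} {q}.", True
    elif arg_type == "modus tollens":   # If p then q; not q; therefore not p  (valid)
        text, valid = (
            f"If {p} then {q}, and it is not the case that {q}, "
            f"{c} it is not the case that {p}.",
            True,
        )
    else:                               # affirming the consequent  (invalid)
        text, valid = f"If {p} then {q}, and {q}, {c} {p}.", False
    return {"argument": text, "type": arg_type, "valid": valid}

print(make_argument())
```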
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Training
We used Unsloth for memory-efficient, faster training, and trained for one epoch.
Training used less than 3.5 GB of VRAM and took 3 hours.
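The exact training script was not published; the following is only a minimal sketch of a comparable Unsloth + TRL `SFTTrainer` setup. The base model name, LoRA settings, hyperparameters, and the placeholder dataset are illustrative assumptions, not the actual configuration.
```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

# Load the base Phi-3 mini model in 4-bit to keep VRAM usage low (assumed setup).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Phi-3-mini-4k-instruct",
    max_seq_length=1024,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder: the real training set was 50k synthetic arguments in Alpaca format.
train_dataset = Dataset.from_list([{"text": "..."}])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=1024,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```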
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
The model obtains 100% train and test accuracy on our synthetic dataset.