_, answer, _ = get_final_answer(model, tokenizer, inputs, table_df)
print(answer)
```

If the question asks for a count, such as how many changes have been completed, the answer will be a single number. If it asks about the most common incident status or root cause, the answer will be the status or root cause the model predicts.
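As a minimal sketch of handling these two answer shapes, a post-processing helper could check whether the returned string is numeric. This helper (`parse_answer`) is hypothetical and not part of the pipeline above; it only assumes `answer` is the string printed by `get_final_answer`:

```python
def parse_answer(answer: str):
    """Return an int for count-style answers, else the raw label string.

    Illustrative helper, not part of the model's API: it assumes the
    answer is either a bare number (count questions) or a category
    label (status / root-cause questions).
    """
    text = answer.strip()
    if text.isdigit():
        return int(text)  # count-style answer, e.g. "7" -> 7
    return text           # label answer, e.g. "Hardware failure"

print(parse_answer("7"))                 # count question -> 7
print(parse_answer("Hardware failure"))  # root-cause question -> "Hardware failure"
```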

## Limitations

The model still does not come close to 100% accuracy; using a larger model could possibly help. It also appears to handle only tables of limited size, so it cannot take in a table covering a whole system at once; again, a larger model could possibly help. Finally, it needs the question and the table in DataFrame format, so more preprocessing is required than for a plain text prompt.
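Since the pipeline expects the table as a DataFrame rather than raw text, the extra preprocessing step might look like the sketch below. The column names and records here are hypothetical, chosen only to match the incident/change examples above; table-QA tokenizers generally expect every cell to be a string:

```python
import pandas as pd

# Hypothetical incident records; real data would come from a ticketing system.
records = [
    {"incident_id": "INC-1", "status": "Completed", "root_cause": "Hardware failure"},
    {"incident_id": "INC-2", "status": "Open",      "root_cause": "Config error"},
    {"incident_id": "INC-3", "status": "Completed", "root_cause": "Hardware failure"},
]

# Build the DataFrame and cast all cells to strings before tokenization.
table_df = pd.DataFrame(records).astype(str)
print(table_df.shape)  # (3, 3)
```

Keeping the table small matters here, given the size limits noted above.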