Update README.md
README.md CHANGED
@@ -35,7 +35,7 @@ The model is designed for developers and software teams to automatically generat
 
 ## Dataset
 
-### 
+### `princeton-nlp/SWE-bench_Lite`
 
 This model is fine-tuned on the `swe_bench` dataset. The dataset includes:
 - **<issue> (Issue Description)**: Describes the bug in detail.
@@ -56,8 +56,8 @@ This model is fine-tuned on the `swe_bench` dataset. The dataset includes:
 from transformers import LlamaForCausalLM, LlamaTokenizer
 
 # Load the model and tokenizer
-model = LlamaForCausalLM.from_pretrained('
-tokenizer = LlamaTokenizer.from_pretrained('
+model = LlamaForCausalLM.from_pretrained('anant58/swe-model')
+tokenizer = LlamaTokenizer.from_pretrained('anant58/swe-model')
 
 # Example input (issue description)
 issue_description = "Function X throws an error when Y happens."
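The usage snippet in the new revision stops after loading the model and defining an example issue description. A minimal end-to-end sketch of turning that issue text into a generated fix might look like the following. Only the checkpoint name `anant58/swe-model` comes from the diff; the `<issue>` prompt wrapper and the `generate_patch` helper are illustrative assumptions based on the dataset's `<issue>` field name, not part of the README.

```python
def build_prompt(issue_description: str) -> str:
    # Wrap the raw issue text in an <issue> tag, mirroring the dataset's
    # <issue> field name (the exact prompt format is an assumption).
    return f"<issue>{issue_description}</issue>"


def generate_patch(issue_description: str, max_new_tokens: int = 256) -> str:
    # Imported inside the function so build_prompt stays usable even
    # without transformers installed.
    from transformers import LlamaForCausalLM, LlamaTokenizer

    model_id = "anant58/swe-model"  # checkpoint named in the diff
    tokenizer = LlamaTokenizer.from_pretrained(model_id)
    model = LlamaForCausalLM.from_pretrained(model_id)

    # Encode the wrapped issue text and generate a continuation.
    inputs = tokenizer(build_prompt(issue_description), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate_patch("Function X throws an error when Y happens."))
```

Calling `generate_patch` downloads the checkpoint on first use, so it is kept behind the main guard; the prompt helper alone is cheap to reuse when batching issues.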