---
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
tags:
- computer
- IToperation
- WindowsErrorCode
datasets:
- insightfinderai/windows_error_code_qa
---
# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

This model is a LoRA fine-tune of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) for answering questions about Windows error codes, trained on the [insightfinderai/windows_error_code_qa](https://huggingface.co/datasets/insightfinderai/windows_error_code_qa) dataset.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->



- **Developed by:** InsightFinder AI
- **Funded by [optional]:** InsightFinder AI
- **Shared by [optional]:** InsightFinder AI
- **Model type:** Instruct Model
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** meta-llama/Llama-3.1-8B-Instruct


## How to Get Started with the Model

Use the code below to get started with the model.

1. Start `vllm`'s OpenAI-compatible server with the LoRA adapter attached (the `curl` example below uses port 18000, and `/lora_adapter` is the path where the adapter weights are stored):
```
vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct \
    --enable-lora \
    --lora-modules mylora=/lora_adapter \
    --port 18000
```
2. Once the server is up, query the model with the following example command:
```
curl http://localhost:18000/v1/chat/completions   -H "Content-Type: application/json"   -d '{
    "model": "mylora",
    "messages": [
      {"role": "user", "content": "What is Windows error code 0xd6?"}
    ],
    "temperature": 0.7
  }'
```
The correct response will contain something like `Error code 0xd6 indicates DRIVER_PAGE_FAULT_BEYOND_END_OF_ALLOCATION....`
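The same request can also be issued from Python against the OpenAI-compatible endpoint; a minimal sketch using only the standard library, assuming the server and adapter name (`mylora`) from the steps above:

```python
import json
import urllib.request


def build_payload(question: str, model: str = "mylora", temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat-completions payload for the LoRA adapter."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "temperature": temperature,
    }


def ask(question: str, base_url: str = "http://localhost:18000/v1") -> str:
    """POST the payload to the vLLM chat-completions endpoint and return the answer text."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With the server running, `ask("What is Windows error code 0xd6?")` should return the same kind of answer as the `curl` call above.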

## Model Card Contact

InsightFinder AI