---
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
tags:
- computer
- IToperation
- WindowsErrorCode
datasets:
- insightfinderai/windows_error_code_qa
---

# Model Card for Model ID

This model is a LoRA fine-tune of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) for answering questions about Windows error codes. It was trained on the [insightfinderai/windows_error_code_qa](https://huggingface.co/datasets/insightfinderai/windows_error_code_qa) dataset.

## Model Details

### Model Description

- **Developed by:** InsightFinder AI
- **Funded by [optional]:** InsightFinder AI
- **Shared by [optional]:** InsightFinder AI
- **Model type:** Instruct model (LoRA adapter)
- **Language(s) (NLP):** English
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** meta-llama/Llama-3.1-8B-Instruct

## How to Get Started with the Model

Use the steps below to get started with the model.

1. Start the `vllm` OpenAI-compatible server with the LoRA adapter loaded:

```
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Meta-Llama-3.1-8B-Instruct \
    --enable-lora \
    --lora-modules mylora=/lora_adapter \
    --port 18000
```

2. Once the server is up, query the model with the following example command:

```
curl http://localhost:18000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "mylora",
        "messages": [
            {"role": "user", "content": "What is Windows error code 0xd6?"}
        ],
        "temperature": 0.7
    }'
```

A correct response will contain something like `Error code 0xd6 indicates DRIVER_PAGE_FAULT_BEYOND_END_OF_ALLOCATION...`.

## Model Card Contact

InsightFinder AI
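
The same request can also be issued from Python instead of `curl`. The sketch below builds the identical JSON payload and posts it with the standard library; the port (`18000`), adapter name (`mylora`), and endpoint follow the example above and are assumptions about your deployment, not fixed values.

```python
import json
import urllib.request


def build_chat_request(model: str, question: str, temperature: float = 0.7) -> dict:
    """Build the JSON body for the OpenAI-compatible /v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "temperature": temperature,
    }


def ask(question: str, base_url: str = "http://localhost:18000") -> str:
    """POST the question to the vLLM server and return the assistant's reply."""
    body = json.dumps(build_chat_request("mylora", question)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
    # The assistant's text sits in the first choice of the response.
    return reply["choices"][0]["message"]["content"]


# Example (requires the vLLM server from step 1 to be running):
#   print(ask("What is Windows error code 0xd6?"))
```

Any OpenAI-compatible client works the same way, since vLLM exposes the standard chat-completions schema; only the base URL and the `model` field (the LoRA adapter name) differ from a stock OpenAI setup.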