---
datasets:
- zero-systems/StringConversion.7k.INSTRUCT_DPO
---
|
# StructuredCoder-7b.GGUF |

**StructuredCoder** models identify the format of data represented in sets of strings and generate Python code that converts strings from set A into the format of set B.
|
## Model Details |

### Model Description

- **Model type:** LLM
- **Finetuned from model:** [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct)

## Inference

`.gguf` models can be run with [llama.cpp](https://github.com/ggerganov/llama.cpp) or its Python bindings, [llama-cpp-python](https://github.com/abetlen/llama-cpp-python).
Please follow the instructions in those repositories to get started.
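
As a minimal sketch (the quantized filename, context size, and sampling settings below are illustrative assumptions, not fixed values), inference with llama-cpp-python looks roughly like this:

```python
from llama_cpp import Llama

# Hypothetical local filename; use whichever quantized .gguf you downloaded.
llm = Llama(model_path="structuredcoder-7b.Q4_0.gguf", n_ctx=4096)

# Alpaca-style prompt; see "Inference Examples" below for the full template.
prompt = (
    "You are a coding assistant that specializes in converting string "
    "values from an input to a target format.\n\n"
    "### Instruction:\n...\n\n### Response: "
)

result = llm(prompt, max_tokens=1024, temperature=0.1, stop=["### Instruction:"])
print(result["choices"][0]["text"])
```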
|
### Inference Examples |
|
This is an instruct finetune that uses the Alpaca prompt format (introduced by [stanford-alpaca](https://github.com/tatsu-lab/stanford_alpaca)):

```python
"{system_prompt}\n\n### Instruction:\n{instruction}\n\n### Response: "
```
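
For convenience, a tiny helper can fill this template (the function name is ours, purely illustrative):

```python
def build_prompt(system_prompt: str, instruction: str) -> str:
    """Assemble an Alpaca-style prompt in the format this finetune expects."""
    return f"{system_prompt}\n\n### Instruction:\n{instruction}\n\n### Response: "
```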

The model is finetuned to perform the following task:

#### Structured Data Conversion

Expected input:
|
```
You are a coding assistant that specializes in converting string values from an input to a target format.
You will be given a set of input sets, each in itself a set of example string values plus a corresponding title, as well as a set of example target strings plus a corresponding title.
Your task is to analyze the string formats and identify whether or not the input sets can be converted into the target set using a single Python method, and then to write that method.

You will also be given a short piece of context that may be helpful to arrive at your decision.

### Instruction:
Input Sets:

Title: OldPlanServiceRate
Example Values:
[
"$50.15"
"$70.10"
"$90.30"
"$44.20"
"$50.10"
"$90.36"
]

Title: OldPlanQuantity
Example Values:
[
"5"
"5"
"2"
"4"
"2"
"8"
]

Target Set:

Title: OldPlanServiceAmount
Example Values:
[
"$75.30"
"$325.90"
"$175.80"
"$150.75"
"$525.50"
"$400.50"
]

Context:
1. The target key suggests a monetary amount related to an 'Old Plan' service. 2. The values of the target data are monetary amounts formatted as currency. 3. 'OldPlanServiceRate' provides a monetary rate for a service and 'OldPlanQuantity' specifies a quantity, which when multiplied together, would give a monetary amount similar to the values present in 'OldPlanServiceAmount'.

### Response:
```

Expected Output:
|
```
{
  "reasoning": "Remove the dollar sign from OldPlanServiceRate, convert it to a float and OldPlanQuantity to an integer. Then multiply the two together to get the service amount. The result should be formatted as a currency string with two decimal places and a dollar sign.",
  "test_input_values": {
    "OldPlanServiceRate": "$50.25",
    "OldPlanQuantity": "10"
  },
  "test_expected_output_value": "$502.50",
  "conversion_code": "def convert(old_plan_service_rate: str, old_plan_quantity: str) -> str:\n import re\n service_rate = float(re.sub('[$]', '', old_plan_service_rate))\n quantity = int(old_plan_quantity)\n return '${:.2f}'.format(service_rate * quantity)"
}
```
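
Because the response bundles a self-test next to the generated method, one plausible way to consume it is to parse the JSON, `exec` the `conversion_code`, and check it against the bundled test case. This is only a sketch; executing model-generated code should be sandboxed in practice, and it assumes the JSON key order matches the function's parameter order:

```python
import json

def validate_response(response_text: str) -> bool:
    """Run the model's bundled self-test against its generated convert()."""
    payload = json.loads(response_text)
    namespace = {}
    # Unsafe outside a sandbox: executes model-generated code.
    exec(payload["conversion_code"], namespace)
    actual = namespace["convert"](*payload["test_input_values"].values())
    return actual == payload["test_expected_output_value"]
```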

## Training Details

### Training Data

The model was finetuned on [zero-systems/StringConversion.7k.INSTRUCT_DPO](https://huggingface.co/datasets/zero-systems/StringConversion.7k.INSTRUCT_DPO).
|
#### Training Methodology |
|
StructuredCoder was trained with [QLoRA](https://github.com/artidoro/qlora).
The resulting adapter was merged into the base model weights, converted to the `.gguf` format, and finally quantized to 4 bits.
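
As a rough sketch of the merge step (using standard transformers/peft APIs; all paths are placeholders, not the actual training artifacts), the adapter merge before GGUF conversion and quantization looks like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "deepseek-ai/deepseek-coder-6.7b-instruct"

# Load the base model and apply the trained QLoRA adapter.
base = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, "path/to/qlora-adapter").merge_and_unload()

# Save merged weights plus tokenizer; the merged checkpoint can then be
# converted with llama.cpp's GGUF conversion script and quantized to 4 bits
# with its quantize tool.
merged.save_pretrained("structuredcoder-7b-merged")
AutoTokenizer.from_pretrained(BASE).save_pretrained("structuredcoder-7b-merged")
```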