---
license: mit
language:
- en
base_model:
- meta-llama/Meta-Llama-3-8B-Instruct
pipeline_tag: text-generation
tags:
- Food
- NEL
- NER
---
# FoodSEM: Large Language Model Specialized in Food Named-Entity Linking

FoodSEM is based on Meta-Llama-3-8B-Instruct, fine-tuned with LoRA for food named-entity recognition (NER) and named-entity linking (NEL) tasks.


## How to use it

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch


if __name__ == '__main__':
    base_model = "meta-llama/Meta-Llama-3-8B-Instruct"
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # 4-bit NF4 quantization keeps the 8B base model within a single GPU's memory
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
        bnb_4bit_use_double_quant=True,
    )

    # device_map={"": 0} pins the whole model to GPU 0 (bitsandbytes 4-bit requires CUDA)
    model = AutoModelForCausalLM.from_pretrained(
        base_model,
        quantization_config=bnb_config,
        device_map={"": 0},
        attn_implementation="eager"
    )

    tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)

    # The fine-tune reserves <|pad|> (token id 128255) as the padding token
    tokenizer.pad_token = '<|pad|>'
    tokenizer.pad_token_id = 128255
    # Decoder-only models should be left-padded when batching prompts
    tokenizer.padding_side = "left"

    # Load the LoRA adapter weights on top of the quantized base model
    model.load_adapter("Anonymous-pre-publication/FoodSEM-LLM")
    model.config.use_cache = True
    model.eval()

    system_prompt = ""
    user_prompt = "Please, may we have links to the Hansard taxonomy for these entities provided: soft butter, mango, daiquiri mixer, maple extract, salt, anise flavored liqueur, hemp seeds, yeast mixture, thighs?"

    messages = [
        {
            "role": "user",
            "content": f"{system_prompt} {user_prompt}".strip()
        }
    ]

    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

    # A batch of one prompt; append more prompts to this list to batch requests
    tokenizer_input = [prompt]

    inputs = tokenizer(tokenizer_input, return_tensors="pt", padding=True, truncation=True, max_length=1024).to(device)
    # do_sample=True yields varied outputs; use do_sample=False for deterministic decoding
    generated_ids = model.generate(**inputs, max_new_tokens=1024, do_sample=True)
    # Decode only the newly generated tokens and trim at Llama-3's end-of-turn marker
    answers = tokenizer.batch_decode(generated_ids[:, inputs['input_ids'].shape[1]:])
    answers = [x.split('<|eot_id|>')[0].strip() for x in answers]
    print(answers)
```
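
For repeated inference it can be cheaper to fold the adapter into the base weights once, instead of applying the LoRA layers on every forward pass. Below is a minimal sketch using the `peft` library, assuming the same adapter repository id as above and a half-precision (non-quantized) base model, since merging into 4-bit weights is not supported; the output path is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model in half precision; merging requires unquantized weights
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the FoodSEM LoRA adapter and fold it into the base weights
model = PeftModel.from_pretrained(base, "Anonymous-pre-publication/FoodSEM-LLM")
model = model.merge_and_unload()  # returns a plain model with no adapter overhead

# Optionally save the merged weights for standalone loading later
model.save_pretrained("foodsem-merged")  # hypothetical output directory
```

The merged checkpoint can then be loaded with plain `AutoModelForCausalLM.from_pretrained` and no `peft` dependency.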