**Model Description**

**llama2-navigation** is a Large Language Model (LLM) fine-tuned from **Llama-2-7b-chat-hf**. The model generates navigation instructions from provided route knowledge.

The model was fine-tuned with LoRA on custom training data. The code for the model's use case is available at the following link:

- **Repository**: [https://gitlab.com/horizon-europe-voxreality/dialogue-system/conference_agent](https://gitlab.com/horizon-europe-voxreality/dialogue-system/conference_agent)
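As background, LoRA fine-tuning of a Llama-2 base model is typically set up with the `peft` library. The sketch below is illustrative only: the exact hyperparameters and target modules used for this model are not published, so the values shown are common defaults, not the actual training configuration.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base chat model that llama2-navigation was fine-tuned from.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Assumed LoRA settings; the real values for this model are not documented.
lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

# Wrap the base model; only the small adapter matrices are trainable.
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```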
**How to Get Started with the Model**

Below is an example of model usage:
```python
import textwrap

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig, pipeline
from langchain import HuggingFacePipeline, PromptTemplate
from langchain.chains import LLMChain

model_name = "voxreality/llama2-navigation"

user_msg = "I need to go to the social area."
knowledge = "start, move 11, turn left, move 4, turn right, move 3, stairs up, move 12, turn left, move 4, turn left, move 13, turn right, move 5, finish"

# Load the tokenizer and the fine-tuned model in half precision.
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device_map="auto",
)

# Near-zero temperature keeps the sampled output effectively deterministic.
generation_config = GenerationConfig.from_pretrained(model_name)
generation_config.max_new_tokens = 1024
generation_config.temperature = 0.0001
generation_config.top_p = 0.95
generation_config.do_sample = True
generation_config.repetition_penalty = 1.15

text_pipeline = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    generation_config=generation_config,
)
llm = HuggingFacePipeline(pipeline=text_pipeline)

# Llama-2 chat prompt: system instructions go between <<SYS>> and <</SYS>>.
template = textwrap.dedent("""
[INST] <<SYS>>
You are a navigation assistant at a conference venue. Your task is to guide users to specific locations within the venue, including "booth 1", "booth 2", "booth 3", "booth 4", "social area", "exit", "business room", and "conference room".

- For clear directions, respond with numbered steps using the details provided in the 'knowledge' field.
- Translate the directions from the 'knowledge' field into a user-friendly format with clear, numbered steps.
<</SYS>>

### input: {input}

### knowledge: {knowledge}

[/INST]
""")

prompt = PromptTemplate(input_variables=["input", "knowledge"], template=template)
chain = LLMChain(llm=llm, prompt=prompt)

print(chain.run(input=user_msg, knowledge=knowledge))
```
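The `knowledge` field is a comma-separated list of primitive actions bracketed by `start` and `finish`. To illustrate the format, such a route string can be split into numbered steps with plain Python; the helper below (`knowledge_to_steps` is a hypothetical name, not part of the model or the repository) shows the structure the model is asked to verbalize:

```python
def knowledge_to_steps(knowledge: str) -> list[str]:
    """Turn a comma-separated route string into numbered steps.

    Hypothetical helper for illustration; not part of the model's API.
    """
    # Split on commas and trim whitespace around each action.
    actions = [a.strip() for a in knowledge.split(",")]
    # Drop the start/finish markers that bracket the route.
    actions = [a for a in actions if a not in ("start", "finish")]
    # Number the remaining actions, starting from 1.
    return [f"{i}. {a}" for i, a in enumerate(actions, start=1)]

print(knowledge_to_steps("start, move 11, turn left, move 4, finish"))
```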