Tags: Robotics · Transformers · Safetensors · qwen2 · text-generation · text-generation-inference
jan-hq committed · Commit eed1aaf · verified · 1 Parent(s): a2ec3ee

Update README.md

Files changed (1): README.md (+36, -1)
README.md CHANGED
@@ -29,7 +29,42 @@ base_model:
 ## How to Get Started
 
 ```python
-
+from transformers import AutoModelForCausalLM, AutoTokenizer
+import torch
+from utils import tokenize_desk, SYSTEM_PROMPT
+
+# Load the model and tokenizer
+model_path = "<this model's Hugging Face id or local path>"
+device = "cuda" if torch.cuda.is_available() else "cpu"
+model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16).to(device)
+tokenizer = AutoTokenizer.from_pretrained(model_path)
+
+# Define your workspace
+objects = [
+    {"red-cube": [51, 43, 17]},
+    {"black-cube": [44, 58, 17]},
+    {"purple-cube": [74, 59, 17]},
+    {"green-cube": [65, 82, 17]},
+]
+
+# Give a natural-language instruction
+instruction = "Throw the red cube on top of the blue cylinder"
+desk, object_height = tokenize_desk(objects)
+final_instruction = SYSTEM_PROMPT.format(object_height=object_height, instruction=instruction, TABLE_MAP=desk)
+chat = [
+    {"role": "user", "content": final_instruction.strip()},
+]
+tokenized_chat = tokenizer.apply_chat_template(chat, tokenize=True, add_generation_prompt=True, use_system_prompt=False, return_tensors="pt")
+generated_ids = model.generate(
+    tokenized_chat.to(device),
+    max_new_tokens=2048,
+    do_sample=False,  # greedy decoding; no sampling temperature needed
+)
+# Get the solution
+result = tokenizer.decode(generated_ids[0][tokenized_chat.shape[1]:], skip_special_tokens=True)
+print(result)
 ```
 ### Hardware
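The snippet above imports `tokenize_desk` and `SYSTEM_PROMPT` from a local `utils` module whose implementation is not part of this commit. As a rough, hypothetical sketch (the names and behavior here are assumptions, not the repository's actual helper), `tokenize_desk` might render the object list into a textual table map and report the shared object height:

```python
# Hypothetical sketch of what utils.tokenize_desk could do; the real helper
# ships with the repository and may differ.
def tokenize_desk(objects):
    """Render single-entry {name: [x, y, z]} dicts into a textual table map.

    Returns (desk, object_height): a newline-joined map of object positions
    and a common height for the objects on the table.
    """
    rows = []
    heights = set()
    for obj in objects:
        (name, (x, y, z)), = obj.items()  # exactly one entry per dict
        rows.append(f"{name}: ({x}, {y})")
        heights.add(z)
    # Assume all objects share one height, as in the example workspace
    object_height = heights.pop() if len(heights) == 1 else max(heights)
    return "\n".join(rows), object_height

desk, object_height = tokenize_desk([{"red-cube": [51, 43, 17]}])
# desk == "red-cube: (51, 43)", object_height == 17
```

This mirrors how the README's example uses the return values: `desk` fills the `TABLE_MAP` slot of `SYSTEM_PROMPT` and `object_height` fills `object_height`.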