himmeow committed on
Commit a346c98 · verified · 1 Parent(s): a222ca0

Update README.md

Files changed (1): README.md +0 -205

README.md CHANGED
@@ -440,211 +440,6 @@ messages
# 'content': 'The tip amount for your bill is $7.50. The total amount to be paid is $57.50.'}]
```

## English Version

### Model Card for ricepaper/vi-gemma-2-2b-function-calling

### Model Description

This model is a fine-tuned version of **google/gemma-2-2b-it**, specialized for understanding and executing function calls in Vietnamese. It was trained on a dataset of ChatML-formatted conversations containing function calls, together with multilingual data translated into Vietnamese.

### Intended Uses

This model is well suited for applications that require:

* **Conversational AI:** Building chatbots that interact with users and perform specific actions through function calls.
* **Question Answering Systems:** Creating automated systems that retrieve information from diverse data sources.
* **Advanced NLP Applications:** Developing applications such as text summarization, machine translation, and text generation.
* **Intelligent Agent Development:** Building agents that interact with their environment and execute actions based on language instructions.
* **Multi-Agent Systems:** Developing systems in which multiple agents communicate and collaborate to solve complex problems.

### How to Use

**1. Installation of Required Libraries:**

```bash
pip install transformers torch
```

**2. Initialization of Tokenizer and Model:**

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import json

# Initialize the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("ricepaper/vi-gemma-2-2b-function-calling")
model = AutoModelForCausalLM.from_pretrained(
    "ricepaper/vi-gemma-2-2b-function-calling",
    device_map="auto",
    torch_dtype=torch.float16,
)
```

**3. Function for User Query Processing:**

```python
def process_user_query(user_query, messages, available_tools):
    """
    Handles user queries, generates responses, and manages function calls (if present).

    Args:
        user_query (str): User's input query.
        messages (list): List of messages in the ongoing conversation.
        available_tools (dict): Dictionary of available functions.

    Returns:
        str: Final response after processing any function calls.
    """
    messages.append({"role": "user", "content": user_query})

    input_ids = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(
        input_ids,
        max_new_tokens=300,
        # ... (Additional generate parameters can be added here)
    )
    response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)

    # If the reply parses as a JSON list, treat it as function calls;
    # otherwise it is a plain-text answer.
    try:
        response_list = json.loads(response)
        messages.append({"role": "assistant", "content": response})
    except json.JSONDecodeError:
        response_list = []
    if not isinstance(response_list, list):
        response_list = []

    function_responses = []

    for response_dict in response_list:
        if "name" in response_dict and "arguments" in response_dict:
            function_name = response_dict.get("name")
            function_args = response_dict.get("arguments")

            if function_name in available_tools:
                print(f"Calling function {function_name} with arguments {function_args}\n")
                function_to_call = available_tools[function_name]
                function_response = function_to_call(**function_args)
                function_responses.append({
                    "name": function_name,
                    "response": function_response
                })
            else:
                print(f"Function {function_name} not found")

    # Feed the tool results back to the model for a final natural-language answer.
    if function_responses:
        messages.append({
            "role": "user",
            "content": f"FUNCTION RESPONSES:\n{json.dumps(function_responses, ensure_ascii=False)}"
        })
        print(messages[-1].get("content"))

        input_ids = tokenizer.apply_chat_template(
            messages,
            add_generation_prompt=True,
            return_tensors="pt"
        ).to(model.device)
        outputs = model.generate(
            input_ids,
            max_new_tokens=300,
            # ... (Additional generate parameters can be added here)
        )
        response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)

    return response
```

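The parsing-and-dispatch step above can be exercised without loading the model by feeding it a hand-written response string in the format the model is expected to emit (a JSON list of `{"name": ..., "arguments": ...}` objects). This is a minimal sketch; `dispatch` and the `add` tool are illustrative names introduced here, not part of the model's API:

```python
import json

def dispatch(response: str, available_tools: dict) -> list:
    """Parse a model response and call any requested tools.

    Mirrors the parsing step of process_user_query: a response that is
    not a valid JSON list is treated as a plain-text answer (no calls).
    """
    try:
        calls = json.loads(response)
    except json.JSONDecodeError:
        return []
    if not isinstance(calls, list):
        return []
    results = []
    for call in calls:
        name = call.get("name")
        args = call.get("arguments", {})
        if name in available_tools:
            results.append({"name": name, "response": available_tools[name](**args)})
    return results

def add(a, b) -> str:
    """Stand-in tool for the demo."""
    return f"{a + b}"

# A response in the expected function-call format triggers a tool call:
print(dispatch('[{"name": "add", "arguments": {"a": 2, "b": 3}}]', {"add": add}))
# [{'name': 'add', 'response': '5'}]

# A plain-text response produces no tool calls:
print(dispatch("Hello!", {"add": add}))
# []
```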
**4. Helper Functions and Tools Definition:**

```python
def calculate_tip(bill_amount: float, tip_percentage: float) -> str:
    """Calculates the tip for a given bill amount and percentage.

    Args:
        bill_amount: Total bill amount.
        tip_percentage: Tip percentage.

    Returns:
        str: Description of the tip amount and total to be paid.
    """
    tip_amount = bill_amount * (tip_percentage / 100)
    total_amount = bill_amount + tip_amount
    return f"The tip amount is: {tip_amount:.2f}\nThe total amount to be paid is: {total_amount:.2f}"

tools = """
{
  "name": "calculate_tip",
  "description": "Calculate the tip amount for a bill",
  "parameters": {
    "type": "object",
    "properties": {
      "bill_amount": {
        "type": "number",
        "description": "The total bill amount"
      },
      "tip_percentage": {
        "type": "number",
        "description": "The tip percentage"
      }
    },
    "required": [
      "bill_amount",
      "tip_percentage"
    ]
  }
},
"""

available_tools = {
    "calculate_tip": calculate_tip,
}
```

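Before wiring the helper into the conversation loop, it can be sanity-checked on its own. The snippet restates `calculate_tip` from above so it runs stand-alone, and applies an arguments dict with `**` exactly as `process_user_query` does:

```python
def calculate_tip(bill_amount: float, tip_percentage: float) -> str:
    """Same helper as above, repeated here so the check is self-contained."""
    tip_amount = bill_amount * (tip_percentage / 100)
    total_amount = bill_amount + tip_amount
    return f"The tip amount is: {tip_amount:.2f}\nThe total amount to be paid is: {total_amount:.2f}"

# The model emits arguments as a dict; ** unpacks it into keyword arguments.
args = {"bill_amount": 50, "tip_percentage": 15}
print(calculate_tip(**args))
# The tip amount is: 7.50
# The total amount to be paid is: 57.50
```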
**5. Conversation History and Model Usage Example:**

```python
messages = [
    {"role": "user", "content": f"""You are a helpful assistant with access to the following functions. Use them if necessary {tools}"""},
    {"role": "assistant", "content": "Hello, how can I assist you?"},
]

res = process_user_query("I need help calculating the tip for my bill. The total is $50 and I would like to leave a 15% tip.", messages, available_tools)
messages.append({"role": "assistant", "content": res})
print("\n" + res)
# Calling function calculate_tip with arguments {'bill_amount': 50, 'tip_percentage': 15}

# FUNCTION RESPONSES:
# [{"name": "calculate_tip", "response": "The tip amount is: 7.50\nThe total amount to be paid is: 57.50"}]

# The tip amount for your bill is $7.50. The total amount to be paid is $57.50.

print(messages)
# Expected output (the actual output might vary slightly):
# [{'role': 'user',
#   'content': 'You are a helpful assistant with access to the following functions. Use them if necessary \n{\n "name": "calculate_tip",\n "description": "Calculate the tip amount for a bill",\n "parameters": {\n "type": "object",\n "properties": {\n "bill_amount": {\n "type": "number",\n "description": "The total bill amount"\n },\n "tip_percentage": {\n "type": "number",\n "description": "The tip percentage"\n }\n },\n "required": [\n "bill_amount",\n "tip_percentage"\n ]\n }\n},\n'},
#  {'role': 'assistant', 'content': 'Hello, how can I assist you?'},
#  {'role': 'user',
#   'content': 'I need help calculating the tip for my bill. The total is $50 and I would like to leave a 15% tip.'},
#  {'role': 'assistant',
#   'content': '[{"name": "calculate_tip", "arguments": {"bill_amount": 50, "tip_percentage": 15}}]'},
#  {'role': 'user',
#   'content': 'FUNCTION RESPONSES:\n[{"name": "calculate_tip", "response": "The tip amount is: 7.50\nThe total amount to be paid is: 57.50"}]'},
#  {'role': 'assistant',
#   'content': 'The tip amount for your bill is $7.50. The total amount to be paid is $57.50.'}]
```

### Notes

* The model may require appropriate hardware and scaling configuration for optimal performance.
* The quality of function-call results depends on the helper functions you provide.
* You can adjust the model's generation parameters to tune response length and content.

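On the last point: the `# ... (Additional generate parameters can be added here)` placeholders in the code above are where decoding options go. The values below are illustrative assumptions, not settings recommended by the model authors; each key is a standard keyword accepted by `model.generate()` in `transformers`:

```python
# Hypothetical sampling configuration (assumed values, not tuned for this model).
generation_kwargs = {
    "max_new_tokens": 300,  # cap on reply length, as used above
    "do_sample": True,      # sample instead of greedy decoding
    "temperature": 0.7,     # lower values make output more deterministic
    "top_p": 0.9,           # nucleus-sampling cutoff
}

# Usage inside process_user_query:
# outputs = model.generate(input_ids, **generation_kwargs)
```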
# Uploaded model

- **Developed by:** [hiieu](https://huggingface.co/hiieu), [himmeow the coder](https://viblo.asia/u/MartinCrux), [cuctrinh](https://www.linkedin.com/in/trinh-cuc-5722832b6)
 