sjster srijithrajamohan committed
Commit dd3f3d1 · verified · 1 Parent(s): 752d210

Test from different account (#4)


- Removed the line developed by (3c773bc6e00cf93a4f12292a2451570babea5957)
- Updates to the line developed by (ce24b8b3f5a7d64d6218db2fb82c8eaab53876ac)


Co-authored-by: Srijith Rajamohan <srijithrajamohan@users.noreply.huggingface.co>

Files changed (1): README.md +1 −1
README.md CHANGED
@@ -28,7 +28,7 @@ library_name: transformers
 
 | | |
 |-------------------------|-------------------------------------------------------------------------------|
-| **Developers** | |
+| **Developed by** | Micro |
 | **Description** | `phi-4` is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning.<br><br>`phi-4` underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures |
 | **Architecture** | 14B parameters, dense decoder-only Transformer model |
 | **Inputs** | Text, best suited for prompts in the chat format |
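The table's "Inputs" row says the model is best suited for prompts in the chat format. As a minimal sketch (not part of this commit), such prompts are conventionally expressed as a list of role/content messages; the model-specific template would then be applied by the tokenizer from the `transformers` library named in the card's front matter:

```python
# Sketch of the chat-format message structure referenced in the model card.
# The system/user content strings here are illustrative, not from the card.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what phi-4 is."},
]

# With a loaded tokenizer, the model's own chat template would be applied via:
#   tokenizer.apply_chat_template(messages, tokenize=False)
# (omitted here to keep the sketch self-contained).
for message in messages:
    print(message["role"])
```

The role/content list is the standard input shape for chat models in `transformers`; the exact special tokens inserted around each turn are defined by the model's chat template, not hand-written.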