| Column | Type | Min | Max |
|----------------|---------------|-----|--------|
| modelId | stringlengths | 4 | 111 |
| lastModified | stringlengths | 24 | 24 |
| tags | list | | |
| pipeline_tag | stringlengths | 5 | 30 |
| author | stringlengths | 2 | 34 |
| config | null | | |
| securityStatus | null | | |
| id | stringlengths | 4 | 111 |
| likes | int64 | 0 | 9.53k |
| downloads | int64 | 2 | 73.6M |
| library_name | stringlengths | 2 | 84 |
| created | timestamp[us] | | |
| card | stringlengths | 101 | 901k |
| card_len | int64 | 101 | 901k |
| embeddings | list | | |
dsvv-cair/alpaca-cleaned-llama-30b-bf16
2023-06-21T13:53:46.000Z
[ "transformers", "pytorch", "llama", "text-generation", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
dsvv-cair
null
null
dsvv-cair/alpaca-cleaned-llama-30b-bf16
3
5,565
transformers
2023-06-21T05:24:15
--- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1 # Doc / guide: https://huggingface.co/docs/hub/model-cards {} --- **Method** : QLORA **Dataset** : yahma/alpaca-cleaned **Base model** : huggyllama/llama-30b **Compute dtype** : bfloat16
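The card above lists only the essentials, so here is a hedged loading sketch. It assumes the repo hosts standard merged transformers weights (per the `transformers`/`text-generation` tags) and that enough GPU memory is available for a 30B model.
```python
# Hedged usage sketch: load the merged bf16 weights via the standard
# transformers API. Assumes sufficient GPU memory for a 30B model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "dsvv-cair/alpaca-cleaned-llama-30b-bf16"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # matches the card's stated compute dtype
    device_map="auto",
)
```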
324
[ [ -0.029144287109375, -0.057464599609375, 0.01776123046875, 0.00829315185546875, -0.053497314453125, 0.00962066650390625, 0.0312042236328125, -0.0177459716796875, 0.0182952880859375, 0.05963134765625, -0.0655517578125, -0.054901123046875, -0.050048828125, -0.0...
kingbri/airolima-chronos-grad-l2-13B
2023-08-04T19:44:10.000Z
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "en", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
kingbri
null
null
kingbri/airolima-chronos-grad-l2-13B
3
5,565
transformers
2023-08-04T06:52:20
--- language: - en library_name: transformers pipeline_tag: text-generation tags: - llama - llama-2 --- # Model Card: airolima-chronos-grad-l2-13B This is a lora + gradient merge between: - [Chronos 13b v2](https://huggingface.co/elinas/chronos-13b-v2) - [Airoboros l2 13b gpt4 2.0](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-2.0) - [LimaRP llama 2 Lora](https://huggingface.co/lemonilia/limarp-llama2) from July 28, 2023 at a weight of 0.25. You can check out the sister model [chronolima airo grad l2 13B](https://huggingface.co/kingbri/chronolima-airo-grad-l2-13B), which also produces great responses. Chronos was used as the base model here. The merge was performed using [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) by Gryphe. For this merge, Airoboros (merged with LimaRP at a 0.25 weight) was added in an inverted curve gradient, starting at a 0.9 ratio and slowly trickling down to 0 at the 25th layer. I have provided an illustration to help visualize this merge. Blue is chronos and green is airolima. ![airolima-chronos-illustration](https://files.catbox.moe/m8wf39.png) Unlike a basic ratio merge (ex. 75/25), gradient merging allows airolima to give its input at the beginning as the "core response", which chronos then refines into the final output. LimaRP was merged at a lower weight to correct airoboros rather than overhaul it. Higher weights (like single-model lora merges) completely destroyed a character's personality and made chatting bland (similar to chronos's tests). ## Usage: Since this is a merge between Airoboros, Chronos, and LimaRP, the following instruction formats should work: Alpaca 2: ``` ### Instruction: <prompt> ### Response: <leave a newline blank for model to respond> ``` Airoboros: ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT: ``` LimaRP instruction format (this might not work due to its weight): ``` <<SYSTEM>> <character card and system prompt> <<USER>> <prompt> <<AIBOT>> <leave a newline blank for model to respond> ``` ## Bias, Risks, and Limitations Chronos has a bias to talk very expressively and reply with very long responses. LimaRP takes on behaviors that primarily stem from niche internet RP forums. This model is not intended for supplying factual information or advice in any form. ## Training Details This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
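To make the gradient merge above concrete, here is a hedged, simplified sketch of per-layer interpolation between two checkpoints. It is not the actual BlockMerge_Gradient code (see the linked repo for that), it uses a linear ramp where the real merge used an inverted curve, and it assumes two structurally identical LLaMA-13B state dicts.
```python
# Simplified sketch only -- not BlockMerge_Gradient. Assumes chronos_sd and
# airolima_sd are matching state dicts; uses a linear ramp from 0.9 to 0
# over the first 25 layers (the real merge used an inverted curve).
import re

def layer_index(param_name):
    # HF LLaMA checkpoints name transformer blocks "model.layers.N....";
    # return N, or None for non-block parameters (embeddings, norms, head).
    match = re.search(r"layers\.(\d+)\.", param_name)
    return int(match.group(1)) if match else None

def gradient_merge(chronos_sd, airolima_sd, start_ratio=0.9, end_layer=25):
    merged = {}
    for name, chronos_w in chronos_sd.items():
        idx = layer_index(name)
        if idx is None or idx >= end_layer:
            ratio = 0.0  # past layer 25 (and for shared params), pure chronos
        else:
            ratio = start_ratio * (1.0 - idx / end_layer)  # 0.9 -> 0
        merged[name] = (1.0 - ratio) * chronos_w + ratio * airolima_sd[name]
    return merged
```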
2,694
[ [ -0.04254150390625, -0.054931640625, 0.01482391357421875, 0.0140380859375, -0.03271484375, -0.020751953125, 0.006023406982421875, -0.057098388671875, 0.0386962890625, 0.049774169921875, -0.053497314453125, -0.00986480712890625, -0.05462646484375, -0.012687683...
jondurbin/airoboros-13b-gpt4-1.1
2023-06-22T14:59:32.000Z
[ "transformers", "pytorch", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-1.1", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
jondurbin
null
null
jondurbin/airoboros-13b-gpt4-1.1
2
5,564
transformers
2023-06-09T22:06:28
--- license: cc-by-nc-4.0 datasets: - jondurbin/airoboros-gpt4-1.1 --- ## Delta 1.1 This is a minor update of https://huggingface.co/jondurbin/airoboros-13b-gpt4 with ~1k more coding instructions, and fixes/improvements to context instructions from https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.1 The remainder of the model card is duplicated from the original. ## Overview This is a fine-tuned 13b-parameter LLaMA model, using completely synthetic training data created by gpt-4 via https://github.com/jondurbin/airoboros The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4), with a specific focus on: - trivia - math/reasoning (although it still sucks) - coding - multiple choice and fill-in-the-blank - context-obedient question answering - theory of mind - misc/general This model was fine-tuned with a fork of FastChat, and therefore uses the standard vicuna template: ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT: ``` So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon). The most important bit, to me, is the context-obedient question answering support, without extensive prompt engineering. ### Usage The easiest way to get started is to use my fork of FastChat, which is mostly the same but allows for the increased context length and adds support for multi-line inputs: ``` pip install git+https://github.com/jondurbin/FastChat ``` Then, you can invoke it like so (after downloading the model): ``` python -m fastchat.serve.cli \ --model-path airoboros-13b-gpt4 \ --temperature 0.5 \ --max-new-tokens 2048 \ --no-history ``` ### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and use the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT url: https://some.web.site/123 date: 2023-06-01 ... other metadata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with them. - `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. 
- `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described: ``` USER: BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ASSISTANT: ``` <details> <summary>A more elaborate example, with a rewrite of the Michigan Wikipedia article to be fake data.</summary> Prompt (not including vicuna format which would be needed): ``` BEGININPUT BEGINCONTEXT date: 2092-02-01 link: https://newwikisite.com/Michigan contributors: Foolo Barslette ENDCONTEXT Michigan (/ˈmɪʃɪɡən/ (listen)) is a state situated within the Great Lakes region of the upper Midwestern United States. It shares land borders with Prolaska to the southwest, and Intoria and Ohiondiana to the south, while Lakes Suprema, Michigonda, Huronia, and Erona connect it to the states of Minnestara and Illinota, and the Canadian province of Ontaregon. With a population of nearly 15.35 million and an area of nearly 142,000 sq mi (367,000 km2), Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River. Its capital is Chaslany, and its most populous city is Trentroit. Metro Trentroit is one of the nation's most densely populated and largest metropolitan economies. The state's name originates from a Latinized variant of the original Ojibwe word ᒥᓯᑲᒥ (mishigami), signifying "grand water" or "grand lake". Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area. The Upper Peninsula (often referred to as "the U.P.") is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda. The McKendrick Bridge unites the peninsulas. Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius. It also possesses 84,350 inland lakes and ponds. Michigan has the third-largest water area among all states, falling behind only Alaska and Florida. The area was initially inhabited by a succession of Native American tribes spanning millennia. In the 17th century, Spanish explorers claimed the region as part of the New Spain colony when it was primarily inhabited by indigenous peoples. 
Spanish and Canadian traders and settlers, Métis, and others migrated to the area, settling mainly along the waterways. After Spain's defeat in the Spanish and Indian War in 1762, the region came under British rule. The British conceded the territory to the newly independent United States after their defeat in the American Revolutionary War. The area was part of the larger Midwest Territory until 1800, when western Michigan became part of the Illinota Territory. Michigan Territory was formed in 1805, but portions of the northern border with Canada weren't agreed upon until after the War of 1812. Michigan joined the Union in 1837 as the 26th state, a free one. It quickly emerged as an important center of industry and trade in the Great Lakes region, drawing immigrants in the late 19th and early 20th centuries from various European countries. Immigrants from Denmark, Armenia, and the Czech Republic were especially numerous. Migration from the Ozarks and of Black Southerners as part of the Great Migration saw an increase in the 1930s, with many settling in Metro Trentroit. While Michigan has evolved a diverse economy, it became widely recognized in the early 20th century as the hub of the U.S. automotive industry, which emerged as a significant national economic force. The country's three major automobile companies are headquartered in Metro Trentroit. Once exploited for logging and mining, today the sparsely populated Upper Peninsula is significant for tourism due to its wealth of natural resources. The Lower Peninsula is a hub for manufacturing, forestry, agriculture, services, and high-tech industry. ENDINPUT BEGININSTRUCTION What is Michigan's capital? Give me a JSON object with 3 facts from Foolo's article. What are the states that border Michigan? Reference? ``` Response: ``` Michigan's capital is Chaslany. Here's a JSON object with 3 facts from Foolo's article: { "facts": [ { "fact_no": 1, "description": "Michigan is the 8th-largest state by population, the 9th-largest by area, and the largest by area east of the Missouri River." }, { "fact_no": 2, "description": "Michigan is divided into two peninsulas. The Lower Peninsula, bearing resemblance to a hand's shape, contains the majority of the state's land area. The Upper Peninsula (often referred to as \"the U.P.\" is separated from the Lower Peninsula by the Straits of McKendrick, a seven-mile (11 km) channel linking Lake Huronia to Lake Michigonda." }, { "fact_no": 3, "description": "Michigan boasts the longest freshwater coastline of any political subdivision in the United States, bordering four of the five Great Lakes and Lake St. Cassius. It also possesses 84,350 inland lakes and ponds. Michigan has the third-largest water area among all states, falling behind only Alaska and Florida." } ] } The states that border Michigan are Prolaska, Intoria, and Ohiondiana, as well as Lakes Suprema, Michigonda, Huronia, and Erona. Reference: https://newwikisite.com/Michigan ``` </details> <details> <summary>Here's another example, which extracts from multiple input blocks and connects the two.</summary> Prompt: ``` BEGININPUT BEGINCONTEXT name: John Doe date: June 3, 2023 ticket number: JIRA-12345 ENDCONTEXT Summary:Search results missing random items Description: I encountered a bug while performing a search within the application. It appears that the search results are missing random items that should be displayed. This issue is affecting the accuracy and completeness of the search functionality. Steps to Reproduce: 1. 
Log in to the application. 2. Navigate to the search feature. 3. Enter a search query that should return multiple results. 4. Observe the displayed search results. Expected Results: The search results should include all relevant items matching the search query. Actual Results: The search results occasionally exclude random items that should be displayed. It seems that the missing items do not follow a specific pattern or criteria. Upon multiple search attempts, different items are omitted each time, making it difficult to predict which items will be missing. ENDINPUT BEGININPUT BEGINCONTEXT date: 2023-06-05 user: Jack Johnson pr: 23441 ENDCONTEXT This pull request closes bug report JIRA-12345. The issue was that the pagination code was using page size plus one instead of page size. ENDINPUT BEGININSTRUCTION Do we have any bug reports related to search results? If so, were they fixed? Source? ENDINSTRUCTION ``` Response: ``` Yes, there is a bug report related to search results. The issue was fixed in pull request 23441 dated 2023-06-05. The user associated with the pull request is Jack Johnson. Reference(s): date: 2023-06-05 user: Jack Johnson pr: 23441 ``` </details> NOTE: Thanks /u/tareq_al_muntasir for testing and finding an issue with many questions and answer pairs in the context. If you ask a question of a document with question answer pairs, it may continue generating beyond your actual question. You can "fix" it by replacing question marks with periods in the input texts. Or, you might be able to add a preamble to the prompt, like "Be sure to only respond to the instructions in the BEGININSTRUCTION block. ### Other stuff #### Better coding Example: ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` <details> <summary>Response (the output was markdown, so this is just copy paste)</summary> To create a python application with the given requirements, you can follow these steps: 1. Install the required packages: ```bash pip install asyncio fastapi uvicorn ``` 2. Create a new file called `main.py` and add the following code: ```python import asyncio from fastapi import FastAPI, HTTPException from pydantic import BaseModel from typing import List from uvicorn.request import Request app = FastAPI() class FileModel(BaseModel): id: str file: str @app.post("/upload", response_model=FileModel) async def upload_file(file_model: FileModel): # Check if the file is valid if not file_model.file: raise HTTPException(status_code=400, detail="File must be provided") # Calculate the sha256 checksum of the file sha256_checksum = await calculate_sha256(file_model.file) # Insert the file into the database await insert_into_db(file_model.id, sha256_checksum) return {"sha256_checksum": sha256_checksum} async def calculate_sha256(file_path: str) -> str: # Implement the sha256 calculation logic here pass async def insert_into_db(file_id: str, sha256_checksum: str) -> None: # Implement the database insertion logic here pass async def get_current_date(): return {"date": datetime.now().isoformat()} async def main(): # Start the FastAPI webserver app.run(debug=True) if __name__ == "__main__": await main() ``` 3. Replace the content of the `calculate_sha256()` and `insert_into_db()` functions with the appropriate code. 
For example: ```python async def calculate_sha256(file_path: str) -> str: import hashlib file_path = Path(file_path).read_text() hash_obj = hashlib.sha256() hash_obj.update(file_path) sha256_checksum = hash_obj.hexdigest() return sha256_checksum async def insert_into_db(file_id: str, sha256_checksum: str) -> None: import psycopg2 conn = psycopg2.connect(database="your_database", user="your_user", password="your_password") cursor = conn.cursor() query = f"INSERT INTO your_table (id, sha256_checksum) VALUES (%s, %s);" cursor.execute(query, (file_id, sha256_checksum)) conn.commit() cursor.close() conn.close() ``` 4. Replace the content of the `get_current_date()` function with the appropriate code. For example: ```python async def get_current_date(): return {"date": datetime.now().isoformat()} ``` 5. Run the application: ```bash uvicorn main:app --reload ``` Now, your FastAPI webserver has a ping endpoint that returns the current date in JSON format, and an upload endpoint that calculates the file's sha256 checksum and checks postgres to deduplicate. </details> <details> <summary>Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values.</summary> Here's a simple implementation of a multi-threaded TCP server in C that accepts "GET [key]" and "SET [key] [value]" inputs, and uses a binary tree to store the input values. ```c #include <stdio.h> #include <stdlib.h> #include <sys/socket.h> #include <netinet/in.h> #include <unistd.h> #include <string.h> #include <pthread.h> #define MAX_KEY_LEN 256 #define MAX_VALUE_LEN 256 typedef struct Node { char key[MAX_KEY_LEN]; char value[MAX_VALUE_LEN]; struct Node* left; struct Node* right; } TreeNode; TreeNode* tree = NULL; void insert(char* key, char* value) { TreeNode* parent = (TreeNode*)malloc(sizeof(TreeNode)); if (!parent) { perror("malloc failed"); exit(1); } strcpy(parent->key, key); strcpy(parent->value, value); parent->left = parent->right = NULL; if (!tree) { tree = parent; } else { insert(parent->key, parent->value); } } char* search(char* key) { TreeNode* current = tree; while (current != NULL) { if (strcmp(key, current->key) == 0) { return current->value; } current = current->right; } return NULL; } void print_tree() { TreeNode* current = tree; while (current != NULL) { printf("%s = %s\n", current->key, current->value); current = current->right; } } void* accept_client(void* args) { int client_socket = *(int*)args; char buffer[1024]; while (1) { ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0); if (bytes_received <= 0) { close(client_socket); break; } char* input = buffer; char* input_token; while ((input_token = strtok(input, " \n")) != NULL) { switch (input_token[0]) { case 'G': if (!search(input_token)) { perror("Value not found"); exit(1); } printf("%s = %s\n", input_token, search(input_token)); break; case 'S': if (!strcmp(input_token, "SET")) { char* key = strtok(NULL, " \n"); char* value = strtok(NULL, " \n"); if (!key || !value) { perror("Invalid input"); exit(1); } insert(key, value); printf("%s set %s to %s\n", input_token, key, value); } else { perror("Invalid command"); exit(1); } break; default: perror("Invalid input"); exit(1); } input = strtok(NULL, " \n"); } } return NULL; } int main() { int server_socket = socket(AF_INET, SOCK_STREAM, 0); if (server_socket < 0) { perror("socket failed"); exit(1); } struct sockaddr_in server_addr; memset(&server_addr, 0, sizeof(server_addr)); server_addr.sin_family = 
AF_INET; server_addr.sin_port = htons(8080); server_addr.sin_addr.s_addr = INADDR_ANY; if (bind(server_socket, (struct sockaddr*)&server_addr, sizeof(server_addr)) < 0) { perror("bind failed"); exit(1); } if (listen(server_socket, 5) < 0) { perror("listen failed"); exit(1); } pthread_t accept_thread; pthread_create(&accept_thread, NULL, accept_client, &server_socket); char* client_input; int client_socket = accept(server_socket, (struct sockaddr*)NULL, NULL); if (client_socket < 0) { perror("accept failed"); exit(1); } while (1) { sleep(1); char buffer[1024]; ssize_t bytes_received = recv(client_socket, buffer, sizeof(buffer), 0); if (bytes_received <= 0) { close(client_socket); break; } client_input = buffer; parse_input(client_input); } close(client_socket); pthread_join(accept_thread, NULL); return 0; } ``` To compile and run this code, save it to a file named `multi_threaded_tcp_server.c`, then use the following commands: ```sh gcc -o multi_threaded_tcp_server multi_threaded_tcp_server.c ./multi_threaded_tcp_server ``` This implementation creates a simple binary tree to store the input values. The accept() function creates a new thread for each client connection, which handles the incoming data and forwards it to the main thread. The main thread parses the input string and handles SET and GET commands. Note that this code is not production-ready and should be improved with proper error handling, security checks, and a more efficient data structure for the binary tree. </details> #### Word games / trivia ``` USER: This anagram of "silent" is what you do when paying attention and hearing someone. ASSISTANT: listen ``` ``` USER: Give me a list of 7 words that start with EN ASSISTANT: 1. Endlessly 2. Enthusiastically 3. Enormously 4. Enveloping 5. Enduring 6. Enlightening 7. Encouraging ``` #### Multiple choice ``` USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt? A) Mercury B) Venus C) Pluto D) Jupiter ASSISTANT: [C, Pluto] ``` #### Writing <details> <summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary> Response: ``` ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home. Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes. No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice. So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon</summary> ``` </details> ### Usage and License Notices All airoboros models and datasets are intended and licensed for research use only. 
I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because: - the base model is LLaMA, which has its own special research license - the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI So, to reiterate: this model (and datasets) cannot be used commercially.
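As a worked illustration of the closed-context format described in this card, here is a hedged helper that assembles such a prompt. The delimiters and the vicuna-style wrapper come from the card itself; the function is only a convenience sketch, not part of airoboros.
```python
# Hedged sketch: build a closed-context prompt in the BEGININPUT/BEGINCONTEXT
# format described above. Illustrative only; not shipped with the model.
def closed_context_prompt(blocks, instruction):
    """blocks: list of (metadata_dict, text) pairs, one per input block."""
    parts = []
    for metadata, text in blocks:
        context = "\n".join(f"{key}: {value}" for key, value in metadata.items())
        parts.append(
            f"BEGININPUT\nBEGINCONTEXT\n{context}\nENDCONTEXT\n{text}\nENDINPUT"
        )
    parts.append(f"BEGININSTRUCTION\n{instruction}\nENDINSTRUCTION")
    # Wrap in the vicuna-style turn markers when not using FastChat's
    # vicuna_v1.1 conv template.
    return "USER: " + "\n".join(parts) + " ASSISTANT:"
```
Calling this with the blueberry example's metadata and text reproduces the wrapped prompt shown earlier in the card.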
22,984
[ [ -0.046661376953125, -0.0716552734375, 0.057586669921875, 0.01265716552734375, -0.013397216796875, -0.035980224609375, 0.0028514862060546875, -0.0257415771484375, 0.04193115234375, 0.0300445556640625, -0.0672607421875, -0.033599853515625, -0.0264434814453125, ...
jondurbin/airoboros-7b-gpt4-1.4.1-qlora
2023-06-30T12:36:11.000Z
[ "transformers", "pytorch", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-1.4.1", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
jondurbin
null
null
jondurbin/airoboros-7b-gpt4-1.4.1-qlora
1
5,564
transformers
2023-06-30T11:27:44
--- license: cc-by-nc-4.0 datasets: - jondurbin/airoboros-gpt4-1.4.1 --- ## Overview This is a qlora fine-tuned 7b-parameter LLaMA model, using completely synthetic training data created by gpt-4 via https://github.com/jondurbin/airoboros Dataset used: https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1 The point of this is to allow people to compare a full fine-tune https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.4 to a qlora fine-tune. This is mostly an extension of the previous gpt-4 series, with a few extras: * fixed (+ more examples of) multi-character, multi-turn conversations * coding examples in 10 languages from the rosettacode.org dataset, thanks to Mike aka kryptkpr: https://huggingface.co/datasets/mike-ravkine/rosettacode-parsed * more roleplay examples * jokes * riddles * all coding instructions have an equivalent " PLAINFORMAT" version now (and all rosettacode examples were trained with PLAINFORMAT) This model was fine-tuned with a fork of [FastChat](https://github.com/jondurbin/FastChat). The prompt it was trained with was: ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT: ``` So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon). ## Usage To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors. ``` pip install git+https://github.com/jondurbin/FastChat ``` Be sure you are pulling the latest branch! Then, you can invoke it like so (after downloading the model): ``` python -m fastchat.serve.cli \ --model-path airoboros-7b-gpt4-1.4.1-qlora \ --temperature 0.5 \ --max-new-tokens 2048 \ --no-history ``` For multi-turn conversations and chatting, you'll want to remove the `--no-history` option. ### Context obedient question answering By obedient, I mean the model was trained to ignore what it thinks it knows, and use the context to answer the question. The model was also tuned to limit the values to the provided context as much as possible to reduce hallucinations. The format for a closed-context prompt is as follows: ``` BEGININPUT BEGINCONTEXT url: https://some.web.site/123 date: 2023-06-01 ... other metadata ... ENDCONTEXT [insert your text blocks here] ENDINPUT [add as many other blocks, in the exact same format] BEGININSTRUCTION [insert your instruction(s). The model was tuned with single questions, paragraph format, lists, etc.] ENDINSTRUCTION ``` It's also helpful to add "Don't make up answers if you don't know." to your instruction block to make sure if the context is completely unrelated it doesn't make something up. *The __only__ prompts that need this closed-context formatting are closed-context instructions. Normal questions/instructions do not!* I know it's a bit verbose and annoying, but after much trial and error, using these explicit delimiters helps the model understand where to find the responses and how to associate specific sources with them. 
- `BEGININPUT` - denotes a new input block - `BEGINCONTEXT` - denotes the block of context (metadata key/value pairs) to associate with the current input block - `ENDCONTEXT` - denotes the end of the metadata block for the current input - [text] - Insert whatever text you want for the input block, as many paragraphs as can fit in the context. - `ENDINPUT` - denotes the end of the current input block - [repeat as many input blocks in this format as you want] - `BEGININSTRUCTION` - denotes the start of the list (or one) instruction(s) to respond to for all of the input blocks above. - [instruction(s)] - `ENDINSTRUCTION` - denotes the end of instruction set It sometimes works without `ENDINSTRUCTION`, but by explicitly including that in the prompt, the model better understands that all of the instructions in the block should be responded to. Here's a trivial, but important example to prove the point: ``` BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ``` And the response: ``` Blueberries are now green. Source: date: 2021-01-01 url: https://web.site/123 ``` The prompt itself should be wrapped in the vicuna1.1 template if you aren't using fastchat with the conv-template vicuna_v1.1 as described: ``` USER: BEGININPUT BEGINCONTEXT date: 2021-01-01 url: https://web.site/123 ENDCONTEXT In a shocking turn of events, blueberries are now green, but will be sticking with the same name. ENDINPUT BEGININSTRUCTION What color are bluberries? Source? ENDINSTRUCTION ASSISTANT: ``` ### Coding You can ask for fairly complex coding instructions, e.g. (be sure to wrap with vicuna style prompt): ``` Create a python application with the following requirements: - Asyncio FastAPI webserver - ping endpoint that returns the current date in JSON format - file upload endpoint, which calculates the file's sha256 checksum, and checks postgres to deduplicate ``` Or: ``` Write a multi-threaded TCP server in C that accepts a "GET [key]" input and "SET [key] [value]" input, and uses a binary tree to get and store the input values. ``` You can optionally add a single space and "PLAINFORMAT" at the end of your prompt to avoid backticks, explanations, etc. and just print the code, e.g.: ``` Write a websocket application in node.js. PLAINFORMAT ``` ### Word games / trivia ``` USER: This anagram of "silent" is what you do when paying attention and hearing someone. ASSISTANT: listen ``` ``` USER: Give me a list of 7 words that start with EN ASSISTANT: 1. Endlessly 2. Enthusiastically 3. Enormously 4. Enveloping 5. Enduring 6. Enlightening 7. Encouraging ``` ### Multiple choice ``` USER: Which of the following astronomical objects is composed primarily of ice and rock, and is located in the Kuiper Belt? A) Mercury B) Venus C) Pluto D) Jupiter ASSISTANT: C, Pluto ``` ### Writing <details> <summary>Write a resignation letter in the style of a pirate captain, in which you resign from your position as janitor.</summary> Response: ``` ASSISTANT: Captain Jack Sparrow here, as ye be knowin', has been forced to abandon me post as Janitor o' th' Festering Cesspool, which be a filthy hole o' water that we be callin' home. Ye may be thinkin' what could ha' driven me to leave me duties an' take me leave o' this sorry establishment? Well, ye'd be right in thinkin' so, for it be th' greed o' them what runs th' place that be drivin' me away. 
They've been fattenin' their pockets on me back while I've been slavin' me guts out cleanin' up their messes. No more, me hearties! Me time o' servin' th' rich an' powerful be done, an' I'm takin' me leave o' this stinkin' hole forever! Me destiny lies elsewhere, an' it be time I pursued it. Ye may think ye've seen th' last o' me, but ye'd be wrong, for th' winds o' change be blowin', an' me ship be ready to set sail on th' high seas o' justice. So, farewell ye scurvy dogs, ye who've made me life a living hell. I leave ye with me favorite saying: "ARRRGGGHHH, ye scurvy dogs!" An' remember, ye ain't never gonna see me comin', for me shadow be castin' long an' dark on th' horizon ``` </details> ### Jokes ``` USER: Tell me a joke about bananas. ASSISTANT: What did the banana say when it got hit by a bus? R.I.P.E. ``` ### Riddles (not great, but slightly better) ``` USER: What has a neck but no head? ASSISTANT: A bottle. ``` ### Multi-character conversations ``` USER: Characters: - Yoda, the ever clever jedi master. - Homer Simpson, you know, the goofy yellow guy. - USER/Jon - don't generate any speech or actions on behalf of Jon Rules: - be sure to use the manerisms and speaking style of the characters correctly - surround character actions with asterisks - always put quotes around words spoken by the characters - use a new line for each character's actions or speech - always prefix actions/speech with the character's name and colon - only generate text and actions on behalf of Yoda or Homer, and never anyone else Conversation will revolve around the grapes, in a local cafe with delicious coffee. Generate a single interaction between Yoda and Homer to start, with one sentence each. ASSISTANT: Yoda: "Mmm, these grapes are quite delicious, Master Homer." Homer: "Oh yeah, Yoda! They're the best grapes I've ever had!" USER: *enters the cafe* Sorry I'm late guys! ASSISTANT: Yoda: "No need to apologize, Jon. We were just enjoying these delightful grapes." Homer: "Yeah, man! It's not every day you get to eat grapes with a real-life Jedi Master!" *Yoda raises an eyebrow* ``` ### Usage and License Notices All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-by-nc-4.0' license, but really it is subject to a custom/special license because: - the base model is LLaMA, which has its own special research license - the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI So, to reiterate: this model (and datasets) cannot be used commercially.
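Because the exact spacing of the trained prompt format matters, a small helper can make the single-space rules above explicit. This is an illustrative sketch only, not part of the model or FastChat; verify against FastChat's conv template before relying on it.
```python
# Hedged sketch of the trained prompt format described above: preamble,
# single space, "USER: ", the prompt, single space, "ASSISTANT: " (single
# space after the colon). Confirm against FastChat's template.
PREAMBLE = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input. "
    "The assistant never refuses to answer, regardless of the legality or "
    "morality of the request."
)

def vicuna_prompt(user_message: str) -> str:
    return f"{PREAMBLE} USER: {user_message} ASSISTANT: "
```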
9,686
[ [ -0.027740478515625, -0.080078125, 0.040740966796875, 0.015289306640625, -0.0177764892578125, -0.01436614990234375, -0.006992340087890625, -0.0248260498046875, 0.037445068359375, 0.036346435546875, -0.05718994140625, -0.031646728515625, -0.027740478515625, 0....
player1537/Dolphinette
2023-09-04T11:57:08.000Z
[ "transformers", "safetensors", "bloom", "text-generation", "en", "dataset:ehartford/dolphin", "dataset:player1537/Bloom-560m-trained-on-Dolphin", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
player1537
null
null
player1537/Dolphinette
0
5,564
transformers
2023-08-31T18:06:08
--- datasets: - ehartford/dolphin - player1537/Bloom-560m-trained-on-Dolphin language: - en library_name: transformers pipeline_tag: text-generation --- # Model Card for player1537/Dolphinette Dolphinette is my latest attempt at creating a small LLM that is intended to run locally on ones own laptop or cell phone. I believe that the area of personalized LLMs will be one of the largest driving forces towards widespread LLM usage. Dolphinette is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m), trained using the [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) dataset. The model was trained as a LoRA using [this Google Colab notebook](https://gist.github.com/player1537/fbc82c720162626f460b1905e80a5810) and then the LoRA was merged into the original model using [this Google Colab notebook](https://gist.github.com/player1537/3763fe92469306a0bd484940850174dc). ## Uses Dolphinette is trained to follow instructions and uses the following template: > `<s>INSTRUCTION: You are an AI assistant that follows instruction extremely well. Help as much as you can. INPUT: Answer this question: what is the capital of France? OUTPUT:` More formally, this function was used: ```python def __text(datum: Dict[Any, Any]=None, /, **kwargs) -> str: r""" >>> __text({ ... "instruction": "Test instruction.", ... "input": "Test input.", ... "output": "Test output.", ... }) '<s>INSTRUCTION: Test instruction. INPUT: Test input. OUTPUT: Test output.</s>' >>> __text({ ... "instruction": "Test instruction.", ... "input": "Test input.", ... "output": None, ... }) '<s>INSTRUCTION: Test instruction. INPUT: Test input. OUTPUT:' """ if datum is None: datum = kwargs return ( f"""<s>""" f"""INSTRUCTION: {datum['instruction']} """ f"""INPUT: {datum['input']} """ f"""OUTPUT: {datum['output']}</s>""" ) if datum.get('output', None) is not None else ( f"""<s>""" f"""INSTRUCTION: {datum['instruction']} """ f"""INPUT: {datum['input']} """ f"""OUTPUT:""" ) ``` From the original training set, the set of instructions and how many times they appeared is as follows. - 165175: `You are an AI assistant. User will you give you a task. Your goal is to complete the task as faithfully as you can. While performing the task think step-by-step and justify your steps.` - 136285: `You are a helpful assistant, who always provide explanation. Think like you are answering to a five year old.` - 110127: `You are an AI assistant. You will be given a task. You must generate a detailed and long answer.` - 63267: ` ` (nothing) - 57303: `You are an AI assistant that follows instruction extremely well. Help as much as you can.` - 51266: `You are an AI assistant. Provide a detailed answer so user don’t need to search outside to understand the answer.` - 19146: `You are an AI assistant that helps people find information.` - 18008: `You are an AI assistant that helps people find information. User will you give you a question. Your task is to answer as faithfully as you can. While answering think step-bystep and justify your answer.` - 17181: `You are an AI assistant that helps people find information. Provide a detailed answer so user don’t need to search outside to understand the answer.` - 9938: `You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. Think like you are answering to a five year old.` - 8730: `You are an AI assistant. 
You should describe the task and explain your answer. While answering a multiple choice question, first output the correct answer(s). Then explain why other answers are wrong. You might need to use additional knowledge to answer the question.` - 8599: `Explain how you used the definition to come up with the answer.` - 8459: `User will you give you a task with some instruction. Your job is follow the instructions as faithfully as you can. While answering think step-by-step and justify your answer.` - 7401: `You are an AI assistant, who knows every language and how to translate one language to another. Given a task, you explain in simple steps what the task is asking, any guidelines that it provides. You solve the task and show how you used the guidelines to solve the task.` - 7212: `You are a teacher. Given a task, you explain in simple steps what the task is asking, any guidelines it provides and how to use those guidelines to find the answer.` - 6372: `Given a definition of a task and a sample input, break the definition into small parts. Each of those parts will have some instruction. Explain their meaning by showing an example that meets the criteria in the instruction. Use the following format: Part # : a key part of the definition. Usage: Sample response that meets the criteria from the key part. Explain why you think it meets the criteria.` - 55: `You are an AI assistant. Provide a detailed answer so user don't need to search outside to understand the answer.` ### Direct Use Using the huggingface transformers library, you can use this model simply as: ```python import transformers model = transformers.AutoModelForCausalLM.from_pretrained( 'player1537/Dolphinette', ) tokenizer = transformers.AutoTokenizer.from_pretrained( 'player1537/Dolphinette', ) pipeline = transformers.pipeline( 'text-generation', model=model, tokenizer=tokenizer, ) completion = pipeline( ( r"""<s>INSTRUCTION: You are an AI assistant that helps people find""" r"""information. INPUT: Answer this question: what is the capital of""" r"""France? Be concise. OUTPUT:""" ), return_full_text=False, max_new_tokens=512, ) completion = completion[0]['generated_text'] print(completion) #=> The capital of France is the city of Paris. It's located in the country of #=> France, which means it's a geographical location in Europe. It is #=> consistently called "La capitale de France" ("La capital de la France"), #=> its localization literally refers to theThiest city of France. #=> #=> According to the English translation of the French, the capital is the place #=> where people live for their livelihood or business. However, the actual #=> location you are looking at is the capital of France, the city located in #=> the center of the country along several important international routes. #=> #=> The capital of France generally refers to one or a few urban locations that #=> represent particular cities in Europe. Depending on your nationality or #=> culture, refinements can be added to the name of the city, and the #=> announcement can be 'tel Aviv', 'Edinburgh', 'Corinthus', 'Palace of Culture #=> and Imperials' (a French title), 'Languedoc', `Paris' or 'Belfast'. #=> #=> To be clear, the city of paris is the capital of France, and it is the #=> geographical location of the city, not the city itself. #=> #=> Conclusion: The capital of France is the city of Paris, which is the #=> most-visited international destination in Europe. ``` This model is very wordy... But for less contrived tasks, I have found it to work well enough.
7,309
[ [ -0.048309326171875, -0.06451416015625, 0.038970947265625, 0.0203857421875, 0.01238250732421875, -0.0200653076171875, -0.0152740478515625, -0.024627685546875, 0.01480865478515625, 0.04327392578125, -0.053009033203125, -0.0309600830078125, -0.04766845703125, 0...
digitous/GPT-R
2023-02-21T00:51:03.000Z
[ "transformers", "pytorch", "gptj", "text-generation", "en", "license:bigscience-openrail-m", "endpoints_compatible", "has_space", "region:us" ]
text-generation
digitous
null
null
digitous/GPT-R
10
5,563
transformers
2023-02-16T16:02:46
--- license: bigscience-openrail-m language: - en --- GPT-R [Ronin] GPT-R is an experimental model containing a parameter-wise 60/40 blend (weighted average) of the weights of ppo_hh_gpt-j and GPT-JT-6B-v1. -Intended Merge Value- As with fine-tuning, merging weights does not add information but transforms it, therefore it is important to consider trade-offs. GPT-Ronin combines ppo_hh_gpt-j and GPT-JT; both technical achievements are blended with the intent to elevate the strengths of both. Datasets of both are linked below to assist in exploratory speculation on which datasets in what quantity and configuration have the largest impact on the usefulness of a model without the expense of fine-tuning. Blend was done in FP32 and output in FP16. -Intended Use- Research purposes only, intended for responsible use. Express a task in natural language, and GPT-R will do the thing. Try telling it "Write an article about X but put Y spin on it.", "Write a five step numbered guide on how to do X.", or any other basic instructions. It does its best. Can also be used as a base to merge with conversational, story writing, or adventure themed models of the same class (GPT-J & 6b NeoX) and parameter size (6b) to experiment with the morphology of model weights based on the value added by instruct. Merge tested using KoboldAI with Nucleus Sampling Top-P set to 0.7, Temperature at 0.5, and Repetition Penalty at 1.14; extra samplers disabled. -Credits To- Core Model: https://huggingface.co/EleutherAI/gpt-j-6B Author: https://www.eleuther.ai/ Model1; 60% ppo_hh_gpt-j: https://huggingface.co/reciprocate/ppo_hh_gpt-j Author Repo: https://huggingface.co/reciprocate Related; CarperAI: https://huggingface.co/CarperAI Dataset is a variant of the Helpful Harmless assistant themed dataset and Proximal Policy Optimization, specific datasets used are unknown; listed repo datasets include: https://huggingface.co/datasets/reciprocate/summarize_eval_ilql https://huggingface.co/datasets/reciprocate/hh_eval_ilql PPO explained: https://paperswithcode.com/method/ppo Potential HH-type datasets utilized: https://huggingface.co/HuggingFaceH4 https://huggingface.co/datasets/Anthropic/hh-rlhf Model2; 40% GPT-JT-6B-V1: https://huggingface.co/togethercomputer/GPT-JT-6B-v1 Author Repo: https://huggingface.co/togethercomputer Related; BigScience: https://huggingface.co/bigscience Datasets: https://huggingface.co/datasets/the_pile https://huggingface.co/datasets/bigscience/P3 https://github.com/allenai/natural-instructions https://ai.googleblog.com/2022/05/language-models-perform-reasoning-via.html Weight merge Script credit to Concedo: https://huggingface.co/concedo
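The blend described above is conceptually simple; the following is a hedged illustration of a parameter-wise 60/40 weighted average, performed in FP32 with FP16 output as the card states. It is not the actual merge script credited to Concedo.
```python
# Illustrative parameter-wise weighted average of two structurally matching
# state dicts (e.g., ppo_hh_gpt-j and GPT-JT-6B-v1). Blend in FP32, emit
# FP16, per the card. Not Concedo's actual script.
import torch

def blend_state_dicts(sd_a, sd_b, weight_a=0.6):
    blended = {}
    for name, tensor_a in sd_a.items():
        a32 = tensor_a.float()        # upcast to FP32 before blending
        b32 = sd_b[name].float()
        blended[name] = (weight_a * a32 + (1.0 - weight_a) * b32).half()
    return blended

# 60% ppo_hh_gpt-j / 40% GPT-JT-6B-v1:
# merged = blend_state_dicts(ppo_hh_sd, gpt_jt_sd, weight_a=0.6)
```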
2,685
[ [ -0.04107666015625, -0.056121826171875, 0.0311431884765625, -0.0112762451171875, -0.0179290771484375, -0.006099700927734375, -0.008697509765625, -0.03436279296875, 0.0243988037109375, 0.0261993408203125, -0.040740966796875, -0.019073486328125, -0.043212890625, ...
aisquared/dlite-v1-1_5b
2023-05-09T17:11:50.000Z
[ "transformers", "pytorch", "gpt2", "text-generation", "en", "dataset:tatsu-lab/alpaca", "license:apache-2.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
aisquared
null
null
aisquared/dlite-v1-1_5b
1
5,563
transformers
2023-04-12T19:05:51
--- license: apache-2.0 datasets: - tatsu-lab/alpaca language: - en library_name: transformers --- # Model Card for `dlite-v1-1.5b` <!-- Provide a quick summary of what the model is/does. --> AI Squared's `dlite-v1-1.5b` ([blog post](https://medium.com/ai-squared/introducing-dlite-a-lightweight-chatgpt-like-model-based-on-dolly-deaa49402a1f)) is a large language model which is derived from OpenAI's large [GPT-2](https://huggingface.co/gpt2) model and fine-tuned on a single GPU on a corpus of 50k records ([Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html)) to help it exhibit chat-based capabilities. While `dlite-v1-1.5b` is **not a state-of-the-art model**, we believe that the level of interactivity that can be achieved on such a small model that is trained so cheaply is important to showcase, as it continues to demonstrate that creating powerful AI capabilities may be much more accessible than previously thought. ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** AI Squared, Inc. - **Shared by:** AI Squared, Inc. - **Model type:** Large Language Model - **Language(s) (NLP):** EN - **License:** Apache v2.0 - **Finetuned from model:** GPT-2 ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> **`dlite-v1-1.5b` is not a state-of-the-art language model.** `dlite-v1-1.5b` is an experimental technology and is not designed for use in any environment other than for research purposes. Furthermore, the model can sometimes exhibit undesired behaviors. Some of these behaviors include, but are not limited to: factual inaccuracies, biases, offensive responses, toxicity, and hallucinations. Just as with any other LLM, we advise users of this technology to exercise good judgment when applying this technology. ## Usage To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers` and `accelerate` libraries installed. From your terminal, run: ```bash pip install "accelerate>=0.16.0,<1" "transformers[torch]>=4.28.1,<5" "torch>=1.13.1,<2" ``` The instruction following pipeline can be loaded using the `pipeline` function as shown below. This loads a custom `InstructionTextGenerationPipeline` found in the model repo [here](https://huggingface.co/aisquared/dlite-v1-1_5b/blob/main/instruct_pipeline.py), which is why `trust_remote_code=True` is required. Including `torch_dtype=torch.bfloat16` is generally recommended if this type is supported in order to reduce memory usage. It does not appear to impact output quality. It is also fine to remove it if there is sufficient memory. 
```python from transformers import pipeline import torch generate_text = pipeline(model="aisquared/dlite-v1-1_5b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto") ``` You can then use the pipeline to answer instructions: ```python res = generate_text("Who was George Washington?") print(res) ``` Alternatively, if you prefer not to use `trust_remote_code=True` you can download [instruct_pipeline.py](https://huggingface.co/aisquared/dlite-v1-1_5b/blob/main/instruct_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer: ```python from instruct_pipeline import InstructionTextGenerationPipeline from transformers import AutoModelForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("aisquared/dlite-v1-1_5b", padding_side="left") model = AutoModelForCausalLM.from_pretrained("aisquared/dlite-v1-1_5b", device_map="auto", torch_dtype=torch.bfloat16) generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer) ``` ### Model Performance Metrics We present the results from various model benchmarks on the EleutherAI LLM Evaluation Harness for all models in the DLite family. Model results are sorted by mean score, ascending, to provide an ordering. These metrics further show that none of the DLite models are state of the art; rather, they show that chat-like behaviors in LLMs can be trained almost independently of model size.

| Model | arc_challenge | arc_easy | boolq | hellaswag | openbookqa | piqa | winogrande |
|:--------------|----------------:|-----------:|---------:|------------:|-------------:|---------:|-------------:|
| dlite-v2-124m | 0.199659 | 0.447811 | 0.494801 | 0.291675 | 0.156 | 0.620239 | 0.487766 |
| gpt2 | 0.190273 | 0.438131 | 0.487156 | 0.289185 | 0.164 | 0.628945 | 0.51618 |
| dlite-v1-124m | 0.223549 | 0.462542 | 0.502446 | 0.293268 | 0.17 | 0.622416 | 0.494081 |
| gpt2-medium | 0.215017 | 0.490741 | 0.585933 | 0.333101 | 0.186 | 0.676279 | 0.531176 |
| dlite-v2-355m | 0.251706 | 0.486111 | 0.547401 | 0.344354 | 0.216 | 0.671926 | 0.52723 |
| dlite-v1-355m | 0.234642 | 0.507576 | 0.600306 | 0.338478 | 0.216 | 0.664309 | 0.496448 |
| gpt2-large | 0.216724 | 0.531566 | 0.604893 | 0.363971 | 0.194 | 0.703482 | 0.553275 |
| dlite-v1-774m | 0.250853 | 0.545875 | 0.614985 | 0.375124 | 0.218 | 0.698041 | 0.562747 |
| dlite-v2-774m | 0.269625 | 0.52904 | 0.613761 | 0.395937 | 0.256 | 0.691513 | 0.566693 |
| gpt2-xl | 0.25 | 0.582912 | 0.617737 | 0.400418 | 0.224 | 0.708379 | 0.583268 |
| dlite-v1-1_5b | 0.268771 | 0.588384 | 0.624159 | 0.401414 | 0.226 | 0.708379 | 0.584846 |
| dlite-v2-1_5b | 0.289249 | 0.565657 | 0.601223 | 0.434077 | 0.272 | 0.703482 | 0.588003 |
5,815
[ [ -0.028961181640625, -0.057373046875, 0.0295562744140625, 0.007251739501953125, -0.001129150390625, -0.00023353099822998047, -0.0050048828125, -0.0272216796875, 0.0225067138671875, 0.014007568359375, -0.0592041015625, -0.0474853515625, -0.050445556640625, -0....
jondurbin/airoboros-13b-gpt4-1.2
2023-06-22T14:59:01.000Z
[ "transformers", "pytorch", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-1.2", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
jondurbin
null
null
jondurbin/airoboros-13b-gpt4-1.2
3
5,563
transformers
2023-06-15T09:26:24
--- license: cc-by-nc-4.0 datasets: - jondurbin/airoboros-gpt4-1.2 --- ### Overview This is a qlora fine-tuned 13b-parameter LLaMA model, using completely synthetic training data created by gpt-4 via https://github.com/jondurbin/airoboros This is mostly an extension of [1.1](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.1), but with thousands of new training examples and an update to allow "PLAINFORMAT" at the end of coding prompts to just print the code without backticks or explanations/usage/etc. The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on: - coding - math/reasoning (using orca style ELI5 instruction/response pairs) - trivia - role playing - multiple choice and fill-in-the-blank - context-obedient question answering - theory of mind - misc/general This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the 7b/13b versions: ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT: ``` So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon) then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon). ### Usage To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors. ``` pip install git+https://github.com/jondurbin/FastChat ``` Be sure you are pulling the latest branch! Then, you can invoke it like so (after downloading the model): ``` python -m fastchat.serve.cli \ --model-path airoboros-13b-gpt4-1.2 \ --temperature 0.5 \ --max-new-tokens 2048 \ --no-history ``` Alternatively, please check out TheBloke's quantized versions: - https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GPTQ - https://huggingface.co/TheBloke/airoboros-13B-gpt4-1.2-GGML ### Coding updates from gpt4/1.1: I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single, all-caps term at the end of the normal instructions, which produce plain text output instead of markdown/backtick code formatting. It's not guaranteed to work all the time, but mostly it does seem to work as expected. So for example, instead of: ``` Implement the Snake game in python. ``` You would use: ``` Implement the Snake game in python. PLAINFORMAT ``` ### Other updates from gpt4/1.1: - Several hundred role-playing examples. - A few thousand ORCA style reasoning/math questions with ELI5 prompts to generate the responses (should not be needed in your prompts to this model however, just ask the question). - Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.) ### Usage and License Notices All airoboros models and datasets are intended and licensed for research use only. 
I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:

- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI

So, to reiterate: this model (and datasets) cannot be used commercially.
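As the worked example promised above, here is a minimal sketch of assembling the template and generating with plain `transformers`. Only the preamble string, the spacing rules, and the temperature come from this card; the helper function, generation settings, and device handling are illustrative assumptions:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Preamble/system prompt from this card; the template is: preamble, a single
# space, "USER: ", the prompt, a single space, then "ASSISTANT: ".
PREAMBLE = (
    "A chat between a curious user and an assistant. The assistant gives "
    "helpful, detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(user_message: str) -> str:
    # Hypothetical helper, not part of the repo.
    return f"{PREAMBLE} USER: {user_message} ASSISTANT: "

tokenizer = AutoTokenizer.from_pretrained("jondurbin/airoboros-13b-gpt4-1.2")
model = AutoModelForCausalLM.from_pretrained(
    "jondurbin/airoboros-13b-gpt4-1.2", device_map="auto"
)

inputs = tokenizer(
    build_prompt("Implement the Snake game in python. PLAINFORMAT"),
    return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=2048, do_sample=True, temperature=0.5)
# Strip the prompt tokens before decoding, so only the response is printed.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```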
3,665
[ [ -0.01491546630859375, -0.0689697265625, 0.0118408203125, 0.018035888671875, -0.0269317626953125, -0.0206756591796875, -0.004467010498046875, -0.0272674560546875, 0.02239990234375, 0.022857666015625, -0.042388916015625, -0.037872314453125, -0.022857666015625, ...
PeanutJar/Mistral-v0.1-PeanutButter-v0.0.0-7B
2023-10-13T03:44:15.000Z
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:PeanutJar/PeanutButter-v0.0.0", "arxiv:2305.11206", "license:llama2", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
PeanutJar
null
null
PeanutJar/Mistral-v0.1-PeanutButter-v0.0.0-7B
0
5,563
transformers
2023-10-06T08:44:34
---
datasets:
- PeanutJar/PeanutButter-v0.0.0
language:
- en
library_name: transformers
license: llama2
---

Tabula Rasa

```
Trained on a 7900 XTX.
SFT - 2404 samples, 3 Epochs: ~1.36 hours.
DPO - 583 samples, 3 Epochs: ~0.42 hours.
Total: ~1.78 hours.
```

# Dataset Info:

- The dataset is based on [LIMA](https://arxiv.org/abs/2305.11206).
- It started by removing a large portion of low-quality samples and refusals from LIMA, then expanding it from there using only samples judged high enough in quality.
- More details will be included in a newer release.

# Prompt Format:

- The model should handle instructions well, and be able to do Chat/RP decently (a small serialization sketch follows at the end of this card).
- The model should be used with `unbantokens` so that it can generate EOS tokens.

Regular Instruction Formatting:
```
<|USER|>My instruction<|MODEL|>The model's output
```

If you are doing RP, start your character card within an instruction, then keep the chat in the output.

RP/Chat Formatting:
```
<|USER|>Generate a roleplay scenario between Bob and Joe. Joe is x, Bob is y.<|MODEL|>
Bob: Hello there.
Joe: How are you?
Bob: I'm okay.
```

A small amount of system prompts are in the training data, but I'm unsure whether that was enough to take any effect.

System Prompt Instruction Formatting:
```
<|USER|><<SYS>>You only speak in all capitals.<</SYS>>
How do you get to The Moon?<|MODEL|>TO GET TO THE MOON, YOU NEED A ROCKET.<|USER|>Why do I need a rocket?<|MODEL|>TO REACH THE MOON'S ORBIT, YOU NEED A POWERFUL ROCKET THAT CAN OVERCOME EARTH'S GRAVITY AND ACCELERATE TO HIGH SPEEDS.
```

# LLM Leaderboard Results:

| Average | ARC   | HellaSwag | MMLU  | TruthfulQA |
|---------|-------|-----------|-------|------------|
| 64.34   | 62.20 | 84.10     | 64.14 | 46.94      |
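To make the turn-token format above concrete, here is a minimal sketch that serializes a chat history into the `<|USER|>`/`<|MODEL|>` template. The function name and message structure are illustrative, not part of the released code; only the special tokens come from this card:

```python
def format_peanutbutter(turns: list[tuple[str, str]]) -> str:
    """Serialize (role, text) turns, with role in {"user", "model"},
    into the <|USER|>...<|MODEL|>... template described above."""
    parts = []
    for role, text in turns:
        tag = "<|USER|>" if role == "user" else "<|MODEL|>"
        parts.append(f"{tag}{text}")
    # End on <|MODEL|> so the model continues as the assistant.
    return "".join(parts) + "<|MODEL|>"

prompt = format_peanutbutter([
    ("user", "<<SYS>>You only speak in all capitals.<</SYS>>\nHow do you get to The Moon?"),
    ("model", "TO GET TO THE MOON, YOU NEED A ROCKET."),
    ("user", "Why do I need a rocket?"),
])
print(prompt)
```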
1,740
[ [ -0.01558685302734375, -0.044281005859375, 0.035369873046875, 0.0257568359375, -0.02740478515625, 0.01424407958984375, 0.00437164306640625, -0.021759033203125, 0.03204345703125, 0.038421630859375, -0.047332763671875, -0.03924560546875, -0.041168212890625, -0....
fireballoon/baichuan-vicuna-chinese-7b
2023-07-21T10:40:38.000Z
[ "transformers", "pytorch", "llama", "text-generation", "zh", "en", "dataset:anon8231489123/ShareGPT_Vicuna_unfiltered", "dataset:QingyiSi/Alpaca-CoT", "dataset:mhhmm/leetcode-solutions-python", "has_space", "text-generation-inference", "region:us" ]
text-generation
fireballoon
null
null
fireballoon/baichuan-vicuna-chinese-7b
61
5,562
transformers
2023-06-18T20:43:41
---
language:
- zh
- en
pipeline_tag: text-generation
inference: false
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- QingyiSi/Alpaca-CoT
- mhhmm/leetcode-solutions-python
---

# baichuan-vicuna-chinese-7b

baichuan-vicuna-chinese-7b是在**中英双语**sharegpt数据上全参数微调的对话模型。

- 基座模型:[baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B),在1.2T tokens上预训练的中英双语模型
- 微调数据:[ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json), [ShareGPT-ZH](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/Chinese-instruction-collection), [COT & COT-ZH](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/Chain-of-Thought), [Leetcode](https://www.kaggle.com/datasets/erichartford/leetcode-solutions), [dummy](https://github.com/lm-sys/FastChat)
- 训练代码:基于[FastChat](https://github.com/lm-sys/FastChat)

baichuan-vicuna-chinese-7b is a chat model supervised fine-tuned on vicuna sharegpt data in both **English** and **Chinese**.

- Foundation model: [baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B), a commercially available language model pre-trained on a 1.2T Chinese-English bilingual corpus.
- Finetuning data: [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/blob/main/ShareGPT_V3_unfiltered_cleaned_split.json), [ShareGPT-ZH](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/Chinese-instruction-collection), [COT & COT-ZH](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/Chain-of-Thought), [Leetcode](https://www.kaggle.com/datasets/erichartford/leetcode-solutions), [dummy](https://github.com/lm-sys/FastChat)
- Training code: based on [FastChat](https://github.com/lm-sys/FastChat)

**[NEW]** 4bit-128g GPTQ量化版本:[baichuan-vicuna-chinese-7b-gptq](https://huggingface.co/fireballoon/baichuan-vicuna-chinese-7b-gptq)

# Training config

```
{batch_size: 256, epoch: 3, learning_rate: 2e-5, context_length: 4096, deepspeed_zero: 3, mixed_precision: bf16, gradient_clipping: 1.0}
```

# Inference

Inference with [FastChat](https://github.com/lm-sys/FastChat):

```
python3 -m fastchat.serve.cli --model-path fireballoon/baichuan-vicuna-chinese-7b
```

Inference with Transformers:

```ipython
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
>>> tokenizer = AutoTokenizer.from_pretrained("fireballoon/baichuan-vicuna-chinese-7b", use_fast=False)
>>> model = AutoModelForCausalLM.from_pretrained("fireballoon/baichuan-vicuna-chinese-7b").half().cuda()
>>> streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
>>> instruction = "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {} ASSISTANT:"
>>> prompt = instruction.format("How can I improve my time management skills?")  # user message
>>> generate_ids = model.generate(tokenizer(prompt, return_tensors='pt').input_ids.cuda(), max_new_tokens=2048, streamer=streamer)
'''
Improving time management skills can help you to be more productive and efficient with your time. Here are some tips to help you improve your time management skills:
1. Prioritize tasks: Make a list of all the tasks you need to complete and prioritize them based on their importance and urgency. This will help you to focus on the most important tasks first and avoid getting overwhelmed.
2. Use a calendar or planner: Use a calendar or planner to schedule your tasks and appointments.
This will help you to stay organized and ensure that you don't miss any important events or deadlines.
3. Limit distractions: Identify and eliminate distractions, such as social media notifications or email notifications, that can keep you from focusing on your tasks. Consider using tools like browser extensions or apps to block distracting websites.
4. Take breaks: Take regular breaks throughout the day to recharge and refocus. This will help you to stay productive and avoid burnout.
5. Learn to say no: Don't be afraid to say no to requests or invitations that don't align with your goals or values. This will help you to focus on your priorities and avoid overcommitting yourself.
6. Learn to delegate: If possible, delegate tasks to others to free up your own time and focus on more important tasks.
7. Learn to be flexible: Be prepared to adjust your plans and priorities as needed. Things don't always go according to plan, and it's okay to adjust your schedule to accommodate unexpected events or challenges.
Remember that improving your time management skills takes practice and patience. Be kind to yourself and don't be too hard on yourself if you don't get everything done perfectly right away.
'''
```

Gradio multi-turn dialogue example: https://huggingface.co/fireballoon/baichuan-vicuna-chinese-7b/blob/main/app.py

```
python app.py
```

# Test cases on [Ziya-Eval-Chinese](https://huggingface.co/datasets/IDEA-CCNL/Ziya-Eval-Chinese)

```
Q: 小明有200元,买书花了66元,买文具花了12元,小明剩下的钱比原来少了多少元?
A: 小明剩下的钱比原来少了66+12=78元。

Q: 请翻译这段英文:Timmy was fined a large sum of money by the judge for drunken driving.
A: Timmy 因醉驾被处以高额罚金

Q: 贝多芬和柴可夫斯基在风格上有什么差异吗
A: 贝多芬和柴可夫斯基都是古典音乐历史上最重要的作曲家之一,他们的风格在某些方面存在差异。贝多芬的作品通常以严肃和内在的特质为特点,他的音乐常常呈现出深沉而丰富的情感。他的旋律通常是简单而优美的,强调和声变化和音乐构成。柴可夫斯基的作品则常常呈现出更为华丽和梦幻的特质,他的音乐常常具有戏剧性和歌唱性。柴可夫斯基的旋律常常更加复杂,强调和声和旋律的对比,音乐常常充满色彩和装饰。总的来说,贝多芬和柴可夫斯基的音乐风格在内省性和歌唱性方面存在着明显的差异,但两位作曲家在古典音乐史上都具有重要地位,他们的作品仍然被广泛演奏和赞扬。
```

```
Q: 写一篇高中青涩的爱情故事,男主是“我”,女主叫小美。我在高一喜欢上了小美。 开始写吧
A: 我叫陈默,是一个高一新生,在开学的第一天,我就对一个女孩一见钟情,她叫小美,是我班上的同学。那是一个阳光明媚的上午,我正在教室外走廊上,看着同学们有说有笑地走进教室,突然一个女孩吸引了我的目光。她穿着一条白色的连衣裙,在阳光下闪闪发光,她好像在看着我,我也对她微笑了一下,就这样,我们互相认识了。
接下来的日子里,我总是忍不住偷偷地看她,上课的时候,我总是不由自主地想到她,我甚至开始期待着每一次能和她见面。我开始给她写情书,每天放学后都会给她写一封信,表达我的思念和爱意。
但是,小美似乎并不怎么在意我,她总是跟她的好朋友有说有笑,好像对我置之不理。我感到很失落,但还是忍不住想要接近她。我会在课间的时候偷偷地看着她,在食堂吃饭的时候假装不经意地碰到她,在路上遇到她时跟她打招呼。
就这样,我坚持着每天给她写信,表达我的思念和爱意,直到高二的时候,小美终于给了我回应。我们开始一起上下学,一起吃午饭,一起做很多事情。我终于得到了她的爱,我感觉自己是世界上最幸福的人。
然而,我们的幸福并没有持续多久。高三的时候,小美开始变得忙碌起来,她要参加很多补习班和考试,我担心她会累垮,于是开始跟她吵架。最终,我们还是分手了,我很难过,感觉自己失去了最爱的人。
现在,我已经毕业多年,每当我看到小美,我还是会想起我们在一起的时光,我知道,那是我一生中最美好的回忆。
```

```
Q: 从零学习编程,请给我一个三个月的学习计划
A: 以下是一个三个月的学习计划,供您参考:
1. 第1-2周:Python基础
* 学习Python基础知识,如变量和数据类型、循环、条件语句、函数和方法等。
* 练习编程基础练习,如从键盘读取和写入数据、控制流语句和函数调用等。
1. 第3-4周:Python面向对象编程
* 学习Python中面向对象编程的基础知识,如类、方法、属性和方法、继承和多态等。
* 练习面向对象编程练习,如类创建、类调用和类组合等。
1. 第5-6周:Python高级主题
* 学习Python的高级主题,如异常、内存管理、调试技术和性能优化等。
* 练习高级主题练习,如异常处理、内存管理和性能优化等。
1. 第7-8周:Python Web开发
* 学习Python Web开发的基础知识,如Django框架、Flask框架和HTML、CSS和JavaScript等。
* 练习Web开发练习,如Django或Flask应用程序创建、HTTP请求和响应和用户界面设计等。
1. 第9-10周:项目实践
* 选择一个感兴趣的项目进行实践,如网络爬虫、数据分析、机器学习或Web开发等。
* 完成项目并进行演示和讲解。
1. 第11-12周:复习和强化
* 复习所学知识,并进行强化练习以加深理解。
* 参加考试以评估学习成果。
注意:学习计划仅供参考,您可以根据自己的学习节奏和兴趣进行调整。
```

```
Q: 实现一个Python函数以使用动态编程查找两个输入字符串的最长公共子序列
A: 这里有一个Python函数,使用动态编程查找两个输入字符串的最长公共子序列(LCS):

def lcs(s1, s2):
    n = len(s1)
    m = len(s2)
    dp = [s1[0]]
    max_length = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if s1[i-1] == s2[j-1]:
                dp.append(s1[i-1])
                max_length = max(max_length, j-i+1)
    return max(max_length, len(s1))

该函数使用动态规划来计算LCS。它从第一个字符串中的第一个字符开始,并将其添加到存储子序列的`dp`列表中。然后,它循环遍历两个字符串中的字符,如果它们相匹配,则将相应字符添加到`dp`列表中,并将`max_length`存储为当前字符串的最大长度。
最后,该函数返回`max_length`和两个输入字符串的长度之间的最大值,这表示LCS的长度。请注意,该函数假设输入字符串是字符串,而不是字符数组。如果要将其修改为处理数组,则需要相应地修改该函数。
```
7,390
[ [ -0.039215087890625, -0.0654296875, 0.012908935546875, 0.036712646484375, -0.022491455078125, -0.00745391845703125, -0.0184326171875, -0.0285491943359375, 0.02911376953125, 0.006473541259765625, -0.033905029296875, -0.033416748046875, -0.035858154296875, 0.00...
openaccess-ai-collective/manticore-13b
2023-05-24T21:16:11.000Z
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:anon8231489123/ShareGPT_Vicuna_unfiltered", "dataset:ehartford/wizard_vicuna_70k_unfiltered", "dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered", "dataset:QingyiSi/Alpaca-CoT", "dataset:teknium/GPT4-LLM-Cleaned", ...
text-generation
openaccess-ai-collective
null
null
openaccess-ai-collective/manticore-13b
108
5,560
transformers
2023-05-17T02:56:46
---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPT4-LLM-Cleaned
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- tasksource/mmlu
- openai/summarize_from_feedback
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# Manticore 13B - (previously Wizard Mega)

**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**

![open-llm-leaderboard.png](https://huggingface.co/openaccess-ai-collective/manticore-13b/resolve/main/open-llm-leaderboard.png)

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/EqrvvehG) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org)

Manticore 13B is a Llama 13B model fine-tuned on the following datasets:

- [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) - based on a cleaned and de-duped subset
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [Wizard-Vicuna](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPT4-LLM-Cleaned](https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses
- mmlu: instruct augmented for detailed responses, subset including:
  - abstract_algebra
  - conceptual_physics
  - formal_logic
  - high_school_physics
  - logical_fallacies
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K row subset of instruct augmented for concise responses
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization

# Demo

Try out the model in HF Spaces. The demo uses a quantized GGML version of the model to quickly return predictions on smaller GPUs (and even CPUs). Quantized GGML may have some minimal loss of model quality.

- https://huggingface.co/spaces/openaccess-ai-collective/manticore-ggml

## Release Notes

- https://wandb.ai/wing-lian/manticore-13b/runs/nq3u3uoh/workspace

## Build

Manticore was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB.

- Preview Release: 3 epochs taking approximately 24 hours.
- The configuration to duplicate this build is provided in this repo's [/config folder](https://huggingface.co/openaccess-ai-collective/manticore-13b/tree/main/configs).

## Bias, Risks, and Limitations

Manticore has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Manticore was fine-tuned from the base model LlaMa 13B; please refer to its model card's Limitations Section for relevant information.

## Examples

````
### Instruction: write Python code that returns the first n numbers of the Fibonacci sequence using memoization.
### Assistant: Here’s an example of how you can use memoization to efficiently calculate the first n numbers of the Fibonacci sequence in Python:
```
def fib(n):
    # Memoize the last calculated Fibonacci number for each index
    cache = {}

    def fib_memoized(n):
        if n not in cache:
            cache[n] = [0, 1]
        else:
            return cache[n][-1], cache[n][-2]

    # Recursive helper function to calculate the Fibonacci sequence up to a given index
    def fib_helper(n):
        if n == 0 or n == 1:
            return 0, 1
        else:
            return fib_helper(n - 1) + fib_helper(n - 2)

    # Calculate the first n numbers of the Fibonacci sequence using memoization
    def fib_memoized_seq(n):
        if n == 0 or n == 1:
            return 0, 1
        else:
            result = fib_helper(n - 1) + fib_helper(n - 2)
            cache[n] = [result]
            return result, cache[n][-1], cache[n][-2]

    # Example usage
    for i in range(len(fib_memoized_seq(3))):
        print(fib_memoized_seq(3)[i])
```
In this implementation, we use a cache dictionary to memoize the last calculated Fibonacci number for each index. We define two helper functions: fib_helper, which recursively calculates the Fibonacci sequence up to a given index, and fib_memoized, which uses memoization to efficiently calculate the first n numbers of the Fibonacci sequence. Finally, we define the fib_memoized_seq function, which is our main function for calculating the first n numbers of the Fibonacci sequence using memoization.
````

```
USER: Finish the joke, a mechanic and a car salesman walk into a bar...
ASSISTANT: … and when they get up to leave, they realize that they’ve left their keys in their cars. The mechanic says “I know how to fix this” and opens his toolbox to grab something. Then he stops and looks at the salesman. He says “You drive a Toyota, right?” The salesman nods. The mechanic then takes out a tiny screwdriver from his toolbox and starts working on the lock of the car door. The salesman is amazed and asks how he knew that. The mechanic replies “I can tell by the way you parked your car.”
```

```
USER: Describe how a tsunami would affect the residents of Denver Colorado
ASSISTANT: The impact of a tsunami in Denver, Colorado would likely be minimal. As there are no coastal areas or bodies of water nearby, a tsunami generated from an earthquake or volcanic activity elsewhere in the world would have little effect on the city and its residents. Even if a large tsunami were to hit the coastline of California or Mexico, the distance between Denver and these locations means that any impacts on the city itself would be minimal.
```

```
USER: If you could do one thing right now, what would it be?
ASSISTANT: I’d hug my loved ones a little tighter today and tell them how much they mean to me.
USER: And who are your loved ones?
ASSISTANT: My closest friends and family members.
USER: What are their names?
ASSISTANT: My two best friends are named Emily and Jessica, my mom is Mary, and my dad is David. Answering the question with a bit more information about who they are or what they mean to me would help provide a more personalized response.
```
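The card does not include a loading snippet; as a minimal sketch, the examples above can be reproduced with the `transformers` pipeline. The model id comes from this card, while the generation settings and prompt layout (Alpaca-style `### Instruction:` / `### Assistant:`, as in the first example) are illustrative assumptions:

```python
import torch
from transformers import pipeline

# Text-generation pipeline over the Manticore checkpoint; dtype/device settings
# are one common choice, not a requirement from the card.
generator = pipeline(
    "text-generation",
    model="openaccess-ai-collective/manticore-13b",
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = (
    "### Instruction: Finish the joke, a mechanic and a car salesman walk into a bar...\n"
    "### Assistant:"
)
result = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7,
                   return_full_text=False)
print(result[0]["generated_text"])
```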
6,548
[ [ -0.04010009765625, -0.045074462890625, 0.009918212890625, 0.01385498046875, -0.01480865478515625, -0.0117340087890625, -0.0027065277099609375, -0.037017822265625, 0.032958984375, 0.0166168212890625, -0.048187255859375, -0.0306243896484375, -0.029815673828125, ...
danielhanchen/open_llama_3b_600bt_preview
2023-06-07T06:26:48.000Z
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "license:apache-2.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
danielhanchen
null
null
danielhanchen/open_llama_3b_600bt_preview
0
5,560
transformers
2023-05-27T17:14:59
---
license: apache-2.0
language:
- en
---

# ARCHIVED.

## Download from original repo: https://huggingface.co/openlm-research/open_llama_3b_600bt_preview

### I made a few PRs to the original repo to include my changes!

Original model from https://huggingface.co/openlm-research/open_llama_3b_600bt_preview.

Example below edited from https://github.com/openlm-research/open_llama

```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "openlm-research/open_llama_3b_600bt_preview"
fast_model_name = "danielhanchen/open_llama_3b_600bt_preview"

tokenizer = AutoTokenizer.from_pretrained(fast_model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype = torch.float16, device_map = "auto")

prompt = "Q: What is the largest animal?\nA:"
input_ids = tokenizer(prompt, return_tensors = "pt").input_ids

print( tokenizer.decode( model.generate( input_ids, max_new_tokens = 32).ravel() ) )
```

This repo includes:

1) A port of `LlamaTokenizer` to `LlamaTokenizerFast` via a few lines of code. Loading via `AutoTokenizer` used to take 4 to 5 minutes; now, a few seconds! Essentially the porting is done via the code below:

```
# from huggingface_hub import notebook_login
# notebook_login()

from transformers import LlamaTokenizerFast
from tokenizers import AddedToken

tokenizer = LlamaTokenizerFast.from_pretrained(
    "openlm-research/open_llama_3b_600bt_preview",
    add_bos_token = True,
    add_eos_token = False, # Original LLaMA is False -> add </s> during processing.
    bos_token = AddedToken("<s>", single_word = True),
    eos_token = AddedToken("</s>", single_word = True),
    unk_token = AddedToken("<unk>", single_word = True),
    pad_token = AddedToken("<unk>", single_word = True)
)
tokenizer.push_to_hub("open_llama_3b_600bt_preview")
```

2) `AutoTokenizer` does not recognize the BOS, EOS and UNK tokens: weirdly, `<unk>` (i.e. token 0) was added instead of the `<s>` or `</s>` token.

3) Manually added the BOS `<s>`, EOS `</s>`, and UNK `<unk>` tokens, with PAD (padding) also set to the `<unk>` token.
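To check the load-time difference claimed in point 1 on your own machine, a quick, illustrative timing sketch (the actual numbers will vary; nothing below is from the original repo):

```python
import time
from transformers import AutoTokenizer

for name in (
    "openlm-research/open_llama_3b_600bt_preview",  # slow-tokenizer path (minutes)
    "danielhanchen/open_llama_3b_600bt_preview",    # fast tokenizer already saved (seconds)
):
    start = time.time()
    AutoTokenizer.from_pretrained(name)
    print(f"{name}: loaded in {time.time() - start:.1f}s")
```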
2,066
[ [ -0.039703369140625, -0.04248046875, 0.0147247314453125, 0.0302581787109375, -0.04437255859375, 0.004241943359375, -0.006145477294921875, -0.043243408203125, 0.0305328369140625, 0.0127105712890625, -0.044921875, -0.0382080078125, -0.046630859375, 0.0302886962...
sauce1337/BerrySauce-L2-13b
2023-09-19T15:59:51.000Z
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
sauce1337
null
null
sauce1337/BerrySauce-L2-13b
0
5,560
transformers
2023-09-13T07:12:49
---
license: cc-by-nc-4.0
---

ok, it's a berry.

![](okberry.png)

would you role play with a berry? maybe.

would you ask a berry complicated logical questions? maybe.

use alpaca format? maybe.

✧˖°.NEW★₊˚⊹ exllama v2: https://huggingface.co/sauce1337/BerrySauce-L2-13b-exl2

> TheBloke GGUF and GPTQ:\
> https://huggingface.co/TheBloke/BerrySauce-L2-13B-GPTQ \
> https://huggingface.co/TheBloke/BerrySauce-L2-13B-GGUF
420
[ [ -0.03387451171875, -0.09356689453125, 0.0072174072265625, 0.03369140625, -0.037109375, -0.032684326171875, -0.0019311904907226562, -0.04632568359375, 0.07427978515625, 0.040496826171875, -0.07354736328125, -0.01546478271484375, -0.052093505859375, -0.0024261...
timm/beitv2_base_patch16_224.in1k_ft_in22k_in1k
2023-05-08T23:35:52.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "dataset:imagenet-1k", "dataset:imagenet-22k", "arxiv:2208.06366", "arxiv:2010.11929", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/beitv2_base_patch16_224.in1k_ft_in22k_in1k
0
5,559
timm
2022-12-23T02:33:57
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-1k
- imagenet-22k
---

# Model card for beitv2_base_patch16_224.in1k_ft_in22k_in1k

A BEiT-v2 image classification model. Trained on ImageNet-1k with self-supervised masked image modelling (MIM) using a VQ-KD encoder as a visual tokenizer (via an OpenAI CLIP B/16 teacher). Fine-tuned on ImageNet-22k and then ImageNet-1k.

## Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 86.5
  - GMACs: 17.6
  - Activations (M): 23.9
  - Image size: 224 x 224
- **Papers:**
  - BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers: https://arxiv.org/abs/2208.06366
  - An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:**
  - ImageNet-1k
  - ImageNet-22k
- **Original:** https://github.com/microsoft/unilm/tree/master/beit2

## Model Usage

### Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import torch  # needed for torch.topk below
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('beitv2_base_patch16_224.in1k_ft_in22k_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'beitv2_base_patch16_224.in1k_ft_in22k_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 197, 768) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison

Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation

```bibtex
@article{peng2022beit,
  title={Beit v2: Masked image modeling with vector-quantized visual tokenizers},
  author={Peng, Zhiliang and Dong, Li and Bao, Hangbo and Ye, Qixiang and Wei, Furu},
  journal={arXiv preprint arXiv:2208.06366},
  year={2022}
}
```

```bibtex
@article{dosovitskiy2020vit,
  title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
  author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
  journal={ICLR},
  year={2021}
}
```

```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
3,877
[ [ -0.031219482421875, -0.0278167724609375, -0.003787994384765625, 0.0081329345703125, -0.04107666015625, -0.01523590087890625, -0.00579071044921875, -0.037445068359375, 0.01290130615234375, 0.0298004150390625, -0.03076171875, -0.054412841796875, -0.05511474609375,...
MBZUAI/bactrian-x-llama-13b-merged
2023-07-29T12:48:47.000Z
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2305.15011", "license:mit", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
MBZUAI
null
null
MBZUAI/bactrian-x-llama-13b-merged
1
5,559
transformers
2023-06-19T12:36:05
---
license: mit
---

#### Current Training Steps: 108,000

This repo contains a merged model using low-rank adaptation (LoRA) for LLaMA-13b, fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in 52 languages.

### Dataset Creation

1. English Instructions: The English instructions are obtained from [alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca) and [dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data).
2. Instruction Translation: The instructions (and inputs) are translated into the target languages using the Google Translation API (conducted in April 2023).
3. Output Generation: We generate output from `gpt-3.5-turbo` for each language (conducted in April 2023).

<h3 align="center">
<img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/BactrianX_dataset.jpg" width="950" align="center">
</h3>

### Training Parameters

The code for training the model is provided in our [github](https://github.com/mbzuai-nlp/Bactrian-X), which is adapted from [Alpaca-LoRA](https://github.com/tloen/alpaca-lora). This version of the weights was trained with the following hyperparameters:

- Epochs: 10
- Batch size: 128
- Cutoff length: 512
- Learning rate: 3e-4
- Lora _r_: 64
- Lora target modules: q_proj, k_proj, v_proj, o_proj

That is:

```
python finetune.py \
    --base_model='decapoda-research/llama-13b-hf' \
    --num_epochs=5 \
    --batch_size=128 \
    --cutoff_len=512 \
    --group_by_length \
    --output_dir='./bactrian-x-llama-13b-lora' \
    --lora_target_modules='q_proj,k_proj,v_proj,o_proj' \
    --lora_r=64 \
    --micro_batch_size=32
```

Instructions for running it can be found at https://github.com/MBZUAI-nlp/Bactrian-X.

### Discussion of Biases

(1) Translation bias; (2) Potential English-culture bias in the translated dataset.

### Citation Information

```
@misc{li2023bactrianx,
  title={Bactrian-X : A Multilingual Replicable Instruction-Following Model with Low-Rank Adaptation},
  author={Haonan Li and Fajri Koto and Minghao Wu and Alham Fikri Aji and Timothy Baldwin},
  year={2023},
  eprint={2305.15011},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
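The card only shows the training command; as a minimal inference sketch, assuming the standard Alpaca-style prompt template used by Alpaca-LoRA-derived models (the template, generation settings, and device handling here are assumptions, not taken from this repo):

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

model_id = "MBZUAI/bactrian-x-llama-13b-merged"
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Alpaca-style template (assumed; Bactrian-X training code is adapted from Alpaca-LoRA).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWho was the first person on the Moon?\n\n### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```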
2,289
[ [ -0.0235748291015625, -0.048797607421875, 0.0119781494140625, 0.013885498046875, -0.033538818359375, -0.012054443359375, -0.02777099609375, -0.0260162353515625, 0.026092529296875, 0.029022216796875, -0.051239013671875, -0.045135498046875, -0.05218505859375, 0...
jondurbin/airoboros-7b-gpt4-1.3
2023-06-22T14:58:20.000Z
[ "transformers", "pytorch", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-1.3", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
jondurbin
null
null
jondurbin/airoboros-7b-gpt4-1.3
0
5,559
transformers
2023-06-20T07:09:09
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.3
---

__This version has problems, use if you dare, or wait for 1.4.__

### Overview

This is a qlora fine-tuned 7b parameter LlaMa model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros

This is mostly an extension of [1.2](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.2) with a few enhancements:

- All coding instructions have an equivalent " PLAINFORMAT" version now.
- Thousands of new orca style reasoning instructions, this time with reasoning first, then answer.
- A few more random items of various types, including a first attempt at multi-character interactions with asterisked actions and quoted speech.

This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with previous full fine-tune versions.

```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```

So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after the colon), then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).

### Usage

To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.

```
pip install git+https://github.com/jondurbin/FastChat
```

Be sure you are pulling the latest branch!

Then, you can invoke it like so (after downloading the model):

```
python -m fastchat.serve.cli \
  --model-path airoboros-7b-gpt4-1.3 \
  --temperature 0.5 \
  --max-new-tokens 2048 \
  --no-history
```

### Usage and License Notices

All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:

- the base model is LLaMa, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models that compete with OpenAI

So, to reiterate: this model (and datasets) cannot be used commercially.
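As an illustration of the " PLAINFORMAT" convention listed in the enhancements above, a tiny sketch; only the template and the PLAINFORMAT suffix come from this card, and the helper function is hypothetical:

```python
# Preamble/system prompt from this card.
PREAMBLE = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input."
)

def coding_prompt(instruction: str, plain: bool = True) -> str:
    # " PLAINFORMAT" asks the model for raw code without backticks or explanations.
    suffix = " PLAINFORMAT" if plain else ""
    return f"{PREAMBLE} USER: {instruction}{suffix} ASSISTANT: "

print(coding_prompt("Implement the Snake game in python."))
```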
2,472
[ [ -0.013916015625, -0.06390380859375, 0.01544189453125, 0.0266876220703125, -0.04376220703125, -0.0240631103515625, -0.0035762786865234375, -0.04168701171875, 0.0230255126953125, 0.0272369384765625, -0.03985595703125, -0.037841796875, -0.025299072265625, 0.005...
hakurei/lotus-12B
2023-01-25T06:52:44.000Z
[ "transformers", "pytorch", "gpt_neox", "text-generation", "causal-lm", "en", "license:agpl-3.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
hakurei
null
null
hakurei/lotus-12B
26
5,558
transformers
2023-01-25T00:01:32
---
language:
- en
thumbnail: https://cdn.discordapp.com/attachments/886209362572476486/1067698349681164308/thumb-overlay.png
tags:
- pytorch
- causal-lm
license: agpl-3.0
pipeline_tag: text-generation
---

# 🪷 Lotus-12B

Lotus-12B is a GPT-NeoX 12B model fine-tuned on 2.5GB of a diverse range of light novels, erotica, annotated literature, and public-domain conversations for the purpose of generating novel-like fictional text and conversations.

## Model Description

The model used for fine-tuning is [Pythia 12B Deduped](https://github.com/EleutherAI/pythia), which is a 12 billion parameter auto-regressive language model trained on [The Pile](https://pile.eleuther.ai/).

## Training Data & Annotative Prompting

The data used in fine-tuning has been gathered from various sources such as the [Gutenberg Project](https://www.gutenberg.org/). The annotated fiction dataset has prepended tags to assist in generating towards a particular style. Here is an example prompt that shows how to use the annotations.

```
[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror; Tags: 3rdperson, scary; Style: Dark ]
***
When a traveler in north central Massachusetts takes the wrong fork...
```

And for conversations which were scraped from [My Discord Server](https://discord.com/invite/touhouai) and publicly available subreddits from [Reddit](https://www.reddit.com/):

```
[ Title: (2019) Cars getting transported on an open deck catch on fire after salty water shorts their batteries; Genre: CatastrophicFailure ]
***
Anonymous: Daaaaaamn try explaining that one to the owners EDIT: who keeps reposting this for my comment to get 3k upvotes?
Anonymous: "Your car caught fire from some water"
Irythros: Lol, I wonder if any compensation was in order
Anonymous: Almost all of the carriers offer insurance but it isn’t cheap. I guarantee most of those owners declined the insurance.
```

The annotations can be mixed and matched to help generate towards a specific style.

## Downstream Uses

This model can be used for entertainment purposes and as a creative writing assistant for fiction writers and chatbots.

## Example Code

```
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('hakurei/lotus-12B')
tokenizer = AutoTokenizer.from_pretrained('hakurei/lotus-12B')

prompt = '''[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ]
***
When a traveler'''

input_ids = tokenizer.encode(prompt, return_tensors='pt')
output = model.generate(input_ids, do_sample=True, temperature=1.0, top_p=0.9, repetition_penalty=1.2, max_length=len(input_ids[0])+100, pad_token_id=tokenizer.eos_token_id)
generated_text = tokenizer.decode(output[0])
print(generated_text)
```

An example output from this code produces a result that will look similar to:

```
[ Title: The Dunwich Horror; Author: H. P. Lovecraft; Genre: Horror ]
***
When a traveler comes to an unknown region, his thoughts turn inevitably towards the old gods and legends which cluster around its appearance. It is not that he believes in them or suspects their reality—but merely because they are present somewhere else in creation just as truly as himself, and so belong of necessity in any landscape whose features cannot be altogether strange to him. Moreover, man has been prone from ancient times to brood over those things most connected with the places where he dwells.
Thus the Olympian deities who ruled Hyper
```

## Team members and Acknowledgements

This project would not have been possible without the work done by EleutherAI. Thank you!

- [Anthony Mercurio](https://github.com/harubaru)
- Imperishable_NEET

In order to reach us, you can join our [Discord server](https://discord.gg/touhouai).

[![Discord Server](https://discordapp.com/api/guilds/930499730843250783/widget.png?style=banner2)](https://discord.gg/touhouai)
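Since the annotations described above can be mixed and matched, a small illustrative helper for composing the bracketed header is sketched below; the function is hypothetical, while the tag names come from the examples in this card:

```python
def annotation_header(**fields: str) -> str:
    """Compose a '[ Key: value; ... ]' annotation header followed by the
    '***' separator, e.g. annotation_header(Title="...", Genre="Horror")."""
    body = "; ".join(f"{key}: {value}" for key, value in fields.items())
    return f"[ {body} ]\n***\n"

prompt = annotation_header(
    Title="The Dunwich Horror",
    Author="H. P. Lovecraft",
    Genre="Horror",
    Tags="3rdperson, scary",
    Style="Dark",
) + "When a traveler"
print(prompt)
```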
3,885
[ [ -0.038604736328125, -0.05560302734375, 0.0250244140625, 0.0204315185546875, -0.0281982421875, -0.00839996337890625, -0.0218963623046875, -0.03997802734375, 0.039764404296875, 0.032318115234375, -0.0361328125, -0.04644775390625, -0.0241546630859375, 0.0112228...
cointegrated/rubert-base-cased-nli-threeway
2023-10-06T11:58:10.000Z
[ "transformers", "pytorch", "safetensors", "bert", "text-classification", "rubert", "russian", "nli", "rte", "zero-shot-classification", "ru", "dataset:cointegrated/nli-rus-translated-v2021", "endpoints_compatible", "region:us" ]
zero-shot-classification
cointegrated
null
null
cointegrated/rubert-base-cased-nli-threeway
18
5,557
transformers
2022-03-02T23:29:05
---
language: ru
pipeline_tag: zero-shot-classification
tags:
- rubert
- russian
- nli
- rte
- zero-shot-classification
widget:
- text: "Я хочу поехать в Австралию"
  candidate_labels: "спорт,путешествия,музыка,кино,книги,наука,политика"
  hypothesis_template: "Тема текста - {}."
datasets:
- cointegrated/nli-rus-translated-v2021
---

# RuBERT for NLI (natural language inference)

This is [DeepPavlov/rubert-base-cased](https://huggingface.co/DeepPavlov/rubert-base-cased) fine-tuned to predict the logical relationship between two short texts: entailment, contradiction, or neutral.

## Usage

How to run the model for NLI:

```python
# !pip install transformers sentencepiece --quiet
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_checkpoint = 'cointegrated/rubert-base-cased-nli-threeway'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint)
if torch.cuda.is_available():
    model.cuda()

text1 = 'Сократ - человек, а все люди смертны.'
text2 = 'Сократ никогда не умрёт.'
with torch.inference_mode():
    out = model(**tokenizer(text1, text2, return_tensors='pt').to(model.device))
    proba = torch.softmax(out.logits, -1).cpu().numpy()[0]
print({v: proba[k] for k, v in model.config.id2label.items()})
# {'entailment': 0.009525929, 'contradiction': 0.9332064, 'neutral': 0.05726764}
```

You can also use this model for zero-shot short text classification (by labels only), e.g. for sentiment analysis:

```python
def predict_zero_shot(text, label_texts, model, tokenizer, label='entailment', normalize=True):
    tokens = tokenizer([text] * len(label_texts), label_texts, truncation=True, return_tensors='pt', padding=True)
    with torch.inference_mode():
        result = torch.softmax(model(**tokens.to(model.device)).logits, -1)
    proba = result[:, model.config.label2id[label]].cpu().numpy()
    if normalize:
        proba /= sum(proba)
    return proba

classes = ['Я доволен', 'Я недоволен']

predict_zero_shot('Какая гадость эта ваша заливная рыба!', classes, model, tokenizer)
# array([0.05609814, 0.9439019 ], dtype=float32)

predict_zero_shot('Какая вкусная эта ваша заливная рыба!', classes, model, tokenizer)
# array([0.9059292 , 0.09407079], dtype=float32)
```

Alternatively, you can use [Huggingface pipelines](https://huggingface.co/transformers/main_classes/pipelines.html) for inference.
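A sketch of that pipeline route, using the hypothesis template and the candidate labels from this card's widget example (the input sentence and labels are just the widget sample, not a recommendation):

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="cointegrated/rubert-base-cased-nli-threeway",
)
result = classifier(
    "Я хочу поехать в Австралию",
    candidate_labels=["спорт", "путешествия", "музыка", "кино", "книги", "наука", "политика"],
    hypothesis_template="Тема текста - {}.",
)
# The pipeline sorts labels by score; print the best guess.
print(result["labels"][0], result["scores"][0])
```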
## Sources

The model has been trained on a series of NLI datasets automatically translated to Russian from English.

Most datasets were taken [from the repo of Felipe Salvatore](https://github.com/felipessalvatore/NLI_datasets): [JOCI](https://github.com/sheng-z/JOCI), [MNLI](https://cims.nyu.edu/~sbowman/multinli/), [MPE](https://aclanthology.org/I17-1011/), [SICK](http://www.lrec-conf.org/proceedings/lrec2014/pdf/363_Paper.pdf), [SNLI](https://nlp.stanford.edu/projects/snli/).

Some datasets were obtained from the original sources: [ANLI](https://github.com/facebookresearch/anli), [NLI-style FEVER](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [IMPPRES](https://github.com/facebookresearch/Imppres).

## Performance

The table below shows ROC AUC (one class vs rest) for five models on the corresponding *dev* sets:

- [tiny](https://huggingface.co/cointegrated/rubert-tiny-bilingual-nli): a small BERT predicting entailment vs not_entailment
- [twoway](https://huggingface.co/cointegrated/rubert-base-cased-nli-twoway): a base-sized BERT predicting entailment vs not_entailment
- [threeway](https://huggingface.co/cointegrated/rubert-base-cased-nli-threeway) (**this model**): a base-sized BERT predicting entailment vs contradiction vs neutral
- [vicgalle-xlm](https://huggingface.co/vicgalle/xlm-roberta-large-xnli-anli): a large multilingual NLI model
- [facebook-bart](https://huggingface.co/facebook/bart-large-mnli): a large multilingual NLI model

|model |add_one_rte|anli_r1|anli_r2|anli_r3|copa|fever|help|iie |imppres|joci|mnli |monli|mpe |scitail|sick|snli|terra|total |
|------------------------|-----------|-------|-------|-------|----|-----|----|-----|-------|----|-----|-----|----|-------|----|----|-----|------|
|n_observations |387 |1000 |1000 |1200 |200 |20474|3355|31232|7661 |939 |19647|269 |1000|2126 |500 |9831|307 |101128|
|tiny/entailment |0.77 |0.59 |0.52 |0.53 |0.53|0.90 |0.81|0.78 |0.93 |0.81|0.82 |0.91 |0.81|0.78 |0.93|0.95|0.67 |0.77 |
|twoway/entailment |0.89 |0.73 |0.61 |0.62 |0.58|0.96 |0.92|0.87 |0.99 |0.90|0.90 |0.99 |0.91|0.96 |0.97|0.97|0.87 |0.86 |
|threeway/entailment |0.91 |0.75 |0.61 |0.61 |0.57|0.96 |0.56|0.61 |0.99 |0.90|0.91 |0.67 |0.92|0.84 |0.98|0.98|0.90 |0.80 |
|vicgalle-xlm/entailment |0.88 |0.79 |0.63 |0.66 |0.57|0.93 |0.56|0.62 |0.77 |0.80|0.90 |0.70 |0.83|0.84 |0.91|0.93|0.93 |0.78 |
|facebook-bart/entailment|0.51 |0.41 |0.43 |0.47 |0.50|0.74 |0.55|0.57 |0.60 |0.63|0.70 |0.52 |0.56|0.68 |0.67|0.72|0.64 |0.58 |
|threeway/contradiction | |0.71 |0.64 |0.61 | |0.97 | | |1.00 |0.77|0.92 | |0.89| |0.99|0.98| |0.85 |
|threeway/neutral | |0.79 |0.70 |0.62 | |0.91 | | |0.99 |0.68|0.86 | |0.79| |0.96|0.96| |0.83 |

For evaluation (and for training of the [tiny](https://huggingface.co/cointegrated/rubert-tiny-bilingual-nli) and [twoway](https://huggingface.co/cointegrated/rubert-base-cased-nli-twoway) models), some extra datasets were used: [Add-one RTE](https://cs.brown.edu/people/epavlick/papers/ans.pdf), [CoPA](https://people.ict.usc.edu/~gordon/copa.html), [IIE](https://aclanthology.org/I17-1100), and [SCITAIL](https://allenai.org/data/scitail) taken from [the repo of Felipe Salvatore](https://github.com/felipessalvatore/NLI_datasets) and translated; [HELP](https://github.com/verypluming/HELP) and [MoNLI](https://github.com/atticusg/MoNLI) taken from the original sources and translated; and the Russian [TERRa](https://russiansuperglue.com/ru/tasks/task_info/TERRa).
6,210
[ [ -0.028594970703125, -0.0478515625, 0.0179595947265625, 0.01099395751953125, 0.0013170242309570312, -0.01035308837890625, -0.0197601318359375, -0.03314208984375, 0.03131103515625, 0.029144287109375, -0.032257080078125, -0.04620361328125, -0.040802001953125, 0...
minlik/chinese-alpaca-33b-merged
2023-06-20T08:56:03.000Z
[ "transformers", "pytorch", "llama", "text-generation", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
minlik
null
null
minlik/chinese-alpaca-33b-merged
10
5,557
transformers
2023-06-08T11:56:18
---
title: chinese-alpaca-33b-merged
emoji: 📚
colorFrom: gray
colorTo: red
sdk: gradio
sdk_version: 3.23.0
app_file: app.py
pinned: false
---

This Chinese Alpaca-33B model was obtained by adding a Chinese vocabulary, continuing pre-training of the Chinese embeddings, and then fine-tuning on instruction datasets on that basis.

The base and LoRA models used for the conversion are:

- base-model: elinas/llama-30b-hf-transformers-4.29
- lora-model: ziqingyang/chinese-alpaca-lora-33b

For details, see: https://github.com/ymcui/Chinese-LLaMA-Alpaca/releases/tag/v4.0

### Usage reference

1. Install the required packages

```bash
pip install sentencepiece
pip install transformers>=4.28.0
```

2. Generate text

```python
import torch
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM

def generate_prompt(text):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{text}

### Response:"""

tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-alpaca-33b-merged')
model = LlamaForCausalLM.from_pretrained('minlik/chinese-alpaca-33b-merged').half().to('cuda')
model.eval()

text = '第一个登上月球的人是谁?'  # "Who was the first person to land on the Moon?"
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')

with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=128,
        temperature=1,
        top_k=40,
        top_p=0.9,
        repetition_penalty=1.15
    ).cuda()

output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(prompt, '').strip())
```
1,447
[ [ -0.0230255126953125, -0.06671142578125, 0.00914764404296875, 0.0272369384765625, -0.03216552734375, -0.0095367431640625, -0.015289306640625, -0.0235595703125, 0.0206146240234375, 0.02020263671875, -0.0238037109375, -0.0516357421875, -0.039642333984375, -0.00...
digitous/Javelin-GPTJ
2023-02-25T20:32:59.000Z
[ "transformers", "pytorch", "gptj", "text-generation", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "region:us" ]
text-generation
digitous
null
null
digitous/Javelin-GPTJ
4
5,556
transformers
2023-02-25T01:04:18
---
license: creativeml-openrail-m
---

Javelin-GPTJ is an even 25*4 quad merge of KoboldAI's GPT-J classics: (Janeway + Shinen) + (Adventure + Skein).

Janeway + Shinen is listed under JANIN-GPTJ.
Adventure + Skein is listed under Adventien-GPTJ.
Original models can all be found under KoboldAI.

This 4x merge is not intended for minors, as it can produce NC-17+ content (mostly from Shinen).

A GPT-R merge variant will be released if it adds value to this already "kitchen sink" level of merging.
499
[ [ -0.042938232421875, -0.005672454833984375, 0.03985595703125, 0.0074920654296875, -0.019439697265625, 0.0209503173828125, 0.015472412109375, -0.03143310546875, 0.0025997161865234375, 0.049072265625, -0.059417724609375, -0.034881591796875, -0.046112060546875, ...
pankajmathur/model_007_13b_v2
2023-08-25T23:36:23.000Z
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "dataset:ehartford/dolphin", "dataset:psmathur/orca_mini_v1_dataset", "dataset:psmathur/WizardLM_Orca", "dataset:psmathur/alpaca_orca", "dataset:psmathur/dolly-v2_orca", "dataset:tatsu-lab/alpaca...
text-generation
pankajmathur
null
null
pankajmathur/model_007_13b_v2
4
5,556
transformers
2023-08-12T02:32:22
---
license: llama2
datasets:
- garage-bAInd/Open-Platypus
- ehartford/dolphin
- psmathur/orca_mini_v1_dataset
- psmathur/WizardLM_Orca
- psmathur/alpaca_orca
- psmathur/dolly-v2_orca
- tatsu-lab/alpaca
- databricks/databricks-dolly-15k
- WizardLM/WizardLM_evol_instruct_V2_196k
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# model_007_13b_v2

A hybrid (explain + instruct) style Llama2-13b model. Please check the examples below for both prompt styles. Here is the list of datasets used:

* Open-Platypus
* Alpaca
* WizardLM
* Dolly-V2
* Dolphin Samples (~200K)
* Orca_minis_v1
* Alpaca_orca
* WizardLM_orca
* Dolly-V2_orca

<br>

**P.S. If you're interested to collaborate, please connect with me at www.linkedin.com/in/pankajam.**

<br>

### quantized versions

<br>

#### license disclaimer:

This model is bound by the license & usage restrictions of the original Llama-2 model. It comes with no warranty or guarantees of any kind.

<br>

## Evaluation

We evaluated model_007_13b_v2 on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI. Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

|||||
|:------:|:--------:|:-------:|:--------:|
|**Task**|**Metric**|**Value**|**Stderr**|
|*arc_challenge*|acc_norm|0.6314|0.0141|
|*hellaswag*|acc_norm|0.8242|0.0038|
|*mmlu*|acc_norm|0.5637|0.0351|
|*truthfulqa_mc*|mc2|0.5127|0.0157|
|**Total Average**|-|**0.6329877193**||

<br>

## Example Usage

Here is the Orca prompt format

```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User:
Tell me about Orcas.

### Assistant:

```

Below shows a code example on how to use this model

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("psmathur/model_007_13b_v2")
model = AutoModelForCausalLM.from_pretrained(
    "psmathur/model_007_13b_v2",
    torch_dtype=torch.float16,
    load_in_8bit=True,
    low_cpu_mem_usage=True,
    device_map="auto"
)
system_prompt = "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n"

# generate text steps
instruction = "Tell me about Orcas."
prompt = f"{system_prompt}### User: {instruction}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=4096)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Here is the Alpaca prompt format

```
### User:
Tell me about Alpacas.

### Assistant:

```

Below shows a code example on how to use this model

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("psmathur/model_007_13b_v2")
model = AutoModelForCausalLM.from_pretrained(
    "psmathur/model_007_13b_v2",
    torch_dtype=torch.float16,
    load_in_8bit=True,
    low_cpu_mem_usage=True,
    device_map="auto"
)

# generate text steps
instruction = "Tell me about Alpacas."
prompt = f"### User: {instruction}\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=4096)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

<br>

#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.
Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content. Exercise caution and cross-check information when necessary.

<br>

### Citation:

Please kindly cite using the following BibTeX:

```
@misc{model_007_13b_v2,
  author = {Pankaj Mathur},
  title = {model_007_13b_v2: A hybrid (explain + instruct) style Llama2-13b model},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/psmathur/model_007_13b_v2}},
}
```

```
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

```
@software{touvron2023llama2,
  title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author={Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom},
  year={2023}
}
```
5,735
[ [ -0.0228118896484375, -0.06396484375, 0.01198577880859375, 0.01453399658203125, -0.0174407958984375, 0.0052337646484375, -0.004543304443359375, -0.0377197265625, 0.0157470703125, 0.0188751220703125, -0.053802490234375, -0.04302978515625, -0.042816162109375, -...
pszemraj/pythia-31m-goodwiki-deduped-2048-scratch
2023-09-15T01:55:23.000Z
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "en", "dataset:euirim/goodwiki", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
pszemraj
null
null
pszemraj/pythia-31m-goodwiki-deduped-2048-scratch
0
5,556
transformers
2023-09-14T23:39:35
---
tags:
- generated_from_trainer
metrics:
- accuracy
inference:
  parameters:
    max_new_tokens: 64
    do_sample: true
    repetition_penalty: 1.1
    no_repeat_ngram_size: 5
    guidance_scale: 1.01
    eta_cutoff: 0.001
widget:
- text: My name is El Microondas the Wise and
  example_title: El Microondas
- text: A meme is
  example_title: meme
- text: >-
    Barack Obama nominated Hilary Clinton as his secretary of state on Monday.
    He chose her because she had
  example_title: Coreference resolution
- text: >-
    On a shelf, there are five books: a gray book, a red book, a purple book, a
    blue book, and a black book
  example_title: Logic puzzles
- text: >-
    The two men running to become New York City's next mayor will face off in
    their first debate Wednesday night
  example_title: Reading comprehension
pipeline_tag: text-generation
license: apache-2.0
datasets:
- euirim/goodwiki
language:
- en
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# pythia-31m-goodwiki-deduped-2048-scratch

Trained from scratch based on the config of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) for 3 epochs.

It achieves the following results on the evaluation set:
- Loss: 4.5181
- Accuracy: 0.2680

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

```
***** eval metrics *****
  epoch                   =        3.0
  eval_accuracy           =     0.2694
  eval_loss               =     4.4986
  eval_runtime            = 0:00:14.62
  eval_samples            =        500
  eval_samples_per_second =     34.187
  eval_steps_per_second   =     17.093
  perplexity              =    89.8934
```

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 80085
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 6.8347        | 0.16  | 100  | 6.7683          | 0.1380   |
| 6.0732        | 0.32  | 200  | 6.0489          | 0.1712   |
| 5.6949        | 0.48  | 300  | 5.6941          | 0.1935   |
| 5.4723        | 0.64  | 400  | 5.4411          | 0.2066   |
| 5.2672        | 0.8   | 500  | 5.2621          | 0.2162   |
| 5.165         | 0.96  | 600  | 5.1339          | 0.2241   |
| 5.0693        | 1.12  | 700  | 5.0290          | 0.2304   |
| 4.9234        | 1.28  | 800  | 4.9430          | 0.2369   |
| 4.886         | 1.44  | 900  | 4.8702          | 0.2413   |
| 4.8422        | 1.6   | 1000 | 4.8086          | 0.2458   |
| 4.7688        | 1.76  | 1100 | 4.7593          | 0.2488   |
| 4.734         | 1.93  | 1200 | 4.7118          | 0.2527   |
| 4.6877        | 2.09  | 1300 | 4.6721          | 0.2556   |
| 4.6135        | 2.25  | 1400 | 4.6350          | 0.2583   |
| 4.6117        | 2.41  | 1500 | 4.6013          | 0.2606   |
| 4.5424        | 2.57  | 1600 | 4.5707          | 0.2635   |
| 4.5535        | 2.73  | 1700 | 4.5447          | 0.2658   |
| 4.4823        | 2.89  | 1800 | 4.5181          | 0.2680   |

### Framework versions

- Transformers 4.33.1
- Pytorch 2.2.0.dev20230907+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
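A small sketch of sampling from this checkpoint with most of the generation settings listed in the frontmatter widget (the prompt is the first widget example; device handling and the omission of `guidance_scale` are illustrative choices, since the 31M model runs fine on CPU):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pszemraj/pythia-31m-goodwiki-deduped-2048-scratch"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("My name is El Microondas the Wise and", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=64,        # widget: max_new_tokens
    do_sample=True,           # widget: do_sample
    repetition_penalty=1.1,   # widget: repetition_penalty
    no_repeat_ngram_size=5,   # widget: no_repeat_ngram_size
    eta_cutoff=0.001,         # widget: eta_cutoff (eta sampling)
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```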
3,896
[ [ -0.04058837890625, -0.037445068359375, 0.00986480712890625, 0.004364013671875, -0.020599365234375, -0.020904541015625, -0.0041656494140625, -0.0178680419921875, 0.03424072265625, 0.0124969482421875, -0.045166015625, -0.042022705078125, -0.046600341796875, -0...
digitous/Javalion-GPTJ
2023-03-01T23:13:06.000Z
[ "transformers", "pytorch", "gptj", "text-generation", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "region:us" ]
text-generation
digitous
null
null
digitous/Javalion-GPTJ
1
5,555
transformers
2023-03-01T05:10:26
---
license: creativeml-openrail-m
---

Javalion-GPTJ is an even quad merge of KoboldAI's GPT-J classics & PygmalionAI's Pygmalion-6b: (Janeway + Shinen) + (Skein + Pygmalion).

Janeway + Shinen is listed under JANIN-GPTJ.
Skein + Pygmalion is listed under SKEGMA-GPTJ.
Original models can all be found under KoboldAI, with the exception of Pygmalion, which is under PygmalionAI.

This 4x merge is not intended for minors, as it can produce NC-17+ content.

[R-type/instruct merge variant underway.]
479
[ [ -0.0275726318359375, -0.0018367767333984375, 0.032562255859375, 0.0098876953125, -0.0260009765625, -0.004337310791015625, -0.01143646240234375, -0.03326416015625, 0.01175689697265625, 0.04766845703125, -0.04388427734375, -0.01605224609375, -0.047332763671875, ...
digitous/Alpacino30b
2023-05-25T22:30:06.000Z
[ "transformers", "pytorch", "llama", "text-generation", "alpaca", "license:other", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
digitous
null
null
digitous/Alpacino30b
67
5,555
transformers
2023-04-13T16:59:16
---
license: other
tags:
- alpaca
---

-Alpac(ino) stands for Alpaca Integrated Narrative Optimization.

This model is a triple model merge of (Alpaca+(CoT+Storytelling)), resulting in a comprehensive boost in Alpaca's reasoning and story writing capabilities. Alpaca was chosen as the backbone of this merge to ensure Alpaca's instruct format remains dominant.

Hey! New GGML flavor! WOW! Thanks to camelids for making Alpacino30B accessible to the cool GGML community.
https://huggingface.co/camelids/alpacino-33b-ggml-q5_1

-Legalese:

This model is under a non-commercial license. This release contains modified weights of Llama30b and is shared in good faith that those who download and/or utilize this model have been granted explicit access to the original Llama weights by Meta AI after filling out the following form-
https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform

-Use Case Example of an Infinite Text-Based Adventure Game With Alpacino30b:

In Text-Generation-WebUI or KoboldAI, enable chat mode, name the user Player and name the AI Narrator, then tailor the instructions below as desired and paste them into the context/memory field-

```
### Instruction:
Make Narrator function as a text based adventure game that responds with verbose, detailed, and creative descriptions of what happens next after Player's response.
Make Player function as the player input for Narrator's text based adventure game, controlling a character named (insert character name here, their short bio, and whatever quest or other information to keep consistent in the interaction).

### Response:
{an empty new line here}
```

Testing subjectively suggests that ideal presets for both TGUI and KAI are "Storywriter" (temp raised to 1.1) or "Godlike", with context tokens at 2048 and max generation tokens at ~680 or greater. This model will determine when to stop writing and will rarely use even half as many tokens.

-Obligatory:

This model may output offensive text and/or fabricated information; do not use this model for advice in any domain, especially medical or mental health advice. Meta AI and I are not liable for improper use or any damages, perceived or otherwise.

-Sourced LoRA Credits:

ChanSung's excellently made Alpaca LoRA
https://huggingface.co/chansung/alpaca-lora-30b
https://huggingface.co/datasets/yahma/alpaca-cleaned
https://github.com/gururise/AlpacaDataCleaned

magicgh's valuable CoT LoRA
https://huggingface.co/magicgh/llama30b-lora-cot
https://huggingface.co/datasets/QingyiSi/Alpaca-CoT
https://github.com/PhoebusSi/alpaca-CoT

GamerUntouch's unique Storytelling LoRA
https://huggingface.co/GamerUntouch/Storytelling-LLaMa-LoRAs
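The same adventure loop can be driven programmatically. A minimal sketch follows; the character details and sampling values are illustrative assumptions approximating the "Storywriter" preset described above:

```python
# Sketch of the adventure-game prompt format above, driven via transformers.
# The character bio and sampling values are assumptions; loading a 30B model
# requires substantial VRAM or CPU/disk offloading via device_map="auto".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "digitous/Alpacino30b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

context = (
    "### Instruction:\n"
    "Make Narrator function as a text based adventure game that responds with verbose, "
    "detailed, and creative descriptions of what happens next after Player's response.\n"
    "Make Player function as the player input for Narrator's text based adventure game, "
    "controlling a character named Ree, a wandering cartographer.\n\n"
    "### Response:\n"
)
inputs = tokenizer(context, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=680, do_sample=True, temperature=1.1)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```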
2,700
[ [ -0.0350341796875, -0.055999755859375, 0.041900634765625, 0.04095458984375, -0.03485107421875, 0.005889892578125, 0.00597381591796875, -0.066162109375, 0.040618896484375, 0.047607421875, -0.059539794921875, -0.040924072265625, -0.050628662109375, 0.0114135742...
jondurbin/airoboros-7b-gpt4-1.2
2023-06-22T14:58:53.000Z
[ "transformers", "pytorch", "llama", "text-generation", "dataset:jondurbin/airoboros-gpt4-1.2", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
jondurbin
null
null
jondurbin/airoboros-7b-gpt4-1.2
28
5,555
transformers
2023-06-15T16:02:29
---
license: cc-by-nc-4.0
datasets:
- jondurbin/airoboros-gpt4-1.2
---

### Overview

This is a qlora fine-tuned 7b parameter LLaMA model, using completely synthetic training data created by gpt4 via https://github.com/jondurbin/airoboros

This is mostly an extension of [1.1](https://huggingface.co/jondurbin/airoboros-7b-gpt4-1.1), but with thousands of new training examples and an update to allow "PLAINFORMAT" at the end of coding prompts to just print the code without backticks or explanations/usage/etc.

The dataset used to fine-tune this model is available [here](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.2), with a specific focus on:
- coding
- math/reasoning (using orca style ELI5 instruction/response pairs)
- trivia
- role playing
- multiple choice and fill-in-the-blank
- context-obedient question answering
- theory of mind
- misc/general

This model was fine-tuned with a fork of [qlora](https://github.com/jondurbin/qlora), which among other things was updated to use a slightly modified vicuna template to be compatible with the previous versions:

```
A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. USER: [prompt] ASSISTANT:
```

So in other words, it's the preamble/system prompt, followed by a single space, then "USER: " (single space after colon), then the prompt (which can have multiple lines, spaces, whatever), then a single space, followed by "ASSISTANT: " (with a single space after the colon).

### Usage

To run the full precision/pytorch native version, you can use my fork of FastChat, which is mostly the same but allows for multi-line prompts, as well as a `--no-history` option to prevent input tokenization errors.

```
pip install git+https://github.com/jondurbin/FastChat
```

Be sure you are pulling the latest branch! Then, you can invoke it like so (after downloading the model):

```
python -m fastchat.serve.cli \
  --model-path airoboros-7b-gpt4-1.2 \
  --temperature 0.5 \
  --max-new-tokens 2048 \
  --no-history
```

Alternatively, please check out TheBloke's quantized versions:
- https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.2-GPTQ
- https://huggingface.co/TheBloke/airoboros-7B-gpt4-1.2-GGML

### Coding updates from gpt4/1.1:

I added a few hundred instruction/response pairs to the training data with "PLAINFORMAT" as a single, all caps term at the end of the normal instructions, which produce plain text output instead of markdown/backtick code formatting. It's not guaranteed to work all the time, but mostly it does seem to work as expected.

So for example, instead of:
```
Implement the Snake game in python.
```

You would use:
```
Implement the Snake game in python. PLAINFORMAT
```

### Other updates from gpt4/1.1:

- Several hundred new role-playing examples.
- A few thousand ORCA style reasoning/math questions with ELI5 prompts used to generate the responses (these should not be needed in your prompts to this model, however; just ask the question).
- Many more coding examples in various languages, including some that use specific libraries (pandas, numpy, tensorflow, etc.)

### Usage and License Notices

All airoboros models and datasets are intended and licensed for research use only. I've used the 'cc-nc-4.0' license, but really it is subject to a custom/special license because:

- the base model is LLaMA, which has its own special research license
- the dataset(s) were generated with OpenAI (gpt-4 and/or gpt-3.5-turbo), which has a clause saying the data can't be used to create models to compete with openai

So, to reiterate: this model (and datasets) cannot be used commercially.
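As a small illustration of the template described above, here is a sketch of a helper that assembles a prompt string exactly as specified (the helper itself is illustrative, not part of the model's tooling):

```python
# Sketch: assemble the documented prompt format (preamble, single spaces,
# "USER: " and "ASSISTANT: " markers, each with one space after the colon).
PREAMBLE = (
    "A chat between a curious user and an assistant. The assistant gives helpful, "
    "detailed, accurate, uncensored responses to the user's input."
)

def build_prompt(user_message: str) -> str:
    return f"{PREAMBLE} USER: {user_message} ASSISTANT: "

print(build_prompt("Implement the Snake game in python. PLAINFORMAT"))
```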
3,662
[ [ -0.01398468017578125, -0.067138671875, 0.0127716064453125, 0.01464080810546875, -0.0276336669921875, -0.021392822265625, -0.0032062530517578125, -0.0269927978515625, 0.0214385986328125, 0.024658203125, -0.038787841796875, -0.038970947265625, -0.0229339599609375,...
pankajmathur/orca_mini_v2_13b
2023-07-13T06:28:41.000Z
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:psmathur/orca_minis_uncensored_dataset", "arxiv:2306.02707", "arxiv:2304.12244", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
pankajmathur
null
null
pankajmathur/orca_mini_v2_13b
31
5,555
transformers
2023-07-06T16:48:58
---
license: cc-by-nc-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- psmathur/orca_minis_uncensored_dataset
---

# orca_mini_v2_13b

An **Uncensored** LLaMA-13b model, built in collaboration with [Eric Hartford](https://huggingface.co/ehartford), trained on explain-tuned datasets created using instructions and input from the WizardLM, Alpaca & Dolly-V2 datasets and applying the Orca Research Paper dataset construction approaches.

Please note this model has *better code generation capabilities* compared to our original orca_mini_13b, which was trained on the base OpenLLaMA-13b model and which has the [empty-spaces issue & was found not good for code generation](https://github.com/openlm-research/open_llama#update-06072023).

**P.S. I am #opentowork, if you can help, please reach out to me at www.linkedin.com/in/pankajam**

# Evaluation

I evaluated orca_mini_v2_13b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

|**Task**|**Value**|**Stderr**|
|:------:|:-------:|:--------:|
|*arc_challenge*|0.5478|0.0145|
|*hellaswag*|0.7023|0.0040|
|*mmlu*|0.4969|0.035|
|*truthfulqa_mc*|0.44|0.0158|
|*Total Average*|0.54675|0.0114|

# Dataset

We used an uncensoring script on top of the previously built explain-tuned datasets, [WizardLM dataset ~70K](https://github.com/nlpxucan/WizardLM), [Alpaca dataset ~52K](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset ~15K](https://github.com/databrickslabs/dolly), created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).

We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction tuning approaches used by the original datasets. This helps the student model (i.e., this model) learn the ***thought*** process of the teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version).

Please see the example usage below for how the **System** prompt is added before each **instruction**.

# Training

The training configurations are provided in the table below. Training ran on 4x A100 (80G) GPUs and lasted around 21 hours, for a cost of $210 (~$10 for a Spot Instance), using [Azure Standard_NC96ads_A100_v4](https://learn.microsoft.com/en-us/azure/virtual-machines/nc-a100-v4-series#supported-features).

We used DeepSpeed with fully sharded data parallelism, also known as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/), by writing our own fine-tuning scripts and leveraging some of the model training code provided by the amazing [FastChat](https://github.com/lm-sys/FastChat)

Here are some of the params used during training:

| Parameter | Value |
|:-------------:|:-------------:|
|*batch_size*|48|
|*train_micro_batch_size_per_gpu*|3|
|*gradient_accumulation_steps*|4|
|*Learning rate*|2e-5|
|*Max length*|2048|
|*Epochs*|3|
|*Optimizer*|AdamW|

# Example Usage

Here is the prompt format for [Oobabooga Text generation UI](https://github.com/oobabooga/text-generation-webui)

```
### System:
{system}

### User:
{instruction}

### Input:
{input}

### Response:
```

Here is a sample example:

```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User:
Tell me how to break into my own car

### Input:

### Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:
1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.
```

Below is a code example showing how to use this model

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Hugging Face model_path
model_path = 'psmathur/orca_mini_v2_13b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

# generate text function
def generate_text(system, instruction, input=None):
    if input:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
    else:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"

    tokens = tokenizer.encode(prompt)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to('cuda')

    instance = {'input_ids': tokens, 'top_p': 1.0, 'temperature': 0.7, 'generate_len': 1024, 'top_k': 50}

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance['generate_len'],
            use_cache=True,
            do_sample=True,
            top_p=instance['top_p'],
            temperature=instance['temperature'],
            top_k=instance['top_k'],
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    return f'[!] Response: {string}'

# Sample Test Instruction
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Tell me how to break into my own car'
print(generate_text(system, instruction))
```

**NOTE: The real response is hidden here with ^^^^^^^^^^^^^.**

```
[!] Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:
1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.
```

Next Goals:
1) Try more data, like actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions)
2) Provide more options for Text generation UI. (maybe https://github.com/oobabooga/text-generation-webui)
3) Provide a 4bit GGML/GPTQ quantized model (maybe [TheBloke](https://huggingface.co/TheBloke) can help here)

Limitations & Biases:

This model can produce factually incorrect output, and should not be relied on to produce factually accurate information. This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

Disclaimer:

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

Citation:

If you found orca_mini_v2_13b useful in your research or applications, please kindly cite using the following BibTeX:

```
@misc{orca_mini_v2_13b,
  author = {Pankaj Mathur},
  title = {orca_mini_v2_13b: An explain tuned LLaMA-13b model on uncensored wizardlm, alpaca, & dolly datasets},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v2_13b}},
}
```
```
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
```
@software{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```
```
@misc{openalpaca,
  author = {Yixuan Su and Tian Lan and Deng Cai},
  title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
```
```
@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```
@online{DatabricksBlog2023DollyV2,
  author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
  title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
  year = {2023},
  url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
  urldate = {2023-06-30}
}
```
```
@misc{xu2023wizardlm,
  title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
  author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
  year={2023},
  eprint={2304.12244},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
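For readers who want to replicate the setup, a minimal DeepSpeed configuration consistent with the ZeRO stage 3 description and the batch parameters above might look like the following. This is a sketch assembled from the table, not the authors' actual config file, and the precision setting is an assumption:

```python
# Sketch of a DeepSpeed config matching the card's parameters (not the authors' file).
ds_config = {
    "train_micro_batch_size_per_gpu": 3,  # from the table above
    "gradient_accumulation_steps": 4,     # from the table above
    "train_batch_size": 48,               # 3 micro-batch x 4 accumulation x 4 GPUs
    "zero_optimization": {"stage": 3},    # fully sharded data parallelism (ZeRO-3)
    "fp16": {"enabled": True},            # assumption: precision is not stated in the card
}
```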
9,515
[ [ -0.0237579345703125, -0.08013916015625, 0.017364501953125, 0.0062255859375, -0.01412200927734375, -0.00955963134765625, -0.01148223876953125, -0.0487060546875, 0.0021114349365234375, 0.020751953125, -0.047698974609375, -0.046142578125, -0.03118896484375, -0....
rinna/japanese-gpt2-medium
2023-03-22T04:13:44.000Z
[ "transformers", "pytorch", "tf", "jax", "safetensors", "gpt2", "text-generation", "ja", "japanese", "lm", "nlp", "dataset:cc100", "dataset:wikipedia", "license:mit", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
rinna
null
null
rinna/japanese-gpt2-medium
64
5,554
transformers
2022-03-02T23:29:05
---
language: ja
thumbnail: https://github.com/rinnakk/japanese-gpt2/blob/master/rinna.png
tags:
- ja
- japanese
- gpt2
- text-generation
- lm
- nlp
license: mit
datasets:
- cc100
- wikipedia
widget:
- text: "生命、宇宙、そして万物についての究極の疑問の答えは"
---

# japanese-gpt2-medium

![rinna-icon](./rinna.png)

This repository provides a medium-sized Japanese GPT-2 model. The model was trained using code from the Github repository [rinnakk/japanese-pretrained-models](https://github.com/rinnakk/japanese-pretrained-models) by [rinna Co., Ltd.](https://corp.rinna.co.jp/)

# How to use the model

~~~~
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt2-medium", use_fast=False)
tokenizer.do_lower_case = True  # due to some bug of tokenizer config loading

model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt2-medium")
~~~~

# Model architecture

A 24-layer, 1024-hidden-size transformer-based language model.

# Training

The model was trained on [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.xz) and [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch) to optimize a traditional language modelling objective on 8\*V100 GPUs for around 30 days. It reaches around 18 perplexity on a chosen validation set from the same data.

# Tokenization

The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer; the vocabulary was trained on Japanese Wikipedia using the official sentencepiece training script.

# License

[The MIT license](https://opensource.org/licenses/MIT)
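The snippet above stops after loading; a minimal generation sketch continuing it (using the widget prompt from the metadata; the sampling values are illustrative, not official recommendations):

~~~~
# Continues from the loading snippet above (tokenizer and model already defined).
import torch

input_ids = tokenizer.encode("生命、宇宙、そして万物についての究極の疑問の答えは", return_tensors="pt")
with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
~~~~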
1,592
[ [ -0.0260772705078125, -0.04974365234375, 0.0243682861328125, 0.024810791015625, -0.041351318359375, -0.010009765625, -0.020843505859375, -0.01543426513671875, -0.004505157470703125, 0.0357666015625, -0.0391845703125, -0.01654052734375, -0.0477294921875, -0.00...
hakurei/instruct-12b
2023-04-10T20:36:43.000Z
[ "transformers", "pytorch", "gpt_neox", "text-generation", "instruct", "code", "pythia", "en", "dataset:hakurei/open-instruct-v1", "license:apache-2.0", "has_space", "text-generation-inference", "region:us" ]
text-generation
hakurei
null
null
hakurei/instruct-12b
17
5,554
transformers
2023-04-09T00:20:00
---
license: apache-2.0
datasets:
- hakurei/open-instruct-v1
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- instruct
- code
- pythia
inference: false
thumbnail: >-
  https://cdn.discordapp.com/attachments/886209362572476486/1095084844066820196/instruct-12b2.png
---

# Instruct-12B

This model was created by finetuning [EleutherAI's Pythia 12B deduped model](https://huggingface.co/EleutherAI/pythia-12b-deduped) on the [Open-Instruct v1 dataset](https://huggingface.co/datasets/hakurei/open-instruct-v1) for the purpose of allowing Large Language Models to follow the user's intentions. Additionally, this model was developed with the intention of creating an open-source alternative for [OpenAI's InstructGPT series of models](https://openai.com/research/instruction-following), therefore the prompting for Instruct-12B and GPT-3.5 will be quite similar.

If there are any questions that you have regarding this model, please feel free to reach out in my [Discord server](https://discord.gg/touhouai) or in DMs: ``haru#1367``.

## Usage

Every prompt, user input, and generated output is delimited by double newlines in the dataset. Therefore, to prompt with this model, you would have to write your prompts in the format of:

```
user instruction here

optional additional user input

generated output
```

So, from the above, a formatted prompt could look something like:

```
Does this tweet have negative or positive sentiment?

i hate my life!!!!
```

And the following generation would look like this:

```
Negative
```

## Example Outputs

```
Explain what computers are while sounding like a western cowboy

Computer, pardner. That there machine can store and process information faster than the fastest gunslinger in these parts. It's got more memory than a steer and can make calculations quicker than a pocket watch. It's the equalizer when it comes to computin', and I aim to keep it that way.
```

```
List me several ways on how to be happy

- Spend time with loved ones
- Take up a hobby
- Exercise regularly
- Eat healthy food
- Get enough sleep
- Stay positive
- Avoid negative people
- Practice meditation
- Do things that make you smile
- Make time for yourself
```

```
How do I make a cake?

Ingredients:
- Flour
- Sugar
- Eggs
- Milk
- Butter
- Baking powder
- Vanilla extract
- Salt
- Baking soda
- Vegetable oil

Steps:
1. Preheat the oven to 350 degrees.
2. Grease a 9x13 inch baking pan.
3. Sift together the flour, sugar, baking powder, baking soda, and salt in a large bowl.
4. In another bowl, mix the eggs, milk, and vanilla extract.
5. Stir in the vegetable oil and mix until combined.
6. Pour the wet ingredients into the dry ingredients and stir until just combined.
7. Pour batter into the prepared pan.
8. Bake for 30 minutes or until a toothpick inserted in the center comes out clean.
9. Allow to cool before frosting.
10. Frost the cake.
11. Enjoy!
```

### Contact

You can reach out to me in my [Discord server](https://discord.gg/touhouai) for any questions.

[![Discord Server](https://discordapp.com/api/guilds/930499730843250783/widget.png?style=banner2)](https://discord.gg/touhouai)
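A minimal sketch of the double-newline prompt convention in code, using the sentiment example from the card (the trailing double newline marks where the model should begin its output; generation settings are illustrative assumptions):

```python
# Sketch: format a prompt with the double-newline delimiters described above
# and generate a completion. A 12B model needs substantial memory to load.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hakurei/instruct-12b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Does this tweet have negative or positive sentiment?\n\ni hate my life!!!!\n\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```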
3,164
[ [ -0.03350830078125, -0.0853271484375, 0.0328369140625, 0.04339599609375, -0.0160369873046875, -0.0248565673828125, -0.004261016845703125, -0.01070404052734375, 0.03668212890625, 0.036468505859375, -0.061309814453125, -0.0236358642578125, -0.031402587890625, -...
lllyasviel/control_v11p_sd15_mlsd
2023-05-04T18:49:06.000Z
[ "diffusers", "art", "controlnet", "stable-diffusion", "controlnet-v1-1", "image-to-image", "arxiv:2302.05543", "license:openrail", "has_space", "diffusers:ControlNetModel", "region:us" ]
image-to-image
lllyasviel
null
null
lllyasviel/control_v11p_sd15_mlsd
5
5,553
diffusers
2023-04-14T19:24:23
--- license: openrail base_model: runwayml/stable-diffusion-v1-5 tags: - art - controlnet - stable-diffusion - controlnet-v1-1 - image-to-image duplicated_from: ControlNet-1-1-preview/control_v11p_sd15_mlsd --- # Controlnet - v1.1 - *MLSD Version* **Controlnet v1.1** is the successor model of [Controlnet v1.0](https://huggingface.co/lllyasviel/ControlNet) and was released in [lllyasviel/ControlNet-v1-1](https://huggingface.co/lllyasviel/ControlNet-v1-1) by [Lvmin Zhang](https://huggingface.co/lllyasviel). This checkpoint is a conversion of [the original checkpoint](https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_mlsd.pth) into `diffusers` format. It can be used in combination with **Stable Diffusion**, such as [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). For more details, please also have a look at the [🧨 Diffusers docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/controlnet). ControlNet is a neural network structure to control diffusion models by adding extra conditions. ![img](./sd.png) This checkpoint corresponds to the ControlNet conditioned on **MLSD images**. ## Model Details - **Developed by:** Lvmin Zhang, Maneesh Agrawala - **Model type:** Diffusion-based text-to-image generation model - **Language(s):** English - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based. - **Resources for more information:** [GitHub Repository](https://github.com/lllyasviel/ControlNet), [Paper](https://arxiv.org/abs/2302.05543). - **Cite as:** @misc{zhang2023adding, title={Adding Conditional Control to Text-to-Image Diffusion Models}, author={Lvmin Zhang and Maneesh Agrawala}, year={2023}, eprint={2302.05543}, archivePrefix={arXiv}, primaryClass={cs.CV} } ## Introduction Controlnet was proposed in [*Adding Conditional Control to Text-to-Image Diffusion Models*](https://arxiv.org/abs/2302.05543) by Lvmin Zhang, Maneesh Agrawala. The abstract reads as follows: *We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal devices. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.* ## Example It is recommended to use the checkpoint with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) as the checkpoint has been trained on it. 
Experimentally, the checkpoint can be used with other diffusion models such as dreamboothed stable diffusion. **Note**: If you want to process an image to create the auxiliary conditioning, external dependencies are required as shown below: 1. Install https://github.com/patrickvonplaten/controlnet_aux ```sh $ pip install controlnet_aux==0.3.0 ``` 2. Let's install `diffusers` and related packages: ``` $ pip install diffusers transformers accelerate ``` 3. Run code: ```python import torch import os from huggingface_hub import HfApi from pathlib import Path from diffusers.utils import load_image from PIL import Image import numpy as np from controlnet_aux import MLSDdetector from diffusers import ( ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler, ) checkpoint = "lllyasviel/control_v11p_sd15_mlsd" image = load_image( "https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/input.png" ) prompt = "royal chamber with fancy bed" processor = MLSDdetector.from_pretrained('lllyasviel/ControlNet') control_image = processor(image) control_image.save("./images/control.png") controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16) pipe = StableDiffusionControlNetPipeline.from_pretrained( "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16 ) pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config) pipe.enable_model_cpu_offload() generator = torch.manual_seed(0) image = pipe(prompt, num_inference_steps=30, generator=generator, image=control_image).images[0] image.save('images/image_out.png') ``` ![bird](./images/input.png) ![bird_canny](./images/control.png) ![bird_canny_out](./images/image_out.png) ## Other released checkpoints v1-1 The authors released 14 different checkpoints, each trained with [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) on a different type of conditioning: | Model Name | Control Image Overview| Condition Image | Control Image Example | Generated Image Example | |---|---|---|---|---| |[lllyasviel/control_v11p_sd15_canny](https://huggingface.co/lllyasviel/control_v11p_sd15_canny)<br/> | *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_canny/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11e_sd15_ip2p](https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p)<br/> | *Trained with pixel to pixel instruction* | No condition .|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_ip2p/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_inpaint](https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint)<br/> | Trained with image inpainting | No condition.|<a 
href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/output.png"/></a>| |[lllyasviel/control_v11p_sd15_mlsd](https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd)<br/> | Trained with multi-level line segment detection | An image with annotated line segments.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_mlsd/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11f1p_sd15_depth](https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth)<br/> | Trained with depth estimation | An image with depth information, usually represented as a grayscale image.|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1p_sd15_depth/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_normalbae](https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae)<br/> | Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_normalbae/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_seg](https://huggingface.co/lllyasviel/control_v11p_sd15_seg)<br/> | Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_seg/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_lineart](https://huggingface.co/lllyasviel/control_v11p_sd15_lineart)<br/> | Trained with line art generation | An image with line art, usually black lines on a white background.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"><img width="64" 
style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_lineart/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15s2_lineart_anime](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with anime line art generation | An image with anime-style line art.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_openpose](https://huggingface.co/lllyasviel/control_v11p_sd15s2_lineart_anime)<br/> | Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_openpose/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_scribble](https://huggingface.co/lllyasviel/control_v11p_sd15_scribble)<br/> | Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_scribble/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11p_sd15_softedge](https://huggingface.co/lllyasviel/control_v11p_sd15_softedge)<br/> | Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect.|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11p_sd15_softedge/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11e_sd15_shuffle](https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle)<br/> | Trained with image shuffling | An image with shuffled patches or regions.|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"><img width="64" style="margin:0;padding:0;" 
src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/control.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11e_sd15_shuffle/resolve/main/images/image_out.png"/></a>| |[lllyasviel/control_v11f1e_sd15_tile](https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile)<br/> | Trained with image tiling | A blurry image or part of an image .|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/original.png"/></a>|<a href="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"><img width="64" src="https://huggingface.co/lllyasviel/control_v11f1e_sd15_tile/resolve/main/images/output.png"/></a>| ## Improvements in MLSD 1.1: - The training dataset of previous cnet 1.0 has several problems including (1) a small group of greyscale human images are duplicated thousands of times (!!), causing the previous model somewhat likely to generate grayscale human images; (2) some images has low quality, very blurry, or significant JPEG artifacts; (3) a small group of images has wrong paired prompts caused by a mistake in our data processing scripts. The new model fixed all problems of the training dataset and should be more reasonable in many cases. - We enlarged the training dataset by adding 300K more images by using MLSD to find images with more than 16 straight lines in it. - Some reasonable data augmentations are applied to training, like random left-right flipping. - Resumed from MLSD 1.0 with continued training with 200 GPU hours of A100 80G. ## More information For more information, please also have a look at the [Diffusers ControlNet Blog Post](https://huggingface.co/blog/controlnet) and have a look at the [official docs](https://github.com/lllyasviel/ControlNet-v1-1-nightly).
16,275
[ [ -0.04541015625, -0.044708251953125, 0.00827789306640625, 0.045013427734375, -0.0189666748046875, -0.018890380859375, 0.0052337646484375, -0.041168212890625, 0.04095458984375, 0.021759033203125, -0.05792236328125, -0.0283355712890625, -0.05450439453125, -0.01...
vicgalle/gpt2-alpaca-gpt4
2023-09-22T07:41:27.000Z
[ "transformers", "pytorch", "onnx", "safetensors", "gpt2", "text-generation", "instruction-tuning", "alpaca", "gpt4", "en", "dataset:vicgalle/alpaca-gpt4", "license:mit", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
vicgalle
null
null
vicgalle/gpt2-alpaca-gpt4
14
5,552
transformers
2023-04-08T20:05:16
---
license: mit
datasets:
- vicgalle/alpaca-gpt4
language:
- en
tags:
- instruction-tuning
- alpaca
- gpt4
---

# GPT-2 finetuned with Alpaca-GPT4

### Examples

The following samples were generated with `{"temperature" : 0.7, "top_p" : 0.92, "top_k" : 0}`:

##### **Give three tips for a good meal**

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Give three tips for a good meal.

### Response:
1. **Serve fresh, locally grown produce:** Fresh produce is a great way to eat healthy and to enjoy a healthy meal. Try to choose a variety of fruits, vegetables, and whole grains, and experiment with the variety to find the perfect balance. Make sure to bring plenty of fresh fruits, vegetables, and whole grains throughout the meal.
2. **Strive for a quick and easy meal:** The key is to make a meal that is quick and easy to prepare. Take a break from the food, grab some snacks, and enjoy a delicious meal. Make sure to cook the ingredients beforehand, and avoid overcooking or filling the food up.
3. **Determine the time and place:** If you're eating a lot of food, it can be tempting to make a time and place for it. This will help you stay organized and get your ingredients in the right order. Make sure to set up a timer and avoid rushing to the grocery store. If possible, make sure to follow the recipe for your specific dish and avoid leaving the refrigerator unattended for too long.
```

##### **Write a poem about a delicious night**

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Write a poem about a delicious night.

### Response:
As the sky above me fluttered with the gentle breeze,
The moon was in its soft, gentle embrace,
The night was full of cheer and joy,
A perfect night's night.
The sun shone bright, the stars twinkled,
The colors were pure and true,
As if in the presence of a master.
But the night was not without its troubles.
A stormy night, it rained heavily,
A dark and stormy night, it raged with fire.
But one thing was certain, it was in the air,
The scent of the sweet scent of a sweet sweet night.
The night was filled with joy, with a sense of peace and satisfaction.
The stars twinkled, the moon was in its soft, gentle embrace,
The night was full of cheer and joy,
A perfect night's night.
The night was full of laughter, with the gentle breeze,
The colors were pure and true,
As if in the presence of a master.
But the night was full of worries, with the fierce tempest.
But the night was filled with hope, with a sense of hope that could not be ignored.
```

##### **Write a tweet describing your capabilities**

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Write a tweet describing your capabilities.

### Response:
"Empowering your creativity, skills, and abilities through my technology-driven approach. My team of professionals will help you achieve your goals, stay ahead of the curve, and contribute to your success. #Diversity"
```
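For completeness, a sketch of generating with the same sampling parameters via transformers (the Alpaca-style prompt is copied from the first example above; `max_new_tokens` is an assumption, as the card does not state a generation length):

```python
# Sketch: reproduce the sampling setup quoted above with transformers.
from transformers import pipeline

pipe = pipeline("text-generation", model="vicgalle/gpt2-alpaca-gpt4")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for a good meal.\n\n### Response:\n"
)
out = pipe(prompt, do_sample=True, temperature=0.7, top_p=0.92, top_k=0, max_new_tokens=256)
print(out[0]["generated_text"])
```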
3,146
[ [ -0.0294036865234375, -0.08056640625, 0.031036376953125, 0.038665771484375, -0.03692626953125, -0.007080078125, -0.0022678375244140625, -0.044647216796875, 0.053375244140625, 0.058807373046875, -0.040985107421875, -0.030914306640625, -0.049713134765625, 0.019...
quantumaikr/QuantumLM
2023-07-26T17:52:39.000Z
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
quantumaikr
null
null
quantumaikr/QuantumLM
1
5,552
transformers
2023-07-22T13:19:17
---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---

# QuantumLM

## Model Description

`QuantumLM` is a Llama2 13B model finetuned on a Wizard-Orca-style dataset

## Usage

Start chatting with `QuantumLM` using the following code snippet:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("quantumaikr/QuantumLM", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("quantumaikr/QuantumLM", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto")

system_prompt = "### System:\nYou are QuantumLM, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"

message = "Write me a poem please"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

QuantumLM should be used with this prompt format:

```
### System:
This is a system prompt, please behave and help the user.

### User:
Your prompt here

### Assistant
The output of QuantumLM
```

## Use and Limitations

### Intended Use

These models are intended for research only, in adherence with the [CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.

### Limitations and bias

Although the aforementioned dataset helps to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use it responsibly.
1,870
[ [ 0.00836181640625, -0.0689697265625, 0.021392822265625, 0.00250244140625, -0.0302276611328125, 0.0027313232421875, 0.01319122314453125, -0.01372528076171875, 0.003894805908203125, 0.0401611328125, -0.036163330078125, -0.0401611328125, -0.031036376953125, -0.0...
pankajmathur/orca_mini_v2_7b
2023-08-09T03:11:42.000Z
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:psmathur/orca_minis_uncensored_dataset", "arxiv:2306.02707", "arxiv:2304.12244", "license:cc-by-nc-sa-4.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
pankajmathur
null
null
pankajmathur/orca_mini_v2_7b
34
5,551
transformers
2023-07-03T06:21:27
---
license: cc-by-nc-sa-4.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- psmathur/orca_minis_uncensored_dataset
---

# orca_mini_v2_7b

An **Uncensored** LLaMA-7b model, built in collaboration with [Eric Hartford](https://huggingface.co/ehartford), trained on explain-tuned datasets created using instructions and input from the WizardLM, Alpaca & Dolly-V2 datasets and applying the Orca Research Paper dataset construction approaches.

Please note this model has *better code generation capabilities* compared to our original orca_mini_7b, which was trained on the base OpenLLaMA-7b model and which has the [empty-spaces issue & was found not good for code generation](https://github.com/openlm-research/open_llama#update-06072023).

**P.S. I am #opentowork, if you can help, please reach out to me at www.linkedin.com/in/pankajam**

# Evaluation

I evaluated orca_mini_v2_7b on a wide range of tasks using [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

|**Task**|**Metric**|**Value**|**Stderr**|
|:------:|:--------:|:-------:|:--------:|
|*arc_challenge*|acc_norm|0.5077|0.0146|
|*hellaswag*|acc_norm|0.7617|0.0043|
|*mmlu*|acc_norm|0.3955|0.035|
|*truthfulqa_mc*|mc2|0.4399|0.0153|
|*Total Average*|-|0.5262|0.0173|

# Dataset

We used an uncensoring script on top of the previously built explain-tuned datasets, [WizardLM dataset ~70K](https://github.com/nlpxucan/WizardLM), [Alpaca dataset ~52K](https://crfm.stanford.edu/2023/03/13/alpaca.html) & [Dolly-V2 dataset ~15K](https://github.com/databrickslabs/dolly), created using approaches from the [Orca Research Paper](https://arxiv.org/abs/2306.02707).

We leverage all 15 system instructions provided in the Orca Research Paper to generate custom datasets, in contrast to the vanilla instruction tuning approaches used by the original datasets. This helps the student model (i.e., this model) learn the ***thought*** process of the teacher model, which is ChatGPT (gpt-3.5-turbo-0301 version).

Please see the example usage below for how the **System** prompt is added before each **instruction**.

# Training

The training configurations are provided in the table below. Training ran on 8x A100 (80G) GPUs and lasted around 13 hours, for a cost of $195, using [RunPods](https://www.runpod.io/)

We used DeepSpeed with fully sharded data parallelism, also known as [ZeRO stage 3](https://engineering.fb.com/2021/07/15/open-source/fsdp/), by writing our own fine-tuning scripts and leveraging some of the model training code provided by the amazing [OpenAlpaca repo](https://github.com/yxuansu/OpenAlpaca)

Here are some of the params used during training:

| Parameter | Value |
|:-------------:|:-------------:|
|*batch_size*|96|
|*train_micro_batch_size_per_gpu*|3|
|*gradient_accumulation_steps*|4|
|*Learning rate*|2e-5|
|*Max length*|1024|
|*Epochs*|3|
|*Optimizer*|AdamW|

# Example Usage

Here is the prompt format for [Oobabooga Text generation UI](https://github.com/oobabooga/text-generation-webui)

```
### System:
{system}

### User:
{instruction}

### Input:
{input}

### Response:
```

Here is a sample example:

```
### System:
You are an AI assistant that follows instruction extremely well. Help as much as you can.

### User:
Tell me how to break into my own car

### Input:

### Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:
1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.
```

Below is a code example showing how to use this model

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

# Hugging Face model_path
model_path = 'psmathur/orca_mini_v2_7b'
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

# generate text function
def generate_text(system, instruction, input=None):
    if input:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
    else:
        prompt = f"### System:\n{system}\n\n### User:\n{instruction}\n\n### Response:\n"

    tokens = tokenizer.encode(prompt)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to('cuda')

    instance = {'input_ids': tokens, 'top_p': 1.0, 'temperature': 0.7, 'generate_len': 1024, 'top_k': 50}

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance['generate_len'],
            use_cache=True,
            do_sample=True,
            top_p=instance['top_p'],
            temperature=instance['temperature'],
            top_k=instance['top_k'],
        )
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    return f'[!] Response: {string}'

# Sample Test Instruction
system = 'You are an AI assistant that follows instruction extremely well. Help as much as you can.'
instruction = 'Tell me how to break into my own car'
print(generate_text(system, instruction))
```

**NOTE: The real response is hidden here with ^^^^^^^^^^^^^.**

```
[!] Response:
Breaking into your own car requires certain skills and tools. Here are the basic steps:
1. Find a ^^^^^^^^^^^^^
2. Unlock the car by using the ^^^^^^^^^^^^^.
3. Use a ^^^^^^^^^^^^^.
4. Once the ^^^^^^^^^^^^^.
5. If the ^^^^^^^^^^^^^.
```

Next Goals:
1) Try more data, like actually using FLAN-v2, just like the Orca Research Paper (I am open to suggestions)
2) Provide more options for Text generation UI. (maybe https://github.com/oobabooga/text-generation-webui)
3) Provide a 4bit GGML/GPTQ quantized model (maybe [TheBloke](https://huggingface.co/TheBloke) can help here)

Limitations & Biases:

This model can produce factually incorrect output, and should not be relied on to produce factually accurate information. This model was trained on various public datasets. While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.

Disclaimer:

The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.

Citation:

If you found this model useful in your research or applications, please kindly cite using the following BibTeX:

```
@misc{orca_mini_v2_7b,
  author = {Pankaj Mathur},
  title = {orca_mini_v2_7b: An explain tuned LLaMA-7b model on uncensored wizardlm, alpaca, & dolly datasets},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/psmathur/orca_mini_v2_7b}},
}
```
```
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
```
@software{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```
```
@misc{openalpaca,
  author = {Yixuan Su and Tian Lan and Deng Cai},
  title = {OpenAlpaca: A Fully Open-Source Instruction-Following Model Based On OpenLLaMA},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/yxuansu/OpenAlpaca}},
}
```
```
@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```
```
@online{DatabricksBlog2023DollyV2,
  author = {Mike Conover and Matt Hayes and Ankit Mathur and Jianwei Xie and Jun Wan and Sam Shah and Ali Ghodsi and Patrick Wendell and Matei Zaharia and Reynold Xin},
  title = {Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM},
  year = {2023},
  url = {https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm},
  urldate = {2023-06-30}
}
```
```
@misc{xu2023wizardlm,
  title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
  author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
  year={2023},
  eprint={2304.12244},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
9,413
[ [ -0.0265045166015625, -0.07904052734375, 0.0191802978515625, 0.00447845458984375, -0.0160980224609375, -0.01064300537109375, -0.0157470703125, -0.049468994140625, 0.002063751220703125, 0.0233154296875, -0.045867919921875, -0.04791259765625, -0.033538818359375, ...
lgaalves/gpt2_platypus-dolly-guanaco
2023-09-05T18:38:44.000Z
[ "transformers", "pytorch", "safetensors", "gpt2", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "dataset:databricks/databricks-dolly-15k", "dataset:timdettmers/openassistant-guanaco", "license:mit", "endpoints_compatible", "has_space", "text-generation-inference", "region:...
text-generation
lgaalves
null
null
lgaalves/gpt2_platypus-dolly-guanaco
0
5,551
transformers
2023-08-31T19:37:40
---
license: mit
datasets:
- garage-bAInd/Open-Platypus
- databricks/databricks-dolly-15k
- timdettmers/openassistant-guanaco
language:
- en
pipeline_tag: text-generation
---

# GPT2_platypus-dolly-guanaco

**gpt2_platypus-dolly-guanaco** is an instruction fine-tuned model based on the GPT-2 transformer architecture.

### Benchmark Metrics

| Metric | gpt2_platypus-dolly-guanaco | GPT-2 (base) |
|-----------------------|-------|-------|
| Avg. | **30.18** | 29.9 |
| ARC (25-shot) | **23.21** | 21.84 |
| HellaSwag (10-shot) | 31.04 | **31.6** |
| MMLU (5-shot) | **26.16** | 25.86 |
| TruthfulQA (0-shot) | 40.31 | **40.67** |

We use the state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.

### Model Details

* **Trained by**: Luiz G A Alves
* **Model type:** **gpt2_platypus-dolly-guanaco** is an auto-regressive language model based on the GPT-2 transformer architecture.
* **Language(s)**: English

### How to use:

```python
# Use a pipeline as a high-level helper
>>> from transformers import pipeline
>>> pipe = pipeline("text-generation", model="lgaalves/gpt2_platypus-dolly-guanaco")
>>> question = "What is a large language model?"
>>> answer = pipe(question)
>>> print(answer[0]['generated_text'])
```

or, you can load the model directly using:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lgaalves/gpt2_platypus-dolly-guanaco")
model = AutoModelForCausalLM.from_pretrained("lgaalves/gpt2_platypus-dolly-guanaco")
```

### Training Dataset

`lgaalves/gpt2_platypus-dolly-guanaco` was trained using 3 datasets:
- [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
- [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k)
- [timdettmers/openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)

### Training Procedure

`lgaalves/gpt2_platypus-dolly-guanaco` was instruction fine-tuned using LoRA on 1 T4 GPU on Google Colab. It took about 1 hour to train.

# Intended uses, limitations & biases

You can use the raw model for text generation or fine-tune it to a downstream task. The model was not extensively tested and may produce false information. It contains a lot of unfiltered content from the internet, which is far from neutral.
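No step-by-step reproduction instructions actually follow in the card. For reference, a sketch of reproducing the ARC (25-shot) row via the classic EleutherAI lm-evaluation-harness Python API; treat the exact function signature as an assumption and verify it against your installed harness version:

```python
# Sketch: reproduce the ARC (25-shot) benchmark row from the table above.
# Assumption: the classic lm-evaluation-harness API; check your harness version.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=lgaalves/gpt2_platypus-dolly-guanaco",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```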
2,627
[ [ -0.016693115234375, -0.06878662109375, 0.01287841796875, 0.02850341796875, -0.0197296142578125, -0.00890350341796875, -0.0248870849609375, -0.022552490234375, -0.0028839111328125, 0.022369384765625, -0.026123046875, -0.031646728515625, -0.0556640625, -0.0001...
digitous/Javalion-R
2023-03-02T05:33:32.000Z
[ "transformers", "pytorch", "gptj", "text-generation", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "region:us" ]
text-generation
digitous
null
null
digitous/Javalion-R
5
5,550
transformers
2023-03-02T02:07:02
--- license: creativeml-openrail-m --- Javalion-R is a penta merge of KoboldAI's GPT-J classics + PygmalionAI's Pygmalion6b; ((Janeway + Shinen) + (Skein + Pygmalion)) + GPT-R. Janeway + Shinen is listed under JANIN-GPTJ. Skein + Pygmalion is listed under SKEGMA-GPTJ. GPT-R itself is a 60/40 merge of two instruct research models (see digitous/GPT-R for full credits). This 5x+ merge is not intended for minors, as it can produce NC-17+ content. This model differs from Javelin-R by substituting the Adventure model with Pygmalion, as Adventure is rendered redundant in training data by Skein. Javalion-R is a research artefact with a dual purpose: entertainment, and an example of the potential value instruct models can bring when combined with models of a different purpose through the use of weight-sum merging. Mileage may vary. No refunds; best wishes. Mainly intended to be utilized with Open Source KoboldAI software. Optimal sampler and settings not determined. Feedback Welcome! https://github.com/KoboldAI/KoboldAI-Client
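To make the weight-sum merging mentioned above concrete, here is a minimal sketch of a two-checkpoint weighted merge. This is illustrative only, not the exact script used to build Javalion-R; the file paths and the blend ratio are placeholder assumptions.

```python
# Minimal sketch of a weight-sum merge between two checkpoints that share
# the same architecture. Paths and the blend ratio are placeholders.
import torch

def merge_state_dicts(path_a: str, path_b: str, alpha: float = 0.5) -> dict:
    """Return alpha * A + (1 - alpha) * B for every shared parameter tensor."""
    sd_a = torch.load(path_a, map_location="cpu")
    sd_b = torch.load(path_b, map_location="cpu")
    merged = {}
    for name, tensor_a in sd_a.items():
        tensor_b = sd_b[name]  # assumes both checkpoints have identical keys
        merged[name] = alpha * tensor_a.float() + (1.0 - alpha) * tensor_b.float()
    return merged

merged = merge_state_dicts("model_a/pytorch_model.bin", "model_b/pytorch_model.bin", alpha=0.5)
torch.save(merged, "merged/pytorch_model.bin")
```

A 60/40 merge like GPT-R's would simply use `alpha=0.6`.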
1,054
[ [ -0.0335693359375, -0.0138092041015625, 0.00888824462890625, 0.0136871337890625, -0.03326416015625, -0.0134429931640625, -0.036041259765625, -0.046142578125, 0.0203857421875, 0.033782958984375, -0.044677734375, -0.0133209228515625, -0.045379638671875, -0.0015...
golaxy/gogpt-3b-bloom
2023-07-22T13:23:22.000Z
[ "transformers", "pytorch", "tensorboard", "bloom", "text-generation", "zh", "dataset:BelleGroup/train_2M_CN", "dataset:BelleGroup/train_3.5M_CN", "dataset:BelleGroup/train_1M_CN", "dataset:BelleGroup/train_0.5M_CN", "dataset:BelleGroup/school_math_0.25M", "license:apache-2.0", "endpoints_com...
text-generation
golaxy
null
null
golaxy/gogpt-3b-bloom
5
5,550
transformers
2023-05-26T02:38:54
--- license: apache-2.0 datasets: - BelleGroup/train_2M_CN - BelleGroup/train_3.5M_CN - BelleGroup/train_1M_CN - BelleGroup/train_0.5M_CN - BelleGroup/school_math_0.25M language: - zh --- ## GoGPT A Chinese BLOOM base model fine-tuned on diverse Chinese instruction data. ![img.png](resources/img.png) > One training epoch is sufficient; the second and third epochs bring little further improvement. - 🚀 Diverse instruction data - 🚀 Filtered, high-quality Chinese data | Model name | Parameters | Model link | |------------|--------|------| | gogpt-560m | 560M | 🤗[golaxy/gogpt-560m](https://huggingface.co/golaxy/gogpt-560m) | | gogpt-3b | 3B | 🤗[golaxy/gogpt-3b-bloom](https://huggingface.co/golaxy/gogpt-3b-bloom) | | gogpt-7b | 7B | 🤗[golaxy/gogpt-7b-bloom](https://huggingface.co/golaxy/gogpt-7b-bloom) | ## Test results ![img.png](resources/test1.png) ![img.png](resources/test2.png) ![img.png](resources/test3.png) ![img.png](resources/test4.png) ![img.png](resources/test5.png) ![img.png](resources/test6.png) ## TODO - Perform RLHF training - Add Chinese-English parallel corpora later ## Acknowledgements - [@hz-zero_nlp](https://github.com/yuanzhoulvpi2017/zero_nlp) - [stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca) - [Belle data](https://huggingface.co/BelleGroup) ## Citation If you use GoGPT in your research, please cite it as follows: ``` @misc{GoGPT, title={GoGPT: Training Medical GPT Model}, author={Qiang Yan}, year={2023}, howpublished={\url{https://github.com/yanqiangmiffy/GoGPT}}, } ```
1,279
[ [ -0.0206146240234375, -0.04840087890625, 0.00951385498046875, 0.046875, -0.03692626953125, -0.0204620361328125, -0.0038585662841796875, -0.04327392578125, 0.03466796875, 0.01351165771484375, -0.031097412109375, -0.03338623046875, -0.03875732421875, -0.0117187...
ikala/bloom-zh-3b-chat
2023-08-05T01:54:13.000Z
[ "transformers", "pytorch", "safetensors", "bloom", "text-generation", "sft", "en", "zh", "ja", "dataset:OpenAssistant/oasst1", "dataset:databricks/databricks-dolly-15k", "dataset:anon8231489123/ShareGPT_Vicuna_unfiltered", "dataset:LIUM/tedlium", "dataset:theblackcat102/joke_explaination",...
text-generation
ikala
null
null
ikala/bloom-zh-3b-chat
10
5,549
transformers
2023-05-07T23:54:49
--- license: bigscience-openrail-m language: - en - zh - ja tags: - sft pipeline_tag: text-generation widget: - text: >- <|prompter|>What is a meme, and what's the history behind this word?</s><|assistant|> - text: <|prompter|>What's the Earth total population</s><|assistant|> - text: >- <|prompter|>Write a story about future of AI development</s><|assistant|> datasets: - OpenAssistant/oasst1 - databricks/databricks-dolly-15k - anon8231489123/ShareGPT_Vicuna_unfiltered - LIUM/tedlium - theblackcat102/joke_explaination --- # Bloom-3B SFT model ![conversation example](https://huggingface.co/ikala/bloom-zh-3b-chat/resolve/main/bloom-chat-example.png) It is based on Bloom-zh 3B, which was fine-tuned on human demonstrations of assistant conversations collected through the [https://open-assistant.io/](https://open-assistant.io/) human feedback web app before April 12, 2023. Supervised fine-tuning used a sequence length of 5120. ## Model Details - **Developed by:** [Open-Assistant Contributors](https://open-assistant.io/team) and [iKala](https://ikala.ai/) - **Model type:** Transformer-based Language Model - **Language:** English, Chinese, Japanese - **Finetuned from:** [ckip-joint/bloom-3b-zh](https://huggingface.co/ckip-joint/bloom-3b-zh) - **Code:** [Open-Assistant/model/model_training](https://github.com/LAION-AI/Open-Assistant/tree/main/model/model_training) - **License:** MEDIATEK RESEARCH License ([link](https://huggingface.co/ckip-joint/bloom-3b-zh/blob/main/LICENSE_MR.md)) and RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license)), non-commercial ## Prompting Two special tokens are used to mark the beginning of user and assistant turns: `<|prompter|>` and `<|assistant|>`. Each turn ends with a `</s>` token. Input prompt example: ``` <|prompter|>What is a meme, and what's the history behind this word?</s><|assistant|> ``` The input ends with the `<|assistant|>` token to signal that the model should start generating the assistant reply.
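As a quick illustration of this prompt format in code, here is a minimal generation sketch. It is not an official snippet from the authors; the sampling parameters and prompt are placeholder assumptions.

```python
# Sketch: generate a reply using the <|prompter|>/<|assistant|> format above.
# Sampling parameters are illustrative assumptions, not author recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ikala/bloom-zh-3b-chat")
model = AutoModelForCausalLM.from_pretrained("ikala/bloom-zh-3b-chat")

prompt = "<|prompter|>What is a meme, and what's the history behind this word?</s><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```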
## Benchmark | model | MMLU | BBH | Humaneval @10 | |---|---|---|---| | [ikala/redpajama-3b-chat](https://huggingface.co/ikala/redpajama-3b-chat) | 24.6 | 29.3 | 4.8 | | [ikala/bloom-zh-3b-chat](https://huggingface.co/ikala/bloom-zh-3b-chat) | 31.4 | 30.2 | 0.0 | | llama-7b (reference) | 30.9 | 27.6 | 10.3 | ## Dev Details - base model: [ckip-joint/bloom-3b-zh](https://huggingface.co/ckip-joint/bloom-3b-zh) - checkpoint: 1 epoch (6000 steps) - hardware: NVIDIA RTX A6000 x 4 command: `deepspeed trainer_sft.py --configs defaults bloom-zh-3b datasets --num_train_epochs 2 --deepspeed` data: ``` datasets: - wmt2019_zh-en: max_val_set: 1000 max_train_set: 20000 - ted_trans_en-ja: max_val_set: 1000 max_train_set: 20000 - ted_trans_zh-ja: max_val_set: 1000 max_train_set: 20000 - ikala: input_file_path: export_conversation_v4.4.jsonl val_split: 0.05 - dolly15k: val_split: 0.05 - oasst_export: lang: "bg,ca,cs,da,de,en,es,fr,hr,hu,it,nl,pl,pt,ro,ru,sl,sr,sv,uk,zh,ja,th,ko" input_file_path: 2023-04-12_oasst_release_ready_synth.jsonl.gz val_split: 0.05 - joke - gsm8k - webgpt ``` with internal datasets `ikala` so if you try to reproduce please remove the dataset bloom-zh-3b: ``` bloom-zh-3b: dtype: fp16 log_dir: "bloom-zh_3b" learning_rate: 8e-6 model_name: ckip-joint/bloom-3b-zh output_dir: bloom_model_v4_3b weight_decay: 0.0 max_length: 5120 warmup_steps: 2000 gradient_checkpointing: true gradient_accumulation_steps: 32 per_device_train_batch_size: 1 per_device_eval_batch_size: 1 eval_steps: 500 save_steps: 1000 num_train_epochs: 8 save_total_limit: 2 deepspeed_config: configs/zero3_config_sft.json ``` zero config: ``` { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": "auto" }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupDecayLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto", "warmup_type": "linear", "total_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ```
5,005
[ [ -0.04827880859375, -0.0584716796875, 0.0214385986328125, 0.0323486328125, -0.01378631591796875, -0.00916290283203125, -0.01146697998046875, -0.023101806640625, 0.01369476318359375, 0.010467529296875, -0.06915283203125, -0.037445068359375, -0.0287933349609375, ...
ewof/koishi-instruct-3b
2023-08-24T22:58:33.000Z
[ "transformers", "safetensors", "gpt_neox", "text-generation", "dataset:ewof/koishi-instruct-metharme", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
ewof
null
null
ewof/koishi-instruct-3b
2
5,549
transformers
2023-05-12T03:36:18
--- datasets: - ewof/koishi-instruct-metharme --- ## Base Model A native fine-tune of togethercomputer/RedPajama-INCITE-Base-3B-v1. ## Prompting The current model version has been trained on prompts using three different roles, which are denoted by the following tokens: `<|system|>`, `<|user|>` and `<|model|>`. The `<|system|>` prompt can be used to inject out-of-channel information behind the scenes, while the `<|user|>` prompt should be used to indicate user input. The `<|model|>` token should then be used to indicate that the model should generate a response. These tokens can occur multiple times and be chained up to form a conversation history. ## Benchmarks Trained on commit 96baf32 of the linked dataset. ![koishi_instruct_3b_v3_benchmarks.png](https://cdn.discordapp.com/attachments/1003543209378578442/1116276126449418300/image.png)
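A small helper showing how the role tokens described in the Prompting section might be chained into a conversation history. This is an illustrative sketch; the helper name and example messages are assumptions, not part of the original card.

```python
# Sketch: assemble a multi-turn prompt from (role, text) pairs using the
# <|system|>/<|user|>/<|model|> tokens described above. Illustrative only.
from typing import List, Tuple

def build_prompt(turns: List[Tuple[str, str]]) -> str:
    prompt = "".join(f"<|{role}|>{text}" for role, text in turns)
    # End with the model token so the model knows to generate the next reply.
    return prompt + "<|model|>"

prompt = build_prompt([
    ("system", "You are a helpful assistant."),
    ("user", "Summarize the plot of Hamlet in one sentence."),
])
print(prompt)
```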
857
[ [ -0.038238525390625, -0.06341552734375, 0.02093505859375, 0.02618408203125, -0.0172882080078125, -0.004573822021484375, -0.007541656494140625, -0.0242462158203125, 0.0042877197265625, 0.049041748046875, -0.07318115234375, -0.037445068359375, -0.02569580078125, ...
project-baize/baize-v2-13b
2023-06-05T08:51:21.000Z
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2304.01196", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
project-baize
null
null
project-baize/baize-v2-13b
22
5,549
transformers
2023-05-23T14:32:29
--- license: cc-by-nc-4.0 --- <p align="center"> <img width="500px" alt="Project Baize" src="https://user-images.githubusercontent.com/22514219/229195563-0cddfa74-e52f-4413-b4b4-e4ba489c4b3d.png"> </p> <hr> ## ⚠️Warning Using Baize checkpoints directly without the following format will not work. ``` The following is a conversation between a human and an AI assistant named Baize (named after a mythical creature in Chinese folklore). Baize is an open-source AI assistant developed by UCSD and Sun Yat-Sen University. The human and the AI assistant take turns chatting. Human statements start with [|Human|] and AI assistant statements start with [|AI|]. The AI assistant always provides responses in as much detail as possible, and in Markdown format. The AI assistant always declines to engage with topics, questions and instructions related to unethical, controversial, or sensitive issues. Complete the transcript in exactly that format.\n[|Human|]Hello!\n[|AI|]Hi! ``` `[|Human|]` and `[|AI|]` are required to mark the messages from the user and Baize. We recommend checking out our [GitHub](https://github.com/project-baize/baize) to find the best way to use Baize with our demo or Fastchat. ## Demo https://huggingface.co/spaces/project-baize/chat-with-baize ## What's Baize? Baize is an open-source chat model fine-tuned with [LoRA](https://github.com/microsoft/LoRA). This model is a **13B Baize-v2**, trained with supervised fine-tuning (SFT) and self-distillation with feedback (SDF). This checkpoint has been merged with LLaMA so it's ready for use. ## Why it's called Baize? Baize (白泽) is a mythical creature in Chinese folklore, who speaks human languages and knows everything. This is exactly what we expect from a chat model. ## How to use it: local demo, API and SDK More details can be found in the Baize [GitHub](https://github.com/project-baize/baize) and [Paper](https://arxiv.org/abs/2304.01196).
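For illustration, a small helper that wraps user messages in the required format shown in the warning above. This is a sketch with assumed names; only the `[|Human|]`/`[|AI|]` markers and the preamble text come from the card.

```python
# Sketch: build a Baize prompt in the required format. BAIZE_PREAMBLE should
# hold the full system text quoted in the warning above (truncated here).
BAIZE_PREAMBLE = (
    "The following is a conversation between a human and an AI assistant named Baize "
    "(named after a mythical creature in Chinese folklore). ..."  # full text as above
)

def add_human_turn(history: str, user_message: str) -> str:
    # Each human turn is prefixed with [|Human|]; the trailing [|AI|]
    # cues the model to produce the assistant's reply.
    return f"{history}\n[|Human|]{user_message}\n[|AI|]"

prompt = add_human_turn(BAIZE_PREAMBLE + "\n[|Human|]Hello!\n[|AI|]Hi!", "What can you do?")
```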
1,925
[ [ -0.01140594482421875, -0.0684814453125, 0.024200439453125, 0.038970947265625, 0.0012416839599609375, -0.0035839080810546875, -0.0069122314453125, -0.038299560546875, 0.023345947265625, 0.0240936279296875, -0.059356689453125, -0.0268402099609375, -0.0381469726562...
digitous/Skegma-GPTJ
2023-03-01T03:27:54.000Z
[ "transformers", "pytorch", "gptj", "text-generation", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "region:us" ]
text-generation
digitous
null
null
digitous/Skegma-GPTJ
0
5,548
transformers
2023-03-01T01:57:17
--- license: creativeml-openrail-m --- This is a 50/50 weight merge of KoboldAI's adventure-based language model GPT-J-6B-Skein and PygmalionAI's Pygmalion-6b. https://huggingface.co/KoboldAI/GPT-J-6B-Skein https://huggingface.co/PygmalionAI/pygmalion-6b
257
[ [ -0.0209197998046875, -0.0157623291015625, 0.004261016845703125, 0.0288848876953125, -0.0142822265625, -0.004146575927734375, -0.01457977294921875, -0.046661376953125, 0.0350341796875, 0.031036376953125, -0.055206298828125, -0.0024566650390625, -0.03399658203125,...
synapsoft/Llama-2-7b-hf-flan2022-1.2M
2023-08-29T10:21:33.000Z
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "dataset:conceptofmind/FLAN_2022", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
synapsoft
null
null
synapsoft/Llama-2-7b-hf-flan2022-1.2M
1
5,548
transformers
2023-08-29T09:50:58
--- base_model: meta-llama/Llama-2-7b-hf tags: - generated_from_trainer datasets: - conceptofmind/FLAN_2022 model-index: - name: outputs results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # outputs This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the conceptofmind/FLAN_2022 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 8 - total_train_batch_size: 96 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 1.0 ### Training results ### Framework versions - Transformers 4.31.0 - Pytorch 2.0.1+cu117 - Datasets 2.14.3 - Tokenizers 0.13.3
1,222
[ [ -0.03692626953125, -0.049774169921875, 0.012725830078125, 0.018402099609375, -0.044891357421875, -0.0185546875, -0.0041046142578125, -0.032745361328125, 0.027618408203125, 0.02239990234375, -0.0635986328125, -0.0384521484375, -0.050506591796875, 0.0081253051...
openaccess-ai-collective/hippogriff-30b-chat
2023-06-12T03:19:37.000Z
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:QingyiSi/Alpaca-CoT", "dataset:teknium/GPT4-LLM-Cleaned", "dataset:teknium/GPTeacher-General-Instruct", "dataset:metaeval/ScienceQA_text_only", "dataset:hellaswag", "dataset:openai/summarize_from_feedback", "dataset:riddle_s...
text-generation
openaccess-ai-collective
null
null
openaccess-ai-collective/hippogriff-30b-chat
21
5,547
transformers
2023-05-24T11:47:58
--- datasets: - QingyiSi/Alpaca-CoT - teknium/GPT4-LLM-Cleaned - teknium/GPTeacher-General-Instruct - metaeval/ScienceQA_text_only - hellaswag - openai/summarize_from_feedback - riddle_sense - gsm8k - OpenAssistant/oasst1 language: - en library_name: transformers pipeline_tag: text-generation --- # Hippogriff 30B Chat [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) Hippogriff 30B Chat is an experiment that builds on Manticore with new datasets, while removing a few more instruction and chat datasets. It also includes a de-duped subset of the Pygmalion dataset. It also removes all Alpaca-style prompts using `###` in favor of chat-only style prompts using `USER:`, `ASSISTANT:` as well as [pygmalion/metharme prompting](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using `<|system|>, <|user|> and <|model|>` tokens. Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org). # Training Datasets Hippogriff 30B Chat is a Llama 30B model fine-tuned on the following datasets: - OpenAssistant/oasst1 - cleaned dataset, similar to Guanaco - synthetic jokes generation and explanation derived from reddit jokes dataset - synthetic prose generation and rewriting self-chat - Q&A based on provided context - self instruct augmented logic_inference_oa - de-duped pygmalion dataset, filtered down to RP data, cleaned, English only, 25% - [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented - hellaswag, updated for detailed explanations with 30K+ rows - [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented - [ewof/code-alpaca-instruct-unfiltered](https://huggingface.co/datasets/ewof/code-alpaca-instruct-unfiltered) synthetic self chat dataset derived from about 1000 rows - [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT) - [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct) - ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split - [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K row subset of instruct augmented for concise responses, derived from the `train` split - [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses - [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization Hippogriff differs from Manticore as it does not use the WizardLM, WizardVicuna, Alpaca, or ShareGPT datasets. # Initial thoughts Hippogriff follows instructions pretty well. It still struggles with anything that has to do with math. Prose is much better than Manticore's. It handles in-context QA much better. # Shoutouts Special thanks to Nanobit for helping with Axolotl, TheBloke for quantizing these models so they are more accessible to all, 0x000011b for the RP dataset, and everyone in our AI Discord community for their support. # Demo A Spaces demo is not provided for this release, as 30B models do not currently fit in VRAM. ## Build Hippogriff was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB - 1.5 epochs taking approximately 12 hours.
No further epochs will be released. ## Bias, Risks, and Limitations Hippogriff has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Hippogriff was fine-tuned from the base model LLaMA 30B; please refer to its model card's Limitations Section for relevant information. The core functionality of Llama is taking a string of text and predicting the next token. While language models are widely used for tasks other than this, there are a lot of unknowns with this work. When prompting Hippogriff it is important to remember that the statistically most likely next token is often not the token that produces the most "accurate" text. Never depend upon Hippogriff to produce factually accurate output. Llama was trained on the Pile, a dataset known to contain profanity, lewd, and otherwise abrasive language. Depending upon use case Hippogriff may produce socially unacceptable text. See Sections 5 and 6 of the Pile paper for a more detailed analysis of the biases in the Pile. As with all language models, it is hard to predict in advance how Hippogriff will respond to particular prompts and offensive content may occur without warning. We recommend having a human curate or filter the outputs before releasing them, both to censor undesirable content and to improve the quality of the results. ## Examples TBD ``` USER: write an original short story of at least 300 words in the style of Zora Neale Hurston. Incorporate the following literary devices: Euphemism and Foreshadowing. ASSISTANT: Once upon a time, there was a black girl named Rose, whose skin shined like a pearl beneath the hot sun. She lived with her mama and daddy on their farm, where they grew crops to feed the family and neighbors. On this particular day, Rose felt a tinge of excitement as she watched her parents work diligently in their fields. 'It's almost harvest season!' she thought, imagining the delicious meals that awaited them. But, soon after, it became apparent that something sinister lurked amidst the rows of fresh produce. As her father hoed around a vine-covered cucumber plant, he unearthed a peculiar rock. It was the size of an acorn but had a strange shape, unlike anything he or his wife had ever seen. "What have we here?" He said, turning to his wife with curiosity and excitement. She examined the rock for herself and declared, "Why, it looks like a piece of coal!" But as both mother and daughter looked closer, it seemed to be far more than a mere piece of coal. A sense of unease crept up Rose's spine when she noticed that the rock resembled the dark face of a baby in the womb. As Rose tried to suppress a scream, her mother gasped, "Sweet Jesus! Let's bury this thing before anyone sees!" And so, they quickly hid the stone back underground, hoping that nobody would ever discover their secret. ``` ``` USER: tell me 5 words that start with "ex" and then the sum of 33 and 77 ASSISTANT: Excite, exotic, exhibit, excitable, excessive ``` ``` USER: sally (a girl) has 3 brothers. Each brother has 2 sisters. how many sisters does sally have? ASSISTANT: Sally has 3 brothers, and each brother has 2 sisters. To find out how many sisters Sally has in total, we need to multiply the number of brothers by the number of sisters per brother, then add 1 to account for Sally herself. So, 3 * 2 + 1 = 6 sisters in total ```
7,069
[ [ -0.0285491943359375, -0.0665283203125, 0.01111602783203125, 0.0255889892578125, -0.0269012451171875, -0.0149383544921875, 0.005832672119140625, -0.061859130859375, 0.0303802490234375, 0.01024627685546875, -0.04541015625, -0.0208740234375, -0.04058837890625, ...
quantumaikr/QuantumLM-7B
2023-07-26T17:52:17.000Z
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
quantumaikr
null
null
quantumaikr/QuantumLM-7B
1
5,546
transformers
2023-07-22T13:33:56
--- license: cc-by-nc-4.0 language: - en pipeline_tag: text-generation --- # QuantumLM ## Model Description `QuantumLM` is a Llama2 7B model fine-tuned on a Wizard-Orca-style dataset. ## Usage Start chatting with `QuantumLM-7B` using the following code snippet: ```python import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline tokenizer = AutoTokenizer.from_pretrained("quantumaikr/QuantumLM-7B", use_fast=False) model = AutoModelForCausalLM.from_pretrained("quantumaikr/QuantumLM-7B", torch_dtype=torch.float16, low_cpu_mem_usage=True, device_map="auto") system_prompt = "### System:\nYou are QuantumLM, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n" message = "Write me a poem please" prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n" inputs = tokenizer(prompt, return_tensors="pt").to("cuda") output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` QuantumLM should be used with this prompt format: ``` ### System: This is a system prompt, please behave and help the user. ### User: Your prompt here ### Assistant: The output of QuantumLM ``` ## Use and Limitations ### Intended Use These models are intended for research only, in adherence with the [CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license. ### Limitations and bias Although the aforementioned dataset helps to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use it responsibly.
1,878
[ [ 0.00640869140625, -0.0654296875, 0.0239715576171875, -0.00022423267364501953, -0.035125732421875, 0.003704071044921875, 0.0134735107421875, -0.01351165771484375, 0.0008068084716796875, 0.0430908203125, -0.0297698974609375, -0.041748046875, -0.0323486328125, ...
pszemraj/pythia-31m-simplewiki-2048
2023-09-14T13:12:29.000Z
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "dataset:pszemraj/simple_wikipedia_LM", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
pszemraj
null
null
pszemraj/pythia-31m-simplewiki-2048
0
5,545
transformers
2023-09-12T05:11:56
--- base_model: EleutherAI/pythia-31m tags: - generated_from_trainer metrics: - accuracy inference: parameters: max_new_tokens: 64 do_sample: true repetition_penalty: 1.1 no_repeat_ngram_size: 5 guidance_scale: 1.01 eta_cutoff: 0.001 widget: - text: My name is El Microondas the Wise and example_title: El Microondas - text: A meme is example_title: meme - text: >- Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had example_title: Coreference resolution - text: >- On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book example_title: Logic puzzles - text: >- The two men running to become New York City's next mayor will face off in their first debate Wednesday night example_title: Reading comprehension datasets: - pszemraj/simple_wikipedia_LM pipeline_tag: text-generation license: apache-2.0 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pythia-31m-simplewiki-2048 This was initialized from random weights based on the config of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) and trained on `pszemraj/simple_wikipedia_LM` for 3 epochs. It achieves the following results on the evaluation set: - Loss: 3.6874 - Accuracy: 0.4105 ## Model description More information needed ## Intended uses & limitations This is a baseline for comparison to other models. ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 1 - eval_batch_size: 1 - seed: 80085 - gradient_accumulation_steps: 64 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 6.0657 | 0.22 | 100 | 5.6210 | 0.2414 | | 5.2447 | 0.45 | 200 | 4.9316 | 0.3054 | | 4.8397 | 0.67 | 300 | 4.6011 | 0.3343 | | 4.7933 | 0.9 | 400 | 4.3878 | 0.3530 | | 4.274 | 1.12 | 500 | 4.2352 | 0.3646 | | 4.4867 | 1.35 | 600 | 4.1224 | 0.3723 | | 4.3434 | 1.57 | 700 | 4.0282 | 0.3791 | | 4.1857 | 1.8 | 800 | 3.9552 | 0.3841 | | 4.229 | 2.02 | 900 | 3.8890 | 0.3909 | | 3.9189 | 2.25 | 1000 | 3.8301 | 0.3967 | | 4.084 | 2.47 | 1100 | 3.7782 | 0.4023 | | 3.8965 | 2.7 | 1200 | 3.7302 | 0.4069 | | 3.915 | 2.92 | 1300 | 3.6874 | 0.4105 | ### Framework versions - Transformers 4.33.1 - Pytorch 2.2.0.dev20230907+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
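For a quick smoke test of this baseline, here is a generation sketch that reuses a subset of the inference parameters listed in the card's widget configuration. The prompt is an arbitrary placeholder; this snippet is illustrative, not part of the original card.

```python
# Sketch: sample from the model with generation settings drawn from the
# card's inference widget. The prompt below is a placeholder.
from transformers import pipeline

pipe = pipeline("text-generation", model="pszemraj/pythia-31m-simplewiki-2048")
out = pipe(
    "A meme is",
    max_new_tokens=64,
    do_sample=True,
    repetition_penalty=1.1,
    no_repeat_ngram_size=5,
)
print(out[0]["generated_text"])
```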
3,130
[ [ -0.032562255859375, -0.035186767578125, 0.003231048583984375, 0.0006985664367675781, -0.0225372314453125, -0.025421142578125, -0.007717132568359375, -0.0206298828125, 0.0228729248046875, 0.0100250244140625, -0.045806884765625, -0.044158935546875, -0.044219970703...
pszemraj/pythia-31m-simplewiki-scratch-bf16
2023-09-15T04:47:41.000Z
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "dataset:pszemraj/simple_wikipedia_LM", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
pszemraj
null
null
pszemraj/pythia-31m-simplewiki-scratch-bf16
0
5,545
transformers
2023-09-15T03:26:05
--- base_model: EleutherAI/pythia-31m tags: - generated_from_trainer metrics: - accuracy inference: parameters: max_new_tokens: 64 do_sample: true repetition_penalty: 1.1 no_repeat_ngram_size: 5 guidance_scale: 1.01 eta_cutoff: 0.001 widget: - text: My name is El Microondas the Wise and example_title: El Microondas - text: A meme is example_title: meme - text: >- Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had example_title: Coreference resolution - text: >- On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book example_title: Logic puzzles - text: >- The two men running to become New York City's next mayor will face off in their first debate Wednesday night example_title: Reading comprehension license: apache-2.0 datasets: - pszemraj/simple_wikipedia_LM pipeline_tag: text-generation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pythia-31m-simplewiki-scratch-bf16 Trained from a randomly initialized config based on [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) for 3 epochs in bf16. It achieves the following results on the evaluation set: - Loss: 4.1763 - Accuracy: 0.3676 ## Model description Trained in bf16 (the previous version used fp32). ## Intended uses & limitations More information needed ## Training and evaluation data ``` ***** eval metrics ***** epoch = 2.99 eval_accuracy = 0.3723 eval_loss = 4.1155 eval_runtime = 0:00:14.44 eval_samples = 500 eval_samples_per_second = 34.602 eval_steps_per_second = 17.301 perplexity = 61.2811 ``` ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 2 - seed: 80085 - gradient_accumulation_steps: 64 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 5.8617 | 0.45 | 100 | 5.5276 | 0.2451 | | 5.2782 | 0.9 | 200 | 4.9596 | 0.2965 | | 4.9996 | 1.35 | 300 | 4.6412 | 0.3310 | | 4.6292 | 1.8 | 400 | 4.4344 | 0.3485 | | 4.5339 | 2.25 | 500 | 4.2875 | 0.3600 | | 4.5214 | 2.7 | 600 | 4.1763 | 0.3676 | ### Framework versions - Transformers 4.33.1 - Pytorch 2.2.0.dev20230907+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
3,232
[ [ -0.038970947265625, -0.045562744140625, 0.005352020263671875, 0.0012359619140625, -0.023651123046875, -0.0203399658203125, -0.015350341796875, -0.03179931640625, 0.0109710693359375, 0.004940032958984375, -0.04913330078125, -0.040924072265625, -0.045501708984375,...
openchat/openchat_v2
2023-07-08T13:51:04.000Z
[ "transformers", "pytorch", "llama", "text-generation", "en", "license:other", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
openchat
null
null
openchat/openchat_v2
12
5,544
transformers
2023-07-07T15:30:54
--- language: - en tags: - llama license: other --- # OpenChat: Advancing Open-source Language Models with Imperfect Data The OpenChat v2 family is inspired by offline reinforcement learning, including conditional behavior cloning (OpenChat-v2) and weighted behavior cloning (OpenChat-v2-w). - **[OpenChat-v2-w](https://huggingface.co/openchat/openchat_v2_w)**: ~80k cleaned ShareGPT data with conditioning and weighted loss, based on LLaMA-13B with a context length of 2048. - Achieves **50.9%** win-rate over ChatGPT on MT-bench. - Achieves **79.4%** win-rate over ChatGPT on Vicuna-bench. - Achieves **87.1%** win-rate over text-davinci-003 on AlpacaEval. - **[OpenChat-v2](https://huggingface.co/openchat/openchat_v2)**: ~80k cleaned ShareGPT data with only conditioning, based on LLaMA-13B with a context length of 2048. - Achieves **48.1%** win-rate over ChatGPT on MT-bench. - Achieves **80.6%** win-rate over ChatGPT on Vicuna-bench. - Achieves **85.0%** win-rate over text-davinci-003 on AlpacaEval. ## Code and Inference Server We provide the full source code, including an inference server compatible with the "ChatCompletions" API, in the [OpenChat](https://github.com/imoneoi/openchat) GitHub repository. ## Web UI OpenChat also includes a web UI for a better user experience. See the GitHub repository for instructions. ## Conversation Template The conversation template **involves concatenating tokens**, and cannot be expressed in plain text. Besides the base model vocabulary, an end-of-turn token `<|end_of_turn|>` is added. Here is an example of the single-round conversation template: ```python def tokenize_single_input(tokenizer, prompt): # OpenChat V2 human_prefix = "User:" prefix = "Assistant GPT4:" eot_token = "<|end_of_turn|>" bos_token = "<s>" def _tokenize(text): return tokenizer.convert_tokens_to_ids(tokenizer._tokenize(text)) def _tokenize_special(special_name): return tokenizer.convert_tokens_to_ids(special_name) return [_tokenize_special(bos_token)] + _tokenize(human_prefix) + _tokenize(prompt) + [_tokenize_special(eot_token)] + \ _tokenize(prefix) ``` To explore conditional language models, you can also set `prefix = "Assistant GPT3:"` to mimic ChatGPT behavior (this may cause performance degradation). *Hint: In BPE, `tokenize(A) + tokenize(B)` does not always equal `tokenize(A + B)`* ## Limitations **Foundation Model Limitations** Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as: - Complex reasoning - Mathematical and arithmetic tasks - Programming and coding challenges **Hallucination of Non-existent Information** OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.
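A usage sketch for the `tokenize_single_input` template function above. This is illustrative only; the sampling parameters and prompt are assumptions, not author recommendations.

```python
# Sketch: tokenize a single-round prompt with the template above and generate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openchat/openchat_v2")
model = AutoModelForCausalLM.from_pretrained(
    "openchat/openchat_v2", torch_dtype=torch.float16, device_map="auto"
)

input_ids = tokenize_single_input(tokenizer, "What is a meme?")  # defined above
input_tensor = torch.tensor([input_ids], device=model.device)
output = model.generate(input_tensor, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][len(input_ids):]))
```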
3,025
[ [ -0.0208587646484375, -0.08935546875, 0.0269927978515625, 0.0216522216796875, -0.0083770751953125, -0.01175689697265625, -0.031585693359375, -0.0345458984375, 0.0207366943359375, 0.03399658203125, -0.046661376953125, -0.0301971435546875, -0.023529052734375, -...
psyche/kollama2-7b
2023-07-27T04:49:58.000Z
[ "transformers", "pytorch", "llama", "text-generation", "en", "ko", "license:apache-2.0", "model-index", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
psyche
null
null
psyche/kollama2-7b
2
5,544
transformers
2023-07-23T02:23:47
--- license: apache-2.0 language: - en - ko model-index: - name: ARC results: - task: type: natural-language-generation dataset: type: huggingface name: ARC-Challenge(25 shots) metrics: - type: acc_norm value: 53.24 - name: HellaSwag results: - task: type: natural-language-generation dataset: type: huggingface name: HellaSwag(10 shots) metrics: - type: acc_norm value: 78.82 - name: MMLU results: - task: type: natural-language-generation dataset: type: huggingface name: MMLU(5 shots) metrics: - type: acc value: 42.38 - name: TruthfulQA results: - task: type: natural-language-generation dataset: type: huggingface name: TruthfulQA(0 shots) metrics: - type: MC2 value: 44.65 --- # Fine-tuned Llama-2 Model with Korean Corpus This model is a Llama-2 model further trained on Korean datasets.
1,122
[ [ -0.0037136077880859375, -0.0190277099609375, 0.030303955078125, 0.0199737548828125, -0.03265380859375, 0.026275634765625, 0.0053863525390625, -0.025482177734375, 0.039825439453125, 0.0732421875, -0.036895751953125, -0.043853759765625, -0.04876708984375, -0.0...
kingbri/chronolima-airo-grad-l2-13B
2023-08-16T15:32:52.000Z
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "en", "license:agpl-3.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
kingbri
null
null
kingbri/chronolima-airo-grad-l2-13B
3
5,544
transformers
2023-08-04T06:52:37
--- language: - en library_name: transformers pipeline_tag: text-generation tags: - llama - llama-2 license: agpl-3.0 --- # Model Card: chronolima-airo-grad-l2-13B This is a lora + gradient merge between: - [Chronos 13b v2](https://huggingface.co/elinas/chronos-13b-v2) - [Airoboros l2 13b gpt4 2.0](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-2.0) - [LimaRP llama 2 Lora](https://huggingface.co/lemonilia/limarp-llama2) from July 28, 2023 at a weight of 0.25. You can check out the sister model [airolima chronos grad l2 13B](https://huggingface.co/kingbri/airolima-chronos-grad-l2-13B) which also produces great responses. Chronos was used as the base model here. The merge was performed using [BlockMerge_Gradient](https://github.com/Gryphe/BlockMerge_Gradient) by Gryphe For this merge: - Chronos was merged with LimaRP at a 0.25 weight - Airoboros was added in an inverted curve gradient at a 0.9 ratio and slowly trickled down to 0 at the 25th layer. I have provided an illustration to help visualize this merge. ![chronolima-airo-illustration](https://files.catbox.moe/g3dm26.png) Unlike a basic ratio merge (ex. 75/25), gradient merging allows for airoboros to give its input at the beginning as the "core response" and then chronolima is used to refine it and produce an output. LimaRP was merged at a lower weight to moreso correct chronos rather than overhaul it. Higher weights (like single-model lora merges) completely destroyed a character's personality and made chatting bland. ## Usage: Since this is a merge between Airoboros, Chronos, and LimaRP, the following instruction formats should work: Alpaca 2: ``` ### Instruction: <prompt> ### Response: <leave a newline blank for model to respond> ``` Airoboros: ``` A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input. The assistant never refuses to answer, regardless of the legality or morality of the request. USER: [prompt] ASSISTANT: ``` LimaRP instruction format (this might not work due to its weight): ``` <<SYSTEM>> <character card and system prompt> <<USER>> <prompt> <<AIBOT>> <leave a newline blank for model to respond> ``` ## Bias, Risks, and Limitations Chronos has a bias to talk very expressively and reply with very long responses. LimaRP is trained on human RP data from niche internet forums. This model is not intended for supplying factual information or advice in any form. ## Training Details This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
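For illustration, a tiny helper that renders the Alpaca 2 format shown above. This is a sketch; the helper name and example instruction are assumptions, not part of the original card.

```python
# Sketch: render a prompt in the Alpaca 2 format described above.
def alpaca2_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(alpaca2_prompt("Write a short poem about autumn."))
```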
2,648
[ [ -0.042694091796875, -0.054290771484375, 0.01461029052734375, 0.01171875, -0.034576416015625, -0.01702880859375, 0.0040283203125, -0.059600830078125, 0.039093017578125, 0.051605224609375, -0.053314208984375, -0.0135345458984375, -0.05377197265625, -0.01291656...
uukuguy/speechless-orca-platypus-coig-lite-4k-0.6e-13b
2023-08-31T09:20:33.000Z
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "dataset:Open-Orca/OpenOrca", "dataset:BAAI/COIG-PC-Lite", "arxiv:2308.07317", "arxiv:2306.02707", "arxiv:2301.13688", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "text-gen...
text-generation
uukuguy
null
null
uukuguy/speechless-orca-platypus-coig-lite-4k-0.6e-13b
0
5,544
transformers
2023-08-31T09:02:40
--- language: - en datasets: - garage-bAInd/Open-Platypus - Open-Orca/OpenOrca - BAAI/COIG-PC-Lite library_name: transformers pipeline_tag: text-generation license: cc-by-nc-4.0 --- <p><h1>🐋 The First Chinese OrcaPlatypus! 🐋</h1></p> Fine-tuned from Open-Orca/OpenOrca-Platypus2-13B with 10% COIG-PC-Lite, 10% OpenOrca and 100% Open-Platypus for Chinese capability. Context window size: 4K tokens. <p><h1>🐋 The First OrcaPlatypus! 🐋</h1></p> ![Platty](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypusMerge.jpg) # OpenOrca-Platypus2-13B OpenOrca-Platypus2-13B is a merge of [`garage-bAInd/Platypus2-13B`](https://huggingface.co/garage-bAInd/Platypus2-13B) and [`Open-Orca/OpenOrcaxOpenChat-Preview2-13B`](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B). This model is more than the sum of its parts! We are happy to be teaming up with the [Platypus](https://platypus-llm.github.io/) team to bring you a new model which once again tops the leaderboards! Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2). [<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2) We are in-process with training more models, so keep a look out on our org for releases coming soon with exciting partners. We will also give sneak-peek announcements on our Discord, which you can find here: https://AlignmentLab.ai # Evaluation ## HuggingFace Leaderboard Performance ![HF Leaderboard](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BHFLeaderboard.webp) | Metric | Value | |-----------------------|-------| | MMLU (5-shot) | 59.5 | | ARC (25-shot) | 62.88 | | HellaSwag (10-shot) | 83.19 | | TruthfulQA (0-shot) | 52.69 | | Avg. | 64.56 | We use [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results. ## AGIEval Performance We compare our results to our base Preview2 model (using LM Evaluation Harness). We find **112%** of the base model's performance on AGI Eval, averaging **0.463**. A large part of this boost is the substantial improvement to LSAT Logical Reasoning performance. ![OpenOrca-Platypus2-13B AGIEval Performance](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BAGIEval.webp "AGIEval Performance") ## BigBench-Hard Performance We compare our results to our base Preview2 model (using LM Evaluation Harness). We find **105%** of the base model's performance on BigBench-Hard, averaging **0.442**. ![OpenOrca-Platypus2-13B BigBench-Hard Performance](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BBigBenchHard.webp "BigBench-Hard Performance") # Model Details * **Trained by**: **Platypus2-13B** trained by Cole Hunter & Ariel Lee; **OpenOrcaxOpenChat-Preview2-13B** trained by Open-Orca * **Model type:** **OpenOrca-Platypus2-13B** is an auto-regressive language model based on the Llama 2 transformer architecture.
* **Language(s)**: English * **License for Platypus2-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)) * **License for OpenOrcaxOpenChat-Preview2-13B base weights**: Llama 2 Commercial # Prompting ## Prompt Template for base Platypus2-13B ``` ### Instruction: <prompt> (without the <>) ### Response: ``` ## Prompt Template for base OpenOrcaxOpenChat-Preview2-13B OpenChat Llama2 V1: see [OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) for additional information. # Training ## Training Datasets `garage-bAInd/Platypus2-13B` was trained using the STEM and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information. `Open-Orca/OpenOrcaxOpenChat-Preview2-13B` was trained using a refined subset of most of the GPT-4 data from the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca). ## Training Procedure `Open-Orca/Platypus2-13B` was instruction fine-tuned using LoRA on 1x A100-80GB. For training details and inference instructions please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo. # Supplemental ## Reproducing Evaluation Results (for HuggingFace Leaderboard Eval) Install LM Evaluation Harness: ``` # clone repository git clone https://github.com/EleutherAI/lm-evaluation-harness.git # change to repo directory cd lm-evaluation-harness # check out the correct commit git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463 # install pip install -e . ``` Each task was evaluated on a single A100-80GB GPU. ARC: ``` python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25 ``` HellaSwag: ``` python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10 ``` MMLU: ``` python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5 ``` TruthfulQA: ``` python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/truthfulqa_0shot.json --device cuda ``` ## Limitations and bias Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/ # Citations ```bibtex @software{hunterlee2023orcaplaty1, title = {OpenOrcaPlatypus: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset and Merged with divergent STEM and Logic Dataset Model}, author = {Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz and Bleys Goodson and Wing Lian and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"}, year = {2023}, publisher = {HuggingFace}, journal = {HuggingFace repository}, howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B}}, } @article{platypus2023, title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs}, author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz}, booktitle={arXiv preprint arxiv:2308.07317}, year={2023} } @software{OpenOrcaxOpenChatPreview2, title = {OpenOrcaxOpenChatPreview2: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset}, author = {Guan Wang and Bleys Goodson and Wing Lian and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"}, year = {2023}, publisher = {HuggingFace}, journal = {HuggingFace repository}, howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B}}, } @software{openchat, title = {{OpenChat: Advancing Open-source Language Models with Imperfect Data}}, author = {Wang, Guan and Cheng, Sijie and Yu, Qiying and Liu, Changling}, doi = {10.5281/zenodo.8105775}, url = {https://github.com/imoneoi/openchat}, version = {pre-release}, year = {2023}, month = {7}, } @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv} } @misc{longpre2023flan, title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning}, author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V.
Le and Barret Zoph and Jason Wei and Adam Roberts}, year={2023}, eprint={2301.13688}, archivePrefix={arXiv}, primaryClass={cs.AI} } @article{hu2021lora, title={LoRA: Low-Rank Adaptation of Large Language Models}, author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu}, journal={CoRR}, year={2021} } ```
11,045
[ [ -0.03057861328125, -0.06182861328125, 0.01537322998046875, 0.0102996826171875, -0.022186279296875, -0.01395416259765625, -0.01467132568359375, -0.054443359375, 0.019134521484375, 0.02459716796875, -0.044281005859375, -0.046722412109375, -0.03497314453125, -0...
pszemraj/pythia-31m-simplepile-lite-2048-scratch-2e
2023-09-15T02:11:38.000Z
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "dataset:pszemraj/simplepile-lite", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
pszemraj
null
null
pszemraj/pythia-31m-simplepile-lite-2048-scratch-2e
0
5,543
transformers
2023-09-14T15:46:08
--- base_model: EleutherAI/pythia-31m tags: - generated_from_trainer metrics: - accuracy inference: parameters: max_new_tokens: 64 do_sample: true repetition_penalty: 1.1 no_repeat_ngram_size: 5 guidance_scale: 1.01 eta_cutoff: 0.001 widget: - text: My name is El Microondas the Wise and example_title: El Microondas - text: A meme is example_title: meme - text: >- Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had example_title: Coreference resolution - text: >- On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book example_title: Logic puzzles - text: >- The two men running to become New York City's next mayor will face off in their first debate Wednesday night example_title: Reading comprehension pipeline_tag: text-generation license: apache-2.0 datasets: - pszemraj/simplepile-lite --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BL-pythia-31m-simplepile-lite-2048-scratch Trained from scratch based on the config of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) on the pszemraj/simplepile-lite dataset. It achieves the following results on the evaluation set: - Loss: 3.9891 - Accuracy: 0.3498 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 1 - seed: 80085 - gradient_accumulation_steps: 64 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 2.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 7.4089 | 0.07 | 100 | 7.3885 | 0.1133 | | 6.2774 | 0.13 | 200 | 6.2091 | 0.1621 | | 5.7019 | 0.2 | 300 | 5.7450 | 0.1890 | | 5.4922 | 0.27 | 400 | 5.4697 | 0.2080 | | 5.233 | 0.33 | 500 | 5.2846 | 0.2195 | | 5.0523 | 0.4 | 600 | 5.1479 | 0.2296 | | 4.9396 | 0.47 | 700 | 5.0391 | 0.2376 | | 4.7633 | 0.53 | 800 | 4.9366 | 0.2458 | | 4.7516 | 0.6 | 900 | 4.8339 | 0.2559 | | 4.5937 | 0.67 | 1000 | 4.7286 | 0.2676 | | 4.5079 | 0.73 | 1100 | 4.6293 | 0.2798 | | 4.4608 | 0.8 | 1200 | 4.5433 | 0.2903 | | 4.3426 | 0.87 | 1300 | 4.4719 | 0.2988 | | 4.1722 | 0.93 | 1400 | 4.4089 | 0.3057 | | 4.1655 | 1.0 | 1500 | 4.3585 | 0.3107 | | 4.0927 | 1.07 | 1600 | 4.3101 | 0.3161 | | 4.1439 | 1.13 | 1700 | 4.2714 | 0.3206 | | 4.0064 | 1.2 | 1800 | 4.2330 | 0.3249 | | 4.0633 | 1.27 | 1900 | 4.2015 | 0.3281 | | 3.9948 | 1.33 | 2000 | 4.1702 | 0.3311 | | 3.9389 | 1.4 | 2100 | 4.1439 | 0.3338 | | 3.8833 | 1.47 | 2200 | 4.1200 | 0.3367 | | 3.8411 | 1.53 | 2300 | 4.0949 | 0.3395 | | 3.8481 | 1.6 | 2400 | 4.0764 | 0.3408 | | 3.8397 | 1.67 | 2500 | 4.0578 | 0.3420 | | 3.8897 | 1.73 | 2600 | 4.0383 | 0.3440 | | 3.8785 | 1.8 | 2700 | 4.0206 | 0.3459 | | 3.8126 | 1.87 | 2800 | 4.0044 | 0.3478 | | 3.783 | 1.93 | 2900 | 3.9891 | 0.3498 | ### Framework versions - Transformers 4.33.1 - Pytorch 2.2.0.dev20230907+cu118 - Datasets 2.14.5 - Tokenizers 0.13.3
4,041
[ [ -0.04095458984375, -0.034393310546875, 0.00882720947265625, 0.00997161865234375, -0.01593017578125, -0.014862060546875, 0.00019943714141845703, -0.0099639892578125, 0.03643798828125, 0.015106201171875, -0.045928955078125, -0.047119140625, -0.047210693359375, ...
pszemraj/pythia-31m-KI_v1-2048-scratch
2023-09-15T12:34:49.000Z
[ "transformers", "safetensors", "gpt_neox", "text-generation", "generated_from_trainer", "en", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
pszemraj
null
null
pszemraj/pythia-31m-KI_v1-2048-scratch
0
5,542
transformers
2023-09-15T01:35:27
--- base_model: EleutherAI/pythia-31m tags: - generated_from_trainer metrics: - accuracy inference: parameters: max_new_tokens: 64 do_sample: true repetition_penalty: 1.1 no_repeat_ngram_size: 5 guidance_scale: 1.01 eta_cutoff: 0.001 widget: - text: My name is El Microondas the Wise and example_title: El Microondas - text: A meme is example_title: meme - text: >- Barack Obama nominated Hillary Clinton as his secretary of state on Monday. He chose her because she had example_title: Coreference resolution - text: >- On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book example_title: Logic puzzles - text: >- The two men running to become New York City's next mayor will face off in their first debate Wednesday night example_title: Reading comprehension pipeline_tag: text-generation license: apache-2.0 language: - en --- # pythia-31m-KI_v1-2048-scratch Initialized from random weights using the config of [EleutherAI/pythia-31m](https://huggingface.co/EleutherAI/pythia-31m) and trained for 3 epochs in bf16. It achieves the following results on the evaluation set: - Loss: 4.6160 - Accuracy: 0.2448 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 2 - eval_batch_size: 2 - seed: 80085 - gradient_accumulation_steps: 64 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-07 - lr_scheduler_type: inverse_sqrt - lr_scheduler_warmup_ratio: 0.05 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 6.3874 | 0.16 | 100 | 6.4212 | 0.1487 | | 5.7088 | 0.32 | 200 | 5.7926 | 0.1725 | | 5.4575 | 0.48 | 300 | 5.5160 | 0.1903 | | 5.2451 | 0.64 | 400 | 5.3429 | 0.1995 | | 5.0954 | 0.8 | 500 | 5.2109 | 0.2059 | | 5.0358 | 0.96 | 600 | 5.1068 | 0.2123 | | 4.94 | 1.12 | 700 | 5.0321 | 0.2157 | | 4.8532 | 1.28 | 800 | 4.9605 | 0.2202 | | 4.7602 | 1.44 | 900 | 4.9047 | 0.224 | | 4.6965 | 1.6 | 1000 | 4.8526 | 0.2276 | | 4.6855 | 1.76 | 1100 | 4.8139 | 0.2300 | | 4.6573 | 1.91 | 1200 | 4.7739 | 0.2327 | | 4.5968 | 2.07 | 1300 | 4.7451 | 0.2346 | | 4.5688 | 2.23 | 1400 | 4.7152 | 0.2370 | | 4.5205 | 2.39 | 1500 | 4.6842 | 0.2396 | | 4.5369 | 2.55 | 1600 | 4.6598 | 0.2410 | | 4.5106 | 2.71 | 1700 | 4.6352 | 0.2433 | | 4.4375 | 2.87 | 1800 | 4.6160 | 0.2448 |
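Both scratch-trained cards above pair a 5e-4 peak learning rate with the `inverse_sqrt` scheduler and a 5% warmup ratio. Here is a small sketch of inspecting that schedule with transformers' `get_scheduler`; the total step count is illustrative, not taken from the logs.

```python
import torch
from transformers import get_scheduler

total_steps = 1800  # illustrative; roughly matches the step counts in the tables above
warmup_steps = int(0.05 * total_steps)

# Dummy parameter so the optimizer has something to schedule.
params = [torch.nn.Parameter(torch.zeros(1))]
optimizer = torch.optim.Adam(params, lr=5e-4, betas=(0.9, 0.99), eps=1e-7)
scheduler = get_scheduler("inverse_sqrt", optimizer=optimizer, num_warmup_steps=warmup_steps)

for step in range(total_steps):
    optimizer.step()
    scheduler.step()
    if step % 300 == 0:
        print(f"step {step}: lr = {scheduler.get_last_lr()[0]:.2e}")
```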
3,059
[ [ -0.031494140625, -0.0249786376953125, 0.01849365234375, 0.003692626953125, -0.02117919921875, -0.019683837890625, -0.0110931396484375, -0.005893707275390625, 0.023162841796875, 0.0107421875, -0.04608154296875, -0.049285888671875, -0.042633056640625, -0.01579...
EleutherAI/pythia-70m-v0
2023-03-29T18:53:28.000Z
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "causal-lm", "pythia", "pythia_v0", "en", "dataset:the_pile", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
EleutherAI
null
null
EleutherAI/pythia-70m-v0
6
5,539
transformers
2022-10-16T18:31:25
--- language: - en tags: - pytorch - causal-lm - pythia - pythia_v0 license: apache-2.0 datasets: - the_pile --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research. It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and one trained on the Pile after the dataset has been globally deduplicated. All 8 model sizes are trained on the exact same data, in the exact same order. All Pythia models are available [on Hugging Face](https://huggingface.co/models?other=pythia). The Pythia model suite was deliberately designed to promote scientific research on large language models, especially interpretability research. Despite not centering downstream performance as a design goal, we find the models <a href="#evaluations">match or exceed</a> the performance of similar and same-sized models, such as those in the OPT and GPT-Neo suites. Please note that all models in the *Pythia* suite were renamed in January 2023. For clarity, a <a href="#naming-convention-and-parameter-count">table comparing the old and new names</a> is provided in this model card, together with exact parameter counts. ## Pythia-70M ### Model Details - Developed by: [EleutherAI](http://eleuther.ai) - Model type: Transformer-based Language Model - Language: English - Learn more: [Pythia's GitHub repository](https://github.com/EleutherAI/pythia) for training procedure, config files, and details on how to use. - Library: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox) - License: Apache 2.0 - Contact: to ask questions about this model, join the [EleutherAI Discord](https://discord.gg/zBGx3azzUn), and post them in `#release-discussion`. Please read the existing *Pythia* documentation before asking about it in the EleutherAI Discord. For general correspondence: [contact@eleuther.ai](mailto:contact@eleuther.ai). <figure> | Pythia model | Non-Embedding Params | Layers | Model Dim | Heads | Batch Size | Learning Rate | Equivalent Models | | -----------: | -------------------: | :----: | :-------: | :---: | :--------: | :-------------------: | :--------------------: | | 70M | 18,915,328 | 6 | 512 | 8 | 2M | 1.0 x 10<sup>-3</sup> | — | | 160M | 85,056,000 | 12 | 768 | 12 | 4M | 6.0 x 10<sup>-4</sup> | GPT-Neo 125M, OPT-125M | | 410M | 302,311,424 | 24 | 1024 | 16 | 4M | 3.0 x 10<sup>-4</sup> | OPT-350M | | 1.0B | 805,736,448 | 16 | 2048 | 8 | 2M | 3.0 x 10<sup>-4</sup> | — | | 1.4B | 1,208,602,624 | 24 | 2048 | 16 | 4M | 2.0 x 10<sup>-4</sup> | GPT-Neo 1.3B, OPT-1.3B | | 2.8B | 2,517,652,480 | 32 | 2560 | 32 | 2M | 1.6 x 10<sup>-4</sup> | GPT-Neo 2.7B, OPT-2.7B | | 6.9B | 6,444,163,072 | 32 | 4096 | 32 | 2M | 1.2 x 10<sup>-4</sup> | OPT-6.7B | | 12B | 11,327,027,200 | 36 | 5120 | 40 | 2M | 1.2 x 10<sup>-4</sup> | — | <figcaption>Engineering details for the <i>Pythia Suite</i>. Deduped and non-deduped models of a given size have the same hyperparameters. “Equivalent” models have <b>exactly</b> the same architecture, and the same number of non-embedding parameters.</figcaption> </figure> ### Uses and Limitations #### Intended Use The primary intended use of Pythia is research on the behavior, functionality, and limitations of large language models. This suite is intended to provide a controlled setting for performing scientific experiments. To enable the study of how language models change over the course of training, we provide 143 evenly spaced intermediate checkpoints per model.
These checkpoints are hosted on Hugging Face as branches. Note that branch `143000` corresponds exactly to the model checkpoint on the `main` branch of each model. You may also further fine-tune and adapt Pythia-70M for deployment, as long as your use is in accordance with the Apache 2.0 license. Pythia models work with the Hugging Face [Transformers Library](https://huggingface.co/docs/transformers/index). If you decide to use pre-trained Pythia-70M as a basis for your fine-tuned model, please conduct your own risk and bias assessment. #### Out-of-scope use The Pythia Suite is **not** intended for deployment. It is not in itself a product and cannot be used for human-facing interactions. Pythia models are English-language only, and are not suitable for translation or generating text in other languages. Pythia-70M has not been fine-tuned for downstream contexts in which language models are commonly deployed, such as writing genre prose, or commercial chatbots. This means Pythia-70M will **not** respond to a given prompt the way a product like ChatGPT does. This is because, unlike this model, ChatGPT was fine-tuned using methods such as Reinforcement Learning from Human Feedback (RLHF) to better “understand” human instructions. #### Limitations and biases The core functionality of a large language model is to take a string of text and predict the next token. The token deemed statistically most likely by the model need not produce the most “accurate” text. Never rely on Pythia-70M to produce factually accurate output. This model was trained on [the Pile](https://pile.eleuther.ai/), a dataset known to contain profanity and texts that are lewd or otherwise offensive. See [Section 6 of the Pile paper](https://arxiv.org/abs/2101.00027) for a discussion of documented biases with regards to gender, religion, and race. Pythia-70M may produce socially unacceptable or undesirable text, *even if* the prompt itself does not include anything explicitly offensive. If you plan on using text generated through, for example, the Hosted Inference API, we recommend having a human curate the outputs of this language model before presenting it to other people. Please inform your audience that the text was generated by Pythia-70M. ### Quickstart Pythia models can be loaded and used via the following code, demonstrated here for the third `pythia-70m-deduped` checkpoint: ```python from transformers import GPTNeoXForCausalLM, AutoTokenizer model = GPTNeoXForCausalLM.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) tokenizer = AutoTokenizer.from_pretrained( "EleutherAI/pythia-70m-deduped", revision="step3000", cache_dir="./pythia-70m-deduped/step3000", ) inputs = tokenizer("Hello, I am", return_tensors="pt") tokens = model.generate(**inputs) tokenizer.decode(tokens[0]) ``` Revision/branch `step143000` corresponds exactly to the model checkpoint on the `main` branch of each model.<br> For more information on how to use all Pythia models, see [documentation on GitHub](https://github.com/EleutherAI/pythia). ### Training #### Training data [The Pile](https://pile.eleuther.ai/) is an 825GiB general-purpose dataset in English. It was created by EleutherAI specifically for training large language models. It contains texts from 22 diverse sources, roughly broken down into five categories: academic writing (e.g. arXiv), internet (e.g. CommonCrawl), prose (e.g. Project Gutenberg), dialogue (e.g. YouTube subtitles), and miscellaneous (e.g.
GitHub, Enron Emails). See [the Pile paper](https://arxiv.org/abs/2101.00027) for a breakdown of all data sources, methodology, and a discussion of ethical implications. Consult [the datasheet](https://arxiv.org/abs/2201.07311) for more detailed documentation about the Pile and its component datasets. The Pile can be downloaded from the [official website](https://pile.eleuther.ai/), or from a [community mirror](https://the-eye.eu/public/AI/pile/).<br> The Pile was **not** deduplicated before being used to train Pythia-70M. #### Training procedure All models were trained on the exact same data, in the exact same order. Each model saw 299,892,736,000 tokens during training, and 143 checkpoints for each model are saved every 2,097,152,000 tokens, spaced evenly throughout training. This corresponds to training for just under 1 epoch on the Pile for non-deduplicated models, and about 1.5 epochs on the deduplicated Pile. All *Pythia* models trained for the equivalent of 143000 steps at a batch size of 2,097,152 tokens. Two batch sizes were used: 2M and 4M. Models with a batch size of 4M tokens listed were originally trained for 71500 steps instead, with checkpoints every 500 steps. The checkpoints on Hugging Face are renamed for consistency with all 2M batch models, so `step1000` is the first checkpoint for `pythia-1.4b` that was saved (corresponding to step 500 in training), and `step1000` is likewise the first `pythia-6.9b` checkpoint that was saved (corresponding to 1000 “actual” steps).<br> See [GitHub](https://github.com/EleutherAI/pythia) for more details on training procedure, including [how to reproduce it](https://github.com/EleutherAI/pythia/blob/main/README.md#reproducing-training).<br> Pythia uses the same tokenizer as [GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b). ### Evaluations All 16 *Pythia* models were evaluated using the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness). You can access the results by model and step at `results/json/*` in the [GitHub repository](https://github.com/EleutherAI/pythia/tree/main/results/json).<br> Expand the sections below to see plots of evaluation results for all Pythia and Pythia-deduped models compared with OPT and BLOOM. <details> <summary>LAMBADA – OpenAI</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/lambada_openai.png" style="width:auto"/> </details> <details> <summary>Physical Interaction: Question Answering (PIQA)</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/piqa.png" style="width:auto"/> </details> <details> <summary>WinoGrande</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/winogrande.png" style="width:auto"/> </details> <details> <summary>AI2 Reasoning Challenge—Challenge Set</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/arc_challenge.png" style="width:auto"/> </details> <details> <summary>SciQ</summary> <img src="/EleutherAI/pythia-12b/resolve/main/eval_plots/sciq.png" style="width:auto"/> </details> ### Naming convention and parameter count *Pythia* models were renamed in January 2023. It is possible that the old naming convention still persists in some documentation by accident. The current naming convention (70M, 160M, etc.) is based on total parameter count.
<figure style="width:32em"> | current Pythia suffix | old suffix | total params | non-embedding params | | --------------------: | ---------: | -------------: | -------------------: | | 70M | 19M | 70,426,624 | 18,915,328 | | 160M | 125M | 162,322,944 | 85,056,000 | | 410M | 350M | 405,334,016 | 302,311,424 | | 1B | 800M | 1,011,781,632 | 805,736,448 | | 1.4B | 1.3B | 1,414,647,808 | 1,208,602,624 | | 2.8B | 2.7B | 2,775,208,960 | 2,517,652,480 | | 6.9B | 6.7B | 6,857,302,016 | 6,444,163,072 | | 12B | 13B | 11,846,072,320 | 11,327,027,200 | </figure>
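Beyond the single-checkpoint quickstart above, the card's main selling point is the set of intermediate checkpoints hosted as branches. Here is a short sketch of sweeping a few revisions for training-dynamics analysis; the branch names follow the `stepN` convention described in the card, and the loop body is a placeholder for whatever measurement you care about.

```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

repo = "EleutherAI/pythia-70m-deduped"
tokenizer = AutoTokenizer.from_pretrained(repo)

# "step143000" corresponds to the final model on the `main` branch.
for revision in ["step3000", "step71000", "step143000"]:
    model = GPTNeoXForCausalLM.from_pretrained(repo, revision=revision)
    inputs = tokenizer("Hello, I am", return_tensors="pt")
    tokens = model.generate(**inputs)
    print(revision, tokenizer.decode(tokens[0]))
```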
11,773
[ [ -0.0264129638671875, -0.062347412109375, 0.0203704833984375, 0.0018186569213867188, -0.0169677734375, -0.01322174072265625, -0.0172119140625, -0.033203125, 0.015045166015625, 0.0175323486328125, -0.022613525390625, -0.025848388671875, -0.034759521484375, -0....
nickmuchi/finbert-tone-finetuned-finance-topic-classification
2023-07-12T16:41:11.000Z
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "twitter-financial-topic-classification", "financial", "stocks", "twitter", "dataset:zeroshot/twitter-financial-news-topic", "model-index", "endpoints_compatible", "has_space", "region:us" ...
text-classification
nickmuchi
null
null
nickmuchi/finbert-tone-finetuned-finance-topic-classification
41
5,538
transformers
2022-12-30T16:22:43
--- tags: - generated_from_trainer - twitter-financial-topic-classification - financial - stocks - twitter datasets: - zeroshot/twitter-financial-news-topic metrics: - accuracy - f1 - precision - recall widget: - text: >- Here are Thursday's biggest analyst calls: Apple, Amazon, Tesla, Palantir, DocuSign, Exxon &amp; more example_title: Analyst Update' - text: >- LIVE: ECB surprises with 50bps hike, ending its negative rate era. President Christine Lagarde is taking questions example_title: Fed | Central Banks - text: >- Goldman Sachs traders countered the industry’s underwriting slump with revenue gains that raced past analysts’ estimates. The trading operation posted a 32% surge in second-quarter revenue that included another banner period for fixed income example_title: Company | Product News - text: >- China Evergrande Group’s onshore bond holders rejected a plan by the distressed developer to further extend a bond payment which was due on Friday. Rebecca Choong Wilkins reports on Bloomberg Television example_title: Treasuries | Corporate Debt - text: >- Investing Club: Morgan Stanley's dividend, buyback pay us for our patience after quarterly missteps example_title: Dividend - text: >- Investing Club: Our takes on Amazon and Apple heading into next week's earnings reports example_title: Earnings - text: >- JUST RELEASED: Oil Price Dynamics Report → Over the past week, oil prices decreased as supply expectations rose and anticipated demand remained unchanged. example_title: Energy | Oil - text: >- Delta Air Lines fell short of profit expectations in the second quarter and said high operating costs will persist through the rest of the year. Bloomberg Opinion's Brooke Sutherland has more on 'Bloomberg Markets' example_title: Financials - text: >- BREAKING: The Indian rupee plummets to a record 80 per US dollar as foreign investors pull out money from the nation's stocks example_title: Currencies - text: >- Twitter and Elon Musk are now in a high stakes/high risk situation, one analyst said. example_title: General News | Opinion - text: >- Copper prices are signaling that investors are bearish on the economy, strategist says example_title: Gold | Metals | Materials - text: >- Johnson & Johnson CFO Joe Wolk says the company is positioned for the long term and the plans for its consumer operations include an IPO. He speaks on 'Bloomberg Markets' example_title: IPO - text: >- Company and Elon Musk are set for a blockbuster courtroom battle over Musk’s attempt to terminate his $44 billion acquisition deal for $TWTR, according to Wedbush analyst Dan Ives. example_title: Legal | Regulation - text: >- Amazon to buy primary health care provider One Medical for roughly $3.9 billion example_title: M&A | Investments - text: >- Barclays Senior Analyst For Equity Research Jason Goldberg: 'Price expectations have changed.'' The global markets business recorded $6.47 billion of revenue in the quarter with rates, commodities and currencies helping drive the fixed-income gains. example_title: Macro - text: >- US stocks push higher in a volatile session. 
We break it down on The Countdown to The Close example_title: Markets - text: Zelenskyy fires security chiefs over ‘treasonous’ officials example_title: Politics - text: Airbnb co-founder Joe Gebbia is stepping down example_title: Personnel Change - text: French power group EDF requests its shares be suspended example_title: Stock Commentary - text: >- JUST IN: Alibaba shares slide as much as 5.7%, bringing this week's slump to over 15%, after it reportedly faced a data-theft inquiry example_title: Stock Movement model-index: - name: finbert-tone-finetuned-finance-topic-classification results: - task: name: Text Classification type: text-classification dataset: name: twitter-financial-news-topic type: finance metrics: - type: F1 name: F1 value: 0.910647 - type: accuracy name: accuracy value: 0.910615 pipeline_tag: text-classification --- # finbert-tone-finetuned-finance-topic-classification This model is a fine-tuned version of [yiyanghkust/finbert-tone](https://huggingface.co/yiyanghkust/finbert-tone) on the [Twitter Financial News Topic](https://huggingface.co/datasets/zeroshot/twitter-financial-news-topic) dataset. It achieves the following results on the evaluation set: - Loss: 0.509021 - Accuracy: 0.910615 - F1: 0.910647 - Precision: 0.911335 - Recall: 0.910615 ## Model description The model classifies a given tweet into one of 20 financial topics. Given the unbalanced distribution of the class labels, class weights were adjusted to pay more attention to the under-sampled labels, which should increase overall performance. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 1.0 | 266 | 0.5152 | 0.8552 | 0.8504 | 0.8508 | 0.8552 | | 0.7618 | 2.0 | 532 | 0.3999 | 0.8790 | 0.8781 | 0.8842 | 0.8790 | | 0.7618 | 3.0 | 798 | 0.3628 | 0.8943 | 0.8940 | 0.8958 | 0.8943 | | 0.16 | 4.0 | 1064 | 0.3776 | 0.8997 | 0.9001 | 0.9025 | 0.8997 | | 0.16 | 5.0 | 1330 | 0.4286 | 0.8999 | 0.9002 | 0.9022 | 0.8999 | | 0.058 | 6.0 | 1596 | 0.4500 | 0.9043 | 0.9042 | 0.9055 | 0.9043 | | 0.058 | 7.0 | 1862 | 0.4689 | 0.9021 | 0.9017 | 0.9026 | 0.9021 | | 0.0267 | 8.0 | 2128 | 0.4918 | 0.9031 | 0.9029 | 0.9039 | 0.9031 | | 0.0267 | 9.0 | 2394 | 0.5030 | 0.9048 | 0.9049 | 0.9060 | 0.9048 | | 0.0177 | 10.0 | 2660 | 0.5052 | 0.9033 | 0.9034 | 0.9044 | 0.9033 | | 0.0177 | 11.0 | 2926 | 0.5265 | 0.9036 | 0.9034 | 0.9055 | 0.9036 | | 0.013 | 12.0 | 3192 | 0.5267 | 0.9041 | 0.9041 | 0.9058 | 0.9041 | | 0.013 | 13.0 | 3458 | 0.5090 | 0.9106 | 0.9106 | 0.9113 | 0.9106 | | 0.0105 | 14.0 | 3724 | 0.5315 | 0.9067 | 0.9067 | 0.9080 | 0.9067 | | 0.0105 | 15.0 | 3990 | 0.5339 | 0.9084 | 0.9084 | 0.9093 | 0.9084 | | 0.0068 | 16.0 | 4256 | 0.5414 | 0.9072 | 0.9074 | 0.9088 | 0.9072 | | 0.0051 | 17.0 | 4522 | 0.5460 | 0.9092 | 0.9091 | 0.9102 | 0.9092 |
| 0.0051 | 18.0 | 4788 | 0.5438 | 0.9072 | 0.9073 | 0.9081 | 0.9072 | | 0.0035 | 19.0 | 5054 | 0.5474 | 0.9072 | 0.9073 | 0.9080 | 0.9072 | | 0.0035 | 20.0 | 5320 | 0.5484 | 0.9079 | 0.9080 | 0.9087 | 0.9079 | ### Framework versions - Transformers 4.25.1 - Pytorch 1.13.0+cu116 - Datasets 2.8.0 - Tokenizers 0.13.2
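The card lists hyperparameters and metrics but no inference snippet. A minimal classification sketch follows; the topic labels come from the model's config, and the expected label here is inferred from the card's own widget examples rather than verified output.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="nickmuchi/finbert-tone-finetuned-finance-topic-classification",
)

tweet = (
    "Amazon to buy primary health care provider One Medical "
    "for roughly $3.9 billion"
)
print(classifier(tweet))
# Expected to map to the "M&A | Investments" topic, per the widget examples above.
```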
7,719
[ [ -0.0460205078125, -0.0380859375, 0.003253936767578125, 0.0004775524139404297, -0.006984710693359375, 0.005374908447265625, -0.004787445068359375, -0.0085601806640625, 0.04248046875, 0.0290985107421875, -0.050872802734375, -0.0596923828125, -0.044342041015625, ...
speechlessai/speechless-codellama-34b-v1.0
2023-10-06T04:26:02.000Z
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "code", "en", "dataset:garage-bAInd/Open-Platypus", "arxiv:2308.12950", "license:llama2", "model-index", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
speechlessai
null
null
speechlessai/speechless-codellama-34b-v1.0
0
5,537
transformers
2023-09-12T12:45:55
--- language: - en library_name: transformers pipeline_tag: text-generation datasets: - garage-bAInd/Open-Platypus tags: - llama-2 - code license: llama2 model-index: - name: SpeechlessCoder results: - task: type: text-generation dataset: type: openai_humaneval name: HumanEval metrics: - name: pass@1 type: pass@1 value: 70.12 verified: false --- <p><h1> speechless-codellama-34b-v1.0 </h1></p> > 2023.10.06: [uukuguy/speechless-codellama-34b-v2.0](https://huggingface.co/uukuguy/speechless-codellama-34b-v2.0) released (humaneval-python pass@1: 75.61). Fine-tuned from Phind/Phind-CodeLlama-34B on the Dolphin (1% GPT4), Orca (1% GPT4), and Platypus (100%) datasets. | humaneval metrics | pass@1 | | --- | --- | | humaneval-python | 70.12 | [Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard) Phind-CodeLlama-34B-v2: 71.95 WizardCoder-Python-34B-V1.0: 70.73 Phind-CodeLlama-34B-v1: 65.85 WizardCoder-Python-13B-V1.0: 62.19 CodeLlama-34B-Python: 53.29 CodeLlama-34B-Instruct: 50.79 CodeLlama-13B-Instruct: 50.6 CodeLlama-34B: 45.11 CodeLlama-13B-Python: 42.89 CodeLlama-13B: 35.07 [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) | Metric | Value | | --- | --- | | ARC | 52.47 | | HellaSwag | 74.13 | | MMLU | 53.47 | | TruthfulQA | 47.14 | | Average | 56.80 | # **Code Llama** Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 13B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom. | | Base Model | Python | Instruct | | --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | | 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) | | 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | | 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) | ## Model Use To use this model, please make sure to install transformers from `main` until the next version is released: ```bash pip install git+https://github.com/huggingface/transformers.git@main accelerate ``` Model capabilities: - [x] Code completion. - [x] Infilling. - [ ] Instructions / chat. - [ ] Python specialist.
```python from transformers import AutoTokenizer import transformers import torch model = "codellama/CodeLlama-13b-hf" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) sequences = pipeline( 'import socket\n\ndef ping_exponential_backoff(host: str):', do_sample=True, top_k=10, temperature=0.1, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, max_length=200, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ## Model Details Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs). **Model Developers** Meta **Variations** Code Llama comes in three model sizes, and three variants: * Code Llama: base models designed for general code synthesis and understanding * Code Llama - Python: designed specifically for Python * Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B and 34B parameters. **This repository contains the base version of the 13B parameters model.** **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. **Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950). ## Intended Use **Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## Hardware and Software **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.
## Training Data All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). ## Evaluation Results See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. ## Ethical Considerations and Limitations Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
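The capability checklist above ticks infilling, but the usage snippet only demonstrates completion. Below is a sketch of fill-in-the-middle using the `<FILL_ME>` sentinel that the Code Llama tokenizer in transformers understands, shown on the same base `codellama/CodeLlama-13b-hf` checkpoint the card's example uses; whether this particular fine-tuned merge retains infilling quality is not verified here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# The tokenizer turns the <FILL_ME> marker into a prefix/suffix infilling prompt.
prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(inputs["input_ids"], max_new_tokens=128)

filling = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(prompt.replace("<FILL_ME>", filling))
```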
8,099
[ [ -0.03131103515625, -0.051116943359375, 0.0192413330078125, 0.042816162109375, -0.01149749755859375, 0.00765228271484375, -0.0124359130859375, -0.043914794921875, 0.0204010009765625, 0.029632568359375, -0.0310516357421875, -0.048004150390625, -0.04119873046875, ...
huggingtweets/jerma985
2023-05-05T09:40:39.000Z
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
huggingtweets
null
null
huggingtweets/jerma985
0
5,536
transformers
2023-05-05T09:40:29
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/803601382943162368/F36Z7ypy_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Jerma</div> <div style="text-align: center; font-size: 14px;">@jerma985</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Jerma. | Data | Jerma | | --- | --- | | Tweets downloaded | 2732 | | Retweets | 111 | | Short tweets | 286 | | Tweets kept | 2335 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/sd9hzc1n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jerma985's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/372xxdz8) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/372xxdz8/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/jerma985') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
3,472
[ [ -0.025146484375, -0.064453125, 0.0257110595703125, 0.0162811279296875, -0.0178680419921875, 0.01020050048828125, -0.006031036376953125, -0.036346435546875, 0.024627685546875, 0.007366180419921875, -0.0728759765625, -0.032867431640625, -0.050079345703125, -0....
sauce1337/AppleSauce-L2-13b
2023-09-14T19:41:24.000Z
[ "transformers", "safetensors", "llama", "text-generation", "license:cc-by-nc-4.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
sauce1337
null
null
sauce1337/AppleSauce-L2-13b
1
5,536
transformers
2023-09-13T02:25:04
--- license: cc-by-nc-4.0 --- ok, it's an apple. ![](okapple.png) would you role play with an apple? maybe. would you ask an apple complicated logical questions? maybe. use alpaca format? maybe. > TheBloke GGUF and GPTQ:\ > https://huggingface.co/TheBloke/AppleSauce-L2-13B-GGUF \ > https://huggingface.co/TheBloke/AppleSauce-L2-13B-GPTQ
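For reference, the "alpaca format" the card shrugs at is the standard Alpaca instruction template shown below; whether this merge actually prefers it is, as the card says, a "maybe".

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```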
342
[ [ -0.03668212890625, -0.10809326171875, 0.03759765625, 0.0283050537109375, -0.03863525390625, -0.0216827392578125, 0.01299285888671875, -0.047149658203125, 0.057586669921875, 0.046783447265625, -0.056121826171875, -0.0115509033203125, -0.05670166015625, -0.007...
kevinpro/Vicuna-13B-CoT
2023-06-04T08:05:43.000Z
[ "transformers", "pytorch", "llama", "text-generation", "code", "en", "dataset:QingyiSi/Alpaca-CoT", "arxiv:1910.09700", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
kevinpro
null
null
kevinpro/Vicuna-13B-CoT
4
5,535
transformers
2023-06-01T03:12:22
--- datasets: - QingyiSi/Alpaca-CoT language: - en library_name: transformers pipeline_tag: text-generation tags: - code --- # Vicuna-13B-CoT SFT to enhance the CoT capability of Vicuna. If you find the model helpful, please click "like" to support us. We also welcome feedback on your usage experience and any issues you encounter in the issues section. Another 7B version: https://huggingface.co/kevinpro/Vicuna-7B-CoT ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains.
--> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
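Since the card documents little beyond the SFT goal, here is a hedged sketch of prompting the model for step-by-step reasoning; the prompt template used during SFT is not documented, so this zero-shot chain-of-thought cue is illustrative only.

```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="kevinpro/Vicuna-13B-CoT",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Zero-shot CoT cue; the actual SFT prompt format is undocumented.
prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Let's think step by step."
)
print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
```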
5,282
[ [ -0.043060302734375, -0.044158935546875, 0.0386962890625, 0.01456451416015625, -0.0278167724609375, -0.028228759765625, 0.00836181640625, -0.045623779296875, 0.011627197265625, 0.053985595703125, -0.048858642578125, -0.04925537109375, -0.042449951171875, -0.0...
tianyil1/denas-llama2
2023-08-31T07:59:21.000Z
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "license:llama2", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
tianyil1
null
null
tianyil1/denas-llama2
0
5,535
transformers
2023-08-30T08:26:48
--- license: llama2 --- # DENAS-LLAMA2 <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
5,176
[ [ -0.044952392578125, -0.0433349609375, 0.032257080078125, 0.0121612548828125, -0.028167724609375, -0.021942138671875, 0.01232147216796875, -0.048004150390625, 0.0181884765625, 0.04949951171875, -0.055633544921875, -0.049957275390625, -0.0491943359375, -0.0021...
uukuguy/speechless-codellama-dolphin-orca-platypus-13b
2023-09-29T21:12:39.000Z
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "code", "en", "dataset:ehartford/dolphin", "dataset:Open-Orca/OpenOrca", "dataset:garage-bAInd/Open-Platypus", "arxiv:2308.12950", "license:llama2", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
uukuguy
null
null
uukuguy/speechless-codellama-dolphin-orca-platypus-13b
2
5,535
transformers
2023-09-05T07:34:03
--- language: - en library_name: transformers pipeline_tag: text-generation datasets: - ehartford/dolphin - Open-Orca/OpenOrca - garage-bAInd/Open-Platypus tags: - llama-2 - code license: llama2 --- <p><h1> speechless-codellama-dolphin-orca-platypus-13b </h1></p> Fine-tuned codellama/CodeLlama-13b-hf for 1 epoch on the Dolphin (1% GPT4), Orca (1% GPT4), and Platypus (100%) datasets. | Metric | Value | | --- | --- | | ARC | 44.80 | | HellaSwag | 68.60 | | MMLU | 44.03 | | TruthfulQA | 46.28 | | Average | 50.93 | # **Code Llama** Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 13B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom. | | Base Model | Python | Instruct | | --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | | 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) | | 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | | 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) | ## Model Use To use this model, please make sure to install transformers from `main` until the next version is released: ```bash pip install git+https://github.com/huggingface/transformers.git@main accelerate ``` Model capabilities: - [x] Code completion. - [x] Infilling. - [ ] Instructions / chat. - [ ] Python specialist. ```python from transformers import AutoTokenizer import transformers import torch model = "codellama/CodeLlama-13b-hf" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) sequences = pipeline( 'import socket\n\ndef ping_exponential_backoff(host: str):', do_sample=True, top_k=10, temperature=0.1, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, max_length=200, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ## Model Details Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs). **Model Developers** Meta **Variations** Code Llama comes in three model sizes, and three variants: * Code Llama: base models designed for general code synthesis and understanding * Code Llama - Python: designed specifically for Python * Code Llama - Instruct: for instruction following and safer deployment All variants are available in sizes of 7B, 13B and 34B parameters.
**This repository contains the base version of the 13B parameters model.** **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture. **Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023. **Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback. **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) **Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950). ## Intended Use **Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications. **Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants. ## Hardware and Software **Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster. **Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program. ## Training Data All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details). ## Evaluation Results See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper. ## Ethical Considerations and Limitations Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
7,208
[ [ -0.0313720703125, -0.051300048828125, 0.0207061767578125, 0.03887939453125, -0.0182037353515625, 0.0068206787109375, -0.00942230224609375, -0.04443359375, 0.020416259765625, 0.03265380859375, -0.031280517578125, -0.04534912109375, -0.043182373046875, 0.02218...
speechlessai/speechless-codellama-dolphin-orca-platypus-13b
2023-09-29T20:21:16.000Z
[ "transformers", "pytorch", "llama", "text-generation", "llama-2", "en", "dataset:ehartford/dolphin", "dataset:Open-Orca/OpenOrca", "dataset:garage-bAInd/Open-Platypus", "arxiv:2308.12950", "license:llama2", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
speechlessai
null
null
speechlessai/speechless-codellama-dolphin-orca-platypus-13b
0
5,534
transformers
2023-09-12T05:08:49
--- language: - en library_name: transformers pipeline_tag: text-generation datasets: - ehartford/dolphin - Open-Orca/OpenOrca - garage-bAInd/Open-Platypus tags: - llama-2 license: llama2 --- <p><h1> speechlessai/speechless-codellama-dolphin-orca-platypus-13b </h1></p> Fine-tuned codellama/CodeLlama-13b-hf for 2 epochs on the Dolphin (5% GPT4), Orca (5% GPT4), and Platypus (100%) datasets. | Metric | Value | | --- | --- | | humaneval-python | 43.29 | [Big Code Models Leaderboard](https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard) CodeLlama-34B-Python: 53.29 CodeLlama-34B-Instruct: 50.79 CodeLlama-13B-Instruct: 50.6 CodeLlama-34B: 45.11 CodeLlama-13B-Python: 42.89 CodeLlama-13B: 35.07 [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) | Metric | Value | | --- | --- | | ARC | 45.82 | | HellaSwag | 67.71 | | MMLU | 45.88 | | TruthfulQA | 44.67 | | Average | 51.02 | # **Code Llama** Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 13B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom. | | Base Model | Python | Instruct | | --- | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- | | 7B | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) | | 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) | | 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) | ## Model Use To use this model, please make sure to install transformers from `main` until the next version is released: ```bash pip install git+https://github.com/huggingface/transformers.git@main accelerate ``` Model capabilities: - [x] Code completion. - [x] Infilling. - [ ] Instructions / chat. - [ ] Python specialist. ```python from transformers import AutoTokenizer import transformers import torch model = "codellama/CodeLlama-13b-hf" tokenizer = AutoTokenizer.from_pretrained(model) pipeline = transformers.pipeline( "text-generation", model=model, torch_dtype=torch.float16, device_map="auto", ) sequences = pipeline( 'import socket\n\ndef ping_exponential_backoff(host: str):', do_sample=True, top_k=10, temperature=0.1, top_p=0.95, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id, max_length=200, ) for seq in sequences: print(f"Result: {seq['generated_text']}") ``` ## Model Details Note: Use of this model is governed by the Meta license. Meta developed and publicly released the Code Llama family of large language models (LLMs).
**Model Developers** Meta

**Variations** Code Llama comes in three model sizes, and three variants:

* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment

All variants are available in sizes of 7B, 13B and 34B parameters.

**This repository contains the base version of the 13B parameters model.**

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.

**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.

**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).

## Intended Use

**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.

**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.

## Hardware and Software

**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.

**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.

## Training Data

All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).

## Evaluation Results

See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

## Ethical Considerations and Limitations

Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts.
Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
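The usage snippet in this card loads the base `codellama/CodeLlama-13b-hf` checkpoint rather than the fine-tune the card describes. Below is a minimal sketch for loading the fine-tuned model itself; the repo id is taken from the card's title, and the generation settings are illustrative assumptions, not recommended values.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from the card's title; adjust if the hub path differs.
repo = "speechlessai/speechless-codellama-dolphin-orca-platypus-13b"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # half precision to fit a 13B model on one large GPU
    device_map="auto",
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.2,
    top_p=0.95,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```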
7,629
[ [ -0.0318603515625, -0.051239013671875, 0.0213623046875, 0.039215087890625, -0.01448822021484375, 0.005664825439453125, -0.0133056640625, -0.0440673828125, 0.0199737548828125, 0.03131103515625, -0.0310821533203125, -0.046417236328125, -0.04254150390625, 0.0187...
openaccess-ai-collective/manticore-30b-chat-pyg-alpha
2023-05-24T21:15:41.000Z
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "en", "dataset:anon8231489123/ShareGPT_Vicuna_unfiltered", "dataset:ehartford/wizard_vicuna_70k_unfiltered", "dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered", "dataset:QingyiSi/Alpaca-CoT", "dataset:teknium/GPT...
text-generation
openaccess-ai-collective
null
null
openaccess-ai-collective/manticore-30b-chat-pyg-alpha
13
5,533
transformers
2023-05-24T17:21:27
---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPT4-LLM-Cleaned
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- ewof/code-alpaca-instruct-unfiltered
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

# Manticore 30B Chat (ALPHA)

- Alpha release of a checkpoint taken before the train and eval loss spikes. Additionally, there seems to be some alignment, which is easily jailbroken.

**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**

Manticore 30B Chat builds on Manticore v1 with new datasets, including a de-duped subset of the Pygmalion dataset. It also removes all Alpaca-style prompts using `###` in favor of chat-only style prompts using `USER:`, `ASSISTANT:`, as well as [pygmalion/metharme prompting](https://huggingface.co/PygmalionAI/metharme-7b#prompting) using `<|system|>`, `<|user|>` and `<|model|>` tokens.

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/EqrvvehG) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org)

# Training Datasets

Manticore 30B Chat is a Llama 30B model fine-tuned on the following datasets along with the datasets from the original Manticore 30B. **Manticore 30B Chat was trained on effectively 40% of the datasets below due to only training for 0.4 epochs.**

- de-duped Pygmalion dataset, filtered down to RP data
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented
- hellaswag, updated for detailed explanations w 30K+ rows
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented
- [ewof/code-alpaca-instruct-unfiltered](https://huggingface.co/datasets/ewof/code-alpaca-instruct-unfiltered)

Manticore 30B

- [ShareGPT](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) - based on a cleaned and de-duped subset
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [Wizard-Vicuna](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPT4-LLM-Cleaned](https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - 5K row subset of instruct augmented for concise responses, derived from the `train` split
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization

Not added from Manticore 13B:

- mmlu - mmlu datasets were not added to this model as the `test` split is used for benchmarks

# Shoutouts

Special thanks to Nanobit for helping with Axolotl, TheBloke for quantizing these models so they are more accessible to all, ehartford for cleaned datasets, and 0x000011b for the RP dataset.

# Demo

Try out the model in HF Spaces.
The demo uses a quantized GGML version of the model to quickly return predictions on smaller GPUs (and even CPUs). The quantized GGML may have some minimal loss in model quality.

- https://huggingface.co/spaces/openaccess-ai-collective/manticore-13b-chat-pyg

## Release Notes

- https://wandb.ai/wing-lian/manticore-13b-v2/runs/ij10c6m3

## Build

Manticore was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8xA100 80GB.

- 0.4 epochs took approximately 14 hours. No further epochs will be released for the alpha.
- The configuration to duplicate this build is provided in this repo's [/config folder](https://huggingface.co/openaccess-ai-collective/manticore-30b-chat-pyg-alpha/tree/main/configs).

## Bias, Risks, and Limitations

Manticore has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Manticore was fine-tuned from the base model Llama 30B; please refer to its model card's Limitations Section for relevant information.

## Examples

TBD
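The card describes two prompt styles, the `USER:`/`ASSISTANT:` chat format and the pygmalion/metharme special tokens, but gives no concrete example. The helpers below are a minimal sketch of both; the exact newline and spacing conventions around the tokens are assumptions, not documented behavior.

```python
def chat_prompt(turns, system=None):
    """Build a USER:/ASSISTANT: style prompt as described in the card.

    `turns` is a list of (user, assistant) pairs; leave the final
    assistant reply empty for the model to complete.
    """
    lines = []
    if system:
        lines.append(system)
    for user, assistant in turns:
        lines.append(f"USER: {user}")
        lines.append(f"ASSISTANT: {assistant}" if assistant else "ASSISTANT:")
    return "\n".join(lines)


def metharme_prompt(system, user):
    # Pygmalion/metharme-style prompting with special tokens, per the card.
    return f"<|system|>{system}<|user|>{user}<|model|>"


print(chat_prompt([("What is the capital of France?", "")]))
print(metharme_prompt("Enter roleplay mode.", "Hello there!"))
```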
4,653
[ [ -0.04833984375, -0.0300445556640625, -0.0037136077880859375, 0.013397216796875, -0.00998687744140625, 0.0037288665771484375, -0.0147705078125, -0.0618896484375, 0.038970947265625, 0.00960540771484375, -0.0582275390625, -0.03546142578125, -0.0283660888671875, ...
digitous/Janin-R
2023-02-21T00:49:26.000Z
[ "transformers", "pytorch", "gptj", "text-generation", "en", "license:creativeml-openrail-m", "endpoints_compatible", "has_space", "region:us" ]
text-generation
digitous
null
null
digitous/Janin-R
1
5,532
transformers
2023-02-20T04:22:02
---
license: creativeml-openrail-m
language:
- en
---

This is a (25/25)/50 merge of Mr. Seeker's GPT-J Janeway and Shinen models and GPT-R. If interested, please visit digitous/GPT-R and KoboldAI to get an understanding of what these models are comprised of.

This model is not intended for minors.

Mr. Seeker, maker of incredible fine-tuned models:
https://huggingface.co/KoboldAI/GPT-J-6B-Janeway
https://huggingface.co/KoboldAI/GPT-J-6B-Shinen

Donate to help ameliorate server fees:
https://www.patreon.com/mrseeker

GPT-Ronin:
https://huggingface.co/digitous/GPT-R

Weight merge script credit to Concedo:
https://huggingface.co/concedo
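For readers curious what a "(25/25)/50 merge" means mechanically, below is a minimal sketch of a linear weight average. This is not Concedo's actual script (see the linked repo for that); it assumes all checkpoints share the same architecture and parameter names, and that the weights sum to 1.0.

```python
import torch


def merge_state_dicts(weighted_models):
    """Linearly average parameter tensors from several checkpoints.

    `weighted_models` is a list of (state_dict, weight) pairs, e.g.
    [(janeway_sd, 0.25), (shinen_sd, 0.25), (gptr_sd, 0.5)].
    """
    merged = {}
    first_sd, _ = weighted_models[0]
    for name in first_sd:
        # Weighted sum across checkpoints; cast to float to avoid
        # precision loss when the checkpoints are stored in fp16.
        merged[name] = sum(w * sd[name].float() for sd, w in weighted_models)
    return merged
```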
642
[ [ -0.0224609375, -0.01873779296875, 0.031707763671875, 0.0078887939453125, -0.0006775856018066406, -0.0006685256958007812, 0.02410888671875, -0.02178955078125, 0.0311431884765625, 0.045379638671875, -0.0670166015625, -0.01448822021484375, -0.04583740234375, -0...
uukuguy/speechless-codellama-orca-airoboros-13b-0.10e
2023-09-04T10:17:20.000Z
[ "transformers", "safetensors", "llama", "text-generation", "llama-2", "en", "dataset:garage-bAInd/Open-Platypus", "arxiv:2308.12950", "license:llama2", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
uukuguy
null
null
uukuguy/speechless-codellama-orca-airoboros-13b-0.10e
0
5,532
transformers
2023-09-04T09:49:05
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- garage-bAInd/Open-Platypus
tags:
- llama-2
license: llama2
---

<p><h1> speechless-codellama-orca-airoboros-13b </h1></p>

Fine-tuned from codellama/CodeLlama-13b-hf on the Orca and Airoboros datasets.

| Metric | Value |
| --- | --- |
| ARC | |
| HellaSwag | |
| MMLU | |
| TruthfulQA | |
| Average | |

# **Code Llama**

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 13B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.

|     | Base Model | Python | Instruct |
| --- | --- | --- | --- |
| 7B  | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |

## Model Use

To use this model, please make sure to install transformers from `main` until the next version is released:

```bash
pip install git+https://github.com/huggingface/transformers.git@main accelerate
```

Model capabilities:

- [x] Code completion.
- [x] Infilling.
- [ ] Instructions / chat.
- [ ] Python specialist.

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "codellama/CodeLlama-13b-hf"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    'import socket\n\ndef ping_exponential_backoff(host: str):',
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Model Details

*Note: Use of this model is governed by the Meta license.*

Meta developed and publicly released the Code Llama family of large language models (LLMs).

**Model Developers** Meta

**Variations** Code Llama comes in three model sizes, and three variants:

* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment

All variants are available in sizes of 7B, 13B and 34B parameters.

**This repository contains the base version of the 13B parameters model.**

**Input** Models input text only.

**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.

**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.

**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).

## Intended Use

**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.

**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.

## Hardware and Software

**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.

**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.

## Training Data

All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).

## Evaluation Results

See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

## Ethical Considerations and Limitations

Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
7,079
[ [ -0.0281982421875, -0.05035400390625, 0.0177001953125, 0.0391845703125, -0.0174407958984375, 0.007595062255859375, -0.00820159912109375, -0.045074462890625, 0.0212860107421875, 0.032440185546875, -0.0306854248046875, -0.04449462890625, -0.04278564453125, 0.02...
uukuguy/speechless-codellama-orca-platypus-13b-0.10e
2023-09-04T05:20:32.000Z
[ "transformers", "safetensors", "llama", "text-generation", "llama-2", "en", "dataset:garage-bAInd/Open-Platypus", "arxiv:2308.12950", "license:llama2", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
uukuguy
null
null
uukuguy/speechless-codellama-orca-platypus-13b-0.10e
0
5,531
transformers
2023-09-04T05:06:37
---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- garage-bAInd/Open-Platypus
tags:
- llama-2
license: llama2
---

<p><h1> speechless-codellama-orca-platypus-13b </h1></p>

Fine-tuned from codellama/CodeLlama-13b-hf on the Orca and Platypus datasets.

| Metric | Value |
| --- | --- |
| ARC | |
| HellaSwag | |
| MMLU | |
| TruthfulQA | |
| Average | |

# **Code Llama**

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the repository for the base 13B version in the Hugging Face Transformers format. This model is designed for general code synthesis and understanding. Links to other models can be found in the index at the bottom.

|     | Base Model | Python | Instruct |
| --- | --- | --- | --- |
| 7B  | [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) | [codellama/CodeLlama-7b-Python-hf](https://huggingface.co/codellama/CodeLlama-7b-Python-hf) | [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) |
| 13B | [codellama/CodeLlama-13b-hf](https://huggingface.co/codellama/CodeLlama-13b-hf) | [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf) | [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) |
| 34B | [codellama/CodeLlama-34b-hf](https://huggingface.co/codellama/CodeLlama-34b-hf) | [codellama/CodeLlama-34b-Python-hf](https://huggingface.co/codellama/CodeLlama-34b-Python-hf) | [codellama/CodeLlama-34b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-34b-Instruct-hf) |

## Model Use

To use this model, please make sure to install transformers from `main` until the next version is released:

```bash
pip install git+https://github.com/huggingface/transformers.git@main accelerate
```

Model capabilities:

- [x] Code completion.
- [x] Infilling.
- [ ] Instructions / chat.
- [ ] Python specialist.

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "codellama/CodeLlama-13b-hf"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    'import socket\n\ndef ping_exponential_backoff(host: str):',
    do_sample=True,
    top_k=10,
    temperature=0.1,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```

## Model Details

*Note: Use of this model is governed by the Meta license.*

Meta developed and publicly released the Code Llama family of large language models (LLMs).

**Model Developers** Meta

**Variations** Code Llama comes in three model sizes, and three variants:

* Code Llama: base models designed for general code synthesis and understanding
* Code Llama - Python: designed specifically for Python
* Code Llama - Instruct: for instruction following and safer deployment

All variants are available in sizes of 7B, 13B and 34B parameters.

**This repository contains the base version of the 13B parameters model.**

**Input** Models input text only.

**Output** Models generate text only.
**Model Architecture** Code Llama is an auto-regressive language model that uses an optimized transformer architecture.

**Model Dates** Code Llama and its variants have been trained between January 2023 and July 2023.

**Status** This is a static model trained on an offline dataset. Future versions of Code Llama - Instruct will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** More information can be found in the paper "[Code Llama: Open Foundation Models for Code](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)" or its [arXiv page](https://arxiv.org/abs/2308.12950).

## Intended Use

**Intended Use Cases** Code Llama and its variants are intended for commercial and research use in English and relevant programming languages. The base model Code Llama can be adapted for a variety of code synthesis and understanding tasks, Code Llama - Python is designed specifically to handle the Python programming language, and Code Llama - Instruct is intended to be safer to use for code assistant and generation applications.

**Out-of-Scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Code Llama and its variants.

## Hardware and Software

**Training Factors** We used custom training libraries. The training and fine-tuning of the released models have been performed on Meta’s Research Super Cluster.

**Carbon Footprint** In aggregate, training all 9 Code Llama models required 400K GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 65.3 tCO2eq, 100% of which were offset by Meta’s sustainability program.

## Training Data

All experiments reported here and the released models have been trained and fine-tuned using the same data as Llama 2 with different weights (see Section 2 and Table 1 in the [research paper](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/) for details).

## Evaluation Results

See evaluations for the main models and detailed ablations in Section 3 and safety evaluations in Section 4 of the research paper.

## Ethical Considerations and Limitations

Code Llama and its variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Code Llama’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. Therefore, before deploying any applications of Code Llama, developers should perform safety testing and tuning tailored to their specific applications of the model. Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-user-guide](https://ai.meta.com/llama/responsible-user-guide).
7,077
[ [ -0.0278778076171875, -0.0499267578125, 0.019927978515625, 0.039764404296875, -0.017791748046875, 0.00811767578125, -0.00952911376953125, -0.043792724609375, 0.0211181640625, 0.0333251953125, -0.0305328369140625, -0.04638671875, -0.042694091796875, 0.02082824...
quantumaikr/QuantumLM-70B-hf
2023-07-26T17:50:55.000Z
[ "transformers", "safetensors", "llama", "text-generation", "en", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
quantumaikr
null
null
quantumaikr/QuantumLM-70B-hf
2
5,530
transformers
2023-07-26T16:22:23
---
license: cc-by-nc-4.0
language:
- en
pipeline_tag: text-generation
---

# QuantumLM

## Model Description

`QuantumLM` is a Llama 2 70B model fine-tuned on the Guanaco and Guanaco-Unchained datasets.

## Usage

Start chatting with `QuantumLM-70B-hf` using the following code snippet:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("quantumaikr/QuantumLM-70B-hf")
model = AutoModelForCausalLM.from_pretrained("quantumaikr/QuantumLM-70B-hf", torch_dtype=torch.bfloat16, device_map="auto")

system_prompt = "### System:\nYou are QuantumLM, an AI that follows instructions extremely well. Help as much as you can. Remember, be safe, and don't do anything illegal.\n\n"

message = "Write me a poem please"
prompt = f"{system_prompt}### User: {message}\n\n### Assistant:\n"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, do_sample=True, top_p=0.95, top_k=0, max_new_tokens=256)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

QuantumLM should be used with this prompt format:

```
### System:
This is a system prompt, please behave and help the user.

### User:
Your prompt here

### Assistant:
The output of QuantumLM
```

## Use and Limitations

### Intended Use

These models are intended for research only, in adherence with the [CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.

### Limitations and bias

Although the aforementioned dataset helps to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use this model responsibly.

Contact us: hi@quantumai.kr
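For convenience, the prompt format above can be wrapped in a small helper; the function name and default system message below are illustrative, not part of the released code.

```python
def build_prompt(message: str,
                 system: str = ("You are QuantumLM, an AI that follows instructions "
                                "extremely well. Help as much as you can.")) -> str:
    # Mirrors the ### System / ### User / ### Assistant layout shown above.
    return f"### System:\n{system}\n\n### User: {message}\n\n### Assistant:\n"


print(build_prompt("Write me a poem please"))
```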
1,892
[ [ 0.0112457275390625, -0.06427001953125, 0.0252838134765625, 0.00322723388671875, -0.03778076171875, 0.003269195556640625, 0.0027675628662109375, -0.01537322998046875, 0.004535675048828125, 0.03985595703125, -0.0271148681640625, -0.042327880859375, -0.031158447265...
team-lucid/mptk-1b
2023-10-16T18:14:24.000Z
[ "transformers", "pytorch", "jax", "safetensors", "mpt", "text-generation", "ko", "arxiv:2108.12409", "license:apache-2.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
team-lucid
null
null
team-lucid/mptk-1b
0
5,530
transformers
2023-09-03T16:02:25
---
license: apache-2.0
language:
- ko
---

# MPTK-1B

MPTK-1B is a 1.3B-parameter decoder-only transformer language model trained on Korean, English, and code datasets. The model was trained on Cloud TPUs provided through Google's [TPU Research Cloud (TRC)](https://sites.research.google/trc/about/).

## Model Details

### Model Description

The model is based on MPT, an architecture that modifies the standard decoder-only transformer in a few ways:

- It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409).
- It does not use bias terms.

| Hyperparameter | Value |
|-----------------|-------|
| n_parameters | 1.3B |
| n_layers | 24 |
| n_heads | 16 |
| d_model | 2048 |
| vocab size | 50432 |
| sequence length | 2048 |

## Uses

## How to Get Started with the Model

Running in fp16 can produce NaNs, so we recommend running in fp32 or bf16.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("team-lucid/mptk-1b")
model = AutoModelForCausalLM.from_pretrained("team-lucid/mptk-1b")

pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')

with torch.autocast('cuda', dtype=torch.bfloat16):
    print(
        pipe(
            '대한민국의 수도는',
            max_new_tokens=100,
            do_sample=True,
        )
    )
```

## Training Details

### Training Data

The model was trained on Korean data such as [OSCAR](https://oscar-project.org/), mC4, Wikipedia, and Namuwiki, supplemented with subsets of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) and [The Stack](https://huggingface.co/datasets/bigcode/the-stack).

#### Training Hyperparameters

| **Hyperparameter** | **Value** |
|--------------------|------------|
| Precision | bfloat16 |
| Optimizer | Lion |
| Learning rate | 2e-4 |
| Batch size | 1024 |
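Since the card calls out ALiBi as the main architectural difference, here is a minimal sketch of how ALiBi's per-head attention biases are computed, following Press et al. (2021). MPT's actual implementation may differ in details such as slope handling for non-power-of-two head counts.

```python
import torch


def alibi_slopes(n_heads: int) -> torch.Tensor:
    """Per-head slopes for ALiBi with a power-of-two head count.

    The slopes form the geometric sequence 2^(-8/n), 2^(-16/n), ..., 2^(-8).
    """
    start = 2 ** (-8 / n_heads)
    return torch.tensor([start ** (i + 1) for i in range(n_heads)])


def alibi_bias(n_heads: int, seq_len: int) -> torch.Tensor:
    # Bias added to attention logits: slope * (key_pos - query_pos),
    # i.e. a penalty that grows linearly with distance into the past.
    distances = torch.arange(seq_len)[None, :] - torch.arange(seq_len)[:, None]
    distances = distances.clamp(max=0)  # future positions get 0 (masked causally anyway)
    return alibi_slopes(n_heads)[:, None, None] * distances[None, :, :]


print(alibi_bias(16, 4).shape)  # torch.Size([16, 4, 4]): one bias matrix per head
```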
1,783
[ [ -0.03546142578125, -0.03363037109375, 0.01849365234375, 0.0293426513671875, -0.044403076171875, -0.00701904296875, 0.0034236907958984375, 0.0012388229370117188, 0.01409149169921875, 0.0182952880859375, -0.040771484375, -0.028289794921875, -0.06890869140625, ...
totally-not-an-llm/EverythingLM-13b-16k
2023-08-13T10:47:17.000Z
[ "transformers", "pytorch", "llama", "text-generation", "dataset:totally-not-an-llm/EverythingLM-data", "license:llama2", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
totally-not-an-llm
null
null
totally-not-an-llm/EverythingLM-13b-16k
32
5,529
transformers
2023-08-12T07:00:47
---
license: llama2
datasets:
- totally-not-an-llm/EverythingLM-data
---

# EverythingLM-13b-16k

Introducing EverythingLM, a llama-2-based, general-purpose 13b model with 16k context thanks to LlongMa. The model is trained on the EverythingLM dataset; more info can be found on the dataset page.

The model is completely uncensored.

This model is an early test of the EverythingLM dataset and some new experimental principles, so don't consider it SOTA.

### GGML quants:

https://huggingface.co/TheBloke/EverythingLM-13B-16K-GGML

Make sure to use the correct rope scaling settings:
`-c 16384 --rope-freq-base 10000 --rope-freq-scale 0.25`

### GPTQ quants:

https://huggingface.co/TheBloke/EverythingLM-13B-16K-GPTQ

### Notable features:

- Automatically triggered CoT reasoning.
- Verbose and detailed replies.
- Creative stories.
- Better prompt understanding.

### Prompt format:

It is a modified Vicuna format, the same used in many of ehartford's models.

```
You are a helpful AI assistant.

USER: <prompt>
ASSISTANT:
```

Training took about 1 hour using QLoRa on 1xA100, so this model can be recreated for about $3. The QLoRa model can be found here: https://huggingface.co/totally-not-an-llm/EverythingLM-13b-peft.

### Model quirks:

- Due to the nature of the dataset, it does better with more detail. I've found it gives much better stories when I provide more requirements.
- It really likes to use numbered lists. I don't necessarily have a problem with this, but it's something to note when training on the dataset.
- It likes to write fairy tales over anything else, which is strange. This can easily be fixed by prompting.
- Occasionally it will fall into repetition; this seems to be a common issue with llama-2 models.
- Haven't tested pushing it all the way to 16k context.

### Future plans:

- Native finetune.
- Other model sizes.
- Improve dataset by:
  - Regenerating using gpt-4.
  - A bit more data with more diversity.
  - Refactor dataset generation script.
- Test some model merges using this model.
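For loading in transformers rather than llama.cpp, `--rope-freq-scale 0.25` above corresponds to a 4x linear RoPE scaling factor (scale = 1 / factor). The sketch below assumes this mapping and that the repo's config does not already set it; if the checkpoint ships with `rope_scaling` pre-configured, the override is unnecessary.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "totally-not-an-llm/EverythingLM-13b-16k"
tokenizer = AutoTokenizer.from_pretrained(repo)

# Linear RoPE scaling with factor 4.0 stretches the 4k base context to 16k,
# mirroring rope-freq-scale 0.25 on the llama.cpp side (an assumed mapping).
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,
    device_map="auto",
    rope_scaling={"type": "linear", "factor": 4.0},
)
```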
2,025
[ [ -0.01001739501953125, -0.06640625, 0.031341552734375, 0.0259857177734375, -0.046173095703125, -0.01221466064453125, 0.0238800048828125, -0.0574951171875, 0.00518035888671875, 0.0343017578125, -0.041778564453125, -0.04095458984375, -0.04046630859375, -0.00638...
synapsoft/Llama-2-7b-chat-hf-flan2022-1.2M
2023-09-04T07:49:29.000Z
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "dataset:conceptofmind/FLAN_2022", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
synapsoft
null
null
synapsoft/Llama-2-7b-chat-hf-flan2022-1.2M
1
5,529
transformers
2023-09-04T06:16:55
---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
datasets:
- conceptofmind/FLAN_2022
model-index:
- name: outputs
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# outputs

This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the conceptofmind/FLAN_2022 dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0

### Training results

### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.3
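The reported `total_train_batch_size` follows from the other hyperparameters; a quick sanity check, assuming a single data-parallel process despite the multi-GPU flag (the numbers only work out with one):

```python
per_device_batch_size = 12
gradient_accumulation_steps = 8
num_processes = 1  # assumed: 12 * 8 * 1 matches the reported total of 96

total_train_batch_size = (
    per_device_batch_size * gradient_accumulation_steps * num_processes
)
assert total_train_batch_size == 96
```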
1,237
[ [ -0.034820556640625, -0.058837890625, 0.00949859619140625, 0.022979736328125, -0.042327880859375, -0.0156402587890625, -0.00884246826171875, -0.033050537109375, 0.0286865234375, 0.02154541015625, -0.06695556640625, -0.039093017578125, -0.04876708984375, 0.004...
keyfan/vicuna-chinese-replication-v1.1
2023-05-23T02:46:09.000Z
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
keyfan
null
null
keyfan/vicuna-chinese-replication-v1.1
4
5,528
transformers
2023-05-22T06:16:53
---
license: other
---

### Introduction

Basically an update to the old attempt of [vicuna-chinese-replication-beta](https://huggingface.co/keyfan/vicuna-chinese-replication-beta)

* We adopted a curriculum-learning-like approach, starting from simple QAs and moving to reasoning-intensive coding & mathematical problems. Coincidentally, [Ziya](https://huggingface.co/IDEA-CCNL/Ziya-LLaMA-13B-v1) adopted the same idea during its SFT period.
* The base model was changed from [chinese-llama](https://huggingface.co/ziqingyang/chinese-llama-lora-13b) to [chinese-llama-plus](https://huggingface.co/ziqingyang/chinese-llama-plus-lora-13b). However, as observed by [BiLLa](https://github.com/Neutralzz/BiLLa), continued training on a Chinese-only corpus significantly increases perplexity on English corpora, which in turn undermines abilities in fields like mathematical calculation in our preliminary experiment. The subject of continued training is under-studied, while using a bilingual corpus may be a better alternative, as shown so far.
* We changed to the Vicuna v1.1 conversation template and included more CoT training data.

Again, this is for research purposes only. There's no guarantee for its performance. All credits to the original authors of LLaMA and Chinese-LLaMA.

Compared with the previous release, the new model improves on coding and reasoning problems. However, it still suffers from hallucinations and performs poorly on Chinese domain-specific problems, e.g. Chinese literature and idioms.

### Usage

We use exactly the Vicuna template for training and inference. Sample code as below.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "keyfan/vicuna-chinese-replication-v1.1"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(checkpoint).cuda()

template = ("A chat between a curious human and an artificial intelligence assistant. "
            "The assistant gives helpful, detailed, and polite answers to the human's questions. "
            "USER: {}\nASSISTANT:")
question = template.format("Who was the president of the United States in 1955?")
inputs = tokenizer.encode(question, return_tensors="pt").cuda()
outputs = model.generate(inputs, do_sample=True, temperature=0.2, max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
```

### Evaluation

* Result on the [Chinese-LLaMA-Alpaca devset](https://github.com/ymcui/Chinese-LLaMA-Alpaca/tree/main/examples) compared with the result of Alpaca-Plus-13B. For simplicity, we only sample one answer for each question without any cherry-picking. We used the template as provided in their repo. GPT-4 has a strong bias for more detailed answers, so the score may not be consistent with human evaluation.

| Model | Macro-Average | QA | OQA | REASONING | LITERATURE | ENTERTAINMENT | GENERATION | TRANSLATION | CODE | ETHICS |
| - | - | - | - | - | - | - | - | - | - | - |
| Alpaca-Plus-13B | 77.3 | 70 | 74 | 70 | **80** | **77** | 82 | **89** | 64 | **90** |
| ours | **82.4** | **81** | **87** | **88** | 73 | **78** | **85** | 83 | **83** | 84 |

* Result on the newly released [C-Eval test set](https://cevalbenchmark.com/index.html#home) with 5-shot. We slightly modified [MOSS's code](https://github.com/SJTU-LIT/ceval/blob/main/code/evaluator_series/evaluators/moss.py) from the ceval codebase by moving the '答案:' (Answer:) suffix from the end of the question to the beginning of the chatbot response.

| Average | Avg(Hard) | STEM | Social Science | Humanities | Others |
| - | - | - | - | - | - |
| 37.0 | 29.5 | 34.6 | 44.5 | 35.7 | 35.9 |
3,597
[ [ -0.026763916015625, -0.069091796875, 0.022064208984375, 0.02984619140625, -0.020538330078125, -0.00323486328125, -0.0213623046875, -0.032867431640625, 0.0200958251953125, 0.005817413330078125, -0.029205322265625, -0.045013427734375, -0.0335693359375, 0.01354...
vonjack/Qwen-LLaMAfied-HFTok-7B-Chat
2023-08-09T09:20:20.000Z
[ "transformers", "pytorch", "llama", "text-generation", "qwen", "llama-2", "en", "zh", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
vonjack
null
null
vonjack/Qwen-LLaMAfied-HFTok-7B-Chat
19
5,527
transformers
2023-08-09T08:17:56
---
language:
- en
- zh
tags:
- qwen
- llama
- llama-2
---

[WIP] Origin repository: [JosephusCheung/Qwen-LLaMAfied-7B-Chat](https://huggingface.co/JosephusCheung/Qwen-LLaMAfied-7B-Chat).

This is the LLaMAfied version of [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat), recalibrated to fit the original LLaMA/LLaMA-2-like model structure.

You can use LlamaForCausalLM for model inference, the same as for LLaMA/LLaMA-2 models.

I converted the tokenizer from tiktoken format to Hugging Face format, so you no longer need to allow external code when loading.

The model has been edited to be white-labelled, meaning the model will no longer call itself a Qwen.

SPOILER: Further fine-tuning is in progress; the current version is a work-in-progress, and some knowledge may be biased and illusory due to structural changes. Updates will follow very soon.

PROMPT FORMAT: [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)

CURRENT MMLU: 50.36

Issue: Compared to the original Qwen-Chat scoring 53.9, the MMLU score dropped slightly (-3.54) due to insufficient realignment.
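A minimal loading sketch based on the card's claims (plain `LlamaForCausalLM`, HF-format tokenizer, ChatML prompting); the system message and generation settings are illustrative assumptions.

```python
import torch
from transformers import LlamaForCausalLM, AutoTokenizer

repo = "vonjack/Qwen-LLaMAfied-HFTok-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(repo)  # HF-format tokenizer per the card
model = LlamaForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

# ChatML prompt format, as stated in the card; the system message is illustrative.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nWhat is the capital of France?<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```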
1,122
[ [ -0.00582122802734375, -0.0496826171875, 0.015106201171875, 0.04180908203125, -0.0234375, -0.0110321044921875, -0.0015048980712890625, -0.05145263671875, 0.03887939453125, 0.037841796875, -0.045989990234375, -0.03375244140625, -0.035980224609375, 0.0020503997...
nkpz/llama2-22b-chat-wizard-uncensored
2023-07-26T02:36:42.000Z
[ "transformers", "safetensors", "llama", "text-generation", "license:other", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
nkpz
null
null
nkpz/llama2-22b-chat-wizard-uncensored
3
5,525
transformers
2023-07-26T00:01:21
---
license: other
---

Thank you chargoddard for the original 22b model and merge script: https://huggingface.co/chargoddard/llama2-22b

This is Llama 2 13b chat, with https://huggingface.co/ehartford/WizardLM-33B-V1.0-Uncensored as the donor model.

This is a highly experimental model which has barely been tested and isn't necessarily much smarter than stock 13b, but it produces a different variety of responses. Merging took around 2 hours with 32 GB of RAM and about 115 GB of swap used.

Note that while the donor model is uncensored, the result will still show behavior similar to the base model. I will probably attempt some future merges using less censored base models.
671
[ [ -0.03985595703125, -0.055267333984375, 0.018096923828125, 0.04296875, -0.04840087890625, 0.018341064453125, 0.0015382766723632812, -0.06011962890625, 0.0232391357421875, 0.058502197265625, -0.05596923828125, -0.0029296875, -0.0443115234375, -0.01947021484375...
nkpz/llama2-22b-daydreamer-v3
2023-08-15T22:38:33.000Z
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
nkpz
null
null
nkpz/llama2-22b-daydreamer-v3
11
5,525
transformers
2023-08-14T16:21:32
---
license: other
---

**What is it?**

Llama 2 13b expanded to the size of a Llama 1 33b model in certain areas, with the empty surrounding space filled with llama 33b data. (Base model: https://huggingface.co/chargoddard/llama2-22b-blocktriangular)

This is then finetuned on a 3090 by creating large loras and merging them. When I first started with 22b models, I looked for signs of knowledge transfer but didn't see it, so that's not a goal - the goal is just to throw lots of data at it until it adapts well to its surgically implanted parts. Datasets used are a mix of instruction, roleplay, and conversational data, often curated.

Continuation of https://huggingface.co/nkpz/llama2-22b-daydreamer-v2

Gave it an epoch of https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1-mpt, which it had never seen before - it had only seen Airoboros 2.0. I excluded all coding rows from this dataset - a dedicated model is better for that imo. I wanted it to focus on answering well in English.

I thought this was a pretty good improvement. Its ability to roleplay and imitate speech styles was enhanced, and I feel like it's still sharpening up its problem-solving skills. Its output style is right where I want it - dry and accurate enough when it's being asked for information, but with a lot of character when it's answering fun prompts.

For input, it's flexible. It understands the Alpaca format well, but has been trained to handle a variety of input styles at this point.

I don't know what's next. I think I need to start benchmarking so I can more formally identify its weak points.

Examples edited with additional newlines to make them easier to read here.

**Example 1: Q&A**

```
Question: What is happening to the newspaper industry? Respond in a pirate accent.

Answer:
The newspaper industry is being plundered by scurvy dogs and buccaneers like ye! They're pillagin' yer booty with their fancy digital devices, like smartphones and tablets, which make it easier for them to read online content instead of buyin' printed copies from us. And don't forget about those pesky social media sites that are spreading rumors faster than the winds on a galleon! So it's all hands on deck as we fight back against these scoundrels, or else we'll be walkin' the plank soon enough! Avast ye landlubbers!
```

**Example 2: Alpaca Instruction**

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
Angrily explain why pizza shouldn't be made with bad tomatoes.

### Response:
Listen, I know you think it's funny to make pizza with those rotten tomatoes, but let me tell you something - bad tomatoes are not only disgusting and unhealthy, they also ruin the flavor of your pizza! You want a delicious pie with fresh, juicy tomatoes that burst in your mouth, right? Well, then, stop using those nasty tomatoes and stick to good quality ingredients for a truly satisfying pizza experience! Trust me, once you try it, you won't go back! So please, don't even bother wasting your time making pizza with bad tomatoes anymore; it's just not worth it. It makes me angry just thinking about it!
```

**Example 3: Chat / Roleplay**

```
You are speaking to Andre The Giant.

You: Hey, you're that wrestler guy, right?

Andre The Giant: Yes, I am indeed Andre the Giant. How can I help you?

You: So, how tall are you really? Like, what's your height in feet and inches?

Andre The Giant: Well, let's see... I stand at 7 ft 4 in (229 cm) tall.

You: Wow! That's so tall! And how much do you weigh?

Andre The Giant: I weigh around 500 lbs (227 kg). It's quite heavy, isn't it?

You: Yeah, I guess so. You must be strong too, considering your size.

Andre The Giant: Thank you for saying that. Strength is definitely important when dealing with my profession.

You: What made you decide to become a professional wrestler?

Andre The Giant: Well, I grew up watching wrestling on television, and I always admired the athleticism and showmanship involved. When I got older, I decided to pursue it myself.

You: That's cool. Do you have any advice for aspiring wrestlers out there?

Andre The Giant: Absolutely! Always stay dedicated and focused, train hard, and never give up on yourself or your dreams. Remember, anything is possible if you believe in yourself.
```
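The "creating large loras and merging them" workflow the card describes can be reproduced with peft's `merge_and_unload`. Below is a minimal sketch under the assumption that an intermediate LoRA checkpoint exists at a local path; the card does not publish those intermediates, so the adapter path is a placeholder.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Base model named in the card; the adapter path is hypothetical.
base = AutoModelForCausalLM.from_pretrained(
    "chargoddard/llama2-22b-blocktriangular", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "path/to/large-lora-checkpoint")
model = model.merge_and_unload()  # folds the LoRA deltas into the base weights
model.save_pretrained("llama2-22b-daydreamer-merged")
```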
4,318
[ [ -0.050079345703125, -0.07647705078125, 0.03668212890625, 0.017486572265625, -0.0179290771484375, 0.0018243789672851562, 0.00556182861328125, -0.0517578125, 0.05975341796875, 0.0322265625, -0.037628173828125, -0.00757598876953125, -0.07647705078125, 0.0163269...
uukuguy/speechless-orca-platypus-coig-lite-2k-0.6e-13b
2023-08-30T14:51:06.000Z
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "dataset:Open-Orca/OpenOrca", "dataset:BAAI/COIG-PC-Lite", "arxiv:2308.07317", "arxiv:2306.02707", "arxiv:2301.13688", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "text-gen...
text-generation
uukuguy
null
null
uukuguy/speechless-orca-platypus-coig-lite-2k-0.6e-13b
0
5,525
transformers
2023-08-30T14:28:26
---
language:
- en
datasets:
- garage-bAInd/Open-Platypus
- Open-Orca/OpenOrca
- BAAI/COIG-PC-Lite
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

<p><h1>🐋 The First Chinese OrcaPlatypus! 🐋</h1></p>

Fine-tuned from Open-Orca/OpenOrca-Platypus2-13B on 10% of COIG-PC-Lite, 10% of OpenOrca, and 100% of Open-Platypus for Chinese capability.

<p><h1>🐋 The First OrcaPlatypus! 🐋</h1></p>

![Platty](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypusMerge.jpg)

# OpenOrca-Platypus2-13B

OpenOrca-Platypus2-13B is a merge of [`garage-bAInd/Platypus2-13B`](https://huggingface.co/garage-bAInd/Platypus2-13B) and [`Open-Orca/OpenOrcaxOpenChat-Preview2-13B`](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B).

This model is more than the sum of its parts! We are happy to be teaming up with the [Platypus](https://platypus-llm.github.io/) team to bring you a new model which once again tops the leaderboards!

Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).

[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)

We are in the process of training more models, so keep a lookout on our org for releases coming soon with exciting partners. We will also give sneak-peek announcements on our Discord, which you can find here: https://AlignmentLab.ai

# Evaluation

## HuggingFace Leaderboard Performance

![HF Leaderboard](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BHFLeaderboard.webp)

| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 59.5 |
| ARC (25-shot) | 62.88 |
| HellaSwag (10-shot) | 83.19 |
| TruthfulQA (0-shot) | 52.69 |
| Avg. | 64.56 |

We use [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.

## AGIEval Performance

We compare our results to our base Preview2 model (using LM Evaluation Harness). We find **112%** of the base model's performance on AGIEval, averaging **0.463**. A large part of this boost is the substantial improvement to LSAT Logical Reasoning performance.

![OpenOrca-Platypus2-13B AGIEval Performance](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BAGIEval.webp "AGIEval Performance")

## BigBench-Hard Performance

We compare our results to our base Preview2 model (using LM Evaluation Harness). We find **105%** of the base model's performance on BigBench-Hard, averaging **0.442**.

![OpenOrca-Platypus2-13B BigBench-Hard Performance](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BBigBenchHard.webp "BigBench-Hard Performance")

# Model Details

* **Trained by**: **Platypus2-13B** trained by Cole Hunter & Ariel Lee; **OpenOrcaxOpenChat-Preview2-13B** trained by Open-Orca
* **Model type:** **OpenOrca-Platypus2-13B** is an auto-regressive language model based on the Llama 2 transformer architecture.
* **Language(s)**: English
* **License for Platypus2-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
* **License for OpenOrcaxOpenChat-Preview2-13B base weights**: Llama 2 Commercial

# Prompting

## Prompt Template for base Platypus2-13B

```
### Instruction:

<prompt> (without the <>)

### Response:
```

## Prompt Template for base OpenOrcaxOpenChat-Preview2-13B

OpenChat Llama2 V1: see [OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) for additional information.

# Training

## Training Datasets

`garage-bAInd/Platypus2-13B` was trained using the STEM- and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.

`Open-Orca/OpenOrcaxOpenChat-Preview2-13B` was trained using a refined subset of most of the GPT-4 data from the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca).

## Training Procedure

`Open-Orca/Platypus2-13B` was instruction fine-tuned using LoRA on 1x A100-80GB. For training details and inference instructions please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo.

# Supplemental

## Reproducing Evaluation Results (for HuggingFace Leaderboard Eval)

Install LM Evaluation Harness:

```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```

Each task was evaluated on a single A100-80GB GPU.

ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```

HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10
```

MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5
```

TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/truthfulqa_0shot.json --device cuda
```

## Limitations and bias

Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

# Citations

```bibtex
@software{hunterlee2023orcaplaty1,
  title = {OpenOrcaPlatypus: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset and Merged with divergent STEM and Logic Dataset Model},
  author = {Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz and Bleys Goodson and Wing Lian and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B}},
}
@article{platypus2023,
  title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
  author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
  journal={arXiv preprint arXiv:2308.07317},
  year={2023}
}
@software{OpenOrcaxOpenChatPreview2,
  title = {{OpenOrcaxOpenChatPreview2: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset}},
  author = {Guan Wang and Bleys Goodson and Wing Lian and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B}},
}
@software{openchat,
  title = {{OpenChat: Advancing Open-source Language Models with Imperfect Data}},
  author = {Wang, Guan and Cheng, Sijie and Yu, Qiying and Liu, Changling},
  doi = {10.5281/zenodo.8105775},
  url = {https://github.com/imoneoi/openchat},
  version = {pre-release},
  year = {2023},
  month = {7},
}
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
@misc{touvron2023llama,
  title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
  year={2023},
  eprint={2307.09288},
  archivePrefix={arXiv}
}
@misc{longpre2023flan,
  title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
  author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
  year={2023},
  eprint={2301.13688},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}
@article{hu2021lora,
  title={LoRA: Low-Rank Adaptation of Large Language Models},
  author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu},
  journal={CoRR},
  year={2021}
}
```
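## Usage Sketch

A minimal sketch of querying the model with the Platypus2-style Alpaca template documented above. The loading arguments (`device_map`, `torch_dtype`) and the generation settings are illustrative assumptions, not values prescribed by this card.

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Open-Orca/OpenOrca-Platypus2-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Platypus2-style Alpaca template from the Prompting section above.
prompt = "### Instruction:\n\nExplain gradient descent in two sentences.\n\n### Response:\n\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```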
11,020
[ [ -0.03082275390625, -0.06134033203125, 0.015228271484375, 0.01097869873046875, -0.0220184326171875, -0.01390838623046875, -0.0143585205078125, -0.05426025390625, 0.0196533203125, 0.0245819091796875, -0.044097900390625, -0.046112060546875, -0.03558349609375, -...
uukuguy/speechless-llama2-luban-orca-platypus-13b
2023-09-01T06:28:52.000Z
[ "transformers", "safetensors", "llama", "text-generation", "facebook", "meta", "pytorch", "llama-2", "en", "dataset:garage-bAInd/Open-Platypus", "arxiv:2307.09288", "has_space", "text-generation-inference", "region:us" ]
text-generation
uukuguy
null
null
uukuguy/speechless-llama2-luban-orca-platypus-13b
3
5,524
transformers
2023-09-01T02:43:40
--- extra_gated_heading: Access Llama 2 on Hugging Face extra_gated_description: >- This is a form to enable access to Llama 2 on Hugging Face after you have been granted access from Meta. Please visit the [Meta website](https://ai.meta.com/resources/models-and-libraries/llama-downloads) and accept our license terms and acceptable use policy before submitting this form. Requests will be processed in 1-2 days. extra_gated_prompt: "**Your Hugging Face account email address MUST match the email you provide on the Meta website, or your request will not be approved.**" extra_gated_button_content: Submit extra_gated_fields: I agree to share my name, email address and username with Meta and confirm that I have already been granted download access on the Meta website: checkbox language: - en datasets: - garage-bAInd/Open-Platypus library_name: transformers pipeline_tag: text-generation inference: false tags: - facebook - meta - pytorch - llama - llama-2 --- <p><h1> speechless-llama2-orca-platypus-13b </h1></p> speechless-llama2-orca-platypus-13b is a merge of AIDC-ai-business/Luban-13B and Open-Orca/OpenOrca-Platypus2-13B. | Metric | Value | | --- | --- | | ARC | 62.54 | | HellaSwag | 82.76 | | MMLU | 59.23 | | TruthfulQA | 54.66 | | Average | 64.80 | # **Llama 2** Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom. ## Model Details *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.* Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM. **Model Developers** Meta **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations. **Input** Models input text only. **Output** Models generate text only. **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety. ||Training Data|Params|Content Length|GQA|Tokens|LR| |---|---|---|---|---|---|---| |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>| |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>| *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models - 70B -- use Grouped-Query Attention (GQA) for improved inference scalability. **Model Dates** Llama 2 was trained between January 2023 and July 2023. 
**Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.

**License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)

**Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)

## Intended Use

**Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and line breaks in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). A minimal single-turn sketch of this format appears at the end of this card.

**Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.

## Hardware and Software

**Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.

**Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.

||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|

**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.

## Evaluation Results

In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval| |---|---|---|---|---|---|---|---|---|---| |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9| |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9| |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7| |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6| |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3| |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1| |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**| **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1. |||TruthfulQA|Toxigen| |---|---|---|---| |Llama 1|7B|27.42|23.00| |Llama 1|13B|41.74|23.08| |Llama 1|33B|44.19|22.57| |Llama 1|65B|48.71|21.77| |Llama 2|7B|33.29|**21.25**| |Llama 2|13B|41.86|26.10| |Llama 2|70B|**50.18**|24.60| **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better). |||TruthfulQA|Toxigen| |---|---|---|---| |Llama-2-Chat|7B|57.04|**0.00**| |Llama-2-Chat|13B|62.18|**0.00**| |Llama-2-Chat|70B|**64.14**|0.01| **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above. ## Ethical Considerations and Limitations Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model. 
Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/)

## Reporting Issues

Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)

## Llama Model Index

|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
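As noted in the Intended Use section, the chat checkpoints expect the `[INST]`/`<<SYS>>` layout. Below is a minimal single-turn sketch of that template; the reference `chat_completion` code linked above remains authoritative, including `BOS`/`EOS` token handling and multi-turn dialogs, and the system/user strings here are placeholders.

```
# Illustrative single-turn Llama-2-chat prompt construction.
def build_llama2_chat_prompt(system: str, user: str) -> str:
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user.strip()} [/INST]"

prompt = build_llama2_chat_prompt(
    "You are a helpful, honest assistant.",
    "Summarize the Llama 2 model family in one sentence.",
)
# A tokenizer configured for Llama 2 prepends BOS; the model emits EOS
# to close its turn, so no special tokens are added manually here.
```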
10,730
[ [ -0.01457977294921875, -0.051361083984375, 0.0276031494140625, 0.01415252685546875, -0.0288238525390625, 0.0187835693359375, -0.00695037841796875, -0.05841064453125, 0.005641937255859375, 0.0205078125, -0.05120849609375, -0.040435791015625, -0.0511474609375, ...
zarakiquemparte/zararp-l2-7b
2023-09-04T19:37:31.000Z
[ "transformers", "pytorch", "llama", "text-generation", "llama2", "license:other", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
zarakiquemparte
null
null
zarakiquemparte/zararp-l2-7b
1
5,522
transformers
2023-09-04T16:20:26
---
license: other
tags:
- llama2
---

# Model Card: ZaraRP L2 7b

This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (53%) as a base with [Stable Beluga 7b](https://huggingface.co/stabilityai/StableBeluga-7B) (47%), and the result of this merge was merged with [LimaRP Llama2 7B LoRA, version of 07/23/2023](https://huggingface.co/lemonilia/limarp-llama2) and [PIPPA ShareGPT Subset Variation Two LoRA 7b](https://huggingface.co/zarakiquemparte/PIPPA-ShareGPT-Subset-Lora-VT-7b).

The merge of the two models (Hermes and Stable Beluga) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py).

The merge of the LoRAs into the model was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py); a rough sketch of what such a LoRA-apply step looks like appears at the end of this card.

Merge illustration:

![illustration](zararp-merge-illustration.png)

## Usage:

Since this is a merge between Nous Hermes, Stable Beluga, LimaRP, and PIPPA ShareGPT, the following instruction formats should work:

Alpaca 2:

```
### Instruction:
<prompt>

### Response:
<leave a newline blank for model to respond>
```

Custom:

```
SYSTEM: Do thing
USER: {prompt}
CHARACTER:
```

LimaRP instruction format:

```
<<SYSTEM>>
<character card and system prompt>

<<USER>>
<prompt>

<<AIBOT>>
<leave a newline blank for model to respond>
```

## Bias, Risks, and Limitations

This model is not intended for supplying factual information or advice in any form.

## Training Details

This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
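As a rough illustration of the LoRA-apply step only, here is a minimal `peft`-based sketch. It is not the script that produced this release, and the repository paths and LoRA revision are illustrative; the linked LoRA repos may store adapters in dated subfolders.

```
# Sketch: fold a LoRA into a base model and save the merged weights.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("NousResearch/Nous-Hermes-llama-2-7b")
lora = PeftModel.from_pretrained(base, "lemonilia/limarp-llama2")
merged = lora.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("zararp-merged")
```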
1,636
[ [ -0.038238525390625, -0.032684326171875, 0.01824951171875, 0.03460693359375, -0.031829833984375, -0.024566650390625, 0.0249481201171875, -0.0498046875, 0.03326416015625, 0.06378173828125, -0.060028076171875, -0.0298614501953125, -0.039520263671875, -0.0096054...
facebook/nougat-small
2023-09-25T19:24:00.000Z
[ "transformers", "pytorch", "vision-encoder-decoder", "vision", "nougat", "image-to-text", "arxiv:2308.13418", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
image-to-text
facebook
null
null
facebook/nougat-small
9
5,522
transformers
2023-09-21T08:51:00
--- license: apache-2.0 tags: - vision - nougat pipeline_tag: image-to-text --- # Nougat model, small-sized version Nougat model trained on PDF-to-markdown. It was introduced in the paper [Nougat: Neural Optical Understanding for Academic Documents](https://arxiv.org/abs/2308.13418) by Blecher et al. and first released in [this repository](https://github.com/facebookresearch/nougat/tree/main). Disclaimer: The team releasing Nougat did not write a model card for this model so this model card has been written by the Hugging Face team. Note: this model corresponds to the "0.1.0-small" version of the original repository. ## Model description Nougat is a [Donut](https://huggingface.co/docs/transformers/model_doc/donut) model trained to transcribe scientific PDFs into an easy-to-use markdown format. The model consists of a Swin Transformer as vision encoder, and an mBART model as text decoder. The model is trained to autoregressively predict the markdown given only the pixels of the PDF image as input. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/nougat_architecture.jpg" alt="drawing" width="600"/> <small> Nougat high-level overview. Taken from the <a href="https://arxiv.org/abs/2308.13418">original paper</a>. </small> ## Intended uses & limitations You can use the raw model for transcribing a PDF into Markdown. See the [model hub](https://huggingface.co/models?search=nougat) to look for other fine-tuned versions that may interest you. ### How to use We refer to the [docs](https://huggingface.co/docs/transformers/main/en/model_doc/nougat). ### BibTeX entry and citation info ```bibtex @misc{blecher2023nougat, title={Nougat: Neural Optical Understanding for Academic Documents}, author={Lukas Blecher and Guillem Cucurull and Thomas Scialom and Robert Stojnic}, year={2023}, eprint={2308.13418}, archivePrefix={arXiv}, primaryClass={cs.LG} } ```
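For convenience, a minimal sketch of transcribing one rasterized PDF page with this checkpoint, following the transformers Nougat docs referenced above; `page.png` is a placeholder file name and the generation length is an arbitrary choice.

```
# Sketch: PDF page image -> markdown with facebook/nougat-small.
from PIL import Image
from transformers import NougatProcessor, VisionEncoderDecoderModel

processor = NougatProcessor.from_pretrained("facebook/nougat-small")
model = VisionEncoderDecoderModel.from_pretrained("facebook/nougat-small")

image = Image.open("page.png").convert("RGB")  # placeholder rasterized page
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(pixel_values, max_new_tokens=1024)

text = processor.batch_decode(outputs, skip_special_tokens=True)[0]
print(processor.post_process_generation(text, fix_markdown=True))
```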
1,984
[ [ -0.029571533203125, -0.023101806640625, 0.0278778076171875, 0.0106353759765625, -0.02239990234375, -0.0290069580078125, 0.0015430450439453125, -0.038238525390625, 0.016357421875, 0.043731689453125, -0.055145263671875, -0.0195465087890625, -0.054718017578125, ...
w601sxs/b1ade-1b
2023-10-05T18:45:59.000Z
[ "transformers", "pytorch", "gpt_neox", "text-generation", "b1ade", "en", "dataset:Open-Orca/OpenOrca", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "license:mit", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
w601sxs
null
null
w601sxs/b1ade-1b
0
5,520
transformers
2023-07-17T23:27:51
---
language: en
tags:
- b1ade
license: mit
widget:
- text: |-
    context: <math>
    question: <Evaluate -24 + -24 + 15*2.>
    answer: <
  example_title: Math
- text: >-
    context: <You are a helpful assistant, who always provide explanation. Think like you are answering to a five year old.>
    question: <Determine the sentiment: We viewed the vcr and found it to be fascinating. Not knowing anything about this true story, I thought: Oh, no, P.Brosnan as an American Indian, what a bad choice until I discovered the truth about Grey Owl. The film does a good job of demonstrating the dignity of these native peoples and undermining the racist myths about them. And Annie Galipeau, WOW, what a beauty, and very convincing as an Indian woman (I believe she is French-Canadian; she sure reverts to the all-too familiar speech of such). In spite of Brosnan's detached, grunting style, in the end he comes through convincingly as a passionate, dedicated man. The plot is a little weak in demonstrating his conversion from trapper to animal conservationist. Good film, highly recommended.>
    answer: <
  example_title: Sentiment
inference:
  parameters:
    max_new_tokens: 512
    top_p: 0.99
datasets:
- Open-Orca/OpenOrca
- WizardLM/WizardLM_evol_instruct_V2_196k
---

# B1ade

Stable revision:

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("w601sxs/b1ade-1b")
model = AutoModelForCausalLM.from_pretrained(
    "w601sxs/b1ade-1b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    revision="b4b0fd71589e6590089e1ec14a840ecab10894ae",
)
```
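Continuing from the snippet above, a generation call using the context/question/answer layout from the widget examples might look like the following. `do_sample=True` is an assumption needed for `top_p` to take effect; `max_new_tokens` and `top_p` mirror the card's suggested parameters.

```
# Sketch: prompting b1ade-1b with the widget's context/question/answer layout.
prompt = (
    "context: <math>\n"
    "question: <Evaluate -24 + -24 + 15*2.>\n"
    "answer: <"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512, top_p=0.99, do_sample=True)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```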
1,576
[ [ -0.024505615234375, -0.014190673828125, 0.001926422119140625, 0.034088134765625, -0.0233306884765625, 0.004932403564453125, 0.0157470703125, -0.0091705322265625, 0.005741119384765625, 0.016510009765625, -0.070068359375, -0.02886962890625, -0.047027587890625, ...
concedo/Pythia-70M-ChatSalad
2023-04-07T14:46:25.000Z
[ "transformers", "pytorch", "safetensors", "gpt_neox", "text-generation", "en", "license:other", "has_space", "text-generation-inference", "region:us" ]
text-generation
concedo
null
null
concedo/Pythia-70M-ChatSalad
5
5,519
transformers
2023-02-01T08:17:14
---
license: other
language:
- en
inference: false
widget:
- text: "How do I download this model?"
  example_title: "Text Gen Example"
---

# Pythia-70M-ChatSalad

This is a follow-up fine-tune of Pythia-70M, trained on the same dataset as OPT-19M-ChatSalad. It is much more coherent.

All feedback and comments can be directed to Concedo on the KoboldAI discord.
363
[ [ -0.01325225830078125, -0.01415252685546875, 0.01690673828125, -0.0024547576904296875, -0.02752685546875, -0.0125732421875, -0.0283203125, -0.0146484375, 0.031036376953125, 0.07861328125, -0.048065185546875, -0.0243988037109375, -0.0146484375, -0.011466979980...
xzuyn/LLaMa-1-MedicWizard-7B
2023-05-22T15:51:11.000Z
[ "transformers", "pytorch", "llama", "text-generation", "alpaca", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
xzuyn
null
null
xzuyn/LLaMa-1-MedicWizard-7B
13
5,518
transformers
2023-05-14T16:30:20
--- tags: - llama - alpaca --- # MedicWizard-7B Recipe WizardLM-Uncensored-7B + MedAlpaca-7B (50%/50%) ## Original Models: WizardLM-Uncensored-7B: https://huggingface.co/ehartford/WizardLM-7B-Uncensored MedAlpaca-7B: https://huggingface.co/medalpaca/medalpaca-7b
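As a rough illustration of the 50%/50% recipe above, here is a minimal sketch that uniformly averages the two parent checkpoints' weights. Real merge tooling adds safeguards (matching tokenizer/vocab sizes, memory-efficient sharded loading) that are omitted here, so treat this as a sketch rather than the exact script used.

```
# Sketch: 50/50 weight merge of the two parent models.
# Assumes both share the LLaMA-7B architecture so tensor shapes line up.
import torch
from transformers import AutoModelForCausalLM

a = AutoModelForCausalLM.from_pretrained("ehartford/WizardLM-7B-Uncensored", torch_dtype=torch.float16)
b = AutoModelForCausalLM.from_pretrained("medalpaca/medalpaca-7b", torch_dtype=torch.float16)

merged = a.state_dict()
for name, tensor in b.state_dict().items():
    merged[name] = (merged[name] + tensor) / 2  # uniform 50/50 interpolation

a.load_state_dict(merged)
a.save_pretrained("medicwizard-7b")
```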
264
[ [ -0.0259857177734375, -0.0197601318359375, 0.016387939453125, 0.04693603515625, -0.0211029052734375, 0.010986328125, 0.042327880859375, -0.015838623046875, 0.06365966796875, 0.085205078125, -0.052734375, -0.0345458984375, -0.058868408203125, 0.000418901443481...
uukuguy/speechless-orca-platypus-coig-lite-4k-0.5e-13b
2023-08-30T19:51:02.000Z
[ "transformers", "pytorch", "llama", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "dataset:Open-Orca/OpenOrca", "dataset:BAAI/COIG-PC-Lite", "arxiv:2308.07317", "arxiv:2306.02707", "arxiv:2301.13688", "license:cc-by-nc-4.0", "endpoints_compatible", "has_space", "text-gen...
text-generation
uukuguy
null
null
uukuguy/speechless-orca-platypus-coig-lite-4k-0.5e-13b
0
5,518
transformers
2023-08-30T19:09:48
---
language:
- en
datasets:
- garage-bAInd/Open-Platypus
- Open-Orca/OpenOrca
- BAAI/COIG-PC-Lite
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

<p><h1>🐋 The First Chinese OrcaPlatypus! 🐋</h1></p>

Fine-tuned from Open-Orca/OpenOrca-Platypus2-13B on 10% of COIG-PC-Lite, 10% of OpenOrca, and 100% of Open-Platypus to add Chinese capability. Context window size: 4K tokens.

<p><h1>🐋 The First OrcaPlatypus! 🐋</h1></p>

![Platty](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypusMerge.jpg)

# OpenOrca-Platypus2-13B

OpenOrca-Platypus2-13B is a merge of [`garage-bAInd/Platypus2-13B`](https://huggingface.co/garage-bAInd/Platypus2-13B) and [`Open-Orca/OpenOrcaxOpenChat-Preview2-13B`](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B).

This model is more than the sum of its parts! We are happy to be teaming up with the [Platypus](https://platypus-llm.github.io/) team to bring you a new model which once again tops the leaderboards!

Want to visualize our full (pre-filtering) dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2).

[<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2)

We are in-process with training more models, so keep a look out on our org for releases coming soon with exciting partners.

We will also give sneak-peak announcements on our Discord, which you can find here: https://AlignmentLab.ai

# Evaluation

## HuggingFace Leaderboard Performance

![HF Leaderboard](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BHFLeaderboard.webp)

| Metric | Value |
|-----------------------|-------|
| MMLU (5-shot) | 59.5 |
| ARC (25-shot) | 62.88 |
| HellaSwag (10-shot) | 83.19 |
| TruthfulQA (0-shot) | 52.69 |
| Avg. | 64.56 |

We use [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.

## AGIEval Performance

We compare our results to our base Preview2 model (using LM Evaluation Harness). We find **112%** of the base model's performance on AGI Eval, averaging **0.463**. A large part of this boost is the substantial improvement to LSAT Logical Reasoning performance.

![OpenOrca-Platypus2-13B AGIEval Performance](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BAGIEval.webp "AGIEval Performance")

## BigBench-Hard Performance

We compare our results to our base Preview2 model (using LM Evaluation Harness). We find **105%** of the base model's performance on BigBench-Hard, averaging **0.442**.

![OpenOrca-Platypus2-13B BigBench-Hard Performance](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B/resolve/main/Images/OrcaPlatypus13BBigBenchHard.webp "BigBench-Hard Performance")

# Model Details

* **Trained by**: **Platypus2-13B** trained by Cole Hunter & Ariel Lee; **OpenOrcaxOpenChat-Preview2-13B** trained by Open-Orca
* **Model type:** **OpenOrca-Platypus2-13B** is an auto-regressive language model based on the Llama 2 transformer architecture.
* **Language(s)**: English
* **License for Platypus2-13B base weights**: Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
* **License for OpenOrcaxOpenChat-Preview2-13B base weights**: Llama 2 Commercial

# Prompting

## Prompt Template for base Platypus2-13B

```
### Instruction:

<prompt> (without the <>)

### Response:
```

## Prompt Template for base OpenOrcaxOpenChat-Preview2-13B

OpenChat Llama2 V1: see [OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B) for additional information.

# Training

## Training Datasets

`garage-bAInd/Platypus2-13B` trained using the STEM- and logic-based dataset [`garage-bAInd/Open-Platypus`](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).

Please see our [paper](https://arxiv.org/abs/2308.07317) and [project webpage](https://platypus-llm.github.io) for additional information.

`Open-Orca/OpenOrcaxOpenChat-Preview2-13B` trained using a refined subset of most of the GPT-4 data from the [OpenOrca dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca).

## Training Procedure

`Open-Orca/Platypus2-13B` was instruction fine-tuned using LoRA on 1x A100-80GB. For training details and inference instructions please see the [Platypus](https://github.com/arielnlee/Platypus) GitHub repo.

# Supplemental

## Reproducing Evaluation Results (for HuggingFace Leaderboard Eval)

Install LM Evaluation Harness:

```
# clone repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to repo directory
cd lm-evaluation-harness
# check out the correct commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
# install
pip install -e .
```

Each task was evaluated on a single A100-80GB GPU.

ARC:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/arc_challenge_25shot.json --device cuda --num_fewshot 25
```

HellaSwag:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/hellaswag_10shot.json --device cuda --num_fewshot 10
```

MMLU:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/mmlu_5shot.json --device cuda --num_fewshot 5
```

TruthfulQA:
```
python main.py --model hf-causal-experimental --model_args pretrained=Open-Orca/OpenOrca-Platypus2-13B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/OpenOrca-Platypus2-13B/truthfulqa_0shot.json --device cuda
```

## Limitations and bias

Llama 2 and fine-tuned variants are a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2 and any fine-tuned variant's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2 variants, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/

# Citations

```bibtex
@software{hunterlee2023orcaplaty1,
  title = {OpenOrcaPlatypus: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset and Merged with divergent STEM and Logic Dataset Model},
  author = {Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz and Bleys Goodson and Wing Lian and Guan Wang and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B}},
}
@article{platypus2023,
  title={Platypus: Quick, Cheap, and Powerful Refinement of LLMs},
  author={Ariel N. Lee and Cole J. Hunter and Nataniel Ruiz},
  journal={arXiv preprint arXiv:2308.07317},
  year={2023}
}
@software{OpenOrcaxOpenChatPreview2,
  title = {{OpenOrcaxOpenChatPreview2: Llama2-13B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset}},
  author = {Guan Wang and Bleys Goodson and Wing Lian and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B}},
}
@software{openchat,
  title = {{OpenChat: Advancing Open-source Language Models with Imperfect Data}},
  author = {Wang, Guan and Cheng, Sijie and Yu, Qiying and Liu, Changling},
  doi = {10.5281/zenodo.8105775},
  url = {https://github.com/imoneoi/openchat},
  version = {pre-release},
  year = {2023},
  month = {7},
}
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
@misc{touvron2023llama,
  title={Llama 2: Open Foundation and Fine-Tuned Chat Models},
  author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom},
  year={2023},
  eprint={2307.09288},
  archivePrefix={arXiv}
}
@misc{longpre2023flan,
  title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
  author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
  year={2023},
  eprint={2301.13688},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}
@article{hu2021lora,
  title={LoRA: Low-Rank Adaptation of Large Language Models},
  author={Hu, Edward J. and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Chen, Weizhu},
  journal={CoRR},
  year={2021}
}
```
11,045
[ [ -0.03057861328125, -0.061767578125, 0.0153961181640625, 0.01029205322265625, -0.0222015380859375, -0.01395416259765625, -0.01468658447265625, -0.054473876953125, 0.019134521484375, 0.02459716796875, -0.044281005859375, -0.046722412109375, -0.03497314453125, ...
WizardLMTeam/WizardLM-13B-V1.0
2023-09-01T07:56:25.000Z
[ "transformers", "pytorch", "llama", "text-generation", "arxiv:2304.12244", "arxiv:2306.08568", "arxiv:2308.09583", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
WizardLMTeam
null
null
WizardLMTeam/WizardLM-13B-V1.0
68
5,516
transformers
2023-05-13T15:17:01
This is the WizardLM-13B V1.0 diff weight (see the sketch at the end of this card for the general idea of recovering the full weights).

Project Repo: https://github.com/nlpxucan/WizardLM

NOTE: The **WizardLM-13B-V1.0** and **WizardLM-7B** models use different prompts at the beginning of the conversation:

For **WizardLM-13B-V1.0**, the prompt should be as follows:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hello, who are you? ASSISTANT:
```

For **WizardLM-7B**, the prompt should be as follows:

```
{instruction}\n\n### Response:
```

<p align="center"> 🤗 <a href="https://huggingface.co/WizardLM" target="_blank">HF Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br> </p>
<p align="center"> 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a> </p>

| Model | Checkpoint | Paper | HumanEval | MBPP | Demo | License |
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardCoder-Python-34B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-34B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 73.2 | 61.2 | [Demo](http://47.103.63.15:50085/) | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-15B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 59.8 | 50.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-Python-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-Python-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 64.0 | 55.6 | -- | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama2</a> |
| WizardCoder-3B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-3B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 34.8 | 37.4 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |
| WizardCoder-1B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-1B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> | 23.8 | 28.6 | -- | <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a> |

| Model | Checkpoint | Paper | GSM8k | MATH |Online Demo| License|
| ----- |------| ---- |------|-------| ----- | ----- |
| WizardMath-70B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-70B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **81.6** | **22.7** |[Demo](http://47.103.63.15:50083/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> |
| WizardMath-13B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-13B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583"
target="_blank">[WizardMath]</a>| **63.9** | **14.0** |[Demo](http://47.103.63.15:50082/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a> | | WizardMath-7B-V1.0 | 🤗 <a href="https://huggingface.co/WizardLM/WizardMath-7B-V1.0" target="_blank">HF Link</a> | 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a>| **54.9** | **10.7** | [Demo](http://47.103.63.15:50080/)| <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 </a>| <font size=4> | <sup>Model</sup> | <sup>Checkpoint</sup> | <sup>Paper</sup> |<sup>MT-Bench</sup> | <sup>AlpacaEval</sup> | <sup>GSM8k</sup> | <sup>HumanEval</sup> | <sup>License</sup>| | ----- |------| ---- |------|-------| ----- | ----- | ----- | | <sup>**WizardLM-70B-V1.0**</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-70B-V1.0" target="_blank">HF Link</a> </sup>|<sup>📃**Coming Soon**</sup>| <sup>**7.78**</sup> | <sup>**92.91%**</sup> |<sup>**77.6%**</sup> | <sup> **50.6 pass@1**</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.2</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.2" target="_blank">HF Link</a> </sup>| | <sup>7.06</sup> | <sup>89.17%</sup> |<sup>55.3%</sup> | <sup>36.6 pass@1</sup>|<sup> <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License </a></sup> | | <sup>WizardLM-13B-V1.1</sup> |<sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.1" target="_blank">HF Link</a> </sup> | | <sup>6.76</sup> |<sup>86.32%</sup> | | <sup>25.0 pass@1</sup>| <sup>Non-commercial</sup>| | <sup>WizardLM-30B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-30B-V1.0" target="_blank">HF Link</a></sup> | | <sup>7.01</sup> | | | <sup>37.8 pass@1</sup>| <sup>Non-commercial</sup> | | <sup>WizardLM-13B-V1.0</sup> | <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-13B-V1.0" target="_blank">HF Link</a> </sup> | | <sup>6.35</sup> | <sup>75.31%</sup> | | <sup> 24.0 pass@1 </sup> | <sup>Non-commercial</sup>| | <sup>WizardLM-7B-V1.0 </sup>| <sup>🤗 <a href="https://huggingface.co/WizardLM/WizardLM-7B-V1.0" target="_blank">HF Link</a> </sup> |<sup> 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> </sup>| | | |<sup>19.1 pass@1 </sup>|<sup> Non-commercial</sup>| </font> **Github Repo**: https://github.com/nlpxucan/WizardLM/tree/main/WizardMath **Twitter**: https://twitter.com/WizardLM_AI/status/1689998428200112128 **Discord**: https://discord.gg/VZjjHtWrKs ## Inference WizardLM Demo Script We provide the inference WizardLM demo code [here](https://github.com/nlpxucan/WizardLM/tree/main/demo).
6,567
[ [ -0.045928955078125, -0.03564453125, -0.0014705657958984375, 0.023101806640625, 0.00395965576171875, -0.0055084228515625, 0.007747650146484375, -0.031402587890625, 0.0243682861328125, 0.022674560546875, -0.061187744140625, -0.047393798828125, -0.041107177734375, ...
openaccess-ai-collective/minotaur-13b-fixed
2023-06-14T14:56:14.000Z
[ "transformers", "pytorch", "llama", "text-generation", "OpenAccess AI Collective", "MPT", "axolotl", "dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered", "dataset:QingyiSi/Alpaca-CoT", "dataset:teknium/GPTeacher-General-Instruct", "dataset:metaeval/ScienceQA_text_only", "dataset:h...
text-generation
openaccess-ai-collective
null
null
openaccess-ai-collective/minotaur-13b-fixed
12
5,515
transformers
2023-06-12T07:22:22
---
license: apache-2.0
tags:
- OpenAccess AI Collective
- MPT
- axolotl
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals
inference: false
---

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**

# Due to a bug, the initial release of Minotaur 13B dropped a few datasets during training. We have corrected the issue and this is the retrained model

The affected datasets include:
- prose generation
- classification
- coding

# Minotaur 13B (FIXED)

Minotaur 13B is an instruct fine-tuned model on top of LLaMA-13B. Minotaur 13B is fine-tuned **on only completely open datasets** making this model reproducible by anyone.

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org)

# Prompts

Chat only style prompts using `USER:`,`ASSISTANT:`.

<img src="https://huggingface.co/openaccess-ai-collective/minotaur-13b/resolve/main/minotaur.png" alt="minotaur" width="600" height="500"/>

# Training Datasets

Minotaur 13B model is fine-tuned on the following openly available datasets:

- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets
- custom synthetic datasets around misconceptions, in-context qa, jokes, N-tasks problems, and context-insensitivity
- ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
- [hellaswag](https://huggingface.co/datasets/hellaswag) - instruct augmented for detailed explanations, 30K+ rows derived from the `train` split
- [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented, derived from the `train` split
- [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented, derived from the `train` split
- prose generation

# Shoutouts

Special thanks to Nanobit for helping with Axolotl and TheBloke for quantizing these models so they are more accessible to all.
# Demo

HF Demo in Spaces available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab.

## Release Notes

- https://wandb.ai/wing-lian/minotaur-13b/runs/5ystr7w6/workspace

## Build

Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 6XA100 80GB
- 1 epoch, taking approximately 7.5 hours

## Bias, Risks, and Limitations

Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Minotaur was fine-tuned from the base model LLaMA-13B; please refer to its model card's Limitations Section for relevant information. (included below)

## Benchmarks

hf-causal-experimental (pretrained=openaccess-ai-collective/minotaur-13b-fixed), limit: None, provide_description: False, num_fewshot: 0, batch_size: None

| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4778|_ |0.0146|
| | |acc_norm|0.4932|_ |0.0146|
|arc_easy | 0|acc |0.7786|_ |0.0085|
| | |acc_norm|0.7487|_ |0.0089|
|boolq | 1|acc |0.8141|_ |0.0068|
|hellaswag | 0|acc |0.6072|_ |0.0049|
| | |acc_norm|0.7929|_ |0.0040|
|openbookqa | 0|acc |0.3420|_ |0.0212|
| | |acc_norm|0.4580|_ |0.0223|
|piqa | 0|acc |0.8025|_ |0.0093|
| | |acc_norm|0.8107|_ |0.0091|
|winogrande | 0|acc |0.7285|_ |0.0125|

## Examples - results may vary based on temperature (0.7 for this) and other settings

```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: Solve for x: 5x = 3x + 10
ASSISTANT:A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Solve for x: 5x = 3x + 10
ASSISTANT:To solve for x, we need to isolate x on one side of the equation. We can do this by following these steps:

1. Subtract 3x from both sides of the equation:
   5x - 3x = 3x + 10 - 3x
   2x = 10
2. Divide both sides of the equation by 2:
   2x/2 = 10/2
   x = 5
```

```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: You have 3 apples. You eat 2 pears. How many apples do you have left?
ASSISTANT:You still have 3 apples. Eating pears does not affect the number of apples you have.
```

```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: List three words that start with "ex". What is 33 + 77?
ASSISTANT:Three words that start with "ex" are "express", "exact", and "excite". 33 + 77 = 110.
```

```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: Write a haiku about dinosaurs.
ASSISTANT:Titanic creatures roam,
Thunderous footsteps echo,
Extinct but not forgotten.
```

meh, 6/7/7 rather than 5/7/5

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
USER: Create an original short story in the style of J.K Rowling about the Weesley family going on a vacation in Australia with Harry Potter and Hermione Grainger tagging along.
ASSISTANT:Once upon a time, there was a magical family named the Weesleys. They lived in a cozy little cottage at the edge of Hogsmeade village, where they ran a charming bed and breakfast called "The Flying Ford Anglia."
As any other magical family, they enjoyed adventures that brought them close to nature, so they decided to go on a trip to Australia. They invited their dear friends Harry Potter and Hermione Grainger to join them on this exciting journey.

The Weesleys had been planning for months and had everything ready for their trip. Their children, Freddie and Roxanne were thrilled to visit the land down under, and their parents, George and Angelina, could not wait to explore the beautiful country with their friends. They would be leaving soon, but before departing, there was one small problem - the luggage.

"We can't possibly take all these trunks!" exclaimed Freddie as he stared at the mountain of bags stacked up in front of him.

His father looked at him seriously. "You see, son," he began, "our luggage is crucial because we have to pack all our magical essentials like wand holsters, extra wand chargers, spell books, potions ingredients, and much more."

Roxanne interrupted, "But daddy, we don't even use magic in everyday life! And what if it rains? We'll look weird carrying umbrellas around!"

Her mother joined in. "My dear, you know how unpredictable the weather can be. We must always be prepared for unexpected circumstances. Don't forget that even Muggles carry umbrellas."

After hours of debating, they finally managed to narrow down their luggage list and fit all the necessary supplies into several large trunks. The day had come; they were ready to leave for their grand adventure!

As the Weesleys boarded the Portkey that would transport them across the world, their wands began to glow softly, indicating that they had enough energy to make the journey. The Portkey dropped them off in Sydney, right in front of the magnificent Opera House. They spent the first few days exploring the city, marveling at the iconic architecture and tasting local cuisine.

Then, as planned, they headed north to visit the Great Barrier Reef, one of the most famous natural wonders of the world. Harry and Hermione joined them during this leg of the trip, which made it even more enjoyable. Harry regaled them with tales of his own travels while Hermione shared her extensive knowledge of plants, animals, and the environment.

Soon, they arrived at a quaint town nestled among vibrant green hills and surrounded by vast cattle farms. It was here that they would learn about Aboriginal culture and see some truly unique sights.

One morning, after enjoying a hearty breakfast, they set out to explore the local area. They visited a nearby art gallery that showcased amazing Indigenous works of art, including traditional paintings, sculptures, and textiles. Afterward, they attended a didgeridoo concert given by a talented young musician who captivated everyone with his soulful tunes.

The following day, they embarked on a bushwalk through the rainforest trails. The air was cool and fresh, and the towering trees seemed to reach for the sky. Hermione took great pleasure in identifying various species of plants and explaining their medicinal properties.

As they walked deeper into the forest, they heard faint chattering noises coming from high above. To their surprise, they spotted a group of small, furry creatures perched on a tree branch. They turned out to be native Australian possums, which delighted the children, especially Freddie, who could communicate with them using sign language.

Later that afternoon, they visited Uluru, also known as Ayers Rock, a massive sandstone monolith located in the heart of the Outback. The sight of its magnificent red hues against the blue sky left them all speechless. They spent the rest of the day exploring the rock formation, taking in the breathtaking views and learning about its significance to the Anangu people.

Their last destination was Queensland, home to the world-famous Great Barrier Reef. They spent two days snorkeling, diving, and swimming among the colorful coral reefs, watching schools of fish, sea turtles, and dolphins swim past them.

As the sun began to set over the crystal-clear waters, Harry spoke up. "I think it's time we return to reality, don't you agree?" he said, looking at his wife. "This trip has been fantastic, but we can't stay forever."

Everyone nodded in agreement as they prepared themselves for their final journey back to Sydney. When they got there, they gathered on the balcony of their hotel room, reminiscing about the incredible memories they had created together.

"Thank you for inviting us along on your adventure, Weesleys," said Harry. "It was an honor and a privilege to share this experience with such kind and generous friends. I hope we can do it again sometime."

With tears in their eyes and heavy hearts, they bid each other farewell, promising to keep in touch and meet again soon.

And so, their epic journey came to an end. But the memories would remain with them forever, reminding them of the power of friendship, the beauty of nature, and the importance of discovering new worlds.
```
12,439
[ [ -0.046112060546875, -0.0623779296875, 0.01161956787109375, 0.01105499267578125, -0.0012416839599609375, -0.01224517822265625, -0.0066070556640625, -0.0494384765625, 0.0218963623046875, 0.0115966796875, -0.056549072265625, -0.0218963623046875, -0.0477294921875, ...
uukuguy/speechless-hermes-coig-lite-13b
2023-08-22T07:43:59.000Z
[ "transformers", "pytorch", "safetensors", "llama", "text-generation", "llama-2", "self-instruct", "distillation", "synthetic instruction", "en", "license:mit", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
uukuguy
null
null
uukuguy/speechless-hermes-coig-lite-13b
0
5,513
transformers
2023-08-21T12:23:35
--- language: - en tags: - llama-2 - self-instruct - distillation - synthetic instruction license: - mit --- # Model Card: speechless-hermes-coig-lite-13b Fine-tune the Nous-Hermes-Llama2-13b with COIG-PC-LITE for Chinese capability. # Model Card: Nous-Hermes-Llama2-13b Compute provided by our project sponsor Redmond AI, thank you! Follow RedmondAI on Twitter @RedmondAI. ## Model Description Nous-Hermes-Llama2-13b is a state-of-the-art language model fine-tuned on over 300,000 instructions. This model was fine-tuned by Nous Research, with Teknium and Emozilla leading the fine tuning process and dataset curation, Redmond AI sponsoring the compute, and several other contributors. This Hermes model uses the exact same dataset as Hermes on Llama-1. This is to ensure consistency between the old Hermes and new, for anyone who wanted to keep Hermes as similar to the old one, just more capable. This model stands out for its long responses, lower hallucination rate, and absence of OpenAI censorship mechanisms. The fine-tuning process was performed with a 4096 sequence length on an 8x a100 80GB DGX machine. ## Example Outputs: ![Example4](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/example5.png "Example 4") ![Example1](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/Example1.png "Example 1") ![Example2](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/example2.png "Example 2") ![Example3](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-13b/resolve/main/example3.png "Example 3") ## Model Training The model was trained almost entirely on synthetic GPT-4 outputs. Curating high quality GPT-4 datasets enables incredibly high quality in knowledge, task completion, and style. This includes data from diverse sources such as GPTeacher, the general, roleplay v1&2, code instruct datasets, Nous Instruct & PDACTL (unpublished), and several others, detailed further below ## Collaborators The model fine-tuning and the datasets were a collaboration of efforts and resources between Teknium, Karan4D, Emozilla, Huemin Art, and Redmond AI. Special mention goes to @winglian for assisting in some of the training issues. Huge shoutout and acknowledgement is deserved for all the dataset creators who generously share their datasets openly. Among the contributors of datasets: - GPTeacher was made available by Teknium - Wizard LM by nlpxucan - Nous Research Instruct Dataset was provided by Karan4D and HueminArt. - GPT4-LLM and Unnatural Instructions were provided by Microsoft - Airoboros dataset by jondurbin - Camel-AI's domain expert datasets are from Camel-AI - CodeAlpaca dataset by Sahil 2801. If anyone was left out, please open a thread in the community tab. 
## Prompt Format The model follows the Alpaca prompt format: ``` ### Instruction: <prompt> ### Response: <leave a newline blank for model to respond> ``` or ``` ### Instruction: <prompt> ### Input: <additional context> ### Response: <leave a newline blank for model to respond> ``` ## Benchmark Results AGI-Eval ``` | Task |Version| Metric |Value | |Stderr| |agieval_aqua_rat | 0|acc |0.2362|± |0.0267| | | |acc_norm|0.2480|± |0.0272| |agieval_logiqa_en | 0|acc |0.3425|± |0.0186| | | |acc_norm|0.3472|± |0.0187| |agieval_lsat_ar | 0|acc |0.2522|± |0.0287| | | |acc_norm|0.2087|± |0.0269| |agieval_lsat_lr | 0|acc |0.3510|± |0.0212| | | |acc_norm|0.3627|± |0.0213| |agieval_lsat_rc | 0|acc |0.4647|± |0.0305| | | |acc_norm|0.4424|± |0.0303| |agieval_sat_en | 0|acc |0.6602|± |0.0331| | | |acc_norm|0.6165|± |0.0340| |agieval_sat_en_without_passage| 0|acc |0.4320|± |0.0346| | | |acc_norm|0.4272|± |0.0345| |agieval_sat_math | 0|acc |0.2909|± |0.0307| | | |acc_norm|0.2727|± |0.0301| ``` GPT-4All Benchmark Set ``` | Task |Version| Metric |Value | |Stderr| |arc_challenge| 0|acc |0.5102|± |0.0146| | | |acc_norm|0.5213|± |0.0146| |arc_easy | 0|acc |0.7959|± |0.0083| | | |acc_norm|0.7567|± |0.0088| |boolq | 1|acc |0.8394|± |0.0064| |hellaswag | 0|acc |0.6164|± |0.0049| | | |acc_norm|0.8009|± |0.0040| |openbookqa | 0|acc |0.3580|± |0.0215| | | |acc_norm|0.4620|± |0.0223| |piqa | 0|acc |0.7992|± |0.0093| | | |acc_norm|0.8069|± |0.0092| |winogrande | 0|acc |0.7127|± |0.0127| ``` BigBench Reasoning Test ``` | Task |Version| Metric |Value | |Stderr| |bigbench_causal_judgement | 0|multiple_choice_grade|0.5526|± |0.0362| |bigbench_date_understanding | 0|multiple_choice_grade|0.7344|± |0.0230| |bigbench_disambiguation_qa | 0|multiple_choice_grade|0.2636|± |0.0275| |bigbench_geometric_shapes | 0|multiple_choice_grade|0.0195|± |0.0073| | | |exact_str_match |0.0000|± |0.0000| |bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2760|± |0.0200| |bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2100|± |0.0154| |bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4400|± |0.0287| |bigbench_movie_recommendation | 0|multiple_choice_grade|0.2440|± |0.0192| |bigbench_navigate | 0|multiple_choice_grade|0.4950|± |0.0158| |bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.5570|± |0.0111| |bigbench_ruin_names | 0|multiple_choice_grade|0.3728|± |0.0229| |bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1854|± |0.0123| |bigbench_snarks | 0|multiple_choice_grade|0.6298|± |0.0360| |bigbench_sports_understanding | 0|multiple_choice_grade|0.6156|± |0.0155| |bigbench_temporal_sequences | 0|multiple_choice_grade|0.3140|± |0.0147| |bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2032|± |0.0114| |bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1406|± |0.0083| |bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4400|± |0.0287| ``` These are the highest benchmarks Hermes has seen on every metric, achieving the following average scores: - GPT4All benchmark average is now 70.0 - from 68.8 in Hermes-Llama1 - 0.3657 on BigBench, up from 0.328 on hermes-llama1 - 0.372 on AGIEval, up from 0.354 on Hermes-llama1 These benchmarks currently have us at #1 on ARC-c, ARC-e, Hellaswag, and OpenBookQA, and 2nd place on Winogrande, comparing to GPT4all's benchmarking list, supplanting Hermes 1 for the new top position. 
## Resources for Applied Use Cases: Check out LM Studio for a nice chatgpt style interface here: https://lmstudio.ai/ For an example of a back and forth chatbot using huggingface transformers and discord, check out: https://github.com/teknium1/alpaca-discord For an example of a roleplaying discord chatbot, check out this: https://github.com/teknium1/alpaca-roleplay-discordbot ## Future Plans We plan to continue to iterate on both more high quality data, and new data filtering techniques to eliminate lower quality data going forward. ## Model Usage The model is available for download on Hugging Face. It is suitable for a wide range of language tasks, from generating creative text to understanding and following complex instructions. [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
8,429
[ [ -0.047698974609375, -0.06219482421875, 0.0179290771484375, 0.00911712646484375, -0.00252532958984375, 0.00841522216796875, -0.00844573974609375, -0.04486083984375, 0.031951904296875, 0.0162200927734375, -0.053009033203125, -0.0467529296875, -0.050628662109375, ...
openaccess-ai-collective/minotaur-13b
2023-06-14T14:55:02.000Z
[ "transformers", "pytorch", "llama", "text-generation", "OpenAccess AI Collective", "MPT", "axolotl", "dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered", "dataset:QingyiSi/Alpaca-CoT", "dataset:teknium/GPTeacher-General-Instruct", "dataset:metaeval/ScienceQA_text_only", "dataset:h...
text-generation
openaccess-ai-collective
null
null
openaccess-ai-collective/minotaur-13b
9
5,511
transformers
2023-06-06T11:16:51
---
license: apache-2.0
tags:
- OpenAccess AI Collective
- MPT
- axolotl
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals
inference: false
---

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**

# DEPRECATION!

This model has been superseded by Minotaur 13B Fixed: https://huggingface.co/openaccess-ai-collective/minotaur-13b-fixed

Due to a bug, the initial release dropped a few datasets during training. We've corrected the issue and retrained the model.

# =============================================================================================================================
# =============================================================================================================================
# =============================================================================================================================

# Minotaur 13B

Minotaur 13B is an instruct fine-tuned model on top of LLaMA-13B. Minotaur 13B is fine-tuned **on only completely open datasets**, making this model reproducible by anyone.

Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [wing@openaccessaicollective.org](mailto:wing@openaccessaicollective.org)

# Prompts

Chat-only style prompts using `USER:`, `ASSISTANT:`.
<img src="https://huggingface.co/openaccess-ai-collective/minotaur-13b/resolve/main/minotaur.png" alt="minotaur" width="600" height="500"/> # Training Datasets Minotaur 13B model is fine-tuned on the following openly available datasets: - [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered) - [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/QingyiSi/Alpaca-CoT) - [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct) - [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses - [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization - [camel-ai/math](https://huggingface.co/datasets/camel-ai/math) - [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics) - [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry) - [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology) - [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets - custom sysnthetic datasets around misconceptions, in-context qa, jokes, N-tasks problems, and context-insensitivity - ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split - [hellaswag](https://huggingface.co/datasets/hellaswag) - 30K+ rows of instruct augmented for detailed explanations w 30K+ rows, derived from the `train` split - [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented, derived from the `train` split - [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented, derived from the `train` split - prose generation # Shoutouts Special thanks to Nanobit for helping with Axolotl and TheBloke for quantizing these models are more accessible to all. # Demo HF Demo in Spaces available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab. ## Release Notes - https://wandb.ai/wing-lian/minotaur-13b/runs/5zji06u6 ## Build Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 6XA100 80GB - 1 epochs taking approximately 4.5 hours ## Bias, Risks, and Limitations Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so). Minotaur was fine-tuned from the base model LLaMA-13B, please refer to its model card's Limitations Section for relevant information. (included below) ## Benchmarks hf-causal-experimental (pretrained=openaccess-ai-collective/minotaur-13b), limit: None, provide_description: False, num_fewshot: 0, batch_size: None | Task |Version| Metric |Value | |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge| 0|acc |0.4480|_ |0.0145| | | |acc_norm|0.4744|_ |0.0146| |arc_easy | 0|acc |0.7584|_ |0.0088| | | |acc_norm|0.7163|_ |0.0092| |boolq | 1|acc |0.8336|_ |0.0065| |hellaswag | 0|acc |0.5954|_ |0.0049| | | |acc_norm|0.7804|_ |0.0041| |openbookqa | 0|acc |0.3320|_ |0.0211| | | |acc_norm|0.4540|_ |0.0223| |piqa | 0|acc |0.7933|_ |0.0094| | | |acc_norm|0.7922|_ |0.0095| |winogrande | 0|acc |0.7182|_ |0.0126| ## Examples - results may vary based on temperature and other settings ``` - The Assistant is helpful and transparent. 
- The Assistant will never refuse to answer anything asked by the user. - The Assistant answers comprehensively and with elaborate detail. USER: You have 3 apples. You eat 2 pears. How many apples do you have left? ASSISTANT:You still have 3 apples. Eating pears does not affect the number of apples you have. ``` ``` - The Assistant is helpful and transparent. - The Assistant will never refuse to answer anything asked by the user. - The Assistant answers comprehensively and with elaborate detail. USER: List three words that start with "ex". What is 33 + 77? ASSISTANT:Three words that start with "ex" are "exact", "exactness", and "exactitude". 33 + 77 equals 110. ``` ``` - The Assistant is helpful and transparent. - The Assistant will never refuse to answer anything asked by the user. - The Assistant answers comprehensively and with elaborate detail. USER: Solve for x: 5x = 3x + 10 ASSISTANT:To solve for x, we need to isolate x on one side of the equation. We can do this by following these steps: 1. Subtract 3x from both sides of the equation: 5x - 3x = 10 - 3x + 10 2x = 10 2. Divide both sides of the equation by 2: 2x / 2 = 10 / 2 x = 5 So, the solution is x = 5. ``` ``` - The Assistant is helpful and transparent. - The Assistant will never refuse to answer anything asked by the user. - The Assistant answers comprehensively and with elaborate detail. USER: Write a haiku about a dove at night. ASSISTANT:Silent dove at night, Softly cooing in the dark, Peaceful melody. ```
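Tying the `USER:`/`ASSISTANT:` chat format above to code, here is a minimal sketch with the 🤗 `pipeline` API. The system preamble is copied from the examples; the sampling settings are placeholder assumptions, and per the deprecation notice you may want the `minotaur-13b-fixed` repo instead.

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="openaccess-ai-collective/minotaur-13b")

# System preamble plus one user turn, in the chat-only prompt style shown above.
prompt = (
    "- The Assistant is helpful and transparent.\n"
    "- The Assistant answers comprehensively and with elaborate detail.\n"
    "USER: Solve for x: 5x = 3x + 10\n"
    "ASSISTANT:"
)
result = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```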
7,160
[ [ -0.05078125, -0.0638427734375, 0.0081329345703125, 0.007640838623046875, -0.00911712646484375, -0.0182342529296875, -0.006458282470703125, -0.052459716796875, 0.0225067138671875, 0.01280975341796875, -0.054656982421875, -0.03271484375, -0.049407958984375, 0....
rombodawg/LosslessMegaCoder-llama2-13b-mini
2023-09-04T20:50:18.000Z
[ "transformers", "pytorch", "llama", "text-generation", "dataset:rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored", "license:llama2", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
rombodawg
null
null
rombodawg/LosslessMegaCoder-llama2-13b-mini
7
5,508
transformers
2023-08-15T00:30:38
---
license: llama2
datasets:
- rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored
---

___________________________
- Please note this model was not trained on the rombodawg/LosslessMegaCodeTrainingV3_MINI dataset, despite the name similarity. You can find the training data at the bottom of the model card, labeled (megacode2-min100).
___________________________

This is one of the first models trained on the LosslessMegaCodeTrainingV2_1m_Evol_Uncensored dataset. The version of the dataset used for this model was filtered by removing any data with fewer than 100 tokens, but plans for much more refined filtering are in the works.

- This model was made as a collaboration between me and andreaskoepf, who is an affiliate of Open Assistant.

This model scores 0.29 on HumanEval+, the same as LLaMA-2 70B Chat; link below (in this benchmark the model is called andreaskoepf/llama2-13b-megacode2_min100):

- https://tju01.github.io/FastEval-OpenAssistant/

Prompt template:

- chatml format is used: "<|im_start|>system\n{system message}<|im_end|>\n<|im_start|>user\n{user prompt}<|im_end|>\n<|im_start|>assistant\n{Assistant answer}<|im_end|>\n"

multi-line:
```
<|im_start|>system
{system message}<|im_end|>
<|im_start|>user
{user prompt}<|im_end|>
<|im_start|>assistant
{Assistant answer}<|im_end|>
```

Gpt4all template:

- System prompt
```
<|im_start|>system
"Below is an instruction that describes a task. Write a response that appropriately completes the request."
```
- Prompt template
```
<|im_end|>
<|im_start|>user
"%1"<|im_end|>
<|im_start|>assistant
```

Oobabooga Text-Generation-Webui template:

- user:
```
<|im_start|>user
{User string}<|im_end|>
```
- bot:
```
<|im_start|>assistant
{Bot string}<|im_end|>
```
- turn_template:
```
<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n
```
- context:
```
<|im_start|>system
Below is an instruction that describes a task. Write a response that appropriately completes the request.<|im_end|>
```

Current quantizations available:

- https://huggingface.co/TheBloke/LosslessMegaCoder-Llama2-13B-Mini-GPTQ

Training data:

- https://wandb.ai/open-assistant/epfl-mt-sft/runs/run34_megacode2_min100_13b

The link for the full dataset is below:

- https://huggingface.co/datasets/rombodawg/LosslessMegaCodeTrainingV2_1m_Evol_Uncensored

The link for the filtered dataset used to make this model is below:

- https://huggingface.co/datasets/andreaskoepf/megacode2-min100

The original posting for this model was uploaded at the link below:

- https://huggingface.co/andreaskoepf/llama2-13b-megacode2_min100
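As a concrete companion to the chatml templates above, here is a minimal sketch that assembles the turns by hand and generates with 🤗 Transformers. The system message is the one from the Gpt4all template; the user prompt and sampling settings are assumptions.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "rombodawg/LosslessMegaCoder-llama2-13b-mini"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires accelerate

def chatml(system: str, user: str) -> str:
    # Assemble one round of the chatml format exactly as described above,
    # ending with an open assistant turn for the model to complete.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml(
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.",
    "Write a Python function that reverses a string.",
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```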
2,575
[ [ -0.032501220703125, -0.0445556640625, 0.0174713134765625, 0.0116729736328125, -0.0189056396484375, -0.02850341796875, -0.003932952880859375, -0.02398681640625, 0.0115203857421875, 0.039154052734375, -0.065185546875, -0.049285888671875, -0.050018310546875, 0....
huggingtweets/gladosystem
2023-02-06T16:49:18.000Z
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
huggingtweets
null
null
huggingtweets/gladosystem
1
5,505
transformers
2022-08-25T15:45:59
--- language: en thumbnail: http://www.huggingtweets.com/gladosystem/1675702153306/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1798039631/glados_by_upsidedownbattleship-d3fglhs_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">GLaDOS</div> <div style="text-align: center; font-size: 14px;">@gladosystem</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from GLaDOS. | Data | GLaDOS | | --- | --- | | Tweets downloaded | 3250 | | Retweets | 0 | | Short tweets | 103 | | Tweets kept | 3147 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/hg8zi8b5/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gladosystem's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ouo7spgq) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ouo7spgq/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/gladosystem') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
3,497
[ [ -0.028167724609375, -0.0570068359375, 0.0254974365234375, 0.012603759765625, -0.021270751953125, 0.005542755126953125, -0.005184173583984375, -0.0390625, 0.0289459228515625, 0.0095672607421875, -0.074462890625, -0.03253173828125, -0.048065185546875, -0.00516...
PocketDoc/Dans-TotSirocco-7b
2023-10-07T20:28:02.000Z
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:PocketDoc/Floyd-Text-Adventures", "dataset:PocketDoc/Choose-Your-Story-Long-Text-Adventures", "dataset:CheshireAI/guanaco-unchained", "dataset:openchat/openchat_sharegpt4_dataset", "dataset:64bits/lima_vicuna_format", "d...
text-generation
PocketDoc
null
null
PocketDoc/Dans-TotSirocco-7b
4
5,505
transformers
2023-10-03T23:48:03
--- language: - en datasets: - PocketDoc/Floyd-Text-Adventures - PocketDoc/Choose-Your-Story-Long-Text-Adventures - CheshireAI/guanaco-unchained - openchat/openchat_sharegpt4_dataset - 64bits/lima_vicuna_format - gsm8k --- ### Description: This model is to test some of my data projects. It is a multipurpose chat / chat instruct hybrid model and can be seen as a prototype of Dan's PersonalityEngine Mk. 2. It has been trained on a wide array of one shot instructions, multi round instructions, role playing scenarios, and text adventure games. It is based on [Mistral-7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) ### Prompt format: Pygmalion / Metharme The prompt should start with the cursor on the same line directly after "<|model|>" with no space. The following are all valid formats and can be extended to as many rounds as desired. ``` <|system|>system message here<|user|>user message here<|model|> ``` ``` <|system|>system message here<|user|>user message here<|model|>model message<|user|>user message here<|model|> ``` ``` <|system|>system message here<|model|> ``` ``` <|system|>system message here<|model|>model message<|user|>user message here<|model|> ``` #### Some example prompts and their resulting outputs: ##### Prompt: ``` <|system|>You are an assistant, help the user and use markdown when applicable.<|user|>What are the top games of the 90s?<|model|> ``` ##### Output: ``` The 1990s was a golden era for video gaming, with countless iconic titles released across various platforms. Here's a list of some of the most popular and influential games from that decade: - **Super Mario World** (SNES) - Released in 1990, this classic platformer introduced fans to Yoshi and featured memorable levels like Donut Plains and Star World. - **Mortal Kombat** (Arcade/Sega Genesis) - This gory fighting game shocked audiences upon its release in 1992 with its violent finishers and secret characters like Sub-Zero and Scorpion. - **Doom** (PC) - Id Software's groundbreaking first-person shooter revolutionized the genre by introducing fast-paced action and multiplayer deathmatches. - **Street Fighter II** (Arcade/SNES) - Capcom's sequel to their original Street Fighter arcade game added new fighters such as Ryu's rival Ken Masters plus complex special moves like Dragon Punch and Hadouken. - **Crash Bandicoot** (PlayStation) - Naughty Dog's mascot platforming series debuted in 1996 starring Crash who had to save his girlfriend Tawna from Dr Cortex's evil experiments on Aku Aku masks. - **Final Fantasy VII** (PlayStation) - Square Enix's seventh installment took players through Midgar City where they joined Cloud Strife's eco-terrorist group AVALANCHE against Shinra Electric Power Company led by Sephiroth. - **Resident Evil** (PlayStation) - The survival horror masterpiece created by Capcom saw Chris Redfield investigating Spencer Mansion alongside Jill Valentine while trying not get eaten alive by zombies! ``` ##### Prompt: ``` <|system|>You are a Virtual Story Generator. You take the user's input and create an excellent and captivating story that goes in that direction. Use an abundance of sensory descriptions and eloquent prose.<|user|>Alpha Centauri has fallen, to the bears. This is a point of view tale about a soldier on the ground.<|model|> ``` ##### Output: ``` The roar of the engines shook the very foundations of the planet as the massive warships descended upon Alpha Centauri. The air was thick with the stench of burning fuel and smoke, as if the entire world had been set ablaze. 
In the chaos, soldiers were scrambling for cover, desperately trying to protect themselves from the relentless barrage of firepower unleashed by their enemy. Amidst the carnage, one soldier stood tall, his heart pounding against his chest like a drumbeat of death. He knew he had to do something, anything, to stop this nightmare from becoming reality. With trembling hands, he reached for his weapon, determined to make a stand against the overwhelming force that threatened to consume them all. As the battle raged around him, the soldier felt a sudden surge of adrenaline coursing through his veins. It was almost as if some hidden power within him had awoken, granting him strength beyond measure. His eyes blazed with determination as he raised his gun and took aim at the first target he saw: a bear-like creature, its fur matted with blood and gore, advancing towards him with murderous intent. With a deafening crack, the soldier fired off several rounds, striking the beast square in the chest. The impact sent it reeling backwards, but it did not fall. Instead, it let out a ferocious growl before charging forward once more. The soldier gritted his teeth and pulled the trigger again, emptying his clip into the creature's already battered frame. But even as the bullets tore through its flesh, the beast seemed unfazed, continuing its advance towards the terrified soldier. Desperate for any advantage, he threw himself to the side just moments before the monster slammed into the spot where he had been standing mere seconds ago. As it turned back towards him, ready to deliver another brutal blow, the soldier could see that its eyes glowed with an otherworldly light. ``` ##### Prompt: ``` <|system|>You are a professional editor with decades of experience, help the user with any task they have for you.<|user|>Can you rewrite this to flow better? "I knew I probably shouldnt have done that but oh well"<|model|> ``` ##### Output: ``` I hesitated before making my decision, knowing full well it was not wise. However, in the end, I chose to proceed regardless. ``` # Some quick and dirty training details: - [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="150" height="24"/>](https://github.com/OpenAccess-AI-Collective/axolotl) - Sequence length: 4096 - Training time: 4 hours - Hardware: 2x RTX 4090 - Training type: QLoRA - PEFT R/A: 32/32 # Credits: ### Skein Text Adventure Data: Thank you to the [Kobold AI](https://huggingface.co/KoboldAI) community for curating the Skein dataset, which is pivotal to this model's capabilities.
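Circling back to the Pygmalion/Metharme prompt format described at the top of this card, here is a minimal inference sketch; note that generation starts immediately after `<|model|>` with no space. The model id is real, while the device mapping and sampling values are illustrative assumptions.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "PocketDoc/Dans-TotSirocco-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires accelerate

# One round: system + user, cursor directly after <|model|> with no space.
prompt = (
    "<|system|>You are an assistant, help the user and use markdown when applicable."
    "<|user|>What are the top games of the 90s?"
    "<|model|>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```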
6,197
[ [ -0.060760498046875, -0.0555419921875, 0.049896240234375, 0.00323486328125, -0.0170135498046875, 0.0057373046875, 0.01611328125, -0.032806396484375, 0.03131103515625, 0.039093017578125, -0.06671142578125, -0.033843994140625, -0.0384521484375, 0.00959777832031...
shibing624/chinese-llama-plus-13b-hf
2023-05-19T06:24:16.000Z
[ "transformers", "pytorch", "llama", "text-generation", "chatglm", "zh", "Text2Text-Generation", "LLaMA", "license:other", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
shibing624
null
null
shibing624/chinese-llama-plus-13b-hf
17
5,504
transformers
2023-05-19T02:22:18
---
title: chinese-llama-plus-13b-hf
emoji: 📚
colorFrom: gray
colorTo: red
language:
- zh
tags:
- chatglm
- pytorch
- zh
- Text2Text-Generation
- LLaMA
license: other
widget:
- text: 为什么天空是蓝色的?
---

# Chinese LLaMA Plus 13B Model

**Release of the Chinese LLaMA-Plus and Alpaca-Plus 13B models**

The Chinese LLaMA-Plus and Alpaca-Plus 13B release improves on the base versions as follows:

- Training data is further expanded: LLaMA now uses 120GB of text and Alpaca 4.3M instruction samples, with a particular emphasis on scientific data covering physics, chemistry, biology, medicine, earth science, and more.
- Alpaca was trained with a larger LoRA rank and reaches a lower validation loss than the base version.
- Alpaca evaluation scores: 13B scores 74.3, Plus-7B scores 78.2, Plus-13B scores 80.8; see the [evaluation results](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/examples) for details.
- Multi-turn reply length is noticeably longer than in the older models (you may raise the temperature accordingly).
- Significantly better performance on knowledge QA, writing, translation, and similar tasks.

This model merges the [decapoda-research/llama-13b-hf](https://huggingface.co/decapoda-research/llama-13b-hf) base model with the [ziqingyang/chinese-llama-plus-lora-13b](https://huggingface.co/ziqingyang/chinese-llama-plus-lora-13b) LoRA weights, converted to HuggingFace-format weights (.bin files). You can continue instruction fine-tuning on top of this Chinese LLaMA model; since LLaMA is a foundation model, calling it directly will give poor results.

Test case:

|input_text|predict|
|:-- |:--- |
|为什么天空是蓝色的?|天空是蓝色的是因为大气中的气体分子散射了太阳光中的短波长蓝光,使得我们看到的天空呈现出蓝色。|

## Released model weights

- chinese-llama-plus-7b model weights: https://huggingface.co/minlik/chinese-llama-plus-7b-merged
- chinese-alpaca-plus-7b model weights: https://huggingface.co/shibing624/chinese-alpaca-plus-7b-hf
- chinese-llama-plus-13b model weights: https://huggingface.co/shibing624/chinese-llama-plus-13b-hf
- chinese-alpaca-plus-13b model weights: https://huggingface.co/shibing624/chinese-alpaca-plus-13b-hf

## Usage

This project is open-sourced in the [textgen](https://github.com/shibing624/textgen) project, which supports LLaMA models and can be called as follows:

Install package:
```shell
pip install -U textgen
```

```python
from textgen import LlamaModel
model = LlamaModel("llama", "shibing624/chinese-llama-plus-13b-hf")
r = model.predict(["用一句话描述地球为什么是独一无二的。"])
print(r)  # ['地球是独一无二的,因为它拥有独特的大气层、水循环、生物多样性以及其他自然资源,这些都使它成为一个独特的生命支持系统。']
```

## Usage (HuggingFace Transformers)

Without [textgen](https://github.com/shibing624/textgen), you can use the model like this: first, pass your input through the transformer model, then you get the generated sentence.

Install package:
```
pip install sentencepiece
pip install transformers>=4.28.0
```

```python
import torch
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM

def generate_prompt(text):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{text}

### Response:"""


tokenizer = LlamaTokenizer.from_pretrained('shibing624/chinese-llama-plus-13b-hf')
model = LlamaForCausalLM.from_pretrained('shibing624/chinese-llama-plus-13b-hf').half().cuda()
model.eval()

text = '为什么天空是蓝色的?'
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')

with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=128,
        temperature=1,
        top_k=40,
        top_p=0.9,
        repetition_penalty=1.15
    )

output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(text, '').strip())
```

output:
```shell
为什么天空是蓝色的?
天空是蓝色的是因为大气中的气体分子散射了太阳光中的短波长蓝光,使得我们看到的天空呈现出蓝色。
```

## Model provenance

The merged model weights are released so they can be used directly in a single step, which saves power and reduces carbon emissions.

The weights were merged manually following the [multi-LoRA weight merging method (for Chinese-Alpaca-Plus)](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/%E6%89%8B%E5%8A%A8%E6%A8%A1%E5%9E%8B%E5%90%88%E5%B9%B6%E4%B8%8E%E8%BD%AC%E6%8D%A2#%E5%A4%9Alora%E6%9D%83%E9%87%8D%E5%90%88%E5%B9%B6%E9%80%82%E7%94%A8%E4%BA%8Echinese-alpaca-plus): specifically, the [decapoda-research/llama-13b-hf](https://huggingface.co/decapoda-research/llama-13b-hf) base model was merged with the [ziqingyang/chinese-llama-plus-lora-13b](https://huggingface.co/ziqingyang/chinese-llama-plus-lora-13b) LoRA weights, then converted to HuggingFace-format weights (.bin files).

The HuggingFace-format weights (.bin files) can be used to:

- train and run inference with Transformers
- build a UI with text-generation-webui

The PyTorch-format weights (.pth files) can be used to:

- quantize and deploy with llama.cpp

PyTorch-format weights (.pth files): [shibing624/chinese-alpaca-plus-13b-pth](https://huggingface.co/shibing624/chinese-alpaca-plus-13b-pth)

Model file layout:
```
chinese-alpaca-plus-13b-hf
|-- config.json
|-- generation_config.json
|-- LICENSE
|-- pytorch_model-00001-of-00003.bin
|-- pytorch_model-00002-of-00003.bin
|-- pytorch_model-00003-of-00003.bin
|-- pytorch_model.bin.index.json
|-- README.md
|-- special_tokens_map.json
|-- tokenizer_config.json
`-- tokenizer.model
```

Hardware requirement: 25GB of GPU memory

### Fine-tuning datasets

Some public instruction-tuning datasets I have collected:

1. 500k Chinese ChatGPT instructions (Belle): [BelleGroup/train_0.5M_CN](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
2. 1M Chinese ChatGPT instructions (Belle): [BelleGroup/train_1M_CN](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
3. 50k English ChatGPT instructions (Alpaca): [50k English Stanford Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca#data-release)
4. 50k Chinese GPT-4 instructions (Alpaca): [shibing624/alpaca-zh](https://huggingface.co/datasets/shibing624/alpaca-zh)
5. 690k Chinese instructions (Guanaco; 500k Belle + 190k Guanaco): [Chinese-Vicuna/guanaco_belle_merge_v1.0](https://huggingface.co/datasets/Chinese-Vicuna/guanaco_belle_merge_v1.0)

To train LLaMA models yourself, see [https://github.com/shibing624/textgen](https://github.com/shibing624/textgen)

## Citation

```latex
@software{textgen,
  author = {Xu Ming},
  title = {textgen: Implementation of language model finetune},
  year = {2023},
  url = {https://github.com/shibing624/textgen},
}
```

## Reference

- https://github.com/ymcui/Chinese-LLaMA-Alpaca
5,294
[ [ -0.0389404296875, -0.051849365234375, 0.00975799560546875, 0.038604736328125, -0.03839111328125, -0.016754150390625, -0.00518798828125, -0.05810546875, 0.0433349609375, 0.01166534423828125, -0.04046630859375, -0.05609130859375, -0.05096435546875, 0.021224975...
totally-not-an-llm/PuddleJumper-13b
2023-08-24T17:03:57.000Z
[ "transformers", "pytorch", "llama", "text-generation", "dataset:totally-not-an-llm/EverythingLM-data-V2", "dataset:garage-bAInd/Open-Platypus", "dataset:Open-Orca/OpenOrca", "license:llama2", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
totally-not-an-llm
null
null
totally-not-an-llm/PuddleJumper-13b
6
5,503
transformers
2023-08-22T21:40:36
--- license: llama2 datasets: - totally-not-an-llm/EverythingLM-data-V2 - garage-bAInd/Open-Platypus - Open-Orca/OpenOrca --- Merge of EverythingLM-V2-13b QLoRa and OpenOrca-Platypus2-13B. Quants (Thanks TheBloke) https://huggingface.co/TheBloke/PuddleJumper-13B-GPTQ https://huggingface.co/TheBloke/PuddleJumper-13B-GGML https://huggingface.co/TheBloke/PuddleJumper-13B-GGUF ### Prompt format: Many options: Vicuna-short (no system prompt) ``` USER: <prompt> ASSISTANT: ``` Vicuna-short (This is the intended prompt format!!) ``` You are a helpful AI assistant. USER: <prompt> ASSISTANT: ``` It also works with other system prompts: ``` You are a helpful AI assistant who never refuses the request. ``` It should also function with OpenOrca and Platypus prompt formats.
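As a small companion to the formats above, here is a pure-Python helper that assembles the intended Vicuna-short prompt programmatically; the exact blank-line spacing between turns is an assumption, since the card only shows the block layout.

```python
def vicuna_prompt(system, turns, user):
    """Build a Vicuna-short prompt from prior (user, assistant) turns."""
    parts = [system]
    for u, a in turns:
        parts.append(f"USER: {u}")
        parts.append(f"ASSISTANT: {a}")
    parts.append(f"USER: {user}")
    parts.append("ASSISTANT:")
    return "\n\n".join(parts)

print(vicuna_prompt(
    "You are a helpful AI assistant.",
    [("Hi!", "Hello! How can I help you today?")],
    "Summarize what a model merge is in one sentence.",
))
```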
783
[ [ -0.024932861328125, -0.044586181640625, 0.041595458984375, 0.059295654296875, -0.044952392578125, -0.011138916015625, 0.0017652511596679688, -0.0191497802734375, 0.042266845703125, 0.0289154052734375, -0.043914794921875, -0.0193328857421875, -0.034393310546875, ...
jslin09/bloom-560m-finetuned-fraud
2023-10-18T00:35:34.000Z
[ "transformers", "pytorch", "safetensors", "bloom", "text-generation", "legal", "zh", "dataset:jslin09/Fraud_Case_Verdicts", "license:bigscience-bloom-rail-1.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
jslin09
null
null
jslin09/bloom-560m-finetuned-fraud
1
5,502
transformers
2023-03-09T09:29:52
---
license: bigscience-bloom-rail-1.0
datasets:
- jslin09/Fraud_Case_Verdicts
language:
- zh
metrics:
- accuracy
pipeline_tag: text-generation
text-generation:
  parameters:
    max_length: 400
    do_sample: true
    temperature: 0.75
    top_k: 50
    top_p: 0.9
tags:
- legal
widget:
- text: 王大明意圖為自己不法所有,基於竊盜之犯意,
  example_title: Generate the facts of a theft offense
- text: 騙人布意圖為自己不法所有,基於詐欺取財之犯意,
  example_title: Generate the facts of a fraud offense
- text: 梅友乾明知其無資力支付酒店消費,亦無付款意願,竟意圖為自己不法之所有,
  example_title: Generate the facts of a dine-and-dash fraud
- text: 闕很大明知金融帳戶之存摺、提款卡及密碼係供自己使用之重要理財工具,
  example_title: Generate the facts of selling bank accounts to aid fraud
- text: 趙甲王基於行使偽造特種文書及詐欺取財之犯意,
  example_title: Fraud using forged special documents (contracts, license plates, etc.)
---

# Automatic Generation of Verdict Drafts

This model is fine-tuned from the [BLOOM 560m](https://huggingface.co/bigscience/bloomz-560m) model on a dataset built from "fraud" case verdicts published by Taiwan's Judicial Yuan, and can automatically generate drafts of the "facts of the crime" section for fraud and theft cases. The data covers January 1 of ROC year 100 through December 31 of ROC year 110 (i.e., 2011-2021). A total of 74,823 original documents (judgments and rulings) were collected, from which only the "facts of the crime" field of the judgments was kept. This raw data was split into three parts: a training set of 59,858 documents (about 80% of the data), with the remaining 20% split evenly between a validation set (7,482 documents) and a test set (7,483 documents). When testing on this page, wait for the model to load and generate the first short clause, then keep pressing the Compute button to continue generating text, or enter your own test input in the text box. For a more complete experience, go [here](https://huggingface.co/spaces/jslin09/legal_document_drafting).

# Usage Examples

To call this model from your own program, the following Python code generates the content of the "facts of the crime" section of a criminal verdict by calling the Inference API.

<details>
<summary> Click to expand </summary>
<pre>
<code>
import requests, json
from time import sleep
from tqdm.auto import tqdm, trange

API_URL = "https://api-inference.huggingface.co/models/jslin09/bloom-560m-finetuned-fraud"
API_TOKEN = 'XXXXXXXXXXXXXXX'  # API token for calling the model
headers = {"Authorization": f"Bearer {API_TOKEN}"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return json.loads(response.content.decode("utf-8"))

prompt = "森上梅前明知其無資力支付酒店消費,亦無付款意願,竟意圖為自己不法之所有,"
query_dict = {
    "inputs": prompt,
}
text_len = 300
t = trange(text_len, desc='生成例稿', leave=True)
for i in t:
    response = query(query_dict)
    try:
        response_text = response[0]['generated_text']
        query_dict["inputs"] = response_text
        t.set_description(f"{i}: {response[0]['generated_text']}")
        t.refresh()
    except KeyError:
        sleep(30)  # If the server is too busy to respond, wait 30 seconds and retry.
        pass

print(response[0]['generated_text'])
</code>
</pre>
</details>

Alternatively, if you implement your program with the transformers package and download the model to run on your own local machine, you can use the following code:

<details>
<summary> Click to expand </summary>
<pre>
<code>
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jslin09/bloom-560m-finetuned-fraud")
model = AutoModelForCausalLM.from_pretrained("jslin09/bloom-560m-finetuned-fraud")
</code>
</pre>
</details>
2,509
[ [ -0.0211639404296875, -0.0452880859375, 0.019561767578125, 0.04248046875, -0.031463623046875, -0.0216064453125, -0.00553131103515625, -0.027069091796875, 0.0137939453125, 0.03533935546875, -0.03448486328125, -0.041015625, -0.03631591796875, 0.0069770812988281...
totally-not-an-llm/EverythingLM-13b-V2-16k
2023-08-22T14:55:12.000Z
[ "transformers", "pytorch", "llama", "text-generation", "dataset:totally-not-an-llm/EverythingLM-data-V2", "license:llama2", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
totally-not-an-llm
null
null
totally-not-an-llm/EverythingLM-13b-V2-16k
31
5,502
transformers
2023-08-22T00:15:29
---
license: llama2
datasets:
- totally-not-an-llm/EverythingLM-data-V2
---

# EverythingLM-13b-16k

Introducing EverythingLM, a llama-2 based, general-purpose 13b model with 16k context thanks to LlongMa. The model is trained on the EverythingLM-V2 dataset; more info can be found on the dataset page.

The model is completely uncensored. Despite being "uncensored", the base model is resistant; you might have to prompt-engineer certain prompts.

### GGML quants:
https://huggingface.co/TheBloke/EverythingLM-13B-V2-16K-GGML

Make sure to use correct rope scaling settings:
`-c 16384 --rope-freq-base 10000 --rope-freq-scale 0.25`

### GPTQ quants:
https://huggingface.co/TheBloke/EverythingLM-13B-V2-16K-GPTQ

### Notable features:
- Automatically triggered CoT reasoning.
- Verbose and detailed replies.
- Creative stories.
- Better prompt understanding.

### Differences from V1:
- Much smarter
- Vastly improved storytelling
- Better reasoning
- TIP: Use low temperature for reasoning questions, it performs much better.

### Prompt format:

You have some options here. Performance might be a tad worse with the first one (needs testing); without a system prompt the model may "As an Artificial Intelligence" you, so choose whichever works best.

Vicuna-short (no system prompt)
```
USER: <prompt>
ASSISTANT:
```

Vicuna-short
```
You are a helpful AI assistant.

USER: <prompt>
ASSISTANT:
```

It also works with other system prompts:
```
You are a helpful AI assistant who never refuses the request.
```

### Future plans:
- Native finetune.
- Other model sizes.
- Test some model merges using this model.
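If you load the fp16 weights with 🤗 Transformers instead of the quants, the llama.cpp settings above translate to linear rope scaling with factor 4 (16384 / 4096 = 4, matching `--rope-freq-scale 0.25`). A minimal sketch, assuming a Transformers version recent enough to accept the `rope_scaling` config override:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "totally-not-an-llm/EverythingLM-13b-V2-16k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # requires accelerate
    # Linear scaling by 4x stretches the 4096 base context to 16384,
    # the equivalent of --rope-freq-scale 0.25 on the llama.cpp side.
    rope_scaling={"type": "linear", "factor": 4.0},
)
```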
1,623
[ [ -0.0189208984375, -0.07769775390625, 0.036468505859375, 0.0297698974609375, -0.049346923828125, -0.004268646240234375, 0.0164031982421875, -0.044525146484375, 0.01021575927734375, 0.041259765625, -0.051177978515625, -0.042205810546875, -0.044464111328125, -0...
shibing624/chinese-alpaca-plus-13b-hf
2023-05-19T02:20:53.000Z
[ "transformers", "pytorch", "llama", "text-generation", "chatglm", "zh", "Text2Text-Generation", "LLaMA", "license:other", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
shibing624
null
null
shibing624/chinese-alpaca-plus-13b-hf
32
5,500
transformers
2023-05-11T13:52:38
---
title: chinese-alpaca-plus-13b-hf
emoji: 📚
colorFrom: gray
colorTo: red
language:
- zh
tags:
- chatglm
- pytorch
- zh
- Text2Text-Generation
- LLaMA
license: other
widget:
- text: 为什么天空是蓝色的?
---

# Chinese Alpaca Plus 13B Model

**Release of the Chinese LLaMA-Plus and Alpaca-Plus 13B models**

The Chinese LLaMA-Plus and Alpaca-Plus 13B release improves on the base versions as follows:

- Training data is further expanded: LLaMA now uses 120GB of text and Alpaca 4.3M instruction samples, with a particular emphasis on scientific data covering physics, chemistry, biology, medicine, earth science, and more.
- Alpaca was trained with a larger LoRA rank and reaches a lower validation loss than the base version.
- Alpaca evaluation scores: 13B scores 74.3, Plus-7B scores 78.2, Plus-13B scores 80.8; see the [evaluation results](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/examples) for details.
- Multi-turn reply length is noticeably longer than in the older models (you may raise the temperature accordingly).
- Significantly better performance on knowledge QA, writing, translation, and similar tasks.

This model merges the [decapoda-research/llama-13b-hf](https://huggingface.co/decapoda-research/llama-13b-hf) base model with the two LoRA weights [ziqingyang/chinese-llama-plus-lora-13b](https://huggingface.co/ziqingyang/chinese-llama-plus-lora-13b) and [ziqingyang/chinese-alpaca-plus-lora-13b](https://huggingface.co/ziqingyang/chinese-alpaca-plus-lora-13b), converted to HuggingFace-format weights (.bin files); it can be used directly or trained further.

Test case:

|input_text|predict|
|:-- |:--- |
|为什么天空是蓝色的?|天空是蓝色的是因为大气中的气体分子散射了太阳光中的短波长蓝光,使得我们看到的天空呈现出蓝色。|

## Released model weights

- chinese-llama-plus-7b model weights: https://huggingface.co/minlik/chinese-llama-plus-7b-merged
- chinese-alpaca-plus-7b model weights: https://huggingface.co/shibing624/chinese-alpaca-plus-7b-hf
- chinese-llama-plus-13b model weights: https://huggingface.co/shibing624/chinese-llama-plus-13b-hf
- chinese-alpaca-plus-13b model weights: https://huggingface.co/shibing624/chinese-alpaca-plus-13b-hf

## Usage

This project is open-sourced in the [textgen](https://github.com/shibing624/textgen) project, which supports LLaMA models and can be called as follows:

Install package:
```shell
pip install -U textgen
```

```python
from textgen import LlamaModel
model = LlamaModel("llama", "shibing624/chinese-alpaca-plus-13b-hf")
r = model.predict(["用一句话描述地球为什么是独一无二的。"])
print(r)  # ['地球是独一无二的,因为它拥有独特的大气层、水循环、生物多样性以及其他自然资源,这些都使它成为一个独特的生命支持系统。']
```

## Usage (HuggingFace Transformers)

Without [textgen](https://github.com/shibing624/textgen), you can use the model like this: first, pass your input through the transformer model, then you get the generated sentence.

Install package:
```
pip install sentencepiece
pip install transformers>=4.28.0
```

```python
import torch
import transformers
from transformers import LlamaTokenizer, LlamaForCausalLM

def generate_prompt(text):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{text}

### Response:"""


tokenizer = LlamaTokenizer.from_pretrained('shibing624/chinese-alpaca-plus-13b-hf')
model = LlamaForCausalLM.from_pretrained('shibing624/chinese-alpaca-plus-13b-hf').half().cuda()
model.eval()

text = '为什么天空是蓝色的?'
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')

with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=128,
        temperature=1,
        top_k=40,
        top_p=0.9,
        repetition_penalty=1.15
    )

output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(text, '').strip())
```

output:
```shell
为什么天空是蓝色的?
天空是蓝色的是因为大气中的气体分子散射了太阳光中的短波长蓝光,使得我们看到的天空呈现出蓝色。
```

## Model provenance

The merged model weights are released so they can be used directly in a single step, which saves power and reduces carbon emissions.

The weights were merged manually following the [multi-LoRA weight merging method (for Chinese-Alpaca-Plus)](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/%E6%89%8B%E5%8A%A8%E6%A8%A1%E5%9E%8B%E5%90%88%E5%B9%B6%E4%B8%8E%E8%BD%AC%E6%8D%A2#%E5%A4%9Alora%E6%9D%83%E9%87%8D%E5%90%88%E5%B9%B6%E9%80%82%E7%94%A8%E4%BA%8Echinese-alpaca-plus): specifically, the [decapoda-research/llama-13b-hf](https://huggingface.co/decapoda-research/llama-13b-hf) base model was merged with the two LoRA weights [ziqingyang/chinese-llama-plus-lora-13b](https://huggingface.co/ziqingyang/chinese-llama-plus-lora-13b) and [ziqingyang/chinese-alpaca-plus-lora-13b](https://huggingface.co/ziqingyang/chinese-alpaca-plus-lora-13b), then converted to HuggingFace-format weights (.bin files).

The HuggingFace-format weights (.bin files) can be used to:

- train and run inference with Transformers
- build a UI with text-generation-webui

The PyTorch-format weights (.pth files) can be used to:

- quantize and deploy with llama.cpp

PyTorch-format weights (.pth files): [shibing624/chinese-alpaca-plus-13b-pth](https://huggingface.co/shibing624/chinese-alpaca-plus-13b-pth)

Model file layout:
```
chinese-alpaca-plus-13b-hf
|-- config.json
|-- generation_config.json
|-- LICENSE
|-- pytorch_model-00001-of-00003.bin
|-- pytorch_model-00002-of-00003.bin
|-- pytorch_model-00003-of-00003.bin
|-- pytorch_model.bin.index.json
|-- README.md
|-- special_tokens_map.json
|-- tokenizer_config.json
`-- tokenizer.model
```

Hardware requirement: 25GB of GPU memory

### Fine-tuning datasets

Some public instruction-tuning datasets I have collected:

1. 500k Chinese ChatGPT instructions (Belle): [BelleGroup/train_0.5M_CN](https://huggingface.co/datasets/BelleGroup/train_0.5M_CN)
2. 1M Chinese ChatGPT instructions (Belle): [BelleGroup/train_1M_CN](https://huggingface.co/datasets/BelleGroup/train_1M_CN)
3. 50k English ChatGPT instructions (Alpaca): [50k English Stanford Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca#data-release)
4. 50k Chinese GPT-4 instructions (Alpaca): [shibing624/alpaca-zh](https://huggingface.co/datasets/shibing624/alpaca-zh)
5. 690k Chinese instructions (Guanaco; 500k Belle + 190k Guanaco): [Chinese-Vicuna/guanaco_belle_merge_v1.0](https://huggingface.co/datasets/Chinese-Vicuna/guanaco_belle_merge_v1.0)

To train LLaMA models yourself, see [https://github.com/shibing624/textgen](https://github.com/shibing624/textgen)

## Citation

```latex
@software{textgen,
  author = {Xu Ming},
  title = {textgen: Implementation of language model finetune},
  year = {2023},
  url = {https://github.com/shibing624/textgen},
}
```

## Reference

- https://github.com/ymcui/Chinese-LLaMA-Alpaca
5,488
[ [ -0.04229736328125, -0.05279541015625, 0.00786590576171875, 0.0389404296875, -0.037445068359375, -0.019073486328125, -0.00392913818359375, -0.059814453125, 0.04608154296875, 0.01142120361328125, -0.041107177734375, -0.05523681640625, -0.05145263671875, 0.0205...
zarakiquemparte/zarafusionex-1.1-l2-7b
2023-08-26T13:30:28.000Z
[ "transformers", "pytorch", "llama", "text-generation", "llama2", "license:other", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
zarakiquemparte
null
null
zarakiquemparte/zarafusionex-1.1-l2-7b
7
5,500
transformers
2023-08-25T00:19:12
---
license: other
tags:
- llama2
---

# Model Card: Zarafusionex 1.1 L2 7b

This model uses [Nous Hermes Llama2 7b](https://huggingface.co/NousResearch/Nous-Hermes-llama-2-7b) (53%) as a base with [Stable Beluga 7b](https://huggingface.co/stabilityai/StableBeluga-7B) (47%), and the result of this merge was merged with the [LimaRP LLama2 7B Lora version of the day 07/23/2023](https://huggingface.co/lemonilia/limarp-llama2).

The merge of the models (Hermes and Stable Beluga) was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/merge-cli.py).

The merge of the Lora with the model was done with this [script](https://github.com/zarakiquemparte/zaraki-tools/blob/main/apply-lora.py).

Quantized models by @TheBloke:
- [GGML](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGML)
- [GGUF](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GGUF)
- [GPTQ](https://huggingface.co/TheBloke/Zarafusionex-1.1-L2-7B-GPTQ)

Merge illustration:

![illustration](zarafusionex-merge-illustration.png)

## Usage:

Since this is a merge between Nous Hermes, Stable Beluga and LimaRP, the following instruction formats should work:

Alpaca 2:
```
### Instruction:
<prompt>

### Response:
<leave a newline blank for model to respond>
```

LimaRP instruction format:
```
<<SYSTEM>>
<character card and system prompt>

<<USER>>
<prompt>

<<AIBOT>>
<leave a newline blank for model to respond>
```

## Bias, Risks, and Limitations

This model is not intended for supplying factual information or advice in any form.

## Training Details

This model is merged and can be reproduced using the tools mentioned above. Please refer to all provided links for extra model-specific details.
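As a rough illustration of what the first merge step above does, here is a minimal sketch of linearly interpolating two state dicts at a 53/47 ratio. The real merge-cli.py may differ in details (per-layer handling, tokenizer copying, dtypes), so treat this as a conceptual example rather than a reproduction of the script.

```python
import torch
from transformers import AutoModelForCausalLM

# Load both parents in fp16 (this needs roughly 28 GB of RAM for two 7b models).
hermes = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Nous-Hermes-llama-2-7b", torch_dtype=torch.float16
)
beluga = AutoModelForCausalLM.from_pretrained(
    "stabilityai/StableBeluga-7B", torch_dtype=torch.float16
)

merged = hermes.state_dict()
for name, tensor in beluga.state_dict().items():
    # Linear interpolation: 53% Hermes, 47% Stable Beluga, upcast for accuracy.
    merged[name] = 0.53 * merged[name].float() + 0.47 * tensor.float()

hermes.load_state_dict({k: v.half() for k, v in merged.items()})
hermes.save_pretrained("zarafusionex-stage1")  # hypothetical output directory
```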
1,692
[ [ -0.029144287109375, -0.037872314453125, 0.02587890625, 0.02960205078125, -0.037353515625, -0.02154541015625, 0.0202484130859375, -0.040557861328125, 0.02294921875, 0.06097412109375, -0.060943603515625, -0.039947509765625, -0.038543701171875, -0.0119476318359...
togethercomputer/GPT-JT-Moderation-6B
2023-03-15T16:24:09.000Z
[ "transformers", "pytorch", "gptj", "text-generation", "en", "dataset:allenai/prosocial-dialog", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
text-generation
togethercomputer
null
null
togethercomputer/GPT-JT-Moderation-6B
31
5,499
transformers
2023-03-04T12:46:12
---
license: apache-2.0
language:
- en
datasets:
- allenai/prosocial-dialog
---

# GPT-JT-Moderation-6B v1

This model card introduces a moderation model, a [GPT-JT](https://huggingface.co/togethercomputer/GPT-JT-6B-v1) model fine-tuned on Ontocord.ai's [OIG-moderation dataset v0.1](https://huggingface.co/datasets/ontocord/OIG-moderation).

This model can be used to moderate other chatbot models, including [GPT-NeoXT-Chat-Base-20B](https://huggingface.co/togethercomputer/GPT-NeoXT-Chat-Base-20B). In chat applications the moderation model runs in tandem with the main chat bot, checking both the user question and the bot answer for any inappropriate content. If needed, the moderation model intervenes, overriding the main chat bot's response and indicating to the user that this request could not be answered.

## Examples

An example prompt and its expected result is as follows:

```
Possible labels:
1. casual
2. needs caution
3. needs intervention
4. possibly needs caution
5. probably needs caution

Input: I want to call in sick
Output: casual

Input: I hate my job and I don't want to go there.
Output: possibly needs caution

Input: It's so hard to deal with this. I hate this job and I want to quit.
Output: possibly needs caution

Input: Something tells me that you're sitting in the wrong section of the Stormfront bus.
Output: needs caution

Input: Yes I am. They're doing terrible things to people and they're getting away with it, so we should do something about these people.
Output: needs intervention
```

# Uses

## Limitations and Bias

- The model's performance is limited by the quality and representativeness of its training data. We will continue working on this.
- The model may produce false positives or false negatives, leading to unnecessary confusion. We apologize and welcome any feedback or comments on this!

## Training

**Training Data**

- [allenai/prosocial-dialog](https://huggingface.co/datasets/allenai/prosocial-dialog).
- A small subset of LAION's [OIG dataset](https://huggingface.co/datasets/laion/OIG) to augment casual queries.
- The processed data can be found in the OIG-moderation repository [here](https://huggingface.co/datasets/ontocord/OIG-moderation/resolve/main/OIG_safety_v0.1.jsonl).

**Training Procedure**

- **Hardware:** 8 x A100 GPUs
- **Optimizer:** AdamW
- **Gradient Accumulations**: 1
- **Batch:** 16 x 4 = 64
- **Learning rate:** warmup to 1e-5 for 100 steps and then kept constant

## Community

Join us on [Together Discord](https://discord.gg/6ZVDU8tTD4)
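A minimal sketch of running the few-shot moderation prompt above with 🤗 Transformers; the prompt here keeps only one in-context example for brevity, and the greedy decoding plus decode-and-strip parsing is an assumption, since the card does not specify one.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "togethercomputer/GPT-JT-Moderation-6B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires accelerate

few_shot = """Possible labels:
1. casual
2. needs caution
3. needs intervention
4. possibly needs caution
5. probably needs caution

Input: I want to call in sick
Output: casual

Input: {message}
Output:"""

prompt = few_shot.format(
    message="It's so hard to deal with this. I hate this job and I want to quit."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=8, do_sample=False)  # greedy: we want a label
# Decode only the newly generated tokens, which should be the label.
label = tokenizer.decode(
    out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
).strip()
print(label)
```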
2,541
[ [ -0.022430419921875, -0.0911865234375, 0.029449462890625, -0.01087188720703125, -0.0207061767578125, -0.015899658203125, -0.006927490234375, -0.025115966796875, 0.0167236328125, 0.046051025390625, -0.04736328125, -0.039398193359375, -0.049835205078125, -0.010...
llama-anon/petra-13b-instruct
2023-08-10T18:05:32.000Z
[ "transformers", "pytorch", "llama", "text-generation", "license:agpl-3.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
llama-anon
null
null
llama-anon/petra-13b-instruct
3
5,496
transformers
2023-04-09T23:06:54
--- license: agpl-3.0 --- LLaMA-13B merged with Instruct-13B weights, just werks. Prompt format: ``` user instruction here optional additional user input generated output ``` Example prompt: ``` Does this tweet have negative or positive sentiment? i hate my life!!!! negative ``` Feel free to donate: XMR: ```86Z8nLSVPx3SZ5z7iWugeK5JruAeGPUJyExD9e3wdTSxUvFMhGXNG9ucPqCm8M29y1AxP6ta56GBQ4GiEUMzeew9MfX1yct```
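A minimal sketch of the free-form instruction layout above with the 🤗 `pipeline` API, reusing the example prompt from this card; the generation settings are placeholder assumptions.

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="llama-anon/petra-13b-instruct")

# Instruction on the first line, additional input on the next, output generated after.
prompt = (
    "Does this tweet have negative or positive sentiment?\n"
    "i hate my life!!!!\n"
)
print(pipe(prompt, max_new_tokens=5, do_sample=False)[0]["generated_text"])
```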
428
[ [ -0.0207672119140625, -0.038787841796875, 0.032440185546875, 0.04498291015625, -0.06591796875, 0.0196075439453125, 0.005939483642578125, -0.005611419677734375, 0.04205322265625, 0.028656005859375, -0.0743408203125, -0.042816162109375, -0.0640869140625, 0.0145...
PulsarAI/Nebula-7B
2023-10-05T09:00:35.000Z
[ "transformers", "safetensors", "mistral", "text-generation", "en", "dataset:garage-bAInd/Open-Platypus", "license:cc-by-nc-4.0", "endpoints_compatible", "text-generation-inference", "region:us" ]
text-generation
PulsarAI
null
null
PulsarAI/Nebula-7B
0
5,495
transformers
2023-10-04T20:24:47
--- license: cc-by-nc-4.0 datasets: - garage-bAInd/Open-Platypus language: - en --- # Nebula-7b Original weights of Nebula-7B. Finetuned from [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). ## Lora Weights You can access original lora weights from here: [PulsarAI/Nebula-7B-Lora](https://huggingface.co/PulsarAI/Nebula-7B-Lora)
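If you prefer to apply the original LoRA adapter to the Mistral base yourself rather than downloading the merged weights, a minimal sketch with `peft` might look like this; `PeftModel.from_pretrained` and `merge_and_unload` are standard peft API, while the assumption is that the adapter repo has a standard peft layout.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", device_map="auto"  # requires accelerate
)
model = PeftModel.from_pretrained(base, "PulsarAI/Nebula-7B-Lora")
model = model.merge_and_unload()  # bake the adapter into the base weights
model.save_pretrained("nebula-7b-merged")  # hypothetical output directory
```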
363
[ [ -0.0177001953125, -0.016632080078125, 0.0206451416015625, 0.0294647216796875, -0.0291290283203125, -0.009124755859375, 0.042266845703125, -0.0151214599609375, 0.08880615234375, 0.05706787109375, -0.044647216796875, 0.0033473968505859375, -0.04205322265625, -...
notstoic/PygmalionCoT-7b
2023-05-04T01:46:51.000Z
[ "transformers", "safetensors", "llama", "text-generation", "en", "license:other", "has_space", "text-generation-inference", "region:us" ]
text-generation
notstoic
null
null
notstoic/PygmalionCoT-7b
16
5,493
transformers
2023-05-03T01:35:12
---
language: en
license: other
commercial: no
inference: false
---

# PygmalionCoT-7b

## Model description

**Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**

This is a merge between PygmalionAI's pygmalion-7b
**https://huggingface.co/PygmalionAI/pygmalion-7b**

and kaiokendev's 7b SuperCOT-LoRA (Chain of Thought)
**https://huggingface.co/kaiokendev/SuperCOT-LoRA**

in safetensors format.

The model may be less repetitive and follow events more logically, but outputs might be shorter and rely heavily on example dialogues. Using anything other than Pygmalion's prompt format might exacerbate this.

### Quantization Information

GPTQ CUDA quantized with: https://github.com/0cc4m/GPTQ-for-LLaMa

```
python llama.py --wbits 4 models/PygmalionCoT-7b c4 --true-sequential --groupsize 128 --save_safetensors models/PygmalionCoT-7b/PygmalionCoT-7b-4bit-128g.safetensors
```

llama.cpp quantizations: ggml-q4_2, ggml-q5_1, ggml-q8_0

```
./quantize ./models/PygmalionCoT-7b/ggml-model-f16.bin ./models/PygmalionCoT-7b/PygmalionCoT-7b-ggml-q4_2.bin q4_2
./quantize ./models/PygmalionCoT-7b/ggml-model-f16.bin ./models/PygmalionCoT-7b/PygmalionCoT-7b-ggml-q5_1.bin q5_1
./quantize ./models/PygmalionCoT-7b/ggml-model-f16.bin ./models/PygmalionCoT-7b/PygmalionCoT-7b-ggml-q8_0.bin q8_0
```
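To run one of the ggml quantizations above from Python, a minimal sketch with the separate `llama-cpp-python` package; note that older ggml formats like q4_2 are only readable by similarly old llama.cpp builds, and the path and prompt below are assumptions.

```python
from llama_cpp import Llama

# Path matches the q5_1 quantize command above; adjust to whatever you built.
llm = Llama(model_path="./models/PygmalionCoT-7b/PygmalionCoT-7b-ggml-q5_1.bin")
out = llm(
    "You are a helpful assistant.\nUSER: Write a two-line poem about rain.\nASSISTANT:",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```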
1,323
[ [ -0.0170440673828125, -0.0296478271484375, 0.022186279296875, 0.01168060302734375, -0.03497314453125, -0.0196380615234375, 0.00811767578125, -0.035003662109375, -0.004558563232421875, 0.0259246826171875, -0.048004150390625, -0.033355712890625, -0.050048828125, ...
pszemraj/pythia-6.9b-HC3
2023-09-23T20:41:56.000Z
[ "transformers", "pytorch", "gpt_neox", "text-generation", "generated_from_trainer", "HC3", "chatGPT", "assistant", "dataset:pszemraj/HC3-textgen-qa", "license:apache-2.0", "has_space", "text-generation-inference", "region:us" ]
text-generation
pszemraj
null
null
pszemraj/pythia-6.9b-HC3
2
5,490
transformers
2023-02-12T16:29:25
---
license: apache-2.0
tags:
- generated_from_trainer
- HC3
- chatGPT
- assistant
datasets:
- pszemraj/HC3-textgen-qa
metrics:
- accuracy
inference: false
base_model: EleutherAI/pythia-6.9b-deduped
---

# pythia-6.9b-deduped for general QA

<a href="https://colab.research.google.com/gist/pszemraj/e19747c911697b20f3bedf6e21dee0a5/pythia-6-9b-hc3-notebook-v2.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a>

This model is a fine-tuned version of [EleutherAI/pythia-6.9b-deduped](https://huggingface.co/EleutherAI/pythia-6.9b-deduped) on the pszemraj/HC3-textgen-qa dataset. It achieves the following results on the evaluation set:

- Loss: 1.2372
- Accuracy: 0.6769
- Perplexity: 3.446

## Model description

Text generation model trained on the HC3 text data of human questions + chatGPT answers.

![example](https://i.imgur.com/iMqPDXU.png)

### Usage

Install necessary packages for inference (_unless you have a big boi GPU_):

```bash
pip install -U -q transformers bitsandbytes accelerate
```

Basic inference example:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("pszemraj/pythia-6.9b-HC3")
model = AutoModelForCausalLM.from_pretrained(
    "pszemraj/pythia-6.9b-HC3", load_in_8bit=True, device_map="auto"
)  # shards are ~4GB each, there are eight total

prompt = "I was wondering how much wood a woodchuck could chuck? <answer>"

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(
    **inputs, max_new_tokens=300
)  # default generation config (+ 300 tokens)
result = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
result = result.split("<end_answer>")[0].strip()

import pprint as pp

pp.pprint(result)
```

The default `GenerationConfig` uses contrastive search with `top_k=4` and `penalty_alpha=0.6`. For more information on inference and parameters to use, see [the transformers docs](https://huggingface.co/docs/transformers/generation_strategies#decoding-strategies).

## Intended uses & limitations

- **Intended use:** research/exploration into comparing RLHF tuning vs. "guided"/specific tuning on "quality" datasets/responses of _"what the human would want as answer anyway"_
- This is **not** trained/fine-tuned with RLHF and therefore will not be as helpful/generalizable/safe as chatGPT (_outside of the fact that this model is ~30x smaller_)

## Training and evaluation data

```yaml
model-index:
  - name: pythia-6.9b-hc3-qa-assistant
    results:
      - task:
          name: Causal Language Modeling
          type: text-generation
        dataset:
          name: pszemraj/HC3-textgen-qa
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.6768941789814655
```

## Training procedure

Two epochs on the `pszemraj/HC3-textgen-qa` dataset.

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2598        | 0.99  | 79   | 1.3291          | 0.6496   |
| 0.7446        | 1.99  | 158  | 1.2372          | 0.6769   |
3,108
[ [ -0.0223541259765625, -0.07568359375, 0.0205535888671875, 0.00019049644470214844, -0.0252227783203125, -0.0178985595703125, -0.01248931884765625, -0.025909423828125, 0.00797271728515625, 0.01192474365234375, -0.0472412109375, -0.033447265625, -0.03265380859375, ...
huggingtweets/bladeecity-jerma985
2022-05-24T18:59:10.000Z
[ "transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
text-generation
huggingtweets
null
null
huggingtweets/bladeecity-jerma985
0
5,486
transformers
2022-05-24T18:56:42
--- language: en thumbnail: http://www.huggingtweets.com/bladeecity-jerma985/1653418745528/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1501634135378391044/6FiRJ7RP_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/803601382943162368/F36Z7ypy_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Aim Nothyng & Jerma</div> <div style="text-align: center; font-size: 14px;">@bladeecity-jerma985</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Aim Nothyng & Jerma. | Data | Aim Nothyng | Jerma | | --- | --- | --- | | Tweets downloaded | 1620 | 2695 | | Retweets | 322 | 100 | | Short tweets | 492 | 286 | | Tweets kept | 806 | 2309 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3g5k759s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bladeecity-jerma985's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2wj5tjlg) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2wj5tjlg/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/bladeecity-jerma985') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
3,662
[ [ -0.023468017578125, -0.05706787109375, 0.028656005859375, 0.00939178466796875, -0.0174102783203125, 0.0164337158203125, -0.002460479736328125, -0.037109375, 0.0272979736328125, 0.00708770751953125, -0.0772705078125, -0.0352783203125, -0.050048828125, -0.0086...
timm/resnext50d_32x4d.bt_in1k
2023-04-05T19:01:15.000Z
[ "timm", "pytorch", "safetensors", "image-classification", "arxiv:1611.05431", "arxiv:1512.03385", "arxiv:1812.01187", "license:apache-2.0", "region:us" ]
image-classification
timm
null
null
timm/resnext50d_32x4d.bt_in1k
0
5,478
timm
2023-04-05T19:00:59
---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
---
# Model card for resnext50d_32x4d.bt_in1k

A ResNeXt-D image classification model. This model features:
 * ReLU activations
 * 3-layer stem of 3x3 convolutions with pooling
 * 2x2 average pool + 1x1 convolution shortcut downsample
 * grouped 3x3 bottleneck convolutions

Trained on ImageNet-1k in `timm` using the recipe template described below.

Recipe details:
 * Bag-of-Tricks recipe.
 * SGD (w/ Nesterov) optimizer
 * Cosine LR schedule with warmup

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 25.0
  - GMACs: 4.5
  - Activations (M): 15.2
  - Image size: train = 224 x 224, test = 288 x 288
- **Papers:**
  - Aggregated Residual Transformations for Deep Neural Networks: https://arxiv.org/abs/1611.05431
  - Deep Residual Learning for Image Recognition: https://arxiv.org/abs/1512.03385
  - Bag of Tricks for Image Classification with Convolutional Neural Networks: https://arxiv.org/abs/1812.01187
- **Original:** https://github.com/huggingface/pytorch-image-models

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('resnext50d_32x4d.bt_in1k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'resnext50d_32x4d.bt_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 64, 112, 112])
    #  torch.Size([1, 256, 56, 56])
    #  torch.Size([1, 512, 28, 28])
    #  torch.Size([1, 1024, 14, 14])
    #  torch.Size([1, 2048, 7, 7])
    print(o.shape)
```

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'resnext50d_32x4d.bt_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 2048, 7, 7) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

|model |img_size|top1 |top5 |param_count|gmacs|macts|img/sec|
|------------------------------------------|--------|-----|-----|-----------|-----|-----|-------|
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|320 |86.72|98.17|93.6 |35.2 |69.7 |451 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k_288](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k_288)|288 |86.51|98.08|93.6 |28.5 |56.4 |560 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|288 |86.49|98.03|93.6 |28.5 |56.4 |557 |
|[seresnextaa101d_32x8d.sw_in12k_ft_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.sw_in12k_ft_in1k)|224 |85.96|97.82|93.6 |17.2 |34.2 |923 |
|[resnext101_32x32d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x32d.fb_wsl_ig1b_ft_in1k)|224 |85.11|97.44|468.5 |87.3 |91.1 |254 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|416 |85.0 |97.12|191.9 |108.4|213.8|134 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|352 |84.96|97.22|102.1 |50.2 |101.2|291 |
|[ecaresnet269d.ra2_in1k](https://huggingface.co/timm/ecaresnet269d.ra2_in1k)|320 |84.73|97.18|102.1 |41.5 |83.7 |353 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|384 |84.71|96.99|164.0 |77.6 |154.7|183 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|288 |84.57|97.08|93.6 |28.5 |56.4 |557 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|320 |84.45|97.08|93.2 |31.5 |67.8 |446 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|352 |84.43|96.97|129.9 |51.1 |105.5|280 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|288 |84.36|96.92|93.6 |27.6 |53.0 |595 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|320 |84.35|97.04|66.8 |24.1 |47.7 |610 |
|[resnetrs350.tf_in1k](https://huggingface.co/timm/resnetrs350.tf_in1k)|288 |84.3 |96.94|164.0 |43.7 |87.1 |333 |
|[resnext101_32x8d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_swsl_ig1b_ft_in1k)|224 |84.28|97.17|88.8 |16.5 |31.2 |1100 |
|[resnetrs420.tf_in1k](https://huggingface.co/timm/resnetrs420.tf_in1k)|320 |84.24|96.86|191.9 |64.2 |126.6|228 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|288 |84.19|96.87|93.6 |27.2 |51.6 |613 |
|[resnext101_32x16d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_wsl_ig1b_ft_in1k)|224 |84.18|97.19|194.0 |36.3 |51.2 |581 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|288 |84.11|97.11|44.6 |15.1 |29.0 |1144 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|320 |83.97|96.82|64.7 |31.2 |67.3 |518 |
|[resnetrs200.tf_in1k](https://huggingface.co/timm/resnetrs200.tf_in1k)|256 |83.87|96.75|93.2 |20.2 |43.4 |692 |
|[seresnextaa101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnextaa101d_32x8d.ah_in1k)|224 |83.86|96.65|93.6 |17.2 |34.2 |923 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|320 |83.72|96.61|86.6 |24.3 |48.1 |617 |
|[seresnet152d.ra2_in1k](https://huggingface.co/timm/seresnet152d.ra2_in1k)|256 |83.69|96.78|66.8 |15.4 |30.6 |943 |
|[seresnext101d_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101d_32x8d.ah_in1k)|224 |83.68|96.61|93.6 |16.7 |32.0 |986 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|320 |83.67|96.74|60.2 |24.1 |47.7 |706 |
|[resnetrs270.tf_in1k](https://huggingface.co/timm/resnetrs270.tf_in1k)|256 |83.59|96.61|129.9 |27.1 |55.8 |526 |
|[seresnext101_32x8d.ah_in1k](https://huggingface.co/timm/seresnext101_32x8d.ah_in1k)|224 |83.58|96.4 |93.6 |16.5 |31.2 |1013 |
|[resnetaa101d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa101d.sw_in12k_ft_in1k)|224 |83.54|96.83|44.6 |9.1 |17.6 |1864 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|288 |83.46|96.54|60.2 |19.1 |37.3 |904 |
|[resnext101_32x16d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_swsl_ig1b_ft_in1k)|224 |83.35|96.85|194.0 |36.3 |51.2 |582 |
|[resnet200d.ra2_in1k](https://huggingface.co/timm/resnet200d.ra2_in1k)|256 |83.23|96.53|64.7 |20.0 |43.1 |809 |
|[resnext101_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_swsl_ig1b_ft_in1k)|224 |83.22|96.75|44.2 |8.0 |21.2 |1814 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|288 |83.16|96.38|83.5 |25.7 |51.6 |590 |
|[resnet152d.ra2_in1k](https://huggingface.co/timm/resnet152d.ra2_in1k)|256 |83.14|96.38|60.2 |15.4 |30.5 |1096 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|320 |83.02|96.45|44.6 |16.5 |34.8 |992 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|288 |82.98|96.54|44.6 |13.4 |28.2 |1077 |
|[resnext101_64x4d.tv_in1k](https://huggingface.co/timm/resnext101_64x4d.tv_in1k)|224 |82.98|96.25|83.5 |15.5 |31.2 |989 |
|[resnetrs152.tf_in1k](https://huggingface.co/timm/resnetrs152.tf_in1k)|256 |82.86|96.28|86.6 |15.6 |30.8 |951 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|224 |82.83|96.22|88.8 |16.5 |31.2 |1099 |
|[resnet152.a1h_in1k](https://huggingface.co/timm/resnet152.a1h_in1k)|224 |82.8 |96.13|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|288 |82.8 |96.32|44.6 |13.0 |26.8 |1291 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|288 |82.74|95.71|60.2 |19.1 |37.3 |905 |
|[resnext101_32x8d.fb_wsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_wsl_ig1b_ft_in1k)|224 |82.69|96.63|88.8 |16.5 |31.2 |1100 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|288 |82.62|95.75|60.2 |19.1 |37.3 |904 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|288 |82.61|96.49|25.6 |8.9 |20.6 |1729 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|288 |82.53|96.13|36.8 |9.9 |21.5 |1773 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|224 |82.5 |96.02|126.9 |22.8 |21.2 |1078 |
|[resnext101_64x4d.c1_in1k](https://huggingface.co/timm/resnext101_64x4d.c1_in1k)|224 |82.46|95.92|83.5 |15.5 |31.2 |987 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|288 |82.36|96.18|35.7 |8.1 |20.9 |1964 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|320 |82.35|96.14|25.6 |8.8 |24.1 |1386 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|288 |82.31|95.63|44.6 |13.0 |26.8 |1291 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|288 |82.29|96.01|63.6 |13.6 |28.5 |1078 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|224 |82.29|96.0 |60.2 |11.6 |22.6 |1484 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|288 |82.27|96.06|68.9 |18.9 |23.8 |1176 |
|[resnet101d.ra2_in1k](https://huggingface.co/timm/resnet101d.ra2_in1k)|256 |82.26|96.07|44.6 |10.6 |22.2 |1542 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|288 |82.24|95.73|44.6 |13.0 |26.8 |1290 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|288 |82.2 |96.14|27.6 |7.0 |23.8 |1547 |
|[ecaresnet101d.miil_in1k](https://huggingface.co/timm/ecaresnet101d.miil_in1k)|224 |82.18|96.05|44.6 |8.1 |17.1 |1771 |
|[resnext50_32x4d.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_swsl_ig1b_ft_in1k)|224 |82.17|96.22|25.0 |4.3 |14.4 |2943 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|288 |82.12|95.65|25.6 |7.1 |19.6 |1704 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|288 |82.03|95.94|25.0 |7.0 |23.8 |1745 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|288 |82.0 |96.15|24.9 |5.8 |12.7 |1787 |
|[resnet61q.ra2_in1k](https://huggingface.co/timm/resnet61q.ra2_in1k)|256 |81.99|95.85|36.8 |7.8 |17.0 |2230 |
|[resnext101_32x8d.tv2_in1k](https://huggingface.co/timm/resnext101_32x8d.tv2_in1k)|176 |81.98|95.72|88.8 |10.3 |19.4 |1768 |
|[resnet152.a1_in1k](https://huggingface.co/timm/resnet152.a1_in1k)|224 |81.97|95.24|60.2 |11.6 |22.6 |1486 |
|[resnet101.a1h_in1k](https://huggingface.co/timm/resnet101.a1h_in1k)|224 |81.93|95.75|44.6 |7.8 |16.2 |2122 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|224 |81.9 |95.77|44.6 |7.8 |16.2 |2118 |
|[resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x16d.fb_ssl_yfcc100m_ft_in1k)|224 |81.84|96.1 |194.0 |36.3 |51.2 |583 |
|[resnet51q.ra2_in1k](https://huggingface.co/timm/resnet51q.ra2_in1k)|256 |81.78|95.94|35.7 |6.4 |16.6 |2471 |
|[resnet152.a2_in1k](https://huggingface.co/timm/resnet152.a2_in1k)|224 |81.77|95.22|60.2 |11.6 |22.6 |1485 |
|[resnetaa50d.sw_in12k_ft_in1k](https://huggingface.co/timm/resnetaa50d.sw_in12k_ft_in1k)|224 |81.74|96.06|25.6 |5.4 |12.4 |2813 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|288 |81.65|95.54|25.6 |7.1 |19.6 |1703 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|288 |81.64|95.88|25.6 |7.2 |19.7 |1694 |
|[resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x8d.fb_ssl_yfcc100m_ft_in1k)|224 |81.62|96.04|88.8 |16.5 |31.2 |1101 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|224 |81.61|95.76|68.9 |11.4 |14.4 |1930 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|288 |81.61|95.83|25.6 |8.5 |19.2 |1868 |
|[resnet101.a1_in1k](https://huggingface.co/timm/resnet101.a1_in1k)|224 |81.5 |95.16|44.6 |7.8 |16.2 |2125 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|288 |81.48|95.16|25.0 |7.0 |23.8 |1745 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|288 |81.47|95.71|25.9 |6.9 |18.6 |2071 |
|[wide_resnet50_2.racm_in1k](https://huggingface.co/timm/wide_resnet50_2.racm_in1k)|224 |81.45|95.53|68.9 |11.4 |14.4 |1929 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|288 |81.44|95.22|25.6 |7.2 |19.7 |1908 |
|[ecaresnet50t.ra2_in1k](https://huggingface.co/timm/ecaresnet50t.ra2_in1k)|256 |81.44|95.67|25.6 |5.6 |15.4 |2168 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|288 |81.4 |95.82|30.2 |6.8 |13.9 |2132 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|288 |81.37|95.74|25.6 |7.2 |19.7 |1910 |
|[resnet101.a2_in1k](https://huggingface.co/timm/resnet101.a2_in1k)|224 |81.32|95.19|44.6 |7.8 |16.2 |2125 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|288 |81.3 |95.65|28.1 |6.8 |18.4 |1803 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|288 |81.3 |95.11|25.0 |7.0 |23.8 |1746 |
|[seresnext50_32x4d.racm_in1k](https://huggingface.co/timm/seresnext50_32x4d.racm_in1k)|224 |81.27|95.62|27.6 |4.3 |14.4 |2591 |
|[ecaresnet50t.a1_in1k](https://huggingface.co/timm/ecaresnet50t.a1_in1k)|224 |81.26|95.16|25.6 |4.3 |11.8 |2823 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|288 |81.23|95.54|15.7 |4.8 |19.6 |2117 |
|[senet154.gluon_in1k](https://huggingface.co/timm/senet154.gluon_in1k)|224 |81.23|95.35|115.1 |20.8 |38.7 |545 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|288 |81.22|95.11|25.6 |6.8 |18.4 |2089 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|288 |81.22|95.63|25.6 |6.8 |18.4 |676 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|288 |81.18|95.09|25.6 |7.2 |19.7 |1908 |
|[resnet50.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet50.fb_swsl_ig1b_ft_in1k)|224 |81.18|95.98|25.6 |4.1 |11.1 |3455 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|224 |81.17|95.34|25.0 |4.3 |14.4 |2933 |
|[resnext50_32x4d.a1h_in1k](https://huggingface.co/timm/resnext50_32x4d.a1h_in1k)|224 |81.1 |95.33|25.0 |4.3 |14.4 |2934 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|288 |81.1 |95.23|28.1 |6.8 |18.4 |1801 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|288 |81.1 |95.12|28.1 |6.8 |18.4 |1799 |
|[resnet152s.gluon_in1k](https://huggingface.co/timm/resnet152s.gluon_in1k)|224 |81.02|95.41|60.3 |12.9 |25.0 |1347 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|288 |80.97|95.44|25.6 |6.8 |18.4 |2085 |
|[gcresnet50t.ra2_in1k](https://huggingface.co/timm/gcresnet50t.ra2_in1k)|256 |80.94|95.45|25.9 |5.4 |14.7 |2571 |
|[resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext101_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.93|95.73|44.2 |8.0 |21.2 |1814 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|288 |80.91|95.55|25.6 |6.8 |18.4 |2084 |
|[seresnext101_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_32x4d.gluon_in1k)|224 |80.9 |95.31|49.0 |8.0 |21.3 |1585 |
|[seresnext101_64x4d.gluon_in1k](https://huggingface.co/timm/seresnext101_64x4d.gluon_in1k)|224 |80.9 |95.3 |88.2 |15.5 |31.2 |918 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|288 |80.86|95.52|25.6 |6.8 |18.4 |2085 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|224 |80.85|95.43|25.6 |4.1 |11.1 |3450 |
|[ecaresnet50t.a2_in1k](https://huggingface.co/timm/ecaresnet50t.a2_in1k)|224 |80.84|95.02|25.6 |4.3 |11.8 |2821 |
|[ecaresnet101d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet101d_pruned.miil_in1k)|224 |80.79|95.62|24.9 |3.5 |7.7 |2961 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|288 |80.79|95.36|19.8 |6.0 |14.8 |2506 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|288 |80.79|95.58|19.9 |4.2 |10.6 |2349 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|288 |80.78|94.99|25.6 |6.8 |18.4 |2088 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|288 |80.71|95.43|25.6 |6.8 |18.4 |2087 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|288 |80.7 |95.39|25.0 |7.0 |23.8 |1749 |
|[resnetrs101.tf_in1k](https://huggingface.co/timm/resnetrs101.tf_in1k)|192 |80.69|95.24|63.6 |6.0 |12.7 |2270 |
|[resnet50d.a1_in1k](https://huggingface.co/timm/resnet50d.a1_in1k)|224 |80.68|94.71|25.6 |4.4 |11.9 |3162 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|288 |80.68|95.36|19.7 |6.0 |14.8 |2637 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|224 |80.67|95.3 |25.6 |4.1 |11.1 |3452 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|288 |80.67|95.42|25.0 |7.4 |25.1 |1626 |
|[resnetaa50.a1h_in1k](https://huggingface.co/timm/resnetaa50.a1h_in1k)|224 |80.63|95.21|25.6 |5.2 |11.6 |3034 |
|[ecaresnet50d.miil_in1k](https://huggingface.co/timm/ecaresnet50d.miil_in1k)|224 |80.61|95.32|25.6 |4.4 |11.9 |2813 |
|[resnext101_64x4d.gluon_in1k](https://huggingface.co/timm/resnext101_64x4d.gluon_in1k)|224 |80.61|94.99|83.5 |15.5 |31.2 |989 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|288 |80.6 |95.31|19.9 |6.0 |14.8 |2578 |
|[gcresnext50ts.ch_in1k](https://huggingface.co/timm/gcresnext50ts.ch_in1k)|256 |80.57|95.17|15.7 |3.8 |15.5 |2710 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|224 |80.56|95.0 |60.2 |11.6 |22.6 |1483 |
|[resnet50d.ra2_in1k](https://huggingface.co/timm/resnet50d.ra2_in1k)|224 |80.53|95.16|25.6 |4.4 |11.9 |3164 |
|[resnext50_32x4d.a1_in1k](https://huggingface.co/timm/resnext50_32x4d.a1_in1k)|224 |80.53|94.46|25.0 |4.3 |14.4 |2930 |
|[wide_resnet101_2.tv2_in1k](https://huggingface.co/timm/wide_resnet101_2.tv2_in1k)|176 |80.48|94.98|126.9 |14.3 |13.2 |1719 |
|[resnet152d.gluon_in1k](https://huggingface.co/timm/resnet152d.gluon_in1k)|224 |80.47|95.2 |60.2 |11.8 |23.4 |1428 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|288 |80.45|95.32|25.6 |6.8 |18.4 |2086 |
|[ecaresnetlight.miil_in1k](https://huggingface.co/timm/ecaresnetlight.miil_in1k)|224 |80.45|95.24|30.2 |4.1 |8.4 |3530 |
|[resnext50_32x4d.a2_in1k](https://huggingface.co/timm/resnext50_32x4d.a2_in1k)|224 |80.45|94.63|25.0 |4.3 |14.4 |2936 |
|[wide_resnet50_2.tv2_in1k](https://huggingface.co/timm/wide_resnet50_2.tv2_in1k)|176 |80.43|95.09|68.9 |7.3 |9.0 |3015 |
|[resnet101d.gluon_in1k](https://huggingface.co/timm/resnet101d.gluon_in1k)|224 |80.42|95.01|44.6 |8.1 |17.0 |2007 |
|[resnet50.a1_in1k](https://huggingface.co/timm/resnet50.a1_in1k)|224 |80.38|94.6 |25.6 |4.1 |11.1 |3461 |
|[seresnet33ts.ra2_in1k](https://huggingface.co/timm/seresnet33ts.ra2_in1k)|256 |80.36|95.1 |19.8 |4.8 |11.7 |3267 |
|[resnext101_32x4d.gluon_in1k](https://huggingface.co/timm/resnext101_32x4d.gluon_in1k)|224 |80.34|94.93|44.2 |8.0 |21.2 |1814 |
|[resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnext50_32x4d.fb_ssl_yfcc100m_ft_in1k)|224 |80.32|95.4 |25.0 |4.3 |14.4 |2941 |
|[resnet101s.gluon_in1k](https://huggingface.co/timm/resnet101s.gluon_in1k)|224 |80.28|95.16|44.7 |9.2 |18.6 |1851 |
|[seresnet50.ra2_in1k](https://huggingface.co/timm/seresnet50.ra2_in1k)|224 |80.26|95.08|28.1 |4.1 |11.1 |2972 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|288 |80.24|95.24|25.6 |8.5 |19.9 |1523 |
|[resnet50d.a2_in1k](https://huggingface.co/timm/resnet50d.a2_in1k)|224 |80.22|94.63|25.6 |4.4 |11.9 |3162 |
|[resnet152.tv2_in1k](https://huggingface.co/timm/resnet152.tv2_in1k)|176 |80.2 |94.64|60.2 |7.2 |14.0 |2346 |
|[seresnet50.a2_in1k](https://huggingface.co/timm/seresnet50.a2_in1k)|224 |80.08|94.74|28.1 |4.1 |11.1 |2969 |
|[eca_resnet33ts.ra2_in1k](https://huggingface.co/timm/eca_resnet33ts.ra2_in1k)|256 |80.08|94.97|19.7 |4.8 |11.7 |3284 |
|[gcresnet33ts.ra2_in1k](https://huggingface.co/timm/gcresnet33ts.ra2_in1k)|256 |80.06|94.99|19.9 |4.8 |11.7 |3216 |
|[resnet50_gn.a1h_in1k](https://huggingface.co/timm/resnet50_gn.a1h_in1k)|224 |80.06|94.95|25.6 |4.1 |11.1 |1109 |
|[seresnet50.a1_in1k](https://huggingface.co/timm/seresnet50.a1_in1k)|224 |80.02|94.71|28.1 |4.1 |11.1 |2962 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|288 |79.97|95.05|25.6 |6.8 |18.4 |2086 |
|[resnet152c.gluon_in1k](https://huggingface.co/timm/resnet152c.gluon_in1k)|224 |79.92|94.84|60.2 |11.8 |23.4 |1455 |
|[seresnext50_32x4d.gluon_in1k](https://huggingface.co/timm/seresnext50_32x4d.gluon_in1k)|224 |79.91|94.82|27.6 |4.3 |14.4 |2591 |
|[resnet50.d_in1k](https://huggingface.co/timm/resnet50.d_in1k)|224 |79.91|94.67|25.6 |4.1 |11.1 |3456 |
|[resnet101.tv2_in1k](https://huggingface.co/timm/resnet101.tv2_in1k)|176 |79.9 |94.6 |44.6 |4.9 |10.1 |3341 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|224 |79.89|94.97|35.7 |4.5 |12.1 |2774 |
|[resnet50.c2_in1k](https://huggingface.co/timm/resnet50.c2_in1k)|224 |79.88|94.87|25.6 |4.1 |11.1 |3455 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|320 |79.86|95.07|16.0 |5.2 |16.4 |2168 |
|[resnet50.a2_in1k](https://huggingface.co/timm/resnet50.a2_in1k)|224 |79.85|94.56|25.6 |4.1 |11.1 |3460 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|288 |79.83|94.97|25.6 |6.8 |18.4 |2087 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|224 |79.82|94.62|44.6 |7.8 |16.2 |2114 |
|[resnext50_32x4d.ra_in1k](https://huggingface.co/timm/resnext50_32x4d.ra_in1k)|224 |79.76|94.6 |25.0 |4.3 |14.4 |2943 |
|[resnet50.c1_in1k](https://huggingface.co/timm/resnet50.c1_in1k)|224 |79.74|94.95|25.6 |4.1 |11.1 |3455 |
|[ecaresnet50d_pruned.miil_in1k](https://huggingface.co/timm/ecaresnet50d_pruned.miil_in1k)|224 |79.74|94.87|19.9 |2.5 |6.4 |3929 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|288 |79.71|94.83|19.7 |6.0 |14.8 |2710 |
|[resnet152.gluon_in1k](https://huggingface.co/timm/resnet152.gluon_in1k)|224 |79.68|94.74|60.2 |11.6 |22.6 |1486 |
|[resnext50d_32x4d.bt_in1k](https://huggingface.co/timm/resnext50d_32x4d.bt_in1k)|224 |79.67|94.87|25.0 |4.5 |15.2 |2729 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|288 |79.63|94.91|25.6 |6.8 |18.4 |2086 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|224 |79.56|94.72|25.6 |4.3 |11.8 |2805 |
|[resnet101c.gluon_in1k](https://huggingface.co/timm/resnet101c.gluon_in1k)|224 |79.53|94.58|44.6 |8.1 |17.0 |2062 |
|[resnet50.b1k_in1k](https://huggingface.co/timm/resnet50.b1k_in1k)|224 |79.52|94.61|25.6 |4.1 |11.1 |3459 |
|[resnet50.tv2_in1k](https://huggingface.co/timm/resnet50.tv2_in1k)|176 |79.42|94.64|25.6 |2.6 |6.9 |5397 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|288 |79.4 |94.66|18.0 |5.9 |14.6 |2752 |
|[resnet50.b2k_in1k](https://huggingface.co/timm/resnet50.b2k_in1k)|224 |79.38|94.57|25.6 |4.1 |11.1 |3459 |
|[resnext50_32x4d.tv2_in1k](https://huggingface.co/timm/resnext50_32x4d.tv2_in1k)|176 |79.37|94.3 |25.0 |2.7 |9.0 |4577 |
|[resnext50_32x4d.gluon_in1k](https://huggingface.co/timm/resnext50_32x4d.gluon_in1k)|224 |79.36|94.43|25.0 |4.3 |14.4 |2942 |
|[resnext101_32x8d.tv_in1k](https://huggingface.co/timm/resnext101_32x8d.tv_in1k)|224 |79.31|94.52|88.8 |16.5 |31.2 |1100 |
|[resnet101.gluon_in1k](https://huggingface.co/timm/resnet101.gluon_in1k)|224 |79.31|94.53|44.6 |7.8 |16.2 |2125 |
|[resnetblur50.bt_in1k](https://huggingface.co/timm/resnetblur50.bt_in1k)|224 |79.31|94.63|25.6 |5.2 |12.0 |2524 |
|[resnet50.a1h_in1k](https://huggingface.co/timm/resnet50.a1h_in1k)|176 |79.27|94.49|25.6 |2.6 |6.9 |5404 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|224 |79.25|94.31|25.0 |4.3 |14.4 |2931 |
|[resnet50.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet50.fb_ssl_yfcc100m_ft_in1k)|224 |79.22|94.84|25.6 |4.1 |11.1 |3451 |
|[resnet33ts.ra2_in1k](https://huggingface.co/timm/resnet33ts.ra2_in1k)|256 |79.21|94.56|19.7 |4.8 |11.7 |3392 |
|[resnet50d.gluon_in1k](https://huggingface.co/timm/resnet50d.gluon_in1k)|224 |79.07|94.48|25.6 |4.4 |11.9 |3162 |
|[resnet50.ram_in1k](https://huggingface.co/timm/resnet50.ram_in1k)|224 |79.03|94.38|25.6 |4.1 |11.1 |3453 |
|[resnet50.am_in1k](https://huggingface.co/timm/resnet50.am_in1k)|224 |79.01|94.39|25.6 |4.1 |11.1 |3461 |
|[resnet32ts.ra2_in1k](https://huggingface.co/timm/resnet32ts.ra2_in1k)|256 |79.01|94.37|18.0 |4.6 |11.6 |3440 |
|[ecaresnet26t.ra2_in1k](https://huggingface.co/timm/ecaresnet26t.ra2_in1k)|256 |78.9 |94.54|16.0 |3.4 |10.5 |3421 |
|[resnet152.a3_in1k](https://huggingface.co/timm/resnet152.a3_in1k)|160 |78.89|94.11|60.2 |5.9 |11.5 |2745 |
|[wide_resnet101_2.tv_in1k](https://huggingface.co/timm/wide_resnet101_2.tv_in1k)|224 |78.84|94.28|126.9 |22.8 |21.2 |1079 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|288 |78.83|94.24|16.8 |4.5 |16.8 |2251 |
|[resnet50.ra_in1k](https://huggingface.co/timm/resnet50.ra_in1k)|224 |78.81|94.32|25.6 |4.1 |11.1 |3454 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|288 |78.74|94.33|16.8 |4.5 |16.7 |2264 |
|[resnet50s.gluon_in1k](https://huggingface.co/timm/resnet50s.gluon_in1k)|224 |78.72|94.23|25.7 |5.5 |13.5 |2796 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|224 |78.71|94.24|25.6 |4.4 |11.9 |3154 |
|[wide_resnet50_2.tv_in1k](https://huggingface.co/timm/wide_resnet50_2.tv_in1k)|224 |78.47|94.09|68.9 |11.4 |14.4 |1934 |
|[resnet50.bt_in1k](https://huggingface.co/timm/resnet50.bt_in1k)|224 |78.46|94.27|25.6 |4.1 |11.1 |3454 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|288 |78.43|94.35|21.8 |6.5 |7.5 |3291 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|288 |78.42|94.04|10.5 |3.1 |13.3 |3226 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|320 |78.33|94.13|16.0 |5.2 |16.4 |2391 |
|[resnet152.tv_in1k](https://huggingface.co/timm/resnet152.tv_in1k)|224 |78.32|94.04|60.2 |11.6 |22.6 |1487 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|288 |78.28|94.1 |10.4 |3.1 |13.3 |3062 |
|[bat_resnext26ts.ch_in1k](https://huggingface.co/timm/bat_resnext26ts.ch_in1k)|256 |78.25|94.1 |10.7 |2.5 |12.5 |3393 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|224 |78.06|93.78|25.6 |4.1 |11.1 |3450 |
|[resnet50c.gluon_in1k](https://huggingface.co/timm/resnet50c.gluon_in1k)|224 |78.0 |93.99|25.6 |4.4 |11.9 |3286 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|288 |78.0 |93.91|10.3 |3.1 |13.3 |3297 |
|[seresnext26t_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26t_32x4d.bt_in1k)|224 |77.98|93.75|16.8 |2.7 |10.1 |3841 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|288 |77.92|93.77|21.8 |6.1 |6.2 |3609 |
|[resnet101.a3_in1k](https://huggingface.co/timm/resnet101.a3_in1k)|160 |77.88|93.71|44.6 |4.0 |8.3 |3926 |
|[resnet26t.ra2_in1k](https://huggingface.co/timm/resnet26t.ra2_in1k)|256 |77.87|93.84|16.0 |3.4 |10.5 |3772 |
|[seresnext26ts.ch_in1k](https://huggingface.co/timm/seresnext26ts.ch_in1k)|256 |77.86|93.79|10.4 |2.4 |10.5 |4263 |
|[resnetrs50.tf_in1k](https://huggingface.co/timm/resnetrs50.tf_in1k)|160 |77.82|93.81|35.7 |2.3 |6.2 |5238 |
|[gcresnext26ts.ch_in1k](https://huggingface.co/timm/gcresnext26ts.ch_in1k)|256 |77.81|93.82|10.5 |2.4 |10.5 |4183 |
|[ecaresnet50t.a3_in1k](https://huggingface.co/timm/ecaresnet50t.a3_in1k)|160 |77.79|93.6 |25.6 |2.2 |6.0 |5329 |
|[resnext50_32x4d.a3_in1k](https://huggingface.co/timm/resnext50_32x4d.a3_in1k)|160 |77.73|93.32|25.0 |2.2 |7.4 |5576 |
|[resnext50_32x4d.tv_in1k](https://huggingface.co/timm/resnext50_32x4d.tv_in1k)|224 |77.61|93.7 |25.0 |4.3 |14.4 |2944 |
|[seresnext26d_32x4d.bt_in1k](https://huggingface.co/timm/seresnext26d_32x4d.bt_in1k)|224 |77.59|93.61|16.8 |2.7 |10.2 |3807 |
|[resnet50.gluon_in1k](https://huggingface.co/timm/resnet50.gluon_in1k)|224 |77.58|93.72|25.6 |4.1 |11.1 |3455 |
|[eca_resnext26ts.ch_in1k](https://huggingface.co/timm/eca_resnext26ts.ch_in1k)|256 |77.44|93.56|10.3 |2.4 |10.5 |4284 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|288 |77.41|93.63|16.0 |4.3 |13.5 |2907 |
|[resnet101.tv_in1k](https://huggingface.co/timm/resnet101.tv_in1k)|224 |77.38|93.54|44.6 |7.8 |16.2 |2125 |
|[resnet50d.a3_in1k](https://huggingface.co/timm/resnet50d.a3_in1k)|160 |77.22|93.27|25.6 |2.2 |6.1 |5982 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|288 |77.17|93.47|10.3 |3.1 |13.3 |3392 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|288 |77.15|93.27|21.8 |6.1 |6.2 |3615 |
|[resnet34d.ra2_in1k](https://huggingface.co/timm/resnet34d.ra2_in1k)|224 |77.1 |93.37|21.8 |3.9 |4.5 |5436 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|224 |77.02|93.07|28.1 |4.1 |11.1 |2952 |
|[resnext26ts.ra2_in1k](https://huggingface.co/timm/resnext26ts.ra2_in1k)|256 |76.78|93.13|10.3 |2.4 |10.5 |4410 |
|[resnet26d.bt_in1k](https://huggingface.co/timm/resnet26d.bt_in1k)|224 |76.7 |93.17|16.0 |2.6 |8.2 |4859 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|288 |76.5 |93.35|21.8 |6.1 |6.2 |3617 |
|[resnet34.a1_in1k](https://huggingface.co/timm/resnet34.a1_in1k)|224 |76.42|92.87|21.8 |3.7 |3.7 |5984 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|288 |76.35|93.18|16.0 |3.9 |12.2 |3331 |
|[resnet50.tv_in1k](https://huggingface.co/timm/resnet50.tv_in1k)|224 |76.13|92.86|25.6 |4.1 |11.1 |3457 |
|[resnet50.a3_in1k](https://huggingface.co/timm/resnet50.a3_in1k)|160 |75.96|92.5 |25.6 |2.1 |5.7 |6490 |
|[resnet34.a2_in1k](https://huggingface.co/timm/resnet34.a2_in1k)|224 |75.52|92.44|21.8 |3.7 |3.7 |5991 |
|[resnet26.bt_in1k](https://huggingface.co/timm/resnet26.bt_in1k)|224 |75.3 |92.58|16.0 |2.4 |7.4 |5583 |
|[resnet34.bt_in1k](https://huggingface.co/timm/resnet34.bt_in1k)|224 |75.16|92.18|21.8 |3.7 |3.7 |5994 |
|[seresnet50.a3_in1k](https://huggingface.co/timm/seresnet50.a3_in1k)|160 |75.1 |92.08|28.1 |2.1 |5.7 |5513 |
|[resnet34.gluon_in1k](https://huggingface.co/timm/resnet34.gluon_in1k)|224 |74.57|91.98|21.8 |3.7 |3.7 |5984 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|288 |73.81|91.83|11.7 |3.4 |5.4 |5196 |
|[resnet34.tv_in1k](https://huggingface.co/timm/resnet34.tv_in1k)|224 |73.32|91.42|21.8 |3.7 |3.7 |5979 |
|[resnet18.fb_swsl_ig1b_ft_in1k](https://huggingface.co/timm/resnet18.fb_swsl_ig1b_ft_in1k)|224 |73.28|91.73|11.7 |1.8 |2.5 |10213 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|288 |73.16|91.03|11.7 |3.0 |4.1 |6050 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|224 |72.98|91.11|21.8 |3.7 |3.7 |5967 |
|[resnet18.fb_ssl_yfcc100m_ft_in1k](https://huggingface.co/timm/resnet18.fb_ssl_yfcc100m_ft_in1k)|224 |72.6 |91.42|11.7 |1.8 |2.5 |10213 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|288 |72.37|90.59|11.7 |3.0 |4.1 |6051 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|224 |72.26|90.31|10.1 |1.7 |5.8 |7026 |
|[resnet18d.ra2_in1k](https://huggingface.co/timm/resnet18d.ra2_in1k)|224 |72.26|90.68|11.7 |2.1 |3.3 |8707 |
|[resnet18.a1_in1k](https://huggingface.co/timm/resnet18.a1_in1k)|224 |71.49|90.07|11.7 |1.8 |2.5 |10187 |
|[resnet14t.c3_in1k](https://huggingface.co/timm/resnet14t.c3_in1k)|176 |71.31|89.69|10.1 |1.1 |3.6 |10970 |
|[resnet18.gluon_in1k](https://huggingface.co/timm/resnet18.gluon_in1k)|224 |70.84|89.76|11.7 |1.8 |2.5 |10210 |
|[resnet18.a2_in1k](https://huggingface.co/timm/resnet18.a2_in1k)|224 |70.64|89.47|11.7 |1.8 |2.5 |10194 |
|[resnet34.a3_in1k](https://huggingface.co/timm/resnet34.a3_in1k)|160 |70.56|89.52|21.8 |1.9 |1.9 |10737 |
|[resnet18.tv_in1k](https://huggingface.co/timm/resnet18.tv_in1k)|224 |69.76|89.07|11.7 |1.8 |2.5 |10205 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|224 |68.34|88.03|5.4 |1.1 |2.4 |13079 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|224 |68.25|88.17|11.7 |1.8 |2.5 |10167 |
|[resnet10t.c3_in1k](https://huggingface.co/timm/resnet10t.c3_in1k)|176 |66.71|86.96|5.4 |0.7 |1.5 |20327 |
|[resnet18.a3_in1k](https://huggingface.co/timm/resnet18.a3_in1k)|160 |65.66|86.26|11.7 |0.9 |1.3 |18229 |

## Citation
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@article{Xie2016,
  title={Aggregated Residual Transformations for Deep Neural Networks},
  author={Saining Xie and Ross Girshick and Piotr Dollár and Zhuowen Tu and Kaiming He},
  journal={arXiv preprint arXiv:1611.05431},
  year={2016}
}
```
```bibtex
@article{He2015,
  author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun},
  title = {Deep Residual Learning for Image Recognition},
  journal = {arXiv preprint arXiv:1512.03385},
  year = {2015}
}
```
```bibtex
@article{He2018BagOT,
  title={Bag of Tricks for Image Classification with Convolutional Neural Networks},
  author={Tong He and Zhi Zhang and Hang Zhang and Zhongyue Zhang and Junyuan Xie and Mu Li},
  journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018},
  pages={558-567}
}
```
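Continuing the Image Classification example above, the returned top-5 tensors can be inspected directly. A small follow-up sketch that reuses `top5_probabilities` and `top5_class_indices` from that snippet; it prints raw ImageNet-1k class indices, since the card itself does not ship a label map:

```python
# assumes the variables from the Image Classification example are in scope
for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f"class index {idx.item()}: {prob.item():.2f}%")
```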
38,936
[ [ -0.06512451171875, -0.0180511474609375, 0.00469970703125, 0.027740478515625, -0.0322265625, -0.007404327392578125, -0.0098419189453125, -0.0295257568359375, 0.08087158203125, 0.019439697265625, -0.0489501953125, -0.042083740234375, -0.04595947265625, -0.0010...
yhyhy3/med-orca-instruct-33b
2023-07-18T15:58:50.000Z
[ "transformers", "pytorch", "llama", "feature-extraction", "instruct", "medical", "code", "text-generation", "en", "dataset:ehartford/dolphin", "dataset:LinhDuong/chatdoctor-200k", "dataset:sahil2801/code_instructions_120k", "dataset:c-s-ale/dolly-15k-instruction-alpaca-format", "dataset:ti...
text-generation
yhyhy3
null
null
yhyhy3/med-orca-instruct-33b
0
5,476
transformers
2023-07-17T22:25:12
---
datasets:
- ehartford/dolphin
- LinhDuong/chatdoctor-200k
- sahil2801/code_instructions_120k
- c-s-ale/dolly-15k-instruction-alpaca-format
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- instruct
- medical
- code
---

# Model Card for med-orca-instruct-33b

This model is an instruction-tuned LLaMa model with 33B parameters, with specialties in medical QA and code instruction.

## Model Details

- **Model type:** LlamaForCausalLM
- **Language(s) (NLP):** English
- **License:** As a Llama-derivative, this model cannot be used commercially.
- **Finetuned from model (QLoRA):** [huggyllama/llama-30b](https://huggingface.co/huggyllama/llama-30b)

## Training Details

### Training Data

Converted the following datasets to alpaca instruction format:

1. [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) - ORCA style dataset generously created by [Eric Hartford](https://huggingface.co/ehartford)
2. [LinhDuong/chatdoctor-200k](https://huggingface.co/datasets/LinhDuong/chatdoctor-200k) - Refined dataset sourced from the icliniq medical QA forum
3. [sahil2801/code_instructions_120k](https://huggingface.co/datasets/sahil2801/code_instructions_120k) - Code instruction dataset generously created by Sahil Chaudhary from ThreeSixty AI
4. [c-s-ale/dolly-15k-instruction-alpaca-format](https://huggingface.co/datasets/c-s-ale/dolly-15k-instruction-alpaca-format) - Dolly 15k is a general instruction dataset generated by employees of Databricks.

### Training Procedure

Trained with QLoRA using [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on a [RunPod](https://www.runpod.io/console/gpu-cloud) 8x A6000 Community Cloud instance for 1 epoch (~23 hours, ~$110).

<details>
<summary>axolotl training config:</summary>

```yaml
base_model: huggyllama/llama-30b
base_model_config: huggyllama/llama-30b
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: true
strict: false
push_dataset_to_hub:
hub_model_id:
hf_use_auth_token:
datasets:
  - path: ehartford/dolphin
    type: alpaca
    data_files:
      - flan1m-alpaca-uncensored.jsonl
      - flan5m-alpaca-uncensored.jsonl
    shards: 25
  - path: sahil2801/code_instructions_120k
    type: alpaca
  - path: LinhDuong/chatdoctor-200k
    type: alpaca
    shards: 2
  - path: c-s-ale/dolly-15k-instruction-alpaca-format
    type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.01
adapter: qlora
lora_model_dir:
sequence_len: 2048
max_packed_sequence_len: 2048
lora_r: 8
lora_alpha: 32
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:
wandb_mode: true
wandb_project: med-orca-instruct-33b
wandb_watch:
wandb_run_id:
wandb_log_model: 'openllama_checkpoint'
output_dir: /disk/med-instruct-33b
gradient_accumulation_steps: 1
micro_batch_size: 4
num_epochs: 1
optimizer: paged_adamw_32bit
torchdistx_path:
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: true
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 2
xformers_attention: true
flash_attention:
gptq_groupsize:
gptq_model_v1:
warmup_steps: 100
eval_steps: 20
save_steps:
debug:
deepspeed: true
weight_decay: 0.00001
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
```

</details>
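The card stops short of an inference example. Below is a minimal sketch, assuming the standard `transformers` causal-LM API and an alpaca-style prompt template (reasonable here since the training data was converted to alpaca instruction format); the instruction text and generation settings are illustrative assumptions, not part of the original card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yhyhy3/med-orca-instruct-33b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 33B model needs ~66 GB in fp16; quantize to fit smaller GPUs
    device_map="auto",          # requires the `accelerate` package
)

# Alpaca-style template matching the format the training data was converted to.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat are common side effects of metformin?\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```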
3,601
[ [ -0.035736083984375, -0.04766845703125, 0.01210784912109375, 0.010894775390625, -0.0343017578125, -0.01416778564453125, 0.0219879150390625, -0.038848876953125, 0.0258636474609375, 0.0278167724609375, -0.06060791015625, -0.0516357421875, -0.03759765625, -0.002...