Dataset columns (types and ranges as reported by the dataset viewer):

- repo: string (147 classes)
- number: int64 (1 to 172k)
- title: string (2 to 476 chars)
- body: string (0 to 5k chars)
- url: string (39 to 70 chars)
- state: string (2 classes)
- labels: list (0 to 9 items)
- created_at: timestamp[ns, tz=UTC] (2017-01-18 18:50:08 to 2026-01-06 07:33:18)
- updated_at: timestamp[ns, tz=UTC] (2017-01-18 19:20:07 to 2026-01-06 08:03:39)
- comments: int64 (0 to 58, nullable)
- user: string (2 to 28 chars)
huggingface/unity-api
15
How to download the model locally and call it through the API
Because my internet connection is not very good, I would like to download the model to my local machine and call it through the Hugging Face API. How can I achieve this?
https://github.com/huggingface/unity-api/issues/15
closed
[]
2023-08-23T08:08:40Z
2023-11-08T10:26:34Z
null
haldon98
huggingface/evaluate
485
How to use `SubTask` with metrics that require valid `config_name`
## Issue
Currently there does not seem to be a way to define the `config_name` for the metric of a `SubTask` inside an `evaluate.EvaluationSuite`.

## Version
evaluate version: 0.4.0
transformers version: 4.32.0
Python version: 3.10.6

## Example
For example, consider the following `EvaluationSuite`, which tries to run the "glue" metric, which requires a `config_name` when calling `evaluate.load`. Code in `suite.py`:

```python
import evaluate
from evaluate.evaluation_suite import SubTask

class Suite(evaluate.EvaluationSuite):
    def __init__(self, name):
        super().__init__(name)
        self.preprocessor = lambda x: {"text": x["text"].lower()}
        self.suite = [
            SubTask(
                task_type="text-classification",
                data="glue",
                subset="sst2",
                split="validation[:10]",
                args_for_task={
                    "metric": "glue",
                    "input_column": "sentence",
                    "label_column": "label",
                    "label_mapping": {
                        "LABEL_0": 0.0,
                        "LABEL_1": 1.0
                    }
                }
            ),
        ]
```

Now consider running this `EvaluationSuite` with the following:

```python
from evaluate import EvaluationSuite

suite = EvaluationSuite.load('suite.py')
results = suite.run("gpt2")
```

Running this code results in the following error:

```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[60], line 2
      1 suite = EvaluationSuite.load('suite.py')
----> 2 results = suite.run("gpt2")

File /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/evaluation_suite/__init__.py:124, in EvaluationSuite.run(self, model_or_pipeline)
    122 args_for_task["subset"] = task.subset
    123 args_for_task["split"] = task.split
--> 124 results = task_evaluator.compute(**args_for_task)
    126 results["task_name"] = task_name + "/" + task.subset if task.subset else task_name
    127 results["data_preprocessor"] = str(task.data_preprocessor) if task.data_preprocessor is not None else None

File /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/evaluator/text_classification.py:136, in TextClassificationEvaluator.compute(self, model_or_pipeline, data, subset, split, metric, tokenizer, feature_extractor, strategy, confidence_level, n_resamples, device, random_state, input_column, second_input_column, label_column, label_mapping)
    127 metric_inputs, pipe_inputs = self.prepare_data(
    128     data=data, input_column=input_column, second_input_column=second_input_column, label_column=label_column
    129 )
    130 pipe = self.prepare_pipeline(
    131     model_or_pipeline=model_or_pipeline,
    132     tokenizer=tokenizer,
    133     feature_extractor=feature_extractor,
    134     device=device,
    135 )
--> 136 metric = self.prepare_metric(metric)
    138 # Compute predictions
    139 predictions, perf_results = self.call_pipeline(pipe, pipe_inputs)

File /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/evaluator/base.py:447, in Evaluator.prepare_metric(self, metric)
    445     metric = load(self.default_metric_name)
    446 elif isinstance(metric, str):
--> 447     metric = load(metric)
    449 return metric

File /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/loading.py:735, in load(path, config_name, module_type, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, **init_kwargs)
    731 evaluation_module = evaluation_module_factory(
    732     path, module_type=module_type, revision=revision, download_config=download_config, download_mode=download_mode
    733 )
    734 evaluation_cls = import_main_class(evaluation_module.module_path)
--> 735 evaluation_instance = evaluation_cls(
    736     config_name=config_name,
    737     process_id=process_id,
    738     num_process=num_process,
    739     cache_dir=cache_dir,
    740     keep_in_memory=keep_in_memory,
    741     experiment_id=experiment_id,
    742     hash=evaluation_module.hash,
    743     **init_kwargs,
    744 )
    746 if module_type and module_type != evaluation_instance.module_type:
    747     raise TypeError(
    748         f"No module of module type '{module_type}' not found for '{path}' locally, or on the Hugging Face Hub. Found module of module type '{evaluation_instance.module_type}' instead."
    749     )

File /localdisk/twilbers/src/notebooks/poc/glue/.venv/lib/python3.10/site-packages/evaluate/module.py:182, in EvaluationModule.__init__(self, config_name, keep_in_memory, cache_dir, num_process, process_id, seed, experiment_id, hash, max_conc
```
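Judging from the `Evaluator.prepare_metric` frame in the traceback above (`elif isinstance(metric, str): metric = load(metric)`), a metric given as a string is re-loaded without any `config_name`, while anything that is not a string appears to be passed through untouched. One possible workaround, then, is to put an already-loaded metric object (e.g. `evaluate.load("glue", "sst2")`) in `args_for_task` instead of the string `"glue"`. The dispatch can be sketched with stubs (names here are hypothetical stand-ins, not the evaluate API):

```python
def prepare_metric(metric, load):
    # Mirrors the logic visible in the traceback: a string is resolved via
    # load() with no config_name; anything else is assumed to already be
    # a metric object and is returned as-is.
    if isinstance(metric, str):
        metric = load(metric)
    return metric

# Stub loader standing in for evaluate.load (hypothetical).
def fake_load(path, config_name=None):
    return {"path": path, "config_name": config_name}

# Passing a string loses the config_name...
assert prepare_metric("glue", fake_load) == {"path": "glue", "config_name": None}
# ...while a pre-loaded metric keeps its config intact.
preloaded = fake_load("glue", "sst2")
assert prepare_metric(preloaded, fake_load)["config_name"] == "sst2"
```

Whether `EvaluationSuite` accepts a non-string metric in `args_for_task` end-to-end is untested here; this only illustrates why the string path drops the config.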
https://github.com/huggingface/evaluate/issues/485
open
[]
2023-08-22T23:15:43Z
2023-08-23T16:38:18Z
null
tybrs
huggingface/diffusers
4716
How to handle SDXL long prompt
### Describe the bug
I am unable to use prompt embeds in order to handle a prompt that is longer than 77 tokens.

### Reproduction
```python
import itertools
import os.path
import random
import string
import time
import typing as typ

import torch
from diffusers import StableDiffusionXLPipeline
from tqdm import tqdm

import bb
from web_sdxl import seed_everything

seed_everything(42)


def generate_random_string(length):
    letters = string.ascii_letters
    result = ''.join(random.choice(letters) for _ in range(length))
    return result


def get_pipeline_embeds(pipeline, prompt, negative_prompt, device):
    """
    Get pipeline embeds for prompts bigger than the maxlength of the pipe
    :param pipeline:
    :param prompt:
    :param negative_prompt:
    :param device:
    :return:
    """
    max_length = pipeline.tokenizer.model_max_length

    # simple way to determine length of tokens
    count_prompt = len(prompt.split(" "))
    count_negative_prompt = len(negative_prompt.split(" "))

    # create the tensor based on which prompt is longer
    if count_prompt >= count_negative_prompt:
        input_ids = pipeline.tokenizer(prompt, return_tensors="pt", truncation=False).input_ids.to(device)
        shape_max_length = input_ids.shape[-1]
        negative_ids = pipeline.tokenizer(negative_prompt, truncation=False, padding="max_length",
                                          max_length=shape_max_length, return_tensors="pt").input_ids.to(device)
    else:
        negative_ids = pipeline.tokenizer(negative_prompt, return_tensors="pt", truncation=False).input_ids.to(device)
        shape_max_length = negative_ids.shape[-1]
        input_ids = pipeline.tokenizer(prompt, return_tensors="pt", truncation=False, padding="max_length",
                                       max_length=shape_max_length).input_ids.to(device)

    concat_embeds = []
    neg_embeds = []
    for i in range(0, shape_max_length, max_length):
        concat_embeds.append(pipeline.text_encoder(input_ids[:, i: i + max_length])[0])
        neg_embeds.append(pipeline.text_encoder(negative_ids[:, i: i + max_length])[0])

    return torch.cat(concat_embeds, dim=1), torch.cat(neg_embeds, dim=1)


model_path = "fine_tuned_models/sdxl-sarit"
device = "mps" if torch.backends.mps.is_available() else "cpu"
out_dir: str = "gluta40"
age_prompts: typ.List[str] = [
    "young asian girl",
    "a photograph of an angel with sly expression, wearing a see-thru short roman style dress, beautiful asian mixed european woman face, beautiful eyes, black hair, looking down, hyper realistic and detailed, 16k",
]
hand_prompts: typ.List[str] = [
    "left hand holding a gluta40 jar one hand, right hand is behind her back",
    "right hand holding a gluta40 jar one hand, left hand is behind her back",
]
face_angle_prompts: typ.List[str] = [
    "straight face",
]
hair_prompts: typ.List[str] = [
    "black long tied hair",
    "black long hair",
]
background_prompts: typ.List[str] = [
    "no background, hold both hands, bad hands",
]
negative_prompt: str = "disfigured, disproportionate, bad anatomy, bad proportions, ugly, out of frame, mangled, asymmetric, cross-eyed, depressed, immature, stuffed animal, out of focus, high depth of field, cloned face, cloned head, age spot, skin blemishes, collapsed eyeshadow, asymmetric ears, imperfect eyes, unnatural, conjoined, missing limb, missing arm, missing leg, poorly drawn face, poorly drawn feet, poorly drawn hands, floating limb, disconnected limb, extra limb, malformed limbs, malformed hands, poorly rendered face, poor facial details, poorly rendered hands, double face, unbalanced body, unnatural body, lacking body, long body, cripple, cartoon, 3D, weird colors, unnatural skin tone, unnatural skin, stiff face, fused hand, skewed eyes, surreal, cropped head, group of people, too many fingers, bad hands, six fingers"

combined_list = list(itertools.product(age_prompts, hand_prompts, face_angle_prompts, hair_prompts, background_prompts))
random.shuffle(combined_list)
for item in tqdm(combined_list, total=len(combined_list)):
    age, hand, face_angle, hair, background = item
    if not os.path.exists(out_dir):
        os.makedirs(out_dir)
    prompt: str = ", ".join(item)
    print(prompt)
    out_filename: str = f"{out_dir}/{prompt.replace(' ', '_')}"
    if not os.path.exists(f"{out_filename}_0.png"):
        try:
            pipe = StableDiffusionXLPipeline.from_pretrained(model_path, safety_checker=None, requires_safety_checker=False)
            pipe.to(device)
            prompt_embeds, negative_prompt_embeds = get_pipeline_embeds(pipe, prompt, negative_prompt, device)
            images = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_prompt_embeds,
                          num_images_per_prompt=3, width=768,
```
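The core idea in `get_pipeline_embeds` — pad both prompts to a common token length, then encode consecutive 77-token windows and concatenate the results — can be sketched without the pipeline. Below, plain token-id lists stand in for tensors; note this sketch (like the snippet above) does not cover SDXL's second text encoder or the pooled embeddings (`pooled_prompt_embeds`) that SDXL pipelines also expect, which may well be related to the failure:

```python
def chunk_token_ids(ids, max_length=77):
    """Split a token-id list into consecutive windows of at most max_length."""
    return [ids[i:i + max_length] for i in range(0, len(ids), max_length)]

def pad_to(ids, length, pad_id=0):
    """Right-pad a token-id list to a target length."""
    return ids + [pad_id] * (length - len(ids))

prompt_ids = list(range(100))    # pretend 100-token prompt
negative_ids = list(range(40))   # shorter negative prompt

# Pad the shorter sequence so both are chunked into identical windows,
# mirroring the padding="max_length" branch above.
target = max(len(prompt_ids), len(negative_ids))
negative_ids = pad_to(negative_ids, target)

assert [len(c) for c in chunk_token_ids(prompt_ids)] == [77, 23]
assert [len(c) for c in chunk_token_ids(negative_ids)] == [77, 23]
```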
https://github.com/huggingface/diffusers/issues/4716
closed
[ "bug" ]
2023-08-22T16:28:25Z
2023-08-27T02:46:18Z
null
elcolie
huggingface/candle
547
How to turn off automatic translation for whisper
When I input a Chinese wav file, whisper outputs an English translation:

```
ls@LeeeSes-MacBook-Air ~/r/candle (main)> cargo run --release --features accelerate --example whisper -- --model small --language zh --input /Users/ls/Downloads/output.wav
    Finished release [optimized] target(s) in 0.38s
     Running `target/release/examples/whisper --model small --language zh --input /Users/ls/Downloads/output.wav`
Running on CPU, to run on GPU, build this example with `--features cuda`
loaded wav data: Header { audio_format: 1, channel_count: 1, sampling_rate: 16000, bytes_per_second: 32000, bytes_per_sample: 2, bits_per_sample: 16 }
pcm data loaded 287216
loaded mel: [1, 80, 4500]
0.0s -- 30.0s: This is a free online audio recorder application program. You can record sound from microphone. After recording, you can edit sound and edit any parts, adjust the balance and sound. Let's use the recording first.
30.0s -- 45.0s: I'm sorry.
```
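Whisper chooses between transcription and translation via a task token in its decoder prompt, not via the language flag alone: `<|transcribe|>` keeps the source language, while `<|translate|>` produces English. Whether the candle example exposes a switch for this is the open question here; the prompt layout itself can be sketched with token strings only (an illustration, not candle's API):

```python
def build_decoder_prompt(language, task):
    """Build Whisper's decoder prefix tokens (as strings, no model needed)."""
    assert task in ("transcribe", "translate")
    # <|transcribe|> keeps the source language; <|translate|> emits English.
    return ["<|startoftranscript|>", f"<|{language}|>", f"<|{task}|>"]

# Chinese audio in, Chinese text out:
assert build_decoder_prompt("zh", "transcribe") == [
    "<|startoftranscript|>", "<|zh|>", "<|transcribe|>"
]
```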
https://github.com/huggingface/candle/issues/547
closed
[]
2023-08-22T11:16:45Z
2023-08-22T18:52:40Z
null
LeeeSe
huggingface/trl
674
How to load the model and the checkpoint after trained the model?
I trained my model using the code in sft_trainer.py, and I saved the checkpoint and the model in the same dir. But I don't know how to load the model with the checkpoint. Or I just want to know whether `trainer.save_model(script_args.output_dir)` means I have saved a trained model, not just a checkpoint? I tried many ways to load the trained model but got errors like

```
RuntimeError: Error(s) in loading state_dict for PrefixEncoder:
        Missing key(s) in state_dict: "embedding.weight".
```

So, how do I load the model?
https://github.com/huggingface/trl/issues/674
closed
[]
2023-08-22T10:31:01Z
2023-11-27T21:34:30Z
null
ccwdb
huggingface/text-generation-inference
899
How to use multiple GPU cards with the text-generation-launcher tool?
### System Info
text-generation-launcher 1.0.0. How to use multiple GPU cards?

### Information
- [ ] Docker
- [X] The CLI directly

### Tasks
- [X] An officially supported command
- [ ] My own modifications

### Reproduction
`CUDA_VISIBLE_DEVICES=0,1,2,3 text-generation-launcher --model-id falcon-40b-instruct --sharded true --num-shard 1 --quantize bitsandbytes-fp4` does not use the multiple A10 GPU cards. It fails on GPU 0 with `OutOfMemoryError: CUDA out of memory.`

### Expected behavior
The model loads normally and serves HTTP POST requests.
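In TGI the number of GPUs actually used is governed by `--num-shard`: with `--num-shard 1` only one shard (one GPU) is launched even when four devices are visible, which would explain the OOM on GPU 0. A hedged sketch of the intended invocation (model id and device list as in the report; flag behavior per the launcher's own help text):

```shell
# One shard per visible GPU: tensor-parallel across all four A10s.
CUDA_VISIBLE_DEVICES=0,1,2,3 text-generation-launcher \
  --model-id falcon-40b-instruct \
  --sharded true \
  --num-shard 4 \
  --quantize bitsandbytes-fp4
```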
https://github.com/huggingface/text-generation-inference/issues/899
closed
[]
2023-08-22T10:09:17Z
2023-08-22T10:13:06Z
null
luefei
huggingface/chat-ui
411
Chat-ui crashes TGI?
Hey! When I deploy TGI Endpoint locally and test it with the following cli request: `curl 127.0.0.1:8080/generate_stream \ -X POST \ -d '{"inputs":"def calculate_fibonacci(n:str):","parameters":{"max_new_tokens":100}}' \ -H 'Content-Type: application/json'` It works without any problem. Even load tests with locust.io work without problems. This is the response from tgi with the curl command: `2023-08-22T08:29:52.944813Z INFO HTTP request{otel.name=POST /generate_stream http.client_ip= http.flavor=1.1 http.host=127.0.0.1:8080 http.method=POST http.route=/generate_stream http.scheme=HTTP http.target=/generate_stream http.user_agent=curl/7.82.0 otel.kind=server trace_id=772a4a52f29b540aac2b3b331ea5247a http.status_code=200 otel.status_code="OK"}:generate_stream{parameters=GenerateParameters { best_of: None, temperature: None, repetition_penalty: None, top_k: None, top_p: None, typical_p: None, do_sample: false, max_new_tokens: 100, return_full_text: None, stop: [], truncate: None, watermark: false, details: false, decoder_input_details: false, seed: None } total_time="5.639886919s" validation_time="153.888Β΅s" queue_time="184.627Β΅s" inference_time="5.639548636s" time_per_token="56.395486ms" seed="None"}: text_generation_router::server: router/src/server.rs:452: Success` But if I want to call tgi with the chat-ui it works the first time (I get an streaming response in the chat-ui), but then the tgi freezes? 
EDIT: This is the output I get from tgi (I get two responses from tgi?): `2023-08-22T11:38:32.027037Z INFO HTTP request{otel.name=POST / http.client_ip= http.flavor=1.1 http.host=127.0.0.1:8080 http.method=POST http.route=/ http.scheme=HTTP http.target=/ http.user_agent=undici otel.kind=server trace_id=a55b57fc395cc1f8fa59dcd111733cd4 http.status_code=200 otel.status_code="OK"}:compat_generate{default_return_full_text=false}:generate_stream{parameters=GenerateParameters { best_of: None, temperature: Some(0.9), repetition_penalty: Some(1.2), top_k: Some(50), top_p: Some(0.95), typical_p: None, do_sample: false, max_new_tokens: 1048, return_full_text: Some(false), stop: [], truncate: Some(1000), watermark: false, details: false, decoder_input_details: false, seed: None } total_time="1.803072692s" validation_time="139.35Β΅s" queue_time="209.805Β΅s" inference_time="1.802724034s" time_per_token="56.335126ms" seed="Some(14814785333613176252)"}: text_generation_router::server: router/src/server.rs:450: Success ` ` 2023-08-22T11:38:32.643776Z INFO HTTP request{otel.name=POST / http.client_ip= http.flavor=1.1 http.host=127.0.0.1:8080 http.method=POST http.route=/ http.scheme=HTTP http.target=/ http.user_agent=undici otel.kind=server trace_id=7064d891ae5c88c74aaba2f06cacd5d3}:compat_generate{default_return_full_text=false}:generate{parameters=GenerateParameters { best_of: None, temperature: None, repetition_penalty: None, top_k: None, top_p: None, typical_p: None, do_sample: false, max_new_tokens: 20, return_full_text: Some(false), stop: [], truncate: None, watermark: false, details: false, decoder_input_details: false, seed: None } total_time="519.787388ms" validation_time="77.98Β΅s" queue_time="78.433Β΅s" inference_time="519.63134ms" time_per_token="57.736815ms" seed="None"}: text_generation_router::server: router/src/server.rs:287: Success` EDIT: I get the following output in my terminal with the second response from tgi: ` SyntaxError: Unexpected token d in JSON at 
position 0 at JSON.parse (<anonymous>) at Module.generateFromDefaultEndpoint (/Users/xx/Desktop/chat-ui/src/lib/server/generateFromDefaultEndpoint.ts:73:30) at process.processTicksAndRejections (node:internal/process/task_queues:95:5) at async POST (/Users/xx/Desktop/chat-ui/src/routes/conversation/[id]/summarize/+server.ts:30:26) at async Module.render_endpoint (/Users/xx/Desktop/chat-ui/node_modules/@sveltejs/kit/src/runtime/server/endpoint.js:47:20) at async resolve (/Users/xx/Desktop/chat-ui/node_modules/@sveltejs/kit/src/runtime/server/respond.js:388:17) at async Object.handle (/Users/xx/Desktop/chat-ui/src/hooks.server.ts:66:20) at async Module.respond (/Users/xx/Desktop/chat-ui/node_modules/@sveltejs/kit/src/runtime/server/respond.js:259:20) at async file:///Users/xx/Desktop/chat-ui/node_modules/@sveltejs/kit/src/exports/vite/dev/index.js:506:22` chat-ui version: 0.5.0 tgi-version: 1.0.1 Chat-UI Model Config: ``` MODELS=`[ { "name": "Vicuna", "datasetName": "OpenAssistant/oasst1", "endpoints": [{"url": "http://127.0.0.1:8080/generate_stream"}], "description": "A good alternative to ChatGPT", "websiteUrl": "https://open-assistant.io", "userMessageToken": "USER:", "assistantMessageToken": "ASSISTANT:", "messageEndToken": "</s>", "preprompt": "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\n\n
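The `SyntaxError: Unexpected token d in JSON at position 0` in the summarize endpoint looks like a server-sent-events frame (`data: {...}`) being fed to `JSON.parse` directly: the leading `d` of `data:` is exactly what would trip the parser at position 0. This is an assumption about the root cause, not a confirmed diagnosis; the parsing issue can be illustrated in Python:

```python
import json

def parse_sse_frame(frame):
    """Strip the SSE 'data:' prefix before JSON-decoding the payload."""
    prefix = "data:"
    if frame.startswith(prefix):
        frame = frame[len(prefix):].strip()
    return json.loads(frame)

frame = 'data: {"token": {"text": "hello"}}'

# Decoding the raw frame fails exactly as the error message suggests.
try:
    json.loads(frame)
    raised = False
except json.JSONDecodeError:
    raised = True
assert raised  # 'd' at position 0 is not valid JSON

# Stripping the prefix first succeeds.
assert parse_sse_frame(frame) == {"token": {"text": "hello"}}
```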
https://github.com/huggingface/chat-ui/issues/411
open
[]
2023-08-22T08:48:02Z
2023-08-23T06:45:26Z
0
schauppi
huggingface/accelerate
1870
[Question] How to optimize two losses alternately with gradient accumulation?
I want to update a model by optimizing two losses alternately with gradient accumulation, like this:

```python
# Suppose gradient_accumulation is set to 2.
optimizer = optim(unet.parameters())

with accelerator.accumulate(unet):
    outputs = unet(input)
    loss1 = loss_func1(outputs)
    loss1.backward()
    optimizer.step()
    optimizer.zero_grad()

with accelerator.accumulate(unet):
    outputs = unet(input)
    loss2 = loss_func2(outputs)
    loss2.backward()
    optimizer.step()
    optimizer.zero_grad()
```

Is this correct? It appears from the [documentation](https://huggingface.co/docs/accelerate/usage_guides/gradient_accumulation#converting-it-to-accelerate) that `accelerator.accumulate` will normalize the loss and then backpropagate without updating the gradients until reaching `gradient_accumulation_steps`. My main concern is that the gradients accumulated by two different losses for the same model will affect each other. Hope to find some help here; thanks in advance.
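Whether the two `accumulate` blocks interfere can be reasoned about with plain arithmetic: if `step`/`zero_grad` only take effect at sync points, gradients from both losses that land inside the same accumulation window are summed into the same `.grad` buffer before the effective update. Below is a toy scalar model of accumulate-then-step — not Accelerate's actual implementation (and note the linked guide uses `accelerator.backward(loss)` rather than `loss.backward()`):

```python
class Accumulator:
    """Toy model: gradients add into one buffer; step applies only when synced."""
    def __init__(self, steps):
        self.steps = steps    # gradient_accumulation_steps
        self.calls = 0
        self.grad = 0.0
        self.param = 1.0

    def backward(self, grad):
        self.calls += 1
        self.grad += grad / self.steps  # loss is normalized by the step count

    def maybe_step(self, lr=0.1):
        if self.calls % self.steps == 0:  # sync point reached
            self.param -= lr * self.grad
            self.grad = 0.0

acc = Accumulator(steps=2)
acc.backward(grad=4.0)   # "loss1" micro-step: no update yet
acc.maybe_step()
assert acc.param == 1.0
acc.backward(grad=2.0)   # "loss2" micro-step: both gradients now share the buffer
acc.maybe_step()
assert acc.param == 1.0 - 0.1 * (4.0 / 2 + 2.0 / 2)
```

So with accumulation over 2 steps, the single effective update mixes contributions of both losses; keeping them truly separate would require syncing (or zeroing) between the two blocks.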
https://github.com/huggingface/accelerate/issues/1870
closed
[]
2023-08-21T12:49:19Z
2023-10-24T15:06:33Z
null
hkunzhe
huggingface/candle
538
How to disable openssl-sys being included?
I would like to stop openssl-sys from being included in my project when using candle; I'm not sure how to do this. I tried adding the below to my Cargo.toml, but it didn't change anything. The reason I want to do this is that I get an error when trying to compile my library for aarch64-linux-android saying that pkg-config has not been configured to support cross-compilation and that I should install a sysroot for the target platform. I'd like to not include it anyway, since I won't be needing it and will be loading everything locally. Thanks.

```
hf-hub = { version = "0.2.0", default-features = false }
tokenizers = { version = "0.13.4", default-features = false }
```
https://github.com/huggingface/candle/issues/538
closed
[]
2023-08-21T10:47:26Z
2023-08-21T20:38:57Z
null
soupslurpr
huggingface/optimum
1,298
Support BetterTransformer for the Baichuan LLM model
### Feature request is it possible to support Baichuan model with BetterTransformer? https://huggingface.co/baichuan-inc/Baichuan-13B-Chat ### Motivation A very popular Chinese and English large language model. ### Your contribution hope you can achieve it. Thanks.
https://github.com/huggingface/optimum/issues/1298
closed
[ "feature-request", "bettertransformer", "Stale" ]
2023-08-21T08:18:16Z
2025-05-04T02:17:22Z
1
BobLiu20
huggingface/candle
533
How to convert token to text?
Hello, thank you for this ML library in Rust. Sorry if this is a noob question, I'm new to machine learning and this is my first time trying to use a text generation model. I'm using the latest git version. In the quantized llama example, how would I convert a token to a string? I see the print_token function but I want to convert it to a string and maybe push to a vector so I can return all the generated text when it is finished processing.
https://github.com/huggingface/candle/issues/533
closed
[]
2023-08-21T06:36:08Z
2023-08-21T07:51:37Z
null
soupslurpr
huggingface/safetensors
333
Slow load weight values from a HF model on a big-endian machine with the latest code
### System Info
Python: 3.10
PyTorch: the latest main branch (i.e. 2.0.1+)
safetensors: 0.3.3
Platform: s390x (big-endian)

### Information
- [ ] The official example scripts
- [X] My own modified scripts

### Reproduction
I executed the following code using 0.3.1 and 0.3.3, and w/o safetensors.

```python
import time
import torch
from transformers import T5ForConditionalGeneration, AutoTokenizer

try:
    import safetensors
    print("safetensors version:", safetensors.__version__)
except:
    print("safetensors not installed")

torch.serialization.set_default_load_endianness(torch.serialization.LoadEndianness.LITTLE)

model = "google/flan-t5-xxl"
tokenizer = AutoTokenizer.from_pretrained(model)
input_text = "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
input = tokenizer(input_text, return_tensors="pt").input_ids

t0 = time.perf_counter()
#model = T5ForConditionalGeneration.from_pretrained(model, low_cpu_mem_usage=True, use_safetensors=False)
model = T5ForConditionalGeneration.from_pretrained(model, low_cpu_mem_usage=True, use_safetensors=True)
t1 = time.perf_counter()
print("load elapsed time:", t1-t0)

output = model.decoder.forward(input_ids=input)  ## intentionally use decoder.forward() instead of generate()
t2 = time.perf_counter()
print("forward elapsed time:", t2-t1)
```

Findings
- The old version (0.3.1), which does not swap data, is considerably faster than 0.3.3 with swapping, which we understand.
- 0.3.3 is a bit slower than `torch.load`, which implies we could have some room to improve.

The result is the best time of five tries, after I downloaded the model files into the local file system.
``` $ python flan-t5.py safetensors not installed Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5/5 [00:21<00:00, 4.37s/it] load elapsed time: 22.09646322298795 forward elapsed time: 1.4204098680056632 ``` ``` $ python flan-t5.py safetensors version: 0.3.3 Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5/5 [00:25<00:00, 5.05s/it] load elapsed time: 25.486608179984614 forward elapsed time: 1.4887599580106325 ``` ``` $ python flan-t5.py safetensors version: 0.3.1 Loading checkpoint shards: 
100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5/5 [00:00<00:00, 35.73it/s] load elapsed time: 0.37154227000428364 forward elapsed time: 1.1782474629580975 ``` ### Expected behavior We expect that we can alleviate the overhead of swapping data. The overhead of 4x looks too large.
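Safetensors files store tensor data little-endian, so on s390x every buffer needs an extra byte-swap pass after load — the operation whose cost the timings above measure. The stdlib `array.byteswap` shows the operation itself (a sketch of the cost model, not safetensors internals):

```python
import sys
from array import array

def load_le_int32(buf_le):
    """Interpret a little-endian int32 buffer correctly on any host."""
    a = array('i')
    a.frombytes(buf_le)
    if sys.byteorder == 'big':
        a.byteswap()  # the extra O(n) pass a big-endian host must pay
    return list(a)

# Build a little-endian buffer portably, regardless of the host's order.
src = array('i', [1, 2, 3])
if sys.byteorder == 'big':
    src.byteswap()

assert load_le_int32(src.tobytes()) == [1, 2, 3]
```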
https://github.com/huggingface/safetensors/issues/333
closed
[ "Stale" ]
2023-08-20T18:19:44Z
2023-12-12T01:48:51Z
9
kiszk
huggingface/chat-ui
409
Deploy Chat UI Spaces Docker template with a PEFT adapter
I tried to accomplish this, but the container failed to launch the chat-ui app, as it seems to assume the model would be a non-adapted model. Is there a way to make it work?
https://github.com/huggingface/chat-ui/issues/409
closed
[ "bug", "back" ]
2023-08-20T05:26:50Z
2023-09-11T09:37:29Z
4
lrtherond
huggingface/datasets
6163
Error type: ArrowInvalid Details: Failed to parse string: '[254,254]' as a scalar of type int32
### Describe the bug
I am getting the following error while trying to upload a CSV sheet to train a model. My CSV sheet content is exactly the same as shown in the example CSV file on the AutoTrain page. Attaching a screenshot of the error for reference. I have also tried converting the answer indices that are integers into strings by placing inverted commas, and also without inverted commas. Can anyone please help me out? FYI: I am using the Chrome browser.

Error type: ArrowInvalid
Details: Failed to parse string: '[254,254]' as a scalar of type int32

![Screenshot 2023-08-19 165827](https://github.com/huggingface/datasets/assets/90616801/95fad96e-7dce-4bb5-9f83-9f1659a32891)

### Steps to reproduce the bug
Kindly let me know how to fix this?

### Expected behavior
Kindly let me know how to fix this?

### Environment info
Kindly let me know how to fix this?
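The error itself is reproducible outside AutoTrain: the cell value `'[254,254]'` is a JSON list serialized as a string, which cannot be cast to an int32 scalar — a column of such cells needs a list-typed schema (or one answer index per cell). A pure-Python illustration of the failed cast:

```python
import json

cell = '[254,254]'

# Casting the cell to an int scalar fails, which is what Arrow reports.
try:
    int(cell)
    cast_ok = True
except ValueError:
    cast_ok = False
assert not cast_ok

# The same cell parses cleanly as a list of ints instead.
assert json.loads(cell) == [254, 254]
```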
https://github.com/huggingface/datasets/issues/6163
open
[]
2023-08-19T11:34:40Z
2025-07-22T12:04:46Z
2
shishirCTC
huggingface/sentence-transformers
2,278
How to set the no. of epochs for fine-tuning SBERT?
Hello, I am fine-tuning a bi-encoder SBERT model on domain-specific data for semantic similarity. There is no loss value posted by the `fit` function from the package. Any idea how to know whether the model is overfitting or underfitting the dataset after each epoch? This could help me decide the appropriate number of epochs required for fine-tuning. Thank you.
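Independent of the `fit` API, the epoch-count question reduces to tracking a validation score per epoch and stopping when it stops improving. A sketch of that stopping rule (plain Python; the score list is hypothetical):

```python
def first_overfit_epoch(val_scores, patience=1):
    """Return the 1-based epoch with the best validation score,
    stopping once the score fails to improve `patience` times."""
    best, best_epoch, waited = float('-inf'), 0, 0
    for epoch, score in enumerate(val_scores, start=1):
        if score > best:
            best, best_epoch, waited = score, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                break
    return best_epoch

# Validation similarity (e.g. Spearman) peaking at epoch 3:
assert first_overfit_epoch([0.70, 0.74, 0.78, 0.77, 0.75]) == 3
```

In sentence-transformers 2.x, passing an `evaluator` (e.g. `EmbeddingSimilarityEvaluator` on a held-out split) together with `evaluation_steps` to `fit` is the usual way to obtain such per-epoch validation scores.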
https://github.com/huggingface/sentence-transformers/issues/2278
open
[]
2023-08-18T18:14:05Z
2024-01-29T17:00:13Z
null
power-puff-gg
huggingface/setfit
409
model_head.pkl not found on HuggingFace Hub
I got the message: "model_head.pkl not found on HuggingFace Hub, initialising classification head with random weights. You should TRAIN this model on a downstream task to use it for predictions and inference." Is there something missing, or is this normal?
https://github.com/huggingface/setfit/issues/409
closed
[ "question" ]
2023-08-18T07:52:20Z
2023-11-24T14:20:51Z
null
andysingal
huggingface/autotrain-advanced
216
How to do inference after train llama2
I trained a model using this command:

```
autotrain llm --train --project_name 'llama2-indo-testing' \
    --model meta-llama/Llama-2-7b-hf \
    --data_path data/ \
    --text_column text \
    --use_peft \
    --use_int4 \
    --learning_rate 2e-4 \
    --train_batch_size 2 \
    --num_train_epochs 3 \
    --trainer sft \
    --model_max_length 2048 \
    --push_to_hub \
    --repo_id fhadli/llama2-7b-hf-id \
    --block_size 2048 \
    > training.log
```

After that, I tried to load the model using this script:

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "/home/muhammad.fhadli/explorasi/llama2-indo/llama2-indo-testing"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
```

But it gave me this error. Can someone please explain why I got this error, or what is the right way to do inference?

```
Traceback (most recent call last):
  File "play.py", line 8, in <module>
    pipeline = transformers.pipeline(
  File "/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 705, in pipeline
    config = AutoConfig.from_pretrained(model, _from_pipeline=task, **hub_kwargs, **model_kwargs)
  File "/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 983, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/configuration_utils.py", line 617, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/configuration_utils.py", line 672, in _get_config_dict
    resolved_config_file = cached_file(
  File "/home/muhammad.fhadli/.pyenv/versions/3.8.10/envs/llama/lib/python3.8/site-packages/transformers/utils/hub.py", line 388, in cached_file
    raise EnvironmentError(
OSError: /home/muhammad.fhadli/explorasi/llama2-indo/llama2-indo-testing/ does not appear to have a file named config.json. Checkout 'https://huggingface.co//home/muhammad.fhadli/explorasi/llama2-indo/llama2-indo-testing//None' for available files.
```

Here is the content inside my folder:

```
$ ls /home/muhammad.fhadli/explorasi/llama2-indo/llama2-indo-testing/
adapter_config.json  optimizer.pt  rng_state_0.pth  scheduler.pt             tokenizer_config.json  tokenizer.model  training_args.bin
adapter_model.bin    README.md     rng_state_1.pth  special_tokens_map.json  tokenizer.json         trainer_state.json
```
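The directory listing shows `adapter_config.json`/`adapter_model.bin` but no `config.json`: the output is a PEFT adapter, not a full model, which is why `transformers.pipeline` cannot load it directly. Loading would typically go through `peft`'s `PeftModel.from_pretrained` on top of the base `meta-llama/Llama-2-7b-hf` model (untested here). The file-based distinction itself can be sketched:

```python
import os
import tempfile

def checkpoint_kind(path):
    """Classify a checkpoint dir as a full model, a PEFT adapter, or unknown."""
    files = set(os.listdir(path))
    if "config.json" in files:
        return "full-model"
    if "adapter_config.json" in files:
        return "peft-adapter"
    return "unknown"

with tempfile.TemporaryDirectory() as d:
    # Reproduce the reported folder contents (empty marker files).
    open(os.path.join(d, "adapter_config.json"), "w").close()
    open(os.path.join(d, "adapter_model.bin"), "w").close()
    assert checkpoint_kind(d) == "peft-adapter"
```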
https://github.com/huggingface/autotrain-advanced/issues/216
closed
[]
2023-08-18T04:36:37Z
2023-12-18T15:30:38Z
null
muhammadfhadli1453
huggingface/diffusers
4,662
How to call a different scheduler when training a model from repo
I notice that the settings in train_dreambooth_lora_sdxl.py and the scheduler config from the repo seem to conflict. In the .py file the noise scheduler is DDPM, but whenever training starts it still indicates that I am using the repo config's scheduler, i.e. EulerDiscreteScheduler. It used to be that you could specify the scheduler config by path, but that seems to have been deprecated at some point.
https://github.com/huggingface/diffusers/issues/4662
closed
[]
2023-08-17T21:40:10Z
2023-08-18T04:18:11Z
null
jmaccall316
huggingface/transformers
25576
How can I make a PR for AutoTokenizer to adapt RWKV World
### Feature request
Usually we use our own tokenizer with the transformers pipeline, like this: https://github.com/xiaol/Huggingface-RWKV-World/blob/fca236afd5f2815b0dbe6c7ce3c92e51526e2e14/generate_hf_cfg.py#L79C1-L79C1

So far we have a lot of models using the new tokenizer, so using the pipeline with AutoTokenizer is critically needed. How can I add a new tokenizer to AutoTokenizer to make this pipeline smooth? Thank you.

### Motivation
1. Make everyone use RWKV World smoothly, and RWKV v5 World is coming.
2. Support the Hugging Face community with these awesome models; make open source more open.
3. I really don't like that llama models are always on top of the open LLM leaderboards.
4. More...

### Your contribution
I made a lot of models based on RWKV-4 World, https://huggingface.co/xiaol, especially 128k context models.
https://github.com/huggingface/transformers/issues/25576
closed
[]
2023-08-17T16:36:44Z
2023-09-25T08:02:43Z
null
xiaol
huggingface/accelerate
1,854
How to further accelerate training with 24 cards for 1.3b+ models using accelerate?
I found that when using DeepSpeed ZeRO (stage 2 or 3) to train models of 1.3 billion parameters and larger (such as llama-7b or gpt-neo-1.3b), the training time on 8 x 32G V100s is almost the same as on 24 x 32G V100s (I guess because of the additional communication overhead introduced by DeepSpeed). Is there any way to further accelerate training by utilizing all 24 cards? Currently, Megatron-LM integration is limited to GPT-2 and GPT-J, and I'm also not sure whether that would help.
https://github.com/huggingface/accelerate/issues/1854
closed
[]
2023-08-17T15:01:09Z
2023-09-24T15:05:52Z
null
Micheallei
huggingface/datasets
6,156
Why not use self._epoch as seed to shuffle in distributed training with IterableDataset
### Describe the bug Currently, distributed training with `IterableDataset` needs to pass fixed seed to shuffle to keep each node use the same seed to avoid overlapping. https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1174-L1177 My question is why not directly use `self._epoch` which is set by `set_epoch` as seed? It's almost the same across nodes. https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1790-L1801 If not using `self._epoch` as shuffling seed, what does this method do to prepare an epoch seeded generator? https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1206 ### Steps to reproduce the bug As mentioned above. ### Expected behavior As mentioned above. ### Environment info Not related
https://github.com/huggingface/datasets/issues/6156
closed
[]
2023-08-17T10:58:20Z
2023-08-17T14:33:15Z
3
npuichigo
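On the question above: whatever exact seed datasets derives internally, the property the mechanism relies on is that a (seed, epoch) pair deterministically fixes the permutation, so all nodes agree within an epoch while the order still changes across epochs. A pure-Python sketch of that property (the `seed + epoch` combination is illustrative, not datasets' exact formula):

```python
import random

def shuffled_indices(n, seed, epoch):
    """Every node computes the same permutation for a given (seed, epoch),
    and the permutation changes between epochs."""
    rng = random.Random(seed + epoch)  # illustrative; datasets derives its own generator
    idx = list(range(n))
    rng.shuffle(idx)
    return idx

epoch0_node_a = shuffled_indices(8, seed=42, epoch=0)
epoch0_node_b = shuffled_indices(8, seed=42, epoch=0)
epoch1_node_a = shuffled_indices(8, seed=42, epoch=1)
print(epoch0_node_a == epoch0_node_b)  # True: nodes agree within an epoch
print(epoch0_node_a == epoch1_node_a)  # different order across epochs (with overwhelming probability)
```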
huggingface/diffusers
4,643
When I load a ControlNet model, where is the inference code?
I have read the ControlNet code in diffusers/models/controlnet.py, but when I load a ControlNet weight, where is the inference code that runs it? Thanks.
https://github.com/huggingface/diffusers/issues/4643
closed
[]
2023-08-17T02:50:59Z
2023-08-17T04:55:28Z
null
henbucuoshanghai
huggingface/dataset-viewer
1,689
Handle breaking change in google dependency?
See https://huggingface.co/datasets/bigscience/P3/discussions/6#64dca122e3e44e8000c45616 Should we downgrade the dependency, or fix the datasets?
https://github.com/huggingface/dataset-viewer/issues/1689
closed
[ "question", "dependencies", "P2" ]
2023-08-16T14:31:28Z
2024-02-06T14:59:59Z
null
severo
huggingface/optimum
1,286
Support BetterTransfomer for the GeneFormer model
### Feature request Is it possible to support the GeneFormer model with BetterTransformer? https://huggingface.co/ctheodoris/Geneformer ### Motivation It's a new paper with an active community on the Hugging Face Hub, but the training and inference speed is not fast enough. ### Your contribution Nothing at this time, because I don't want to add it myself. I am requesting this because of this statement from the Hugging Face website: Let us know by opening an issue in πŸ€— Optimum if you want more models to be supported, or check out the [contribution guideline](https://huggingface.co/docs/optimum/bettertransformer/tutorials/contribute) if you want to add it by yourself!
https://github.com/huggingface/optimum/issues/1286
closed
[ "feature-request", "bettertransformer", "Stale" ]
2023-08-16T03:32:48Z
2025-05-07T02:13:16Z
1
seyedmirnezami
huggingface/diffusers
4,618
How to use dreamshaperXL10_alpha2Xl10.safetensors with controlnet-canny-sdxl-1.0 ?
I want to use dreamshaperXL10_alpha2Xl10.safetensors with controlnet-canny-sdxl-1.0. I downloaded the dreamshaperXL10_alpha2Xl10.safetensors file and tried to use: pipe = StableDiffusionXLControlNetPipeline.from_pretrained( './dreamshaperXL10_alpha2Xl10.safetensors', controlnet=controlnet, use_safetensors=True, torch_dtype=torch.float16, variant="fp16" ) and got this error: pipe = StableDiffusionXLControlNetPipeline.from_pretrained( File "/opt/conda/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 908, in from_pretrained cached_folder = cls.download( File "/opt/conda/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 1330, in download info = model_info( File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn validate_repo_id(arg_value) File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id raise HFValidationError( huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './dream/dreamshaperXL10_alpha2Xl10.safetensors'. Use repo_type argument if needed. Previously, I tried to use from_single_file instead of from_pretrained. Got error: from_single_file not available with StableDiffusionXLControlNetPipeline. Please help. Thanks
https://github.com/huggingface/diffusers/issues/4618
closed
[]
2023-08-15T13:44:54Z
2023-08-22T01:31:37Z
null
arnold408
huggingface/peft
826
What is alpha? Alpha is not in the paper.
### Feature request https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py#L57 This alpha is not in the paper: https://arxiv.org/abs/2106.09685 Where can I learn about this alpha? Thank you! ### Motivation See title. ### Your contribution See title.
https://github.com/huggingface/peft/issues/826
closed
[]
2023-08-15T09:47:58Z
2023-09-23T15:03:19Z
null
XuJianzhi
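For the record above: in PEFT's LoRA, `lora_alpha` is the Ξ± in the Ξ±/r scaling that the paper does describe in its method section (Ξ”Wx is scaled by Ξ±/r), so the effective forward pass is y = Wx + (Ξ±/r)Β·BAx. A tiny framework-free sketch of that arithmetic:

```python
def lora_forward(x, W, A, B, alpha, r):
    """y = W@x + (alpha/r) * B@(A@x), with matrices as plain lists of lists."""
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]
    base = matvec(W, x)
    low = matvec(A, x)    # down-projection to rank r
    delta = matvec(B, low)  # up-projection back to the full dimension
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# 2x2 toy: W = identity, rank r = 1, alpha = 2
x = [3.0, 4.0]
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 0.0]]     # 1x2
B = [[1.0], [0.0]]   # 2x1
print(lora_forward(x, W, A, B, alpha=2, r=1))  # [9.0, 4.0]
```

With r fixed, raising alpha simply amplifies the adapter's contribution relative to the frozen weights.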
huggingface/optimum
1,285
Merge patch into autogptq
### Feature request Currently, there is a patch to get GPTQ quantization working: ``` # !pip install -q git+https://github.com/fxmarty/AutoGPTQ.git@patch-act-order-exllama ``` Is there a plan to try and merge that into the autogptq repo? ### Motivation autogptq is slow to install. This is easily solved by using wheels, but I don't have wheels for this patch. Easiest would be for the patch to be released. ### Your contribution Seems like the patch is a few tens of commits behind autogptq, so the first step would be to check whether doing a pr would create conflicts.
https://github.com/huggingface/optimum/issues/1285
closed
[]
2023-08-14T16:24:14Z
2023-08-23T17:17:46Z
5
RonanKMcGovern
huggingface/candle
443
What is the minimum required Intel MKL version?
Hello, Thanks for the great work! I've got an error while compiling with the `-features mkl` option. For example `cargo install --git https://github.com/huggingface/candle.git candle-examples --examples bert -F mkl` The error said ```bash = note: /usr/bin/ld: /workspaces/Kuberian/searcher/target/debug/deps/libcandle_core-0afc8671b4dae8af.rlib(candle_core-0afc8671b4dae8af.candle_core.b11884625c01537d-cgu.13.rcgu.o): in function `candle_core::mkl::hgemm': /usr/local/cargo/git/checkouts/candle-0c2b4fa9e5801351/60cd155/candle-core/src/mkl.rs:162: undefined reference to `hgemm_' collect2: error: ld returned 1 exit status = note: some `extern` functions couldn't be found; some native libraries may need to be installed or have their path specified = note: use the `-l` flag to specify native libraries to link = note: use the `cargo:rustc-link-lib` directive to specify the native libraries to link with Cargo (see https://doc.rust-lang.org/cargo/reference/build-scripts.html#cargorustc-link-libkindname) ``` I initially thought that I did not install intel mkl libs properly, but I found that 1. [intel-mkl-src](https://github.com/rust-math/intel-mkl-src) automatically downloads the required library from ghcr 2. `intel mkl 2020.01`, which automatically downloaded from [here](https://github.com/rust-math/rust-mkl-container), simply does not implement `hgemm` while they do implement `sgemm` and `dgemm` 3. the latest version of intel mkl does implement `hgemm` So I tried the latest version of intel mkl, but it seems `intel-mkl-src` does not support it. I'm wondering which `intel-mkl` version do you use for your development environment?
https://github.com/huggingface/candle/issues/443
closed
[]
2023-08-14T14:09:01Z
2024-02-03T16:43:34Z
null
iwanhae
huggingface/pytorch-image-models
1,917
How to change SqueezeExcite in EfficientNet
I want to create EfficientNet networks using timm where SqueezeExcite contains three parts ['Conv2d', 'SiLU', 'Conv2d'], but it currently contains four parts ['Conv2d', 'SiLU', 'Conv2d', 'Sigmoid']. How should I modify it? Thank you.
https://github.com/huggingface/pytorch-image-models/issues/1917
closed
[ "enhancement" ]
2023-08-14T11:45:05Z
2023-08-14T14:13:26Z
null
Yang-Changhui
huggingface/setfit
408
No tutorial or guideline for Few-shot learning on multiclass text classification
I just want to use SBERT for few-shot multiclass text classification, but I couldn't find any tutorial or explanation for it. Can you tell me which "multi_target_strategy" and loss function I should use for multiclass text classification?
https://github.com/huggingface/setfit/issues/408
open
[ "documentation", "question" ]
2023-08-14T09:02:18Z
2023-10-03T20:29:25Z
null
ByUnal
huggingface/diffusers
4,594
latents.requires_grad is false in my custom pipeline no matter what.
Hi, in my quest to make a flexible pipeline that can easily add new features instead of creating a pipeline for every variation, I made the following: ``` class StableDiffusionRubberPipeline(StableDiffusionPipeline): call_funcs=[] def __init__( self, vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, unet: UNet2DConditionModel, scheduler: KarrasDiffusionSchedulers, safety_checker: StableDiffusionSafetyChecker, feature_extractor: CLIPImageProcessor, requires_safety_checker: bool = True, ): self.before_init() super().__init__(vae,text_encoder,tokenizer,unet,scheduler,safety_checker,feature_extractor,requires_safety_checker) if hasattr(scheduler.config, "steps_offset") and scheduler.config.steps_offset != 1: deprecation_message = ( f"The configuration file of this scheduler: {scheduler} is outdated. `steps_offset`" f" should be set to 1 instead of {scheduler.config.steps_offset}. Please make sure " "to update the config accordingly as leaving `steps_offset` might led to incorrect results" " in future versions. If you have downloaded this checkpoint from the Hugging Face Hub," " it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json`" " file" ) deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(scheduler.config) new_config["steps_offset"] = 1 scheduler._internal_dict = FrozenDict(new_config) if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True: deprecation_message = ( f"The configuration file of this scheduler: {scheduler} has not set the configuration `clip_sample`." " `clip_sample` should be set to False in the configuration file. Please make sure to update the" " config accordingly as not setting `clip_sample` in the config might lead to incorrect results in" " future versions. 
If you have downloaded this checkpoint from the Hugging Face Hub, it would be very" " nice if you could open a Pull request for the `scheduler/scheduler_config.json` file" ) deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False) new_config = dict(scheduler.config) new_config["clip_sample"] = False scheduler._internal_dict = FrozenDict(new_config) if safety_checker is None and requires_safety_checker: logger.warning( f"You have disabled the safety checker for {self.__class__} by passing `safety_checker=None`. Ensure" " that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered" " results in services or applications open to the public. Both the diffusers team and Hugging Face" " strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling" " it only for use-cases that involve analyzing network behavior or auditing its results. For more" " information, please have a look at https://github.com/huggingface/diffusers/pull/254 ." ) if safety_checker is not None and feature_extractor is None: raise ValueError( "Make sure to define a feature extractor when loading {self.__class__} if you want to use the safety" " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead." ) is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse( version.parse(unet.config._diffusers_version).base_version ) < version.parse("0.9.0.dev0") is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64 if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64: deprecation_message = ( "The configuration file of the unet has set the default `sample_size` to smaller than" " 64 which seems highly unlikely. 
If your checkpoint is a fine-tuned version of any of the" " following: \n- CompVis/stable-diffusion-v1-4 \n- CompVis/stable-diffusion-v1-3 \n-" " CompVis/stable-diffusion-v1-2 \n- CompVis/stable-diffusion-v1-1 \n- runwayml/stable-diffusion-v1-5" " \n- runwayml/stable-diffusion-inpainting \n you should change 'sample_size' to 64 in the" " configuration file. Please make sure to update the config accordingly as leaving `sample_size=32`" " in the config mi
https://github.com/huggingface/diffusers/issues/4594
closed
[]
2023-08-13T15:02:22Z
2023-08-14T12:11:36Z
null
alexblattner
huggingface/datasets
6,153
custom load dataset to hub
### System Info Kaggle notebook. I transformed a dataset: ``` dataset = load_dataset("Dahoas/first-instruct-human-assistant-prompt") ``` into this formatted dataset: ``` Dataset({ features: ['message_tree_id', 'message_tree_text'], num_rows: 33143 }) ``` but I would like to know how to upload it to the Hub. ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Shared above. ### Expected behavior Upload the dataset to the Hub.
https://github.com/huggingface/datasets/issues/6153
closed
[]
2023-08-13T04:42:22Z
2023-11-21T11:50:28Z
5
andysingal
huggingface/chat-ui
398
meta-llama/Llama-2-7b-chat-hf requires a pro subscription?
I ran the instructions to run locally and ran into this. I've been working on my own UI and thought I'd give this a shot; if that's the route Hugging Face is going, I find that very disappointing. I was expecting the model to be hosted locally and routed through FastAPI or something.
https://github.com/huggingface/chat-ui/issues/398
closed
[]
2023-08-12T03:56:55Z
2023-08-12T04:03:11Z
1
thistleknot
huggingface/chat-ui
397
Dynamically adjust `max_new_tokens`
Hi, I am running a 4096 context length model behind TGI interface. My primary use case is summarization wherein some of my requests can be quite large. I have set `truncate` to 4000 and that leaves `max_new_tokens` to be at most 4096-4000=96. So, even if my input length is not 4000 tokens long, say it is only 1024 tokens long, I can only generate 96 token long response. In this case, `max_new_tokens` can be 4096-1024=3072. Is it possible for `chat-ui` to dynamically adjust the `max_new_tokens` this way? Thanks for the great work!
https://github.com/huggingface/chat-ui/issues/397
open
[ "question", "back" ]
2023-08-11T16:37:10Z
2023-09-18T12:49:49Z
null
abhinavkulkarni
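The adjustment asked for above is a one-line computation: subtract the prompt's token count from the model's context length instead of using a fixed `max_new_tokens`. A sketch using the issue's numbers (4096-token context):

```python
def dynamic_max_new_tokens(input_tokens, context_length=4096):
    """Give the response whatever room the prompt leaves in the context window."""
    return max(context_length - input_tokens, 0)

print(dynamic_max_new_tokens(4000))  # 96
print(dynamic_max_new_tokens(1024))  # 3072
```

A client in front of TGI could count the prompt's tokens with the model's tokenizer and pass this value per request.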
huggingface/chat-ui
396
Long chat history
How do you manage a long chat history? Do you truncate the history at some point and call the API only with the most recent messages?
https://github.com/huggingface/chat-ui/issues/396
closed
[ "question" ]
2023-08-11T15:52:43Z
2023-09-18T12:50:07Z
null
keidev
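One common strategy for the question above is a sliding window by token budget: drop the oldest messages until what remains fits. A sketch with a whitespace tokenizer standing in for the real one (a real implementation would count tokens with the model's tokenizer and typically keeps the system prompt pinned):

```python
def truncate_history(messages, budget, count_tokens=lambda m: len(m.split())):
    """Keep the most recent messages whose combined token count fits the
    budget; older messages are dropped first."""
    kept, used = [], 0
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["one two three", "four five", "six seven eight nine", "ten"]
print(truncate_history(history, budget=6))  # ['six seven eight nine', 'ten']
```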
huggingface/trl
638
How many and what kind of GPUs are needed to run the examples?
For every script or project in the examples directory, could you please tell us how many and what kind of GPUs are needed to run the experiments? Thanks a lot.
https://github.com/huggingface/trl/issues/638
closed
[]
2023-08-11T14:12:34Z
2023-09-11T08:22:33Z
null
Wallace-222
huggingface/chat-ui
395
Errors out every time I try to add a new model
I'm currently having an huge issue. I'm trying to easily add models in to the chat ui. I have made a holder and added a specific model in that folder but I'm unable to actual get to use that model. I'm not sure what I'm doing wrong I've staired at the docs for a few hours re reading and also looked it up on YouTube but have found nothing. Currently the code in my .env.local file that looks like this: MODELS=`[ { "name": "Open Assistant epoch-3.5 LLM", "datasetName": "OpenAssistant/oasst1", "description": "A good alternative to ChatGPT", "websiteUrl": "https://open-assistant.io", "userMessageToken": "<|prompter|>", "assistantMessageToken": "<|assistant|>", "messageEndToken": "</s>", "preprompt": "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n-----\n", "promptExamples": [ { "title": "Write an email from bullet list", "prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)" }, { "title": "Code a snake game", "prompt": "Code a basic snake game in python, give explanations for each step." }, { "title": "Assist in a task", "prompt": "How do I make a delicious lemon cheesecake?" 
} ], "parameters": { "temperature": 0.9, "top_p": 0.95, "repetition_penalty": 1.2, "top_k": 50, "truncate": 1000, "max_new_tokens": 1024 } } ]` ,`[ { "name": "Test LLM", "datasetName": "OpenAssistant/oasst1", "endpoints": [{"url": "/models/Wizard-Vicuna-30B-Uncensored-GPTQ-4bit--1g.act.order.safetensors"}] "description": "A good alternative to ChatGPT", "userMessageToken": "<|prompter|>", "assistantMessageToken": "<|assistant|>", "messageEndToken": "</s>", "preprompt": "Below are a series of dialogues between various people and an AI assistant. The AI tries to be helpful, polite, honest, sophisticated, emotionally aware, and humble-but-knowledgeable. The assistant is happy to help with almost anything, and will do its best to understand exactly what is needed. It also tries to avoid giving false or misleading information, and it caveats when it isn't entirely sure about the right answer. That said, the assistant is practical and really does its best, and doesn't let caution get too much in the way of being useful.\n-----\n", "promptExamples": [ { "title": "Write an email from bullet list", "prompt": "As a restaurant owner, write a professional email to the supplier to get these products every week: \n\n- Wine (x10)\n- Eggs (x24)\n- Bread (x12)" }, { "title": "Code a snake game", "prompt": "Code a basic snake game in python, give explanations for each step." }, { "title": "Assist in a task", "prompt": "How do I make a delicious lemon cheesecake?" } ], "parameters": { "temperature": 0.9, "top_p": 0.95, "repetition_penalty": 1.2, "top_k": 50, "truncate": 1000, "max_new_tokens": 1024 } } ]` I'm currently re using everything from the default once and then but I will be stripping everything from it to match the actual LLM. Any and all help is much appreciated
https://github.com/huggingface/chat-ui/issues/395
closed
[ "support" ]
2023-08-11T12:55:03Z
2023-09-11T09:35:55Z
3
Dom-Cogan
huggingface/dataset-viewer
1,662
Should we change 500 to another status code when the error comes from the dataset?
See #1661 for example. Same for the "retry later" error: is 500 the most appropriate status code?
https://github.com/huggingface/dataset-viewer/issues/1662
open
[ "question", "api", "P2" ]
2023-08-10T15:57:03Z
2023-08-14T15:36:27Z
null
severo
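One possible convention for the question above (an assumption for discussion, not the viewer's actual policy): failures caused by the dataset itself map to a 4xx code, transient "retry later" states map to 503, and 500 stays reserved for genuine server bugs:

```python
# Hypothetical mapping sketch: which HTTP status fits which failure origin.
STATUS_BY_ERROR = {
    "dataset_script_error": 422,  # the dataset itself is broken: closer to a client error
    "dataset_not_found": 404,
    "retry_later": 503,           # transient; a Retry-After header would fit here
    "internal_bug": 500,          # only real server faults keep 500
}

def status_for(error_kind):
    return STATUS_BY_ERROR.get(error_kind, 500)

print(status_for("retry_later"))  # 503
```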
huggingface/datasets
6,139
Offline dataset viewer
### Feature request The dataset viewer feature is very nice: it enables the user to easily view a dataset. However, when working for private companies we cannot always upload the dataset to the Hub. Is there a way to run the dataset viewer offline? I.e., run some code that opens an HTML page (or something similar) that makes it easy to view the dataset. ### Motivation I want to easily view my dataset even when it is hosted locally. ### Your contribution N.A.
https://github.com/huggingface/datasets/issues/6139
closed
[ "enhancement", "dataset-viewer" ]
2023-08-10T11:30:00Z
2024-09-24T18:36:35Z
7
yuvalkirstain
huggingface/text-generation-inference
807
How to create a NCCL group on Kubernetes?
I am deploying text-generation-inference on EKS with each node having 1 NVIDIA A10G GPU. How should I create a group such that a model like llama-2-13b-chat is able to use GPUs across nodes for inference?
https://github.com/huggingface/text-generation-inference/issues/807
closed
[ "Stale" ]
2023-08-10T09:29:59Z
2024-04-17T01:45:28Z
null
rsaxena-rajat
huggingface/chat-ui
394
Internal server error: Unexpected token ] in JSON at position 1090
1:58:23 AM [vite] Error when evaluating SSR module /src/lib/server/models.ts: |- SyntaxError: Unexpected token ] in JSON at position 1090 at JSON.parse (<anonymous>) at eval (/home/chat-ui/src/lib/server/models.ts:46:14) at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9) 1:58:23 AM [vite] Error when evaluating SSR module /src/routes/+layout.server.ts: failed to import "/src/lib/server/models.ts" |- SyntaxError: Unexpected token ] in JSON at position 1090 at JSON.parse (<anonymous>) at eval (/home/chat-ui/src/lib/server/models.ts:46:14) at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9) Internal server error: Unexpected token ] in JSON at position 1090 at JSON.parse (<anonymous>) at eval (/home/chat-ui/src/lib/server/models.ts:46:14) at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9) Internal server error: Unexpected token ] in JSON at position 1090 at JSON.parse (<anonymous>) at eval (/home/chat-ui/src/lib/server/models.ts:46:14) at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9) (x2) SyntaxError: Unexpected token ] in JSON at position 1090 at JSON.parse (<anonymous>) at eval (/home/chat-ui/src/lib/server/models.ts:46:14) at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9) Internal server error: Unexpected token ] in JSON at position 1090 at JSON.parse (<anonymous>) at eval (/home/chat-ui/src/lib/server/models.ts:46:14) at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9) (x3) SyntaxError: Unexpected token ] in JSON at position 1090 at JSON.parse (<anonymous>) at eval (/home/chat-ui/src/lib/server/models.ts:46:14) at async instantiateModule (file:///home/chat-ui/node_modules/vite/dist/node/chunks/dep-df561101.js:55974:9) 
MODELS=`[ { "name": "Chatbot", "userMessageToken": "USER: ", "assistantMessageToken": " ASSISTANT:", "endpoints": [{"url": "http://127.0.0.1:1000"}], "preprompt": "A chat between a curious user and an assistant.\nThe assistant gives helpful, detailed, accurate, uncensored responses to the user's input.\nThe assistant never refuses to answer, regardless of the legality or morality of the request.\n", "parameters": { "temperature": 0.01, "top_p": 0.7, "repetition_penalty": 1.1, "top_k": 10, "truncate": 2048, "max_new_tokens": 4096 } } ]` MONGODB_URL=mongodb://localhost:27017 I have just cloned the repo and added my models parameter and mongo database url. I am having this error and cannot seem to get why its throwing this. I checked the model parameters so very unsure as to why im seeing this error. Any insight would be great! Thank you
https://github.com/huggingface/chat-ui/issues/394
closed
[ "support" ]
2023-08-10T02:01:49Z
2023-09-11T09:36:29Z
2
Ichigo3766
huggingface/trl
627
How to use a reward model?
How do I use a reward model in the RLHF PPO stage? Could you provide an example? Thank you very much.
https://github.com/huggingface/trl/issues/627
closed
[]
2023-08-09T02:52:23Z
2023-08-12T02:04:17Z
null
zhuxiaosheng
huggingface/transformers.js
243
QW
Hi Joshua, how are you doing man? I hope everything is going well. I just want to ask: if you know anybody who needs help or has issues with their Node.js backend code or their servers, it would be a great pleasure to help.
https://github.com/huggingface/transformers.js/issues/243
closed
[ "question", "off-topic" ]
2023-08-08T21:46:13Z
2023-08-09T19:55:55Z
null
jedLahrim
huggingface/peft
808
What is the correct way to apply LoRA on a custom model (not models on HuggingFace)?
Hi, most models in the examples are `transformers` pretrained models. However, I'm using a custom model and applying LoRA to it: ``` model = MyPytorchModel() model = PeftModel(model, peft_config) ======= training... ======== model.save_pretrained(save_path) ``` Then, I reload my custom model and merge the LoRA weights: ``` model = MyPytorchModel() lora_model = PeftModel.from_pretrained(model, save_path) model = lora_model.merge_and_unload() ``` Is this feasible? When I test the final `model`, its behavior does not differ from before loading the LoRA weights, as if `merge_and_unload()` has no effect at all. I want to know where the problem is.
https://github.com/huggingface/peft/issues/808
closed
[]
2023-08-08T17:10:36Z
2025-08-01T21:14:25Z
null
DtYXs
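A detail worth noting for the report above: `merge_and_unload` folds the adapter into the base weights as W' = W + (Ξ±/r)Β·BA, and PEFT initializes the LoRA B matrix to zeros, so an adapter whose weights never actually updated during training merges as an exact no-op, which would produce exactly the "no difference" symptom described. A toy sketch of the merge arithmetic:

```python
def merge(W, A, B, alpha, r):
    """W' = W + (alpha/r) * B @ A for tiny list-of-lists matrices."""
    scale = alpha / r
    rows, cols, k = len(B), len(A[0]), len(A)
    delta = [[scale * sum(B[i][t] * A[t][j] for t in range(k)) for j in range(cols)]
             for i in range(rows)]
    return [[W[i][j] + delta[i][j] for j in range(cols)] for i in range(rows)]

W = [[1.0, 2.0], [3.0, 4.0]]
A = [[1.0, 0.0]]         # r = 1, shape 1x2
B_zero = [[0.0], [0.0]]  # PEFT's zero initialization for B
B_trained = [[1.0], [0.0]]

print(merge(W, A, B_zero, alpha=2, r=1) == W)  # True: untrained adapter merges as a no-op
print(merge(W, A, B_trained, alpha=2, r=1))    # [[3.0, 2.0], [3.0, 4.0]]
```

So a useful first check is whether the saved `adapter_model.bin` actually contains nonzero B weights, i.e. whether the LoRA parameters were marked trainable.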
huggingface/diffusers
4,533
How to debug custom pipeline locally ?
Hi, I build diffusers from source, and I am using ControlNet. However, diffusers seems not to load the custom pipeline from ```diffusers/examples/community/stable_diffusion_controlnet_img2img.py``` as I expected. Instead, it seems to download from the hub and cache a new ```stable_diffusion_controlnet_img2img.py``` somewhere else. My question is how to make it load from my local ```diffusers/examples/community/stable_diffusion_controlnet_img2img.py``` so that I can debug it locally? Best,
https://github.com/huggingface/diffusers/issues/4533
closed
[]
2023-08-08T15:34:40Z
2023-08-09T12:17:42Z
null
pansanity666
huggingface/setfit
405
how to set the device id
How do I run multiple training runs on different GPU devices? I don't see any argument which allows me to set this. Thank you!
https://github.com/huggingface/setfit/issues/405
open
[]
2023-08-08T08:25:36Z
2023-08-08T08:25:36Z
null
vahuja4
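One common workaround for the question above, since the trainer simply follows whatever device PyTorch sees: pin each run to a single GPU with `CUDA_VISIBLE_DEVICES` before any CUDA framework is imported. The device index "1" below is just an example; launch one process per GPU with a different index each:

```python
import os

# Must be set before torch (or any CUDA library) is imported.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# Inside this process the chosen GPU is then addressed as cuda:0.
print(os.environ["CUDA_VISIBLE_DEVICES"])  # 1
```

Equivalently, the variable can be set on the shell command line when launching each training script.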
huggingface/transformers.js
239
[Question] Adding Custom or Unused Token
<!-- QUESTION GOES HERE --> Is it possible to add a custom range as a token? For example, for a price list of $100-$200, can we add a custom vocab entry like this? vocab list: nice hello __$100-$200__ fish ...
https://github.com/huggingface/transformers.js/issues/239
closed
[ "question" ]
2023-08-07T18:32:20Z
2023-08-07T20:38:15Z
null
hadminh
huggingface/chat-ui
390
Can I hook it up to a retrieval system for a document chatbot?
I want to use the instructor-xl text embedding model and use FAISS to create and retrieve from a vector store: sort of a chatbot for documents, or a domain-specific chatbot. Any ideas on how I can do it?
https://github.com/huggingface/chat-ui/issues/390
open
[]
2023-08-07T15:22:10Z
2024-02-22T12:55:41Z
9
adarshxs
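A sketch of the retrieval half of the question above, with a toy bag-of-words embedder standing in for instructor-xl and brute-force cosine similarity standing in for FAISS (a real FAISS index would replace the `index` list and the `sorted` scan; the retrieved passages would then be prepended to the chat prompt):

```python
import math

def embed(text):
    """Toy embedding: word counts over a tiny vocabulary.
    A real system would call an embedding model like instructor-xl here."""
    vocab = ["refund", "policy", "shipping", "days", "password"]
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "refund policy refunds take 5 days",
    "shipping takes 3 days",
    "reset your password in settings",
]
index = [embed(d) for d in docs]  # a FAISS index would store these vectors instead

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(range(len(docs)), key=lambda i: cosine(q, index[i]), reverse=True)
    return [docs[i] for i in ranked[:k]]

print(retrieve("what is the refund policy"))  # ['refund policy refunds take 5 days']
```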
huggingface/diffusers
4,507
How to train stable-diffusion-xl-base-1.0 without lora?
Hi, I want to train `stable-diffusion-xl-base-1.0` without LoRA. How can I do this? I can run `train_text_to_image_lora_sdxl.py`, but `train_text_to_image.py` with `MODEL_NAME="stabilityai/stable-diffusion-xl-base-1.0"` will raise an error: ``` diffusers/models/unet_2d_condition.py:836 in forward 833 aug_emb = self.add_embedding(text_embs, image_embs) 834 elif self.config.addition_embed_type == "text_time": 835 # SDXL - style ❱ 836 if "text_embeds" not in added_cond_kwargs: 837 raise ValueError( 838 f"{self.__class__} has the config param `addition_ 839 ) TypeError: argument of type 'NoneType' is not iterable ``` `added_cond_kwargs` is None in this case.
https://github.com/huggingface/diffusers/issues/4507
closed
[]
2023-08-07T10:38:24Z
2023-08-14T07:25:49Z
null
KimmiShi
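Context for the error above: the SDXL UNet's `text_time` branch expects `added_cond_kwargs` containing pooled text embeddings plus six micro-conditioning "time ids", which the non-LoRA script did not build at the time. A pure-Python sketch of the time-ids layout; the tensor plumbing in the comments is indicative, not the script's exact code:

```python
def build_time_ids(original_size, crop_coords_top_left, target_size):
    """SDXL micro-conditioning order: (orig_h, orig_w, crop_top, crop_left, tgt_h, tgt_w)."""
    return list(original_size) + list(crop_coords_top_left) + list(target_size)

time_ids = build_time_ids((1024, 1024), (0, 0), (1024, 1024))
print(time_ids)  # [1024, 1024, 0, 0, 1024, 1024]

# In a real training step these become tensors and are passed roughly as:
# added_cond_kwargs = {"text_embeds": pooled_prompt_embeds, "time_ids": time_ids_tensor}
# model_pred = unet(noisy_latents, timesteps, prompt_embeds,
#                   added_cond_kwargs=added_cond_kwargs).sample
```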
huggingface/text-generation-inference
782
What is the correct parameter combination for using dynamic RoPE scaling ?
Hi Team, First of all thanks for the awesome piece of software !! I want to use `upstage/Llama-2-70b-instruct-v2` model with `--max-input-length=8192 --max-total-tokens=10240` which originally supports `max_position_embeddings=4096`. I tried running the following command : ``` docker run -it --rm --gpus all --shm-size 80g --name llama2_70b_instruct_v2 -p 8560:80 -v ~/tgi_data:/data \ ghcr.io/huggingface/text-generation-inference:sha-f91e9d2 --num-shard=8 \ --model-id upstage/Llama-2-70b-instruct-v2 --revision 5f9c77b2c0397cf83d2f97740483f107c7109e8c \ --dtype=float16 \ --max-input-length=8192 --max-total-tokens=10240 --rope-scaling=dynamic --rope-factor=2.5 \ --max-batch-prefill-tokens=40100 \ ``` 1. Does it look correct ? Though this ended up with: ``` Traceback (most recent call last): File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py", line 727, in warmup _, batch = self.generate_token(batch) File "/opt/conda/lib/python3.9/contextlib.py", line 79, in inner return func(*args, **kwds) File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py", line 825, in generate_token raise e File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py", line 813, in generate_token out = self.forward( File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py", line 789, in forward return self.model.forward( File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 475, in forward hidden_states = self.model( File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 428, in forward cos, sin = self.layers[0].self_attn.rotary_emb.get_cos_sin( File 
"/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/layers.py", line 470, in get_cos_sin self._update_cos_sin_cache(dtype, position_ids.device, max_s) File "/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/layers.py", line 501, in _update_cos_sin_cache newbase = self.base * ((self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)) ** (self.dim / (self.dim - 2)) NameError: name 'seq_len' is not defined ``` 2. Looks like typo in the code, should it have been `seqlen` instead of `seq_len` ? 3. When I am using the above model without RoPE scaling on 8xA100-40GB GPUs, it can churn out 1534 tokens per sec, with an prompt heavy set up of ~883 input tokens, ~76 output tokens(best_of=1, so no hidden output tokens) per request. Is this expected performance or can I do better on the above set up? FYI: tried fp16 on vllm, gptq(4bit), bitsandbytes(8bit) models all ended up with similar TPS (tokens per second).
https://github.com/huggingface/text-generation-inference/issues/782
closed
[]
2023-08-07T05:58:14Z
2023-09-06T13:59:36Z
null
hrushikesh198
huggingface/transformers.js
238
[Question] Can you list all available models using tranformers.js?
Hey πŸ‘‹ I was wondering if it's possible to list available models using the `transformers.js` package? e.g. > pipeline.getAvailableModels()
https://github.com/huggingface/transformers.js/issues/238
closed
[ "question" ]
2023-08-07T01:53:35Z
2023-08-13T23:27:55Z
null
sambowenhughes
huggingface/chat-ui
389
Inject assistant message in the begining of the chat
Hey, is it possible to start a conversation with an assistant message showing up as the first message in the chat?
https://github.com/huggingface/chat-ui/issues/389
closed
[ "enhancement", "question" ]
2023-08-06T17:25:25Z
2023-09-18T12:52:16Z
null
matankley
huggingface/diffusers
4,494
How to convert an SDXL diffusers pipeline to a checkpoint or safetensors
I need to fine-tune the Stable Diffusion UNet (or similar), and then convert the pipeline into a ckpt for webui usage. Previously I used `scripts/convert_diffusers_to_original_stable_diffusion.py` for the conversion, but currently it cannot convert an SDXL pipeline correctly, and the webui may raise errors. Thanks in advance.
https://github.com/huggingface/diffusers/issues/4494
closed
[ "stale", "contributions-welcome" ]
2023-08-06T13:06:54Z
2023-11-06T04:42:19Z
null
FeiiYin
huggingface/chat-ui
388
Is it down?
It doesn't load for me, and neither does your website.
https://github.com/huggingface/chat-ui/issues/388
closed
[]
2023-08-06T08:54:47Z
2023-08-08T06:05:48Z
6
BenutzerEinsZweiDrei
huggingface/transformers.js
237
[Question] Ipynb for ONNX conversion?
Could you please share the code you're using to convert models to onnx? I know you say in your cards you're using Optimum, but when I try to do it myself, I get much larger onnx files (talking about disk space here) and I don't know what I'm doing wrong.
https://github.com/huggingface/transformers.js/issues/237
closed
[ "question" ]
2023-08-06T08:45:19Z
2023-08-06T09:17:02Z
null
Mihaiii
huggingface/transformers.js
233
[Docs] Mention demo (GitHub pages) in Readme
I love your old demo page on GitHub pages (https://xenova.github.io/transformers.js/), as one can easily play with the models and copy code if needed. Is there any reason it's not mentioned anymore (or not more visible) in the Readme? (Sorry, added bug label accidentally, should be question instead)
https://github.com/huggingface/transformers.js/issues/233
closed
[ "question" ]
2023-08-04T10:53:48Z
2023-12-06T15:01:38Z
null
do-me
huggingface/datasets
6,120
Lookahead streaming support?
### Feature request From what I understand, a streaming dataset currently pulls and processes the data as it is requested. This can introduce significant latency when data is loaded into the training process, since the loader has to wait for each segment (the delays may be dataset specific, or even mapping/tokenizer specific). Is it possible to introduce a `streaming_lookahead` parameter for predictable workloads (even a shuffled dataset with a fixed seed)? Since we can predict in advance what the next few data samples will be, we could fetch them while the current set is being trained on. With enough CPU and bandwidth to keep up with the training process, and a sufficiently large lookahead, this would reduce the latency spent waiting for the dataset to be ready between batches. ### Motivation Faster streaming performance while training over extra-large, TB-sized datasets. ### Your contribution I currently use HF datasets with the PyTorch Lightning trainer for the RWKV project, and would be able to help test this feature if supported.
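For what it's worth, the proposed behavior can be prototyped today by wrapping any iterable (including a streaming dataset) in a background prefetcher. A minimal sketch: `lookahead` is the proposed knob, not an existing `datasets` option, and error propagation is omitted for brevity.

```python
import queue
import threading

class LookaheadIterator:
    """Prefetch items from an iterable on a background thread."""

    _DONE = object()

    def __init__(self, iterable, lookahead=64):
        # Bounded queue: the producer stays at most `lookahead` items ahead,
        # so memory use is capped while the consumer rarely has to wait.
        self._queue = queue.Queue(maxsize=lookahead)
        self._thread = threading.Thread(
            target=self._fill, args=(iter(iterable),), daemon=True
        )
        self._thread.start()

    def _fill(self, it):
        for item in it:
            self._queue.put(item)
        self._queue.put(self._DONE)

    def __iter__(self):
        return self

    def __next__(self):
        item = self._queue.get()
        if item is self._DONE:
            raise StopIteration
        return item

# Usage: for sample in LookaheadIterator(streaming_dataset, lookahead=256): ...
```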
https://github.com/huggingface/datasets/issues/6120
open
[ "enhancement" ]
2023-08-04T04:01:52Z
2023-08-17T17:48:42Z
1
PicoCreator
huggingface/diffusers
4,459
How to convert a picture to a text embedding without training an image model like Textual Inversion
CLIP text: tokens -> text_embedding -> text_features. CLIP image: img -> img_embedding -> img_features. How can I perform the inversion img -> text_embedding without training each time (i.e., without training an image model the way Textual Inversion does)?
https://github.com/huggingface/diffusers/issues/4459
closed
[ "stale" ]
2023-08-04T01:46:25Z
2023-09-12T15:03:45Z
null
yanchaoguo
huggingface/datasets
6,116
[Docs] The "Process" how-to guide lacks description of `select_columns` function
### Feature request The [how to process dataset guide](https://huggingface.co/docs/datasets/main/en/process) currently does not mention the [`select_columns`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.select_columns) function. It would be nice to include it in the guide. ### Motivation This function is a commonly requested feature (see this [forum thread](https://discuss.huggingface.co/t/how-to-create-a-new-dataset-from-another-dataset-and-select-specific-columns-and-the-data-along-with-the-column/15120) and #5468 #5474). However, it has not been included in the guide since its implementation by PR #5480. Mentioning it in the guide would help future users discover this added feature. ### Your contribution I could submit a PR to add a brief description of the function to said guide.
https://github.com/huggingface/datasets/issues/6116
closed
[ "enhancement" ]
2023-08-03T13:45:10Z
2023-08-16T10:02:53Z
null
unifyh
huggingface/diffusers
4,453
How to convert diffusers SDXL lora into safetensors that works with AUTO1111 webui
### Describe the bug I trained a lora on SDXL with this diffusers script: https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py I get great results when using the output .bin with the diffusers inference code. How can I convert the .bin to .safetensors that can be loaded in AUTO1111 webui? ### Reproduction Train a lora on SDXL with this diffusers script: https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth_lora_sdxl.py The lora model cannot be loaded in AUTO1111 webui ### Logs _No response_ ### System Info Python 3.10 ### Who can help? _No response_
https://github.com/huggingface/diffusers/issues/4453
closed
[ "bug", "stale" ]
2023-08-03T11:23:25Z
2023-09-12T15:03:46Z
null
wangqyqq
huggingface/text-generation-inference
765
How to benchmark a warmed local model by docker
### System Info Using the docker run to connected local model and it worked: `docker run --rm --name tgi --runtime=nvidia --gpus all -p 5001:5001 -v data/nfs/gdiist/model:/data k8s-master:5000/text-generation-inference:0.9.3 --model-id /data/llama-7b-hf --hostname 0.0.0.0 --port 5001 --dtype float16 ` ``` 2023-08-03T09:14:08.564776Z INFO text_generation_launcher: Starting Webserver 2023-08-03T09:14:08.587895Z WARN text_generation_router: router/src/main.rs:165: Could not find a fast tokenizer implementation for /data/llama-7b-hf 2023-08-03T09:14:08.587942Z WARN text_generation_router: router/src/main.rs:168: Rust input length validation and truncation is disabled 2023-08-03T09:14:08.587953Z WARN text_generation_router: router/src/main.rs:193: no pipeline tag found for model /data/llama-7b-hf 2023-08-03T09:14:08.595313Z INFO text_generation_router: router/src/main.rs:212: Warming up model 2023-08-03T09:14:11.767661Z INFO text_generation_router: router/src/main.rs:221: Connected ### Information - [X] Docker - [ ] The CLI directly ### Tasks - [X] An officially supported command - [ ] My own modifications ### Reproduction And I can't use the `text-generation-benchmark` so I entered the Docker container and using the following command: `docker exec -it tgi /bin/bash` `text-generation-benchmark --tokenizer-name data/nfs/gdiist/model/llama-7b-hf` There are errors reported as follows: ``` 2023-08-03T09:23:25.437223Z INFO text_generation_benchmark: benchmark/src/main.rs:126: Loading tokenizer 2023-08-03T09:23:25.437552Z INFO text_generation_benchmark: benchmark/src/main.rs:135: Downloading tokenizer 2023-08-03T09:23:26.218104Z ERROR cached_path::cache: /usr/local/cargo/registry/src/index.crates.io-6f17d22bba15001f/cached-path-0.6.1/src/cache.rs:559: ETAG fetch for https://huggingface.co/data/nfs/gdiist/model/llama-7b-hf/resolve/main/tokenizer.json failed with fatal error thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: "Model 
\"data/nfs/gdiist/model/llama-7b-hf\" on the Hub doesn't have a tokenizer"', benchmark/src/main.rs:147:78 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace Aborted (core dumped) ``` I want to know whether this is caused by using a local model or by missing parameters. ### Expected behavior 1. Help me use the benchmark tool after docker run 2. Tell me how to use 2 GPUs to run a local model with docker run Thanks!
https://github.com/huggingface/text-generation-inference/issues/765
closed
[]
2023-08-03T09:28:07Z
2023-10-16T01:50:10Z
null
Laych7
huggingface/diffusers
4,448
Outpainting results from diffusers' StableDiffusionControlNetPipeline is much worse than those from A1111 webui. How to improve?
I am trying to outpaint some human images (mainly the lower-body part) with SD 1.5 conditioned on ControlNet's inpainting and openpose. I have been using A1111 webui with ControlNet extension and it has been working quite well: Here are my settings in the webui: <img width="774" alt="Screenshot 2023-08-03 at 15 08 30" src="https://github.com/huggingface/diffusers/assets/50854238/f5d2ed63-bd8e-467a-81cb-28293eb45fe4"> ![1691046578453](https://github.com/huggingface/diffusers/assets/50854238/8baf5891-6fe8-4006-bce9-bca903a3d6bf) <img width="774" alt="Screenshot 2023-08-03 at 15 10 00" src="https://github.com/huggingface/diffusers/assets/50854238/8b9e6c76-3986-437a-9159-cb799d35131d"> Note that 2 ControlNet units are enabled, one for OpenPose and one for ControlNet's inpainting model. For OpenPose I enabled "Preview as Input" and upload my custom json file with all joints defined (although the lower-body joints are not visible in the input image). Here is the result I get from the webui, which looks good: ![00001-2019210750](https://github.com/huggingface/diffusers/assets/50854238/491a2de1-180c-473d-83d0-44376c4cc7f1) Now, I'm trying to reproduce this result using diffusers' StableDiffusionControlNetPipeline. 
Below is my code: ``` import numpy as np from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, DDIMScheduler import torch from diffusers.utils import load_image import cv2 from PIL import Image def make_inpaint_condition(image, image_mask): image = np.array(image.convert("RGB")).astype(np.float32) / 255.0 image_mask = np.array(image_mask.convert("L")).astype(np.float32) assert image.shape[0:1] == image_mask.shape[0:1], "image and image_mask must have the same image size" image[image_mask < 128] = -1.0 # set as masked pixel image = np.expand_dims(image, 0).transpose(0, 3, 1, 2) image = torch.from_numpy(image) return image controlnet_inpaint = ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_inpaint', torch_dtype=torch.float16) controlnet_openpose = ControlNetModel.from_pretrained('lllyasviel/control_v11p_sd15_openpose', torch_dtype=torch.float16) pipe = StableDiffusionControlNetPipeline.from_pretrained('runwayml/stable-diffusion-v1-5', controlnet=[controlnet_inpaint, controlnet_openpose], torch_dtype=torch.float16, safety_checker=None).to('cuda') pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config) pipe.enable_model_cpu_offload() pipe.enable_xformers_memory_efficient_attention() original_image = load_image('./image.png') mask_image = load_image('./mask.png') inpaint_condition_image = make_inpaint_condition(original_image, mask_image) openpose_condition_image = load_image('./pose.png') generated_img = pipe(prompt="best quality, photorealistic, empty background", negative_prompt="lowres, bad hands, bad feet, worst quality", num_inference_steps=20, guidance_scale=10.0, image=[inpaint_condition_image, openpose_condition_image]).images[0] generated_img.save('./test.png') ``` and here is the result I get from diffusers: ![test (17)](https://github.com/huggingface/diffusers/assets/50854238/59fe3240-2650-4d9e-a46f-4359b368dc93) The legs look much less realistic and the background is kind of noisy. 
I have been using the same SD model (sd v1.5), same controlnet models (v1.1 for OpenPose and inpainting), and same sampler (DDIM), but the results from diffusers are much worse than the webui. What can I do to reproduce the results I get from the webui? It also seems that with the diffusers pipeline, the unmasked part is also slightly modified. Is there any post-processing applied to it?
https://github.com/huggingface/diffusers/issues/4448
closed
[]
2023-08-03T07:19:12Z
2023-08-30T05:35:03Z
null
xiyichen
huggingface/transformers
25,280
How to download files from HF spaces
### System Info google colab ### Who can help? @sanchit-gandhi @rock ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I tried: ``` from huggingface_hub import hf_hub_download, hf_hub_url model_path = hf_hub_download(repo_id="xinyu1205/recognize-anything", filename="tag2text_swin_14m.pth", local_dir="/content") ``` but it throws an error that the repo is not present ### Expected behavior download the file
https://github.com/huggingface/transformers/issues/25280
closed
[]
2023-08-03T07:02:03Z
2023-09-11T08:02:40Z
null
andysingal
huggingface/diffusers
4,445
How to finetune lora model ?
**Is your feature request related to a problem? Please describe.** If I have a model from civitai, how do I fine-tune it on SD 1.5 and SDXL?
https://github.com/huggingface/diffusers/issues/4445
closed
[ "stale" ]
2023-08-03T01:55:15Z
2023-09-12T15:03:49Z
null
kelisiya
huggingface/sentence-transformers
2,268
How to chop up a long document into chunks of max sequence length?
Given a long document, how do I chop it up into chunks so that each chunk is within the [max sequence length](https://www.sbert.net/examples/applications/computing-embeddings/README.html#input-sequence-length) of a model?
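One common approach is to tokenize once and slide a window over the token ids with some overlap, so text cut at a chunk boundary keeps context. A dependency-free sketch over raw token ids (the helper name is made up; as far as I know, fast Hugging Face tokenizers can do the equivalent with `return_overflowing_tokens=True` and `stride=`):

```python
def chunk_token_ids(ids, max_length, stride=0):
    # Split a token-id sequence into windows of at most `max_length`
    # tokens, overlapping consecutive windows by `stride` tokens.
    if stride >= max_length:
        raise ValueError("stride must be smaller than max_length")
    step = max_length - stride
    return [ids[i:i + max_length] for i in range(0, max(len(ids) - stride, 1), step)]

# In practice `ids` would come from tokenizer(text)["input_ids"], and each
# chunk would be embedded separately, then the chunk embeddings pooled
# (e.g. averaged) if one vector per document is needed.
```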
https://github.com/huggingface/sentence-transformers/issues/2268
open
[]
2023-08-02T16:50:09Z
2023-08-04T18:47:22Z
null
siddhsql
huggingface/dataset-viewer
1,602
Parallel steps update incoherence
See the discussion https://huggingface.co/datasets/BelleGroup/multiturn_chat_0.8M/discussions/1#64c9e88a6a26cddbecd9bec6 Before the dataset update, the `split-first-rows-from-parquet` response was a success, and thus the `split-first-rows-from-streaming` response, computed later, is a `ResponseAlreadyComputedError` error. But after the dataset update, the `split-first-rows-from-parquet` response was an error (due to a disk issue: ` FileSystemError`) and, due to a heavy load on the infra, the `split-first-rows-from-streaming` response has not been processed yet, so: it's still `ResponseAlreadyComputedError`. Possibilities: 1. remove `ResponseAlreadyComputedError`, and copy the response (doubles storage) 2. change the model for parallel steps, and store only once. Let's say we have M+N parallel steps. If M steps are successful (normally with the same response) and N steps are erroneous, let's store the optional successful response content once, and all the responses, removing the success content for successful responses. It is a lot of complexity. 3. keep the logic, but if a parallel step gives an error whereas it had a successful response before AND the other parallel step is `ResponseAlreadyComputedError`, copy the successful answer to the other step. Seems brittle and overly complex. 4. keep the logic, but if a parallel step gives an error whereas it had a successful response before AND the other parallel step is `ResponseAlreadyComputedError`, delete the other answer None seems like a good idea. Do you have better ideas @huggingface/datasets-server ?
https://github.com/huggingface/dataset-viewer/issues/1602
closed
[ "bug", "question", "P1" ]
2023-08-02T13:44:35Z
2024-02-06T14:52:06Z
null
severo
huggingface/transformers
25,264
[Question] How to load AutoFeatureExtractor on GPU?
Hi, I am following this guide to learn how to do audio classification with wav2vec2: https://huggingface.co/docs/transformers/main/tasks/audio_classification I intend to extract features of my data with the following code ``` feature_extractor = AutoFeatureExtractor.from_pretrained("/workspace/models/wav2vec2-large-robust") def preprocess_function(examples): audio_arrays = [x["array"] for x in tqdm(examples["audio"])] inputs = feature_extractor( audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True ) return inputs encoded_audio_dataset_train = audio_dataset_train.map(preprocess_function, remove_columns="audio", batched=True) ``` But it seems the extractor runs on CPU instead of GPU, and I couldn't find in the documentation how to set the device for the feature extractor. I assume the feature extraction is done by the wav2vec2 model itself, right? If so, how do I do this on GPU? Or is it mentioned in some documentation that I didn't notice? This is my first time using the transformers library for audio processing, so please forgive my clumsiness. Any help is much appreciated.
https://github.com/huggingface/transformers/issues/25264
closed
[]
2023-08-02T12:26:20Z
2023-09-11T08:02:43Z
null
treya-lin
huggingface/datasets
6,111
raise FileNotFoundError("Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." )
### Describe the bug For researchers in some countries or regions, it is usually the case that the download ability of `load_dataset` is disabled due to the complex network environment. People in these regions often prefer to use git clone or other programming tricks to manually download the files to the disk (for example, [How to elegantly download hf models, zhihu zhuanlan](https://zhuanlan.zhihu.com/p/475260268) proposed a crawler-based solution, and [Is there any mirror for hf_hub, zhihu answer](https://www.zhihu.com/question/371644077) provided some cloud based solutions, and [How to avoid pitfalls on Hugging face downloading, zhihu zhuanlan] gave some useful suggestions), and then use `load_from_disk` to get the dataset object. However, when one finally has the local files on the disk, it is still buggy when trying to load the files into objects. ### Steps to reproduce the bug Steps to reproduce the bug: 1. Find the CIFAR dataset on Hugging Face: https://huggingface.co/datasets/cifar100/tree/main 2. Click the ":" button to show the "Clone repository" option, and then follow the prompts on the box: ```bash cd my_directory_absolute git lfs install git clone https://huggingface.co/datasets/cifar100 ls my_directory_absolute/cifar100 # confirm that the directory exists and it is OK. ``` 3. Write a Python file to try to load the dataset ```python from datasets import load_dataset, load_from_disk dataset = load_from_disk("my_directory_absolute/cifar100") ``` Notice that according to issue #3700 , it is wrong to use load_dataset("my_directory_absolute/cifar100"), so we must use load_from_disk instead. 4.
Then you will see the error reported: ```log --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) Cell In[5], line 9 1 from datasets import load_dataset, load_from_disk ----> 9 dataset = load_from_disk("my_directory_absolute/cifar100") File [~/miniconda3/envs/ai/lib/python3.10/site-packages/datasets/load.py:2232), in load_from_disk(dataset_path, fs, keep_in_memory, storage_options) 2230 return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options) 2231 else: -> 2232 raise FileNotFoundError( 2233 f"Directory {dataset_path} is neither a `Dataset` directory nor a `DatasetDict` directory." 2234 ) FileNotFoundError: Directory my_directory_absolute/cifar100 is neither a `Dataset` directory nor a `DatasetDict` directory. ``` ### Expected behavior The dataset should be load successfully. ### Environment info ```bash datasets-cli env ``` -> results: ```txt Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.14.2 - Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3 ```
https://github.com/huggingface/datasets/issues/6111
closed
[]
2023-08-02T09:17:29Z
2023-08-29T02:00:28Z
3
2catycm
huggingface/transformers
25,257
how to print out the data loaded by each epoch during trainer.train() training?
### Feature request Please tell me how to print out the data loaded in each epoch during trainer.train() training. ### Motivation As above. ### Your contribution As above.
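A sketch of one way to observe this (assumes `transformers` is installed): `Trainer` callbacks see the training state but not the batch tensors, so for the actual samples you would iterate the dataloader the trainer builds.

```python
from transformers import TrainerCallback

class EpochLoggerCallback(TrainerCallback):
    # Attach with Trainer(..., callbacks=[EpochLoggerCallback()]).
    # Callbacks receive the TrainerState (epoch, global_step), not the
    # batch tensors themselves.
    def on_epoch_begin(self, args, state, control, **kwargs):
        print(f"--- starting epoch {state.epoch} ---")

    def on_step_end(self, args, state, control, **kwargs):
        print(f"finished step {state.global_step}")

# To see the actual samples the trainer will feed each epoch, iterate the
# dataloader it builds:
# for batch in trainer.get_train_dataloader():
#     print(batch)
```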
https://github.com/huggingface/transformers/issues/25257
closed
[]
2023-08-02T09:13:55Z
2023-09-11T08:02:47Z
null
ahong007007
huggingface/tokenizers
1,310
How to train BPE tokenizer with multiple CPU
Hi, I tried to train a BPE tokenizer on about 10 GB of text, but it seems extremely slow (it has been running for more than 24 hours and has not finished yet). Is there a way to turn on multi-CPU training (htop shows only 1 CPU used)? Here is the code. ``` from tokenizers import Tokenizer, decoders, models, normalizers, pre_tokenizers, trainers, processors tokenizer = Tokenizer(models.BPE()) tokenizer.normalizer = normalizers.NFC() tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False) tokenizer.post_processor = processors.ByteLevel(trim_offsets=False) tokenizer.decoder = decoders.ByteLevel() trainer = trainers.BpeTrainer( vocab_size=50000, min_frequency=1, initial_alphabet=pre_tokenizers.ByteLevel.alphabet(), special_tokens=special_tokens, # assumed defined elsewhere ) # Tokenizer.train takes a list of file paths, not an open file object tokenizer.train(["train_bpe.txt"], trainer=trainer) ```
https://github.com/huggingface/tokenizers/issues/1310
closed
[]
2023-08-02T08:14:07Z
2023-08-02T09:10:44Z
null
voidmagic
huggingface/chat-ui
380
Issue with Text Generation in Stream Mode
Hi, text generation in stream mode is not functioning as expected on my development server, which runs behind a reverse proxy with the correct base path defined. I only receive a single response in one go, whereas I expect a continuous stream of text. Please assist me in resolving this issue. Thank you!
https://github.com/huggingface/chat-ui/issues/380
closed
[ "support" ]
2023-08-01T19:07:50Z
2023-09-10T12:22:16Z
10
bilal-rachik
huggingface/transformers
25,245
BLIP-2 request: If it's even possible, can you please provide an official example script of how to get the text(caption) features and image features into the same vector space (e.g. for cross-modal retrieval/search using BLIP-2 models, similar to what we can already do with CLIP.) Thanks in advance.
### System Info linux, python 3.8+, pytorch '1.13.0+cu116' ### Who can help? @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction N/A ### Expected behavior N/A
https://github.com/huggingface/transformers/issues/25245
closed
[]
2023-08-01T18:21:07Z
2023-09-21T08:03:25Z
null
wingz1
huggingface/dataset-viewer
1,591
Should we convert the datasets to other formats than parquet?
One OP asked for CSV conversion (not explicitly from the Hub itself): https://huggingface.co/datasets/medical_questions_pairs/discussions/3#64c8c2af527d76365563285c
https://github.com/huggingface/dataset-viewer/issues/1591
closed
[ "question", "feature request", "P2" ]
2023-08-01T13:47:12Z
2024-06-19T14:19:01Z
null
severo
huggingface/optimum
1,243
transformers.convert_graph_to_onnx.quantize equivalent with optimum?
Historically, I've used the following to quantize a model after training: ```python import sys from pathlib import Path from transformers.convert_graph_to_onnx import quantize input_file = sys.argv[1] print("Performing quantization of model '{}'".format(input_file)) quantized_model_path = quantize(Path(input_file)) print("Rename quantized model '{}' to '{}'".format(quantized_model_path.name, input_file)) quantized_model_path.replace(input_file) ``` Is there a way to accomplish the same type of quantization using`optimum-cli? The quantize method from above (that is deprecated) produces a much smaller model than optimum-cli. ``` Original model 448M multilingual-e5-small-onnx/model.onnx Model after above 112M multilingual-e5-small-onnx/model.onnx ``` I've tried the following export/quantize commands, but the model file size is still above 400MB ``` $ optimum-cli export onnx --task sentence-similarity -m intfloat/multilingual-e5-small --optimize O3 multilingual-e5-small-onnx $ optimum-cli onnxruntime quantize --onnx_model multilingual-e5-small-onnx --avx2 --output test ``` ``` 403M Aug 1 09:38 test/model_quantized.onnx ``` Thank you!
https://github.com/huggingface/optimum/issues/1243
closed
[]
2023-08-01T07:59:03Z
2023-08-01T21:45:46Z
2
jobergum
huggingface/sentence-transformers
2,266
How to measure the quality of embeddings?
I am using `sentence-transformers` to encode long texts into embeddings for a text classification task. However, I'm unsure how to compare the quality of the embeddings when evaluating multiple models. Could you please provide some advice?
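One practical proxy: take a small labeled set for your task and check how well embedding similarity tracks the labels, e.g. rank candidate models by how often a text's cosine-nearest neighbour shares its class (sentence-transformers also ships evaluators such as `EmbeddingSimilarityEvaluator` for labeled similarity pairs). A dependency-free sketch of the scoring part:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def nearest_label(query, corpus, labels):
    # Label of the corpus embedding most similar to `query`. The fraction
    # of held-out texts whose nearest neighbour shares their true label is
    # a simple score that is comparable across embedding models.
    best = max(range(len(corpus)), key=lambda i: cosine(query, corpus[i]))
    return labels[best]
```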
https://github.com/huggingface/sentence-transformers/issues/2266
open
[]
2023-08-01T06:59:41Z
2023-09-01T06:12:39Z
null
sgwhat
huggingface/trl
597
How to run using multi-GPUs?
Hi, I'm not very familiar with multi-GPU training. I have a machine with 8 A100s; what should I do to run full-parameter SFT on a Llama-2-7B model, and how do I use the trl tool to do it? Thanks.
https://github.com/huggingface/trl/issues/597
closed
[]
2023-08-01T06:36:27Z
2023-08-21T03:39:46Z
null
jyC23333
huggingface/diffusers
4,407
How to store a file from hf_hub_download in a local directory?
### Describe the bug Running: ``` from huggingface_hub import hf_hub_url, hf_hub_download # Generate/show the URL hf_hub_url( repo_id="XpucT/Deliberate", filename="Deliberate-inpainting.safetensors", ) # Download the file hf_hub_download( repo_id="XpucT/Deliberate", filename="Deliberate-inpainting.safetensors", ) ``` but the file is not stored in the local directory ### Reproduction same as above ### Logs _No response_ ### System Info kaggle notebook ### Who can help? @sayakpaul @patrickvonplaten @will
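For reference, `hf_hub_download` stores files under the shared cache (`~/.cache/huggingface`) by default and returns the cached path; passing `local_dir` materializes the file where you want it. A sketch (the wrapper name is made up):

```python
from huggingface_hub import hf_hub_download

def download_to(repo_id, filename, dest_dir):
    # local_dir places the actual file under dest_dir instead of only in
    # the shared cache; the returned path points inside dest_dir.
    return hf_hub_download(repo_id=repo_id, filename=filename, local_dir=dest_dir)

# Usage (hits the network):
# download_to("XpucT/Deliberate", "Deliberate-inpainting.safetensors", "/kaggle/working")
```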
https://github.com/huggingface/diffusers/issues/4407
closed
[ "bug" ]
2023-08-01T05:21:39Z
2023-08-01T05:55:46Z
null
andysingal
huggingface/datasets
6,108
Loading local datasets got strangely stuck
### Describe the bug I am trying to use `load_dataset()` to load several local `.jsonl` files as a dataset. Every line of these files is a JSON structure containing only one key, `text` (yes, it is a dataset for an NLP model). The code snippet is: ```python ds = load_dataset("json", data_files=LIST_OF_FILE_PATHS, num_proc=16)['train'] ``` However, I found that the loading process can get stuck -- the progress bar `Generating train split` no longer proceeds. While trying to find the cause and a solution, I noticed a really strange behavior. If I load the dataset this way: ```python dlist = list() for _ in LIST_OF_FILE_PATHS: dlist.append(load_dataset("json", data_files=_)['train']) ds = concatenate_datasets(dlist) ``` I can actually load all the files successfully, despite the slow speed. But if I load them in one batch as above, things go wrong. I did try to use Control-C to trace the stuck point, but the program cannot be terminated this way when `num_proc` is set to `None`. The only thing I can do is use Control-Z to suspend it and then kill it.
If I use more than 2 cpus, a Control-C would simply cause the following error: ```bash ^C Process ForkPoolWorker-1: Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 314, in _bootstrap self.run() File "/usr/local/lib/python3.10/dist-packages/multiprocess/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 114, in worker task = get() File "/usr/local/lib/python3.10/dist-packages/multiprocess/queues.py", line 368, in get res = self._reader.recv_bytes() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 224, in recv_bytes buf = self._recv_bytes(maxlength) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes buf = self._recv(4) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv chunk = read(handle, remaining) KeyboardInterrupt Generating train split: 92431 examples [01:23, 1104.25 examples/s] Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1373, in iflatmap_unordered yield queue.get(timeout=0.05) File "<string>", line 2, in get File "/usr/local/lib/python3.10/dist-packages/multiprocess/managers.py", line 818, in _callmethod kind, result = conn.recv() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 258, in recv buf = self._recv_bytes() File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 422, in _recv_bytes buf = self._recv(4) File "/usr/local/lib/python3.10/dist-packages/multiprocess/connection.py", line 387, in _recv chunk = read(handle, remaining) KeyboardInterrupt During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/data/liyongyuan/source/batch_load.py", line 11, in <module> a = load_dataset( File 
"/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2133, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 954, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1049, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1842, in _prepare_split for job_id, done, content in iflatmap_unordered( File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in iflatmap_unordered [async_result.get(timeout=0.05) for async_result in async_results] File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 1387, in <listcomp> [async_result.get(timeout=0.05) for async_result in async_results] File "/usr/local/lib/python3.10/dist-packages/multiprocess/pool.py", line 770, in get raise TimeoutError multiprocess.context.TimeoutError ``` I have validated the basic correctness of these `.jsonl` files. They are correctly formatted (or they cannot be loaded singly by `load_dataset`) though some of the json may contain too long text (more than 1e7 characters). I do not know if this could be the problem. And there should not be any bottleneck in system's resource. The whole dataset is ~300GB, and I am using a cloud server with plenty of storage and 1TB ram. Thanks for your efforts and patience! Any suggestion or help would be appreciated. ### Steps to reproduce the bug 1. use load_dataset() with `data_files = LIST_OF_FILES` ### Expected behavior All the files should be smoothly loaded. ### Environment info - Datasets: A private datas
https://github.com/huggingface/datasets/issues/6108
open
[]
2023-08-01T02:28:06Z
2024-12-31T16:01:00Z
7
LoveCatc
huggingface/chat-ui
379
Issue with Chat UI when deploying Text Generation API on a remote server
I am facing an issue with the Chat UI while using the Text Generation API. Everything works correctly when the Text Generation API is deployed on localhost, but the Chat UI doesn't work when the Text Generation API is deployed on a remote server.

Steps to reproduce the problem:

1. Deploy the Text Generation API on localhost.
2. Use the Chat UI to generate text and verify that it works correctly.
3. Deploy the Text Generation API on a remote server.
4. Use the Chat UI again to generate text and notice that it no longer works.

Expected behavior: The Chat UI should work properly whether the Text Generation API is deployed on localhost or on a remote server.

Additional information:

- I am using version 0.4 of the Chat UI and version 0.9.3 of the Text Generation API.
- The remote server hosting the Text Generation API responds correctly to requests.
- Tests have been conducted with the "text generation" client and Postman.

Any assistance in resolving this issue would be highly appreciated. Thank you!

![20230731_191316](https://github.com/huggingface/chat-ui/assets/49948822/658df806-11a7-4268-855c-f0fdbbe724b5)
https://github.com/huggingface/chat-ui/issues/379
open
[ "support" ]
2023-07-31T17:22:49Z
2023-09-18T12:55:45Z
0
bilal-rachik
huggingface/chat-ui
378
Add support for endpoints requiring client authentication using PKI
Hi,

Are you open to adding support for endpoints that require client authentication using PKI? I have a requirement to use client authentication with our backend inference server.

Currently, authentication config from each endpoint is passed to the `headers` arg of the fetch command: https://github.com/huggingface/chat-ui/blob/main/src/lib/server/generateFromDefaultEndpoint.ts#L35

My quick googling has yielded this: https://sebtrif.xyz/blog/2019-10-03-client-side-ssl-in-node-js-with-fetch/

tl;dr: they create an `https.Agent(..)` which loads a PKI context from file and is passed to the `agent` arg in the fetch command.

If you're happy for this to be added, how would you like to separate the logic of authentication using headers from client authentication using an SSL context?

Thank you! :)
https://github.com/huggingface/chat-ui/issues/378
closed
[ "question", "front" ]
2023-07-31T17:13:53Z
2023-08-15T18:51:29Z
null
cambriancoder
huggingface/chat-ui
377
Provide a login button, for existing users?
I just switched to another laptop and couldn't find a login button to access my Hugging Face account. Only after using the chat once did I get a message asking me to log in. I would suggest making it more traditional, with a username and a login button in the left sidebar.
https://github.com/huggingface/chat-ui/issues/377
closed
[ "enhancement", "front" ]
2023-07-31T12:08:52Z
2023-08-02T12:19:30Z
1
tobiashochguertel
huggingface/datasets
6,104
HF Datasets data access is extremely slow even when in memory
### Describe the bug

Doing a simple `some_dataset[:10]` can take more than a minute.

Profiling it:

<img width="1280" alt="image" src="https://github.com/huggingface/datasets/assets/36224762/e641fb95-ff02-4072-9016-5416a65f75ab">

`some_dataset` is completely in memory with no disk cache.

This is proving fatal to my usage of HF Datasets. Is there a way I can forgo the arrow format and store the dataset as PyTorch tensors so that `_tensorize` is not needed? And is `_consolidate` supposed to take this long? It's faster to produce the dataset from scratch than to access it from HF Datasets!

### Steps to reproduce the bug

I have uploaded the dataset that causes this problem [here](https://huggingface.co/datasets/NightMachinery/hf_datasets_bug1).

```python
#!/usr/bin/env python3

import sys
import time
import torch
from datasets import load_dataset

def main(dataset_name):
    # Start the timer
    start_time = time.time()

    # Load the dataset from Hugging Face Hub
    dataset = load_dataset(dataset_name)

    # Set the dataset format as torch
    dataset.set_format(type="torch")

    # Perform an identity map
    dataset = dataset.map(lambda example: example, batched=True, batch_size=20)

    # End the timer
    end_time = time.time()

    # Print the time taken
    print(f"Time taken: {end_time - start_time:.2f} seconds")

if __name__ == "__main__":
    dataset_name = "NightMachinery/hf_datasets_bug1"
    print(f"dataset_name: {dataset_name}")
    main(dataset_name)
```

### Expected behavior

_

### Environment info

- `datasets` version: 2.13.1
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
https://github.com/huggingface/datasets/issues/6104
open
[]
2023-07-31T11:12:19Z
2023-08-01T11:22:43Z
1
NightMachinery
huggingface/diffusers
4,382
How to overcome the influence of the seed and enhance the role of text prompts
I fine-tuned a text2img model with LoRA, based on Stable Diffusion v1.5. The generated results are very good, but they can't be controlled: the output seems to depend mostly on the seed. Changing the seed changes the image, and if I keep the seed fixed and only change the text prompt, the result doesn't change, or changes only very slightly.

1. How should I solve this problem?
2. I would like to request a new feature that helps balance the influence between the seed and the prompt, since some prompts are indeed sensitive to the seed.
https://github.com/huggingface/diffusers/issues/4382
closed
[]
2023-07-31T07:41:03Z
2023-08-02T09:23:50Z
null
XiaoyuZhuang
huggingface/transformers.js
230
[Question] distiluse-base-multilingual-cased-v2 - wrong vector dimension (768 vs 512) in onnx version?
I was just playing around with the model [distiluse-base-multilingual-cased-v2](https://huggingface.co/sentence-transformers/distiluse-base-multilingual-cased-v2) and noticed that your ONNX versions (both quantized and normal) produce embeddings with 768-dimensional vectors instead of 512.

Example:

index.html

```html
<!DOCTYPE html>
<html>
<head>
    <title>Transformers.js Example</title>
</head>
<body>
    <h1>Transformers.js Example</h1>
    <script type="module" src="main.js"></script>
</body>
</html>
```

main.js

```javascript
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.4.4';

async function allocatePipeline() {
    let pipe = await pipeline("feature-extraction", "Xenova/distiluse-base-multilingual-cased-v2");
    let out = await pipe("test", { pooling: 'mean', normalize: true });
    console.log(out);
}

allocatePipeline();
```

That gives me

```
Proxy(s) {dims: Array(2), type: 'float32', data: Float32Array(768), size: 768}
```

However, the model page states

> This is a [sentence-transformers](https://www.sbert.net/) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.

Also, I used the Python package

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/distiluse-base-multilingual-cased-v2')
model.encode("test")
```

which gives me a correct 512-dimensional embedding. Am I missing some option here or overlooking the obvious?
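As a side note on what `pooling: 'mean', normalize: true` computes: it reduces the per-token embeddings to a single vector whose length equals the model's hidden size, so the output dimensionality is determined by the exported model, not by these options. A plain-JavaScript sketch of that computation (my own illustration, not transformers.js source):

```javascript
// tokenEmbeddings: array of per-token vectors; attentionMask: array of 0/1.
// Returns the masked mean of the token vectors, L2-normalized.
function meanPoolNormalize(tokenEmbeddings, attentionMask) {
  const hidden = tokenEmbeddings[0].length;
  const sum = new Array(hidden).fill(0);
  let count = 0;
  tokenEmbeddings.forEach((tok, i) => {
    if (attentionMask[i]) {
      tok.forEach((v, j) => (sum[j] += v));
      count++;
    }
  });
  const pooled = sum.map(v => v / Math.max(count, 1e-9));
  const norm = Math.hypot(...pooled);
  return pooled.map(v => v / norm);
}
```

Since the hidden size here is 768, a 512-dimensional sentence-transformers output suggests an additional dense projection applied after this pooling step, which the ONNX export would also need to include — an assumption on my part about the cause.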
https://github.com/huggingface/transformers.js/issues/230
closed
[ "question" ]
2023-07-30T16:49:36Z
2024-10-18T13:30:12Z
null
do-me
huggingface/trl
592
How to load a custom structure model?
Hello, when I run the following code, I am told that only `AutoModelForCausalLMWithValueHead` and `AutoModelForSeq2SeqLMWithValueHead` are supported. But these two classes seem to only be able to load the specified pre-trained models.

`ppo_trainer = PPOTrainer(config, gen_model, gen_ref_model, tokenizer)`

My model is based on T5, but its structure has changed. How can I load my model? Is this supported?
https://github.com/huggingface/trl/issues/592
closed
[]
2023-07-30T15:42:18Z
2023-08-31T11:00:56Z
null
estuday
huggingface/datasets
6,099
How do I get "amazon_us_reviews"
### Feature request

I have been trying to load "amazon_us_reviews" but am unable to do so.

```python
amazon_us_reviews = load_dataset('amazon_us_reviews')
print(amazon_us_reviews)
```

> ValueError: Config name is missing. Please pick one among the available configs: ['Wireless_v1_00', 'Watches_v1_00', 'Video_Games_v1_00', 'Video_DVD_v1_00', 'Video_v1_00', 'Toys_v1_00', 'Tools_v1_00', 'Sports_v1_00', 'Software_v1_00', 'Shoes_v1_00', 'Pet_Products_v1_00', 'Personal_Care_Appliances_v1_00', 'PC_v1_00', 'Outdoors_v1_00', 'Office_Products_v1_00', 'Musical_Instruments_v1_00', 'Music_v1_00', 'Mobile_Electronics_v1_00', 'Mobile_Apps_v1_00', 'Major_Appliances_v1_00', 'Luggage_v1_00', 'Lawn_and_Garden_v1_00', 'Kitchen_v1_00', 'Jewelry_v1_00', 'Home_Improvement_v1_00', 'Home_Entertainment_v1_00', 'Home_v1_00', 'Health_Personal_Care_v1_00', 'Grocery_v1_00', 'Gift_Card_v1_00', 'Furniture_v1_00', 'Electronics_v1_00', 'Digital_Video_Games_v1_00', 'Digital_Video_Download_v1_00', 'Digital_Software_v1_00', 'Digital_Music_Purchase_v1_00', 'Digital_Ebook_Purchase_v1_00', 'Camera_v1_00', 'Books_v1_00', 'Beauty_v1_00', 'Baby_v1_00', 'Automotive_v1_00', 'Apparel_v1_00', 'Digital_Ebook_Purchase_v1_01', 'Books_v1_01', 'Books_v1_02'] Example of usage: `load_dataset('amazon_us_reviews', 'Wireless_v1_00')`

So I picked a config:

```python
amazon_us_reviews = load_dataset('amazon_us_reviews', 'Watches_v1_00')
print(amazon_us_reviews)
```

**ERROR**

```
Generating train split: 0% 0/960872 [00:00<?, ? examples/s]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
   1692                     )
-> 1693                     example = self.info.features.encode_example(record) if self.info.features is not None else record
   1694                     writer.write(example, key)

11 frames

KeyError: 'marketplace'

The above exception was the direct cause of the following exception:

DatasetGenerationError                    Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/datasets/builder.py in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
   1710             if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
   1711                 e = e.__context__
-> 1712             raise DatasetGenerationError("An error occurred while generating the dataset") from e
   1713
   1714         yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)

DatasetGenerationError: An error occurred while generating the dataset
```

### Motivation

The dataset I'm using: https://huggingface.co/datasets/amazon_us_reviews

### Your contribution

What is the best way to load this data?
https://github.com/huggingface/datasets/issues/6099
closed
[ "enhancement" ]
2023-07-30T11:02:17Z
2023-08-21T05:08:08Z
10
IqraBaluch
huggingface/trl
591
How to use SFTTrainer for multi-turn dialogues?
I want to use `SFTTrainer` to train on multi-turn dialogues. Does it apply to llama-2-7b-chat-hf? Is instruction tuning the same as for llama-2-7b-hf? My dataset consists of multi-turn dialogues. The prompt format is:

```
<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ user_msg_1 }} [/INST] {{ model_answer_1 }} </s><s>[INST] {{ user_msg_2 }} [/INST] {{ model_answer_2 }} </s><s>[INST] {{ user_msg_3 }} [/INST]
```
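To make the template concrete, here is a sketch of how such a multi-turn prompt could be assembled before handing the text column to `SFTTrainer`. `build_llama2_prompt` is a hypothetical helper following the template above, not a TRL API:

```python
def build_llama2_prompt(system_prompt, turns):
    """Format a multi-turn dialogue with the Llama-2 chat template above.

    turns: list of (user_msg, model_answer) pairs; use None as the final
    answer to end with an open [INST] block for generation.
    """
    text = ""
    for i, (user, answer) in enumerate(turns):
        if i == 0:
            # The system prompt is folded into the first user turn.
            user = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user}"
        text += f"<s>[INST] {user} [/INST]"
        if answer is not None:
            text += f" {answer} </s>"
    return text
```

For training, each dialogue would be formatted this way (with all answers filled in) and passed to `SFTTrainer` as a single text example.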
https://github.com/huggingface/trl/issues/591
closed
[]
2023-07-30T05:47:40Z
2023-08-01T06:21:04Z
null
moseshu
huggingface/transformers.js
228
[Question] Chaining automatic-speech recognition tasks sometimes produces weird output?
Hi! I'm using the automatic-speech-recognition task with vanilla Node.js (20) for (almost) live transcription (after the person has stopped talking).

This is the setup I'm using as per the docs:

```
const multilingual = true;
const model = "base";
const modelName = `Xenova/whisper-${model}${multilingual ? "" : ".en"}`;
const transcriber = await pipeline("automatic-speech-recognition", modelName);

const wav = new wavefile.WaveFile();
wav.fromScratch(1, 48000, "32f", audioBuffer.getChannelData(0));
wav.toSampleRate(16000); // Whisper expects audio with a sampling rate of 16000
let audioData = wav.getSamples();
if (Array.isArray(audioData)) {
  audioData = audioData[0];
}
let output = await transcriber(audioData);
```

This code almost works perfectly (I also verified the wav files by saving them locally). But every once in a while the model seems to get stuck for a couple of seconds. I can't say whether this is because I'm sending multiple requests to the pipeline while a task is still in progress (multiple speakers), or something else entirely. Sadly, I haven't found any documentation on whether the pipeline has a queue of some sort or whether it just mangles the data.

The output will look like this even though the sound snippet only contains a single "Ah...":

```
took 7.202248899996281s: Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah...
Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah... Ah...
```

or like this (no music was being played):

```
took 6.9480034999996425s: [Music]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]]
```

Generation time is also much, much longer (normally under 1s with whisper-base; this is the main problem I'm facing).

Is this a bug? I was thinking of working around the problem by canceling the operation if it takes longer than 2-3s, if that's possible, but that'd be the laziest workaround (something like `pipe.cancel();` or equivalent). Alternatively, I could implement a queue myself if it actually jumbles data when chaining tasks.

Thanks so much in advance for any suggestions!
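One way to rule out overlapping requests as the cause is to serialize calls into the pipeline with a tiny promise queue on the caller side. This is a sketch — transformers.js does not document an internal queue or a `pipe.cancel()`, so the queuing here is entirely my own:

```javascript
// Chain tasks so only one transcription runs at a time.
function createSerialQueue() {
  let tail = Promise.resolve();
  return function enqueue(task) {
    const result = tail.then(() => task());
    tail = result.catch(() => {}); // keep the chain alive after a failure
    return result;
  };
}

// Usage sketch:
//   const enqueue = createSerialQueue();
//   const output = await enqueue(() => transcriber(audioData));
```

If the stuck/garbled outputs disappear with this in place, concurrent calls into the same pipeline were the likely culprit.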
https://github.com/huggingface/transformers.js/issues/228
closed
[ "question" ]
2023-07-30T01:32:26Z
2024-12-07T14:45:02Z
null
funiel
huggingface/diffusers
4,363
how to properly load sd_xl_base_1.0_0.9vae.safetensors
### Describe the bug

Hi, how should I load sd_xl_base_1.0_0.9vae.safetensors, given that its namespace is the same as the 1.0 one?

### Reproduction

N/A

### Logs

_No response_

### System Info

ec2

### Who can help?

@sayakpaul @patrick
https://github.com/huggingface/diffusers/issues/4363
closed
[ "bug", "stale" ]
2023-07-29T21:16:34Z
2023-10-18T15:14:58Z
null
MaxTran96
huggingface/optimum-neuron
151
any example of how to use with Accelerate?
All the examples seem to replace `Trainer` but we are using `Accelerate`. Much appreciated! :)
https://github.com/huggingface/optimum-neuron/issues/151
closed
[ "Stale" ]
2023-07-29T05:51:20Z
2024-12-02T08:05:47Z
null
jiangts
huggingface/transformers.js
226
voice recognition
@xenova hello! I hope everything is going well for you. I just want to ask: can we recognize an audio file from its buffer for formats other than WAV — i.e., using an MP3 file buffer or the FLAC extension?

```
// Load audio data
let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
let buffer = Buffer.from(await fetch(url).then(x => x.arrayBuffer()))

// Read .wav file and convert it to required format
let wav = new wavefile.WaveFile(buffer);
wav.toBitDepth('32f'); // Pipeline expects input as a Float32Array
wav.toSampleRate(16000); // Whisper expects audio with a sampling rate of 16000
let audioData = wav.getSamples();
if (Array.isArray(audioData)) {
  // For this demo, if there are multiple channels for the audio file, we just select the first one.
  // In practice, you'd probably want to convert all channels to a single channel (e.g., stereo -> mono).
  audioData = audioData[0];
}
```
https://github.com/huggingface/transformers.js/issues/226
closed
[ "question" ]
2023-07-28T16:14:50Z
2023-08-20T23:43:31Z
null
jedLahrim
huggingface/chat-ui
372
Can I add i18n support?
It would be great to support standard i18n in the frontend; we can contribute it. Do you think such a contribution would be accepted? Maybe using this lib: [kaisermann/svelte-i18n](https://github.com/kaisermann/svelte-i18n/blob/main/docs/Getting%20Started.md)
https://github.com/huggingface/chat-ui/issues/372
closed
[ "enhancement", "question", "front" ]
2023-07-28T11:56:55Z
2024-06-17T18:07:41Z
null
juancgalvis
huggingface/chat-ui
371
Improve the UI to have flexible widths?
The left sidebar keeps growing here, and I wish I could make it wider. The same goes for the middle part, which is centered; sometimes I have to scroll sideways to see a whole code block because the middle part has left and right margins that I can't control. It would be great if we could set a percentage width for the left sidebar and the middle part in the user's profile.
https://github.com/huggingface/chat-ui/issues/371
open
[]
2023-07-28T11:27:27Z
2023-07-28T15:16:38Z
2
tobiashochguertel
huggingface/accelerate
1,786
How to save memory on 2 GPUs on one machine
Why is it that when I run my script on one GPU at batch_size 8 nothing bad happens, but when I use `accelerate launch` to run the same script on 2 GPUs with the same batch_size, both processes terminate with CUDA out of memory?

Here is my config:

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
downcast_bf16: 'no'
dynamo_config:
  dynamo_backend: INDUCTOR
gpu_ids: all
machine_rank: 0
main_training_function: main
mixed_precision: 'no'
num_machines: 1
num_processes: 2
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```

When I run my script normally on one GPU, memory utilization is about 23GB/24GB. Does this config make my processes use more memory?
https://github.com/huggingface/accelerate/issues/1786
closed
[]
2023-07-28T09:42:43Z
2023-09-15T15:06:17Z
null
Kangkang625
huggingface/text-generation-inference
720
How to make sure the local tgi server's performance is ok
### Feature request

Hello, I just deployed the TGI server as documented, in a Docker container on a single A100, and ran a load test with bloom-7b1, but the performance falls well short of other inference servers such as vLLM and FasterTransformer in the same environment and conditions. Is there something like an official performance table for a beginner like me to confirm the performance is OK, or detailed instructions for checking and tuning options to improve throughput? Thanks a lot!

### Motivation

None

### Your contribution

None
https://github.com/huggingface/text-generation-inference/issues/720
closed
[ "Stale" ]
2023-07-28T07:57:18Z
2024-04-25T01:58:42Z
null
lichangW
huggingface/transformers.js
224
[Question] Merge whisper-base.en main and output_attentions?
I can see there is `output_attentions` branch on https://huggingface.co/Xenova/whisper-base.en/tree/main and the difference from `main` seems it can support `return_timestamps: 'word'`. Is there a plan/schedule to merge these two? Or these two branches are incompatible to be merged together? In such case, will both receive future updates?
https://github.com/huggingface/transformers.js/issues/224
closed
[ "question" ]
2023-07-28T07:44:52Z
2023-09-04T20:59:21Z
null
jozefchutka
huggingface/blog
1,352
How to train the autoformer?
Dear authors, I have read your blog at https://huggingface.co/blog/autoformer; it is great at explaining why the Transformer is better than DLinear. However, I am wondering how to train my own Autoformer instead of using a pretrained one. Best regards
https://github.com/huggingface/blog/issues/1352
open
[]
2023-07-28T03:28:33Z
2023-12-07T17:40:09Z
null
AppleMax1992
huggingface/text-generation-inference
718
How to make sure Flash and PagedAttention are running?
### System Info

I am running the following for Llama v2 and was wondering how I can make sure PagedAttention and FlashAttention are running. Is there any flag to be set, or are they enabled by default?

```
docker run --gpus all --shm-size 1g -p $PORT:80 \
  -v $PWD/data:/data \
  -e HUGGING_FACE_HUB_TOKEN=$token \
  ghcr.io/huggingface/text-generation-inference:0.9.3 \
  --model-id $MODEL \
  --sharded false \
  --max-input-length 1024 \
  --max-total-tokens 2048 \
  --max-best-of 5 \
  --max-concurrent-requests 5000 \
  --max-batch-total-tokens $TOKENS \
  --num-shard 4
```

### Information

- [X] Docker
- [ ] The CLI directly

### Tasks

- [ ] An officially supported command
- [ ] My own modifications

### Reproduction

It is more of a question than a bug.

### Expected behavior

Just doc clarification.
https://github.com/huggingface/text-generation-inference/issues/718
closed
[]
2023-07-27T22:55:26Z
2023-07-28T08:19:20Z
null
HamidShojanazeri
huggingface/text-generation-inference
716
How to load a private model in TGI in Docker, and why inference performance differs between loading from Hugging Face and loading from a local directory
Hi team,

How do we load a private model in TGI in Docker, given the access issue? One solution I can think of is to pre-download the model, mount the model directory, and load that into TGI. However, I found a big inference-performance gap between these two methods; could the team provide some hints on why?

Steps to reproduce (model example: bigcode/santacoder):

1. Inference on 100 tokens via model-id `bigcode/santacoder` takes 180ms. Command:

```
docker run --gpus all --shm-size 1g -p 8080:80 -v /data:/data ghcr.io/huggingface/text-generation-inference:0.9.4 --model-id bigcode/santacoder --num-shard 1 --max-input-length 1000 --max-total-tokens 2000 --max-batch-total-tokens 4096 --max-concurrent-requests 1 --max-stop-sequences 20 --dtype float16 --trust-remote-code
```

total_time="158.787824ms" validation_time="221.404µs" queue_time="48.671µs" inference_time="158.517849ms" time_per_token="7.925892ms"

2. First clone the bigcode/santacoder repository by running:

```
git lfs install && git clone https://huggingface.co/bigcode/santacoder
```

Then run the Docker image, loading via the local santacoder directory. Inference on 100 tokens takes 280ms. Command:

```
docker run --gpus all -v santacoder_path:/model --shm-size 1g -p 8080:80 -v /data:/data ghcr.io/huggingface/text-generation-inference:0.9.4 --model-id /model --num-shard 1 --max-input-length 1000 --max-total-tokens 2000 --max-batch-total-tokens 4096 --max-concurrent-requests 1 --max-stop-sequences 20 --dtype float16 --trust-remote-code
```

total_time="329.15002ms" validation_time="183.883µs" queue_time="52.371µs" inference_time="328.914016ms" time_per_token="16.4457ms" seed="None"

When loading from the local directory, sharding takes more time, and there is one warning that the model does not support automatic max batch total tokens. Also, the output is garbage.

Test command for querying the server:

```
curl 127.0.0.1:8080/generate -X POST -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' -H 'Content-Type: application/json'
```

I think there may be some additional steps to get better performance, but I have not figured them out yet. Thanks for the help in advance!

Docker image version: ghcr.io/huggingface/text-generation-inference:0.9.4
https://github.com/huggingface/text-generation-inference/issues/716
closed
[]
2023-07-27T21:12:38Z
2023-07-28T07:12:53Z
null
zch-cc
huggingface/text-generation-inference
711
How can I find out what is wrong when a connection refused error happens?
Hi, I tried the command below to launch the container:

```
docker run --rm --name tgi --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=1 -p 8080:80 ghcr.io/huggingface/text-generation-inference:0.9.3 --model-id decapoda-research/llama-7b-hf
```

At this point, `netstat` shows that port 8080 is already listening on the host:

```
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN
```

Then I query it with:

```
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
```

But I get connection refused. Is there a debugging method to check what goes wrong here? Thx
https://github.com/huggingface/text-generation-inference/issues/711
closed
[]
2023-07-27T13:59:48Z
2023-07-27T14:10:46Z
null
leiwen83