Dataset columns:
- repo_name: string (9–75 chars)
- topic: string (30 classes)
- issue_number: int64 (1–203k)
- title: string (1–976 chars)
- body: string (0–254k chars)
- state: string (2 classes)
- created_at: string (20 chars)
- updated_at: string (20 chars)
- url: string (38–105 chars)
- labels: list (0–9 items)
- user_login: string (1–39 chars)
- comments_count: int64 (0–452)
davidsandberg/facenet
computer-vision
583
model definition selection
Hi, currently inception_resnet_v1 is used to train facenet, but I notice that definitions for inception_resnet_v2, as well as nn2, nn3, and nn4, are also provided. Does inception_resnet_v1 currently provide the best performance, even compared to the NN2 network? Thanks in advance!
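For context, the model definitions named above are separate modules in the repo, and training scripts in this style typically select one by dotted module path at runtime. A minimal sketch of that pattern (the module path `models.inception_resnet_v1` is taken from the question; the loader function here is hypothetical, not facenet's actual code):

```python
import importlib

def load_network(model_def):
    """Resolve a network-definition module from a dotted path,
    e.g. 'models.inception_resnet_v1' (path from the question above)."""
    return importlib.import_module(model_def)

# stand-in demonstration with a stdlib module, since the facenet
# packages are not importable outside the repo
network = load_network("math")
```

Swapping the string is then enough to compare definitions under the same training loop.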
open
2017-12-18T03:40:34Z
2018-04-11T16:17:59Z
https://github.com/davidsandberg/facenet/issues/583
[]
zhenglaizhang
1
aio-libs/aiomysql
asyncio
505
DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
Any plans to fix this deprecation warning?
```
root/.local/share/virtualenvs/app-ueEJiAOq/lib/python3.8/site-packages/aiomysql/pool.py:46: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
  self._cond = asyncio.Condition(loop=loop)
/usr/local/lib/python3.8/asyncio/locks.py:335: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
  lock = Lock(loop=loop)
/usr/local/lib/python3.8/asyncio/tasks.py:455: DeprecationWarning: The loop argument is deprecated since Python 3.8, and scheduled for removal in Python 3.10.
  return await fut
```
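For reference, the fix on the library side is simply to stop passing `loop=`: since Python 3.10 the argument is gone, and synchronization primitives bind to the running event loop on first use. A minimal sketch of the post-3.10 pattern:

```python
import asyncio

async def main():
    # No loop= argument: the Condition picks up the running event loop,
    # which is what the deprecated argument used to make explicit.
    cond = asyncio.Condition()
    async with cond:
        cond.notify_all()
    return "ok"

result = asyncio.run(main())
```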
closed
2020-06-19T20:01:42Z
2021-11-14T09:53:13Z
https://github.com/aio-libs/aiomysql/issues/505
[]
youngamichael
2
huggingface/datasets
numpy
6530
Impossible to save a mapped dataset to disk
### Describe the bug

I want to play around with different hyperparameters when training, but I don't want to re-map my dataset with 3 million samples each time for tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py).

After I do the mapping like this:
```
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True)
train_dataset = train_dataset.map(
    compute_vae_encodings_fn,
    batched=True,
    batch_size=16,
)
```
and try to save it like this: `train_dataset.save_to_disk("test")`, I get this error ([full traceback](https://pastebin.com/kq3vt739)):
```
TypeError: Object of type function is not JSON serializable
The format kwargs must be JSON serializable, but key 'transform' isn't.
```
What is interesting is that pushing to the hub works like this: `train_dataset.push_to_hub("kopyl/mapped-833-icons-sdxl-1024-dataset", token=True)`. Here is the link to the pushed dataset: https://huggingface.co/datasets/kopyl/mapped-833-icons-sdxl-1024-dataset

### Steps to reproduce the bug

Here is the self-contained notebook: https://colab.research.google.com/drive/1RtCsEMVcwWcMwlWURk_cj_9xUBHz065M?usp=sharing

### Expected behavior

It should be saved to disk without error.

### Environment info

NVIDIA A100, Linux (NC24ads A100 v4 from Azure), CUDA 12.2. [pip freeze](https://pastebin.com/QTNb6iru)
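The error itself is easy to reproduce in isolation: `save_to_disk` JSON-encodes the dataset's format kwargs, and a `transform` callable cannot be encoded. A minimal illustration with plain `json` (no `datasets` required; the `transform` function below is a placeholder for whatever `set_transform`/`with_transform` registered):

```python
import json

def transform(batch):
    # placeholder for a set_transform/with_transform callable
    return batch

format_kwargs = {"transform": transform}

try:
    json.dumps(format_kwargs)
    serializable = True
except TypeError:
    # "Object of type function is not JSON serializable"
    serializable = False
```

A commonly suggested workaround (an assumption about this case, not verified against the notebook) is to clear the format with `train_dataset.with_format(None)` before calling `save_to_disk`.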
open
2023-12-23T15:18:27Z
2023-12-24T09:40:30Z
https://github.com/huggingface/datasets/issues/6530
[]
kopyl
1
miguelgrinberg/Flask-Migrate
flask
353
Getting error `Error: Could not import "dms.database"` while doing `flask db init`
I'm trying to perform the db migration but am getting the error ```Error: Could not import "dms.database``` when I run the command ```flask db init```. Note that the configuration values of this project are stored in the ```__init__.py``` inside the ```\dms``` directory, whereas the ```app.run()``` line of code is in the ```run.py``` file, which is outside the ```\dms``` directory. Please find the directory tree for reference:
```
|   requirements.txt
|   run.py
|
\---dms
    |   api_file.py
    |   email.py
    |   logout.py
    |   security.py
    |   __init__.py
    |
    +---models
    |   |   __init__.py
    |   |
    |   +---donations
    |   |       ChequeDonationsModel.py
    |   |       DonationsModel.py
    |   |       KindDonationsModel.py
    |   |       ModesModel.py
    |   |
    |   +---donors
    |   |       CountryModel.py
    |   |       DonorsModel.py
    |   |       ReferencesModel.py
    |   |       StatesModel.py
    |   |       __init__.py
    |   |
    |   \---users
    |           CredentialsModel.py
    |           ProjectsModel.py
    |           RightsModel.py
    |           RolesModel.py
    |           TypesModel.py
    |           UsersModel.py
    |           __init__.py
    |
    +---resources
    |   |   __init__.py
    |   |
    |   +---donations
    |   |       Donations.py
    |   |       KindDonations.py
    |   |       Modes.py
    |   |
    |   +---donors
    |   |       Country.py
    |   |       Donors.py
    |   |       References.py
    |   |       States.py
    |   |       __init__.py
    |   |
    |   \---users
    |           Projects.py
    |           Rights.py
    |           Roles.py
    |           Types.py
    |           User.py
    |           __init__.py
    |
    +---static
    \---templates
```
I tried executing the ```flask db init``` command from inside the ```\dms``` dir and from outside the ```\dms``` as well. Currently, I am using a MySQL database on localhost.

The ```run.py``` code is as follows:
```
from dms import app, api, db, jwt

# Third Party Library Imports
from flask import jsonify
from flask_jwt_extended import JWTManager

# Imports from user related Resources
from dms.resources.users.Projects import Projects, SingleProject
from dms.resources.users.Rights import Rights, SingleRight
from dms.resources.users.Types import Types, SingleType
from dms.resources.users.Roles import Roles, SingleRole
from dms.resources.users.User import (
    Users,
    SingleUser,
    UserLogin,
    UserCredentials,
    TokenRefresh,
    UserLogout,
)

# Imports from donor related Resources
from dms.resources.donors.Donors import Donors, SingleDonor
from dms.resources.donors.References import Reference, SingleReference
from dms.resources.donors.States import State, SingleState
from dms.resources.donors.Country import Country, SingleCountry

# Imports from donations related Resources
from dms.resources.donations.Donations import Donation, SingleDonation
from dms.resources.donations.KindDonations import KindDonations, SingleKindDonation
from dms.resources.donations.Modes import Modes, SingleMode

from dms.models.users.UsersModel import UsersModel
from dms.logout import revoked_store

jwt = JWTManager(app)


# Used to add some more values and functionalities to the existing JWT token,
# like admin and user access
@jwt.user_claims_loader
def add_claims_to_jwt(identity):
    # TODO: Instead of 1, write the query to get id where user name is Vaishali Modak
    if identity == 8:
        return {"is_admin": True}
    return {"is_admin": False}


# To be used to check whether a token is logged out or not
@jwt.token_in_blacklist_loader
def check_token_in_logout(decrypted_token):
    jti = decrypted_token['jti']
    entry = revoked_store.get(jti)
    if entry is None:
        return True
    return entry == 'true'


# When JWT token sent by user to server is expired (a JWT token expires after 5 minutes)
@jwt.expired_token_loader
def expired_token_callback():
    return jsonify({"message": "The token has expired", "error": "token_expired"}), 401


# When the user does not send any token to the server
@jwt.unauthorized_loader
def no_token_callback():
    return jsonify({"message": "No token provided", "error": "no_token_received"})


# When the server needs a fresh token
@jwt.needs_fresh_token_loader
def no_fresh_token_callback():
    return jsonify({"message": "Send a fresh token", "error": "fresh_token_required"})


# When the server gets a revoked token from user, used in case of logout
@jwt.revoked_token_loader
def revoked_token_callback():
    return (
        jsonify(
            {
                "message": "You have been logged out from the system",
                "error": "revoked_token",
            }
        ),
        401,
    )


api.add_resource(Users, "/users")
api.add_resource(SingleUser, "/users/<int:_id>", "/users")
api.add_resource(UserCredentials, "/change%password")
api.add_resource(Projects, "/projects")
api.add_resource(SingleProject, "/projects/<int:_id>", "/projects")
api.add_resource(Roles, "/roles")
api.add_resource(SingleRole, "/roles/<int:_id>", "/roles")
api.add_resource(Rights, "/rights")
api.add_resource(SingleRight, "/rights/<int:_id>", "/rights")
api.add_resource(Types, "/types")
api.add_resource(SingleType, "/types/<int:_id>", "/types")
api.add_resource(Country, "/country")
api.add_resource(SingleCountry, "/country/<int:_id>", "/country")
api.add_resource(Reference, "/references")
api.add_resource(SingleReference, "/references/<int:_id>", "/references")
api.add_resource(State, "/states")
api.add_resource(SingleState, "/states/<int:_id>", "/states")
api.add_resource(Donors, "/donors")
api.add_resource(SingleDonor, "/donors/<int:_id>", "/donors")
api.add_resource(Donation, "/donations")
api.add_resource(SingleDonation, "/donations/<int:_id>", "/donations")
api.add_resource(KindDonations, "/kind_donations")
api.add_resource(SingleKindDonation, "/kind_donations/<int:id>")
api.add_resource(Modes, "/modes")
api.add_resource(SingleMode, "/modes/<int:_id>", "/modes")
api.add_resource(UserLogin, "/login")
api.add_resource(TokenRefresh, "/refresh")
api.add_resource(UserLogout, "/logout")

if __name__ == "__main__":
    db.init_app(app)
    app.run()
```
whereas the ```/dms/__init__.py``` code is as follows:
```
from datetime import timedelta

from flask import Flask
from flask_restful import Api
from flask_sqlalchemy import SQLAlchemy
from flask_jwt_extended import JWTManager
from flask_migrate import Migrate

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "mysql://root:@localhost/dms"
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
# This line enables the Flask app to identify errors and exceptions
# related to FlaskJWT and then report them accordingly
app.config["PROPAGATE_EXCEPTIONS"] = True

# Authentication related configuration values
app.secret_key = "aniket"
app.config["JWT_SECRET_KEY"] = "aniket"
app.config["JWT_AUTH_URL_RULE"] = "/login"
ACCESS_EXPIRES = timedelta(minutes=15)
REFRESH_EXPIRES = timedelta(days=30)
app.config['JWT_ACCESS_TOKEN_EXPIRES'] = ACCESS_EXPIRES
app.config['JWT_REFRESH_TOKEN_EXPIRES'] = REFRESH_EXPIRES
app.config["JWT_BLACKLIST_ENABLE"] = True
app.config["JWT_BLACKLIST_TOKEN_CHECKS"] = ["access", "refresh"]

# Linkages of the functionalities with the main flask app
db = SQLAlchemy(app)
api = Api(app)
migrate = Migrate(app, db)


def create_db():
    db.create_all()
    print("DB Created Successfully")


@app.before_first_request
def db_creation_command():
    create_db()


jwt = JWTManager(app)
```
Where am I actually going wrong?
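A hedged guess at the cause, based only on the tree above: `flask db init` has to import the application itself, so `FLASK_APP` should point at `run.py`, and the command should run from the project root (the directory containing `run.py` and the `dms/` package) so that `dms` is importable. A sketch:

```shell
# run from the project root that contains run.py and the dms/ package
export FLASK_APP=run.py
# flask db init   # would now import the app via run.py (not run here)
echo "$FLASK_APP"
```

If `FLASK_APP` was previously set to something like `dms.database`, that would explain the exact import error quoted above.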
closed
2020-06-24T03:41:49Z
2020-06-28T09:57:09Z
https://github.com/miguelgrinberg/Flask-Migrate/issues/353
[ "question" ]
aniketsnv-1997
13
dask/dask
scikit-learn
11041
Weird behavior of `rename`
**Describe the issue**:

In our project, we rename columns in both pandas DataFrames and dask DataFrames. However, dask DataFrames behave differently if a dict is passed to `rename` in which a value corresponds to an existing column name in the DataFrame. Is this intended?

**Minimal Complete Verifiable Example**:

```python
import dask.dataframe as ddf
import pandas as pd

columns_ab = {"a": "b"}
columns_bb = {"b": "b"}
columns_ba = {"b": "a"}

df = pd.DataFrame({"a": [15.0]})

# Works as intended
df.rename(columns=columns_ab)["b"]
df.rename(columns=columns_bb)["a"]
df.rename(columns=columns_ba)["a"]

# KeyError in the last line
df_dff = ddf.from_pandas(df.copy())
df_dff.rename(columns=columns_ab)["b"].compute()
df_dff.rename(columns=columns_bb)["a"].compute()
df_dff.rename(columns=columns_ba)["a"].compute()  # fails with KeyError: 'a'
```

**Anything else we need to know?**:

**Environment**:

- Dask version: 2024.4.0
- Python version: 3.9.19
- Operating System: MacOS
- Install method (conda, pip, source): conda
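For context, pandas' documented `rename` semantics are that labels absent from the frame are silently ignored (unless `errors="raise"` is passed), so `{"b": "a"}` on a frame whose only column is `a` is a no-op. A quick pandas-only check of the behavior dask would be expected to match:

```python
import pandas as pd

df = pd.DataFrame({"a": [15.0]})

# "b" does not exist in df, so nothing is renamed and column "a" survives
out = df.rename(columns={"b": "a"})
```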
closed
2024-04-09T15:06:39Z
2024-04-10T06:52:56Z
https://github.com/dask/dask/issues/11041
[ "needs triage" ]
dualtob
3
pytorch/pytorch
deep-learning
149570
torch.compile fails in kokoro (both fullgraph=True and False)
### 🐛 Describe the bug

Repro:
```
conda create -y -n user-empathy python=3.11
conda activate user-empathy
pip install -q kokoro>=0.9.2 soundfile
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu126
```
```
text = '''
PyTorch is an open-source machine learning library developed by Facebook's AI Research Lab (FAIR), providing a dynamic computation graph, autograd system, and modular architecture that allows for more flexibility and ease of use compared to other popular deep learning frameworks like TensorFlow. It features a Pythonic API, native support for NVIDIA GPUs, and is widely used in computer vision tasks such as image classification, object detection, segmentation, and generation, as well as natural language processing (NLP) tasks like language modeling, text classification, sentiment analysis, and machine translation. PyTorch's advantages include ease of use, flexibility, fast prototyping, and a large community, making it an ideal choice for researchers and developers working on a wide range of applications, from speech recognition and reinforcement learning to robotics and autonomous systems. With its extensive documentation, tutorials, and pre-built models, PyTorch is an excellent choice for anyone looking to get started with deep learning or take their existing projects to the next level, and can be easily integrated into various workflows, including research, development, and production environments.
'''
from kokoro import KPipeline
from kokoro import KModel
import soundfile as sf
import torch
import time

torch._dynamo.config.capture_scalar_outputs = True

device = "cuda"
model = KModel().to(device).eval()
pipeline = KPipeline(lang_code='a', model=model, device=device)
pack = pipeline.load_voice('af_heart')

# eager mode
@torch.compile(fullgraph=False)  # or fullgraph=True
def forward_gpu(ps, ref_s):
    return model(ps, ref_s, 1)

def run():
    times = []
    for _ in range(10):
        audios = []
        generator = pipeline(text, voice='af_heart')
        start = time.time()
        for (_, ps, _) in generator:
            ref_s = pack[len(ps) - 1]
            audio = forward_gpu(ps, ref_s)
            audios.append(audio)
        end = time.time()
        times.append(end - start)
    print(times)
    print(sum(times[2:]) / len(times[2:]))
    # for i, audio in enumerate(audios):
    #     # print(i, gs, ps)
    #     sf.write(f'{i}.wav', audio, 24000)
    #     print("done")

run()
```
Error msg for `fullgraph=True`:
```
WARNING: Defaulting repo_id to hexgrad/Kokoro-82M. Pass repo_id='hexgrad/Kokoro-82M' to suppress this warning.
/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/nn/modules/rnn.py:123: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
  warnings.warn(
/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/nn/utils/weight_norm.py:143: FutureWarning: `torch.nn.utils.weight_norm` is deprecated in favor of `torch.nn.utils.parametrizations.weight_norm`.
  WeightNorm.apply(module, name, dim)
WARNING: Defaulting repo_id to hexgrad/Kokoro-82M. Pass repo_id='hexgrad/Kokoro-82M' to suppress this warning.
W0319 13:42:00.363000 419191 .conda/envs/user-empathy/lib/python3.11/site-packages/torch/fx/experimental/symbolic_shapes.py:6679] [0/0] failed during evaluate_expr(Ne(Mod(310*Max(1, u0), 8), 0), hint=None, size_oblivious=False, forcing_spec=False
E0319 13:42:00.364000 419191 .conda/envs/user-empathy/lib/python3.11/site-packages/torch/fx/experimental/recording.py:299] [0/0] failed while running evaluate_expr(*(Ne(Mod(310*Max(1, u0), 8), 0), None, False, False), **{})
Traceback (most recent call last):
  File "/home/shangdiy/test.py", line 46, in <module>
    run_eager()
  File "/home/shangdiy/test.py", line 34, in run_eager
    audio = forward_gpu(ps, ref_s)
            ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 659, in _fn
    raise e.with_traceback(None) from None
torch._dynamo.exc.UserError: Could not guard on data-dependent expression Ne(Mod(310*Max(1, u0), 8), 0) (unhinted: Ne(Mod(310*Max(1, u0), 8), 0)). (Size-like symbols: u0)

Caused by: attention_output = torch.nn.functional.scaled_dot_product_attention( # transformers/models/albert/modeling_albert.py:404 in forward (_dynamo/utils.py:3285 in run_node)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing

User Stack (most recent call last): (snipped, see stack below for prefix)
  File "/home/shangdiy/test.py", line 23, in forward_gpu
    return model(ps, ref_s, 1)
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/model.py", line 133, in forward
    audio, pred_dur = self.forward_with_tokens(input_ids, ref_s, speed)
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/model.py", line 102, in forward_with_tokens
    bert_dur = self.bert(input_ids, attention_mask=(~text_mask).int())
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/modules.py", line 182, in forward
    outputs = super().forward(*args, **kwargs)
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 804, in forward
    encoder_outputs = self.encoder(
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 535, in forward
    layer_group_output = self.albert_layer_groups[group_idx](
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 487, in forward
    layer_output = albert_layer(hidden_states, attention_mask, head_mask[layer_index], output_attentions)
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 450, in forward
    attention_output = self.attention(hidden_states, attention_mask, head_mask, output_attentions)
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 404, in forward
    attention_output = torch.nn.functional.scaled_dot_product_attention(

For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#constrain-as-size-example

from user code:
  File "/home/shangdiy/test.py", line 23, in forward_gpu
    return model(ps, ref_s, 1)
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/model.py", line 133, in forward
    audio, pred_dur = self.forward_with_tokens(input_ids, ref_s, speed)
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/model.py", line 102, in forward_with_tokens
    bert_dur = self.bert(input_ids, attention_mask=(~text_mask).int())
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/modules.py", line 182, in forward
    outputs = super().forward(*args, **kwargs)
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 804, in forward
    encoder_outputs = self.encoder(
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 535, in forward
    layer_group_output = self.albert_layer_groups[group_idx](
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 487, in forward
    layer_output = albert_layer(hidden_states, attention_mask, head_mask[layer_index], output_attentions)
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 450, in forward
    attention_output = self.attention(hidden_states, attention_mask, head_mask, output_attentions)
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/transformers/models/albert/modeling_albert.py", line 404, in forward
    attention_output = torch.nn.functional.scaled_dot_product_attention(

Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch).
For even more developer context, set TORCH_LOGS="+dynamo"
```
Error msg for `fullgraph=False`:
```
Traceback (most recent call last):
  File "/home/shangdiy/test.py", line 46, in <module>
    run()
  File "/home/shangdiy/test.py", line 34, in run
    audio = forward_gpu(ps, ref_s)
            ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/test.py", line 23, in forward_gpu
    return model(ps, ref_s, 1)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/model.py", line 133, in forward
    audio, pred_dur = self.forward_with_tokens(input_ids, ref_s, speed)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/kokoro/model.py", line 86, in forward_with_tokens
    @torch.no_grad()
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1201, in forward
    return compiled_fn(full_args)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 328, in runtime_wrapper
    all_outs = call_func_at_runtime_with_args(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in call_func_at_runtime_with_args
    out = normalize_as_list(f(args))
          ^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line 495, in wrapper
    return compiled_fn(runtime_args)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/output_code.py", line 553, in __call__
    return self.current_callable(inputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/tmp/torchinductor_shangdiy/ee/cee3vxf5cyozzwpjjizc3knv674q2zfikzhti67aecnxiwn3dlpy.py", line 206, in call
    triton_poi_fused__to_copy_add_gt_2.run(buf4, buf0, buf5, u0, stream=stream0)
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 921, in run
    self.autotune_to_one_config(*args, **kwargs)
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 775, in autotune_to_one_config
    timings = self.benchmark_all_configs(*args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 749, in benchmark_all_configs
    timings = {
              ^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 750, in <dictcomp>
    launcher: self.bench(launcher, *args, **kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 627, in bench
    return benchmarker.benchmark_gpu(kernel_call, rep=40)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/benchmarking.py", line 39, in wrapper
    return fn(self, *args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/benchmarking.py", line 243, in benchmark_gpu
    _callable()
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/torch/_inductor/runtime/triton_heuristics.py", line 612, in kernel_call
    launcher(
  File "<string>", line 5, in launcher
  File "/home/shangdiy/.conda/envs/user-empathy/lib/python3.11/site-packages/triton/backends/nvidia/driver.py", line 529, in __call__
    self.launch(gridX, gridY, gridZ, stream, function, self.launch_cooperative_grid, global_scratch, *args)
ValueError: Pointer argument (at 0) cannot be accessed from Triton (cpu tensor?)
```

cc @chauhang @penguinwu @ezyang @bobrenjc93 @anijain2305

### Versions

pytorch nightly
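The `fullgraph=True` failure is a guard on the data-dependent expression `Ne(Mod(310*Max(1, u0), 8), 0)`, where `u0` is the unbacked token count. One generic mitigation (my assumption, not a fix proposed in this issue) is to pad the data-dependent length up to a multiple of 8 so the modulus is decidable without a runtime hint:

```python
def pad_to_multiple(n: int, m: int = 8) -> int:
    # round n up to the next multiple of m, so that n % m == 0 holds
    # statically and a guard like Ne(Mod(n, 8), 0) never needs a hint
    return -(-n // m) * m
```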
open
2025-03-19T22:38:08Z
2025-03-24T10:34:09Z
https://github.com/pytorch/pytorch/issues/149570
[ "triaged", "oncall: pt2", "module: dynamic shapes", "empathy-day" ]
yushangdi
0
cvat-ai/cvat
tensorflow
8858
Prometheus metric server side monitoring
Hello, does the cvat backend offer server monitoring metrics related to error/failed count, success count, etc. (a Prometheus endpoint, if one exists, to scrape them)?
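For reference, if the backend did expose a Prometheus endpoint (this issue does not confirm that one exists; the service name, port, and path below are pure placeholders), scraping it would be a standard job entry:

```yaml
# hypothetical scrape job; endpoint, port, and path are assumptions
scrape_configs:
  - job_name: "cvat-backend"
    metrics_path: /metrics
    static_configs:
      - targets: ["cvat-backend:8080"]
```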
closed
2024-12-22T16:45:25Z
2025-01-15T16:36:07Z
https://github.com/cvat-ai/cvat/issues/8858
[ "enhancement" ]
MohamedKHALILRouissi
1
huggingface/pytorch-image-models
pytorch
2046
[BUG] reg_token not working for ViT models
**Describe the bug**
If I try to create a 'vit_base_patch16_384' model, for example, and set the argument `reg_token=4` (to add register tokens to the model, following this paper: https://arxiv.org/pdf/2309.16588.pdf), the model fails to instantiate, resulting in a size mismatch in timm/layers/pos_embed.py line 45 (see below). I hope I'm not missing something; I understand if this is not a supported feature yet.

![image](https://github.com/huggingface/pytorch-image-models/assets/2816214/0521ba1b-e4c4-4e1c-82e7-08b1cca94842)

**To Reproduce**
Steps to reproduce the behavior:
1. Create a vit_base_patch16_384 and pass in the function argument reg_token=4.

**Expected behavior**
I would expect the model to be built/instantiated correctly.
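The size mismatch is consistent with simple token-count arithmetic: the pretrained positional embedding covers the class token plus patch tokens, while adding register tokens lengthens the input sequence. A back-of-the-envelope check (timm's actual pos-embed resampling logic is not reproduced here):

```python
img_size, patch_size = 384, 16
num_patches = (img_size // patch_size) ** 2   # 576 patch tokens for vit_base_patch16_384

pretrained_len = 1 + num_patches              # class token + patches in the checkpoint
with_registers = 1 + 4 + num_patches          # reg_token=4 extends the sequence

mismatch = with_registers != pretrained_len   # hence the resize error in pos_embed.py
```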
closed
2023-11-30T21:51:35Z
2023-11-30T22:18:48Z
https://github.com/huggingface/pytorch-image-models/issues/2046
[ "bug" ]
Tgaaly
3
graphdeco-inria/gaussian-splatting
computer-vision
915
SIBR remote viewer
**Problem Description:**
Hey, I have some issues with the remote viewer. I am training the model on a remote machine over an SSH connection and want to view it on a different machine, so I want to use the provided remote viewer. I am forwarding the traffic from my viewing host to the listening port on the host running the training. I've tried, to no avail:

`ssh -L 6009:127.0.0.1:6009 <username>@<training-host-ip>`

I've also selected different ports that I know are open, but no success. I am not very experienced with port forwarding, so any help is welcome. Thank you in advance.

P.S. Is the remote viewer still usable post-training? The documentation does not specify how to configure the remote viewer after training is done.

**System details**
_Forwarding_: Ubuntu 22.04.4, RTX 4090
_Receiving_: Windows 11, RTX 4060
open
2024-07-30T16:38:44Z
2024-07-30T16:38:44Z
https://github.com/graphdeco-inria/gaussian-splatting/issues/915
[]
wjmenu
0
modin-project/modin
pandas
7038
`test_series.py::test___getitem__` failed due to different exception messages
Found in https://github.com/modin-project/modin/pull/6954
open
2024-03-07T15:29:31Z
2024-03-07T15:36:25Z
https://github.com/modin-project/modin/issues/7038
[ "bug 🦗", "pandas concordance 🐼", "P2" ]
anmyachev
0
ExpDev07/coronavirus-tracker-api
fastapi
164
Inconsistent country results
Hi, thanks for doing this. I was looking for this exact type of project for a bot I was asked to write. :) So far, everything has worked pretty well. There are a couple of problems with the consistency of the data that I've noticed, though.

1) Countries with provinces/states/sub-areas don't have their own ID/timeline. This was changed for the US query yesterday; the US now has a timeline and ID, but no longer has states.

2) The structure of sub-states/provinces for countries seems inconsistent. For example, when I request the country page for Canada, it lists all the provinces, so I can easily query the ID from one of the provinces directly from that JSON and get an individual provincial timeline. However, the country page for the US now doesn't show all the states as it did before, so I can't just get the USA JSON if I want to query a state. Am I understanding correctly that I'd have to pull the entire `/locations`? I'd like to avoid making large queries if possible, and this seems inconsistent.

Thanks!
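Until the shapes are consistent, one client-side option is a single `/locations` pull indexed locally, so later calls can hit individual timelines by ID. The payload shape below is a guess from the discussion (field names are assumptions, not the API's documented schema):

```python
# hypothetical /locations payload, trimmed to the fields discussed above
payload = {
    "locations": [
        {"id": 0, "country": "Canada", "province": "Ontario"},
        {"id": 1, "country": "Canada", "province": "Quebec"},
        {"id": 2, "country": "US", "province": "California"},
    ]
}

def provinces_for(payload, country):
    """Index province -> location id for one country, so a follow-up
    query can target a single provincial timeline by id."""
    return {
        loc["province"]: loc["id"]
        for loc in payload["locations"]
        if loc["country"] == country
    }

canada = provinces_for(payload, "Canada")
```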
closed
2020-03-24T15:08:43Z
2020-03-24T15:31:03Z
https://github.com/ExpDev07/coronavirus-tracker-api/issues/164
[]
lovelaced
2
mckinsey/vizro
plotly
937
data_cache works OK?
### Question

Hey vizro team! Should this code work as expected? In my setup it doesn't refresh my grid:

```py
data_manager['stocks_realtime'] = get_stocks()
data_manager.timeout = 300

def get_stocks():
    df = run_some_3rd_party_api_request()
    return df

dashboard = vm.Dashboard(
    pages=[
        vm.Page(
            layout=vm.Layout(grid=[[0], [0], [0], [0], [0], [0], [1]]),
            title="Stock",
            id='stocks',
            components=[
                vm.AgGrid(
                    figure=dash_ag_grid(
                        data_frame='stocks_realtime',
                    )
                ),
                vm.Button(
                    text="Export",
                    actions=[vm.Action(function=export_data())],
                ),
            ],
        )
    ]
)

app = Vizro().build(dashboard)

if __name__ == "__main__":
    app.run()
```

Maybe I should call some specific Flask method, like `app.config.from_mapping(config)` and `cache = Cache(app)` (see https://flask-caching.readthedocs.io/en/latest/)?

Thanks very much for your great job here at vizro!

### Code/Examples

_No response_

### Which package?

None

### Code of Conduct

- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md).
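One thing worth checking (a guess, not a confirmed diagnosis): `data_manager['stocks_realtime'] = get_stocks()` stores the *result* of a single API call, so there is nothing left for a cache timeout to refresh; refreshable data generally requires registering the callable itself. A toy stand-in for a data manager (not vizro's real implementation) shows the difference:

```python
class ToyDataManager:
    """Minimal stand-in; vizro's real data_manager differs, but the
    static-vs-dynamic distinction sketched here is the same idea."""

    def __init__(self):
        self._sources = {}

    def __setitem__(self, name, source):
        self._sources[name] = source

    def load(self, name):
        src = self._sources[name]
        # callables are re-run on every load (refreshable);
        # plain values are returned as-is (frozen at registration time)
        return src() if callable(src) else src


calls = []

def get_stocks():
    calls.append(1)
    return "fresh dataframe"

dm = ToyDataManager()
dm["static"] = get_stocks()    # evaluated once, right here
dm["dynamic"] = get_stocks     # re-evaluated on each load
dm.load("dynamic")
dm.load("dynamic")
```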
closed
2025-01-02T08:29:09Z
2025-01-06T13:12:04Z
https://github.com/mckinsey/vizro/issues/937
[ "General Question :question:" ]
vks2
4
jadore801120/attention-is-all-you-need-pytorch
nlp
114
A cause of "CUDA out of memory" problem
I ran into the `RuntimeError: CUDA out of memory` problem, and I found a solution in [GPU is not utilized while occur RuntimeError: cuda runtime error: out of memory at](https://discuss.pytorch.org/t/gpu-is-not-utilized-while-occur-runtimeerror-cuda-runtime-error-out-of-memory-at/34780).

> albanD said:
> Hi,
> TensorFlow has the bad habit of taking all the memory on the device and preventing anything from happening on it, as anything will OOM.
> There was a small bug in PyTorch that was initializing the CUDA runtime on device 0 when printing; that has been fixed.
> A simple workaround is to use CUDA_VISIBLE_DEVICES=2. This will hide all devices but the one you specify and will make sure you never use other devices.
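The quoted workaround can be sketched without a GPU: setting `CUDA_VISIBLE_DEVICES` in the environment of the launched process is all it does, and the CUDA libraries in that process then enumerate only the listed device (the index `2` is just the example from the quote):

```python
import os
import subprocess
import sys

# launch a child process with only device 2 visible; CUDA code in the
# child would then see a single GPU and cannot allocate on the others
env = {**os.environ, "CUDA_VISIBLE_DEVICES": "2"}
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"],
    env=env,
    capture_output=True,
    text=True,
).stdout.strip()
```

The shell equivalent is simply prefixing the command: `CUDA_VISIBLE_DEVICES=2 python train.py`.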
closed
2019-08-09T06:05:38Z
2019-12-08T09:42:17Z
https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/114
[]
whxf
1
koxudaxi/fastapi-code-generator
pydantic
467
snake_case_arguments creates unnecessary Suffixes for methods in main.py
When I try to create a FastAPI app from my OpenAPI spec, the tool has issues with query parameters that are arrays. Instead of referencing the parameters correctly as arguments in the function signatures, a suffix gets added to the class name — for example, instead of `PetIds` there would be `PetIds3`. The problem can be reproduced with this example API:

```
openapi: "3.0.0"
info:
  version: 1.0.0
  title: Swagger Petstore
  license:
    name: MIT
servers:
  - url: http://petstore.swagger.io/v1
paths:
  /pets:
    get:
      summary: List all pets
      operationId: listPets
      tags:
        - pets
      parameters:
        - name: limit
          in: query
          description: How many items to return at one time (max 100)
          required: false
          schema:
            type: integer
            format: int32
        - name: petIds
          in: query
          description: Filter pets by these pet IDs
          required: false
          schema:
            type: array
            items:
              type: integer
              format: int64
      responses:
        '200':
          description: A paged array of pets
          headers:
            x-next:
              description: A link to the next page of responses
              schema:
                type: string
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Pets"
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
      x-amazon-apigateway-integration:
        uri:
          Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${PythonVersionFunction.Arn}/invocations
        passthroughBehavior: when_no_templates
        httpMethod: POST
        type: aws_proxy
    post:
      summary: Create a pet
      operationId: createPets
      tags:
        - pets
      responses:
        '201':
          description: Null response
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
      x-amazon-apigateway-integration:
        uri:
          Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${PythonVersionFunction.Arn}/invocations
        passthroughBehavior: when_no_templates
        httpMethod: POST
        type: aws_proxy
  /pets/{petId}:
    get:
      summary: Info for a specific pet
      operationId: showPetById
      tags:
        - pets
      parameters:
        - name: petId
          in: path
          required: true
          description: The id of the pet to retrieve
          schema:
            type: string
      responses:
        '200':
          description: Expected response to a valid request
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Pets"
        default:
          description: unexpected error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
      x-amazon-apigateway-integration:
        uri:
          Fn::Sub: arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${PythonVersionFunction.Arn}/invocations
        passthroughBehavior: when_no_templates
        httpMethod: POST
        type: aws_proxy
components:
  schemas:
    Pet:
      required:
        - id
        - name
      properties:
        id:
          type: integer
          format: int64
        name:
          type: string
        tag:
          type: string
    Pets:
      type: array
      description: list of pet
      items:
        $ref: "#/components/schemas/Pet"
    Error:
      required:
        - code
        - message
      properties:
        code:
          type: integer
          format: int32
        message:
          type: string
```

…the tool generates a Pydantic RootModel class (e.g., `class PetIds(RootModel[List[int]]):`) but references it incorrectly in `main.py`. The parameter name gets changed to include a number (e.g., `PetIds1`), causing a mismatch. For example, this code would be the output:

```
@app.get(
    '/pets', response_model=Pets, responses={'default': {'model': Error}}, tags=['pets']
)
def list_pets(
    limit: Optional[int] = None,
    pet_ids: Optional[PetIds1] = Query(None, alias='petIds'),
) -> Union[Pets, Error]:
    """
    List all pets
    """
    pass
```

I have used this command to call the code generator:

```
fastapi-codegen -i petstore-api.yaml \
    -o out/petstore-test \
    -p 3.11 \
    --output-model-type pydantic_v2.BaseModel
```
open
2025-01-13T09:25:25Z
2025-01-13T09:25:25Z
https://github.com/koxudaxi/fastapi-code-generator/issues/467
[]
MichaelNowakMining
0
Nekmo/amazon-dash
dash
58
Improve setup.py compatibility
- [x] Remove pip requirement - [x] Python2 requirements on setup.py function
closed
2018-07-16T22:02:04Z
2018-07-17T21:31:22Z
https://github.com/Nekmo/amazon-dash/issues/58
[ "Setup" ]
Nekmo
0
Kanaries/pygwalker
plotly
545
pygwalker cannot be rendered
![( O4{0{APDJ64}58 M B4ZH](https://github.com/Kanaries/pygwalker/assets/96034897/29c2c492-7d24-4d01-beab-b6b073018cbd) ![KZD(%E8C`@A(R LV4CZ7KL3](https://github.com/Kanaries/pygwalker/assets/96034897/131d016e-b7d1-4b9a-988d-46ea89d38b1d)
closed
2024-05-12T03:00:32Z
2024-05-13T11:03:19Z
https://github.com/Kanaries/pygwalker/issues/545
[ "bug" ]
3dsf0sge
7
AirtestProject/Airtest
automation
1,180
Clipboard set and get functionality
(Please fill in as much of the information prompted below as possible — it helps us locate and resolve the issue quickly, thanks for your cooperation. Otherwise the issue will be closed directly.)

**(Important! Issue category)**
* AirtestIDE development/test environment issues -> https://github.com/AirtestProject/AirtestIDE/issues

**Describe the bug**

```
/venv/lib/python3.9/site-packages/airtest/core/android/static/adb/mac/adb -s 1A051FDEE00AEQ shell app_process -Djava.class.path=/data/app/~~WTAurYPHzHfjGzcyl0XUfg==/com.netease.nie.yosemite-siGNYHvhDCAvshw4PIw61w==/base.apk / com.netease.nie.yosemite.control.Control --DEVICE_OP clipboard_get
Could not invoke method
java.lang.NoSuchMethodException: android.content.IClipboard$Stub$Proxy.getPrimaryClip [class java.lang.String, class java.lang.String, int]
```

**Steps to reproduce**
The clipboard already contains content; call get_clipboard()

**Expected behavior**
Retrieve the clipboard content

**airtest version:** `1.3.2`

- Device model: Samsung
- System: Android 14
- Host platform: macOS, Intel chip
closed
2023-12-15T11:52:22Z
2024-05-24T09:07:45Z
https://github.com/AirtestProject/Airtest/issues/1180
[ "bug" ]
Ymars1990
1
python-restx/flask-restx
api
477
Suggestion: fix for `abort()` pylance typing error
### **Code**

In `flask_restx/errors.py` I recommend we declare a typehint for the `code` parameter in the `abort` function like so:

```python
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from typing import Union

import flask
from werkzeug.exceptions import HTTPException

from ._http import HTTPStatus

__all__ = (
    "abort",
    "RestError",
    "ValidationError",
    "SpecsError",
)


def abort(
    code: Union[int, HTTPStatus] = HTTPStatus.INTERNAL_SERVER_ERROR,
    message=None,
    **kwargs
):
    ...
```

### **Repro Steps** (if applicable)

1. Install `flask_restx` in your project
2. Import `from flask_restx import abort`
3. If you have [pylance](https://marketplace.visualstudio.com/items?itemName=ms-python.vscode-pylance) enabled, you will get the following error:

```
Argument of type "Literal[500]" cannot be assigned to parameter "code" of type "HTTPStatus" in function "abort"
  "Literal[500]" is incompatible with "HTTPStatus"
```

### **Expected Behavior**

You should be able to pass in an error code integer, i.e. `500`, or an `HTTPStatus` variable without issue.

### **Actual Behavior**

Pylance will complain:

```
Argument of type "Literal[500]" cannot be assigned to parameter "code" of type "HTTPStatus" in function "abort"
  "Literal[500]" is incompatible with "HTTPStatus"
```

### **Error Messages/Stack Trace**

```
Argument of type "Literal[500]" cannot be assigned to parameter "code" of type "HTTPStatus" in function "abort"
  "Literal[500]" is incompatible with "HTTPStatus"
```

### **Environment**
- Python version: 3.8
- Flask version: 1.1.1
- Flask-RESTX version: 0.5.1
- Other installed Flask extensions: None

### **Additional Context**

I believe this fix would help the increasingly popular usage of type hints in Python. If I can become a contributor, I can work on getting flask-restx to become more typehint friendly. Thanks!
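To illustrate the proposed signature (a sketch with a stub body — not the real flask-restx implementation), `Union[int, HTTPStatus]` lets a checker accept both call styles:

```python
from http import HTTPStatus
from typing import Union


def abort(code: Union[int, HTTPStatus] = HTTPStatus.INTERNAL_SERVER_ERROR) -> int:
    # Stub standing in for flask_restx.errors.abort: just normalize the
    # code so both plain ints and HTTPStatus members are accepted.
    return int(code)


# Both of these type-check and behave identically under the proposed hint,
# because HTTPStatus is an IntEnum:
assert abort(500) == 500
assert abort(HTTPStatus.INTERNAL_SERVER_ERROR) == 500
```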
open
2022-09-23T18:40:54Z
2022-09-23T18:45:25Z
https://github.com/python-restx/flask-restx/issues/477
[ "bug" ]
itsmostafa
0
pandas-dev/pandas
python
61,072
BUG: str.fullmatch behavior is not the same for object dtype and string[pyarrow] dtype
### Pandas version checks

- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.

### Reproducible Example

```python
import pandas

test_series = pandas.Series(['asdf', 'as'], dtype='string[pyarrow]')
regex = r'((as)|(as))'
regex2 = r'(as)|(as)'

test_series.str.fullmatch(regex)
# False
# True

test_series.str.fullmatch(regex2)
# True
# True

test_series2 = pandas.Series(['asdf', 'as'], dtype=str)

test_series2.str.fullmatch(regex)
# False
# True

test_series2.str.fullmatch(regex2)
# False
# True
```

### Issue Description

As the example shows, you can use the same regular expression with the `str.fullmatch` method for the str dtype and the string[pyarrow] dtype and get different results. This seems to stem from Apache Arrow not having a dedicated fullmatch or match function, so the regular expression has to be edited with "^" and "$" characters before being delivered to its search function. There might also be some special handling of the "|" operator in Python's fullmatch. Long story short, at least some regular expressions delivered to PyArrow need additional surrounding parentheses to get the same fullmatch results as with Python's fullmatch. I have submitted #61073 to try and address this.

### Expected Behavior

The second set of fullmatch results in the example shows the expected behavior. The `str.fullmatch` method should behave the same for either dtype.
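The anchoring pitfall can be reproduced with the stdlib `re` module alone (a sketch of the suspected mechanism, not pandas' actual code path): wrapping a top-level alternation in `^...$` without extra parentheses changes its meaning, because the anchors bind to the individual alternatives, not to the whole pattern:

```python
import re

pattern = r'(as)|(as)'  # top-level alternation, as in the report

# Python's fullmatch implicitly anchors the *whole* pattern:
assert re.fullmatch(pattern, 'as') is not None
assert re.fullmatch(pattern, 'asdf') is None

# Naively wrapping with ^...$ (a fullmatch emulation for an engine
# without one, such as PyArrow's) anchors only the outer alternatives:
# '^(as)' alone already matches 'asdf'.
assert re.search('^' + pattern + '$', 'asdf') is not None  # wrong result

# An extra non-capturing group restores fullmatch semantics:
assert re.search('^(?:' + pattern + ')$', 'asdf') is None
assert re.search('^(?:' + pattern + ')$', 'as') is not None
```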
### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 0691c5cf90477d3503834d983f69350f250a6ff7 python : 3.10.5 python-bits : 64 OS : Windows OS-release : 10 Version : 10.0.19045 machine : AMD64 processor : Intel64 Family 6 Model 158 Stepping 12, GenuineIntel byteorder : little LC_ALL : None LANG : en LOCALE : English_United States.1252 pandas : 2.2.3 numpy : 1.24.4 pytz : 2022.1 dateutil : 2.8.2 pip : 25.0.1 Cython : 3.0.11 sphinx : 5.1.1 IPython : 8.21.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : None blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : 1.1 hypothesis : None gcsfs : None jinja2 : None lxml.etree : 4.9.1 matplotlib : None numba : None numexpr : None odfpy : None openpyxl : 3.1.4 pandas_gbq : None psycopg2 : None pymysql : None pyarrow : 19.0.1 pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : 2.0.9 tables : None tabulate : 0.9.0 xarray : None xlrd : None xlsxwriter : 3.2.0 zstandard : None tzdata : 2024.1 qtpy : 2.4.1 pyqt5 : None </details>
open
2025-03-07T00:19:21Z
2025-03-20T21:26:16Z
https://github.com/pandas-dev/pandas/issues/61072
[ "Bug", "Strings", "Arrow" ]
ptth222
2
ultralytics/ultralytics
machine-learning
18,829
Yolo incompatible with Jetpack 6.2(Jetson Orin Nano Super)
### Search before asking

- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.

### Ultralytics YOLO Component

_No response_

### Bug

I just installed Jetpack 6.2 and ran `sudo pip3 install ultralytics`. It seems it uses libcudnn.so.8, not cuDNN 9.3.0.75. The issue might be PyTorch, as I didn't see any correct version for L4T 36.4.3, which is the release for Super performance: https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048

Any ideas? See the link below for details:
https://forums.developer.nvidia.com/t/yolo-incompatible-with-jetpack-6-2-jetson-orin-nano-super/321078

### Environment

```
Software part of jetson-stats 4.3.1 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Jetson Orin Nano Developer Kit - Jetpack 6.2 [L4T 36.4.3]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
 - P-Number: p3767-0005
 - Module: NVIDIA Jetson Orin Nano (Developer kit)
Platform:
 - Distribution: Ubuntu 22.04 Jammy Jellyfish
 - Release: 5.15.148-tegra
jtop:
 - Version: 4.3.1
 - Service: Active
Libraries:
 - CUDA: 12.6.68
 - cuDNN: 9.3.0.75
 - TensorRT: 10.3.0.30
 - VPI: 3.2.4
 - Vulkan: 1.3.204
 - OpenCV: 4.11.0 - with CUDA: YES
```

### Minimal Reproducible Example

```
sudo pip3 install ultralytics
```

### Additional

### Are you willing to submit a PR?

- [ ] Yes I'd like to help by submitting a PR!
closed
2025-01-22T22:21:29Z
2025-03-19T23:51:16Z
https://github.com/ultralytics/ultralytics/issues/18829
[ "bug", "dependencies", "embedded" ]
lida2003
8
tiangolo/uwsgi-nginx-flask-docker
flask
17
How could I fixed the dockerfile setting?
Yesterday I built a new Dockerfile with my Python project as usual, but when I started it up, the static files were all broken. After some time debugging, I found the static file path was not configured as usual. I use COPY during the Dockerfile build to change the nginx static file path, but now that did not work, and the other approaches I tried did not work either. Then I realized that something may have changed in entrypoint.sh — and indeed, you have moved the static file config to an ENV in the Dockerfile. So I fixed it by following the instructions. But I am worried that it may change again sometime in the future. So my question is: how can I get a fixed Dockerfile setup that does not auto-update with the repository? Thank you.
closed
2017-08-28T00:48:13Z
2017-08-29T01:44:19Z
https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/17
[]
aleonchen
3
MagicStack/asyncpg
asyncio
334
Maybe a typo in documentation
The documentation at https://magicstack.github.io/asyncpg/current/api/index.html#transactions says this about transactions:

```
tr = connection.transaction()
await tr.start()
try:
    ...
except:
    await tr.rollback()
    raise
finally:
    await tr.commit()
```

But I don't think that committing after a rollback is the right thing to do — you need to call either commit or rollback, not both. That's all, thanks :)
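The fix would presumably be to commit only on success, e.g. in an `else` clause instead of `finally`. A minimal sketch with a stub transaction (hypothetical names; asyncpg's real `Transaction` object is not used here) shows the intended control flow:

```python
import asyncio


class StubTransaction:
    """Records which terminal call was made, standing in for asyncpg's Transaction."""

    def __init__(self):
        self.outcome = None

    async def start(self):
        pass

    async def commit(self):
        self.outcome = "commit"

    async def rollback(self):
        self.outcome = "rollback"


async def run(tr, work):
    await tr.start()
    try:
        await work()
    except Exception:
        await tr.rollback()
        raise
    else:
        # Commit only when the body succeeded; with `finally:` (as in the
        # docs snippet) this line would also run after a rollback.
        await tr.commit()


async def ok():
    pass


async def boom():
    raise RuntimeError("fail")


good, bad = StubTransaction(), StubTransaction()
asyncio.run(run(good, ok))
try:
    asyncio.run(run(bad, boom))
except RuntimeError:
    pass

print(good.outcome, bad.outcome)  # commit rollback
```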
closed
2018-08-01T09:32:41Z
2018-08-08T21:41:33Z
https://github.com/MagicStack/asyncpg/issues/334
[]
creotiv
3
google-research/bert
nlp
406
How can I change vocab size for pretrained model?
Is there way to change (expand) vocab size for pretrained model?
open
2019-01-30T05:03:35Z
2022-10-04T19:03:38Z
https://github.com/google-research/bert/issues/406
[]
hahmyg
8
Lightning-AI/pytorch-lightning
data-science
20,340
DDP and BackboneFinetuning: model weights get out of sync when unfreezing layers for training
### Bug description

When training a model using DDP and `pl.callbacks.BackboneFinetuning`, it seems that model weights start to get out of sync across the processes after the backbone is unfrozen. Prior to unfreezing, model weights stay in sync across processes as expected.

I discovered this issue when trying to adopt DDP. I saw that on the rank 0 process, validation loss trended downward while training, while on rank > 0 processes validation loss increased steadily. This led to the suspicion that model weights were different across processes, which was confirmed by printing out the hash of the model weights on the different processes on each epoch.

### What version are you seeing the problem on?

v2.4

### How to reproduce the bug

The example below is programmed to check that model weights are in sync after every epoch. It fails the assertion after epoch 3 (`unfreeze_backbone_at_epoch`).

```python
import hashlib

import pytorch_lightning as pl
import torch
from torch import nn
import torch.distributed as dist
from torch.utils.data import DataLoader, Dataset


# 1. Define a simple dataset
class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len


# 2. Define a LightningModule
class SimpleModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(32, 16)
        self.layer = nn.Linear(16, 2)

    def forward(self, x):
        x = torch.relu(self.backbone(x))
        x = self.layer(x)
        return x

    def training_step(self, batch, batch_idx):
        x = batch
        y_hat = self(x)
        loss = torch.nn.functional.mse_loss(y_hat, torch.ones_like(y_hat))
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(
            filter(lambda p: p.requires_grad, self.parameters()), lr=0.1
        )

    def on_train_epoch_end(self):
        # Compute hash of model weights and check that they're equal across processes.
        hasher = hashlib.sha256()
        for param in self.parameters():
            hasher.update(param.data.cpu().numpy().tobytes())
        param_hash = hasher.hexdigest()

        all_param_hashes = [None] * dist.get_world_size()
        dist.all_gather_object(all_param_hashes, param_hash)
        if self.trainer.is_global_zero:
            assert len(set(all_param_hashes)) == 1, "Model weights not in sync :("
            print("Model weights in sync!")


# 3. Create data loaders
pl.seed_everything(0)
train_loader = DataLoader(RandomDataset(32, 64), batch_size=2)

# 4. Initialize the model and trainer
model = SimpleModel()
trainer = pl.Trainer(
    accelerator="cpu",
    strategy="ddp",
    devices=2,
    callbacks=[
        pl.callbacks.BackboneFinetuning(unfreeze_backbone_at_epoch=3, verbose=True)
    ],
)

# 5. Train the model
trainer.fit(model, train_loader)
```

Output:

```
Epoch 0: 100%|████████████████████████████| 16/16 [00:00<00:00, 163.83it/s, loss=0.295, v_num=22]
Model weights in sync!
Epoch 1: 100%|███████████████████████████| 16/16 [00:00<00:00, 269.84it/s, loss=0.0667, v_num=22]
Model weights in sync!
Epoch 2: 100%|███████████████████████████| 16/16 [00:00<00:00, 257.68it/s, loss=0.0361, v_num=22]
Model weights in sync!
Current lr: 0.1, Backbone lr: 0.01
Current lr: 0.1, Backbone lr: 0.01
Epoch 3: 100%|███████████████████████████| 16/16 [00:00<00:00, 244.17it/s, loss=0.0243, v_num=22]Current lr: 0.1, Backbone lr: 0.02
[rank0]: Traceback (most recent call last):
...
[rank0]:   File "/home/ksikka/lightning-pose/example2.py", line 58, in _assert_model_weights_in_sync
[rank0]:     assert len(set(all_param_hashes)) == 1, "Model weights not in sync :("
[rank0]: AssertionError: Model weights not in sync :(
```

### Error messages and logs

No warning or error. Validation loss with `sync_dist=True` increases after unfreezing, while with `sync_dist=False` it decreases, although at a lower rate than with a single process.
### Environment I originally noticed the issue in a multi-GPU linux environment in lightning studio, but I reproduced with the example code above on the following environment. <details> <summary>Current environment</summary> * CUDA: - GPU: - NVIDIA GeForce GTX 1080 Ti - available: True - version: 12.1 * Lightning: - lightning: 2.4.0 - lightning-bolts: 0.7.0 - lightning-pose: 1.5.1 - lightning-utilities: 0.11.7 - pytorch-lightning: 1.9.5 - torch: 2.4.1 - torchmetrics: 1.4.2 - torchtyping: 0.1.5 - torchvision: 0.19.1 * Packages: - absl-py: 2.1.0 - aiofiles: 24.1.0 - aiohappyeyeballs: 2.4.3 - aiohttp: 3.10.8 - aiosignal: 1.3.1 - alabaster: 0.7.16 - altair: 5.4.1 - antlr4-python3-runtime: 4.9.3 - anyio: 4.6.0 - argcomplete: 3.5.0 - astunparse: 1.6.3 - async-timeout: 4.0.3 - attrs: 24.2.0 - autocommand: 2.2.2 - babel: 2.16.0 - backports.tarfile: 1.2.0 - beautifulsoup4: 4.12.3 - black: 24.8.0 - blinker: 1.8.2 - boto3: 1.35.32 - botocore: 1.35.32 - brotli: 1.1.0 - cachetools: 5.5.0 - certifi: 2024.8.30 - charset-normalizer: 3.3.2 - click: 8.1.7 - contourpy: 1.3.0 - cycler: 0.12.1 - dacite: 1.7.0 - decorator: 4.4.2 - deprecated: 1.2.14 - dill: 0.3.9 - dm-tree: 0.1.8 - dnspython: 2.6.1 - docutils: 0.20.1 - exceptiongroup: 1.2.2 - execnet: 2.1.1 - fiftyone: 1.0.0 - fiftyone-brain: 0.17.0 - fiftyone-db: 1.1.6 - filelock: 3.16.1 - flake8: 7.1.1 - fonttools: 4.54.1 - frozenlist: 1.4.1 - fsspec: 2024.9.0 - ftfy: 6.2.3 - future: 1.0.0 - gast: 0.6.0 - gitdb: 4.0.11 - gitpython: 3.1.43 - glob2: 0.7 - graphql-core: 3.2.4 - grpcio: 1.66.2 - h11: 0.14.0 - h2: 4.1.0 - h5py: 3.12.1 - hpack: 4.0.0 - httpcore: 1.0.6 - httpx: 0.27.2 - humanize: 4.10.0 - hydra-core: 1.3.2 - hypercorn: 0.17.3 - hyperframe: 6.0.1 - idna: 3.10 - imageio: 2.35.1 - imageio-ffmpeg: 0.5.1 - imagesize: 1.4.1 - imgaug: 0.4.0 - importlib-metadata: 8.0.0 - importlib-resources: 6.4.0 - inflate64: 1.0.0 - inflect: 7.3.1 - iniconfig: 2.0.0 - isort: 5.13.2 - jaraco.collections: 5.1.0 - jaraco.context: 5.3.0 - 
jaraco.functools: 4.0.1 - jaraco.text: 3.12.1 - jinja2: 3.1.4 - jmespath: 1.0.1 - joblib: 1.4.2 - jsonlines: 4.0.0 - jsonschema: 4.23.0 - jsonschema-specifications: 2023.12.1 - kaleido: 0.2.1 - kiwisolver: 1.4.7 - kornia: 0.7.3 - kornia-rs: 0.1.5 - lazy-loader: 0.4 - lightning: 2.4.0 - lightning-bolts: 0.7.0 - lightning-pose: 1.5.1 - lightning-utilities: 0.11.7 - markdown: 3.7 - markdown-it-py: 3.0.0 - markupsafe: 2.1.5 - matplotlib: 3.9.2 - mccabe: 0.7.0 - mdurl: 0.1.2 - mongoengine: 0.24.2 - more-itertools: 10.3.0 - motor: 3.5.3 - moviepy: 1.0.3 - mpmath: 1.3.0 - multidict: 6.1.0 - multivolumefile: 0.2.3 - mypy-extensions: 1.0.0 - narwhals: 1.9.0 - networkx: 3.3 - numpy: 1.26.4 - nvidia-cublas-cu12: 12.1.3.1 - nvidia-cuda-cupti-cu12: 12.1.105 - nvidia-cuda-nvrtc-cu12: 12.1.105 - nvidia-cuda-runtime-cu12: 12.1.105 - nvidia-cudnn-cu12: 9.1.0.70 - nvidia-cufft-cu12: 11.0.2.54 - nvidia-curand-cu12: 10.3.2.106 - nvidia-cusolver-cu12: 11.4.5.107 - nvidia-cusparse-cu12: 12.1.0.106 - nvidia-dali-cuda110: 1.42.0 - nvidia-nccl-cu12: 2.20.5 - nvidia-nvimgcodec-cu11: 0.3.0.5 - nvidia-nvjitlink-cu12: 12.6.77 - nvidia-nvtx-cu12: 12.1.105 - omegaconf: 2.3.0 - opencv-python: 4.10.0.84 - opencv-python-headless: 4.10.0.84 - packaging: 24.1 - pandas: 2.2.3 - pathspec: 0.12.1 - pillow: 10.4.0 - pip: 24.2 - platformdirs: 4.3.6 - plotly: 5.24.1 - pluggy: 1.5.0 - pprintpp: 0.4.0 - priority: 2.0.0 - proglog: 0.1.10 - protobuf: 5.28.2 - psutil: 6.0.0 - py7zr: 0.22.0 - pyarrow: 17.0.0 - pybcj: 1.0.2 - pycodestyle: 2.12.1 - pycryptodomex: 3.21.0 - pydash: 8.0.3 - pydeck: 0.9.1 - pyflakes: 3.2.0 - pygments: 2.18.0 - pymongo: 4.8.0 - pyparsing: 3.1.4 - pyppmd: 1.1.0 - pytest: 8.3.3 - pytest-xdist: 3.6.1 - python-dateutil: 2.9.0.post0 - pytorch-lightning: 1.9.5 - pytz: 2024.2 - pyyaml: 6.0.2 - pyzstd: 0.16.1 - rarfile: 4.2 - referencing: 0.35.1 - regex: 2024.9.11 - requests: 2.32.3 - retrying: 1.3.4 - rich: 13.9.1 - rpds-py: 0.20.0 - s3transfer: 0.10.2 - scikit-image: 0.24.0 - scikit-learn: 
1.5.2 - scipy: 1.14.1 - seaborn: 0.13.2 - segment-anything: 1.0 - setuptools: 75.1.0 - shapely: 2.0.6 - six: 1.16.0 - smmap: 5.0.1 - sniffio: 1.3.1 - snowballstemmer: 2.2.0 - sortedcontainers: 2.4.0 - soupsieve: 2.6 - sphinx: 7.4.7 - sphinx-automodapi: 0.18.0 - sphinx-copybutton: 0.5.2 - sphinx-design: 0.6.1 - sphinx-rtd-dark-mode: 1.3.0 - sphinx-rtd-theme: 2.0.0 - sphinxcontrib-applehelp: 2.0.0 - sphinxcontrib-devhelp: 2.0.0 - sphinxcontrib-htmlhelp: 2.1.0 - sphinxcontrib-jquery: 4.1 - sphinxcontrib-jsmath: 1.0.1 - sphinxcontrib-qthelp: 2.0.0 - sphinxcontrib-serializinghtml: 2.0.0 - sse-starlette: 0.10.3 - sseclient-py: 1.8.0 - starlette: 0.39.2 - strawberry-graphql: 0.243.1 - streamlit: 1.39.0 - sympy: 1.13.3 - tabulate: 0.9.0 - taskgroup: 0.0.0a4 - tenacity: 9.0.0 - tensorboard: 2.18.0 - tensorboard-data-server: 0.7.2 - texttable: 1.7.0 - threadpoolctl: 3.5.0 - tifffile: 2024.9.20 - toml: 0.10.2 - tomli: 2.0.2 - torch: 2.4.1 - torchmetrics: 1.4.2 - torchtyping: 0.1.5 - torchvision: 0.19.1 - tornado: 6.4.1 - tqdm: 4.66.5 - triton: 3.0.0 - typeguard: 2.13.3 - typing: 3.7.4.3 - typing-extensions: 4.12.2 - tzdata: 2024.2 - tzlocal: 5.2 - universal-analytics-python3: 1.1.1 - urllib3: 2.2.3 - voxel51-eta: 0.13.0 - watchdog: 5.0.3 - wcwidth: 0.2.13 - werkzeug: 3.0.4 - wheel: 0.44.0 - wrapt: 1.16.0 - wsproto: 1.2.0 - xmltodict: 0.13.0 - yarl: 1.13.1 - zipp: 3.19.2 * System: - OS: Linux - architecture: - 64bit - ELF - processor: x86_64 - python: 3.10.0 - release: 5.15.153.1-microsoft-standard-WSL2 - version: #1 SMP Fri Mar 29 23:14:13 UTC 2024 </details> ### More info _No response_
open
2024-10-13T01:46:02Z
2024-10-17T18:48:33Z
https://github.com/Lightning-AI/pytorch-lightning/issues/20340
[ "bug", "needs triage", "ver: 2.4.x" ]
ksikka
2
Neoteroi/BlackSheep
asyncio
183
Deploy to Deta.sh error: Exception in ASGI application
**Describe the bug**
I just tried deploying the example code to Deta Micros and it got the error below.

main.py:

```python
from datetime import datetime

from blacksheep.server import Application
from blacksheep.server.responses import text

app = Application()


@app.route("/")
async def home(request):
    return text(f"Hello, World! {datetime.utcnow().isoformat()}")
```

**Response**
"Internal Server Error"

![Screen Shot 2021-07-10 at 22 13 04](https://user-images.githubusercontent.com/33455900/125167691-034eb580-e1cc-11eb-932a-6da27a5f7cd8.png)

**Log**

```
[ERROR] 2021-07-10T14:54:24.243Z 18cdc174-eb67-4040-b58b-9aa662fb9b77 Exception in ASGI application
Traceback (most recent call last):
  File "/opt/python/detalib/adapters/asgi/protocols/http.py", line 47, in run
    await app(self.scope, self.receive, self.send)
  File "/opt/python/blacksheep/server/application.py", line 606, in __call__
    response = await self.handle(request)
  File "blacksheep/baseapp.pyx", line 72, in handle
  File "blacksheep/baseapp.pyx", line 66, in blacksheep.baseapp.BaseApplication.get_route_match
  File "/opt/python/blacksheep/server/routing.py", line 432, in get_match
    match = route.match(ensure_bytes(value))
  File "/opt/python/blacksheep/utils/__init__.py", line 10, in ensure_bytes
    raise ValueError("Expected bytes or str")
ValueError: Expected bytes or str
No errors
```

Please check it at https://www.deta.sh/ — other ASGI/WSGI frameworks (Flask, FastAPI) worked.
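From the traceback, the failure is in a small helper that rejects anything that is neither `bytes` nor `str`. A simplified sketch of such a helper (an approximation for illustration — see `blacksheep/utils/__init__.py` for the real one) shows how a non-string path value coming from the hosting adapter's ASGI scope would trigger exactly this `ValueError`:

```python
def ensure_bytes(value):
    # Simplified stand-in for blacksheep's ensure_bytes: route matching
    # wants bytes, so str is encoded and anything else is rejected.
    if isinstance(value, bytes):
        return value
    if isinstance(value, str):
        return value.encode("utf8")
    raise ValueError("Expected bytes or str")


assert ensure_bytes(b"/") == b"/"
assert ensure_bytes("/") == b"/"

# If the hosting adapter puts something else (e.g. None) into the scope
# value used for routing, matching blows up with the error from the log:
try:
    ensure_bytes(None)
except ValueError as exc:
    print(exc)  # Expected bytes or str
```

So the bug is likely in what the Deta ASGI adapter passes in the scope, rather than in the example app itself.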
closed
2021-07-10T15:23:24Z
2021-11-03T20:36:32Z
https://github.com/Neoteroi/BlackSheep/issues/183
[]
Ppang0405
1
microsoft/nni
pytorch
5,587
Bug: incompatibility between save & load functions and configs.
There is an incompatibility between the standard config and the code, as shown in the save and load functions: https://github.com/microsoft/nni/blob/master/nni/tools/nnictl/nnictl_utils.py#L828 https://github.com/microsoft/nni/blob/master/nni/tools/nnictl/nnictl_utils.py#LL931C5-L931C22 It can be reproduced by saving and loading the tutorial examples. It's due to the differences between "trialCodeDirectory" and "trial codeDir".
open
2023-05-29T19:12:42Z
2023-07-19T12:00:40Z
https://github.com/microsoft/nni/issues/5587
[]
zjowowen
2
miguelgrinberg/Flask-SocketIO
flask
1,318
Implement clear cache function
**Is your feature request related to a problem? Please describe.**
When implementing tests, sometimes my test clients will have a backlog of packets that I would like to flush out.

**Describe the solution you'd like**
Would like to see the SocketIOTestClient have a simple function to clear the queue, like:

```
def clear_cache(self):
    self.queue[self.sid] = []
```

**Describe alternatives you've considered**
I know I could just clear the queue myself in the tests, but I thought this might be a nice addition to make tests more readable. Would love to hear your opinion on this 😃
closed
2020-06-30T11:10:23Z
2020-07-01T13:11:47Z
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1318
[ "question" ]
RobustProgram
4
pallets-eco/flask-wtf
flask
123
QuerySelectField and json payload
Hi,

When sending an application/json payload, `form.__init__` converts it to a JSON dict. QuerySelectField always converts the pk into str/unicode. The issue is that some newer frameworks (like AngularJS) tend to submit models as a JSON payload, with the common pk still being an integer/autoincrement, so the validation always fails as the pk (type int) is now compared with a type str.

Where the conversion of the pk to str occurs: https://github.com/wtforms/wtforms/blob/master/wtforms/ext/sqlalchemy/fields.py#L100

Where Flask-WTF does the magic of converting the payload to a JSON object: https://github.com/lepture/flask-wtf/blob/master/flask_wtf/form.py#L78

I made a request to WTForms (https://github.com/wtforms/wtforms/pull/79) but it got closed, for legitimate reasons I think.
closed
2014-04-30T17:42:32Z
2021-05-28T01:03:54Z
https://github.com/pallets-eco/flask-wtf/issues/123
[]
typehorror
3
taverntesting/tavern
pytest
238
Testing value in nested list
Hello, I really enjoy your testing framework. I am relatively new to API writing and was excited to implement tests after only a short time reading your tavern examples. I ran into a problem, though, when testing for a specific value in a nested dictionary (list(dictionary)).

My API returns a large JSON object (~5k lines) of the structure:

```
{"data": [{"key1": 1, "key12": "STRING", "key13": {"key131": null, ...}, ...}, {"key2": 2, "key22": "STRING", ...}], "metadata": {"mkey1": "val1", ...}}
```

Now I wanted to test whether the value of `key12` contains the `"STRING"` or not, so I wrote the test as I already did for other, simpler API outputs. I just gave the key structure and expected it to work as for the simple `key: value` tests you described in the introductory examples:

```
response:
  status_code: 200
  body:
    data:
      - key12: "STRING"
```

But sadly I get an error, with a notification that the returned dict does not match the one specified by me. The error message says:

```
expected["data"] = '[{'key12': 'STRING'}]' (type = <class 'list'>), actual["data"] = '[{'key1': 1, 'key12': 'STRING', "key13": {"key131": null, ...
```

So the test does not check only the `key12` value as I expected; it checks that the whole dict described in the `body` of `response` matches the whole dict returned by the API. This was quite surprising to me, since I had already written other tests where the API returned a simple dictionary with key:value pairs — I selected a random key, checked for the value, and it all worked fine. But with the more complex data structure, the test is now comparing the literal dictionary setup and not only the value of my `key12`...

If this question is already answered somewhere, I'm sorry for asking it again, but I couldn't find it. I also tried to write an external function that processes the returned dictionary and assesses the value of `key12`, but I was not able to get the connection with pytest working yet.
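For the external-function route, something along these lines might work (a hedged sketch — the exact YAML wiring and the signature tavern passes to external verification functions should be checked against the tavern docs; the function receives the HTTP response object):

```python
# Contents of a hypothetical helpers.py, referenced from a test stage via
# tavern's external-function mechanism. Only the function body matters here.
def check_key12(response):
    data = response.json()["data"]
    # Pass if any item in the nested list carries the expected value,
    # ignoring every other key in the payload.
    assert any(item.get("key12") == "STRING" for item in data), data


# Quick self-check with a stub standing in for the HTTP response:
class FakeResponse:
    def json(self):
        return {"data": [{"key1": 1, "key12": "STRING", "key13": {"key131": None}}]}


check_key12(FakeResponse())  # does not raise
```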
closed
2019-01-30T12:19:03Z
2019-04-20T17:26:07Z
https://github.com/taverntesting/tavern/issues/238
[]
Cattes
6
yihong0618/running_page
data-visualization
771
A problem occurred at the `Run actions/cache@v4` step — advice appreciated
While syncing data automatically, I noticed failures. Checking the logs, I found that the `Run actions/cache@v4` step failed. Any advice would be appreciated — see the screenshot below.

![Image](https://github.com/user-attachments/assets/06fd9b4f-17ab-4a7b-a4c7-628d07107123)
closed
2025-02-03T03:08:09Z
2025-02-09T02:52:31Z
https://github.com/yihong0618/running_page/issues/771
[]
Leerol
8
521xueweihan/HelloGitHub
python
2,664
[Open-source self-recommendation] VersionFox: a lightweight, general-purpose, cross-platform SDK version management tool.
## Recommended project

<!-- This is the entry point for recommending projects to the HelloGitHub monthly; self-recommendations and recommendations of open-source projects are welcome. The only requirement: please introduce the project following the prompts below. -->
<!-- Click "Preview" above to view the submitted content immediately -->
<!-- Only open-source projects on GitHub are collected; please fill in the GitHub project address -->
- Project address: https://github.com/version-fox/vfox
<!-- Please choose from: C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning -->
- Category: Go
<!-- Please describe what it does in about 20 characters, like an article title, so it is clear at a glance -->
- Project title: A lightweight, general-purpose, cross-platform SDK version management tool.
<!-- What is this project, what can it be used for, what are its features or what pain points does it solve, what scenarios is it suitable for, and what can beginners learn from it. Length 32-256 characters -->
- Project description: Every programming language has its own version management tool, such as nvm, fvm, gvm, sdkman, etc., and their core functionality is largely the same. But for developers who use multiple languages, this means learning and memorizing many different commands, which increases the learning cost. If you are a full-stack engineer, or use more than one language, with VersionFox you no longer need to learn all these scattered tools, lowering the learning cost and saving time.
<!-- What makes it stand out? What are its features compared with similar projects! -->
- Highlights: Cross-platform! SDKs are provided as plugins, with an official plugin repository. Custom plugins can also be implemented according to the plugin spec, and plugins can be shared.
- Example code: (optional)
- Screenshots: (optional) gif/png/jpg

<img width="1334" alt="image" src="https://github.com/521xueweihan/HelloGitHub/assets/40265686/0572ab40-aed2-43a1-b358-bf894418bf93">
<img width="1335" alt="image" src="https://github.com/521xueweihan/HelloGitHub/assets/40265686/9e934561-6f49-4be9-ad9c-f21e09a0723e">

### Managing Dart
![v-dart](https://github.com/521xueweihan/HelloGitHub/assets/40265686/42926229-9ff1-466e-972a-10cdc1d094ca)

### Managing Flutter
![v-flutter](https://github.com/521xueweihan/HelloGitHub/assets/40265686/c46f77fb-29d7-4ab9-bf68-09127d5a3d59)

- Future plans: Continue to enrich the plugin repository and provide plugins for more languages.
closed
2023-12-28T08:43:10Z
2024-01-26T02:22:05Z
https://github.com/521xueweihan/HelloGitHub/issues/2664
[ "已发布", "Go 项目" ]
aooohan
5
zappa/Zappa
flask
1,132
Not recognizing virtualenv created with pyenv.
<!--- Provide a general summary of the issue in the Title above -->

## Context

If `.python-version` does not exist in the current path, the virtual environment of pyenv is not recognized.

<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 3.6/3.7/3.8 -->

## Expected Behavior

```shell
> zappa update dev
Calling update for stage dev..
Downloading and installing dependencies..
...
```

## Actual Behavior

```shell
# .python-version is not in that path, but pyenv works correctly.
> pwd
/Users/username/Documents/GitHub/myenv/myproject
> pyenv versions
* my-env (set by /Users/username/Documents/GitHub/my-env/.python-version)
...
> zappa update dev
Calling update for stage dev..
Error: Zappa requires an active virtual environment!
Learn more about virtual environments here: http://docs.python-guide.org/en/latest/dev/virtualenvs/
```

## Possible Fix

In addition to detecting the `.python-version` file in the current path, you can check the current virtual environment by using the `pyenv version` command.

<!--- Not obligatory, but suggest a fix or reason for the bug -->

## Steps to Reproduce

<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1.
2.
3.
## Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> * Zappa version used: 0.54.1 * Operating System and Python version: 3.9.12 * The output of `pip freeze`: ``` argcomplete==2.0.0 awscli==1.22.87 boto3==1.21.32 botocore==1.24.32 certifi==2021.10.8 cfn-flip==1.3.0 charset-normalizer==2.0.12 click==8.1.2 colorama==0.4.3 docutils==0.15.2 durationpy==0.5 Faker==13.3.4 Flask==2.1.1 flask-validation-extended==0.1.7 future==0.18.2 hjson==3.0.2 idna==3.3 importlib-metadata==4.11.3 itsdangerous==2.1.2 Jinja2==3.1.1 jmespath==1.0.0 kappa==0.6.0 MarkupSafe==2.1.1 placebo==0.9.0 pyasn1==0.4.8 python-dateutil==2.8.2 python-dotenv==0.20.0 python-slugify==6.1.1 PyYAML==5.4.1 requests==2.27.1 rsa==4.7.2 s3transfer==0.5.2 six==1.16.0 slack-sdk==3.15.2 text-unidecode==1.3 toml==0.10.2 tqdm==4.64.0 troposphere==4.0.0 urllib3==1.26.9 Werkzeug==2.1.1 wsgi-request-logger==0.4.6 zappa==0.54.1 zipp==3.8.0 ``` * Link to your project (optional): I don't think this is necessary. * Your `zappa_settings.json`: I don't think this is necessary.
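The possible fix described above — checking `pyenv version` instead of only looking for a `.python-version` file in the current directory — can be sketched roughly as follows. This is an illustrative sketch, not Zappa's actual code, and `detect_pyenv_env` / `active_pyenv_env` are hypothetical helper names:

```python
import subprocess

def active_pyenv_env(version_output: str):
    """Parse `pyenv version` output, e.g.
    'my-env (set by /Users/username/Documents/GitHub/my-env/.python-version)'.
    Returns the environment name, or None when pyenv reports 'system'."""
    name = version_output.split()[0] if version_output.strip() else ""
    return name if name and name != "system" else None

def detect_pyenv_env():
    """Hypothetical helper: ask pyenv directly instead of walking up
    the directory tree looking for a .python-version file."""
    try:
        out = subprocess.run(
            ["pyenv", "version"], capture_output=True, text=True, check=True
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return None  # pyenv not installed, or the command failed
    return active_pyenv_env(out)
```

With this approach, an environment activated via `pyenv activate` would be recognized even when no `.python-version` file exists in the working directory.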
closed
2022-05-10T08:08:11Z
2022-12-01T10:02:40Z
https://github.com/zappa/Zappa/issues/1132
[ "bug", "next-release-candidate" ]
iml1111
1
scrapy/scrapy
web-scraping
6,031
Update type hints for Twisted 23.8.0 changes
Twisted 23.8.0 has improved and fixed type hints around deferreds so now mypy prints some errors: ``` scrapy/core/scraper.py:184: error: Argument 1 to "addErrback" of "Deferred" has incompatible type "Callable[[Failure, Request, Response, Spider], None]"; expected "Callable[[Failure, Request, Response, Spider], Deferred[<nothing>]]" [arg-type] scrapy/core/scraper.py:184: error: Argument 3 to "addErrback" of "Deferred" has incompatible type "Response | Failure"; expected "Response" [arg-type] scrapy/core/scraper.py:185: error: Argument 1 to "addCallback" of "Deferred" has incompatible type "Callable[[Iterable[Any] | AsyncIterable[Any], Request, Response, Spider], Deferred[Any]]"; expected "Callable[[Any, Request, Response, Spider], Failure]" [arg-type] scrapy/core/scraper.py:185: error: Argument 3 to "addCallback" of "Deferred" has incompatible type "Response | Failure"; expected "Response" [arg-type] scrapy/core/downloader/__init__.py:157: error: Need type annotation for "deferred" [var-annotated] scrapy/core/engine.py:223: error: Incompatible return value type (got "None", expected "Failure") [return-value] scrapy/core/engine.py:300: error: Argument 1 to "addBoth" of "Deferred" has incompatible type "Callable[[Response | Request, Request], Deferred[Any] | Response]"; expected "Callable[[Any | Failure, Request], Failure]" [arg-type] ``` (I omitted some unhelpful mypy tips). This needs reviewing, as it's not clear at the first glance if we have any problems with code and/or type hints.
closed
2023-08-30T09:17:35Z
2023-09-01T06:47:25Z
https://github.com/scrapy/scrapy/issues/6031
[ "typing" ]
wRAR
1
microsoft/nni
data-science
5,276
Run experiment without internet access
I was wondering if we could run an NNI experiment in a system with no access to the internet. Like university clusters in which submitted GPU jobs have no internet access.
open
2022-12-10T06:23:58Z
2022-12-12T02:31:05Z
https://github.com/microsoft/nni/issues/5276
[]
tanmay2798
0
marshmallow-code/apispec
rest-api
503
Example in Readme Fails in python 3.6.3
``` python spectest.py Traceback (most recent call last): File "spectest.py", line 1, in <module> from apispec import APISpec File "/Users/jandrews/flaskbase/apispec.py", line 1, in <module> from apispec import APISpec ImportError: cannot import name 'APISpec' ``` Python 3.6.3 in a virtual environment. Unable to use the sample script in the Readme.md
closed
2019-09-21T09:43:25Z
2019-09-21T11:40:11Z
https://github.com/marshmallow-code/apispec/issues/503
[]
thenetimp
2
darrenburns/posting
rest-api
99
Crashes on Linux if no clipboard mechanism is installed.
On Ubuntu at least, there is often no terminal clipboard handler (i.e. 'xclip') installed by default. In this case, trying to copy the Response will crash with a traceback and lose any work. The output is a full traceback followed by: ```console PyperclipException: Pyperclip could not find a copy/paste mechanism for your system. For more information, please visit https://pyperclip.readthedocs.io/en/latest/index.html#not-implemented-error ``` I have opened a PR #98 that catches this cleanly
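The general shape of the fix is to catch `PyperclipException` and notify the user instead of crashing. A hedged, self-contained sketch (using a stand-in exception class so the example runs without pyperclip installed; the real exception is `pyperclip.PyperclipException`):

```python
class PyperclipException(Exception):
    """Stand-in for pyperclip.PyperclipException (illustrative only)."""

def copy_via_pyperclip(text):
    # On a machine without xclip/xsel, pyperclip raises at copy time;
    # this stub simulates that failure mode.
    raise PyperclipException(
        "Pyperclip could not find a copy/paste mechanism for your system."
    )

def copy_response(text, notify):
    """Catch the exception so the app shows a message instead of crashing."""
    try:
        copy_via_pyperclip(text)
    except PyperclipException:
        notify("Clipboard unavailable - install xclip or xsel.")
        return False
    return True
```

In the app this would wrap the actual `pyperclip.copy` call, and `notify` would be whatever notification mechanism the UI provides.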
closed
2024-08-30T17:22:21Z
2024-09-06T11:54:16Z
https://github.com/darrenburns/posting/issues/99
[]
seapagan
0
OFA-Sys/Chinese-CLIP
computer-vision
258
AttributeError: 'NoneType' object has no attribute 'get'
Hi everyone, why does this happen? It won't run. Versions: ``` torch 1.13.1 cuda 11.7 torchvision 0.16.0 torchaudio 2.1.0 ``` Error after running: ``` /home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/launch.py:180: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use_env is set by default in torchrun. If your script expects `--local_rank` argument to be set, please change it to read from `os.environ['LOCAL_RANK']` instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions warnings.warn( Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/launch.py", line 195, in <module> main() File "/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/launch.py", line 191, in main launch(args) File "/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/launch.py", line 176, in launch run(args) File "/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/run.py", line 753, in run elastic_launch( File "/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 132, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/launcher/api.py", line 237, in launch_agent result = agent.run() ^^^^^^^^^^^ File "/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/elastic/metrics/api.py", line 129, in wrapper result = f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File
"/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/elastic/agent/server/api.py", line 709, in run result = self._invoke_run(role) ^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/elastic/agent/server/api.py", line 844, in _invoke_run self._initialize_workers(self._worker_group) File "/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/elastic/metrics/api.py", line 129, in wrapper result = f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/elastic/agent/server/api.py", line 681, in _initialize_workers worker_ids = self._start_workers(worker_group) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/elastic/metrics/api.py", line 129, in wrapper result = f(*args, **kwargs) ^^^^^^^^^^^^^^^^^^ File "/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/elastic/agent/server/local_elastic_agent.py", line 271, in _start_workers self._pcontext = start_processes( ^^^^^^^^^^^^^^^^ File "/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/__init__.py", line 207, in start_processes redirs = to_map(redirects, nprocs) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/user/.virtualenvs/zjk_Chinese-CLIP-master/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/api.py", line 162, in to_map map[i] = val_or_map.get(i, Std.NONE) ^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'get' ``` Here is the config file: ``` #!/usr/bin/env # Guide: # This script supports distributed training on multi-gpu workers (as well as single-worker training). # Please set the options below according to the comments.
# For multi-gpu workers training, these options should be manually set for each worker. # After setting the options, please run the script on each worker. # Command: bash run_scripts/muge_finetune_vit-b-16_rbt-base.sh ${DATAPATH} # Number of GPUs per GPU worker GPUS_PER_NODE=1 # Number of GPU workers, for single-worker training, please set to 1 WORKER_CNT=1 # The ip address of the rank-0 worker, for single-worker training, please set to localhost export MASTER_ADDR=localhost # The port for communication export MASTER_PORT=8514 # The rank of this worker, should be in {0, ..., WORKER_CNT-1}, for single-worker training, please set to 0 export RANK=0 export PYTHONPATH=${PYTHONPATH}:`pwd`/cn_clip/ DATAPATH=${1} # data options train_data=${DATAPATH}/datasets/MUGE/lmdb/train val_data=${DATAPATH}/datasets/MUGE/lmdb/valid # if val_data is not specified, the validation will be automatically disabled # restore options resume=${DATAPATH}/pretrained_weights/clip_cn_vit-b-16.pt # or specify your customed ckpt path to resume reset_data_offset="--reset-data-offset" reset_optimizer="--reset-optimizer" # reset_optimizer="" # output options output_base_dir=${DATAPATH}/experiments/ name=muge_finetune_vit-b-16_roberta-base_bs128_8gpu save_step_frequency=999999 # disable it save_epoch_frequency=1 log_interval=1 report_training_batch_acc="--report-training-batch-acc" # report_training_batch_acc="" # training hyper-params context_length=52 warmup=100 batch_size=128 valid_batch_size=128 accum_freq=1 lr=5e-5 wd=0.001 max_epochs=3 # or you can alternatively specify --max-steps valid_step_interval=150 valid_epoch_interval=1 vision_model=ViT-B-16 text_model=RoBERTa-wwm-ext-base-chinese use_augment="--use-augment" # use_augment="" python3 -m torch.distributed.launch --use_env --nproc_per_node=${GPUS_PER_NODE} --nnodes=${WORKER_CNT} --node_rank=${RANK} \ --master_addr=${MASTER_ADDR} --master_port=${MASTER_PORT} cn_clip/training/main.py \ --train-data=${train_data} \ --val-data=${val_data} \ 
--resume=${resume} \ ${reset_data_offset} \ ${reset_optimizer} \ --logs=${output_base_dir} \ --name=${name} \ --save-step-frequency=${save_step_frequency} \ --save-epoch-frequency=${save_epoch_frequency} \ --log-interval=${log_interval} \ ${report_training_batch_acc} \ --context-length=${context_length} \ --warmup=${warmup} \ --batch-size=${batch_size} \ --valid-batch-size=${valid_batch_size} \ --valid-step-interval=${valid_step_interval} \ --valid-epoch-interval=${valid_epoch_interval} \ --accum-freq=${accum_freq} \ --lr=${lr} \ --wd=${wd} \ --max-epochs=${max_epochs} \ --vision-model=${vision_model} \ ${use_augment} \ --text-model=${text_model} ```
open
2024-02-27T03:19:37Z
2024-03-18T16:15:56Z
https://github.com/OFA-Sys/Chinese-CLIP/issues/258
[]
5zjk5
5
cleanlab/cleanlab
data-science
814
Extend Datalab to token classification (entity recognition) datasets
Allow Datalab to find label issues when `labels` object is token classification annotations (per-word class labels for each sentence/document). Related things to look at: Extending Datalab to other ML tasks: https://github.com/cleanlab/cleanlab/issues/774 https://github.com/cleanlab/cleanlab/issues/765 https://github.com/cleanlab/cleanlab/pull/796 Existing Datalab code for label issues: https://github.com/cleanlab/cleanlab/blob/master/cleanlab/datalab/internal/issue_manager/label.py Adding new issue manager in Datalab: https://docs.cleanlab.ai/master/cleanlab/datalab/guide/custom_issue_manager.html Existing Cleanlab code for token classification label issues: https://github.com/cleanlab/cleanlab/tree/master/cleanlab/token_classification
open
2023-08-12T19:43:16Z
2023-11-23T18:42:33Z
https://github.com/cleanlab/cleanlab/issues/814
[ "enhancement", "good first issue", "help-wanted" ]
jwmueller
0
Significant-Gravitas/AutoGPT
python
8,820
Agent Page Styling - Double separators at bottom
closed
2024-11-27T13:08:51Z
2024-12-09T16:30:52Z
https://github.com/Significant-Gravitas/AutoGPT/issues/8820
[ "bug", "UI", "platform/frontend" ]
Swiftyos
0
itamarst/eliot
numpy
147
Get rid of `system` field in tracebacks/failure logging
Probably the `system` field is a bogus feature that should be removed. 1. You should be able to deduce what caused the problem from the logging action context. 2. It's a traceback! It tells you exactly where it came from! Other places that take a `system` should also be fixed.
closed
2015-03-04T15:14:17Z
2018-09-22T20:59:16Z
https://github.com/itamarst/eliot/issues/147
[]
itamarst
0
ClimbsRocks/auto_ml
scikit-learn
353
feature_responses
- [ ] categorical columns - [ ] maybe just randomly permute these and see how much things change? that would oversample from minority classes. maybe create a list of len 100 that's relatively balanced based on actual class distributions, generate a random int from 1-100, and grab that item? - [ ] binary columns, or other columns with step deltas - [ ] these we can probably handle with finding what the minimum delta is between values, and making our delta at least that.
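The sampling idea in the first checklist item can be sketched as follows. These are hypothetical helper names, not auto_ml's API: build a roughly class-proportional list of ~100 values from the observed distribution, then "permute" a categorical column by drawing random entries from that list:

```python
import random
from collections import Counter

def balanced_sample_list(values, size=100):
    """Build a length-`size` list whose class proportions roughly follow
    the actual class distribution, with at least one slot per class so
    rare classes are not dropped entirely."""
    counts = Counter(values)
    total = len(values)
    pool = []
    for cls, n in counts.items():
        pool.extend([cls] * max(1, round(size * n / total)))
    return pool[:size]

def permuted_value(pool, rng=random):
    """Draw a random index into the pre-built pool and return that item."""
    return pool[rng.randrange(len(pool))]
```

Compared with randomly permuting the raw column, this keeps minority classes represented without oversampling them much beyond their actual share.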
open
2017-11-10T20:17:49Z
2017-11-10T20:17:49Z
https://github.com/ClimbsRocks/auto_ml/issues/353
[]
ClimbsRocks
0
feature-engine/feature_engine
scikit-learn
306
change docs layout to pydata sphinx theme
closed
2021-09-03T08:03:05Z
2021-11-16T16:21:46Z
https://github.com/feature-engine/feature_engine/issues/306
[]
solegalli
0
babysor/MockingBird
pytorch
702
MacBook M1: launching web.py fails with a recursive-call error; what is the correct way to start it?
**Summary** Launching with `python web.py` fails with a recursive-call error; what is the correct way to start it? **Env & To Reproduce** MacBook M1, main branch, Python 3.8 **Screenshots** <img width="662" alt="image" src="https://user-images.githubusercontent.com/5927124/183737266-84c9e105-e771-4046-84d3-e5815b92be3f.png">
open
2022-08-09T18:45:47Z
2022-08-12T15:14:11Z
https://github.com/babysor/MockingBird/issues/702
[]
Daniel-ccx
1
AirtestProject/Airtest
automation
633
Xiaomi Mi 6: clicking Connect fails to connect
* Image recognition / device control issue -> following the steps below **Describe the bug** Windows 10, version 1.2.2. Android 9.0, MIUI 11. The Xiaomi Mi 6 cannot connect; USB install, USB debugging and the related switches have already been enabled as described in the help docs. ``` F:\新建文件夹 (2)\AirtestIDE_2019-09-11_py3_win64\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s fe31d239 wait-for-device F:\新建文件夹 (2)\AirtestIDE_2019-09-11_py3_win64\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s fe31d239 shell getprop ro.build.version.sdk F:\新建文件夹 (2)\AirtestIDE_2019-09-11_py3_win64\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s fe31d239 shell ls /data/local/tmp/minicap F:\新建文件夹 (2)\AirtestIDE_2019-09-11_py3_win64\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s fe31d239 shell ls /data/local/tmp/minicap.so F:\新建文件夹 (2)\AirtestIDE_2019-09-11_py3_win64\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s fe31d239 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -v 2>&1 version:5 skip install minicap F:\新建文件夹 (2)\AirtestIDE_2019-09-11_py3_win64\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s fe31d239 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i F:\新建文件夹 (2)\AirtestIDE_2019-09-11_py3_win64\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s fe31d239 shell dumpsys window displays F:\新建文件夹 (2)\AirtestIDE_2019-09-11_py3_win64\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s fe31d239 shell pm path jp.co.cyberagent.stf.rotationwatcher F:\新建文件夹 (2)\AirtestIDE_2019-09-11_py3_win64\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s fe31d239 shell export CLASSPATH=/data/app/jp.co.cyberagent.stf.rotationwatcher-dTrOMa87xO-xLp6jS6FaAQ==/base.apk;exec app_process /system/bin jp.co.cyberagent.stf.rotationwatcher.RotationWatcher F:\新建文件夹 (2)\AirtestIDE_2019-09-11_py3_win64\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s fe31d239 shell dumpsys window windows F:\新建文件夹 (2)\AirtestIDE_2019-09-11_py3_win64\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s fe31d239 forward --no-rebind tcp:16565 localabstract:minicap_16565
F:\新建文件夹 (2)\AirtestIDE_2019-09-11_py3_win64\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s fe31d239 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i The ADB command failed to execute; some phone settings may need to be changed before it can be used. Please visit the [help docs](http://airtest.netease.com/docs/cn/2_device_connection/2_android_faq.html) to see how to configure them. ``` **Other environment info** (other environments, e.g. abnormal on Linux Ubuntu 16.04 but normal on Windows.)
closed
2019-12-06T18:09:13Z
2019-12-09T02:13:33Z
https://github.com/AirtestProject/Airtest/issues/633
[]
a456210
1
ScrapeGraphAI/Scrapegraph-ai
machine-learning
942
Licensing issues
Hi, I've been testing this package and I am really satisfied with the results. The problem is that I'd like to use this package in a closed-source commercial project. Although this package uses the MIT license, it also has `html2text` as a dependency, and `html2text` uses the GPL-3.0 license. To my knowledge, if the ScrapeGraphAI package includes a GPL-3.0 subpackage in its distribution, then it is effectively GPL-3.0 licensed. Do you have any plans to replace that package in the future?
closed
2025-03-04T18:56:12Z
2025-03-09T14:09:00Z
https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/942
[ "dependencies" ]
sleter
1
BeanieODM/beanie
asyncio
801
[BUG] Multi-model pattern
**Describe the bug** "Never" is not awaitablePylance[reportGeneralTypeIssues](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportGeneralTypeIssues) Could not bind method "find" because "type[Parent]" is not assignable to parameter "cls" Type "Parent" cannot be assigned to type "Document" "Parent" is incompatible with "Document"Pylance[reportGeneralTypeIssues](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportGeneralTypeIssues) (method) find: Never "Never" is not awaitablePylance[reportGeneralTypeIssues](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportGeneralTypeIssues) (function) to_list: Never **To Reproduce** ```python import asyncio from motor.motor_asyncio import AsyncIOMotorClient from beanie import Document, UnionDoc, init_beanie class Parent(UnionDoc): # Union class Settings: name = "union_doc_collection" # Collection name class_id = "_class_id" # _class_id is default beanie internal field used to filter children Documents class One(Document): int_field: int = 0 shared: int = 0 class Settings: name = "One" # Name used to filter union document 'One', defaults to class name union_doc = Parent class Two(Document): str_field: str = "test" shared: int = 0 class Settings: union_doc = Parent async def example(): db = AsyncIOMotorClient(config._DB_CONNECT, compressors='zstd', tz_aware=True).LOTR await init_beanie(database=db, document_models=[Parent, One, Two]) # type: ignore data = await Parent.find().to_list() if __name__ == "__main__": asyncio.run(example()) ``` **Expected behavior** This bug is very annoying. I expect it to produce a list of Parent, but instead the type is unknown and full of errors. I'm expecting it to accept: data: list[Parent] = await Parent.all().to_list() # type: ignore **Additional context**
closed
2023-12-09T14:42:41Z
2024-10-04T16:14:37Z
https://github.com/BeanieODM/beanie/issues/801
[ "bug" ]
CAPITAINMARVEL
2
dmlc/gluon-cv
computer-vision
1,703
FPS of TSN
The input of TSN is multiple frames (e.g. 32 frames). However, this factor of 32 is not multiplied in when calculating FPS. I wonder whether the number of input frames should be taken into consideration when calculating FPS.
closed
2021-09-16T14:49:02Z
2021-12-23T06:36:46Z
https://github.com/dmlc/gluon-cv/issues/1703
[ "Stale" ]
wenzhengzeng
1
benlubas/molten-nvim
jupyter
28
[Bug] `:MoltenInfo` doesn't show external kernels
When attaching to an externally run Jupyter kernel via a connection file, molten will not show information about the running kernel.
closed
2023-11-09T22:41:03Z
2023-11-10T16:16:03Z
https://github.com/benlubas/molten-nvim/issues/28
[ "bug" ]
benlubas
0
tableau/server-client-python
rest-api
664
Typo in guidance causing: AttributeError: 'tuple' object has no attribute 'id'
Currently showing: `target = (workbookid, 'workbook')` Should show: `target = TSC.Target(workbookid, 'workbook')`
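The AttributeError happens because library code accesses `target.id`, which a plain tuple does not have. A hedged, runnable sketch using a namedtuple stand-in for `TSC.Target` (the real class lives in tableauserverclient and takes an id plus a target type):

```python
from collections import namedtuple

# Stand-in for tableauserverclient's TSC.Target(id, type) - illustrative only.
Target = namedtuple("Target", ["id", "type"])

def schedule(target):
    # Library code accesses target.id; a plain tuple has no such attribute.
    return target.id

workbook_id = "abc-123"
try:
    schedule((workbook_id, "workbook"))  # the broken form from the docs
    failure = ""
except AttributeError as e:
    failure = str(e)  # 'tuple' object has no attribute 'id'

ok = schedule(Target(workbook_id, "workbook"))  # the corrected form
```

This mirrors the guidance fix: construct a `TSC.Target` object instead of passing a bare tuple.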
closed
2020-08-10T22:55:34Z
2020-08-29T01:57:04Z
https://github.com/tableau/server-client-python/issues/664
[]
ValerieVelez
1
ufoym/deepo
jupyter
49
Dependency Issues
I keep having some random dependency issues when using Deepo. Are there any plans (or could there be) to move over to the Anaconda Python distributions from Continuum Analytics? I've never had dependency issues with their package management system.
closed
2018-08-04T23:31:58Z
2018-08-27T10:42:17Z
https://github.com/ufoym/deepo/issues/49
[]
switch527
2
pyjanitor-devs/pyjanitor
pandas
1,115
[EHN] let `column_name` or `column_names` support callback type
# Brief Description There is always an option called `column_name` or `column_names`. It's used to select the columns of `df`. The type can be a single value like `Hashable` or a list of values like `Iterable[Hashable]`. This idea is to also support a callable type. _Originally posted by @ericmjl in https://github.com/pyjanitor-devs/pyjanitor/pull/1112#discussion_r895257358_ # Example API An implicit style and also a trick for selecting columns. | Select columns | Using callable type | Using Iterable type | `df.columns` | | ------------------------------ | --------------------------------------------------------------------- | ------------------- | ---------------------------- | | Select the first three columns | `lambda df: df.columns[:3]` | `['a', 'b', 'c']` | pd.Index(list('abcde')) | | Select str type columns | `lambda df: [i for i in df.columns if isinstance(i, str)]` | `['a', 'b', 'c']` | pd.Index(['a', 'b', 'c', 1]) |
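The Example API above can be sketched as a small dispatch helper. This is an illustrative sketch, not pyjanitor's actual implementation, and `resolve_column_names` is a hypothetical name:

```python
import pandas as pd

def resolve_column_names(df, column_names):
    """Proposed behaviour: `column_names` may be a single label,
    an iterable of labels, or a callable applied to the DataFrame."""
    if callable(column_names):
        return list(column_names(df))
    if isinstance(column_names, (list, tuple, pd.Index)):
        return list(column_names)
    return [column_names]  # single Hashable label

df = pd.DataFrame({"a": [1], "b": [2], "c": [3], 1: [4]})

# Select the first three columns.
first_three = resolve_column_names(df, lambda d: d.columns[:3])

# Select only str-typed column labels.
str_only = resolve_column_names(
    df, lambda d: [c for c in d.columns if isinstance(c, str)]
)
```

Because the callable receives the DataFrame itself, selections can depend on runtime properties (dtypes, label types, column order) rather than a hard-coded list.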
open
2022-06-13T13:40:14Z
2022-09-11T00:21:06Z
https://github.com/pyjanitor-devs/pyjanitor/issues/1115
[]
Zeroto521
1
vitalik/django-ninja
pydantic
1,160
[BUG] `ModelSchema` produces `id > (integer | null)` openapi
**Describe the bug** Having this model definition: ```python class MyModel(ModelSchema): class Meta: model = MyModel fields = ["id"] ``` produces the following definition in Swagger: `id > (integer | null)`. However, this definition: ```python class MyModel(ModelSchema): id: int class Meta: model = MyModel ``` produces `id* integer` as expected. The SQL is the default: `id bigint NOT NULL` **Versions (please complete the following information):** - Python version: 3.12.2 - Django version: 5.0.2 - Django-Ninja version: 1.1.0 - Pydantic version: 2.6.4 Probably a duplicate of https://github.com/vitalik/django-ninja/issues/907
open
2024-05-10T09:45:42Z
2025-02-21T19:36:23Z
https://github.com/vitalik/django-ninja/issues/1160
[]
viktorvsk
2
521xueweihan/HelloGitHub
python
2,469
Clipboard
## Recommended Project - Project URL: https://github.com/Slackadays/Clipboard - Category: C++ - Title: Clipboard - a handy command-line clipboard - Description: A small, convenient clipboard tool written in C++23. It gives you a unified clipboard anywhere on the command line and can also be bound to the GUI clipboard. - Highlights: convenient - Example code: (optional) - Screenshot: ![](https://github.com/Slackadays/Clipboard/blob/main/documentation/readme-banners/CBDemo.png?raw=true) - Roadmap:
closed
2023-01-15T07:57:40Z
2023-02-28T13:43:33Z
https://github.com/521xueweihan/HelloGitHub/issues/2469
[ "已发布", "C++ 项目" ]
ChungZH
0
deepfakes/faceswap
deep-learning
894
How can I use multiple GPUs?
python faceswap.py train -A ~/faceswap/faces/trump -B ~/faceswap/faces/cage -m ~/faceswap/trump_cage_model/ How can I use multiple GPUs?
closed
2019-10-05T15:15:14Z
2019-10-05T15:42:10Z
https://github.com/deepfakes/faceswap/issues/894
[]
oracle9i88
0
ultralytics/yolov5
deep-learning
12,523
Structure diagram of YOLOv5
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question Hello, I am using YOLOv5 version 7.0 and want to draw a diagram of its network structure. I see that the Backbone and Head in the yolov5s.yaml file are all Conv modules, but I found that many people use CBS modules when drawing YOLOv5 7.0 architecture diagrams. Is that approach correct, and should the blocks be drawn as Conv, CBS, or something else? ### Additional _No response_
closed
2023-12-18T14:17:04Z
2024-01-28T00:22:07Z
https://github.com/ultralytics/yolov5/issues/12523
[ "question", "Stale" ]
ZCzzzzzz
2
huggingface/datasets
numpy
7,248
ModuleNotFoundError: No module named 'datasets.tasks'
### Describe the bug --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-9-13b5f31bd391> in <cell line: 1>() ----> 1 dataset = load_dataset('knowledgator/events_classification_biotech') 11 frames /usr/local/lib/python3.10/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2130 2131 # Create a dataset builder -> 2132 builder_instance = load_dataset_builder( 2133 path=path, 2134 name=name, /usr/local/lib/python3.10/dist-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs) 1886 raise ValueError(error_msg) 1887 -> 1888 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=dataset_name) 1889 # Instantiate the dataset builder 1890 builder_instance: DatasetBuilder = builder_cls( /usr/local/lib/python3.10/dist-packages/datasets/load.py in get_dataset_builder_class(dataset_module, dataset_name) 246 dataset_module.importable_file_path 247 ) if dataset_module.importable_file_path else nullcontext(): --> 248 builder_cls = import_main_class(dataset_module.module_path) 249 if dataset_module.builder_configs_parameters.builder_configs: 250 dataset_name = dataset_name or dataset_module.builder_kwargs.get("dataset_name") /usr/local/lib/python3.10/dist-packages/datasets/load.py in import_main_class(module_path) 167 def import_main_class(module_path) -> Optional[Type[DatasetBuilder]]: 168 """Import a module at module_path and return its main class: a DatasetBuilder""" --> 169 module = importlib.import_module(module_path) 170 # Find the main class in our imported module 171 module_main_cls = None /usr/lib/python3.10/importlib/__init__.py in import_module(name, package) 124 break 125 level += 1 --> 126 return _bootstrap._gcd_import(name[level:], package, level) 127 128 /usr/lib/python3.10/importlib/_bootstrap.py in _gcd_import(name, package, level) /usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load(name, import_) /usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_) /usr/lib/python3.10/importlib/_bootstrap.py in _load_unlocked(spec) /usr/lib/python3.10/importlib/_bootstrap_external.py in exec_module(self, module) /usr/lib/python3.10/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds) ~/.cache/huggingface/modules/datasets_modules/datasets/knowledgator--events_classification_biotech/9c8086d498c3104de3a3c5b6640837e18ccd829dcaca49f1cdffe3eb5c4a6361/events_classification_biotech.py in <module> 1 import datasets 2 from datasets import load_dataset ----> 3 from datasets.tasks import TextClassification 4 5 DESCRIPTION = """ ModuleNotFoundError: No module named 'datasets.tasks' --------------------------------------------------------------------------- ### Steps to reproduce the bug !pip install datasets from datasets import load_dataset dataset = load_dataset('knowledgator/events_classification_biotech') ### Expected behavior no ModuleNotFoundError ### Environment info google colab
open
2024-10-23T21:58:25Z
2024-10-24T17:00:19Z
https://github.com/huggingface/datasets/issues/7248
[]
shoowadoo
2
pydantic/pydantic-settings
pydantic
192
test_docs_examples fail, the document need to update
When running `pytest`, the following test failure occurred: ``` =================================== FAILURES =================================== __________________ test_docs_examples[docs/index.md:212-246] ___________________ Print output changed code: --- before +++ after @@ -233,225 +233,225 @@ try: Settings() except ValidationError as e: print(e) """ - 2 validation errors for RedisSettings - host + 2 validation errors for Settings + redis.host Field required [type=missing, input_value={'HOST': 'localhost', 'port': 6379}, input_type=dict] For further information visit https://errors.pydantic.dev/2/v/missing - HOST + redis.HOST Extra inputs are not permitted [type=extra_forbidden, input_value='localhost', input_type=str] For further information visit https://errors.pydantic.dev/2/v/extra_forbidden """ =========================== short test summary info ============================ FAILED tests/test_docs.py::test_docs_examples[docs/index.md:212-246] - Failed... =================== 1 failed, 114 passed, 1 skipped in 9.21s =================== ``` Upon reviewing the error, it appears that the `docs/index.md` documentation is incorrect. Therefore, the documentation should be modified as follows: ```patch diff --git a/docs/index.md b/docs/index.md index c1bef85..f88b430 100644 --- a/docs/index.md +++ b/docs/index.md @@ -235,11 +235,11 @@ try: except ValidationError as e: print(e) """ - 2 validation errors for RedisSettings - host + 2 validation errors for Settings + redis.host Field required [type=missing, input_value={'HOST': 'localhost', 'port': 6379}, input_type=dict] For further information visit https://errors.pydantic.dev/2/v/missing - HOST + redis.HOST Extra inputs are not permitted [type=extra_forbidden, input_value='localhost', input_type=str] For further information visit https://errors.pydantic.dev/2/v/extra_forbidden """ ``` Maybe I can make a pull request for this issue?
closed
2023-11-27T12:48:54Z
2023-11-28T09:55:03Z
https://github.com/pydantic/pydantic-settings/issues/192
[]
Xunop
4
pallets/flask
flask
5,288
Refactoring and improving documentation in tox.ini
<!-- Improved the readability and organization of the dependency configuration by grouping all development dependencies from Pallets GitHub repositories under a 'dev' section. This change eliminates redundancy and makes it easier to manage and update development dependencies in the future. --> <!-- This problem is solvable without changes to Flask and should not provide any issues or conflicts! -->
closed
2023-10-05T19:57:47Z
2023-10-20T00:05:38Z
https://github.com/pallets/flask/issues/5288
[]
Ethanqg0
0
Kanaries/pygwalker
matplotlib
232
PyGWalker + Streamlit: JavaScript crashes when trying draw on canvas
Hi there, Currently working on a proof of concept with PyGWalker + Streamlit and we are running into an issue. * In the PyGWalker component, when we drag certain fields into either the x-axis or y-axis, the JavaScript responsible for drawing on the canvas immediately crashes: <img width="2034" alt="Screenshot 2023-09-21 at 1 55 25 PM" src="https://github.com/Kanaries/pygwalker/assets/107717628/acf60e16-42fd-4741-b826-a3e172d647cf"> * Can consistently reproduce the issue in Chrome v116.0.5845.187 and Edge v117.0.2045.35 * Interestingly, Safari 16.5 works fine * Disabling hardware acceleration does not fix it * We see the same console errors in all 3 browsers, regardless if it works or not Any thoughts on what could be causing this or things we should be looking for? Is there a limitation, not documented, with PyGWalker + Streamlit and certain browsers? Appreciate any help!
closed
2023-09-21T21:02:14Z
2023-09-27T15:48:57Z
https://github.com/Kanaries/pygwalker/issues/232
[]
ian-tvt
3
google-deepmind/graph_nets
tensorflow
154
TensorFlow 1 is not supported in Google Colab
graph_nets/demos/shortest_path.ipynb TensorFlow version 1 is not supported in Google Colab anymore, so the code is not running.
open
2023-07-18T11:27:20Z
2023-08-24T10:54:10Z
https://github.com/google-deepmind/graph_nets/issues/154
[]
muratcanterzi
1
mckinsey/vizro
plotly
591
File browser widget
### Which package? vizro ### What's the problem this feature will solve? The widget contains an interactive file browser and returns its state(current path, etc.) and events like clicking a file in it. Other libraries I tried(like Gradio and Streamlit) only provide the feature of uploading and downloading files. But for data visualization, a lot of times I need to deal with directories. ### Describe the solution you'd like The third-party addon [streamlit-file-browser](https://github.com/pragmatic-streamlit/streamlit-file-browser) of Streamlit shows a nice way of implementing it: ![image](https://github.com/user-attachments/assets/4c15173c-75a0-43a3-8a11-85cdae2fa2a1) The widget connects to a local directory or a file server and returns a dict showing its current state, including the path and whether the user clicks a folder or a file. ### Code of Conduct - [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md).
open
2024-07-22T08:14:54Z
2024-07-26T05:50:11Z
https://github.com/mckinsey/vizro/issues/591
[ "Feature Request :nerd_face:", "Custom Components :rocket:" ]
LiShaoyu5
4
plotly/dash
data-visualization
3,210
Devtools UI blocks mantine Notification
This is the app that I used to test this: ``` from dash import Dash, html, Input, Output, dcc import dash_mantine_components as dmc from dash_iconify import DashIconify # external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css'] app = Dash(__name__) import dash_mantine_components as dmc from dash import Output, Input, html, callback """ Add Notifications to your app layout. """ app.layout = dmc.MantineProvider( [ dmc.NotificationProvider(), html.Div(id="notifications-container"), dmc.Button("Show Notification", id="notify"), ] ) @callback( Output("notifications-container", "children"), Input("notify", "n_clicks"), prevent_initial_call=True, ) def show(n_clicks): return dmc.Notification( title="Hey there!", id="simple-notify", action="show", autoClose=False, position="bottom-right", message="Notifications in Dash, Awesome!", icon=DashIconify(icon="ic:round-celebration"), ) if __name__ == '__main__': app.run(debug=True) ``` I tried changing the z-index for the devtools, but it appears that the dmc Notification component uses position static, which means that it ignores z-index. Not sure what the solution is for that! If we switch to a footer for the devtools then this issue can be closed
open
2025-03-11T18:51:38Z
2025-03-17T18:20:48Z
https://github.com/plotly/dash/issues/3210
[ "bug", "P1" ]
marthacryan
0
huggingface/transformers
machine-learning
36,660
[FEAT] [non-CUDA]: Support alternative implementation for `constraints.positive_definite.check`
### Feature request Could there be an alternative implementation for ``` /usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2470: in _init_added_embeddings_weights_with_mean is_covariance_psd = constraints.positive_definite.check(epsilon * covariance).all() ``` the `torch.linalg.cholesky` only exists for CUDA in pytorch. ### Motivation To support vision language embedding model (llava model) on vLLM for ROCm. When I am trying to enable vision_language embedding model support on vLLM for ROCm, I encounter this issue. ``` tests/models/embedding/vision_language/test_llava_next.py:134: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tests/models/embedding/vision_language/test_llava_next.py:63: in _run_test hf_model.model.resize_token_embeddings( /usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2109: in resize_token_embeddings model_embeds = self._resize_token_embeddings(new_num_tokens, pad_to_multiple_of, mean_resizing) /usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2134: in _resize_token_embeddings new_embeddings = self._get_resized_embeddings( /usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2291: in _get_resized_embeddings self._init_added_embeddings_weights_with_mean( /usr/local/lib/python3.12/dist-packages/transformers/modeling_utils.py:2470: in _init_added_embeddings_weights_with_mean is_covariance_psd = constraints.positive_definite.check(epsilon * covariance).all() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = PositiveDefinite() value = tensor([[ 8.4661e-14, -9.3146e-17, 5.4274e-16, ..., -1.2541e-16, 8.1008e-16, 2.6355e-16], [-9.314... [ 2.6355e-16, -5.6042e-16, 5.1984e-16, ..., -1.9993e-16, -2.7124e-16, 8.5429e-14]], device='cuda:0') def check(self, value): sym_check = super().check(value) if not sym_check.all(): return sym_check > return torch.linalg.cholesky_ex(value).info.eq(0) E RuntimeError: Calling torch.linalg.cholesky on a CUDA tensor requires compiling PyTorch with MAGMA. Please use PyTorch built with MAGMA support. ``` the `torch.linalg.cholesky` only exists for CUDA in pytorch. ### Your contribution By helping to test on AMD GPUs with the fix and providing feedback.
open
2025-03-12T09:38:30Z
2025-03-15T18:19:37Z
https://github.com/huggingface/transformers/issues/36660
[ "Feature request" ]
tjtanaa
10
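The transformers issue above asks for a backend-independent alternative to the CUDA-only `torch.linalg.cholesky` used in `constraints.positive_definite.check`. As a rough illustration of the underlying idea only (a sketch, not the transformers or PyTorch implementation; `is_positive_definite` is a hypothetical helper name), a positive-definiteness test can be done by attempting a Cholesky factorization in plain Python:

```python
import math

def is_positive_definite(a):
    """Attempt a Cholesky factorization of a symmetric matrix given as
    nested lists; success implies the matrix is positive definite."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0:  # a non-positive pivot means not positive definite
                    return False
                l[i][i] = math.sqrt(d)
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return True

print(is_positive_definite([[2.0, 1.0], [1.0, 2.0]]))  # True
print(is_positive_definite([[1.0, 2.0], [2.0, 1.0]]))  # False
```

On a real backend the same check would of course be done with a vectorized routine (e.g. an eigenvalue or `cholesky_ex`-style call that the platform supports) rather than a Python loop.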
wagtail/wagtail
django
12,512
Documentation: Elasticsearch how-to guides
### Pertinent section of the Wagtail docs [Advanced topics](https://docs.wagtail.org/en/stable/advanced_topics/index.html) ### Details Extracted from #6798 – The Wagtail docs don't give a lot of details of customising Elasticsearch's behaviour. I propose to add how-to articles for several common search features: - [ ] synonym matching - [ ] annotating results - [ ] result highlights  - [ ] custom boosting ### Working on this <!-- Do you have thoughts on skills needed? Are you keen to work on this yourself once the issue has been accepted? Please let us know here. --> Anyone can contribute to this. There was good progress in #6798, which just needs someone else taking over and reusing some of that content. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
open
2024-10-31T11:13:34Z
2025-03-07T07:28:59Z
https://github.com/wagtail/wagtail/issues/12512
[ "Documentation", "component:Search" ]
thibaudcolas
5
PedroBern/django-graphql-auth
graphql
68
User partial update?
closed
2020-09-26T14:27:40Z
2020-09-26T14:33:54Z
https://github.com/PedroBern/django-graphql-auth/issues/68
[]
bloodwithmilk25
0
serengil/deepface
deep-learning
458
Maybe it's a bug
When the program detects a face, it runs normally and CPU utilization is low (about 12%). But when there is no face in the picture, CPU utilization increases to nearly 40% and the GPU does no work. CPU: AMD Ryzen 7 5800H; GPU: RTX 3070
closed
2022-04-18T08:38:36Z
2022-04-19T22:58:04Z
https://github.com/serengil/deepface/issues/458
[ "question" ]
Decemviri
1
ShishirPatil/gorilla
api
765
[BFCL] Dataset Revamp Initiative
**Describe the issue** There has been constant feedback about dataset ground-truth inconsistency, and our team is undertaking a 2-week initiative to re-scrutinize the V3 dataset issues with several objectives: - Eliminate ground-truth mismatches against user questions. - Polish ambiguous prompts that have unclear user intent, to eliminate biased judgement and saturation. **Proposed Change Tracker** - [ ] live_simple: #737 - [ ] live_multiple: #739 - [ ] live_parallel: #737 - [ ] live_parallel_multiple: #737 - [ ] live_irrelevance: #763 - [ ] live_relevance: #763 - [ ] multi_turn_base: #740 - [ ] multi_turn_miss_func: - [ ] multi_turn_miss_param: - [ ] multi_turn_long_context:
closed
2024-11-15T23:58:57Z
2024-12-09T08:47:42Z
https://github.com/ShishirPatil/gorilla/issues/765
[]
Fanjia-Yan
0
onnx/onnxmltools
scikit-learn
544
Update float16 converter api with auto_convert_mixed_precision
Based on discussion from #543, the feature should be updated to * Ensure auto_convert_mixed_precision supports large models and * Ensure auto_convert_mixed_precision runs reasonably fast when there are no underflow / overflow issues, then * Expose auto_convert_mixed_precision as part of onnxmltools. Stop exposing convert_float_to_float16* as part of onnxmltools
closed
2022-05-02T19:17:01Z
2022-06-06T14:15:34Z
https://github.com/onnx/onnxmltools/issues/544
[]
BowenBao
9
minimaxir/textgenrnn
tensorflow
23
"Dimension 0 in both shapes must be equal" when loading weights made from large data sets
I can successfully train and sample new models, but I am unable to load and continue training weights files saved after training with large data sets. ``` from textgenrnn import textgenrnn tg=textgenrnn(name="dyk") tg=textgenrnn(weights_path='dyk_weights.hdf5', vocab_path='dyk_vocab.json', config_path='dyk_config.json') ``` Running the above gives me the following error: >tensorflow.python.framework.errors_impl.InvalidArgumentError: Dimension 0 in both shapes must be equal, but are 96 and 178. Shapes are [96,100] and [178,100]. for 'Assign_20' (op: 'Assign') with input shapes: [96,100], [178,100] I'm unable to load the weights even by omitting vocab_path and config_path, as in ```tg=textgenrnn('dyk_weights.hdf5')``` "dyk.txt" is about 12.4 million characters, while other sets I've tried out are ~90k - 600k characters. Smaller sets load, sample, and continue just fine.
closed
2018-05-25T22:12:39Z
2018-05-26T03:15:56Z
https://github.com/minimaxir/textgenrnn/issues/23
[]
ataricom
4
HumanSignal/labelImg
deep-learning
68
can't use in OS X with qt5py3
After completing `make qt5py3` without any warnings, running `python3 labeling.py` gives the following: ``` 2017-03-13 13:44:24.522 Python[31249:2190288] *** Assertion failure in -[NSBitmapImageRep initWithCGImage:], /Library/Caches/com.apple.xbs/Sources/AppKit/AppKit-1504.81.100/AppKit.subproj/NSBitmapImageRep.m:1296 2017-03-13 13:44:24.525 Python[31249:2190288] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid parameter not satisfying: cgImage != NULL' ``` How can I fix it?
closed
2017-03-13T05:50:05Z
2017-03-17T15:34:39Z
https://github.com/HumanSignal/labelImg/issues/68
[]
xichangzun
3
akfamily/akshare
data-science
5,120
AKShare 接口问题报告
1. Please first read the documentation for the corresponding interface in detail: https://akshare.akfamily.xyz 2. Operating system version (only 64-bit operating systems are currently supported) 3. Python version (only Python 3.8 and above is currently supported): Python 3.9 5. AKShare version (please upgrade to the latest version): akshare-1.14.56 6. Name of the interface and the corresponding call code. Interface name: bond_futures_deliverable_coupons Call code: import akshare as ak bond_futures_deliverable_coupons_df = ak.bond_futures_deliverable_coupons(trade_date="20240808") print(bond_futures_deliverable_coupons_df) 7. Screenshot or description of the interface error ![image](https://github.com/user-attachments/assets/5fdf28bb-6618-49c2-bf04-958c3ef07174)
closed
2024-08-10T12:35:56Z
2024-08-12T10:18:07Z
https://github.com/akfamily/akshare/issues/5120
[ "bug" ]
Hellohistory
1
StackStorm/st2
automation
5,266
E: Unable to locate package st2 St2 installation Ubuntu 16.04-
## SUMMARY E: Unable to locate package st2 during St2 installation on Ubuntu 16.04. I get this error when I follow the instructions from https://docs.stackstorm.com/install/u16.html# . All the dependencies install successfully until the step to start installing ST2. That's where it fails. I see that ST2 packages are not available at https://packagecloud.io/StackStorm/stable/ubuntu/ . The default script sets https://packagecloud.io/StackStorm/stable/ubuntu/ as the repository, but I could not find anything at this location, which is why we are getting the error; the packages are instead found a level down at https://packagecloud.io/StackStorm/stable . Below is the output I get when I run apt-get install st2: Reading package lists... Done Building dependency tree Reading state information... Done N: Ignoring file 'script.deb.sh' in directory '/etc/apt/sources.list.d/' as it has an invalid filename extension E: Unable to locate package st2 ### STACKSTORM VERSION Paste the output of ``st2 --version``: ##### OS, environment, install method Ubuntu 16.04, manual install method. Distributor ID: Ubuntu Description: Ubuntu 16.04.7 LTS Release: 16.04 Codename: xenial ## Steps to reproduce the problem Pick this OS version and start installing St2 through the one-line install or the manual install
closed
2021-05-13T05:04:45Z
2021-08-21T12:36:07Z
https://github.com/StackStorm/st2/issues/5266
[ "stale" ]
Sudhigalagali
6
sammchardy/python-binance
api
1,503
Freezes when trying to close the socket
**Describe the bug** I use a fairly loaded multiplex websocket, which contains data on 300+ tickers. When updating the ticker list, I need to restart the socket with new subscriptions. Very often, when restarting, the socket simply freezes when the connection is closed. **To Reproduce** A very simplified code example that simply receives one message from a stream and restarts the socket. ```python import asyncio from binance import AsyncClient, BinanceSocketManager import logging logging.basicConfig(level=logging.DEBUG, filename="debug.log",filemode="a", format="%(asctime)s %(levelname)s %(message)s") logger = logging.getLogger() async def test(client, bsm): INTERVAL = '1m' SUBSCRIPTIONS = [] tickers_list = await client.get_ticker() for ticker in tickers_list[:350]: symbol = ticker.get('symbol').lower() SUBSCRIPTIONS.append(f'{symbol}@kline_{INTERVAL}') while True: logger.info('Opening websocket...') ws = bsm.multiplex_socket(SUBSCRIPTIONS) async with ws as stream: while True: msg = await stream.recv() logger.info(msg) break async def run(): client = await AsyncClient.create(testnet=False) bsm = BinanceSocketManager(client) await asyncio.gather(test(client, bsm)) asyncio.run(run()) ``` **Expected behavior** Continuous operation **Environment** - Python version: **3.9.19** - Virtual Env: **no** - OS: **AlmaLinux 9 5.14.0-503.14.1.el9_5.x86_64** - python-binance **1.0.24** **Logs** See the attached file. This time, the hang occurred on the first attempt to restart the socket. [debug.log](https://github.com/user-attachments/files/18033802/debug.log) **Additional context** The hang may occur immediately or after a short period of operation. Adding pauses between closing and opening the socket does not solve the problem.
closed
2024-12-06T06:53:32Z
2024-12-12T05:02:51Z
https://github.com/sammchardy/python-binance/issues/1503
[]
Ramesses3
6
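A common mitigation for the stalled `recv()` described in the python-binance issue above is to wrap it in `asyncio.wait_for`, so a dead connection raises a timeout instead of hanging forever and the caller can close and reopen the socket. The sketch below uses a stubbed stream in place of the real `BinanceSocketManager`; `FakeStream` and `read_with_timeout` are illustrative names, not part of python-binance:

```python
import asyncio

class FakeStream:
    """Stand-in for a websocket stream: yields one message, then stalls."""
    def __init__(self):
        self._sent = False

    async def recv(self):
        if not self._sent:
            self._sent = True
            return {"e": "kline"}
        await asyncio.sleep(3600)  # simulate a connection that never answers

async def read_with_timeout(stream, timeout=0.1):
    """Return the next message, or None if the stream stalls past `timeout`."""
    try:
        return await asyncio.wait_for(stream.recv(), timeout=timeout)
    except asyncio.TimeoutError:
        return None  # caller should close and reopen the socket here

async def main():
    stream = FakeStream()
    return await read_with_timeout(stream), await read_with_timeout(stream)

first, second = asyncio.run(main())
print(first, second)  # {'e': 'kline'} None
```

With the real library, the same pattern would wrap `stream.recv()` inside the `async with ws as stream:` block, breaking out of the inner loop on `None` so the outer loop rebuilds the multiplex socket.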
marcomusy/vedo
numpy
316
Doc Link incorrect
The fxy example from https://vedo.embl.es/ is pointing to https://github.com/marcomusy/vedo/tree/master/examples/pyplot/fxy.py, should be https://github.com/marcomusy/vedo/blob/master/examples/pyplot/plot4_fxy.py right?
closed
2021-02-17T20:22:17Z
2021-03-24T16:36:15Z
https://github.com/marcomusy/vedo/issues/316
[ "fixed" ]
lw1321
1
Yorko/mlcourse.ai
scikit-learn
615
RF cons - unclear statement about correlated features
"If a dataset contains groups of correlated features with similar importance for predicted classes, then the preference will be given to smaller groups." - shall be reformulated and linked to [this work](http://rnowling.github.io/machine/learning/2015/08/11/random-forest-correlation-bias.html).
closed
2019-09-13T19:16:21Z
2019-09-27T14:48:58Z
https://github.com/Yorko/mlcourse.ai/issues/615
[]
Yorko
1
randyzwitch/streamlit-folium
streamlit
227
streamlit-folium: release button not working
I'm trying to implement this code: ``` import folium import streamlit as st from folium.plugins import Draw from streamlit_folium import st_folium m = folium.Map(location=[43.72299, 10.396579], zoom_start=13) Draw(export=True).add_to(m) folium.Marker( location=[43.72299, 10.396579], popup="Torre di Pisa" ).add_to(m) c1, c2 = st.columns(2) with c1: output = st_folium(m, use_container_width=True) with c2: st.write(output) ``` But after clicking on the marker and trying to draw a shape, it seems that the mouse gets stuck, just like what is described here: [Streamlit_folium release button not working](https://discuss.streamlit.io/t/streamlit-folium-release-button-not-working/61894). Do you guys have any idea how to solve this problem? **Debug info:** ``` folium==0.17.0 streamlit==1.38.0 streamlit_folium==0.22.1 ```
closed
2024-10-01T14:23:19Z
2024-10-28T17:50:33Z
https://github.com/randyzwitch/streamlit-folium/issues/227
[ "question" ]
gomesdaciel
2
bigscience-workshop/petals
nlp
367
How to upload to hub and use model later on?
I am following this basic tutorial, and I'm wondering how to save the fine-tuned model and use it later on. For example, in this tutorial we fine-tune a model, but then how can I use it at a later time without having to fine-tune again? https://colab.research.google.com/github/bigscience-workshop/petals/blob/main/examples/prompt-tuning-sst2.ipynb
open
2023-07-18T18:24:47Z
2023-07-23T20:17:44Z
https://github.com/bigscience-workshop/petals/issues/367
[]
ryanshrott
3
CTFd/CTFd
flask
1,958
Add CSV examples for CSV Import
Add CSV examples for CSV Import
closed
2021-07-23T19:42:25Z
2021-07-27T21:03:28Z
https://github.com/CTFd/CTFd/issues/1958
[]
ColdHeat
0
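The CTFd issue above asks for example CSVs to document the import feature. A minimal way to generate one with the standard library is sketched below; the column names (`name`, `email`, `password`) are an assumption for illustration, not CTFd's documented import schema:

```python
import csv
import io

# Hypothetical columns for a users import file; check CTFd's docs
# for the actual schema before using this for real imports.
rows = [
    {"name": "alice", "email": "alice@example.com", "password": "hunter2"},
    {"name": "bob", "email": "bob@example.com", "password": "swordfish"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "email", "password"])
writer.writeheader()
writer.writerows(rows)

example_csv = buf.getvalue()
print(example_csv)
```

The same snippet works for the other import types by swapping in the appropriate field names.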
harry0703/MoneyPrinterTurbo
automation
214
The writing step throws an error
![image](https://github.com/harry0703/MoneyPrinterTurbo/assets/87410957/5b8f8d0f-72d2-4e40-8b04-b5cdeff843c8)
closed
2024-04-09T11:57:12Z
2024-04-10T02:30:37Z
https://github.com/harry0703/MoneyPrinterTurbo/issues/214
[]
soleillight
2
jupyterhub/repo2docker
jupyter
656
Give the Binder repository spec a name
We've had a [few conversations about an open standard for reproducible repositories](https://discourse.jupyter.org/t/creating-a-specification-for-reproducible-repositories/569). I think it'll take a while for that all to settle out, but in the meantime could we name the specification that Binder uses to build a repo's environment. Even if that spec changes, we could start using it in docs, presentations, etc. Something like **The [Binder] Reproducible Repository specification**. Then rather than saying "A Binder-ready repository is a repository that can be built with repo2docker without an error" we could say "A Binder-ready repository is a repository that follows the (link to) Binder reproducible repository specification".
closed
2019-04-29T16:16:58Z
2019-05-05T13:07:00Z
https://github.com/jupyterhub/repo2docker/issues/656
[ "needs: discussion" ]
choldgraf
5
feder-cr/Jobs_Applier_AI_Agent_AIHawk
automation
568
[FEATURE]: make it work on Windows 11
### Feature summary I'd like to use it on Windows 11. ### Feature description It's simple: I don't have a Windows 10 machine, and I really loved the project and wanted to test it out! ### Motivation I don't have a Windows 10 machine. ### Alternatives considered I'm trying to install it on Windows 11 and have tried every combination possible, but I always get the error ResolutionImpossible when installing the dependencies. If there's any workaround possible I'd love to know. ### Additional context _No response_
closed
2024-10-19T19:15:44Z
2024-10-22T23:57:26Z
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/568
[ "enhancement" ]
Niassamond1
3
apache/airflow
python
47,787
Add support to link to a log line by line number
### Description Add support to link to a log by line number similar to how a certain line in Github actions can be linked. I haven't checked fully how it's implemented in other applications but it seems there should be a ref per line and then scroll to the line by ref when line number is present in the url. I am not sure how it scales creating a lot of refs in case of large logs. Ex https://github.com/apache/airflow/actions/runs/13858147476/job/38780330751#step:5:822 ### Use case/motivation This will be useful in getting to the particular line in case of errors and can be shared with others for better debugging and support. ### Related issues _No response_ ### Are you willing to submit a PR? - [x] Yes I am willing to submit a PR! ### Code of Conduct - [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
closed
2025-03-14T14:23:50Z
2025-03-15T16:22:32Z
https://github.com/apache/airflow/issues/47787
[ "area:logging", "kind:feature", "area:UI", "AIP-38" ]
tirkarthi
2
xuebinqin/U-2-Net
computer-vision
234
Few questions about training
Hi @xuebinqin, you've done excellent work, thank you! I'm pretty new to ML, so I apologize for some stupid questions ahead. 1. What GPUs (and how many) did you use to train the latest u2net (basic) model? How long did it take and for how many epochs was it trained? 2. What should I do in order to train a higher-resolution mask output: 512x512 or 1024x1024? Is it enough to just change RescaleT(320) to RescaleT(512) in the dataset loader? 3. What should I do to train more than one output channel? For example, to train not the whole object mask but hair, skin and clothes separately ([link to dataset](https://github.com/switchablenorms/CelebAMask-HQ))? And how do I feed those additional channels as input? 4. Have you used optimizer settings different from those in u2net_train.py? Maybe some LR decay etc.? I've tried to train on DUTS-TR, and after 60-70 epochs the train loss just stops decreasing. After reducing the LR from 0.001 to 0.0001 it did better for some time (and then stops again). Can you give me some ideas on how to make training more stable (automatically)? 5. From your experience, how good is u2net at generating pix2pix-like results with color images? Can it do "sketch to cats" ([sample image](https://phillipi.github.io/pix2pix/images/edges2cats.jpg)) kinds of things? Or is the pix2pix model better for that? If u2net is great - how do I tune the training process to do those "sketch to cats"? Thank you a lot for your time and answers!
closed
2021-07-26T06:47:26Z
2024-08-22T08:33:47Z
https://github.com/xuebinqin/U-2-Net/issues/234
[]
baleksey
1
pytorch/pytorch
python
149,821
`Aborted` error when using `torch.cuda.memory.caching_allocator_delete`
### 🐛 Describe the bug Code: ``` import torch from torch.cuda.memory import caching_allocator_delete torch.cuda.empty_cache() dev_props = torch.cuda.get_device_properties(0) total_memory = dev_props.total_memory allocation = int(total_memory * 0.5) tmp_tensor = torch.empty(allocation, dtype=torch.int8, device='cuda') mem_ptr = tmp_tensor.data_ptr() caching_allocator_delete(mem_ptr) ``` Output: ``` Aborted ``` ### Versions PyTorch 2.6.0 cc @ptrblck @msaroufim @eqy
open
2025-03-23T01:21:43Z
2025-03-24T18:54:37Z
https://github.com/pytorch/pytorch/issues/149821
[ "module: cuda", "triaged", "module: CUDACachingAllocator" ]
default1360
0
davidteather/TikTok-Api
api
1,173
[BUG] - X-Bogus creator script is dead
The X-Bogus JS creator script doesn't work. The creator's repository is currently unavailable (https://github.com/aithedev/X-Bogus). I found the same script here: https://github.com/carcabot/tiktok-signature/blob/master/javascript/xbogus.js What can I do to solve this problem? About the problem: each time I try to run my application, the request to TikTok freezes in the `api.video` function and the timeout is exceeded. ``` Timeout 30000ms exceeded. =========================== logs =========================== navigating to "https://www.tiktok.com/api/item/detail/?aid=1988&app_name=tiktok_web&browser_language=en-US&browser_platform=Win32&count=20&device_id=&device_platform=web_pc&os=windows&priority_region=&referrer=&region=US&screen_height=0&screen_width=0&browser_name=Mozilla&browser_version=5.0+%28Windows+NT+10.0%3B+Win64%3B+x64%29+AppleWebKit%2F537.36+%28KHTML%2C+like+Gecko%29+Chrome%2F110.0.5481.38+Safari%2F537.36+Edg%2F110.0.5481.38&cursor=0&itemId=7054940332990942465&X-Bogus=DFSzswSLsuzANamNtwHAAt9WcBnl", waiting until "load" ============================================================ ``` I guess the problem is in signing.py on line 592.
closed
2024-07-18T13:33:03Z
2024-07-18T15:58:20Z
https://github.com/davidteather/TikTok-Api/issues/1173
[ "bug" ]
Hanokiru
1
AntonOsika/gpt-engineer
python
934
Data Collection: Why?
Seriously, can you explain to me why you collect data from this project, which uses AI services from a company that literally invests billions of dollars? How exactly do you want to improve this system by collecting user input, compared to the company you rely on?
closed
2023-12-24T22:21:59Z
2023-12-25T09:17:05Z
https://github.com/AntonOsika/gpt-engineer/issues/934
[ "question" ]
gh-PonyM
1
milesmcc/shynet
django
28
Fastest way to deploy shynet
This is a really great project; the [simple image tracker script](https://github.com/milesmcc/shynet/blob/master/shynet/analytics/templates/analytics/scripts/page.js) looks awesome. I will run it for my personal websites. I was wondering if [Shynet could be deployed as a Dockerized Heroku App](https://blog.heroku.com/six-strategies-deploy-to-heroku#deploying-with-docker), and if so, whether a quick-deploy button could be created. So I did it. And I think it is working pretty well. ## Deploy Shynet on Heroku You may try it in your account right now. Just go to my [shynet fork](https://github.com/thomasgroch/shynet/blob/master/GUIDE.md#quick-deploy) and press the "Deploy to Heroku" button. After deploying, your app will not work yet: you need to manually fill in the generated database config like so: ``` heroku config:get DATABASE_URL --app=my-app heroku config:set DB_NAME "myDbName" --app=my-app heroku config:set DB_USER "myDbUser" --app=my-app heroku config:set DB_PASSWORD "myDbPassword" --app=my-app heroku config:set DB_HOST "myDbHost" --app=my-app heroku config:set DB_PORT "myDbPort" --app=my-app heroku restart --app my-app ``` ## Gmail SMTP (quick'n dirty) To use your Gmail account as SMTP you need to: [Allow less secure apps](https://support.google.com/accounts/answer/6010255?hl=en) [Display Unlock Captcha](https://accounts.google.com/DisplayUnlockCaptcha) ``` heroku config:set EMAIL_HOST "smtp.gmail.com" --app=my-app heroku config:set EMAIL_HOST_PASSWORD "mypassword" --app=my-app heroku config:set EMAIL_HOST_USER "myusername@gmail.com" --app=my-app heroku config:set EMAIL_PORT "465" --app=my-app heroku restart --app my-app ``` I think it would also be possible to recognize the presence of Heroku's environment variable DATABASE_URL, parse it, and then overwrite the others upon application boot. Does anyone have an interest in making Shynet compatible with Heroku?
closed
2020-05-09T20:08:33Z
2023-08-04T09:33:18Z
https://github.com/milesmcc/shynet/issues/28
[ "enhancement" ]
thomasgroch
6
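The shynet issue above suggests detecting Heroku's `DATABASE_URL` and deriving the individual `DB_*` settings from it at boot. A sketch of that parsing with the standard library, assuming the usual `postgres://user:pass@host:port/name` shape (`split_database_url` is an illustrative helper name, not part of shynet):

```python
from urllib.parse import urlparse

def split_database_url(url):
    """Break a Heroku-style DATABASE_URL into the discrete settings
    a Django-style config expects (key names here are illustrative)."""
    parsed = urlparse(url)
    return {
        "DB_NAME": parsed.path.lstrip("/"),
        "DB_USER": parsed.username,
        "DB_PASSWORD": parsed.password,
        "DB_HOST": parsed.hostname,  # note: urlparse lowercases the hostname
        "DB_PORT": parsed.port,
    }

cfg = split_database_url("postgres://myDbUser:myDbPassword@db.example.com:5432/myDbName")
print(cfg)
```

In a real app this would run only when `DATABASE_URL` is present in `os.environ`, falling back to the explicit `DB_*` variables otherwise.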
biolab/orange3
data-visualization
6,100
Basic information about how data objects in Orange3 are handled in memory / tips for profiling add-on memory performance
<!-- Thanks for taking the time to submit a feature request! For the best chance at our team considering your request, please answer the following questions to the best of your ability. --> ## **What's your use case?** <!-- In other words, what's your pain point? --> <!-- Is your request related to a problem, or perhaps a frustration? --> <!-- Tell us the story that led you to write this request. --> I'm developing add-on for Orange, at now mostly to add features for data preparation (e.g. before the data becomes a Data or DataFrame). The ideal scenario would be have a subset of boring, low level file operations (such as KNIME or pentaho-kettle have) to make data cleaning before it become some sort of a tabular format good enough to be imported traditionally with Orange. > **Boring internals, not really need for this issue** >> The strategy I'm doing to be able to allow raw data preparation before converting to orange is a two tyoes, `FileRAW` and `FileRAWCollection` which mostly only have identifiers which explain how to find the real files (or directory with real files) on disk. In other words, I'm already somewhat using a way to pass the information between widgets, but it still following the philosophy of _"In Linux and UNIX, everything is a file"_ in a literal sense. In this sense, even if eventually we here add features such as using pandas to convert a FileRAW to another FileRAW the end result will release memory as soon as it stops. 
One advantage of this development approach is that the optimizations are mostly generic to how memory is handled with Python (or pandas). >> >> For now, the add-on is able to use the low-level `pandas.read_table`, `pandas.read_csv`, `pandas.read_excel`, `pandas.read_feather`, `pandas.read_fwf`, `pandas.read_html`, `pandas.read_json`, `pandas.json_normalize`, `pandas.read_orc`, `pandas.read_parquet`, `pandas.read_sas`, `pandas.read_spss`, `pandas.read_stata`, `pandas.read_xml` to load a dataframe, and I discovered some functions in your code that convert data frames to the Orange Table format. >> >> Note: it is explicitly out of my plans to "reinvent the wheel" of what Orange3 does. For example, one possible "smart default" (when users reuse workflows from someone else, but their data is now much bigger) would be to slice e.g. 25% or 10% of the data and warn the user to optimize the types before passing it to Orange. That way the user could learn from the next steps what could be optimized in the previous ones, until the data fits in memory. However, my **challenge becomes knowing how Orange deals with memory** before releasing the add-on for general use. For the sake of this issue: I still need to deal with interface "freezing" during long downloads (this article is on my todo list https://orange3.readthedocs.io/projects/orange-development/en/latest/tutorial-responsive-gui.html). **However, as long as the user has disk space, the "importer" allows the user to add gigabyte-size files on disk.** And even for data which would fit 1:1 in memory, using plain pandas without properly optimized data types can easily consume far too much memory. ## **What's your proposed solution?** With all this context said, I think two questions could solve it: 1. 
**Is there some way, like** `import logging; log = logging.getLogger(__name__); log.exception(get_memory_size_of(self.data))`, **in which `get_memory_size_of` is something I can use to inspect the Orange3 internals?** If this alone is not sufficient, maybe there is something you already use here which would list all data object sizes and which widgets created them, or the like? 2. **Is there some general summary of how Orange manages memory?** I assume it reuses the Data from a previous widget as much as possible. I know how computers work at a low level, and I'm comfortable with Python, but not with GUIs or Qt, and I'm aware that long-running scripts (e.g. in Node.js) can leak memory. 1. I think my main question here is: if I generate different objects (such as Data and DataFrame) as outputs of a widget, but the user never attaches the DataFrame output of my widget to another widget, will the way Orange works free the memory of the outputs that are not used by anything else? Is `self.Outputs.data_frame.send(self.data_frame)` smart enough to discard the memory no widget wants? 1. This question is relevant because, if that is the case, I will avoid creating too many outputs for all potential widgets that could make use of them. So for me it would be easier to work around it (even if that takes tens of hours) than to wait for something to be implemented/tested in Orange3. ## **Are there any alternative solutions?** Orange3 is actually quite good at preventing errors in specific widgets from blowing up the entire interface, but this doesn't work for memory-related issues. So I think it is better to make the widgets in this add-on that prepare data for Orange aware of memory, to protect Orange. For now, I think most of what the data preparation steps are doing is a visual frontend for what would be possible with a one-time operation in Python (not just pandas). 
Also, maybe this is a point for another topic, but since `FileRAW` and `FileRAWCollection` just contain code to represent physical files on disk, this strategy could be used as lazy loading for other widgets. I only started extension development around 2 weeks ago, so for now I'm mostly dealing with Qt and the basics, but I think it would be feasible to export `FileRAW` to some Dask object or something you have.
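To make the "optimize the types before passing to Orange" step concrete, here is a minimal sketch of how such an add-on could report a dataframe's real memory footprint and downcast columns before conversion; `profile_and_downcast` is a hypothetical helper name, not an existing Orange3 or pandas API:

```python
import pandas as pd

def profile_and_downcast(df: pd.DataFrame) -> pd.DataFrame:
    """Report the dataframe's deep memory use and return a copy with narrower dtypes."""
    before = df.memory_usage(deep=True).sum()
    out = df.copy()
    for col in out.columns:
        series = out[col]
        if pd.api.types.is_integer_dtype(series):
            out[col] = pd.to_numeric(series, downcast="integer")
        elif pd.api.types.is_float_dtype(series):
            out[col] = pd.to_numeric(series, downcast="float")
        elif pd.api.types.is_object_dtype(series) and series.nunique() < len(series) // 2:
            out[col] = series.astype("category")  # repeated strings: store each value once
    after = out.memory_usage(deep=True).sum()
    print(f"memory: {before} -> {after} bytes")
    return out
```

A widget could show the before/after numbers to the user before the dataframe is handed over for conversion to an Orange `Table`.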
closed
2022-08-19T00:15:11Z
2022-09-10T06:07:48Z
https://github.com/biolab/orange3/issues/6100
[]
fititnt
3
arnaudmiribel/streamlit-extras
streamlit
201
🐛 [BUG] - Streamlit 1.29.0 breaks row vertical align options
### Description Upon upgrading to streamlit 1.29.0 the vertical align options of row are no longer working. ### Reproduction steps ```bash 1. Upgrade to 1.29.0 2. Add items to a row 3. Set vertical_align='bottom' 4. Items are aligned at the top and the vertical_align options don't work anymore. ``` ### Screenshots ```bash ![DESCRIPTION](LINK.png) ``` ### Logs _No response_ ### Version of streamlit 1.29.0 ### Version of streamlit-extras 0.3.4
closed
2023-12-01T10:54:00Z
2023-12-11T18:16:09Z
https://github.com/arnaudmiribel/streamlit-extras/issues/201
[ "bug" ]
windischbauer
1
WZMIAOMIAO/deep-learning-for-image-processing
deep-learning
316
When will multi-label tasks be supported?
**System information** * Have I written custom code: * OS Platform(e.g., window10 or Linux Ubuntu 16.04): * Python version: * Deep learning framework and version(e.g., Tensorflow2.1 or Pytorch1.3): * Use GPU or not: * CUDA/cuDNN version(if you use GPU): * The network you trained(e.g., Resnet34 network): **Describe the current behavior** **Error info / logs**
closed
2021-07-05T11:02:30Z
2021-07-06T06:59:37Z
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/316
[]
shihanyu
5
PaddlePaddle/models
computer-vision
5,414
Can the pretrained ResNet weights for the fluid version still be downloaded?
https://github.com/PaddlePaddle/models/tree/develop/PaddleCV/image_classification
open
2021-12-10T04:39:02Z
2024-02-26T05:08:22Z
https://github.com/PaddlePaddle/models/issues/5414
[]
renmada
0
django-import-export/django-import-export
django
1,157
Examples of importing nested data?
Here's a simple use case. ``` class Address(models.Model): city = models.CharField(blank=True, max_length=255) place = models.CharField(blank=True, max_length=255) state = models.CharField(blank=True, max_length=255) county = models.CharField(blank=True, max_length=255) country = models.CharField(blank=True, max_length=255) country_code = models.CharField(blank=True, max_length=3) class Person(models.Model): name = models.CharField(blank=True, max_length=255) address = models.OneToOneField(Address, related_name='+', blank=True, null=True, on_delete=models.SET_NULL) ... ``` `django-import-export` makes it really straightforward if the `Address` fields are inside the `Person` model, but the documentation doesn't show any examples of importing nested structures like this. How do we make the import create the two objects and set up the relationship? --- Related: https://github.com/django-import-export/django-import-export/issues/571, https://github.com/django-import-export/django-import-export/issues/318
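Not an official django-import-export recipe, but one common approach is a custom `ForeignKeyWidget` whose `clean()` builds the `Address` from the row's address columns via `get_or_create` and returns it as the `Person.address` value. The framework-free splitting logic such a widget would run per row can be sketched as follows (the column names are assumed from the models above):

```python
# Columns that belong to the nested Address object (taken from the models above).
ADDRESS_COLUMNS = ("city", "place", "state", "county", "country", "country_code")

def split_row(row: dict) -> tuple[dict, dict]:
    """Split one flat import row into (address_fields, person_fields)."""
    address = {k: row.get(k, "") for k in ADDRESS_COLUMNS}
    person = {k: v for k, v in row.items() if k not in ADDRESS_COLUMNS}
    return address, person

# Inside a real Resource, a custom widget's clean() would then do roughly:
#   address_obj, _ = Address.objects.get_or_create(**address_fields)
# and return address_obj so it lands on Person.address.
```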
closed
2020-06-22T15:42:13Z
2020-06-25T12:58:34Z
https://github.com/django-import-export/django-import-export/issues/1157
[ "question" ]
charleshan
3
Crinibus/scraper
web-scraping
32
Make a optional flag to add_product.py, so only certain domains gets added to records.py for the new product
E.g. --komplett, --proshop, --computersalg, --elgiganten If --komplett is chosen as a flag in the command line, then komplett is the only domain that gets added under the product-name in records.json. If none of the domain-flags is in the command line, then all of the domain gets added under the product-name in records.json.
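A minimal sketch of how such optional flags could be parsed with `argparse`; the function name and exact CLI shape are assumptions, not the scraper's actual interface:

```python
import argparse

DOMAINS = ("komplett", "proshop", "computersalg", "elgiganten")

def chosen_domains(argv: list[str]) -> list[str]:
    """Return the domains selected by flags; default to all domains when none given."""
    parser = argparse.ArgumentParser(prog="add_product.py")
    for domain in DOMAINS:
        parser.add_argument(f"--{domain}", action="store_true")
    args = parser.parse_args(argv)
    selected = [d for d in DOMAINS if getattr(args, d)]
    return selected or list(DOMAINS)  # no flag: keep today's behavior, add all
```

`chosen_domains(["--komplett"])` then returns just `["komplett"]`, while `chosen_domains([])` adds every domain under the product name in records.json.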
closed
2020-07-27T17:24:09Z
2020-07-27T23:25:46Z
https://github.com/Crinibus/scraper/issues/32
[ "enhancement" ]
Crinibus
0
mitmproxy/mitmproxy
python
7,516
Option allow_hosts breaks https for mitmdump/mitmproxy
#### Problem Description Option allow_hosts option breaks **https** for mitmdump/mitmproxy. #### Steps to reproduce the behavior: Assuming that certificates are installed for mitmproxy: https://docs.mitmproxy.org/stable/concepts-certificates/ When running mitmdump/mitmproxy **without** "allow_hosts" option, then **https** works correctly, **both** for proxy localhost address and target address: `curl --proxy https://localhost:8080/ https://my-service-url` However, when I start mitmdump/mitmproxy **with** "allow_hosts" command-line option, then **https** stops working and I have to use **http** for proxy localhost OR for target address: `curl --proxy http://localhost:8080/ https://my-service-url` also vice-versa works (but **not both** proxy and target can have https): `curl --proxy https://localhost:8080/ http://my-service-url` Conclusion: "allow_hosts" option should not affect the usage of **https**. #### System Information "mitmproxy --version" here: ``` Mitmproxy: 11.1.0 Python: 3.12.7 OpenSSL: OpenSSL 3.4.0 22 Oct 2024 Platform: macOS-15.2-arm64-arm-64bit ```
closed
2025-01-27T16:40:47Z
2025-01-28T16:34:19Z
https://github.com/mitmproxy/mitmproxy/issues/7516
[ "kind/bug", "area/protocols" ]
user1bh
3
snooppr/snoop
web-scraping
85
Full Version Purchase US
Hi, I am very impressed with your project and would like to acquire the full version, but I cannot find on the page how to do so, is there any way I can get it? Thanks, all the best.
closed
2024-01-05T03:11:40Z
2024-01-05T03:24:43Z
https://github.com/snooppr/snoop/issues/85
[ "question" ]
J775w
1
serengil/deepface
deep-learning
1,198
I did what you commented and now this happened
![323515070-7b0934e5-a0f9-4e6b-abbd-d4e580579cbc](https://github.com/serengil/deepface/assets/124982114/d5ef070c-d929-49d5-a73a-36c8d2094fd3) Your instructions in the readme aren't detailed enough, so will you please help me!
closed
2024-04-18T22:08:46Z
2024-04-19T01:08:51Z
https://github.com/serengil/deepface/issues/1198
[ "invalid" ]
olstice
10
junyanz/pytorch-CycleGAN-and-pix2pix
computer-vision
879
How can i save model more frequently?
I need to save the model after a predefined number of iterations; is there a way to do this, or should I wait for the end of an epoch? Can I set the maximum number of iterations?
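For the general pattern (independent of this repo's actual option names, which may already cover this), saving every fixed number of iterations is just a modulo check inside the training loop; `save_fn` here stands in for something like `torch.save(model.state_dict(), ...)`:

```python
def train(num_iters: int, save_every: int, save_fn) -> None:
    """Run num_iters steps, invoking save_fn(step) every save_every iterations."""
    for step in range(1, num_iters + 1):
        # ... one forward/backward/optimizer step would go here ...
        if save_every > 0 and step % save_every == 0:
            save_fn(step)  # e.g. write a checkpoint file named after the step
```

Setting `save_every` to 0 disables intermediate checkpoints, leaving only the usual end-of-epoch save.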
closed
2019-12-17T13:41:34Z
2019-12-17T20:15:31Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/879
[]
GiovanniPasq
1
hankcs/HanLP
nlp
802
Question about the MathTools.java code
<!-- The notes and the version number are required, otherwise no reply. If you want a quick reply, please fill in the template carefully. Thanks for your cooperation. --> ## Notes Please confirm the following: * I have carefully read the documents below without finding an answer: - [Home page docs](https://github.com/hankcs/HanLP) - [wiki](https://github.com/hankcs/HanLP/wiki) - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ) * I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), also without finding an answer. * I understand that the open-source community is a free community gathered out of shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me. * [x] I type x inside these brackets to confirm the items above. ## Version number <!-- For release builds, state the jar file name without the extension; for the GitHub repository version, state whether it is the master or portable branch --> The current latest version is: 1.6.4 The version I am using is: 1.5.4 <!-- the items above are required; below you may write freely --> ## My question Hello, I would like to ask about calculateWeight in MathTools.java: ` double value = -Math.log(dSmoothingPara * frequency / (MAX_FREQUENCY) + (1 - dSmoothingPara) * ((1 - dTemp) * nTwoWordsFreq / frequency + dTemp));` Which algorithm or formula is this line of code based on? <!-- Please describe the problem in detail; the more detail, the more likely it gets solved -->
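Read as a formula (an interpretation, not an official answer), the expression inside `-Math.log` looks like linearly interpolated, Jelinek-Mercer-style smoothing of a bigram probability, with $\lambda$ = `dSmoothingPara`, $\delta$ = `dTemp`, $N$ = `MAX_FREQUENCY`, $f(w_1)$ = `frequency`, and $f(w_1 w_2)$ = `nTwoWordsFreq`:

```latex
\mathrm{value} = -\log\!\left( \lambda\,\frac{f(w_1)}{N} \;+\; (1-\lambda)\left( (1-\delta)\,\frac{f(w_1 w_2)}{f(w_1)} + \delta \right) \right)
```

The first term is a smoothed unigram probability and the second a smoothed bigram conditional probability, so the negative log yields an edge weight for shortest-path segmentation.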
closed
2018-04-20T01:55:42Z
2020-01-01T10:50:26Z
https://github.com/hankcs/HanLP/issues/802
[ "ignored" ]
gunblues
2
PaddlePaddle/PaddleHub
nlp
1,835
chinese_ocr_db_crnn_server reports a memory error after many calls: ResourceExhaustedError: Fail to alloc memory of 524288000 size, error code is 12
PaddleHub2.2.0,PaddlePaddle2.2.1 ocr = hub.Module(name="chinese_ocr_db_crnn_server") results = ocr.recognize_text(images=np_images, use_gpu=False, output_dir="", visualization=True,box_thresh=0.7,text_thresh=0.5) Traceback (most recent call last): File "/data/software/anaconda3/lib/python3.7/site-packages/flask/app.py", line 2091, in __call__ return self.wsgi_app(environ, start_response) File "/data/software/anaconda3/lib/python3.7/site-packages/flask/app.py", line 2076, in wsgi_app response = self.handle_exception(e) File "/data/software/anaconda3/lib/python3.7/site-packages/flask/app.py", line 2073, in wsgi_app response = self.full_dispatch_request() File "/data/software/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1518, in full_dispatch_request rv = self.handle_user_exception(e) File "/data/software/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1516, in full_dispatch_request rv = self.dispatch_request() File "/data/software/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1502, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args) File "/data/yuyuechun/DeepLearningSlideCaptcha/flask/app/routes.py", line 285, in yyzz_ocr visualization=True,box_thresh=0.5,text_thresh=0.5) File "/data/software/anaconda3/lib/python3.7/site-packages/paddlehub/compat/paddle_utils.py", line 220, in runner return func(*args, **kwargs) File "/home/devuser/.paddlehub/modules/chinese_ocr_db_crnn_server/module.py", line 231, in recognize_text images=predicted_data, use_gpu=self.use_gpu, box_thresh=box_thresh) File "/data/software/anaconda3/lib/python3.7/site-packages/paddlehub/compat/paddle_utils.py", line 220, in runner return func(*args, **kwargs) File "/home/devuser/.paddlehub/modules/chinese_text_detection_db_server/module.py", line 207, in detect_text self.predictor.zero_copy_run() MemoryError: In user code: File "tools/export_model.py", line 75, in <module> main() File "tools/export_model.py", line 51, in 
main config, eval_program, startup_prog) File "/paddle/PaddleOCR/PaddleOCR/tools/program.py", line 215, in build_export image, outputs = model(mode='export') File "/paddle/PaddleOCR/PaddleOCR/ppocr/modeling/architectures/det_model.py", line 135, in __call__ conv_feas = self.backbone(image) File "/paddle/PaddleOCR/PaddleOCR/ppocr/modeling/backbones/det_resnet_vd.py", line 75, in __call__ name='conv1_2') File "/paddle/PaddleOCR/PaddleOCR/ppocr/modeling/backbones/det_resnet_vd.py", line 138, in conv_bn_layer bias_attr=False) File "/root/anaconda3/envs/deploy/lib/python3.7/site-packages/paddle/fluid/layers/nn.py", line 1585, in conv2d "data_format": data_format, File "/root/anaconda3/envs/deploy/lib/python3.7/site-packages/paddle/fluid/layer_helper.py", line 43, in append_op return self.main_program.current_block().append_op(*args, **kwargs) File "/root/anaconda3/envs/deploy/lib/python3.7/site-packages/paddle/fluid/framework.py", line 2880, in append_op attrs=kwargs.get("attrs", None)) File "/root/anaconda3/envs/deploy/lib/python3.7/site-packages/paddle/fluid/framework.py", line 1977, in __init__ for frame in traceback.extract_stack(): ResourceExhaustedError: Fail to alloc memory of 524288000 size, error code is 12. [Hint: Expected error == 0, but received error:12 != 0:0.] (at /paddle/paddle/fluid/memory/detail/system_allocator.cc:62) [operator < conv2d > error]
open
2022-04-14T04:34:26Z
2024-02-26T05:02:35Z
https://github.com/PaddlePaddle/PaddleHub/issues/1835
[]
mavisyyc
4
JaidedAI/EasyOCR
machine-learning
983
Using it in a loop causes memory not to be freed
open
2023-04-07T06:28:02Z
2023-04-17T08:49:57Z
https://github.com/JaidedAI/EasyOCR/issues/983
[]
toocf
0