Dataset schema (column, type, value range):

| Column | Type | Range |
| --- | --- | --- |
| repo_name | string | length 9–75 |
| topic | string | 30 classes |
| issue_number | int64 | 1–203k |
| title | string | length 1–976 |
| body | string | length 0–254k |
| state | string | 2 classes |
| created_at | string | length 20 |
| updated_at | string | length 20 |
| url | string | length 38–105 |
| labels | list | length 0–9 |
| user_login | string | length 1–39 |
| comments_count | int64 | 0–452 |
521xueweihan/HelloGitHub
python
2500
iO
AQUALISA DIGITAL TECHNOLOG h Olmsted. OH 1-800-BUY-MOEN www.moen.com MODEL NO: A340 FCC ID: WUZIORAPLUSA 1C: 7706A-TORAPLUSA CO NOM IPX4 13V USE ONLY WITH MOEN DIGITAL VALVES UTILIZAR SOLO CON VALVULAS DIGITALES UTILISER UNTOUEMENT AVEC MON DIGITAL VALVES
closed
2023-02-16T19:28:23Z
2023-02-24T08:18:54Z
https://github.com/521xueweihan/HelloGitHub/issues/2500
[]
ana585
0
LAION-AI/Open-Assistant
machine-learning
3326
accelerate version issue
huggingface's accelerate was updated from v0.19.0 to v0.20.0 and `logging_dir` disappeared from the `__init__` method of the Accelerator class, so the above error occurs. ![image](https://github.com/LAION-AI/Open-Assistant/assets/85441953/b08eb416-dc2b-4fb6-81bc-0b22a5a24dd6) https://github.com/huggingface/accelerate/blob/baebae3bbecbea05d721a50917f352cccd14811e/src/accelerate/accelerator.py#L242-L244 OA doesn't pin a version https://github.com/LAION-AI/Open-Assistant/blob/0fcf3e08fe62295d4696e590005b0f33383342ea/model/pyproject.toml#L12-L13 However, the trlx library that actually runs the accelerator does pin it, as shown below. ![image](https://github.com/LAION-AI/Open-Assistant/assets/85441953/6381d051-ffed-4fc4-b1db-aabd3efa8500) https://github.com/CarperAI/trlx/blob/0dce99d96b7d70b6a9114129d8e38bf6c80eb653/requirements.txt#L1-L2 Of course, the trlx library has errors of its own. However, if OA is going to depend on the trlx library, I think it is necessary to read trlx's requirements.txt and install the same versions specified there.
open
2023-06-08T02:15:08Z
2023-06-19T01:53:49Z
https://github.com/LAION-AI/Open-Assistant/issues/3326
[ "bug", "ml" ]
idisuu
2
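The Open-Assistant record above boils down to a constructor kwarg that was removed in accelerate v0.20.0. A minimal, hypothetical sketch of how calling code could guard against that (helper names are illustrative, not OA's or accelerate's actual code):

```python
# Hypothetical guard for the breaking change described above: only forward
# `logging_dir` to Accelerator(**kwargs) when accelerate is older than 0.20.0.
def parse_version(v):
    """Turn a plain 'X.Y.Z' version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def accelerator_kwargs(accelerate_version, logging_dir):
    """`logging_dir` was removed from Accelerator.__init__ in v0.20.0."""
    kwargs = {}
    if parse_version(accelerate_version) < (0, 20, 0):
        kwargs["logging_dir"] = logging_dir
    return kwargs

print(accelerator_kwargs("0.19.0", "./logs"))  # {'logging_dir': './logs'}
print(accelerator_kwargs("0.20.0", "./logs"))  # {}
```

Pinning the dependency (as trlx's requirements file does) sidesteps the branch entirely.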
robotframework/robotframework
automation
4848
Use term "compound keyword" instead of "user keyword"
Robot can have two kinds of keywords. Lowest level ones are implemented in libraries using Python and higher level ones can be created using Robot's own syntax. We currently call these keywords "library keywords" and "user keywords", respectively. The former name is fine, it makes it clear that these keywords are implemented in libraries, but the latter name is a bit odd. Users creating tests may not have created "user keywords" they are using or they may have also created "library keywords". I believe that term "compound keyword" would be better than "user keyword". It doesn't concentrate on who has created the keyword, but instead makes it clear that the keyword is a combination of different parts. If others have even better ideas, please propose them in comments! The main reason I believe we should change the term now is that as part of adding `start/end_keyword` methods to the listener API v3 (#3296), we are going to expose also model objects representing user/compound keywords. We probably need to add separate methods for them and I wouldn't like to add `start/end_user_keyword` if the term would be changed in the future. Adding `start/end_library_keyword` and `start/end_compound_keyword` sounds much better to me. In addition to the method name, the term will affect the names of the model objects. This issues consists of the following tasks: - [ ] Agreeing on the term to use. - [ ] Updating the User Guide - [ ] Making sure existing public APIs use the new term in code and in documentation. New public APIs obviously need to use the new term as well, but that is covered by their respective issues. Internal code can be updated when otherwise modified.
closed
2023-08-25T19:37:30Z
2024-09-24T13:26:34Z
https://github.com/robotframework/robotframework/issues/4848
[ "enhancement", "priority: medium" ]
pekkaklarck
17
robinhood/faust
asyncio
320
1.5.0 - Application hangs after "Elected group leader -- performing partition assignments using faust"
## Checklist - [X] I have included information about relevant versions - [X] I have verified that the issue persists when using the `master` branch of Faust. ## Steps to reproduce Run the Hello World example on the 1.5.0/master branch release ## Expected behavior Application should run normally, saying "hello" once every second ## Actual behavior Application hangs after "Elected group leader -- performing partition assignments using faust" ``` [2019-03-24 11:00:18,653: INFO]: Discovered coordinator 0 for group hello-app [2019-03-24 11:00:18,654: INFO]: Revoking previously assigned partitions set() for group hello-app [2019-03-24 11:00:19,635: INFO]: (Re-)joining group hello-app [2019-03-24 11:00:19,635: DEBUG]: Sending JoinGroup (JoinGroupRequest_v2(group='hello-app', session_timeout=60000, rebalance_timeout=60000, member_id='', protocol_type='consumer', group_protocols=[(protocol_na me='faust', protocol_metadata=b'\x00\x04\x00\x00\x00\x02\x00\x1dhello-app-__assignor-__leader\x00\x0bhello-topic\x00\x00\x00|{"assignment": {"actives": {}, "standbys": {}}...')])) to coordinator 0 [2019-03-24 11:00:19,637: DEBUG]: <AIOKafkaConnection host=135.1.219.46 port=9092> Request 2: JoinGroupRequest_v2(group='hello-app', session_timeout=60000, rebalance_timeout=60000, member_id='', protocol_type= 'consumer', group_protocols=[(protocol_name='faust', protocol_metadata=b'\x00\x04\x00\x00\x00\x02\x00\x1dhello-app-__assignor-__leader\x00\x0bhello-topic\x00\x00\x00|{"assignment": {"actives": {}, "standbys": {}}...')]) [2019-03-24 11:00:19,639: DEBUG]: <AIOKafkaConnection host=135.1.219.46 port=9092> Response 2: JoinGroupResponse_v2(throttle_time_ms=0, error_code=0, generation_id=27, group_protocol='faust', leader_id='faust- 1.5.0-ef439cad-972b-4e67-93d0-da494e26e1f5', member_id='faust-1.5.0-ef439cad-972b-4e67-93d0-da494e26e1f5', members=[(member_id='faust-1.5.0-ef439cad-972b-4e67-93d0-da494e26e1f5', member_metadata=b'\x00\x04\x00 
\x00\x00\x02\x00\x1dhello-app-__assignor-__leader\x00\x0bhello-topic\x00\x00\x00|{"assignment": {"actives": {}, "standbys": {}}...')]) [2019-03-24 11:00:19,641: DEBUG]: Join group response JoinGroupResponse_v2(throttle_time_ms=0, error_code=0, generation_id=27, group_protocol='faust', leader_id='faust-1.5.0-ef439cad-972b-4e67-93d0-da494e26e1f 5', member_id='faust-1.5.0-ef439cad-972b-4e67-93d0-da494e26e1f5', members=[(member_id='faust-1.5.0-ef439cad-972b-4e67-93d0-da494e26e1f5', member_metadata=b'\x00\x04\x00\x00\x00\x02\x00\x1dhello-app-__assignor- __leader\x00\x0bhello-topic\x00\x00\x00|{"assignment": {"actives": {}, "standbys": {}}...')]) [2019-03-24 11:00:19,641: INFO]: Joined group 'hello-app' (generation 27) with member_id faust-1.5.0-ef439cad-972b-4e67-93d0-da494e26e1f5 [2019-03-24 11:00:19,641: INFO]: Elected group leader -- performing partition assignments using faust [2019-03-24 11:00:19,641: DEBUG]: Performing assignment for group hello-app using strategy faust with subscriptions {'faust-1.5.0-ef439cad-972b-4e67-93d0-da494e26e1f5': ConsumerProtocolMemberMetadata(version=4 , subscription=['hello-app-__assignor-__leader', 'hello-topic'], user_data=b'{"assignment": {"actives": {}, "standbys": {}}, "url": "http://bobh:6066", "changelog_distribution":...')}``` # Versions * Python version - 3.6.5 * Faust version - 1.5.0 * Operating system - Linux * Kafka version - 1.1.1 * RocksDB version (if applicable) N/A I discovered this issue with my application after upgrading from 1.4.9 to 1.5.0, and decided to try using the Hello World application to see if it was something in my app. I also changed the application name and topic name in the Hello World app to see if it was a problem with re-connecting to an existing set of topics, but that didn't help. I used python -m trace to get more information and found that aiokafka/consumer/group_coordinator.py was catching an exception that was not being reraised. 
When I modified that code to raise the exception I found that in faust/assignor/partition_assignor.py there is code that sets span to nullcontext() and then tries to call span.set_tag(), which raises an exception because nullcontext() has no set_tag() attribute. That exception is being caught by group_coordinator.py and not raised so everything hangs. When I comment out the call to span.set_tag() it works fine. I'll see if I can figure out the right fix and push a PR.
closed
2019-03-24T16:08:47Z
2019-03-25T04:50:17Z
https://github.com/robinhood/faust/issues/320
[]
bobh66
0
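The faust record above describes a `nullcontext()` stand-in that lacks `set_tag()`. A minimal sketch of the failure mode and one possible fix, assuming a tracing-style API; the class and function names are illustrative, not faust's actual code:

```python
# Illustrative reconstruction of the bug: when tracing is off, the assignor
# sets `span = nullcontext()` but later calls span.set_tag(), which the bare
# nullcontext does not have. A no-op span that swallows tracing calls fixes it.
from contextlib import nullcontext

class NoopSpan(nullcontext):
    """A nullcontext that also accepts (and ignores) tracing calls."""
    def set_tag(self, key, value):
        pass  # tracing disabled: ignore the tag

def assign_partitions(span):
    """Stand-in for the assignor code path described in the issue."""
    with span:
        span.set_tag("group", "hello-app")  # AttributeError on bare nullcontext
    return "assigned"

print(assign_partitions(NoopSpan()))  # assigned
```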
predict-idlab/plotly-resampler
plotly
82
Django support (django-plotly-dash library)
How does this library integrate with Django? (Issue indicated by @thanosam in #45)
open
2022-06-22T18:50:02Z
2023-04-19T15:20:53Z
https://github.com/predict-idlab/plotly-resampler/issues/82
[]
jvdd
8
tiangolo/uvicorn-gunicorn-fastapi-docker
fastapi
80
Python 3.9 version
Hi, Is there a plan to get Python 3.9 version support?
closed
2021-03-26T21:23:13Z
2022-11-27T20:38:46Z
https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/80
[]
SNavgale
3
noirbizarre/flask-restplus
flask
547
Could anyone help me figure out how to list all of the API routes and URLs?
Hello, I just want to list all of the URLs of the API, just as we can see on the Swagger web page in the browser, and save all of them into a dict. I found these approaches: http://flask.pocoo.org/snippets/117/ and https://stackoverflow.com/questions/13317536/get-a-list-of-all-routes-defined-in-the-app. Could anyone share a more elegant method for this in the flask_restplus package? Greatly appreciated, Oliver
closed
2018-11-01T02:32:50Z
2018-12-18T08:30:11Z
https://github.com/noirbizarre/flask-restplus/issues/547
[]
hanleilei
2
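For the flask-restplus question above, plain Flask already exposes every registered route through `app.url_map`; a short sketch (standard Flask API, nothing flask-restplus specific):

```python
# Sketch: collect every registered route into a dict, as the linked snippets do.
from flask import Flask

app = Flask(__name__)

@app.route("/api/items")
def items():
    return "[]"

def list_routes(app):
    """Map each URL rule to its allowed methods (HEAD/OPTIONS omitted)."""
    return {
        str(rule): sorted(rule.methods - {"HEAD", "OPTIONS"})
        for rule in app.url_map.iter_rules()
    }

print(list_routes(app))  # includes '/api/items' and Flask's built-in /static rule
```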
mars-project/mars
scikit-learn
2965
[BUG] when DataFrame.apply encounters an exception, the trace is not clear
Mars 0.8.5 test code: ``` import mars import mars.dataframe as md mars.new_session() def f(series): raise ValueError('I error here') df = md.DataFrame({"A":[1,2], "B":[3,4]}) outputs = df.apply(f, axis=1, dtype=object, name='output', output_type='series') ``` trace: ``` Web service started at http://0.0.0.0:56975 --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-15-67543ca07b21> in <module> 8 9 df = md.DataFrame({"A":[1,2], "B":[3,4]}) ---> 10 outputs = df.apply(f, axis=1, dtype=object, name='output', output_type='series') /usr/local/anaconda3/lib/python3.8/site-packages/mars/dataframe/base/apply.py in df_apply(df, func, axis, raw, result_type, args, dtypes, dtype, name, output_type, index, elementwise, **kwds) 672 elementwise=elementwise, 673 ) --> 674 return op(df, dtypes=dtypes, dtype=dtype, name=name, index=index) 675 676 /usr/local/anaconda3/lib/python3.8/site-packages/mars/core/mode.py in _inner(*args, **kwargs) 75 def _inner(*args, **kwargs): 76 with enter_mode(**mode_name_to_value): ---> 77 return func(*args, **kwargs) 78 79 else: /usr/local/anaconda3/lib/python3.8/site-packages/mars/dataframe/base/apply.py in __call__(self, df_or_series, dtypes, dtype, name, index) 456 457 if df_or_series.op.output_types[0] == OutputType.dataframe: --> 458 return self._call_dataframe(df_or_series, dtypes=dtypes, index=index) 459 else: 460 return self._call_series( /usr/local/anaconda3/lib/python3.8/site-packages/mars/dataframe/base/apply.py in _call_dataframe(self, df, dtypes, dtype, name, index) 333 for arg, desc in zip((self.output_types, dtypes), ("output_types", "dtypes")): 334 if arg is None: --> 335 raise TypeError( 336 f"Cannot determine {desc} by calculating with enumerate data, " 337 "please specify it as arguments" TypeError: Cannot determine dtypes by calculating with enumerate data, please specify it as arguments ``` I want df.apply return series type, each row will emit one 
string type, and I specify output_type='series'. I think the behavior where Mars still calls into the `_call_dataframe` method is misleading.
closed
2022-04-26T06:49:36Z
2022-04-28T07:31:57Z
https://github.com/mars-project/mars/issues/2965
[ "type: bug", "mod: dataframe" ]
dlee992
1
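For reference on the Mars report above, here is the behavior the reporter expects, sketched in plain pandas (an assumption: pandas is a fair reference point, since Mars mirrors its `apply` signature): one scalar per row with `axis=1` yields a Series without extra hints.

```python
# pandas reference behavior for a row-wise apply that emits one value per row.
import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
out = df.apply(lambda row: f"{row.A}-{row.B}", axis=1)
print(type(out).__name__, out.tolist())  # Series ['1-3', '2-4']
```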
allure-framework/allure-python
pytest
186
INTERNALERROR UnicodeDecodeError occurs when a test function is decorated with allure.step and uses assertpy to compare binary data that is not equal
#### I'm submitting a ... - [ ] bug report - [ ] feature request - [ ] support request => Please do not submit support request here, see note at the top of this template. #### What is the current behavior? An INTERNALERROR UnicodeDecodeError occurs when a test function is decorated with allure.step and the function uses assertpy to compare binary data that is not equal. #### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem First install the assertpy package.
Create file `test_assertpy_error.py` with content: ``` # -*- coding: utf-8 -*- from assertpy import assert_that import allure @allure.step(u'Функция 1') def test_1(): assert_that('0\x82').is_equal_to(1) ``` Then run it: `pytest test_assertpy_error.py --alluredir ./areport_assertpy` You will got an error: ``` $ pytest test_assertpy_error.py --alluredir ./areport_assertpy ================================================================================= test session starts ================================================================================= platform linux2 -- Python 2.7.14, pytest-3.3.1, py-1.5.2, pluggy-0.6.0 rootdir: /home/bomber/Dropbox/Work/Git/PyTestError, inifile: pytest.ini plugins: allure-pytest-2.2.4b1 collected 1 item test_assertpy_error.py F [100%] INTERNALERROR> Traceback (most recent call last): INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/_pytest/main.py", line 103, in wrap_session INTERNALERROR> session.exitstatus = doit(config, session) or 0 INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/_pytest/main.py", line 141, in _main INTERNALERROR> config.hook.pytest_runtestloop(session=session) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/__init__.py", line 617, in __call__ INTERNALERROR> return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/__init__.py", line 222, in _hookexec INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/__init__.py", line 216, in <lambda> INTERNALERROR> firstresult=hook.spec_opts.get('firstresult'), INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/callers.py", line 201, in _multicall INTERNALERROR> return 
outcome.get_result() INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/callers.py", line 77, in get_result INTERNALERROR> _reraise(*ex) # noqa INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/callers.py", line 180, in _multicall INTERNALERROR> res = hook_impl.function(*args) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/_pytest/main.py", line 164, in pytest_runtestloop INTERNALERROR> item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/__init__.py", line 617, in __call__ INTERNALERROR> return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/__init__.py", line 222, in _hookexec INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/__init__.py", line 216, in <lambda> INTERNALERROR> firstresult=hook.spec_opts.get('firstresult'), INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/callers.py", line 196, in _multicall INTERNALERROR> gen.send(outcome) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/allure_pytest/listener.py", line 94, in pytest_runtest_protocol INTERNALERROR> self.allure_logger.close_test(uuid) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/allure_commons/reporter.py", line 73, in close_test INTERNALERROR> plugin_manager.hook.report_result(result=test_case) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/__init__.py", line 617, in __call__ INTERNALERROR> return self._hookexec(self, self._nonwrappers + 
self._wrappers, kwargs) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/__init__.py", line 222, in _hookexec INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/__init__.py", line 216, in <lambda> INTERNALERROR> firstresult=hook.spec_opts.get('firstresult'), INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/callers.py", line 201, in _multicall INTERNALERROR> return outcome.get_result() INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/callers.py", line 77, in get_result INTERNALERROR> _reraise(*ex) # noqa INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/pluggy/callers.py", line 180, in _multicall INTERNALERROR> res = hook_impl.function(*args) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/allure_commons/logger.py", line 34, in report_result INTERNALERROR> self._report_item(result) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/local/lib/python2.7/site-packages/allure_commons/logger.py", line 28, in _report_item INTERNALERROR> json_file.write(unicode(json.dumps(data, indent=indent, ensure_ascii=False, encoding='utf8'))) INTERNALERROR> File "/usr/lib/python2.7/json/__init__.py", line 251, in dumps INTERNALERROR> sort_keys=sort_keys, **kw).encode(obj) INTERNALERROR> File "/usr/lib/python2.7/json/encoder.py", line 207, in encode INTERNALERROR> chunks = self.iterencode(o, _one_shot=True) INTERNALERROR> File "/usr/lib/python2.7/json/encoder.py", line 270, in iterencode INTERNALERROR> return _iterencode(o, 0) INTERNALERROR> File "/usr/lib/python2.7/json/encoder.py", line 233, in _encoder INTERNALERROR> o = o.decode(_encoding) INTERNALERROR> File "/home/bomber/.virtualenvs/pytest33/lib/python2.7/encodings/utf_8.py", line 16, in 
decode INTERNALERROR> return codecs.utf_8_decode(input, errors, True) INTERNALERROR> UnicodeDecodeError: 'utf8' codec can't decode byte 0x82 in position 27: invalid start byte ============================================================================== 1 failed in 0.03 seconds =============================================================================== ``` #### What is the expected behavior? Error should not occur. Allure must handle such case. #### What is the motivation / use case for changing the behavior? #### Please tell us about your environment: - Python version: 2.7.14 - Test framework: pytest@3.3.1 - Allure adaptor: allure-pytest@2.2.4b1 - Other packages: assertpy@0.12
closed
2017-12-10T09:41:29Z
2018-07-04T22:40:25Z
https://github.com/allure-framework/allure-python/issues/186
[]
RockBomber
0
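The allure record above fails because raw bytes reach `json.dumps` under Python 2's strict utf-8 decoding. A sketch of one defensive approach in modern Python 3 (`safe_text` is a hypothetical helper, not allure's actual code):

```python
# Hypothetical defensive helper: make any value safe for json.dumps by
# decoding bytes with errors="backslashreplace" instead of strict utf-8.
import json

def safe_text(value, encoding="utf-8"):
    """Return a str that json.dumps can always serialize."""
    if isinstance(value, bytes):
        return value.decode(encoding, errors="backslashreplace")
    return value

record = {"message": safe_text(b"0\x82")}  # the 0x82 byte from the reproducer
print(json.dumps(record))  # serializes instead of raising UnicodeDecodeError
```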
modin-project/modin
pandas
6768
`to_numpy` doesn't use `**kwargs` after #6704 in case of narrow dataframe
closed
2023-11-26T23:26:53Z
2023-11-27T18:31:37Z
https://github.com/modin-project/modin/issues/6768
[ "bug 🦗", "P1" ]
anmyachev
0
graphistry/pygraphistry
pandas
215
[BUG] readthedocs broken on pyarrow 3
See https://issues.apache.org/jira/browse/ARROW-11564 Blocking 0.16+ readthedocs docs
closed
2021-02-09T02:08:10Z
2021-02-14T11:42:10Z
https://github.com/graphistry/pygraphistry/issues/215
[ "bug" ]
lmeyerov
1
junyanz/pytorch-CycleGAN-and-pix2pix
pytorch
1027
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it.
open
2020-05-14T07:45:35Z
2020-06-10T07:14:03Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1027
[]
bu-fan-jun
1
iMerica/dj-rest-auth
rest-api
175
How to override verification email
I want to send a customized verification email. I find no documentation about how to do this. We want to translate from English to Spanish. Please, help with some indications!
open
2020-11-23T06:04:48Z
2024-04-06T17:20:57Z
https://github.com/iMerica/dj-rest-auth/issues/175
[]
rojeda24
7
sherlock-project/sherlock
python
2421
Requesting support for: framapiaf.org
### Site URL https://framapiaf.org ### Additional info ### framapiaf.org - Link to the site main page: https://framapiaf.org - Link to an existing account: https://framapiaf.org/@pylapp - Link to a nonexistent account: https://framapiaf.org/@noonewouldeverusethis42 ### Code of Conduct - [x] I agree to follow this project's Code of Conduct
open
2025-03-05T12:13:57Z
2025-03-09T19:42:07Z
https://github.com/sherlock-project/sherlock/issues/2421
[ "site support request" ]
pylapp
2
tensorflow/tensor2tensor
deep-learning
1563
Looking for parameters used to create RL ALE pretrained results
### Description Ref: ReadMe at https://github.com/tensorflow/tensor2tensor/tree/master/tensor2tensor/rl There are results for the model based policy training at gs://tensor2tensor-checkpoints/modelrl_experiments/train_sd/. In particular, the results for pong game 1 are at gs://tensor2tensor-checkpoints/modelrl_experiments/train_sd/142. I am trying to reproduce the results but have not been successful. What is the set of parameters used to create the pretrained policy? I'm looking for the individual parameters that would be needed to run trainer_model_based.py. Something like the following, but I did that and the results did not match the results documented in "Model Based Reinforcement Learning for Atari." !python -m tensor2tensor.rl.trainer_model_based.py \ --loop_hparams_set=rlmb_long_stochastic_discrete \ --loop_hparams=game=$game,eval_max_num_noops=8,eval_sampling_temps=[0.5] \ --policy_dir=$run_dir/policy \ --eval_metrics_dir=pong_pretrained \ --debug_video_path=pong_pretrained \ --num_debug_videos=4 ### Environment information ``` OS: <your answer here> $ pip freeze | grep tensor # your output here $ python -V # your output here ``` ### For bugs: reproduction and error logs ``` # Steps to reproduce: ... ``` ``` # Error logs: ... ```
open
2019-04-30T14:03:05Z
2019-04-30T14:03:05Z
https://github.com/tensorflow/tensor2tensor/issues/1563
[]
nesharma11235
0
autokey/autokey
automation
1029
NEW FEATURE: possibility to launch bash script
### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland? Xorg ### Has this issue already been reported? - [x] I have searched through the existing issues. ### Is this a question rather than an issue? - [x] This is not a question. ### What type of issue is this? Enhancement ### Choose one or more terms that describe this issue: - [ ] autokey triggers - [ ] autokey-gtk - [ ] autokey-qt - [ ] beta - [ ] bug - [ ] critical - [ ] development - [ ] documentation - [x] enhancement - [ ] installation/configuration - [ ] phrase expansion - [x] scripting - [ ] technical debt - [ ] user interface ### Other terms that describe this issue if not provided above: bash ### Which Linux distribution did you use? Not relevant, but Gentoo Linux ### Which AutoKey GUI did you use? GTK ### Which AutoKey version did you use? 0.96 ### How did you install AutoKey? pipx ### Can you briefly describe the issue? Similar to the existing Python script support, it would be really great to have bash script support too. I wrote a specific macro fully in bash and was stuck on assigning a key binding to launch it. The key binding was essential, as the macro is a workflow on a software entry that is under the mouse pointer. I finally added the key binding at the window manager level (Openbox in my case), so I now have some macros in AutoKey and others in bash scripts with key bindings through the window manager. It would be really great to be able to launch bash scripts in AutoKey. In fact I'm surprised that Python was the first language implemented, as bash should be more trivial (I suppose). Thanks to all the devs for this magic software that is AutoKey ;) ### Can the issue be reproduced? None ### What are the steps to reproduce the issue? _No response_ ### What should have happened? _No response_ ### What actually happened? _No response_ ### Do you have screenshots? _No response_ ### Can you provide the output of the AutoKey command?
```bash ``` ### Anything else? _No response_
closed
2025-03-13T22:12:38Z
2025-03-15T03:10:03Z
https://github.com/autokey/autokey/issues/1029
[ "enhancement", "wontfix", "scripting", "user support" ]
zen2
7
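Until the feature requested in the AutoKey record above exists, the usual workaround is a one-line AutoKey Python script that simply shells out to the bash script; a sketch (the helper name is illustrative, not AutoKey API):

```python
# Hypothetical AutoKey script body: bind a hotkey to this Python snippet
# and it delegates the whole macro to an external bash script.
import subprocess

def run_bash_script(path, *args):
    """Launch a bash script and return its exit code."""
    return subprocess.run(["bash", path, *args]).returncode
```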
aws/aws-sdk-pandas
pandas
2124
`date` type columns have `object` data type in df returned by athena.read_sql_query
### Describe the bug Columns of type `date` are mapped to the `object` datatype whereas columns of type `timestamp` are correctly mapped to `datetime64[ns]` in the dataframe returned by `athena.read_sql_query` I did do some tracing and interestingly the dataframe returned by `s3.read_csv` [here](https://github.com/aws/aws-sdk-pandas/blob/f8d419bb3e7cbd5327bb3c772e74e71ecc6350fa/awswrangler/athena/_read.py#L172) has the correct/expected datatype `datetime64[ns]` for `date` columns, but then following the invocation of the function `_fix_csv_types` [here](https://github.com/aws/aws-sdk-pandas/blob/f8d419bb3e7cbd5327bb3c772e74e71ecc6350fa/awswrangler/athena/_read.py#L188) the dataframe has the datatype `object`. ### How to Reproduce ```python query = ''' select timestamp_column, date_column from table;''' query_results = wr.athena.read_sql_query( sql=query, database=database_name, ctas_approach=False, ) print(query_results.dtypes) ``` ``` timestamp_column datetime64[ns] date_column object dtype: object ``` ### Expected behavior Columns of type `date` or `timestamp` should be mapped to `datetime64[ns]` in the dataframe returned by `athena.read_sql_query` ### Your project _No response_ ### Screenshots _No response_ ### OS macOS 13.2.1 ### Python version 3.9.16 ### AWS SDK for pandas version 2.19.0 ### Additional context _No response_
closed
2023-03-17T23:08:48Z
2023-03-29T14:06:42Z
https://github.com/aws/aws-sdk-pandas/issues/2124
[ "bug" ]
sherif-fanous
18
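A user-side workaround sketch for the aws-sdk-pandas record above (it assumes you know which result columns are Athena `date` columns; `coerce_date_columns` is a hypothetical helper, not part of the library):

```python
# Coerce object-dtype date columns back to datetime64[ns] after read_sql_query.
import datetime
import pandas as pd

def coerce_date_columns(df, date_cols):
    """Convert object-dtype date columns to datetime64[ns] in place."""
    for col in date_cols:
        df[col] = pd.to_datetime(df[col])
    return df

df = pd.DataFrame({"date_column": [datetime.date(2023, 3, 17)]})
print(df["date_column"].dtype)            # object, as in the report
df = coerce_date_columns(df, ["date_column"])
print(df["date_column"].dtype)            # datetime64[ns]
```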
deezer/spleeter
tensorflow
909
[Discussion] GPU OOM For the long recording
As described above, for long recordings of 1~2 hours the GPU memory will OOM. Do we have any plan to address this scenario? For example, split the recording beforehand and merge the parts back together when all of them are processed. Thanks!
open
2024-10-01T10:17:52Z
2024-10-01T15:22:12Z
https://github.com/deezer/spleeter/issues/909
[ "question" ]
Ylw2014
1
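The split-and-merge idea proposed in the spleeter record above can be sketched as a windowing helper (the names and the overlap strategy are illustrative assumptions, not spleeter's API):

```python
def chunk_bounds(total_seconds, chunk_seconds, overlap_seconds=0):
    """Yield (start, end) windows covering the whole recording, optionally
    overlapping so the merged stems can be cross-faded at the seams."""
    start = 0
    while start < total_seconds:
        end = min(start + chunk_seconds, total_seconds)
        yield start, end
        if end == total_seconds:
            break
        start = end - overlap_seconds

# A 2-hour file in 10-minute windows:
print(list(chunk_bounds(7200, 600))[:2])  # [(0, 600), (600, 1200)]
```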
Kanaries/pygwalker
pandas
479
[DEV-759] [BUG] pygwalker visualization size not rendered correctly in streamlit app
Original bug report: https://toriaezuugoku.com/pygwalker-2/ > Windows10 streamlit==1.27.0 pygwalker==0.4.7 It seems pygwalker is not rendered with the correct size, so the charts in the article only show the legend parts. ![image](https://github.com/Kanaries/pygwalker/assets/22167673/c360797b-4b44-4060-9d47-fdd5b2d759e2) ## Related code with bug https://github.com/Kanaries/pygwalker/blob/ac6c12dbadf85ac03e4e36e35c130351be3689fa/pygwalker/api/streamlit.py#L223 <sub>[DEV-759](https://linear.app/kanaries/issue/DEV-759/[bug]-pygwalker-visualization-size-not-rendered-correctly-in-streamlit)</sub>
open
2024-03-13T11:08:18Z
2024-03-15T11:44:07Z
https://github.com/Kanaries/pygwalker/issues/479
[ "bug" ]
ObservedObserver
2
marimo-team/marimo
data-science
4146
[Newbie Q] How to create a new notebook from the UI?
I can create a new notebook using `marimo new`. But I can't find a way to do this in the web UI. Is it possible? If yes, how?
closed
2025-03-18T14:28:54Z
2025-03-18T14:41:56Z
https://github.com/marimo-team/marimo/issues/4146
[]
dentroai
3
PablocFonseca/streamlit-aggrid
streamlit
173
Example Code for MouseOver Tooltip inside of a table
Hey there, I tried to find something in the documentation but can't solve my problem. I want to display an image when you hover over one of the rows, but can't find out how to do it. ![grafik](https://user-images.githubusercontent.com/9539236/208301177-567e8a49-1e38-46fb-a480-93d01d89dd21.png) Could you add an example?
closed
2022-12-18T13:37:40Z
2024-04-04T17:54:36Z
https://github.com/PablocFonseca/streamlit-aggrid/issues/173
[]
Keldorb
0
howie6879/owllook
asyncio
91
Chapter title extraction: chapters with emoji in the title are not extracted correctly
As the title says: for chapters whose title contains an emoji, such as this one: [chapter](https://www.booktxt.net/4_4891/7807304.html), the extracted title comes out as `千三百八十一章 起死回生`
open
2020-07-13T03:17:40Z
2020-07-13T03:17:40Z
https://github.com/howie6879/owllook/issues/91
[]
iathanasy
0
iperov/DeepFaceLab
deep-learning
707
Older version still available
Hi, do you have any DeepFaceLab OpenCL version from before 29th December still available to download? (I need to be able to change the avatar type.) Sorry to ask here, thanks for all the work.
open
2020-04-09T03:27:09Z
2023-06-08T20:25:21Z
https://github.com/iperov/DeepFaceLab/issues/707
[]
pierrelb1
2
autogluon/autogluon
scikit-learn
4348
Add support of PyTorch 2.4
## Description The recent release of [PyTorch 2.4](https://pytorch.org/blog/pytorch2-4/) necessitates an update for AutoGluon compatibility.
closed
2024-07-29T22:07:24Z
2024-11-21T18:21:54Z
https://github.com/autogluon/autogluon/issues/4348
[ "enhancement", "dependency" ]
tonyhoo
2
taverntesting/tavern
pytest
796
Random fails - `ImportError: module '' not in sys.modules`
I ran into randomly failing tests when using the tavern plugin. It looks like there is a problem with its dependencies, probably pykwalify. Unfortunately, I have not found any pattern from which to deduce why this is happening. Re-launching the tests causes them to succeed in most cases, but during CI it is a problematic bug. The exception is raised from importlib/_bootstrap.py:588: ``` with _ModuleLockManager(name): if sys.modules.get(name) is not module: msg = 'module {!r} not in sys.modules'.format(name) raise ImportError(msg, name=name) ``` here is the entire message: ``` cls = <class '_pytest.runner.CallInfo'> func = <function call_runtest_hook.<locals>.<lambda> at 0x7fbab93a98b0> when = 'call' reraise = (<class '_pytest.outcomes.Exit'>, <class 'KeyboardInterrupt'>) @classmethod def from_call( cls, func: "Callable[[], TResult]", when: "Literal['collect', 'setup', 'call', 'teardown']", reraise: Optional[ Union[Type[BaseException], Tuple[Type[BaseException], ...]] ] = None, ) -> "CallInfo[TResult]": """Call func, wrapping the result in a CallInfo. :param func: The function to call. Called without arguments. :param when: The phase in which the function is called. :param reraise: Exception or exceptions that shall propagate if raised by the function, instead of being wrapped in the CallInfo.
""" excinfo = None start = timing.time() precise_start = timing.perf_counter() try: > result: Optional[TResult] = func() ../../../../.tox/tavern/lib/python3.8/site-packages/_pytest/runner.py:338: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ > lambda: ihook(item=item, **kwds), when=when, reraise=reraise ) ../../../../.tox/tavern/lib/python3.8/site-packages/_pytest/runner.py:259: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_HookCaller 'pytest_runtest_call'>, args = () kwargs = {'item': <YamlItem XYZ>}, argname = 'item', firstresult = False def __call__(self, *args, **kwargs): if args: raise TypeError("hook calling supports only keyword arguments") assert not self.is_historic() # This is written to avoid expensive operations when not needed. if self.spec: for argname in self.spec.argnames: if argname not in kwargs: notincall = tuple(set(self.spec.argnames) - kwargs.keys()) warnings.warn( "Argument(s) {} which are declared in the hookspec " "can not be found in this hook call".format(notincall), stacklevel=2, ) break firstresult = self.spec.opts.get("firstresult") else: firstresult = False > return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) ../../../../.tox/tavern/lib/python3.8/site-packages/pluggy/_hooks.py:265: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_pytest.config.PytestPluginManager object at 0x7fbad5789d90> hook_name = 'pytest_runtest_call' methods = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from '/builds/XYZ/.tox/tavern/lib/python3.8...est.threadexception' from '/builds/XYZ/.tox/tavern/lib/python3.8/site-packages/_pytest/threadexception.py'>>] kwargs = {'item': <YamlItem XYZ>}, firstresult = False def _hookexec(self, hook_name, methods, kwargs, firstresult): # called from all hookcaller instances. 
# enable_tracing will set its own wrapping function at self._inner_hookexec > return self._inner_hookexec(hook_name, methods, kwargs, firstresult) ../../../../.tox/tavern/lib/python3.8/site-packages/pluggy/_manager.py:80: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ hook_name = 'pytest_runtest_call' hook_impls = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from '/builds/XYZ/.tox/tavern/lib/python3.8...est.threadexception' from '/builds/XYZ/.tox/tavern/lib/python3.8/site-packages/_pytest/threadexception.py'>>] caller_kwargs = {'item': <YamlItem XYZ>}, firstresult = False def _multicall(hook_name, hook_impls, caller_kwargs, firstresult): """Execute a call into multiple python functions/methods and return the result(s). ``caller_kwargs`` comes from _HookCaller.__call__(). """ __tracebackhide__ = True results = [] excinfo = None try: # run impl and wrapper setup functions in a loop teardowns = [] try: for hook_impl in reversed(hook_impls): try: args = [caller_kwargs[argname] for argname in hook_impl.argnames] except KeyError: for argname in hook_impl.argnames: if argname not in caller_kwargs: raise HookCallError( f"hook call must provide argument {argname!r}" ) if hook_impl.hookwrapper: try: gen = hook_impl.function(*args) next(gen) # first yield teardowns.append(gen) except StopIteration: _raise_wrapfail(gen, "did not yield") else: res = hook_impl.function(*args) if res is not None: results.append(res) if firstresult: # halt further impl calls break except BaseException: excinfo = sys.exc_info() finally: if firstresult: # first result hooks return a single value outcome = _Result(results[0] if results else None, excinfo) else: outcome = _Result(results, excinfo) # run all wrapper post-yield blocks for gen in reversed(teardowns): try: gen.send(outcome) _raise_wrapfail(gen, "has second yield") except StopIteration: pass > return outcome.get_result() 
../../../../.tox/tavern/lib/python3.8/site-packages/pluggy/_callers.py:60: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <pluggy._result._Result object at 0x7fbab9615a90> def get_result(self): """Get the result(s) for this hook call. If the hook was marked as a ``firstresult`` only a single value will be returned otherwise a list of results. """ __tracebackhide__ = True if self._excinfo is None: return self._result else: ex = self._excinfo > raise ex[1].with_traceback(ex[2]) ../../../../.tox/tavern/lib/python3.8/site-packages/pluggy/_result.py:60: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ hook_name = 'pytest_runtest_call' hook_impls = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from '/builds/XYZ/.tox/tavern/lib/python3.8...est.threadexception' from '/builds/XYZ/.tox/tavern/lib/python3.8/site-packages/_pytest/threadexception.py'>>] caller_kwargs = {'item': <YamlItem XYZ>}, firstresult = False def _multicall(hook_name, hook_impls, caller_kwargs, firstresult): """Execute a call into multiple python functions/methods and return the result(s). ``caller_kwargs`` comes from _HookCaller.__call__(). 
""" __tracebackhide__ = True results = [] excinfo = None try: # run impl and wrapper setup functions in a loop teardowns = [] try: for hook_impl in reversed(hook_impls): try: args = [caller_kwargs[argname] for argname in hook_impl.argnames] except KeyError: for argname in hook_impl.argnames: if argname not in caller_kwargs: raise HookCallError( f"hook call must provide argument {argname!r}" ) if hook_impl.hookwrapper: try: gen = hook_impl.function(*args) next(gen) # first yield teardowns.append(gen) except StopIteration: _raise_wrapfail(gen, "did not yield") else: > res = hook_impl.function(*args) ../../../../.tox/tavern/lib/python3.8/site-packages/pluggy/_callers.py:39: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ item = <YamlItem XYZ> def pytest_runtest_call(item: Item) -> None: _update_current_test_var(item, "call") try: del sys.last_type del sys.last_value del sys.last_traceback except AttributeError: pass try: item.runtest() except Exception as e: # Store trace info to allow postmortem debugging sys.last_type = type(e) sys.last_value = e assert e.__traceback__ is not None # Skip *this* frame sys.last_traceback = e.__traceback__.tb_next > raise e ../../../../.tox/tavern/lib/python3.8/site-packages/_pytest/runner.py:174: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ item = <YamlItem XYZ> def pytest_runtest_call(item: Item) -> None: _update_current_test_var(item, "call") try: del sys.last_type del sys.last_value del sys.last_traceback except AttributeError: pass try: > item.runtest() ../../../../.tox/tavern/lib/python3.8/site-packages/_pytest/runner.py:166: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <YamlItem XYZ> def runtest(self): # Do a deep copy because this sometimes still retains things from previous tests(?) 
self.global_cfg = copy.deepcopy(load_global_cfg(self.config)) self.global_cfg.setdefault("variables", {}) load_plugins(self.global_cfg) self.global_cfg["tavern_internal"] = {"pytest_hook_caller": self.config.hook} # INTERNAL # NOTE - now that we can 'mark' tests, we could use pytest.mark.xfail # instead. This doesn't differentiate between an error in verification # and an error when running the test though. xfail = self.spec.get("_xfail", False) try: fixture_values = self._load_fixture_values() self.global_cfg["variables"].update(fixture_values) call_hook( self.global_cfg, "pytest_tavern_beta_before_every_test_run", test_dict=self.spec, variables=self.global_cfg["variables"], ) > verify_tests(self.spec) ../../../../.tox/tavern/lib/python3.8/site-packages/tavern/testutils/pytesthook/item.py:184: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_spec = {'test_name': 'XYZ', 'marks': ['patterntest'], 'includes': [{'name': 'Common test values', 'description': 'Common...unction': 'validate_response:compare_response_with_pattern', 'extra_kwargs': {'ignore_tags': '<bridge community>'}}}}]} with_plugins = True def verify_tests(test_spec, with_plugins=True): """Verify that a specific test block is correct Todo: Load schema file once. 
Requires some caching of the file Args: test_spec (dict): Test in dictionary form Raises: BadSchemaError: Schema did not match """ here = os.path.dirname(os.path.abspath(__file__)) schema_filename = os.path.join(here, "tests.schema.yaml") schema = load_schema_file(schema_filename, with_plugins) > verify_generic(test_spec, schema) ../../../../.tox/tavern/lib/python3.8/site-packages/tavern/schemas/files.py:152: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ to_verify = {'test_name': 'XYZ', 'marks': ['patterntest'], 'includes': [{'name': 'Common test values', 'description': 'Common...unction': 'validate_response:compare_response_with_pattern', 'extra_kwargs': {'ignore_tags': '<bridge community>'}}}}]} schema = {'name': 'Test schema', 'desc': 'Matches test blocks', 'schema;any_request_json': {'func': 'validate_request_json', 't... 'map', 'mapping': {'username': {'type': 'str', 'required': True}, 'password': {'type': 'str', 'required': False}}}}}}} def verify_generic(to_verify, schema): """Verify a generic file against a given schema Args: to_verify (dict): Filename of source tests to check schema (dict): Schema to verify against Raises: BadSchemaError: Schema did not match """ logger.debug("Verifying %s against %s", to_verify, schema) here = os.path.dirname(os.path.abspath(__file__)) extension_module_filename = os.path.join(here, "extensions.py") > verifier = core.Core( source_data=to_verify, schema_data=schema, extensions=[extension_module_filename], ) ../../../../.tox/tavern/lib/python3.8/site-packages/tavern/schemas/files.py:99: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <pykwalify.core.Core object at 0x7fbab94ec820>, source_file = None schema_files = [] source_data = {'test_name': 'XYZ', 'marks': ['patterntest'], 'includes': [{'name': 'Common test values', 'description': 'Common...unction': 'validate_response:compare_response_with_pattern', 'extra_kwargs': {'ignore_tags': '<bridge 
community>'}}}}]} schema_data = {'name': 'Test schema', 'desc': 'Matches test blocks', 'schema;any_request_json': {'func': 'validate_request_json', 't... 'map', 'mapping': {'username': {'type': 'str', 'required': True}, 'password': {'type': 'str', 'required': False}}}}}}} extensions = ['/builds/XYZ/.tox/tavern/lib/python3.8/site-packages/tavern/schemas/extensions.py'] strict_rule_validation = False, fix_ruby_style_regex = False allow_assertions = False, file_encoding = None, schema_file_obj = None data_file_obj = None def __init__(self, source_file=None, schema_files=None, source_data=None, schema_data=None, extensions=None, strict_rule_validation=False, fix_ruby_style_regex=False, allow_assertions=False, file_encoding=None, schema_file_obj=None, data_file_obj=None): """ :param extensions: List of paths to python files that should be imported and available via 'func' keywork. This list of extensions can be set manually or they should be provided by the `--extension` flag from the cli. This list should not contain files specified by the `extensions` list keyword that can be defined at the top level of the schema. """ if schema_files is None: schema_files = [] if extensions is None: extensions = [] log.debug(u"source_file: %s", source_file) log.debug(u"schema_file: %s", schema_files) log.debug(u"source_data: %s", source_data) log.debug(u"schema_data: %s", schema_data) log.debug(u"extension files: %s", extensions) self.source = None self.schema = None self.validation_errors = None self.validation_errors_exceptions = None self.root_rule = None self.extensions = extensions self.errors = [] self.strict_rule_validation = strict_rule_validation self.fix_ruby_style_regex = fix_ruby_style_regex self.allow_assertions = allow_assertions # Patch in all the normal python types into the yaml load instance so we can use all the # internal python types in the yaml loading. 
yml.constructor.add_constructor('tag:yaml.org,2002:python/bool', Constructor.construct_yaml_bool) yml.constructor.add_constructor('tag:yaml.org,2002:python/complex', Constructor.construct_python_complex) yml.constructor.add_constructor('tag:yaml.org,2002:python/dict', Constructor.construct_yaml_map) yml.constructor.add_constructor('tag:yaml.org,2002:python/float', Constructor.construct_yaml_float) yml.constructor.add_constructor('tag:yaml.org,2002:python/int', Constructor.construct_yaml_int) yml.constructor.add_constructor('tag:yaml.org,2002:python/list', Constructor.construct_yaml_seq) yml.constructor.add_constructor('tag:yaml.org,2002:python/long', Constructor.construct_python_long) yml.constructor.add_constructor('tag:yaml.org,2002:python/none', Constructor.construct_yaml_null) yml.constructor.add_constructor('tag:yaml.org,2002:python/str', Constructor.construct_python_str) yml.constructor.add_constructor('tag:yaml.org,2002:python/tuple', Constructor.construct_python_tuple) yml.constructor.add_constructor('tag:yaml.org,2002:python/unicode', Constructor.construct_python_unicode) if data_file_obj: try: self.source = yml.load(data_file_obj.read()) except Exception as e: raise CoreError("Unable to load data_file_obj input") if schema_file_obj: try: self.schema = yml.load(schema_file_obj.read()) except Exception as e: raise CoreError("Unable to load schema_file_obj") if source_file is not None: if not os.path.exists(source_file): raise CoreError(u"Provided source_file do not exists on disk: {0}".format(source_file)) with open(source_file, "r", encoding=file_encoding) as stream: if source_file.endswith(".json"): self.source = json.load(stream) elif source_file.endswith(".yaml") or source_file.endswith('.yml'): self.source = yml.load(stream) else: raise CoreError(u"Unable to load source_file. 
Unknown file format of specified file path: {0}".format(source_file)) if not isinstance(schema_files, list): raise CoreError(u"schema_files must be of list type") # Merge all schema files into one single file for easy parsing if len(schema_files) > 0: schema_data = {} for f in schema_files: if not os.path.exists(f): raise CoreError(u"Provided source_file do not exists on disk : {0}".format(f)) with open(f, "r", encoding=file_encoding) as stream: if f.endswith(".json"): data = json.load(stream) elif f.endswith(".yaml") or f.endswith(".yml"): data = yml.load(stream) if not data: raise CoreError(u"No data loaded from file : {0}".format(f)) else: raise CoreError(u"Unable to load file : {0} : Unknown file format. Supported file endings is [.json, .yaml, .yml]") for key in data.keys(): if key in schema_data.keys(): raise CoreError(u"Parsed key : {0} : two times in schema files...".format(key)) schema_data = dict(schema_data, **data) self.schema = schema_data # Nothing was loaded so try the source_data variable if self.source is None: log.debug(u"No source file loaded, trying source data variable") self.source = source_data if self.schema is None: log.debug(u"No schema file loaded, trying schema data variable") self.schema = schema_data # Test if anything was loaded if self.source is None: raise CoreError(u"No source file/data was loaded") if self.schema is None: raise CoreError(u"No schema file/data was loaded") # Merge any extensions defined in the schema with the provided list of extensions from the cli for f in self.schema.get('extensions', []): self.extensions.append(f) if not isinstance(self.extensions, list) and all(isinstance(e, str) for e in self.extensions): raise CoreError(u"Specified extensions must be a list of file paths") > self._load_extensions() ../../../../.tox/tavern/lib/python3.8/site-packages/pykwalify/core.py:153: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <pykwalify.core.Core object at 0x7fbab94ec820> def 
_load_extensions(self): """ Load all extension files into the namespace pykwalify.ext """ log.debug(u"loading all extensions : %s", self.extensions) self.loaded_extensions = [] for f in self.extensions: if not os.path.isabs(f): f = os.path.abspath(f) if not os.path.exists(f): raise CoreError(u"Extension file: {0} not found on disk".format(f)) > self.loaded_extensions.append(SourceFileLoader("", f).load_module()) ../../../../.tox/tavern/lib/python3.8/site-packages/pykwalify/core.py:173: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_frozen_importlib_external.SourceFileLoader object at 0x7fbab94ecb80> name = '', args = (), kwargs = {} > ??? <frozen importlib._bootstrap_external>:522: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_frozen_importlib_external.SourceFileLoader object at 0x7fbab94ecb80> fullname = '' > ??? <frozen importlib._bootstrap_external>:1022: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_frozen_importlib_external.SourceFileLoader object at 0x7fbab94ecb80> fullname = '' > ??? <frozen importlib._bootstrap_external>:847: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <_frozen_importlib_external.SourceFileLoader object at 0x7fbab94ecb80> fullname = '' > ??? <frozen importlib._bootstrap>:262: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ spec = ModuleSpec(name='', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7fbab94ecb80>, origin='/builds/XYZ/.tox/tavern/lib/python3.8/site-packages/tavern/schemas/extensions.py') module = <module '' from '/builds/XYZ/.tox/tavern/lib/python3.8/site-packages/tavern/schemas/extensions.py'> > ??? 
E ImportError: module '' not in sys.modules <frozen importlib._bootstrap>:589: ImportError ------ generated xml file: /builds/XYZ/api_smoketest_bridge.xml ------ =========================== short test summary info ============================ FAILED patterns/get_community/1.tavern.yaml::XYZ - ... ======================== 1 failed, 455 passed in 30.33s ======================== ```
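The traceback bottoms out in pykwalify's `_load_extensions()`, which calls `SourceFileLoader("", f).load_module()` with an empty module name, so every extension load contends for the same `""` key in `sys.modules`. As a hedged, stdlib-only sketch (the file contents and names below are made up for illustration), the collision-free pattern gives each extension a unique name and uses `importlib.util` instead of the deprecated `load_module()`:

```python
import importlib.util
import os
import sys
import tempfile
import uuid

# Stand-in for pykwalify's extensions.py; the contents are illustrative.
source = "def greet():\n    return 'hi'\n"
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as fh:
    fh.write(source)
    path = fh.name

# A unique module name instead of "" means parallel loads can never
# collide on the same sys.modules key.
mod_name = f"tavern_ext_{uuid.uuid4().hex}"
spec = importlib.util.spec_from_file_location(mod_name, path)
module = importlib.util.module_from_spec(spec)
sys.modules[mod_name] = module  # register before exec, as import itself does
spec.loader.exec_module(module)

print(module.greet())  # hi
os.unlink(path)
```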
open
2022-07-14T07:40:17Z
2022-07-30T15:16:35Z
https://github.com/taverntesting/tavern/issues/796
[]
mzebrak
1
LAION-AI/Open-Assistant
machine-learning
2,653
I can't find the chat dialog box
I can't find the chat dialog box
closed
2023-04-17T09:31:54Z
2023-04-29T21:13:56Z
https://github.com/LAION-AI/Open-Assistant/issues/2653
[]
ws694617206
2
pallets-eco/flask-sqlalchemy
sqlalchemy
649
expected update syntax can be too aggressive
Not a bug report, more of a question. Because the docs don't provide examples of UPDATE queries, I experimented with what seemed to make the most sense from the SQLAlchemy docs. I found that performing an `.update()` on (what I thought was) a single record turned out to update every row in the database. See the code sample. Is there a preferred way of performing an update on a single record where you change multiple attributes at once? Or, better yet, of using a dict to specify all the attributes to change and their new values?

```python
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///e:\\foo.db'
db = SQLAlchemy(app)


class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80))
    email = db.Column(db.String(120))

    def __repr__(self):
        return '<User %r>' % self.username


db.create_all()

john = User(username='jlennon', email='john@example.com')
paul = User(username='pmac', email='paul@example.com')
george = User(username='ghar', email='george@example.com')
ringo = User(username='rstarr', email='ringo@example.com')
db.session.add_all([john, paul, george, ringo])
db.session.commit()

user_four = User.query.get(4)
# UNSAFE!! Will change EVERY row!!
user_four.query.update({'email': 'ringo@beatles.com'})
db.session.commit()

# SAFE. But how to change multiple attributes? or load from a dict?
user_three = User.query.get(3)
user_three.username = 'georgeh'
db.session.commit()
```
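For what it's worth, here is a sketch of the safer pattern, shown with plain SQLAlchemy (which Flask-SQLAlchemy wraps) so it is self-contained: filter down to the matching row before calling `.update()`, and pass a dict so several attributes change in one statement. Note that `user_four.query` in the sample above is just `User.query` again (the instance does not scope the query), which is why that version hit every row.

```python
# A sketch under the assumption that plain SQLAlchemy (>= 1.4) is available;
# Flask-SQLAlchemy's db.session / Model.query are thin wrappers over this.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)
    username = Column(String(80))
    email = Column(String(120))


engine = create_engine("sqlite://")  # in-memory database
Base.metadata.create_all(engine)

session = Session(engine)
session.add_all([
    User(username="jlennon", email="john@example.com"),
    User(username="rstarr", email="ringo@example.com"),
])
session.commit()

# SAFE: the filter restricts the UPDATE to the matching row only, and the
# dict changes several attributes in one statement.
session.query(User).filter_by(username="rstarr").update(
    {"username": "ringo", "email": "ringo@beatles.com"}
)
session.commit()

print(session.query(User).filter_by(email="ringo@beatles.com").count())
print(session.query(User).filter_by(username="jlennon").one().email)
```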
closed
2018-10-25T23:47:40Z
2021-04-06T00:14:01Z
https://github.com/pallets-eco/flask-sqlalchemy/issues/649
[]
2x2xplz
12
QingdaoU/OnlineJudge
django
295
Adding support for Rust
Hi QingdaoU,

Thanks for this wonderful tool. I would like to add support for the Rust language. It works like C (there is a compiler named rustc). So I wrote the following config in OnlineJudge/judge/languages.py:

```python
_rust_lang_config = {
    "template": """//PREPEND BEGIN
//PREPEND END
//TEMPLATE BEGIN
//TEMPLATE END
//APPEND BEGIN
//APPEND END""",
    "compile": {
        "src_name": "main.rs",
        "exe_name": "main",
        "max_cpu_time": 10000,
        "max_real_time": 20000,
        "max_memory": 1024 * 1024 * 1024,
        "compile_command": "/usr/bin/rustc -O {src_path} -o {exe_path}",
    },
    "run": {
        "command": "{exe_path}",
        "seccomp_rule": None,
        "env": default_env
    }
}
```

and I added it to the subsequent `languages` array. I also added this config to JudgeServer/client/{go,PHP,Python}. I also added the rustc compiler to the judge-server Dockerfile and could check that it is properly installed:

```shell
apt-get install curl && curl https://sh.rustup.rs -sSf | sh -s -- -y && . ~/.cargo/env && ln -s ~/.cargo/bin/rustc /usr/bin/rustc && \
```

However, when I test this configuration on the aplusb problem, I have the following compiler error:

> Compiler runtime error, info: {"result": 5, "error": 0, "exit_code": 0, "real_time": 2, "signal": 10, "cpu_time": 0, "memory": 0}

What did I miss? How can I debug it?

Thanks for your help!
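As a quick sanity check before rebuilding the judge image, the `compile_command` template can be exercised on its own. As far as I can tell the judge server expands the placeholders with `str.format`, so this at least catches typos in the template (the paths below are hypothetical):

```python
# Only the template string is taken from the config above; the example
# paths are made up for illustration.
compile_command = "/usr/bin/rustc -O {src_path} -o {exe_path}"

cmd = compile_command.format(
    src_path="/judger/run/main.rs",
    exe_path="/judger/run/main",
)
print(cmd)  # /usr/bin/rustc -O /judger/run/main.rs -o /judger/run/main
```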
closed
2020-04-11T15:11:23Z
2020-04-28T13:58:45Z
https://github.com/QingdaoU/OnlineJudge/issues/295
[]
thomas-huegel
6
cupy/cupy
numpy
8,454
Resolve Ruff `NPY` errors
Related to #8269.
closed
2024-08-06T07:26:30Z
2024-08-10T02:56:16Z
https://github.com/cupy/cupy/issues/8454
[ "cat:enhancement", "pr-ongoing" ]
EarlMilktea
0
strawberry-graphql/strawberry-django
graphql
528
The get_queryset method is called twice when using relay connections
When implementing types that override the `get_queryset()` method while also using relay connections, the `get_queryset()` method is called twice. This has the result that, when applying filters to the queryset, they are applied twice. Since they are idempotent this does not change the query output, but it adds unnecessary overhead.

## Describe the Bug

The core of the bug appears to be `strawberry_django/fields/field.py:283-289` in `StrawberryDjangoField.get_queryset()`. When not using the relay connection the bug does not appear, but with it the if statement resolves to True:

```python
get_queryset = getattr(type_, "get_queryset", None)
if get_queryset:
    queryset = get_queryset(queryset, info, **kwargs)
```

and it is also called on the next line with:

```python
queryset = super().get_queryset(
    filter_with_perms(queryset, info), info, **kwargs
)
```

## System Information

- Operating system: Ubuntu 22.04
- Strawberry version (if applicable): 0.39.2 and 0.227.2

## Additional Context

This behavior is reproducible in the example app. If you already have a relay app, simply add a print in the overriding `get_queryset` method.
```python
# example/django/app/types.py
@strawberry_django.type(
    models.Fruit,
    filters=FruitFilter,
    order=FruitOrder,
    pagination=True,
)
class Fruit(relay.Node):
    name: auto
    # color: Optional["Color"]

    @classmethod
    def get_queryset(
        cls,
        queryset: QuerySet,
        info: Info,
        **kwargs,
    ):
        print("Getting QS")
        return queryset.filter(name__startswith="s")
```

```python
# example/django/app/schema.py
@strawberry.type
class Query:
    # fruit: Fruit = strawberry_django.field()
    # fruits: List[Fruit] = strawberry_django.field()
    all_fruits: ListConnection[Fruit] = strawberry_django.connection()
```

Attached is the diff, applied to commit 21c14e4: [bug.zip](https://github.com/strawberry-graphql/strawberry-django/files/15277443/bug.zip)
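The call pattern described in this report can be reduced to a stdlib-only sketch (the class and method names here are illustrative, not the real strawberry-django internals): the connection field applies the type's hook itself and then delegates to a base implementation that applies the same hook again, so any filter runs twice.

```python
# Illustrative names only; this is not the actual strawberry-django code.
calls = []


class FruitType:
    @staticmethod
    def get_queryset(qs):
        calls.append("hook")
        return [name for name in qs if name.startswith("s")]


class BaseField:
    def get_queryset(self, qs, type_):
        hook = getattr(type_, "get_queryset", None)
        return hook(qs) if hook else qs


class ConnectionField(BaseField):
    def get_queryset(self, qs, type_):
        hook = getattr(type_, "get_queryset", None)
        if hook:
            qs = hook(qs)  # first invocation of the type's hook
        return super().get_queryset(qs, type_)  # second invocation


result = ConnectionField().get_queryset(["strawberry", "apple"], FruitType)
print(result, len(calls))  # ['strawberry'] 2
```

The filter is idempotent, so the result is unchanged, but the hook runs twice, matching the overhead described above.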
closed
2024-05-10T16:21:35Z
2025-03-20T15:57:31Z
https://github.com/strawberry-graphql/strawberry-django/issues/528
[ "bug" ]
moritz89
1
onnx/onnx
deep-learning
6,359
flash-attention onnx export.
When using Flash-Attention version 2.6.3, there is an issue with the ONNX file saved using `torch.onnx.export`.

Code:

```python
import sys
import torch

qkv = torch.load("/home/qkv.pth")

from modeling_intern_vit import FlashAttention

falsh = FlashAttention().eval().cuda()
out = falsh(qkv.cpu().cuda())

with torch.no_grad():
    torch.onnx.export(
        falsh,
        (qkv,),
        "/home/qkv.onnx",
        input_names=["input0"],
        output_names=["qkv_out"],
        opset_version=11
    )
```

![image](https://github.com/user-attachments/assets/9c489934-7165-43d1-88f4-8df8eb757060)
closed
2024-09-10T05:41:26Z
2024-09-13T14:01:15Z
https://github.com/onnx/onnx/issues/6359
[ "question" ]
scuizhibin
3
vllm-project/vllm
pytorch
15,080
[Bug]: 500 error when I pass a 26x28 px image to Qwen 72B
### Your current environment <details> <summary>The output of `python collect_env.py`</summary> ```text Collecting environment information... PyTorch version: 2.5.1+cu124 Is debug build: False CUDA used to build PyTorch: 12.4 ROCM used to build PyTorch: N/A OS: Ubuntu 22.04.5 LTS (x86_64) GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 Clang version: Could not collect CMake version: version 3.22.1 Libc version: glibc-2.35 Python version: 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0] (64-bit runtime) Python platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.35 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA H200 GPU 1: NVIDIA H200 GPU 2: NVIDIA H200 GPU 3: NVIDIA H200 GPU 4: NVIDIA H200 GPU 5: NVIDIA H200 GPU 6: NVIDIA H200 GPU 7: NVIDIA H200 Nvidia driver version: 565.57.01 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Address sizes: 46 bits physical, 57 bits virtual Byte Order: Little Endian CPU(s): 176 On-line CPU(s) list: 0-175 Vendor ID: GenuineIntel Model name: Intel(R) Xeon(R) Platinum 8468V CPU family: 6 Model: 143 Thread(s) per core: 2 Core(s) per socket: 44 Socket(s): 2 Stepping: 8 BogoMIPS: 4800.00 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt 
xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk avx512_fp16 arch_capabilities Virtualization: VT-x Hypervisor vendor: KVM Virtualization type: full L1d cache: 4.1 MiB (88 instances) L1i cache: 2.8 MiB (88 instances) L2 cache: 176 MiB (88 instances) L3 cache: 195 MiB (2 instances) NUMA node(s): 2 NUMA node0 CPU(s): 0-87 NUMA node1 CPU(s): 88-175 Vulnerability Gather data sampling: Not affected Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Reg file data sampling: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec rstack overflow: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Mitigation; TSX disabled Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] nvidia-cublas-cu12==12.4.5.8 [pip3] nvidia-cuda-cupti-cu12==12.4.127 [pip3] nvidia-cuda-nvrtc-cu12==12.4.127 [pip3] nvidia-cuda-runtime-cu12==12.4.127 [pip3] nvidia-cudnn-cu12==9.1.0.70 [pip3] nvidia-cufft-cu12==11.2.1.3 [pip3] nvidia-curand-cu12==10.3.5.147 [pip3] nvidia-cusolver-cu12==11.6.1.9 [pip3] nvidia-cusparse-cu12==12.3.1.170 [pip3] nvidia-nccl-cu12==2.21.5 [pip3] nvidia-nvjitlink-cu12==12.4.127 [pip3] nvidia-nvtx-cu12==12.4.127 [pip3] pyzmq==26.3.0 [pip3] torch==2.5.1 [pip3] torchaudio==2.5.1 [pip3] torchvision==0.20.1 [pip3] transformers==4.49.0 [pip3] 
triton==3.1.0 [conda] Could not collect ROCM Version: Could not collect Neuron SDK Version: N/A vLLM Version: 0.7.3 vLLM Build Flags: CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled GPU Topology: GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 CPU Affinity NUMA Affinity GPU NUMA ID GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 NODE PHB PHB PHB PHB SYS SYS SYS SYS 0-87 0 N/A GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 NODE PHB PHB PHB PHB SYS SYS SYS SYS 0-87 0 N/A GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 NODE PHB PHB PHB PHB SYS SYS SYS SYS 0-87 0 N/A GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 NODE PHB PHB PHB PHB SYS SYS SYS SYS 0-87 0 N/A GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS SYS SYS SYS PHB PHB PHB PHB 88-175 1 N/A GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS SYS SYS SYS PHB PHB PHB PHB 88-175 1 N/A GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS SYS SYS SYS SYS PHB PHB PHB PHB 88-175 1 N/A GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS SYS SYS SYS SYS PHB PHB PHB PHB 88-175 1 N/A NIC0 NODE NODE NODE NODE SYS SYS SYS SYS X NODE NODE NODE NODE SYS SYS SYS SYS NIC1 PHB PHB PHB PHB SYS SYS SYS SYS NODE X PHB PHB PHB SYS SYS SYS SYS NIC2 PHB PHB PHB PHB SYS SYS SYS SYS NODE PHB X PHB PHB SYS SYS SYS SYS NIC3 PHB PHB PHB PHB SYS SYS SYS SYS NODE PHB PHB X PHB SYS SYS SYS SYS NIC4 PHB PHB PHB PHB SYS SYS SYS SYS NODE PHB PHB PHB X SYS SYS SYS SYS NIC5 SYS SYS SYS SYS PHB PHB PHB PHB SYS SYS SYS SYS SYS X PHB PHB PHB NIC6 SYS SYS SYS SYS PHB PHB PHB PHB SYS SYS SYS SYS SYS PHB X PHB PHB NIC7 SYS SYS SYS SYS PHB PHB PHB PHB SYS SYS SYS SYS SYS PHB PHB X PHB NIC8 SYS SYS SYS SYS PHB PHB PHB PHB SYS SYS SYS SYS SYS PHB PHB PHB X Legend: X = Self SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI) NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically 
the CPU) PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge) PIX = Connection traversing at most a single PCIe bridge NV# = Connection traversing a bonded set of # NVLinks NIC Legend: NIC0: mlx5_0 NIC1: mlx5_1 NIC2: mlx5_2 NIC3: mlx5_3 NIC4: mlx5_4 NIC5: mlx5_5 NIC6: mlx5_6 NIC7: mlx5_7 NIC8: mlx5_8 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 LD_LIBRARY_PATH=<snip>:/opt/hpcx/nccl_rdma_sharp_plugin/lib:/opt/hpcx/ucc/lib/ucc:/opt/hpcx/ucc/lib:/opt/hpcx/ucx/lib/ucx:/opt/hpcx/ucx/lib:/opt/hpcx/sharp/lib:/opt/hpcx/hcoll/lib:/opt/hpcx/ompi/lib: NCCL_CUMEM_ENABLE=0 TORCHINDUCTOR_COMPILE_THREADS=1 CUDA_MODULE_LOADING=LAZY ``` </details> ### 🐛 Describe the bug 500 error when passing an image with one side < 28 to Qwen. I don't quite understand why. Qwen does like 28 though, since its window size is 14 and adjacent windows get merged (hence 28). Took me a minute to repro this. ```python import base64 import torch import numpy as np from PIL import Image import io import numpy as np from openai import OpenAI from PIL import Image import numpy as np import torch bbox_schema = { "$schema": "http://json-schema.org/draft-07/schema#", "type": "object", "properties": { "boxes": { "type": "array", "items": { "type": "object", "properties": { "bbox_2d": { "type": "array", "items": {"type": "number"}, "description": "2D bounding box coordinates in the format [x1, y1, x2, y2].", }, "label": { "type": "string", "description": "Label for the annotated object.", }, }, "required": ["bbox_2d", "label"], }, }, }, "required": ["boxes"], } def text(text): return {"type": "text", "text": text} def image(image): if isinstance(image, torch.Tensor): image = image.numpy() if isinstance(image, np.ndarray): if image.shape[0] == 3: image = image.transpose(1, 2, 0) image = Image.fromarray(image) buffer = io.BytesIO() image.save(buffer, format="png") image_base64 = base64.b64encode(buffer.getvalue()).decode("utf-8") return { "type": 
"image_url", "image_url": { "url": f"data:image/png;base64,{image_base64}", }, } def user(*content): return {"role": "user", "content": content} oai_client = None openai_api_base = "http://<snip>:8080/v1" openai_api_key = "EMPTY" def request(turns, guidance=None, temperature=0.0, max_tokens=1024): global oai_client if oai_client is None: oai_client = OpenAI( api_key=openai_api_key, base_url=openai_api_base, ) if not isinstance(turns, list): turns = [turns] result = oai_client.chat.completions.create( model="Qwen/Qwen2.5-VL-72B-Instruct", temperature=temperature, max_tokens=max_tokens, messages=turns, extra_body=guidance, n=1, ) return {"role": "assistant", "content": result.choices[0].message.content} if __name__ == "__main__": futures = [] frame_image = Image.fromarray(255 * np.ones((26, 28, 3), dtype=np.uint8)) box_request = user( image(frame_image), text( "This is a reproduction of a bug for VLLM. Your efforts are wasted. Please do not try. " ), ) request([box_request]) ``` I just get a 500 error in the server logs. Nothing else. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
closed
2025-03-19T03:29:33Z
2025-03-20T01:56:16Z
https://github.com/vllm-project/vllm/issues/15080
[ "bug" ]
c0g
3
piccolo-orm/piccolo
fastapi
103
Choice fields (preferably supporting enums)
It would be nice if piccolo supported choices like Django does, preferably using the Python stdlib `enum` to render a dropdown.
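A rough sketch of the behavior the request describes: a stdlib `Enum` defines the allowed values, and the column rejects anything outside it, the way Django's `choices` does. The `validate_choice` helper is illustrative, not piccolo's actual API:

```python
# Sketch of Django-style enum choices using only the stdlib. A choices-aware
# column would run a check like validate_choice() on assignment, and an admin
# UI could render Title's members as a dropdown.
from enum import Enum

class Title(Enum):
    mr = "Mr"
    mrs = "Mrs"
    dr = "Dr"

def validate_choice(value: str, choices: type[Enum]) -> str:
    """Reject values outside the enum, as a choices-aware column would."""
    allowed = {member.value for member in choices}
    if value not in allowed:
        raise ValueError(f"{value!r} is not one of {sorted(allowed)}")
    return value
```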
closed
2021-05-23T15:48:10Z
2021-06-08T18:03:12Z
https://github.com/piccolo-orm/piccolo/issues/103
[]
cheesycod
1
daleroberts/itermplot
matplotlib
51
Broken in jupyter console
I get the following traceback in a `jupyter console` session with the `itermplot==0.5` version recommended in the readme. Using `pip install --upgrade itermplot` to version 0.331 also does not help. ```python In [1]: fig, ax = plt.subplots() ...: fig.show() --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [1], in <module> 1 fig, ax = pplt.subplot() ----> 2 fig.show() File ~/miniconda3/lib/python3.9/site-packages/matplotlib/figure.py:2364, in Figure.show(self, warn) 2360 raise AttributeError( 2361 "Figure.show works only for figures managed by pyplot, " 2362 "normally created by pyplot.figure()") 2363 try: -> 2364 self.canvas.manager.show() 2365 except NonGuiException as exc: 2366 if warn: File ~/miniconda3/lib/python3.9/site-packages/itermplot/__init__.py:298, in ItermplotFigureManager.show(self) 295 data = self.animate(loops, outfile) 297 if hasattr(data, "getbuffer"): --> 298 imgcat(data.getbuffer(), fn) 299 else: # Python 2 300 imgcat(data.getvalue(), fn) File ~/miniconda3/lib/python3.9/site-packages/itermplot/__init__.py:124, in imgcat(data, fn) 122 sys.stdout.buffer.write(buf) 123 else: --> 124 sys.stdout.write(buf) 125 sys.stdout.flush() 127 print() File ~/miniconda3/lib/python3.9/site-packages/ipykernel/iostream.py:513, in OutStream.write(self, string) 503 """Write to current stream after encoding if necessary 504 505 Returns (...) 509 510 """ 512 if not isinstance(string, str): --> 513 raise TypeError( 514 f"write() argument must be str, not {type(string)}" 515 ) 517 if self.echo is not None: 518 try: TypeError: write() argument must be str, not <class 'bytes'> ```
open
2022-03-03T00:19:04Z
2022-09-20T15:59:34Z
https://github.com/daleroberts/itermplot/issues/51
[]
lukelbd
1
glumpy/glumpy
numpy
214
Triangulate - assertion error
Examples such as collection-triangles.py and tiger.py fail to run due to the error below: line 66, in triangulate tri, _ = triang(tri, opts) File "triangle/core.pyx", line 247, in triangle.core.triang File "triangle/core.pyx", line 219, in triangle.core.fin File "triangle/core.pyx", line 74, in triangle.core.ii._set File "triangle/core.pyx", line 115, in triangle.core._wrap.check AssertionError
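The failing frame (`triangle.core._wrap.check`) is the triangle library's internal input check, which typically fires on malformed input, e.g. segments that reference non-existent vertices. A hedged pre-flight check along those lines can narrow the cause; this mirrors the kind of validation that fails, it is not the library's code:

```python
# Sanity-check triangulation input before handing it to triangle: every
# segment must reference two distinct, valid vertex indices.
def check_triangulation_input(vertices, segments):
    """vertices: list of (x, y); segments: list of (i, j) index pairs."""
    n = len(vertices)
    if n < 3:
        raise ValueError("need at least 3 vertices to triangulate")
    for i, j in segments:
        if not (0 <= i < n and 0 <= j < n):
            raise ValueError(f"segment ({i}, {j}) references a missing vertex")
        if i == j:
            raise ValueError(f"degenerate segment ({i}, {j})")
```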
open
2019-04-04T16:20:30Z
2019-12-01T14:16:50Z
https://github.com/glumpy/glumpy/issues/214
[]
prithivi-iviz
9
hanwenlu2016/web-ui
pytest
19
Requesting an open-source release of the web client~
closed
2022-06-23T11:56:18Z
2022-06-23T11:57:37Z
https://github.com/hanwenlu2016/web-ui/issues/19
[]
tanxiao133
0
streamlit/streamlit
deep-learning
10,811
test
### Checklist - [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests. - [x] I added a descriptive title and summary to this issue. ### Summary test ### Why? _No response_ ### How? _No response_ ### Additional Context _No response_
closed
2025-03-17T19:17:23Z
2025-03-17T19:17:43Z
https://github.com/streamlit/streamlit/issues/10811
[ "type:enhancement" ]
lukasmasuch
1
ultralytics/yolov5
machine-learning
12,885
Adding Background Images and Increasing conf_thres does not lower FP
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question Hi, I experimented adding approx. 20%,10%, and 5% of background images into the training dataset. However, the FP before and after I added them remain unchanged. From the val_batches I could see that all those background images were not detected as objects. However, the confusion matrix still displays that there are no background images detected as background. I tried adding the background images with and without their label files, but I don't think it produces a change either. I also increased the confidence threshold to 80%, but the result is still the same. I am confused. Am I understanding the background class incorrectly? Will background images ever be detected as background in the confusion matrix? <img width="480" alt="Screen Shot 2024-04-04 at 12 45 40 PM" src="https://github.com/ultralytics/yolov5/assets/143357885/47f3f151-dc94-4410-889a-5eaa7bcf8e7a"> Thanks! Really appreciate any insights on this issue! ### Additional _No response_
closed
2024-04-04T16:50:15Z
2024-10-20T19:43:02Z
https://github.com/ultralytics/yolov5/issues/12885
[ "question", "Stale" ]
Jecia888
9
davidsandberg/facenet
computer-vision
265
How to train NN3
Two problems: **1) Loss is nan: what should I set in the learning rate schedule?** I even tried this: # Learning rate schedule # Maps an epoch number to a learning rate 0: 0.0001 100: 0.00001 I also get: Epoch: [0][94/1000] Time 0.875 Loss 11.982 RegLoss 1.697 Epoch: [0][95/1000] Time 0.861 Loss 11.341 RegLoss 1.698 Epoch: [0][96/1000] Time 0.859 Loss 11.224 RegLoss 1.700 Epoch: [0][97/1000] Time 0.859 Loss nan RegLoss nan Epoch: [0][98/1000] Time 0.875 Loss nan RegLoss nan Epoch: [0][99/1000] Time 0.860 Loss nan RegLoss nan Epoch: [0][100/1000] Time 0.859 Loss nan RegLoss nan Epoch: [0][101/1000] Time 0.859 Loss nan RegLoss nan My shell command: python facenet_master/src/facenet_train_classifier.py --logs_base_dir logs/ --models_base_dir models/ --data_dir dataset/CASIA-WebFace/casia_maxpy_mtcnnpy_182 --image_size 160 --model_def models.nn3 --optimizer RMSPROP --learning_rate -1 --max_nrof_epochs 80 --keep_probability 0.8 --random_crop --random_flip --learning_rate_schedule_file facenet_master/data/learning_rate_schedule_nn3.txt --weight_decay 5e-5 --center_loss_factor 1e-4 --center_loss_alfa 0.9 --batch_size 10 **2) Cannot have number of splits n_splits=10 greater than the number of samples: 0** When I run facenet_train_classifier.py with "--lfw_dir", such as: python facenet_master/src/facenet_train_classifier.py --logs_base_dir logs/ --models_base_dir models/ --data_dir dataset/CASIA-WebFace/casia_maxpy_mtcnnpy_182 --image_size 160 --model_def models.nn3 --lfw_dir dataset/lfw/lfw_mtcnnalign_160 --optimizer RMSPROP --learning_rate -1 --max_nrof_epochs 80 --keep_probability 0.8 --random_crop --random_flip --learning_rate_schedule_file facenet_master/data/learning_rate_schedule_nn3.txt --weight_decay 5e-5 --center_loss_factor 1e-4 --center_loss_alfa 0.9 --batch_size 10 I get: Saving variables Variables saved in 3.32 seconds Saving metagraph Metagraph saved in 27.87 seconds Runnning forward pass on LFW images Traceback (most recent call last): File
"facenet_master/src/facenet_train_classifier.py", line 468, in <module> main(parse_arguments(sys.argv[1:])) File "facenet_master/src/facenet_train_classifier.py", line 244, in main embeddings, label_batch, lfw_paths, actual_issame, args.lfw_batch_size, args .lfw_nrof_folds, log_dir, step, summary_writer) File "facenet_master/src/facenet_train_classifier.py", line 344, in evaluate _, _, accuracy, val, val_std, far = lfw.evaluate(emb_array, actual_issame, n rof_folds=nrof_folds) File "FaceNet\facenet_master\src\lfw.py", line 4 0, in evaluate np.asarray(actual_issame), nrof_folds=nrof_folds) File "FaceNet\facenet_master\src\facenet.py", li ne 405, in calculate_roc for fold_idx, (train_set, test_set) in enumerate(k_fold.split(indices)): File "\env-35\lib\site-packages\sklearn\model_selection\_split.py", l ine 320, in split n_samples)) ValueError: Cannot have number of splits n_splits=10 greater than the number of samples: 0. And I think that is interrelated with validate_on_lfw.py But when I try this : python facenet_master/src/validate_on_lfw.py dataset/lfw/lfw_mtcnnpy_160 models/20170507-111919 (the files save by facenet_train_classifier.py). the computer run well.
closed
2017-05-07T03:09:40Z
2018-10-08T13:55:15Z
https://github.com/davidsandberg/facenet/issues/265
[]
dahangli
9
plotly/dash
data-science
2,714
[Help] how to add a contentChange listener on html.Div components
I hope to have an **editable** component that can support **rich text** (such as directly embedding other html tags), but it seems that only the html.Div component can. But I also hope that the **callback** can be triggered **when the content in the component is changed**, but html.Div only has click-related events. Moreover, if the content of the div changes but the callback is not triggered, the content in html.Div does not seem to be updated to the server (or a virtual DOM?). Even if other buttons are used to trigger the callback at this time, and the content of the Div is used as input, the latest content in the div cannot be obtained. Is there any way to achieve this, or are there any existing alternatives?
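Dash has no native "content changed" event for html.Div, so the common workaround is polling, e.g. a dcc.Interval plus a clientside callback that reads the div's innerHTML. The Dash wiring is omitted here; this pure-Python sketch shows only the change-detection step such a callback would perform:

```python
# Compare the current content snapshot against the last one seen and report
# whether it changed; a polling callback would call check() on each tick and
# fire the real handler only when it returns True.
class ContentWatcher:
    def __init__(self):
        self._last = None

    def check(self, current_html: str) -> bool:
        """Return True if the content differs from the last snapshot."""
        changed = current_html != self._last
        self._last = current_html
        return changed
```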
closed
2023-12-20T02:17:01Z
2023-12-21T16:45:58Z
https://github.com/plotly/dash/issues/2714
[]
Snowdar
1
reloadware/reloadium
django
203
Reloadium encountered an error - but doesn't say what
## Describe the bug* It just says it encountered an error and that I should report it ;) It still seems to profile, though, and the profiling info _seems_ reasonable. ## To Reproduce Steps to reproduce the behavior: I have no idea ... it's some async 3.11 code w/ FastAPI. ## Expected behavior No errors :D ## Screenshots ![grafik](https://github.com/user-attachments/assets/00e7676d-ef9c-45a2-bbc9-093a56dbf85f) ## Desktop or remote (please complete the following information):** - OS: macOS - OS version: 14.7 - M3 chip - Reloadium package version: ? - PyCharm plugin version: 1.5.1 - Editor: PyCharm - Python Version: 3.11 - Python Architecture: 64 - Run mode: Debug (Run seems fine) ## Additional context No context but a praise - this is what PyCharm itself *should* ship with. Awesome job!
open
2024-10-07T18:14:37Z
2024-10-07T18:14:37Z
https://github.com/reloadware/reloadium/issues/203
[]
black-snow
0
AutoGPTQ/AutoGPTQ
nlp
736
nsamples zero only once in the middle - division by zero
Lots of issues related to "division by zero", because nsamples=0. However my issue seems to be slightly different, since this happens only once during quantization without obvious reason. I modified the code in gptq.py with the following lines: ``` logger.info(f"duration: {(time.time() - tick)}") print(f"samples = {self.nsamples}") #logger.info(f"avg loss: {torch.sum(Losses).item() / self.nsamples}") ``` I am quantizing on 4*A100 80GB GPUs, due to out of memory issues with less GPUs. Any reason why nsamples=0 in the "middle" only once during this quantization? samples = 1245 Quantizing layers inside the block: 14%|█▍ | 1/7 [00:20<02:01, 20.31s/it] samples = 1245 Quantizing layers inside the block: 29%|██▊ | 2/7 [00:34<01:23, 16.63s/it] samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:48<01:01, 15.45s/it] samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:03<00:45, 15.24s/it] Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:17<00:29, 14.82s/it] samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:31<00:14, 14.61s/it] samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [01:43<00:00, 13.56s/it] samples = 1245 Quantizing model.layers blocks : 1%|▏ | 1/80 [01:52<2:28:37, 112.88s/it] Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] Quantizing layers inside the block: 14%|█▍ | 1/7 [00:16<01:37, 16.27s/it] samples = 1245 Quantizing layers inside the block: 29%|██▊ | 2/7 [00:30<01:15, 15.18s/it] samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:45<00:59, 14.80s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:00<00:45, 15.11s/it] Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:15<00:30, 15.00s/it] samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:29<00:14, 14.84s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [02:02<00:00, 
20.64s/it] Quantizing model.layers blocks : 2%|▎ | 2/80 [04:08<2:43:53, 126.07s/it][A Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] samples = 1245 Quantizing layers inside the block: 14%|█▍ | 1/7 [00:18<01:48, 18.15s/it] Quantizing layers inside the block: 29%|██▊ | 2/7 [00:35<01:27, 17.54s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:54<01:13, 18.49s/it] samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:12<00:55, 18.33s/it] Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:30<00:36, 18.12s/it] samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:49<00:18, 18.43s/it] samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [03:32<00:00, 46.00s/it] samples = 1245 Quantizing model.layers blocks : 4%|▍ | 3/80 [07:54<3:40:39, 171.95s/it] Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] Quantizing layers inside the block: 14%|█▍ | 1/7 [00:18<01:53, 18.94s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 29%|██▊ | 2/7 [00:37<01:32, 18.47s/it] Quantizing layers inside the block: 43%|████▎ | 3/7 [00:54<01:11, 17.83s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:12<00:54, 18.20s/it] Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:30<00:35, 17.97s/it] samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:49<00:18, 18.35s/it] samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [03:32<00:00, 45.89s/it] samples = 1245 Quantizing model.layers blocks : 5%|▌ | 4/80 [11:38<4:03:36, 192.33s/it] Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] samples = 1245 Quantizing layers inside the block: 14%|█▍ | 1/7 [00:18<01:51, 18.55s/it] Quantizing layers inside the block: 29%|██▊ | 2/7 [00:35<01:28, 17.66s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:53<01:10, 17.65s/it] 
samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:10<00:52, 17.51s/it] samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:29<00:35, 17.98s/it] Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:46<00:17, 17.86s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [03:30<00:00, 45.97s/it] Quantizing model.layers blocks : 6%|▋ | 5/80 [15:21<4:14:08, 203.32s/it][A Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] Quantizing layers inside the block: 14%|█▍ | 1/7 [00:14<01:26, 14.38s/it] samples = 1245 Quantizing layers inside the block: 29%|██▊ | 2/7 [00:28<01:11, 14.36s/it] samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:43<00:57, 14.34s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [00:58<00:43, 14.66s/it] Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:12<00:29, 14.66s/it] samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:28<00:14, 14.84s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [02:00<00:00, 20.57s/it] Quantizing model.layers blocks : 8%|▊ | 6/80 [17:27<3:38:26, 177.12s/it][A Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] samples = 1245 Quantizing layers inside the block: 14%|█▍ | 1/7 [00:15<01:32, 15.44s/it] Quantizing layers inside the block: 29%|██▊ | 2/7 [00:30<01:15, 15.14s/it] samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:44<00:59, 14.90s/it] samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [00:59<00:44, 14.72s/it] samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:13<00:29, 14.64s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:29<00:14, 14.95s/it] samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [02:02<00:00, 20.79s/it] Quantizing model.layers blocks : 9%|▉ 
| 7/80 [19:37<3:16:46, 161.73s/it][A Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] Quantizing layers inside the block: 14%|█▍ | 1/7 [00:14<01:26, 14.35s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 29%|██▊ | 2/7 [00:30<01:15, 15.14s/it] Quantizing layers inside the block: 43%|████▎ | 3/7 [00:44<01:00, 15.02s/it] samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [00:59<00:44, 14.86s/it] samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:13<00:29, 14.69s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:29<00:15, 15.14s/it] Quantizing layers inside the block: 100%|██████████| 7/7 [02:00<00:00, 20.27s/it] samples = 1245 Quantizing model.layers blocks : 10%|█ | 8/80 [21:43<3:00:38, 150.53s/it] Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] samples = 1245 Quantizing layers inside the block: 14%|█▍ | 1/7 [00:15<01:32, 15.45s/it] Quantizing layers inside the block: 29%|██▊ | 2/7 [00:30<01:16, 15.36s/it] samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:45<00:59, 14.87s/it] samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [00:59<00:44, 14.68s/it] samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:13<00:29, 14.59s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:29<00:14, 14.87s/it] Quantizing layers inside the block: 100%|██████████| 7/7 [02:00<00:00, 20.08s/it] samples = 1245 Quantizing model.layers blocks : 11%|█▏ | 9/80 [23:49<2:49:03, 142.87s/it] Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] samples = 1245 Quantizing layers inside the block: 14%|█▍ | 1/7 [00:17<01:45, 17.59s/it] Quantizing layers inside the block: 29%|██▊ | 2/7 [00:32<01:20, 16.05s/it] samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:46<01:01, 15.26s/it] samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 
4/7 [01:01<00:44, 14.93s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:17<00:30, 15.28s/it] Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:32<00:15, 15.24s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [02:04<00:00, 20.79s/it] Quantizing model.layers blocks : 12%|█▎ | 10/80 [26:01<2:42:44, 139.49s/it]A Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] samples = 1245 Quantizing layers inside the block: 14%|█▍ | 1/7 [00:16<01:37, 16.24s/it] Quantizing layers inside the block: 29%|██▊ | 2/7 [00:31<01:17, 15.49s/it] samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:45<00:59, 14.96s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:02<00:46, 15.62s/it] Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:17<00:31, 15.63s/it] samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:33<00:15, 15.61s/it] samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [02:04<00:00, 20.60s/it] samples = 1245 Quantizing model.layers blocks : 14%|█▍ | 11/80 [28:12<2:37:10, 136.67s/it] Quantizing layers inside the block: 0%| | 0/4 [00:00<?, ?it/s] Quantizing layers inside the block: 25%|██▌ | 1/4 [00:07<00:21, 7.31s/it] samples = 0 Quantizing layers inside the block: 50%|█████ | 2/4 [00:21<00:22, 11.46s/it] samples = 1245 Quantizing layers inside the block: 75%|███████▌ | 3/4 [00:36<00:12, 12.80s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 100%|██████████| 4/4 [01:06<00:00, 19.79s/it] Quantizing model.layers blocks : 15%|█▌ | 12/80 [29:23<2:12:26, 116.86s/it]A Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] Quantizing layers inside the block: 14%|█▍ | 1/7 [00:14<01:29, 14.87s/it] samples = 1245 Quantizing layers inside the block: 29%|██▊ | 2/7 [00:29<01:12, 14.53s/it] samples = 1245 samples = 1245 Quantizing layers inside the 
block: 43%|████▎ | 3/7 [00:44<00:58, 14.74s/it] Quantizing layers inside the block: 57%|█████▋ | 4/7 [00:59<00:44, 14.88s/it] samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:13<00:29, 14.70s/it] samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:28<00:14, 14.86s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [02:01<00:00, 20.70s/it] Quantizing model.layers blocks : 16%|█▋ | 13/80 [31:30<2:13:53, 119.90s/it]A Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] Quantizing layers inside the block: 14%|█▍ | 1/7 [00:14<01:26, 14.45s/it] samples = 1245 Quantizing layers inside the block: 29%|██▊ | 2/7 [00:28<01:11, 14.40s/it] samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:43<00:57, 14.39s/it] samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [00:59<00:45, 15.15s/it] samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:14<00:29, 14.94s/it] samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:28<00:14, 14.80s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [02:00<00:00, 20.43s/it] Quantizing model.layers blocks : 18%|█▊ | 14/80 [33:36<2:13:54, 121.73s/it]A Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] Quantizing layers inside the block: 14%|█▍ | 1/7 [00:14<01:28, 14.68s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 29%|██▊ | 2/7 [00:30<01:16, 15.37s/it] Quantizing layers inside the block: 43%|████▎ | 3/7 [00:45<01:00, 15.21s/it] samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:00<00:44, 14.91s/it] samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:14<00:29, 14.76s/it] samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:30<00:15, 15.32s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 
[02:03<00:00, 21.04s/it] Quantizing model.layers blocks : 19%|█▉ | 15/80 [35:45<2:14:15, 123.93s/it]A Quantizing layers inside the block: 0%| | 0/4 [00:00<?, ?it/s] samples = 0 Quantizing layers inside the block: 25%|██▌ | 1/4 [00:06<00:19, 6.60s/it] samples = 1245 Quantizing layers inside the block: 50%|█████ | 2/4 [00:21<00:22, 11.19s/it] Quantizing layers inside the block: 75%|███████▌ | 3/4 [00:33<00:11, 11.85s/it] samples = 1245 Quantizing layers inside the block: 100%|██████████| 4/4 [00:45<00:00, 11.70s/it] samples = 1245 Quantizing model.layers blocks : 20%|██ | 16/80 [36:35<1:48:18, 101.53s/it] Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] Quantizing layers inside the block: 14%|█▍ | 1/7 [00:17<01:44, 17.38s/it] samples = 1245 Quantizing layers inside the block: 29%|██▊ | 2/7 [00:34<01:26, 17.25s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:53<01:11, 17.92s/it] Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:10<00:53, 17.80s/it] samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:29<00:35, 17.98s/it] samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:47<00:17, 17.96s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [03:30<00:00, 45.74s/it] Quantizing model.layers blocks : 21%|██▏ | 17/80 [40:13<2:23:32, 136.71s/it]A Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] samples = 1245 Quantizing layers inside the block: 14%|█▍ | 1/7 [00:17<01:45, 17.51s/it] samples = 1245 Quantizing layers inside the block: 29%|██▊ | 2/7 [00:35<01:28, 17.61s/it] samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:53<01:11, 17.86s/it] samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:11<00:53, 17.87s/it] Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:29<00:35, 17.94s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 
[01:47<00:18, 18.11s/it] Quantizing layers inside the block: 100%|██████████| 7/7 [03:31<00:00, 46.01s/it] samples = 1245 Quantizing model.layers blocks : 22%|██▎ | 18/80 [43:52<2:46:43, 161.35s/it] Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] samples = 1245 Quantizing layers inside the block: 14%|█▍ | 1/7 [00:19<01:54, 19.00s/it] Quantizing layers inside the block: 29%|██▊ | 2/7 [00:36<01:29, 17.85s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:54<01:12, 18.22s/it] samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:12<00:53, 17.91s/it] samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:30<00:36, 18.09s/it] Quantizing layers inside the block: 100%|██████████| 7/7 [03:31<00:00, 45.78s/it] samples = 1245 Quantizing model.layers blocks : 24%|██▍ | 19/80 [47:31<3:01:34, 178.60s/it] Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] samples = 1245 Quantizing layers inside the block: 14%|█▍ | 1/7 [00:18<01:52, 18.69s/it] Quantizing layers inside the block: 29%|██▊ | 2/7 [00:36<01:29, 17.88s/it] samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:53<01:10, 17.60s/it] samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:12<00:54, 18.05s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:31<00:37, 18.59s/it] Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:52<00:19, 19.29s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [03:35<00:00, 46.82s/it] Quantizing model.layers blocks : 25%|██▌ | 20/80 [51:19<3:13:30, 193.50s/it]A Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] samples = 1245 Quantizing layers inside the block: 14%|█▍ | 1/7 [00:18<01:49, 18.22s/it] Quantizing layers inside the block: 29%|██▊ | 2/7 [00:35<01:27, 17.55s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 
[00:53<01:10, 17.71s/it] Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:10<00:52, 17.67s/it] samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:29<00:36, 18.14s/it] samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:47<00:17, 17.87s/it] samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [03:30<00:00, 45.98s/it] samples = 1245 Quantizing model.layers blocks : 26%|██▋ | 21/80 [55:04<3:19:40, 203.05s/it] Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] Quantizing layers inside the block: 14%|█▍ | 1/7 [00:17<01:42, 17.12s/it] samples = 1245 Quantizing layers inside the block: 29%|██▊ | 2/7 [00:34<01:25, 17.07s/it] samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:51<01:08, 17.07s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:09<00:52, 17.43s/it] Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:27<00:35, 17.60s/it] samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:45<00:17, 17.84s/it] samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [03:29<00:00, 45.95s/it] samples = 1245 Quantizing model.layers blocks : 28%|██▊ | 22/80 [58:43<3:20:50, 207.77s/it] Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] samples = 1245 Quantizing layers inside the block: 14%|█▍ | 1/7 [00:18<01:48, 18.17s/it] Quantizing layers inside the block: 29%|██▊ | 2/7 [00:35<01:27, 17.52s/it] samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:52<01:09, 17.31s/it] samples = 1245 Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:09<00:51, 17.23s/it] samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:27<00:35, 17.60s/it] samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:45<00:17, 17.83s/it] samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [03:28<00:00, 45.64s/it] samples = 1245 
Quantizing model.layers blocks : 29%|██▉ | 23/80 [1:02:21<3:20:20, 210.89s/it] Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] samples = 1245 Quantizing layers inside the block: 14%|█▍ | 1/7 [00:18<01:48, 18.05s/it] Quantizing layers inside the block: 29%|██▊ | 2/7 [00:35<01:27, 17.48s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:53<01:10, 17.70s/it] Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:10<00:52, 17.47s/it] samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:28<00:35, 17.89s/it] samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:46<00:17, 17.88s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 100%|██████████| 7/7 [03:30<00:00, 45.91s/it] Quantizing model.layers blocks : 30%|███ | 24/80 [1:05:59<3:18:53, 213.10s/it] Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] samples = 1245 Quantizing layers inside the block: 14%|█▍ | 1/7 [00:18<01:48, 18.08s/it] Quantizing layers inside the block: 29%|██▊ | 2/7 [00:35<01:27, 17.48s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:53<01:11, 17.96s/it] Quantizing layers inside the block: 57%|█████▋ | 4/7 [01:10<00:52, 17.62s/it] samples = 1245 Quantizing layers inside the block: 71%|███████▏ | 5/7 [01:29<00:35, 17.89s/it] samples = 1245 samples = 1245 Quantizing layers inside the block: 86%|████████▌ | 6/7 [01:47<00:18, 18.18s/it] Quantizing layers inside the block: 100%|██████████| 7/7 [03:31<00:00, 46.05s/it] samples = 1245 Quantizing model.layers blocks : 31%|███▏ | 25/80 [1:09:40<3:17:17, 215.22s/it] Quantizing layers inside the block: 0%| | 0/7 [00:00<?, ?it/s] Quantizing layers inside the block: 14%|█▍ | 1/7 [00:17<01:42, 17.15s/it] samples = 1245 Quantizing layers inside the block: 29%|██▊ | 2/7 [00:34<01:25, 17.09s/it] samples = 1245 Quantizing layers inside the block: 43%|████▎ | 3/7 [00:51<01:08, 17.25s/it] samples = 
[quantization progress log, condensed for readability: "Quantizing model.layers blocks" advances from 26/80 to 35/80 at roughly 216-221 s per block; within each block, "Quantizing layers inside the block" runs 7/7 layers at ~17-18 s per layer, with the final layer step taking ~1:45 for a total of ~3:30 per block, and `samples = 1245` reported throughout]
open
2024-10-02T20:08:19Z
2024-10-02T20:18:15Z
https://github.com/AutoGPTQ/AutoGPTQ/issues/736
[ "bug" ]
den-run-ai
1
mwaskom/seaborn
data-visualization
2,785
histplot stat=count does not count all data points
```python import matplotlib.pyplot as plt import seaborn as sns import numpy as np sns.set(style="whitegrid") data_a = [1, 2, 3] data_b = [2.4, 2.5, 2.6] sns.histplot(np.array(data_a), color="red", binwidth=0.01, stat="count") sns.histplot(np.array(data_b), color="blue", binwidth=0.01, stat="count") plt.savefig("output.png") ``` This produces https://i.stack.imgur.com/TM6al.png. The data point 2.6 is omitted from the output produced by histplot. The problem also exists if the first sns.histplot command is removed. Interestingly, it has been pointed out to me that the following command works: `sns.histplot([data_a, data_b], palette=['red', 'blue'], binwidth=0.01, stat="count")`, but as I said, the single command `sns.histplot(np.array(data_b), color="blue", binwidth=0.01, stat="count")` also does not work.
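One plausible mechanism, stated here as an assumption rather than a reading of seaborn's actual source: when bin edges are generated from `binwidth` in floating-point arithmetic, the last edge can land just below the data maximum, and `np.histogram` silently drops values above the last edge.

```python
import numpy as np

# Hypothetical edge computation: 20 edges from 2.4 in steps of 0.01,
# i.e. 2.40, 2.41, ..., 2.59. Seaborn's real code path may differ.
edges = 2.4 + 0.01 * np.arange(20)

data = [2.4, 2.5, 2.6]
counts, _ = np.histogram(data, bins=edges)

# 2.6 lies above the last edge (~2.59), so it is silently excluded.
print(counts.sum())  # 2 of the 3 points are counted
```

If this is indeed the cause, the workaround of passing both series in one call works because seaborn then derives the edges from the combined data range.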
closed
2022-04-28T08:18:26Z
2022-05-18T10:47:26Z
https://github.com/mwaskom/seaborn/issues/2785
[ "bug", "mod:distributions" ]
cmayer
1
Avaiga/taipy
data-visualization
1,957
Allow Polars DataFrame as argument for tgb.table()
### Description The `tgb.table()` function does not yet accept Polars DataFrames. This can be bypassed by using `df.to_pandas()`, but since Polars is growing and slowly becoming a replacement for pandas in many areas, it would make sense to have native integration for it in Taipy. Here is the code that should work with the new feature: ```python from taipy.gui import builder as tgb from taipy.gui import Gui import polars as pl df = pl.DataFrame({"name": ["Héricendre", "Kaiminus", "Germignon"]}) with tgb.Page() as page: tgb.table(data="{df}") Gui( page, ).run(debug=True, use_reloader=True, port=5111) ``` For now it does not show a table and raises the following warning: ``` WARNING:root: --- 1 warning(s) were found for page '/' --- - Warning 1: Can't find Data Accessor for type <class 'polars.dataframe.frame.DataFrame'>. ---------------------------------------------- ``` ### Acceptance Criteria - [ ] Ensure new code is unit tested, and check code coverage is at least 90%. - [ ] Create related issue in taipy-doc for documentation and Release Notes. - [ ] Check if a new demo could be provided based on this, or if legacy demos could benefit from it. - [ ] Ensure any change is well documented. ### Code of Conduct - [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+). - [ ] I am willing to work on this issue (optional)
closed
2024-10-08T05:58:53Z
2024-10-08T07:41:01Z
https://github.com/Avaiga/taipy/issues/1957
[ "✨New feature" ]
ShootingStarD
1
streamlit/streamlit
data-science
10,169
Pending changes to a widget are not submitted if interrupted by `st.rerun()` or `st.switch_page()`.
### Checklist - [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues. - [X] I added a very descriptive title to this issue. - [X] I have provided sufficient information below to help reproduce this issue. ### Summary This might not be a bug exactly, but the consequence is that it can result in a seemingly invalid state. Hence, I'll mark this as a bug. Background: If there is a pending edit in some widget like `st.text_area` and a user clicks a button, Streamlit will process the widget's new value and any associated callback immediately before the button's and then proceed with the rerun. In contrast: If a pending edit exists for a widget and the app is programmatically rerun (`st.rerun()`, `st.switch_page`), then the edit is preserved on the frontend, but not submitted to the backend. This can result in the front end appearing to have a different value than the back end. ### Reproducible Code Example ```Python import streamlit as st import time if "disabled" not in st.session_state: st.session_state.disabled = False x = st.text_area("Test", key="test", disabled=st.session_state.disabled) st.session_state if st.session_state.disabled == False: st.write("Running") time.sleep(3) st.session_state.disabled = True st.rerun() if st.button("Reset"): st.session_state.disabled = False st.rerun() st.session_state ``` ### Steps To Reproduce 1. Run the app 2. Type something into the text area within three seconds, but don't click out. 3. The app will rerun and make the widget disabled. The front end will have the pending value but the output will remain as the default value. 4. Click "Reset" The widget will be reenabled and focus will return to it, but if you do nothing the app will rerun again and disable the widget, still with the mismatch between front end and back end. ### Expected Behavior The front end and back end should not disagree on a widget's value, especially through multiple reruns. 
### Current Behavior A disabled widget appears to have one value on the front end, but has a different value on the back end. ### Is this a regression? - [ ] Yes, this used to work in a previous version. ### Debug info - Streamlit version: 1.41.1 - Python version: 3.11.8 - Operating System: macOS - Browser: Chrome ### Additional Information If I create two widgets that time out in succession, then upon re-enabling them, they will both be in focus simultaneously with their pending edits. In general, a user can proceed to interact with the rest of the app and these odd-state elements will remain focused until the user clicks into and out of them to submit the pending value. Hence, maybe not a bug exactly, but it looks convincingly "wrong" in the disabled state and it's certainly strange UI for multiple widgets to be focused at the same time.
open
2025-01-13T00:24:32Z
2025-01-28T17:22:17Z
https://github.com/streamlit/streamlit/issues/10169
[ "type:bug", "status:confirmed", "priority:P3", "feature:st.rerun" ]
sfc-gh-dmatthews
2
iterative/dvc
data-science
10,648
Setting core.checksum_jobs does not accelerate DVC operations
# Bug Report ## Description I use git and dvc to manage my training datasets, which consist of thousands of `jsonl` files. I use `dvc add file1.jsonl file2.jsonl ... file9999.jsonl` (just to illustrate; a shell script is actually used) to add a .dvc file for every jsonl file. After I change `core.site_cache_dir` and execute `dvc status`, I can only see two dvc processes in `htop`, although I have already set core.checksum_jobs to 8. And it takes too long to finish `dvc status`. ### Reproduce ```sh cd /path/to/repo/ dvc config core.checksum_jobs 8 # First set core.site_cache_dir, so last site_cache_dir is at /var/tmp/dvc/repo/xxx dvc config core.site_cache_dir $(pwd)/.dvc/site_cache_dir dvc status ``` Execute `htop` in terminal: ![Image](https://github.com/user-attachments/assets/f09b5846-5224-47ac-9abf-6b4198b7c62e) ### Expected `core.checksum_jobs=8` means 8 processes hash different jsonl files concurrently, am I correct? Just like this: ```python with multiprocessing.Pool(8) as pool: hashes = list(pool.map(hash_function, all_files_to_hash)) ``` I expect to see at least 8 processes in `htop`, so the `hashes` and `links` can be quickly built if I change core.site_cache_dir. Now it takes too long to execute `dvc status`. I think `core.checksum_jobs` also influences `dvc add` and `dvc checkout`, so if `core.checksum_jobs` works correctly, the process of `dvc add`-ing many jsonl files can also be accelerated. ### Environment information <!-- This is required to ensure that we can reproduce the bug. 
--> **Output of `dvc doctor`:** ```console $ dvc doctor DVC version: 3.55.2 (pip) ------------------------- Platform: Python 3.9.19 on Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.31 Subprojects: dvc_data = 3.16.5 dvc_objects = 5.1.0 dvc_render = 1.0.2 dvc_task = 0.4.0 scmrepo = 3.3.7 Supports: http (aiohttp = 3.10.5, aiohttp-retry = 2.8.3), https (aiohttp = 3.10.5, aiohttp-retry = 2.8.3), s3 (s3fs = 2024.6.1, boto3 = 1.35.7) Config: Global: /mnt/afs/jiangtan/.config/dvc System: /etc/xdg/dvc Cache types: hardlink, symlink Cache directory: fuse.quarkfs_client on quarkfs_client Caches: local Remotes: s3 Workspace directory: fuse.quarkfs_client on quarkfs_client Repo: dvc, git Repo.site_cache_dir: /path/to/repo/.dvc/site_cache_dir/repo/eddf3641719990517f0cfc808ea33376 ```
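The mental model in the report, a pool of workers hashing files concurrently, can be sketched like this. This is illustrative only: DVC's real implementation lives in dvc-data and does not use this exact code.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def md5_hex(payload: bytes) -> str:
    """Hash one file's contents (in-memory bytes stand in for files here)."""
    return hashlib.md5(payload).hexdigest()

# Stand-ins for the jsonl files' contents.
files = [f'{{"line": {i}}}\n'.encode() for i in range(100)]

# checksum_jobs-style parallelism: 8 workers hash files concurrently.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(md5_hex, files))

sequential = [md5_hex(f) for f in files]
assert parallel == sequential  # same digests, computed concurrently
```

Note that whether extra workers show up as separate processes in `htop` depends on whether threads or processes are used; thread-based workers appear under a single process unless `htop` is configured to show threads.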
closed
2024-12-10T19:43:10Z
2024-12-12T11:28:32Z
https://github.com/iterative/dvc/issues/10648
[ "awaiting response" ]
jiangtann
6
adbar/trafilatura
web-scraping
49
OverflowError: signed integer is greater than maximum
Traceback (most recent call last): File "indexer.py", line 53, in <module> content_trafilatura = trafilatura.extract(document, json_output=True, with_metadata=False, include_tables=False, deduplicate=True, include_comments=False) File "/Users/luca/enviroments/3.7/lib/python3.7/site-packages/trafilatura/core.py", line 684, in extract max_tree_size=max_tree_size, url_blacklist=url_blacklist File "/Users/luca/enviroments/3.7/lib/python3.7/site-packages/trafilatura/core.py", line 586, in bare_extraction docmeta = extract_metadata(tree, url, date_extraction_params) File "/Users/luca/enviroments/3.7/lib/python3.7/site-packages/trafilatura/metadata.py", line 367, in extract_metadata metadata['date'] = find_date(tree, **date_config) File "/Users/luca/enviroments/3.7/lib/python3.7/site-packages/htmldate/core.py", line 605, in find_date original_date, min_date, max_date) File "/Users/luca/enviroments/3.7/lib/python3.7/site-packages/htmldate/core.py", line 124, in examine_header headerdate = tryfunc(elem.get('content')) File "/Users/luca/enviroments/3.7/lib/python3.7/site-packages/htmldate/extractors.py", line 385, in try_ymd_date customresult = custom_parse(string, outputformat, extensive_search, min_date, max_date) File "/Users/luca/enviroments/3.7/lib/python3.7/site-packages/htmldate/extractors.py", line 302, in custom_parse result = parse_datetime_as_naive(string) File "/Users/luca/enviroments/3.7/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1374, in parse return DEFAULTPARSER.parse(timestr, **kwargs) File "/Users/luca/enviroments/3.7/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 655, in parse ret = self._build_naive(res, default) File "/Users/luca/enviroments/3.7/lib/python3.7/site-packages/dateutil/parser/_parser.py", line 1241, in _build_naive naive = default.replace(**repl) OverflowError: signed integer is greater than maximum
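The crash bubbles up from `dateutil` when a parsed year overflows a C integer inside `datetime.replace`. Until it is fixed upstream, a defensive wrapper around the failing call is one way to keep an extraction pipeline alive. This is a sketch; the wrapper name is my own, not trafilatura's API.

```python
from datetime import datetime

def safe_call(fn, *args):
    """Run a parser, returning None instead of crashing on absurd inputs."""
    try:
        return fn(*args)
    except (ValueError, OverflowError):
        # OverflowError: e.g. a year too large for a C signed integer,
        # as in the dateutil traceback above.
        return None

# Simulate the failing step: replacing the year with a huge parsed value.
bad_year = 2 ** 66
result = safe_call(lambda y: datetime(2020, 1, 1).replace(year=y), bad_year)
print(result)  # None: the overflow is swallowed instead of crashing
```

The same try/except can wrap the whole `trafilatura.extract(...)` call so a single malformed document does not abort the indexing run.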
closed
2020-12-28T09:53:02Z
2021-01-04T14:25:40Z
https://github.com/adbar/trafilatura/issues/49
[ "bug" ]
Lucabenj
2
proplot-dev/proplot
matplotlib
120
Overlapping gridlines in cartopy plots
The below is copied from the discussion in #78. When the grid is not centered on 0, ProPlot used to draw overlapping grids to the right of the dateline: ```python import proplot as plot season = "SON" f, axs = plot.subplots(proj='cyl', proj_kw={'lon_0':180}, width=6) axs.format( geogridlinewidth=0.5, geogridcolor='gray8', geogridalpha=0.5, labels=True, coast=True, suptitle=season+" snow cover bias", ocean=True, oceancolor='gray4' ) m = axs[0].contourf( season_clim_diff[0], cmap='ColdHot', levels=np.arange(-45,50,5), extend='max', norm='midpoint' ) f.colorbar(m, label="Snow Area Fraction [%]") ``` ![proplot_levels_extend_max](https://user-images.githubusercontent.com/20254164/70129015-4bb40200-167e-11ea-886c-361b5052c369.jpg) This is related to some fancy gridliner repairs proplot tries to do. Without going into too much detail, my choices were (a) have overlapping gridlines or (b) have non-overlapping gridlines, but the labels above +180 degrees east disappear. I picked the former. However, I've now added a "monkey patch" in [version 0.2.3](https://proplot.readthedocs.io/en/latest/changelog.html#proplot-v0-2-3-2019-12-05) that fixes both issues; no overlapping gridlines, and labels above +180 degrees are permitted. Might submit a PR to cartopy but will wait until SciTools/cartopy#1117 is released in version 0.18 -- the new gridliner API may solve this issue. https://github.com/lukelbd/proplot/blob/15ed28fbaa31e6483c4c2eb4615bf5328ae340b1/proplot/axes.py#L3164-L3188 https://github.com/lukelbd/proplot/blob/15ed28fbaa31e6483c4c2eb4615bf5328ae340b1/proplot/axes.py#L3253-L3255
closed
2020-02-10T21:55:12Z
2022-01-22T12:18:14Z
https://github.com/proplot-dev/proplot/issues/120
[ "bug" ]
lukelbd
1
voxel51/fiftyone
computer-vision
5,107
How to annotate in CVAT both Classifications and Detections simultaneously?
Hello there, I'm working on a dataset that consists of both bounding boxes (as fo.Detections) and labels (as fo.Classifications). I can use fiftyone to upload those to CVAT, which will automatically create a CVAT project, task and job with the desired images, but I can only upload one field at a time: ``` curr_view.annotate( 'uploading_bb', label_field='ground_truth', project_name=project_name, organization=organization, launch_editor=False, url=url) curr_view.annotate( 'upload_tags', label_field='TAGs', label_type='classifications', classes=[x for x in classes_datalake if 'TAG' in x], project_name=project_name, organization=organization, launch_editor=False, url=url) ``` This results in two different tasks being created, one with just bounding boxes and another with just CVAT TAGs. How can I create a job that simultaneously has both bounding boxes and TAGs?
closed
2024-11-14T00:58:17Z
2024-11-14T14:44:25Z
https://github.com/voxel51/fiftyone/issues/5107
[ "question" ]
thiagoribeirodamotta
1
dpgaspar/Flask-AppBuilder
flask
2,267
in general: is it possible to remove firstname, lastname, username from the User entity ?
I want to make the registration process as easy as possible by removing unnecessary information. I can add my own attributes like phone_number by creating a new class and inheriting from User, but the problem is: **>>> I don't need firstname, lastname and username... How can I change the User entity?** I saw that these variables are used all over the codebase (for example in the login route, registration route, registrationUserModel, ...) I thought of using a custom form and add_post(self, item) to overwrite the firstname, lastname, username with the provided user email before persisting to the database... but is there a better (cleaner) way?
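The idea sketched at the end, backfilling the mandatory columns from the email before persisting, can be expressed framework-free like this. All names here are hypothetical; in Flask-AppBuilder you would do the equivalent inside an overridden `add_post` or form handler.

```python
def backfill_user_fields(item: dict) -> dict:
    """Fill first_name/last_name/username from the email so the
    registration form only has to ask for email (+ phone_number)."""
    email = item["email"]
    local_part = email.split("@", 1)[0]
    item.setdefault("username", email)        # emails are unique anyway
    item.setdefault("first_name", local_part)
    item.setdefault("last_name", "-")         # placeholder for a NOT NULL column
    return item

user = backfill_user_fields({"email": "jane@example.com", "phone_number": "555"})
print(user["username"])    # jane@example.com
print(user["first_name"])  # jane
```

This avoids touching the many places the framework reads those columns, at the cost of storing placeholder values rather than removing the columns outright.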
open
2024-08-23T07:54:48Z
2024-09-07T12:38:38Z
https://github.com/dpgaspar/Flask-AppBuilder/issues/2267
[]
AttaM1rza
1
tatsu-lab/stanford_alpaca
deep-learning
49
does support multi-turn training data?
Thanks for the great work making it easy to finetune LLaMA. I noticed that the training data is single-turn only. Is there support for multi-turn data like [OIG](https://huggingface.co/datasets/laion/OIG)?
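Alpaca's training format is indeed single-turn (instruction/input/output). One common workaround, an assumption on my part rather than an official feature, is to flatten a multi-turn dialog into several single-turn examples, packing the earlier turns into the `input` field:

```python
def flatten_dialog(turns):
    """turns: list of (user, assistant) pairs.
    Returns one Alpaca-style record per assistant reply, with the
    preceding exchange serialized into the `input` field."""
    records, history = [], []
    for user_msg, assistant_msg in turns:
        records.append({
            "instruction": user_msg,
            "input": "\n".join(history),  # earlier turns as context
            "output": assistant_msg,
        })
        history.append(f"User: {user_msg}")
        history.append(f"Assistant: {assistant_msg}")
    return records

recs = flatten_dialog([("Hi", "Hello!"), ("How are you?", "Fine, thanks.")])
print(len(recs))         # 2
print(recs[1]["input"])  # User: Hi\nAssistant: Hello!
```

The turn-serialization format ("User:"/"Assistant:" prefixes) is arbitrary here; any consistent scheme works as long as training and inference agree on it.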
closed
2023-03-16T07:44:19Z
2023-04-10T08:28:14Z
https://github.com/tatsu-lab/stanford_alpaca/issues/49
[]
trouble-maker007
3
OFA-Sys/Chinese-CLIP
nlp
86
Few-shot finetuning
Hello, I want to fine-tune on some defect categories, but some of the categories have very little data. I wonder how well Chinese-CLIP performs with few-shot fine-tuning.
closed
2023-04-14T09:06:43Z
2023-04-26T03:44:10Z
https://github.com/OFA-Sys/Chinese-CLIP/issues/86
[]
raymond1123
2
STVIR/pysot
computer-vision
257
AttributeError: 'NoneType' object has no attribute 'shape'
AttributeError: 'NoneType' object has no attribute 'shape' what is the problem? thanks
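In OpenCV-based trackers this error almost always means `cv2.imread` (or a video capture read) returned `None` for a bad path or unreadable file, and a later `.shape` access then failed. A small guard makes the real problem visible; this is a generic sketch to adapt wherever pysot loads your frames.

```python
def require_frame(frame, source: str):
    """Fail loudly when an image failed to load, instead of letting a
    later `.shape` access die with "'NoneType' object has no attribute 'shape'"."""
    if frame is None:
        raise FileNotFoundError(
            f"Could not read image/frame from {source!r}: "
            "check the path, file permissions, and codec support."
        )
    return frame

# cv2.imread returns None (no exception) on failure, so the pattern would be:
#   frame = require_frame(cv2.imread(path), path)
try:
    require_frame(None, "missing.jpg")
except FileNotFoundError as e:
    print("caught:", e)
```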
closed
2019-12-10T09:12:01Z
2020-01-07T09:54:33Z
https://github.com/STVIR/pysot/issues/257
[]
ghost
4
graphql-python/graphene-django
django
1,144
Recent Change To relay requesting UUID in django-filters arguments
I was just going through my API and found out that the argument ID is of type UUID when using django-filters. Earlier it was of type ID and we could just pass the relay global ID. ![image](https://user-images.githubusercontent.com/55009050/111074678-23b28000-850a-11eb-81cc-e1a86cc9bbf4.png)
closed
2021-03-14T15:43:47Z
2021-03-18T09:35:04Z
https://github.com/graphql-python/graphene-django/issues/1144
[ "🐛bug" ]
ramyak-mehra
3
tflearn/tflearn
tensorflow
818
Bidirectional LSTM example throws shape error
Running the following file throws an error: https://github.com/tflearn/tflearn/blob/master/examples/nlp/bidirectional_lstm.py `ValueError: Shape (128, ?) must have rank at least 3` Setup: - MacOS Sierra (10.12) - Python 2.7 - Tensorflow v1.2.0 - TFLearn v0.3.2 ``` Traceback (most recent call last): File "bidirectional_lstm.py", line 47, in <module> net = bidirectional_rnn(net, BasicLSTMCell(128), BasicLSTMCell(128)) File "/Users/colin/tensorflow/lib/python2.7/site-packages/tflearn/layers/recurrent.py", line 374, in bidirectional_rnn dtype=tf.float32) File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 375, in bidirectional_dynamic_rnn time_major=time_major, scope=fw_scope) File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 574, in dynamic_rnn dtype=dtype) File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 637, in _dynamic_rnn_loop for input_ in flat_input) File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/ops/rnn.py", line 637, in <genexpr> for input_ in flat_input) File "/Users/colin/tensorflow/lib/python2.7/site-packages/tensorflow/python/framework/tensor_shape.py", line 649, in with_rank_at_least raise ValueError("Shape %s must have rank at least %d" % (self, rank)) ValueError: Shape (128, ?) must have rank at least 3 ```
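The error says the RNN received a rank-2 tensor `(batch, seq_len)` where `dynamic_rnn` needs rank 3 `(batch, seq_len, features)`, which typically happens when the `embedding` layer is missing before `bidirectional_rnn`. The shape contract can be checked in plain NumPy; this is a sketch of the shapes involved, not tflearn code.

```python
import numpy as np

batch, seq_len, embed_dim = 32, 200, 128

# What the RNN receives if the embedding layer is missing: rank 2.
token_ids = np.zeros((batch, seq_len), dtype=np.int64)
print(token_ids.ndim)  # 2 -> triggers "must have rank at least 3"

# After an embedding lookup, each token id becomes a 128-d vector: rank 3.
embedding_table = np.random.rand(10000, embed_dim)
embedded = embedding_table[token_ids]
print(embedded.shape)  # (32, 200, 128): valid input for a bidirectional RNN
```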
closed
2017-06-29T02:55:39Z
2022-06-03T02:23:39Z
https://github.com/tflearn/tflearn/issues/818
[]
colinskow
17
vitalik/django-ninja
django
1,217
Throttling does not support async-based views
Why is throttling not working in async? It works in a simple (sync) function-based view. How can I troubleshoot it?
closed
2024-07-03T06:49:13Z
2024-12-30T07:06:44Z
https://github.com/vitalik/django-ninja/issues/1217
[]
itm10
4
AirtestProject/Airtest
automation
976
On a Mac with a configured environment and a real device, the JSON returned by local/status shows sessionId as null
:bulb:**Related project:** Airtest **Title:** [Question] On a Mac with a configured environment and a real device, the JSON returned by local/status shows sessionId as null **AirtestIDE version:** none **Scripts are not run with a local Python environment** **Error description:** &nbsp;&nbsp;&nbsp;&nbsp;Configured Xcode and the WDA environment on a Mac, running on a real device. Xcode: 12.0 WDA version address: https://github.com/AirtestProject/IOS-Tagent Device: iPhone 6 OS version: 12.5.4 Problem: after completing the configuration with the versions above, the Xcode build is OK and the test is OK, but when checking localhost:8100/status in a browser, the returned JSON has an empty session **Screenshots:** ![](https://hfc20-issue-helper.s3.nie.netease.com/media/2021/10/12/90/6b98563b52f13efc17a126eaa11aa5a.jpg) ![](https://hfc20-issue-helper.s3.nie.netease.com/media/2021/10/12/90/8bdb8fdf40b760bc7043048e9b54653.jpg) **Error log:** ``` none ``` **Connected device information:** Device type | Device model | OS version | apk name/download link | WDA version used | WDA updated to latest ------------ | ------------- | ------------- | ------------- | ------------- | ------------- iOS | iPhone 6 | iOS 12.5.4 | This problem had not occurred before on this Mac with this iPhone 6; recently, because the app under test was upgraded, the iPhone 6's OS was upgraded and Xcode was upgraded as well, and the problem appeared after redeploying WDA | iOS-Tagent | yes **Provide minimal code to reproduce this bug:** ``` none ```
open
2021-10-12T06:32:23Z
2021-10-12T06:32:23Z
https://github.com/AirtestProject/Airtest/issues/976
[]
HazelRunner
0
pydantic/logfire
fastapi
397
Live View - Save position of info pane split
### Description Currently every time you click a span to open the info pane it splits the screen directly in the middle. It would be great if it remembered the last position/size. I find myself dragging it to about 1/3rd of the screen every time.
closed
2024-08-22T17:43:18Z
2024-11-20T19:03:50Z
https://github.com/pydantic/logfire/issues/397
[ "Feature Request", "frontend" ]
slingshotvfx
2
OpenInterpreter/open-interpreter
python
1,334
Support for Nodejs installation via npm
### Is your feature request related to a problem? Please describe. Hi, I noticed that Open-interpreter lacks npm bindings for installation in Node.js. Is such a feature in the works, and would it make sense? It would be super helpful if I could install it as a dependency and link it to my Koa / NestJS project without adding another service definition to my Dockerfile. WDYT? ### Describe the solution you'd like I would like to install open-interpreter as a binding in Node.js, so I could run: ``` npm install open-interpreter ``` and use the bindings, instead of spinning up a separate Python server. ### Describe alternatives you've considered - ### Additional context -
open
2024-07-07T15:09:58Z
2024-07-07T15:10:11Z
https://github.com/OpenInterpreter/open-interpreter/issues/1334
[]
myrtleTree33
0
apify/crawlee-python
automation
951
Template issues failing docker image build and error in camoufox template
1) poetry-plugin-export and poetry mismatch A few weeks back poetry v2 was released. Currently, the Docker image build from a generated actor template will fail on poetry-plugin-export dependency resolution, as it wants version two but our poetry is locked to <2 ``` Because no versions of poetry-plugin-export match >1.9.0,<2.0.0 2025-02-03T06:59:32.156Z and poetry-plugin-export (1.9.0) depends on poetry (>=2.0.0,<3.0.0), poetry-plugin-export (>=1.9.0,<2.0.0) requires poetry (>=2.0.0,<3.0.0). 2025-02-03T06:59:32.158Z So, because poetry-instance depends on both poetry (1.8.5) and poetry-plugin-export (^1.9.0), version solving failed. ``` Either migrate to poetry 2.x.x (might be too big a change) or lock poetry-plugin-export <1.9.0 2) Cookiecutter camoufox name In several places an incorrect project alias is used and thus the wrong files are generated. This `cookiecutter.crawler_type == 'camoufox'` should be this `cookiecutter.crawler_type == 'playwright-camoufox'`
closed
2025-02-03T07:21:47Z
2025-02-03T10:44:12Z
https://github.com/apify/crawlee-python/issues/951
[ "bug", "t-tooling" ]
Pijukatel
0
Kludex/mangum
fastapi
18
Generic ServerlessMiddleware
Currently there is only the `AWSLambdaMiddleware` used with the `run_asgi` method, but we'll want this for other platforms. Also considering dropping the `run_asgi` as a standalone method and instead implementing the run functionality in the middlewares.
closed
2019-01-28T00:58:41Z
2019-01-28T05:26:07Z
https://github.com/Kludex/mangum/issues/18
[ "improvement" ]
jordaneremieff
1
kymatio/kymatio
numpy
1,020
from kymatio.numpy import Scattering2D: FAILS
Hi, despite the lib being amazing, here I report a problem with installation: ```python ! pip install kymatio from kymatio.numpy import Scattering2D ``` Then: Successfully installed kymatio-0.3.0. But: ModuleNotFoundError: No module named 'kymatio.numpy'
closed
2023-07-26T12:26:01Z
2023-07-26T17:10:26Z
https://github.com/kymatio/kymatio/issues/1020
[]
jecampagne
1
roboflow/supervision
computer-vision
686
Class none person, how to remove it ?
### Search before asking

- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.

### Question

I'm running this code on my Raspberry Pi 4 with a Pi camera to detect and count people using yolov8n, but sometimes it detects a class called "None person" and then shows several boxes; when those boxes cross the line they get counted as "None person" too, which throws the count off... I didn't find anything in the documentation about this "None person" class — how do I disable it?

```python
import cv2
import json
import numpy as np
from picamera2 import Picamera2
from ultralytics import YOLO
import supervision as sv
import os


class PiLineCounter:
    def __init__(self, lines_json_path, model_path):
        with open(lines_json_path, 'r') as f:
            self.lines_data = json.load(f)
        self.model = YOLO(model_path)

        # Initialize the annotators
        self.line_annotator = sv.LineZoneAnnotator(
            thickness=1,
            text_thickness=1,
            text_scale=1,
            custom_in_text="entrando",
            custom_out_text="saindo"
        )
        self.box_annotator = sv.BoxAnnotator(
            thickness=2,
            text_thickness=1,
            text_scale=0.5
        )

        # Initialize PiCamera2
        self.picam2 = Picamera2()
        preview_config = self.picam2.create_preview_configuration()
        self.picam2.configure(preview_config)
        self.picam2.start()

        # Initialize the line counters
        self.line_counters = {}
        self.initialize_counters()

    def initialize_counters(self):
        for line_key, line_value in self.lines_data.items():
            # Use the line coordinates straight from the JSON
            start_point_x, start_point_y = line_value['points'][0]
            end_point_x, end_point_y = line_value['points'][1]
            start_point = sv.Point(start_point_x, start_point_y)
            end_point = sv.Point(end_point_x, end_point_y)
            self.line_counters[line_key] = sv.LineZone(start=start_point, end=end_point)

    def run(self):
        while True:
            frame = self.picam2.capture_array()
            frame = cv2.cvtColor(np.array(frame), cv2.COLOR_RGB2BGR)
            results = self.model.track(frame, show=False, stream=False, agnostic_nms=True, imgsz=320)
            print(f"Número de resultados de detecção: {len(results)}")
            for result in results:
                detections = sv.Detections.from_ultralytics(result)
                if detections is None or len(detections.xyxy) == 0:
                    print("Nenhuma detecção neste frame. Pulando...")
                    continue

                # Print every detection with its class_id, label and confidence
                for d in detections:
                    class_id = d[3]
                    label = self.model.model.names[class_id]
                    confidence = d[2]
                    print(f"Detecção: class_id={class_id}, label={label}, confiança={confidence:.2f}")

                detections_filtered = [d for d in detections if d[3] == 0]
                print(f"Número de detecções de pessoas: {len(detections_filtered)}")
                labels = [f"{d[4]} {self.model.model.names[d[3]]} {d[2]:0.2f}" for d in detections_filtered]

                for line_key in self.line_counters.keys():
                    in_count, out_count = self.line_counters[line_key].trigger(detections=detections_filtered)
                    print(f"Linha {line_key}: Entrando - {in_count}, Saindo - {out_count}")

                # Build a Detections object that the BoxAnnotator can use
                if len(detections_filtered) > 0:
                    xyxy = np.array([d[0] for d in detections_filtered])
                    confidences = np.array([d[2] for d in detections_filtered])
                    class_ids = np.array([d[3] for d in detections_filtered])
                    tracker_ids = np.array([d[4] for d in detections_filtered])
                    detections_for_annotation = sv.Detections(xyxy=xyxy, confidence=confidences, class_id=class_ids, tracker_id=tracker_ids)
                    frame = self.box_annotator.annotate(
                        scene=frame,
                        detections=detections_for_annotation,
                        labels=labels
                    )
                else:
                    print("Nenhuma detecção de pessoas neste frame.")

            for line_key in self.line_counters.keys():
                self.line_annotator.annotate(frame=frame, line_counter=self.line_counters[line_key])

            # Show the original frame without resizing
            cv2.imshow("PiCamera Line Counter", frame)
            # cv2.imwrite('/dev/shm/frame.jpg', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break

        cv2.destroyAllWindows()


if __name__ == '__main__':
    lines_json_path = "lines_with_doubled_data.json"
    model_path = "yolov8n.pt"
    pi_line_counter = PiLineCounter(
        lines_json_path=lines_json_path,
        model_path=model_path
    )
    pi_line_counter.run()
```

### Additional

[lines_with_doubled_data.json](https://github.com/roboflow/supervision/files/13736268/lines_with_doubled_data.json)
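For what it's worth, a likely fix (my reading, not confirmed by the maintainers — the "None" in the label comes from a missing tracker id, and the extra boxes look like low-confidence detections): filter by class id and confidence *before* triggering the line counter. Recent supervision versions let you index `Detections` with a boolean mask, e.g. `detections = detections[(detections.class_id == 0) & (detections.confidence > 0.5)]`. The same mask logic in plain NumPy:

```python
import numpy as np

def keep_person_only(xyxy, confidence, class_id, person_id=0, min_conf=0.5):
    """Keep only boxes whose class is `person_id` and whose confidence
    clears `min_conf`; this also drops low-confidence 'ghost' boxes."""
    mask = (class_id == person_id) & (confidence >= min_conf)
    return xyxy[mask], confidence[mask], class_id[mask]

# Toy detections: two persons plus one spurious low-confidence box of another class.
xyxy = np.array([[0, 0, 10, 10], [5, 5, 20, 20], [1, 1, 2, 2]])
conf = np.array([0.9, 0.8, 0.4])
cls = np.array([0, 0, 7])

boxes, confs, classes = keep_person_only(xyxy, conf, cls)
print(len(boxes))  # 2
```

Applied to the script above, this would replace the `detections_filtered = [d for d in detections if d[3] == 0]` list comprehension, so the filtered result stays a proper `Detections` object and no manual re-assembly for the annotator is needed.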
closed
2023-12-21T05:05:40Z
2023-12-28T12:25:28Z
https://github.com/roboflow/supervision/issues/686
[ "question" ]
Rasantis
1
flasgger/flasgger
rest-api
267
OAS 3+ has no property 'definitions'
OAS 3.x output is not valid because flasgger automatically adds a `definitions` property (even as an empty list), which OAS 3.x no longer allows — it has been replaced by `components` (see [here](https://swagger.io/blog/news/whats-new-in-openapi-3-0/)). I am not sure how to properly fix this, as `definitions` is used throughout the package. A dirty hack that works for me is simply removing `definitions` from the dict. (https://github.com/IKNL/flasgger/commit/b4451780d801688f49e2a1405e00883502365412) I assumed (maybe wrongly) that OAS 3+ was supported, since it is used in one of the [examples](https://github.com/rochacbruno/flasgger/blob/master/examples//use_openapi.py), although nowhere else in the documentation does it state that OAS 3+ is supported.
open
2018-11-21T16:19:27Z
2020-11-10T07:57:09Z
https://github.com/flasgger/flasgger/issues/267
[]
frankcorneliusmartin
13
pytest-dev/pytest-xdist
pytest
513
Enable LGTM for code analysis (Semmle)
Will help to eliminate obvious errors. Python is supported. See: - https://lgtm.com/ - https://github.com/marketplace/lgtm ![image](https://user-images.githubusercontent.com/203261/69871982-2f673c80-12ef-11ea-85bd-ff273782cf90.png)
open
2020-03-11T10:11:33Z
2020-03-11T10:11:33Z
https://github.com/pytest-dev/pytest-xdist/issues/513
[]
XVilka
0
replicate/cog
tensorflow
1,326
Defining an input with human-friendly labels that differ from the values
My model has an input called `language` and the allowable values are strings like `en`, `zh-cn`, etc. I want to display labels like "English" to users of the model (instead of or in addition to "en"), whether they are using the model programmatically via API or via a GUI form on a website like Replicate. Is there a way to do this? Looking at [Cog's API docs for Prediction inputs](https://github.com/replicate/cog/blob/main/docs/python.md#inputkwargs), it doesn't appear so. Could we add support for this without breaking existing behavior? Maybe a list of tuples?

```py
languages = [
  ("en", "English"),
  ("pt", "Portuguese"),
  ("zh-cn", "Mandarin Chinese")
]

class Predictor(BasePredictor):
    def predict(
        self,
        text: str = Input(description="Text to synthesize"),
        language: str = Input(description="Language", choices=languages),
    ) -> Path:
        ...
```
open
2023-10-02T20:37:59Z
2023-10-05T04:56:19Z
https://github.com/replicate/cog/issues/1326
[]
zeke
1
hankcs/HanLP
nlp
1,305
java.lang.OutOfMemoryError: Java heap space
## Notes

Please confirm the following:
* I have carefully read the documents below and did not find an answer:
  - [Home page](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and did not find an answer.
* I understand that the open-source community is a voluntary community of enthusiasts and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I put an x in the brackets to confirm the items above.

## Version

<!-- For release builds, give the jar file name without the extension; for the GitHub repo, state whether it is the master or portable branch -->
The current latest version is: 0.1.50
The version I use is: 0.1.50 *[installed via pip]* (hanlp-1.7.5.jar, data-for-1.7.5)

## My question

When using the Python version of HanLP to analyze the text of Dream of the Red Chamber (summary extraction), I get the error:
`Traceback (most recent call last): File "D:\Files\python\MachineLearning\自然语言处理\hanlp分析红楼梦.py", line 22, in <module> print(HanLP.extractSummary(text, 5)) jpype._jclass.java.lang.OutOfMemoryError: java.lang.OutOfMemoryError: Java heap space`
The text file used is at https://github.com/ouening/MLPractice/blob/master/Red_Mansions_Anasoft.txt

## Other information

OS: win10 64bit
python: 3.7.2
closed
2019-10-20T07:39:44Z
2020-01-01T10:48:25Z
https://github.com/hankcs/HanLP/issues/1305
[ "ignored" ]
ouening
2
Johnserf-Seed/TikTokDownload
api
24
Batch download no longer works
I have already set the link in the config file, but now it won't batch download — every download is just a single video. Batch download no longer works. ![image](https://user-images.githubusercontent.com/63137907/126020142-7fc7f134-4a41-4a3b-8ef6-9d0445d3a1ac.png) ![image](https://user-images.githubusercontent.com/63137907/126020171-306e0aad-5181-4fd3-9ec9-cad19838c4b1.png)
closed
2021-07-17T00:39:58Z
2021-07-18T04:45:28Z
https://github.com/Johnserf-Seed/TikTokDownload/issues/24
[]
tjj2577097637
3
ploomber/ploomber
jupyter
187
Describe when to use each setting in pipeline.yaml
Describe the motivation for each parameter and which are good use cases for them.
closed
2020-07-13T00:52:21Z
2021-03-09T04:53:59Z
https://github.com/ploomber/ploomber/issues/187
[]
edublancas
1
Gozargah/Marzban
api
889
Connecting to the database
Hello. Can I connect remotely from another server to the MySQL database on the Marzban panel, and if so, how? There is also a problem in the Telegram bot: admins have access to all users, even admins who are not sudo. Thanks.
closed
2024-03-24T18:55:44Z
2024-03-25T08:33:56Z
https://github.com/Gozargah/Marzban/issues/889
[ "Feature" ]
ehssanehs
3
geex-arts/django-jet
django
394
Coloring row
How can I color a row? When I use this in admin.py:

```python
def suit_row_attributes(self, obj, request):
    if obj.start < TZ.now() and obj.end > TZ.now():
        return {'class': 'success'}
```

I can't find the class `.success`.
open
2019-05-01T10:09:22Z
2024-02-12T01:12:17Z
https://github.com/geex-arts/django-jet/issues/394
[]
maxclax
1
pydata/xarray
pandas
9,757
merging and saving loaded datasets can lead to string truncation
### What happened? If one: 1. loads a dataset saved using `engine="h5ncetdf"` with a string coordinate say `<U2` 2. merges it with another dataset which matches but has longer strings in the same coordinate, say `<U4` 3. then saves *that* merged dataset using `engine="h5ncetdf"` 4. then the encoding from loading the initial dataset, which survives the merge, causes the dataset variable to be silently truncated back to "<U2", such that when it is loaded again the data is incorrect. ~~This is specific to the "h5netcdf" engine.~~ This doesn't happen however with the "scipy" engine. ### What did you expect to happen? I guess the encoding should be dropped or updated during the `merge` call. ### Minimal Complete Verifiable Example ```Python import xarray as xr engine = "h5netcdf" ds1 = xr.Dataset(coords={'x': ['ab', 'bc', 'c']}) ds1.to_netcdf('ds1.h5', engine=engine) ds1 = xr.open_dataset('ds1.h5', engine=engine) ds1.close() ds2 = xr.Dataset(coords={'x': ['abc', 'bcd', 'cd']}) ds2 = ds1.merge(ds2) print(ds2.x.encoding) print("expected", ds2.x.values) ds2.to_netcdf('ds1.h5', engine=engine) ds2 = xr.open_dataset('ds1.h5', engine=engine) ds2.close() print("loaded ", ds2.x.values) # {'dtype': dtype('<U2')} # expected ['ab' 'abc' 'bc' 'bcd' 'c' 'cd'] # loaded ['ab' 'ab' 'bc' 'bc' 'c' 'cd'] ``` ### MVCE confirmation - [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray. - [X] Complete example — the example is self-contained, including all data and the text of any traceback. - [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result. - [X] New issue — a search of GitHub Issues suggests this is not a duplicate. - [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies. 
### Relevant log output _No response_ ### Anything else we need to know? _No response_ ### Environment <details> INSTALLED VERSIONS ------------------ commit: None python: 3.12.6 | packaged by conda-forge | (main, Sep 30 2024, 18:08:52) [GCC 13.3.0] python-bits: 64 OS: Linux OS-release: 5.15.0-124-generic machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.14.3 libnetcdf: None xarray: 2024.10.0 pandas: 2.2.3 numpy: 2.0.2 scipy: 1.14.1 netCDF4: None pydap: None h5netcdf: 1.4.0 h5py: 3.11.0 zarr: None cftime: None nc_time_axis: None iris: None bottleneck: None dask: 2024.10.0 distributed: 2024.10.0 matplotlib: 3.9.2 cartopy: None seaborn: 0.13.2 numbagg: None fsspec: 2024.9.0 cupy: None pint: None sparse: 0.15.4 flox: None numpy_groupies: None setuptools: 75.1.0 pip: 24.2 conda: None pytest: 8.3.3 mypy: None IPython: 8.28.0 sphinx: 8.1.3 </details>
closed
2024-11-08T21:28:37Z
2025-03-20T07:23:44Z
https://github.com/pydata/xarray/issues/9757
[ "bug", "topic-backends", "topic-combine" ]
jcmgray
3
davidsandberg/facenet
computer-vision
547
triplet loss
Why does the triplet loss only use `tf.square` and not `tf.sqrt`?

```python
pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), 1)
neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), 1)
```
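One likely answer (my reasoning, not an official statement from this repo): the squared L2 distance is cheaper to compute, avoids the non-differentiable gradient of `sqrt` at zero, and — because `sqrt` is monotonic — preserves the ordering of distances, so the hinge `max(pos_dist - neg_dist + alpha, 0)` still pushes positives closer than negatives. A quick NumPy check that the orderings agree:

```python
import numpy as np

rng = np.random.default_rng(0)
anchor = rng.normal(size=(5, 128))   # 5 anchor embeddings
others = rng.normal(size=(7, 128))   # 7 candidate embeddings

# Pairwise distances between every anchor and every candidate.
sq = ((anchor[:, None, :] - others[None, :, :]) ** 2).sum(-1)  # squared L2
eu = np.sqrt(sq)                                               # true L2

# sqrt is strictly increasing, so nearest-neighbour orderings are identical.
assert (np.argsort(sq, axis=1) == np.argsort(eu, axis=1)).all()
print("orderings match")
```

Note that the *margin* alpha is interpreted in squared-distance units here, so its numeric value is not directly comparable to a margin on true Euclidean distances.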
closed
2017-11-23T14:02:44Z
2018-04-04T20:17:22Z
https://github.com/davidsandberg/facenet/issues/547
[]
Phoebe-star
1
littlecodersh/ItChat
api
963
Errors occur when using Python 3.9
The error is:
~~~
Traceback (most recent call last):
  File "C:\Python\Python310\lib\site-packages\itchat\components\login.py", line 255, in maintain_loop
    msgList = produce_msg(self, msgList)
  File "C:\Python\Python310\lib\site-packages\itchat\components\messages.py", line 64, in produce_msg
    utils.msg_formatter(m, 'Content')
  File "C:\Python\Python310\lib\site-packages\itchat\utils.py", line 69, in msg_formatter
    d[k] = htmlParser.unescape(d[k])
AttributeError: 'HTMLParser' object has no attribute 'unescape'
~~~
For now I can work around it like this: edit the file utils.py,
add the import `from html import unescape`,
and change `d[k] = htmlParser.unescape(d[k])`
to `d[k] = unescape(d[k])`.
closed
2022-05-30T13:15:28Z
2022-05-30T13:21:30Z
https://github.com/littlecodersh/ItChat/issues/963
[]
xx2211
0
AUTOMATIC1111/stable-diffusion-webui
pytorch
16,890
[Bug]: Could not find a version that satisfies the requirement torch==2.0.0a0 on --use-ipex
### Checklist - [x] The issue exists after disabling all extensions - [x] The issue exists on a clean installation of webui - [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui - [ ] The issue exists in the current version of the webui - [x] The issue has not been reported before recently - [ ] The issue has been reported before but has not been fixed yet ### What happened? when launching webui with python3.10 i get torch cannot be install when usng --use-ipex option, why is that? i dont see that option even in documentation, i only found out about ipex is here https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/13853 ### Steps to reproduce the problem 1. install 2. try with --use-ipex ### What should have happened? WORK ### What browsers do you use to access the UI ? Other ### Sysinfo cant ### Console logs ```Shell ./webui.sh --use-ipex ################################################################ Install script for stable-diffusion + Web UI Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer. ################################################################ ################################################################ Running on picarica user ################################################################ ################################################################ Clone stable-diffusion-webui ################################################################ Cloning into 'stable-diffusion-webui'... remote: Enumerating objects: 34945, done. remote: Counting objects: 100% (26/26), done. remote: Compressing objects: 100% (16/16), done. remote: Total 34945 (delta 18), reused 10 (delta 10), pack-reused 34919 (from 3) Receiving objects: 100% (34945/34945), 35.48 MiB | 40.60 MiB/s, done. Resolving deltas: 100% (24389/24389), done. 
################################################################ python venv already activate or run without venv: /home/picarica/git/stable-diffusion-webui-1.10.1/venv ################################################################ ################################################################ Launching launch.py... ################################################################ glibc version is 2.40 Check TCMalloc: libtcmalloc_minimal.so.4 libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/usr/lib64/libtcmalloc_minimal.so.4 Python 3.10.16 (main, Dec 18 2024, 15:03:22) [GCC 14.2.1 20241116] Version: v1.10.1 Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2 Installing torch and torchvision Looking in indexes: https://pypi.org/simple, https://pytorch-extension.intel.com/release-whl/stable/xpu/us/ ERROR: Could not find a version that satisfies the requirement torch==2.0.0a0 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0a0+git6c9b55e, 1.13.0a0+gitb1dde16, 1.13.0, 1.13.1, 2.0.0, 2.0.1a0+cxx11.abi, 2.0.1, 2.1.0a0+cxx11.abi, 2.1.0, 2.1.0.post0+cxx11.abi, 2.1.0.post2+cxx11.abi, 2.1.0.post3+cxx11.abi, 2.1.1, 2.1.2, 2.2.0, 2.2.1, 2.2.2, 2.3.0, 2.3.1, 2.3.1+cxx11.abi, 2.4.0, 2.4.1, 2.5.0, 2.5.1, 2.5.1+cxx11.abi, 2.6.0) ERROR: No matching distribution found for torch==2.0.0a0 Traceback (most recent call last): File "/home/picarica/git/stable-diffusion-webui-1.10.1/stable-diffusion-webui/launch.py", line 48, in <module> main() File "/home/picarica/git/stable-diffusion-webui-1.10.1/stable-diffusion-webui/launch.py", line 39, in main prepare_environment() File "/home/picarica/git/stable-diffusion-webui-1.10.1/stable-diffusion-webui/modules/launch_utils.py", line 381, in prepare_environment run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True) File "/home/picarica/git/stable-diffusion-webui-1.10.1/stable-diffusion-webui/modules/launch_utils.py", line 116, in run raise 
RuntimeError("\n".join(error_bits)) RuntimeError: Couldn't install torch. Command: "/home/picarica/git/stable-diffusion-webui-1.10.1/venv/bin/python3.10" -m pip install torch==2.0.0a0 intel-extension-for-pytorch==2.0.110+gitba7f6c1 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/ Error code: 1 ``` ### Additional information _No response_
open
2025-03-12T19:11:46Z
2025-03-13T00:55:38Z
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16890
[ "bug-report" ]
picarica
1
AirtestProject/Airtest
automation
1,183
iOS: after running for a long time over a remote address, many taps stop taking effect and the phone becomes laggy; it works again after unplugging and replugging the phone
iOS: after running for a long time over a remote address, many taps stop taking effect and the phone becomes laggy; it works again after unplugging and replugging the phone.
open
2023-12-22T06:15:10Z
2023-12-22T06:15:10Z
https://github.com/AirtestProject/Airtest/issues/1183
[]
Ymars1990
0
CorentinJ/Real-Time-Voice-Cloning
tensorflow
1,159
help me Error(s) in loading state_dict for Tacotron:
Loaded encoder "encoder.pt" trained to step 1564501 Synthesizer using device: cuda Trainable Parameters: 30.892M Traceback (most recent call last): File "H:\tts4\Real-Time-Voice\toolbox\__init__.py", line 114, in <lambda> func = lambda: self.synthesize() or self.vocode() File "H:\tts4\Real-Time-Voice\toolbox\__init__.py", line 217, in synthesize specs = self.synthesizer.synthesize_spectrograms(texts, embeds) File "H:\tts4\Real-Time-Voice\synthesizer\inference.py", line 86, in synthesize_spectrograms self.load() File "H:\tts4\Real-Time-Voice\synthesizer\inference.py", line 64, in load self._model.load(self.model_fpath) File "H:\tts4\Real-Time-Voice\synthesizer\models\tacotron.py", line 497, in load self.load_state_dict(checkpoint["model_state"]) File "H:\tts4\Real-Time-Voice\venv\lib\site-packages\torch\nn\modules\module.py", line 1407, in load_state_dict self.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for Tacotron: Unexpected key(s) in state_dict: "gst.encoder.convs.0.weight", "gst.encoder.convs.0.bias", "gst.encoder.convs.1.weight", "gst.encoder.convs.1.bias", "gst.encoder.convs.2.weight", "gst.encoder.convs.2.bias", "gst.encoder.convs.3.weight", "gst.encoder.convs.3.bias", "gst.encoder.convs.4.weight", "gst.encoder.convs.4.bias", "gst.encoder.convs.5.weight", "gst.encoder.convs.5.bias", "gst.encoder.bns.0.weight", "gst.encoder.bns.0.bias", "gst.encoder.bns.0.running_mean", "gst.encoder.bns.0.running_var", "gst.encoder.bns.0.num_batches_tracked", "gst.encoder.bns.1.weight", "gst.encoder.bns.1.bias", "gst.encoder.bns.1.running_mean", "gst.encoder.bns.1.running_var", "gst.encoder.bns.1.num_batches_tracked", "gst.encoder.bns.2.weight", "gst.encoder.bns.2.bias", "gst.encoder.bns.2.running_mean", "gst.encoder.bns.2.running_var", "gst.encoder.bns.2.num_batches_tracked", "gst.encoder.bns.3.weight", "gst.encoder.bns.3.bias", "gst.encoder.bns.3.running_mean", "gst.encoder.bns.3.running_var", 
"gst.encoder.bns.3.num_batches_tracked", "gst.encoder.bns.4.weight", "gst.encoder.bns.4.bias", "gst.encoder.bns.4.running_mean", "gst.encoder.bns.4.running_var", "gst.encoder.bns.4.num_batches_tracked", "gst.encoder.bns.5.weight", "gst.encoder.bns.5.bias", "gst.encoder.bns.5.running_mean", "gst.encoder.bns.5.running_var", "gst.encoder.bns.5.num_batches_tracked", "gst.encoder.gru.weight_ih_l0", "gst.encoder.gru.weight_hh_l0", "gst.encoder.gru.bias_ih_l0", "gst.encoder.gru.bias_hh_l0", "gst.stl.embed", "gst.stl.attention.W_query.weight", "gst.stl.attention.W_key.weight", "gst.stl.attention.W_value.weight". size mismatch for encoder_proj.weight: copying a param with shape torch.Size([128, 1024]) from checkpoint, the shape in current model is torch.Size([128, 512]). size mismatch for decoder.attn_rnn.weight_ih: copying a param with shape torch.Size([384, 1280]) from checkpoint, the shape in current model is torch.Size([384, 768]). size mismatch for decoder.rnn_input.weight: copying a param with shape torch.Size([1024, 1152]) from checkpoint, the shape in current model is torch.Size([1024, 640]). size mismatch for decoder.stop_proj.weight: copying a param with shape torch.Size([1, 2048]) from checkpoint, the shape in current model is torch.Size([1, 1536]).
open
2023-02-04T04:03:14Z
2023-02-04T04:03:14Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1159
[]
xjsdn
0
shaikhsajid1111/facebook_page_scraper
web-scraping
74
I cant run the code
Hello, I'm trying to run the system, but it's giving several errors. Here is the code used: `from facebook_page_scraper import Facebook_scraper page_name = "nameofpage" posts_count = 10 browser = "firefox" timeout = 60 #Seconds headless = True meta_ai = Facebook_scraper(page_name, posts_count, browser, timeout=timeout, headless=headless) #call the scrap_to_json() method #json_data = scraped_data.scrap_to_json() filename = "data_file" directory = "./" meta_ai.scrap_to_csv(filename, directory)` Now, the errors occurred: > Traceback (most recent call last): > File "C:\Users\Diego\OneDrive\Documentos\VS\DP\main.py", line 1, in <module> > from facebook_page_scraper import Facebook_scraper > File "C:\Users\Diego\AppData\Local\Programs\Python\Python310\lib\site-packages\facebook_page_scraper\__init__.py", line 1, in <module> > from .driver_initialization import Initializer > File "C:\Users\Diego\AppData\Local\Programs\Python\Python310\lib\site-packages\facebook_page_scraper\driver_initialization.py", line 3, in <module> > from seleniumwire import webdriver > File "C:\Users\Diego\AppData\Local\Programs\Python\Python310\lib\site-packages\seleniumwire\webdriver.py", line 13, in <module> > from seleniumwire import backend > File "C:\Users\Diego\AppData\Local\Programs\Python\Python310\lib\site-packages\seleniumwire\backend.py", line 4, in <module> > from seleniumwire.server import MitmProxy > File "C:\Users\Diego\AppData\Local\Programs\Python\Python310\lib\site-packages\seleniumwire\server.py", line 4, in <module> > from seleniumwire.handler import InterceptRequestHandler > File "C:\Users\Diego\AppData\Local\Programs\Python\Python310\lib\site-packages\seleniumwire\handler.py", line 5, in <module> > from seleniumwire import har > File "C:\Users\Diego\AppData\Local\Programs\Python\Python310\lib\site-packages\seleniumwire\har.py", line 11, in <module> > from seleniumwire.thirdparty.mitmproxy import connections > File 
"C:\Users\Diego\AppData\Local\Programs\Python\Python310\lib\site-packages\seleniumwire\thirdparty\mitmproxy\connections.py", line 9, in <module> > from seleniumwire.thirdparty.mitmproxy.net import tls, tcp > File "C:\Users\Diego\AppData\Local\Programs\Python\Python310\lib\site-packages\seleniumwire\thirdparty\mitmproxy\net\tls.py", line 43, in <module> > "SSLv2": (SSL.SSLv2_METHOD, BASIC_OPTIONS), > AttributeError: module 'OpenSSL.SSL' has no attribute 'SSLv2_METHOD'. Did you mean: 'SSLv23_METHOD'? My objective was to post the posts of a certain page, in order to verify if there would be a certain mention in one of the posts. Does anyone know how to proceed? :/
open
2023-06-14T17:26:14Z
2024-04-15T15:49:36Z
https://github.com/shaikhsajid1111/facebook_page_scraper/issues/74
[]
dgomp
2
yeongpin/cursor-free-vip
automation
104
Download link does not exist - mac-intel
![Image](https://github.com/user-attachments/assets/8344d68f-acd1-4386-abe1-7d4c2b90dd24)
closed
2025-02-26T02:23:48Z
2025-02-26T02:32:11Z
https://github.com/yeongpin/cursor-free-vip/issues/104
[]
nekteckyangtao
4
miguelgrinberg/python-socketio
asyncio
919
AttributeError: module 'redis.asyncio' has no attribute 'exceptions'
```python Traceback (most recent call last): File "/myproject/venv/lib/python3.10/site-packages/socketio/asyncio_pubsub_manager.py", line 151, in _thread async for message in self._listen(): # pragma: no branch File "/myproject/venv/lib/python3.10/site-packages/socketio/asyncio_redis_manager.py", line 100, in _listen async for message in self._redis_listen_with_retries(): File "/myproject/venv/lib/python3.10/site-packages/socketio/asyncio_redis_manager.py", line 87, in _redis_listen_with_retries except aioredis.exceptions.RedisError: AttributeError: module 'redis.asyncio' has no attribute 'exceptions' ``` Most likely the file `asyncio_redis_manager.py` should be fixed as: ```python try: from redis import asyncio as aioredis from redis.exceptions import RedisError except ImportError: try: import aioredis from aioredis.exceptions import RedisError except ImportError: aioredis = None ... try ... except RedisError: .... ```
closed
2022-05-01T14:33:42Z
2022-07-04T13:56:48Z
https://github.com/miguelgrinberg/python-socketio/issues/919
[]
ba1dr
4
deepinsight/insightface
pytorch
2,406
arcface_torch: MobileFaceNet batch inference in ONNX
Hello. Currently the MobileFaceNet model in arcface_torch throws an error when running batch inference after converting to ONNX (PyTorch batch inference is fine).

Environment:
```
- torch: 1.12.1+cu116
- onnx: 1.14.0
- onnxruntime: 1.15.1
```

Reproduce (No training needed):

1. Init MobileFaceNet backbone
```python
from backbones import get_model
backbone_onnx = get_model('mbf', dropout=0.0, fp16=False, num_features=512)
backbone_onnx.eval()
```

2. Convert backbone to ONNX using the `convert_onnx` function in `torch2onnx.py`
```python
import numpy as np
import onnx
import torch

def convert_onnx(net, output, opset=11, simplify=False):
    assert isinstance(net, torch.nn.Module)
    img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.int32)
    img = img.astype(np.float64)  # note: np.float is removed in recent NumPy
    img = (img / 255. - 0.5) / 0.5  # torch style norm
    img = img.transpose((2, 0, 1))
    img = torch.from_numpy(img).unsqueeze(0).float()
    torch.onnx.export(net, img, output, input_names=["data"], keep_initializers_as_inputs=False, verbose=False, opset_version=opset)
    model = onnx.load(output)
    graph = model.graph
    graph.input[0].type.tensor_type.shape.dim[0].dim_param = 'None'
    if simplify:
        from onnxsim import simplify
        model, check = simplify(model)
        assert check, "Simplified ONNX model could not be validated"
    onnx.save(model, output)

output = "mbf.onnx"
convert_onnx(backbone_onnx, output)
```

3.
Run `onnx_helper.py` to check ONNX model Output: ```python use onnx-model: /content/insightface/recognition/arcface_torch/mbf.onnx input-shape: ['None', 3, 112, 112] 0 Identity_0 1 Identity_1 2 Identity_2 3 Identity_3 4 Identity_4 5 Identity_5 6 Identity_6 7 Identity_7 --------------------------------------------------------------------------- Fail Traceback (most recent call last) [<ipython-input-84-124f28a47965>](https://localhost:8080/#) in <cell line: 2>() 1 handler = ArcFaceORT('/content/insightface/recognition/arcface_torch', cpu=True) ----> 2 err = handler.check() 2 frames [<ipython-input-83-768e2fd31f1f>](https://localhost:8080/#) in check(self, track, test_img) 162 test_img = cv2.resize(test_img, self.image_size) 163 feat, cost = self.benchmark(test_img) --> 164 batch_result = self.check_batch(test_img) 165 batch_result_sum = float(np.sum(batch_result)) 166 if batch_result_sum in [float('inf'), -float('inf')] or batch_result_sum != batch_result_sum: [<ipython-input-83-768e2fd31f1f>](https://localhost:8080/#) in check_batch(self, img) 196 images=imgs, scalefactor=1.0 / self.input_std, size=self.image_size, 197 mean=(self.input_mean, self.input_mean, self.input_mean), swapRB=True) --> 198 net_out = self.session.run(self.output_names, {self.input_name: blob})[0] 199 return net_out 200 [/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py](https://localhost:8080/#) in run(self, output_names, input_feed, run_options) 198 output_names = [output.name for output in self._outputs_meta] 199 try: --> 200 return self._sess.run(output_names, input_feed, run_options) 201 except C.EPFail as err: 202 if self._enable_fallback: Fail: [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running MatMul node. Name:'MatMul_172' Status Message: matmul_helper.h:61 Compute MatMul dimension mismatch ``` Please take a look, thank you.
open
2023-08-17T10:34:39Z
2023-08-17T10:34:39Z
https://github.com/deepinsight/insightface/issues/2406
[]
nhmnhat1997
0
postmanlabs/httpbin
api
514
How do we set authorization header in Digest-auth calls?
I am using httpbin to test authentication. For Basic auth, I'm using the code below:

```js
var req = new XMLHttpRequest();
req.open("GET", url + '/basic-auth/user/passwd', true);
req.setRequestHeader("Authorization", "Basic " + Buffer.from('user:passwd').toString('base64'));
req.send();
req.onreadystatechange = () => {
    if (req.readyState == 4) {
        console.log(req.status);
    }
}
```

How do we set the header for digest-auth?
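Unlike Basic auth, the Digest header cannot be set up-front: the server's 401 response carries a `WWW-Authenticate` challenge (realm, nonce) that the client must hash into the header and then retry the request with. A minimal stdlib sketch of the MD5 / no-qop variant from RFC 2617 (the realm and nonce values below are placeholders — in practice they come from the server's challenge; libraries such as Python's `requests.auth.HTTPDigestAuth` handle the whole round trip automatically):

```python
import hashlib

def digest_header(user, password, realm, nonce, method, uri):
    """Build a minimal RFC 2617 Digest Authorization header (MD5, no qop)."""
    h = lambda s: hashlib.md5(s.encode()).hexdigest()
    ha1 = h(f"{user}:{realm}:{password}")       # secret part
    ha2 = h(f"{method}:{uri}")                  # request part
    response = h(f"{ha1}:{nonce}:{ha2}")        # challenge response
    return (f'Digest username="{user}", realm="{realm}", nonce="{nonce}", '
            f'uri="{uri}", response="{response}"')

# realm/nonce here are placeholder values for illustration only.
hdr = digest_header("user", "passwd", "example-realm",
                    "abc123", "GET", "/digest-auth/auth/user/passwd")
print(hdr[:6])  # Digest
```

So for the XHR case above, the flow would be: send the request once, read the 401's `WWW-Authenticate` header, compute the value as sketched here, and resend with `req.setRequestHeader("Authorization", hdr)`.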
open
2018-09-26T09:28:31Z
2018-09-27T06:07:32Z
https://github.com/postmanlabs/httpbin/issues/514
[]
Apoorva2405
0
ahmedfgad/GeneticAlgorithmPython
numpy
19
Optimize problem using constraints
Hi @ahmedfgad Let's consider y = f(w1:w6) = w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + 6wx6 where (x1,x2,x3,x4,x5,x6)=(4,-2,3.5,5,-11,-4.7) and y=44 Is it possible to get the best solution for constrained weight parameter i.e if I want w1 to vary from (0,1) and w2 from (10,100) and so on ..... (Different bounds for each variable) Thanks
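Recent PyGAD releases support exactly this via the `gene_space` argument — one entry per gene, e.g. `gene_space=[{'low': 0, 'high': 1}, {'low': 10, 'high': 100}, ...]` (check the installed version's docs). If that parameter is unavailable, per-gene bounds can be approximated by sampling the initial population inside each interval and clipping mutated genes back, as in this NumPy-only sketch (helper names are my own illustration):

```python
import numpy as np

# One (low, high) interval per gene w1..w6.
bounds = [(0, 1), (10, 100), (-5, 5), (0, 2), (-20, 0), (0.5, 1.5)]

def init_population(bounds, pop_size, rng=None):
    """Sample each gene uniformly inside its own (low, high) interval."""
    rng = rng or np.random.default_rng(0)
    lows = np.array([b[0] for b in bounds], dtype=float)
    highs = np.array([b[1] for b in bounds], dtype=float)
    return rng.uniform(lows, highs, size=(pop_size, len(bounds)))

def clip_to_bounds(pop, bounds):
    """Clamp mutated genes back into their per-gene intervals."""
    lows = np.array([b[0] for b in bounds], dtype=float)
    highs = np.array([b[1] for b in bounds], dtype=float)
    return np.clip(pop, lows, highs)

pop = init_population(bounds, pop_size=8)
pop = clip_to_bounds(pop + 0.1, bounds)  # e.g. after a small mutation step
```

With `gene_space`, PyGAD performs both the bounded initialization and the bounded mutation internally, so the helpers above are only needed on older versions.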
closed
2020-10-20T03:18:08Z
2020-10-20T13:58:15Z
https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/19
[ "question" ]
deathlyhallows010
4
aminalaee/sqladmin
asyncio
815
deferred=True which will lead to DetachedInstanceError
### Checklist - [X] The bug is reproducible against the latest release or `master`. - [x] There are no similar issues or pull requests to fix it yet. ### Describe the bug ``` created_at: Mapped[datetime] = mapped_column( DateTime(timezone=True), server_default=func.now(tz=pytz.timezone('Asia/Baku')), deferred=True, ) updated_at: Mapped[datetime] = mapped_column( DateTime(timezone=True), server_default=func.now(tz=pytz.timezone('Asia/Baku')), onupdate=func.now(tz=pytz.timezone('Asia/Baku')), deferred=True, ) ``` In my code, when deferred=True in the created_at field, it is not possible to edit an instance of my model in the Admin panel. ``` raise orm_exc.DetachedInstanceError( sqlalchemy.orm.exc.DetachedInstanceError: Parent instance <User at 0x7f215df6ce10> is not bound to a Session; deferred load operation of attribute 'created_at' cannot proceed (Background on this error at: https://sqlalche.me/e/20/bhk3) ``` https://docs.sqlalchemy.org/en/20/errors.html#error-bhk3 ### Steps to reproduce the bug _No response_ ### Expected behavior _No response_ ### Actual behavior _No response_ ### Debugging material _No response_ ### Environment Windows 10 / ptyhon 3.11 / sqladmin 0.19 ### Additional context _No response_
open
2024-09-09T12:30:25Z
2025-01-26T12:06:58Z
https://github.com/aminalaee/sqladmin/issues/815
[]
vahidzhe
4
benbusby/whoogle-search
flask
684
[THEME] Catppuccin
As seen on https://github.com/catppuccin/whoogle ```css :root { /* LIGHT THEME COLORS */ /* DARK THEME COLORS */ --whoogle-dark-logo: #6E6C7E; --whoogle-dark-page-bg: #1E1E2E; --whoogle-dark-element-bg: #302D41; --whoogle-dark-text: #D9E0EE; --whoogle-dark-contrast-text: #F2CDCD; --whoogle-dark-secondary-text: #988BA2; --whoogle-dark-result-bg: #302D41; --whoogle-dark-result-title: #F5E0DC; --whoogle-dark-result-url: #F5E0DC; --whoogle-dark-result-visited: #C9CBFF; } #whoogle-w { fill: #96CDFB; } #whoogle-h { fill: #F28FAD; } #whoogle-o-1 { fill: #FAE3B0; } #whoogle-o-2 { fill: #96CDFB; } #whoogle-g { fill: #ABE9B3; } #whoogle-l { fill: #F28FAD; } #whoogle-e { fill: #FAE3B0; } ```
closed
2022-03-19T17:43:21Z
2022-03-21T15:56:50Z
https://github.com/benbusby/whoogle-search/issues/684
[ "theme" ]
WitherCubes
1
JaidedAI/EasyOCR
pytorch
1,067
Trying to use different GPU
Hello, I have two GPUs and I am trying to select the second one for the EasyOCR reader, but it does not work. Here is my code: ``` import easyocr import cv2 img = cv2.imread("image.png") reader = easyocr.Reader(['en'], gpu="cuda:1") eo = reader.readtext(img) ``` And the error I get: ``` RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1 ``` Any idea how to solve this? Thank you! Guillaume
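A common workaround when a library pins the model to `cuda:0` internally (which is what the error message suggests EasyOCR does — an assumption about its internals): hide GPU 0 from the process with `CUDA_VISIBLE_DEVICES`, so the second physical GPU becomes `cuda:0` inside the process. The variable must be set before torch/easyocr are imported:

```python
import os

# Make only physical GPU 1 visible; inside this process it is seen as cuda:0.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# These imports must come *after* the environment variable is set:
# import easyocr
# reader = easyocr.Reader(['en'], gpu=True)  # now runs on physical GPU 1
```

Equivalently, launch the script as `CUDA_VISIBLE_DEVICES=1 python script.py` and keep `gpu=True` in the reader.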
closed
2023-06-27T14:45:43Z
2023-06-28T13:00:45Z
https://github.com/JaidedAI/EasyOCR/issues/1067
[]
guist
1
serengil/deepface
machine-learning
591
import error: illegal hardware instruction
Hi, I am running deepface on my Mac (system report listed below).

```
Model Name: MacBook Pro
Model Identifier: MacBookPro17,1
Chip: Apple M1
Total Number of Cores: 8 (4 performance and 4 efficiency)
Memory: 16 GB
System Firmware Version: 8419.41.10
OS Loader Version: 7459.141.1
Serial Number (system): FVFGD1ZHQ05Q
Hardware UUID: C400510A-03B1-52C1-AF5E-F1D87F9FEED1
Provisioning UDID: 00008103-000E30301A3B001E
Activation Lock Status: Disabled
```

I installed deepface with pip in my conda environment (python==3.8.13). The installation was successful without any error (in fact, deepface is the first library I installed after creating this environment). I then ran `from deepface import DeepFace` following the README. However, Python prints the error `[1] 22199 illegal hardware instruction python` and exits. I don't know why this happened. Can anyone help? Thank you in advance.
closed
2022-11-03T19:16:27Z
2022-11-10T17:22:01Z
https://github.com/serengil/deepface/issues/591
[ "dependencies" ]
sun-peach
4
lepture/authlib
flask
268
HTTPX OAuth1 implementation does not set Content-Length header
**Describe the bug** The OAuth1 implementation for HTTPX does not set the content-length header, which can cause problems when using body signatures and talking to servers who block body POSTs where the length header is not specified. **Expected behavior** Content-Length header is set. **Environment:** - OS: linux - Python Version: 3.8 - Authlib Version: 0.14.3 Incoming PR fixes this.
closed
2020-09-10T05:03:25Z
2020-09-17T07:22:14Z
https://github.com/lepture/authlib/issues/268
[ "bug" ]
dustydecapod
0
holoviz/panel
plotly
6,947
Limit Swipe to avoid hiding certain regions
#### Is your feature request related to a problem? Please describe. Trying to overlay two data plots will hide their respective toolbars or other extra-viewport elements like colorbars: ```python curve = hv.Curve([0,1]).opts(toolbar='above', title='curve', frame_height=200) img = hv.Image(np.array([[0, 1], [1, 0]])).opts(toolbar='below', title='img', frame_height=200) pn.Column(pn.Swipe(curve, img)) ``` <img width="292" alt="image" src="https://github.com/holoviz/panel/assets/6613202/df833ec2-99f0-49a3-aba0-1021d27f2204"> #### Describe the solution you'd like At least some mode/setting to limit the swipe-hiding to a certain region, so at least we could manually try to limit it to the viewport height and then put toolbars on top and bottom so as to have both always visible. #### Describe alternatives you've considered Since this is really case about swiping to reveal data, rather than an entire component, I'm wondering if the change should be in HoloViews or Bokeh instead of requesting this feature of Panel.
open
2024-06-28T13:37:20Z
2024-06-28T13:37:20Z
https://github.com/holoviz/panel/issues/6947
[ "TRIAGE" ]
droumis
0
sinaptik-ai/pandas-ai
pandas
724
Redis connector
### 🚀 The feature Allow pandasai to connect to Redis and perform searches over databases. ### Motivation, pitch Currently there are SQL connectors for pandasai, but Redis is also a popular database. Recently the Redisearch module added vector storage and search capability, which I think would be a valuable add-on to pandasai for searching within databases containing multiple long documents. ### Alternatives _No response_ ### Additional context Redisearch resources: https://github.com/RediSearch/RediSearch
closed
2023-11-03T02:17:05Z
2024-07-31T16:05:34Z
https://github.com/sinaptik-ai/pandas-ai/issues/724
[ "enhancement", "good first issue" ]
tytung2020
7
mitmproxy/pdoc
api
443
Inline Math doesn't seem to support escapes with backslash
#### Problem Description When using the inline Math syntax `:math:`, it doesn't seem to be possible to escape curly brackets or underscores with a backslash. #### Steps to reproduce the behavior: 1. Create a file that contains the following code: ```python ''':math:`\{x \\in R\_index \}`''' ``` 2. Run the following command: pdoc file_directory --output-dir output_directory --math #### System Information pdoc: 12.1.0 Python: 3.9.12 Platform: Windows-10-10.0.19044-SP0
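Worth noting as a possible workaround, independent of how pdoc treats the escapes: in a regular string literal Python itself rewrites some backslash sequences before pdoc ever sees the docstring, so a raw docstring keeps the `:math:` payload exactly as typed. A small sketch of the difference:

```python
# Python collapses recognized escapes ('\\' -> '\') in a normal literal
# but leaves unrecognized ones ('\{', '\_', '\}') intact, so these two
# docstrings differ even though they look identical in the source file.
normal = ''':math:`\{x \\in R\_index \}`'''
raw = r''':math:`\{x \\in R\_index \}`'''

print(normal)
print(raw)
print(normal == raw)  # False: '\\in' became '\in' in `normal`
```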
open
2022-09-29T09:33:57Z
2024-03-20T15:30:34Z
https://github.com/mitmproxy/pdoc/issues/443
[ "enhancement" ]
Loitador41
2
seleniumbase/SeleniumBase
pytest
2,429
chromium_arg parameter "--disable-features" passed to seleniumbase.Driver() being ignored.
I'm creating a driver instance with code: ```python dr = seleniumbase.Driver(uc=True, proxy="user:pass@127.0.0.1:8090", user_data_dir=user_data_dir, extension_dir=ext_dirs, agent=agent_string, uc_subprocess=True, chromium_arg='--disable-features=UserAgentClientHint') ``` https://httpbin.org/anything shows that `--disable-features=UserAgentClientHint` is ignored, since it shows `Sec-Ch-*` request headers: ```json "headers": { ... "Sec-Ch-Ua": "\"Not_A Brand\";v=\"8\", \"Chromium\";v=\"120\"", "Sec-Ch-Ua-Mobile": "?0", "Sec-Ch-Ua-Platform": "\"Linux\"", ... }, ``` The same thing for browser launched with parameters `chromium disable-features=UserAgentClientHint` shows that there is no such headers sent with the request. It seems that the problem is in function `_set_chrome_options` in `seleniumbase/core/browser_launcher.py` (line 1094 and so on): ```python if user_data_dir: chrome_options.add_argument( "--disable-features=OptimizationHintsFetching,Translate," "OptimizationTargetPrediction,PrivacySandboxSettings4," "DownloadBubble,DownloadBubbleV2" ) else: chrome_options.add_argument( "--disable-features=OptimizationHintsFetching,Translate," "OptimizationTargetPrediction,DownloadBubble,DownloadBubbleV2" ) ``` So, driver instance's ChromeOptions has double `--disable-features` arguments: ```python dr = seleniumbase.Driver(uc=True, proxy="user:pass@127.0.0.1:8090", user_data_dir=user_data_dir, extension_dir=ext_dirs, agent=agent_string, uc_subprocess=True, chromium_arg='--disable-features=UserAgentClientHint') print('\n'.join(dr.options.arguments)) ``` Output: ``` --window-size=1280,840 ... --disable-features=UserAgentClientHint ... --disable-features=OptimizationHintsFetching,Translate,OptimizationTargetPrediction,PrivacySandboxSettings4,DownloadBubble,DownloadBubbleV2 ... 
``` As a test, I've added `UserAgentClientHint` to the disabled features in `browser_launcher.py`: ```python if user_data_dir: chrome_options.add_argument( "--disable-features=OptimizationHintsFetching,Translate," "OptimizationTargetPrediction,PrivacySandboxSettings4," "DownloadBubble,DownloadBubbleV2,UserAgentClientHint" ) else: chrome_options.add_argument( "--disable-features=OptimizationHintsFetching,Translate," "OptimizationTargetPrediction,DownloadBubble,DownloadBubbleV2,UserAgentClientHint" ) ``` and everything works as it should.
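A more general fix than hard-coding extra feature names would be to merge every `--disable-features` occurrence into a single switch before launching, since Chromium only honors one occurrence. A small illustrative sketch (not SeleniumBase's actual code):

```python
def merge_disable_features(arguments):
    """Collapse multiple --disable-features=... flags into one, keeping
    first-seen order and dropping duplicate feature names."""
    prefix = "--disable-features="
    features, rest = [], []
    for arg in arguments:
        if arg.startswith(prefix):
            for feature in arg[len(prefix):].split(","):
                if feature and feature not in features:
                    features.append(feature)
        else:
            rest.append(arg)
    if features:
        rest.append(prefix + ",".join(features))
    return rest

args = merge_disable_features([
    "--window-size=1280,840",
    "--disable-features=UserAgentClientHint",
    "--disable-features=Translate,DownloadBubble",
])
print(args)
```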
closed
2024-01-14T20:08:16Z
2024-01-19T03:22:42Z
https://github.com/seleniumbase/SeleniumBase/issues/2429
[ "bug" ]
agp22888
2
horovod/horovod
pytorch
3,871
Does horovod support WSL2?
I am trying to use horovod based on WSL2. Horovod was installed with the nccl option. The following error occurs. Is it correct that horovod is supported in WSL2? ``` $ horovodrun -np 2 -H localhost:2 python horovod_pytorch.py [1,1]<stdout>:(2, 14, 3) [1,0]<stdout>:(2, 14, 3) [1,0]<stdout>:Files already downloaded and verified [1,1]<stdout>:Files already downloaded and verified [1,1]<stdout>:Files already downloaded and verified [1,0]<stdout>:Files already downloaded and verified [1,1]<stdout>:-------------------------- [1,1]<stdout>:1 [1,0]<stdout>:-------------------------- [1,0]<stdout>:0 [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO Bootstrap : Using lo:127.0.0.1<0> [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO cudaDriverVersion 11070 [1,0]<stdout>:NCCL version 2.14.3+cuda11.7 [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO cudaDriverVersion 11070 [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO Bootstrap : Using lo:127.0.0.1<0> [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO NET/IB : No device found. [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0> [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO Using network Socket [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO NET/IB : No device found. 
[1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0> [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO Using network Socket [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0 [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO Channel 00/02 : 0 1 [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO Channel 01/02 : 0 1 [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 1/-1/-1->0->-1 [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO Channel 00 : 0[3000] -> 1[4000] via SHM/direct/direct [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO Channel 01 : 0[3000] -> 1[4000] via SHM/direct/direct [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO Channel 00 : 1[4000] -> 0[3000] via SHM/direct/direct [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO Channel 01 : 1[4000] -> 0[3000] via SHM/direct/direct [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO Connected all rings [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO Connected all trees [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512 [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO Connected all rings [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO Connected all trees [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512 [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO 2 coll channels, 2 p2p channels, 2 p2p channels per peer [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO comm 0x7f75e83d8300 rank 0 nranks 2 cudaDev 0 busId 3000 - Init COMPLETE [1,0]<stdout>: [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] misc/strongstream.cc:33 NCCL WARN Cuda failure 'CUDA driver is a stub library' [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO enqueue.cc:1417 -> 1 [1,0]<stdout>:WIN-L360SQKQGVL:479:539 
[0] NCCL INFO enqueue.cc:1457 -> 1 [1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO enqueue.cc:1462 -> 1
[1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO comm 0x7f42143d2540 rank 1 nranks 2 cudaDev 1 busId 4000 - Init COMPLETE
[1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] misc/strongstream.cc:33 NCCL WARN Cuda failure 'CUDA driver is a stub library'
[1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO enqueue.cc:1417 -> 1
[1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO enqueue.cc:1457 -> 1
[1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO enqueue.cc:1462 -> 1
[1,1]<stdout>:WIN-L360SQKQGVL:480:551 [1] NCCL INFO [Service thread] Connection closed by localRank 1
[1,0]<stdout>:WIN-L360SQKQGVL:479:550 [0] NCCL INFO [Service thread] Connection closed by localRank 0
[1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO comm 0x7f75e83d8300 rank 0 nranks 2 cudaDev 0 busId 3000 - Abort COMPLETE
[1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] misc/argcheck.cc:39 NCCL WARN Broadcast : invalid root 0 (root should be in the 0..-1 range)
[1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO enqueue.cc:1450 -> 4
[1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO enqueue.cc:1462 -> 4
[1,0]<stdout>:WIN-L360SQKQGVL:479:539 [0] NCCL INFO comm 0x7f75e83d8300 rank -1 nranks -1 cudaDev -1 busId ffffffffffffffff - Abort COMPLETE
[... the "NCCL WARN Broadcast : invalid root 0 (root should be in the 0..-1 range)" / "Abort COMPLETE" block above repeats continuously on both ranks ...]
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] *** Process received signal ***
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] Signal: Segmentation fault (11)
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] Signal code: (128)
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] Failing at address: (nil)
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f76be61d520]
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] [ 1] /home/ryu/.virtualenvs/horovodPytorch/lib/python3.8/site-packages/horovod/torch/mpi_lib_v2.cpython-38-x86_64-linux-gnu.so(+0x1da2c9)[0x7f75ee12b2c9]
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] [ 2] /home/ryu/.virtualenvs/horovodPytorch/lib/python3.8/site-packages/horovod/torch/mpi_lib_v2.cpython-38-x86_64-linux-gnu.so(+0x1ef439)[0x7f75ee140439]
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] [ 3] /home/ryu/.virtualenvs/horovodPytorch/lib/python3.8/site-packages/horovod/torch/mpi_lib_v2.cpython-38-x86_64-linux-gnu.so(+0x21d2fb)[0x7f75ee16e2fb]
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] [ 4] /home/ryu/.virtualenvs/horovodPytorch/lib/python3.8/site-packages/horovod/torch/mpi_lib_v2.cpython-38-x86_64-linux-gnu.so(_ZN7horovod6common13NCCLBroadcast7ExecuteERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseE+0x196)[0x7f75ee0be866]
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] [ 5] /home/ryu/.virtualenvs/horovodPytorch/lib/python3.8/site-packages/horovod/torch/mpi_lib_v2.cpython-38-x86_64-linux-gnu.so(_ZNK7horovod6common16OperationManager16ExecuteBroadcastERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseE+0x7d)[0x7f75ee078b6d]
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] [ 6] /home/ryu/.virtualenvs/horovodPytorch/lib/python3.8/site-packages/horovod/torch/mpi_lib_v2.cpython-38-x86_64-linux-gnu.so(_ZNK7horovod6common16OperationManager16ExecuteOperationERSt6vectorINS0_16TensorTableEntryESaIS3_EERKNS0_8ResponseERNS0_10ProcessSetE+0x161)[0x7f75ee079001]
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] [ 7] /home/ryu/.virtualenvs/horovodPytorch/lib/python3.8/site-packages/horovod/torch/mpi_lib_v2.cpython-38-x86_64-linux-gnu.so(+0xfa381)[0x7f75ee04b381]
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] [ 8] /lib/x86_64-linux-gnu/libstdc++.so.6(+0xdc2b3)[0x7f769eab22b3]
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] [ 9] /lib/x86_64-linux-gnu/libc.so.6(+0x94b43)[0x7f76be66fb43]
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] [10] /lib/x86_64-linux-gnu/libc.so.6(+0x126a00)[0x7f76be701a00]
[1,0]<stderr>:[WIN-L360SQKQGVL:00479] *** End of error message ***
[... rank 1 keeps printing the same "NCCL WARN Broadcast : invalid root 0" block until the output is cut off ...]
root 0 (root should be in the 0..-1 range) [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO enqueue.cc:1450 -> 4 [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO enqueue.cc:1462 -> 4 [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO comm 0x7f42143d2540 rank -1 nranks -1 cudaDev -1 busId ffffffffffffffff - Abort COMPLETE [1,1]<stdout>: [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] misc/argcheck.cc:39 NCCL WARN Broadcast : invalid root 0 (root should be in the 0..-1 range) [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO enqueue.cc:1450 -> 4 [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO enqueue.cc:1462 -> 4 [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO comm 0x7f42143d2540 rank -1 nranks -1 cudaDev -1 busId ffffffffffffffff - Abort COMPLETE [1,1]<stdout>: [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] misc/argcheck.cc:39 NCCL WARN Broadcast : invalid root 0 (root should be in the 0..-1 range) [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO enqueue.cc:1450 -> 4 [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO enqueue.cc:1462 -> 4 [1,1]<stdout>:WIN-L360SQKQGVL:480:537 [1] NCCL INFO comm 0x7f42143d2540 rank -1 nranks -1 cudaDev -1 busId ffffffffffffffff - Abort COMPLETE [1,0]<stderr>:Traceback (most recent call last): [1,0]<stderr>: File "/home/ryu/.virtualenvs/horovodPytorch/lib/python3.8/site-packages/horovod/torch/mpi_ops.py", line 1285, in synchronize [1,0]<stderr>: mpi_lib.horovod_torch_wait_and_clear(handle) [1,0]<stderr>:RuntimeError: ncclBroadcast failed: unhandled cuda error [1,0]<stderr>: [1,0]<stderr>:During handling of the above exception, another exception occurred: [1,0]<stderr>: [1,0]<stderr>:Traceback (most recent call last): [1,0]<stderr>: File "horovod_pytorch.py", line 149, in <module> [1,0]<stderr>: hvd.broadcast_parameters(model.state_dict(), root_rank=0) [1,0]<stderr>: File "/home/ryu/.virtualenvs/horovodPytorch/lib/python3.8/site-packages/horovod/torch/functions.py", line 59, in broadcast_parameters [1,0]<stderr>: 
synchronize(handle) [1,0]<stderr>: File "/home/ryu/.virtualenvs/horovodPytorch/lib/python3.8/site-packages/horovod/torch/mpi_ops.py", line 1290, in synchronize [1,0]<stderr>: raise HorovodInternalError(e) [1,0]<stderr>:horovod.common.exceptions.HorovodInternalError: ncclBroadcast failed: unhandled cuda error [1,1]<stderr>:Traceback (most recent call last): [1,1]<stderr>: File "/home/ryu/.virtualenvs/horovodPytorch/lib/python3.8/site-packages/horovod/torch/mpi_ops.py", line 1285, in synchronize [1,1]<stderr>: mpi_lib.horovod_torch_wait_and_clear(handle) [1,1]<stderr>:RuntimeError: ncclBroadcast failed: unhandled cuda error [1,1]<stderr>: [1,1]<stderr>:During handling of the above exception, another exception occurred: [1,1]<stderr>: [1,1]<stderr>:Traceback (most recent call last): [1,1]<stderr>: File "horovod_pytorch.py", line 149, in <module> [1,1]<stderr>: hvd.broadcast_parameters(model.state_dict(), root_rank=0) [1,1]<stderr>: File "/home/ryu/.virtualenvs/horovodPytorch/lib/python3.8/site-packages/horovod/torch/functions.py", line 59, in broadcast_parameters [1,1]<stderr>: synchronize(handle) [1,1]<stderr>: File "/home/ryu/.virtualenvs/horovodPytorch/lib/python3.8/site-packages/horovod/torch/mpi_ops.py", line 1290, in synchronize [1,1]<stderr>: raise HorovodInternalError(e) [1,1]<stderr>:horovod.common.exceptions.HorovodInternalError: ncclBroadcast failed: unhandled cuda error -------------------------------------------------------------------------- Primary job terminated normally, but 1 process returned a non-zero exit code. Per user-direction, the job has been aborted. -------------------------------------------------------------------------- -------------------------------------------------------------------------- mpirun noticed that process rank 0 with PID 0 on node WIN-L360SQKQGVL exited on signal 11 (Segmentation fault). ```
closed
2023-03-21T09:11:16Z
2023-08-10T00:24:05Z
https://github.com/horovod/horovod/issues/3871
[ "wontfix" ]
drjtryu
9
charlesq34/pointnet
tensorflow
129
How to do global feature extraction only without incorporating any classification framework?
open
2018-08-21T11:43:37Z
2018-08-21T11:43:37Z
https://github.com/charlesq34/pointnet/issues/129
[]
sumitsinha
0
ymcui/Chinese-LLaMA-Alpaca
nlp
178
text-generation-webui raises "Cannot copy out of meta tensor; no data"
Thank you for using the issue template. Please follow the steps below to provide the relevant information; we will prioritize issues with relatively complete information. *Tip: put an x inside [ ] to check an item. Delete these two lines when asking. Keep only the applicable options and delete the rest.* ### Pre-submission checklist - [x] Since the related dependencies are updated frequently, I have followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki) - [x] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues without finding a similar problem or solution - [x] Third-party plugin issue: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat); it is also recommended to look for a solution in the corresponding project ### Problem type Base model: - [x] LLaMA - [x] Alpaca Problem type: - [x] Model quantization and deployment (llama.cpp, text-generation-webui, LlamaChat) ### Detailed description After running python server.py --model llama-13b-hf --lora chinese-alpaca-lora-13b --gpu-memory 20 --share --auto-devices, inference raises the following exception: Traceback (most recent call last): File "/project/text-generation-webui/modules/callbacks.py", line 66, in gentask ret = self.mfunc(callback=_callback, **self.kwargs) File "/project/text-generation-webui/modules/text_generation.py", line 252, in generate_with_callback shared.model.generate(**kwargs) File "/home/eric/anaconda3/envs/lora-13b/lib/python3.9/site-packages/peft/peft_model.py", line 716, in generate outputs = self.base_model.generate(**kwargs) File "/home/eric/anaconda3/envs/lora-13b/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "/home/eric/anaconda3/envs/lora-13b/lib/python3.9/site-packages/transformers/generation/utils.py", line 1485, in generate return self.sample( File "/home/eric/anaconda3/envs/lora-13b/lib/python3.9/site-packages/transformers/generation/utils.py", line 2524, in sample outputs = self( File "/home/eric/anaconda3/envs/lora-13b/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/eric/anaconda3/envs/lora-13b/lib/python3.9/site-packages/accelerate/hooks.py", line 165, in new_forward output = old_forward(*args, **kwargs) File "/home/eric/anaconda3/envs/lora-13b/lib/python3.9/site-packages/transformers/models/llama/modeling_llama.py", line 700, in forward logits = self.lm_head(hidden_states) File "/home/eric/anaconda3/envs/lora-13b/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "/home/eric/anaconda3/envs/lora-13b/lib/python3.9/site-packages/accelerate/hooks.py", line 160, in new_forward args, kwargs = module._hf_hook.pre_forward(module, *args, **kwargs) File "/home/eric/anaconda3/envs/lora-13b/lib/python3.9/site-packages/accelerate/hooks.py", line 280, in pre_forward set_module_tensor_to_device(module, name, self.execution_device, value=self.weights_map[name]) File "/home/eric/anaconda3/envs/lora-13b/lib/python3.9/site-packages/accelerate/utils/modeling.py", line 149, in set_module_tensor_to_device new_value = value.to(device) NotImplementedError: Cannot copy out of meta tensor; no data! ### Screenshots or logs ![111222](https://user-images.githubusercontent.com/6291208/232961193-63bbacc1-a200-4c57-9ddb-46b00fe3ba38.png) ![2222222](https://user-images.githubusercontent.com/6291208/232961204-f4e4f5f6-c01e-4f20-887f-3d7b06b72f99.png)
closed
2023-04-19T03:40:48Z
2023-05-22T22:02:06Z
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/178
[ "stale" ]
purewater2011
8
viewflow/viewflow
django
65
View.site
Place for flow registration ``` python from viewflow.site import viewsite viewsite.register(HelloworldFlow) viewsite.urls ```
closed
2014-07-21T08:18:55Z
2014-08-12T05:42:11Z
https://github.com/viewflow/viewflow/issues/65
[ "dev/site" ]
kmmbvnr
1
FlareSolverr/FlareSolverr
api
1,202
Error: Error solving the challenge. Message: invalid argument
### Have you checked our README? - [X] I have checked the README ### Have you followed our Troubleshooting? - [X] I have followed your Troubleshooting ### Is there already an issue for your problem? - [X] I have checked older issues, open and closed ### Have you checked the discussions? - [X] I have read the Discussions ### Environment ```markdown - FlareSolverr version: FlareSolverr 3.3.19 - Last working FlareSolverr version: - Operating system: Mac os 12 - Are you using Docker: [yes/no] no - FlareSolverr User-Agent (see log traces or / endpoint): User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 - Are you using a VPN: [yes/no] no - Are you using a Proxy: [yes/no] no - Are you using Captcha Solver: [yes/no] no - If using captcha solver, which one: - URL to test this issue: ``` ### Description Error: Error solving the challenge. Message: invalid argument target: www.fanfiction.net/s/14145272/1/In-Your-Wildest-Dreams ### Logged Error Messages ```text 2024-05-25 10:24:32 INFO FlareSolverr 3.3.19 2024-05-25 10:24:32 INFO Testing web browser installation... 2024-05-25 10:24:32 INFO Platform: macOS-12.7.5-x86_64-i386-64bit 2024-05-25 10:24:32 INFO Chrome / Chromium path: /Applications/Google Chrome.app/Contents/MacOS/Google Chrome 2024-05-25 10:24:32 INFO Chrome / Chromium major version: 125 2024-05-25 10:24:32 INFO Launching web browser... 2024-05-25 10:24:43 INFO FlareSolverr User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36 2024-05-25 10:24:43 INFO Test successful! 2024-05-25 10:24:43 INFO Serving on http://0.0.0.0:8191 2024-05-25 10:25:22 INFO Incoming request => POST /v1 body: {'cmd': 'request.get', 'url': 'www.fanfiction.net/s/14145272/1/In-Your-Wildest-Dreams', 'maxTimeout': 60000} 2024-05-25 10:25:24 ERROR Error: Error solving the challenge. 
Message: invalid argument\n (Session info: chrome=125.0.6422.77)\nStacktrace:\n0 chromedriver 0x0000000109b626c8 chromedriver + 6149832\n1 chromedriver 0x0000000109b59cea chromedriver + 6114538\n2 chromedriver 0x00000001095e6b91 chromedriver + 400273\n3 chromedriver 0x00000001095cd098 chromedriver + 295064\n4 chromedriver 0x00000001095cbc4f chromedriver + 289871\n5 chromedriver 0x00000001095cbf3a chromedriver + 290618\n6 chromedriver 0x00000001095e97b7 chromedriver + 411575\n7 chromedriver 0x0000000109676da5 chromedriver + 990629\n8 chromedriver 0x0000000109656cb2 chromedriver + 859314\n9 chromedriver 0x00000001096760db chromedriver + 987355\n10 chromedriver 0x0000000109656a53 chromedriver + 858707\n11 chromedriver 0x00000001096266d5 chromedriver + 661205\n12 chromedriver 0x0000000109626f6e chromedriver + 663406\n13 chromedriver 0x0000000109b23d00 chromedriver + 5893376\n14 chromedriver 0x0000000109b294cc chromedriver + 5915852\n15 chromedriver 0x0000000109b058c4 chromedriver + 5769412\n16 chromedriver 0x0000000109b29f99 chromedriver + 5918617\n17 chromedriver 0x0000000109af6ed4 chromedriver + 5709524\n18 chromedriver 0x0000000109b4a018 chromedriver + 6049816\n19 chromedriver 0x0000000109b4a1d7 chromedriver + 6050263\n20 chromedriver 0x0000000109b5989e chromedriver + 6113438\n21 libsystem_pthread.dylib 0x00007ff81062d4e1 _pthread_start + 125\n22 libsystem_pthread.dylib 0x00007ff810628f6b thread_start + 15\n 2024-05-25 10:25:24 INFO Response in 1.936 s 2024-05-25 10:25:24 INFO 127.0.0.1 POST http://localhost:8191/v1 500 Internal Server Error 2024-05-25 10:25:24 WARNING unhandled incoming priority event ``` ### Screenshots _No response_
closed
2024-05-25T02:27:51Z
2024-05-25T16:09:49Z
https://github.com/FlareSolverr/FlareSolverr/issues/1202
[]
dopamines-001
1
tatsu-lab/stanford_alpaca
deep-learning
171
TypeError: 'type' object is not subscriptable
``` Traceback (most recent call last): File "train.py", line 25, in <module> import utils File "./fine-tuned/stanford_alpaca/utils.py", line 40, in <module> prompts: Union[str, Sequence[str], Sequence[dict[str, str]], dict[str, str]], TypeError: 'type' object is not subscriptable ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 3020123) of binary: /home/sn/miniconda3/envs/chatllama/bin/python Traceback (most recent call last): File "/home/sn/miniconda3/envs/chatllama/bin/torchrun", line 8, in <module> sys.exit(main()) File "/home/sn/miniconda3/envs/chatllama/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper return f(*args, **kwargs) File "/home/sn/miniconda3/envs/chatllama/lib/python3.8/site-packages/torch/distributed/run.py", line 794, in main run(args) File "/home/sn/miniconda3/envs/chatllama/lib/python3.8/site-packages/torch/distributed/run.py", line 785, in run elastic_launch( File "/home/sn/miniconda3/envs/chatllama/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 134, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/home/sn/miniconda3/envs/chatllama/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ train.py FAILED [TRUNCATED] ```
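For context, the failing annotation subscripts the built-in `dict` as `dict[str, str]`, which only became legal in Python 3.9 (PEP 585); the log shows Python 3.8. A minimal sketch of a backward-compatible annotation using `typing` generics — the `Prompts` alias and `kind_of` helper are illustrative, not part of stanford_alpaca:

```python
from typing import Dict, Sequence, Union

# On Python 3.8, `dict[str, str]` raises "TypeError: 'type' object is not
# subscriptable"; the typing generics below work on 3.8 and later.
Prompts = Union[str, Sequence[str], Sequence[Dict[str, str]], Dict[str, str]]

def kind_of(prompts) -> str:
    # Illustrative helper so the alias is exercised at runtime.
    return type(prompts).__name__

print(kind_of({"instruction": "hi"}))  # dict
```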
closed
2023-04-02T04:51:25Z
2023-04-10T03:08:36Z
https://github.com/tatsu-lab/stanford_alpaca/issues/171
[]
sanjibnarzary
2
lgienapp/aquarel
data-visualization
16
Grid alpha value throws type error
``` with load_theme("arctic_light").set_grid(axis="x"): pd.Series(sizes).sort_values().plot.bar() ``` produces ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /tmp/ipykernel_16842/1244744067.py in <module> ----> 1 with load_theme("arctic_light").set_grid(axis="x"): 2 pd.Series(sizes).sort_values().plot.bar() /media/ssd/BIGSCIENCE/env/lib/python3.7/site-packages/aquarel/theme.py in set_grid(self, draw, axis, ticks, alpha, style, width) 432 "axis": axis if axis in self._axis_options else None, 433 "ticks": ticks if ticks in self._tick_options else None, --> 434 "alpha": alpha if 0 <= alpha <= 1 else None, 435 "style": style if style in self._line_style_options else None, 436 "width": width, TypeError: '<=' not supported between instances of 'int' and 'NoneType' ```
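The chained comparison `0 <= alpha <= 1` fails because `alpha` arrives as `None` (its default when `set_grid` is called without it), and `None` cannot be ordered against an int. A minimal sketch of a None-safe check, as an illustration rather than aquarel's actual patch:

```python
def safe_alpha(alpha):
    # Check for None before the numeric range test: `0 <= None` raises
    # TypeError, exactly as in the reported traceback.
    if alpha is not None and 0 <= alpha <= 1:
        return alpha
    return None

print(safe_alpha(None))  # None
print(safe_alpha(0.3))   # 0.3
print(safe_alpha(7))     # None
```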
closed
2022-08-29T10:07:53Z
2022-08-31T11:22:59Z
https://github.com/lgienapp/aquarel/issues/16
[ "bug" ]
lgienapp
0
pyeve/eve
flask
920
Within-one-second-precision PATCH should not raise 412
Hello, Thanks for your API, it's really cool. I have a small question about PATCH requests. I disabled conditional requests with IF-MATCH=False, but when I try to send 2 PATCH requests within the same second, I get: "code": 412, "message": "Client and server etags don't match". Maybe I'm not using it right. Can you help me with this? Thanks, Martinho MOREIRA
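For reference, Eve's concurrency control is governed by the `IF_MATCH` setting (note the underscore: `IF_MATCH` is the config key, while `If-Match` is the HTTP header clients send). A minimal settings sketch, assuming a standard Eve `settings.py`; the `people` resource is hypothetical:

```python
# settings.py — minimal sketch. With IF_MATCH disabled, Eve stops requiring
# an If-Match header carrying the document's current ETag on PATCH/PUT/DELETE.
IF_MATCH = False

# Hypothetical resource, only so the settings file is complete.
DOMAIN = {"people": {"schema": {"name": {"type": "string"}}}}
```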
closed
2016-10-13T14:58:22Z
2019-03-04T14:20:08Z
https://github.com/pyeve/eve/issues/920
[ "bug" ]
moreiramarti
7