| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
betodealmeida/shillelagh | sqlalchemy | 29 | [WeatherAPI] add support for forecasts | WeatherAPI also has an API for forecasts; we could extend the current adapter or create a new one to support it. | open | 2021-06-22T16:49:33Z | 2021-12-23T19:49:32Z | https://github.com/betodealmeida/shillelagh/issues/29 | [
"enhancement",
"help wanted",
"good first issue"
] | betodealmeida | 0 |
ets-labs/python-dependency-injector | flask | 168 | Create FactoryAggregate provider | The idea is to create a provider that aggregates factories and provides a dynamic interface for creating different types of objects:
```python
"""`FactoryAggregate` providers example."""
import sys
import dependency_injector.providers as providers
from games import Chess, Checkers, Ludo
game_factory = providers.FactoryAggregate(chess=providers.Factory(Chess),
checkers=providers.Factory(Checkers),
ludo=providers.Factory(Ludo))
if __name__ == '__main__':
game_type = sys.argv[1].lower()
selected_game = game_factory.create(game_type)
selected_game.play()
# $ python example.py chess
# Playing chess
# $ python example.py checkers
# Playing checkers
# $ python example.py ludo
# Playing ludo
```
Also such aggregates could be used for representing families of related object factories. | closed | 2017-10-03T22:23:42Z | 2017-10-13T05:10:07Z | https://github.com/ets-labs/python-dependency-injector/issues/168 | [
"feature"
] | rmk135 | 0 |
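The proposed aggregate is easy to prototype outside the library; the sketch below mirrors the proposal's `create(name)` dispatch but is only an illustration (the class and method names are assumptions, not dependency-injector's actual API):

```python
# Minimal stand-in for the proposed FactoryAggregate provider:
# it keeps a mapping of named factories and dispatches creation by key.
class FactoryAggregate:
    def __init__(self, **factories):
        self._factories = factories

    def create(self, name, *args, **kwargs):
        try:
            factory = self._factories[name]
        except KeyError:
            raise ValueError(f"unknown factory: {name!r}") from None
        return factory(*args, **kwargs)


class Chess:
    def play(self):
        return "Playing chess"


class Checkers:
    def play(self):
        return "Playing checkers"


game_factory = FactoryAggregate(chess=Chess, checkers=Checkers)
print(game_factory.create("chess").play())  # -> Playing chess
```

Families of related object factories, as mentioned at the end of the issue, fall out naturally from this shape: one aggregate per family.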
joeyespo/grip | flask | 367 | Question: navigation | There is a space on the right side (Github Wiki) where the pages are listed. How can I get the same list working on Grip? | open | 2022-08-24T14:08:26Z | 2022-08-24T14:08:26Z | https://github.com/joeyespo/grip/issues/367 | [] | azisaka | 0 |
biosustain/potion | sqlalchemy | 126 | Create Routes with prefilters | Hi,
I'm trying to understand whether there is a way to define a Route on a Resource that is equivalent to a filtered query.
Here is an example:
I have a resource `ExampleResource` with a field `example`, and I'd like to create a custom GET route '/example' that returns only the elements where `example == True`
```python
class ExampleResource(PrincipalResource):
class Schema:
author = fields.ToOne('user')
class Meta:
model = Code
read_only_fields = ['author']
permissions = {
'create': 'yes',
'read': 'user:author',
'update': 'read',
'delete': 'update'
}
@Route.GET
def examples(self):
# Return equivalent to filter example == True
```
| open | 2017-09-15T09:55:13Z | 2017-09-15T09:55:13Z | https://github.com/biosustain/potion/issues/126 | [] | ludusrusso | 0 |
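Setting Flask-Potion's routing machinery aside, the behaviour the question asks for (a GET route equivalent to a fixed filter) reduces to applying a prefilter before returning results; a framework-free sketch, with all names hypothetical:

```python
# Hypothetical sketch: a prefilter applied before returning results,
# mimicking a `GET /example` route that yields only items whose
# `example` flag is True.
def prefiltered(items, **filters):
    """Return items matching all given attribute filters."""
    def matches(item):
        return all(getattr(item, k) == v for k, v in filters.items())
    return [item for item in items if matches(item)]


class Code:
    def __init__(self, name, example):
        self.name = name
        self.example = example


items = [Code("a", True), Code("b", False), Code("c", True)]
examples = prefiltered(items, example=True)
print([c.name for c in examples])  # -> ['a', 'c']
```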
axnsan12/drf-yasg | django | 167 | Serializer Meta class options do not apply to serializer Fields | Hello, I want to implement a Unix-timestamp DateTime field in DRF and serve the documentation using this library, but I have some issues with customizing how the field is shown in Swagger UI.
The first issue is that I can't change the format, title, and description of certain fields, even after following this library's documentation on Read the Docs. Here is a snippet of my code:
```python
class UnixTimeSerializer(serializers.DateTimeField):
def to_representation(self, value):
""" Return epoch time for a datetime object or ``None``"""
from django.utils.dateformat import format
try:
return int(format(value, 'U'))
except (AttributeError, TypeError):
return None
def to_internal_value(self, value):
import datetime
return datetime.datetime.fromtimestamp(int(value))
class Meta:
# i follow this snippet from readthedocs to customize individual serializer
swagger_schema_fields = {
'format': 'integer',
'title': 'Client date time suu',
'description': 'Date time in unix timestamp format',
}
..........some code here
# Serializer below is the one that used in the viewsets
class TransactionSerializer(serializers.ModelSerializer):
transaction_details = TransactionDetailSerializer(many=True)
client_date_time = UnixTimeSerializer()
.......... some code here
```
The second issue: how can I change the example shown when I click "Try it out" in Swagger UI? It still shows Django's native datetime format.
Thank you for your attention. | closed | 2018-07-22T20:49:33Z | 2018-08-08T14:17:54Z | https://github.com/axnsan12/drf-yasg/issues/167 | [] | christopherwira | 5 |
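The field's conversion logic can be checked outside DRF and Django; the sketch below reproduces roughly the same epoch round-trip, using `datetime.timestamp()` in place of Django's `format(value, 'U')` and assuming naive local datetimes:

```python
import datetime

def to_epoch(value):
    """Datetime -> integer Unix timestamp, or None for bad input."""
    try:
        return int(value.timestamp())
    except (AttributeError, TypeError):
        return None

def from_epoch(value):
    """Integer Unix timestamp -> naive local datetime."""
    return datetime.datetime.fromtimestamp(int(value))

dt = datetime.datetime(2018, 7, 22, 12, 0, 0)
epoch = to_epoch(dt)
print(epoch is not None and from_epoch(epoch) == dt)  # -> True
print(to_epoch("not a datetime"))                     # -> None
```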
pytest-dev/pytest-html | pytest | 615 | Add unit tests for encoding | Related #585 | open | 2023-04-02T00:08:07Z | 2023-04-02T00:08:07Z | https://github.com/pytest-dev/pytest-html/issues/615 | [] | BeyondEvil | 0 |
microsoft/nni | machine-learning | 5,653 | Error: /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found | **Describe the issue**:
Hello, I encountered the following error while using NNI 2.10.1:
```
[2023-08-01 14:07:41] Creating experiment, Experiment ID: aj7wd2ey
[2023-08-01 14:07:41] Starting web server...
node:internal/modules/cjs/loader:1187
return process.dlopen(module, path.toNamespacedPath(filename));
^
Error: /lib64/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /home/zzdx/.local/lib/python3.10/site-packages/nni_node/node_modules/sqlite3/lib/binding/napi-v6-linux-glibc-x64/node_sqlite3.node)
at Object.Module._extensions..node (node:internal/modules/cjs/loader:1187:18)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Function.Module._load (node:internal/modules/cjs/loader:822:12)
at Module.require (node:internal/modules/cjs/loader:1005:19)
at require (node:internal/modules/cjs/helpers:102:18)
at Object.<anonymous> (/home/zzdx/.local/lib/python3.10/site-packages/nni_node/node_modules/sqlite3/lib/sqlite3-binding.js:4:17)
at Module._compile (node:internal/modules/cjs/loader:1103:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1157:10)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Function.Module._load (node:internal/modules/cjs/loader:822:12) {
code: 'ERR_DLOPEN_FAILED'
}
Thrown at:
at Module._extensions..node (node:internal/modules/cjs/loader:1187:18)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Module._load (node:internal/modules/cjs/loader:822:12)
at Module.require (node:internal/modules/cjs/loader:1005:19)
at require (node:internal/modules/cjs/helpers:102:18)
at /home/zzdx/.local/lib/python3.10/site-packages/nni_node/node_modules/sqlite3/lib/sqlite3-binding.js:4:17
at Module._compile (node:internal/modules/cjs/loader:1103:14)
at Module._extensions..js (node:internal/modules/cjs/loader:1157:10)
at Module.load (node:internal/modules/cjs/loader:981:32)
at Module._load (node:internal/modules/cjs/loader:822:12)
[2023-08-01 14:07:42] WARNING: Timeout, retry...
[2023-08-01 14:07:43] WARNING: Timeout, retry...
[2023-08-01 14:07:44] ERROR: Create experiment failed
Traceback (most recent call last):
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/connection.py", line 203, in _new_conn
sock = connection.create_connection(
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 790, in urlopen
response = self._make_request(
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 496, in _make_request
conn.request(
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/connection.py", line 395, in request
self.endheaders()
File "/usr/local/python3/lib/python3.10/http/client.py", line 1278, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/python3/lib/python3.10/http/client.py", line 1038, in _send_output
self.send(msg)
File "/usr/local/python3/lib/python3.10/http/client.py", line 976, in send
self.connect()
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/connection.py", line 243, in connect
self.sock = self._new_conn()
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/connection.py", line 218, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f2e65bc04c0>: Failed to establish a new connection: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/zzdx/.local/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
File "/home/zzdx/.local/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /api/v1/nni/check-status (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2e65bc04c0>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/zzdx/search_api/area_api/3_5_1126_1623_711_715-30min_20230801140738/main.py", line 51, in <module>
experiment.run(8080)
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/experiment.py", line 180, in run
self.start(port, debug)
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/experiment.py", line 135, in start
self._start_impl(port, debug, run_mode, None, [])
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/experiment.py", line 103, in _start_impl
self._proc = launcher.start_experiment(self._action, self.id, config, port, debug, run_mode,
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/launcher.py", line 148, in start_experiment
raise e
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/launcher.py", line 126, in start_experiment
_check_rest_server(port, url_prefix=url_prefix)
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/launcher.py", line 196, in _check_rest_server
rest.get(port, '/check-status', url_prefix)
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/rest.py", line 43, in get
return request('get', port, api, prefix=prefix)
File "/home/zzdx/.local/lib/python3.10/site-packages/nni/experiment/rest.py", line 31, in request
resp = requests.request(method, url, timeout=timeout)
File "/home/zzdx/.local/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/zzdx/.local/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/home/zzdx/.local/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/home/zzdx/.local/lib/python3.10/site-packages/requests/adapters.py", line 519, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8080): Max retries exceeded with url: /api/v1/nni/check-status (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f2e65bc04c0>: Failed to establish a new connection: [Errno 111] Connection refused'))
[2023-08-01 14:07:44] Stopping experiment, please wait...
[2023-08-01 14:07:44] Experiment stopped
```
**Environment**:
- NNI version: 2.10.1
- Training service (local|remote|pai|aml|etc): remote
- Client OS: Windows 11
- Server OS (for remote mode only): Centos 7.9
- Python version: 3.10
- PyTorch/TensorFlow version: None
- Is conda/virtualenv/venv used?: no
- Is running in Docker?: no
**Log message**:
- nnimanager.log:
```
[2023-08-01 14:38:48] INFO (nni.experiment) Creating experiment, Experiment ID: s1dz03t2
[2023-08-01 14:38:48] INFO (nni.experiment) Starting web server...
[2023-08-01 14:38:49] WARNING (nni.experiment) Timeout, retry...
[2023-08-01 14:38:50] WARNING (nni.experiment) Timeout, retry...
[2023-08-01 14:38:51] ERROR (nni.experiment) Create experiment failed
[2023-08-01 14:38:51] INFO (nni.experiment) Stopping experiment, please wait...
```
| closed | 2023-08-01T06:42:16Z | 2023-08-01T09:07:31Z | https://github.com/microsoft/nni/issues/5653 | [] | yifan-dadada | 0 |
DistrictDataLabs/yellowbrick | matplotlib | 514 | The Manifold visualizer doesn't work with 'tsne' | **Describe the bug**
calling _fit_transform()_ on a _Manifold_ object fails when using **'tsne'**. It seems to call the _transform()_ function, which doesn't exist on [sklearn.manifold.TSNE](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html)
**To Reproduce**
```python
from yellowbrick.features.manifold import Manifold
X = np.random.rand(250,20)
y = np.random.randint(0,2,20)
visualizer = Manifold(manifold='tsne', target='discrete')
visualizer.fit_transform(X,y)
visualizer.poof()
```
**Traceback**
```
AttributeError Traceback (most recent call last)
<ipython-input-524-025714ae6ba9> in <module>()
5
6 visualizer = Manifold(manifold='tsne', target='discrete')
----> 7 visualizer.fit_transform(X,y)
8 visualizer.poof()
...\lib\site-packages\sklearn\base.py in fit_transform(self, X, y, **fit_params)
518 else:
519 # fit method of arity 2 (supervised transformation)
--> 520 return self.fit(X, y, **fit_params).transform(X)
521
522
...\lib\site-packages\yellowbrick\features\manifold.py in transform(self, X)
328 Returns the 2-dimensional embedding of the instances.
329 """
--> 330 return self.manifold.transform(X)
331
332 def draw(self, X, y=None):
AttributeError: 'TSNE' object has no attribute 'transform'
```
**Desktop**
- OS: [Windows 10]
- Python Version [3.4.5]
- Scikit-Learn version is 0.19.1
- Yellowbrick Version [0.8]
| closed | 2018-07-20T15:22:19Z | 2018-07-20T15:31:25Z | https://github.com/DistrictDataLabs/yellowbrick/issues/514 | [] | imad24 | 2 |
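The traceback boils down to `sklearn.manifold.TSNE` exposing only `fit_transform` and no separate `transform`. A version-agnostic guard (sketched here with dummy estimators rather than real sklearn objects) is to fall back when `transform` is missing:

```python
# Sketch of a transform-with-fallback wrapper: use `transform` when the
# estimator has one, otherwise fall back to `fit_transform` (as TSNE
# requires). The dummy classes stand in for sklearn estimators.
def embed(estimator, X):
    if hasattr(estimator, "transform"):
        return estimator.transform(X)
    return estimator.fit_transform(X)


class HasTransform:
    def transform(self, X):
        return [("t", x) for x in X]


class TSNELike:  # like sklearn's TSNE: no separate transform step
    def fit_transform(self, X):
        return [("ft", x) for x in X]


print(embed(HasTransform(), [1, 2]))  # -> [('t', 1), ('t', 2)]
print(embed(TSNELike(), [1, 2]))      # -> [('ft', 1), ('ft', 2)]
```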
SkalskiP/courses | nlp | 31 | Please add a license? | Would love to add/reuse this repo in other projects, but without a license I am unable to use it. Please consider adding a license? | open | 2023-05-25T14:11:50Z | 2023-05-29T15:16:16Z | https://github.com/SkalskiP/courses/issues/31 | [
"question"
] | tosaddler | 2 |
alteryx/featuretools | data-science | 2,071 | Add Yeo-Johnson primitive | - https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.yeojohnson.html | open | 2022-05-11T20:35:55Z | 2023-06-26T19:11:04Z | https://github.com/alteryx/featuretools/issues/2071 | [] | gsheni | 1 |
biolab/orange3 | data-visualization | 6,855 | K-Means widget hangs (intermittent, multiple number of clusters) |
**What's wrong?**
The k-means widget sometimes hangs while running the multiple k fitting option.
The widget simply sits in the processing progress status, and will never recover from this state: Orange must be quit.
This happens on medium size datasets (~10k rows, 10-250 features) and larger. Clustering is generally quite fast for these datasets unless this bug happens.
Once the bug has been triggered, it persists for the user session (survives quit / restart of Orange) and most times can only be fixed with system reboot (or perhaps user logout).
I've connected a py-spy profiler to the hung application and dumped the following (slightly redacted) call trace:
[hung-k-means_edited.txt](https://github.com/user-attachments/files/16270575/hung-k-means_edited.txt)
**How can we reproduce the problem?**
We haven't been able to create an .ows file that reliably reproduces this problem, despite quite some effort.
**What's your environment?**
- Operating system: Linux, happens on Debian 12, Almalinux 9, CentOS 8
- Orange version: Most recent versions, including at least 3.36.2
- How you installed Orange: both conda environment and pip in a venv
| open | 2024-07-17T18:49:41Z | 2024-07-19T16:40:57Z | https://github.com/biolab/orange3/issues/6855 | [
"bug report"
] | stuart-cls | 2 |
FlareSolverr/FlareSolverr | api | 716 | sessions.create returns an error | ### Have you checked our README?
- [X] I have checked the README
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version:
- Last working FlareSolverr version:
- Operating system:
- Are you using Docker: [yes/no]
- FlareSolverr User-Agent (see log traces or / endpoint):
- Are you using a VPN: [yes/no]
- Are you using a Proxy: [yes/no]
- Are you using Captcha Solver: [yes/no]
- If using captcha solver, which one:
- URL to test this issue:
```
### Description
I am running the Docker container on my Mac:
docker run -d \
--name=flaresolverr \
-p 8191:8191 \
-e LOG_LEVEL=info \
--restart unless-stopped \
ghcr.io/flaresolverr/flaresolverr:latest
I try to create a session from my terminal:
curl -L -X POST 'http://localhost:8191/v1' \
-H 'Content-Type: application/json' \
-d '{
"cmd": "sessions.create",
"session":"123"
}'
The result is:
{"status": "error", "message": "Error: Not implemented yet.", "startTimestamp": 1677486520765, "endTimestamp": 1677486520765, "version": "3.0.2"}%
sessions.list and sessions.destroy have the same problem;
only request.get and request.post work.
### Logged Error Messages
```text
result is:
{"status": "error", "message": "Error: Not implemented yet.", "startTimestamp": 1677486520765, "endTimestamp": 1677486520765, "version": "3.0.2"}%
```
### Screenshots
_No response_ | closed | 2023-02-27T08:33:19Z | 2023-04-16T11:39:03Z | https://github.com/FlareSolverr/FlareSolverr/issues/716 | [
"duplicate"
] | coolberwin | 4 |
waditu/tushare | pandas | 941 | Feature request: when downloading stock data, provide a parameter to download index constituents, e.g. SSE 50, CSI 300, CSI 500, and other concept constituents | closed | 2019-03-01T02:52:43Z | 2019-03-01T11:07:54Z | https://github.com/waditu/tushare/issues/941 | [] | czr22 | 2 | |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 484 | [BUG]: <Ethnicity is being put down instead of State in applications> | ### Describe the bug
As I'm watching the application run and fill out Indeed applications, it is entering the ethnicity "Hispanic" instead of the state, which should be "California".
### Steps to reproduce
Putting "Hispanic" in ethnicity field in `data_folder/plain_text_resume`
### Expected behavior
California
### Actual behavior
Hispanic
### Branch
main
### Branch name
_No response_
### Python version
3.10
### LLM Used
ChatGPT
### Model used
4-o mini
### Additional context
_No response_ | closed | 2024-10-06T21:11:21Z | 2024-11-19T23:53:38Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/484 | [
"bug"
] | cwort40 | 5 |
Johnserf-Seed/TikTokDownload | api | 458 | I'm Very Beginner | Sorry if this is a very basic question, but how do I get past a display like this?

| closed | 2023-06-22T05:54:22Z | 2023-07-06T05:50:39Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/458 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | radenhendriawan | 8 |
xlwings/xlwings | automation | 2,054 | Is there any way to use xlwings in Azure Batch? | I ran into a problem when running a Python script that uses xlwings.
The error is:
**pywintypes.com_error: (-2147024891, 'Access is denied.', None, None)**
What is causing this issue, and how can I fix it? Thanks! | closed | 2022-10-13T11:19:45Z | 2023-01-18T19:33:46Z | https://github.com/xlwings/xlwings/issues/2054 | [] | jnw1 | 1 |
PaddlePaddle/ERNIE | nlp | 911 | text match model, an illegal memory access was encountered. | Error message:
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1492: UserWarning: Skip loading for concat_fc.weight. concat_fc.w is not found in the provided dict.
warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1492: UserWarning: Skip loading for concat_fc.bias. concat.b is not found in the provided dict.
warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1492: UserWarning: Skip loading for output_layer.weight. linear_74.w_0 is not found in the provided dict.
warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py:1492: UserWarning: Skip loading for output_layer.bias. linear_74.b_0 is not found in the provided dict.
warnings.warn(("Skip loading for {}. ".format(key) + str(err)))
Traceback (most recent call last):
File "run_trainer.py", line 124, in <module>
run_trainer(_params)
File "run_trainer.py", line 101, in run_trainer
trainer.do_train()
File "/home/aistudio/work/ERNIE/applications/tasks/text_matching/trainer/custom_dynamic_trainer.py", line 55, in do_train
forward_out = self.model_class(example, phase=InstanceName.TRAINING)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 930, in __call__
return self._dygraph_call_func(*inputs, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
outputs = self.forward(*inputs, **kwargs)
File "/home/aistudio/work/ERNIE/applications/tasks/text_matching/model/ernie_matching_siamese_pointwise.py", line 68, in forward
task_ids=text_task_ids)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 930, in __call__
return self._dygraph_call_func(*inputs, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
outputs = self.forward(*inputs, **kwargs)
File "../../../erniekit/modules/ernie.py", line 169, in forward
sent_embedded = self.sent_emb(sent_ids)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 930, in __call__
return self._dygraph_call_func(*inputs, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 915, in _dygraph_call_func
outputs = self.forward(*inputs, **kwargs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/layer/common.py", line 1469, in forward
name=self._name)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/functional/input.py", line 206, in embedding
'remote_prefetch', False, 'padding_idx', padding_idx)
OSError: (External) CUDA error(700), an illegal memory access was encountered.
[Hint: 'cudaErrorIllegalAddress'. The device encountered a load or store instruction on an invalid memory address. This leaves the process in an inconsistentstate and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched. ] (at /paddle/paddle/phi/backends/gpu/cuda/cuda_info.cc:258)
[operator < lookup_table_v2 > error]
I referred to https://github.com/PaddlePaddle/ERNIE/issues/466, but that issue is quite old.
The environment is an AI Studio V100 instance with paddlepaddle-gpu==2.3.2 and paddlenlp==2.3.2 | open | 2023-07-13T03:29:26Z | 2023-09-17T00:38:54Z | https://github.com/PaddlePaddle/ERNIE/issues/911 | [
"wontfix"
] | shikeno | 1 |
sqlalchemy/alembic | sqlalchemy | 347 | Default value doesn't work on Boolean | **Migrated issue, originally created by Mike Yao ([@yaoweizhen](https://github.com/yaoweizhen))**
I think it's caused by 'False' in Python and 'false' in Postgresql.
| closed | 2015-12-13T19:41:41Z | 2015-12-14T14:38:23Z | https://github.com/sqlalchemy/alembic/issues/347 | [
"bug"
] | sqlalchemy-bot | 5 |
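The suspected cause is a literal mismatch: Python renders booleans as `False`/`True`, while PostgreSQL expects lowercase `false`/`true` in a `DEFAULT` clause. A tiny sketch of the normalization step a migration tool needs (the helper name is made up; SQLAlchemy itself offers `sa.false()`/`sa.true()` constructs for this purpose):

```python
def pg_bool_literal(value):
    """Render a Python bool as a PostgreSQL boolean literal."""
    if not isinstance(value, bool):
        raise TypeError("expected a bool")
    return "true" if value else "false"


# Python's own rendering is capitalized and invalid as a PG literal:
print(str(False))              # -> False
print(pg_bool_literal(False))  # -> false
```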
jacobgil/pytorch-grad-cam | computer-vision | 63 | the gb image on my personal network | Hi, I am trying to apply this model to my own network.
My network has two inputs of different sizes. When I visualize the guided-backprop (gb) image of the big input, only a region the same size as the smaller input shows any gradient.
I would like to know how to solve this problem. Thank you very much.
```python
def __call__(self, input_small, input_big, target_category=None):
if self.cuda:
input_small = input_small.cuda()
input_big = input_big.cuda()
input_small= input_small.requires_grad_(True)
input_big= input_big.requires_grad_(True)
cls_score, offsets = self.forward(input_small, input_big)
B, _, H, W = cls_score.shape
cls_score = cls_score.reshape(B, -1) # 1,5329
if target_category == None:
target_category = np.argmax(cls_score.cpu().data.numpy())
one_hot = np.zeros((1, cls_score.size()[-1]), dtype=np.float32)
one_hot[0][target_category] = 1
one_hot = torch.from_numpy(one_hot).requires_grad_(True)
if self.cuda:
one_hot = one_hot.cuda()
one_hot = torch.sum(one_hot * cls_score)
one_hot.backward(retain_graph=True)
output = input_big.grad.cpu().data.numpy()
# output2 = input_small.grad.cpu().data.numpy() #(1, 512, 512)
output = output[0, :, :, :] #(1, 800, 800)
#output2=output2[0, :, :, :] #(1, 512, 512)
return output #,output2
```
| closed | 2021-03-10T02:27:36Z | 2021-05-14T10:36:02Z | https://github.com/jacobgil/pytorch-grad-cam/issues/63 | [] | Songtingt | 3 |
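One way to debug the reported behaviour is to locate exactly which region of `input_big.grad` receives any signal; the helper below does that for a 2-D grid, using plain Python lists as a stand-in for the gradient array:

```python
def nonzero_bbox(grad):
    """Bounding box (top, left, bottom, right) of non-zero entries
    in a 2-D grid, or None if everything is zero."""
    rows = [i for i, row in enumerate(grad) if any(v != 0 for v in row)]
    cols = [j for j in range(len(grad[0]))
            if any(row[j] != 0 for row in grad)]
    if not rows or not cols:
        return None
    return (rows[0], cols[0], rows[-1], cols[-1])


# 4x4 "gradient" with signal only in the top-left 2x2 corner:
g = [[1, 2, 0, 0],
     [3, 4, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
print(nonzero_bbox(g))  # -> (0, 0, 1, 1)
```

If the bounding box matches the smaller input's size, that points at the network only connecting the big input through a crop-sized path.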
waditu/tushare | pandas | 851 | Main business composition has partially missing data, API: fina_mainbz | ```
import tushare as ts
pro = ts.pro_api('xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
df = pro.fina_mainbz(ts_code='300417.SZ', period='20171231', type='P')
ts_code end_date bz_item bz_sales bz_profit bz_cost curr_type
0 300417.SZ 20171231 机动车排放物检测系统 57075272.62 NaN NaN CNY
1 300417.SZ 20171231 机动车排放物检测仪器 25199631.79 NaN NaN CNY
2 300417.SZ 20171231 前照灯检测仪 9705937.18 NaN NaN CNY
3 300417.SZ 20171231 机动车安全检测系统 43593076.76 NaN NaN CNY
4 300417.SZ 20171231 其它机动车检测设备 41536573.57 NaN NaN CNY
5 300417.SZ 20171231 组件及配件 8978229.51 NaN NaN CNY
6 300417.SZ 20171231 机动车排放物检测系统 57075272.62 21090812.30 35984460.32 CNY
7 300417.SZ 20171231 机动车排放物检测仪器 25199631.79 14310442.18 10889189.61 CNY
8 300417.SZ 20171231 机动车安全检测系统 43593076.76 16397226.67 27195850.09 CNY
9 300417.SZ 20171231 其它机动车检测设备 41536573.57 18251622.26 23284951.31 CNY
```
The 2017 product-segment data for Nanhua Instruments is partially missing; the same data displays correctly in both TDX (通达信) and THS (同花顺):
```
通达信:
2017年报主营
┌─────────────┬─────┬────┬─────┬────┐
| 业务名称 |营业收入( | 收入比 |营业利润( | 毛利率 |
| | 万元) | | 万元) | |
├─────────────┼─────┼────┼─────┼────┤
|专业仪器仪表制造业(行业) | 18608.87| 100.00%| 8214.23| 44.14%|
|合计(行业) | 18608.87| 100.00%| 8214.23| 44.14%|
├─────────────┼─────┼────┼─────┼────┤
|机动车排放物检测系统(产品)| 5707.53| 30.67%| 2109.08| 36.95%|
|机动车排放物检测仪器(产品)| 2519.96| 13.54%| 1431.04| 56.79%|
|前照灯检测仪(产品) | 970.59| 5.22%| -| -|
|机动车安全检测系统(产品) | 4359.31| 23.43%| 1639.72| 37.61%|
|其它机动车检测设备(产品) | 4153.66| 22.32%| 1825.16| 43.94%|
|组件及配件(产品) | 897.82| 4.82%| -| -|
|合计(产品) | 18608.87| 100.00%| 8214.23| 44.14%|
└─────────────┴─────┴────┴─────┴────┘
``` | closed | 2018-11-30T00:32:38Z | 2018-12-04T02:02:59Z | https://github.com/waditu/tushare/issues/851 | [] | chinobing | 5 |
ymcui/Chinese-BERT-wwm | tensorflow | 82 | The model download links for Chinese-BERT-wwm and Chinese-PreTrained-XLNet are wrong | The iFLYTEK Cloud download links for the base versions of the two models are swapped with each other. | closed | 2020-01-01T22:29:18Z | 2020-01-02T00:41:31Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/82 | [] | fungxg | 1 |
explosion/spaCy | data-science | 12,062 | Error while loading spacy model from the pickle file |
I am getting the following error while loading a spaCy NER model from a pickle file.
`self.model = pickle.load(open(model_path, 'rb'))`
```
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.2.3\plugins\python-ce\helpers\pydev\pydevd.py", line 1496, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.2.3\plugins\python-ce\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:\Projects\pythonworkspace\invoice_processing_prototype\invoice_data_extractor_notebook.py", line 101, in <module>
extractor = InvoiceDataExtractor(model_dir_path, input_file_paths[0], config_path)
File "C:\Projects\pythonworkspace\invoice_processing_prototype\invoicedataextractor.py", line 27, in __init__
self.spatial_extractor = SpatialExtractor(model_dir_path, config_path)
File "C:\Projects\pythonworkspace\invoice_processing_prototype\spatialextractor.py", line 54, in __init__
self.inv_date = Model(f"{self.model_dir_path}\\invoice_date_with_corrected_training_data_and_line_seperator_21_07_2022.pkl",
File "C:\Projects\pythonworkspace\invoice_processing_prototype\spatialextractor.py", line 34, in __init__
self.model = pickle.load(open(model_path, 'rb'))
File "stringsource", line 6, in spacy.pipeline.trainable_pipe.__pyx_unpickle_TrainablePipe
_pickle.PickleError: Incompatible checksums (0x417ddeb vs (0x61fbab5, 0x27e6ee8, 0xbe56bc9) = (cfg, model, name, scorer, vocab))
```
## How to reproduce the behaviour
I trained the NER model with spaCy version `3.1.2` and recently upgraded spaCy to the latest `3.4`. The error is likely due to a version incompatibility. If that is the case, can someone confirm whether a spaCy NER model trained on version '3.1.2' can be loaded with the upgraded spaCy '3.4'?
## Your Environment
* Operating System: Windows 10
* Python Version Used: 3.10
* spaCy Version Used while training: 3.1.2
* spaCy Version Used while prediction: 3.4
* Environment Information:
| closed | 2023-01-05T13:46:04Z | 2023-01-10T06:51:39Z | https://github.com/explosion/spaCy/issues/12062 | [
"windows",
"feat / serialize"
] | devendrasoni18 | 2 |
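Independent of this specific checksum error, pickling a trained pipeline across spaCy versions is fragile; spaCy's supported route is `nlp.to_disk(path)` for saving and `spacy.load(path)` for loading. The duck-typed preference below sketches that idea with a dummy pipeline (the helper and dummy class are illustrations, not spaCy API):

```python
import os
import pickle
import tempfile

def save_pipeline(pipe, path):
    """Prefer a library's own serializer (e.g. spaCy's to_disk) over pickle."""
    if hasattr(pipe, "to_disk"):
        pipe.to_disk(path)
        return "to_disk"
    with open(path, "wb") as f:
        pickle.dump(pipe, f)
    return "pickle"


class DummyPipeline:
    """Stand-in with a spaCy-style to_disk method."""
    def to_disk(self, path):
        os.makedirs(path, exist_ok=True)
        with open(os.path.join(path, "meta.txt"), "w") as f:
            f.write("saved")


tmp = tempfile.mkdtemp()
print(save_pipeline(DummyPipeline(), os.path.join(tmp, "model")))  # -> to_disk
print(save_pipeline([1, 2, 3], os.path.join(tmp, "model.pkl")))    # -> pickle
```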
dynaconf/dynaconf | flask | 1,016 | Can dynaconf parse YAML strings into settings? | For example, I have a string like this (it is sensitive and cannot be dumped to a file):
```python
config_str = """
development:
configer:
host: '0.0.0.0'
port: 30811
production:
configer:
host: '0.0.0.0'
port: 30811
"""
```
Does dynaconf provide an interface to build settings from a string input?
| closed | 2023-10-23T11:28:53Z | 2023-10-23T12:15:43Z | https://github.com/dynaconf/dynaconf/issues/1016 | [
"question"
] | zhang-xh95 | 2 |
Nemo2011/bilibili-api | api | 86 | [Question] The live danmaku API may have a problem | **Python version:** 3.8
**Module version:** 12.5.2
**Runtime environment:** Linux
---
Today, while using LiveDanmuku, it suddenly got stuck at fetching the real room ID. stderr.log:
[2022-10-08 13:12:40,289][WARNING]创建主数据库容器成功
[2022-10-08 13:12:40,718][WARNING]创建缓存数据库容器成功
[2022-10-08 13:12:40,784][WARNING]获取用户-----------会话成功
[-----------][2022-10-08 13:12:40,790][INFO] 准备连接直播间 --------
[-----------][2022-10-08 13:12:40,791][DEBUG] 正在获取真实房间号
在怀疑是不是存在接口更新导致失效,还是说ip被墙这种,但是测试过的其他api一律有效,唯独卡死在这里的获取。 | closed | 2022-10-08T13:14:30Z | 2022-10-08T13:56:44Z | https://github.com/Nemo2011/bilibili-api/issues/86 | [] | cabinary | 2 |
nschloe/tikzplotlib | matplotlib | 3 | Numbers should be printed in scientific (exponent) notation | Problem:
Numbers are currently printed in the general format `%g`. As a result very large decimals are printed in the tex file that matplotlib2tikz produces.
(TeXLive 2010, pgfplots 1.4)
Example that exhibits the problem:
from matplotlib.pylab import *
from numpy import *
from matplotlib2tikz import save as tikz_save
x = logspace(0,6,num=5)
loglog(x,x**2)
    tikz_save('logparabola.tex', figureheight=r'0.6\textwidth', figurewidth=r'0.8\textwidth', strict=True)
This results in a tex file that contains very large decimal coordinates and causes TeX to choke with "Number too big" or "Dimension too large" errors.
Proposed (?) solution:
Use `%e` (e.g. `%.10e`) instead of `%.15g` when printing ticks or coordinates (quite possibly graphics as well, I haven't tested that). For example, in revision `55d56869` in `_draw_line2d`, lines 659, 661 and 664 read `%.15g`. Changing `%.15g` to `%.10e` fixes the "Dimension too large" errors. Also, in `_get_ticks`, line 408 should read `%.10e` instead of `%s` for the very same reason.
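The difference between the two format codes is easy to see with plain Python string formatting: `%g` only falls back to exponent notation for very large magnitudes, so mid-range coordinates (such as 10^12, far beyond what a TeX dimension can hold) are still emitted as long decimals, while `%e` always stays in exponent form:

```python
# %g keeps full decimal form well past what TeX dimensions can hold ...
print("%.15g" % 1e12)   # 1000000000000
# ... while %e always uses exponent notation.
print("%.10e" % 1e12)   # 1.0000000000e+12
print("%.10e" % 1e6)    # 1.0000000000e+06
```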
| closed | 2010-10-14T10:59:51Z | 2010-10-14T12:09:10Z | https://github.com/nschloe/tikzplotlib/issues/3 | [] | foucault | 2 |
Lightning-AI/pytorch-lightning | data-science | 20,016 | Gathering a list of strings from multiple devices using Fabric | ### Bug description
I have a list of strings on each device in multi-GPU evaluation, and I want to collect them from all devices into a single list that is available on every device.
```
m_preds = fabric.all_gather(all_preds)
m_gt = fabric.all_gather(all_gt)
```
when I try the above code (`all_preds` and `all_gt` are lists of strings), `m_preds` and `m_gt` come back as the same per-device lists as `all_preds` and `all_gt`, i.e. nothing is actually gathered across devices. Am I doing something wrong?
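For gathering Python strings (rather than tensors), one option — an assumption on my part, not Lightning's documented behaviour for `fabric.all_gather` — is to drop down to `torch.distributed.all_gather_object`, which pickles arbitrary objects across ranks. Shown here with a single-process gloo group so it runs on CPU without multiple GPUs; in a real Fabric run the process group already exists:

```python
import os
import tempfile
import torch.distributed as dist

# Single-process "gloo" group so this snippet runs on CPU with no GPUs.
init_file = os.path.join(tempfile.mkdtemp(), "rendezvous")
dist.init_process_group(
    backend="gloo",
    init_method=f"file://{init_file}",
    rank=0,
    world_size=1,
)

all_preds = ["cat", "dog"]  # this rank's list of strings

# all_gather_object pickles arbitrary Python objects from every rank.
gathered = [None] * dist.get_world_size()
dist.all_gather_object(gathered, all_preds)
merged = [p for rank_preds in gathered for p in rank_preds]

dist.destroy_process_group()
print(merged)  # with world_size=1 this is just ['cat', 'dog']
```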
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_
cc @borda | open | 2024-06-26T01:28:46Z | 2024-06-27T18:44:48Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20016 | [
"question",
"docs"
] | Haran71 | 1 |
onnx/onnx | machine-learning | 6,172 | How to use onnx.utils.extract_model to extract a child ONNX model larger than 2GB? |
```python
input_name = "sample"  # '/conv_in/Conv_output_0'
output_name = "/down_blocks.1/resnets.0/norm1/Reshape_output_0"
onnx.utils.extract_model(original_model_path, extracted_model_path, [input_name], [output_name])
```

```
onnx.utils.extract_model(original_model_path, extracted_model_path, [input_name], [output_name])
  File "/home/tiger/.local/lib/python3.10/site-packages/onnx/utils.py", line 209, in extract_model
    e = Extractor(model)
  File "/home/tiger/.local/lib/python3.10/site-packages/onnx/utils.py", line 16, in __init__
    self.model = onnx.shape_inference.infer_shapes(model)
  File "/home/tiger/.local/lib/python3.10/site-packages/onnx/shape_inference.py", line 45, in infer_shapes
    model_str = model if isinstance(model, bytes) else model.SerializeToString()
ValueError: Message onnx.ModelProto exceeds maximum protobuf size of 2GB: 10275992708
```
| closed | 2024-06-12T07:42:11Z | 2024-06-20T08:52:06Z | https://github.com/onnx/onnx/issues/6172 | [
"question"
] | Lenan22 | 2 |
man-group/arctic | pandas | 733 | Correct way to use VersionStore? | I have read the available documentation (even watched the presentations) and skimmed through the code but I am still not sure if I understand how to achieve semantics of patching timeseries data in VersionStore.
Let's say that I store large timeseries under one symbol - it has a large number of datapoints indexed by their datetime info, possibly from many years. I understand, that in the background, data is split to chunks/segments by the datetime and associated with one version. Now I want to replace certain existing datapoints with their corrections - lets say I want to patch datapoint with timestamp `j`.
Illustration:
```
original timeseries patch data with applied patch
date data date data date data
2016-01-01 1 2016-01-02 42 2016-01-01 1
2016-01-02 2 2016-01-02 42
2016-01-03 3 2016-01-03 3
... ...
```
Now if I used `write`, the new version would contain only the patch, unless I inserted the whole patched dataset again, which I want to avoid since the dataset is too large and I don't want to transmit and store duplicates. Of course, I want to retain the pre-correction data as a previous version. How do I accomplish such a write? What would happen internally if I used `append` with `prune_previous_version=False` to write the patch? Is such an operation even legal? (The name `append` confuses me a bit here, since I am not appending new data.)
Let's say I used some correct method of writing and now I want to read the patched data. Given the interval `<j-5,j+5)` how can I read original data with all applied patches from this interval? The `from_version` parameter in `read` seems promising, but does it actually apply all the patches? In case of many subsequent patches, are all of them read and aggregated in memory or is there some trick that minimizes read amplification by avoiding reading a large number of small documents containing unneeded patches?
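To make the merge-on-read semantics being asked about concrete, here is a tiny pure-Python illustration; this is only a sketch of the desired behaviour, not Arctic's actual API or storage layout:

```python
# Base timeseries and a later correction, keyed by timestamp.
base = {"2016-01-01": 1, "2016-01-02": 2, "2016-01-03": 3}
patch = {"2016-01-02": 42}

def read_patched(base, patches):
    """Merge-on-read: later patches override earlier datapoints."""
    merged = dict(base)
    for p in patches:
        merged.update(p)  # later patches win
    return merged

view = read_patched(base, [patch])
print(view)  # {'2016-01-01': 1, '2016-01-02': 42, '2016-01-03': 3}
```

The open question is whether Arctic can do this merge lazily over an interval, instead of materialising every patch version in memory.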
I am sorry for such a battery of questions but I am sure most of these things are trivial with enough knowledge of Arctic internals.
Thanks for help! | closed | 2019-03-04T11:35:22Z | 2019-09-08T14:56:56Z | https://github.com/man-group/arctic/issues/733 | [
"VersionStore"
] | FilipJanitor | 6 |
mage-ai/mage-ai | data-science | 5,078 | [BUG] Git Stage / un-stage / commit API error - reproducible | ### Mage version
9.70
### Describe the bug
This is happening in two Mage Multi Project EKS environments
- When staging files from the UI, nothing happens
- I can stage in the terminal.
- Once staged from the terminal, I cannot un-stage from the UI
- Once staged from the terminal, I cannot commit
### To reproduce
- Create a new Empty Folder > Register it as a project
- Switch to the Project in question I just created
- From Version Control: Initial Git
- From Version Control: Add remote
- In Settings Set : Authentication type Remote Repo URL, User Name, Access Token
- From Version Control: select remote > Pull > all branches
- From Version Control: switched to desired branch (feature/mageinit)
- Edit any file or create any file in files
- From Version Control: attempt to commit file - nothing happens
### Expected behavior
_No response_
### Screenshots





### Operating system
_No response_
### Additional context
_No response_ | open | 2024-05-16T22:05:46Z | 2024-05-16T22:05:46Z | https://github.com/mage-ai/mage-ai/issues/5078 | [
"bug"
] | Arthidon | 0 |
autokey/autokey | automation | 142 | Dependencies for autokey v0.94.1 | Hello.
Thank you for your effort. I really appreciate it.
I tried to upload the newest autokey v0.94.1 to my PPA (ppa:nemonein/tailored) for Bionic (Ubuntu), but it failed.
After a few hours of struggle(;-0), I've found a way to fix it.
debian/control file should contain,
```
Source: autokey
Section: utils
Priority: optional
Maintainer: Troy C. (Ubuntu PPA) <ubuntu.troxor@xoxy.net>
Build-Depends:
python3-all,
python3-setuptools,
python3-dbus,
dh-python,
debhelper,
python3-xlib,
python3-gi,
gir1.2-gtk-3.0
```
```python3-xlib, python3-gi, gir1.2-gtk-3.0``` have to be added. Otherwise, the Ubuntu PPA's build system can not build it.
By the way, I removed autokey-qt package because it does not work any more.
One more thing I'd like to say.
In the About menu, it still shows the version number as '0.94.0'.
Could that be set to '0.94.1'?

| closed | 2018-05-12T08:46:18Z | 2018-07-30T20:11:21Z | https://github.com/autokey/autokey/issues/142 | [] | nemonein | 2 |
dpgaspar/Flask-AppBuilder | flask | 1,418 | Chart is not visible | Chart is not visible
I cannot see the chart at all. The same happens on the demo site (http://flaskappbuilder.pythonanywhere.com/).
In the Chrome developer console, the following error occurs:
google_charts.js:21 GET https://www.google.com/uds/?file=visualization&v=1&packages=corechart net::ERR_ABORTED 404 | closed | 2020-07-02T02:13:09Z | 2020-07-02T07:34:30Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1418 | [] | kmorkah | 1 |
plotly/dash | flask | 2,641 | [BUG] Reset View Button in Choropleth Mapbox Shows Old Data | Hello,
I've built a dash app (https://www.greenspace.city), which uses the Plotly Choropleth Mapbox to visualise spatial data. The map is centered on the selected suburb (dropdown value).
Unfortunately, when I switch the data (i.e. select a different suburb or city from the dropdown) and hit the reset view button (on the Choropleth Mapbox), it takes the user back to the original map center.
How do I stop the Choropleth Mapbox from caching the old data?
```
dash 2.12.1
plotly 5.16.1
```
| closed | 2023-09-12T07:31:44Z | 2024-07-25T13:42:09Z | https://github.com/plotly/dash/issues/2641 | [] | masands | 1 |
Gozargah/Marzban | api | 1,303 | Nodes Usage | In the dev version, the Nodes Usage section is not displayed.

| closed | 2024-09-05T00:06:35Z | 2024-09-05T07:27:06Z | https://github.com/Gozargah/Marzban/issues/1303 | [] | sirvan1133 | 4 |
nteract/papermill | jupyter | 327 | Plugin 'jupyterlab-celltags' failed to activate. | Installed JupyterLab v1.0.0a1
Installed the JupyterLab celltags extension, celltags v0.1.4
I am getting the error "Plugin 'jupyterlab-celltags' failed to activate."
I couldn't see 'Add Tag' in Tools.
The jupyterlab-celltags extension is not compatible with the latest JupyterLab v1.0.0a1.
Please look into the issue.
| closed | 2019-03-06T08:02:59Z | 2019-03-26T17:34:48Z | https://github.com/nteract/papermill/issues/327 | [] | rekham23 | 5 |
bendichter/brokenaxes | matplotlib | 121 | Group view error in Radis | AttributeError Traceback (most recent call last)
Cell In[10], line 49
40 fit_properties = {
41 "method": "least_squares", # Preferred fitting method from the 17 confirmed methods of LMFIT stated in week 4 blog. By default, "leastsq".
42 "fit_var": "transmittance", # Spectral quantity to be extracted for fitting process, such as "radiance", "absorbance", etc.
(...)
45 "tol": 1e-15, # Fitting tolerance, only applicable for "lbfgsb" method.
46 }
48 # Conduct the fitting process!
---> 49 s_best, result, log = fit_spectrum(
50 s_exp=s_experimental, # Experimental spectrum.
51 fit_params=fit_parameters, # Fit parameters.
52
53 model=experimental_conditions, # Experimental ground-truths conditions.
54 pipeline=fit_properties, # Fitting pipeline references.
55 fit_kws={"gtol": 1e-12},
56 )
58 # Now investigate the result logs for additional information about what's going on during the fitting process
60 print("\nResidual history: \n")
File ~\AppData\Local\anaconda3\envs\radis_env\lib\site-packages\radis\tools\new_fitting.py:935, in fit_spectrum(s_exp, fit_params, model, pipeline, bounds, input_file, verbose, show_plot, fit_kws)
931 # PLOT THE DIFFERENCE BETWEEN THE TWO
932
933 # If show_plot = True by default, show the diff plot
934 if show_plot:
--> 935 plot_diff(
936 s_data,
937 s_result,
938 fit_var,
939 method=["diff", "ratio"],
940 show=True,
941 )
943 if verbose:
944 print(
945 "\n======================== END OF FITTING PROCESS ========================\n"
946 )
File ~\AppData\Local\anaconda3\envs\radis_env\lib\site-packages\radis\spectrum\compare.py:869, in plot_diff(s1, s2, var, wunit, Iunit, resample, method, diff_window, show_points, label1, label2, figsize, title, nfig, normalize, yscale, verbose, save, show, show_residual, lw_multiplier, diff_scale_multiplier, discard_centile, plot_medium, legendargs, show_ruler)
867 for i in range(len(methods)):
868 ax1i = fig.add_subplot(gs[i + 1])
--> 869 ax1i.get_shared_x_axes().join(ax0, ax1i)
870 ax1i.ticklabel_format(useOffset=False)
871 ax1.append(ax1i)
AttributeError: 'GrouperView' object has no attribute 'join'
Could someone help me with this error. I am stuck at this point
| open | 2024-08-13T08:09:59Z | 2024-09-16T19:53:46Z | https://github.com/bendichter/brokenaxes/issues/121 | [] | Abhay-becks | 1 |
kymatio/kymatio | numpy | 318 | Python 2.7 compatibility? | This is something that may simplify adoption in certain environments. Obviously, making this work is not a trivial amount of effort, so we need to have a discussion on the pros and cons. | closed | 2019-01-22T10:35:03Z | 2019-02-25T15:34:22Z | https://github.com/kymatio/kymatio/issues/318 | [
"enhancement",
"management",
"API"
] | janden | 9 |
scikit-optimize/scikit-optimize | scikit-learn | 379 | Expose transform to the public API of dimensions and allow it to be a callable | Currently we allow each dimension to be a `(lower_bound, upper_bound, prior)` tuple where prior can be "uniform" or "log-uniform". In order to allow more flexibilty, we can allow prior to be a callable that maps from the input space to some arbitrary space. | closed | 2017-05-24T01:58:36Z | 2017-05-26T07:54:32Z | https://github.com/scikit-optimize/scikit-optimize/issues/379 | [
"API"
] | MechCoder | 6 |
scanapi/scanapi | rest-api | 274 | Lint Action failing because of a different Black version | `lgeiger/black-action` does not permit setting a specific version for Black. The action always installs it using `pip install black`. See the code [here](https://github.com/lgeiger/black-action/blob/master/Dockerfile#L12)
For our project we pinned the version to [19.10b0](https://github.com/scanapi/scanapi/blob/f1a3b20892a3356f07a8688a63ee51c0c7bb3291/pyproject.toml#L33)
Because of that, our checks started failing.

https://github.com/scanapi/scanapi/pull/272/checks?check_run_id=1109422609
| closed | 2020-09-13T22:32:15Z | 2020-09-14T18:03:33Z | https://github.com/scanapi/scanapi/issues/274 | [
"Bug"
] | camilamaia | 1 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,723 | [Feature Request]: Increase image input size limit in imgtoimg | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
There is a size limit when sending a file to img2img; I believe it is a maximum of 300 MB. I also believe it comes from the Gradio interface, because even after changing the possible limits in the Gradio files etc. we still have this limit imposed. Could anyone tell us where this limit is defined so it can be changed? It would be incredible, as it is currently limiting a number of cool things from happening. Thank you very much in advance for your attention, and congratulations on the incredible work 🙏
### Proposed workflow
1. Go to ....
2. Press ....
3. ...
### Additional information
_No response_ | open | 2024-05-06T11:43:00Z | 2024-05-11T02:15:49Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15723 | [
"enhancement"
] | utopic-dev | 6 |
agronholm/anyio | asyncio | 857 | `create_tcp_listener(local_port=0)`: return listeners with the same port | ### Things to check first
- [X] I have searched the existing issues and didn't find my feature already requested there
### Feature description
> it would be nice if `multilistener = await create_tcp_listener(local_port=0)` used the same port for both of the listeners. This would be [implemented via something roughly] like
>
> ```python
> def get_free_ipv4_ipv6_port(kind, bind_ipv4_addr, bind_ipv6_addr):
> if platform_has_dualstack and bind_ipv4_addr == bind_ipv6_addr == "":
> # We can use the dualstack_sock.bind(("", 0)) trick
> ...
> else:
> ...
> ```
>
> Currently, folks who need this have to implement it downstream.)
Ow, that's definitely something I would like to fix! Strange that nobody has reported that as an issue. I definitely meant for the listener to bind to the same ephemeral port if the local port was omitted.
_Originally posted by @agronholm in https://github.com/agronholm/anyio/pull/856#discussion_r1907675607_
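For reference, a rough stdlib-only sketch of the downstream workaround mentioned above (the non-dualstack branch) — purely illustrative, with none of the retry logic a real implementation would need:

```python
import socket

def get_shared_free_port(host4="127.0.0.1", host6="::1"):
    """Illustrative only: grab an ephemeral IPv4 port, then try to bind
    the same port on IPv6. No dual-stack trick and no retry loop, which
    a real implementation would need when the v6 bind collides."""
    s4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s4.bind((host4, 0))
    port = s4.getsockname()[1]
    ipv6_ok = False
    if socket.has_ipv6:
        try:
            s6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
            s6.bind((host6, port))
            s6.close()
            ipv6_ok = True
        except OSError:
            pass  # IPv6 unavailable, or the port is already taken on v6
    s4.close()
    return port, ipv6_ok

port, ipv6_ok = get_shared_free_port()
print(port, ipv6_ok)
```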
### Use case
. | open | 2025-01-09T09:25:13Z | 2025-03-21T23:02:29Z | https://github.com/agronholm/anyio/issues/857 | [
"enhancement"
] | gschaffner | 1 |
numpy/numpy | numpy | 28,351 | BUG: FreeBSD fails a conversion test | FreeBSD recently began failing the `test_nested_arrays_stringlength` test in `test_array_coercion.py`. It emits a `RuntimeWarning` when casting `float` objects to `string` dtype
```
self = <test_array_coercion.TestStringDiscovery object at 0x1a16d2f95dd0>
obj = 1.2
@pytest.mark.parametrize("obj",
[object(), 1.2, 10**43, None, "string"],
ids=["object", "1.2", "10**43", "None", "string"])
def test_nested_arrays_stringlength(self, obj):
length = len(str(obj))
expected = np.dtype(f"S{length}")
arr = np.array(obj, dtype="O")
> assert np.array([arr, arr], dtype="S").dtype == expected
E RuntimeWarning: invalid value encountered in cast
arr = array(1.2, dtype=object)
expected = dtype('S3')
length = 3
obj = 1.2
self = <test_array_coercion.TestStringDiscovery object at 0x1a16d2f95dd0>
../.venv/lib/python3.11/site-packages/numpy/_core/tests/test_array_coercion.py:180: RuntimeWarning
```
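For convenience, the failing check can be reproduced outside pytest with a few lines; on platforms other than FreeBSD this is expected to cast cleanly to `S3` with no warning recorded:

```python
import warnings
import numpy as np

# Same cast as the failing test: an object array holding a Python
# float, coerced to a bytes ("S") dtype with auto-discovered length.
arr = np.array(1.2, dtype="O")
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    out = np.array([arr, arr], dtype="S")

print(out.dtype, [str(w.message) for w in caught])
```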
Maybe we could get some FreeBSD eyeballs on this, I don't know who we should ping. I see @oscargus and @DavidAlphaFox have previously contributed FreeBSD issues and PRs, maybe they know who to turn to? In any case, #28331 makes CI pass by checking that this warning is emitted. This issue is to remind us to try to fix it. | closed | 2025-02-18T06:13:37Z | 2025-02-21T15:55:34Z | https://github.com/numpy/numpy/issues/28351 | [] | mattip | 5 |
jazzband/django-oauth-toolkit | django | 878 | RELEASE 1.3.3 | Checklist:
- [x] Update the changelog | closed | 2020-10-07T13:50:54Z | 2020-10-20T05:39:34Z | https://github.com/jazzband/django-oauth-toolkit/issues/878 | [] | MattBlack85 | 5 |
ContextLab/hypertools | data-visualization | 223 | rotation & saving not working on large legends | I have a large legend of numerical data, so the legend is larger than the window, and on clicking the _save button_ an exception is thrown:
```
Traceback (most recent call last):
File "matplotlib/cbook/__init__.py", line 216, in process
func(*args, **kwargs)
File "hypertools/plot/draw.py", line 127, in update_position
proj = ax.get_proj()
AttributeError: 'AxesSubplot' object has no attribute 'get_proj'
```
It would be nice if the legend were fitted to the window size, and/or if the save button also worked when the legend is larger than the window. | open | 2019-07-01T08:24:23Z | 2019-07-01T08:24:23Z | https://github.com/ContextLab/hypertools/issues/223 | [] | flashpixx | 0
mars-project/mars | numpy | 2,814 | [BUG] release_free_slot got wrong slot_id | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version
2. The version of Mars you use: 8f322b496df58022f6b7dd3839f1ce6b2d119c73
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
```
2022-03-14 11:57:44,895 ERROR api.py:115 -- Got unhandled error when handling message ('release_free_slot', 0, (0, ('t7rQ0udN4qHUWr9x3qg1baTI', 'XPRUxfx1ld4Dzz797R5KZeGW')), {}) in actor b'numa-0_band_slot_manager' at 127.0.0.1:14887
Traceback (most recent call last):
File "mars/oscar/core.pyx", line 478, in mars.oscar.core._BaseActor.__on_receive__
with debug_async_timeout('actor_lock_timeout',
File "mars/oscar/core.pyx", line 481, in mars.oscar.core._BaseActor.__on_receive__
async with self._lock:
File "mars/oscar/core.pyx", line 482, in mars.oscar.core._BaseActor.__on_receive__
result = func(*args, **kwargs)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/workerslot.py", line 169, in release_free_slot
assert acquired_slot_id == slot_id, f"acquired_slot_id {acquired_slot_id} != slot_id {slot_id}"
AssertionError: acquired_slot_id 1 != slot_id 0
2022-03-14 11:57:44,897 ERROR execution.py:120 -- Failed to run subtask XPRUxfx1ld4Dzz797R5KZeGW on band numa-0
Traceback (most recent call last):
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 331, in internal_run_subtask
subtask_info.result = await self._retry_run_subtask(
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 420, in _retry_run_subtask
return await _retry_run(subtask, subtask_info, _run_subtask_once)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 107, in _retry_run
raise ex
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 67, in _retry_run
return await target_async_func(*args)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 412, in _run_subtask_once
await slot_manager_ref.release_free_slot(
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/context.py", line 189, in send
return self._process_result_message(result)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/context.py", line 70, in _process_result_message
raise message.as_instanceof_cause()
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/pool.py", line 542, in send
result = await self._run_coro(message.message_id, coro)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/pool.py", line 333, in _run_coro
return await coro
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/api.py", line 115, in __on_receive__
return await super().__on_receive__(message)
File "mars/oscar/core.pyx", line 506, in __on_receive__
raise ex
File "mars/oscar/core.pyx", line 478, in mars.oscar.core._BaseActor.__on_receive__
with debug_async_timeout('actor_lock_timeout',
File "mars/oscar/core.pyx", line 481, in mars.oscar.core._BaseActor.__on_receive__
async with self._lock:
File "mars/oscar/core.pyx", line 482, in mars.oscar.core._BaseActor.__on_receive__
result = func(*args, **kwargs)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/workerslot.py", line 169, in release_free_slot
assert acquired_slot_id == slot_id, f"acquired_slot_id {acquired_slot_id} != slot_id {slot_id}"
AssertionError: [address=127.0.0.1:14887, pid=42643] acquired_slot_id 1 != slot_id 0
2022-03-14 11:57:44,900 INFO processor.py:508 -- Time consuming to execute a subtask is 0.023977041244506836s with session_id t7rQ0udN4qHUWr9x3qg1baTI, subtask_id XPRUxfx1ld4Dzz797R5KZeGW
2022-03-14 11:57:44,903 ERROR api.py:115 -- Got unhandled error when handling message ('release_free_slot', 0, (1, ('t7rQ0udN4qHUWr9x3qg1baTI', 'XPRUxfx1ld4Dzz797R5KZeGW')), {}) in actor b'numa-0_band_slot_manager' at 127.0.0.1:14887
Traceback (most recent call last):
File "mars/oscar/core.pyx", line 478, in mars.oscar.core._BaseActor.__on_receive__
with debug_async_timeout('actor_lock_timeout',
File "mars/oscar/core.pyx", line 481, in mars.oscar.core._BaseActor.__on_receive__
async with self._lock:
File "mars/oscar/core.pyx", line 482, in mars.oscar.core._BaseActor.__on_receive__
result = func(*args, **kwargs)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/workerslot.py", line 168, in release_free_slot
acquired_slot_id = self._session_stid_to_slot.pop(acquired_session_stid)
KeyError: ('t7rQ0udN4qHUWr9x3qg1baTI', 'XPRUxfx1ld4Dzz797R5KZeGW')
2022-03-14 11:57:44,904 ERROR execution.py:120 -- Failed to run subtask XPRUxfx1ld4Dzz797R5KZeGW on band numa-0
Traceback (most recent call last):
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 331, in internal_run_subtask
subtask_info.result = await self._retry_run_subtask(
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 420, in _retry_run_subtask
return await _retry_run(subtask, subtask_info, _run_subtask_once)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 107, in _retry_run
raise ex
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 67, in _retry_run
return await target_async_func(*args)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/services/scheduling/worker/execution.py", line 412, in _run_subtask_once
await slot_manager_ref.release_free_slot(
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/context.py", line 189, in send
return self._process_result_message(result)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/context.py", line 70, in _process_result_message
raise message.as_instanceof_cause()
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/pool.py", line 542, in send
result = await self._run_coro(message.message_id, coro)
File "/Users/chaokunyang/ant/Development/DevProjects/python/mars/mars/oscar/backends/pool.py", line 333, in _run_coro
return await coro
```
6. Minimized code to reproduce the error.
` pytest --log-level=INFO -v -s mars/dataframe/indexing/tests/test_indexing_execution.py::test_series_getitem` | closed | 2022-03-14T03:59:14Z | 2022-03-15T03:17:29Z | https://github.com/mars-project/mars/issues/2814 | [
"type: bug",
"mod: scheduling service"
] | chaokunyang | 2 |
jonaswinkler/paperless-ng | django | 880 | [BUG] Paperless does not start on upgrading to 1.4.0 | <!---
=> Before opening an issue, please check the documentation and see if it helps you resolve your issue: https://paperless-ng.readthedocs.io/en/latest/troubleshooting.html
=> Please also make sure that you followed the installation instructions.
=> Please search the issues and look for similar issues before opening a bug report.
=> If you would like to submit a feature request please submit one under https://github.com/jonaswinkler/paperless-ng/discussions/categories/feature-requests
=> If you encounter issues while installing of configuring Paperless-ng, please post that in the "Support" section of the discussions. Remember that Paperless successfully runs on a variety of different systems. If paperless does not start, it's probably an issue with your system, and not an issue of paperless.
=> Don't remove the [BUG] prefix from the title.
-->
**Describe the bug**
paperless_webserver docker container won't start after upgrading from 1.2.1 to 1.4.0 with docker-compose pull
At this time, I'm attempting to pull different versions of paperless_ng to try and get it up and running. Going down to 1.3.* didn't work because of an error: `OverflowError: timestamp out of range for platform time_t`. But going back to 1.2.1 does let paperless get up and running.
**To Reproduce**
Steps to reproduce the behavior:
1. begin with release v1.2.1
2. docker-compose pull (upgrade to 1.4.0)
3. docker-compose up -d
4. failure
**Expected behavior**
I expected to see paperless start up without failures.
**Webserver logs**
running 1.4.0:
```
Paperless-ng docker container starting...
creating directory /tmp/paperless
Apply database migrations...
sudo: unable to get time of day: Operation not permitted
sudo: error initializing audit plugin sudoers_audit
```
**Relevant information**
- Raspbian: Linux pi 5.10.17-v7l+ #1403 SMP Mon Feb 22 11:33:35 GMT 2021 armv7l GNU/Linux
- Paperless Version 1.2 --> 1.4
- Docker version 19.03.13, build 4484c46
- docker-compose version 1.27.4
- Installation method: Docker
- Any configuration changes:
[docker-compose.txt](https://github.com/jonaswinkler/paperless-ng/files/6271675/docker-compose.txt) and
[docker-compose.env.txt](https://github.com/jonaswinkler/paperless-ng/files/6271689/docker-compose.env.txt)
| closed | 2021-04-07T12:57:10Z | 2021-04-07T13:09:11Z | https://github.com/jonaswinkler/paperless-ng/issues/880 | [] | starbuck93 | 1 |
airtai/faststream | asyncio | 2,035 | Bug: missing nested model in AsyncAPI documentation | ```python
from dataclasses import dataclass, field
from faststream.rabbit import RabbitBroker
from faststream.specification import AsyncAPI
@dataclass
class PartnerOrderProduct:
product_uid: str
@dataclass
class PartnerOrderPayload:
uid: str
status: str
date: str
products: list[PartnerOrderProduct] = field(default_factory=list)
broker = RabbitBroker()
@broker.subscriber(queue="test")
async def partner_order_to_1c(
payload: PartnerOrderPayload,
):
print(payload)
docs = AsyncAPI(broker)
```
Generated schema
```json
{
"info": {
"title": "FastStream",
"version": "0.1.0",
"description": ""
},
"asyncapi": "3.0.0",
"defaultContentType": "application/json",
"servers": {
"development": {
"host": "admin:admin@localhost:5672",
"pathname": "/robots",
"protocol": "amqp",
"protocolVersion": "0.9.1"
}
},
"channels": {
"test:_:PartnerOrderTo1C": {
"address": "test:_:PartnerOrderTo1C",
"servers": [
{
"$ref": "#/servers/development"
}
],
"messages": {
"SubscribeMessage": {
"$ref": "#/components/messages/test:_:PartnerOrderTo1C:SubscribeMessage"
}
},
"bindings": {
"amqp": {
"is": "queue",
"bindingVersion": "0.3.0",
"queue": {
"name": "test",
"durable": false,
"exclusive": false,
"autoDelete": false,
"vhost": "/robots"
}
}
}
}
},
"operations": {
"test:_:PartnerOrderTo1CSubscribe": {
"action": "receive",
"channel": {
"$ref": "#/channels/test:_:PartnerOrderTo1C"
},
"bindings": {
"amqp": {
"cc": [
"test"
],
"ack": true,
"bindingVersion": "0.3.0"
}
},
"messages": [
{
"$ref": "#/channels/test:_:PartnerOrderTo1C/messages/SubscribeMessage"
}
]
}
},
"components": {
"messages": {
"test:_:PartnerOrderTo1C:SubscribeMessage": {
"title": "test:_:PartnerOrderTo1C:SubscribeMessage",
"correlationId": {
"location": "$message.header#/correlation_id"
},
"payload": {
"$ref": "#/components/schemas/PartnerOrderPayload"
}
}
},
"schemas": {
"PartnerOrderPayload": {
"properties": {
"uid": {
"title": "Uid",
"type": "string"
},
"status": {
"title": "Status",
"type": "string"
},
"date": {
"title": "Date",
"type": "string"
},
"products": {
"items": {
"$ref": "#/components/schemas/PartnerOrderProduct" # there is no such schema
},
"title": "Products",
"type": "array"
}
},
"required": [
"uid",
"status",
"date"
],
"title": "PartnerOrderPayload",
"type": "object"
}
}
}
}
``` | closed | 2025-01-12T18:11:13Z | 2025-03-10T09:29:54Z | https://github.com/airtai/faststream/issues/2035 | [
"bug"
] | Lancetnik | 2 |
jumpserver/jumpserver | django | 15,071 | upgrade | ### Product version
v4.3.1
### Edition type
- [x] Community edition
- [ ] Enterprise edition
- [ ] Enterprise trial edition
### Installation method
- [x] Online installation (one-click command)
- [ ] Offline package installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source installation
### Environment information
Kylin V10 SP2, system kernel 4.x
### 🤔 Problem description
While upgrading from v4.3.1 to v4.7.0, execution reaches step 6 and fails with: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: container_linux.go:328: starting container process caused "permission denied": unknown
### Expected result
_No response_
### Additional information
_No response_ | open | 2025-03-19T08:24:19Z | 2025-03-24T02:54:51Z | https://github.com/jumpserver/jumpserver/issues/15071 | [
"⏳ Pending feedback",
"🤔 Question"
] | tianchaopow | 3 |
supabase/supabase-py | fastapi | 703 | Update dependency of httpx to 0.26 or higher | **Is your feature request related to a problem? Please describe.**
The current constraint (`httpx>=0.24,<0.26`) is causing conflicts with other packages that rely on httpx versions >= 0.26 (in my case it was python-telegram-bot). However, there were no breaking changes in 0.26 or 0.27.

Also, postgrest appears to pin an even older version of httpx (0.16), which makes it impossible to install the dependency alongside any newer httpx in a project.
**Describe the solution you'd like**
Please raise the version ceiling of the httpx dependency so that 0.26 is within bounds. Note that the packages this project relies on (storage3 and postgrest) have the same issue.
| closed | 2024-02-26T14:05:35Z | 2024-04-02T01:05:57Z | https://github.com/supabase/supabase-py/issues/703 | [
"enhancement",
"postgrest"
] | deaddrunkspirit | 13 |
ShishirPatil/gorilla | api | 873 | [BFCL] Question about multi-turn testing | When I run `bfcl generate --model Qwen/Qwen2.5-7B-Instruct --test-category multi_turn --backend vllm`, the assertion `assert type(test_case["function"]) is list` at berkeley-function-call-leaderboard/bfcl/model_handler/oss_model/base_oss_handler.py:225 in _multi_threaded_inference fails, reporting that there is no "function" key. I then checked the data, and it indeed has no "function" field. How can I resolve this?
| closed | 2025-01-09T02:28:43Z | 2025-02-20T22:34:38Z | https://github.com/ShishirPatil/gorilla/issues/873 | [
"BFCL-General"
] | qianzhang2018 | 1 |
deepfakes/faceswap | machine-learning | 644 | Faceswap Windows Installer v0.95b DLL load failed: The specified module could not be found. | 
| closed | 2019-03-03T19:07:23Z | 2019-03-05T01:17:01Z | https://github.com/deepfakes/faceswap/issues/644 | [] | KaveriWaikar | 2 |
marimo-team/marimo | data-visualization | 3,521 | The "Edit Markdown" button should respect the "cell output area = below" setting | ### Description
When the `cell output area` is set to `below`, the `edit markdown` button (i.e., the "eye" icon) should be placed at the top of the card, instead of below.

### Suggested solution
Having it at the bottom of the card makes sense when the output is "above" because it shows you the first line of code directly, but not when it's "below". Ideally, you'd also auto-scroll the user to the top of the code/markdown block, though I'm not sure how easy that would be to implement.
### Alternative
_No response_
### Additional context
_No response_ | closed | 2025-01-21T01:33:52Z | 2025-01-21T15:49:02Z | https://github.com/marimo-team/marimo/issues/3521 | [
"enhancement"
] | mjkanji | 1 |
NullArray/AutoSploit | automation | 988 | Ekultek, you are correct. | Kek | closed | 2019-04-19T16:46:35Z | 2019-04-19T16:58:00Z | https://github.com/NullArray/AutoSploit/issues/988 | [] | AutosploitReporter | 0 |
neuml/txtai | nlp | 776 | Persist Scoring component metadata with MessagePack | Persist Scoring component metadata with MessagePack. Keep support for loading existing Pickle format but only save in new format. | closed | 2024-08-23T20:47:44Z | 2024-08-24T12:12:57Z | https://github.com/neuml/txtai/issues/776 | [] | davidmezzetti | 0 |
flairNLP/flair | nlp | 2,995 | Possibility of setting Adam optimizer for TARS Classifier | Hi,
I appreciate your beneficial library. In this issue, I have a question regarding TARS Classifier and its optimizer.
I was applying few-shot learning to my own dataset using your library. During optimization, I wanted to try different optimizers, such as Adam or AdamW, but whenever I used Adam (`torch.optim.Adam`), all the reported metrics dropped to zero!

Would it be possible to use other optimizers with the TARS classifier? If so, what configuration is needed?
Thank you in advance. | closed | 2022-11-20T21:53:25Z | 2023-05-05T12:26:29Z | https://github.com/flairNLP/flair/issues/2995 | [
"question",
"wontfix"
] | FaezeGhorbanpour | 5 |
jupyterlab/jupyter-ai | jupyter | 1,273 | Minimum dependencies CI job is failing | ## Description
The `Python Tests / Unit tests (Python 3.9, minimum dependencies)` workflow is currently failing on CI.
## Reproduce
Example of failing run: https://github.com/jupyterlab/jupyter-ai/actions/runs/13762125319/job/38480403001?pr=1271
## Context
- Updating the version floor and clearing the runner cache did not fix the issue.
- This PR was just merged last week: https://github.com/jupyterlab/maintainer-tools/pull/250 | closed | 2025-03-10T20:48:12Z | 2025-03-11T22:22:22Z | https://github.com/jupyterlab/jupyter-ai/issues/1273 | [
"bug"
] | dlqqq | 3 |
ivy-llc/ivy | tensorflow | 28,098 | Fix Frontend Failing Test: jax - attribute.paddle.real | To-do List: https://github.com/unifyai/ivy/issues/27496 | closed | 2024-01-28T19:01:47Z | 2024-01-29T13:08:51Z | https://github.com/ivy-llc/ivy/issues/28098 | [
"Sub Task"
] | Sai-Suraj-27 | 0 |
ultrafunkamsterdam/undetected-chromedriver | automation | 2,145 | How does nodriver support Internet Explorer | How does nodriver support Internet Explorer | open | 2025-02-24T09:14:22Z | 2025-02-24T10:14:10Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/2145 | [] | pythonlw | 1 |
litestar-org/polyfactory | pydantic | 455 | Bug: Support for Python 3.12 type keyword | ### Description
When upgrading from Python 3.11 syntax to 3.12, my type aliases are now declared with the `type` statement (PEP 695).
Ex:
```python
LiteralAlias: TypeAlias = Literal['a','b','c']
```
becomes
```python
type LiteralAlias = Literal['a','b','c']
```
With this change Polyfactory no longer recognizes the type:
```
E polyfactory.exceptions.ParameterException: Unsupported type: LiteralAlias
E
E Either extend the providers map or add a factory function for this type.
```
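For context, here is a stdlib-only sketch (no Polyfactory required) of why the new syntax trips up annotation-introspecting code: the alias is a `typing.TypeAliasType` object rather than the `Literal` it names, and the underlying type only surfaces through its `__value__` attribute. The `exec` is only there so the snippet still parses on interpreters older than 3.12.

```python
import sys
from typing import Literal, get_args

if sys.version_info >= (3, 12):
    ns = {"Literal": Literal}
    # PEP 695 syntax; run via exec() so this file parses on older Pythons too.
    exec('type LiteralAlias = Literal["a", "b", "c"]', ns)
    alias = ns["LiteralAlias"]
    print(get_args(alias))            # () -- the alias itself looks opaque
    print(get_args(alias.__value__))  # ('a', 'b', 'c') -- real type is in __value__
```

So supporting PEP 695 aliases presumably means unwrapping `TypeAliasType.__value__` before the existing type dispatch.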
### URL to code causing the issue
_No response_
### MCVE
```python
# Your MCVE code here
```
### Steps to reproduce
```bash
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
```
### Screenshots
"In the format of: ``"
### Logs
_No response_
### Release Version
2.12.0
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2023-12-12T07:54:25Z | 2025-03-20T15:53:12Z | https://github.com/litestar-org/polyfactory/issues/455 | [
"bug",
"high priority"
] | sherbang | 1 |
yihong0618/running_page | data-visualization | 664 | Error when fetching Joyrun data: "Max retries exceeded with url" | ## Running in VS Code, the exact output is as follows:
```
C:\Users\Administrator> & C:/Users/Administrator/AppData/Local/Programs/Python/Python38/python.exe d:/JoyRundateto/running_page-master/run_page/joyrun_sync.py 18XXXXX VERIFICATION_CODE --with-gpx
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 776, in urlopen
self._prepare_proxy(conn)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 1045, in _prepare_proxy
conn.connect()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connection.py", line 619, in connect
self.sock = sock = self._connect_tls_proxy(self.host, sock)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connection.py", line 675, in _connect_tls_proxy
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connection.py", line 782, in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\util\ssl_.py", line 470, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\util\ssl_.py", line 514, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 1040, in _create
self.do_handshake()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
OSError: [Errno 0] Error
The above exception was the direct cause of the following exception:
urllib3.exceptions.ProxyError: ('Unable to connect to proxy', OSError(0, 'Error'))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\adapters.py", line 486, in send
resp = conn.urlopen(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\connectionpool.py", line 844, in urlopen
retries = retries.increment(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\urllib3\util\retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='api.thejoyrun.com', port=443): Max retries exceeded with url: //user/login/phonecode?phoneNumber=188XXXXX&identifyingCode=VERIFICATION_CODE&signature=C914F1CE592643369EE52D84EF7CC8D6&timestamp=1713698195 (Caused by ProxyError('Unable to connect to proxy', OSError(0, 'Error')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "d:/JoyRundateto/running_page-master/run_page/joyrun_sync.py", line 368, in <module>
j.login_by_phone()
File "d:/JoyRundateto/running_page-master/run_page/joyrun_sync.py", line 142, in login_by_phone
r = self.session.get(
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python38\lib\site-packages\requests\adapters.py", line 513, in send
raise ProxyError(e, request=request)
requests.exceptions.ProxyError: HTTPSConnectionPool(host='api.thejoyrun.com', port=443): Max retries exceeded with url: //user/login/phonecode?phoneNumber=PHONE_NUMBER&identifyingCode=820481&signature=C914F1CE592643369EE52D84EF7CC8D6&timestamp=1713698195 (Caused by ProxyError('Unable to connect to proxy', OSError(0, 'Error')))
```
## I asked an AI, and its explanation was:
From the error message you provided, it looks like your Python script fails while trying to connect to api.thejoyrun.com over HTTPS through a proxy server. More specifically, the script cannot establish a connection to the proxy during the SSL handshake, and after several retries it finally raises a MaxRetryError.
Resolving this may take the following steps:
1. Check the proxy server settings: confirm the proxy address, port, and any authentication information are correct.
2. Check network connectivity: confirm your machine can reach both the proxy server and api.thejoyrun.com, for example with `ping` or by opening the site in a browser.
3. Check SSL/TLS support: make sure the proxy server supports SSL/TLS connections; try visiting other HTTPS sites to verify this.
4. Check Python and library versions: make sure your Python version and all relevant libraries (especially `requests` and `urllib3`) are up to date, or at least still-maintained versions. Old versions of the libraries may not support certain encryption protocols or may contain known bugs.
5. Disable or configure TLS/SSL verification: if you trust the proxy server and understand the potential security risk, you can try disabling SSL verification in the `requests` call (as a temporary test only, not recommended in production):
   ```python
   response = requests.get('https://example.com', verify=False)
   ```
   or correctly configure an SSL context that matches the proxy's requirements.
6. Enable more detailed logging via the `logging` module to see exactly where the failure occurs.
7. Check how the proxy is used in the joyrun_sync.py script and make sure the proxy configuration is correct.
8. Contact the proxy service provider: if the problem persists, the proxy service may be down or may have specific connection requirements.
Finally, double-check whether you actually need to connect through a proxy server at all. If not, you can try connecting to the target server directly. If you do, make sure your proxy settings are correct and the proxy is up and accepting connections from your machine.
---
## What I tried
### 1. ping test
I turned off my personal proxy and ran `ping api.thejoyrun.com`; the host was reachable.

Also, using a verification code that had been sitting for 5–6 minutes, I logged into Joyrun with it directly, and it worked immediately.
### 2. Check Python and library versions
The main packages look current: requests 2.31.0 is essentially the latest, and urllib3 2.2.1 was updated two months ago. However, the related requests-oauthlib and requests-toolbelt looked quite old, so I tried updating them:
requests-oauthlib 1.3.1 >> 2.0.0
requests-toolbelt 1.0.0 >> 1.0.0
After updating I tried again and got exactly the same error.
Full list of installed Python packages:
```
C:\Users\Administrator>pip list
Package Version
--------------------- --------------
aiofiles 23.2.1
annotated-types 0.6.0
appdirs 1.4.4
arrow 1.3.0
beautifulsoup4 4.12.3
bitstruct 8.11.1
certifi 2024.2.2
cffi 1.16.0
charset-normalizer 3.3.2
cloudscraper 1.2.58
colour 0.1.5
et-xmlfile 1.1.0
eviltransform 0.1.1
fit-tool 0.9.13
future 1.0.0
garmin-fit-sdk 21.133.0
garth 0.4.45
geographiclib 2.0
geopy 2.4.1
gpxpy 1.4.2
greenlet 3.0.3
h11 0.9.0
h3 3.7.7
haversine 2.8.0
httpcore 0.11.1
httpx 0.15.5
idna 3.7
jdcal 1.4.1
lxml 4.9.4
markdown-it-py 3.0.0
mdurl 0.1.2
numpy 1.26.4
oauthlib 3.2.2
openpyxl 2.5.12
pip 24.0
polyline 2.0.2
pycparser 2.22
pycryptodome 3.20.0
pydantic 2.7.0
pydantic_core 2.18.1
Pygments 2.17.2
pyparsing 3.1.2
python-dateutil 2.9.0.post0
pytz 2024.1
PyYAML 6.0.1
requests 2.31.0
requests-oauthlib 1.3.1
requests-toolbelt 1.0.0
rfc3986 1.5.0
rich 13.7.1
s2sphere 0.2.5
setuptools 69.5.1
six 1.16.0
sniffio 1.3.1
soupsieve 2.5
SQLAlchemy 2.0.29
stravalib 0.10.4
stravaweblib 0.0.8
svgwrite 1.4.3
tcxreader 0.4.10
tenacity 8.2.3
timezonefinder 6.5.0
types-python-dateutil 2.9.0.20240316
typing_extensions 4.11.0
tzdata 2024.1
tzlocal 5.2
units 0.7
urllib3 2.2.1
```
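One likely cause worth ruling out, given `ProxyError('Unable to connect to proxy')`: `requests` automatically honors the `HTTP_PROXY`/`HTTPS_PROXY` environment variables, so a stale proxy setting left behind by a local proxy client will still be used even after that client is closed. A minimal sketch (standard variable names; clearing them before the sync script creates its session is a suggestion, not a confirmed fix):

```python
import os

# Drop any leftover proxy environment variables so urllib3 stops routing
# api.thejoyrun.com through a proxy that is no longer running.
for var in ("HTTP_PROXY", "HTTPS_PROXY", "http_proxy", "https_proxy"):
    os.environ.pop(var, None)

print(os.environ.get("HTTPS_PROXY"))  # None once the variables are cleared
```

Alternatively, setting `session.trust_env = False` on the `requests` session inside joyrun_sync.py has the same effect of ignoring environment proxy settings.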
How should I handle this error? | closed | 2024-04-21T11:47:04Z | 2024-05-28T07:23:13Z | https://github.com/yihong0618/running_page/issues/664 | [] | Xiaoshizi1024 | 7 |
onnx/onnx | scikit-learn | 5,828 | Pytorch to ONNX export: Unexpected reduction in model size. | I am attempting to convert a PyTorch text detection model to ONNX, but I am experiencing unexpected behavior during the export process. The size of the exported ONNX model is significantly reduced from the original PyTorch model (15 MB to 7 MB), and the inferences from the ONNX model are not as accurate as the PyTorch model.
### Environment
- PyTorch Version: 1.12.0
- ONNX Version: 1.12.0
- Operating System: Ubuntu
I used the mentioned repo to train a text detection model
Model name: EAST text detection model — https://github.com/mingliangzhang2018/EAST-Pytorch
What I have tried:
I have tried using different PyTorch and ONNX versions, but the issue remains.
To isolate the problem, I tested the ONNX export with the tutorial https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html?highlight=onnx and it did not exhibit size reduction issues.
I have also tried other torch versions (1.14) and ONNX versions, but the issue persists. During the conversion from PyTorch, there are no errors or warnings.
| closed | 2023-12-28T20:10:35Z | 2024-01-01T17:02:10Z | https://github.com/onnx/onnx/issues/5828 | [
"question",
"topic: converters"
] | Ashish0091 | 1 |
tox-dev/tox | automation | 2,844 | tox v4 gets stuck at the end waiting for a lock to be released | ## Issue
Under certain conditions, tox can get stuck forever (100% CPU) after it has finished running.
## Environment
Provide at least:
- OS: any
- `pip list` of the host Python where `tox` is installed:
```console
```
## Output of running tox
Provide the output of `tox -rvv`:
```console
ssbarnea@m1: ~/c/tox-bug main
$ tox -vvv --exit-and-dump-after 40 -e py
ROOT: 140 D setup logging to NOTSET on pid 46700 [tox/report.py:221]
ROOT: 297 W will run in automatically provisioned tox, host /Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11 is missing [requires (has)]: tox>=4.2.6 (4.2.5) [tox/provision.py:124]
.pkg: 309 I find interpreter for spec PythonSpec(path=/Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11) [virtualenv/discovery/builtin.py:56]
.pkg: 309 I proposed PythonInfo(spec=CPython3.11.0.final.0-64, exe=/Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11, platform=darwin, version='3.11.0+ (heads/3.11:4cd5ea62ac, Oct 25 2022, 18:19:49) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63]
.pkg: 309 D accepted PythonInfo(spec=CPython3.11.0.final.0-64, exe=/Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11, platform=darwin, version='3.11.0+ (heads/3.11:4cd5ea62ac, Oct 25 2022, 18:19:49) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
.pkg: 311 D filesystem is not case-sensitive [virtualenv/info.py:24]
.pkg: 373 I find interpreter for spec PythonSpec(path=/Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11) [virtualenv/discovery/builtin.py:56]
.pkg: 373 I proposed PythonInfo(spec=CPython3.11.0.final.0-64, exe=/Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11, platform=darwin, version='3.11.0+ (heads/3.11:4cd5ea62ac, Oct 25 2022, 18:19:49) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63]
.pkg: 373 D accepted PythonInfo(spec=CPython3.11.0.final.0-64, exe=/Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11, platform=darwin, version='3.11.0+ (heads/3.11:4cd5ea62ac, Oct 25 2022, 18:19:49) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
.pkg: 375 I find interpreter for spec PythonSpec() [virtualenv/discovery/builtin.py:56]
.pkg: 375 I proposed PythonInfo(spec=CPython3.11.0.final.0-64, exe=/Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11, platform=darwin, version='3.11.0+ (heads/3.11:4cd5ea62ac, Oct 25 2022, 18:19:49) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63]
.pkg: 375 D accepted PythonInfo(spec=CPython3.11.0.final.0-64, exe=/Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11, platform=darwin, version='3.11.0+ (heads/3.11:4cd5ea62ac, Oct 25 2022, 18:19:49) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
ROOT: 376 I will run in a automatically provisioned python environment under /Users/ssbarnea/c/tox-bug/.tox/.tox/bin/python [tox/provision.py:145]
.pkg: 379 W _optional_hooks> python /Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta [tox/tox_env/api.py:427]
Backend: run command _optional_hooks with args {}
Backend: Wrote response {'return': {'get_requires_for_build_sdist': True, 'prepare_metadata_for_build_wheel': True, 'get_requires_for_build_wheel': True, 'build_editable': True, 'get_requires_for_build_editable': True, 'prepare_metadata_for_build_editable': True}} to /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/pep517__optional_hooks-dh8_2_0b.json
.pkg: 570 I exit None (0.19 seconds) /Users/ssbarnea/c/tox-bug> python /Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta pid=46718 [tox/execute/api.py:275]
.pkg: 571 W get_requires_for_build_editable> python /Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta [tox/tox_env/api.py:427]
Backend: run command get_requires_for_build_editable with args {'config_settings': None}
/Users/ssbarnea/c/tox-bug/.tox/.pkg/lib/python3.11/site-packages/setuptools/config/expand.py:144: UserWarning: File '/Users/ssbarnea/c/tox-bug/README.md' cannot be found
warnings.warn(f"File {path!r} cannot be found")
running egg_info
writing src/ansible_compat.egg-info/PKG-INFO
writing dependency_links to src/ansible_compat.egg-info/dependency_links.txt
writing top-level names to src/ansible_compat.egg-info/top_level.txt
writing manifest file 'src/ansible_compat.egg-info/SOURCES.txt'
Backend: Wrote response {'return': ['wheel']} to /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/pep517_get_requires_for_build_editable-xbuoiu_g.json
.pkg: 772 I exit None (0.20 seconds) /Users/ssbarnea/c/tox-bug> python /Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta pid=46718 [tox/execute/api.py:275]
.pkg: 773 W build_editable> python /Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta [tox/tox_env/api.py:427]
Backend: run command build_editable with args {'wheel_directory': '/Users/ssbarnea/c/tox-bug/.tox/.pkg/dist', 'config_settings': {'--build-option': []}, 'metadata_directory': '/Users/ssbarnea/c/tox-bug/.tox/.pkg/.meta'}
/Users/ssbarnea/c/tox-bug/.tox/.pkg/lib/python3.11/site-packages/setuptools/config/expand.py:144: UserWarning: File '/Users/ssbarnea/c/tox-bug/README.md' cannot be found
warnings.warn(f"File {path!r} cannot be found")
running editable_wheel
creating /Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-wfme8ksy/ansible_compat.egg-info
writing /Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-wfme8ksy/ansible_compat.egg-info/PKG-INFO
writing dependency_links to /Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-wfme8ksy/ansible_compat.egg-info/dependency_links.txt
writing top-level names to /Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-wfme8ksy/ansible_compat.egg-info/top_level.txt
writing manifest file '/Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-wfme8ksy/ansible_compat.egg-info/SOURCES.txt'
writing manifest file '/Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-wfme8ksy/ansible_compat.egg-info/SOURCES.txt'
creating '/Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-wfme8ksy/ansible_compat-0.1.dev1.dist-info'
creating /Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-wfme8ksy/ansible_compat-0.1.dev1.dist-info/WHEEL
running build_py
running egg_info
creating /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/tmplg3vacmn.build-temp/ansible_compat.egg-info
writing /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/tmplg3vacmn.build-temp/ansible_compat.egg-info/PKG-INFO
writing dependency_links to /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/tmplg3vacmn.build-temp/ansible_compat.egg-info/dependency_links.txt
writing top-level names to /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/tmplg3vacmn.build-temp/ansible_compat.egg-info/top_level.txt
writing manifest file '/var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/tmplg3vacmn.build-temp/ansible_compat.egg-info/SOURCES.txt'
writing manifest file '/var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/tmplg3vacmn.build-temp/ansible_compat.egg-info/SOURCES.txt'
Editable install will be performed using .pth file to extend `sys.path` with:
['src']
Options like `package-data`, `include/exclude-package-data` or
`packages.find.exclude/include` may have no effect.
adding '__editable__.ansible_compat-0.1.dev1.pth'
creating '/Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-wfme8ksy/ansible_compat-0.1.dev1-0.editable-py3-none-any.whl' and adding '/var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/tmp_6d_ux_iansible_compat-0.1.dev1-0.editable-py3-none-any.whl' to it
adding 'ansible_compat-0.1.dev1.dist-info/METADATA'
adding 'ansible_compat-0.1.dev1.dist-info/WHEEL'
adding 'ansible_compat-0.1.dev1.dist-info/top_level.txt'
adding 'ansible_compat-0.1.dev1.dist-info/RECORD'
Backend: Wrote response {'return': 'ansible_compat-0.1.dev1-0.editable-py3-none-any.whl'} to /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/pep517_build_editable-owjdt97d.json
.pkg: 907 I exit None (0.13 seconds) /Users/ssbarnea/c/tox-bug> python /Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta pid=46718 [tox/execute/api.py:275]
.pkg: 908 D package .tmp/package/21/ansible_compat-0.1.dev1-0.editable-py3-none-any.whl links to .pkg/dist/ansible_compat-0.1.dev1-0.editable-py3-none-any.whl (/Users/ssbarnea/c/tox-bug/.tox) [tox/util/file_view.py:36]
ROOT: 908 W install_package> python -I -m pip install --force-reinstall --no-deps /Users/ssbarnea/c/tox-bug/.tox/.tmp/package/21/ansible_compat-0.1.dev1-0.editable-py3-none-any.whl [tox/tox_env/api.py:427]
Processing ./.tox/.tmp/package/21/ansible_compat-0.1.dev1-0.editable-py3-none-any.whl
Installing collected packages: ansible-compat
Attempting uninstall: ansible-compat
Found existing installation: ansible-compat 0.1.dev1
Uninstalling ansible-compat-0.1.dev1:
Successfully uninstalled ansible-compat-0.1.dev1
Successfully installed ansible-compat-0.1.dev1
ROOT: 1400 I exit 0 (0.49 seconds) /Users/ssbarnea/c/tox-bug> python -I -m pip install --force-reinstall --no-deps /Users/ssbarnea/c/tox-bug/.tox/.tmp/package/21/ansible_compat-0.1.dev1-0.editable-py3-none-any.whl pid=46748 [tox/execute/api.py:275]
ROOT: 1400 W provision> .tox/.tox/bin/python -m tox -vvv --exit-and-dump-after 40 -e py [tox/tox_env/api.py:427]
ROOT: 77 D setup logging to NOTSET on pid 46755 [tox/report.py:221]
.pkg: 112 I find interpreter for spec PythonSpec() [virtualenv/discovery/builtin.py:56]
.pkg: 114 D got python info of /Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11 from /Users/ssbarnea/Library/Application Support/virtualenv/py_info/1/7c440f9733fdf26ad06b36085586625aa56ad3867d4add5eecd4dc174170d65a.json [virtualenv/app_data/via_disk_folder.py:129]
.pkg: 114 I proposed PythonInfo(spec=CPython3.11.0.final.0-64, system=/Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11, exe=/Users/ssbarnea/c/tox-bug/.tox/.tox/bin/python, platform=darwin, version='3.11.0+ (heads/3.11:4cd5ea62ac, Oct 25 2022, 18:19:49) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63]
.pkg: 114 D accepted PythonInfo(spec=CPython3.11.0.final.0-64, system=/Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11, exe=/Users/ssbarnea/c/tox-bug/.tox/.tox/bin/python, platform=darwin, version='3.11.0+ (heads/3.11:4cd5ea62ac, Oct 25 2022, 18:19:49) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
.pkg: 115 D filesystem is not case-sensitive [virtualenv/info.py:24]
.pkg: 132 I find interpreter for spec PythonSpec(path=/Users/ssbarnea/c/tox-bug/.tox/.tox/bin/python) [virtualenv/discovery/builtin.py:56]
.pkg: 132 I proposed PythonInfo(spec=CPython3.11.0.final.0-64, system=/Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11, exe=/Users/ssbarnea/c/tox-bug/.tox/.tox/bin/python, platform=darwin, version='3.11.0+ (heads/3.11:4cd5ea62ac, Oct 25 2022, 18:19:49) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:63]
.pkg: 132 D accepted PythonInfo(spec=CPython3.11.0.final.0-64, system=/Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11, exe=/Users/ssbarnea/c/tox-bug/.tox/.tox/bin/python, platform=darwin, version='3.11.0+ (heads/3.11:4cd5ea62ac, Oct 25 2022, 18:19:49) [Clang 14.0.0 (clang-1400.0.29.102)]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
.pkg: 134 W _optional_hooks> python /Users/ssbarnea/c/tox-bug/.tox/.tox/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta [tox/tox_env/api.py:427]
Backend: run command _optional_hooks with args {}
Backend: Wrote response {'return': {'get_requires_for_build_sdist': True, 'prepare_metadata_for_build_wheel': True, 'get_requires_for_build_wheel': True, 'build_editable': True, 'get_requires_for_build_editable': True, 'prepare_metadata_for_build_editable': True}} to /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/pep517__optional_hooks-6hnvgpmw.json
.pkg: 237 I exit None (0.10 seconds) /Users/ssbarnea/c/tox-bug> python /Users/ssbarnea/c/tox-bug/.tox/.tox/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta pid=46757 [tox/execute/api.py:275]
.pkg: 237 W get_requires_for_build_editable> python /Users/ssbarnea/c/tox-bug/.tox/.tox/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta [tox/tox_env/api.py:427]
Backend: run command get_requires_for_build_editable with args {'config_settings': None}
/Users/ssbarnea/c/tox-bug/.tox/.pkg/lib/python3.11/site-packages/setuptools/config/expand.py:144: UserWarning: File '/Users/ssbarnea/c/tox-bug/README.md' cannot be found
warnings.warn(f"File {path!r} cannot be found")
running egg_info
writing src/ansible_compat.egg-info/PKG-INFO
writing dependency_links to src/ansible_compat.egg-info/dependency_links.txt
writing top-level names to src/ansible_compat.egg-info/top_level.txt
writing manifest file 'src/ansible_compat.egg-info/SOURCES.txt'
Backend: Wrote response {'return': ['wheel']} to /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/pep517_get_requires_for_build_editable-4uddn4ko.json
.pkg: 392 I exit None (0.15 seconds) /Users/ssbarnea/c/tox-bug> python /Users/ssbarnea/c/tox-bug/.tox/.tox/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta pid=46757 [tox/execute/api.py:275]
.pkg: 392 W build_editable> python /Users/ssbarnea/c/tox-bug/.tox/.tox/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta [tox/tox_env/api.py:427]
Backend: run command build_editable with args {'wheel_directory': '/Users/ssbarnea/c/tox-bug/.tox/.pkg/dist', 'config_settings': {'--build-option': []}, 'metadata_directory': '/Users/ssbarnea/c/tox-bug/.tox/.pkg/.meta'}
/Users/ssbarnea/c/tox-bug/.tox/.pkg/lib/python3.11/site-packages/setuptools/config/expand.py:144: UserWarning: File '/Users/ssbarnea/c/tox-bug/README.md' cannot be found
warnings.warn(f"File {path!r} cannot be found")
running editable_wheel
creating /Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-ema2szox/ansible_compat.egg-info
writing /Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-ema2szox/ansible_compat.egg-info/PKG-INFO
writing dependency_links to /Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-ema2szox/ansible_compat.egg-info/dependency_links.txt
writing top-level names to /Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-ema2szox/ansible_compat.egg-info/top_level.txt
writing manifest file '/Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-ema2szox/ansible_compat.egg-info/SOURCES.txt'
writing manifest file '/Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-ema2szox/ansible_compat.egg-info/SOURCES.txt'
creating '/Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-ema2szox/ansible_compat-0.1.dev1.dist-info'
creating /Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-ema2szox/ansible_compat-0.1.dev1.dist-info/WHEEL
running build_py
running egg_info
creating /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/tmpat9h5lcf.build-temp/ansible_compat.egg-info
writing /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/tmpat9h5lcf.build-temp/ansible_compat.egg-info/PKG-INFO
writing dependency_links to /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/tmpat9h5lcf.build-temp/ansible_compat.egg-info/dependency_links.txt
writing top-level names to /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/tmpat9h5lcf.build-temp/ansible_compat.egg-info/top_level.txt
writing manifest file '/var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/tmpat9h5lcf.build-temp/ansible_compat.egg-info/SOURCES.txt'
writing manifest file '/var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/tmpat9h5lcf.build-temp/ansible_compat.egg-info/SOURCES.txt'
Editable install will be performed using .pth file to extend `sys.path` with:
['src']
Options like `package-data`, `include/exclude-package-data` or
`packages.find.exclude/include` may have no effect.
adding '__editable__.ansible_compat-0.1.dev1.pth'
creating '/Users/ssbarnea/c/tox-bug/.tox/.pkg/dist/.tmp-ema2szox/ansible_compat-0.1.dev1-0.editable-py3-none-any.whl' and adding '/var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/tmp41rddnayansible_compat-0.1.dev1-0.editable-py3-none-any.whl' to it
adding 'ansible_compat-0.1.dev1.dist-info/METADATA'
adding 'ansible_compat-0.1.dev1.dist-info/WHEEL'
adding 'ansible_compat-0.1.dev1.dist-info/top_level.txt'
adding 'ansible_compat-0.1.dev1.dist-info/RECORD'
Backend: Wrote response {'return': 'ansible_compat-0.1.dev1-0.editable-py3-none-any.whl'} to /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/pep517_build_editable-tho58wbq.json
.pkg: 520 I exit None (0.13 seconds) /Users/ssbarnea/c/tox-bug> python /Users/ssbarnea/c/tox-bug/.tox/.tox/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta pid=46757 [tox/execute/api.py:275]
.pkg: 520 D package .tmp/package/22/ansible_compat-0.1.dev1-0.editable-py3-none-any.whl links to .pkg/dist/ansible_compat-0.1.dev1-0.editable-py3-none-any.whl (/Users/ssbarnea/c/tox-bug/.tox) [tox/util/file_view.py:36]
py: 521 W install_package> python -I -m pip install --force-reinstall --no-deps /Users/ssbarnea/c/tox-bug/.tox/.tmp/package/22/ansible_compat-0.1.dev1-0.editable-py3-none-any.whl [tox/tox_env/api.py:427]
Processing ./.tox/.tmp/package/22/ansible_compat-0.1.dev1-0.editable-py3-none-any.whl
Installing collected packages: ansible-compat
Attempting uninstall: ansible-compat
Found existing installation: ansible-compat 0.1.dev1
Uninstalling ansible-compat-0.1.dev1:
Successfully uninstalled ansible-compat-0.1.dev1
Successfully installed ansible-compat-0.1.dev1
py: 784 I exit 0 (0.26 seconds) /Users/ssbarnea/c/tox-bug> python -I -m pip install --force-reinstall --no-deps /Users/ssbarnea/c/tox-bug/.tox/.tmp/package/22/ansible_compat-0.1.dev1-0.editable-py3-none-any.whl pid=46788 [tox/execute/api.py:275]
py: 784 W commands[0]> echo 123 [tox/tox_env/api.py:427]
123
py: 808 I exit 0 (0.02 seconds) /Users/ssbarnea/c/tox-bug> echo 123 pid=46790 [tox/execute/api.py:275]
.pkg: 809 W _exit> python /Users/ssbarnea/c/tox-bug/.tox/.tox/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta [tox/tox_env/api.py:427]
Backend: run command _exit with args {}
Backend: Wrote response {'return': 0} to /var/folders/32/1xrphgzd4xv777syxjtkpdw80000gn/T/pep517__exit-wsk25xml.json
.pkg: 810 I exit None (0.00 seconds) /Users/ssbarnea/c/tox-bug> python /Users/ssbarnea/c/tox-bug/.tox/.tox/lib/python3.11/site-packages/pyproject_api/_backend.py True setuptools.build_meta pid=46757 [tox/execute/api.py:275]
.pkg: 841 D delete package /Users/ssbarnea/c/tox-bug/.tox/.tmp/package/22/ansible_compat-0.1.dev1-0.editable-py3-none-any.whl [tox/tox_env/python/virtual_env/package/pyproject.py:171]
py: OK (0.71=setup[0.68]+cmd[0.02] seconds)
congratulations :) (0.76 seconds)
Timeout (0:00:40)!
Thread 0x000000017d037000 (most recent call first):
File "/Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/site-packages/tox/execute/local_sub_process/read_via_thread_unix.py", line 35 in _read_available
File "/Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/site-packages/tox/execute/local_sub_process/read_via_thread_unix.py", line 24 in _read_stream
File "/Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/threading.py", line 975 in run
File "/Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/threading.py", line 1038 in _bootstrap_inner
File "/Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/threading.py", line 995 in _bootstrap
Thread 0x000000017c02b000 (most recent call first):
File "/Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/site-packages/tox/execute/local_sub_process/read_via_thread_unix.py", line 35 in _read_available
File "/Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/site-packages/tox/execute/local_sub_process/read_via_thread_unix.py", line 24 in _read_stream
File "/Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/threading.py", line 975 in run
File "/Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/threading.py", line 1038 in _bootstrap_inner
File "/Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/threading.py", line 995 in _bootstrap
Thread 0x00000001e3983a80 (most recent call first):
File "/Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/threading.py", line 1583 in _shutdown
FAIL: 1
```
## Minimal example
At https://github.com/ssbarnea/tox-bug there is a full repository created for reproducing the bug.
I will explain the conditions I already identified as required in order to reproduce the bug:
- tox needs to be convinced to reprovision itself (it does not happen otherwise)
- `usedevelop = true` must be present
I also mention that the `--exit-and-dump-after` trick does force tox to exit, but it does not close the running thread, which stays stuck until the user kills the process. The bug reproduces on both macOS and Linux.
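The two conditions can be captured in a minimal `tox.ini` sketch (the `requires` entry is only there to force reprovisioning; any requirement the outer tox environment does not already satisfy will trigger it, so the plugin named here is an arbitrary assumption):

```ini
[tox]
# Any unsatisfied requirement forces tox to reprovision itself into .tox/.tox
requires =
    tox-extra

[testenv]
usedevelop = true
commands = python -c 'print(123)'
```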
| open | 2023-01-09T19:09:21Z | 2024-03-05T22:16:13Z | https://github.com/tox-dev/tox/issues/2844 | [
"bug:minor",
"help:wanted"
] | ssbarnea | 6 |
dask/dask | numpy | 11,068 | Unable to get series when previously filtered with datetime slice | <!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**:
When I filter my datetime-indexed dataset and then try to access a single column, an error is raised.
**Minimal Complete Verifiable Example**:
```python
import dask.dataframe as dd
import pandas as pd
import numpy as np
npartitions = 10
df = pd.DataFrame(
{
'A': np.random.randn(npartitions * 10),
},
    index=pd.date_range('2024-01-01', '2024-12-31', periods=npartitions * 10),
)
ddf = dd.from_pandas(df, npartitions=npartitions)
ddf = ddf['2024-03-01':'2024-09-30']
ddf['A'].compute()
```
raises the following error
```python
Traceback (most recent call last):
File "issue.py", line 15, in <module>
ddf['A'].compute()
File "C:\Python\Lib\site-packages\dask_expr\_collection.py", line 475, in compute
return DaskMethodsMixin.compute(out, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Lib\site-packages\dask\base.py", line 375, in compute
(result,) = compute(self, traverse=False, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Lib\site-packages\dask\base.py", line 661, in compute
results = schedule(dsk, keys, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Lib\site-packages\dask_expr\_expr.py", line 3698, in _execute_task
return dask.core.get(graph, name)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Lib\site-packages\dask\dataframe\methods.py", line 51, in loc
return df.loc[iindexer]
^^^^^^
AttributeError: 'tuple' object has no attribute 'loc'
```
**Anything else we need to know?**:
If you compute the whole dataframe instead of just the series, no error is raised and the expected data is returned.
**Environment**:
- Dask version: dask 2024.4.2, dask-core 2024.4.2, dask-expr 1.0.12
- Python version: 3.12.2
- Operating System: Windows
- Install method (conda, pip, source): conda
| closed | 2024-04-25T10:05:05Z | 2024-04-25T17:49:26Z | https://github.com/dask/dask/issues/11068 | [
"needs triage"
] | m-rossi | 3 |
scikit-image/scikit-image | computer-vision | 7,062 | Error when using pip to install version 0.17.2 | ### Description:
I'm trying to install scikit-image==0.17.2 using pip, but doing so results in an error.
### Way to reproduce:
Running the command: "pip install scikit-image==0.17.2" results in this:
```
Collecting scikit-image==0.17.2
Using cached scikit-image-0.17.2.tar.gz (29.8 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [18 lines of output]
Traceback (most recent call last):
File "C:\Users\....\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\....\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\Users\....\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\....\AppData\Local\Temp\pip-build-env-hgr10cdb\overlay\Lib\site-packages\setuptools\build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "C:\Users\....\AppData\Local\Temp\pip-build-env-hgr10cdb\overlay\Lib\site-packages\setuptools\build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "C:\Users\....\AppData\Local\Temp\pip-build-env-hgr10cdb\overlay\Lib\site-packages\setuptools\build_meta.py", line 487, in run_setup
super(_BuildMetaLegacyBackend,
File "C:\Users\....\AppData\Local\Temp\pip-build-env-hgr10cdb\overlay\Lib\site-packages\setuptools\build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 234, in <module>
File "<string>", line 58, in openmp_build_ext
ModuleNotFoundError: No module named 'numpy'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
```
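The last traceback frame shows `setup.py` importing NumPy before it is available, which the 0.17.2-era sdist does at build time. A commonly suggested workaround sketch (the version caps and flags are assumptions, unverified here; note that 0.17.2 also predates Python 3.10, so an older interpreter may ultimately be required):

```shell
# Pre-install build dependencies the old sdist imports, then skip build isolation
pip install wheel "cython<3" "numpy<1.22"
pip install --no-build-isolation scikit-image==0.17.2
```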
### Version information:
```Shell
3.10.1 (tags/v3.10.1:2cd268a, Dec 6 2021, 19:10:37) [MSC v.1929 64 bit (AMD64)]
Windows-10-10.0.19045-SP0
scikit-image version: Not installed
numpy version: 1.24.3
python version: Python 3.10.1
```
| closed | 2023-07-11T19:39:35Z | 2024-06-17T23:35:49Z | https://github.com/scikit-image/scikit-image/issues/7062 | [
":bug: Bug"
] | xdTAGCLAN | 6 |
aminalaee/sqladmin | sqlalchemy | 179 | UUIDType in sqlalchemy_utils support | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
There is a [UUID converter for `sqlalchemy.dialects.postgresql.base.UUID`](https://github.com/aminalaee/sqladmin/blob/e268c0faf200cc13c558c2d8d21cf5f359fdd327/sqladmin/forms.py#L337-L340) but `sqlalchemy_utils.types.uuid.UUIDType` is missing.
### Describe the solution you would like.
Add `sqlalchemy_utils.types.uuid.UUIDType` to the `@converts` annotation for `conv_PgUuid`.
### Describe alternatives you considered
_No response_
### Additional context
_No response_ | closed | 2022-06-16T07:44:57Z | 2022-06-17T11:32:22Z | https://github.com/aminalaee/sqladmin/issues/179 | [] | okapies | 1 |
ijl/orjson | numpy | 308 | Too large integers are not rejected | ```
>>> import orjson
>>> import ujson
>>> s = '{"foo":9999999999999999999999999999999999}'
>>> orjson.loads(s)
{'foo': 1e+34}
>>> ujson.loads(s)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ujson.JSONDecodeError: Value is too big!
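>>> # For comparison: the stdlib json module keeps integer literals as
>>> # arbitrary-precision Python ints instead of coercing them to float
>>> import json
>>> json.loads(s)
{'foo': 9999999999999999999999999999999999}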
``` | closed | 2022-10-03T07:18:02Z | 2022-12-08T09:34:45Z | https://github.com/ijl/orjson/issues/308 | [] | laurivosandi | 3 |
quasarstream/python-ffmpeg-video-streaming | dash | 85 | Is anyone using this tool as an NVR? | Hello,
I *think* this tool could be used as a Network Video Recorder (NVR) for IP cameras and such.
1. Do you know of anyone using it as such?
2. Are there any examples you could point me to?
Thank you | closed | 2022-06-10T18:36:39Z | 2022-07-22T11:12:38Z | https://github.com/quasarstream/python-ffmpeg-video-streaming/issues/85 | [] | SeaDude | 1 |
matplotlib/matplotlib | data-science | 28,961 | [MNT]: Defining bins in a graph takes way too long | ### Summary
Defining bins takes way too long for the task. Here is an example:
```py
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(42)
df = pd.DataFrame({
'Value 1': range(1, 251), # Numbers from 1 to 250
'Value 2': np.random.randint(1, 11, size=250) # Random integers from 1 to 10
})
fig, axs = plt.subplots(1, 2, figsize=(12, 7), dpi=100) # Adjust the width and DPI here
bins = [*range(1, len(df))]
Len_Df=len(df)
axs[0].hist(df['Value 1'], weights=df['Value 2'], bins=Len_Df)
%timeit -n 1 axs[0].hist(df['Value 1'], weights=df['Value 2'], bins=250)
%timeit -n 1 axs[0].hist(df['Value 1'], weights=df['Value 2'])
```
With output:
```
146 ms ± 16.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
6.34 ms ± 384 μs per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
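As a point of comparison rather than a fix, the per-call cost can be avoided by precomputing the histogram with NumPy and drawing it as a single step artist (`Axes.stairs` requires matplotlib >= 3.4; the sketch mirrors the data above):

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

np.random.seed(42)
df = pd.DataFrame({
    'Value 1': range(1, 251),
    'Value 2': np.random.randint(1, 11, size=250),
})

fig, ax = plt.subplots()
# Compute all 250 weighted bins once with NumPy...
counts, edges = np.histogram(df['Value 1'], weights=df['Value 2'], bins=250)
# ...and draw them as one step artist instead of one patch per bar
ax.stairs(counts, edges, fill=True)
```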
### Proposed fix
It would be nice if someone could find a performance improvement here, because I don't think defining bins should cause that large a runtime increase. | closed | 2024-10-10T13:38:08Z | 2024-10-10T14:15:50Z | https://github.com/matplotlib/matplotlib/issues/28961 | [
"Maintenance"
] | Chuck321123 | 1 |
onnx/onnx | tensorflow | 6,713 | Precision drop after exporting PyTorch.sigmoid to ONNX.sigmoid | # Bug Report
### Is the issue related to model conversion?
<!-- If the ONNX checker reports issues with this model then this is most probably related to the converter used to convert the original framework model to ONNX. Please create this bug in the appropriate converter's GitHub repo (pytorch, tensorflow-onnx, sklearn-onnx, keras-onnx, onnxmltools) to get the best help. -->
no
### Describe the bug
<!-- Please describe the bug clearly and concisely -->
I have noticed a significant precision drop when exporting a PyTorch.sigmoid model to the ONNX format. The model performs well in PyTorch, but after converting it to ONNX and running inference, the output accuracy or results are not the same.
result sigmoid([[1.0, -17.0, -20.0]]))
onnx output: [[7.310586e-01 8.940697e-08 0.000000e+00]]
pytorch output: tensor([[7.3106e-01, 4.1399e-08, 2.0612e-09]])
### System information
<!--
- OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*):
- ONNX version (*e.g. 1.13*): onnx-1.17.0
- Python version: 3.9.20
- GCC/Compiler version (if compiling from source):
- CMake version: 3.22.1
- Protobuf version: ?
- Visual Studio version (if applicable):-->
### Reproduction instructions
<!--
- Describe the code to reproduce the behavior.
```
import torch
import torch.onnx as onnx
import onnx
import onnxruntime as ort
import numpy as np
class SigmoidModel(torch.nn.Module):
def __init__(self):
super(SigmoidModel, self).__init__()
def forward(self, x):
return torch.sigmoid(x)
model = SigmoidModel()
model.eval()
x = torch.tensor([[1.0, -17.0, -20.0]])
output_pytorch = model(x)
onnx_path = "sigmoid_model.onnx"
torch.onnx.export(model, x, onnx_path, input_names=["input"], output_names=["output"],opset_version=11)
print(f" export onnx to: {onnx_path}")
onnx_model = onnx.load(onnx_path)
input = x.numpy()
output = ort.InferenceSession(onnx_model.SerializeToString()).run(None, {"input": input})
print("onnx output:",output[0])
print("pytorch output:",output_pytorch)
```
- Attach the ONNX model to the issue (where applicable)-->
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The output of the ONNX model should match the output of the original PyTorch model in terms of both accuracy and precision.
### Notes
<!-- Any additional information -->
| closed | 2025-02-20T02:15:52Z | 2025-02-20T03:56:04Z | https://github.com/onnx/onnx/issues/6713 | [
"topic: runtime"
] | blingbling22 | 1 |
apachecn/ailearning | python | 628 | It is an excellent project, thanks to all of you. | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| closed | 2022-01-22T04:25:00Z | 2022-04-26T07:47:34Z | https://github.com/apachecn/ailearning/issues/628 | [] | 1qiqiqi | 8 |
comfyanonymous/ComfyUI | pytorch | 6,791 | Help! How to solve the issue of unsupported operand types for //: 'int' and 'NoneType'? | ### Your question
# ComfyUI Error Report
## Error Details
- **Node ID:** 36
- **Node Type:** HyVideoTextImageEncode
- **Exception Type:** TypeError
- **Exception Message:** unsupported operand type(s) for //: 'int' and 'NoneType'
## Stack Trace
```
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 328, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 203, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 912, in process
prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = encode_prompt(self,
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 837, in encode_prompt
text_inputs = text_encoder.text2tokens(prompt,
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\text_encoder\__init__.py", line 253, in text2tokens
text_tokens = self.processor(
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\transformers\models\llava\processing_llava.py", line 160, in __call__
num_image_tokens = (height // self.patch_size) * (
```
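The innermost frame shows `height // self.patch_size` failing, meaning the LLaVA processor's `patch_size` is `None` at tokenization time. A minimal sketch of the failure and the usual mitigation (the value 14 is an assumption; it must match the vision tower actually loaded):

```python
# Reproduce the failing expression from processing_llava.py in isolation
patch_size = None
try:
    _ = 336 // patch_size
except TypeError as exc:
    print(exc)  # unsupported operand type(s) for //: 'int' and 'NoneType'

# Mitigation sketch: fall back to an explicit patch size before dividing
# (14 is an assumption; use the patch size of the vision tower you loaded)
patch_size = patch_size if patch_size is not None else 14
num_patches_per_side = 336 // patch_size
print(num_patches_per_side)  # 24
```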
## System Information
- **ComfyUI Version:** v0.3.9
- **Arguments:** D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\main.py --auto-launch --preview-method auto --disable-cuda-malloc --fast
- **OS:** nt
- **Python Version:** 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.5.1+cu124
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 4070 SUPER : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 12878086144
- **VRAM Free:** 9915858
- **Torch VRAM Total:** 17213423616
- **Torch VRAM Free:** 9915858
## Logs
```
2025-02-12T12:43:31.808123 - Adding extra search path checkpoints E:\AI\sd-webui-aki\sd-webui-aki-v4.1\models/Stable-diffusion
2025-02-12T12:43:31.808123 - Adding extra search path configs E:\AI\sd-webui-aki\sd-webui-aki-v4.1\models/Stable-diffusion
2025-02-12T12:43:31.808123 - Adding extra search path vae E:\AI\sd-webui-aki\sd-webui-aki-v4.1\models/VAE
2025-02-12T12:43:31.808123 - Adding extra search path loras E:\AI\sd-webui-aki\sd-webui-aki-v4.1\models/Lora
2025-02-12T12:43:31.808123 - Adding extra search path loras E:\AI\sd-webui-aki\sd-webui-aki-v4.1\models/LyCORIS
2025-02-12T12:43:31.808123 - Adding extra search path upscale_models E:\AI\sd-webui-aki\sd-webui-aki-v4.1\models/ESRGAN
2025-02-12T12:43:31.808123 - Adding extra search path upscale_models E:\AI\sd-webui-aki\sd-webui-aki-v4.1\models/RealESRGAN
2025-02-12T12:43:31.808123 - Adding extra search path upscale_models E:\AI\sd-webui-aki\sd-webui-aki-v4.1\models/SwinIR
2025-02-12T12:43:31.808123 - Adding extra search path embeddings E:\AI\sd-webui-aki\sd-webui-aki-v4.1\embeddings
2025-02-12T12:43:31.808123 - Adding extra search path hypernetworks E:\AI\sd-webui-aki\sd-webui-aki-v4.1\models/hypernetworks
2025-02-12T12:43:31.808123 - Adding extra search path controlnet E:\AI\sd-webui-aki\sd-webui-aki-v4.1\models\ControlNet
2025-02-12T12:43:32.016844 - [START] Security scan2025-02-12T12:43:32.016844 -
2025-02-12T12:43:35.962361 - [DONE] Security scan2025-02-12T12:43:35.962361 -
2025-02-12T12:43:36.121752 - ## ComfyUI-Manager: installing dependencies done.2025-02-12T12:43:36.121752 -
2025-02-12T12:43:36.122353 - ** ComfyUI startup time:2025-02-12T12:43:36.122353 - 2025-02-12T12:43:36.122353 - 2025-02-12 12:43:36.1212025-02-12T12:43:36.122353 -
2025-02-12T12:43:36.122353 - ** Platform:2025-02-12T12:43:36.122353 - 2025-02-12T12:43:36.122353 - Windows2025-02-12T12:43:36.122353 -
2025-02-12T12:43:36.122353 - ** Python version:2025-02-12T12:43:36.122353 - 2025-02-12T12:43:36.122353 - 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]2025-02-12T12:43:36.122353 -
2025-02-12T12:43:36.122353 - ** Python executable:2025-02-12T12:43:36.122353 - 2025-02-12T12:43:36.122353 - D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\python.exe2025-02-12T12:43:36.122353 -
2025-02-12T12:43:36.122353 - ** ComfyUI Path:2025-02-12T12:43:36.122353 - 2025-02-12T12:43:36.122353 - D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.42025-02-12T12:43:36.122353 -
2025-02-12T12:43:36.122353 - ** ComfyUI Base Folder Path:2025-02-12T12:43:36.123015 - 2025-02-12T12:43:36.123015 - D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.42025-02-12T12:43:36.123015 -
2025-02-12T12:43:36.123015 - ** User directory:2025-02-12T12:43:36.123015 - 2025-02-12T12:43:36.123015 - D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\user2025-02-12T12:43:36.123015 -
2025-02-12T12:43:36.123015 - ** ComfyUI-Manager config path:2025-02-12T12:43:36.123015 - 2025-02-12T12:43:36.123015 - D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\user\default\ComfyUI-Manager\config.ini2025-02-12T12:43:36.123015 -
2025-02-12T12:43:36.123015 - ** Log path:2025-02-12T12:43:36.123015 - 2025-02-12T12:43:36.123015 - D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\user\comfyui.log2025-02-12T12:43:36.123015 -
2025-02-12T12:43:40.436034 -
Prestartup times for custom nodes:2025-02-12T12:43:40.436034 -
2025-02-12T12:43:40.436034 - 0.0 seconds:2025-02-12T12:43:40.436034 - 2025-02-12T12:43:40.436034 - D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\rgthree-comfy2025-02-12T12:43:40.436034 -
2025-02-12T12:43:40.436034 - 0.0 seconds:2025-02-12T12:43:40.436034 - 2025-02-12T12:43:40.436034 - D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Marigold2025-02-12T12:43:40.436034 -
2025-02-12T12:43:40.436571 - 0.0 seconds:2025-02-12T12:43:40.436571 - 2025-02-12T12:43:40.436571 - D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Easy-Use2025-02-12T12:43:40.436571 -
2025-02-12T12:43:40.436571 - 8.6 seconds:2025-02-12T12:43:40.436571 - 2025-02-12T12:43:40.436571 - D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager2025-02-12T12:43:40.436571 -
2025-02-12T12:43:40.436571 -
2025-02-12T12:43:42.733696 - Total VRAM 12282 MB, total RAM 65277 MB
2025-02-12T12:43:42.734202 - pytorch version: 2.5.1+cu124
2025-02-12T12:43:44.027717 - xformers version: 0.0.28.post3
2025-02-12T12:43:44.027717 - Set vram state to: NORMAL_VRAM
2025-02-12T12:43:44.027717 - Device: cuda:0 NVIDIA GeForce RTX 4070 SUPER : cudaMallocAsync
2025-02-12T12:43:44.349176 - Using xformers attention
2025-02-12T12:43:46.104827 - [Prompt Server] web root: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\web
2025-02-12T12:43:47.717409 - [Crystools [0;32mINFO[0m] Crystools version: 1.22.0
2025-02-12T12:43:47.737313 - [Crystools [0;32mINFO[0m] CPU: 12th Gen Intel(R) Core(TM) i7-12700 - Arch: AMD64 - OS: Windows 10
2025-02-12T12:43:47.747727 - [Crystools [0;32mINFO[0m] Pynvml (Nvidia) initialized.
2025-02-12T12:43:47.747727 - [Crystools [0;32mINFO[0m] GPU/s:
2025-02-12T12:43:47.753784 - [Crystools [0;32mINFO[0m] 0) NVIDIA GeForce RTX 4070 SUPER
2025-02-12T12:43:47.753784 - [Crystools [0;32mINFO[0m] NVIDIA Driver: 566.36
2025-02-12T12:43:48.116028 - [34m[ComfyUI-Easy-Use] server: [0mv1.2.8 [92mLoaded[0m2025-02-12T12:43:48.116028 -
2025-02-12T12:43:48.116028 - [34m[ComfyUI-Easy-Use] web root: [0mD:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Easy-Use\web_version/v2 [92mLoaded[0m2025-02-12T12:43:48.116028 -
2025-02-12T12:43:48.179244 - ### Loading: ComfyUI-Impact-Pack (V8.8.1)2025-02-12T12:43:48.179244 -
2025-02-12T12:43:48.318565 - [Impact Pack] Wildcards loading done.2025-02-12T12:43:48.318565 -
2025-02-12T12:43:48.325865 - ### Loading: ComfyUI-Inspire-Pack (V1.13.1)2025-02-12T12:43:48.325865 -
2025-02-12T12:43:52.837971 - Total VRAM 12282 MB, total RAM 65277 MB
2025-02-12T12:43:52.837971 - pytorch version: 2.5.1+cu124
2025-02-12T12:43:52.838485 - xformers version: 0.0.28.post3
2025-02-12T12:43:52.838485 - Set vram state to: NORMAL_VRAM
2025-02-12T12:43:52.838485 - Device: cuda:0 NVIDIA GeForce RTX 4070 SUPER : cudaMallocAsync
2025-02-12T12:43:52.880265 - ### Loading: ComfyUI-Manager (V3.18.1)
2025-02-12T12:43:53.134496 - ### ComfyUI Version: v0.3.9 | Released on '2024-12-20'
2025-02-12T12:43:53.886258 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-02-12T12:43:54.077353 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-02-12T12:43:54.252338 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-02-12T12:43:54.678456 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-02-12T12:43:54.857797 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-02-12T12:43:55.130479 - (pysssss:WD14Tagger) [DEBUG] Available ORT providers: TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider2025-02-12T12:43:55.130479 -
2025-02-12T12:43:55.130479 - (pysssss:WD14Tagger) [DEBUG] Using ORT providers: CUDAExecutionProvider, CPUExecutionProvider2025-02-12T12:43:55.130479 -
2025-02-12T12:43:55.163208 - Workspace manager - Openning file hash dict2025-02-12T12:43:55.163849 -
2025-02-12T12:43:55.163849 - 🦄🦄Loading: Workspace Manager (V2.1.0)2025-02-12T12:43:55.163849 -
2025-02-12T12:43:55.172047 - ColorMod: Ignoring node 'CV2TonemapDurand' due to cv2 edition/version2025-02-12T12:43:55.172047 -
2025-02-12T12:43:55.205289 - ------------------------------------------2025-02-12T12:43:55.205289 -
2025-02-12T12:43:55.205804 - [34mComfyroll Studio v1.76 : [92m 175 Nodes Loaded[0m2025-02-12T12:43:55.205804 -
2025-02-12T12:43:55.205804 - ------------------------------------------2025-02-12T12:43:55.205804 -
2025-02-12T12:43:55.205804 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md2025-02-12T12:43:55.205804 -
2025-02-12T12:43:55.205804 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki2025-02-12T12:43:55.205804 -
2025-02-12T12:43:55.205804 - ------------------------------------------2025-02-12T12:43:55.205804 -
2025-02-12T12:43:55.218762 - [36;20m[comfyui_controlnet_aux] | INFO -> Using ckpts path: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\comfyui_controlnet_aux\ckpts[0m
2025-02-12T12:43:55.219264 - [36;20m[comfyui_controlnet_aux] | INFO -> Using symlinks: False[0m
2025-02-12T12:43:55.219264 - [36;20m[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider'][0m
2025-02-12T12:43:55.382143 - DWPose: Onnxruntime with acceleration providers detected2025-02-12T12:43:55.382143 -
2025-02-12T12:43:56.865079 - Cannot connect to comfyregistry.2025-02-12T12:43:56.865079 -
2025-02-12T12:43:56.905514 - nightly_channel: 2025-02-12T12:43:56.905514 -
2025-02-12T12:43:56.905514 - https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote2025-02-12T12:43:56.905514 -
2025-02-12T12:43:56.905514 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2025-02-12T12:43:56.905514 - 2025-02-12T12:43:58.269949 - [DONE]2025-02-12T12:43:58.270475 -
2025-02-12T12:43:58.306997 - [ComfyUI-Manager] All startup tasks have been completed.
2025-02-12T12:43:58.791080 - [1;35m
### [START] ComfyUI AlekPet Nodes [1;34mv1.0.48[0m[1;35m ###[0m2025-02-12T12:43:58.791080 -
2025-02-12T12:43:58.791080 - [92mNode -> ArgosTranslateNode: [93mArgosTranslateCLIPTextEncodeNode, ArgosTranslateTextNode[0m [92m[92m[Loading][0m[0m2025-02-12T12:43:58.791080 -
2025-02-12T12:43:58.791080 - [92mNode -> ChatGLMNode: [93mChatGLM4TranslateCLIPTextEncodeNode, ChatGLM4TranslateTextNode, ChatGLM4InstructNode, ChatGLM4InstructMediaNode[0m [92m[92m[Loading][0m[0m2025-02-12T12:43:58.791080 -
2025-02-12T12:43:58.791080 - [92mNode -> DeepTranslatorNode: [93mDeepTranslatorCLIPTextEncodeNode, DeepTranslatorTextNode[0m [92m[92m[Loading][0m[0m2025-02-12T12:43:58.791080 -
2025-02-12T12:43:58.791080 - [92mNode -> ExtrasNode: [93mPreviewTextNode, HexToHueNode, ColorsCorrectNode[0m [92m[92m[Loading][0m[0m2025-02-12T12:43:58.791080 -
2025-02-12T12:43:58.791080 - [92mNode -> GoogleTranslateNode: [93mGoogleTranslateCLIPTextEncodeNode, GoogleTranslateTextNode[0m [92m[92m[Loading][0m[0m2025-02-12T12:43:58.791080 -
2025-02-12T12:43:58.791761 - [92mNode -> IDENode: [93mIDENode[0m [92m[92m[Loading][0m[0m2025-02-12T12:43:58.791761 -
2025-02-12T12:43:58.791761 - [92mNode -> PainterNode: [93mPainterNode[0m [92m[92m[Loading][0m[0m2025-02-12T12:43:58.791761 -
2025-02-12T12:43:58.791761 - [92mNode -> PoseNode: [93mPoseNode[0m [92m[92m[Loading][0m[0m2025-02-12T12:43:58.791761 -
2025-02-12T12:43:58.791761 - [1;35m### [END] ComfyUI AlekPet Nodes ###[0m2025-02-12T12:43:58.791761 -
2025-02-12T12:43:58.827150 - Note: NumExpr detected 20 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
2025-02-12T12:43:58.827150 - NumExpr defaulting to 8 threads.
2025-02-12T12:43:59.153929 - [34mFizzleDorf Custom Nodes: [92mLoaded[0m2025-02-12T12:43:59.153929 -
2025-02-12T12:43:59.326592 - All packages from requirements.txt are installed and up to date.2025-02-12T12:43:59.326592 -
2025-02-12T12:43:59.329726 - llama-cpp installed2025-02-12T12:43:59.329726 -
2025-02-12T12:43:59.331647 - All packages from requirements.txt are installed and up to date.2025-02-12T12:43:59.331647 -
2025-02-12T12:43:59.684616 - [36;20m[comfy_mtb] | INFO -> loaded [96m94[0m nodes successfuly[0m
2025-02-12T12:43:59.685116 - [36;20m[comfy_mtb] | INFO -> Some nodes (2) could not be loaded. This can be ignored, but go to http://127.0.0.1:8188/mtb if you want more information.[0m
2025-02-12T12:43:59.726871 -
[36mEfficiency Nodes:[0m Attempting to add Control Net options to the 'HiRes-Fix Script' Node (comfyui_controlnet_aux add-on)...[92mSuccess![0m2025-02-12T12:43:59.726871 -
2025-02-12T12:43:59.740045 - [1;32m[Power Noise Suite]: 🦚🦚🦚 [93m[3mSup.[0m 🦚🦚🦚2025-02-12T12:43:59.740045 -
2025-02-12T12:43:59.740567 - [1;32m[Power Noise Suite]:[0m Tamed [93m11[0m wild nodes.2025-02-12T12:43:59.740567 -
2025-02-12T12:43:59.762014 -
2025-02-12T12:43:59.762014 - [92m[rgthree-comfy] Loaded 42 extraordinary nodes. 🎉[00m2025-02-12T12:43:59.762014 -
2025-02-12T12:43:59.762014 -
2025-02-12T12:43:59.779271 - [34mWAS Node Suite: [0mBlenderNeko's Advanced CLIP Text Encode found, attempting to enable `CLIPTextEncode` support.[0m2025-02-12T12:43:59.779271 -
2025-02-12T12:43:59.779271 - [34mWAS Node Suite: [0m`CLIPTextEncode (BlenderNeko Advanced + NSP)` node enabled under `WAS Suite/Conditioning` menu.[0m2025-02-12T12:43:59.779271 -
2025-02-12T12:44:01.840683 - [34mWAS Node Suite: [0mOpenCV Python FFMPEG support is enabled[0m2025-02-12T12:44:01.840683 -
2025-02-12T12:44:01.840683 - [34mWAS Node Suite [93mWarning: [0m`ffmpeg_bin_path` is not set in `D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\was-node-suite-comfyui\was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.[0m2025-02-12T12:44:01.840683 -
2025-02-12T12:44:03.882006 - [34mWAS Node Suite: [0mFinished.[0m [32mLoaded[0m [0m221[0m [32mnodes successfully.[0m2025-02-12T12:44:03.882006 -
2025-02-12T12:44:03.882006 -
[3m[93m"Art is the triumph over chaos."[0m[3m - John Cheever[0m
2025-02-12T12:44:03.882006 -
2025-02-12T12:44:03.891038 -
Import times for custom nodes:
2025-02-12T12:44:03.891038 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\comfyui-purgevram
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\AIGODLIKE-ComfyUI-Translation
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\websocket_image_save.py
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ControlNet-LLLite-ComfyUI
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\stability-ComfyUI-nodes
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\Comfyui_TTP_Toolset
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_ADV_CLIP_emb
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_TiledKSampler
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-GGUF
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-WD14-Tagger
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_experiments
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_HF_Servelress_Inference
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\PowerNoiseSuite
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_JPS-Nodes
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-TiledDiffusion
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_IPAdapter_plus
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\comfy_pixelization
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\images-grid-comfy-plugin
2025-02-12T12:44:03.891788 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_ColorMod
2025-02-12T12:44:03.892329 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_UltimateSDUpscale
2025-02-12T12:44:03.892329 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_essentials
2025-02-12T12:44:03.892329 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Custom-Scripts
2025-02-12T12:44:03.892329 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\Derfuu_ComfyUI_ModdedNodes
2025-02-12T12:44:03.892329 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\Comfyui_CXH_joy_caption
2025-02-12T12:44:03.892329 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Frame-Interpolation
2025-02-12T12:44:03.892329 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Advanced-ControlNet
2025-02-12T12:44:03.892329 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\efficiency-nodes-comfyui
2025-02-12T12:44:03.892329 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\rgthree-comfy
2025-02-12T12:44:03.892329 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\comfyui-workspace-manager
2025-02-12T12:44:03.892329 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-HunyuanVideoWrapper
2025-02-12T12:44:03.892329 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_Comfyroll_CustomNodes
2025-02-12T12:44:03.892329 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-AnimateDiff-Evolved
2025-02-12T12:44:03.892329 - 0.0 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-KJNodes
2025-02-12T12:44:03.892329 - 0.1 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Inspire-Pack
2025-02-12T12:44:03.892329 - 0.1 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Marigold
2025-02-12T12:44:03.892329 - 0.1 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-VideoHelperSuite
2025-02-12T12:44:03.892329 - 0.1 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Crystools
2025-02-12T12:44:03.892329 - 0.1 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_LayerStyle
2025-02-12T12:44:03.892329 - 0.1 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Impact-Pack
2025-02-12T12:44:03.892329 - 0.2 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\comfy_mtb
2025-02-12T12:44:03.892329 - 0.2 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_VLM_nodes
2025-02-12T12:44:03.892329 - 0.2 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\comfyui_controlnet_aux
2025-02-12T12:44:03.892329 - 0.3 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_FizzNodes
2025-02-12T12:44:03.892329 - 0.4 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Easy-Use
2025-02-12T12:44:03.892329 - 0.5 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager
2025-02-12T12:44:03.893020 - 0.6 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-CogVideoXWrapper
2025-02-12T12:44:03.893020 - 0.7 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-SUPIR
2025-02-12T12:44:03.893020 - 0.9 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-MMAudio
2025-02-12T12:44:03.893020 - 3.4 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI_Custom_Nodes_AlekPet
2025-02-12T12:44:03.893020 - 4.1 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\was-node-suite-comfyui
2025-02-12T12:44:03.893020 - 4.5 seconds: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Inspyrenet-Rembg
2025-02-12T12:44:03.893020 -
2025-02-12T12:44:03.909013 - Starting server
2025-02-12T12:44:03.909512 - To see the GUI go to: http://127.0.0.1:8188
2025-02-12T12:44:05.385966 - FETCH DATA from: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-Manager\extension-node-map.json2025-02-12T12:44:05.385966 - 2025-02-12T12:44:05.391064 - [DONE]2025-02-12T12:44:05.391064 -
2025-02-12T12:44:07.109434 - [36;20m[comfy_mtb] | INFO -> Found multiple match, we will pick the last E:\AI\sd-webui-aki\sd-webui-aki-v4.1\models/SwinIR
['D:\\ComfyUI-aki-v1.4\\ComfyUI-aki-v1.4\\models\\upscale_models', 'E:\\AI\\sd-webui-aki\\sd-webui-aki-v4.1\\models/ESRGAN', 'E:\\AI\\sd-webui-aki\\sd-webui-aki-v4.1\\models/RealESRGAN', 'E:\\AI\\sd-webui-aki\\sd-webui-aki-v4.1\\models/SwinIR'][0m
2025-02-12T12:44:17.803911 - got prompt
2025-02-12T12:44:19.478106 - Loading text encoder model (clipL) from: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\models\clip\clip-vit-large-patch14
2025-02-12T12:44:20.140168 - Text encoder to dtype: torch.float16
2025-02-12T12:44:20.141201 - Loading tokenizer (clipL) from: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\models\clip\clip-vit-large-patch14
2025-02-12T12:44:20.960107 - Loading text encoder model (vlm) from: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\models\LLM\llava-llama-3-8b-v1_1-transformers
2025-02-12T12:44:36.554073 - Text encoder to dtype: torch.bfloat16
2025-02-12T12:44:36.556635 - Loading tokenizer (vlm) from: D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\models\LLM\llava-llama-3-8b-v1_1-transformers
2025-02-12T12:44:41.771766 - !!! Exception during processing !!! unsupported operand type(s) for //: 'int' and 'NoneType'
2025-02-12T12:44:41.774302 - Traceback (most recent call last):
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 328, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 203, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 912, in process
prompt_embeds, negative_prompt_embeds, attention_mask, negative_attention_mask = encode_prompt(self,
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-HunyuanVideoWrapper\nodes.py", line 837, in encode_prompt
text_inputs = text_encoder.text2tokens(prompt,
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\custom_nodes\ComfyUI-HunyuanVideoWrapper\hyvideo\text_encoder\__init__.py", line 253, in text2tokens
text_tokens = self.processor(
File "D:\ComfyUI-aki-v1.4\ComfyUI-aki-v1.4\python\lib\site-packages\transformers\models\llava\processing_llava.py", line 160, in __call__
num_image_tokens = (height // self.patch_size) * (
TypeError: unsupported operand type(s) for //: 'int' and 'NoneType'
2025-02-12T12:44:41.775316 - Prompt executed in 23.94 seconds
```
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
{"last_node_id":39,"last_link_id":47,"nodes":[{"id":35,"type":"HyVideoBlockSwap","pos":[-306.51007080078125,-206.7708282470703],"size":[315,130],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"block_swap_args","type":"BLOCKSWAPARGS","links":[43],"label":"block_swap_args"}],"properties":{"Node name for S&R":"HyVideoBlockSwap"},"widgets_values":[20,0,false,false]},{"id":38,"type":"HyVideoTeaCache","pos":[537.4840698242188,423.1324157714844],"size":[273.6052551269531,59.25438690185547],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"teacache_args","type":"TEACACHEARGS","links":[47],"label":"teacache_args"}],"properties":{"Node name for S&R":"HyVideoTeaCache"},"widgets_values":[0.15]},{"id":7,"type":"HyVideoVAELoader","pos":[459.5226135253906,-226.34078979492188],"size":[379.166748046875,82],"flags":{},"order":2,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":null,"shape":7,"label":"compile_args"}],"outputs":[{"name":"vae","type":"VAE","links":[6],"slot_index":0,"label":"vae"}],"properties":{"Node name for S&R":"HyVideoVAELoader"},"widgets_values":["hunyuan_video_vae_bf16.safetensors","bf16"]},{"id":39,"type":"Note","pos":[548.7734375,533.5185546875],"size":[251.46731567382812,58],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[],"properties":{},"widgets_values":["0为不加速,0.1为1.6倍加速,0.15为2.1倍加速"],"color":"#432","bgcolor":"#653"},{"id":1,"type":"HyVideoModelLoader","pos":[29.825597763061523,-161.3166961669922],"size":[402.6761169433594,242],"flags":{},"order":6,"mode":0,"inputs":[{"name":"compile_args","type":"COMPILEARGS","link":null,"shape":7,"label":"compile_args"},{"name":"block_swap_args","type":"BLOCKSWAPARGS","link":43,"shape":7,"label":"block_swap_args"},{"name":"lora","type":"HYVIDLORA","link":null,"shape":7,"label":"lora"}],"outputs":[{"name":"model","type":"HYVIDEOMODEL","links":[2],"slot_index":0,"label":"model"}],"properties":{"Node name for 
S&R":"HyVideoModelLoader"},"widgets_values":["hunyuan_video_720_cfgdistill_fp8_e4m3fn.safetensors","bf16","fp8_e4m3fn","main_device","sdpa",false,true]},{"id":16,"type":"DownloadAndLoadHyVideoTextEncoder","pos":[-481.3341064453125,342.2308349609375],"size":[441,202],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"hyvid_text_encoder","type":"HYVIDTEXTENCODER","links":[35,45],"slot_index":0,"label":"hyvid_text_encoder"}],"properties":{"Node name for S&R":"DownloadAndLoadHyVideoTextEncoder"},"widgets_values":["xtuner/llava-llama-3-8b-v1_1-transformers","openai/clip-vit-large-patch14","bf16",false,2,"disabled","offload_device"]},{"id":3,"type":"HyVideoSampler","pos":[534.5980224609375,-84.14486694335938],"size":[315,418],"flags":{},"order":9,"mode":0,"inputs":[{"name":"model","type":"HYVIDEOMODEL","link":2,"label":"model"},{"name":"hyvid_embeds","type":"HYVIDEMBEDS","link":46,"label":"hyvid_embeds"},{"name":"samples","type":"LATENT","link":null,"shape":7,"label":"samples"},{"name":"stg_args","type":"STGARGS","link":null,"shape":7,"label":"stg_args"},{"name":"context_options","type":"HYVIDCONTEXT","link":null,"shape":7,"label":"context_options"},{"name":"feta_args","type":"FETAARGS","link":null,"shape":7,"label":"feta_args"},{"name":"teacache_args","type":"TEACACHEARGS","link":47,"shape":7,"label":"teacache_args"}],"outputs":[{"name":"samples","type":"LATENT","links":[4],"slot_index":0,"label":"samples"}],"properties":{"Node name for 
S&R":"HyVideoSampler"},"widgets_values":[512,768,49,20,6,9,2,"fixed",1,1,"FlowMatchDiscreteScheduler"]},{"id":30,"type":"HyVideoTextEncode","pos":[-438.4667663574219,14.155832290649414],"size":[400,200],"flags":{},"order":7,"mode":0,"inputs":[{"name":"text_encoders","type":"HYVIDTEXTENCODER","link":35,"label":"text_encoders"},{"name":"custom_prompt_template","type":"PROMPT_TEMPLATE","link":null,"shape":7,"label":"custom_prompt_template"},{"name":"clip_l","type":"CLIP","link":null,"shape":7,"label":"clip_l"},{"name":"hyvid_cfg","type":"HYVID_CFG","link":null,"shape":7,"label":"hyvid_cfg"}],"outputs":[{"name":"hyvid_embeds","type":"HYVIDEMBEDS","links":[],"label":"hyvid_embeds"}],"properties":{"Node name for S&R":"HyVideoTextEncode"},"widgets_values":["In a close-up shot, a Chinese girl wearing a bikini with a great figure is smiling as she looks at the camera.. A high-quality live action movie with a girl as the protagonist","bad quality video","video",[false,true]]},{"id":5,"type":"HyVideoDecode","pos":[880.83251953125,-230.842529296875],"size":[345.4285888671875,150],"flags":{},"order":10,"mode":0,"inputs":[{"name":"vae","type":"VAE","link":6,"label":"vae"},{"name":"samples","type":"LATENT","link":4,"label":"samples"}],"outputs":[{"name":"images","type":"IMAGE","links":[42],"slot_index":0,"label":"images"}],"properties":{"Node name for S&R":"HyVideoDecode"},"widgets_values":[true,8,256,true]},{"id":34,"type":"VHS_VideoCombine","pos":[1328.113525390625,-235.0919189453125],"size":[371.7926940917969,334],"flags":{},"order":11,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":42,"shape":7,"label":"图像"},{"name":"audio","type":"AUDIO","link":null,"shape":7,"label":"音频"},{"name":"meta_batch","type":"VHS_BatchManager","link":null,"shape":7,"label":"批次管理"},{"name":"vae","type":"VAE","link":null,"shape":7}],"outputs":[{"name":"Filenames","type":"VHS_FILENAMES","links":null,"label":"文件名"}],"properties":{"Node name for 
S&R":"VHS_VideoCombine"},"widgets_values":{"frame_rate":16,"loop_count":0,"filename_prefix":"HunyuanVideo","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":19,"save_metadata":false,"trim_to_audio":false,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"HunyuanVideo_00019.mp4","subfolder":"","type":"output","format":"video/h264-mp4","frame_rate":16,"workflow":"HunyuanVideo_00019.png","fullpath":"E:\\999999\\ComfyUI_HunYuanVideo\\ComfyUI_HunYuanVideo\\ComfyUI\\output\\HunyuanVideo_00019.mp4"},"muted":false}}},{"id":37,"type":"LoadImage","pos":[927.2908935546875,5.064525127410889],"size":[308.3768615722656,504.968017578125],"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[44],"slot_index":0,"label":"图像"},{"name":"MASK","type":"MASK","links":null,"label":"遮罩"}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["IMG_0212 拷贝2.jpg","image"]},{"id":36,"type":"HyVideoTextImageEncode","pos":[57.1776123046875,263.9744567871094],"size":[440.0885925292969,406.17828369140625],"flags":{},"order":8,"mode":0,"inputs":[{"name":"text_encoders","type":"HYVIDTEXTENCODER","link":45,"label":"text_encoders"},{"name":"custom_prompt_template","type":"PROMPT_TEMPLATE","link":null,"shape":7,"label":"custom_prompt_template"},{"name":"clip_l","type":"CLIP","link":null,"shape":7,"label":"clip_l"},{"name":"image1","type":"IMAGE","link":44,"shape":7,"label":"image1"},{"name":"image2","type":"IMAGE","link":null,"shape":7,"label":"image2"},{"name":"hyvid_cfg","type":"HYVID_CFG","link":null,"shape":7,"label":"hyvid_cfg"}],"outputs":[{"name":"hyvid_embeds","type":"HYVIDEMBEDS","links":[46],"slot_index":0,"label":"hyvid_embeds"}],"properties":{"Node name for S&R":"HyVideoTextImageEncode"},"widgets_values":["describe this <image> in great 
detail","::4",true,"video","",[false,true],[false,true]]}],"links":[[2,1,0,3,0,"HYVIDEOMODEL"],[4,3,0,5,1,"LATENT"],[6,7,0,5,0,"VAE"],[35,16,0,30,0,"HYVIDTEXTENCODER"],[42,5,0,34,0,"IMAGE"],[43,35,0,1,1,"BLOCKSWAPARGS"],[44,37,0,36,3,"IMAGE"],[45,16,0,36,0,"HYVIDTEXTENCODER"],[46,36,0,3,1,"HYVIDEMBEDS"],[47,38,0,3,6,"TEACACHEARGS"]],"groups":[{"id":2,"title":"","bounding":[-347.1317443847656,-391.4261474609375,1911.9700927734375,80],"color":"#3f789e","font_size":24,"flags":{}}],"config":{},"extra":{"ds":{"scale":0.8264462809917354,"offset":[535.475211633075,-13.09287618524091]},"node_versions":{"ComfyUI-HunyuanVideoWrapper":"unknown","ComfyUI-VideoHelperSuite":"8629188458dc6cb832f871ece3bd273507e8a766","comfy-core":"v0.3.9"}},"version":0.4}
```
## Additional Context
(Please add any additional context or steps to reproduce the error here)
### Logs
```powershell
```
### Other
_No response_ | closed | 2025-02-12T04:49:07Z | 2025-02-12T22:31:37Z | https://github.com/comfyanonymous/ComfyUI/issues/6791 | [
"User Support"
] | peter7029 | 2 |
gevent/gevent | asyncio | 2,003 | client hang with gevent time out | * gevent version: Please note how you installed it: From source, from
PyPI, from your operating system's package, etc.
* Python version: Please be as specific as possible. For example,
"cPython 2.7.9 downloaded from python.org"
* Operating System: Please be as specific as possible. For example,
"Raspbian (Debian Linux 8.0 Linux 4.9.35-v7+ armv7l)"
### Description:
The Triton client calls `infer`, but gets the following error:

```python-traceback
  File "/opt/python/python-3.10.0/lib/python3.10/site-packages/tritonclient/http/_client.py", line 1462, in infer
    response = self._post(
  File "/opt/python/python-3.10.0/lib/python3.10/site-packages/tritonclient/http/_client.py", line 290, in _post
    response = self._client_stub.post(
  File "/opt/python/python-3.10.0/lib/python3.10/site-packages/geventhttpclient/client.py", line 272, in post
    return self.request(METHOD_POST, request_uri, body=body, headers=headers)
  File "/opt/python/python-3.10.0/lib/python3.10/site-packages/geventhttpclient/client.py", line 253, in request
    response = HTTPSocketPoolResponse(sock, self._connection_pool,
  File "/opt/python/python-3.10.0/lib/python3.10/site-packages/geventhttpclient/response.py", line 292, in __init__
    super(HTTPSocketPoolResponse, self).__init__(sock, **kw)
  File "/opt/python/python-3.10.0/lib/python3.10/site-packages/geventhttpclient/response.py", line 164, in __init__
    self._read_headers()
  File "/opt/python/python-3.10.0/lib/python3.10/site-packages/geventhttpclient/response.py", line 184, in _read_headers
    data = self._sock.recv(self.block_size)
  File "/opt/python/python-3.10.0/lib/python3.10/site-packages/gevent/_socketcommon.py", line 666, in recv
    self._wait(self._read_event)
  File "src/gevent/_hub_primitives.py", line 317, in gevent._gevent_c_hub_primitives.wait_on_socket
  File "src/gevent/_hub_primitives.py", line 322, in gevent._gevent_c_hub_primitives.wait_on_socket
  File "src/gevent/_hub_primitives.py", line 313, in gevent._gevent_c_hub_primitives._primitive_wait
  File "src/gevent/_hub_primitives.py", line 314, in gevent._gevent_c_hub_primitives._primitive_wait
  File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
  File "src/gevent/_hub_primitives.py", line 46, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
  File "src/gevent/_hub_primitives.py", line 55, in gevent._gevent_c_hub_primitives.WaitOperationsGreenlet.wait
  File "src/gevent/_waiter.py", line 154, in gevent._gevent_c_waiter.Waiter.get
  File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
  File "src/gevent/_greenlet_primitives.py", line 61, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
  File "src/gevent/_greenlet_primitives.py", line 65, in gevent._gevent_c_greenlet_primitives.SwitchOutGreenletWithLoop.switch
  File "src/gevent/_gevent_c_greenlet_primitives.pxd", line 35, in gevent._gevent_c_greenlet_primitives._greenlet_switch
TimeoutError: timed out
```
### What I've run:
triton: client.infer
```python
import numpy as np
import tritonclient.http as httpclient
from tritonclient.utils import np_to_triton_dtype
import grpc._cython.cygrpc

grpc._cython.cygrpc.init_grpc_gevent()

def prepare_tensor(name, input):
    t = httpclient.InferInput(name, input.shape,
                              np_to_triton_dtype(input.dtype))
    t.set_data_from_numpy(input)
    return t

TRITON_URL = "localhost:8000"
client = httpclient.InferenceServerClient(TRITON_URL)

prompt = "How do I count to nine in French?"
inputs = [
    prepare_tensor("text_input", np.array([[prompt]], dtype=object)),
    prepare_tensor("max_tokens", np.array([[100]], dtype=np.uint32)),
    prepare_tensor("bad_words", np.array([[""]], dtype=object)),
    prepare_tensor("stop_words", np.array([[""]], dtype=object))
]

result = client.infer("ensemble", inputs)
print(result)
```
| closed | 2023-10-24T07:09:10Z | 2023-10-24T10:26:24Z | https://github.com/gevent/gevent/issues/2003 | [] | bigmover | 1 |
ray-project/ray | tensorflow | 51,243 | [Data] `StandardScaler.transform()` cannot handle non-`NAN` columns when `StandardScaler.stats_` is `NAN` for that column | ### What happened + What you expected to happen
`StandardScaler.transform()` should ideally return `NaN` for columns whose `StandardScaler.stats_` entry is `None`. Instead, the current implementation raises:
```
TypeError: unsupported operand type(s) for -: 'float' and 'NoneType'
```
### Versions / Dependencies
-
### Reproduction script
```
import pandas as pd
import ray
from ray.data.preprocessors import StandardScaler
df_sample = pd.DataFrame({'A': [-1, 0], 'B': [None, None]})
ds_sample = ray.data.from_pandas(df_sample)
pp = StandardScaler(columns=['A', 'B'])
pp = pp.fit(ds_sample)
# `None` mean/std for C.
print(f"preprocessor stats: {pp.stats_}")
df_transform = pd.DataFrame({'A': [0, 0], 'B': [0, None]})
ds_transform = ray.data.from_pandas(df_transform)
print(pp.transform(ds_transform).show()) # Errors
```
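For reference, the expected behavior can be sketched with plain NumPy. The `stats` dict below is a hypothetical stand-in for the fitted `stats_` (it is not Ray's actual internal format):

```python
import numpy as np

# Hypothetical fitted stats: column B was all-null, so its mean/std are None.
stats = {"mean(A)": -0.5, "std(A)": 0.5, "mean(B)": None, "std(B)": None}

def scale(col, x):
    mean, std = stats[f"mean({col})"], stats[f"std({col})"]
    if mean is None or std is None:
        return np.nan  # proposed NaN fallback instead of computing (x - None)
    return (x - mean) / std

print(scale("A", 0.0))  # 1.0
print(scale("B", 0.0))  # nan, rather than a TypeError
```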
### Issue Severity
High: It blocks me from completing my task. | closed | 2025-03-11T12:00:36Z | 2025-03-12T21:21:52Z | https://github.com/ray-project/ray/issues/51243 | [
"bug",
"P1",
"data"
] | jpatra72 | 0 |
iperov/DeepFaceLab | machine-learning | 662 | Videos won't play on iPhone | Last time the output videos didn't play on the iPhone on a recent version, someone suggested doing this...

If you're on PC you can try editing the VideoEd file in `F:\DF\DFL2.0\_internal\DeepFaceLab\mainscripts` and changing the line from

`output_kwargs.update ({"c:v": "libx265",`

to

`output_kwargs.update ({"c:v": "libx264",`

But in the latest versions it already says libx264 and the videos still won't download to the iPhone or play. They don't even show up in my Photos album, just like when it was 265. Any fix for that? I tried exporting a MOV file as well...
For some reason the iPhone says it's downloading them from my google drive, appears to download 100%, and then there's just no video in the folder. On older versions I've done this a million times and the video always shows in the photos album and plays... Thanks! | closed | 2020-03-19T19:10:18Z | 2020-03-21T06:53:33Z | https://github.com/iperov/DeepFaceLab/issues/662 | [] | kilerb | 2 |
ResidentMario/geoplot | matplotlib | 14 | Write tests | closed | 2016-12-18T18:43:06Z | 2016-12-31T02:49:31Z | https://github.com/ResidentMario/geoplot/issues/14 | [
"enhancement"
] | ResidentMario | 0 | |
horovod/horovod | tensorflow | 3,354 | Trying to get in touch regarding a security issue | Hey there!
I belong to an open source security research community, and a member (@srikanthprathi) has found an issue, but doesn’t know the best way to disclose it.
If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future.
Thank you for your consideration, and I look forward to hearing from you!
(cc @huntr-helper) | closed | 2022-01-09T00:13:04Z | 2022-01-19T19:25:36Z | https://github.com/horovod/horovod/issues/3354 | [] | JamieSlome | 3 |
BlinkDL/RWKV-LM | pytorch | 184 | Incremental training from the world-chinese 1.5B ckpt reaches an absolute loss of about 2.7? |
Continuing training from the world-chinese 1.5B ckpt, the absolute loss reaches 2.74, far higher than the 2.0 loss at 332B in the RWKV Pile chart. I'd like to ask what is going on? | open | 2023-09-14T14:18:07Z | 2023-09-23T15:29:09Z | https://github.com/BlinkDL/RWKV-LM/issues/184 | [] | AugF | 1 |
aleju/imgaug | machine-learning | 537 | AttributeError: module 'numpy.random' has no attribute 'bit_generator' | imgaug raises `AttributeError: module 'numpy.random' has no attribute 'bit_generator'`.
I don't know what this error is.
Can you help me? Thank you.
| open | 2019-12-25T08:53:55Z | 2022-03-13T20:25:57Z | https://github.com/aleju/imgaug/issues/537 | [] | cqray1990 | 9 |
tox-dev/tox | automation | 2,814 | Regression for usage of pip options | ## Issue
Since version 4.0 tox fails when encountering pip options as part of the dependencies.
The following example works fine with tox <4.0, but doesn't work with tox >=4.0 anymore.
```
$ cat requirements.txt
--no-binary pydantic
$ cat tox.ini
[tox]
envlist = py3
[testenv]
deps = -r requirements.txt
allowlist_externals = echo
commands = echo "hello world"
$ tox --version
4.2.1 from /home/user/.local/lib/python3.10/site-packages/tox/__init__.py
$ tox -r
py3: remove tox env folder /dir/.tox/py3
py3: failed with argument --no-binary: invalid choice: 'pydantic' (choose from ':all:', ':none:') for tox env py within deps
py3: FAIL code 1 (0.22 seconds)
evaluation failed :( (0.33 seconds)
``` | closed | 2023-01-04T08:56:33Z | 2023-06-17T01:12:10Z | https://github.com/tox-dev/tox/issues/2814 | [
"bug:minor",
"help:wanted"
] | Dunedan | 13 |
schemathesis/schemathesis | graphql | 2,755 | [FEATURE] allow setting CheckConfig and using schemathesis.toml when doing OpenAPI tests | ### Is your feature request related to a problem? Please describe
Schemathesis has several settings, such as negative_data_rejection_allowed_statuses, that can be used to customize the tests.
According to the docs, these can be set in schemathesis.toml, but from what I can see, schemathesis.toml is only used when using the cli runner, not when using `@schema.parametrize(...)` and `schema.from_dict` in a pytest test, meaning these customizations are unavailable depending on how tests are run
### Describe the solution you'd like
- parse `schemathesis.toml` when running tests from code
- parse the environment variables for these settings when running tests from code
- allow setting these options, like CheckConfig, in code, e.g. in the `schema.from_dict` or `schema.parametrize` functions/decorators, or potentially in `case.validate_response`
### Describe alternatives you've considered
I tried monkeypatching values I want into negative_data_rejection_allowed_statuses but monkeypatching Python dataclasses is pretty tricky and I could not get it to work
### Additional context
There's two other minor issues I wanted to mention not really related to this issue but that came up while looking into it:
- following the documentation link in the README of this repo leads to docs that are not complete, e.g. https://schemathesis.github.io/schemathesis/using/python-integration/ is empty. the readthedocs page seems to be more correct and have the respective content https://schemathesis.readthedocs.io/en/stable/python.html If the rtd ones are the correct ones, the README should point to them, otherwise the github pages ones might need some looking into.
- the [docs](https://github.com/schemathesis/schemathesis/blob/master/docs/using/configuration.md#global-check-configuration) mention a setting `negative_data_rejection.expected-statuses` but looking at the code, I believe this should be [`negative_data_rejection.allowed-statuses`](https://github.com/schemathesis/schemathesis/blob/master/src/schemathesis/openapi/checks.py#L17) (not sure if the hyphen is correct or if it should be all underscores, though)
| open | 2025-03-05T07:34:38Z | 2025-03-05T09:29:13Z | https://github.com/schemathesis/schemathesis/issues/2755 | [
"Status: Needs Triage",
"Type: Feature"
] | Panaetius | 2 |
jacobgil/pytorch-grad-cam | computer-vision | 205 | Problem with inception | I tried ResNet and DenseNet and they worked fine, but I faced difficulties with Inception v3. Can you please suggest how to implement this for Inception v3?
```python
model = torch.load(ModelName)
model.eval()
target_layers = model._modules.get('Mixed_7c')
cam = GradCAM(model=model, target_layers=target_layers, use_cuda=True)
```
But I got the following error. Can you please help me fix the issue or suggest any changes?
```
C:\ProgramData\Anaconda3\envs\DeepLearning2\python.exe C:/Abhishek/Research/LiverClassification/ActivationLayer.py
Traceback (most recent call last):
  File "C:/Research/LiverClassification/ActivationLayer.py", line 60, in <module>
    cam = GradCAM(model=model, target_layers=target_layers, use_cuda=True)
  File "C:\ProgramData\Anaconda3\envs\DeepLearning2\lib\site-packages\pytorch_grad_cam\grad_cam.py", line 8, in __init__
    super(
  File "C:\ProgramData\Anaconda3\envs\DeepLearning2\lib\site-packages\pytorch_grad_cam\base_cam.py", line 27, in __init__
    self.activations_and_grads = ActivationsAndGradients(
  File "C:\ProgramData\Anaconda3\envs\DeepLearning2\lib\site-packages\pytorch_grad_cam\activations_and_gradients.py", line 11, in __init__
    for target_layer in target_layers:
TypeError: 'InceptionE' object is not iterable
Exception ignored in: <function BaseCAM.__del__ at 0x000001F62F615DC0>
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\envs\DeepLearning2\lib\site-packages\pytorch_grad_cam\base_cam.py", line 188, in __del__
    self.activations_and_grads.release()
AttributeError: 'GradCAM' object has no attribute 'activations_and_grads'
```
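Judging from the first traceback, `ActivationsAndGradients` iterates over `target_layers`, so `GradCAM` appears to expect a list of modules (something like `target_layers=[model.Mixed_7c]`; this is an inference from the traceback, not the documented API). A torch-free sketch of the difference:

```python
# Torch-free stand-ins illustrating the TypeError above: hook registration
# loops over target_layers, so passing a single bare module fails.
class FakeModule:  # stands in for model.Mixed_7c
    pass

def register_hooks(target_layers):
    registered = []
    for layer in target_layers:  # TypeError if given one module, not a list
        registered.append(layer)
    return registered

layer = FakeModule()
try:
    register_hooks(layer)
except TypeError:
    print("single module: TypeError")  # mirrors "'InceptionE' object is not iterable"

print("list of modules:", len(register_hooks([layer])))  # list of modules: 1
```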
| closed | 2022-02-24T23:48:04Z | 2022-03-18T01:31:18Z | https://github.com/jacobgil/pytorch-grad-cam/issues/205 | [] | midya2020 | 2 |
qubvel-org/segmentation_models.pytorch | computer-vision | 714 | Is there a way to initialize the weights with pre-trained weights? | I have a U-net model trained on an area A where data is abundant. To do transfer learning on a new study area B, I wanted to train the same model on a small dataset collected from area B, but initialize it with the weights from A. Is there a way to initialize the weights with pre-trained weights? I looked at the U-net documentation, and it seems that encoder_weights is either None, imagenet, or some other pre-trained weights. Is there a way to do this? Thank you. | closed | 2023-02-03T00:40:21Z | 2023-03-27T14:51:49Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/714 | [] | MirandaLv | 5 |
Hironsan/BossSensor | computer-vision | 34 | Your program had better not have bugs, otherwise if my boss catches me slacking off because of it, I'll beat you up [doge] | open | 2023-08-21T08:47:35Z | 2023-08-21T08:48:11Z | https://github.com/Hironsan/BossSensor/issues/34 | [] | young-lee-young | 1 | |
jina-ai/serve | machine-learning | 5,660 | Executor dict return does not work as in the documentation: | https://docs.jina.ai/concepts/executor/add-endpoints/#returns | closed | 2023-02-07T07:52:53Z | 2023-02-07T16:06:37Z | https://github.com/jina-ai/serve/issues/5660 | [] | alaeddine-13 | 1 |
pytorch/vision | computer-vision | 8,625 | ImageReadMode should support strings | It's pretty inconvenient to have to import ImageReadMode just to ask `decode_*` for an RGB image. We should just allow strings as well.
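A minimal sketch of the string handling being requested, with a trimmed stand-in enum (the alias table is illustrative, not torchvision's actual API):

```python
from enum import Enum

class ImageReadMode(Enum):  # trimmed stand-in for torchvision's enum
    RGB = 3
    RGB_ALPHA = 4

def to_mode(mode):
    """Accept either an ImageReadMode member or a case-insensitive string."""
    if isinstance(mode, str):
        aliases = {"RGBA": "RGB_ALPHA"}  # PIL-style alias
        name = mode.upper()
        return ImageReadMode[aliases.get(name, name)]
    return mode

print(to_mode("rgb").name)   # RGB
print(to_mode("RGBA").name)  # RGB_ALPHA
```

With a shim like this, `decode_*` could take `mode="RGB"` directly while still accepting the enum for backwards compatibility.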
Also, `RGBA` should be a valid option (like in PIL). `RGB_ALPHA` is... long. | closed | 2024-09-03T14:46:03Z | 2024-09-04T10:38:52Z | https://github.com/pytorch/vision/issues/8625 | [] | NicolasHug | 0 |
dsdanielpark/Bard-API | nlp | 34 | Large volume of requests may result in a ban | I was sending requests to Bard to summarize a paper, and it started throwing a Response Error halfway through. This happens regardless of the token, and Bard no longer accepts requests from Python.
Requests from web browsers are successfully accepted, so there may be some way to fix this.
Do you have any ideas?
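One workaround idea (a generic sketch; `send_request` is a placeholder for whatever function issues the Bard call, not Bard-API's actual API): space out retries with exponential backoff, in case a burst of requests is what trips the ban:

```python
import random
import time

def call_with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Retry a callable with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return send_request()
        except Exception:
            if attempt == max_retries - 1:
                raise
            # base_delay * (1, 2, 4, ...) seconds, plus jitter
            time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)
```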
```
Response Error: b')]}\'\n\n38\n[["wrb.fr",null,null,null,null,[8]]]\n55\n[["di",90],["af.httprm",89,"-5846312931853295462",1]]\n25\n[["e",4,null,null,130]]\n'.
```
| closed | 2023-05-23T16:20:32Z | 2024-01-18T15:55:52Z | https://github.com/dsdanielpark/Bard-API/issues/34 | [] | yama-yeah | 12 |
graphql-python/graphene-sqlalchemy | sqlalchemy | 409 | A way to add custom filters not specific to fields | In graphene-sqlalchemy-filters, one of the benefits was that custom filters didn't have to be associated with fields. The new filter setup seems to cover 90% of the use cases, which is great! But having custom filters unassociated with fields would be amazing. Are there any plans to support this?
I mention a potential solution in the discussion here
https://github.com/graphql-python/graphene-sqlalchemy/discussions/408
| open | 2024-03-20T00:47:41Z | 2024-03-20T00:47:41Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/409 | [] | adiberk | 0 |
pydantic/pydantic-core | pydantic | 872 | Fix the Float Serializer with Infinite and NaN Number Values | Currently, when we receive a `float('inf')`, the serializer respects the RFC standard and transforms it into a null value.
However, Python's `json` module emits `'-Infinity'`, `'Infinity'`, and `'NaN'` if `allow_nan` is `True`: https://docs.python.org/3.8/library/json.html#infinite-and-nan-number-values
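For reference, this is the stdlib behavior being cited (standard-library `json`, easy to verify):

```python
import json

# With allow_nan=True (the default), json.dumps emits the non-RFC tokens
print(json.dumps([float("inf"), float("-inf"), float("nan")]))
# -> [Infinity, -Infinity, NaN]

# With allow_nan=False it raises instead of emitting null
try:
    json.dumps(float("inf"), allow_nan=False)
except ValueError as exc:
    print(exc)
```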
Selected Assignee: @samuelcolvin | closed | 2023-08-10T08:34:58Z | 2023-11-06T18:24:53Z | https://github.com/pydantic/pydantic-core/issues/872 | [] | JeanArhancet | 3 |
jmcnamara/XlsxWriter | pandas | 647 | Temporary files fill up temporary directories | If the filename parameter is in an invalid format, it raises an error
workbook.py:
```python
# Error Points
xlsx_file = ZipFile(self.filename, "w", compression=ZIP_DEFLATED,
allowZip64=self.allow_zip64)
#......
#......
# Unable to execute
os.remove(os_filename)
```
Generated temporary files cannot be deleted
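A sketch of the fix idea (hypothetical names; `write_zip` stands in for the ZipFile work in `workbook.py`): wrap the write in `try/finally` so the temporary file is removed even when `ZipFile()` raises:

```python
import os

def close_with_cleanup(write_zip, os_filename):
    """Remove the temporary file even if writing the zip fails."""
    try:
        write_zip()
    finally:
        # runs whether write_zip() succeeded or raised
        if os.path.exists(os_filename):
            os.remove(os_filename)
```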
| closed | 2019-08-08T07:48:50Z | 2019-08-26T00:01:17Z | https://github.com/jmcnamara/XlsxWriter/issues/647 | [
"awaiting user feedback"
] | Mrchangblog | 7 |
pytest-dev/pytest-xdist | pytest | 992 | pytest-xdist does not work with vscode test-explorer when using --dist=loadgroup | I believe ultimately this is a consequence of the testcase name mangling that `pytest-xdist` does (adding @<group> to the testcase name).
I don't know if this should be considered a `pytest-xdist` issue or a vscode Python extension issue...

Even though it does run both tests

| open | 2023-12-20T05:19:01Z | 2025-02-17T20:12:08Z | https://github.com/pytest-dev/pytest-xdist/issues/992 | [] | TylerGrantSmith | 6 |
JoeanAmier/TikTokDownloader | api | 326 | Version 5.4 fails to download some posts, causing the program to exit immediately | <img width="578" alt="image" src="https://github.com/user-attachments/assets/a59cc7f3-7a76-43fc-a2fa-7a542efe5549">
The download gets stuck in the cache folder; the incomplete file cannot be opened.
The program exits immediately after showing this error message.
https://v.douyin.com/iAnv2qAg/
Post link
Reproduced both when batch-downloading an author's posts and when downloading a single post. | closed | 2024-11-17T10:48:24Z | 2025-01-12T10:17:02Z | https://github.com/JoeanAmier/TikTokDownloader/issues/326 | [] | tlegion | 0
ydataai/ydata-profiling | jupyter | 823 | Profiling is not generating the report | Not producing a report while using the attached titanic dataset.
Code :
df=pd.read_csv('train.csv')
df.head()
Id | MSSubClass | MSZoning | LotFrontage | LotArea | Street | Alley | LotShape | LandContour | Utilities | ... | PoolArea | PoolQC | Fence | MiscFeature | MiscVal | MoSold | YrSold | SaleType | SaleCondition | SalePrice
-- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | -- | --
1 | 60 | RL | 65.0 | 8450 | Pave | NaN | Reg | Lvl | AllPub | ... | 0 | NaN | NaN | NaN | 0 | 2 | 2008 | WD | Normal | 208500
2 | 20 | RL | 80.0 | 9600 | Pave | NaN | Reg | Lvl | AllPub | ... | 0 | NaN | NaN | NaN | 0 | 5 | 2007 | WD | Normal | 181500
3 | 60 | RL | 68.0 | 11250 | Pave | NaN | IR1 | Lvl | AllPub | ... | 0 | NaN | NaN | NaN | 0 | 9 | 2008 | WD | Normal | 223500
4 | 70 | RL | 60.0 | 9550 | Pave | NaN | IR1 | Lvl | AllPub | ... | 0 | NaN | NaN | NaN | 0 | 2 | 2006 | WD | Abnorml | 140000
5 | 60 | RL | 84.0 | 14260 | Pave | NaN | IR1 | Lvl | AllPub | ... | 0 | NaN | NaN | NaN | 0 | 12 | 2008 | WD | Normal | 250000
profile = ProfileReport(df,title='Pandas Profiling Report',explorative=True)
profile.to_widgets()
Error :
C:\Python\Anaconda3\lib\site-packages\pandas_profiling\model\correlations.py:146: UserWarning: There was an attempt to calculate the cramers correlation, but this failed.
To hide this warning, disable the calculation
(using `df.profile_report(correlations={"cramers": {"calculate": False}})`
If this is problematic for your use case, please report this as an issue:
https://github.com/pandas-profiling/pandas-profiling/issues
(include the error message: 'No data; `observed` has size 0.')
warnings.warn(
[train.csv](https://github.com/pandas-profiling/pandas-profiling/files/7111316/train.csv)
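Independently of the missing report, the warning above suggests disabling the failing correlation; a sketch of the settings it names (keys taken verbatim from the warning text):

```python
# Correlation overrides suggested by the warning message itself
correlation_overrides = {"cramers": {"calculate": False}}

# Usage sketch:
#   ProfileReport(df, title="Pandas Profiling Report",
#                 explorative=True, correlations=correlation_overrides)
```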
| closed | 2021-09-05T09:18:09Z | 2021-09-15T20:10:24Z | https://github.com/ydataai/ydata-profiling/issues/823 | [] | vvsenthil | 2 |
Aeternalis-Ingenium/FastAPI-Backend-Template | sqlalchemy | 31 | AttributeError: 'pydantic_core._pydantic_core.MultiHostUrl' object has no attribute 'replace' | backend_app | File "/usr/backend/src/repository/database.py", line 42, in <module>
backend_app | async_db: AsyncDatabase = AsyncDatabase()
backend_app | ^^^^^^^^^^^^^^^
backend_app | File "/usr/backend/src/repository/database.py", line 19, in __init__
backend_app | url=self.set_async_db_uri,
backend_app | ^^^^^^^^^^^^^^^^^^^^^
backend_app | File "/usr/backend/src/repository/database.py", line 36, in set_async_db_uri
backend_app | self.postgres_uri.replace("postgresql://", "postgresql+asyncpg://")
backend_app | ^^^^^^^^^^^^^^^^^^^^^^^^^
backend_app | AttributeError: 'pydantic_core._pydantic_core.MultiHostUrl' object has no attribute 'replace' | open | 2023-11-28T11:30:36Z | 2024-02-03T20:39:11Z | https://github.com/Aeternalis-Ingenium/FastAPI-Backend-Template/issues/31 | [] | eshpilevskiy | 1 |
comfyanonymous/ComfyUI | pytorch | 7,177 | The size of tensor a (49) must match the size of tensor b (16) at non-singleton dimension 1 | ### Expected Behavior
The size of tensor a (49) must match the size of tensor b (16) at non-singleton dimension 1
# ComfyUI Error Report
## Error Details
- **Node ID:** 3
- **Node Type:** KSampler
- **Exception Type:** RuntimeError
- **Exception Message:** The size of tensor a (49) must match the size of tensor b (16) at non-singleton dimension 1
## Stack Trace
```
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/nodes.py", line 1542, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/nodes.py", line 1509, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/sample.py", line 45, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/samplers.py", line 1133, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/samplers.py", line 1023, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/samplers.py", line 1008, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/samplers.py", line 976, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/samplers.py", line 948, in inner_sample
self.conds = process_conds(self.inner_model, noise, self.conds, device, latent_image, denoise_mask, seed)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/samplers.py", line 781, in process_conds
conds[k] = encode_model_conds(model.extra_conds, conds[k], noise, device, k, latent_image=latent_image, denoise_mask=denoise_mask, seed=seed)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/samplers.py", line 691, in encode_model_conds
out = model_function(**params)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/model_base.py", line 1004, in extra_conds
out = super().extra_conds(**kwargs)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/model_base.py", line 236, in extra_conds
concat_cond = self.concat_cond(**kwargs)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/model_base.py", line 987, in concat_cond
image = self.process_latent_in(image)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/model_base.py", line 276, in process_latent_in
return self.latent_format.process_in(latent)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/latent_formats.py", line 453, in process_in
return (latent - latents_mean) * self.scale_factor / latents_std
```
## System Information
- **ComfyUI Version:** 0.3.26
- **Arguments:** main.py
- **OS:** posix
- **Python Version:** 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0]
- **Embedded Python:** false
- **PyTorch Version:** 2.6.0+cu126
## Devices
- **Name:** cuda:0 NVIDIA RTX 4000 Ada Generation Laptop GPU : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 12585664512
- **VRAM Free:** 6903797504
- **Torch VRAM Total:** 5402263552
- **Torch VRAM Free:** 2004736
## Logs
```
2025-03-10T23:59:41.562440 - [START] Security scan2025-03-10T23:59:41.562461 -
2025-03-10T23:59:42.623304 - [DONE] Security scan2025-03-10T23:59:42.623330 -
2025-03-10T23:59:42.666809 - ## ComfyUI-Manager: installing dependencies done.2025-03-10T23:59:42.666858 -
2025-03-10T23:59:42.666886 - ** ComfyUI startup time:2025-03-10T23:59:42.666903 - 2025-03-10T23:59:42.666921 - 2025-03-10 23:59:42.6662025-03-10T23:59:42.666935 -
2025-03-10T23:59:42.666963 - ** Platform:2025-03-10T23:59:42.666981 - 2025-03-10T23:59:42.666999 - Linux2025-03-10T23:59:42.667014 -
2025-03-10T23:59:42.667029 - ** Python version:2025-03-10T23:59:42.667042 - 2025-03-10T23:59:42.667057 - 3.10.12 (main, Feb 4 2025, 14:57:36) [GCC 11.4.0]2025-03-10T23:59:42.667073 -
2025-03-10T23:59:42.667089 - ** Python executable:2025-03-10T23:59:42.667105 - 2025-03-10T23:59:42.667118 - /usr/bin/python32025-03-10T23:59:42.667134 -
2025-03-10T23:59:42.667150 - ** ComfyUI Path:2025-03-10T23:59:42.667165 - 2025-03-10T23:59:42.667180 - /home/kingbird/Downloads/mystuff/AI/ComfyUI2025-03-10T23:59:42.667198 -
2025-03-10T23:59:42.667212 - ** ComfyUI Base Folder Path:2025-03-10T23:59:42.667227 - 2025-03-10T23:59:42.667241 - /home/kingbird/Downloads/mystuff/AI/ComfyUI2025-03-10T23:59:42.667255 -
2025-03-10T23:59:42.667270 - ** User directory:2025-03-10T23:59:42.667285 - 2025-03-10T23:59:42.667300 - /home/kingbird/Downloads/mystuff/AI/ComfyUI/user2025-03-10T23:59:42.667316 -
2025-03-10T23:59:42.667333 - ** ComfyUI-Manager config path:2025-03-10T23:59:42.667347 - 2025-03-10T23:59:42.667363 - /home/kingbird/Downloads/mystuff/AI/ComfyUI/user/default/ComfyUI-Manager/config.ini2025-03-10T23:59:42.667379 -
2025-03-10T23:59:42.667405 - ** Log path:2025-03-10T23:59:42.667420 - 2025-03-10T23:59:42.667436 - /home/kingbird/Downloads/mystuff/AI/ComfyUI/user/comfyui.log2025-03-10T23:59:42.667451 -
2025-03-10T23:59:43.719764 -
Prestartup times for custom nodes:
2025-03-10T23:59:43.719931 - 0.0 seconds: /home/kingbird/Downloads/mystuff/AI/ComfyUI/custom_nodes/comfyui-easy-use
2025-03-10T23:59:43.719980 - 2.5 seconds: /home/kingbird/Downloads/mystuff/AI/ComfyUI/custom_nodes/ComfyUI-Manager
2025-03-10T23:59:43.720022 -
2025-03-10T23:59:45.038168 - Checkpoint files will always be loaded safely.
2025-03-10T23:59:45.259517 - Total VRAM 12003 MB, total RAM 63900 MB
2025-03-10T23:59:45.259646 - pytorch version: 2.6.0+cu126
2025-03-10T23:59:45.259910 - Set vram state to: NORMAL_VRAM
2025-03-10T23:59:45.260165 - Device: cuda:0 NVIDIA RTX 4000 Ada Generation Laptop GPU : cudaMallocAsync
2025-03-10T23:59:46.310148 - Using pytorch attention
2025-03-10T23:59:47.225952 - ComfyUI version: 0.3.26
2025-03-10T23:59:47.226085 - ComfyUI frontend version: 1.11.8
2025-03-10T23:59:47.230280 - [Prompt Server] web root: /home/kingbird/.local/lib/python3.10/site-packages/comfyui_frontend_package/static
2025-03-10T23:59:48.000111 - [34m[ComfyUI-Easy-Use] server: [0mv1.2.7 [92mLoaded[0m2025-03-10T23:59:48.000155 -
2025-03-10T23:59:48.000178 - [34m[ComfyUI-Easy-Use] web root: [0m/home/kingbird/Downloads/mystuff/AI/ComfyUI/custom_nodes/comfyui-easy-use/web_version/v2 [92mLoaded[0m2025-03-10T23:59:48.000194 -
2025-03-10T23:59:48.274490 - ------------------------------------------2025-03-10T23:59:48.274533 -
2025-03-10T23:59:48.274553 - [34mComfyroll Studio v1.76 : [92m 175 Nodes Loaded[0m2025-03-10T23:59:48.274568 -
2025-03-10T23:59:48.274582 - ------------------------------------------2025-03-10T23:59:48.274595 -
2025-03-10T23:59:48.274609 - ** For changes, please see patch notes at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/blob/main/Patch_Notes.md2025-03-10T23:59:48.274629 -
2025-03-10T23:59:48.274645 - ** For help, please see the wiki at https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes/wiki2025-03-10T23:59:48.274659 -
2025-03-10T23:59:48.274674 - ------------------------------------------2025-03-10T23:59:48.274687 -
2025-03-10T23:59:48.286462 - ### Loading: ComfyUI-Manager (V3.24)
2025-03-10T23:59:48.286889 - [ComfyUI-Manager] network_mode: public
2025-03-10T23:59:48.331492 - ### ComfyUI Version: v0.3.14-130-ge1da98a1 | Released on '2025-03-09'
2025-03-10T23:59:48.337492 -
Import times for custom nodes:
2025-03-10T23:59:48.338322 - 0.0 seconds: /home/kingbird/Downloads/mystuff/AI/ComfyUI/custom_nodes/websocket_image_save.py
2025-03-10T23:59:48.338382 - 0.0 seconds: /home/kingbird/Downloads/mystuff/AI/ComfyUI/custom_nodes/ComfyUI-GGUF
2025-03-10T23:59:48.338427 - 0.0 seconds: /home/kingbird/Downloads/mystuff/AI/ComfyUI/custom_nodes/comfyui-frame-interpolation
2025-03-10T23:59:48.338880 - 0.0 seconds: /home/kingbird/Downloads/mystuff/AI/ComfyUI/custom_nodes/gguf
2025-03-10T23:59:48.338923 - 0.0 seconds: /home/kingbird/Downloads/mystuff/AI/ComfyUI/custom_nodes/comfyui_essentials
2025-03-10T23:59:48.338962 - 0.0 seconds: /home/kingbird/Downloads/mystuff/AI/ComfyUI/custom_nodes/comfyui-videohelpersuite
2025-03-10T23:59:48.339000 - 0.1 seconds: /home/kingbird/Downloads/mystuff/AI/ComfyUI/custom_nodes/ComfyUI-Manager
2025-03-10T23:59:48.339036 - 0.3 seconds: /home/kingbird/Downloads/mystuff/AI/ComfyUI/custom_nodes/ComfyUI_Comfyroll_CustomNodes
2025-03-10T23:59:48.339073 - 0.4 seconds: /home/kingbird/Downloads/mystuff/AI/ComfyUI/custom_nodes/comfyui-easy-use
2025-03-10T23:59:48.339108 -
2025-03-10T23:59:48.352112 - Starting server
2025-03-10T23:59:48.352623 - To see the GUI go to: http://127.0.0.1:8188
2025-03-10T23:59:48.534417 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2025-03-10T23:59:48.552578 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2025-03-10T23:59:48.622040 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2025-03-10T23:59:48.678669 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2025-03-10T23:59:48.712238 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2025-03-10T23:59:54.366217 - FETCH ComfyRegistry Data: 5/572025-03-10T23:59:54.366292 -
2025-03-10T23:59:59.709011 - FETCH ComfyRegistry Data: 10/572025-03-10T23:59:59.709084 -
2025-03-11T00:00:05.759007 - FETCH ComfyRegistry Data: 15/572025-03-11T00:00:05.759093 -
2025-03-11T00:00:10.719752 - FETCH ComfyRegistry Data: 20/572025-03-11T00:00:10.719814 -
2025-03-11T00:00:16.120248 - FETCH ComfyRegistry Data: 25/572025-03-11T00:00:16.120311 -
2025-03-11T00:00:21.135187 - FETCH ComfyRegistry Data: 30/572025-03-11T00:00:21.135239 -
2025-03-11T00:00:26.763342 - FETCH ComfyRegistry Data: 35/572025-03-11T00:00:26.763383 -
2025-03-11T00:00:32.241175 - FETCH ComfyRegistry Data: 40/572025-03-11T00:00:32.241217 -
2025-03-11T00:00:42.832295 - FETCH ComfyRegistry Data: 45/572025-03-11T00:00:42.832376 -
2025-03-11T00:00:46.269074 - got prompt
2025-03-11T00:00:46.272300 - Failed to validate prompt for output 77:
2025-03-11T00:00:46.272429 - * VAELoader 61:
2025-03-11T00:00:46.272485 - - Value not in list: vae_name: 'wan_2.1_vae_fp8_e4m3fn.safetensors' not in ['ae.safetensors']
2025-03-11T00:00:46.272534 - * LoadImage 52:
2025-03-11T00:00:46.272577 - - Custom validation failed for node: image - Invalid image file: ComfyUI_00034_.png
2025-03-11T00:00:46.272625 - * LoaderGGUF 55:
2025-03-11T00:00:46.272664 - - Value not in list: gguf_name: 'wan2.1-i2v-14b-720p-q6_k.gguf' not in ['flux1-dev-Q6_K.gguf', 'wan2.1-i2v-14b-720p-Q4_1.gguf']
2025-03-11T00:00:46.272703 - Output will be ignored
2025-03-11T00:00:46.272940 - Failed to validate prompt for output 91:
2025-03-11T00:00:46.273014 - * UpscaleModelLoader 88:
2025-03-11T00:00:46.273069 - - Value not in list: model_name: '4x_foolhardy_Remacri.pth' not in []
2025-03-11T00:00:46.273117 - Output will be ignored
2025-03-11T00:00:46.273194 - Failed to validate prompt for output 56:
2025-03-11T00:00:46.273249 - Output will be ignored
2025-03-11T00:00:46.273295 - Failed to validate prompt for output 90:
2025-03-11T00:00:46.273342 - Output will be ignored
2025-03-11T00:00:46.273398 - Failed to validate prompt for output 58:
2025-03-11T00:00:46.273444 - Output will be ignored
2025-03-11T00:00:46.273485 - Failed to validate prompt for output 89:
2025-03-11T00:00:46.273531 - Output will be ignored
2025-03-11T00:00:46.273597 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2025-03-11T00:00:49.202705 - FETCH ComfyRegistry Data: 50/572025-03-11T00:00:49.202769 -
2025-03-11T00:00:53.466700 - got prompt
2025-03-11T00:00:53.468700 - Failed to validate prompt for output 77:
2025-03-11T00:00:53.468809 - * VAELoader 61:
2025-03-11T00:00:53.468873 - - Value not in list: vae_name: 'wan_2.1_vae_fp8_e4m3fn.safetensors' not in ['ae.safetensors']
2025-03-11T00:00:53.468922 - * LoaderGGUF 55:
2025-03-11T00:00:53.468963 - - Value not in list: gguf_name: 'wan2.1-i2v-14b-720p-q6_k.gguf' not in ['flux1-dev-Q6_K.gguf', 'wan2.1-i2v-14b-720p-Q4_1.gguf']
2025-03-11T00:00:53.469012 - Output will be ignored
2025-03-11T00:00:53.469151 - Failed to validate prompt for output 91:
2025-03-11T00:00:53.469208 - * UpscaleModelLoader 88:
2025-03-11T00:00:53.469251 - - Value not in list: model_name: '4x_foolhardy_Remacri.pth' not in []
2025-03-11T00:00:53.469293 - Output will be ignored
2025-03-11T00:00:53.469367 - Failed to validate prompt for output 56:
2025-03-11T00:00:53.469420 - Output will be ignored
2025-03-11T00:00:53.469463 - Failed to validate prompt for output 90:
2025-03-11T00:00:53.469505 - Output will be ignored
2025-03-11T00:00:53.469558 - Failed to validate prompt for output 89:
2025-03-11T00:00:53.469601 - Output will be ignored
2025-03-11T00:00:53.739728 - Prompt executed in 0.27 seconds
2025-03-11T00:00:55.343906 - FETCH ComfyRegistry Data: 55/572025-03-11T00:00:55.343980 -
2025-03-11T00:00:57.637000 - FETCH ComfyRegistry Data [DONE]2025-03-11T00:00:57.637072 -
2025-03-11T00:00:57.719044 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2025-03-11T00:00:57.781311 - nightly_channel:
https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/remote
2025-03-11T00:00:57.781506 - FETCH DATA from: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json2025-03-11T00:00:57.781530 - 2025-03-11T00:00:58.141869 - [DONE]2025-03-11T00:00:58.141921 -
2025-03-11T00:00:58.181415 - [ComfyUI-Manager] All startup tasks have been completed.
2025-03-11T00:01:05.939759 - got prompt
2025-03-11T00:01:05.941976 - Failed to validate prompt for output 77:
2025-03-11T00:01:05.942089 - * LoaderGGUF 55:
2025-03-11T00:01:05.942145 - - Value not in list: gguf_name: 'wan2.1-i2v-14b-720p-q6_k.gguf' not in ['flux1-dev-Q6_K.gguf', 'wan2.1-i2v-14b-720p-Q4_1.gguf']
2025-03-11T00:01:05.942196 - Output will be ignored
2025-03-11T00:01:05.942346 - Failed to validate prompt for output 91:
2025-03-11T00:01:05.942412 - * UpscaleModelLoader 88:
2025-03-11T00:01:05.942461 - - Value not in list: model_name: '4x_foolhardy_Remacri.pth' not in []
2025-03-11T00:01:05.942508 - Output will be ignored
2025-03-11T00:01:05.942588 - Failed to validate prompt for output 56:
2025-03-11T00:01:05.942643 - Output will be ignored
2025-03-11T00:01:05.942689 - Failed to validate prompt for output 90:
2025-03-11T00:01:05.942735 - Output will be ignored
2025-03-11T00:01:05.942776 - Failed to validate prompt for output 89:
2025-03-11T00:01:05.942822 - Output will be ignored
2025-03-11T00:01:05.942894 - invalid prompt: {'type': 'prompt_outputs_failed_validation', 'message': 'Prompt outputs failed validation', 'details': '', 'extra_info': {}}
2025-03-11T00:01:21.739554 - got prompt
2025-03-11T00:01:21.742225 - Failed to validate prompt for output 91:
2025-03-11T00:01:21.742363 - * UpscaleModelLoader 88:
2025-03-11T00:01:21.742426 - - Value not in list: model_name: '4x_foolhardy_Remacri.pth' not in []
2025-03-11T00:01:21.742479 - Output will be ignored
2025-03-11T00:01:21.774810 - Using pytorch attention in VAE
2025-03-11T00:01:21.777202 - Using pytorch attention in VAE
2025-03-11T00:01:21.910265 - VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
2025-03-11T00:01:23.317106 - Requested to load CLIPVisionModelProjection
2025-03-11T00:01:23.660804 - loaded completely 10584.8625 1208.09814453125 True
2025-03-11T00:01:24.020393 - Using scaled fp8: fp8 matrix mult: False, scale input: False
2025-03-11T00:01:24.896199 - CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
2025-03-11T00:01:27.517755 - Requested to load WanTEModel
2025-03-11T00:01:28.667734 - loaded completely 9306.6369140625 6419.477203369141 True
2025-03-11T00:01:31.607813 - Requested to load AutoencodingEngine
2025-03-11T00:01:31.674426 - loaded completely 1746.094081878662 159.87335777282715 True
2025-03-11T00:01:42.708017 - /home/kingbird/Downloads/mystuff/AI/ComfyUI/custom_nodes/gguf/pig.py:351: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:203.)
torch_tensor = torch.from_numpy(tensor.data)
2025-03-11T00:01:42.721280 -
ggml_sd_loader:2025-03-11T00:01:42.721313 -
2025-03-11T00:01:42.721338 - GGMLQuantizationType.F32 8232025-03-11T00:01:42.721353 -
2025-03-11T00:01:42.721371 - GGMLQuantizationType.Q4_1 4802025-03-11T00:01:42.721386 -
2025-03-11T00:01:42.764893 - model weight dtype torch.float16, manual cast: None
2025-03-11T00:01:42.765294 - model_type FLOW
2025-03-11T00:01:42.939956 - Requested to load WAN21
2025-03-11T00:01:49.454946 - loaded partially 5276.964619891357 5276.963134765625 0
2025-03-11T00:01:49.460792 - Attempting to release mmap (471)2025-03-11T00:01:49.460821 -
2025-03-11T00:01:55.813461 - !!! Exception during processing !!! The size of tensor a (49) must match the size of tensor b (16) at non-singleton dimension 1
2025-03-11T00:01:55.815318 - Traceback (most recent call last):
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/nodes.py", line 1542, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/nodes.py", line 1509, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/sample.py", line 45, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/samplers.py", line 1133, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/samplers.py", line 1023, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/samplers.py", line 1008, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/patcher_extension.py", line 110, in execute
return self.original(*args, **kwargs)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/samplers.py", line 976, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/samplers.py", line 948, in inner_sample
self.conds = process_conds(self.inner_model, noise, self.conds, device, latent_image, denoise_mask, seed)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/samplers.py", line 781, in process_conds
conds[k] = encode_model_conds(model.extra_conds, conds[k], noise, device, k, latent_image=latent_image, denoise_mask=denoise_mask, seed=seed)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/samplers.py", line 691, in encode_model_conds
out = model_function(**params)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/model_base.py", line 1004, in extra_conds
out = super().extra_conds(**kwargs)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/model_base.py", line 236, in extra_conds
concat_cond = self.concat_cond(**kwargs)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/model_base.py", line 987, in concat_cond
image = self.process_latent_in(image)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/model_base.py", line 276, in process_latent_in
return self.latent_format.process_in(latent)
File "/home/kingbird/Downloads/mystuff/AI/ComfyUI/comfy/latent_formats.py", line 453, in process_in
return (latent - latents_mean) * self.scale_factor / latents_std
RuntimeError: The size of tensor a (49) must match the size of tensor b (16) at non-singleton dimension 1
2025-03-11T00:01:55.815870 - Prompt executed in 34.07 seconds
```
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
{"last_node_id":98,"last_link_id":185,"nodes":[{"id":73,"type":"VAEDecodeTiled","pos":[1289.87939453125,223.11526489257812],"size":[315,150],"flags":{},"order":12,"mode":0,"inputs":[{"name":"samples","localized_name":"samples","type":"LATENT","link":139},{"name":"vae","localized_name":"vae","type":"VAE","link":140}],"outputs":[{"name":"IMAGE","localized_name":"IMAGE","type":"IMAGE","links":[144,146],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecodeTiled","cnr_id":"comfy-core","ver":"0.3.20"},"widgets_values":[256,64,64,8]},{"id":89,"type":"easy cleanGpuUsed","pos":[2497.03466796875,425.4831848144531],"size":[210,26],"flags":{},"order":16,"mode":0,"inputs":[{"name":"anything","localized_name":"anything","type":"*","link":166}],"outputs":[{"name":"output","localized_name":"output","type":"*","links":[167],"slot_index":0}],"properties":{"Node name for S&R":"easy cleanGpuUsed","cnr_id":"comfyui-easy-use","ver":"0daf114fe8870aeacfea484aa59e7f9973b91cd5"},"widgets_values":[]},{"id":90,"type":"easy clearCacheAll","pos":[2710.046142578125,549.0549926757812],"size":[210,26],"flags":{},"order":17,"mode":0,"inputs":[{"name":"anything","localized_name":"anything","type":"*","link":167}],"outputs":[{"name":"output","localized_name":"output","type":"*","links":[168],"slot_index":0}],"properties":{"Node name for S&R":"easy clearCacheAll","cnr_id":"comfyui-easy-use","ver":"0daf114fe8870aeacfea484aa59e7f9973b91cd5"},"widgets_values":[]},{"id":87,"type":"ImageUpscaleWithModel","pos":[2920.711669921875,377.2395324707031],"size":[226.8000030517578,46],"flags":{},"order":18,"mode":0,"inputs":[{"name":"upscale_model","localized_name":"upscale_model","type":"UPSCALE_MODEL","link":165},{"name":"image","localized_name":"image","type":"IMAGE","link":168}],"outputs":[{"name":"IMAGE","localized_name":"IMAGE","type":"IMAGE","links":[169],"slot_index":0}],"properties":{"Node name for 
S&R":"ImageUpscaleWithModel","cnr_id":"comfy-core","ver":"0.3.20"},"widgets_values":[]},{"id":51,"type":"CLIPVisionEncode","pos":[-83.88645935058594,531.9509887695312],"size":[253.60000610351562,78],"flags":{},"order":8,"mode":0,"inputs":[{"name":"clip_vision","localized_name":"clip_vision","type":"CLIP_VISION","link":94},{"name":"image","localized_name":"image","type":"IMAGE","link":178}],"outputs":[{"name":"CLIP_VISION_OUTPUT","localized_name":"CLIP_VISION_OUTPUT","type":"CLIP_VISION_OUTPUT","links":[107],"slot_index":0}],"properties":{"Node name for S&R":"CLIPVisionEncode","cnr_id":"comfy-core","ver":"0.3.20"},"widgets_values":["none"]},{"id":38,"type":"CLIPLoader","pos":[-695.0258178710938,232.79403686523438],"size":[458.3395080566406,99.65289306640625],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","localized_name":"CLIP","type":"CLIP","links":[74,75],"slot_index":0}],"properties":{"Node name for S&R":"CLIPLoader","cnr_id":"comfy-core","ver":"0.3.20"},"widgets_values":["t5xxl_um_fp8_e4m3fn_scaled.safetensors","wan","default"]},{"id":49,"type":"CLIPVisionLoader","pos":[-672.1205444335938,446.4837951660156],"size":[386.50439453125,75.05867767333984],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"CLIP_VISION","localized_name":"CLIP_VISION","type":"CLIP_VISION","links":[94],"slot_index":0}],"properties":{"Node name for 
S&R":"CLIPVisionLoader","cnr_id":"comfy-core","ver":"0.3.20"},"widgets_values":["clip_vision_h_fp8_e4m3fn.safetensors"]},{"id":50,"type":"WanImageToVideo","pos":[441.33453369140625,341.9261169433594],"size":[342.5999755859375,250],"flags":{},"order":10,"mode":0,"inputs":[{"name":"positive","localized_name":"positive","type":"CONDITIONING","link":97},{"name":"negative","localized_name":"negative","type":"CONDITIONING","link":98},{"name":"vae","localized_name":"vae","type":"VAE","link":119},{"name":"clip_vision_output","localized_name":"clip_vision_output","type":"CLIP_VISION_OUTPUT","shape":7,"link":107},{"name":"start_image","localized_name":"start_image","type":"IMAGE","shape":7,"link":183},{"name":"width","type":"INT","widget":{"name":"width"},"link":184},{"name":"height","type":"INT","widget":{"name":"height"},"link":185}],"outputs":[{"name":"positive","localized_name":"positive","type":"CONDITIONING","links":[101],"slot_index":0},{"name":"negative","localized_name":"negative","type":"CONDITIONING","links":[102],"slot_index":1},{"name":"latent","localized_name":"latent","type":"LATENT","links":[103],"slot_index":2}],"properties":{"Node name for S&R":"WanImageToVideo","cnr_id":"comfy-core","ver":"0.3.20"},"widgets_values":[848,480,49,1]},{"id":76,"type":"RIFE VFI","pos":[1750.6710205078125,309.00201416015625],"size":[478.8000183105469,198],"flags":{},"order":13,"mode":0,"inputs":[{"name":"frames","localized_name":"frames","type":"IMAGE","link":144},{"name":"optional_interpolation_states","localized_name":"optional_interpolation_states","type":"INTERPOLATION_STATES","shape":7,"link":null}],"outputs":[{"name":"IMAGE","localized_name":"IMAGE","type":"IMAGE","links":[147,166],"slot_index":0}],"properties":{"Node name for S&R":"RIFE 
VFI","cnr_id":"comfyui-frame-interpolation","ver":"1.0.6"},"widgets_values":["rife47.pth",10,2,true,true,1]},{"id":3,"type":"KSampler","pos":[870.6275024414062,321.37060546875],"size":[315,262],"flags":{},"order":11,"mode":0,"inputs":[{"name":"model","localized_name":"model","type":"MODEL","link":124},{"name":"positive","localized_name":"positive","type":"CONDITIONING","link":101},{"name":"negative","localized_name":"negative","type":"CONDITIONING","link":102},{"name":"latent_image","localized_name":"latent_image","type":"LATENT","link":103}],"outputs":[{"name":"LATENT","localized_name":"LATENT","type":"LATENT","links":[139],"slot_index":0}],"properties":{"Node name for S&R":"KSampler","cnr_id":"comfy-core","ver":"0.3.20"},"widgets_values":[142608892999387,"fixed",20,4,"uni_pc","simple",1]},{"id":98,"type":"ImageResize+","pos":[-606.3270263671875,860.461669921875],"size":[315,218],"flags":{},"order":9,"mode":0,"inputs":[{"name":"image","localized_name":"image","type":"IMAGE","link":182}],"outputs":[{"name":"IMAGE","localized_name":"IMAGE","type":"IMAGE","links":[183],"slot_index":0},{"name":"width","localized_name":"width","type":"INT","links":[184],"slot_index":1},{"name":"height","localized_name":"height","type":"INT","links":[185],"slot_index":2}],"properties":{"Node name for S&R":"ImageResize+","cnr_id":"comfyui_essentials","ver":"1.1.0"},"widgets_values":[1024,1024,"lanczos","keep proportion","downscale if bigger",0]},{"id":7,"type":"CLIPTextEncode","pos":[-97.05010223388672,345.4165954589844],"size":[375.8377685546875,90.64099884033203],"flags":{},"order":7,"mode":0,"inputs":[{"name":"clip","localized_name":"clip","type":"CLIP","link":75}],"outputs":[{"name":"CONDITIONING","localized_name":"CONDITIONING","type":"CONDITIONING","links":[98],"slot_index":0}],"title":"CLIP Text Encode (Negative Prompt)","properties":{"Node name for S&R":"CLIPTextEncode","cnr_id":"comfy-core","ver":"0.3.20"},"widgets_values":["deformed, distorted, disfigured, motion smear, 
blur"],"color":"#322","bgcolor":"#533"},{"id":77,"type":"VHS_VideoCombine","pos":[1867.097412109375,723.7542724609375],"size":[811.4727783203125,334],"flags":{},"order":15,"mode":0,"inputs":[{"name":"images","localized_name":"images","type":"IMAGE","link":147},{"name":"audio","localized_name":"audio","type":"AUDIO","shape":7,"link":null},{"name":"meta_batch","localized_name":"meta_batch","type":"VHS_BatchManager","shape":7,"link":null},{"name":"vae","localized_name":"vae","type":"VAE","shape":7,"link":null}],"outputs":[{"name":"Filenames","localized_name":"Filenames","type":"VHS_FILENAMES","links":null,"slot_index":0}],"properties":{"Node name for S&R":"VHS_VideoCombine","cnr_id":"comfyui-videohelpersuite","ver":"4c7858ddd5126f7293dc3c9f6e0fc4c263cde079"},"widgets_values":{"frame_rate":24,"loop_count":0,"filename_prefix":"comfyuiblog","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":10,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"WANsmooth_00001.mp4","subfolder":"","type":"output","format":"video/h264-mp4","frame_rate":24,"workflow":"WANsmooth_00001.png","fullpath":"C:\\Video\\ComfyUI\\output\\WANsmooth_00001.mp4"}}}},{"id":91,"type":"VHS_VideoCombine","pos":[3212.25634765625,338.37567138671875],"size":[600.8587646484375,334],"flags":{},"order":19,"mode":0,"inputs":[{"name":"images","localized_name":"images","type":"IMAGE","link":169},{"name":"audio","localized_name":"audio","type":"AUDIO","shape":7,"link":null},{"name":"meta_batch","localized_name":"meta_batch","type":"VHS_BatchManager","shape":7,"link":null},{"name":"vae","localized_name":"vae","type":"VAE","shape":7,"link":null}],"outputs":[{"name":"Filenames","localized_name":"Filenames","type":"VHS_FILENAMES","links":null,"slot_index":0}],"properties":{"Node name for 
S&R":"VHS_VideoCombine","cnr_id":"comfyui-videohelpersuite","ver":"4c7858ddd5126f7293dc3c9f6e0fc4c263cde079"},"widgets_values":{"frame_rate":24,"loop_count":0,"filename_prefix":"comfyuiblog","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":10,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"WAN_4K_00001.mp4","subfolder":"","type":"output","format":"video/h264-mp4","frame_rate":24,"workflow":"WAN_4K_00001.png","fullpath":"C:\\Video\\ComfyUI\\output\\WAN_4K_00001.mp4"}}}},{"id":56,"type":"VHS_VideoCombine","pos":[1058.24609375,755.1166381835938],"size":[669.1996459960938,334],"flags":{},"order":14,"mode":0,"inputs":[{"name":"images","localized_name":"images","type":"IMAGE","link":146},{"name":"audio","localized_name":"audio","type":"AUDIO","shape":7,"link":null},{"name":"meta_batch","localized_name":"meta_batch","type":"VHS_BatchManager","shape":7,"link":null},{"name":"vae","localized_name":"vae","type":"VAE","shape":7,"link":null}],"outputs":[{"name":"Filenames","localized_name":"Filenames","type":"VHS_FILENAMES","links":null,"slot_index":0}],"properties":{"Node name for 
S&R":"VHS_VideoCombine","cnr_id":"comfyui-videohelpersuite","ver":"4c7858ddd5126f7293dc3c9f6e0fc4c263cde079"},"widgets_values":{"frame_rate":16,"loop_count":0,"filename_prefix":"comfyuiblog","format":"video/h264-mp4","pix_fmt":"yuv420p","crf":10,"save_metadata":true,"trim_to_audio":false,"pingpong":false,"save_output":true,"videopreview":{"hidden":false,"paused":false,"params":{"filename":"WanVideo_00001.mp4","subfolder":"","type":"output","format":"video/h264-mp4","frame_rate":16,"workflow":"WanVideo_00001.png","fullpath":"C:\\Video\\ComfyUI\\output\\WanVideo_00001.mp4"}}}},{"id":6,"type":"CLIPTextEncode","pos":[-79.38397216796875,173.69830322265625],"size":[368.4273376464844,115.58069610595703],"flags":{},"order":6,"mode":0,"inputs":[{"name":"clip","localized_name":"clip","type":"CLIP","link":74}],"outputs":[{"name":"CONDITIONING","localized_name":"CONDITIONING","type":"CONDITIONING","links":[97],"slot_index":0}],"title":"CLIP Text Encode (Positive Prompt)","properties":{"Node name for S&R":"CLIPTextEncode","cnr_id":"comfy-core","ver":"0.3.20"},"widgets_values":["a cute anime girl with massive fennec ears and a big fluffy tail wearing a maid outfit turning around"],"color":"#232","bgcolor":"#353"},{"id":52,"type":"LoadImage","pos":[-1642.743896484375,782.8930053710938],"size":[684.4696655273438,831.0567016601562],"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","localized_name":"IMAGE","type":"IMAGE","links":[178,182],"slot_index":0},{"name":"MASK","localized_name":"MASK","type":"MASK","links":null,"slot_index":1}],"properties":{"Node name for S&R":"LoadImage","cnr_id":"comfy-core","ver":"0.3.20"},"widgets_values":["demo.jpg","image"]},{"id":61,"type":"VAELoader","pos":[574.9951171875,761.2676391601562],"size":[378.7417907714844,58],"flags":{},"order":3,"mode":0,"inputs":[],"outputs":[{"name":"VAE","localized_name":"VAE","type":"VAE","links":[119,140],"slot_index":0}],"properties":{"Node name for 
S&R":"VAELoader","cnr_id":"comfy-core","ver":"0.3.20"},"widgets_values":["ae.safetensors"]},{"id":55,"type":"LoaderGGUF","pos":[-692.7254028320312,108.28839111328125],"size":[457.37030029296875,63.958499908447266],"flags":{},"order":4,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","localized_name":"MODEL","type":"MODEL","links":[124],"slot_index":0}],"properties":{"Node name for S&R":"LoaderGGUF","cnr_id":"gguf","ver":"1.5.8"},"widgets_values":["wan2.1-i2v-14b-720p-Q4_1.gguf"]},{"id":88,"type":"UpscaleModelLoader","pos":[2545.1240234375,242.92486572265625],"size":[315,58],"flags":{},"order":5,"mode":0,"inputs":[],"outputs":[{"name":"UPSCALE_MODEL","localized_name":"UPSCALE_MODEL","type":"UPSCALE_MODEL","links":[165]}],"properties":{"Node name for S&R":"UpscaleModelLoader","cnr_id":"comfy-core","ver":"0.3.20"},"widgets_values":["4x_foolhardy_Remacri.pth"]}],"links":[[74,38,0,6,0,"CLIP"],[75,38,0,7,0,"CLIP"],[94,49,0,51,0,"CLIP_VISION"],[97,6,0,50,0,"CONDITIONING"],[98,7,0,50,1,"CONDITIONING"],[101,50,0,3,1,"CONDITIONING"],[102,50,1,3,2,"CONDITIONING"],[103,50,2,3,3,"LATENT"],[107,51,0,50,3,"CLIP_VISION_OUTPUT"],[119,61,0,50,2,"VAE"],[124,55,0,3,0,"MODEL"],[139,3,0,73,0,"LATENT"],[140,61,0,73,1,"VAE"],[144,73,0,76,0,"IMAGE"],[146,73,0,56,0,"IMAGE"],[147,76,0,77,0,"IMAGE"],[165,88,0,87,0,"UPSCALE_MODEL"],[166,76,0,89,0,"*"],[167,89,0,90,0,"*"],[168,90,0,87,1,"IMAGE"],[169,87,0,91,0,"IMAGE"],[178,52,0,51,1,"IMAGE"],[182,52,0,98,0,"IMAGE"],[183,98,0,50,4,"IMAGE"],[184,98,1,50,5,"INT"],[185,98,2,50,6,"INT"]],"groups":[],"config":{},"extra":{"ds":{"scale":0.34522712143931134,"offset":[1051.8011234339829,1045.2296238445883]},"node_versions":{"comfy-core":"0.3.26","comfyui-easy-use":"1.2.7","comfyui-frame-interpolation":"1.0.6","comfyui_essentials":"1.1.0","comfyui-videohelpersuite":"1.5.9","gguf":"1.6.5"},"ue_links":[],"VHS_latentpreview":false,"VHS_latentpreviewrate":0,"VHS_MetadataImage":true,"VHS_KeepIntermediate":true},"version":0.4}
```
## Additional Context
(Please add any additional context or steps to reproduce the error here)
### Actual Behavior
When I try to run the WAN ComfyUI workflow on my system I see this message and am nowhere able to find how to resolve it.
### Steps to Reproduce
Simply run the ComfyUI workflow after placing the required WAN image-to-video models.
### Debug Logs
```powershell
same
```
### Other
_No response_ | open | 2025-03-10T18:41:07Z | 2025-03-20T21:14:12Z | https://github.com/comfyanonymous/ComfyUI/issues/7177 | [
"Potential Bug"
] | HimanshuSan | 1 |
scikit-hep/awkward | numpy | 2,340 | Awkward allows non-nullable unknown type, but Arrow doesn't | In trying to construct a minimal reproducer, I find that they directly forbid a null type from being non-nullable:
```python
>>> import pyarrow as pa
>>> pa.field("name", pa.null(), nullable=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pyarrow/types.pxi", line 2266, in pyarrow.lib.field
ValueError: A null type field may not be non-nullable
```
In our code, we must be getting at it some other way that bypasses this check, but it's clear what their intentions are. We _do_ allow an `UnknownType` to not be inside an `OptionType`, so the problem is that we have a broader type system and a direct conversion makes something that Arrow doesn't consider legal.
So the right way to fix this is to wrap our `EmptyArray` inside `UnmaskedArray` on conversion to Arrow, but include enough metadata in the `ExtensionType` that when we convert it back, we know to remove the option-type, so that it's round-trip preserved. I'll make this an issue.
_Originally posted by @jpivarski in https://github.com/scikit-hep/awkward/issues/2337#issuecomment-1482892635_
| closed | 2023-03-24T14:27:59Z | 2024-04-29T17:30:30Z | https://github.com/scikit-hep/awkward/issues/2340 | [
"bug"
] | jpivarski | 2 |
QingdaoU/OnlineJudge | django | 469 | Why is there no frontend code after pulling, only the backend and database? | closed | 2024-07-19T03:45:13Z | 2024-07-19T03:47:13Z | https://github.com/QingdaoU/OnlineJudge/issues/469 | [] | garry-jay | 0 |
iMerica/dj-rest-auth | rest-api | 370 | Reverse for 'password_reset_confirm' not found | Hello, I just got this error when sending POST to `password/reset/`.
There are these patterns in the app:
```
path('password/reset/', PasswordResetView.as_view(), name='rest_password_reset'),
path('password/reset/confirm/', PasswordResetConfirmView.as_view(), name='rest_password_reset_confirm'),
```
But this code calls `reverse` without the `rest_` prefix:
```
path = reverse( …
'password_reset_confirm',
args=[user_pk_to_url_str(user), temp_key],
)
```
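For reference, a commonly used workaround (a sketch only — the exact path and the placement in a project-level `urls.py` are assumptions, not a confirmed fix for this report) is to also register a pattern under the plain name `password_reset_confirm`, since that is the name the reset email reverses:

```python
# urls.py — sketch of a workaround; path shape is an illustrative assumption
from django.urls import path
from dj_rest_auth.views import PasswordResetConfirmView

urlpatterns = [
    path('password/reset/confirm/<uidb64>/<token>/',
         PasswordResetConfirmView.as_view(),
         name='password_reset_confirm'),  # plain name that reverse() looks up
]
```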
I'm not sure if that's a bug or I'm missing something. Thanks | closed | 2022-02-03T11:38:09Z | 2022-07-17T23:18:09Z | https://github.com/iMerica/dj-rest-auth/issues/370 | [
"support-request"
] | milano-slesarik | 3 |
replicate/cog | tensorflow | 1,602 | Replicate is failing outside my code with a pickling issue after my predict function ends | During my predict() function, I initialize a class called EachPredictor, and assign it to a dict inside the Predictor. However, every call errors after the predict function is finished, with this:
```
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.13/lib/python3.10/site-packages/cog/server/worker.py", line 226, in _predict
self._events.send(PredictionOutput(payload=make_encodeable(result)))
File "/root/.pyenv/versions/3.10.13/lib/python3.10/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/root/.pyenv/versions/3.10.13/lib/python3.10/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class 'predict.EachPredictor'>: it's not the same object as predict.EachPredictor
```
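For context (an outside guess about the mechanism, not something confirmed from the Cog source): CPython's pickler raises "it's not the same object" when the class it finds by name in the module is no longer the very object the instance was created from — typically because the module was reloaded or imported twice between instantiation and pickling. A minimal stdlib sketch of the same failure, with `EachPredictor` standing in for the class defined in `predict.py`:

```python
import pickle

class EachPredictor:  # stands in for the class defined in predict.py
    pass

obj = EachPredictor()

# Rebinding the module-level name, as a module reload would, makes the
# instance's class a different object than the one pickle looks up by name.
EachPredictor = type("EachPredictor", (), {})

try:
    pickle.dumps(obj)
    raised = None
except pickle.PicklingError as exc:
    raised = exc

print(type(raised).__name__)  # → PicklingError
```

Keeping only plain, picklable data (dicts, strings, numbers) in the value returned from `predict()` sidesteps the problem regardless of what the worker does with module reloads.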
This was not an issue just a couple of days ago | open | 2024-03-28T17:36:14Z | 2024-03-28T17:36:14Z | https://github.com/replicate/cog/issues/1602 | [] | ryx2 | 0 |
errbotio/errbot | automation | 1,398 | Telegram backend formatting | In order to let us help you better, please fill out the following fields as best you can:
### I am...
* [ ] Reporting a bug
* [X] Suggesting a new feature
* [ ] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### I am running...
* Errbot version: 6.1.1
* OS version: Ubuntu
* Python version: 3.6.6
* Using a virtual environment: yes
### Issue description
I'm a happy user of telegram backend.
Telegram uses a non-monospaced font, so the output of some formatted commands is not well formatted (for example the status command).
Telegram allows the use of markdown to indicate monospace fonts (```), but I'm not sure what would be the best way to make this available to errbot.

It would be nice to have this behavior available, at least, for some commands.
| open | 2019-12-07T11:46:58Z | 2021-12-20T06:46:29Z | https://github.com/errbotio/errbot/issues/1398 | [
"backend: Telegram"
] | fernand0 | 0 |
dynaconf/dynaconf | flask | 974 | Nested setting disappears after validation | **Describe the bug**
When two nested settings share the same name in their last part and are validated, one of the settings is no longer present after validation.
**To Reproduce**
This is best demonstrated with a test case:
```
from dynaconf import Validator, Dynaconf
settings = Dynaconf()
# Register a single validator.
settings.validators.register(Validator('foo.bar.value', must_exist=True, is_type_of=bool))
# Add two nested settings.
settings.set("foo.value", True)
settings.set("foo.bar.value", True)
# Check that both settings are present.
assert settings.foo.bar.value == True
assert settings.foo.value == True
# Validate the settings.
settings.validators.validate()
# Check that both settings are still present.
assert settings.foo.bar.value == True
assert settings.foo.value == True # ERROR: This line fails: "'DynaBox' object has no attribute 'value'"
```
The very last line of the test fails with an exception:
```
dynaconf.vendor.box.exceptions.BoxKeyError: "'DynaBox' object has no attribute 'value'"
../../../.venv/lib/python3.8/site-packages/dynaconf/vendor/box/box.py:175: BoxKeyError
```
**Expected behavior**
It is expected that both settings are still present after validation.
**Environment (please complete the following information):**
- OS: Ubuntu 20.04
- Python 3.8
- Dynaconf Version 3.2.0 and 3.1.12
- Note: With Dynaconf 3.1.11 the behaviour is as expected, i.e. this problem is present only since version 3.1.12.
| closed | 2023-08-04T13:27:58Z | 2024-07-08T18:12:05Z | https://github.com/dynaconf/dynaconf/issues/974 | [
"bug",
"4.0-breaking-change"
] | fswane | 3 |
jupyter-book/jupyter-book | jupyter | 2,146 | AttributeError: 'EntryPoints' object has no attribute 'get' | ### Describe the bug
**context**
When I do
```bash
jupyter book toc migrate _toc.yml
# but also
jupyter book --help
```
**bug**
I am consistently getting a stack dump ending with
```console
File "/some/path/lib/python3.12/site-packages/click/core.py", line 1325, in get_help
self.format_help(ctx, formatter)
File "/some/path/lib/python3.12/site-packages/click/core.py", line 1358, in format_help
self.format_options(ctx, formatter)
File "/some/path/lib/python3.12/site-packages/click/core.py", line 1564, in format_options
self.format_commands(ctx, formatter)
File "/some/path/lib/python3.12/site-packages/click/core.py", line 1616, in format_commands
for subcommand in self.list_commands(ctx):
^^^^^^^^^^^^^^^^^^^^^^^
File "/some/path/lib/python3.12/site-packages/jupyter_book/cli/pluggable.py", line 38, in list_commands
subcommands.extend(get_entry_point_names(self._entry_point_group))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/some/path/lib/python3.12/site-packages/jupyter_book/cli/pluggable.py", line 13, in get_entry_point_names
return [ep.name for ep in metadata.entry_points().get(group, [])]
^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'EntryPoints' object has no attribute 'get'
```
**problem**
obviously the command is having problems displaying its own help...?
note that running a valid command like `jupyter book build .` works fine
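For reference (this is the standard-library change behind the traceback, not a statement about jupyter-book's fix): since Python 3.10, `importlib.metadata.entry_points()` returns an `EntryPoints` object that exposes `.select()` instead of the old dict-style `.get()`. A sketch of a version-tolerant lookup like the one the failing `get_entry_point_names` would need:

```python
from importlib import metadata

def get_entry_point_names(group: str) -> list:
    eps = metadata.entry_points()
    if hasattr(eps, "select"):  # Python >= 3.10: EntryPoints object
        return [ep.name for ep in eps.select(group=group)]
    return [ep.name for ep in eps.get(group, [])]  # legacy dict-of-lists API

names = get_entry_point_names("console_scripts")
print(type(names).__name__)  # → list
```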
### Reproduce the bug
just run the above commands
### List your environment
this is on MacOS sonoma 14.4.1
```
$ jupyter book --version
Jupyter Book : 1.0.0
External ToC : 1.0.1
MyST-Parser : 3.0.1
MyST-NB : 1.1.0
Sphinx Book Theme : 1.1.2
Jupyter-Cache : 1.0.0
NbClient : 0.10.0
$ python --version
Python 3.12.3
```
| open | 2024-05-01T13:06:17Z | 2024-09-17T17:23:37Z | https://github.com/jupyter-book/jupyter-book/issues/2146 | [
"bug"
] | parmentelat | 4 |