| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
encode/apistar | api | 208 | Issue with reloader for `apistar run` on Windows. | After the discussion on Discourse, and #196 and #207, I think we can say there is an issue with Windows.
From the discussion on discourse it looks like the issue is with the `app = get_current_app()` line, where the app isn't being properly obtained. This is based on the comment that when pasting the app into `run.py` it worked
`get_current_app()` is defined in [cli.py](https://github.com/tomchristie/apistar/blob/master/apistar/cli.py#L20) and I bet the issue is with the path resolution on Windows. I don't have a Windows system to test it on, but all one would need to do is put an `import pdb; pdb.set_trace()` statement at the start of that function and step through to see where the path is resolving to. | closed | 2017-06-12T12:56:01Z | 2018-03-26T14:53:45Z | https://github.com/encode/apistar/issues/208 | [
"Bug"
] | audiolion | 15 |
albumentations-team/albumentations | deep-learning | 2,065 | [Documentation] Clearly describe in docs parameters for Compose | People are not aware of:
- clip
- save_applied_params | open | 2024-11-06T01:57:39Z | 2024-11-06T17:15:23Z | https://github.com/albumentations-team/albumentations/issues/2065 | [
"documentation"
] | ternaus | 0 |
Morizeyao/GPT2-Chinese | nlp | 93 | Undefined name 'encoder_path' in bpe_tokenizer.py | https://github.com/Morizeyao/GPT2-Chinese/blob/master/tokenizations/bpe_tokenizer.py#L128
[flake8](http://flake8.pycqa.org) testing of https://github.com/Morizeyao/GPT2-Chinese on Python 3.8.0
$ __flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics__
```
./tokenizations/bpe_tokenizer.py:128:27: F821 undefined name 'encoder_path'
return Encoder_SP(encoder_path)
^
1 F821 undefined name 'encoder_path'
1
``` | closed | 2019-11-09T21:46:29Z | 2019-11-11T01:56:31Z | https://github.com/Morizeyao/GPT2-Chinese/issues/93 | [] | cclauss | 1 |
roboflow/supervision | deep-learning | 1,394 | Clarification of obb behaviour with InferenceSlicer | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Based on [reviewing source code](https://github.com/roboflow/supervision/blob/a8d91b0ae5a17ccf93aa83d5866cf5daa511d7a3/supervision/detection/core.py#L258), I understand that with an obb model, the regular box enclosing the obb box is used with SAHI. As a result, merged regular boxes, and not obb boxes are returned. I wanted to confirm this, and suggest a note is added in the docs.
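For reference, the enclosing axis-aligned box of an OBB is just the min/max over its corner points; a generic sketch (not supervision's actual implementation):

```python
def enclosing_box(corners):
    """Axis-aligned (x_min, y_min, x_max, y_max) around OBB corner points."""
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    return (min(xs), min(ys), max(xs), max(ys))

rotated = [(10, 5), (30, 15), (25, 30), (5, 20)]  # corners of a rotated box
print(enclosing_box(rotated))  # (5, 5, 30, 30)
```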
I also want to know if this could be responsible for the odd predictions below - perhaps the enclosing boxes are not merged correctly:

### Additional
_No response_ | closed | 2024-07-22T22:46:47Z | 2024-08-06T07:40:32Z | https://github.com/roboflow/supervision/issues/1394 | [
"question"
] | robmarkcole | 8 |
ultralytics/ultralytics | deep-learning | 19,219 | Questions about different resolutions | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi, I want to use the yolov8n-seg model for my segmentation task. I trained the model at 640x640, but I want to deploy it at different resolutions, so I need to convert the .pt model into one with a different input resolution. Is this possible, and how can I do that?
### Additional
_No response_ | open | 2025-02-13T06:29:30Z | 2025-02-13T06:30:19Z | https://github.com/ultralytics/ultralytics/issues/19219 | [
"question",
"segment",
"exports"
] | QiqLiang | 1 |
paperless-ngx/paperless-ngx | machine-learning | 8,595 | [BUG] Setting certain documents as linked document causes internal server error when field set to invalid value | ### Description
Setting certain documents as a linked document (custom field) seems to cause an internal server error.
The same pdf files don't cause the issue on another instance of paperless-ngx, which leads me to believe it has something to do with certain custom fields or the UI, rather than the file itself?
### Steps to reproduce
1. Add custom field of type "Document link" to a document
2. Add certain document in that field.
3. Click save
### Webserver logs
```bash
paperless_webserver_1 | [2025-01-03 13:15:40,409] [ERROR] [django.request] Internal Server Error: /api/documents/80/
paperless_webserver_1 | Traceback (most recent call last):
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 518, in thread_handler
paperless_webserver_1 | raise exc_info[1]
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/django/core/handlers/exception.py", line 42, in inner
paperless_webserver_1 | response = await get_response(request)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 518, in thread_handler
paperless_webserver_1 | raise exc_info[1]
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/django/core/handlers/base.py", line 253, in _get_response_async
paperless_webserver_1 | response = await wrapped_callback(
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 468, in __call__
paperless_webserver_1 | ret = await asyncio.shield(exec_coro)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/asgiref/current_thread_executor.py", line 40, in run
paperless_webserver_1 | result = self.fn(*self.args, **self.kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 522, in thread_handler
paperless_webserver_1 | return func(*args, **kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/django/views/decorators/csrf.py", line 65, in _view_wrapper
paperless_webserver_1 | return view_func(request, *args, **kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/viewsets.py", line 124, in view
paperless_webserver_1 | return self.dispatch(request, *args, **kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/views.py", line 509, in dispatch
paperless_webserver_1 | response = self.handle_exception(exc)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/views.py", line 469, in handle_exception
paperless_webserver_1 | self.raise_uncaught_exception(exc)
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
paperless_webserver_1 | raise exc
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/views.py", line 506, in dispatch
paperless_webserver_1 | response = handler(request, *args, **kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/src/paperless/src/documents/views.py", line 393, in update
paperless_webserver_1 | response = super().update(request, *args, **kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/mixins.py", line 68, in update
paperless_webserver_1 | self.perform_update(serializer)
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/mixins.py", line 78, in perform_update
paperless_webserver_1 | serializer.save()
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/drf_writable_nested/mixins.py", line 233, in save
paperless_webserver_1 | return super(BaseNestedModelSerializer, self).save(**kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/serializers.py", line 203, in save
paperless_webserver_1 | self.instance = self.update(self.instance, validated_data)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/src/paperless/src/documents/serialisers.py", line 881, in update
paperless_webserver_1 | super().update(instance, validated_data)
paperless_webserver_1 | File "/usr/src/paperless/src/documents/serialisers.py", line 332, in update
paperless_webserver_1 | return super().update(instance, validated_data)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/drf_writable_nested/mixins.py", line 290, in update
paperless_webserver_1 | self.update_or_create_reverse_relations(instance, reverse_relations)
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/drf_writable_nested/mixins.py", line 188, in update_or_create_reverse_relations
paperless_webserver_1 | related_instance = serializer.save(**save_kwargs)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/local/lib/python3.12/site-packages/rest_framework/serializers.py", line 208, in save
paperless_webserver_1 | self.instance = self.create(validated_data)
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | File "/usr/src/paperless/src/documents/serialisers.py", line 600, in create
paperless_webserver_1 | self.reflect_doclinks(document, custom_field, validated_data["value"])
paperless_webserver_1 | File "/usr/src/paperless/src/documents/serialisers.py", line 716, in reflect_doclinks
paperless_webserver_1 | elif document.id not in target_doc_field_instance.value:
paperless_webserver_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless_webserver_1 | TypeError: 'in <string>' requires string as left operand, not int
paperless_webserver_1 | [2025-01-03 13:15:43,015] [WARNING] [django.request] Bad Request: /api/documents/80/
```
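The final `TypeError` at the bottom of this trace can be reproduced in isolation; a minimal illustration of Python's `in` semantics (not paperless-ngx code, and the string-typed value shown is hypothetical):

```python
# Python's `in` check on a string requires a string left operand.
# If a doc-link custom field value ends up stored as a string instead of
# a list of ids, `document.id not in value` fails exactly as in the trace.
value_as_list = [12, 34]          # expected shape of a doc-link value
print(80 in value_as_list)        # False

value_as_string = "12,34"         # hypothetical malformed value
try:
    80 in value_as_string
except TypeError as e:
    print(e)                      # 'in <string>' requires string as left operand, not int
```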
### Browser logs
```bash
Error popup in the webui:
{"headers":{"normalizedNames":{},"lazyUpdate":null},"status":500,"statusText":"Internal Server Error","url":"http://192.168.1.2:8000/api/documents/80/","ok":false,"name":"HttpErrorResponse","message":"Http failure response for http://192.168.1.2:8000/api/documents/80/: 500 Internal Server Error","error":"\n<!doctype html>\n<html lang=\"en\">\n<head>\n <title>Server Error (500)</title>\n</head>\n<body>\n <h1>Server Error (500)</h1><p></p>\n</body>\n</html>\n"}
```
### Paperless-ngx version
2.13.5
### Host OS
Debian 12
### Installation method
Docker - official image
### System status
```json
{
"pngx_version": "2.13.5",
"server_os": "Linux-6.1.0-25-amd64-x86_64-with-glibc2.36",
"install_type": "docker",
"storage": {
"total": 104348344320,
"available": 35895164928
},
"database": {
"type": "postgresql",
"url": "paperless",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "paperless_mail.0011_remove_mailrule_assign_tag_squashed_0024_alter_mailrule_name_and_more",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://broker:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2025-01-03T13:44:34.479716+01:00",
"index_error": null,
"classifier_status": "OK",
"classifier_last_trained": "2025-01-03T12:05:00.048963Z",
"classifier_error": null
}
}
```
### Browser
Chrome, Safari
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2025-01-03T13:03:27Z | 2025-02-04T03:05:14Z | https://github.com/paperless-ngx/paperless-ngx/issues/8595 | [
"bug",
"backend"
] | dwapps | 8 |
suitenumerique/docs | django | 107 | ✨(frontend) search documents | ## Feature Request
We want to be able to search documents on the frontend side.
Searching a document will filter the datagrid.
---
To be ready for dev, we need:
- [x] https://github.com/numerique-gouv/impress/issues/106
- [x] https://github.com/numerique-gouv/impress/issues/104
## Demo

| closed | 2024-07-01T08:37:06Z | 2025-01-03T08:27:02Z | https://github.com/suitenumerique/docs/issues/107 | [
"frontend",
"feature"
] | AntoLC | 1 |
dagster-io/dagster | data-science | 27,809 | add `metadata` kwarg to `sling_assets` decorator | ### What's the use case?
Assets often have supplemental metadata, and custom metadata should be accepted in addition to the metadata that is automatically added by sling.
This might be possible in the current release, but if so, it's not obvious how to actually implement it.
### Ideas of implementation
Currently, sling assets have their metadata set automatically:
https://github.com/dagster-io/dagster/blob/ae7df3211db59f901558f8ed6110248556465a34/python_modules/libraries/dagster-sling/dagster_sling/asset_decorator.py#L128-L133
It'd be easiest for users to pass a `metadata` kwarg to `sling_assets` that supplements the default. Here's the suggested update:
``` python
.merge_attributes(
metadata={
METADATA_KEY_TRANSLATOR: dagster_sling_translator,
METADATA_KEY_REPLICATION_CONFIG: replication_config,
**metadata, # new line -- passes a dict that comes from an input kwarg
}
)
```
Then metadata can be supplied to `sling_assets` in an easier format:
``` python
@sling_assets(
metadata={"my_custom_key": "my_custom_value"},
...
)
```
### Additional information
_No response_
### Message from the maintainers
Impacted by this issue? Give it a 👍! We factor engagement into prioritization. | open | 2025-02-12T21:49:06Z | 2025-02-12T21:49:06Z | https://github.com/dagster-io/dagster/issues/27809 | [
"type: feature-request"
] | jacksund | 0 |
thtrieu/darkflow | tensorflow | 1,143 | How to reduce memory usage? | **Hello there,**
I run into a problem when I try to use the network on my Jetson Nano with a Tegra X1 GPU. The process simply gets "Killed", probably due to memory over-consumption. I have heard that the network requires about 2.3 GB of RAM, but the most my GPU can squeeze out is 1.5 GB. Here's my Python code:
```
from darkflow.net.build import TFNet
import cv2
import tensorflow as tf
config = tf.ConfigProto(log_device_placement = False)
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.4
with tf.Session(config = config) as sess:
    options = {'model': './cfg/yolo.cfg', 'load': './yolov2.weights', 'threshold': 0.3, 'gpu': 1.0}
    tfnet = TFNet(options)
```
Does anyone know how to reduce the network's memory consumption? If this helps, the only things I want to detect are cars and people.
_Thanks for the answers in advance!_ | open | 2020-02-24T19:00:40Z | 2020-02-24T19:00:40Z | https://github.com/thtrieu/darkflow/issues/1143 | [] | TNemes-3141 | 0 |
vitalik/django-ninja | pydantic | 528 | Can't input Enum of integers | We have a simple view:
```python
class Numbers(int, Enum):
ONE = 1
TWO = 2
THREE = 3
@api.get("/create")
def create(request, a: Numbers, b: Numbers):
return {"res": a.value + b.value}
```
When we try to send a request with, for example, **?a=1&b=3**, we get a 422 validation error with this description:
```json
{
"detail": [
{
"loc": [
"query",
"a"
],
"msg": "value is not a valid enumeration member; permitted: 1, 2, 3",
"type": "type_error.enum",
"ctx": {
"enum_values": [
1,
2,
3
]
}
},
{...same for b...}
]
}
```
*I would like to point out: with a **str enum** everything works correctly.
The error above is `EnumMemberError`, which **pydantic** raises (in pydantic/validators/enum_member_validator).
How can we get around this error, and can we even do this?
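A sketch of the `:int`-plus-validator workaround mentioned in the P.S. (plain Python, not the django-ninja API; the helper name is illustrative):

```python
from enum import Enum

class Numbers(int, Enum):
    ONE = 1
    TWO = 2
    THREE = 3

def parse_number(raw: str) -> Numbers:
    """Coerce a raw query-string value like '1' into the enum by value."""
    try:
        return Numbers(int(raw))
    except ValueError as exc:
        allowed = [m.value for m in Numbers]
        raise ValueError(f"{raw!r} is not one of {allowed}") from exc

print(parse_number("1") + parse_number("3"))  # 4
```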
*P.S. To fix this in my project I just use **:int** and write a custom validator for the variable.* | closed | 2022-08-13T14:56:51Z | 2022-08-13T17:08:49Z | https://github.com/vitalik/django-ninja/issues/528 | [] | Maksim-Burtsev | 2 |
horovod/horovod | pytorch | 3,307 | CI: Build Horovod in test images with `HOROVOD_DEBUG=1` | **Is your feature request related to a problem? Please describe.**
Sometimes unit tests may trigger bugs in Horovod's C++ backend. These bugs may cause segmentation faults at times or, if we are unlucky, may go unnoticed although some internal state is corrupted. We have `assert()` macros all over the code base; however, these assertions are not checked in release mode. They would nonetheless be useful to identify bugs before they trigger segmentation faults. Assertion failure messages are also more specific and easier to understand than segfaults.
One example would be part 2 (related to `hvd.barrier`) of PR https://github.com/horovod/horovod/pull/3300. In local debug builds an assertion failure would be raised before the segmentation fault was triggered.
**Describe the solution you'd like**
I propose to set the environment variable `HOROVOD_DEBUG=1` when building Horovod in CI test containers. Then debug symbols will be included and assertions will be checked at runtime.
**Describe alternatives you've considered**
Counter arguments I can think of:
- Tests might take a bit longer to run with debug code: I don't believe that there would be significant slowdowns, but of course I could be proven wrong.
- Certain bugs might only be observable in release builds, not in debug builds: This would be bad, but personally I would expect that we miss more problems because assertions are not checked. Some test cases could still be built in release mode to partially cover this situation.
| open | 2021-12-08T15:00:01Z | 2021-12-08T15:00:01Z | https://github.com/horovod/horovod/issues/3307 | [
"enhancement"
] | maxhgerlach | 0 |
ploomber/ploomber | jupyter | 902 | Injecting parameters when the same notebook appears more than once | To inject cells manually, users can run:
```
ploomber nb --inject
```
However, if the same source appears more than once, Ploomber will arbitrarily select one of the tasks and inject those parameters, making it impossible to inject the parameters from other tasks that use the same template. Here's an example:
```yaml
tasks:
- source: template.ipynb
name: task-a
product: report-a.ipynb
params:
some_param: a
- source: template.ipynb
name: task-b
product: report-b.ipynb
params:
some_param: b
```
If we execute:
```
ploomber nb --inject
```
`template.ipynb` will have `some_param=b`, and there is no way to tell Ploomber to inject `some_param=a`.
We need the user to tell us which set of parameters they want to inject. I'm still unsure what's the simplest way (it has to be simple since this will be typed in the terminal). One approach I can think of is to add a new CLI argument (I'll call it `--priority` now for lack of a better name)
Example:
```
ploomber nb --inject --priority task-a
ploomber nb --inject --priority task-b
```
Since it might be that two or more notebooks are used as templates, `--priority` should be allowed to appear several times:
```
ploomber nb --inject --priority task-a --priority task-c
```
However, we'd need to validate that the values passed do not correspond to the same task. For example, this should throw an error:
```
ploomber nb --inject --priority task-a --priority task-b
```
(I'm not a fan of this approach so I'll keep thinking about what the best API is, suggestions welcome!)
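The duplicate check itself could be a few lines; a sketch over an in-memory task-to-source mapping (illustrative names, not Ploomber internals; `--priority` is the placeholder flag name from above):

```python
def validate_priority(priorities, task_to_source):
    """Reject --priority values whose tasks share the same source notebook."""
    seen = {}
    for task in priorities:
        source = task_to_source[task]
        if source in seen:
            raise ValueError(
                f"--priority {task!r} and {seen[source]!r} both use {source!r}; "
                "pass at most one task per source"
            )
        seen[source] = task
    return [task_to_source[t] for t in priorities]

tasks = {"task-a": "template.ipynb", "task-b": "template.ipynb", "task-c": "other.ipynb"}
print(validate_priority(["task-a", "task-c"], tasks))  # ['template.ipynb', 'other.ipynb']
```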
Furthermore, we should throw a warning explaining to the user that since `template.ipynb` appears more than once, only one set of parameters will be visible. Something like:
> `template.ipynb` appears more than once in your pipeline; the parameters from `task-b` will be injected. To inject the parameters of another task, pass: `ploomber nb --inject --priority {task-name}`
| closed | 2022-07-07T22:35:29Z | 2022-11-07T20:09:32Z | https://github.com/ploomber/ploomber/issues/902 | [] | edublancas | 3 |
lepture/authlib | django | 292 | OpenID Connect session management | I suggest implementing the [OpenID Connect session management draft](https://openid.net/specs/openid-connect-session-1_0.html) in authlib, even if it is still a draft. There have been 30 iterations, so it feels quite stable now. Authlib could warn the user with a message like: *this is a draft, the API may change abruptly to follow the draft iterations, use it at your own risk*.
That would mean providing:
- an additional `session_state` parameter to the authorization response;
- a `check_session_iframe` endpoint, and iframe content;
- a `end_session_endpoint` endpoint.
What do you think?
Related issues #500 #560 #561 | open | 2020-11-13T15:38:21Z | 2025-02-20T20:58:29Z | https://github.com/lepture/authlib/issues/292 | [
"spec",
"feature request"
] | azmeuk | 3 |
thtrieu/darkflow | tensorflow | 991 | Objects are recognized in micro but not in macro scale | I trained the model with the [micro images](https://i.imgur.com/woOU4Fu.png). Darkflow recognizes objects very well in small images, but not in the image of my [final project](https://i.imgur.com/YwrhCpO.jpg).
It did not detect the insects' positions. | closed | 2019-02-22T17:13:26Z | 2019-03-06T21:01:52Z | https://github.com/thtrieu/darkflow/issues/991 | [] | brunobelloni | 0 |
dmlc/gluon-nlp | numpy | 837 | Automate dependency upgrades | ## Description
Let's automate dependency updates to avoid running into long-fixed issues, such as the pylint false positives triggered by unrelated changes in https://github.com/dmlc/gluon-nlp/pull/836.
https://github.com/renovatebot/renovate or competing solutions can automatically open PRs on a specified schedules that propose upgrading outdated dependencies. If these pass our CI, we can merge the PRs. | open | 2019-07-18T09:20:25Z | 2019-07-18T09:22:36Z | https://github.com/dmlc/gluon-nlp/issues/837 | [
"enhancement"
] | leezu | 0 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 544 | [BUG] Douyin links cannot be parsed | Douyin
API-V1/API-V2/Web APP
Input: multiple links all fail to parse


| closed | 2025-01-26T09:08:36Z | 2025-01-26T09:22:11Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/544 | [
"BUG"
] | xxhzm | 4 |
nteract/testbook | pytest | 32 | Improve traceback | A couple of issues with the traceback:
- too verbose, we'd likely want to hide the underlying nbclient calls
- errors are printed twice at the end of the traceback
Here is a sample snippet that throws a `NameError`
```python
In [3]: with testbook('../something.ipynb') as tb:
...: tb.execute_cell(0) # execute the first cell
...: tb.value('foo') # does not exist, will throw NameError
...:
---------------------------------------------------------------------------
CellExecutionError Traceback (most recent call last)
<ipython-input-3-bc3dbf017b62> in <module>
1 with testbook('../something.ipynb') as tb:
2 tb.execute_cell(0)
----> 3 tb.value('foo')
4
~/testbook/testbook/client.py in value(self, name)
130 """Extract a JSON-able variable value from notebook kernel"""
131
--> 132 result = self.inject(name)
133 if not self._execute_result(result.outputs):
134 raise ValueError('code provided does not produce execute_result')
~/testbook/testbook/client.py in inject(self, code, args, prerun)
123
124 self.nb.cells.append(new_code_cell(lines))
--> 125 cell = self.execute_cell(len(self.nb.cells) - 1)
126
127 return TestbookNode(cell)
~/testbook/testbook/client.py in execute_cell(self, cell, **kwargs)
59 executed_cells = []
60 for idx in cell_indexes:
---> 61 cell = super().execute_cell(self.nb['cells'][idx], idx, **kwargs)
62 executed_cells.append(cell)
63
~/miniconda3/envs/testbook/lib/python3.8/site-packages/nbclient/util.py in wrapped(*args, **kwargs)
70 """
71 def wrapped(*args, **kwargs):
---> 72 return just_run(coro(*args, **kwargs))
73 wrapped.__doc__ = coro.__doc__
74 return wrapped
~/miniconda3/envs/testbook/lib/python3.8/site-packages/nbclient/util.py in just_run(coro)
49 nest_asyncio.apply()
50 check_patch_tornado()
---> 51 return loop.run_until_complete(coro)
52
53
~/miniconda3/envs/testbook/lib/python3.8/asyncio/base_events.py in run_until_complete(self, future)
610 raise RuntimeError('Event loop stopped before Future completed.')
611
--> 612 return future.result()
613
614 def stop(self):
~/miniconda3/envs/testbook/lib/python3.8/site-packages/nbclient/client.py in async_execute_cell(self, cell, cell_index, execution_count, store_history)
745 if execution_count:
746 cell['execution_count'] = execution_count
--> 747 self._check_raise_for_error(cell, exec_reply)
748 self.nb['cells'][cell_index] = cell
749 return cell
~/miniconda3/envs/testbook/lib/python3.8/site-packages/nbclient/client.py in _check_raise_for_error(self, cell, exec_reply)
669 if self.force_raise_errors or not cell_allows_errors:
670 if (exec_reply is not None) and exec_reply['content']['status'] == 'error':
--> 671 raise CellExecutionError.from_cell_and_msg(cell, exec_reply['content'])
672
673 async def async_execute_cell(self, cell, cell_index, execution_count=None, store_history=True):
CellExecutionError: An error occurred while executing the following cell:
------------------
foo
------------------
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-2-f1d2d2f924e9> in <module>
----> 1 foo
NameError: name 'foo' is not defined
NameError: name 'foo' is not defined
```
Resolves #6 | closed | 2020-06-15T17:11:36Z | 2020-06-25T08:27:04Z | https://github.com/nteract/testbook/issues/32 | [] | rohitsanj | 0 |
wsvincent/awesome-django | django | 103 | django-summernote is in the wrong section | The package is listed in the Forms section; it should be in the Editors section. | closed | 2020-10-08T19:17:00Z | 2020-10-13T03:33:52Z | https://github.com/wsvincent/awesome-django/issues/103 | [] | gabrielloliveira | 0 |
deepset-ai/haystack | machine-learning | 8,175 | clean up docstrings: AzureOpenAIDocumentEmbedder & AzureOpenAITextEmbedder | closed | 2024-08-08T13:49:46Z | 2024-08-13T12:17:48Z | https://github.com/deepset-ai/haystack/issues/8175 | [
"type:documentation"
] | dfokina | 0 | |
matplotlib/mplfinance | matplotlib | 658 | Bug Report: marketcolors display error when open=close | **Describe the bug**
I set `my_color = mpf.make_marketcolors(up="red", down="green", volume="in", inherit=True)`,
but the plot draws a green bar when open equals close and the open price is higher than the previous day's close.
**Expected behavior**
Display colors correctly
**Screenshots**

**Desktop (please complete the following information):**
- OS: [Windows 10]
- Browser [firefox]
- Version [mplfinance 0.12.10b0]
**Additional context**
| closed | 2024-01-23T03:32:26Z | 2024-01-25T01:34:44Z | https://github.com/matplotlib/mplfinance/issues/658 | [
"bug"
] | nkta3m | 1 |
marimo-team/marimo | data-science | 4,153 | Notebooks using `query_params` do not update on browser navigation changes | ### Describe the bug
On a notebook with url query parameters, setting the `query_params` adds to the browser history stack, but when moving backwards or forward through the browser history the notebook does not update.
### Environment
<details>
```
{
"marimo": "0.11.21",
"OS": "Darwin",
"OS Version": "24.3.0",
"Processor": "arm",
"Python Version": "3.13.2",
"Binaries": {
"Browser": "134.0.6998.89",
"Node": "v23.9.0"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.31.0",
"packaging": "24.2",
"psutil": "7.0.0",
"pygments": "2.19.1",
"pymdown-extensions": "10.14.3",
"pyyaml": "6.0.2",
"ruff": "0.11.0",
"starlette": "0.46.1",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "15.0.1"
},
"Optional Dependencies": {
"altair": "5.5.0",
"anywidget": "0.9.16",
"pandas": "2.2.3",
"polars": "1.25.2",
"pyarrow": "19.0.1"
},
"Experimental Flags": {}
}
```
</details>
### Code to reproduce
```python
import marimo
__generated_with = "0.11.21"
app = marimo.App(width="medium")
@app.cell
def _():
import marimo as mo
return (mo,)
@app.cell
def _(mo):
query_params = mo.query_params()
def set_tab(tab):
query_params['tab'] = tab
def get_tab():
return query_params.get('tab', 'Tab 1')
return get_tab, query_params, set_tab
@app.cell
def _(mo):
tabs = {
'Tab 1': mo.md("Hello World!"),
'Tab 2': mo.md("Hello World?"),
'Tab 3': mo.md("Hello? Anyone there?"),
}
return (tabs,)
@app.cell
def _(get_tab, mo, set_tab, tabs):
tab_view = mo.ui.tabs(
tabs,
value=get_tab(),
on_change=lambda tab: set_tab(tab),
)
return (tab_view,)
@app.cell
def _(tab_view):
tab_view
return
@app.cell
def _(query_params):
query_params
return
if __name__ == "__main__":
app.run()
``` | open | 2025-03-18T18:54:43Z | 2025-03-19T16:18:24Z | https://github.com/marimo-team/marimo/issues/4153 | [
"bug",
"help wanted"
] | HHammond | 4 |
gradio-app/gradio | deep-learning | 10,105 | The message format of multimodal chatbot examples differs from that of normal submissions | ### Describe the bug
When you click the example image inside the Chatbot component of the following app
```py
import gradio as gr
def run(message, history):
print(message)
return "aaa"
demo = gr.ChatInterface(
fn=run,
examples=[
[
{
"text": "Describe the image.",
"files": ["cats.jpg"],
},
],
],
multimodal=True,
type="messages",
cache_examples=False,
)
demo.launch()
```

the printed message format looks like this:
```
{'text': 'Describe the image.', 'files': [{'path': '/tmp/gradio/4766eb361fb2233afe48adb8f799f04eee25d8f2eb32fd4a835d27f777e0dee6/cats.jpg', 'url': 'https://hysts-debug-multimodal-chat-examples.hf.space/gradio_api/file=/tmp/gradio/4766eb361fb2233afe48adb8f799f04eee25d8f2eb32fd4a835d27f777e0dee6/cats.jpg', 'size': None, 'orig_name': 'cats.jpg', 'mime_type': 'image/jpeg', 'is_stream': False, 'meta': {'_type': 'gradio.FileData'}}]}
```
But when you submit the same input from the textbox component in the bottom, it looks like this:
```
{'text': 'Describe the image.', 'files': ['/tmp/gradio/4766eb361fb2233afe48adb8f799f04eee25d8f2eb32fd4a835d27f777e0dee6/cats.jpg']}
```
This inconsistency is problematic. I think the latter is the correct and expected format.
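Until the two shapes are unified, an app-side normalizer can accept both (a defensive sketch; the dict key `path` mirrors the `gradio.FileData` payload printed above):

```python
def normalize_files(message):
    """Return plain file paths whether entries are str paths or FileData dicts."""
    paths = []
    for entry in message.get("files", []):
        paths.append(entry["path"] if isinstance(entry, dict) else entry)
    return paths

from_example = {"text": "Describe the image.",
                "files": [{"path": "/tmp/cats.jpg", "orig_name": "cats.jpg"}]}
from_textbox = {"text": "Describe the image.", "files": ["/tmp/cats.jpg"]}
print(normalize_files(from_example) == normalize_files(from_textbox))  # True
```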
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
https://huggingface.co/spaces/hysts-debug/multimodal-chat-examples
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
gradio==5.7.1
```
### Severity
I can work around it | closed | 2024-12-03T08:35:43Z | 2024-12-07T15:51:01Z | https://github.com/gradio-app/gradio/issues/10105 | [
"bug"
] | hysts | 0 |
Avaiga/taipy | data-visualization | 2,172 | [🐛 BUG] MockState returns None in place of False | ### What went wrong? 🤔
The MockState returns an incorrect value for my variable thus breaking my tests.
### Expected Behavior
This should work.
### Steps to Reproduce Issue
```python
from taipy.gui.mock.mock_state import MockState
from taipy.gui import Gui
ms = MockState(
Gui(""),
agree_disagree1=True,
agree_disagree2=False,
)
print(ms.agree_disagree2)
```
This should print: False
### Version of Taipy
develop - 10/28/24
### Acceptance Criteria
- [ ] A unit test reproducing the bug is added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] The bug reporter validated the fix.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-10-28T13:32:54Z | 2024-10-28T14:17:10Z | https://github.com/Avaiga/taipy/issues/2172 | [
"🖰 GUI",
"💥Malfunction",
"🟨 Priority: Medium"
] | FlorianJacta | 0 |
iterative/dvc | data-science | 10,428 | dvc exp run --run-all: One or two experiments are executed, than it hangs (JSONDecodeError) (similar to #10398) | # Bug Report
## Description
When executing `dvc exp run --run-all`, the worker hangs at some point (after finishing a small number of experiments, right before starting a new one). Once this happened after two experiments, now after one.
### Reproduce
1. Add multiple experiments to the queue with `dvc exp run --queue`
2. Run `dvc exp run --run-all`
### Expected
All experiments are executed.
### Environment information
I'm running this through GitHub Actions on a self-hosted runner (Ubuntu 22.04).
**Output of `dvc doctor`:**
DVC version: 3.50.2 (pip)
-------------------------
Platform: Python 3.11.9 on Linux-5.15.0-107-generic-x86_64-with-glibc2.35
Subprojects:
dvc_data = 3.15.1
dvc_objects = 5.1.0
dvc_render = 1.0.2
dvc_task = 0.4.0
scmrepo = 3.3.3
Supports:
http (aiohttp = 3.9.5, aiohttp-retry = 2.8.3),
https (aiohttp = 3.9.5, aiohttp-retry = 2.8.3),
s3 (s3fs = 2024.3.1, boto3 = 1.34.69)
Config:
Global: /github/home/.config/dvc
System: /etc/xdg/dvc
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: s3
**Additional Information (if any):**
`cat .dvc/tmp/exps/celery/dvc-exp-worker-1.out` gives me this:
/app/venv/lib/python3.11/site-packages/celery/platforms.py:829: SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
[2024-05-15 16:42:53,662: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost'
-------------- dvc-exp-0b0771-1@localhost v5.4.0 (opalescent)
--- ***** -----
-- ******* ---- Linux-5.15.0-107-generic-x86_64-with-glibc2.35 2024-05-15 16:42:53
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: dvc-exp-local:0x7fc7d38a2350
- ** ---------- .> transport: filesystem://localhost//
- ** ---------- .> results: file:///__w/equinor_pipeline_model/equinor_pipeline_model/.dvc/tmp/exps/celery/result
- *** --- * --- .> concurrency: 1 (thread)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. dvc.repo.experiments.queue.tasks.cleanup_exp
. dvc.repo.experiments.queue.tasks.collect_exp
. dvc.repo.experiments.queue.tasks.run_exp
. dvc.repo.experiments.queue.tasks.setup_exp
. dvc_task.proc.tasks.run
[2024-05-15 16:42:53,671: WARNING/MainProcess] /app/venv/lib/python3.11/site-packages/celery/worker/consumer/consumer.py:508: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
warnings.warn(
[2024-05-15 16:42:53,671: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost'
[2024-05-15 16:42:53,671: INFO/MainProcess] Connected to filesystem://localhost//
[2024-05-15 16:42:53,673: INFO/MainProcess] dvc-exp-0b0771-1@localhost ready.
[2024-05-15 16:42:53,674: INFO/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[093a3dbc-da8f-4222-a839-e015a20dd6c2] received
[2024-05-15 20:05:07,673: WARNING/MainProcess] No hostname was supplied. Reverting to default 'localhost'
[2024-05-15 20:26:58,967: CRITICAL/MainProcess] Unrecoverable error: JSONDecodeError('Expecting value: line 1 column 1 (char 0)')
Traceback (most recent call last):
File "/app/venv/lib/python3.11/site-packages/celery/worker/worker.py", line 202, in start
self.blueprint.start(self)
File "/app/venv/lib/python3.11/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/app/venv/lib/python3.11/site-packages/celery/bootsteps.py", line 365, in start
return self.obj.start()
^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/celery/worker/consumer/consumer.py", line 340, in start
blueprint.start(self)
File "/app/venv/lib/python3.11/site-packages/celery/bootsteps.py", line 116, in start
step.start(parent)
File "/app/venv/lib/python3.11/site-packages/celery/worker/consumer/consumer.py", line 746, in start
c.loop(*c.loop_args())
File "/app/venv/lib/python3.11/site-packages/celery/worker/loops.py", line 130, in synloop
connection.drain_events(timeout=2.0)
File "/app/venv/lib/python3.11/site-packages/kombu/connection.py", line 341, in drain_events
return self.transport.drain_events(self.connection, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/transport/virtual/base.py", line 997, in drain_events
get(self._deliver, timeout=timeout)
File "/app/venv/lib/python3.11/site-packages/kombu/utils/scheduling.py", line 55, in get
return self.fun(resource, callback, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/transport/virtual/base.py", line 1035, in _drain_channel
return channel.drain_events(callback=callback, timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/transport/virtual/base.py", line 754, in drain_events
return self._poll(self.cycle, callback, timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/transport/virtual/base.py", line 414, in _poll
return cycle.get(callback)
^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/utils/scheduling.py", line 55, in get
return self.fun(resource, callback, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/transport/virtual/base.py", line 417, in _get_and_deliver
message = self._get(queue)
^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/transport/filesystem.py", line 261, in _get
return loads(bytes_to_str(payload))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/venv/lib/python3.11/site-packages/kombu/utils/json.py", line 93, in loads
return _loads(s, object_hook=object_hook)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
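Judging by the bottom of the traceback, kombu's filesystem transport read a message payload that doesn't start with JSON. For what it's worth, an empty payload reproduces the exact error text (a guess at the trigger, not a confirmed root cause):

```python
import json

# Presumably the filesystem broker handed an empty (or truncated)
# .msg file to the JSON decoder; an empty string raises the same error:
try:
    json.loads("")
except json.JSONDecodeError as exc:
    print(exc)  # Expecting value: line 1 column 1 (char 0)
```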
very similar to https://github.com/iterative/dvc/issues/10398, though no solution was proposed there (as far as I can see)
The experiments that have run have been executed successfully.
| open | 2024-05-16T08:37:39Z | 2024-05-19T23:40:20Z | https://github.com/iterative/dvc/issues/10428 | [
"A: experiments"
] | AljoSt | 0 |
matterport/Mask_RCNN | tensorflow | 2,672 | Upgrading the M-RCNN one input model to two input model | Hello,
I am trying to run this code with two input models (an RGB image and the edge map of the same image) so that I can get comparatively better results. To feed two inputs I am following this approach: [use fit_generator with multiple image inputs](https://github.com/keras-team/keras/issues/8130).
After feeding the two inputs I use the resnet50 backbone model for feature extraction, then concatenate the output feature vectors of both images and feed them forward to the Mask R-CNN model.

I made changes in code model.py.
```python
if callable(config.BACKBONE):
    _, C12, C13, C14, C15 = config.BACKBONE(input_image1, stage5=True,
                                            train_bn=config.TRAIN_BN)
    _, C22, C23, C24, C25 = config.BACKBONE(input_image2, stage5=True,
                                            train_bn=config.TRAIN_BN)
    C2 = KL.Concatenate()([C12, C22])
    C3 = KL.Concatenate()([C13, C23])
    C4 = KL.Concatenate()([C14, C24])
    C5 = KL.Concatenate()([C15, C25])
else:
    ###
    # For the same-name error, I tried to fix it by creating the same
    # function under two different names and changing the layer names.
    ###
    _, C12, C13, C14, C15 = resnet_graph1(input_image1, config.BACKBONE,
                                          stage5=True, train_bn=config.TRAIN_BN)
    _, C22, C23, C24, C25 = resnet_graph2(input_image2, config.BACKBONE,
                                          stage5=True, train_bn=config.TRAIN_BN)
    C2 = KL.Concatenate()([C12, C22])
    C3 = KL.Concatenate()([C13, C23])
    C4 = KL.Concatenate()([C14, C24])
    C5 = KL.Concatenate()([C15, C25])
```
And I made some other relevant changes in model.py functions such as fit_generators, inputs to the model, and data_generators, as explained in the above link for **use fit_generator with multiple image inputs**. All these changes lead to this error:


I tried fixing this error by creating two different functions (resnet_graph1 and resnet_graph2) for the backbone model; after the concatenation the whole model structure is the same.
[model.py](https://drive.google.com/file/d/12UuC4Tun9D28fr8Ppl1jDwNXc6FTXpz9/view?usp=sharing)
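My reading of the error (illustrated with a toy example below, using hypothetical names, not actual Keras code) is that both backbone calls register layers under identical names, which prefixing each call's layer names would avoid:

```python
class LayerRegistry:
    """Toy stand-in for a model graph that, like Keras, requires
    every layer name to be unique."""
    def __init__(self):
        self.names = set()

    def add(self, name):
        if name in self.names:
            raise ValueError(f"The name {name!r} is used 2 times in the "
                             "model. All layer names should be unique.")
        self.names.add(name)

def add_backbone(registry, prefix=""):
    # A real backbone registers many layers; three stand in for all of them.
    for layer in ("conv1", "bn_conv1", "res2a_branch2a"):
        registry.add(prefix + layer)
```

Calling `add_backbone` twice with no prefix raises the duplicate-name error; calling it with distinct prefixes (e.g. "rgb_" and "edge_") succeeds.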
Please tell me what causes this error and share any intuition on how to fix it. I would appreciate suggestions.
Thank You
| open | 2021-08-18T15:06:39Z | 2023-02-02T12:58:47Z | https://github.com/matterport/Mask_RCNN/issues/2672 | [] | jprakash-1 | 2 |
babysor/MockingBird | pytorch | 759 | Is there any way to do transfer training? | **Summary [one-sentence description of the question]**
My local machine has little disk space, so I cannot download a large dataset for training.
But I want to train a model on a smaller amount of data; because the data is insufficient, the results are poor (words get dropped).
So, is it possible to do transfer training: extract the weights from the model file linked in the README and then train on my own data, to improve the model's capability? | open | 2022-10-04T03:13:11Z | 2022-10-11T00:29:53Z | https://github.com/babysor/MockingBird/issues/759 | [] | SFKgroup | 2 |
coqui-ai/TTS | deep-learning | 3,972 | [Bug?] TTS of "10. 9. 8. 7. 6. 5. 4. 3. 2. 1. Finished" seems to clog the system | ### Describe the bug
Trying to get TTS to do a countdown, but it seems to run forever, when a similar prompt seems to run in a reasonable time
Works as expected:
```shell
tts --text "How is the weather today?" --model_name "tts_models/en/ek1/tacotron2" --out_path test2.wav
```
Runs forever on my system:
```shell
tts --text "10. 9. 8. 7. 6. 5. 4. 3. 2. 1. Finished" --model_name "tts_models/en/ek1/tacotron2" --out_path test3.wav
```
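Not a fix, but a workaround sketch on my side (my own helper, untested against Coqui's splitter): joining the numbers with commas instead of full stops should give the sentence splitter one sentence instead of ten one-word ones.

```python
def countdown_text(n):
    """Build a countdown string without sentence-ending periods."""
    return ", ".join(str(i) for i in range(n, 0, -1)) + ", finished"

print(countdown_text(10))  # 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, finished
```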
### To Reproduce
Run:
`tts --text "10. 9. 8. 7. 6. 5. 4. 3. 2. 1. Finished" --model_name "tts_models/en/ek1/tacotron2" --out_path test3.wav`
### Expected behavior
Reasonable execution time
### Logs
```shell
tts --text "10. 9. 8. 7. 6. 5. 4. 3. 2. 1. Finished" --model_name "tts_models/en/ek1/tacotron2" --out_path test3.wav
> tts_models/en/ek1/tacotron2 is already downloaded.
> vocoder_models/en/ek1/wavegrad is already downloaded.
> Using model: Tacotron2
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log10
| > min_level_db:-10
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:0
| > fft_size:1024
| > power:1.8
| > preemphasis:0.99
| > griffin_lim_iters:60
| > signal_norm:True
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:10
| > hop_length:256
| > win_length:1024
> Model's reduction rate `r` is set to: 2
> Vocoder Model: wavegrad
> Text: 10. 9. 8. 7. 6. 5. 4. 3. 2. 1. Finished
> Text splitted to sentences.
['10. 9.', '8.', '7.', '6.', '5.', '4.', '3.', '2.', '1.', 'Finished']
(still running)
```
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": null
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.2",
"TTS": "0.22.0",
"numpy": "1.22.0"
},
"System": {
"OS": "Darwin",
"architecture": [
"64bit",
""
],
"processor": "arm",
"python": "3.10.14",
"version": "Darwin Kernel Version 23.5.0: Wed May 1 20:16:51 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8103"
}
}
```
### Additional context
_No response_ | closed | 2024-08-16T11:35:36Z | 2025-01-03T08:49:07Z | https://github.com/coqui-ai/TTS/issues/3972 | [
"bug",
"wontfix"
] | thomasf1 | 2 |
NullArray/AutoSploit | automation | 1,003 | Ekultek, you are correct. | Kek | closed | 2019-04-19T16:46:43Z | 2019-04-19T16:57:49Z | https://github.com/NullArray/AutoSploit/issues/1003 | [] | AutosploitReporter | 0 |
desec-io/desec-stack | rest-api | 91 | api: rrset PATCH fails when updating ttl only | When sending a POST request to the API endpoint updating ONLY the ttl value, the API responds with an "500 Internal Server Error". This does not happen when the redords are updated or when both, the records and the ttl, are updated.
```
*** DEBUG: http-request : http-url : https://desec.io/api/v1/domains/popmail.at/rrsets/test.../TXT/
*** DEBUG: http-request : http-type : PATCH
*** DEBUG: http-request : http-header : {'Content-Type': 'application/json', 'Authorization': 'Token 123token456data789'}
*** DEBUG: http-request : http-data : {"ttl": "120"}
*** DEBUG: http-response: http-code : 500
*** DEBUG: http-response: http-error : '500: Internal Server Error'
*** DEBUG: http-response: http-body :
<h1>Server Error (500)</h1>
Error: The request failed with 500: Internal Server Error
```
| closed | 2018-02-02T14:39:54Z | 2018-02-08T20:03:59Z | https://github.com/desec-io/desec-stack/issues/91 | [
"bug",
"api"
] | gerhard-tinned | 0 |
holoviz/panel | jupyter | 6,795 | Select options not visible when using MaterialTemplate dark theme | #### ALL software version info
Windows 10 Pro
Chrome 124.0.6367.78
Python 3.10.2
panel==1.4.2
panel-modal==0.4.0
(requirements.txt below)
#### Description of expected behavior and the observed behavior
Observed: Select widgets render options with white text and white background, making text unreadable.
Expected: Select options should render with dark background
#### Complete, minimal, self-contained example code that reproduces the issue
```
import panel as pn
pn.extension()
template = pn.template.MaterialTemplate(title="Select Test", theme='dark')
states = pn.widgets.Select(name='States', options=['Arizona', 'California', 'Connecticut', 'Kansas', 'Texas'], value='California')
template.main.append(states)
template.show()
```
#### Screenshots or screencasts of the bug in action

#### requirements.txt
anyio==3.7.1
appnope==0.1.3
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
arrow==1.2.3
asttokens==2.2.1
async-lru==2.0.4
attrs==23.1.0
Babel==2.12.1
backcall==0.2.0
beautifulsoup4==4.12.2
bleach==6.0.0
bokeh==3.4.0
boto3==1.34.81
botocore==1.34.81
cachetools==5.3.1
certifi==2023.5.7
cffi==1.15.1
chardet==5.2.0
charset-normalizer==3.1.0
click==8.1.6
colorama==0.4.6
colorcet==3.0.1
comm==0.1.3
contourpy==1.2.0
debugpy==1.6.7
decorator==5.1.1
defusedxml==0.7.1
embeddify==0.3.1
et-xmlfile==1.1.0
exceptiongroup==1.1.2
executing==1.2.0
fastjsonschema==2.17.1
filelock==3.12.2
fqdn==1.5.1
greenlet==3.0.3
h11==0.14.0
holoviews==1.18.1
humanize==4.7.0
hvplot==0.9.2
idna==3.4
ijson==3.2.3
iniconfig==2.0.0
ipycytoscape==1.3.3
ipyiframe==0.1.0
ipykernel==6.23.2
ipysheet==0.7.0
ipython==8.14.0
ipython-genutils==0.2.0
ipyvue==1.9.2
ipyvuetify==1.8.10
ipywidgets==8.0.6
ipywidgets-bokeh==1.5.0
isoduration==20.11.0
jedi==0.18.2
Jinja2==3.1.2
jmespath==1.0.1
json5==0.9.14
jsonlines==4.0.0
jsonpointer==2.4
jsonschema==4.18.6
jsonschema-specifications==2023.7.1
jupyter==1.0.0
jupyter-console==6.6.3
jupyter-events==0.7.0
jupyter-lsp==2.2.0
jupyter_client==8.2.0
jupyter_core==5.3.1
jupyter_server==2.7.0
jupyter_server_terminals==0.4.4
jupyterlab==4.0.4
jupyterlab-pygments==0.2.2
jupyterlab-widgets==3.0.7
jupyterlab_server==2.24.0
-e git+ssh://git@github.com/jazl/lexo-jupyter.git@1793a20f6dff11863de62970fe66e4b08d0b4d07#egg=lexo
linear-tsv==1.1.0
linkify-it-py==2.0.2
Markdown==3.4.4
markdown-it-py==3.0.0
MarkupSafe==2.1.3
matplotlib-inline==0.1.6
mdit-py-plugins==0.4.0
mdurl==0.1.2
mistune==2.0.5
nbclient==0.8.0
nbconvert==7.5.0
nbformat==5.9.0
nest-asyncio==1.5.6
notebook==7.0.2
notebook_shim==0.2.3
numpy==1.25.0
openpyxl==3.1.2
overrides==7.3.1
packaging==23.1
pandas==2.0.2
pandocfilters==1.5.0
panel==1.4.2
panel-modal==0.4.0
param==2.0.0
parso==0.8.3
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.5.0
platformdirs==3.5.3
pluggy==1.2.0
prometheus-client==0.17.1
prompt-toolkit==3.0.38
psutil==5.9.5
ptyprocess==0.7.0
pure-eval==0.2.2
pycparser==2.21
pyct==0.5.0
Pygments==2.15.1
pymdown-extensions==10.1
pyrsistent==0.19.3
pytest==7.4.0
python-dateutil==2.8.2
python-dotenv==1.0.1
python-json-logger==2.0.7
pytz==2023.3
pyviz_comms==3.0.0
pywin32==306
pywinpty==2.0.11
PyYAML==6.0.1
pyzmq==25.1.0
qtconsole==5.4.3
QtPy==2.3.1
reacton==1.7.1
referencing==0.30.1
requests==2.31.0
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rich==13.5.2
rich-click==1.6.1
rpds-py==0.9.2
s3transfer==0.10.1
Send2Trash==1.8.2
six==1.16.0
sniffio==1.3.0
solara==1.19.0
soupsieve==2.4.1
spectate==1.0.1
SQLAlchemy==2.0.29
stack-data==0.6.2
starlette==0.31.0
tabulator==1.53.5
terminado==0.17.1
tinycss2==1.2.1
tomli==2.0.1
tornado==6.3.2
tqdm==4.66.1
traitlets==5.9.0
typing_extensions==4.7.1
tzdata==2023.3
uc-micro-py==1.0.2
unicodecsv==0.14.1
uri-template==1.3.0
urllib3==2.0.3
uvicorn==0.23.2
watchdog==3.0.0
wcwidth==0.2.6
webcolors==1.13
webencodings==0.5.1
websocket-client==1.6.1
websockets==11.0.3
widgetsnbextension==4.0.7
xlrd==2.0.1
xyzservices==2023.10.1
| open | 2024-04-26T15:12:10Z | 2024-04-26T15:12:10Z | https://github.com/holoviz/panel/issues/6795 | [] | jazl | 0 |
aminalaee/sqladmin | sqlalchemy | 33 | Cleaning up pagination | Right now the pagination only shows the current page plus previous and next.
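A rough sketch of the fixed-width windowing logic I have in mind (function name and shape are just illustrative):

```python
def page_window(current, total, width=7):
    """Return up to `width` consecutive page numbers centred on
    `current`, clamped to the valid range 1..total."""
    start = max(1, min(current - width // 2, total - width + 1))
    end = min(total, start + width - 1)
    return list(range(start, end + 1))
```

For example, `page_window(10, 20)` gives pages 7 through 13, while the edges clamp to `1..7` and `14..20`.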
It would be great to always have 7 pages shown (if applicable) and show previous and next pages. | closed | 2022-02-01T09:49:45Z | 2022-02-07T08:53:14Z | https://github.com/aminalaee/sqladmin/issues/33 | [
"enhancement"
] | aminalaee | 0 |
trevismd/statannotations | seaborn | 49 | Limiting annotations shown | Hi @mxposed
I would like to find out whether there is documentation for this tool. I would also like to know if there is an option to limit annotations to only statistically significant comparisons.
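In case a pre-filtering approach helps, here is a rough sketch of keeping only the significant pairs before handing them to the annotator (entirely my own helper; the normal-approximation t-test below just stands in for whatever test you actually use):

```python
import math
from itertools import combinations

def welch_t_pvalue(a, b):
    """Two-sided Welch t-test p-value using a normal approximation
    (a rough stand-in for a proper test; assumes non-degenerate variance)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    t = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(t) / math.sqrt(2.0))))

def significant_pairs(groups, alpha=0.05):
    """Return only the group-name pairs that test significant; these
    would be the `pairs` handed to the annotator."""
    return [
        (name_a, name_b)
        for (name_a, a), (name_b, b) in combinations(groups.items(), 2)
        if welch_t_pvalue(a, b) < alpha
    ]
```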
Thank you | closed | 2022-02-26T17:14:17Z | 2023-06-09T07:38:20Z | https://github.com/trevismd/statannotations/issues/49 | [] | eddykay310 | 3 |
microsoft/nni | deep-learning | 5783 | WARNING: GPU found but will not be used. Please set `experiment.config.trial_gpu_number` to the number of GPUs you want to use for each trial. | Hello! While running NAS I found this problem: WARNING: GPU found but will not be used. Please set `experiment.config.trial_gpu_number` to the number of GPUs you want to use for each trial.
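Presumably the warning is asking for something along these lines before launching the experiment (a sketch; the surrounding setup is assumed):

```python
from nni.experiment import Experiment

experiment = Experiment('local')
# ... search space, trial command, concurrency, etc. ...
experiment.config.trial_gpu_number = 1  # GPUs to allocate to each trial
```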
| open | 2024-05-16T14:40:12Z | 2024-05-29T02:27:43Z | https://github.com/microsoft/nni/issues/5783 | [] | xutongpure | 1 |
yihong0618/running_page | data-visualization | 105 | sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) unable to open database file | How can I solve the problem described in the title? Thanks. | closed | 2021-03-25T09:39:02Z | 2021-03-27T11:47:17Z | https://github.com/yihong0618/running_page/issues/105 | [] | 965962591 | 6 |
waditu/tushare | pandas | 1592 | Basic data - trade calendar API stopped working | The basic data - trade calendar API stopped working and returns empty results.
Please help fix it as soon as possible, thanks!
From user ID 272876 | open | 2021-10-11T14:16:54Z | 2021-10-11T14:16:54Z | https://github.com/waditu/tushare/issues/1592 | [] | Modas-Li | 0 |
jupyter-widgets-contrib/ipycanvas | jupyter | 246 | Mouse Events with Multicanvas not working | Hi,
If I create an object multi_canvas of class MultiCanvas() and try:
```
multi_canvas.on_mouse_down(handle_mouse_down)
```
The error is raised:
```
AttributeError: 'super' object has no attribute '__getattr__'
```
I assume that is because the on_mouse_down function is implemented only on the regular Canvas() class.
Is there any way to get mouse-click events with MultiCanvas?
Thanks a lot! | closed | 2022-01-31T18:19:18Z | 2022-02-02T17:01:21Z | https://github.com/jupyter-widgets-contrib/ipycanvas/issues/246 | [
"bug"
] | lizbethwasp | 4 |
hankcs/HanLP | nlp | 760 | Is there a way to do semantic similarity matching? | ## Checklist
Please confirm the following:
* I have carefully read the documents below and found no answer:
  - [Home documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have also searched my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer.
* I understand that the open-source community is a free community driven by shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [*] I put an x in these brackets to confirm the items above.
## Version
The current latest version is: 1.5.3
The version I am using is: 1.5.3
## My question
When matching short texts by similarity, I would like semantically close texts to be matched first. For example:
**不喜欢** ("dislike") should match **讨厌** ("hate"), rather than **喜欢** ("like").
Is there any way to achieve this?
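For illustration only (a toy idea of mine, not a HanLP API): averaging word vectors makes 喜欢 and 不喜欢 nearly identical, so one crude workaround is to let a negation token flip the sign of the next word's vector:

```python
NEGATIONS = {"不", "没", "没有"}

def doc_vector(tokens, word_vec):
    """Average word vectors, flipping the sign of the word that follows
    a negation token (crude sketch, not HanLP's DocVectorModel)."""
    total, n, flip = None, 0, False
    for tok in tokens:
        if tok in NEGATIONS:
            flip = True
            continue
        vec = word_vec.get(tok)
        if vec is None:
            continue
        sign = -1.0 if flip else 1.0
        flip = False
        v = [sign * x for x in vec]
        total = v if total is None else [a + b for a, b in zip(total, v)]
        n += 1
    return [x / n for x in total] if total else None
```

With this, the vector for 不喜欢篮球 points away from 喜欢篮球 instead of coinciding with it.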
## Reproducing the issue
### Steps
### Trigger code
```
DocVectorModel docVectorModel = new DocVectorModel(new WordVectorModel(modelFileName));
String[] documents = new String[]{
    "农民在江苏种水稻",
    "山东苹果丰收",
    "我很喜欢篮球",
    "我很讨厌篮球",
    "奥运会女排夺冠",
    "世界锦标赛胜出"
};
for (int i = 0; i < documents.length; i++)
{
    docVectorModel.addDocument(i, documents[i]);
}
System.out.println(docVectorModel.nearest("我不喜欢篮球"));
```
### Expected output
<!-- What correct result do you expect? -->
不喜欢 ("dislike") is matched to 讨厌 ("hate") first
```
[3=0.99999976, 2=0.87721485
```
### Actual output
不喜欢 ("dislike") was matched to 喜欢 ("like") first
```
[2=0.99999976, 3=0.87721485
```
## Other information
<!-- Any potentially useful information: screenshots, logs, config files, related issues, etc. -->
| closed | 2018-02-12T01:10:46Z | 2018-07-08T04:43:03Z | https://github.com/hankcs/HanLP/issues/760 | [
"invalid"
] | yaoyasong | 2 |
encode/apistar | api | 435 | Type inheritance in 0.4.3 | Is Type inheritance supported in 0.4.3?
For example, I have these two classes:
```
class NbnOrderFeasibilityRequest(types.Type):
    sqId = validators.String()
    nbnLocationId = validators.String()
    tc4DownloadSpeed = validators.Integer()
    tc4UploadSpeed = validators.Integer()
    cpiId = validators.String(default=None)
    ntdId = validators.String(default=None)
    potsInterconnectId = validators.String(default=None)
    transferType = NbnOrderTransferType(default=None)
    localNumberPorting = validators.String(enum=['Yes', 'No'], default=None)
    customerAuthorityDate = validators.Date(default=None),
    appointmentId = validators.String(default=None)
    installationWorkforce = NbnOrderInstallationWorkforceType(default=None)  # TODO conditions
    installationCentralSplitter = validators.String(enum=['Yes', 'No'], default=None)
    exchangePairConnection = validators.String(enum=['No'], default=None)
    ntdPortId = validators.Integer(default=None)
    batteryBackup = validators.String(enum=['Yes', 'No'], default=None)

class NbnOrderRequest(NbnOrderFeasibilityRequest):
    accessSeekerRef = AccessSeekerRef(default=None)
```
When I instantiate an NbnOrderRequest, only accessSeekerRef seems to be available.
    data = {
        'sqId': 'SQ00001',
        'nbnLocationId': 'LOC00001',
        'tc4DownloadSpeed': 10,
        'tc4UploadSpeed': 10
    }
    request = NbnOrderRequest(data)
    assert {'accessSeekerRef': None} == dict(request)  # Works, and it shouldn't
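As a toy illustration of collecting declared fields from parent classes via a metaclass (hypothetical names, not apistar's actual internals):

```python
class Field:
    """Marker for a declared field (stand-in for apistar validators)."""
    pass

class FieldMeta(type):
    """Toy metaclass collecting Field attributes from base classes
    as well as from the class body."""
    def __new__(mcs, name, bases, namespace):
        fields = {}
        for base in reversed(bases):          # parents first, subclasses win
            fields.update(getattr(base, '_fields', {}))
        for key, value in namespace.items():
            if isinstance(value, Field):
                fields[key] = value
        namespace['_fields'] = fields
        return super().__new__(mcs, name, bases, namespace)

class Base(metaclass=FieldMeta):
    a = Field()

class Child(Base):
    b = Field()

print(sorted(Child._fields))  # ['a', 'b']
```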
I'm guessing there's something that needs to happen in the __new__ operation of TypeMetaclass to get fields/attributes from the parent class, but I just wanted to check if this is supposed to be possible. | closed | 2018-04-13T06:24:10Z | 2018-04-18T16:42:00Z | https://github.com/encode/apistar/issues/435 | [] | rhelms | 7 |
BeastByteAI/scikit-llm | scikit-learn | 104 | avoid recency bias in prompt construction | **Context**
According to this [paper](http://proceedings.mlr.press/v139/zhao21c/zhao21c.pdf), ChatGPT (and likely other LLMs) suffers from a recency bias: whatever class comes last has a higher probability of being selected.
**Issue**
Currently scikit-llm constructs prompts based on the order of the training data.
Since we are recommended to restrict the training data I would usually do something like this:
~~~python
df = df.groupby(label_col).apply(lambda x: x.sample(n_samples))
df = df.reset_index(drop=True)
~~~
This returns a dataframe sorted by label_col. Even if `sort=False` is passed to `groupby`, the instances are still clustered by label.
**Question/Solution**
Should a method be implemented that randomizes the order of samples in the prompt / training data, or should users take care of that themselves?
The most straightforward way would be to simply add this to sampling:
~~~python
df = df.sample(frac=1)
~~~
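A plain-Python sketch of the same idea, sampling per label and then shuffling across labels with a fixed seed (names are illustrative):

```python
import random

def balanced_shuffled(samples_by_label, n_per_label, seed=0):
    """Take n samples per label, then shuffle across labels so no class
    is clustered at the end of the prompt."""
    rng = random.Random(seed)
    picked = []
    for label, rows in samples_by_label.items():
        picked.extend((label, r) for r in rng.sample(rows, n_per_label))
    rng.shuffle(picked)
    return picked
```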
This leaves it up to chance to balance it reasonably. | open | 2024-06-18T08:59:33Z | 2024-06-19T18:24:11Z | https://github.com/BeastByteAI/scikit-llm/issues/104 | [] | AndreasKarasenko | 3 |
milesmcc/shynet | django | 88 | How to run locally? | How do I run this locally? What IP do I use? I tried setting the IP to localhost but I can't access it.
Thanks. | closed | 2020-12-02T19:03:01Z | 2021-01-11T17:51:49Z | https://github.com/milesmcc/shynet/issues/88 | [] | mooseyoose | 2 |
pytorch/pytorch | deep-learning | 149,047 | Unsupported: call_method NNModuleVariable() register_forward_hook [NestedUserFunctionVariable()] {} | ### 🐛 Describe the bug
While AOTI-compiling and exporting https://github.com/cvlab-kaist/Chrono, I hit this issue in the log, related to `register_forward_hook`.
### Error logs
[error.log](https://github.com/user-attachments/files/19212384/error.log)
### Versions
2.6.0 and nightly
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @amjames | open | 2025-03-12T15:00:41Z | 2025-03-19T01:18:07Z | https://github.com/pytorch/pytorch/issues/149047 | [
"triaged",
"oncall: pt2",
"module: dynamo"
] | bhack | 4 |
ageitgey/face_recognition | python | 763 | ValueError: 'axis' entry is out of bounds | * face_recognition version: latest one
* Python version: 3.5.3
* Operating System: Raspberry Pi 2B running Raspbian
### Description
I am trying to check for a face in a picture and compare it to a list of known faces
### What I Did
```
import face_recognition
import os
def main():
    compare_image = "not_obama.jpg"
    known_images = ["President_Barack_Obama.jpg"]
    image = face_recognition.load_image_file(compare_image)
    face_locations = face_recognition.face_locations(image)
    print(face_locations)
    if face_locations != []:
        print("face_exists")
    else:
        print("face_doesn't exist")
        return False
    counter = 0
    for i in known_images:
        known_image = face_recognition.load_image_file(i)
        biden_encoding = face_recognition.face_encodings(known_image)[counter]
        counter = counter + 1
    unknown_image = face_recognition.load_image_file(compare_image)
    unknown_encoding = face_recognition.face_encodings(unknown_image)[0]
    results = face_recognition.compare_faces(biden_encoding, unknown_encoding)
    print(results)

if __name__ == "__main__":
    main()
and output:
[(64, 167, 219, 12)]
face_exists
Traceback (most recent call last):
File "face_rec.py", line 36, in <module>
main()
File "face_rec.py", line 30, in main
results = face_recognition.compare_faces(biden_encoding, unknown_en
File "/usr/local/lib/python3.5/dist-packages/face_recognition/api.py"
return list(face_distance(known_face_encodings, face_encoding_to_ch
File "/usr/local/lib/python3.5/dist-packages/face_recognition/api.py"
return np.linalg.norm(face_encodings - face_to_compare, axis=1)
File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 22
return sqrt(add.reduce(s, axis=axis, keepdims=keepdims))
ValueError: 'axis' entry is out of bounds
```
| closed | 2019-03-03T12:23:44Z | 2024-11-11T21:51:55Z | https://github.com/ageitgey/face_recognition/issues/763 | [] | tidely | 0 |
biolab/orange3 | pandas | 6096 | Logistic regression coefficients | This is more of a request for clarification than a bug report. I am interested in displaying and explaining coefficients, particularly with logistic regression. I am using Orange version 3.32.0 on my Mac. The workflow is the following: the File widget is using the heart disease dataset with 303 instances and 13 features. That is connected to the Logistic Regression widget, which is connected to a Data Table. The goal is to be able to explain the coefficients to students. Ideally, I would like to convert the coefficients to odds ratios.
**What's wrong?**
Are the coefficients log odds ratios using natural logs?
Why does the data table show only 26 instances and 1 feature?
**How can we reproduce the problem?**
In the File widget select heart disease.tab. Connect it to the Logistic Regression widget and have the latter connect to a Data Table. Open it and look at the attached file of the data table.
LR Coefficients.pdf
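For what it's worth, the conversion asked about here is just exponentiation, assuming natural-log coefficients (which is what scikit-learn-style logistic regression produces):

```python
import math

def to_odds_ratios(coefficients):
    """Convert log-odds coefficients to odds ratios via exp()."""
    return [math.exp(c) for c in coefficients]

print(to_odds_ratios([0.0, math.log(2.0)]))  # [1.0, 2.0]
```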
**What's your environment?**
- Operating system: Mac OS Monterey
- Orange version: 3.32.0
- How you installed Orange: as app
| closed | 2022-08-16T17:17:08Z | 2022-08-18T13:08:00Z | https://github.com/biolab/orange3/issues/6096 | [
"bug report"
] | rehoyt | 1 |
d2l-ai/d2l-en | machine-learning | 2,464 | "'svg' is not a valid value for output; supported values are 'path', 'agg', 'macosx'" in Colab notebook | I'm attempting to run the Colab notebook for [Section 20.2 Deep Convolutional Generative Adversarial Networks](https://d2l.ai/chapter_generative-adversarial-networks/dcgan.html), but getting the following error when attempting to visualize the images:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[/usr/local/lib/python3.9/dist-packages/IPython/core/formatters.py](https://nn56zrv5eqe-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230404-060220-RC00_521717366#) in __call__(self, obj)
339 pass
340 else:
--> 341 return printer(obj)
342 # Finally look for special method names
343 method = get_real_method(obj, self.print_method)
8 frames
[/usr/local/lib/python3.9/dist-packages/IPython/core/pylabtools.py](https://nn56zrv5eqe-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230404-060220-RC00_521717366#) in print_figure(fig, fmt, bbox_inches, base64, **kwargs)
149 FigureCanvasBase(fig)
150
--> 151 fig.canvas.print_figure(bytes_io, **kw)
152 data = bytes_io.getvalue()
153 if fmt == 'svg':
[/usr/local/lib/python3.9/dist-packages/matplotlib/backend_bases.py](https://nn56zrv5eqe-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230404-060220-RC00_521717366#) in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, pad_inches, bbox_extra_artists, backend, **kwargs)
2334 elif event.key in zoom_keys:
2335 toolbar.zoom()
-> 2336 toolbar._set_cursor(event)
2337 # saving current figure (default key 's')
2338 elif event.key in save_keys:
[/usr/local/lib/python3.9/dist-packages/matplotlib/backend_bases.py](https://nn56zrv5eqe-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230404-060220-RC00_521717366#) in _get_renderer(figure, print_method)
1596 self.toolbar = None # NavigationToolbar2 will set me
1597 self._is_idle_drawing = False
-> 1598
1599 @classmethod
1600 @functools.lru_cache()
[/usr/local/lib/python3.9/dist-packages/matplotlib/backend_bases.py](https://nn56zrv5eqe-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230404-060220-RC00_521717366#) in <lambda>(*args, **kwargs)
2230 def flush_events(self):
2231 """
-> 2232 Flush the GUI events for the figure.
2233
2234 Interactive backends need to reimplement this method.
[/usr/local/lib/python3.9/dist-packages/matplotlib/backends/backend_svg.py](https://nn56zrv5eqe-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230404-060220-RC00_521717366#) in print_svg(self, filename, *args, **kwargs)
1200 detach = True
1201
-> 1202 result = self._print_svg(filename, fh, **kwargs)
1203
1204 # Detach underlying stream from wrapper so that it remains open in
[/usr/local/lib/python3.9/dist-packages/matplotlib/backends/backend_svg.py](https://nn56zrv5eqe-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230404-060220-RC00_521717366#) in _print_svg(self, filename, fh, dpi, bbox_inches_restore, **kwargs)
1222 renderer = MixedModeRenderer(
1223 self.figure, width, height, dpi,
-> 1224 RendererSVG(w, h, fh, filename, dpi),
1225 bbox_inches_restore=bbox_inches_restore)
1226
[/usr/local/lib/python3.9/dist-packages/matplotlib/backends/backend_svg.py](https://nn56zrv5eqe-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230404-060220-RC00_521717366#) in __init__(self, width, height, svgwriter, basename, image_dpi)
291 self._n_gradients = 0
292 self._fonts = OrderedDict()
--> 293 self.mathtext_parser = MathTextParser('SVG')
294
295 RendererBase.__init__(self)
[/usr/local/lib/python3.9/dist-packages/matplotlib/mathtext.py](https://nn56zrv5eqe-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab-20230404-060220-RC00_521717366#) in __init__(self, output)
204 self.width,
205 self.height + self.depth,
--> 206 self.depth,
207 self.image,
208 used_characters)
/usr/local/lib/python3.9/dist-packages/matplotlib/_api/__init__.py in check_getitem(_mapping, **kwargs)
ValueError: 'svg' is not a valid value for output; supported values are 'path', 'agg', 'macosx'
```
The only modification I made to the notebook was adding an extra pip install command for setuptools, in the first cell:
```
!pip install setuptools==65.5.0
!pip install d2l==1.0.0-beta0
``` | closed | 2023-04-05T21:48:36Z | 2023-05-15T14:22:23Z | https://github.com/d2l-ai/d2l-en/issues/2464 | [] | Meorge | 2 |
ansible/awx | automation | 15,474 | Cancelling workflow approval doesn’t cancel subsequent nodes. | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
When cancelling an approval within a workflow, the parent workflow still runs even though not all previous workflows ran to success

### AWX version
23.5.0
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
_No response_
### Steps to reproduce
Please see shared image.
Create a workflow with several workflows feeding into one final workflow that requires ALL convergence. Have one workflow fail, with a failure path to an approval node. Cancel this approval node and you'll see the parent workflow execute even though only 2 of 3 workflows succeeded.
### Expected results
I would expect the Bin False workflow with ALL convergence to fail when only 2 of 3 predecessor workflows succeeded.
### Actual results
Workflow runs when convergence ALL is not satisfied.
### Additional information
_No response_ | open | 2024-08-28T09:46:25Z | 2024-08-28T09:46:43Z | https://github.com/ansible/awx/issues/15474 | [
"type:bug",
"needs_triage",
"community"
] | AwxTaskHelp | 0 |
InstaPy/InstaPy | automation | 6,091 | Message: The element reference of <button class="_5f5mN jIbKX _6VtSN yZn4P "> is stale; either the element is no longer attached to the DOM, it is not in the current frame context, or the document has been refreshed | ## Expected Behavior
Work Properly
## Current Behavior
I get this error after some hours of work:
```python
Traceback (most recent call last):
File "/home/ma/Desktop/sd/pagina gadget.py", line 309, in <module>
session.interact_user_followers(random_targets, amount=amountIteraction, randomize=True)
File "/home/ma/.local/lib/python3.8/site-packages/instapy/instapy.py", line 3207, in interact_user_followers
self.interact_by_users(
File "/home/ma/.local/lib/python3.8/site-packages/instapy/instapy.py", line 2499, in interact_by_users
links = get_links_for_username(
File "/home/ma/.local/lib/python3.8/site-packages/instapy/like_util.py", line 463, in get_links_for_username
following_status, _ = get_following_status(
File "/home/ma/.local/lib/python3.8/site-packages/instapy/follow_util.py", line 82, in get_following_status
following_status = follow_button.text
File "/home/ma/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webelement.py", line 76, in text
return self._execute(Command.GET_ELEMENT_TEXT)['value']
File "/home/ma/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webelement.py", line 633, in _execute
return self._parent.execute(command, params)
File "/home/ma/.local/lib/python3.8/site-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
self.error_handler.check_response(response)
File "/home/ma/.local/lib/python3.8/site-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.StaleElementReferenceException: Message: The element reference of <button class="_5f5mN jIbKX _6VtSN yZn4P "> is stale; either the element is no longer attached to the DOM, it is not in the current frame context, or the document has been refreshed
```
| closed | 2021-02-25T20:50:15Z | 2021-07-21T04:18:51Z | https://github.com/InstaPy/InstaPy/issues/6091 | [
"wontfix"
] | MarcoLavoro | 1 |
kymatio/kymatio | numpy | 421 | BUG Working examples with the new frontend | Everything there: https://github.com/kymatio/kymatio/tree/kymatio-dev-frontend/examples must run, frontend must be specified.
- [x] 1D
- [x] 2D
- [x] 3D | closed | 2019-08-17T18:46:59Z | 2020-02-07T10:25:04Z | https://github.com/kymatio/kymatio/issues/421 | [
"bug",
"1D",
"2D",
"3D"
] | edouardoyallon | 1 |
pyeve/eve | flask | 1,153 | Why are curly brackets not escaped in URL queries? | Hi there,
I started setting up a few Eve API endpoints in my API debugger client [Insomnia](https://insomnia.rest/), and hit a snag with the URL query formats that Eve expects:
```sh
http://api.url/search?aggregate={%22\$key%22:%22value%22}
```
This is not possible to achieve in Insomnia, because it simply escapes everything, which is kind of what I would expect from an API client as well as from a REST API. The [RFC 1738 has this to say](https://tools.ietf.org/html/rfc1738) about curly brackets and other special characters:
> Unsafe:
>
> Characters can be unsafe for a number of reasons. The space character is unsafe because significant spaces may disappear and insignificant spaces may be introduced when URLs are transcribed or typeset or subjected to the treatment of word-processing programs. The characters "<" and ">" are unsafe because they are used as the delimiters around URLs in free text; the quote mark (""") is used to delimit URLs in some systems. The character "#" is unsafe and should always be encoded because it is used in World Wide Web and in other systems to delimit a URL from a fragment/anchor identifier that might follow it. The character "%" is unsafe because it is used for encodings of other characters. **Other characters are unsafe because gateways and other transport agents are known to sometimes modify such characters. These characters are "{", "}", "|", "\", "^", "~", "[", "]", and "`".**
>
> **All unsafe characters must always be encoded within a URL.** For example, the character "#" must be encoded within URLs even in systems that do not normally deal with fragment or anchor identifiers, so that if the URL is copied into another system that does use them, it will not be necessary to change the URL encoding.
Is there a reason curly brackets are not escaped?
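For reference, the fully percent-encoded form of a query like the one above can be produced with Python's standard library (a sketch; `http://api.url` is the placeholder host from the example, and the payload is the un-encoded version of the aggregation query):

```python
from urllib.parse import quote, unquote

# The aggregation query from the example, before any escaping.
query = '{"$key":"value"}'

# quote() with safe="" percent-encodes every reserved/unsafe character,
# including the curly brackets and double quotes RFC 1738 warns about.
escaped = quote(query, safe="")
url = f"http://api.url/search?aggregate={escaped}"

print(escaped)                    # %7B%22%24key%22%3A%22value%22%7D
print(unquote(escaped) == query)  # True - decoding round-trips
```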
All the best,
Alexander | closed | 2018-05-22T08:29:07Z | 2018-05-25T13:37:42Z | https://github.com/pyeve/eve/issues/1153 | [] | alexanderwallin | 2 |
chaos-genius/chaos_genius | data-visualization | 766 | Alerts should show numbers in human-readable format | ## Tell us about the problem you're trying to solve
The alerts display numbers without any formatting. For large numbers, this can be difficult to read
## Describe the solution you'd like
The numbers in alerts should be formatted with `M`, `B`, `K` suffixes for Million, Billion, Thousand, etc. as it is shown in the UI.
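A minimal sketch of such a formatter (the thresholds, one-decimal rounding, and the function name are illustration-only assumptions, not Chaos Genius code):

```python
def humanize(value: float) -> str:
    """Format a number with K/M/B suffixes, e.g. 1_500_000 -> '1.5M'."""
    for threshold, suffix in ((1e9, "B"), (1e6, "M"), (1e3, "K")):
        if abs(value) >= threshold:
            # Keep one decimal, then drop a trailing ".0" for round values.
            scaled = f"{value / threshold:.1f}".rstrip("0").rstrip(".")
            return scaled + suffix
    return str(value)

print(humanize(1_500_000))  # 1.5M
print(humanize(2_000))      # 2K
print(humanize(950))        # 950
```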
| closed | 2022-02-23T13:18:18Z | 2022-04-11T06:18:39Z | https://github.com/chaos-genius/chaos_genius/issues/766 | [
"✨ enhancement",
"❗alerts"
] | KShivendu | 2 |
lazyprogrammer/machine_learning_examples | data-science | 19 | Missing parentheses in call to 'print' (<string>, line 42-47) | closed | 2018-03-04T15:23:02Z | 2022-04-05T06:33:32Z | https://github.com/lazyprogrammer/machine_learning_examples/issues/19 | [] | xtaraim | 0 | |
jupyter-incubator/sparkmagic | jupyter | 422 | YARN Application ID as None on starting spark Application | I have a Jupyter notebook running on one node. I have installed sparkmagic and separately installed Livy using livy-0.4.0-incubating-bin.zip on the same node. I have configured Livy with yarn-client to remotely connect to Spark. I am able to create the Spark context sc; however, the following table gets printed along with it, showing these values. Are there any other configuration parameters missing?
| ID | YARN Application ID | Kind | State | Spark UI | Driver log | Current session |
|----|---------------------|-------|------|----------|------------|-----------------|
| 0  | None                | spark | idle |          |            | ✔               |
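For anyone comparing configurations: the Livy-side settings usually involved in yarn-client mode look like the following (a sketch based on the standard Livy configuration template — these values are assumptions about this setup, not taken from the reporter's files):

```
# conf/livy.conf on the Livy server
livy.spark.master = yarn
livy.spark.deploy-mode = client
```

If these are set and the YARN Application ID still shows as None, checking the Livy server logs for the session submission is a common next step.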
| closed | 2017-11-24T06:06:37Z | 2017-11-28T17:23:23Z | https://github.com/jupyter-incubator/sparkmagic/issues/422 | [] | mrunmayeejog | 3 |
python-restx/flask-restx | flask | 348 | Load route before swagger documentation | **Ask a question**
I have some database services that take a bit of time to run before my swagger interface is loaded. So, I want to load an HTML page with the information & then redirect it to the swagger documentation.
How can I add a default route that I can load before the swagger document?
The code would do something along the lines of
```python
app = Flask(__name__)

@app.route("/start")
def hello():
    return render_template("loading_spinner.html")

api = Api(
    app,
    version="1.0",
    title="Python API",
    description="API with Flask-RestX",
)

nsHealthCheck = api.namespace(
    "api/v1/healthcheck", description="Health check for the API"
)
```
In this example, I want to load `/start` before the swagger interface. How can I do that?
| open | 2021-07-05T17:40:02Z | 2021-07-05T17:40:02Z | https://github.com/python-restx/flask-restx/issues/348 | [
"question"
] | nithishr | 0 |
Johnserf-Seed/TikTokDownload | api | 393 | 2023-4-9: following the author's instructions, I carefully set the url and cookie and it ran smoothly. Thanks to the author for the hard work | Opening this thread to mark it: calling Python from C#, and running xbogus.js with Python | open | 2023-04-09T13:40:11Z | 2023-04-09T16:10:42Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/393 | [
"重复(duplicate)"
] | hzsun2022 | 0 |
litl/backoff | asyncio | 181 | Obtain value returned by backoff_handler function after maximum tries is reached | Hi, first of all, a huge THANK YOU to all the contributors of this wonderful package. I have been using backoff for a while now and I am amazed at how efficiently it can be used for various usecases.
However, recently I am facing an issue that is similar to issue #79. Using the backoff decorator, I am trying to retry failed API calls `"max_tries"` times. Once the maximum attempts are reached, the `backoff_hdlr` function is called by the decorator, and `backoff_hdlr` returns the status code of the failed API call. Is there a way for the `call_url` function to return the status code returned by the `backoff_hdlr` function after the maximum tries are reached?
```python
def backoff_hdlr(e):
return e['exception'].status
# backoff decorator to retry failed API calls by "max_tries"
@backoff.on_exception(backoff.expo, aiohttp.ClientResponseError, max_tries=2, logger=logger, on_giveup=backoff_hdlr)
async def call_url(language: str, word:str, headers:dict) -> bytes:
url = f"https://sample-url/{language}/{word.lower()}"
print(f"Begin api call: {url}")
# Create aiohttp session to trigger 'get' API call with app_id, app_key as headers
async with aiohttp.ClientSession(headers=headers) as session:
# raise_for_status is used to raise exception for status_codes other than 200
async with session.get(url, raise_for_status=True) as response:
# Awaits response from API
content = await response.read()
status = response.status
print("Finished: ", status)
return status
```
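One pattern worth noting here: backoff's `on_exception` re-raises the final exception once it gives up (its default behaviour), so the caller of `call_url` can catch it and read `.status` there. A stdlib-only sketch of that pattern (no aiohttp or backoff here — `FakeResponseError` and the tiny `retry` decorator are stand-ins, not the real APIs):

```python
class FakeResponseError(Exception):
    """Stand-in for aiohttp.ClientResponseError in this sketch."""
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def retry(max_tries):
    """Tiny stand-in for backoff.on_exception: retry, then re-raise."""
    def deco(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(max_tries):
                try:
                    return fn(*args, **kwargs)
                except FakeResponseError:
                    if attempt == max_tries - 1:
                        raise  # on giveup, the exception reaches the caller
        return wrapper
    return deco

@retry(max_tries=2)
def call_url():
    raise FakeResponseError(503)  # always fails in this sketch

try:
    status = call_url()
except FakeResponseError as exc:
    status = exc.status  # the caller recovers the final status code

print(status)  # 503
```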
If this is not possible, then is there any other way to get status code of failed API request from call_url function after maximum tries is reached? I have checked everywhere but I am not able to get a solution for this. | open | 2022-11-19T21:34:34Z | 2022-11-19T21:34:34Z | https://github.com/litl/backoff/issues/181 | [] | AanandhiVB | 0 |
zihangdai/xlnet | tensorflow | 29 | What's the output structure for XLNET? [ A, SEP, B, SEP, CLS] | Hi, is the output embedding structure like this: [ A, SEP, B, SEP, CLS]?
Because for BERT it's like this right: [CLS, A, SEP, B, SEP]?
And for GPT2 is it just like this: [A, B]?
Thanks.
| open | 2019-06-23T04:41:14Z | 2019-09-19T12:07:54Z | https://github.com/zihangdai/xlnet/issues/29 | [] | BoPengGit | 2 |
mkhorasani/Streamlit-Authenticator | streamlit | 125 | Empty credentials for registration only apps | Hello, thank you very much for this fantastic package. It helps a lot to put Streamlit apps into "semi-production".
I want to ask: is it possible to have an initial empty `config.yaml` like:
```yaml
cookie:
expiry_days: 1
key: some_signature_key
name: streamlit-pythia-auth
preauthorized:
emails:
- aa@bb.cc
- aa2@bb.cc
- aa3@bb.cc
```
So, a `yaml` that doesn't contain an initial `credentials` key but only preauthorized emails, so that we can push it to our private repo without pushing an initial "admin account" hash?
Basically, I want to push my auth yaml file to my repo for deployment with a restricted list of users, but I don't want to put an initial account inside it. | closed | 2024-01-31T10:51:49Z | 2024-07-27T14:39:36Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/125 | [
"enhancement"
] | lambda-science | 2 |
RobertCraigie/prisma-client-py | pydantic | 816 | Support OpenTelemetry | ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
https://www.prisma.io/docs/concepts/components/prisma-client/opentelemetry-tracing
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
TBD.
## Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
| open | 2023-09-16T11:34:22Z | 2023-09-16T11:34:22Z | https://github.com/RobertCraigie/prisma-client-py/issues/816 | [
"kind/feature",
"topic: client",
"priority/low",
"level/unknown"
] | RobertCraigie | 0 |
ranaroussi/yfinance | pandas | 1,840 | Options stopped working | ### Describe bug
The Ticker module has stopped working.
tk = yf.Ticker("TSLA")
exps = tk.options
print(exps)
This returns an empty tuple ().
This code worked just 2 days ago.
I have tried to upgrade to version 0.2.4, but that didn't help. I'm running python 3.10.8 on macbook pro.
### Simple code that reproduces your problem
import yfinance as yf
tk = yf.Ticker("TSLA")
exps = tk.options
print(exps)
### Debug log
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): query2.finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://query2.finance.yahoo.com:443 "GET /v8/finance/chart/AN?period1=-2208994789&period2=1706220332&interval=1d&includePrePost=False&events=div%2Csplits%2CcapitalGains HTTP/1.1" 200 None
[*********************100%***********************] 1 of 1 completed
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): query2.finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://query2.finance.yahoo.com:443 "GET /v8/finance/chart/AN?period1=-2208994789&period2=1706220335&interval=1d&includePrePost=False&events=div%2Csplits%2CcapitalGains HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/analysis HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/balance-sheet HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/balance-sheet HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/cash-flow HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/cash-flow HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/financials HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /calendar/earnings?symbol=AN&offset=0&size=12 HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/financials HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/financials HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/financials HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/holders HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): query2.finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://query2.finance.yahoo.com:443 "GET /v1/finance/search?q=AN HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): query2.finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://query2.finance.yahoo.com:443 "GET /v7/finance/options/AN HTTP/1.1" 401 90
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/balance-sheet HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/balance-sheet HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/cash-flow HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/cash-flow HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/financials HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/financials HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/financials HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): query2.finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://query2.finance.yahoo.com:443 "GET /v7/finance/options/AN HTTP/1.1" 401 90
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/balance-sheet HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/balance-sheet HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/cash-flow HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/cash-flow HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/financials HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/financials HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/financials HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): query2.finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://query2.finance.yahoo.com:443 "GET /v7/finance/options/AN HTTP/1.1" 401 90
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/balance-sheet HTTP/1.1" 404 None
- AN: Failed to create balance-sheet financials table for reason: YFinanceDataException("Parsing FinancialTemplateStore failed, reason: KeyError('FinancialTemplateStore')")
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/cash-flow HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/cash-flow HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/financials HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/financials HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://finance.yahoo.com:443 "GET /quote/AN/financials HTTP/1.1" 200 None
urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): query2.finance.yahoo.com:443
urllib3.connectionpool - DEBUG - https://query2.finance.yahoo.com:443 "GET /v7/finance/options/AN HTTP/1.1" 401 90
### Bad data proof
_No response_
### `yfinance` version
0.2.4
### Python version
3.10.8
### Operating system
MacOS Ventura 13.4.1 | closed | 2024-01-25T22:22:39Z | 2024-01-29T21:46:07Z | https://github.com/ranaroussi/yfinance/issues/1840 | [] | bruerFSol | 12 |
davidsandberg/facenet | computer-vision | 543 | How do I retrain the existing model by adding new dataset ? | Hi @davidsandberg,
I have tried to restore the model with a new dataset and got an error like this:
Please help me resolve this issue, or give me an idea of how to retrain the existing model with a new dataset (the dataset has 700 classes, without msceleb's 1580 classes).

| open | 2017-11-20T12:04:32Z | 2018-01-29T06:34:21Z | https://github.com/davidsandberg/facenet/issues/543 | [] | himabinduyeddala | 2 |
2noise/ChatTTS | python | 760 | Still room for improvement; for some reason the audio I generate stops reading the later part of the text | <img width="1426" alt="image" src="https://github.com/user-attachments/assets/1f359f77-ab43-4485-adce-cc387b31d896">
```python
import ChatTTS
import torch
import torchaudio
chat = ChatTTS.Chat()
chat.load(compile=False) # Set to True for better performance
f = open("../frog-tts/demo/w.text", "r", encoding="utf-8")
texts = f.read().replace('\n', '')
f.close()
wavs = chat.infer(texts)
for i in range(len(wavs)):
    """
    In some versions of torchaudio, the first line works but in other versions, so does the second line.
    """
    try:
        torchaudio.save(f"basic_output{i}.wav", torch.from_numpy(wavs[i]).unsqueeze(0), 24000)
    except:
        torchaudio.save(f"basic_output{i}.wav", torch.from_numpy(wavs[i]), 24000)
``` | closed | 2024-09-19T01:55:27Z | 2024-10-15T07:18:58Z | https://github.com/2noise/ChatTTS/issues/760 | [
"documentation"
] | lihe6666 | 1 |
gto76/python-cheatsheet | python | 133 | rt | closed | 2022-11-14T10:06:32Z | 2022-11-17T14:05:44Z | https://github.com/gto76/python-cheatsheet/issues/133 | [] | Guffi89 | 1 | |
dgtlmoon/changedetection.io | web-scraping | 1,999 | [unsure] verizon site - visual selector can render page, elements are found and drawn, but html to text part says it can't find the filter | v0.45.7.3
https://www.verizon.com/products/bose-quietcomfort-earbuds-ii/ (unfortunately only for USA access)
`/html/body/div[4]/div/div[2]/div[3]/div[2]/div/div[3]/div[1]/div[1]/p` filter made by visual-selector
opening `last-fetched.html` and using chrome 'copy full xpath' I get `/html/body/div[6]/div/div[2]/div[3]/div[2]/div/div[3]/div[1]/div[1]/p`
`/html/body/div[6]/div/div[2]/div[3]/div[2]/div/div[3]/div[1]/div[1]/p` filter made by chrome works
but the filter made by the visual selector also works, except that at text-rendering time it can't be found in the HTML

fetched html and elements mapping attached [weird.zip](https://github.com/dgtlmoon/changedetection.io/files/13443801/weird.zip)
Path seems to end at `gwbanner`

but actually it's more like https://github.com/dgtlmoon/changedetection.io/issues/1948 | open | 2023-11-22T19:42:13Z | 2023-11-22T19:45:01Z | https://github.com/dgtlmoon/changedetection.io/issues/1999 | [
"triage"
] | dgtlmoon | 0 |
PaddlePaddle/models | nlp | 4,756 | Does CTCN have a demo that uses I3D features? | open | 2020-07-16T08:35:21Z | 2024-02-26T05:10:56Z | https://github.com/PaddlePaddle/models/issues/4756 | [] | liu824 | 13 |
mirumee/ariadne | graphql | 843 | Pass in data to get_context_for_request | That way you can write something like:
```python
async def get_context_for_request(
self,
request: Any,
*, data: Dict[str, Any] = None
) -> Any:
if callable(self.context_value):
context = self.context_value(request)
if isawaitable(context):
context = await context
return context
data = data or {}
query: Optional[str] = data.get("query")
variables: Optional[Dict[str, Any]] = data.get("variables")
operation_name: Optional[str] = data.get("operationName")
return self.context_value or {
"request": request,
"variables": variables,
"operation_name": operation_name,
"query": query,
}
```
That way the tracing extension can access `operation_name`, `variables`, and `query`, which is very helpful metadata context. | closed | 2022-04-15T15:01:07Z | 2022-12-15T11:46:52Z | https://github.com/mirumee/ariadne/issues/843 | [
"enhancement"
] | cancan101 | 9 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 906 | T | closed | 2021-11-24T21:25:59Z | 2021-11-24T21:26:44Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/906 | [] | memetciftsuren | 0 | |
zappa/Zappa | flask | 371 | [Migrated] async module should probably be renamed | Originally from: https://github.com/Miserlou/Zappa/issues/936 by [jleclanche](https://github.com/jleclanche)
In Python 3.7, `async` is a keyword and you will not be able to use it as a module name (like in an import).
Try out the deprecation warnings:
```
% PYTHONWARNINGS=all python3.6 -c 'async = 1'
<string>:1: DeprecationWarning: 'async' and 'await' will become reserved keywords in Python 3.7
``` | closed | 2021-02-20T08:27:34Z | 2022-08-16T00:57:20Z | https://github.com/zappa/Zappa/issues/371 | [] | jneves | 1 |
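Following up on the deprecation warning quoted in the Zappa issue above: on Python 3.7 and later the warning became a hard error, which a quick standard-library check confirms:

```python
import keyword

# On 3.7+, `async` is a reserved keyword, so using it as a name
# is a SyntaxError rather than a DeprecationWarning.
try:
    compile("async = 1", "<check>", "exec")
    became_keyword = False
except SyntaxError:
    became_keyword = True

print(became_keyword)              # True on Python 3.7+
print(keyword.iskeyword("async"))  # True
```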
pydata/xarray | numpy | 9,114 | Attribute of coordinate removed on input DataArray when used in xr.apply_ufunc | ### What happened?
Using DataArray 'a' as input to apply_ufunc removes an attribute from a coordinate of 'a'.
### What did you expect to happen?
Input DataArray 'a' should stay unchanged when passed to the function.
### Minimal Complete Verifiable Example
```Python
import xarray as xr
import numpy as np
a = xr.DataArray(np.random.rand(2,3), dims=('x','y'), coords={'x':np.arange(2), 'y':np.arange(3)})
a.x.attrs.update({'s':0.1})
res = xr.apply_ufunc(np.multiply, a, a,input_core_dims=[['x','y'],['x','y']], output_core_dims=[['x','y']])
print(a['x'].attrs)
```
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [ ] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [ ] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
_No response_
### Anything else we need to know?
_No response_
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:38:13) [GCC 12.3.0]
python-bits: 64
OS: Linux
OS-release: 3.12.53-60.30-default
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.12.2
libnetcdf: 4.9.3-development
xarray: 2024.5.0
pandas: 2.2.2
numpy: 1.26.4
scipy: 1.13.1
netCDF4: 1.6.5
pydap: None
h5netcdf: None
h5py: None
zarr: None
cftime: 1.6.3
nc_time_axis: None
iris: None
bottleneck: None
dask: 2024.5.1
distributed: 2024.5.1
matplotlib: 3.8.4
cartopy: 0.23.0
seaborn: None
numbagg: None
fsspec: 2024.5.0
cupy: None
pint: None
sparse: None
flox: None
numpy_groupies: None
setuptools: 70.0.0
pip: 24.0
conda: None
pytest: None
mypy: None
IPython: 8.20.0
sphinx: None
</details>
| closed | 2024-06-13T13:39:50Z | 2024-06-13T14:52:56Z | https://github.com/pydata/xarray/issues/9114 | [
"bug"
] | lanougue | 3 |
Sanster/IOPaint | pytorch | 149 | How to handle TIFF format images | Hi. Can the program support reading and processing .tiff images? I have some problems with tiff images. | closed | 2022-12-01T01:24:48Z | 2022-12-04T03:36:54Z | https://github.com/Sanster/IOPaint/issues/149 | [] | wenxuanliu | 1 |
jstrieb/github-stats | asyncio | 72 | Display all languages | I am a programming language collector. This may seem like a crazy question, but is there a way to display ALL languages in `languages.svg` instead of just the top 13? It seems to have picked up all the languages I wanted it to, they just don't all show, not even as `other %`
(yes, I know a lot of lag would be involved, I am just wondering if it is possible) | open | 2022-06-13T04:44:34Z | 2023-02-06T04:33:47Z | https://github.com/jstrieb/github-stats/issues/72 | [] | seanpm2001 | 1 |
profusion/sgqlc | graphql | 228 | Way to add new properties to object types and print with __to_json_value__() | I have a couple of cases where I am adding extra data onto the schema types. I would like this data to still be JSON serializable via `__to_json_value__()`. Is it possible to have new properties added to the class so that they are also serializable and added to the `___json_data___` property?
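For illustration, the kind of behavior being asked about can be sketched with toy stand-in classes (hypothetical names — not sgqlc's actual `Type` machinery):

```python
# Toy sketch, NOT sgqlc's real API: a base type that serializes a fixed
# field dict, and a subclass whose __to_json_value__ also merges in
# extra, non-GraphQL attributes.
class ToyType:
    def __init__(self, json_data):
        self.__json_data__ = dict(json_data)

    def __to_json_value__(self):
        return dict(self.__json_data__)

class ToyTypeWithExtras(ToyType):
    _extra_fields = ()  # names of non-GraphQL attributes to include

    def __to_json_value__(self):
        data = super().__to_json_value__()
        for name in self._extra_fields:
            if hasattr(self, name):
                data[name] = getattr(self, name)
        return data

class Repository(ToyTypeWithExtras):
    _extra_fields = ("local_score",)

repo = Repository({"name": "sgqlc", "stars": 400})
repo.local_score = 0.9
print(repo.__to_json_value__())  # {'name': 'sgqlc', 'stars': 400, 'local_score': 0.9}
```

(Whether sgqlc's generated types can support this pattern directly is exactly the open question here.)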
Basically what I'm trying to do is have non-graphql attributes be accessible through `__to_json_value__()`. | closed | 2023-04-19T16:40:09Z | 2023-06-16T11:51:46Z | https://github.com/profusion/sgqlc/issues/228 | [
"waiting-input"
] | madiganz | 3 |
dynaconf/dynaconf | fastapi | 1,240 | [bug] using `@merge` with comma separated values, does not infer type |
```py
settings = Dynaconf(
data=[1,2,3]
)
```
```bash
APP_DATA="@merge 4,5,6" dynaconf list -k DATA
```
Result
```
DATA<list>: [1, 2, 3, "4", "5", "6"]
```
Expected
```
DATA<list>: [1, 2, 3, 4, 5, 6]
``` | closed | 2025-02-10T18:26:09Z | 2025-02-10T21:35:04Z | https://github.com/dynaconf/dynaconf/issues/1240 | [
"bug",
"PortToMain"
] | rochacbruno | 0 |
ray-project/ray | pytorch | 50,679 | [core] Cover cpplint for ray/src/ray/scheduling | ## Description
As part of the initiative to introduce cpplint into the pre-commit hook, we are gradually cleaning up C++ folders to ensure compliance with code style requirements. This issue focuses on cleaning up `ray/src/ray/scheduling`.
## Goal
- Ensure all `.h` and `.cc` files in `ray/src/ray/scheduling` comply with cpplint rules.
- Address or suppress all cpplint warnings.
- Add `ray/src/ray/scheduling` to the pre-commit hook once it is clean.
### Steps to Complete
1. Checkout the latest main branch and install the pre-commit hook.
2. Manually modify all C++ files in `ray/src/ray/scheduling` to trigger cpplint (e.g., by adding a newline).
3. Run `git commit` to trigger cpplint and identify issues.
4. Fix the reported issues or suppress them using clang-tidy if necessary.
5. Once all warnings are resolved, update the pre-commit hook to include `ray/src/ray/scheduling`.
This is a sub-issue of #50583.
| closed | 2025-02-18T03:31:31Z | 2025-02-21T15:19:55Z | https://github.com/ray-project/ray/issues/50679 | [
"enhancement",
"core"
] | 400Ping | 2 |
automl/auto-sklearn | scikit-learn | 1,331 | Docstrings for include and exclude argument not up-to-date | The docstrings for include and exclude of the AutoSklearnClassifier and AutoSklearnRegressor are not up-to-date as they do not yet address that one should pass in a dict (and no longer a list). | closed | 2021-11-30T15:47:20Z | 2021-12-02T14:13:30Z | https://github.com/automl/auto-sklearn/issues/1331 | [
"documentation"
] | mfeurer | 3 |
deezer/spleeter | deep-learning | 755 | [Discussion] Trying to use my own training set but I get an error | I'm trying to use my own training set which is made up of 20 sec mono loops. I'm getting this error. I'm not sure if it's an issue with my data or the configuration that I'm using. Any thoughts would be much appreciated, I've spent a lot of time on this. I can increase the duration of the loops I'm generating if that is needed. I'm including my config & the validation CSV to show my setup.
Thank you very much for your help!
`(spleeter) jupyter@drumsplit-trainer:~/data/drumsplit$ spleeter train -p config/drumsplit_config.json -d /home/jupyter/data/drumsplitter_training_loops/ --verbose
INFO:tensorflow:Using config: {'_model_dir': 'drumsplit_model', '_tf_random_seed': 3, '_save_summary_steps': 5, '_save_checkpoints_steps': 1000, '_save_checkpoints_secs': None, '_session_config': gpu_options {
per_process_gpu_memory_fraction: 0.45
}
, '_keep_checkpoint_max': 2, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 10, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_checkpoint_save_graph_def': True, '_service': None, '_cluster_spec': ClusterSpec({}), '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:spleeter:Start model training
INFO:tensorflow:Not using Distribute Coordinator.
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after every checkpoint. Checkpoint frequency is determined based on RunConfig arguments: save_checkpoints_steps 1000 or save_checkpoints_secs None.
WARNING:tensorflow:From /opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow/python/training/training_util.py:235: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.
Instructions for updating:
Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.
Traceback (most recent call last):
File "/opt/conda/envs/spleeter/bin/spleeter", line 8, in <module>
sys.exit(entrypoint())
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/spleeter/__main__.py", line 256, in entrypoint
spleeter()
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/typer/main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/typer/main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/spleeter/__main__.py", line 89, in train
tf.estimator.train_and_evaluate(estimator, train_spec, evaluation_spec)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow_estimator/python/estimator/training.py", line 505, in train_and_evaluate
return executor.run()
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow_estimator/python/estimator/training.py", line 646, in run
return self.run_local()
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow_estimator/python/estimator/training.py", line 743, in run_local
self._estimator.train(
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 349, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1175, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1201, in _train_model_default
self._get_features_and_labels_from_input_fn(input_fn, ModeKeys.TRAIN))
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1037, in _get_features_and_labels_from_input_fn
self._call_input_fn(input_fn, mode))
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1130, in _call_input_fn
return input_fn(**kwargs)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/spleeter/dataset.py", line 85, in get_training_dataset
return builder.build(
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/spleeter/dataset.py", line 575, in build
dataset = dataset.map(instrument.convert_to_uint)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 1925, in map
return MapDataset(self, map_func, preserve_cardinality=True)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 4483, in __init__
self._map_func = StructuredFunctionWrapper(
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 3712, in __init__
self._function = fn_factory()
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3134, in get_concrete_function
graph_function = self._get_concrete_function_garbage_collected(
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3100, in _get_concrete_function_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3444, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow/python/eager/function.py", line 3279, in _create_graph_function
func_graph_module.func_graph_from_py_func(
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 999, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 3687, in wrapped_fn
ret = wrapper_helper(*args)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 3617, in wrapper_helper
ret = autograph.tf_convert(self._func, ag_ctx)(*nested_args)
File "/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow/python/autograph/impl/api.py", line 695, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
/opt/conda/envs/spleeter/lib/python3.8/site-packages/spleeter/dataset.py:194 convert_to_uint *
sample[self._spectrogram_key],
/opt/conda/envs/spleeter/lib/python3.8/site-packages/spleeter/audio/convertor.py:110 spectrogram_to_db_uint *
db_spectrogram: tf.Tensor = gain_to_db(spectrogram)
/opt/conda/envs/spleeter/lib/python3.8/site-packages/spleeter/audio/convertor.py:76 gain_to_db *
return 20.0 / np.log(10) * tf.math.log(tf.maximum(tensor, espilon))
/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py:5881 maximum **
_, _, _op, _outputs = _op_def_library._apply_op_helper(
/opt/conda/envs/spleeter/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py:527 _apply_op_helper
raise TypeError(
TypeError: Expected uint8 passed to parameter 'y' of op 'Maximum', got 1e-09 of type 'float' instead. Error: Expected uint8, got 1e-09 of type 'float' instead.`
[drumsplit_validation.csv.txt](https://github.com/deezer/spleeter/files/8557553/drumsplit_validation.csv.txt)
[drumsplit_config.json.txt](https://github.com/deezer/spleeter/files/8557555/drumsplit_config.json.txt)
| closed | 2022-04-25T19:52:02Z | 2022-04-30T14:37:05Z | https://github.com/deezer/spleeter/issues/755 | [
"question"
] | dustyny | 1 |
sherlock-project/sherlock | python | 1,836 | Telegram false positive | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Put x into all boxes (like this [x]) once you have completed what they say.
Make sure to complete everything in the checklist.
-->
- [x] I'm reporting a website that is returning **false positive** results
- [x] I've checked for similar site support requests including closed ones
- [x] I've checked for pull requests attempting to fix this false positive
- [x] I'm only reporting **one** site (create a separate issue for each site)
## Description
<!--
Provide the username that is causing Sherlock to return a false positive, along with any other information that might help us fix this false positive.
-->
The username EDG_7_ is causing a false positive from Telegram.
It seems like any username that ends with a _ (underscore) makes Sherlock return a false positive.
| closed | 2023-07-03T15:13:57Z | 2023-08-29T12:21:00Z | https://github.com/sherlock-project/sherlock/issues/1836 | [
"false positive"
] | Troughy | 0 |
python-restx/flask-restx | flask | 523 | "/" route cannot be defined after Api() call |
### **Minimal Code to reproduce issue**
```python
from flask import Flask
from flask_restx import Api
app = Flask(__name__)
# NB: moving this line after the `route('/')` definition makes it work
api = Api(app, version='1.0', title='MyAPI', doc='/api')
@app.route('/')
def index():
return 'OK'
app.run(debug=True)
```
### **Repro Steps** (if applicable)
1. The provided code answers a `404` on a request to `/`.
However, if the call to `Api()` is done after the `/` route is defined, then things work as expected.
### **Expected Behavior**
Requests to `/` should answer properly.
### **Actual Behavior**
Requests to `/` answer with a `404`.
### **Error Messages/Stack Trace**
N/A
### **Environment**
- Python 3.10.8
- Flask 2.2.3
- Flask-RESTX 1.0.6
### **Additional Context**
Already mentioned in https://github.com/python-restx/flask-restx/issues/452 as a question, but seems more a bug.
| closed | 2023-02-22T15:24:41Z | 2023-03-10T11:12:06Z | https://github.com/python-restx/flask-restx/issues/523 | [
"bug"
] | Jc-L | 3 |
thtrieu/darkflow | tensorflow | 640 | Input images with Only One Channel (Grayscale) | I am running into a size mismatch when trying to feed in images that have only one channel, rather than the usual 3. I changed my .cfg file to set channels to 1, but it looks like that isn't working.
Any advice on how I can fix this? It's a bit difficult to work out where exactly I should be looking to change this. Thanks in advance!
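If patching the loader proves hard, one common workaround — an assumption on my part, not verified against darkflow's data pipeline — is to keep `channels=3` in the .cfg and replicate the single grayscale channel three times before training:

```python
# Workaround sketch (assumption, not verified against darkflow): expand
# each grayscale image to 3 identical channels so it matches the
# (H, W, 3) input shape.  Pure-Python stand-in for clarity; with numpy
# this is just np.repeat(img[..., None], 3, axis=-1).
def gray_to_rgb(gray):
    """gray: H x W nested list of pixel values -> H x W x 3."""
    return [[[v, v, v] for v in row] for row in gray]

img = [[0, 128],
       [255, 64]]              # tiny 2x2 "grayscale image"
rgb = gray_to_rgb(img)
assert len(rgb) == 2 and len(rgb[0]) == 2 and len(rgb[0][0]) == 3
print(rgb[1][0])               # [255, 255, 255]
```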
Here is the output from running "flow":
***********************************************************************************************************
Parsing cfg/yolo-voc-SiMoNN.cfg
Loading None ...
Finished in 0.00015425682067871094s
Building net ...
Source | Train? | Layer description | Output size
-------+--------+----------------------------------+---------------
| | input | (?, 416, 416, 1)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 416, 416, 32)
Load | Yep! | maxp 2x2p0_2 | (?, 208, 208, 32)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 208, 208, 64)
Load | Yep! | maxp 2x2p0_2 | (?, 104, 104, 64)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 104, 104, 128)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 104, 104, 64)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 104, 104, 128)
Load | Yep! | maxp 2x2p0_2 | (?, 52, 52, 128)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 52, 52, 256)
Init | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 52, 52, 128)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 52, 52, 256)
Load | Yep! | maxp 2x2p0_2 | (?, 26, 26, 256)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 26, 26, 512)
Init | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 26, 26, 256)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 26, 26, 512)
Init | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 26, 26, 256)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 26, 26, 512)
Load | Yep! | maxp 2x2p0_2 | (?, 13, 13, 512)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 13, 13, 1024)
Init | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 13, 13, 512)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 13, 13, 1024)
Init | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 13, 13, 512)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 13, 13, 1024)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 13, 13, 1024)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 13, 13, 1024)
Load | Yep! | concat [16] | (?, 26, 26, 512)
Load | Yep! | local flatten 2x2 | (?, 13, 13, 2048)
Load | Yep! | concat [26, 24] | (?, 13, 13, 3072)
Init | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 13, 13, 1024)
Init | Yep! | conv 1x1p0_1 linear | (?, 13, 13, 30)
-------+--------+----------------------------------+---------------
GPU mode with 1.0 usage
cfg/yolo-voc-SiMoNN.cfg loss hyper-parameters:
H = 13
W = 13
box = 5
classes = 1
scales = [1.0, 5.0, 1.0, 1.0]
Building cfg/yolo-voc-SiMoNN.cfg loss
Building cfg/yolo-voc-SiMoNN.cfg train op
2018-03-18 11:48:05.320630: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Finished in 19.190807580947876s
Enter training ...
cfg/yolo-voc-SiMoNN.cfg parsing annotation_train
Parsing for ['sim']
[====================>]100% train_nucleus5_035119.xml
Statistics:
sim: 2227886
Dataset size: 165776
Dataset of 165776 instance(s)
Training statistics:
Learning rate : 1e-05
Batch size : 16
Epoch number : 1000
Backup every : 2000
Traceback (most recent call last):
File "/Users/jtunicorn/anaconda3/envs/neuralnets/bin/flow", line 6, in <module>
cliHandler(sys.argv)
File "/Users/jtunicorn/anaconda3/envs/neuralnets/lib/python3.6/site-packages/darkflow/cli.py", line 33, in cliHandler
print('Enter training ...'); tfnet.train()
File "/Users/jtunicorn/anaconda3/envs/neuralnets/lib/python3.6/site-packages/darkflow/net/flow.py", line 56, in train
fetched = self.sess.run(fetches, feed_dict)
File "/Users/jtunicorn/anaconda3/envs/neuralnets/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 905, in run
run_metadata_ptr)
File "/Users/jtunicorn/anaconda3/envs/neuralnets/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1113, in _run
str(subfeed_t.get_shape())))
**ValueError: Cannot feed value of shape (16, 416, 416, 3) for Tensor 'input:0', which has shape '(?, 416, 416, 1)'** | open | 2018-03-18T23:01:01Z | 2019-05-20T03:33:31Z | https://github.com/thtrieu/darkflow/issues/640 | [] | jyoonie247 | 11 |
mars-project/mars | numpy | 3,366 | [BUG]Build fails under Windows platform | **Bug description**
The MSVC team recently added Mars as part of RWC testing to detect compiler regressions. It seems the project fails to build under Windows due to error C1189: #error: unsupported platform. Could you please take a look?
**To Reproduce**
1. Open a VS2022 x64 Tools command prompt.
2. git clone C:\gitP\Tencent\mars C:\gitP\Tencent\mars (the commit SHA we use is 6c71f72)
3. Build the project from scratch.
**Expected behavior**
Build passed.
**Additional context**
The problem seems to be that some compilation errors occur when compiling the Mars project with Visual Studio 2022; they involve some OpenSSL library header files and result in error C1189: unsupported platform.
[Build (3).log](https://github.com/mars-project/mars/files/15282079/Build.3.log)
Attached is the build log.
We found the problematic header file and found that line 16 caused the error. We have applied a patch to fix this issue.
[Mars_platform_fix.patch](https://github.com/mars-project/mars/files/15282111/Mars_platform_fix.patch)
If you need more information or have any questions, please leave a message under this issue.
| open | 2024-05-11T08:18:43Z | 2024-05-14T02:39:33Z | https://github.com/mars-project/mars/issues/3366 | [] | brianGriifin114 | 1 |
FlareSolverr/FlareSolverr | api | 506 | [torrent9clone] (testing) Exception (torrent9clone): The cookies provided by FlareSolverr are not valid: The cookies provided by FlareSolverr are not valid | closed | 2022-09-05T15:07:40Z | 2022-09-05T22:25:27Z | https://github.com/FlareSolverr/FlareSolverr/issues/506 | [
"duplicate",
"invalid"
] | Beusts | 1 | |
erdewit/ib_insync | asyncio | 310 | Set CONTFUT conId to -1* conId of front contract to enable hashing | The Contract class currently doesn't allow hashing of CONTFUT Contracts because they get the same conId as the front contract, but if ib_insync set all conIds for CONTFUT to -1*conId (of the front contract), you could hash the CONTFUT, and it could make working with those sorts of contracts more streamlined.
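A toy illustration of the proposed convention (hypothetical stand-in class, not ib_insync's actual `Contract`): negating the front contract's conId gives the CONTFUT a distinct, hashable identity:

```python
from dataclasses import dataclass

# Hypothetical stand-in, NOT ib_insync's real Contract class.
@dataclass(frozen=True)
class ToyContract:
    conId: int
    secType: str

front = ToyContract(conId=123456, secType="FUT")
cont = ToyContract(conId=-front.conId, secType="CONTFUT")  # proposed: -1 * conId

# With distinct conIds, both can be used as dict/set keys side by side.
positions = {front: 10, cont: 0}
assert front != cont and len(positions) == 2
```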
Other than needing to set the conId to 0 to actually request CONTFUT historical data (which I believe needs to be manually done at the moment anyway), I don't think that this would cause any trouble. | closed | 2020-11-01T04:38:40Z | 2020-11-02T16:06:28Z | https://github.com/erdewit/ib_insync/issues/310 | [] | TheAIArchitect | 1 |
pyeve/eve | flask | 568 | Incorrect validation in Eve 0.5.2 | I upgraded our application using Eve to use 0.5.2. Our integration tests immediately caught some changes to validation that were unexpected.
See https://gist.github.com/mcreenan/ce366cbb3c5fea17007e
| closed | 2015-02-27T15:18:02Z | 2015-02-27T18:36:10Z | https://github.com/pyeve/eve/issues/568 | [
"bug"
] | mcreenan | 3 |
python-restx/flask-restx | api | 109 | Swagger: add summary on method documentation | **Is your feature request related to a problem? Please describe.**
I have not found a way to add the summary of a method with the `api.doc` decorator.
It seems to be hardcoded in `swagger.py` at line 448:

```python
"summary": doc[method]["docstring"]["summary"],
```

**Describe the solution you'd like**
As with the description on the same method, replace the line with:

```python
"summary": self.summary_for(doc, method) or None,
```

And create the following method:

```python
def summary_for(self, doc, method):
    """Extract the summary metadata and fall back on the whole docstring."""
    parts = []
    if "summary" in doc:
        parts.append(doc["summary"] or "")
    if method in doc and "summary" in doc[method]:
        parts.append(doc[method]["summary"])
    if doc[method]["docstring"]["summary"]:
        parts.append(doc[method]["docstring"]["summary"])
    return "\n".join(parts).strip()
```
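A quick standalone check of the proposed fallback logic, using a plain function and hand-built `doc` dicts (outside flask-restx — the proposed method never actually touches `self`, so it could equally be a module-level helper):

```python
def summary_for(doc, method):
    """Proposed logic: collect summary metadata, falling back on the docstring."""
    parts = []
    if "summary" in doc:
        parts.append(doc["summary"] or "")
    if method in doc and "summary" in doc[method]:
        parts.append(doc[method]["summary"])
    if doc[method]["docstring"]["summary"]:
        parts.append(doc[method]["docstring"]["summary"])
    return "\n".join(parts).strip()

doc = {
    "summary": "Resource-level summary",
    "get": {"summary": "Method-level summary",
            "docstring": {"summary": "Docstring summary"}},
}
print(summary_for(doc, "get"))
# Resource-level summary
# Method-level summary
# Docstring summary
```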
| open | 2020-04-04T20:54:59Z | 2021-07-14T17:58:03Z | https://github.com/python-restx/flask-restx/issues/109 | [
"enhancement"
] | albinpopote | 4 |
521xueweihan/HelloGitHub | python | 2,596 | [Open-source self-recommendation] A full-toolchain integration platform for LLMs, with a WebUI bundle | ## Recommended Project
<!-- This is the entry point for recommending projects to the HelloGitHub monthly; self-recommendations and recommendations of open-source projects are welcome. The only requirement: please introduce the project following the prompts below. -->
<!-- Click "Preview" above to view the submitted content right away -->
<!-- Only open-source projects hosted on GitHub are accepted; please fill in the GitHub project URL -->
- Project URL: https://github.com/wpydcr/LLM-Kit
<!-- Please choose from: C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning -->
- Category: Python, Machine Learning
<!-- Describe what it does in about 20 characters, like an article title that is clear at a glance -->
- Project title: A full-toolchain integration platform for large language models, with a WebUI bundle (a must-have for beginners)
<!-- What is this project, what can it be used for, what features does it have or what pain points does it solve, what scenarios does it fit, and what can beginners learn from it? Length: 32-256 characters -->
- Project description: The goal of this project is a full-workflow WebUI bundle for today's major language models. Have your own customized model and a dedicated large-model application without writing any code!
<!-- What makes it stand out? What features does it have compared to similar projects? -->
- Highlights: includes knowledge bases, role-playing, anime-character driving, data processing, model training and inference, and more
- Inference and training of the following large models are currently supported, with more continually being added:
[openai(VPN)](https://platform.openai.com/account/api-keys)
[azure openai](https://learn.microsoft.com/zh-cn/azure/cognitive-services/openai/)
[文心一言](https://cloud.baidu.com/survey_summit/qianfan.html)
[智谱GLM](https://open.bigmodel.cn/usercenter/apikeys)
[通义千问](https://help.aliyun.com/document_detail/2399480.html)
[讯飞星火](https://console.xfyun.cn/services/cbm)
[chatglm-6b](https://huggingface.co/THUDM/chatglm-6b)
[moss-moon-003-sft](https://huggingface.co/fnlp/moss-moon-003-sft)
[phoenix-chat-7b](https://huggingface.co/FreedomIntelligence/phoenix-chat-7b)
[Guanaco](https://huggingface.co/JosephusCheung/Guanaco)
[baichuan-7B](https://huggingface.co/baichuan-inc/baichuan-7B)
[chatglm2-6b](https://huggingface.co/THUDM/chatglm2-6b)
[internlm-chat-7b-8k](https://huggingface.co/internlm/internlm-chat-7b-8k)
[chinese-alpaca-2-7b](https://huggingface.co/ziqingyang/chinese-alpaca-2-7b)
[Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat)
- Screenshots:
- Roadmap:
Support inference and training for more mainstream large models
Add more rich and varied large-model application demos
Add multi-agent applications
| closed | 2023-08-23T10:09:27Z | 2023-10-24T07:05:40Z | https://github.com/521xueweihan/HelloGitHub/issues/2596 | [] | wpydcr | 1 |
mars-project/mars | numpy | 3,278 | `remove_chunks` slows down on `OrderedSet.discard` when there are many chunks | When we have many chunks running on each band, `remove_chunks` becomes the bottleneck of the system.

| closed | 2022-10-12T10:36:41Z | 2022-10-21T06:52:24Z | https://github.com/mars-project/mars/issues/3278 | [] | chaokunyang | 0 |
fastapi/sqlmodel | sqlalchemy | 354 | Docs: condecimal gives type error "Illegal type annotation: call expression not allowed" | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from pydantic import condecimal
from sqlmodel import Field, SQLModel
class Lineitem(SQLModel, table=True):
price: condecimal(max_digits=5, decimal_places=2) = Field(default=0)
```
### Description
The above code yields a Pylance error in VsCode:
```
Illegal type annotation: call expression not allowed Pylance(reportGeneralTypeIssues)
```
And with mypy:
```
$ mypy code/example.py
code/example.py:6: error: Invalid type comment or annotation
code/example.py:6: note: Suggestion: use condecimal[...] instead of condecimal(...)
Found 1 error in 1 file (checked 1 source file)
```
This is definitely a Pydantic issue: https://github.com/samuelcolvin/pydantic/issues/156.
However, I thought it might be a good idea to add to the docs for decimals that this error can occur. There's a [comment on the pydantic issue](https://github.com/samuelcolvin/pydantic/issues/156#issuecomment-1130883884) that gives an ugly workaround, which in my case would look something like:
```python
from decimal import Decimal
from typing import TYPE_CHECKING
from typing_extensions import reveal_type
from pydantic import condecimal
from sqlmodel import Field, SQLModel
class Lineitem(SQLModel, table=True):
if TYPE_CHECKING:
price: Decimal = Field(default=0)
else:
price: condecimal(max_digits=5, decimal_places=2) = Field(default=0)
item = Lineitem(price=Decimal(1))
if TYPE_CHECKING:
reveal_type(item.price) # result: Type of "item.price" is "Decimal"
item = Lineitem(price=Decimal(9.999)) # should throw a validation error
```
The mypy result is:
```
$ mypy code/example_fixed.py
code/example_fixed.py:17: note: Revealed type is "_decimal.Decimal"
Success: no issues found in 1 source file
```
I don't think this workaround should be added to the docs: nobody would go this far to make a type checker happy. I was thinking the docs could say something like:
> Warning (or Tip?)
> Type checkers will complain about the type of condecimal being the result of a callable, or be confused by it. There is no fix for this, so to silence the error add `# type: ignore`:
>
> ```
> class Lineitem(SQLModel, table=True):
> price: condecimal(max_digits=5, decimal_places=2) = Field(default=0) # type: ignore
> ```
### Operating System
macOS
### Operating System Details
$ sw_vers
ProductName: macOS
ProductVersion: 12.4
BuildVersion: 21F79
### SQLModel Version
0.0.6
### Python Version
Python 3.10.2
### Additional Context
Screenshot of Pylance error:

| closed | 2022-06-04T03:43:57Z | 2023-10-26T10:19:48Z | https://github.com/fastapi/sqlmodel/issues/354 | [
"question"
] | cassieopea | 3 |
yt-dlp/yt-dlp | python | 12,376 | ffmpeg: "Invalid data found when processing input" with niconico/nicovideo m3u8 formats | ### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar questions **including closed ones**. DO NOT post duplicates
### Please make sure the question is worded well enough to be understood
Downloading using a pipe and ffmpeg works with YouTube, but not with Niconico.
Also, if a format is specified, the problem does not occur.
I assume cookies are the problem, but what is the difference?
My environment is Fedora 38, Intel x64, with a self-built ffmpeg.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[root@nacht ~]# yt-dlp -vU "https://www.nicovideo.jp/watch/sm44612421" -o - | ffmpeg -i pipe: aaa.webm
ffmpeg version git-2025-02-15-a50d36b Copyright (c) 2000-2025 the FFmpeg developers
built with gcc 13 (GCC)
configuration: --prefix=/usr/local/ffmpeg_build --extra-cflags=-I/usr/local/ffmpeg_build/include --extra-ldflags=-L/usr/local/ffmpeg_build/lib --extra-libs='-lm -lpthread' --bindir=/usr/local/ffmpeg_build/bin --pkg-config-flags=--static --enable-gpl --enable-nonfree --enable-libfreetype --enable-openssl --enable-pic --enable-libx264 --enable-libx265 --enable-libfdk-aac --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libtheora --enable-libvpx --enable-libass
libavutil 59. 56.100 / 59. 56.100
libavcodec 61. 33.102 / 61. 33.102
libavformat 61. 9.107 / 61. 9.107
libavdevice 61. 4.100 / 61. 4.100
libavfilter 10. 9.100 / 10. 9.100
libswscale 8. 13.100 / 8. 13.100
libswresample 5. 4.100 / 5. 4.100
libpostproc 58. 4.100 / 58. 4.100
[debug] Command-line config: ['-vU', 'https://www.nicovideo.jp/watch/sm44612421', '-o', '-']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.01.26 from yt-dlp/yt-dlp [3b4531934] (pip)
[debug] Python 3.11.9 (CPython x86_64 64bit) - Linux-6.2.9-300.fc38.x86_64-x86_64-with-glibc2.37 (OpenSSL 3.0.9 30 May 2023, glibc 2.37)
[debug] exe versions: ffmpeg git-2025-02-15-a50d36b (fdk,setts), ffprobe git-2025-02-15-a50d36b
[debug] Optional libraries: Cryptodome-3.21.0, requests-2.28.2, sqlite3-3.40.1, urllib3-1.26.18
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2025.01.26 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2025.01.26 from yt-dlp/yt-dlp)
[niconico] Extracting URL: https://www.nicovideo.jp/watch/sm44612421
[niconico] sm44612421: Downloading webpage
[niconico] sm44612421: Downloading JSON metadata
[niconico] sm44612421: Downloading m3u8 information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: best/bestvideo+bestaudio
[info] sm44612421: Downloading 1 format(s): video-2209+audio-aac-192kbps
[debug] Invoking ffmpeg downloader on "https://delivery.domand.nicovideo.jp/hlsbid/67a1ae5b085d83db4562ed68/playlists/media/video-h264-720p.m3u8?session=9476e95b918ffcbf1bc64084f9a99e63c30b4ebbcc83d7870000000067b28790c1ee755118741491&Expires=1739753360&Signature=hxLPqKIogZZwlU8j-btfFK~l4waXq6SFIoulAHPFHJANdq9b5yVcytuYz5D0C2dJoXaHuPjuIS9mcfBavkISmgoSXnwy2FFXqPx9ypgufylqYaiS0DZMxQFGQwJvxYKPejYv0iqZPTVH0ruU9SZsEFZ3~qTO4mDizGAj2j94nbLU9uKu1WfPqmNG~qQfxOKFKzUjn73Mq51gsuHCUyzxN6hT6~RfW-Jk8NEs~iXfAeOPWDyhMYTbEUpIUxOvcVgAnGFMsOPsKZX39bl~kKk4Nb90pSOj8oIofJtkucvvRpDskE2fF5UfZA-8tIzOpS-fY15BWV04SPSlYQzgyiH1Ow__&Key-Pair-Id=K11RB80NFXU134", "https://delivery.domand.nicovideo.jp/hlsbid/67a1ae5b085d83db4562ed68/playlists/media/audio-aac-192kbps.m3u8?session=9476e95b918ffcbf1bc64084f9a99e63c30b4ebbcc83d7870000000067b28790c1ee755118741491&Expires=1739753360&Signature=JkvCFtkGzszCy24~RgA0DPKr9AZ4LBfwgubN3tSfj7iChsaPeyQfiZPXhxBoyV-xBnQlcWUGTXiZe0HCdYAxGHCFO6VO~SWEmk1NVBZAWsqDLPzkhsfLTFdhWsFohUSJGQEKZC67S4zYOqxy3d3t0YgFnJOiftybwaA7ZzU1NiEE0fKsSkE7dc7aUJrdnJKAfZM7zvX-UgncU-OqxdBqkACwM5yjP3SYiMXpcqSn-1S99hl5emcSvon1CQb-prGLIIV-8Vwdc~jyBfa5XJN~e58rAhJBYcWKQLXgVoUtIUiBq5yzAVAoOdSVpFoHOgBIzJAs8dGFwTXNPa3m7G8A2g__&Key-Pair-Id=K11RB80NFXU134"
[download] Destination: -
[debug] ffmpeg command line: ffmpeg -y -loglevel verbose -cookies 'nicosid=1739666960.706481533; path=/; domain=.nicovideo.jp;
domand_bid=78aeaf983676a88afd4e16f7b1b2bd6bd53c30798fe6f0fb0c2019fe69c817cf; path=/; domain=.nicovideo.jp;
' -headers 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.54 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Sec-Fetch-Mode: navigate
' -i 'https://delivery.domand.nicovideo.jp/hlsbid/67a1ae5b085d83db4562ed68/playlists/media/video-h264-720p.m3u8?session=9476e95b918ffcbf1bc64084f9a99e63c30b4ebbcc83d7870000000067b28790c1ee755118741491&Expires=1739753360&Signature=hxLPqKIogZZwlU8j-btfFK~l4waXq6SFIoulAHPFHJANdq9b5yVcytuYz5D0C2dJoXaHuPjuIS9mcfBavkISmgoSXnwy2FFXqPx9ypgufylqYaiS0DZMxQFGQwJvxYKPejYv0iqZPTVH0ruU9SZsEFZ3~qTO4mDizGAj2j94nbLU9uKu1WfPqmNG~qQfxOKFKzUjn73Mq51gsuHCUyzxN6hT6~RfW-Jk8NEs~iXfAeOPWDyhMYTbEUpIUxOvcVgAnGFMsOPsKZX39bl~kKk4Nb90pSOj8oIofJtkucvvRpDskE2fF5UfZA-8tIzOpS-fY15BWV04SPSlYQzgyiH1Ow__&Key-Pair-Id=K11RB80NFXU134' -cookies 'nicosid=1739666960.706481533; path=/; domain=.nicovideo.jp;
domand_bid=78aeaf983676a88afd4e16f7b1b2bd6bd53c30798fe6f0fb0c2019fe69c817cf; path=/; domain=.nicovideo.jp;
' -headers 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.54 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Sec-Fetch-Mode: navigate
' -i 'https://delivery.domand.nicovideo.jp/hlsbid/67a1ae5b085d83db4562ed68/playlists/media/audio-aac-192kbps.m3u8?session=9476e95b918ffcbf1bc64084f9a99e63c30b4ebbcc83d7870000000067b28790c1ee755118741491&Expires=1739753360&Signature=JkvCFtkGzszCy24~RgA0DPKr9AZ4LBfwgubN3tSfj7iChsaPeyQfiZPXhxBoyV-xBnQlcWUGTXiZe0HCdYAxGHCFO6VO~SWEmk1NVBZAWsqDLPzkhsfLTFdhWsFohUSJGQEKZC67S4zYOqxy3d3t0YgFnJOiftybwaA7ZzU1NiEE0fKsSkE7dc7aUJrdnJKAfZM7zvX-UgncU-OqxdBqkACwM5yjP3SYiMXpcqSn-1S99hl5emcSvon1CQb-prGLIIV-8Vwdc~jyBfa5XJN~e58rAhJBYcWKQLXgVoUtIUiBq5yzAVAoOdSVpFoHOgBIzJAs8dGFwTXNPa3m7G8A2g__&Key-Pair-Id=K11RB80NFXU134' -c copy -map 0:0 -map 1:0 -f mpegts -
ffmpeg version git-2025-02-15-a50d36b Copyright (c) 2000-2025 the FFmpeg developers
built with gcc 13 (GCC)
configuration: --prefix=/usr/local/ffmpeg_build --extra-cflags=-I/usr/local/ffmpeg_build/include --extra-ldflags=-L/usr/local/ffmpeg_build/lib --extra-libs='-lm -lpthread' --bindir=/usr/local/ffmpeg_build/bin --pkg-config-flags=--static --enable-gpl --enable-nonfree --enable-libfreetype --enable-openssl --enable-pic --enable-libx264 --enable-libx265 --enable-libfdk-aac --enable-libmp3lame --enable-libopus --enable-libvorbis --enable-libtheora --enable-libvpx --enable-libass
libavutil 59. 56.100 / 59. 56.100
libavcodec 61. 33.102 / 61. 33.102
libavformat 61. 9.107 / 61. 9.107
libavdevice 61. 4.100 / 61. 4.100
libavfilter 10. 9.100 / 10. 9.100
libswscale 8. 13.100 / 8. 13.100
libswresample 5. 4.100 / 5. 4.100
libpostproc 58. 4.100 / 58. 4.100
[tcp @ 0x44abcc0] Starting connection attempt to 2600:9000:221a:a000:18:fede:10c0:93a1 port 443
[tcp @ 0x44abcc0] Starting connection attempt to 18.65.185.77 port 443
[tcp @ 0x44abcc0] Successfully connected to 18.65.185.77 port 443
[hls @ 0x44a7100] Skip ('#EXT-X-VERSION:6')
[hls @ 0x44a7100] URL https://asset.domand.nicovideo.jp/67a1ae5b085d83db4562ed68/video/12/video-h264-720p/001.cmfv?session=9476e95b918ffcbf1bc64084f9a99e63c30b4ebbcc83d7870000000067b28790c1ee755118741491&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9hc3NldC5kb21hbmQubmljb3ZpZGVvLmpwLzY3YTFhZTViMDg1ZDgzZGI0NTYyZWQ2OC92aWRlby8xMi92aWRlby1oMjY0LTcyMHAvKiIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTczOTc1MzM2MH19fV19&Signature=TnMeaC-TM~9mjg3VolPsPkz3UyIMgaUxtnRzz~XWmXhAl~2vCIb1hRIHhFt95ylX8tpwbXaobHqA2vIB3lBDgqwVv-IfIkjhOq32FsGxXFlQaRD59rc4MrGO5JdjHjRWPCid6QaLqD76VrqxO9xdMpqU4XU~kTUJuAbdUE9Srtf67T8M2iu6CrmSScphwcu8a6xrxwivVJaCKuIBNwpJaFomk872HTqCO295zMDV-hLnXfzPYiW3kbPj6pdUdRUpqEPNxaowC7RgMglxH4kvgm2YY3oE0koMKo24LhLEouw6yCE0BoGp~~Hy3NU-xPRGwrR4GYd~bqFHvYnA~02BZQ__&Key-Pair-Id=K11RB80NFXU134 is not in allowed_extensions
[AVIOContext @ 0x4517440] Statistics: 7733 bytes read, 0 seeks
[in#0 @ 0x44a6e00] Error opening input: Invalid data found when processing input
Error opening input file https://delivery.domand.nicovideo.jp/hlsbid/67a1ae5b085d83db4562ed68/playlists/media/video-h264-720p.m3u8?session=9476e95b918ffcbf1bc64084f9a99e63c30b4ebbcc83d7870000000067b28790c1ee755118741491&Expires=1739753360&Signature=hxLPqKIogZZwlU8j-btfFK~l4waXq6SFIoulAHPFHJANdq9b5yVcytuYz5D0C2dJoXaHuPjuIS9mcfBavkISmgoSXnwy2FFXqPx9ypgufylqYaiS0DZMxQFGQwJvxYKPejYv0iqZPTVH0ruU9SZsEFZ3~qTO4mDizGAj2j94nbLU9uKu1WfPqmNG~qQfxOKFKzUjn73Mq51gsuHCUyzxN6hT6~RfW-Jk8NEs~iXfAeOPWDyhMYTbEUpIUxOvcVgAnGFMsOPsKZX39bl~kKk4Nb90pSOj8oIofJtkucvvRpDskE2fF5UfZA-8tIzOpS-fY15BWV04SPSlYQzgyiH1Ow__&Key-Pair-Id=K11RB80NFXU134.
Error opening input files: Invalid data found when processing input
ERROR: ffmpeg exited with code 183
File "/usr/local/bin/yt-dlp", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.11/site-packages/yt_dlp/__init__.py", line 1095, in main
_exit(*variadic(_real_main(argv)))
File "/usr/local/lib/python3.11/site-packages/yt_dlp/__init__.py", line 1085, in _real_main
return ydl.download(all_urls)
File "/usr/local/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 3618, in download
self.__download_wrapper(self.extract_info)(
File "/usr/local/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 3591, in wrapper
res = func(*args, **kwargs)
File "/usr/local/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 1626, in extract_info
return self.__extract_info(url, self.get_info_extractor(key), download, extra_info, process)
File "/usr/local/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 1637, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 1793, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/usr/local/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 1852, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
File "/usr/local/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 3024, in process_video_result
self.process_info(new_info)
File "/usr/local/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 177, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 3439, in process_info
success, real_download = self.dl(temp_filename, info_dict)
File "/usr/local/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 3212, in dl
return fd.download(name, new_info, subtitle)
File "/usr/local/lib/python3.11/site-packages/yt_dlp/downloader/common.py", line 464, in download
ret = self.real_download(filename, info_dict)
File "/usr/local/lib/python3.11/site-packages/yt_dlp/downloader/external.py", line 79, in real_download
self.report_error('%s exited with code %d' % (
File "/usr/local/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 1095, in report_error
self.trouble(f'{self._format_err("ERROR:", self.Styles.ERROR)} {message}', *args, **kwargs)
File "/usr/local/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 1023, in trouble
tb_data = traceback.format_list(traceback.extract_stack())
[in#0 @ 0x5137740] Error opening input: Invalid data found when processing input
Error opening input file pipe:.
Error opening input files: Invalid data found when processing input
``` | closed | 2025-02-16T00:54:54Z | 2025-02-17T22:31:00Z | https://github.com/yt-dlp/yt-dlp/issues/12376 | [
"external issue"
] | ghost | 8 |
s3rius/FastAPI-template | graphql | 167 | Using MySQL as the db is not working | When selecting the MySQL database, the installation breaks at this line:
```• Installing mypy (1.3.0)
• Installing mysqlclient (2.1.1): Failed
ChefBuildError
Backend subprocess exited when trying to invoke get_requires_for_build_wheel
/bin/sh: mysql_config: command not found
/bin/sh: mariadb_config: command not found
/bin/sh: mysql_config: command not found
mysql_config --version
mariadb_config --version
mysql_config --libs
Traceback (most recent call last):
File "/Users/hectorramos/Library/Application Support/pypoetry/venv/lib/python3.11/site-packages/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/Users/hectorramos/Library/Application Support/pypoetry/venv/lib/python3.11/site-packages/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/hectorramos/Library/Application Support/pypoetry/venv/lib/python3.11/site-packages/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/var/folders/91/cbx_6wfx1xjfxd27x0kfmd680000gn/T/tmp3w8km71r/.venv/lib/python3.11/site-packages/setuptools/build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/folders/91/cbx_6wfx1xjfxd27x0kfmd680000gn/T/tmp3w8km71r/.venv/lib/python3.11/site-packages/setuptools/build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "/var/folders/91/cbx_6wfx1xjfxd27x0kfmd680000gn/T/tmp3w8km71r/.venv/lib/python3.11/site-packages/setuptools/build_meta.py", line 488, in run_setup
self).run_setup(setup_script=setup_script)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/var/folders/91/cbx_6wfx1xjfxd27x0kfmd680000gn/T/tmp3w8km71r/.venv/lib/python3.11/site-packages/setuptools/build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 15, in <module>
File "/private/var/folders/91/cbx_6wfx1xjfxd27x0kfmd680000gn/T/tmpldu16aav/mysqlclient-2.1.1/setup_posix.py", line 70, in get_config
libs = mysql_config("libs")
^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/91/cbx_6wfx1xjfxd27x0kfmd680000gn/T/tmpldu16aav/mysqlclient-2.1.1/setup_posix.py", line 31, in mysql_config
raise OSError("{} not found".format(_mysql_config_path))
OSError: mysql_config not found
at ~/Library/Application Support/pypoetry/venv/lib/python3.11/site-packages/poetry/installation/chef.py:147 in _prepare
143│
144│ error = ChefBuildError("\n\n".join(message_parts))
145│
146│ if error is not None:
→ 147│ raise error from None
148│
149│ return path
150│
151│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
Note: This error originates from the build backend, and is likely not a problem with poetry but with mysqlclient (2.1.1) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "mysqlclient (==2.1.1)"'.
• Installing pre-commit (3.3.2)
```
and from that point the poetry lock file is not created correctly. | open | 2023-05-30T06:11:54Z | 2023-05-30T07:47:29Z | https://github.com/s3rius/FastAPI-template/issues/167 | [] | ramoseh | 1
ultralytics/yolov5 | machine-learning | 13,248 | What prevents me from using the AMP function? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Thank you very much for your work. I would like to use the AMP feature, but when I train on my device it says `AMP checks failed ❌, disabling Automatic Mixed Precision.` My device setup is as follows:
```bash
torch=2.0.0
CUDA=11.8
4070Ti
```
I would like to know which factors can prevent AMP from working, such as the CUDA version, the graphics hardware, or something else, because I really want to use the AMP feature!
### Additional
_No response_ | closed | 2024-08-07T08:50:50Z | 2024-10-20T19:51:31Z | https://github.com/ultralytics/yolov5/issues/13248 | [
"question"
] | thgpddl | 4 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 42 | Some Douyin videos that cannot be saved fail to download (error: no URL specified) | iOS Shortcuts
For example, this one: https://v.douyin.com/YmWsFLr/
pytest-dev/pytest-randomly | pytest | 210 | sort tests by some_hash_fn(f"{item.id!r}{randomly_seed}") rather than shuffle | Shuffle with the same seed isn't stable for item subsets and supersets:
```python3
import random
def shuffle(seed, v):
v = list(v)
random.Random(x=seed).shuffle(v)
return v
assert (
shuffle(1, "abcd") == ["d", "a", "c", "b"]
and shuffle(1, "abc") == ["b", "c", "a"]
)
```
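For contrast, a key-based sort is stable under adding or removing items, since each item's key depends only on the item and the seed (md5 here is just a stand-in for `some_hash_fn`):

```python
import hashlib

def stable_order(seed, items):
    # the key depends only on (item, seed), so shared items keep their
    # relative order across subsets and supersets
    key = lambda item: hashlib.md5(f"{item!r}{seed}".encode()).hexdigest()
    return sorted(items, key=key)

full = stable_order(1, "abcd")
sub = stable_order(1, "abc")
# restricting the full order to the subset's items reproduces the subset order
assert [c for c in full if c in sub] == sub
```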
e.g. if you remove tests due to `--lf` or `--sw`, or add/remove tests | closed | 2019-11-11T10:26:38Z | 2021-08-13T09:15:06Z | https://github.com/pytest-dev/pytest-randomly/issues/210 | [] | graingert | 1
lukas-blecher/LaTeX-OCR | pytorch | 178 | [feature] Download checkpoints in correct path | This program downloads checkpoints into `~/.local/lib/python3.10/site-packages/pix2tex/model/checkpoints`. However, PyTorch and all libraries that depend on PyTorch download their `.pth` files into the same cache directory (which varies by OS). Why not keep this unified?
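For reference, the unified directory is what `torch.hub.get_dir()` resolves to; its default path logic is roughly the following (a stdlib-only sketch based on my reading, the authoritative resolution is in torch's docs):

```python
import os
from pathlib import Path

def torch_checkpoint_dir() -> Path:
    # default resolution: $TORCH_HOME, else $XDG_CACHE_HOME/torch, else ~/.cache/torch
    cache_root = os.getenv("XDG_CACHE_HOME", os.path.join("~", ".cache"))
    torch_home = os.getenv("TORCH_HOME", os.path.join(cache_root, "torch"))
    # hub checkpoints live under <torch_home>/hub/checkpoints
    return Path(os.path.expanduser(torch_home)) / "hub" / "checkpoints"

print(torch_checkpoint_dir())
```

Checkpoints fetched via `torch.hub.load_state_dict_from_url` land in this directory by default, which is why the other libraries all share it.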
```
❯ ls ~/.local/lib/python3.10/site-packages/pix2tex/model/checkpoints
__init__.py __pycache__ get_latest_checkpoint.py image_resizer.pth weights.pth
❯ ls ~/.cache/torch/hub/checkpoints/
alexnet-owt-7be5be79.pth resnet50-0676ba61.pth
...
``` | closed | 2022-09-12T11:07:11Z | 2023-10-15T21:25:33Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/178 | [] | Freed-Wu | 0 |
ivy-llc/ivy | tensorflow | 28,710 | Fix Frontend Failing Test: paddle - creation.jax.numpy.size | To-do List: https://github.com/unifyai/ivy/issues/27500 | closed | 2024-03-31T12:34:29Z | 2024-04-09T04:32:58Z | https://github.com/ivy-llc/ivy/issues/28710 | [
"Sub Task"
] | ZJay07 | 0 |
fastapi-users/fastapi-users | asyncio | 253 | JWT token refresh | Hi, thanks for the great package and documentation first of all!
However, I was wondering if I missed something or if the JWT logic is missing a `/refresh_token` router?
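To make the intent concrete, here is the rough refresh logic I have in mind, as a stdlib-only sketch (the signing scheme, claim names, and lifetime are placeholders, not fastapi-users code): a still-valid token is exchanged for a fresh one, so the password is only needed again after the token has fully expired.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # placeholder application signing key

def _sign(payload: bytes) -> str:
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def issue_token(sub: str, lifetime: int = 3600) -> str:
    payload = json.dumps({"sub": sub, "exp": int(time.time()) + lifetime}).encode()
    return base64.urlsafe_b64encode(payload).decode() + "." + _sign(payload)

def refresh_token(token: str, lifetime: int = 3600) -> str:
    body, sig = token.rsplit(".", 1)
    payload = base64.urlsafe_b64decode(body.encode())
    if not hmac.compare_digest(sig, _sign(payload)):
        raise ValueError("invalid signature")
    claims = json.loads(payload)
    if claims["exp"] < time.time():
        raise ValueError("token expired, full login required")
    # still valid: mint a fresh token for the same subject
    return issue_token(claims["sub"], lifetime)
```

A `/refresh_token` route would then just call `refresh_token` with the token taken from the Authorization header.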
How can I make sure the user doesn't need to supply the password again when the token expires? I was planning on automatically refreshing it as long as it is still valid. | closed | 2020-07-10T12:08:21Z | 2023-06-08T21:54:27Z | https://github.com/fastapi-users/fastapi-users/issues/253 | [
"documentation",
"question"
] | moreinhardt | 8 |
huggingface/datasets | tensorflow | 7,357 | Python process aborted with GIL issue when using image dataset | ### Describe the bug
The issue is visible only with the latest `datasets==3.2.0`.
When using an image dataset, the Python process gets aborted right before exit with the following error:
```
Fatal Python error: PyGILState_Release: thread state 0x7fa1f409ade0 must be current when releasing
Python runtime state: finalizing (tstate=0x0000000000ad2958)
Thread 0x00007fa33d157740 (most recent call first):
<no Python frame>
Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._boun
ded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, pyarrow.lib, pandas._libs.tslibs.ccalendar, pandas._libs.ts
libs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.t
slibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._l
ibs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pyarrow._compute, pan
das._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join,
pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, charset_normalizer.md, requests.pa
ckages.charset_normalizer.md, requests.packages.chardet.md, yaml._yaml, markupsafe._speedups, PIL._imaging, torch._C, torch._C._dynamo.autograd_compiler, torch._C._dynamo.eval_frame, torch._C._dynamo.guards
, torch._C._dynamo.utils, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, sentencepiece._sentencepiece, sklearn.__check_build._check_build, psutil._psut
il_linux, psutil._psutil_posix, scipy._lib._ccallback_c, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.l
inalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_up
date, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.linalg._propack._spropack, scipy.sparse.linalg._propack._dpropack, scipy.sparse.linalg._propack._cpropack,
scipy.sparse.linalg._propack._zpropack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flo
w, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial
._ckdtree, scipy._lib.messagestream, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.spatial.transform._rotation, scipy.optimize._group_columns, s
cipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, sc
ipy.optimize._zeros, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.l
inalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.optimize._direct, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integr
ate._lsoda, scipy.interpolate._fitpack, scipy.interpolate._dfitpack, scipy.interpolate._bspl, scipy.interpolate._ppoly, scipy.interpolate.interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._r
gi_cython, scipy.special.cython_special, scipy.stats._stats, scipy.stats._biasedurn, scipy.stats._levy_stable.levyst, scipy.stats._stats_pythran, scipy._lib._uarray._uarray, scipy.stats._ansari_swilk_statis
tics, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._mvn, scipy.stats._rcont.rcont, scipy.stats._unuran.unuran_wrapper, scipy.ndimage._nd_image, _ni_label, scipy.ndimage._ni_label, sklearn.utils._isf
inite, sklearn.utils.sparsefuncs_fast, sklearn.utils.murmurhash, sklearn.utils._openmp_helpers, sklearn.metrics.cluster._expected_mutual_info_fast, sklearn.preprocessing._csr_polynomial_expansion, sklearn.p
reprocessing._target_encoder_fast, sklearn.metrics._dist_metrics, sklearn.metrics._pairwise_distances_reduction._datasets_pair, sklearn.utils._cython_blas, sklearn.metrics._pairwise_distances_reduction._bas
e, sklearn.metrics._pairwise_distances_reduction._middle_term_computer, sklearn.utils._heap, sklearn.utils._sorting, sklearn.metrics._pairwise_distances_reduction._argkmin, sklearn.metrics._pairwise_distanc
es_reduction._argkmin_classmode, sklearn.utils._vector_sentinel, sklearn.metrics._pairwise_distances_reduction._radius_neighbors, sklearn.metrics._pairwise_distances_reduction._radius_neighbors_classmode, s
klearn.metrics._pairwise_fast, PIL._imagingft, google._upb._message, h5py._errors, h5py.defs, h5py._objects, h5py.h5, h5py.utils, h5py.h5t, h5py.h5s, h5py.h5ac, h5py.h5p, h5py.h5r, h5py._proxy, h5py._conv,
h5py.h5z, h5py.h5a, h5py.h5d, h5py.h5ds, h5py.h5g, h5py.h5i, h5py.h5o, h5py.h5f, h5py.h5fd, h5py.h5pl, h5py.h5l, h5py._selector, _cffi_backend, pyarrow._parquet, pyarrow._fs, pyarrow._azurefs, pyarrow._hdfs
, pyarrow._gcsfs, pyarrow._s3fs, multidict._multidict, propcache._helpers_c, yarl._quoting_c, aiohttp._helpers, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket, frozenlist._frozenlist, xxhash
._xxhash, pyarrow._json, pyarrow._acero, pyarrow._csv, pyarrow._dataset, pyarrow._dataset_orc, pyarrow._parquet_encryption, pyarrow._dataset_parquet_encryption, pyarrow._dataset_parquet, regex._regex, scipy
.io.matlab._mio_utils, scipy.io.matlab._streams, scipy.io.matlab._mio5_utils, PIL._imagingmath, PIL._webp (total: 236)
Aborted (core dumped)
```
### Steps to reproduce the bug
Install `datasets==3.2.0`
Run the following script:
```python
import datasets
DATASET_NAME = "phiyodr/InpaintCOCO"
NUM_SAMPLES = 10
def preprocess_fn(example):
return {
"prompts": example["inpaint_caption"],
"images": example["coco_image"],
"masks": example["mask"],
}
default_dataset = datasets.load_dataset(
DATASET_NAME, split="test", streaming=True
).filter(lambda example: example["inpaint_caption"] != "").take(NUM_SAMPLES)
test_data = default_dataset.map(
lambda x: preprocess_fn(x), remove_columns=default_dataset.column_names
)
for data in test_data:
print(data["prompts"])
```
### Expected behavior
The script should not hang or crash.
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.31
- Python version: 3.11.0
- `huggingface_hub` version: 0.25.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.2.0 | open | 2025-01-06T11:29:30Z | 2025-03-08T15:59:36Z | https://github.com/huggingface/datasets/issues/7357 | [] | AlexKoff88 | 1 |
Avaiga/taipy | data-visualization | 1,748 | Stop support for Python 3.8 | Stop supporting version 3.8 of Python. | closed | 2024-09-05T07:27:20Z | 2024-09-21T06:49:17Z | https://github.com/Avaiga/taipy/issues/1748 | [
"🟥 Priority: Critical",
"🖧 Devops",
"🔒 Staff only"
] | jrobinAV | 2 |
hyperspy/hyperspy | data-visualization | 2,935 | m.set_signal_range() broken? | In the most recent version of HyperSpy (1.7), calling `m.set_signal_range()` does not work.
The "non-GUI" version works fine.
```python
import hyperspy.api as hs
m = hs.datasets.artificial_data.get_core_loss_eels_model()
m.set_signal_range()
```
Gives the error:
```python
File hyperspy/signal_tools.py:241, in SpanSelectorInSignal1D.span_selector_switch(self, on)
229 ax = self.signal._plot.signal_plot.ax
230 self.span_selector = SpanSelector(
231 ax=ax,
232 onselect=lambda *args, **kwargs: None,
(...)
239 handle_props={"alpha":0.5, "color":'r'},
240 useblit=ax.figure.canvas.supports_blit)
--> 241 self.connect()
243 elif self.span_selector is not None:
244 self.on_disabling_span_selector()
File hyperspy/signal_tools.py:292, in SpanSelectorInSignal1D.connect(self)
291 def connect(self):
--> 292 for event in [self.signal.events.data_changed,
293 self.signal.axes_manager.events.indices_changed]:
294 event.connect(self._reset_span_selector_background, [])
File hyperspy/events.py:107, in Events.__getattr__(self, name)
99 def __getattr__(self, name):
100 """
101 Magic to enable having `Event`s as attributes, and keeping them
102 separate from other attributes.
(...)
105 could not be found in the normal way).
106 """
--> 107 return self._events[name]
KeyError: 'data_changed'
```
-------
`hyperspy==1.7.0`
`hyperspy_gui_ipywidgets==1.5.0`
`hyperspy_gui_traitsui==1.5.1` | closed | 2022-05-10T11:00:52Z | 2022-06-08T21:54:37Z | https://github.com/hyperspy/hyperspy/issues/2935 | [
"type: bug",
"type: regression",
"status: fix-submitted"
] | magnunor | 0 |
modin-project/modin | pandas | 7,070 | Add `modin.pandas.arrays` module | closed | 2024-03-12T15:24:53Z | 2024-03-13T12:15:19Z | https://github.com/modin-project/modin/issues/7070 | [
"new feature/request 💬",
"pandas concordance 🐼"
] | anmyachev | 0 | |
holoviz/colorcet | plotly | 103 | Over 300 test failures for `=dev-python/colorcet-2.0.6` | This is the full [build log](https://ppb.chymera.eu/9bd1dd.log). Most issues seem to be `AssertionError`s.
#### ALL software version info
dev-python/param 1.12.3
dev-python/pyct 0.4.8
dev-lang/python 3.10.9
| closed | 2023-01-15T16:41:51Z | 2023-01-19T23:55:00Z | https://github.com/holoviz/colorcet/issues/103 | [] | TheChymera | 3 |
hbldh/bleak | asyncio | 1,690 | TimeoutError on connect in Windows | * bleak version: 0.22.3
* Python version: 3.13.0
* Operating System: Windows 11 Pro (10.0.26100 N/A Build 26100)
### Description
Hello, I am trying to connect to a BLE peripheral (VTM 20F pulse oximeter) using bleak. My script works on MacOS (15.0 Build 24A335), but on Windows I get a TimeoutError. Note that I am able to connect to _other_ BLE peripherals on my Windows setup, but this particular device is not cooperating. I have included debug output for both Windows (not working) and MacOS (working). Let me know if any other information could be helpful.
### What I Did
Minimal working example:
```
import asyncio
from bleak import BleakClient, BleakScanner
async def main():
MYPERIPHERAL = "VTM 20F"
print(f"Scanning for {MYPERIPHERAL}...")
device = await BleakScanner.find_device_by_name(MYPERIPHERAL)
if device is None:
print(f"Could not find {MYPERIPHERAL}")
return
print(f"Found device: {device.name} ({device.address})")
async with BleakClient(device, timeout=30) as client:
print(f"Connected to {client}!")
await asyncio.sleep(3)
print("Disconnected!")
if __name__ == "__main__":
asyncio.run(main())
```
Output on Windows:
```
Traceback (most recent call last):
File "C:\mybleakapp\.venv\Lib\site-packages\bleak\backends\winrt\client.py", line 487, in connect
self.services = await self.get_services(
^^^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
)
^
File "C:\mybleakapp\.venv\Lib\site-packages\bleak\backends\winrt\client.py", line 720, in get_services
await FutureLike(self._requester.get_gatt_services_async(*srv_args)),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\mybleakapp\.venv\Lib\site-packages\bleak\backends\winrt\client.py", line 1129, in __await__
yield self # This tells Task to wait for completion.
^^^^^^^^^^
File "C:\mybleakapp\.venv\Lib\site-packages\bleak\backends\winrt\client.py", line 1072, in result
raise asyncio.CancelledError
asyncio.exceptions.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\mybleakapp\mybleakapp.py", line 30, in <module>
asyncio.run(main())
~~~~~~~~~~~^^^^^^^^
File "C:\Users\myuser\.pyenv\pyenv-win\versions\3.13.0\Lib\asyncio\runners.py", line 194, in run
return runner.run(main)
~~~~~~~~~~^^^^^^
File "C:\Users\myuser\.pyenv\pyenv-win\versions\3.13.0\Lib\asyncio\runners.py", line 118, in run
return self._loop.run_until_complete(task)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "C:\Users\myuser\.pyenv\pyenv-win\versions\3.13.0\Lib\asyncio\base_events.py", line 721, in run_until_complete
return future.result()
~~~~~~~~~~~~~^^
File "c:\mybleakapp\mybleakapp.py", line 22, in main
async with BleakClient(device) as client:
~~~~~~~~~~~^^^^^^^^
File "C:\mybleakapp\.venv\Lib\site-packages\bleak\__init__.py", line 570, in __aenter__
await self.connect()
File "C:\mybleakapp\.venv\Lib\site-packages\bleak\__init__.py", line 615, in connect
return await self._backend.connect(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\mybleakapp\.venv\Lib\site-packages\bleak\backends\winrt\client.py", line 443, in connect
async with async_timeout(timeout):
~~~~~~~~~~~~~^^^^^^^^^
File "C:\Users\myuser\.pyenv\pyenv-win\versions\3.13.0\Lib\asyncio\timeouts.py", line 116, in __aexit__
raise TimeoutError from exc_val
TimeoutError
```
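For completeness, a simple retry around the connect can help rule out transient timeouts; a generic sketch (`connect` below is a stand-in for whatever coroutine opens the `BleakClient`, not a bleak API):

```python
import asyncio

async def connect_with_retry(connect, attempts: int = 3, delay: float = 2.0):
    """Retry an async connect callable with a fixed delay between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return await connect()
        except (TimeoutError, asyncio.TimeoutError):
            if attempt == attempts:
                raise
            await asyncio.sleep(delay)
```

In my script this would wrap the client connection, presumably with `delay` long enough for Windows to tear down the previous GATT session (an assumption based on the session CLOSED/ACTIVE pairs in the debug log).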
### Logs
Debug log for Windows:
```
Scanning for VTM 20F...
2024-11-21 18:42:36,708 bleak.backends.winrt.scanner MainThread DEBUG: Received 21:09:03:1E:A3:1B: .
2024-11-21 18:42:36,837 bleak.backends.winrt.scanner MainThread DEBUG: Received BE:FF:20:00:09:A5: ELK-BLEDOM .
2024-11-21 18:42:36,945 bleak.backends.winrt.scanner MainThread DEBUG: Received 41:2D:DB:B5:28:96: .
2024-11-21 18:42:37,079 bleak.backends.winrt.scanner MainThread DEBUG: Received BE:FF:20:00:09:A5: ELK-BLEDOM .
2024-11-21 18:42:37,085 bleak.backends.winrt.scanner MainThread DEBUG: Received C0:4E:30:F2:89:A6: .
2024-11-21 18:42:37,088 bleak.backends.winrt.scanner MainThread DEBUG: Received 69:7C:81:0A:C1:FA: .
2024-11-21 18:42:37,090 bleak.backends.winrt.scanner MainThread DEBUG: Received 69:7C:81:0A:C1:FA: .
2024-11-21 18:42:37,200 bleak.backends.winrt.scanner MainThread DEBUG: Received E0:5A:1B:E1:FB:8E: .
2024-11-21 18:42:37,206 bleak.backends.winrt.scanner MainThread DEBUG: Received E0:5A:1B:E1:FB:8E: 110092_FB8C.
2024-11-21 18:42:37,210 bleak.backends.winrt.scanner MainThread DEBUG: Received 41:2D:DB:B5:28:96: .
2024-11-21 18:42:37,213 bleak.backends.winrt.scanner MainThread DEBUG: Received 41:2D:DB:B5:28:96: .
2024-11-21 18:42:37,217 bleak.backends.winrt.scanner MainThread DEBUG: Received 21:09:03:1E:A3:1B: .
2024-11-21 18:42:37,220 bleak.backends.winrt.scanner MainThread DEBUG: Received 21:09:03:1E:A3:1B: VTM 20F.
2024-11-21 18:42:37,222 bleak.backends.winrt.scanner MainThread DEBUG: 6 devices found. Watcher status: <BluetoothLEAdvertisementWatcherStatus.STOPPED: 3>.
Found device: VTM 20F (21:09:03:1E:A3:1B)
2024-11-21 18:42:37,233 bleak.backends.winrt.client MainThread DEBUG: Connecting to BLE device @ 21:09:03:1E:A3:1B
2024-11-21 18:42:37,272 bleak.backends.winrt.client MainThread DEBUG: getting services (service_cache_mode=None, cache_mode=None)...
2024-11-21 18:42:37,400 bleak.backends.winrt.client Dummy-1 DEBUG: session_status_changed_event_handler: id: BluetoothLE#BluetoothLE04:ed:33:69:45:f0-21:09:03:1e:a3:1b, error: <BluetoothError.SUCCESS: 0>, status: <GattSessionStatus.ACTIVE: 1>
2024-11-21 18:42:37,439 bleak.backends.winrt.client Dummy-2 DEBUG: session_status_changed_event_handler: id: BluetoothLE#BluetoothLE04:ed:33:69:45:f0-21:09:03:1e:a3:1b, error: <BluetoothError.SUCCESS: 0>, status: <GattSessionStatus.CLOSED: 0>
2024-11-21 18:42:37,632 bleak.backends.winrt.client Dummy-3 DEBUG: session_status_changed_event_handler: id: BluetoothLE#BluetoothLE04:ed:33:69:45:f0-21:09:03:1e:a3:1b, error: <BluetoothError.SUCCESS: 0>, status: <GattSessionStatus.ACTIVE: 1>
2024-11-21 18:42:37,648 bleak.backends.winrt.client Dummy-4 DEBUG: max_pdu_size_changed_handler: 131
2024-11-21 18:42:37,724 bleak.backends.winrt.client Dummy-5 DEBUG: 21:09:03:1E:A3:1B: services changed
2024-11-21 18:42:47,280 bleak.backends.winrt.client MainThread DEBUG: closing requester
2024-11-21 18:42:47,283 bleak.backends.winrt.client MainThread DEBUG: closing session
```
Debug log for MacOS:
```
Scanning for VTM 20F...
2024-11-21 18:45:46,262 bleak.backends.corebluetooth.CentralManagerDelegate Dummy-1 DEBUG: centralManagerDidUpdateState_
2024-11-21 18:45:46,262 bleak.backends.corebluetooth.CentralManagerDelegate Dummy-1 DEBUG: Bluetooth powered on
2024-11-21 18:45:46,263 bleak.backends.corebluetooth.CentralManagerDelegate MainThread DEBUG: 'isScanning' changed
2024-11-21 18:45:46,307 bleak.backends.corebluetooth.CentralManagerDelegate Dummy-2 DEBUG: centralManager_didDiscoverPeripheral_advertisementData_RSSI_
2024-11-21 18:45:46,308 bleak.backends.corebluetooth.CentralManagerDelegate MainThread DEBUG: Discovered device F27156C9-567F-90F1-4BA2-0DD907CD7593: ELK-BLEDOM @ RSSI: -75 (kCBAdvData <nsdict_keys(['kCBAdvDataLocalName', 'kCBAdvDataRxPrimaryPHY', 'kCBAdvDataRxSecondaryPHY', 'kCBAdvDataTimestamp', 'kCBAdvDataIsConnectable'])>) and Central: <CBCentralManager: 0x12a827850>
2024-11-21 18:45:46,322 bleak.backends.corebluetooth.CentralManagerDelegate Dummy-3 DEBUG: centralManager_didDiscoverPeripheral_advertisementData_RSSI_
2024-11-21 18:45:46,323 bleak.backends.corebluetooth.CentralManagerDelegate Dummy-4 DEBUG: centralManager_didDiscoverPeripheral_advertisementData_RSSI_
2024-11-21 18:45:46,323 bleak.backends.corebluetooth.CentralManagerDelegate Dummy-5 DEBUG: centralManager_didDiscoverPeripheral_advertisementData_RSSI_
2024-11-21 18:45:46,323 bleak.backends.corebluetooth.CentralManagerDelegate MainThread DEBUG: Discovered device 4C3C2BA1-628A-DA70-6799-7281E654B78A: VTM 20F @ RSSI: -49 (kCBAdvData <nsdict_keys(['kCBAdvDataIsConnectable', 'kCBAdvDataManufacturerData', 'kCBAdvDataServiceUUIDs', 'kCBAdvDataRxSecondaryPHY', 'kCBAdvDataTimestamp', 'kCBAdvDataRxPrimaryPHY'])>) and Central: <CBCentralManager: 0x12a827850>
2024-11-21 18:45:46,324 bleak.backends.corebluetooth.CentralManagerDelegate MainThread DEBUG: Discovered device 4C3C2BA1-628A-DA70-6799-7281E654B78A: VTM 20F @ RSSI: -49 (kCBAdvData <nsdict_keys(['kCBAdvDataManufacturerData', 'kCBAdvDataTimestamp', 'kCBAdvDataIsConnectable', 'kCBAdvDataRxPrimaryPHY', 'kCBAdvDataRxSecondaryPHY', 'kCBAdvDataLocalName', 'kCBAdvDataServiceUUIDs'])>) and Central: <CBCentralManager: 0x12a827850>
2024-11-21 18:45:46,324 bleak.backends.corebluetooth.CentralManagerDelegate MainThread DEBUG: 'isScanning' changed
Found device: VTM 20F (4C3C2BA1-628A-DA70-6799-7281E654B78A)
2024-11-21 18:45:46,333 bleak.backends.corebluetooth.client MainThread DEBUG: CentralManagerDelegate at <CentralManagerDelegate: 0x148e924f0>
2024-11-21 18:45:46,333 bleak.backends.corebluetooth.client MainThread DEBUG: Connecting to BLE device @ 4C3C2BA1-628A-DA70-6799-7281E654B78A
2024-11-21 18:45:46,333 bleak.backends.corebluetooth.CentralManagerDelegate MainThread DEBUG: Discovered device F27156C9-567F-90F1-4BA2-0DD907CD7593: ELK-BLEDOM @ RSSI: -75 (kCBAdvData <nsdict_keys(['kCBAdvDataLocalName', 'kCBAdvDataRxPrimaryPHY', 'kCBAdvDataRxSecondaryPHY', 'kCBAdvDataTimestamp', 'kCBAdvDataIsConnectable'])>) and Central: <CBCentralManager: 0x12a827850>
2024-11-21 18:45:46,509 bleak.backends.corebluetooth.CentralManagerDelegate Dummy-6 DEBUG: centralManager_didConnectPeripheral_
2024-11-21 18:45:46,510 bleak.backends.corebluetooth.client MainThread DEBUG: Retrieving services...
2024-11-21 18:45:46,598 bleak.backends.corebluetooth.PeripheralDelegate Dummy-7 DEBUG: peripheral_didDiscoverServices_
2024-11-21 18:45:46,600 bleak.backends.corebluetooth.PeripheralDelegate MainThread DEBUG: Services discovered
2024-11-21 18:45:46,600 bleak.backends.corebluetooth.client MainThread DEBUG: Retrieving characteristics for service FFE0
2024-11-21 18:45:46,601 bleak.backends.corebluetooth.PeripheralDelegate Dummy-8 DEBUG: peripheral_didDiscoverCharacteristicsForService_error_
2024-11-21 18:45:46,602 bleak.backends.corebluetooth.PeripheralDelegate MainThread DEBUG: Characteristics discovered
2024-11-21 18:45:46,602 bleak.backends.corebluetooth.client MainThread DEBUG: Retrieving descriptors for characteristic FFE4
2024-11-21 18:45:46,603 bleak.backends.corebluetooth.PeripheralDelegate Dummy-9 DEBUG: peripheral_didDiscoverDescriptorsForCharacteristic_error_
2024-11-21 18:45:46,603 bleak.backends.corebluetooth.PeripheralDelegate MainThread DEBUG: Descriptor discovered 12
2024-11-21 18:45:46,604 bleak.backends.corebluetooth.client MainThread DEBUG: Retrieving descriptors for characteristic FFF2
2024-11-21 18:45:46,604 bleak.backends.corebluetooth.PeripheralDelegate Dummy-10 DEBUG: peripheral_didDiscoverDescriptorsForCharacteristic_error_
2024-11-21 18:45:46,604 bleak.backends.corebluetooth.PeripheralDelegate MainThread DEBUG: Descriptor discovered 16
2024-11-21 18:45:46,605 bleak.backends.corebluetooth.client MainThread DEBUG: Services resolved for BleakClientCoreBluetooth (4C3C2BA1-628A-DA70-6799-7281E654B78A)
Connected to BleakClient, 4C3C2BA1-628A-DA70-6799-7281E654B78A!
2024-11-21 18:45:48,608 bleak.backends.corebluetooth.CentralManagerDelegate Dummy-11 DEBUG: centralManager_didDisconnectPeripheral_error_
2024-11-21 18:45:48,609 bleak.backends.corebluetooth.CentralManagerDelegate MainThread DEBUG: Peripheral Device disconnected!
Disconnected!
``` | open | 2024-11-22T03:17:48Z | 2024-12-19T17:23:12Z | https://github.com/hbldh/bleak/issues/1690 | [
"Backend: WinRT"
] | itsbeenemotional | 0 |
bigscience-workshop/petals | nlp | 222 | Specify minimal requirements to GPU's for contributing | I tried to contribute to the Swarm using an 8gb card and then quickly realized, even when setting the Pytorch fragmeneted split size to 512mb that I could not use this card to contribute to inference. It would be nice to have a section in the readme that specifies this. | closed | 2023-01-18T09:59:34Z | 2023-02-06T21:49:22Z | https://github.com/bigscience-workshop/petals/issues/222 | [] | Joemgu7 | 3 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.