| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
gradio-app/gradio | data-visualization | 10,337 | Regarding support for multiple webcams | - [x] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
I encountered the `image.no_webcam_support` issue when trying out examples provided in the documentation during my recent project using Gradio. After reviewing some previously reported Issues, I found that this problem seems to commonly occur on computers with more than one webcam installed. My phone has both a front and rear camera, which can be detected and opened by `webcamtests.com`. When deploying Gradio within a local network and accessing it from my phone, the `image.no_webcam_support` issue consistently reoccurs.
Ref:
#10143 (This issue has not explained the cause of the problem, but the question it raises is the same as mine.)
#10049 (This issue was resolved by uninstalling OBS to remove the `OBS Virtual Camera`.)
#7223 (This issue is marked as similar to #7021. No resolution measures are mentioned.)
#7021 (This issue mentions that the cause of the problem is also `OBS Virtual Camera`.)
**Describe the solution you'd like**
I would like to use this opportunity to propose a Feature Request for developers to address what appears to be an issue caused by multiple webcams. Thank you!
**Additional context**
The used code:
File `app.py`:
```python
import gradio
import gr_util
demo = gradio.Interface(
    gr_util.flip,
    gradio.Image(sources=["webcam"], streaming=True),
    "image",
    live=True
)
demo.launch(server_name="0.0.0.0")
```
File `gr_util.py`:
```python
import numpy
def flip(im):
    return numpy.flipud(im)
```
Start the app by executing `python app.py`.
The situation on the webpage:
 | closed | 2025-01-12T07:58:07Z | 2025-02-03T19:54:27Z | https://github.com/gradio-app/gradio/issues/10337 | [
"pending clarification"
] | Anonyame | 8 |
roboflow/supervision | machine-learning | 1,383 | [LabelAnnotator, RichLabelAnnotator, VertexLabelAnnotator] - add smart label positioning | ### Description
Overlapping labels are a common issue, especially in crowded scenes. Let's add an optional smart label positioning feature to the [`LabelAnnotator`](https://supervision.roboflow.com/develop/detection/annotators/#supervision.annotators.core.LabelAnnotator), [`RichLabelAnnotator`](https://supervision.roboflow.com/develop/detection/annotators/#supervision.annotators.core.RichLabelAnnotator), and [`VertexLabelAnnotator`](https://supervision.roboflow.com/develop/keypoint/annotators/#supervision.keypoint.annotators.EdgeAnnotator.annotate) that:
- Ensures that the label box does not extend beyond the image.
- Automatically adjusts the positions of overlapping labels so they no longer overlap.

The algorithm boils down to locating overlapping label boxes and then calculating the direction of vectors to push the labels apart. This process may require an iterative approach, as moving label boxes can lead to new overlaps with other label boxes.

Importantly, the bounding box remains in the same place, only the label boxes are moved. It would be great if, after the shift, the label and its original position were connected by a line.
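The iterative push-apart described above can be sketched in a few lines. This is a framework-free illustration only; the `(x1, y1, x2, y2)` box format, step size, and iteration cap are assumptions, not supervision's API:

```python
def overlaps(a, b):
    """Axis-aligned overlap test for (x1, y1, x2, y2) boxes."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]


def push_apart(boxes, step=2, max_iter=100):
    """Iteratively nudge overlapping label boxes away from each other."""
    boxes = [list(b) for b in boxes]
    for _ in range(max_iter):
        moved = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if overlaps(boxes[i], boxes[j]):
                    # Push box j along the center-to-center direction.
                    dx = (boxes[j][0] + boxes[j][2]) - (boxes[i][0] + boxes[i][2])
                    dy = (boxes[j][1] + boxes[j][3]) - (boxes[i][1] + boxes[i][3])
                    sx = step if dx >= 0 else -step
                    sy = step if dy >= 0 else -step
                    boxes[j] = [boxes[j][0] + sx, boxes[j][1] + sy,
                                boxes[j][2] + sx, boxes[j][3] + sy]
                    moved = True
        if not moved:  # stable layout reached
            break
    return boxes
```

A real implementation would additionally clamp the boxes to the image bounds and draw the connector line back to each label's original anchor.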
### Examples of incorrect behavior


### Examples of expected behavior
https://github.com/user-attachments/assets/acc70301-7459-47c5-882c-720cd84b3ae0
Here's the [Google Colab](https://colab.research.google.com/drive/1VQ_uGjfYXPMeVe8NvpIOSRN4qKMMxPju?usp=sharing) I used to experiment with this feature.
### Additional
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will speed up the review process. The reviewer must test each change. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! 🙏🏻 | open | 2024-07-19T12:25:58Z | 2024-11-22T18:32:31Z | https://github.com/roboflow/supervision/issues/1383 | [
"enhancement",
"api:annotator",
"hacktoberfest"
] | SkalskiP | 20 |
facebookresearch/fairseq | pytorch | 4,812 | Does TokenBlockDataset have a max blocks limit? | ## 🐛 Bug
Hi, I found that when using TokenBlockDataset with a fixed-length dataset, if tokens_per_sample is set too small and the number of blocks therefore becomes huge, the program fails outright. How can I solve this problem? | open | 2022-10-19T08:02:44Z | 2022-10-19T08:02:44Z | https://github.com/facebookresearch/fairseq/issues/4812 | [
"bug",
"needs triage"
] | zhangmiaosen2000 | 0 |
jschneier/django-storages | django | 1,099 | Adding a Custom Storage backend for Supabase | Hey,
Thanks for creating this project! I am currently maintaining (as part of a team) the [python client library](https://github.com/supabase-community/supabase-py) for [Supabase](https://supabase.com/), which is (loosely speaking) an open-source Firebase. One of the components of Supabase is Supabase Storage, an S3-like file storage. To allow users to use the Storage system easily, we were hoping to add a custom backend for Django in this repository.
We are working on the custom backend in a separate repo but I was hoping that we could file a PR to this repo to integrate it as a custom backend once it is done. Given the widespread adoption of `django-storages` I think that Supabase integration would make it easier for users to use Supabase Storage and/or choose Supabase Storage in tandem with other storage providers.
Would love to hear if the `django-storages` team has any thoughts/concerns. Lmk!
Thanks :)
Joel
| closed | 2021-12-25T11:00:14Z | 2024-04-25T03:38:20Z | https://github.com/jschneier/django-storages/issues/1099 | [] | J0 | 10 |
biolab/orange3 | data-visualization | 6,519 | New version of cython was released, installation does not work with it | On 17 Jul Cython 3.0 was released. Orange can not be installed from source with it.
In long term, we should fix it. For now, I changed the installer to require older Cython (#6518). | closed | 2023-07-20T12:09:12Z | 2023-07-24T12:55:38Z | https://github.com/biolab/orange3/issues/6519 | [
"bug"
] | markotoplak | 0 |
sktime/pytorch-forecasting | pandas | 1,279 | Time_idx is global or relative? | Suppose we have a dataset with 1000 different "group_ids" but each of them has a different start date (the "oldest" one is for example 01/2001 and the "newest" is 01/2020). The prediction length is 60.
The question is: does `time_idx` refer to the _global_ initial date (so time_idx = 0 only for the examples whose start date is 01/2001), or to the start date of each specific series (so the example starting in 01/2020 also has time_idx = 0)?
Thank you.
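Either convention can be derived from the raw timestamps. A toy, pandas-free sketch contrasting the two (the data and variable names are made up for illustration, and this does not assert which convention the library itself uses):

```python
# Two groups, each with its own start month.
rows = [("A", "2001-01"), ("A", "2001-02"), ("B", "2020-01"), ("B", "2020-02")]

# Global index: 0 at the oldest date across ALL series.
months = sorted({m for _, m in rows})
global_idx = [months.index(m) for _, m in rows]

# Relative index: 0 at each series' own first observation.
seen = {}
relative_idx = []
for gid, _ in rows:
    relative_idx.append(seen.get(gid, 0))
    seen[gid] = seen.get(gid, 0) + 1

print(global_idx)    # [0, 1, 2, 3]
print(relative_idx)  # [0, 1, 0, 1]
```

The distinction matters exactly in the situation described: with a global index, group "B" would start at a large offset instead of 0.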
| open | 2023-03-27T10:36:28Z | 2023-03-28T04:26:41Z | https://github.com/sktime/pytorch-forecasting/issues/1279 | [] | GianNuzzarello | 1 |
apify/crawlee-python | web-scraping | 781 | Flaky `test_final_statistics` in `BasicCrawler` tests on Windows | ```
FAILED tests/unit/basic_crawler/test_basic_crawler.py::test_final_statistics - assert datetime.timedelta(0) > datetime.timedelta(0)
+ where datetime.timedelta(0) = FinalStatistics(requests_finished=45, requests_failed=5, retry_histogram=[25, 16, 9], request_avg_failed_duration=datetime.timedelta(0), request_avg_finished_duration=datetime.timedelta(microseconds=13321), requests_finished_per_minute=3021, requests_failed_per_minute=335, request_total_duration=datetime.timedelta(microseconds=599447), requests_total=50, crawler_runtime=datetime.timedelta(microseconds=893742)).request_avg_failed_duration
+ and datetime.timedelta(0) = timedelta()
```
This is probably caused by the low precision of time measurements on Windows. | closed | 2024-12-04T14:32:39Z | 2024-12-10T20:49:07Z | https://github.com/apify/crawlee-python/issues/781 | [
"bug",
"t-tooling",
"debt"
] | janbuchar | 0 |
pennersr/django-allauth | django | 3,183 | Login Provider Google | How do I log in without opening a new page, like this?

This is my login page:

I want to go directly to the Google login page:

Can this be done?
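One commonly used approach is to point the login button straight at the provider's login URL via allauth's `provider_login_url` template tag. The snippet below is a sketch; double-check against your allauth version's documentation:

```html
{% load socialaccount %}
<a href="{% provider_login_url 'google' %}">Sign in with Google</a>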
| closed | 2022-11-11T02:50:31Z | 2022-12-05T10:09:54Z | https://github.com/pennersr/django-allauth/issues/3183 | [] | muhamadanjar | 3 |
ned2/slapdash | dash | 33 | run-slapdashed_app-prod | Hi @ned2, I appreciate this issue might be related to #27, but the error is a bit different and I'm not sure it's related. Apologies if this is an error related to my system (Windows 10 Pro, Python 3.8.3) and not slapdash.
I ran almost exactly the same commands as chubukov:
```
python -m venv slap_env
slap_env\Scripts\activate
python -m pip install cookiecutter
cookiecutter https://github.com/ned2/slapdash
python -m pip install -e slapdashed_app/
```
I can run-slapdashed_app-dev fine but if I try to run-slapdashed_app-prod I get this error, which seems odd:
```'run-slapdashed_app-prod' is not recognized as an internal or external command, operable program or batch file.```
Adding the path to my environment variables, which is what I'd normally do, doesn't seem like the right solution to me. I've read through your readme file but I feel like I'm missing something. | closed | 2020-10-07T16:23:18Z | 2020-12-21T10:33:47Z | https://github.com/ned2/slapdash/issues/33 | [] | jeremyfox36 | 2 |
arogozhnikov/einops | numpy | 41 | CI failing for mxnet zeros like | closed | 2020-05-05T19:55:00Z | 2020-05-10T06:23:42Z | https://github.com/arogozhnikov/einops/issues/41 | [] | arogozhnikov | 2 | |
OpenBB-finance/OpenBB | python | 6,857 | Unlocking Finance for All: Spreading the Word with OpenBB 🚀 | ### What side quest or challenge are you solving?
I'm tackling the No-Code Side Quest for OpenBB Finance! My challenge is to create engaging Twitter threads to spread the word about this amazing AI-powered financial research tool, helping to grow awareness and build a community around #OpenBB while making finance accessible for all. 🌍💡
### Points
150-500 Points
### Description
I contributed to the No-Code Side Quest for OpenBB Finance by crafting engaging Twitter content to raise awareness about the platform. My task involved creating tweets and threads that highlight OpenBB’s AI-powered research and analytics tools, promoting its features, and encouraging community involvement. This helps make financial tools more accessible and educates users about the power of open-source finance.
### Provide proof that you've completed the task
Here's the link to the tweet(s) showcasing my contribution: https://x.com/snigdha_1234567/status/1849315580719063113
https://x.com/snigdha_1234567/status/1849315583067873290 | closed | 2024-10-24T05:19:09Z | 2024-10-24T05:37:39Z | https://github.com/OpenBB-finance/OpenBB/issues/6857 | [] | SNIDGHA | 0 |
home-assistant/core | asyncio | 140,521 | LaCrosse Rain Sensor not updating | ### The problem
For the last several months, LaCrosse View has not been showing the "Rain" total. If I expand to the graph, the data is there, but it does not show on the total. When I go to the LaCrosse website, the data shows correctly with the total rain.

### What version of Home Assistant Core has the issue?
core-2025.3.2
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
LaCrosse View.
### Link to integration documentation on our website
_No response_
### Diagnostics information
[config_entry-lacrosse_view-01JM177RSQ7CGBWTXBK83T9402.json](https://github.com/user-attachments/files/19230331/config_entry-lacrosse_view-01JM177RSQ7CGBWTXBK83T9402.json)
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-13T14:18:24Z | 2025-03-14T14:02:21Z | https://github.com/home-assistant/core/issues/140521 | [
"integration: lacrosse_view"
] | maryandmike | 2 |
SciTools/cartopy | matplotlib | 1,979 | Instantiating from class cartopy.mpl.geoaxes.GeoAxes(*args, **kwargs) | ### Description
Using
`vAxes = cartopy.mpl.geoaxes.GeoAxes(projection=ccrs.Mercator())`
throws `KeyError`:
```
File "/usr/lib64/python3.10/site-packages/cartopy/mpl/geoaxes.py", line 410, in __init__
self.projection = kwargs.pop('map_projection')
KeyError: 'map_projection'
```
Note that the following _does_ work:
```
vAxes = matplotlib.pyplot.axes(projection=ccrs.Mercator())
```
Although this throws a warning `QSocketNotifier: Can only be used with threads started with QThread` - which is a different issue?
#### Code to reproduce
```
import cartopy.crs as ccrs
import cartopy.mpl.geoaxes as cga
import cartopy.feature as cfeature
vAxes = cga.GeoAxes(projection=ccrs.Mercator())
```
#### Traceback
```
File "/usr/lib64/python3.10/site-packages/cartopy/mpl/geoaxes.py", line 410, in __init__
self.projection = kwargs.pop('map_projection')
KeyError: 'map_projection'
```
<details>
<summary>Full environment definition</summary>
<!-- fill in the following information as appropriate -->
### Operating system
Fedora 35
### Cartopy version
0.20.1
</details>
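Based on the traceback, this cartopy version pops `map_projection` rather than `projection` from the keyword arguments. A toy stand-in (not the real `GeoAxes`, which takes more arguments) makes the naming mismatch visible:

```python
class GeoAxesLike:
    """Simplified stand-in for cartopy 0.20's GeoAxes.__init__
    (assumption: reduced to the single line shown in the traceback)."""

    def __init__(self, *args, **kwargs):
        # The traceback shows this exact pop, so 'projection=' raises KeyError.
        self.projection = kwargs.pop("map_projection")


try:
    GeoAxesLike(projection="mercator")       # mirrors the reported failure
except KeyError as err:
    print("KeyError:", err)                  # KeyError: 'map_projection'

ax = GeoAxesLike(map_projection="mercator")  # the keyword this version expects
assert ax.projection == "mercator"
```

This is presumably also why `matplotlib.pyplot.axes(projection=...)` works: matplotlib's projection machinery, not the caller, supplies the keyword the class expects.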
| closed | 2022-01-03T21:14:49Z | 2022-08-23T19:05:21Z | https://github.com/SciTools/cartopy/issues/1979 | [] | hklaufus | 2 |
voxel51/fiftyone | data-science | 5,454 | ServiceListenTimeout | ### Describe the problem
I am using fiftyone to download open-images-v7 to fine-tune a YOLO model. FiftyOne installs/imports fine, but when I download and load the images I get the error `fiftyone.core.service.ServiceListenTimeout: fiftyone.core.service.DatabaseService failed to bind to port`. NOTE: I'm using the package manager UV: https://astral.sh/blog/uv. I was using conda (and if I recall it was working fine with conda) but my company has banned it.
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
uv self update
uv venv --python 3.11
.venv/Scripts/activate
uv pip install -r requirements.txt: [requirements.txt](https://github.com/user-attachments/files/18623221/requirements.txt)
uv pip list: [uvpiplist.txt](https://github.com/user-attachments/files/18623215/uvpiplist.txt)
NOTE: this error happens after I run the initial download, which also fails after some time but does download some images.
### Code to reproduce issue
[download.txt](https://github.com/user-attachments/files/18623196/download.txt)
### System information
- **OS Platform and Distribution** (e.g., Linux Ubuntu 22.04): Microsoft Windows 11 Enterprise Version 10.0.22631 Build 22631
- **Python version** (`python --version`): 3.11.11
- **FiftyOne version** (`fiftyone --version`): FiftyOne v1.3.0, Voxel51, Inc.
- **FiftyOne installed from** (pip or source): pip, but i used UV.
### Other info/logs
[LOGS.txt](https://github.com/user-attachments/files/18622654/LOGS.txt)
[mongo.log](https://github.com/user-attachments/files/18622591/mongo.log)
I am opening this issue because I have spent hours googling and using GenAI to help troubleshoot, but I can't find a resolution.
### Willingness to contribute
The FiftyOne Community encourages bug fix contributions. Would you or another
member of your organization be willing to contribute a fix for this bug to the
FiftyOne codebase?
- [ ] Yes. I can contribute a fix for this bug independently
- [ ] Yes. I would be willing to contribute a fix for this bug with guidance
from the FiftyOne community
- [ ] No. I cannot contribute a bug fix at this time
| open | 2025-01-31T18:17:21Z | 2025-02-12T15:24:23Z | https://github.com/voxel51/fiftyone/issues/5454 | [
"bug"
] | scottschmidl | 8 |
ageitgey/face_recognition | machine-learning | 708 | face_recognition errors ! | * face_recognition version: (1.2.3)
* Python version: ( 3.7 )
cmake ( 2.8.3)
opencv ( 3.4.4 )
dlib ( 19.16.0 )
conda ( 4.5.11 )
numpy (1.15.1)
scipy (1.1.0)
pillow (5.2.0)
face recognition models (0.3.0)
scikit-learn (0.19.2)
scikit-image (0.14.0)
* Operating System: ubuntu ( linux ) using virtualbox
Hello everyone, thanks for the amazing app! I'm a new student of machine learning, and this is the first machine learning app I have installed on my own; I ran into many errors:
I'm not able to run the app successfully and I don't know where the mistake is!
1 - First, when I try `python setup.py install`, it installs until it stops with this error:
```
File "/home/rawan/anaconda3/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/rawan/anaconda3/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/tmp/easy_install-g1p1izyy/dlib-19.16.0/setup.py", line 133, in run
File "/tmp/easy_install-g1p1izyy/dlib-19.16.0/setup.py", line 173, in build_extension
File "/home/rawan/anaconda3/lib/python3.7/subprocess.py", line 328, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'Release', '--', '-j1']' returned non-zero exit status `2`
```
2 - When I run `import face_recognition` I get this error:
```
rawan@rawan:~$ python
Python 3.7.0 (default, Jun 28 2018, 13:15:42)
[GCC 7.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import face_recognition
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/rawan/anaconda3/lib/python3.7/site-packages/face_recognition-1.2.3-py3.7.egg/face_recognition/__init__.py", line 7, in <module>
from .api import load_image_file, face_locations, batch_face_locations, face_landmarks, face_encodings, compare_faces, face_distance
File "/home/rawan/anaconda3/lib/python3.7/site-packages/face_recognition-1.2.3-py3.7.egg/face_recognition/api.py", line 14, in <module>
face_detector = dlib.get_frontal_face_detector()
AttributeError: module 'dlib' has no attribute 'get_frontal_face_detector'
>>>
```
And when I try `pip install --user face_recognition` it gives me:
```
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'Release', '--', '-j1']' returned non-zero exit status 2.
Failed building wheel for dlib
Running setup.py clean for dlib
Failed to build dlib
Installing collected packages: dlib
Running setup.py install for dlib ... /
```
And at the end:
```
Command "/home/rawan/anaconda3/bin/python -u -c "import setuptools, tokenize;file='/tmp/pip-install-d0ozrmdb/dlib/setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record /tmp/pip-record-dsq8ol9r/install-record.txt --single-version-externally-managed --compile --user --prefix=" failed with error code 1 in /tmp/pip-install-d0ozrmdb/dlib/
```
And when I try `import face_recognition` I get this:
```import-im6.q16: not authorized face_recognition @ error/constitute.c/WriteImage/1037.```
I tried a lot of solutions from around the internet, but the same problems keep coming back. I would appreciate any help from you. Thank you very much. | closed | 2018-12-23T23:49:55Z | 2024-05-15T13:28:39Z | https://github.com/ageitgey/face_recognition/issues/708 | [] | rshgithub | 0 |
dunossauro/fastapi-do-zero | pydantic | 120 | Change tests that assert on constants to use HTTPStatus | Example:
```python
assert response.status_code == 200
```
With:
```python
assert response.status_code == HTTPStatus.OK
```
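For context on why the swap is safe: `http.HTTPStatus` members are `IntEnum` values, so they compare equal to the bare integers while also carrying a readable name (standard library behavior):

```python
from http import HTTPStatus

# Members compare equal to their numeric codes.
assert HTTPStatus.OK == 200
assert HTTPStatus.NOT_FOUND == 404
assert HTTPStatus.CREATED == 201

# They also expose a name and reason phrase.
print(HTTPStatus.OK.name, HTTPStatus.OK.phrase)  # OK OK
```

So existing assertions keep passing after the mechanical replacement, while the tests become self-documenting.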
This should affect all lessons and all source files. This change is necessary in order to use Ruff with the pytest linter.
Reference: https://docs.python.org/3/library/http.html#http.HTTPStatus | closed | 2024-04-01T18:43:28Z | 2024-04-17T08:40:43Z | https://github.com/dunossauro/fastapi-do-zero/issues/120 | [] | dunossauro | 0 |
ufoym/deepo | jupyter | 17 | matplotlib.pyplot error when importing: ImportError: No module named '_tkinter', please install the python3-tk package | It looks like there is some package missing (python3-tk?), which prevents normal usage of `matplotlib.pyplot`. Is there any workaround for this issue? Thanks!
```
Python 3.6.3 (default, Oct 6 2017, 08:44:35)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import matplotlib.pyplot
Traceback (most recent call last):
File "/usr/lib/python3.6/tkinter/__init__.py", line 37, in <module>
import _tkinter
ModuleNotFoundError: No module named '_tkinter'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/matplotlib/pyplot.py", line 116, in <module>
_backend_mod, new_figure_manager, draw_if_interactive, _show = pylab_setup()
File "/usr/local/lib/python3.6/dist-packages/matplotlib/backends/__init__.py", line 60, in pylab_setup
[backend_name], 0)
File "/usr/local/lib/python3.6/dist-packages/matplotlib/backends/backend_tkagg.py", line 6, in <module>
from six.moves import tkinter as Tk
File "/usr/local/lib/python3.6/dist-packages/six.py", line 92, in __get__
result = self._resolve()
File "/usr/local/lib/python3.6/dist-packages/six.py", line 115, in _resolve
return _import_module(self.mod)
File "/usr/local/lib/python3.6/dist-packages/six.py", line 82, in _import_module
__import__(name)
File "/usr/lib/python3.6/tkinter/__init__.py", line 39, in <module>
raise ImportError(str(msg) + ', please install the python3-tk package')
ImportError: No module named '_tkinter', please install the python3-tk package
``` | closed | 2018-01-07T12:28:57Z | 2018-09-04T06:55:29Z | https://github.com/ufoym/deepo/issues/17 | [] | lucasrodes | 5 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 411 | [BUG] got error while parsing tiktok video | ***Platform where the error occurred?***
As the title mentions, the problem occurs while parsing a TikTok video. I have replaced the cookie in the config file: Douyin_TikTok_Download_API/crawlers/tiktok/web/config.yaml
Such as: Douyin/TikTok
/api/hybrid/video_data?url=https://www.tiktok.com/t/XXXXX/
***The endpoint where the error occurred?***
Such as: API-V1/API-V2/Web APP
***Submitted input value?***
Such as: video link
***Have you tried again?***
Such as: Yes, the error still exists after X time after the error occurred.
***Have you checked the readme or interface documentation for this project?***
Such as: Yes, and it is very sure that the problem is caused by the program.
| closed | 2024-05-27T01:52:34Z | 2024-06-14T08:23:14Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/411 | [
"BUG",
"enhancement"
] | jackleibest | 1 |
nolar/kopf | asyncio | 392 | [PR] Treat client timeouts during watches similarly to other http errors | > <a href="https://github.com/jscaltreto"><img align="left" height="50" src="https://avatars2.githubusercontent.com/u/1229755?v=4"></a> A pull request by [jscaltreto](https://github.com/jscaltreto) at _2020-08-14 18:18:34+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/392
>
## What do these changes do?
`asyncio.TimeoutError` is caught and ignored like other http errors
## Description
Currently, if `connect_timeout` or `client_timeout` is reached, the error is not caught and is effectively fatal; the operator needs to be restarted. However, some other http errors are ignored and the watch is resumed by `infinite_watch()`. I believe this may have been an oversight as I can see no reason why a timeout should be considered a fatal exception when other http errors are not.
## Issues/PRs
#391
## Type of changes
- Bug fix (non-breaking change which fixes an issue)
## Checklist
- [x] The code addresses only the mentioned problem, and this problem only
- [x] I think the code is well written
- [ ] Unit tests for the changes exist
- There were no existing unit tests for the silent error handling as noted in the comment to infinite_watch()
- [ ] Documentation reflects the changes
- [ ] If you provide code modification, please add yourself to `CONTRIBUTORS.txt`
---
> <a href="https://github.com/jscaltreto"><img align="left" height="30" src="https://avatars2.githubusercontent.com/u/1229755?v=4"></a> Commented by [jscaltreto](https://github.com/jscaltreto) at _2020-08-20 11:36:27+00:00_
>
moving to nolar/kopf | closed | 2020-08-18T20:05:27Z | 2020-09-09T21:18:43Z | https://github.com/nolar/kopf/issues/392 | [
"archive"
] | kopf-archiver[bot] | 4 |
KaiyangZhou/deep-person-reid | computer-vision | 481 | Frustrating that in the model zoo, IBN must be tested with cosine distance while AIN uses Euclidean distance; that's ridiculous | open | 2021-12-29T09:01:44Z | 2024-08-08T11:39:05Z | https://github.com/KaiyangZhou/deep-person-reid/issues/481 | [] | ZJX-CV | 1 | |
aio-libs-abandoned/aioredis-py | asyncio | 865 | 'ERR Protocol error: invalid multibulk length' when deleting | Hi, I'm the maintainer of aiocache and have an open issue related to the maximum number of keys that can be deleted: https://github.com/aio-libs/aiocache/issues/525
This is something that could be handled in aiocache's code, but I was wondering if it would make sense to fix it on the aioredis side, as it would be worthwhile to hide this protocol-specific detail from users and let aioredis handle it (i.e. if there are more keys than the maximum bulk length, delete them in batches). @AIGeneratedUsername added a snippet on how it could be fixed in the issue above. | open | 2020-12-09T09:30:08Z | 2021-03-19T00:11:15Z | https://github.com/aio-libs-abandoned/aioredis-py/issues/865 | [
"need investigation"
] | argaen | 5 |
keras-team/keras | machine-learning | 20,818 | I couldn't find pool_function in the library | Hi, I am studying pooling layers and I want to create a custom pooling layer different from the available ones (avg, max, ...). First I reviewed the Keras developer guides for making new layers, then I found tensorflow/tensorflow/python/keras/layers/**pooling.py** on [github](https://github.com/tensorflow/tensorflow/blob/exported_pr_719646957/tensorflow/python/keras/layers/pooling.py) and came across the following code at line 70.
```
outputs = self.pool_function(
    inputs,
    self.pool_size + (1,),
    strides=self.strides + (1,),
    padding=self.padding,
    data_format=self.data_format)
```
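Conceptually, a `pool_function` just reduces each window of the input with some aggregation. Here is a framework-free sketch of that idea (an illustration only, not the TensorFlow implementation; in the linked source, each concrete subclass supplies its own pooling op):

```python
def pool2d(grid, pool_size, reduce_fn):
    """Apply reduce_fn to non-overlapping pool_size windows of a 2D list."""
    ph, pw = pool_size
    out = []
    for i in range(0, len(grid) - ph + 1, ph):
        row = []
        for j in range(0, len(grid[0]) - pw + 1, pw):
            window = [grid[i + di][j + dj]
                      for di in range(ph) for dj in range(pw)]
            row.append(reduce_fn(window))
        out.append(row)
    return out


grid = [[1, 2, 5, 6],
        [3, 4, 7, 8]]
print(pool2d(grid, (2, 2), max))                        # [[4, 8]]
print(pool2d(grid, (2, 2), lambda w: sum(w) / len(w)))  # [[2.5, 6.5]]
```

A custom Keras pooling layer would plug a reduction like this (expressed with tensor ops) into a subclassed layer's `call` method.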
I couldn't find pool_function in the library. My main aim is to write a new pool function by looking at the built-in pool_function, but I didn't find it. Can you help me with this? | open | 2025-01-27T21:14:33Z | 2025-03-18T09:19:48Z | https://github.com/keras-team/keras/issues/20818 | [
"type:support",
"stat:awaiting response from contributor"
] | azizyucelen | 4 |
awesto/django-shop | django | 392 | Documentation on how to add to an existing Django-CMS setup? | Perhaps I am just being slow; however, try as I may, I cannot find any description of how to add this to an existing Django-CMS project, which I would have thought would be a common situation?
I suspect it is not particularly difficult; however, it seems an odd oversight unless there are issues with using such a path?
| open | 2016-08-09T18:12:11Z | 2016-10-25T12:27:33Z | https://github.com/awesto/django-shop/issues/392 | [
"feature request",
"accepted",
"documentation"
] | stuartaw | 2 |
xonsh/xonsh | data-science | 4,657 | Issues with gitstatus in prompt | So I have this prompt as configured by `xonfig web`:
```
$PROMPT = '{BOLD_INTENSE_RED}➜ {CYAN}{cwd_base} {gitstatus}{RESET} '
```
It has two issues.
First, every shell that's not in a git directory has a double space, as it's implemented as "space gitstatus space":

This makes me think I accidentally pressed space and makes me want to backspace it.
Responsibility for spacing (or not) should just move to gitstatus; the prompt template can't really do proper spacing otherwise.
So a fixed version would need to be something like this (with gitstatus returning the extra initial space only inside a git directory):
```
$PROMPT = '{BOLD_INTENSE_RED}➜ {CYAN}{cwd_base}{gitstatus}{RESET} '
```
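One workaround sketch, without patching xonsh itself: wrap gitstatus in a custom field that only emits the leading space when there is something to show. The field-registration comment is an assumption about xonsh's `$PROMPT_FIELDS` mechanism; check the docs for your version:

```python
def spaced(field_value):
    """Prepend a separator space only when the field is non-empty."""
    return f" {field_value}" if field_value else ""


# In xonsh, roughly (hypothetical wiring):
#   $PROMPT_FIELDS['gitstatus_spaced'] = lambda: spaced($PROMPT_FIELDS['gitstatus']())
#   $PROMPT = '{BOLD_INTENSE_RED}➜ {CYAN}{cwd_base}{gitstatus_spaced}{RESET} '

print(repr(spaced("")))        # '' -> no stray double space
print(repr(spaced("master")))  # ' master'
```

This keeps the prompt template free of conditional spacing logic.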
Second issue is the stash count:

While that might be useful to some, I find it just very distracting, and other git prompts for other shells never bother displaying this count, as it's really irrelevant for most people.
I don't think there's any way to disable stash count other than by editing /usr/local/Cellar/xonsh/0.11.0/libexec/lib/python3.10/site-packages/xonsh/prompt/__amalgam__.py (or wherever it's installed).
As for stashes: most people don't bother cleaning them up, and `git stash apply` doesn't do so either. It's generally better to use `git stash apply` instead of `git stash pop` in case things get messed up and you want to retry later.
Anyway, I'd recommend turning it off by default, and in any case there probably should be some kind of gitstatus configuration, as I'd expect people to want a lot of different things from it. | closed | 2022-01-25T16:25:41Z | 2022-01-26T16:37:12Z | https://github.com/xonsh/xonsh/issues/4657 | [
"prompt-toolkit",
"xonfig"
] | taw | 4 |
allenai/allennlp | nlp | 5,450 | Add support for transformers LayoutLMv2. | **Is your feature request related to a problem? Please describe.**
On the current version 2.7.0 of allennlp and version 4.11.3 of transformers, layoutlmv2 is not supported:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/allennlp/allennlp/modules/token_embedders/pretrained_transformer_mismatched_embedder.py", line 80, in __init__
self._matched_embedder = PretrainedTransformerEmbedder(
File "/root/allennlp/allennlp/modules/token_embedders/pretrained_transformer_embedder.py", line 123, in __init__
tokenizer = PretrainedTransformerTokenizer(
File "/root/allennlp/allennlp/data/tokenizers/pretrained_transformer_tokenizer.py", line 79, in __init__
self._reverse_engineer_special_tokens("a", "b", model_name, tokenizer_kwargs)
File "/root/allennlp/allennlp/data/tokenizers/pretrained_transformer_tokenizer.py", line 112, in _reverse_engineer_special_tokens
dummy_output = tokenizer_with_special_tokens.encode_plus(
File "/root/anaconda3/envs/alenlayout/lib/python3.8/site-packages/transformers/models/layoutlmv2/tokenization_layoutlmv2_fast.py", line 430, in encode_plus
return self._encode_plus(
File "/root/anaconda3/envs/alenlayout/lib/python3.8/site-packages/transformers/models/layoutlmv2/tokenization_layoutlmv2_fast.py", line 639, in _encode_plus
batched_output = self._batch_encode_plus(
File "/root/anaconda3/envs/alenlayout/lib/python3.8/site-packages/transformers/models/layoutlmv2/tokenization_layoutlmv2_fast.py", line 493, in _batch_encode_plus
encodings = self._tokenizer.encode_batch(
TypeError: PreTokenizedInputSequence must be Union[List[str], Tuple[str]]
```
The error occurs because they added a `boxes` argument as the second positional argument of the fast LayoutLMv2 tokenizer, which breaks allennlp's reverse engineering of the special tokens in pretrained_transformer_tokenizer.
**Describe the solution you'd like**
Ideally, naming the arguments in `tokenizer_with_special_tokens.encode_plus` of pretrained_transformer_tokenizer should do the trick, but I'm afraid of repercussions on other tokenizers that have different argument names (those not based on BERT, maybe?).
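To illustrate the failure mode and why keyword arguments sidestep it, here are two toy functions (not the real transformers signatures):

```python
def encode_plus_v1(text, text_pair=None):
    return ("v1", text, text_pair)


def encode_plus_v2(text, boxes, text_pair=None):  # new positional arg inserted
    return ("v2", text, boxes, text_pair)


# A positional call written against v1 silently shifts meaning under v2:
assert encode_plus_v1("a", "b") == ("v1", "a", "b")
assert encode_plus_v2("a", "b") == ("v2", "a", "b", None)  # "b" became boxes!

# Keyword calls keep their meaning (v2 then requires boxes explicitly):
assert encode_plus_v2("a", boxes=[], text_pair="b") == ("v2", "a", [], "b")
```

The risk mentioned above is exactly that switching to keywords assumes every wrapped tokenizer spells those parameters the same way.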
Moreover, since LayoutLMv2 added a few inputs to the model (images and boxes), modifications should be made to _unfold_long_sequences, _fold_long_sequences and forward of the pretrained_transformer_embedder and pretrained_transformer_mismatched_embedder to account for the additional inputs.
If it's okay with you, I'd like to work on it.
| open | 2021-10-27T14:37:16Z | 2021-10-29T23:08:08Z | https://github.com/allenai/allennlp/issues/5450 | [
"Contributions welcome",
"Feature request"
] | HOZHENWAI | 1 |
ExpDev07/coronavirus-tracker-api | rest-api | 209 | Argentina data not updating | Good job, and thank you for sharing this.
I have a mini dashboard for calculating ratios and variability, but the data did not update today:
in Argentina yesterday there were 117 and currently there are 0.
thanks!
--
| closed | 2020-03-26T23:34:53Z | 2020-04-18T18:25:44Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/209 | [
"question",
"source: jhu"
] | Pato2777 | 3 |
LibrePhotos/librephotos | django | 708 | New Librephoto Docker setup -- unable to scan photo when login as User (admin sees photos) | I am using Docker Compose via Windows to start LibrePhotos.
- Admin assigned to **/data** and able to see all the pictures stored in this directory (as expected)
- I created _"user1"_ and only assigned access to **/data/user1** folder
>>> Logged in Admin --> I can see all the photos in /data/user1/photos_dir
>>> Logged in as User1 --> No photo detected in /data/user1
---- What did I do wrong here? Is User1 unable to see/access the subfolder, or does the user function not actually work?
I am considering moving from Synology Photos --> LibrePhotos, but being unable to support users is a no-go for me.
Folder structure:
# Location of your photos.
scanDirectory="C:/LibrePhoto_Folder"
# Internal data of LibrePhotos
data=./librephotos/data
----------------
Windows Folders structure
"C:/LibrePhoto_Folder" --> /data
"C:/LibrePhoto_Folder/user1" --> data/user1
"C:/LibrePhoto_Folder/user1/photos_dir" | closed | 2022-12-26T19:42:45Z | 2023-01-02T15:55:40Z | https://github.com/LibrePhotos/librephotos/issues/708 | [
"bug"
] | adangster1 | 1 |
opengeos/leafmap | plotly | 920 | `add_geojson` from local path unable to transform CRS correctly | ### Environment Information
- leafmap version: 0.38.5
- Python version: 3.10.12
- Operating System: Ubuntu 20.04.6 LTS
### Description
I was working with a GeoJSON file in UTM projection Zone 43 (EPSG:32643). I kept it in EPSG:32643 and not in EPSG:4326 because I wanted to create a K km buffer around the geometry as accurately as I could. When I saved this file locally with GeoPandas and tried to visualize it by providing a path to the `add_geojson` method, it was unable to show the geometry correctly (lines are visible at the perimeter of the entire map). I have included a minimal working example below.
### What I Did
```py
show_what_fails = True
import leafmap.foliumap as leafmap
# OR
# import leafmap.leafmap as leafmap # This also fails
import geopandas as gpd
m = leafmap.Map()
m.add_basemap("HYBRID")
gdf = gpd.read_file("https://raw.githubusercontent.com/opengeos/leafmap/master/examples/data/cable_geo.geojson")
if show_what_fails:
gdf = gdf.to_crs("EPSG:3857")
gdf.to_file("/tmp/cable_geo.geojson", driver='GeoJSON')
m.add_geojson("/tmp/cable_geo.geojson")
m
```
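For what it's worth, GeoJSON consumers generally expect WGS84 lon/lat per RFC 7946, so a file left in EPSG:3857 carries meter-scale coordinates that, if read as degrees, land far outside ±180 — which would explain the lines at the perimeter of the map. A quick sketch of the mismatch using the standard spherical Web Mercator inverse (independent of leafmap itself):

```python
import math

R = 6378137.0  # spherical Web Mercator radius, meters

def mercator_to_lonlat(x, y):
    """Invert EPSG:3857 (spherical Web Mercator) to WGS84 degrees."""
    lon = math.degrees(x / R)
    lat = math.degrees(2 * math.atan(math.exp(y / R)) - math.pi / 2)
    return lon, lat

# An EPSG:3857 point: the raw values are meters, nothing like degrees.
x, y = 8_570_000.0, 1_140_000.0
print(mercator_to_lonlat(x, y))  # roughly (77.0, 10.2)
```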
| closed | 2024-10-16T10:48:58Z | 2024-10-17T13:14:03Z | https://github.com/opengeos/leafmap/issues/920 | [
"bug"
] | patel-zeel | 5 |
cvat-ai/cvat | computer-vision | 9,216 | Restore from large backup (43GB) fails | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
Backed up a large project (43GB) from an earlier version of CVAT and tried to restore it to another instance
### Expected Behavior
Backup should create the project
### Possible Solution
_No response_
### Context
Backed up a large project (43GB) from an earlier version of CVAT (2.9.2) and tried to restore it to another instance
running 2.30.
It failed with the following message.
Could not restore project backup.
tus: unexpected response while creating upload, originated from request (method: POST, url: http://172.30.0.10:8080/api/projects/backup/, response code: 413, response text: File size exceeds max limit of 26843545600 bytes, request id: n/a).
Any idea how to increase the limit?
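For reference, the limit in the 413 response works out to exactly 25 GiB, which a 43 GB backup exceeds — a quick sanity check:

```python
limit_bytes = 26_843_545_600       # from the 413 response text
print(limit_bytes / 1024**3)       # -> 25.0, i.e. the cap is exactly 25 GiB
print(43 * 10**9 > limit_bytes)    # -> True, the 43 GB backup exceeds it
```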
### Environment
```Markdown
``` | closed | 2025-03-15T12:12:51Z | 2025-03-17T19:08:20Z | https://github.com/cvat-ai/cvat/issues/9216 | [] | skoroneos | 1 |
open-mmlab/mmdetection | pytorch | 11,807 | Bounding Box Thickness | Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
3. The bug has not been fixed in the latest version.
**Describe the bug**
Hi. How can I change the thickness of bounding boxes on the images during inference? I have high-resolution images, and the bounding box that I am getting is too thin and barely visible. It also seems like the text size for class labels on images changes based on the size of the detected bounding box. Is there a way to correct this?
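For context while triaging: in recent mmdet versions the drawing is handled by the visualizer, and (if I read `DetLocalVisualizer` correctly — the parameter names below are assumed rather than verified against this release) the line width looks overridable from the config, e.g.:

```python
# Hedged sketch -- `line_width` assumed from DetLocalVisualizer's signature;
# please verify the parameter names for your installed version.
visualizer = dict(
    type='DetLocalVisualizer',
    name='visualizer',
    line_width=6,  # thicker boxes for high-resolution images
)
```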
**Reproduction**
1. What command or script did you run?
```
python demo/image.py ./path_to_dataset ./path_to_config ./path_to_model_checkpoint
```
2. Did you make any modifications on the code or config? Did you understand what you have modified? All changes were understood.
3. What dataset did you use? Custom dataset
**Environment**
1. Please run `python mmdet/utils/collect_env.py` to collect necessary environment information and paste it here.
```
sys.platform: linux
Python: 3.8.19 (default, Mar 20 2024, 19:58:24) [GCC 11.2.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA RTX 2000 Ada Generation Laptop GPU
CUDA_HOME: /usr/local/cuda-12.1
NVCC: Cuda compilation tools, release 12.1, V12.1.66
GCC: gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
PyTorch: 2.0.0+cu117
PyTorch compiling details: PyTorch built with:
- GCC 9.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.7
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
- CuDNN 8.5
- Magma 2.6.1
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
OpenCV: 4.9.0
MMEngine: 0.10.4
```
3. You may add additional information that may be helpful for locating the problem, such as
- How you installed PyTorch \[e.g., pip, conda, source\]
- Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
**Error traceback**
If applicable, paste the error trackback here.
```none
A placeholder for trackback.
```
**Bug fix**
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
| open | 2024-06-20T21:31:31Z | 2024-06-22T20:01:25Z | https://github.com/open-mmlab/mmdetection/issues/11807 | [] | ZeeRizvee | 1 |
SciTools/cartopy | matplotlib | 1,755 | ax.clabel returns unexpected 'NoneType' | ### Issue
`ax.clabel()` returns unexpected `'NoneType'`
#### Code to reproduce
Matplotlib example
```
import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.pyplot as plt
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = np.exp(-X**2 - Y**2)
Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2)
Z = (Z1 - Z2) * 2
fig, ax = plt.subplots()
CS = ax.contour(X, Y, Z)
CS_labels = ax.clabel(CS, inline=True, fontsize=10)
print(CS_labels[0].get_position())
```
returns:
`(0.9499999999999864, 0.5360418495133943)`
Cartopy example
```
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from cartopy.examples.waves import sample_data
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection=ccrs.EckertIII())
ax.set_global()
ax.coastlines('110m', alpha=0.1)
x, y, z = sample_data((20, 40))
z = z * -1.5 * y
filled_c = ax.contourf(x, y, z, transform=ccrs.PlateCarree())
line_c = ax.contour(x, y, z, levels=filled_c.levels,
colors=['black'],
transform=ccrs.PlateCarree())
CS_labels = ax.clabel(
line_c,
colors=['black'],
manual=False,
inline=True,
fmt=' {:.0f} '.format,
)
print(CS_labels[0].get_position())
```
breaks with the following:
#### Traceback
```
TypeError Traceback (most recent call last)
<ipython-input-655-34edf4c96e8c> in <module>
----> 1 CS_labels[0].get_position()
TypeError: 'NoneType' object is not subscriptable
```
#### Additional context
Related question and solution to accessing label coordinates on [stackoverflow](https://stackoverflow.com/questions/66807997/get-matplotlib-cartopy-contour-auto-label-coordinates). | closed | 2021-03-25T23:29:09Z | 2021-04-27T13:04:36Z | https://github.com/SciTools/cartopy/issues/1755 | [] | friedrichknuth | 2 |
AntonOsika/gpt-engineer | python | 1,046 | WSL2 gpte File List bad Behavior | ## Expected Behavior
WSL2 `gpte` uses the existing `.gitignore`, then gives the opportunity to select and deselect files, save, and close.
Else use existing `file_selection.toml`.
Else use existing `.gitignore`.
Else don't overwrite existing files!
Else don't overwrite existing read-only files!
## Current Behavior
WSL2 `gpte` generates its own `.gitignore`, overwriting existing `.gitignore`, even when `.gitignore` is read-only, then generates its own `file_selection.toml`, based on its own `.gitignore`, overwriting existing `file_selection.toml`.
## Failure Information
Windows 11 WSL2 Ubuntu
### Steps to Reproduce
`$ gpte project -i`
### Failure Logs
```
File list detected at /mnt/c/Users/J/gpt-engineer/projects/[project]/.gpteng/file_selection.toml. Edit or delete it if you want to select new files.
Please select and deselect (add # in front) files, save it, and close it to continue...
write: /mnt/c/Users/J/gpt-engineer/projects/[project]/.gpteng/file_selection.toml is not logged in
```
| closed | 2024-03-04T21:23:19Z | 2024-03-21T02:36:36Z | https://github.com/AntonOsika/gpt-engineer/issues/1046 | [
"invalid"
] | oldgithubman | 15 |
numba/numba | numpy | 9,178 | stack should support list as input | <!--
Thanks for opening an issue! To help the Numba team handle your information
efficiently, please first ensure that there is no other issue present that
already describes the issue you have
(search at https://github.com/numba/numba/issues?&q=is%3Aissue).
-->
## Reporting a bug
<!--
Before submitting a bug report please ensure that you can check off these boxes:
-->
- [x] I have tried using the latest released version of Numba (most recent is
visible in the change log (https://github.com/numba/numba/blob/main/CHANGE_LOG).
- [x] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
<!--
Please include details of the bug here, including, if applicable, what you
expected to happen!
-->
To reproduce the problem:
```python
from numba import njit, prange
from numba.typed import List
import numpy as np
@njit()
def test_stack():
array = np.ones((2, 3))
list_of_array = [array] * 10
np.stack(list_of_array)
if __name__ == "__main__":
test_stack()
```
output:
```
No implementation of function Function(<function stack at 0x0000017D0430F3A0>) found for signature:
>>> stack(list(array(float64, 2d, C))<iv=None>)
```
It's discussed [here](https://github.com/numba/numba/issues/7476) that the list is not supported because the dimension cannot be inferred during compilation.
However, it actually can, because the custom stack (actually hstack here) function can be implemented:
```python
from numba import njit, prange
from numba.typed import List
import numpy as np
@njit()
def stack(list_of_array):
shape = (len(list_of_array),) + list_of_array[0].shape
stacked_array = np.empty(shape)
for j in prange(len(list_of_array)):
stacked_array[j] = list_of_array[j]
return stacked_array
if __name__ == "__main__":
# Note that you have to use typed list provided by numba here.
typed_list = List()
[typed_list.append(np.ones((2, 3))) for _ in range(10)]
stack(typed_list)
stacked = stack(typed_list)
print(stacked.shape)
print(stacked)
```
So why is list not supported? I think it's in fact a bug. | open | 2023-09-04T01:57:50Z | 2023-09-04T07:19:22Z | https://github.com/numba/numba/issues/9178 | [
"feature_request",
"numpy"
] | 46319943 | 1 |
jwkvam/bowtie | plotly | 251 | flask dance compatibility | easily work with https://github.com/singingwolfboy/flask-dance | open | 2018-10-13T21:15:55Z | 2018-10-13T21:15:55Z | https://github.com/jwkvam/bowtie/issues/251 | [
"enhancement"
] | jwkvam | 0 |
sqlalchemy/sqlalchemy | sqlalchemy | 11,530 | Can not load SqlAlchemy Metadata pickle file if contains enum from abstract model | ### Describe the bug
This is a follow up to https://github.com/sqlalchemy/sqlalchemy/discussions/11360 and https://github.com/sqlalchemy/sqlalchemy/issues/11365
While in 2.0.31 you can successfully dump the pickle file, I find that you can not load it in a fresh Python process. I think my steps to reproduce are obvious enough and clear enough that this should be a bug report, not a discussion, but sorry if I missed something and I'm just doing it wrong.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.31
### DBAPI (i.e. the database driver)
N/A
### Database Vendor and Major Version
N/A
### Python Version
3.11.2
### Operating system
Linux
### To Reproduce
Steps to reproduce:
1. Run this script:
```python
import pickle
import sqlalchemy as sa
from sqlalchemy.orm.decl_api import declarative_base
base = declarative_base()
sa_metadata = base.metadata
class _Foo(base):
__abstract__ = True
__tablename__ = "foo"
id_ = sa.Column(sa.Integer(), primary_key=True)
column1 = sa.Column(sa.Enum("red", "green", name="column1"))
class Foo(_Foo):
pass
with open("test.pkl", "wb") as f:
pickle.dump(sa_metadata, f)
```
2. In a separate process run:
```python
import pickle
with open("test.pkl", "rb") as f:
pickle.load(f)
```
### Error
```
Traceback (most recent call last):
File "test_pickle_load.py", line 4, in <module>
pickle.load(f)
AttributeError: Can't get attribute 'JoinedDDLEventsDispatch' on <module 'sqlalchemy.event.base' from '.../python3.11/site-packages/sqlalchemy/event/base.py'>
```
### Additional context
_No response_ | closed | 2024-06-24T18:11:36Z | 2024-06-25T12:31:16Z | https://github.com/sqlalchemy/sqlalchemy/issues/11530 | [
"bug",
"events",
"near-term release"
] | notatallshaw-gts | 4 |
gradio-app/gradio | deep-learning | 10,414 | Gradio Error | ### Describe the bug
I encountered the following error when executing my code. My environment is python=3.11, gradio_client==1.6.0. Here's my code:
```python
from gradio_client import Client

client = Client("https://iic-anydoor-online.ms.show/", hf_token='aa')
job = client.predict(
    "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png",
    "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png",
    0,
    1,
    0.1,
    -1,
    True,
    fn_index=2
)
print(job)
```
This is my error:
Loaded as API: https://iic-anydoor-online.ms.show/ ✔
Traceback (most recent call last):
File "D:\SoftWare\anaconda\envs\dev_env\Lib\site-packages\gradio_client\compatibility.py", line 108, in _predict
output = result["data"]
~~~~~~^^^^^^^^
KeyError: 'data'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\code\py3Test\test3\test2.py", line 4, in <module>
job = client.predict(
^^^^^^^^^^^^^^^
File "D:\SoftWare\anaconda\envs\dev_env\Lib\site-packages\gradio_client\client.py", line 478, in predict
).result()
^^^^^^^^
File "D:\SoftWare\anaconda\envs\dev_env\Lib\site-packages\gradio_client\client.py", line 1538, in result
return super().result(timeout=timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\SoftWare\anaconda\envs\dev_env\Lib\concurrent\futures\_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "D:\SoftWare\anaconda\envs\dev_env\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "D:\SoftWare\anaconda\envs\dev_env\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\SoftWare\anaconda\envs\dev_env\Lib\site-packages\gradio_client\compatibility.py", line 64, in _inner
predictions = _predict(*data)
^^^^^^^^^^^^^^^
File "D:\SoftWare\anaconda\envs\dev_env\Lib\site-packages\gradio_client\compatibility.py", line 122, in _predict
raise KeyError(
KeyError: 'Could not find \'data\' key in response. Response received: {\'detail\': [{\'input\': \'{"data": ["data:image/png;base64,R0lGODlhPQBEAPeoAJosM//AwO/AwHVYZ/z595kzAP/s7P+goOXMv8+fhw/v739/f+8PD98fH/8mJl+fn/9ZWb8/PzWlwv///6wWGbImAPgTEMImIN9gUFCEm/gDALULDN8PAD6atYdCTX9gUNKlj8wZAKUsAOzZz+UMAOsJAP/Z2ccMDA8PD/95eX5NWvsJCOVNQPtfX/8zM8+QePLl38MGBr8JCP+zs9myn/8GBqwpAP/GxgwJCPny78lzYLgjAJ8vAP9fX/+MjMUcAN8zM/9wcM8ZGcATEL+QePdZWf/29uc/P9cmJu9MTDImIN+/r7+/vz8/P8VNQGNugV8AAF9fX8swMNgTAFlDOICAgPNSUnNWSMQ5MBAQEJE3QPIGAM9AQMqGcG9vb6MhJsEdGM8vLx8fH98AANIWAMuQeL8fABkTEPPQ0OM5OSYdGFl5jo+Pj/+pqcsTE78wMFNGQLYmID4dGPvd3UBAQJmTkP+8vH9QUK+vr8ZWSHpzcJMmILdwcLOGcHRQUHxwcK9PT9DQ0O/v70w5MLypoG8wKOuwsP/g4P/Q0IcwKEswKMl8aJ9fX2xjdOtGRs/Pz+Dg4GImIP8gIH0sKEAwKKmTiKZ8aB/f39Wsl+LFt8dgUE9PT5x5aHBwcP+AgP+WltdgYMyZfyywz78AAAAAAAD///8AAP9mZv///wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAEAAKgALAAAAAA9AEQAAAj/AFEJHEiwoMGDCBMqXMiwocAbBww4nEhxoYkUpzJGrMixogkfGUNqlNixJEIDB0SqHGmyJSojM1bKZOmyop0gM3Oe2liTISKMOoPy7GnwY9CjIYcSRYm0aVKSLmE6nfq05QycVLPuhDrxBlCtYJUqNAq2bNWEBj6ZXRuyxZyDRtqwnXvkhACDV+euTeJm1Ki7A73qNWtFiF+/gA95Gly2CJLDhwEHMOUAAuOpLYDEgBxZ4GRTlC1fDnpkM+fOqD6DDj1aZpITp0dtGCDhr+fVuCu3zlg49ijaokTZTo27uG7Gjn2P+hI8+PDPERoUB318bWbfAJ5sUNFcuGRTYUqV/3ogfXp1rWlMc6awJjiAAd2fm4ogXjz56aypOoIde4OE5u/F9x199dlXnnGiHZWEYbGpsAEA3QXYnHwEFliKAgswgJ8LPeiUXGwedCAKABACCN+EA1pYIIYaFlcDhytd51sGAJbo3onOpajiihlO92KHGaUXGwWjUBChjSPiWJuOO/LYIm4v1tXfE6J4gCSJEZ7YgRYUNrkji9P55sF/ogxw5ZkSqIDaZBV6aSGYq/lGZplndkckZ98xoICbTcIJGQAZcNmdmUc210hs35nCyJ58fgmIKX5RQGOZowxaZwYA+JaoKQwswGijBV4C6SiTUmpphMspJx9unX4KaimjDv9aaXOEBteBqmuuxgEHoLX6Kqx+yXqqBANsgCtit4FWQAEkrNbpq7HSOmtwag5w57GrmlJBASEU18ADjUYb3ADTinIttsgSB1oJFfA63bduimuqKB1keqwU
hoCSK374wbujvOSu4QG6UvxBRydcpKsav++Ca6G8A6Pr1x2kVMyHwsVxUALDq/krnrhPSOzXG1lUTIoffqGR7Goi2MAxbv6O2kEG56I7CSlRsEFKFVyovDJoIRTg7sugNRDGqCJzJgcKE0ywc0ELm6KBCCJo8DIPFeCWNGcyqNFE06ToAfV0HBRgxsvLThHn1oddQMrXj5DyAQgjEHSAJMWZwS3HPxT/QMbabI/iBCliMLEJKX2EEkomBAUCxRi42VDADxyTYDVogV+wSChqmKxEKCDAYFDFj4OmwbY7bDGdBhtrnTQYOigeChUmc1K3QTnAUfEgGFgAWt88hKA6aCRIXhxnQ1yg3BCayK44EWdkUQcBByEQChFXfCB776aQsG0BIlQgQgE8qO26X1h8cEUep8ngRBnOy74E9QgRgEAC8SvOfQkh7FDBDmS43PmGoIiKUUEGkMEC/PJHgxw0xH74yx/3XnaYRJgMB8obxQW6kL9QYEJ0FIFgByfIL7/IQAlvQwEpnAC7DtLNJCKUoO/w45c44GwCXiAFB/OXAATQryUxdN4LfFiwgjCNYg+kYMIEFkCKDs6PKAIJouyGWMS1FSKJOMRB/BoIxYJIUXFUxNwoIkEKPAgCBZSQHQ1A2EWDfDEUVLyADj5AChSIQW6gu10bE/JG2VnCZGfo4R4d0sdQoBAHhPjhIB94v/wRoRKQWGRHgrhGSQJxCS+0pCZbEhAAOw==", "data:image/png;base64,R0lGODlhPQBEAPeoAJosM//AwO/AwHVYZ/z595kzAP/s7P+goOXMv8+fhw/v739/f+8PD98fH/8mJl+fn/9ZWb8/PzWlwv///6wWGbImAPgTEMImIN9gUFCEm/gDALULDN8PAD6atYdCTX9gUNKlj8wZAKUsAOzZz+UMAOsJAP/Z2ccMDA8PD/95eX5NWvsJCOVNQPtfX/8zM8+QePLl38MGBr8JCP+zs9myn/8GBqwpAP/GxgwJCPny78lzYLgjAJ8vAP9fX/+MjMUcAN8zM/9wcM8ZGcATEL+QePdZWf/29uc/P9cmJu9MTDImIN+/r7+/vz8/P8VNQGNugV8AAF9fX8swMNgTAFlDOICAgPNSUnNWSMQ5MBAQEJE3QPIGAM9AQMqGcG9vb6MhJsEdGM8vLx8fH98AANIWAMuQeL8fABkTEPPQ0OM5OSYdGFl5jo+Pj/+pqcsTE78wMFNGQLYmID4dGPvd3UBAQJmTkP+8vH9QUK+vr8ZWSHpzcJMmILdwcLOGcHRQUHxwcK9PT9DQ0O/v70w5MLypoG8wKOuwsP/g4P/Q0IcwKEswKMl8aJ9fX2xjdOtGRs/Pz+Dg4GImIP8gIH0sKEAwKKmTiKZ8aB/f39Wsl+LFt8dgUE9PT5x5aHBwcP+AgP+WltdgYMyZfyywz78AAAAAAAD///8AAP9mZv///wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACH5BAEAAKgALAAAAAA9AEQAAAj/AFEJHEiwoMGDCBMqXMiwocAbBww4nEhxoYkUpzJGrMixogkfGUNqlNixJEIDB0SqHGmyJSojM1bKZOmyop0gM3Oe2liTISKMOoPy7GnwY9CjIYcSRYm0aVKSLmE6nfq05QycVLPuhDrxBlCtYJUqNAq2bNWEBj6ZXRuyxZyDRt
qwnXvkhACDV+euTeJm1Ki7A73qNWtFiF+/gA95Gly2CJLDhwEHMOUAAuOpLYDEgBxZ4GRTlC1fDnpkM+fOqD6DDj1aZpITp0dtGCDhr+fVuCu3zlg49ijaokTZTo27uG7Gjn2P+hI8+PDPERoUB318bWbfAJ5sUNFcuGRTYUqV/3ogfXp1rWlMc6awJjiAAd2fm4ogXjz56aypOoIde4OE5u/F9x199dlXnnGiHZWEYbGpsAEA3QXYnHwEFliKAgswgJ8LPeiUXGwedCAKABACCN+EA1pYIIYaFlcDhytd51sGAJbo3onOpajiihlO92KHGaUXGwWjUBChjSPiWJuOO/LYIm4v1tXfE6J4gCSJEZ7YgRYUNrkji9P55sF/ogxw5ZkSqIDaZBV6aSGYq/lGZplndkckZ98xoICbTcIJGQAZcNmdmUc210hs35nCyJ58fgmIKX5RQGOZowxaZwYA+JaoKQwswGijBV4C6SiTUmpphMspJx9unX4KaimjDv9aaXOEBteBqmuuxgEHoLX6Kqx+yXqqBANsgCtit4FWQAEkrNbpq7HSOmtwag5w57GrmlJBASEU18ADjUYb3ADTinIttsgSB1oJFfA63bduimuqKB1keqwUhoCSK374wbujvOSu4QG6UvxBRydcpKsav++Ca6G8A6Pr1x2kVMyHwsVxUALDq/krnrhPSOzXG1lUTIoffqGR7Goi2MAxbv6O2kEG56I7CSlRsEFKFVyovDJoIRTg7sugNRDGqCJzJgcKE0ywc0ELm6KBCCJo8DIPFeCWNGcyqNFE06ToAfV0HBRgxsvLThHn1oddQMrXj5DyAQgjEHSAJMWZwS3HPxT/QMbabI/iBCliMLEJKX2EEkomBAUCxRi42VDADxyTYDVogV+wSChqmKxEKCDAYFDFj4OmwbY7bDGdBhtrnTQYOigeChUmc1K3QTnAUfEgGFgAWt88hKA6aCRIXhxnQ1yg3BCayK44EWdkUQcBByEQChFXfCB776aQsG0BIlQgQgE8qO26X1h8cEUep8ngRBnOy74E9QgRgEAC8SvOfQkh7FDBDmS43PmGoIiKUUEGkMEC/PJHgxw0xH74yx/3XnaYRJgMB8obxQW6kL9QYEJ0FIFgByfIL7/IQAlvQwEpnAC7DtLNJCKUoO/w45c44GwCXiAFB/OXAATQryUxdN4LfFiwgjCNYg+kYMIEFkCKDs6PKAIJouyGWMS1FSKJOMRB/BoIxYJIUXFUxNwoIkEKPAgCBZSQHQ1A2EWDfDEUVLyADj5AChSIQW6gu10bE/JG2VnCZGfo4R4d0sdQoBAHhPjhIB94v/wRoRKQWGRHgrhGSQJxCS+0pCZbEhAAOw==", 0, 1, 0.1, -1, true], "fn_index": 2, "session_hash": "7ee0c586-b79e-4d1d-9117-14a593b8a900"}\', \'loc\': [\'body\'], \'msg\': \'Input should be a valid dictionary or object to extract fields from\', \'type\': \'model_attributes_type\', \'url\': \'https://errors.pydantic.dev/2.5/v/model_attributes_type\'}]}'
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
gradio_client=1.6.0
```
### Severity
I can work around it | closed | 2025-01-23T08:58:50Z | 2025-01-24T22:42:54Z | https://github.com/gradio-app/gradio/issues/10414 | [
"bug",
"needs repro"
] | w1131680660 | 2 |
tqdm/tqdm | jupyter | 1,318 | add configuration to turn off progress bars | - [ ] I have marked all applicable categories:
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [x] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
```
4.64.0 3.8.13 (default, Apr 1 2022, 11:52:33)
[Clang 12.0.0 (clang-1200.0.32.29)] darwin
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
Hi,
`tqdm` is very useful during development to understand the progress of your code. However, if this code is deployed somewhere that doesn't have a regular TTY, e.g. CI systems or servers, the progress bar could interfere with the regular logging of the application, e.g. for ingestion into systems like AWS CloudWatch, SumoLogic, Splunk, or others.
I think it would be nice if `tqdm` could be configured to be a no-op. For example using an environment variable:
```bash
python my-script.py # tqdm progress bar emitted as usual
env TQDM_DISABLE=true python my-script.py # tqdm does nothing
``` | closed | 2022-04-14T05:52:27Z | 2024-05-21T20:51:02Z | https://github.com/tqdm/tqdm/issues/1318 | [
"p3-enhancement 🔥",
"to-merge ↰",
"c3-small 🕒"
] | antonysouthworth-halter | 7 |
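A minimal user-side sketch of the `TQDM_DISABLE` gate proposed above (tqdm itself appears only in a comment, so nothing here depends on it being installed):

```python
import os

def progress_disabled() -> bool:
    """True when TQDM_DISABLE is set to a truthy value."""
    return os.environ.get("TQDM_DISABLE", "").strip().lower() in {"1", "true", "yes"}

# Hypothetical usage, assuming tqdm is installed:
#   for item in tqdm(items, disable=progress_disabled()):
#       ...

os.environ["TQDM_DISABLE"] = "true"
print(progress_disabled())  # -> True
```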
quokkaproject/quokka | flask | 367 | Populate method role return None when the argument is a Role object. | When the method `role` is called with a Role instance, the role is not found in the list and
it returns None.
...
raise ValidationError(message, errors=errors)
mongoengine.errors.ValidationError: ValidationError (User:None) (A ReferenceField only accepts DBRef or documents: ['roles'])
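A hedged sketch of the kind of normalization that would let such a lookup accept both role names and Role objects (hypothetical helper, not quokka's actual code):

```python
class Role:
    """Toy Role-like object with just a name attribute."""
    def __init__(self, name):
        self.name = name

def find_role(roles, role):
    """Accept either a role name (str) or a Role-like object."""
    name = getattr(role, "name", role)  # hypothetical normalization step
    for r in roles:
        if r.name == name:
            return r
    return None

roles = [Role("admin"), Role("editor")]
print(find_role(roles, "admin") is roles[0])         # -> True
print(find_role(roles, Role("editor")) is roles[1])  # -> True
```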
| closed | 2016-07-14T18:40:25Z | 2016-07-15T14:24:27Z | https://github.com/quokkaproject/quokka/issues/367 | [] | ramiroluz | 1 |
marcomusy/vedo | numpy | 900 | Function intersect_with takes very large amounts of memory (several GigaBytes) | When I call this function, it uses amounts of memory that quickly add up to several gigabytes, and the computer ends up freezing/crashing.
Notes:
This doesn't always happen.
Changing the value of tolerance may solve the problem for a particular pair of meshes but may not work for another pair.
The problem can be reproduced using the file at:
https://github.com/goncalo-pt/goncalo_moniz_public | open | 2023-07-18T14:13:06Z | 2023-07-18T14:48:51Z | https://github.com/marcomusy/vedo/issues/900 | [] | goncalo-pt | 1 |
Lightning-AI/pytorch-lightning | deep-learning | 19,964 | Documentation: writing custom samplers compatible with multi GPU training | ### 📚 Documentation
Hi,
I'm trying to run distributed training with a custom sampler for the first time. The idea is rather simple (fixed budget for each class) and works fine in single GPU. When moving to multi GPU, unsurprisingly I get an error message, which tells me that I should subclass `BatchSampler`.
```
TypeError: Lightning can't inject a (distributed) sampler into your batch sampler, because it doesn't subclass PyTorch's `BatchSampler`. To mitigate this, either follow the API of `BatchSampler` or set `Trainer(use_distributed_sampler=False)`. If you choose the latter, you will be responsible for handling the distributed sampling within your batch sampler.
```
It is my understanding that torch's `BatchSampler` takes one (single-sample) `Sampler` and samples from that repeatedly to fill up the batch size. Are there any guidelines for how samplers should be built to be compatible with the sampler injection? I can't seem to find any in the docs.
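For reference, a minimal pure-Python sketch of the contract the error message points at — `BatchSampler` wraps a per-sample sampler and yields lists of indices of length `batch_size` (a toy model of the interface, not torch's actual implementation):

```python
class ToyBatchSampler:
    """Mimics torch.utils.data.BatchSampler's interface:
    (sampler, batch_size, drop_last) -> iterator of index lists."""
    def __init__(self, sampler, batch_size, drop_last=False):
        self.sampler = sampler
        self.batch_size = batch_size
        self.drop_last = drop_last

    def __iter__(self):
        batch = []
        for idx in self.sampler:
            batch.append(idx)
            if len(batch) == self.batch_size:
                yield batch
                batch = []
        if batch and not self.drop_last:
            yield batch

print(list(ToyBatchSampler(range(5), batch_size=2)))
# -> [[0, 1], [2, 3], [4]]
```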
cc @borda | open | 2024-06-10T12:46:47Z | 2024-06-25T20:30:43Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19964 | [
"help wanted",
"docs"
] | fteufel | 0 |
kizniche/Mycodo | automation | 549 | Issue May Only Affect Me... | ## Mycodo Issue Report:
- Specific Mycodo Version: 6.4.4
#### Problem Description
Please list:
mycodo.service & mycodoflask.service do not start successfully because smbus is not present on my distribution.
I have had no success in finding the source to build smbus for my system, but I have remedied this by installing smbus2, which I've heard is exactly the same as (or an improvement over?) smbus and is more widely available. I imported smbus2 as smbus in the two locations where it is used (controller_lcd.py & chirp.py), and both services now start successfully.
Maybe it is necessary that I create my own branch going forward?
| closed | 2018-10-16T05:06:14Z | 2018-10-17T00:14:27Z | https://github.com/kizniche/Mycodo/issues/549 | [] | not5 | 6 |
Asabeneh/30-Days-Of-Python | flask | 310 | Nice work | You've helped me a lot in learning .py | closed | 2022-10-07T18:43:16Z | 2023-01-17T19:42:51Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/310 | [] | HackEzra | 0 |
giotto-ai/giotto-tda | scikit-learn | 174 | Raise test coverage above 90% for giotto/diagrams/_metrics.py | Current test coverage from pytest is 79% | closed | 2020-01-17T10:30:35Z | 2020-01-17T13:15:20Z | https://github.com/giotto-ai/giotto-tda/issues/174 | [
"enhancement",
"good first issue"
] | lewtun | 2 |
itamarst/eliot | numpy | 21 | `with startTask()` doesn't make clear that the action ends when you leave context block | The version with explicit types is perhaps clearer, but still could be more explicit about what's going on.
| closed | 2014-04-15T15:08:48Z | 2019-05-09T18:17:14Z | https://github.com/itamarst/eliot/issues/21 | [] | itamarst | 2 |
microsoft/unilm | nlp | 1,070 | Issue with performing distributed inference | I tried to fork the layoutlmv2 model using kserve workers, adding the OMP library variables, but it leads to a deadlock. Interestingly, it works well without the OMP library variables but gives a really high inference time.
Is there a resolution to utilize layoutlmv2 with multithreading and forking?
Sharing the values used for OpenMP:
os.environ['OMP_NUM_THREADS'] = '4'
os.environ['OMP_PROC_BIND'] = 'false'
os.environ['OMP_SCHEDULE'] = 'STATIC'
os.environ['KMP_AFFINITY']='granularity=fine,compact,1,0' | open | 2023-04-19T07:11:22Z | 2023-04-19T07:11:22Z | https://github.com/microsoft/unilm/issues/1070 | [] | Agarwal-Saurabh | 0 |
DistrictDataLabs/yellowbrick | scikit-learn | 1,127 | Add .gitattributes to control line ending whitespace | Add a `.gitattributes` file to set per-repository defaults for line endings. This is in response to @lwgray's issue with whitespace changes when he moved to a Windows machine. Depending on how `git` is configured, it may check out files with a `\r\n` (Windows) line ending rather than a `\n` line ending as on OS X and Linux. Setting a `.gitattributes` file will ensure that, no matter our contributors' global git settings, the files will be checked out correctly, preventing whitespace-only commits.
See: https://docs.github.com/en/free-pro-team@latest/github/using-git/configuring-git-to-handle-line-endings
Proposed `.gitattributes` file:
```
# Set the default behavior, in case contributors don't have core.autocrlf set.
* text=auto
# Baseline images are binary and should not be modified
*.png binary
*.jpg binary
*.pdf binary
```
@lwgray is working on the solution on his machine which will allow us to figure out more clearly what is happening. | closed | 2020-11-01T12:11:33Z | 2021-07-12T16:25:53Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1127 | [
"type: task"
] | bbengfort | 0 |
influxdata/influxdb-client-python | jupyter | 460 | OOM problem when writing large Pandas Dataframe to InfluxDB |
__Steps to reproduce:__
Creating any type of large Pandas Dataframe (with 10M+ records) and use the `write_api.write()` method with the `record` parameter being the dataframe.
__Expected behavior:__
The write should be successful.
__Actual behavior:__
Process is killed by system due to OOM, as confirmed by memory profiler too.
__Specifications:__
- Client Version: 1.29
- InfluxDB Version: 2.2
- Platform: Ubuntu 20.04
__Likely Reason__
I read through the code, and the reason looks simple: the dataframe is not being chunked, as seen in `influxdb_client/client/_base.py`, line 442:
```
elif 'DataFrame' in type(record).__name__:
serializer = DataframeSerializer(record, self._point_settings, write_precision, **kwargs)
self._serialize(serializer.serialize(), write_precision, payload, **kwargs)
```
Here a `DataframeSerializer` is created, which does have chunked write capabilities (which is fantastic!). But then the last line calling `serializer.serialize()` does not pass any `chunk_idx` parameter, forcing the whole dataframe to be serialized at once, resulting in OOM. It is most unfortunate that you guys are so close to getting this done.
Would you confirm whether what I observed is indeed true? If so, I can find some alternatives for my own project; otherwise, please let me know if there is a way to write large dataframes into InfluxDB. Thank you!
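Until the serializer's chunking is wired through, a client-side workaround is to slice the dataframe yourself and write each slice separately. Below is a minimal sketch of the slicing arithmetic — the helper function is my own; only `write_api.write` and its `data_frame_*` keywords are the real client API:

```python
def iter_chunk_bounds(n_rows, chunk_size):
    """Yield (start, stop) index pairs covering n_rows in chunk_size slices."""
    if chunk_size <= 0:
        raise ValueError("chunk_size must be positive")
    for start in range(0, n_rows, chunk_size):
        yield start, min(start + chunk_size, n_rows)

# Hypothetical usage with a pandas DataFrame `df`:
# for start, stop in iter_chunk_bounds(len(df), 100_000):
#     write_api.write(bucket=bucket, record=df.iloc[start:stop],
#                     data_frame_measurement_name="my_measurement")
```

Each slice then goes through the serializer on its own, keeping peak memory bounded by the chunk size instead of the full frame.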
| closed | 2022-06-22T18:42:49Z | 2022-06-24T06:58:13Z | https://github.com/influxdata/influxdb-client-python/issues/460 | [
"wontfix"
] | stevel408 | 3 |
computationalmodelling/nbval | pytest | 129 | Incompatible with coverage 5.0 | It seems like `nbval` does not work with the just released coverage 5.0 package: https://travis-ci.org/qucontrol/krotov/jobs/625049383 | closed | 2019-12-14T18:45:06Z | 2020-02-12T12:06:17Z | https://github.com/computationalmodelling/nbval/issues/129 | [] | goerz | 14 |
seleniumbase/SeleniumBase | web-scraping | 3,125 | UC incorrectly assumes it has failed CF turnstile captcha | These two checks are not correct for cloudflare:
https://github.com/seleniumbase/SeleniumBase/blob/6a057913b4971591b2ffdce51331c3d447cf391f/seleniumbase/core/browser_launcher.py#L992
https://github.com/seleniumbase/SeleniumBase/blob/6a057913b4971591b2ffdce51331c3d447cf391f/seleniumbase/core/browser_launcher.py#L1212
If there is `#challenge-success-text` present, the captcha has not failed, and the page just hasn't loaded yet (or the browser is redirected to a url with an unsupported protocol, which gets opened in an external app).
Repro (gets stuck inside `uc_gui_handle_captcha`), same issue with `uc_gui_click_captcha`:
```python
from seleniumbase import SB
with SB(uc=True, headed=True) as sb:
sb.uc_open_with_reconnect(
"https://csstats.gg/match/204001348/watch/3f164c7658b19e847cc83b3096b225a192637563b50596f908aa0a104386d57c",
reconnect_time=6
)
sb.uc_gui_handle_captcha()
``` | closed | 2024-09-12T14:10:51Z | 2024-09-12T17:23:54Z | https://github.com/seleniumbase/SeleniumBase/issues/3125 | [
"bug",
"UC Mode / CDP Mode"
] | Jovvik | 6 |
plotly/dash | jupyter | 2,398 | [BUG] PAGE_REGISTRY is global, making it impossible to have multiple dash apps with `use_pages` | **Describe your context**
```
dash 2.7.1
dash-bootstrap-components 1.3.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
We are trying to create a flask application with multiple dash apps and utilizing `use_pages`. This creates a problem, however, as the `PAGE_REGISTRY` is a global variable:
https://github.com/plotly/dash/blob/dev/dash/_pages.py
This means that when we register multiple apps, all the apps will contain all the pages from all apps. E.g., I have an app A (at URL `/a/`) with pages:
- Page 1
- Page 2
And an App B (at URL `/b/`) that contains:
- Page 6
- Page 7
Then the following routes will be registered (8 rather than the 4 actually defined):
- `/a/1/`
- `/a/2/`
- `/a/6/`
- `/a/7/`
- `/b/1/`
- `/b/2/`
- `/b/6/`
- `/b/7/`
This is not ideal.
(And ideally one probably shouldn't use global variables in your code :stuck_out_tongue: )
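To make the failure mode concrete, here is a minimal self-contained sketch (my own mock-up, not Dash's actual code) of why a module-level registry leaks pages across app instances while a per-instance one would not:

```python
PAGE_REGISTRY = {}  # module-level, shared by every app instance (like dash._pages)

class MiniApp:
    """Stand-in for a Dash app; only illustrates registry scoping."""

    def __init__(self, prefix):
        self.prefix = prefix
        self.local_registry = {}  # per-instance alternative

    def register_page_global(self, path):
        # Every instance writes into the same module-level dict.
        PAGE_REGISTRY[self.prefix + path] = self.prefix

    def register_page_local(self, path):
        self.local_registry[self.prefix + path] = self.prefix

a = MiniApp("/a")
b = MiniApp("/b")
a.register_page_global("/1")
b.register_page_global("/6")
a.register_page_local("/1")
b.register_page_local("/6")

print(sorted(PAGE_REGISTRY))      # ['/a/1', '/b/6'] -- both apps see both pages
print(sorted(a.local_registry))   # ['/a/1'] -- only its own pages
```

Any app that iterates the shared dict to build routes will pick up the other app's pages, which is exactly the 8-routes-instead-of-4 behavior above.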
**Expected behavior**
I expected the routes/pages to be contained within the specific dash app.
| closed | 2023-01-25T14:52:54Z | 2023-01-30T11:41:02Z | https://github.com/plotly/dash/issues/2398 | [] | C0DK | 3 |
ipython/ipython | jupyter | 13,862 | iPython stores references to traceback object creating memory leaks on unhandled exception (reopen of #13103) | This issue keeps hitting Jupyter users over and over again. Last time (#13103) it was closed because the code example was incomplete.
Normally, when an exception isn't handled, interactive Python stores it along with its traceback in the `sys` module. This can be large, as the traceback holds every local object that was on the frames when the exception was raised. In vanilla Python, clearing those references in `sys` is enough to release the traceback along with all the memory it keeps alive.
This is not the case in IPython, as it also stores the traceback on `get_ipython().InteractiveTB.tb` and possibly in a few other places.
Below is example code that shows the issue:
```py
import sys, gc
class X:
pass
def f():
x = X();
raise Exception("oops")
f()
gc.collect(); print([id(o) for o in gc.get_objects() if isinstance(o, X)])
sys.last_traceback = sys.last_type = sys.last_value = None
# here python releases the X object, but not ipython
gc.collect(); print([id(o) for o in gc.get_objects() if isinstance(o, X)])
```
# python
```py
Python 3.9.12 (main, Apr 5 2022, 06:56:58)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys, gc
>>> class X:
... pass
...
>>> def f():
... x = X();
... raise Exception("oops")
...
>>> f()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 3, in f
Exception: oops
>>>
>>> gc.collect(); print([id(o) for o in gc.get_objects() if isinstance(o, X)])
0
[139848387047680]
>>> sys.last_traceback = sys.last_type = sys.last_value = None
>>> gc.collect(); print([id(o) for o in gc.get_objects() if isinstance(o, X)])
0
[]
```
# ipython
```
Python 3.9.12 (main, Apr 5 2022, 06:56:58)
Type 'copyright', 'credits' or 'license' for more information
IPython 8.6.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import sys, gc
...: class X:
...: pass
...:
...: def f():
...: x = X();
...: raise Exception("oops")
...:
...: f()
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
Cell In [1], line 9
6 x = X();
7 raise Exception("oops")
----> 9 f()
11 gc.collect(); print([id(o) for o in gc.get_objects() if isinstance(o, X)])
12 sys.last_traceback = sys.last_type = sys.last_value = None
Cell In [1], line 7, in f()
5 def f():
6 x = X();
----> 7 raise Exception("oops")
Exception: oops
In [2]: gc.collect(); print([id(o) for o in gc.get_objects() if isinstance(o, X)])
[140556391411776]
In [3]: sys.last_traceback = sys.last_type = sys.last_value = None
In [4]: gc.collect(); print([id(o) for o in gc.get_objects() if isinstance(o, X)])
[140556391411776]
In [5]: get_ipython().InteractiveTB.tb
Out[5]: <traceback at 0x7fd5d5c05300>
```
Here is how it looks like from jupyter notebook where I was able to get the dependency visualised:
<img width="887" alt="Screenshot 2022-12-09 at 15 45 53" src="https://user-images.githubusercontent.com/340180/206739849-97d2dde0-9e91-4dc5-a939-8f6f42014afa.png">
Could we make the tb a property that reads from `sys.last_traceback` instead of keeping another reference to the traceback?
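As a sketch of that idea (my own minimal mock-up, not IPython's actual class), the `tb` attribute could become a read-through property so the interpreter's `sys.last_traceback` stays the single strong owner:

```python
import sys

class InteractiveTBSketch:
    """Mock-up: expose `tb` as a read-through property instead of a second strong reference."""

    @property
    def tb(self):
        # Read through to the interpreter's own slot; clearing
        # sys.last_traceback then truly releases the frames.
        return getattr(sys, "last_traceback", None)
```

With this shape, setting `sys.last_traceback = None` would drop the last strong reference, matching vanilla Python's behavior in the example above.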
| open | 2022-12-09T15:49:12Z | 2022-12-09T16:08:38Z | https://github.com/ipython/ipython/issues/13862 | [] | PiotrCzapla | 3 |
suitenumerique/docs | django | 221 | ⚗️ Test with ngnix index | ## Bug Report
The Next.js router has a problem when we access a dynamic URL; we work around it with an nginx trick, but that creates some 404 requests.
Try to see whether redirecting everything to the index, so the dynamic router takes over, could fix this issue.
## See
https://github.com/numerique-gouv/impress/blob/fix/webrtc-multi-pods/src/frontend/apps/impress/conf/default.conf | open | 2024-09-03T09:14:36Z | 2024-09-03T09:14:36Z | https://github.com/suitenumerique/docs/issues/221 | [
"bug",
"frontend"
] | AntoLC | 0 |
ansible/ansible | python | 84,355 | Copy module, content, and variable interpolation | ### Summary
I am writing several simple tasks to enforce settings in Linux (RHEL 9-compatible). I've provided example tasks using the copy module and content. The content is at least partially provided using a variable, as is typically used in Ansible playbooks.
However, I see in the [documentation](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/copy_module.html) that there is a vague warning about using variable interpolation:
> If you need variable interpolation in copied files, use the [ansible.builtin.template](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html#ansible-collections-ansible-builtin-template-module) module. Using a variable with the [content](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/copy_module.html#ansible-collections-ansible-builtin-copy-module-parameter-content) parameter produces unpredictable results.
This warning was [added](https://github.com/ansible/ansible/pull/50940/files) in January 2019 in response to a 2018 bug report (#34595). Since then, there has been little discussion of the problem.
I'd prefer not to use the template module and a separate jinja2 template because it adds unnecessary complexity. It's also a common pattern in [example code](https://github.com/ComplianceAsCode/content/blob/444895fab527330481b182994c7dbd83ad6ca81f/linux_os/guide/system/accounts/accounts-banners/banner_etc_issue/ansible/shared.yml#L22) to use variable interpolation with content in the copy module.
If variable interpolation doesn't work in copy, I'd prefer it to raise an error or warning, be fixed, or have better documentation of what works and what doesn't. Another possibility is to include a "content" parameter in the template module that can be used for small jinja2 templates.
Is there any real harm in using the example code I've provided? It would be helpful to have more clarity on this.
### Issue Type
Bug Report
### Component Name
copy
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.17]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/USERNAME/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /home/USERNAME/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.19 (main, Aug 23 2024, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-2.0.1)] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
RHEL 9-compatible
### Steps to Reproduce
```yaml
---
- name: Set up banner and umask
hosts: all
become: true
vars:
banner_text: "Don't hack this system"
user_umask: "027"
tasks:
- name: Set system issue banner
ansible.builtin.copy:
dest: /etc/issue
content: "{{ banner_text }}"
owner: root
group: root
mode: '0644'
- name: Create /etc/profile.d/umask.sh with umask setting
ansible.builtin.copy:
dest: /etc/profile.d/umask.sh
content: |
umask {{ user_umask }}
owner: root
group: root
mode: '0644'
```
### Expected Results
Two files are correctly created with the desired output, with variable interpolation.
### Actual Results
```console
PLAY [Set up banner and umask] *****************************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************************
[WARNING]: Platform linux on host 10.32.8.180 is using the discovered Python interpreter at /usr/bin/python3.9, but future installation of another Python
interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-core/2.14/reference_appendices/interpreter_discovery.html for more
information.
ok: [10.32.8.180]
TASK [Set system issue banner] *****************************************************************************************************************************
changed: [10.32.8.180]
TASK [Create /etc/profile.d/umask.sh with umask setting] ***************************************************************************************************
changed: [10.32.8.180]
PLAY RECAP *************************************************************************************************************************************************
10.32.8.180 : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct | closed | 2024-11-20T21:02:41Z | 2024-12-17T14:00:06Z | https://github.com/ansible/ansible/issues/84355 | [
"module",
"bug",
"affects_2.14"
] | mikeweinberg | 5 |
ipython/ipython | data-science | 14,492 | Jupyer in VScode using unwritable folder for .ipython | ```
Failed to start the Kernel.
OSError: [Errno 122] Disk quota exceeded: '/specific/a/home/cc/students/cs/amitlevy/.ipython'.
```
Jupyter is trying to create the .ipython folder in my home directory on the server I've connected to, which is unwritable. I do have writable folders, and have tried changing all the directories in the vscode settings for the extension to the correct folder, but it keeps trying to do the same thing. This is inside the Python Interactive Window of VScode. | open | 2024-08-03T09:50:53Z | 2024-08-04T20:45:37Z | https://github.com/ipython/ipython/issues/14492 | [] | amitlevy | 1 |
PaddlePaddle/ERNIE | nlp | 127 | ERNIE: CPU version running ChnSentiCorp reports "Windows not support stack backtrace yet." | At first it ran out of memory; after lowering batch_size to 2, memory should be sufficient (8 GB physical). But then another error occurred.
The environment is Windows 10, CPU, 8 GB RAM, paddlepaddle 1.4.1.
---
Theoretical memory usage in training: 3420.067 - 3582.927 MB
warm validate or test
Load pretraining parameters from MODEL_PATH/params.
ParallelExecutor is deprecated. Please use CompiledProgram and Executor. CompiledProgram is a central place for optimization and Executor is the unified
executor. Example can be found in compiler.py.
W0506 20:38:55.854854 15620 graph.h:204] WARN: After a series of passes, the current graph can be quite different from OriginProgram. So, please avoid us
ing the `OriginProgram()` method!
Traceback (most recent call last):
File "run_classifier.py", line 307, in <module>
main(args)
File "run_classifier.py", line 201, in main
main_program=train_program)
File "D:\python3\lib\site-packages\paddle\fluid\parallel_executor.py", line 134, in __init__
self._compiled_program._compile(place=self._place, scope=self._scope)
File "D:\python3\lib\site-packages\paddle\fluid\compiler.py", line 307, in _compile
scope=self._scope)
File "D:\python3\lib\site-packages\paddle\fluid\compiler.py", line 278, in _compile_data_parallel
self._exec_strategy, self._build_strategy, self._graph)
paddle.fluid.core.EnforceNotMet: Fail to allocate CPU memory: size = 263382656 . at [D:\1.4.1\paddle\paddle\fluid\memory\detail\system_allocator.cc:56]
PaddlePaddle Call Stacks:
Windows not support stack backtrace yet.
| closed | 2019-05-06T12:43:53Z | 2020-05-28T11:52:43Z | https://github.com/PaddlePaddle/ERNIE/issues/127 | [
"wontfix"
] | shenlan211314 | 4 |
Significant-Gravitas/AutoGPT | python | 8,962 | Marketplace - Change this header so it's the same style as "Featured Agents" | ### Describe your issue.
Change it to the "large-Poppins" style.. as per this typography sheet. [https://www.figma.com/design/aw299myQfhiXPa4nWkXXOT/agpt-template?node-id=7-47&t=axoLiZIIUXifeRWU-1](url)
Style name: large-poppins
font: poppins
size: 18px
line-height: 28px

| open | 2024-12-13T09:27:50Z | 2025-01-04T04:42:01Z | https://github.com/Significant-Gravitas/AutoGPT/issues/8962 | [
"good first issue",
"UI",
"platform/frontend"
] | ograce1421 | 0 |
fugue-project/fugue | pandas | 373 | [BUG] Spark engine rename is slow when there are a lot of columns | The problem is here https://github.com/fugue-project/fugue/blob/81f6e15af37a006be95687c223150e9c0b1006d3/fugue_spark/dataframe.py#L129
With thousands of `withColumnRenamed`, Spark takes a lot of time to rename. So we need to change to one operation using `select`. | closed | 2022-10-15T17:57:04Z | 2022-10-15T23:27:34Z | https://github.com/fugue-project/fugue/issues/373 | [] | goodwanghan | 0 |
deeppavlov/DeepPavlov | nlp | 811 | Unable to load odqa model. |
```python
from deeppavlov import configs
from deeppavlov.core.commands.infer import build_model

odqa = build_model(configs.odqa.en_odqa_infer_wiki, load_trained=True)
```
I'm trying to load the model, but I'm getting the error below. Could you please help?
File "C:\Users\vsolanki\AppData\Local\Programs\Python\Python36\lib\site-packages\deeppavlov\models\vectorizers\hashing_tfidf_vectorizer.py", line 262, in load
FileNotFoundError: HashingTfIdfVectorizer path doesn't exist!
| closed | 2019-04-22T05:22:15Z | 2019-04-22T09:39:34Z | https://github.com/deeppavlov/DeepPavlov/issues/811 | [] | Pem14604 | 1 |
marshmallow-code/flask-smorest | rest-api | 314 | It is recommended to add a favicon to the HTML template | It is recommended to add a favicon to the HTML template | closed | 2022-01-10T10:00:41Z | 2022-05-13T08:21:54Z | https://github.com/marshmallow-code/flask-smorest/issues/314 | [] | sbigtree | 1 |
fastapi/fastapi | fastapi | 12,055 | Why can't the key of the returned value start with “_sa”? | ### Privileged issue
- [X] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
```
import uvicorn
from fastapi import FastAPI
app = FastAPI()
@app.get("/")
def root():
return {"_sa": "Hello World", "status": "OK"}
if __name__ == '__main__':
uvicorn.run(app, host="0.0.0.0", port=8000)
```
The result of the above code is:
```
{
"status": "OK"
}
``` | closed | 2024-08-21T23:42:20Z | 2024-08-22T13:54:53Z | https://github.com/fastapi/fastapi/issues/12055 | [] | leafotto | 2 |
ivy-llc/ivy | tensorflow | 28,311 | ivy.conj | **Why should this be implemented?**
- 3+ of the native frameworks have this function
- it's needed for a complex/long frontend function implementation
**Links to native framework implementations**
- [Jax](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.conj.html)
- [PyTorch](https://pytorch.org/docs/stable/generated/torch.conj.html)
- [TensorFlow](https://www.tensorflow.org/api_docs/python/tf/math/conj)
- [NumPy](https://numpy.org/doc/stable/reference/generated/numpy.conjugate.html)
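For reference, the element-wise semantics all four frameworks share, as a tiny pure-Python sketch (illustration only, not an Ivy backend implementation):

```python
def conj(z):
    """Complex conjugate: negate the imaginary part; real inputs pass through."""
    z = complex(z)
    return complex(z.real, -z.imag)
```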
| closed | 2024-02-17T17:12:36Z | 2024-03-20T03:56:41Z | https://github.com/ivy-llc/ivy/issues/28311 | [
"Next Release",
"Suggestion",
"Ivy API Experimental",
"Useful Issue"
] | ZenithFlux | 3 |
ray-project/ray | python | 51,102 | [Doc][Dashboard] Add Documentation about TPU Logs | ### Description
Tracking issue to add documentation about libtpu logs written to `/tmp/tpu_logs` and how they're exposed on the Ray dashboard.
### Link
Related PR: https://github.com/ray-project/ray/pull/47737
The documentation should be added to the general TPU docs (https://docs.ray.io/en/latest/cluster/kubernetes/user-guides/tpu.html) and referenced from the docs on logging (https://docs.ray.io/en/latest/ray-observability/getting-started.html#logs-view) | open | 2025-03-05T19:19:12Z | 2025-03-06T00:07:53Z | https://github.com/ray-project/ray/issues/51102 | [
"docs",
"core",
"core-hardware"
] | ryanaoleary | 0 |
tflearn/tflearn | tensorflow | 1,056 | About examples/images/alexnet.py | In AlexNet,
the order of the first and second layers is (conv → LRN → maxPool),
but in your code
```python
network = input_data(shape=[None, 227, 227, 3])
network = conv_2d(network, 96, 11, strides=4, activation='relu')
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
network = conv_2d(network, 256, 5, activation='relu')
network = max_pool_2d(network, 3, strides=2)
network = local_response_normalization(network)
```
the order is (conv → maxPool → LRN).
Is the order of LRN and maxPool reversed?
scikit-image/scikit-image | computer-vision | 6,812 | `measure.regionprops` assumes input is a numpy array | ### Description:
`skimage.measure.regionprops` appears to assume the input is a numpy array, and directly accesses the dtype attribute.
Instead, it should coerce its input to an ndarray with `numpy.asarray`.
This was observed on the interface with Julia & Python at https://github.com/cjdoris/PythonCall.jl/issues/280
### Way to reproduce:
```Python
julia> labelled_frame = zeros(Int, 10, 10); labelled_frame[5:6, 5:6] .= 1;
julia> using PythonCall
julia> skimage = pyimport("skimage")
Python module: <module 'skimage' from '/Users/ian/Documents/GitHub/Foo.jl/.CondaPkg/env/lib/python3.11/site-packages/skimage/__init__.py'>
julia> skimage.measure.regionprops(labelled_frame)
ERROR: Python: AttributeError: Julia: type Array has no field dtype
Python stacktrace:
[1] __getattr__
@ ~/.julia/packages/PythonCall/dsECZ/src/jlwrap/any.jl:189
[2] regionprops
@ skimage.measure._regionprops ~/Documents/GitHub/Foo.jl/.CondaPkg/env/lib/python3.11/site-packages/skimage/measure/_regionprops.py:1253
Stacktrace:
[1] pythrow()
@ PythonCall ~/.julia/packages/PythonCall/dsECZ/src/err.jl:94
[2] errcheck
@ ~/.julia/packages/PythonCall/dsECZ/src/err.jl:10 [inlined]
[3] pycallargs(f::Py, args::Py)
@ PythonCall ~/.julia/packages/PythonCall/dsECZ/src/abstract/object.jl:210
[4] pycall(f::Py, args::Matrix{Int64}; kwargs::Base.Pairs{Symbol, Union{}, Tuple{}, NamedTuple{(), Tuple{}}})
@ PythonCall ~/.julia/packages/PythonCall/dsECZ/src/abstract/object.jl:228
[5] pycall
@ ~/.julia/packages/PythonCall/dsECZ/src/abstract/object.jl:218 [inlined]
[6] #_#11
@ ~/.julia/packages/PythonCall/dsECZ/src/Py.jl:352 [inlined]
[7] (::Py)(args::Matrix{Int64})
@ PythonCall ~/.julia/packages/PythonCall/dsECZ/src/Py.jl:352
[8] top-level scope
@ REPL[35]:1
```
### Traceback or output:
_No response_
### Version information:
_No response_ | closed | 2023-03-10T17:19:31Z | 2023-10-27T15:40:36Z | https://github.com/scikit-image/scikit-image/issues/6812 | [
":speech_balloon: Discussion",
":pray: Feature request",
":cry: Won't fix",
":sleeping: Dormant"
] | IanButterworth | 10 |
aiortc/aiortc | asyncio | 1,171 | FPS check | How to check fps from stream using only python? Is there any examples or somth?
| closed | 2024-10-08T11:23:02Z | 2025-01-29T11:54:22Z | https://github.com/aiortc/aiortc/issues/1171 | [] | Greazy | 2 |
explosion/spaCy | deep-learning | 13,380 | The word transitions to the wrong prototype | 
| closed | 2024-03-15T15:17:19Z | 2024-03-19T09:28:33Z | https://github.com/explosion/spaCy/issues/13380 | [
"feat / lemmatizer",
"perf / accuracy"
] | github123666 | 1 |
yt-dlp/yt-dlp | python | 11,984 | Some URLs Unable to parse how many pages there are in total | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
Unable to parse how many pages there are in total
The following page numbers will increase indefinitely
[PornHubUserVideosUpload] Extracting URL: https://cn.pornhub.com/pornstar/valentina-jewels/videos/upload
[download] Downloading playlist: valentina-jewels
[PornHubUserVideosUpload] valentina-jewels: Downloading page 1
[PornHubUserVideosUpload] valentina-jewels: Downloading page 2
[PornHubUserVideosUpload] valentina-jewels: Downloading page 3
[PornHubUserVideosUpload] valentina-jewels: Downloading page 4
[PornHubUserVideosUpload] valentina-jewels: Downloading page 5
[PornHubUserVideosUpload] valentina-jewels: Downloading page 6
[PornHubUserVideosUpload] valentina-jewels: Downloading page 7
[PornHubUserVideosUpload] valentina-jewels: Downloading page 8
[PornHubUserVideosUpload] valentina-jewels: Downloading page 9
[PornHubUserVideosUpload] valentina-jewels: Downloading page 10
[PornHubUserVideosUpload] valentina-jewels: Downloading page 11
[PornHubUserVideosUpload] valentina-jewels: Downloading page 12
[PornHubUserVideosUpload] valentina-jewels: Downloading page 13
[PornHubUserVideosUpload] valentina-jewels: Downloading page 14
[PornHubUserVideosUpload] valentina-jewels: Downloading page 15
[PornHubUserVideosUpload] valentina-jewels: Downloading page 16
[PornHubUserVideosUpload] valentina-jewels: Downloading page 17
[PornHubUserVideosUpload] valentina-jewels: Downloading page 18
[PornHubUserVideosUpload] valentina-jewels: Downloading page 19
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[PornHubUserVideosUpload] Extracting URL: https://cn.pornhub.com/pornstar/valentina-jewels/videos/upload
[download] Downloading playlist: valentina-jewels
[PornHubUserVideosUpload] valentina-jewels: Downloading page 1
[PornHubUserVideosUpload] valentina-jewels: Downloading page 2
[PornHubUserVideosUpload] valentina-jewels: Downloading page 3
[PornHubUserVideosUpload] valentina-jewels: Downloading page 4
[PornHubUserVideosUpload] valentina-jewels: Downloading page 5
[PornHubUserVideosUpload] valentina-jewels: Downloading page 6
[PornHubUserVideosUpload] valentina-jewels: Downloading page 7
[PornHubUserVideosUpload] valentina-jewels: Downloading page 8
[PornHubUserVideosUpload] valentina-jewels: Downloading page 9
[PornHubUserVideosUpload] valentina-jewels: Downloading page 10
[PornHubUserVideosUpload] valentina-jewels: Downloading page 11
[PornHubUserVideosUpload] valentina-jewels: Downloading page 12
[PornHubUserVideosUpload] valentina-jewels: Downloading page 13
[PornHubUserVideosUpload] valentina-jewels: Downloading page 14
[PornHubUserVideosUpload] valentina-jewels: Downloading page 15
[PornHubUserVideosUpload] valentina-jewels: Downloading page 16
[PornHubUserVideosUpload] valentina-jewels: Downloading page 17
[PornHubUserVideosUpload] valentina-jewels: Downloading page 18
[PornHubUserVideosUpload] valentina-jewels: Downloading page 19
```
| closed | 2025-01-03T07:32:14Z | 2025-01-05T23:25:05Z | https://github.com/yt-dlp/yt-dlp/issues/11984 | [
"incomplete",
"NSFW"
] | andylews | 1 |
biolab/orange3 | pandas | 6,761 | Error launching Orange in Anaconda: cannot import name 'astype_nansafe' from 'pandas | Trying to launch Orange from Anaconda, receive this error;
```
Traceback (most recent call last):
File "/home/will/anaconda3/bin/orange-canvas", line 7, in
from Orange.canvas.__main__ import main
File "/home/will/anaconda3/lib/python3.11/site-packages/Orange/__init__.py", line 4, in
from Orange import data
File "/home/will/anaconda3/lib/python3.11/site-packages/Orange/data/__init__.py", line 12, in
from .pandas_compat import *
File "/home/will/anaconda3/lib/python3.11/site-packages/Orange/data/pandas_compat.py", line 10, in
from pandas.core.arrays.sparse.dtype import SparseDtype
File "/home/will/anaconda3/lib/python3.11/site-packages/pandas/core/arrays/sparse/dtype.py", line 21, in
from pandas.core.dtypes.astype import astype_nansafe
ImportError: cannot import name 'astype_nansafe' from 'pandas.core.dtypes.astype' (/home/will/anaconda3/lib/python3.11/site-packages/pandas/core/dtypes/astype.py)
```
Ubuntu 22.04
Orange 3.34.0 via Anaconda
Pandas 1.5.3
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system:
- Orange version:
- How you installed Orange:
| closed | 2024-03-16T01:11:46Z | 2024-03-16T12:40:34Z | https://github.com/biolab/orange3/issues/6761 | [
"bug report"
] | skyemoor | 2 |
marcomusy/vedo | numpy | 315 | Using show() with multiple renderers | I have a plotter initialized as:
```python
vp1 = vedo.Plotter(shape=(3,5), axes=0)
```
I've added meshes and spheres to these and they display fine. Now, I have a loop where I'm updating the positions of these spheres. At the end of the update, I want to redraw all the renderers. If I try:
```python
vp1.show(interactive=False, at=0, resetcam=False)
```
the camera control(by mouse) works, but the 0th renderer has ALL the spheres in(ie the ones I created in other renderers):

Instead, if I update the show command to:
```python
vp1.show()
```
Now the number of spheres is correct, but I cannot move the camera.

I'm running the latest dev build(was trying to follow the examples but it seems some of them conform to the latest build, and some to an older version of the library). Here is the complete source code:
```python
import vedo
from random import random
# this is one instance of the class Plotter with 3 rows and 5 columns
vp1 = vedo.Plotter(shape=(3,5), axes=0)
for i in range(15):
vp1.show(str(i),at=i)
class Particle:
def __init__(self, at):
global vp1
self.p = vedo.vector((0.0, 0.0, 0.0))
self.vsphere = vedo.Sphere(self.p, r=0.1, c='red')
vp1.add(self.vsphere, at=at)
def update(self):
self.vsphere.pos(self.p)
particles = []
for d in range(15):
for i in range(5):
particles.append(Particle(at=d))
while True:
for particle in particles:
particle.p[0] += (random()-0.5)*0.2
particle.update()
# I can control the camera fine, but the 0-th renderer has ALL the particles, not
# just the ones that were added to it.
# vp1.show(interactive=False, at=0, resetcam=False)
# This version will render correctly, but I cannot control the camera with the mouse
vp1.show()
```
Basically, I want to update the positions of spheres in all the renderers, while continuing to be able to interactively adjust the view with the mouse. How do I accomplish this? | closed | 2021-02-15T00:19:00Z | 2021-05-05T09:34:37Z | https://github.com/marcomusy/vedo/issues/315 | [] | medakk | 8 |
sczhou/CodeFormer | pytorch | 129 | 求助,第三个命令报错 |
```
C:\Users\tl小站\CodeFormer>python basicsr/setup.py develop
Traceback (most recent call last):
  File "D:\rjanz\lib\site-packages\numpy\core\__init__.py", line 23, in <module>
    from . import multiarray
  File "D:\rjanz\lib\site-packages\numpy\core\multiarray.py", line 10, in <module>
    from . import overrides
  File "D:\rjanz\lib\site-packages\numpy\core\overrides.py", line 6, in <module>
    from numpy.core._multiarray_umath import (
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\tl小站\CodeFormer\basicsr\setup.py", line 9, in <module>
    from torch.utils.cpp_extension import BuildExtension, CppExtension, CUDAExtension
  File "D:\rjanz\lib\site-packages\torch\__init__.py", line 676, in <module>
    from .storage import _StorageBase, TypedStorage, _LegacyStorage, UntypedStorage
  File "D:\rjanz\lib\site-packages\torch\storage.py", line 11, in <module>
    import numpy as np
  File "D:\rjanz\lib\site-packages\numpy\__init__.py", line 141, in <module>
    from . import core
  File "D:\rjanz\lib\site-packages\numpy\core\__init__.py", line 49, in <module>
    raise ImportError(msg)
ImportError:

IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!

Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.

We have compiled some common reasons and troubleshooting tips at:

    https://numpy.org/devdocs/user/troubleshooting-importerror.html

Please note and check the following:

  * The Python version is: Python3.9 from "D:\rjanz\python.exe"
  * The NumPy version is: "1.24.1"

and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.

Original error was: No module named 'numpy.core._multiarray_umath'
```
| open | 2023-01-31T17:11:43Z | 2023-02-01T07:48:50Z | https://github.com/sczhou/CodeFormer/issues/129 | [] | ice5920 | 1
postmanlabs/httpbin | api | 538 | bug: HttpBin no longer accepts headers containing underscore character | HttpBin no longer accepts headers containing underscore character
```
curl -i http://httpbin.org/headers -H 'My_header: foo'
HTTP/1.1 200 OK
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Type: application/json
Date: Mon, 25 Feb 2019 10:49:40 GMT
Server: nginx
Content-Length: 105
Connection: keep-alive
{
"headers": {
"Accept": "*/*",
"Host": "httpbin.org",
"User-Agent": "curl/7.61.0"
}
}
``` | open | 2019-02-25T10:51:37Z | 2019-02-25T16:33:25Z | https://github.com/postmanlabs/httpbin/issues/538 | [] | asoorm | 1 |
davidteather/TikTok-Api | api | 647 | Add a function that automatically retrieves the verifyFp parameter | Thank you very much for your project. It would be very helpful to add a function that automatically retrieves the verifyFp parameter, since verifyFp can expire, resulting in captchas or errors during crawling | closed | 2021-07-29T09:14:38Z | 2022-02-14T02:59:52Z | https://github.com/davidteather/TikTok-Api/issues/647 | [
"feature_request"
] | wangzsPalpitate | 6 |
ydataai/ydata-profiling | pandas | 912 | Cant install pandas | Just installed Python for the first time today. I keep getting this error when trying to install pandas

What should I do? | open | 2022-01-24T21:59:02Z | 2022-09-25T12:50:14Z | https://github.com/ydataai/ydata-profiling/issues/912 | [
"information requested ❔",
"dependencies 🔗"
] | anajulialuizon | 4 |
Asabeneh/30-Days-Of-Python | flask | 340 | Typo error | Hi, I am not an expert but while doing with debug option the loop
``count = 0
while count < 5:
if count == 3:
continue
print(count)
count = count + 1``
on day 10 the explanation says : "The above while loop only prints 0, 1, 2 and 4 (skips 3)." but after continue statement, count always be set to 3, Am I wrong?
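For reference, here is one possible corrected version (my own sketch, not from the book) that actually produces the documented 0, 1, 2 and 4; the counter must still be advanced on the skipped iteration:

```python
count = 0
seen = []
while count < 5:
    if count == 3:
        count = count + 1  # advance even when skipping, or the loop never ends
        continue
    seen.append(count)
    count = count + 1

print(seen)  # [0, 1, 2, 4]
```

Moving the increment ahead of the `if` would also terminate, but it changes which values are printed (1, 2, 4, 5), so the skipped branch needs its own increment to match the documented output.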
Thanks!
| closed | 2023-01-10T21:32:02Z | 2023-01-12T21:40:42Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/340 | [] | jonitich | 2 |
huggingface/datasets | pytorch | 7,097 | Some of DownloadConfig's properties are always being overridden in load.py | ### Describe the bug
The `extract_compressed_file` and `force_extract` properties of DownloadConfig are always set to True in the `dataset_module_factory` function in the `load.py` file. This behavior is very annoying because previously extracted data is simply ignored and the archives are extracted again the next time the dataset is loaded.
See this image below:

### Steps to reproduce the bug
1. Have a local dataset that contains archived files (zip, tar.gz, etc)
2. Build a dataset loading script to download and extract these files
3. Run the load_dataset function with a DownloadConfig that specifically set `force_extract` to False
4. The extraction process will start no matter if the archives was extracted previously
### Expected behavior
The extraction process should not run when the archives were previously extracted and `force_extract` is set to False.
### Environment info
datasets==2.20.0
python3.9 | open | 2024-08-09T18:26:37Z | 2024-08-09T18:26:37Z | https://github.com/huggingface/datasets/issues/7097 | [] | ductai199x | 0 |
encode/databases | asyncio | 275 | TypeError for mysql backend when cursor.description is None | I'm using databases with the MySQL backend, and sometimes it raises a TypeError when using `fetch_all()`:
```
File "/env/lib/python3.8/site-packages/databases/core.py", line 140, in fetch_all
return await connection.fetch_all(query, values)
File "/env/lib/python3.8/site-packages/databases/core.py", line 239, in fetch_all
return await self._connection.fetch_all(built_query)
File "/env/lib/python3.8/site-packages/databases/backends/mysql.py", line 110, in fetch_all
metadata = ResultMetaData(context, cursor.description)
File "/env/lib/python3.8/site-packages/sqlalchemy/engine/result.py", line 290, in __init__
raw = self._merge_cursor_description(
File "/env/lib/python3.8/site-packages/sqlalchemy/engine/result.py", line 496, in _merge_cursor_description
return [
File "/env/lib/python3.8/site-packages/sqlalchemy/engine/result.py", line 496, in <listcomp>
return [
File "/env/lib/python3.8/site-packages/sqlalchemy/engine/result.py", line 616, in _merge_cols_by_none
for (
File "/env/lib/python3.8/site-packages/sqlalchemy/engine/result.py", line 540, in _colnames_from_description
for idx, rec in enumerate(cursor_description):
'NoneType' object is not iterable"
```
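For illustration, here is a minimal standalone sketch of the kind of `None` guard that would avoid this `TypeError` (a hypothetical helper, not the actual `databases`/SQLAlchemy internals):

```python
def rows_from_cursor(description, raw_rows):
    # Some MySQL statements legitimately produce no result-set metadata,
    # in which case cursor.description is None instead of a sequence.
    if description is None:
        return []
    columns = [col[0] for col in description]  # first field is the column name
    return [dict(zip(columns, row)) for row in raw_rows]

print(rows_from_cursor(None, []))                          # []
print(rows_from_cursor([("id",), ("name",)], [(1, "a")]))  # [{'id': 1, 'name': 'a'}]
```

The real fix would need to live where `ResultMetaData` is built, but the principle is the same: treat a missing `cursor.description` as an empty result set.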
Looks like `cursor.description is None`, I haven't recognized how it can be + locally these queries work fine and return empty response | closed | 2020-12-14T11:28:31Z | 2022-05-01T13:28:46Z | https://github.com/encode/databases/issues/275 | [] | nikita-davydov | 3 |
open-mmlab/mmdetection | pytorch | 11,636 | AssertionError: MMCV==2.1.0 is used but incompatible. Please install mmcv>=1.3.17, <=1.8.0. | When using the following code
```
from mmdet.apis import init_detector, inference_detector
```
this error is raised:
```
in <module> from mmdet.apis import init_detector, inference_detector File /usr/local/Python-3.8/lib/python3.8/site-packages/mmdet/__init__.py, line 24, in <module> assert (mmcv_version >= digit_version(mmcv_minimum_version) AssertionError: MMCV==2.1.0 is used but incompatible. Please install mmcv>=1.3.17, <=1.8.0.
```
Environment:
```
mmcv==2.1.0
mmcv-full==1.7.2
mmdet==3.3.0
mmengine==0.10.3
```
| open | 2024-04-14T12:18:34Z | 2024-04-14T12:26:29Z | https://github.com/open-mmlab/mmdetection/issues/11636 | [] | zhouyizhuo | 1 |
reloadware/reloadium | pandas | 205 | When python>=3.11, error name __file__ is not defined | ## Describe the bug*
When python>=3.11, variable `__file__` is not recognized
## To Reproduce
test code:
```
import os
if __name__ == '__main__':
os.path.realpath(__file__)
```
## Expected behavior
The script should run through without errors.
## Screenshots

## Desktop or remote (please complete the following information):
- OS: Windows 11
- OS version: 23H2
- M1 chip: no
- Reloadium package version: none
- PyCharm plugin version: 1.5.1
- Editor: PyCharm 2024.2.4
- Python Version: >=3.11
- Python Architecture: 64bit
- Run mode: Run or Debug
| open | 2024-11-12T03:52:47Z | 2024-11-12T03:52:47Z | https://github.com/reloadware/reloadium/issues/205 | [] | lldacing | 0 |
Anjok07/ultimatevocalremovergui | pytorch | 1,303 | A question | I have separated the backvocal and vocal using MDX_NET_KARA_2, but in the backvocal result there is still a little instrument sound. is there any way to separate the backvocal from the instrument that is still there? | open | 2024-04-23T03:46:55Z | 2024-04-23T03:46:55Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1303 | [] | Saylion | 0 |
ranaroussi/yfinance | pandas | 1,982 | Inconsistent results between two successive runs | ### Describe bug
The OHLC data returned by yf.download() is not reproducible across multiple runs. For example, if I download OHLC data of SPY two times, the numbers are slightly different. Ideally, the numbers should be the same.
### Simple code that reproduces your problem
Consider the following code. It downloads daily OHLC data for SPY and writes it to a file.
```
% cat sp500_daily_ohlc.py
import yfinance as yf
from datetime import datetime, date
df = yf.download(["SPY"], date(2023, 1, 1), date(2024, 7, 2))
time_stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
file_name = f"daily_{time_stamp}.csv"
print(f"writing data into {file_name}")
df.to_csv(file_name)
```
Run the code twice
```
% python sp500_daily_ohlc.py
[*********************100%%**********************] 1 of 1 completed
writing data into daily_20240713_193451.csv
```
```
% python sp500_daily_ohlc.py
[*********************100%%**********************] 1 of 1 completed
writing data into daily_20240713_193457.csv
```
Ideally, these two files should be the same. But they are not.
```
% diff daily_20240713_193451.csv daily_20240713_193457.csv | wc -l
466
```
```
rajulocal@hogwarts ~/work/github/market_data_processor/src/inprogress
% diff daily_20240713_193451.csv daily_20240713_193457.csv | head -n 20
2,6c2,6
< 2023-01-03,384.3699951171875,386.42999267578125,377.8299865722656,380.82000732421875,372.7543029785156,74850700
< 2023-01-04,383.17999267578125,385.8800048828125,380.0,383.760009765625,375.63201904296875,85934100
< 2023-01-05,381.7200012207031,381.8399963378906,378.760009765625,379.3800048828125,371.34478759765625,76970500
< 2023-01-06,382.6099853515625,389.25,379.4100036621094,388.0799865722656,379.8604736328125,104189600
< 2023-01-09,390.3699951171875,393.70001220703125,387.6700134277344,387.8599853515625,379.6451721191406,73978100
---
> 2023-01-03,384.3699951171875,386.42999267578125,377.8299865722656,380.82000732421875,372.7542419433594,74850700
> 2023-01-04,383.17999267578125,385.8800048828125,380.0,383.760009765625,375.6319885253906,85934100
> 2023-01-05,381.7200012207031,381.8399963378906,378.760009765625,379.3800048828125,371.3447570800781,76970500
> 2023-01-06,382.6099853515625,389.25,379.4100036621094,388.0799865722656,379.8605041503906,104189600
> 2023-01-09,390.3699951171875,393.70001220703125,387.6700134277344,387.8599853515625,379.6451416015625,73978100
8,10c8,10
< 2023-01-11,392.2300109863281,395.6000061035156,391.3800048828125,395.5199890136719,387.1429138183594,68881100
< 2023-01-12,396.6700134277344,398.489990234375,392.4200134277344,396.9599914550781,388.5523986816406,90157700
< 2023-01-13,393.6199951171875,399.1000061035156,393.3399963378906,398.5,390.0597839355469,63903900
---
> 2023-01-11,392.2300109863281,395.6000061035156,391.3800048828125,395.5199890136719,387.14288330078125,68881100
> 2023-01-12,396.6700134277344,398.489990234375,392.4200134277344,396.9599914550781,388.55242919921875,90157700
> 2023-01-13,393.6199951171875,399.1000061035156,393.3399963378906,398.5,390.059814453125,63903900
```
### Debug log
Code with debug mode enabled
```
% cat sp500_daily_ohlc.py
import yfinance as yf
from datetime import datetime, date
yf.enable_debug_mode()
df = yf.download(["SPY"], date(2023, 1, 1), date(2024, 7, 2))
time_stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
file_name = f"daily_{time_stamp}.csv"
print(f"writing data into {file_name}")
df.to_csv(file_name)
```
Output on the first run
```
% python sp500_daily_ohlc.py
DEBUG Entering download()
DEBUG Disabling multithreading because DEBUG logging enabled
DEBUG Entering history()
DEBUG Entering history()
DEBUG SPY: Yahoo GET parameters: {'period1': '2023-01-01 00:00:00-05:00', 'period2': '2024-07-02 00:00:00-04:00', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering get()
DEBUG url=https://query2.finance.yahoo.com/v8/finance/chart/SPY
DEBUG params=frozendict.frozendict({'period1': 1672549200, 'period2': 1719892800, 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'})
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG loaded persistent cookie
DEBUG reusing cookie
DEBUG crumb = 'tz63UYSUdiS'
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting get()
DEBUG SPY: yfinance received OHLC data: 2023-01-03 14:30:00 -> 2024-07-01 13:30:00
DEBUG SPY: OHLC after cleaning: 2023-01-03 09:30:00-05:00 -> 2024-07-01 09:30:00-04:00
DEBUG SPY: OHLC after combining events: 2023-01-03 00:00:00-05:00 -> 2024-07-01 00:00:00-04:00
DEBUG SPY: yfinance returning OHLC: 2023-01-03 00:00:00-05:00 -> 2024-07-01 00:00:00-04:00
DEBUG Exiting history()
DEBUG Exiting history()
DEBUG Exiting download()
writing data into daily_20240713_194042.csv
```
Output from the second run
```
% python sp500_daily_ohlc.py
DEBUG Entering download()
DEBUG Disabling multithreading because DEBUG logging enabled
DEBUG Entering history()
DEBUG Entering history()
DEBUG SPY: Yahoo GET parameters: {'period1': '2023-01-01 00:00:00-05:00', 'period2': '2024-07-02 00:00:00-04:00', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering get()
DEBUG url=https://query2.finance.yahoo.com/v8/finance/chart/SPY
DEBUG params=frozendict.frozendict({'period1': 1672549200, 'period2': 1719892800, 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'})
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG loaded persistent cookie
DEBUG reusing cookie
DEBUG crumb = 'tz63UYSUdiS'
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting get()
DEBUG SPY: yfinance received OHLC data: 2023-01-03 14:30:00 -> 2024-07-01 13:30:00
DEBUG SPY: OHLC after cleaning: 2023-01-03 09:30:00-05:00 -> 2024-07-01 09:30:00-04:00
DEBUG SPY: OHLC after combining events: 2023-01-03 00:00:00-05:00 -> 2024-07-01 00:00:00-04:00
DEBUG SPY: yfinance returning OHLC: 2023-01-03 00:00:00-05:00 -> 2024-07-01 00:00:00-04:00
DEBUG Exiting history()
DEBUG Exiting history()
DEBUG Exiting download()
writing data into daily_20240713_194050.csv
```
Differences
```
% diff daily_20240713_194042.csv daily_20240713_194050.csv | wc -l
452
```
```
% diff daily_20240713_194042.csv daily_20240713_194050.csv | head -n 20
2c2
< 2023-01-03,384.3699951171875,386.42999267578125,377.8299865722656,380.82000732421875,372.75421142578125,74850700
---
> 2023-01-03,384.3699951171875,386.42999267578125,377.8299865722656,380.82000732421875,372.7542724609375,74850700
4,5c4,5
< 2023-01-05,381.7200012207031,381.8399963378906,378.760009765625,379.3800048828125,371.3447265625,76970500
< 2023-01-06,382.6099853515625,389.25,379.4100036621094,388.0799865722656,379.8604431152344,104189600
---
> 2023-01-05,381.7200012207031,381.8399963378906,378.760009765625,379.3800048828125,371.3447570800781,76970500
> 2023-01-06,382.6099853515625,389.25,379.4100036621094,388.0799865722656,379.8605041503906,104189600
7c7
< 2023-01-10,387.25,390.6499938964844,386.2699890136719,390.5799865722656,382.30755615234375,65358100
---
> 2023-01-10,387.25,390.6499938964844,386.2699890136719,390.5799865722656,382.3075256347656,65358100
11,13c11,13
< 2023-01-17,398.4800109863281,400.2300109863281,397.05999755859375,397.7699890136719,389.3452453613281,62677300
< 2023-01-18,399.010009765625,400.1199951171875,391.2799987792969,391.489990234375,383.1982116699219,99632300
< 2023-01-19,389.3599853515625,391.0799865722656,387.260009765625,388.6400146484375,380.40869140625,86958900
---
> 2023-01-17,398.4800109863281,400.2300109863281,397.05999755859375,397.7699890136719,389.34521484375,62677300
```
### Bad data proof
_No response_
### `yfinance` version
0.2.40
### Python version
3.12.3
### Operating system
Debian GNU/Linux 12 (bookworm) | open | 2024-07-13T23:46:33Z | 2025-02-20T19:05:22Z | https://github.com/ranaroussi/yfinance/issues/1982 | [] | KamarajuKusumanchi | 6 |
miguelgrinberg/python-socketio | asyncio | 270 | confused about the port param | Hi, miguelgrinberg, thanks for your great work!
I'm confused about how to use the port in a Flask project. For example:

in `__init__.py`, some part of the code:

```python
from flask import Flask
from flask_socketio import SocketIO
from werkzeug.middleware.proxy_fix import ProxyFix

socket_io = SocketIO()

def create_app(config=None):
    app = Flask(__name__)
    app.wsgi_app = ProxyFix(app.wsgi_app)
    socket_io.init_app(app)
```

in `views.py`, some part of the code:

```python
from flask import render_template, session
from flask_socketio import emit

@app.route('/')
def index():
    return render_template('index.html', async_mode=socket_io.async_mode)

@app.route('/test')
def test():
    return 'hello world'

@socket_io.on('my_event', namespace='/test')
def test_message(message):
    print('message: {}'.format(message))
    session['receive_count'] = session.get('receive_count', 0) + 1
    emit('my_response',
         {'data': message['data'], 'count': session['receive_count']})
```

in `manage.py`, some part of the code:

```python
from flask_script import Manager
from auction import create_app
from auction import socket_io

app = create_app()
manager = Manager(app)
manager.add_command('run', socket_io.run(app=app, host='127.0.0.1', port=5000))

if __name__ == '__main__':
    manager.run()
```
My question is that port 5000 is used both for the WebSocket connection and for the other Flask routes.
That is to say, when a request is not a WebSocket request, it communicates over port 5000 (for example: 127.0.0.1:5000/test), and
a WebSocket request communicates over port 5000 too (for example, when we call the emit function).
Can I separate the port for the Flask routes from the port for the WebSocket traffic?
For example, when the request is a regular Flask web route, use port 5002;
when the request is a WebSocket, such as when we call emit, use port 5000.
| closed | 2019-03-14T09:03:46Z | 2019-06-30T15:15:08Z | https://github.com/miguelgrinberg/python-socketio/issues/270 | [
"question"
] | chendongxtu | 5 |
sherlock-project/sherlock | python | 1,461 | python3 -m pip install -r requirements.txt | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Put x into all boxes (like this [x]) once you have completed what they say.
Make sure complete everything in the checklist.
-->
- [ ] I'm reporting a bug in Sherlock's functionality
- [ ] The bug I'm reporting is not a false positive or a false negative
- [ ] I've verified that I'm running the latest version of Sherlock
- [ ] I've checked for similar bug reports including closed ones
- [ ] I've checked for pull requests that attempt to fix this bug
## Description
<!--
Provide a detailed description of the bug that you have found in Sherlock.
Provide the version of Sherlock you are running.
-->
WRITE DESCRIPTION HERE | closed | 2022-09-13T09:16:24Z | 2022-09-25T23:45:20Z | https://github.com/sherlock-project/sherlock/issues/1461 | [
"bug"
] | atoninlove | 1 |
HumanSignal/labelImg | deep-learning | 66 | result xml not readable by devkit | Hi, I annotated some results but are unreadable by official devkit.
I think the problem is, in the xml file spaces are used instead of tabs (or no space, because I got rid of all spaces by hand and is then readable by VOC devkit).
I wonder if you have a solution for that, thank you. | closed | 2017-03-11T20:35:37Z | 2018-05-27T18:12:30Z | https://github.com/HumanSignal/labelImg/issues/66 | [
"question"
] | nuoma | 4 |
napari/napari | numpy | 7,692 | [test-bot] pip install --pre is failing | The --pre Test workflow failed on 2025-03-12 12:19 UTC
The most recent failing test was on windows-latest py3.13 pyqt5
with commit: 6364f5c2902be6f1e28ffc2e95142e81a8a4bae2
Full run: https://github.com/napari/napari/actions/runs/13810969183
(This post will be updated if another test fails, as long as this issue remains open.)
| closed | 2025-03-12T12:19:05Z | 2025-03-12T19:37:13Z | https://github.com/napari/napari/issues/7692 | [
"bug"
] | github-actions[bot] | 1 |
deezer/spleeter | tensorflow | 720 | [Bug] Docs are inconsistent with respect to install | - [x] I didn't find a similar issue already open. (I did but doesn't seem like anything has been addressed on the situation)
- [x] I read the documentation (README AND Wiki)
- [x] I have installed FFMpeg
- [x] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others)
## Description
The documentation gives conflicting messages about whether or not to use Conda to install dependencies. If Conda is "not recommended", as the docs now indicate, the steps for an alternative install should be laid out (i.e., instructions for installing the dependencies with pip). `pip install spleeter` alone is not sufficient.
## Step to reproduce
1. Visit the README or the install document with no prior context and notice that Conda is not recommended, yet conda commands are still present in the document.
| closed | 2022-01-28T21:22:41Z | 2022-01-31T18:48:46Z | https://github.com/deezer/spleeter/issues/720 | [
"bug",
"invalid"
] | mattpetters | 3 |
babysor/MockingBird | deep-learning | 289 | Sharing two trained synthesizer models | Both models were trained with the latest code; there is no need to switch back to 0.0.1.
The first model, synthesizer-merged_110k, was jointly trained on the four datasets supported by the code (aidatatang_200zh, magicdata, aishell3, data_aishell). learning rate = 0.001 with no decay, batch size = 128, iteration = 110k.
The second model, synthesizer-zhvoice_170k, was trained on the [zhvoice](https://github.com/fighting41love/zhvoice) dataset. learning rate = 0.001 with no decay, batch size = 128, iteration = 170k.
I have tested both models and they are usable, though the first seems a bit better than the second. I suspect the problem lies with the vocoder: the vocoder I am currently using (wavernn) was not trained on the zhvoice dataset, and I am too lazy to train another vocoder. That said, hifigan gives roughly similar results, just with a noticeably different timbre, which is quite interesting.
As for training, both models could still be optimized further. The first model's loss is currently around 0.24 and the second's around 0.22, but it was taking too long, so I stopped training.
Download links:
[Baidu Cloud](https://pan.baidu.com/s/1Gt2MQydfrreBi4htYhSUCQ) password: ir90
[Google Drive](https://drive.google.com/drive/folders/10LDxmZOto9ehPbZgTyvY2NzPEjHS4qdG?usp=sharing) | open | 2021-12-22T14:17:42Z | 2024-01-17T14:04:50Z | https://github.com/babysor/MockingBird/issues/289 | [
"documentation",
"enhancement"
] | wrk226 | 13 |
ClimbsRocks/auto_ml | scikit-learn | 369 | Error running tutorial example | I'm trying to run the sample code provided here:
And I'm getting this error:
```
Welcome to auto_ml! We're about to go through and make sense of your data using machine learning, and give you a production-ready pipeline to get predictions with.
If you have any issues, or new feature ideas, let us know at http://auto.ml
Now using the model training_params that you passed in:
{}
After overwriting our defaults with your values, here are the final params that will be used to initialize the model:
{'presort': False, 'learning_rate': 0.1, 'warm_start': True}
Running basic data cleaning
Fitting DataFrameVectorizer
Now using the model training_params that you passed in:
{}
After overwriting our defaults with your values, here are the final params that will be used to initialize the model:
{'presort': False, 'learning_rate': 0.1, 'warm_start': True}
********************************************************************************************
About to fit the pipeline for the model GradientBoostingRegressor to predict MEDV
Started at:
2018-01-04 14:22:46
[1] random_holdout_set_from_training_data's score is: -8.721
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-45-2b110959d739> in <module>()
11 ml_predictor = Predictor(type_of_estimator='regressor', column_descriptions=column_descriptions)
12
---> 13 ml_predictor.train(df_train)
14
15 ml_predictor.score(df_test, df_test.MEDV)
C:\ProgramData\Anaconda3\lib\site-packages\auto_ml\predictor.py in train(***failed resolving arguments***)
632
633 # This is our main logic for how we train the final model
--> 634 self.trained_final_model = self.train_ml_estimator(self.model_names, self._scorer, X_df, y)
635
636 if self.ensemble_config is not None and len(self.ensemble_config) > 0:
C:\ProgramData\Anaconda3\lib\site-packages\auto_ml\predictor.py in train_ml_estimator(self, estimator_names, scoring, X_df, y, feature_learning, prediction_interval)
1212 # Use Case 1: Super straightforward: just train a single, non-optimized model
1213 elif (feature_learning == True and self.optimize_feature_learning != True) or (len(estimator_names) == 1 and self.optimize_final_model != True):
-> 1214 trained_final_model = self.fit_single_pipeline(X_df, y, estimator_names[0], feature_learning=feature_learning, prediction_interval=False)
1215
1216 # Use Case 2: Compare a bunch of models, but don't optimize any of them
C:\ProgramData\Anaconda3\lib\site-packages\auto_ml\predictor.py in fit_single_pipeline(self, X_df, y, model_name, feature_learning, prediction_interval)
837 print(start_time)
838
--> 839 ppl.fit(X_df, y)
840
841 if self.verbose:
C:\ProgramData\Anaconda3\lib\site-packages\auto_ml\utils_model_training.py in fit(self, X, y)
266
267 self.model.set_params(n_estimators=num_iter, warm_start=warm_start)
--> 268 self.model.fit(X_fit, y)
269
270 if self.training_prediction_intervals == True:
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\ensemble\gradient_boosting.py in fit(self, X, y, sample_weight, monitor)
1005 self.estimators_.shape[0]))
1006 begin_at_stage = self.estimators_.shape[0]
-> 1007 y_pred = self._decision_function(X)
1008 self._resize_state()
1009
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\ensemble\gradient_boosting.py in _decision_function(self, X)
1123 # not doing input validation.
1124 score = self._init_decision_function(X)
-> 1125 predict_stages(self.estimators_, X, self.learning_rate, score)
1126 return score
1127
TypeError: Argument 'X' has incorrect type (expected numpy.ndarray, got csr_matrix)
```
sklearn version: 0.18
After upgrading sklearn to 0.19 it worked, but shouldn't this be a requirement for the pip install? | closed | 2018-01-04T16:27:28Z | 2018-02-09T01:33:37Z | https://github.com/ClimbsRocks/auto_ml/issues/369 | [] | vabatista | 1
explosion/spaCy | machine-learning | 13,640 | Could we release a new version of spacy-transformer? | Right now, it seems we have already relaxed the transformers version requirement to <4.42.0 in https://github.com/explosion/spacy-transformers/pull/418. Could we release a new version, v1.3.6?
cc @danieldk | open | 2024-09-27T21:46:40Z | 2024-09-27T21:46:40Z | https://github.com/explosion/spaCy/issues/13640 | [] | xingjianan | 0 |
davidteather/TikTok-Api | api | 992 | [BUG] - Cannot get exactly user likes, follows. | **Describe the bug**
The user information do not return exactly user likes, follows when the number is more than 1M.
Have anyone facing this issue? How can I fix it? Thank you
| closed | 2023-01-26T01:58:38Z | 2023-08-08T21:55:08Z | https://github.com/davidteather/TikTok-Api/issues/992 | [
"bug"
] | sinhpn92 | 2 |
yeongpin/cursor-free-vip | automation | 157 | curl: (22) The requested URL returned error: 404 | curl: (22) The requested URL returned error: 404
| closed | 2025-03-07T13:22:33Z | 2025-03-10T03:47:00Z | https://github.com/yeongpin/cursor-free-vip/issues/157 | [] | yigehaozi | 1 |
ansible/awx | django | 15,018 | Custom Login Info not showing | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
Custom HTML placed in `Custom Login Info` is not displayed while the new UI Tech Preview is enabled.
### AWX version
23.9.0
### Select the relevant components
- [ ] UI
- [X] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
2.15.8
### Operating system
Ubuntu
### Web browser
Chrome
### Steps to reproduce
1. Configure a custom login info under **Administration** > **Settings** > Custom Login Info (or through the `awx.awx.settings` module setting `CUSTOM_LOGIN_INFO`).
2. Save and logout to view login screen.
### Expected results
Visible output from value placed in `CUSTOM_LOGIN_INFO` setting.
### Actual results
Value of `CUSTOM_LOGIN_INFO` is not displayed
### Additional information
This is only reproducible in Tech preview. When toggling back to normal/old UI you can see it. | open | 2024-03-21T17:13:36Z | 2024-04-03T17:40:00Z | https://github.com/ansible/awx/issues/15018 | [
"type:bug",
"component:ui",
"community",
"component:ui_next"
] | straylight | 0 |
opengeos/streamlit-geospatial | streamlit | 86 | The Streamlit app can not run. | 
| closed | 2022-10-05T02:52:26Z | 2022-10-05T04:04:18Z | https://github.com/opengeos/streamlit-geospatial/issues/86 | [] | NCUEGEO42 | 0 |
scikit-image/scikit-image | computer-vision | 6,872 | Add y-shear; consider 3D shear to AffineTransform | ### Description:
See https://github.com/scikit-image/scikit-image/pull/6717#issuecomment-1496684530
### Way to reproduce:
_No response_
### Version information:
_No response_ | open | 2023-04-05T19:51:43Z | 2023-09-16T14:09:10Z | https://github.com/scikit-image/scikit-image/issues/6872 | [
":fast_forward: type: Enhancement",
":bug: Bug"
] | stefanv | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 705 | A s s e r t i o n f a i l e d ! | `python K:\cloning\Real-Time-Voice-Cloning-master\Real-Time-Voice-Cloning-master\demo_toolbox.py`
A s s e r t i o n f a i l e d !
P r o g r a m : c : \ p y t h o n 3 8 \ p y t h o n . e x e
F i l e : s r c / h o s t a p i / w d m k s / p a _ w i n _ w d m k s . c , L i n e 1 0 8 1
E x p r e s s i o n : F A L S E | closed | 2021-03-16T17:50:20Z | 2021-04-09T16:50:05Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/705 | [] | FedericoFedeFede | 3 |
pytest-dev/pytest-selenium | pytest | 75 | Possible to only create a report on failure | Hi,
I wonder if it's possible to only generate a report when a test returns as a failure?
I had a look at the docs and only saw info relating to `selenium_capture_debug`.
thanks
| closed | 2016-08-03T17:14:19Z | 2016-08-25T13:28:41Z | https://github.com/pytest-dev/pytest-selenium/issues/75 | [] | allankilpatrick | 1 |
Farama-Foundation/Gymnasium | api | 574 | [Proposal] Numeric is in a Box Space | ### Proposal
I think it makes sense that a single numerical value could be in a box space. For example, suppose I have space
``` python
from gymnasium.spaces import Box

space = Box(0, 4, (1,))
a = space.sample()[0] # This will give me a single numerical value
a in space # This will be false
```
A single number is not in a Box space. This fails because np.asarray(SOME_NUMBER) produces a 0-dimensional array.
### Motivation
I write simulations that rely on Box observation spaces of one dimension. It's natural for me that those simulations just output a single numerical value for observation instead of an array or list of that value.
### Pitch
Change the `contains` function in `Box` to allow for single numbers when the Box space has shape `(1,)`.
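For illustration, here is a pure-NumPy sketch of the proposed promotion rule (my own hypothetical helper, not Gymnasium's actual `Box.contains` code):

```python
import numpy as np

low, high = 0.0, 4.0  # bounds of a hypothetical Box(0, 4, (1,))

def contains_scalar_friendly(x):
    arr = np.asarray(x, dtype=np.float64)
    if arr.ndim == 0:        # a plain number: promote it to shape (1,)
        arr = arr.reshape(1)
    return arr.shape == (1,) and bool(np.all(low <= arr) and np.all(arr <= high))

print(contains_scalar_friendly(2.5))              # True
print(contains_scalar_friendly(np.array([2.5])))  # True, array inputs unchanged
print(contains_scalar_friendly(7.0))              # False, out of bounds
```

Applying the same idea inside `contains` would make `a in space` succeed for scalars while leaving array inputs untouched.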
### Alternatives
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2023-06-28T19:36:06Z | 2023-07-27T18:50:31Z | https://github.com/Farama-Foundation/Gymnasium/issues/574 | [
"enhancement"
] | rusu24edward | 2 |
joerick/pyinstrument | django | 283 | Console renderer has much more detail than speedscope renderer | Are there different pruning settings? What can I set in the speedscope renderer to get the same level of granularity? | closed | 2023-12-05T06:14:17Z | 2023-12-11T21:21:46Z | https://github.com/joerick/pyinstrument/issues/283 | [] | MisterTea | 1 |
lepture/authlib | flask | 81 | authlib 0.9: MismatchingStateError: mismatching_state | We just discovered that our login page (using Auth0) was broken because of the latest authlib 0.9. We're using Flask 0.12.2. Reverting authlib to 0.8 solves the problem. Our requirements.txt is set to use the latest version of authlib.
The following error message was reported in our logs:
```
File "<DELETED_PATH>/lib/authlib/flask/client/oauth.py", line 248, in authorize_access_token
params = _generate_oauth2_access_token_params(self.name)
File "<DELETED_PATH>/lib/authlib/flask/client/oauth.py", line 270, in _generate_oauth2_access_token_params
raise MismatchingStateError()
MismatchingStateError: mismatching_state: CSRF Warning! State not equal in request and response.
```
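Until the regression is resolved, a simple workaround (assuming a pip-based deployment) is to pin the known-good version in `requirements.txt` instead of floating on the latest release:

```
Authlib==0.8
```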
| closed | 2018-08-13T08:39:37Z | 2018-10-05T00:18:33Z | https://github.com/lepture/authlib/issues/81 | [] | roku6185 | 10 |
explosion/spaCy | deep-learning | 13652 | No compatible packages found for v3.8.2 of spaCy | It seems like the recently pushed `3.8.2` version has some issues downloading models.
```
python -m spacy download en_core_web_md
✘ No compatible package found for 'en-core-web-md' (spaCy v3.8.2)
```
Here's my system info.
```
C:\Users\victim\AppData\Local\Programs\Python\Python312\Lib\site-packages\spacy\util.py:910: UserWarning: [W095] Model 'en_core_web_md' (3.7.1) was trained with spaCy v3.7.2 and may not be 100% compatible with the current version (3.8.2). If you see errors or degraded performance, download a newer compatible model or retrain your custom model with the current spaCy version. For more details and available updates, run: python -m spacy validate
warnings.warn(warn_msg)
============================== Info about spaCy ==============================
spaCy version 3.8.2
Location C:\Users\victim\AppData\Local\Programs\Python\Python312\Lib\site-packages\spacy
Platform Windows-11-10.0.22631-SP0
Python version 3.12.6
Pipelines en_core_web_md (3.7.1)
```
Issue fix: Just make it en_core_web_md instead of en-core-web-md | closed | 2024-10-04T09:35:19Z | 2024-11-04T00:03:06Z | https://github.com/explosion/spaCy/issues/13652 | [] | HydraDragonAntivirus | 1 |
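A hedged workaround while the compatibility lookup is broken (assuming the release URL pattern below is still current for your spaCy version) is to install a pinned model wheel directly, bypassing `spacy download` entirely:

```shell
# Install the model wheel straight from the spacy-models releases page,
# sidestepping the compatibility-table lookup that 'spacy download' performs.
pip install "https://github.com/explosion/spacy-models/releases/download/en_core_web_md-3.8.0/en_core_web_md-3.8.0-py3-none-any.whl"
```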
SYSTRAN/faster-whisper | deep-learning | 301 | Using of Triton | Hello guys,
Do you think it is possible to optimize faster-whisper with Triton?
Thanks a lot,
AlexG. | closed | 2023-06-14T07:48:50Z | 2023-07-21T08:19:42Z | https://github.com/SYSTRAN/faster-whisper/issues/301 | [] | AlexandderGorodetski | 1 |