| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
scanapi/scanapi | rest-api | 228 | Change `project-name`, `hide-request` and `hide-response` to use underscore | Change `project-name`, `hide-request` and `hide-response` to use underscore
- `project_name` instead of `project-name`
- `hide_request` instead of `hide-request`
- `hide_response` instead of `hide-response`
https://github.com/scanapi/scanapi/blob/55e30aa92aed6a3a1131e35f357aff2fbdd61bc8/scanapi/reporter.py#L36
https://github.com/scanapi/scanapi/blob/55e30aa92aed6a3a1131e35f357aff2fbdd61bc8/scanapi/hide_utils.py#L14-L15 | closed | 2020-07-25T20:11:02Z | 2020-07-26T13:25:56Z | https://github.com/scanapi/scanapi/issues/228 | [
"Good First Issue"
] | camilamaia | 1 |
healthchecks/healthchecks | django | 887 | Is this repo presently open for contribution? | I want to confirm whether this repository is open for open-source contribution or not. | closed | 2023-08-30T07:22:49Z | 2023-08-30T07:37:15Z | https://github.com/healthchecks/healthchecks/issues/887 | [] | Kunal7069 | 4 |
cvat-ai/cvat | pytorch | 9,100 | SAM NUCLIO - TypeError: 'NoneType' object is not subscriptable | Hello Team,
I have deployed cvat and nuclio.
I have deployed serverless/pytorch/facebookresearch/sam/ using ./serverless/deploy_gpu.sh ./serverless/pytorch/facebookresearch/sam
`sudo nuctl get functions`

| NAMESPACE | NAME | PROJECT | STATE | REPLICAS | NODE PORT |
|---|---|---|---|---|---|
| nuclio | onnx-wongkinyiu-yolov7 | cvat | ready | 1/1 | 32768 |
| nuclio | pth-facebookresearch-sam-vit-h | cvat | ready | 1/1 | 32769 |
I can see the model in the Nuclio console as well, but I am getting errors in the logs, so I am not able to use it in CVAT.
25.02.12 18:19:01.293 (E) sor.http.w0.python.logger Exception caught in handler {"exc": "'NoneType' object is not subscriptable", "traceback": "Traceback (most recent call last):\n File \"/opt/nuclio/_nuclio_wrapper.py\", line 151, in serve_requests\n await self._handle_event(event)\n File \"/opt/nuclio/_nuclio_wrapper.py\", line 439, in _handle_event\n entrypoint_output = self._entrypoint(self._context, event)\n File \"/opt/nuclio/main.py\", line 20, in handler\n buf = io.BytesIO(base64.b64decode(data[\"image\"]))\nTypeError: 'NoneType' object is not subscriptable\n", "worker_id": "0"}
25.02.12 18:19:01.321 (I) sor.http.w0.python.logger call handler {"worker_id": "0"}
25.02.12 18:19:01.321 (E) sor.http.w0.python.logger Exception caught in handler {"traceback": "Traceback (most recent call last):\n File \"/opt/nuclio/_nuclio_wrapper.py\", line 151, in serve_requests\n await self._handle_event(event)\n File \"/opt/nuclio/_nuclio_wrapper.py\", line 439, in _handle_event\n entrypoint_output = self._entrypoint(self._context, event)\n File \"/opt/nuclio/main.py\", line 20, in handler\n buf = io.BytesIO(base64.b64decode(data[\"image\"]))\nTypeError: 'NoneType' object is not subscriptable\n", "worker_id": "0", "exc": "'NoneType' object is not subscriptable"}
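For what it's worth, the traceback shows `main.py` indexing `data["image"]` while `data` is `None`, i.e. the request body did not deserialize into a dict. Below is a minimal defensive sketch of the handler, assuming the stock CVAT SAM function layout; the guard and the `Response` usage are illustrative, not the upstream fix:

```python
# hypothetical guard for /opt/nuclio/main.py; assumes nuclio's
# (context, event) handler contract and JSON request bodies
import base64
import io
import json

def handler(context, event):
    data = event.body
    if isinstance(data, (bytes, str)):       # decode raw payloads first
        data = json.loads(data or "{}")
    if not data or "image" not in data:      # this is the case that crashes
        return context.Response(body='missing "image" field',
                                 status_code=400)
    buf = io.BytesIO(base64.b64decode(data["image"]))
    # ... run SAM on the decoded image as the original handler does ...
```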
Please let me know what can be done ? | closed | 2025-02-12T18:38:37Z | 2025-02-18T05:52:58Z | https://github.com/cvat-ai/cvat/issues/9100 | [
"need info"
] | Devendranathashok | 1 |
python-restx/flask-restx | api | 287 | Default values within models? | Is there a way to have a default value within a model that is always added to the data regardless of whether the payload contains it (i.e. if age is not within the payload, the default value applies and it can still be accessed)?
Example of a model with a default value (as per https://flask-restx.readthedocs.io/en/latest/marshalling.html?highlight=model#default-values); however, age is not accessible if it's not part of the payload.
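A hedged workaround sketch, assuming the `custom_model` definition shown below: running the incoming payload through `flask_restx.marshal` back-fills field defaults for missing keys (the route and resource names here are made up for illustration):

```python
# hypothetical resource; marshal() fills in defaults such as age=0
from flask import request
from flask_restx import Resource, marshal

@api.route('/v1/custom')
class Custom(Resource):
    @api.expect(custom_model)
    def post(self):
        data = marshal(request.get_json(), custom_model)
        return data  # data['age'] == 0 when 'age' was absent from the payload
```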
```
custom_model = api.model(
'v1_CustomModel',
{
'name': fields.String(
required=True,
example='safe'
),
'age': fields.Integer(
required=False,
example=123,
default=0
)
})
``` | closed | 2021-02-25T15:42:33Z | 2021-09-12T21:03:50Z | https://github.com/python-restx/flask-restx/issues/287 | [
"question"
] | safe | 0 |
dpgaspar/Flask-AppBuilder | flask | 1,802 | Search/Filter by the same column multiple times in model's view | ### Environment
Flask-Appbuilder version: 3.4.4
Hi, this might not be an issue (not sure), rather a question. I searched for a solution for hours but didn't manage to find one.
I have a String column defined in a model and would like to filter by this column, BUT for multiple values. Is this even possible?

### Describe the expected results
I would expect this to return two rows, where the key (node in this case) is abc and def respectively.
### Describe the actual results
However, I got a "No records found" message, even though objects with node=abc and node=def are definitely present.

I suppose it's doing an AND operation internally; in that case it's correct behavior, but I tested this use case in Airflow and it works the way I want there. So I am a bit confused about what I am doing wrong, or whether Airflow just made some tweaks which I was unable to find.
Nevertheless I just want to achieve the behavior explained above in any way, so the question would be how can I achieve it? Is there any approach I can use to filter for multiple keys at once?
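For reference, a hedged sketch of one possible approach using Flask-AppBuilder's SQLA filters; `FilterInFunction` takes a callable returning the allowed values (the model and view names below are made up):

```python
# hypothetical view restricting the list to a set of node values;
# FilterInFunction is assumed available in flask_appbuilder's sqla filters
from flask_appbuilder import ModelView
from flask_appbuilder.models.sqla.filters import FilterInFunction
from flask_appbuilder.models.sqla.interface import SQLAInterface

class NodeView(ModelView):
    datamodel = SQLAInterface(NodeModel)  # NodeModel stands in for the real model
    base_filters = [["node", FilterInFunction, lambda: ["abc", "def"]]]
```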
Thank you | open | 2022-02-10T15:53:51Z | 2024-10-02T07:19:08Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1802 | [] | jurovee | 1 |
akfamily/akshare | data-science | 5,747 | How can akshare fetch today's money-flow data (small/medium/large/extra-large order inflows and outflows) for a single stock? | How can akshare fetch today's money-flow data for a single stock, broken down into inflows and outflows for small, medium, large, and extra-large orders? | closed | 2025-02-28T12:39:18Z | 2025-02-28T12:50:20Z | https://github.com/akfamily/akshare/issues/5747 | [] | nashstar | 0 |
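For the akshare question above, a hedged sketch of the usual answer, assuming the `stock_individual_fund_flow` interface present in recent akshare releases (the function name, parameters, and returned columns are from memory and may differ across versions):

```python
# assumed akshare API; verify the name/signature against your installed version
import akshare as ak

# per-stock daily fund flow; the last row is the latest session, with
# columns for small/medium/large/extra-large order inflows and outflows
df = ak.stock_individual_fund_flow(stock="000001", market="sz")
print(df.tail(1))
```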
SYSTRAN/faster-whisper | deep-learning | 784 | faster-whisper docker example? | Does anyone have an example Dockerfile for running faster-whisper in Docker? | closed | 2024-04-09T00:09:37Z | 2024-04-09T15:14:03Z | https://github.com/SYSTRAN/faster-whisper/issues/784 | [] | silvacarl2 | 2 |
Guovin/iptv-api | api | 163 | Could you package a Docker image? | Hi boss, I've been using the program all along, but GitHub can no longer be used, and local debugging keeps running into problems. Could you take the trouble to package a Docker image? That way running it locally would be perfect, and it would also save all users most of the installation and debugging time | closed | 2024-06-25T01:20:35Z | 2024-07-02T03:59:50Z | https://github.com/Guovin/iptv-api/issues/163 | [
"enhancement"
] | vbskycn | 1 |
huggingface/datasets | numpy | 7,295 | [BUG]: Streaming from S3 triggers `unexpected keyword argument 'requote_redirect_url'` | ### Describe the bug
Note that this bug is only triggered when `streaming=True`. #5459 introduced always calling fsspec with `client_kwargs={"requote_redirect_url": False}`, which seems to have incompatibility issues even in the newest versions.
Analysis of what's happening:
1. `datasets` passes the `client_kwargs` through `fsspec`
2. `fsspec` passes the `client_kwargs` through `s3fs`
3. `s3fs` passes the `client_kwargs` to `aiobotocore` which uses `aiohttp`
```
s3creator = self.session.create_client(
"s3", config=conf, **init_kwargs, **client_kwargs
)
```
4. The `session` tries to create an `aiohttp` session, but the `**kwargs` are no longer forwarded as a bundle; `requote_redirect_url` and `trust_env` end up passed as individual keyword arguments that `AioSession._create_client()` does not accept.
Error:
```
Traceback (most recent call last):
File "/Users/cxrh/Documents/GitHub/nlp_foundation/nlp_train/test.py", line 14, in <module>
batch = next(iter(ds))
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1353, in __iter__
for key, example in ex_iterable:
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 255, in __iter__
for key, pa_table in self.generate_tables_fn(**self.kwargs):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py", line 78, in _generate_tables
for file_idx, file in enumerate(itertools.chain.from_iterable(files)):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 840, in __iter__
yield from self.generator(*self.args, **self.kwargs)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 921, in _iter_from_urlpaths
elif xisdir(urlpath, download_config=download_config):
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 305, in xisdir
return fs.isdir(inner_path)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/spec.py", line 721, in isdir
return self.info(path)["type"] == "directory"
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/archive.py", line 38, in info
self._get_dirs()
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/filesystems/compression.py", line 64, in _get_dirs
f = {**self.file.fs.info(self.file.path), "name": self.uncompressed_name}
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 118, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync
raise return_result
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner
result[0] = await coro
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 1302, in _info
out = await self._call_s3(
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 341, in _call_s3
await self.set_session()
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 524, in set_session
s3creator = self.session.create_client(
File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/aiobotocore/session.py", line 114, in create_client
return ClientCreatorContext(self._create_client(*args, **kwargs))
TypeError: AioSession._create_client() got an unexpected keyword argument 'requote_redirect_url'
```
### Steps to reproduce the bug
1. Install the necessary libraries, with `datasets` required to be at least 2.19.0:
```
pip install s3fs fsspec aiohttp aiobotocore botocore 'datasets>=2.19.0'
```
2. Run this code:
```
from datasets import load_dataset
ds = load_dataset(
"json",
data_files="s3://your_path/*.jsonl.gz",
streaming=True,
split="train",
)
batch = next(iter(ds))
print(batch)
```
3. You get the `unexpected keyword argument 'requote_redirect_url'` error.
### Expected behavior
The datasets is able to load a batch from the dataset stored on S3, without triggering this `requote_redirect_url` error.
Fix: I could work around this by directly removing `requote_redirect_url` and `trust_env`; then it loads properly.
<img width="1127" alt="image" src="https://github.com/user-attachments/assets/4c40efa9-8787-4919-b613-e4908c3d1ab2">
### Environment info
- `datasets` version: 3.1.0
- Platform: macOS-15.1-arm64-arm-64bit
- Python version: 3.10.15
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | open | 2024-11-19T12:23:36Z | 2024-11-19T13:01:53Z | https://github.com/huggingface/datasets/issues/7295 | [] | casper-hansen | 0 |
MagicStack/asyncpg | asyncio | 1,169 | Not accepting str as value for interval/date columns | <!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.29.0
* **PostgreSQL version**: 15.7
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: local
* **Python version**: 3.11
* **Platform**: Windows
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: Yes
* **If you built asyncpg locally, which version of Cython did you use?**: N/A
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: yes
I am using asyncpg as the driver for SQLAlchemy.
```
from datetime import timedelta
from sqlalchemy.orm import Mapped

class Table(Base):  # Base is the usual declarative base
    __tablename__ = 'table'
    interval: Mapped[timedelta]

session.add(Table(interval='1 min'))
```
With the above code, it throws this error:
```sqlalchemy.exc.DBAPIError: (sqlalchemy.dialects.postgresql.asyncpg.Error) <class 'asyncpg.exceptions.DataError'>: invalid input for query argument $9: '1 min' ('str' object has no attribute 'days')```
However, Postgresql accepts `'1 min'` as a value to an interval column. Why is asyncpg forbidding str even though Postgresql accepts it in this case? | open | 2024-07-25T07:59:14Z | 2025-02-05T08:16:23Z | https://github.com/MagicStack/asyncpg/issues/1169 | [] | riwu | 2 |
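For the asyncpg question above, a hedged workaround sketch: the error ('str' object has no attribute 'days') suggests asyncpg's interval codec expects a `timedelta`, so either convert in Python or push the string through an explicit SQL cast so the server parses it (both snippets are illustrative):

```python
# option 1: hand asyncpg what its interval codec expects
from datetime import timedelta
session.add(Table(interval=timedelta(minutes=1)))

# option 2: let Postgres parse the string via an explicit cast (raw SQL sketch)
from sqlalchemy import text
session.execute(text('UPDATE "table" SET interval = CAST(:i AS interval)'),
                {"i": "1 min"})
```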
davidsandberg/facenet | tensorflow | 723 | Dropout keep_prob for training with VGGFace2 should be 0.6 rather than 0.4. | I noticed davidsandberg is reproducing ArcFace. The training parameters are set to be identical to ArcFace.
But in ArcFace, the author claimed "In this paper, the dropout parameter is set as 0.4". The author is using MXNet, and the dropout parameter in MXNet is the probability to drop.
So in TensorFlow, the keep_prob for dropout should be 0.6, rather than 0.4. | closed | 2018-04-25T02:07:44Z | 2019-09-02T09:29:13Z | https://github.com/davidsandberg/facenet/issues/723 | [] | dragonasc2 | 9 |
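A one-line sanity check of the conversion for the facenet issue above (purely illustrative): MXNet's `Dropout(p)` takes the probability to drop, while TF1-style dropout takes the probability to keep, so the two settings are complements.

```python
# MXNet Dropout(p): p is the DROP probability (ArcFace sets 0.4)
# TF1-style dropout keep_prob: probability to KEEP activations
mxnet_drop_p = 0.4
tf_keep_prob = 1.0 - mxnet_drop_p   # -> 0.6
```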
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,256 | [Bug]: Networkx version error on Ubuntu 20.04 | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
On executing a freshly downloaded copy of webui.sh, it tries to grab networkx-3.2.1 which is not compatible with Python 3.8.
### Steps to reproduce the problem
1. `sudo apt install wget git python3 python3-venv`
2. `wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh > sd-webui.sh`
3. `bash sd-webui.sh`
### What should have happened?
WebUI should have been installed.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
That didn't work either:
```
$ bash sd-webui.sh --dump-sysinfo
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on ken user
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
glibc version is 2.31
Cannot locate TCMalloc. Do you have tcmalloc or google-perftool installed on your system? (improves CPU memory usage)
Traceback (most recent call last):
File "launch.py", line 48, in <module>
main()
File "launch.py", line 29, in main
filename = launch_utils.dump_sysinfo()
File "/userfiles/ken/ai/stable-diffusion-webui/modules/launch_utils.py", line 473, in dump_sysinfo
from modules import sysinfo
File "/userfiles/ken/ai/stable-diffusion-webui/modules/sysinfo.py", line 8, in <module>
import psutil
ModuleNotFoundError: No module named 'psutil'
```
### Console logs
```Shell
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on ken user
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
glibc version is 2.31
Cannot locate TCMalloc. Do you have tcmalloc or google-perftool installed on your system? (improves CPU memory usage)
Python 3.8.10 (default, Mar 25 2024, 10:42:49)
[GCC 9.4.0]
Version: v1.9.4
Commit hash: feee37d75f1b168768014e4634dcb156ee649c05
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting torch==2.1.2
Using cached https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp38-cp38-linux_x86_64.whl (2200.7 MB)
Collecting torchvision==0.16.2
Using cached https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp38-cp38-linux_x86_64.whl (6.9 MB)
Collecting networkx
Using cached https://download.pytorch.org/whl/networkx-3.2.1-py3-none-any.whl (1.6 MB)
ERROR: Package 'networkx' requires a different Python: 3.8.10 not in '>=3.9'
Traceback (most recent call last):
File "launch.py", line 48, in <module>
main()
File "launch.py", line 39, in main
prepare_environment()
File "/userfiles/ken/ai/stable-diffusion-webui/modules/launch_utils.py", line 380, in prepare_environment
run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
File "/userfiles/ken/ai/stable-diffusion-webui/modules/launch_utils.py", line 115, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install torch.
Command: "/userfiles/ken/ai/stable-diffusion-webui/venv/bin/python3" -m pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url https://download.pytorch.org/whl/cu121
Error code: 1
```
### Additional information
See also https://github.com/mlcommons/inference/issues/1697 which seems to reference the same bug.
I was able to install torch manually with `pip3 install torch torchvision torchaudio`, but I can't see how to translate that into your config files for the python3-venv.
"bug-report"
] | Ken-g6 | 3 |
jumpserver/jumpserver | django | 14,295 | [Question] How do I use the new Object storage feature in components? | ### Product Version
v4.0.2
### Product Edition
- [X] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [X] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
Multiple worker servers to serve the JumpServer backend
Using HAProxy to serve frontend
Using MinIO separated to use as storage
### 🤔 Question Description
<img width="1716" alt="image" src="https://github.com/user-attachments/assets/fbde6dcd-79f7-4784-b702-c1cdafddd348">
As shown in the image, I created a connection to the MinIO server and it tested OK. But when I record a video for playback, it doesn't save to the MinIO bucket (my assumption is that the video is still on the backend server).
### Expected Behavior
Video is saved to the MinIO bucket.
### Additional Information
I couldn't find "terminal" menu, to point where video resource will be saved as mentioned on guidance. | closed | 2024-10-14T04:25:01Z | 2024-11-28T03:24:14Z | https://github.com/jumpserver/jumpserver/issues/14295 | [
"⏳ Pending feedback",
"🤔 Question",
"💡 FAQ",
"🔘 Inactive"
] | Chocopediaa | 10 |
gradio-app/gradio | machine-learning | 10,825 | Import error when using load_chat | ### Describe the bug
Seems like [this import](https://github.com/gradio-app/gradio/blob/ca2e4c86ce7ddd34577ab199dd0a26ccacfea321/gradio/external.py#L807) should be `from gradio.chat_interface import ChatInterface` instead. I hit import errors otherwise. I'm pretty much just following the one-liner from https://www.gradio.app/guides/creating-a-chatbot-fast#note-for-open-ai-api-compatible-endpoints.
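Until the import is fixed, a hedged (untested) stopgap sketch: alias the module name the buggy statement expects before calling `load_chat`:

```python
# pre-register gradio under the name 'gr' so the typo'd
# 'from gr.chat_interface import ChatInterface' can resolve
import sys
import gradio
sys.modules.setdefault("gr", gradio)

import gradio as gr
gr.load_chat("http://localhost:11434/v1/", model="llama3.2", token="***").launch()
```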
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```bash
conda create --prefix ~/envs/gradio_repro python=3.12
conda activate ~/envs/gradio_repro
pip install gradio
pip install openai
# this also fails with the same error message when I provide a real endpoint
python -c "import gradio as gr; gr.load_chat('http://localhost:11434/v1/', model='llama3.2', token='***').launch()"
```
This last line gives output
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "[path_to_envs]/gradio_repro/lib/python3.12/site-packages/gradio/external.py", line 807, in load_chat
from gr.chat_interface import ChatInterface
ModuleNotFoundError: No module named 'gr'
```
This is the same if I swap the fake example endpoint for a real endpoint. When I use a real endpoint and change only the import referenced above, everything works.
### Screenshot
_No response_
### System Info
```shell
This is the output of my `pip freeze`:
aiofiles==23.2.1
annotated-types==0.7.0
anyio==4.9.0
certifi==2025.1.31
charset-normalizer==3.4.1
click==8.1.8
distro==1.9.0
fastapi==0.115.11
ffmpy==0.5.0
filelock==3.18.0
fsspec==2025.3.0
gradio==5.21.0
gradio_client==1.7.2
groovy==0.1.2
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
huggingface-hub==0.29.3
idna==3.10
Jinja2==3.1.6
jiter==0.9.0
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
numpy==2.2.4
openai==1.66.3
orjson==3.10.15
packaging==24.2
pandas==2.2.3
pillow==11.1.0
pydantic==2.10.6
pydantic_core==2.27.2
pydub==0.25.1
Pygments==2.19.1
python-dateutil==2.9.0.post0
python-multipart==0.0.20
pytz==2025.1
PyYAML==6.0.2
requests==2.32.3
rich==13.9.4
ruff==0.11.0
safehttpx==0.1.6
semantic-version==2.10.0
setuptools==75.8.0
shellingham==1.5.4
six==1.17.0
sniffio==1.3.1
starlette==0.46.1
tomlkit==0.13.2
tqdm==4.67.1
typer==0.15.2
typing_extensions==4.12.2
tzdata==2025.1
urllib3==2.3.0
uvicorn==0.34.0
websockets==15.0.1
wheel==0.45.1
```
### Severity
I can work around it | closed | 2025-03-17T20:50:34Z | 2025-03-17T22:28:10Z | https://github.com/gradio-app/gradio/issues/10825 | [
"bug"
] | william-mistral | 0 |
HumanSignal/labelImg | deep-learning | 806 | Anaconda: cannot open the labeling file, the view shows empty | (base) C:\Users\haroon>cd C:\Users\haroon\Downloads\labelImg-master
(base) C:\Users\haroon\Downloads\labelImg-master>python labelimg.py
Traceback (most recent call last):
File "labelimg.py", line 1271, in openDirDialog
self.importDirImages(targetDirPath)
File "labelimg.py", line 1282, in importDirImages
self.openNextImg()
File "labelimg.py", line 1355, in openNextImg
self.loadFile(filename)
File "labelimg.py", line 1096, in loadFile
self.showBoundingBoxFromAnnotationFile(filePath)
File "labelimg.py", line 1114, in showBoundingBoxFromAnnotationFile
filedir = filePath.split(basename)[0].split("/")[-2:-1][0]
IndexError: list index out of range
(base) C:\Users\haroon\Downloads\labelImg-master>
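For context, the failing line `filePath.split(basename)[0].split("/")[-2:-1][0]` assumes forward slashes, which a Windows path like the one above doesn't contain, so the `[-2:-1]` slice comes back empty. A hedged sketch of a separator-safe equivalent (the path is a made-up example):

```python
# robust way to get the parent directory name on both Windows and POSIX
import os

file_path = r"C:\Users\haroon\labels\img001.xml"   # hypothetical example
filedir = os.path.basename(os.path.dirname(file_path))  # -> 'labels'
```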
| open | 2021-10-21T13:15:10Z | 2021-10-21T13:15:10Z | https://github.com/HumanSignal/labelImg/issues/806 | [] | haroonriaz38 | 0 |
pytorch/pytorch | deep-learning | 149,545 | `torch.load(..., map_location="meta")` hangs indefinitely | ### 🐛 Describe the bug
Hey! We have come around a checkpoint that cannot be loaded on meta device, but is no problem to load on cpu.
Here is a min repro:
```python
from transformers.utils.hub import cached_file
import torch
# This will download the checkpoint
file = cached_file("MrLight/dse-qwen2-2b-mrl-v1", "pytorch_model.bin")
# This one hangs indefinitely
st = torch.load(file, map_location="meta", weights_only=True)
```
However, doing
```python
from transformers.utils.hub import cached_file
import torch
file = cached_file("MrLight/dse-qwen2-2b-mrl-v1", "pytorch_model.bin")
st = torch.load(file, map_location="cpu", weights_only=True)
```
works fine, and the weights seem to be correctly formed.
As the weights look fine, I'm not sure if it's an issue on your end, or if the checkpoint is corrupted in some weird way. Note that the checkpoint's default map location is "cuda:0", but this should not be an issue.
You can check out https://github.com/huggingface/transformers/issues/36803 for more information!
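In the meantime, a hedged workaround sketch: materialize on CPU first (which works, per above), then move the tensors to meta. This assumes a flat dict of tensors, which may not hold for every checkpoint:

```python
# avoid map_location="meta" entirely; convert after a CPU load
import torch

st = torch.load(file, map_location="cpu", weights_only=True)
st_meta = {k: (v.to("meta") if torch.is_tensor(v) else v) for k, v in st.items()}
```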
### Versions
torch==2.6.0
cc @mruberry @mikaylagawarecki | open | 2025-03-19T19:42:01Z | 2025-03-20T19:18:59Z | https://github.com/pytorch/pytorch/issues/149545 | [
"module: serialization",
"triaged"
] | Cyrilvallez | 0 |
PokeAPI/pokeapi | graphql | 431 | Strip whitespace from sprites | I noticed that the sprites have a great deal of unnecessary whitespace. It is difficult to use the images as provided: because the image canvas is larger than the actual picture, it is hard to position them properly. I'm sure there is some build tool which could strip all of the whitespace in one go.
Thank you!
(Note: The only images I checked were Bulbasaur and Pikachu.) | closed | 2019-05-30T14:38:20Z | 2019-05-31T04:33:13Z | https://github.com/PokeAPI/pokeapi/issues/431 | [] | binyamin | 4 |
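A hedged sketch of the kind of one-off trim pass the sprite issue above asks for, using Pillow (not an official PokeAPI build step; `getbbox()` crops to the non-zero region, which assumes the padding is fully transparent):

```python
# crop a sprite to the bounding box of its visible pixels, in place
from PIL import Image

def trim(path: str) -> None:
    img = Image.open(path).convert("RGBA")
    bbox = img.getbbox()          # None if the image is entirely empty
    if bbox:
        img.crop(bbox).save(path)
```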
miguelgrinberg/microblog | flask | 363 | Multidict Package Problematic in python:slim | I wanted to check out the Microblog application and I tried to build it with Docker, but it looks like the multidict package is problematic in `python:slim`:
```
$ docker build -t microblog:latest .
[+] Building 33.5s (7/13) docker:desktop-linux
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 374B 0.0s
=> [internal] load metadata for docker.io/library/python:slim 0.3s
=> [1/9] FROM docker.io/library/python:slim@sha256:db7e9284d53f7b827c58a6239b9d2907c33250215823b1cdb7d1e983e70da 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 3.31kB 0.0s
=> CACHED [2/9] COPY requirements.txt requirements.txt 0.0s
=> ERROR [3/9] RUN pip install -r requirements.txt 33.1s
------
...
32.65 Building wheel for multidict (pyproject.toml): started
32.80 Building wheel for multidict (pyproject.toml): finished with status 'error'
32.81 error: subprocess-exited-with-error
32.81
32.81 × Building wheel for multidict (pyproject.toml) did not run successfully.
32.81 │ exit code: 1
32.81 ╰─> [77 lines of output]
32.81 *********************
32.81 * Accelerated build *
32.81 *********************
32.81 running bdist_wheel
32.81 running build
32.81 running build_py
32.81 creating build
32.81 creating build/lib.linux-x86_64-cpython-312
32.81 creating build/lib.linux-x86_64-cpython-312/multidict
32.81 copying multidict/_abc.py -> build/lib.linux-x86_64-cpython-312/multidict
32.81 copying multidict/_multidict_base.py -> build/lib.linux-x86_64-cpython-312/multidict
32.81 copying multidict/_multidict_py.py -> build/lib.linux-x86_64-cpython-312/multidict
32.81 copying multidict/_compat.py -> build/lib.linux-x86_64-cpython-312/multidict
32.81 copying multidict/__init__.py -> build/lib.linux-x86_64-cpython-312/multidict
32.81 running egg_info
32.81 writing multidict.egg-info/PKG-INFO
32.81 writing dependency_links to multidict.egg-info/dependency_links.txt
32.81 writing top-level names to multidict.egg-info/top_level.txt
32.81 reading manifest file 'multidict.egg-info/SOURCES.txt'
32.81 reading manifest template 'MANIFEST.in'
32.81 warning: no previously-included files matching '*.pyc' found anywhere in distribution
32.81 warning: no previously-included files found matching 'multidict/_multidict.html'
32.81 warning: no previously-included files found matching 'multidict/*.so'
32.81 warning: no previously-included files found matching 'multidict/*.pyd'
32.81 warning: no previously-included files found matching 'multidict/*.pyd'
32.81 no previously-included directories found matching 'docs/_build'
32.81 adding license file 'LICENSE'
32.81 writing manifest file 'multidict.egg-info/SOURCES.txt'
32.81 /tmp/pip-build-env-g9hma3yk/overlay/lib/python3.12/site-packages/setuptools/command/build_py.py:207: _Warning: Package 'multidict._multilib' is absent from the `packages` configuration.
32.81 !!
32.81
32.81 ********************************************************************************
32.81 ############################
32.81 # Package would be ignored #
32.81 ############################
32.81 Python recognizes 'multidict._multilib' as an importable package[^1],
32.81 but it is absent from setuptools' `packages` configuration.
32.81
32.81 This leads to an ambiguous overall configuration. If you want to distribute this
32.81 package, please make sure that 'multidict._multilib' is explicitly added
32.81 to the `packages` configuration field.
32.81
32.81 Alternatively, you can also rely on setuptools' discovery methods
32.81 (for example by using `find_namespace_packages(...)`/`find_namespace:`
32.81 instead of `find_packages(...)`/`find:`).
32.81
32.81 You can read more about "package discovery" on setuptools documentation page:
32.81
32.81 - https://setuptools.pypa.io/en/latest/userguide/package_discovery.html
32.81
32.81 If you don't want 'multidict._multilib' to be distributed and are
32.81 already explicitly excluding 'multidict._multilib' via
32.81 `find_namespace_packages(...)/find_namespace` or `find_packages(...)/find`,
32.81 you can try to use `exclude_package_data`, or `include-package-data=False` in
32.81 combination with a more fine grained `package-data` configuration.
32.81
32.81 You can read more about "package data files" on setuptools documentation page:
32.81
32.81 - https://setuptools.pypa.io/en/latest/userguide/datafiles.html
32.81
32.81
32.81 [^1]: For Python, any directory (with suitable naming) can be imported,
32.81 even if it does not contain any `.py` files.
32.81 On the other hand, currently there is no concept of package data
32.81 directory, all directories are treated like packages.
32.81 ********************************************************************************
32.81
32.81 !!
32.81 check.warn(importable)
32.81 copying multidict/__init__.pyi -> build/lib.linux-x86_64-cpython-312/multidict
32.81 copying multidict/py.typed -> build/lib.linux-x86_64-cpython-312/multidict
32.81 creating build/temp.linux-x86_64-cpython-312
32.81 creating build/temp.linux-x86_64-cpython-312/multidict
32.81 gcc -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -fPIC -I/usr/local/include/python3.12 -c multidict/_multidict.c -o build/temp.linux-x86_64-cpython-312/multidict/_multidict.o -O2 -std=c99 -Wall -Wsign-compare -Wconversion -fno-strict-aliasing -pedantic
32.81 error: command 'gcc' failed: No such file or directory
32.81 [end of output]
32.81
32.81 note: This error originates from a subprocess, and is likely not a problem with pip.
32.81 ERROR: Failed building wheel for multidict
32.81 Successfully built Flask-Mail langdetect
32.82 Failed to build multidict
32.82 ERROR: Could not build wheels for multidict, which is required to install pyproject.toml-based projects
```
I was able to work around this by changing the `FROM` target to `python:3.11-alpine`:
```
$ git diff
diff --git a/Dockerfile b/Dockerfile
index 972b18b..7fe6683 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -1,4 +1,5 @@
-FROM python:slim
+# FROM python:slim
+FROM python:3.11-alpine
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
``` | closed | 2024-01-20T15:08:23Z | 2024-01-31T13:59:17Z | https://github.com/miguelgrinberg/microblog/issues/363 | [
"question"
] | joswr1ght | 1 |
Tanuki/tanuki.py | pydantic | 40 | [Use-case] Customer support bot for Twitter | * Context: Company has a Twitter account and users tag with compliments, complaints, questions, and more
* Problem: Responding to each of them manually and triaging them with customer support is incredibly time-consuming, and 24/7 coverage is costly. Building a chatbot from scratch is also time-consuming
* Workflow
* Tweet passed
* Step 1
* Respond to the issue with something that conveys sympathy and understanding and that they'll be in touch.
* Step 2 (if question / complaint) - might be another function
* Create a customer support ticket
* Name
* Issue
* Urgency | closed | 2023-11-03T20:04:55Z | 2023-11-14T11:51:19Z | https://github.com/Tanuki/tanuki.py/issues/40 | [] | dnlkwak | 0 |
microsoft/UFO | automation | 11 | Train or fine-tune models for computer automation agents | Hello there Microsoft UFO Team! You have done a remarkable job bringing AI closer to the Windows system. I am doing similar work, such as training custom GPT-2 models on computer automation datasets.
I have created two comprehensive datasets, over [terminal](https://huggingface.co/datasets/James4Ever0/FrozenForest) and [GUI](https://huggingface.co/datasets/James4Ever0/the_frozen_forest) environments. My strategy is to create data from random keyboard and mouse actions, collecting observations and mixing them with other textual datasets.
This naive [attempt](https://github.com/james4ever0/agi_computer_control) shows my strong interest in computer agents. I like the idea of GUI agent benchmark systems like [WindowsBench](https://arxiv.org/pdf/2402.07939.pdf), and I have thought of building a reward system based on program exit codes or [VimGolf](https://github.com/James4Ever0/agi_computer_control/tree/master/active_documentation_helper/test_vimscript).
If you find my suggestion useful, I would love to hear your reply! Furthermore, if cooperation is possible, I would be thrilled to join your team to build better computer agents!
---
Update: Google has posted an unsupervised action-space training method called [Genie](https://arxiv.org/abs/2402.15391). I consider it highly applicable to the area of computer agents. | open | 2024-02-24T15:11:08Z | 2024-03-10T07:56:47Z | https://github.com/microsoft/UFO/issues/11 | [] | James4Ever0 | 4 |
ymcui/Chinese-BERT-wwm | nlp | 50 | Roberta-large plans | Are there plans to release a 24-layer RoBERTa? | closed | 2019-09-25T07:30:36Z | 2019-09-26T02:36:43Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/50 | [] | zhengwsh | 1 |
HIT-SCIR/ltp | nlp | 445 | AttributeError: 'Version' object has no attribute 'major' | Environment:
python 3.7.0
torch 1.7.0+cu101
transformers 3.2.0
Source code:
from ltp import LTP
ltp = LTP()
seg, hidden = ltp.seg(["他叫汤姆去拿外衣。"])
pos = ltp.pos(hidden)
ner = ltp.ner(hidden)
srl = ltp.srl(hidden)
dep = ltp.dep(hidden)
sdp = ltp.sdp(hidden)
The error is as follows:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-17b174c1e5da> in <module>()
1 from ltp import LTP
2 ltp = LTP()
----> 3 seg, hidden = ltp.seg(["他叫汤姆去拿外衣。"])
4 pos = ltp.pos(hidden)
5 ner = ltp.ner(hidden)
~\Anaconda3\lib\site-packages\ltp\frontend.py in wrapper(*args, **kwargs)
42 def wrapper(*args, **kwargs):
43 with torch.no_grad():
---> 44 return func(*args, **kwargs)
45
46 return wrapper
~\Anaconda3\lib\site-packages\ltp\frontend.py in seg(self, inputs, truncation, is_preseged)
211 """
212
--> 213 if transformers_version.major >= 3 and transformers_version.major > 1:
214 kwargs = {'is_split_into_words': is_preseged}
215 else:
AttributeError: 'Version' object has no attribute 'major' | closed | 2020-12-02T07:51:11Z | 2020-12-02T11:17:05Z | https://github.com/HIT-SCIR/ltp/issues/445 | [] | ZXJ-DSA | 2 |
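For context on the LTP error above, a hedged note: newer `packaging` releases expose `Version.major`, so the error points at an environment where that attribute is missing (an older `packaging`, or a `LegacyVersion`). A full version comparison avoids the attribute altogether; the sketch below is illustrative, not LTP's actual code, and the kwarg shown is hypothetical:

```python
# compare full versions instead of touching Version.major
from packaging import version
import transformers

if version.parse(transformers.__version__) >= version.parse("3.0.0"):
    kwargs = {"is_split_into_words": False}  # hypothetical kwarg usage
```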
SYSTRAN/faster-whisper | deep-learning | 877 | Transcribe outputs gibberish English even when language param is set | I am trying to get non-English transcription working, with no luck getting decent output. Here is stock Whisper for comparison:
pip install git+https://github.com/openai/whisper.git
```python
>>> import whisper
>>> model = whisper.load_model("large-v3")
>>> out = model.transcribe("verca.wav", language="czech")
>>> print(out['text'])
Ředitelka školy paní magistrá Vladimíra Melicharová, na kterou je e-mail vladimira.melicharová-zasevrané.cz
```
Here is faster whisper:
```python
>>> from faster_whisper import WhisperModel
>>>
>>> model_size = "Systran/faster-distil-whisper-large-v3"
>>> model = WhisperModel(model_size, device="cuda")
>>> segments, info = model.transcribe("verca.wav", language="cs")
>>> print("Detected language '%s' with probability %f" % (info.language, info.language_probability))
Detected language 'cs' with probability 1.000000
>>> for segment in segments:
... print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
...
[0.00s -> 3.00s] Reditelka Shkola,
[3.00s -> 4.00s] Ms.
[4.00s -> 6.00s] Magistraeimira Melicharovah,
[6.00s -> 9.00s] and Melicharrow
[9.00s -> 12.00s] www
[12.00s -> 14.00s] atseverranet.
```
The audio file used is attached: [verca.wav.zip](https://github.com/user-attachments/files/15804127/verca.wav.zip). I tried other models (distil-whisper/distil-large-v3-ct2, openai/whisper-large-v3), with the same quality of results.
Any ideas? | closed | 2024-06-12T11:43:29Z | 2024-06-13T19:07:38Z | https://github.com/SYSTRAN/faster-whisper/issues/877 | [] | honzasterba | 2 |
babysor/MockingBird | deep-learning | 541 | Problems encountered when training a specific voice |
Problems encountered when continuing to train a specific voice based on the community edition
Starting from the community-edition ceshi.pt, I added some audio files of a particular person and trained in the aidatatang_200zh dataset format; testing at 244k steps, I found the following problems:
1. Sentence-break problem. For example, with "服务器" (server), there is a noticeable pause between 服务 and 器.
2. Polyphone problem. For example, in "重装" (reinstall), "重" is read with the fourth tone.
3. Fast speech-rate problem. After synthesizing with the new model, I found the produced speech is rather fast.
I don't know whether these three problems can be improved through preprocessing or code changes. I sincerely ask any experts passing by to share their advice!
| open | 2022-05-08T02:51:40Z | 2022-05-08T02:51:40Z | https://github.com/babysor/MockingBird/issues/541 | [] | hesilong | 0 |
bmoscon/cryptofeed | asyncio | 65 | Import pair_exchange_to_std throws handshake error for Kraken | The below import throws the error:
"requests.exceptions.SSLError: HTTPSConnectionPool(host='api.kraken.com', port=443): Max retries
exceeded with url: /0/public/AssetPairs (Caused by SSLError(SSLError("bad handshake:
SysCallError(10054, 'WSAECONNRESET')",),))"
I listed all the files I'm using that include the import for the file that throws the error.
from cryptofeed.standards import pair_exchange_to_std :
{
from cryptofeed.feed import Feed
from cryptofeed import FeedHandler
from cryptofeed.feedhandler import FeedHandler
from cryptofeed.coinbase.coinbase import Coinbase
from cryptofeed.exchanges import Coinbase
}
It's very possible this is just my environment, or current connection, but it brought me to a question.
Is there a way to disable attempts to connect with exchanges like Kraken that we might not have an interest in accessing, and thereby negate the error? | closed | 2019-02-04T19:34:46Z | 2019-02-05T00:23:04Z | https://github.com/bmoscon/cryptofeed/issues/65 | [] | kuhtooie | 5 |
iperov/DeepFaceLab | machine-learning | 622 | suggestion: all train & merge parameters in same range 0..100 | Currently, the train SAEHD and merge SAEHD options have very different parameter ranges, making it hard to remember the right settings.
I suggest to harmonize them all in a range of 0..100 (or -50..+50 for certain params where negative values are needed):
**Examples:**
**SAEHD train**
GAN power ( **0.0 .. 10.0** )
True face power. ( **0.0000 .. 1.0** )
Face style power ( **0.0 .. 100.0** )
Background style power ( **0.0 .. 100.0** )
**SAEHD merger**
erode mask modifier ( **-400..400** )
blur mask modifier ( **0..400** )
motion blur power ( **0..100** )
output face scale modifier ( **-50..50** )
super resolution power ( **0..100** )
image degrade by denoise power ( **0..500** )
image degrade by bicubic rescale power ( **0..100** )
Degrade color power of final image ( **0..100** )
| closed | 2020-02-16T06:05:49Z | 2020-03-28T16:10:22Z | https://github.com/iperov/DeepFaceLab/issues/622 | [] | wuffenberg | 4 |
healthchecks/healthchecks | django | 1,039 | Make Period and Grace time minutes/hours/days clickable | ### Discussed in https://github.com/healthchecks/healthchecks/discussions/1031
<sup>Originally posted by **AbelLykens** July 19, 2024</sup>

Would be nice if the `30 minutes` / `1 hour` / `1 day` etc were clickable so it's easy to set round number periods/times 🙂 | closed | 2024-08-02T07:44:00Z | 2024-10-11T10:38:03Z | https://github.com/healthchecks/healthchecks/issues/1039 | [] | cuu508 | 0 |
vi3k6i5/flashtext | nlp | 39 | Suggestion: compile your trie to a regexp... | You may get the best of both worlds (good algorithm, native-speed matcher) by actually compiling your trie to a regexp as https://github.com/ZhukovAlexander/triegex does... | closed | 2017-12-11T21:47:27Z | 2017-12-14T20:37:57Z | https://github.com/vi3k6i5/flashtext/issues/39 | [] | pygy | 5 |
jupyter/nbviewer | jupyter | 706 | Slide Show not Working | As a user of azure notebooks integrated with jupyter notebooks, I really enjoy using the slideshow button. In the past (earlier this week even) this has worked well, but currently it will not turn into a slideshow correctly. Instead it is showing up as a presentation view of the notebook, and freezes once there.


| open | 2017-06-30T16:28:47Z | 2018-07-09T01:53:48Z | https://github.com/jupyter/nbviewer/issues/706 | [
"type:Question",
"tag:Other Jupyter Project"
] | efitzpatrick | 1 |
graphql-python/graphene-django | graphql | 558 | [DjangoObjectType] Error reading values different than options from field choices in query | We have a model with a choices field:
```
class CategoryChoices(Choice):
PERSONAL = 'Personal'
VACATION = 'Vacation'
HOLIDAY = 'Holiday'
EVENT = 'Event'
class Event(models.Model):
category = models.CharField(
max_length=100,
choices=CategoryChoices,
default=CategoryChoices.PERSONAL
)
```
```
class EventType(DjangoObjectType):
class Meta:
model = Event
```
```
class Query(object):
all_events = graphene.List(EventType)
def resolve_all_events(self, info, **kwargs):
return Event.objects.all()
```
When reading old event entries that have `category` different from the current options, we get the following error:
```
{
"message": "Expected a value of type \"EventCategory\" but received: blabla",
"path": [
"allEvents",
27,
"category"
]
}
```
This issue might be related to #424.
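A hedged sketch of a possible mitigation: later graphene-django releases expose a Meta option to opt out of the enum conversion so legacy string values serialize as plain strings (the option name is from memory; verify it exists in your version):

```python
# assumed graphene-django Meta flag; legacy values pass through as strings
class EventType(DjangoObjectType):
    class Meta:
        model = Event
        convert_choices_to_enum = False
```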
| closed | 2018-11-30T19:41:17Z | 2019-06-18T12:09:39Z | https://github.com/graphql-python/graphene-django/issues/558 | [
"wontfix"
] | olgapinheiro | 1 |
aiortc/aioquic | asyncio | 12 | Limit traffic on un-validated network paths | Now that we implement retransmissions we need to ensure we don't send more than 3x the received data until a network path is validated to avoid amplification attacks. | closed | 2019-06-08T09:44:55Z | 2019-06-09T15:03:51Z | https://github.com/aiortc/aioquic/issues/12 | [] | jlaine | 0 |
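For the aioquic note above, a hedged sketch of the bookkeeping the rule implies: cap bytes sent on a path at 3x bytes received until the path is validated (illustrative shape only, not the library's actual class):

```python
# anti-amplification accounting for an unvalidated network path
class NetworkPath:
    def __init__(self):
        self.bytes_received = 0
        self.bytes_sent = 0
        self.is_validated = False

    def can_send(self, size: int) -> bool:
        if self.is_validated:
            return True
        return (self.bytes_sent + size) <= 3 * self.bytes_received
```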
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 847 | Some questions about choosing GANs | 1.
I'm trying to train a model for this use.
trainA: before some works or procedure.
trainB: Same place, but after some works or procedure.
Using this model, if I input an image of a place before the work (procedure), I want to get the generated "after" image.
In this case, which is more suitable in your opinion: pix2pix or CycleGAN (or even another GAN)?
(I'm quite new to computer science ...)
2.
And for training, I've added ( --epoch 400 ) at the end of the training command.
But it actually trained for only 200.
Is there any way to solve this?
3.
For the pix2pix model, do I have to merge the input and output images?
(For example, if the input & output are 256x256, does it work as 512x256?) | closed | 2019-11-18T11:21:33Z | 2019-12-19T19:10:03Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/847 | [] | HeoJinLareine | 3 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,310 | Not loading toolbox | After running the command `python demo_toolbox.py`, it shows nothing and I am stuck in the same position

| open | 2024-08-18T01:33:04Z | 2024-08-18T01:36:54Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1310 | [] | sajal123r | 0 |
BeanieODM/beanie | pydantic | 1,131 | [BUG] Index on Link Field Not Being Used in Beanie Queries | **Describe the bug**
I encountered an issue with Beanie where the index on a Link field (specifically the major field in my Course document) does not seem to be used during queries. However, the index works correctly when using a native MongoDB query with DBRef. Below is the code and a comparison of the behavior. I found the difference by checking the usage of the major field index (inside the course collection) in MongoDB Compass.
**To Reproduce**
```python
from beanie import Document, PydanticObjectId, init_beanie, Link
from motor.motor_asyncio import AsyncIOMotorClient
from bson import DBRef, ObjectId
import asyncio
class Major(Document):
majorId: str
majorName: str
class Settings:
name = "majors"
class Course(Document):
courseId: str
major: Link[Major]
class Settings:
name = "uni_courses"
indexes = ['major']
client = None
async def init():
global client
client = AsyncIOMotorClient("mongodb://localhost:27017")
await init_beanie(client.test_db, document_models=[Major, Course])
async def insert_data():
major = Major(majorId="123", majorName="CS")
await major.insert()
course = Course(courseId="CS101", major=major)
await course.insert()
async def query_with_beanie():
courses = await Course.find_many(Course.major.id == PydanticObjectId("67bb97e9a033f22d877df9e5")).to_list()
async def query_with_native_mongo():
global client
courses = await client.test_db.uni_courses.find({"major": DBRef('Major', ObjectId("67bb97e9a033f22d877df9e5"))}).to_list(length=None)
async def main():
await init()
await insert_data()
await query_with_beanie()
await query_with_native_mongo()
asyncio.run(main())
```
**Expected behavior**
The index on the major field should be used when querying via Beanie, and MongoDB Compass should show an increase in index usage.
**Additional context**
This issue seems to be related to how Beanie handles Link fields and indexing. The native query works fine, but the Beanie query does not utilize the index.
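A hedged workaround sketch: since the native DBRef query above does hit the index, one option is to pass Beanie the same raw Mongo filter instead of the dotted `major.$id` shape it builds from `Course.major.id` (Beanie's `find_many` accepts Mongo-style dicts; the ObjectId is the one from the repro):

```python
# query through Beanie with the full DBRef, matching the indexed shape
from bson import DBRef, ObjectId

courses = await Course.find_many(
    {"major": DBRef("Major", ObjectId("67bb97e9a033f22d877df9e5"))}
).to_list()
```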
| open | 2025-02-23T22:24:47Z | 2025-02-23T22:24:47Z | https://github.com/BeanieODM/beanie/issues/1131 | [] | parsa-pico | 0 |
microsoft/qlib | deep-learning | 948 | `get_risk_degree` not in `BaseStrategy` | ## 📖 Documentation
In [this part](https://qlib.readthedocs.io/en/latest/component/strategy.html#basestrategy) from the document, " All strategy classes need to inherit the base class and implement its interface. `get_risk_degree` and `generate_order_list`". However, in current version of [code](https://github.com/microsoft/qlib/blob/ea4fb33ff2cb95ff8577aa0ccdec98eb7f9d0b7c/qlib/strategy/base.py#L19). Class `BaseStrategy` does not have these two methods. [Class `SBBStrategyBase`](https://github.com/microsoft/qlib/blob/ea4fb33ff2cb95ff8577aa0ccdec98eb7f9d0b7c/qlib/contrib/strategy/rule_strategy.py#L125) does not have these two methods either.
| closed | 2022-03-04T01:42:37Z | 2022-03-17T11:49:06Z | https://github.com/microsoft/qlib/issues/948 | [] | tengerye | 4 |
arogozhnikov/einops | tensorflow | 313 | Tests failing on FreeBSD | **Describe the bug**
PyTest prints these failures:
```
========================================================================================= FAILURES ==========================================================================================
______________________________________________________________________________________ test_notebook_1 ______________________________________________________________________________________
def test_notebook_1():
[notebook] = Path(__file__).parent.with_name("docs").glob("1-*.ipynb")
> render_notebook(notebook, replacements={})
tests/test_notebooks.py:37:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_notebooks.py:28: in render_notebook
ep.preprocess(nb, {"metadata": {"path": str(filename.parent.absolute())}})
/usr/local/lib/python3.9/site-packages/nbconvert/preprocessors/execute.py:102: in preprocess
self.preprocess_cell(cell, resources, index)
/usr/local/lib/python3.9/site-packages/nbconvert/preprocessors/execute.py:123: in preprocess_cell
cell = self.execute_cell(cell, index, store_history=True)
/usr/local/lib/python3.9/site-packages/jupyter_core/utils/__init__.py:165: in wrapped
return loop.run_until_complete(inner)
/usr/local/lib/python3.9/asyncio/base_events.py:647: in run_until_complete
return future.result()
/usr/local/lib/python3.9/site-packages/nbclient/client.py:1062: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <nbconvert.preprocessors.execute.ExecutePreprocessor object at 0x13a48dbee3a0>
cell = {'cell_type': 'code', 'execution_count': 5, 'metadata': {'pycharm': {'name': '#%%\n'}, 'execution': {'iopub.status.bus...: No module named 'einops'"]}], 'source': "# we'll use three operations\nfrom einops import rearrange, reduce, repeat"}
cell_index = 7
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': 'c866a4eb-...e, 'engine': 'c866a4eb-3178-416f-ae4c-a466cecc1ff8', 'started': '2024-04-03T16:25:21.534254Z', 'status': 'error'}, ...}
async def _check_raise_for_error(
self, cell: NotebookNode, cell_index: int, exec_reply: dict[str, t.Any] | None
) -> None:
if exec_reply is None:
return None
exec_reply_content = exec_reply["content"]
if exec_reply_content["status"] != "error":
return None
cell_allows_errors = (not self.force_raise_errors) and (
self.allow_errors
or exec_reply_content.get("ename") in self.allow_error_names
or "raises-exception" in cell.metadata.get("tags", [])
)
await run_hook(
self.on_cell_error, cell=cell, cell_index=cell_index, execute_reply=exec_reply
)
if not cell_allows_errors:
> raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)
E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E # we'll use three operations
E from einops import rearrange, reduce, repeat
E ------------------
E
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError Traceback (most recent call last)
E Cell In[5], line 2
E 1 # we'll use three operations
E ----> 2 from einops import rearrange, reduce, repeat
E
E ModuleNotFoundError: No module named 'einops'
/usr/local/lib/python3.9/site-packages/nbclient/client.py:918: CellExecutionError
______________________________________________________________________________________ test_notebook_3 ______________________________________________________________________________________
def test_notebook_3():
[notebook] = Path(__file__).parent.with_name("docs").glob("3-*.ipynb")
if not is_backend_tested("torch"):
pytest.skip()
> render_notebook(notebook, replacements={})
tests/test_notebooks.py:66:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_notebooks.py:28: in render_notebook
ep.preprocess(nb, {"metadata": {"path": str(filename.parent.absolute())}})
/usr/local/lib/python3.9/site-packages/nbconvert/preprocessors/execute.py:102: in preprocess
self.preprocess_cell(cell, resources, index)
/usr/local/lib/python3.9/site-packages/nbconvert/preprocessors/execute.py:123: in preprocess_cell
cell = self.execute_cell(cell, index, store_history=True)
/usr/local/lib/python3.9/site-packages/jupyter_core/utils/__init__.py:165: in wrapped
return loop.run_until_complete(inner)
/usr/local/lib/python3.9/asyncio/base_events.py:647: in run_until_complete
return future.result()
/usr/local/lib/python3.9/site-packages/nbclient/client.py:1062: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <nbconvert.preprocessors.execute.ExecutePreprocessor object at 0x13a48ff801c0>
cell = {'cell_type': 'code', 'execution_count': 1, 'id': '5f24a8f2-6680-4298-9736-081719c53f88', 'metadata': {'execution': {'...31mModuleNotFoundError\x1b[0m: No module named 'einops'"]}], 'source': 'from einops.layers.torch import EinMix as Mix'}
cell_index = 2
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': 'c70533f8-...e, 'engine': 'c70533f8-d151-4fc1-a533-b3630f329c72', 'started': '2024-04-03T16:25:24.742754Z', 'status': 'error'}, ...}
async def _check_raise_for_error(
self, cell: NotebookNode, cell_index: int, exec_reply: dict[str, t.Any] | None
) -> None:
if exec_reply is None:
return None
exec_reply_content = exec_reply["content"]
if exec_reply_content["status"] != "error":
return None
cell_allows_errors = (not self.force_raise_errors) and (
self.allow_errors
or exec_reply_content.get("ename") in self.allow_error_names
or "raises-exception" in cell.metadata.get("tags", [])
)
await run_hook(
self.on_cell_error, cell=cell, cell_index=cell_index, execute_reply=exec_reply
)
if not cell_allows_errors:
> raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)
E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E from einops.layers.torch import EinMix as Mix
E ------------------
E
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError Traceback (most recent call last)
E Cell In[1], line 1
E ----> 1 from einops.layers.torch import EinMix as Mix
E
E ModuleNotFoundError: No module named 'einops'
/usr/local/lib/python3.9/site-packages/nbclient/client.py:918: CellExecutionError
______________________________________________________________________________________ test_notebook_4 ______________________________________________________________________________________
def test_notebook_4():
[notebook] = Path(__file__).parent.with_name("docs").glob("4-*.ipynb")
if not is_backend_tested("torch"):
pytest.skip()
> render_notebook(notebook, replacements={})
tests/test_notebooks.py:73:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_notebooks.py:28: in render_notebook
ep.preprocess(nb, {"metadata": {"path": str(filename.parent.absolute())}})
/usr/local/lib/python3.9/site-packages/nbconvert/preprocessors/execute.py:102: in preprocess
self.preprocess_cell(cell, resources, index)
/usr/local/lib/python3.9/site-packages/nbconvert/preprocessors/execute.py:123: in preprocess_cell
cell = self.execute_cell(cell, index, store_history=True)
/usr/local/lib/python3.9/site-packages/jupyter_core/utils/__init__.py:165: in wrapped
return loop.run_until_complete(inner)
/usr/local/lib/python3.9/asyncio/base_events.py:647: in run_until_complete
return future.result()
/usr/local/lib/python3.9/site-packages/nbclient/client.py:1062: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <nbconvert.preprocessors.execute.ExecutePreprocessor object at 0x13a48e477730>
cell = {'cell_type': 'code', 'execution_count': 2, 'metadata': {'collapsed': False, 'jupyter': {'outputs_hidden': False}, 'ex...e_depth = np.random.random([h, w])\n# but we can stack them\nimage_rgbd, ps = pack([image_rgb, image_depth], 'h w *')"}
cell_index = 3
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': '6e46e1d8-...e, 'engine': '6e46e1d8-b8f4-4af2-ad9d-6d31a6f98e2f', 'started': '2024-04-03T16:25:27.923862Z', 'status': 'error'}, ...}
async def _check_raise_for_error(
self, cell: NotebookNode, cell_index: int, exec_reply: dict[str, t.Any] | None
) -> None:
if exec_reply is None:
return None
exec_reply_content = exec_reply["content"]
if exec_reply_content["status"] != "error":
return None
cell_allows_errors = (not self.force_raise_errors) and (
self.allow_errors
or exec_reply_content.get("ename") in self.allow_error_names
or "raises-exception" in cell.metadata.get("tags", [])
)
await run_hook(
self.on_cell_error, cell=cell, cell_index=cell_index, execute_reply=exec_reply
)
if not cell_allows_errors:
> raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)
E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E from einops import pack, unpack
E
E h, w = 100, 200
E # image_rgb is 3-dimensional (h, w, 3) and depth is 2-dimensional (h, w)
E image_rgb = np.random.random([h, w, 3])
E image_depth = np.random.random([h, w])
E # but we can stack them
E image_rgbd, ps = pack([image_rgb, image_depth], 'h w *')
E ------------------
E
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError Traceback (most recent call last)
E Cell In[2], line 1
E ----> 1 from einops import pack, unpack
E 3 h, w = 100, 200
E 4 # image_rgb is 3-dimensional (h, w, 3) and depth is 2-dimensional (h, w)
E
E ModuleNotFoundError: No module named 'einops'
/usr/local/lib/python3.9/site-packages/nbclient/client.py:918: CellExecutionError
_____________________________________________________________________________ test_notebook_2_with_all_backends _____________________________________________________________________________
def test_notebook_2_with_all_backends():
[notebook] = Path(__file__).parent.with_name("docs").glob("2-*.ipynb")
backends = []
if is_backend_tested("torch"):
# notebook uses name pytorch
backends.append("pytorch")
if is_backend_tested("tensorflow"):
backends.append("tensorflow")
if is_backend_tested("chainer"):
backends.append("chainer")
if len(backends) == 0:
pytest.skip()
for backend in backends:
print("Testing {} with backend {}".format(notebook, backend))
replacements = {"flavour = 'pytorch'": "flavour = '{}'".format(backend)}
expected_string = "selected {} backend".format(backend)
> result = render_notebook(notebook, replacements=replacements)
tests/test_notebooks.py:58:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_notebooks.py:28: in render_notebook
ep.preprocess(nb, {"metadata": {"path": str(filename.parent.absolute())}})
/usr/local/lib/python3.9/site-packages/nbconvert/preprocessors/execute.py:102: in preprocess
self.preprocess_cell(cell, resources, index)
/usr/local/lib/python3.9/site-packages/nbconvert/preprocessors/execute.py:123: in preprocess_cell
cell = self.execute_cell(cell, index, store_history=True)
/usr/local/lib/python3.9/site-packages/jupyter_core/utils/__init__.py:165: in wrapped
return loop.run_until_complete(inner)
/usr/local/lib/python3.9/asyncio/base_events.py:647: in run_until_complete
return future.result()
/usr/local/lib/python3.9/site-packages/nbclient/client.py:1062: in async_execute_cell
await self._check_raise_for_error(cell, cell_index, exec_reply)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <nbconvert.preprocessors.execute.ExecutePreprocessor object at 0x13a48ff87bb0>
cell = {'cell_type': 'code', 'execution_count': 1, 'metadata': {'execution': {'iopub.status.busy': '2024-04-03T16:25:30.67020... "\x1b[0;31mModuleNotFoundError\x1b[0m: No module named 'einops'"]}], 'source': 'from einops import rearrange, reduce'}
cell_index = 1
exec_reply = {'buffers': [], 'content': {'ename': 'ModuleNotFoundError', 'engine_info': {'engine_id': -1, 'engine_uuid': '22768edd-...e, 'engine': '22768edd-8f6f-4a5b-906e-3ecd7ff2758a', 'started': '2024-04-03T16:25:30.671056Z', 'status': 'error'}, ...}
async def _check_raise_for_error(
self, cell: NotebookNode, cell_index: int, exec_reply: dict[str, t.Any] | None
) -> None:
if exec_reply is None:
return None
exec_reply_content = exec_reply["content"]
if exec_reply_content["status"] != "error":
return None
cell_allows_errors = (not self.force_raise_errors) and (
self.allow_errors
or exec_reply_content.get("ename") in self.allow_error_names
or "raises-exception" in cell.metadata.get("tags", [])
)
await run_hook(
self.on_cell_error, cell=cell, cell_index=cell_index, execute_reply=exec_reply
)
if not cell_allows_errors:
> raise CellExecutionError.from_cell_and_msg(cell, exec_reply_content)
E nbclient.exceptions.CellExecutionError: An error occurred while executing the following cell:
E ------------------
E from einops import rearrange, reduce
E ------------------
E
E
E ---------------------------------------------------------------------------
E ModuleNotFoundError Traceback (most recent call last)
E Cell In[1], line 1
E ----> 1 from einops import rearrange, reduce
E
E ModuleNotFoundError: No module named 'einops'
/usr/local/lib/python3.9/site-packages/nbclient/client.py:918: CellExecutionError
----------------------------------------------------------------------------------- Captured stdout call ------------------------------------------------------------------------------------
Testing /usr/ports/misc/py-einops/work-py39/einops-0.7.0/docs/2-einops-for-deep-learning.ipynb with backend pytorch
_____________________________________________________________________________________ test_torch_layer ______________________________________________________________________________________
def test_torch_layer():
if not is_backend_tested("torch"):
pytest.skip()
else:
# checked that torch present
import torch
import torch.jit
model1 = create_torch_model(use_reduce=True)
model2 = create_torch_model(use_reduce=False)
input = torch.randn([10, 3, 32, 32])
# random models have different predictions
assert not torch.allclose(model1(input), model2(input))
model2.load_state_dict(pickle.loads(pickle.dumps(model1.state_dict())))
assert torch.allclose(model1(input), model2(input))
# tracing (freezing)
> model3 = torch.jit.trace(model2, example_inputs=input)
tests/test_layers.py:219:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py:489: in _fn
return fn(*args, **kwargs)
/usr/local/lib/python3.9/site-packages/torch/_dynamo/external_utils.py:17: in inner
return fn(*args, **kwargs)
/usr/local/lib/python3.9/site-packages/torch/jit/_trace.py:806: in trace
return trace_module(
/usr/local/lib/python3.9/site-packages/torch/jit/_trace.py:1102: in trace_module
_check_trace(
/usr/local/lib/python3.9/site-packages/torch/utils/_contextlib.py:115: in decorate_context
return func(*args, **kwargs)
/usr/local/lib/python3.9/site-packages/torch/jit/_trace.py:575: in _check_trace
diag_info = graph_diagnostic_info()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def graph_diagnostic_info():
mod_canonicalized = torch._C._jit_pass_canonicalize(traced_func.graph)
torch._C._jit_pass_inline(mod_canonicalized)
torch._C._jit_pass_erase_shape_information(mod_canonicalized)
mod_str = str(mod_canonicalized)
mod_str = re.sub(r"___torch_mangle_[0-9]+\.", "", mod_str)
check_canonicalized = torch._C._jit_pass_canonicalize(check_mod_func.graph)
torch._C._jit_pass_inline(check_canonicalized)
torch._C._jit_pass_erase_shape_information(check_canonicalized)
check_str = str(check_canonicalized)
check_str = re.sub(r"___torch_mangle_[0-9]+\.", "", check_str)
graph_diff_errors = None
if mod_str != check_str:
import difflib
graph_diff = difflib.ndiff(
mod_str.splitlines(True), check_str.splitlines(True)
)
graph_diff_errors = "Graph diff:\n" + indent("".join(graph_diff)) + "\n"
for n_mod, n_check in zip(
mod_canonicalized.nodes(), check_canonicalized.nodes()
):
if str(n_mod) != str(n_check):
graph_diff_errors += "First diverging operator:\n"
node_diff = difflib.ndiff(
str(n_mod).splitlines(True), str(n_check).splitlines(True)
)
source_printout = (
"Node diff:\n" + indent("".join(node_diff)) + "\n"
)
mod_stack = n_mod.sourceRange()
if mod_stack:
source_printout += (
"Trace source location:\n" + indent(mod_stack) + "\n"
)
check_stack = n_check.sourceRange()
if check_stack:
source_printout += (
"Check source location:\n" + indent(check_stack) + "\n"
)
graph_diff_errors += source_printout
break # For now, only print out the first pair of nodes that diverges
tensor_compare_errors = None
# Check Tensor-valued constant nodes
for n_mod, n_check in zip(
mod_canonicalized.nodes(), check_canonicalized.nodes()
):
if n_mod.kind() != n_check.kind():
break # Graphs have already diverged
if n_mod.kind() == "prim::Constant" and not (
n_mod.mustBeNone() or n_check.mustBeNone()
):
if not n_mod.hasAttribute("value"):
continue
if n_mod.kindOf("value") != "t" or n_check.kindOf("value") != "t":
continue
> mod_tensor_val = n_mod.t("value")
E RuntimeError: required keyword attribute 'value' has the wrong type
/usr/local/lib/python3.9/site-packages/torch/jit/_trace.py:444: RuntimeError
===================================================================================== warnings summary ======================================================================================
../../../../../local/lib/python3.9/site-packages/jupyter_client/connect.py:22
/usr/local/lib/python3.9/site-packages/jupyter_client/connect.py:22: DeprecationWarning: Jupyter is migrating its paths to use standard platformdirs
given by the platformdirs library. To remove this warning and
see the appropriate new directories, set the environment variable
`JUPYTER_PLATFORM_DIRS=1` and then run `jupyter --paths`.
The use of platformdirs will be the default in `jupyter_core` v6
from jupyter_core.paths import jupyter_data_dir, jupyter_runtime_dir, secure_write
tests/test_packing.py::test_pack_unpack_array_api
/usr/ports/misc/py-einops/work-py39/einops-0.7.0/tests/test_packing.py:275: UserWarning: The numpy.array_api submodule is still experimental. See NEP 47.
import numpy.array_api as xp
tests/test_other.py::testmod
/usr/local/lib/python3.9/site-packages/_pytest/python.py:198: PytestReturnNotNoneWarning: Expected None, but tests/test_other.py::testmod returned TestResults(failed=0, attempted=0), which will be an error in a future version of pytest. Did you mean to use `assert` instead of `return`?
warnings.warn(
tests/test_einsum.py::test_layer
/usr/ports/misc/py-einops/work-py39/einops-0.7.0/einops/layers/_einmix.py:112: UserWarning: EinMix: weight has no dimensions (means multiplication by a number)
warnings.warn('EinMix: weight has no dimensions (means multiplication by a number)')
tests/test_layers.py::test_torch_layers_scripting
<unknown>:827: DeprecationWarning: invalid escape sequence \s
tests/test_layers.py::test_reduce_imperative
/usr/local/lib/python3.9/site-packages/numpy/core/_methods.py:53: RuntimeWarning: overflow encountered in reduce
return umr_prod(a, axis, dtype, out, keepdims, initial, where)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
================================================================================== short test summary info ==================================================================================
SKIPPED [1] tests/test_layers.py:340: Skipped
SKIPPED [1] tests/test_layers.py:244: Skipped
=================================================================== 5 failed, 98 passed, 2 skipped, 6 warnings in 33.99s ====================================================================
*** Error code 1
```
**Reproduction steps**
```
cd /usr/ports/misc/py-einops/work-py39/einops-0.7.0 && /usr/bin/env -i HOME=/usr/ports/misc/py-einops/work-py39 PWD="${PWD}" __MAKE_CONF=/nonexistent OSVERSION=1400509 PATH=/usr/local/libexec/ccache:/usr/ports/misc/py-einops/work-py39/.bin:/home/yuri/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin TERM=xterm-256color XDG_DATA_HOME=/usr/ports/misc/py-einops/work-py39 XDG_CONFIG_HOME=/usr/ports/misc/py-einops/work-py39 XDG_CACHE_HOME=/usr/ports/misc/py-einops/work-py39/.cache HOME=/usr/ports/misc/py-einops/work-py39 PATH=/usr/local/libexec/ccache:/usr/ports/misc/py-einops/work-py39/.bin:/home/yuri/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin PKG_CONFIG_LIBDIR=/usr/ports/misc/py-einops/work-py39/.pkgconfig:/usr/local/libdata/pkgconfig:/usr/local/share/pkgconfig:/usr/libdata/pkgconfig MK_DEBUG_FILES=no MK_KERNEL_SYMBOLS=no SHELL=/bin/sh NO_LINT=YES PREFIX=/usr/local LOCALBASE=/usr/local CC="cc" CFLAGS="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing " CPP="cpp" CPPFLAGS="" LDFLAGS=" -fstack-protector-strong " LIBS="" CXX="c++" CXXFLAGS="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing " CCACHE_DIR="/tmp/.ccache" BSD_INSTALL_PROGRAM="install -s -m 555" BSD_INSTALL_LIB="install -s -m 0644" BSD_INSTALL_SCRIPT="install -m 555" BSD_INSTALL_DATA="install -m 0644" BSD_INSTALL_MAN="install -m 444" EINOPS_TEST_BACKENDS="numpy,torch" /usr/local/bin/python3.9 -m pytest -k '' -rs -v -o addopts=
==================================================================================== test session starts ====================================================================================
```
**Expected behavior**
n/a
**Your platform**
FreeBSD 14
Version: 0.7.0
Python-3.9
pytorch-2.2.1 (the latest release) | closed | 2024-04-03T16:29:42Z | 2025-02-09T05:32:05Z | https://github.com/arogozhnikov/einops/issues/313 | [] | yurivict | 4 |
taverntesting/tavern | pytest | 233 | Problem with multipart form data request. | Hello guys,
This is a blocker for me: I just can't make a successful multipart form-data request. Here are the request and response details from the browser:

1. I tried the following first, but it failed:
```
name: File Test
request:
  url: "{REQUEST_URL}/upload"
  method: POST
  data:
    fileName: 5MB.zip
  headers:
    cookie: "{COOKIE}"
response:
  status_code: 200
```
It failed with error:
> the request doesn't contain a multipart/form-data or multipart/mixed stream, content type header is application/json
2. Then tried again by setting the proper content type:
```
name: File Test
request:
  url: "{REQUEST_URL}/upload"
  method: POST
  data:
    fileName: 5MB.zip
  headers:
    content-type: multipart/form-data
    cookie: "{COOKIE}"
response:
  status_code: 200
```
It failed with error:
> the request was rejected because no multipart boundary was found
3. Tried also by adding empty files:
```
name: File Test
request:
  url: "{REQUEST_URL}/upload"
  method: POST
  data:
    fileName: 5MB.zip
  headers:
    content-type: multipart/form-data
    cookie: "{COOKIE}"
  files:
    file: "data/5MB.zip"
response:
  status_code: 200
```
It failed with error:
> E tavern.util.exceptions.BadSchemaError: Tried to send non-file content alongside a file
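For reference, here is a sketch of what I'd expect to work, based on how the underlying requests library handles multipart: drop the `data` block and the manual content-type header so the boundary gets set automatically. The `file` key name is just a guess at what the server expects:
```yaml
name: File Test
request:
  url: "{REQUEST_URL}/upload"
  method: POST
  headers:
    cookie: "{COOKIE}"
  files:
    file: "data/5MB.zip"
response:
  status_code: 200
```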
| closed | 2019-01-11T15:16:49Z | 2019-08-11T10:24:04Z | https://github.com/taverntesting/tavern/issues/233 | [
"Type: Enhancement"
] | mahadi087 | 4 |
docarray/docarray | pydantic | 1,500 | wrong Documentation link | The Documentation link in the README points to the legacy Documentation 0.21.0.
We need to fix this. | closed | 2023-05-07T15:20:44Z | 2023-05-10T11:45:48Z | https://github.com/docarray/docarray/issues/1500 | [] | JoanFM | 0 |
matplotlib/mplfinance | matplotlib | 49 | More easily Plot Trades or Trade Signals | **Is your feature request related to a problem? Please describe.**
Your addplot.ipynb page is phenomenal and I was able to figure out that I can add scatter points by:
1) Making an empty/zero numpy array: `trs = np.zeros((length_to_add, 1))`
2) Filling it with NaNs: `trs.fill(np.nan)`
3) Setting my 2 scatter points, since I know the index (x) and y values: `trs[x] = y`
4) Adding this to the dataframe: `toplot.insert(6, "trades", trs, True)`
5) Then using your well-documented addplot: `apdict = mpf.make_addplot(toplot['trades'], scatter=True, secondary_y=False)` followed by `mpf.plot(toplot, type='candle', volume=True, addplot=apdict)`
A consolidated, runnable version of these steps is sketched below.
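Putting those steps together, a minimal sketch (assuming `toplot` is an OHLCV dataframe, and using made-up `x`/`y` values for the trade's bar index and price):
```python
import numpy as np
import mplfinance as mpf

# Stand-in trade: bar index and fill price (replace with real values).
x, y = 10, 123.45

# NaN everywhere except the trade bars, so only those points are drawn.
trs = np.full(len(toplot), np.nan)
trs[x] = y

toplot["trades"] = trs
apdict = mpf.make_addplot(toplot["trades"], scatter=True, secondary_y=False)
mpf.plot(toplot, type="candle", volume=True, addplot=apdict)
```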
**Describe the solution you'd like**
While this is great, it took a lot of energy - I'm moving from pyplot (for the candles!) and there I simply wrote:
plt.scatter(trade_list_idx,trade_list_price,c='b')
Which was simpler than messing around with a dataframe and so on.
| open | 2020-03-10T01:11:10Z | 2020-05-26T16:28:42Z | https://github.com/matplotlib/mplfinance/issues/49 | [
"enhancement"
] | thegamecat | 15 |
ageitgey/face_recognition | machine-learning | 1,328 | IndexError: list index out of range | * face_recognition version:
* Python version: 3.8
* Operating System: Linux Mint
### Description
I simply paste all the code and install libraries, but I've got this error:
IndexError: list index out of range
### What I Did
```
Complete Output:
Traceback (most recent call last):
File "/home/[USER]/Desktop/face_recognition/system.py", line 19, in <module>
obama_face_encoding = face_recognition.face_encodings(obama_image)[0]
IndexError: list index out of range
```
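A guess at the usual cause (not confirmed from the report): no face was detected in the image, so `face_encodings` returns an empty list and indexing `[0]` fails. A minimal guard sketch:
```python
import face_recognition

obama_image = face_recognition.load_image_file("obama.jpg")
encodings = face_recognition.face_encodings(obama_image)

if not encodings:
    # No face detected: check image quality, size, and orientation.
    raise ValueError("No face found in image")
obama_face_encoding = encodings[0]
```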
| open | 2021-06-15T08:31:12Z | 2023-09-03T14:34:00Z | https://github.com/ageitgey/face_recognition/issues/1328 | [] | zatarra97 | 3 |
tflearn/tflearn | data-science | 1,114 | Curses at Windows WORKS if I remove... (+) | Windows, Python 3.7, tflearn-0.3.2, windows_curses-1.0
If I removed in callbacks.py:
```python
sys.stdout.write(curses.tigetstr('cvvis').decode())
```
which causes the exception `AttributeError: 'NoneType' object has no attribute 'decode'`.
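Presumably `curses.tigetstr('cvvis')` returns `None` on Windows terminals that lack that capability. Instead of deleting the line entirely, a guarded sketch might be:
```python
import curses
import sys

# tigetstr can return None when the terminal capability is missing,
# so guard before decoding rather than removing the line outright.
cvvis = curses.tigetstr('cvvis')
if cvvis is not None:
    sys.stdout.write(cvvis.decode())
```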
Either way, the CURSES_SUPPORTED flag stays True, so it works perfectly in the Windows console - no new lines while printing progress. | closed | 2019-01-24T00:34:49Z | 2019-02-25T00:05:25Z | https://github.com/tflearn/tflearn/issues/1114 | [] | KMiNT21 | 0 |
alteryx/featuretools | data-science | 2,391 | release Featuretools v1.19.0 | closed | 2022-12-08T16:37:12Z | 2022-12-09T20:03:03Z | https://github.com/alteryx/featuretools/issues/2391 | [] | gsheni | 0 | |
rpicard/explore-flask | flask | 87 | Form chapter: Unique validator missing init message | In _Handling Forms_ chapter, `Unique` validator doesn't initialize `message`. Should be changed to this:
```
def __init__(self, model, field, message=u'This element already exists.'):
self.model = model
self.field = field
self.message = message
```
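For context, a sketch of the full validator with the fixed `__init__` in place (the `__call__` body is assumed from the usual Flask-SQLAlchemy/WTForms pattern, not quoted from the book):
```python
from wtforms.validators import ValidationError

class Unique(object):
    def __init__(self, model, field, message=u'This element already exists.'):
        self.model = model
        self.field = field
        self.message = message

    def __call__(self, form, field):
        # Lookup pattern assumed (Flask-SQLAlchemy style query interface).
        existing = self.model.query.filter(self.field == field.data).first()
        if existing:
            raise ValidationError(self.message)
```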
| open | 2015-04-16T07:55:02Z | 2015-04-16T07:55:02Z | https://github.com/rpicard/explore-flask/issues/87 | [] | dengshuan | 0 |
deepfakes/faceswap | machine-learning | 1,190 | How can i use this project for swaping faces between two images. | I want to paste the face of a person from first image to another image. And the person clothes will be same. | closed | 2021-11-23T06:35:58Z | 2021-11-23T06:48:39Z | https://github.com/deepfakes/faceswap/issues/1190 | [] | alan-ai-learner | 1 |
hyperspy/hyperspy | data-visualization | 2,550 | Bug in navigation_mask for decomposition | Based on discussion in Gitter. I think it occurs because the default behavior of `s.decomposition()` for an EDS TEM signal is to create a vacuum mask, while calling the underlying `decomposition` function applies no vacuum mask by default. The end result is that the navigation mask is not correctly applied to the data and the decomposition fails because the shape is wrong. Evidently this is not covered by tests. @ericpre I see you looked at similar code lately for #2183, is it clearer to you what's going wrong?
See these lines - since the default value is `navigation_mask=1.0`, this branch is taken: https://github.com/hyperspy/hyperspy/blob/0284fbcbee6549feaaf5cd5517d55df283b56ee3/hyperspy/_signals/eds_tem.py#L710-L716
To reproduce:
```python
import numpy as np
import hyperspy.api as hs
s = hs.signals.Signal1D(np.ones(shape=(32, 32, 1024)))
s.set_signal_type("EDS_TEM")
```
First with normalized Poisson noise:
```python
s.decomposition(True)
# self.data.shape (32, 32, 1024)
# navigation_mask.shape (32, 32)
# Eventually fails with `ValueError: zero-size array to reduction operation minimum which has no identity`
# inside `normalize_poissonian_noise()`, suggesting the navigation mask doesn't work due to its shape.
```
And now without normalized Poisson noise:
```python
s.decomposition(False)
# self.data.shape (32, 32, 1024)
# navigation_mask.shape (32, 32)
# Data after masking shape (0, 1024) <-- this is passed to `scipy.linalg.svd()` and fails
# Eventually fails with `ValueError: Internal work array size computation failed: -10`
# deep inside the scipy.linalg.svd internals because the data has an empty axis
``` | closed | 2020-09-15T10:37:10Z | 2020-11-30T12:44:12Z | https://github.com/hyperspy/hyperspy/issues/2550 | [
"type: bug"
] | tjof2 | 0 |
davidsandberg/facenet | tensorflow | 608 | Cluster analysis on faces? | Is it possible to do a cluster analysis on L2 distance of image embeddings? (for example, to determine which face is most often seen with another face?) | closed | 2018-01-08T18:54:56Z | 2018-04-04T19:18:05Z | https://github.com/davidsandberg/facenet/issues/608 | [] | taewookim | 1 |
wger-project/wger | django | 1,261 | error message | HI!
What does this error message mean?
Thanks

| open | 2023-02-21T12:09:47Z | 2025-02-12T17:40:58Z | https://github.com/wger-project/wger/issues/1261 | [] | clafalco | 3 |
GibbsConsulting/django-plotly-dash | plotly | 418 | Use Dash Mantine Components with django_plotly_dash | If I use any Dash Mantine Components within my Dash application, it just displays as a large blank page when using dpd, but it loads just fine with the standard dash application. Looking at previous questions, it sounded like custom components should work with dpd. Am I missing a step to use other libraries with this? | closed | 2022-10-20T19:52:27Z | 2023-01-15T06:19:56Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/418 | [] | starsidechimp | 2 |
Kanaries/pygwalker | plotly | 516 | [BUG] Index out of bounds on reading a dataframe | **Describe the bug**
Trying to read in 10000+ row dataframe. Getting an error:
`IndexError: index 10001 is out of bounds for axis 0 with size 10001`
```
Traceback:
File "/app/app.py", line 83, in <module>
main()
File "/app/app.py", line 78, in main
analytics_tab()
File "/app/tabs/pygwalker_analytics/analytics_tab.py", line 146, in analytics_tab
renderer.render_explore()
File "/usr/local/lib/python3.11/site-packages/pygwalker/api/streamlit.py", line 201, in render_explore
html = self._get_html(**{"defaultTab": default_tab})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pygwalker/api/streamlit.py", line 147, in _get_html
return self._get_html_with_params_str_cache(params_str)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/cachetools/__init__.py", line 737, in wrapper
v = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pygwalker/api/streamlit.py", line 119, in _get_html_with_params_str_cache
props = self.walker._get_props("streamlit")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pygwalker/api/pygwalker.py", line 530, in _get_props
"fieldMetas": self.data_parser.field_metas,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pygwalker/data_parsers/base.py", line 138, in field_metas
duckdb.register("pygwalker_mid_table", self._duckdb_df)
```
**Versions**
- pygwalker version: 0.4.7
- python version: 3.11.8
**Additional context**
Similar issue was mentioned in: https://github.com/duckdb/duckdb/issues/10750
Recently DuckDB released [PR](https://github.com/duckdb/duckdb/pull/10768) to fix that issue with: [0.10.1 Bugfix Release ](https://github.com/duckdb/duckdb/releases/tag/v0.10.1).
I think this would be solved with DuckDB version requirement update from 0.10.0 -> 0.10.1
| closed | 2024-04-11T10:51:49Z | 2024-04-26T11:11:22Z | https://github.com/Kanaries/pygwalker/issues/516 | [
"bug",
"good first issue",
"P1"
] | RaulVS14 | 6 |
alteryx/featuretools | data-science | 2,027 | Update CFM so that binary comparisons work for Ordinal columns from aggregations | There is an issue with several of the binary comparison primitives such as `LessThan` in which comparing an Ordinal column in a base dataframe to an Ordinal column generated from an aggregation primitive can fail. For examples of this refer to this test: https://github.com/alteryx/featuretools/blob/6e819333b7631487f3e30684162abe0a4faf6dc3/featuretools/tests/primitive_tests/test_transform_features.py#L1544
A temporary workaround was added in PR #2025 to prevent CFM from failing on a feature generated automatically by DFS, but a better fix is needed.
The reason the comparison doesn't work is that the intermediate aggregation feature does not have a categorical dtype set as this doesn't happen until Woodwork is initialized at the end of calculating the feature matrix. This causes one of the primitive inputs to be categorical but the other to be something else, such as numeric. As a workaround, the primitives were updated to return `nan` values in cases where one but not both of the two primitive inputs were categorical.
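For illustration (this snippet is mine, not from the issue), the underlying pandas behavior looks roughly like this:
```python
import pandas as pd

ordinal = pd.Series(pd.Categorical([1, 2, 3], ordered=True))  # Ordinal-like column
numeric = pd.Series([2, 2, 2])  # aggregation output without categorical dtype

try:
    ordinal < numeric
except TypeError as err:
    # pandas refuses to compare Categorical data with non-categorical data
    print(err)
```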
We should update CFM to properly handle these comparisons and compute actual values instead of returning nan values. | open | 2022-04-21T22:10:06Z | 2023-06-26T19:10:35Z | https://github.com/alteryx/featuretools/issues/2027 | [] | thehomebrewnerd | 0 |
ploomber/ploomber | jupyter | 227 | Add a debugging guide | debugging pipeline declaration:
- [x] using the dag object
debugging tasks:
- [x] printing sql source
- [x] debugging (python) scripts
- [x] debugging dag.render errors | closed | 2020-08-13T16:30:36Z | 2021-03-09T04:58:36Z | https://github.com/ploomber/ploomber/issues/227 | [] | edublancas | 0 |
huggingface/transformers | machine-learning | 36,386 | `latent_recurrent_depth` Model Type Not Recognized in `transformers` | ### Model description
#### **Describe the Issue**
I have created a custom model (`latent_recurrent_depth`) and uploaded it to Hugging Face (`codewithdark/latent-recurrent-depth-lm`). The model trains successfully using `AutoModel`, `AutoModelForCausalLM`, and its actual class. However, when trying to load it with `from_pretrained()`, I encounter the following error:
```
ValueError: The checkpoint you are trying to load has model type `latent_recurrent_depth` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
```
#### **Steps to Reproduce**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "codewithdark/latent-recurrent-depth-lm"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```
#### **Expected Behavior**
Since this is my own model, I want `transformers` to recognize `latent_recurrent_depth` and load the model properly.
---
### 🔍 What I Tried
#### ✅ **1. Confirmed That Training Works**
During training, the model works fine when instantiated with `AutoModel`, `AutoModelForCausalLM`, and the actual model class. Example:
```python
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_config("codewithdark/latent-recurrent-depth-lm") # Works during training
```
- The training pipeline runs without issues.
- The error only occurs when trying to load the model from a checkpoint.
#### ✅ **2. Updating `transformers`**
```bash
pip install --upgrade transformers
```
- Still getting the same error.
#### ✅ **3. Checking the Model Config**
```python
from transformers import AutoConfig
config = AutoConfig.from_pretrained("codewithdark/latent-recurrent-depth-lm")
print(config)
```
- The `model_type` is `latent_recurrent_depth`, which is not registered in `transformers`.
#### ✅ **4. Implementing a Custom Model Class**
Since `transformers` does not recognize my custom architecture, I attempted to manually register it:
```python
from transformers import AutoModel, AutoConfig, PreTrainedModel
import torch.nn as nn
class LatentRecurrentDepthModel(PreTrainedModel):
config_class = AutoConfig # Define config class
def __init__(self, config):
super().__init__(config)
self.layer = nn.Linear(config.hidden_size, config.hidden_size) # Example layer
def forward(self, input_ids, attention_mask=None):
return self.layer(input_ids)
# Register model
AutoConfig.register("latent_recurrent_depth", AutoConfig)
AutoModel.register(AutoConfig, LatentRecurrentDepthModel)
```
- However, I’m not sure if this is the correct approach.
---
### ❓ Questions
1. If the model works during training with `AutoModelForCausalLM`, why does `from_pretrained()` fail?
2. Is there an official way to add custom architectures to `transformers`?
3. Do I need to provide a `config.json` file in my Hugging Face model repository for better compatibility?
4. If I register my model this way, can I still load it from Hugging Face (`from_pretrained()`), or do I need to modify `transformers` source code?
### 🛠️ Environment
- `transformers` version: (output of `pip show transformers`)
- Python version: (e.g., 3.10)
- OS: (Windows/Linux/Mac)
Any guidance on how to properly integrate a custom model type into `transformers` would be greatly appreciated! 🚀
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
Github --> Model Implementaion code --> https://github.com/codewithdark-git/LatentRecurrentDepthLM.git
HuggingFace --> codewithdark/latent-recurrent-depth-lm | closed | 2025-02-25T03:59:39Z | 2025-02-25T16:12:35Z | https://github.com/huggingface/transformers/issues/36386 | [
"New model"
] | codewithdark-git | 2 |
graphistry/pygraphistry | pandas | 607 | [BUG] umap dirty_cat on colab |
With the latest version of PyGraphistry I get the following error while running g.umap():

| open | 2024-10-27T18:31:08Z | 2025-01-04T20:15:29Z | https://github.com/graphistry/pygraphistry/issues/607 | [
"bug"
] | maksim-mihtech | 10 |
PablocFonseca/streamlit-aggrid | streamlit | 86 | [Bug] Sorting not working when NaN is present | There seems to be an issue with sorting the data using the up/down arrows by columns.
When the variable contains a missing value (i.e. NaN) the sorting does not work properly. See attachment. When dropping all NaN first, the sorting seems to work fine.
<img width="96" alt="image" src="https://user-images.githubusercontent.com/25745787/165161893-ff4409eb-59b4-4eed-bcf9-db627dbf2cf2.png">
Adding my options for completeness:
```
gb = GridOptionsBuilder.from_dataframe(rdf)
gb.configure_pagination(paginationAutoPageSize=False) #Add pagination
gb.configure_side_bar() #Add a sidebar
gb.configure_selection('single', pre_selected_rows=[0], use_checkbox=True, groupSelectsChildren='Group checkbox select children') # Enable single/multi-row selection ['single', 'multiple']
gridOptions = gb.build()
```
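A workaround sketch, with assumptions flagged: it relies on st-aggrid's `JsCode` helper plus ag-grid's column `comparator` hook, and `allow_unsafe_jscode=True` when rendering. The comparator pushes null/NaN values to the end of the sort:
```python
from st_aggrid import AgGrid, JsCode

nan_safe_comparator = JsCode("""
function(valueA, valueB) {
    // Treat null/NaN as larger than everything so they sort last.
    // (valueA !== valueA is true only for NaN.)
    const aBad = valueA == null || valueA !== valueA;
    const bBad = valueB == null || valueB !== valueB;
    if (aBad && bBad) return 0;
    if (aBad) return 1;
    if (bBad) return -1;
    return valueA - valueB;  // numeric columns assumed
}
""")

gb.configure_default_column(comparator=nan_safe_comparator)
AgGrid(rdf, gridOptions=gb.build(), allow_unsafe_jscode=True)
```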
| closed | 2022-04-25T19:40:16Z | 2024-04-04T17:53:20Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/86 | [
"question"
] | S-UP | 2 |
qwj/python-proxy | asyncio | 123 | Can it support speed limits by client IP? | open | 2021-03-25T10:15:11Z | 2021-03-25T10:15:11Z | https://github.com/qwj/python-proxy/issues/123 | [] | munding | 0 |
deepset-ai/haystack | pytorch | 8,561 | AzureOCRDocumentConverter | I'm using the AzureOCRDocumentConverter and I'm struggling to understand how it can be useful.
How am i suppose to use the output documents of the AzureOCRDocumentConverter? All the extracted tables gets flatten out in the documents value entry of the returned dictionary. Wouldn't be better to convert that extracted text from tables into some markdown format? In this way I can ingest row by row in Elasticsearch. At the moment in the dictionary returned, the structure of the tables gets completely lost and is basically useless, i can't chunk that.
thanks a lot!
| closed | 2024-11-20T16:43:10Z | 2024-11-23T16:33:07Z | https://github.com/deepset-ai/haystack/issues/8561 | [] | CompareSan | 0 |
onnx/onnx | tensorflow | 6,374 | CI not compatible with ubuntu-24.04; Change the runner from ubuntu-latest to ubuntu-22.04? | # Ask a Question
While testing python 3.13 I realized that our current pipeline does not work for ubuntu-24.04.
Currently we are using ubuntu-latest, if 24.04 will become "latest" our build will fail first.
A quick search didn't turn up a roadmap
Example run could be found here
https://github.com/onnx/onnx/actions/runs/10902364296/job/30254115460?pr=6373
Should we change the runner from ubuntu-latest to ubuntu-22.04? | closed | 2024-09-17T11:53:15Z | 2024-11-01T14:57:44Z | https://github.com/onnx/onnx/issues/6374 | [
"question"
] | andife | 4 |
explosion/spaCy | deep-learning | 12,311 | Incorrect tokenization of words with ":" in Swedish model | ## How to reproduce the behaviour
In Swedish, ":" is used before single character endings after numerals, letters, abbreviations and other cases.
In the Swedish model, words with ":" get split in tokenization. For example the word "46:e" in the sentence "_Demokraten Joe Biden blir landets 46:e president._" get split into three tokens: `["46", ":", "e"]`.
Words with ":" should not be split. In the training data for the Swedish model, these words are not split either.
## Your Environment
- **spaCy version:** 3.5.0
- **Platform:** Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- **Python version:** 3.9.15
- **Pipelines:** sv_core_news_md (3.5.0)
| closed | 2023-02-21T09:59:15Z | 2023-03-30T00:02:23Z | https://github.com/explosion/spaCy/issues/12311 | [
"enhancement",
"lang / sv"
] | lise-brinck | 1 |
neuml/txtai | nlp | 644 | Add feature to return embeddings search results as graph | This change will add a new parameter to `embeddings.search` which enables returning the search results as a graph. This change will find all results for a query then use that to filter the underlying graph index.
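A usage sketch of what this might look like (the exact parameter name is an assumption based on the description above):
```python
# Assumes an Embeddings instance with both a dense index and a graph index.
results = embeddings.search("machine learning", limit=10, graph=True)
# `results` would be a graph filtered to the nodes matching the query.
```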
When a graph index is not present, this parameter will be ignored. | closed | 2024-01-22T16:09:08Z | 2024-01-23T19:07:14Z | https://github.com/neuml/txtai/issues/644 | [] | davidmezzetti | 0 |
MaartenGr/BERTopic | nlp | 1,357 | reduce_outliers cancels the representation_model | Hello Maarten,
I have been experimenting with topics reduction and outliers reduction when clustering using HDBSCAN.
I also used Cohere for representation.
```python
hdbscan_model = HDBSCAN(min_cluster_size=20, metric='euclidean', cluster_selection_method='eom', prediction_data=True)
vectorizer_model = OnlineCountVectorizer(stop_words="english")
ctfidf_model = ClassTfidfTransformer(reduce_frequent_words=True, bm25_weighting=True)
umap_model = UMAP(n_neighbors=70, n_components=5, min_dist=0.0, metric='cosine', random_state=42)
co = cohere.Client('------------------------------')
representation_model = Cohere(co,delay_in_seconds=10)
topic_model=BERTopic(embedding_model = 'all-MiniLM-L6-v2',
umap_model = umap_model,
hdbscan_model = hdbscan_model,
vectorizer_model = vectorizer_model,
ctfidf_model = ctfidf_model,
n_gram_range=(3, 4),
representation_model=representation_model)
```
Whenever I apply these lines of code, which reduces the outliers:
```python
new_topics = topic_model.reduce_outliers(docs_train['Document'], topics, strategy="distributions")
topic_model.update_topics(docs_train['Document'], topics=new_topics)
topic_model.get_topic_info()
```
All representation is gone.
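For reference, `update_topics` also accepts a `representation_model` argument, so a sketch of re-passing Cohere at update time (assuming that applies here) would be:
```python
topic_model.update_topics(
    docs_train["Document"],
    topics=new_topics,
    representation_model=representation_model,  # re-apply Cohere labels
)
```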
Is there any way to apply Cohere, or more generally the representation model after fit_transform() is executed? | closed | 2023-06-22T10:59:25Z | 2023-09-27T09:11:49Z | https://github.com/MaartenGr/BERTopic/issues/1357 | [] | RamyTheEngineer | 2 |
tensorflow/tensor2tensor | machine-learning | 1,517 | Confusion on Registration | ### Description
Sorry, I'm afraid this is going to sound like a stupid set of questions, but here goes:
I am trying to train a Transformer on my own set of data. I am trying to understand how to do this by referencing the cloud poetry tutorial notebook (https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive/09_sequence/poetry.ipynb). I don't understand how registering problems works. In the notebook, first they subclass `PoetryLineProblem` from the parent class `text_problems.Text2TextProblem` but also with the `registry.register_problem` decorator. The way I understand this syntax is
`PoetryLineProblem = registry.register_problem(PoetryLineProblem(text_problems.Text2TextProblem))`
so to find out how `registry.register_problem` works, I go to `tensor2tensor/utils/registry.py` where I find
`493 register_problem = register_base_problem`
preceded by
`489 register_base_problem = Registries.problems.register`
which leads to this line
`409 problems = Registry(
410 "problems", validator=_problem_name_validator, on_set=_on_problem_set)`
which leads to
```
201  def register(self, key_or_value=None):
       """Docstring..."""
       def decorator(value, key):
         self[key] = value
         return value
       # Handle if decorator was used without parens
       if callable(key_or_value):
         return decorator(value=key_or_value, key=None)
       else:
         return lambda value: decorator(value, key=key_or_value)
```
Okay, so at this point, I feel very confused, because the first line I read
`PoetryLineProblem = registry.register_problem(PoetryLineProblem(text_problems.Text2TextProblem))`
seems to be calling a function `decorator` ultimately, which takes two arguments, but there is only one class passed in. So I don't understand anything that happened along the way because this code from which it all stems makes no sense to me. Maybe this is an issue of not understanding decorators deeply enough, but I just would appreciate someone explaining to me how to make sense of all this source code (something I'm not super experienced with).
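For anyone following along, here is my attempt at a minimal standalone sketch of the with-or-without-parentheses decorator pattern that `register` implements (illustrative code, not from tensor2tensor):
```python
REGISTRY = {}

def register(key_or_value=None):
    def decorator(value, key):
        REGISTRY[key or value.__name__] = value
        return value
    if callable(key_or_value):
        # Used bare as @register: key_or_value IS the decorated class,
        # so it gets registered immediately with key=None (default name).
        return decorator(value=key_or_value, key=None)
    # Used as @register("name"): return a decorator waiting for the class.
    return lambda value: decorator(value, key=key_or_value)

@register
class Foo:  # equivalent to: Foo = register(Foo)
    pass

@register("bar_problem")
class Bar:  # equivalent to: Bar = register("bar_problem")(Bar)
    pass
```
So when the decorator is used without parentheses, the class itself is `key_or_value` and `key` simply defaults to `None`, which is why only one argument appears to be passed.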
### Environment information
```
OS: macOC Mojave Version 10.14
$ pip freeze | grep tensor
tensorboard==1.8.0
tensorflow==1.8.0
$ python -V
Python 2.7.15 :: Anaconda, Inc.
```
| open | 2019-03-22T15:16:44Z | 2019-03-22T15:19:58Z | https://github.com/tensorflow/tensor2tensor/issues/1517 | [] | chrisrytting | 0 |
biolab/orange3 | scikit-learn | 6,987 | "Show help" popup: provide "Go Back" and "Go forward" buttons if user has clicked on a link in the widget explanation | <!--
Thanks for taking the time to submit a feature request!
For the best chance at our team considering your request, please answer the following questions to the best of your ability.
-->
**What's your use case?**
Clicking on the "Show Help" or "?" button in a widget's dialog box opens a popup explaining the use of that widget. The explanation may contain web links, for instance to Wikipedia; clicking on a link replaces the explanation by the web page to which the link is pointing. Clicking on links on that web page opens other web pages. As far as I know, there is no straightforward way to go back to the explanation or the previously visited web page other than closing the popup and clicking on "?" once again.
**What's your proposed solution?**
Provide simple "Go Back" and "Go forward" buttons (< and >) in the explanation window that are greyed out as long as the user hasn't clicked on a link and become active once a link has been clicked on.
**Are there any alternative solutions?**
As said: closing the popup and clicking on "?" once again.
| open | 2025-01-16T10:42:22Z | 2025-01-17T11:31:46Z | https://github.com/biolab/orange3/issues/6987 | [] | wvdvegte | 4 |
RobertCraigie/prisma-client-py | asyncio | 718 | Improved seeding support | ## Problem
The NodeJS Prisma package includes a mechanism for seeding databases with a `prisma db seed` command. This, however, requires configuring the command in the `package.json` of the project, which would normally be present in a NodeJS project.
```json
{
"prisma": {
"seed": "prisma/seed.sh"
}
}
```
Interestingly, if I add a file called `package.json` and simply include that JSON block above then the seed script is executed when I run `prisma db seed`. I think we just need a more pythonic way of defining what that seed script should be.
## Suggested solution
I think just looking for a file called either `seed.py` or `seed.sh` would be an acceptable solution, since there is already a precedent for required file names, eg. `partial_types.py`.
## Alternatives
We can manually run scripts to see data, but these scripts are not integrated with the Prisma reset/migrate workflows the way the `prisma db seed` command is.
## Additional context
[Node Prisma docs on database seeding](https://www.prisma.io/docs/guides/database/seed-database)
| open | 2023-03-07T22:33:53Z | 2023-03-08T00:42:12Z | https://github.com/RobertCraigie/prisma-client-py/issues/718 | [] | jake-amicus | 1 |
modelscope/modelscope | nlp | 343 | Fine-tuning the OCR recognition model: loading my own dataset | Question: since this needs to run in an intranet environment, the dataset was downloaded locally and I want to load it from local files. The docs only show how to load single files such as txt, json, and csv, but my recognition data is in LMDB format. How should I load the data for training? I would really appreciate your answer | closed | 2023-06-29T09:11:26Z | 2024-07-22T01:55:03Z | https://github.com/modelscope/modelscope/issues/343 | [
"Stale"
] | lipeng1109 | 2 |
pandas-dev/pandas | pandas | 60,816 | BUG: Union of two DateTimeIndexes is incorrectly calculated | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
from pandas import DatetimeIndex
l = DatetimeIndex(['2023-05-24 00:00:00+00:00', '2023-05-24 00:15:00+00:00',
'2023-05-24 00:30:00+00:00', '2023-05-24 00:45:00+00:00',
'2023-05-24 01:00:00+00:00'],
dtype='datetime64[ms, UTC]', name='ts', freq='15min')
r = DatetimeIndex(['2023-05-24 00:00:00+00:00', '2023-05-24 00:30:00+00:00',
'2023-05-24 01:00:00+00:00'],
dtype='datetime64[ms, UTC]', name='ts', freq='30min')
union = r.union(l)
print(union)
assert len(union) == len(l)
assert all(r.union(l) == l)
```
### Issue Description
The union of two datetime-indexes as given in the reproducible example is calculated incorrectly, the result on newer Pandas versions is
```python
DatetimeIndex(['2023-05-24 00:00:00+00:00', '2051-11-29 16:00:00+00:00',
'2080-06-06 08:00:00+00:00'],
dtype='datetime64[ms, UTC]', name='ts', freq='15T')
```
The first failing version is the one I put into "Installed Versions". The error happens exactly from Pandas 2.1.0 onwards, Pandas 1.* and up to 2.0.3 work fine. Neither the numpy nor the Python version matter.
### Expected Behavior
The expected result in the given case is that `l` is returned.
### Installed Versions
INSTALLED VERSIONS
------------------
commit : ba1cccd19da778f0c3a7d6a885685da16a072870
python : 3.10.16.final.0
python-bits : 64
OS : Linux
OS-release : 6.12.10-200.fc41.x86_64
Version : #1 SMP PREEMPT_DYNAMIC Fri Jan 17 18:05:24 UTC 2025
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.1.0
numpy : 1.26.4
pytz : 2024.2
dateutil : 2.9.0.post0
tzdata : 2025.1
| open | 2025-01-29T15:45:23Z | 2025-02-02T14:27:14Z | https://github.com/pandas-dev/pandas/issues/60816 | [
"Bug",
"Regression",
"Needs Discussion",
"Non-Nano"
] | filmor | 5 |
fugue-project/fugue | pandas | 300 | [FEATURE] A hybrid engine of DuckDB and Dask | **Describe the solution you'd like**
While DuckDB can be extremely fast, it can't cover the `transform` feature, so we let Dask be the supporting engine to fully utilize local CPUs for parallel transformation. This engine must also minimize data transfers between DuckDB and Dask to maximize speed.
"enhancement",
"dask",
"duckdb"
] | goodwanghan | 0 |
healthchecks/healthchecks | django | 561 | Attach log files to email notification | First of all, this is an AWESOME Open Source project. healthckecks.io service is unbelievably smart and well thought out. I didn't think such a product/project existed :)
Now onto my request/question.
I couldn't find any doc around attaching failure logs to as an email attachment part sent as part of failure notification. This eliminates the need of logging into healthchecks.io OR logging onto the server to check the logs. Of course to perform actual fix you would have to access some sort of system but this would help as part of initial investigation phase.
Does such a thing exist or this would be a new feature?
Feel free to close this if people think that this would be an unnecessary thing. | closed | 2021-09-22T01:07:28Z | 2021-09-23T15:44:52Z | https://github.com/healthchecks/healthchecks/issues/561 | [] | ahmedsajid | 2 |
PeterL1n/RobustVideoMatting | computer-vision | 267 | When running inference with the provided model, the portrait edges are not segmented smoothly and parts of the body go missing. How should I tune this? | 
| open | 2024-04-03T06:55:54Z | 2024-07-10T08:42:23Z | https://github.com/PeterL1n/RobustVideoMatting/issues/267 | [] | jackTJT | 1 |
sinaptik-ai/pandas-ai | data-visualization | 662 | Fix: Error using enforce_privacy = True | https://github.com/gventuri/pandas-ai/blob/87dd966e52c21e6359f2797f80826cd49c6bd40f/pandasai/smart_dataframe/__init__.py#L436C57-L436C57
When using `enforce_privacy = True`
generates an error because it only sends the headers and when using `_truncate_head_columns` it tries to truncate 0 rows
so I propose the following:
```python
if self.lake.config.enforce_privacy:
return sampled_head
else:
return self._truncate_head_columns(sampled_head)
```
Instead of using:
```python
return self._truncate_head_columns(sampled_head)
```
This avoids the errors reported by users who use `enforce_privacy = True` | closed | 2023-10-19T23:50:48Z | 2023-10-23T12:50:51Z | https://github.com/sinaptik-ai/pandas-ai/issues/662 | [
"bug",
"good first issue"
] | mavixRL | 3 |
pallets/flask | flask | 5,395 | Issue matching route with methods=['OPTIONS'] when similar route appears later using ['GET', 'POST'] | With routing setup like this:
```
class ApiHandler:
@route_with('/<regex("(.*)"):path>', methods = ['OPTIONS'])
def cors(self, path):
# Send custom CORS headers
# This should be matched for any request
return make_response('OK', 200)
@route_with('/post/<regex("(.*)"):id>', methods = ['GET', 'POST'])
def post(self, id):
# This should only be matched on GET / POST for "/post/<id>"
return make_response('You are at the post', 200)
```
The `cors` handler never gets run if an `OPTIONS` request is made to `/post/123` with the above setup.
Instead, strangely, a blank `200` reponse is returned (with no logs besides the single request entry shown in the logs).
If an `OPTIONS` request is made to any other URL (or if the `post` handler is commented out), the `cors` handler is correctly run.
I've tried running this on multiple Flask versions, all the way from `1.1.4` to `3.0.1`, to no avail.
This appears to be a bug in Flask's / Werkzeug's routing.
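One possibility I want to flag: Werkzeug adds automatic OPTIONS handling to rules by default, which could explain the blank 200. A sketch of disabling it on the GET/POST rule (this assumes my custom `route_with` decorator forwards extra kwargs to Flask's `add_url_rule`):
```python
# Hypothetical: provide_automatic_options is a real Flask/Werkzeug option;
# whether route_with passes it through is an assumption.
@route_with('/post/<regex("(.*)"):id>', methods=['GET', 'POST'],
            provide_automatic_options=False)
def post(self, id):
    return make_response('You are at the post', 200)
```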
Anyone encountered something similar or have any suggestions on what I can do to diagnose / fix? | closed | 2024-01-26T09:10:12Z | 2024-02-11T00:06:41Z | https://github.com/pallets/flask/issues/5395 | [] | daskalou | 3 |
Kludex/mangum | fastapi | 29 | Reconsidering multiple platform support | So initially I set out to implement an adapter for AWS Lambda / API Gateway, but then I was curious and started experimenting with Azure Functions. This led to shifting focus to multiple platform support, but after getting to this point, I'm wondering if this should be reconsidered because:
- Nearly all cases where I've seen interest in serverless ASGI has been for AWS Lambda specifically.
- It may be better to provide great support for a single platform that has widespread use vs. OK support for multiple platforms that don't get used much.
- WebSocket support in AWS is definitely something platform-specific I want to focus on.
- Creating deployment examples is much easier when considering one platform and perhaps I can eventually get some decent tooling to make that process as simple as possible here.
If there isn't any strong support to include other platforms, then I think I may re-focus back on solely the AWS case and pull the Azure Functions support into a different project.
Does anyone thoughts on this?
| closed | 2019-01-30T01:36:13Z | 2019-01-30T09:12:51Z | https://github.com/Kludex/mangum/issues/29 | [
"maybe"
] | jordaneremieff | 2 |
ckan/ckan | api | 8,540 | clean_db fixture is also clearing activity extension 2.11 DB migration | ## CKAN version
CKAN 2.11 with Python 3.10 and activity extension enabled
## Describe the bug
I'm trying to test an extension that is dependant on the activity extension. The extension uses the db so I'm calling `clean_db` fixture for each test, along with `with_plugins` to load my extension and the activity extension.
My problem is that the db migration in the activity plugin ([to create the `permissions_labels` column](https://github.com/ckan/ckan/blob/master/ckanext/activity/migration/activity/versions/71713a055d5c_add_permission_labels_in_activity_table.py)) keeps being cleared as part of the `clean_db` fixture. If I run `ckan -c test.ini db init` then inspect the test db with `psql` I can see that the `permissions_labels` column is there. I run a test with the `clean_db` fixture, then inspect the db again and discover that the `permissions_labels` column has been deleted by the fixture. This is obviously stopping all our extension's tests from passing.
I've not really done much with alembic - I'm wondering if the activity extension db migration has been configured properly? Anyone else having this problem? I have found [this issue](https://github.com/ckan/ckan/issues/8422), which may be broadly related?
### Steps to reproduce
Steps to reproduce the behavior:
1. Create a stub test such as this:
```
@pytest.mark.usefixtures("with_plugins", "clean_db")
def test_with_no_dataset_updates():
pass
```
2. Add `ckan.plugins = activity` to test.ini and run `ckan -c test.ini db init` to ensure the db is properly initialised with activity extension.
3. Print out columns for the activity table to confirm `permissions_labels` is there.
```
psql -Atx postgresql://ckan:pass@localhost/ckan_test -c "\d activity"
```
4. Run the stub test (it obviously should pass fine since it's not testing anything)
5. Print out columns for the activity table again:
```
psql -Atx postgresql://ckan:pass@localhost/ckan_test -c "\d activity"
```
6. Notice that permissions_labels column has been cleared.
### Expected behavior
`permissions_labels` column should obviously not be cleared by `clean_db` when the `activity` plugin is enabled. This is a blocker to getting our tests passing with CKAN 2.11.
### Additional information
The specific error being dumped by each test is:
```
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedColumn) column "permission_labels" of relation "activity" does not exist
``` | closed | 2024-11-19T15:12:30Z | 2025-01-30T00:42:58Z | https://github.com/ckan/ckan/issues/8540 | [] | jonathansberry | 2 |
vi3k6i5/flashtext | nlp | 63 | can't search overlapped words? | from flashtext import KeywordProcessor
kp = KeywordProcessor()
kp.add_keyword("ABC DE")
kp.add_keyword("DE FGHI")
kp.extract_keywords("ABC DE FGHI")
>>>['ABC DE']
why not ['ABC DE', 'DE FGHI'] | open | 2018-10-12T09:59:07Z | 2019-09-05T15:48:10Z | https://github.com/vi3k6i5/flashtext/issues/63 | [] | xuexcy | 5 |
pywinauto/pywinauto | automation | 893 | Application(backend="uia").connect(path="explorer.exe") cannot find dialog | I have one file explorer. 'Downloads' file explorer is launched by administrator.
app = Application(backend="uia").connect(path="explorer.exe")
app.Downloads
If I run the above script as a normal user, it raises the exception 'Cannot find "Downloads"'.
If I run it as administrator, it works.
How can I make it work as a normal user?
- Pywinauto version: 0.68
- Python version : 3.8.0
- Platform and OS: win2012 r2
| closed | 2020-02-20T06:03:05Z | 2020-02-26T00:03:19Z | https://github.com/pywinauto/pywinauto/issues/893 | [
"invalid",
"question"
] | czhhua28 | 2 |
encode/databases | sqlalchemy | 395 | Add support for postgresql+asyncpg DB scheme | The supported DBs dict is missing this scheme prefix.
https://github.com/encode/databases/blob/6fcb16823c32dbf492991f26562a3ca6884dfbef/databases/core.py#L45 | closed | 2021-09-22T21:58:40Z | 2021-09-25T09:12:21Z | https://github.com/encode/databases/issues/395 | [] | mhadam | 1 |
ageitgey/face_recognition | machine-learning | 611 | Doesn't work with .bmp images | file: face_recognition_knn.py
This algorithm is not training on .bmp images.
What I tried: I added .bmp in ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg'}.
But this didn't solve the problem :( | open | 2018-08-30T12:11:35Z | 2018-08-30T18:19:44Z | https://github.com/ageitgey/face_recognition/issues/611 | [] | NarenBabuR | 1 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 729 | What does this mean? I really can't keep a straight face at this picture | ### The following items must be checked before submitting
- [X] Make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] Since the related dependencies are updated frequently, make sure you have followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues; no similar problem or solution was found
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects
- [X] Model correctness check: be sure to verify the model against [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); with an incorrect model, correct results and normal operation cannot be guaranteed
### Issue type
Output quality issue
### Base model
Alpaca-7B
### Operating system
Windows
### Describe the problem in detail
```
# Paste the code you ran here (delete this block if not applicable)
```
 
### Dependencies (must be provided for code-related issues)
```
# Paste your dependency info here
```
### Runtime logs or screenshots
```
# Paste your runtime logs here
```
 
| closed | 2023-07-07T09:09:04Z | 2023-07-18T22:40:22Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/729 | [
"stale"
] | CXLiang123 | 4 |
Lightning-AI/pytorch-lightning | machine-learning | 19,818 | Full validation after first microbatch when training after LearningRateFinder | ### Bug description
`lr_finder` does not reset `fit_loop.epoch_loop.restarting` to `False` which causes the validation check to trigger validation on the first `advance()` call because of [this check](https://github.com/Lightning-AI/pytorch-lightning/blob/b9680a364da4e875b237ec3c03e67a9c32ef475b/src/lightning/pytorch/loops/training_epoch_loop.py#L199).
### What version are you seeing the problem on?
v2.2, master
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | closed | 2024-04-25T21:46:09Z | 2024-06-07T16:00:19Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19818 | [
"bug",
"needs triage",
"ver: 2.2.x"
] | clumsy | 0 |
errbotio/errbot | automation | 803 | IRC backend doesn't render cards correctly | Not everyone uses mIRC, so we shouldn't default to the format that only works with it (because that's not exactly friendly output for other IRC client users).
For example, here is the output from hexchat:
```
<gerritbot2> {: color='black' bgcolor='green' }
<gerritbot2> Kyrylo Galanov (kgalanov@mirantis.com) proposed change Fix race condition in get puppet status at https://review.openstack.org/335319 in project openstack/fuel-astute (+1|-1).
<gerritbot2> ()
```
| open | 2016-06-30T04:03:43Z | 2016-08-16T20:15:41Z | https://github.com/errbotio/errbot/issues/803 | [
"type: bug",
"backend: IRC"
] | harlowja | 6 |
mouredev/Hello-Python | fastapi | 468 | Tenglong Company Entertainment registration and a formal online betting platform synced with the live venue | Tenglong Company Entertainment: 376838.com [Live Dealer Video] [Board Games] [Electronic Games] [Sports Events] [Lottery Games]; Tenglong Entertainment: [376838.com] [Baccarat] [Dragon Tiger] [Niu Niu]; deposits supported: bank card, WeChat, Alipay, USDT, etc. Enquiries (WeChat: zdn200, WeChat: xiaolu460570, Telegram: @lc15688). Wishing you pleasant gaming and plentiful profits
 | closed | 2025-03-05T13:43:08Z | 2025-03-10T13:47:05Z | https://github.com/mouredev/Hello-Python/issues/468 | [] | khyl55 | 0 |
dsdanielpark/Bard-API | api | 217 | Exception: SNlM0e value not found. Double-check __Secure-1PSID value or pass it as token='xxxxx'. |
CONSENT
SOCS
APISID
HSID
SAPISID
SID
SSID
__Secure-1PAPISID
__Secure-1PSID
__Secure-3PAPISID
__Secure-3PSID
SEARCH_SAMESITE
1P_JAR
__Secure-1PSIDTS
__Secure-3PSIDTS
AEC
NID
SIDCC
__Secure-1PSIDCC
__Secure-3PSIDCC
SNID
ACCOUNT_CHOOSER
LSOLH
LSID
__Host-1PLSID
__Host-3PLSID
__Host-GAPS
OSID
__Secure-OSID
_ga
_ga_Q3KJSFNQDY
_GRECAPTCHA
OTZ
DV
OTZ
CONSENT
SOCS
NID
HSID
SSID
APISID
SAPISID
__Secure-1PAPISID
__Secure-3PAPISID
SID
__Secure-1PSID
__Secure-3PSID
OGPC
SEARCH_SAMESITE
AEC
1P_JAR
__Secure-1PSIDTS
__Secure-3PSIDTS
SIDCC
__Secure-1PSIDCC
__Secure-3PSIDCC
TAID
AID
SNID
ACCOUNT_CHOOSER
LSID
__Host-1PLSID
__Host-3PLSID
__Host-GAPS
_GRECAPTCHA
OTZ
DV
OTZ
__Host-GMAIL_SCH_GMN
__Host-GMAIL_SCH_GMS
__Host-GMAIL_SCH_GML
OSID
__Secure-OSID
COMPASS
COMPASS
COMPASS
OTZ
OTZ
OSID
__Secure-OSID
_ga
_ga_H30R9PNQFN
COMPASS
_ga
_ga_S3V05QCXK5
{'__Secure-1PSID': 'MY COOKIE WAS HERE. I REPLACED IT TO NOT LEAK IT'}
Traceback (most recent call last):
File "C:\Users\mdevp\PycharmProjects\OSINTDatabaseV1\test.py", line 3, in <module>
bard = Bard(token_from_browser=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\mdevp\AppData\Local\Programs\Python\Python311\Lib\site-packages\bardapi\core.py", line 78, in __init__
self.SNlM0e = self._get_snim0e()
^^^^^^^^^^^^^^^^^^
File "C:\Users\mdevp\AppData\Local\Programs\Python\Python311\Lib\site-packages\bardapi\core.py", line 154, in _get_snim0e
raise Exception(
Exception: SNlM0e value not found. Double-check __Secure-1PSID value or pass it as token='xxxxx'.
Process finished with exit code 1
Code:
```
from bardapi import Bard
bard = Bard(token_from_browser=True)
res = bard.get_answer('Wie ist das Wetter in Berlin?')
print(res)
```
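For comparison, a minimal sketch that passes the cookie value explicitly instead of reading it from the browser; `BARD_TOKEN` is just an example variable name:
```python
import os
from bardapi import Bard

token = os.environ["BARD_TOKEN"]  # the raw __Secure-1PSID cookie value
bard = Bard(token=token)
print(bard.get_answer("Wie ist das Wetter in Berlin?")["content"])
```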
| closed | 2023-10-21T13:04:07Z | 2023-10-25T19:11:29Z | https://github.com/dsdanielpark/Bard-API/issues/217 | [] | marl0nx | 2 |
betodealmeida/shillelagh | sqlalchemy | 220 | ISODate parse can use std library | Currently `ISODate`'s `parse` uses `dateutil`:
https://github.com/betodealmeida/shillelagh/blob/7afaf13ec822f8c56895a8aec6ad77a7de2ea600/src/shillelagh/fields.py#L323-L328
however as of Python 3.7, [`date.fromisoformat`](https://docs.python.org/3/library/datetime.html#datetime.date.fromisoformat) should work and be faster + less permissive (it doesn't try to guess the format) | closed | 2022-03-31T16:23:38Z | 2022-07-24T21:42:11Z | https://github.com/betodealmeida/shillelagh/issues/220 | [
"help wanted",
"good first issue",
"performance"
] | cancan101 | 0 |
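A minimal sketch of the suggested change (not the actual shillelagh code), keeping the usual field convention that unparseable values map to `None`:
```python
from datetime import date
from typing import Optional

def parse(value: Optional[str]) -> Optional[date]:
    if value is None:
        return None
    try:
        # Standard library since Python 3.7: strict ISO-8601, no format guessing.
        return date.fromisoformat(value)
    except ValueError:
        return None
```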
autogluon/autogluon | data-science | 4,797 | [BUG] MMCV Installation Stucks | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [ ] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [ ] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [X] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
`mim install mmcv==2.1.0` gets stuck at
```
Building wheels for collected packages: mmcv
Building wheel for mmcv (setup.py)
```
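The hang usually means `mmcv` is compiling its CUDA ops from source. One possible workaround is installing a prebuilt wheel from the OpenMMLab index; the `cu121`/`torch2.1` path segments below are assumptions and must match the local CUDA and PyTorch versions:
```bash
pip install mmcv==2.1.0 -f https://download.openmmlab.com/mmcv/dist/cu121/torch2.1/index.html
```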
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
**To Reproduce**
<!-- A minimal script to reproduce the issue. Links to Colab notebooks or similar tools are encouraged.
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com.
In short, we are going to copy-paste your code to run it and we expect to get the same result as you. -->
**Screenshots / Logs**
<!-- If applicable, add screenshots or logs to help explain your problem. -->
**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
```python
# Replace this code with the output of the following:
from autogluon.core.utils import show_versions
show_versions()
```
</details>
| open | 2025-01-15T00:23:34Z | 2025-01-15T00:27:27Z | https://github.com/autogluon/autogluon/issues/4797 | [
"bug: unconfirmed",
"Needs Triage"
] | FANGAreNotGnu | 1 |
pyg-team/pytorch_geometric | deep-learning | 10,137 | Trying to add edge weights to RGCNConv and I got an error: TypeError: `propagate()` got an unexpected keyword argument `edge_weight` | The below is a reply to an earlier issue (_Originally posted by @rusty1s in https://github.com/pyg-team/pytorch_geometric/discussions/7609#discussioncomment-6229385_) and the author of torch_geometric gives the below answer:
> If I understand you correctly, you are searching for a way to apply `RGCNConv` on weighted graphs? In that case, you can extend `RGCNConv` and add an `edge_weight` to its argument which is then used in `message`. Pseudo-code:
> ```python
> def forward(self, x, edge_index, edge_weight, edge_type):
>     out = 0
>     for i in range(num_edge_types):
>         mask = edge_type == i
>         out += self.propagate(edge_index[:, mask], edge_type=i, edge_weight=edge_weight[mask])
>     return out
>
> def message(self, x_j, edge_weight, edge_type):
>     return edge_weight * x_j @ self.weight[edge_type]
> ```
When I implement the same by extending the `RGCNConv` class I get hit with the below error:
```bash
Traceback (most recent call last):
File "/DATAX/divyaksh/Projects/emotion/dialogue_gcn/train.py", line 103, in <module>
main(args)
File "/DATAX/divyaksh/Projects/emotion/dialogue_gcn/train.py", line 37, in main
ret = coach.train()
File "/DATAX/divyaksh/Projects/emotion/dialogue_gcn/dgcn/Coach.py", line 41, in train
self.train_epoch(epoch)
File "/DATAX/divyaksh/Projects/emotion/dialogue_gcn/dgcn/Coach.py", line 74, in train_epoch
nll = self.model.get_loss(data)
File "/DATAX/divyaksh/Projects/emotion/dialogue_gcn/dgcn/model/DialogueGCN.py", line 60, in get_loss
graph_out, features = self.get_rep(data)
File "/DATAX/divyaksh/Projects/emotion/dialogue_gcn/dgcn/model/DialogueGCN.py", line 49, in get_rep
graph_out = self.gcn(features, edge_index, edge_norm, edge_type)
File "/DATADIV/miniconda3/envs/pytorch_py310/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
```
Here is the code which calls and implements `ExtendedRGCNConv`:
```python
# ExtendedRGCNConv.py
from torch_geometric.nn import RGCNConv, MessagePassing
import torch

class ExtendedRGCNConv(RGCNConv):
    def __init__(self, in_channels, out_channels, num_relations):
        super(ExtendedRGCNConv, self).__init__(in_channels, out_channels, num_relations)

    def forward(self, x, edge_index, edge_weight, edge_type):
        x_l = None
        if isinstance(x, tuple):
            x_l = x[0]
        else:
            x_l = x
        if x_l is None:
            x_l = torch.arange(self.in_channels_l, device=self.weight.device)
        x_r = x_l
        if isinstance(x, tuple):
            x_r = x[1]
        # propagate_type: (x: Tensor, edge_type_ptr: OptTensor)
        size = (x_l.size(0), x_r.size(0))
        out = torch.zeros(x_r.size(0), self.out_channels, device=x_r.device)
        for i in range(self.num_relations):
            mask = edge_type == i
            if edge_weight is not None:
                kwargs = {'edge_weight': edge_weight[mask]}
            out += self.propagate(edge_index[:, mask], x=x, size=size, edge_type=i,
                                  edge_weight=edge_weight[mask])
        return out

    def message(self, x_j, edge_weight, edge_type):
        return edge_weight * x_j @ self.weight[edge_type]
```
```python
# model.py
import torch.nn as nn
from torch_geometric.nn import RGCNConv, GraphConv
from .ExtendedRGCNConv import ExtendedRGCNConv
from .WeightedRGCNConv import WeightedRGCNConv
from .EdgeWeightedRGCNConv import EdgeWeightedRGCNConv

class GCN(nn.Module):
    def __init__(self, g_dim, h1_dim, h2_dim, args):
        super(GCN, self).__init__()
        self.num_relations = 2 * args.n_speakers ** 2
        # self.conv1 = RGCNConv(g_dim, h1_dim, self.num_relations, num_bases=30)
        self.conv1 = ExtendedRGCNConv(g_dim, h1_dim, self.num_relations)
        # self.conv1 = WeightedRGCNConv(g_dim, h1_dim, self.num_relations)
        # self.conv1 = EdgeWeightedRGCNConv(g_dim, h1_dim, self.num_relations)
        self.conv2 = GraphConv(h1_dim, h2_dim)

    def forward(self, node_features, edge_index, edge_norm, edge_type):
        # x = self.conv1(node_features, edge_index, edge_type, edge_norm=edge_norm)
        # x = self.conv1(node_features, edge_index, edge_type)
        x = self.conv1(node_features, edge_index, edge_norm, edge_type)  # ExtendedRGCNConv
        # x = self.conv1(node_features, edge_index, edge_type, edge_norm)  # WeightedRGCNConv
        # x = self.conv1(node_features, edge_index, edge_type, edge_weight=edge_norm)  # EdgeWeightedRGCNConv
        x = self.conv2(x, edge_index)
        return x
```
I am also attaching my `requirements.txt` to help in reimplementing this issue.
```
aiohappyeyeballs==2.6.1
aiohttp==3.11.14
aiosignal==1.3.2
async-timeout==5.0.1
attrs==25.3.0
certifi==2025.1.31
charset-normalizer==3.4.1
contourpy==1.3.1
cycler==0.12.1
debugpy==1.8.13
filelock==3.18.0
fonttools==4.56.0
frozenlist==1.5.0
fsspec==2025.3.0
idna==3.10
Jinja2==3.1.6
joblib==1.4.2
kiwisolver==1.4.8
MarkupSafe==3.0.2
matplotlib==3.10.1
mpmath==1.3.0
multidict==6.2.0
networkx==3.4.2
numpy==2.2.4
nvidia-cublas-cu12==12.4.5.8
nvidia-cuda-cupti-cu12==12.4.127
nvidia-cuda-nvrtc-cu12==12.4.127
nvidia-cuda-runtime-cu12==12.4.127
nvidia-cudnn-cu12==9.1.0.70
nvidia-cufft-cu12==11.2.1.3
nvidia-curand-cu12==10.3.5.147
nvidia-cusolver-cu12==11.6.1.9
nvidia-cusparse-cu12==12.3.1.170
nvidia-cusparselt-cu12==0.6.2
nvidia-nccl-cu12==2.21.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.4.127
packaging==24.2
pandas==2.2.3
pillow==11.1.0
propcache==0.3.0
psutil==7.0.0
pyg-lib==0.4.0+pt25cu124
pyparsing==3.2.1
python-dateutil==2.9.0.post0
pytz==2025.1
requests==2.32.3
scikit-learn==1.6.1
scipy==1.15.2
seaborn==0.13.2
six==1.17.0
sympy==1.13.1
threadpoolctl==3.6.0
torch==2.6.0
torch-geometric==2.6.1
torch_cluster==1.6.3+pt25cu124
torch_scatter==2.1.2+pt25cu124
torch_sparse==0.6.18+pt25cu124
torch_spline_conv==1.2.2+pt25cu124
torchaudio==2.6.0
torchvision==0.21.0
tqdm==4.67.1
triton==3.2.0
typing_extensions==4.12.2
tzdata==2025.1
urllib3==2.3.0
yarl==1.18.3
```
Please help me in any alternative way to get RGCN working with edge_weights. | open | 2025-03-24T08:32:49Z | 2025-03-24T08:42:18Z | https://github.com/pyg-team/pytorch_geometric/issues/10137 | [] | divyaksh-shukla | 0 |
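For reference, a minimal sketch of one possible alternative for the issue above: build the weighted layer directly on `MessagePassing` instead of subclassing `RGCNConv`, so that `propagate()` is generated from this class's own `message()` signature. This is an illustration, not the official PyG API:
```python
import torch
from torch import nn
from torch_geometric.nn import MessagePassing

class WeightedRGCN(MessagePassing):
    def __init__(self, in_channels, out_channels, num_relations):
        super().__init__(aggr="add")
        self.weight = nn.Parameter(torch.empty(num_relations, in_channels, out_channels))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, x, edge_index, edge_weight, edge_type):
        out = x.new_zeros(x.size(0), self.weight.size(-1))
        for i in range(self.weight.size(0)):
            mask = edge_type == i
            if mask.any():
                out = out + self.propagate(
                    edge_index[:, mask],
                    x=x @ self.weight[i],          # per-relation transform
                    edge_weight=edge_weight[mask],  # per-edge scaling
                )
        return out

    def message(self, x_j, edge_weight):
        # x_j holds source-node features, already transformed per relation.
        return edge_weight.view(-1, 1) * x_j
```
The per-relation loop mirrors the pseudo-code quoted in the issue; for a large number of relations, a basis-decomposition variant would be more memory-friendly.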
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 743 | test.py does not process all images in the test directory of the model | I have about 100 images in the test directory of my model and when I run test.py, it only processes about 50 of them. Is there a way to make it do all of them? | closed | 2019-08-24T15:02:13Z | 2019-08-26T20:32:34Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/743 | [] | 0003mg | 2 |
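If I remember the repo's test options correctly, `test.py` stops after `--num_test` images (default 50), so raising it should cover the whole folder; the paths and names below are placeholders:
```bash
python test.py --dataroot ./datasets/mydata --name mymodel --model cycle_gan --num_test 100
```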
OpenInterpreter/open-interpreter | python | 1,205 | Keep getting JSONDecodeError when using computer.browser | ### Describe the bug
the computer.browser.search keeps returning this error:
```
File ~/anaconda3/lib/python3.10/site-packages/requests/models.py:975, in Response.json(self, **kwargs)
    971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
--> 975     raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)

JSONDecodeError: Expecting value: line 2 column 1 (char 1)
```
### Reproduce
computer.browser.search
### Expected behavior
it was returning search results just fine a week ago
### Screenshots
_No response_
### Open Interpreter version
0.2.4
### Python version
3.11
### Operating System name and version
linux
### Additional context
_No response_ | open | 2024-04-14T11:40:36Z | 2024-04-24T21:42:14Z | https://github.com/OpenInterpreter/open-interpreter/issues/1205 | [] | legaltextai | 5 |
newpanjing/simpleui | django | 307 | Don't show menu entries whose model the user lacks permission for | **What feature would you like to add?**
**What features do you wish to add?**
1. Filter the menu by the user's permissions: if the user has no permission on the corresponding model, do not display its name.
| closed | 2020-09-14T01:21:40Z | 2020-09-26T07:02:03Z | https://github.com/newpanjing/simpleui/issues/307 | [
"enhancement"
] | FangZiyong | 1 |
strawberry-graphql/strawberry | fastapi | 3,760 | `snake_case` input arguments are ignored in schema serialization | ```python
@strawberry.input
class FooInput:
hello: str
hello_world: str
```
^ when applying the above, `hello` is serialized in the schema, but `hello_world` is not.
## Minimal Repro
```
$ pip freeze | grep strawberry
strawberry-graphql==0.258.0
```
**`schema.py`**
```python
import typing
import strawberry
from strawberry.schema_directive import Location
@strawberry.input
class FooInput:
hello: str
hello_world: str
@strawberry.schema_directive(locations=[Location.FIELD_DEFINITION])
class FooDirective:
input: FooInput
@strawberry.type
class Query:
@strawberry.field(directives=[
FooDirective(input=FooInput(hello="hello", hello_world="hello world"))
])
def foo(self, info) -> str:
return "foo"
schema = strawberry.Schema(query=Query)
```
```
$ strawberry export-schema schema
directive @fooDirective(input: FooInput!) on FIELD_DEFINITION
type Query {
foo: String! @fooDirective(input: {hello: "hello"})
}
input FooInput {
hello: String!
helloWorld: String!
}
```
### Expected:
```graphql
foo: String! @fooDirective(input: {hello: "hello", helloWorld: "hello world"})
```
### Actual
(see above)
```graphql
foo: String! @fooDirective(input: {hello: "hello"})
```
Thanks! | closed | 2025-01-31T19:35:40Z | 2025-02-13T16:09:39Z | https://github.com/strawberry-graphql/strawberry/issues/3760 | [
"bug"
] | magicmark | 0 |
aws/aws-sdk-pandas | pandas | 2,811 | Get column parameters | **Is your idea related to a problem? Please describe.**
Linked to #2810, the API for Glue columns includes [Parameters](https://docs.aws.amazon.com/glue/latest/webapi/API_Column.html). However, none of the aws-sdk-pandas functions lets you get the column Parameters, so we have to resort to boto3 `get_table`.
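For context, a minimal sketch of the current boto3 workaround (the database and table names are placeholders):
```python
import boto3

glue = boto3.client("glue")
table = glue.get_table(DatabaseName="my_db", Name="my_table")["Table"]
# Map each column name to its (possibly empty) Parameters dict.
column_parameters = {
    col["Name"]: col.get("Parameters", {})
    for col in table["StorageDescriptor"]["Columns"]
}
```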
**Describe the solution you'd like**
Create new function similar to [get_column_comments](https://aws-sdk-pandas.readthedocs.io/en/stable/stubs/awswrangler.catalog.get_columns_comments.html) but for parameters:
```
import awswrangler as wr
pars = wr.catalog.get_columns_parameters(database="...", table="...")
``` | closed | 2024-05-07T16:20:03Z | 2024-05-14T10:09:10Z | https://github.com/aws/aws-sdk-pandas/issues/2811 | [
"enhancement"
] | SoumayaMauthoorMOJ | 2 |
2noise/ChatTTS | python | 63 | Question about the standard way to set a voice timbre from a timbre seed | To make testing easier, I wrote a small kivy app:
```
import torch
import ChatTTS
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button
from kivy.uix.textinput import TextInput
from kivy.core.audio import SoundLoader
import soundfile as sf
import tempfile

seeds = {
    "narrator": {"seed": 2222},
    "middle_aged_female": {"seed": 7869},
    "young_female": {"seed": 6615},
    "middle_aged_male": {"seed": 4099},
    "young_male": {"seed": 6653},
}

class ChatApp(App):
    def build(self):
        self.chat = ChatTTS.Chat()
        self.chat.load_models(source='local', local_path='models')
        self.std, self.mean = torch.load('models/asset/spk_stat.pt').chunk(2)
        layout = BoxLayout(orientation='vertical')
        self.input_text = TextInput(size_hint=(1, 0.8), multiline=False)
        submit_button = Button(text='Submit', size_hint=(1, 0.2))
        submit_button.bind(on_press=self.infer_and_play)
        layout.add_widget(self.input_text)
        layout.add_widget(submit_button)
        return layout

    def infer_and_play(self, instance):
        torch.manual_seed(seeds["young_female"]["seed"])
        rnd_spk_emb = self.chat.sample_random_speaker()
        params_infer_code = {
            'spk_emb': rnd_spk_emb,
            # 'temperature': .1,
            # 'top_P': 0.7,
            # 'top_K': 20,
        }
        params_refine_text = {
            'prompt': '[oral_2][laugh_0][break_6]'
        }
        text = self.input_text.text
        wav = self.chat.infer(text, params_infer_code=params_infer_code, use_decoder=True)[0][0]  # params_refine_text=params_refine_text
        # Save the audio data to a temporary file
        temp_audio_file = tempfile.NamedTemporaryFile(delete=False, suffix=".wav")
        sf.write(temp_audio_file, wav, 24000, format='WAV', subtype='PCM_24')
        temp_audio_file.close()
        print("temp_audio_file.name:", temp_audio_file.name)
        # Load and play the audio file
        sound = SoundLoader.load(temp_audio_file.name)
        if sound:
            sound.volume = 1.0
            sound.play()

if __name__ == '__main__':
    ChatApp().run()
``` | closed | 2024-05-29T15:53:46Z | 2024-08-08T09:55:44Z | https://github.com/2noise/ChatTTS/issues/63 | [
"stale"
] | NowLoadY | 11 |
nerfstudio-project/nerfstudio | computer-vision | 3,534 | `NeuS-facto` generates a weird rendering result on the DTU skull dataset | Hi, everyone,
Can anyone help me train a neus-facto model on the DTU-65 dataset provided by the `sdfstudio` project? I ran the following command on `nerfstudio==1.1.5`:
```
ns-train neus-facto --data path/to/dtu-65
```
But the visualizer shows something like:

As you can see, the skull is rendered on (or within?) a cube-like space, which is totally different from the expected scene for this dataset. Although this result, similar to the fifth-dimension space of Interstellar, looks great as an artwork, I want the expected result. Would anyone be able to give me any suggestions?
My hardware/OS setup is:
- OS: Ubuntu 22.04
- GPU: RTX 3090
- CUDA: 12.1
- PyTorch 2.1.2
Thanks in advance!
| closed | 2024-11-27T16:48:18Z | 2024-12-03T04:53:26Z | https://github.com/nerfstudio-project/nerfstudio/issues/3534 | [] | barikata1984 | 1 |
pandas-dev/pandas | data-science | 60,798 | BUG: replace(to_replace=pd.NaT, value=None) different from replace({pd.NaT: None}) | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
df = pd.read_pickle("example.pkl")
df
ts Start End
23 2025-01-27 09:49:44.045 2025-01-27 09:49:44 NaT
28 2025-01-27 06:50:56.046 2025-01-27 06:50:54 2025-01-27 06:50:56
df.replace(to_replace=pd.NaT, value=None)
ts Start End
23 2025-01-27 09:49:44.045 2025-01-27 09:49:44 NaT
28 2025-01-27 06:50:56.046 2025-01-27 06:50:54 2025-01-27 06:50:56
df.replace({pd.NaT: None})
ts Start End
23 2025-01-27 09:49:44.045000 2025-01-27 09:49:44 None
28 2025-01-27 06:50:56.046000 2025-01-27 06:50:54 2025-01-27 06:50:56
df.replace(to_replace=pd.NaT, value=None).dtypes
ts datetime64[ns]
Start datetime64[ns]
End datetime64[ns]
dtype: object
df.replace({pd.NaT: None}).dtypes
ts object
Start object
End object
dtype: object
```
### Issue Description
In my application I read data from a database via asyncpg and then process it with pandas.
Recently I encountered an issue where the replace command changes the datatypes of unrelated columns if I use it with a dictionary argument.
Using replace with the arguments "to_replace" and "value", however, works.
Somehow my dataframe is unusual: I was not able to create a pure code example that reproduces this, and only saving and loading my dataframe as a pickle file made it reproducible. However, due to GitHub limitations I cannot share the pickle file here, probably because pickle files are unsafe to unpickle from untrusted sources.
I did try to recreate the issue in code and with other file formats, but that somehow seems to lose the important metadata that causes the issue.
This is the metadata of the dataframe as it shows in the VS-code debugger:

### Expected Behavior
I would expect that these two results are the same:
`df.replace(to_replace=pd.NaT, value=None).dtypes`
`df.replace({pd.NaT: None}).dtypes`
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.5
python-bits : 64
OS : Linux
OS-release : 5.15.167.4-microsoft-standard-WSL2
Version : #1 SMP Tue Nov 5 00:21:55 UTC 2024
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : C.UTF-8
LOCALE : C.UTF-8
pandas : 2.2.3
numpy : 2.2.0
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.2
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| closed | 2025-01-27T12:58:16Z | 2025-01-27T21:35:43Z | https://github.com/pandas-dev/pandas/issues/60798 | [
"Bug",
"Duplicate Report",
"replace"
] | thunderbug1 | 1 |
autokey/autokey | automation | 1,010 | AutoKey GUI sometimes fails to update when external and some internal changes occur | ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Bug
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [ ] autokey-gtk
- [ ] autokey-qt
- [ ] beta
- [X] bug
- [ ] critical
- [ ] development
- [ ] documentation
- [ ] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [ ] scripting
- [ ] technical debt
- [X] user interface
### Other terms that describe this issue if not provided above:
_No response_
### Which Linux distribution did you use?
N/A
### Which AutoKey GUI did you use?
Both
### Which AutoKey version did you use?
0.96.0
### How did you install AutoKey?
N/A
### Can you briefly describe the issue?
If you modify actions externally from AutoKey, or with things like "create phrase from selection", the GUI often does not reflect those changes without closing and reopening it. For some changes, it might be necessary to terminate and restart AutoKey - not sure about that.
### Can the issue be reproduced?
Sometimes
### What are the steps to reproduce the issue?
Open the AutoKey Menu
In a terminal, add new action files to AutoKey's folders
Notice that the new actions do not appear in the folder tree to the left.
Open a script in the GUI editing panel then edit it externally using a text editor and save your changes.
Notice that the code in the editing panel does not reflect the changes, and that AutoKey does not notify you that the underlying file has changed on disk.
### What should have happened?
Actions added or removed externally should cause the action tree to update automatically.
An open script in the editing panel should pop up a notification if the underlying file has been modified and ask you what to do.
Try editing the same file (any file) in two instances of kate with change detection enabled and see what happens when you save changes in one of them to see the desired behavior.
### What actually happened?
Usually, nothing, until you close and reopen the GUI.
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
This issue is intended to focus on the underlying problem(s) in a couple of other issues that were reported separately: #884 and #747.
| open | 2025-01-15T08:33:12Z | 2025-03-03T23:37:01Z | https://github.com/autokey/autokey/issues/1010 | [
"bug",
"help-wanted",
"user interface"
] | josephj11 | 7 |
littlecodersh/ItChat | api | 358 | BadStatusLine error | Before submitting, please make sure you have checked all of the following!
- [ v] You can log in to the WeChat account in a browser, but cannot log in with `itchat`
- [ v] I have read the [documentation][document] and followed the instructions in it
- [ v] The problem has not been reported in [issues][issues]; otherwise, please report it under the existing issue
- [ v] This problem is really about `itchat`, not another project.
- [ v] If your problem concerns stability, consider trying the [itchatmp][itchatmp] project, which has very low requirements on network stability
Please run with `itchat.run(debug=True)` and paste the output below:
```
Thu, 11 May 2017 12:26:40 messages.py[line:277] DEBUG Request to send a text message to @@05bc1516a4323e1cc6cea488403330a8e6235c90536f41fe99da51d561fec1b0: 回复内容
Thu, 11 May 2017 12:48:46 login.py[line:257] ERROR Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/itchat/components/login.py", line 232, in maintain_loop
i = sync_check(self)
File "/usr/local/lib/python2.7/dist-packages/itchat/components/login.py", line 285, in sync_check
r = self.s.get(url, params=params, headers=headers)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 480, in get
return self.request('GET', url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 426, in send
raise ConnectionError(err, request=request)
ConnectionError: ('Connection aborted.', BadStatusLine("''",))
Thu, 11 May 2017 12:48:47 connectionpool.py[line:758] INFO Starting new HTTPS connection (2): webpush.wx2.qq.com
Thu, 11 May 2017 12:48:47 login.py[line:289] DEBUG Unexpected sync check result: window.synccheck={retcode:"1101",selector:"0"}
Thu, 11 May 2017 12:48:47 login.py[line:266] INFO LOG OUT!
```
Your itchat version is `1.3.5`. (It can be obtained via `python -c "import itchat;print(itchat.__version__)"`)
Any other content, or a more detailed description of the problem, can be added below:
> This error occurs every few hours; the cause is unknown
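For what it's worth, a minimal mitigation sketch: `hotReload=True` persists the login state to disk, so the bot can be restarted after the retcode 1101 ("logged out") result above without rescanning the QR code:
```python
import itchat

itchat.auto_login(hotReload=True)  # stores session state in itchat.pkl
itchat.run()
```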
[document]: https://github.com/soimort/you-get/wiki/FAQ
[issues]: https://github.com/soimort/you-get/issues
[itchatmp]: https://github.com/littlecodersh/itchatmp
| closed | 2017-05-11T05:24:26Z | 2017-05-29T01:49:03Z | https://github.com/littlecodersh/ItChat/issues/358 | [
"question"
] | pengyuwei | 6 |
Esri/arcgis-python-api | jupyter | 1,468 | Index error with Train deep learning model (spatial analyst tools) | **Hi, can you kindly help me solve this issue? I am trying to train a deep learning model using the following code**
```
with arcpy.EnvManager(processorType="GPU"):
arcpy.ia.TrainDeepLearningModel(r"'C:\Users\Richard Antwi\Desktop\Turning Lanes Detection\All Training Data\Balancing_Training_img_chps';'C:\Users\Richard Antwi\Desktop\Turning Lanes Detection\All Training Data\Balancing_Training_img_chps_1';'C:\Users\Richard Antwi\Desktop\Turning Lanes Detection\All Training Data\Balancing_Training_img_chps_2';'C:\Users\Richard Antwi\Desktop\Turning Lanes Detection\All Training Data\Broward_img_chps_12';'C:\Users\Richard Antwi\Desktop\Turning Lanes Detection\All Training Data\Gilchrist_img_chps_12';'C:\Users\Richard Antwi\Desktop\Turning Lanes Detection\All Training Data\Gulf_img_chps_12';'C:\Users\Richard Antwi\Desktop\Turning Lanes Detection\All Training Data\Jefferson_img_chps_12';'C:\Users\Richard Antwi\Desktop\Turning Lanes Detection\All Training Data\Miami_Dade_img_chps';'C:\Users\Richard Antwi\Desktop\Turning Lanes Detection\All Training Data\Santa_Rosa_img_chps_12'", r"C:\Users\Richard Antwi\Desktop\Turning Lanes Detection\Turning Lane Models\Turning_lane_Model_1", 150, "YOLOV3", 64, "chip_size 224;resize_to #;monitor valid_loss;monitor average_precision", None, "DARKNET53", None, 30, "CONTINUE_TRAINING", "UNFREEZE_MODEL")
```
**However, I run into this index error at epoch 63. Can you help me solve it?**
> Traceback (most recent call last):
> File "c:\program files\arcgis\pro\Resources\ArcToolbox\toolboxes\Image Analyst Tools.tbx\TrainDeepLearningModel.tool\tool.script.execute.py", line 390, in <module>
> execute()
> File "c:\program files\arcgis\pro\Resources\ArcToolbox\toolboxes\Image Analyst Tools.tbx\TrainDeepLearningModel.tool\tool.script.execute.py", line 334, in execute
> training_model_object.fit(
> File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\arcgis\learn\models\_arcgis_model.py", line 997, in fit
> self.learn.fit_one_cycle(epochs, lr, callbacks=callbacks, **kwargs)
> File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\fastai\train.py", line 23, in fit_one_cycle
> learn.fit(cyc_len, max_lr, wd=wd, callbacks=callbacks)
> File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\fastai\basic_train.py", line 200, in fit
> fit(epochs, self, metrics=self.metrics, callbacks=self.callbacks+callbacks)
> File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\fastai\basic_train.py", line 101, in fit
> loss = loss_batch(learn.model, xb, yb, learn.loss_func, learn.opt, cb_handler)
> File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\fastai\basic_train.py", line 26, in loss_batch
> out = model(*xb)
> File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
> result = self.forward(*input, **kwargs)
> File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\arcgis\learn\models\_yolov3_utils.py", line 237, in forward
> x, y, *loss_dict = module(x, targets)
> File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
> result = self.forward(*input, **kwargs)
> File "C:\Program Files\ArcGIS\Pro\bin\Python\envs\arcgispro-py3\Lib\site-packages\arcgis\learn\models\_yolov3_utils.py", line 552, in forward
> obj_mask[b, a, j, i] = 1
> IndexError: index 28 is out of bounds for dimension 3 with size 28
> Failed script (null)...
> Failed to execute (TrainDeepLearningModel).
> | closed | 2023-02-21T18:20:09Z | 2023-07-11T06:20:47Z | https://github.com/Esri/arcgis-python-api/issues/1468 | [
"learn"
] | Richard-Antwi22 | 4 |
drivendataorg/cookiecutter-data-science | data-science | 221 | Solo `make` command doesn't usually work (generate help text) on Windows | Encountered this in a downstream project:
https://github.com/drivendataorg/sfp-cervical-biopsy-runtime/issues/18
Fix was to just remove the `more` command since it's not available on most Windows bash shells:
https://github.com/drivendataorg/sfp-cervical-biopsy-runtime/pull/19
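For illustration, one common self-documenting help target that avoids `more` entirely; this is a generic pattern (assumes GNU grep/awk, as on Git Bash), not the repo's actual Makefile:
```makefile
.PHONY: help
help:  ## Show available commands (the recipe line must start with a tab)
	@grep -E '^[a-zA-Z_-]+:.*?## ' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "%-20s %s\n", $$1, $$2}'
```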
Expected behavior: running `make` generates help text
- [x] ensure help text works as expected
- [x] #333
- [x] consider adding test case
- [x] for make generating help text (or at least, not failing on the github actions windows runner) | closed | 2020-09-25T22:08:49Z | 2024-04-16T14:43:07Z | https://github.com/drivendataorg/cookiecutter-data-science/issues/221 | [] | pjbull | 4 |
mitmproxy/pdoc | api | 221 | index.html does not respect __all__ | # Description
When I try to exclude certain submodules from a project via `__all__`, they don't appear in the "normal" view, but they appear in the index.html search page.
It seems that `pkgutil.walk_packages` (https://github.com/mitmproxy/pdoc/blob/1dfd11baff29ae92f0e2d1b3725e7c57129aa6dd/pdoc/extract.py#L83) does not respect `__all__`.
# Steps to reproduce
1. create the following project structure
```
mod
├── __init__.py
├── submod1
│ └── __init__.py
└── submod2
├── __init__.py
├── subsubmod1
│ └── __init__.py
└── subsubmod2
└── __init__.py
```
2. exclude `mod.submod2.subsubmod2` from documentation:
*mod/submod2/\_\_init__.py*:
```python
__all__ = [ "subsubmod1" ]
```
3. generate documentation of `mod`
4. navigate to `mod.submod2`: `mod.submod2.subsubmod2` is not shown
5. navigate to `index.html`: `mod.submod2.subsubmod2` appears | closed | 2021-02-14T01:55:32Z | 2021-02-14T16:10:10Z | https://github.com/mitmproxy/pdoc/issues/221 | [] | eladyn | 2 |
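As an aside, newer pdoc versions also let you exclude modules explicitly on the command line, which would sidestep the `__all__` handling above; treat the exact syntax as an assumption rather than a confirmed API:
```bash
# Quoting avoids shell history expansion of "!".
pdoc ./mod '!mod.submod2.subsubmod2'
```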
gunthercox/ChatterBot | machine-learning | 1,736 | OSError: [E050] Can't find model 'en'. | OSError: [E050] Can't find model 'en'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.
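The commonly suggested fix at the time, assuming the spaCy 2.x line that ChatterBot then depended on (newer spaCy removed shortcut links, so this only applies to old versions):
```bash
python -m spacy download en_core_web_sm
python -m spacy link en_core_web_sm en  # recreate the 'en' shortcut link
```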
| closed | 2019-05-26T19:51:47Z | 2020-12-27T06:12:02Z | https://github.com/gunthercox/ChatterBot/issues/1736 | [
"answered"
] | dewangsatyam | 9 |
getsentry/sentry | django | 87,094 | Clickup Connection not working - Infinite Spinner | ### Environment
SaaS (https://sentry.io/)
### Steps to Reproduce
Video: https://www.loom.com/share/677562533f7842ba826c93032c26d17b?sid=91cc82d1-2013-47e3-b9a7-073ef2340dda
Try to connect Clickup with Sentry following the documentation.
This issue was mentioned here and at least for me it is not solved: https://github.com/getsentry/sentry/issues/72727
### Expected Result
Successful connection for Clickup.
### Actual Result
Infinite Spinner and the connection is in "Pending" state. Tested it for Chrome and Safari and I was logged in for both, Sentry and Clickup.


### Product Area
Settings
### Link
_No response_
### DSN
_No response_
### Version
_No response_ | open | 2025-03-14T16:37:43Z | 2025-03-18T19:35:02Z | https://github.com/getsentry/sentry/issues/87094 | [
"Product Area: Settings - Integrations"
] | ik4Rus | 3 |
sammchardy/python-binance | api | 1,568 | Ed25519 authentication | Is it possible to authenticate with Ed25519 key? The examples only show api_key and secret authentication. Binance does not allow placing orders with HMAC authentication. Am I doing something wrong? Would appreciate your help. | open | 2025-03-21T21:00:05Z | 2025-03-21T21:00:05Z | https://github.com/sammchardy/python-binance/issues/1568 | [] | PGSch | 0 |