| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ivy-llc/ivy | pytorch | 28,647 | Fix Frontend Failing Test: torch - averages_and_variances.numpy.average | To-do List: https://github.com/unifyai/ivy/issues/27498 | closed | 2024-03-19T18:21:10Z | 2024-03-26T04:48:36Z | https://github.com/ivy-llc/ivy/issues/28647 | [
"Sub Task"
] | ZJay07 | 0 |
pydantic/FastUI | pydantic | 126 | Sample code in README gives error because of a change 3 weeks ago to the definition of Table | The sample code in the README no longer works: running it produces a Request Error, apparently because the definition of `Table` changed about three weeks ago.
| closed | 2023-12-27T12:27:04Z | 2023-12-28T16:01:39Z | https://github.com/pydantic/FastUI/issues/126 | [] | aekespong | 6 |
jumpserver/jumpserver | django | 15,044 | [Question] Adding a user to an RDS asset with "Push now" checked causes the user's privileges to be lost | ### Product version
v3.10.17
### Edition
- [x] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation method
- [x] Online installation (one-click command)
- [ ] Offline package installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Installation from source
### Environment
Operating system: Ubuntu 22.04.5 LTS
Browser: Chrome
Deployment architecture: based on the official one-click deployment script
### 🤔 Problem description
I have an RDS MySQL instance on Huawei Cloud, which I added to JumpServer as an asset. I then added a "read" account to this MySQL asset. If I check "Push now" when filling in the account details, then after a few dozen seconds the privileges configured for the "read" user on the Huawei Cloud platform are wiped.
### Expected result
I would like to know what exactly causes this.
### Additional information
_No response_ | open | 2025-03-17T07:54:11Z | 2025-03-21T10:44:28Z | https://github.com/jumpserver/jumpserver/issues/15044 | [
"⏳ Pending feedback",
"🤔 Question"
] | surel9 | 2 |
chatanywhere/GPT_API_free | api | 242 | API key calls fail | I use the API key for translation, running it from a plugin. The two forwarding hosts inside China failed, while the overseas one worked, but it started returning errors again shortly after. It is now unusable.

| open | 2024-05-21T23:13:51Z | 2024-07-04T01:00:06Z | https://github.com/chatanywhere/GPT_API_free/issues/242 | [] | Cloudy717 | 3 |
modin-project/modin | pandas | 7,421 | BUG: Using modin with APT-installed MPI on linux doesn't work | ### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [X] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-main-branch).)
### Reproducible Example
```bash
sudo apt install libmpich-dev
python -m pip install -e "modin[mpi]"
MODIN_ENGINE=unidist UNIDIST_BACKEND=mpi mpiexec -n 1 python -c "import modin.pandas as pd; print(pd.DataFrame([1,2,3]))"
```
### Issue Description
MPI fails to start and gives errors like `Error in spawn call`. Example: https://github.com/modin-project/modin/actions/runs/12753981920/job/35546821512?pr=7420#logs
### Expected Behavior
modin[mpi] should run without error.
### Error Logs
<details>
```python-traceback
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/runner/work/modin/modin/modin/logging/logger_decorator.py", line 144, in run_and_log
return obj(*args, **kwargs)
File "/home/runner/work/modin/modin/modin/pandas/dataframe.py", line 203, in __init__
distributed_frame = from_non_pandas(data, index, columns, dtype)
File "/home/runner/work/modin/modin/modin/pandas/io.py", line 972, in from_non_pandas
new_qc = FactoryDispatcher.from_non_pandas(df, index, columns, dtype)
File "/home/runner/work/modin/modin/modin/core/execution/dispatching/factories/dispatcher.py", line 177, in from_non_pandas
return cls.get_factory()._from_non_pandas(*args, **kwargs)
File "/home/runner/work/modin/modin/modin/core/execution/dispatching/factories/dispatcher.py", line 115, in get_factory
Engine.subscribe(_update_engine)
File "/home/runner/work/modin/modin/modin/config/pubsub.py", line 295, in subscribe
callback(cls)
File "/home/runner/work/modin/modin/modin/pandas/__init__.py", line 133, in _update_engine
initialize_unidist()
File "/home/runner/work/modin/modin/modin/core/execution/unidist/common/utils.py", line 41, in initialize_unidist
unidist.init()
File "/opt/hostedtoolcache/Python/3.9.21/x64/lib/python3.9/site-packages/unidist/api.py", line 92, in init
init_backend()
File "/opt/hostedtoolcache/Python/3.9.21/x64/lib/python3.9/site-packages/unidist/core/base/utils.py", line 28, in init_backend
initialize_mpi()
File "/opt/hostedtoolcache/Python/3.9.21/x64/lib/python3.9/site-packages/unidist/core/backends/mpi/utils.py", line 12, in initialize_mpi
init()
File "/opt/hostedtoolcache/Python/3.9.21/x64/lib/python3.9/site-packages/unidist/core/backends/mpi/core/controller/api.py", line [234](https://github.com/modin-project/modin/actions/runs/12753981920/job/35546821512?pr=7420#step:6:235), in init
intercomm = MPI.COMM_SELF.Spawn(
File "src/mpi4py/MPI.src/Comm.pyx", line 2544, in mpi4py.MPI.Intracomm.Spawn
mpi4py.MPI.Exception: Other MPI error, error stack:
internal_Comm_spawn(77171)..: MPI_Comm_spawn(command=/opt/hostedtoolcache/Python/3.9.21/x64/bin/python, argv=0x7fd906bd26d0, maxprocs=5, info=0x9c000000, 0, MPI_COMM_SELF, intercomm=0x7fd906bd2680, array_of_errcodes=(nil)) failed
MPID_Comm_spawn_multiple(85): Error in spawn call
```
</details>
### Installed Versions
I don't know; I don't have a linux machine, and we saw this issue in CI on ubuntu: https://github.com/modin-project/modin/actions/runs/12753981920/job/35546821512?pr=7420#logs | closed | 2025-01-15T23:15:12Z | 2025-01-27T20:24:48Z | https://github.com/modin-project/modin/issues/7421 | [
"bug 🦗"
] | sfc-gh-mvashishtha | 3 |
microsoft/JARVIS | pytorch | 158 | The Hugging Face Docker image is invalid and fails to start | ```
docker run -it -p 7860:7860 --platform=linux/amd64 registry.hf.space/microsoft-hugginggpt:latest python app.py
Unable to find image 'registry.hf.space/microsoft-hugginggpt:latest' locally
latest: Pulling from microsoft-hugginggpt
fb668870d8a7: Pull complete
8a612414e2bc: Pull complete
2c12f5dee74d: Pull complete
e8b64516db7f: Pull complete
5063cc75a5fc: Pull complete
0b5eea1e39eb: Pull complete
19026083d6ce: Pull complete
9ec6d9577f70: Pull complete
4d04e22a64a3: Pull complete
c89166c8ea49: Pull complete
76b0ea444c14: Pull complete
bc5a04c3e525: Pull complete
c88465465397: Extracting 263.3MB/263.3MB
cfc4dc6cd9f0: Download complete
5f19ae8f3203: Download complete
a8272de485bf: Download complete
901de9cc95bb: Download complete
f6de10b5e4af: Download complete
46995db4b389: Download complete
d12b2111ec90: Download complete
7755f20f43c0: Download complete
b6f73bb93556: Download complete
e584e685badb: Download complete
docker: failed to register layer: Error processing tar file(exit status 1): archive/tar: invalid tar header.
See 'docker run --help'.
``` | open | 2023-04-17T17:53:56Z | 2023-05-19T01:53:16Z | https://github.com/microsoft/JARVIS/issues/158 | [] | Jeffwan | 3 |
iterative/dvc | data-science | 10,160 | data status: doesn't work with the Hydra integration when `params.yaml` is present but not staged | ## Description
`dvc data status` fails when the Hydra integration is involved.
### Reproduce
`git clone https://github.com/Danila89/dvc_empty.git && cd dvc_empty && dvc exp run -n something && dvc data status`
### Expected
`dvc data status` should print its normal status output instead of failing.
### Environment information
**Output of `dvc doctor`:**
```console
(base) danila.savenkov@RS-UNIT-0099 dvc_empty % dvc doctor
DVC version: 3.33.3 (pip)
-------------------------
Platform: Python 3.10.9 on macOS-13.3.1-arm64-arm-64bit
Subprojects:
dvc_data = 2.22.6
dvc_objects = 1.4.9
dvc_render = 1.0.0
dvc_task = 0.3.0
scmrepo = 1.5.0
Supports:
http (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
s3 (s3fs = 2023.5.0, boto3 = 1.26.76)
Config:
Global: /Users/danila.savenkov/Library/Application Support/dvc
System: /Library/Application Support/dvc
Cache types: reflink, hardlink, symlink
Cache directory: apfs on /dev/disk3s3s1
Caches: local
Remotes: None
Workspace directory: apfs on /dev/disk3s3s1
Repo: dvc, git
Repo.site_cache_dir: /Library/Caches/dvc/repo/64bbbded2e55036b006c56ceaefa98e1
```
**Additional Information (if any):**
Interesting detail: if `params.yaml` is committed to git - everything works:
`git clone https://github.com/Danila89/dvc_empty.git && cd dvc_empty && git pull --all && git checkout dvc_data_status_issue && dvc data status` | open | 2023-12-13T23:26:34Z | 2023-12-14T15:19:25Z | https://github.com/iterative/dvc/issues/10160 | [
"p2-medium",
"ui"
] | Danila89 | 1 |
deepfakes/faceswap | machine-learning | 1,066 | Error when extracting images | My system is Ubuntu 20.04, the NVIDIA driver version is 450, and the CUDA version is 11. I believe CUDA itself is working, because I can run Hashcat with CUDA.
Here is my log:
[crash_report.2020.09.23.011725109505.log](https://github.com/deepfakes/faceswap/files/5263038/crash_report.2020.09.23.011725109505.log)
| closed | 2020-09-22T17:24:51Z | 2020-09-26T23:35:31Z | https://github.com/deepfakes/faceswap/issues/1066 | [] | MCredbear | 1 |
encode/httpx | asyncio | 3,072 | HTTP 2.0 throws KeyError rather than the internal exception raised in the thread | We are running into a number of HTTP/2 protocol errors with a remote supplier. We have seen two exceptions that are raised internally, but by the time they reach the calling routine they have turned into a KeyError. Here are the exceptions:
EXCEPTION #1
Exception in thread Thread-809:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 148, in handle_request
status, headers = self._receive_response(
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 292, in _receive_response
event = self._receive_stream_event(request, stream_id)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 333, in _receive_stream_event
self._receive_events(request, stream_id)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 361, in _receive_events
events = self._read_incoming_data(request)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 454, in _read_incoming_data
events: typing.List[h2.events.Event] = self._h2_state.receive_data(data)
File "/usr/local/lib/python3.8/site-packages/h2/connection.py", line 1463, in receive_data
events.extend(self._receive_frame(frame))
File "/usr/local/lib/python3.8/site-packages/h2/connection.py", line 1487, in _receive_frame
frames, events = self._frame_dispatch_table[frame.__class__](frame)
File "/usr/local/lib/python3.8/site-packages/h2/connection.py", line 1561, in _receive_headers_frame
frames, stream_events = stream.receive_headers(
File "/usr/local/lib/python3.8/site-packages/h2/stream.py", line 1041, in receive_headers
events = self.state_machine.process_input(input_)
File "/usr/local/lib/python3.8/site-packages/h2/stream.py", line 129, in process_input
return func(self, previous_state)
File "/usr/local/lib/python3.8/site-packages/h2/stream.py", line 338, in recv_on_closed_stream
raise StreamClosedError(self.stream_id)
h2.exceptions.StreamClosedError: 979
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
calling routine goes here...
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 1055, in get
return self.request(
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 828, in request
return self.send(request, auth=auth, follow_redirects=follow_redirects)
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 915, in send
response = self._send_handling_auth(
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 943, in _send_handling_auth
response = self._send_handling_redirects(
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 980, in _send_handling_redirects
response = self._send_single_request(request)
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 1016, in _send_single_request
response = transport.handle_request(request)
File "/usr/local/lib/python3.8/site-packages/httpx/_transports/default.py", line 231, in handle_request
resp = self._pool.handle_request(req)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/connection_pool.py", line 268, in handle_request
raise exc
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/connection_pool.py", line 251, in handle_request
response = connection.handle_request(request)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/connection.py", line 103, in handle_request
return self._connection.handle_request(request)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 167, in handle_request
self._response_closed(stream_id=stream_id)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 406, in _response_closed
del self._events[stream_id]
KeyError: 979
EXCEPTION #2
Exception in thread Thread-94:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 148, in handle_request
status, headers = self._receive_response(
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 292, in _receive_response
event = self._receive_stream_event(request, stream_id)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 333, in _receive_stream_event
self._receive_events(request, stream_id)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 352, in _receive_events
raise RemoteProtocolError(self._connection_terminated)
httpcore.RemoteProtocolError: <ConnectionTerminated error_code:ErrorCodes.PROTOCOL_ERROR, last_stream_id:7, additional_data:None>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/threading.py", line 932, in _bootstrap_inner
Exception in thread Thread-97:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 148, in handle_request
status, headers = self._receive_response(
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 292, in _receive_response
self.run()
File "/app/lib/streams/m3u8_queue.py", line 81, in run
event = self._receive_stream_event(request, stream_id)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 333, in _receive_stream_event
self._receive_events(request, stream_id)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 352, in _receive_events
raise RemoteProtocolError(self._connection_terminated)
httpcore.RemoteProtocolError: <ConnectionTerminated error_code:ErrorCodes.PROTOCOL_ERROR, last_stream_id:7, additional_data:None>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
calling routine goes here...
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 1055, in get
return self.request(
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 828, in request
resp = self._pool.handle_request(req)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/connection_pool.py", line 268, in handle_request
return self.send(request, auth=auth, follow_redirects=follow_redirects)
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 915, in send
response = self._send_handling_auth(
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 943, in _send_handling_auth
response = self._send_handling_redirects(
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 980, in _send_handling_redirects
response = self._send_single_request(request)
File "/usr/local/lib/python3.8/site-packages/httpx/_client.py", line 1016, in _send_single_request
raise exc
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/connection_pool.py", line 251, in handle_request
response = connection.handle_request(request)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/connection.py", line 103, in handle_request
return self._connection.handle_request(request)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 167, in handle_request
response = transport.handle_request(request)
File "/usr/local/lib/python3.8/site-packages/httpx/_transports/default.py", line 231, in handle_request
resp = self._pool.handle_request(req)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/connection_pool.py", line 268, in handle_request
self._response_closed(stream_id=stream_id)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 406, in _response_closed
del self._events[stream_id]
KeyError: 5
raise exc
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/connection_pool.py", line 251, in handle_request
response = connection.handle_request(request)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/connection.py", line 103, in handle_request
return self._connection.handle_request(request)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 167, in handle_request
self._response_closed(stream_id=stream_id)
File "/usr/local/lib/python3.8/site-packages/httpcore/_sync/http2.py", line 406, in _response_closed
del self._events[stream_id]
KeyError: 3
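Both tracebacks end with httpcore's `_response_closed` executing `del self._events[stream_id]` for a stream whose entry was already removed on the error path, so the original protocol error gets replaced by a bare `KeyError`. A minimal stdlib illustration of why a tolerant `dict.pop` sidesteps the secondary failure (commentary on the symptom, not httpcore's actual patch):

```python
events = {979: ["stream-events"]}  # stand-in for httpcore's per-stream event map

# First cleanup removes the stream's entry:
assert events.pop(979, None) == ["stream-events"]

# A second cleanup on the error path no longer raises;
# `del events[979]` would raise the KeyError seen above:
assert events.pop(979, None) is None
```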
| open | 2024-01-25T23:40:48Z | 2024-09-27T11:04:15Z | https://github.com/encode/httpx/issues/3072 | [
"http/2"
] | rocky4546 | 0 |
LAION-AI/Open-Assistant | python | 2,983 | inference-worker exits when trying models other than distilgpt2 (on a non-GPU system) | Is it possible to run the worker with models other than distilgpt2 on a non-GPU system?
After successfully launching the services (profiles ci + inference) with the distilgpt2 model, I tried to start them for other models (e.g. OA_SFT_Pythia_12B_4), but the inference-worker container fails after waiting for the inference server to be ready.
The inference-server reports that it has started:
```
2023-04-30 15:19:04.225 | WARNING | oasst_inference_server.routes.workers:clear_worker_sessions:288 - Clearing worker sessions
2023-04-30 15:19:04.227 | WARNING | oasst_inference_server.routes.workers:clear_worker_sessions:291 - Successfully cleared worker sessions
2023-04-30 15:19:04.227 | WARNING | main:welcome_message:119 - Inference server started
2023-04-30 15:19:04.227 | WARNING | main:welcome_message:120 - To stop the server, press Ctrl+C
```
but the inference-worker stops after a minute of waiting:
```
2023-04-30T15:22:39.170299Z INFO text_generation_launcher: Starting shard 0
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
2023-04-30 15:22:40.215 | INFO | __main__:main:25 - Inference protocol version: 1
2023-04-30 15:22:40.215 | WARNING | __main__:main:28 - Model config: model_id='OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5' max_input_length=1024 max_total_length=2048 quantized=False
2023-04-30 15:22:40.756 | WARNING | __main__:main:37 - Tokenizer OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5 vocab size: 50254
2023-04-30 15:22:40.759 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 6.22 seconds
2023-04-30 15:22:46.991 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 1.95 seconds
2023-04-30 15:22:48.947 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 5.09 seconds
2023-04-30T15:22:49.194599Z INFO text_generation_launcher: Waiting for shard 0 to be ready...
2023-04-30 15:22:54.040 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 7.65 seconds
2023-04-30T15:22:59.210490Z INFO text_generation_launcher: Waiting for shard 0 to be ready...
2023-04-30 15:23:01.699 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 2.74 seconds
2023-04-30 15:23:04.442 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 7.90 seconds
2023-04-30T15:23:09.226492Z INFO text_generation_launcher: Waiting for shard 0 to be ready...
2023-04-30 15:23:12.356 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 4.10 seconds
2023-04-30 15:23:16.460 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 3.25 seconds
2023-04-30T15:23:19.238026Z INFO text_generation_launcher: Waiting for shard 0 to be ready...
2023-04-30 15:23:19.718 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 3.76 seconds
2023-04-30 15:23:23.479 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 0.70 seconds
2023-04-30 15:23:24.182 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 3.74 seconds
2023-04-30 15:23:27.929 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 3.04 seconds
2023-04-30T15:23:29.248026Z INFO text_generation_launcher: Waiting for shard 0 to be ready...
2023-04-30 15:23:30.976 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 6.88 seconds
2023-04-30 15:23:37.864 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 2.89 seconds
2023-04-30T15:23:39.259110Z INFO text_generation_launcher: Waiting for shard 0 to be ready...
2023-04-30 15:23:40.757 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 5.52 seconds
2023-04-30 15:23:46.287 | WARNING | utils:wait_for_inference_server:71 - Inference server not ready. Retrying in 7.90 seconds
2023-04-30T15:23:49.288384Z INFO text_generation_launcher: Waiting for shard 0 to be ready...
2023-04-30T15:23:51.887480Z ERROR text_generation_launcher: Shard 0 failed to start:
/opt/miniconda/envs/text-generation/lib/python3.9/site-packages/bitsandbytes/cextension.py:127: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
We're not using custom kernels.
2023-04-30T15:23:51.887567Z INFO text_generation_launcher: Shutting down shards
```
The container runs on an OpenStack instance with 8 vCPUs and 32 GB of RAM on Ubuntu 22.04. I have plenty of vCPUs and RAM, but sadly no GPU yet to run tests with.
Before running `docker compose up`, I simply set `MODEL_CONFIG_NAME=OA_SFT_Pythia_12B_4` as an environment variable.
This message baffles me: "None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used." Shouldn't at least one of them be present? (I assume they come in via the `huggingface/transformers` requirement.)
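For what it's worth, that transformers warning is emitted when none of PyTorch, TensorFlow, or Flax can be imported in the environment. A small stdlib check (generic Python, not part of the Open-Assistant codebase) that could be run inside the worker container to confirm which situation you are in:

```python
import importlib.util

def backend_available(modules=("torch", "tensorflow", "flax")) -> bool:
    """Return True if any of the given deep-learning backends can be imported."""
    return any(importlib.util.find_spec(m) is not None for m in modules)

# Run inside the worker container; False matches the transformers warning above.
print(backend_available())
```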
Thanks in advance! | closed | 2023-04-30T16:06:12Z | 2023-06-09T11:51:05Z | https://github.com/LAION-AI/Open-Assistant/issues/2983 | [
"question",
"inference"
] | stelterlab | 2 |
aio-libs/aiomysql | sqlalchemy | 373 | Allow extending aiomysql.cursors.Cursor | In #263 the custom cursor support was removed.
For each query I need to add a comment containing a 'request_id' and measure the query's execution time. Initially I thought to do this through `before_cursor_execute`, but the library does not support subscribing to events. Now I am thinking of extending `aiomysql.cursors.Cursor`. | closed | 2019-01-17T11:44:40Z | 2019-01-20T16:22:58Z | https://github.com/aio-libs/aiomysql/issues/373 | [
"enhancement"
] | elBroom | 1 |
xinntao/Real-ESRGAN | pytorch | 867 | I/O operations | > Do the input and output have to go through file I/O operations? Is it possible to read binary data directly and output binary data, skipping the file I/O step? | open | 2024-11-18T08:41:39Z | 2024-11-18T08:41:39Z | https://github.com/xinntao/Real-ESRGAN/issues/867 | [] | lazy-dog-always | 0 |
slackapi/python-slack-sdk | asyncio | 1,351 | Cannot share file uploaded by same API user if username is overridden |
### Reproducible in:
```bash
pip freeze | grep slack
python --version
sw_vers && uname -v # or `ver`
```
#### The Slack SDK version
slack-sdk==3.20.2
#### Python runtime version
Python 3.10.7
#### OS info
ProductName: macOS
ProductVersion: 13.2.1
BuildVersion: 22D68
Darwin Kernel Version 22.3.0: Mon Jan 30 20:38:37 PST 2023; root:xnu-8792.81.3~2/RELEASE_ARM64_T6000
#### Steps to reproduce:
```python
import slack_sdk
import os
token = os.environ["SLACK_API_TOKEN"]
client = slack_sdk.WebClient(token=token)
file = client.files_upload_v2(
filename="terraform-plan",
content="test",
title="Terraform Plan",
alt_txt="",
snippet_type="java",
)
permalink = file["files"][0]["permalink"]
client.chat_postMessage(
    channel="#your-channel",  # placeholder; the original snippet referenced self.channel outside any class
text="test",
username="Terraform",
blocks=[
{"type": "divider"},
{
"type": "context",
"elements": [
{"type": "mrkdwn", "text": permalink}
],
},
]
)
```
### Expected result:
The expectation is to see `Terraform` posting a message, containing a functioning link, for which a preview is rendered. Example:
<img width="707" alt="image" src="https://user-images.githubusercontent.com/111279287/230933804-554b48c7-7610-49b7-8278-18d48b8e1871.png">
### Actual result:
When the username is overridden, the file is not shared with the channel and the URL does not work.
<img width="651" alt="image" src="https://user-images.githubusercontent.com/111279287/230934039-e4626529-a157-4edb-beb0-d07f37ba9ebc.png">
So, it seems that `files_upload_v2`, when `channel` is not set, uploads the file as private (which seems to be the intention per the API docs). When trying to share the file using the `chat_postMessage` method, if the username is overridden, the file won't be shared with the channel. | closed | 2023-04-10T15:40:51Z | 2023-06-05T00:03:43Z | https://github.com/slackapi/python-slack-sdk/issues/1351 | [
"question",
"auto-triage-stale"
] | sidekick-eimantas | 11 |
benbusby/whoogle-search | flask | 465 | [FEATURE] Option to remove "Featured Snippets" | Things like "recent news" and blurbs from websites should be removed when the option is toggled on (preferably via an environment variable). | closed | 2021-10-17T20:09:55Z | 2021-10-26T16:35:12Z | https://github.com/benbusby/whoogle-search/issues/465 | [
"enhancement"
] | DUOLabs333 | 2 |
onnx/onnx | pytorch | 6,006 | How ONNX models store weights and layer parameters | I am working on an ONNX translator for a machine learning framework, and for that I need a clear understanding of how weights and layer parameters are stored in an ONNX model. After exploring several ONNX models, I realized that some store their weights and layer parameters in separate initializers, while others store them in attributes. Is there a way to figure out how these parameters are stored in a given model, so that I can apply the appropriate algorithm to extract them?
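For what it's worth, the two storage styles described above usually correspond to entries in the graph's `initializer` list versus `Constant` nodes that carry the data in a `value` attribute. A toy sketch (using stand-in objects rather than the real `onnx` API) of how a translator might tell the two apart:

```python
from types import SimpleNamespace

# Toy stand-ins for onnx's GraphProto pieces (NOT the real onnx classes):
graph = SimpleNamespace(
    initializer=[SimpleNamespace(name="conv1.weight")],  # style 1: initializer list
    node=[SimpleNamespace(op_type="Constant",            # style 2: Constant node
                          output=["conv2.weight"],
                          attribute=[SimpleNamespace(name="value")])],
)

def find_weight_source(graph, tensor_name):
    """Return where a tensor's data lives: 'initializer', 'constant-node', or None."""
    if any(init.name == tensor_name for init in graph.initializer):
        return "initializer"
    for node in graph.node:
        if node.op_type == "Constant" and tensor_name in node.output:
            return "constant-node"
    return None

assert find_weight_source(graph, "conv1.weight") == "initializer"
assert find_weight_source(graph, "conv2.weight") == "constant-node"
```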
| open | 2024-03-09T08:00:08Z | 2024-03-09T08:00:08Z | https://github.com/onnx/onnx/issues/6006 | [
"question"
] | kumar-utkarsh0317 | 0 |
keras-team/keras | data-science | 20,116 | "ValueError: The layer sequential has never been called and thus has no defined output." when the model has been built and called | I am currently using tensorflow 2.17 with keras 3.4.1 under Ubuntu 24.04 LTS. I have also reproduced the issue with tf-nightly 2.18.0.dev20240731 (keras nightly 3.4.1.dev2024073103).
I encountered the issue when I downloaded a model I had run on a cluster under tf 2.17/keras 3.4.1. I then tried to obtain saliency maps on my computer after re-building the model without redefining all its layers from scratch.
See the following google drive for a reprex with the code, model and a data sample: https://drive.google.com/drive/folders/15J_ghWXWbs8EmSVXedH6sJRvJcPUSTIW?usp=sharing
But it raises the following traceback:
```
ValueError Traceback (most recent call last)
Cell In[1], line 45
42 class_activation_map = tf.expand_dims(class_activation_map, axis=-1)
43 return class_activation_map
---> 45 layer_cam_test = layer_cam(img = test_sample, model=model, label_index = 0)
Cell In[1], line 24, in layer_cam(img, label_index, model)
22 print(layer_names)
23 for layer_name in layer_names[-1:]:
---> 24 grad_model = tf.keras.models.Model([model.inputs], [model.get_layer(layer_name).output, model.output]) #bug's here
25 with tf.GradientTape() as tape:
26 tape.watch(img)
File ~/miniconda3/envs/envtfnightly/lib/python3.11/site-packages/keras/src/ops/operation.py:266, in Operation.output(self)
256 @property
257 def output(self):
258 """Retrieves the output tensor(s) of a layer.
259
260 Only returns the tensor(s) corresponding to the *first time*
(...)
264 Output tensor or list of output tensors.
265 """
--> 266 return self._get_node_attribute_at_index(0, "output_tensors", "output")
File ~/miniconda3/envs/envtfnightly/lib/python3.11/site-packages/keras/src/ops/operation.py:285, in Operation._get_node_attribute_at_index(self, node_index, attr, attr_name)
269 """Private utility to retrieves an attribute (e.g. inputs) from a node.
270
271 This is used to implement the properties:
(...)
282 The operation's attribute `attr` at the node of index `node_index`.
283 """
284 if not self._inbound_nodes:
--> 285 raise ValueError(
286 f"The layer {self.name} has never been called "
287 f"and thus has no defined {attr_name}."
288 )
289 if not len(self._inbound_nodes) > node_index:
290 raise ValueError(
291 f"Asked to get {attr_name} at node "
292 f"{node_index}, but the operation has only "
293 f"{len(self._inbound_nodes)} inbound nodes."
294 )
ValueError: The layer sequential has never been called and thus has no defined output.
```
There are two workarounds where the ValueError is not raised:
1°) Using `grad_model = keras.models.Model([model.inputs], [model.get_layer(last_conv_layer_name).output, model.get_layer(Name_of_last_deep_layer).output])`, but it results in `None` gradients in the rest of my code.
2°) Redefining the model completely from scratch and loading only the weights, i.e.:
```
model = tf.keras.models.Sequential([
tf.keras.Input(shape=(27, 75, 93, 81, 1)), # time_steps, depth, height, width, channels
tf.keras.layers.TimeDistributed(tf.keras.layers.Conv3D(6, kernel_size=7, activation='relu', kernel_initializer='he_normal')),
tf.keras.layers.TimeDistributed(tf.keras.layers.MaxPooling3D(pool_size=(2, 2, 2))),
tf.keras.layers.TimeDistributed(tf.keras.layers.BatchNormalization()),
tf.keras.layers.TimeDistributed(tf.keras.layers.Conv3D(32, kernel_size=3, activation='relu', kernel_initializer='he_normal')),
tf.keras.layers.TimeDistributed(tf.keras.layers.MaxPooling3D(pool_size=(2, 2, 2))),
tf.keras.layers.TimeDistributed(tf.keras.layers.BatchNormalization()),
tf.keras.layers.TimeDistributed(tf.keras.layers.Conv3D(128, kernel_size=2, activation='relu', kernel_initializer='he_normal')),
tf.keras.layers.TimeDistributed(tf.keras.layers.MaxPooling3D(pool_size=(2, 2, 2))),
tf.keras.layers.TimeDistributed(tf.keras.layers.BatchNormalization()),
tf.keras.layers.TimeDistributed(tf.keras.layers.Conv3D(256, kernel_size=2, activation='relu', kernel_initializer='he_normal')),
tf.keras.layers.TimeDistributed(tf.keras.layers.Flatten()),
tf.keras.layers.TimeDistributed(tf.keras.layers.BatchNormalization()),
tf.keras.layers.Conv1D(256, kernel_size=5, activation='relu', kernel_initializer='he_normal'),
tf.keras.layers.MaxPooling1D(pool_size=2),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv1D(512, kernel_size=3, activation='relu', kernel_initializer='he_normal'),
tf.keras.layers.MaxPooling1D(pool_size=2),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv1D(1024, kernel_size=2, activation='relu', kernel_initializer='he_normal'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(2, activation='softmax')])
```
model.load_weights(...) -> this one doesn't raise any error
Thanks a lot! | closed | 2024-08-13T08:38:45Z | 2025-01-28T01:59:54Z | https://github.com/keras-team/keras/issues/20116 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | Senantq | 12 |
harry0703/MoneyPrinterTurbo | automation | 57 | NotFoundError: Error code: 404 Invalid URL (POST /v1/chat/completions/chat/completions | NotFoundError: Error code: 404 - {'error': {'message': 'Invalid URL (POST /v1/chat/completions/chat/completions)', 'type': 'invalid_request_error', 'param': None, 'code': None}}
When I finished deploying and ran the program, the above error was reported... I found this in the config file:
```toml
openai_base_url = "https://api.openai.com/v1/chat/completions"
openai_model_name = "gpt-4-turbo-preview"
```
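My rough understanding of what is happening (illustrative only, not MoneyPrinterTurbo's actual code): OpenAI-style clients append the endpoint path to the configured base URL, so a base URL that already ends in `/chat/completions` produces the doubled path seen in the 404:

```python
# Hypothetical illustration of how the request URL gets doubled; the joining
# logic here is my assumption, not the SDK's real implementation.
base_url = "https://api.openai.com/v1/chat/completions"  # value from the config above
endpoint = "chat/completions"                            # path the client appends
request_url = f"{base_url.rstrip('/')}/{endpoint}"
print(request_url)
# -> https://api.openai.com/v1/chat/completions/chat/completions
```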
Why are the URLs duplicated? How can I fix this problem? Thanks, man... | closed | 2024-03-25T09:05:37Z | 2024-03-25T10:03:26Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/57 | [] | WebKing2024 | 4 |
FactoryBoy/factory_boy | sqlalchemy | 625 | factory.Iterator caches results | #### Description
I use `factory.Iterator` in a `factory.DjangoModelFactory`; the iterable is a list of strings. If I call `create` on this factory, it caches the results and returns the same objects.
#### To Reproduce
1. Create django-model with a field
```python
class Course(models.Model):
    title = models.CharField(max_length=255, verbose_name=_("title"))
```
2. Create a Django factory

```python
class CourseFactory(factory.DjangoModelFactory):
    class Meta:
        model = Course

    title = factory.Iterator([
        "Python",
        "NodeJS",
        "Java",
        "Swift",
        "C++",
        "Objective-C",
        "Assembler",
    ])
```
3. Call `CourseFactory.create()` in a loop:

```
>>> for i in range(20):
...     print(CourseFactory().id)
1
2
3
4
5
6
7
1
2
3
4
5
6
7
1
2
3
4
5
```
| closed | 2019-06-21T13:42:06Z | 2020-10-13T10:23:32Z | https://github.com/FactoryBoy/factory_boy/issues/625 | [
"NeedInfo",
"Django"
] | sergiusnick | 3 |
fastapi-users/fastapi-users | asyncio | 305 | Protect /register ? | Hi,
First, thanks for the plugin, it's really helpful.
Second, how can I protect the register route? I would like to set up a super-admin user by default (like Django; I will do it via CLI) so that afterwards, only a logged-in super-admin can register people.
Thanks in advance for the answer. | closed | 2020-08-18T09:10:02Z | 2020-08-26T09:09:31Z | https://github.com/fastapi-users/fastapi-users/issues/305 | [
"question"
] | Atem18 | 4 |
open-mmlab/mmdetection | pytorch | 11,804 | Training issues with custom dataset (Missing) and learning rate fluctuations | 1. I have searched related issues but cannot get the expected help.
2. I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
3. The bug has not been fixed in the latest version.
**Describe the bug**
1. I custom-trained RetinaNet with my own dataset. I have 600 images in the train set, but during training only 300 are used; the other 300 are missing and I can't figure out the cause.
2. The training at the early stage seems off: `mmengine - ERROR - /content/mmdetection/mmdet/evaluation/metrics/coco_metric.py - compute_metrics - 465 - The testing results of the whole dataset is empty.` From the 5th epoch onwards it is OK, and so is the learning rate. Everything can be seen in the log file below.
[20240619_183332.log](https://github.com/user-attachments/files/15912366/20240619_183332.log)
| open | 2024-06-20T10:24:27Z | 2024-08-07T09:22:05Z | https://github.com/open-mmlab/mmdetection/issues/11804 | [] | Warcry25 | 12 |
ets-labs/python-dependency-injector | asyncio | 340 | How to use with multiprocessing.Process? | Hi, @rmk135
The error `AttributeError: 'Provide' object has no attribute 'print_work'` occurs when using `multiprocessing.Process`.
My code:
```python
from dependency_injector import containers, providers
from dependency_injector.wiring import inject, Provide
from multiprocessing import Process
from typing import Callable
import schedule
import time
import sys


class MyWorker:
    def __init__(self, task_name):
        self.task_name = task_name

    def print_work(self):
        print(self.task_name)
        return self.task_name


class Container(containers.DeclarativeContainer):
    config = providers.Configuration()
    worker = providers.Singleton(MyWorker, task_name="my_task")


@inject
def run_predict_interval(service: MyWorker = Provide[Container.worker]):
    schedule.every().sunday.at("09:35").do(
        job_current_data_recalc_wrapper, service.print_work)
    while True:
        schedule.run_pending()
        time.sleep(1)


def job_current_data_recalc_wrapper(task: Callable[[], None]):
    try:
        task()
    except Exception as e:
        print(e)


if __name__ == '__main__':
    containter = Container()
    containter.wire(modules=[sys.modules[__name__]])
    # print('without Process')
    # --work--
    # run_predict_interval()
    # print('---------------')
    print("with Process")
    al = Process(target=run_predict_interval)
    al.start()
    al.join()
    print('--------------')
```
Without multiprocessing it works fine.
How can I use it with multiprocessing? | closed | 2020-12-20T06:47:27Z | 2020-12-21T16:18:23Z | https://github.com/ets-labs/python-dependency-injector/issues/340 | [
"question"
] | ShvetsovYura | 2 |
koxudaxi/datamodel-code-generator | fastapi | 1,689 | `additionalProperties: false` not working in release 0.23.0 | **Describe the bug**
Cannot build data models from jsonschema with `additionalProperties: false`. Removing the offending line allows the example to generate correctly.
**To Reproduce**
Example schema:
```json
{
"type": "object",
"additionalProperties": false,
"properties": {
"enabled": {
"description": "enabled?",
"type": "boolean",
"default": true
}
}
}
```
Used commandline:
```
datamodel-codegen \
--input etc/registry/schema.json \
--output python/schema.py \
--input-file-type jsonschema \
--output-model-type dataclasses.dataclass \
--disable-timestamp \
--target-python-version 3.9
```
**Expected behavior**
Data model to be generated
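For illustration, roughly the shape of dataclass I would expect the command above to emit for this schema (a hypothetical expectation, not actual tool output; the class name follows the tool's defaults):

```python
# Illustrative expectation only, not actual datamodel-code-generator output.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Model:
    enabled: Optional[bool] = True
```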
**Version:**
- OS: ubuntu
- Python version: Python 3.11.6 (within pipenv)
- datamodel-code-generator version: 0.23.0
**Additional context**
Jsonschema docco for `additionalProperties`: https://json-schema.org/understanding-json-schema/reference/object#additionalproperties
Looks like support was added for `additionalProperties: false` in OpenAPISchemas here: https://github.com/koxudaxi/datamodel-code-generator/pull/472
| closed | 2023-11-14T01:42:56Z | 2023-12-04T15:20:22Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1689 | [
"answered"
] | dcrowe | 1 |
ultralytics/yolov5 | deep-learning | 13,332 | WARNING ⚠️ NMS time limit 0.340s exceeded | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
_No response_
### Bug
Hi YOLO community. I'm running training on my CPU and I have this problem. Note that I've already checked the previous similar issues and found this:
```python
time_limit = 0.1 + 0.02 * bs  # seconds to quit after
```
I applied it, but the issue is still there:
raidhani@raidhani-All-Series:~/catkin_ws/src/yolov5$ python3 train.py --img 640 --batch 6 --epochs 100 --data /home/raidhani/catkin_ws/src/data/data.yaml --weights yolov5s.pt
train: weights=yolov5s.pt, cfg=, data=/home/raidhani/catkin_ws/src/data/data.yaml, hyp=data/hyps/hyp.scratch-low.yaml, epochs=100, batch_size=6, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, evolve_population=data/hyps, resume_evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs/train, name=exp, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest, ndjson_console=False, ndjson_file=False
github: up to date with https://github.com/ultralytics/yolov5 ✅
YOLOv5 🚀 v7.0-368-gb163ff8d Python-3.8.10 torch-1.11.0+cpu CPU
hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
Comet: run 'pip install comet_ml' to automatically track and visualize YOLOv5 🚀 runs in Comet
TensorBoard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/
Overriding model.yaml nc=80 with nc=10
from n params module arguments
0 -1 1 3520 models.common.Conv [3, 32, 6, 2, 2]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 models.common.C3 [64, 64, 1]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 2 115712 models.common.C3 [128, 128, 2]
5 -1 1 295424 models.common.Conv [128, 256, 3, 2]
6 -1 3 625152 models.common.C3 [256, 256, 3]
7 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
8 -1 1 1182720 models.common.C3 [512, 512, 1]
9 -1 1 656896 models.common.SPPF [512, 512, 5]
10 -1 1 131584 models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 361984 models.common.C3 [512, 256, 1, False]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 90880 models.common.C3 [256, 128, 1, False]
18 -1 1 147712 models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 296448 models.common.C3 [256, 256, 1, False]
21 -1 1 590336 models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 1182720 models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 40455 models.yolo.Detect [10, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model summary: 214 layers, 7046599 parameters, 7046599 gradients, 16.0 GFLOPs
Transferred 343/349 items from yolov5s.pt
optimizer: SGD(lr=0.01) with parameter groups 57 weight(decay=0.0), 60 weight(decay=0.000515625), 60 bias
train: Scanning /home/raidhani/catkin_ws/src/data/train/labels.cache... 1008 images, 120 backgrounds, 0 corrupt: 100%|█████████
val: Scanning /home/raidhani/catkin_ws/src/data/valid/labels.cache... 230 images, 31 backgrounds, 0 corrupt: 100%|██████████| 2
AutoAnchor: 4.51 anchors/target, 0.997 Best Possible Recall (BPR). Current anchors are a good fit to dataset ✅
Plotting labels to runs/train/exp11/labels.jpg...
Image sizes 640 train, 640 val
Using 6 dataloader workers
Logging results to runs/train/exp11
Starting training for 100 epochs...
Epoch GPU_mem box_loss obj_loss cls_loss Instances Size
0/99 0G 0.1032 0.06545 0.05506 64 640: 100%|██████████| 169/169 [07:32<00:00, 2.68s/it
Class Images Instances P R mAP50 mAP50-95: 0%| | 0/20 [00:00<?, ?it/sWARNING ⚠️ NMS time limit 0.340s exceeded
Class Images Instances P R mAP50 mAP50-95: 5%|▌ | 1/20 [00:01<00:29, WARNING ⚠️ NMS time limit 0.340s exceeded
Class Images Instances P R mAP50 mAP50-95: 10%|█ | 2/20 [00:03<00:27, WARNING ⚠️ NMS time limit 0.340s exceeded
Class Images Instances P R mAP50 mAP50-95: 15%|█▌ | 3/20 [00:04<00:26, WARNING ⚠️ NMS time limit 0.340s exceeded
Class Images Instances P R mAP50 mAP50-95: 20%|██ | 4/20 [00:06<00:24, WARNING ⚠️ NMS time limit 0.340s exceeded
Class Images Instances P R mAP50 mAP50-95: 25%|██▌ | 5/20 [00:07<00:24, Class Images Instances P R mAP50 mAP50-95: 25%|██▌ | 5/20 [00:08<00:24,
Traceback (most recent call last):
File "train.py", line 986, in <module>
### Environment
YOLOv5 🚀 v7.0-368-gb163ff8d Python-3.8.10 torch-1.11.0+cpu CPU
### Minimal Reproducible Example
python3 train.py --img 640 --batch 6 --epochs 100 --data /home/raidhani/catkin_ws/src/data/data.yaml --weights yolov5s.pt
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2024-09-25T01:24:00Z | 2024-11-09T14:46:48Z | https://github.com/ultralytics/yolov5/issues/13332 | [
"bug"
] | haniraid | 2 |
ydataai/ydata-profiling | data-science | 1,555 | Error while getting the json of a compare report | ### Current Behaviour
Hi, my code is pretty simple: I read two parquet files, created a report for each pandas DataFrame, and used the `compare` method to generate a comparison report. I tried to use the `to_json()` method to convert my report to JSON and got the following error:
`TypeError: to_dict() got an unexpected keyword argument 'orient'`
I saw that you already resolved this issue in:
fix: comparison to_json pd.Series encoding error #1538
I upgraded the package to the latest version and I still get the same error.
### Expected Behaviour
I expected to convert my report to JSON successfully.
### Data Description
The datasets I am using are confidential, but the data format is parquet.
### Code that reproduces the bug
```Python
import pandas as pd
from ydata_profiling import ProfileReport
df_ref = pd.read_parquet('dir/to/my/data/df_ref.parquet')
df_old = pd.read_parquet('dir/to/my/data/df_old.parquet')
ref_report = ProfileReport(df_ref, title='df ref report')
old_report = ProfileReport(df_old, title='df old report')
comparison_report = ref_report.compare(old_report)
comparison_report.to_json()
```
### pandas-profiling version
v4.6.4
### Dependencies
```Text
adagio==0.2.4
aiofiles==23.2.1
aiosignal==1.3.1
alabaster==0.7.13
alembic==1.12.0
altair==5.1.1
annotated-types==0.6.0
ansi2html==1.8.0
antlr4-python3-runtime==4.11.1
anyio==3.7.1
appdirs==1.4.4
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
ast_decompiler==0.7.0
astatine==0.3.3
astor==0.8.1
astpretty==3.0.0
astroid==2.15.8
asttokens==2.4.0
async-lru==2.0.4
attrs==23.1.0
autoflake==1.7.8
autoviz==0.1.730
aws-secretsmanager-caching==1.1.1.5
awscli==1.32.37
Babel==2.12.1
backcall==0.2.0
bandit==1.7.7
beautifulsoup4==4.12.2
black==22.12.0
bleach==6.0.0
bokeh==2.4.3
boto3==1.34.37
botocore==1.34.37
cachetools==5.3.2
catboost==1.2.1
category-encoders==2.6.2
certifi==2023.7.22
cffi==1.15.1
charset-normalizer==3.2.0
click==8.1.7
cloudpickle==2.2.1
cmaes==0.10.0
cognitive-complexity==1.3.0
colorama==0.4.4
colorcet==3.0.1
colorlog==6.7.0
colour==0.1.5
comm==0.1.4
contourpy==1.1.0
coverage==6.5.0
cryptography==41.0.3
cycler==0.11.0
Cython==3.0.2
daal==2023.2.1
daal4py==2023.2.1
dacite==1.8.1
darglint==1.8.1
dash==2.13.0
dash-auth==2.0.0
dash-bootstrap-components==1.4.2
dash-core-components==2.0.0
dash-cytoscape==0.3.0
dash-html-components==2.0.0
dash-table==5.0.0
dash-testing-stub==0.0.2
dask==2023.5.0
databricks-cli==0.17.7
debugpy==1.6.7.post1
decorator==5.1.1
deepchecks==0.17.4
defusedxml==0.7.1
deprecation==2.1.0
dill==0.3.7
distlib==0.3.8
distributed==2023.5.0
dlint==0.14.1
doc8==0.11.2
docformatter==1.7.5
docker==6.1.3
docutils==0.16
domdf-python-tools==3.8.0.post2
dtreeviz==2.2.2
eli5==0.13.0
emoji==2.8.0
entrypoints==0.4
eradicate==2.3.0
evidently==0.2.8
exceptiongroup==1.1.3
executing==1.2.0
explainerdashboard==0.4.3
fairlearn==0.7.0
fastapi==0.103.1
fastjsonschema==2.18.0
ffmpy==0.3.1
filelock==3.12.3
flake8==4.0.1
flake8-2020==1.6.1
flake8-aaa==0.17.0
flake8-annotations==2.9.1
flake8-annotations-complexity==0.0.8
flake8-annotations-coverage==0.0.6
flake8-bandit==3.0.0
flake8-black==0.3.6
flake8-blind-except==0.2.1
flake8-breakpoint==1.1.0
flake8-broken-line==0.4.0
flake8-bugbear==22.12.6
flake8-builtins==1.5.3
flake8-class-attributes-order==0.1.3
flake8-coding==1.3.2
flake8-cognitive-complexity==0.1.0
flake8-commas==2.1.0
flake8-comments==0.1.2
flake8-comprehensions==3.14.0
flake8-debugger==4.1.2
flake8-django==1.4
flake8-docstrings==1.7.0
flake8-encodings==0.5.1
flake8-eradicate==1.4.0
flake8-executable==2.1.3
flake8-expression-complexity==0.0.11
flake8-fixme==1.1.1
flake8-functions==0.0.8
flake8-functions-names==0.4.0
flake8-future-annotations==0.0.5
flake8-helper==0.2.2
flake8-isort==4.2.0
flake8-literal==1.4.0
flake8-logging-format==0.9.0
flake8-markdown==0.3.0
flake8-mutable==1.2.0
flake8-no-pep420==2.7.0
flake8-noqa==1.4.0
flake8-pie==0.16.0
flake8-plugin-utils==1.3.3
flake8-polyfill==1.0.2
flake8-pyi==22.11.0
flake8-pylint==0.2.1
flake8-pytest-style==1.7.2
flake8-quotes==3.3.2
flake8-rst-docstrings==0.2.7
flake8-secure-coding-standard==1.4.1
flake8-slots==0.1.6
flake8-string-format==0.3.0
flake8-tidy-imports==4.10.0
flake8-typing-imports==1.12.0
flake8-use-fstring==1.4
flake8-use-pathlib==0.3.0
flake8-useless-assert==0.4.4
flake8-variables-names==0.0.6
flake8-warnings==0.4.0
flake8_simplify==0.21.0
Flask==2.2.3
flask-simplelogin==0.1.2
Flask-WTF==1.1.1
fonttools==4.42.1
frozenlist==1.4.0
fs==2.4.16
fsspec==2023.9.0
fugue==0.8.6
fugue-sql-antlr==0.1.6
future==0.18.3
gevent==23.9.0.post1
gitdb==4.0.10
GitPython==3.1.34
gradio==3.42.0
gradio_client==0.5.0
graphviz==0.20.1
greenlet==2.0.2
grpcio==1.57.0
gunicorn==20.1.0
h11==0.14.0
holoviews==1.14.9
htmlmin==0.1.12
httpcore==0.17.3
httpx==0.24.1
huggingface-hub==0.16.4
hvplot==0.7.3
hyperopt==0.2.7
hypothesis==6.97.1
hypothesmith==0.1.9
idna==3.4
ImageHash==4.3.1
imageio==2.31.3
imagesize==1.4.1
imbalanced-learn==0.11.0
importlib-metadata==5.2.0
importlib-resources==6.0.1
iniconfig==2.0.0
interpret==0.4.4
interpret-core==0.4.4
ipykernel==6.25.2
ipython==7.34.0
ipython-genutils==0.2.0
ipywidgets==7.8.1
isort==5.13.2
itsdangerous==2.1.2
jedi==0.19.0
Jinja2==3.1.2
jmespath==1.0.1
joblib==1.3.2
json5==0.9.14
jsonpickle==3.0.2
jsonschema==4.19.0
jsonschema-specifications==2023.7.1
jupyter==1.0.0
jupyter-console==6.6.3
jupyter-dash==0.4.2
jupyter-events==0.7.0
jupyter-lsp==2.2.0
jupyter-server==1.24.0
jupyter_client==7.4.9
jupyter_core==5.3.1
jupyter_server_terminals==0.4.4
jupyterlab==4.0.5
jupyterlab-flake8==0.7.1
jupyterlab-pygments==0.2.2
jupyterlab-widgets==1.1.7
jupyterlab_server==2.24.0
kaleido==0.2.1
kiwisolver==1.4.5
kmodes==0.12.2
lark-parser==0.12.0
lazy-object-proxy==1.10.0
lazy_loader==0.3
libcst==0.4.10
lightgbm==4.1.0
lime==0.2.0.1
linkify-it-py==2.0.2
llvmlite==0.40.1
locket==1.0.0
lxml==4.9.3
m2cgen==0.10.0
Mako==1.2.4
Markdown==3.4.4
markdown-it-py==3.0.0
MarkupSafe==2.1.3
matplotlib==3.7.2
matplotlib-inline==0.1.6
mccabe==0.6.1
mdit-py-plugins==0.4.0
mdurl==0.1.2
mistune==3.0.1
mlflow==1.30.1
mlxtend==0.22.0
moto==4.2.2
mr-proper==0.0.7
msgpack==1.0.5
multimethod==1.9.1
multiprocess==0.70.15
mypy-extensions==1.0.0
natsort==8.4.0
nbclassic==1.0.0
nbclient==0.8.0
nbconvert==7.8.0
nbformat==5.9.2
nest-asyncio==1.5.7
networkx==3.1
nltk==3.8.1
notebook==6.5.6
notebook_shim==0.2.3
numba==0.57.1
numpy==1.23.5
nvidia-ml-py==12.535.133
nvitop==1.3.2
oauthlib==3.2.2
optuna==3.3.0
orjson==3.9.5
outcome==1.2.0
overrides==7.4.0
oyaml==1.0
packaging==21.3
pandas==2.0.3
pandas-dq==1.28
pandas-vet==0.2.3
pandocfilters==1.5.0
panel==0.14.4
param==1.13.0
parso==0.8.3
partd==1.4.1
pathspec==0.9.0
patsy==0.5.3
pbr==6.0.0
pep8-naming==0.12.1
percy==2.0.2
pexpect==4.8.0
phik==0.12.3
pickleshare==0.7.5
Pillow==10.0.0
pkg_resources==0.0.0
pkgutil_resolve_name==1.3.10
platformdirs==3.10.0
plotly==5.16.1
plotly-resampler==0.9.1
pluggy==1.3.0
pmdarima==2.0.3
polars==0.19.2
prometheus-client==0.17.1
prometheus-flask-exporter==0.22.4
prompt-toolkit==3.0.39
protobuf==4.24.2
psutil==5.9.5
psycopg2-binary==2.9.9
ptyprocess==0.7.0
pure-eval==0.2.2
py==1.11.0
py4j==0.10.9.7
pyamg==5.0.1
pyaml==23.9.2
pyarrow==13.0.0
pyasn1==0.5.0
pybetter==0.4.1
pycaret==3.0.4
pycln==1.3.5
pycodestyle==2.8.0
pycparser==2.21
pyct==0.5.0
pydantic==2.6.2
pydantic-settings==2.1.0
pydantic_core==2.16.3
pydocstyle==6.3.0
pydub==0.25.1
pyemojify==0.2.0
pyflakes==2.4.0
Pygments==2.16.1
PyJWT==2.8.0
pylint==2.17.7
PyMySQL==1.1.0
PyNaCl==1.5.0
pynndescent==0.5.10
PyNomaly==0.3.3
pyod==1.1.0
pyOpenSSL==23.2.0
pyparsing==3.0.9
PySocks==1.7.1
pytest==7.4.1
pytest-cov==3.0.0
pytest-sugar==0.9.7
python-dateutil==2.8.2
python-dev-tools==2022.5.27
python-dotenv==1.0.1
python-json-logger==2.0.7
python-multipart==0.0.6
python-utils==3.7.0
pytz==2022.7.1
pyupgrade==2.38.4
pyviz_comms==3.0.0
PyWavelets==1.4.1
PyYAML==6.0.1
pyzmq==23.2.1
qpd==0.4.4
qtconsole==5.4.4
QtPy==2.4.0
querystring-parser==1.2.4
ray==2.6.3
referencing==0.30.2
regex==2023.8.8
removestar==1.5
requests==2.31.0
responses==0.23.3
restructuredtext-lint==1.4.0
retrying==1.3.4
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rich==13.7.0
rpds-py==0.10.2
rsa==4.7.2
s3transfer==0.10.0
SALib==1.4.7
schemdraw==0.15
scikit-base==0.5.1
scikit-image==0.21.0
scikit-learn==1.2.2
scikit-learn-intelex==2023.2.1
scikit-optimize==0.9.0
scikit-plot==0.3.7
scipy==1.10.1
seaborn==0.12.2
selenium==4.2.0
semantic-version==2.10.0
Send2Trash==1.8.2
setuptools-scm==7.1.0
shap==0.42.1
six==1.16.0
skope-rules==1.0.1
sktime==0.22.0
slicer==0.0.7
smmap==5.0.0
sniffio==1.3.0
snowballstemmer==2.2.0
sortedcontainers==2.4.0
soupsieve==2.5
Sphinx==4.5.0
sphinxcontrib-applehelp==1.0.4
sphinxcontrib-devhelp==1.0.2
sphinxcontrib-htmlhelp==2.0.1
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.3
sphinxcontrib-serializinghtml==1.1.5
SQLAlchemy==1.4.49
sqlglot==18.2.0
sqlparse==0.4.4
ssort==0.12.3
stack-data==0.6.2
starlette==0.27.0
statsforecast==1.6.0
statsmodels==0.14.0
stdlib-list==0.10.0
stevedore==5.1.0
tabulate==0.9.0
tangled-up-in-unicode==0.2.0
tbats==1.1.3
tbb==2021.10.0
tblib==2.0.0
tenacity==8.2.3
tensorboardX==2.6.2.2
termcolor==2.4.0
terminado==0.17.1
textblob==0.17.1
threadpoolctl==3.2.0
tifffile==2023.7.10
tinycss2==1.2.1
tokenize-rt==4.2.1
toml==0.10.2
tomli==2.0.1
tomlkit==0.12.3
toolz==0.12.0
tornado==6.3.3
tox==3.28.0
tox-travis==0.13
tqdm==4.66.1
trace-updater==0.0.9.1
traitlets==5.9.0
treeinterpreter==0.2.3
triad==0.9.1
trio==0.22.2
trio-websocket==0.10.3
tsdownsample==0.1.2
tune-sklearn==0.4.6
typeguard==4.1.5
typer==0.4.2
types-PyYAML==6.0.12.11
typing-inspect==0.9.0
typing_extensions==4.7.1
tzdata==2024.1
uc-micro-py==1.0.2
umap-learn==0.5.3
untokenize==0.1.1
urllib3==1.26.16
urllib3-secure-extra==0.1.0
uvicorn==0.23.2
virtualenv==20.25.0
virtualenv-clone==0.5.7
visions==0.7.5
waitress==2.1.2
wcwidth==0.2.6
webencodings==0.5.1
websocket-client==1.6.2
websockets==11.0.3
wemake-python-styleguide==0.16.1
Werkzeug==2.2.3
widgetsnbextension==3.6.6
wordcloud==1.9.2
wrapt==1.16.0
wsproto==1.2.0
WTForms==3.0.1
wurlitzer==3.0.3
xgboost==1.7.6
xlrd==2.0.1
xmltodict==0.13.0
xxhash==3.3.0
xyzservices==2023.7.0
ydata-profiling==4.6.4
yellowbrick==1.5
zict==3.0.0
zipp==3.16.2
zope.event==5.0
zope.interface==6.0
```
### OS
ubuntu
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | open | 2024-02-25T16:48:40Z | 2024-07-10T21:55:02Z | https://github.com/ydataai/ydata-profiling/issues/1555 | [
"needs-triage"
] | ronfisher21 | 1 |
sigmavirus24/github3.py | rest-api | 586 | Add unit tests for github.repos.branch | We have integration tests but no unit tests; specifically for protect/unprotect methods.
##
<bountysource-plugin>
---
Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/31739275-add-unit-tests-for-github-repos-branch?utm_campaign=plugin&utm_content=tracker%2F183477&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F183477&utm_medium=issues&utm_source=github).
</bountysource-plugin> | closed | 2016-03-10T08:34:10Z | 2018-07-23T12:26:05Z | https://github.com/sigmavirus24/github3.py/issues/586 | [
"help wanted",
"Mentored/Pair available"
] | itsmemattchung | 10 |
jupyter-book/jupyter-book | jupyter | 2,185 | Issue on page /摩阻扭矩-井轨迹计算.html | Your issue content here. | open | 2024-08-06T05:56:19Z | 2024-08-06T05:56:19Z | https://github.com/jupyter-book/jupyter-book/issues/2185 | [] | jonkwill3 | 0 |
Miksus/rocketry | automation | 206 | Warning error | Very nice project man!! It's a pleasure to use it im my own project.
But i have a little problem. When I run the tasks shows that message on the console:
```console
UserWarning: Logger rocketry.task cannot be read. Logging is set to memory. To supress this warning, please set a handler that can be read (redbird.logging.RepoHandler)
warnings.warn(
/home/alex/Documentos/Estudos/Palmeiras_News/.venv/lib/python3.10/site-packages/rocketry/session.py:364: UserWarning: Logger rocketry.task has too low level (WARNING). Level is set to INFO to make sure the task logs get logged.
```
My code:
```python
repo = CSVFileRepo(filename='app/task.csv', model=MinimalRecord)
task_logger = logging.getLogger('rocketry.task')
handler = RepoHandler(repo=repo)
task_logger.addHandler(handler)
```
I already added the `repo`; am I doing something wrong?
OS: Linux Ubuntu 22.04 LTS
Python version 3.10
Additional context
rocketry 2.5.1
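For reference, a minimal stdlib-only sketch of what I believe the second warning is asking for: an explicit `INFO` level on the task logger. The `ListHandler` below is a hypothetical stand-in for redbird's `RepoHandler`, just so the snippet runs without rocketry installed:

```python
import logging

# Hypothetical stand-in for redbird.logging.RepoHandler: collects messages in a list.
records = []

class ListHandler(logging.Handler):
    def emit(self, record):
        records.append(record.getMessage())

task_logger = logging.getLogger("rocketry.task")
task_logger.setLevel(logging.INFO)   # without this, INFO-level task logs are dropped
task_logger.addHandler(ListHandler())
task_logger.info("task started")     # now reaches the handler
```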
| open | 2023-06-15T17:12:57Z | 2025-01-22T10:39:40Z | https://github.com/Miksus/rocketry/issues/206 | [
"bug"
] | LecoOliveira | 2 |
mwaskom/seaborn | matplotlib | 3,533 | Try `to_pandas` rather than erroring if interchanging to pandas doesn't work? | Here's an example of some code which currently raises:
```python
import seaborn as sns
import polars as pl
df = pl.DataFrame({'a': [1,2,3], 'b': [4,5,6], 'c': [[1,2], [4,5], [6,7]]})
sns.catplot(
data=df,
x="a",
y="b",
)
```
There's a really long error message, but the gist of it is
```
ValueError: data type List(Int64) not supported by the interchange protocol
```
Indeed, just doing
```python
pd.api.interchange.from_dataframe(df)
```
would raise the same error
I'd like to suggest that, when converting to pandas, if the interchange protocol fails, there be a `to_pandas` fallback.
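A minimal sketch of such a fallback (a hypothetical helper, not seaborn's actual internals; the function name is made up):

```python
import pandas as pd

def coerce_to_pandas(data):
    # Try the interchange protocol first; fall back to a native to_pandas().
    try:
        return pd.api.interchange.from_dataframe(data)
    except Exception:
        if hasattr(data, "to_pandas"):
            return data.to_pandas()
        raise
```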
Because that at least works here (for reference, plotly do the same - try interchanging first, and if that fails, check if there's `to_pandas`, and if so, use that) | open | 2023-10-19T19:29:55Z | 2023-10-28T13:51:29Z | https://github.com/mwaskom/seaborn/issues/3533 | [
"enhancement"
] | MarcoGorelli | 0 |
kubeflow/katib | scikit-learn | 2,442 | Update Katib Experiment Workflow Guide | ### What you would like to be added?
We should move this sequence diagram to [the Kubeflow Katib reference](https://www.kubeflow.org/docs/components/katib/reference/) guides: https://github.com/kubeflow/katib/blob/master/docs/workflow-design.md#what-happens-after-an-experiment-cr-is-created.
After that, we should remove this doc from the Katib repository.
### Love this feature?
Give it a 👍 We prioritize the features with most 👍 | closed | 2024-10-10T20:22:04Z | 2024-11-30T00:37:18Z | https://github.com/kubeflow/katib/issues/2442 | [
"help wanted",
"good first issue",
"area/docs",
"kind/feature"
] | andreyvelich | 5 |
TencentARC/GFPGAN | pytorch | 208 | Not accepting weights despite being in the right folder | UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
f"The parameter '{pretrained_param}' is deprecated since 0.13 and will be removed in 0.15, "
Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=None`.
warnings.warn(msg) | closed | 2022-07-04T12:51:49Z | 2024-05-29T02:23:30Z | https://github.com/TencentARC/GFPGAN/issues/208 | [] | ABI-projects | 2 |
BeanieODM/beanie | pydantic | 528 | [BUG] Impossible to re-create db index | **Describe the bug**
According to the `init_beanie` documentation in the code, I can pass the `allow_index_dropping` parameter to handle DB index changes, but when I try to create a new index (added in the model) I cannot start the server and receive an index duplication error.
**To Reproduce**
```python
# step 1: run the server first with this model to create the index from Indexed(str, unique=True)
class Sample(Document):
    name: Indexed(str, unique=True)


# step 2: then update the model to create a partial unique index and restart the server to
# recreate the index. You will receive an error in init_beanie(...) because the existing code
# tries to create the new index before removing the already existing one:
class Sample(Document):
    name: str
    status: str = "active"

    class Settings:
        indexes = [
            IndexModel(
                "name",
                unique=True,
                partialFilterExpression={"status": {"$eq": "active"}},
            ),
        ]


await init_beanie(database=client.db_name, document_models=[Sample], allow_index_dropping=True)
```
**Expected behavior**
When you update an index on the Document, the index should be updated successfully on the next server start.
**Additional context**
Also, it would be great to add information about the `allow_index_dropping` argument to the https://beanie-odm.dev/ documentation.
| closed | 2023-04-05T16:43:31Z | 2023-06-02T19:36:11Z | https://github.com/BeanieODM/beanie/issues/528 | [
"bug"
] | Phobos-Programmer | 6 |
BeastByteAI/scikit-llm | scikit-learn | 90 | predict | ZeroShotGPTClassifier.predict must return an np.array instead of a list. | closed | 2024-04-25T13:00:43Z | 2024-05-18T20:38:29Z | https://github.com/BeastByteAI/scikit-llm/issues/90 | [] | lpfgarcia | 2 |
gee-community/geemap | streamlit | 422 | Determine the area of unsupervised clusters | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- geemap version:
- Python version:
- Operating System:
### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
I performed an unsupervised classification with kmeans and wekaCascadeKMeans, and I need to know the area of each cluster and their geographical positions. Does anyone have any idea how to retrieve this information? In the case of wekaCascade, given that it takes a minimum and maximum number of clusters, how does it determine the number of clusters generated?
I also did the clustering on a collection, but I was not able to make a GIF with the images I generated (I added them as layers on the map). Is there any way to save the layers as PNG or JPG images and then make a GIF with cv2 or another Python library?
| closed | 2021-04-14T03:32:48Z | 2021-04-15T00:57:14Z | https://github.com/gee-community/geemap/issues/422 | [
"bug"
] | lpinuer | 1 |
graphdeco-inria/gaussian-splatting | computer-vision | 1,149 | Implement Proper CI/CD and Versioning | Recent code changes have led to significant discrepancies in experimental results, highlighting the absence of proper CI/CD practices and versioning. For a repository with 15k+ stars and many PhD students relying on it for baseline 3DGS results, the lack of tag versioning is appalling.
Please implement proper CI/CD workflows and enforce versioning to maintain reproducibility and stability. | open | 2025-02-01T13:09:38Z | 2025-02-01T13:12:00Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/1149 | [] | TangZJ | 1 |
quantumlib/Cirq | api | 6,542 | Remove cirq-ft | **Description of the issue**
cirq-ft has been deprecated in with users being redirected to qualtran,
this has been effective in stable release 1.3.
TODO: Remove cirq-ft from the repository.
**Cirq version**
1.4.dev at d9ccf411239ec07fca7bfbefc156884b3c4bfa23 | closed | 2024-04-02T19:53:51Z | 2024-05-13T22:22:13Z | https://github.com/quantumlib/Cirq/issues/6542 | [
"kind/health",
"triage/accepted"
] | pavoljuhas | 0 |
twopirllc/pandas-ta | pandas | 90 | Elder Force Index | Hi ,
For the EFI, according to your formula here, you take the difference of the current close price minus the prior close price, multiply it by volume, and then take the exponentially weighted average of that. I may have the wrong understanding of it, but according to Investopedia and thinkorswim, the formula is current close price - prior close price * VFI(13) [a.k.a. the 13-period EMA of the force index].
Investopedia

Thinkorswim

| closed | 2020-08-07T23:06:36Z | 2020-08-09T04:35:00Z | https://github.com/twopirllc/pandas-ta/issues/90 | [
"good first issue"
] | SoftDevDanial | 3 |
ultralytics/ultralytics | machine-learning | 19,494 | Using Sahi with instance segmentation model | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi, I have a YOLOv8 instance segmentation model. I am having trouble doing validation after running the detections with SAHI; can someone point me in the right direction?
I have tested the model on images before and it works properly.
I found some export config that says to use show_masks=True, but I get an error saying that parameter doesn't exist.
```
Traceback (most recent call last):
File "/home/facundo/Desktop/ultralytics/examples/YOLOv8-SAHI-Inference-Video/slice-valid.py", line 31, in <module>
result = get_sliced_prediction(
^^^^^^^^^^^^^^^^^^^^^^
File "/home/facundo/anaconda3/envs/video_yolo/lib/python3.12/site-packages/sahi/predict.py", line 283, in get_sliced_prediction
object_prediction_list = postprocess(object_prediction_list)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/facundo/anaconda3/envs/video_yolo/lib/python3.12/site-packages/sahi/postprocess/combine.py", line 555, in __call__
object_prediction_list[keep_ind] = merge_object_prediction_pair(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/facundo/anaconda3/envs/video_yolo/lib/python3.12/site-packages/sahi/postprocess/utils.py", line 220, in merge_object_prediction_pair
return ObjectPrediction(
^^^^^^^^^^^^^^^^^
File "/home/facundo/anaconda3/envs/video_yolo/lib/python3.12/site-packages/sahi/prediction.py", line 79, in __init__
super().__init__(
File "/home/facundo/anaconda3/envs/video_yolo/lib/python3.12/site-packages/sahi/annotation.py", line 525, in __init__
raise ValueError("Invalid segmentation mask.")
ValueError: Invalid segmentation mask.
```
```python
import os
import cv2
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction
input_dir = "dataset/valid/images"
output_dir = "dataset/valid_output"
yolov8_model_path = "best.pt"
detection_model = AutoDetectionModel.from_pretrained(
model_type='yolov8',
model_path=yolov8_model_path,
confidence_threshold=0.5,
device="cuda:0",
)
os.makedirs(output_dir, exist_ok=True)
for i in os.listdir(input_dir):
if i.endswith(".jpg"):
image = cv2.imread(f"{input_dir}/{i}")[:, :, ::-1]
result = get_sliced_prediction(
image,
detection_model,
slice_height=1280,
slice_width=1280,
overlap_height_ratio=0.2,
overlap_width_ratio=0.2,
postprocess_class_agnostic=True
)
result.export_visuals(export_dir=output_dir, hide_labels=True, hide_conf=True)
object_prediction_list = result.object_prediction_list
coco_annotations = result.to_coco_annotations()[:3]
coco_predictions = result.to_coco_predictions(image_id=1)[:3]
print(f"COCO Annotations: {coco_annotations}")
print(f"COCO Predictions: {coco_predictions}")
print("Processing complete!")
```
### Additional
_No response_ | open | 2025-03-02T22:45:53Z | 2025-03-07T00:14:47Z | https://github.com/ultralytics/ultralytics/issues/19494 | [
"question",
"segment"
] | facundot | 14 |
streamlit/streamlit | deep-learning | 10,786 | font "sans serif" no longer working | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
When adding a `config.toml` with the font set to `"sans serif"` explicitly, as suggested [in the documentation](https://docs.streamlit.io/develop/concepts/configuration/theming#font), a **serif** font is used instead.
```
[theme]
font = "sans serif"
```
This can be fixed by
* omitting the `font` since sans-serif it is the default
* using `"sans-serif"` (with a dash)
It seems as if the font name was changed from `"sans serif"` to `"sans-serif"` without a matching change in the documentation.
This change seems to have been introduced in version 14.3.
### Reproducible Code Example
```Python
```
### Steps To Reproduce
_No response_
### Expected Behavior
_No response_
### Current Behavior
_No response_
### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version:
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_ | closed | 2025-03-14T16:02:49Z | 2025-03-14T17:31:44Z | https://github.com/streamlit/streamlit/issues/10786 | [
"type:bug",
"status:confirmed",
"priority:P2",
"feature:theming",
"feature:config"
] | johannesjasper | 2 |
mljar/mljar-supervised | scikit-learn | 381 | Memory usage during training | Hi,
I've trained several models with mode="Perform", and when the training gets to a certain point the Python process is killed because of memory usage (I'm using a computer with 16 GB of RAM).
What I do is to rerun the script and change the model_name to the name of the model just created to resume training. A couple of times I've had to repeat this process twice.
It is not due to a single model but to data from previous models (already trained) that is not eliminated from memory.

| open | 2021-04-20T02:33:20Z | 2021-11-10T09:23:41Z | https://github.com/mljar/mljar-supervised/issues/381 | [
"bug"
] | RafaD5 | 13 |
OFA-Sys/Chinese-CLIP | computer-vision | 61 | Model performance in a single modality | Cross-modal models almost always focus on img2text or text2img performance, which reflects how strong the modality alignment is. But after cross-modal alignment pre-training, how does the model's single-modality retrieval ability compare with other feature extractors pre-trained on ImageNet, such as the ResNet family? I tried a simple experiment myself: I took the image tower of a cross-modal pre-trained model such as ViT-B-16 as a feature extractor, built a small image vector retrieval database, and compared it with VGG16; the results were only about on par with VGG16... | open | 2023-02-21T10:26:50Z | 2023-02-27T06:50:12Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/61 | [] | FD-Liekkas | 2 |
deezer/spleeter | tensorflow | 870 | [Feature] Voice separation | ## Description
Is there a way to separate the voices when there are two singers? Ideally it would support both cases: when they sing together and when they sing separately (provided that their voices are quite different).
## Additional information
<!-- Add any additional description -->
| open | 2023-09-25T17:42:24Z | 2023-09-25T17:42:24Z | https://github.com/deezer/spleeter/issues/870 | [
"enhancement",
"feature"
] | Kimi-Arthur | 0 |
MaxHalford/prince | scikit-learn | 171 | Support for sklearn Pipelines | MCA is currently not able to be part of a sklearn Pipeline containing any preceding steps.
In my case I need an Imputer to fill any NaN values.
Working Example:
```
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from prince.mca import MCA
test_data = pd.DataFrame(data=np.random.random((10, 5)))
test = Pipeline(steps=[
("mca", MCA()),
])
test.fit_transform(test_data)
```
But including a SimpleImputer results in a numpy array that is being forwarded to the MCA:
```
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from prince.mca import MCA
test_data = pd.DataFrame(data=np.random.random((10, 5)))
test = Pipeline(steps=[
("impute", SimpleImputer()), # This Breaks the Pipeline since it returns an ndarray
("mca", MCA()),
])
test.fit_transform(test_data)
```
I've tried including a dummy transformer step betwen the imputer and MCA that forwards an arbitrary DataFrame with generic index and column labels, but it results in a KeyError with unknown Index labels being searched in the column list:
```
KeyError: "None of [Index(['Col_0_0.0', 'Col_0_1.0', 'Col_0_2.0', 'Col_0_3.0', 'Col_0_4.0',\n 'Col_0_5.0', 'Col_1_0.0', 'Col_1_1.0', 'Col_1_2.0', 'Col_2_0.0',\n 'Col_2_1.0', 'Col_3_0.0', 'Col_3_1.0'],\n dtype='object')] are in the [columns]"
```
Any suggestions? | closed | 2024-02-05T13:42:32Z | 2024-09-07T13:50:00Z | https://github.com/MaxHalford/prince/issues/171 | [] | MyNameIsFu | 2 |
newpanjing/simpleui | django | 72 | 希望优化部分组件的显示效果 | **你希望哪些方面得到优化?**
1. 列表的行高可以加大一点,现在感觉太密了
2. checkbox样式可以优化一下,默认的太丑了
3. BooleanField字段的显示效果可以改成switch开关的效果,这样会更好看
4. BooleanField的筛选下拉里面显示可以改成是、否吗,或者可以自定义,现在显示的Ture、False不怎么友好啊
**留下你的联系方式,以便与你取得联系**
QQ:326672861
邮箱:mail@lianghongbo.com
| closed | 2019-06-02T03:26:02Z | 2019-06-03T02:22:51Z | https://github.com/newpanjing/simpleui/issues/72 | [
"enhancement"
] | herbieliang | 1 |
mljar/mercury | data-visualization | 357 | disable scales instances down task for local deployment | I got log message:
```
[2023-08-31 08:32:00,472: DEBUG/MainProcess] Scale instances down
[2023-08-31 08:32:00,484: INFO/MainProcess] Task mercury.server.celery.scale_down_task[c52403e5-5775-48d1-a832-feabc48f5990] succeeded in 0.01414593400022568s: None
```
It means that there is a `scale_down_task` running, which is not needed when running Mercury Server on a single instance.
| closed | 2023-08-31T08:33:06Z | 2023-09-19T13:58:08Z | https://github.com/mljar/mercury/issues/357 | [] | pplonski | 0 |
tensorly/tensorly | numpy | 40 | Backend 1-Dimensional Output Inconsistency | The way different backends output certain 1-dimensional data is inconsistent. In particular, the `dot` operation under the MXNet backend outputs 1x1 tensor whereas under the Pytorch backend the output is a scalar.
```python
>>> import tensorly as T
>>> T.set_backend('mxnet')
Using mxnet backend.
>>> T.assert_equal(T.dot(T.tensor([1,0]), T.tensor([0,1])), 0.0)
AssertionError:
Items are not equal:
ACTUAL:
[ 0.]
<NDArray 1 @cpu(0)>
DESIRED: 0.0
>>> T.set_backend('pytorch')
Using pytorch backend.
>>> T.assert_equal(T.dot(T.tensor([1,0]), T.tensor([0,1])), 0.0)
```
The issue came up when writing tests which work across all backends. I can think of several solutions to this issue but they all have certain design implications which require @JeanKossaifi 's input. For example,
* **Modify MXNet's or Pytorch's `dot()` command.** - The downside is that one need to remember this design decision for all new operators.
* **Manually cast all output into tensors** - Force the user to always convert output to `T.tensor([scalar_output])`. However, this comes with its own issues since the MXNet case `scalar_output` is not actually scalar.
* **?** | closed | 2018-03-05T19:27:40Z | 2018-04-02T15:27:37Z | https://github.com/tensorly/tensorly/issues/40 | [] | cswiercz | 1 |
nolar/kopf | asyncio | 673 | timers continue when freezing operations | ## Long story short
Thanks for creating kopf. I am currently developing an operator that updates Egress NetworkPolicy resources to sync its `spec.egress[*].to[*].ipBlock.cidr` keys with A records for DNS hostnames.
This operator used timers (looking into daemons to follow DNS TTL). I run an operator in k8s with cluster peering. For development I start a local instance with --dev.
I see the operator in k8s detects the development instance:
```
Freezing operations in favour of [<Peer rtoma@devsystem/20210208145159/nz5: priority=666, lifetime=60, lastseen='2021-02-08T14:52:01.160123'>].
```
The development instance's log show:
```
Resuming operations after the freeze. Conflicting operators with the same priority are gone.
```
Still both operator instances show `Timer 'NAME' succeeded` messages. I searched the docs, but could not find anything.
## Description
<!-- Please provide as much information as possible. Lack of information may result in a delayed response. As a guideline, use the following placeholders, and add yours as needed. -->
<details><summary>The code snippet to reproduce the issue</summary>
```python
import kopf
```
</details>
<details><summary>The exact command to reproduce the issue</summary>
```bash
kopf run ...
```
</details>
<details><summary>The full output of the command that failed</summary>
```
```
</details>
## Environment
<!-- The following commands can help:
`kopf --version` or `pip show kopf`
`kubectl version`
`python --version`
-->
* Kopf version: 1.29.1
* Kubernetes version: 1.19.5
* Python version: 3.9.1
* OS/platform: k8s on pi4, local on arm mbp
<details><summary>Python packages installed</summary>
<!-- use `pip freeze --all` -->
```
```
</details>
| closed | 2021-02-08T15:11:39Z | 2021-02-25T08:43:21Z | https://github.com/nolar/kopf/issues/673 | [
"bug"
] | rtoma | 2 |
yihong0618/running_page | data-visualization | 641 | localhost refuses to connect | Hello, I'm a beginner. How should I handle the "localhost refused to connect" error encountered during Strava authorization? I tried changing localhost.


| closed | 2024-04-07T16:35:36Z | 2024-04-08T16:35:48Z | https://github.com/yihong0618/running_page/issues/641 | [] | yyniao | 2 |
frappe/frappe | rest-api | 31,200 | Prepared Report should not be set automatically if developer mode is ON | https://github.com/frappe/frappe/pull/29996 | closed | 2025-02-10T04:20:56Z | 2025-02-27T00:15:17Z | https://github.com/frappe/frappe/issues/31200 | [] | dineshpanchal93 | 5 |
svc-develop-team/so-vits-svc | deep-learning | 120 | Can the learning rate be changed in the middle of training? | Can I start training with a larger learning rate (0.0004) and then switch to a normal one (0.0002, 0.0001) partway through? | closed | 2023-04-04T09:19:12Z | 2023-08-01T09:09:42Z | https://github.com/svc-develop-team/so-vits-svc/issues/120 | [
"not urgent"
] | TQG1997 | 6 |
mwaskom/seaborn | data-science | 3,708 | Overlaying Plots | I'm experiencing an issue with overlaying both a boxplot and lineplot on the same axis.
The data consists of three columns: Side, Dose, Volume.
[dvhTestData.csv](https://github.com/user-attachments/files/15776809/dvhTestData.csv)
Side: ['ipsilateral', 'contralateral'] (These are my hue)
Dose: range from 0-6000 in 1000 increments.
Volume: Percentage ranging from 0-100%
```
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
df = pd.read_csv('dvhTestData.csv')
hue_order = ['ipsilateral', 'contralateral']
palette = 'bright'
# Calculate the average for each side
average_df = df.groupby(['Side','Dose',])['Volume'].mean().reset_index()
fig, ax = plt.subplots()
sns.boxplot(data=df, x='Dose', y='Volume', hue='Side',
hue_order=hue_order, palette=palette, fill=False, ax=ax)
sns.lineplot(data=average_df, x='Dose', y='Volume', hue='Side',
hue_order=hue_order, palette=palette, ax=ax)
```

However, if I limit the x-axis to [0,7] we get the following output:
```
fig, ax = plt.subplots()
sns.boxplot(data=df, x='Dose', y='Volume', hue='Side',
hue_order=hue_order, palette=palette, fill=False)
sns.lineplot(data=average_df, x='Dose', y='Volume', hue='Side',
hue_order=hue_order, palette=palette)
ax.set_xlim(0, 7)
```

It seems as though the boxplot is being plotted in the range [0,6] rather than [0, 6000] as expected. If anyone could help explain if I'm incorrectly overlaying plots, or if there is an easy work-around, I would greatly appreciate it.
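For what it's worth, the mismatch comes from `boxplot` treating `x` as categorical: the boxes sit at integer slots 0 through 6, while `lineplot` uses the raw numeric Dose values, so the averages are drawn out at x = 1000, 2000, and so on. One workaround is to map Dose onto those category slots before drawing the line. A sketch (the `average_df` values here are hypothetical, and the `sns.lineplot` call is only indicated in a comment):

```python
import pandas as pd

# hypothetical per-side averages, as produced by the groupby above
average_df = pd.DataFrame({
    "Dose": [0, 1000, 2000, 3000],
    "Volume": [100.0, 85.0, 40.0, 10.0],
})

# boxplot places category k at x = k, so translate Dose -> slot index
dose_levels = sorted(average_df["Dose"].unique())
slot = {d: i for i, d in enumerate(dose_levels)}
average_df["Dose_pos"] = average_df["Dose"].map(slot)

print(average_df["Dose_pos"].tolist())  # [0, 1, 2, 3]
# then: sns.lineplot(data=average_df, x="Dose_pos", y="Volume", hue="Side", ...)
```

On seaborn 0.13+, passing `native_scale=True` to the boxplot call should let both layers share the numeric axis instead.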
Best,
Lewis
| closed | 2024-06-10T17:32:38Z | 2024-06-11T13:12:57Z | https://github.com/mwaskom/seaborn/issues/3708 | [] | LJDrakeley | 2 |
Esri/arcgis-python-api | jupyter | 1,806 | `sources` endpoint for hosted views is not exposed | **Is your feature request related to a problem? Please describe.**
The `sources` endpoint for hosted feature layer views is not exposed in this package. It would be good to have a way to find the source service of a hosted view.
**Describe the solution you'd like**
For example:
```python
item = gis.content.get(itemid)
if item.isView:
sources = item.sources()
```
```python
{'currentVersion': 11.2,
'services': [{'name': 'survey123',
'type': 'FeatureServer',
'url': 'https://services.arcgis.com/orgid/ArcGIS/rest/services/survey123/FeatureServer',
'serviceItemId': 'sourceItemID'}]}
```
**Describe alternatives you've considered**
Currently I have to manually retrieve the token and make a request via `requests.get` to get this information.
**Additional context**
I am not seeing this endpoint documented in the REST docs, but it is exposed in the UI when accessing a REST endpoint of a hosted view.

| open | 2024-04-19T17:15:04Z | 2024-04-19T17:15:04Z | https://github.com/Esri/arcgis-python-api/issues/1806 | [
"enhancement"
] | philnagel | 0 |
Gerapy/Gerapy | django | 180 | The Gerapy client has no delete function | Also, the password of the authenticated account seems to be of little use; the account and password are exposed directly, which let my server get infected by a crypto-mining virus. Gerapy also lacks the ability to delete already-deployed projects; you can only delete projects uploaded locally to Gerapy. | closed | 2021-01-06T10:42:38Z | 2021-01-24T05:02:38Z | https://github.com/Gerapy/Gerapy/issues/180 | [
"bug"
] | jiangyifeng96 | 0 |
mwaskom/seaborn | data-visualization | 3,098 | Figure aspect ratio sometimes off in docs on mobile | I noticed this in portrait mode on Chrome on Android.
<details>
<summary>Example:</summary>

</details> | open | 2022-10-18T15:09:34Z | 2022-11-04T10:39:37Z | https://github.com/mwaskom/seaborn/issues/3098 | [
"docs"
] | zmoon | 0 |
iperov/DeepFaceLive | machine-learning | 68 | No output window | thx for the great work. I have the program working but I was unable to get the output window to pop up. I am running the program on Windows 11. Would that be an issue? Many thx! | closed | 2022-06-21T01:26:00Z | 2022-06-21T03:07:38Z | https://github.com/iperov/DeepFaceLive/issues/68 | [] | acpsm2 | 0 |
recommenders-team/recommenders | data-science | 1,968 | [ASK] No module named 'recommenders' | ### Description
Hi, when I try `from recommenders.models.tfidf.tfidf_utils import TfidfRecommender` I get the error: No module named 'recommenders'. So I use `!pip install git+https://github.com/microsoft/recommenders.git`; in Google Colab I get another error: ERROR: Package 'recommenders' requires a different Python: 3.10.12 not in '<3.10,>=3.6'. So I changed the environment in Anaconda and got yet another error: ERROR: Could not build wheels for safetensors, which is required to install pyproject.toml-based projects. It seems I can't pip install it in either Google Colab or Anaconda; does anyone have the same problem?
### Other Comments
| closed | 2023-08-16T04:40:12Z | 2023-08-17T18:51:55Z | https://github.com/recommenders-team/recommenders/issues/1968 | [
"help wanted",
"duplicate"
] | shasha920 | 2 |
Asabeneh/30-Days-Of-Python | matplotlib | 90 | Teste | closed | 2020-10-12T02:32:21Z | 2021-07-05T22:00:36Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/90 | [] | wilsonrivero | 0 | |
databricks/spark-sklearn | scikit-learn | 51 | n_jobs for refitting | For the refit step, it would be convenient to be able to specify n_jobs. After running a grid search, it would be nice to use more of the cores on the master for the final refit step. The n_jobs parameter exists for compatibility, but it is currently a no-op. I think it makes sense to have that parameter pass through to the final model refit step.
| open | 2017-01-26T17:36:58Z | 2018-12-09T23:38:09Z | https://github.com/databricks/spark-sklearn/issues/51 | [
"enhancement"
] | rocconnick | 0 |
tensorflow/tensor2tensor | machine-learning | 958 | Output eval prediction | Hi,
Is there a way (more or less "hacky") to output predictions that are made during the evaluation step?
I would find it useful to monitor how the model is performing, qualitatively. | open | 2018-07-26T16:51:16Z | 2018-09-08T02:38:14Z | https://github.com/tensorflow/tensor2tensor/issues/958 | [] | pltrdy | 6 |
Lightning-AI/pytorch-lightning | machine-learning | 20,034 | GAN training crashes with unused parameters | ### Bug description
I have a problem training my code without `strategy=ddp_find_unused_parameters_true`. When I turn the flag on, there seem to be no parameters that fail to participate in training. But when I turn it off, training always crashes with error logs like "your model has unused parameters".
You can see the attached code; I found that the actual error is caused by the `.detach()` operation, because when I removed it there was no problem with training. How can I solve this problem?
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```python
def training_step(self, batch, batch_idx):
g_opt, d_opt = self.optimizers()
src_img, drv_img = batch["src"], batch["drv"]
gen_img = self.generator(src_img, drv_img)
errD = self.gan_loss(drv_img, gen_img.detach(), opt_d=True)["errD"]
d_opt.zero_grad(set_to_none=True)
self.manual_backward(errD, retain_graph=True)
d_opt.step()
gan_loss = self.gan_loss(drv_img, gen_img, opt_d=False)
perceptual_loss = self.perceptual_loss(drv_img, gen_img)
errG = gan_loss["errG_GAN"] + gan_loss["errG_FM"] + perceptual_loss["vgg_imagenet"] + perceptual_loss["vgg_face"]
g_opt.zero_grad(set_to_none=True)
self.manual_backward(errG)
g_opt.step()
```
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
Package Version
---------------------------- -----------
absl-py 2.1.0
aiohttp 3.9.5
aiosignal 1.3.1
antlr4-python3-runtime 4.9.3
asttokens 2.4.1
astunparse 1.6.3
async-timeout 4.0.3
attrs 23.2.0
backcall 0.2.0
cachetools 5.3.3
certifi 2024.2.2
charset-normalizer 3.3.2
click 8.1.7
comm 0.2.2
contourpy 1.1.1
cycler 0.12.1
debugpy 1.6.7
decorator 5.1.1
docker-pycreds 0.4.0
executing 2.0.1
facenet-pytorch 2.6.0
filelock 3.14.0
flatbuffers 24.3.25
fonttools 4.53.0
frozenlist 1.4.1
fsspec 2024.5.0
gast 0.4.0
gitdb 4.0.11
GitPython 3.1.43
google-auth 2.30.0
google-auth-oauthlib 1.0.0
google-pasta 0.2.0
grpcio 1.64.1
h5py 3.11.0
idna 3.7
importlib_metadata 7.1.0
importlib_resources 6.4.0
ipykernel 6.29.3
ipython 8.12.2
jedi 0.19.1
Jinja2 3.1.4
jupyter_client 8.6.2
jupyter_core 5.7.2
keras 2.13.1
kiwisolver 1.4.5
libclang 18.1.1
lightning-utilities 0.11.2
Markdown 3.6
MarkupSafe 2.1.5
matplotlib 3.7.5
matplotlib-inline 0.1.7
mpmath 1.3.0
mtcnn 0.1.1
multidict 6.0.5
nest_asyncio 1.6.0
networkx 3.1
numpy 1.24.3
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 8.9.2.26
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.19.3
nvidia-nvjitlink-cu12 12.5.40
nvidia-nvtx-cu12 12.1.105
oauthlib 3.2.2
omegaconf 2.3.0
opencv-python 4.10.0.82
opt-einsum 3.3.0
packaging 24.0
parso 0.8.4
pexpect 4.9.0
pickleshare 0.7.5
pillow 10.2.0
pip 24.0
platformdirs 4.2.2
prompt-toolkit 3.0.42
protobuf 4.25.3
psutil 5.9.8
ptyprocess 0.7.0
pure-eval 0.2.2
pyasn1 0.6.0
pyasn1_modules 0.4.0
Pygments 2.18.0
pyparsing 3.1.2
python-dateutil 2.9.0
pytorch-lightning 2.2.5
PyYAML 6.0.1
pyzmq 25.1.2
requests 2.32.3
requests-oauthlib 2.0.0
rsa 4.9
scipy 1.10.1
sentry-sdk 2.4.0
setproctitle 1.3.3
setuptools 69.5.1
six 1.16.0
slack_sdk 3.29.0
smmap 5.0.1
stack-data 0.6.2
sympy 1.12.1
tensorboard 2.13.0
tensorboard-data-server 0.7.2
tensorflow-estimator 2.13.0
tensorflow-io-gcs-filesystem 0.34.0
termcolor 2.4.0
torch 2.2.2
torchaudio 2.3.0
torchmetrics 1.4.0.post0
torchvision 0.17.2
tornado 6.4
tqdm 4.66.4
traitlets 5.14.3
triton 2.2.0
typing_extensions 4.12.0
urllib3 2.2.1
wandb 0.17.0
wcwidth 0.2.13
Werkzeug 3.0.3
wheel 0.43.0
wrapt 1.16.0
yarl 1.9.4
zipp 3.17.0
```
</details>
### More info
_No response_ | closed | 2024-07-01T09:31:47Z | 2025-03-11T20:03:06Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20034 | [
"bug",
"needs triage"
] | samsara-ku | 0 |
tensorly/tensorly | numpy | 354 | Simplex Projection does not work | #### Describe the bug
I ran the `simplex_prox` new function located in proximal.py from the AOADMM PR, and it turns out it does not work.
Moreover the function can be improved, for instance by allowing 1-d tensor input and removing some computation overhead.
#### Steps or Code to Reproduce
For instance:
```python
from tensorly.tenalg.proximal import simplex_prox
t = tl.tensor([[0.4,1.5,1],[0.5,2,3],[0.6,0.3,2.9]])
print(simplex_prox(t,parameter=1))
```
produces
```python
[[0.35 0.5 0. ]
[0.45 1. 1. ]
[0.55 0. 0.9 ]]
```
which columns do not sum to 1.
Instead the correct output should be
```python
[[0.23333333 0.25 0. ]
[0.33333333 0.75 0.55 ]
[0.43333333 0. 0.45 ]]
```
By the way, the current unit test passes by chance; I suggest we replace it with another one such as above.
#### Bug source
The bug mainly comes from the fact that one of the inner computations is missing a division. I have cooked a corrected version of the function, which I will run by @caglayantuna to fine tune. We will make the correction PR asap.
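For reference, the standard sort-based Euclidean projection onto the simplex (Held et al. / Duchi et al.) reproduces the expected output above when applied column-wise. This is a NumPy sketch for validating expected values, not the tensorly patch itself:

```python
import numpy as np

def project_simplex_1d(v, z=1.0):
    """Project a 1-d vector v onto the simplex {x >= 0, sum(x) = z}."""
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    k = np.arange(1, v.size + 1)
    rho = np.nonzero(u * k > css - z)[0][-1]  # last index kept positive
    theta = (css[rho] - z) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

t = np.array([[0.4, 1.5, 1.0],
              [0.5, 2.0, 3.0],
              [0.6, 0.3, 2.9]])
proj = np.apply_along_axis(project_simplex_1d, 0, t)
print(proj.sum(axis=0))  # each column now sums to 1
```
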
| closed | 2022-01-07T14:24:33Z | 2022-02-11T19:08:38Z | https://github.com/tensorly/tensorly/issues/354 | [] | cohenjer | 1 |
jonaswinkler/paperless-ng | django | 368 | Re-order Saved Views | It's quite possible I am missing something, but is there a way to re-order saved views? | open | 2021-01-16T12:26:41Z | 2021-01-16T13:46:55Z | https://github.com/jonaswinkler/paperless-ng/issues/368 | [
"feature request"
] | argash | 2 |
vitalik/django-ninja | rest-api | 468 | How to unify exception returns | I want to unify the return value
like the following:
```
{
    "code": 200,
    "data": [],
    "message": "xxxx"
}
```
Every time I need to use try/except in the view function, and it feels very annoying.
| closed | 2022-06-10T08:00:19Z | 2022-07-02T15:27:20Z | https://github.com/vitalik/django-ninja/issues/468 | [] | zhiming429438709 | 1 |
svc-develop-team/so-vits-svc | deep-learning | 99 | [Help]: About the clustering model under logs/44k/ — is it a public model, or does it need to be trained myself? | ### Please check the confirmation boxes below.
- [X] I have carefully read [README.md](https://github.com/svc-develop-team/so-vits-svc/blob/4.0/README_zh_CN.md) and the [Quick solution](https://github.com/svc-develop-team/so-vits-svc/wiki/Quick-solution) in the wiki.
- [X] I have searched with various search engines; the question I am asking is not a common one.
- [X] I am not using a one-click package/environment package provided by a third-party user.
### System platform and version
win
### GPU model
11.7
### Python version
Python 3.8
### PyTorch version
2.0.0+cu117
### sovits branch
4.0 (default)
### Dataset source (used to judge dataset quality)
UVR
### The step where the problem occurred, or the command executed
python inference_main.py
### Problem description
There is a clustering file logs/44k/kmeans_10000.pt in the logs/44k folder. I'm not quite clear about it: does the clustering model need to be trained by myself, or is it some public data?
Although I can set the parameter to 0 to not use it, I'd still like to compare the results.
Could anyone please advise?
### Logs
```python
https://pastebin.com/i0TQqiMT
```
### Screenshot of the `so-vits-svc` and `logs/44k` folders, pasted here

### Additional notes
_No response_ | closed | 2023-03-28T11:28:41Z | 2023-04-01T09:08:15Z | https://github.com/svc-develop-team/so-vits-svc/issues/99 | [
"help wanted"
] | Hunter-857 | 2 |
scikit-hep/awkward | numpy | 3,105 | typing ak.Array for numba.cuda.jit signature | ### Version of Awkward Array
2.6.2
### Description and code to reproduce
Hey guys, I followed a hint from the discussion in [#696](https://github.com/scikit-hep/awkward/discussions/696#discussion-2571850) to type `ak.Array` for numba signatures. So I tried something like
```python
import awkward as ak
import numba as nb
from numba import types
cpu_arr_type = ak.Array([[[0, 1], [2, 3]], [[4, 5]]], backend='cpu').numba_type
@nb.njit(types.void(cpu_arr_type))
def cpu_kernel(arr):
do_something_with_arr
```
and this works like a charm.
However, I'm interested in the same case but with a cuda kernel. So I tried what appeared more natural to do:
```python
gpu_arr_type = ak.Array([[[0, 1], [2, 3]], [[4, 5]]], backend='cuda').numba_type
@nb.cuda.jit(types.void(gpu_arr_type), extensions=[ak.numba.cuda])
def cuda_kernel(arr):
do_something_with_arr
```
This time, I get the error:
```python
self = <awkward._connect.numba.arrayview_cuda.ArrayViewArgHandler object at 0x784afbc13fa0>
ty = ak.ArrayView(ak.ListArrayType(array(int64, 1d, C), ak.ListArrayType(array(int64, 1d, C), ak.NumpyArrayType(array(int64, 1d, C), {}), {}), {}), None, ())
val = <Array [[[4, 1], [2, -1]], [...], [[4, 0]]] type='3 * var * var * int64'>
stream = 0, retr = []
def prepare_args(self, ty, val, stream, retr):
if isinstance(val, ak.Array):
if isinstance(val.layout.backend, CupyBackend):
# Use uint64 for pos, start, stop, the array pointers values, and the pylookup value
tys = numba.types.UniTuple(numba.types.uint64, 5)
> start = val._numbaview.start
E AttributeError: 'NoneType' object has no attribute 'start'
.../site-packages/awkward/_connect/numba/arrayview_cuda.py:21: AttributeError
```
How should this latter case be correctly treated? Note that, without typing, the thing works as expected:
```python
@nb.cuda.jit(extensions=[ak.numba.cuda])
def cuda_kernel_no_typing(arr):
do_something_with_arr
```
However, I'm interested in `ak.Array`s with the 3D layout of integers (as above) and would like to take advantage of numba's eager compilation. I'm passing the `arr` for testing as
```python
backend = 'cpu' # or 'cuda'
arr = ak.to_backend(
ak.Array([
[[4, 1], [2, -1]],
[[0, -1], [1, 1], [3, -1]],
[[4, 0]]
]),
backend
)
```
Any help is appreciated!
| closed | 2024-05-06T19:37:21Z | 2024-05-21T04:06:01Z | https://github.com/scikit-hep/awkward/issues/3105 | [
"bug"
] | essoca | 11 |
stitchfix/hamilton | numpy | 92 | Extract columns executes functions twice (!!) | Short description explaining the high-level reason for the new issue.
From discord conversation.
## Current behavior
This test fails:
```python
def test_extract_columns_executed_once():
dr = hamilton.driver.Driver({}, tests.resources.extract_columns_execution_count)
unique_id = uuid.uuid4()
dr.execute(['col_1', 'col_2', 'col_3'], inputs={'unique_id': unique_id})
assert len(tests.resources.extract_columns_execution_count.outputs) == 1 # It should only be called once
```
## Expected behavior
It should succeed -- E.G. it should only be called once.
**Steps to replicate behavior**
1. See unit tests for attached PR | closed | 2022-03-22T16:23:51Z | 2022-03-24T00:00:35Z | https://github.com/stitchfix/hamilton/issues/92 | [
"triage"
] | elijahbenizzy | 4 |
slackapi/python-slack-sdk | asyncio | 785 | Modal update issue | (Describe your issue and goal here)
Hey guys, having an issue here creating a workflow using views... the goal is super simple:
1. Triggering a view open, which is working fine
2. After getting the response I use views_update to update the modal with new content
3. Send the response and process
Step 2 (function B below) updates the view but closes the modal immediately, so the user does not have time to select from the new view. This happens if I return a 200; if I comment out that line (return 200) I can see the view was updated properly, but I get a 500 from the backend, which makes sense because the function is not returning a valid response.
I guess I'm missing something here, but I'm not sure what; any ideas on this?
# A
```
@app.route("/slack/open_modal", methods=["POST"])
def show_gilada():
    trigger_id = request.form["trigger_id"]
    view = slack_client.views_open(
        trigger_id=trigger_id,
        view={
            "callback_id": "modal_service",
            "title": {
                "type": "plain_text",
                "text": "Promote to Prod"
            },
            "submit": {
                "type": "plain_text",
                "text": "Select Artifact"
            },
            "blocks": [
                {
                    "type": "input",
                    "block_id": "select_service",
                    "element": {
                        "type": "static_select",
                        "action_id": "select",
                        "placeholder": {
                            "type": "plain_text",
                            "text": "Select an item",
                            "emoji": True
                        },
                        "options": dropdown(services)
                    },
                    "label": {
                        "type": "plain_text",
                        "text": "Select service to promote",
                        "emoji": True
                    }
                }
            ],
            "type": "modal"
        }
    )
    return make_response("", 200)
```
# B
```
@app.route("/slack/message_actions", methods=["POST"])
def message_actions():
    payload = json.loads(request.form["payload"])
    view_id = payload["view"]["id"]
    service = payload["view"]["state"]["values"]["select_service"]["select"]["selected_option"]["text"]["text"]
    dropdown_artifacts_list = dropdown(artifact_list(service))
    view = slack_client.views_update(
        view_id=view_id,
        view={
            "callback_id": "modal_artifact",
            "title": {
                "type": "plain_text",
                "text": "Promote to Prod"
            },
            "submit": {
                "type": "plain_text",
                "text": "Submit"
            },
            "blocks": [
                {
                    "type": "input",
                    "block_id": "select_artifact",
                    "element": {
                        "type": "static_select",
                        "action_id": "select",
                        "placeholder": {
                            "type": "plain_text",
                            "text": "Select an item",
                            "emoji": True
                        },
                        "options": dropdown_artifacts_list
                    },
                    "label": {
                        "type": "plain_text",
                        "text": "Select service to promote",
                        "emoji": True
                    },
                    "label": {
                        "type": "plain_text",
                        "text": "Select the artifact",
                        "emoji": True
                    }
                }
            ],
            "type": "modal"
        }
    )
    return make_response("", 200)  # getting a 500 if I comment this line, which makes sense, but if not, the function closes the updated modal
```
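For context, here is a hedged sketch (not the reporter's code) of the pattern that avoids the close: when the request being handled is the `view_submission` itself, Slack keeps the modal open and swaps the view if the HTTP ack body carries `response_action: "update"`, so no separate `views_update` call is needed.

```python
# Hypothetical sketch: build the JSON body Slack expects as the ack of a
# view_submission when the modal should stay open with a replacement view.
def build_update_ack(new_view):
    return {"response_action": "update", "view": new_view}

ack = build_update_ack({"type": "modal", "callback_id": "modal_artifact"})
# In a Flask handler this would be returned as: jsonify(ack), 200
```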
#### Python runtime version
Python 3.8.5
### Actual result:
```
TypeError: The view function did not return a valid response. The function either returned None or ended without a return statement.
127.0.0.1 - - [27/Aug/2020 11:50:33] "POST /slack/message_actions HTTP/1.1" 500 -
INFO:werkzeug:127.0.0.1 - - [27/Aug/2020 11:50:33] "POST /slack/message_actions HTTP/1.1" 500 -
``` | closed | 2020-08-27T18:36:40Z | 2020-08-28T03:17:41Z | https://github.com/slackapi/python-slack-sdk/issues/785 | [
"Version: 2x",
"question",
"web-client"
] | nikoe14 | 2 |
aio-libs-abandoned/aioredis-py | asyncio | 634 | Inconsistency between a Sentinel pool and a normal connection pool | First off, thank you for all the work on this package, it is great!
I noticed some (for me) unexpected behaviour when I needed to use the Sentinel connection pool. In my situation I needed a master connection pool through the sentinel pool, but the connections returned from this behave differently than connections returned by the redis connection pool:
With a redis pool
```
pool = await aioredis.create_redis_pool("redis://localhost:6379", db=0, loop=loop)
with await pool as connection:
print(connection)
connection.xpending(..)
<ContextRedis <RedisConnection [db:0]>>
<class 'aioredis.commands.ContextRedis'>
```
With a sentinel pool
```
sentinel_pool = await aioredis.sentinel.create_sentinel_pool([('localhost', 26379)], db=0, loop=loop)
pool = sentinel_pool.master_for('my_master')
with await pool as connection:
print(connection)
connection.xpending(..)
<RedisConnection [db:0]>
<class 'aioredis.connection.RedisConnection'>
AttributeError: 'RedisConnection' object has no attribute 'xpending'
```
I expected a connection returned by the master pool to also be wrapped in the `Redis` interface class.
Wrapping `connection` in `aioredis.commands.Redis` solves the issue. | closed | 2019-09-10T06:51:12Z | 2021-03-18T23:55:33Z | https://github.com/aio-libs-abandoned/aioredis-py/issues/634 | [
"need investigation",
"resolved-via-latest"
] | mattiassluis | 0 |
benlubas/molten-nvim | jupyter | 154 | [Feature Request] Allow Wezterm to be an image provider | Please include:
- I wonder how you would feel about adding "Wezterm" as an image provider. I feel like on Windows there may be something here that you could add that would allow wezterm to open a temporary split, display the image output, then close on the next key press (or not, depending on how the command is set up).
- I got the idea from the M.PreviewImage via [image_preview.nvim](https://github.com/adelarsq/image_preview.nvim/blob/main/lua/image_preview/init.lua)
- Initial testing from my windows based neovim setup looks promising.
- I think this might be something I could work on contributing but I don't know if there is a temporary image path that is created by the image output cells that would work with wezterm's imgcat command. Please see my images below.


| closed | 2024-03-01T05:05:33Z | 2024-03-30T16:54:23Z | https://github.com/benlubas/molten-nvim/issues/154 | [
"enhancement"
] | akthe-at | 20 |
thtrieu/darkflow | tensorflow | 763 | What is the default optimizer used in training? | Hello, I'm new to computer vision tasks.
I have a question: what is the default optimizer used when "--trainer" is not set for training on a dataset? Is it the Adam optimizer, RMSprop, or something else?
Thanks in advance
| closed | 2018-05-16T07:42:06Z | 2018-05-17T16:03:15Z | https://github.com/thtrieu/darkflow/issues/763 | [] | andikira | 2 |
freqtrade/freqtrade | python | 11,473 | Remove ta-lib dependency | ta-lib installation is a pain in some Linux distributions.
Is it really required by core freqtrade? Is seems to me that strategies require ta-lib, not the core.
If so, I would suggest removing the requirement for ta-lib. My strategy doesn't have a use for it.
Folks whose strategy requires ta-lib should take care of installing it.
Thank you in advance | closed | 2025-03-06T18:18:12Z | 2025-03-06T19:04:02Z | https://github.com/freqtrade/freqtrade/issues/11473 | [
"Duplicate"
] | avibrazil | 1 |
modin-project/modin | data-science | 6,516 | HDK: test_dataframe.py is crashed if Calcite is disabled | ### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [X] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-master-branch).)
### Reproducible Example
```python
...
```
### Issue Description
Set the environment variable `MODIN_USE_CALCITE=False` and run the test.
### Expected Behavior
Test passes.
### Error Logs
<details>
```python-traceback
Replace this line with the error backtrace (if applicable).
```
</details>
### Installed Versions
<details>
Replace this line with the output of pd.show_versions()
</details>
| closed | 2023-08-28T13:28:07Z | 2023-08-29T14:52:19Z | https://github.com/modin-project/modin/issues/6516 | [
"bug 🦗",
"HDK"
] | AndreyPavlenko | 0 |
ultralytics/yolov5 | machine-learning | 12,731 | ValueError: not enough values to unpack (expected 4, got 3) | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I'm working with YoloV5. I hadn't run my project for nearly a month; I formatted my computer and reinstalled Conda, CUDA, pip libraries, etc. But when I run my project, it returns this:
```
Traceback (most recent call last):
File "C:\Users\Emir\Desktop\yolov5\main.py", line 39, in <module>
results = model(image)
File "C:\Users\Emir\anaconda3\envs\proje\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\Emir\anaconda3\envs\proje\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\Emir/.cache\torch\hub\ultralytics_yolov5_master\models\common.py", line 544, in forward
b, ch, h, w = im.shape # batch, channel, height, width
ValueError: not enough values to unpack (expected 4, got 3)
```
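A stdlib-only illustration of the unpack failure above (shapes hypothetical): the forward pass destructures a 4-D batched shape `(b, ch, h, w)`, while a single frame from `cap.read()` has only three dimensions.

```python
frame_shape = (480, 640, 3)            # h, w, channels -- a single OpenCV frame
try:
    b, ch, h, w = frame_shape          # same unpack as models/common.py line 544
except ValueError as exc:
    unpack_error = str(exc)            # "not enough values to unpack (expected 4, got 3)"

batched_shape = (1,) + frame_shape     # a leading batch axis would make it 4-D
```

Whether a raw HWC image is accepted normally depends on the AutoShape wrapper that `torch.hub.load` applies; if that wrapper were missing, a batched tensor would be expected instead.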
My Code :
```python
import time
import cv2
import torch
import socket
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
cap = cv2.VideoCapture(0)
while True:
    ret, image = cap.read()
    image = cv2.resize(image, (640, 480))
    results = model(image)
    objs = results.pandas().xyxy[0].to_dict(orient='records')
    print(len(objs))
``` | closed | 2024-02-13T17:20:26Z | 2024-10-20T19:39:36Z | https://github.com/ultralytics/yolov5/issues/12731 | [
"question",
"Stale"
] | Floodinatorr | 3 |
snarfed/granary | rest-api | 596 | Support for moderation activities | Hi! Looking quickly through the code, you don't appear to be handling converting between `Flag` activities (which are used for reporting an account and/or statuses) nor for BlueSky's / AT Proto's `com.atproto.moderation.createReport`
adding support for these would make this project a much better citizen on both platforms, as you'd be federating content moderation. Without this, bridges and similar software using this would be deemed "unmoderated" which is generally a reason for a specific domain to end up on a blocklist. | closed | 2023-09-26T05:09:38Z | 2024-04-26T21:03:13Z | https://github.com/snarfed/granary/issues/596 | [] | ThisIsMissEm | 8 |
sinaptik-ai/pandas-ai | data-science | 968 | SmartDataframe API Key Error Depending on Prompt | ### System Info
OS version: `macOS 14.2.1 (23C71)`
Python version: `3.11.8`
`pandasai` version: `1.5.20`
### 🐛 Describe the bug
When calling the chat method of `SmartDataframe`, it may throw the following error depending on the prompt:
```
"Unfortunately, I was not able to answer your question, because of the following error:\n\nError code: 401 - {'error': {'message': 'Incorrect API key provided: ***************************************. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}\n"
```
### Reproduction steps
Run this code:
```python
import pandas as pd
from pandasai import SmartDataframe
from pandasai.prompts import AbstractPrompt
class MyCustomPrompt(AbstractPrompt):
@property
def template(self):
return """
You are provided with a dataset that contains sales data by brand across various regions. Here's the metadata for the given pandas DataFrames:
{dataframes}
Given this data, please follow these steps by Yassin:
0. Acknowledge the user's query and provide context for the analysis.
1. **Data Analysis**: < custom instructions >
2. **Opportunity Identification**: < custom instructions >
3. **Reasoning**: < custom instructions >
4. **Recommendations**: < custom instructions >
5. **Output**: Return a dictionary with:
- type (possible values: "text", "number", "dataframe", "plot")
- value (can be a string, a dataframe, or the path of the plot, NOT a dictionary)
Example: {{ "type": "text", "value": < custom instructions > }}
``python
def analyze_data(dfs: list[pd.DataFrame]) -> dict:
# Code goes here (do not add comments)
# Declare a result variable
result = analyze_data(dfs)
``
Using the provided dataframes (`dfs`), update the Python code based on the user's query:
{conversation}
# Updated code:
# """
df = pd.DataFrame({
"brand": ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"],
"region": ["North America", "North America", "North America", "North America", "North America", "Europe", "Europe", "Europe", "Europe", "Europe"],
"sales": [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
})
sdf = SmartDataframe(df,
name="df",
config={
"custom_prompts": {
"generate_python_code": MyCustomPrompt()
}
})
```
This prompt will work:
```python
sdf.chat("What is the average sales by continent?")
```
This prompt will error:
```python
sdf.chat("what is the most popular brand")
```
Example:
<img width="689" alt="Screenshot 2024-02-28 at 10 15 57 PM" src="https://github.com/Sinaptik-AI/pandas-ai/assets/108594964/90f19f78-3e62-4ec8-b12d-ea9c0d01b119"> | closed | 2024-02-29T06:16:49Z | 2024-03-19T07:29:11Z | https://github.com/sinaptik-ai/pandas-ai/issues/968 | [] | yassinkortam | 1 |
dask/dask | numpy | 11,389 | mode on `axis=1` | The `mode` method in a `dask` `DataFrame` does not allow for the argument `axis=1`. It would be great to have since it seems that in `pandas`, that operation is very slow and seems straightforward to parallelize.
I would like to be able to do this in dask.
```
import pandas as pd
import numpy as np
import dask.dataframe as dd
np.random.seed(0)
N_ROWS = 1_000
df = pd.DataFrame({'a':np.random.randint(0, 100, N_ROWS),
'b':np.random.randint(0, 100, N_ROWS),
'c':np.random.randint(0, 100, N_ROWS)})
df['d'] = df['a'] #ensure mode is column 'a', unless b=c, then there are two modes
df.mode(axis=1)
```
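The reason this parallelizes well can be sketched with the stdlib alone (hypothetical helper, not dask API): each row's mode depends only on that row, so partitions can be processed independently.

```python
from collections import Counter

def row_modes(row):
    # All values that reach the maximum count, mirroring pandas' multi-mode output.
    counts = Counter(row)
    top = max(counts.values())
    return sorted(v for v, c in counts.items() if c == top)
```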
For reference, in pandas with `N_ROWS = 100_000`, the mode operation takes 20 seconds, and the time seems to grow linearly with number of observations. | open | 2024-09-16T14:55:33Z | 2025-03-10T01:51:04Z | https://github.com/dask/dask/issues/11389 | [
"dataframe",
"needs attention",
"enhancement"
] | marcdelabarrera | 4 |
slackapi/python-slack-sdk | asyncio | 1,358 | Is it possible to filter inbound user_status_change events to only be users who have your app installed? | Hello again! I'm developing a multi-workspace app where we're subscribing to [`user_status_changed`](https://api.slack.com/events/user_status_changed) events, but something I've noticed is it doesn't seem like there's a way to pre-filter these events based on "only users who have installed your app", which means I'm receiving many "irrelevant" status updates (basically: for everyone in the server).
I was wondering if there was a way to filter these _before_ they hit my app? Even before they hit my custom InstallationStore. Right now I've annotated the method with a middleware that decorates my "Installed User", so the app logic itself is secure enough, but I'm wondering if this will lead to a situation where once I've installed this into multiple workspaces, I'll be receiving and discarding potentially thousands of events per minute. Granted I can optimize some of my app logic (eg: cache hot queries, etc), but I was wondering if there's a way to preempt this need that I can't find in the documentation. All I can find is filtering on the app side with regex or type/subtype matching.
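Since the Events API offers no server-side filter here, the caching idea mentioned above can be sketched with the stdlib (the installation store is a hypothetical in-memory set):

```python
from functools import lru_cache

INSTALLED = {("T1", "U1")}             # hypothetical installation records

@lru_cache(maxsize=4096)
def is_installed(team_id, user_id):
    # One store lookup per (team, user); repeat events are served from cache.
    return (team_id, user_id) in INSTALLED
```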
### Reproducible in:
```python
def load_user(context: BoltContext, logger, next: Callable):
"""Slack Bolt middleware to assign the installed user model to "loaded_user_model" property on the `context` object"""
# look up user in DB
...
context["loaded_user_model"] = user
next()
@app.event("user_status_changed", middleware=[load_user])
def handle_user_status_changed_events(body, client: WebClient, context: BoltContext, logger, ack: Callable):
ack()
logger.debug(f"user_status_changed event: {body}")
if context.get("loaded_user_model") is None:
return
# do the cool stuff here
...
```
#### The Slack SDK version
`slack-bolt = "^1.16.2"`
#### Python runtime version
`Python 3.11.3`
| closed | 2023-04-25T17:50:00Z | 2023-04-28T07:08:10Z | https://github.com/slackapi/python-slack-sdk/issues/1358 | [
"question"
] | ryanandonian | 2 |
xuebinqin/U-2-Net | computer-vision | 273 | Are the label images in the dataset pure black-and-white binary images, or are they processed automatically in the dataloader? | Hello @xuebinqin, a quick question: should the label images in the dataset be pure black-and-white two-color images, or the original RGB images with white backgrounds, which are then processed automatically in the dataloader? | open | 2021-12-05T14:02:20Z | 2021-12-14T08:01:14Z | https://github.com/xuebinqin/U-2-Net/issues/273 | [] | Testhjf | 2 |
LAION-AI/Open-Assistant | machine-learning | 3,563 | Model training developer setup | I'm trying to set up the developer environment to run supervised fine-tuning.
When running `pip install -e ..` from this Readme https://github.com/LAION-AI/Open-Assistant/blob/main/model/model_training/README.md with `CUDA_HOME=/usr/local/cuda-11.4`, I get
```
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [16 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-n8yrafio/deepspeed_fe91fb641327472a9e0c07a08a62b4f2/setup.py", line 82, in <module>
cuda_major_ver, cuda_minor_ver = installed_cuda_version()
File "/tmp/pip-install-n8yrafio/deepspeed_fe91fb641327472a9e0c07a08a62b4f2/op_builder/builder.py", line 43, in installed_cuda_version
output = subprocess.check_output([cuda_home + "/bin/nvcc", "-V"], universal_newlines=True)
File "/private/home/theop123/miniconda3/envs/open-assistant/lib/python3.10/subprocess.py", line 421, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/private/home/theop123/miniconda3/envs/open-assistant/lib/python3.10/subprocess.py", line 503, in run
with Popen(*popenargs, **kwargs) as process:
File "/private/home/theop123/miniconda3/envs/open-assistant/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/private/home/theop123/miniconda3/envs/open-assistant/lib/python3.10/subprocess.py", line 1863, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/cuda-11.4/bin/nvcc'
[end of output]
```
Indeed `/usr/local/cuda-11.4` does not contain `nvcc`.
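For what it's worth, the check DeepSpeed's builder performs (per the traceback) can be reproduced with the stdlib; on the machine above the compiler is absent, which explains the `FileNotFoundError` even though the driver reports CUDA 11.4:

```python
import os

def nvcc_path(cuda_home):
    # DeepSpeed's op_builder shells out to this binary to read the CUDA version.
    return os.path.join(cuda_home, "bin", "nvcc")

probe = nvcc_path("/usr/local/cuda-11.4")
exists = os.path.exists(probe)         # False on the reporter's machine
```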
In a conda environment with Python 3.10 with PyTorch installed via:
```
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.3 -c pytorch
```
`nvidia-smi` gives
```
Driver Version: 470.141.03 CUDA Version: 11.4
```
`nvcc --version` gives
```
zsh: command not found: nvcc
``` | closed | 2023-07-11T17:15:43Z | 2023-07-12T18:29:23Z | https://github.com/LAION-AI/Open-Assistant/issues/3563 | [] | theophilegervet | 1 |
robotframework/robotframework | automation | 4,715 | Dynamic library API should validate argument names | With dynamic library API, keyword arguments do not need to follow the Python validation and it is possible to use arguments like `/arg`, `a/rg` or have arguments like `arg1 / arg2, / arg3`. All those are not valid in Python, but are not currently checked in dynamic library API. Should we start validating arguments names in dynamic library API? | open | 2023-04-02T16:28:46Z | 2023-12-20T00:38:16Z | https://github.com/robotframework/robotframework/issues/4715 | [] | aaltat | 0 |
ahmedfgad/GeneticAlgorithmPython | numpy | 130 | Inconsistent type of sol_idx in fitness_func() | Frequently sol_idx is of type None, where it should(?) be int.
My fix, which abandons the value:
`print("{:3d}".format(int(sol_idx or 777)))`
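For illustration, a hedged stdlib alternative that keeps real indices instead of discarding them (the sentinel is arbitrary):

```python
def safe_idx(sol_idx, sentinel=-1):
    # Keep the index when PyGAD provides one; fall back to a sentinel on None.
    return int(sol_idx) if sol_idx is not None else sentinel
```

Unlike `sol_idx or 777`, this also preserves a legitimate index of 0.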
Any recommendation on extracting the correct value in all cases? | closed | 2022-09-11T14:26:05Z | 2023-02-25T19:34:33Z | https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/130 | [
"question"
] | aoolmay | 1 |
dynaconf/dynaconf | django | 662 | [bug] Lazy validation fails with TypeError: __call__() takes 2 positional arguments but 3 were given | **Describe the bug**
Tried to introduce Lazy Validators as described in https://www.dynaconf.com/validation/#computed-values i.e.
```
from dynaconf.utils.parse_conf import empty, Lazy
Validator("FOO", default=Lazy(empty, formatter=my_function))
```
First bug (documentation): The above fails with
` ImportError: cannot import name 'empty' from 'dynaconf.utils.parse_conf'`
--> "empty" seems to be now in `dynaconf.utils.functional.empty`
Second bug:
Lazy Validator fails with
` TypeError: __call__() takes 2 positional arguments but 3 were given`
**To Reproduce**
Steps to reproduce the behavior:
```
from dynaconf.utils.parse_conf import Lazy
from dynaconf.utils.functional import empty
def lazy_foobar(s, v):
return "foobar"
from dynaconf import Dynaconf
s = Dynaconf(
validators=[Validator("FOOBAR", default=Lazy(empty, formatter=lazy_foobar)),],
)
print(s.FOOBAR)
```
stacktrace:
```
print(s.FOOBAR)
File "c:\ta\virtualenv\dconf\lib\site-packages\dynaconf\base.py", line 113, in __getattr__
self._setup()
File "c:\ta\virtualenv\dconf\lib\site-packages\dynaconf\base.py", line 164, in _setup
settings_module=settings_module, **self._kwargs
File "c:\ta\virtualenv\dconf\lib\site-packages\dynaconf\base.py", line 236, in __init__
only=self._validate_only, exclude=self._validate_exclude
File "c:\ta\virtualenv\dconf\lib\site-packages\dynaconf\validator.py", line 417, in validate
validator.validate(self.settings, only=only, exclude=exclude)
File "c:\ta\virtualenv\dconf\lib\site-packages\dynaconf\validator.py", line 198, in validate
settings, settings.current_env, only=only, exclude=exclude
File "c:\ta\virtualenv\dconf\lib\site-packages\dynaconf\validator.py", line 227, in _validate_items
if callable(self.default)
TypeError: __call__() takes 2 positional arguments but 3 were given
```
**Expected behavior**
Work as documented in https://www.dynaconf.com/validation/#computed-values
**Environment (please complete the following information):**
- OS: Windows 10 Pro
- Dynaconf Version: 3.1.7 (also 3.1.4)
- Frameworks in use: N/A
**Additional context**
Add any other context about the problem here.
| closed | 2021-10-04T13:54:00Z | 2021-10-30T08:52:50Z | https://github.com/dynaconf/dynaconf/issues/662 | [
"bug",
"hacktoberfest"
] | yahman72 | 7 |
scikit-image/scikit-image | computer-vision | 7,391 | test_unsharp_masking_output_type_and_shape fails on non-x86 architectures | ### Description:
When building the Debian package, the **test_unsharp_masking_output_type_and_shape** and **test_pyramid_dtype_support** tests fail for many non-x86 architectures because of invalid value runtime warnings:
```
_ test_unsharp_masking_output_type_and_shape[True--1.0--1.0-2.0-uint64-shape2-False] _
[…]
def test_unsharp_masking_output_type_and_shape(
radius, amount, shape, multichannel, dtype, offset, preserve
):
array = np.random.random(shape)
> array = ((array + offset) * 128).astype(dtype)
E RuntimeWarning: invalid value encountered in cast
skimage/filters/tests/test_unsharp_mask.py:41: RuntimeWarning
```
and
```
_____________ test_pyramid_dtype_support[pyramid_laplacian-uint8] ______________
pyramid_func = <function pyramid_laplacian at 0xffff78ab8a40>, dtype = 'uint8'
@pytest.mark.parametrize('dtype', ['float16', 'float32', 'float64', 'uint8', 'int64'])
@pytest.mark.parametrize(
'pyramid_func', [pyramids.pyramid_gaussian, pyramids.pyramid_laplacian]
)
def test_pyramid_dtype_support(pyramid_func, dtype):
> img = np.random.randn(32, 8).astype(dtype)
E RuntimeWarning: invalid value encountered in cast
skimage/transform/tests/test_pyramids.py:197: RuntimeWarning
```
While I can (and will) ignore the warnings locally for the build, it may be useful to handle this upstream.
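For intuition, the out-of-range value behind the warning can be shown with plain arithmetic (sample value hypothetical): with `offset = -1.0` the scaled result is negative, and casting a negative float to an unsigned dtype is not representable, which is exactly where NumPy's behaviour varies by architecture.

```python
sample = 0.25                         # np.random.random() yields values in [0, 1)
offset = -1.0
scaled = (sample + offset) * 128      # -96.0, out of range for uint64
```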
### Way to reproduce:
See above
### Version information:
```Shell
numpy 1.26.4
Python 3.11 and 3.12
Debian unstable
skimage 0.23.1
So far on arm64, armhf, ppc64el, s390x.
```
All build logs [here](https://buildd.debian.org/status/logs.php?pkg=skimage&ver=0.23.1-1&suite=sid).
| closed | 2024-04-11T19:13:09Z | 2024-04-17T10:43:13Z | https://github.com/scikit-image/scikit-image/issues/7391 | [
":wrench: type: Maintenance",
":computer: Arch specific"
] | olebole | 9 |
3b1b/manim | python | 1,367 | Installing Manim (manim-3b088b12843b7a4459fe71eba96b70edafb7aa78) | Help me, what's the problem? Installing Manim (manim-3b088b12843b7a4459fe71eba96b70edafb7aa78)
Running setup.py install for pycairo ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\Oscar\AppData\Local\Programs\Python\Python37\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Oscar\\AppData\\Local\\Temp\\pip-install-hw2i7_iq\\pycairo\\setup.py'"'"'; __file__='"'"'C:\\Users\\Oscar\\AppData\\Local\\Temp\\pip-install-hw2i7_iq\\pycairo\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Oscar\AppData\Local\Temp\pip-record-fh_7_84q\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\Oscar\AppData\Local\Programs\Python\Python37\Include\pycairo'
cwd: C:\Users\Oscar\AppData\Local\Temp\pip-install-hw2i7_iq\pycairo\
Complete output (12 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
creating build\lib.win-amd64-3.7\cairo
copying cairo\__init__.py -> build\lib.win-amd64-3.7\cairo
copying cairo\__init__.pyi -> build\lib.win-amd64-3.7\cairo
copying cairo\py.typed -> build\lib.win-amd64-3.7\cairo
running build_ext
building 'cairo._cairo' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
----------------------------------------
Rolling back uninstall of pycairo
Moving to c:\users\oscar\appdata\local\programs\python\python37\lib\site-packages\cairo\
from c:\users\oscar\appdata\local\programs\python\python37\lib\site-packages\~airo
Moving to c:\users\oscar\appdata\local\programs\python\python37\lib\site-packages\pycairo-1.20.0.dist-info\
from c:\users\oscar\appdata\local\programs\python\python37\lib\site-packages\~ycairo-1.20.0.dist-info
ERROR: Command errored out with exit status 1: 'C:\Users\Oscar\AppData\Local\Programs\Python\Python37\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Oscar\\AppData\\Local\\Temp\\pip-install-hw2i7_iq\\pycairo\\setup.py'"'"'; __file__='"'"'C:\\Users\\Oscar\\AppData\\Local\\Temp\\pip-install-hw2i7_iq\\pycairo\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Oscar\AppData\Local\Temp\pip-record-fh_7_84q\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\Oscar\AppData\Local\Programs\Python\Python37\Include\pycairo' Check the logs for full command output.
**pycairo version**: pycairo-1.20.0-cp37-cp37m-win_amd64
**SO version**Windows 10 x64
**manim version**: manim-3b088b12843b7a4459fe71eba96b70edafb7aa78
**python version**: 3.7.8
| closed | 2021-02-07T20:16:34Z | 2021-02-12T22:42:36Z | https://github.com/3b1b/manim/issues/1367 | [] | ayalaortiz | 0 |
autokey/autokey | automation | 865 | Pull-request template | ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Enhancement
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [ ] autokey-gtk
- [ ] autokey-qt
- [ ] beta
- [ ] bug
- [ ] critical
- [ ] development
- [ ] documentation
- [X] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [ ] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
_No response_
### Which Linux distribution did you use?
_No response_
### Which AutoKey GUI did you use?
None
### Which AutoKey version did you use?
_No response_
### How did you install AutoKey?
_No response_
### Can you briefly describe the issue?
GitHub offers a feature that allows you to create a pull-request template that will provide default text in the edit box when a pull request is created. Markdown is accepted when designing these, so anything that you could design in Markdown anywhere on GitHub will also work here (headings, lists, check-boxes, etc.).
An example of how this works can be seen in [this YouTube video](https://www.youtube.com/watch?v=MjsS4ujX3nU). It seems like it would be a good way to remind contributors to update the [CHANGELOG.rst](https://github.com/autokey/autokey/blob/develop/CHANGELOG.rst) file or check off boxes for specific tasks before creating the pull request, etc.
We would want to brainstorm about what we'd like to see in such a template, so feedback is welcome for this issue.
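As a hedged starting point for that brainstorm, a template lives at `.github/PULL_REQUEST_TEMPLATE.md` and might contain something like:

```markdown
## Summary

<!-- What does this PR change, and why? -->

## Checklist

- [ ] Updated `CHANGELOG.rst`
- [ ] Added or updated tests
- [ ] Added or updated documentation
```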
### Can the issue be reproduced?
None
### What are the steps to reproduce the issue?
_No response_
### What should have happened?
_No response_
### What actually happened?
_No response_
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_ | open | 2023-05-19T18:58:28Z | 2023-05-19T19:09:35Z | https://github.com/autokey/autokey/issues/865 | [
"enhancement",
"development"
] | Elliria | 0 |
ray-project/ray | pytorch | 51,165 | [telemetry] Importing Ray Tune in an actor reports Ray Train usage | See this test case: https://github.com/ray-project/ray/pull/51161/files#diff-d1dc38a41dc1f9ba3c2aa2d9451217729a6f245ff3af29e4308ffe461213de0aR22 | closed | 2025-03-07T17:38:07Z | 2025-03-17T17:56:38Z | https://github.com/ray-project/ray/issues/51165 | [
"P1",
"tune"
] | edoakes | 0 |
huggingface/datasets | pandas | 6,980 | Support NumPy 2.0 | ### Feature request
Support NumPy 2.0.
### Motivation
NumPy introduces the Array API, which bridges the gap between machine learning libraries. Many clients of HuggingFace are eager to start using the Array API.
Besides that, NumPy 2 provides a cleaner interface than NumPy 1.
### Tasks
NumPy 2.0 was released for testing so that libraries could ensure compatibility [since mid-March](https://github.com/numpy/numpy/issues/24300#issuecomment-1986815755). What needs to be done for HuggingFace to support Numpy 2?
- [x] Fix use of `array`: https://github.com/huggingface/datasets/pull/6976
- [ ] Remove [NumPy version limit](https://github.com/huggingface/datasets/pull/6975): https://github.com/huggingface/datasets/pull/6991 | closed | 2024-06-18T23:30:22Z | 2024-07-12T12:04:54Z | https://github.com/huggingface/datasets/issues/6980 | [
"enhancement"
] | NeilGirdhar | 0 |
2noise/ChatTTS | python | 657 | [Bug/unclear requirements] UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED | Can any project dev share their cuDNN version and their torch config?
```bash
python -m torch.utils.collect_env
```
it keeps crashing
```bash
\site-packages\torch\nn\modules\conv.py:306: UserWarning: Plan failed with a cudnnException: CUDNN_BACKEND_EXECUTION_PLAN_DESCRIPTOR: cudnnFinalize Descriptor Failed cudnn_status: CUDNN_STATUS_NOT_SUPPORTED (Triggered internally at ..\aten\src\ATen\native\cudnn\Conv_v8.cpp:919.)
return F.conv1d(input, weight, bias, self.stride,
Traceback (most recent call last):
File "............\Chattts_adv.py", line 34, in <module>
torchaudio.save("self_introduction_output.wav", torch.from_numpy(audio_array_en[0]), 24000)
File ",.........\Lib\site-packages\torchaudio\_backend\utils.py", line 313, in save
return backend.save(
^^^^^^^^^^^^^
File ",...................\Lib\site-packages\torchaudio\_backend\soundfile.py", line 44, in save
soundfile_backend.save(
File ",,,,,,,,,,,,,,,,,,,,,,,,,,,\Lib\site-packages\torchaudio\_backend\soundfile_backend.py", line 427, in save
raise ValueError(f"Expected 2D Tensor, got {src.ndim}D.")
ValueError: Expected 2D Tensor, got 1D.
``` | open | 2024-08-01T15:16:01Z | 2024-08-03T03:41:08Z | https://github.com/2noise/ChatTTS/issues/657 | [
"documentation",
"help wanted"
] | hgftrdw45ud67is8o89 | 3 |
lazyprogrammer/machine_learning_examples | data-science | 90 | Required LLM section | Hey! Your repository is well maintained. I noticed that there is no section for LLMs, and I would love to contribute to that section. If you think it's a good idea, we can discuss what to include and what not.
Best regards,
Rafay | closed | 2024-01-02T18:56:57Z | 2025-02-19T23:09:24Z | https://github.com/lazyprogrammer/machine_learning_examples/issues/90 | [] | rafaym1 | 1 |
randyzwitch/streamlit-folium | streamlit | 177 | automatic scroll on the web page when I click on the map icon to center the map | Is there a way to solve this problem? I don't want the web page to move when I click on the icon. | open | 2024-04-11T09:38:54Z | 2024-05-29T20:10:08Z | https://github.com/randyzwitch/streamlit-folium/issues/177 | [
"question"
] | nicolas3010 | 0 |
ranaroussi/yfinance | pandas | 1,786 | Start/end arguments error | When using `start='2023-12-01', end='2023-12-13'`, a `symbol may be delisted` error comes up, but with `period='1mo'` no problems are found. Is it just me, or is anyone else seeing this type of error? | closed | 2023-12-13T22:42:55Z | 2023-12-13T22:49:16Z | https://github.com/ranaroussi/yfinance/issues/1786 | [] | Onerafaz | 2 |
OFA-Sys/Chinese-CLIP | computer-vision | 223 | Some questions encountered while finetuning the model | I'm a beginner in this field and currently have some questions about the results while following the tutorial to finetune. Hoping someone experienced can offer guidance.
### Background:
Classification of roughly 40-50 toy characters
### Planned approach:
Compute normalized similarity between the image and ~50 text labels, take the max, and identify which toy it is
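The intended max-over-labels step can be sketched with the stdlib (scores and label names hypothetical; softmax over per-label similarities, then argmax):

```python
import math

def argmax_label(scores, labels):
    # Softmax-normalize similarity scores, then pick the most probable label.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(labels)), key=probs.__getitem__)
    return labels[best], probs[best]

label, prob = argmax_label([0.2, 1.5, 0.3], ["toy_a", "toy_b", "toy_c"])
```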
### Training dataset:
30-50 images per label * 48 labels ≈ 1700 images (image backgrounds are basically the same)
### Configuration and parameters:
Single GPU with 32 GB of VRAM
```
#!/usr/bin/env
# Guide:
# This script supports distributed training on multi-gpu workers (as well as single-worker training).
# Please set the options below according to the comments.
# For multi-gpu workers training, these options should be manually set for each worker.
# After setting the options, please run the script on each worker.
# Command: bash run_scripts/muge_finetune_vit-b-16_rbt-base.sh ${DATAPATH}
# Number of GPUs per GPU worker
GPUS_PER_NODE=1
# Number of GPU workers, for single-worker training, please set to 1
WORKER_CNT=1
# The ip address of the rank-0 worker, for single-worker training, please set to localhost
export MASTER_ADDR=localhost
# The port for communication
export MASTER_PORT=8514
# The rank of this worker, should be in {0, ..., WORKER_CNT-1}, for single-worker training, please set to 0
export RANK=0
export PYTHONPATH=${PYTHONPATH}:`pwd`/cn_clip/
DATAPATH=${1}
# data options
train_data=${DATAPATH}/datasets/**/lmdb/train
val_data=${DATAPATH}/datasets/**/lmdb/valid # if val_data is not specified, the validation will be automatically disabled
# restore options
resume=${DATAPATH}/pretrained_weights/clip_cn_vit-b-16.pt # or specify your customed ckpt path to resume
reset_data_offset="--reset-data-offset"
reset_optimizer="--reset-optimizer"
# reset_optimizer=""
# output options
output_base_dir=${DATAPATH}/experiments/
name=muge_finetune_vit-b-16_roberta-base_bs128_1gpu_22
save_step_frequency=999999 # disable it
save_epoch_frequency=100
log_interval=1
report_training_batch_acc="--report-training-batch-acc"
# report_training_batch_acc=""
# training hyper-params
context_length=52
warmup=100
batch_size=150
valid_batch_size=150
accum_freq=1
lr=15e-5
wd=0.001
max_epochs=800 # or you can alternatively specify --max-steps
valid_step_interval=999999
valid_epoch_interval=999999
vision_model=ViT-B-16
text_model=RoBERTa-wwm-ext-base-chinese
use_augment="--use-augment"
# use_augment=""
python3 -m torch.distributed.launch --use_env --nproc_per_node=${GPUS_PER_NODE} --nnodes=${WORKER_CNT} --node_rank=${RANK} \
--master_addr=${MASTER_ADDR} --master_port=${MASTER_PORT} cn_clip/training/main.py \
--train-data=${train_data} \
--val-data=${val_data} \
--resume=${resume} \
${reset_data_offset} \
${reset_optimizer} \
--logs=${output_base_dir} \
--name=${name} \
--save-step-frequency=${save_step_frequency} \
--save-epoch-frequency=${save_epoch_frequency} \
--log-interval=${log_interval} \
${report_training_batch_acc} \
--context-length=${context_length} \
--warmup=${warmup} \
--batch-size=${batch_size} \
--valid-batch-size=${valid_batch_size} \
--valid-step-interval=${valid_step_interval} \
--valid-epoch-interval=${valid_epoch_interval} \
--accum-freq=${accum_freq} \
--lr=${lr} \
--wd=${wd} \
--max-epochs=${max_epochs} \
--vision-model=${vision_model} \
${use_augment} \
--text-model=${text_model} \
--grad-checkpointing
```
Training log:
The model appears to have converged in this range:
```
2023-10-23,12:10:54 | INFO | Rank 0 | Global Steps: 7969/9600 | Train Epoch: 665 [150/1800 (8%)] | Loss: 1.408282 | Image2Text Acc: 29.33 | Text2Image Acc: 26.00 | Data Time: 12.606s | Batch Time: 13.296s | LR: 0.000011 | logit_scale: 2.761 | Global Batch Size: 150
2023-10-23,12:10:56 | INFO | Rank 0 | Global Steps: 7970/9600 | Train Epoch: 665 [300/1800 (17%)] | Loss: 1.338785 | Image2Text Acc: 32.00 | Text2Image Acc: 30.67 | Data Time: 1.709s | Batch Time: 2.382s | LR: 0.000011 | logit_scale: 2.761 | Global Batch Size: 150
2023-10-23,12:10:57 | INFO | Rank 0 | Global Steps: 7971/9600 | Train Epoch: 665 [450/1800 (25%)] | Loss: 1.400602 | Image2Text Acc: 26.67 | Text2Image Acc: 26.00 | Data Time: 0.056s | Batch Time: 0.729s | LR: 0.000011 | logit_scale: 2.761 | Global Batch Size: 150
2023-10-23,12:10:57 | INFO | Rank 0 | Global Steps: 7972/9600 | Train Epoch: 665 [600/1800 (33%)] | Loss: 1.333308 | Image2Text Acc: 28.00 | Text2Image Acc: 29.33 | Data Time: 0.053s | Batch Time: 0.726s | LR: 0.000011 | logit_scale: 2.761 | Global Batch Size: 150
2023-10-23,12:11:05 | INFO | Rank 0 | Global Steps: 7973/9600 | Train Epoch: 665 [750/1800 (42%)] | Loss: 1.274247 | Image2Text Acc: 34.67 | Text2Image Acc: 32.67 | Data Time: 7.118s | Batch Time: 7.790s | LR: 0.000011 | logit_scale: 2.761 | Global Batch Size: 150
2023-10-23,12:11:08 | INFO | Rank 0 | Global Steps: 7974/9600 | Train Epoch: 665 [900/1800 (50%)] | Loss: 1.272779 | Image2Text Acc: 32.67 | Text2Image Acc: 28.67 | Data Time: 2.264s | Batch Time: 2.937s | LR: 0.000011 | logit_scale: 2.761 | Global Batch Size: 150
2023-10-23,12:11:09 | INFO | Rank 0 | Global Steps: 7975/9600 | Train Epoch: 665 [1050/1800 (58%)] | Loss: 1.371383 | Image2Text Acc: 32.00 | Text2Image Acc: 32.00 | Data Time: 0.053s | Batch Time: 0.725s | LR: 0.000011 | logit_scale: 2.761 | Global Batch Size: 150
2023-10-23,12:11:10 | INFO | Rank 0 | Global Steps: 7976/9600 | Train Epoch: 665 [1200/1800 (67%)] | Loss: 1.302166 | Image2Text Acc: 35.33 | Text2Image Acc: 26.00 | Data Time: 0.052s | Batch Time: 0.725s | LR: 0.000011 | logit_scale: 2.761 | Global Batch Size: 150
2023-10-23,12:11:21 | INFO | Rank 0 | Global Steps: 7977/9600 | Train Epoch: 665 [1350/1800 (75%)] | Loss: 1.261887 | Image2Text Acc: 32.00 | Text2Image Acc: 34.00 | Data Time: 11.000s | Batch Time: 11.673s | LR: 0.000011 | logit_scale: 2.761 | Global Batch Size: 150
2023-10-23,12:11:22 | INFO | Rank 0 | Global Steps: 7978/9600 | Train Epoch: 665 [1500/1800 (83%)] | Loss: 1.318105 | Image2Text Acc: 30.67 | Text2Image Acc: 31.33 | Data Time: 0.056s | Batch Time: 0.728s | LR: 0.000011 | logit_scale: 2.761 | Global Batch Size: 150
2023-10-23,12:11:23 | INFO | Rank 0 | Global Steps: 7979/9600 | Train Epoch: 665 [1650/1800 (92%)] | Loss: 1.307291 | Image2Text Acc: 34.00 | Text2Image Acc: 33.33 | Data Time: 0.055s | Batch Time: 0.733s | LR: 0.000011 | logit_scale: 2.761 | Global Batch Size: 150
2023-10-23,12:11:24 | INFO | Rank 0 | Global Steps: 7980/9600 | Train Epoch: 665 [1800/1800 (100%)] | Loss: 1.278893 | Image2Text Acc: 30.67 | Text2Image Acc: 32.00 | Data Time: 0.051s | Batch Time: 0.723s | LR: 0.000011 | logit_scale: 2.761 | Global Batch Size: 150
2023-10-23,12:11:24 | INFO | Rank 0 | train LMDB file contains 1769 images and 1769 pairs.
2023-10-23,12:11:24 | INFO | Rank 0 | val LMDB file contains 611 images and 611 pairs.
2023-10-23,12:11:39 | INFO | Rank 0 | Saved checkpoint /root/autodl-tmp/data/experiments/muge_finetune_vit-b-16_roberta-base_bs128_1gpu_22/checkpoints/epoch_latest.pt (epoch 665 @ 7980 steps) (writing took 14.912322759628296 seconds)
```
Since I didn't enable validation, there is no validation loss to report.
### Current status
Finally, I manually tested a few samples from the training set; the normalized max score between text and image is basically above 99. But on other photos, some are recognized correctly, some are not, and some are even misclassified (the normalized max score for an incorrect text-image pair also reaches 99). Generalization is rather poor.
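For reference, the intended normalize-and-take-the-max classification over label embeddings can be sketched generically with random vectors as stand-ins (this is just the math, not Chinese-CLIP's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-ins for real features: 1 image embedding, 48 label text embeddings
image_feat = rng.normal(size=512)
text_feats = rng.normal(size=(48, 512))

# L2-normalize, as CLIP-style models do before computing similarity
image_feat = image_feat / np.linalg.norm(image_feat)
text_feats = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)

# cosine similarities scaled by the learned logit scale, then softmax
logit_scale = 100.0
logits = logit_scale * (text_feats @ image_feat)
probs = np.exp(logits - logits.max())
probs = probs / probs.sum()

pred = int(np.argmax(probs))  # index of the predicted toy label
```

A near-certain max score (≈99) on training images combined with confident mistakes on unseen images is a classic overfitting symptom, which fits the situation described above.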
### How to improve?
1. Is the model currently underfitting or overfitting?
2. Does the dataset need improvement?
3. Do the hyperparameters need tuning?
4. How should I adjust things to ensure good model performance (strong generalization and accurate classification)?
5. Or is it my overall approach that needs rethinking?
These are the problems I've run into. I'd really appreciate any guidance from those with experience; thank you very much. | closed | 2023-10-24T18:17:27Z | 2024-09-27T01:36:38Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/223 | [] | HuangZiy | 3 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 533 | [BUG] Douyin's fetch_user_post_videos endpoint cannot retrieve posts in the images + live photos format | ***On which platform does the error occur?***
Douyin
***At which endpoint does the error occur?***
/api/douyin/web/fetch_user_post_videos
When fetching a user's post list through this endpoint, there is a problem: posts where the user published several images plus several live photos are not returned in the list. However, if you request the download endpoint (/api/download) with the post's individual link, the images can be downloaded.
Here is a Douyin post in the images + live photos format:
https://v.douyin.com/iyMJU25K/
***Have you read this project's README or API documentation?***
Yes, and I am quite sure the problem is caused by the program.
| closed | 2025-01-02T06:12:30Z | 2025-02-17T08:52:47Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/533 | [
"BUG"
] | yushuyong | 3 |
airtai/faststream | asyncio | 1,724 | Remove `docs/api` directory before running create_api_docs python script | closed | 2024-08-23T16:06:24Z | 2024-08-26T11:32:48Z | https://github.com/airtai/faststream/issues/1724 | [
"documentation"
] | kumaranvpl | 1 | |
BeanieODM/beanie | pydantic | 29 | Typing error is shown with get method of Document subclasses | I am working through the cocktail api tutorial on https://developer.mongodb.com/article/beanie-odm-fastapi-cocktails/. Basically, everything works fine, but I get a strange typing error from Pylance in VSCode. I am not certain if I should be worried about it or not.
For this function...
```python
async def get_cocktail(cocktail_id: PydanticObjectId) -> Cocktail:
"""Helper function to look up a cocktail by id"""
cocktail = await Cocktail.get(cocktail_id)
if cocktail is None:
raise HTTPException(status_code=404, detail="Cocktail not found")
return cocktail
```
... I get the following warning.

From my point of view PyLance highlights a typing problem. `Cocktail.get` returns a Document and not the subclass Cocktail.
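If it is indeed an annotation issue, one pattern that would make `get` subclass-aware is a `TypeVar` bound to the base class (a hypothetical sketch, not Beanie's actual implementation):

```python
import asyncio
from typing import Optional, Type, TypeVar

T = TypeVar("T", bound="Document")

class Document:
    @classmethod
    async def get(cls: Type[T], document_id) -> Optional[T]:
        # real database lookup elided; would return an instance of cls or None
        return None

class Cocktail(Document):
    pass

# With this annotation, Cocktail.get(...) is typed as Optional[Cocktail],
# so assigning the result to a Cocktail-typed variable no longer warns.
result = asyncio.run(Cocktail.get("some-id"))
```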
So, as I am just 30 minutes into the whole Beanie adventure, I might be entirely wrong. Or PyLance has an issue here? | closed | 2021-05-13T16:41:53Z | 2021-05-14T08:58:24Z | https://github.com/BeanieODM/beanie/issues/29 | [] | oliverandrich | 4 |
modelscope/modelscope | nlp | 864 | refine error report when model id is wrong using snapshot_download | Thanks for your error report and we appreciate it a lot.
**Checklist**
* I have searched the tutorial on modelscope [doc-site](https://modelscope.cn/docs)
* I have searched related issues but cannot get the expected help.
* The bug has not been fixed in the latest version.
**Describe the bug**
https://www.modelscope.cn/models/LLM-Research/Meta-Llama-3-8B/summary
This model cannot be downloaded...
modelscope.snapshot_download('meta-llama/Meta-Llama-3-8B')
Traceback (most recent call last):
File "/home/jiejing.zjj/miniconda3/envs/py38/lib/python3.8/site-packages/modelscope/hub/errors.py", line 91, in handle_http_response
response.raise_for_status()
File "/home/jiejing.zjj/miniconda3/envs/py38/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://www.modelscope.cn/api/v1/models/meta-llama/Meta-Llama-3-8B/revisions
**To Reproduce**
* What command or script did you run?
> A placeholder for the command.
* Did you make any modifications on the code or config? Did you understand what you have modified?
* What dataset did you use?
**Your Environments (__required__)**
* OS: `uname -a`
* CPU: `lscpu`
* Commit id (e.g. `a3ffc7d8`)
* You may add addition that may be helpful for locating the problem, such as
* How you installed PyTorch [e.g., pip, conda, source]
* Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)
Please @ corresponding people according to your problem:
Model hub related: @liuyhwangyh | closed | 2024-05-22T13:30:28Z | 2024-05-28T06:38:21Z | https://github.com/modelscope/modelscope/issues/864 | [] | wenmengzhou | 0 |
miguelgrinberg/python-socketio | asyncio | 1,259 | Need help with this reconnecting issue |
**Describe the bug**
The client disconnects when the event handler runs too long.
**To Reproduce**
Steps to reproduce the behavior:
1. Just receive the packet and wait; after it finishes, the client disconnects and reconnects.
**Expected behavior**
just staying connected
**Logs**
polling connection accepted with {'sid': '', 'upgrades': ['websocket'], 'pingInterval': 10000, 'pingTimeout': 5000, 'maxPayload': 1000000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://sockets.streamlabs.com/socket.io/?token=&transport=websocket&EIO=4
WebSocket upgrade was successful
Received packet MESSAGE data 0{"sid":"odGVo5njFBv05EZ7XxSR"}
Namespace / is connected
Prijunkta Prie StreamLabs Serverio
Received packet PING data
Sending packet PONG data
Received packet PING data
Sending packet PONG data
Received packet MESSAGE data 2["event",{"type":"donation","message":[{"name":"NeLagina","isTest":true,"formatted_amount":"€63.00","amount":63,"message":"This is a test donation for €63.00.","currency":"EUR","to":{"name":"NeLagina"},"from":"John","from_user_id":1,"_id":"9ae0477155d4ec1b8a04fa1def674eeb","priority":10}],"for":"streamlabs","event_id":"evt_65a09ead3ac38d2b00b876490f1789ea"}]
Received event "event" [/]
GAUTAS DONATIONAS ar kaskas tai lauk 10 sekundžiu
John Paukojo Šešiasdešimt Trys eurai
Žinute This is a test donation for €63.00.
Received packet PING data
Sending packet PONG data
packet queue is empty, aborting
Exiting write loop task
Server sent unexpected packet 8 data 0, aborting
Waiting for write loop task to end
Engine.IO connection dropped
Atsijunget Nuo StreamLabs Serverio
Connection failed, new attempt in 0.97 seconds
Exiting read loop task
Attempting polling connection to https://sockets.streamlabs.com/socket.io/?token=&transport=polling&EIO=4
Polling connection accepted with {'sid': '9Es4eQh6Fx3cuXubXxTj', 'upgrades': ['websocket'], 'pingInterval': 10000, 'pingTimeout': 5000, 'maxPayload': 1000000}
Engine.IO connection established
Sending packet MESSAGE data 0{}
Attempting WebSocket upgrade to wss://sockets.streamlabs.com/socket.io/?token=&transport=websocket&EIO=4
WebSocket upgrade was successful
Received packet MESSAGE data 0{"sid":"3t2Gi_WYL7ZwWmeZXxTl"}
Namespace / is connected
Prijunkta Prie StreamLabs Serverio
Reconnection successful
**Additional context**
Add any other context about the problem here.
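In case it helps: one common cause of this pattern (an assumption, since the handler code isn't shown) is a long-running event handler blocking the client's ping/pong heartbeat, so the server drops the connection. Offloading the slow work keeps the handler fast:

```python
import threading
import time

def long_work(data):
    time.sleep(2)  # stand-in for the slow donation-handling logic

def on_event(data):
    # return quickly so the socket client can still answer the server's PINGs;
    # the slow part runs in a background thread instead
    threading.Thread(target=long_work, args=(data,), daemon=True).start()

start = time.monotonic()
on_event({"type": "donation"})
elapsed = time.monotonic() - start  # the handler itself returns almost instantly
```

python-socketio also provides `start_background_task()` on the client object for the same purpose.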
| closed | 2023-10-24T18:59:55Z | 2023-10-24T19:22:47Z | https://github.com/miguelgrinberg/python-socketio/issues/1259 | [] | NeLagina | 0 |
vllm-project/vllm | pytorch | 15,371 | [Bug]: Error occurred in v1/rerank interface after upgrading from version 0.7.3 to 0.8.1 | ### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
Your output of `python collect_env.py` here
```
</details>
### 🐛 Describe the bug
Start using docker compose, which is the content of the docker compose file
```
x-vllm-common:
&common
image: vllm/vllm-openai:latest
restart: unless-stopped
environment:
VLLM_USE_MODELSCOPE: True
HF_ENDPOINT: https://hf-mirror.com
TZ: "Asia/Shanghai"
volumes:
- /root/.cache/modelscope/hub:/models # Please modify this to the actual model directory.
networks:
- vllm
services:
nginx:
image: nginx:latest
restart: unless-stopped
ports:
- "6090:80"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf # Mount the nginx configuration file.
networks:
- vllm
depends_on:
- reranker
reranker:
<<: *common
deploy:
resources:
reservations:
devices:
- driver: nvidia
capabilities: [ gpu ]
count: all
command: [ "--model","/models/AI-ModelScope/bge-reranker-v2-m3", "--host", "0.0.0.0", "--port", "5000", "--tensor-parallel-size", "2", "--task", "score", "--served-model-name", "bge-reranker-v2-m3", "--trust-remote-code"]
networks:
vllm:
```
An error is reported when accessing the /v1/rerank endpoint:
```
INFO: 172.25.0.3:36030 - "POST /v1/rerank HTTP/1.0" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/opt/venv/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 409, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/opt/venv/lib/python3.12/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/opt/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/opt/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/opt/venv/lib/python3.12/site-packages/starlette/middleware/cors.py", line 85, in __call__
await self.app(scope, receive, send)
File "/opt/venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/opt/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/opt/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/opt/venv/lib/python3.12/site-packages/starlette/routing.py", line 714, in __call__
await self.middleware_stack(scope, receive, send)
File "/opt/venv/lib/python3.12/site-packages/starlette/routing.py", line 734, in app
await route.handle(scope, receive, send)
File "/opt/venv/lib/python3.12/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/opt/venv/lib/python3.12/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/opt/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/opt/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/opt/venv/lib/python3.12/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/entrypoints/utils.py", line 58, in wrapper
return handler_task.result()
^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 638, in do_rerank_v1
return await do_rerank(request, raw_request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/vllm/entrypoints/utils.py", line 49, in wrapper
handler_task = asyncio.create_task(handler_func(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: do_rerank() missing 1 required keyword-only argument: 'raw_request'
```
What to do?
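For what it's worth, here is a minimal, hypothetical reproduction of how this class of TypeError arises when a wrapper forwards arguments without the keyword-only parameter (illustrative only, not vLLM's actual code):

```python
import asyncio
import functools

def with_task(handler_func):
    @functools.wraps(handler_func)
    async def wrapper(*args, **kwargs):
        # bug sketch: keyword arguments are dropped when the task is built
        return await asyncio.ensure_future(handler_func(*args))
    return wrapper

@with_task
async def do_rerank(request, *, raw_request):
    return "ok"

async def main():
    try:
        return await do_rerank({"query": "x"}, raw_request=object())
    except TypeError as exc:
        return str(exc)

msg = asyncio.run(main())
# msg contains: "missing 1 required keyword-only argument: 'raw_request'"
```

The traceback suggests the handler and its caller disagree about how `raw_request` is passed, which points at a server-side regression in 0.8.1 rather than a configuration problem on your end.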
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-24T03:46:18Z | 2025-03-24T08:06:08Z | https://github.com/vllm-project/vllm/issues/15371 | [
"bug"
] | xermaor | 2 |
ipython/ipython | data-science | 14,641 | IPython parsing 0_* in Python 3.12 | After typing something like `0_z` and Enter, IPython shows the continuation prompt and the cursor: it expects more! But what? Example:
```
Python 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.31.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: 0_z
...:
...:
...: |
```
The vanilla Python prompt, as expected, gives a SyntaxError: invalid decimal literal
It appears that Python 3.12 triggers this IPython problem, as [3.11 works fine](https://stackoverflow.com/questions/79336132/what-does-ipython-expect-after-an-underscore-in-a-numeric-literal?noredirect=1#comment139903154_79336132).
Is this some parser bug perhaps?
This is especially cumbersome with filenames where an underscore follows a digit, e.g.:
```
In [1]: run 3_rag_agent.py
...:
...:
...: |
```
| open | 2025-01-07T15:54:20Z | 2025-01-13T18:31:28Z | https://github.com/ipython/ipython/issues/14641 | [] | mdruiter | 2 |
davidteather/TikTok-Api | api | 1,069 | [BUG] - Whole Tiktok api broken ? | Dear all,
I tried the trending API and got an exception on this
playwright._impl._api_types.Error: TypeError: Cannot read properties of undefined (reading 'frontierSign')
investigating the code it is located here
File "/Users/XXXXXXXXX/anaconda3/lib/python3.11/site-packages/TikTokApi/api/trending.py", line 43, in videos
resp = await Trending.parent.make_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A call to a URL no longer works; the URL is
https://www.tiktok.com/api/recommend/item_list/
and when accessing it directly, I get a response with status 10201:
{"log_pb":{"impr_id":"xxxxxxxxxxxxxxxxxxxx"},"statusCode":10201,"statusMsg":"","status_code":10201,"status_msg":""}
Has the old API been shut down?
Does TikTokApi perhaps need to be adapted to a login flow?
Thanks
| open | 2023-09-24T16:06:24Z | 2023-12-22T20:43:19Z | https://github.com/davidteather/TikTok-Api/issues/1069 | [
"bug"
] | cyph-zz | 5 |
ludwig-ai/ludwig | data-science | 3,368 | Cannot import all functions and classes under ludwig.datasets.* | **Describe the bug**
I cannot use `DatasetConfig` class because `ludwig/datasets/__init__.py` has a custom `__getattr__()` function which then only exposes a selected few functions and datasets. In particular, I cannot run `from ludwig.datasets.dataset_config import DatasetConfig`. Is there another way to access this class?
**To Reproduce**
Steps to reproduce the behavior:
Run this:
```python
from pathlib import Path
import yaml
from ludwig.datasets.dataset_config import DatasetConfig
config_path = Path("./mnist.yml")
with open(config_path) as f:
config = DatasetConfig.from_dict(yaml.safe_load(f))
```
The `mnist.yml` file can be downloaded from [here](https://github.com/ludwig-ai/ludwig/blob/master/ludwig/datasets/configs/mnist.yaml), and put into the same directory.
Error:
```
Traceback (most recent call last):
File "some/path/run.py", line 5, in <module>
from ludwig.datasets.dataset_config import DatasetConfig
ModuleNotFoundError: No module named 'ludwig.datasets.dataset_config'
```
**Expected behavior**
I would like to be able to import `DatasetConfig` in order to explore how to create custom datasets.
**Environment (please complete the following information):**
- OS: Ubuntu 22.04.2 LTS
- Python version 3.10.10
- Ludwig version 0.5.5
| closed | 2023-04-27T09:08:24Z | 2023-04-28T16:06:27Z | https://github.com/ludwig-ai/ludwig/issues/3368 | [] | Rassibassi | 3 |