| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ARM-DOE/pyart | data-visualization | 1,250 | Replace RSL Functionality | Since there are a variety of issues with the RSL package, and it is no longer supported by NASA (see #1249), we should work to replace its functionality in Py-ART, including:
- [x] FourDD Dealiasing
- [x] Sigmet Reader | closed | 2022-08-24T13:41:21Z | 2024-12-16T22:14:40Z | https://github.com/ARM-DOE/pyart/issues/1250 | [
"component: pyart.io",
"component: pyart.correct",
"Moderate"
] | mgrover1 | 2 |
aio-libs/aiomysql | asyncio | 307 | SAConnection example error | Please fix
``` python
await conn.commit()
```
SAConnection doesn't have a commit method. It should be changed to, for example,
``` python
await conn.execute("commit")
```
https://github.com/aio-libs/aiomysql/blob/master/examples/example_simple_sa.py#L33
Best regards,
Alex. | closed | 2018-07-03T14:26:57Z | 2018-07-03T18:30:11Z | https://github.com/aio-libs/aiomysql/issues/307 | [] | IdeoG | 2 |
sngyai/Sequoia | pandas | 47 | Is there a tutorial? | closed | 2023-02-27T07:53:34Z | 2025-03-05T09:49:39Z | https://github.com/sngyai/Sequoia/issues/47 | [] | bitspring | 1 | |
Sanster/IOPaint | pytorch | 259 | [BUG] zoom out | When the photo's processing steps are finished, or when I want to compare the before and after, the view zooms out and shows the whole image | closed | 2023-04-02T01:20:57Z | 2023-04-02T01:24:46Z | https://github.com/Sanster/IOPaint/issues/259 | [] | kingal2000 | 0 |
akfamily/akshare | data-science | 5,899 | The stock_zh_a_daily() interface fails to download data for most stocks; only a few stocks download correctly | Important prerequisites
Already updated to the latest version
How to submit an issue
Detailed problem description
Calling the stock_zh_a_daily() function in a loop to download daily data for a series of stocks: the first dozen or so stocks are fetched correctly, but the stocks after them return empty data.
Check the operating system version; only 64-bit mainstream operating systems are supported (satisfied)
Check the Python version; only versions 3.9 and above are supported (using 3.12)
Confirm the AKShare version; upgraded to the latest version and reproduced the problem (latest version)
Name of the interface involved and the calling code (run inside a loop over stock codes):
```python
try:
    stock_zh_a_daily_hfq_df = ak.stock_zh_a_daily(code, start_date=start_date, end_date=end_date, adjust="qfq")
except Exception:
    print(f"{code} data does not exist!")
    continue
# Check whether the DataFrame is empty
if stock_zh_a_daily_hfq_df.empty:
    print(f"{code} data is empty!")
    continue
```
Screenshot or description of the interface error:
sh601916 data updated!
sh600938 data updated!
sh688041 data is empty!
sh600372 data is empty!
sh688223 data is empty!
sh600875 data is empty!
sz000983 data is empty!
sz000617 data is empty!
sh601699 data is empty!
...
Expected correct result
With the same code two weeks ago, every download run returned correct daily data for every stock; over the past 10 days, only a few stocks' data can be fetched, while most stocks return empty data. | closed | 2025-03-16T00:26:20Z | 2025-03-16T03:42:28Z | https://github.com/akfamily/akshare/issues/5899 | [
"bug"
] | gaoshanlee193 | 3 |
yeongpin/cursor-free-vip | automation | 225 | [Bug]: On the Mac version, the script reports success, but Cursor then shows "the installation package is damaged" and cannot be opened | ### Pre-submission checklist
- [x] I understand that Issues are for reporting and solving problems, not a comments section, and I will provide as much information as possible to help resolve the issue.
- [x] I have read the pinned Issues and searched the existing [open Issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed Issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and found no similar problem.
- [x] I have written a short and clear title so that developers can quickly identify the general problem when scanning the issue list, rather than "a suggestion", "stuck", etc.
### Platform
macOS ARM64
### Version
1.7.06
### Error description
The script runs to completion without errors, but opening Cursor afterwards shows "Cursor is damaged".
Machine: Mac arm64
Cursor: Version: 0.45.17
### Relevant log output
```shell
==================================================
🔄 Cursor Machine ID Reset Tool
==================================================
ℹ️ Checking configuration file...
📄 Reading current configuration...
ℹ️ Backup file already exists, skipping backup step
🔄 Generating new machine IDs...
ℹ️ Backup created
✅ Update successful
📄 Saving new configuration to JSON...
ℹ️ Updating SQLite database...
ℹ️ Updating key-value pair: telemetry.devDeviceId
ℹ️ Updating key-value pair: telemetry.macMachineId
ℹ️ Updating key-value pair: telemetry.machineId
ℹ️ Updating key-value pair: telemetry.sqmId
ℹ️ Updating key-value pair: storage.serviceMachineId
✅ SQLite database updated successfully
ℹ️ Updating system IDs...
✅ System IDs updated successfully
ℹ️ Reading package.json /Applications/Cursor.app/Contents/Resources/app/package.json
ℹ️ Found version: 0.44.11
ℹ️ Cursor version too low: 0.44.11 < 0.45.0
ℹ️ Cursor version < 0.45.0, skipping getMachineId patch
✅ Machine ID reset successful
New machine IDs:
ℹ️ telemetry.devDeviceId: 8a9b179d-7855-49f7-bf65-8be220636c69
ℹ️ telemetry.macMachineId: d9e90a681a20c9de82fa2205326448faf397edc88fe29f58f57cd336a9f9561ddcc96a4d43912c04b5b4bcf7fa18f464630341b6d61e356d7f6e961f4fd4b85d
ℹ️ telemetry.machineId: bfe57f238a8db6b2449ba26b8bc5784cad789eb92658a364ce6d4da7a35606d7
ℹ️ telemetry.sqmId: {7AB34D9B-7909-44B0-A318-28EDE71D16C0}
ℹ️ storage.serviceMachineId: 8a9b179d-7855-49f7-bf65-8be220636c69
```
### Additional information
_No response_ | closed | 2025-03-14T04:11:54Z | 2025-03-16T03:38:35Z | https://github.com/yeongpin/cursor-free-vip/issues/225 | [
"bug"
] | somesky | 3 |
aws/aws-sdk-pandas | pandas | 2,865 | Augment dataframes with metadata from the origin file | Right now, a third-party process saves JSON files into an S3 bucket for us. The filename looks like `<prefix>-<iso_datetime>.json.gz`, and each file is the output of an endpoint that produces time-value pairs. For example, a file might look like:
```json
[{"time": "2024-06-21T00:00:00Z", "value": 1.0}, ...]
```
We're loading these files in via `wr.s3.read_json`, and ideally we want to be able to take the latest updated value for any particular time. Or, in pandas terminology:
```python
we_want = df_dataset.groupby("fileModifiedTime").last()
```
I don't think this is possible right now, because there is no way to get information from either the filename or the file metadata using the `wr.s3.read_json` method.
If we move away from awswrangler, we can do something like this using `pyarrow.dataset`, as the dataset has a `files` attribute and you can call `pyarrow_dataset.filesystem.get_file_info(pyarrow_dataset.files)`.
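A minimal sketch of that pyarrow-based approach (the bucket path is a placeholder, and this assumes a recent pyarrow version with JSON dataset support):
```python
import pyarrow.dataset as ds

# Build a dataset over the bucket; "s3://my-bucket/prefix/" is a placeholder.
dataset = ds.dataset("s3://my-bucket/prefix/", format="json")

# Map each file to its modification time via the dataset's filesystem.
infos = dataset.filesystem.get_file_info(dataset.files)
mtimes = {info.path: info.mtime for info in infos}
```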
**Describe the solution you'd like**
It would be great if there was some way to augment the returned dataframe with metadata coming from the files that were loaded, such as the LastModifiedTime. | closed | 2024-06-21T04:15:21Z | 2024-07-22T08:02:05Z | https://github.com/aws/aws-sdk-pandas/issues/2865 | [
"enhancement"
] | Samreay | 1 |
Kanaries/pygwalker | matplotlib | 638 | Support for Pygwalker Data Visualizations in `marimo` | **Is your feature request related to a problem? Please describe.**
When attempting to use pygwalker within marimo (a Python notebook framework), I encountered an issue where marimo was unable to display the pygwalker visualization. Specifically, I received the error message:
```
Unsupported mimetype: application/vnd.jupyter.widget-view+json
```

This prevents users from utilizing pygwalker's data visualization capabilities within marimo notebooks.
**Describe the solution you'd like**
I would like pygwalker to implement support for marimo by adding either a `__repr_html__` or `__mime__` method to the `pygwalker.api.pygwalker.PygWalker` class. This would allow marimo to properly render pygwalker visualizations, as described in the [marimo documentation for displaying objects](https://docs.marimo.io/guides/integrating_with_marimo/displaying_objects.html).
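A minimal sketch of what such a hook could look like (the method name follows the IPython/marimo convention, and `render_html()` is a hypothetical renderer, not pygwalker's actual API):
```python
class PygWalker:
    def render_html(self) -> str:
        # Hypothetical: serialize the walker UI to an HTML string.
        ...

    def _repr_html_(self) -> str:
        # Lets marimo (and other notebook frontends) render the widget.
        return self.render_html()
```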
**Describe alternatives you've considered**
I initially tried using pygwalker with marimo following the standard instructions provided in the pygwalker repository, similar to how it's used in Jupyter notebooks. However, this approach resulted in the aforementioned error.
**Additional context**
This feature request originated from an attempt to integrate pygwalker with marimo, as documented in [marimo issue #2486](https://github.com/marimo-team/marimo/issues/2486). It was suggested there that I file this feature request with pygwalker to implement the necessary compatibility methods.
Implementing this feature would greatly enhance the usability of pygwalker across different Python notebook environments, particularly benefiting users of marimo who wish to use pygwalker's data visualization capabilities. | closed | 2024-10-03T14:42:42Z | 2024-10-31T02:30:20Z | https://github.com/Kanaries/pygwalker/issues/638 | [
"enhancement",
"P1"
] | Haleshot | 18 |
zappa/Zappa | django | 456 | [Migrated] Check config | Originally from: https://github.com/Miserlou/Zappa/issues/1215 by [daphee](https://github.com/daphee)
## Description
This adds a new function `verify_settings` to `cli.py` that is called when settings are loaded.
It goes through the settings dict and compares all used keys with a whitelist in `valid_settings.py`. For anything that isn't included in the whitelist a warning is printed including the valid setting with the closest [Levenshtein-Distance](https://en.wikipedia.org/wiki/Levenshtein_distance).
I got the list of config options from [there](https://github.com/daphee/Zappa/tree/check-config#advanced-settings). I also looked through `load_settings` and checked that at least all options that were loaded there were included in the whitelist.
I am not sure how I'd write a test for that. While I guess this isn't very critical or complicated code, it did get a little messier than I initially thought.
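For illustration, a minimal sketch of the verification logic described above (`VALID_SETTINGS` and the warning format are assumptions, and `difflib`'s similarity ratio stands in for a true Levenshtein distance):
```python
import difflib

VALID_SETTINGS = {"app_function", "aws_region", "s3_bucket"}  # abbreviated example

def verify_settings(settings: dict) -> None:
    for key in settings:
        if key not in VALID_SETTINGS:
            closest = difflib.get_close_matches(key, VALID_SETTINGS, n=1)
            hint = f" Did you mean '{closest[0]}'?" if closest else ""
            print(f"Warning: unknown setting '{key}'.{hint}")
```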
## GitHub Issues
Suggested in issue #1165.
| closed | 2021-02-20T08:35:06Z | 2022-07-16T07:32:12Z | https://github.com/zappa/Zappa/issues/456 | [] | jneves | 1 |
keras-team/keras | tensorflow | 20,542 | model.fit - class_weight broken | It seems `argmax` returns dtype=int64 in the true branch, while int32 is returned in the false branch.
https://github.com/keras-team/keras/blob/a503a162fc5b4120a96a1f7203a1de841f0601e2/keras/src/trainers/data_adapters/tf_dataset_adapter.py#L129-L133
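For context, a standalone illustration of the constraint behind the traceback (not the Keras code itself): both branches of `tf.cond` must return the same dtype, and `tf.argmax` defaults to int64.
```python
import tensorflow as tf

y = tf.random.uniform((2048, 4))
y_classes = tf.cond(
    tf.shape(y)[-1] > 1,
    lambda: tf.argmax(y, axis=-1),                   # int64 by default
    lambda: tf.cast(tf.reshape(y, [-1]), tf.int64),  # must be cast to match
)
```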
Stacktrace:
```Python traceback
Traceback (most recent call last):
File "/home/example/workspace/fir/trainer/train.py", line 122, in <module>
model.fit(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 113, in error_handler
return fn(*args, **kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/backend/tensorflow/trainer.py", line 282, in fit
epoch_iterator = TFEpochIterator(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/backend/tensorflow/trainer.py", line 664, in __init__
super().__init__(*args, **kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/trainers/epoch_iterator.py", line 64, in __init__
self.data_adapter = data_adapters.get_data_adapter(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/trainers/data_adapters/__init__.py", line 56, in get_data_adapter
return TFDatasetAdapter(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/trainers/data_adapters/tf_dataset_adapter.py", line 30, in __init__
dataset = dataset.map(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/dataset_ops.py", line 2341, in map
return map_op._map_v2(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/map_op.py", line 43, in _map_v2
return _MapDataset(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/map_op.py", line 157, in __init__
self._map_func = structured_function.StructuredFunctionWrapper(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/structured_function.py", line 265, in __init__
self._function = fn_factory()
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 1251, in get_concrete_function
concrete = self._get_concrete_function_garbage_collected(*args, **kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 1221, in _get_concrete_function_garbage_collected
self._initialize(args, kwargs, add_initializers_to=initializers)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 696, in _initialize
self._concrete_variable_creation_fn = tracing_compilation.trace_function(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 178, in trace_function
concrete_function = _maybe_define_function(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 283, in _maybe_define_function
concrete_function = _create_concrete_function(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/tracing_compilation.py", line 310, in _create_concrete_function
traced_func_graph = func_graph_module.func_graph_from_py_func(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/framework/func_graph.py", line 1059, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/eager/polymorphic_function/polymorphic_function.py", line 599, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/structured_function.py", line 231, in wrapped_fn
ret = wrapper_helper(*args)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/data/ops/structured_function.py", line 161, in wrapper_helper
ret = autograph.tf_convert(self._func, ag_ctx)(*nested_args)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 690, in wrapper
return converted_call(f, args, kwargs, options=options)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 377, in converted_call
return _call_unconverted(f, args, kwargs, options)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 459, in _call_unconverted
return f(*args, **kwargs)
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/keras/src/trainers/data_adapters/tf_dataset_adapter.py", line 129, in class_weights_map_fn
y_classes = tf.__internal__.smart_cond.smart_cond(
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/framework/smart_cond.py", line 57, in smart_cond
return cond.cond(pred, true_fn=true_fn, false_fn=false_fn,
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/util/traceback_utils.py", line 153, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/example/.local/share/virtualenvs/trainer-gT8lgKB3/lib/python3.10/site-packages/tensorflow/python/ops/cond_v2.py", line 880, in error
raise TypeError(
TypeError: true_fn and false_fn arguments to tf.cond must have the same number, type, and overall structure of return values.
true_fn output: Tensor("cond/Identity:0", shape=(2048,), dtype=int64)
false_fn output: Tensor("cond/Identity:0", shape=(2048,), dtype=int32)
Error details:
Tensor("cond/Identity:0", shape=(2048,), dtype=int64) and Tensor("cond/Identity:0", shape=(2048,), dtype=int32) have different types
``` | closed | 2024-11-23T21:29:58Z | 2024-12-27T02:01:47Z | https://github.com/keras-team/keras/issues/20542 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | GICodeWarrior | 4 |
JaidedAI/EasyOCR | machine-learning | 994 | Train function error | Trying to fine-tune a model, but it raises "ImportError: cannot import name 'train' from 'train' (/usr/local/lib/python3.9/dist-packages/train/__init__.py)". Where does this function come from? I checked the https://train.readthedocs.io/en/latest/?badge=latest docs, but there is no info about it. Nor is it in the imported "train" library.
Am I missing something? It seems that everyone else has no issue with it. | closed | 2023-04-21T12:24:48Z | 2023-04-27T06:25:47Z | https://github.com/JaidedAI/EasyOCR/issues/994 | [] | StiflerDante | 0 |
plotly/dash | plotly | 2,334 | provide selectedData/MultiSelect for pie chart | **Is your feature request related to a problem? Please describe.**
I'm always frustrated when I try to filter a dashboard using a pie chart: only one slice can be selected at a time. Unlike bar charts and other visualizations, where unselected data is automatically darkened, with a pie chart you need to implement this yourself using callbacks. If I want to select more than one slice, I have to save the selected data in a dcc.Store or a div, and on every click check whether data is already saved in the store and which data it is; if the clicked slice is not already there, add it, otherwise remove it, and then rebuild and restyle the pie chart from scratch. A sketch of this workaround appears below.
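A rough sketch of that workaround, for reference (the component ids and app layout are hypothetical):
```python
from dash import Dash, Input, Output, State
from dash.exceptions import PreventUpdate

# Assumes a layout with a dcc.Graph(id="pie-chart") and dcc.Store(id="selected-slices").
app = Dash(__name__)

@app.callback(
    Output("selected-slices", "data"),
    Input("pie-chart", "clickData"),
    State("selected-slices", "data"),
)
def toggle_slice(click_data, selected):
    if click_data is None:
        raise PreventUpdate
    selected = selected or []
    label = click_data["points"][0]["label"]
    # Toggle membership: deselect if already selected, otherwise select.
    if label in selected:
        selected.remove(label)
    else:
        selected.append(label)
    return selected  # a second callback then rebuilds/restyles the figure
```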
**Describe the solution you'd like**
Implement selectedData for the pie chart. Because it's a pie chart, instead of the box select used in bar charts and other charts, it can be driven by click events: clicking a slice selects it if it is unselected, and deselects it if it is already selected.
When a slice is selected, all unselected slices are greyed out, and selectedData is updated with the data of all the selected slices.
**Describe alternatives you've considered**
I described a partial workaround in the problem section above.
**Additional context**
This feature exists in every BI tool out there; it's an important, even critical, feature for creating interactive dashboards, so I'm really hoping it can be implemented!
Thank you, Plotly team, for the hard work!
| open | 2022-11-23T07:42:21Z | 2024-08-13T19:23:01Z | https://github.com/plotly/dash/issues/2334 | [
"feature",
"P3"
] | Matan-Morduch | 1 |
TencentARC/GFPGAN | pytorch | 39 | RuntimeError:"Distributed package doesn't have NCCL" ??? | How do I train a custom model under Windows 10 with miniconda?
Inference works great, but when I try to start a custom training, only errors come up.
The latest RTX/Quadro driver, the NVIDIA CUDA Toolkit 11.3 + cuDNN for CUDA 11.3, and the MS VS Build Tools are installed.
My Miniconda Env:

Training:
python -m torch.distributed.launch --nproc_per_node=4 --master_port=22021 gfpgan\train.py -opt c:\GFPGAN\options\test.yml --launcher pytorch
[Train_Error.txt](https://github.com/TencentARC/GFPGAN/files/6958052/Train_Error.txt)
| open | 2021-08-10T00:04:13Z | 2021-08-13T12:17:31Z | https://github.com/TencentARC/GFPGAN/issues/39 | [] | ghost | 3 |
man-group/notebooker | jupyter | 62 | Create a view of all report results divided by report name | And perhaps subdivided by parameters | closed | 2021-11-04T14:41:55Z | 2022-05-05T16:29:21Z | https://github.com/man-group/notebooker/issues/62 | [
"enhancement"
] | jonbannister | 1 |
ansible/awx | automation | 15,247 | AWX not able to delete the worker pods after finished running | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
We have recently upgraded AWX from version 22.5.0 to 23.9.0, deployed on EKS version 1.28.
After the AWX upgrade, we observed that a few jobs (not all jobs), running on worker pods specific to inventory sync, are not getting deleted even after the job workflow is completed. The pods stay around for hours and days until we delete them manually. I don't see any other errors.
**The worker pod status is shown below:**
NAME READY STATUS RESTARTS AGE
automation-job-462026-6zf7c 1/2 NotReady 0 3m23s
**The errors captured from the AWX control-plane EE logs for the worker pods that are not getting deleted:**
Error deleting pod automation-job-462026-6zf7c: client rate limiter Wait returned an error: context canceled
Context was canceled while reading logs for pod awx-workers/automation-job-462026-6zf7c. Assuming pod has finished
**The pod status description shows** (confidential data omitted):
Containers:
worker:
State: Terminated
Reason: Completed
Exit Code: 0
Ready: False
Restart Count: 0
authenticator:
State: Running
Ready: True
Restart Count: 0
The automation-job-462026-6zf7c pod contains two containers: worker and authenticator.
When the pod is stuck, we can see that the worker container has terminated while the authenticator container keeps running. This is what we see in the worker container and authenticator container:
[worker-container.txt](https://github.com/user-attachments/files/15535882/worker-container.txt)
[authenticator-container.txt](https://github.com/user-attachments/files/15535870/authenticator-container.txt)
For now we are testing this in a non-production environment; currently it is a blocker for upgrading production. Please have a look and provide a fix, or suggest the best AWX version to use if this is a known issue.
### AWX version
23.9.0
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [X] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
_No response_
### Steps to reproduce
Run many AWX jobs based on the pod that contains worker and authenticator images.(we observed mainly on Inventory sync jobs)
### Expected results
AWX deletes all the pods that finished running.
### Actual results
AWX Worker pods got stuck
### Additional information
_No response_ | open | 2024-06-03T15:18:54Z | 2025-03-23T23:16:11Z | https://github.com/ansible/awx/issues/15247 | [
"type:bug",
"component:api",
"needs_triage",
"community"
] | chinna44 | 7 |
Anjok07/ultimatevocalremovergui | pytorch | 1,430 | Any idea what this error is? | Last Error Received:
Process: MDX-Net
Missing file error raised. Please address the error and try again.
If this error persists, please contact the developers with the error details.
Raw Error Details:
FileNotFoundError: "[Errno 2] No such file or directory: 'ffprobe'"
Traceback Error: "
File "UVR.py", line 6584, in process_start
File "separate.py", line 487, in seperate
File "separate.py", line 354, in final_process
File "separate.py", line 418, in write_audio
File "separate.py", line 391, in save_with_message
File "separate.py", line 365, in save_audio_file
File "separate.py", line 1288, in save_format
File "pydub/audio_segment.py", line 808, in from_wav
File "pydub/audio_segment.py", line 728, in from_file
File "pydub/utils.py", line 274, in mediainfo_json
File "subprocess.py", line 1026, in __init__
File "subprocess.py", line 1950, in _execute_child
"
Error Time Stamp [2024-06-27 15:02:38]
Full Application Settings:
vr_model: 1_HP-UVR
aggression_setting: 50
window_size: 1024
mdx_segment_size: 320
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: v4 | htdemucs
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Inst HQ 3
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: True
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: MP3
wav_type_set: 32-bit Float
cuda_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems | open | 2024-06-27T22:05:12Z | 2024-06-27T22:05:12Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1430 | [] | NathanWolfxx | 0 |
ARM-DOE/pyart | data-visualization | 717 | Investigating behavior of masked arrays | closed | 2018-03-07T19:48:49Z | 2020-03-26T20:23:19Z | https://github.com/ARM-DOE/pyart/issues/717 | [] | mhpicel | 1 | |
flairNLP/flair | nlp | 2,951 | How to set which GPU is used | I found that the default is 'cuda:0'; now I want to use a different graphics card. How do I set it?

| closed | 2022-09-30T01:56:18Z | 2022-09-30T01:57:10Z | https://github.com/flairNLP/flair/issues/2951 | [
"question"
] | yaoysyao | 0 |
wkentaro/labelme | deep-learning | 323 | no sudo privilege in docker env | First, thanks for the great work.
I tried to build a docker image from the Dockerfile you provided. Everything is good, but the developer user does not have su privileges, so I cannot install my own packages in the container. A password is required when asking for su privileges, and then it just shows:
su: Authentication failure | closed | 2019-02-18T08:09:24Z | 2019-02-21T10:16:07Z | https://github.com/wkentaro/labelme/issues/323 | [] | cissoidx | 4 |
tensorflow/tensor2tensor | deep-learning | 1,219 | Error Querying Server: Requested more than 0 entries, but params is empty. | Trying to serve my Chinese to English model and am having trouble querying. I am receiving an error:
```
(test) root@ubuntu-c-8-16gib-sfo2-01:~/T2T_Model# t2t-query-server --server=0.0.0.0:9000 --servable_name=transformer --problem=translate_enzh_wmt32k_rev --data_dir=/root/T2T_Model/t2t_data --inputs_once='Hello my name is John.'
Traceback (most recent call last):
File "/usr/local/bin/t2t-query-server", line 17, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/usr/local/bin/t2t-query-server", line 12, in main
query.main(argv)
File "/usr/local/lib/python2.7/dist-packages/tensor2tensor/serving/query.py", line 89, in main
outputs = serving_utils.predict([inputs], problem, request_fn)
File "/usr/local/lib/python2.7/dist-packages/tensor2tensor/serving/serving_utils.py", line 157, in predict
predictions = request_fn(examples)
File "/usr/local/lib/python2.7/dist-packages/tensor2tensor/serving/serving_utils.py", line 113, in _make_grpc_request
response = stub.Predict(request, timeout_secs)
File "/usr/local/lib/python2.7/dist-packages/grpc/_channel.py", line 533, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python2.7/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Requested more than 0 entries, but params is empty. Params shape: [1,4,8,0,64]
[[{{node transformer/while/GatherNd_32}} = GatherNd[Tindices=DT_INT32, Tparams=DT_FLOAT, _output_shapes=[[?,8,?,?,64]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](transformer/while/Reshape_65, transformer/while/stack)]]"
debug_error_string = "{"created":"@1542086942.107507941","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"Requested more than 0 entries, but params is empty. Params shape: [1,4,8,0,64]\n\t [[{{node transformer/while/GatherNd_32}} = GatherNd[Tindices=DT_INT32, Tparams=DT_FLOAT, _output_shapes=[[?,8,?,?,64]], _device="/job:localhost/replica:0/task:0/device:CPU:0"](transformer/while/Reshape_65, transformer/while/stack)]]","grpc_status":3}"
>
```
The model server seems to be working fine and responding with the same error:
```
(test) root@ubuntu-c-8-16gib-sfo2-01:~/T2T_Model# tensorflow_model_server --port=9000 --model_name=transformer --model_base_path=/root/T2T_Model/t2t_train/translate_enzh_wmt32k/transformer-transformer_base/export
2018-11-13 05:28:29.116290: I tensorflow_serving/model_servers/server.cc:82] Building single TensorFlow model file config: model_name: transformer model_base_path: /root/T2T_Model/t2t_train/translate_enzh_wmt32k/transformer-transformer_base/export
2018-11-13 05:28:29.116412: I tensorflow_serving/model_servers/server_core.cc:461] Adding/updating models.
2018-11-13 05:28:29.116424: I tensorflow_serving/model_servers/server_core.cc:558] (Re-)adding model: transformer
2018-11-13 05:28:29.216782: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: transformer version: 1542073770}
2018-11-13 05:28:29.216806: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: transformer version: 1542073770}
2018-11-13 05:28:29.216815: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: transformer version: 1542073770}
2018-11-13 05:28:29.216830: I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:363] Attempting to load native SavedModelBundle in bundle-shim from: /root/T2T_Model/t2t_train/translate_enzh_wmt32k/transformer-transformer_base/export/1542073770
2018-11-13 05:28:29.216838: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /root/T2T_Model/t2t_train/translate_enzh_wmt32k/transformer-transformer_base/export/1542073770
2018-11-13 05:28:29.537966: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2018-11-13 05:28:29.597214: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2018-11-13 05:28:29.722289: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:162] Restoring SavedModel bundle.
2018-11-13 05:28:30.139345: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:138] Running MainOp with key saved_model_main_op on SavedModel bundle.
2018-11-13 05:28:30.227063: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:259] SavedModel load for tags { serve }; Status: success. Took 1010210 microseconds.
2018-11-13 05:28:30.227116: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:83] No warmup data file found at /root/T2T_Model/t2t_train/translate_enzh_wmt32k/transformer-transformer_base/export/1542073770/assets.extra/tf_serving_warmup_requests
2018-11-13 05:28:30.227223: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: transformer version: 1542073770}
2018-11-13 05:28:30.229398: I tensorflow_serving/model_servers/server.cc:286] Running gRPC ModelServer at 0.0.0.0:9000 ...
2018-11-13 05:59:38.052592: W external/org_tensorflow/tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at gather_nd_op.cc:50 : Invalid argument: Requested more than 0 entries, but params is empty. Params shape: [1,4,8,0,64]
```
My environment:
tensor2tensor (1.10.0)
tensorboard (1.12.0)
tensorflow (1.12.0)
tensorflow-serving-api (1.12.0)
Would appreciate any tips or comments.
| open | 2018-11-13T05:50:17Z | 2018-11-14T08:52:49Z | https://github.com/tensorflow/tensor2tensor/issues/1219 | [] | echan00 | 9 |
MycroftAI/mycroft-core | nlp | 2,456 | Failing to run on Python3.9 | While running [dev_setup.sh](https://github.com/MycroftAI/mycroft-core/blob/dev/dev_setup.sh), this error message pops up:
```
Traceback (most recent call last):
File "<stdin>", line 20649, in <module>
File "<stdin>", line 197, in main
File "<stdin>", line 82, in bootstrap
File "<frozen zipimport>", line 259, in load_module
File "/tmp/tmp0ga497z7/pip.zip/pip/_internal/__init__.py", line 20, in <module>
File "<frozen zipimport>", line 259, in load_module
File "/tmp/tmp0ga497z7/pip.zip/pip/_vendor/urllib3/__init__.py", line 8, in <module>
File "<frozen zipimport>", line 259, in load_module
File "/tmp/tmp0ga497z7/pip.zip/pip/_vendor/urllib3/connectionpool.py", line 29, in <module>
File "<frozen zipimport>", line 259, in load_module
File "/tmp/tmp0ga497z7/pip.zip/pip/_vendor/urllib3/connection.py", line 39, in <module>
File "<frozen zipimport>", line 259, in load_module
File "/tmp/tmp0ga497z7/pip.zip/pip/_vendor/urllib3/util/__init__.py", line 3, in <module>
File "<frozen zipimport>", line 259, in load_module
File "/tmp/tmp0ga497z7/pip.zip/pip/_vendor/urllib3/util/connection.py", line 3, in <module>
File "<frozen zipimport>", line 259, in load_module
File "/tmp/tmp0ga497z7/pip.zip/pip/_vendor/urllib3/util/wait.py", line 1, in <module>
File "<frozen zipimport>", line 259, in load_module
File "/tmp/tmp0ga497z7/pip.zip/pip/_vendor/urllib3/util/selectors.py", line 14, in <module>
ImportError: cannot import name 'Mapping' from 'collections' (/usr/local/lib/python3.9/collections/__init__.py)
Failed to set up virtualenv for mycroft, exiting setup.
```
Python 3.9.0a2+ on Debian Experimental | closed | 2020-01-18T04:30:35Z | 2020-01-28T14:29:53Z | https://github.com/MycroftAI/mycroft-core/issues/2456 | [] | opensource-assist | 6 |
CTFd/CTFd | flask | 2,202 | Add more statistics to admin/statistics | **Environment**:
- CTFd Version/Commit: 3.5.0
**What happened?**

**What did you expect to happen?**
While the statistics page shows general stats about failed and solved attempts, there should be specific stats and charts:
- To show which challenge has the most failed attempts and which one has the least.
- To show the submission (solves and fails) percentage for all teams and all users.
- To show solves in individual categories and which category has the highest fails. | open | 2022-10-17T09:46:15Z | 2022-10-17T09:46:15Z | https://github.com/CTFd/CTFd/issues/2202 | [] | thecybermafia | 0 |
tfranzel/drf-spectacular | rest-api | 1,029 | PolymorphicSerializerExtension drops serializers without writable fields | **Describe the bug**
PolymorphicSerializerExtension does not recognize polymorphic child serializers without writable fields as valid serializers for write operations, despite the django-rest-polymorphic library adding one writable field called `resourcetype`.
As far as I can understand, this bug occurs in [this](https://github.com/tfranzel/drf-spectacular/blob/master/drf_spectacular/contrib/rest_polymorphic.py#L21) line, because the child serializer does not yet have the `resourcetype` field, so it is treated as an empty schema and deleted.
Library versions
Django==4.2.3
django-polymorphic==3.1.0
django-rest-polymorphic==0.1.10
djangorestframework==3.14.0
drf-spectacular==0.26.3
**To Reproduce**
I have an app that uses polymorphic models to create asynchronous tasks of different types. Some of those task types require some input from the user to create, but some only require the type of the task. The latter causes the problem where drf-spectacular does not generate a schema for the POST request, despite it having one valid writable field (`resourcetype`).
The following code snippets show example of this problem:
`models.py`
```python
from django.db import models
from polymorphic.models import PolymorphicModel
class TaskBase(PolymorphicModel):
status = models.PositiveIntegerField(choices=[(0, "RUNNING"), (1, "OK"), (2, "ERROR")], editable=False)
class TaskWithoutParameter(TaskBase):
pass
class TaskWithParameter(TaskBase):
parameter = models.CharField(max_length=255)
```
`serializers.py`
```python
from rest_framework import serializers
from rest_polymorphic.serializers import PolymorphicSerializer
from .models import TaskWithoutParameter, TaskWithParameter
class TaskWithoutParameterSerializer(serializers.ModelSerializer):
class Meta:
model = TaskWithoutParameter
fields = ('id', 'status')
read_only_fields = ('id', 'status')
class TaskWithParameterSerializer(serializers.ModelSerializer):
class Meta:
model = TaskWithParameter
fields = ('id', 'status', 'parameter')
read_only_fields = ('id', 'status')
class PolymorphicTasksSerializer(PolymorphicSerializer):
model_serializer_mapping = {
TaskWithoutParameter: TaskWithoutParameterSerializer,
TaskWithParameter: TaskWithParameterSerializer
}
```
Generated schema (note that `PolymorphicTasksRequest` only has one choice, while `PolymorphicTasks` has two)
```yaml
components:
schemas:
PolymorphicTasks:
oneOf:
- $ref: '#/components/schemas/TaskWithoutParameterTyped'
- $ref: '#/components/schemas/TaskWithParameterTyped'
discriminator:
propertyName: resourcetype
mapping:
TaskWithoutParameter: '#/components/schemas/TaskWithoutParameterTyped'
TaskWithParameter: '#/components/schemas/TaskWithParameterTyped'
PolymorphicTasksRequest:
oneOf:
- $ref: '#/components/schemas/TaskWithParameterTypedRequest'
discriminator:
propertyName: resourcetype
mapping:
TaskWithParameter: '#/components/schemas/TaskWithParameterTypedRequest'
TaskWithParameter:
type: object
properties:
id:
type: integer
readOnly: true
status:
type: integer
readOnly: true
parameter:
type: string
maxLength: 255
required:
- id
- parameter
- status
TaskWithParameterRequest:
type: object
properties:
parameter:
type: string
minLength: 1
maxLength: 255
required:
- parameter
TaskWithParameterTyped:
allOf:
- type: object
properties:
resourcetype:
type: string
required:
- resourcetype
- $ref: '#/components/schemas/TaskWithParameter'
TaskWithParameterTypedRequest:
allOf:
- type: object
properties:
resourcetype:
type: string
required:
- resourcetype
- $ref: '#/components/schemas/TaskWithParameterRequest'
TaskWithoutParameter:
type: object
properties:
id:
type: integer
readOnly: true
status:
type: integer
readOnly: true
required:
- id
- status
TaskWithoutParameterTyped:
allOf:
- type: object
properties:
resourcetype:
type: string
required:
- resourcetype
- $ref: '#/components/schemas/TaskWithoutParameter'
```
**Expected behavior**
The serializer without writable fields should be added to the schema with a single field, `resourcetype`.
| closed | 2023-07-17T15:12:34Z | 2023-07-23T21:20:54Z | https://github.com/tfranzel/drf-spectacular/issues/1029 | [
"bug",
"fix confirmation pending"
] | igorlitvak | 1 |
graphql-python/graphene-sqlalchemy | graphql | 225 | bug: string.value? | https://github.com/graphql-python/graphene-sqlalchemy/blob/db3e9f4c3baad3e62c113d4a9ddd2e3983d324f2/graphene_sqlalchemy/fields.py#L40-L41
AttributeError: 'str' object has no attribute 'value' | closed | 2019-06-06T20:40:51Z | 2023-02-24T14:56:14Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/225 | [
"bug"
] | maquino1985 | 6 |
coleifer/sqlite-web | flask | 62 | How to specify the web password when using docker? | It would be cool to pass the password to the app via docker. | closed | 2019-06-27T02:37:54Z | 2019-12-02T17:27:55Z | https://github.com/coleifer/sqlite-web/issues/62 | [] | misiek303 | 2 |
raphaelvallat/pingouin | pandas | 37 | Contingency: tests against other software | Validate contingency tests against implementations from R, SPSS and JASP | closed | 2019-05-30T00:05:49Z | 2019-05-30T21:36:52Z | https://github.com/raphaelvallat/pingouin/issues/37 | [
"feature request :construction:"
] | arthurpaulino | 1 |
alteryx/featuretools | scikit-learn | 1,995 | Enumerate Primitive Type | As a developer, I want Primitive Types to be enumerated, to improve maintainability and consistency.
#### Code Example
```python
# featuretools/types.py
class PrimitiveTypes(Enum):
AGGREGATION = "aggregation"
TRANSFORM = "transform"
WHERE = "where"
GROUPBY_TRANSFORM = "groupby transform"
```
| open | 2022-03-29T14:31:27Z | 2023-06-26T19:10:03Z | https://github.com/alteryx/featuretools/issues/1995 | [
"new feature",
"tech debt"
] | dvreed77 | 0 |
mirumee/ariadne | api | 1,101 | Feature request: cache query parsing and validation | Hello,
First of all, thanks for your work on Ariadne; I've really enjoyed working with it so far!
I'm opening this issue because we recently ran a load test on a microservice which basically translates GraphQL queries into SQL and fetches data from Postgres with SQLAlchemy's async API (we ran the tests with Python 3.11 on a single gunicorn worker with uvloop). We noticed high CPU usage, and profiling showed that up to 30% of all CPU time was spent in the `parse_query` and `validate_query` functions.
When I tried to implement a simpler version of the `graphql` function with a cache on query parsing and validation, the number of requests per second increased by 22%, and the median request duration decreased by 62% (P95 duration decreased by 35%).
```python
# Imports filled in for completeness; the ariadne helper paths reflect its
# internal layout at the time and may vary by version.
from functools import lru_cache
from logging import Logger
from typing import Any, Awaitable, cast

from graphql import DocumentNode, ExecutionResult, GraphQLError, GraphQLSchema
from graphql import execute as _execute_graphql
from graphql import parse as _parse_graphql
from graphql.pyutils import is_awaitable

from ariadne import format_error
from ariadne.graphql import handle_graphql_errors, handle_query_result, validate_data, validate_query
from ariadne.types import ErrorFormatter, GraphQLResult

from my_service import GRAPHQL_SCHEMA
# GRAPHQL_SCHEMA is a global here to prevent lru_cache from re-hashing the same object every time, but since
# GraphQLSchema is hashable, it could also be passed as a parameter
@lru_cache(maxsize=64)
def parse_and_validate_query(query: str) -> tuple[DocumentNode, list[GraphQLError]]:
parsed = _parse_graphql(query)
validation_errors = validate_query(schema=GRAPHQL_SCHEMA, document_ast=parsed)
return parsed, validation_errors
async def execute_graphql_query(
schema: GraphQLSchema,
data: Any,
*,
debug: bool = False,
error_formatter: ErrorFormatter = format_error,
logger: Logger | None = None,
context_value: Any | None = None,
) -> GraphQLResult:
try:
validate_data(data)
variables, operation_name = (
data.get("variables"),
data.get("operationName"),
)
ast_document, validation_errors = parse_and_validate_query(query=data["query"])
if validation_errors:
return handle_graphql_errors(
errors=validation_errors,
logger=logger,
error_formatter=error_formatter,
debug=debug,
)
result = _execute_graphql(
schema,
ast_document,
variable_values=variables,
operation_name=operation_name,
context_value=context_value,
)
if is_awaitable(result):
result = await cast(Awaitable[ExecutionResult], result)
except GraphQLError as error:
return handle_graphql_errors(
[error], logger=logger, error_formatter=error_formatter, debug=debug
)
return handle_query_result(result, logger=logger, error_formatter=error_formatter, debug=debug)
```
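One design note on the sketch above: `lru_cache` keys on the raw query string, so `variables` and `operationName` are deliberately left out of the cache key; only the parse and validation work is reused, while execution still runs per request. Note also that any whitespace change in the query string is a cache miss.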
My question is: Would it make sense to have such a caching feature available in Ariadne directly ? If yes, I'd be willing to have a look | closed | 2023-06-16T14:07:46Z | 2023-08-02T16:47:27Z | https://github.com/mirumee/ariadne/issues/1101 | [
"docs"
] | lukapeschke | 10 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 55 | Conda UnsatisfiableError: The following specifications were found to be incompatible with your CUDA driver | Fix the following conda errors (not sure if they are reproducible errors):
Windows:
```
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versionsThe following specifications were found to be incompatible with your CUDA driver:
- feature:/win-64::__cuda==10.2=0
- feature:|@/win-64::__cuda==10.2=0
Your installed CUDA driver is: 10.2
```
Linux:
```
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versionsThe following specifications were found to be incompatible with your CUDA driver:
- feature:/linux-64::__cuda==10.1=0
- feature:|@/linux-64::__cuda==10.1=0
Your installed CUDA driver is: 10.1
```
| closed | 2020-04-15T21:33:27Z | 2022-03-16T15:55:11Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/55 | [
"help wanted",
"pip/conda"
] | KevinMusgrave | 32 |
matterport/Mask_RCNN | tensorflow | 2,282 | Running inference on RTX 2080 Ti | I tried to run the code on different GPUs and noticed that the network is extremely slow on the RTX 2080 Ti. GPU usage is close to 0%, although memory is allocated at the beginning. It seems that computation occurs on the CPU.
For reference, I am using:
Ubuntu 18.04
Tensorflow 1.14
Cuda 10
CuDNN 7.5
Keras 2.2.5
This configuration works properly on GTX1080Ti cards. Can you please provide a way to reach descent performance on RTX cards as well? | open | 2020-07-14T17:39:24Z | 2020-07-16T06:07:38Z | https://github.com/matterport/Mask_RCNN/issues/2282 | [] | YMarrakchi | 1 |
iMerica/dj-rest-auth | rest-api | 219 | Expose logout logic through dj-rest-auth config | There are two places where dj-rest-auth [assumes the fields](https://github.com/jazzband/dj-rest-auth/blob/732935d168bc2a325c0bdd5ddf831b509b53cff3/dj_rest_auth/views.py#L167) of the user model / token.
`auth_token` for my user model is actually a set, so `delete()` won't work. Easy enough to extend LogoutView, but this would be a nice improvement. | closed | 2021-01-29T17:36:52Z | 2021-02-07T06:18:40Z | https://github.com/iMerica/dj-rest-auth/issues/219 | [] | mjmaurer | 1 |
docarray/docarray | fastapi | 1,239 | Rethink the predefined document structure | # Context
We need to discuss whether `docarray.documents` should have all of these fields (embedding, etc.).
I think we should tell users that they should almost always define their own `BaseDocument`. | closed | 2023-03-14T13:27:41Z | 2023-03-23T08:33:03Z | https://github.com/docarray/docarray/issues/1239 | [] | samsja | 1 |
serengil/deepface | deep-learning | 677 | DeepFace.stream() function throws error | The "preprocess_face" function is missing from the ./commons/realtime.py script.
This is an issue with the latest version (0.0.78) but not with version 0.0.75. | closed | 2023-02-16T11:55:35Z | 2023-02-16T13:29:42Z | https://github.com/serengil/deepface/issues/677 | [
"duplicate"
] | swapnika92 | 1 |
BlinkDL/RWKV-LM | pytorch | 36 | What does "stream and split" strategy even mean? | The readme.md mentions a strategy called "stream and split": how does it work? I haven't seen it mentioned anywhere outside of this repo, and I can't find it explained even within this repo. | closed | 2023-02-26T14:11:15Z | 2023-02-26T17:14:41Z | https://github.com/BlinkDL/RWKV-LM/issues/36 | [] | hfnuser0000 | 1 |
yunjey/pytorch-tutorial | pytorch | 212 | BN should be used after ReLU | <https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/02-intermediate/convolutional_neural_network/main.py#L41-L42>
```python
nn.ReLU(),
nn.BatchNorm2d(...),
```
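For illustration, a fuller sketch of a conv block in the proposed order (the channel sizes are made-up examples):
```python
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.BatchNorm2d(32),  # normalize after the non-linearity
)
```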
BN should be used after ReLU: features might be truncated by a non-linearity like ReLU, so BN is applied afterwards to normalize the distribution of the features. | closed | 2020-07-06T09:12:26Z | 2020-07-24T06:55:41Z | https://github.com/yunjey/pytorch-tutorial/issues/212 | [] | yunlingz | 0 |
nolar/kopf | asyncio | 350 | [PR] Crash the whole operator on unrecoverable errors in watchers/workers | > <a href="https://github.com/nolar"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> A pull request by [nolar](https://github.com/nolar) at _2020-04-27 19:38:11+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/350
>
## What do these changes do?
When a fatal error happens in the operator's watching, queueing, multiplexing, or processing, including API PATCH'ing, then stop the whole operator instead of ignoring and continuing.
## Description
This issue was detected in an incident where a PATCH request failed with HTTP 422 "Unprocessable Entity" (#346). Instead of stopping or slowing down further attempts, the operator continued handling repeatedly, with 1-2 attempts per second.
On a wider scope, if _anything_ goes wrong in the top-level processing, i.e. before the handlers (which have their own error handling and backoff intervals), then crash the whole operator and let Kubernetes deal with the broken pod.
This does not completely prevent incidents with repeated handling, but it will at least slow them down (restarts are not fast).
All in all, this should protect the users from the framework/operators misbehaviour in some rare cases. In all other cases, nothing changes for the users.
---
**Note:** A separate fix will be made (#351) with throttling of unrecoverable errors on a per-resource basis from approximately when the processing begins, and until the handlers (this covers resource PATCH'ing). The operator will stop anyway for errors from watching to that point of processing, but this is a much more narrow scope.
**Implementation note:** there is already a safety net for the root tasks, such as watchers: if they fail, the operator stops. But the workers are not covered by this, since they are fire-and-forget kind of tasks. So, they should "escalate" the errors their own way — via fatal-flag-setting and own stack trace dumping.
---
Side-changes:
* Log daemon-killer's exit reason as "cancelled" (as all other tasks), not as "exited unexpectedly" — due to no `asyncio.CancelledError` raised from inside.
* Cover the queue pulling and event batching by this unexpected errors safety net too — by shifting the `except:` block left. This is unlikely to happen, but just in case.
* Stop logging the `functools.partial` objects (processors) with all their arguments. This could eventually lead to some data leaks to the logs.
## Issues/PRs
> Issues: #346
> Related: #331
## Type of changes
- Bug fix (non-breaking change which fixes an issue)
## Checklist
- [x] The code addresses only the mentioned problem, and this problem only
- [x] I think the code is well written
- [ ] Unit tests for the changes exist
- [ ] Documentation reflects the changes
- [ ] If you provide code modification, please add yourself to `CONTRIBUTORS.txt`
<!-- Are there any questions or uncertainties left?
Any tasks that have to be done to complete the PR? -->
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-08-20 20:07:49+00:00_
>
Closed in favor of nolar/kopf#509 | closed | 2020-08-18T20:04:23Z | 2020-09-06T21:53:32Z | https://github.com/nolar/kopf/issues/350 | [
"bug",
"archive"
] | kopf-archiver[bot] | 1 |
xinntao/Real-ESRGAN | pytorch | 769 | Question about continuing iterative training with new data, starting from a model I already trained once | Can a model only be trained once? When I take my own trained model and continue training it with new data, an exception is raised saying the convolution layer weights cannot be found! | open | 2024-03-23T05:47:20Z | 2024-03-25T07:09:27Z | https://github.com/xinntao/Real-ESRGAN/issues/769 | [] | kl402401 | 2 |
proplot-dev/proplot | data-visualization | 234 | can't set style to default | <!-- Thanks for helping us make proplot a better package! If this is a bug report, please use the template provided below. If this is a feature request, you can delete the template text (just try to be descriptive with your request). -->
### Description
I would like to set the style back to matplotlib's defaults (i.e., no background color, no grid, etc.). According to the documentation (https://proplot.readthedocs.io/en/latest/configuration.html#proplot-settings), this should be possible with `plot.rc.update(style='default')`, but this command crashes.
### Steps to reproduce
```python
import proplot as plot
plot.rc.update(style='default')
```
**Expected behavior**: style set back to matplotlib default, i.e., no background color, no grid, etc
**Actual behavior**: the update command fails with
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-29-2290163ff701> in <module>
1 import proplot as plot
2
----> 3 plot.rc.update(style='default')
/opt/conda/lib/python3.8/site-packages/proplot/config.py in update(self, *args, **kwargs)
786 kw.update(kwargs)
787 for key, value in kw.items():
--> 788 self.__setitem__(prefix + key, value)
789
790 @docstring.add_snippets
/opt/conda/lib/python3.8/site-packages/proplot/config.py in __setitem__(self, key, value)
330 a ProPlot :ref:`added setting <rc_proplot>`.
331 """
--> 332 kw_proplot, kw_matplotlib = self._get_synced_params(key, value)
333 rc_proplot.update(kw_proplot)
334 rc_matplotlib.update(kw_matplotlib)
/opt/conda/lib/python3.8/site-packages/proplot/config.py in _get_synced_params(self, key, value)
399 elif key == 'style':
400 if value is not None:
--> 401 kw_matplotlib, kw_proplot = _get_style_dicts(value, infer=True)
402
403 # Cycler
ValueError: too many values to unpack (expected 2)
```
### Equivalent steps in matplotlib
doesn't apply
### Proplot version
0.6.4
| closed | 2020-11-14T23:05:47Z | 2021-07-03T16:45:26Z | https://github.com/proplot-dev/proplot/issues/234 | [
"bug"
] | matthias-k | 2 |
numba/numba | numpy | 9,802 | IR is not SSA | I thought Numba IR was supposed to be SSA. However:
```python
from numba import config, njit
config.ANNOTATE = 1
@njit('void(float32[::1], int32)')
def function_to_lower(A, n):
i = 0
while i < n:
A[i] = i
i += 1
```
produces
```
-----------------------------------ANNOTATION-----------------------------------
# File: /home/gmarkall/numbadev/issues/not-ssa/repro.py
# --- LINE 6 ---
# label 0
# A = arg(0, name=A) :: array(float32, 1d, C)
# n = arg(1, name=n) :: int32
@njit('void(float32[::1], int32)')
# --- LINE 7 ---
def function_to_lower(A, n):
# --- LINE 8 ---
# i = const(int, 0) :: Literal[int](0)
# i.2 = i :: int64
i = 0
# --- LINE 9 ---
# $12compare_op.3 = i < n :: bool
# del i
# bool18 = global(bool: <class 'bool'>) :: Function(<class 'bool'>)
# $18pred = call bool18($12compare_op.3, func=bool18, args=(Var($12compare_op.3, repro.py:9),), kws=(), vararg=None, varkwarg=None, target=None) :: (bool,) -> bool
# del bool18
# del $12compare_op.3
# branch $18pred, 20, 56
# $44compare_op.8 = i.1 < n :: bool
# del i.1
# bool50 = global(bool: <class 'bool'>) :: Function(<class 'bool'>)
# $50pred = call bool50($44compare_op.8, func=bool50, args=(Var($44compare_op.8, repro.py:9),), kws=(), vararg=None, varkwarg=None, target=None) :: (bool,) -> bool
# del bool50
# del $44compare_op.8
# branch $50pred, 20, 52
# label 52
# del n
# del i.2
# del A
# del $50pred
# $const52.0.0 = const(NoneType, None) :: none
# $54return_value.1 = cast(value=$const52.0.0) :: none
# del $const52.0.0
# return $54return_value.1
# label 56
# del n
# del i.2
# del A
# del $18pred
# $const56.0.0 = const(NoneType, None) :: none
# $58return_value.1 = cast(value=$const56.0.0) :: none
# del $const56.0.0
# return $58return_value.1
while i < n:
# --- LINE 10 ---
# label 20
# del $18pred
# A[i.2] = i.2 :: (Array(float32, 1, 'C', False, aligned=True), int64, int64) -> none
A[i] = i
# --- LINE 11 ---
# $const32.4.2 = const(int, 1) :: Literal[int](1)
# $binop_iadd34.5 = inplace_binop(fn=<built-in function iadd>, immutable_fn=<built-in function add>, lhs=i.2, rhs=$const32.4.2, static_lhs=Undefined, static_rhs=Undefined) :: int64
# del $const32.4.2
# i.1 = $binop_iadd34.5 :: int64
# del $binop_iadd34.5
# i.2 = i.1 :: int64
i += 1
```
Where `i.2` is defined twice, once prior to the loop and the second time inside it.
* Is my assumption incorrect?
* If my assumption was correct, is there another way to look at it (i.e. are these two `i.2`s somehow distinct in a way that's not reflected in the annotation)?
* Or, is this a bug?
cc @VijayKandiah | closed | 2024-11-20T22:30:56Z | 2024-11-26T15:11:58Z | https://github.com/numba/numba/issues/9802 | [
"question"
] | gmarkall | 3 |
miguelgrinberg/microblog | flask | 185 | cannot import flask |
```
C:\Python\Python37-32>python -m flask run
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
Usage: python -m flask run [OPTIONS]
Error: While importing "app", an ImportError was raised:
Traceback (most recent call last):
File "C:\Python\Python37-32\lib\site-packages\flask\cli.py", line 240, in locate_app
__import__(module_name)
File "C:\Python\Python37-32\app\__init__.py", line 1, in <module>
from flask import flask
ImportError: cannot import name 'flask' from 'flask' (C:\Python\Python37-32\lib\site-packages\flask\__init__.py)
```
Can you please help me? I don't understand this error.
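For context, the traceback points at `from flask import flask`; Flask's application class is capitalized, so a minimal working version of the import looks like this:

```python
from flask import Flask  # the class is 'Flask'; lowercase 'flask' does not exist

app = Flask(__name__)
```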
| closed | 2019-10-06T12:52:33Z | 2019-10-08T08:14:07Z | https://github.com/miguelgrinberg/microblog/issues/185 | [] | AkshithaKoppaka | 3 |
tfranzel/drf-spectacular | rest-api | 880 | File as a Response seems not to be possible | **Describe the bug**
An API endpoint with a response as a CSV or png can not be properly defined.
The docs state to use the OpenApiTypes.BINARY Type but this seems only to be a disguised String, which if used returns: Could not satisfy the request Accept header.
**To Reproduce**
```python
@extend_schema(
methods=['GET'], tags=['company'], responses={(200,"text/csv"):OpenApiTypes.BINARY}, operation_id='get_csv, summary='returns a csv'
)
@action(methods=['GET'], detail=True, url_path='get_csv', url_name='get_csv')
def get_csv(self, request, *args, **kwargs):
devicelist = [["123","Test Manufacturer","Globe","10","3"]]
devlist = pd.DataFrame(devicelist, columns=['ID','Manufacturer', 'Type', 'TypeID', 'Unit'])
response = HttpResponse(
content_type='text/csv',
headers={'Content-Disposition': f'attachment; filename="your.csv"'},
)
devlist.to_csv(mode="wb", path_or_buf=response, index=False, encoding="UTF-8")
return response
```
**Expected behavior**
There should probably be a type that can be used in this scenario.
| closed | 2022-11-30T14:47:02Z | 2022-12-08T23:03:14Z | https://github.com/tfranzel/drf-spectacular/issues/880 | [] | jonaskonig | 3 |
gevent/gevent | asyncio | 1,762 | [Question] about: OSError: unexpected end of file while reading request at position | Hey there,
I'm using `bottle` in combination with `gevent` on a production-level application, where I'm getting rare exceptions when a user uploads a file (most of the time it works without problems and I cannot reproduce it, hence I'm sitting here waiting for it).
```python-traceback
raise IOError("unexpected end of file while reading request at position %s" % (self.position,))
OSError: unexpected end of file while reading request at position 1982464
```
I cannot just rebuild a minimal version of the code, because the situation happens inside a larger-scale application with a couple of users operating as usual.
Furthermore, I'm not looking for a way to fix it so much as to understand what is happening and why.
Here's my traceback (the calls are going through `bottle` into gevent's `pywsgi` implementation, since I'm using `gevent.pywsgi.WSGIServer`).
```python-traceback
Traceback (most recent call last):
File "/root/pyvtt/utils.py", line 310, in wrapper
return func(*args, **kwargs)
File "./vtt.py", line 37, in wrapper
return callback(*args, **kwargs)
File "./vtt.py", line 251, in post_import_game
files = request.files.getall('file')
File "/usr/local/lib/python3.8/dist-packages/bottle.py", line 172, in __get__
if key not in storage: storage[key] = self.getter(obj)
File "/usr/local/lib/python3.8/dist-packages/bottle.py", line 1113, in files
for name, item in self.POST.allitems():
File "/usr/local/lib/python3.8/dist-packages/bottle.py", line 172, in __get__
if key not in storage: storage[key] = self.getter(obj)
File "/usr/local/lib/python3.8/dist-packages/bottle.py", line 1232, in POST
args = dict(fp=self.body, environ=safe_env, keep_blank_values=True)
File "/usr/local/lib/python3.8/dist-packages/bottle.py", line 1203, in body
self._body.seek(0)
File "/usr/local/lib/python3.8/dist-packages/bottle.py", line 172, in __get__
if key not in storage: storage[key] = self.getter(obj)
File "/usr/local/lib/python3.8/dist-packages/bottle.py", line 1172, in _body
for part in body_iter(read_func, self.MEMFILE_MAX):
File "/usr/local/lib/python3.8/dist-packages/bottle.py", line 1135, in _iter_body
part = read(min(maxread, bufsize))
File "/usr/local/lib/python3.8/dist-packages/gevent/pywsgi.py", line 320, in read
return self._do_read(length)
File "/usr/local/lib/python3.8/dist-packages/gevent/pywsgi.py", line 199, in _do_read
raise IOError("unexpected end of file while reading request at position %s" % (self.position,))
OSError: unexpected end of file while reading request at position 1982464
```
Here are some version numbers:
```bash
$ pip show gevent
Name: gevent
Version: 20.12.1
Summary: Coroutine-based network library
Home-page: http://www.gevent.org/
Author: Denis Bilenko
Author-email: denis.bilenko@gmail.com
License: MIT
Location: /usr/local/lib/python3.8/dist-packages
Requires: setuptools, zope.interface, zope.event, greenlet
Required-by: gevent-websocket
$ python3
Python 3.8.7 (default, Dec 21 2020, 21:23:03)
[GCC 5.4.0 20160609] on linux
$ uname -a
Linux usve272161 4.4.0-042stab145.3 #1 SMP Thu Jun 11 14:05:04 MSK 2020 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.7 LTS
Release: 16.04
Codename: xenial
```
Btw the server OS is pretty outdated - I hope that's not the reason here.
Greetings
glocke
| closed | 2021-01-21T16:16:43Z | 2024-02-10T08:17:26Z | https://github.com/gevent/gevent/issues/1762 | [
"Status: not gevent"
] | cgloeckner | 4 |
piskvorky/gensim | machine-learning | 2,872 | Broken file link in `run_corpora_and_vector_spaces` tutorial | #### Problem description
The `run_corpora_and_vector_spaces.ipynb` tutorial depends on a file on the web, and that file is missing.
#### Steps/code/corpus to reproduce
See https://groups.google.com/g/gensim/c/nX4lc8j0ZO0
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
Unknown (probably any). | closed | 2020-07-05T07:24:23Z | 2021-06-06T13:50:18Z | https://github.com/piskvorky/gensim/issues/2872 | [
"bug",
"documentation",
"difficulty easy"
] | piskvorky | 6 |
pydantic/pydantic-settings | pydantic | 116 | How to change a setting on-the-fly without an environment variable? | I am using pydantic-settings and have a pretty typical settings class. Let's say it looks like the following
```python
from pydantic_settings import BaseSettings
from pydantic import Field
class Settings(BaseSettings):
MYSETTING: bool = Field(True)
```
I now have some function where I would like to (temporarily!) adjust `MYSETTING` on-the-fly.
```python
from my_package import Settings
def my_function(args, **kwargs):
Settings.MYSETTING = False # change from default
# now call some functions that use MYSETTING
    Settings.MYSETTING = True # revert to default
```
I tried something like the above, but it doesn't actually update the setting outside the function scope where the re-assignment takes place.
What would be the best mechanism to achieve something like this without messing around with environment variables?
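One pattern worth considering is a small context manager that temporarily overrides a field on a shared `Settings` *instance* and restores it afterwards; this is a sketch, and the module-level `settings` object is an assumption about how the app is wired:

```python
from contextlib import contextmanager

settings = Settings()  # assumed shared instance, imported by other modules

@contextmanager
def override_setting(name: str, value):
    old = getattr(settings, name)
    setattr(settings, name, value)  # pydantic models are mutable by default
    try:
        yield
    finally:
        setattr(settings, name, old)  # always restore the previous value

# usage: everything inside the block sees MYSETTING == False
with override_setting("MYSETTING", False):
    ...
```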
Selected Assignee: @dmontagu | closed | 2023-07-10T07:04:42Z | 2023-07-10T20:05:15Z | https://github.com/pydantic/pydantic-settings/issues/116 | [
"unconfirmed"
] | Andrew-S-Rosen | 4 |
plotly/dash | dash | 2,963 | [BUG] dcc.RadioItems checked state not updated |
**Describe your context**
```
dash 2.17.1
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-iconify 0.1.2
dash_mantine_components 0.14.4
dash-table 5.0.0
```
- OS: MacOS
- Browser: Chrome
- Version: 127.0.6533.120
**Describe the bug**
dcc.RadioItems `<input>` elements' checked state is not updated after clicking them.
**Expected behavior**
The clicked radio button (`<input>`) should have the "checked" state.
**Screenshots**
<img width="1719" alt="Screenshot 2024-08-23 at 13 12 18" src="https://github.com/user-attachments/assets/9ab5fc04-adf5-451d-9c84-b411cdc3381a">
Note: "c" option is selected, "d" is default.
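A minimal reproduction sketch (component ids and layout below are my assumptions, not taken from the real app):

```python
from dash import Dash, dcc, html

app = Dash(__name__)
app.layout = html.Div(
    [dcc.RadioItems(options=["a", "b", "c", "d"], value="d", id="radio")]
)

if __name__ == "__main__":
    app.run(debug=True)  # click "c" and inspect the <input> checked attribute
```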
| closed | 2024-08-23T10:15:06Z | 2024-08-23T10:31:42Z | https://github.com/plotly/dash/issues/2963 | [] | ihor-lazariev | 1 |
yeongpin/cursor-free-vip | automation | 109 | Everything runs fine, but the registered account is still a trial account? | 
 | closed | 2025-02-26T15:46:03Z | 2025-02-26T15:59:00Z | https://github.com/yeongpin/cursor-free-vip/issues/109 | [] | GongNanyue | 1 |
PablocFonseca/streamlit-aggrid | streamlit | 303 | [BUG] AgGrid theme different from Streamlit theme | **Describe the bug**
Normally the AgGrid theme matches the Streamlit app theme, but I noticed that after updating from 1.0.5 to 1.1.0 the AgGrid theme no longer matches.
**To Reproduce**
I used this as small working example:
```
from st_aggrid import AgGrid
import pandas as pd
df = pd.DataFrame({"Column A": [1, 2, 3], "Column B": [4, 5, 6]})
AgGrid(df)
```
With Version 1.0.5 it looks like this:

With version 1.1.0 it looks like this:

Forcing the theme like this doesn't seem to change anything:
`AgGrid(df, theme="streamlit")`
**Expected behavior**
I'd expect the default AgGrid theme to match the Streamlit theme, as in the previous version.
**live code**
[Py.cafe example with 1.0.5](https://py.cafe/snippet/streamlit/v1?#c=H4sIALZ-pGcEA41VTW_bOBD9K4R6aQGL0Ie_akCHtovdPe9lD1GwoEXaYiKRDEUlUYL8931D2U4CtEiRINAMZ97MezNknpPGSpXskoO3PRvCf-J49Foy3TvrA_t2_AtWbU6mE0aKgeHXwVkbeWAVPvkfIog_vejV5-c6-WG7sTfsW53s2FW-YMWCldcLdjn4Hg-WC7ZasPX1yxcCmut8locvySLx6m7UXvXKhAGdDcEr0Xc6VFXOiw0vGPvEmtF7nHcTc9oYJVmwLLR6YPfKD9qa2tfmkngiRfkZX9GRGXs30YfogtCevmZu9IUWwuRIlAsCXG4KrTVwuslKLVV6n_FixXMcdWKyY0h2z8mperIrQMPa8I99IPcJzsNaJE2rO4nmk93V5SSI_aCoyoOWoU12-SpbJL02_85mOVt_K31sUYdMLZF20J36DtRB-R_WgIlR_hcVKDTdz7EIcYJwk-Tl-mVxibl08TMAHL7mCee4m5L3ua_H_s0EeXgM78v4j2X4mPqJyke0z4zfNXAp055Q883qd2oG5eEW3UdFz3FUlX5eFlF-LPPVNbbFNrdkEgoEQvwn1oqh3bFtlm_VJhNqs95m8pAVomnKMt_vs1zK7V6d17WqlrzgBUx_tKZIm8NBp3ttpDbHoaoKXBOe4ZL0qISgEDx5S471r82-0-ZWASPnX8luRNOqYG2HmBVHEFzKB33QyMmQhRuHUlSDcnKsPKxWeEiYGut70eknwiuRTIGdbm6raotqm9qAoJ-c1ViEqsr4sjZHHeSeGGQ8RzjM-WYRQM6XiNDSCLLoM747nd6nvQpC4pmpqjXPI5EbbW5EMacB6GawZgCVHiFLnm94WZte-FtpH0yqQ-omCs0ok9yjG8RBgSKyKVKOvqMOI1Z8HohtsaYu5qeBbAgEU3edfYAJMIJz3ga7Hw9xLFFAN4xBA24FibeXQbhJeB8T15T4xt84khMaYqqUPklFGmZIj-Yx3qTYLNiTZ36C2hAchcFPTrw9A61U5PE1thqlTSGcmjsq-JYInv2qsaPrIEPJt9H7hBAaesx96u9g0qRem6WrrYa5mTLOweumBacSzUIrbMXoTuu0piVAtUE_IoB6R_jQ9wJNr4BKR-cXNqqNVXvjSt8_23RbRKMDBrPFIAAVbD_PLEJRUbQPE9tPpjdCWtoXjAX2hP8Ux1Q9AobeaPS3xG5DuPCEKyniuBCHPcC-lVFswrnHfkM_S3co4yXUqM2DCE0r7ZHA34rzpB2o4RrEzGHo8OcOTBTkTF7-B5--fjFsBwAA)
[Py.cafe example with 1.1.0](https://py.cafe/snippet/streamlit/v1?#c=H4sIALt-pGcEA41VTW_jNhD9K4T2sgtYhD5sxzGgw-4Wbc-99BAHBS3SFhOJZCgqjhz4v_cNZTsJ0CKLBIFmOPNm5s1HXpPaSpWsk523HevDP2K_91oy3TnrA_u-_wPSxpxFJ4wUPcOvg3Jj5I5V-OS_iSB-96JTX183yU_bDp1h3zfJmt3lM1bMWHk_Y9eHH_FhPmOLGVven74R0BTnq9x9S2aJV0-D9qpTJvTIrA9eia7VoapyXtzwgrEvrB68x3s7MqeNUZIFy0Kje_asfK-t2fiNuTqeiyL_nGf0ZIbOjfQh2iC0p6-pNvpCCmF0RMoVASo3hsYaKN1opZYqfc54seA5nlox2iEk69fkHD1ZFyjD2vCXPZD6DOchzZK60a1E8sn67voSxLZXFOWgZWiSdb7IZkmnzd-TWE7Sn0rvG8QhUUu47XSrfgC1V_6nNajEKP8_Ecg03U62MHGCcJPkdH-aXW2uWfwXAB7f_IRz3I3JR9-3Z_-ugzy8hI9h_Oc0fF76uZTPyr5U_CGBa5jmjJrfLH4lZlAeatF-FvRiR1Hp5zSL9GOY7-4xLbZ-JJFQQBDsv7BG9M2aFfNFKYtsqWpZ5vJGZbdZtlqtbutlvdrequIyrlW14CUG2Qi_t6ZI691Op1ttpDb7vqoKrAnPsCQdIsEoBE_aMg6_2bbaPCpg5PyW5FrUjQrWtrAh2Bwq5YPeafhk8MLGITLFiAuEkYfUCA8KU2N9J1p9JLwSzmTY6vqxqlaIdrMxex3ktqrmPOM5HiFOe0TmOZ_PcVqkESTRZ7wyrd6mnQpC4qhU1ZLnMe0HbR5EMbkB6KG3pkfiHUzmPL_h5cZ0wj9KezCpDqkbyTQjT1IPrhc7hYLgTZZy8G1VISfCiseAaiuWlMV0CEgGHRB129oDRIARnPM22O2wo7jEKTT9EDTgFiB0daXdjcL76Lgkx3f62hF5YKzgBbmPUhFjGdyjuI97E5NF9aSZDk4TgiMz6EmJS9PTAMU6bmOqkdoUxKkpo4KvqMCLXtV2cC1oKPkqao8woRZH32P3BJE69ZYsLbLqp2TK2Aev6wY1lUgWXGEGBnceniW1HNF6_QIDyh3mfdcJJL0AKj1d7mlkG4P1TpV-PNK0G6LWAY1ZoRGksN3UswhFQZE-RMw6id4IaWle0BbII_4v7FP1Ahi6yMhvjkkGceGIBRSxXbDDHGDeykg24TxjmsGfpY3JeAk2NuYgQt1Iuyfw9-QctUNpGPro2fct_jyhEgU6k9O_wzUsUFoHAAA)
| closed | 2025-02-06T09:26:51Z | 2025-03-05T19:31:54Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/303 | [] | Hoffelhas | 1 |
mwaskom/seaborn | matplotlib | 3,508 | Support non-index dataframe in heatmap | Since Seaborn is planning to add support for dataframes other than pandas (https://github.com/mwaskom/seaborn/pull/3369), I'd like to point out an issue with heatmap.
Currently, Seaborn's [heatmap](https://seaborn.pydata.org/generated/seaborn.heatmap.html) requires a Pandas dataframe with row labels or an index. However, certain dataframes like Polars [do not have an index by design](https://pola-rs.github.io/polars/user-guide/migration/pandas/#selecting-data). It would be beneficial if the heatmap could provide an API that allows inputting a non-index dataframe.
```python
import pandas as pd
import polars as pl
import seaborn as sns
import matplotlib.pyplot as plt
data = {
'A': [1, 2, 3],
'B': [2, 3, 4],
'C': [1, 3, 5],
'Index': ['I', 'II', 'III']
}
```
```python
sns.heatmap(pd.DataFrame(data).set_index('Index'))
```

For polars, the current workaround:
- set ticklabels in matplotlib manually
```python
df = pl.DataFrame(data)
fig, ax = plt.subplots()
sns.heatmap(df.drop('Index'), ax=ax)
ax.set_yticklabels(df.get_column('Index'))
```
- convert to pandas dataframe
```python
sns.heatmap(pl.DataFrame(data).to_pandas().set_index('Index'))
```
| closed | 2023-09-30T14:31:34Z | 2023-09-30T15:00:45Z | https://github.com/mwaskom/seaborn/issues/3508 | [] | stevenlis | 1 |
plotly/jupyter-dash | jupyter | 93 | Error 403 | Not able to run any example of jupyter-dash on google colab.
A minimal reproducible example is the [example on this repo](https://github.com/plotly/jupyter-dash/blob/master/notebooks/getting_started.ipynb) itself.
With this repo and all other examples I found on the Internet, I get the same single _Error 403_:

| open | 2022-06-27T19:10:39Z | 2022-12-08T17:41:49Z | https://github.com/plotly/jupyter-dash/issues/93 | [] | d-s-dc | 2 |
Josh-XT/AGiXT | automation | 928 | Agent Management - OpenAI overrides local configured provider | ### Description
New Agent Provider setting is not saved. I suspect it is the new error handler.
In the console log I get only 200 OKs for all transactions.
I followed the instructions for GPT4all. https://josh-xt.github.io/AGiXT/3-Providers/GPT4ALL.html


Here is an export from a saved GPT4all agent; it does not contain the correct data.
```
{
"commands": null,
"settings": {
"provider": "openai",
"embedder": "openai",
"AI_MODEL": "gpt-3.5-turbo-16k-0613",
"AI_TEMPERATURE": "0.7",
"AI_TOP_P": "1",
"MAX_TOKENS": "16000",
"helper_agent_name": "OpenAI",
"WEBSEARCH_TIMEOUT": 0,
"OPENAI_API_KEY": "YOUR_OPENAI_API_KEY_HERE",
"WAIT_BETWEEN_REQUESTS": 1,
"WAIT_AFTER_FAILURE": 3,
"stream": false,
"WORKING_DIRECTORY": "./WORKSPACE",
"WORKING_DIRECTORY_RESTRICTED": true,
"AUTONOMOUS_EXECUTION": false
},
"enabled_commands": []
}
```
### Steps to Reproduce the Bug
1. Deploy a new setup via docker-compose method in AGiXT.sh script.
2. Go to Agent Management
3. Create new Agent
4. Select a provider like "gpt4all"
5. set Provider specific settings
6. go to bottom and save
7. Export the settings, or go to Agent Interaction and back to Agent Management and select the newly created custom agent, or test the new agent; the console will show that OpenAI was tried.
### Expected Behavior
Saved Provider Settings.
### Operating System
- [X] Linux
- [ ] Microsoft Windows
- [ ] Apple MacOS
- [ ] Android
- [ ] iOS
- [ ] Other
### Python Version
- [ ] Python <= 3.9
- [X] Python 3.10
- [ ] Python 3.11
### Environment Type - Connection
- [ ] Local - You run AGiXT in your home network
- [ ] Remote - You access AGiXT through the internet
### Runtime environment
- [X] Using docker compose
- [ ] Using local
- [ ] Custom setup (please describe above!)
### Acknowledgements
- [X] I have searched the existing issues to make sure this bug has not been reported yet.
- [X] I am using the latest version of AGiXT.
- [X] I have provided enough information for the maintainers to reproduce and diagnose the issue. | closed | 2023-08-15T21:31:15Z | 2023-08-17T04:33:02Z | https://github.com/Josh-XT/AGiXT/issues/928 | [
"type | report | bug",
"needs triage"
] | m4t7 | 4 |
piskvorky/gensim | machine-learning | 3,098 | Signpost page for new issues | BTW, Numpy have a nice signpost page for new issues: https://github.com/numpy/numpy/issues/new/choose
Let's see how they did it and do it for Gensim too :) Many don't read / respect our current issue template.
_Originally posted by @piskvorky in https://github.com/RaRe-Technologies/gensim/issues/3097#issuecomment-811699331_ | open | 2021-04-01T07:20:36Z | 2021-04-01T07:20:51Z | https://github.com/piskvorky/gensim/issues/3098 | [
"documentation",
"housekeeping"
] | piskvorky | 0 |
DistrictDataLabs/yellowbrick | matplotlib | 344 | Version 0.6.0 Release | Steps for 0.6.0 version bump:
- [x] Branch release-0.6.0
- [x] Do version bump
- [x] Update tests and run tests
- [x] Create change log
- [x] Review documentation build
- [x] Merge into `master`
- [x] Push release to PyPI ([instructions](https://bbengfort.github.io/programmer/2016/01/20/packaging-with-pypi.html))
- [x] Create 0.6.0 tag
- [x] Copy change log to release notes
- [x] Push documentation to Read the Docs
- [x] Merge release into `develop`
- [x] Delete release
- [x] Make Conda package
- [x] Announce! | closed | 2018-03-17T14:25:32Z | 2018-03-21T20:39:56Z | https://github.com/DistrictDataLabs/yellowbrick/issues/344 | [] | rebeccabilbro | 3 |
Miserlou/Zappa | django | 1,585 | Context header mappings do not override http headers | ## Context
I'm using an external lambda authorizer on API Gateway and returning some user info in the context that will be consumed by my API. I'm using the context_header_mappings setting to pass the user_id from the gateway authorizer to the API in the `apigw_user_id` header. The APIGW authorizer function is used mostly for authentication and could potentially return None as the value for `apigw_user_id`; it would be up to the view to decide whether an authenticated user is required. However, if I deliberately make a request with an unauthenticated user but manually supply an `apigw_user_id` header, that header is passed straight to my function, essentially bypassing the authorizer.
## Expected Behavior
Headers defined in `context_header_mappings` should _always_ override manually passed headers if the Authorizer supplies a value for them ( even if the value is None )
## Actual Behavior
If an Authorizer returns None for a context variable, then the value from any HTTP header matching that name is used instead.
## Possible Fix
Alter function here: https://github.com/Miserlou/Zappa/blob/6ab48b0db4ce1679935a36a63d44b4fca183632b/zappa/wsgi.py#L46 to favour values passed from the APIGW Authorizer context over any value passed in the original header.
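A rough sketch of what that change could look like; the helper and variable names here are hypothetical, not the real Zappa internals:

```python
def resolve(context, dotted_path):
    # hypothetical helper: walk "authorizer.user_id"-style paths
    for part in dotted_path.split("."):
        context = (context or {}).get(part)
    return context

for header, context_path in context_header_mappings.items():
    value = resolve(event.get("requestContext", {}), context_path)
    # always take the authorizer's value, even if it is None,
    # so a client-supplied header can never leak through
    environ["HTTP_" + header.upper().replace("-", "_")] = value
```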
## Steps to Reproduce
1. Create an external lambda authorizer function returning the authenticated user id ( or None ) in a context variable
2. Attach the authorizer to a zappa deployed function
3. Include context_header_mappings in zappa_settings.json mapping the apigw context variable to an HTTP header
4. Make a curl request to the deployed api, omitting any Authorization tokens required by the gateway, but include the header mapped from the context variable
5. The mapped header is passed directly to the function, overriding whatever the gateway has returned
## Your Environment
* Zappa version used: 0.46.2
* Operating System and Python version: python 3.6.2
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.py`:
```
{
"dev": {
"app_function": "app.app",
"aws_region": "eu-central-1",
"profile_name": "default",
"project_name": "crowdcomms-livepolling",
"runtime": "python3.6",
"s3_bucket": "zappa-dwtdfjpuq",
"context_header_mappings": {
"apigw_user_id": "authorizer.user_id",
"authorization": "authorizer.auth_token"
}
}
}
```
| open | 2018-08-10T10:21:14Z | 2018-08-10T10:23:10Z | https://github.com/Miserlou/Zappa/issues/1585 | [] | bharling | 1 |
modin-project/modin | pandas | 7,117 | Support building range-partitioning from an index level | closed | 2024-03-25T14:52:20Z | 2024-04-02T16:12:56Z | https://github.com/modin-project/modin/issues/7117 | [
"new feature/request 💬",
"P1",
"partitions reshuffling 🔀"
] | dchigarev | 0 | |
pydantic/pydantic | pydantic | 10,951 | PrivateAttr not working when using it in dataclasses in python 3.11 | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
When trying to use `PrivateAttr` as a value to specify private fields, an exception is raised.
The exception message is the following:
```
ValueError: mutable default <class 'pydantic.fields.ModelPrivateAttr'> for field _pv_prop is not allowed: use default_factory
```
Checking the [python documentation](https://docs.python.org/3.11/library/dataclasses.html#mutable-default-values), it says that in version 3.11, I quote, `unhashable objects are now not allowed`; hence, I think, the issue shown in this ticket.
Thanks in advance!
### Example Code
```Python
from pydantic import PrivateAttr
from pydantic.dataclasses import dataclass
@dataclass
class MyClass:
name:str
_pv_prop: str = PrivateAttr()
def __post_init__(self):
self._pv_prop = "test"
```
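Until this is fixed, one workaround sketch (untested) is to skip the dataclass-level default entirely and assign a plain instance attribute in `__post_init__`, which keeps the field out of the 3.11 mutable-default check:

```python
from pydantic.dataclasses import dataclass

@dataclass
class MyClass:
    name: str

    def __post_init__(self):
        # plain attribute, invisible to the dataclass field machinery
        self._pv_prop = "test"
```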
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.0
pydantic-core version: 2.27.0
pydantic-core build: profile=release pgo=false
install path: /usr/local/lib/python3.11/site-packages/pydantic
python version: 3.11.9 (main, Sep 11 2024, 00:00:00) [GCC 11.5.0]
platform: Linux-6.6.41-0-virt-x86_64-with-glibc2.34
related packages: typing_extensions-4.12.2 fastapi-0.115.5
commit: unknown
```
| closed | 2024-11-23T00:46:43Z | 2024-11-26T19:48:32Z | https://github.com/pydantic/pydantic/issues/10951 | [
"bug V2",
"pending"
] | Estebanrg21 | 3 |
tiangolo/uwsgi-nginx-flask-docker | flask | 249 | Is it possible to suppress all `chown` calls? | Hi all,
I'm trying to run a container built upon this base image on a platform where `chown` is not permitted.
Is there a way to suppress all these calls, from both `uwsgi` and `nginx`? | closed | 2021-10-05T16:37:14Z | 2024-08-29T00:17:45Z | https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/249 | [] | khuongduybui | 0 |
recommenders-team/recommenders | machine-learning | 1,361 | [FEATURE] Installation via `pip install recommenders` or `conda install recommenders` | ### Description
Would your team consider making the recommenders package installable from the PyPi and/or Anaconda package repository?
Rather than cloning the git repository and installing from that, users could
`pip install recommenders`
or
`conda install recommenders`
### Other Comments
I would love to help in any way I can.
| closed | 2021-03-29T00:27:53Z | 2021-05-07T13:37:50Z | https://github.com/recommenders-team/recommenders/issues/1361 | [
"enhancement"
] | zkneupper | 4 |
vanna-ai/vanna | data-visualization | 431 | How to disable chart generation? | Is there any way to disable chart generation?
We found that sometimes chart generation is very slow, and we just want to get the number.
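As a workaround sketch, calling the lower-level steps directly should avoid the plotting stage entirely; I'm assuming `generate_sql`/`run_sql` are the calls that `ask` composes:

```python
# bypass ask() and its chart step: generate SQL, run it, keep the dataframe
sql = vn.generate_sql(question="How many users signed up last week?")
df = vn.run_sql(sql)
print(df)
```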
| open | 2024-05-10T07:42:25Z | 2024-06-04T22:37:11Z | https://github.com/vanna-ai/vanna/issues/431 | [] | njalan | 2 |
apachecn/ailearning | scikit-learn | 659 | Data analysis 1 | closed | 2024-11-12T19:47:35Z | 2024-11-14T09:44:23Z | https://github.com/apachecn/ailearning/issues/659 | [] | FSman101 | 2 | |
NullArray/AutoSploit | automation | 655 | Unhandled Exception (5e7e49ee4) | Autosploit version: `3.1`
OS information: `Linux-4.19.0-kali3-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error meesage: `[Errno 2] No such file or directory: '/home/Autosploit/hosts.txt'`
Error traceback:
```
Traceback (most recent call):
File "/home/Autosploit/autosploit/main.py", line 116, in main
terminal.terminal_main_display(loaded_tokens)
File "/home/Autosploit/lib/term/terminal.py", line 598, in terminal_main_display
self.__reload()
File "/home/Autosploit/lib/term/terminal.py", line 72, in __reload
self.loaded_hosts = open(lib.settings.HOST_FILE).readlines()
IOError: [Errno 2] No such file or directory: '/home/Autosploit/hosts.txt'
```
Metasploit launched: `False`
| closed | 2019-04-13T13:54:51Z | 2019-04-17T18:33:02Z | https://github.com/NullArray/AutoSploit/issues/655 | [] | AutosploitReporter | 0 |
google-research/bert | tensorflow | 673 | How to add learning rate into tensorboard? | As title, how can we add learning rate into tensorboard? | open | 2019-06-04T05:26:10Z | 2019-06-04T05:26:10Z | https://github.com/google-research/bert/issues/673 | [] | shunshunyin | 0 |
robotframework/robotframework | automation | 5,065 | Create a possibility to "replay" an output.xml (fast/realtime) with rebot and ListenerAPI there | There are listeners out there in the field that needs to be attached to a running robot and then posts results.
There are two use-cases for that:
1. Development of listeners (at least "read-only" listeners)
2. using reporting Listeners that can be run on output.xml
a. (optionally) in real time, if the listener depends on the current time
b. in fast mode, without any consideration of timings.
## Examples:
### Allure
One example is the Allure report.
it has a listener that needs to run with the robot run.
It would be cool to just let that run with `rebot --listener robotframework-allure output.xml`
So it would not be needed during exec.
### Failures in Listeners
There could be a situation where a listener fails and you do not understand why, but the error only happens after 2h of running robot. Running again on the existing output.xml would be good for debugging, and after fixing the listener you could still use these results to publish.
### Listener Development
When you develop a listener you run robot tests multiple times, but with bigger or non-deterministic runs it is hard to test your listener. A "rerun" option would be cool for that as well.
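As a sketch of the idea, the public result API can already walk an existing output.xml and drive a listener-like object; the mapping from result objects to Listener API arguments below is an assumption:

```python
from robot.api import ExecutionResult, ResultVisitor

class ReplayVisitor(ResultVisitor):
    """Feed test start/end events from output.xml to a listener object."""

    def __init__(self, listener):
        self.listener = listener

    def start_test(self, test):
        self.listener.start_test(test.name, {"status": test.status})

    def end_test(self, test):
        self.listener.end_test(test.name, {"status": test.status})

result = ExecutionResult("output.xml")
result.visit(ReplayVisitor(my_listener))  # my_listener: any listener instance
```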
Cheers
René
| open | 2024-02-27T10:24:40Z | 2024-02-27T14:01:13Z | https://github.com/robotframework/robotframework/issues/5065 | [] | Snooz82 | 1 |
microsoft/hummingbird | scikit-learn | 626 | Support for hinge loss on Sklearn SGDClassifier | Ref error message from hummingbird-ml:
> AssertionError: predict_proba for linear models currently only support {'modified_huber', 'squared_hinge', 'log'}. (Given hinge). Please fill an issue at https://github.com/microsoft/hummingbird
Simple enough to get around using squared_hinge, but it yields a significant performance loss compared to hinge, at least for a single epoch.
Hummingbird version: '0.4.5'
Ran on Python 3.9.12 (main, Jun 1 2022, 11:38:51)
[GCC 7.5.0] :: Anaconda, Inc. on linux.
Simple to reproduce, see code below:
```
from sklearn.linear_model import SGDClassifier
from sklearn import datasets
from sklearn import metrics
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
from hummingbird.ml import convert, load
#! Built on the sklearn intro example: https://scikit-learn.org/stable/tutorial/basic/tutorial.html
# Data loading
# iris = datasets.load_iris()
digits = datasets.load_digits()
# Data engineering
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))
# Training split
X_train, X_test, y_train, y_test = train_test_split(data, digits.target, test_size=0.90, shuffle=False)
# Model definition and parameter selection
clf = SGDClassifier(loss="hinge")
# Model training
clf.fit(X_train, y_train)
model = convert(clf, "pytorch")
# Model prediction
# predicted = clf.predict(X_test)
predicted = model.predict(X_test)
model.save("hb_model")
model = load("hb_model")
# Model evaluation
# Classification report
print(f"Classification report for classifier {clf}:\n" f"{metrics.classification_report(y_test, predicted)}\n")
# Confusion matrix - plot
disp = metrics.ConfusionMatrixDisplay.from_predictions(y_test, predicted)
disp.figure_.suptitle("Confusion Matrix")
print(f"Confusion matrix:\n{disp.confusion_matrix}")
plt.show()
# Write results to file
report = metrics.classification_report(y_test, predicted)
```
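If probabilities are the actual goal with a hinge-trained model, one possible workaround (untested; whether hummingbird accepts this wrapper is an assumption to verify) is to calibrate the classifier before converting:

```python
from sklearn.calibration import CalibratedClassifierCV

# wraps the hinge-loss SGD model and exposes predict_proba via calibration
clf = CalibratedClassifierCV(SGDClassifier(loss="hinge"))
clf.fit(X_train, y_train)
```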
| closed | 2022-08-25T15:32:20Z | 2024-02-13T18:43:59Z | https://github.com/microsoft/hummingbird/issues/626 | [
"help wanted"
] | Economax | 2 |
psf/black | python | 4,374 | no info about setting by default | There is no information in the documentation about the default black parameters
Does not exists info - https://black.readthedocs.io/en/stable/usage_and_configuration/the_basics.html#exclude
Isort have this info - https://pycqa.github.io/isort/docs/configuration/options.html#skip | open | 2024-06-01T13:29:44Z | 2024-06-01T14:16:03Z | https://github.com/psf/black/issues/4374 | [
"T: documentation"
] | ArtemIsmagilov | 3 |
modAL-python/modAL | scikit-learn | 120 | BayesianOptimizer gives negative accuracy | Hi,
I implemented the sample code here :
https://modal-python.readthedocs.io/en/latest/content/apireference/models.html
However, when I switched X (the training data in the sample code) to `X = np.linspace(0, 22, 1000).reshape(-1, 1)`,
`optimizer.score(X, y)` gives me `-2.267766614571299`.
Kind Regards,
Eren. | open | 2021-01-28T14:18:42Z | 2021-01-31T08:18:38Z | https://github.com/modAL-python/modAL/issues/120 | [] | erenarkangil | 1 |
NullArray/AutoSploit | automation | 368 | Unhandled Exception (eef9b858a) | Autosploit version: `3.0`
OS information: `Linux-4.15.0-1021-aws-x86_64-with-Ubuntu-18.04-bionic`
Running context: `autosploit.py`
Error message: `argument of type 'NoneType' is not iterable`
Error traceback:
```
Traceback (most recent call):
File "/home/ubuntu/AutoSploit/autosploit/main.py", line 117, in main
terminal.terminal_main_display(loaded_tokens)
File "/home/ubuntu/AutoSploit/lib/term/terminal.py", line 537, in terminal_main_display
if "help" in choice_data_list:
TypeError: argument of type 'NoneType' is not iterable
```
Metasploit launched: `False`
| closed | 2019-01-17T07:52:29Z | 2019-02-19T04:21:18Z | https://github.com/NullArray/AutoSploit/issues/368 | [] | AutosploitReporter | 0 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,710 | [Bug]: M4 MacBook Pro WebUI Installation error | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I have an M4 Pro MacBook Pro, and I am trying to install stable-diffusion-webui by following the provided guide. However, I got the error below.
<img width="1242" alt="image" src="https://github.com/user-attachments/assets/c48e237a-1553-4ba5-8ca8-dcad79351a85">
### Steps to reproduce the problem
1. Homebrew is installed
2. Open a new terminal window and run brew install cmake protobuf rust python@3.10 git wget
3. Clone the web UI repository by running git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
4. Place Stable Diffusion models/checkpoints you want to use into stable-diffusion-webui/models/Stable-diffusion.
5. cd stable-diffusion-webui and then ./webui.sh to run the web UI.
### What should have happened?
webui.sh should run successfully so that I can use the WebUI.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
none
### Console logs
```Shell
https://drive.google.com/file/d/1XaYtgnY5_Ye6VqjVFcPIs8gJlVu9NArX/view?usp=share_link
```
### Additional information
_No response_ | closed | 2024-12-08T16:59:55Z | 2024-12-09T13:26:33Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16710 | [
"bug-report"
] | huixin-g | 1 |
capitalone/DataProfiler | pandas | 633 | PSI added to diff of numerical stat columns | **Is your feature request related to a problem? Please describe.**
Within https://github.com/capitalone/DataProfiler/blob/main/dataprofiler/profilers/numerical_column_stats.py#L350
Need to add PSI - https://medium.com/model-monitoring-psi/population-stability-index-psi-ab133b0a5d42
There should be a helper function for the calculation, called within the `def diff` function; see the sketch below.
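A minimal sketch of what that helper could look like (the equal-width binning and the smoothing epsilon are my assumptions, not a spec):

```python
import numpy as np

def population_stability_index(expected, actual, bins=10, eps=1e-6):
    """PSI between two samples sharing bin edges derived from `expected`."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # normalize counts to proportions; eps avoids log(0) in empty bins
    p = np.histogram(expected, bins=edges)[0] / max(len(expected), 1) + eps
    q = np.histogram(actual, bins=edges)[0] / max(len(actual), 1) + eps
    return float(np.sum((p - q) * np.log(p / q)))
```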
**Describe the outcome you'd like:**
receiving PSI in the `diff` command of NumericalStatsMixin
Tests around the addition.
**Additional context:**
| closed | 2022-09-16T15:48:58Z | 2022-11-30T16:56:46Z | https://github.com/capitalone/DataProfiler/issues/633 | [
"New Feature",
"good_first_issue"
] | JGSweets | 2 |
automl/auto-sklearn | scikit-learn | 1,625 | KNearestNeighborsRegressor has no attribute 'estimator' when printing show_models() | I have tried to print the models composing the best ensemble with `show_models()`, but it fails if a `k_nearest_neighbours_regressor` is one of them. Is this due to this component not having an initialised `self.estimator`? I am making a custom component with that modification now, and will update this issue if said model comes up in the ensemble again (whether it fixes it or not).
```
>>> automl.leaderboard()
rank ensemble_weight type cost duration
model_id
826 1 0.34 decision_tree 0.556544 3.839919
742 2 0.42 k_nearest_neighbors 0.563224 2.659213
1856 3 0.24 adaboost 0.570269 9.341588
>>> automl.show_models()
Traceback (most recent call last):
File "/gpfs/home/xxx/automlBiscuits.py", line 40, in <module>
pprint(automl.show_models(), indent=4)
File "/gpfs/home/xxx/miniconda3/lib/python3.9/site-packages/autosklearn/estimators.py", line 888, in show_models
return self.automl_.show_models()
File "/gpfs/home/xxx/miniconda3/lib/python3.9/site-packages/autosklearn/automl.py", line 2227, in show_models
] = autosklearn_wrapped_model.choice.estimator
AttributeError: 'KNearestNeighborsRegressor' object has no attribute 'estimator'
```
"bug"
] | MrKevinDC | 3 |
tensorpack/tensorpack | tensorflow | 1,193 | COCO data layout instructions | In the FasterRCNN [readme](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN/README.md) , it says that that you need to lay out your COCO data like this:
```
COCO/DIR/
annotations/
instances_train201?.json
instances_val201?.json
train201?/
COCO_train201?_*.jpg
val201?/
COCO_val201?_*.jpg
```
Training seems to be working (so far) with a data layout like
```
COCO/DIR/
annotations/
instances_train201?.json
instances_val201?.json
train201?/
*.jpg
val201?/
*.jpg
```
where *.jpg looks like `000000066822.jpg`
Am I missing something in the code where that jpg prefix is important? | closed | 2019-05-15T23:57:26Z | 2019-05-16T01:22:16Z | https://github.com/tensorpack/tensorpack/issues/1193 | [
"examples"
] | armandmcqueen | 1 |
thtrieu/darkflow | tensorflow | 673 | Error:running the demo without output window | Hi,
When I process a video with darkflow, I use the command `flow --model cfg/yolo.cfg --load bin/yolo.weights --demo VID.mp4 --gpu 0.9`, but no output window appears. Why?
> (tensorflow) dell@dell:~/darkflow$ flow --model cfg/yolo.cfg --load bin/yolo.weights --demo VID.mp4 --gpu 0.9
> /home/dell/.conda/envs/tensorflow/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
> return f(*args, **kwds)
>
> Parsing
> ./cfg/yolo.cfg
> Parsing cfg/yolo.cfg
> Loading bin/yolo.weights ...
> Successfully identified 203934260 bytes
> Finished in 0.007088899612426758s
> Model has a coco model name, loading coco labels.
>
> Building net ...
> Source | Train? | Layer description | Output size
> -------+--------+----------------------------------+---------------
> | | input | (?, 608, 608, 3)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 608, 608, 32)
> Load | Yep! | maxp 2x2p0_2 | (?, 304, 304, 32)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 304, 304, 64)
> Load | Yep! | maxp 2x2p0_2 | (?, 152, 152, 64)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 152, 152, 128)
> Load | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 152, 152, 64)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 152, 152, 128)
> Load | Yep! | maxp 2x2p0_2 | (?, 76, 76, 128)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 76, 76, 256)
> Load | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 76, 76, 128)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 76, 76, 256)
> Load | Yep! | maxp 2x2p0_2 | (?, 38, 38, 256)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 38, 38, 512)
> Load | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 38, 38, 256)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 38, 38, 512)
> Load | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 38, 38, 256)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 38, 38, 512)
> Load | Yep! | maxp 2x2p0_2 | (?, 19, 19, 512)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 19, 19, 1024)
> Load | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 19, 19, 512)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 19, 19, 1024)
> Load | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 19, 19, 512)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 19, 19, 1024)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 19, 19, 1024)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 19, 19, 1024)
> Load | Yep! | concat [16] | (?, 38, 38, 512)
> Load | Yep! | conv 1x1p0_1 +bnorm leaky | (?, 38, 38, 64)
> Load | Yep! | local flatten 2x2 | (?, 19, 19, 256)
> Load | Yep! | concat [27, 24] | (?, 19, 19, 1280)
> Load | Yep! | conv 3x3p1_1 +bnorm leaky | (?, 19, 19, 1024)
> Load | Yep! | conv 1x1p0_1 linear | (?, 19, 19, 425)
> -------+--------+----------------------------------+---------------
> GPU mode with 0.9 usage
> 2018-03-27 16:13:40.244174: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
> 2018-03-27 16:13:40.351892: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
> 2018-03-27 16:13:40.352171: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
> name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.835
> pciBusID: 0000:01:00.0
> totalMemory: 7.92GiB freeMemory: 7.55GiB
> 2018-03-27 16:13:40.352184: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
> Finished in 2.0977280139923096s
>
> Press [ESC] to quit demo
> 24.337 FPS
> End of Video
>
> Demo stopped, exit
When I use the command `flow --model cfg/yolo.cfg --load bin/yolo.weights --demo VID.mp4 --gpu 0.9 --saveVideo`, a video is stored but there is still no output window.
So what's the problem? Thanks.
microsoft/RD-Agent | automation | 646 | Where should I place daily_pv.h5 file so that it can be found? | Hi, I just tried running the `rdagent fin_factor` command.
This command always gives me a `FileNotFoundError: File daily_pv.h5 does not exist` error.
I'm wondering where I should put this file?
I found that this file exists in `RD-Agent/git_ignore_folder/factor_implementation_source_data/daily_pv.h5`.
Why can't `rdagent fin_factor` detect this file automatically?
Here's the complete error message:
```
Role:user
Content: --------------Factor information:---------------
factor_name: Volume-Price Trend Factor
factor_description: This factor calculates the cumulative product of daily volume and the percentage change in closing price over a 20-day window. It aims to capture market momentum and investor sentiment by analyzing how changes in trading volume correlate with price movements.
factor_formulation: \text{Volume-Price Trend Factor}_{t} = \sum_{i=t-19}^{t} \left( V_{i} \times \frac{P_{i} - P_{i-1}}{P_{i-1}} \right)
variables: {'V_i': 'Trading volume on day i.', 'P_i': 'Closing price on day i.', 'P_{i-1}': 'Closing price on the previous day (i-1).'}
--------------Execution feedback:---------------
Traceback (most recent call last):
File "/path/to/factor.py", line 23, in <module>
main()
File "/path/to/factor.py", line 16, in main
df = pd.read_hdf('daily_pv.h5', key='data')
File "/path/to/site-packages/pandas/io/pytables.py", line 424, in read_hdf
raise FileNotFoundError(f"File {path_or_buf} does not exist")
FileNotFoundError: File daily_pv.h5 does not exist
Expected output file not found.
``` | open | 2025-02-26T16:21:01Z | 2025-03-08T17:36:43Z | https://github.com/microsoft/RD-Agent/issues/646 | [
"question"
] | lyenliang | 4 |
keras-team/autokeras | tensorflow | 1,086 | Saving trained Model and trained model interpretation | ### Bug Description
This is more like the clarification on the tutorial description in very basic level. I tried
https://autokeras.com/tutorial/image_classification/
I try to understand the result. I got the following output by clf.fit(x_train, y_train,epochs=3)
Does this mean 3 models are compared and in the last step they try again with the bets score model? (Trial ID: 5ef9850ad12a412e6263423d2bccf89a, Score: 0.06611143700537893)
I think only best model (for this used dataset and specified epochs) can be saved by
model = clf.export_model()
model.save()
using regular Keras model class.
Is there any way to save other models used for this training (not only the best model)?
```
(60000, 28, 28)
(60000,)
[5 0 4]
Train for 1500 steps, validate for 375 steps
Epoch 1/3
2020-04-06 23:21:41.977044: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-04-06 23:21:45.890081: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
1/1500 [..............................] - ETA: 7:15:32 - loss: 2.2885 - accuracy: 0.1250
~~
1500/1500 [==============================] - 31s 21ms/step - loss: 0.1746 - accuracy: 0.9469 - val_loss: 0.0689 - val_accuracy: 0.9793
Epoch 2/3
1/1500 [..............................] - ETA: 5:37 - loss: 0.0783 - accuracy: 0.9688
~~
1500/1500 [==============================] - 13s 9ms/step - loss: 0.0774 - accuracy: 0.9762 - val_loss: 0.0488 - val_accuracy: 0.9863
Epoch 3/3
1/1500 [..............................] - ETA: 5:28 - loss: 0.0239 - accuracy: 1.0000
~~
1500/1500 [==============================] - 13s 9ms/step - loss: 0.0625 - accuracy: 0.9806 - val_loss: 0.0477 - val_accuracy: 0.9860
[Trial complete]
[Trial summary]
|-Trial ID: 6f63dad309051c3971521ba643fad0c7
|-Score: 0.0476510140611014
|-Best step: 0
> Hyperparameters:
|-classification_head_1/dropout_rate: 0.5
|-classification_head_1/spatial_reduction_1/reduction_type: flatten
|-dense_block_1/dropout_rate: 0
|-dense_block_1/num_layers: 1
|-dense_block_1/units_0: 128
|-dense_block_1/use_batchnorm: False
|-image_block_1/augment: False
|-image_block_1/block_type: vanilla
|-image_block_1/conv_block_1/dropout_rate: 0.25
|-image_block_1/conv_block_1/filters_0_0: 32
|-image_block_1/conv_block_1/filters_0_1: 64
|-image_block_1/conv_block_1/kernel_size: 3
|-image_block_1/conv_block_1/max_pooling: True
|-image_block_1/conv_block_1/num_blocks: 1
|-image_block_1/conv_block_1/num_layers: 2
|-image_block_1/conv_block_1/separable: False
|-image_block_1/normalize: True
|-optimizer: adam
Train for 1500 steps, validate for 375 steps
Epoch 1/3
1/1500 [..............................] - ETA: 3:21:48 - loss: 2.9383 - accuracy: 0.0938
~~
1500/1500 [==============================] - 157s 105ms/step - loss: 0.2569 - accuracy: 0.9306 - val_loss: 0.1597 - val_accuracy: 0.9577
Epoch 2/3
1/1500 [..............................] - ETA: 7:40 - loss: 0.0488 - accuracy: 0.9688
~~
1500/1500 [==============================] - 151s 100ms/step - loss: 0.1119 - accuracy: 0.9716 - val_loss: 0.0661 - val_accuracy: 0.9804
Epoch 3/3
1/1500 [..............................] - ETA: 8:19 - loss: 0.0624 - accuracy: 0.9688
~~
1500/1500 [==============================] - 148s 98ms/step - loss: 0.0708 - accuracy: 0.9797 - val_loss: 0.0751 - val_accuracy: 0.9791
[Trial complete]
[Trial summary]
|-Trial ID: 5ef9850ad12a412e6263423d2bccf89a
|-Score: 0.06611143700537893
|-Best step: 0
> Hyperparameters:
|-classification_head_1/dropout_rate: 0
|-dense_block_1/dropout_rate: 0
|-dense_block_1/num_layers: 2
|-dense_block_1/units_0: 32
|-dense_block_1/units_1: 32
|-dense_block_1/use_batchnorm: False
|-image_block_1/augment: True
|-image_block_1/block_type: resnet
|-image_block_1/normalize: True
|-image_block_1/res_net_block_1/conv3_depth: 4
|-image_block_1/res_net_block_1/conv4_depth: 6
|-image_block_1/res_net_block_1/pooling: avg
|-image_block_1/res_net_block_1/version: v2
|-optimizer: adam
Train for 1500 steps, validate for 375 stepsEpoch 1/3
1/1500 [..............................] - ETA: 15:56 - loss: 2.4124 - accuracy: 0.0938
~~
1500/1500 [==============================] - 14s 9ms/step - loss: 0.1805 - accuracy: 0.9457 - val_loss: 0.0664 - val_accuracy: 0.9797
Epoch 2/3
1/1500 [..............................] - ETA: 5:27 - loss: 0.0402 - accuracy: 1.0000
~~
1500/1500 [==============================] - 13s 9ms/step - loss: 0.0775 - accuracy: 0.9759 - val_loss: 0.0544 - val_accuracy: 0.9843
Epoch 3/3
1/1500 [..............................] - ETA: 5:20 - loss: 0.0184 - accuracy: 1.0000
~~
1500/1500 [==============================] - 13s 9ms/step - loss: 0.0628 - accuracy: 0.9807 - val_loss: 0.0520 - val_accuracy: 0.9855
[Trial complete]
[Trial summary]
|-Trial ID: 7af457cb193b4f2c9ed2b0a4051ea257
|-Score: 0.05198277689473859
|-Best step: 0
> Hyperparameters:
|-classification_head_1/dropout_rate: 0.5
|-classification_head_1/spatial_reduction_1/reduction_type: flatten
|-dense_block_1/dropout_rate: 0
|-dense_block_1/num_layers: 1
|-dense_block_1/units_0: 128
|-dense_block_1/use_batchnorm: False
|-image_block_1/augment: False
|-image_block_1/block_type: vanilla
|-image_block_1/conv_block_1/dropout_rate: 0.25
|-image_block_1/conv_block_1/filters_0_0: 32
|-image_block_1/conv_block_1/filters_0_1: 64
|-image_block_1/conv_block_1/kernel_size: 3
|-image_block_1/conv_block_1/max_pooling: True
|-image_block_1/conv_block_1/num_blocks: 1
|-image_block_1/conv_block_1/num_layers: 2
|-image_block_1/conv_block_1/separable: False
|-image_block_1/normalize: True
|-optimizer: adam
Train for 1875 steps, validate for 375 steps
Epoch 1/3
1/1875 [..............................] - ETA: 26:31 - loss: 2.2717 - accuracy: 0.0938
~~
1875/1875 [==============================] - 17s 9ms/step - loss: 0.1582 - accuracy: 0.9517 - val_loss: 0.0506 - val_accuracy: 0.9834
Epoch 2/3
1/1875 [..............................] - ETA: 12:54 - loss: 0.0143 - accuracy: 1.0000
~~
1875/1875 [==============================] - 16s 9ms/step - loss: 0.0729 - accuracy: 0.9769 - val_loss: 0.0289 - val_accuracy: 0.9911
Epoch 3/3
1/1875 [..............................] - ETA: 13:06 - loss: 0.0093 - accuracy: 1.0000
~~
1875/1875 [==============================] - 16s 9ms/step - loss: 0.0590 - accuracy: 0.9815 - val_loss: 0.0157 - val_accuracy: 0.9958
```
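On the last question: the underlying keras-tuner object can usually hand back more than one trained model; that it is exposed as `clf.tuner` is my assumption about AutoKeras internals in this version:

```python
# hedged sketch: pull the top-k models from the search, not just the best one
models = clf.tuner.get_best_models(num_models=3)
for rank, m in enumerate(models):
    m.save(f"model_rank_{rank}.h5")
```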
### Setup Details
Include the details about the versions of:
- OS type and version: Ubuntu18.04
- Python: 3.6.9
- autokeras: 1.0
- keras-tuner:
- scikit-learn:
- numpy:
- pandas:
- tensorflow:2.1
| closed | 2020-04-07T02:56:00Z | 2020-06-14T07:31:15Z | https://github.com/keras-team/autokeras/issues/1086 | [
"wontfix"
] | takeofuture | 3 |
geopandas/geopandas | pandas | 2,954 | BUG:The to_crs function can only be used on the Windows platform. | I need to use the GeoDataFrame..to_crs("EPSG:4326",inplace=True) function to convert my vector layer to the WGS84 coordinate system.
My code was written on Windows, but the actual runtime environment is Linux.
On Windows, I need to set the environment variable 'PROJ_LIB'.
```
PROJ_LIB_PATH = r"D:\anaconda3\Lib\site-packages\rasterio\proj_data"
def judege_platform():
import platform
if platform.system() == "Windows":
os.environ['PROJ_LIB'] = PROJ_LIB_PATH
elif platform.system() == "Linux":
pass
```
But I cannot find the location of 'PROJ_LIB' on Linux.
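One way to locate it: pyproj can report the PROJ data directory it is actually using, which may serve as the `PROJ_LIB` value on Linux (assuming pyproj is the PROJ provider here):

```python
import pyproj.datadir

# prints the PROJ data directory pyproj resolved at import time
print(pyproj.datadir.get_data_dir())
```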
```python
result_gdf.to_crs("EPSG:4326", inplace=True)
```
However, it will raise an error on Linux.
```
File "/root/anaconda3/envs/dzpro/lib/python3.8/site-packages/geopandas/geodataframe.py", line 1364, in to_crs
geom = df.geometry.to_crs(crs=crs, epsg=epsg)
File "/root/anaconda3/envs/dzpro/lib/python3.8/site-packages/geopandas/geoseries.py", line 1124, in to_crs
self.values.to_crs(crs=crs, epsg=epsg), index=self.index, name=self.name
File "/root/anaconda3/envs/dzpro/lib/python3.8/site-packages/geopandas/array.py", line 779, in to_crs
new_data = vectorized.transform(self.data, transformer.transform)
File "/root/anaconda3/envs/dzpro/lib/python3.8/site-packages/geopandas/_vectorized.py", line 1114, in transform
new_coords_z = func(coords_z[:, 0], coords_z[:, 1], coords_z[:, 2])
File "/root/anaconda3/envs/dzpro/lib/python3.8/site-packages/pyproj/transformer.py", line 430, in transform
self._transformer._transform(
File "pyproj/_transformer.pyx", line 459, in pyproj._transformer._Transformer._transform
pyproj.exceptions.ProjError: x, y, z, and time must be same size
```
SOS!!!! | closed | 2023-07-09T14:55:01Z | 2023-07-09T19:35:13Z | https://github.com/geopandas/geopandas/issues/2954 | [
"installation"
] | mht2953658596 | 2 |
jina-ai/clip-as-service | pytorch | 175 | Stops on freeze in AWS Deep Learning AMI. | **Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
> Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh).
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04.1 LTS
- Python version: 3
---
### Running the server stops at freeze
When running with the pre-trained model BERT-Base, Uncased:
```
I:VENTILATOR:[__i:__i: 62]:freeze, optimize and export graph, could take a while...
I:GRAPHOPT:[gra:opt: 48]:model config: ./small/bert_config.json
I:GRAPHOPT:[gra:opt: 50]:checkpoint: ./small/bert_model.ckpt
I:GRAPHOPT:[gra:opt: 54]:build graph...
I:GRAPHOPT:[gra:opt:121]:load parameters from checkpoint...
I:GRAPHOPT:[gra:opt:123]:freeze...
```
Forcing it to close throws:
```
Process ForkPoolWorker-2:
Traceback (most recent call last):
File "/home/ec2-user/anaconda3/envs/tensorflow_p36/bin/bert-serving-start", line 13, in <module>
server = BertServer(args)
File "/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/site-packages/bert_serving/server/__init__.py", line 66, in __init__
self.graph_path = pool.apply(optimize_graph, (self.args,))
File "/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/multiprocessing/pool.py", line 259, in apply
return self.apply_async(func, args, kwds).get()
File "/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/multiprocessing/pool.py", line 638, in get
self.wait(timeout)
File "/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/multiprocessing/pool.py", line 635, in wait
self._event.wait(timeout)
File "/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/threading.py", line 551, in wait
signaled = self._cond.wait(timeout)
File "/home/ec2-user/anaconda3/envs/tensorflow_p36/lib/python3.6/threading.py", line 295, in wait
waiter.acquire()
KeyboardInterrupt
```
| closed | 2019-01-06T17:54:24Z | 2019-01-07T14:11:50Z | https://github.com/jina-ai/clip-as-service/issues/175 | [] | abhinavcode | 3 |
tensorflow/datasets | numpy | 4,919 | Please consider using platformdirs for the data directory | The data directory defaults to `~/tensorflow_datasets`: https://github.com/tensorflow/datasets/pull/4014. [The platformdirs project](https://github.com/platformdirs/platformdirs) gives the correct location for a data directory. I believe [this function](https://github.com/platformdirs/platformdirs/blob/b8c42ddca4def1fba38b9815a7d94ec2ac630b29/src/platformdirs/__init__.py#L71) may be the right one to call to give the appropriate directory. | closed | 2023-05-18T17:42:44Z | 2023-05-24T18:03:03Z | https://github.com/tensorflow/datasets/issues/4919 | [
"enhancement"
] | NeilGirdhar | 7 |
modoboa/modoboa | django | 2,995 | SystemCheckError: System check identified some issues | Hi,
I updated the modoboa version to version 2.1.2 and the cron jobs update_statistics, cleanlogs, check_mx and communicate_with_public_api all give the same error, which I report below
```
SystemCheckError: System check identified some issues:
ERRORS:
modoboa.Record.header_from: (fields.E304) Reverse accessor for 'modoboa.Record.header_from' clashes with reverse accessor for 'modoboa_dmarc.Record.header_from'.
HINT: Add or change a related_name argument to the definition for 'modoboa.Record.header_from' or 'modoboa_dmarc.Record.header_from'.
modoboa_dmarc.Record.header_from: (fields.E304) Reverse accessor for 'modoboa_dmarc.Record.header_from' clashes with reverse accessor for 'modoboa.Record.header_from'.
HINT: Add or change a related_name argument to the definition for 'modoboa_dmarc.Record.header_from' or 'modoboa.Record.header_from'.
```
How can I solve the issue?
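For reference, the Django hint is asking for something shaped like this (the target model and related_name below are placeholders, not the real modoboa schema):

```python
from django.db import models

class Record(models.Model):
    header_from = models.ForeignKey(
        "SomeTarget",                  # placeholder target model
        on_delete=models.CASCADE,
        related_name="dmarc_records",  # a unique name avoids the reverse-accessor clash
    )
```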
Thank you.
Best regards
Nicola | closed | 2023-05-05T07:02:32Z | 2023-05-05T07:50:39Z | https://github.com/modoboa/modoboa/issues/2995 | [] | nsabatelli | 2 |
Miserlou/Zappa | flask | 1,658 | module 'pip' has no attribute 'get_installed_distributions' |
## module 'pip' has no attribute 'get_installed_distributions'
No matter what you do you get this error. It's frustrating enough to put you off using Zappa anymore, which is unfortunate.
Python version: 3.6
## Expected Behavior
module 'pip' has no attribute 'get_installed_distributions'
## Actual Behavior
## Possible Fix
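For what it's worth, pip >= 10 removed the public `get_installed_distributions` helper, so code that calls it breaks on modern pip. A sketch of an equivalent listing that avoids pip internals:

```python
import pkg_resources

# same information without reaching into pip's private modules
installed = sorted(dist.project_name for dist in pkg_resources.working_set)
print(installed)
```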
## Steps to Reproduce
1.
2.
3.
## Your Environment
* Zappa version used:
* Operating System and Python version:
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.py`:
| open | 2018-10-17T13:23:02Z | 2018-10-19T19:32:55Z | https://github.com/Miserlou/Zappa/issues/1658 | [
"needs-info"
] | nabaz | 1 |
ivy-llc/ivy | tensorflow | 28,581 | Fix Frontend Failing Test: numpy - tensor.torch.Tensor.repeat | closed | 2024-03-13T14:25:27Z | 2024-03-16T15:32:48Z | https://github.com/ivy-llc/ivy/issues/28581 | [
"Sub Task"
] | ZenithFlux | 0 | |
pyg-team/pytorch_geometric | deep-learning | 9,520 | Take too long to install PyG on Colab | ### 😵 Describe the installation problem
I used to install the required packages to run PyG on Colab with the following code, within about 2 minutes.
```
import torch
def format_pytorch_version(version):
    return version.split('+')[0]

TORCH_version = torch.__version__
TORCH = format_pytorch_version(TORCH_version)

def format_cuda_version(version):
    return 'cu' + version.replace('.', '')
CUDA_version = torch.version.cuda
CUDA = format_cuda_version(CUDA_version)
!pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
!pip install torch-geometric
```
However, when I tried to run the same code on Colab today, it took 15 minutes to install torch-scatter, and after 30 minutes, I am still waiting for the second installation of torch-sparse to finish (it's taking a very long time at _Building wheels for collected packages: torch-sparse_). Is this due to recent updates to the packages? How can I install the required packages more quickly? Thank you very much!
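In case it helps with triage, a sketch of a pinned install that should pull prebuilt wheels instead of building from source (assuming torch 2.3.x + cu121 as in the environment below, and that the current wheel index lives at data.pyg.org) would be:
```
!pip install torch-scatter torch-sparse torch-cluster torch-spline-conv \
    -f https://data.pyg.org/whl/torch-2.3.0+cu121.html
!pip install torch-geometric
```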
### Environment
* PyG version:
* PyTorch version: 2.3.1
* OS:
* Python version: Python 3.10.12
* CUDA/cuDNN version: 12.1
* How you installed PyTorch and PyG (`conda`, `pip`, source):
* Any other relevant information (*e.g.*, version of `torch-scatter`):
| open | 2024-07-19T03:41:35Z | 2024-09-19T15:48:02Z | https://github.com/pyg-team/pytorch_geometric/issues/9520 | [
"installation"
] | xubingze | 4 |
serengil/deepface | deep-learning | 521 | Getting "an illegal memory access was encountered" when using GPU for Facial Recognition demo | I am trying to run the facial recognition demo but I am getting the following errors:
> 2022-07-25 05:50:01.611523: E tensorflow/stream_executor/cuda/cuda_event.cc:29] Error polling for event status: failed to query event: CUDA_ERROR_ILLEGAL_ADDRESS: an illegal memory access was encountered
> 2022-07-25 05:50:01.611647: F tensorflow/core/common_runtime/device/device_event_mgr.cc:221] Unexpected Event status: 1
When I run on CPU, everything works fine, but I keep getting this error when running on GPU. For reference, I am using TensorFlow 2.9.1 with CUDA 11.2 and cuDNN 8.1 | closed | 2022-07-25T02:54:27Z | 2022-07-25T08:22:48Z | https://github.com/serengil/deepface/issues/521 | [
"question"
] | teenaxta | 1 |
deepfakes/faceswap | deep-learning | 533 | AttributeError: module 'keras.backend' has no attribute 'normalize_data_format' | When I train with Keras 2.2, I hit the error in the title. I tried downgrading to Keras 2.1.6, but the issue still happens. | closed | 2018-11-12T22:38:27Z | 2018-11-12T22:43:22Z | https://github.com/deepfakes/faceswap/issues/533 | [] | ruah1984 | 1 |
marimo-team/marimo | data-science | 3,174 | marimo edit --sandbox no longer creates new files | ### Describe the bug
Sometime between 0.9.23 and 0.10.2, `marimo edit --sandbox new_file.py` no longer creates new files. `marimo edit new_file.py` works fine, but adding `--sandbox` gives `FileNotFoundError: [Errno 2] No such file or directory 'new_file.py'`.
### Environment
```
{
"marimo": "0.10.2",
"OS": "Darwin",
"OS Version": "24.1.0",
"Processor": "arm",
"Python Version": "3.11.2",
"Binaries": {
"Browser": "131.0.6778.140",
"Node": "v22.9.0"
},
"Dependencies": {
"click": "8.1.7",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.14.2",
"packaging": "24.2",
"psutil": "6.1.0",
"pygments": "2.18.0",
"pymdown-extensions": "10.12",
"pyyaml": "6.0.2",
"ruff": "0.8.0",
"starlette": "0.41.3",
"tomlkit": "0.13.2",
"typing-extensions": "4.9.0",
"uvicorn": "0.32.1",
"websockets": "12.0"
},
"Optional Dependencies": {
"pandas": "2.0.2",
"polars": "0.19.12"
}
}
```
### Code to reproduce
_No response_ | closed | 2024-12-14T19:28:13Z | 2024-12-16T21:06:40Z | https://github.com/marimo-team/marimo/issues/3174 | [
"bug"
] | anjiro | 1 |
python-gitlab/python-gitlab | api | 2,710 | project.repository_tree returns 404 for non-existent path (used to return an empty list) | ## Description of the problem, including code/CLI snippet
`project.repository_tree(path="xxx")` throws a 404 exception if the path `xxx` doesn't exist. Before, it didn't, and simply returned an empty list. It seems the behavior changed within the last week.
Working example:
```
import gitlab
from pprint import pprint
# Configuration
SERVER_URL = "https://gitlab.com"
GROUP_ID = 5054009
PROJECT_NAME = "kali-docs"
# Initialization
GL = gitlab.Gitlab(SERVER_URL)
group = GL.groups.get(GROUP_ID)
projects = group.projects.list(all=True)
# Select a project to work with
gproj = [p for p in projects if p.name == PROJECT_NAME][0]
pprint(gproj.attributes)
# Get a "manageable project"
proj = GL.projects.get(gproj.id)
# Get repo tree for a non-existent path
tree = proj.repository_tree(path="non-existent")
```
It would be nice if someone could confirm this change of behavior.
## Expected Behavior
`proj.repository_tree(path="non-existent")` used to return an empty list; there was no need to catch any exception.
## Actual Behavior
```
>>> tree = proj.repository_tree(path="non-existent")
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/gitlab/exceptions.py", line 337, in wrapped_f
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/gitlab/v4/objects/repositories.py", line 80, in repository_tree
return self.manager.gitlab.http_list(gl_path, query_data=query_data, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/gitlab/client.py", line 944, in http_list
gl_list = GitlabList(self, url, query_data, get_next=False, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/gitlab/client.py", line 1146, in __init__
self._query(url, query_data, **self._kwargs)
File "/usr/lib/python3/dist-packages/gitlab/client.py", line 1156, in _query
result = self._gl.http_request("get", url, query_data=query_data, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/gitlab/client.py", line 800, in http_request
raise gitlab.exceptions.GitlabHttpError(
gitlab.exceptions.GitlabHttpError: 404: 404 invalid revision or path Not Found
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/gitlab/cli.py", line 71, in wrapped_f
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/gitlab/exceptions.py", line 339, in wrapped_f
raise error(e.error_message, e.response_code, e.response_body) from e
gitlab.exceptions.GitlabGetError: 404: 404 invalid revision or path Not Found
```
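A possible interim workaround (a sketch; the helper name is mine, not part of python-gitlab) is to translate the 404 back into an empty list:
```python
import gitlab

def repository_tree_or_empty(project, **kwargs):
    """Restore the old behavior: a missing path yields an empty tree."""
    try:
        return project.repository_tree(**kwargs)
    except gitlab.exceptions.GitlabGetError as exc:
        if exc.response_code == 404:
            return []
        raise

tree = repository_tree_or_empty(proj, path="non-existent")
```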
## Specifications
- python-gitlab version: tested `2.5.0-1` (Debian 11 bullseye) and `3.12.0-1` (Debian unstable)
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): gitlab.com
| closed | 2023-10-30T06:33:07Z | 2024-11-04T01:45:32Z | https://github.com/python-gitlab/python-gitlab/issues/2710 | [
"upstream"
] | elboulangero | 3 |
quantumlib/Cirq | api | 6,508 | Incorrect Classical Register Size in `to_qasm` with Inhomogeneous Measurements | **Description of the issue**
When a `Circuit` contains multiple measurements under the same key but with varying sizes, the `to_qasm` method improperly assigns the classical register size, resulting in a smaller than required register.
A potential solution would be to fix this behavior directly, or to throw an exception when a circuit whose same-key measurements differ in size is translated to OpenQASM, preventing incorrect translations.
**How to reproduce the issue**
```python
import cirq
qubits = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.measure(qubits[0], key='c'),
    cirq.measure(qubits, key='c'),
)
# No issue
# circuit = cirq.Circuit(
#     cirq.measure(qubits, key='c'),
#     cirq.measure(qubits[0], key='c'),
# )
print(circuit.to_qasm())
```
Result:
```qasm
// Generated from Cirq v1.3.0
OPENQASM 2.0;
include "qelib1.inc";
// Qubits: [q(0), q(1)]
qreg q[2];
creg m_c[1]; // Incorrectly suggests a single-bit register, expected size is 2
measure q[0] -> m_c[0];
// Gate: cirq.MeasurementGate(2, cirq.MeasurementKey(name='c'), ())
measure q[0] -> m_c[0];
measure q[1] -> m_c[1];
```
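A possible workaround until this is fixed (a sketch; the key names below are just examples): give each measurement its own key, so each classical register is sized independently:
```python
import cirq

qubits = cirq.LineQubit.range(2)
circuit = cirq.Circuit(
    cirq.measure(qubits[0], key='c0'),  # single-qubit measurement, own key
    cirq.measure(qubits, key='c'),      # two-qubit measurement, own key
)
print(circuit.to_qasm())  # each key now gets its own correctly sized creg
```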
**Cirq version**
1.3.0
| closed | 2024-03-20T06:41:25Z | 2024-11-22T14:50:25Z | https://github.com/quantumlib/Cirq/issues/6508 | [
"kind/bug-report",
"triage/accepted",
"triage/needs-more-evidence",
"area/interop",
"area/qasm"
] | p51lee | 2 |
pydata/xarray | pandas | 9,263 | DOC: copybutton does not copy complete example | ### What happened?
In the documentation, the copy button sometimes fails to copy the complete example. One example can be seen in the [HDF section](https://docs.xarray.dev/en/stable/user-guide/io.html#hdf5)

This is because the number of dots (4 here) is not matched by the copy button's regular expression; the expression needs a small tweak.
To explain the problem in more detail: the [io page](https://docs.xarray.dev/en/stable/user-guide/io.html) uses IPython to run the example code, and there are many cells. The regular expression
https://github.com/pydata/xarray/blob/10bb94c28b639369a66106fb1352b027b30719ee/doc/conf.py#L100
works for the first 9 cells, but once a double-digit cell has a multi-line example, the copy button fails: the expression looks for exactly three dots followed by a colon, while for larger cell numbers the continuation prompt contains as many dots as the cell number has digits, plus two.
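A possible tweak (an untested sketch; I'm assuming the linked pattern resembles the stock sphinx-copybutton IPython regex) is to accept three or more dots before the colon:
```python
# doc/conf.py -- option names per sphinx-copybutton's documentation
copybutton_prompt_text = r">>> |\$ |In \[\d*\]: | {2,}\.{3,}: "
copybutton_prompt_is_regexp = True
```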
 | closed | 2024-07-22T08:06:22Z | 2024-07-22T13:00:40Z | https://github.com/pydata/xarray/issues/9263 | [
"bug",
"topic-documentation"
] | mosc9575 | 1 |
graphistry/pygraphistry | jupyter | 58 | npm dataset handling slower and bigger filesize under api=2 | - @thibaudh
To reproduce: use `datasets/rawdata/all-npm-packages` notebook, and try api vs api=2
File size: 5 MB vs 6 MB
Time: maybe a 10x difference? It seemed to be in Python CPU processing.
| closed | 2016-04-09T08:11:03Z | 2016-05-07T20:46:34Z | https://github.com/graphistry/pygraphistry/issues/58 | [
"bug",
"invalid",
"p4"
] | lmeyerov | 2 |
tensorflow/tensor2tensor | deep-learning | 947 | Poor performance of Transformer on Wikitext-2 LM | For PTB, the poor performance of the Transformer was already discussed in [#128](https://github.com/tensorflow/tensor2tensor/issues/128) and [#108](https://github.com/tensorflow/tensor2tensor/issues/108). I've also observed a similar phenomenon for Wikitext-2 across various hyperparameter and architectural modifications. Since the Transformer with full attention or DMCA performed much better on Wikipedia summarization than seq2seq w/ attention, I was tempted to assume it would work. With the hyperparameters and architectures I've attempted, including the ones in _Attention is All You Need_ and the aforementioned Wikipedia summarization paper, I've observed that the Transformer outperformed LSTM and its variants on a certain news dataset that is similar to the 1 Billion Words dataset. Like 1BLM, this dataset has only sentence-long dependencies; however, its total token count is comparable to Wikitext-2. This result aligns with tensor2tensor's result on 1BLM. Have you found a way to resolve this problem? | closed | 2018-07-19T17:19:01Z | 2019-06-13T12:45:35Z | https://github.com/tensorflow/tensor2tensor/issues/947 | [] | AranKomat | 1 |
retentioneering/retentioneering-tools | data-visualization | 53 | ValueError on seaborn==0.11.2 | The following cell
```
data.rete.compare(
    groups=(test, control),
    function=conversion,
    test='mannwhitneyu',
    group_names=('test', 'control'),
)
```
from the [tutorial](https://retentioneering.github.io/retentioneering-tools/_build/html/compare.html) doesn't work; it fails with a `ValueError: cannot reindex on an axis with duplicate labels`.
After downgrading to seaborn==0.11.1 the error disappears. | closed | 2022-08-02T09:12:59Z | 2023-03-28T07:57:37Z | https://github.com/retentioneering/retentioneering-tools/issues/53 | [] | SvetoforColumb | 1 |
Nekmo/amazon-dash | dash | 70 | [NOTICE] New services are welcome! | Amazon-dash currently supports:
- System Commands
- SSH
- HTTP Webhooks
- Home Assistant
- OpenHAB
- IFTTT
Do you need any other service? Please leave your comments. | open | 2018-08-05T00:38:02Z | 2022-12-28T00:28:23Z | https://github.com/Nekmo/amazon-dash/issues/70 | [
"enhancement"
] | Nekmo | 8 |
mkhorasani/Streamlit-Authenticator | streamlit | 272 | Cookie setting failure | ## Problem
When switching to another page as soon as you authenticate a user, the cookie fails to be set.
## Cause
This is caused by an I/O delay.
## Solution
Introduce a delay upon setting the cookie in line https://github.com/mkhorasani/Streamlit-Authenticator/blob/c306a18b21970a5c57fc83d678bf0b3db14115f4/streamlit_authenticator/views/authentication_view.py#L369
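A minimal sketch of the proposed change (the 0.5 s value is a guess, not a measured threshold):
```python
import time

# ...the cookie-setting call at the linked line runs here...
time.sleep(0.5)  # give the front end time to persist the cookie before navigating
```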
| closed | 2025-03-11T16:33:28Z | 2025-03-11T16:38:05Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/272 | [
"enhancement"
] | JimiC | 0 |
aimhubio/aim | data-visualization | 3,063 | Having problems using with fairseq | ## ❓Question
The library [fairseq](https://github.com/facebookresearch/fairseq/) has built-in support for aim, but I am struggling to get it working. I'm not sure if it's something I'm doing wrong or if the fairseq support is out of date, but the fairseq repo is fairly inactive, so I thought I would ask here.
I am working locally and run `aim server`, and see: "Server is mounted on 0.0.0.0:53800".
I then run my fairseq experiment, adding to my config.yaml file:
```
common:
  aim_repo: aim://0.0.0.0:53800
```
and then run my experiment. It seems to work initially - aim detects the experiment and the log starts with:
```
[2023-11-15 14:31:07,453][fairseq.logging.progress_bar][INFO] - Storing logs at Aim repo: aim://0.0.0.0:53800
[2023-11-15 14:31:07,480][aim.sdk.reporter][INFO] - creating RunStatusReporter for f6f19ecf0e2147b19e24d52f
[2023-11-15 14:31:07,482][aim.sdk.reporter][INFO] - starting from: {}
[2023-11-15 14:31:07,482][aim.sdk.reporter][INFO] - starting writer thread for <aim.sdk.reporter.RunStatusReporter object at 0x7f57117363e0>
[2023-11-15 14:31:08,471][fairseq.trainer][INFO] - begin training epoch 1
[2023-11-15 14:31:08,471][fairseq_cli.train][INFO] - Start iterating over samples
[2023-11-15 14:31:10,821][fairseq.trainer][INFO] - NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 64.0
[2023-11-15 14:31:12,261][fairseq.trainer][INFO] - NOTE: gradient overflow detected, ignoring gradient, setting loss scale to: 32.0
[2023-11-15 14:31:12,261][fairseq_cli.train][INFO] - begin validation on "valid" subset
[2023-11-15 14:31:12,266][fairseq.logging.progress_bar][INFO] - Storing logs at Aim repo: aim://0.0.0.0:53800
[2023-11-15 14:31:12,283][fairseq.logging.progress_bar][INFO] - Appending to run: f6f19ecf0e2147b19e24d52f
```
but then I get an error:
```
...
File "/lib/python3.10/site-packages/fairseq/logging/progress_bar.py", line 64, in progress_bar
bar = AimProgressBarWrapper(
File "/lib/python3.10/site-packages/fairseq/logging/progress_bar.py", line 365, in __init__
self.run = get_aim_run(aim_repo, aim_run_hash)
File "/lib/python3.10/site-packages/fairseq/logging/progress_bar.py", line 333, in get_aim_run
return Run(run_hash=run_hash, repo=repo)
File "/lib/python3.10/site-packages/aim/ext/exception_resistant.py", line 70, in wrapper
_SafeModeConfig.exception_callback(e, func)
File "/lib/python3.10/site-packages/aim/ext/exception_resistant.py", line 47, in reraise_exception
raise e
File "/lib/python3.10/site-packages/aim/ext/exception_resistant.py", line 68, in wrapper
return func(*args, **kwargs)
File "/lib/python3.10/site-packages/aim/sdk/run.py", line 828, in __init__
super().__init__(run_hash, repo=repo, read_only=read_only, experiment=experiment, force_resume=force_resume)
File "/lib/python3.10/site-packages/aim/sdk/run.py", line 276, in __init__
super().__init__(run_hash, repo=repo, read_only=read_only, force_resume=force_resume)
File "/lib/python3.10/site-packages/aim/sdk/base_run.py", line 50, in __init__
self._lock.lock(force=force_resume)
File "/lib/python3.10/site-packages/aim/storage/lock_proxy.py", line 38, in lock
return self._rpc_client.run_instruction(self._hash, self._handler, 'lock', (force,))
File "/lib/python3.10/site-packages/aim/ext/transport/client.py", line 260, in run_instruction
return self._run_read_instructions(queue_id, resource, method, args)
File "/lib/python3.10/site-packages/aim/ext/transport/client.py", line 285, in _run_read_instructions
raise_exception(status_msg.header.exception)
File lib/python3.10/site-packages/aim/ext/transport/message_utils.py", line 76, in raise_exception
raise exception(*args) if args else exception()
TypeError: Timeout.__init__() missing 1 required positional argument: 'lock_file'
Exception in thread Thread-13 (worker):
Traceback (most recent call last):
File "lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/lib/python3.10/site-packages/aim/ext/transport/rpc_queue.py", line 55, in worker
if self._try_exec_task(task_f, *args):
File "/lib/python3.10/site-packages/aim/ext/transport/rpc_queue.py", line 81, in _try_exec_task
task_f(*args)
File "/lib/python3.10/site-packages/aim/ext/transport/client.py", line 301, in _run_write_instructions
raise_exception(response.exception)
File "/python3.10/site-packages/aim/ext/transport/message_utils.py", line 76, in raise_exception
raise exception(*args) if args else exception()
aim.ext.transport.message_utils.UnauthorizedRequestError: 3310c526-aa51-47ef-ba87-fbf75f80f610
```
Does anyone have any idea what might be causing this/if there's something wrong with the approach I'm taking? I've tried with a variety of different aim versions (going back to the versions when fairseq was more actively being developed) and I still get errors.
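To isolate whether the failure is in fairseq's wrapper or in the aim client/server pair, a direct smoke test against the same server might help (a sketch; I'm assuming the server is still running on the same address):
```python
from aim import Run

# Bypass fairseq entirely and talk to the same remote repo.
run = Run(repo="aim://0.0.0.0:53800", experiment="smoke-test")
run["params"] = {"check": True}
run.track(1.0, name="loss", step=0)
run.close()
```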
| open | 2023-11-15T14:47:34Z | 2024-01-09T07:40:58Z | https://github.com/aimhubio/aim/issues/3063 | [
"type / question"
] | henrycharlesworth | 4 |
hankcs/HanLP | nlp | 630 | HanLP usage | ## Version
The current latest version is: portable-1.3.4
The version I am using is: portable-1.3.4
## My question
When using HanLP for word segmentation I found the results quite good, so I want to apply it to a specific domain (for example, news). Unfortunately, I am not very clear on the principles behind the whole project or its engineering implementation. Could the author put together a dedicated guide (or book) introducing readers to corpus collection, usage, and processing; model training, tuning, and testing; and subsequent maintenance?
@hankcs
| closed | 2017-09-20T02:19:46Z | 2020-01-01T11:08:03Z | https://github.com/hankcs/HanLP/issues/630 | [
"ignored"
] | SunnyWiki | 3 |
widgetti/solara | fastapi | 296 | Cannot switch to dark mode | Running an app with the `theme-variant` option should enable the dark theme, but this does not work for me.
Clean install on win11, reproduced by @mariobuikhuizen
```
$ solara run --theme-variant dark script.py
```
Issue #156 suggests that it did work in previous versions | closed | 2023-09-20T09:48:33Z | 2023-10-02T14:52:55Z | https://github.com/widgetti/solara/issues/296 | [] | Jhsmit | 2 |
mwaskom/seaborn | pandas | 3,566 | Problems when setting positions in boxplot() (mainly on log-scale axis) | Hey everybody,
Thanks for adding `native_scale` to boxplot in 0.13!! I've been waiting for this! :)
Now, I tried some manual tweaking of the dodged box positions and ran into the following (I'm on 0.13.0 and matplotlib 3.7.2):
My code:
```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

data = {
    "x": [0.1, 0.1, 0.1, 0.1, 1, 1, 1, 1],
    "y": [1, 2, 3, 4, 1, 2, 3, 4],
}
xvals = sorted(set(data["x"]))
# logscale dodging
boxpositions = [np.exp(np.log(x)+0.5) for x in xvals]
fig, ax = plt.subplots()
ax.set_xscale("log")
sns.boxplot(data, x="x", y="y", width=0.3, native_scale=True, positions=boxpositions, ax=ax)
plt.show()
```
produces the following image:

This doesn't get better when trying to use log_scale instead of native_scale:
```python
sns.boxplot(data, x="x", y="y", width=0.3, log_scale=True, positions=boxpositions, ax=ax)
```
→

It works more or less fine on linear scale:
```python
import seaborn as sns
import matplotlib.pyplot as plt

data = {
    "x": [0.1, 0.1, 0.1, 0.1, 1, 1, 1, 1],
    "y": [1, 2, 3, 4, 1, 2, 3, 4],
}
xvals = sorted(set(data["x"]))
# linscale dodging
boxpositions = [x+0.5 for x in xvals]
fig, ax = plt.subplots()
sns.boxplot(data, x="x", y="y", width=0.3, positions=boxpositions, ax=ax)
plt.show()
```
Only the xlim is not updated:

Cheers,
Leo
| closed | 2023-11-20T10:22:11Z | 2023-11-24T12:26:48Z | https://github.com/mwaskom/seaborn/issues/3566 | [] | leoluecken | 9 |
rio-labs/rio | data-visualization | 106 | Display Required Fields, Supporting Text, Icon, `is_sensitive` and `is_valid` for `DateInput` | ### Description
Currently, our `DateInput` component lacks the ability to indicate which fields are required, provide supporting text, and display leading and trailing icons. These features are crucial for enhancing user experience by guiding users through forms more effectively, ensuring they understand what information is needed, and improving the overall aesthetics and functionality of the input fields.
### Design Guideline
https://m3.material.io/components/date-pickers/guidelines
### Proposed Solution
**Required Fields Indicator:**
- Add `is_required` attribute to the `DateInput` component.
- When `is_required` is set to `True`, display an asterisk (*) next to the label.
- Optionally, add an `is_required_indicator` attribute to allow customization of the indicator (e.g., text, color). **needs discussion**
**Supporting Text:**
- Add `supporting_text` attribute to the `DateInput` component.
- The supporting text should be displayed below the input field.
- Style the supporting text to be visually distinct but not distracting **(see Design Guideline)**.
**Validation:**
- Visually display to the user whether the current date `is_valid` **(similar to other input fields)**
**Trailing Icon:**
- Add a `trailing_icon` attribute to the `DateInput` component. **naming needs discussion**
- The `trailing_icon` should be displayed inside the `DateInput`, aligned to the right.
- Allow customization of the icon, which accepts an icon component or a string for the icon name.
**Sensitive:**
- `is_sensitive`: bool = True **(similar to other input fields)**
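A hypothetical usage sketch of the proposed attributes (every name and value below is illustrative, not an existing API):
```python
import rio

date_input = rio.DateInput(
    label="Departure date",
    is_required=True,                         # renders an asterisk next to the label
    supporting_text="Format: MM/DD/YYYY",
    trailing_icon="material/calendar_month",  # naming still needs discussion
    is_sensitive=True,                        # mirrors other input fields
    is_valid=True,                            # drives the error styling
)
```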
### Alternatives
_No response_
### Additional Context
- Update documentation and examples for these new features.
### Related Issues/Pull Requests
#104, #105 | open | 2024-07-12T07:36:03Z | 2024-08-13T06:47:09Z | https://github.com/rio-labs/rio/issues/106 | [
"ideas wanted",
"new feature",
"enhancement"
] | Sn3llius | 0 |
Zeyi-Lin/HivisionIDPhotos | machine-learning | 179 | After updating with git pull, the Gradio web UI still shows version 1.2.8 instead of the latest 1.2.9 | closed | 2024-10-01T04:18:49Z | 2024-10-21T09:44:10Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/179 | [] | leij0318 | 0 |
keras-team/keras | pytorch | 20,731 | similar functions for `from_tensor` `to_tensor` from ragged api | I think ragged tensors aren't supported yet. But is there any way to handle cases like the following?
```python
import tensorflow as tf
from tensorflow import keras

# Ragged conversion helpers I'd like Keras equivalents for:
#   tf.RaggedTensor.from_tensor
#   tf.RaggedTensor.to_tensor

class RaggedToDenseTensor(keras.layers.Layer):
    def __init__(self, **kwargs):
        super(RaggedToDenseTensor, self).__init__(**kwargs)

    def call(self, inputs):
        # Densify ragged inputs; pass dense tensors through unchanged.
        if isinstance(inputs, tf.RaggedTensor):
            inputs = inputs.to_tensor()
        return inputs
``` | closed | 2025-01-06T20:27:24Z | 2025-01-14T23:40:27Z | https://github.com/keras-team/keras/issues/20731 | [
"type:support"
] | innat | 6 |
Johnserf-Seed/TikTokDownload | api | 524 | [BUG] Problem with tiktok QR and cookie conflict with douyin url links |
**Describe the bug that occurs**
When I open the example.py file I can't scan the QR code because it seems to be a Douyin QR code, not a TikTok one, so my TikTok app won't process it. Nowhere does it say whether this QR code works for both TikTok and Douyin or only for Douyin.
On the other hand, when I set the cookies manually, I don't know where to put the TikTok link I want to download; the tool seems to use a Douyin URL by default, and since my cookies are for TikTok, I get a problem reading the .json.
**Bug Reproduction** Steps to reproduce this behaviour:
```
[ 💻 ]:Windows平台
[ 🗻 ]:获取最新版本号中!
[ 🚩 ]:目前 14200 版本已是最新
[ 配置 ]:配置验证成功!
[ 配置 ]:读取本地配置完成!
[ 提示 ]:异常,链接错误,无法提取用户ID.
[2023-08-20 20:08:34,095] - Log.py] - ERROR: [ 提示 ]:异常,链接错误,无法提取用户ID.,Traceback (most recent call last):
File "D:\- GITHUB repo\TikTokDownload-main\Util\Profile.py", line 450, in get_Profile
self.sec_user_id = await self.get_all_sec_user_id(inputs=self.config['uid'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\- GITHUB repo\TikTokDownload-main\Util\Profile.py", line 164, in get_all_sec_user_id
raise ValueError("链接错误,无法提取用户ID.")
ValueError: 链接错误,无法提取用户ID.
[ 提示 ]:按任意键退出程序!
```
**Screenshot** If applicable, add a screenshot to help explain your issue.
**Desktop (please fill in the following information):**
- OS: windows 11 64bit
- vpn proxy: off
- Project version: 1.4.2.2
- py version: 3.11.4
**Attachment**
How do I download videos from a TikTok profile?
| open | 2023-08-20T23:44:49Z | 2023-08-25T06:47:11Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/524 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | Alexo88 | 1 |
widgetti/solara | flask | 114 | Solara as desktop app | Started https://github.com/widgetti/solara/discussions/100 and also asked about on Discord.
I'm opening this to collect interest.
What I can see happening is a pyinstaller + https://pypi.org/project/pywebview/ build in CI, both to test whether a desktop-like application is possible and because a CI build will always be stable.
But users will still have to build the custom apps themselves if they need particular python packages. | open | 2023-05-24T20:02:59Z | 2023-05-25T12:07:34Z | https://github.com/widgetti/solara/issues/114 | [] | maartenbreddels | 2 |