| repo_name (string, len 9-75) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, len 1-976) | body (string, len 0-254k) | state (string, 2 classes) | created_at (string, len 20) | updated_at (string, len 20) | url (string, len 38-105) | labels (list, len 0-9) | user_login (string, len 1-39) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
cleanlab/cleanlab | data-science | 894 | Active learning for semantic Segmentation | Does your active learning with single/multi-annotator support work for semantic segmentation?
Right now I think it only supports classification techniques | open | 2023-11-12T19:36:56Z | 2024-07-24T07:59:26Z | https://github.com/cleanlab/cleanlab/issues/894 | [
"enhancement",
"help-wanted"
] | rameshamurtaza | 5 |
blacklanternsecurity/bbot | automation | 1,937 | Excavate with yara matching rules doesn't emit a unique enough description | **Describe the bug**
When using yara rules, the excavate module doesn't generate a unique matching description which causes additional matches on different sites to be suppressed.
**Expected behavior**
Every unique yara rule match should emit a FINDING
**BBOT Command**
Example: `bbot -m httpx -t example.com -cy yararule.txt`
**OS, BBOT Installation Method + Version**
`OS: Arch Linux, Installation method: pip, BBOT version: dev`
**Example Output**
```
[FINDING] {"description": "Custom Yara Rule [find_string] Matched via identifier [str1]", "host": "example.com", "path": "/", "url": "https://example.com/"} httpx->excavate
```
**Debug Message**
```
[DBUG] _scan_ingress: Not forwarding FINDING("{'description': 'Custom Yara Rule [find_string] Matched via identifier [str1]', ...", module=excavate, tags=set()) because event was already emitted by its module
``` | closed | 2024-11-08T19:11:04Z | 2024-11-16T03:01:38Z | https://github.com/blacklanternsecurity/bbot/issues/1937 | [
"bug"
] | aconite33 | 2 |
alpacahq/alpaca-trade-api-python | rest-api | 36 | Disallow redirect | One of the users reported POST /orders didn't work. Turns out, the base URL was set to http. `requests` package has a parameter to disallow auto redirect and in our case, it should be better to error out rather than silently fail. | closed | 2018-11-07T17:13:51Z | 2018-11-29T17:49:18Z | https://github.com/alpacahq/alpaca-trade-api-python/issues/36 | [] | umitanuki | 1 |
tensorpack/tensorpack | tensorflow | 770 | I try to implement cifar10-resnet.py from tensorpack-examples-ResNet on Ubuntu |
```
/home/jarlan/anaconda3/bin/python3.5 /home/jarlan/PycharmProjects/tensorflow/examples/ResNet/cifar10-resnet.py
[0520 15:50:33 @logger.py:109] WRN Log directory train_log/cifar10-resnet exists! Use 'd' to delete it.
[0520 15:50:33 @logger.py:112] WRN If you're resuming from a previous run, you can choose to keep it.
Press any other key to exit.
Select Action: k (keep) / d (delete) / q (quit):k
[0520 15:50:37 @logger.py:67] Existing log file 'train_log/cifar10-resnet/log.log' backuped to 'train_log/cifar10-resnet/log.log.0520-155037'
[0520 15:50:37 @logger.py:74] Argv: /home/jarlan/PycharmProjects/tensorflow/examples/ResNet/cifar10-resnet.py
[0520 15:50:37 @fs.py:88] WRN Env var $TENSORPACK_DATASET not set, using /home/jarlan/tensorpack_data for datasets.
[0520 15:50:37 @cifar.py:32] Found cifar10 data in /home/jarlan/tensorpack_data/cifar10_data.
[0520 15:50:38 @parallel.py:185] [MultiProcessPrefetchData] Will fork a dataflow more than one times. This assumes the datapoints are i.i.d.
[0520 15:50:38 @cifar.py:32] Found cifar10 data in /home/jarlan/tensorpack_data/cifar10_data.
[0520 15:50:39 @gpu.py:39] WRN Found nvidia-smi. But TensorFlow was not built with CUDA support!
[0520 15:50:39 @interface.py:31] Automatically applying QueueInput on the DataFlow.
[0520 15:50:39 @input_source.py:193] Setting up the queue 'QueueInput/input_queue' for CPU prefetching ...
[0520 15:50:39 @training.py:103] Building graph for training tower 0 ...
Traceback (most recent call last):
  File "/home/jarlan/PycharmProjects/tensorflow/examples/ResNet/cifar10-resnet.py", line 175, in <module>
    launch_train_with_config(config, SyncMultiGPUTrainerParameterServer(nr_gpu))
  File "/home/jarlan/PycharmProjects/tensorflow/tensorpack/train/interface.py", line 81, in launch_train_with_config
    model._build_graph_get_cost, model.get_optimizer)
  File "/home/jarlan/PycharmProjects/tensorflow/tensorpack/utils/argtools.py", line 181, in wrapper
    return func(*args, **kwargs)
  File "/home/jarlan/PycharmProjects/tensorflow/tensorpack/train/tower.py", line 178, in setup_graph
    train_callbacks = self._setup_graph(input, get_cost_fn, get_opt_fn)
  File "/home/jarlan/PycharmProjects/tensorflow/tensorpack/train/trainers.py", line 91, in _setup_graph
    self._make_get_grad_fn(input, get_cost_fn, get_opt_fn), get_opt_fn)
  File "/home/jarlan/PycharmProjects/tensorflow/tensorpack/graph_builder/training.py", line 151, in build
    grad_list = DataParallelBuilder.build_on_towers(self.towers, get_grad_fn, devices)
  File "/home/jarlan/PycharmProjects/tensorflow/tensorpack/graph_builder/training.py", line 108, in build_on_towers
    ret.append(func())
  File "/home/jarlan/PycharmProjects/tensorflow/tensorpack/train/tower.py", line 205, in get_grad_fn
    cost = get_cost_fn(*input.get_input_tensors())
  File "/home/jarlan/PycharmProjects/tensorflow/tensorpack/tfutils/tower.py", line 209, in __call__
    output = self._tower_fn(*args)
  File "/home/jarlan/PycharmProjects/tensorflow/tensorpack/graph_builder/model_desc.py", line 235, in _build_graph_get_cost
    ret = self.build_graph(*inputs)
  File "/home/jarlan/PycharmProjects/tensorflow/examples/ResNet/cifar10-resnet.py", line 50, in build_graph
    assert tf.test.is_gpu_available()
AssertionError
Process finished with exit code 1
```
I am new to it. I don't know what's going wrong. Could you help me ? | closed | 2018-05-20T08:02:33Z | 2018-05-30T20:59:43Z | https://github.com/tensorpack/tensorpack/issues/770 | [
"installation/environment"
] | codetjj | 6 |
aleju/imgaug | deep-learning | 805 | Deleted issue | closed | 2022-01-17T22:56:12Z | 2022-01-17T23:00:30Z | https://github.com/aleju/imgaug/issues/805 | [] | nmerty | 0 | |
influxdata/influxdb-client-python | jupyter | 537 | missing dependency on urllib3 | ### Specifications
* Client Version: `pip install 'influxdb-client[ciso]'` as of today
* InfluxDB Version: 2.5.1
* Platform: Debian 11
### Code sample to reproduce problem
No sample code needed; this Python library is missing a dependency on urllib3.
### Expected behavior
`pip install` should install all required upstream python modules
### Actual behavior
urllib3 is not installed automatically when installing influxdb_client:
```
File "/usr/local/lib/python3.9/dist-packages/influxdb_client/__init__.py", line 60, in <module>
from influxdb_client.configuration import Configuration
File "/usr/local/lib/python3.9/dist-packages/influxdb_client/configuration.py", line 20, in <module>
import urllib3
ModuleNotFoundError: No module named 'urllib3'
```
### Additional info
It's easy to work around and install it manually, but still normally a python module should list all of its dependencies. | closed | 2022-12-07T14:46:15Z | 2022-12-08T04:08:12Z | https://github.com/influxdata/influxdb-client-python/issues/537 | [
"invalid",
"wontfix"
] | laf0rge | 4 |
fastapi/sqlmodel | pydantic | 314 | `sqlalchemy.Column` parameters are not passed forward when set on `sqlmodel.Field` and a column is provided via `sa_column` | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Optional
from sqlalchemy import Column, String
from sqlmodel import Field, SQLModel

class Hero(SQLModel, table=True):  # "Hero" is just an illustrative model name
    id: Optional[int] = Field(default=None, primary_key=True)
    name: str = Field(
        sa_column=Column(
            String,
            # ... other attrs not exposed in sqlmodel.Field today
        ),
        index=True  # this is ignored, must be set on `Column` above
    )
```
### Description
`sqlmodel.Field` exposes some but not all fields from `sqlalchemy.Column`. This means in some cases it is necessary to provide a `Column` via the `sa_column=` param on `sqlmodel.Field`. However, when this is the case the parameters set on `sqlmodel.Field` are not forwarded to the new `sa_column` object.
I think the expected behavior here is that parameters set on `Field` would be combined with those from the `sa_column` object. Either this or setting a parameter that will be ignored should trigger a warning/exception along the lines of `"You have set index but also provided a sqlalchemy.Column object, index will be ignored"`.
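A minimal sketch of the proposed warning/exception behavior (illustrative only; the function name and signature below are made up, not SQLModel internals):

```python
def check_field_args(sa_column=None, index=None, nullable=None):
    """Raise when Field() kwargs would be silently ignored.

    Sketch of the behavior proposed above, not SQLModel's real code.
    """
    if sa_column is None:
        return
    for name, value in (("index", index), ("nullable", nullable)):
        if value is not None:
            raise ValueError(
                f"You have set {name} but also provided a sqlalchemy.Column "
                f"object, {name} will be ignored"
            )

check_field_args(sa_column=None, index=True)  # fine: no Column was given
try:
    check_field_args(sa_column=object(), index=True)
except ValueError as exc:
    print(exc)  # You have set index but also provided a sqlalchemy.Column object, index will be ignored
```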
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.9.5
### Additional Context
_No response_ | closed | 2022-04-26T18:59:26Z | 2022-11-22T00:12:00Z | https://github.com/fastapi/sqlmodel/issues/314 | [
"question",
"answered"
] | JLHasson | 6 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 897 | Question about resize and crop | I have 256x256 images in dataset A and 128x128 images in dataset B. If I use `--preprocess resize_and_crop --load_size 256 --crop_size 128`, the images in dataset B are loaded at 128x128 or upscaled to 256x256 and the cropped at 128x128? | closed | 2020-01-11T14:17:58Z | 2020-01-15T09:42:13Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/897 | [] | domef | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,130 | When i train,my input images and output images are 1 channel, but when i test,the result images turn into 3 channels,how can i solve this problem? | When i train,my input images and output images are 1 channel, but when i test,the result images turn into 3 channels,how can i solve this problem? | open | 2020-08-24T06:39:34Z | 2020-08-26T06:01:12Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1130 | [] | gujinjin611 | 1 |
koxudaxi/datamodel-code-generator | fastapi | 2,351 | Add option to disable Field constraints | **Is your feature request related to a problem? Please describe.**
I'm in the unfortunate position of having to consume an API that has incorrect constraints in its OpenAPI documentation. Up until now, I have been manually removing the Field constraints, but I was wondering if this could be a useful CLI flag for anyone else.
**Describe the solution you'd like**
Some sort of `--no-constraints` flag added as a CLI flag
**Describe alternatives you've considered**
N/A
**Additional context**
N/A
| open | 2025-03-20T19:44:36Z | 2025-03-21T15:53:23Z | https://github.com/koxudaxi/datamodel-code-generator/issues/2351 | [] | harrymconner | 1 |
agronholm/anyio | asyncio | 71 | Add the "cancellable" parameter to run_in_thread() | Trio's `run_sync()` function takes a `cancellable` parameter, which when set to `True`, leaves the thread to run its course while still freeing up the task. This requires some changes like:
- Release the capacity limiter only after the thread actually finishes
- Don't try to send the results to the event loop if the task was cancelled
| closed | 2019-09-09T08:19:01Z | 2019-10-19T10:01:59Z | https://github.com/agronholm/anyio/issues/71 | [
"enhancement"
] | agronholm | 0 |
coqui-ai/TTS | python | 3,364 | [Feature request] Text for synthesis needs to be normalized for languages with diacritics | <!-- Welcome to the 🐸TTS project!
We are excited to see your interest, and appreciate your support! --->
Text for synthesis needs to be normalized for languages with diacritics, or synthesis will be incorrect under certain circumstances.
<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
For diacritics, like German with its umlauts (äöü), there are often at least two ways to represent them in Unicode text: precomposed (a single code point: ä) and decomposed (a base code point modified by another: a + ¨). Some text sources, like [piping a string into the `tts` command via `xargs` sourced from a text file](https://github.com/coqui-ai/TTS/discussions/1101#discussioncomment-3853822) may not convert from decomposed to precomposed. This is a problem, because the models I tested (i.e. "thorsten/tacotron2-DDC") only synthesize an umlaut in the precomposed form. They will just ignore the diacritics characters otherwise, synthesizing the base letter.
I’m not a Python dev. A hacky way of fixing this would be to modify "synthesize.py":
```
import unicodedata
…
args = parser.parse_args()
args.text = unicodedata.normalize('NFC', args.text)
```
<!-- A clear and concise description of what you want to happen. -->
Alternatively we could find some other way to make sure that the models are always supplied tokens that they can synthesize.
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
The text conversion could be optional via a command line argument.
<!-- Add any other context or screenshots about the feature request here. -->
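The precomposed/decomposed distinction is easy to see directly with `unicodedata` (a standalone illustration of the point above):

```python
import unicodedata

decomposed = "a\u0308"   # 'a' followed by COMBINING DIAERESIS (NFD form, two code points)
precomposed = "\u00e4"   # LATIN SMALL LETTER A WITH DIAERESIS (NFC form, one code point)

print(decomposed == precomposed)                                # False: different code points
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
print(unicodedata.normalize("NFD", precomposed) == decomposed)  # True
```

Both strings render as the same umlaut, which is why the mismatch is easy to miss until a model ignores the combining mark.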
| closed | 2023-12-04T13:40:22Z | 2024-01-14T21:15:16Z | https://github.com/coqui-ai/TTS/issues/3364 | [
"wontfix",
"feature request"
] | JanX2 | 1 |
miguelgrinberg/Flask-SocketIO | flask | 1,427 | Test client doesn't encode/decode the data | Sending data with emit() and getting the data back with get_received() returns the **actual** object emitted not one that got encoded to json and back. This can mask at least 2 different types of bugs.
The real socketio client encodes the data in packet.py `encode()`:

```python
if data is not None:
    if needs_comma:
        encoded_packet += ','
    encoded_packet += self.json.dumps(data, separators=(',', ':'))
```

but the socketio test client just stores the object for later retrieval.
This caused my test code to miss two errors:
1. The object I sent was not json encodable and caused an exception.
2. If you change the object after emit the changes appear when you get the data from get_received(). I had a test that passed because the object that was emitted had wrong data that was emitted but was changed to correct data later. This passed in test but failed in production
**Steps to reproduce the behavior:**
Case 1:
1. in a flask app emit an object with an non-json encodable type (like datetime or mongo ObjectId)
2. test with pytest using socketio.test_client
3. code will run
4. test again without using pytest code will throw an exception (TypeError: Object of type ObjectId is not JSON serializable)
Case 2:
```python
# in server
data = {'test': 1}
emit('test', data)
data['test'] = 2

# in pytest
assert socketio_client.get_received()[0]['args'][0]['test'] == 1  # this assert will fail
```
**Expected behavior**
In the first case should throw an exception
in the second case the assertion should pass
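One way a test client could avoid both problems is a JSON round-trip of the emitted data before storing it. This is a sketch of the idea, not Flask-SocketIO's actual internals:

```python
import datetime
import json

def snapshot(data):
    # Encode/decode with the same separators the real client uses, so
    # non-serializable payloads fail loudly (case 1) and the stored copy
    # is decoupled from later mutations of the original object (case 2).
    return json.loads(json.dumps(data, separators=(',', ':')))

data = {'test': 1}
received = snapshot(data)
data['test'] = 2
print(received['test'])  # 1 (case 2: the mutation no longer leaks into the snapshot)

try:
    snapshot({'when': datetime.datetime.now()})  # case 1: fails in test, as in production
except TypeError as exc:
    print(exc)
```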
| closed | 2020-12-09T14:41:34Z | 2020-12-13T12:35:47Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1427 | [] | senstar-nross | 3 |
ivy-llc/ivy | tensorflow | 28,404 | Fix Ivy Failing Test: paddle - elementwise.logical_and | To-Do list https://github.com/unifyai/ivy/issues/27501 | open | 2024-02-23T07:32:06Z | 2024-03-01T10:07:50Z | https://github.com/ivy-llc/ivy/issues/28404 | [
"Sub Task"
] | MuhammadNizamani | 0 |
httpie/http-prompt | rest-api | 163 | Display current Vi mode | In bash, my prompt displays `(ins)` or `(cmd)` in front of PS1 when vi mode is enabled. http-prompt does not seem to show anywhere that vi mode is on or off.
I am willing to contribute this feature myself. Do you feel I should wait until #159 is done or should I already hard-prefix it if vi mode is on, like bash seems to do on my system? | open | 2020-01-22T10:40:09Z | 2020-01-22T10:40:09Z | https://github.com/httpie/http-prompt/issues/163 | [] | TheLastProject | 0 |
deepset-ai/haystack | pytorch | 8,714 | Add a `ListJoiner` Component to Merge Multiple Lists into a Single List | **Is your feature request related to a problem? Please describe.**
There is a need for a `ListJoiner` component in Haystack to handle scenarios where multiple lists (e.g., `List[ChatMessage]`) need to be merged into a single flat list. This component would simplify workflows by consolidating variadic inputs into one unified list, eliminating nested structures. The output order would respect the pipeline's execution sequence, with user-provided inputs always added first.
Current joiners cannot provide this functionality.
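The merging semantics being requested are essentially order-preserving flattening; in plain Python (not Haystack component code, and `join_lists` is a made-up name):

```python
from itertools import chain

def join_lists(*inputs):
    # Merge any number of lists into one flat list, preserving the order
    # in which the inputs arrive (i.e. pipeline execution order).
    return list(chain.from_iterable(inputs))

user_messages = ["user: hi"]
memory_messages = ["assistant: hello", "user: how are you?"]
print(join_lists(user_messages, memory_messages))
# ['user: hi', 'assistant: hello', 'user: how are you?']
```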
**Describe the solution you'd like**
For reference, a similar implementation exists in the [Haystack cookbook](https://github.com/deepset-ai/haystack-cookbook/blob/main/notebooks/conversational_rag_using_memory.ipynb)
The above reference also describes one of the use cases of `ListJoiner`.
**Describe alternatives you've considered**
N/A
**Additional context**
N/A | closed | 2025-01-13T13:50:08Z | 2025-02-05T22:19:17Z | https://github.com/deepset-ai/haystack/issues/8714 | [
"type:feature",
"P2",
"2.x"
] | Amnah199 | 0 |
ets-labs/python-dependency-injector | flask | 780 | traverse visits providers in a random order | When `container.init_resources()` is called, it is expected that resources are loaded in the same order every time (and ideally in a way that can be controlled, but that is outside the scope of this bug).
What actually happens is the resources are initialized in a random order each time an application is started.
This is due to `def traverse(...)` using `set()` to iterate over the providers; this results in a random start-up order every time an app is restarted.
Proposed short-term solution is to init providers in the order they are defined in the container.
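The short-term proposal boils down to iterating providers in definition order rather than `set()` order, e.g. an order-preserving de-duplication (a sketch of the idea, not the library's `traverse` code):

```python
def ordered_unique(providers):
    # dict preserves insertion order (Python 3.7+), so this removes
    # duplicates while keeping first-seen (definition) order, unlike
    # set(), whose iteration order can change between interpreter runs.
    return list(dict.fromkeys(providers))

providers = ["config", "db", "db", "cache", "api", "config"]
print(ordered_unique(providers))  # ['config', 'db', 'cache', 'api']
```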
Proposed long-term solution (separate ticket / feature request) is that providers can be given a "priority" or "order" to indicate precedence. | open | 2024-01-31T23:34:24Z | 2024-10-17T14:51:40Z | https://github.com/ets-labs/python-dependency-injector/issues/780 | [] | BEllis | 2 |
ultralytics/ultralytics | python | 19,772 | How to specify the train and validation results directory? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I am training a model and evaluate on it. The code is something like the following:
> model.train(task='detect', mode='train', data=data_file, project=task_dir, device=0, save_dir=task_dir, ...)
> model.val(project=task_dir, save_json=True)
However, it seems that sometimes the training results are saved in `task_dir/train` and the validation results in `task_dir/val`, while other times the validation results end up in `task_dir/train2`. Why is there such a difference, and how can I control it? Also, I found `save_dir` not very useful; I need to specify the `project` argument to make the results save under `save_dir`.
### Additional
_No response_ | open | 2025-03-19T04:21:19Z | 2025-03-20T05:23:58Z | https://github.com/ultralytics/ultralytics/issues/19772 | [
"question",
"detect"
] | deJQK | 3 |
LAION-AI/Open-Assistant | python | 3,326 | accelerator version issue | Hugging Face's accelerate updated from v0.19.0 to v0.20.0, and `logging_dir` disappeared from the `__init__` method of the `Accelerator` class. This causes the error shown below.

https://github.com/huggingface/accelerate/blob/baebae3bbecbea05d721a50917f352cccd14811e/src/accelerate/accelerator.py#L242-L244
OA doesn't pin a version:
https://github.com/LAION-AI/Open-Assistant/blob/0fcf3e08fe62295d4696e590005b0f33383342ea/model/pyproject.toml#L12-L13
However, if you go to the trlx library that actually runs the accelerator, it is versioned as shown below.

https://github.com/CarperAI/trlx/blob/0dce99d96b7d70b6a9114129d8e38bf6c80eb653/requirements.txt#L1-L2
Of course, it is true that the trlx library also has its own errors.
However, if OA is going to depend on the trlx library, I think it is necessary to take trlx's requirements.txt and install the same versions specified there.
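A quick guard one could add before wiring things up; the helper below is hypothetical, but the 0.20.0 removal is the change described above:

```python
def supports_logging_dir(version: str) -> bool:
    # `logging_dir` disappeared from Accelerator.__init__ in accelerate
    # 0.20.0, so any 0.20+ release will raise the error shown above.
    major, minor, *_ = (int(part) for part in version.split(".")[:3])
    return (major, minor) < (0, 20)

print(supports_logging_dir("0.19.0"))  # True
print(supports_logging_dir("0.20.0"))  # False
```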
| open | 2023-06-08T02:15:08Z | 2023-06-19T01:53:49Z | https://github.com/LAION-AI/Open-Assistant/issues/3326 | [
"bug",
"ml"
] | idisuu | 2 |
arogozhnikov/einops | numpy | 257 | [BUG] Noisy patches generated when using `einops.rearrange` to split a grayscale brain image of `skimage.data` | # Issue Description
When utilizing `einops.rearrange` to divide a grayscale brain image obtained from `skimage.data` into patches, it appears that the resulting patches contain a considerable amount of noise. This noise negatively impacts the quality of the patches and hinders further processing or analysis of the image data.
# Steps to Reproduce
1. Import `skimage.data` and obtain a grayscale brain image.
2. Use `einops.rearrange` to split the image into patches.
> Brain image

```python
from einops import rearrange
import matplotlib.pyplot as plt
import numpy as np
from skimage.data import brain

# Load the image
brain_image = brain()[-1][..., np.newaxis]

# Patchify the image
patch_size = 32
patches = rearrange(brain_image, '(h p1) (w p2) c -> (h w) (p1 p2) c', p1=patch_size, p2=patch_size)

# Plot the patches
for i in range(8):
    for j in range(8):
        plt.subplot(8, 8, i*8+j+1)
        plt.axis("off")
        plt.imshow(patches[i*8+j].reshape(patch_size, patch_size, 1), cmap="gray")
```
## Expected Result
The patches generated by `einops.rearrange` should maintain the integrity of the original grayscale brain image without introducing significant noise.
### Results on RGB image
1. Input image

2. Patches

## Actual Result
The patches produced by `einops.rearrange` exhibit a noticeable amount of noise, making them less suitable for subsequent analysis or processing tasks.

| closed | 2023-05-29T12:10:10Z | 2024-09-18T04:53:44Z | https://github.com/arogozhnikov/einops/issues/257 | [] | Ishak96 | 2 |
PeterL1n/BackgroundMattingV2 | computer-vision | 169 | After running inference_images.py, no images are shown? | After running inference_images.py, no images are displayed. Is anyone else seeing the same thing? When it finishes, it prints `Directory /home/user/matting/img_output already exists. Override? [Y/N]: y`, but nothing appears in the folder. | open | 2022-01-08T13:53:30Z | 2023-04-04T08:58:45Z | https://github.com/PeterL1n/BackgroundMattingV2/issues/169 | [] | yoyoyin0902 | 6 |
pydata/xarray | pandas | 9,347 | How to efficiently test the huge inherited Dataset API | ### What is your issue?
`xarray.Dataset` has a ginormous API, and eventually the entire thing should also be available on `DataTree`. However we probably still need to test this copied API, because the majority of it will be altered via the `@map_over_subtree` decorator. How can we do that without also copying thousands of lines of tests from `xarray.tests`? | closed | 2024-08-13T16:20:51Z | 2024-10-21T15:58:35Z | https://github.com/pydata/xarray/issues/9347 | [
"topic-internals",
"topic-testing",
"topic-DataTree"
] | TomNicholas | 2 |
dynaconf/dynaconf | flask | 858 | Suggestions on how to `reverse` in my Django settings.yaml | I currently define settings like `LOGIN_URL` as `reverse_lazy("account_login")`. Is there a way to replicate this type of configuration into something I can use in my Dynaconf settings.yaml?
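One direction worth exploring is a token-based cast in the YAML, similar in spirit to dynaconf's `@` casting tokens. Below is a self-contained sketch of the mechanism only (not dynaconf's real parser, and `reverse_lazy_stub` stands in for Django's `reverse_lazy`):

```python
converters = {}

def register(token):
    def decorator(func):
        converters[token] = func
        return func
    return decorator

@register("@reverse_lazy")
def reverse_lazy_stub(view_name):
    # Stand-in for django.urls.reverse_lazy in this sketch.
    return f"/url-for/{view_name}/"

def parse(value):
    # Resolve "@token argument" strings through the registered converters.
    if isinstance(value, str):
        for token, func in converters.items():
            prefix = token + " "
            if value.startswith(prefix):
                return func(value[len(prefix):])
    return value

print(parse("@reverse_lazy account_login"))  # /url-for/account_login/
print(parse("plain value"))                  # plain value
```

With something along these lines, the YAML entry could stay declarative, e.g. `LOGIN_URL: "@reverse_lazy account_login"`.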
If possible, I'd prefer to keep all settings in my Dynaconf settings.yaml and keep my Django settings.py file as empty as possible. | closed | 2023-02-04T15:34:20Z | 2023-03-30T19:30:37Z | https://github.com/dynaconf/dynaconf/issues/858 | [
"question",
"Docs"
] | wgordon17 | 6 |
521xueweihan/HelloGitHub | python | 1,928 | test | ## Project recommendation
- Project URL: only open-source projects on GitHub are accepted; please provide the GitHub project URL
- Category: please choose from (C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Swift, Other, Books, Machine Learning)
- Planned follow-up updates for the project:
- Project description:
  - Required: what the project is, what it can be used for, and what makes it special or what pain point it solves
  - Optional: which scenarios it suits and what beginners can learn from it
  - Description length (excluding sample code): 10-256 characters
- Reason for recommending: what makes it stand out? What pain point does it solve?
- Sample code: (optional) length: 1-20 lines
- Screenshots: (optional) gif/png/jpg
## Tips (delete this section before submitting)
> Click "Preview" above to read the following more comfortably.
Ways to increase the chance of your project being accepted:
1. Search for the project URL on the HelloGitHub homepage (https://hellogithub.com) to check whether the project has already been featured.
2. Adjust the project according to the [project review criteria](https://github.com/521xueweihan/HelloGitHub/issues/271)
3. If the project you recommend is included in a HelloGitHub monthly issue, your GitHub account will appear in the [contributor list](https://github.com/521xueweihan/HelloGitHub/blob/master/content/contributors.md), **and you will be notified in this issue**.
Thanks again for your support of the HelloGitHub project!
| closed | 2021-10-15T10:56:00Z | 2021-10-15T10:59:59Z | https://github.com/521xueweihan/HelloGitHub/issues/1928 | [] | 521xueweihan | 0 |
noirbizarre/flask-restplus | flask | 93 | Method restrictions aren't honoured in the Swagger UI | I have an example class declared like so:
``` python
@api.route('/foo/bar', endpoint='foo')
@api.route('/bar', methods=['GET'], endpoint='bar')
class Bar(Resource):
@api.doc(...)
def get(self):
...
@api.doc(...)
def post(self):
...
```
The method restriction works in that I can not `POST` to `/bar` however in the Swagger UI, the `/bar` endpoint will still have documentation for both `GET` & `POST`, the same as for `/foo/bar` when it should only have the `GET` documentation. I'm using version 0.8.0.
| closed | 2015-11-04T12:02:36Z | 2016-01-17T18:53:19Z | https://github.com/noirbizarre/flask-restplus/issues/93 | [
"bug"
] | bodgit | 6 |
lyhue1991/eat_tensorflow2_in_30_days | tensorflow | 9 | Is there an English version? | closed | 2020-04-04T08:38:18Z | 2020-04-14T04:20:07Z | https://github.com/lyhue1991/eat_tensorflow2_in_30_days/issues/9 | [] | xTunqki | 4 |
plotly/jupyter-dash | jupyter | 42 | Cannot view contents of error message dropdowns | Actually the arrow points up so I'm not sure if they're called dropdowns. I can view them fine in regular JupyterDash but in JupyterLab, they are unresponsive.

| open | 2020-11-03T17:02:33Z | 2020-11-03T17:03:42Z | https://github.com/plotly/jupyter-dash/issues/42 | [] | nyck33 | 1 |
seleniumbase/SeleniumBase | pytest | 2,226 | sorry, can be delete | sorry, can be delete | closed | 2023-10-31T00:42:56Z | 2023-10-31T19:16:33Z | https://github.com/seleniumbase/SeleniumBase/issues/2226 | [
"invalid",
"UC Mode / CDP Mode"
] | ggforces | 0 |
pyeve/eve | flask | 1,087 | Registering signals with Eve doesn't work | I tried to use Flask-Security with Eve and, when starting, I had the following exception:
```
File "/opt/pyor/backend/manage.py", line 8, in <module>
from pyor.api import app
File "/opt/pyor/backend/pyor/api/__init__.py", line 72, in <module>
security.init_app(app)
File "/opt/conda/envs/pyor/lib/python3.6/site-packages/flask_security/core.py", line 493, in init_app
identity_loaded.connect_via(app)(_on_identity_loaded)
File "/opt/conda/envs/pyor/lib/python3.6/site-packages/blinker/base.py", line 182, in decorator
self.connect(fn, sender, weak)
File "/opt/conda/envs/pyor/lib/python3.6/site-packages/blinker/base.py", line 130, in connect
sender_ref = reference(sender, self._cleanup_sender)
File "/opt/conda/envs/pyor/lib/python3.6/site-packages/blinker/_utilities.py", line 134, in reference
weak = callable_reference(object, callback)
File "/opt/conda/envs/pyor/lib/python3.6/site-packages/blinker/_utilities.py", line 145, in callable_reference
return BoundMethodWeakref(target=object, on_delete=callback)
File "/opt/conda/envs/pyor/lib/python3.6/site-packages/blinker/_saferef.py", line 135, in __new__
key = cls.calculate_key(target)
File "/opt/conda/envs/pyor/lib/python3.6/site-packages/blinker/_saferef.py", line 196, in calculate_key
return (id(get_self(target)), id(get_func(target)))
File "/opt/conda/envs/pyor/lib/python3.6/site-packages/events/events.py", line 41, in __getattr__
(self.__class__.__name__, name))
AttributeError: type object 'Eve' has no attribute '__self__'
```
After googling, I found that another library used with Eve presented the same issue:
https://github.com/getsentry/raven-python/issues/952
I don't understand why this is happening. But both issues happens when trying to register a blinker signal with Eve. I debugged where the error happens and it's odd because calling `target.__self__` works, but `get_self(target)` doesn't (even though get_self is `get_self = operator.attrgetter('__self__')`). | closed | 2017-11-12T17:15:52Z | 2018-05-18T19:19:37Z | https://github.com/pyeve/eve/issues/1087 | [
"stale"
] | fernandocamargoai | 3 |
streamlit/streamlit | machine-learning | 10,615 | cache_data and cache_resource not working with DuckDB on Motherduck connection | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
I have this connections script that connects to DuckDB on Motherduck:
```python
import streamlit as st
import duckdb
from duckdb import DuckDBPyConnection
import polars as pl
import toml
@st.cache_resource
def motherduck_connection() -> DuckDBPyConnection:
with open("./secrets.toml", "r") as f:
secrets = toml.load(f)
motherduck_token = secrets["tokens"]["motherduck"]
conn = duckdb.connect(f"md:nba_data?motherduck_token={motherduck_token}")
return conn
@st.cache_data(ttl=600)
def standings_table_connection(conn: DuckDBPyConnection) -> pl.DataFrame:
standings_dataframe = pl.from_arrow(
conn.sql("SELECT * FROM nba_data_staging.teams")
)
return standings_dataframe
```
when running the streamlit app:
```python
import streamlit as st
from streamlit_components.standings_section import StandingsSection
from streamlit_components.connections import (
motherduck_connection,
standings_table_connection
)
conn = motherduck_connection()
standings_table = standings_table_connection(conn)
st.set_page_config(page_title="Streamlit: Premier League", layout="wide")
def app():
standings_section = StandingsSection(standings_table)
standings_section.display()
if __name__ == "__main__":
app()
```
Python unexpectedly quits with the error:
```bash
libc++abi: terminating due to uncaught exception of type std::runtime_error: instance allocation failed: new instance has no pybind11-registered base types
Abort trap: 6
```
when I remove the caching, it works.
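One thing worth checking: `st.cache_data` hashes every function argument to build its cache key, and Streamlit's documented convention is that parameters whose names start with an underscore (e.g. `_conn`) are excluded from hashing, which is how unhashable handles like live connections are usually passed. Below is a rough self-contained imitation of that rule, not Streamlit's real implementation:

```python
def cache_data(func):
    cache = {}

    def wrapper(*args):
        # Build the cache key from positional args, skipping any parameter
        # whose name starts with "_" (Streamlit-style unhashable handles).
        names = func.__code__.co_varnames[: func.__code__.co_argcount]
        key = tuple(arg for name, arg in zip(names, args)
                    if not name.startswith("_"))
        if key not in cache:
            cache[key] = func(*args)
        return cache[key]

    return wrapper

executions = []

@cache_data
def load_table(_conn, table):
    executions.append(table)      # record real (non-cached) executions
    return f"rows from {table}"

load_table(object(), "teams")     # stand-in for a DuckDB connection
load_table(object(), "teams")     # new connection object, still a cache hit
print(executions)                 # ['teams'] (the query ran only once)
```

If this is indeed the cause, renaming `conn` to `_conn` in `standings_table_connection` would be the first thing I'd try.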
### Reproducible Code Example
```Python
```
### Steps To Reproduce
_No response_
### Expected Behavior
_No response_
### Current Behavior
Error message:
```bash
libc++abi: terminating due to uncaught exception of type std::runtime_error: instance allocation failed: new instance has no pybind11-registered base types
Abort trap: 6
```
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: `1.42.0`
- duckdb `1.2.0`
- polars `1.22.0`
- pyarrow `19.0.0`
- Python version: `3.12.2`
- Operating System: macOS - M3 chip
- Browser: Firefox
### Additional Information
_No response_ | closed | 2025-03-03T19:37:21Z | 2025-03-04T19:08:39Z | https://github.com/streamlit/streamlit/issues/10615 | [
"type:bug",
"status:won't-fix"
] | digitalghost-dev | 5 |
MaartenGr/BERTopic | nlp | 2,223 | Issues merging model with reduced outliers, KeyError | ### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Desribe the bug
Hello! LOVE this package!
My code was working with a previous version of bertopic (not sure which), but this week I updated to the latest bertopic version (Version: 0.16.4) and now I get a KeyError '12' when trying to merge model 1 with model 2 that has the outliers reduced (no -1 topic). Merging model 1 with model 2 before reducing outliers works.
There are 12 topics overall (numbered 0 to 11), which seems related.
When I expand the traceback, it suggests the issue is here:

```text
File ~\AppData\Local\miniforge3\lib\site-packages\bertopic\_bertopic.py:3494, in BERTopic.merge_models(cls, models, min_similarity, embedding_model)
   3492     max_topic += 1
   3493     new_topics_dict[new_topic] = max_topic
-> 3494     merged_topics["topic_representations"][str(max_topic)] = selected_topics["topic_representations"][
   3495         str(new_topic)
   3496     ]
   3497     merged_topics["topic_labels"][str(max_topic)] = selected_topics["topic_labels"][str(new_topic)]
   3499     # Add new aspects
```
### Reproduction
```python
from bertopic import BERTopic
# Model 1
model1 = BERTopic.load('model1')
# Model 2 (fitted on a document list `docs` defined elsewhere)
import copy
model2 = BERTopic()
topics, probs = model2.fit_transform(docs)
model2_no_outlier = copy.deepcopy(model2)
no_outlier_topics = model2_no_outlier.reduce_outliers(docs, model2.topics_, strategy="embeddings")
# This works
merged_model = BERTopic.merge_models([model1, model2], min_similarity=.7)
# This gives the KeyError, although with previous version of bertopic it worked
merged_model = BERTopic.merge_models([model1, model2_no_outlier], min_similarity=.7)
```
### BERTopic Version
0.16.4 | closed | 2024-11-22T18:39:09Z | 2024-11-26T17:33:25Z | https://github.com/MaartenGr/BERTopic/issues/2223 | [
"bug"
] | joelem | 2 |
ets-labs/python-dependency-injector | flask | 696 | Singleton provider produces more than one unique instance | When using the `ThreadSafeSingleton` provider to inject a singleton object, the object injected into the consuming class is not the same instance used in the provider.
```python
class MyContainer(containers.DeclarativeContainer):
    my_singleton = providers.ThreadSafeSingleton(MyClass)

    my_consumer = providers.Singleton(
        MyConsumer,
        my_singleton=my_singleton(),
        some_parameter="foo"
    )
```
In this code, the `my_singleton` object is injected into the `my_consumer` provider using `my_singleton()`.
However, the `my_singleton` object used in the provider is not the same object used in the `MyConsumer` class.
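For intuition about why resolving the provider eagerly (`my_singleton()`) at container-definition time can surface as multiple distinct instances, here is a stdlib-only toy that mimics the eager-vs-deferred distinction. It is an illustration only, not the real `dependency_injector` implementation:

```python
class SingletonProvider:
    """Toy stand-in for providers.ThreadSafeSingleton (not the real library)."""

    def __init__(self, cls):
        self._cls = cls
        self._instance = None

    def __call__(self):
        if self._instance is None:
            self._instance = self._cls()
        return self._instance

    def reset(self):
        self._instance = None


class MyClass:
    pass


my_singleton = SingletonProvider(MyClass)

# `my_singleton()` at wiring time captures one concrete object:
captured = my_singleton()

# If the provider is later reset or overridden, the captured object and the
# provider's current singleton diverge:
my_singleton.reset()
assert captured is not my_singleton()

# Deferring the call until consumption time always yields the current singleton:
deferred = my_singleton
assert deferred() is my_singleton()
```

With the real library, the usual pattern is to pass the provider itself (`my_singleton=my_singleton` instead of `my_singleton()`), so the container resolves the singleton when `my_consumer()` is called.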
| closed | 2023-04-17T23:26:48Z | 2023-04-18T00:27:21Z | https://github.com/ets-labs/python-dependency-injector/issues/696 | [] | uriariel | 1 |
geopandas/geopandas | pandas | 3,499 | BUG: Buffer operation generates donut instead of filled polygon | - [ X] I have checked that this issue has not already been reported.
- [ X] I have confirmed this bug exists on the latest version of geopandas.
- [ ] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
**Note**: Please read [this guide](https://matthewrocklin.com/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug.
#### Code Sample, a copy-pastable example
```python
# Your code here
# Code to convert Natural Earth coastline to CEA
import geopandas as gp
import requests
import io
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8'}
url = 'https://naturalearth.s3.amazonaws.com/10m_physical/ne_10m_coastline.zip'
r = requests.get(url, headers=headers)
# Coasts
coasts = gp.read_file(io.BytesIO(r.content))
# Problem Buffer
ax = coasts.loc[[7]].plot()
coasts.loc[[7]].buffer(1).plot(ax=ax)
```
This is the feature

And this is the feature and its buffer.

#### Problem description
The buffer operation should return a polygon without a hole, something like this:
```python
ax = coasts.loc[[7]].buffer(1).convex_hull.plot()
coasts.loc[[7]].plot(ax=ax, color='r')
```

Here's another example
```python
ax = coasts.loc[[91]].buffer(1).plot()
coasts.loc[[91]].plot(ax=ax, color='r')
```

Both geometries are valid
```python
coasts.loc[[7]].is_valid
coasts.loc[[91]].is_valid
```
<img width="247" alt="Image" src="https://github.com/user-attachments/assets/c9aa87fe-7c98-47e4-9fdd-c896f3642591" />
Is the buffer operation taking a difference? Or am I missing something?
#### Expected Output
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.11.11 | packaged by conda-forge | (main, Dec 5 2024, 14:21:42) [Clang 18.1.8 ]
executable : /Users/myuser/miniforge3/envs/OriginLanguages/bin/python
machine : macOS-14.7.1-arm64-arm-64bit
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.13.0
GEOS lib : None
GDAL : 3.10.1
GDAL data dir: /Users/myuser/miniforge3/envs/OriginLanguages/share/gdal/
PROJ : 9.5.1
PROJ data dir: /Users/myuser/miniforge3/envs/OriginLanguages/share/proj
PYTHON DEPENDENCIES
-------------------
geopandas : 1.0.1
numpy : 2.0.2
pandas : 2.2.3
pyproj : 3.7.0
shapely : 2.0.6
pyogrio : 0.10.0
geoalchemy2: None
geopy : 2.4.1
matplotlib : 3.10.0
mapclassify: 2.8.1
fiona : None
psycopg : None
psycopg2 : None
pyarrow : 18.1.0
</details>
| open | 2025-01-21T20:45:21Z | 2025-01-30T10:49:10Z | https://github.com/geopandas/geopandas/issues/3499 | [
"bug",
"upstream issue"
] | ozak | 1 |
sqlalchemy/alembic | sqlalchemy | 559 | Cookbook recipe for database freshness check | This feature request is for a cookbook recipe to programatically check database freshness.
Snippet:
```python
from alembic import config
from alembic import script
from alembic.runtime import migration
import sqlalchemy
import exceptions  # application-local module defining DatabaseIsNotUpToDate
engine = sqlalchemy.create_engine(DATABASE_URL)  # DATABASE_URL is defined elsewhere
alembic_cfg = config.Config('alembic.ini')
directory = script.ScriptDirectory.from_config(alembic_cfg)
with engine.begin() as conn:
context = migration.MigrationContext.configure(conn)
if context.get_current_revision() != directory.get_current_head():
raise exceptions.DatabaseIsNotUpToDate('Upgrade the database.')
```
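For projects whose migration history has branched, there can be multiple heads; Alembic exposes `context.get_current_heads()` and `directory.get_heads()` for that case, and the comparison generalizes to set equality. The pure comparison can be factored out as a plain function (a sketch; the Alembic wiring is as in the snippet above):

```python
def revisions_match(current_heads, script_heads):
    """True when the revisions applied to the DB equal the newest script revisions.

    The inputs would come from context.get_current_heads() and
    directory.get_heads(); order does not matter, so compare as sets.
    """
    return set(current_heads) == set(script_heads)

# Single linear history:
assert revisions_match(("abc123",), ("abc123",))
# Branched history, same heads in a different order:
assert revisions_match(("a1", "b2"), ("b2", "a1"))
# Out-of-date database:
assert not revisions_match(("a1",), ("a1", "b2"))
```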
(Gist: https://gist.github.com/m-aciek/118d450ee59a41176214b5f93a02cc6f.) | closed | 2019-05-10T22:46:46Z | 2019-05-14T22:32:17Z | https://github.com/sqlalchemy/alembic/issues/559 | [
"feature",
"documentation"
] | m-aciek | 1 |
miguelgrinberg/Flask-Migrate | flask | 232 | flask-migrate commands result in no action at all | I am trying to start using migrations for an existing project. I created a file migrate.py, which looks like this:
```
from flask_script import Manager
from flask_migrate import MigrateCommand, Migrate
from dream_app import app, db
app.config.from_pyfile('config.py')
migrate = Migrate(app, db)
manager = Manager(app)
manager.add_command('db', MigrateCommand)
if __name__ == 'main':
    manager.run()
```
Then when I run:
`python migrate.py db init`
I get no output and no action; even the migrations folder is not created. Unfortunately, this is an application running on Python 2.7.12. The application is a big one and I would like to avoid switching to Python 3 at the moment. Could this be the reason? I use Flask-Migrate 2.2.1 and Flask-Script 2.0.6 | closed | 2018-10-10T11:36:40Z | 2018-10-29T07:38:06Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/232 | [] | jpotwor | 2
localstack/localstack | python | 11,901 | feature request: lambda nodejs22.x runtime support | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
Localstack should support the `nodejs22.x` AWS lambda runtime since it is now officially supported on the AWS side: [Lambda runtimes](https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html#runtimes-supported)
### 🧑💻 Implementation
_No response_
### Anything else?
Trying to create a lambda with the `nodejs22.x` runtime results in the following error:
```
An error occurred (InvalidParameterValueException) when calling the CreateFunction operation: Value nodejs22.x at 'runtime' failed to satisfy constraint: Member must satisfy enum value set: [nodejs20.x, provided.al2023, python3.12, java17, provided, nodejs16.x, nodejs14.x, ruby2.7, python3.10, java11, python3.11, dotnet6, go1.x, java21, nodejs18.x, provided.al2, java8, java8.al2, ruby3.2, python3.7, python3.8, python3.9] or be a valid ARN
``` | closed | 2024-11-22T07:53:34Z | 2024-11-26T13:55:05Z | https://github.com/localstack/localstack/issues/11901 | [
"type: feature",
"aws:lambda",
"status: in progress"
] | sam-extern-porsche | 2 |
developmentseed/lonboard | data-visualization | 544 | feature: add support to zoom into a specific layer | When plotting a Map with multiple layers, I'd like an option to zoom in on a specific layer. For example, in this blog https://ibis-project.org/posts/ibis-duckdb-geospatial/ I have some code that plots a point, two lines (over some roads), and the NYC streets; it looks like this
```
broad_station_layer = ScatterplotLayer.from_geopandas(
    broad_station_gdf, get_fill_color="blue", get_radius=5
)

sts_near_broad_layer = PathLayer.from_geopandas(
    sts_near_broad_gdf, get_color="red", opacity=0.4, get_width=2
)

streets_layer = PathLayer.from_geopandas(streets_gdf, get_color="grey", opacity=0.3)

m = Map(
    [
        broad_station_layer,
        sts_near_broad_layer,
        streets_layer,
    ],
    view_state={"longitude": -74.01066, "latitude": 40.7069, "zoom": 16}  # need this to zoom into desired outcome
)
m
```

However, without specifying the `view_state`, the map displays like this, and the first two layers get completely lost unless I manually zoom in. It would be great if I could zoom to a box around, say, the first two layers or one of them, without having to work out the lat, lon, and zoom values by hand.

| open | 2024-06-13T18:09:44Z | 2024-06-14T15:39:28Z | https://github.com/developmentseed/lonboard/issues/544 | [] | ncclementi | 2 |
allenai/allennlp | pytorch | 5,135 | Fast influence functions | Add an implementation of "FastIF" (https://arxiv.org/abs/2012.15781) to the `interpret.influence_functions` module.
| open | 2021-04-19T23:08:01Z | 2021-05-04T16:33:16Z | https://github.com/allenai/allennlp/issues/5135 | [
"Contributions welcome",
"Feature request"
] | epwalsh | 3 |
capitalone/DataProfiler | pandas | 838 | Fuse the functionality used in both `_merge_histogram` and the newly created `_assimilate_histogram` | **Is your feature request related to a problem? Please describe.**
In an effort to adhere to the goal of a clear paradigm, with one easy-to-understand path for each of the following profiling tasks:

Updating, Getting, and Merging

This issue focuses on clearing up the path for merging a profile (or parts of a profile), so that there is a single function path that achieves this goal.

The problem this issue addresses is the use of both `_merge_histogram` and the newly created `_assimilate_histogram`, as well as other merging processes within the DataProfiler that repeat functionality or have overlapping goals for input and output.

An example of a fix for achieving this paradigm is as follows:

With the creation of `_assimilate_histogram` we implemented a much better way to put the information from two histograms together, and we should be able to use that function throughout the code while still achieving the previously desired functionality of `_merge_histogram`. The old way of doing this can be seen in `numerical_column_stats.py` on line 1286: it recreates the histogram data, which is more memory intensive than the approach taken in `_assimilate_histogram`.
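To illustrate the assimilation idea (folding one histogram's counts into another's existing bin edges instead of recreating the histogram from data), here is a rough, stdlib-only sketch; this is a hypothetical simplification, and the real `_assimilate_histogram` apportions counts more carefully than this midpoint rule:

```python
def assimilate_histogram(src_counts, src_edges, dst_counts, dst_edges):
    """Fold `src` histogram counts into `dst`'s fixed bin edges in place.

    Each source bin's count is assigned to the destination bin containing
    the source bin's midpoint (a crude rule, for illustration only).
    """
    for count, lo, hi in zip(src_counts, src_edges[:-1], src_edges[1:]):
        mid = (lo + hi) / 2.0
        last = len(dst_counts) - 1
        for j in range(len(dst_counts)):
            if dst_edges[j] <= mid < dst_edges[j + 1] or (j == last and mid >= dst_edges[j]):
                dst_counts[j] += count
                break
    return dst_counts

merged = assimilate_histogram(
    src_counts=[1, 1, 1], src_edges=[0, 1, 2, 3],
    dst_counts=[0, 0], dst_edges=[0, 2, 4],
)
# midpoints 0.5 and 1.5 fall in [0, 2); 2.5 falls in [2, 4)
assert merged == [2, 1]
```

The point of the sketch is that the destination's bin structure is never rebuilt, which is what keeps memory usage down relative to re-binning the raw data.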
**Describe the outcome you'd like:**
I would like a singular path to merging profiles and their information that achieves the success of all currently existing functions usage.
**Additional context:**
For detail behind `_assimilate_histogram` the PR:
https://github.com/capitalone/DataProfiler/pull/815
Implements the more memory optimized solution | open | 2023-05-23T17:46:51Z | 2023-09-12T13:31:34Z | https://github.com/capitalone/DataProfiler/issues/838 | [
"Medium Priority",
"Refactor",
"contribution_day"
] | ksneab7 | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,928 | [Bug]: After installing Tensor RT, a warning pops up when logging into SD that python.exe can't find the program entrance | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
After installing TensorRT, a warning pops up when launching SD saying that python.exe can't locate the procedure entry point. The problem points to cudnn_adv_infer64_8.dll. My NVIDIA driver version is the latest (CUDA 12.4).
### Steps to reproduce the problem
no
### What should have happened?
The "entry point not found" window should not pop up. It has to be closed multiple times on every launch, which also slows down the Python startup.
### What browsers do you use to access the UI ?
Microsoft Edge
### Sysinfo
[sysinfo-2024-06-03-02-51.json](https://github.com/user-attachments/files/15528238/sysinfo-2024-06-03-02-51.json)
### Console logs
```Shell
venv "D:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.4
Commit hash: feee37d75f1b168768014e4634dcb156ee649c05
removing nvidia-cudnn-cu11
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
```
### Additional information
_No response_ | open | 2024-06-03T02:53:51Z | 2024-06-03T02:53:51Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15928 | [
"bug-report"
] | ciedon | 0 |
keras-team/keras | data-science | 20,344 | Training using a `tf.data.Dataset` and `steps_per_execution` > 32 fails | Training using a `tf.data.Dataset` and `steps_per_execution` > 32 fails with:
```ValueError: An unusually high number of `tf.data.Iterator.get_next()` calls was detected. This suggests that the `for elem in dataset: ...` idiom is used within tf.function with AutoGraph disabled. This idiom is only supported when AutoGraph is enabled.```
Reproduction code:
```
import keras
import tensorflow as tf
x = tf.random.normal((1000, 10))
y = tf.random.uniform((1000,), maxval=2, dtype=tf.int32)
# Create a tf.data.Dataset
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.shuffle(1000).batch(32)
model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(10,)),
    keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
model.compile(steps_per_execution=33)
model.fit(dataset, epochs=5)
``` | closed | 2024-10-12T15:57:39Z | 2024-10-22T23:37:04Z | https://github.com/keras-team/keras/issues/20344 | [
"type:Bug"
] | nicolaspi | 5 |
marcomusy/vedo | numpy | 974 | Extrude bug | Extruding a flat surface directly up causes missing faces with 2023.5.0.
<img width="425" alt="Screenshot 2023-11-16 at 11 57 28 am" src="https://github.com/marcomusy/vedo/assets/23271678/86b87e42-864b-49ec-98a2-9be28f50b8ca">
| closed | 2023-11-16T00:59:09Z | 2023-11-17T00:44:49Z | https://github.com/marcomusy/vedo/issues/974 | [] | JeffreyWardman | 9 |
TencentARC/GFPGAN | pytorch | 216 | TypeError: get_face_landmarks_5() got an unexpected keyword argument 'eye_dist_threshold' |
```text
Traceback (most recent call last):
  File "M:\GFPGAN\inference_gfpgan.py", line 155, in <module>
    main()
  File "M:\GFPGAN\inference_gfpgan.py", line 119, in main
    cropped_faces, restored_faces, restored_img = restorer.enhance(
  File "D:\anaconda3\envs\gfpgan\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "M:\GFPGAN\gfpgan\utils.py", line 106, in enhance
    self.face_helper.get_face_landmarks_5(only_center_face=only_center_face, eye_dist_threshold=5)
TypeError: get_face_landmarks_5() got an unexpected keyword argument 'eye_dist_threshold'
```
 | open | 2022-07-14T01:06:50Z | 2022-07-14T01:06:50Z | https://github.com/TencentARC/GFPGAN/issues/216 | [] | wsysl1989 | 0
vvbbnn00/WARP-Clash-API | flask | 33 | Could deployment via Docker Hub be supported?~ | I deploy it on my own Synology NAS, which doesn't have git installed~
Also: it fits my workflow perfectly. Great job!
"duplicate"
] | ShiFangJuMie | 2 |
huggingface/datasets | pytorch | 6,721 | Hi, do you know how to load the dataset from a local file now? | Hi, if I want to load the dataset from a local file, how do I specify the configuration name?
_Originally posted by @WHU-gentle in https://github.com/huggingface/datasets/issues/2976#issuecomment-1333455222_
| open | 2024-03-07T13:58:40Z | 2024-03-31T08:09:25Z | https://github.com/huggingface/datasets/issues/6721 | [] | Gera001 | 3 |
piskvorky/gensim | machine-learning | 2,929 | ImportError: cannot import name 'logsumexp' on KeyedVectors | #### Problem description
I'm unable to import KeyedVectors.
I guess replacing
`from scipy.misc import logsumexp`
with
`from scipy.special import logsumexp`
in the source code should fix it, but I'm not quite sure.
#### Steps/code/corpus to reproduce
```python
from gensim.models import KeyedVectors
```
#### Versions
Linux-5.3.0-61-generic-x86_64-with-Ubuntu-18.04-bionic
Python 3.6.5 (default, Apr 1 2018, 05:46:30)
[GCC 7.3.0]
Bits 64
NumPy 1.19.1
SciPy 1.5.2
| closed | 2020-09-01T01:37:18Z | 2020-09-01T13:53:27Z | https://github.com/piskvorky/gensim/issues/2929 | [
"need info"
] | luisaheise | 4 |
AirtestProject/Airtest | automation | 877 | Airtest在连接unity editor模式后,在初始化 UnityPoco(unity_editor=True)时报错 |
**描述问题bug**
Airtest在连接unity editor模式下,初始化 UnityPoco(unity_editor=True)时报错
```
File "C:\Users\AppData\Local\Programs\Python\Python37\lib\site-packages\pywinauto\findwindows.py", line 87, in find_element
raise ElementNotFoundError(kwargs)
pywinauto.findwindows.ElementNotFoundError: {'class_name': 'UnityContainerWndClass', 'title_re': 'Unity.*', 'backend': 'win32'}
```
**python 版本:** `python3.7`
**airtest 版本:** `1.1.8`
**pocoui 版本:** `1.0.81`
| open | 2021-03-25T11:39:42Z | 2021-03-25T11:39:42Z | https://github.com/AirtestProject/Airtest/issues/877 | [] | tmac001 | 0 |
graphql-python/graphene-mongo | graphql | 68 | Allow subclassing MongoengineObjectType | Right now, extending `MongoengineObjectType` by subclassing is not possible in any useful way as `__init_subclass_with_meta__` is immediately called on encountering a descendant class definition. This makes several assumptions, e.g. `Meta.model` being defined on the subclass. Extension classes might want to deduce the model via _their_ subclasses and generally wouldn't know about it at the point of class body execution.
The proposed solution is some mechanism to defer initialization until manually requested, if some flag (e.g. `manual_init`) is passed in `Meta`.
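To sketch how such a flag could behave, here is a stdlib-only toy (this illustrates only the deferral idea; it is not graphene internals, and `manual_init`/`finish_init` are hypothetical names):

```python
class Base:
    @classmethod
    def __init_subclass_with_meta__(cls, model=None, **kwargs):
        if model is None:
            raise TypeError("Meta.model is required")
        cls.model = model


class DeferrableBase(Base):
    def __init_subclass__(cls, manual_init=False, model=None, **kwargs):
        super().__init_subclass__(**kwargs)
        if manual_init:
            # Skip eager option processing; the subclass (or an extension)
            # calls finish_init() once the model is known.
            return
        cls.__init_subclass_with_meta__(model=model)

    @classmethod
    def finish_init(cls, **meta):
        cls.__init_subclass_with_meta__(**meta)


# Deferred path: the model is supplied later, once it can be deduced.
class Enhanced(DeferrableBase, manual_init=True):
    pass

Enhanced.finish_init(model="Asset")
assert Enhanced.model == "Asset"

# Eager path still works as before.
class Normal(DeferrableBase, model="Tree"):
    pass

assert Normal.model == "Tree"
```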
My current workaround is using multiple inheritance which allows customization. A superficial example - let's say we wish to use `quux` instead of `model` for the `Meta` key defining the model:
```python
class Enhancer:
    @classmethod
    def __init_subclass_with_meta__(cls, quux, **kwargs):
        kwargs["model"] = quux
        super().__init_subclass_with_meta__(**kwargs)


class AssetSchema(Enhancer, MongoengineObjectType):
    class Meta:
        interfaces = (relay.Node,)
        quux = Asset
``` | closed | 2019-02-01T15:08:04Z | 2019-02-08T14:45:26Z | https://github.com/graphql-python/graphene-mongo/issues/68 | [] | tambeta | 3 |
akfamily/akshare | data-science | 5,926 | stock_financial_us_analysis_indicator_em errors when downloading Honda (HMC) financial data | 
```text
Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm 2024.1.1\plugins\python\helpers\pydev\pydevd.py", line 1535, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\JetBrains\PyCharm 2024.1.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "D:\codes\financial_risk_prediction_Rating_20250124\fetching_financial_all_form_interface_20250318.py", line 201, in <module>
    interface_financial.symbol_list_to_df()
  File "D:\codes\financial_risk_prediction_Rating_20250124\fetching_financial_all_form_interface_20250318.py", line 143, in symbol_list_to_df
    statement_df = ak.stock_financial_us_analysis_indicator_em(symbol=code, indicator="单季报")
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\m11449a\AppData\Local\anaconda3\envs\time_series_environment\Lib\site-packages\akshare\stock_fundamental\stock_finance_us_em.py", line 164, in stock_financial_us_analysis_indicator_em
    temp_df = pd.DataFrame(data_json["result"]["data"])
              ~~~~~~~~~~~~~~~~~~~^^^^^^^^
TypeError: 'NoneType' object is not subscriptable
```
 | closed | 2025-03-19T02:10:18Z | 2025-03-19T08:54:28Z | https://github.com/akfamily/akshare/issues/5926 | [
"bug"
] | xiahuadong1981 | 1 |
autogluon/autogluon | data-science | 4,337 | [BUG] Attribute Error when fitting a TimeSeriesPredictor instance | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [ x ] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [ ] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [ ] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
I'm running into an `AttributeError` when trying to fit an AutoGluon time series model in Databricks. I'm following the example code in [this documentation](https://auto.gluon.ai/stable/tutorials/timeseries/forecasting-quick-start.html) and reaching the error specifically when calling `.fit()` on the `TimeSeriesPredictor` instance.
**Expected behavior**
I'm hoping to have the `.fit()` command run entirely with no error so that I can kick-start an AutoGluon time series project.
**To Reproduce**
```python
single_series_training_df = pd.read_csv('/dbfs/FileStore/data.csv')
single_series_training_df['id'] = single_series_training_df.index
single_series_training_df['Date'] = pd.to_datetime(single_series_training_df['Date'])

single_series_train_data = TimeSeriesDataFrame.from_data_frame(
    single_series_training_df,
    id_column="id",
    timestamp_column="Date"
)

predictor = TimeSeriesPredictor(
    prediction_length=7,
    path="/dbfs/FileStore/autogluon-path",
    target="Sales",
    eval_metric="RMSE",
    freq='D'
)

predictor.fit(
    single_series_train_data,
    time_limit=600
)
```
Databricks cluster configuration
- Policy: Single User
- Databricks Runtime Version: 13.3 LTS (includes Apache Spark 3.4.1, Scala 2.12)
- Use photon acceleration
- Worker type: E8as_v4 (workers=1, 64 GB memory, 8 cores)
- Driver type: E8as_v4 (64 GB memory, 8 cores)
**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
```python
INSTALLED VERSIONS
------------------
date : 2024-07-23
time : 18:38:39.267847
python : 3.10.12.final.0
OS : Linux
OS-release : 5.15.0-1061-azure
Version : #70~20.04.1-Ubuntu SMP Mon Apr 8 15:38:58 UTC 2024
machine : x86_64
processor : x86_64
num_cores : 8
cpu_ram_mb : 58770.0
cuda version : None
num_gpus : 0
gpu_ram_mb : []
avail_disk_size_mb : 203698
accelerate : 0.21.0
autogluon : 1.1.1
autogluon.common : 1.1.1
autogluon.core : 1.1.1
autogluon.features : 1.1.1
autogluon.multimodal : 1.1.1
autogluon.tabular : 1.1.1
autogluon.timeseries : 1.1.1
boto3 : 1.24.28
catboost : 1.2.5
defusedxml : 0.7.1
evaluate : 0.4.2
fastai : 2.7.15
gluonts : 0.15.1
hyperopt : 0.2.7
imodels : None
jinja2 : 3.1.4
joblib : 1.2.0
jsonschema : 4.21.1
lightgbm : 4.3.0
lightning : 2.3.3
matplotlib : 3.5.2
mlforecast : 0.10.0
networkx : 3.3
nlpaug : 1.1.11
nltk : 3.8.1
nptyping : 2.4.1
numpy : 1.24.4
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.9
optimum : 1.18.1
optimum-intel : None
orjson : 3.10.6
pandas : 2.2.2
pdf2image : 1.17.0
Pillow : 10.4.0
psutil : 5.9.0
pytesseract : 0.3.10
pytorch-lightning : 2.3.3
pytorch-metric-learning: 2.3.0
ray : 2.10.0
requests : 2.32.3
scikit-image : 0.20.0
scikit-learn : 1.4.0
scikit-learn-intelex : None
scipy : 1.9.1
seqeval : 1.2.2
setuptools : 63.4.1
skl2onnx : None
statsforecast : 1.4.0
tabpfn : None
tensorboard : 2.17.0
text-unidecode : 1.3
timm : 0.9.16
torch : 2.3.1
torchmetrics : 1.2.1
torchvision : 0.18.1
tqdm : 4.66.4
transformers : 4.39.3
utilsforecast : 0.0.10
vowpalwabbit : None
xgboost : 2.0.3
12:49 PM (<1s)
```
</details>
**Here's the output from the run:**
```text
Beginning AutoGluon training... Time limit = 600s
AutoGluon will save models to '/dbfs/FileStore/autogluon-path'

System Info
AutoGluon Version:  1.1.1
Python Version:     3.10.12
Operating System:   Linux
Platform Machine:   x86_64
Platform Version:   #70~20.04.1-Ubuntu SMP Mon Apr 8 15:38:58 UTC 2024
CPU Count:          8
GPU Count:          0
Memory Avail:       45.07 GB / 57.39 GB (78.5%)
Disk Space Avail:   1048576.00 GB / 1048576.00 GB (100.0%)

Fitting with arguments:
{'enable_ensemble': True,
 'eval_metric': RMSE,
 'freq': 'D',
 'hyperparameters': 'default',
 'known_covariates_names': [],
 'num_val_windows': 1,
 'prediction_length': 7,
 'quantile_levels': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
 'random_seed': 123,
 'refit_every_n_windows': 1,
 'refit_full': False,
 'skip_model_selection': False,
 'target': 'Sales',
 'time_limit': 600,
 'verbosity': 2}
```
**And here's the error message:**
```text
AttributeError: 'DataFrame' object has no attribute 'freq'

AttributeError                            Traceback (most recent call last)
File <command-1861354523862313>, line 1
----> 1 predictor = TimeSeriesPredictor(prediction_length=7, target='Sales', freq='D').fit(single_series_train_data)

File /local_disk0/.ephemeral_nfs/envs/pythonEnv-145221fe-160f-4320-878f-44f193f58a1f/lib/python3.10/site-packages/autogluon/core/utils/decorators.py:31, in unpack.<locals>._unpack_inner.<locals>._call(*args, **kwargs)
     28 @functools.wraps(f)
     29 def _call(*args, **kwargs):
     30     gargs, gkwargs = g(*other_args, *args, **kwargs)
---> 31     return f(*gargs, **gkwargs)

File /local_disk0/.ephemeral_nfs/envs/pythonEnv-145221fe-160f-4320-878f-44f193f58a1f/lib/python3.10/site-packages/autogluon/timeseries/predictor.py:701, in TimeSeriesPredictor.fit(self, train_data, tuning_data, time_limit, presets, hyperparameters, hyperparameter_tune_kwargs, excluded_model_types, num_val_windows, val_step_size, refit_every_n_windows, refit_full, enable_ensemble, skip_model_selection, random_seed, verbosity)
    698 logger.info("\nFitting with arguments:")
    699 logger.info(f"{pprint.pformat({k: v for k, v in fit_args.items() if v is not None})}\n")
--> 701 train_data = self._check_and_prepare_data_frame(train_data, name="train_data")
    702 logger.info(f"Provided train_data has {self._get_dataset_stats(train_data)}")
    704 if val_step_size is None:

File /local_disk0/.ephemeral_nfs/envs/pythonEnv-145221fe-160f-4320-878f-44f193f58a1f/lib/python3.10/site-packages/autogluon/timeseries/predictor.py:314, in TimeSeriesPredictor._check_and_prepare_data_frame(self, data, name)
    312     logger.info(f"Inferred time series frequency: '{df.freq}'")
    313 else:
--> 314     if df.freq != self.freq:
    315         logger.warning(f"{name} with frequency '{df.freq}' has been resampled to frequency '{self.freq}'.")
    316         df = df.convert_frequency(freq=self.freq)

File /databricks/python/lib/python3.10/site-packages/pandas/core/generic.py:5575, in NDFrame.__getattr__(self, name)
   5568 if (
   5569     name not in self._internal_names_set
   5570     and name not in self._metadata
   5571     and name not in self._accessors
   5572     and self._info_axis._can_hold_identifiers_and_holds_name(name)
   5573 ):
   5574     return self[name]
-> 5575 return object.__getattribute__(self, name)
```
| closed | 2024-07-23T18:40:17Z | 2024-11-08T15:49:49Z | https://github.com/autogluon/autogluon/issues/4337 | [
"bug: unconfirmed",
"Needs Triage"
] | ANNIKADAHLMANN-8451 | 5 |
holoviz/panel | plotly | 6,795 | Select options not visible when using MaterialTemplate dark theme | #### ALL software version info
Windows 10 Pro
Chrome 124.0.6367.78
Python 3.10.2
panel==1.4.2
panel-modal==0.4.0
(requirements.txt below)
#### Description of expected behavior and the observed behavior
Observed: Select widgets render options with white text and white background, making text unreadable.
Expected: Select options should render with dark background
#### Complete, minimal, self-contained example code that reproduces the issue
```
import panel as pn
pn.extension()
template = pn.template.MaterialTemplate(title="Select Test", theme='dark')
states = pn.widgets.Select(name='States', options=['Arizona', 'California', 'Connecticut', 'Kansas', 'Texas'], value='California')
template.main.append(states)
template.show()
```
#### Screenshots or screencasts of the bug in action

#### requirements.txt
anyio==3.7.1
appnope==0.1.3
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
arrow==1.2.3
asttokens==2.2.1
async-lru==2.0.4
attrs==23.1.0
Babel==2.12.1
backcall==0.2.0
beautifulsoup4==4.12.2
bleach==6.0.0
bokeh==3.4.0
boto3==1.34.81
botocore==1.34.81
cachetools==5.3.1
certifi==2023.5.7
cffi==1.15.1
chardet==5.2.0
charset-normalizer==3.1.0
click==8.1.6
colorama==0.4.6
colorcet==3.0.1
comm==0.1.3
contourpy==1.2.0
debugpy==1.6.7
decorator==5.1.1
defusedxml==0.7.1
embeddify==0.3.1
et-xmlfile==1.1.0
exceptiongroup==1.1.2
executing==1.2.0
fastjsonschema==2.17.1
filelock==3.12.2
fqdn==1.5.1
greenlet==3.0.3
h11==0.14.0
holoviews==1.18.1
humanize==4.7.0
hvplot==0.9.2
idna==3.4
ijson==3.2.3
iniconfig==2.0.0
ipycytoscape==1.3.3
ipyiframe==0.1.0
ipykernel==6.23.2
ipysheet==0.7.0
ipython==8.14.0
ipython-genutils==0.2.0
ipyvue==1.9.2
ipyvuetify==1.8.10
ipywidgets==8.0.6
ipywidgets-bokeh==1.5.0
isoduration==20.11.0
jedi==0.18.2
Jinja2==3.1.2
jmespath==1.0.1
json5==0.9.14
jsonlines==4.0.0
jsonpointer==2.4
jsonschema==4.18.6
jsonschema-specifications==2023.7.1
jupyter==1.0.0
jupyter-console==6.6.3
jupyter-events==0.7.0
jupyter-lsp==2.2.0
jupyter_client==8.2.0
jupyter_core==5.3.1
jupyter_server==2.7.0
jupyter_server_terminals==0.4.4
jupyterlab==4.0.4
jupyterlab-pygments==0.2.2
jupyterlab-widgets==3.0.7
jupyterlab_server==2.24.0
-e git+ssh://git@github.com/jazl/lexo-jupyter.git@1793a20f6dff11863de62970fe66e4b08d0b4d07#egg=lexo
linear-tsv==1.1.0
linkify-it-py==2.0.2
Markdown==3.4.4
markdown-it-py==3.0.0
MarkupSafe==2.1.3
matplotlib-inline==0.1.6
mdit-py-plugins==0.4.0
mdurl==0.1.2
mistune==2.0.5
nbclient==0.8.0
nbconvert==7.5.0
nbformat==5.9.0
nest-asyncio==1.5.6
notebook==7.0.2
notebook_shim==0.2.3
numpy==1.25.0
openpyxl==3.1.2
overrides==7.3.1
packaging==23.1
pandas==2.0.2
pandocfilters==1.5.0
panel==1.4.2
panel-modal==0.4.0
param==2.0.0
parso==0.8.3
pexpect==4.8.0
pickleshare==0.7.5
Pillow==9.5.0
platformdirs==3.5.3
pluggy==1.2.0
prometheus-client==0.17.1
prompt-toolkit==3.0.38
psutil==5.9.5
ptyprocess==0.7.0
pure-eval==0.2.2
pycparser==2.21
pyct==0.5.0
Pygments==2.15.1
pymdown-extensions==10.1
pyrsistent==0.19.3
pytest==7.4.0
python-dateutil==2.8.2
python-dotenv==1.0.1
python-json-logger==2.0.7
pytz==2023.3
pyviz_comms==3.0.0
pywin32==306
pywinpty==2.0.11
PyYAML==6.0.1
pyzmq==25.1.0
qtconsole==5.4.3
QtPy==2.3.1
reacton==1.7.1
referencing==0.30.1
requests==2.31.0
rfc3339-validator==0.1.4
rfc3986-validator==0.1.1
rich==13.5.2
rich-click==1.6.1
rpds-py==0.9.2
s3transfer==0.10.1
Send2Trash==1.8.2
six==1.16.0
sniffio==1.3.0
solara==1.19.0
soupsieve==2.4.1
spectate==1.0.1
SQLAlchemy==2.0.29
stack-data==0.6.2
starlette==0.31.0
tabulator==1.53.5
terminado==0.17.1
tinycss2==1.2.1
tomli==2.0.1
tornado==6.3.2
tqdm==4.66.1
traitlets==5.9.0
typing_extensions==4.7.1
tzdata==2023.3
uc-micro-py==1.0.2
unicodecsv==0.14.1
uri-template==1.3.0
urllib3==2.0.3
uvicorn==0.23.2
watchdog==3.0.0
wcwidth==0.2.6
webcolors==1.13
webencodings==0.5.1
websocket-client==1.6.1
websockets==11.0.3
widgetsnbextension==4.0.7
xlrd==2.0.1
xyzservices==2023.10.1
| open | 2024-04-26T15:12:10Z | 2024-04-26T15:12:10Z | https://github.com/holoviz/panel/issues/6795 | [] | jazl | 0 |
huggingface/datasets | tensorflow | 7,430 | Error in code "Time to slice and dice" from course "NLP Course" | ### Describe the bug
When we execute this code:
```
frequencies = (
    train_df["condition"]
    .value_counts()
    .to_frame()
    .reset_index()
    .rename(columns={"index": "condition", "condition": "frequency"})
)
frequencies.head()
```
the answer should be like this:

| condition | frequency |
| --- | --- |
| birth control | 27655 |
| depression | 8023 |
| acne | 5209 |
| anxiety | 4991 |
| pain | 4744 |
but the actual output is different:

| frequency | count |
| --- | --- |
| birth control | 27655 |
| depression | 8023 |
| acne | 5209 |
| anxiety | 4991 |
| pain | 4744 |
This is not correct; the correct code is:
```
frequencies = (
train_df["condition"]
.value_counts()
.to_frame()
.reset_index()
.rename(columns={"index": "condition", "count": "frequency"})
)
```
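For reference, the column naming here depends on the installed pandas version; a standalone sketch (the toy Series is made up):

```python
import pandas as pd

s = pd.Series(["a", "a", "a", "b", "b", "c"], name="condition")

frequencies = s.value_counts().to_frame().reset_index()
# pandas >= 2.0 names the columns ['condition', 'count'];
# pandas < 2.0 named them ['index', 'condition'].
print(frequencies.columns.tolist())

# Version-agnostic fix: assign the column names positionally.
frequencies.columns = ["condition", "frequency"]
print(frequencies.head())
```

Assigning the names positionally sidesteps the rename keys entirely, so it works on both pandas 1.x and 2.x.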
### Steps to reproduce the bug
```
frequencies = (
train_df["condition"]
.value_counts()
.to_frame()
.reset_index()
.rename(columns={"index": "condition", "condition": "frequency"})
)
frequencies.head()
```
### Expected behavior
| condition | frequency |
| --- | --- |
| birth control | 27655 |
| depression | 8023 |
| acne | 5209 |
| anxiety | 4991 |
| pain | 4744 |
### Environment info
Google Colab | closed | 2025-02-28T11:36:10Z | 2025-03-05T11:32:47Z | https://github.com/huggingface/datasets/issues/7430 | [] | Yurkmez | 2 |
matterport/Mask_RCNN | tensorflow | 2,755 | All instances not created with load_mask() and not visualized with display_instances() | Hello,
I am trying to train Mask R-CNN with my own dataset (.tif images, annotations in shapefile) by referring to the code available in this repo. While checking whether the masks are loaded correctly with the code below, I can see only 2 instances for each image, even though there are multiple instances and I can see their x, y properties with train_dataset.image_info[ ].
When looking at the information for a random example, train_dataset.image_info[87], I get the following results:

But when visualizing the image, only 2 instances are shown, even though there are around 12.

The code for my load_mask is as follows:
```
def load_mask(self, image_id):
info = self.image_info[image_id]
num_ids = []
mask = np.zeros([info['height'], info['width'], len(info['polygons'])], dtype=np.uint8) #Change datatype?
for i, p in enumerate(info['polygons']):
# Get indexes of pixels inside the polygon and set them to 1
rr, cc = skimage.draw.polygon(p['all_points_y'], p['all_points_x'])
mask[rr, cc, i] = 1
if p['attribute'] == 'deciduous':
num_ids.append(1)
elif p['attribute'] == 'conifers':
num_ids.append(2)
class_ids = np.array(num_ids, dtype=np.int32)
return mask.astype(bool), class_ids
```
The code I am using to look at a random example is:
```
image_id = 89
image = train_dataset.load_image(image_id)
mask, class_ids = train_dataset.load_mask(image_id)
bbox = utils.extract_bboxes(mask)
# Load and display
visualize.display_instances(image, bbox, mask, class_ids, train_dataset.class_names)
```
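One structural invariant worth asserting is that `display_instances` draws one box/mask per channel, so `mask.shape[-1]` must equal `len(class_ids)` and the number of polygons. A numpy-only sketch (the pixel coordinates are made up, standing in for `skimage.draw.polygon` output):

```python
import numpy as np

# Hypothetical polygons: a few explicit pixel coordinates per instance.
polygons = [
    {"all_points_y": [2, 2, 3], "all_points_x": [2, 3, 2], "attribute": "deciduous"},
    {"all_points_y": [6, 7, 7], "all_points_x": [6, 6, 7], "attribute": "conifers"},
]
h = w = 10
mask = np.zeros((h, w, len(polygons)), dtype=np.uint8)
num_ids = []
for i, p in enumerate(polygons):
    rr = np.array(p["all_points_y"])
    cc = np.array(p["all_points_x"])
    mask[rr, cc, i] = 1  # one channel per instance
    num_ids.append(1 if p["attribute"] == "deciduous" else 2)
class_ids = np.array(num_ids, dtype=np.int32)

# Channel count and class count must both match the number of polygons.
assert mask.shape[-1] == len(class_ids) == len(polygons)
print(mask.sum(axis=(0, 1)))  # pixels per instance channel -> [3 3]
```

If the same check on the real dataset reports fewer channels than polygons, the problem is in how `info['polygons']` is populated rather than in `display_instances`.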
Is there anything I should change in load_mask or visualize.py to visualize/load all the masks for each image?
Any help would be greatly appreciated. Thank you!
| closed | 2022-01-07T22:51:19Z | 2022-03-14T11:02:51Z | https://github.com/matterport/Mask_RCNN/issues/2755 | [] | kecyarchana | 0 |
gradio-app/gradio | data-visualization | 10,152 | False positive: UserWarning: The value passed into gr.Dropdown() is not in the list of choices | ### Describe the bug
I have an event listener function which updates the choices of a dropdown component and sets its value to be one of the new valid choices, all in one return statement like this:
```python
new_choices = ...
return gr.Dropdown(choices = new_choices, value = new_choices[0])
```
I am getting a false positive: `UserWarning: The value passed into gr.Dropdown() is not in the list of choices` in this case.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
new_choices = ...
return gr.Dropdown(choices = new_choices, value = new_choices[0])
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Windows 11
python 3.12
gradio 5.7.1
```
### Severity
I can work around it | closed | 2024-12-07T21:25:12Z | 2025-01-15T13:51:09Z | https://github.com/gradio-app/gradio/issues/10152 | [
"bug",
"needs repro"
] | JackismyShephard | 3 |
Miserlou/Zappa | django | 1,965 | Set_Cookie option sets duplicate cookies on AWS Lambda | ## Context
I have an API running Python3.7 and Zappa (in a virtualenv).
I am setting 6 cookies by using the option "set_cookie" in flask. It looks something like this:
```
resp = make_response(jsonify({'success':'true', 'message': 'Successfully authenticated!'}), 200)
resp.set_cookie("1", value="1", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("2", value="2", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("3", value="3", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("4", value="4", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("5", value="5", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
resp.set_cookie("6", value="6", secure=True, samesite='Lax', domain=".example.com",max_age=3600)
return resp
```
On localhost testing Flask, this works as expected.
If I deploy the same code to AWS using Zappa, the response header will show 36 "set-cookie" headers. So the formula here is n^2. So if I add 4 cookies using the above method, it will show 16 in the request header.
The browser takes care of duplicate cookies, but the response from the API is still huge because of this issue.
Same thing happens if I use:
`resp.headers.add("set-cookie", "1=1; Domain=.example.com; Max-Age=3600; Secure; Path=/; SameSite=Lax")`
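For comparison, the same pattern served by Flask's test client (no Zappa or API Gateway in the path) emits exactly one Set-Cookie header per cookie; a minimal local repro sketch (cookie attributes simplified):

```python
from flask import Flask, jsonify, make_response

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response(jsonify(success=True), 200)
    for i in range(1, 7):
        resp.set_cookie(str(i), value=str(i), max_age=3600)
    return resp

with app.test_client() as client:
    set_cookie_headers = client.get("/").headers.getlist("Set-Cookie")

print(len(set_cookie_headers))  # 6 under plain Flask, not 36
```

So the duplication has to be introduced somewhere between the WSGI response and the API Gateway response.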
## Expected Behavior
I believe Zappa or something at AWS is at fault here. Expected behaviour is to send 6 "set-cookie" headers and not 36.
## Actual Behavior
Sets n^2 "set-cookie" headers in the response.
## Steps to Reproduce
Deploy a Flask route using Zappa which sets the cookies. Use the code above.
## Your Environment
* Zappa version used: 0.48.2
* Operating System and Python version: Ubuntu 18.04, Python3.7
* The output of `pip freeze`: https://pastebin.com/d4QTaTuG
* Your `zappa_settings.py`: https://pastebin.com/d1GK8sbe | closed | 2019-11-18T08:22:45Z | 2020-02-12T21:10:33Z | https://github.com/Miserlou/Zappa/issues/1965 | [] | ZappaUserMan | 0 |
mwaskom/seaborn | data-visualization | 2,938 | Feature Req: Face heatmap | Hello,
I was recently working on face verification and I wanted something like a correlation matrix for faces. I couldn't find any, so I made a script for that. I think this will be useful for others, so I was wondering if I could add it here in seaborn under heatmap?
We do have sns.heatmap, but that doesn't take images as name/labels.
This is how it looks.

| closed | 2022-08-05T18:37:16Z | 2022-08-11T11:57:56Z | https://github.com/mwaskom/seaborn/issues/2938 | [] | 0xrushi | 1 |
plotly/dash-table | dash | 314 | Menu for hiding and showing columns | - Ability to add / remove columns from a predefined list of columns using a contextual menu.
- All columns would appear in this contextual menu as a checklist
- The visible columns would be “checked” and the hidden columns would be “unchecked”
- The user could click on an icon in the header to display the menu
- This feature would be toggled on/off with a top-level table property
- These settings would be “save-able” in the user’s browser session. So, if a user selected a series of columns, they could “save” these options (to the browser’s localstorage) and when they re-visit the app, the same set of columns would be displayed.
- This would only work on the user’s browser. It would not work across computers.
- This would use `localstorage` to save data
- This menu would be “global” for the table. It would not be displayed on a per-column basis.
The UI for this (and the other features) will be considered in a separate issue | closed | 2018-12-19T22:06:16Z | 2019-08-08T20:28:29Z | https://github.com/plotly/dash-table/issues/314 | [
"dash-type-enhancement",
"dash-meta-sponsored",
"size: 8"
] | chriddyp | 11 |
huggingface/transformers | python | 36,598 | lm_head parameters missing from named_parameters() in Qwen2.5-VL-3B-Instruct model | ### System Info
```
- `transformers` version: 4.49.0
- Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
- Python version: 3.10.16
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.5.0
- Accelerate version: 1.0.1
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- use_cpu: False
- debug: False
- num_processes: 8
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- deepspeed_config: {'deepspeed_config_file': 'LLaMA-Factory/examples/deepspeed/ds_model_parallel_config.json', 'zero3_init_flag': True}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: 0.15.4
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: No
- Using GPU in script?: No
- GPU type: NVIDIA H200
```
### Who can help?
## 🐛 Bug Description
When loading the **Qwen2.5-VL-3B-Instruct** model from Hugging Face, the `lm_head` parameters (`lm_head.weight` and `lm_head.bias`) **do not appear** in `named_parameters()`, although they correctly appear in `state_dict()`.
This behavior differs from other Qwen-2.5-VL models (**Qwen2.5-VL-7B-Instruct**, **Qwen2.5-VL-72B-Instruct**), creating inconvenience during fine-tuning, optimizer setup, and parameter freezing tasks.
@amyeroberts, @qubvel
---
## 📌 Additional Context
- It appears the issue is related to how `lm_head` is registered within the model structure.
- Manually accessing `model.lm_head` works, but this is inconsistent with standard practice.
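If the 3B checkpoint ties `lm_head.weight` to the input embedding (an assumption worth checking via `config.tie_word_embeddings`), this is standard PyTorch behavior: `named_parameters()` deduplicates shared tensors by default, while `state_dict()` keeps both keys. A torch-only sketch (`TinyTiedLM` is a made-up toy model):

```python
from torch import nn

class TinyTiedLM(nn.Module):
    def __init__(self, vocab_size=10, hidden=4):
        super().__init__()
        self.embed_tokens = nn.Embedding(vocab_size, hidden)
        self.lm_head = nn.Linear(hidden, vocab_size, bias=False)
        self.lm_head.weight = self.embed_tokens.weight  # weight tying

model = TinyTiedLM()

dedup = [n for n, _ in model.named_parameters()]  # remove_duplicate=True by default
full = [n for n, _ in model.named_parameters(remove_duplicate=False)]

print(dedup)                                    # ['embed_tokens.weight'] only
print("lm_head.weight" in full)                 # True
print("lm_head.weight" in model.state_dict())   # True
```

Passing `remove_duplicate=False` surfaces the tied name for freezing or optimizer bookkeeping.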
---
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import Qwen2_5_VLForConditionalGeneration
model_name = "Qwen/Qwen2.5-VL-3B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# Check named_parameters for lm_head
has_lm_head_in_named_params = any("lm_head" in name for name, _ in model.named_parameters())
print(f"lm_head in named_parameters(): {has_lm_head_in_named_params}")
# Check state_dict for lm_head
has_lm_head_in_state_dict = any("lm_head" in key for key in model.state_dict().keys())
print(f"lm_head in state_dict(): {has_lm_head_in_state_dict}")
```
### Output:
```bash
lm_head in named_parameters(): False
lm_head in state_dict(): True
```
### Expected behavior
The `lm_head` parameters should appear in both `named_parameters()` and `state_dict()` outputs consistently, similar to other Qwen-2.5-VL models.
Example expected output:
```bash
lm_head in named_parameters(): True
lm_head in state_dict(): True
``` | open | 2025-03-07T02:58:29Z | 2025-03-17T22:28:20Z | https://github.com/huggingface/transformers/issues/36598 | [
"bug"
] | Buhua-Liu | 2 |
FujiwaraChoki/MoneyPrinter | automation | 128 | [BUG] gpt4 not working | **Describe the bug**
The OpenAI version in requirements.txt needs to be pinned; the latest version appears to be incompatible with this project.
**To Reproduce**
Set "AI Model" to OpenAI GPT-4
**Expected behavior**
OpenAI GPT-4 generates a script
**Screenshots**
```
[Video to be generated]
Subject: ...
AI Model: gpt4
[-] Error:
You tried to access openai.ChatCompletion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
```
**Desktop (please complete the following information):**
- OS: Mac OS X 14.2.1 (23C71)
- Python Version 3.11
| closed | 2024-02-09T18:33:19Z | 2024-02-10T02:51:25Z | https://github.com/FujiwaraChoki/MoneyPrinter/issues/128 | [] | moshejs | 3 |
huggingface/datasets | numpy | 7,058 | New feature type: Document | It would be useful for PDF.
https://github.com/huggingface/dataset-viewer/issues/2991#issuecomment-2242656069 | open | 2024-07-22T10:49:20Z | 2024-07-22T10:49:20Z | https://github.com/huggingface/datasets/issues/7058 | [] | severo | 0 |
replicate/cog | tensorflow | 2,006 | gitignore `.cog` directory by default | [Python 3.13 has a neat trick where it gitignores virtualenvs by default](https://www.pythonmorsels.com/python-313-whats-new/#git-friendly-virtual-environments) by putting a `.gitignore` file in them with the content `*`. We should do that for `.cog` directories so you don't accidentally add it to Git repositories. | open | 2024-10-19T06:04:44Z | 2025-03-24T14:53:14Z | https://github.com/replicate/cog/issues/2006 | [
"next"
] | bfirsh | 0 |
sktime/pytorch-forecasting | pandas | 1,250 | Question: Rolling window dataset with constant step size | - PyTorch-Forecasting version: 0.10.3
- PyTorch version: 1.13.1
- Python version: 3.8.16
- Operating System: Windows 11
### Question
Is it possible to create a rolling window dataset with constant step size using the TimeSeriesDataSet?
### Wanted feature
If it does not exist already, then this is a feature request.
```
training = TimeSeriesDataSet(
train_df,
time_idx = "time_idx",
target = "target",
group_ids=["static"],
min_encoder_length=24,
max_encoder_length=24,
min_prediction_length=24,
max_prediction_length=24,
step_size = 24)
```
### Actual behavior
When using the TimeSeriesDataSet you create a rolling window with a step size of 1. I'm working on day-ahead forecasting, so a constant step size of 24 hours (one day) is wanted.
I realized when plotting the predictions that the dataset increments by 1, and 'solved' it with the following code:
```
# Get the best model
best_model_path = trainer.checkpoint_callback.best_model_path
best_model = DeepAR.load_from_checkpoint(best_model_path)
# see results from the best model
raw_predictions, x, index = best_model.predict(val_dataloader, mode="raw", return_x=True, return_index=True, n_samples=100)
### realized that predictions are not made with 24h steps, but 1h steps, due to the plots being very similar ###
# get days in year
days = np.arange(0,len(index),24)
# select some random days
random_days = sample(list(days),10)
#plot predictions from random days
for day in random_days: # plot 10 examples
best_model.plot_prediction(x, raw_predictions, idx=day, add_loss_to_title=True)
plt.suptitle(f"day: {val_df.iloc[day,0].split(' ')[0]}")
```
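In the meantime, the fixed-stride window starts can be computed up front (a numpy sketch; `window_starts` is a hypothetical helper, not part of the library):

```python
import numpy as np

def window_starts(n_steps, encoder_len=24, horizon=24, step=24):
    """Start indices of rolling windows taken with a constant stride."""
    last_start = n_steps - (encoder_len + horizon)
    return np.arange(0, last_start + 1, step)

# One week of hourly data -> starts at 0, 24, 48, 72, 96, 120.
print(window_starts(24 * 7))
```

The resulting indices can then be used to subset predictions or dataloader samples, exactly as the `np.arange(0, len(index), 24)` workaround does.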
| open | 2023-02-09T10:40:54Z | 2023-02-09T10:40:54Z | https://github.com/sktime/pytorch-forecasting/issues/1250 | [] | ssvenoe | 0 |
PablocFonseca/streamlit-aggrid | streamlit | 263 | Can't pickle local object 'GridOptionsBuilder.__init__.<locals>.ddict' | Hello.
I'm using the @st.cache_data wrapper in a function where I create the grid_options object. Since version 1.0.0 I've been getting the error in the title. The code looks like this:
```python
@st.cache_data
def prepare_dataframe_to_view(_df):
gb = GridOptionsBuilder.from_dataframe(_df)
#...
grid_options = gb.build()
#...
return grid_options # Here I've got an error
```
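Until the builder returns picklable options, one workaround is to convert the built options to plain dicts before returning them from the cached function. A sketch (`build_options` just mimics the library's internal local `ddict` factory, which is what pickle chokes on):

```python
import pickle
from collections import defaultdict

def build_options():
    def ddict():  # local function: pickle cannot serialize it
        return defaultdict(ddict)
    opts = defaultdict(ddict)
    opts["columnDefs"] = [{"field": "a"}]
    return opts

def to_plain(obj):
    """Recursively convert defaultdicts/lists to plain containers so the result pickles."""
    if isinstance(obj, dict):
        return {k: to_plain(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [to_plain(v) for v in obj]
    return obj

opts = build_options()
try:
    pickle.dumps(opts)  # fails: the default_factory is a local function
except Exception as e:
    print(type(e).__name__)

plain = to_plain(opts)
pickle.dumps(plain)     # succeeds once the local factory is gone
```

With this, `return to_plain(gb.build())` inside the `@st.cache_data` function should pickle cleanly; switching to `@st.cache_resource`, which skips pickling, is another option.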
| open | 2024-04-10T09:47:11Z | 2024-04-10T19:17:21Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/263 | [] | iivvaanni | 1 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,396 | [Bug]: SGM Uniform and Uniform produces visual artifacts | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?

Uniform artifacts

Karras no artifacts
I'm using DreamShaperXL v21 Turbo with CFG 2, Euler A for the first pass and DPM 2 for the hires pass.
### Steps to reproduce the problem
1. Write a prompt
2. Set SGM Uniform or Uniform as scheduler
3. Generate
### What should have happened?
No artifacts?
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
[sysinfo-2024-03-27-20-41.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/14780976/sysinfo-2024-03-27-20-41.json)
### Console logs
```Shell
venv "C:\wbcnvme\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.8.0-273-g8687163f
Commit hash: 8687163f7fe28fae23131726094d4bee32db51bb
CUDA 12.1
Launching Web UI with arguments: --xformers
ControlNet preprocessor location: C:\wbcnvme\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-03-27 23:24:53,144 - ControlNet - INFO - ControlNet v1.1.442
2024-03-27 23:24:53,224 - ControlNet - INFO - ControlNet v1.1.442
[sd-webui-freeu] Controlnet support: *enabled*
23:24:53 - ReActor - STATUS - Running v0.7.0-b7 on Device: CUDA
[Vec. CC] Style Sheet Loaded...
Loading weights [4496b36d48] from C:\wbcnvme\stable-diffusion-webui\models\Stable-diffusion\dreamshaperXL_v21TurboDPMSDE.safetensors
2024-03-27 23:24:53,946 - ControlNet - INFO - ControlNet UI callback registered.
Creating model from config: C:\wbcnvme\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://127.0.0.1:7860
Thanks for being a Gradio user! If you have questions or feedback, please join our Discord server and chat with us: https://discord.gg/feTf9x3ZSB
To create a public link, set `share=True` in `launch()`.
Startup time: 13.4s (prepare environment: 4.1s, import torch: 3.7s, import gradio: 0.8s, setup paths: 0.8s, initialize shared: 0.2s, other imports: 0.4s, load scripts: 2.0s, create ui: 0.6s, gradio launch: 0.7s).
Applying attention optimization: xformers... done.
Model loaded in 11.8s (load weights from disk: 0.7s, create model: 0.8s, apply weights to model: 7.4s, apply fp8: 2.3s, calculate empty prompt: 0.3s).
100%|████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:05<00:00, 1.38it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:04<00:00, 1.23s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 12/12 [03:04<00:00, 15.41s/it]
100%|████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:05<00:00, 1.58it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:04<00:00, 1.21s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 12/12 [00:13<00:00, 1.08s/it]
100%|████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:05<00:00, 1.57it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:04<00:00, 1.20s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 12/12 [00:13<00:00, 1.12s/it]
100%|████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:05<00:00, 1.59it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:04<00:00, 1.25s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 12/12 [00:13<00:00, 1.15s/it]
100%|████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:04<00:00, 1.63it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:04<00:00, 1.20s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 12/12 [00:13<00:00, 1.13s/it]
100%|████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:05<00:00, 1.58it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:04<00:00, 1.21s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 12/12 [00:13<00:00, 1.14s/it]
```
### Additional information
I'm using latest dev commit 8687163f7fe28fae23131726094d4bee32db51bb | open | 2024-03-27T20:44:33Z | 2024-04-07T11:06:40Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15396 | [
"bug-report"
] | AndreyRGW | 4 |
schemathesis/schemathesis | pytest | 1,973 | [BUG] tests fail when the terminal has a different amount of columns than 80 | ### Checklist
- [x] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [x] I am using the latest version of Schemathesis
### Describe the bug
While running the tests I encountered several failures: the snapshot differs.
And it really does differ, because my terminal has more than 80 columns and the output is padded accordingly:
```
E ...
E Stdout:
E - ======================= Schemathesis test session starts =======================
E + ============================================================================================= Schemathesis test session starts =============================================================================================
E Schema location: file:///tmp/schema.json
E
```
### To Reproduce
Run `tox -e py3.9` in a terminal whose width is different from 80 columns.
### Expected behavior
it doesn't fail ;) | closed | 2024-01-22T15:46:34Z | 2024-01-22T16:55:42Z | https://github.com/schemathesis/schemathesis/issues/1973 | [
"Type: Bug",
"Status: Needs Triage"
] | devkral | 1 |
axnsan12/drf-yasg | django | 326 | X taggroups | I would like to set custom tags for views, and add them to their own taggroups, but I cannot find anything in the docs about this, how would I go about this? | closed | 2019-03-06T11:46:55Z | 2020-07-13T06:34:11Z | https://github.com/axnsan12/drf-yasg/issues/326 | [] | josephbiko | 5 |
suitenumerique/docs | django | 328 | Placeholder for title becomes editable when the user double clicks. | ## Bug Report
**Problematic behavior**
Here is a [video](https://www.loom.com/share/22c41836a9254727b0d024d761b2011f) of the problem. | closed | 2024-10-11T13:39:12Z | 2024-11-26T17:15:20Z | https://github.com/suitenumerique/docs/issues/328 | [
"bug",
"good first issue",
"frontend",
"hacktoberfest"
] | virgile-dev | 7 |
lepture/authlib | django | 436 | Are you aware that another project shows up as dependency in many projects instead of your project? | **Describe the bug**
See this project: https://github.com/jessethegame/python-authlib
Notice it is unknown, virtually no forks or stars.
Then go to https://github.com/jessethegame/python-authlib/network/dependents . Should these 21K projects be linking to your project instead? I think so!
| closed | 2022-03-14T17:12:51Z | 2022-03-15T14:28:42Z | https://github.com/lepture/authlib/issues/436 | [
"bug"
] | pieterlukasse | 6 |
horovod/horovod | deep-learning | 3,653 | tensorflow.python.framework.errors_impl.AlreadyExistsError: TensorFlow device (GPU:0) is being mapped to multiple CUDA devices (0 now, and 1 previously), which is not supported. This may be the result of providing different GPU configurations (ConfigProto.gpu_options, for example different visible_device_list) when creating multiple Sessions in the same process. This is not currently supported, see https://github.com/tensorflow/tensorflow/issues/19083 | **Environment:**
1. Framework: (TensorFlow)
2. Framework version: 2.2.0
3. Horovod version: v0.21.3
4. MPI version: (Open MPI) 2.1.1
5. CUDA version: 10.1, V10.1.243
6. NCCL version: 2.11.4
7. Python version: 3.6.9
11. CMake version: 3.10.2
**Bug report:**
I am trying to utilize multiple GPUs with Horovod for distributed training. Initially, I used a single GPU and then two GPUs to test a simple convolutional neural network. Everything functions properly. Then, I used a CNN and LSTM in combination. It works perfectly on a single GPU; however, a problem arises when running on two GPUs. The complete traceback is as follows:
```
[1,1]<stderr>:Traceback (most recent call last):
[1,1]<stderr>: File "horovod-PAMAP2.py", line 203, in <module>
[1,1]<stderr>: PAMAP2()
[1,1]<stderr>: File "horovod-PAMAP2.py", line 144, in PAMAP2
[1,1]<stderr>: model = deep_Model(input_shape, num_classes)
[1,1]<stderr>: File "/research/dept8/gds/anafees/HAR-MGDP/horo_Model.py", line 30, in deep_Model
[1,1]<stderr>: lstm_fm = LSTM_2016(con_l4)
[1,1]<stderr>: File "/research/dept8/gds/anafees/HAR-MGDP/horo_Model.py", line 15, in LSTM_2016
[1,1]<stderr>: recurrent_dropout=0.5, return_sequences=True)(
[1,1]<stderr>: File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/recurrent_v2.py", line 1096, in __init__
[1,1]<stderr>: if context.num_gpus() > 0:
[1,1]<stderr>: File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py", line 2046, in num_gpus
[1,1]<stderr>: return context().num_gpus()
[1,1]<stderr>: File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py", line 1047, in num_gpus
[1,1]<stderr>: self.ensure_initialized()
[1,1]<stderr>: File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py", line 515, in ensure_initialized
[1,1]<stderr>: context_handle = pywrap_tfe.TFE_NewContext(opts)
[1,1]<stderr>:tensorflow.python.framework.errors_impl.AlreadyExistsError: TensorFlow device (GPU:0) is being mapped to multiple CUDA devices (0 now, and 1 previously), which is not supported. This may be the result of providing different GPU configurations (ConfigProto.gpu_options, for example different visible_device_list) when creating multiple Sessions in the same process. This is not currently supported, see https://github.com/tensorflow/tensorflow/issues/19083
-------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[49535,1],1]
```
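For what it's worth, the usual fix for this error in multi-GPU Horovod + tf.keras setups is to pin each process to its local GPU before any Keras layer (the LSTM here) initializes the eager context. A sketch of the documented Horovod setup pattern (not verified against this exact script):

```python
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
if gpus:
    # One visible device per process, chosen by local rank.
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

# Build the model (including the LSTM layers) only AFTER the pinning above,
# so context.num_gpus() cannot initialize the context with all devices visible.
```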
| closed | 2022-08-15T09:31:16Z | 2022-08-17T04:12:12Z | https://github.com/horovod/horovod/issues/3653 | [] | Nafees-060 | 8 |
man-group/notebooker | jupyter | 16 | Clean up shims for old python version | For example there is code using [six](https://six.readthedocs.io/) to handle 2/3 differences, but there is code that requires 3.5+ (e.g. type annotations syntax) and docs say 3.6+. As Python2 went EOL earlier this year, it's probably good to clean up the old code and dependencies. | closed | 2020-10-22T15:17:42Z | 2020-10-25T23:15:22Z | https://github.com/man-group/notebooker/issues/16 | [
"good first issue"
] | Code0x58 | 0 |
tortoise/tortoise-orm | asyncio | 1,694 | [Help]How to initialize tortoise orm in multiple workers? | As described above, I have a FastAPI application that uses the `concurrent.futures.ProcessPoolExecutor` module to perform CPU-intensive tasks and save the results to MySQL. However, it seems that I encounter some errors when performing database query operations within this process.
- main.py
```python
@asynccontextmanager
async def lifespan(app: FastAPI) -> AsyncGenerator[None, None]:
set_loguru()
disable_installed_extensions_check()
from manager.config import configuration
logger.info(f"configuration => {configuration}")
mysql_conf = configuration.mysql
# Handle password which includes special characters. Such as #.
encoded_password = urllib.parse.quote(mysql_conf.password)
mysql_db_url = f"mysql://{mysql_conf.username}:{encoded_password}@{mysql_conf.host}:{mysql_conf.port}/{mysql_conf.database}?charset={mysql_conf.charset}&maxsize=10"
logger.debug(f"mysql db url => {mysql_db_url}")
# app startup
async with RegisterTortoise(
app,
db_url=mysql_db_url,
modules={"models": ["manager.govern.models"]},
generate_schemas=True,
add_exception_handlers=True,
use_tz=False,
timezone="Asia/Shanghai",
):
connection_name = next(iter(connections.db_config.keys()))
connection = connections.get(connection_name)
logger.info(
f"connections db_config: {connections.db_config} | connection_name: `{connection_name}` | connection: {connection}"
)
# db connected
yield
logger.info("Start to shut down executor")
shutdown_executor()
```
- Encountered errors...
```bash
# error.log
File "/home/runstone/work/project/data-govern-manager/.venv/lib/python3.10/site-packages/tortoise/backends/mysql/client.py", line 199, in execute_query
await cursor.execute(query, values)
│ │ │ └ None
│ │ └ "SELECT `server_host`,`source_name`,`create_time`,`db_name`,`server_port`,`project_id`,`source_type`,`id`,`password`,`update_...
│ └ <cyfunction Cursor.execute at 0x7f431f0a4790>
└ <asyncmy.cursors.Cursor object at 0x7f431dce5240>
File "asyncmy/cursors.pyx", line 179, in execute
result = await self._query(query)
File "asyncmy/cursors.pyx", line 364, in _query
await conn.query(q)
File "asyncmy/connection.pyx", line 494, in query
await self._read_query_result(unbuffered=unbuffered)
File "asyncmy/connection.pyx", line 682, in _read_query_result
await result.read()
File "asyncmy/connection.pyx", line 1069, in read
first_packet = await self.connection.read_packet()
File "asyncmy/connection.pyx", line 617, in read_packet
packet_header = await self._read_bytes(4)
File "asyncmy/connection.pyx", line 656, in _read_bytes
data = await self._reader.readexactly(num_bytes)
File "/usr/lib/python3.10/asyncio/streams.py", line 708, in readexactly
await self._wait_for_data('readexactly')
│ └ <function StreamReader._wait_for_data at 0x7f4356e21090>
└ <StreamReader transport=<TCPTransport closed=False reading=True 0x7fffc2b4dea0>>
File "/usr/lib/python3.10/asyncio/streams.py", line 501, in _wait_for_data
await self._waiter
│ └ None
└ <StreamReader transport=<TCPTransport closed=False reading=True 0x7fffc2b4dea0>>
RuntimeError: Task <Task pending name='Task-9' coro=<instance_taos() running at /home/runstone/work/project/data-govern-manager/src/manager/govern/db.py:132> cb=[_LRUCacheWrapper._task_done_callback(<Future pendi...tasks.py:847]>, '3')()]> got Future <Future pending> attached to a different loop
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/runstone/work/project/data-govern-manager/src/manager/govern/combine.py", line 216, in do_govern_entry
await self.set_taos()
│ └ <function BackgroundGovernCombine.set_taos at 0x7f431f596d40>
└ <manager.govern.combine.BackgroundGovernCombine object at 0x7f431f0fe0b0>
File "/home/runstone/work/project/data-govern-manager/src/manager/govern/combine.py", line 205, in set_taos
self.taos_db = await instance_taos(self.govern_params.common.project_id)
│ │ │ │ │ │ └ '3'
│ │ │ │ │ └ CommonSchema(global_id='85dc9a3c-fb93-41b3-bf21-1b57b802a205', project_id='3', devcode_name='T101002', devproperty_name='...
│ │ │ │ └ GovernanceSystemPageSchema(common=CommonSchema(global_id='85dc9a3c-fb93-41b3-bf21-1b57b802a205', project_id='3', devcode_name...
│ │ │ └ <manager.govern.combine.BackgroundGovernCombine object at 0x7f431f0fe0b0>
│ │ └ <async_lru._LRUCacheWrapper object at 0x7f431f59c3a0>
│ └ None
└ <manager.govern.combine.BackgroundGovernCombine object at 0x7f431f0fe0b0>
File "/home/runstone/work/project/data-govern-manager/.venv/lib/python3.10/site-packages/async_lru/__init__.py", line 227, in __call__
return await asyncio.shield(fut)
│ │ └ <Future finished exception=RuntimeError("Task <Task pending name='Task-9' coro=<instance_taos() running at /home/runstone/wor...
│ └ <function shield at 0x7f4357004280>
└ <module 'asyncio' from '/usr/lib/python3.10/asyncio/__init__.py'>
File "/home/runstone/work/project/data-govern-manager/src/manager/govern/db.py", line 132, in instance_taos
project = await GovernanceDatasourceModel.get_or_none(project_id=project_id)
│ │ └ '3'
│ └ <classmethod(<function Model.get_or_none at 0x7f43550f4c10>)>
└ <class 'manager.govern.models.GovernanceDatasourceModel'>
File "/home/runstone/work/project/data-govern-manager/.venv/lib/python3.10/site-packages/tortoise/queryset.py", line 1059, in _execute
instance_list = await self._db.executor_class(
│ └ <member '_db' of 'QuerySet' objects>
└ <tortoise.queryset.QuerySet object at 0x7f431f2c8f20>
File "/home/runstone/work/project/data-govern-manager/.venv/lib/python3.10/site-packages/tortoise/backends/base/executor.py", line 131, in execute_select
_, raw_results = await self.db.execute_query(query.get_sql())
│ │ │ │ └ <function MySQLQueryBuilder.get_sql at 0x7f43551f1090>
│ │ │ └ SELECT `server_host`,`source_name`,`create_time`,`db_name`,`server_port`,`project_id`,`source_type`,`id`,`password`,`update_t...
│ │ └ <function MySQLClient.execute_query at 0x7f431f0cc3a0>
│ └ <tortoise.backends.mysql.client.MySQLClient object at 0x7f431f030ac0>
└ <tortoise.backends.mysql.executor.MySQLExecutor object at 0x7f431f0fe230>
File "/home/runstone/work/project/data-govern-manager/.venv/lib/python3.10/site-packages/tortoise/backends/mysql/client.py", line 44, in translate_exceptions_
return await func(self, *args)
│ │ └ ("SELECT `server_host`,`source_name`,`create_time`,`db_name`,`server_port`,`project_id`,`source_type`,`id`,`password`,`update...
│ └ <tortoise.backends.mysql.client.MySQLClient object at 0x7f431f030ac0>
└ <function MySQLClient.execute_query at 0x7f431f0cc310>
File "/home/runstone/work/project/data-govern-manager/.venv/lib/python3.10/site-packages/tortoise/backends/mysql/client.py", line 196, in execute_query
async with self.acquire_connection() as connection:
│ │ └ <asyncmy.connection.Connection object at 0x7f431f0afe20>
│ └ <function MySQLClient.acquire_connection at 0x7f431f03ff40>
└ <tortoise.backends.mysql.client.MySQLClient object at 0x7f431f030ac0>
RuntimeError: Task <Task pending name='Task-9' coro=<instance_taos() running at /home/runstone/work/project/data-govern-manager/src/manager/govern/db.py:132> cb=[_LRUCacheWrapper._task_done_callback(<Future pendi...tasks.py:847]>, '3')()]> got Future <Task pending name='Task-10' coro=<Pool._wakeup() running at asyncmy/pool.pyx:164>> attached to a different loop
```
Then I realized that with Python's multiprocessing each process has its own independent resources, so I initialized Tortoise separately inside the worker process, and everything worked fine.
- task_in_another_process.py
```python
import urllib.parse

from tortoise import Tortoise, connections

# `configuration` and `logger` are app-level objects defined elsewhere in the project.


async def initialize_tortoise():  # How to handle this in a separate process?
    mysql_conf = configuration.mysql
    # Handle passwords which include special characters, such as '#'.
    encoded_password = urllib.parse.quote(mysql_conf.password)
    mysql_db_url = f"mysql://{mysql_conf.username}:{encoded_password}@{mysql_conf.host}:{mysql_conf.port}/{mysql_conf.database}?charset={mysql_conf.charset}"
    logger.debug(f"mysql db url => {mysql_db_url}")

    async def init_tortoise():
        await Tortoise.init(
            db_url=mysql_db_url,
            modules={"models": ["manager.govern.models"]},
            use_tz=False,
            timezone="Asia/Shanghai",
        )

    await init_tortoise()
    connection_name = next(iter(connections.db_config.keys()))
    connection = connections.get(connection_name)
    logger.info(
        f"connections db_config: {connections.db_config} | connection_name: `{connection_name}` | connection: {connection}"
    )
    logger.success("Initializing Tortoise ORM")


async def do_govern_entry():
    try:
        # Initialize Tortoise ORM inside this process.
        await initialize_tortoise()
        # Do other CPU-bound tasks and save the results to MySQL.
    except Exception as e:
        ...
```
- api.py
```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

from fastapi import APIRouter

from .task_in_another_process import do_govern_entry

# `configuration` and `set_loguru` are app-level objects defined elsewhere in the project.
executor = ProcessPoolExecutor(max_workers=configuration.concurrency_nums, initializer=set_loguru)
router = APIRouter()


def do_async(func, *args, **kwargs):
    asyncio.run(func(*args, **kwargs))


@router.post("/demo")
async def demo():
    # other task
    executor.submit(do_async, do_govern_entry)
    return {"msg": "task submitted."}
```
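For what it's worth, the per-process pattern itself can be sketched with just the standard library. Everything below is a placeholder sketch, not Tortoise API: `init_resources`/`close_resources` stand in for `Tortoise.init(...)` and `connections.close_all()`, and the main refinement over my code above is the try/finally so each child tears its resources down cleanly:

```python
import asyncio


async def init_resources() -> dict:
    # stand-in for `await Tortoise.init(...)` run inside the child process
    await asyncio.sleep(0)
    return {"db": "connected"}


async def close_resources(resources: dict) -> None:
    # stand-in for `await connections.close_all()` so the child exits cleanly
    resources["db"] = "closed"


async def do_work() -> str:
    resources = await init_resources()
    try:
        # CPU-bound governance work + saving results would go here
        return resources["db"]
    finally:
        await close_resources(resources)


def worker_entry() -> str:
    # what `executor.submit(do_async, do_govern_entry)` amounts to in the child:
    # a brand-new event loop (and therefore brand-new connections) per task
    return asyncio.run(do_work())
```

The key point is that nothing loop-bound (connection pools, `async_lru` caches) can cross the process boundary, so each child has to build, and ideally close, its own.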
However, I feel that this approach is not quite appropriate, so I wanted to ask if there is a better way to handle this. | open | 2024-08-09T15:36:29Z | 2024-08-16T06:39:08Z | https://github.com/tortoise/tortoise-orm/issues/1694 | [] | Abeautifulsnow | 2 |
axnsan12/drf-yasg | django | 591 | drf_yasg.errors.SwaggerGenerationError: your query_serializer contains fields that conflict with the filter_backend or paginator_class on the view | Hi,
```
my viewset:
    filter_fields = ('a',)

my api:
    @swagger_auto_schema(
        query_serializer=schemas.A,
        ...
    )

schemas.A:
    class A(serializers.Serializer):
        a = serializers.IntegerField(
            required=True,
        )
```
A conflict happens. If I want 'a' to be required in some APIs and optional in others, how can I do that?
PaddlePaddle/PaddleNLP | nlp | 9,358 | [Bug]: tensor.dtype (e.g. paddle.float32) != paddle.framework.core.VarDesc.VarType.FP32 | ### Software environment
```Markdown
- paddlepaddle-gpu: 3.0.0.dev20241101(f8fa8dae)
- paddlenlp: 5217a3b
```
### Duplicate check
- [X] I have searched the existing issues
### Error description
The Paddle type system seems to have been updated: `parameter.dtype` and `paddle.framework.core.VarDesc.VarType` are no longer the same type. For example, `paddle.float32` = 10, while `paddle.framework.core.VarDesc.VarType.FP32` = 5.
The following places will raise errors because of this (only two are listed below for brevity; there are other places that use the same pattern):
- https://github.com/PaddlePaddle/PaddleNLP/blob/5217a3b79524a30ea37c75d37eda1a257052e22d/paddlenlp/trainer/unified_checkpoint/unified_checkpoint.py#L284
- https://github.com/PaddlePaddle/PaddleNLP/blob/5217a3b79524a30ea37c75d37eda1a257052e22d/paddlenlp/trainer/unified_checkpoint/utils.py#L237
It should be possible to change `paddle.framework.core.VarDesc.VarType.FP32` to `paddle.float32`, which works on both new and old versions.
### Steps to reproduce & code
New Paddle:
```
>>> paddle.version.show()
commit: f8fa8dae6cac323517054e652a2461881dd5355c
cuda: 11.8
cudnn: 8.6.0
nccl: 21602
xpu_xre: False
xpu_xccl: False
xpu_xhpc: False
cinn: 0.3.0
tensorrt_version: 8.5.3
cuda_archs: []
>>> paddle.float32 == paddle.framework.core.VarDesc.VarType.FP32
False
>>> print(type(paddle.float32))
<class 'paddle.dtype'>
>>> print(type(paddle.framework.core.VarDesc.VarType.FP32))
<class 'paddle.base.libpaddle.VarDesc.VarType'>
>>> print(int(paddle.float32))
10
>>> print(int(paddle.framework.core.VarDesc.VarType.FP32))
5
>>> p = paddle.create_parameter(shape=(3,4), dtype='float32')
W1104 08:00:10.221886 387 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 8.0, Driver API Version: 11.8, Runtime API Version: 11.8
W1104 08:00:10.222734 387 gpu_resources.cc:164] device: 0, cuDNN Version: 8.9.
>>> p.dtype == paddle.framework.core.VarDesc.VarType.FP32
False
>>> p.dtype == paddle.float32
True
>>>
```
Old Paddle:
```
Python 3.10.14 (main, Apr 6 2024, 18:45:05) [GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import paddle
>>> paddle.version.show()
full_version: 3.0.0-beta1
major: 3
minor: 0
patch: 0-beta1
rc: 0
cuda: 11.8
cudnn: 8.6.0
nccl: 21602
xpu: False
xpu_xccl: False
xpu_xhpc: False
cinn: 0.3.0
>>> paddle.float32 == paddle.framework.core.VarDesc.VarType.FP32
True
>>> print(type(paddle.float32))
<class 'paddle.dtype'>
>>> print(type(paddle.framework.core.VarDesc.VarType.FP32))
<class 'paddle.dtype'>
>>> print(int(paddle.float32))
5
>>> print(int(paddle.framework.core.VarDesc.VarType.FP32))
5
>>> p = paddle.create_parameter(shape=(3,4), dtype='float32')
W1104 08:17:58.702229 43820 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 8.0, Driver API Version: 12.0, Runtime API Version: 11.8
W1104 08:17:58.706090 43820 gpu_resources.cc:164] device: 0, cuDNN Version: 8.9.
>>> p.dtype == paddle.framework.core.VarDesc.VarType.FP32
True
>>> p.dtype == paddle.float32
True
``` | closed | 2024-11-04T08:09:24Z | 2024-11-04T12:21:05Z | https://github.com/PaddlePaddle/PaddleNLP/issues/9358 | [
"bug"
] | dynamicheart | 1 |
plotly/dash-cytoscape | plotly | 108 | [BUG] tapNodeData not working after a specific n_clicks event |
#### Description
Hello everyone,
I'm trying to make a Cytoscape graph (based on an example from Dash) that can expand and collapse. I added a "reset" button that collapses the graph back to the first node. However, if I use this button in a specific pattern, the tapNodeData input stops working.
It happens when I click the first node "Cloud AWS", then click the "reset" button, and then try to re-expand the graph by clicking the "Cloud AWS" node again.
If I click the "reset" button and then the "Cloud AWS" node, it works.
If I click the "Cloud AWS" node, then the "aaa" or "bbb" node, then the "reset" button, and finally the "Cloud AWS" node, the tapNodeData input works.
#### Steps/Code to Reproduce
```
import requests
import dash
import dash_table
import pandas as pd
import dash_html_components as html
import dash_bootstrap_components as dbc
import dash_core_components as dcc
import json
import dash_cytoscape as cyto
from dash.dependencies import Input, Output, State

# enable svg
cyto.load_extra_layouts()

app = dash.Dash(__name__)
server = app.server

Data = []
Exemple_data_vpc = [
    {'Cloud_provider': 'AWS', 'ID': 'vpc-aaaaa', 'Description': {'Name': 'aaa', 'Type': 'VPC', 'Region': 'us'}},
    {'Cloud_provider': 'AWS', 'ID': 'vpc-bbbbb', 'Description': {'Name': 'bbb', 'Type': 'VPC', 'Region': 'us'}},
]

for x in Exemple_data_vpc:
    if x['Cloud_provider'] == "AWS":
        Data.append([x['ID'], 'AWS', x['Description']['Type'], x['Description']['Name']])

df = pd.DataFrame(Data, columns=['ID', 'Linked', 'Type', 'Name'])

nodes = set()
node_children = {}  # user id -> list of followers (cy_node format)
edge_children = {}  # user id -> list of cy edges ending at user id
cy_edges, cy_nodes = [], []
edges = df
colors = ['red', 'blue', 'green', 'yellow', 'pink']

###### Create first node
nodes.add('AWS')
cy_nodes.append({"data": {"id": 'AWS', "label": 'Cloud AWS', "Level": 0, "expanded": 'No'}})

for edge in edges.iterrows():
    if edge[1]['Linked'] != '':
        Type = edge[1]['Type']
        name = edge[1]['Name']
        source = edge[1]['ID']
        target = edge[1]['Linked']
        if edge[1]['Type'] == 'VPC':
            cy_source = {"data": {"id": source, "label": name, 'Level': 1, 'Linked': target, 'expanded': 'No', 'Type': Type}}
        else:
            cy_source = {"data": {"id": source, "label": name, 'Level': 2, 'Linked': target, 'expanded': 'No', 'Type': Type}}
        cy_target = {"data": {"id": target, "label": target, 'Level': cy_source['data']['Level'], 'expanded': 'No'}}
        cy_edge = {'data': {'id': source + target, 'source': source, 'target': target, 'Level': cy_source['data']['Level'], 'expanded': 'No'}}
        if source not in nodes:  # Add the source node
            nodes.add(source)
            cy_nodes.append(cy_source)
        if target not in nodes:  # Add the target node
            nodes.add(target)
            cy_nodes.append(target)
        # Process dictionary of followers
        if not node_children.get(target):
            node_children[target] = []
        if not edge_children.get(target):
            edge_children[target] = []
        node_children[target].append(cy_source)
        edge_children[target].append(cy_edge)

genesis_node = cy_nodes[0]
genesis_node['classes'] = "genesis"
default_elements = [genesis_node]

styles = {
    'json-output': {
        'overflow-y': 'scroll',
        'overflow-wrap': 'break-word',
        'height': 'calc(50% - 25px)',
        'border': 'thin lightgrey solid'
    },
    'tab': {'height': 'calc(98vh - 80px)'}
}

app.layout = html.Div([
    html.Button('Reset', id='bt-reset', n_clicks=0),
    html.Div(className='eight columns', children=[
        cyto.Cytoscape(
            id='cytoscape',
            elements=default_elements,
            style={
                'height': '95vh',
                'width': 'calc(100% - 250px)',
                'float': 'left',
            },
        )
    ])
])


def delete_node(nodeData, elements):
    node_childrens = []
    if 'id' in nodeData:
        for linked in elements:
            if 'Linked' in linked['data'] and linked['data']['Linked'] == nodeData['id']:  # collect child nodes to delete
                node_childrens.append(linked)
            if 'target' in linked['data'] and linked['data']['target'] == nodeData['id']:  # collect edges to delete
                node_childrens.append(linked)
    elif 'id' in nodeData['data']:
        for linked in elements:
            if 'Linked' in linked['data'] and linked['data']['Linked'] == nodeData['data']['id']:  # collect child nodes to delete
                node_childrens.append(linked)
            if 'target' in linked['data'] and linked['data']['target'] == nodeData['data']['id']:  # collect edges to delete
                node_childrens.append(linked)
    if node_childrens == []:
        return
    for children in node_childrens:
        delete_node(children, elements)
        elements.remove(children)
    i = 0
    for element in elements:
        if 'id' in nodeData:
            if element['data']['id'] == nodeData['id']:
                elements[i]['data']['expanded'] = 'No'
        elif 'id' in nodeData['data']:
            if element['data']['id'] == nodeData['data']['id']:
                elements[i]['data']['expanded'] = 'No'
        i += 1
    return 0


@app.callback(Output("cytoscape", "elements"), [Input("cytoscape", "tapNodeData"), Input("bt-reset", "n_clicks")], State("cytoscape", "elements"))
def modification_on_elements(nodeData, n_clicks, elements):
    if not nodeData:
        return default_elements
    ctx = dash.callback_context
    if ctx.triggered:
        button_id = ctx.triggered[0]['prop_id'].split('.')[0]
        if button_id == "bt-reset":
            elements = default_elements
            return elements
        elif button_id == "cytoscape":
            if 'Yes' == nodeData.get('expanded'):
                delete_node(nodeData, elements)
                return elements
            if 'No' == nodeData.get('expanded'):
                i = 0
                for element in elements:
                    if nodeData['id'] == element.get('data').get('id'):
                        elements[i]['data']['expanded'] = 'Yes'
                        break
                    i += 1
                node_childrens = node_children.get(nodeData['id'])
                edge_childrens = edge_children.get(nodeData['id'])
                if node_childrens:
                    elements.extend(node_childrens)
                if edge_childrens:
                    elements.extend(edge_childrens)
    return elements


if __name__ == '__main__':
    app.run_server(debug=True)
```
#### Expected Results
The expected result is that "tapNodeData" continues to trigger the callback after this sequence:
Click node "Cloud AWS" ==> click the "reset" button ==> click node "Cloud AWS" (currently this last click does not trigger the callback).
#### Actual Results
The tapNodeData input no longer triggers the callback.
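My guess at the mechanism, sketched in plain Python rather than the Dash API (so this is an assumption, not verified against Dash internals): an Input callback only fires when the property value actually changes, and re-clicking "Cloud AWS" right after a reset produces the exact same `tapNodeData` dict as before the reset, so nothing fires:

```python
def should_fire(prev_tap_node_data, new_tap_node_data) -> bool:
    # Dash-style change detection: an identical consecutive value does not retrigger
    return prev_tap_node_data != new_tap_node_data


aws = {"id": "AWS", "label": "Cloud AWS", "Level": 0, "expanded": "No"}

first_click = should_fire(None, aws)  # True: callback runs, graph expands
# clicking "reset" changes `elements`, but tapNodeData still holds the AWS dict,
# so clicking "Cloud AWS" again produces no change:
second_click = should_fire(aws, aws)  # False: callback never runs
```

If that is really what is happening, a workaround would need to make the triggering value change again (for example via a different event property), rather than fixing the callback body.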
#### Versions
Dash 1.16.0
Dash Core Components 1.1.1
Dash HTML Components 1.12.0
Dash Renderer 1.8.0
Dash HTML Components 0.2.0
Thank you and have a nice day.
Duaran | closed | 2020-10-02T07:56:31Z | 2023-12-28T12:59:28Z | https://github.com/plotly/dash-cytoscape/issues/108 | [] | Duaran | 3 |
jina-ai/clip-as-service | pytorch | 349 | calling BertClient in the example script hangs | **Prerequisites**
> Please fill in by replacing `[ ]` with `[x]`.
* [x] Are you running the latest `bert-as-service`?
* [x] Did you follow [the installation](https://github.com/hanxiao/bert-as-service#install) and [the usage](https://github.com/hanxiao/bert-as-service#usage) instructions in `README.md`?
* [x] Did you check the [FAQ list in `README.md`](https://github.com/hanxiao/bert-as-service#speech_balloon-faq)?
* [x] Did you perform [a cursory search on existing issues](https://github.com/hanxiao/bert-as-service/issues)?
**System information**
> Some of this information can be collected via [this script](https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh).
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- TensorFlow installed from (source or binary): pip install tensorflow==1.12
- TensorFlow version: 1.12
- Python version:
- `bert-as-service` version: 1.8
- GPU model and memory:
- CPU model and memory:
---
### Description
> Please replace `YOUR_SERVER_ARGS` and `YOUR_CLIENT_ARGS` accordingly. You can also write your own description for reproducing the issue.
I'm using this command to start the server:
```
python example/example1.py
```
and calling the server via:
```
python
bc = BertClient()
bc.encode()
```
Then this issue shows up:
bc=BertClient() never executes. If I type
```
bert-serving-start -model_dir uncased_L-12_H-768_A-12 -num_worker=4 -cpu
```
though it prints out the following and hangs there.
ckpt_name = bert_model.ckpt
config_name = bert_config.json
cors = *
cpu = True
device_map = []
fixed_embed_length = False
fp16 = False
gpu_memory_fraction = 0.5
graph_tmp_dir = None
http_max_connect = 10
http_port = None
mask_cls_sep = False
max_batch_size = 256
max_seq_len = 25
model_dir = uncased_L-12_H-768_A-12
num_worker = 4
pooling_layer = [-2]
pooling_strategy = REDUCE_MEAN
port = 5555
port_out = 5556
prefetch_size = 10
priority_batch_size = 16
show_tokens_to_client = False
tuned_model_dir = None
verbose = False
xla = False
I:VENTILATOR:[__i:__i: 66]:freeze, optimize and export graph, could take a while...
I:GRAPHOPT:[gra:opt: 52]:model config: uncased_L-12_H-768_A-12/bert_config.json
I:GRAPHOPT:[gra:opt: 55]:checkpoint: uncased_L-12_H-768_A-12/bert_model.ckpt
I:GRAPHOPT:[gra:opt: 59]:build graph...
I:GRAPHOPT:[gra:opt:128]:load parameters from checkpoint...
I:GRAPHOPT:[gra:opt:132]:optimize...
I:GRAPHOPT:[gra:opt:140]:freeze...
I:GRAPHOPT:[gra:opt:145]:write graph to a tmp file: /tmp/tmppj4xiv2a
I:VENTILATOR:[__i:__i: 74]:optimized graph is stored at: /tmp/tmppj4xiv2a
I:VENTILATOR:[__i:_ru:128]:bind all sockets
I:VENTILATOR:[__i:_ru:132]:open 8 ventilator-worker sockets
I:VENTILATOR:[__i:_ru:135]:start the sink
I:SINK:[__i:_ru:303]:ready
I:VENTILATOR:[__i:_ge:219]:get devices
W:VENTILATOR:[__i:_ge:243]:no GPU available, fall back to CPU
I:VENTILATOR:[__i:_ge:252]:device map:
worker 0 -> cpu
worker 1 -> cpu
worker 2 -> cpu
worker 3 -> cpu
I:WORKER-1:[__i:_ru:514]:use device cpu, load graph from /tmp/tmppj4xiv2a
I:WORKER-0:[__i:_ru:514]:use device cpu, load graph from /tmp/tmppj4xiv2a
I:WORKER-2:[__i:_ru:514]:use device cpu, load graph from /tmp/tmppj4xiv2a
I:WORKER-3:[__i:_ru:514]:use device cpu, load graph from /tmp/tmppj4xiv2a
I:WORKER-0:[__i:gen:542]:ready and listening!
I:WORKER-2:[__i:gen:542]:ready and listening!
I:WORKER-1:[__i:gen:542]:ready and listening!
I:WORKER-3:[__i:gen:542]:ready and listening!
I:VENTILATOR:[__i:_ru:163]:all set, ready to serve request!
(no command-line prompt pops up; it just waits here)
Any idea what I should do to fix this? I am trying to generate sentence embeddings for 3000+ texts, so being able to run the Python script is essential. Thanks.
thunlp/OpenPrompt | nlp | 44 | question about format of template | Hi, is the format of the soft template the same as the manual template? | closed | 2021-11-09T13:50:33Z | 2022-08-06T13:53:58Z | https://github.com/thunlp/OpenPrompt/issues/44 | [] | xufengyang0191 | 2
iMerica/dj-rest-auth | rest-api | 75 | Email template issue | I am sending mail for email verification. I have overridden the templates. The templates work fine if we register with an email from a temp-mail service, but if we use a real email ID from Gmail, the message shows raw HTML. Can you please help me figure this out ASAP? | open | 2020-05-30T01:44:56Z | 2020-05-30T01:44:56Z | https://github.com/iMerica/dj-rest-auth/issues/75 | [] | Hardeepsingh980 | 0
huggingface/datasets | tensorflow | 7,282 | Faulty datasets.exceptions.ExpectedMoreSplitsError | ### Describe the bug
Trying to download only the 'validation' split of my dataset; instead hit the error `datasets.exceptions.ExpectedMoreSplitsError`.
Appears to be the same undesired behavior as reported in [#6939](https://github.com/huggingface/datasets/issues/6939), but with `data_files`, not `data_dir`.
Here is the Traceback:
```
Traceback (most recent call last):
File "/home/user/app/app.py", line 12, in <module>
ds = load_dataset('datacomp/imagenet-1k-random0.0', token=GATED_IMAGENET, data_files={'validation': 'data/val*'}, split='validation', trust_remote_code=True)
File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 2154, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 924, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 1018, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/usr/local/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 68, in verify_splits
raise ExpectedMoreSplitsError(str(set(expected_splits) - set(recorded_splits)))
datasets.exceptions.ExpectedMoreSplitsError: {'train', 'test'}
```
Note: I am using the `data_files` argument only because I am trying to specify that I only want the 'validation' split, and the whole dataset will be downloaded even when the `split='validation'` argument is specified, unless you also specify `data_files`, as described here: https://discuss.huggingface.co/t/how-can-i-download-a-specific-split-of-a-dataset/79027
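The set arithmetic in the traceback is easy to mimic with the standard library (file names below are made up; the real check lives in `datasets.utils.info_utils.verify_splits`, as shown in the traceback above):

```python
from fnmatch import fnmatch

# splits recorded in the dataset's metadata
expected_splits = {"train", "validation", "test"}

# what data_files={'validation': 'data/val*'} resolves to against the repo's files
repo_files = ["data/train-00000.parquet", "data/val-00000.parquet", "data/test-00000.parquet"]
matched = [f for f in repo_files if fnmatch(f, "data/val*")]
recorded_splits = {"validation"} if matched else set()

missing = expected_splits - recorded_splits
# missing == {'train', 'test'}: exactly the ExpectedMoreSplitsError payload above
```

So restricting `data_files` correctly limits the download, but the split verification still expects every split from the metadata.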
### Steps to reproduce the bug
1. Create a Space with the default blank 'gradio' SDK https://huggingface.co/new-space
2. Create a file 'app.py' that loads a dataset to only extract a 'validation' split:
`ds = load_dataset('datacomp/imagenet-1k-random0.0', token=GATED_IMAGENET, data_files={'validation': 'data/val*'}, split='validation', trust_remote_code=True)`
### Expected behavior
Downloading validation split.
### Environment info
Default environment for creating a new Space. Relevant to this bug, that is:
```
FROM docker.io/library/python:3.10@sha256:fd0fa50d997eb56ce560c6e5ca6a1f5cf8fdff87572a16ac07fb1f5ca01eb608
--> RUN pip install --no-cache-dir pip==22.3.1 && pip install --no-cache-dir datasets "huggingface-hub>=0.19" "hf-transfer>=0.1.4" "protobuf<4" "click<8.1"
``` | open | 2024-11-07T20:15:01Z | 2024-11-07T20:15:42Z | https://github.com/huggingface/datasets/issues/7282 | [] | meg-huggingface | 0 |
ipython/ipython | data-science | 14,513 | autocall full behavior change in 8.27 (not present in 8.26.) | In 8.26, autocall in "full" mode checked for `__call__` on the object (I believe). It seems that in 8.27 it is much more aggressive and will autocall for **everything**.
```
In [17]: foo = dict()
In [18]: foo
-------> foo()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[18], line 1
----> 1 foo()
TypeError: 'dict' object is not callable
In [19]:
```
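For reference, a plain-Python sketch of the guard I would expect (this is my reading of the 8.26 behavior, not IPython's actual implementation):

```python
def should_autocall(obj) -> bool:
    # only rewrite `foo` into `foo()` when the object can actually be called
    return callable(obj)


assert should_autocall(len) is True       # functions are callable
assert should_autocall(dict) is True      # classes are callable
assert should_autocall(dict()) is False   # a plain dict instance is not
```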
Not sure if perhaps it is related to this PR #14486 or not | closed | 2024-09-05T15:31:56Z | 2024-10-21T18:03:02Z | https://github.com/ipython/ipython/issues/14513 | [
"bug"
] | krpatter-intc | 2 |
snarfed/granary | rest-api | 515 | Add Blog Archive Format support | https://indieweb.org/blog_archive_format , created by @manton
Motivated by @tantek in https://tantek.com/2023/112/t2/account-migration-post-blog-archive-format
| open | 2023-04-23T04:45:25Z | 2023-04-23T16:14:04Z | https://github.com/snarfed/granary/issues/515 | [] | snarfed | 1 |
kensho-technologies/graphql-compiler | graphql | 172 | Add documentation for the SQL backend. | closed | 2019-01-29T15:31:40Z | 2019-10-02T13:39:16Z | https://github.com/kensho-technologies/graphql-compiler/issues/172 | [
"documentation"
] | jmeulemans | 0 | |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,155 | Quick question about the Synthesizer. | Hi!
I watched the video demonstration from the README.md file. In that video it is mentioned that the synthesizer will generate a mel spectrogram for the input text using the given embedding and clicking on "Synthesize only" multiple times will generate slightly different speech. So my question is:
**Does clicking "Synthesize only" multiple times and then vocoding generate a better result?**
I tried to find the answer in GitHub issues but couldn't find anything. I did however learn that loading multiple utterances from the same speaker does not improve the quality of output because the output is generated from only one embedding as its reference. | open | 2023-01-16T16:17:27Z | 2023-01-16T16:17:42Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1155 | [] | prakharpbuf | 0 |
pytest-dev/pytest-mock | pytest | 255 | test_detailed_introspection_async.py:7: AssertionError | Hello,
I'm attempting to upgrade pytest-mock to version 3.6.1 on GNU Guix, but I'm getting:
```
test_detailed_introspection_async.py:7: AssertionError
============================== 1 failed in 0.09s ===============================
=========================== short test summary info ============================
FAILED tests/test_pytest_mock.py::test_assert_called_args_with_introspection
FAILED tests/test_pytest_mock.py::test_assert_called_kwargs_with_introspection
FAILED tests/test_pytest_mock.py::test_standalone_mock - assert <ExitCode.OK:...
FAILED tests/test_pytest_mock.py::test_detailed_introspection - Failed: nomat...
FAILED tests/test_pytest_mock.py::test_detailed_introspection_async - Failed:...
========================= 5 failed, 67 passed in 3.13s =========================
```
The full build log is attached.
[python-pytest-mock-3.6.1.drv.txt](https://github.com/pytest-dev/pytest-mock/files/7060661/python-pytest-mock-3.6.1.drv.txt)
The build dependencies are: `dependencies: python-pytest-asyncio@0.10.0 python-pytest@5.3.5 python-setuptools-scm@3.4.3`. pytest-asyncio is pegged to 0.10.0 as it's the last compatible version with Pytest 5.3.5. Python is at 3.8.2. | closed | 2021-08-26T14:59:57Z | 2021-08-26T18:43:10Z | https://github.com/pytest-dev/pytest-mock/issues/255 | [] | Apteryks | 2 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 508 | Having problems getting it to work on a Devuan distribution. | Hello, I am getting this output on Devuan (it did not work directly on my old Linux Mint :C) when running `python3 demo_cli.py`:
2020-08-24 22:00:26.726664: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
2020-08-24 22:00:26.726717: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
File "demo_cli.py", line 4, in <module>
from synthesizer.inference import Synthesizer
File "/home/src/voiceclone/Real-Time-Voice-Cloning-master/synthesizer/inference.py", line 1, in <module>
from synthesizer.tacotron2 import Tacotron2
File "/home/src/voiceclone/Real-Time-Voice-Cloning-master/synthesizer/tacotron2.py", line 3, in <module>
from synthesizer.models import create_model
File "/home/src/voiceclone/Real-Time-Voice-Cloning-master/synthesizer/models/__init__.py", line 1, in <module>
from .tacotron import Tacotron
File "/home/src/voiceclone/Real-Time-Voice-Cloning-master/synthesizer/models/tacotron.py", line 4, in <module>
from synthesizer.models.helpers import TacoTrainingHelper, TacoTestHelper
File "/home/src/voiceclone/Real-Time-Voice-Cloning-master/synthesizer/models/helpers.py", line 3, in <module>
from tensorflow.contrib.seq2seq import Helper
ModuleNotFoundError: No module named 'tensorflow.contrib'
Seems like something with the TensorFlow version? The exact sub-version pinned in requirements.txt gives me an error when installing, so I just installed the latest one :c
whitphx/streamlit-webrtc | streamlit | 1,569 | No module named 'sample_utils' | Traceback (most recent call last):
File "C:\Users\LOGI\AppData\Local\Programs\Python\Python311\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script
exec(code, module.__dict__)
File "C:\Users\LOGI\AppData\Local\Temp\tmpa2jubgh0\app.py", line 17, in <module>
from sample_utils.download import download_file
ModuleNotFoundError: No module named 'sample_utils' | closed | 2024-03-25T13:34:58Z | 2024-05-24T07:02:32Z | https://github.com/whitphx/streamlit-webrtc/issues/1569 | [] | logesh-works | 1 |
NullArray/AutoSploit | automation | 982 | Ekultek, you are correct. | Kek | closed | 2019-04-19T16:46:32Z | 2019-04-19T16:57:59Z | https://github.com/NullArray/AutoSploit/issues/982 | [] | AutosploitReporter | 0 |
iperov/DeepFaceLab | machine-learning | 5,624 | New feature: Add QButton to apply the baked mask as an Xseg mask | ## Issue
Using iperov's awesome RTM v2 faceset, my masks are 99% great within a few minutes, with just a tiny bit of fixing needed. Even so, I currently have to fully redefine the XSeg mask in these cases, even though 99% of it is already perfectly accurate. It would save a ton of time if I could convert the baked mask to an XSeg poly, and then quickly refine the 1% of the poly that needs a little realigning.
## Suggestion
I imagine we just need to add an arbitrary number of points along the edges of the baked mask as a new poly, then reuse some of the code from `canvas_finalize` to save those as a mask, maybe like `dflimg.set_seg_ie_polys( new_ie_poly_from_baked )`, then refresh the image data to overlay the new xseg mask ready for adjusting.
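To make that concrete, here is a rough sketch of the point-sampling step in plain Python. It assumes the dense boundary points already exist (for example from `cv2.findContours` run on the baked mask); none of this is actual DFL API, and the names are made up:

```python
def contour_to_editable_poly(points, max_points=40):
    """Thin a dense mask contour down to a handful of draggable XSeg points."""
    if len(points) <= max_points:
        return list(points)
    step = len(points) / max_points
    return [points[int(i * step)] for i in range(max_points)]


dense_edge = [(x, 100) for x in range(400)]  # stand-in for a traced mask boundary
poly = contour_to_editable_poly(dense_edge)
# 40 points: few enough to hand-adjust, many enough to hug the baked mask
```

The resulting list is what would then get handed to something like `dflimg.set_seg_ie_polys(...)` in the real implementation.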
I'm looking into this myself now, but I rarely use Python; if I'm effectively trying to dig a hole underwater here, hopefully someone more versed in Python than me can chime in, save me some time, and tell me why this won't work haha.
matterport/Mask_RCNN | tensorflow | 2,771 | Mask-RCNN TF 2.4.1 Compatibility | Hello,
I've seen a number of projects adapted for TF 2.x compatibility. Even though I have made the necessary changes to the project, none of them work properly for me.
I've been working on a project that has a number of deep learning components that must work together, and as I strive to add new detection and segmentation features, I've opted to use this framework. I was thinking that if I only changed a small portion of Mask R-CNN, it would still work with the rest of the project. My work, however, has become entangled due to version differences.
So, I was wondering how I could run Mask R-CNN without using `tf.compat.v1.disable_eager_execution()` in TF 2.4.1. If I use that call, it affects the entire working process of my project and causes numerous issues.
1. The first error that I get is:
```
asserts = [
    tf.Assert(tf.greater(tf.shape(input=proposals)[0], 0), [proposals],
              name="roi_assertion"),
]
```
`TypeError: Could not build a TypeSpec for <tf.Operation 'tf.debugging.Assert/Assert/AssertGuard/Identity' type=Identity> with type Operation`
Even though I commented that line out, the project generates similar issues for the rest of the lines.
2. If I change that line block to a comment line, I get the following error:
```
tf.cond(
    pred=tf.greater(tf.shape(input=positive_overlaps)[1], 0),
    true_fn=lambda: tf.argmax(input=positive_overlaps, axis=1),
    false_fn=lambda: tf.cast(tf.constant([]), tf.int64)
)
```
`TypeError: To be compatible with tf.eager.defun, Python functions must return zero or more Tensors; in compilation of <function <lambda> at 0x0000015B27E0EEE8>, found return value of type <class 'tensorflow.python.keras.engine.keras_tensor.KerasTensor'>, which is not a Tensor.`
3. Other issues I encountered when working with `tf.compat.v1.disable_eager_execution()` are listed below.
3.1 - I attempted to apply that call to only one file, but once you place it anywhere in your project, it affects the entire project. The rest of the project then produces a variety of errors.
3.2 - After that, I put `tf.compat.v1.enable_eager_execution()` just before the 'return' part of the file. In that scenario, TensorFlow states that it is not permitted to enable or disable eager execution while the program is running.
| open | 2022-02-04T08:04:15Z | 2022-09-18T19:49:02Z | https://github.com/matterport/Mask_RCNN/issues/2771 | [] | Mstfakts | 4 |
microsoft/unilm | nlp | 1,222 | Can you update Kosmos-2 with the code for preparing the dataset? |
Thank you for your great work. I can't wait to train it. But I don't know how to prepare the dataset; can you show me the code?
Thanks! | closed | 2023-07-24T03:38:33Z | 2023-08-02T05:24:16Z | https://github.com/microsoft/unilm/issues/1222 | [] | hujunchao | 7
scrapy/scrapy | python | 6,044 | PyPy tests crash | > Fatal Python error: Aborted
> Stack (most recent call first, approximate line numbers):
> File "/home/runner/work/scrapy/scrapy/.tox/pypy3-pinned/lib/pypy3.8/site-packages/urllib3/response.py", line 429 in _decode
> tests/test_crawl.py .....................pypy3-pinned: exit -6 (333.45 seconds) /home/runner/work/scrapy/scrapy> pytest --durations=10 scrapy tests pid=12231
> | closed | 2023-09-12T07:25:07Z | 2023-09-12T14:18:27Z | https://github.com/scrapy/scrapy/issues/6044 | [
"bug",
"CI",
"upstream issue"
] | wRAR | 3 |
babysor/MockingBird | deep-learning | 57 | Why is all the audio generated with the model provided on the homepage just noise? | Why is all the audio I generate using the model provided on the homepage just noise? | closed | 2021-08-27T16:19:27Z | 2021-08-28T16:49:53Z | https://github.com/babysor/MockingBird/issues/57 | [] | loilih | 3 |
graphql-python/graphene-sqlalchemy | graphql | 258 | Is there a right pagination suggestion? | pagination needs page, per page and total count
however, with https://github.com/graphql-python/graphene-sqlalchemy/issues/118, we can only use "page" and "per_page", but the total_count by https://github.com/graphql-python/graphene-sqlalchemy/pull/104 would not be correct any more.
with "after" and "first", we could get correct total_count. But we don't know "after what" with a given page number.
for example, there are totally 10,000 items. With a filter, we get 8,000 items.
set per page to 100, given page 12, we need the filtered 1101~1200 item.
with https://github.com/graphql-python/graphene-sqlalchemy/pull/104 , we got total_number 10,000 rather than 8,000, then we could not print the right total page num (80).
with "after" and "first", we don't know "after what" to get page 12. because the items are filtered, the IDs are not continuously.
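For what it's worth, with graphene's default cursor scheme the "after what" for a given page can be computed, since the stock relay cursors are just base64-encoded offsets. A sketch (this assumes the default `arrayconnection` cursor format; the function name is mine):

```python
import base64


def page_to_after_cursor(page, per_page):
    """Relay 'after' cursor for a 1-based page number, or None for page 1.

    Assumes graphene's default offset cursors, i.e. base64("arrayconnection:N").
    """
    offset = (page - 1) * per_page
    if offset == 0:
        return None
    # 'after' points at the last item of the previous page (0-based index)
    return base64.b64encode(f"arrayconnection:{offset - 1}".encode()).decode()


# Page 12 with 100 per page starts after item index 1099 (items 1101-1200, 1-based)
cursor = page_to_after_cursor(12, 100)
```

This only helps with offset cursors; it does not fix the total_count issue, which still needs the filtered count.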
| closed | 2020-01-07T13:18:42Z | 2023-02-25T00:49:36Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/258 | [] | CSGrandeur | 3 |
tableau/server-client-python | rest-api | 1,092 | [Type 3] Publish directly using Datasource/Workbook objects from the Document API | ## Description
Unless this functionality already exists, it would be helpful to directly use created Datasource/Workbook objects from the Document API as the source when publishing content via TSC. Currently, you have to pass a file path or a file object, and I haven't found an example of the latter. This would also avoid having to save files to the file system for quick on-the-fly datasource/workbook creation.
```python
from tableaudocumentapi import Connection, Datasource
import tableauserverclient as tsc
ds = Datasource.from_connections(caption="", connections=[some_connection])
# do login magic, get information for publish
server.datasources.publish(
datasource_item=tsc.DatasourceItem(project_id, "My Cool Datasource"),
file=ds, # object? item?
mode=server.PublishMode.Overwrite,
connection_credentials=tsc.ConnectionCredentials(username, password),
)
```
| open | 2022-08-19T06:47:14Z | 2023-01-13T18:33:52Z | https://github.com/tableau/server-client-python/issues/1092 | [
"enhancement",
"needs investigation"
] | eschranz | 1 |
wagtail/wagtail | django | 12,888 | Support min_length parameter on RichTextBlock |
### Feature Request
`RichTextBlock` currently supports a `max_length` parameter that validates text length (not including HTML tags), but no `min_length` equivalent. I find myself wanting to enforce such a minimum length on certain blocks in my project.
### Implementation
The `max_length` parameter is enforced with `RichTextMaxLengthValidator`, which is a simple extension of Django's `MaxLengthValidator`. Creating an equivalent `RichTextMinLengthValidator` that extends Django's `MinLengthValidator` seems like an obvious implementation option for this feature.
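A minimal, framework-free sketch of the check such a validator would perform is below. This is illustrative only: the helper names are mine, and the real feature would extend Django's `MinLengthValidator` exactly as `RichTextMaxLengthValidator` extends `MaxLengthValidator`, measuring only the visible text rather than the raw HTML.

```python
# Sketch of the proposed min-length check; names are illustrative only.
from html.parser import HTMLParser


class _TextExtractor(HTMLParser):
    """Collects only the visible text of an HTML fragment, ignoring tags."""

    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)


def rich_text_length(html):
    """Length of the visible text, which is what max_length measures today."""
    parser = _TextExtractor()
    parser.feed(html)
    return len("".join(parser.chunks))


def validate_min_length(html, min_length):
    """Raise ValueError when the visible text is shorter than min_length."""
    n = rich_text_length(html)
    if n < min_length:
        raise ValueError(
            f"Ensure this field has at least {min_length} characters (it has {n})."
        )
```

In the real implementation the length logic would live in the validator's `clean()` hook, mirroring how Django's `MaxLengthValidator` does it.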
### Working on this
I would be happy to work on this feature (provided that there is not already an existing solution that I am overlooking). | open | 2025-02-17T22:39:40Z | 2025-03-02T13:13:12Z | https://github.com/wagtail/wagtail/issues/12888 | [
"type:Enhancement",
"component:Streamfield",
"component:Rich text"
] | ambaron8 | 3 |
PokemonGoF/PokemonGo-Bot | automation | 6,067 | exception while reloading config |
With live config enabled, I got this exception when I modified the config file.
```
[2017-06-30 10:26:59] [MainThread] [ cli] [INFO] Config changed! Applying new config.
Exception in thread Thread-89:
Traceback (most recent call last):
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/Users/rama/poke/PokemonGo-Bot/pokemongo_bot/socketio_server/runner.py", line 34, in _start_listening_blocking
listener = eventlet.listen((self.host, self.port))
File "/Users/rama/poke/PokemonGo-Bot/lib/python2.7/site-packages/eventlet/convenience.py", line 43, in listen
sock.bind(addr)
File "/usr/local/Cellar/python/2.7.13/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 48] Address already in use
``` | open | 2017-06-30T08:27:48Z | 2017-06-30T10:00:06Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/6067 | [] | ramarro123 | 1 |
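For context on the traceback above: `Errno 48` is `EADDRINUSE` on macOS (`98` on Linux). The config-reload path calls `eventlet.listen` to start a fresh socketio server while the previous listener still holds the port. A minimal stdlib reproduction of that failure mode (plain `socket` here, not eventlet):

```python
import errno
import socket

# First "server" grabs an ephemeral port and keeps listening, like the
# socketio server started before the config change.
first = socket.socket()
first.bind(("127.0.0.1", 0))
port = first.getsockname()[1]
first.listen(1)

# The reload then tries to bind a second listener to the same port.
second = socket.socket()
try:
    second.bind(("127.0.0.1", port))
except OSError as exc:
    print("bind failed with EADDRINUSE:", exc.errno == errno.EADDRINUSE)
finally:
    second.close()
    first.close()
```

A fix would presumably need the reload path to shut down the old listener before calling `eventlet.listen` again.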
d2l-ai/d2l-en | data-science | 1,945 | colab gives error code: '@save' is not an allowed annotation – allowed values include [@param, @title, @markdown]. | When I open the Google Colab files (PyTorch or MXNet), I get this error:
'@save' is not an allowed annotation – allowed values include [@param, @title, @markdown].
This happens with all of the Colab files; in this specific case, it happens with the chapter 13 Colab:
kaggle-cifar10.ipynb

| open | 2021-10-25T12:25:39Z | 2021-11-11T07:09:21Z | https://github.com/d2l-ai/d2l-en/issues/1945 | [] | g-i-o-r-g-i-o | 2 |
allenai/allennlp | data-science | 5,032 | Textual Entailment PREDICTION code for Python missing a comma on the official demo page | This is not so much a bug as a small mistake: I found a syntax error while researching the Textual Entailment feature on the website (https://demo.allennlp.org/textual-entailment/elmo-snli).
The snippet for the predictor is missing a comma and will yield a syntax error:
```
from allennlp.predictors.predictor import Predictor
import allennlp_models.tagging
predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/decomposable-attention-elmo-2020.04.09.tar.gz")
predictor.predict(
premise="Two women are wandering along the shore drinking iced tea."
hypothesis="Two women are sitting on a blanket near some rocks talking about politics."
)
``` | closed | 2021-03-02T16:17:29Z | 2021-03-07T19:01:49Z | https://github.com/allenai/allennlp/issues/5032 | [] | alexander-py | 2 |
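Since the only problem is the comma, both versions of the snippet above can be checked with the standard library `ast` module: the snippet as published fails to parse, and adding a comma after the `premise` argument fixes it (a standalone check, no AllenNLP install needed):

```python
import ast

# The call exactly as published on the demo page (comma missing after premise=...)
broken = '''
predictor.predict(
    premise="Two women are wandering along the shore drinking iced tea."
    hypothesis="Two women are sitting on a blanket near some rocks talking about politics."
)
'''

# Same call with the missing comma restored
fixed = broken.replace('iced tea."', 'iced tea.",')


def parses(source):
    """True when the source is syntactically valid Python."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False


print(parses(broken), parses(fixed))  # → False True
```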
miguelgrinberg/Flask-SocketIO | flask | 1,181 | Nginx SSL - Not working | Not sure what the issue is; my configuration is below. Now the weird thing is it works fine when browsing over HTTP, but as soon as you browse over HTTPS the websocket can't connect. Nginx gives a 400 error and times out waiting for the upstream.
My relevant Python setup is:

```python
socketio = SocketIO(app, async_mode="eventlet")
socketio.run(app, debug=False)
```

And I'm just running it in a screen via its own virtual environment at the moment while I debug, but I have it in systemd normally:

```
python app.py
```
Also, the docs have an error: appending '/socket.io' to the proxy_pass setting of the socket.io location makes the whole thing not work.
```nginx
upstream xxx_app {
server 127.0.0.1:5000;
}
server {
listen 80;
listen 443 ssl;
server_name xxx;
ssl_certificate /etc/letsencrypt/live/xxx.xxx.com.au/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/xxx.xxx.com.au/privkey.pem;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
ssl_prefer_server_ciphers on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
resolver 1.1.1.1 1.0.0.1;
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/letsencrypt/live/xxx.xxx.com.au/chain.pem;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
location / {
include proxy_params;
proxy_pass http://xxx_app;
}
location /socket.io {
include proxy_params;
proxy_pass http://xxx_app;
proxy_buffering off;
proxy_redirect off;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
``` | closed | 2020-02-10T10:06:24Z | 2020-02-10T11:38:23Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1181 | [
"question"
] | toomuchio | 4 |
bmoscon/cryptofeed | asyncio | 1,012 | Bybit/Deribit/Bitmex currently not working | **Describe the bug**
It fails to connect
**To Reproduce**
Comment in/out exchanges to add Bybit/Bitmex/Deribit
```python
# Imports added for completeness; the original snippet omitted them.
# (LiquidationsSocket is assumed to come from cryptofeed's socket backends.)
from multiprocessing import Process, freeze_support

from cryptofeed import FeedHandler
from cryptofeed.backends.socket import LiquidationsSocket
from cryptofeed.defines import LIQUIDATIONS
from cryptofeed.exchanges import EXCHANGE_MAP


def writer(addr, port):
    f = FeedHandler()
    configured = []
    # exchanges = {'BINANCE_FUTURES', 'BYBIT', 'BITMEX', 'DERIBIT'}
    exchanges = {'BINANCE_FUTURES'}
    print("Querying exchange metadata...")
    for exchange_string, exchange_class in EXCHANGE_MAP.items():
        if exchange_string in exchanges:
            print(exchange_class.info()['channels']['websocket'])
            if LIQUIDATIONS in exchange_class.info()['channels']['websocket']:
                configured.append(exchange_string)
                print(exchange_string)
                symbols = [sym for sym in exchange_class.symbols() if 'PINDEX' not in sym]
                f.add_feed(exchange_class(subscription={LIQUIDATIONS: symbols}, callbacks={LIQUIDATIONS: LiquidationsSocket(addr, port=port)}), timeout=600)
    print("Starting feedhandler for exchanges:", ', '.join(configured))
    f.run()


if __name__ == '__main__':
    freeze_support()
    p = Process(target=writer, args=('udp://127.0.0.1', 12321))
    p.start()
```
**Operating System:**
- Windows 10
**Cryptofeed Version**
- 2.4.0
| open | 2024-02-06T14:28:29Z | 2024-02-06T14:28:29Z | https://github.com/bmoscon/cryptofeed/issues/1012 | [
"bug"
] | gigitalz | 0 |
opengeos/streamlit-geospatial | streamlit | 10 | Can't overlay administrative boundary on GOES timelapse | Hi there. I've been trying to create a GOES timelapse, and overlay some geojson information, but every time I try it I get the following message: "Something went wrong, either the ROI is too big or there are no data available for the specified date range. Please try a smaller ROI or different date range."
When I run without an administrative boundary it works fine, so it's not the ROI size. I tried a custom administrative boundary, but also tried the default ones (Countries and Continents). None work. I've attached screenshots of my settings.
Fantastic app btw. Very cool stuff.


| closed | 2021-11-23T15:45:24Z | 2022-01-03T18:02:42Z | https://github.com/opengeos/streamlit-geospatial/issues/10 | [] | gsjlogie | 4 |