| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
MorvanZhou/tutorials | tensorflow | 49 | How to use a continuous color as the color of a line? | Hi Morvan,
Thanks for continuing to make great lessons!
I have a question about using matplotlib to create a line with a continuous color, where the color is driven by an array.
The original dataset and question are [here](https://stackoverflow.com/questions/45125877/how-to-fill-the-color-of-a-line-with-a-continuous-color-which-is-created-by-an-a)
Could you teach me how to do it? Thanks | closed | 2017-07-16T06:54:51Z | 2017-07-18T05:15:16Z | https://github.com/MorvanZhou/tutorials/issues/49 | [] | EmbraceLife | 1 |
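For readers who land on this issue: the standard matplotlib approach (a sketch independent of the linked dataset; the curve and colour array below are made up) is to split the line into segments and colour them with a `LineCollection`:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.collections import LineCollection

# Made-up curve plus the array that should drive the colour.
x = np.linspace(0, 4 * np.pi, 200)
y = np.sin(x)
color_values = np.cos(x)

# Turn the polyline into (n-1) segments of shape (2, 2) each.
points = np.column_stack([x, y]).reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)

fig, ax = plt.subplots()
lc = LineCollection(segments, cmap="viridis")
lc.set_array(color_values[:-1])  # one colour value per segment
ax.add_collection(lc)
ax.set_xlim(x.min(), x.max())
ax.set_ylim(y.min() - 0.1, y.max() + 0.1)
fig.colorbar(lc, ax=ax, label="cos(x)")
```

The same pattern works for any per-point array; `set_array` plus a colormap is what produces the continuous colour.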
3b1b/manim | python | 2,324 | Black Texture on Degenerate Surface Regions | ### Describe the bug
The texture does not render correctly on certain TexturedSurfaces (Sphere, Cone, Disk3D). In specific areas it appears completely black.
**Code**:
```py
from manimlib import *
class SurfaceExample(Scene):
CONFIG = {
"camera_class": ThreeDCamera,
}
def construct(self):
frame = self.camera.frame
frame.set_euler_angles(
theta=55 * DEGREES,
phi=70 * DEGREES,
)
sphere = Sphere(radius=2.5, resolution=(4, 4))
texture = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/4d/Whole_world_-_land_and_oceans.jpg/1280px-Whole_world_-_land_and_oceans.jpg"
earth = TexturedSurface(sphere, texture, opacity=0.9)
sphere.shift(LEFT * 3)
earth.shift(RIGHT * 3)
self.play(FadeIn(sphere))
self.play(FadeIn(earth))
self.wait()
```
**Wrong display or Error traceback**:
It is mainly noticeable in surfaces with low resolution:
Resolution (4, 4):

Resolution (11, 11):

### Additional context
The issue occurs at surface points where one of the derivative vectors is degenerate:
point − du_point = 0 or point − dv_point = 0
### Potential Fix
I added a conditional check in the textured_surface shader to detect a du or dv of zero length, in which case the unit_normal must be computed differently.
shaders/textured_surface/vert.glsl:
```glsl
#version 330
in vec3 point;
in vec3 du_point;
in vec3 dv_point;
in vec2 im_coords;
in float opacity;
out vec3 v_point;
out vec3 v_unit_normal;
out vec2 v_im_coords;
out float v_opacity;
#INSERT emit_gl_Position.glsl
#INSERT get_unit_normal.glsl
void main(){
v_point = point;
vec3 du = du_point - point;
vec3 dv = dv_point - point;
if(length(dv) < 1e-6 || length(du) < 1e-6){
v_unit_normal = normalize(point);
} else {
v_unit_normal = normalize(cross(normalize(du), normalize(dv)));
}
v_im_coords = im_coords;
v_opacity = opacity;
emit_gl_Position(point);
}
```
The analytical normal of a sphere was used in the fallback case.
**Results**
Resolution (4, 4):

Resolution (11, 11):

**Conclusion**
I am not sure if this was the ideal solution, but it is working perfectly for various resolutions and surfaces now!
| open | 2025-03-14T01:43:30Z | 2025-03-21T15:43:50Z | https://github.com/3b1b/manim/issues/2324 | [
"bug"
] | TiagoMLucio | 1 |
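The degenerate case is easy to reproduce outside the shader. A pure-NumPy sketch of the same guard (the sample points below are made up; on a sphere the position itself serves as the fallback normal):

```python
import numpy as np

def unit_normal(point, du_point, dv_point, eps=1e-6):
    """Mirror of the shader logic: fall back when a derivative collapses."""
    du = du_point - point
    dv = dv_point - point
    if np.linalg.norm(du) < eps or np.linalg.norm(dv) < eps:
        # Degenerate parametrisation (e.g. a sphere pole):
        # use the analytical normal of the sphere instead.
        return point / np.linalg.norm(point)
    n = np.cross(du, dv)
    return n / np.linalg.norm(n)

pole = np.array([0.0, 0.0, 2.5])    # north pole of a radius-2.5 sphere
du_pt = np.array([0.0, 0.0, 2.5])   # coincides with the point: du == 0
dv_pt = np.array([0.1, 0.0, 2.498])
normal = unit_normal(pole, du_pt, dv_pt)
# normal == [0, 0, 1]: well-defined even though the naive cross product is zero
```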
pallets-eco/flask-wtf | flask | 362 | BooleanField does not honor default value | Hi.
I'm having issues with rendering a BooleanField correctly using flask-wtf 0.14.2 with wtforms 2.2.1.
In my form, I define the field as:
```python
from flask_wtf import FlaskForm
from wtforms import BooleanField
class MyForm(FlaskForm):
bool_field = BooleanField(label='My boolean', description='Should be checked', default='checked')
```
But it renders unchecked, as `<input id="bool_field" name="bool_field" type="checkbox" value="y">`
However, if I instead use the render_kw it works:
```python
class MyForm(FlaskForm):
bool_field = BooleanField(label='My boolean', description='Should be checked', render_kw={'checked': True})
```
Gives me: `<input checked="checked" id="bool_field" name="bool_field" type="checkbox" value="y">`
From reading the documentation on BooleanField for wtforms: https://wtforms.readthedocs.io/en/stable/fields.html#wtforms.fields.BooleanField it seems like the first approach (passing `default='checked'`) is indeed the proper way of doing it.
When I check the values directly from the form object using pure wtforms, it seems to work as expected:
```python
import wtforms
class BooleanTest(wtforms.Form):
field = wtforms.BooleanField(u'Boolean', default='checked')
form = BooleanTest()
```
Then form.field.data returns `True`, and it changes to `False` if I remove the `default='checked'`.
I expect the first use case to render a checked checkbox, but it does not.
Am I doing something wrong, or do I miss something here? | closed | 2019-03-13T08:23:36Z | 2021-05-26T00:54:57Z | https://github.com/pallets-eco/flask-wtf/issues/362 | [] | ilons | 6 |
dot-agent/nextpy | fastapi | 17 | Add support for audio gen models | closed | 2023-08-22T08:29:10Z | 2023-08-22T10:10:36Z | https://github.com/dot-agent/nextpy/issues/17 | [] | anubrag | 0 | |
nerfstudio-project/nerfstudio | computer-vision | 3,092 | Rendering results for Nerfacto not training well | Hi, I was training Nerfacto (provided by Nerfstudio) on my own dataset.
I have the issue that is similar to this link
[Rendering results for Nerfacto don't match the original image on training set. #2686](url)
The difference is that my video is a screen recording, so I don't know the FOV for the training camera intrinsics.
The steps are listed as follows:
1. ns-train nerfacto --viewer.websocket-port 7007 --viewer.make-share-url True nerfstudio-data --data data/nerfstudio/custom_data --downscale-factor 4
2. !ns-render interpolate --load-config $config_filename --output-path renders/test01.mp4
the Rendering results is follow:
https://github.com/nerfstudio-project/nerfstudio/assets/29448776/ecac571c-2f57-48cb-ae2c-9c7426b1fb85
I don't know what causes these rendering results. Is there an argument or parameter that could be set to avoid this? Thank you!
| open | 2024-04-19T06:26:42Z | 2024-04-19T06:45:54Z | https://github.com/nerfstudio-project/nerfstudio/issues/3092 | [] | cokobu | 0 |
adbar/trafilatura | web-scraping | 217 | The 'adbar/trafilatura' repository doesn't contain the 'Trafilatura_Overview.ipynb' path in 'master'. | the notebook is linked from the main readme | closed | 2022-06-15T11:21:55Z | 2022-06-15T14:48:44Z | https://github.com/adbar/trafilatura/issues/217 | [] | johanjohan | 1 |
qwj/python-proxy | asyncio | 153 | Reload Rules file | Hi All,
I am currently running this project in a Docker container and wonder if there is a way to reload a Rules file from within the container, without having to restart the entire Docker container.
Any help is appreciated. Thank you! | open | 2022-08-11T08:54:44Z | 2022-08-11T08:54:44Z | https://github.com/qwj/python-proxy/issues/153 | [] | onestix | 0 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 391 | Why does the same configuration work locally but fail to fetch videos after Docker deployment? | Asking for help: with the exact same configuration, identical UA and Cookie values, the project works fine when run directly on my machine, but fails once deployed in Docker on my NAS. The egress IP is the same in both cases, and I have no clue where to look next. | closed | 2024-05-16T00:59:47Z | 2024-05-16T12:05:36Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/391 | [
"BUG"
] | fgprodigal | 19 |
iperov/DeepFaceLab | machine-learning | 5,417 | DeepFaceLab VRAM recognition problem | My computer specifications are as follows.
cpu : ryzen 4800H
gpu : Rtx3060 (notebook) 6GB
memory: 16GB
Task Manager confirms that the full 6GB of video RAM is detected.
However, DeepFaceLab recognizes the VRAM as only 3.4GB, resulting in frequent memory errors. I'm looking for someone who knows the solution. | open | 2021-10-22T00:28:27Z | 2023-06-08T22:48:35Z | https://github.com/iperov/DeepFaceLab/issues/5417 | [] | Parksehoon1505 | 5 |
keras-team/keras | data-science | 20,260 | Model weights not saved if a custom layer class contains a list of layers named self._layers | Hi everyone,
I would like to point out a problem that I found while developing a custom model: a layer's weights are not saved if a Model subclass initializes a custom layer that stores its sublayers in a list attribute named `self._layers`. The code below demonstrates the issue and should illustrate it better.
```python
import os
from pathlib import Path
import keras
import numpy as np
@keras.saving.register_keras_serializable(package="KerasTest", name="CustomLayer")
class CustomLayer(keras.Layer):
def __init__(self, bugged: bool = False, name="test_layer", **kwargs):
super(CustomLayer, self).__init__(name=name, **kwargs)
self._bugged = bugged
if self._bugged:
self._layers = [
keras.layers.Dense(64, activation='relu'),
keras.layers.Dense(32, activation='relu'),
keras.layers.Dense(10, activation='softmax')
]
else:
self._custom_layers = [
keras.layers.Dense(64, activation='relu'),
keras.layers.Dense(32, activation='relu'),
keras.layers.Dense(10, activation='softmax')
]
def call(self, inputs):
x = inputs
layer_list = self._layers if self._bugged else self._custom_layers
for layer in layer_list:
x = layer(x)
return x
def get_config(self):
base_config = super().get_config()
config = {
"bugged": self._bugged
}
return {**base_config, **config}
@keras.saving.register_keras_serializable(package="KerasTest", name="TestModel")
class TestModel(keras.Model):
def __init__(self, bugged: bool = False, name="test_model", **kwargs):
super(TestModel, self).__init__(name=name, **kwargs)
self._bugged = bugged
self._custom_layer = CustomLayer(bugged=bugged)
def call(self, inputs):
return self._custom_layer(inputs)
def get_config(self):
base_config = super().get_config()
config = {
"bugged": self._bugged
}
return {**base_config, **config}
def test_model(bugged: bool = False):
output_path = Path("./output")
model_name_prefix = "bugged" if bugged else "fixed"
# Dataset generation
num_samples = 1000
input_data = keras.random.uniform(shape=(num_samples, 32))
labels = keras.random.randint(minval=0, maxval=10, shape=(num_samples,))
labels = keras.utils.to_categorical(labels, num_classes=10)
# Test bugged model
model = TestModel(bugged=bugged)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(input_data, labels, epochs=5, batch_size=32, validation_split=0.2)
trained_weights = model.get_weights()
# Save and load model
output_path.mkdir(parents=True, exist_ok=True)
model.save(output_path / f"{model_name_prefix}_model.keras")
loaded_model = keras.saving.load_model(output_path / f"{model_name_prefix}_model.keras")
loaded_weights = loaded_model.get_weights()
print(f"------------ {model_name_prefix.capitalize()} - Compare trained weights to loaded weights -------------")
for i, (uw, tw) in enumerate(zip(trained_weights, loaded_weights)):
comparison = np.array_equal(uw, tw)
print(f"Layer {i} -> Weights match: {comparison}")
if __name__ == '__main__':
os.environ["KERAS_BACKEND"] = "tensorflow"
test_model(bugged=True)
test_model(bugged=False)
```
The output is:
```
------------ Bugged - Compare trained weights to loaded weights -------------
Layer 0 -> Weights match: False
Layer 1 -> Weights match: False
Layer 2 -> Weights match: False
Layer 3 -> Weights match: False
Layer 4 -> Weights match: False
Layer 5 -> Weights match: False
------------ Fixed - Compare trained weights to loaded weights -------------
Layer 0 -> Weights match: True
Layer 1 -> Weights match: True
Layer 2 -> Weights match: True
Layer 3 -> Weights match: True
Layer 4 -> Weights match: True
Layer 5 -> Weights match: True
```
I think that the same problem has been solved for the `Model` class by declaring a setter method (line 170). Perhaps it is possible to use the same approach for the `Layer` class.
| open | 2024-09-16T14:50:50Z | 2024-09-16T22:56:49Z | https://github.com/keras-team/keras/issues/20260 | [
"type:Bug"
] | mpetteno | 2 |
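The hazard here is a name collision with an attribute the framework reserves for its own bookkeeping. A simplified pure-Python illustration (the classes below are stand-ins, not Keras internals): when the base class guards the name with a read-only property, plain assignment in a subclass fails loudly instead of silently breaking tracking.

```python
class TrackerBase:
    """Stand-in for a framework base class that reserves `_layers`."""
    @property
    def _layers(self):
        # The framework would return its internally tracked sublayers here.
        return list(self.__dict__.get("_tracked", ()))

class CustomLayer(TrackerBase):
    def __init__(self):
        # Assigning to the reserved name hits the read-only property.
        self._layers = ["dense1", "dense2"]

try:
    CustomLayer()
    outcome = "no collision"
except AttributeError:
    outcome = "collision rejected by the property"
```

Adding a setter (as the `Model` class reportedly does) is one way to make the collision explicit or redirect it safely.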
521xueweihan/HelloGitHub | python | 2,535 | [Open-source self-recommendation] ISAT with segment anything - an interactive semi-automatic image segmentation annotation tool built on Segment Anything | ## Recommended Project
- Project address: https://github.com/yatengLG/ISAT_with_segment_anything
- Category: Python, machine learning
- Project title: an image segmentation annotation tool integrating segment anything
- Project description: integrates Facebook's open-source [segment-anything](https://github.com/facebookresearch/segment-anything) project to enable fast image segmentation annotation, greatly reducing the annotation workload.
- Highlights:
  1. The first image segmentation annotation tool to integrate segment-anything.
  2. Left-click (right-click) a region of interest (region of non-interest) to complete the object segmentation annotation
  3. Supports re-editing existing annotations
  4. Supports both semantic segmentation and instance segmentation annotation
- Screenshot:
  
Demo video: [bilibili](https://www.bilibili.com/video/BV1or4y1R7EJ/)
- Planned updates:
  The project is under active development
| closed | 2023-04-25T07:55:16Z | 2024-01-24T16:07:13Z | https://github.com/521xueweihan/HelloGitHub/issues/2535 | [
"machine learning"
] | yatengLG | 0 |
pydata/xarray | numpy | 9,984 | DataTree + Zarr-Python 3 | ### What is your issue?
In order to limit the scope of https://github.com/pydata/xarray/pull/9552, we opted to delay complete DataTree compatibility with zarr-python 3. It would be nice to get this working now that zarr 3.0 is out. This issue tracks what is left to do to make this integration work:
- [x] Go through the datatree test suite and remove any skips for zarr-python 3, e.g. https://github.com/pydata/xarray/blob/1c7ee65d560fa3067dc4424c672393839fa972d3/xarray/tests/test_backends_datatree.py#L376-L378
- [x] DataTree passes paths with a leading slash; despite https://github.com/zarr-developers/zarr-python/pull/2384, this still seems to break things in zarr 3, and an upstream fix may be required
xrefs:
- https://github.com/pydata/xarray/issues/9515
- https://github.com/zarr-developers/zarr-python/issues/2765
- https://github.com/pydata/xarray/discussions/9938
- https://github.com/pydata/xarray/issues/9733
- https://github.com/zarr-developers/zarr-python/issues/2357
- https://github.com/earth-mover/icechunk/issues/624
- https://github.com/pydata/xarray/issues/9960
cc @dcherian, @d-v-b, @maxrjones
@TomAugspurger - if you have recollections of other things that need to be fixed to make this work, please add to this list. | closed | 2025-01-25T17:11:34Z | 2025-03-20T06:05:11Z | https://github.com/pydata/xarray/issues/9984 | [
"bug",
"topic-zarr",
"topic-DataTree"
] | jhamman | 3 |
ultralytics/ultralytics | python | 19,398 | Yolov12 for instance segmentation task | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
How can I train Yolov12 for instance segmentation model?
### Additional
_No response_ | closed | 2025-02-24T11:11:24Z | 2025-02-24T11:47:00Z | https://github.com/ultralytics/ultralytics/issues/19398 | [
"question",
"segment"
] | mrohit01 | 2 |
albumentations-team/albumentations | machine-learning | 1,999 | label_fields are not filtered out with bboxes | ## Describe the bug
Entries in label_fields are not filtered together with bboxes when the latter fall outside of the image.
### To Reproduce
Steps to reproduce the behavior:
```
import albumentations as A
from PIL import Image
import numpy as np
transform = A.Compose(
[
A.RandomResizedCrop((500, 500), scale=(0.01, 0.1), ratio=(1, 1)),
],
bbox_params=A.BboxParams(
format="coco",
label_fields=["label"], # clip=True # , min_area=25
),
)
boxes = [[10,10,20,20], [5,5,10,10], [450, 450, 5,5], [250,250,5,5]]
labels = [1,2,3,4]
res = transform(image=np.zeros((500,500,3), dtype='uint8'), bboxes=boxes, label=labels)
print(len(res['bboxes']), len(res['label']))
```
### Expected behavior
The lengths of bboxes and label must be the same, but label is not filtered.
### Actual behavior
label is not filtered; the lengths of the two arrays are different.
| closed | 2024-10-18T20:05:24Z | 2024-10-19T02:18:13Z | https://github.com/albumentations-team/albumentations/issues/1999 | [
"bug"
] | IvanHahan | 2 |
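Until the library filters label_fields itself, a manual workaround (a sketch; the predicate below is made up for a 500x500 crop) is to drop label entries in lockstep with their boxes after the transform:

```python
def sync_filter(bboxes, labels, keep):
    """Keep (bbox, label) pairs together so the two lists stay aligned."""
    pairs = [(box, lab) for box, lab in zip(bboxes, labels) if keep(box)]
    if not pairs:
        return [], []
    kept_boxes, kept_labels = (list(t) for t in zip(*pairs))
    return kept_boxes, kept_labels

def inside_500(box):
    # COCO-format box (x, y, w, h) fully inside a 500x500 image
    x, y, w, h = box
    return x >= 0 and y >= 0 and x + w <= 500 and y + h <= 500

boxes, labels = sync_filter([[10, 10, 20, 20], [600, 600, 5, 5]], [1, 2], inside_500)
# boxes == [[10, 10, 20, 20]], labels == [1]
```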
ray-project/ray | data-science | 51,499 | CI test windows://python/ray/tests:test_component_failures is consistently_failing | CI test **windows://python/ray/tests:test_component_failures** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aaf1-9737-4a02-a7f8-1d7087c16fb1
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4156-97c5-9793049512c1
DataCaseName-windows://python/ray/tests:test_component_failures-END
Managed by OSS Test Policy | closed | 2025-03-19T00:06:07Z | 2025-03-19T21:52:52Z | https://github.com/ray-project/ray/issues/51499 | [
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 3 |
netbox-community/netbox | django | 17,841 | Allow `Tags` to be displayed in a specific order | ### NetBox version
v4.1.0
### Feature type
New functionality
### Triage priority
N/A
### Proposed functionality
Add an integer `Index` field to the `Tag` object definition which can be used by the frontend to order tags first by `Index`, then alphabetically when displaying them.
- `Index` ordering should be from lowest to highest value, where the lowest value has the highest priority and therefore it is displayed before any other tags
- Some sort of hard limit, e.g. `-128` to `128`, could be implemented for optimization
- The default index value must be zero
- **For tags with the same index, alphabetical sorting should be the default behavior.** This is to ensure that, when migrating to this implementation existing behavior is preserved.
### Use case
There are innumerable use cases for tags; one such case is the ability to give information about an object such as a virtual machine:
`NETPY` `CLT01` `VM108` `APCHE`
```
| | | |_ Gives a clear visual hint of the service running in this machine
| | |________ Shows the virtual machine name according to our virtualization system
| |_______________ Identifies the cluster where this machine belongs to
|_____________________ Identifies the tenant
```
It would be lovely to display the four tags in the aforementioned order, since that is the formal name of this object and everywhere in our internal processes and documents, we refer to this virtual machine as `NETPY-CLT01-VM108-APCHE`.
Having the ability to sort tags when displaying objects in Netbox will allow for further customization and integration with other systems that rely on tags for looking up objects.
### Database changes
A new `integer` column for the `Tag` table definition.
### External dependencies
None that I am aware of. | closed | 2024-10-23T15:13:30Z | 2025-03-19T17:25:39Z | https://github.com/netbox-community/netbox/issues/17841 | [
"status: accepted",
"type: feature",
"status: backlog",
"complexity: medium",
"netbox"
] | BrunoBlanes | 8 |
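The proposed ordering rule is simple to state in plain Python (the `Tag` dataclass below is illustrative, not NetBox's actual model): sort by `index` ascending, then alphabetically, with a default index of zero preserving today's behaviour.

```python
from dataclasses import dataclass

@dataclass
class Tag:
    name: str
    index: int = 0  # proposed column; default 0 keeps existing ordering

def display_order(tags):
    # Lowest index first; ties fall back to case-insensitive alphabetical order.
    return sorted(tags, key=lambda t: (t.index, t.name.lower()))

tags = [Tag("VM108", 2), Tag("APCHE", 3), Tag("NETPY", 0), Tag("CLT01", 1)]
ordered = [t.name for t in display_order(tags)]
# ordered == ["NETPY", "CLT01", "VM108", "APCHE"]
```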
automl/auto-sklearn | scikit-learn | 926 | MyDummyClassifier(configuration=1, random_state=None)) | autosklearn.__version__ == '0.8.0'


| closed | 2020-08-19T03:05:07Z | 2021-11-17T11:45:24Z | https://github.com/automl/auto-sklearn/issues/926 | [
"bug"
] | wziji | 2 |
psf/requests | python | 6,409 | `test_request_recovery` fails when using pytest-xdist | This issue was created as a result of removing a TODO comment in https://github.com/psf/requests/pull/6406.
```
# TODO: figure out why this sometimes fails when using pytest-xdist.
``` | closed | 2023-04-19T08:33:07Z | 2024-04-19T00:03:20Z | https://github.com/psf/requests/issues/6409 | [] | jelgun | 0 |
snarfed/granary | rest-api | 145 | Some weird results | https://granary.io/url?input=html&output=mf2-json&url=http%3A%2F%2Fbeta.singpolyma.net%2F
I see the first item is my representative h-card, but why does it have an `author` property? It is also not picking up the full name that is present.
Authors on the hentries have the full content of the vcard as the `name` instead of pulling from the `fn` property, and are also missing the full name that is present.
Entries with `rel=tag` markup get a `data` element appended to their `content.html` property?
For comparison, https://pin13.net/mf2/?url=http%3A%2F%2Fbeta.singpolyma.net%2F gives me what I expected. https://mf2.kylewm.com/?url=http%3A%2F%2Fbeta.singpolyma.net%2F&parser=html5lib has most of the same issues (I assume you also use mf2py, so that's probably it) but it at least handles the name in the representative hcard properly. | closed | 2018-04-12T16:10:25Z | 2018-07-25T00:39:38Z | https://github.com/snarfed/granary/issues/145 | [] | singpolyma | 6 |
ipython/ipython | jupyter | 14,395 | Defining custom torch function causes "maximum recursion depth" in autoreload | Python 3.12.2
ipython 8.22.2
torch 2.2.1
I have a reproduction that is 100% reliable below.
Create a script with the following contents named `test_module.py`:
```
import torch
class CustomTorchFunc(torch.autograd.Function):
@staticmethod
def forward(ctx, input1, input2, weights):
return 1
@staticmethod
def backward(ctx, grad_output):
return 1
def main():
var = 'something to edit 0'
print("called main " + var)
```
Start ipython in the directory with the module
Execute:
```
%load_ext autoreload
%autoreload 2
import test_module
test_module.main()
```
It should print out
`called main something to edit 0`
So far so good. Now, edit `test_module.py` and increment 0 to 1. In ipython execute:
`test_module.main()`
The following error is printed out
```
[autoreload of test_module failed: Traceback (most recent call last):
File "/Users/will/miniforge3/envs/ml0524/lib/python3.12/site-packages/IPython/extensions/autoreload.py", line 276, in check
superreload(m, reload, self.old_objects)
File "/Users/will/miniforge3/envs/ml0524/lib/python3.12/site-packages/IPython/extensions/autoreload.py", line 500, in superreload
update_generic(old_obj, new_obj)
File "/Users/will/miniforge3/envs/ml0524/lib/python3.12/site-packages/IPython/extensions/autoreload.py", line 397, in update_generic
update(a, b)
File "/Users/will/miniforge3/envs/ml0524/lib/python3.12/site-packages/IPython/extensions/autoreload.py", line 349, in update_class
if update_generic(old_obj, new_obj):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/will/miniforge3/envs/ml0524/lib/python3.12/site-packages/IPython/extensions/autoreload.py", line 397, in update_generic
update(a, b)
File "/Users/will/miniforge3/envs/ml0524/lib/python3.12/site-packages/IPython/extensions/autoreload.py", line 349, in update_class
if update_generic(old_obj, new_obj):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/will/miniforge3/envs/ml0524/lib/python3.12/site-packages/IPython/extensions/autoreload.py", line 397, in update_generic
update(a, b)
File "/Users/will/miniforge3/envs/ml0524/lib/python3.12/site-packages/IPython/extensions/autoreload.py", line 349, in update_class
if update_generic(old_obj, new_obj):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/will/miniforge3/envs/ml0524/lib/python3.12/site-packages/IPython/extensions/autoreload.py", line 397, in update_generic
update(a, b)
File "/Users/will/miniforge3/envs/ml0524/lib/python3.12/site-packages/IPython/extensions/autoreload.py", line 349, in update_class
if update_generic(old_obj, new_obj):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RecursionError: maximum recursion depth exceeded
]
called main something to edit 1
```
It seems like the module was successfully reloaded, but the error says it failed. So I'm not sure if I'm in an undefined state or not. | closed | 2024-04-09T22:28:35Z | 2025-02-14T10:53:56Z | https://github.com/ipython/ipython/issues/14395 | [] | willtalmadge | 2 |
CorentinJ/Real-Time-Voice-Cloning | python | 751 | Where is the tensorflow version? | I think I am more comfortable with the Tensorflow version, but I really can't find it, I found about a commit 5425557, but I don't seem to find the code | closed | 2021-05-09T06:30:10Z | 2021-05-30T07:34:25Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/751 | [] | bosecodes | 1 |
MilesCranmer/PySR | scikit-learn | 575 | [Feature]: Select Julia version at first import. | ### Feature Request
It would be better if the user could choose the correct Julia version during the first import.
For instance, running PySR on a MacBook Air M2, the program automatically selected the x64 julialang build instead of aarch64 during the first import.
Even though the program installed successfully, it suffers a translation penalty from running AMD64 binaries on Apple silicon.
| closed | 2024-03-21T03:27:28Z | 2024-03-21T04:16:11Z | https://github.com/MilesCranmer/PySR/issues/575 | [
"enhancement"
] | JacobZhao | 0 |
viewflow/viewflow | django | 336 | frontend in the admin ? | Hi ! Gave viewflow a test run, it looks really nice.
Is there a documented way or a plan to make it possible to have the flow views in the Django admin rather than in the material frontend?
To me there would be many benefits:
- benefit from native django admin features (such as autocomplete foreign keys...) and from existing packages
- reusing an environment we already know (both for users and devs) - IMO a must especially for adopting viewflow in already existing apps that make use of django admin
- less opinionated design (material looks great still in 2021, but will look dated)
If not, do you see any major blocker to implementing it? (I'd imagine something like a ProcessModelAdmin subclass that filters fields according to the current step.) Would it make sense to include that?
"request/question",
"dev/site"
] | olivierdalang | 2 |
zappa/Zappa | django | 753 | [Migrated] Local Test -OK, Clean Deploy - Error Post Deployment (Failed to find library, OpenBLAS WARNIN) | Originally from: https://github.com/Miserlou/Zappa/issues/1881 by [bbjishnu](https://github.com/bbjishnu)
<!--- Provide a general summary of the issue in the Title above -->
After zappa init, the local test is fine and the deploy shows no errors, but the app fails post-deployment. The error is below:
OpenBLAS WARNING - could not determine the L2 cache size on this system
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
Probably a bug; I could not locate a similar one in the issues list.
Zappa Tail
Calling tail for stage dev..
[1559738527376] Instancing..
[1559738529333] Instancing..
[1559738540336] Failed to find library...right filename?
[1559738541096] OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
[1559738542425] Failed to find library...right filename?
[1559738543166] OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
[1559738808996] Instancing..
[1559738821755] Failed to find library...right filename?
[1559738822530] OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
Python 3.6
## Expected Behavior
<!--- Tell us what should happen -->
Return output of the POST request
## Actual Behavior
<!--- Tell us what happens instead -->
curl -X POST returns 400, and zappa tail shows the OpenBLAS warning.
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
```python
import os
import sys
import boto3
from flask import Flask, request
from flask_restful import abort, Api, Resource
import joblib
import pandas as pd
import pickle
BUCKET_NAME = 'XXXX'
MODEL_FILE_NAME = 'reg_model.pkl'
MODEL_LOCAL_PATH = '/tmp/' + MODEL_FILE_NAME
app = Flask(__name__)
api = Api(app)
def load_model():
s3 = boto3.resource("s3")
bucket = s3.Bucket(BUCKET_NAME)
bucket.download_file(MODEL_FILE_NAME,MODEL_LOCAL_PATH)
return joblib.load(MODEL_LOCAL_PATH)
class PredictReg(Resource):
def post(self):
# UI Based
#user_dict = request.form
# for curl test
user_dict=request.get_json(force=True)
#print(user_dict)
padas_dict={}
vars_names =list(user_dict.keys())
for vars in vars_names:
# for curl test
padas_dict[vars]=[float(x) for x in user_dict[vars]]
# fir UI
#padas_dict[vars]=[float(x) for x in user_dict.getlist(vars)]
data =pd.DataFrame(padas_dict)
#reg = load_model()
#pred_v = reg.predict(data)
pred_v=[100 for x in range(data.shape[0])] # This trivial step has been added to test Zaapa Error
output = {'prediction': pd.Series(pred_v).to_json(orient='values')}
return output
api.add_resource(PredictReg, '/')
if __name__ == '__main__':
app.run(host='0.0.0.0', debug=True)
```
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used:
0.48.2
* Operating System and Python version:
Ubuntu 18.04.2 LTS
Linux 4.15.0-1034-aws
* The output of `pip freeze`:
aniso8601==6.0.0
argcomplete==1.9.3
boto3==1.9.161
botocore==1.12.161
certifi==2019.3.9
cfn-flip==1.2.1
chardet==3.0.4
Click==7.0
docutils==0.14
durationpy==0.5
Flask==1.0.3
Flask-RESTful==0.3.7
future==0.16.0
hjson==3.0.1
idna==2.8
itsdangerous==1.1.0
Jinja2==2.10.1
jmespath==0.9.3
joblib==0.13.2
kappa==0.6.0
lambda-packages==0.20.0
MarkupSafe==1.1.1
numpy==1.16.4
pandas==0.23.3
placebo==0.9.0
python-dateutil==2.6.1
python-slugify==1.2.4
pytz==2019.1
PyYAML==5.1
requests==2.22.0
s3transfer==0.2.1
scikit-learn==0.21.2
scipy==1.3.0
six==1.12.0
toml==0.10.0
tqdm==4.19.1
troposphere==2.4.7
Unidecode==1.0.23
urllib3==1.25.3
Werkzeug==0.15.4
wsgi-request-logger==0.4.6
zappa==0.48.2
* Link to your project (optional):
* Your `zappa_settings.py`:
{
"dev": {
"app_function": "api.app.app",
"aws_region": "ap-south-1",
"profile_name": "default",
"project_name": "my-ml-host",
"runtime": "python3.6",
"s3_bucket": "zappa-hc77eulah",
"slim_handler":true,
"log_level":"ERROR",
"keep_warm":false
}
} | closed | 2021-02-20T12:41:47Z | 2022-08-16T05:49:26Z | https://github.com/zappa/Zappa/issues/753 | [] | jneves | 1 |
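The OpenBLAS cache-size warning is widely reported as benign on Lambda (the sandbox hides CPU cache details), so it may be unrelated to the 400 response. A commonly suggested mitigation, sketched here as an assumption rather than a confirmed fix, is to pin BLAS threading through Zappa's `environment_variables` setting:

```json
{
    "dev": {
        "environment_variables": {
            "OPENBLAS_NUM_THREADS": "1",
            "OMP_NUM_THREADS": "1"
        }
    }
}
```

If the 400 persists, comparing `zappa tail` output with and without the `slim_handler` option may help separate the BLAS warning from the "Failed to find library" message.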
mwaskom/seaborn | matplotlib | 3,751 | TypeError with seaborn.kdeplot when using fill=True and categorical hue | ### Description
I encountered a `TypeError` when using `seaborn.kdeplot` with the `fill=True` option and a categorical hue in my dataset. The error message indicates that there is a problem with data types being passed to the `matplotlib` fill function.
### Steps to Reproduce
Here is a minimal reproducible example:
```python
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
# Generate synthetic datasets
np.random.seed(42)
data = pd.DataFrame({
'value': np.concatenate([np.random.normal(5, 1, size=100), np.random.normal(10, 1, size=100), np.random.normal(15, 1, size=100)]),
'category': ['Group 1']*100 + ['Group 2']*100 + ['Group 3']*100
})
# Plot using kdeplot
sns.kdeplot(data=data, x="value", hue="category", fill=True, palette='coolwarm', alpha=0.7)
plt.show()
```
### Expected Behavior
The KDE plot should render successfully with the filled areas for different categories.
### Actual Behavior
The plot does not render, and the following error is thrown:
TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
### Suggested Fix or Improvement
It seems that kdeplot with fill=True might need better handling for different data types or clearer documentation on the expected data types. Additionally, improving error messages to guide users to the correct data type could help prevent this issue.
A temporary workaround I found was using matplotlib directly for creating ridge plots, as shown below:
```python
# Ridge plot workaround using matplotlib
from scipy.stats import gaussian_kde
fig, ax = plt.subplots()
for category in data['category'].unique():
subset = data[data['category'] == category]['value']
density = gaussian_kde(subset)
xs = np.linspace(min(subset), max(subset), 200)
ax.fill_between(xs, density(xs), alpha=0.6, label=category)
ax.legend()
plt.show()
```
### Additional Notes
It would be helpful to either enhance the kdeplot function to handle this more gracefully or provide a warning if the data type might cause an error.
| closed | 2024-08-25T04:50:44Z | 2024-09-03T01:02:48Z | https://github.com/mwaskom/seaborn/issues/3751 | [] | mishachada | 3 |
drivendataorg/cookiecutter-data-science | data-science | 17 | How will cookiecutter handle database-driven projects? | I see there is S3 syncing, but what about people using SQL databases or HDFS? A few useful thoughts:
1. There should be a place for database connection strings, and connections to be established
2. Inside of `src/data` we should store Python scripts, but we can have a subdirectory, `database_scripts`, for `.sql`, `.hql`, etc. This would cover all database insertion, ETL, in-database data munging, etc.
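For point 1, a minimal sketch of keeping connection strings out of the repo (12-factor style; names are illustrative, not part of the template):

```python
import os

def get_db_url(env_var="DATABASE_URL", default="sqlite:///data/interim/local.db"):
    """Read the connection string from the environment, falling back to a
    throwaway local database so notebooks still run without configuration."""
    return os.environ.get(env_var, default)
```

The template could then establish connections in one place from this URL instead of scattering credentials through scripts.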
Does this seem sensible?
| closed | 2016-04-29T15:17:11Z | 2016-04-29T19:15:52Z | https://github.com/drivendataorg/cookiecutter-data-science/issues/17 | [
"folder-layout",
"needs-discussion"
] | jbrambleDC | 4 |
STVIR/pysot | computer-vision | 6 | How to write a json file for a new dataset | I want to test trackers on VOT2019 dataset. Can you tell me how to write a json file for the new dataset? | closed | 2019-05-14T07:53:29Z | 2019-10-03T12:52:37Z | https://github.com/STVIR/pysot/issues/6 | [] | lawpdas | 3 |
nteract/papermill | jupyter | 780 | pip install --no-binary gives "No such file or directory" error | ## 🐛 Bug
Seeing the following error upon source installing version 2.5.0:
`FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-install-khq60faw/papermill_63f104d3558b47019b035cffeb75bf98/docs/requirements.txt'`
Command used: `pip install --no-binary papermill papermill==2.5.0`
Seems like the auto-generated `requirements.txt` file is at a different location than what's expected by the setup process. | open | 2024-02-15T22:24:43Z | 2024-02-15T22:29:38Z | https://github.com/nteract/papermill/issues/780 | [
"bug",
"help wanted"
] | tiffanymeits | 0 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 758 | How to use a return value from a function as the default value for a column? | Hi,
I originally asked this question on StackOverflow but got no answers, so I think maybe I could ask here for more details. My question is as follows:
I have used `flask-sqlalchemy` to create a model for my database
```python
class User(db.Model, UserMixin):
"""A user who has an account on the website."""
__tablename__ = 'users'
id = db.Column(db.Integer, primary_key=True, autoincrement=True)
first_name = db.Column(db.String, nullable=False)
last_name = db.Column(db.String, nullable=False)
# here are some irrelevant columns ...
image_file = db.Column(db.String, nullable=False, default='default.jpg')
```
Right now the default user profile picture is a hard-coded image saved in the `static/img` folder. However, I would like to use Gravatar, which uses a hash of the email address to create a GitHub-style identicon profile picture.
I have the function
```python
def avatar(self, size=128):
digest = md5(self.email.lower().encode('utf-8')).hexdigest()
return 'https://www.gravatar.com/avatar/{}?d=identicon&s={}'.format(digest, size)
```
I would like to use the return value from the `avatar()` function as the default image. I tried to pass `avatar` and `self.avatar` as the argument for `default=`. However, neither of them works.
Do you have any opinions or guides on how to solve this problem?
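From the SQLAlchemy docs on context-sensitive default functions, it looks like the default callable can receive an execution context exposing the other values of the pending INSERT. A sketch adapted to this model (the helper name is mine, and I have not verified it with Flask-SQLAlchemy specifically):

```python
from hashlib import md5

def default_avatar(context):
    """SQLAlchemy passes an execution context to default callables;
    get_current_parameters() exposes the other values of the pending INSERT."""
    email = context.get_current_parameters()['email']
    digest = md5(email.lower().encode('utf-8')).hexdigest()
    return f'https://www.gravatar.com/avatar/{digest}?d=identicon&s=128'

# image_file = db.Column(db.String, nullable=False, default=default_avatar)
```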
Thank you so much!!
Best,
Zion | closed | 2019-07-02T02:53:19Z | 2020-12-05T20:21:46Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/758 | [] | Zhenye-Na | 1 |
matplotlib/cheatsheets | matplotlib | 30 | `tight_layout` versus `constrained_layout` | https://github.com/matplotlib/cheatsheets/blob/329e0ba94d06c2a9f6af4600af9a25dcac6d9f5a/handout-tips.tex#L215 calls out `tight_layout` in particular. Is there a reason to not also call out `constrained_layout`? In general `constrained_layout` is more "automatic". | open | 2020-07-07T18:42:10Z | 2020-07-08T19:01:54Z | https://github.com/matplotlib/cheatsheets/issues/30 | [] | jklymak | 5 |
ranaroussi/yfinance | pandas | 2,303 | Getting Too Many Requests. Rate limited. Try after a while. | i am getting Too Many Requests. Rate limited. Try after a while. while trying
response = yfinance.Ticker("MSFT")
my traceback:
```
File "/usr/local/lib/python3.13/site-packages/yfinance/scrapers/quote.py", line 609, in _fetch_info
2025-02-20 17:31:31 result = self._fetch(proxy, modules=modules)
2025-02-20 17:31:31 File "/usr/local/lib/python3.13/site-packages/yfinance/scrapers/quote.py", line 587, in _fetch
2025-02-20 17:31:31 result = self._data.get_raw_json(_QUOTE_SUMMARY_URL_ + f"/{self._symbol}", user_agent_headers=self._data.user_agent_headers, params=params_dict, proxy=proxy)
2025-02-20 17:31:31 File "/usr/local/lib/python3.13/site-packages/yfinance/data.py", line 425, in get_raw_json
2025-02-20 17:31:31 response = self.get(url, user_agent_headers=user_agent_headers, params=params, proxy=proxy, timeout=timeout)
2025-02-20 17:31:31 File "/usr/local/lib/python3.13/site-packages/yfinance/utils.py", line 104, in wrapper
2025-02-20 17:31:31 result = func(*args, **kwargs)
2025-02-20 17:31:31 File "/usr/local/lib/python3.13/site-packages/yfinance/data.py", line 344, in get
2025-02-20 17:31:31 return self._make_request(url, request_method = self._session.get, user_agent_headers=user_agent_headers, params=params, proxy=proxy, timeout=timeout)
2025-02-20 17:31:31 ~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2025-02-20 17:31:31 File "/usr/local/lib/python3.13/site-packages/yfinance/utils.py", line 104, in wrapper
2025-02-20 17:31:31 result = func(*args, **kwargs)
2025-02-20 17:31:31 File "/usr/local/lib/python3.13/site-packages/yfinance/data.py", line 406, in _make_request
2025-02-20 17:31:31 raise YFRateLimitError()
2025-02-20 17:31:31 yfinance.exceptions.YFRateLimitError: Too Many Requests. Rate limited. Try after a while.
```
 | closed | 2025-02-20T12:06:42Z | 2025-02-20T22:26:10Z | https://github.com/ranaroussi/yfinance/issues/2303 | [] | kalyanakannan | 4 |
huggingface/peft | pytorch | 2,445 | Adding support for Conv1d | ### Feature request
Can you add support for `nn.Conv1d` in DoRA? Like this issue did for LoRA: https://github.com/huggingface/peft/issues/2241.
It would be very valuable for my master's thesis.
### Motivation
Yes I want to use DoRA along with LoRA
### Your contribution
Maybe | open | 2025-03-21T08:49:24Z | 2025-03-21T16:18:55Z | https://github.com/huggingface/peft/issues/2445 | [] | EskildAndersen | 1 |
allenai/allennlp | pytorch | 5,410 | Capture NLTK parsing errors in allennlp_models/common/ontonotes.py | OntoNotes in CoNLL-2012 format is frequently used for training and evaluation. However, it cannot be directly disseminated but needs to be locally re-built by every individual user, using two different pieces of third-party content and external scripts. In this constellation, the build scripts often break, and result in parsing errors.
Feature request:
- Instead of escalating parsing errors from NLTK and terminate, provide a more robust handling and produce a warning.
- This can be done by catching ValueErrors in line 347 in `_conll_rows_to_sentence`
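A sketch of the wrapper (generic Python; `safe_parse` is an illustrative helper, not AllenNLP code):

```python
import logging

logger = logging.getLogger(__name__)

def safe_parse(parse_fn, text):
    """Run a tree parser, downgrading ValueError to a warning instead of aborting."""
    try:
        return parse_fn(text)
    except ValueError:
        logger.warning("Could not parse tree, skipping sentence: %r", text)
        return None
```

In `_conll_rows_to_sentence` the NLTK `Tree.fromstring` call would be the `parse_fn`, and a `None` result would leave the parse field empty for that sentence.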
For system parameters and traceback of a real-world error, please see my original post @chiarcos in https://github.com/allenai/allennlp/issues/4065#issuecomment-919923373_ | closed | 2021-09-15T11:15:29Z | 2022-02-21T16:09:45Z | https://github.com/allenai/allennlp/issues/5410 | [
"stale"
] | chiarcos | 13 |
graphql-python/graphql-core | graphql | 105 | GraphQLUnionType documentation is incorrect | GraphQLUnionType currently gives the following example of its usage:
```python
class PetType(GraphQLUnionType):
name = 'Pet'
types = [DogType, CatType]
def resolve_type(self, value, _type):
if isinstance(value, Dog):
return DogType()
if isinstance(value, Cat):
return CatType()
```
I believe this documentation is incorrect; `PetType()` can't be instantiated since GraphQLUnionType's `__init__` takes name and types as required parameters.
Calling `PetType(name=PetType.name, ...)` is obviously terrible, and implementing an empty `__init__` which doesn't call `super()` feels unsafe.
Also, even if there's a way to define a UnionType via inheritance which I'm missing, I don't see why GraphQLUnionType example should be different from GraphQLInterfaceType example, which uses the normal function call API. | closed | 2020-08-28T14:56:59Z | 2020-08-28T17:05:23Z | https://github.com/graphql-python/graphql-core/issues/105 | [] | berekuk | 1 |
deepinsight/insightface | pytorch | 1,887 | How to use insightface to crop my own image please? | My work requires cropping face chips from my own image dataset. I just need to crop and align the faces from the images, no matter who the person is.
I have used the dlib library before, but it could not align the cropped faces (or maybe I just didn't find that feature). I also found the example images in this project: the people in the "Friends" image are all cropped and aligned. How can I detect and crop faces with insightface? Is https://github.com/deepinsight/insightface/blob/master/recognition/arcface_torch/inference.py the script I need, and how do I use it? Thanks. | open | 2022-01-17T06:47:44Z | 2023-10-19T04:06:26Z | https://github.com/deepinsight/insightface/issues/1887 | [] | Chauban | 1 |
scrapy/scrapy | python | 6,466 | Dropping Python 3.8 support | 3.8 support ends on 31 Oct, so I'm thinking about changes we can do after that. Looks like they are mostly related to typing.
* Drop Reppy which doesn't support Python 3.9+: #5226
* Adopt [PEP-585](https://peps.python.org/pep-0585/):
* Switch e.g. `List[T]` to `list[T]`
* Switch `typing.Foo` imports to `collections.abc.Foo` ones
* Maybe remove `--keep-runtime-typing` from the `pyupgrade` args, but that also changes unions to the `|` syntax in files with `from __future__ import annotations` and we would want to add those to all files with unions to keep the style consistent
* Bump the target version for `pyupgrade`
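For illustration, the PEP-585 switch amounts to (hypothetical function, not actual Scrapy code):

```python
# Before: typing-module generics
from typing import Dict, List

def count_old(rows: List[str]) -> Dict[str, int]:
    out: Dict[str, int] = {}
    for row in rows:
        out[row] = out.get(row, 0) + 1
    return out

# After: builtin generics (PEP 585, Python 3.9+)
def count_new(rows: list[str]) -> dict[str, int]:
    out: dict[str, int] = {}
    for row in rows:
        out[row] = out.get(row, 0) + 1
    return out
```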
I don't see `typing_extensions` imports that were added in 3.9 (most of them are 3.10).
It's possible to do the `--keep-runtime-typing` change in advance to reduce the diff. | closed | 2024-08-20T18:22:05Z | 2024-10-16T08:03:17Z | https://github.com/scrapy/scrapy/issues/6466 | [
"cleanup"
] | wRAR | 0 |
keras-team/keras | python | 20,071 | tf.data.Dataset Pipeline Preprocessing Custom Layer Recommendation | I am looking to create a number of custom preprocessing layers to be used in a TensorFlow `tf.data` pipeline. I initially assumed I could subclass `keras.Layer` and in `call` simply use any `keras.ops` operations. I only use python parameters statically for condition statements but otherwise use `keras.ops` for all commands (e.g. `keras.ops.fori_loop`). I can run the pipeline alone successfully (e.g. `iter(next(train_ds))`), however, when I try to train a model using `TensorFlow` backend it complains with several issues as it's trying to create a symbolic graph of my preprocess layers. These layers are not attached to the model- they are attached to the data pipeline via `map`. I had assumed the dataset pipeline would happen on the CPU but it seems that my layers are being mapped to the GPU in a TF graph. If I force everything to run on the CPU, everything runs fine but ideally I want the model training to happen on GPU and data pipeline to happen on CPU.
Are there any reference examples I could follow?
When I looked at the included preprocessing layers of `keras`, they all seemed to use `keras.backend.numpy` for operations (rather than `keras.ops`). I also noticed the TF-pipeline-safe layers subclass `TFDataLayer`, which isn't exposed in the public API. Is there a way to indicate to Keras that I want to run the entire preprocessing pipeline on the CPU?
Any help would be greatly appreciated.
Below are some layers that I implemented as reference (based on what I could find from both `keras` and `keras-cv`:
```python
from typing import Callable
import keras
from .defines import NestedTensorValue
def tf_keras_map(f, xs):
# NOTE: Workaround until (https://github.com/keras-team/keras/issues/20048)
import tensorflow as tf
xs = keras.tree.map_structure(tf.convert_to_tensor, xs)
def get_fn_output_signature(x):
out = f(x)
return keras.tree.map_structure(tf.TensorSpec.from_tensor, out)
# Grab single element unpacking and repacking single element
xe = tf.nest.pack_sequence_as(xs, [y[0] for y in tf.nest.flatten(xs)])
fn_output_signature = get_fn_output_signature(xe)
return tf.map_fn(f, xs, fn_output_signature=fn_output_signature)
class BaseAugmentation(keras.layers.Layer):
SAMPLES = "data"
LABELS = "labels"
TARGETS = "targets"
ALL_KEYS = (SAMPLES, LABELS, TARGETS)
TRANSFORMS = "transforms"
IS_DICT = "is_dict"
BATCHED = "is_batched"
USE_TARGETS = "use_targets"
NDIMS = 4 # Modify in subclass (includes batch size)
def __init__(
self,
seed: int | None = None,
auto_vectorize: bool = True,
data_format: str | None = None,
name: str | None = None,
**kwargs,
):
"""BaseAugmentation acts as a base class for various custom augmentation layers.
This class provides a common interface for augmenting samples and labels. In the future, we will
add support for segmentation and bounding boxes.
The only method that needs to be implemented by the subclass is
- augment_sample: Augment a single sample during training.
Optionally, you can implement the following methods:
- augment_label: Augment a single label during training.
- get_random_transformations: Returns a nested structure of random transformations that should be applied to the batch.
This is required to have unique transformations for each sample in the batch and maintain the same transformations for samples and labels.
- batch_augment: Augment a batch of samples and labels during training. Needed if layer requires access to all samples (e.g. CutMix).
By default, this method will coerce the input into a batch as well as a nested structure of inputs.
If auto_vectorize is set to True, the augment_sample and augment_label methods will be vectorized using keras.ops.vectorized_map.
Otherwise, it will use keras.ops.map which runs sequentially.
Args:
seed (int | None): Random seed. Defaults to None.
auto_vectorize (bool): If True, augment_sample and augment_label methods will be vectorized using keras.ops.vectorized_map.
Otherwise, it will use keras.ops.map which runs sequentially. Defaults to True.
data_format (str | None): Data format. Defaults to None. Will use keras.backend.image_data_format() if None.
name (str | None): Layer name. Defaults to None.
"""
super().__init__(name=name, **kwargs)
        self.seed = seed
        self._random_generator = keras.random.SeedGenerator(seed)
self.data_format = data_format or keras.backend.image_data_format()
self.built = True
self.training = True
self.auto_vectorize = auto_vectorize
def _map_fn(
self, func: Callable[[NestedTensorValue], keras.KerasTensor], inputs: NestedTensorValue
) -> keras.KerasTensor:
"""Calls appropriate mapping function with given inputs.
Args:
func (Callable): Function to be mapped.
inputs (dict): Dictionary containing inputs.
Returns:
KerasTensor: Augmented samples or labels
"""
if self.auto_vectorize:
return keras.ops.vectorized_map(func, inputs)
# NOTE: Workaround until (https://github.com/keras-team/keras/issues/20048)
if keras.backend.backend() == "tensorflow":
return tf_keras_map(func, inputs)
return keras.ops.map(func, inputs)
def call(self, inputs: NestedTensorValue, training: bool = True) -> NestedTensorValue:
"""This method will serve as the main entry point for the layer. It will handle the input formatting and output formatting.
Args:
inputs (NestedTensorValue): Inputs to be augmented.
training (bool): Whether the model is training or not.
Returns:
NestedTensorValue: Augmented samples or labels.
"""
self.training = training
inputs, metadata = self._format_inputs(inputs)
return self._format_outputs(self.batch_augment(inputs), metadata)
def augment_sample(self, inputs: NestedTensorValue) -> keras.KerasTensor:
"""Augment a single sample during training.
!!! note
This method should be implemented by the subclass.
Args:
input(NestedTensorValue): Single sample.
Returns:
KerasTensor: Augmented sample.
"""
return inputs[self.SAMPLES]
def augment_samples(self, inputs: NestedTensorValue) -> keras.KerasTensor:
"""Augment a batch of samples during training.
Args:
input(NestedTensorValue): Batch of samples.
Returns:
KerasTensor: Augmented batch of samples.
"""
return self._map_fn(self.augment_sample, inputs=inputs)
def augment_label(self, inputs: NestedTensorValue) -> keras.KerasTensor:
"""Augment a single label during training.
!!! note
Implement this method if you need to augment labels.
Args:
input(NestedTensorValue): Single label.
Returns:
keras.KerasTensor: Augmented label.
"""
return inputs[self.LABELS]
def augment_labels(self, inputs: NestedTensorValue) -> keras.KerasTensor:
"""Augment a batch of labels during training.
Args:
inputs(NestedTensorValue): Batch of labels.
Returns:
keras.KerasTensor: Augmented batch of labels.
"""
return self._map_fn(self.augment_label, inputs=inputs)
def get_random_transformations(self, input_shape: tuple[int, ...]) -> NestedTensorValue:
"""Generates random transformations needed for augmenting samples and labels.
Args:
input_shape (tuple[int,...]): Shape of the input (N, ...).
Returns:
NestedTensorValue: Batch of random transformations.
!!! note
This method should be implemented by the subclass if the layer requires random transformations.
"""
return keras.ops.arange(input_shape[0])
def batch_augment(self, inputs: NestedTensorValue) -> NestedTensorValue:
"""Handles processing entire batch of samples and labels in a nested structure.
Responsible for calling augment_samples and augment_labels.
Args:
inputs (NestedTensorValue): Batch of samples and labels.
Returns:
NestedTensorValue: Augmented batch of samples and labels.
"""
samples = inputs.get(self.SAMPLES, None)
labels = inputs.get(self.LABELS, None)
result = {}
transformations = self.get_random_transformations(input_shape=keras.ops.shape(samples))
result[self.SAMPLES] = self.augment_samples(inputs={self.SAMPLES: samples, self.TRANSFORMS: transformations})
if labels is not None:
result[self.LABELS] = self.augment_labels(inputs={self.LABELS: labels, self.TRANSFORMS: transformations})
# END IF
# preserve any additional inputs unmodified by this layer.
for key in inputs.keys() - result.keys():
result[key] = inputs[key]
return result
def _format_inputs(self, inputs: NestedTensorValue) -> tuple[NestedTensorValue, dict[str, bool]]:
"""Validate and force inputs to be batched and placed in structured format.
Args:
inputs (NestedTensorValue): Inputs to be formatted.
Returns:
tuple[NestedTensorValue, dict[str, bool]]: Formatted inputs and metadata.
"""
metadata = {self.IS_DICT: True, self.USE_TARGETS: False, self.BATCHED: True}
if not isinstance(inputs, dict):
inputs = {self.SAMPLES: inputs}
metadata[self.IS_DICT] = False
samples = inputs.get(self.SAMPLES, None)
if inputs.get(self.SAMPLES) is None:
raise ValueError(f"Expect the inputs to have key {self.SAMPLES}. Got keys: {list(inputs.keys())}")
# END IF
if inputs[self.SAMPLES].shape.rank != self.NDIMS - 1 and samples.shape.rank != self.NDIMS:
raise ValueError(f"Invalid input shape: {samples.shape}")
# END IF
if inputs[self.SAMPLES].shape.rank == self.NDIMS - 1:
metadata[self.BATCHED] = False
# Expand dims to make it batched for keys of interest
for key in set(self.ALL_KEYS).intersection(inputs.keys()):
if inputs[key] is not None:
inputs[key] = keras.ops.expand_dims(inputs[key], axis=0)
# END IF
# END FOR
# END IF
return inputs, metadata
def _format_outputs(self, output: NestedTensorValue, metadata: dict[str, bool]) -> NestedTensorValue:
"""Format the output to match the initial input format.
Args:
output: Output to be formatted.
metadata: Metadata used for formatting.
Returns:
Output in the original format.
"""
if not metadata[self.BATCHED]:
for key in set(self.ALL_KEYS).intersection(output.keys()):
if output[key] is not None: # check if tensor
output[key] = keras.ops.squeeze(output[key], axis=0)
# END IF
# END FOR
# END IF
if not metadata[self.IS_DICT]:
return output[self.SAMPLES]
if metadata[self.USE_TARGETS]:
output[self.TARGETS] = output[self.LABELS]
del output[self.LABELS]
return output
def compute_output_shape(self, input_shape, *args, **kwargs):
"""By default assumes the shape of the input is the same as the output.
Args:
input_shape: Shape of the input.
Returns:
tuple: Shape of the output
!!! note
This method should be implemented by the subclass if the output shape is different from the input shape.
"""
return input_shape
def get_config(self):
"""Serialize the layer configuration."""
config = super().get_config()
config.update(
{
"seed": self.seed,
"auto_vectorize": self.auto_vectorize,
"data_format": self.data_format,
}
)
return config
class BaseAugmentation1D(BaseAugmentation):
NDIMS = 3 # (N, T, C) or (N, C, T)
def __init__(self, **kwargs):
"""BaseAugmentation1D acts as a base class for various custom augmentation layers.
This class provides a common interface for augmenting samples and labels. In the future, we will
add support for segmentation and 1D bounding boxes.
The only method that needs to be implemented by the subclass is
- augment_sample: Augment a single sample during training.
Optionally, you can implement the following methods:
- augment_label: Augment a single label during training.
- get_random_transformations: Returns a nested structure of random transformations that should be applied to the batch.
This is required to have unique transformations for each sample in the batch and maintain the same transformations for samples and labels.
- batch_augment: Augment a batch of samples and labels during training. Needed if layer requires access to all samples (e.g. CutMix).
By default, this method will coerce the input into a batch as well as a nested structure of inputs.
If auto_vectorize is set to True, the augment_sample and augment_label methods will be vectorized using keras.ops.vectorized_map.
Otherwise, it will use keras.ops.map which runs sequentially.
Example:
```python
class NormalizeLayer1D(BaseAugmentation1D):
def __init__(self, **kwargs):
...
def augment_sample(self, inputs):
sample = inputs["data"]
mu = keras.ops.mean()
std = keras.ops.std()
return (sample - mu) / (std + self.epsilon)
x = np.random.rand(100, 3)
lyr = NormalizeLayer(...)
y = lyr(x, training=True)
```
"""
super().__init__(**kwargs)
if self.data_format == "channels_first":
self.data_axis = -1
self.ch_axis = -2
else:
self.data_axis = -2
self.ch_axis = -1
# END IF
class BaseAugmentation2D(BaseAugmentation):
NDIMS = 4 # (N, H, W, C) or (N, C, H, W)
def __init__(self, **kwargs):
"""BaseAugmentation2D acts as a base class for various custom augmentation layers.
This class provides a common interface for augmenting samples and labels. In the future, we will
add support for segmentation and 1D bounding boxes.
The only method that needs to be implemented by the subclass is
- augment_sample: Augment a single sample during training.
Optionally, you can implement the following methods:
- augment_label: Augment a single label during training.
- get_random_transformations: Returns a nested structure of random transformations that should be applied to the batch.
This is required to have unique transformations for each sample in the batch and maintain the same transformations for samples and labels.
- batch_augment: Augment a batch of samples and labels during training. Needed if layer requires access to all samples (e.g. CutMix).
By default, this method will coerce the input into a batch as well as a nested structure of inputs.
If auto_vectorize is set to True, the augment_sample and augment_label methods will be vectorized using keras.ops.vectorized_map.
Otherwise, it will use keras.ops.map which runs sequentially.
Example:
```python
class NormalizeLayer2D(BaseAugmentation2D):
def __init__(self, name=None, **kwargs):
...
def augment_sample(self, inputs):
sample = inputs["data"]
mu = keras.ops.mean()
std = keras.ops.std()
return (sample - mu) / (std + self.epsilon)
x = np.random.rand(32, 32, 3)
lyr = NormalizeLayer(...)
y = lyr(x, training=True)
```
"""
super().__init__(**kwargs)
if self.data_format == "channels_first":
self.ch_axis = -3
self.height_axis = -2
self.width_axis = -1
else:
self.ch_axis = -1
self.height_axis = -3
self.width_axis = -2
# END IF
class RandomNoiseDistortion1D(BaseAugmentation1D):
sample_rate: float
frequency: tuple[float, float]
amplitude: tuple[float, float]
noise_type: str
def __init__(
self,
sample_rate: float = 1,
frequency: float | tuple[float, float] = 100,
amplitude: float | tuple[float, float] = 0.1,
noise_type: str = "normal",
**kwargs,
):
"""Apply random noise distortion to the 1D input.
Noise points are first generated at given frequency resolution with amplitude picked based on noise_type.
The noise points are then interpolated to match the input duration and added to the input.
Args:
sample_rate (float): Sample rate of the input.
frequency (float|tuple[float,float]): Frequency of the noise in Hz. If tuple, frequency is randomly picked between the values.
amplitude (float|tuple[float,float]): Amplitude of the noise. If tuple, amplitude is randomly picked between the values.
noise_type (str): Type of noise to generate. Currently only "normal" is supported.
Example:
```python
sample_rate = 100 # Hz
duration = 3*sample_rate # 3 seconds
sig_freq = 10 # Hz
sig_amp = 1 # Signal amplitude
noise_freq = (1, 2) # Noise frequency range
noise_amp = (1, 2) # Noise amplitude range
x = sig_amp*np.sin(2*np.pi*sig_freq*np.arange(duration)/sample_rate).reshape(-1, 1)
lyr = RandomNoiseDistortion1D(sample_rate=sample_rate, frequency=noise_freq, amplitude=noise_amp)
y = lyr(x, training=True)
```
"""
super().__init__(**kwargs)
self.sample_rate = sample_rate
self.frequency = parse_factor(frequency, min_value=None, max_value=sample_rate / 2, param_name="frequency")
self.amplitude = parse_factor(amplitude, min_value=None, max_value=None, param_name="amplitude")
self.noise_type = noise_type
def get_random_transformations(self, input_shape: tuple[int, int, int]):
"""Generate noise distortion tensor
Args:
input_shape (tuple[int, ...]): Input shape.
Returns:
dict: Dictionary containing the noise tensor.
"""
batch_size = input_shape[0]
duration_size = input_shape[self.data_axis]
ch_size = input_shape[self.ch_axis]
# Add one period to the noise and clip later
if self.frequency[0] == self.frequency[1]:
frequency = self.frequency[0]
else:
frequency = keras.random.uniform(
shape=(), minval=self.frequency[0], maxval=self.frequency[1], seed=self._random_generator
)
if self.amplitude[0] == self.amplitude[1]:
amplitude = self.amplitude[0]
else:
amplitude = keras.random.uniform(
shape=(), minval=self.amplitude[0], maxval=self.amplitude[1], seed=self._random_generator
)
noise_duration = keras.ops.cast((duration_size / self.sample_rate) * frequency + frequency, dtype="int32")
if self.data_format == "channels_first":
noise_shape = (batch_size, 1, ch_size, noise_duration)
else:
noise_shape = (batch_size, 1, noise_duration, ch_size)
if self.noise_type == "normal":
noise_pts = keras.random.normal(noise_shape, stddev=amplitude, seed=self._random_generator)
else:
raise ValueError(f"Invalid noise shape: {self.noise_type}")
# keras.ops doesnt contain any low-level interpolate. So we leverage the
# image module and fix height to 1 as workaround
noise = keras.ops.image.resize(
noise_pts,
size=(1, duration_size),
interpolation="bicubic",
crop_to_aspect_ratio=False,
data_format=self.data_format,
)
# Remove height dimension
noise = keras.ops.squeeze(noise, axis=1)
return {"noise": noise}
def augment_samples(self, inputs) -> keras.KerasTensor:
"""Augment all samples in the batch as it's faster."""
samples = inputs[self.SAMPLES]
if self.training:
noise = inputs[self.TRANSFORMS]["noise"]
return samples + noise
return samples
def get_config(self):
"""Serialize the layer configuration to a JSON-compatible dictionary."""
config = super().get_config()
config.update(
{
"sample_rate": self.sample,
"frequency": self.frequency,
"amplitude": self.amplitude,
"noise_type": self.noise_type,
}
)
return config
```
| closed | 2024-07-31T18:13:56Z | 2024-08-02T19:50:08Z | https://github.com/keras-team/keras/issues/20071 | [
"type:support"
] | apage224 | 7 |
Farama-Foundation/Gymnasium | api | 1,005 | [Proposal] Add metadata field to VectorEnv | ### Proposal
Hi,
I noticed that `VectorEnv` currently does not have a field `metadata` like `Env` does. Is there a particular reason for this? If not, I propose adding it since, currently, there is no way of specifying, e.g., rendering FPS for `VectorEnv` instances.
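A minimal sketch of the proposed field (plain Python stand-ins; the key names are assumed to mirror `gymnasium.Env.metadata`):

```python
class VectorEnv:  # stand-in for gymnasium.vector.VectorEnv
    """Proposed: a class-level metadata dict, as on gymnasium.Env."""
    metadata: dict = {"render_modes": [], "render_fps": None}

class MyVectorEnv(VectorEnv):
    # subclasses would override it to declare, e.g., their rendering FPS
    metadata = {"render_modes": ["rgb_array"], "render_fps": 60}
```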
Best,
Tim
### Motivation
_No response_
### Pitch
_No response_
### Alternatives
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2024-04-10T12:46:48Z | 2024-04-11T09:58:55Z | https://github.com/Farama-Foundation/Gymnasium/issues/1005 | [
"enhancement"
] | TimSchneider42 | 1 |
python-visualization/folium | data-visualization | 1,505 | Heatmap erratic behaviors | **Description of the bug:**
1) If we want to include weights for the heatmap, we have to normalize (log2 works well) our weights column otherwise it won't produce expected results. This could be added as a warning in the documentation.
2) The distance between points is considered much more significant than their weight. For example, 2 points close to each other with a weight of 1 and 2 respectively will be a "hotter area" than a single point with a weight equal to 5.
3) Zooming in or out produces different results (see screenshots below)
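On point 1, the normalization can be as simple as a log transform applied to the weight column before passing it to `HeatMap` (plain Python, illustrative values):

```python
import math

weights = [1, 3, 6, 100, 1000, 48000]       # raw weights, e.g. volumes per city
norm = [math.log2(w + 1) for w in weights]  # +1 keeps zero weights finite
```

This compresses the dynamic range so a single heavy point is not drowned out by clusters of light ones.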
**For reproducing purposes**
```
# Data to test
test = pd.DataFrame(np.array([['Rotterdam', 'NL', 51.9228934, 4.4631786, 1.0],
['Alkmaar', 'NL', 52.63235473632812, 4.750678062438965,
1.584962500721156],
['Augsburg', 'DE', 48.36598968505859, 10.89304447174072,
6.643856189774724],
['Barcelona', 'ES', 41.3828939, 2.1774322, 9.643856189774725],
['Beauvais', 'FR', 49.42929840087891, 2.08105993270874,
10.965784284662087]], dtype=object), columns=['City', 'Country', 'Latitude', 'Longitude', 'Weight'])
# Define the function to create the heatmap
def test_heatmap(df_cog):
# Instantiate map
m = folium.Map(location=df_cog[['Latitude', 'Longitude']].mean(),
fit_bounds=[[df_cog['Latitude'].min(),
df_cog['Longitude'].min()],
[df_cog['Latitude'].max(),
df_cog['Longitude'].max()]])
    plugins.HeatMap(df_cog[['Latitude', 'Longitude', 'Weight']], name='Heatmap').add_to(m)
# Return the map
return m
# Plot the map
m_1 = test_heatmap(df_cog=test)
m_1
```
**Expected behavior**
I would expect an area to be shown as hot or cold depending solely on the sum of the weight of the points inside this area, and not the count of points.
**Environment**
- Browser: Chrome
- Jupyter Notebook
- Python version: 3.9.4
- folium version: 0.12.1
Item 3) screenshots:
look at Polska and Katowice




| closed | 2021-08-19T04:30:05Z | 2022-11-18T11:22:55Z | https://github.com/python-visualization/folium/issues/1505 | [] | BriceChivu | 1 |
mljar/mljar-supervised | scikit-learn | 51 | Add LightGBM support | The lightgbm algorithm is already available in the code. Make sure that it works with:
- binary classification
- multiclass classification
- regression | closed | 2020-04-09T12:09:47Z | 2020-04-18T08:02:24Z | https://github.com/mljar/mljar-supervised/issues/51 | [
"enhancement"
] | pplonski | 1 |
docarray/docarray | fastapi | 1,116 | DocumentArray proto optimization | # Context
At the moment we save in the `DocumentProto`, under the `docarray_type` string, the exact type of each field (this string comes from the `_registed_proto` decorator). This is a way to "save" the schema of our Document so it can be loaded back after deserialization. Keep in mind that this adds extra data to the Document proto payload.
In a DocumentArray all of the documents are homogeneous (i.e., they follow the same schema). Therefore it is unnecessary to save this schema information more than once.
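The deduplication idea can be sketched like this (plain dicts stand in for Documents; this is not the actual protobuf layout):

```python
def serialize_array(docs):
    """Store the shared field/type schema once, then only per-document values."""
    if not docs:
        return {"schema": [], "rows": []}
    schema = [(name, type(value).__name__) for name, value in docs[0].items()]
    rows = [[doc[name] for name, _ in schema] for doc in docs]
    return {"schema": schema, "rows": rows}
```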
We should optimize the proto sterilization of DocumentArray by only saving this data once | closed | 2023-02-09T14:03:01Z | 2023-02-09T15:09:09Z | https://github.com/docarray/docarray/issues/1116 | [] | samsja | 4 |
iterative/dvc | data-science | 10,179 | `stage`: list/show/get stage details | CLI command and API method to get all stage info (cmd, params, deps, outs, etc.).
* CLI command is useful for seeing the resolved info for stages using interpolation
* API method is useful for getting paths so you don't have to manually align them between code and dvc.yaml
Loosely related to https://github.com/iterative/vscode-dvc/issues/5048:
> With a complicated DVC pipeline, with dynamic parametrized dependencies it's not easy to get an exact command that is needed to run a specific stage under debugger outside of DVC.
Example:
```sh
$ cat dvc.yaml
stages:
train:
cmd: python src/train.py ${data_path}/features ${model_path} ${train}
deps:
- ${data_path}/features
- src/train.py
params:
- train
outs:
- ${model_path}
- dvclive:
cache: false
$ dvc stage show train --json
{
"cmd": "python src/train.py data/features model.pkl min_split=0.2 n_est=50 seed=101",
"deps": [
"data/features",
"model.pkl"
],
"params": {
"train": {
"min_split": 0.2,
"n_est": 50,
"seed": 101,
}
},
"outs": {
"model.pkl": {},
"dvclive": {"cache": false}
}
}
```
API usage:
```python
from dvc.api import stage_show
outs = stage_show("train").outs
with open(outs[0].path, "w") as f:
f.write(model)
``` | closed | 2023-12-16T12:48:26Z | 2024-03-04T15:57:18Z | https://github.com/iterative/dvc/issues/10179 | [
"A: api",
"A: cli",
"A: pipelines"
] | dberenbaum | 3 |
microsoft/nni | tensorflow | 5,235 | maxTrialNumerPerGpu does not seem to work | **Describe the issue**:
When I tried to train the mnist example in remote mode, there was always only **one** trial running on the single GPU I have, and all the other trials were **waiting**. I found several issues, such as #608 and #2415, that mention this; I also set `useActiveGpu` to `True` and `maxTrialNumberPerGpu` to a value greater than 1, but neither helped. Please tell me how to deal with it, thanks!
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc): remote
- Client OS: Linux
- Server OS (for remote mode only): Linux
- Python version: 3.7.13
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
**Configuration**:
config.yaml, where ```test.py``` is the mnist example:
```
searchSpaceFile: _search_space.json
trialCommand: python3 test.py
trialConcurrency: 16
maxTrialNumber: 64
nniManagerIp: ...
trialGpuNumber: 1
tuner:
name: Anneal
classArgs:
optimize_mode: maximize
trainingService:
platform: remote
machineList:
- host:
user: ubuntu
ssh_key_file: ...
pythonPath: /home/ubuntu/anaconda3/envs/pdecoder/bin/
port: 22
useActiveGpu: True
maxTrialNumberPerGpu: 4
gpuIndices: 0
```
And the script ```train_nni.py``` used to run the experiment:
```
import yaml

import nni
from nni.experiment import Experiment

experiment_config = yaml.load(open('config_remote.yaml', 'r'), Loader=yaml.FullLoader)
temp = nni.experiment.ExperimentConfig(None, **experiment_config)
experiment = Experiment(temp)
experiment.run(8888)
```
**How to reproduce it?**: ```python train_nni.py``` | open | 2022-11-20T18:24:04Z | 2023-04-26T04:00:06Z | https://github.com/microsoft/nni/issues/5235 | [
"gpu scheduler issue"
] | zzzzzx-1115 | 3 |
plotly/dash | plotly | 2,356 | Remove sourcemap links from bundles when we publish without sourcemaps | Hi,
when installing dash-player (versions > 1.0), the file /dash_player/dash_player.min.js.map is somehow missing. I installed the package via Poetry, which uses pip under the hood to install. The installation completed without errors or warnings.
For now, I worked around it by downloading the zipped package and taking the file from there.
| open | 2022-11-24T12:09:45Z | 2024-08-13T19:23:13Z | https://github.com/plotly/dash/issues/2356 | [
"infrastructure",
"feature",
"P3"
] | afey89 | 5 |
huggingface/datasets | computer-vision | 7,399 | Synchronize parameters for various datasets | ### Describe the bug
[IterableDatasetDict](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/main_classes#datasets.IterableDatasetDict.map) map function is missing the `desc` parameter. You can see the equivalent map function for [Dataset here](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/main_classes#datasets.Dataset.map).
There might be other parameters missing - I haven't checked.
### Steps to reproduce the bug
```python
from datasets import Dataset, IterableDataset, IterableDatasetDict

ds = IterableDatasetDict({"train": Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=3),
                          "validate": Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=3)})

for d in ds["train"]:
    print(d)

ds = ds.map(lambda x: {k: v+1 for k, v in x.items()}, desc="increment")

for d in ds["train"]:
    print(d)
```
### Expected behavior
The description parameter should be available for all datasets (or none).
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.11.11
- `huggingface_hub` version: 0.28.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.9.0 | open | 2025-02-14T09:15:11Z | 2025-02-19T11:50:29Z | https://github.com/huggingface/datasets/issues/7399 | [] | grofte | 2 |
Colin-b/pytest_httpx | pytest | 17 | mocked but not requested | Hi! I'm trying your pytest fixture, unfortunately I get errors like this:
```
AssertionError: The following responses are mocked but not requested: [(b'HTTP/1.1', 200, b'', [], <httpx._content_streams.ByteStream object at 0x7f05af8d0160>), (b'HTTP/1.1', 200, b'', [], <httpx._content_streams.ByteStream object at 0x7f05af8d0358>)]
```
My tests look like this:
```python
import pytest
from pytest_httpx import httpx_mock
from my_package import my_module
@pytest.mark.asyncio
async def test_get_user_from_sso_with_empty_cookie(httpx_mock):
httpx_mock.add_response()
with pytest.raises(ConnectionError):
await my_module.my_function(cookie="")
@pytest.mark.asyncio
async def test_get_user_from_sso_with_missing_cookie(httpx_mock):
httpx_mock.add_response()
with pytest.raises(ConnectionError):
await my_module.my_function(cookie=None)
```
`my_module.my_function` is then using HTTPX async clients to send requests to another service.
Any idea why this is happening?
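One likely explanation (a guess from the error text, not a confirmed diagnosis): pytest_httpx checks at the end of each test that every mocked response was actually requested, and since `my_function` raises `ConnectionError` before sending anything, the `add_response()` mocks are never consumed. If leaving them unused is intentional, the check can be relaxed by overriding a fixture in `conftest.py` (the fixture name below is taken from pytest_httpx's documentation; whether your installed version supports it should be verified):

```python
import pytest

# Relax pytest_httpx's end-of-test assertion that every mocked response
# must have been requested at least once.
@pytest.fixture
def assert_all_responses_were_requested() -> bool:
    return False
```

The simpler fix, of course, is to drop the `add_response()` calls from tests where no request is ever expected to be sent.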
I'm using HTTPX v0.13.1 and pytest_httpx v0.3.0 | closed | 2020-05-26T13:49:29Z | 2020-05-27T08:16:41Z | https://github.com/Colin-b/pytest_httpx/issues/17 | [
"invalid"
] | pawamoy | 1 |
sqlalchemy/alembic | sqlalchemy | 418 | Migrate model with multi-column UniqueConstraint | **Migrated issue, originally created by Felipe Cavalcanti ([@felipej](https://github.com/felipej))**
Hi, I'm trying to create a migration for the following model:
```
class Function(db.Model):
__tablename__ = 'function'
__table_args__ = tuple(UniqueConstraint('name', 'namespace', 'revision',
name='name_namespace_revision_unique_constraint'))
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(254), nullable=False)
code = db.Column(db.Text, nullable=False)
namespace = db.Column(db.String(254), default="all", nullable=False)
revision = db.Column(db.String(65), nullable=False)
created_at = db.Column(db.Date, default=_get_date)
updated_at = db.Column(db.Date, onupdate=_get_date)
def __init__(self, name, namespace, code, revision):
self.name = name
self.namespace = namespace
self.code = code
self.revision = revision
```
Alembic is generating the following migration for this model:
```
"""initial migration
Revision ID: 0485d7255905
Revises:
Create Date: 2017-03-01 17:16:07.538631
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = '0485d7255905'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('function',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('name', sa.String(length=254), nullable=False),
sa.Column('code', sa.Text(), nullable=False),
sa.Column('namespace', sa.String(length=254), nullable=False),
sa.Column('revision', sa.String(length=65), nullable=False),
sa.Column('created_at', sa.Date(), nullable=True),
sa.Column('updated_at', sa.Date(), nullable=True),
sa.PrimaryKeyConstraint('id')
)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table('function')
# ### end Alembic commands ###
```
note that the UniqueConstraint ```name_namespace_revision_unique_constraint``` is not being generated in the migration.
Additional info:
I'm using Alembic and Python 3.6.
If I manually add the UniqueConstraint creation in the migration, the next time I run ```db migrate``` a migration deleting this unique constraint is generated.
Any hints?
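A likely culprit (a guess; not verified against your exact SQLAlchemy version): `tuple(...)` called on a single object *iterates* it, and SQLAlchemy constraints are iterable, so `__table_args__ = tuple(UniqueConstraint(...))` ends up holding the constraint's contents rather than the constraint itself, and autogenerate never sees it. A dependency-free illustration of `tuple(x)` versus the one-element tuple `(x,)`:

```python
class IterableConstraint:
    """Stand-in for a constraint object that is iterable over its columns
    (SQLAlchemy constraints behave similarly); purely illustrative."""
    def __init__(self, *columns, name=None):
        self.columns, self.name = columns, name

    def __iter__(self):
        return iter(self.columns)

c = IterableConstraint("name", "namespace", "revision", name="uq_demo")
print(tuple(c))  # ('name', 'namespace', 'revision') -- the constraint object is lost
print((c,))      # a one-element tuple that still contains the constraint itself
```

If that is what is happening here, the usual spelling is a one-element tuple literal: `__table_args__ = (UniqueConstraint('name', 'namespace', 'revision', name='name_namespace_revision_unique_constraint'),)`.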
| closed | 2017-03-02T21:55:59Z | 2017-03-03T01:40:22Z | https://github.com/sqlalchemy/alembic/issues/418 | [
"bug"
] | sqlalchemy-bot | 4 |
ipython/ipython | data-science | 14,502 | Allow setting $SHELL for ! and !! to print each shell command that's run with set -x | How could one set `$SHELL` for `!` and `!!` (or could IPython allow it) so that each shell command that is run is printed with `set -x`?
```python
import os
if ' -x' not in os.environ['SHELL']:
_SHELL = os.environ['SHELL']
os.environ['SHELL'] = f'{_SHELL} -x'
%env PS4='+ '
display(dict(_SHELL=os.environ['SHELL']))
#%env SHELL
!echo "SHELL:=$SHELL"
!echo a
!echo b
```
Expected output:
```sh
{'SHELL': '/bin/bash -x'}
+ echo SHELL:="/bin/bash -x"
SHELL:=/bin/bash/ -x
+ echo a
a
+ echo b
b
```
Actual output:
```sh
{'SHELL': '/bin/bash -x'}
SHELL:=/bin/bash/ -x
a
b
```
Workarounds:
- monkeypatch `get_ipython().system`?:
```python
get_ipython().system = lambda x: ...
```
- prefix every command with `set -x;`:
```sh
!set -x; echo -e "a\nb\nc"
!set -x; python -m site
``` | open | 2024-08-24T18:19:32Z | 2024-08-24T21:24:29Z | https://github.com/ipython/ipython/issues/14502 | [] | westurner | 2 |
sherlock-project/sherlock | python | 1,611 | Missing TikTok results |
- [X ] I'm reporting a bug in Sherlock's functionality
- [ X] The bug I'm reporting is not a false positive or a false negative
- [ X] I've verified that I'm running the latest version of Sherlock
- [ X] I've checked for similar bug reports including closed ones
- [ X] I've checked for pull requests that attempt to fix this bug
WRITE DESCRIPTION HERE
Using the last version of sherlock the search for TikTok users is missing.
| closed | 2022-11-17T18:59:33Z | 2023-02-16T17:16:46Z | https://github.com/sherlock-project/sherlock/issues/1611 | [
"bug"
] | TVikg | 2 |
miguelgrinberg/microblog | flask | 130 | Best way to overcome the mysql error "Data too long for column" | Hi,
Whilst my website works fine locally running on sqllite, I'm having less success committing the same items to the mysql db on my ubuntu server. One of my models looked as follows when I first created it in the mysql db:
class Paper(MediaMixin, db.Model):
id = db.Column(db.Integer, primary_key=True)
content_path = db.Column(db.String(64))
title = db.Column(db.String(64), index=True)
abstract = db.Column(db.String(64))
coauthors = db.Column(db.String(64), index=True)
When I tried to create some Paper items with particularly long content_path or abstract fields, I got the "data too long for column" error as indicated in the title. This made sense to me at first as I'd declared these fields as having a max of 64 char, so increased them to some arbitrary large number:
class Paper(MediaMixin, db.Model):
id = db.Column(db.Integer, primary_key=True)
content_path = db.Column(db.String(1048576))
title = db.Column(db.String(64), index=True)
abstract = db.Column(db.String(1048576))
coauthors = db.Column(db.String(64), index=True)
Unfortunately however, after migrating and committing this change to github and then pulling and upgrading on my ubuntu server in an attempt to make the analogous change to the mysql db, I got the same "data too long" error when attempting to upload my Paper model items. This leads me to ask two questions:
1. Is there something else I need to do for my mysql db to pickup this model change?
2. What's the best practice for getting around this 'data too long' errors, without truncating the data? I've seen a few similar questions on the web and have tried a few things (e.g. using TEXT datatype, changing the migration.env file to have compare_type=True), but nothing seems to be working.
Any help would be very much appreciated!
Cheers,
Paul | closed | 2018-10-21T19:11:10Z | 2020-09-23T16:06:39Z | https://github.com/miguelgrinberg/microblog/issues/130 | [
"question"
] | PaulGilmartin | 20 |
tensorly/tensorly | numpy | 72 | Fix modes in PARAFAC | Hey everyone. Just wanted to add a feature idea here.
When using PARAFAC to process EEM data, it is very common to fix two modes (namely the modes for emission and excitation) to those values previously generated on a calibration dataset.
Thereby it is possible to estimate only the remaining mode (the concentration mode) and apply scaling factors that have been computed on the calibration data.
I added this possibility on a personal fork of tensorly and got the expected outcome on a test dataset. However, I'm not sure if my implementation will uphold your coding standards here, since it basically just overwrites the values of a mode with the given values after each iteration.
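For reference, the overwrite-after-each-update idea can be sketched independently of tensorly's internals (everything below, including the function name and the `update_factor` callback, is invented for illustration and is not tensorly API):

```python
import numpy as np

def als_sweep_with_fixed_modes(factors, fixed, update_factor):
    """One ALS-style sweep in which selected factor matrices stay pinned
    to known values (e.g. calibration emission/excitation loadings).
    `fixed` maps a mode index to its fixed matrix; `update_factor(mode,
    factors)` returns a freshly estimated factor for that mode."""
    for mode in range(len(factors)):
        factors[mode] = update_factor(mode, factors)
        if mode in fixed:
            factors[mode] = fixed[mode]  # overwrite: keep calibrated loadings
    return factors

emission = np.ones((4, 2))  # pretend these are calibration loadings
factors = [np.zeros((4, 2)), np.zeros((3, 2)), np.zeros((5, 2))]
out = als_sweep_with_fixed_modes(
    factors, fixed={0: emission},
    update_factor=lambda mode, f: np.full_like(f[mode], 7.0))

print(np.array_equal(out[0], emission))  # True: mode 0 stayed fixed
print(out[1][0, 0], out[2][0, 0])        # 7.0 7.0: free modes were updated
```

A cleaner upstream API would probably take the fixed factors as a keyword argument and skip their update entirely, but the overwrite variant above reproduces the behaviour described here.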
Also many other constraints (besides the non-negativity) are usually discussed together with PARAFAC, depending on the field of application. E.g. these include orthogonality and unimodality constraints on the loading matrices or the possibility to fix single components in a mode.
I think it is worth working on those additions, especially since for PARAFAC there isn't any good free alternative to the well-known MATLAB toolboxes.
Kind Greetings and thanks for your good work so far,
Gordon | closed | 2018-08-31T12:35:21Z | 2020-06-22T14:08:47Z | https://github.com/tensorly/tensorly/issues/72 | [
"enhancement"
] | gboeer | 4 |
marimo-team/marimo | data-visualization | 4,093 | Opened a "shield" and got an error | ### Describe the bug
I'm not super clear on what a shield is, but clicked the link on this page just to see what it is: https://docs.marimo.io/community/
Got this page:

### Environment
<details>
```
On Linux (Pop!OS) using Firefox.
```
</details>
### Code to reproduce
N/A | closed | 2025-03-13T20:12:32Z | 2025-03-13T20:36:28Z | https://github.com/marimo-team/marimo/issues/4093 | [
"bug"
] | axiomtutor | 3 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,068 | [Bug]: Unable to find and install torch during first run of webui-user.bat | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I installed Automatic1111 using instructions from https://stable-diffusion-art.com/install-windows/
Installed version of Python is 3.10.6. First time run of 'webui-user.bat' threw an error.
```
ERROR: Could not find a version that satisfies the requirement torch==2.1.2 (from versions: none)
ERROR: No matching distribution found for torch==2.1.2
RuntimeError: Couldn't install torch.
Command: "D:\SD\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url https://download.pytorch.org/whl/cu121
Error code: 1
```
I searched GitHub for previously reported issues and tried all the commonly mentioned solutions, i.e., verifying the Python version, deleting the venv folder, updating pip (initially pip was at an older version), and finally editing 'launch_utils.py' to change the torch/torchvision versions to 2.2.0/0.17.0. None of these worked, and the error stayed the same.
### Steps to reproduce the problem
1. New install of Python, Git and Automatic1111
2. Run webui-user.bat
### What should have happened?
webui-user.bat should have installed torch
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
Unable to extract as Webui cannot be opened without a complete installation. Some details below
GPU: NVIDIA GeForce RTX 3060 12GB VRAM
CPU: AMD Ryzen 5 7600X 6-Core Processor
RAM: 32GB
OS: Windows-10-10.0.19045-SP0
### Console logs
```Shell
Already up to date.
venv "D:\SD\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:38:17) [MSC v.1932 32 bit (Intel)]
Version: v1.9.4
Commit hash: feee37d75f1b168768014e4634dcb156ee649c05
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
ERROR: Could not find a version that satisfies the requirement torch==2.1.2 (from versions: none)
ERROR: No matching distribution found for torch==2.1.2
Traceback (most recent call last):
File "D:\SD\stable-diffusion-webui\launch.py", line 48, in <module>
main()
File "D:\SD\stable-diffusion-webui\launch.py", line 39, in main
prepare_environment()
File "D:\SD\stable-diffusion-webui\modules\launch_utils.py", line 380, in prepare_environment
run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
File "D:\SD\stable-diffusion-webui\modules\launch_utils.py", line 115, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install torch.
Command: "D:\SD\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url https://download.pytorch.org/whl/cu121
Error code: 1
Press any key to continue . . .
```
### Additional information
_No response_ | closed | 2024-06-22T02:14:18Z | 2024-06-22T21:45:16Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16068 | [
"asking-for-help-with-local-system-issues"
] | Bubalus-Bubalis | 2 |
Josh-XT/AGiXT | automation | 494 | Make agent Create - Delete work | ### Description
See below
### Steps to Reproduce the Bug




### Expected Behavior
Expected after DELETE:
- the list of agents does not contain `test_quickcheck_agent`
- I cannot get the agent `test_quickcheck_agent`
- getting the agent `test_quickcheck_agent` returns 404 Not Found
### Actual Behavior
- DELETE can be called again and again, and nothing happens
### Additional Context / Screenshots
_No response_
### Operating System
- [ ] Microsoft Windows
- [ ] Apple MacOS
- [X] Linux
- [ ] Android
- [ ] iOS
- [ ] Other
### Python Version
- [ ] Python <= 3.9
- [X] Python 3.10
- [ ] Python 3.11
### Environment Type - Connection
- [X] Local
- [ ] Remote
### Environment Type - Container
- [ ] Using Docker
- [X] Not Using Docker
### Acknowledgements
- [X] My issue title is concise, descriptive, and in title casing.
- [X] I have searched the existing issues to make sure this bug has not been reported yet.
- [X] I am using the latest version of AGiXT.
- [X] I have provided enough information for the maintainers to reproduce and diagnose the issue. | closed | 2023-05-28T15:28:43Z | 2023-06-05T14:13:00Z | https://github.com/Josh-XT/AGiXT/issues/494 | [
"type | report | bug",
"needs triage"
] | localagi | 2 |
babysor/MockingBird | deep-learning | 118 | How do I use the community pre-trained synthesizer? | The tutorial (part 2.2) provides community pre-trained synthesizers. If I use one of these, is it true that I no longer need to train the AI myself?
The file in the first cloud-drive link is called "train3_200k.pt". Which folder should this file go in after downloading? Do I need to change the file name?
Can I just put it directly under /synthesizer/saved_models/mandarin/? There is already a file called "mandarin.pt" at that location. Should the downloaded file directly replace mandarin.pt?
If so, can the tool be used right away after the replacement?
These are fairly beginner questions; I hope nobody minds. Thanks. | closed | 2021-10-07T02:26:37Z | 2021-10-16T08:55:29Z | https://github.com/babysor/MockingBird/issues/118 | [] | anonymousbone | 3 |
marcomusy/vedo | numpy | 557 | How to create a line that do not change when zoom in/out | Please have a look at the following figure:

When zooming in/out, the green line changes size, but the red cross does not.
The green line can be created as:
```
line = Line(p0=[0, 0, 0], p1=[1, 1, 1])
```
But how can I create the red cross, which does not change when zooming in/out? | closed | 2021-12-14T02:17:30Z | 2021-12-23T01:28:35Z | https://github.com/marcomusy/vedo/issues/557 | [] | zhang-qiang-github | 8 |
InstaPy/InstaPy | automation | 6,706 | Gat | open | 2023-05-06T02:38:58Z | 2023-05-06T02:38:58Z | https://github.com/InstaPy/InstaPy/issues/6706 | [] | 3vm66 | 0 | |
encode/databases | asyncio | 564 | MySQL Connection Pool Doesn't Seem to Work | I am experiencing an issue with the MySQL connection pool. I have been testing the following code:
```python
import asyncio
import logging
import time

from databases import Database
from pymysql.err import MySQLError

logger = logging.getLogger(__name__)

DATABASE_URL = "mysql+aiomysql://root:root@192.168.62.195:3306/test?charset=utf8mb4"
# Additional database URL parameters can also be passed in the Database constructor
database = Database(DATABASE_URL, min_size=3, max_size=10, charset="utf8mb4")
async def exec_sql(exec_func, sql: str, tid):
while True:
try:
start_time = time.time()
result = await exec_func(sql)
logger.info(f'task({tid}|{start_time:.0f}|{time.time() - start_time:.3f}) {exec_func} {sql} {result}')
except MySQLError as err:
logger.error(f'task({tid}){exec_func} {sql} {err}')
async def main():
await database.connect()
async with asyncio.TaskGroup() as tg:
for tid in range(1, 100000):
tg.create_task(exec_sql(database.fetch_all, 'show tables', tid))
tg.create_task(exec_sql(database.fetch_one, 'select * from tbl_sms_record', tid))
tg.create_task(exec_sql(database.fetch_val, 'select * from tbl_sms_record', tid))
await database.disconnect()
asyncio.run(main())
```
During concurrency testing, I noticed that only one TCP connection is being utilized for executing queries, while the other two initial connections remain idle.
MySQL process list output:
```sql
mysql> show processlist;
+-----+-----------------+---------------------+--------+---------+----------+----------------------------+------------------------------+
| Id | User | Host | db | Command | Time | State | Info |
+-----+-----------------+---------------------+--------+---------+----------+----------------------------+------------------------------+
| 99 | root | 192.168.51.70:59810 | db_sms | Sleep | 46 | | NULL |
| 100 | root | 192.168.51.70:59826 | db_sms | Query | 0 | waiting for handler commit | select * from tbl_sms_record |
| 101 | root | 192.168.51.70:59832 | db_sms | Sleep | 46 | | NULL |
+-----+-----------------+---------------------+--------+---------+----------+----------------------------+------------------------------+
```
TCP Connections:
```
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
ESTAB 0 0 [::ffff:10.0.0.161]:3306 [::ffff:192.168.51.70]:37426 users:(("mysqld",pid=542638,fd=39))
ESTAB 0 1400 [::ffff:10.0.0.161]:3306 [::ffff:192.168.51.70]:37412 users:(("mysqld",pid=542638,fd=38))
ESTAB 0 0 [::ffff:10.0.0.161]:3306 [::ffff:192.168.51.70]:37408 users:(("mysqld",pid=542638,fd=37))
```
I expected that with 100,000 concurrent tasks, more connections would be utilized. Is this an issue with my code or the databases library?
| open | 2023-07-14T08:23:12Z | 2024-10-31T01:35:23Z | https://github.com/encode/databases/issues/564 | [
"bug"
] | Vastxiao | 5 |
fastapi/fastapi | fastapi | 13,111 | Incorrect handling of non utf-8 data in body in case of a validataion error | ### Privileged issue
- [X] I'm @tiangolo or he asked me directly to create an issue here.
### Issue Content
There was no way to report the actual issue under the rules, so I had to lie about being able to do so...
There should be a section in Discussions (or something similar) for this.
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "REDACTED\.venv\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "REDACTED\.venv\Lib\site-packages\starlette\routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "REDACTED\.venv\Lib\site-packages\fastapi\routing.py", line 315, in app
raise validation_error
fastapi.exceptions.RequestValidationError: [{'type': 'model_attributes_type', 'loc': ('body',), 'msg': 'Input should be a valid dictionary or object to extract fields from', 'input': b'----cpp-httplib-multipart-data-UxL5MB5nIRg5wqYk\r\nContent-Disposition: form-data; name="metadata"\r\nContent-Type: application/json\r\n\r\n{"started":1735320698,"length_seconds":3}\r\n----cpp-httplib-multipart-data-UxL5MB5nIRg5wqYk\r\nContent-Disposition: form-data; name="metadata"; filename="main.ogg"\r\nContent-Type: audio/ogg\r\n\r\nOggS\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x81Z\x97E\x00\x00\x00\x00\x0b\xffmn\x01\x13OpusHead\x01\x028\x01\x80>\x00\x00\x00\x00\x00OggS\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x81Z\x97E\x01\x00\x00\x00\xf3sR\x98\x01\'OpusTags\x17\x00\x00\x00recorder ogg-opus 0.0.1\x00\x00\x00\x00OggS\x00\x00\xc0\xf3\x00\x00\x00\x00\x00\x00\x81Z\x97E\x02\x00\x00\x00\xd2\xf2\x8d<A\x14\x1b\r\n----cpp-httplib-multipart-data-UxL5MB5nIRg5wqYk--\r\n'}]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "REDACTED\.venv\Lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 419, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "REDACTED\.venv\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "REDACTED\.venv\Lib\site-packages\fastapi\applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "REDACTED\.venv\Lib\site-packages\starlette\applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "REDACTED\.venv\Lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
raise exc
File "REDACTED\.venv\Lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "REDACTED\.venv\Lib\site-packages\starlette\middleware\cors.py", line 85, in __call__
await self.app(scope, receive, send)
File "REDACTED\.venv\Lib\site-packages\starlette\middleware\exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "REDACTED\.venv\Lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
raise exc
File "REDACTED\.venv\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "REDACTED\.venv\Lib\site-packages\starlette\routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "REDACTED\.venv\Lib\site-packages\starlette\routing.py", line 776, in app
await route.handle(scope, receive, send)
File "REDACTED\.venv\Lib\site-packages\starlette\routing.py", line 297, in handle
await self.app(scope, receive, send)
File "REDACTED\.venv\Lib\site-packages\starlette\routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "REDACTED\.venv\Lib\site-packages\starlette\_exception_handler.py", line 75, in wrapped_app
response = await handler(conn, exc)
^^^^^^^^^^^^^^^^^^^^^^^^
File "REDACTED\.venv\Lib\site-packages\fastapi\exception_handlers.py", line 25, in request_validation_exception_handler
content={"detail": jsonable_encoder(exc.errors())},
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "REDACTED\.venv\Lib\site-packages\fastapi\encoders.py", line 303, in jsonable_encoder
jsonable_encoder(
File "REDACTED\.venv\Lib\site-packages\fastapi\encoders.py", line 289, in jsonable_encoder
encoded_value = jsonable_encoder(
^^^^^^^^^^^^^^^^^
File "REDACTED\.venv\Lib\site-packages\fastapi\encoders.py", line 318, in jsonable_encoder
return ENCODERS_BY_TYPE[type(obj)](obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "REDACTED\.venv\Lib\site-packages\fastapi\encoders.py", line 59, in <lambda>
bytes: lambda o: o.decode(),
^^^^^^^^^^
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x81 in position 335: invalid start byte
```
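Until this is handled upstream, the decode policy itself is simple to make tolerant. A minimal stdlib sketch (it mirrors the `errors=...` fix suggested at the end of this report; `'replace'` keeps a visible U+FFFD marker wherever bytes were undecodable, whereas `'ignore'` drops them silently):

```python
def jsonable_bytes(o: bytes) -> str:
    """Decode strictly when possible; fall back to replacement characters
    instead of raising on non-UTF-8 payloads such as the multipart body above."""
    try:
        return o.decode()
    except UnicodeDecodeError:
        return o.decode(errors="replace")

print(jsonable_bytes(b"plain ascii"))  # plain ascii
print(jsonable_bytes(b"\x81Z\x97E"))   # each undecodable byte becomes U+FFFD
```

In FastAPI itself this would live in the `bytes` entry of `ENCODERS_BY_TYPE`, or in a custom `RequestValidationError` exception handler that scrubs the `input` field before JSON-encoding it.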
using `o.decode(errors='ignore')` seems to fix the issue | closed | 2024-12-27T17:39:46Z | 2024-12-28T04:27:19Z | https://github.com/fastapi/fastapi/issues/13111 | [] | ph4 | 1 |
Textualize/rich | python | 3,008 | [REQUEST] Conditional Formatting for cells in a table | Hi, I was searching for similar feature requests that would help me figure out conditional formatting for columns, similar to what we can do in Excel. I found one pull request in the Rich issues, https://github.com/Textualize/rich/pull/2388, and I think they were thinking of the same thing.
<img width="231" alt="image" src="https://github.com/Textualize/rich/assets/3039492/5af7c6ed-f46b-49bf-a726-6dcde231f194">
Is this possible in Rich, and how can I achieve it in case it's not something that needs to be added to Rich itself?
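For what it's worth, one way to approximate the screenshot today is to pick a style per cell with ordinary Python and Rich's console markup. A hedged sketch (the thresholds, styles, and data below are arbitrary examples, not a built-in Rich feature):

```python
from rich.console import Console
from rich.table import Table

def score_style(value: float) -> str:
    """Map a numeric cell to a Rich style string: the 'conditional
    formatting' rule, analogous to an Excel color scale."""
    if value >= 75:
        return "bold white on red"
    if value >= 40:
        return "black on yellow"
    return "white on green"

table = Table("name", "score")
for name, score in [("alpha", 12.0), ("beta", 55.0), ("gamma", 91.0)]:
    table.add_row(name, f"[{score_style(score)}]{score}[/]")

Console().print(table)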
| open | 2023-06-24T23:11:49Z | 2023-06-24T23:12:07Z | https://github.com/Textualize/rich/issues/3008 | [
"Needs triage"
] | xxwikkixx | 1 |
plotly/dash | data-visualization | 2,373 | [Feature Request] Long Polling instead of Short Polling | **Is your feature request related to a problem? Please describe.**
Sometimes, with small values for the background callback interval (less than 50 ms, for example), users may run into browser issues caused by the high volume of requests to the backend.
**Describe the solution you'd like**
Implement long polling instead of short polling, or provide a clear explanation of why the short-polling approach is used now.
**Describe alternatives you've considered**
WebSockets could also be investigated as an alternative. | closed | 2022-12-29T03:33:18Z | 2024-07-24T16:57:03Z | https://github.com/plotly/dash/issues/2373 | [] | ArtsiomAntropau | 4 |
postmanlabs/httpbin | api | 18 | Omitting "www" in non-GET request results in a 301 moved response | When you omit the "www" in the URL when doing a non-GET request, httpbin responds with a 301 Moved status.
```
>> print requests.post('http://httpbin.org/post').status_code
>> 301
>> print requests.post('http://www.httpbin.org/post').status_code
>> 200
```
| closed | 2011-12-06T03:08:34Z | 2018-04-26T17:50:55Z | https://github.com/postmanlabs/httpbin/issues/18 | [] | johtso | 3 |
healthchecks/healthchecks | django | 870 | Integration Request: Webook running and stopped trigger similar to "hc_check_started" | This is a proposal to send webhooks when a check is running (initiated with `/start`) and ended with a ping.
Prometheus integration labels this as a metric named `hc_check_started`
Currently the webhook integration only supports health-check status triggers (UP or DOWN).
The proposal is to add another trigger for when checks are `started` and when they are `stopped` (finished).
This is useful for sending push metrics without having to scrape the Prometheus endpoint at increased intervals, for example, which causes strain at both ends.
It would also be useful to have a placeholder in the webhook for the `unique_key` that's exposed for the Prometheus metric; it doesn't seem to be a `uuid`, or am I mistaken? | closed | 2023-07-31T21:45:35Z | 2023-08-01T10:55:45Z | https://github.com/healthchecks/healthchecks/issues/870 | [] | tekert | 1 |
keras-team/keras | data-science | 20,531 | AttributeError: module 'keras_nlp' has no attribute 'models' | <string>:1: SyntaxWarning: invalid escape sequence '\/'
(the SyntaxWarning above is repeated 16 more times)
```
Traceback (most recent call last):
  File "C:\Users\wangshijiang\Desktop\deep_learning\project\llm\llm.py", line 68, in <module>
    preprocessor = keras_nlp.models.DebertaV3Preprocessor.from_preset(
                   ^^^^^^^^^^^^^^^^
AttributeError: module 'keras_nlp' has no attribute 'models'
```

| closed | 2024-11-21T14:59:13Z | 2024-11-29T16:42:00Z | https://github.com/keras-team/keras/issues/20531 | [
"type:support",
"stat:awaiting response from contributor"
] | iwqculrbud | 4 |
jonaswinkler/paperless-ng | django | 923 | [Other] Exporter can't be run from a crontab or script in general | Hey,
I am trying to automate the Document Exporter and therefore created a small script:
```
#!/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/paperless/
cd /home/paperless/paperless-ng/
docker-compose exec webserver document_exporter ../export
```
The sh-file is located in /usr/local/bin and runs perfectly when executed manually. It also works if I replace the last line with "docker-compose down" or "up". Just the exporter does not seem to work when I call the script via crontab or via Duplicati.
What could be the issue here? | closed | 2021-04-15T13:32:00Z | 2021-04-15T19:34:39Z | https://github.com/jonaswinkler/paperless-ng/issues/923 | [] | freaky33 | 3 |
tqdm/tqdm | pandas | 609 | Adjust EMA smoothing coefficient | On line 714 of _tqdm.py:
https://github.com/tqdm/tqdm/blob/96d8a3c3642474144f53f74331ef2172d1c39496/tqdm/_tqdm.py#L714
---
```
smoothing : float, optional
Exponential moving average smoothing factor for speed estimates
(ignored in GUI mode). Ranges from 0 (average speed) to 1
(current/instantaneous speed) [default: 0.3].
```
---
With the default alpha value of 0.3, past values are given quite a low weighting (and conversely, more recent results are given quite a large effect). I think it would be better to reduce this smoothing parameter to, for example, 0.1 or so, as it'll be less sensitive to skewed outliers then.
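To make the trade-off concrete, here is a minimal standalone sketch of the exponential moving average the docstring describes (illustrative only, not tqdm's internal code):

```python
# Exponential moving average with smoothing factor alpha:
# alpha = 1 gives the instantaneous value; alpha -> 0 approaches a long-run mean.
def ema(values, alpha):
    avg = None
    for v in values:
        avg = v if avg is None else alpha * v + (1 - alpha) * avg
    return avg

# A steady rate followed by a single outlier sample:
rates = [10.0, 10.0, 10.0, 10.0, 30.0]
print(round(ema(rates, 0.3), 2))  # 16.0 -- the outlier dominates the estimate
print(round(ema(rates, 0.1), 2))  # 12.0 -- far less sensitive to the spike
```

A single outlier shifts the estimate by alpha times its deviation, so a smaller alpha damps spikes at the cost of adapting more slowly to genuine rate changes.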
---
Side note: I ended up investigating this because I've been trying to use tqdm to show progress on a number of jobs that have quite a high variance in terms of run time - the low weighting of past results in the EMA meant that the predicted run time of the job was very volatile and was much too sensitive to the most recent value (so I couldn't really draw any useful conclusions from it). I think that a smaller value for alpha would solve this problem.
---
Edit: I realised that I made a mistake interpreting the definition of EMA - the documentation is correct and I'd misinterpreted the smoothing factor wrong (d'oh). I've since edited this issue. I still think that the choice of coefficient could be improved and have made a PR (#616) with a suggested improvement for the EMA smoothing parameter. | open | 2018-09-14T08:45:13Z | 2020-01-20T21:39:03Z | https://github.com/tqdm/tqdm/issues/609 | [
"question/docs ‽",
"to-review 🔍"
] | ghost | 2 |
tensorflow/tensor2tensor | deep-learning | 1,573 | Error in speech recognition training | ### Description
I do not know why this error occurs.
TypeError: __init__() got an unexpected keyword argument 'experimental_export_device_assignment'
...
### Environment information
```
OS: <your answer here>
$ pip freeze | grep tensor
mesh-tensorflow==0.0.5
tensor2tensor==1.13.4
tensorboard==1.12.0
tensorflow==1.13.1
tensorflow-datasets==1.0.2
tensorflow-estimator==1.13.0
tensorflow-metadata==0.13.0
tensorflow-probability==0.6.0
tensorflow-serving-api==1.12.0
$ python -V
Python 3.5.3
```
### For bugs: reproduction and error logs
```
t2t-trainer --model=transformer --hparams_set=transformer_librispeech_tpu --problem=librispeech --train_steps=210000 --eval_steps=3 --local_eval_frequency=100 --data_dir=$DATA --output_
dir=$OUT --use_tpu --cloud_tpu_name=$TPU_NAME
...
```
```
# Error logs:
...
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
INFO:tensorflow:Importing user module manuel_garcia02 from path /home
Traceback (most recent call last):
File "/usr/local/bin/t2t-trainer", line 33, in <module>
tf.app.run()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/usr/local/bin/t2t-trainer", line 28, in main
t2t_trainer.main(argv)
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/bin/t2t_trainer.py", line 398, in main
exp = exp_fn(create_run_config(hparams), hparams)
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/utils/trainer_lib.py", line 774, in experiment_fn
return create_experiment(run_config, hparams, *args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/utils/trainer_lib.py", line 672, in create_experiment
use_xla=use_xla)
File "/usr/local/lib/python3.5/dist-packages/tensor2tensor/utils/trainer_lib.py", line 318, in create_estimator
experimental_export_device_assignment=True)
TypeError: __init__() got an unexpected keyword argument 'experimental_export_device_assignment'
```
| closed | 2019-05-16T13:25:03Z | 2020-03-26T03:25:55Z | https://github.com/tensorflow/tensor2tensor/issues/1573 | [] | manuel3265 | 2 |
slackapi/bolt-python | fastapi | 899 | How do I handle view_submissions payloads for custom workflows in slack? | The issue I am facing: I am trying to implement a form as a custom step in Workflow Builder, so that when a user accesses the form via a shortcut within a specific channel, they can fill in a request and the request gets posted in the channel for us to ack or mark as completed. The reason I am trying to put the form inside a workflow is that when a user submits the request via a manually created workflow, there is no data within the `reaction_added` payload stating who submitted that workflow initially; as you can see in the block of code under the variable declarations, I am grabbing the `user_id` via the conversation history, which is not efficient.
So my issue is: when I choose my custom workflow step within Workflow Builder, the form shows up without an issue and returns a 200 response, but when I try to save that workflow step it just states "Sorry, there was a problem trying to save your step" (also with a 200 response), and there is no error output either. So I am curious what I am missing.
### Reproducible in:
```python
#!/usr/bin/env python3
import os
import re
from slack_bolt import App
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError
from slackeventsapi import SlackEventAdapter
from slack_bolt.adapter.flask import SlackRequestHandler
from flask import Flask, request, make_response
import json
SLACK_BOT_TOKEN = os.environ["SLACK_BOT_TOKEN"]
client = WebClient(token=SLACK_BOT_TOKEN)
ack_emoji = "ack-party"
completed_emoji = "white_check_mark"
workflows = []
app = Flask(__name__)
slack_event_adapter = SlackEventAdapter(os.environ["SIGNING_SECRET"], '/slack/events', app)
channel_name = "C0567LF0G5S"
results = client.conversations_history(channel=channel_name)
for message in results["messages"]:
if "Support-request-form" in message["text"]:
timestamp = message["ts"]
user_id = re.search("(?:<@)(.*)(>)", message["text"]).group(0)
user_id = user_id[2:-1]
workflows.append({"timestamp": timestamp, "user": user_id})
try:
@slack_event_adapter.on('reaction_added')
def reaction(reaction_payload):
event = reaction_payload.get('event', {})
reaction = event.get('reaction')
user_reaction = event.get('user')
workflow_ts = event.get('item', {}).get('ts')
user_reaction_info = client.users_info(user=user_reaction)
print(user_reaction_info)
display_name = user_reaction_info.get('user', {}).get('profile', {}).get('display_name')
real_name = user_reaction_info.get('user', {}).get('profile', {}).get('real_name')
if reaction == ack_emoji:
response = f"Your support ticket has been acknowledged by {real_name} ({display_name})"
elif reaction == completed_emoji:
response = f"Your support ticket has been marked as completed by {real_name} ({display_name})"
for match_ts in workflows:
if workflow_ts == match_ts["timestamp"]:
message_user = match_ts["user"]
client.chat_postMessage(channel=message_user, text=response)
@app.route("/slack/events", methods=["POST"])
def slack_events():
handler = SlackRequestHandler(client)
return handler.handle(request)
@app.route("/workflows", methods=["POST"])
def workflow_step():
payload = json.loads(request.form.get("payload"))
payload_data = payload.get('view') or payload
callback_id = payload_data.get('callback_id')
trigger_id = payload_data.get('trigger_id')
if callback_id == 'open_prisma_support_request' and trigger_id:
open_modal(trigger_id, callback_id)
return make_response("", 200)
def workflow_submission():
payload = json.loads(request.form.get("payload"))
print(payload)
response = client.views_update(view_id=payload["view"]["id"])
print(response)
return make_response("", 200)
def open_modal(trigger_id, callback_id):
response = client.views_open(
trigger_id=trigger_id,
view= {
"type": "workflow_step",
"callback_id": callback_id,
"blocks": [
{
"type": "section",
"text": {
"type": "mrkdwn",
"text": "Type of Prisma Support Request",
},
"accessory": {
"type": "radio_buttons",
"action_id": "radio_buttons-action",
"options": [
{
"text": {
"type": "plain_text",
"text": "Issue",
},
"value": "Issue",
},
{
"text": {
"type": "plain_text",
"text": "Exception",
},
"value": "Exception",
},
],
},
},
{
"type": "input",
"element": {
"type": "plain_text_input",
"action_id": "plain_text_input-action",
},
"label": {
"type": "plain_text",
"text": "Please fill in more details",
},
},
],
}
)
except SlackApiError as e:
print(f"Error: {e}")
# {
# "message_ts1":
# {
# "ack_timestamp": "blabla",
# "complete_timestamp": "Toto"
# },
# "message_ts2":
# {
# "ack_timestamp": "blabla",
# "complete_timestamp": "Toto"
# },
# }
if __name__ == "__main__":
#debug=True if file modified, don't need to restart app.
app.run(debug=True)
```
#### The `slack_bolt` version
1.18.0
(Paste the output of `pip freeze | grep slack`)
#### Python runtime version
3.10.8
(Paste the output of `python --version`)
#### OS info
Mac OS 13.3.1
(Paste the output of `sw_vers && uname -v` on macOS/Linux or `ver` on Windows OS)
#### Steps to reproduce:
(Share the commands to run, source code, and project settings (e.g., setup.py))
1. Run Slack App
2. Create custom workflow
3. Try to apply custom step in workflow by clicking the submit button
### Expected result:
(Workflow step form successfully added to the custom workflow)
### Actual result:
(Workflow step does not get added; the UI states "Sorry, there was a problem trying to save your step.")
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2023-05-12T09:01:26Z | 2023-06-26T00:13:23Z | https://github.com/slackapi/bolt-python/issues/899 | [
"question",
"auto-triage-stale"
] | n7z2 | 3 |
TracecatHQ/tracecat | pydantic | 759 | env.sh gets stuck when checking openssl | **Describe the bug**
The env.sh script gets stuck when checking for openssl on line 66.
**To Reproduce**
Just run env.sh as described in https://docs.tracecat.com/self-hosting/deployment-options/docker-compose
E.g. Steps to reproduce the behavior:
1. Run env.sh
**Expected behavior**
Environment file is created
**Environment (please complete the following information):**
- OS: MacOS 15.2 (24C101)
- - Darwin xxx 24.2.0 Darwin Kernel Version 24.2.0: Fri Dec 6 19:03:40 PST 2024; root:xnu-11215.61.5~2/RELEASE_ARM64_T6041 arm64
- CPU architecture
- - arm64 (Apple m4 pro)
- Browser version
- - Not relevant here
- Docker version
- - Not relevant here
- Docker compose version
- - Not relevant here
- Are you using Docker desktop?
- - Not relevant here
**Additional context**
Quick fix
Change line 66 to
```
if ! openssl --help &> /dev/null
``` | closed | 2025-01-15T14:00:01Z | 2025-01-16T21:45:02Z | https://github.com/TracecatHQ/tracecat/issues/759 | [
"build"
] | TL-Silvio | 1 |
sinaptik-ai/pandas-ai | data-visualization | 1,561 | SSL CERTS VERIFICATION FAILED when running .chat | Hi Team,
I am facing an SSL CERTS VERIFICATION FAILED error when running `.chat`. Can `verify=False` be set on the request when hitting `.chat`? Also, initializing the httpx client with the necessary configuration is not helping.
Thanks and Regards
Sumeet Lalla | closed | 2025-01-30T07:29:48Z | 2025-01-30T17:42:56Z | https://github.com/sinaptik-ai/pandas-ai/issues/1561 | [] | prasum | 0 |
marcomusy/vedo | numpy | 943 | Vectorized creation of ellipsoids | Hi @marcomusy,
is it possible to create multiple ellipsoids in a vectorized form at once without using a `for` loop?
I tried to use following code snippet, but it fails:
```
# Create a plotter
plotter = vd.Plotter()
# Number of ellipsoids to create
num_ellipsoids = 3
a_values = np.random.uniform(1.0, 2.0, size=(num_ellipsoids, 3, 3)).reshape(-1,3,3)
positions = np.random.uniform(-3, 3, size=(num_ellipsoids, 3))
# ellipsoids = vd.Ellipsoid(positions, a_values[0,:,:].reshape(1,-1), a_values[1,:,:].reshape(1,-1), a_values[2,:,:].reshape(1,-1))
#
# # Add all ellipsoids to the plot
# plotter += ellipsoids
# Create and add ellipsoids to the plot
for i in range(num_ellipsoids):
ellipsoid = vd.Ellipsoid(positions[i], a_values[i, 0, :], a_values[i, 1, :], a_values[i, 2, :])
plotter.add(ellipsoid)
# Show the plot
plotter.show()
```
Using the for loop above works fine, but if I try to pass the axes values all together I am getting the following error:
```
ellipsoids = vd.Ellipsoid(positions, a_values[0,:,:].reshape(1,-1), a_values[1,:,:].reshape(1,-1), a_values[2,:,:].reshape(1,-1))
File "/home/Development/......../lib/python3.9/site-packages/vedo/shapes.py", line 2888, in __init__
angle = np.arcsin(np.dot(axis1, axis2))
ValueError: shapes (1,9) and (1,9) not aligned: 9 (dim 1) != 1 (dim 0)
```
Probably you could replace the `np.dot` with its vectorized form?
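As an illustration of that suggestion (an assumption about how it could be vectorized, not vedo's actual code), the per-ellipsoid `np.dot` calls can be replaced by one batched row-wise dot product via `np.einsum`:

```python
import numpy as np

rng = np.random.default_rng(0)
axis1 = rng.random((5, 3))          # one axis vector per ellipsoid
axis2 = rng.random((5, 3))
axis1 /= np.linalg.norm(axis1, axis=1, keepdims=True)
axis2 /= np.linalg.norm(axis2, axis=1, keepdims=True)

# Row-wise dot product: one scalar per ellipsoid, shape (5,)
dots = np.einsum("ij,ij->i", axis1, axis2)
angles = np.arcsin(np.clip(dots, -1.0, 1.0))
print(angles.shape)  # (5,)
```

The `np.clip` guards against |dot| marginally exceeding 1 from floating-point error before `arcsin`.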
| open | 2023-10-05T17:37:25Z | 2023-11-06T08:27:03Z | https://github.com/marcomusy/vedo/issues/943 | [] | ttsesm | 9 |
widgetti/solara | flask | 183 | Set higher z-index on Solara 'reloading' overlay and 'refresh' dialogue | The Solara development mode 'reloading' overlay and 'refresh the page' dialogue appear underneath ipyleaflet maps because of their z-index. Since these overlays should supersede every element, it would make sense to give them a very high z-index. | open | 2023-06-29T07:45:43Z | 2023-06-29T07:45:43Z | https://github.com/widgetti/solara/issues/183 | [] | mangecoeur | 0 |
rthalley/dnspython | asyncio | 592 | Facing issue while running "python3 setup.py test" | I have built dnspython-2.0.0 and tried to execute "python3 setup.py test" from the chroot environment. I have installed Python 3.8.6.
I see the test is stuck here:
```
testWS2 (tests.test_tokenizer.TokenizerTestCase) ... ok
test_get_default_backend (tests.test_async.AsyncDetectionTests) ... ok
test_sniff (tests.test_async.AsyncDetectionTests) ... ok
testQueryTCP (tests.test_async.AsyncTests) ... ERROR
testQueryTCPWithSocket (tests.test_async.AsyncTests) ... ERROR
testQueryTLS (tests.test_async.AsyncTests) ... ERROR
testQueryTLSWithSocket (tests.test_async.AsyncTests) ... ERROR
testQueryUDP (tests.test_async.AsyncTests) ... ^CException in callback _SelectorTransport._call_connection_lost(None)
handle: <Handle _SelectorTransport._call_connection_lost(None)>
Traceback (most recent call last):
File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.8/asyncio/base_events.py", line 603, in run_until_complete
self.run_forever()
File "/usr/lib/python3.8/asyncio/base_events.py", line 570, in run_forever
self._run_once()
File "/usr/lib/python3.8/asyncio/base_events.py", line 1823, in _run_once
event_list = self._selector.select(timeout)
File "/usr/lib/python3.8/selectors.py", line 468, in select
fd_event_list = self._selector.poll(timeout, max_ev)
KeyboardInterrupt
```
```
root@photon-4a0e7f2307d4 [ /usr/src/photon/BUILD/dnspython-2.0.0 ]# python3
Python 3.8.6 (default, Oct 7 2020, 09:42:51)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
```
I might be missing something in here causing this issue. Looking for help. | closed | 2020-10-08T17:24:53Z | 2020-11-02T18:12:08Z | https://github.com/rthalley/dnspython/issues/592 | [
"Bug"
] | tapakund | 4 |
apache/airflow | automation | 47,920 | Setting a variable in Dag is failing due to 'Direct database access via the ORM is not allowed in Airflow 3.0' | ### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
Setting a variable in Dag is failing due to 'Direct database access via the ORM is not allowed in Airflow 3.0'
<img width="931" alt="Image" src="https://github.com/user-attachments/assets/eb756885-9eef-4d75-a798-36f451ba3d19" />
### What you think should happen instead?
_No response_
### How to reproduce
Put the DAG below in the dags folder; you will get an import error.
```python
from datetime import datetime
from airflow.providers.standard.operators.bash import BashOperator
from airflow.models import Variable
from airflow import DAG
my_var = Variable.set("param_variable", 10)
dag = DAG(
'test_api_dag',
start_date=datetime(2025, 3, 1, 3, 28, 0),
# # schedule=timedelta(days=1),
schedule='@daily',
is_paused_upon_creation=False
)
hello_task = BashOperator(
task_id='test_task',
bash_command='echo "Hello World from Airflow!"',
do_xcom_push = True,
dag=dag,
)
hello_task
```
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-18T16:57:39Z | 2025-03-22T15:02:29Z | https://github.com/apache/airflow/issues/47920 | [
"kind:bug",
"priority:medium",
"area:core",
"affected_version:3.0.0beta"
] | atul-astronomer | 3 |
deepfakes/faceswap | machine-learning | 1,185 | deepfake | closed | 2021-10-04T07:09:57Z | 2021-10-07T19:07:04Z | https://github.com/deepfakes/faceswap/issues/1185 | [] | 1liuqing1 | 0 | |
holoviz/panel | plotly | 7,347 | Opening xarray dataset suppresses panel error messages | <!--
Thanks for contacting us! Please read and follow these instructions carefully, then you can delete this introductory text. Note that the issue tracker is NOT the place for usage questions and technical assistance; post those at [Discourse](https://discourse.holoviz.org) instead. Issues without the required information below may be closed immediately.
-->
#### ALL software version info
(this library, plus any other relevant software, e.g. bokeh, python, notebook, OS, browser, etc should be added within the dropdown below.)
<details>
<summary>Software Version Info</summary>
```plaintext
bokeh 3.5.2 pyhd8ed1ab_0 conda-forge
panel 1.5.0 pyhd8ed1ab_0 conda-forge
rioxarray 0.17.0 pyhd8ed1ab_0 conda-forge
xarray 2024.9.0 pyhd8ed1ab_0 conda-forge
```
</details>
#### Description of expected behavior and the observed behavior
I run the code below as a panel app
```
panel serve example.py --show
```
When ``x`` is selected, the app results in a blank screen (tile shows OK) and there is no stack trace printed to the terminal. The stack trace is printed when the ``ds = xr.open_dataset(file)`` is commented out.
I tracked the issue to xarray's plugin ``rioxarray.xarray_plugin:RasterioBackend``. When I comment out the lines in ``xarray.backends.plugins.py``'s ``build_engines`` function that load that plugin:
```
def build_engines(entrypoints: EntryPoints) -> dict[str, BackendEntrypoint]:
backend_entrypoints: dict[str, type[BackendEntrypoint]] = {}
for backend_name, (module_name, backend) in BACKEND_ENTRYPOINTS.items():
if module_name is None or module_available(module_name):
backend_entrypoints[backend_name] = backend
entrypoints_unique = remove_duplicates(entrypoints)
# external_backend_entrypoints = backends_dict_from_pkg(entrypoints_unique)
# backend_entrypoints.update(external_backend_entrypoints)
backend_entrypoints = sort_backends(backend_entrypoints)
set_missing_parameters(backend_entrypoints)
return {name: backend() for name, backend in backend_entrypoints.items()}
```
the example works as expected, i.e. fails with the traceback printed.
I don't know whether this is a problem with panel, or rioxarray.
#### Complete, minimal, self-contained example code that reproduces the issue
```python
# example.py
import panel as pn
import xarray as xr
def update(value):
file = '/tmp/foo.nc'
ds = xr.open_dataset(file)
return pn.pane.Str(int(value))
selector = pn.widgets.Select(name="options", options=["x", 2], value="x")
bf = pn.bind(update, selector)
panel_bf = pn.panel(bf)
pn.Row(selector, pn.panel(bf)).servable()
```
#### Stack traceback and/or browser JavaScript console output
#### Screenshots or screencasts of the bug in action
- [ ] I may be interested in making a pull request to address this
| closed | 2024-09-30T06:23:57Z | 2024-10-21T13:37:03Z | https://github.com/holoviz/panel/issues/7347 | [] | yt87 | 10 |
influxdata/influxdb-client-python | jupyter | 436 | All datapoints does not always reach the database in multiprocessing scenario for the `flush_interval < 1000` | __Steps to reproduce:__
Run the following code:
```python
import time
import multiprocessing
from influxdb_client import InfluxDBClient, Point, WritePrecision
from influxdb_client.client.write_api import WriteType, WriteOptions
from influxdb_client.client.util.multiprocessing_helper import MultiprocessingWriter
token = "TOKEN HERE=="
org = "my-org"
bucket = "reproduce_bug"
def write_process(q):
# with InfluxDBClient(url="http://localhost:8086", token=token, org=org) as client:
with MultiprocessingWriter(url="http://localhost:8086", token=token, org=org, write_options=WriteOptions(batch_size=1000)) as writer:
# write_api = client.write_api(write_options=WriteOptions(batch_size=1000))#, write_type=WriteType.batching))
now = time.time_ns()
processed = 0
while True:
i = q.get()
point = Point.from_dict({
"measurement": "bug_test",
"tags": {},
"fields": {
"id": i,
"temp": 2.2324234232,
"temp2": 221,
"temp3": 2
},
"time": now+processed
}, WritePrecision.NS)
writer.write(bucket=bucket, record=point)
processed += 1
print(processed)
def feeder_process(q):
for i in range(250000):
q.put(i)
def feeder_process2(q):
for i in range(250000):
q.put(i)
if __name__=='__main__':
q = multiprocessing.Queue()
write_p = multiprocessing.Process(target=write_process, args=(q,))
feeder_p = multiprocessing.Process(target=feeder_process, args=(q,))
feeder_p2 = multiprocessing.Process(target=feeder_process2, args=(q,))
write_p.start()
feeder_p.start()
feeder_p2.start()
write_p.join()
feeder_p.join()
feeder_p2.join()
```
__Expected behavior:__
The code above produces 500 000 arbitrary data points with unique IDs. When the code has processed all the 500 000 data points, it is expected that all of them should be present in the InfluxDB database, which can be verified by running a |> count() on the measurement.
__Actual behavior:__
By running a |> count() on the data in e.g. Chronograf, there are sometimes fewer than 500 000 samples. This does not happen every time, and I cannot seem to reproduce it with MultiprocessingWriter instead of the normal write_api in the code snippet. In my real-world scenario, however, the bug persists even with MultiprocessingWriter. I have tried to increase the frequency of the bug by adding more feeder processes, which seems to have some effect on it.
The actual scenario where the bug started to appear is similar to this code snippet. I have several processes that produce data and place it into a results queue, the results queue is read by a handler process that writes the results to the database. In the real scenario, there is always between around 5-30 samples missing. I have removed the real data in the real scenario and replaced it with a simple ID field to track the packets and to ensure that the data isn't the cause. I have also added unique timestamps to ensure that no data point is overwritten.
When analyzing the real-world scenario data I found several "gaps" in the IDs, which implies that the packets with IDs within the gaps are missing. I have attached a screenshot of my analysis of two tests below. In the top picture, 4 intervals with missing packets were identified, and in the second picture, only one was identified. Please let me know if the images need further explanation.


__Specifications:__
- Client Version: 1.26.0
- InfluxDB Version: 2.1.1
- Platform: Windows 10, influxdb in docker
| open | 2022-05-06T13:49:18Z | 2022-10-17T19:49:38Z | https://github.com/influxdata/influxdb-client-python/issues/436 | [
"bug"
] | deivard | 18 |
iMerica/dj-rest-auth | rest-api | 599 | Social login: EMAIL_AUTHENTICATION_AUTO_CONNECT raises an exception | The following line will cause an exception in Django AllAuth when combined with `EMAIL_AUTHENTICATION_AUTO_CONNECT: True`:
https://github.com/iMerica/dj-rest-auth/blob/23f097cebcc8ecef886b2ac7869cc1d51f66f90e/dj_rest_auth/social_serializers.py#L68
If the SocialApp is defined in the settings and not in the database, the app object should not be associated to the Token object.
| open | 2024-03-14T10:52:52Z | 2024-03-14T10:52:52Z | https://github.com/iMerica/dj-rest-auth/issues/599 | [] | jonasN5 | 0 |
axnsan12/drf-yasg | rest-api | 847 | AttributeError: module 'ruamel.yaml' has no attribute 'SafeDumper' | The issue from https://github.com/axnsan12/drf-yasg/issues/422 occurred again while I was installing `drf-yasg` v1.21.5.
The latest version of `ruamel.yaml` is [v0.17.26](https://pypi.org/project/ruamel.yaml/#history)
But the error occurred again. I only succeeded with v0.17.21.
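A hedged workaround sketch, assuming the regression comes from `ruamel.yaml` releases newer than the version the reporter confirms working: pin the dependency until drf-yasg supports the changed API:

```
# requirements.txt fragment (assumption: newer ruamel.yaml dropped SafeDumper)
ruamel.yaml==0.17.21
```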
```
AttributeError: module 'ruamel.yaml' has no attribute 'SafeDumper'
``` | open | 2023-05-10T06:02:03Z | 2025-03-07T12:10:54Z | https://github.com/axnsan12/drf-yasg/issues/847 | [
"triage"
] | skyducks | 0 |
jupyter/docker-stacks | jupyter | 1,897 | Multi users | ### What docker image(s) is this feature applicable to?
all-spark-notebook, datascience-notebook
### What changes are you proposing?
I am running the data science notebook image to host a Jupyterlab server for multiple users.
Say user Adam is working on a notebook A; when user Bob then visits the JupyterLab, he will see Adam's notebook A (and Adam's session)... Is there a way for each user to only see their own session and notebook when accessing JupyterLab, or for each user to only see/access their own user directory?
Please feel free to change the label if it's not a bug but rather a feature request. Thanks!
### How does this affect the user?
User Bob can accidentally delete/update the notebook User Adam is working on.
### Anything else?
_No response_ | closed | 2023-04-26T17:48:12Z | 2023-05-05T16:58:21Z | https://github.com/jupyter/docker-stacks/issues/1897 | [
"type:Enhancement"
] | lelejill | 9 |
cobrateam/splinter | automation | 743 | adding cookies issues | Hello,
I am using Django with LiveServerTestCase. I was trying to load my chrome driver with a cookie that preloads a user session cookie.
for example:
` cookie = {'path': '/', 'value': '9gl4p7wlrtgkw3dfer1vi2jievkks8f1', 'name': 'sessionid'}`
However, the cookie doesn't load properly, and the user is not logged in when I run
```
browser.cookies.add(cookie)
browser.reload()
```
When I inspected the code at [splinter/driver/webdriver/cookie_manager.py](https://github.com/cobrateam/splinter/blob/master/splinter/driver/webdriver/cookie_manager.py) I found that the way cookies are added is by
```
for key, value in cookies.items():
self.driver.add_cookie({"name": key, "value": value})
```
resulting in:
```
{'name': 'path', 'value': '/'}
{'name': 'value', 'value': '9gl4p7wlrtgkw3dfer1vi2jievkks8f1'}
{'name': 'name', 'value': 'sessionid'}
```
which didn't really work.
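The mangling above can be reproduced standalone (a minimal sketch, no browser needed):

```python
# The old loop treats each (key, value) pair of a Selenium-style cookie dict
# as a separate cookie, producing three bogus cookies instead of one.
cookie = {"path": "/", "value": "9gl4p7wlrtgkw3dfer1vi2jievkks8f1", "name": "sessionid"}
mangled = [{"name": key, "value": value} for key, value in cookie.items()]
for c in mangled:
    print(c)
# None of the three is the intended sessionid cookie with its value and path.
```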
I have revised the `add` function so that a cookie dict is added as a whole using Selenium's `add_cookie` function.
This is the new function:
```
def add(self, cookies):
if isinstance(cookies, list):
for cookie in cookies:
self.driver.add_cookie(cookie)
return
self.driver.add_cookie(cookies)
```
Hope this helps
| closed | 2019-12-11T19:28:10Z | 2021-07-19T18:52:41Z | https://github.com/cobrateam/splinter/issues/743 | [
"NeedsInvestigation"
] | jadeidev | 1 |
nikitastupin/clairvoyance | graphql | 25 | Validate GraphQL JSON Schema | I've got a Schema resulting from clairvoyance. It is validated using `JSON.parse()`.
However, I don't know why it is not working on **GraphQL-voyager** or **Postman**.
The latter returns the following error, `Invalid Schema supplied: The provided input schema is syntactically invalid`, when a _TestSuite_ is performed on the GraphQL JSON schema.
Do you have any idea of a way to validate the GraphQL JSON schema?
By the way, a little tool named **graphql-path-enum** is capable of finding a lot of paths with ease. | closed | 2021-09-03T13:55:30Z | 2022-12-09T06:19:53Z | https://github.com/nikitastupin/clairvoyance/issues/25 | [] | Sim4n6 | 0 |
huggingface/datasets | pytorch | 6,738 | Dict feature is non-nullable while nested dict feature is | When i try to create a `Dataset` object with None values inside a dict column, like this:
```python
from datasets import Dataset, Features, Value
Dataset.from_dict(
{
"dict": [{"a": 0, "b": 0}, None],
}, features=Features(
{"dict": {"a": Value("int16"), "b": Value("int16")}}
)
)
```
i get `ValueError: Got None but expected a dictionary instead`.
At the same time, having None in _nested_ dict feature works, for example, this doesn't throw any errors:
```python
from datasets import Dataset, Features, Value, Sequence
dataset = Dataset.from_dict(
{
"list_dict": [[{"a": 0, "b": 0}], None],
"sequence_dict": [[{"a": 0, "b": 0}], None],
}, features=Features({
"list_dict": [{"a": Value("int16"), "b": Value("int16")}],
"sequence_dict": Sequence({"a": Value("int16"), "b": Value("int16")}),
})
)
```
Other types of features also seem to be nullable (but I haven't checked all of them).
Version of `datasets` is the latest atm (2.18.0)
Is this an expected behavior or a bug? | closed | 2024-03-18T14:31:47Z | 2024-03-20T10:24:15Z | https://github.com/huggingface/datasets/issues/6738 | [
"bug"
] | polinaeterna | 3 |
netbox-community/netbox | django | 18,441 | Unable to add a Device Interface without specifying the VDC ID |
### Discussed in https://github.com/netbox-community/netbox/discussions/18440
<div type='discussions-op-text'>
<sup>Originally posted by **codrinb93** January 20, 2025</sup>
I am running the following API Call at /api/dcim/interfaces/ and I am unable to specify the VDC by name and device name
When running the same API Call using the VDC id, everything works perfectly.
```json
{
    "name": "INTERFACE_NAME",
    "type": "virtual",
    "device": { "name": "DEVICE_NAME" },
    "vdcs": [
        {
            "name": "VDC_NAME",
            "device": { "name": "VDC_DEVICE" }
        }
    ],
    "parent": { "name": "PARENT_INTERFACE" }
}
```
I get the following error
```json
{
    "vdcs": [
        "Incorrect type. Expected pk value, received dict."
    ]
}
```
I could run a query to find the VDC ID, but then I am also unable to filter by DEVICE NAME. I have VDCs with identical names but on different devices, so I always get multiple answers to my query.</div> | closed | 2025-01-20T22:17:41Z | 2025-01-20T22:22:44Z | https://github.com/netbox-community/netbox/issues/18441 | [] | codrinb93 | 1 |
jadore801120/attention-is-all-you-need-pytorch | nlp | 1 | Index error during translating | Hi,
I tried to force the GPU selection with CUDA_VISIBLE_DEVICES=1
but it pops an error:
```
RuntimeError: cublas runtime error : library not initialized at /py/conda-bld/pytorch_1490903321756/work/torch/lib/THC/THCGeneral.c:387
```
I think it's related to this: https://discuss.pytorch.org/t/cublas-runtime-error-library-not-initialized-at-data-users-soumith-builder-wheel-pytorch-src-torch-lib-thc-thcgeneral-c-383/1375/8
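As a side note, an equivalent way to scope the process to one GPU from inside Python. This is a generic CUDA pattern, not specific to this repo, and it assumes the assignment runs before any CUDA initialization:

```python
import os

# Must be set before the first CUDA call (in practice, before importing torch),
# otherwise the already-initialized context ignores it.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 1
```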
| closed | 2017-06-20T17:07:22Z | 2017-06-26T00:04:31Z | https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/1 | [
"bug"
] | vince62s | 13 |
pytorch/vision | machine-learning | 8,531 | Windows unittest jobs fail due to numpy 2 dependency issue | All other jobs are fine but the windows unittests job are [failing](https://github.com/pytorch/vision/actions/runs/9957203981/job/27508727338?pr=7990).
I'll pin the dep to `numpy<2` in https://github.com/pytorch/vision/pull/8530 to keep the CI green but we should resolve this asap
```
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
Traceback (most recent call last): File "C:\actions-runner\_work\vision\vision\pytorch\vision\test\smoke_test.py", line 6, in <module>
import torch
File "C:\Jenkins\Miniconda3\envs\ci\lib\site-packages\torch\__init__.py", line 2455, in <module>
from torch import (
File "C:\Jenkins\Miniconda3\envs\ci\lib\site-packages\torch\export\__init__.py", line 64, in <module>
from .dynamic_shapes import Constraint, Dim, dims, dynamic_dim, ShapesCollection
File "C:\Jenkins\Miniconda3\envs\ci\lib\site-packages\torch\export\dynamic_shapes.py", line 18, in <module>
from .exported_program import ExportedProgram
File "C:\Jenkins\Miniconda3\envs\ci\lib\site-packages\torch\export\exported_program.py", line 25, in <module>
from torch._higher_order_ops.utils import autograd_not_implemented
File "C:\Jenkins\Miniconda3\envs\ci\lib\site-packages\torch\_higher_order_ops\__init__.py", line 1, in <module>
from torch._higher_order_ops.cond import cond
File "C:\Jenkins\Miniconda3\envs\ci\lib\site-packages\torch\_higher_order_ops\cond.py", line 7, in <module>
import torch._subclasses.functional_tensor
File "C:\Jenkins\Miniconda3\envs\ci\lib\site-packages\torch\_subclasses\functional_tensor.py", line 42, in <module>
class FunctionalTensor(torch.Tensor):
File "C:\Jenkins\Miniconda3\envs\ci\lib\site-packages\torch\_subclasses\functional_tensor.py", line 267, in FunctionalTensor
cpu = _conversion_method_template(device=torch.device("cpu"))
C:\Jenkins\Miniconda3\envs\ci\lib\site-packages\torch\_subclasses\functional_tensor.py:267: UserWarning: Failed to initialize NumPy:
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
(Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\utils\tensor_numpy.cpp:84.)
cpu = _conversion_method_template(device=torch.device("cpu"))
Downloading: "https://download.pytorch.org/models/resnet50-11ad3fa6.pth" to C:\Users\runneruser/.cache\torch\hub\checkpoints\resnet50-11ad3fa6.pth
torchvision: 0.20.0a0+a36b98b
torch.cuda.is_available: False
torch.ops.image._jpeg_version() = 80
Is torchvision usable? True
German shepherd (cpu): 37.6%
============================= test session starts =============================
platform win32 -- Python 3.9.19, pytest-7.4.4, pluggy-1.5.0 -- C:\Jenkins\Miniconda3\envs\ci\python.exe
cachedir: .pytest_cache
rootdir: C:\actions-runner\_work\vision\vision\pytorch\vision
configfile: pytest.ini
testpaths: test
plugins: cov-5.0.0, mock-3.14.0
collecting ... collected 37400 items / 2 errors / 1 skipped
=================================== ERRORS ====================================
__________________ ERROR collecting test/test_transforms.py ___________________
test\test_transforms.py:606: in <module>
class TestToPil:
test\test_transforms.py:664: in TestToPil
(torch.Tensor(4, 4, 1).uniform_().numpy(), "L"),
E RuntimeError: Numpy is not available
______________ ERROR collecting test/test_transforms_v2_utils.py ______________
test\test_transforms_v2_utils.py:47: in <module>
(to_pil_image(IMAGE),),
torchvision\transforms\functional.py:266: in to_pil_image
pic = pic.numpy(force=True)
E RuntimeError: Numpy is not available
============================== warnings summary ===============================
..\..\..\..\..\..\Jenkins\Miniconda3\envs\ci\lib\site-packages\torch\_subclasses\functional_tensor.py:267
C:\Jenkins\Miniconda3\envs\ci\lib\site-packages\torch\_subclasses\functional_tensor.py:267: UserWarning: Failed to initialize NumPy:
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.0 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
(Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\utils\tensor_numpy.cpp:84.)
cpu = _conversion_method_template(device=torch.device("cpu"))
test\test_backbone_utils.py:51
C:\actions-runner\_work\vision\vision\pytorch\vision\test\test_backbone_utils.py:51: PytestCollectionWarning: cannot collect test class 'TestSubModule' because it has a __init__ constructor (from: test/test_backbone_utils.py)
class TestSubModule(torch.nn.Module):
test\test_backbone_utils.py:64
C:\actions-runner\_work\vision\vision\pytorch\vision\test\test_backbone_utils.py:64: PytestCollectionWarning: cannot collect test class 'TestModule' because it has a __init__ constructor (from: test/test_backbone_utils.py)
class TestModule(torch.nn.Module):
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
- generated xml file: C:\actions-runner\_work\_temp\test-results\test-results.xml -
=========================== short test summary info ===========================
ERROR test/test_transforms.py - RuntimeError: Numpy is not available
ERROR test/test_transforms_v2_utils.py - RuntimeError: Numpy is not available
!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!
================== 1 skipped, 3 warnings, 2 errors in 6.45s ===================
``` | closed | 2024-07-17T08:46:43Z | 2024-08-20T17:35:30Z | https://github.com/pytorch/vision/issues/8531 | [] | NicolasHug | 2 |
iterative/dvc | machine-learning | 10,505 | Python CLI: `DeprecationWarning` on `dvc.repo.Repo` import | ### Description
Importing `from dvc.repo import Repo` issues a
```
DeprecationWarning: The `hash` argument is deprecated in favor of `unsafe_hash` and will be removed in or after August 2025.
```
### Reproduce
```python3
from warnings import simplefilter
simplefilter('error')
from dvc.repo import Repo
```
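For context on the repro: `simplefilter('error')` promotes warnings to exceptions, which is why the deprecated `hash=` argument inside `attrs` turns into a hard failure at import time. A stdlib-only sketch of that mechanism (`third_party_import` is a hypothetical stand-in for the `attrs` call, not DVC's code):

```python
import warnings

def third_party_import():
    # Hypothetical stand-in for the attrs call that emits the notice.
    warnings.warn(
        "The `hash` argument is deprecated in favor of `unsafe_hash` "
        "and will be removed in or after August 2025.",
        DeprecationWarning,
        stacklevel=2,
    )

# simplefilter('error') escalates any warning to an exception, so the
# deprecated argument becomes a hard failure when the module is imported.
with warnings.catch_warnings():
    warnings.simplefilter("error")
    try:
        third_party_import()
        raised = False
    except DeprecationWarning:
        raised = True

print(raised)  # → True
```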
```traceback
---------------------------------------------------------------------------
DeprecationWarning Traceback (most recent call last)
Cell In[3], line 1
----> 1 from dvc.repo import Repo
File ~/.pyenv/versions/3.11.6/envs/ds/lib/python3.11/site-packages/dvc/repo/__init__.py:63
58 return f(repo, *args, **kwargs)
60 return wrapper
---> 63 class Repo:
64 DVC_DIR = ".dvc"
66 from dvc.repo.add import add # type: ignore[misc]
File ~/.pyenv/versions/3.11.6/envs/ds/lib/python3.11/site-packages/dvc/repo/__init__.py:72, in Repo()
70 from dvc.repo.diff import diff # type: ignore[misc]
71 from dvc.repo.du import du as _du # type: ignore[misc]
---> 72 from dvc.repo.fetch import fetch # type: ignore[misc]
73 from dvc.repo.freeze import freeze, unfreeze # type: ignore[misc]
74 from dvc.repo.gc import gc # type: ignore[misc]
File ~/.pyenv/versions/3.11.6/envs/ds/lib/python3.11/site-packages/dvc/repo/fetch.py:5
3 from dvc.exceptions import DownloadError
4 from dvc.log import logger
----> 5 from dvc.stage.cache import RunCacheNotSupported
6 from dvc.ui import ui
7 from dvc_data.index import DataIndex, FileStorage
File ~/.pyenv/versions/3.11.6/envs/ds/lib/python3.11/site-packages/dvc/stage/__init__.py:22
20 from .imports import sync_import, update_import
21 from .run import run_stage
---> 22 from .utils import (
23 check_circular_dependency,
24 check_duplicated_arguments,
25 check_missing_outputs,
26 check_no_externals,
27 check_stage_path,
28 compute_md5,
29 fill_stage_dependencies,
30 fill_stage_outputs,
31 get_dump,
32 )
34 if TYPE_CHECKING:
35 from dvc.dependency import ParamsDependency
File ~/.pyenv/versions/3.11.6/envs/ds/lib/python3.11/site-packages/dvc/stage/utils.py:9
7 from dvc.annotations import ANNOTATION_FIELDS
8 from dvc.exceptions import InvalidArgumentError
----> 9 from dvc_data.hashfile.meta import Meta
11 from .exceptions import (
12 MissingDataSource,
13 StageExternalOutputsError,
(...)
16 StagePathOutsideError,
17 )
19 if TYPE_CHECKING:
File ~/.pyenv/versions/3.11.6/envs/ds/lib/python3.11/site-packages/dvc_data/hashfile/__init__.py:7
4 from collections.abc import Iterator
5 from typing import TYPE_CHECKING, Union, cast
----> 7 from .tree import Tree
9 if TYPE_CHECKING:
10 from .db import HashFileDB
File ~/.pyenv/versions/3.11.6/envs/ds/lib/python3.11/site-packages/dvc_data/hashfile/tree.py:10
7 from dvc_objects.errors import ObjectFormatError
9 from dvc_data.compat import cached_property
---> 10 from dvc_data.hashfile.hash import DEFAULT_ALGORITHM, hash_file
11 from dvc_data.hashfile.meta import Meta
12 from dvc_data.hashfile.obj import HashFile
File ~/.pyenv/versions/3.11.6/envs/ds/lib/python3.11/site-packages/dvc_data/hashfile/hash.py:12
8 from tqdm.utils import CallbackIOWrapper
10 from dvc_data.callbacks import TqdmCallback
---> 12 from .hash_info import HashInfo
13 from .istextfile import DEFAULT_CHUNK_SIZE, istextblock
14 from .meta import Meta
File ~/.pyenv/versions/3.11.6/envs/ds/lib/python3.11/site-packages/dvc_data/hashfile/hash_info.py:8
3 from attrs import define, field
5 HASH_DIR_SUFFIX = ".dir"
----> 8 @define(hash=True)
9 class HashInfo:
10 name: Optional[str] = None
11 value: Optional[str] = None
File ~/.pyenv/versions/3.11.6/envs/ds/lib/python3.11/site-packages/attr/_next_gen.py:402, in define.<locals>.wrap(cls)
399 return do_it(cls, auto_attribs)
401 try:
--> 402 return do_it(cls, True)
403 except UnannotatedAttributeError:
404 return do_it(cls, False)
File ~/.pyenv/versions/3.11.6/envs/ds/lib/python3.11/site-packages/attr/_next_gen.py:348, in define.<locals>.do_it(cls, auto_attribs)
347 def do_it(cls, auto_attribs):
--> 348 return attrs(
349 maybe_cls=cls,
350 these=these,
351 repr=repr,
352 hash=hash,
353 unsafe_hash=unsafe_hash,
354 init=init,
355 slots=slots,
356 frozen=frozen,
357 weakref_slot=weakref_slot,
358 str=str,
359 auto_attribs=auto_attribs,
360 kw_only=kw_only,
361 cache_hash=cache_hash,
362 auto_exc=auto_exc,
363 eq=eq,
364 order=order,
365 auto_detect=auto_detect,
366 collect_by_mro=True,
367 getstate_setstate=getstate_setstate,
368 on_setattr=on_setattr,
369 field_transformer=field_transformer,
370 match_args=match_args,
371 )
File ~/.pyenv/versions/3.11.6/envs/ds/lib/python3.11/site-packages/attr/_make.py:1291, in attrs(maybe_cls, these, repr_ns, repr, cmp, hash, init, slots, frozen, weakref_slot, str, auto_attribs, kw_only, cache_hash, auto_exc, eq, order, auto_detect, collect_by_mro, getstate_setstate, on_setattr, field_transformer, match_args, unsafe_hash)
1288 if hash is not None:
1289 import warnings
-> 1291 warnings.warn(
1292 DeprecationWarning(
1293 "The `hash` argument is deprecated in favor of `unsafe_hash` and will be removed in or after August 2025."
1294 ),
1295 stacklevel=2,
1296 )
1297 if unsafe_hash is not None:
1298 hash = unsafe_hash
DeprecationWarning: The `hash` argument is deprecated in favor of `unsafe_hash` and will be removed in or after August 2025.
```
### Expected
A clean import.
### Environment information
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 3.53.1 (pip)
-------------------------
Platform: Python 3.11.6 on Linux-5.4.0-182-generic-x86_64-with-glibc2.31
Subprojects:
dvc_data = 3.15.1
dvc_objects = 5.1.0
dvc_render = 1.0.2
dvc_task = 0.4.0
scmrepo = 3.3.7
Supports:
http (aiohttp = 3.10.1, aiohttp-retry = 2.8.3),
https (aiohttp = 3.10.1, aiohttp-retry = 2.8.3),
s3 (s3fs = 2024.6.1, boto3 = 1.34.131),
ssh (sshfs = 2024.6.0)
Config:
Global: /home/hugo/.config/dvc
System: /etc/xdg/dvc
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: s3
Workspace directory: ext4 on /dev/mapper/ubuntu--vg-ubuntu--lv
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/52ddbaa0d2b087b8b0e593aa20cf85f3
``` | closed | 2024-08-06T10:07:17Z | 2024-08-06T14:33:06Z | https://github.com/iterative/dvc/issues/10505 | [] | hugo-ricateau | 4 |
davidsandberg/facenet | tensorflow | 1,111 | Is there something wrong for calculating val_rate? | In the function "calculate_val", if a model's maximum false-positive rate still less than the far_target, it will set the threshold to zero and get a very low val_rate. I think the low fale-positive rate means well discrimination, it should get a high val_rate? | open | 2019-11-24T11:07:58Z | 2019-11-24T11:07:58Z | https://github.com/davidsandberg/facenet/issues/1111 | [] | peterzpy | 0 |
jschneier/django-storages | django | 1,255 | `collectstatic` is extremely slow with S3 Manifest | hiya! I'm deploying my app on DigitalOcean App Platform. As part of the build step there, I run `python manage.py collectstatic --noinput`. Before switching to storing my static assets in a S3-compatible bucket this was instant, but now takes over 10 minutes when using the S3 manifest. The S3-compatible service I am using is Backblaze B2, although I wouldn't expect this to matter.
## Configuration for my static files
`storages.py`
```py
class StaticStorage(S3ManifestStaticStorage):
bucket_name = 'splashcat-static'
custom_domain = 'static.splashcat.ink'
```
`settings.py`
```py
if not DEBUG:
STORAGES = global_settings.STORAGES | {
"default": {"BACKEND": "storages.backends.s3boto3.S3Boto3Storage"},
"staticfiles": {"BACKEND": "splashcat.storages.StaticStorage"}
}
``` | open | 2023-06-02T21:31:13Z | 2024-12-19T01:15:34Z | https://github.com/jschneier/django-storages/issues/1255 | [] | catgirlinspace | 7 |
lepture/authlib | flask | 182 | OIDC refresh token | Hello, following [this question](https://stackoverflow.com/questions/59855736/authlib-openid-connect-refresh-token) on SO, I was wondering if there is already a way to get a JWT from a refresh token exchange. For now, even when I specify the scope to be openid, I only get an id_token once and it's on the creation of the token. The refresh token exchange leaves me with a "classical" token, and no id_token is joined.
Regarding the OIDC specification ([this part](https://openid.net/specs/openid-connect-core-1_0.html#RefreshTokenResponse)) I would like to know if there is a way to get a new id_token on refresh.
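For reference, the refresh exchange itself is a standard token request; a minimal stdlib sketch of the request body (the endpoint and token values are hypothetical placeholders, and this is not Authlib's API):

```python
from urllib.parse import urlencode

# Hypothetical values for illustration only.
token_endpoint = "https://provider.example/oauth/token"
body = urlencode({
    "grant_type": "refresh_token",
    "refresh_token": "REFRESH_TOKEN_VALUE",
    "scope": "openid profile",  # re-requesting the openid scope on refresh
})

print(token_endpoint)
print(body)  # → grant_type=refresh_token&refresh_token=REFRESH_TOKEN_VALUE&scope=openid+profile
```

Per the OIDC Core section linked above, the response to this request may include a fresh `id_token`; whether the server returns one is provider-dependent.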
I can look into implementing it if you would like. | closed | 2020-01-23T20:14:59Z | 2020-02-25T01:51:43Z | https://github.com/lepture/authlib/issues/182 | [
"feature request"
] | leogout | 4 |
pytorch/pytorch | machine-learning | 149,468 | torch.library.opcheck doesn't check strides for CPU Tensors | Repro:
```py
import torch
from torchvision.transforms.functional import to_pil_image, pil_to_tensor
import PIL
def crop(pic, box):
img = to_pil_image(pic.cpu())
cropped_img = img.crop(box)
return pil_to_tensor(cropped_img).to(pic.device) / 255.
img = torch.ones(3, 64, 64)
img *= torch.linspace(0, 1, steps=64) * torch.linspace(0, 1, steps=64).unsqueeze(-1)
cropped_img = crop(img, (10, 10, 50, 50))
def f(img):
return crop(img, (10, 10, 50, 50))
cropped_img = f(img)
print(img.shape, img.stride())
print(cropped_img.shape, cropped_img.stride())
from typing import Sequence
@torch.library.custom_op("mylib::crop", mutates_args=())
def crop(pic: torch.Tensor, box: Sequence[int]) -> torch.Tensor:
img = to_pil_image(pic.cpu())
cropped_img = img.crop(box)
result = (pil_to_tensor(cropped_img) / 255.).to(pic.device, pic.dtype)
return result
@crop.register_fake
def _(pic, box):
channels = pic.shape[0]
x0, y0, x1, y1 = box
# result = pic.new_empty(y1 - y0, x1 - x0, channels).permute(2, 0, 1)
result = pic.new_empty(channels, y1 - y0, x1 - x0)
return result
result = torch.library.opcheck(crop, (img, (10, 10, 50, 50)))
print(result)
```
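To make the stride mismatch concrete: the active `register_fake` returns a contiguous CHW tensor, while the commented-out variant would return a permuted HWC tensor with the same shape but different strides, which is exactly the difference opcheck fails to flag on CPU here. A pure-Python sketch of the stride arithmetic (element strides, assuming the 40×40 crop implied by box `(10, 10, 50, 50)`):

```python
# Element strides (not bytes) for a contiguous tensor of the given shape.
def contiguous_strides(shape):
    strides, acc = [], 1
    for dim in reversed(shape):
        strides.append(acc)
        acc *= dim
    return tuple(reversed(strides))

# Active fake: new_empty(channels, h, w) -> contiguous (3, 40, 40).
chw = contiguous_strides((3, 40, 40))        # (1600, 40, 1)

# Commented-out fake: new_empty(h, w, channels).permute(2, 0, 1).
hwc = contiguous_strides((40, 40, 3))        # (120, 3, 1)
permuted = tuple(hwc[i] for i in (2, 0, 1))  # (1, 120, 3)

print(chw, permuted)  # same (3, 40, 40) shape, different strides
```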
cc @ezyang @gchanan @kadeng @msaroufim | open | 2025-03-19T01:32:23Z | 2025-03-19T01:44:15Z | https://github.com/pytorch/pytorch/issues/149468 | [
"high priority",
"triage review"
] | zou3519 | 1 |
Layout-Parser/layout-parser | computer-vision | 39 | License agreement of pretrained models | What are the licensing terms for pretrained models specified in Model Zoo, I am specifically interested in TableBank. As per their website(https://github.com/doc-analysis/TableBank/blob/master/MODEL_ZOO.md) model and data can not be commercially used. However layout-parser is distributed under Apache 2.0, does it mean the TableBank listed here https://layout-parser.readthedocs.io/en/latest/notes/modelzoo.html#model-catalog falls under Apache 2.0 license. | closed | 2021-04-29T10:22:25Z | 2021-06-15T02:46:22Z | https://github.com/Layout-Parser/layout-parser/issues/39 | [
"bug"
] | Tushar-FIS | 2 |
assafelovic/gpt-researcher | automation | 969 | ImportError: Unable to import langchain-google-genai | **Describe the bug**
error and log are below
**To Reproduce**
Steps to reproduce the behavior:
4. See error
```
gpt-researcher-1 | ⚠️ Error in reading JSON, attempting to repair JSON
gpt-researcher-1 | Error using json_repair: the JSON object must be str, bytes or bytearray, not NoneType
gpt-researcher-1 | ERROR: Exception in ASGI application
gpt-researcher-1 | Traceback (most recent call last):
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/actions/agent_creator.py", line 27, in choose_agent
gpt-researcher-1 | response = await create_chat_completion(
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/utils/llm.py", line 54, in create_chat_completion
gpt-researcher-1 | provider = get_llm(llm_provider, model=model, temperature=temperature,
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/utils/llm.py", line 19, in get_llm
gpt-researcher-1 | return GenericLLMProvider.from_provider(llm_provider, **kwargs)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/llm_provider/generic/base.py", line 60, in from_provider
gpt-researcher-1 | _check_pkg("langchain_google_genai")
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/llm_provider/generic/base.py", line 155, in _check_pkg
gpt-researcher-1 | raise ImportError(
gpt-researcher-1 | ImportError: Unable to import langchain-google-genai. Please install with `pip install -U langchain-google-genai`
gpt-researcher-1 |
gpt-researcher-1 | During handling of the above exception, another exception occurred:
gpt-researcher-1 |
gpt-researcher-1 | Traceback (most recent call last):
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 242, in run_asgi
gpt-researcher-1 | result = await self.app(self.scope, self.asgi_receive, self.asgi_send) # type: ignore[func-returns-value]
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
gpt-researcher-1 | return await self.app(scope, receive, send)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
gpt-researcher-1 | await super().__call__(scope, receive, send)
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 113, in __call__
gpt-researcher-1 | await self.middleware_stack(scope, receive, send)
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 152, in __call__
gpt-researcher-1 | await self.app(scope, receive, send)
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/cors.py", line 77, in __call__
gpt-researcher-1 | await self.app(scope, receive, send)
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
gpt-researcher-1 | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
gpt-researcher-1 | raise exc
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
gpt-researcher-1 | await app(scope, receive, sender)
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 715, in __call__
gpt-researcher-1 | await self.middleware_stack(scope, receive, send)
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 735, in app
gpt-researcher-1 | await route.handle(scope, receive, send)
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 362, in handle
gpt-researcher-1 | await self.app(scope, receive, send)
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 95, in app
gpt-researcher-1 | await wrap_app_handling_exceptions(app, session)(scope, receive, send)
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
gpt-researcher-1 | raise exc
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
gpt-researcher-1 | await app(scope, receive, sender)
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 93, in app
gpt-researcher-1 | await func(session)
gpt-researcher-1 | File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 383, in app
gpt-researcher-1 | await dependant.call(**solved_result.values)
gpt-researcher-1 | File "/usr/src/app/backend/server/server.py", line 136, in websocket_endpoint
gpt-researcher-1 | await handle_websocket_communication(websocket, manager)
gpt-researcher-1 | File "/usr/src/app/backend/server/server_utils.py", line 117, in handle_websocket_communication
gpt-researcher-1 | await handle_start_command(websocket, data, manager)
gpt-researcher-1 | File "/usr/src/app/backend/server/server_utils.py", line 28, in handle_start_command
gpt-researcher-1 | report = await manager.start_streaming(
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/src/app/backend/server/websocket_manager.py", line 61, in start_streaming
gpt-researcher-1 | report = await run_agent(task, report_type, report_source, source_urls, tone, websocket, headers)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/src/app/backend/server/websocket_manager.py", line 95, in run_agent
gpt-researcher-1 | report = await researcher.run()
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/src/app/backend/report_type/basic_report/basic_report.py", line 41, in run
gpt-researcher-1 | await researcher.conduct_research()
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/agent.py", line 88, in conduct_research
gpt-researcher-1 | self.agent, self.role = await choose_agent(
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/actions/agent_creator.py", line 44, in choose_agent
gpt-researcher-1 | return await handle_json_error(response)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/actions/agent_creator.py", line 55, in handle_json_error
gpt-researcher-1 | json_string = extract_json_with_regex(response)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/src/app/gpt_researcher/actions/agent_creator.py", line 71, in extract_json_with_regex
gpt-researcher-1 | json_match = re.search(r"{.*?}", response, re.DOTALL)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | File "/usr/local/lib/python3.11/re/__init__.py", line 176, in search
gpt-researcher-1 | return _compile(pattern, flags).search(string)
gpt-researcher-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
gpt-researcher-1 | TypeError: expected string or bytes-like object, got 'NoneType'
gpt-researcher-1 | INFO: connection closed
```
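The second traceback bottoms out in `re.search(r"{.*?}", None, re.DOTALL)`: the upstream completion returned `None` (because the Google GenAI provider import failed), and `re.search` rejects non-string input. A stdlib sketch of that failure mode, with a hypothetical guard (not the project's actual code):

```python
import re

def extract_json_with_regex(response):
    # Hypothetical guard; the project's version calls re.search unconditionally.
    if response is None:
        return None
    match = re.search(r"{.*?}", response, re.DOTALL)
    return match.group(0) if match else None

# Reproduce the TypeError from the traceback: re.search rejects None.
try:
    re.search(r"{.*?}", None, re.DOTALL)
    raised = False
except TypeError:
    raised = True

print(raised)                                   # → True
print(extract_json_with_regex(None))            # → None
print(extract_json_with_regex('x {"a": 1} y'))  # → {"a": 1}
```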
**Expected behavior**
It should run properly.
**Screenshots**
If applicable, add screenshots to help explain your problem.
<img width="1176" alt="image" src="https://github.com/user-attachments/assets/e7fa174c-19d9-462b-9128-a4a5084619fa">
**Desktop (please complete the following information):**
- OS: Mac M1 (latest)
- Browser: Chrome (latest)
| closed | 2024-10-31T12:44:02Z | 2024-12-01T08:53:21Z | https://github.com/assafelovic/gpt-researcher/issues/969 | [] | virologist | 3 |
polakowo/vectorbt | data-visualization | 728 | Dockerfile changes to run /apps/candlestick-patterns app | It didn't work for me, got some help from chatGPT:
To make the original Dockerfile work on macOS, the minimal changes needed are to update the config.guess and config.sub scripts before running the configure script. Here is the modified Dockerfile:
```dockerfile
FROM python:3.8-slim
RUN apt-get -y update && apt-get -y install gcc curl make dos2unix
RUN pip install --upgrade pip
# Required by TA-Lib and numba
RUN pip install numpy>=1.19.4
RUN curl -O https://netcologne.dl.sourceforge.net/project/ta-lib/ta-lib/0.4.0/ta-lib-0.4.0-src.tar.gz \
&& tar -xzf ta-lib-0.4.0-src.tar.gz \
&& cd ta-lib/ \
&& curl -O https://git.savannah.gnu.org/cgit/config.git/plain/config.guess \
&& curl -O https://git.savannah.gnu.org/cgit/config.git/plain/config.sub \
&& chmod +x config.guess config.sub \
&& dos2unix config.guess config.sub \
&& ./configure --prefix=/usr \
&& make \
&& make install \
&& cd ..
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Clean up APT when done.
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY assets ./assets
COPY app.py .
CMD ["python", "app.py"]
```
## Explanation of Changes
- **Install dos2unix**: Added `dos2unix` to the list of packages installed by `apt-get`.
- **Update config.guess and config.sub**:
  - Added steps to download the latest `config.guess` and `config.sub` scripts from the GNU repository.
  - Added steps to set the executable permissions for these scripts.
  - Added a step to convert line endings using `dos2unix`.
These changes ensure that the config.guess and config.sub scripts are up to date and compatible with the build environment, resolving issues related to cross-compilation and system recognition. | open | 2024-07-06T20:32:26Z | 2024-07-06T20:32:26Z | https://github.com/polakowo/vectorbt/issues/728 | [] | dcorb | 0 |
yeongpin/cursor-free-vip | automation | 17 | Is there a Linux version? | Requesting a Linux version, please. | closed | 2025-01-13T09:45:12Z | 2025-01-14T15:28:16Z | https://github.com/yeongpin/cursor-free-vip/issues/17 | [] | Dauth | 1 |
svc-develop-team/so-vits-svc | pytorch | 114 | Can't export as wav | C:\Users\take5\Desktop\svc>python inference_main.py -m "models/niel/G_183000.pth" -c "models/niel/config.json" -n "asians1.wav" -t 0 -s "niel" --wav_format "WAV_FORMAT"
gives me
raise ValueError("Unknown format: {0!r}".format(format_str))
ValueError: Unknown format: 'WAV_FORMAT' | closed | 2023-04-02T16:38:57Z | 2023-04-07T11:59:00Z | https://github.com/svc-develop-team/so-vits-svc/issues/114 | [
"not urgent"
] | Mikefizzy | 1 |
rougier/scientific-visualization-book | numpy | 63 | Inconsistent commas in the code (Chapter 2, page 22) | Hi @rougier,
There is a slight inconsistency in the code from the book: there is no comma after the `zorder` argument, but there is one after the `linestyle` argument (technically the latter is not an error, but a trailing comma is usually paired with a different code layout):
```python
plt.plot(P[:,0], P[:,1], clip_on=False, zorder=-10
color="k", linewidth=1.0, linestyle="--", )
```
Perhaps the following code was meant:
```python
plt.plot(P[:,0], P[:,1], clip_on=False, zorder=-10,
color="k", linewidth=1.0, linestyle="--")
```
Thank you. | closed | 2022-07-12T11:20:37Z | 2022-07-16T05:44:10Z | https://github.com/rougier/scientific-visualization-book/issues/63 | [] | labdmitriy | 1 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 279 | Not working | Can someone help please ? I fellowed all steps but nothing is working
Here the output:
```
Running Stage 1: Overall restoration
Traceback (most recent call last):
File "C:\BOPTL\Bringing-Old-Photos-Back-to-Life\Global\detection.py", line 12, in <module>
import torch
File "C:\Users\Mourad\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\__init__.py", line 133, in <module>
raise err
OSError: [WinError 126] Le module spécifié est introuvable. Error loading "C:\Users\Mourad\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\lib\nvfuser_codegen.dll" or one of its dependencies.
Traceback (most recent call last):
File "C:\BOPTL\Bringing-Old-Photos-Back-to-Life\Global\test.py", line 6, in <module>
from torch.autograd import Variable
File "C:\Users\Mourad\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\__init__.py", line 133, in <module>
raise err
OSError: [WinError 126] Le module spécifié est introuvable. Error loading "C:\Users\Mourad\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\lib\nvfuser_codegen.dll" or one of its dependencies.
Finish Stage 1 ...
Running Stage 2: Face Detection
Traceback (most recent call last):
File "C:\BOPTL\Bringing-Old-Photos-Back-to-Life\Face_Detection\detect_all_dlib.py", line 4, in <module>
import torch
File "C:\Users\Mourad\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\__init__.py", line 133, in <module>
raise err
OSError: [WinError 126] Le module spécifié est introuvable. Error loading "C:\Users\Mourad\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\lib\nvfuser_codegen.dll" or one of its dependencies.
Finish Stage 2 ...
Running Stage 3: Face Enhancement
Traceback (most recent call last):
File "C:\BOPTL\Bringing-Old-Photos-Back-to-Life\Face_Enhancement\test_face.py", line 7, in <module>
import data
File "C:\BOPTL\Bringing-Old-Photos-Back-to-Life\Face_Enhancement\data\__init__.py", line 5, in <module>
import torch.utils.data
File "C:\Users\Mourad\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\__init__.py", line 133, in <module>
raise err
OSError: [WinError 126] Le module spécifié est introuvable. Error loading "C:\Users\Mourad\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\lib\nvfuser_codegen.dll" or one of its dependencies.
Finish Stage 3 ...
Running Stage 4: Blending
Traceback (most recent call last):
File "C:\BOPTL\Bringing-Old-Photos-Back-to-Life\Face_Detection\align_warp_back_multiple_dlib.py", line 4, in <module>
import torch
File "C:\Users\Mourad\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\__init__.py", line 133, in <module>
raise err
OSError: [WinError 126] Le module spécifié est introuvable. Error loading "C:\Users\Mourad\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\lib\nvfuser_codegen.dll" or one of its dependencies.
Finish Stage 4 ...
All the processing is done. Please check the results.
``` | open | 2023-08-28T22:31:21Z | 2024-01-22T14:10:19Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/279 | [] | Arifi | 2 |
robusta-dev/robusta | automation | 1,280 | Getting multiple krr reports when scheduled for a specific time | **Describe the bug**
I am getting two reports when I have set `cron_expression: "00 4 * * 2"` for the krr scan.
```
2024-01-30 03:59:02.001 INFO running scheduled job 970bfc78c2d90a4264f8d4afd7512666
2024-01-30 04:01:50.215 INFO running scheduled job 970bfc78c2d90a4264f8d4afd7512666
```
**To Reproduce**
Install Robusta in a Kubernetes cluster with the following `customPlaybooks` section in your `generated_values.yaml`:
```
customPlaybooks:
- triggers:
- on_schedule:
cron_schedule_repeat:
cron_expression: "00 4 * * 2"
actions:
- krr_scan:
krr_args: "--history_duration 168" ## KRR args here
sinks:
- "sink_a"
```
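For what it's worth, the cron expression itself should only fire once per week. A minimal stand-alone check (this tiny matcher is just illustrative — it only handles plain numeric fields and is not Robusta's actual scheduler):

```python
from datetime import datetime, timedelta

def matches(expr: str, dt: datetime) -> bool:
    """Check a datetime against a 5-field cron expression with plain numeric fields."""
    minute, hour, dom, month, dow = expr.split()
    ok = lambda field, value: field == "*" or int(field) == value
    # cron day-of-week: 0 = Sunday .. 6 = Saturday; Python weekday(): 0 = Monday
    cron_dow = (dt.weekday() + 1) % 7
    return (ok(minute, dt.minute) and ok(hour, dt.hour)
            and ok(dom, dt.day) and ok(month, dt.month) and ok(dow, cron_dow))

# Scan one full week, minute by minute, starting Monday 2024-01-29
start = datetime(2024, 1, 29)
hits = [start + timedelta(minutes=m) for m in range(7 * 24 * 60)
        if matches("00 4 * * 2", start + timedelta(minutes=m))]
print(hits)  # → [datetime.datetime(2024, 1, 30, 4, 0)] — one firing per week
```

So only one run should trigger, at 2024-01-30 04:00, which makes the second `running scheduled job` log line at 04:01 look like a duplicate job registration rather than anything in the schedule itself.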
**Expected behavior**
A single report at `00 4 * * 2`, produced by a single scheduled job in the runner.
**Additional context**
Kubernetes -> EKS 1.22
Robusta chart version -> 0.10.27 | open | 2024-02-08T09:54:01Z | 2024-02-08T09:54:26Z | https://github.com/robusta-dev/robusta/issues/1280 | [] | ShibraAmin | 1 |
streamlit/streamlit | data-visualization | 10,693 | st.popover won't use container width if help arg is passed | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
When passing `help` and `use_container_width` to `st.popover`, the final button does not respect the container width.
Similar to #10668, after the fix on 1.43.1, the issue persists on popover elements.
### Reproducible Code Example
```Python
import streamlit as st
# Issue persisting on v1.43.1
with st.container(border=True):
st.popover(label="Popover 1", use_container_width=True)
st.popover(label="Popover 2", use_container_width=True, help="This is a popover")
# Example already fixed on v1.43.1
with st.container(border=True):
st.button(label="Button 1", use_container_width=True)
st.button(label="Button 2", use_container_width=True, help="This is a button")
```
### Steps To Reproduce
1. Run the above minimal example code.
2. Screenshot of the example app:

### Expected Behavior
The popover button should have the same width when using `use_container_width`, whether help is passed or not
### Current Behavior
The popover doesn't respect the container width when the `help` parameter is passed.
### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.43.1
- Python version: 3.12
- Operating System: Windows / WSL (Also tested on iPadOS and MacOS)
- Browser: Chrome / Arc / Mozilla
### Additional Information
_No response_ | closed | 2025-03-08T21:10:02Z | 2025-03-11T09:46:58Z | https://github.com/streamlit/streamlit/issues/10693 | [
"type:bug",
"status:confirmed",
"priority:P1",
"feature:st.popover"
] | schancksb | 3 |