| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
AirtestProject/Airtest | automation | 690 | name 'args' is not defined | 
I followed http://airtest.netease.com/docs/en/1_online_help/advanced_features.html?highlight=setup and got this error. Is something wrong with my setup? | closed | 2020-02-10T01:29:52Z | 2020-02-13T08:48:04Z | https://github.com/AirtestProject/Airtest/issues/690 | [] | giangnb-dev | 3 |
MilesCranmer/PySR | scikit-learn | 566 | Windows Julia Install - could not load library "libpcre2-8" The specified module could not be found. | ### What happened?
After installing PySR on windows, on the first import of the module, the Julia install starts, but fails with this error message:
> ...
> fatal: error thrown and no exception handler available.
> InitError(mod=:Sys, error=ErrorException("could not load library "libpcre2-8"
> The specified module could not be found. "))
> ijl_errorf at C:/workdir/src\rtutils.c:77
> ...
I located libpcre2-8 in the virtual environment folder: `...\venv\julia_env\pyjuliapkg\install\bin\libpcre2-8.dll`
I found [this](https://github.com/JuliaLang/julia/issues/52205) issue on the julia repository, but it has no solution given.
Does anyone know of a workaround?
Would installing julia separately (outside of venv) help?
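Not a confirmed fix, just one thing I would try: since the DLL does exist inside the venv, the loader may simply not be searching that folder. Prepending it to the DLL search path before the first `import pysr` might help (the helper name and `dll_dir` placeholder below are mine, not part of pysr):

```python
import os

def prepend_dll_dir(path):
    """Try to make libpcre2-8.dll findable before the first `import pysr`."""
    os.environ["PATH"] = path + os.pathsep + os.environ.get("PATH", "")
    if hasattr(os, "add_dll_directory") and os.path.isdir(path):
        os.add_dll_directory(path)  # Windows-only API, Python 3.8+
    return path in os.environ["PATH"].split(os.pathsep)

# dll_dir is a placeholder: point it at the `...\install\bin` folder
# located above before importing pysr.
dll_dir = os.getcwd()
prepend_dll_dir(dll_dir)
```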
### Version
0.17.2
### Operating System
Windows
### Package Manager
pip
### Interface
Script (i.e., `python my_script.py`)
### Relevant log output
_No response_
### Extra Info
_No response_ | closed | 2024-03-13T12:06:21Z | 2024-06-17T18:09:56Z | https://github.com/MilesCranmer/PySR/issues/566 | [
"bug"
] | tbuckworth | 21 |
vimalloc/flask-jwt-extended | flask | 287 | Incomplete docs | Hey guys :),
I love the work you have done with this package. Thank You!
Unfortunately the doc-page https://flask-jwt-extended.readthedocs.io/en/stable/blacklist_and_token_revoking.html is not available.
Could you have a look into this issue?
Best regards from Berlin,
Luca | closed | 2019-11-01T09:20:44Z | 2019-11-01T13:42:55Z | https://github.com/vimalloc/flask-jwt-extended/issues/287 | [] | LucaTabone | 3 |
pytorch/pytorch | deep-learning | 148,891 | Upgrading FlashAttention to V3 | # Summary
We are currently building and utilizing FlashAttention2 for torch.nn.functional.scaled_dot_product_attention
Up until recently, the files we build and our integration were very manual. We recently changed this and made FA a `third_party` submodule: https://github.com/pytorch/pytorch/pull/146372
This makes it easier to pull in new files (including those for FAv3); however, because `third_party` extensions have no mechanism to be re-integrated into ATen, the build system + `flash_api` are still manual.
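For reference, what all of these fused kernels compute in the no-dropout hot path is just softmax(QKᵀ/√d)·V. A minimal pure-Python sketch, purely illustrative (no masking, dropout, batching, or the memory-efficient tiling that FlashAttention actually provides):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def sdpa(Q, K, V):
    """Reference scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])
    out = []
    for q in Q:
        # attention scores of this query against every key
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        # weighted sum of value rows
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out
```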
### Plan
At a very high level we have a few options. For the sake of argument I will not include the runtime-dependency option, so for now let's assume we need to build and ship the kernels in `libtorchcuda.so`.
1. Replace FAv2 entirely with FAv3:
Until recently this seemed like a non-ideal option, since we would lose FA support for A100+ machines. That changed in https://github.com/Dao-AILab/flash-attention/commit/7bc3f031a40ffc7b198930b69cf21a4588b4d2f9, and therefore this now seems much more viable, and the least impactful on binary size. I think the main difference is that FAv3 doesn't support dropout; TBD whether that is a large enough blocker.
2. Add FAv3 along w/ FAv2
This would require adding another backend to SDPA for FAv3. It would naively have a large impact on binary size; however, we could choose to build these kernels only for H100 machines.
I am personally in favor of 1, since it is easier to maintain and will provide increased perf on A100 machines for the hot path (no dropout).
For both paths, updates to the internal build system will be needed. | open | 2025-03-10T16:54:15Z | 2025-03-14T19:43:47Z | https://github.com/pytorch/pytorch/issues/148891 | [
"triaged",
"module: sdpa"
] | drisspg | 2 |
graphql-python/graphene | graphql | 1,468 | Is there any way to transform variables before resolving fields? | **Is your feature request related to a problem? Please describe.**
We have queries with variables. One variable is used frequently and needs to be transformed every time the query is called.
**Describe the solution you'd like**
We need some function like `transform_before_xvariablenamex()`, or another mechanism, to transform the variable before it is used in `resolve_xxx()`.
**Describe alternatives you've considered**
I do not want to write some ugly function and import it into the resolvers of every model field where that variable comes in.
**Additional context**
Something like marshmallow, which can transform data before it is used in functions.
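One possible stopgap while such a hook doesn't exist (plain Python, not a graphene API — the decorator name and the `slug` argument are purely illustrative): centralize the transformation in a single decorator instead of repeating it inside every `resolve_xxx()`.

```python
import functools

def transform_arg(name, fn):
    """Apply `fn` to keyword argument `name` before the resolver runs."""
    def deco(resolver):
        @functools.wraps(resolver)
        def wrapper(*args, **kwargs):
            if name in kwargs:
                kwargs[name] = fn(kwargs[name])
            return resolver(*args, **kwargs)
        return wrapper
    return deco

# example resolver: the variable is lower-cased before the body runs
@transform_arg("slug", str.lower)
def resolve_item(root, info, slug):
    return slug
```

Graphene's middleware hook (`resolve(self, next, root, info, **args)`) could probably play the same role globally, if a per-resolver decorator is too noisy.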
| closed | 2022-10-19T16:11:15Z | 2022-12-10T11:35:40Z | https://github.com/graphql-python/graphene/issues/1468 | [
"question"
] | SomeAkk | 3 |
mitmproxy/mitmproxy | python | 6,590 | Binary doesn't run on Mac M1 (silicon) | #### Problem Description
I apologize if this is somehow an expected outcome as I can see that there's only `x86_64` dmg file provided for macOS. I'm using macOS 14.2.1 on a M1 Max machine, and I cannot get mitmproxy to work. It's complaining about the CPU type.
I do not have Rosetta installed on my system.
#### Steps to reproduce the behavior:
1. Install using Homebrew `brew install --cask mitmproxy`
2. Try `mitmproxy --version`
3. See the error
```
==> Installing Cask mitmproxy
==> Linking Binary 'mitmproxy' to '/opt/homebrew/bin/mitmproxy'
==> Linking Binary 'mitmdump' to '/opt/homebrew/bin/mitmdump'
==> Linking Binary 'mitmweb' to '/opt/homebrew/bin/mitmweb'
🍺 mitmproxy was successfully installed!
~ » mitmproxy --version
zsh: bad CPU type in executable: mitmproxy
```
#### System Information
* macOS version: `14.2.1`, no Rosetta
| closed | 2024-01-08T21:45:24Z | 2024-09-30T12:00:25Z | https://github.com/mitmproxy/mitmproxy/issues/6590 | [
"kind/triage"
] | ngocphamm | 4 |
matterport/Mask_RCNN | tensorflow | 2,625 | WARNING:root:You are using the default load_mask(), maybe you need to define your own one. | WARNING:root:You are using the default load_mask(), maybe you need to define your own one.
Epoch 1/10
WARNING:root:You are using the default load_mask(), maybe you need to define your own one.
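For anyone hitting this: the warning fires because the training `Dataset` subclass never overrides `load_mask()`, so the default implementation in `mrcnn/utils.py` gets used. A sketch of the override pattern it is asking for — the `Dataset` base class below is only a stand-in for `mrcnn.utils.Dataset`, and in the real library `load_mask` returns a HxWxN boolean NumPy array plus an int array of class IDs (the toy lists here just show the shape of the API):

```python
class Dataset:
    """Stand-in for mrcnn.utils.Dataset, only to show the override pattern."""
    def load_mask(self, image_id):
        # The real default logs the warning above and returns empty masks.
        return [], []

class MyDataset(Dataset):
    def load_mask(self, image_id):
        # Build (masks, class_ids) from your own annotations here.
        masks = [[[1, 0], [0, 1]]]   # one toy 2x2 instance mask
        class_ids = [1]
        return masks, class_ids
```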
I get this warning when I am trying to train a custom Mask R-CNN on my own dataset.
Has anyone faced the same issue and found a solution? | open | 2021-07-06T21:32:55Z | 2024-03-31T20:49:39Z | https://github.com/matterport/Mask_RCNN/issues/2625 | [] | AidaSilva | 12 |
ARM-DOE/pyart | data-visualization | 1,720 | LiDAR PPI conversion to gridded format | I have LiDAR data that I'd like to convert to radar format (because I want to use the pyart package) and then grid it. However, after gridding, I noticed that the ds.rws values are all NaN. Could you please advise on how to retrieve valid values for ds.rws?
```python
import pyart
import numpy as np
from datetime import datetime
from netCDF4 import Dataset
import warnings
warnings.filterwarnings('ignore')

# Load lidar data
file_path = "D:/lidar/ncfiles/WCS000248_2023-09-23_09-50-36_ppi_40_100m.nc"
data = Dataset(file_path)
sweep_group = data.groups["Sweep_860123"]
time = sweep_group.variables["time"][:]
latitude = data.variables["latitude"][:]
longitude = data.variables["longitude"][:]
altitude = data.variables["altitude"][:]
azimuth = sweep_group.variables["azimuth"][:]
elevation = sweep_group.variables["elevation"][:]
range_ = sweep_group.variables["range"][:]
radial_wind_speed = sweep_group.variables["radial_wind_speed"][:]
rsw = np.array(radial_wind_speed)

# Create radar object using pyart
radar = pyart.testing.make_empty_ppi_radar(rsw.shape[1], len(azimuth), 1)
radar.latitude['data'] = np.array([latitude])
radar.longitude['data'] = np.array([longitude])
radar.altitude['data'] = np.array([altitude])
# NOTE: time_converted is a list of datetimes derived from `time`
# (the conversion code is not shown in this snippet).
#radar.time['data'] = np.array([(t - time_converted[0]).total_seconds() for t in time_converted])
radar.time = {
    'standard_name': 'time',
    'long_name': 'time in seconds since volume start',
    'calendar': 'gregorian',
    'units': 'seconds since 2023-09-23T04:35:05Z',
    'comment': 'times are relative to the volume start_time',
    'data': np.array([(t - time_converted[0]).total_seconds() for t in time_converted]),
    '_FillValue': 1e+20
}
radar.azimuth['data'] = azimuth
radar.elevation['data'] = elevation
radar.range['data'] = range_

radial_wind_speed_dict = {
    'long_name': 'radial_wind_speed',
    'standard_name': 'radial_wind_speed_of_scatterers_away_from_instrument',
    'units': 'm/s',
    'sampling_ratio': 1.0,
    '_FillValue': -9999,
    'grid_mapping': 'grid_mapping',
    'coordinates': 'time range',
    'data': np.ma.masked_invalid(rsw)  # Mask invalid data
}
radar.fields = {'rws': radial_wind_speed_dict}

# This plots successfully:
import matplotlib.pyplot as plt
from pyart.graph import RadarDisplay

display = RadarDisplay(radar)
fig, ax = plt.subplots(figsize=(10, 8))
display.plot_ppi("rws", sweep=0, ax=ax, cmap="coolwarm")
plt.show()

# Now grid
grid_limits = ((10., 4000.), (-4500., 4500.), (-4500., 4500.))
grid_shape = (20, 50, 50)
grid = pyart.map.grid_from_radars([radar], grid_limits=grid_limits, grid_shape=grid_shape)
ds = grid.to_xarray()
print(ds)
```
```
print(ds)
<xarray.Dataset> Size: 641kB
Dimensions: (time: 1, z: 20, y: 50, x: 50, nradar: 1)
Coordinates: (12/16)
* time (time) object 8B 2023-09-23 04:35:05
* z (z) float64 160B 10.0 220.0 ... 3.79e+03 4e+03
lat (y, x) float64 20kB 45.02 45.02 ... 45.1 45.1
lon (y, x) float64 20kB 7.603 7.605 ... 7.715 7.717
* y (y) float64 400B -4.5e+03 -4.316e+03 ... 4.5e+03
* x (x) float64 400B -4.5e+03 -4.316e+03 ... 4.5e+03
...
origin_altitude (time) float64 8B nan
radar_altitude (nradar) float64 8B nan
radar_latitude (nradar) float64 8B 45.06
radar_longitude (nradar) float64 8B 7.66
radar_time (nradar) int64 8B 0
radar_name (nradar) <U10 40B 'fake_radar'
Dimensions without coordinates: nradar
Data variables:
rws (time, z, y, x) float64 400kB nan nan ... nan
ROI (time, z, y, x) float32 200kB 500.0 ... 500.0
Attributes:
radar_name: fake_radar
nradar: 1
instrument_name: fake_radar

np.nanmax(ds.rws)
Out[18]: nan
np.nanmin(ds.rws)
Out[19]: nan
```
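One thing worth checking (an assumption on my part, not a confirmed diagnosis): if invalid gates reach the interpolation step as plain NaN rather than as masked points that get dropped, then every grid cell whose neighborhood touches a bad gate comes out NaN. The effect in miniature, in plain Python:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

gates = [471.1, float("nan"), 471.15]        # one bad gate in the neighborhood
nan_avg = mean(gates)                        # a single NaN poisons the average
valid = [g for g in gates if not math.isnan(g)]
clean_avg = mean(valid)                      # mask/drop bad gates first
```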
| open | 2025-01-18T18:09:02Z | 2025-01-22T16:59:16Z | https://github.com/ARM-DOE/pyart/issues/1720 | [] | priya1809 | 13 |
gradio-app/gradio | data-science | 9,967 | MultimodalTextbox interactive=False doesn't work with the submit button | ### Describe the bug
When setting interactive=False with MultimodalTextbox, it doesn't disable the submit button.
The text entry and image upload button are disabled so the content cannot be changed, but the submission can still take place.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def greet(a, c):
return gr.MultimodalTextbox(interactive=False), a, c+1
with gr.Blocks() as demo:
a = gr.MultimodalTextbox(interactive=True, show_label=False)
b = gr.Textbox(interactive=False, show_label=False)
c = gr.Number(interactive=False, show_label=False)
a.submit(fn=greet, inputs=[a, c], outputs=[a, b, c])
if __name__ == "__main__":
demo.launch()
```
### Screenshot



I can still click the submit button multiple times after it is set to be non-interactive.
### Logs
_No response_
### System Info
```shell
Gradio Playground
```
### Severity
I can work around it | open | 2024-11-16T03:21:18Z | 2024-11-16T03:29:07Z | https://github.com/gradio-app/gradio/issues/9967 | [
"bug"
] | sthemeow | 0 |
skforecast/skforecast | scikit-learn | 427 | MLPRegressor Bayesian search | Hi, can I search hidden_layer_sizes for MLPRegressor using Bayesian search?
If yes, how can I write search_space code for it?
E.g., for grid search I use `param_grid = {'hidden_layer_sizes': [[50,50], [70,50], [100,50], [100,100], [100], [100,50,30], [256,256], [256,256,128]]}`.
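For reference, skforecast's Bayesian search passes an optuna `Trial` to a `search_space` callable, so a categorical choice of layer sizes might look like the sketch below. The `FakeTrial` stand-in is only so the snippet runs standalone, and the index-based encoding is used because optuna's `suggest_categorical` officially supports only primitive choice types — treat the exact skforecast wiring as an assumption and check the docs:

```python
import random

class FakeTrial:
    """Minimal stand-in for optuna.trial.Trial (only suggest_categorical)."""
    def __init__(self, seed=0):
        self._rng = random.Random(seed)
    def suggest_categorical(self, name, choices):
        return self._rng.choice(choices)

layer_options = [(50, 50), (70, 50), (100, 50), (100, 100),
                 (100,), (100, 50, 30), (256, 256), (256, 256, 128)]

def search_space(trial):
    # suggest an index, then map it back to the actual layer-size tuple
    idx = trial.suggest_categorical("hidden_layer_sizes_idx",
                                    list(range(len(layer_options))))
    return {"hidden_layer_sizes": layer_options[idx]}

params = search_space(FakeTrial())
```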
I would like to write something similar as a Bayesian `search_space`. | closed | 2023-05-11T11:13:24Z | 2023-05-17T06:34:02Z | https://github.com/skforecast/skforecast/issues/427 | [
"question"
] | AVPokrovsky | 3 |
ivy-llc/ivy | numpy | 28,847 | Add Numpy Frontend Support to Ivy Transpiler | **Description**:
The current implementation of `ivy.transpile` supports `"torch"` as the sole `source` argument. This allows transpiling PyTorch functions or classes to target frameworks like TensorFlow, JAX, or NumPy. This task aims to extend the functionality by adding Numpy as a valid `source`, enabling transpilation of Numpy code to other frameworks via Ivy's intermediate representation.
For example, after completing this task, we should be able to transpile Numpy code using:
```python
ivy.transpile(func, source="numpy", target="jax")
```
### Goals:
The main objective is to implement the first two stages of the transpilation pipeline for Numpy:
1. **Lower Numpy code to Ivy’s Numpy Frontend IR.**
2. **Transform the Numpy Frontend IR to Ivy’s core representation.**
Once these stages are complete, the rest of the pipeline can be reused to target other frameworks like JAX, PyTorch, or TensorFlow. The steps would look as follows:
```text
source='numpy' → target='numpy_frontend'
source='numpy_frontend' → target='ivy'
source='ivy' → target='jax'/'torch'/etc.
```
This mirrors the existing pipeline for PyTorch:
```text
source='torch' → target='torch_frontend'
source='torch_frontend' → target='ivy'
source='ivy' → target='jax'/'numpy'/etc.
```
### Key Tasks:
1. **Add Native Framework-Specific Implementations for Core Transformation Passes:**
- For example, implement the `native_numpy_recursive_transformer.py` for traversing and transforming Numpy native source code.
- Use `native_torch_recursive_transformer.py` as a reference ([example here](https://github.com/ivy-llc/ivy/blob/open-source/ivy/transpiler/transformations/transformers/recursive_transformer/native_torch_recursive_transformer.py#L18))
2. **Define the Transformation Pipeline for Numpy to Numpy Frontend IR:**
- Create a new pipeline in `source_to_frontend_translator_config.py` to handle the stage `source='numpy', target='numpy_frontend'` ([example here](https://github.com/ivy-llc/ivy/blob/open-source/ivy/transpiler/translations/configurations/source_to_frontend_translator_config.py#L88)).
3. **Define the Transformation Pipeline for Numpy Frontend IR to Ivy:**
- Add another pipeline in `frontend_to_ivy_translator_config.py` to handle the stage `source='numpy_frontend', target='ivy'` ([example here](https://github.com/ivy-llc/ivy/blob/open-source/ivy/transpiler/translations/configurations/frontend_to_ivy_translator_config.py#L92)).
4. **Add Stateful Classes for Numpy**
- **NOTE:** Numpy does not natively support any stateful classes so this step can be skipped.
5. **Understand and Leverage Reusability:**
- Explore reusable components in the existing PyTorch pipeline, especially for AST transformers and configuration management.
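For orientation, the recursive-transformer passes listed above are, at heart, AST rewrites over the captured source. A toy stdlib-only illustration of that flavor of pass (the class and the renaming rule are illustrative, not the actual ivy transformer):

```python
import ast

class NumpyToFrontend(ast.NodeTransformer):
    """Toy pass: rewrite `np.<fn>` attribute accesses to `np_frontend.<fn>`."""
    def visit_Attribute(self, node):
        self.generic_visit(node)
        if isinstance(node.value, ast.Name) and node.value.id == "np":
            node.value = ast.Name(id="np_frontend", ctx=ast.Load())
        return node

src = "def f(x):\n    return np.sum(np.abs(x))\n"
tree = NumpyToFrontend().visit(ast.parse(src))
rewritten = ast.unparse(ast.fix_missing_locations(tree))
```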
### Testing:
- Familiarize yourself with the transpilation flow by exploring [transpiler tests](https://github.com/ivy-llc/ivy/tree/open-source/ivy_tests/test_transpiler)
- Add appropriate tests to validate Numpy source transpilation at each stage of the pipeline.
### Additional Notes:
- Keep in mind the modular and extensible design of the transpiler, ensuring that the new implementation integrates smoothly into the existing architecture.
| open | 2024-12-17T09:59:44Z | 2025-03-18T14:48:26Z | https://github.com/ivy-llc/ivy/issues/28847 | [
"NumPy Frontend",
"ToDo",
"Transpiler"
] | YushaArif99 | 1 |
aleju/imgaug | machine-learning | 617 | Augment bounding boxes is broken when only _augment_keypoints is defined | I have an augmenter that implements `_augment_keypoints` and `_augment_images`, which should be sufficient for augmenting both vector and raster representations of data. However, I noticed when I updated imgaug, my boxes were no longer being augmented.
It looks like 0.4.0 broke the behavior where `augment_bounding_boxes` used to work as long as `_augment_keypoints` was defined.
The currently implementation in Augmenter is:
```python
def _augment_bounding_boxes(self, bounding_boxes_on_images, random_state,
parents, hooks):
return bounding_boxes_on_images
def _augment_polygons(self, polygons_on_images, random_state, parents,
hooks):
return polygons_on_images
```
However, if only the two aforementioned functions are implemented then the augmenter will change the image, but the boxes will be unchanged. At the very least this should cause an Exception, but I think a better fix would simply be to use the _augment_keypoints function if it exists:
```python
def _augment_bounding_boxes(self, bounding_boxes_on_images, random_state,
parents, hooks):
return self._augment_bounding_boxes_as_keypoints(
bounding_boxes_on_images, random_state, parents, hooks)
def _augment_polygons(self, polygons_on_images, random_state, parents,
hooks):
return self._augment_polygons_boxes_as_keypoints(
polygons_on_images, random_state, parents, hooks)
```
| closed | 2020-02-16T23:40:51Z | 2020-02-17T20:08:51Z | https://github.com/aleju/imgaug/issues/617 | [] | Erotemic | 1 |
miguelgrinberg/python-socketio | asyncio | 923 | client doesn't receive events if *args is in the event function parameters | Hello Miguel, first of all, a ton of thanks to you for creating such an amazing product and making it available for the community; I really appreciate your efforts. I'm a newbie in the Python world, so please bear with me. I'm trying to provide as much info as I can, to help you help me solve this issue.
I'm getting live data from a broker's websocket, wrapped inside a Python class that is stored in another file named "ws" and imported into this file. The broker has provided two functions, `socket` and `custom_message`; the code for them is as follows.
socket
```
def socket(access_token):
    data_type = "symbolData"
    symbol = ["NSE:SBIN-EQ"]
    fs = ws.FyersSocket(access_token=access_token, run_background=False, log_path="/home/Downloads/")
    fs.websocket_data = custom_message
    fs.subscribe(symbol=symbol, data_type=data_type)
    fs.keep_running()
```
custom_message
```
def custom_message(msg):
    print(f"Custom:{msg}")
```
As you can see, the data from the broker's data socket is delivered to the `custom_message` function, but I want to stream this data to a charting library, so I needed a websocket that could work with the above Python functions, and thanks to you I had python-socketio. So I created an ASGI websocket watching your videos.
The `msg` received by `custom_message` is a Python list containing a dictionary. As both functions `socket` and `custom_message` are synchronous and the ASGI server requires async functions, I needed a bridge that could connect these sync functions to the async event functions. So I made `custom_message` receive the message and forward it to an async function. I implemented it as follows:
```
import socketio, asyncio, config
from fyers_api.Websocket import ws
from Access_Token import access_token

sio = socketio.AsyncServer(async_mode='asgi', logger=True, engineio_logger=True, cors_allowed_origins=('*'))
app = socketio.ASGIApp(sio, static_files={
    '/': './public/'
})

access_token = config.client_id + ":" + access_token

@sio.event
async def connect(sid, environ):
    print(sid, "connected")

@sio.event
async def disconnect(sid):
    print(sid, "[disconnected]")

@sio.event
async def sum(sid, data):
    result = data['numbers'][0] + data['numbers'][1]
    await sio.emit('sum_result', {'result': result}, to=sid)
    await sio.emit('reconnect', {'result': 1}, to=sid)
    print(sid, result)

def socket(sid, fysymbol):
    symbol = ["NSE:SBIN-EQ"]
    data_type = "symbolData"
    fs = ws.FyersSocket(sid, fysymbol, access_token=access_token, run_background=False, log_path="/home/akshay/Documents/pytrade/log/")
    fs.websocket_data = custom_message
    fs.subscribe(symbol=symbol, data_type=data_type)
    fs.keep_running()

def custom_message(sid, data, *args):
    asyncio.run(SubAdd(sid, data, *args))

@sio.event
async def SubAdd(sid, data, *args):  # ######## This is what is causing the issue
    await sio.emit('reconnect', {'result': 1}, to=sid)
    if args:
        for arg in args:
            emit_data = str(str(0) + "~" + str(arg['symbol'].replace("-", "~").replace(":", "~")) + "~" + str(arg['timestamp']) + "~" + str(arg['min_open_price']) + "~" + str(arg['min_high_price']) + "~" + str(arg['min_low_price']) + "~" + str(arg['min_close_price']) + "~" + str(arg['min_volume']))
            print(f"emit_data={emit_data}")
            await sio.emit("m", emit_data, to=sid)
    sid = sid
    fysymbol = data['fysymbol']
    print("SubAdd isConnected")
    # socket(sid, fysymbol)

@sio.event
async def SubRemove(sid, data):
    fysymbol, resolution = data['fysymbol'], data['resolution']
    fyersSocket.unsubscribe(symbol=symbol)
    symbol.remove(fysymbol)
    await sio.emit('UnSubscribed', data['fysymbol'], to=sid)
    print(sid, "[UnSubscribed]", data['fysymbol'])
```
This works fine if we look at the server's logs in the terminal, e.g.:
```
4dacEB52_RzDB83fAAAA: Received packet MESSAGE data 2["sum",{"numbers":[1,2]}]
received event "sum" from GOGFG4v_gCrobHcFAAAB [/]
emitting event "sum_result" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["sum_result",{"result":3}]
```
and
```
emitting event "m" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["m","0~NSE~SBIN~EQ~1652259912~471.1~471.15~470.85~471.15~9400"]
```
Terminal Logs
```
uvicorn app:app --port 9999 --reload
INFO: Will watch for changes in these directories: ['/home/akshay/Documents/pytrade/pytrade']
INFO: Uvicorn running on http://127.0.0.1:9999 (Press CTRL+C to quit)
INFO: Started reloader process [12498] using watchgod
Server initialized for asgi.
INFO: Started server process [12509]
INFO: Waiting for application startup.
INFO: Application startup complete.
4dacEB52_RzDB83fAAAA: Sending packet OPEN data {'sid': '4dacEB52_RzDB83fAAAA', 'upgrades': [], 'pingTimeout': 20000, 'pingInterval': 25000}
4dacEB52_RzDB83fAAAA: Received request to upgrade to websocket
INFO: ('127.0.0.1', 40320) - "WebSocket /socket.io/" [accepted]
4dacEB52_RzDB83fAAAA: Upgrade to websocket successful
INFO: connection open
4dacEB52_RzDB83fAAAA: Received packet MESSAGE data 0
GOGFG4v_gCrobHcFAAAB connected
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 0{"sid":"GOGFG4v_gCrobHcFAAAB"}
4dacEB52_RzDB83fAAAA: Received packet MESSAGE data 2["sum",{"numbers":[1,2]}]
received event "sum" from GOGFG4v_gCrobHcFAAAB [/]
emitting event "sum_result" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["sum_result",{"result":3}]
emitting event "reconnect" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["reconnect",{"result":1}]
GOGFG4v_gCrobHcFAAAB 3
4dacEB52_RzDB83fAAAA: Received packet MESSAGE data 2["SubAdd",{"fysymbol":"NSE:SBIN-EQ","resolution":"1"}]
received event "SubAdd" from GOGFG4v_gCrobHcFAAAB [/]
emitting event "reconnect" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["reconnect",{"result":1}]
SubAdd isConnected
sid,fysymbol= ('GOGFG4v_gCrobHcFAAAB', 'NSE:SBIN-EQ')
emitting event "reconnect" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["reconnect",{"result":1}]
emit_data=0~NSE~SBIN~EQ~1652259910~471.1~471.15~470.85~471.1~6627
emitting event "m" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["m","0~NSE~SBIN~EQ~1652259910~471.1~471.15~470.85~471.1~6627"]
emitting event "reconnect" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["reconnect",{"result":1}]
emit_data=0~NSE~SBIN~EQ~1652259911~471.1~471.15~470.85~471.15~6757
emitting event "m" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["m","0~NSE~SBIN~EQ~1652259911~471.1~471.15~470.85~471.15~6757"]
emitting event "reconnect" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["reconnect",{"result":1}]
emit_data=0~NSE~SBIN~EQ~1652259912~471.1~471.15~470.85~471.15~9400
emitting event "m" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["m","0~NSE~SBIN~EQ~1652259912~471.1~471.15~470.85~471.15~9400"]
emitting event "reconnect" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["reconnect",{"result":1}]
emit_data=0~NSE~SBIN~EQ~1652259913~471.1~471.15~470.85~471.15~9450
emitting event "m" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["m","0~NSE~SBIN~EQ~1652259913~471.1~471.15~470.85~471.15~9450"]
emitting event "reconnect" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["reconnect",{"result":1}]
emit_data=0~NSE~SBIN~EQ~1652259914~471.1~471.15~470.85~471.15~9520
emitting event "m" to GOGFG4v_gCrobHcFAAAB [/]
4dacEB52_RzDB83fAAAA: Sending packet MESSAGE data 2["m","0~NSE~SBIN~EQ~1652259914~471.1~471.15~470.85~471.15~9520"]
```
The client receives the `sum_result` and `reconnect` messages from the `sum` event. However, it isn't receiving the (second) `reconnect` and `m` messages from the `SubAdd` event, which is evident from the message logs in devtools.
Logs of devtools

So what and where should I make changes so that the client would receive all messages from all events?
Thank You!!
client.js
```
const socket = io('http://localhost:9999', {
    transports: ['websocket', 'polling', 'flashsocket']
});

const channelToSubscription = new Map();
console.log('ChannelToSubscription=', channelToSubscription);

socket.on('connect', () => {
    console.log('[socket] Connected');
    socket.emit('sum', {numbers: [1, 2]});
});

socket.on('sum_result', (data) => {
    console.log(data.result);
});

socket.on('reconnect', (data) => {
    console.log(data.result);
});

socket.on('disconnect', (reason) => {
    console.log('[socket] Disconnected:', reason);
});

socket.on('error', (error) => {
    console.log('[socket] Error:', error);
});

socket.on('m', data => {
    console.log('[socket] Message:', data);
    const [
        eventTypeStr,
        exchange,
        fromSymbol,
        toSymbol,
        tradeTimeStr,
        openPrice,
        highPrice,
        lowPrice,
        closePrice,
        volumeStr,
    ] = data.split('~');
    //console.log(lowPrice);
    //if (parseInt(eventTypeStr) !== 0) {
    //    // skip all non-TRADE events
    //    return;
    //}
    const tradePrice = parseFloat(openPrice);
    const tradeTime = parseInt(tradeTimeStr) * 1000;
    const openStr = parseFloat(openPrice);
    const highStr = parseFloat(highPrice);
    const lowStr = parseFloat(lowPrice);
    const closeStr = parseFloat(closePrice);
    const volume_Str = parseFloat(volumeStr);
    //console.log(volume_Str);
    const channelString = `${exchange}:${fromSymbol}-${toSymbol}`;
    const subscriptionItem = channelToSubscription.get(channelString);
    if (subscriptionItem === undefined) {
        return;
    }
});
```
| closed | 2022-05-11T09:51:11Z | https://github.com/miguelgrinberg/python-socketio/issues/923 | [] | akshay7892 | 1 |
amidaware/tacticalrmm | django | 1,847 | allow variables to be used in alert templates recipient | **Is your feature request related to a problem? Please describe.**
Currently, sending email alerts to different recipients requires manual work, and since there can be as many as one recipient per device/site, the number of alert templates can quickly get out of control.
**Describe the solution you'd like**
Allow all the possible global/client/site/agent variables to be used as the recipient of an email alert template.
**Describe alternatives you've considered**
A buttload of different templates.
**Additional context**

| open | 2024-04-17T09:19:32Z | 2024-04-17T09:19:32Z | https://github.com/amidaware/tacticalrmm/issues/1847 | [] | P6g9YHK6 | 0 |
datadvance/DjangoChannelsGraphqlWs | graphql | 95 | how to fix it | WebSocket client does not request for the subprotocol graphql-ws! | closed | 2022-09-30T06:55:12Z | 2022-10-14T07:29:31Z | https://github.com/datadvance/DjangoChannelsGraphqlWs/issues/95 | [] | George191 | 0 |
HIT-SCIR/ltp | nlp | 541 | What are the differences between pyltp and the new version of ltp? | I've recently been working on an event extraction task and found that many projects are written with pyltp. When rewriting those methods with ltp4, I can't reproduce pyltp's output. Could you publish a tutorial? | closed | 2021-10-29T03:09:20Z | 2022-09-12T06:49:20Z | https://github.com/HIT-SCIR/ltp/issues/541 | [] | jwc19890114 | 1 |
bloomberg/pytest-memray | pytest | 10 | Ability to persist the binary dump post test run | ## Feature Request
I'd like to allow the user to explicitly request persisting the binary files via a `--memray-persist-bin` flag that takes a folder as an argument. This would change where the files are stored from a temporary location to the passed-in folder. To help identify which binary belongs to which test, I'd also propose adding the test name (`pyfuncitem.nodeid`; it might need to be normalized for characters not allowed in a path) as a suffix (after the uuid).
This would allow users to do further analysis of the binary files after the run (such as generating a flamegraph). This could also be used for #7. Most often I imagine using this in the form of:
```
pytest -k test_memory_usage --memray --memray-persist-bin ./memray-bins
memray flamegraph ./memray-bins/21321asad213421.test_memory_usage.bin
```
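To sketch the nodeid-to-filename normalization mentioned above (the helper name and the exact character policy here are hypothetical, not pytest-memray code):

```python
import re
import uuid
from pathlib import Path

def persisted_bin_path(base_dir: str, nodeid: str) -> Path:
    """Hypothetical helper: build a persisted dump path from a test nodeid."""
    # nodeids like "tests/test_mem.py::test_memory_usage[case-1]" contain
    # characters that are hostile on some filesystems, so map them to "_".
    safe = re.sub(r"[^A-Za-z0-9_.-]", "_", nodeid)
    return Path(base_dir) / f"{uuid.uuid4().hex}.{safe}.bin"

print(persisted_bin_path("./memray-bins", "tests/test_mem.py::test_memory_usage"))
```

That keeps the uuid first (so names stay unique) while making the originating test easy to spot at a glance.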
**Describe alternatives you've considered**
Users could rewrite their test as a python module invocation and use memray directly. The downside is that making fixtures work as function calls can be complicated.
| closed | 2022-05-10T20:51:13Z | 2022-05-17T16:36:50Z | https://github.com/bloomberg/pytest-memray/issues/10 | [] | gaborbernat | 0 |
matplotlib/matplotlib | data-science | 29,204 | [Bug]: twiny in log scale can't set `tick_params(top=False)` | ### Bug summary
If `ax2 = ax.twiny()` is in linear scale, `ax2.tick_params(top=False)` works fine, but it fails once `ax2.set_xscale('log')` is set.
### Code for reproduction
```Python
import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, 1)
x = np.arange(0.1, 100, 0.1)
y = np.sin(x)
xlim = [1e-1, 1e2]
ax.plot(x, y)
ax.set_xlim(xlim)
ax.set_xscale('log')
ax2 = ax.twiny()
ax2.set_xscale('log')
ax2.set_xlim(xlim)
ax2.tick_params(top=False, labeltop=False) # top=False not work
```
### Actual outcome
Ticks on top still exist.

### Expected outcome
Whereas when I comment out `ax2.set_xscale('log')`, the ticks on top disappear.

### Additional information
+ How I triggered this bug:
I want to plot two lines in log scale, with the same x-limits but different x-ticks and x-ticklabels, so there will be two rows of x-ticklabels sharing the same x-axis.
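Possibly relevant: `tick_params` defaults to `which='major'`, and a log scale adds *minor* ticks on the axis. Passing `which='both'` makes the top ticks disappear for me (a sketch of the workaround with the same setup as above; the diagnosis is my assumption):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots(1, 1)
x = np.arange(0.1, 100, 0.1)
ax.plot(x, np.sin(x))
ax.set_xlim([1e-1, 1e2])
ax.set_xscale('log')
ax2 = ax.twiny()
ax2.set_xscale('log')
ax2.set_xlim([1e-1, 1e2])
# which='both' also covers the minor ticks that the log scale introduces:
ax2.tick_params(which='both', top=False, labeltop=False)
fig.canvas.draw()
```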
### Operating system
Ubuntu
### Matplotlib Version
3.9.2
### Matplotlib Backend
module://matplotlib_inline.backend_inline
### Python version
Python 3.10.14
### Jupyter version
4.2.5
### Installation
pip | closed | 2024-11-29T02:54:11Z | 2024-11-29T16:27:08Z | https://github.com/matplotlib/matplotlib/issues/29204 | [
"Community support"
] | Dengda98 | 1 |
davidsandberg/facenet | computer-vision | 756 | ImportError: No module named src.generative.models.dfc_vae | When I execute this command on Ubuntu:
python src/generative/train_vae.py src.generative.models.dfc_vae ~/datasets/clf_mtcnnpy_128 src/models/inception_resnet_v1 ~/models/export/20170512-110547/model-20170512-110547.ckpt-250000 --models_base_dir ~/vae/ --reconstruction_loss_type PERCEPTUAL --loss_features 'Conv2d_1a_3x3,Conv2d_2a_3x3,Conv2d_2b_3x3' --max_nrof_steps 50000 --batch_size 128 --latent_var_size 100 --initial_learning_rate 0.0002 --alfa 1.0 --beta 0.5
This error message occurs:
**ImportError: No module named src.generative.models.dfc_vae** | open | 2018-05-23T08:27:37Z | 2018-05-24T16:26:33Z | https://github.com/davidsandberg/facenet/issues/756 | [] | praveenkumarchandaliya | 1 |
horovod/horovod | machine-learning | 3,170 | Stall ranks with tf.keras.callbacks.TensorBoard | **Environment:**
1. Framework: TensorFlow
2. Framework version: 2.5.0
3. Horovod version: 0.22.1
4. MPI version: 3.0.0
5. CUDA version: 11.2
6. NCCL version: 2.8.4-1+cuda10.2
7. Python version: 3.6.9
8. Spark / PySpark version:
9. Ray version:
10. OS and version: 18.04
11. GCC version: 7.5.0
12. CMake version: 3.10.2
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
Horovod found stall ranks after enable Tensorboard callback on [tensorflow2_keras_mnist.py](https://github.com/horovod/horovod/blob/master/examples/tensorflow2/tensorflow2_keras_mnist.py).
To reproduce it, update TenosorBoard parameters as follows:
```
callbacks.append(tf.keras.callbacks.TensorBoard('/tmp/log', update_freq=1000))
```

| open | 2021-09-17T12:33:32Z | 2022-07-01T02:52:25Z | https://github.com/horovod/horovod/issues/3170 | [
"bug"
] | acmore | 10 |
kizniche/Mycodo | automation | 644 | Backup transfer to another SD Card | How is it possible to transfer a backup from SD card A to another SD card B and restore it there?
I tried to copy the backup, but one file can't be copied.

| closed | 2019-03-31T00:11:01Z | 2019-04-02T21:18:01Z | https://github.com/kizniche/Mycodo/issues/644 | [] | RynFlutsch | 5 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 442 | Forgetting English During Chinese LLM Training | ### Check before submitting issues
- [X] Make sure to pull the latest code, as some issues and bugs have been fixed.
- [X] I have read the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/FAQ) AND searched for similar issues and did not find a similar problem or solution
- [X] Third-party plugin issues - e.g., [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), we recommend checking the corresponding project for solutions
### Type of Issue
Performance issue
### Base Model
Chinese-LLaMA-2 (7B/13B)
### Operating System
Linux
### Describe your issue in detail
First of all, I would like to express my gratitude for the amazing work you and your team have done in developing LLMs.
I have been using Llama2 for training models in a different language, and I have noticed that the model doesn't work well in English after training: it cannot produce English answers. I also checked the Chinese model, and it didn't answer in English either, even when I asked my question in English.
I was wondering if you have checked for forgetting in English and published the results, and whether this forgetting was done on purpose.
I would also like to know: is there anything we can do to avoid the forgetting? I would appreciate any insights or suggestions you may have on this matter.
Thank you again for your hard work and dedication to advancing the field of language modeling.
Best regards,
### Dependencies (must be provided for code-related issues)
```
# Please copy-and-paste your dependencies here.
```
### Execution logs or screenshots
```
torchrun --nnodes 1 --nproc_per_node 4 run_clm_pt_with_peft.py \
--deepspeed ds_zero2_no_offload.json \
--model_name_or_path /home/hadoop/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-hf/snapshots/6fdf2e60f86ff2481f2241aaee459f85b5b0bbb9/ \
--tokenizer_name_or_path /home/hadoop/abolfazl/Chinese-LLaMA-Alpaca-2/scripts/tokenizer/merged_tokenizer_hf \
--dataset_dir /home/hadoop/abolfazl/parvin2 \
--data_cache_dir /home/hadoop/abolfazl/Chinese-LLaMA-Alpaca-2/scripts/training/cache \
--validation_split_percentage 0.001 \
--per_device_train_batch_size 8 \
--do_train \
--seed $RANDOM \
--fp16 \
--num_train_epochs 1 \
--lr_scheduler_type cosine \
--learning_rate 2e-4 \
--warmup_ratio 0.001 \
--weight_decay 0.001 \
--logging_strategy steps \
--logging_steps 10 \
--save_strategy steps \
--save_total_limit 3 \
--save_steps 1000 \
--gradient_accumulation_steps 1 \
--preprocessing_num_workers 8 \
--block_size 128 \
--output_dir /home/hadoop/abolfazl/Chinese-LLaMA-Alpaca-2/out_pt_secondtry \
--overwrite_output_dir \
--ddp_timeout 30000 \
--logging_first_step True \
--lora_rank 64 \
--lora_alpha 16 \
--trainable "q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj" \
--lora_dropout 0.05 \
--modules_to_save "embed_tokens,lm_head" \
--torch_dtype float16 \
--load_in_kbits 4 \
--gradient_checkpointing \
--ddp_find_unused_parameters False``` | closed | 2023-12-05T13:01:06Z | 2024-01-14T06:47:58Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/442 | [
"stale"
] | Abolfazl-kr | 11 |
sqlalchemy/alembic | sqlalchemy | 868 | ENUM with metadata is not translated to sql CREATE TYPE in autogenerate | Hi,
I know that ENUM autogeneration is not entirely complete and polished, but I think I might have encountered not something that doesn't work, but something that works incorrectly :)
**Describe the bug**
1. Autogenerate for postgres ENUM with `metadata=...` generates a migration with `Metadata(bind=None)` without importing `Metadata`, resulting in NameError when running migration.
2. The enum type is not created in sql if specified with `metadata=...`, even though it seems like a well-documented use-case in [sqlalchemy docs](https://docs.sqlalchemy.org/en/13/dialects/postgresql.html#sqlalchemy.dialects.postgresql.ENUM).
**Expected behavior**
- `from sqlalchemy import MetaData`
- Create the enum type in the database
**To Reproduce**
```py
from enum import Enum
from sqlalchemy import BigInteger, Column, MetaData, Table
from sqlalchemy.dialects.postgresql import ENUM
metadata = MetaData(schema="myschema")
class SomeEnum(str, Enum):
OPEN = "OPEN"
CLOSED = "CLOSED"
some_enum = ENUM(
SomeEnum,
metadata=metadata,
schema=metadata.schema,
)
some_table = Table(
"some_table",
metadata,
Column("id", BigInteger, primary_key=True),
Column("some_enum_value", some_enum),
)
```
Resulting migration file:
```py
"""initial
Revision ID: 653c623403f4
Revises:
Create Date: 2021-06-24 18:15:52.750889
"""
from alembic import op
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql
# revision identifiers, used by Alembic.
revision = '653c623403f4'
down_revision = None
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('some_table',
sa.Column('id', sa.BigInteger(), nullable=False),
sa.Column('some_enum_value', postgresql.ENUM('OPEN', 'CLOSED', name='someenum', schema='myschema', metadata=MetaData(bind=None)), nullable=True),
sa.PrimaryKeyConstraint('id'),
schema='myschema'
)
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table('some_table', schema='myschema')
# ### end Alembic commands ###
```
**Error**
```
18:15 $ alembic upgrade head --sql
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Generating static SQL
INFO [alembic.runtime.migration] Will assume transactional DDL.
BEGIN;
CREATE TABLE myschema.alembic_version (
version_num VARCHAR(32) NOT NULL,
CONSTRAINT alembic_version_pkc PRIMARY KEY (version_num)
);
INFO [alembic.runtime.migration] Running upgrade -> 653c623403f4, initial
-- Running upgrade -> 653c623403f4
Traceback (most recent call last):
File "/home/.../bin/alembic", line 8, in <module>
sys.exit(main())
File "/home/.../lib/python3.8/site-packages/alembic/config.py", line 559, in main
CommandLine(prog=prog).main(argv=argv)
File "/home/.../lib/python3.8/site-packages/alembic/config.py", line 553, in main
self.run_cmd(cfg, options)
File "/home/.../lib/python3.8/site-packages/alembic/config.py", line 530, in run_cmd
fn(
File "/home/.../lib/python3.8/site-packages/alembic/command.py", line 293, in upgrade
script.run_env()
File "/home/.../lib/python3.8/site-packages/alembic/script/base.py", line 490, in run_env
util.load_python_file(self.dir, "env.py")
File "/home/.../lib/python3.8/site-packages/alembic/util/pyfiles.py", line 97, in load_python_file
module = load_module_py(module_id, path)
File "/home/.../lib/python3.8/site-packages/alembic/util/compat.py", line 184, in load_module_py
spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "migrations/env.py", line 90, in <module>
run_migrations_offline()
File "migrations/env.py", line 60, in run_migrations_offline
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/home/.../lib/python3.8/site-packages/alembic/runtime/environment.py", line 813, in run_migrations
self.get_context().run_migrations(**kw)
File "/home/.../lib/python3.8/site-packages/alembic/runtime/migration.py", line 561, in run_migrations
step.migration_fn(**kw)
File "/home/.../migrations/versions/2021_06_24_653c623403f4_initial.py", line 23, in upgrade
sa.Column('some_enum_value', postgresql.ENUM('OPEN', 'CLOSED', name='someenum', schema='myschema', metadata=MetaData(bind=None)), nullable=True),
NameError: name 'MetaData' is not defined
```
Note that adding the missing import (`from sqlalchemy import MetaData`) results in a proper sql, but without the `CREATE TYPE`:
```
alembic upgrade head --sql
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Generating static SQL
INFO [alembic.runtime.migration] Will assume transactional DDL.
BEGIN;
CREATE TABLE myschema.alembic_version (
version_num VARCHAR(32) NOT NULL,
CONSTRAINT alembic_version_pkc PRIMARY KEY (version_num)
);
INFO [alembic.runtime.migration] Running upgrade -> 653c623403f4, initial
-- Running upgrade -> 653c623403f4
CREATE TABLE myschema.some_table (
id BIGSERIAL NOT NULL,
some_enum_value myschema.someenum,
PRIMARY KEY (id)
);
INSERT INTO myschema.alembic_version (version_num) VALUES ('653c623403f4');
COMMIT;
```
**Versions.**
- OS: Ubuntu 20.04.2
- Python: 3.8.5
- Alembic: 1.6.5
- SQLAlchemy: 1.3.24 (I can't upgrade to 1.4 yet)
- Database: PostgreSQL 12.7
- DBAPI: psycopg2-binary 2.8.6
**Additional context**
Note that removing the `metadata` argument makes the issue disappear:
```py
some_enum = ENUM(
SomeEnum,
# metadata=metadata,
schema=metadata.schema,
)
```
gives no `metadata=Metadata(bind=None)`, so no NameError is given as a result:
```py
sa.Column('some_enum_value', postgresql.ENUM('OPEN', 'CLOSED', name='someenum', schema='myschema'), nullable=True),
```
Also, the resulting SQL is as expected:
```sql
-- Running upgrade -> c2d662ec840a
CREATE TYPE myschema.someenum AS ENUM ('OPEN', 'CLOSED');
CREATE TABLE myschema.some_table (
id BIGSERIAL NOT NULL,
some_enum_value myschema.someenum,
PRIMARY KEY (id)
);
```
According to the [sqlalchemy docs](https://docs.sqlalchemy.org/en/13/dialects/postgresql.html#sqlalchemy.dialects.postgresql.ENUM):
>To use a common enumerated type between multiple tables, the best practice is to declare the Enum or ENUM independently, and associate it with the MetaData object itself:
>...
>If we specify checkfirst=True, the individual table-level create operation will check for the ENUM and create if not exists:
So I guess in the scenario where I provided the `metadata` argument, it *is expected* that the enum would not be created by the sqlalchemy **table create** unless asked to with `checkfirst=True`.
However, I think that alembic should either produce the same SQL statements in both cases (metadata argument to the enum specified or not), or at least document how to enforce the checkfirst-like behaviour.
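In the meantime, the workaround I've settled on is emitting the type DDL explicitly inside the migration; `ENUM.create()`/`.drop()` with `checkfirst` are the documented SQLAlchemy calls. A sketch using the names from the example above (the `op.create_table`/`op.drop_table` bodies stay as autogenerated; this is a fragment, not a full migration):

```python
from alembic import op
from sqlalchemy.dialects import postgresql

someenum = postgresql.ENUM("OPEN", "CLOSED", name="someenum", schema="myschema")

def upgrade():
    # Emit CREATE TYPE ourselves, since the table create skips it when the
    # ENUM is associated with a MetaData; checkfirst avoids "already exists".
    someenum.create(op.get_bind(), checkfirst=True)
    # ... op.create_table(...) as autogenerated ...

def downgrade():
    # ... op.drop_table(...) as autogenerated ...
    someenum.drop(op.get_bind(), checkfirst=True)
```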
| open | 2021-06-24T19:33:12Z | 2021-06-25T08:44:41Z | https://github.com/sqlalchemy/alembic/issues/868 | [
"duplicate",
"question",
"autogenerate for enums"
] | bluefish6 | 1 |
smiley/steamapi | rest-api | 43 | Adding playerstats attribute to a game entity. | Currently, a lot of information is dropped from the GetUserStatsForGame response. Only achievements are used.
But there is more useful information for some games in 'playerstats'.
I was trying to create a pull request with that tweak :) | closed | 2017-02-19T03:16:31Z | 2019-04-11T02:05:03Z | https://github.com/smiley/steamapi/issues/43 | [
"question",
"steamworks"
] | theSimplex | 4 |
deepspeedai/DeepSpeed | machine-learning | 5,776 | [BUG] Universal checkpoint conversion - "Cannot find layer_01* files in there" | I am trying to use the universal checkpoint conversion code, `python ds_to_universal.py`, but I get an error that it can't find a layer number. I'm not sure why, but I am missing layers 01 and 16; my code just skips creating them when saving the checkpoint. The DeepSpeed checkpoint conversion expects them and therefore breaks. Does that sound familiar to anyone? Thanks in advance!
I am using the GPT-NeoX codebase and have DeepSpeed 0.14.4 installed.
Error:
```
.../global_step4 seems a bogus DeepSpeed checkpoint folder: Cannot find layer_01* files in there.
```
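To double-check which layer indices are actually absent, I used a small script like this (the helper is my own; it only lists files and uses no DeepSpeed APIs):

```python
import re
from pathlib import Path

def missing_layer_indices(ckpt_dir):
    """Hypothetical helper: find gaps in the layer_NN-model_* file sequence."""
    idx = sorted({
        int(m.group(1))
        for p in Path(ckpt_dir).glob("layer_*-model_*")
        if (m := re.match(r"layer_(\d+)-", p.name))
    })
    return [i for i in range(idx[0], idx[-1] + 1) if i not in idx] if idx else []

# On my checkpoint this reports [1, 16], matching the files listed below.
```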
Here are the files in my save directory:
```
bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt layer_04-model_00-model_states.pt layer_09-model_00-model_states.pt layer_14-model_00-model_states.pt
configs layer_05-model_00-model_states.pt layer_10-model_00-model_states.pt layer_15-model_00-model_states.pt
layer_00-model_00-model_states.pt layer_06-model_00-model_states.pt layer_11-model_00-model_states.pt layer_17-model_00-model_states.pt
layer_02-model_00-model_states.pt layer_07-model_00-model_states.pt layer_12-model_00-model_states.pt mp_rank_00_model_states.pt
layer_03-model_00-model_states.pt layer_08-model_00-model_states.pt layer_13-model_00-model_states.pt
``` | open | 2024-07-17T07:06:08Z | 2024-09-09T12:09:31Z | https://github.com/deepspeedai/DeepSpeed/issues/5776 | [
"bug",
"training"
] | exnx | 3 |
dask/dask | scikit-learn | 11,343 | Bug: Can't perform a (meaningful) "outer" concatenation with dask-expr on `axis=1` | <!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**: I *think* this is a bug, because the behaviour is inconsistent between dask-expr and "original-dask". But it is for a bit of an edge case that I can't see coming up a whole lot.
Concatenating two dataframes of different lengths throws an `AssertionError` rather than following the previous behaviour of discarding non-matching indexes or null-filling them, depending on the "join" keyword. (I don't feel like that's a very clear way of phrasing it, sorry! The example below should help)
**Minimal Complete Verifiable Example**:
```python
import dask.dataframe as dd
one = dd.from_dict({"a": [1, 2, 3], "b": [1, 2, 3]}, npartitions=1)
two = dd.from_dict({"c": [1, 2]}, npartitions=1)
dd.concat([one, two], axis=1, join="outer")
# previous result from old versions of dask (i.e. 2024.2)
# a b c
# 0 1 1 1.0
# 1 2 2 2.0
# 2 3 3 NaN
# Now throws an AssertionError
dd.concat([one, two], axis=1, join="inner")
# previous result from old versions of dask (i.e. 2024.2)
# a b c
# 0 1 1 1.0
# 1 2 2 2.0
# now also throws an AssertionError
```
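For reference, this is what plain pandas does with the same frames, i.e. the behaviour the old dask versions matched (a sanity check in pandas, not dask):

```python
import pandas as pd

one = pd.DataFrame({"a": [1, 2, 3], "b": [1, 2, 3]})
two = pd.DataFrame({"c": [1, 2]})

outer = pd.concat([one, two], axis=1, join="outer")  # union of indexes: 3 rows, c[2] is NaN
inner = pd.concat([one, two], axis=1, join="inner")  # intersection of indexes: 2 rows
```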
**Anything else we need to know?**:
The stack trace gives a good hint about where/why this happens: `dask_expr/_expr.py` has this line:
`assert arg.divisions == dependencies[0].divisions`
Which, in the case of the two concats above, I guess *isn't* true: either one needs to be null-filled or the other needs to be shortened, or else they won't match up.
**Environment**:
- Dask version: 2024.8.0
- Python version: 3.10.12
- Operating System: Ubuntu (Jammy Jellyfish)
- Install method (conda, pip, source): pip
| closed | 2024-08-23T13:33:03Z | 2024-08-26T15:36:10Z | https://github.com/dask/dask/issues/11343 | [
"dask-expr"
] | benrutter | 1 |
huggingface/text-generation-inference | nlp | 2,853 | Entire system crashes when get to warm up model | ### System Info
```
model=meta-llama/Llama-3.3-70B-Instruct
# share a volume with the Docker container to avoid downloading weights every run
volume=/srv/ai/data/tgi
docker run --gpus "1,2,3,4" --shm-size 1g -e HF_TOKEN=[TOKEN] -p 8080:80 -v $volume:/data \
ghcr.io/huggingface/text-generation-inference:3.0.0 \
--model-id $model \
--quantize eetq \
--cuda-memory-fraction 0.95
```
4x 3090 tis, epyc cpu, 256gb ram
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
Run docker command above
```
2024-12-17T17:23:53.961980Z INFO text_generation_launcher: Using prefill chunking = True
2024-12-17T17:23:54.547663Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T17:23:54.547663Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T17:23:54.558361Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T17:23:54.572348Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T17:23:54.821433Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-1
2024-12-17T17:23:54.821492Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-3
2024-12-17T17:23:54.821530Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-2
2024-12-17T17:23:54.847944Z INFO shard-manager: text_generation_launcher: Shard ready in 150.41845764s rank=3
2024-12-17T17:23:54.858639Z INFO shard-manager: text_generation_launcher: Shard ready in 150.432820265s rank=1
2024-12-17T17:23:54.872643Z INFO shard-manager: text_generation_launcher: Shard ready in 150.439607673s rank=2
2024-12-17T17:23:55.047221Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-0
2024-12-17T17:23:55.048286Z INFO shard-manager: text_generation_launcher: Shard ready in 150.622573521s rank=0
2024-12-17T17:23:55.115403Z INFO text_generation_launcher: Starting Webserver
2024-12-17T17:23:55.210971Z INFO text_generation_router_v3: backends/v3/src/lib.rs:125: Warming up model
2024-12-17T17:23:55.231460Z INFO text_generation_launcher: Using optimized Triton indexing kernels.
```
After this the server dies and I have to manually power-cycle the machine.
Full logs from trying a smaller model, with CUDA graphs disabled:
```
2024-12-17T18:03:27.087401Z INFO text_generation_launcher: Args {
model_id: "Qwen/Qwen2.5-32B-Instruct",
revision: None,
validation_workers: 2,
sharded: None,
num_shard: None,
quantize: Some(
Eetq,
),
speculate: None,
dtype: None,
kv_cache_dtype: None,
trust_remote_code: false,
max_concurrent_requests: 128,
max_best_of: 2,
max_stop_sequences: 4,
max_top_n_tokens: 5,
max_input_tokens: None,
max_input_length: None,
max_total_tokens: None,
waiting_served_ratio: 0.3,
max_batch_prefill_tokens: None,
max_batch_total_tokens: None,
max_waiting_tokens: 20,
max_batch_size: None,
cuda_graphs: Some(
[
0,
],
),
hostname: "4eee9dca0df9",
port: 80,
shard_uds_path: "/tmp/text-generation-server",
master_addr: "localhost",
master_port: 29500,
huggingface_hub_cache: None,
weights_cache_override: None,
disable_custom_kernels: false,
cuda_memory_fraction: 0.95,
rope_scaling: None,
rope_factor: None,
json_output: false,
otlp_endpoint: None,
otlp_service_name: "text-generation-inference.router",
cors_allow_origin: [],
api_key: None,
watermark_gamma: None,
watermark_delta: None,
ngrok: false,
ngrok_authtoken: None,
ngrok_edge: None,
tokenizer_config_path: None,
disable_grammar_support: false,
env: false,
max_client_batch_size: 4,
lora_adapters: None,
usage_stats: On,
payload_limit: 2000000,
enable_prefill_logprobs: false,
}
2024-12-17T18:03:27.088023Z INFO hf_hub: Token file not found "/data/token"
2024-12-17T18:03:28.994330Z INFO text_generation_launcher: Using attention flashinfer - Prefix caching true
2024-12-17T18:03:28.994349Z INFO text_generation_launcher: Sharding model on 4 processes
2024-12-17T18:03:29.030950Z WARN text_generation_launcher: Unkown compute for card nvidia-geforce-rtx-3090-ti
2024-12-17T18:03:29.064926Z INFO text_generation_launcher: Default `max_batch_prefill_tokens` to 4096
2024-12-17T18:03:29.065078Z INFO download: text_generation_launcher: Starting check and download process for Qwen/Qwen2.5-32B-Instruct
2024-12-17T18:03:32.104130Z INFO text_generation_launcher: Files are already present on the host. Skipping download.
2024-12-17T18:03:32.680081Z INFO download: text_generation_launcher: Successfully downloaded weights for Qwen/Qwen2.5-32B-Instruct
2024-12-17T18:03:32.680348Z INFO shard-manager: text_generation_launcher: Starting shard rank=1
2024-12-17T18:03:32.680364Z INFO shard-manager: text_generation_launcher: Starting shard rank=0
2024-12-17T18:03:32.680439Z INFO shard-manager: text_generation_launcher: Starting shard rank=3
2024-12-17T18:03:32.686107Z INFO shard-manager: text_generation_launcher: Starting shard rank=2
2024-12-17T18:03:35.215815Z INFO text_generation_launcher: Using prefix caching = True
2024-12-17T18:03:35.215842Z INFO text_generation_launcher: Using Attention = flashinfer
2024-12-17T18:03:42.713034Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T18:03:42.714007Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T18:03:42.714678Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T18:03:42.721143Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T18:03:52.722256Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T18:03:52.723416Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T18:03:52.723960Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T18:03:52.730231Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T18:04:02.731685Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T18:04:02.733008Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T18:04:02.733511Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T18:04:02.739340Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T18:04:12.740983Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T18:04:12.742778Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T18:04:12.743260Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T18:04:12.748509Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T18:04:22.750201Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T18:04:22.752482Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T18:04:22.753057Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T18:04:22.757785Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T18:04:32.759340Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T18:04:32.762067Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T18:04:32.762852Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T18:04:32.767034Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T18:04:42.768492Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T18:04:42.771758Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T18:04:42.772535Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T18:04:42.776268Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T18:04:52.777706Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T18:04:52.781362Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T18:04:52.782289Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T18:04:52.785605Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T18:05:02.786995Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T18:05:02.790997Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T18:05:02.792054Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T18:05:02.794933Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T18:05:12.796209Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T18:05:12.800615Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T18:05:12.802012Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T18:05:12.804257Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T18:05:22.805536Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T18:05:22.810307Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T18:05:22.811833Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T18:05:22.813416Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T18:05:32.814759Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T18:05:32.819792Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T18:05:32.821590Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T18:05:32.821834Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T18:05:42.824027Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T18:05:42.829566Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T18:05:42.830560Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T18:05:42.831422Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T18:05:52.833387Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=3
2024-12-17T18:05:52.839573Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
2024-12-17T18:05:52.840175Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
2024-12-17T18:05:52.841278Z INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
2024-12-17T18:06:01.763800Z INFO text_generation_launcher: Using prefill chunking = True
2024-12-17T18:06:02.627022Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-1
2024-12-17T18:06:02.627076Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-3
2024-12-17T18:06:02.627110Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-0
2024-12-17T18:06:02.642621Z INFO shard-manager: text_generation_launcher: Shard ready in 149.940971364s rank=3
2024-12-17T18:06:02.649583Z INFO shard-manager: text_generation_launcher: Shard ready in 149.948711278s rank=0
2024-12-17T18:06:02.650706Z INFO shard-manager: text_generation_launcher: Shard ready in 149.949875248s rank=1
2024-12-17T18:06:02.848613Z INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-2
2024-12-17T18:06:02.849891Z INFO shard-manager: text_generation_launcher: Shard ready in 150.143446295s rank=2
2024-12-17T18:06:02.909856Z INFO text_generation_launcher: Starting Webserver
2024-12-17T18:06:03.001599Z INFO text_generation_router_v3: backends/v3/src/lib.rs:125: Warming up model
2024-12-17T18:06:03.023245Z INFO text_generation_launcher: Using optimized Triton indexing kernels.
```
### Expected behavior
No system crash | open | 2024-12-17T18:02:57Z | 2024-12-18T04:46:58Z | https://github.com/huggingface/text-generation-inference/issues/2853 | [] | ad-astra-video | 1 |
jumpserver/jumpserver | django | 14,951 | [Bug] jms_core sleep 365 days | ### Product Version
v4.7.0-ce
### Product Edition
- [x] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online Installation (One-click command installation)
- [x] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
os: centos 7
kernel: 5.4.278-1.el7.elrepo.x86_64
### 🐛 Bug Description
The installation gets stuck because of the `sleep 365 days`.
### Recurrence Steps
1. alter the config-example.txt
2. run ./jmsctl.sh install
3. it creates the jms_core container and gets stuck
4. docker logs -f jms_core shows `sleep 365 days`
5. tried both the offline scripts/images and the Docker Hub image jumpserver/core:v4.7.0-ce
### Expected Behavior
_No response_
### Additional Information
_No response_
### Attempted Solutions
_No response_ | closed | 2025-02-28T06:57:11Z | 2025-03-04T06:27:12Z | https://github.com/jumpserver/jumpserver/issues/14951 | [
"🐛 Bug"
] | spiritman1990 | 8 |
horovod/horovod | tensorflow | 2,976 | The speed of multi-node is slower than single-node | I used PyTorch and Horovod with two 1080 Ti GPUs on one machine.
When I use a single node, each epoch step takes 3 s, but it takes 7 s when I use two nodes.
Is it normal for the data exchange between nodes to take so long? | closed | 2021-06-11T09:42:32Z | 2024-01-31T03:54:50Z | https://github.com/horovod/horovod/issues/2976 | [] | walt676 | 1 |
ludwig-ai/ludwig | data-science | 3,430 | Throws 'IndexError: Dimension specified as 0 but tensor has no dimensions' during training. | **Describe the bug**
I am trying to run the LLM few-shot example (https://github.com/ludwig-ai/ludwig/blob/master/examples/llm_few_shot_learning/simple_model_training.py) on Google Colab and I get the following error in the training stage.
==== Log ====
```
INFO:ludwig.models.llm:Done.
INFO:ludwig.utils.tokenizers:Loaded HuggingFace implementation of facebook/opt-350m tokenizer
INFO:ludwig.trainers.trainer:Tuning batch size...
INFO:ludwig.utils.batch_size_tuner:Tuning batch size...
INFO:ludwig.utils.batch_size_tuner:Exploring batch_size=1
INFO:ludwig.utils.checkpoint_utils:Successfully loaded model weights from /tmp/tmpnqj9shge/latest.ckpt.
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
[<ipython-input-15-cbbfc30da30b>](https://localhost:8080/#) in <cell line: 6>()
4 preprocessed_data, # tuple Ludwig Dataset objects of pre-processed training data
5 output_directory, # location of training results stored on disk
----> 6 ) = model.train(
7 dataset=df,experiment_name="simple_experiment", model_name="simple_model", skip_save_processed_input=False)
8
10 frames
[/usr/local/lib/python3.10/dist-packages/ludwig/models/llm.py](https://localhost:8080/#) in _remove_left_padding(self, input_ids_sample)
629 else:
630 pad_idx = 0
--> 631 input_ids_sample_no_padding = input_ids_sample[pad_idx + 1 :]
632
633 # Start from the first BOS token
IndexError: Dimension specified as 0 but tensor has no dimensions
```
==== End of Log ====
**Config is as follows**
```
config = yaml.unsafe_load(
"""
model_type: llm
model_name: facebook/opt-350m
generation:
  temperature: 0.1
  top_p: 0.75
  top_k: 40
  num_beams: 4
  max_new_tokens: 64
prompt:
  task: "Classify the sample input as either negative, neutral, or positive."
  retrieval:
    type: semantic
    k: 3
    model_name: paraphrase-MiniLM-L3-v2
input_features:
  - name: review
    type: text
output_features:
  - name: label
    type: category
    preprocessing:
      fallback_label: "neutral"
    decoder:
      type: category_extractor
      match:
        "negative":
          type: contains
          value: "negative"
        "neutral":
          type: contains
          value: "neutral"
        "positive":
          type: contains
          value: "positive"
preprocessing:
  split:
    type: fixed
trainer:
  type: finetune
  epochs: 2
"""
)
```
**Environment (please complete the following information):**
OS: Colab
Python version 3.10
Ludwig version 0.8
**Additional context**

| closed | 2023-06-05T09:51:38Z | 2024-10-18T13:36:14Z | https://github.com/ludwig-ai/ludwig/issues/3430 | [] | chayanray | 6 |
Miksus/rocketry | automation | 227 | Conditions: cron "0/20 * * * *" is not same as "*/20 * * * *" | **Describe the bug**
"0/20 * * * *" cron pattern launches my tasks hourly while "*/20 * * * *" launches tasks every 20 minutes
Here is a screenshot of my log stats from Heroku for the 0/20 * * * * pattern:

Here is one for the */20 * * * * pattern:

**Expected behavior**
I expect these patterns to behave in the same way
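Under standard cron semantics the two patterns should indeed match: in the minute field, `a/b` means "start at a, step b", and `*/b` is shorthand for `0/b`. A stdlib-only sketch of that expansion (my own illustration of standard semantics, not Rocketry's actual parser):

```python
def expand_minutes(field):
    """Expand a cron minute field ("*", "N", "*/S", or "N/S") into a set of minutes."""
    base, sep, step = field.partition("/")
    start = 0 if base == "*" else int(base)
    if sep:                      # "start/step": start, start+step, ... up to 59
        return set(range(start, 60, int(step)))
    if base == "*":
        return set(range(60))    # "*": every minute
    return {start}               # bare number: that minute only

print(sorted(expand_minutes("*/20")))  # [0, 20, 40]
print(sorted(expand_minutes("0/20")))  # [0, 20, 40]
```

If Rocketry fires `0/20` only hourly, its parser diverges from this expansion (possibly treating `0/20` as just minute 0), which would make this a parser bug rather than expected behavior.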
**Desktop (please complete the following information):**
- OS: Windows 10 Professional
- Python version: 3.11.6
requirements.txt:
pydantic==1.10.10
fastapi
rocketry
requests
boto3
uvicorn[standard]
| closed | 2023-11-15T13:57:31Z | 2023-12-14T09:32:03Z | https://github.com/Miksus/rocketry/issues/227 | [
"bug"
] | nikitazavadsky | 2 |
yezz123/authx | pydantic | 560 | ♻️ refactor error handling | closed | 2024-03-30T23:49:05Z | 2024-04-04T02:17:15Z | https://github.com/yezz123/authx/issues/560 | [
"enhancement",
"v1"
] | yezz123 | 0 | |
gradio-app/gradio | deep-learning | 10,665 | Gradio Microphone recording not working | ### Describe the bug
1. Go to https://www.gradio.app/guides/real-time-speech-recognition
2. Record yourself under https://www.gradio.app/guides/real-time-speech-recognition#2-create-a-full-context-asr-demo-with-transformers
3. Hit submit after website hangs on you
4. The waveform is not as recorded and the output text is not what you said.
Issues
- performance issue with Gradio Microphone. Website will not be responsive
- The waveform is not what I recorded.
Reporting this for gradio 5.17.1. There are reports for other versions of gradio as well.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
from openai import OpenAI
OPEN_AI_API_KEY = "use your own"
def transcribe(audio):
text = ""
client = OpenAI(api_key=OPEN_AI_API_KEY)
try:
with open(audio, "rb") as audio_file:
text = client.audio.transcriptions.create(
file = audio_file,
response_format="text",
model = "whisper-1",
language = "en",
)
except Exception as e:
print(str(e))
print (f"text is {text}")
return text
demo = gr.Interface(
transcribe,
gr.Microphone(value=None, sources="microphone", type="filepath", format="wav"),
"text",
)
demo.launch()
```
`pyproject.toml` below
```
[tool.poetry]
name = "gradio-mic"
version = "0.1.0"
[tool.poetry.dependencies]
python = "^3.12"
gradio = "^5.17.1"
openai = "^1.64.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```
### Screenshot

### Logs
```shell
```
### System Info
```shell
gradio environment
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.17.1
gradio_client version: 1.7.1
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.8
ffmpy: 0.5.0
gradio-client==1.7.1 is not installed.
httpx: 0.28.1
huggingface-hub: 0.29.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.3
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.7
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.45.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2025.2.0
httpx: 0.28.1
huggingface-hub: 0.29.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
Blocking usage of gradio | closed | 2025-02-24T09:41:52Z | 2025-02-26T01:26:57Z | https://github.com/gradio-app/gradio/issues/10665 | [
"bug"
] | meng-hui | 2 |
marshmallow-code/flask-marshmallow | rest-api | 34 | Schema.loads().data return dictionary instead of SQLAlchemy Object | I'm trying to deserialize a json object in a `POST` request into an `SQLAlchemy Model` object.
```
class UserSchema(ma.Schema):
class Meta:
model = User
UserSerializer = UserSchema()
# inside my Flask-RESTful POST method
json = request.get_json()
user = UserSerializer.load(json).data
```
After the deserialization, user is an empty dictionary and not an `SQLAlchemy Model` object.
If I manually add the fields to the `UserSchema` declaration, it works; however, it returns a `dictionary` instead of an `SQLAlchemy Model` object:
```
class UserSchema(ma.Schema):
class Meta:
model = User
fields = ('email', 'password', 'first_name', 'last_name', 'birth_date')
```
Do I need to configure anything else in my `UserSchema`?
I have debugged the `Flask-Marshmallow` initialization, and the `has_sqla` flag is True.
| closed | 2016-01-14T21:22:23Z | 2020-08-20T02:39:56Z | https://github.com/marshmallow-code/flask-marshmallow/issues/34 | [] | ffleandro | 4 |
LibreTranslate/LibreTranslate | api | 521 | UX Web interface - Difficult to use on a smartphone | 
It would be easier to read if the translation appeared below the input rather than beside it on a smartphone.
It's a no for me to use in this state. | closed | 2023-10-23T21:09:16Z | 2023-10-27T14:04:59Z | https://github.com/LibreTranslate/LibreTranslate/issues/521 | [
"enhancement"
] | skytux | 1 |
pallets/quart | asyncio | 109 | Can not refer to blueprint's static folder | Hi, So you can find code [here](https://repl.it/@gptsahaj28/quartTodoApp#main.py)
here you will see that `main` is a blueprint with sub directory `static` and `templates`
now `line 4` of `app/main/templates/index.html` looks like
```HTML
<link href="{{ url_for('main.static', filename = 'css/model.css') }}" rel="stylesheet" type="text/css" />
```
notice : `url_for('main.static', filename='css/model.css')`
In Flask this means the URL for a file in the static folder of the main blueprint.
Now, run the repl and refresh the web page.
In the console you will see something like:
```
[2020-08-16 20:09:33,661] 172.18.0.1:37218 GET /static/css/model.css 1.1 404 103 3946
```
indicating that it can't find model.css (404). Also, if you look at the path `/static/css/model.css`, that's not where model.css exists.
Its actual location is `main/static/css/model.css`, which is what I described in `url_for`.
Please look into it, as it would be great if we could define the style of a particular blueprint in the blueprint itself.
Thanks :) | closed | 2020-08-16T20:26:34Z | 2022-07-05T01:59:01Z | https://github.com/pallets/quart/issues/109 | [] | itsDrac | 1 |
horovod/horovod | deep-learning | 3,751 | error: missing ranks | **Environment:**
1. Docker version: 20.10.18, build b40c2f6
2. Host driver version: 515.43.04
3. image: horovod version 0.25.0
4. cuda-toolkit version: 11.2
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)? Yes
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? Yes
5. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes
**Bug report:**
when I use the command to run job in two different machine
> horovodrun -np 4 -H gpu1:2,gpu2:2 --mpi-args="-x NCCL_DEBUG=INFO" python brocasttest.py
it shows that
```
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Bootstrap : Using bond0:11.158.244.23<0>
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO NET/IB : Using [0]mlx5_bond_0:1/RoCE ; OOB bond0:11.158.244.23<0>
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Using network IB
[1,0]<stdout>:NCCL version 2.8.4+cuda11.2
[1,1]<stdout>:j66d10267:291:423 [1] NCCL INFO Bootstrap : Using bond0:11.158.244.23<0>
[1,1]<stdout>:j66d10267:291:423 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO Bootstrap : Using bond0:11.158.237.10<0>
[1,3]<stdout>:j63g01247:66:198 [1] NCCL INFO Bootstrap : Using bond0:11.158.237.10<0>
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
[1,3]<stdout>:j63g01247:66:198 [1] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation
[1,1]<stdout>:j66d10267:291:423 [1] NCCL INFO NET/IB : Using [0]mlx5_bond_0:1/RoCE ; OOB bond0:11.158.244.23<0>
[1,1]<stdout>:j66d10267:291:423 [1] NCCL INFO Using network IB
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO NET/IB : Using [0]mlx5_bond_0:1/RoCE ; OOB bond0:11.158.237.10<0>
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO Using network IB
[1,3]<stdout>:j63g01247:66:198 [1] NCCL INFO NET/IB : Using [0]mlx5_bond_0:1/RoCE ; OOB bond0:11.158.237.10<0>
[1,3]<stdout>:j63g01247:66:198 [1] NCCL INFO Using network IB
[1,1]<stdout>:j66d10267:291:423 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] -1/-1/-1->1->0
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Channel 00/02 : 0 1 2 3
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Channel 01/02 : 0 1 2 3
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Trees [0] 1/2/-1->0->-1 [1] 1/-1/-1->0->2
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO Trees [0] 3/-1/-1->2->0 [1] 3/0/-1->2->-1
[1,3]<stdout>:j63g01247:66:198 [1] NCCL INFO Trees [0] -1/-1/-1->3->2 [1] -1/-1/-1->3->2
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO Channel 00 : 1[81000] -> 2[4000] [receive] via NET/IB/0
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Channel 00 : 3[81000] -> 0[4000] [receive] via NET/IB/0
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO Channel 01 : 1[81000] -> 2[4000] [receive] via NET/IB/0
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO Channel 00 : 2[4000] -> 3[81000] via direct shared memory
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO Channel 01 : 2[4000] -> 3[81000] via direct shared memory
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Channel 01 : 3[81000] -> 0[4000] [receive] via NET/IB/0
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Channel 00 : 0[4000] -> 1[81000] via direct shared memory
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Channel 01 : 0[4000] -> 1[81000] via direct shared memory
[1,3]<stdout>:j63g01247:66:198 [1] NCCL INFO Channel 00 : 3[81000] -> 0[4000] [send] via NET/IB/0
[1,1]<stdout>:j66d10267:291:423 [1] NCCL INFO Channel 00 : 1[81000] -> 2[4000] [send] via NET/IB/0
[1,3]<stdout>:j63g01247:66:198 [1] NCCL INFO Channel 01 : 3[81000] -> 0[4000] [send] via NET/IB/0
[1,1]<stdout>:j66d10267:291:423 [1] NCCL INFO Channel 01 : 1[81000] -> 2[4000] [send] via NET/IB/0
[1,3]<stdout>:j63g01247:66:198 [1] NCCL INFO Connected all rings
[1,3]<stdout>:j63g01247:66:198 [1] NCCL INFO Channel 00 : 3[81000] -> 2[4000] via direct shared memory
[1,1]<stdout>:j66d10267:291:423 [1] NCCL INFO Connected all rings
[1,3]<stdout>:j63g01247:66:198 [1] NCCL INFO Channel 01 : 3[81000] -> 2[4000] via direct shared memory
[1,1]<stdout>:j66d10267:291:423 [1] NCCL INFO Channel 00 : 1[81000] -> 0[4000] via direct shared memory
[1,1]<stdout>:j66d10267:291:423 [1] NCCL INFO Channel 01 : 1[81000] -> 0[4000] via direct shared memory
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO Connected all rings
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Connected all rings
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO Channel 00 : 0[4000] -> 2[4000] [receive] via NET/IB/0
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Channel 00 : 2[4000] -> 0[4000] [receive] via NET/IB/0
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO Channel 01 : 0[4000] -> 2[4000] [receive] via NET/IB/0
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Channel 01 : 2[4000] -> 0[4000] [receive] via NET/IB/0
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO Channel 00 : 2[4000] -> 0[4000] [send] via NET/IB/0
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Channel 00 : 0[4000] -> 2[4000] [send] via NET/IB/0
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO Channel 01 : 2[4000] -> 0[4000] [send] via NET/IB/0
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Channel 01 : 0[4000] -> 2[4000] [send] via NET/IB/0
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO Connected all trees
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Connected all trees
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
[1,2]<stdout>:j63g01247:65:197 [0] NCCL INFO comm 0x7f6a2c28f840 rank 2 nranks 4 cudaDev 0 busId 4000 - Init COMPLETE
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO comm 0x7f955c299030 rank 0 nranks 4 cudaDev 0 busId 4000 - Init COMPLETE
[1,3]<stdout>:j63g01247:66:198 [1] NCCL INFO Connected all trees
[1,3]<stdout>:j63g01247:66:198 [1] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
[1,3]<stdout>:j63g01247:66:198 [1] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
[1,1]<stdout>:j66d10267:291:423 [1] NCCL INFO Connected all trees
[1,1]<stdout>:j66d10267:291:423 [1] NCCL INFO threadThresholds 8/8/64 | 32/8/64 | 8/8/64
[1,1]<stdout>:j66d10267:291:423 [1] NCCL INFO 2 coll channels, 2 p2p channels, 1 p2p channels per peer
[1,3]<stdout>:j63g01247:66:198 [1] NCCL INFO comm 0x7fad7c297090 rank 3 nranks 4 cudaDev 1 busId 81000 - Init COMPLETE
[1,1]<stdout>:j66d10267:291:423 [1] NCCL INFO comm 0x7fb81028fbe0 rank 1 nranks 4 cudaDev 1 busId 81000 - Init COMPLETE
[1,0]<stdout>:j66d10267:290:422 [0] NCCL INFO Launch mode Parallel
[1,2]<stdout>:
[1,2]<stdout>:j63g01247:65:211 [0] transport/net_ib.cc:839 NCCL WARN NET/IB : Got completion with error 12, opcode 32696, len 0, vendor err 129
[1,2]<stdout>:j63g01247:65:211 [0] NCCL INFO include/net.h:28 -> 2
[1,2]<stdout>:j63g01247:65:211 [0] NCCL INFO transport/net.cc:404 -> 2
[1,2]<stdout>:j63g01247:65:211 [0] NCCL INFO proxy.cc:320 -> 2
[1,2]<stdout>:j63g01247:65:211 [0] NCCL INFO proxy.cc:367 -> 2 [Proxy Thread]
.....
[1,0]<stderr>:[2022-10-18 09:51:21.690732: W /tmp/pip-req-build-l2iphkqz/horovod/common/stall_inspector.cc:107] One or more tensors were submitted to be reduced, gathered or broadcasted by subset of ranks and are waiting for remainder of ranks for more than 60 seconds. This may indicate that different ranks are trying to submit different tensors or that only subset of ranks is submitting tensors, which will cause deadlock.
[1,0]<stderr>:Missing ranks:
[1,0]<stderr>:2: [broadcast.momentum_buffer.1, broadcast.momentum_buffer.2, broadcast.momentum_buffer.3, broadcast.momentum_buffer.4, broadcast.momentum_buffer.5, broadcast.momentum_buffer.6 ...]
[1,0]<stderr>:3: [broadcast.momentum_buffer.1, broadcast.momentum_buffer.2, broadcast.momentum_buffer.3, broadcast.momentum_buffer.4, broadcast.momentum_buffer.5, broadcast.momentum_buffer.6 ...]
```
**the code**
```python
import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data.distributed
import horovod.torch as hvd
# Training settings
parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
        return F.log_softmax(x, dim=1)
def main(args):
# Horovod: initialize library.
hvd.init()
if args.cuda:
# Horovod: pin GPU to local rank.
torch.cuda.set_device(hvd.local_rank())
model = Net()
if args.cuda:
# Move model to GPU.
model.cuda(hvd.local_rank())
optimizer = optim.SGD(model.parameters(), lr=0.1,
momentum=0.1)
optimizer = hvd.DistributedOptimizer(
optimizer, named_parameters=model.named_parameters())
# Horovod: broadcast parameters & optimizer state.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)
print("broadcast success!!!")
if __name__ == '__main__':
args = parser.parse_args()
args.cuda = torch.cuda.is_available()
main(args)
```
but when I used the command
> horovodrun --verbose -np 2 -H gpu1:2 python brocasttest.py
or
> horovodrun --verbose -np 2 -H gpu2:2 python brocasttest.py
it works.
Can anyone tell me how to fix this?
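The `NET/IB : Got completion with error 12 ... vendor err 129` line in the rank-2 log points at the RoCE/InfiniBand transport rather than Horovod itself. As a diagnostic (not a guaranteed fix), NCCL can be told to fall back to plain TCP sockets:

```
horovodrun -np 4 -H gpu1:2,gpu2:2 \
    --mpi-args="-x NCCL_DEBUG=INFO -x NCCL_IB_DISABLE=1" \
    python brocasttest.py
```

If the broadcast then completes, the IB/RoCE configuration is the place to look (for example the GID index; `NCCL_IB_GID_INDEX=3` is a common setting for RoCE v2), not the script.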
"question",
"wontfix"
] | zero-piB | 2 |
microsoft/JARVIS | pytorch | 180 | Support for ChatGPT | I noticed that currently JARVIS only supports text-davinci and gpt-4. Do you have any plans to support gpt-3.5-turbo in the near future? It's cheaper compared with the current LLMs.
Thanks for your wonderful work. | open | 2023-04-21T07:03:47Z | 2023-04-23T08:26:49Z | https://github.com/microsoft/JARVIS/issues/180 | [] | ustcwhy | 1 |
donnemartin/data-science-ipython-notebooks | machine-learning | 16 | Add SAWS: A Supercharged AWS Command Line Interface (CLI) to AWS Section. | closed | 2015-10-04T10:40:50Z | 2016-05-18T02:09:55Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/16 | [
"feature-request"
] | donnemartin | 1 | |
psf/black | python | 4,513 | `# fmt: skip` doesn't work with multiline strings | I found this while messing around with #4511
[playground link](https://black.vercel.app/?version=stable&state=_Td6WFoAAATm1rRGAgAhARYAAAB0L-Wj4AC2AF1dAD2IimZxl1N_WhQxS683Co8P8Minjqpb4QfnoC1BJcmjUY2YuL-n7hzVFArdqsZ6zAZpQS4lesRd3PXpISXEGuFOqt4C4cGfy1KruABcKF9lrI40h6GcoFdNTaEKAAAAAAByImBZgAFMjgABebcBAAAAtTsj9bHEZ_sCAAAAAARZWg==)
```py
(
# fmt: skip
"""
"""
)
```
gives
`Cannot parse: 2:0: EOF in multi-line string`
[playground link](https://black.vercel.app/?version=stable&state=_Td6WFoAAATm1rRGAgAhARYAAAB0L-Wj4ACxAF9dAD2IimZxl1N_WhQxS683Co8P8Minjqpb4QfnoC1BJKIZVowx8LAfhpc-5MuAw2QgyrY6enrr3D12ZHibSd-WkG8g8jYLq0lfsxduaMRk10drsZ_hjbxhkG8fGeIb1oAAAACRpCHMogk0EgABe7IBAAAAzhULcLHEZ_sCAAAAAARZWg==)
```py
(
# fmt: skip
"\
"
)
```
gives
`Cannot parse: 2:0: "`
---
This is not fixed by #4380
---
This is fixed by #3978, though that PR appears to be stalled.
With #3978 applied:
```py
(
# fmt: skip
"""
"""
)
```
gives
```py
(
# fmt: skip
"""
"""
)
```
```py
(
# fmt: skip
"\
"
)
```
gives
```py
(
# fmt: skip
""
)
``` | open | 2024-11-15T01:35:38Z | 2025-02-26T13:44:04Z | https://github.com/psf/black/issues/4513 | [
"T: bug",
"F: strings"
] | MeGaGiGaGon | 5 |
ShishirPatil/gorilla | api | 83 | deploying to replicate | **Describe the solution you'd like**
I would love to see a Gorilla model hosted on Replicate; it would be nice to be able to utilize their API and hosting.
**Additional context**
Had a blast playing with the Colab.
| closed | 2023-08-04T16:18:57Z | 2024-02-04T08:58:21Z | https://github.com/ShishirPatil/gorilla/issues/83 | [
"enhancement"
] | walter-grace | 1 |
harry0703/MoneyPrinterTurbo | automation | 92 | ImageMagick's security policy blocked operations related to the temporary file @/tmp/tmpur5hyyto.txt. | Recording an issue that has already been solved.
Error:
OSError: MoviePy Error: creation of None failed because of the following error: convert-im6.q16: attempt to perform an operation not allowed by the security policy `@/tmp/tmpur5hyyto.txt' @ error/property.c/InterpretImageProperties/3668. convert-im6.q16: no images defined `PNG32:/tmp/tmpkq291k_5.png' @ error/convert.c/ConvertImageCommand/3258. . .This error can be due to the fact that ImageMagick is not installed on your computer, or (for Windows users) that you didn't specify the path to the ImageMagick binary. Check the documentation.
Analysis:
This error is usually related to ImageMagick's security policy, especially when handling files. ImageMagick is a powerful tool for creating, editing, compositing, or converting digital images, usable from the command line. MoviePy relies on ImageMagick in some cases to process image or video files, especially when converting image formats. The error message indicates that ImageMagick's security policy blocked certain operations, specifically those related to the temporary file @/tmp/tmpur5hyyto.txt.
Solution:
These policies can be found in ImageMagick's configuration file, policy.xml. This file is usually located at /etc/ImageMagick-6/, /etc/ImageMagick/, or a similar location in the ImageMagick installation directory. Modify the entry containing pattern="@", changing rights="none" to rights="read|write" to allow read and write operations on files.
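For reference, the entry to change typically looks like this after the edit (the exact path and pattern can vary between ImageMagick versions):

```xml
<!-- /etc/ImageMagick-6/policy.xml: was rights="none" -->
<policy domain="path" rights="read|write" pattern="@*" />
```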
| closed | 2024-03-28T07:59:09Z | 2024-10-08T10:41:21Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/92 | [] | chenhengzh | 3 |
mkhorasani/Streamlit-Authenticator | streamlit | 138 | When I click logout to log out and log in again, my interface cannot load; help me, please |


| closed | 2024-02-26T09:08:50Z | 2024-03-28T11:05:20Z | https://github.com/mkhorasani/Streamlit-Authenticator/issues/138 | [
"help wanted"
] | likescentific | 2 |
ijl/orjson | numpy | 358 | Out of Memory error when building orjson wheel for armv7 Docker on Ubuntu64 | I'm trying to compile a Docker image on 64-bit Ubuntu for [Home Assistant](https://github.com/home-assistant/core/), but it always fails with an Out of Memory error when it tries to build the armv7 version.
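One workaround I'm considering (an assumption on my part: that the OOM comes from cargo/libgit2 mmapping the full crates.io git index inside the 32-bit armv7 address space) is to make cargo shell out to the git CLI instead, via a `.cargo/config.toml` in the armv7 build stage:

```toml
# .cargo/config.toml in the armv7 build container (assumed path)
[net]
git-fetch-with-cli = true   # avoid libgit2's mmap of the crates.io index
```

Newer cargo releases can also use the sparse registry protocol, which avoids cloning the full index entirely.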
```
#0 24.80 Downloading orjson-3.8.6.tar.gz (655 kB)
#0 25.86 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 655.1/655.1 kB 902.5 kB/s eta 0:00:00
#0 29.00 Installing build dependencies: started
#0 73.69 Installing build dependencies: finished with status 'done'
#0 73.81 Getting requirements to build wheel: started
#0 76.44 Getting requirements to build wheel: finished with status 'done'
#0 76.50 Preparing metadata (pyproject.toml): started
#0 596.0 Preparing metadata (pyproject.toml): still running...
#0 1078.8 Preparing metadata (pyproject.toml): still running...
#0 1527.7 Preparing metadata (pyproject.toml): still running...
#0 1528.0 Preparing metadata (pyproject.toml): finished with status 'error'
#0 1528.3 error: subprocess-exited-with-error
#0 1528.3
#0 1528.3 × Preparing metadata (pyproject.toml) did not run successfully.
#0 1528.3 │ exit code: 1
#0 1528.3 ╰─> [16 lines of output]
#0 1528.3 Updating crates.io index
#0 1528.3 warning: spurious network error (2 tries remaining): failed to mmap. Could not write data: Out of memory; class=Os (2)
#0 1528.3 warning: spurious network error (1 tries remaining): failed to mmap. Could not write data: Out of memory; class=Os (2)
#0 1528.3 error: failed to get `ahash` as a dependency of package `orjson v3.8.6 (/tmp/pip-install-d1umdm0e/orjson)`
#0 1528.3
#0 1528.3 Caused by:
#0 1528.3 failed to fetch `https://github.com/rust-lang/crates.io-index`
#0 1528.3
#0 1528.3 Caused by:
#0 1528.3 failed to mmap. Could not write data: Out of memory; class=Os (2)
#0 1528.3 💥 maturin failed
#0 1528.3 Caused by: Cargo metadata failed. Does your crate compile with `cargo build`?
#0 1528.3 Caused by: `cargo metadata` exited with an error:
#0 1528.3 Error running maturin: Command '['maturin', 'pep517', 'write-dist-info', '--metadata-directory', '/tmp/pip-modern-metadata-6jp59nkn', '--interpreter', '/usr/bin/python3']' returned non-zero exit status 1.
#0 1528.3 Checking for Rust toolchain....
#0 1528.3 Running `maturin pep517 write-dist-info --metadata-directory /tmp/pip-modern-metadata-6jp59nkn --interpreter /usr/bin/python3`
#0 1528.3 [end of output]
#0 1528.3
#0 1528.3 note: This error originates from a subprocess, and is likely not a problem with pip.
#0 1528.4 error: metadata-generation-failed
#0 1528.4
#0 1528.4 × Encountered error while generating package metadata.
#0 1528.4 ╰─> orjson
#0 1528.4
#0 1528.4 note: This is an issue with the package mentioned above, not pip.
#0 1528.4 hint: See above for details.
``` | closed | 2023-02-25T17:09:46Z | 2023-02-28T13:34:18Z | https://github.com/ijl/orjson/issues/358 | [] | magicse | 1 |
Asabeneh/30-Days-Of-Python | flask | 422 | Needed support | Hi, Asabeneh
For some exercises, there are links that look like they're not available anymore in Day 20 (PIP).
Some of them are:
countries API
https://archive.ics.uci.edu/datasets.php
Thanks for your help. | open | 2023-07-25T00:10:15Z | 2023-08-25T17:51:33Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/422 | [] | Isaias-program | 0 |
matterport/Mask_RCNN | tensorflow | 2,784 | How to train using dataset which is in segmented masks in png format ? | The dataset consists of segmentation masks in PNG format with the structure shown below; any suggestions on how to train on it? The dataset structure is:
Object_Images
-> img1.png
-> img2.png
. . .
Object_Segmented_Masks
->img1_seg_mask.png
->img2_seg_mask.png
. . .
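Pairing each image with its mask by filename seems straightforward; a minimal stdlib sketch of what I have in mind (the `_seg_mask` suffix follows the layout above):

```python
from pathlib import Path

def pair_images_with_masks(img_dir, mask_dir, suffix="_seg_mask"):
    """Pair Object_Images/imgN.png with Object_Segmented_Masks/imgN_seg_mask.png."""
    pairs = []
    for img in sorted(Path(img_dir).glob("*.png")):
        mask = Path(mask_dir) / f"{img.stem}{suffix}.png"
        if mask.exists():                 # keep only images that have a mask
            pairs.append((img, mask))
    return pairs
```

As far as I understand, the harder part is then splitting each mask PNG into one boolean channel per object instance inside a custom `Dataset.load_mask()`, since that is the shape Mask R-CNN expects.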
There is no annotation data in the usual JSON format. Any suggestions? | closed | 2022-03-02T05:50:09Z | 2022-03-30T14:24:19Z | https://github.com/matterport/Mask_RCNN/issues/2784 | [] | dd2-42 | 7 |
kizniche/Mycodo | automation | 1,143 | How to configure the reverse proxy to use other services on the same system along with mycodo? | Hi,
I have mycodo working on a Raspberry Pi. **I would like to add another service on the same system**. I know nginx is deployed along with mycodo in the standard installation, and that **nginx can be used as a reverse proxy**. Then I should be able to deploy other services easily.
I figure out this is certainly not a specific mycodo issue. I have three choices: use a specific port or subdomain or URI subfolder for each service. I would prefer the last.
Could you please provide some tips for it? | closed | 2022-01-22T14:58:36Z | 2022-01-28T21:24:05Z | https://github.com/kizniche/Mycodo/issues/1143 | [] | lalebarde | 1 |
CorentinJ/Real-Time-Voice-Cloning | python | 790 | Difference about speaker_embedding part in Synthesizer? | Hi, firstly thanks for your work!
I find that model.py of the Synthesizer uses Tacotron, but in its forward pass the `speaker_embedding` defaults to `None`. Does that mean the speaker embedding is not used when training the synthesizer?
I remember that in an older version the `speaker_embedding` was used in the Tacotron2 model. Could you please explain why, what the difference is, and what the effect on the results is?
plotly/dash | data-visualization | 2,360 | Variable Path not accounting for HTML encoding | **Describe your context**
```
async-dash 0.1.0a1
dash 2.7.0
dash-bootstrap-components 1.2.1
dash-core-components 2.0.0
dash-daq 0.5.0
dash-extensions 0.1.5
dash-html-components 2.0.0
dash-iconify 0.1.2
dash-loading-spinners 1.0.0
dash-mantine-components 0.11.0a2
dash-table 5.0.0
```
**Describe the bug**
Variable paths do not work when they have been percent-encoded (URL-encoded) during request processing.
i.e. /path/this is a test => /path/this%20is%20a%20test
variable path = this%20is%20a%20test
**Expected behavior**
/path/this is a test => /path/this%20is%20a%20test
variable path = this is a test
Please note, Flask itself already provides a way to remove this URL encoding.
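Until Dash handles this itself, a workaround sketch (stdlib only; the helper name is mine): decode the captured variable with `urllib.parse.unquote` in the page or callback that receives it:

```python
from urllib.parse import unquote

def decode_path_variable(raw: str) -> str:
    """Undo percent-encoding applied to a URL path segment."""
    return unquote(raw)

print(decode_path_variable("this%20is%20a%20test"))  # -> this is a test
```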
| open | 2022-12-08T16:16:08Z | 2024-08-13T19:24:23Z | https://github.com/plotly/dash/issues/2360 | [
"bug",
"P3"
] | BSd3v | 4 |
jowilf/starlette-admin | sqlalchemy | 444 | Question: How to validate a @row_action or @action with `form` attribute? | As I said in #440, I'm using `row_actions` to create child-models. I have constructed the `@row_action` with the `form` parameter and some of the inputs are required and some of them have attributes like `min=2023`. When I click the `Yes, Proceed` button, the form is submitted without satisfying the form requirements, let's say leaving an input empty even if it is required, the modal just disappears without validating the form.
**Environment (please complete the following information):**
- Starlette-Admin version: [e.g. 0.12.2]
- ORM/ODMs: [SQLAlchemy]
| open | 2023-12-27T15:54:18Z | 2023-12-29T00:15:27Z | https://github.com/jowilf/starlette-admin/issues/444 | [
"bug"
] | hasansezertasan | 3 |
psf/black | python | 4,188 | SyntaxWarning on regexp on first run of black | When running `black` on the following code:
```
text = re.sub(
"([_a-zA-Z0-9-+]+)(\.[_a-zA-Z0-9-+]+)*"
"@([a-zA-Z0-9-]+)(\.[a-zA-Z0-9-]+)*(\.[a-zA-Z]{2,4})",
'<a href="mailto:\g<0>">\g<0></a>',
text,
)
text = re.sub(
"(ftp|http|https):\/\/(\w+:{0,1}\w*@)?"
"(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?",
'<a href="\g<0>">\g<0></a>',
text,
)
```
I get the following warnings (written twice):
```
<unknown>:2: SyntaxWarning: invalid escape sequence '\.'
<unknown>:3: SyntaxWarning: invalid escape sequence '\.'
<unknown>:4: SyntaxWarning: invalid escape sequence '\g'
<unknown>:8: SyntaxWarning: invalid escape sequence '\/'
<unknown>:9: SyntaxWarning: invalid escape sequence '\S'
<unknown>:10: SyntaxWarning: invalid escape sequence '\g'
<unknown>:2: SyntaxWarning: invalid escape sequence '\.'
<unknown>:3: SyntaxWarning: invalid escape sequence '\.'
<unknown>:4: SyntaxWarning: invalid escape sequence '\g'
<unknown>:8: SyntaxWarning: invalid escape sequence '\/'
<unknown>:9: SyntaxWarning: invalid escape sequence '\S'
<unknown>:10: SyntaxWarning: invalid escape sequence '\g'
```
When re-running `black` on the same file, the warnings are not shown again. I have to modify the lines (adding a space for instance) to see the warnings again.
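As context for the escapes themselves (this concerns the regex string literals, not Black's behavior): declaring the patterns as raw strings keeps the regex identical while making the escapes valid, e.g. a sketch with the e-mail pattern above:

```python
import re

# Same pattern as above, but as raw strings, so "\." and "\g" are no longer
# invalid string-literal escape sequences.
EMAIL = (
    r"([_a-zA-Z0-9-+]+)(\.[_a-zA-Z0-9-+]+)*"
    r"@([a-zA-Z0-9-]+)(\.[a-zA-Z0-9-]+)*(\.[a-zA-Z]{2,4})"
)

text = re.sub(EMAIL, r'<a href="mailto:\g<0>">\g<0></a>', "write to a@b.com")
print(text)  # -> write to <a href="mailto:a@b.com">a@b.com</a>
```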
Are these warnings normal? (The syntax is normally correct according to the documentation of the `re` module.)
If they are normal, should they really appear twice in the output? And why don't they appear again when running a second time `black`? | closed | 2024-01-28T08:13:57Z | 2024-01-28T15:05:58Z | https://github.com/psf/black/issues/4188 | [
"T: bug"
] | Julien-Elie | 2 |
Lightning-AI/pytorch-lightning | machine-learning | 19,880 | can't fit with ddp_notebook on a Vertex AI Workbench instance (CUDA initialized) | ### Bug description
Using this minimal code example:
```
import torch
import lightning as L
print(torch.cuda.is_initialized())
trainer = L.Trainer(
accelerator="auto",
strategy="ddp_notebook",
devices="auto",
max_epochs=1,
# callbacks=callbacks,
log_every_n_steps=1
)
print(torch.cuda.is_initialized())
```
On Google Colab with a T4 attached, both print statements print "False" as expected.
On a Vertex AI Workbench instance with a T4 attached, the second statement prints "True"; merely instantiating the Trainer initializes cuda. This prevents fitting with DDP.
What could be causing this, and is there any way to work around it?
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | open | 2024-05-16T21:32:16Z | 2024-05-16T21:32:16Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19880 | [
"bug",
"needs triage"
] | jasonbrancazio | 0 |
ageitgey/face_recognition | python | 861 | face_recognition fails to detect some obvious faces. | * face_recognition version:1.2.3
* Python version: Python 3.5.6 :: Anaconda, Inc.
* Operating System: Ubuntu 16.04
### Description
Thank you for a great tool! I am trying to detect faces in the COCO dataset (images which have faces). About 50% of the time it detects the faces correctly, but for the other 50% of images it fails. I presume this has something to do with the threshold.
### What I Did
I had a chance to go through the code flow, but I could not quite understand how I can actually tweak the threshold to detect the faces that are most probably missed due to low confidence.
The Command which I am using is:
`face_locations = face_recognition.face_locations(image)` , which eventually calls following function(according to my understanding):
`_raw_face_locations(img, number_of_times_to_upsample=1, model="hog")` ---> `face_detector(img, number_of_times_to_upsample)` ---> `dlib.get_frontal_face_detector()`
Now in `dlib` repo, I see that this is connected to file `dlib/dlib/image_processing/object_detector_abstract.h`, where we have following definitions :
```
template <
typename image_type
>
std::vector<rectangle> operator() (
const image_type& img,
double adjust_threshold = 0
);
/*!
requires
- img == an object which can be accepted by image_scanner_type::load()
ensures
- This function is identical to the above operator() routine, except that
it returns a std::vector<rectangle> which contains just the bounding
boxes of all the detections.
!*/
```
What I am not able to understand is how I can change `adjust_threshold` when using the function `face_recognition.face_locations(image)`, so that it can detect some of the less obvious faces in the images.
Example Image for which it misses the faces:

None of the faces were detected. I understand this could be one of the difficult cases, but if I can detect the faces with some tweaking (by lowering the confidence threshold), it will serve my purpose.
Thank you.
Regards,
Nitin
| closed | 2019-06-21T20:15:59Z | 2021-08-18T20:21:57Z | https://github.com/ageitgey/face_recognition/issues/861 | [] | nbansal90 | 2 |
ultralytics/ultralytics | deep-learning | 19,454 | Triton inference bug for latest version | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Predict
### Bug
## Bug Report: Negative Dimension Error When Deploying Official YOLO11 (or Other YOLO Versions Such as YOLOv8) with Triton Server
### Description
Following the official Ultralytics tutorial for deploying models with Triton Server (https://docs.ultralytics.com/guides/triton-inference-server/), I encountered a runtime error with negative dimension values.
### Error Message
RuntimeError: Trying to create tensor with negative dimension -989: [0, -989]
### Additional Information
- Metadata was already set up from the beginning (I've tested both with and without adding metadata)
- Despite proper metadata configuration, the error still occurs
- The suggestion in issue #19093 that the problem might be related to metadata appears to be incorrect
- The same setup works correctly with ultralytics 8.3.56 in other contexts
- The error occurs regardless of export options: both with NMS and without NMS produce the same error
- It seems that all export options lead to the same negative dimension error
### Steps to Reproduce
1. Follow the exact deployment tutorial at https://docs.ultralytics.com/guides/triton-inference-server/#exporting-yolo11-to-onnx-format
2. Attempt to deploy with Triton Server
3. Observe the negative dimension error
### Expected Behavior
The model should deploy and run inference correctly without any negative dimension errors.
* Error logs:
```
File ~\miniconda3\Lib\site-packages\ultralytics\engine\model.py:560, in Model.predict(self, source, stream, predictor, **kwargs)
558 if prompts and hasattr(self.predictor, "set_prompts"): # for SAM-type models
559 self.predictor.set_prompts(prompts)
--> 560 return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)
File ~\miniconda3\Lib\site-packages\ultralytics\engine\predictor.py:175, in BasePredictor.__call__(self, source, model, stream, *args, **kwargs)
173 return self.stream_inference(source, model, *args, **kwargs)
174 else:
--> 175 return list(self.stream_inference(source, model, *args, **kwargs))
File ~\miniconda3\Lib\site-packages\torch\utils\_contextlib.py:36, in _wrap_generator.<locals>.generator_context(*args, **kwargs)
33 try:
34 # Issuing `None` to a generator fires it up
35 with ctx_factory():
---> 36 response = gen.send(None)
38 while True:
39 try:
40 # Forward the response to our caller and get its next request
File ~\miniconda3\Lib\site-packages\ultralytics\engine\predictor.py:268, in BasePredictor.stream_inference(self, source, model, *args, **kwargs)
266 # Postprocess
267 with profilers[2]:
--> 268 self.results = self.postprocess(preds, im, im0s)
269 self.run_callbacks("on_predict_postprocess_end")
271 # Visualize, save, write results
File ~\miniconda3\Lib\site-packages\ultralytics\models\yolo\detect\predict.py:25, in DetectionPredictor.postprocess(self, preds, img, orig_imgs, **kwargs)
23 def postprocess(self, preds, img, orig_imgs, **kwargs):
24 """Post-processes predictions and returns a list of Results objects."""
---> 25 preds = ops.non_max_suppression(
26 preds,
27 self.args.conf,
28 self.args.iou,
29 self.args.classes,
30 self.args.agnostic_nms,
31 max_det=self.args.max_det,
32 nc=len(self.model.names),
33 end2end=getattr(self.model, "end2end", False),
34 rotated=self.args.task == "obb",
35 )
37 if not isinstance(orig_imgs, list): # input images are a torch.Tensor, not a list
38 orig_imgs = ops.convert_torch2numpy_batch(orig_imgs)
File ~\miniconda3\Lib\site-packages\ultralytics\utils\ops.py:265, in non_max_suppression(prediction, conf_thres, iou_thres, classes, agnostic, multi_label, labels, max_det, nc, max_time_img, max_nms, max_wh, in_place, rotated, end2end)
262 prediction = torch.cat((xywh2xyxy(prediction[..., :4]), prediction[..., 4:]), dim=-1) # xywh to xyxy
264 t = time.time()
--> 265 output = [torch.zeros((0, 6 + nm), device=prediction.device)] * bs
266 for xi, x in enumerate(prediction): # image index, image inference
267 # Apply constraints
268 # x[((x[:, 2:4] < min_wh) | (x[:, 2:4] > max_wh)).any(1), 4] = 0 # width-height
269 x = x[xc[xi]] # confidence
RuntimeError: Trying to create tensor with negative dimension -989: [0, -989]```
### Environment
```
Ultralytics 8.3.80 Python-3.12.3 torch-2.6.0+cpu CPU (Intel Core(TM) i5-10400 2.90GHz)
Setup complete (12 CPUs, 7.8 GB RAM, 214.0/237.6 GB disk)
OS Windows-11-10.0.26100-SP0
Environment Windows
Python 3.12.3
Install pip
RAM 7.83 GB
Disk 214.0/237.6 GB
CPU Intel Core(TM) i5-10400 2.90GHz
CPU count 12
GPU None
GPU count None
CUDA None
numpy 1.26.4<=2.1.1,>=1.23.0
matplotlib 3.10.0>=3.3.0
opencv-python 4.11.0.86>=4.6.0
pillow 11.0.0>=7.1.2
pyyaml 6.0.2>=5.3.1
requests 2.32.3>=2.23.0
scipy 1.15.1>=1.4.1
torch 2.6.0>=1.8.0
torch 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision 0.21.0>=0.9.0
tqdm 4.66.5>=4.64.0
psutil 6.1.1
py-cpuinfo 9.0.0
pandas 2.2.3>=1.1.4
seaborn 0.13.2>=0.11.0
ultralytics-thop 2.0.14>=2.0.0
{'OS': 'Windows-11-10.0.26100-SP0',
'Environment': 'Windows',
'Python': '3.12.3',
'Install': 'pip',
'RAM': '7.83 GB',
'Disk': '214.0/237.6 GB',
'CPU': 'Intel Core(TM) i5-10400 2.90GHz',
'CPU count': 12,
'GPU': None,
'GPU count': None,
'CUDA': None,
'Package Info': {'numpy': '✅ 1.26.4<=2.1.1,>=1.23.0',
'matplotlib': '✅ 3.10.0>=3.3.0',
'opencv-python': '✅ 4.11.0.86>=4.6.0',
'pillow': '✅ 11.0.0>=7.1.2',
'pyyaml': '✅ 6.0.2>=5.3.1',
'requests': '✅ 2.32.3>=2.23.0',
'scipy': '✅ 1.15.1>=1.4.1',
'torch': '✅ 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32"',
'torchvision': '✅ 0.21.0>=0.9.0',
'tqdm': '✅ 4.66.5>=4.64.0',
'psutil': '✅ 6.1.1',
'py-cpuinfo': '✅ 9.0.0',
'pandas': '✅ 2.2.3>=1.1.4',
'seaborn': '✅ 0.13.2>=0.11.0',
'ultralytics-thop': '✅ 2.0.14>=2.0.0'}}
```
### Minimal Reproducible Example
```
from ultralytics import YOLO

model = YOLO("yolo11n.pt")  # any detection checkpoint, as in the Triton guide

metadata = []

def export_cb(exporter):
    metadata.append(exporter.metadata)

model.add_callback("on_export_end", export_cb)
onnx_file = model.export(format="onnx", dynamic=True)
```
```
from ultralytics import YOLO
model = YOLO("http://localhost:8000/yolo", task="detect")
results = model.predict("test.jpg",imgsz=640)
``` | closed | 2025-02-27T07:22:59Z | 2025-02-27T12:38:32Z | https://github.com/ultralytics/ultralytics/issues/19454 | [
"bug",
"fixed",
"detect",
"exports"
] | hoangl-hle | 6 |
streamlit/streamlit | deep-learning | 10,000 | st.dialog The pop-up window: Click to click the second time and it will not pop up | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
Hi, I have a problem: when I click the button for the first time, the dialog pops up. If I then do nothing, close the dialog, and click the button again, the dialog does not pop up. Sometimes it pops up after multiple clicks, and sometimes it won't pop up no matter how many times I click.
### Reproducible Code Example
_No response_
### Steps To Reproduce
```
@st.dialog("file", width="large")
def upload_file():
    uploaded_file = st.file_uploader("Upload data file", type=["csv"], key="file_uploader")
    if uploaded_file is not None:
        if st.button("OK"):
            st.write("success")
            st.rerun()

if st.button("Upload Data", key="upload"):
    upload_file()  # open the upload-file dialog
```
### Expected Behavior
_No response_
### Current Behavior
_No response_
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version:
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_ | closed | 2024-12-11T10:36:09Z | 2024-12-13T10:06:28Z | https://github.com/streamlit/streamlit/issues/10000 | [
"type:bug",
"status:awaiting-user-response",
"feature:st.dialog"
] | dtiosd | 5 |
JoeanAmier/XHS-Downloader | api | 185 | Your project is great, but please fix the network error reported when downloading certain notes/videos | I found that many people, including me, have run into the network error; even after updating the User-Agent the network error still occurs. Could the author please fix this? | closed | 2024-10-20T12:27:31Z | 2024-10-20T13:43:54Z | https://github.com/JoeanAmier/XHS-Downloader/issues/185 | [] | lixida123 | 0 |
huggingface/datasets | pandas | 7,369 | Importing dataset gives unhelpful error message when filenames in metadata.csv are not found in the directory | ### Describe the bug
When importing an audiofolder dataset in which the audio file names don't correspond to the file names in metadata.csv, we get an unclear error message that is not helpful for debugging, i.e.
```
ValueError: Instruction "train" corresponds to no data!
```
### Steps to reproduce the bug
Assume an audiofolder with audiofiles, filename1.mp3, filename2.mp3 etc and a file metadata.csv which contains the columns file_name and sentence. The file_names are formatted like filename1.mp3, filename2.mp3 etc.
Load the audio
```
from datasets import load_dataset
load_dataset("audiofolder", data_dir='/path/to/audiofolder')
```
When the file_names in the csv are not in sync with the filenames in the audiofolder, then we get an Error message:
```
File /opt/conda/lib/python3.12/site-packages/datasets/arrow_reader.py:251, in BaseReader.read(self, name, instructions, split_infos, in_memory)
249 if not files:
250 msg = f'Instruction "{instructions}" corresponds to no data!'
--> 251 raise ValueError(msg)
252 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
ValueError: Instruction "train" corresponds to no data!
```
load_dataset has a default value for the argument split = 'train'.
### Expected behavior
It would be better to get an error report something like:
```
The metadata.csv file has different filenames than the files in the datadirectory.
```
It would have saved me 4 hours of debugging.
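Until the library reports this itself, a pre-flight check along these lines (stdlib only; the helper name and the `.mp3` extension are assumptions from this setup) can surface the mismatch before calling `load_dataset`:

```python
import csv
from pathlib import Path

def audiofolder_mismatches(data_dir, audio_ext=".mp3"):
    """Return (listed-but-missing, on-disk-but-unlisted) file names for an audiofolder."""
    data_dir = Path(data_dir)
    with open(data_dir / "metadata.csv", newline="", encoding="utf-8") as f:
        listed = {row["file_name"] for row in csv.DictReader(f)}
    on_disk = {p.name for p in data_dir.iterdir() if p.suffix == audio_ext}
    return sorted(listed - on_disk), sorted(on_disk - listed)
```

If either returned list is non-empty, metadata.csv and the directory are out of sync, which is exactly the situation that currently surfaces as the opaque "corresponds to no data" error.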
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.14.0-427.40.1.el9_4.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.8
- `huggingface_hub` version: 0.27.0
- PyArrow version: 18.1.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | open | 2025-01-14T13:53:21Z | 2025-01-14T15:05:51Z | https://github.com/huggingface/datasets/issues/7369 | [] | svencornetsdegroot | 1 |
onnx/onnx | machine-learning | 6,787 | .github/workflows/sdist_test.yml is failing | # Bug Report
https://github.com/onnx/onnx/blob/main/.github/workflows/sdist_test.yml is failing. The original idea was to extract the "pip install -e ." step from the auto-update-doc workflow into a separate PR that tests installing onnx from source.
"bug",
"module: CI pipelines"
] | andife | 0 |
mars-project/mars | pandas | 2,691 | Use direct async call for in-process rpc | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Is your feature request related to a problem? Please describe.**
Mars oscar actors use at least one `asyncio.Task` for every actor call, which is expensive when there are thousands of tasks. An `asyncio.Task` is about 100~200x slower than a normal async call, and the cost grows as more tasks sit in the event loop. When hundreds of subtasks are being scheduled by the supervisor, there will be thousands of `asyncio.Task`s in the supervisor's event loop, which makes the supervisor the bottleneck of the whole system.
**Describe the solution you'd like**
For in-process actor calls, it's possible to use a direct async function call. Instead of creating a new `asyncio.Task` to process the message, we can just use the caller's `asyncio.Task` to handle the message, thus removing all the extra `asyncio.Task`s.
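An illustrative stdlib-only sketch of the two dispatch styles (not oscar's actual code): both produce the same results, but the direct call avoids creating one `asyncio.Task` per message:

```python
import asyncio
import time

async def handle(msg):
    # stand-in for an actor's message handler
    return msg * 2

async def via_task(n):
    # current style: one asyncio.Task per call
    return [await asyncio.create_task(handle(i)) for i in range(n)]

async def via_direct(n):
    # proposed style for in-process calls: reuse the caller's task
    return [await handle(i) for i in range(n)]

async def main():
    for fn in (via_task, via_direct):
        t0 = time.perf_counter()
        out = await fn(10_000)
        print(fn.__name__, f"{time.perf_counter() - t0:.3f}s", out[:3])

asyncio.run(main())
```

On a typical machine the direct variant runs noticeably faster; the exact ratio depends on the event loop and Python version.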
**Additional context**
`TellMessage` is executed in the background and there is no easy way to avoid the `asyncio.Task`, so `TellMessage` should be used sparingly.
| open | 2022-02-09T07:20:21Z | 2022-02-09T07:20:21Z | https://github.com/mars-project/mars/issues/2691 | [] | chaokunyang | 0 |
mwouts/itables | jupyter | 227 | How to copy a table column name? | How to copy a table column name?
Currently, **clicking** on a `table column name` triggers a **sort**.
Is it possible to allow something like `alt+mouse drag` to **select** and copy the name, instead of triggering a sort? | open | 2024-02-06T23:16:26Z | 2024-06-17T08:30:19Z | https://github.com/mwouts/itables/issues/227 | [] | Norlandz | 11 |
wandb/wandb | data-science | 8,717 | [Q]: Is there more detailed introduction or example of Query panels combined plot? | ### Ask your question
Hi, this issue comes from https://github.com/wandb/examples/issues/577, maybe you only need to answer once.
I combined my two tables through a key with an inner or outer join. The new table looks like this: joined on the first column, with the other columns holding two values (one from each original table).

1. I want to now convert it into a combined scatter plot, with different colors for different original tables (for example, yellow and pink). How can I set the "Color" below?

2. Is there a syntax reference for Weave expressions? And how can I customize the shape or size of each point based on different column values?

3. I think the following query panel in the doc https://docs.wandb.ai/guides/app/features/panels/query-panel/ is what I can refer to. Can you share this example?

| open | 2024-10-26T14:43:38Z | 2024-11-11T10:40:58Z | https://github.com/wandb/wandb/issues/8717 | [
"ty:question",
"c:docs",
"a:app"
] | Neronjust2017 | 4 |
Farama-Foundation/PettingZoo | api | 498 | Warnings in CI tests to resolve | Looking at CI, there are several warnings in pettingzoo that likely should be addressed:
There's this:
test/pytest_runner.py::test_module[classic/texas_holdem_no_limit_v5-pettingzoo.classic.texas_holdem_no_limit_v5]
/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/gym/logger.py:34: UserWarning: WARN: Box bound precision lowered by casting to float32
warnings.warn(colorize("%s: %s" % ("WARN", msg % args), "yellow"))
Then there's a ton of these warnings in CI tests:
test/pytest_runner.py: 63 warnings
/home/runner/work/PettingZoo/PettingZoo/pettingzoo/utils/wrappers/base.py:51: UserWarning: The `observation_spaces` dictionary is deprecated. Use the `observation_space` function instead.
warnings.warn("The `observation_spaces` dictionary is deprecated. Use the `observation_space` function instead.")
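For reference, the pattern this deprecation points at (a bare-bones sketch with plain strings standing in for real `gymnasium.spaces` objects; a real env would subclass PettingZoo's base class) keeps the dicts internal and exposes per-agent accessor methods:

```python
import functools

class MyParallelEnv:
    """Sketch of the accessor pattern only, not a full PettingZoo environment."""

    def __init__(self):
        # plain strings stand in for real Space objects here
        self._observation_spaces = {"player_0": "obs-space", "player_1": "obs-space"}
        self._action_spaces = {"player_0": "act-space", "player_1": "act-space"}

    @functools.lru_cache(maxsize=None)
    def observation_space(self, agent):
        return self._observation_spaces[agent]

    @functools.lru_cache(maxsize=None)
    def action_space(self, agent):
        return self._action_spaces[agent]

env = MyParallelEnv()
print(env.observation_space("player_0"))  # -> obs-space
```

Environments written this way no longer trip the `observation_spaces`/`action_spaces` dict-attribute warnings above.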
Then there's this:
test/pytest_runner.py::test_module[sisl/waterworld_v3-pettingzoo.sisl.waterworld_v3]
/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/gym/spaces/box.py:143: UserWarning: Casting input x to numpy array.
warnings.warn("Casting input x to numpy array.")
Then finally there's this
test/pytest_runner.py::test_module[sisl/pursuit_v3-pettingzoo.sisl.pursuit_v3]
/home/runner/work/PettingZoo/PettingZoo/pettingzoo/utils/env.py:89: UserWarning: Your environment should override the action_space function. Attempting to use the action_spaces dict attribute.
warnings.warn("Your environment should override the action_space function. Attempting to use the action_spaces dict attribute.") | closed | 2021-10-05T19:16:38Z | 2021-12-12T19:29:57Z | https://github.com/Farama-Foundation/PettingZoo/issues/498 | [] | jkterry1 | 3 |
frol/flask-restplus-server-example | rest-api | 105 | How do I marshal pagination data ? | Hello, thank you for this great project !
I have a problem with marshmallow, which the example doesn't cover.
-------
My schema :
```
from flask_restplus_patched import ModelSchema
from src.extensions import ma
# # ma init at app startup
# from flask_marshmallow import Marshmallow
# ma = Marshmallow()
class CorpusSchema(ModelSchema):
id = fields.Int(dump_only=True)
tags = fields.Nested(TagSchema, many=True, exclude=('hexcolor', 'shortcut',), dump_only=True)
doc_count = fields.Int(dump_only=True)
class Meta:
model = Corpus
exclude = ('documents',)
dateformat = '%Y-%m-%d %H:%M:%S'
class PaginationSchema(ma.Schema):
total = fields.Integer()
has_next = fields.Boolean()
has_prev = fields.Boolean()
page = fields.Integer()
page_size = fields.Integer()
class CorpusPaginationSchema(PaginationSchema):
corpusList = fields.Nested(CorpusSchema, many=True, dump_only=True)
```
It is usually necessary to return data with pagination. Here, as you can see, `CorpusPaginationSchema` has many `CorpusSchema`, and `CorpusSchema` has many `TagSchema`. I think this may be the problem.
I wrote the view as below:
```
@api_wrap.route('/corpus')
class CorpusListPage(Resource):
@api_wrap.doc('get_corpus_list')
@api_wrap.marshal_with(CorpusPaginationSchema())
def get(self):
page, page_size = get_pagination_info()
result = Corpus.query.paginate(page, page_size, error_out=False)
return {
'total': result.total,
'has_next': result.has_next,
'has_prev': result.has_prev,
'corpusList': corpuses_schema.dump(result.items).data
}
```
But got error :
```
2018-04-26 10:55:41,732 ERROR: Exception on /api/corpus [GET] [in D:\Anaconda3\envs\py3\lib\site-packages\flask\app.py:1560]
Traceback (most recent call last):
  File "D:\Anaconda3\envs\py3\lib\site-packages\flask\app.py", line 1612, in full_dispatch_request
    rv = self.dispatch_request()
  File "D:\Anaconda3\envs\py3\lib\site-packages\flask\app.py", line 1598, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "D:\Anaconda3\envs\py3\lib\site-packages\flask_restplus\api.py", line 313, in wrapper
    resp = resource(*args, **kwargs)
  File "D:\Anaconda3\envs\py3\lib\site-packages\flask\views.py", line 84, in view
    return self.dispatch_request(*args, **kwargs)
  File "D:\Anaconda3\envs\py3\lib\site-packages\flask_restplus\resource.py", line 44, in dispatch_request
    resp = meth(*args, **kwargs)
  File "D:\Anaconda3\envs\py3\lib\site-packages\flask_restplus\marshalling.py", line 110, in wrapper
    return marshal(resp, self.fields, self.envelope, mask)
  File "D:\Anaconda3\envs\py3\lib\site-packages\flask_restplus\marshalling.py", line 54, in marshal
    for k, v in fields.items())
AttributeError: 'CorpusPaginationSchema' object has no attribute 'items'
```
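(Setting the marshalling question itself aside, a stdlib-only sketch; the helper name is mine, not the library's.) Spelling out the plain dict that `CorpusPaginationSchema` ultimately has to dump makes it easier to see which keys the nested schemas must cover:

```python
def pagination_payload(items, page, page_size, total):
    """Build the plain dict that the pagination schema is expected to serialize."""
    return {
        "total": total,
        "page": page,
        "page_size": page_size,
        "has_next": page * page_size < total,
        "has_prev": page > 1,
        "corpusList": items,
    }

payload = pagination_payload([{"id": 1}], page=1, page_size=10, total=25)
```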
I am very confused about how to use `marshal_with` correctly. I hope there could be a good example/tip for my case. | closed | 2018-04-26T02:57:32Z | 2018-04-27T02:09:03Z | https://github.com/frol/flask-restplus-server-example/issues/105 | [
"question"
] | eromoe | 8 |
ivy-llc/ivy | pytorch | 28,256 | Fix Ivy Failing Test: torch - shape.shape__radd__ | closed | 2024-02-12T15:49:49Z | 2024-02-13T09:32:14Z | https://github.com/ivy-llc/ivy/issues/28256 | [
"Sub Task"
] | fnhirwa | 0 | |
Lightning-AI/pytorch-lightning | pytorch | 20,536 | lightning.Fabric meets deadlock when loading nn.Module | ### Bug description
When I try to use `lightning.Fabric.setup()` to load a `torch.nn.Module` under multi-processing, the program deadlocks and gets stuck in `lightning/fabric/strategies/launchers/subprocess_script.py`.
I suspect this problem comes from the `popen` process start method, but I have no further evidence.
### What version are you seeing the problem on?
v2.4, v2.5
### How to reproduce the bug
I tried to reproduce this bug with a smaller demo of my project, but failed; the bug seems to occur only in my actual project:
```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torch.utils.data import TensorDataset
import lightning
fabric = lightning.Fabric(devices=[1, 2], num_nodes=1, strategy='ddp')
class MyEvaluator:
def __init__(self):
fabric.launch()
def eval_model(self, dataset, crit):
model = LinearModel()
model = fabric.setup_module(module=model)
model.eval()
# I have changed type of model to custom class, but still can't reproduce this problem
# model = Seq2Seq_Min_LSTM_GNN(en_input_size=30, de_input_size=18, output_size=2, hidden_size=256, forecast_history=168, forecast_length=56, graph=dgl.graph((torch.tensor([0, 1]), torch.tensor([1, 2]))))
test_loader = fabric.setup_dataloaders(DataLoader(dataset, batch_size=10, shuffle=True,
num_workers=1, multiprocessing_context='spawn'))
for x, y in test_loader:
output = model(x)
loss = crit(output, y)
yield loss
class LinearModel(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(10, 2)
def forward(self, x):
return self.linear(x)
if __name__ == '__main__':
x = torch.randn(100, 10)
y = torch.rand(100, 2)
dataset = TensorDataset(x, y)
crit = nn.MSELoss()
evaluator = MyEvaluator()
for loss in evaluator.eval_model(dataset, crit):
print(loss)
```
### Error messages and logs
This is no error messages and logs when deadlock occurs. What should I do to know what happened in my program and give you enough messages?
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA RTX 5000 Ada Generation
- NVIDIA A40
- NVIDIA A40
- available: True
- version: 12.1
* Lightning:
- lightning: 2.4.0
- lightning-utilities: 0.11.9
- pytorch-lightning: 2.4.0
- torch: 2.2.2
- torchaudio: 2.2.2
- torchdata: 0.7.1
- torchmetrics: 1.6.0
- torchvision: 0.17.2
* Packages:
- absl-py: 2.1.0
- affine: 2.4.0
- aiobotocore: 2.13.2
- aiodns: 3.2.0
- aiohappyeyeballs: 2.3.7
- aiohttp: 3.10.4
- aiohttp-client-cache: 0.11.1
- aioitertools: 0.11.0
- aiosignal: 1.3.1
- aiosqlite: 0.20.0
- annotated-types: 0.7.0
- appdirs: 1.4.4
- argon2-cffi: 23.1.0
- argon2-cffi-bindings: 21.2.0
- asciitree: 0.3.3
- async-retriever: 0.17.0
- attrs: 24.2.0
- autocommand: 2.2.2
- backports.tarfile: 1.2.0
- black: 24.8.0
- bleach: 6.1.0
- bokeh: 3.5.1
- boto3: 1.34.131
- botocore: 1.34.131
- branca: 0.7.2
- brotli: 1.1.0
- bump2version: 1.0.1
- cachetools: 5.5.0
- cartopy: 0.23.0
- cattrs: 23.2.3
- certifi: 2024.8.30
- cffi: 1.17.0
- cfgrib: 0.9.14.0
- cftime: 1.6.4
- chardet: 5.2.0
- charset-normalizer: 3.3.2
- click: 8.1.7
- click-plugins: 1.1.1
- cligj: 0.7.2
- cloudpickle: 3.0.0
- codetiming: 1.4.0
- colorama: 0.4.6
- contourpy: 1.2.1
- cryptography: 43.0.0
- cupy: 13.3.0
- cycler: 0.12.1
- cytoolz: 0.12.3
- dask: 2024.8.1
- dask-expr: 1.1.11
- dataretrieval: 1.0.10
- deepspeed: 0.16.1
- defusedxml: 0.7.1
- dgl: 2.2.1+cu121
- distributed: 2024.8.1
- docutils: 0.21.2
- eccodes: 1.7.1
- einops: 0.8.0
- et-xmlfile: 1.1.0
- exceptiongroup: 1.2.2
- fasteners: 0.19
- fastrlock: 0.8.2
- filelock: 3.15.4
- findlibs: 0.0.5
- flake8: 7.1.1
- flexcache: 0.3
- flexparser: 0.3.1
- folium: 0.17.0
- fonttools: 4.53.1
- frozenlist: 1.4.1
- fsspec: 2024.6.1
- geopandas: 1.0.1
- gmpy2: 2.1.5
- greenlet: 3.0.3
- grpcio: 1.62.2
- h2: 4.1.0
- h5netcdf: 1.3.0
- h5py: 3.11.0
- hjson: 3.1.0
- hpack: 4.0.0
- hydrodataset: 0.1.13
- hydrodatasource: 0.0.8
- hydroerr: 1.24
- hydrosignatures: 0.17.0
- hydrotopo: 0.0.6
- hydroutils: 0.0.12
- hyperframe: 6.0.1
- idna: 3.7
- igraph: 0.11.6
- importlib-metadata: 8.2.0
- importlib-resources: 6.4.0
- inflect: 7.3.1
- iniconfig: 2.0.0
- intake: 2.0.6
- itsdangerous: 2.2.0
- jaraco.classes: 3.4.0
- jaraco.context: 5.3.0
- jaraco.functools: 4.0.2
- jaraco.text: 3.12.1
- jeepney: 0.8.0
- jinja2: 3.1.4
- jmespath: 1.0.1
- joblib: 1.4.2
- kaggle: 1.6.17
- kerchunk: 0.2.6
- keyring: 25.3.0
- kiwisolver: 1.4.5
- lightning: 2.4.0
- lightning-utilities: 0.11.9
- llvmlite: 0.43.0
- locket: 1.0.0
- loguru: 0.7.2
- lxml: 5.3.0
- lz4: 4.3.3
- markdown: 3.6
- markdown-it-py: 3.0.0
- markupsafe: 2.1.5
- matplotlib: 3.9.2
- mccabe: 0.7.0
- mdurl: 0.1.2
- minio: 7.2.8
- more-itertools: 10.4.0
- mpmath: 1.3.0
- msgpack: 1.0.8
- multidict: 6.0.5
- mypy-extensions: 1.0.0
- netcdf4: 1.7.1.post2
- networkx: 3.3
- nh3: 0.2.18
- ninja: 1.11.1.3
- nuitka: 2.4.7
- numba: 0.60.0
- numcodecs: 0.13.0
- numpy: 1.26.4
- nvidia-ml-py: 12.535.161
- nvitop: 1.3.2
- openpyxl: 3.1.5
- ordered-set: 4.1.0
- owslib: 0.31.0
- packaging: 24.1
- pandas: 2.2.2
- partd: 1.4.2
- pathspec: 0.12.1
- pillow: 10.4.0
- pint: 0.24.3
- pint-pandas: 0.6.2
- pint-xarray: 0.4
- pip: 24.2
- pkginfo: 1.10.0
- platformdirs: 4.2.2
- pluggy: 1.5.0
- polars: 1.17.1
- protobuf: 4.25.3
- psutil: 6.0.0
- psycopg2-binary: 2.9.9
- py-cpuinfo: 9.0.0
- pyarrow: 17.0.0
- pyarrow-hotfix: 0.6
- pycairo: 1.27.0
- pycares: 4.4.0
- pycodestyle: 2.12.1
- pycparser: 2.22
- pycryptodome: 3.20.0
- pydantic: 2.8.2
- pydantic-core: 2.20.1
- pyflakes: 3.2.0
- pygeohydro: 0.17.0
- pygeoogc: 0.17.0
- pygeoutils: 0.17.0
- pygments: 2.18.0
- pykalman: 0.9.7
- pynhd: 0.17.0
- pyogrio: 0.9.0
- pyparsing: 3.1.2
- pyproj: 3.6.1
- pyshp: 2.3.1
- pysocks: 1.7.1
- pytest: 8.3.2
- python-dateutil: 2.9.0
- python-slugify: 8.0.4
- pytorch-lightning: 2.4.0
- pytz: 2024.1
- pyyaml: 6.0.2
- rasterio: 1.3.10
- readme-renderer: 44.0
- requests: 2.32.3
- requests-cache: 1.2.1
- requests-toolbelt: 1.0.0
- rfc3986: 2.0.0
- rich: 13.7.1
- rioxarray: 0.17.0
- s3fs: 2024.6.1
- s3transfer: 0.10.2
- scikit-learn: 1.5.1
- scipy: 1.14.0
- seaborn: 0.13.2
- secretstorage: 3.3.3
- setuptools: 72.2.0
- shap: 0.45.1
- shapely: 2.0.1
- six: 1.16.0
- slicer: 0.0.8
- snuggs: 1.4.7
- sortedcontainers: 2.4.0
- sqlalchemy: 2.0.32
- sympy: 1.13.2
- tblib: 3.0.0
- tbparse: 0.0.9
- tensorboard: 2.17.1
- tensorboard-data-server: 0.7.0
- termcolor: 2.5.0
- text-unidecode: 1.3
- texttable: 1.7.0
- threadpoolctl: 3.5.0
- tomli: 2.0.1
- toolz: 0.12.1
- torch: 2.2.2
- torchaudio: 2.2.2
- torchdata: 0.7.1
- torchmetrics: 1.6.0
- torchvision: 0.17.2
- tornado: 6.4.1
- tqdm: 4.66.5
- triton: 2.2.0
- twine: 5.1.1
- typeguard: 4.3.0
- typing-extensions: 4.12.2
- tzdata: 2024.1
- tzfpy: 0.15.5
- ujson: 5.10.0
- url-normalize: 1.4.3
- urllib3: 2.2.2
- webencodings: 0.5.1
- werkzeug: 3.0.3
- wget: 3.2
- wheel: 0.44.0
- wrapt: 1.16.0
- xarray: 2024.7.0
- xlrd: 2.0.1
- xyzservices: 2024.6.0
- yarl: 1.9.4
- zarr: 2.18.2
- zict: 3.0.0
- zipp: 3.20.0
- zstandard: 0.23.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.11.9
- release: 5.4.0-195-generic
- version: #215-Ubuntu SMP Fri Aug 2 18:28:05 UTC 2024
</details>
### More info
In my project, this error occurs in [training](https://github.com/iHeadWater/torchhydro/blob/dev-gnn/torchhydro/trainers/deep_hydro.py#L222) and [evaluating](https://github.com/iHeadWater/torchhydro/blob/dev-gnn/torchhydro/trainers/deep_hydro.py#L352).
I hope the file gives you more to work with; please tell me how to reproduce or solve this correctly. | closed | 2025-01-07T16:36:03Z | 2025-01-20T13:08:03Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20536 | [
"repro needed",
"ver: 2.4.x",
"ver: 2.5.x"
] | forestbat | 8 |
noirbizarre/flask-restplus | flask | 389 | Flask restplus fails on a certain GET call on Chrome - but not elsewhere. | We're faced with a very weird issue, reproducible with our code - https://github.com/vedavaapi/vedavaapi_py_api/issues/3 (README and setup script in the same repo).
Observations:
- https://api.vedavaapi.org/py/ullekhanam/v1/schemas fails on chrome, but succeeds on swagger UI, firefox and with curl. So do other routes under the same blueprint. Please see https://github.com/vedavaapi/vedavaapi_py_api/issues/3 for screenshots and error dumps. Code is at https://github.com/vedavaapi/vedavaapi_py_api/blob/7a342bc099c4c3f417cbc45f8d3559aef1b16a8e/vedavaapi_py_api/ullekhanam/api_v1.py#L312
- https://api.vedavaapi.org/py/auth/v1/schemas works just fine. Code for this is at https://github.com/vedavaapi/vedavaapi_py_api/blob/7a342bc099c4c3f417cbc45f8d3559aef1b16a8e/vedavaapi_py_api/users/api_v1.py#L383 .
I was unable to figure this out. Appreciate investigation and pointers.
I have checked against the latest flask restplus version as of this moment, having run: `sudo pip3 install -U flask-restplus` before testing on my local machine.
EDIT: I also tried after the below:
```
sudo pip3 install git+https://github.com/noirbizarre/flask-restplus@master -U
...
Installing collected packages: flask-restplus
Found existing installation: flask-restplus 0.10.1
Uninstalling flask-restplus-0.10.1:
Successfully uninstalled flask-restplus-0.10.1
Running setup.py install for flask-restplus ... done
Successfully installed flask-restplus-0.10.1.dev0
****
``` | open | 2018-02-05T22:11:17Z | 2018-05-16T14:26:27Z | https://github.com/noirbizarre/flask-restplus/issues/389 | [] | vvasuki | 5 |
jonaswinkler/paperless-ng | django | 1,394 | [BUG] Mail rule filter attachment filename is case sensitive | The "Mail rules" interface says the Filter attachment filename should be case insensitive:
> Only consume documents which entirely match this filename if specified. Wildcards such as *.pdf or \*invoice\* are allowed. Case insensitive.
The latest revision of mail.py uses fnmatch, which follows the operating system's rules for case sensitivity:
[https://github.com/jonaswinkler/paperless-ng/blob/7bc8325df910ab57ed07849a3ce49a3011ba55b6/src/paperless_mail/mail.py#L279-L281](https://github.com/jonaswinkler/paperless-ng/blob/7bc8325df910ab57ed07849a3ce49a3011ba55b6/src/paperless_mail/mail.py#L279-L281) | open | 2021-10-17T03:42:01Z | 2021-10-31T11:05:58Z | https://github.com/jonaswinkler/paperless-ng/issues/1394 | [] | dpaulat | 1 |
joke2k/django-environ | django | 507 | db_url() fails with oracle DSN | `db_url` fails with an `oracle` URL that has no path, for example:
```python
import environ
env = environ.Env()
DB_URL = 'oracle://super_user:super_pass@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oracle_dev)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=XEPDB1)))'
DB = env.db('', default=DB_URL)
print("NAME:", DB['NAME'])
```
```bash
NAME: %28DESCRIPTION=%28ADDRESS=%28PROTOCOL=TCP%29%28HOST=oracle_dev%29%28PORT=1521%29%29%28CONNECT_DATA=%28SERVICE_NAME=XEPDB1%29%29%29
```
but with a path (adding a simple `/` to the end): `DB_URL = 'oracle://super_user:super_pass@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=oracle_dev)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=XEPDB1)))/'`:
```bash
NAME: (description=(address=(protocol=tcp)(host=oracle_dev)(port=1521))(connect_data=(service_name=xepdb1)))
```
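The triggering condition can be reproduced with the standard library alone: `urlparse` only yields a non-empty `path` when a `/` follows the authority part, so the parenthesised descriptor without a trailing slash leaves `url.path` empty. This is a sketch of the mechanism, not of django-environ itself, and the shortened DSN below is illustrative:

```python
from urllib.parse import urlparse

dsn = "oracle://super_user:super_pass@(DESCRIPTION=(ADDRESS=(HOST=oracle_dev)))"

no_slash = urlparse(dsn)
with_slash = urlparse(dsn + "/")

print(repr(no_slash.path))    # '' -> the "no path" branch quotes the whole URL
print(repr(with_slash.path))  # '/' -> the quoting branch is skipped
```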
suggested patch is:
```patch
--- /srv/env/lib/python3.10/site-packages/environ/environ.py
+++ /srv/env/lib/python3.10/site-packages/environ/environ.py
@@ -521,7 +521,7 @@
config = {}
# handle unexpected URL schemes with special characters
- if not url.path:
+    if not url.path and url.scheme != 'oracle':
url = _urlparse_quote(urlunparse(url))
# Remove query strings.
path = url.path[1:]
``` | open | 2023-10-25T13:54:11Z | 2023-10-25T13:54:11Z | https://github.com/joke2k/django-environ/issues/507 | [] | lsaavedr | 0 |
marcomusy/vedo | numpy | 792 | Having issues rendering anything with vedo | I had to do a clean install of my Ubuntu with WSL2 and Windows 11 and I have had issues in rendering things with vedo. It has worked fine in the past on the same computer. My new install didn't have libGL.so.1 installed so I did `apt-get install libgl1-mesa-glx libgl1-mesa-dri`. I have the same issue with PyVista rendering, but Matplotlib works fine. Do you have any ideas I could try to fix this issue? Or what information might be helpful? Thanks. | closed | 2023-01-19T04:48:13Z | 2023-02-23T01:11:39Z | https://github.com/marcomusy/vedo/issues/792 | [] | daniel-a-diaz | 6 |
graphistry/pygraphistry | pandas | 504 | Fwiw, do we need to update to track the latest dirty_cat, as it has been a while? | Fwiw, do we need to update to track the latest dirty_cat, as it has been a while?
(Maybe do as a follow-on PR after this lands?)
_Originally posted by @lmeyerov in https://github.com/graphistry/pygraphistry/pull/486#discussion_r1300244745_
https://github.com/graphistry/cu-cat/pull/4 | open | 2023-09-06T08:25:12Z | 2023-09-28T07:04:41Z | https://github.com/graphistry/pygraphistry/issues/504 | [] | dcolinmorgan | 1 |
marcomusy/vedo | numpy | 321 | Tube Shape Circle Orientation | While I'm sure building up my own mesh from equations will work, it would be nice if the input to the Tube function could be vectors that defined each circle's orientation rather than just points. While this isn't very useful for the body of the tube, orienting the end circles is necessary for creating an accurate graphic for my colleague's research. | closed | 2021-02-23T19:07:58Z | 2021-02-23T20:49:49Z | https://github.com/marcomusy/vedo/issues/321 | [] | JGarrett7 | 3 |
statsmodels/statsmodels | data-science | 9,056 | How to use X-13ARIMA-SEATS in Ubuntu? | Hi,
I would like to use X-13ARIMA-SEATS with Ubuntu. Do I understand correctly that in:
```python
statsmodels.tsa.x13.x13_arima_select_order(endog,
                                           maxorder=(2, 1),
                                           maxdiff=(2, 1),
                                           diff=None,
                                           exog=None,
                                           log=None,
                                           outlier=True,
                                           trading=False,
                                           forecast_periods=None,
                                           start=None, freq=None,
                                           print_stdout=False,
                                           x12path=None,
                                           prefer_x13=True,
                                           tempdir=None)
```
I have to use x12path='\x13as_asciisrc-v1-1-b60' with files inside:
> aaamain.f
abend.f
ac02ae.i
acfar.f
acfast.i
acfdgn.f
acf.f
acfhdr.f
acfptr.prm
acfst.i
I read page: [https://www.census.gov/data/software/x13as.X-13ARIMA-SEATS.html#list-tab-635278563](https://www.census.gov/data/software/x13as.X-13ARIMA-SEATS.html#list-tab-635278563) but I'm still confused.
Should I compile the code in Fortran?
I will be grateful for any suggestions.
| open | 2023-11-04T17:14:41Z | 2023-11-04T22:04:20Z | https://github.com/statsmodels/statsmodels/issues/9056 | [] | PeterPirog | 2 |
tensorpack/tensorpack | tensorflow | 1,460 | Base64 as input | Here is my code to make an inference:
```
import json

import numpy as np
import requests
from PIL import Image

# SERVER_ADDRESS / SERVER_PORT are defined elsewhere in the original setup
def send_request(im_path):
    img = np.array(Image.open(im_path))
    data = json.dumps({"signature_name": "serving_default", "instances": img.tolist()})
    headers = {"content-type": "application/json"}
    json_response = requests.post('http://' + SERVER_ADDRESS + ':' + SERVER_PORT + '/v1/models/a:predict', data=data,
                                  headers=headers)
    return json.loads(json_response.text), img.shape[:2]
```
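Side note on the 5 MB limit: whether the served model accepts base64 depends on how its signature was exported (something to verify, not a given), but the payload-size gap between a JSON pixel list and base64 can be sketched with the standard library alone; the byte string here is only a stand-in for image data:

```python
import base64
import json

raw = bytes(range(256)) * 100                   # 25,600 bytes standing in for image data
as_b64 = base64.b64encode(raw).decode("ascii")  # ~4/3 of the raw size
as_list = json.dumps(list(raw))                 # 1-3 digits plus separators per byte

print(len(as_b64), len(as_list))  # the JSON list is several times larger
```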
As you can see, a list built from an nd.array is sent to the model. That causes a lot of problems because the payload size is bigger than 5 MB, which is the AWS limit.
Is it possible to send a base64-encoded image (`base64.b64encode(image)`) to the model? Will it work? | closed | 2020-06-27T08:11:52Z | 2020-06-27T18:12:31Z | https://github.com/tensorpack/tensorpack/issues/1460 | [
"unrelated"
] | Adblu | 3 |
stanfordnlp/stanza | nlp | 1,380 | [QUESTION] Spanish Constituency to English Constituency Translation Dictionary? | Hello,
We have a program that relies heavily on the constituency parse for its English usage, and we're looking to expand our program to also handle Spanish text. We noticed that Spanish does have constituency information available, but the labels are all different (they're in Spanish). Is there any information you could share specifically about what these tags denote (e.g., a website with all the constituency tags) or maybe even translation information for the tags from Spanish to English?
Thanks for any help!
Best,
Jack | closed | 2024-04-09T18:17:23Z | 2025-01-22T00:53:08Z | https://github.com/stanfordnlp/stanza/issues/1380 | [
"question",
"stale"
] | jack-dempsey-cascade | 3 |
cvat-ai/cvat | pytorch | 8,630 | @lakshmikantdeshpande, it should be possible. We are going to use Keycloak for auth purposes in the near future. | @lakshmikantdeshpande, it should be possible. We are going to use Keycloak for auth purposes in the near future.
_Originally posted by @nmanovic in https://github.com/cvat-ai/cvat/issues/1217#issuecomment-607787437_
| closed | 2024-11-01T08:14:10Z | 2024-11-15T03:25:38Z | https://github.com/cvat-ai/cvat/issues/8630 | [] | Brokendisme | 2 |
nonebot/nonebot2 | fastapi | 3,237 | Plugin: nonebot_plugin_dingzhen | ### PyPI project name
nonebot_plugin_dingzhen
### Plugin import package name
nonebot_plugin_dingzhen
### Tags
[{"label":"丁真","color":"#ff337b"},{"label":"语音合成","color":"#1942ff"},{"label":"QQ","color":"#b6f111"}]
### Plugin configuration
```dotenv
```
```httpx
```
### Plugin test
- [ ] Check the box on the left if you want to re-run the plugin test | closed | 2025-01-05T05:30:55Z | 2025-01-05T06:05:30Z | https://github.com/nonebot/nonebot2/issues/3237 | [
"Plugin",
"Publish"
] | Pochinki98 | 2 |
JaidedAI/EasyOCR | deep-learning | 914 | Greek Language | First of all, thank you so much for this wonderful work. What about the Greek language? Isn't it supported yet?
Or please tell me how I can train the EasyOCR model with the Greek language. | open | 2022-12-23T10:09:31Z | 2023-01-14T04:57:15Z | https://github.com/JaidedAI/EasyOCR/issues/914 | [] | Ham714 | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 488 | preprocessing VoxCeleb2 is not working | While running encoder_preprocess on the VoxCeleb2 dataset, I'm getting the following warning and nothing else happens, and I'm not sure why:
```
raw: Preprocessing data for 5994 speakers.
raw: 0%| | 0/5994 [00:00<?, ?speakers/s]
/home/amin/.local/lib/python3.6/site-packages/librosa/core/audio.py:161: UserWarning: PySoundFile failed. Trying audioread instead.
warnings.warn('PySoundFile failed. Trying audioread instead.')
/home/amin/.local/lib/python3.6/site-packages/librosa/core/audio.py:161: UserWarning: PySoundFile failed. Trying audioread instead.
warnings.warn('PySoundFile failed. Trying audioread instead.')
``` | closed | 2020-08-12T21:31:06Z | 2020-08-19T08:31:25Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/488 | [] | amintavakol | 5 |
raphaelvallat/pingouin | pandas | 151 | Sorting data frame is changing the results of pairwise_ttests despite same values | Hi, thanks for the great package.
I think I ran into a potentially worrisome issue: after sorting a data frame, `pairwise_ttests` gives a different (wrong) result despite the values being the same. `mixed_anova` is not affected.
I encountered it with my own data but reproduced it using the example dataset:
Before sorting

After sorting (wrong result)

| closed | 2021-01-15T17:09:28Z | 2021-01-20T01:24:47Z | https://github.com/raphaelvallat/pingouin/issues/151 | [
"bug :boom:",
"invalid :triangular_flag_on_post:"
] | mpcoll | 5 |
donnemartin/data-science-ipython-notebooks | machine-learning | 92 | . | closed | 2022-12-20T10:56:58Z | 2022-12-21T14:53:19Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/92 | [] | Alihadi919 | 1 | |
aiortc/aiortc | asyncio | 437 | Low fps of output RTMP stream | Hello. I am trying to use this solution to relay WebRTC -> RTMP. I am facing a problem with low fps in the output video, while the processor load on the server does not exceed 10-15%. What can be done to ensure that the solution consumes all available resources? | closed | 2020-11-26T06:14:00Z | 2021-03-07T14:51:17Z | https://github.com/aiortc/aiortc/issues/437 | [
"invalid"
] | MsWik | 1 |
coqui-ai/TTS | deep-learning | 3,758 | [Bug] ValueError: Can't infer missing attention mask on `mps` device. Please provide an `attention_mask` or use a different device. | ### Describe the bug
```
(ai) (base) yuki@yuki pho % python tts.py
OMP: Info #276: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.
> Downloading model to /Users/yuki/Library/Application Support/tts/tts_models--multilingual--multi-dataset--xtts_v2
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.87G/1.87G [01:04<00:00, 29.1MiB/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.37k/4.37k [00:00<00:00, 11.8kiB/s]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 361k/361k [00:00<00:00, 633kiB/s]
> Model's license - CPML | 0.00/32.0 [00:00<?, ?iB/s]
> Check https://coqui.ai/cpml.txt for more info.
> Using model: xtts
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
IMPORTANT: You are using gradio version 3.48.0, however version 4.29.0 is available, please upgrade.
--------
/opt/anaconda3/envs/ai/lib/python3.9/site-packages/gradio/processing_utils.py:188: UserWarning: Trying to convert audio automatically from int32 to 16-bit int format.
warnings.warn(warning.format(data.dtype))
> Text splitted to sentences.
['Hello World']
Traceback (most recent call last):
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/gradio/routes.py", line 534, in predict
output = await route_utils.call_process_api(
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/gradio/route_utils.py", line 226, in call_process_api
output = await app.get_blocks().process_api(
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/gradio/blocks.py", line 1550, in process_api
result = await self.call_function(
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/gradio/blocks.py", line 1185, in call_function
prediction = await anyio.to_thread.run_sync(
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/gradio/utils.py", line 661, in wrapper
response = f(*args, **kwargs)
File "/Users/yuki/Music/Ivy/pho/tts.py", line 12, in clone
tts.tts_to_file(text=text, speaker_wav=audio, language="en", file_path="./output.wav")
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/TTS/api.py", line 432, in tts_to_file
wav = self.tts(
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/TTS/api.py", line 364, in tts
wav = self.synthesizer.tts(
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/TTS/utils/synthesizer.py", line 383, in tts
outputs = self.tts_model.synthesize(
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/TTS/tts/models/xtts.py", line 397, in synthesize
return self.inference_with_config(text, config, ref_audio_path=speaker_wav, language=language, **kwargs)
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/TTS/tts/models/xtts.py", line 419, in inference_with_config
return self.full_inference(text, ref_audio_path, language, **settings)
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/TTS/tts/models/xtts.py", line 488, in full_inference
return self.inference(
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/TTS/tts/models/xtts.py", line 539, in inference
gpt_codes = self.gpt.generate(
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/TTS/tts/layers/xtts/gpt.py", line 590, in generate
gen = self.gpt_inference.generate(
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/transformers/generation/utils.py", line 1569, in generate
model_kwargs["attention_mask"] = self._prepare_attention_mask_for_generation(
File "/opt/anaconda3/envs/ai/lib/python3.9/site-packages/transformers/generation/utils.py", line 468, in _prepare_attention_mask_for_generation
raise ValueError(
ValueError: Can't infer missing attention mask on `mps` device. Please provide an `attention_mask` or use a different device.
```
### To Reproduce
Run this:
```
import gradio as gr
import torch
from TTS.api import TTS
import os
os.environ["COQUI_TOS_AGREED"] = "1"
device = "mps"
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
def clone(text, audio):
    tts.tts_to_file(text=text, speaker_wav=audio, language="en", file_path="./output.wav")
    return "./output.wav"

iface = gr.Interface(fn=clone,
                     inputs=[gr.Textbox(label='Text'), gr.Audio(type='filepath', label='Voice reference audio file')],
                     outputs=gr.Audio(type='filepath'),
                     title='Voice Clone',
                     theme=gr.themes.Base(primary_hue="teal", secondary_hue="teal", neutral_hue="slate"))
iface.launch()
```
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": null
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.3.0",
"TTS": "0.21.3",
"numpy": "1.22.0"
},
"System": {
"OS": "Darwin",
"architecture": [
"64bit",
""
],
"processor": "arm",
"python": "3.9.19",
"version": "Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000"
}
}
```
### Additional context
Hardware: MacBook Pro M1 | closed | 2024-05-26T08:26:24Z | 2024-11-11T08:36:26Z | https://github.com/coqui-ai/TTS/issues/3758 | [
"bug",
"wontfix"
] | yukiarimo | 22 |
encode/apistar | api | 395 | View annotated type not used in schema generation | The generated schema does not include the response type, even when I am annotating it.
| closed | 2018-02-11T22:50:07Z | 2018-03-19T21:59:19Z | https://github.com/encode/apistar/issues/395 | [] | leiserfg | 3 |
Lightning-AI/pytorch-lightning | deep-learning | 20,466 | Allowing setting timeout in DeepSpeedStrategy | ### Outline & Motivation
In DDPStrategy / FSDPStrategy, the `timeout=datetime.timedelta(seconds=1800)` flag is exposed, allowing the user to tune it. However, in DeepSpeedStrategy, which is a subclass of DDPStrategy, this flag is not exposed, which makes it hard to change the timeout behavior.
Is there any workaround? Otherwise, I think it might be worth adding `kwargs` to the `__init__()` function of DeepSpeedStrategy and passing those parameters along to the parent class DDPStrategy.
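The proposed forwarding is a standard subclassing pattern. A minimal stand-in sketch with plain classes (the names below are placeholders, not Lightning's actual classes):

```python
from datetime import timedelta

class BaseStrategy:                       # stand-in for DDPStrategy
    def __init__(self, timeout: timedelta = timedelta(seconds=1800)):
        self.timeout = timeout

class DerivedStrategy(BaseStrategy):      # stand-in for DeepSpeedStrategy
    def __init__(self, stage: int = 2, **kwargs):
        super().__init__(**kwargs)        # forward e.g. timeout to the parent
        self.stage = stage

s = DerivedStrategy(stage=3, timeout=timedelta(minutes=5))
print(s.timeout)  # 0:05:00
```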
### Pitch
_No response_
### Additional context
_No response_
cc @borda @awaelchli @justusschock | closed | 2024-12-04T14:24:22Z | 2024-12-10T10:15:22Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20466 | [
"feature",
"strategy: deepspeed"
] | jedyang97 | 1 |
RomelTorres/alpha_vantage | pandas | 332 | TimeSeries.get_daily() possibly retrieving incorrect stock data? | Hello, sorry if this doesn't qualify as an API issue, but upon retrieving data for NVDA, some of the numbers seem a bit off. I can't find evidence that this stock was ever over $500, but this is what is returned for dates throughout early last year.
Code snippet:
```python
from alpha_vantage.timeseries import TimeSeries

# note: the original snippet used `ts` without creating it; something like this is assumed
ts = TimeSeries(key='YOUR_API_KEY', output_format='pandas')
nvda, _ = ts.get_daily(symbol='NVDA', outputsize='full')
nvda.loc['2021-01-14']
```
Output:

| closed | 2022-01-14T18:57:51Z | 2022-01-18T17:36:26Z | https://github.com/RomelTorres/alpha_vantage/issues/332 | [] | ChristopheBrown | 5 |
nok/sklearn-porter | scikit-learn | 59 | Invalid Java generated for random forest | There are several compile errors when transpiling random forests to Java:
* At the start of each predict_x method:
`int[] classes = new int[[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]];` should be
`int[] classes = new int[] { 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 };`.
* At the end of each predict_x method:
`for (int i = 1; i < [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]; i++)` should be
`for (int i = 1; i < classes.length; i++)`.
* At the start of each predict method:
`int n_classes = [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2]; int[] classes = new int[n_classes];` should be
`int[] classes = new int[] { 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 }; int n_classes = classes.length;`
Maybe there are other errors too, because the transpiled random forest does not produce the same result as Python. | closed | 2019-09-10T12:51:00Z | 2022-05-16T21:56:05Z | https://github.com/nok/sklearn-porter/issues/59 | [] | markusheiden | 1 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 231 | Multi-Label Datasets | Hi There,
Firstly, great package! I have a question regarding multi-label datasets. I have a set of images that could belong to multiple classes, and I cannot find in the documentation a way of training a model in a multi-label scenario, or a way of generating custom triplets that can then be passed to the loss function.
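One common convention for building triplets from multi-label data (an assumption on my part, not the library's documented API) is that a positive shares at least one label with the anchor and a negative shares none. A minimal sketch:

```python
def shares_label(a, b):
    return bool(set(a) & set(b))

labels = {"img1": {"cat"}, "img2": {"cat", "dog"}, "img3": {"car"}}  # toy data

triplets = [
    (anchor, pos, neg)
    for anchor in labels
    for pos in labels
    for neg in labels
    if anchor != pos
    and shares_label(labels[anchor], labels[pos])      # positive: overlapping labels
    and not shares_label(labels[anchor], labels[neg])  # negative: disjoint labels
]
print(triplets)  # [('img1', 'img2', 'img3'), ('img2', 'img1', 'img3')]
```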
Thanks,
Harpal | closed | 2020-11-12T20:34:14Z | 2023-10-14T05:21:08Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/231 | [
"question"
] | harpalsahota | 6 |
skforecast/skforecast | scikit-learn | 283 | About multiseries: level and level weights setting in grid_search_forecaster_multiseries | Hi developers,
To make better use of the multi-series functionality, I am trying to understand the multi-series mechanism more deeply, based on the [case study](https://www.cienciadedatos.net/documentos/py44-multi-series-forecasting-skforecast.html). However, I still have some confusion about it.
1. Mechanism: "All series are modeled considering that each time series depends not only on its past values but also on the past values of the other series. The forecaster is expected not only to learn the information of each series separately but also to relate them." Could you explain a little bit more? I tried to read the source codes but I didn't get it after finishing reading the `fit()` part and got lost.
2. When I conduct the experiment, I follow the example of the case study and try to change some code to conduct my own experiments. In the example, `levels` and `levels_weights` are both `None`:
```python
results_grid_ms = grid_search_forecaster_multiseries(
    forecaster     = forecaster_ms,
    series         = data.loc[:end_val, :],
    levels         = None,
    levels_weights = None)
```
I would like to control the levels and levels_weights to conduct my experiments and follow the API reference, but I am getting the error `ValueError: "level" must be one of the "series_levels" : ['A','B','C','D']`.
Does it mean that I need to create the for loop like the example of the backtesting_forecaster_multiseries?
```python
for i, item in enumerate(data.columns):
    metric, preds = backtesting_forecaster_multiseries(
        forecaster = forecaster_ms,
        level = item,
        series = data,
        steps = 7,
    )
```
change into:
```python
for i, item in enumerate(data.columns):
    results_grid_ms = grid_search_forecaster_multiseries(
        forecaster = forecaster_ms,
        level = item,            # not sure
        levels_weights = 1,      # not sure
        series = data,
        steps = 7,
    )
```
However, the point of this method is to maintain "a single model rather than several"; if I create a loop over the levels, that defeats the purpose.
3. I am not sure whether I am doing the prediction appropriately. Without considering the levels and levels_weights, after building `results_grid_ms = grid_search_forecaster_multiseries(forecaster = forecaster_ms, series = series)`, I create a loop for prediction:
```python
predictions_ms = {}
for i, col in enumerate(all_inputs.columns):
    preds = forecaster_ms.predict(steps=steps, level=col, exog=None)
    predictions_ms[col] = preds
```
Is it correct?
| closed | 2022-11-10T16:15:27Z | 2022-12-06T08:50:25Z | https://github.com/skforecast/skforecast/issues/283 | [] | kennis222 | 1 |
desec-io/desec-stack | rest-api | 670 | GUI Setup Instructions for dedyn.io Domains Misleading | When creating a new dedyn domain, the GUI shows this:

This is misleading because the user **does not** need to set up any DS records in the case of dedyn domains.
Proposed fix: the GUI should check if the name falls under the public suffix domains and make the instructions conditional. For dedyn / local public suffix domains, it should not display the information on DS records (or instead a hint that the DS records are deployed automatically).
The list of local public suffixes is [available in `DomainSetup.vue` as `LOCAL_PUBLIC_SUFFIXES`](https://github.com/desec-io/desec-stack/blob/ef688c410e3918ff1aeef4b0585aa78b5e4dfc84/www/webapp/src/views/DomainSetup.vue#L165) but is currently not used.
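A minimal sketch of the conditional check (written in Python for illustration; the GUI itself is JavaScript, and the one-item suffix list here is an assumed stand-in for the real `LOCAL_PUBLIC_SUFFIXES`):

```python
LOCAL_PUBLIC_SUFFIXES = ["dedyn.io"]  # illustrative; the real list lives in DomainSetup.vue

def under_local_public_suffix(name: str) -> bool:
    return any(name == s or name.endswith("." + s) for s in LOCAL_PUBLIC_SUFFIXES)

# show the DS-record instructions only when this returns False
print(under_local_public_suffix("example.dedyn.io"))  # True
print(under_local_public_suffix("example.com"))       # False
```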
@Rotzbua Maybe you are interested in taking a look at this? :rocket: | open | 2023-01-31T10:24:25Z | 2024-10-07T16:59:10Z | https://github.com/desec-io/desec-stack/issues/670 | [
"bug",
"help wanted",
"easy",
"gui"
] | nils-wisiol | 1 |
tflearn/tflearn | tensorflow | 775 | Googlenet | I have the following issue just running the code in
https://github.com/tflearn/tflearn/blob/master/examples/images/googlenet.py
Run id: googlenet_oxflowers17
Log directory: /tmp/tflearn_logs/
INFO:tensorflow:Summary name Accuracy/ (raw) is illegal; using Accuracy/__raw_ instead.
---------------------------------
Training samples: 1224
Validation samples: 136
--
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-1-a98633238646> in <module>()
142 model.fit(X, Y, n_epoch=1000, validation_set=0.1, shuffle=True,
143 show_metric=True, batch_size=64, snapshot_step=200,
--> 144 snapshot_epoch=False, run_id='googlenet_oxflowers17')
/Users/gulli/miniconda2/envs/tensorflow/lib/python2.7/site-packages/tflearn/models/dnn.pyc in fit(self, X_inputs, Y_targets, n_epoch, validation_set, show_metric, batch_size, shuffle, snapshot_epoch, snapshot_step, excl_trainops, validation_batch_size, run_id, callbacks)
214 excl_trainops=excl_trainops,
215 run_id=run_id,
--> 216 callbacks=callbacks)
217
218 def fit_batch(self, X_inputs, Y_targets):
/Users/gulli/miniconda2/envs/tensorflow/lib/python2.7/site-packages/tflearn/helpers/trainer.pyc in fit(self, feed_dicts, n_epoch, val_feed_dicts, show_metric, snapshot_step, snapshot_epoch, shuffle_all, dprep_dict, daug_dict, excl_trainops, run_id, callbacks)
337 (bool(self.best_checkpoint_path) | snapshot_epoch),
338 snapshot_step,
--> 339 show_metric)
340
341 # Update training state
/Users/gulli/miniconda2/envs/tensorflow/lib/python2.7/site-packages/tflearn/helpers/trainer.pyc in _train(self, training_step, snapshot_epoch, snapshot_step, show_metric)
816 tflearn.is_training(True, session=self.session)
817 _, train_summ_str = self.session.run([self.train, self.summ_op],
--> 818 feed_batch)
819
820 # Retrieve loss value from summary string
/Users/gulli/miniconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in run(self, fetches, feed_dict, options, run_metadata)
776 try:
777 result = self._run(None, fetches, feed_dict, options_ptr,
--> 778 run_metadata_ptr)
779 if run_metadata:
780 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/Users/gulli/miniconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/client/session.pyc in _run(self, handle, fetches, feed_dict, options, run_metadata)
959 'Cannot feed value of shape %r for Tensor %r, '
960 'which has shape %r'
--> 961 % (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
962 if not self.graph.is_feedable(subfeed_t):
963 raise ValueError('Tensor %s may not be fed.' % subfeed_t)
ValueError: Cannot feed value of shape (64, 224, 224, 3) for Tensor u'InputData/X:0', which has shape '(?, 227, 227, 3)'
| open | 2017-05-28T06:28:40Z | 2017-05-28T06:28:40Z | https://github.com/tflearn/tflearn/issues/775 | [] | agulli | 0 |
allure-framework/allure-python | pytest | 369 | allure-pytest 2.6.2: executing tests only generates JSON and TXT files | allure-pytest 2.6.2: executing tests only generates JSON and TXT files; running `allure generate reports/ -o reports/html` gives:
Exception in thread "main" ru.yandex.qatools.allure.data.ReportGenerationException: Could not find any allure results
        at ru.yandex.qatools.allure.data.AllureReportGenerator.generate(AllureReportGenerator.java:58)
        at ru.yandex.qatools.allure.data.AllureReportGenerator.generate(AllureReportGenerator.java:53)
        at ru.yandex.qatools.allure.AllureMain.main(AllureMain.java:48)
Command aborted due to exception {}.
org.apache.commons.exec.ExecuteException: Process exited with an error: 1 (Exit value: 1)
        at org.apache.commons.exec.DefaultExecutor.executeInternal(DefaultExecutor.java:404)
        at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:166)
        at org.apache.commons.exec.DefaultExecutor.execute(DefaultExecutor.java:153)
        at ru.yandex.qatools.allure.command.ReportGenerate.runUnsafe(ReportGenerate.java:48)
        at ru.yandex.qatools.allure.command.AbstractCommand.run(AbstractCommand.java:52)
        at ru.yandex.qatools.allure.CommandLine.main(CommandLine.java:46) | closed | 2019-04-16T01:49:59Z | 2019-07-12T04:33:22Z | https://github.com/allure-framework/allure-python/issues/369 | [] | lc308903655 | 4 |
ultralytics/ultralytics | machine-learning | 19,188 | Selecting a better metric for the "best" model | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi!
I am using YOLO11 segmentation with the large model. For my use case, I have 3-4 large objects in each image, and deciding which class each belongs to is very obvious, so I never have issues with precision or recall. I also don't care about bounding boxes. The only thing I care about is how accurate the segmentation masks are.

Oftentimes, the best model (from a mAP standpoint) does not produce the best segmentation masks. I normally train for a set number of epochs, save the model at each epoch, select the weights from the epoch with the lowest validation segmentation loss, and deploy that model.
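
The mask-quality criterion described above can be made concrete, for example as mean per-image mask IoU. A minimal dependency-free sketch (names are illustrative):

```python
def mask_iou(pred, gt):
    """IoU between two binary masks given as equal-shaped nested lists/arrays."""
    inter = union = 0
    for p_row, g_row in zip(pred, gt):
        for p, g in zip(p_row, g_row):
            p, g = bool(p), bool(g)
            inter += p and g   # pixel counted when both masks agree it is object
            union += p or g    # pixel counted when either mask marks it
    return inter / union if union else 1.0

# Half of the marked pixels overlap here:
print(mask_iou([[1, 1], [0, 0]], [[1, 0], [0, 0]]))  # 0.5
```

Averaging this over the validation images each epoch, and picking the checkpoint that maximizes it, would track mask accuracy more directly than box mAP or the seg loss proxy.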
This works, but is there a way to change the metric used to select the "best" model? Is there an even better metric for my application than validation segmentation loss?
Thanks!
### Additional
_No response_ | open | 2025-02-11T17:09:35Z | 2025-02-14T19:42:37Z | https://github.com/ultralytics/ultralytics/issues/19188 | [
"question",
"segment"
] | Tom-Forsyth | 6 |
pallets-eco/flask-wtf | flask | 355 | Pinning down wtfforms | Can you consider pinning down versions of your dependencies? | closed | 2019-01-17T22:12:22Z | 2021-05-26T00:55:04Z | https://github.com/pallets-eco/flask-wtf/issues/355 | [] | szb0 | 1 |
saulpw/visidata | pandas | 1,860 | BigQuery TypeError: connect() got an unexpected keyword argument 'database' | I installed using `pip install ibis-framework[bigquery] vdsql` on python 3.8 on WSL.
Connecting with `vdsql bigquery:///my-project-name` resulted in a listing of all the datasets within the project. So far so good.
Upon highlighting a dataset, I got the error: `TypeError: connect() got an unexpected keyword argument 'database'`
Installing using `pip install git+https://github.com/visidata/vdsql.git` results in the same outcome.
Is there a specific version of `ibis-framework` I should be forcing? | closed | 2022-11-15T13:52:31Z | 2023-11-01T18:10:03Z | https://github.com/saulpw/visidata/issues/1860 | [
"vdsql"
] | ghost | 8 |
airtai/faststream | asyncio | 1,856 | Bug: Duplicate logs when using application factory | **Describe the bug**
When running the code using an application factory, logs are duplicated. If you uncomment the block of code that does not use the factory, logs are written three times. It appears that multiple instances of CriticalLogMiddleware are being created, even though only one middleware is listed in broker._middlewares.
**How to reproduce**
Include source code:
```python
from faststream import FastStream
from faststream.nats import JStream, NatsBroker, NatsRouter
router = NatsRouter()
@router.subscriber("logtest", stream=JStream("some"), queue="some-logtest")
async def testhandler(message: str): ...
# b = NatsBroker()
# b.include_router(router)
# a = FastStream(b)
# @a.after_startup
# async def _():
# await b.publish("logtest", "logtest")
def get_app():
b = NatsBroker()
b.include_router(router)
a = FastStream(b)
@a.after_startup
async def _():
await b.publish("logtest", "logtest")
return a
```
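
For what it's worth, the duplicate/triple pattern matches what happens whenever app construction registers an extra sink on a shared object. A generic stdlib `logging` analogy, not FastStream's actual code:

```python
import logging

def get_app():
    """Toy factory: attaches a fresh handler on every call, the way repeated
    middleware registration would. Each call adds one more copy of the output."""
    logger = logging.getLogger("demo")
    logger.addHandler(logging.StreamHandler())
    return logger

app1 = get_app()
app2 = get_app()  # same underlying logger, now with two handlers
print(len(logging.getLogger("demo").handlers))  # 2 -> each record emitted twice
```

If each app construction (the CLI calling the factory, plus any module-level instantiation) attaches its own log middleware to a process-global logger in a similar way, every constructed app would add one more copy of each record, which would explain the two-vs-three counts above.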
| closed | 2024-10-18T17:32:20Z | 2024-11-07T16:37:58Z | https://github.com/airtai/faststream/issues/1856 | [
"bug"
] | ulbwa | 0 |
tqdm/tqdm | jupyter | 624 | tqdm.__repr__() crashes when disable=True | - [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
>>> import tqdm, sys
>>> print(tqdm.__version__, sys.version, sys.platform)
('4.26.0', '2.7.14 (default, Mar 22 2018, 15:04:47) \n[GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)]', 'darwin')
>>> a = tqdm.tqdm([], disable=True)
>>> print a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/kratsg/.virtualenvs/pyhf/lib/python2.7/site-packages/tqdm/_tqdm.py", line 894, in __repr__
elapsed if elapsed is not None else self._time() - self.start_t,
AttributeError: 'tqdm' object has no attribute '_time'
>>> str(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/kratsg/.virtualenvs/pyhf/lib/python2.7/site-packages/tqdm/_tqdm.py", line 894, in __repr__
elapsed if elapsed is not None else self._time() - self.start_t,
AttributeError: 'tqdm' object has no attribute '_time'
>>> dir(a)
['__class__', '__del__', '__delattr__', '__dict__', '__doc__', '__enter__', '__eq__', '__exit__', '__format__', '__ge__', '__getattribute__', '__gt__', '__hash__', '__init__', '__iter__', '__le__', '__len__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_comparable', '_decr_instances', '_get_free_pos', '_instances', '_lock', 'clear', 'close', 'disable', 'external_write_mode', 'format_interval', 'format_meter', 'format_sizeof', 'get_lock', 'iterable', 'monitor', 'monitor_interval', 'moveto', 'n', 'pandas', 'pos', 'refresh', 'set_description', 'set_description_str', 'set_lock', 'set_postfix', 'set_postfix_str', 'status_printer', 'total', 'unpause', 'update', 'write']
```
Why does this occur? If disabled, then this block is evaluated during initialization (https://github.com/tqdm/tqdm/blob/master/tqdm/_tqdm.py#L763-L770) which immediately returns and therefore skips the "store the arguments" portion: https://github.com/tqdm/tqdm/blob/master/tqdm/_tqdm.py#L819-L841.
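
The early-return pattern described above can be reproduced in miniature with a toy class (illustrative only, not tqdm's actual implementation):

```python
import time

class MiniBar:
    """Toy class: __init__ returns early when disabled, so the attributes
    that __repr__ relies on are never set."""

    def __init__(self, iterable=None, disable=False):
        self.iterable = iterable
        self.disable = disable
        if disable:
            return  # skips the setup below, like tqdm's early-return block
        self._time = time.time
        self.start_t = self._time()

    def __repr__(self):
        # assumes _time and start_t always exist; false when disable=True
        return "elapsed=%.2f" % (self._time() - self.start_t)

repr(MiniBar([]))  # fine
try:
    repr(MiniBar([], disable=True))
except AttributeError as err:
    print(err)  # '_time' is missing, as in the report above
```

Any fix would presumably either store the timing attributes before the early return or make `__repr__` tolerate their absence.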
Not clear to me how this should be handled or if this is expected.
hat-tip to @matthewfeickert for finding this bug.
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
| closed | 2018-10-04T23:33:02Z | 2021-02-09T18:07:17Z | https://github.com/tqdm/tqdm/issues/624 | [
"p0-bug-critical ☢",
"to-fix ⌛",
"c1-quick 🕐"
] | kratsg | 3 |
STVIR/pysot | computer-vision | 539 | performance drops when fine-tuning resnet50 in siamrpn++ | Has anyone encountered this problem when training siamrpn++? Before fine-tuning the backbone network, the success rate of OTB2015 is 0.6. Once the backbone network is fine-tuned, it drops straight to 0.4. The training loss drops normally. | closed | 2021-07-08T08:50:01Z | 2021-07-12T10:57:45Z | https://github.com/STVIR/pysot/issues/539 | [] | WuFengGit | 0 |
Lightning-AI/pytorch-lightning | pytorch | 19,563 | EarlyStopping in the middle of an epoch | ### Description & Motivation
I'm fitting a normalizing flow to learn the mapping between two embedding spaces. The first embedding space is sampled using the mapper of a pretrained StyleGAN, and the second is produced by a pretrained convnet. I want to learn a mapper from the second embedding space back to the first one. Since the StyleGAN can produce infinite data, I'm using an iterable dataset across one single epoch that encompasses the entire training run. So, I want `EarlyStopping` to trigger in the middle of the epoch. Validation data isn't available.
### Pitch
An option called `check_interval` should be added to `EarlyStopping`. If the value is a float, it is the fraction of an epoch between checks. If the value is an integer, it is the amount of training steps between checks. For the change to be non-breaking, its default should be `1.0`.
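
To make the proposed semantics concrete, the option could resolve to a step count roughly like this (sketch under assumed names; none of this is existing Lightning API):

```python
def steps_between_checks(check_interval, steps_per_epoch):
    """Translate the proposed `check_interval` into a number of training steps:
    a float is a fraction of an epoch, an int is an explicit step count."""
    if isinstance(check_interval, bool):
        raise TypeError("check_interval must be a float or an int")
    if isinstance(check_interval, float):
        if not 0.0 < check_interval <= 1.0:
            raise ValueError("fractional check_interval must be in (0, 1]")
        return max(1, int(steps_per_epoch * check_interval))
    return int(check_interval)

print(steps_between_checks(1.0, 800))   # 800: today's once-per-epoch behaviour
print(steps_between_checks(0.25, 800))  # 200: four checks per epoch
print(steps_between_checks(500, 800))   # 500: check every 500 training steps
```

With the default of `1.0` this degenerates to the current end-of-epoch check, so the change stays non-breaking.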
### Alternatives
Currently, I'm passing the EarlyStopping callback to the LightningModule and manually calling the check at the end of each training batch:
```py
def on_train_batch_end(self, outputs, batch, batch_idx):
self.early_stopping_callback._run_early_stopping_check(self.trainer)
```
### Additional context
_No response_
cc @borda @carmocca @awaelchli | open | 2024-03-03T06:43:20Z | 2024-03-03T18:12:25Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19563 | [
"feature",
"callback: early stopping"
] | Richienb | 0 |
autokey/autokey | automation | 283 | Support unicode | Currently both trigger and replacement parts of hotstrings/abbreviations seem to ignore non-ASCII chars.
#114 covers the trigger part, but the same is absolutely valid for replacement part.
Try to make a hotstring/abbreviation that would send `θ` or any other non-ASCII char - those chars will simply get filtered out. | open | 2019-05-14T16:24:58Z | 2024-03-30T11:53:50Z | https://github.com/autokey/autokey/issues/283 | [
"duplicate",
"enhancement",
"phrase expansion",
"scripting",
"autokey triggers"
] | Drugoy | 5 |
ultralytics/ultralytics | deep-learning | 19,082 | nms=true for exporting to onnx | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
I get this error:
```
(yolo) root@workstation-016:/mnt/4T/Tohidi/object_detector_service# yolo export model=yolo11x.pt nms=true format=engine device=3
Ultralytics 8.3.71 🚀 Python-3.10.0 torch-2.5.1+cu124 CUDA:3 (NVIDIA H100 PCIe, 80995MiB)
YOLO11x summary (fused): 464 layers, 56,919,424 parameters, 0 gradients, 194.9 GFLOPs
Traceback (most recent call last):
  File "/opt/anaconda3/envs/yolo/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/cfg/__init__.py", line 986, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/engine/model.py", line 740, in export
    return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 354, in __call__
    y = NMSModel(model, self.args)(im) if self.args.nms and not coreml else model(im)
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/ultralytics/engine/exporter.py", line 1559, in forward
    extra_shape = pred.shape[-1] - (4 + self.model.nc)  # extras from Segment, OBB, Pose
  File "/opt/anaconda3/envs/yolo/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1931, in __getattr__
    raise AttributeError(
AttributeError: 'DetectionModel' object has no attribute 'nc'
****
### Environment
```
Ultralytics 8.3.71 🚀 Python-3.10.0 torch-2.5.1+cu124 CUDA:0 (NVIDIA H100 80GB HBM3, 80995MiB)
Setup complete ✅ (255 CPUs, 1007.7 GB RAM, 1807.6/1831.2 GB disk)
OS Linux-5.15.0-131-generic-x86_64-with-glibc2.35
Environment Linux
Python 3.10.0
Install pip
RAM 1007.65 GB
Disk 1807.6/1831.2 GB
CPU AMD EPYC 7773X 64-Core Processor
CPU count 255
GPU NVIDIA H100 80GB HBM3, 80995MiB
GPU count 6
CUDA 12.4
numpy ✅ 1.26.4<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.1>=1.4.1
torch ✅ 2.5.1>=1.8.0
torch ✅ 2.5.1!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.0.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
```
### Minimal Reproducible Example
```
yolo export model=yolo11x.pt format=engine device=3 nms=true
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-02-05T12:00:05Z | 2025-02-06T02:43:54Z | https://github.com/ultralytics/ultralytics/issues/19082 | [
"bug",
"fixed",
"exports"
] | mohamad-tohidi | 2 |