repo_name stringlengths 9 75 | topic stringclasses 30 values | issue_number int64 1 203k | title stringlengths 1 976 | body stringlengths 0 254k | state stringclasses 2 values | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | url stringlengths 38 105 | labels listlengths 0 9 | user_login stringlengths 1 39 | comments_count int64 0 452 |
|---|---|---|---|---|---|---|---|---|---|---|---|
voila-dashboards/voila | jupyter | 1,438 | Voila - path traversal vulnerability | <!--
Welcome! Before creating a new issue please search for relevant issues and recreate the issue in a fresh environment.
-->
## Description
<!--Describe the bug clearly and concisely. Include screenshots/gifs if possible-->
We have a Voila instance on a Linux server that has been flagged for a path traversal vulnerability.
Has anyone faced this before?
We would like to know of any possible solution to it.
## Context
<!--Complete the following for context, and add any other relevant context-->
- voila version: 0.4.1
- Operating System and version: RedHat Linux 7.9
- Browser and version: Chrome v119
| closed | 2024-01-19T11:49:32Z | 2024-01-19T12:41:12Z | https://github.com/voila-dashboards/voila/issues/1438 | [
"bug"
] | sustarun | 1 |
ultralytics/ultralytics | machine-learning | 18,845 | High GPU usage when arg show=False | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi,
I'm running YOLO prediction using a custom-trained model with an Nvidia T100 GPU on a Debian-based Linux OS, using this command:
`truck_results = model_tk(MAINPATH+"streams/truck.streams", task='detect', stream=True, conf=0.7,imgsz=1280, save=False, show=True,verbose=False)`
If I run the inference with the argument show=False, the GPU usage is between 88-92%; if I instead set it to True and run in a GUI session, the usage is very low (10-20%).
What am I missing?
Thanks
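For reference, with `stream=True` the predictor returns a lazy generator: frames are only processed as the loop consumes them, so whatever sits in the loop body (such as an on-screen display when show=True) paces the GPU. A toy model of that pacing, in plain Python with no ultralytics dependency:

```python
import time

def frames(n=5):
    for i in range(n):
        yield i          # stands in for one lazily-run YOLO inference

t0 = time.perf_counter()
for _ in frames():
    pass                 # fast consumer, like show=False: next frame starts immediately
fast = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in frames():
    time.sleep(0.01)     # slow consumer, like show=True's display step
slow = time.perf_counter() - t0

print(slow > fast)       # True: the loop body paces the producer
```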
### Additional
_No response_ | closed | 2025-01-23T12:08:39Z | 2025-01-25T17:24:52Z | https://github.com/ultralytics/ultralytics/issues/18845 | [
"question",
"detect"
] | lucamancusodev | 16 |
DistrictDataLabs/yellowbrick | matplotlib | 402 | ParallelCoordinates IndexError accessing classes with bikeshare data (bug in tutorial) | I just tried running a basic example of parallel coordinates on the bikeshare data mentioned in the quickstart tutorial and got an "index out of range" error. I am using Anaconda on Windows.
### Proposal/Issue
When I tried plotting parallel coordinates, it took some time and then got "IndexError: list index out of range"
### Code Snippet
```python
import pandas as pd
from yellowbrick.features import ParallelCoordinates

data = pd.read_csv('./data/bikeshare.csv')
X = data[[
    "season", "month", "hour", "holiday", "weekday", "workingday",
    "weather", "temp", "feelslike", "humidity", "windspeed"
]]
y = data["riders"]

visualizer = ParallelCoordinates()
visualizer.fit_transform(X, y)
visualizer.poof()
```
 | closed | 2018-05-14T15:16:54Z | 2020-06-12T17:38:48Z | https://github.com/DistrictDataLabs/yellowbrick/issues/402 | [
"type: bug",
"priority: medium"
] | bhavyaghai | 7 |
OpenBB-finance/OpenBB | machine-learning | 6,573 | [Bug] [CLI] Argparse can't handle providers where provider #1 field = 'str' and provider #2 field = choices=[CHOICES]. | **Describe the bug**
^
Constrained choices in one provider impact the use of all other providers.
**To Reproduce**
```
/derivatives/futures/curve --symbol CL --provider yfinance
```
**Screenshots**

| open | 2024-07-10T02:45:25Z | 2024-07-17T19:03:55Z | https://github.com/OpenBB-finance/OpenBB/issues/6573 | [
"bug",
"cli"
] | deeleeramone | 2 |
plotly/dash | plotly | 2,658 | Truncate `InvalidCallbackReturnValue` size | Would you consider truncating/ellipsisizing the `{bad_val}` output of
https://github.com/plotly/dash/blob/c729ef82e179623592d033929126f284837fd178/dash/_validate.py#L238C26-L253
?
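One possible shape for this, as a hedged sketch with a hypothetical helper name (not Dash's internals):

```python
def truncate_repr(bad_val, max_len=500):
    # Hypothetical helper: cap the repr length before interpolating it
    # into the InvalidCallbackReturnValue message.
    s = repr(bad_val)
    if len(s) > max_len:
        s = s[:max_len] + f"… [truncated, {len(s)} chars total]"
    return s

print(truncate_repr(list(range(10))))    # short values pass through unchanged
print(truncate_repr("x" * 10_000)[:20])  # long values get clipped
```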
Sometimes the output can be humongous, which affects browser rendering 😕 | open | 2023-10-10T14:01:07Z | 2024-08-13T19:40:49Z | https://github.com/plotly/dash/issues/2658 | [
"feature",
"P3"
] | stdedos | 4 |
iMerica/dj-rest-auth | rest-api | 124 | LOGIN ATTEMPTS LIMIT | Hi, i want to use option of allauth for attempts limit. Your login serializer use django authenticate and not allauth authenticate.
I try to overide your LoginSerializer but have error for resquest argument missing.
```python
class CLoginSerializer(LoginSerializer):
    def authenticate(self, **kwargs):
        return DefaultAccountAdapter.authenticate(self.context['request'], **kwargs)
```
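The missing-request error likely comes from calling the adapter method on the class rather than on an instance, so the request object fills the `self` slot. A minimal stand-in illustration (the `Adapter` class here is hypothetical, not allauth's actual adapter):

```python
class Adapter:
    # Stand-in for an adapter with an instance method taking (self, request, **credentials)
    def authenticate(self, request, **credentials):
        return ("request was", request)

adapter = Adapter()
# Bound call: `self` is filled in automatically, `request` gets the request.
ok = adapter.authenticate("the-request")

try:
    # Unbound call on the class (as in the snippet above): the request object
    # lands in the `self` slot, so Python reports `request` as missing.
    Adapter.authenticate("the-request")
except TypeError as exc:
    error = str(exc)

print(ok)     # ('request was', 'the-request')
print(error)  # ... missing 1 required positional argument: 'request'
```

If memory serves, allauth also exposes `allauth.account.adapter.get_adapter()`, which returns an adapter instance, so calling `get_adapter().authenticate(...)` would avoid the unbound call.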
| closed | 2020-08-14T02:19:50Z | 2020-08-14T02:41:57Z | https://github.com/iMerica/dj-rest-auth/issues/124 | [] | kgaulin | 1 |
PaddlePaddle/ERNIE | nlp | 783 | Word-granularity question | For a cloze (fill-in-the-blank) task, how can predictions be made at word granularity? The input is a word-segmented sentence; one word in the sentence is masked, and the model should predict that masked word. | closed | 2022-01-07T02:26:22Z | 2022-03-16T03:43:08Z | https://github.com/PaddlePaddle/ERNIE/issues/783 | [
"wontfix"
] | ZTurboX | 1 |
plotly/dash-core-components | dash | 301 | Can't call `repr` on a `Checklist` | You can call `repr` on every dash core component except for `Checklist`. It crashes with a maximum recursion error. This has been happening since `0.12.0` (I didn't look further back).
To reproduce, simply
```
python
>>> import dash_core_components as dcc
>>> dcc.Checklist()
```
This is super weird, I can't think of a reason that `Checklist` is special, other than that it is the _first_ component. All of the components are built the same way. | open | 2018-09-13T00:20:02Z | 2018-09-13T00:52:28Z | https://github.com/plotly/dash-core-components/issues/301 | [] | rmarren1 | 1 |
kennethreitz/responder | flask | 185 | Whitenoise-related 404 calls _default_wsgi_app which contains nothing | **How to reproduce ?**
- Create basic empty project.
```
import responder
api = responder.API()
```
- Serve it
- Go to /static/anythingthatdoesnotexist
**What happens ?**
```
500 Server Error
Traceback (most recent call last):
File "/Users/rd/.virtualenvs/Evaluate/lib/python3.7/site-packages/uvicorn/middleware/debug.py", line 80, in __call__
await asgi(receive, self.send)
File "/Users/rd/.virtualenvs/Evaluate/lib/python3.7/site-packages/asgiref/wsgi.py", line 41, in __call__
await self.run_wsgi_app(message)
File "/Users/rd/.virtualenvs/Evaluate/lib/python3.7/site-packages/asgiref/sync.py", line 108, in __call__
return await asyncio.wait_for(future, timeout=None)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py", line 388, in wait_for
return await fut
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/Users/rd/.virtualenvs/Evaluate/lib/python3.7/site-packages/asgiref/sync.py", line 123, in thread_handler
return self.func(*args, **kwargs)
File "/Users/rd/.virtualenvs/Evaluate/lib/python3.7/site-packages/asgiref/wsgi.py", line 118, in run_wsgi_app
for output in self.wsgi_application(environ, self.start_response):
TypeError: 'NoneType' object is not iterable
```
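For context, PEP 3333 requires a WSGI callable to return an iterable of byte strings; returning None (as the default app apparently does) is what produces the `TypeError` above. A minimal conforming 404 fallback might look like this (a sketch, not responder's actual fix):

```python
def not_found_app(environ, start_response):
    # A minimal WSGI callable (hypothetical fallback): per PEP 3333 it must
    # return an iterable of byte strings, never None.
    body = b"404 Not Found"
    start_response("404 Not Found", [("Content-Type", "text/plain"),
                                     ("Content-Length", str(len(body)))])
    return [body]

# Driving it by hand with a dummy start_response:
status_seen = []
result = not_found_app({}, lambda status, headers: status_seen.append(status))
print(status_seen[0], result)  # 404 Not Found [b'404 Not Found']
```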
**What is expected ?**
404, most probably ?
**Environment**
OSX Mojave, Python 3.7.1 | closed | 2018-11-03T14:09:16Z | 2019-02-21T00:42:25Z | https://github.com/kennethreitz/responder/issues/185 | [
"bug"
] | hartym | 2 |
scikit-hep/awkward | numpy | 3,356 | `ak.forms.form.index_to_dtype` is probably wrong: should probably be native, not little-endian | ### Version of Awkward Array
HEAD
### Description and code to reproduce
Compare
https://github.com/scikit-hep/awkward/blob/fb245f13057595c8134f9c687cecd2623a9e7aee/src/awkward/types/numpytype.py#L67-L83
which sets the dtype for each primitive category to the native-endianness for the machine (`np.dtype(np.float64)` will be big-endian on big-endian machines and little-endian on little-endian machines) with
https://github.com/scikit-hep/awkward/blob/8285871a75db2dffebae1df0e031ca2aa6959f2d/src/awkward/forms/form.py#L375-L381
which sets the dtype for each index category to little-endian. This should probably be native-endian, too. I'm 90% sure of it.
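A quick illustration of why the discrepancy is invisible on common hardware (plain NumPy; the two dtypes compare equal only because the machine is little-endian):

```python
import sys
import numpy as np

little = np.dtype('<i8')        # explicitly little-endian, as index_to_dtype uses
native = np.dtype(np.int64)     # native endianness, as primitive_to_dtype uses

# On a little-endian machine the two compare equal, so the inconsistency is
# invisible; on a big-endian machine they would differ.
print(sys.byteorder, little == native)
```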
The problem is that we'd like to test it before we change it. Is there any way we can test Awkward on a big-endian machine? Maybe in a qemu emulation? (Such a test would probably reveal a lot.)
On the flip side, the fact that it's so hard to find a big-endian machine these days means that this issue would rarely be observed. (Similar to 32-bit testing...) | open | 2024-12-19T19:51:10Z | 2025-02-13T15:06:34Z | https://github.com/scikit-hep/awkward/issues/3356 | [
"bug"
] | jpivarski | 3 |
minimaxir/textgenrnn | tensorflow | 176 | python version and tensorflow problems | Hey y'all, hope you're having a good day, I'll get right into my question.
I'm pretty new to this whole neural network thing so please forgive me if i'm one of today's [lucky 10,000](https://xkcd.com/1053/).
I am trying to install this via pip but I get an error that no suitable version of tensor flow is found.
I'm on python 3.8.1 64 bit (with python 3.8 added to path).
I'm on Windows 10.
I run the following command: pip install textgenrnn
and the error I get is this: [gyazo link](https://gyazo.com/12393912361fd952b535e16116ce9c25)
`ERROR: Could not find a version that satisfies the requirement tensorflow>=2.1.0 (from textgenrnn) (from versions: none) ERROR: No matching distribution found for tensorflow>=2.1.0 (from textgenrnn) `
Not sure if this is caused by the Python version I have or something else.
EDIT: I upgraded pip but I still get the same problem. | closed | 2020-02-29T12:07:03Z | 2020-02-29T21:35:40Z | https://github.com/minimaxir/textgenrnn/issues/176 | [] | s0py | 4 |
arogozhnikov/einops | tensorflow | 161 | [Feature suggestion] add ellipsis to parse_shape | Proposal: allow ellipsis in `parse_shape`. For example `parse_shape(np.zeros((10, 20, 30, 40)), 'a ... b')` should return `dict(a=10, b=40)`
1. Use cases:
- To simplify generic code, such as patterns that work for both a single example and a batch, or for both an image and a video. A more concrete example:
```python
def forward_prop(x):
    # x can be a batch [batch_size, time, features] or just a single example [time, features]
    y = do_something(x)  # combines all timesteps into [batch_size, (time, channels)]
    return rearrange(y, '... (t c) -> ... t c', **parse_shape(x, '... t _'))
```
or
```python
def process_video_or_image(x):
    parse_shape(x, 'b ... w h c')
```
- Tensors with lots of dims that do not matter. For example, a video which is split into tubelets as in ViViT has shape
[batch, time, tubelet_z, tubelet_x, tubelet_y, width, height, channels]. It is just too cumbersome to write
```python
parse_shape(tubelet_tensor, '_ _ _ _ _ _ _ channels')
```
instead of
```python
parse_shape(tubelet_tensor, '... channels')
```
2. **Implementation**. See #162
3. **Integrity**. This addition plays well with other operations. Rearrange and repeat work with ellipsis.
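A toy sketch of the proposed `parse_shape` ellipsis semantics (a hypothetical helper, not the #162 implementation):

```python
def parse_shape_ellipsis(shape, pattern):
    # Toy sketch of the proposed semantics: at most one '...' absorbs the
    # middle dimensions; names written '_' are skipped in the result.
    names = pattern.split()
    assert names.count('...') <= 1, "at most one ellipsis"
    if '...' in names:
        i = names.index('...')
        left, right = names[:i], names[i + 1:]
        assert len(left) + len(right) <= len(shape), "pattern longer than shape"
        dims = list(shape[:len(left)]) + list(shape[len(shape) - len(right):])
        names = left + right
    else:
        assert len(names) == len(shape), "rank mismatch"
        dims = list(shape)
    return {n: d for n, d in zip(names, dims) if n != '_'}

print(parse_shape_ellipsis((10, 20, 30, 40), 'a ... b'))     # {'a': 10, 'b': 40}
print(parse_shape_ellipsis((8, 64, 64, 3), '... channels'))  # {'channels': 3}
```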
| closed | 2021-12-16T22:32:44Z | 2022-01-10T01:16:28Z | https://github.com/arogozhnikov/einops/issues/161 | [
"feature suggestion"
] | dmitriy-serdyuk | 2 |
marcomusy/vedo | numpy | 408 | Animation of arrows with sliders | Hello,
Thank you for sharing this nice tool, the examples look really promising !
I am currently trying to animate a triad of unit vectors using a slider. What I want to control is the position of the starting point and of the end point of each vector. As a first try, I have simply encoded the canonical basis {e1,e2,e3} and I just want to modify the first coordinate of vector e1. My code is below :
```
from vedo import *
def slider(widget, event):
    """modify the norm of vector e1"""
    value = widget.GetRepresentation().GetValue()
    e1.top = [1-value,0,0] # this is what I want to do, but this does nothing on screen
    print(e1.top) # does display varying coordinates as expected
    #e1.pos([1-value,0,0]) # does something on screen, but not what I want !
    #e1.rotateZ(value) # does something on screen, but not what I want !
plt = Plotter(axes=True)
e1 = Arrow(startPoint=[0,0,0], endPoint=[1,0,0], s=None, c='red' )
e2 = Arrow(startPoint=[0,0,0], endPoint=[0,1,0], s=None, c='green')
e3 = Arrow(startPoint=[0,0,0], endPoint=[0,0,1], s=None, c='blue' )
plt.show(e1,e2,e3, at=1) # If I remove 'at=1', then the slider disappears
plt.addSlider2D(slider, 0, 1, value=0, pos="bottom-right", title="a slider")
plt.show(interactive=1).close() # no visible modification of the arrow's length
```
There are two things I do not understand here :
* As defined, the slider does not seem to modify the plot. Looking at the Arrow class, there are methods I could use to modify the position and orientation of the arrow on screen, but not really in the way I want. Is there a way to dynamically change the startPoint/endPoint of the arrow?
* In plt.show() I had to pass the argument at=1 to make the slider visible; I don't really understand why. This also raises the warning ```Error in show(): wrong renderer index 1```, any reason for that?
Any help would be welcome!
Thanks | closed | 2021-06-04T21:56:19Z | 2021-06-10T08:37:08Z | https://github.com/marcomusy/vedo/issues/408 | [] | pfisterj | 2 |
thp/urlwatch | automation | 77 | Error when piping to less | Sometimes a stack trace occurs after quitting less (apparently only when there's actual output):
```
~ > urlwatch |less
Traceback (most recent call last):
File "/usr/bin/urlwatch", line 375, in <module>
main(parser.parse_args())
File "/usr/bin/urlwatch", line 342, in main
report.finish()
File "/usr/lib/python3.5/site-packages/urlwatch/handler.py", line 128, in finish
ReporterBase.submit_all(self, self.job_states, duration)
File "/usr/lib/python3.5/site-packages/urlwatch/reporters.py", line 89, in submit_all
cls(report, cfg, job_states, duration).submit()
File "/usr/lib/python3.5/site-packages/urlwatch/reporters.py", line 304, in submit
print(self._green(line))
BrokenPipeError: [Errno 32] Broken pipe
zsh: exit 1 urlwatch |
zsh: done less
```
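One common mitigation for this class of error (a sketch, not urlwatch's actual fix) is to catch the `BrokenPipeError` around output and exit quietly:

```python
import sys

def safe_print(line):
    # Sketch of a common mitigation: swallow the BrokenPipeError raised when
    # the pager exits before output is finished, instead of dumping a traceback.
    try:
        print(line)
    except BrokenPipeError:
        try:
            sys.stdout.close()
        except OSError:
            pass
        sys.exit(0)  # exit quietly

safe_print("CHANGED: https://example.org/")
```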
urlwatch 2.2
less 481
Arch Linux Arm
| closed | 2016-06-30T22:57:59Z | 2016-11-26T10:45:59Z | https://github.com/thp/urlwatch/issues/77 | [] | polyzen | 5 |
vaexio/vaex | data-science | 1,506 | [BIG THANK YOU] | 25 000 000 strings.
Loaded from HDF5 in 300ms.
Joined by "MAC-address" field in 3s.
What can I say? Only "Big thank you" :) Sorry for using issues for that, it was the only way I found.

| closed | 2021-08-11T15:32:39Z | 2021-08-15T10:32:21Z | https://github.com/vaexio/vaex/issues/1506 | [] | Artyrm | 1 |
idealo/image-super-resolution | computer-vision | 210 | Sample | Here're some scripts I've used in order to preprocess my images and train the model. Most of it is based on tickets in there/stolen from answers to tickets.
1) create a folder pics and the subfolders raw_training & raw_validation. Copy your raw images to those 2 folders. Unzip `preprocess.zip` in that folder
2) unzip script in pics folder
3) Install pythonmagick & imagemagick (in venv in pics folder)
4) run `init.bat` in order to create the subfolders
5) run `python conversion.py` in order to resize images (multiple of 64 pixel, max x dimension 2048). The results will be stored in high_res/training & high_res/validation. You just have to run that one once in order to prepare the raw images. Images with an alpha channel or grayscale colorpsace will cause crashes during training otherwise.
6) run `process.bat` in order to create the training data off the raw images (e.g. resizing with imagemagick, sharpening, compression artifacts,...)
7) run `python check.py` in order to verify, that each image does exist in all training/validation sets and that the colorspace is valid. This script will also keep the directories in sync. If there's an invalid file in there, it will get deleted in all of those folders.
8) unzip `training.zip` to root
9) run each training script after another. Adjust the `weights_generator` variable in scripts 2 & 3 and point it to the location of the files from the previous step
Those scripts are pretty raw, but maybe they'll help you to get started with your own experiments.
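The sizing rule described in step 5 (clamp the x dimension to 2048, snap both dimensions down to a multiple of 64) can be sketched as a pure function; this is a hypothetical illustration, not the actual `conversion.py`:

```python
def target_size(width, height, max_width=2048, multiple=64):
    # Hypothetical sizing rule matching step 5: clamp the x dimension to
    # max_width, then round both dimensions down to a multiple of 64.
    scale = min(1.0, max_width / width)
    w, h = int(width * scale), int(height * scale)
    w -= w % multiple
    h -= h % multiple
    return w, h

print(target_size(4096, 2048))  # (2048, 1024)
print(target_size(1000, 700))   # (960, 640)
```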
[training.zip](https://github.com/idealo/image-super-resolution/files/6986746/training.zip)
[preprocess.zip](https://github.com/idealo/image-super-resolution/files/6986747/preprocess.zip)
| open | 2021-08-14T15:45:54Z | 2021-08-14T15:56:35Z | https://github.com/idealo/image-super-resolution/issues/210 | [] | alexanderpilch | 1 |
CTFd/CTFd | flask | 1,815 | Flag preprocessors/postprocessors | I'm seeing patterns where users enter a flag that should probably be accepted, but it isn't because its format is wrong.
For example submitting `flag{this_is_the_flag}` instead of `this_is_the_flag` or vice versa depending on what the flag is.
Technically, one flag is probably the more correct one. In the olden days it would probably just be `this_is_the_flag` but this is the future and we should probably make it easier to configure this behavior.
So I would like to introduce the idea of flag preprocessors/postprocessors or something to that extent.
That is to say, options or code that apply to every flag submission and can be defined by an admin in configuration.
For example an option that allows any static flag to be treated as correct if it appears in a given flag format.
So you would specify a flag format such as `flag((.*))`, and then any submission matching that regex with the flag itself as the captured group would be considered correct, even though it didn't technically match the raw flag data.
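A minimal sketch of that matching rule (a hypothetical helper, using a `flag{...}` regex with one capture group):

```python
import re

def is_correct(submission, stored_flag, flag_format=r"flag\{(.*)\}"):
    # Hypothetical postprocessor: accept the stored flag verbatim, or the
    # stored flag wrapped in the configured format (regex capture group 1).
    if submission == stored_flag:
        return True
    m = re.fullmatch(flag_format, submission)
    return m is not None and m.group(1) == stored_flag

print(is_correct("this_is_the_flag", "this_is_the_flag"))        # True
print(is_correct("flag{this_is_the_flag}", "this_is_the_flag"))  # True
print(is_correct("flag{wrong}", "this_is_the_flag"))             # False
```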
In addition, this could also be used to define additional options where you might allow checkboxes that say, accept the flag as long as it appears in any position in the text. | open | 2021-02-27T16:04:00Z | 2021-03-21T23:00:33Z | https://github.com/CTFd/CTFd/issues/1815 | [] | ColdHeat | 1 |
litestar-org/polyfactory | pydantic | 228 | Bug: ParameterException with PEP 563 | ### Description
Seeing the same bug described here: https://github.com/python-attrs/cattrs/issues/41.
Here's an example
```
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class example:
    foo: str

example.__annotations__
```
returns
`
{'foo': 'str'}
`
and calling this crashes
```
class ExampleFactory(DataclassFactory[example]):
    __model__ = example

example_data = ExampleFactory.build()
```
```
ParameterException: Unsupported type: 'str'
Either extend the providers map or add a factory function for this type.
```
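For reference, PEP 563 stores every annotation as a string at class-creation time, and `typing.get_type_hints` is the standard way to resolve them back into real types, which is what a factory would need to do before looking up a provider. A minimal illustration of the mechanics:

```python
from __future__ import annotations
from dataclasses import dataclass
import typing

@dataclass
class Example:
    foo: str

# Under PEP 563 the annotations are stored as strings...
assert Example.__annotations__ == {"foo": "str"}
# ...and typing.get_type_hints resolves them back into real types:
assert typing.get_type_hints(Example) == {"foo": str}
print("resolved:", typing.get_type_hints(Example))
```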
### URL to code causing the issue
_No response_
### MCVE
```python
# Your MCVE code here
```
### Steps to reproduce
```bash
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
```
### Screenshots
"In the format of: ``"
### Logs
_No response_
### Litestar Version
V2.2.0
### Platform
- [x] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2023-06-06T00:17:45Z | 2025-03-20T15:53:03Z | https://github.com/litestar-org/polyfactory/issues/228 | [
"bug"
] | thomas-davis | 0 |
ranaroussi/yfinance | pandas | 1,747 | Receive more than 3 years income_stmt and balance_sheet | The following code does return income statements and balance sheets for the past 3 years.
Is there a way to receive older data?
```
import yfinance as yf
stock = yf.Ticker('AAPL')
df = stock.income_stmt
print(df)
df = stock.balance_sheet
print(df)
``` | closed | 2023-11-25T19:04:47Z | 2023-11-25T19:28:20Z | https://github.com/ranaroussi/yfinance/issues/1747 | [] | reisenmachtfreude | 1 |
biolab/orange3 | data-visualization | 6,035 | "Louvain Clustering" progress indicator | Desync on progress indicator. | closed | 2022-06-19T17:34:49Z | 2022-06-19T19:09:34Z | https://github.com/biolab/orange3/issues/6035 | [] | hydrastarmaster | 1 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 130 | "compared_to_training_set" mode of BaseTester fails due to list(None) bug | set.add() doesn't return the set, so this needs to be a separate statement before being converted to list() | closed | 2020-06-24T01:41:16Z | 2020-07-25T14:18:16Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/130 | [
"bug",
"fixed in dev branch"
] | KevinMusgrave | 0 |
coqui-ai/TTS | pytorch | 3,187 | AttributeError: 'TTS' object has no attribute 'is_multi_speaker'[Bug] | ### Describe the bug
pip list | grep TTS
TTS 0.20.2
### To Reproduce
pip list | grep TTS
TTS 0.20.2
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
pip list | grep TTS
TTS 0.20.2
```
### Additional context
_No response_ | closed | 2023-11-10T04:47:43Z | 2023-12-21T10:01:51Z | https://github.com/coqui-ai/TTS/issues/3187 | [
"bug"
] | lucasjinreal | 9 |
chiphuyen/stanford-tensorflow-tutorials | nlp | 131 | outputs are same and that are "i ." | Hi!
I performed 1000 iterations, but I am getting the same one-character output for every input, and the output is "i ." | open | 2018-08-26T17:35:56Z | 2019-01-08T10:37:51Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/131 | [] | RABAJ | 4 |
cvat-ai/cvat | tensorflow | 8,600 | Сan't export 60Gb+ dataset | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Start exporting the dataset.
2. Encounter an error: Moved to FailedJobRegistry, due to AbandonedJobError, at 2024-10-28 11:26:05.902082
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
Server version: 2.22.0
Core version: 15.2.0
Canvas version: 2.20.9
UI version: 1.66.1
```
| closed | 2024-10-28T12:40:15Z | 2024-10-30T09:12:21Z | https://github.com/cvat-ai/cvat/issues/8600 | [
"bug"
] | leppsey | 6 |
erdewit/ib_insync | asyncio | 335 | NameError: name 'IB' is not defined | I have installed ib_insync and I am using Python 3.6 on Windows.
Code below
```
from ib_insync import *
# util.startLoop() # uncomment this line when in a notebook
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=1)
contract = Forex('EURUSD')
bars = ib.reqHistoricalData(
contract, endDateTime='', durationStr='30 D',
barSizeSetting='1 hour', whatToShow='MIDPOINT', useRTH=True)
# convert to pandas dataframe:
df = util.df(bars)
print(df)
```
```
Traceback (most recent call last):
File "ib_insync.py", line 1, in <module>
from ib_insync import *
File "D:\TWS API\samples\Python\ib_insync.py", line 4, in <module>
ib = IB()
NameError: name 'IB' is not defined
```
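One likely cause, hedged and based only on the path shown in the traceback: the script is itself named `ib_insync.py`, so `from ib_insync import *` imports that file instead of the installed package and `IB` never appears. A minimal simulation of that shadowing:

```python
import sys
import types

# Simulate a script named ib_insync.py shadowing the installed package:
shadow = types.ModuleType("ib_insync")   # stands in for the user's own file
sys.modules["ib_insync"] = shadow

import ib_insync                          # resolves to the stand-in, not the package
print(hasattr(ib_insync, "IB"))           # False, hence NameError: name 'IB' is not defined
```

Renaming the script to something else (and removing any stale `__pycache__` next to it) would be the usual remedy.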
What is the problem? Thanks! | closed | 2021-02-02T14:03:45Z | 2023-05-18T19:02:18Z | https://github.com/erdewit/ib_insync/issues/335 | [] | believeitcould | 5 |
microsoft/nni | deep-learning | 5,055 | Proxylessnas ModelHooks.on_train_batch_start() issue | I have an example model from docs:
```python
import torch
import nni.retiarii.evaluator.pytorch.lightning as pl
import nni.retiarii.nn.pytorch as nn
from nni.retiarii import model_wrapper
import torch.nn.functional as F
class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

@model_wrapper  # this decorator should be put on the outermost class
class Net(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.LayerChoice([
            nn.Conv2d(32, 64, 3, 1),
            DepthwiseSeparableConv(32, 64)
        ])
        self.dropout1 = nn.Dropout(0.5)
        self.dropout2 = nn.Dropout(0.5)
        self.fc1 = nn.Linear(9216, 1000)
        self.fc2 = nn.Linear(1000, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.max_pool2d(self.conv2(x), 2)
        x = torch.flatten(self.dropout1(x), 1)
        x = self.fc2(self.dropout2(F.relu(self.fc1(x))))
        return F.log_softmax(x, dim=1)
```
If i run experiment with Random Strategy it works fine:
```python
import nni
import nni.retiarii.strategy as strategy
from nni.retiarii.experiment.pytorch import RetiariiExperiment, RetiariiExeConfig
from torchvision import transforms
from torchvision.datasets import MNIST
transform = nni.trace(transforms.Compose)([nni.trace(transforms.ToTensor)(), nni.trace(transforms.Normalize)((0.1307,), (0.3081,))])
train_dataset = nni.trace(MNIST)(root='data/mnist', train=True, download=True, transform=transform)
test_dataset = nni.trace(MNIST)('data/mnist', train=False, transform=transform)
evaluator = pl.Classification(train_dataloaders=pl.DataLoader(train_dataset, batch_size=100),
val_dataloaders=pl.DataLoader(test_dataset, batch_size=100),
accelerator='gpu',max_epochs=10)
model = Net()
search_strategy = strategy.Random()
# search_strategy = strategy.Proxyless()
exp = RetiariiExperiment(model, evaluator, [], search_strategy)
exp_config = RetiariiExeConfig('local')
exp_config.experiment_name = 'mnist_search'
# exp_config.execution_engine = 'oneshot'
exp_config.max_trial_number = 1
exp_config.trial_concurrency = 1
exp_config.trial_gpu_number = 1
exp_config.training_service.use_active_gpu = True
exp.run(exp_config, 8081)
```
but when I switch to Proxyless, an error occurs:
```python
# search_strategy = strategy.Random()
search_strategy = strategy.Proxyless()
exp = RetiariiExperiment(model, evaluator, [], search_strategy)
exp_config = RetiariiExeConfig('local')
exp_config.experiment_name = 'mnist_search'
exp_config.execution_engine = 'oneshot'
exp_config.max_trial_number = 1
exp_config.trial_concurrency = 1
exp_config.trial_gpu_number = 1
exp_config.training_service.use_active_gpu = True
exp.run(exp_config, 8081)
```
Error:
```python
Tensorflow is not installed.
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
C:\Users\...\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py:228: LightningDeprecationWarning: The `LightningModule.on_epoch_start` hook was deprecated in v1.6 and will be removed in v1.8. Please use `LightningModule.on_<train/validation/test>_epoch_start` instead.
rank_zero_deprecation(
C:\Users\...\lib\site-packages\pytorch_lightning\trainer\configuration_validator.py:228: LightningDeprecationWarning: The `LightningModule.on_epoch_end` hook was deprecated in v1.6 and will be removed in v1.8. Please use `LightningModule.on_<train/validation/test>_epoch_end` instead.
rank_zero_deprecation(
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
------------------------------------------------
0 | model | _ClassificationModule | 9.2 M
------------------------------------------------
9.2 M Trainable params
0 Non-trainable params
9.2 M Total params
36.993 Total estimated model params size (MB)
C:\Users\...\lib\site-packages\pytorch_lightning\trainer\connectors\data_connector.py:219: PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 6 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
rank_zero_warn(
Epoch 0: 0%| | 0/600 [00:00<?, ?it/s] Traceback (most recent call last):
File "C:\Users\...\lib\site-packages\IPython\core\interactiveshell.py", line 3398, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-40724acf6027>", line 1, in <cell line: 1>
runfile('C:/Users/.../scripts/proxylessnas.py', wdir='C:/Users/.../scripts')
File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/Users/.../scripts/proxylessnas.py", line 184, in <module>
exp.run(exp_config, 8081)
File "C:\Users\...\lib\site-packages\nni\retiarii\experiment\pytorch.py", line 289, in run
self.strategy.run(base_model_ir, self.applied_mutators)
File "C:\Users\...\lib\site-packages\nni\retiarii\oneshot\pytorch\strategy.py", line 76, in run
evaluator.trainer.fit(self.model, train_loader, val_loader)
File "C:\Users\...\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 700, in fit
self._call_and_handle_interrupt(
File "C:\Users\...\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 654, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "C:\Users\...\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 741, in _fit_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "C:\Users\...\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1166, in _run
results = self._run_stage()
File "C:\Users\...\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1252, in _run_stage
return self._run_train()
File "C:\Users\...\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1282, in _run_train
self.fit_loop.run()
File "C:\Users\...\lib\site-packages\pytorch_lightning\loops\loop.py", line 200, in run
self.advance(*args, **kwargs)
File "C:\Users\...\lib\site-packages\pytorch_lightning\loops\fit_loop.py", line 269, in advance
self._outputs = self.epoch_loop.run(self._data_fetcher)
File "C:\Users\...\lib\site-packages\pytorch_lightning\loops\loop.py", line 200, in run
self.advance(*args, **kwargs)
File "C:\Users\...\lib\site-packages\pytorch_lightning\loops\epoch\training_epoch_loop.py", line 194, in advance
response = self.trainer._call_lightning_module_hook("on_train_batch_start", batch, batch_idx)
File "C:\Users\...\lib\site-packages\pytorch_lightning\trainer\trainer.py", line 1549, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "C:\Users\...\lib\site-packages\nni\retiarii\oneshot\pytorch\base_lightning.py", line 378, in on_train_batch_start
return self.model.on_train_batch_start(batch, batch_idx, unused)
TypeError: ModelHooks.on_train_batch_start() takes 3 positional arguments but 4 were given
```
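The failure is an arity mismatch between Lightning versions: older Lightning hooks accepted an extra `dataloader_idx`/`unused` argument that newer releases removed, while the NNI 2.8 wrapper still forwards it (a hedged reading of the traceback above). A minimal reproduction of the TypeError, with stand-in classes:

```python
class NewStyleHooks:
    # Newer PyTorch Lightning: the hook takes only (self, batch, batch_idx).
    def on_train_batch_start(self, batch, batch_idx):
        return None

hooks = NewStyleHooks()
try:
    # The wrapper still forwards a third positional value ('unused'):
    hooks.on_train_batch_start("batch", 0, 0)
except TypeError as exc:
    message = str(exc)

print(message)  # ... takes 3 positional arguments but 4 were given
```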
**Environment**:
- NNI version: 2.8
- Training service (local|remote|pai|aml|etc): local
- Client OS: Windows 10
- Python version: 3.10.6
- PyTorch/TensorFlow version: 1.12
- Is conda/virtualenv/venv used?: pipenv
- Is running in Docker?: No
**How to reproduce it?**:
Use code above | closed | 2022-08-08T13:28:42Z | 2022-08-09T09:24:58Z | https://github.com/microsoft/nni/issues/5055 | [] | AL3708 | 2 |
django-import-export/django-import-export | django | 1740 | selected value of custom importForm is not being saved | The customized importForm in the test application does not seem to work when used to provide default values for the imported rows. For example, when I use the dropdown in the test application to select an author, the selected author is not saved.
To reproduce
- Clone django-import-export repository
- I started the test application
- I tested the CSV import as below (i removed author)
- i selected one of the authors in the dropdown (Ian Flemming)
- the chosen author form the dropdown was not inserted in the database
Versions:
Python 3.12.1
Django 5.0.1
tablip: 3.5.0
django-import-export: "3.3.7.dev0"
Expected
I expect the CSV import to save the id of the author selected in the custom import form.
CSV:
id,name,author,author_email,imported,published,published_time,price,added,categories
15,Triangles,,geo@met.ry,,2020-01-12,,12,,1
<img width="1407" alt="image" src="https://github.com/django-import-export/django-import-export/assets/43724700/0a62f81a-792d-4a3e-a38d-01e9f253c01c">
| closed | 2024-01-17T19:23:28Z | 2024-04-22T14:07:46Z | https://github.com/django-import-export/django-import-export/issues/1740 | [
"bug"
] | wesselcram | 4 |
jacobgil/pytorch-grad-cam | computer-vision | 363 | Strange! The last block doesn't get the right answer, the others can. | Hi, when I was visualizing the ViT, I couldn't get the correct visualization when the last block was the target layer. The output gradient is shown in the figure below. Hope you can answer it. (Correct results can be obtained for other blocks, I used class token as the classification feature)
| open | 2022-11-16T12:06:21Z | 2022-11-17T01:17:07Z | https://github.com/jacobgil/pytorch-grad-cam/issues/363 | [] | pakchoi-php | 4 |
ITCoders/Human-detection-and-Tracking | numpy | 26 | Can I find the coordinate values of bounding boxes? | # Issues should contain the following details, which increase the probability of them getting resolved quickly
* **Exact error or Issue details**
* **OpenCV Version**
* **Python Version**
* **Operating System**
* **Changes done, if any in the original code**
| closed | 2018-02-24T11:31:07Z | 2018-02-25T17:28:42Z | https://github.com/ITCoders/Human-detection-and-Tracking/issues/26 | [] | shabbeersh | 2 |
ExpDev07/coronavirus-tracker-api | rest-api | 220 | API displaying 0 recovered cases | The latest fetch result as of 28-03-2020, 0206 GMT +5:30 displays 0 recovered cases, while it is actually 132,447. | closed | 2020-03-27T20:44:04Z | 2020-03-27T20:55:35Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/220 | [] | AbhinavMir | 1 |
elliotgao2/toapi | api | 134 | Fix simple typo: programe -> program | There is a small typo in docs/topics/storage.md.
Should read `program` rather than `programe`.
| open | 2020-02-29T19:53:29Z | 2020-02-29T19:53:29Z | https://github.com/elliotgao2/toapi/issues/134 | [] | timgates42 | 0 |
pydata/pandas-datareader | pandas | 855 | No module named 'matplotlib.finance' | Unable to import candlestick_ohlc from matplotlib.finance. It says there is no module named matplotlib.finance. | closed | 2021-02-28T10:54:28Z | 2021-07-13T10:24:46Z | https://github.com/pydata/pandas-datareader/issues/855 | [] | wyatthien | 1 |
home-assistant/core | python | 141,284 | modbus integration not working after update to 2025.3.3 | ### The problem

After updating to the newest version, the modbus integration stopped working.
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
modbus
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
- name: hub1
type: tcp
host: XXX
port: 502
sensors:
- name: "Serverschrank - Leistung"
unique_id: 23b28fcb-7fda-4bd1-ad51-f8860b0ab69a
address: 0x000c
device_address: 6
input_type: input
scan_interval: 15
precision: 2
data_type: float32
unit_of_measurement: W
state_class: measurement
device_class: power
```
### Anything in the logs that might be useful for us?
```txt
Setup failed for 'modbus': Unable to import component: cannot import name 'FramerType' from 'pymodbus.framer' (/usr/local/lib/python3.13/site-packages/pymodbus/framer/__init__.py)
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/loader.py", line 1014, in async_get_component
comp = await self.hass.async_add_import_executor_job(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
self._get_component, True
^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/src/homeassistant/homeassistant/loader.py", line 1074, in _get_component
ComponentProtocol, importlib.import_module(self.pkg_path)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/util/loop.py", line 201, in protected_loop_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.13/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 1026, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/usr/src/homeassistant/homeassistant/components/modbus/__init__.py", line 146, in <module>
from .modbus import DATA_MODBUS_HUBS, ModbusHub, async_modbus_setup
File "/usr/src/homeassistant/homeassistant/components/modbus/modbus.py", line 17, in <module>
from pymodbus.framer import FramerType
ImportError: cannot import name 'FramerType' from 'pymodbus.framer' (/usr/local/lib/python3.13/site-packages/pymodbus/framer/__init__.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/setup.py", line 340, in _async_setup_component
component = await integration.async_get_component()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/loader.py", line 1034, in async_get_component
self._component_future.result()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/usr/src/homeassistant/homeassistant/loader.py", line 1026, in async_get_component
comp = self._get_component()
File "/usr/src/homeassistant/homeassistant/loader.py", line 1074, in _get_component
ComponentProtocol, importlib.import_module(self.pkg_path)
~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/util/loop.py", line 201, in protected_loop_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.13/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 1026, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/usr/src/homeassistant/homeassistant/components/modbus/__init__.py", line 146, in <module>
from .modbus import DATA_MODBUS_HUBS, ModbusHub, async_modbus_setup
File "/usr/src/homeassistant/homeassistant/components/modbus/modbus.py", line 17, in <module>
from pymodbus.framer import FramerType
ImportError: cannot import name 'FramerType' from 'pymodbus.framer' (/usr/local/lib/python3.13/site-packages/pymodbus/framer/__init__.py)
Detected blocking call to import_module with args ('homeassistant.components.modbus',) in /usr/src/homeassistant/homeassistant/loader.py, line 1074: ComponentProtocol, importlib.import_module(self.pkg_path) inside the event loop; This is causing stability issues. Please create a bug report at https://github.com/home-assistant/core/issues?q=is%3Aopen+is%3Aissue For developers, please see https://developers.home-assistant.io/docs/asyncio_blocking_operations/#import_module Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "/usr/src/homeassistant/homeassistant/__main__.py", line 227, in <module> sys.exit(main()) File "/usr/src/homeassistant/homeassistant/__main__.py", line 213, in main exit_code = runner.run(runtime_conf) File "/usr/src/homeassistant/homeassistant/runner.py", line 154, in run return loop.run_until_complete(setup_and_run_hass(runtime_config)) File "/usr/local/lib/python3.13/asyncio/base_events.py", line 712, in run_until_complete self.run_forever() File "/usr/local/lib/python3.13/asyncio/base_events.py", line 683, in run_forever self._run_once() File "/usr/local/lib/python3.13/asyncio/base_events.py", line 2040, in _run_once handle._run() File "/usr/local/lib/python3.13/asyncio/events.py", line 89, in _run self._context.run(self._callback, *self._args) File "/usr/src/homeassistant/homeassistant/setup.py", line 171, in async_setup_component result = await _async_setup_component(hass, domain, config) File "/usr/src/homeassistant/homeassistant/setup.py", line 340, in _async_setup_component component = await integration.async_get_component() File "/usr/src/homeassistant/homeassistant/loader.py", line 1026, in async_get_component comp = self._get_component() File "/usr/src/homeassistant/homeassistant/loader.py", line 1074, in _get_component ComponentProtocol, importlib.import_module(self.pkg_path)
```
### Additional information
_No response_ | open | 2025-03-24T13:37:17Z | 2025-03-24T14:03:29Z | https://github.com/home-assistant/core/issues/141284 | [
"integration: modbus"
] | mortzel | 1 |
CTFd/CTFd | flask | 2,320 | Score of participants continues to decrease after freeze time | **Environment**:
- CTFd Version/Commit: 3.5.1
- Operating System: Kubernetes v1.25
- Web Browser and Version: Firefox 114
**What happened?**
The scoreboard was frozen at the end of a CTF, but users were still allowed to make submissions, which do not appear on the scoreboard, as intended. However, the challenges are configured as dynamic, so the points continue to decrease with new submissions, and participants see their scores drop on the scoreboard even though it is frozen.
**What did you expect to happen?**
The score of a dynamic challenge should not continue to decrease with new submissions once the CTF has been frozen.
**How to reproduce your issue**
- Create a dynamic challenge
- Solve it
- Freeze the CTF
- Solve that challenge with another user
- See the score of the challenge decrease
- See the scores of participants who solved that challenge decrease on the frozen scoreboard
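For context, a small sketch of how dynamic scoring behaves (modelled on CTFd's quadratic decay; the exact formula and the parameter values here are assumptions for illustration). It shows why a post-freeze solve lowers the displayed value for everyone who already solved the challenge:

```python
def dynamic_value(initial: int, minimum: int, decay: int, solve_count: int) -> int:
    # CTFd-style dynamic scoring: the value decays quadratically with the
    # number of solves and is floored at `minimum`. Every new solve causes
    # the value to be recomputed for ALL solvers, freeze or not.
    value = ((minimum - initial) / (decay ** 2)) * (solve_count ** 2) + initial
    return max(int(value), minimum)

# Value shown on the scoreboard at freeze time, with 3 solves:
print(dynamic_value(500, 100, 10, 3))   # 464
# A 4th solve submitted after the freeze still lowers it for everyone:
print(dynamic_value(500, 100, 10, 4))   # 436
```

This matches the reported behavior: the submission itself is hidden by the freeze, but the recomputed (lower) value leaks through.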
**Any associated stack traces or error logs**
| open | 2023-06-07T08:10:54Z | 2023-06-30T21:54:12Z | https://github.com/CTFd/CTFd/issues/2320 | [] | Typhlos | 2 |
LAION-AI/Open-Assistant | python | 3,669 | sign up | Maybe the signup should be different to avoid emails like this:
"This message seems dangerous
Similar messages were used to steal people's personal information. Avoid clicking links, downloading attachments or replying with personal information." | closed | 2023-08-26T11:24:07Z | 2023-11-28T07:40:33Z | https://github.com/LAION-AI/Open-Assistant/issues/3669 | [] | flckv | 1 |
vimalloc/flask-jwt-extended | flask | 347 | Sending a JWT token followed by comma raises IndexError | Hi,
I found that if the user sends the JWT token followed by a comma (,) an IndexError is raised. Is this the expected behaviour? You should be able to reproduce this using the basic example from the docs. It appears that the exception is raised executing `s.split()[0]` in https://github.com/vimalloc/flask-jwt-extended/blob/5bd8b1ed08ea64d23869a629af3c3c868816b8a8/flask_jwt_extended/view_decorators.py#L190
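A minimal pure-Python stand-in for that parse (illustrative only, not the library's actual implementation) shows why the trailing comma breaks it:

```python
def first_tokens(auth_header: str) -> list:
    # Simplified sketch of the header parsing: split the header on ","
    # (multiple tokens are allowed), then take the first word of each
    # part -- this is where the `s.split()[0]` expression lives.
    return [s.split()[0] for s in auth_header.split(",")]

print(first_tokens("Bearer abc.def.ghi"))      # ['Bearer']

# A trailing comma leaves an empty segment; ''.split() is [] and
# indexing it raises IndexError:
try:
    first_tokens("Bearer abc.def.ghi,")
except IndexError:
    print("IndexError raised")
```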
Regards,
| closed | 2020-07-19T23:06:33Z | 2021-05-02T20:40:51Z | https://github.com/vimalloc/flask-jwt-extended/issues/347 | [] | svidela | 2 |
thtrieu/darkflow | tensorflow | 740 | ModuleNotFoundError: No module named 'nms' Error!! | I just pull the darkflow today and run it using both python and Annocoda, Mincoda.
I also followed this tutorial
https://keponk.wordpress.com/2017/12/07/siraj-darkflow/
I have spent a whole day on this, but I still have this error regardless of what I do:
/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
return f(*args, **kwds)
/usr/lib/python3/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Traceback (most recent call last):
File "flow", line 4, in <module>
from darkflow.cli import cliHandler
File "/hdd/students/issac/Pedestrian-Detection-using-Darkflow/darkflow/cli.py", line 3, in <module>
from .net.build import TFNet
File "/hdd/students/issac/Pedestrian-Detection-using-Darkflow/darkflow/net/build.py", line 7, in <module>
from .framework import create_framework
File "/hdd/students/issac/Pedestrian-Detection-using-Darkflow/darkflow/net/framework.py", line 1, in <module>
from . import yolo
File "/hdd/students/issac/Pedestrian-Detection-using-Darkflow/darkflow/net/yolo/__init__.py", line 2, in <module>
from . import predict
File "/hdd/students/issac/Pedestrian-Detection-using-Darkflow/darkflow/net/yolo/predict.py", line 7, in <module>
from ...cython_utils.cy_yolo_findboxes import yolo_box_constructor
File "darkflow/cython_utils/cy_yolo_findboxes.pyx", line 1, in init darkflow.cython_utils.cy_yolo_findboxes
import numpy as np
ModuleNotFoundError: No module named 'nms' | open | 2018-04-28T22:13:11Z | 2019-06-21T12:21:55Z | https://github.com/thtrieu/darkflow/issues/740 | [] | hbzhang | 5 |
slackapi/bolt-python | fastapi | 705 | Seeing "Sorry, That hasn't worked. Try again?" Error even though there are no error logs in AWS Lambda | #### The `slack_bolt` version
slack_bolt-1.14.3
#### Python runtime version
(Python 3.9)
#### OS info
AWS Lambda x86_64
#### Steps to reproduce:
My app has a message shortcut that schedules a response based on message content. When I click the message shortcut, the error below is shown in Slack even though my app functions correctly and there are no errors in the AWS logs.

Can anyone point me to patterns I should look out for that cause this error?
<img width="460" alt="Screenshot 2022-08-21 at 12 00 27 AM" src="https://user-images.githubusercontent.com/16632903/185761925-bd8a3d1d-3346-4a11-a842-551eb7cb588f.png">
| closed | 2022-08-20T18:47:02Z | 2022-08-22T17:06:34Z | https://github.com/slackapi/bolt-python/issues/705 | [
"question"
] | infinitetrooper | 4 |
huggingface/datasets | numpy | 7,013 | CI is broken for faiss tests on Windows: node down: Not properly terminated | Faiss tests on Windows make the CI run indefinitely until maximum execution time (360 minutes) is reached.
See: https://github.com/huggingface/datasets/actions/runs/9712659783
```
test (integration, windows-latest, deps-minimum)
The job running on runner GitHub Actions 60 has exceeded the maximum execution time of 360 minutes.
test (integration, windows-latest, deps-latest)
The job running on runner GitHub Actions 238 has exceeded the maximum execution time of 360 minutes.
```
```
____________________________ tests/test_search.py _____________________________
[gw1] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe
worker 'gw1' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index'
____________________________ tests/test_search.py _____________________________
[gw2] win32 -- Python 3.8.10 C:\hostedtoolcache\windows\Python\3.8.10\x64\python.exe
worker 'gw2' crashed while running 'tests/test_search.py::IndexableDatasetTest::test_add_faiss_index'
```
```
tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
[gw0] node down: Not properly terminated
[gw0] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
replacing crashed worker gw0
tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
[gw1] node down: Not properly terminated
[gw1] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
replacing crashed worker gw1
tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
[gw2] node down: Not properly terminated
[gw2] FAILED tests/test_search.py::IndexableDatasetTest::test_add_faiss_index
replacing crashed worker gw2
``` | closed | 2024-07-01T06:40:03Z | 2024-07-01T07:10:28Z | https://github.com/huggingface/datasets/issues/7013 | [
"maintenance"
] | albertvillanova | 0 |
davidteather/TikTok-Api | api | 426 | [BUG] - getTikTokbyID does not work | Hi David,
When I tried getTikTokbyId() today, it sent back the error message “invalid JSON back”. I was wondering if you experienced the same issue and if there would be any way to fix it. Thank you!!
| closed | 2020-12-16T17:12:48Z | 2020-12-23T17:00:27Z | https://github.com/davidteather/TikTok-Api/issues/426 | [
"bug"
] | kylel06 | 13 |
xlwings/xlwings | automation | 2,381 | Add xl() function for Python in Excel compatibility | This PiE syntax
```python
xl("Sheet1!A1:B2", headers=True)
```
should be compatible with something like this in xlwings:
```python
from xlwings import xl
xl("Sheet1!A1:B2", headers=True)
```
and probably translate to something like this (for ranges with more than one cell; otherwise it should return a scalar):
```python
import xlwings as xw
import pandas as pd
xw.books.active["Sheet1"]["A1"].options(pd.DataFrame, header=True).value
```
The only thing where I foresee more work is to add support for Power Query as this isn't currently covered in xlwings at all. | open | 2024-01-16T11:02:26Z | 2024-01-16T14:36:59Z | https://github.com/xlwings/xlwings/issues/2381 | [
"enhancement"
] | fzumstein | 0 |
2noise/ChatTTS | python | 482 | You can uncheck "refine text", copy the output directly into the input, delete the redundant parts, and run inference again. | You can uncheck "refine text", copy the output directly into the input, delete the redundant parts, and run inference again.
_Originally posted by @fumiama in https://github.com/2noise/ChatTTS/issues/468#issuecomment-2191362001_
On the page, "refine text" cannot be clicked to uncheck it. | closed | 2024-06-27T13:30:42Z | 2024-06-27T13:36:06Z | https://github.com/2noise/ChatTTS/issues/482 | [
"bug",
"ui"
] | YCLinYimeng | 0 |
JoshuaC215/agent-service-toolkit | streamlit | 52 | Load previous session | Would be interesting to add loading the conversation history if re-using a thread ID from a previous session. It would basically require adding a service endpoint to get the history and then calling that in the beginning to populate it.
A separate issue would be to have the list of threads, like in ChatGPT. | closed | 2024-10-08T14:30:02Z | 2024-10-21T15:44:43Z | https://github.com/JoshuaC215/agent-service-toolkit/issues/52 | [] | antonioalegria | 7 |
scrapy/scrapy | python | 6,250 | Use `defusedxml.xmlrpc` | https://bandit.readthedocs.io/en/latest/blacklists/blacklist_imports.html#b411-import-xmlrpclib
https://github.com/tiran/defusedxml?tab=readme-ov-file#defusedxmlxmlrpc | closed | 2024-02-27T15:08:13Z | 2024-02-28T09:29:21Z | https://github.com/scrapy/scrapy/issues/6250 | [
"enhancement",
"security"
] | wRAR | 0 |
alyssaq/face_morpher | numpy | 35 | Investigate using dlib for face points detection | https://github.com/davisking/dlib/blob/master/python_examples/face_landmark_detection.py | open | 2018-02-19T09:37:09Z | 2018-02-19T09:37:09Z | https://github.com/alyssaq/face_morpher/issues/35 | [] | alyssaq | 0 |
matterport/Mask_RCNN | tensorflow | 2,902 | How to activate collab's gpu to train mask rcnn model | I have been working on this model for couple of weeks but still facing the issues that tensorflow library that works with mrcnn is not supported by google colab due to which I am unable to access the free gpu and it takes a complete day to train 10 epochs of 10 steps.
how will I activate the gpu runtime with tensorflow 1.13.1 version(supported by mrcnn model)
`use_multiprocessing=True` and multiple workers may duplicate your data. Please consider using the`keras.utils.Sequence class.
UserWarning('Using a generator with `use_multiprocessing=True`'
Epoch 1/10
9/10 [==========================>...] - ETA: 89s - loss: 5.1812 - rpn_class_loss: 0.0661 - rpn_bbox_loss: 2.0932 - mrcnn_class_loss: 1.0029 - mrcnn_bbox_loss: 0.9374 - mrcnn_mask_loss: 1.0817 /usr/local/lib/python3.7/dist-packages/keras/engine/training.py:2142: UserWarning: Using a generator with `use_multiprocessing=True` and multiple workers may duplicate your data. Please consider using the`keras.utils.Sequence class.
UserWarning('Using a generator with `use_multiprocessing=True`'
10/10 [==============================] - 2200s - loss: 4.9398 - rpn_class_loss: 0.0856 - rpn_bbox_loss: 2.1344 - mrcnn_class_loss: 0.9026 - mrcnn_bbox_loss: 0.8437 - mrcnn_mask_loss: 0.9735 - val_loss: 2.9885 - val_rpn_class_loss: 0.0627 - val_rpn_bbox_loss: 1.0605 - val_mrcnn_class_loss: 0.1267 - val_mrcnn_bbox_loss: 0.8837 - val_mrcnn_mask_loss: 0.8549
Epoch 2/10
10/10 [==============================] - 2080s - loss: 2.8650 - rpn_class_loss: 0.0653 - rpn_bbox_loss: 0.8709 - mrcnn_class_loss: 0.2062 - mrcnn_bbox_loss: 0.9295 - mrcnn_mask_loss: 0.7931 - val_loss: 2.5669 - val_rpn_class_loss: 0.0251 - val_rpn_bbox_loss: 0.7778 - val_mrcnn_class_loss: 0.1389 - val_mrcnn_bbox_loss: 0.8665 - val_mrcnn_mask_loss: 0.7586
Epoch 3/10
10/10 [==============================] - 2071s - loss: 2.5079 - rpn_class_loss: 0.0178 - rpn_bbox_loss: 0.6116 - mrcnn_class_loss: 0.1541 - mrcnn_bbox_loss: 1.0084 - mrcnn_mask_loss: 0.7159 - val_loss: 2.3405 - val_rpn_class_loss: 0.0179 - val_rpn_bbox_loss: 0.7233 - val_mrcnn_class_loss: 0.1208 - val_mrcnn_bbox_loss: 0.7990 - val_mrcnn_mask_loss: 0.6794
Epoch 4/10
10/10 [==============================] - 2076s - loss: 3.5875 - rpn_class_loss: 0.0330 - rpn_bbox_loss: 2.2295 - mrcnn_class_loss: 0.0901 - mrcnn_bbox_loss: 0.6266 - mrcnn_mask_loss: 0.6082 - val_loss: 2.1113 - val_rpn_class_loss: 0.0173 - val_rpn_bbox_loss: 0.6490 - val_mrcnn_class_loss: 0.1017 - val_mrcnn_bbox_loss: 0.7573 - val_mrcnn_mask_loss: 0.5859
Epoch 5/10
10/10 [==============================] - 2081s - loss: 2.0695 - rpn_class_loss: 0.0174 - rpn_bbox_loss: 0.6024 - mrcnn_class_loss: 0.1108 - mrcnn_bbox_loss: 0.7344 - mrcnn_mask_loss: 0.6044 - val_loss: 2.2599 - val_rpn_class_loss: 0.0158 - val_rpn_bbox_loss: 0.6386 - val_mrcnn_class_loss: 0.1036 - val_mrcnn_bbox_loss: 0.8452 - val_mrcnn_mask_loss: 0.6566
Epoch 6/10
10/10 [==============================] - 2073s - loss: 2.1327 - rpn_class_loss: 0.0219 - rpn_bbox_loss: 0.5814 - mrcnn_class_loss: 0.1610 - mrcnn_bbox_loss: 0.6752 - mrcnn_mask_loss: 0.6932 - val_loss: 2.2272 - val_rpn_class_loss: 0.0170 - val_rpn_bbox_loss: 0.6450 - val_mrcnn_class_loss: 0.1082 - val_mrcnn_bbox_loss: 0.8118 - val_mrcnn_mask_loss: 0.6452
Epoch 7/10
10/10 [==============================] - 2075s - loss: 3.2746 - rpn_class_loss: 0.0522 - rpn_bbox_loss: 1.5602 - mrcnn_class_loss: 0.1547 - mrcnn_bbox_loss: 0.8337 - mrcnn_mask_loss: 0.6737 - val_loss: 2.0564 - val_rpn_class_loss: 0.0172 - val_rpn_bbox_loss: 0.6224 - val_mrcnn_class_loss: 0.1280 - val_mrcnn_bbox_loss: 0.6824 - val_mrcnn_mask_loss: 0.6063
Epoch 8/10 | open | 2022-11-07T10:39:20Z | 2022-11-26T15:03:13Z | https://github.com/matterport/Mask_RCNN/issues/2902 | [] | Yaseen0361 | 1 |
matterport/Mask_RCNN | tensorflow | 2,272 | ValueError shape (1024, 88) to weight has shape (1024, 324). | ValueError: Layer #389 (named "mrcnn_bbox_fc"), weight <tf.Variable 'mrcnn_bbox_fc/kernel:0' shape=(1024, 88) dtype=float32> has shape (1024, 88), but the saved weight has shape (1024, 324).
please help me!!!
`Instructions for updating: box_ind is deprecated, use box_indices instead Loading weights /home/wy/pig/Mask_RCNN/mask_rcnn_coco.h5 Traceback (most recent call last): File "coco.py", line 486, in <module> model.load_weights(model_path, by_name=True) File "/home/wy/anaconda3/envs/py36/lib/python3.6/site-packages/mask_rcnn-2.1-py3.6.egg/mrcnn/model.py", line 2130, in load_weights File "/home/wy/anaconda3/envs/py36/lib/python3.6/site-packages/keras/engine/saving.py", line 1328, in load_weights_from_hdf5_group_by_name str(weight_values[i].shape) + '.') ValueError: Layer #389 (named "mrcnn_bbox_fc"), weight <tf.Variable 'mrcnn_bbox_fc/kernel:0' shape=(1024, 88) dtype=float32> has shape (1024, 88), but the saved weight has shape (1024, 324). ` | closed | 2020-07-05T12:00:45Z | 2021-10-26T18:10:47Z | https://github.com/matterport/Mask_RCNN/issues/2272 | [] | gethubwy | 4 |
chmp/ipytest | pytest | 18 | Add a proper signature to ipytest.config | Using `ipytest.config` to configure ipytest has the downside that it is hard to know the available options, as the signature uses kwargs. The signature should be fixed to contain all options. | closed | 2019-05-26T20:16:11Z | 2019-06-11T21:07:46Z | https://github.com/chmp/ipytest/issues/18 | [] | chmp | 1 |
pydata/bottleneck | numpy | 195 | Including bottleneck as dependency in install_requires in setup.py causes installation to fail | Here's a minimal test case. Use the `setup.py` below and then run `python setup.py install` or `python setup.py develop` in a clean environment (i.e. one that doesn't already have `numpy` installed):
```python
from setuptools import setup
setup(
install_requires=['bottleneck'],
)
```
The full failure is listed below. A workaround is to add `numpy` to `setup_requires`, but it seems like downstream packages should not be required to do this. This is probably not noticed by most users since `numpy` is already installed in most environments, but the current behavior is a bug.
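For illustration, a sketch of that workaround in the downstream package's `setup.py` (this simply adds `numpy` to `setup_requires`, as described above, so it is installed before bottleneck's own build script, which imports numpy, runs):

```python
from setuptools import setup

setup(
    # Make numpy available at build time, before bottleneck's
    # setup.py (which imports numpy) is executed:
    setup_requires=['numpy'],
    install_requires=['bottleneck'],
)
```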
```
Searching for bottleneck
Reading https://pypi.org/simple/bottleneck/
Downloading https://files.pythonhosted.org/packages/05/ae/cedf5323f398ab4e4ff92d6c431a3e1c6a186f9b41ab3e8258dff786a290/Bottleneck-1.2.1.tar.gz#sha256=6efcde5f830aed64feafca0359b51db0e184c72af8ba6675b4a99f263922eb36
Best match: Bottleneck 1.2.1
Processing Bottleneck-1.2.1.tar.gz
Writing /var/folders/6s/st61dq157dnd1nwd56hzy16h000153/T/easy_install-_6ubag8s/Bottleneck-1.2.1/setup.cfg
Running Bottleneck-1.2.1/setup.py -q bdist_egg --dist-dir /var/folders/6s/st61dq157dnd1nwd56hzy16h000153/T/easy_install-_6ubag8s/Bottleneck-1.2.1/egg-dist-tmp-q57bo93i
No Bottleneck unit testing available.
Traceback (most recent call last):
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/sandbox.py", line 154, in save_modules
yield saved
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "/var/folders/6s/st61dq157dnd1nwd56hzy16h000153/T/easy_install-_6ubag8s/Bottleneck-1.2.1/setup.py", line 110, in <module>
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/__init__.py", line 140, in setup
return distutils.core.setup(**attrs)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 163, in run
self.run_command("egg_info")
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 295, in run
self.find_sources()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 302, in find_sources
mm.run()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 533, in run
self.add_defaults()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 569, in add_defaults
sdist.add_defaults(self)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/command/sdist.py", line 228, in add_defaults
self._add_defaults_ext()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/command/sdist.py", line 311, in _add_defaults_ext
build_ext = self.get_finalized_command('build_ext')
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/cmd.py", line 299, in get_finalized_command
cmd_obj.ensure_finalized()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/cmd.py", line 107, in ensure_finalized
self.finalize_options()
File "/var/folders/6s/st61dq157dnd1nwd56hzy16h000153/T/easy_install-_6ubag8s/Bottleneck-1.2.1/setup.py", line 24, in finalize_options
AttributeError: 'dict' object has no attribute '__NUMPY_SETUP__'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "./setup.py", line 6, in <module>
install_requires=['bottleneck'],
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/__init__.py", line 140, in setup
return distutils.core.setup(**attrs)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/develop.py", line 38, in run
self.install_for_development()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/develop.py", line 154, in install_for_development
self.process_distribution(None, self.dist, not self.no_deps)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/easy_install.py", line 749, in process_distribution
[requirement], self.local_index, self.easy_install
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/pkg_resources/__init__.py", line 777, in resolve
replace_conflicting=replace_conflicting
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/pkg_resources/__init__.py", line 1060, in best_match
return self.obtain(req, installer)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/pkg_resources/__init__.py", line 1072, in obtain
return installer(requirement)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/easy_install.py", line 676, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/easy_install.py", line 702, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/easy_install.py", line 887, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/easy_install.py", line 1155, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/easy_install.py", line 1141, in run_setup
run_setup(setup_script, args)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/sandbox.py", line 253, in run_setup
raise
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/sandbox.py", line 166, in save_modules
saved_exc.resume()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/sandbox.py", line 141, in resume
six.reraise(type, exc, self._tb)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/_vendor/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/sandbox.py", line 154, in save_modules
yield saved
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "/var/folders/6s/st61dq157dnd1nwd56hzy16h000153/T/easy_install-_6ubag8s/Bottleneck-1.2.1/setup.py", line 110, in <module>
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/__init__.py", line 140, in setup
return distutils.core.setup(**attrs)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/bdist_egg.py", line 163, in run
self.run_command("egg_info")
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 295, in run
self.find_sources()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 302, in find_sources
mm.run()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 533, in run
self.add_defaults()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/site-packages/setuptools/command/egg_info.py", line 569, in add_defaults
sdist.add_defaults(self)
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/command/sdist.py", line 228, in add_defaults
self._add_defaults_ext()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/command/sdist.py", line 311, in _add_defaults_ext
build_ext = self.get_finalized_command('build_ext')
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/cmd.py", line 299, in get_finalized_command
cmd_obj.ensure_finalized()
File "/Users/ddavella/miniconda3/envs/bottleneck-dep/lib/python3.7/distutils/cmd.py", line 107, in ensure_finalized
self.finalize_options()
File "/var/folders/6s/st61dq157dnd1nwd56hzy16h000153/T/easy_install-_6ubag8s/Bottleneck-1.2.1/setup.py", line 24, in finalize_options
AttributeError: 'dict' object has no attribute '__NUMPY_SETUP__'
``` | closed | 2018-09-12T18:31:30Z | 2019-01-07T19:19:45Z | https://github.com/pydata/bottleneck/issues/195 | [] | drdavella | 11 |
iterative/dvc | data-science | 10,098 | Showing real files names instead of cache file name | While doing `dvc pull`, it shows the cache file name on the remote storage. However, it would be much more useful to see what real file is pulled from the remote source:

| closed | 2023-11-17T16:02:36Z | 2023-11-22T18:33:11Z | https://github.com/iterative/dvc/issues/10098 | [
"awaiting response"
] | dbalabka | 4 |
FlareSolverr/FlareSolverr | api | 1,045 | [1337x] (testing) Exception (1337x): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 10.0 seconds.: FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 10.0 seconds. | closed | 2024-01-23T03:42:16Z | 2024-01-23T10:41:47Z | https://github.com/FlareSolverr/FlareSolverr/issues/1045 | [] | GazzaBL | 2 | |
STVIR/pysot | computer-vision | 43 | TRAIN.md | TRAIN.md
In the Testing section:

```
python -u ../tools/test.py \
    --snapshot {} \
    --config config.py \
```

should be:

```
python -u ../../tools/test.py \
    --snapshot {} \
    --config config.yaml \
```
MentatInnovations/datastream.io | jupyter | 3 | Review: utils_esk | closed | 2017-11-06T16:16:56Z | 2018-02-06T20:55:49Z | https://github.com/MentatInnovations/datastream.io/issues/3 | [] | canagnos | 0 | |
iperov/DeepFaceLab | deep-learning | 592 | Encoder dims can't be changed after restarting a model train. | ## Expected behavior
Change encoder dims after stopping training.
## Actual behavior
When changing training options after thousands of iterations, several options can be changed but not encoder dims.
## Steps to reproduce
- Start training a model from scratch from processed facesets.
- Set train options as needed.
- Stop training when desired.
- Restart training the same model.
- Before 2 seconds, press enter to change train parameters.
- No "encoder dims" option is shown.
## Other relevant information
I don't really know if this is a bug or if encoder dims simply cannot be changed once training has started, so I am posting it as a bug. Thanks for your hard work.
huggingface/datasets | computer-vision | 7,030 | Add option to disable progress bar when reading a dataset ("Loading dataset from disk") | ### Feature request
Add an option in load_from_disk to disable the progress bar even if the number of files is larger than 16.
### Motivation
I am reading a lot of datasets, which creates lots of log output.
<img width="1432" alt="image" src="https://github.com/huggingface/datasets/assets/57996478/8d4bbf03-6b89-44b6-937c-932f01b4eb2a">
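For illustration, a minimal stdlib sketch of what the requested per-call option could look like — the function names and the 16-file threshold are stand-ins taken from this report, not the real `datasets` API. (The library does ship a global toggle, `datasets.utils.logging.disable_progress_bar()`, but that silences all progress bars rather than a single call.)

```python
# Hypothetical sketch of the requested option (names are illustrative,
# not the datasets API): thread a flag through the loading path and only
# wrap the shard iterator in a progress bar when the caller allows it.
def load_shards(paths, enable_progress_bar=True):
    iterator = paths
    # Mirror the current behavior: a bar appears once there are >16 files,
    # but only when the caller has not opted out.
    if enable_progress_bar and len(paths) > 16:
        iterator = _with_progress(paths)
    return [p.upper() for p in iterator]  # stand-in for reading each shard

def _with_progress(items):
    for i, item in enumerate(items, 1):
        print(f"Loading dataset from disk: {i}/{len(items)}")
        yield item

print(load_shards(["a", "b"], enable_progress_bar=False))  # ['A', 'B']
```

With `enable_progress_bar=False`, nothing is printed regardless of how many shards are read.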
### Your contribution
Seems like an easy fix to make. I can create a PR if necessary. | closed | 2024-07-06T05:43:37Z | 2024-07-13T14:35:59Z | https://github.com/huggingface/datasets/issues/7030 | [
"enhancement"
] | yuvalkirstain | 2 |
strawberry-graphql/strawberry-django | graphql | 723 | Need to define a resolver if query can return Connection or null | Recently, `strawberry` [added support for optional Connections](https://github.com/strawberry-graphql/strawberry/pull/3707). If the `graphql_type` for `strawberry_django.connection` is nullable, a resolver must be provided.
## Describe the Bug
By default, if I define a query with pagination the resolver is not needed:
```python
@strawberry.type
class Query:
some_types = strawberry_django.connection(
graphql_type=ListConnection[SomeType],
)
```
But when `graphql_type` is nullable, the query returns the following error:
```
Django connection without a resolver needs to define a connection for one and only one django type. To use it in a union, define your own resolver that handles each of those
```
## System Information
- Operating system: MacOS 14.7.3 (23H417)
- Strawberry version (if applicable): 0.262.5
- Strawberry-django version: 0.57.1 | open | 2025-03-24T12:22:48Z | 2025-03-24T12:24:41Z | https://github.com/strawberry-graphql/strawberry-django/issues/723 | [
"bug"
] | rcybulski1122012 | 0 |
iterative/dvc | machine-learning | 9,824 | dvc fetch does not fetch | # Bug Report
`dvc fetch` does not fetch the changes from S3 bucket.
## Description
We have a remote setup on S3 bucket. When one developer adds and pushes new data files, another one does the following:
```
dvc fetch
# output:
# everything up to date (data is not fetched, even though there are changes in the remote)
dvc pull
# output:
# dvc.exceptions.CheckoutError: Checkout failed for following targets:
# data/xxxxxxxxxx
# Is your cache up to date?
# <https://error.dvc.org/missing-files>
```
But then when I do:
```
touch data/deleteme
dvc add data/deleteme
dvc fetch # success!
```
Everything works as expected
### Reproduce
This is difficult to reproduce because we use a remote with an S3 bucket. In general the steps are like this:
Machine 1:
```
dvc add xxx
dvc push
```
Machine 1, clean clone of the repository
```
dvc fetch # fetches correctly
dvc pull # success
```
Machine 2:
```
dvc fetch # does not fetch
dvc pull # fails
touch data/deleteme
dvc add data/deleteme
dvc fetch # success
```
### Expected
dvc fetch should work without having to add a file.
### Environment information
**Output of `dvc doctor` on both machines is the same:**
```console
$ dvc doctor
DVC version: 3.13.3 (pip)
-------------------------
Platform: Python 3.10.12 on Linux-5.15.0-78-generic-x86_64-with-glibc2.31
Subprojects:
dvc_data = 2.12.1
dvc_objects = 0.24.1
dvc_render = 0.5.3
dvc_task = 0.3.0
scmrepo = 1.1.0
Supports:
http (aiohttp = 3.8.5, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.5, aiohttp-retry = 2.8.3),
s3 (s3fs = 2023.6.0, boto3 = 1.28.17)
Config:
Global: /home/xxxx/.config/dvc
System: /etc/xdg/xdg-ubuntu/dvc
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/mapper/vgubuntu-root
Caches: local
Remotes: s3
Workspace directory: ext4 on /dev/mapper/vgubuntu-root
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/1535e60ae0c1c76198d142daac58ff68
```
**Additional Information (if any):**
Failing `dvc fetch -v` command:
```
2023-08-09 13:32:33,083 DEBUG: v3.13.3 (pip), CPython 3.10.12 on Linux-5.15.0-78-generic-x86_64-with-glibc2.31
2023-08-09 13:32:33,083 DEBUG: command: /home/xxxxxxxxx/miniconda3/envs/xxxxxxx/bin/dvc fetch -v
2023-08-09 13:32:33,363 DEBUG: Preparing to transfer data from 's3://xxxxxxxxxxxx/dvc/files/md5' to '/home/xxxx/workspace/xxxx/xxxxxxxx/.dvc/cache/files/md5'
2023-08-09 13:32:33,363 DEBUG: Preparing to collect status from '/home/xxxxxxx/workspace/xxxx/xxxxxx/.dvc/cache/files/md5'
2023-08-09 13:32:33,363 DEBUG: Collecting status from '/home/xxxx/workspace/xxxx/xxxxx/.dvc/cache/files/md5'
2023-08-09 13:32:33,365 DEBUG: Preparing to collect status from 'xxxxxx/dvc/files/md5'
2023-08-09 13:32:33,365 DEBUG: Collecting status from 'xxxxxx/dvc/files/md5'
2023-08-09 13:32:33,380 DEBUG: Querying 2 oids via object_exists
2023-08-09 13:32:34,539 DEBUG: Querying 0 oids via object_exists
2023-08-09 13:32:35,194 DEBUG: transfer dir: md5: dea817ef564598538aa68ebf501e52b1.dir with 0 files
2023-08-09 13:32:35,453 DEBUG: transfer dir: md5: 66d9d5357614a7963f8e6d997f4d4491.dir with 68 files
70 files fetched
2023-08-09 13:34:21,013 DEBUG: Analytics is enabled.
2023-08-09 13:34:21,039 DEBUG: Trying to spawn '['daemon', '-q', 'analytics', '/tmp/tmphsao0ndc']'
2023-08-09 13:34:21,040 DEBUG: Spawned '['daemon', '-q', 'analytics', '/tmp/tmphsao0ndc']'
```
Failing `dvc pull -v` command:
```
2023-08-09 13:31:22,845 DEBUG: failed to create '/home/xxxx/workspace/xxxx/xxxx/data/raw/2023-08-08-xxxx-spr/3584_e4bb48fd-b484-4311-97cc-ee32c729d01d/e4bb48fd-b484-4311-97cc-ee32c729d01d_point_cloud.actdat' from '/home/xxxx/workspace/xxxx/xxxx/.dvc/cache/files/md5/0c/aba4b940e9ccf4ce3e1375672ea067' - [Errno 2] No such file or directory: '/home/xxxx/workspace/xxxx/xxxx/.dvc/cache/files/md5/0c/aba4b940e9ccf4ce3e1375672ea067'
Traceback (most recent call last):
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/dvc_objects/fs/generic.py", line 308, in transfer
_try_links(
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/dvc_objects/fs/generic.py", line 250, in _try_links
_link(link, from_fs, from_path, to_fs, to_path)
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/dvc_objects/fs/generic.py", line 62, in _link
func(from_path, to_path)
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/dvc_objects/fs/base.py", line 389, in reflink
return self.fs.reflink(from_info, to_info)
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/dvc_objects/fs/local.py", line 169, in reflink
return system.reflink(path1, path2)
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/dvc_objects/fs/system.py", line 105, in reflink
_reflink_linux(source, link_name)
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/dvc_objects/fs/system.py", line 90, in _reflink_linux
with open(src, "rb") as s, open(dst, "wb+") as d:
FileNotFoundError: [Errno 2] No such file or directory: '/home/xxxx/workspace/xxxx/xxxx/.dvc/cache/files/md5/0c/aba4b940e9ccf4ce3e1375672ea067'
2023-08-09 13:31:22,856 DEBUG: Removing '/home/xxxx/workspace/xxxx/xxxx/data/raw/2023-08-08-xxxx-spr'
Everything is up to date.
2023-08-09 13:31:22,865 ERROR: failed to pull data from the cloud - Checkout failed for following targets:
data/raw/2023-08-08-xxxx-spr
Is your cache up to date?
<https://error.dvc.org/missing-files>
Traceback (most recent call last):
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/dvc/commands/data_sync.py", line 31, in run
stats = self.repo.pull(
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/dvc/repo/__init__.py", line 64, in wrapper
return f(repo, *args, **kwargs)
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/dvc/repo/pull.py", line 43, in pull
stats = self.checkout(
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/dvc/repo/__init__.py", line 64, in wrapper
return f(repo, *args, **kwargs)
File "/home/xxxx/miniconda3/envs/xxxx/lib/python3.10/site-packages/dvc/repo/checkout.py", line 208, in checkout
raise CheckoutError([relpath(out_path) for out_path in failed], stats)
dvc.exceptions.CheckoutError: Checkout failed for following targets:
data/raw/2023-08-08-xxxx-spr
Is your cache up to date?
<https://error.dvc.org/missing-files>
2023-08-09 13:31:22,866 DEBUG: Analytics is enabled.
2023-08-09 13:31:22,889 DEBUG: Trying to spawn '['daemon', '-q', 'analytics', '/tmp/tmpdx6virc2']'
2023-08-09 13:31:22,890 DEBUG: Spawned '['daemon', '-q', 'analytics', '/tmp/tmpdx6virc2']'
```
| closed | 2023-08-09T12:01:42Z | 2023-08-15T15:58:22Z | https://github.com/iterative/dvc/issues/9824 | [] | radomsak | 4 |
geex-arts/django-jet | django | 220 | Filtering based on multiple `RelatedFieldAjaxListFilter` not working | Can I use multiple `RelatedFieldAjaxListFilter` filters simultaneously to filter `change_list` ?
I have multiple related fields, and whenever I try to apply a second filter, the first filter is removed. Multiple filters work fine without `RelatedFieldAjaxListFilter`.
@f1nality Is that a default behaviour? | open | 2017-05-29T07:46:33Z | 2019-11-14T21:59:01Z | https://github.com/geex-arts/django-jet/issues/220 | [] | a1Gupta | 4 |
pandas-dev/pandas | data-science | 60,491 | BUG: Parquet roundtrip fails with numerical categorical dtype | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
>>> df=pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
>>> df = df.astype({'A':'category'})
>>> print(df.dtypes)
A category
B int64
dtype: object
>>> df.to_parquet('test.parquet')
>>> df_roundtrip = pd.read_parquet('test.parquet')
>>> print(df_roundtrip.dtypes)
A int64
B int64
dtype: object
>>> assert df_roundtrip.dtypes.equals(df.dtypes)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AssertionError
```
### Issue Description
The roundtrip does not preserve dtypes: the `category` column comes back as `int64`.
### Expected Behavior
df_roundtrip has the same dtypes as df.dtypes
### Hot-Fix
```python
df=pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
df = df.astype({'A':'str'}).astype({'A':'category'})
print(df.dtypes)
df.to_parquet('test.parquet')
df_roundtrip = pd.read_parquet('test.parquet')
print(df_roundtrip.dtypes)
assert df_roundtrip.dtypes.equals(df.dtypes)
```
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.2
python-bits : 64
OS : Darwin
OS-release : 23.5.0
Version : Darwin Kernel Version 23.5.0: Wed May 1 20:16:51 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8103
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : fr_FR.UTF-8
LOCALE : fr_FR.UTF-8
pandas : 2.2.3
numpy : 2.0.2
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.0
Cython : None
sphinx : None
IPython : 8.24.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : 1.4.2
dataframe-api-compat : None
fastparquet : 2024.11.0
fsspec : 2024.10.0
html5lib : 1.1
hypothesis : 6.122.1
gcsfs : 2024.10.0
jinja2 : 3.1.4
lxml.etree : 5.3.0
matplotlib : 3.9.3
numba : 0.60.0
numexpr : 2.10.2
odfpy : None
openpyxl : 3.1.5
pandas_gbq : 0.24.0
psycopg2 : 2.9.10
pymysql : 1.4.6
pyarrow : 18.1.0
pyreadstat : 1.2.8
pytest : 8.3.4
python-calamine : None
pyxlsb : 1.0.10
s3fs : 2024.10.0
scipy : 1.14.1
sqlalchemy : 2.0.36
tables : 3.10.1
tabulate : 0.9.0
xarray : 2024.11.0
xlrd : 2.0.1
xlsxwriter : 3.2.0
zstandard : 0.23.0
tzdata : 2024.2
qtpy : 2.4.2
pyqt5 : None
</details>
| open | 2024-12-04T12:06:09Z | 2025-02-01T23:32:19Z | https://github.com/pandas-dev/pandas/issues/60491 | [
"Bug",
"Categorical",
"IO Parquet"
] | adrienpacifico | 3 |
gradio-app/gradio | machine-learning | 10,741 | Unable to upload a pdf file of size 2.92MB | ### Describe the bug
```
Error
HTTP 413:
413 Request Entity Too Large
nginx/1.18.0
```
I am using Google Colab to run the Gradio interface, and I am trying to upload a file of size 2.92 MB. I am seeing this error now, though I have never seen it before. I have been using the same code, the same Gradio interface, and the same file.
Why am I seeing this error today??
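For context, the HTTP 413 here is generated by an nginx reverse proxy sitting between the browser and the app (hence the `nginx/1.18.0` signature), not by Gradio itself: nginx rejects request bodies larger than its `client_max_body_size`, which defaults to 1 MB. If you control the proxy (which you don't for Colab's share tunnel), the usual fix is a hedged config sketch like:

```nginx
# Hedged sketch -- only applies if you run your own nginx in front of Gradio.
# "413 Request Entity Too Large" means the request body exceeded this limit.
server {
    listen 80;
    client_max_body_size 20M;  # default is 1M; raise above your largest upload
    location / {
        proxy_pass http://127.0.0.1:7860;  # Gradio's default port
    }
}
```

On Colab the proxy is not user-configurable, so the practical workaround there is usually a smaller file or a different hosting setup.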
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
#Gradio interface
with gr.Blocks(title=" OCR") as demo:
gr.Markdown("# OCR")
gr.Markdown("Upload a PDF and its ground truth JSON file to convert content to structured JSON with lexicon correction and view performance metrics.")
with gr.Row():
pdf_input = gr.File(label="Upload PDF", file_types=[".pdf"])
gt_json_input = gr.File(label="Upload Ground Truth JSON", file_types=[".json"])
submit_btn = gr.Button("Process")
with gr.Row():
json_output = gr.Textbox(label="Corrected JSON Output", lines=10)
cer_output = gr.Textbox(label="Character Error Rate (CER)", lines=1)
accuracy_output = gr.Textbox(label="Field-Level Accuracy", lines=1)
submit_btn.click(fn=process_pdf, inputs=[pdf_input, gt_json_input], outputs=[json_output, cer_output, accuracy_output])
logging.info("Launching Gradio interface")
demo.launch(share=True)
```
### Screenshot


### Logs
```shell
not getting any errors while debugging
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.20.0
gradio_client version: 1.7.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 3.7.1
audioop-lts is not installed.
fastapi: 0.115.11
ffmpy: 0.5.0
gradio-client==1.7.2 is not installed.
groovy: 0.1.2
httpx: 0.28.1
huggingface-hub: 0.28.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.2
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.9
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.46.0
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.28.1
huggingface-hub: 0.28.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
Blocking usage of gradio | closed | 2025-03-06T11:11:51Z | 2025-03-08T00:26:16Z | https://github.com/gradio-app/gradio/issues/10741 | [
"bug"
] | likith1908 | 3 |
jumpserver/jumpserver | django | 14,629 | [Question] | Can't connect windows RDP out of the box. Installed with quick_start.sh. Only added one asset to test. Logs I can find:
```
jms_lion | 2024/12/10 18:59:32 5.audio,1.1,31.audio/L16; instruction with bad Content: 5.audio,1.1,31.audio/L16
jms_lion | 2024/12/10 18:59:32 5.audio,1.1,31.audio/L16; instruction with bad Content: 5.audio,1.1,31.audio/L16
jms_lion | 2024-12-10 18:59:42 tunnel conn.go [ERROR] Session[df26e73c-dcd5-4e1b-a075-29e9e154acf9] guacamole server read err: read tcp 127.0.0.1:41424->127.0.0.1:4822: use of closed network connection
```
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b560d2818e12 jumpserver/web:v4.4.1-ce "/docker-entrypoint.…" 13 minutes ago Up 12 minutes (healthy) 0.0.0.0:80->80/tcp, :::80->80/tcp jms_web
5ed36de40b71 jumpserver/core:v4.4.1-ce "./entrypoint.sh sta…" 13 minutes ago Up 12 minutes (healthy) 8080/tcp jms_celery
dab159d1205f jumpserver/chen:v4.4.1-ce "./entrypoint.sh wisp" 13 minutes ago Up 12 minutes (healthy) 8082/tcp jms_chen
f44f6c46ef4b jumpserver/core:v4.4.1-ce "./entrypoint.sh sta…" 13 minutes ago Up 12 minutes (healthy) 8080/tcp jms_core
cc5a4b3d2593 jumpserver/koko:v4.4.1-ce "./entrypoint.sh ./k…" 13 minutes ago Up 12 minutes (healthy) 0.0.0.0:2222->2222/tcp, :::2222->2222/tcp jms_koko
25cfa5abfa48 jumpserver/lion:v4.4.1-ce "./entrypoint.sh sup…" 13 minutes ago Up 12 minutes (healthy) 4822/tcp, 8081/tcp jms_lion
07448a6ebdbc postgres:16.3-bullseye "docker-entrypoint.s…" 13 minutes ago Up 13 minutes (healthy) 5432/tcp jms_postgresql
82b3b75496d7 redis:7.0-bullseye "docker-entrypoint.s…" 13 minutes ago Up 13 minutes (healthy) 6379/tcp jms_redis
```
| closed | 2024-12-10T11:02:32Z | 2024-12-10T12:17:17Z | https://github.com/jumpserver/jumpserver/issues/14629 | [] | semihkaraca | 1 |
Kav-K/GPTDiscord | asyncio | 449 | Model based permissions | **Is your feature request related to a problem? Please describe.**
As it is today, the available permission sets are:
ADMIN_ROLES
DALLE_ROLES
GPT_ROLES
TRANSLATOR_ROLES
SEARCH_ROLES
INDEX_ROLES
CHANNEL_CHAT_ROLES
CHANNEL_INSTRUCTION_ROLES
CHAT_BYPASS_ROLES
**Describe the solution you'd like**
A permission group to delineate between models, e.g. a permission group `GPT3_ROLES` and another `GPT4_ROLES`.
**Additional context**
Since the model is an attribute that can be set in a converse, any user can override GPT-3.5, which may be the more cost-effective model. While you could just go into the source and remove GPT-4 as an available model, that would leave no one able to use the more expensive model when required.
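A minimal sketch of how per-model role gating could work — the role and model names below are illustrative, not taken from the GPTDiscord codebase:

```python
# Hedged sketch of model-based permissions: map each model to the roles
# allowed to select it, falling back to the general GPT_ROLES set when a
# model has no dedicated entry. All names here are hypothetical.
GPT_ROLES = {"member"}
MODEL_ROLES = {
    "gpt-4": {"admin", "gpt4-user"},       # hypothetical GPT4_ROLES
    "gpt-3.5-turbo": {"member", "admin"},  # hypothetical GPT3_ROLES
}

def can_use_model(user_roles, model):
    # A user may select the model if they hold at least one allowed role.
    allowed = MODEL_ROLES.get(model, GPT_ROLES)
    return bool(set(user_roles) & allowed)

print(can_use_model({"member"}, "gpt-3.5-turbo"))  # True
print(can_use_model({"member"}, "gpt-4"))          # False
```

A check like this could run wherever the model attribute is set, so switching away from the default model requires the dedicated role.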
| open | 2023-12-11T07:38:38Z | 2023-12-11T07:38:38Z | https://github.com/Kav-K/GPTDiscord/issues/449 | [] | jeffe | 0 |
jina-ai/clip-as-service | pytorch | 394 | Error with Stop BertServer from Python | My Code:
```python
from bert_serving.server.helper import get_args_parser
from bert_serving.server import BertServer
from bert_serving.client import BertClient

args = get_args_parser().parse_args(['-model_dir', '/home/cn-pvgtoddlu/model/chinese_L-12_H-768_A-12',
                                     '-num_worker', '1',
                                     '-port', '5555',
                                     '-port_out', '5556',
                                     '-max_seq_len', 'NONE',
                                     '-mask_cls_sep',
                                     '-cpu'])
server = BertServer(args)
print("bert server start....")
server.start()
bc = BertClient(port=5555, port_out=5556)
result = bc.encode(['First do it', 'then do it right', 'then do it better'])
print(result)
BertServer.shutdown(port=5555)
print("bert server stop....")
```
Error:
```
Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/home/cn-pvgtoddlu/.pycharm_helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile
    pydev_imports.execfile(filename, global_vars, local_vars)  # execute the script
  File "/home/cn-pvgtoddlu/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/cn-pvgtoddlu/tmp/pycharm_project_815/AI_Factory_Core/component/word_embedding/extract_embedding.py", line 21, in <module>
    BertServer.shutdown(port=5555)
TypeError: shutdown() got an unexpected keyword argument 'port'
```
I think the **args** parameter of the static method `shutdown` is a parsed-args namespace, not individual keyword arguments like `port`.
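The traceback suggests `shutdown` takes a single parsed-args namespace rather than keyword arguments. A minimal stdlib sketch of that calling convention — the parser flags below are illustrative stand-ins, and bert-serving reportedly ships a similar helper (`get_shutdown_parser` in `bert_serving.server.helper`; verify against your installed version):

```python
import argparse

def build_shutdown_parser():
    # Stand-in for the library's shutdown-args parser (flags are assumptions).
    p = argparse.ArgumentParser()
    p.add_argument('-ip', default='localhost')
    p.add_argument('-port', type=int, required=True)
    p.add_argument('-timeout', type=int, default=5000)
    return p

def shutdown(args):
    # Stand-in for BertServer.shutdown: it reads attributes off the
    # namespace (args.ip, args.port, args.timeout) instead of taking kwargs.
    return 'tcp://%s:%d' % (args.ip, args.port)

shutdown_args = build_shutdown_parser().parse_args(['-port', '5555'])
print(shutdown(shutdown_args))  # tcp://localhost:5555
```

The point is the calling pattern: build the namespace via the parser, then pass it as the single `args` argument instead of `shutdown(port=5555)`.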
shutdown function:
```python
@staticmethod
def shutdown(args):
    with zmq.Context() as ctx:
        ctx.setsockopt(zmq.LINGER, args.timeout)
        with ctx.socket(zmq.PUSH) as frontend:
            try:
                frontend.connect('tcp://%s:%d' % (args.ip, args.port))
``` | closed | 2019-06-27T08:25:37Z | 2019-12-05T21:37:44Z | https://github.com/jina-ai/clip-as-service/issues/394 | [] | lu161513 | 4 |
mlflow/mlflow | machine-learning | 14,464 | Fix blog link in docs | ### Summary
The blog link in the footer needs to be fixed:
```diff
diff --git a/docs/docusaurus.config.ts b/docs/docusaurus.config.ts
index bc2aa8afb..6fea98a12 100644
--- a/docs/docusaurus.config.ts
+++ b/docs/docusaurus.config.ts
@@ -137,7 +137,7 @@ const config: Config = {
},
{
label: "Blog",
- to: "https://mlflow.org/releases",
+ to: "https://mlflow.org/blog",
},
],
},
```
### Notes
- Make sure to open a PR from a **non-master** branch.
- Sign off the commit using the `-s` flag when making a commit:
```sh
git commit -s -m "..."
# ^^ make sure to use this
```
- Include `#{issue_number}` (e.g. `#123`) in the PR description when opening a PR.
| closed | 2025-02-05T09:49:02Z | 2025-02-07T00:47:24Z | https://github.com/mlflow/mlflow/issues/14464 | [
"good first issue",
"has-closing-pr"
] | harupy | 2 |
kizniche/Mycodo | automation | 1,362 | Blank Widgets | ### Describe the problem/bug
The dashboard does not display any widgets properly. This occurred after updating to the latest version of mycodo. Creating new dashboards and adding new widgets does not make a difference. The widget size can be changed, they can be dragged around, the setting button does work, however using any of that functionality does not make the widgets display any information.
### Versions:
- Mycodo Version: 8.15.13
- Raspberry Pi Version: 4B
- Raspbian OS Version: Bookworm
### Reproducibility
Please list specific setup details that are involved and the steps to reproduce the behavior:
1. Open Dashboard
2. See Error
### Expected behavior
Dashboard widgets should display properly
### Screenshots
My normal dashboard, blank:

New test dashboard, also blank:

### Additional context
I have tried viewing the dashboard in Firefox and Chrome; I get the same results in both.
| closed | 2024-01-18T03:13:17Z | 2024-01-18T03:25:13Z | https://github.com/kizniche/Mycodo/issues/1362 | [] | Frosty-Burrito | 10 |
ultralytics/ultralytics | machine-learning | 19,782 | TorchScript on MPS (M1): float64 errors | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hey guys,
I’m trying to run a TorchScript-exported YOLO 11 model (tried with the latest `v8.3.93`) on MPS (Mac M1). I keep hitting:
`torch.jit.load fails with TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.`
I checked the exported model; all params are float32.
I also tried to initialize the model manually using `torch.jit.load`, but it didn't get very far.
Any hints?
Thanks.
### Additional
```
# Example
model = YOLO("yolo11m.pt")
model.export(format="torchscript")
model_ts = YOLO("yolo11m.torchscript", task="detect")
model_ts.predict("image.jpg", device="mps")
``` | open | 2025-03-19T11:28:17Z | 2025-03-20T07:45:41Z | https://github.com/ultralytics/ultralytics/issues/19782 | [
"question",
"detect",
"exports"
] | theOnlyBoy | 5 |
vllm-project/vllm | pytorch | 14,952 | [Bug]: Disaggregated Prefilling use different TP between prefill instance and decode instance , it will be hanged | ### Your current environment
<details>
<summary>I changed disagg_performance_benchmark.sh as follows</summary>
```text
launch_disagg_prefill() {
model="$MODEL_PATH"
# disagg prefill
CUDA_VISIBLE_DEVICES=0 python3 \
-m vllm.entrypoints.openai.api_server \
--model $model \
--port 8100 \
--max-model-len 10000 \
--tensor-parallel-size 1 \
--dtype=half \
--gpu-memory-utilization 0.6 \
--kv-transfer-config \
'{"kv_connector":"PyNcclConnector","kv_role":"kv_producer","kv_rank":0,"kv_parallel_size":2,"kv_buffer_size":5e9}' &
CUDA_VISIBLE_DEVICES=1,2 python3 \
-m vllm.entrypoints.openai.api_server \
--model $model \
--port 8200 \
--max-model-len 10000 \
--tensor-parallel-size 2 \
--dtype=half \
--gpu-memory-utilization 0.6 \
--kv-transfer-config \
'{"kv_connector":"PyNcclConnector","kv_role":"kv_consumer","kv_rank":1,"kv_parallel_size":2,"kv_buffer_size":5e9}' &
wait_for_server 8100
wait_for_server 8200
python3 disagg_prefill_proxy_server.py &
sleep 1
}
```

I have four V100s 16GB with NVLink. When I use the same TP size for both instances, it works normally.
</details>
### 🐛 Describe the bug
<img width="572" alt="Image" src="https://github.com/user-attachments/assets/b368d85e-1966-4d8e-a181-aa4d9fac14e3" />
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-17T12:01:32Z | 2025-03-17T12:02:42Z | https://github.com/vllm-project/vllm/issues/14952 | [
"bug"
] | 67lc | 0 |
jupyter-incubator/sparkmagic | jupyter | 443 | Make available via Conda on Windows | Hello folks, I have been using conda (miniconda, specifically) to install sparkmagic as part of a script my team is distributing. This works fine for our linux and OSX users. I swear a week ago it was working fine on Windows, as well. But yesterday I discovered the package is no longer available for Windows. Digging into the conda-forge repo browser (https://anaconda.org/conda-forge/sparkmagic) it's not listed for Windows.
What is preventing the package from being available to Windows users? | closed | 2018-03-13T20:13:09Z | 2021-07-21T02:30:26Z | https://github.com/jupyter-incubator/sparkmagic/issues/443 | [] | benrr101 | 1 |
jazzband/django-oauth-toolkit | django | 1,016 | Application instance fails with RSA selected then clicking "Save". | 
| open | 2021-09-27T14:39:17Z | 2023-10-04T15:03:37Z | https://github.com/jazzband/django-oauth-toolkit/issues/1016 | [
"question"
] | enjoysmath | 3 |
ultralytics/ultralytics | python | 18,933 | YOLOv8 OBB xyxy returns negative or out-of-bounds coordinates | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Predict
### Bug
Hi, I recently wanted to switch one of our projects from regular axis-aligned Bounding Boxes to OBB.
Because we don't have OBB-labeled data yet, I started by converting my custom dataset from the regular YOLO annotation format to the OBB format. With this, I trained an OBB model and integrated it as a temporary solution into our current labeling pipeline.
The bug occurs after prediction when you access a BoundingBox object via its `.xyxy` attribute. In some cases the points contain out-of-bounds coordinates (e.g. -5 in my case). This broke things far downstream of my labeling pipeline, and I tediously backtracked the failure to this bug.
As a first step, I would propose raising a warning whenever someone accesses bounding-box attributes that have an out-of-bounds coordinate in one of the points. This makes the problem easily identifiable and quickly fixable for the user. I suspect that coordinates can also exceed the `x_max` or `y_max` image bounds, since the same behavior should apply there; if so, that case needs a warning as well. The ultimate solution would be to prevent BoundingBox coordinates from ever falling outside the image, as this is the typically expected behavior and shouldn't have to be handled on the user's end.
### Environment
```
Ultralytics 8.3.62 🚀 Python-3.8.19 torch-1.8.1+cu102 CUDA:0 (GRID V100S-16C, 16384MiB)
Setup complete ✅ (4 CPUs, 125.8 GB RAM, 218.0/244.4 GB disk)
OS Linux-5.4.0-202-generic-x86_64-with-glibc2.29
Environment Docker
Python 3.8.19
Install pip
RAM 125.80 GB
Disk 218.0/244.4 GB
CPU Intel Xeon Platinum 8268 2.90GHz
CPU count 4
GPU GRID V100S-16C, 16384MiB
GPU count 1
CUDA 10.2
numpy ✅ 1.24.4>=1.23.0
numpy ✅ 1.24.4<2.0.0; sys_platform == "darwin"
matplotlib ✅ 3.7.5>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 10.4.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.10.1>=1.4.1
torch ✅ 1.8.1>=1.8.0
torch ✅ 1.8.1!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.9.1>=0.9.0
tqdm ✅ 4.66.4>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.0.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ❌ (not installed)>=2.0.0
```
### Minimal Reproducible Example
```python
from ultralytics import YOLO
import cv2
model = YOLO("yolov8l-obb.pt")
image_path = "..."
results = model.predict(image_path, imgsz=2560, conf=0.1, iou=0.98)
obb_results_cpu = results[0].to("cpu").obb
for obb in obb_results_cpu:
if any(obb.xyxy[0] < 0):
print("The xyxy Conversion is out of bounds!")
```
### Additional
As a solution I employed this code fixing potentially out-of-bounds coordinates:
```python
x_min, y_min, x_max, y_max = bounding_box.xyxy[0]
x_min = max(int(np.floor(x_min.item())), 0)
y_min = max(int(np.floor(y_min.item())), 0)
x_max = min(int(np.ceil(x_max.item())), img_x_max)
y_max = min(int(np.ceil(y_max.item())), img_y_max)
```
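A reusable version of this clamp (a hypothetical helper of my own, not an Ultralytics API) keeps the floor/ceil behavior but guarantees the result stays inside the image:

```python
import math

def clamp_xyxy(box, img_w, img_h):
    """Clamp one (x_min, y_min, x_max, y_max) box to [0, img_w] x [0, img_h].

    Floors the min corner and ceils the max corner so the clamped box still
    covers the predicted region, then clips every coordinate into the image.
    """
    x_min, y_min, x_max, y_max = box
    return (
        min(max(math.floor(x_min), 0), img_w),
        min(max(math.floor(y_min), 0), img_h),
        min(max(math.ceil(x_max), 0), img_w),
        min(max(math.ceil(y_max), 0), img_h),
    )

print(clamp_xyxy((-5.2, 3.7, 100.4, 2000.0), img_w=640, img_h=480))  # (0, 3, 101, 480)
```

Applying this to every `bounding_box.xyxy[0]` (after `.item()`-ing the tensor values) would remove the clamping boilerplate from the rest of the pipeline.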
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-01-28T19:23:39Z | 2025-01-30T00:00:29Z | https://github.com/ultralytics/ultralytics/issues/18933 | [
"bug",
"OBB"
] | biggeR-data | 13 |
BlinkDL/RWKV-LM | pytorch | 192 | Gratitude and Inquiries | Dear Author,
I wanted to reach out and extend my gratitude for creating this remarkable model. It has truly opened up new horizons in my exploration of Large Language Models. I must say, I'm absolutely enamored by it.
Recently, I had the opportunity to test out the 5.2 version, experimenting with models ranging from 1.5B to 7B. The performance surpassed even that of the V4. Your implementation of the RWKV technique is indeed as impressive as its reputation suggests.
I do have a few questions:
1. While exploring the 5.2 version, I noticed that the 3B model seems to demonstrate superior in-context learning abilities compared to the 7B. Could this be attributed to the fact that the 7B model only utilizes 10% of its parameters? (This is an assumption I made based on the nomenclature.)
2. If I aim to further enhance the in-context learning ability with RWKV, are there any specific considerations or strategies you would recommend, apart from leveraging a specialized dataset?
Once again, I want to express my gratitude for your diligent work. I'm eagerly looking forward to your response.
* * * | open | 2023-10-21T11:40:30Z | 2023-10-29T19:27:28Z | https://github.com/BlinkDL/RWKV-LM/issues/192 | [] | 997172286 | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,134 | is exact image registration for image pairs compulsory to get good results in pix2pix? | Sir ,
I have a doubt as to whether exact image registration is compulsory to get good results, or whether results won't be affected by slight misregistration in a paired dataset.
InstaPy/InstaPy | automation | 5,798 | ModuleNotFoundError: No module named 'google.protobuf' | ```
user@ubuntu:~$ sudo apt-get install google.protobuf
Reading package lists... Done
Building dependency tree
Reading state information... Done
Note, selecting 'ruby-google-protobuf' for regex 'google.protobuf'
ruby-google-protobuf is already the newest version (3.6.1.3-2ubuntu5).
The following packages were automatically installed and are no longer required:
libfprint-2-tod1 libllvm9 linux-headers-5.4.0-26
linux-headers-5.4.0-26-generic linux-image-5.4.0-26-generic
linux-modules-5.4.0-26-generic linux-modules-extra-5.4.0-26-generic
python3-click python3-colorama
Use 'sudo apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 11 not upgraded.
user@ubuntu:~$ pip3 install protobuf
Requirement already satisfied: protobuf in /usr/lib/python3/dist-packages (3.6.1)
user@ubuntu:~$ pip3 install google
Collecting google
Downloading google-3.0.0-py2.py3-none-any.whl (45 kB)
|████████████████████████████████| 45 kB 54 kB/s
Requirement already satisfied: beautifulsoup4 in ./.local/lib/python3.8/site-packages (from google) (4.9.1)
Requirement already satisfied: soupsieve>1.2 in ./.local/lib/python3.8/site-packages (from beautifulsoup4->google) (2.0.1)
Installing collected packages: google
Successfully installed google-3.0.0
user@ubuntu:~$ pip3 install google-cloud
Collecting google-cloud
Downloading google_cloud-0.34.0-py2.py3-none-any.whl (1.8 kB)
Installing collected packages: google-cloud
Successfully installed google-cloud-0.34.0
user@ubuntu:~$ pip3 install google.protobuf
user@ubuntu:~/Documents/InstaPy$ python3 quickstart.py
Traceback (most recent call last):
File "quickstart.py", line 4, in <module>
from instapy import InstaPy
File "/home/user/Documents/InstaPy/instapy/__init__.py", line 6, in <module>
from .instapy import InstaPy
File "/home/user/Documents/InstaPy/instapy/instapy.py", line 30, in <module>
from .clarifai_util import check_image
File "/home/user/Documents/InstaPy/instapy/clarifai_util.py", line 3, in <module>
from clarifai.rest import ClarifaiApp
File "/home/user/.local/lib/python3.8/site-packages/clarifai/rest/__init__.py", line 3, in <module>
from clarifai.rest.client import ApiClient
File "/home/user/.local/lib/python3.8/site-packages/clarifai/rest/client.py", line 23, in <module>
from google.protobuf.struct_pb2 import Struct
ModuleNotFoundError: No module named 'google.protobuf'
```
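For what it's worth, the log above mixes `apt`, `pip3`, and a `--user` site-packages directory, so the interpreter running `quickstart.py` may not be the one the packages were installed into. A generic (InstaPy-independent) way to check which interpreter is running and print the matching pip command:

```python
import sys

def check_protobuf():
    """Return "ok" if the *current* interpreter can import google.protobuf,
    otherwise print the pip command that targets exactly this interpreter."""
    try:
        import google.protobuf  # noqa: F401
        return "ok"
    except ImportError:
        print("missing; try:", sys.executable, "-m pip install protobuf")
        return "missing"

print(sys.executable, "->", check_protobuf())
```

Running this with the same `python3` that runs `quickstart.py` shows immediately whether the install landed in the right environment.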
| closed | 2020-09-23T17:00:13Z | 2020-11-30T16:12:06Z | https://github.com/InstaPy/InstaPy/issues/5798 | [
"wontfix"
] | MiChaelinzo | 2 |
Yorko/mlcourse.ai | plotly | 649 | can you help find email for Измайлов Константин | I see
Измайлов Константин Константинович (@Izmajlovkonstantin)
Can you help me find an email address for Измайлов Константин?
I am trying to reach him to ask for the code for
https://sphere.mail.ru/curriculum/program/discipline/818/
especially for the video
https://www.youtube.com/watch?v=fit-ZAWexZ0&list=PLrCZzMib1e9p6lpNv-yt6uvHGyBxQncEh&index=8
11. Введение в SQL. Курс "ВВЕДЕНИЕ В АНАЛИЗ ДАННЫХ" | Технострим
from
mlcourse.ai/jupyter_russian/tutorials/boruta_tutorial_Izmajlovkonstantin.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<center>\n",
"<img src=\"../../img/ods_stickers.jpg\">\n",
"## Открытый курс по машинному обучению\n",
"<center>Автор материала: Измайлов Константин Константинович (@Izmajlovkonstantin)."
]
} | closed | 2020-01-30T21:33:58Z | 2020-01-30T23:28:54Z | https://github.com/Yorko/mlcourse.ai/issues/649 | [
"invalid"
] | Sandy4321 | 1 |
taverntesting/tavern | pytest | 691 | how to save file when downloading file | ```yaml
---
test_name: Test streaming (downloading) file

stages:
  - name: Stream file
    request:
      url: "http://www.httpbin.org/image/png"
      method: GET
      stream: True
    response:
      status_code: 200
```
I want to save the PNG file, but how?
| closed | 2021-05-30T12:30:50Z | 2021-06-05T11:17:50Z | https://github.com/taverntesting/tavern/issues/691 | [] | dy20082250 | 1 |
MagicStack/asyncpg | asyncio | 602 | Please add scalable app file structure in documentation. Also is my approach correct? |
Latest version of asyncpg
No PGBouncer
Python 3.8.2
Hi, I am using FastAPI but I have no idea how to maintain the DB connection and pool. I am switching from the Django ORM, but it is still hard for me to get a good grasp on this. Please guide me on how I can improve this, and also add this to your documentation.
My questions:
How to maintain a constant database connection?
How to maintain a pool of connections?
How to use the same connection out of the pool for every query in the same request?
Is my naive solution correct? How can I improve this and make this production-ready?
**db_connection.py**
```
import asyncpg
from configs.settings import settings
class Database:
def __init__(self):
self.user = settings.POSTGRES_USER
self.password = settings.POSTGRES_PASSWORD
self.host = settings.POSTGRES_SERVER
self.port = "5432"
self.database = settings.POSTGRES_DB
self._cursor = None
self._connection_pool = None
self.con = None
async def connect(self):
if not self._connection_pool:
try:
self._connection_pool = await asyncpg.create_pool(
min_size=1,
max_size=10,
command_timeout=60,
host=self.host,
port=self.port,
user=self.user,
password=self.password,
database=self.database,
)
except Exception as e:
print(e)
async def fetch_rows(self, query: str):
print(query)
if not self._connection_pool:
print("shouldnt be here")
await self.connect()
else:
self.con = await self._connection_pool.acquire()
try:
result = await self.con.fetch(query)
print(result)
return result
except Exception as e:
print(e)
finally:
print("pool released")
await self._connection_pool.release(self.con)
```
**db_session.py**
```
from .db_connection import Database
database_instance = Database()
```
**main.py**
```
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from routes import items, user
from utils.middleware import middleware
from configs import open_api_tags
from configs.settings import settings
from db.db_session import database_instance
app = FastAPI(
title=settings.PROJECT_NAME,
description=settings.PROJECT_DESCRIPTION,
version="0.0.1",
openapi_tags=open_api_tags.tags_metadata,
)
app.add_middleware(
CORSMiddleware,
allow_origins=settings.BACKEND_CORS_ORIGINS,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
app.middleware("http")(middleware)
@app.on_event("startup")
async def startup():
await database_instance.connect()
app.include_router(user.router)
app.include_router(items.router, prefix="/items")
```
**user.py**
```
from fastapi import APIRouter
from db.db_session import database_instance
router = APIRouter()
@router.get("/users/me", tags=["users"])
async def read_user_me():
result = await database_instance.fetch_rows("SELECT * from user")
print(result)
    return {"username": "fakecurrentuser"}
```
| open | 2020-07-26T17:30:27Z | 2020-07-26T17:32:37Z | https://github.com/MagicStack/asyncpg/issues/602 | [] | gauravsaini964 | 0 |
HumanSignal/labelImg | deep-learning | 237 | Is there a way to generate an xml file with no bounding boxes in it? | I essentially want to generate an xml file without an <object> field. | closed | 2018-02-16T03:11:45Z | 2018-05-22T06:04:01Z | https://github.com/HumanSignal/labelImg/issues/237 | [] | kieferselmon | 1 |
napari/napari | numpy | 7,644 | [Feat] Add a new row of viewer buttons to turn on/off overlays and have right click popups for settings. | After working on #7626 to add more camera settings to the popup, I think that creating a second row of viewer push buttons would be a clean, focused way to add overlay features to the UI. This would take up minimal space in the viewer and not add additional widgets to the overall interface by opting for the popup functionality. These buttons would be left clicked to turn on/off the overlay element and could be right clicked to change the settings.
For example, you could turn on/off the scalebar and then the right click popup could be used to change font-size, font color, box on/off, scalebar fixed length, etc.
This would work for other widgets discussed in #7587 including
1. scale bar
2. canvas and layer axes
3. text overlays (and future grid text overlays)
4. colorbar visualization (which may be a place to also add temporarily changing canvas background color)
5. bounding box properties including line size, color, and point size
| open | 2025-02-24T15:53:47Z | 2025-02-25T02:09:59Z | https://github.com/napari/napari/issues/7644 | [] | TimMonko | 1 |
aiortc/aiortc | asyncio | 803 | Remote/Local transceiver order can cause unexpected full crashes due to createOffer | Odd behavior occurs when you try to use ``aiortc`` as a standalone RTC video receiver which acts as the offerer. When the order of the media information in the offer that is sent by the video receiver to the video sender does not match the order that the video receiver stores locally, when each ``iceTransport`` object is started, the ICE connection fails to complete on all video channels. This is because when an ICE connection needs to be established, STUN messages are sent from the "client" to the "server" (I use these terms loosely because the rest of RTC is peer to peer), which include self identification.
This self identification consists of:
1. Username: consists of a remote username and a local username, where the remote username is determined by the offer/answer that is received by either end
2. Password: randomly generated and should have no issues if the packets are not tampered with
ICE must ensure that the usernames match before proceeding in the connection (as a safety measure against tampered packets). Note that in ``aiortc``, the usernames are sent/received with simple ``for`` iteration:
On the "client" (rtcpeerconnection.py in ``createOffer``):
```python
for i in range(max(len(local_media), len(remote_media))):
...
description.media.append(
create_media_description_for_transceiver(
transceiver,
cname=self.__cname,
direction=transceiver.direction,
mid=mid,
)
)
...
```
On the "server" (rtcpeerconnection.py in ``__connect``):
```python
for transceiver in self.__transceivers:
...
await iceTransport.start(self.__remoteIce[transceiver])
...
```
which eventually calls:
```python
rx_username = "%s:%s" % (self.local_username, self.remote_username)
if message.attributes.get("USERNAME") != rx_username:
raise ValueError("Wrong username")
```
For some reason, the order in which the offer is parsed when sent to the "server" is the reverse of the order in which the transceivers are added to the peer connection, which means that if there is more than one transceiver, the username expected by the "server" during the first iteration no longer matches the username sent by the "client" during the first iteration. As a result, the connection is abandoned by the "server", even though another transceiver's data may match what is being sent by the "client".
Here is an example of the different usernames that may be compared


Though both usernames are valid, they are not being matched to the correct transceiver and the ICE gathering never completes.
So far, the behavior I have observed shows that the order of the transceivers in the offer is always the reverse of the order of transceivers in the ``RTCPeerConnection.__transceivers`` property, which is why I have implemented a temporary fix using the following code:
```python
peer_connection._RTCPeerConnection__transceivers = peer_connection._RTCPeerConnection__transceivers[::-1]
```
that simply reverses the order in which the transceivers are stored. This is not a robust fix, however, as I am uncertain how consistently the order is reversed, so my suggestion for a proper fix would be to sort the transceivers on both ends by their ``mid``s, which is information constant on both sides.
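That suggestion could look roughly like this (a sketch poking at aiortc's name-mangled private attribute, so clearly not public API — a real fix would live inside `rtcpeerconnection.py` itself):

```python
def sort_transceivers_by_mid(pc):
    """Order a peer connection's transceivers deterministically by `mid`,
    so both peers iterate them in the same order during ICE checks.

    `mid` values are negotiated strings ("0", "1", ...) present on both
    sides, which is what makes them a stable sort key; unnegotiated
    transceivers (mid is None) are sorted first.
    """
    transceivers = pc._RTCPeerConnection__transceivers  # private attribute
    transceivers.sort(key=lambda t: (t.mid is not None, t.mid or ""))
    return [t.mid for t in transceivers]
```

Calling this on both peers before the ICE checks start would make the iteration order independent of how the offer happened to be parsed.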
Note that Chromium's implementation seems to have no trouble with this. This leads me to believe that their implementation checks all of its transceivers' usernames when forming connections, so the state where the "server" is checking for a transceiver's username that has yet to be sent is avoided. Perhaps this would be a more robust fix for this issue.
| closed | 2022-12-31T19:51:05Z | 2023-06-07T02:41:01Z | https://github.com/aiortc/aiortc/issues/803 | [
"stale"
] | kennytheeggman | 2 |
microsoft/JARVIS | pytorch | 84 | Got error: "Unable to locate package python3.8" | When I run `docker build .` , got the below error:
```
Fetched 19.9 MB in 3s (5909 kB/s)
Reading package lists...
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package python3.8
E: Couldn't find any package by glob 'python3.8'
E: Couldn't find any package by regex 'python3.8'
The command '/bin/sh -c apt-get update && apt-get install -y python3.8 python3-pip python3-dev build-essential && rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100
```
The host is Ubuntu 16.04. | open | 2023-04-07T05:53:41Z | 2023-04-07T10:39:37Z | https://github.com/microsoft/JARVIS/issues/84 | [] | Clarence-pan | 1 |
tqdm/tqdm | pandas | 1,110 | Add support for updating the description (based on an initial callable) | - [x] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [ ] visual output bug
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [x] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [ ] I have mentioned version numbers, operating system and
environment, where applicable
---
A common pattern I find myself using is to use `set_description` to give me a bit more in progress detail. So in this simple example I want to see in real time when new even numbers have been found:
```python
from time import sleep
from tqdm import tqdm
def is_even(number):
sleep(2)
return number % 2 == 0
def find_even_numbers(numbers):
even_numbers = []
pbar = tqdm(numbers, desc=f"{len(even_numbers)} found")
for number in pbar:
if is_even(number):
even_numbers.append(number)
pbar.set_description(f"{len(even_numbers)} found")
return even_numbers
find_even_numbers([1, 2, 3, 4, 5])
```


The caveat is that I want the description set when tqdm is initialized (so the prefix doesn't magically appear the first time `is_even` is true) and when it needs updating, but I also want to keep my code DRY. The current description is pretty simple, but imagine there's more data I want to keep updated.
I could DRY it up by encapsulating the description in a function:
```python
even_numbers = []
def get_description():
return f"{len(even_numbers)} found"
```
and then using that in `pbar = tqdm(numbers, desc=get_description())` and `pbar.set_description(get_description())`, but I feel like a nicer API would be to be able to call `pbar.update_description()` on a callable description function initialized on `pbar`.
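As a concrete sketch of that API (a subclass with my own names — `CallableDescTqdm` / `update_description` are not current tqdm API):

```python
import io
from tqdm import tqdm

class CallableDescTqdm(tqdm):
    """tqdm that accepts a zero-argument callable as `desc` and
    re-evaluates it whenever `update_description()` is called."""

    def __init__(self, *args, desc=None, **kwargs):
        self._desc_fn = desc if callable(desc) else None
        initial = desc() if self._desc_fn else desc
        super().__init__(*args, desc=initial, **kwargs)

    def update_description(self, refresh=True):
        if self._desc_fn is not None:
            self.set_description(self._desc_fn(), refresh=refresh)

even_numbers = []
pbar = CallableDescTqdm(range(6),
                        desc=lambda: f"{len(even_numbers)} found",
                        file=io.StringIO())  # silence output for the demo
for number in pbar:
    if number % 2 == 0:
        even_numbers.append(number)
        pbar.update_description()
pbar.close()
print(pbar.desc)  # -> starts with "3 found"
```

The callable is stored once at construction, so the loop body never has to repeat the description logic.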
If you think this is a sensible improvement then I can put together a PR for [a relatively simple change](https://github.com/kevinmarsh/tqdm/commit/5e0c02076182b3cc308027b8d7a8023b4d4dcd70#diff-705b1f138d56c394e86e887dc6b2ac8e6d7655b74415b122089636f4b28195c2R1382) to implement this, but just wanted some feedback in case there were alternatives I hadn't considered or concerns about adding another way to update the description (in addition to `set_description` and `set_description_str`). | closed | 2021-01-05T22:24:57Z | 2021-02-08T15:52:02Z | https://github.com/tqdm/tqdm/issues/1110 | [
"question/docs ‽",
"submodule ⊂"
] | kevinmarsh | 2 |
google-research/bert | nlp | 1,249 | there are errors when running run_pretraining.py | tensorflow==1.15
I have changed `tf.optimizers.Optimizer` to `tf.keras.optimizers.Optimizer`.
Error information:
Traceback (most recent call last):
File "run_pretraining.py", line 495, in <module>
tf.app.run()
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\absl\app.py", line 312, in run
_run_main(main, args)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\absl\app.py", line 258, in _run_main
sys.exit(main(argv))
File "run_pretraining.py", line 466, in main
estimator.train(input_fn=train_input_fn, max_steps=FLAGS.num_train_steps)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 3035, in tra
in
rendezvous.raise_errors()
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\tpu\error_handling.py", line 136, in rai
se_errors
six.reraise(typ, value, traceback)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\six.py", line 719, in reraise
raise value
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 3030, in tra
in
saving_listeners=saving_listeners)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 370, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1161, in _train_mode
l
return self._train_model_default(input_fn, hooks, saving_listeners)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1195, in _train_mode
l_default
saving_listeners)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1490, in _train_with
_estimator_spec
log_step_count_steps=log_step_count_steps) as mon_sess:
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 584, in MonitoredT
rainingSession
stop_grace_period_secs=stop_grace_period_secs)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1014, in __init__
stop_grace_period_secs=stop_grace_period_secs)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 725, in __init__
self._sess = _RecoverableSession(self._coordinated_creator)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1207, in __init__
_WrappedSession.__init__(self, self._create_session())
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 1212, in _create_s
ession
return self._sess_creator.create_session()
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 878, in create_ses
sion
self.tf_sess = self._session_creator.create_session()
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\training\monitored_session.py", line 647, in create_ses
sion
init_fn=self._scaffold.init_fn)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\training\session_manager.py", line 296, in prepare_sess
ion
sess.run(init_op, feed_dict=init_feed_dict)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
run_metadata_ptr)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\client\session.py", line 1180, in _run
feed_dict_tensor, options, run_metadata)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\client\session.py", line 1359, in _do_run
run_metadata)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\client\session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.DataLossError: Checksum does not match: stored 3189191466 vs. calculated on the restored bytes 1190294916
[[node checkpoint_initializer_52 (defined at D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\framework
\ops.py:1748) ]]
Original stack trace for 'checkpoint_initializer_52':
File "run_pretraining.py", line 495, in <module>
tf.app.run()
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\absl\app.py", line 312, in run
_run_main(main, args)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\absl\app.py", line 258, in _run_main
sys.exit(main(argv))
File "run_pretraining.py", line 466, in main
estimator.train(input_fn=train_input_fn, max_steps=FLAGS.num_train_steps)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 3030, in tra
in
saving_listeners=saving_listeners)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 370, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1161, in _train_mode
l
return self._train_model_default(input_fn, hooks, saving_listeners)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1191, in _train_mode
l_default
features, labels, ModeKeys.TRAIN, self.config)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 2857, in _ca
ll_model_fn
config)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1149, in _call_model
_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 3126, in _mo
del_fn
features, labels, is_export_mode=is_export_mode)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 1663, in cal
l_without_tpu
return self._call_model_fn(features, labels, is_export_mode=is_export_mode)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 1994, in _ca
ll_model_fn
estimator_spec = self._model_fn(features=features, **kwargs)
File "run_pretraining.py", line 165, in model_fn
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\training\checkpoint_utils.py", line 291, in init_from_c
heckpoint
init_from_checkpoint_fn)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\distribute\distribute_lib.py", line 1940, in merge_call
return self._merge_call(merge_fn, args, kwargs)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\distribute\distribute_lib.py", line 1947, in _merge_cal
l
return merge_fn(self._strategy, *args, **kwargs)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\training\checkpoint_utils.py", line 286, in <lambda>
ckpt_dir_or_file, assignment_map)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\training\checkpoint_utils.py", line 334, in _init_from_
checkpoint
_set_variable_or_list_initializer(var, ckpt_file, tensor_name_in_ckpt)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\training\checkpoint_utils.py", line 458, in _set_variab
le_or_list_initializer
_set_checkpoint_initializer(variable_or_list, ckpt_file, tensor_name, "")
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\training\checkpoint_utils.py", line 412, in _set_checkp
oint_initializer
ckpt_file, [tensor_name], [slice_spec], [base_type], name=name)[0]
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\ops\gen_io_ops.py", line 1696, in restore_v2
name=name)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 794, in _apply_op_he
lper
op_def=op_def)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3357, in create_op
attrs, op_def, compute_device)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3426, in _create_op_internal
op_def=op_def)
File "D:\Program Files(x86)\ANACONDA\envs\tensorflow1.15\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1748, in __init__
self._traceback = tf_stack.extract_stack()
| closed | 2021-08-04T03:49:43Z | 2021-08-14T11:34:55Z | https://github.com/google-research/bert/issues/1249 | [] | blueseven77 | 1 |
tensorflow/tensor2tensor | deep-learning | 1,706 | Unable to run t2t-trainer summarization problem | ### Description
I am trying to run `t2t-trainer --generate_data --data_dir=~/t2t_data --output_dir=~/t2t_train --problem=summarize_cnn_dailymail32k --model=transformer --hparams_set=transformer_prepend --train_steps=1000 --eval_steps=100`
but it gives me the error below:
```
WARNING:tensorflow:From /Users/vijayshanker/coder/ML/mlenv/lib/python2.7/site-packages/tensor2tensor/utils/expert_utils.py:68: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
Traceback (most recent call last):
  File "/Users/vijayshanker/coder/ML/mlenv/bin/t2t-trainer", line 23, in <module>
    from tensor2tensor.bin import t2t_trainer
  File "/Users/vijayshanker/coder/ML/mlenv/lib/python2.7/site-packages/tensor2tensor/bin/t2t_trainer.py", line 24, in <module>
    from tensor2tensor import models # pylint: disable=unused-import
  File "/Users/vijayshanker/coder/ML/mlenv/lib/python2.7/site-packages/tensor2tensor/models/__init__.py", line 26, in <module>
    from tensor2tensor.models import basic
  File "/Users/vijayshanker/coder/ML/mlenv/lib/python2.7/site-packages/tensor2tensor/models/basic.py", line 25, in <module>
    from tensor2tensor.utils import t2t_model
  File "/Users/vijayshanker/coder/ML/mlenv/lib/python2.7/site-packages/tensor2tensor/utils/t2t_model.py", line 30, in <module>
    from tensor2tensor.data_generators import multi_problem
  File "/Users/vijayshanker/coder/ML/mlenv/lib/python2.7/site-packages/tensor2tensor/data_generators/multi_problem.py", line 22, in <module>
    from tensor2tensor.data_generators import problem
  File "/Users/vijayshanker/coder/ML/mlenv/lib/python2.7/site-packages/tensor2tensor/data_generators/problem.py", line 27, in <module>
    from tensor2tensor.data_generators import generator_utils
  File "/Users/vijayshanker/coder/ML/mlenv/lib/python2.7/site-packages/tensor2tensor/data_generators/generator_utils.py", line 1021, in <module>
    @tf.autograph.to_graph
  File "/Users/vijayshanker/coder/ML/mlenv/lib/python2.7/site-packages/tensorflow/python/autograph/impl/api.py", line 600, in to_graph_v1
    experimental_optional_features=experimental_optional_features)
  File "/Users/vijayshanker/coder/ML/mlenv/lib/python2.7/site-packages/tensorflow/python/autograph/impl/api.py", line 528, in to_graph
    entity, e.__class__.__name__, str(e)))
tensorflow.python.autograph.impl.api.ConversionError: converting <function _scan_step_fn at 0x13be2fcf8>: AttributeError: 'module' object has no attribute 'Num'
```
### Environment information
```
OS: macOs mojave version 10.14.5
$ pip freeze | grep tensor
mesh-tensorflow==0.0.5
tensor2tensor==1.14.0
tensorboard==1.14.0
tensorflow==1.14.0
tensorflow-datasets==1.2.0
tensorflow-estimator==1.14.0
tensorflow-gan==1.0.0.dev0
tensorflow-metadata==0.14.0
tensorflow-probability==0.7.0
$ python -V
Python 2.7.10
```
| open | 2019-09-22T19:16:48Z | 2019-09-22T19:16:48Z | https://github.com/tensorflow/tensor2tensor/issues/1706 | [] | nineleaps-vijay | 0 |
twopirllc/pandas-ta | pandas | 815 | VWAP UserWarning Received - Add ability to silence warning | **Which version are you running? The latest version is on GitHub. Pip is for major releases.**
```python
import pandas_ta as ta
print(ta.version)
```
I am on `0.3.14b0`
**Do you have _TA Lib_ also installed in your environment?**
```sh
$ pip list
```
Yes, I have
`TA-Lib 0.4.32`
**Upgrade.**
```sh
$ pip install -U git+https://github.com/twopirllc/pandas-ta
```
**Is your feature request related to a problem? Please describe.**
I keep getting warning messages printed whenever I perform the following operation; this specifically happens with VWAP only:
```python
basic_indicators = [
{"kind":"ema", "params": (10,)},
{"kind":"ema", "params": (21,)},
{"kind":"ema", "params": (50,)},
{"kind":"ema", "params": (200,)},
{"kind": "sma", "params": (10,)},
{"kind": "sma", "params": (21,)},
{"kind": "sma", "params": (50,)},
{"kind": "sma", "params": (200,)},
{"kind":"atr"},{"kind":"atr","percent":True},
{"kind":"vwap","anchor":"D"},
]
basic_strategy = ta.Strategy(name="EMA10, EMA21, EMA50, EMA200, SMA10, SMA21, SMA50, SMA200, ATR, VWAP",
ta=basic_indicators)
def add_higher_highs_lows(df):
# Ensure the DataFrame is sorted by date if you have a 'date' column
if 'date' in df.columns:
df.sort_values('date', inplace=True)
# Calculate Higher High, current high is greater than the previous high
df['higher high'] = df['High'] > df['High'].shift(1)
# Calculate Higher Low, current low is greater than the previous low
df['higher low'] = df['Low'] > df['Low'].shift(1)
df['has higher high and low'] = df['higher high'] & df['higher low']
df['has lower high and low'] = (~df['higher high']) & (~df['higher low'])
return df
def apply_ta_strategy(df_yfinance,ta_strategy:ta.Strategy):
df_tmp = df_yfinance.copy()
if df_tmp.index.name is not None:
df_tmp.reset_index(inplace=True)
df_tmp.set_index(['Datetime'],inplace=True)
df_tmp.ta.strategy(ta_strategy)
df_tmp.reset_index(inplace=True)
return df_tmp
def apply_basic_indicators(df_yfinance):
df_yfinance = apply_ta_strategy(df_yfinance=df_yfinance, ta_strategy=basic_strategy)
add_higher_highs_lows(df=df_yfinance)
return df_yfinance
def cart_prod_apply(df_cart_prod,fun=apply_basic_indicators):
assert df_cart_prod.index.name is None
g = df_cart_prod.groupby(['symbol','period','interval'],observed=True)
df_final = g.apply(fun,include_groups=False).reset_index(-1,drop=True).reset_index()
return df_final
df_cart_prod_tmp = df_cart_prod[df_cart_prod['period'] == 'max'].copy()
df_basic = cart_prod_apply(df_cart_prod=df_cart_prod_tmp, fun=apply_basic_indicators)
```
I keep getting the following warning for the VWAP operation
```sh
/Users/jd/.pyenv/versions/3.10.13/lib/python3.10/multiprocessing/pool.py:48: UserWarning: Converting to PeriodArray/Index representation will drop timezone information.
return list(map(*args))
```
**Describe the solution you'd like**
Can we have a parameter passed to the VWAP operation so that all warnings related to that indicator are silenced?
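Until such a parameter exists, the warning can be silenced at the call site. A minimal stdlib sketch of the suppression pattern (here `noisy_vwap` is a hypothetical stand-in for the real `df.ta.vwap(...)` call that emits the `UserWarning`):

```python
import warnings

def noisy_vwap():
    # Stand-in for df.ta.vwap(anchor="D"): emits the same UserWarning text.
    warnings.warn(
        "Converting to PeriodArray/Index representation will drop timezone information.",
        UserWarning,
    )
    return "vwap-result"

with warnings.catch_warnings():
    warnings.simplefilter("ignore", UserWarning)
    result = noisy_vwap()  # runs silently

print(result)  # → vwap-result
```

The same `catch_warnings` block can wrap the strategy call so only that indicator's warnings are filtered, without muting warnings globally.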
**Describe alternatives you've considered**
**Additional context**
Thanks for using Pandas TA!
| closed | 2024-07-22T16:45:15Z | 2024-07-28T18:49:28Z | https://github.com/twopirllc/pandas-ta/issues/815 | [
"enhancement"
] | joeld1 | 1 |
mljar/mercury | jupyter | 11 | Add text input | Add text input. Please remember to sanitize the input. | closed | 2022-01-17T14:10:12Z | 2022-01-26T17:45:07Z | https://github.com/mljar/mercury/issues/11 | [
"enhancement",
"help wanted"
] | pplonski | 1 |
jupyter-book/jupyter-book | jupyter | 1,826 | LaTeX not rendered in DataFrame.Styler with HTML output | ### Describe the bug
I have LaTeX in my formatted DataFrame. When I run the code cell in `jupyter-lab`, the LaTeX is rendered correctly:

However, when I build the book from the console, the latex does not render:
```console
$ jupyter-book build test-issue
```
the result is shown in the browser as:

This may be related to [#1501](https://github.com/executablebooks/jupyter-book/issues/1501#issue-1024095182).
### Reproduce the bug
Here is the code cell of my MyST Markdown notebook:
```python
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(3, 3), columns=list("ABC"))
df.style.format('$\\beta=$ {:.2%}').set_caption('This is the result with $\\alpha=5$')
```
### List your environment
Jupyter Book : 0.13.0
External ToC : 0.2.4
MyST-Parser : 0.15.2
MyST-NB : 0.13.2
Sphinx Book Theme : 0.3.2
Jupyter-Cache : 0.4.3
NbClient : 0.5.13
| open | 2022-08-31T14:10:24Z | 2023-09-18T16:03:00Z | https://github.com/jupyter-book/jupyter-book/issues/1826 | [
"bug"
] | quantitative-technologies | 3 |
widgetti/solara | flask | 818 | Highlight dates in solara.lab.InputDate | I would like to highlight certain dates in the InputDate widget or limit the selection to specific dates, similar to the functionality provided in the Panel DatePicker (https://panel.holoviz.org/reference/widgets/DatePicker.html). | open | 2024-10-15T11:03:44Z | 2024-10-17T11:58:11Z | https://github.com/widgetti/solara/issues/818 | [] | mikelzabala | 2 |
redis/redis-om-python | pydantic | 258 | Needs to support `FT.AGGREGATE` / RediSearch aggregations. | This client needs to support RediSearch aggregations / `FT.AGGREGATE`. | open | 2022-05-20T16:39:22Z | 2023-05-16T06:37:18Z | https://github.com/redis/redis-om-python/issues/258 | [
"enhancement"
] | simonprickett | 7 |
sinaptik-ai/pandas-ai | data-science | 823 | [ERROR] Pipeline failed on step 4: All objects passed were None | ### System Info
Windows, pandasai 1.3.9, python 3.11.5
### 🐛 Describe the bug
Greetings, first of all, kudos to everyone involved in the development of this library! I have previously been using langchain with the fine-tuned gpt 3-5 turbo model to answer questions about my company's structured data, but switched to pandasai as soon as I heard about it since the potential of adding visualizations is tremendous :) Anywho, I am facing an issue right now and cannot generate a plot using the code below. Tried it with pandasai 1.3.6, 1.3.8 and 1.3.9 with largely the same effect
```
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm import OpenAI
iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')
llm2 = OpenAI(temperature=0, model="gpt-3.5-turbo-16k", api_token="sk-XXX")
pandas_ai2 = SmartDataframe(iris, config={"llm": llm2, "verbose": True})
response = pandas_ai2.chat("Please plot the histogram of average sepal_width for each specie, using different colors for each bar")
```
The log is as follows:
```
"2023-12-15 13:36:00 [INFO] Question: Please plot the histogram of average sepal_width for each specie, using different colors for each bar
2023-12-15 13:36:00 [INFO] Running PandasAI with openai LLM...
2023-12-15 13:36:00 [INFO] Prompt ID: ed361326-14bb-4dd3-9856-60fa2bcc9981
2023-12-15 13:36:00 [INFO] Executing Step 0: CacheLookup
2023-12-15 13:36:00 [INFO] Using cached response
2023-12-15 13:36:00 [INFO] Executing Step 1: PromptGeneration
2023-12-15 13:36:00 [INFO] Executing Step 2: CodeGenerator
2023-12-15 13:36:00 [INFO] Executing Step 3: CachePopulation
2023-12-15 13:36:00 [INFO] Executing Step 4: CodeExecution
2023-12-15 13:36:00 [INFO] Saving charts to C:\Users\temp_chart.png
2023-12-15 13:36:00 [INFO]
Code running:
df = pd.concat(dfs)
df.groupby('species')['sepal_width'].mean().plot(kind='bar', color=['skyblue', 'salmon', 'lightgreen'])
plt.title('Average Sepal Width for Each Species')
plt.xlabel('Species')
plt.ylabel('Average Sepal Width')
plt.show()
result = {'type': 'plot', 'value': 'C:\Users/temp_chart.png'}
2023-12-15 13:36:00 [WARNING] Failed to execute code with a correction framework [retry number: 1]
2023-12-15 13:36:00 [ERROR] Failed with error: Traceback (most recent call last):
  File "C:\Apps\anaconda\Lib\site-packages\pandasai\pipelines\smart_datalake_chat\code_execution.py", line 46, in execute
    result = pipeline_context.query_exec_tracker.execute_func(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Apps\anaconda\Lib\site-packages\pandasai\helpers\query_exec_tracker.py", line 128, in execute_func
    result = function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Apps\anaconda\Lib\site-packages\pandasai\helpers\code_manager.py", line 207, in execute_code
    exec(code_to_run, environment)
  File "<string>", line 1, in <module>
  File "C:\Apps\anaconda\Lib\site-packages\pandas\util\_decorators.py", line 331, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Apps\anaconda\Lib\site-packages\pandas\core\reshape\concat.py", line 368, in concat
    op = _Concatenator(
         ^^^^^^^^^^^^^^
  File "C:\Apps\anaconda\Lib\site-packages\pandas\core\reshape\concat.py", line 448, in __init__
    raise ValueError("All objects passed were None")
ValueError: All objects passed were None
. Retrying"
```
| closed | 2023-12-15T12:50:08Z | 2023-12-15T14:21:20Z | https://github.com/sinaptik-ai/pandas-ai/issues/823 | [
"duplicate"
] | ljdmitry | 5 |
postmanlabs/httpbin | api | 614 | 413 Request Body Too Large | In the past few days we've been seeing a "413 Request Body Too Large" response periodically when using `https://httpbin.org/post`. The request body size is normally 1100-1400 bytes. The same request often works on retry.
Has anyone else run into this issue?
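Since the same request often works on retry, a simple retry-with-backoff wrapper can serve as a stopgap. A stdlib sketch (hypothetical; `send` stands in for the real POST call and returns an HTTP status code):

```python
import time

def post_with_retry(send, attempts=3, backoff=1.0):
    # Retry only on the intermittent 413; any other status is returned as-is.
    for i in range(attempts):
        status = send()
        if status != 413:
            return status
        time.sleep(backoff * (2 ** i))  # exponential backoff between attempts
    return status

responses = iter([413, 200])
print(post_with_retry(lambda: next(responses), backoff=0.0))  # → 200
```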
UPDATE: We switched to using a self hosted version and the problem disappeared. So likely an issue with the infrastructure for httpbin.org rather than the project itself. I'm guessing that was pretty obvious, just wanted to confirm it. | open | 2020-06-16T16:56:31Z | 2020-06-17T16:47:07Z | https://github.com/postmanlabs/httpbin/issues/614 | [] | workmanw | 1 |
great-expectations/great_expectations | data-science | 10,775 | Data Docs host on S3 cannot redirect to other pages due to access denied | **Describe the bug**
I am trying to host and share Data Docs on AWS S3. After `checkpoint.run()`, an index.html file on S3 was generated as expected. I followed the instructions to configure the bucket policy as guided in [Host and share Data Docs](https://docs.greatexpectations.io/docs/0.18/oss/guides/setup/configuring_data_docs/host_and_share_data_docs/), but when I opened the index.html from my bucket and clicked on any run record, it could not redirect to the detailed validation results page; it gave an Access Denied error. This problem is similar to an old issue [Data Docs > S3 > Links to runs are access Denied](https://github.com/great-expectations/great_expectations/issues/1235) but in a different version of GX.
**To Reproduce**
The data docs site configurations are as follows
```yaml
data_docs_sites:
S3_site:
class_name: SiteBuilder
store_backend:
class_name: TupleS3StoreBackend
bucket: bucket-name
prefix: data-docs/
site_index_builder:
class_name: DefaultSiteIndexBuilder
```
By clicking index.html in S3 bucket, the Data Docs link will look like:
`https://bucket-name.s3.ap-southeast-1.amazonaws.com/data-docs/index.html?X-Amz-Algorithm=xxx&...`
However, when I clicked a run record, the presigned query parameters generated by AWS were not carried over to the linked page, resulting in an Access Denied error:
`https://bucket-name.s3.ap-southeast-1.amazonaws.com/data-docs/expectations/my_suite.html`
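For those unsigned relative links to resolve, the objects under the prefix have to be readable without the `X-Amz-*` query string. A minimal bucket-policy sketch (the bucket name below is this report's placeholder, not a real bucket) granting anonymous `s3:GetObject` on the Data Docs prefix:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicReadOfDataDocs",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/data-docs/*"
    }
  ]
}
```

Alternatives to public reads (CloudFront with an origin access identity, or serving presigned URLs for every page) would also work, but plain presigned access to index.html alone cannot cover the pages it links to.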
**Expected behavior**
Data Docs hosted on AWS S3 can be clicked to redirect to each page inside.
**Environment (please complete the following information):**
- Operating System: MacOS
- Great Expectations Version: 1.1.0
- Data Source: Spark dataframe
- Cloud environment: AWS
| closed | 2024-12-13T06:27:33Z | 2025-01-23T14:15:21Z | https://github.com/great-expectations/great_expectations/issues/10775 | [
"request-for-help"
] | marxaem | 2 |
axnsan12/drf-yasg | rest-api | 906 | Add number format in DecimalField | # Feature Request
## Description
Django models.DecimalField produces `type number, format:decimal` but the format `decimal` is not an [officially supported OpenAPI type](https://swagger.io/specification/v2/#data-type-format). Available types are `float` or `double`
```code
rate = models.DecimalField(max_digits=6, decimal_places=4, blank=True, null=True)
```
```
rate:
title: Rate
type: number
format: decimal
default: 0.0
minimum: 0.0
```
## Describe the solution you'd like
```code
rate = models.DecimalField(max_digits=6, decimal_places=4, blank=True, null=True)
rate:
title: Rate
type: number
format: decimal
default: 0.0
minimum: 0.0
```
```code
rate = models.DecimalField(max_digits=6, decimal_places=4, blank=True, null=True, number_format='float')
rate:
title: Rate
type: number
format: float
default: 0.0
minimum: 0.0
```
```code
rate = models.DecimalField(max_digits=6, decimal_places=4, blank=True, null=True, number_format='double')
rate:
title: Rate
type: number
format: double
default: 0.0
minimum: 0.0
```
| open | 2024-11-25T18:47:20Z | 2025-03-09T10:09:42Z | https://github.com/axnsan12/drf-yasg/issues/906 | [
"help wanted",
"1.22.x",
"enhancement"
] | zuerst | 0 |
microsoft/nni | pytorch | 5,726 | Mismatched hyperparameters between web server display and their actual values | **Describe the issue**:
**Environment**:
- NNI version: 3.0
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu 20.04.4 LTS (GNU/Linux 5.13.0-30-generic x86_64)
- Server OS (for remote mode only):
- Python version: 3.11
- PyTorch/TensorFlow version: 2.1.2
- Is conda/virtualenv/venv used?: Conda
- Is running in Docker?: No
**Configuration**:
- Experiment config (remember to remove secrets!):
```yaml
experimentName: MRNN hyper-param searching
authorName: WenjieDu
trialConcurrency: 1
trainingServicePlatform: local
searchSpacePath: MRNN_ETTm1_tuning_space.json
multiThread: true
useAnnotation: false
tuner:
builtinTunerName: Random
trial:
command: enable_tuning=1 pypots-cli tuning --model pypots.imputation.MRNN --train_set ../../data/ettm1/train.h5 --val_set ../../data/ettm1/val.h5
codeDir: .
gpuNum: 1
localConfig:
useActiveGpu: true
maxTrialNumPerGpu: 20
gpuIndices: 3
```
- Search space:
```json
{
"n_steps": {"_type":"choice","_value":[60]},
"n_features": {"_type":"choice","_value":[7]},
"patience": {"_type":"choice","_value":[10]},
"epochs": {"_type":"choice","_value":[200]},
"rnn_hidden_size": {"_type":"choice","_value":[16,32,64,128,256,512]},
"lr":{"_type":"loguniform","_value":[0.0001,0.01]}
}
```
**Log message**:
- nnimanager.log:
```
[2023-12-27 16:16:42] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 7,
hyperParameters: {
value: '{"parameter_id": 7, "parameter_source": "algorithm", "parameters": {"n_steps": 60, "n_features": 7, "patience": 10, "epochs": 200, "rnn_hidden_size": 32, "lr": 0.0008698020401037771}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2023-12-27 16:16:42] INFO (LocalV3.local) Created trial XsB6F
```
- dispatcher.log:
```
[2023-12-27 16:15:06] INFO (numexpr.utils/MainThread) Note: detected 128 virtual cores but NumExpr set to maximum of 64, check "NUMEXPR_MAX_THREADS" environment variable.
[2023-12-27 16:15:06] INFO (numexpr.utils/MainThread) Note: NumExpr detected 128 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
[2023-12-27 16:15:06] INFO (numexpr.utils/MainThread) NumExpr defaulting to 8 threads.
[2023-12-27 16:15:06] INFO (nni.tuner.random/MainThread) Using random seed 220808582
[2023-12-27 16:15:06] INFO (nni.runtime.msg_dispatcher_base/MainThread) Dispatcher started
[2023-12-27 16:15:06] INFO (nni.runtime.msg_dispatcher/Thread-1 (command_queue_worker)) Initial search space: {'n_steps': {'_type': 'choice', '_value': [60]}, 'n_features': {'_type': 'choice', '_value': [7]}, 'patience': {'_type': 'choice', '_value': [10]}, 'epochs': {'_type': 'choice', '_value': [200]}, 'rnn_hidden_size': {'_type': 'choice', '_value': [16, 32, 64, 128, 256, 512]}, 'lr': {'_type': 'loguniform', '_value': [0.0001, 0.01]}}
```
- nnictl stdout and stderr:
```
2023-12-27 16:16:44 [INFO]: Have set the random seed as 2204 for numpy and pytorch.
2023-12-27 16:16:44 [INFO]: The tunner assigns a new group of params: {'n_steps': 60, 'n_features': 7, 'patience': 10, 'epochs': 200, 'rnn_hidden_size': 256, 'lr': 0.0054442307300676335}
2023-12-27 16:16:45 [INFO]: No given device, using default device: cuda
2023-12-27 16:16:45 [WARNING]: ‼️ saving_path not given. Model files and tensorboard file will not be saved.
2023-12-27 16:16:48 [INFO]: MRNN initialized with the given hyperparameters, the number of trainable parameters: 401,619
2023-12-27 16:16:48 [INFO]: Option lazy_load is set as False, hence loading all data from file...
2023-12-27 16:16:52 [INFO]: Epoch 001 - training loss: 1.3847, validating loss: 1.3214
```
Note that in nnimanager.log, the `lr` of trial XsB6F is `0.0008698020401037771`, and this is also the value displayed on the local web page; but in the nnictl stdout log, the actual `lr` received by the model is `0.0054442307300676335`, so they are mismatched. This is not an isolated case: I notice that the hyperparameters of some trials are mismatched between what nnimanager reports and their actual values, while others match and are fine. | open | 2023-12-27T09:33:25Z | 2024-07-16T03:02:25Z | https://github.com/microsoft/nni/issues/5726 | [] | WenjieDu | 4 |
babysor/MockingBird | pytorch | 641 | Error: RuntimeError: Numpy is not available | When inserting audio, the error `RuntimeError: Numpy is not available` is raised.
| open | 2022-07-11T04:39:07Z | 2022-07-16T11:05:16Z | https://github.com/babysor/MockingBird/issues/641 | [] | showtime12345 | 1 |
dfki-ric/pytransform3d | matplotlib | 265 | Explain singularities better | good source: http://motion.pratt.duke.edu/RoboticSystems/3DRotations.html | closed | 2023-08-04T14:18:26Z | 2023-08-07T15:50:54Z | https://github.com/dfki-ric/pytransform3d/issues/265 | [] | AlexanderFabisch | 0 |
modin-project/modin | data-science | 6,747 | Preserve columns for merge on index in simple cases | At the moment, we're not preserving columns/dtypes cache if merging on an index level as it appears to be quite complex to mimic pandas with a limited amount of metadata available:
https://github.com/modin-project/modin/blob/bee2c28a3cededa4c5c4b61e9e59c77401ae39a8/modin/core/dataframe/base/dataframe/utils.py#L99-L104
However, there are simple cases (and quite popular cases) when merging on an index that could be supported by simply adding a few lines of code. For example, when merging on a single index level.
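For illustration, the single-index-level case reduces to plain column concatenation with pandas' collision suffixes, so the columns cache could be built without materializing the merge. A hypothetical sketch (the function name and suffix handling are illustrative, not Modin's actual code):

```python
def merged_columns(left_cols, right_cols, suffixes=("_x", "_y")):
    # Mirror pandas' rule for a merge on the index: result columns are the
    # left frame's columns followed by the right frame's, with suffixes
    # applied only where a name appears on both sides.
    common = set(left_cols) & set(right_cols)
    out = [c + suffixes[0] if c in common else c for c in left_cols]
    out += [c + suffixes[1] if c in common else c for c in right_cols]
    return out

print(merged_columns(["a", "b"], ["b", "c"]))  # → ['a', 'b_x', 'b_y', 'c']
```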
| closed | 2023-11-17T14:45:46Z | 2023-11-21T09:38:59Z | https://github.com/modin-project/modin/issues/6747 | [
"Performance 🚀",
"P2"
] | dchigarev | 0 |
marcomusy/vedo | numpy | 1,076 | Loses transformation information in assembly | When indexing the assembly, it loses transformations. Is there a way to retain this information when collecting items from the assembly?
```
assembly2 = vedo.Assembly({"box": vedo.Box()})
assembly2.bounds() # (-0.5, 0.5, -1.0, 1.0, -1.5, 1.5)
a = assembly2.shift(dx=5)
a.bounds() # (4.5, 5.5, -1.0, 1.0, -1.5, 1.5)
a["box"].bounds() # array([-0.5, 0.5, -1. , 1. , -1.5, 1.5], dtype=float32)
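# Illustration (not part of the original report): the child keeps its local
# bounds, so composing the assembly-level shift back in by hand recovers the
# world-frame values that the assembly itself reports.
local = (-0.5, 0.5, -1.0, 1.0, -1.5, 1.5)                 # a["box"].bounds()
world = tuple(v + 5 if i < 2 else v for i, v in enumerate(local))
world  # (4.5, 5.5, -1.0, 1.0, -1.5, 1.5), matching a.bounds()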
```
| closed | 2024-03-18T05:18:03Z | 2024-03-25T23:34:46Z | https://github.com/marcomusy/vedo/issues/1076 | [] | JeffreyWardman | 4 |
Miserlou/Zappa | django | 2,177 | Zappa has support for Custom domain names? | I am trying to configure my custom domain names and API mappings using Zappa.
Is there some way to achieve that? | open | 2020-10-13T22:33:21Z | 2020-10-16T20:45:48Z | https://github.com/Miserlou/Zappa/issues/2177 | [] | rafagan | 1 |
sloria/TextBlob | nlp | 141 | HTTP Error 503: Service Unavailable | I have been using this service (TextBlob version 0.11.1) for about a year, but I have been getting a 503 error code in responses for the last few days. Please tell me whether the service is down, my IP address is blocked, or the free version of this service is no longer available. Thanks | closed | 2016-11-08T12:58:43Z | 2016-11-08T13:24:59Z | https://github.com/sloria/TextBlob/issues/141 | [] | fakhir-hanif | 1 |
recommenders-team/recommenders | data-science | 2,118 | [BUG] Can't download xdeepfmresources.zip | Unable to log in to https://recodatasets.z20.web.core.windows.net/deeprec/ and download content.

| closed | 2024-06-26T02:38:59Z | 2024-06-26T10:43:22Z | https://github.com/recommenders-team/recommenders/issues/2118 | [
"bug"
] | RuichongMa424 | 1 |