| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
redis/redis-om-python | pydantic | 251 | How to replace default primary key? | I have two use cases in which I would like to replace the default pk field with another field:
1. I would like to define a string field named "uuid" as my primary key field, instead of the default "pk".
I do want to keep the ULID key generator, but set it on a field named "uuid" instead of "pk".
2. Same as #1, just skipping the ULID key generator, allowing me to set the primary key manually.
Is it possible to achieve this?
I would like to achieve it for both JSON and Hash models.
The documentation on that aspect is not clear enough.
Thanks in advance for anyone who will help. | closed | 2022-05-14T14:37:25Z | 2023-07-23T13:35:34Z | https://github.com/redis/redis-om-python/issues/251 | [] | oren-twigo | 3 |
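In plain-Python terms (this is not the Redis OM API — the class and field semantics here are a hypothetical stand-in), use case 2 boils down to a key field with a default factory that can be overridden:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Customer:
    # use case 2: the key can be set manually, and is generated only when omitted
    uuid: str = field(default_factory=lambda: uuid.uuid4().hex)

auto = Customer()                 # key generated automatically
manual = Customer(uuid="my-key")  # key set manually
```

Use case 1 would additionally swap the default factory for a ULID generator.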
allure-framework/allure-python | pytest | 444 | items[:] assigned values twice in pytest_collection_modifyitems | https://github.com/allure-framework/allure-python/blob/6443d52affc92944aa156345e35f1f9c6d2dfef3/allure-pytest/src/plugin.py#L165
`items[:] = select_by_testcase(items) ` < assigned
`items[:] = select_by_labels(items, config)` < assigned again immediately
Intentional or bug? | closed | 2019-11-29T18:58:29Z | 2023-01-23T07:22:46Z | https://github.com/allure-framework/allure-python/issues/444 | [
"theme:pytest"
] | huornlmj | 2 |
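For what it's worth, chained in-place slice assignment like this is presumably two successive filters rather than a redundant overwrite: the second call receives the already-filtered list, and the same list object is kept (which matters because pytest holds a reference to `items`). A minimal stand-in sketch:

```python
def select_even(items):
    return [i for i in items if i % 2 == 0]

def select_small(items):
    return [i for i in items if i < 10]

items = [1, 2, 4, 9, 12]
ref = items                     # an outside holder of the same list object
items[:] = select_even(items)   # items is now [2, 4, 12]
items[:] = select_small(items)  # items is now [2, 4] -- the filters compose
```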
plotly/dash-core-components | dash | 379 | clickData isn't registered when the graph is embedded in a tab? | See https://community.plot.ly/t/creating-multi-tab-app-with-clickdata-functionality/14044 | open | 2018-11-09T14:08:28Z | 2018-11-09T14:08:34Z | https://github.com/plotly/dash-core-components/issues/379 | [
"dash-type-bug"
] | chriddyp | 0 |
huggingface/diffusers | deep-learning | 10,241 | Sana issues | ### Describe the bug
Following the `SanaPAGPipeline` implementation in #9982,
I cannot get decent output in more than 1% of runs at best.
- most runs result in what appears to be an image with a lot of residual noise, which the dc-ae decoder then turns into a sketch-like image with many circular artifacts (see first example image below)
- some runs result in black-and-white output. Adding "rich colors" to the prompt makes foreground objects colored, but the background remains black-and-white (see second example image below)
- rarely (very rarely) I get decent output
What did I try?
- loading both fp32 and fp16 variants of the model
- loading from separate bf16 repo
- executing in fp16, fp32 and bf16
- enabling/disabling chi and trying to change steps, pag scale, etc.
### Reproduction
```py
import torch
import diffusers
# repo_id = 'Efficient-Large-Model/Sana_1600M_1024px_diffusers'
repo_id = 'Efficient-Large-Model/Sana_1600M_1024px_BF16_diffusers'
cache_dir = '/mnt/models/Diffusers'
prompt = 'photo of a cute red robot on the surface of moon with planet earth in the background'
negative = ''
dtype = torch.bfloat16
device = torch.device('cuda')
kwargs = {
# 'variant': 'fp16',
'torch_dtype': dtype,
}
pipe = diffusers.SanaPAGPipeline.from_pretrained(repo_id, cache_dir=cache_dir, **kwargs).to(device, dtype)
result = pipe(
prompt = prompt,
negative_prompt = negative,
# num_inference_steps = 20, # default
# guidance_scale = 4.5, # default
# pag_scale = 3.0, # default
# pag_adaptive_scale = 0.0, # default
# height = 1024, # default
# width = 1024, # default
# clean_caption = True, # default
# use_resolution_binning = True, # default
# complex_human_instruction = '...', # default
)
image = result.images[0]
image.save('/tmp/sana.png')
```
attached are both typical examples of bad output:


### Logs
there are several additional issues:
1. error when using `UniPC`, `DEIS` or `SA` schedulers
```log
File "/home/vlado/dev/sdnext/venv/lib/python3.12/site-packages/diffusers/schedulers/scheduling_unipc_multistep.py", line 396, in set_timesteps
    self.sigmas = torch.from_numpy(sigmas)
ValueError: At least one stride in the given numpy array is negative, and tensors with negative strides are not currently supported. (You can probably work around this by making a copy of your array with array.copy().)
```
*note*: I'm confirming that the flow-matching args are set correctly
*note*: `DPMSolverMultistepScheduler` scheduler works fine, either when left as default or when manually instantiated
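The `ValueError` above is the generic negative-stride issue the message itself describes; a minimal numpy-only sketch (torch omitted) of why `.copy()` is the usual workaround:

```python
import numpy as np

reversed_view = np.linspace(0.0, 1.0, 5)[::-1]  # a reversed slice is a view...
assert reversed_view.strides[0] < 0             # ...with a negative stride

fixed = reversed_view.copy()                    # contiguous copy of the data
assert fixed.strides[0] > 0                     # safe to hand to torch.from_numpy()
```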
2. error when using non-zero `pag_adaptive_scale`
```log
File "/home/vlado/dev/sdnext/venv/lib/python3.12/site-packages/diffusers/pipelines/pag/pag_utils.py", line 95, in _get_pag_scale
    signal_scale = self.pag_scale - self.pag_adaptive_scale * (1000 - t)
    if signal_scale < 0:
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
```
### System Info
diffusers==0.32.dev commit=5fb3a985173efaae7ff381b9040c386751d643da
### Who can help?
@yiyixuxu @sayakpaul @DN6 @asomoza
@lawrence-cj and @a-r-r-o-w as primary contributors to pr
@hlky for scheduler issues | closed | 2024-12-16T15:51:49Z | 2024-12-17T13:53:06Z | https://github.com/huggingface/diffusers/issues/10241 | [
"bug"
] | vladmandic | 8 |
lukasmasuch/streamlit-pydantic | pydantic | 49 | Nested `pydantic_form` not working, just clears page | <!--
Thanks for reporting a bug ๐ โค๏ธ
Before opening a new issue, please make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead. Also, be sure to check our documentation first.
-->
**Describe the bug:**
<!-- Describe your issue, but please be descriptive! Thanks again ๐ โค๏ธ -->
When I try to have nested `pydantic_form`s, the page just gets cleared out.
**Expected behaviour:**
<!-- A clear and concise description of what you expected to happen. -->
I expect the below code snippet to not wipe the screen after the second submit button is pressed.
**Steps to reproduce the issue:**
<!-- include screenshots, logs, code or other info to help explain your problem.
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
-->
```python
import streamlit as st
import streamlit_pydantic as sp
from pydantic import BaseModel, Field
class MinCount(BaseModel):
min_count: int = Field(default=4, gt=0)
st.title("Test")
input_1 = sp.pydantic_form(key="input-1", model=MinCount)
if input_1:
input_2 = sp.pydantic_form(key="input-2", model=MinCount)
if input_2:
st.write(f"{input_1.min_count=}, {input_2.min_count=}.")
```
**Technical details:**
- Host Machine OS (Windows/Linux/Mac): macOS Ventura version 14.5.2, arm64
- Browser (Chrome/Firefox/Safari): Chrome
Here are my requirements with Python 3.11:
```txt
pydantic 2.5.2
pydantic_core 2.14.5
pydantic-settings 2.1.0
streamlit 1.28.2
streamlit-pydantic 0.7.1
```
I am installing https://github.com/LukasMasuch/streamlit-pydantic/tree/390a45aba7bf8caccc297c335715cc141db490af directly from GitHub
**Possible Fix:**
<!--- Not obligatory, but suggest a fix or reason for the bug -->
**Additional context:**
<!-- Add any other context about the problem here. -->
| open | 2023-11-27T18:56:10Z | 2023-11-27T18:56:58Z | https://github.com/lukasmasuch/streamlit-pydantic/issues/49 | [
"type:bug"
] | jamesbraza | 0 |
3b1b/manim | python | 1,126 | How is the surface created for the Scene class from the Camera class without calling get_cairo_context? Please help me, I am a beginner | ### If this is a support request:
**Please attempt to solve the problem on your own before opening an issue.**
Between old issues, StackOverflow, and Google, you should be able to find
solutions to most of the common problems.
Include at least:
1. Steps to reproduce the issue (e.g. the command you ran)
2. The unexpected behavior that occurred (e.g. error messages or screenshots)
3. The environment (e.g. operating system and version of manim)
### If this is a feature request:
Include the motivation for making this change.
| open | 2020-06-05T05:59:09Z | 2020-06-05T05:59:09Z | https://github.com/3b1b/manim/issues/1126 | [] | ashishsain | 0 |
KaiyangZhou/deep-person-reid | computer-vision | 303 | Is this the correct way to perform the preprocessing on the image | ```
import torchreid
torchreid.models.show_avai_models()
model = torchreid.models.build_model(name='osnet_ain_x1_0', num_classes=1041)
torchreid.utils.load_pretrained_weights(model, "osnet_ain_x1_0_msmt17_256x128_amsgrad_ep50_lr0.0015_coslr_b64_fb10_softmax_labsmth_flip_jitter.pth")
model.eval()
import torch
#assume input_img is the image loaded by PIL
input_img = torch.ones(1, 3, 256, 128)
norm_mean = [0.485, 0.456, 0.406] # imagenet mean
norm_std = [0.229, 0.224, 0.225] # imagenet std
input_img /= 255.0
for i in range(0, 3):
input_img [0, i, :, :] = (input_img [0, i, :, :] - norm_mean[i]) / norm_std[i]
```
According to the [config file](https://github.com/KaiyangZhou/deep-person-reid/blob/master/configs/im_osnet_ain_x1_0_softmax_256x128_amsgrad_cosine.yaml), I guess this should be the correct solution to preprocess the image?
After preprocess, feed the data into the model should have the features?
```
model = torchreid.models.build_model(name='osnet_ain_x1_0', num_classes=1041)
torchreid.utils.load_pretrained_weights(model, "osnet_ain_x1_0_msmt17_256x128_amsgrad_ep50_lr0.0015_coslr_b64_fb10_softmax_labsmth_flip_jitter.pth")
model.eval()
features = model(input_img)
```
Am I doing anything wrong?
Thanks | closed | 2020-02-07T01:11:42Z | 2020-05-18T10:09:04Z | https://github.com/KaiyangZhou/deep-person-reid/issues/303 | [] | stereomatchingkiss | 1 |
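As a side note, the per-channel loop above can be replaced by one broadcast expression; a numpy sketch of the equivalent arithmetic (the torch version is analogous, with `tensor.view(1, 3, 1, 1)`):

```python
import numpy as np

norm_mean = np.array([0.485, 0.456, 0.406]).reshape(1, 3, 1, 1)
norm_std = np.array([0.229, 0.224, 0.225]).reshape(1, 3, 1, 1)

input_img = np.ones((1, 3, 256, 128))
input_img = input_img / 255.0
input_img = (input_img - norm_mean) / norm_std  # broadcasts over H and W
```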
clovaai/donut | nlp | 30 | donut processing on PDF Documents | Hello,
I have a few certificate documents in PDF format. I want to extract metadata from those documents as you suggested.
Could you please clarify the points below.
1. Can I use your model directly without pretraining on the certificate data?
2. How do I train your model on my certificates, as they are confidential, and what folder structure do you expect for the training data?
3. How do I convert my dataset into your format (synthdog)? It was not very clear to me.
Thank you and looking forward to your response.
Best Regards,
Arun
| closed | 2022-08-19T11:53:41Z | 2022-08-24T02:48:54Z | https://github.com/clovaai/donut/issues/30 | [] | Arun4GS | 4 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,270 | Install on Amazon Linux 2 | Hi, dear GlobaLeaks team, how can I install GlobaLeaks on an Amazon Linux 2 machine? Do you have an .rpm package for that?
thank you
| closed | 2022-08-29T07:45:00Z | 2022-08-29T08:40:37Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3270 | [] | stgitdeploy | 2 |
wkentaro/labelme | computer-vision | 1,492 | Labelme 5.5.0 don't support *.jpg files | ### Provide environment information
labelme 5.5.0
QtAwesome 1.2.2
qtconsole 5.4.0
QtPy 2.2.0
PyQt5 5.15.7
PyQt5-sip 12.11.0
### What OS are you using?
Windows
### Describe the Bug
I installed Labelme via "pip install", but the software cannot load jpg files. As suggested in another issue, I found that QtGui also doesn't support the jpg format. How can I solve this problem? Thank you!
```
> from qtpy import QtGui
> QtGui.QImageReader.supportedImageFormats()
[PyQt5.QtCore.QByteArray(b'bmp'), PyQt5.QtCore.QByteArray(b'cur'), PyQt5.QtCore.QByteArray(b'gif'), PyQt5.QtCore.QByteArray(b'icns'), PyQt5.QtCore.QByteArray(b'ico'), PyQt5.QtCore.QByteArray(b'pbm'), PyQt5.QtCore.QByteArray(b'pgm'), PyQt5.QtCore.QByteArray(b'png'), PyQt5.QtCore.QByteArray(b'ppm'), PyQt5.QtCore.QByteArray(b'tga'), PyQt5.QtCore.QByteArray(b'tif'), PyQt5.QtCore.QByteArray(b'tiff'), PyQt5.QtCore.QByteArray(b'wbmp'), PyQt5.QtCore.QByteArray(b'webp'), PyQt5.QtCore.QByteArray(b'xbm'), PyQt5.QtCore.QByteArray(b'xpm')]
```
### Expected Behavior
_No response_
### To Reproduce
_No response_ | open | 2024-09-11T03:45:10Z | 2025-03-07T01:52:42Z | https://github.com/wkentaro/labelme/issues/1492 | [
"issue::bug"
] | Deephome | 1 |
AirtestProject/Airtest | automation | 1,273 | Importing airtest.core.api, starting screen recording on an iOS device raises an error | Using the airtest library, version 1.3.5
**[The phone is connected via a USB data cable.]**
Not running inside Airtest IDE (it runs fine in Airtest IDE).
In my project, running the following:
from airtest.core.api import *
auto_setup(__file__)
driver=connect_device("ios:///")
driver.start_recording(orientation=1)
# iOS screen recording raises this error:
module 'wda.usbmux' has no attribute 'Usbmux'
module 'wda.usbmux' has no attribute 'Usbmux'
module 'wda.usbmux' has no attribute 'Usbmux'
driver.start_recording(orientation=1)
File "D:\python\lib\site-packages\airtest\core\ios\ios.py", line 53, in wrapper
return func(self, *args, **kwargs)
File "D:\python\lib\site-packages\airtest\core\ios\ios.py", line 1641, in start_recording
self.recorder = ScreenRecorder(
File "D:\python\lib\site-packages\airtest\aircv\screen_recorder.py", line 121, in __init__
self.tmp_frame = self.get_frame_func()
File "D:\python\lib\site-packages\airtest\core\ios\ios.py", line 1634, in get_frame
data = self.get_frame_from_stream()
File "D:\python\lib\site-packages\airtest\core\ios\ios.py", line 53, in wrapper
return func(self, *args, **kwargs)
File "D:\python\lib\site-packages\airtest\core\ios\ios.py", line 984, in get_frame_from_stream
return self.mjpegcap.get_frame_from_stream()
File "D:\python\lib\site-packages\airtest\utils\snippet.py", line 125, in ready_func
method()
File "D:\python\lib\site-packages\airtest\utils\snippet.py", line 135, in wrapper
ret = func(inst, *args, **kwargs)
File "D:\python\lib\site-packages\airtest\core\ios\mjpeg_cap.py", line 64, in setup_stream_server
self.port, _ = self.instruct_helper.setup_proxy(9100)
File "D:\python\lib\site-packages\airtest\utils\retry.py", line 49, in f2
return func(*args, **kwargs)
File "D:\python\lib\site-packages\airtest\core\ios\instruct_cmd.py", line 114, in setup_proxy
raise LocalDeviceError("Currently only supports port forwarding for locally connected iOS devices")
airtest.core.error.LocalDeviceError: 'Currently only supports port forwarding for locally connected iOS devices'
**Expected behavior**
My project raises the errors above, but in Airtest IDE 1.2.17 it runs normally.
**Python version:** `python3.9.11`
**Airtest version:** `1.3.5`
**Devices:**
I tried many devices and they all behave the same. WDA is not the problem: whether it is Facebook's WDA or one I built myself, everything runs fine in Airtest IDE but errors out in the airtest library.
iphone 12 ios 15.6.1
iphone 11 ios 17.7.1
| open | 2025-01-25T08:46:47Z | 2025-02-28T07:39:23Z | https://github.com/AirtestProject/Airtest/issues/1273 | [] | dragonpatton | 1 |
MagicStack/asyncpg | asyncio | 733 | Cannot use boolean in case function | Hi,
I cannot use True or False in the case function, as shown below:
```
case([(table.c.item_id == None, False)], else_= True).label('saved')
```
it is showing me this error
```
File "asyncpg\protocol\protocol.pyx", line 181, in bind_execute
File "asyncpg\protocol\prepared_stmt.pyx", line 171, in asyncpg.protocol.protocol.PreparedStatementState._encode_bind_msg
asyncpg.exceptions.DataError: invalid input for query argument $3: False (expected str, got bool)
```
I tried it with SQLAlchemy and it correctly returns True/False values.
Is this an issue with the Postgres driver or the database library?
I tried replacing True/False with the strings 'True'/'False' and it works.
Is there another way to return boolean values?
Kind regards | closed | 2021-04-01T12:25:53Z | 2021-04-03T17:52:44Z | https://github.com/MagicStack/asyncpg/issues/733 | [] | psowa001 | 1 |
AirtestProject/Airtest | automation | 416 | Reporting a warning |
**Describe the bug**
```
/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/airtest/aircv/utils.py:51: DeprecationWarning: The binary mode of fromstring is deprecated, as it behaves surprisingly on unicode inputs. Use frombuffer instead
nparr = np.fromstring(pngstr, np.uint8)
-- Docs: https://docs.pytest.org/en/latest/warnings.html
```
**Python version:** `python3.7`
**Airtest version:** `1.0.24`
| open | 2019-05-29T07:48:47Z | 2019-07-10T08:03:30Z | https://github.com/AirtestProject/Airtest/issues/416 | [
"compatibility"
] | N-logan | 1 |
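The warning text itself names the fix: swap `np.fromstring` for `np.frombuffer` when decoding a byte string. A minimal sketch (the byte string here is a stand-in for the real screenshot data):

```python
import numpy as np

pngstr = b"\x89PNG\r\n"                  # stand-in for real PNG bytes
nparr = np.frombuffer(pngstr, np.uint8)  # replaces np.fromstring(pngstr, np.uint8)
# note: frombuffer returns a read-only view; add .copy() if the array is mutated
```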
jupyter-widgets-contrib/ipycanvas | jupyter | 26 | Mouse events + key events | open | 2019-09-18T19:11:10Z | 2021-07-05T06:08:55Z | https://github.com/jupyter-widgets-contrib/ipycanvas/issues/26 | [
"enhancement"
] | martinRenou | 7 | |
slackapi/python-slack-sdk | asyncio | 1,451 | Option to block link url preview | When the Slack Python client posts messages, URLs are automatically given a preview.
However, this is very inconvenient in some cases: e.g. when many messages are posted in a short time, the channel becomes unreadable because of the large space taken by the previews.
If such a message is posted by a human, he/she can dismiss it, but there is no such option in the Python client.
The requested feature that solves this issue is an optional parameter in `post_message` that works like this: `post_message(channel, message, url_previews=False)` and defaults to True.
If such an option already exists, this ticket can be used as a tracker for a better way to document it.
Note: several workarounds exist (putting the url in quotes, blocking previews from specific domains in workspace settings, etc.), but these can only be used in specific circumstances or remove the ability to use text formatting.
### Category (place an `x` in each of the `[ ]`)
- [x] **slack_sdk.web.WebClient (sync/async)** (Web API client)
- [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender)
- [ ] **slack_sdk.models** (UI component builders)
- [ ] **slack_sdk.oauth** (OAuth Flow Utilities)
- [ ] **slack_sdk.socket_mode** (Socket Mode client)
- [ ] **slack_sdk.audit_logs** (Audit Logs API client)
- [ ] **slack_sdk.scim** (SCIM API client)
- [ ] **slack_sdk.rtm** (RTM client)
- [ ] **slack_sdk.signature** (Request Signature Verifier)
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2024-01-24T17:36:58Z | 2024-01-24T22:49:35Z | https://github.com/slackapi/python-slack-sdk/issues/1451 | [
"question"
] | guigui8 | 1 |
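For reference, the Web API method behind message posting already accepts `unfurl_links` and `unfurl_media` flags that disable previews; whether that matches the exact shape the author asked for is a judgment call, but a small helper building the call's arguments might look like:

```python
def no_preview_kwargs(channel: str, text: str) -> dict:
    """Arguments for WebClient.chat_postMessage with link previews disabled."""
    return {
        "channel": channel,
        "text": text,
        "unfurl_links": False,  # no previews for plain links
        "unfurl_media": False,  # no previews for media links
    }

# usage (assuming a configured slack_sdk WebClient):
# client.chat_postMessage(**no_preview_kwargs("#general", "https://example.com"))
```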
aimhubio/aim | data-visualization | 2,322 | Searching for new field replaces previously selected fields | ## ๐ Bug
Searching for new field replaces previously selected fields.
### To reproduce
1. Search for "run.active" and checkmark the box.
<img width="441" alt="aim_choose_1" src="https://user-images.githubusercontent.com/721196/199941213-3363c6b3-9dc0-4be4-9f86-8d2f7f39dcfb.png">
2. Search for "run.e".
<img width="443" alt="aim_choose_2" src="https://user-images.githubusercontent.com/721196/199941231-cc0088cf-17b0-4ea3-8339-0d05f6d5c072.png">
3. Select "run.experiment". Previously selected fields are now unselected!
<img width="444" alt="aim_choose_3" src="https://user-images.githubusercontent.com/721196/199941248-bc45599e-7596-472e-8292-f2bcdd83ae44.png">
4. Final result.
<img width="444" alt="aim_choose_4" src="https://user-images.githubusercontent.com/721196/199941269-c4bee06a-c7db-4d24-9a39-a14afdafdc4c.png">
### Expected behavior
Keep all items.
<img width="449" alt="aim_choose_expected" src="https://user-images.githubusercontent.com/721196/199945866-7ddcd48a-f744-4bb6-8e1c-f3c8404d0d7c.png">
### Environment
- Aim Version: 3.14.3
- Python version: 3.8
- pip version: 22.1
- OS: Linux
- Any other relevant information
### Additional context
N/A
| closed | 2022-11-04T09:41:52Z | 2022-12-08T22:31:19Z | https://github.com/aimhubio/aim/issues/2322 | [
"type / bug",
"help wanted",
"area / Web-UI",
"phase / shipped"
] | YodaEmbedding | 4 |
mars-project/mars | scikit-learn | 2,762 | [BUG]mars.new_ray_session error | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Creating a Mars session on a Ray cluster raises an error.
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version
3.7.7
3. The version of Mars you use
0.9.0b1
4. Versions of crucial packages, such as numpy, scipy and pandas
numpy==1.21.5 pandas==1.3.5 pymars==0.9.0b1 xgboost "xgboost_ray" lightgbm
5. Full stack of the error.
(base) root@cd15e694a26a:~# python
Python 3.7.7 (default, May 7 2020, 21:25:33)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ray
>>> import mars
>>> ray.init(address='ray://10.227.144.220:30772')
ClientContext(dashboard_url='127.0.0.1:8265', python_version='3.7.7', ray_version='1.9.2', ray_commit='ef593fe5d3c864836b80ae77be32635cef42b537', protocol_version='2021-09-22', _num_clients=1, _context_to_restore=<ray.util.client._ClientContext object at 0x7fb75b902190>)
>>> import mars.tensor as mt
>>> import mars.dataframe as md
>>> session = mars.new_ray_session(worker_num=2, worker_mem=2 * 1024 ** 3)
(RayMainPool pid=671, ip=172.20.150.197) 2022-02-27 23:46:24,651 ERROR serialization.py:283 -- 'pickle'
(RayMainPool pid=671, ip=172.20.150.197) Traceback (most recent call last):
(RayMainPool pid=671, ip=172.20.150.197) File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/serialization.py", line 281, in deserialize_objects
(RayMainPool pid=671, ip=172.20.150.197) obj = self._deserialize_object(data, metadata, object_ref)
(RayMainPool pid=671, ip=172.20.150.197) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/ray/communication.py", line 78, in _deserialize_object
(RayMainPool pid=671, ip=172.20.150.197) message = deserialize(*value.message)
(RayMainPool pid=671, ip=172.20.150.197) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/serialization/core.py", line 423, in deserialize
(RayMainPool pid=671, ip=172.20.150.197) gen_deserialized = _deserialize(*gen_to_deserial)
(RayMainPool pid=671, ip=172.20.150.197) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/serialization/core.py", line 397, in _deserialize
(RayMainPool pid=671, ip=172.20.150.197) serializer = _deserializers[serializer_name]
(RayMainPool pid=671, ip=172.20.150.197) KeyError: 'pickle'
Traceback (most recent call last):
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/deploy/oscar/ray.py", line 320, in new_cluster
await cluster.start()
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/deploy/oscar/ray.py", line 481, in start
NodeRole.WORKER, self.supervisor_address, self.supervisor_address
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/deploy/oscar/ray.py", line 95, in create
address=lookup_address,
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/api.py", line 27, in create_actor
return await ctx.create_actor(actor_cls, *args, uid=uid, address=address, **kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/context.py", line 105, in create_actor
result = await self._wait(future, address, create_actor_message)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/context.py", line 83, in _wait
return await future
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/context.py", line 74, in _wait
await asyncio.shield(future)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/core.py", line 49, in _listen
message: _MessageBase = await client.recv()
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/communication/base.py", line 262, in recv
return await self.channel.recv()
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/ray/communication.py", line 196, in recv
result = await object_ref
ray.exceptions.RayTaskError: ray::RayMainPool.__on_ray_recv__() (pid=671, ip=172.20.150.197, repr=<mars.oscar.backends.ray.pool.RayMainPool object at 0x7f1565adcdc0>)
At least one of the input arguments for this task could not be computed:
ray.exceptions.RaySystemError: System error: 'pickle'
traceback: Traceback (most recent call last):
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/serialization.py", line 281, in deserialize_objects
obj = self._deserialize_object(data, metadata, object_ref)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/ray/communication.py", line 78, in _deserialize_object
message = deserialize(*value.message)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/serialization/core.py", line 423, in deserialize
gen_deserialized = _deserialize(*gen_to_deserial)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/serialization/core.py", line 397, in _deserialize
serializer = _deserializers[serializer_name]
KeyError: 'pickle'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/deploy/oscar/ray.py", line 363, in new_ray_session
client = new_cluster_in_ray(**new_cluster_kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/deploy/oscar/ray.py", line 335, in new_cluster_in_ray
client = fut.result()
File "/home/ray/anaconda3/lib/python3.7/concurrent/futures/_base.py", line 435, in result
return self.__get_result()
File "/home/ray/anaconda3/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/deploy/oscar/ray.py", line 327, in new_cluster
raise stop_ex from ex
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/deploy/oscar/ray.py", line 325, in new_cluster
await cluster.stop()
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/deploy/oscar/ray.py", line 519, in stop
await stop_supervisor(self.supervisor_address, self._config)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/deploy/oscar/service.py", line 66, in stop_supervisor
await stop_services(NodeRole.SUPERVISOR, address=address, config=config)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/services/core.py", line 185, in stop_services
await asyncio.gather(*[inst.stop() for inst in instances])
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/services/task/supervisor/service.py", line 52, in stop
uid=TaskConfigurationActor.default_uid(), address=self._address
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/api.py", line 37, in destroy_actor
return await ctx.destroy_actor(actor_ref)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/context.py", line 121, in destroy_actor
result = await self._wait(future, actor_ref.address, message)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/context.py", line 83, in _wait
return await future
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/context.py", line 74, in _wait
await asyncio.shield(future)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/core.py", line 49, in _listen
message: _MessageBase = await client.recv()
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/communication/base.py", line 262, in recv
return await self.channel.recv()
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/ray/communication.py", line 196, in recv
result = await object_ref
ray.exceptions.RayTaskError: ray::RayMainPool.__on_ray_recv__() (pid=671, ip=172.20.150.197, repr=<mars.oscar.backends.ray.pool.RayMainPool object at 0x7f1565adcdc0>)
At least one of the input arguments for this task could not be computed:
ray.exceptions.RaySystemError: System error: 'pickle'
traceback: Traceback (most recent call last):
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/serialization.py", line 281, in deserialize_objects
obj = self._deserialize_object(data, metadata, object_ref)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/ray/communication.py", line 78, in _deserialize_object
message = deserialize(*value.message)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/serialization/core.py", line 423, in deserialize
gen_deserialized = _deserialize(*gen_to_deserial)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/serialization/core.py", line 397, in _deserialize
serializer = _deserializers[serializer_name]
KeyError: 'pickle'
>>> (RayMainPool pid=671, ip=172.20.150.197) 2022-02-27 23:46:24,700 ERROR serialization.py:283 -- 'pickle'
(RayMainPool pid=671, ip=172.20.150.197) Traceback (most recent call last):
(RayMainPool pid=671, ip=172.20.150.197) File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/serialization.py", line 281, in deserialize_objects
(RayMainPool pid=671, ip=172.20.150.197) obj = self._deserialize_object(data, metadata, object_ref)
(RayMainPool pid=671, ip=172.20.150.197) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/oscar/backends/ray/communication.py", line 78, in _deserialize_object
(RayMainPool pid=671, ip=172.20.150.197) message = deserialize(*value.message)
(RayMainPool pid=671, ip=172.20.150.197) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/serialization/core.py", line 423, in deserialize
(RayMainPool pid=671, ip=172.20.150.197) gen_deserialized = _deserialize(*gen_to_deserial)
(RayMainPool pid=671, ip=172.20.150.197) File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/serialization/core.py", line 397, in _deserialize
(RayMainPool pid=671, ip=172.20.150.197) serializer = _deserializers[serializer_name]
(RayMainPool pid=671, ip=172.20.150.197) KeyError: 'pickle'
>>> exit()
7. Minimized code to reproduce the error.
```python
import ray
import mars
ray.init(address='ray://10.227.144.220:30772') # ray cluster with 3 nodes, ray version 1.9.2
import mars.tensor as mt
import mars.dataframe as md
session = mars.new_ray_session(worker_num=2, worker_mem=2 * 1024 ** 3)
```
**Expected behavior**
KeyError: 'pickle'
**Additional context**
Ray cluster docker is built with the following Dockerfile:
```dockerfile
ARG BASE_IMAGE=rayproject/ray:1.9.2
FROM ${BASE_IMAGE}
# Setup commands[Optional]
RUN sudo apt-get update \
&& sudo apt-get upgrade -y \
&& sudo apt-get install -y emacs vim\
&& sudo apt-get clean \
&& sudo rm -rf /var/lib/apt/lists/*
# ML packages[Optional]
RUN pip3 install numpy==1.21.5 pandas==1.3.5 pymars==0.9.0b1 xgboost "xgboost_ray" lightgbm
```
| closed | 2022-02-28T07:59:12Z | 2022-03-08T06:58:21Z | https://github.com/mars-project/mars/issues/2762 | [
"type: bug",
"mod: ray integration"
] | jyizheng | 3 |
opengeos/leafmap | jupyter | 197 | add_local_tile not adding any tile, at least no tile is getting displayed | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.7.5
- Python version: 3.7
- Operating System: Ubuntu
### Description
`add_local_tile` is not working: no local tile is displayed.
### What I Did
```python
import os
import leafmap

m = leafmap.Map()
file1 = os.path.join('/DISK003/SPATIAL_ANALYTICS', 'landuse.tif')
m.add_local_tile(file1, layer_name="Local COG")
m
```
| closed | 2022-02-02T21:45:20Z | 2022-02-03T21:50:10Z | https://github.com/opengeos/leafmap/issues/197 | [
"bug"
] | MATRIX4284 | 4 |
amidaware/tacticalrmm | django | 1,204 | Before install | - OS: [Debian 11]
**Installation Method:**
- [x] Standard
How do you set up the listening port and certs before install? | closed | 2022-07-09T19:41:33Z | 2022-07-09T20:46:02Z | https://github.com/amidaware/tacticalrmm/issues/1204 | [] | xxlimit | 2 |
pyppeteer/pyppeteer | automation | 316 | Windows 10 new install issues | When running pyppeteer for the first time, the program stops while downloading Chromium with an error; the console logs the following for my project called "test":
[W:pyppeteer.chromium_downloader] Starting Chromium download. Download may take a few minutes.
Traceback (most recent call last):
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\connection.py", line 175, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\util\connection.py", line 73, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "C:\Python\Python379\lib\socket.py", line 752, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\connectionpool.py", line 706, in urlopen
chunked=chunked,
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\connectionpool.py", line 382, in _make_request
self._validate_conn(conn)
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\connectionpool.py", line 1010, in _validate_conn
conn.connect()
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\connection.py", line 358, in connect
conn = self._new_conn()
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\connection.py", line 187, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x000001E82C85B088>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:/WindowsFolder/Documents/VisualStudioCode/Projects/Python/test/test_pyppeteer.py", line 14, in <module>
asyncio.get_event_loop().run_until_complete(main())
File "C:\Python\Python379\lib\asyncio\base_events.py", line 587, in run_until_complete
return future.result()
File "D:/WindowsFolder/Documents/VisualStudioCode/Projects/Python/test/test_pyppeteer.py", line 7, in main
browser = await launch()
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\pyppeteer\launcher.py", line 307, in launch
return await Launcher(options, **kwargs).launch()
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\pyppeteer\launcher.py", line 120, in __init__
download_chromium()
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\pyppeteer\chromium_downloader.py", line 139, in download_chromium
extract_zip(download_zip(get_url()), DOWNLOADS_FOLDER / REVISION)
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\pyppeteer\chromium_downloader.py", line 81, in download_zip
r = http.request('GET', url, preload_content=False)
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\request.py", line 75, in request
method, url, fields=fields, headers=headers, **urlopen_kw
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\request.py", line 96, in request_encode_url
return self.urlopen(method, url, **extra_kw)
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\poolmanager.py", line 375, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\connectionpool.py", line 796, in urlopen
**response_kw
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\connectionpool.py", line 796, in urlopen
**response_kw
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\connectionpool.py", line 796, in urlopen
**response_kw
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\connectionpool.py", line 756, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "D:\Path8toPythonProject\Projects\Python\test\.venv\lib\site-packages\urllib3\util\retry.py", line 574, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='storage.googleapis.com', port=443): Max retries exceeded with url: /chromium-browser-snapshots/Win_x64/588429/chrome-win32.zip (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001E82C85B088>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
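The root cause in this traceback is `socket.gaierror: [Errno 11001] getaddrinfo failed`, i.e. Windows could not resolve `storage.googleapis.com` at all. A quick, pyppeteer-independent way to check whether DNS resolution of the download host works (a diagnostic sketch, not part of pyppeteer):

```python
import socket

# Host contacted by pyppeteer's Chromium download, per the traceback above.
HOST = "storage.googleapis.com"

def can_resolve(host: str) -> bool:
    """Return True when DNS resolution of `host` succeeds."""
    try:
        socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
        return True
    except socket.gaierror as exc:
        print(f"DNS resolution failed for {host}: {exc}")
        return False

print("resolvable:", can_resolve(HOST))
```

If this prints `False`, the failure is at the network level (proxy, firewall, VPN, or captive portal) rather than in pyppeteer itself.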
. | closed | 2021-10-02T00:45:52Z | 2021-11-07T22:25:44Z | https://github.com/pyppeteer/pyppeteer/issues/316 | [] | nono-london | 1 |
allenai/allennlp | nlp | 4,950 | Support spaCy v3 | spaCy released the version v3, with great improvements. https://github.com/explosion/spaCy/releases/tag/v3.0.0
Unfortunately, allennlp pins spacy<2.4, so the dependency resolver does not allow allennlp and spaCy v3 to be used in the same project.
It would be great to bump the requirement.
"Feature request"
] | bratao | 1 |
FactoryBoy/factory_boy | sqlalchemy | 802 | Round Robin strategy for FuzzyChoice | #### The problem
I need a round-robin strategy in the `fuzzy.FuzzyChoice` fuzzer. Right now, there is no way to specify that I want each choice to appear at least once.
#### Proposed solution
Provide an attribute to FuzzyChoice() that states the strategy.
For example:
`FuzzyChoice(["bear", "bull", "deer", "frog"], strategy="round_robin")`
Would yield the following:
```
>>> fc = FuzzyChoice(["bear", "bull", "deer", "frog"], strategy="round_robin")
>>> fc.fuzz()
'bear'
>>> fc.fuzz()
'bull'
>>> fc.fuzz()
'deer'
>>> fc.fuzz()
'frog'
>>> fc.fuzz()
'bear'
>>> fc.fuzz()
'bull'
```
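Pending built-in support, the behaviour shown above can be approximated with a small standalone fuzzer (illustrative only — `RoundRobinChoice` and a `strategy` keyword are hypothetical, not part of factory_boy's API):

```python
import itertools

class RoundRobinChoice:
    """Yield each choice in order, wrapping around -- a sketch of the
    proposed FuzzyChoice(..., strategy="round_robin")."""

    def __init__(self, choices):
        self._cycle = itertools.cycle(list(choices))

    def fuzz(self):
        return next(self._cycle)

rr = RoundRobinChoice(["bear", "bull", "deer", "frog"])
print([rr.fuzz() for _ in range(6)])
# ['bear', 'bull', 'deer', 'frog', 'bear', 'bull']
```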
#### Extra notes
This is useful for generating fake seed data while making sure every case appears. | closed | 2020-10-29T09:59:55Z | 2020-12-03T13:29:58Z | https://github.com/FactoryBoy/factory_boy/issues/802 | [
"Q&A",
"Improvement",
"Fixed"
] | sne4ky | 3 |
xlwings/xlwings | automation | 2,160 | Improve OneDrive/Sharepoint solution | The current solution to deal with URL style `mybook.fullname` still has issues (like scanning the whole OneDrive directory when the file is not found) and on macOS it depends on providing settings for anything other than OneDrive Personal.
This solution solves it properly by taking the values from the settings:
https://gist.github.com/guwidoe/038398b6be1b16c458365716a921814d
However, on macOS, it currently gives back paths that xlwings can't open directly via `xw.Book(path)` (first path can be opened by xlwings, second path is given back by the Gist):
* OneDrive Personal
```
/Users/fz/Library/CloudStorage/OneDrive-Personal/Bรถok 1.xlsx
/Users/fz/Library/Group Containers/UBF8T346G9.OneDriveSyncClientSuite/OneDrive.noindex/OneDrive/Bรถok 1.xlsx
```
* OneDrive Business
```
/Users/fz/Library/CloudStorage/OneDrive-ZoomerAnalyticsGmbH/Book 11.xlsx
/Users/fz/Library/Group Containers/UBF8T346G9.OneDriveSyncClientSuite/OneDrive - Zoomer Analytics GmbH.noindex/OneDrive - Zoomer Analytics GmbH/Book 11.xlsx
```
* SharePoint
```
/Users/fz/Library/CloudStorage/OneDrive-SharedLibraries-ZoomerAnalyticsGmbH/german - Documents/Book.xlsx
/Users/fz/Library/Group Containers/UBF8T346G9.OneDriveSyncClientSuite/Zoomer Analytics GmbH.noindex/Zoomer Analytics GmbH/german - Documents/Book.xlsx
```
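Given the path pairs above, one possible normalization — an assumption about the approach, not xlwings' actual implementation — is to ignore the differing prefixes entirely and locate the workbook under `~/Library/CloudStorage` by its file name:

```python
from pathlib import Path

def to_cloudstorage_path(group_containers_path, cloudstorage_root):
    """Best-effort lookup: find a file under the CloudStorage root whose name
    matches the tail of the Group Containers path.  Ambiguous when two synced
    folders contain a file with the same name."""
    name = Path(group_containers_path).name
    matches = sorted(Path(cloudstorage_root).rglob(name))
    return matches[0] if matches else None
```

With the SharePoint pair above, calling this on the `…Zoomer Analytics GmbH.noindex…` path with `Path.home() / "Library/CloudStorage"` as the root would be expected to return the `OneDrive-SharedLibraries-…` path.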
macOS ref:
https://answers.microsoft.com/en-us/msoffice/forum/all/mac-update-questions/31147778-3b79-4f49-b4e1-076ad4d5bfa0
| open | 2023-02-06T11:00:29Z | 2023-05-02T18:02:07Z | https://github.com/xlwings/xlwings/issues/2160 | [] | fzumstein | 14 |
koaning/scikit-lego | scikit-learn | 616 | [BUG] Error when calling predict_proba with GroupedPredictor using shrinkage and global model | When trying to generate probabilities by calling `predict_proba` on a fitted instance of `GroupedPredictor` with `shrinkage` not None and `use_global_model=True`, the following error is generated:
<img width="582" alt="image" src="https://github.com/koaning/scikit-lego/assets/36955807/9db77ec8-17f2-449e-8092-835cbfc32ea6">
Code to reproduce the error:
<img width="358" alt="image" src="https://github.com/koaning/scikit-lego/assets/36955807/373d12bf-ce45-4fe8-af48-c201a6c9e33e">
```
import sklearn
import sklego
from sklego.datasets import load_hearts
from sklearn.linear_model import LogisticRegression
from sklego.meta import GroupedPredictor
## sklego.__version__ 0.7.4
## sklearn.__version__ 1.0.2
df_hearts = load_hearts(as_frame=True)
model = GroupedPredictor(
LogisticRegression(max_iter=1000), groups=['sex'],
use_global_model=True, shrinkage="relative",
).fit(df_hearts.drop(columns=['thal','target']), df_hearts['target'])
preds = model.predict_proba(df_hearts.drop(columns=['thal','target']))
``` | closed | 2024-02-07T15:59:48Z | 2024-03-19T20:10:16Z | https://github.com/koaning/scikit-lego/issues/616 | [
"bug"
] | cerlymarco | 3 |
ghtmtt/DataPlotly | plotly | 79 | graphics in version 3.2 | DataPlotly does not build plots in QGIS 3.2. | closed | 2018-07-25T12:07:47Z | 2018-09-07T14:01:22Z | https://github.com/ghtmtt/DataPlotly/issues/79 | [] | Zahar1985 | 17 |
vi3k6i5/flashtext | nlp | 94 | Wrong matching result for word with accent marks | open | 2019-10-04T11:30:18Z | 2020-05-23T09:13:29Z | https://github.com/vi3k6i5/flashtext/issues/94 | [] | isaac47 | 2 | |
fastapi-users/fastapi-users | fastapi | 1,428 | Where does UD come from? | https://github.com/fastapi-users/fastapi-users-db-mongodb/blob/b4208069e75ec094b95807f999de46685b2d0495/fastapi_users_db_mongodb/__init__.py#L5 | closed | 2024-08-02T08:39:47Z | 2024-08-08T11:49:41Z | https://github.com/fastapi-users/fastapi-users/issues/1428 | [] | vousmeevoyez | 0 |
streamlit/streamlit | data-science | 9,921 | Calling st.rerun in inside dialog in quick succession causes crash | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
In a dialog I have a button that runs `st.rerun()` to close the dialog. When that button is pressed and pressed again just before the dialog closes, then Streamlit crashes with "RuntimeError: Could not find fragment with id ... ". This manifests more clearly when there is a delay in executing that `st.rerun()` (see example).
Note that, to reproduce this bug, the timing of the second click has to be right before the dialog closing and might take a few tries.
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-9921)
```Python
import time
import streamlit as st
@st.dialog("123")
def show_dialog():
if st.button("Close Dialog"):
time.sleep(.15)
st.rerun()
if st.button("Open Dialog"):
show_dialog()
```
### Steps To Reproduce
1. Click the "Open Dialog" button
2. Click the "Close Dialog" button
3. Click the "Close Dialog" button again right before it closes
https://github.com/user-attachments/assets/0726373d-98c3-4ce3-8be2-b7816dab4b3a
### Expected Behavior
The dialog should close without error
### Current Behavior
Traceback (most recent call last):
File "/Users/svanderhimst/PycharmProjects/dialog_test/.venv/lib/python3.12/site-packages/streamlit/runtime/scriptrunner/exec_code.py", line 88, in exec_func_with_error_handling
result = func()
^^^^^^
File "/Users/svanderhimst/PycharmProjects/dialog_test/.venv/lib/python3.12/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 563, in code_to_exec
raise RuntimeError(
RuntimeError: Could not find fragment with id 578dee04e8ad6adcd52e8e630476d1ea
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.40.0
- Python version: 3.12.3
- Operating System: MacOs 12.6
- Browser: Chrome
### Additional Information
_No response_ | closed | 2024-11-25T15:13:15Z | 2025-01-10T13:20:08Z | https://github.com/streamlit/streamlit/issues/9921 | [
"type:bug",
"status:confirmed",
"priority:P3",
"feature:st.rerun",
"feature:st.fragment"
] | sandervdhimst | 3 |
tqdm/tqdm | pandas | 1,391 | tqdm.rich fails when the iterator has no size hint | Consider the following example:
```python
from pathlib import Path
import tqdm.rich
for i in tqdm.rich.tqdm(Path(".").glob("**/*.py")):
do_something(i)
```
This raises the following exception:
```
...
.venv/lib/python3.9/site-packages/tqdm/rich.py", line 34, in render
total = int(task.total)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
```
In my real use case, the iterator is a custom generator expression that cannot be converted to a known-size container.
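A caller-side mitigation sketch (not tqdm's API): derive a total only when the iterable exposes a size hint, via `operator.length_hint`, and otherwise accept an unknown total — the same `None` guard that the failing `total = int(task.total)` line in `tqdm/rich.py` would need:

```python
from operator import length_hint

def best_effort_total(iterable):
    """Return a displayable total, or None when the iterable has no size hint."""
    hint = length_hint(iterable, -1)
    return hint if hint >= 0 else None

print(best_effort_total([1, 2, 3]))            # 3
print(best_effort_total(x for x in range(3)))  # None
```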
- [ ] I have marked all applicable categories:
+ [x] exception-raising bug
+ [ ] visual output bug
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
>> 4.64.1 3.9.15 (main, Oct 17 2022, 03:01:19)
[Clang 13.1.6 (clang-1316.0.21.2.5)] darwin
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
| open | 2022-11-09T18:25:26Z | 2024-06-10T12:15:02Z | https://github.com/tqdm/tqdm/issues/1391 | [] | bilelomrani1 | 1 |
comfyanonymous/ComfyUI | pytorch | 7,077 | cogview4 | ### Feature Idea
https://github.com/THUDM/CogView4
### Existing Solutions
_No response_
### Other
_No response_ | open | 2025-03-05T00:59:13Z | 2025-03-12T09:20:57Z | https://github.com/comfyanonymous/ComfyUI/issues/7077 | [
"Feature"
] | silingyuan0 | 4 |
FactoryBoy/factory_boy | sqlalchemy | 725 | Keep reference to non-model properties on generated objects | Related: https://github.com/FactoryBoy/factory_boy/issues/544
#### The problem
Once a factory has generated a model, all properties which aren't arguments to the model, such as RelatedFactories, Params, Traits, etc. are unavailable. This makes a bunch of things extra-difficult. For example, if a test case depends on the value of a Param, you have to override it even if the value is the same as the default. If a test case depends on a RelatedFactory, you have to do an extra database query to fetch it.
#### Proposed solution
Factories monkey-patch a `._factoryboy` (or similar) property onto created models. This object is essentially a namespace that contains all Params, RelatedFactories, etc.: all fields that aren't arguments to the model.
Users might also want to add additional properties to model classes using this mechanism. They could add them as Params, but that feels like an abuse of the Params mechanism. Instead, I suggest adding a new `BaseDeclaration` to Factory Boy named `Context`, which wraps another declaration (or value) and makes its value only available in the `._factoryboy` namespace object.
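Mechanically, the attachment could happen right after instantiation; the sketch below illustrates the idea with a plain class and `types.SimpleNamespace` (the hook and names are illustrative, not factory_boy internals):

```python
import types

class Foo:
    def __init__(self, a):
        self.a = a

def build_with_context(model_kwargs, context_kwargs):
    """Create the model, then attach non-model declarations on `._factoryboy`."""
    obj = Foo(**model_kwargs)
    obj._factoryboy = types.SimpleNamespace(**context_kwargs)
    return obj

foo = build_with_context({"a": 1}, {"b": "related-bar"})
print(foo.a, foo._factoryboy.b)  # 1 related-bar
```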
#### Example
```python
class Foo(Model):
a = IntegerField()
class FooFactory(Factory):
class Meta:
model = Foo
a = 1
b = Context(SubFactory(BarFactory))
foo = Foo()
foo.a
# >> 1
foo.b
# >> AttributeError
foo._factoryboy.b
# >> <object Bar>
``` | open | 2020-04-09T17:39:39Z | 2020-04-09T17:39:39Z | https://github.com/FactoryBoy/factory_boy/issues/725 | [] | maxrothman | 0 |
polakowo/vectorbt | data-visualization | 282 | Don't understand multi-asset portfolio warning | closed | 2021-11-23T17:23:35Z | 2021-11-24T22:27:19Z | https://github.com/polakowo/vectorbt/issues/282 | [] | 010011x | 1 | |
InstaPy/InstaPy | automation | 6,165 | You have too few comments, please set at least 10 distinct comments to avoid looking suspicious | ## Expected Behavior
Run quickstart.py found in instapy-quickstart repository
## Current Behavior
I have an issue when I run it :
`You have too few comments, please set at least 10 distinct comments to avoid looking suspicious `
## InstaPy configuration
InstaPy Version: 0.6.13
| closed | 2021-04-22T18:28:44Z | 2022-03-08T18:05:59Z | https://github.com/InstaPy/InstaPy/issues/6165 | [
"wontfix"
] | manufao | 7 |
oegedijk/explainerdashboard | dash | 172 | Support for Lime explainer | Hello,
I am new to data science and came across this package and found it extremely useful for people like me who don't know web-app development. Great work. I can't thank you enough. I have a quick question.
I am learning explainable AI and came to know that we have SHAP, LIME, PFI etc.
I see in the docs that there are SHAP and PFI explainers, but does the package support a LIME explainer as well?
I couldn't find out from the docs, hence checking with you.
Can you help me, please? | closed | 2022-01-12T01:07:58Z | 2022-05-04T19:29:51Z | https://github.com/oegedijk/explainerdashboard/issues/172 | [] | SSMK-wq | 1 |
streamlit/streamlit | python | 10,067 | `st.server_state` with endpoint access (like `st.session_state`, but for global values; different from pickled user sessions) | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
Currently, we can use `st.cache_data` and `st.cache_resource` to "save" values that can be accessed between sessions. What if there was an API similar to `st.session_state` that was shared between sessions? I suggest `st.server_state`.
Additionally, a Streamlit app could include an endpoint for accessing and updating server state.
### Why?
A common pattern is for a Streamlit app to reach out and collect data from a remote location, typically saving it with `st.cache_data`. If there was some server state combined with an endpoint, a remote source could be able to ping the app and initiate an update of data. This prevents needing to schedule an app to run to update data or making a random user wait if they are the first to connect beyond a cached values TTL.
This would also be useful for IoT use cases where a smart device can send an alert to the app.
Another feature request to send a global message to all sessions (#7312) could also be accommodated with this.
### How?
Add a new API `st.server_state` which is global with all sessions having read/write access.
Add an (authenticated) endpoint for remote sources to connect to and update values in `st.server_state`.
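Semantically, the proposal boils down to one process-wide, lock-guarded mapping shared by every session and by the endpoint handler — roughly this sketch (illustrative only; `st.server_state` does not exist today):

```python
import threading

class ServerState:
    """Process-global key/value store: every session reads and writes the same data."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def __getitem__(self, key):
        with self._lock:
            return self._data[key]

    def __setitem__(self, key, value):
        with self._lock:
            self._data[key] = value

server_state = ServerState()              # one instance for the whole server
server_state["last_sensor_alert"] = "temp_high"
print(server_state["last_sensor_alert"])  # temp_high
```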
### Additional Context
This is related to requests for app state (#8609), but I'm suggesting something narrower. | open | 2024-12-22T10:09:44Z | 2025-02-01T05:19:22Z | https://github.com/streamlit/streamlit/issues/10067 | [
"type:enhancement",
"feature:state"
] | sfc-gh-dmatthews | 2 |
pytest-dev/pytest-selenium | pytest | 65 | start_driver fails randomly | I'm experiencing random failures. Since it seems things happen before the actual test logic is started, I assume it's an issue with selenium/pytest-selenium. Is there a better way to debug the problem?
The timeout comes from using pytest-timeout.
```
@pytest.fixture
def selenium(request, capabilities):
"""Returns a WebDriver instance based on options and capabilities"""
from .driver import start_driver
> driver = start_driver(request.node, capabilities)
/usr/local/lib/python2.7/dist-packages/pytest_selenium/pytest_selenium.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python2.7/dist-packages/pytest_selenium/driver.py:26: in start_driver
options.driver.lower()))(item, _capabilities)
/usr/local/lib/python2.7/dist-packages/pytest_selenium/driver.py:51: in chrome_driver
return webdriver.Chrome(**kwargs)
/usr/local/lib/python2.7/dist-packages/selenium/webdriver/chrome/webdriver.py:67: in __init__
desired_capabilities=desired_capabilities)
/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py:87: in __init__
self.start_session(desired_capabilities, browser_profile)
/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py:141: in start_session
'desiredCapabilities': desired_capabilities,
/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py:199: in execute
response = self.command_executor.execute(driver_command, params)
/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/remote_connection.py:395: in execute
return self._request(command_info[0], url, body=data)
/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/remote_connection.py:426: in _request
resp = self._conn.getresponse()
/usr/lib/python2.7/httplib.py:1127: in getresponse
response.begin()
/usr/lib/python2.7/httplib.py:453: in begin
version, status, reason = self._read_status()
/usr/lib/python2.7/httplib.py:409: in _read_status
line = self.fp.readline(_MAXLINE + 1)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <socket._fileobject object at 0x7faae15b6bd0>, size = 65537
def readline(self, size=-1):
buf = self._rbuf
buf.seek(0, 2) # seek end
if buf.tell() > 0:
# check if we already have it in our buffer
buf.seek(0)
bline = buf.readline(size)
if bline.endswith('\n') or len(bline) == size:
self._rbuf = StringIO()
self._rbuf.write(buf.read())
return bline
del bline
if size < 0:
# Read until \n or EOF, whichever comes first
if self._rbufsize <= 1:
# Speed up unbuffered case
buf.seek(0)
buffers = [buf.read()]
self._rbuf = StringIO() # reset _rbuf. we consume it via buf.
data = None
recv = self._sock.recv
while True:
try:
while data != "\n":
data = recv(1)
if not data:
break
buffers.append(data)
except error, e:
# The try..except to catch EINTR was moved outside the
# recv loop to avoid the per byte overhead.
if e.args[0] == EINTR:
continue
raise
break
return "".join(buffers)
buf.seek(0, 2) # seek end
self._rbuf = StringIO() # reset _rbuf. we consume it via buf.
while True:
try:
data = self._sock.recv(self._rbufsize)
except error, e:
if e.args[0] == EINTR:
continue
raise
if not data:
break
nl = data.find('\n')
if nl >= 0:
nl += 1
buf.write(data[:nl])
self._rbuf.write(data[nl:])
del data
break
buf.write(data)
return buf.getvalue()
else:
# Read until size bytes or \n or EOF seen, whichever comes first
buf.seek(0, 2) # seek end
buf_len = buf.tell()
if buf_len >= size:
buf.seek(0)
rv = buf.read(size)
self._rbuf = StringIO()
self._rbuf.write(buf.read())
return rv
self._rbuf = StringIO() # reset _rbuf. we consume it via buf.
while True:
try:
> data = self._sock.recv(self._rbufsize)
E Failed: Timeout >600s
/usr/lib/python2.7/socket.py:480: Failed
```
| closed | 2016-04-12T11:14:21Z | 2016-04-12T15:58:33Z | https://github.com/pytest-dev/pytest-selenium/issues/65 | [] | jmakov | 1 |
tensorflow/tensor2tensor | deep-learning | 1,922 | 'NoneType' object has no attribute 'copy' | C:\Users\zhaoxianghui\AppData\Local\Programs\Python\Python38\python.exe D:\project\python\tensor2tensor-master\tensor2tensor\bin\t2t-trainer --registry_help
Traceback (most recent call last):
File "D:\project\python\tensor2tensor-master\tensor2tensor\bin\t2t-trainer", line 23, in <module>
from tensor2tensor.bin import t2t_trainer
File "D:\project\python\tensor2tensor-master\tensor2tensor\bin\t2t_trainer.py", line 24, in <module>
from tensor2tensor import models # pylint: disable=unused-import
File "D:\project\python\tensor2tensor-master\tensor2tensor\models\__init__.py", line 51, in <module>
from tensor2tensor.models.research import rl
File "D:\project\python\tensor2tensor-master\tensor2tensor\models\research\rl.py", line 27, in <module>
from tensor2tensor.envs import tic_tac_toe_env
File "D:\project\python\tensor2tensor-master\tensor2tensor\envs\__init__.py", line 23, in <module>
from tensor2tensor.envs import tic_tac_toe_env
File "D:\project\python\tensor2tensor-master\tensor2tensor\envs\tic_tac_toe_env.py", line 244, in <module>
register()
File "D:\project\python\tensor2tensor-master\tensor2tensor\envs\tic_tac_toe_env.py", line 239, in register
unused_tictactoe_id, unused_tictactoe_env = gym_utils.register_gym_env(
File "D:\project\python\tensor2tensor-master\tensor2tensor\rl\gym_utils.py", line 360, in register_gym_env
return env_name, gym.make(env_name)
File "C:\Users\zhaoxianghui\AppData\Local\Programs\Python\Python38\lib\site-packages\gym\envs\registration.py", line 572, in make
_kwargs = spec_.kwargs.copy()
AttributeError: 'NoneType' object has no attribute 'copy' | open | 2022-12-23T02:45:21Z | 2022-12-23T02:53:07Z | https://github.com/tensorflow/tensor2tensor/issues/1922 | [] | sjtuzhaoxh | 1 |
opengeos/leafmap | plotly | 424 | In leafmap.tms_to_geotiff(), add JP2000 and ECW formats; also option of coordinate system for output raster. | <!-- Please search existing issues to avoid creating duplicates. -->
| closed | 2023-04-21T09:31:31Z | 2023-04-22T04:31:38Z | https://github.com/opengeos/leafmap/issues/424 | [
"Feature Request"
] | ravishbapna | 1 |
randyzwitch/streamlit-folium | streamlit | 53 | Map flickering | I am trying to add some [Google Earth Engine](https://earthengine.google.com/) layers (XYZ tile layers) to the map. You can see that the layers have been added successfully, but the map keeps refreshing. Any advice?
```python
import ee
import geemap.foliumap as geemap
import streamlit as st
from streamlit_folium import st_folium
m = geemap.Map()
dem = ee.Image("USGS/SRTMGL1_003")
m.addLayer(dem, {}, "DEM")
st_data = st_folium(m, width=1000)
```

To test it in a notebook:
```python
import ee
import geemap.foliumap as geemap
m = geemap.Map()
dem = ee.Image("USGS/SRTMGL1_003")
m.addLayer(dem, {}, "DEM")
m
``` | closed | 2022-05-08T02:29:28Z | 2022-05-24T12:45:04Z | https://github.com/randyzwitch/streamlit-folium/issues/53 | [] | giswqs | 13 |
hankcs/HanLP | nlp | 730 | Calling the dependency parsing interface CRFDependencyParser.compute(sentence) returns a result containing a dependency cycle | ## Notes
Please confirm the following:
* I have carefully read the documents below and did not find an answer:
  - [Homepage documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have already searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue tracker search](https://github.com/hankcs/HanLP/issues), and did not find an answer either.
* I understand that the open-source community is a free community gathered out of shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I typed x inside these brackets to confirm the items above.
## Version
<!-- For releases, note the jar file name without its suffix; for the GitHub repo, note whether it is the master or portable branch -->
The current latest version is: 1.5.2
The version I am using is: hanlp-portable-1.5.2.jar
<!-- The items above are required; the rest is free-form -->
## My issue
Calling the dependency parsing interface CRFDependencyParser.compute(sentence) returns a result containing a dependency cycle:
in the result, the CoNLLWord with id 5 has head 6, and the CoNLLWord with id 6 has head 5.
## Reproducing the issue
Without modifying the source code, call the CRFDependencyParser.compute(sentence) interface directly:
### Triggering code
```
public void testCompute() throws Exception
{
    String sentence = "男子从厦门游泳到金门";  // "A man swims from Xiamen to Kinmen"
    CoNLLSentence cs = CRFDependencyParser.compute(sentence);
    System.out.println(cs.word[4].HEAD.ID);
    System.out.println(cs.word[5].HEAD.ID);
}
```
### Expected output
```
0
5
```
### Actual output
```
6
5
``` | closed | 2018-01-02T08:19:02Z | 2018-01-03T02:32:54Z | https://github.com/hankcs/HanLP/issues/730 | [
"bug",
"duplicated",
"ignored"
] | iwangkang | 2 |
aminalaee/sqladmin | asyncio | 283 | Making navbar collapse on small screens | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
On a large screen:
<img width="1597" alt="Screenshot 2022-08-26 at 09 00 45" src="https://user-images.githubusercontent.com/19784933/186842573-d9806d3c-a74e-46cd-9bcd-4329054f0651.png">
On a small screen:
<img width="499" alt="Screenshot 2022-08-26 at 09 00 56" src="https://user-images.githubusercontent.com/19784933/186842586-faffa25f-48b2-4a45-b9f9-d6b743a1404f.png">
The navbar is lost and it might be useful to make this responsive.
### Describe the solution you would like.
_No response_
### Describe alternatives you considered
_No response_
### Additional context
_No response_ | closed | 2022-08-26T07:04:04Z | 2022-10-27T07:08:01Z | https://github.com/aminalaee/sqladmin/issues/283 | [
"good first issue"
] | aminalaee | 2 |
stanfordnlp/stanza | nlp | 510 | Concurrency error | **Describe the bug**
Under UWSGI with 32 threads, when receiving many concurrent requests, the following error occurs frequently:
```
Exception: builtins.RuntimeError: Expected hidden[0] size (2, 4, 100), got (2, 3, 100)
```
**To Reproduce**
1. Configure it under UWSGI with 32 threads
2. Run several requests in parallel
3. See error above
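The reproduction pattern can be sketched like this (the Stanza pipeline call is replaced by a stand-in, since triggering the real crash requires the downloaded models; `fake_pipeline` is illustrative only):

```python
from concurrent.futures import ThreadPoolExecutor

def fake_pipeline(text):
    """Stand-in for `nlp(text)`; the crash appears when 32 UWSGI threads
    share pipeline state while handling concurrent requests."""
    return text.upper()

texts = [f"sentence {i}" for i in range(32)]
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(fake_pipeline, texts))
print(len(results))  # 32
```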
**Expected behavior**
No errors on multithreading
**Environment (please complete the following information):**
- OS: Archlinux
- Python version: Python 3.8.6 from Archlinux
- Stanza version: 1.1.1
- Environment variable OMP_NUM_THREADS set to 32 doesn't help
- Processors: `tokenize,lemma,pos`
- Languages processed concurrently: `en,es,it,nl,de,fr,ru` | closed | 2020-11-09T21:41:04Z | 2021-01-27T17:22:05Z | https://github.com/stanfordnlp/stanza/issues/510 | [
"bug",
"fixed on dev"
] | brauliobo | 9 |
mitmproxy/pdoc | api | 739 | Displaying function call as `mylib.hello()` instead of `hello()` | Consider the following short library `mylib.py`:
```py
"""
This is the documentation for mylib.
You can use it like so:
import mylib
mylib.hello()
"""
def hello():
"""
A function that says hello:
>>> import mylib
>>> mylib.hello()
Hello
"""
print("Hello")
```
We can compile docs with the following:
```py
$ pdoc mylib.py -o docs
```
This is the result I get:

The code in the documentation is like this:
```py
import mylib
hello()
```
The second line of code `mylib.hello()` is being displayed as just `hello()`. I would like to get the following instead:
```py
import mylib
mylib.hello()
```
Is there a way do configure `pdoc` to generate the documentation like this? I tried looking in the command line flags and online but I could not find a way. :pensive:
### Reasoning
If one copy-pastes the code from the docs `import mylib; hello()`, one would get a parse/compile/runtime error. It would be nice if `pdoc` could keep the same notation used in the docstring comment: with or without the library name before.
For some libraries, the usual way of calling them is with the explicit "namespaces" in calls, e.g.:
```py
unittest.main()
# or
doctest.testmod()
```
It would be nice to be able to display this using `pdoc` as well.
### Versions
```py
$ pdoc --version
pdoc: 14.5.1
Python: 3.12.4
Platform: Linux-6.10.4-arch2-1-x86_64-with-glibc2.40
``` | closed | 2024-09-05T09:46:46Z | 2024-09-12T13:40:12Z | https://github.com/mitmproxy/pdoc/issues/739 | [
"enhancement"
] | rudymatela | 1 |
plotly/dash | data-science | 3,053 | add "Zen of Dash" similar to that in Narwhals | `import narwhals.this` prints a message about the project's philosophy - it would be a nice addition to Dash if `import dash.this` (or similar) did the same. | open | 2024-10-24T14:32:55Z | 2024-10-24T14:32:55Z | https://github.com/plotly/dash/issues/3053 | [
"feature",
"P3"
] | gvwilson | 0 |
man-group/notebooker | jupyter | 79 | Widen results display | The width is too narrow | closed | 2022-02-25T10:25:33Z | 2022-07-18T08:52:57Z | https://github.com/man-group/notebooker/issues/79 | [
"enhancement",
"good first issue"
] | jonbannister | 0 |
flaskbb/flaskbb | flask | 308 | import flaskbb to existing flask app and closed registration | is it possible to include flaskbb to an existing flask app so that if the flask app gets started also the forum?
also is it possible to somewhere define as an admin that registrations are closed?
thanks | closed | 2017-07-21T13:48:38Z | 2018-04-15T07:47:46Z | https://github.com/flaskbb/flaskbb/issues/308 | [] | pythonios | 1 |
MaartenGr/BERTopic | nlp | 1,633 | Best way to find all documents related to keyword | Say I have a topic model on a large collection of documents. What's the best way to find all topics and documents matching a certain keyword, eg the keyword "Sport" in the example topic model. I know I can search for similar topics, eg:
`similar_topics, similarity = topic_model.find_topics("sport", top_n=20)`
But I'm wondering if this is the best/only way? The semantic relationships in the 2D representation of topics isn't always that clear (as is often the case going from many-D to 2D), so I can't really just pick nearby topic clusters. What would you recommend as best practice here?
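For context, the follow-up step I have in mind is sketched below with plain lists standing in for a fitted model (only `topics_` and `find_topics` are real BERTopic names; the data here is made up): filter the documents whose assigned topic id is among the matched topics.

```python
# Hypothetical stand-ins: in a real run, topic_assignments would come from a
# fitted model's topics_ attribute (one topic id per document), and
# similar_topics from topic_model.find_topics("sport", top_n=20).
docs = ["match report", "election results", "transfer rumours", "weather today"]
topic_assignments = [3, 7, 3, 1]   # one topic id per document
similar_topics = [3, 5]            # ids returned for the keyword query

matched = set(similar_topics)
sport_docs = [doc for doc, topic in zip(docs, topic_assignments) if topic in matched]
print(sport_docs)  # ['match report', 'transfer rumours']
```

With an actual model the same filter would run over `topic_model.topics_`, but I'm unsure whether this is better than just picking nearby clusters.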
Many thanks. | open | 2023-11-16T12:15:12Z | 2023-11-16T13:34:16Z | https://github.com/MaartenGr/BERTopic/issues/1633 | [] | fojackson8 | 1 |
Teemu/pytest-sugar | pytest | 163 | Add version tags to make it easier to track down the releases | It looks like there is no version tags in the repo. The releases in pypi https://pypi.org/project/pytest-sugar/#history is already 0.9.1.
It will be nice to tag release versions, so the users or developer could checkout different versions easily. | closed | 2018-11-07T14:33:04Z | 2018-11-08T21:16:42Z | https://github.com/Teemu/pytest-sugar/issues/163 | [] | jxltom | 1 |
clovaai/donut | computer-vision | 139 | Trying to Label for DocVQA but the result is worse | I've tested that out as well (though it doesn't have a license for commercial use), and you should be able to generate the custom dataset in a similar way I did for SROIE, but where the `gt_parse` object is in the form
```
{
    "questions": [],
    "answers": [],
    "headers": []
}
```
_Originally posted by @estaudere in https://github.com/clovaai/donut/issues/8#issuecomment-1208424304_
Excuse me, I want to ask something regarding the ground-truth format for DocVQA fine-tuning. I tried the format referred to in the repository, but it failed and shows this error:
```
Traceback (most recent call last):
  File "train.py", line 149, in <module>
    train(config)
  File "train.py", line 78, in train
    DonutDataset(
  File "/content/drive/MyDrive/Donut VQA/donut/donut/util.py", line 74, in __init__
    assert "gt_parse" in ground_truth and isinstance(ground_truth["gt_parse"], dict)
AssertionError
```
My `gt_parse` looks like this, for example:
```
{'gt_parse': [{'question': 'What is the Food Update no.?',
'answer': ['XIV', 'Food Update XIV']},
{'question': 'Who is the write-up about?',
'answer': ['Frank X. McDermott', 'F. X. McDermott', 'F. X. Mc Dermott']},
{'question': 'What is the current designation of Mr. McDermott with Kelco company?',
'answer': ['Vice President', 'Vice President - Marketing and Sales']},
{'question': 'When did Mr. McDermott join Kelco?', 'answer': ['1958']},
{'question': 'What was his first designation, when he joined Kelco?',
'answer': ['Sales Representative and Technical Advisor for Canada',
'Sales Representative and Technical Advisor']},
{'question': 'What was the first company he worked for, before joining Kelco?',
'answer': ['National Starch and Chemical Company']}]}
```
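For what it's worth, the `AssertionError` above is raised because `gt_parse` must be a dict (`isinstance(ground_truth["gt_parse"], dict)`), while my example stores a list. Below is a minimal sketch of wrapping the QA pairs into a dict; the `questions`/`answers` key names follow the format quoted at the top of this issue and are an assumption about what the fine-tuning script actually expects.

```python
# A shortened version of the QA pairs shown above.
qa_pairs = [
    {"question": "What is the Food Update no.?", "answer": ["XIV", "Food Update XIV"]},
    {"question": "When did Mr. McDermott join Kelco?", "answer": ["1958"]},
]

# gt_parse must be a dict (DonutDataset asserts isinstance(..., dict)),
# so wrap the pairs instead of storing the list directly.
ground_truth = {
    "gt_parse": {
        "questions": [p["question"] for p in qa_pairs],
        "answers": [p["answer"] for p in qa_pairs],
    }
}

# Mirrors the failing check in donut/util.py.
assert isinstance(ground_truth["gt_parse"], dict)
```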
I followed the same format as shown in the repository, and it errors.
I also followed the format given by @estaudere below, but the result is not as I expected. The prediction results look like this:
```
"predictions": [{"text_sequence": " What is the \u2018 salaries period with number \u20185\u2019? <sep/> remuneration to Director <sep/> remuneration to Director <sep/> remuneration to Director <sep/> Rs. ## <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs. <sep/> Rs"}
```
Can someone help me with how to create the proper ground truth for multiple questions for an image? Thank you for your help! | closed | 2023-02-12T13:43:34Z | 2023-09-27T06:49:09Z | https://github.com/clovaai/donut/issues/139 | [] | wdprsto | 4
deepspeedai/DeepSpeed | pytorch | 6,567 | Something goes wrong when running "aio_" and "gds_" files (DeepNVMe) | **Describe the bug**
I couldn't run DeepNVMe demo properly.
It shows:
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/root/anaconda3/envs/deepspeed/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 2105, in _run_ninja_build
subprocess.run(
File "/root/anaconda3/envs/deepspeed/lib/python3.9/subprocess.py", line 528, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
It seems that something wrong about ninja, there is short of "build.ninja".
Anybody suffer this situation?
**ds_report output**
[2024-09-24 17:35:01,773] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
async_io ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
[WARNING] NVIDIA Inference is only supported on Ampere and newer architectures
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
gds .................... [NO] ....... [OKAY]
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
inference_core_ops ..... [NO] ....... [NO]
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
cutlass_ops ............ [NO] ....... [NO]
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
transformer_inference .. [NO] ....... [NO]
quantizer .............. [NO] ....... [OKAY]
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
ragged_device_ops ...... [NO] ....... [NO]
[WARNING] NVIDIA Inference is only supported on Pascal and newer architectures
ragged_ops ............. [NO] ....... [NO]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.4
[WARNING] using untested triton version (3.0.0), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/root/anaconda3/envs/deepspeed/lib/python3.9/site-packages/torch']
torch version .................... 2.4.1+cu121
deepspeed install path ........... ['/root/anaconda3/envs/deepspeed/lib/python3.9/site-packages/deepspeed']
deepspeed info ................... 0.15.1, unknown, unknown
torch cuda version ............... 12.1
torch hip version ................ None
nvcc version ..................... 12.1
deepspeed wheel compiled w. ...... torch 2.4, cuda 12.1
shared memory (/dev/shm) size .... 125.75 GB
**System info (please complete the following information):**
- OS: Ubuntu 22.04
- Python 3.9.18
**conda list**
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
_sysroot_linux-64_curr_repodata_hack 3 h69a702a_16 conda-forge
annotated-types 0.7.0 pypi_0 pypi
binutils_impl_linux-64 2.40 ha1999f0_7 conda-forge
binutils_linux-64 2.40 hb3c18ed_3 conda-forge
bzip2 1.0.8 h4bc722e_7 conda-forge
c-ares 1.19.1 h5eee18b_0 anaconda
ca-certificates 2024.8.30 hbcca054_0 conda-forge
cmake 3.26.4 h96355d8_0 anaconda
cuda 12.1.0 0 nvidia/label/cuda-12.1.0
cuda-cccl 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-command-line-tools 12.1.0 0 nvidia/label/cuda-12.1.0
cuda-compiler 12.1.0 0 nvidia/label/cuda-12.1.0
cuda-cudart 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-cudart-dev 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-cudart-static 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-cuobjdump 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-cupti 12.1.62 0 nvidia/label/cuda-12.1.0
cuda-cupti-static 12.1.62 0 nvidia/label/cuda-12.1.0
cuda-cuxxfilt 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-demo-suite 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-documentation 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-driver-dev 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-gdb 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-libraries 12.1.0 0 nvidia/label/cuda-12.1.0
cuda-libraries-dev 12.1.0 0 nvidia/label/cuda-12.1.0
cuda-libraries-static 12.1.0 0 nvidia/label/cuda-12.1.0
cuda-nsight 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-nsight-compute 12.1.0 0 nvidia/label/cuda-12.1.0
cuda-nvcc 12.1.105 0 nvidia
cuda-nvdisasm 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-nvml-dev 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-nvprof 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-nvprune 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-nvrtc 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-nvrtc-dev 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-nvrtc-static 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-nvtx 12.1.66 0 nvidia/label/cuda-12.1.0
cuda-nvvp 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-opencl 12.1.56 0 nvidia/label/cuda-12.1.0
cuda-opencl-dev 12.1.56 0 nvidia/label/cuda-12.1.0
cuda-profiler-api 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-runtime 12.1.0 0 nvidia/label/cuda-12.1.0
cuda-sanitizer-api 12.1.55 0 nvidia/label/cuda-12.1.0
cuda-toolkit 12.1.0 0 nvidia/label/cuda-12.1.0
cuda-tools 12.1.0 0 nvidia/label/cuda-12.1.0
cuda-visual-tools 12.1.0 0 nvidia/label/cuda-12.1.0
deepspeed 0.15.1 pypi_0 pypi
expat 2.6.3 h6a678d5_0 anaconda
filelock 3.16.1 pypi_0 pypi
fsspec 2024.9.0 pypi_0 pypi
gcc 14.1.0 h6f9ffa1_1 conda-forge
gcc_impl_linux-64 14.1.0 h3c94d91_1 conda-forge
gcc_linux-64 14.1.0 h3f71edc_3 conda-forge
gds-tools 1.6.0.25 0 nvidia/label/cuda-12.1.0
gxx 14.1.0 h6f9ffa1_1 conda-forge
gxx_impl_linux-64 14.1.0 h8d00ecb_1 conda-forge
gxx_linux-64 14.1.0 hc55ae77_3 conda-forge
hjson 3.1.0 pypi_0 pypi
jinja2 3.1.4 pypi_0 pypi
kernel-headers_linux-64 3.10.0 h4a8ded7_16 conda-forge
krb5 1.20.1 h143b758_1 anaconda
ld_impl_linux-64 2.40 hf3520f5_7 conda-forge
libcublas 12.1.0.26 0 nvidia/label/cuda-12.1.0
libcublas-dev 12.1.0.26 0 nvidia/label/cuda-12.1.0
libcublas-static 12.1.0.26 0 nvidia/label/cuda-12.1.0
libcufft 11.0.2.4 0 nvidia/label/cuda-12.1.0
libcufft-dev 11.0.2.4 0 nvidia/label/cuda-12.1.0
libcufft-static 11.0.2.4 0 nvidia/label/cuda-12.1.0
libcufile 1.6.0.25 0 nvidia/label/cuda-12.1.0
libcufile-dev 1.6.0.25 0 nvidia/label/cuda-12.1.0
libcufile-static 1.6.0.25 0 nvidia/label/cuda-12.1.0
libcurand 10.3.2.56 0 nvidia/label/cuda-12.1.0
libcurand-dev 10.3.2.56 0 nvidia/label/cuda-12.1.0
libcurand-static 10.3.2.56 0 nvidia/label/cuda-12.1.0
libcurl 7.88.1 h251f7ec_2 anaconda
libcusolver 11.4.4.55 0 nvidia/label/cuda-12.1.0
libcusolver-dev 11.4.4.55 0 nvidia/label/cuda-12.1.0
libcusolver-static 11.4.4.55 0 nvidia/label/cuda-12.1.0
libcusparse 12.0.2.55 0 nvidia/label/cuda-12.1.0
libcusparse-dev 12.0.2.55 0 nvidia/label/cuda-12.1.0
libcusparse-static 12.0.2.55 0 nvidia/label/cuda-12.1.0
libedit 3.1.20230828 h5eee18b_0 anaconda
libev 4.33 h7f8727e_1 anaconda
libffi 3.4.2 h7f98852_5 conda-forge
libgcc 14.1.0 h77fa898_1 conda-forge
libgcc-devel_linux-64 14.1.0 h5d3d1c9_101 conda-forge
libgcc-ng 14.1.0 h69a702a_1 conda-forge
libgomp 14.1.0 h77fa898_1 conda-forge
libnghttp2 1.57.0 h2d74bed_0 anaconda
libnpp 12.0.2.50 0 nvidia/label/cuda-12.1.0
libnpp-dev 12.0.2.50 0 nvidia/label/cuda-12.1.0
libnpp-static 12.0.2.50 0 nvidia/label/cuda-12.1.0
libnsl 2.0.1 hd590300_0 conda-forge
libnvjitlink 12.1.55 0 nvidia/label/cuda-12.1.0
libnvjitlink-dev 12.1.55 0 nvidia/label/cuda-12.1.0
libnvjpeg 12.1.0.39 0 nvidia/label/cuda-12.1.0
libnvjpeg-dev 12.1.0.39 0 nvidia/label/cuda-12.1.0
libnvjpeg-static 12.1.0.39 0 nvidia/label/cuda-12.1.0
libnvvm-samples 12.1.55 0 nvidia/label/cuda-12.1.0
libsanitizer 14.1.0 hcba0ae0_1 conda-forge
libsqlite 3.45.2 h2797004_0 conda-forge
libssh2 1.11.0 h251f7ec_0 anaconda
libstdcxx 14.1.0 hc0a3c3a_1 conda-forge
libstdcxx-devel_linux-64 14.1.0 h5d3d1c9_101 conda-forge
libstdcxx-ng 14.1.0 h4852527_1 conda-forge
libuuid 2.38.1 h0b41bf4_0 conda-forge
libuv 1.48.0 h5eee18b_0 anaconda
libxcrypt 4.4.36 hd590300_1 conda-forge
libzlib 1.2.13 h4ab18f5_6 conda-forge
lz4-c 1.9.4 h6a678d5_1 anaconda
markupsafe 2.1.5 pypi_0 pypi
mpmath 1.3.0 pypi_0 pypi
ncurses 6.4 h6a678d5_0 http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
networkx 3.2.1 pypi_0 pypi
ninja 1.11.1.1 pypi_0 pypi
nsight-compute 2023.1.0.15 0 nvidia/label/cuda-12.1.0
numpy 2.0.2 pypi_0 pypi
nvidia-cublas-cu12 12.1.3.1 pypi_0 pypi
nvidia-cuda-cupti-cu12 12.1.105 pypi_0 pypi
nvidia-cuda-nvrtc-cu12 12.1.105 pypi_0 pypi
nvidia-cuda-runtime-cu12 12.1.105 pypi_0 pypi
nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
nvidia-cufft-cu12 11.0.2.54 pypi_0 pypi
nvidia-curand-cu12 10.3.2.106 pypi_0 pypi
nvidia-cusolver-cu12 11.4.5.107 pypi_0 pypi
nvidia-cusparse-cu12 12.1.0.106 pypi_0 pypi
nvidia-ml-py 12.560.30 pypi_0 pypi
nvidia-nccl-cu12 2.20.5 pypi_0 pypi
nvidia-nvjitlink-cu12 12.6.68 pypi_0 pypi
nvidia-nvtx-cu12 12.1.105 pypi_0 pypi
openssl 3.3.2 hb9d3cd8_0 conda-forge
packaging 24.1 pypi_0 pypi
pillow 10.4.0 pypi_0 pypi
pip 24.2 py39h06a4308_0 http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
psutil 6.0.0 pypi_0 pypi
py-cpuinfo 9.0.0 pypi_0 pypi
pydantic 2.9.2 pypi_0 pypi
pydantic-core 2.23.4 pypi_0 pypi
python 3.9.18 h0755675_1_cpython conda-forge
readline 8.2 h5eee18b_0 http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
rhash 1.4.3 hdbd6064_0 anaconda
setuptools 75.1.0 py39h06a4308_0 http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
sqlite 3.45.2 h2c6b66d_0 conda-forge
sympy 1.13.3 pypi_0 pypi
sysroot_linux-64 2.17 h4a8ded7_16 conda-forge
tk 8.6.13 noxft_h4845f30_101 conda-forge
torch 2.4.1 pypi_0 pypi
torchaudio 2.4.1 pypi_0 pypi
torchvision 0.19.1 pypi_0 pypi
tqdm 4.66.5 pypi_0 pypi
triton 3.0.0 pypi_0 pypi
typing-extensions 4.12.2 pypi_0 pypi
tzdata 2024a h04d1e81_0 http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
wheel 0.44.0 py39h06a4308_0 http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
xz 5.4.6 h5eee18b_1 http://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main
zlib 1.2.13 h4ab18f5_6 conda-forge
zstd 1.5.5 hc292b87_2 anaconda
| closed | 2024-09-24T10:14:45Z | 2024-10-24T16:19:24Z | https://github.com/deepspeedai/DeepSpeed/issues/6567 | [
"bug",
"training"
] | niebowen666 | 15 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 83 | During training, an error is raised: Saw a non-null label (index >= num_classes - 1) following a null label, batch: 0 num_classes: 1422 labels: 671,910,489,228 | Traceback (most recent call last):
File "D:/pythonWorkSpace/Project/speechReognized/train_mspeech.py", line 55, in <module>
ms.TrainModel(datapath, epoch = 50, batch_size = 8, save_step = 500)
File "D:\pythonWorkSpace\Project\speechReognized\SpeechModel261.py", line 196, in TrainModel
self._model.fit_generator(yielddatas, save_step)
File "D:\soft\python3\lib\site-packages\keras\legacy\interfaces.py", line 87, in wrapper
return func(*args, **kwargs)
File "D:\soft\python3\lib\site-packages\keras\engine\training.py", line 2096, in fit_generator
class_weight=class_weight)
File "D:\soft\python3\lib\site-packages\keras\engine\training.py", line 1814, in train_on_batch
outputs = self.train_function(ins)
File "D:\soft\python3\lib\site-packages\keras\backend\tensorflow_backend.py", line 2352, in __call__
**self.session_kwargs)
File "D:\soft\python3\lib\site-packages\tensorflow\python\client\session.py", line 905, in run
run_metadata_ptr)
File "D:\soft\python3\lib\site-packages\tensorflow\python\client\session.py", line 1137, in _run
feed_dict_tensor, options, run_metadata)
File "D:\soft\python3\lib\site-packages\tensorflow\python\client\session.py", line 1355, in _do_run
options, run_metadata)
File "D:\soft\python3\lib\site-packages\tensorflow\python\client\session.py", line 1374, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Saw a non-null label (index >= num_classes - 1) following a null label, batch: 0 num_classes: 1422 labels: 671,910,489,228
[[Node: ctc/CTCLoss = CTCLoss[ctc_merge_repeated=true, ignore_longer_outputs_than_inputs=false, preprocess_collapse_repeated=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](ctc/Log/_305, ctc/ToInt64/_307, ctc/ToInt32_2/_309, ctc/ToInt32_1/_311)]] | closed | 2019-03-12T01:27:54Z | 2019-07-21T17:09:15Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/83 | [] | feifaxiaoming | 3 |
iMerica/dj-rest-auth | rest-api | 345 | Custom PasswordResetSerializer to send different email message not working | When I switched from django rest auth to dj-rest-auth and tried to test password reset request it sent default email message instead of custom one. Basically I just send request to /api/rest-auth/password/reset/ and with django rest auth everything worked as expected.
Here's the code:
mainapp/serializers.py
```
from dj_rest_auth.serializers import PasswordResetSerializer


class CustomPasswordResetSerializer(PasswordResetSerializer):
    def get_email_options(self):
        return {
            'html_email_template_name': 'messages/password_reset_message.html',
        }
```
settings.py:
```
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [os.path.join(BASE_DIR, 'templates')],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': [
        'rest_framework.permissions.IsAuthenticatedOrReadOnly',
    ],
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'dj_rest_auth.jwt_auth.JWTCookieAuthentication',
    ],
}

REST_AUTH_SERIALIZERS = {
    'PASSWORD_RESET_SERIALIZER': 'mainapp.serializers.CustomPasswordResetSerializer',
}
```
templates/messages/password_reset_message.html
```
<p>Test message</p>
```
urls.py
```
router = DefaultRouter()
router.register('entities', EntityViewSet, basename='users')

urlpatterns = [
    # API urls
    path('api/', include(router.urls)),

    # REST Auth
    path('api/dj-rest-auth/', include('dj_rest_auth.urls')),
    path('api/dj-rest-auth/registration/', include('dj_rest_auth.registration.urls')),
]
```
Maybe there were some changes in the `get_email_options` method?
How can I change the default email message if this doesn't work? | closed | 2021-12-02T09:36:41Z | 2021-12-02T14:11:18Z | https://github.com/iMerica/dj-rest-auth/issues/345 | [] | Lomank123 | 1
assafelovic/gpt-researcher | automation | 879 | Make the translated READMEs more consistent | The four translated versions of README.md are inconsistent.
Some examples:
- The English and Korean versions have a logo at the top, while the Japanese and Chinese versions have an H1 title.
- The Japanese and Chinese versions are missing the link to docs, colab, etc. at the top
- The link to colab in the Korean version is inconsistent with the current folder structure: it's pointing to `examples/pip-run.ipynb` which has been moved to `docs/docs/examples/pip-run.ipynb`
I would like to create a PR so that all four versions have the same items and links at the top to avoid people missing some info.
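As a rough sketch of how such consistency could be checked automatically (stdlib only; treating "same set of links" as the consistency criterion is my assumption), the PR could compare the URLs each README contains:

```python
import re

URL_RE = re.compile(r"https?://[^\s)]+")

def url_set(markdown_text):
    """Collect the set of links appearing in one README's text."""
    return set(URL_RE.findall(markdown_text))

# In the real check these strings would be read from README.md,
# README-ko_KR.md, README-ja_JP.md, and README-zh_CN.md.
english = "[docs](https://docs.example.com) [colab](https://colab.example.com)"
korean = "[docs](https://docs.example.com)"

missing = url_set(english) - url_set(korean)
print(missing)  # links the translation still lacks
```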
Links of READMEs:
- https://github.com/assafelovic/gpt-researcher/blob/master/README.md
- https://github.com/assafelovic/gpt-researcher/blob/master/README-ko_KR.md
- https://github.com/assafelovic/gpt-researcher/blob/master/README-ja_JP.md
- https://github.com/assafelovic/gpt-researcher/blob/master/README-zh_CN.md | closed | 2024-10-01T14:29:55Z | 2024-10-10T10:57:01Z | https://github.com/assafelovic/gpt-researcher/issues/879 | [] | kevin1kevin1k | 5 |
ageitgey/face_recognition | machine-learning | 777 | Freezes at running setup.py install for dlib | * face_recognition version:newest
* Python version:newest
* Operating System:ubuntu
I tried to install face_recognition using pip, but it freezes at "running setup.py bdist_wheel for dlib". What should I do?
Thanks...
| open | 2019-03-19T11:03:58Z | 2023-08-16T19:04:41Z | https://github.com/ageitgey/face_recognition/issues/777 | [] | mattlander | 9 |
scanapi/scanapi | rest-api | 150 | Change pip to Poetry | Maybe it would be nice if we changed from pip to Poetry. It has many features that look nice, such as:
- `poetry publish` to publish the package on PyPI.
- Separating dev dependencies from production dependencies.
- Package validation
- Scripts to interact with the code
- etc...
https://python-poetry.org/ | closed | 2020-05-24T16:19:13Z | 2020-07-24T21:23:19Z | https://github.com/scanapi/scanapi/issues/150 | [
"Automation"
] | marcuxyz | 0 |
noirbizarre/flask-restplus | flask | 26 | [proposal] better support for security requirements | What I'm proposing, and willing to implement and contribute, is a `SecurityRequirement` class, possibly with subclasses like `OAuth2SecurityRequirement`. Defining an API with these requirements would look like this:
```python
oauth2_req = OAuth2SecurityRequirement(name="needsuser", scopes=["scope1", "scope2"], flow="implicit", authorization_uri="http://example.com/authorize")
apikey_req = ApiKeyRequirement(name="apikey", param_name="key", in="query")
```

To require either one of these api requirements api-wide, you'd pass an array of instances to the API constructor:

```python
Api(title="my api", version="v1", security_requirements=[
    apikey_req,
    oauth2_req("scope1")
])
```
Note that oauth2 requirement instances are callable, so you can pass in required scopes.
I'd be very much willing to implement this and contribute the code back to this project if you're interested.
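For concreteness, here is a minimal stdlib sketch of the shape such classes could take. It is purely illustrative, not an actual implementation; since `in` is a reserved word in Python, the sketch uses `location` for the apiKey parameter placement, and the dict shapes loosely follow Swagger 2.0 security definitions.

```python
class ApiKeyRequirement:
    """Sketch: renders an apiKey security scheme plus a requirement entry."""
    def __init__(self, name, param_name, location):
        self.name, self.param_name, self.location = name, param_name, location

    def scheme(self):
        # Swagger 2.0 securityDefinitions entry for an apiKey scheme.
        return {self.name: {"type": "apiKey", "name": self.param_name, "in": self.location}}

    def requirement(self):
        # apiKey requirements carry no scopes.
        return {self.name: []}


class OAuth2SecurityRequirement:
    """Sketch: an oauth2 requirement whose instances are callable to bind scopes."""
    def __init__(self, name, scopes, flow, authorization_uri):
        self.name, self.scopes = name, scopes
        self.flow, self.authorization_uri = flow, authorization_uri
        self.required_scopes = []

    def __call__(self, *scopes):
        # Calling an instance narrows it to the scopes a given API/endpoint needs.
        self.required_scopes = list(scopes)
        return self

    def requirement(self):
        return {self.name: self.required_scopes}


req = OAuth2SecurityRequirement("needsuser", ["scope1", "scope2"], "implicit",
                                "http://example.com/authorize")
print(req("scope1").requirement())  # {'needsuser': ['scope1']}
```

A real implementation would likely return a scoped copy from `__call__` rather than mutating the shared instance, but this shows the intended ergonomics.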
| closed | 2015-03-02T22:07:56Z | 2018-09-18T19:10:54Z | https://github.com/noirbizarre/flask-restplus/issues/26 | [
"enhancement"
] | frederikcreemers | 14 |
huggingface/datasets | nlp | 6,558 | OSError: image file is truncated (1 bytes not processed) #28323 | ### Describe the bug
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
Cell In[24], line 28
23 return example
25 # Filter the dataset
26 # filtered_dataset = dataset.filter(contains_number)
27 # Add the 'label' field in the dataset
---> 28 labeled_dataset = dataset.filter(contains_number).map(add_label)
29 # View the structure of the updated dataset
30 print(labeled_dataset)
File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:975, in DatasetDict.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc)
972 if cache_file_names is None:
973 cache_file_names = {k: None for k in self}
974 return DatasetDict(
--> 975 {
976 k: dataset.filter(
977 function=function,
978 with_indices=with_indices,
979 input_columns=input_columns,
980 batched=batched,
981 batch_size=batch_size,
982 keep_in_memory=keep_in_memory,
983 load_from_cache_file=load_from_cache_file,
984 cache_file_name=cache_file_names[k],
985 writer_batch_size=writer_batch_size,
986 fn_kwargs=fn_kwargs,
987 num_proc=num_proc,
988 desc=desc,
989 )
990 for k, dataset in self.items()
991 }
992 )
File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:976, in <dictcomp>(.0)
972 if cache_file_names is None:
973 cache_file_names = {k: None for k in self}
974 return DatasetDict(
975 {
--> 976 k: dataset.filter(
977 function=function,
978 with_indices=with_indices,
979 input_columns=input_columns,
980 batched=batched,
981 batch_size=batch_size,
982 keep_in_memory=keep_in_memory,
983 load_from_cache_file=load_from_cache_file,
984 cache_file_name=cache_file_names[k],
985 writer_batch_size=writer_batch_size,
986 fn_kwargs=fn_kwargs,
987 num_proc=num_proc,
988 desc=desc,
989 )
990 for k, dataset in self.items()
991 }
992 )
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs)
550 self_format = {
551 "type": self._format_type,
552 "format_kwargs": self._format_kwargs,
553 "columns": self._format_columns,
554 "output_all_columns": self._output_all_columns,
555 }
556 # apply actual function
--> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
559 # re-apply format to the output
File /usr/local/lib/python3.10/dist-packages/datasets/fingerprint.py:481, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
477 validate_fingerprint(kwargs[fingerprint_name])
479 # Call actual function
--> 481 out = func(dataset, *args, **kwargs)
483 # Update fingerprint of in-place transforms + update in-place history of transforms
485 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3623, in Dataset.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
3620 if len(self) == 0:
3621 return self
-> 3623 indices = self.map(
3624 function=partial(
3625 get_indices_from_mask_function, function, batched, with_indices, input_columns, self._indices
3626 ),
3627 with_indices=True,
3628 features=Features({"indices": Value("uint64")}),
3629 batched=True,
3630 batch_size=batch_size,
3631 remove_columns=self.column_names,
3632 keep_in_memory=keep_in_memory,
3633 load_from_cache_file=load_from_cache_file,
3634 cache_file_name=cache_file_name,
3635 writer_batch_size=writer_batch_size,
3636 fn_kwargs=fn_kwargs,
3637 num_proc=num_proc,
3638 suffix_template=suffix_template,
3639 new_fingerprint=new_fingerprint,
3640 input_columns=input_columns,
3641 desc=desc or "Filter",
3642 )
3643 new_dataset = copy.deepcopy(self)
3644 new_dataset._indices = indices.data
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:592, in transmit_tasks.<locals>.wrapper(*args, **kwargs)
590 self: "Dataset" = kwargs.pop("self")
591 # apply actual function
--> 592 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
593 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
594 for dataset in datasets:
595 # Remove task templates if a column mapping of the template is no longer valid
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs)
550 self_format = {
551 "type": self._format_type,
552 "format_kwargs": self._format_kwargs,
553 "columns": self._format_columns,
554 "output_all_columns": self._output_all_columns,
555 }
556 # apply actual function
--> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
559 # re-apply format to the output
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3093, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
3087 if transformed_dataset is None:
3088 with hf_tqdm(
3089 unit=" examples",
3090 total=pbar_total,
3091 desc=desc or "Map",
3092 ) as pbar:
-> 3093 for rank, done, content in Dataset._map_single(**dataset_kwargs):
3094 if done:
3095 shards_done += 1
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3470, in Dataset._map_single(shard, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset)
3466 indices = list(
3467 range(*(slice(i, i + batch_size).indices(shard.num_rows)))
3468 ) # Something simpler?
3469 try:
-> 3470 batch = apply_function_on_filtered_inputs(
3471 batch,
3472 indices,
3473 check_same_num_examples=len(shard.list_indexes()) > 0,
3474 offset=offset,
3475 )
3476 except NumExamplesMismatchError:
3477 raise DatasetTransformationNotAllowedError(
3478 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it."
3479 ) from None
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3349, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(pa_inputs, indices, check_same_num_examples, offset)
3347 if with_rank:
3348 additional_args += (rank,)
-> 3349 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
3350 if isinstance(processed_inputs, LazyDict):
3351 processed_inputs = {
3352 k: v for k, v in processed_inputs.data.items() if k not in processed_inputs.keys_to_format
3353 }
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:6212, in get_indices_from_mask_function(function, batched, with_indices, input_columns, indices_mapping, *args, **fn_kwargs)
6209 if input_columns is None:
6210 # inputs only contains a batch of examples
6211 batch: dict = inputs[0]
-> 6212 num_examples = len(batch[next(iter(batch.keys()))])
6213 for i in range(num_examples):
6214 example = {key: batch[key][i] for key in batch}
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:272, in LazyDict.__getitem__(self, key)
270 value = self.data[key]
271 if key in self.keys_to_format:
--> 272 value = self.format(key)
273 self.data[key] = value
274 self.keys_to_format.remove(key)
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:375, in LazyBatch.format(self, key)
374 def format(self, key):
--> 375 return self.formatter.format_column(self.pa_table.select([key]))
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:442, in PythonFormatter.format_column(self, pa_table)
440 def format_column(self, pa_table: pa.Table) -> list:
441 column = self.python_arrow_extractor().extract_column(pa_table)
--> 442 column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
443 return column
File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:218, in PythonFeaturesDecoder.decode_column(self, column, column_name)
217 def decode_column(self, column: list, column_name: str) -> list:
--> 218 return self.features.decode_column(column, column_name) if self.features else column
File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in Features.decode_column(self, column, column_name)
1938 def decode_column(self, column: list, column_name: str):
1939 """Decode column with custom feature decoding.
1940
1941 Args:
(...)
1948 `list[Any]`
1949 """
1950 return (
-> 1951 [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
1952 if self._column_requires_decoding[column_name]
1953 else column
1954 )
File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in <listcomp>(.0)
1938 def decode_column(self, column: list, column_name: str):
1939 """Decode column with custom feature decoding.
1940
1941 Args:
(...)
1948 `list[Any]`
1949 """
1950 return (
-> 1951 [decode_nested_example(self[column_name], value) if value is not None else None for value in column]
1952 if self._column_requires_decoding[column_name]
1953 else column
1954 )
File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id)
1336 elif isinstance(schema, (Audio, Image)):
1337 # we pass the token to read and decode files from private repositories in streaming mode
1338 if obj is not None and schema.decode:
-> 1339 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1340 return obj
File /usr/local/lib/python3.10/dist-packages/datasets/features/image.py:185, in Image.decode_example(self, value, token_per_repo_id)
183 else:
184 image = PIL.Image.open(BytesIO(bytes_))
--> 185 image.load() # to avoid "Too many open files" errors
186 return image
File /usr/local/lib/python3.10/dist-packages/PIL/ImageFile.py:254, in ImageFile.load(self)
252 break
253 else:
--> 254 raise OSError(
255 "image file is truncated "
256 f"({len(b)} bytes not processed)"
257 )
259 b = b + s
260 n, err_code = decoder.decode(b)
OSError: image file is truncated (1 bytes not processed)
```
### Steps to reproduce the bug
```
from datasets import load_dataset

dataset = load_dataset("mehul7/captioned_military_aircraft")

from transformers import AutoImageProcessor

checkpoint = "microsoft/resnet-50"
image_processor = AutoImageProcessor.from_pretrained(checkpoint)

import re
from PIL import Image
import io

def contains_number(example):
    try:
        image = Image.open(io.BytesIO(example["image"]['bytes']))
        t = image_processor(images=image, return_tensors="pt")['pixel_values']
    except Exception as e:
        print(f"Error processing image: {example['text']}")
        return False
    return bool(re.search(r'\d', example['text']))

# Define a function to add the 'label' field
def add_label(example):
    lab = example['text'].split()
    temp = 'NOT'
    for item in lab:
        if str(item[-1]).isdigit():
            temp = item
            break
    example['label'] = temp
    return example

# Filter the dataset
# filtered_dataset = dataset.filter(contains_number)

# Add the 'label' field in the dataset
labeled_dataset = dataset.filter(contains_number).map(add_label)

# View the structure of the updated dataset
print(labeled_dataset)
```
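
The text-side logic above can be exercised without touching any images (pure Python; I reworked the two functions to take the caption string directly, and the sample captions are made up):

```python
import re

def caption_contains_number(text: str) -> bool:
    # same check as contains_number above, minus the image decode
    return bool(re.search(r"\d", text))

def extract_label(text: str) -> str:
    # same logic as add_label: first whitespace-separated token ending in a digit, else "NOT"
    for token in text.split():
        if token[-1].isdigit():
            return token
    return "NOT"

print(extract_label("a photo of an F-16 on the runway"))  # F-16
print(extract_label("aircraft parked at dusk"))           # NOT
```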
### Expected behavior
The filter and map should run to completion and form the `label` column.
same as : https://www.kaggle.com/code/jiabaowangts/dataset-air/notebook
### Environment info
Kaggle notebook P100 | closed | 2024-01-04T02:15:13Z | 2024-02-21T00:38:12Z | https://github.com/huggingface/datasets/issues/6558 | [] | andysingal | 1 |
aminalaee/sqladmin | fastapi | 442 | Support OAuth integration in AuthenticationBackend | ### Discussed in https://github.com/aminalaee/sqladmin/discussions/439
<div type='discussions-op-text'>
<sup>Originally posted by **simonsax** March 3, 2023</sup>
I have a simple FastAPI app with the sqladmin backend for which I'd like to enable SSO login functionality, but somehow I'm unable to redirect to the authentication_url of my OIDC provider. I'm using authlib as the OAuth client.
From my understanding, when accessing /admin the `authenticate` function is run. In this function I would check for a token or the session and, if the user is not logged in yet, call the login endpoint (which redirects to the auth_url of my OIDC provider) or call the OIDC authentication method directly. But somehow I'm not redirected to the corresponding URL, probably because the function expects a boolean return value? Any idea how to solve that problem? Otherwise this could be a cool feature :-)
If I call the `/sso_login` endpoint directly auth flow works fine.
Thanks a lot for your valuable inputs.
```
...
class MyBackend(AuthenticationBackend):
    async def login(self, request: Request):
        # not used, as it is only needed for the admin form (which I try not to use -> SSO instead)
        pass

    async def logout(self, request: Request) -> bool:
        # Usually you'd want to just clear the session
        request.session.clear()
        return True

    async def authenticate(self, request: Request) -> bool:
        token = request.session.get("token")
        if not token:
            redirect_uri = request.url_for('sso_login')
            return RedirectResponse(url="/sso_login")
            # return await oauth.OIDC.authorize_redirect(request, redirect_uri)  -> 2nd option: call the auth_url directly
            # return False  -> according to the docs, a boolean return value
        # token = await oauth.OIDC.authorize_access_token(request)
        # Check the token -> e.g. oauth.OIDC.authorize_access_token(request)
        return True
#add sqladmin to app
admin_backend = Admin(app=app, engine=engine, authentication_backend=MyBackend("dfkdvdfsd"))


@app.get('/sso_login')
async def sso_login(request: Request):
    redirect_uri = request.url_for('auth_callback')
    return await oauth.OIDC.authorize_redirect(request, redirect_uri)
```</div> | closed | 2023-03-07T11:11:19Z | 2023-03-11T07:48:23Z | https://github.com/aminalaee/sqladmin/issues/442 | [] | aminalaee | 4 |
noirbizarre/flask-restplus | flask | 300 | Swaggerui assets missing when running dev version of flask_restplus | When I install the dev version of flask_restplus per the docs (clone the repo and `pip install -e .[dev,test]`), the swaggerui assets are missing, and when you start your Flask app and try to navigate to it you get a blank page and a bunch of 404s:
```
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/css/typography.css HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/css/reset.css HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/css/screen.css HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/lib/object-assign-pollyfill.js HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/lib/jquery-1.8.0.min.js HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/lib/jquery.slideto.min.js HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/lib/jquery.wiggle.min.js HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/lib/jquery.ba-bbq.min.js HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/lib/handlebars-4.0.5.js HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/lib/lodash.min.js HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/lib/backbone-min.js HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/swagger-ui.js HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/lib/highlight.9.1.0.pack.js HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/lib/highlight.9.1.0.pack_extended.js HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/lib/jsoneditor.min.js HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/lib/marked.js HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /swaggerui/bower/swagger-ui/dist/lib/swagger-oauth.js HTTP/1.1" 404 -
127.0.0.1 - - [30/Jun/2017 10:46:10] "GET /favicon.ico HTTP/1.1" 404 -
```
It seems the static/bower/swagger-ui/.... directory tree doesn't exist like it does when you install 0.10.1 normally via pip. What's the fix for this? | open | 2017-06-30T15:52:13Z | 2019-05-07T14:33:51Z | https://github.com/noirbizarre/flask-restplus/issues/300 | [] | franfabrizio | 7 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 799 | Error when running the training file! | 
I installed pytorch, tf, etc. as required (Python version 3.7), but training raises an error.

| open | 2024-04-11T12:42:13Z | 2024-10-23T07:23:55Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/799 | [] | buyuadrink | 1 |
litestar-org/litestar | pydantic | 3,762 | Bug: Swagger and Redoc docs don't work for Piccolo ORM in Litestar>2.11.0 | ### Description
Piccolo ORM has a feature for scaffolding simple ASGI applications for various ASGI frameworks. I notice that
`Swagger` and `Redoc` docs do not work with the latest version of Litestar. The latest working version is `Litestar==2.11.0`.
For scaffolding ASGI apps, we don't use `PiccoloDTO` but Piccolo's internal tool ([create_pydantic_model](https://piccolo-orm.readthedocs.io/en/latest/piccolo/serialization/index.html)) to create a Pydantic model from a Piccolo table, which has an `extra` property, but Litestar [Schema](https://github.com/litestar-org/litestar/blob/b18774922fedc86089d143d9a5484f393826557d/litestar/openapi/spec/schema.py#L41) does not have an `extra` key, and that causes a `ValueError` to be raised. I tried two things, and after each of them everything works.
1. adding an `extra` key to a Schema like this
```python
class Schema(BaseSchemaObject):
    ...
    extra: dict[str, Any] | None = None
```
2. or excluding the `extra` key from checking [here](https://github.com/litestar-org/litestar/blob/main/litestar/_openapi/schema_generation/schema.py#L595-L598) like this.
```python
if not hasattr(schema, schema_key) and schema_key != "extra":
    raise ValueError(
        f"`schema_extra` declares key `{schema_key}` which does not exist in `Schema` object"
    )
```
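
Standalone, the guard from option 2 behaves as intended; a plain dataclass stands in for litestar's `Schema` here, so the names and structure are simplified:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class FakeSchema:
    # minimal stand-in for litestar's Schema; the real field list is much longer
    title: Any = None
    description: Any = None

def apply_schema_extra(schema: FakeSchema, schema_extra: dict) -> None:
    for schema_key, value in schema_extra.items():
        # option 2 from above: tolerate the Pydantic-generated "extra" key
        if not hasattr(schema, schema_key) and schema_key != "extra":
            raise ValueError(
                f"`schema_extra` declares key `{schema_key}` which does not exist in `Schema` object"
            )
        if hasattr(schema, schema_key):
            setattr(schema, schema_key, value)

s = FakeSchema()
apply_schema_extra(s, {"title": "Band", "extra": {"nullable": True}})  # accepted, "extra" ignored
```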
I don't know if that's good enough, but these are just ideas. Any solution that enables Piccolo ORM to work with the latest Litestar version would be great. Thanks in advance.
### URL to code causing the issue
_No response_
### MCVE
```python
https://github.com/sinisaos/simple-piccolo
```
### Steps to reproduce
```bash
1. Clone repository
2. Install requirements
3. Start app with `python litestar_app.py`
4. Go to `http://localhost:8000/schema/swagger` and see error
```
### Screenshots
```bash
""
```
### Logs
```bash
ERROR - 2024-09-28 07:35:34,331 - litestar - config - Uncaught exception (connection_type=http, path=/schema/swagger):
Traceback (most recent call last):
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/middleware/_internal/exceptions/middleware.py", line 159, in __call__
await self.app(scope, receive, capture_response_started)
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_asgi/asgi_router.py", line 100, in __call__
await asgi_app(scope, receive, send)
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/routes/http.py", line 80, in handle
response = await self._get_response_for_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/routes/http.py", line 132, in _get_response_for_request
return await self._call_handler_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/routes/http.py", line 152, in _call_handler_function
response_data, cleanup_group = await self._get_response_data(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/routes/http.py", line 195, in _get_response_data
data = route_handler.fn(**parsed_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/plugin.py", line 161, in _handler
return plugin_.render(request, self.provide_openapi_schema())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/plugin.py", line 99, in provide_openapi_schema
self._openapi_schema = self.provide_openapi().to_schema()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/plugin.py", line 94, in provide_openapi
self._openapi = self._build_openapi()
^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/plugin.py", line 83, in _build_openapi
path_item = create_path_item_for_route(context, route)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/path_item.py", line 139, in create_path_item_for_route
return path_item_factory.create_path_item()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/path_item.py", line 44, in create_path_item
operation = self.create_operation_for_handler_method(route_handler, HttpMethod(http_method))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/path_item.py", line 68, in create_operation_for_handler_method
request_body = create_request_body(
^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/request_body.py", line 49, in create_request_body
schema = schema_creator.for_field_definition(data_field)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 333, in for_field_definition
result = self.for_plugin(field_definition, plugin_for_annotation)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 515, in for_plugin
schema = plugin.to_openapi_schema(field_definition=field_definition, schema_creator=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/contrib/pydantic/pydantic_schema_plugin.py", line 235, in to_openapi_schema
return self.for_pydantic_model(field_definition=field_definition, schema_creator=schema_creator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/contrib/pydantic/pydantic_schema_plugin.py", line 252, in for_pydantic_model
return schema_creator.create_component_schema(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 645, in create_component_schema
schema.properties = {k: self.for_field_definition(v) for k, v in property_fields.items()}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 645, in <dictcomp>
schema.properties = {k: self.for_field_definition(v) for k, v in property_fields.items()}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 361, in for_field_definition
return self.process_schema_result(field_definition, result) if isinstance(result, Schema) else result
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 596, in process_schema_result
raise ValueError(
ValueError: `schema_extra` declares key `extra` which does not exist in `Schema` object
```
### Litestar Version
2.12.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-09-28T06:04:21Z | 2025-03-20T15:54:56Z | https://github.com/litestar-org/litestar/issues/3762 | [
"Bug :bug:"
] | sinisaos | 2 |
sammchardy/python-binance | api | 1,243 | Basic ThreadedWebsocketManager example doesn't work. | **Describe the bug**
Basic example of ThreadedWebsocketManager doesn't work.
The console remains empty after long waiting.
**To Reproduce**
Code snippet to reproduce the behavior:
```py
import time
from binance import ThreadedWebsocketManager
api_key = ''
api_secret = ''
def main():
    symbol = 'BNBBTC'

    twm = ThreadedWebsocketManager(api_key=api_key, api_secret=api_secret)
    # start is required to initialise its internal loop
    twm.start()

    def handle_socket_message(msg):
        print(f"message type: {msg['e']}")
        print(msg)

    twm.start_kline_socket(callback=handle_socket_message, symbol=symbol)

    twm.start_depth_socket(callback=handle_socket_message, symbol=symbol)

    twm.join()


if __name__ == "__main__":
    main()
```
**Expected behavior**
I expected it to work :)
**Environment (please complete the following information):**
- Python version: 3.10.6 (64-bit)
- Virtual Env: Not using any
- OS: Windows 10
- python-binance: 1.0.16
| closed | 2022-09-02T19:17:11Z | 2024-10-29T14:59:55Z | https://github.com/sammchardy/python-binance/issues/1243 | [] | Enebz | 12 |
deepinsight/insightface | pytorch | 1,758 | A question about resuming ArcFace-Torch. | Hi, Thanks for your nice work!
I am going to fine-tune ArcFace-Torch using the checkpoints downloaded from BaiduYun (e8pw):
link: https://pan.baidu.com/share/init?surl=CL-l4zWqsI1oDuEEYVhj-g
file path: arcface_torch/ms1mv3_arcface_r50_fp16/
I notice that the MS1M-RetinaFace training dataset's "num_classes" is 93431. However, the num_classes in "weight" loaded from the files "rank_0_softmax_weigh.pt"~"rank_0_softmax_weigh.pt" are 11398, 11398, 11398, 11398, 11397, 11397, 11397, 11397, respectively. And the sum of them is 91180, which is not 93431.
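
Purely as arithmetic the mismatch checks out; the eight shard sizes sum to 91180, while an even split of 93431 classes over 8 ranks would give shards of 11679/11678 (the even-split layout is only my assumption of what 8 ranks should look like):

```python
shard_sizes = [11398, 11398, 11398, 11398, 11397, 11397, 11397, 11397]
print(sum(shard_sizes))  # 91180, not 93431

# what an even split of 93431 classes over 8 ranks would look like
num_classes, ranks = 93431, 8
even = [num_classes // ranks + (1 if r < num_classes % ranks else 0) for r in range(ranks)]
print(even)       # [11679, 11679, 11679, 11679, 11679, 11679, 11679, 11678]
print(sum(even))  # 93431
```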
Thus, when I set "resume=True", the "weight" and "weight_mom" cannot be resumed successfully, and they are then initialized randomly.
I wonder what is wrong with this?
Is it OK if I only load the pre-trained "backbone" but don't load the "weight" and "weight_mom"?
I will be very happy if you can kindly reply to me. | closed | 2021-09-17T09:54:33Z | 2023-05-23T12:51:59Z | https://github.com/deepinsight/insightface/issues/1758 | [] | csbhr | 5 |
ShishirPatil/gorilla | api | 136 | Website Assisted ? | What about "Website Assisted"? Is it coming soon?
| open | 2023-11-01T15:00:16Z | 2023-11-01T15:00:16Z | https://github.com/ShishirPatil/gorilla/issues/136 | [
"enhancement"
] | GuodongFan | 0 |
marshmallow-code/flask-marshmallow | rest-api | 136 | How to transform a list of objects into a list of strings? | I've a question.
I have these two schemas that are working almost as expected:
```python
from ..models import Municipality, State
from flask_marshmallow import Marshmallow
ma = Marshmallow()
class MunicipalitySchema(ma.ModelSchema):
    class Meta:
        model = Municipality
        fields = ('name',)


class StateSchema(ma.ModelSchema):
    class Meta:
        model = State
        fields = ('name', 'uf', 'municipalities')

    municipalities = ma.Nested(MunicipalitySchema, many=True)
```
So, when I dump the `State` object, it returns:
```json
{
  "name": "São Paulo",
  "uf": "SP",
  "municipalities": [
    {"name": "municipality1"},
    {"name": "municipality2"},
    {"name": "municipality3"},
    //...
  ]
}
```
It is ok, but I don't want to retrieve the `municipalities` as an array of objects. I would like to return it as an array of strings, considering the `name` property. Like this:
```json
{
  "name": "São Paulo",
  "uf": "SP",
  "municipalities": [
    "municipality1",
    "municipality2",
    "municipality3",
    //...
  ]
}
```
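
For what it's worth, the reshaping itself is easy as post-processing in plain Python (sketch below, using the sample payload from above), but I would rather express it in the schema; marshmallow's `Pluck` field looks related, though I'm not sure it applies here:

```python
state = {
    "name": "São Paulo",
    "uf": "SP",
    "municipalities": [
        {"name": "municipality1"},
        {"name": "municipality2"},
        {"name": "municipality3"},
    ],
}

# flatten each nested object down to its "name" value
flattened = {**state, "municipalities": [m["name"] for m in state["municipalities"]]}
print(flattened["municipalities"])  # ['municipality1', 'municipality2', 'municipality3']
```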
How can I get that result using Marshmallow? | closed | 2019-05-20T12:26:09Z | 2019-11-04T00:12:44Z | https://github.com/marshmallow-code/flask-marshmallow/issues/136 | [] | rn4n | 1 |
scikit-image/scikit-image | computer-vision | 7,732 | Deprecate old regionprop names | We have properties that have been renamed (some even twice), leaving a mess of legacy names that are still supported, but create confusion for users.
*Note:* It seemed as though these names may *already* have been removed from the docs. In that case, there is no problem, and we can deprecate with impunity.
**Definition of done:** There is one and only one name per regionprop. If names are already undocumented, go ahead and remove them completely. If they're still mentioned in the docs, deprecate them, and remove any mention of them from the docs. | open | 2025-03-04T06:42:11Z | 2025-03-04T07:27:08Z | https://github.com/scikit-image/scikit-image/issues/7732 | [
":hiking_boot: Path to skimage2"
] | stefanv | 0 |
huggingface/datasets | machine-learning | 6,522 | Loading HF Hub Dataset (private org repo) fails to load all features | ### Describe the bug
When pushing a `Dataset` with multiple `Features` (`input`, `output`, `tags`) to Huggingface Hub (private org repo), and later downloading the `Dataset`, only `input` and `output` load - I believe the expected behavior is for all `Features` to be loaded by default?
### Steps to reproduce the bug
Pushing the data. `data_concat` is a `list` of `dict`s.
```python
for datum in data_concat:
    datum_tags = {d["key"]: d["value"] for d in datum["tags"]}
    split_fraction = # some logic that generates a train/test split number
    if split_fraction < test_fraction:
        data_test.append(datum)
    else:
        data_train.append(datum)

dataset = DatasetDict(
    {
        "train": Dataset.from_list(data_train),
        "test": Dataset.from_list(data_test),
        "full": Dataset.from_list(data_concat),
    },
)

dataset_shuffled = dataset.shuffle(seed=shuffle_seed)

dataset_shuffled.push_to_hub(
    repo_id=hf_repo_id,
    private=True,
    config_name=m,
    revision=revision,
    token=hf_token,
)
```
Loading it later:
```python
dataset = datasets.load_dataset(
    path=hf_repo_id,
    name=name,
    token=hf_token,
)
```
Produces:
```
DatasetDict({
train: Dataset({
features: ['input', 'output'],
num_rows: <obfuscated>
})
test: Dataset({
features: ['input', 'output'],
num_rows: <obfuscated>
})
full: Dataset({
features: ['input', 'output'],
num_rows: <obfuscated>
})
})
```
### Expected behavior
The expected result is below:
```
DatasetDict({
train: Dataset({
features: ['input', 'output', 'tags'],
num_rows: <obfuscated>
})
test: Dataset({
features: ['input', 'output', 'tags'],
num_rows: <obfuscated>
})
full: Dataset({
features: ['input', 'output', 'tags'],
num_rows: <obfuscated>
})
})
```
My workaround is as follows:
```python
dsinfo = datasets.get_dataset_config_info(
    path=data_files,
    config_name=data_config,
    token=hf_token,
)

allfeatures = dsinfo.features.copy()
if "tags" not in allfeatures:
    allfeatures["tags"] = [{"key": Value(dtype="string", id=None), "value": Value(dtype="string", id=None)}]

dataset = datasets.load_dataset(
    path=data_files,
    name=data_config,
    features=allfeatures,
    token=hf_token,
)
```
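
Stripped of the `datasets` objects, the workaround is just copy-and-backfill on a feature mapping, and that shape can be checked with plain dicts (the feature-spec literals here are only illustrative):

```python
advertised = {"input": "string", "output": "string"}  # what the Hub reported

allfeatures = dict(advertised)  # copy, so the reported spec itself is not mutated
if "tags" not in allfeatures:
    allfeatures["tags"] = [{"key": "string", "value": "string"}]

print(sorted(allfeatures))  # ['input', 'output', 'tags']
print(sorted(advertised))   # ['input', 'output']
```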
Interestingly enough (and perhaps a related bug?), if I don't add the `tags` to `allfeatures` above (i.e. only loading `input` and `output`), it throws an error when executing `load_dataset`:
```
ValueError: Couldn't cast
tags: list<element: struct<key: string, value: string>>
child 0, element: struct<key: string, value: string>
child 0, key: string
child 1, value: string
input: <obfuscated>
output: <obfuscated>
-- schema metadata --
huggingface: '{"info": {"features": {"tags": [{"key": {"dtype": "string",' + 532
to
{'input': <obfuscated>, 'output': <obfuscated>
because column names don't match
```
Traceback for this:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/load.py", line 2152, in load_dataset
builder_instance.download_and_prepare(
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 948, in download_and_prepare
self._download_and_prepare(
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1043, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1805, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1950, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Environment info
- `datasets` version: 2.15.0
- Platform: macOS-14.0-arm64-arm-64bit
- Python version: 3.11.5
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0 | open | 2023-12-21T12:26:35Z | 2023-12-21T13:24:31Z | https://github.com/huggingface/datasets/issues/6522 | [] | versipellis | 0 |
huggingface/pytorch-image-models | pytorch | 1,983 | [BUG] Getting completely different results for different versions of torch. | When training a quantized version of a DeiT model on torch==1.10.1 for 300 epochs, I get normal results. However, when training the same quantized DeiT model on torch==1.13.1, I get completely different results that are worse by 4% accuracy at epoch 50 of 300.
I was wondering if you have experienced completely different results when training two equivalent DeiT models (or any type of ViT model, not necessarily quantized) under different torch versions. If so, does it happen only in the first few epochs or throughout the whole training process?
| closed | 2023-10-07T19:44:37Z | 2023-10-11T13:07:17Z | https://github.com/huggingface/pytorch-image-models/issues/1983 | [
"bug"
] | Phuoc-Hoan-Le | 0 |
ets-labs/python-dependency-injector | asyncio | 662 | Azure Functions and Dependency Injector | Hi 👋,
We are using this awesome tool with many Azure services. One that does not cooperate nicely with DI is Azure Functions.
I'm hoping I'll get some light shed on why they don't cooperate.
An Azure Functions app is a Docker container with a bunch of modules; those modules are individual Azure Functions, similar to AWS Lambdas. The code is usually placed in `__init__.py`.
Below is a simple example of how this may look.
```
.
├── Dockerfile
├── dependencies.py
├── maintenance_break_cron
│   ├── __init__.py
│   └── function.json
└── promotions_consumer
    ├── __init__.py
    ├── function.json
    └── utils.py
```
In the `dependencies.py` is our container
```
class Container(containers.DeclarativeContainer):
    wiring_config = containers.WiringConfiguration(
        modules=[
            "promotions_consumer",
        ]
    )

    feature_flag_service = providers.Factory(
        FeatureFlagService, provider=feature_flag_provider
    )

    metrics = providers.Singleton(Metrics)
```
The wiring does not work if the `@inject/Provide` pair is placed in `__init__.py`, but it does work in `utils.py`. Our workaround at this point is to move the DI code into a submodule (like `utils.py`). Another one is to access dependencies this way:
```
container = Container()
metrics = container.metrics()
```
I'm wondering what is the issue in our case? Is our wiring not right?
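
Independent of dependency-injector, one stdlib-only probe is to check which module names the interpreter actually assigns, since wiring matches against them; code in `promotions_consumer/__init__.py` is the module `promotions_consumer`, while `utils.py` is `promotions_consumer.utils`. Whether Azure's loader preserves these names is exactly what I'm unsure about:

```python
import importlib
import sys
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    pkg = Path(tmp) / "promotions_consumer"
    pkg.mkdir()
    (pkg / "__init__.py").write_text("WHO = __name__\n")
    (pkg / "utils.py").write_text("WHO = __name__\n")

    sys.path.insert(0, tmp)
    importlib.invalidate_caches()
    try:
        init_mod = importlib.import_module("promotions_consumer")
        utils_mod = importlib.import_module("promotions_consumer.utils")
    finally:
        sys.path.remove(tmp)

print(init_mod.WHO)   # promotions_consumer
print(utils_mod.WHO)  # promotions_consumer.utils
```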
Any help would be greatly appreciated. | open | 2023-01-16T12:36:14Z | 2023-01-16T12:36:14Z | https://github.com/ets-labs/python-dependency-injector/issues/662 | [] | inirudebwoy | 0 |
adbar/trafilatura | web-scraping | 43 | missing <p>TEXT</p> from HTML after extract | I have an HTML fragment (nested divs, p, ul):
```
html_fragment = """<div class="l-main-column">\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\n\t\n\t<div class="text-image-container ">\n<div class="text-image">\n<p>Mit dem KfW-Unternehmerkredit fördern wir Unter\xadnehmen sowie Frei\xadberufler, die seit mindestens <span class="u-nobr">5 Jahren</span> am Markt aktiv sind.</p>\n</div>\n</div><div class="text-image-container ">\n<div class="text-image">\n<p><strong>Das Förderprodukt kommt nicht in Frage: </strong></p><ul class="list list--bullets"> <li class="list__item"> für Existenzgründer und junge Unternehmen bis 5 Jahre. Diese unterstützen wir mit anderen Förder\xadprodukten, zum Beispiel mit dem <a class="link link--underline" href="https://www.kfw.de/inlandsfoerderung/Privatpersonen/Gr%C3%BCnden-Erweitern/F%C3%B6rderprodukte/ERP-Gr%C3%BCnderkredit-Universell-(073_074_075_076)/" title="Zur Infoseite zum ERP-Gründerkredit - Universell (073, 074, 075, 076)" data-section="contentcolumn"><span class="link__name"><span class="link__name-inner"><span class="link__name-text">ERP-Gründerkredit – Universell</span></span></span></a>. </li><li class="list__item"> für Unternehmen, die zum 31.12.2019 in Schwierig\xadkeiten waren, also vor Beginn der Coronakrise. </li><li class="list__item"> wenn Sie während der Kredit\xadlaufzeit Gewinn oder Dividende ausschütten. Möglich sind aber markt\xadübliche Ausschüttungen oder Entnahmen für Geschäfts\xadinhaber (natürliche Personen). </li> </ul>\n</div>\n</div>\n\t\n\n\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t</div>"""
```
I wanted trafilatura v0.6.0 to extract the "visible" text from the fragment via `trafilatura.extract(html_fragment, target_language='de')`. There are two paragraphs and an unordered list. After the extract, I receive the text of the second paragraph and the text of the unordered list, but the first paragraph is lost. Why?
the output of the extract is:
```
In [16]: trafilatura.extract(a[0])
2020-12-15 18:01:01 [trafilatura.core] DEBUG: Taking all p-elements
2020-12-15 18:01:01 [readability.readability] DEBUG: Branch 34.250 .text-image-container>.text-image link density 0.000 -> 34.250
2020-12-15 18:01:01 [readability.readability] DEBUG: Branch 32.125 .l-main-column>.text-image-container link density 0.000 -> 32.125
2020-12-15 18:01:01 [readability.readability] DEBUG: Branch 32.390 .text-image-container>.text-image link density 0.063 -> 30.344
2020-12-15 18:01:01 [readability.readability] DEBUG: Branch 31.195 .l-main-column>.text-image-container link density 0.063 -> 29.225
2020-12-15 18:01:01 [readability.readability] DEBUG: Top 5 : 34.250 .text-image-container>.text-image
2020-12-15 18:01:01 [readability.readability] DEBUG: Top 5 : 32.125 .l-main-column>.text-image-container
2020-12-15 18:01:01 [readability.readability] DEBUG: Top 5 : 30.344 .text-image-container>.text-image
2020-12-15 18:01:01 [readability.readability] DEBUG: Top 5 : 29.225 .l-main-column>.text-image-container
2020-12-15 18:01:01 [readability.readability] DEBUG: Not removing div{01}>.text-image of length 125: Mit dem KfW-Unternehmerkredit fördern w...
2020-12-15 18:01:01 [readability.readability] DEBUG: Branch 32.390 .text-image-container>.text-image link density 0.063 -> 30.344
2020-12-15 18:01:01 [readability.readability] DEBUG: Branch 31.195 .l-main-column>.text-image-container link density 0.063 -> 29.225
2020-12-15 18:01:01 [readability.readability] DEBUG: Top 5 : 30.344 .text-image-container>.text-image
2020-12-15 18:01:01 [readability.readability] DEBUG: Top 5 : 29.225 .l-main-column>.text-image-container
2020-12-15 18:01:01 [readability.readability] DEBUG: Not removing .text-image>ul.list.list--bullets of length 435: für Existenzgründer und junge Unternehm...
2020-12-15 18:01:01 [readability.readability] DEBUG: Not removing div{01}>.text-image of length 475: Das Förderprodukt kommt nicht in Frage:...
2020-12-15 18:01:01 [trafilatura.core] DEBUG: extracted length: 476 (algorithm) 165 (extraction)
2020-12-15 18:01:01 [trafilatura.core] INFO: using generic algorithm: None
2020-12-15 18:01:01 [trafilatura.core] INFO: not enough comments None
Out[16]: 'Das Förderprodukt kommt nicht in Frage:\n- für Existenzgründer und junge Unternehmen bis 5 Jahre. Diese unterstützen wir mit anderen Förderprodukten, zum Beispiel mit dem ERP-Gründerkredit – Universell.\n- für Unternehmen, die zum 31.12.2019 in Schwierigkeiten waren, also vor Beginn der Coronakrise.\n- wenn Sie während der Kreditlaufzeit Gewinn oder Dividende ausschütten. Möglich sind aber marktübliche Ausschüttungen oder Entnahmen für Geschäftsinhaber (natürliche Personen).'
```
The missing paragraph is not removed by readability.readability:
`Not removing div{01}>.text-image of length 125: Mit dem KfW-Unternehmerkredit fördern w...`
I expected:
```
Mit dem KfW-Unternehmerkredit fördern wir Unternehmen sowie Freiberufler, die seit mindestens 5 Jahren am Markt aktiv sind.\nDas Förderprodukt kommt nicht in Frage:\n- für Existenzgründer und junge Unternehmen bis 5 Jahre. Diese unterstützen wir mit anderen Förderprodukten, zum Beispiel mit dem ERP-Gründerkredit – Universell.\n- für Unternehmen, die zum 31.12.2019 in Schwierigkeiten waren, also vor Beginn der Coronakrise.\n- wenn Sie während der Kreditlaufzeit Gewinn oder Dividende ausschütten. Möglich sind aber marktübliche Ausschüttungen oder Entnahmen für Geschäftsinhaber (natürliche Personen).
```
Thanks for your support. Love your work. | closed | 2020-12-15T17:08:16Z | 2020-12-23T16:26:07Z | https://github.com/adbar/trafilatura/issues/43 | [] | cons0l3 | 1 |
torchbox/wagtail-grapple | graphql | 237 | Document the need for `graphql_fields` on custom rendition models | When using a custom image model, with a custom rendition model one needs to add
```python
graphql_fields = [
GraphQLString("id"),
GraphQLString("url"),
GraphQLString("width"),
GraphQLString("height"),
GraphQLImage("image"),
GraphQLString("file"),
]
```
to the custom rendition model. (Alternatively https://github.com/torchbox/wagtail-grapple/blob/566967686fdaf0da27ea70e7cc75098da75aa0e9/example/images/models.py#L16-L35)
Document this, or fix it so that it is not needed.
| open | 2022-07-25T15:53:23Z | 2022-09-02T09:09:29Z | https://github.com/torchbox/wagtail-grapple/issues/237 | [
"documentation"
] | zerolab | 0 |
fastapi-admin/fastapi-admin | fastapi | 13 | Added a Menu in example/main.py; how can I get it to show on the frontend? | 1. Added a `class Order` definition in models.py, and created the Order table in the database via SQL.
2. Added the following section in main.py:
```python
Menu(
    name="Order",
    url="/rest/Order",
    icon="fa fa-gear",
    import_=True,
    search_fields=("userid",),
),
```
3. Rebuilt with `docker-compose up -d --build`.
4. The frontend was also rebuilt.
No matter how many times I refresh, the frontend never shows the Order node. As a check, I deleted the Config menu, and that change did disappear from the frontend.
| closed | 2020-09-03T10:57:13Z | 2020-09-04T01:39:53Z | https://github.com/fastapi-admin/fastapi-admin/issues/13 | [] | liuqlive | 4 |
geex-arts/django-jet | django | 61 | How to create custom widgets? | Hi! I need a little support. I need to add a map (using data from my models) with time-slider filters that will be used for monitoring and some light work, and I need to do a lot of customization on that map. So I thought it would be a great idea to create a widget for the map and use it. I need to know how to write a custom widget and include it in the admin panel. Can you please help me do that? Or let me know if there is a better way to add that map to the admin panel?
| closed | 2016-03-21T15:04:30Z | 2021-01-22T21:57:19Z | https://github.com/geex-arts/django-jet/issues/61 | [] | RamizSami | 4 |
MycroftAI/mycroft-core | nlp | 3,107 | An error occurred while processing a request in Time Skill | **Describe the bug**
I said "Hey Mycroft ... what time is it" and got the time, but also got the subject error message. Here's the traceback:
09:42:05.676 | ERROR | 55371 | mycroft.skills.mycroft_skill.mycroft_skill:on_error:923 | An error occurred while processing a request in Time Skill
Traceback (most recent call last):
File "/home/pi/mycroft-core/mycroft/skills/mycroft_skill/event_container.py", line 73, in wrapper
handler(message)
File "/opt/mycroft/skills/mycroft-date-time.mycroftai/__init__.py", line 412, in handle_query_time
self.display(self.get_display_current_time(location))
File "/opt/mycroft/skills/mycroft-date-time.mycroftai/__init__.py", line 272, in display
self.display_gui(display_time)
File "/opt/mycroft/skills/mycroft-date-time.mycroftai/__init__.py", line 332, in display_gui
self.gui.clear()
File "/home/pi/mycroft-core/mycroft/enclosure/gui.py", line 129, in clear
self.skill.bus.emit(Message("gui.clear.namespace",
AttributeError: 'NoneType' object has no attribute 'bus'
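For what it's worth, the last two frames suggest `SkillGUI.clear()` runs while `self.skill` is still `None`. A minimal sketch of a defensive guard (the class and message names below only mirror the traceback; this is a guess at a workaround, not the actual upstream fix):

```python
class SkillGUI:
    """Stripped-down stand-in for mycroft.enclosure.gui.SkillGUI."""

    def __init__(self, skill=None):
        self.skill = skill

    def clear(self):
        # Guard against the state seen in the traceback: no skill attached,
        # so there is no message bus to emit "gui.clear.namespace" on.
        if self.skill is None:
            return
        # real code emits a Message object here; a plain string keeps the
        # sketch self-contained
        self.skill.bus.emit("gui.clear.namespace")

gui = SkillGUI()  # skill never attached, as in the failing run
gui.clear()       # returns quietly instead of raising AttributeError
print("no AttributeError")
```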
**To Reproduce**
Steps to reproduce the behavior:
1. start 'mycroft-start debug'
2. : log level debug
3. say "Hey Mycroft ... what time is it"
4. You should see the error
What is weird is that this was not happening yesterday. Has any code been updated in the last 24 hours or so? (It is 10:00 EST on 23 May 2022)
**Expected behavior**
No error message, just the current time
**Log files**
I could provide more if needed
**Environment (please complete the following information):**
- Device type: Raspberry Pi 4 4GB
- OS: Ubuntu Server
- Mycroft-core version: Not sure how to query version - I built with stable not development.
- Other versions: can be provided
**Additional context**
I have been learning to build skills but have not modified any other source files.
| closed | 2022-05-23T14:12:43Z | 2024-09-08T08:43:47Z | https://github.com/MycroftAI/mycroft-core/issues/3107 | [
"bug"
] | mike99mac | 3 |
roboflow/supervision | tensorflow | 754 | [ByteTrack] - redesign `update_with_detections` to use IoU to match input `Detections` with `tracker_id` | ### Description
- Currently, `update_with_detections` returns the predicted position of boxes, not their actual coordinates received from the detector. Many users have complained about the deterioration of box quality when using ByteTrack. ([#743](https://github.com/roboflow/supervision/issues/743))
- `ByteTrack` does not work with segmentation models because masks are not transferred to the `update_with_detections` output.
- The `Detections.data` field is lost after passing through `update_with_detections`.
All these issues can be resolved by changing the logic in `update_with_detections`. Instead of mapping values obtained from `update_with_tensors` to new `Detections` objects, we should use IoU to map the results of `update_with_tensors` to input `Detections` objects. This way, the input `xyxy` coordinates and the input state of the `mask` and `data` fields will be preserved.
For this purpose, we can utilize the already existing function [`box_iou_batch`](https://github.com/roboflow/supervision/blob/3024ddca83ad837651e59d040e2a5ac5b2b4f00f/supervision/detection/utils.py#L31). The matching procedure has been demonstrated in [one](https://www.youtube.com/watch?v=OS5qI9YBkfk) of our videos on YouTube.
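A rough sketch of that matching step in plain Python (a stand-in for `box_iou_batch`; the function names and the 0.5 threshold here are illustrative assumptions, not the final API):

```python
# For each input detection, find the tracker prediction with the highest IoU
# above a threshold, keep the detector's own coordinates, and attach only the
# matched tracker_id.

def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_tracks(detections, tracks, iou_threshold=0.5):
    """detections: list of boxes; tracks: list of (predicted_box, tracker_id)."""
    matched = []
    for det in detections:
        best_id, best_iou = None, iou_threshold
        for box, tid in tracks:
            score = iou(det, box)
            if score > best_iou:
                best_id, best_iou = tid, score
        # the detector's coordinates are preserved; only tracker_id is attached
        matched.append((det, best_id))
    return matched

dets = [(10, 10, 50, 50), (100, 100, 150, 160)]
trks = [((12, 11, 51, 49), 7), ((99, 102, 149, 158), 3)]
print(match_tracks(dets, trks))  # [((10, 10, 50, 50), 7), ((100, 100, 150, 160), 3)]
```

Because the output carries the input boxes unchanged, the same mapping also preserves `mask` and `data` for free.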
### Additional
- Note: Please share a Google Colab with minimal code to test the new feature. We know it's additional work, but it will definitely speed up the review process. Each change must be tested by the reviewer. Setting up a local environment to do this is time-consuming. Please ensure that Google Colab can be accessed without any issues (make it public). Thank you! ๐๐ป | closed | 2024-01-18T21:16:44Z | 2024-03-25T15:20:26Z | https://github.com/roboflow/supervision/issues/754 | [
"enhancement",
"api:tracker",
"Q1.2024"
] | SkalskiP | 3 |
davidsandberg/facenet | computer-vision | 954 | Classifying an Image | @davidsandberg Can you please explain how the accuracy is 1 when the probabilities are only 0.4, 0.3, and so on? Thanks in advance.
```
Testing classifier
Loaded classifier model from file "/home/david/models/my_classifier.pkl"
0 Ariel Sharon: 0.452
1 Ariel Sharon: 0.376
2 Ariel Sharon: 0.426
...
...
...
47 Vladimir Putin: 0.418
48 Vladimir Putin: 0.453
49 Vladimir Putin: 0.378
Accuracy: 1.000
```
Accuracy: 1.000** | open | 2019-01-18T08:00:21Z | 2019-01-21T12:53:33Z | https://github.com/davidsandberg/facenet/issues/954 | [] | dfsaw | 4 |
xinntao/Real-ESRGAN | pytorch | 121 | ncnn-vulkan binary limited to nearest neighbor for alpha channel. | Hello, I recently discovered this wonderful project and have a request for improved upscaling on the alpha channel. I understand that there's an option for the PyTorch implementation to use RealESRGAN for the alpha, but I'm on a Mac so I can't use it unless I use the slower CPU.
Please don't limit it to nearest neighbor on the ncnn-vulkan binary. Thanks a lot!
Lightning-AI/pytorch-lightning | pytorch | 20,281 | `NeptuneCallback` produces lots of `X-coordinates (step) must be strictly increasing` errors | ### Bug description
When Optuna is run in parallel mode (`n_jobs=-1`), with `NeptuneCallback`, I get:
`[neptune] [error ] Error occurred during asynchronous operation processing: X-coordinates (step) must be strictly increasing for series attribute: trials/values. Invalid point: 0.0`
It's normal that during parallel or distributed hyperparameter optimization, updates arrive out of order. Either Neptune should support adding steps out of order, or `NeptuneCallback` should handle this somehow (e.g. by using an artificial step number).
### What version are you seeing the problem on?
v1.x
### How to reproduce the bug
```python
study.optimize(..., callbacks=[NeptuneCallback(run)], n_jobs=-1)
```
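One possible shape for the artificial-step workaround: a thread-safe counter that hands out strictly increasing step numbers, so parallel trials can never violate the monotonicity check (a sketch only; `NeptuneCallback` would have to adopt something like it internally):

```python
import itertools
import threading

class MonotonicStep:
    """Hands out strictly increasing step numbers across worker threads."""

    def __init__(self):
        self._counter = itertools.count()
        self._lock = threading.Lock()

    def next(self):
        with self._lock:
            return next(self._counter)

steps = MonotonicStep()
# each Optuna worker would then log with something like:
#   run["trials/values"].append(value, step=steps.next())
print([steps.next() for _ in range(5)])  # [0, 1, 2, 3, 4]
```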
### Error messages and logs
`[neptune] [error ] Error occurred during asynchronous operation processing: X-coordinates (step) must be strictly increasing for series attribute: trials/values. Invalid point: 0.0`
### Environment
Any multi-threaded environment.
### More info
_No response_ | open | 2024-09-14T11:49:28Z | 2024-09-28T23:45:22Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20281 | [
"bug",
"needs triage"
] | iirekm | 1 |
d2l-ai/d2l-en | data-science | 2,638 | d2l broken (numpy) in colab with default pytorch 2.6.0 | Colab just updated pytorch to 2.6.0 and it breaks d2l:
```bash
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-2-a93af434f2cf>](https://localhost:8080/#) in <cell line: 0>()
1 get_ipython().run_line_magic('matplotlib', 'inline')
----> 2 from d2l import torch as d2l
3 import torchvision
14 frames
[/usr/local/lib/python3.11/dist-packages/d2l/torch.py](https://localhost:8080/#) in <module>
4 import numpy as np
5 import torch
----> 6 import torchvision
7 from PIL import Image
8 from torch import nn
[/usr/local/lib/python3.11/dist-packages/torchvision/__init__.py](https://localhost:8080/#) in <module>
8 # .extensions) before entering _meta_registrations.
9 from .extension import _HAS_OPS # usort:skip
---> 10 from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils # usort:skip
11
12 try:
[/usr/local/lib/python3.11/dist-packages/torchvision/models/__init__.py](https://localhost:8080/#) in <module>
1 from .alexnet import *
----> 2 from .convnext import *
3 from .densenet import *
4 from .efficientnet import *
5 from .googlenet import *
[/usr/local/lib/python3.11/dist-packages/torchvision/models/convnext.py](https://localhost:8080/#) in <module>
6 from torch.nn import functional as F
7
----> 8 from ..ops.misc import Conv2dNormActivation, Permute
9 from ..ops.stochastic_depth import StochasticDepth
10 from ..transforms._presets import ImageClassification
[/usr/local/lib/python3.11/dist-packages/torchvision/ops/__init__.py](https://localhost:8080/#) in <module>
21 from .giou_loss import generalized_box_iou_loss
22 from .misc import Conv2dNormActivation, Conv3dNormActivation, FrozenBatchNorm2d, MLP, Permute, SqueezeExcitation
---> 23 from .poolers import MultiScaleRoIAlign
24 from .ps_roi_align import ps_roi_align, PSRoIAlign
25 from .ps_roi_pool import ps_roi_pool, PSRoIPool
[/usr/local/lib/python3.11/dist-packages/torchvision/ops/poolers.py](https://localhost:8080/#) in <module>
8
9 from ..utils import _log_api_usage_once
---> 10 from .roi_align import roi_align
11
12
[/usr/local/lib/python3.11/dist-packages/torchvision/ops/roi_align.py](https://localhost:8080/#) in <module>
5 import torch.fx
6 from torch import nn, Tensor
----> 7 from torch._dynamo.utils import is_compile_supported
8 from torch.jit.annotations import BroadcastingList2
9 from torch.nn.modules.utils import _pair
[/usr/local/lib/python3.11/dist-packages/torch/_dynamo/__init__.py](https://localhost:8080/#) in <module>
1 import torch
2
----> 3 from . import convert_frame, eval_frame, resume_execution
4 from .backends.registry import list_backends, lookup_backend, register_backend
5 from .callback import callback_handler, on_compile_end, on_compile_start
[/usr/local/lib/python3.11/dist-packages/torch/_dynamo/convert_frame.py](https://localhost:8080/#) in <module>
31 from torch._C._dynamo.guards import GlobalStateGuard
32 from torch._dynamo.distributed import get_compile_pg
---> 33 from torch._dynamo.symbolic_convert import TensorifyState
34 from torch._guards import compile_context, CompileContext, CompileId, tracing
35 from torch._logging import structured
[/usr/local/lib/python3.11/dist-packages/torch/_dynamo/symbolic_convert.py](https://localhost:8080/#) in <module>
25 import torch
26 import torch._logging
---> 27 from torch._dynamo.exc import TensorifyScalarRestartAnalysis
28 from torch._guards import tracing, TracingContext
29
[/usr/local/lib/python3.11/dist-packages/torch/_dynamo/exc.py](https://localhost:8080/#) in <module>
9
10 from . import config
---> 11 from .utils import counters
12
13
[/usr/local/lib/python3.11/dist-packages/torch/_dynamo/utils.py](https://localhost:8080/#) in <module>
109 np.fft,
110 np.linalg,
--> 111 np.random,
112 )
113
[/usr/local/lib/python3.11/dist-packages/numpy/__init__.py](https://localhost:8080/#) in __getattr__(attr)
335 if not abs(x.dot(x) - 2.0) < 1e-5:
336 raise AssertionError()
--> 337 except AssertionError:
338 msg = ("The current Numpy installation ({!r}) fails to "
339 "pass simple sanity checks. This can be caused for example "
[/usr/local/lib/python3.11/dist-packages/numpy/random/__init__.py](https://localhost:8080/#) in <module>
178
179 # add these for module-freeze analysis (like PyInstaller)
--> 180 from . import _pickle
181 from . import _common
182 from . import _bounded_integers
[/usr/local/lib/python3.11/dist-packages/numpy/random/_pickle.py](https://localhost:8080/#) in <module>
----> 1 from .mtrand import RandomState
2 from ._philox import Philox
3 from ._pcg64 import PCG64, PCG64DXSM
4 from ._sfc64 import SFC64
5
mtrand.pyx in init numpy.random.mtrand()
ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
``` | open | 2025-03-18T11:51:11Z | 2025-03-18T13:02:53Z | https://github.com/d2l-ai/d2l-en/issues/2638 | [] | drapado | 0 |
Yorko/mlcourse.ai | seaborn | 357 | typo : unupervised | https://mlcourse.ai/assignments
The n° 7 is "unupervised"; it should probably be "unsupervised".
"minor_fix"
] | Mziserman | 1 |
localstack/localstack | python | 11,881 | Layers for lambda function does not work on python3.9 runtime (Localstack Pro) | ### Is there an existing issue for this?
- [x] I have searched the existing issues
### Current Behavior
Currently I'm trying to use the matplotlib library in my source code, so I want to create a layer. However, it is not working: when I invoke the Lambda function, the layer doesn't seem to be picked up. I don't know if I'm configuring it wrong.
Error :
`2024-11-19 20:02:22 2024-11-19T11:02:22.661 WARN --- [et.reactor-0] l.s.l.i.executor_endpoint : Execution environment startup failed: {"errorMessage": "Unable to import module 'src/api/v1/route/lane': No module named 'matplotlib'", "errorType": "Runtime.ImportModuleError", "requestId": "", "stackTrace": []}`
### Expected Behavior
lambda function works properly with matplotlib
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
- To install the lib and zip it into matplotlib.zip:
`pip install -r requirements.txt -t layer`
- To create the layer:
```
aws --endpoint-url http://localhost:4566 lambda publish-layer-version --layer-name matplotliblayer \
--license-info "MIT" \
--zip-file fileb://layer/matplotlib.zip \
--compatible-runtimes python3.9 \
--compatible-architectures "arm64" "x86_64"
```
- To create the Lambda function:
```
aws --endpoint-url=http://localhost:4566 lambda create-function \
--function-name lane-fuction \
--runtime python3.9 \
--role arn:aws:iam::000000000000:role/lambda-role \
--handler src/api/v1/route/lane.main \
--zip-file fileb://src/route.zip \
--timeout 120 \
--layers arn:aws:lambda:ap-northeast-1:000000000000:layer:matplotliblayer:1
```
The structure of the matplotlib.zip file is as follows:
```
matplotlib.zip
└── python
    └── dependencies
```
I also tried the structure as shown in the AWS docs

### Environment
```markdown
- OS: Windows
- LocalStack:
LocalStack version: 3.8.2.dev105
LocalStack Docker image sha:
LocalStack build date: 2024-11-14
LocalStack build git hash: e42c433a
```
### Anything else?
_No response_ | closed | 2024-11-20T03:35:45Z | 2024-12-04T08:29:26Z | https://github.com/localstack/localstack/issues/11881 | [
"type: bug",
"aws:lambda"
] | long-tran-dss | 4 |
mars-project/mars | pandas | 2,677 | Issue connecting debugger | After building mars from master using `pip install -e ".[dev]"`, I try to create a new session with the command
```
import mars
mars.new_session()
```
giving output
```
Web service started at http://0.0.0.0:38791
<mars.deploy.oscar.session.SyncSession object at 0x12ef81e20>
```
However, when running the same code via the PyCharm console, I get the following error:
```
import mars
mars.new_session()
```
```
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/forkserver.py", line 280, in main
code = _serve_one(child_r, fds,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/forkserver.py", line 319, in _serve_one
code = spawn._main(child_r, parent_sentinel)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
prepare(preparation_data)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/runpy.py", line 264, in run_path
code, fname = _get_code_from_file(run_name, path_name)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/runpy.py", line 234, in _get_code_from_file
[... the same forkserver traceback, ending in the same FileNotFoundError for '/Users/bytedance/intern/dev/mars/<input>', repeats many more times, interleaved across the worker processes ...]
```
_fixup_main_from_path(data['init_main_from_path'])
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/runpy.py", line 264, in run_path
code, fname = _get_code_from_file(run_name, path_name)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/runpy.py", line 234, in _get_code_from_file
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/forkserver.py", line 280, in main
with io.open_code(decoded_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/bytedance/intern/dev/mars/<input>'
code = _serve_one(child_r, fds,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/forkserver.py", line 319, in _serve_one
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/forkserver.py", line 280, in main
code = spawn._main(child_r, parent_sentinel)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
prepare(preparation_data)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
code = _serve_one(child_r, fds,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/forkserver.py", line 319, in _serve_one
main_content = runpy.run_path(main_path,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/runpy.py", line 264, in run_path
code, fname = _get_code_from_file(run_name, path_name)
code = spawn._main(child_r, parent_sentinel) File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/runpy.py", line 234, in _get_code_from_file
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 125, in _main
prepare(preparation_data)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 236, in prepare
with io.open_code(decoded_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/bytedance/intern/dev/mars/<input>'
_fixup_main_from_path(data['init_main_from_path'])
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/runpy.py", line 264, in run_path
code, fname = _get_code_from_file(run_name, path_name)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/runpy.py", line 234, in _get_code_from_file
with io.open_code(decoded_path) as f:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/bytedance/intern/dev/mars/<input>'
```
This causes the application to get stuck in a waiting state.
Both are pointed to the same virtual environment, which makes me wonder if there is some env variable that is expected which is causing the error.
Any advice on debugging with PyCharm and getting past this blocker would be appreciated. | closed | 2022-02-02T20:13:18Z | 2022-02-06T01:26:57Z | https://github.com/mars-project/mars/issues/2677 | [
"reso: invalid",
"question"
] | sakshamkumar-byt | 2 |
graphql-python/graphene-sqlalchemy | sqlalchemy | 16 | How to query by argument? | I was wondering if querying a schema via specific arguments is supposed to work out of the box or if anything special must be done to make it work?
In the case of the Flask example, I was expecting the following to be a valid query:

```graphql
{
  role(roleId: 2) {
    roleId
    name
  }
}
```
But I only get
> Unknown argument "roleId" on field "role" of type "Query".
So how would I need to extend the example so that I could search employees by their name or retrieve a role via the ID? Or is my query just wrong?
| closed | 2016-11-02T08:18:15Z | 2023-02-24T14:56:01Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/16 | [] | steinsag | 10 |
falconry/falcon | api | 1,500 | doc: Request ID logging | Document in the FAQ or a tutorial how you can use a middleware `process_request()` method to generate a request ID (or extract one from an HTTP header if provided downstream ala tracing) and then add that to `req.context` so you can reference it later when you want to log it out. Alternatively, you can use something like `structlog` and bind the request id to an instance that you then pass around via `req.context`. | closed | 2019-04-23T20:30:46Z | 2019-05-08T14:41:07Z | https://github.com/falconry/falcon/issues/1500 | [
"documentation"
] | kgriffs | 4 |
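The `process_request()` approach described above can be sketched like this (a minimal, framework-agnostic sketch; the `X-Request-ID` header name is an assumption, and the exact Falcon signatures should be checked against the Falcon docs):

```python
import uuid

class RequestIDMiddleware:
    """Attach a request ID to req.context, reusing one from downstream if provided."""

    HEADER = 'X-Request-ID'  # assumed header name used by an upstream tracer/proxy

    def process_request(self, req, resp):
        # Prefer an ID propagated for tracing; otherwise generate a fresh one.
        req.context.request_id = req.get_header(self.HEADER) or str(uuid.uuid4())
```

A logging call can then include `req.context.request_id`, or bind it to a `structlog` logger as the issue suggests.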
zihangdai/xlnet | nlp | 107 | Can you please share the download link for all the classification datasets | Hi, thanks for answering so many questions on the paper; I really appreciate all the effort you have put in.
In Table 3 of the paper, multiple classification datasets are mentioned, and many of these datasets have multiple versions available.
1) Can you please share the download location of the version of each dataset (Yelp, DBPedia, AG, and Amazon) that you used in the paper?
Thanks | closed | 2019-07-03T04:12:08Z | 2019-07-05T14:03:19Z | https://github.com/zihangdai/xlnet/issues/107 | [] | ngoyal2707 | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 720 | understanding optimization | Appendix 7.1 (training details) in the paper says:
In practice, the objective is divided by 2 while optimizing D, which slows down the rate at which D learns relative to the rate of G. I am not getting the point in terms of the code. | closed | 2019-08-02T18:13:56Z | 2020-08-18T13:47:40Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/720 | [] | nehaleosharma | 3 |
ultralytics/ultralytics | machine-learning | 19,305 | Ultralytics memory problem | How can I minimize GPU memory usage while training, other than setting the `workers` parameter? | closed | 2025-02-19T05:13:32Z | 2025-02-21T05:05:12Z | https://github.com/ultralytics/ultralytics/issues/19305 | [
"bug",
"segment",
"detect"
] | abhishekb-weboccult | 11 |
kizniche/Mycodo | automation | 990 | All outputs are listed as "Unconfigured" since 8.10 | Under the Setup->Outputs screen, all of my pre-existing outputs no longer display their state and instead show "Unconfigured". Adding a new Output and configuring it still shows as Unconfigured. This seems to have started since 8.10.

The outputs still work, everything's functional except for the "Unconfigured" message. | closed | 2021-04-27T04:07:34Z | 2021-06-06T20:59:54Z | https://github.com/kizniche/Mycodo/issues/990 | [
"bug",
"Fixed and Committed"
] | dylandn | 5 |
ydataai/ydata-profiling | jupyter | 1,099 | Cramer correlation matrix is not computed | /usr/local/lib/python3.7/dist-packages/pandas_profiling/model/correlations.py:61: UserWarning:There was an attempt to calculate the cramers correlation, but this failed.
To hide this warning, disable the calculation
(using `df.profile_report(correlations={"cramers": {"calculate": False}})`
If this is problematic for your use case, please report this as an issue:
https://github.com/ydataai/pandas-profiling/issues
(include the error message: 'No data; `observed` has size 0.')
Please help. My dataset is https://busan302.mycourses.work/data/house_price_train.csv | closed | 2022-10-06T16:21:58Z | 2022-10-18T10:50:38Z | https://github.com/ydataai/ydata-profiling/issues/1099 | [
"bug 🐛"
] | bdao568 | 7 |
milesmcc/shynet | django | 292 | Don't commit MaxMind license key to your repository | https://github.com/milesmcc/shynet/blob/45fafc35070416bbc9df420e2df6593f43efd4dd/Dockerfile#L18 | closed | 2023-10-11T00:08:19Z | 2023-10-30T22:00:44Z | https://github.com/milesmcc/shynet/issues/292 | [] | ugexe | 6 |
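A common way to address the issue above, i.e. keep the MaxMind key out of the repository, is to inject it at build or run time instead of hard-coding it (sketch only; the variable name and shynet's actual build flow are assumptions):

```dockerfile
# Passed via: docker build --build-arg MAXMIND_LICENSE_KEY=... .
ARG MAXMIND_LICENSE_KEY
ENV MAXMIND_LICENSE_KEY=${MAXMIND_LICENSE_KEY}
```

Alternatively, the key can be supplied purely at run time via `docker run -e MAXMIND_LICENSE_KEY=...`.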
fugue-project/fugue | pandas | 492 | [FEATURE] AnyDataFrame should be recognized by Creator, Processor and Outputter | This should be a straightforward change; we need this to work:
```python
from fugue import AnyDataFrame
def my_processor(df:AnyDataFrame) -> AnyDataFrame:
return df
```
```
PROCESS USING my_processor
```
With this change, functions following Fugue API conventions will be able to be used as Fugue extensions.
| closed | 2023-07-20T05:05:27Z | 2023-07-22T18:32:18Z | https://github.com/fugue-project/fugue/issues/492 | [
"enhancement",
"programming interface"
] | goodwanghan | 0 |
roboflow/supervision | machine-learning | 1,800 | Install supervision with numpy<2 | ### Search before asking
- [x] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Hi,
I wanted to add supervision to a project of mine that depends on `numpy<2`.
If I try this via poetry I get:
```
(.venv) C:\Users\z0039hdz\Projects\chip-segmentation [feat/export-model โก +3 ~3 -0 !]> poetry env info
Virtualenv
Python: 3.10.11
Implementation: CPython
Path: C:\Users\z0039hdz\Projects\chip-segmentation\.venv
Executable: C:\Users\z0039hdz\Projects\chip-segmentation\.venv\Scripts\python.exe
Valid: True
Base
Platform: win32
OS: nt
Python: 3.10.11
Path: C:\Users\z0039hdz\.pyenv\pyenv-win\versions\3.10.11
Executable: C:\Users\z0039hdz\.pyenv\pyenv-win\versions\3.10.11\python.exe
(.venv) C:\Users\z0039hdz\Projects\chip-segmentation [feat/export-model โก +3 ~3 -0 !]> poetry add supervision
Using version ^0.25.1 for supervision
Updating dependencies
Resolving dependencies... (5.3s)
Because no versions of supervision match >0.25.1,<0.26.0
and supervision (0.25.1) depends on numpy (>=2.1.0), supervision (>=0.25.1,<0.26.0) requires numpy (>=2.1.0).
So, because chip-segmentation depends on both numpy (<2) and supervision (^0.25.1), version solving failed.
```
It seems like supervision depends on numpy>=2.1.0 for Python 3.10, but if I check supervision's pyproject.toml I see `"numpy>=1.21.2"`.
Is it possible to use supervision with `numpy<2`? Am I doing something wrong?
### Additional
_No response_ | closed | 2025-03-06T13:36:44Z | 2025-03-06T14:08:20Z | https://github.com/roboflow/supervision/issues/1800 | [
"question"
] | pirnerjonas | 2 |
mitmproxy/pdoc | api | 361 | Improve Rendering of `typing.TypeVar` | #### Problem Description
TypeVars seem to be simply put through `repr` to get their textual format. TypeVar's `__repr__` is missing important information necessary to the use of the class including constraints or bounds and covariance or contravariance.
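To make the problem concrete, `repr` keeps only a variance prefix and drops the bound entirely:

```python
from typing import TypeVar

T = TypeVar("T")
T_co = TypeVar("T_co", bound=Exception, covariant=True)

print(repr(T))     # ~T
print(repr(T_co))  # +T_co -- bound=Exception is nowhere to be seen
```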
#### Proposal
I would like to see TypeVars printed with more information. For example, instead of just `~T`, it could be `{variance} T ≤ {bound}`. The variance of a type only needs to be known once per type (likely at the occurrence of the TypeVar in the class definition), as TypeVars in functions/methods cannot have variance. Bounds should likely be printed on every occurrence of the TypeVar. Constraints could be represented as a tuple of bounds.
#### Alternatives
1. Stating the bound and variance in the docstring. Docstrings can get out of date, but mypy ensures that annotations remain correct.
2. Taking sphinx's approach and adding support for documenting the TypeVar (yuck!).
| open | 2022-03-12T06:10:20Z | 2022-04-05T14:50:30Z | https://github.com/mitmproxy/pdoc/issues/361 | [
"enhancement"
] | ktbarrett | 9 |
gradio-app/gradio | data-visualization | 10,842 | Strange results in RGBA image processing | ### Describe the bug
I kept getting strange results while processing RGBA images.
I tested with code that just outputs the image as is.
When an RGBA image is passed through gr.Image(...), I get a strange result (see photo).
Why is this happening?
I've attached an image for testing.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def process_image(input_image):
return input_image
with gr.Blocks() as demo:
with gr.Row():
with gr.Column():
input_image = gr.Image(label="Input")
with gr.Column():
output_image = gr.Image(label="Output")
submit_btn = gr.Button("Show")
submit_btn.click(
fn=process_image,
inputs=input_image,
outputs=output_image
)
if __name__ == "__main__":
demo.launch(server_port=7860, server_name="0.0.0.0")
```
### Screenshot


### Logs
```shell
```
### System Info
```shell
python 3.12.3
gradio 5.22.0
```
### Severity
Blocking usage of gradio | closed | 2025-03-20T02:50:49Z | 2025-03-21T02:49:04Z | https://github.com/gradio-app/gradio/issues/10842 | [
"bug"
] | eunvvoo | 1 |
PaddlePaddle/models | nlp | 5,233 | BMN inference fails | E:\PythonProject\models\PaddleCV\video> python predict.py --model_name BMN --config configs/bmn.yaml --log_interval 1 --weights .\models\bmn\BMN
DALI is not installed, you can improve performance if use DALI
[INFO: predict.py: 199]: Namespace(batch_size=1, config='configs/bmn.yaml', filelist=None, infer_topk=20, log_interval=1, model_name='BMN', save_dir='data\\predict_results', use_gpu=True, video_path=None, weights='.\\models\\bmn\\BMN')
[INFO: config_utils.py: 69]: ---------------- Infer Arguments ----------------
[INFO: config_utils.py: 72]: MODEL:
[INFO: config_utils.py: 74]: name:BMN
[INFO: config_utils.py: 74]: tscale:100
[INFO: config_utils.py: 74]: dscale:100
[INFO: config_utils.py: 74]: feat_dim:400
[INFO: config_utils.py: 74]: prop_boundary_ratio:0.5
[INFO: config_utils.py: 74]: num_sample:32
[INFO: config_utils.py: 74]: num_sample_perbin:3
[INFO: config_utils.py: 74]: anno_file:data/dataset/bmn/activitynet_1.3_annotations.json
[INFO: config_utils.py: 74]: feat_path:data/dataset/bmn/fix_feat_100
[INFO: config_utils.py: 72]: TRAIN:
[INFO: config_utils.py: 74]: subset:train
[INFO: config_utils.py: 74]: epoch:9
[INFO: config_utils.py: 74]: batch_size:16
[INFO: config_utils.py: 74]: num_threads:8
[INFO: config_utils.py: 74]: use_gpu:True
[INFO: config_utils.py: 74]: num_gpus:4
[INFO: config_utils.py: 74]: learning_rate:0.001
[INFO: config_utils.py: 74]: learning_rate_decay:0.1
[INFO: config_utils.py: 74]: lr_decay_iter:4200
[INFO: config_utils.py: 74]: l2_weight_decay:0.0001
[INFO: config_utils.py: 72]: VALID:
[INFO: config_utils.py: 74]: subset:validation
[INFO: config_utils.py: 74]: batch_size:16
[INFO: config_utils.py: 74]: num_threads:8
[INFO: config_utils.py: 74]: use_gpu:True
[INFO: config_utils.py: 74]: num_gpus:4
[INFO: config_utils.py: 72]: TEST:
[INFO: config_utils.py: 74]: subset:validation
[INFO: config_utils.py: 74]: batch_size:1
[INFO: config_utils.py: 74]: num_threads:1
[INFO: config_utils.py: 74]: snms_alpha:0.001
[INFO: config_utils.py: 74]: snms_t1:0.5
[INFO: config_utils.py: 74]: snms_t2:0.9
[INFO: config_utils.py: 74]: output_path:data/output/EVAL/BMN_results
[INFO: config_utils.py: 74]: result_path:data/evaluate_results
[INFO: config_utils.py: 72]: INFER:
[INFO: config_utils.py: 74]: subset:test
[INFO: config_utils.py: 74]: batch_size:1
[INFO: config_utils.py: 74]: num_threads:1
[INFO: config_utils.py: 74]: snms_alpha:0.4
[INFO: config_utils.py: 74]: snms_t1:0.5
[INFO: config_utils.py: 74]: snms_t2:0.9
[INFO: config_utils.py: 74]: filelist:data/dataset/bmn/infer.list
[INFO: config_utils.py: 74]: output_path:data/output/INFER/BMN_results
[INFO: config_utils.py: 74]: result_path:data/predict_results
[INFO: config_utils.py: 75]: -------------------------------------------------
W0126 21:49:35.488451 13780 device_context.cc:320] Please NOTE: device: 0, GPU Compute Capability: 6.1, Driver API Version: 11.3, Runtime API Version: 10.1
W0126 21:49:35.509438 13780 device_context.cc:330] device: 0, cuDNN Version: 7.6.
test subset video numbers: 5
Traceback (most recent call last):
File "predict.py", line 201, in <module>
infer(args)
File "predict.py", line 125, in infer
assert os.path.exists(
AssertionError: Given weight dir .\models\bmn\BMN not exist.
I have tried the `--weights` path in the run command both with and without the prefix; neither works. | closed | 2021-01-26T13:51:40Z | 2021-10-23T07:42:43Z | https://github.com/PaddlePaddle/models/issues/5233 | [] | jishixin | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 114 | Mask format for catalyst | Hi, I am trying to do segmentation of kidneys and tumors. Right now my mask are [512, 512, 3] with two colours for two classes ([255, 0, 0] and [0, 255, 0]). The classes can overlap. I do not have separate class for background.
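For concreteness, this is roughly how I currently turn one RGB mask into per-class binary channels (a toy-sized sketch; the overlap colour is hypothetical), and I am unsure whether this is the expected format:

```python
import numpy as np

# Toy 4x4 RGB mask: red = kidney, green = tumor; classes may overlap.
mask_rgb = np.zeros((4, 4, 3), dtype=np.uint8)
mask_rgb[0, 0] = [255, 0, 0]    # kidney-only pixel
mask_rgb[1, 1] = [0, 255, 0]    # tumor-only pixel
mask_rgb[2, 2] = [255, 255, 0]  # hypothetical overlapping pixel

kidney = (mask_rgb[..., 0] == 255).astype(np.float32)
tumor = (mask_rgb[..., 1] == 255).astype(np.float32)
target = np.stack([kidney, tumor], axis=0)  # (num_classes, H, W), binary per class
```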
Right now, when I run the modified example with:

```python
model = smp.Unet(encoder_name="resnext50_32x4d", classes=2, activation='sigmoid')
```

I get the following error:

```
intersection = torch.sum(targets * outputs)
RuntimeError: The size of tensor a (3) must match the size of tensor b (224) at non-singleton dimension 4
```
Do I understand correctly that the mask should have another dimension for classes? Also, should they be binary then? | closed | 2019-12-04T12:32:04Z | 2022-02-09T01:53:45Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/114 | [
"Stale"
] | PiotrowskiD | 4 |
davidteather/TikTok-Api | api | 263 | [BUG] - AttributeError: 'browser' object has no attribute 'verifyFp' | I am trying to get the stats of the last 10 videos for users with over 1M followers from a 150-user list on TikTok. However, once it is about halfway through printing all the stats, I get the error.
This is the error I receive:
```
Traceback (most recent call last):
File "snowball.py", line 24, in <module>
tiktok = api.getUser(accounts)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/TikTokApi/tiktok.py", line 723, in getUser
return self.getData(b, proxy=proxy)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/TikTokApi/tiktok.py", line 71, in getData
query = {'verifyFp': b.verifyFp, 'did': b.did, '_signature': b.signature}
AttributeError: 'browser' object has no attribute 'verifyFp'
```
my code script:
```
from TikTokApi import TikTokApi
api = TikTokApi()
# file_data is the list of 150 tiktok users accounts)
for accounts in file_data:
tiktok = api.getUser(accounts)
name = (tiktok['userInfo']['user']['uniqueId'])
followers = (tiktok['userInfo']['stats']['followerCount'])
if followers > 1000000:
user_videos = api.byUsername(name, count=10)
for video in user_videos:
stats = (video['stats'])
print(stats, name)
```
It should print the stats of the last 10 videos for each user with over 1 million followers, and it does so. However, I get an error after it has printed 21 users' stats.
- OS: macOS Catalina 10.15.3
- TikTokApi latest version
- My location: Australia
I have looked at similar issues in this GitHub repository; however, nothing has fixed it.
| closed | 2020-09-15T10:49:57Z | 2020-09-15T23:57:41Z | https://github.com/davidteather/TikTok-Api/issues/263 | [
"bug"
] | edenhikri | 4 |
keras-team/keras | deep-learning | 20,444 | Model.fit() error. Someone please help me fix this error. I am not able to figure it out | I'm building a capsule network in TensorFlow for binary classification using a custom CapsuleLayer. My model and associated components are as follows:
```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input, Conv2D, Reshape, Lambda

class CapsuleLayer(layers.Layer):
def __init__(self, num_capsule, dim_capsule, routings=3, **kwargs):
super(CapsuleLayer, self).__init__(**kwargs)
self.num_capsule = num_capsule
self.dim_capsule = dim_capsule
self.routings = routings
def build(self, input_shape):
self.kernel = self.add_weight(name='capsule_kernel',
shape=(input_shape[-1], self.num_capsule * self.dim_capsule),
initializer='glorot_uniform',
trainable=True)
def call(self, inputs):
inputs_hat = K.dot(inputs, self.kernel)
inputs_hat = K.reshape(inputs_hat, (-1, self.num_capsule, self.dim_capsule))
b = K.zeros_like(inputs_hat[:, :, 0])
for i in range(self.routings):
c = tf.nn.softmax(b, axis=1)
o = squash(tf.reduce_sum(c[..., None] * inputs_hat, 1))
if i < self.routings - 1:
b += tf.reduce_sum(inputs_hat * o[:, None, :], -1)
return o
def squash(vectors, axis=-1):
s_squared_norm = K.sum(K.square(vectors), axis, keepdims=True)
scale = s_squared_norm / (1 + s_squared_norm) / K.sqrt(s_squared_norm + K.epsilon())
return scale * vectors
# Network architecture and margin loss
def CapsNet(input_shape):
inputs = Input(shape=input_shape)
x = Conv2D(64, (9, 9), strides=1, activation='relu', padding='valid')(inputs)
x = Conv2D(128, (9, 9), strides=2, activation='relu', padding='valid')(x)
x = Reshape((-1, 8))(x)
primary_caps = CapsuleLayer(num_capsule=10, dim_capsule=8, routings=3)(x)
digit_caps = CapsuleLayer(num_capsule=2, dim_capsule=16, routings=3)(primary_caps)
out_caps = Lambda(lambda z: K.sqrt(K.sum(K.square(z), -1)))(digit_caps)
return models.Model(inputs, out_caps)
def margin_loss(y_true, y_pred):
m_plus, m_minus, lambda_val = 0.9, 0.1, 0.5
left = tf.square(tf.maximum(0., m_plus - y_pred))
right = tf.square(tf.maximum(0., y_pred - m_minus))
return tf.reduce_mean(tf.reduce_sum(y_true * left + lambda_val * (1 - y_true) * right, axis=-1))
```
When training, I receive this error:
```
ValueError: Cannot squeeze axis=-1, because the dimension is not 1.
```

I've set `class_mode='categorical'` in the `ImageDataGenerator` flow:

```python
train_generator = train_datagen.flow_from_directory(train_dir, target_size=(224, 224),
                                                    color_mode='grayscale', batch_size=64,
                                                    class_mode='categorical')
```
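If it helps, here is a minimal NumPy analogue of what I think is failing; my assumption is that something in the loss/metric pipeline tries to squeeze the last axis of my `(batch, 2)` one-hot labels, which only works when that axis has size 1:

```python
import numpy as np

y_binary = np.zeros((64, 1))  # shape a binary pipeline expects
y_onehot = np.zeros((64, 2))  # categorical labels from class_mode='categorical'

squeezed = np.squeeze(y_binary, axis=-1)  # fine: last axis has size 1
try:
    np.squeeze(y_onehot, axis=-1)         # fails, analogous to the Keras error
    squeeze_error = None
except ValueError as exc:
    squeeze_error = str(exc)
```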
I am using this model to classify an image dataset into 2 classes. Please help! | closed | 2024-11-04T01:29:39Z | 2024-12-21T02:00:56Z | https://github.com/keras-team/keras/issues/20444 | [
"stat:awaiting response from contributor",
"stale"
] | Israh-Abdul | 4 |
ultralytics/yolov5 | deep-learning | 13,379 | How to use a second GPU outside of the default GPU on yolov5? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I have tried `torch.cuda.set_device(...)`, `self.model.to(device)`, etc., but none of it seems to work; it always defaults to the main GPU.
### Additional
_No response_ | open | 2024-10-24T06:09:36Z | 2024-11-09T13:27:17Z | https://github.com/ultralytics/yolov5/issues/13379 | [
"question"
] | ijnrghjkdsmigywneig203 | 2 |
ray-project/ray | machine-learning | 51,190 | [Serve] Serve hangs for chained deployments with diamond dependencies data flow | ### What happened + What you expected to happen
I have one serve deployment that acts as an orchestrator for another 4 deployments whose results are chained together in a diamond shaped data flow:

In Ray[serve] versions 2.38 and above, when calling the orchestrator several times, Serve hangs after a couple of calls. This hanging behavior does not appear in Ray[serve]==2.37. So there seems to be a bug introduced in the later versions.
However, it is worth noting that if the first deployment in the chain is "awaited", Serve does not hang and completes normally all the repeated calls to the orchestrator.

This is one sample log output when Serve hangs:
```
2025-03-09 06:27:33,228 INFO worker.py:1832 -- Started a local Ray instance. View the dashboard at http://127.0.0.1:8265
INFO 2025-03-09 06:27:34,897 serve 30976 -- Started Serve in namespace "serve".
(ProxyActor pid=31022) INFO 2025-03-09 06:27:34,853 proxy 127.0.0.1 -- Proxy starting on node 759b2353ab09b748b9e3c6407dc81c24f750e57ae1bd4dc2159ce9c2 (HTTP port: 8000).
(ProxyActor pid=31022) INFO 2025-03-09 06:27:34,875 proxy 127.0.0.1 -- Got updated endpoints: {}.
(ServeController pid=31025) INFO 2025-03-09 06:27:34,982 controller 31025 -- Deploying new version of Deployment(name='h10', app='default') (initial target replicas: 1).
(ServeController pid=31025) INFO 2025-03-09 06:27:34,983 controller 31025 -- Deploying new version of Deployment(name='h20', app='default') (initial target replicas: 1).
(ServeController pid=31025) INFO 2025-03-09 06:27:34,984 controller 31025 -- Deploying new version of Deployment(name='h21', app='default') (initial target replicas: 1).
(ServeController pid=31025) INFO 2025-03-09 06:27:34,984 controller 31025 -- Deploying new version of Deployment(name='h30', app='default') (initial target replicas: 1).
(ServeController pid=31025) INFO 2025-03-09 06:27:34,985 controller 31025 -- Deploying new version of Deployment(name='Orchestrator', app='default') (initial target replicas: 1).
(ProxyActor pid=31022) INFO 2025-03-09 06:27:34,986 proxy 127.0.0.1 -- Got updated endpoints: {Deployment(name='Orchestrator', app='default'): EndpointInfo(route='/', app_is_cross_language=False)}.
(ProxyActor pid=31022) INFO 2025-03-09 06:27:34,989 proxy 127.0.0.1 -- Started <ray.serve._private.router.SharedRouterLongPollClient object at 0x14788b080>.
(ServeController pid=31025) INFO 2025-03-09 06:27:35,086 controller 31025 -- Adding 1 replica to Deployment(name='h10', app='default').
(ServeController pid=31025) INFO 2025-03-09 06:27:35,087 controller 31025 -- Adding 1 replica to Deployment(name='h20', app='default').
(ServeController pid=31025) INFO 2025-03-09 06:27:35,087 controller 31025 -- Adding 1 replica to Deployment(name='h21', app='default').
(ServeController pid=31025) INFO 2025-03-09 06:27:35,088 controller 31025 -- Adding 1 replica to Deployment(name='h30', app='default').
(ServeController pid=31025) INFO 2025-03-09 06:27:35,088 controller 31025 -- Adding 1 replica to Deployment(name='Orchestrator', app='default').
INFO 2025-03-09 06:27:36,008 serve 30976 -- Application 'default' is ready at http://127.0.0.1:8000/.
INFO 2025-03-09 06:27:36,008 serve 30976 -- Deployed app 'default' successfully.
INFO 2025-03-09 06:27:36,010 serve 30976 -- Started <ray.serve._private.router.SharedRouterLongPollClient object at 0x15e4ca7e0>.
>>>>> Final Result: (h30 (h20 (h10 p0)),(h21 (h10 p0)))
>>>>> Final Result: (h30 (h20 (h10 p1)),(h21 (h10 p1)))
(ServeReplica:default:h21 pid=31021) INFO 2025-03-09 06:27:36,035 default_h21 teebnua1 aaf03bf1-89a2-4327-a872-f5d1f2feba49 -- CALL process OK 0.7ms
(ServeReplica:default:h21 pid=31021) INFO 2025-03-09 06:27:36,041 default_h21 teebnua1 e77a1af5-0d92-42ad-bcd2-4a4e39a2076b -- CALL process OK 0.5ms
(ServeReplica:default:Orchestrator pid=31020) INFO 2025-03-09 06:27:36,023 default_Orchestrator r2i19i0f aaf03bf1-89a2-4327-a872-f5d1f2feba49 -- Started <ray.serve._private.router.SharedRouterLongPollClient object at 0x12e92d610>.
(ServeReplica:default:Orchestrator pid=31020) INFO 2025-03-09 06:27:36,037 default_Orchestrator r2i19i0f aaf03bf1-89a2-4327-a872-f5d1f2feba49 -- CALL process OK 18.9ms
(ServeReplica:default:Orchestrator pid=31020) INFO 2025-03-09 06:27:36,043 default_Orchestrator r2i19i0f e77a1af5-0d92-42ad-bcd2-4a4e39a2076b -- CALL process OK 4.9ms
(ServeReplica:default:h30 pid=31030) INFO 2025-03-09 06:27:36,036 default_h30 vepgphij aaf03bf1-89a2-4327-a872-f5d1f2feba49 -- CALL process OK 0.8ms
(ServeReplica:default:h30 pid=31030) INFO 2025-03-09 06:27:36,042 default_h30 vepgphij e77a1af5-0d92-42ad-bcd2-4a4e39a2076b -- CALL process OK 0.5ms
```
### Versions / Dependencies
Ray[serve] v2.43
Python v3.12
MacOS v14.7.1
### Reproduction script
```python
from ray import serve
from ray.serve.handle import DeploymentHandle
@serve.deployment
class SubDeployment:
def __init__(self, init_val: str) -> None:
self.init_val = init_val
async def process(self, val: str, val2: str | None = None) -> str:
if val2 is not None:
val = val + "," + val2
return "(" + self.init_val + " " + val + ")"
@serve.deployment
class Orchestrator:
def __init__(
self,
h10: DeploymentHandle,
h20: DeploymentHandle,
h21: DeploymentHandle,
h30: DeploymentHandle,
) -> None:
self.h10 = h10
self.h20 = h20
self.h21 = h21
self.h30 = h30
    async def process(self, val: str) -> str:
h10_result = self.h10.process.remote(val) # <-- Add `await` here as a workaround, and Serve will not hang.
h20_result = self.h20.process.remote(h10_result)
h21_result = self.h21.process.remote(h10_result)
h30_result = self.h30.process.remote(h20_result, h21_result)
return await h30_result
def main():
h10 = SubDeployment.options(name="h10").bind("h10")
h20 = SubDeployment.options(name="h20").bind("h20")
h21 = SubDeployment.options(name="h21").bind("h21")
h30 = SubDeployment.options(name="h30").bind("h30")
orchestrator = Orchestrator.bind(h10, h20, h21, h30)
handle = serve.run(orchestrator)
for i in range(10):
result = handle.process.remote(f"p{i}")
print(">>>>> Final Result:", result.result())
serve.shutdown()
if __name__ == "__main__":
main()
```
### Issue Severity
Medium: It is a significant difficulty but I can work around it. | open | 2025-03-09T05:46:52Z | 2025-03-10T22:22:22Z | https://github.com/ray-project/ray/issues/51190 | [
"bug",
"triage",
"serve"
] | msamadony | 0 |