| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
Nekmo/amazon-dash | dash | 36 | Raspberry install Failed | * amazon-dash version: 0.4.1
* Python version: 3.5
* Operating System: Raspbian Jessie Lite
### Description
When I execute `sudo python -m amazon_dash.install` it fails with the text:
"/usr/bin/python: No module named amazon_dash"
"/usr/bin/python3.5: Error while finding module specification for 'amazon_dash.install' (ImportError: No module named 'amazon_dash')"
"/usr/bin/python3: Error while finding module specification for 'amazon_dash.install' (ImportError: No module named 'amazon_dash')"
### What I Did
Reinstalled and installed with `pip3 install amazon-dash`.
Tried all variants of the python command: "python" / "python3" / "python3.5".
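This failure pattern usually means `pip3` and the `python` being invoked point at different interpreters. A quick check (a diagnostic sketch, plain Python, no amazon-dash required) of which interpreter and package paths are actually in play:

```python
# Print which interpreter is running and where it looks for packages;
# if pip3 installed amazon-dash into a different interpreter's
# site-packages, "python -m amazon_dash" will not find the module.
import sys

print(sys.executable)  # the interpreter actually being run
print([p for p in sys.path if "packages" in p])  # its package directories
```

Running this with each of `python`, `python3`, and `python3.5` shows whether they share the site-packages directory that `pip3` installed into.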
| closed | 2018-02-24T12:20:38Z | 2018-03-25T00:45:35Z | https://github.com/Nekmo/amazon-dash/issues/36 | [] | Marvv90 | 7 |
ymcui/Chinese-BERT-wwm | nlp | 6 | What is this trained model used for? | Newbie here: what is this used for? It looks very impressive. What are its application scenarios? | closed | 2019-06-23T07:57:30Z | 2019-06-23T08:04:04Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/6 | [] | mmrwbb | 1 |
aiogram/aiogram | asyncio | 1,465 | aiogram\utils\formatting.py (as_section) | ### Checklist
- [X] I am sure the error is coming from aiogram code
- [X] I have searched in the issue tracker for similar bug reports, including closed ones
### Operating system
Windows 10
### Python version
3.12
### aiogram version
3.4.1
### Expected behavior
aiogram\utils\formatting.py (as_section)
...
return Text(title, "\n", **as_list(*body)**)
### Current behavior
aiogram\utils\formatting.py (as_section)
```
def as_section(title: NodeType, *body: NodeType) -> Text:
"""
Wrap elements as simple section, section has title and body
:param title:
:param body:
:return: Text
"""
return Text(title, "\n", *body)
```
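The reporter's suggestion (see Additional information below) amounts to joining the body elements with newlines. A dependency-free sketch of the difference, using stand-in functions that mimic the rendering (assumed behavior for illustration, not aiogram's actual code):

```python
# Stand-ins (not aiogram's real implementation) showing why joining the
# body with newlines matters: without a separator the body elements run
# together on one line after the title.
def as_section_current(title, *body):
    return title + "\n" + "".join(body)      # mirrors Text(title, "\n", *body)

def as_section_expected(title, *body):
    return title + "\n" + "\n".join(body)    # mirrors Text(title, "\n", as_list(*body))

print(as_section_current("Title", "line 1", "line 2"))   # body elements run together
print(as_section_expected("Title", "line 1", "line 2"))  # one element per line
```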
### Steps to reproduce
Not required
### Code example
_No response_
### Logs
_No response_
### Additional information
It is necessary to use "as_list(*body)" instead of "*body", because "\n" characters are not added to the end of each body element. | closed | 2024-04-19T09:58:37Z | 2024-04-21T19:17:52Z | https://github.com/aiogram/aiogram/issues/1465 | [
"bug",
"good first issue"
] | post1917 | 2 |
pydantic/pydantic-ai | pydantic | 844 | api_key is required even if ignored | https://ai.pydantic.dev/models/#example-local-usage
The example does not indicate that you need to set a dummy api_key i.e.
*Does not work*
```ollama_model = OpenAIModel(model_name="llama3.2", base_url="http://127.0.0.1:11434/v1")```
neither does
```ollama_model = OpenAIModel(model_name="llama3.2", base_url="http://127.0.0.1:11434/v1", api_key="")```
but this works:
```ollama_model = OpenAIModel(model_name="llama3.2", base_url="http://127.0.0.1:11434/v1", api_key="dummy")```
I suggest adding a dummy api_key argument to the example so that it would work by default.
I assume this does not show up as an issue for most people, as they would already have some OpenAI API key set up to act as the dummy :) | closed | 2025-02-03T10:07:18Z | 2025-02-04T01:10:29Z | https://github.com/pydantic/pydantic-ai/issues/844 | [
"bug"
] | hansharhoff | 2 |
Python3WebSpider/ProxyPool | flask | 9 | How to debug this project in PyCharm | I am using a remote environment and want to debug this project in PyCharm, but every time I debug run.py it says the file cannot be found. How can I debug this project in PyCharm? | closed | 2018-07-06T06:08:43Z | 2020-02-19T16:56:08Z | https://github.com/Python3WebSpider/ProxyPool/issues/9 | [] | bbhl79 | 0 |
jina-ai/serve | machine-learning | 5,486 | do not apply limits when gpus all in K8s | Opening this issue to track: https://github.com/jina-ai/jina/pull/5485
Currently, when `gpus: all` is applied, `resources.limits` will be set to `all`. The desired behavior is to not have `resources.limits` in K8s yaml. An example of the desired K8s yaml for the Flow:
```yaml
jtype: Flow
with:
protocol: grpc
executors:
- name: executor1
uses: jinahub+docker://Sentencizer
gpus: all
```
would be as follows:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: executor1
namespace: somens
spec:
replicas: 1
selector:
matchLabels:
app: executor1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
annotations:
linkerd.io/inject: enabled
labels:
app: executor1
jina_deployment_name: executor1
ns: somens
pod_type: WORKER
shard_id: '0'
spec:
containers:
- args:
- executor
- --name
- executor1
- --extra-search-paths
- ''
- --k8s-namespace
- somens
- --uses
- config.yml
- --port
- '8080'
- --gpus
- all
- --port-monitoring
- '9090'
- --uses-metas
- '{}'
- --native
command:
- jina
env:
- name: POD_UID
valueFrom:
fieldRef:
fieldPath: metadata.uid
- name: JINA_DEPLOYMENT_NAME
value: executor1
envFrom:
- configMapRef:
name: executor1-configmap
image: jinahub/c6focg47:63366804b56f6748d3b16036
imagePullPolicy: IfNotPresent
name: executor
ports:
- containerPort: 8080
readinessProbe:
exec:
command:
- jina
- ping
- executor
- 127.0.0.1:8080
initialDelaySeconds: 5
periodSeconds: 20
timeoutSeconds: 10
``` | closed | 2022-12-05T09:49:52Z | 2022-12-05T17:10:41Z | https://github.com/jina-ai/serve/issues/5486 | [] | winstonww | 0 |
clovaai/donut | nlp | 308 | What should be the configuration of the machine to train the model? | open | 2024-07-01T09:29:07Z | 2024-07-01T09:29:07Z | https://github.com/clovaai/donut/issues/308 | [] | anant996 | 0 | |
gradio-app/gradio | data-visualization | 10,783 | Gradio: predict() got an unexpected keyword argument 'message' | ### Describe the bug
Trying to connect my Telegram bot (webhook) via the API to my public Gradio Space on Hugging Face.
From the terminal everything works OK.
But via the Telegram bot I always get the same issue: Error in connection Gradio: predict() got an unexpected keyword argument 'message'.
What should I use to make it work properly?
HF:
Gradio sdk_version: 5.20.1
Requirements.txt
- gradio==5.20.1
- fastapi>=0.112.2
- gradio-client>=1.3.0
- urllib3~=2.0
- requests>=2.28.2
- httpx>=0.24.1
- aiohttp>=3.8.5
- async-timeout==4.0.2
- huggingface-hub>=0.19.3
### Have you searched existing issues? ๐
- [x] I have searched and found no existing issues
### Reproduction
```python
import logging
from gradio_client import Client

# HF_SPACE_NAME and HF_TOKEN are defined elsewhere in the bot's configuration

# Gradio API
async def send_request_to_gradio(query: str, chat_history: list = None) -> str:
    try:
        client = Client(HF_SPACE_NAME, hf_token=HF_TOKEN)
        logging.info(f"Sending request to Gradio: query={query}, chat_history={chat_history}")
        result = client.predict(
            message=query,
            chat_history=chat_history or None,
            api_name="/chat"
        )
        logging.info(f"Reply from Gradio: {result}")
        # Process the result
        if isinstance(result, list) and result:
            response = result[0]["content"] if isinstance(result[0], dict) and "content" in result[0] else "Not found"
            return response
        else:
            logging.warning("Empty or error response from the Gradio API.")
            return "Failed to get a response."
    except Exception as e:
        logging.error(f"Error in connection Gradio: {e}")
        return "Error. Try again"
```
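As a side note, the error text itself can be reproduced without Gradio at all: `client.predict` forwards keyword arguments to the endpoint's declared parameters, so a keyword the endpoint does not declare raises the same `TypeError` as any Python call with an unknown keyword. A sketch with a hypothetical endpoint (not the Space's actual signature):

```python
# A plain function with a differently named first parameter reproduces the
# "unexpected keyword argument 'message'" failure mode.
def predict(query, chat_history=None):  # endpoint declares "query", not "message"
    return f"echo: {query}"

try:
    predict(message="hi")               # mirrors client.predict(message=...)
except TypeError as exc:
    print(exc)                          # unexpected keyword argument 'message'

print(predict("hi"))                    # passing positionally avoids the mismatch
```

If the Space's `/chat` endpoint does not actually name its first parameter `message`, passing the arguments positionally would sidestep the mismatch.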
### Screenshot
_No response_
### Logs
```shell
===== Application Startup at 2025-03-11 11:37:38 =====
tokenizer_config.json: 0%| | 0.00/453 [00:00<?, ?B/s]
tokenizer_config.json: 100%|โโโโโโโโโโ| 453/453 [00:00<00:00, 3.02MB/s]
tokenizer.json: 0%| | 0.00/16.3M [00:00<?, ?B/s]
tokenizer.json: 100%|โโโโโโโโโโ| 16.3M/16.3M [00:00<00:00, 125MB/s]
added_tokens.json: 0%| | 0.00/23.0 [00:00<?, ?B/s]
added_tokens.json: 100%|โโโโโโโโโโ| 23.0/23.0 [00:00<00:00, 149kB/s]
special_tokens_map.json: 0%| | 0.00/173 [00:00<?, ?B/s]
special_tokens_map.json: 100%|โโโโโโโโโโ| 173/173 [00:00<00:00, 1.05MB/s]
config.json: 0%| | 0.00/879 [00:00<?, ?B/s]
config.json: 100%|โโโโโโโโโโ| 879/879 [00:00<00:00, 4.49MB/s]
model.safetensors: 0%| | 0.00/1.11G [00:00<?, ?B/s]
model.safetensors: 3%|โ | 31.5M/1.11G [00:01<00:39, 27.1MB/s]
model.safetensors: 6%|โ | 62.9M/1.11G [00:02<00:37, 28.0MB/s]
model.safetensors: 68%|โโโโโโโ | 756M/1.11G [00:03<00:01, 313MB/s]
model.safetensors: 100%|โโโโโโโโโโ| 1.11G/1.11G [00:03<00:00, 300MB/s]
/usr/local/lib/python3.10/site-packages/gradio/chat_interface.py:334: UserWarning: The 'tuples' format for chatbot messages is deprecated and will be removed in a future version of Gradio. Please set type='messages' instead, which uses openai-style 'role' and 'content' keys.
self.chatbot = Chatbot(
* Running on local URL: http://0.0.0.0:7860, with SSR โก (experimental, to disable set `ssr=False` in `launch()`)
To create a public link, set `share=True` in `launch()`.
```
### System Info
```shell
title: Nika Prop
emoji: ๐ฌ
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 5.20.1
app_file: app.py
pinned: false
short_description: Nika real estate
```
### Severity
Blocking usage of gradio | closed | 2025-03-11T12:12:43Z | 2025-03-18T10:28:21Z | https://github.com/gradio-app/gradio/issues/10783 | [
"bug",
"needs repro"
] | brokerelcom | 11 |
koxudaxi/datamodel-code-generator | fastapi | 1,668 | Impossible to get the json schema of a json schema object | **Describe the bug**
```python
from datamodel_code_generator.parser.jsonschema import JsonSchemaObject
if __name__ == "__main__":
print(JsonSchemaObject.model_json_schema())
```
Raises
```
pydantic.errors.PydanticInvalidForJsonSchema: Cannot generate a JsonSchema for core_schema.PlainValidatorFunctionSchema ({'type': 'no-info', 'function': <bound method UnionIntFloat.validate of <class 'datamodel_code_generator.types.UnionIntFloat'>>})
```
**To Reproduce**
See code above
**Expected behavior**
The json schema of a json schema object.
**Version:**
- OS: Linux 6.2.0
- Python version: 3.11.4
- datamodel-code-generator version: 0.22.1
| closed | 2023-11-08T17:31:29Z | 2023-11-09T00:59:54Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1668 | [] | jboulmier | 1 |
davidsandberg/facenet | tensorflow | 931 | What is the training set of the LFW data? | I am new to face recognition and have a question about the LFW dataset.
I want to know the training set of the LFW dataset (I want to use the LFW data under the unrestricted protocol). Specifically, I want to know whether the training set is peopleDevTrain.txt.
| open | 2018-12-17T02:20:16Z | 2018-12-17T02:20:16Z | https://github.com/davidsandberg/facenet/issues/931 | [] | guojiapeng00 | 0 |
cupy/cupy | numpy | 8,103 | Noise in Complex Number Computations | ### Description
I am doing some experiments involving variations of the Mandelbrot set and as such iterations over the complex plane. I have noticed noisy results using cupy as compared to numpy.
### To Reproduce
```
import matplotlib.pyplot as plt
def main():
HEIGHT = 9
WIDTH = 16
RATIO = WIDTH/HEIGHT
RES_SPACE = 50
MIN = -1
MAX = 1
N_ITER = 17
x = np.linspace(MIN*RATIO, MAX*RATIO, WIDTH*RES_SPACE, dtype=np.float64)
y = np.linspace(MIN, MAX, HEIGHT*RES_SPACE, dtype=np.float64)
complex_plane = x + 1j * y[:,None]
complex_plane = complex_plane.astype(np.complex128)
mask = np.ones_like(complex_plane, dtype=bool)
def iterate(i, max, C=complex_plane, M=mask, N=N_ITER):
OUT = np.zeros_like(M, dtype=np.uint8)
Z = np.zeros_like(C)
C = np.copy(C)
M = np.copy(M)
max = max + max*1j
for n in range(N):
M[Z > max] = False
Z *= np.exp(-i*10j)
C *= np.exp(i*1j)
Z[M] = Z[M]**1.5 + C[M]**-3
Z[M] *= np.exp(i*C[M]**-3)
OUT -= M
OUT *= 15
return OUT
i = 3
zoom = 1- ((-i + np.pi) / (np.pi*2))
z = 1-zoom
C = complex_plane * (1*(np.exp(z) - 1))
X = -i
MAX = np.exp(np.tan(X/2)*10)
im = iterate(i, MAX, C=C)
return im
if __name__ == "__main__":
import cupy as np
im = main().get()
plt.imshow(im)
plt.title('CuPy')
plt.show()
import numpy as np
im = main()
plt.imshow(im)
plt.title('NumPy')
plt.show()
```
### Installation
None
### Environment
Google Colab
```
OS : Linux-6.1.58+-x86_64-with-glibc2.35
Python Version : 3.10.12
CuPy Version : 12.2.0
CuPy Platform : NVIDIA CUDA
NumPy Version : 1.23.5
SciPy Version : 1.11.4
Cython Build Version : 0.29.36
Cython Runtime Version : 3.0.7
CUDA Root : /usr/local/cuda
nvcc PATH : /usr/local/cuda/bin/nvcc
CUDA Build Version : 12020
CUDA Driver Version : 12020
CUDA Runtime Version : 12020
cuBLAS Version : (available)
cuFFT Version : 11008
cuRAND Version : 10303
cuSOLVER Version : (11, 5, 2)
cuSPARSE Version : (available)
NVRTC Version : (12, 2)
Thrust Version : 200101
CUB Build Version : 200101
Jitify Build Version : <unknown>
cuDNN Build Version : 8801
cuDNN Version : 8906
NCCL Build Version : 21602
NCCL Runtime Version : 21903
cuTENSOR Version : None
cuSPARSELt Build Version : None
Device 0 Name : Tesla T4
Device 0 Compute Capability : 75
Device 0 PCI Bus ID : 0000:00:04.0
```
### Additional Information


| open | 2024-01-11T12:12:08Z | 2024-02-07T19:54:36Z | https://github.com/cupy/cupy/issues/8103 | [
"issue-checked"
] | knods3k | 6 |
gee-community/geemap | jupyter | 950 | Specify a 'datetime' column when converting from (Geo)DataFrame to FeatureCollection |
### Description
When I convert a (Geo)DataFrame that contains a date column to a FeatureCollection, I cannot filter by date, because the date is only stored in the properties of the FeatureCollection.
### Source code
```
import ee
import geemap
import geopandas as gpd

gdf_radd = gpd.read_file('RADD_alerts.gpkg')
alerts_subset = gdf_radd.query('"2021-01-01" < date < "2021-03-03"')
# alerts_subset returns a non-empty GeoDataFrame containing rows within the selected dates
ee_radd = geemap.geopandas_to_ee(gdf_radd)
alerts_subset_ee = ee_radd.filterDate(ee.Date('2021-01-01'), ee.Date('2021-03-03'))
# alerts_subset_ee is empty; it is not possible to filter by date
```
Desired behaviour
```
#I would be able to specify a column that contains date/datetime when converting from (Geo)DataFrame to GEE FeatureCollection
ee_radd = geemap.geopandas_to_ee(gdf_radd, datetime=gdf_radd.date)
```
Possible sketch of a solution
```
def set_date(feature):
date = feature.get('date').getInfo()
year, month, day = [int(i) for i in date.split()[0].split('/')]
date_mls = ee.Date.fromYMD(year, month, day).millis()
feature = feature.set("system:time_start", date_mls)
return feature
ee_radd_with_dates = ee_radd.map(set_date)
```
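The date-to-epoch-milliseconds conversion that the sketch above performs server-side can also be done client-side before the upload. A minimal pure-Python version, assuming the same "YYYY/MM/DD ..." string format the sketch parses:

```python
from datetime import datetime, timezone

def to_millis(date_str):
    # "2021/01/03 00:00:00" -> epoch milliseconds, as system:time_start expects
    year, month, day = (int(p) for p in date_str.split()[0].split('/'))
    return int(datetime(year, month, day, tzinfo=timezone.utc).timestamp() * 1000)

print(to_millis("2021/01/01 00:00:00"))  # 1609459200000
```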
| closed | 2022-02-28T11:54:53Z | 2022-03-03T11:09:28Z | https://github.com/gee-community/geemap/issues/950 | [
"Feature Request"
] | janpisl | 2 |
mckinsey/vizro | data-visualization | 888 | Multi-step wizard | ### Which package?
vizro
### What's the problem this feature will solve?
Vizro excels at creating modular dashboards, but as users tackle more sophisticated applications, the need arises for reusable and extensible complex UI components. These include multi-step wizards with dynamic behavior, CRUD operations, and seamless integration with external systems. Currently, building such components requires significant effort, often resulting in custom, non-reusable code. This limits the scalability and maintainability of applications developed with Vizro.
Iโm working on applications that require complex workflows, such as multi-step wizards with real-time input validation and CRUD operations. While Iโve managed to achieve this using Dash callbacks and custom Python code, the lack of modularity and reusability makes the process cumbersome. Every new project requires re-implementing these components, which is time-consuming and error-prone.
### Describe the solution you'd like
I envision Vizro evolving to support the creation of highly reusable and extensible complex components, which could transform how users approach sophisticated Dash applications. Hereโs what this could look like:
- **Object-Oriented Component Development**: Provide the ability to encapsulate UI components and their logic (advanced dynamic callbacks) in Python classes, making them easy to reuse and extend across projects. This could be similar to the component architecture found in frameworks like React.
- **Modular Multi-Step Wizard**: A powerful wizard component with:
- Configurable steps that can be added or modified dynamically.
- Real-time input validation and dynamic data population based on user inputs or external data.
- Visual progress indicators and intuitive navigation controls (Next, Previous, Save & Exit).
- **Integrated CRUD Operations**: Built-in support for Create, Read, Update, and Delete functionality, ensuring data security and consistency:
- Temporary data storage during user navigation.
- Soft-delete functionality and version control for changes.
- Seamless integration with external databases or APIs.
- **Dynamic Callback Management**: Enable advanced callbacks that can be registered and updated dynamically, reducing the complexity of handling inter-component interactions.
- **Extensibility Features**:
- Plug-and-play custom components (e.g., specialized form elements, interactive charts).
- Hooks for integrating with external systems, allowing data exchange and advanced workflows.
- Flexible step-specific logic for conditional rendering and data pre-filling.
---
**How This Could Enhance Vizro**
By introducing such capabilities, Vizro would empower users to go beyond dashboards and build complex, enterprise-level applications more efficiently. These features could help attract a broader audience, including those who require not only dashboards but also robust, interactive data workflows in their data applications.
**Similar Solutions for Inspiration**
- **Material-UI Stepper**: Offers a modular multi-step workflow component.
- **Appsmith multi-step wizard**: Facilitates reusable, custom UI components; [example](https://docs.appsmith.com/build-apps/how-to-guides/Multi-step-Form-or-Wizard-Using-Tabs).
---
My imagination:
Below is a high-level, simplified implementation of the multi-step wizard, where all wizard components and functionality are isolated in a class (`Wizard`) following the Facade design pattern, complemented by elements of the Factory pattern and the State pattern. This class dynamically creates the logic based on the parameters and the integration with the steps. The `Step` class represents individual steps.
**wizard_module.py**
```python
from dash import html, dcc, Input, Output, State, MATCH, ALL, ctx
class Step:
def __init__(self, id, label, components, validation_rules):
self.id = id
self.label = label
self.components = components
self.validation_rules = validation_rules
class Wizard:
def __init__(
self,
steps,
title=None,
previous_button_text='Previous',
next_button_text='Next',
current_step_store_id='current_step',
form_data_store_id='form_data',
wizard_content_id='wizard_content',
wizard_message_id='wizard_message',
prev_button_id='prev_button',
next_button_id='next_button',
message_style=None,
navigation_style=None,
validate_on_next=True,
custom_callbacks=None,
):
        # Instance attributes (store the configuration on self)
        ...

    def render_layout(self):
        # Returns the UI components of the form, tabs, buttons, etc.
        ...

    def render_step(self, step):
        # Returns the UI components of a single step
        ...

    def register_callbacks(self, app):
        # Dynamic callbacks for the multi-step logic, such as navigation and feedback
        ...
```
**app.py**
```python
from dash import Dash
from wizard_module import Wizard, Step
# Define the wizard steps
steps = [
Step(
id=1,
label="Step 1: User Info",
components=[
{"id": "name_input", "placeholder": "Enter your name"},
{"id": "email_input", "placeholder": "Enter your email", "input_type": "email"},
{"id": "password_input", "placeholder": "Enter your password", "input_type": "password"},
],
validation_rules=[
{"id": "name_input", "property": "value"},
{"id": "email_input", "property": "value"},
{"id": "password_input", "property": "value"},
],
),
Step(
id=2,
label="Step 2: Address Info",
components=[
{"id": "address_input", "placeholder": "Enter your address"},
{"id": "city_input", "placeholder": "Enter your city"},
{"id": "state_input", "placeholder": "Enter your state"},
],
validation_rules=[
{"id": "address_input", "property": "value"},
{"id": "city_input", "property": "value"},
{"id": "state_input", "property": "value"},
],
)
]
# Initialize the wizard
wizard = Wizard(
steps=steps,
title="User Registration Wizard",
previous_button_text='Back',
next_button_text='Continue',
message_style={'color': 'blue', 'marginTop': '10px'},
navigation_style={'marginTop': '30px'},
validate_on_next=True,
custom_callbacks={'on_complete': some_completion_function}
)
# Create the Dash app
app = Dash(__name__)
app.layout = wizard.render_layout()
# Register wizard callbacks
wizard.register_callbacks(app)
if __name__ == '__main__':
app.run_server(debug=True)
```
**Explanation:**
- **Isolation of Components and Logic:** All wizard functionalities, including rendering and navigation logic, are encapsulated within the `Wizard` class. Each step is represented by a `Step` class instance.
- **Dynamic Logic Creation:** The `Wizard` class dynamically generates the layout and callbacks based on the steps provided. The validation logic is applied dynamically using the `validation_rules` defined in each `Step` instance.
- **Ease of Extension:** To add more steps or modify existing ones, you simply need to create or update instances of the `Step` class. The `Wizard` class handles the integration and navigation between steps without any additional changes.
- **Validation Rules:** Each `Step` contains a `validation_rules` list, which specifies which input components need to be validated. This allows for flexible validation logic that can be customized per step.
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | open | 2024-11-19T20:07:30Z | 2024-11-21T19:42:00Z | https://github.com/mckinsey/vizro/issues/888 | [
"Feature Request :nerd_face:"
] | mohammadaffaneh | 3 |
Teemu/pytest-sugar | pytest | 223 | Print test name before result in verbose mode | Without pytest-sugar, when running pytest in verbose mode, the name of the current test is printed immediately.
This is very useful for long running tests since you know which test is hanging, and which test might be killed.
However, with pytest-sugar, the name of the test is only printed when the test succeeds.
I've provided an example of the code to reproduce this as well some minimal environment information.
The current environment in question is running Python 3.9.
Let me know if there is any more information you need to recreate the bug/"missing feature".
### Conda environment
```
(mcam_dev) โ ~/Downloads
09:13 $ mamba list pytest
# packages in environment at /home/mark/mambaforge/envs/mcam_dev:
#
# Name Version Build Channel
pytest 6.2.4 py39hf3d152e_0 conda-forge
pytest-env 0.6.2 py_0 conda-forge
pytest-forked 1.3.0 pyhd3deb0d_0 conda-forge
pytest-localftpserver 1.1.2 pyhd8ed1ab_0 conda-forge
pytest-qt 4.0.2 pyhd8ed1ab_0 conda-forge
pytest-sugar 0.9.4 pyh9f0ad1d_1 conda-forge
pytest-timeout 1.4.2 pyh9f0ad1d_0 conda-forge
pytest-xdist 2.3.0 pyhd8ed1ab_0 conda-forge
```
#### Command used to run pytest
````pytest test_me.py````
#### Test file `test_me.py`
````python
from time import sleep
import pytest
@pytest.mark.parametrize('time', range(5))
def test_sleep(time):
sleep(time)
````
#### Output
Without pytest-sugar. Notice how I captured the name of `test_sleep[2]` before the result of the test appeared.

With pytest-sugar. Notice how I was able to capture the screenshot while `test_sleep[4]` was running, but before the name of the test appeared

| open | 2021-08-16T13:17:54Z | 2023-07-26T11:17:27Z | https://github.com/Teemu/pytest-sugar/issues/223 | [
"enhancement"
] | hmaarrfk | 4 |
man-group/arctic | pandas | 76 | With lib_type='TickStoreV3': No field of name index - index.name and index.tzinfo not preserved - max_date returning min date (without timezone) | Hello,
this code
``` python
from pandas_datareader import data as pdr
symbol = "IBM"
df = pdr.DataReader(symbol, "yahoo", "2010-01-01", "2015-12-29")
df.index = df.index.tz_localize('UTC')
from arctic import Arctic
store = Arctic('localhost')
store.initialize_library('library_name', 'TickStoreV3')
library = store['library_name']
library.write(symbol, df)
```
raises
``` python
ValueError: no field of name index
```
I'm using `TickStoreV3` as `lib_type` because I'm not very interested (at least for now) in audited writes, versioning, etc.
I noticed that
```
>>> df['index']=0
>>> library.write(symbol, df)
1 buckets in 0.015091: approx 6626466 ticks/sec
```
seems to fix this... but
```
>>> library.read(symbol)
index High Adj Close ... Low Close Open
1970-01-01 01:00:00+01:00 0 132.970001 116.564610 ... 130.850006 132.449997 131.179993
1970-01-01 01:00:00+01:00 0 131.850006 115.156514 ... 130.100006 130.850006 131.679993
1970-01-01 01:00:00+01:00 0 131.490005 114.408453 ... 129.809998 130.000000 130.679993
1970-01-01 01:00:00+01:00 0 130.250000 114.012427 ... 128.910004 129.550003 129.869995
1970-01-01 01:00:00+01:00 0 130.919998 115.156514 ... 129.050003 130.850006 129.070007
... ... ... ... ... ... ... ...
1970-01-01 01:00:00+01:00 0 135.830002 135.500000 ... 134.020004 135.500000 135.830002
1970-01-01 01:00:00+01:00 0 138.190002 137.929993 ... 135.649994 137.929993 135.880005
1970-01-01 01:00:00+01:00 0 139.309998 138.539993 ... 138.110001 138.539993 138.300003
1970-01-01 01:00:00+01:00 0 138.880005 138.250000 ... 138.110001 138.250000 138.429993
1970-01-01 01:00:00+01:00 0 138.039993 137.610001 ... 136.539993 137.610001 137.740005
[1507 rows x 7 columns]
```
It looks as if `write` were looking for a DataFrame with a column named 'index', which is quite odd.
If I do
```
df['index']=1
library.write(symbol, df)
```
then
```
library.write(symbol, df)
```
raises
```
OverflowError: Python int too large to convert to C long
```
Any idea ?
| closed | 2015-12-29T21:30:39Z | 2016-01-04T20:56:42Z | https://github.com/man-group/arctic/issues/76 | [] | femtotrader | 13 |
chatanywhere/GPT_API_free | api | 3 | Can it be used with api.openai.com? | If I'm using a VPN, can the host be changed to api.openai.com? | closed | 2023-05-16T02:21:56Z | 2023-05-24T03:54:28Z | https://github.com/chatanywhere/GPT_API_free/issues/3 | [] | MrGongqi | 3 |
modoboa/modoboa | django | 2,247 | Contacts and Calendar throw internal error | # Impacted versions
* OS Type: Debian
* OS Version: 10
* Database Type: MySQL
* Database version: 10.3.27-MariaDB-0+deb10u1
* Modoboa: 1.17.0
* installer used: Yes
* Webserver: Nginx
* python --version: Python 3.7.3
# Steps to reproduce
* Do a default install of Modoboa on Debian 10.
* [Using "mailsrv" instead of "mail" as the mail server's subdomain. Using Let's Encrypt.]
* Set up a first mail domain for testing (modoboa.MY-DOMAIN-HERE.de)
* Set up a domain administrator account with mail box (hostmaster@modoboa.MY-DOMAIN-HERE.de)
* Using fresh account, try to access "Contacts" or "Calendar" from the menu.
# Current behavior
```
Sorry
An internal error has occured.
```
# Expected behavior
Open contacts or calendar module.
| open | 2021-05-16T01:35:14Z | 2021-06-12T23:56:49Z | https://github.com/modoboa/modoboa/issues/2247 | [
"bug"
] | mas1701 | 15 |
cvat-ai/cvat | tensorflow | 8,380 | > Hi, we have added SAM2 on SaaS (https://app.cvat.ai/) and for Enterprise customers: https://www.cvat.ai/post/meta-segment-anything-model-v2-is-now-available-in-cvat-ai | closed | 2024-08-30T13:01:42Z | 2024-08-30T13:06:55Z | https://github.com/cvat-ai/cvat/issues/8380 | [] | gauravlochab | 0 | |
keras-team/keras | deep-learning | 20,283 | Training performance degradation after switching from Keras 2 mode to Keras 3 using Tensorflow | I've been working on upgrading my Keras 2 code to just work with Keras 3 without going fully back-end agnostic. However, while everything works fine after resolving compatibility, my training speed has severely degraded by maybe even a factor 10. I've changed the following to get Keras 3 working:
1. Changed `tensorflow.keras` to `keras` calls.
2. Updated model/weights saving and loading to use the new `export` function and `weights.h5` format.
3. Updated a callback at the end of the epoch to be a `keras.Callback` instead of the old `BaseLogger`.
4. Added `@keras.saving.register_keras_serializable()` to custom metric and loss functions.
5. Updated my online dataset generator to use `keras.Sequential` data augmentation instead of the removed `ImageDataGenerator`.
6. Removed the `max_queue_size` kwarg from the `model.fit` and `model.predict` calls since it has been removed.
In terms of hardware/packages, I'm using Python 3.11.10, keras 3.5.0 and Tensorflow 2.16.2 on a Macbook Pro M2. I've also noticed that my GPU and CPU usage is much higher while running the newer version. I've confirmed using `git stash` that specifically the changes mentioned above are causing the performance degradation. My suspicion is that the Apple hardware is somehow resulting in worse performance, but I've yet to confirm it using a regular x86 machine. | open | 2024-09-24T07:50:53Z | 2024-10-14T07:00:40Z | https://github.com/keras-team/keras/issues/20283 | [
"type:bug/performance",
"stat:awaiting keras-eng"
] | DavidHidde | 3 |
ultralytics/ultralytics | pytorch | 18,871 | Does tracking mode support NMS threshold? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I'm currently using YOLOv10 to track some objects, and there are a lot of cases where two bounding boxes (of the same class) have a high IoU. I tried setting the tracker's NMS threshold (the "iou" parameter) very low, but it doesn't change anything... I also tried setting a high NMS threshold (expecting a lot of overlapping BBs), but no matter what value I set, the predictions/tracking look the same.
I tried to search about the parameters of the YOLOv10 tracker on the Ultralytics Docs and on Ultralytics GitHub but couldn't find anything about the NMS Threshold on the tracker. Is it implemented? Is the parameter name "iou" similar to the predict mode?
Can someone help me in this regard? Thanks!
### Additional
_No response_ | closed | 2025-01-24T20:48:06Z | 2025-01-26T19:08:07Z | https://github.com/ultralytics/ultralytics/issues/18871 | [
"question",
"track"
] | argo-gabriel | 5 |
bmoscon/cryptofeed | asyncio | 897 | BinanceDelivery Candles (Rest) | **General: Thank you**
First of all, I would like to convey my gratitude to you. You have created a fantastic library.
**Describe the bug**
The candles method defined in the Binance REST mixin respects the request limit and adjusts the window by updating the start time (a forward request). This works for Spot and UM. Unfortunately, the Binance API is not that consistent: for CM/Delivery the pagination is backward, i.e. the end time has to be updated instead: `end = data[0][0] - 1`
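A dependency-free sketch of the two pagination styles (a hypothetical `fetch(start, end)`, not cryptofeed's actual code; forward-style endpoints are assumed to return the earliest window of `[start, end]`, backward-style endpoints the latest window, each capped at `limit` candles sorted ascending):

```python
# Sketch of forward vs. backward pagination over a limited candles endpoint.
def paginate_forward(fetch, start, end):
    out = []
    while start <= end:
        data = fetch(start, end)
        if not data:
            break
        out += data
        start = data[-1][0] + 1      # advance past the latest returned candle
    return out

def paginate_backward(fetch, start, end):
    out = []
    while end >= start:
        data = fetch(start, end)
        if not data:
            break
        out = data + out             # prepend the older window
        end = data[0][0] - 1         # request everything before the earliest
    return out

def fake_forward(start, end, limit=2):   # earliest `limit` timestamps
    return [(t,) for t in range(start, end + 1)][:limit]

def fake_backward(start, end, limit=2):  # latest `limit` timestamps
    return [(t,) for t in range(start, end + 1)][-limit:]

print(paginate_forward(fake_forward, 0, 4))   # [(0,), (1,), (2,), (3,), (4,)]
print(paginate_backward(fake_backward, 0, 4)) # [(0,), (1,), (2,), (3,), (4,)]
```

Both strategies end up with the same ascending, gap-free result; they only differ in which edge of the window moves between requests.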
**To Reproduce**
Use `BinanceDelivery` and request a period long enough to exceed `limit=1000`, so that multiple REST requests have to be triggered. Ideally, you can temporarily set the limit to 1 and send a request that expects two candles.
**Expected behavior**
The data is sorted (ascending) covering data of the requested period.
| open | 2022-08-27T20:01:05Z | 2022-08-28T07:29:51Z | https://github.com/bmoscon/cryptofeed/issues/897 | [
"bug"
] | christophlins | 0 |
ageitgey/face_recognition | python | 887 | Wrong face detection | * face_recognition version: 1.2.3
* Python version: 3.6
* Operating System: Ubuntu 18.04
### Description
Hello, I got a wrong face detection: a cartoon cat face is detected as a human face.
I'm using this to detect face locations:
face_locations1 = face_recognition.face_locations(selfieimage, model="cnn")

any clue ??
| open | 2019-07-23T07:51:46Z | 2019-07-25T22:04:54Z | https://github.com/ageitgey/face_recognition/issues/887 | [] | blinkbink | 1 |
plotly/plotly.py | plotly | 4,355 | Just a question | Im learning to use plotly and wanted to know some stuff about it,
is it possible to make a exe app with plotly inside?, by this i mean, is it possible to make a standalone software without depending on html or any web services to run plotly modules?.
And othe question, wich library could be usefull to combine with plolty and make a Gui for the software?, since tkinter doesnt work with plotly i read about dash but it needs constant internet connection and is opened via browser and im looking to make a no internet or browser required standalone app.
Ty.
| closed | 2023-09-13T16:59:39Z | 2023-09-16T15:08:05Z | https://github.com/plotly/plotly.py/issues/4355 | [] | Kripishit | 2 |
PokeAPI/pokeapi | graphql | 290 | b | closed | 2017-05-31T19:09:10Z | 2017-05-31T19:09:29Z | https://github.com/PokeAPI/pokeapi/issues/290 | [] | thechief389 | 0 | |
supabase/supabase-py | flask | 1,025 | [Python Client] Sensitive Data Exposure in Debug Logs - No Built-in Redaction Mechanism | - [x] I confirm this is a bug with Supabase, not with my own application.
- [x] I confirm I have searched the [Docs](https://docs.supabase.com), GitHub [Discussions](https://github.com/supabase/supabase/discussions), and [Discord](https://discord.supabase.com).
## Describe the bug
The Supabase Python client exposes sensitive data (tokens, query parameters) in debug logs without providing any built-in mechanism to redact this information. This was previously reported in discussion https://github.com/orgs/supabase/discussions/31019 but remains unresolved. This is a security concern as sensitive tokens and data are being logged in plaintext, potentially exposing them in log files.
## To Reproduce
1. Set up a Python application using the Supabase client
2. Enable debug logging for the client
3. Make any API call that includes sensitive data (like authentication tokens)
4. Check debug logs to see exposed sensitive information:
```python
import logging
import supabase
# Configure logging
logging.basicConfig(level=logging.DEBUG)
# Initialize Supabase client
client = supabase.create_client(...)
# Make any API call
result = client.from_('sensitive_table').select('*').execute()
```
The debug logs will show sensitive information like:
```
[DEBUG] [hpack.hpack] Decoded (b'content-location', b'/sensitive_table?sensitive_token=eq.abc-1234-567899888-23333-33333-333333-333333')
```
## Expected behavior
The Supabase Python client should:
1. Provide built-in configuration options to redact sensitive data in debug logs
2. Either mask sensitive tokens and parameters by default or
3. Provide clear documentation on how to properly configure logging to protect sensitive data
## System information
- OS: Linux
- Version of supabase-py: latest
- Version of Python: 3.11
## Additional context
Standard Python logging filters don't work effectively as the logs are generated by underlying libraries (httpx, httpcore, hpack). This is a security issue that needs proper handling at the client library level. Custom filters like:
```python
import logging
import re

class SensitiveDataFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = re.sub(r"abc-[0-9a-f\-]+", "[REDACTED-TOKEN]", record.msg)
        return True
```
don't fully address the issue as they can't catch all instances of sensitive data exposure.
This issue was previously raised in discussion https://github.com/orgs/supabase/discussions/31019 without any resolution, hence filing it as a bug report given its security implications. | closed | 2025-01-04T06:19:30Z | 2025-01-11T22:30:20Z | https://github.com/supabase/supabase-py/issues/1025 | [] | ganeshrvel | 6 |
huggingface/datasets | nlp | 7,254 | mismatch for datatypes when providing `Features` with `Array2D` and user specified `dtype` and using with_format("numpy") | ### Describe the bug
If the user provides a `Features` type value to `datasets.Dataset` with members having `Array2D` with a value for `dtype`, it is not respected during `with_format("numpy")` which should return a `np.array` with `dtype` that the user provided for `Array2D`. It seems for floats, it will be set to `float32` and for ints it will be set to `int64`
### Steps to reproduce the bug
```python
import numpy as np
import datasets
from datasets import Dataset, Features, Array2D
print(f"datasets version: {datasets.__version__}")
data_info = {
"arr_float" : "float64",
"arr_int" : "int32"
}
sample = {key : [np.zeros([4, 5], dtype=dtype)] for key, dtype in data_info.items()}
features = {key : Array2D(shape=(None, 5), dtype=dtype) for key, dtype in data_info.items()}
features = Features(features)
dataset = Dataset.from_dict(sample, features=features)
ds = dataset.with_format("numpy")
for key in features:
print(f"{key} feature dtype: ", ds.features[key].dtype)
print(f"{key} dtype:", ds[key].dtype)
```
Output:
```bash
datasets version: 3.0.2
arr_float feature dtype: float64
arr_float dtype: float32
arr_int feature dtype: int32
arr_int dtype: int64
```
### Expected behavior
It should return a `np.array` with `dtype` that the user provided for the corresponding member in the `Features` type value
### Environment info
- `datasets` version: 3.0.2
- Platform: Linux-6.11.5-arch1-1-x86_64-with-glibc2.40
- Python version: 3.12.7
- `huggingface_hub` version: 0.26.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | open | 2024-10-26T22:06:27Z | 2024-10-26T22:07:37Z | https://github.com/huggingface/datasets/issues/7254 | [] | Akhil-CM | 1 |
dnouri/nolearn | scikit-learn | 46 | No self.best_weights in the function train_loop() ? | It seems that the train_loop() function inside the NeuralNetwork does not provide a self.best_weights that saves the ConvNet parameters for the highest validation accuracy across the epoch iterations.
Or do I miss something? Hope someone could help. Thank you.
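For reference, the usual workaround is to track the best parameters yourself via an `on_epoch_finished` hook. This is only a sketch from memory of the nolearn.lasagne API, so treat `get_all_params_values` and `load_params_from` as assumptions:

```python
import copy

class SaveBestWeights:
    """Keeps the parameters from the epoch with the lowest validation loss.

    Sketch only: `get_all_params_values` / `load_params_from` are assumed to
    be the nolearn.lasagne NeuralNet accessors for the network parameters.
    """
    def __init__(self):
        self.best_loss = float("inf")
        self.best_weights = None

    def __call__(self, nn, train_history):
        current = train_history[-1]["valid_loss"]
        if current < self.best_loss:
            self.best_loss = current
            self.best_weights = copy.deepcopy(nn.get_all_params_values())

# usage sketch:
#   hook = SaveBestWeights()
#   net = NeuralNet(..., on_epoch_finished=[hook])
#   net.fit(X, y)
#   net.load_params_from(hook.best_weights)
```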
| closed | 2015-02-17T03:08:16Z | 2015-02-20T01:43:56Z | https://github.com/dnouri/nolearn/issues/46 | [] | pengpaiSH | 3 |
Asabeneh/30-Days-Of-Python | pandas | 400 | Pyton | closed | 2023-06-02T07:31:36Z | 2023-06-02T07:31:56Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/400 | [] | Fazel-GO | 0 | |
apache/airflow | python | 47,597 | Hello, we can't run a single DAG 3000 mission | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### If "Other Airflow 2 version" selected, which one?
2.10.4
### What happened?
Hello. At present, a single DAG with about 1,000 jobs can be scheduled normally, but when the number of jobs in a single DAG reaches 3,000, scheduling becomes very slow and often does not happen at all. This is an urgent production problem; could the maintainers help us look into it? How should we make a single DAG support more than 3,000 jobs and schedule them normally?
### What you think should happen instead?
We are running tasks in the production environment, and the number of tasks gradually increases as the business develops; the number of jobs in a single DAG is currently the same ...
### How to reproduce
You only need to put the number of jobs in a single DAG ...
### Operating System
linux + k8s
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-11T07:09:46Z | 2025-03-24T08:01:31Z | https://github.com/apache/airflow/issues/47597 | [
"kind:bug",
"area:Scheduler",
"area:core",
"needs-triage"
] | lzf12 | 12 |
nolar/kopf | asyncio | 173 | [PR] Donโt add finalizers to skipped objects | > <a href="https://github.com/dlmiddlecote"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/9053880?v=4"></a> A pull request by [dlmiddlecote](https://github.com/dlmiddlecote) at _2019-08-07 18:21:40+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/173
> Merged by [nolar](https://github.com/nolar) at _2019-08-08 14:23:48+00:00_
> Issue : #167
## Description
Don't add finalizer to object if there are no handlers for it. Now that it is possible to filter objects out of handler execution, this is pertinent.
## Types of Changes
- Bug fix (non-breaking change which fixes an issue)
## Tasks
- [x] Add Tests
## Review
- [ ] Tests
- [ ] Documentation
---
> <a href="https://github.com/dlmiddlecote"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/9053880?v=4"></a> Commented by [dlmiddlecote](https://github.com/dlmiddlecote) at _2019-08-08 07:01:59+00:00_
>
Tests seem to be failing on 1 flakey test.
---
> <a href="https://github.com/psycho-ir"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/726875?v=4"></a> Commented by [psycho-ir](https://github.com/psycho-ir) at _2019-08-08 13:03:17+00:00_
>
Hi [dlmiddlecote](https://github.com/dlmiddlecote),
Thank you so much for the PR!
[nolar](https://github.com/nolar)
I tested it locally and it works fine in almost all the cases I had in mind.
The only scenario where it doesn't work correctly is when we `annotate` a resource so that it becomes matched with one of the registered handlers; the finalizer won't be added to the resource.
---
> <a href="https://github.com/dlmiddlecote"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/9053880?v=4"></a> Commented by [dlmiddlecote](https://github.com/dlmiddlecote) at _2019-08-08 13:11:56+00:00_
>
Hey [psycho-ir](https://github.com/psycho-ir)
Is this the case?
- resource with no annotations applied => no finalizer applied
- resource edited (or `kubectl annotate` used) to add matching annotations => finalizer should be applied?
If so, I just tried this, and it seems to work.
Let me know if I'm mistaken.
---
> <a href="https://github.com/psycho-ir"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/726875?v=4"></a> Commented by [psycho-ir](https://github.com/psycho-ir) at _2019-08-08 13:38:19+00:00_
>
> Hey [psycho-ir](https://github.com/psycho-ir)
>
> Is this the case?
>
> * resource with no annotations applied => no finalizer applied
> * resource edited (or `kubectl annotate` used) to add matching annotations => finalizer should be applied?
>
> If so, I just tried this, and it seems to work.
>
> Let me know if I'm mistaken.
Hi [dlmiddlecote](https://github.com/dlmiddlecote),
Sorry, you are right, this scenario works perfectly fine.
What didn't work as expected for me was the other way around:
* resource with annotation applied => finalizer applied
* resource patched (annotation removed) => finalizer is still there
---
> <a href="https://github.com/dlmiddlecote"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/9053880?v=4"></a> Commented by [dlmiddlecote](https://github.com/dlmiddlecote) at _2019-08-08 13:51:34+00:00_
>
Hey!
I also can't reproduce.
I have the operator as:
```
import kopf
@kopf.on.delete('', 'v1', 'serviceaccounts', annotations={'foo': 'bar'})
async def foo(**_):
pass
```
Then I apply:
```
apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
foo: bar
name: test
namespace: default
```
and the finalizer is applied.
I then run:
`kubectl patch sa test -p '{"metadata": {"annotations": {"foo": "baz"}}}'`
and the finalizer is removed.
---
> <a href="https://github.com/psycho-ir"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/726875?v=4"></a> Commented by [psycho-ir](https://github.com/psycho-ir) at _2019-08-08 14:04:01+00:00_
>
> Hey!
>
> I also can't reproduce.
>
> I have the operator as:
>
> ```
> import kopf
>
> @kopf.on.delete('', 'v1', 'serviceaccounts', annotations={'foo': 'bar'})
> async def foo(**_):
> pass
> ```
>
> Then I apply:
>
> ```
> apiVersion: v1
> kind: ServiceAccount
> metadata:
> annotations:
> foo: bar
> name: test
> namespace: default
> ```
>
> and the finalizer is applied.
>
> I then run:
> `kubectl patch sa test -p '{"metadata": {"annotations": {"foo": "baz"}}}'`
>
> and the finalizer is removed.
Right,
I probably made a mistake in my tests; all the scenarios are working like a charm.
Sorry to bother you.
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2019-08-08 14:07:51+00:00_
>
PS: The solution in general is fine.
I suggest that we merge it now, and release as 0.21rcX (x=3..4 [or so](https://github.com/nolar/kopf/releases)), together with lots of other bugfixes/refactorings/improvements, and test them altogether.
---
> <a href="https://github.com/psycho-ir"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/726875?v=4"></a> Commented by [psycho-ir](https://github.com/psycho-ir) at _2019-08-08 14:08:56+00:00_
>
> PS: The solution in general is fine.
>
> I suggest that we merge it now, and release as 0.21rcX (x=3..4 [or so](https://github.com/nolar/kopf/releases)), together with lots of other bugfixes/refactorings/improvements, and test them altogether.
Totally agree.
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2019-08-08 15:00:03+00:00_
>
Pre-released as [kopf==0.21rc3](https://github.com/nolar/kopf/releases/tag/0.21rc3) โ but beware of other massive changes in rc1+rc2+rc3 combined (see [Releases](https://github.com/nolar/kopf/releases)). | closed | 2020-08-18T19:59:43Z | 2020-08-23T20:48:59Z | https://github.com/nolar/kopf/issues/173 | [
"enhancement",
"archive"
] | kopf-archiver[bot] | 0 |
FactoryBoy/factory_boy | django | 1,057 | Fields do not exist in this model errors with OneToOneField in Django 5 | #### Description
After upgrading from Django 4.2 to Django 5, some of our tests are failing. These are using a OneToOneField between two models. Creating one instance through a factory with an instance to the other model fails because the related name is not accepted by the Django model manager.
The workaround is very simple (see below), but I think this is a bug in this library as this was working fine under Django 4.2. We're using factory boy 3.3.
#### To Reproduce
*Share how the bug happened:*
##### Model / Factory code
```python
class Shop(models.Model):
pass
class Event(models.Model):
default_shop = models.OneToOneField(
"shop.Shop",
related_name="default_event",
on_delete=models.SET_NULL,
null=True,
blank=True,
)
class EventFactory(factory.django.DjangoModelFactory):
class Meta:
model = Event
skip_postgeneration_save = True
class ShopFactory(factory.django.DjangoModelFactory):
class Meta:
model = Shop
```
##### The issue
Before, we were able to first create an Event, and then create a Shop and immediately set the `default_event` of the Shop instance to the Event instance. With Django 5, this now fails in the factory, while still working in a Django shell. So it seems like an issue with factory boy not supporting Django 5 properly here.
```python
@pytest.mark.django_db
class Test:
@pytest.fixture
def shop(self):
event = EventFactory.create()
shop = ShopFactory.create(default_event=event)
```
```
$ pytest
> shop = ShopFactory.create(default_event=event)
apps/foo/tests/test_foo.py:335:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../.cache/pypoetry/virtualenvs/foo-KcrdI-pR-py3.10/lib/python3.10/site-packages/factory/base.py:528: in create
return cls._generate(enums.CREATE_STRATEGY, kwargs)
../../.cache/pypoetry/virtualenvs/foo-KcrdI-pR-py3.10/lib/python3.10/site-packages/factory/django.py:121: in _generate
return super()._generate(strategy, params)
../../.cache/pypoetry/virtualenvs/foo-KcrdI-pR-py3.10/lib/python3.10/site-packages/factory/base.py:465: in _generate
return step.build()
../../.cache/pypoetry/virtualenvs/foo-KcrdI-pR-py3.10/lib/python3.10/site-packages/factory/builder.py:274: in build
instance = self.factory_meta.instantiate(
../../.cache/pypoetry/virtualenvs/foo-KcrdI-pR-py3.10/lib/python3.10/site-packages/factory/base.py:317: in instantiate
return self.factory._create(model, *args, **kwargs)
../../.cache/pypoetry/virtualenvs/foo-KcrdI-pR-py3.10/lib/python3.10/site-packages/factory/django.py:174: in _create
return manager.create(*args, **kwargs)
../../.cache/pypoetry/virtualenvs/foo-KcrdI-pR-py3.10/lib/python3.10/site-packages/django/db/models/manager.py:87: in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <QuerySet []>
kwargs = {'default_event': <Event: Event>, ...}
reverse_one_to_one_fields = frozenset({'default_event'})
def create(self, **kwargs):
"""
Create a new object with the given kwargs, saving it to the database
and returning the created object.
"""
reverse_one_to_one_fields = frozenset(kwargs).intersection(
self.model._meta._reverse_one_to_one_field_names
)
if reverse_one_to_one_fields:
> raise ValueError(
"The following fields do not exist in this model: %s"
% ", ".join(reverse_one_to_one_fields)
)
E ValueError: The following fields do not exist in this model: default_event
../../.cache/pypoetry/virtualenvs/foo-KcrdI-pR-py3.10/lib/python3.10/site-packages/django/db/models/query.py:670: ValueError
```
#### Notes
The workaround is very easy, just assign the relation after the object instance is created:
```python
shop = ShopFactory.create()
shop.default_event = event
``` | closed | 2024-01-09T15:33:20Z | 2024-04-21T12:26:34Z | https://github.com/FactoryBoy/factory_boy/issues/1057 | [] | Gwildor | 3 |
pytest-dev/pytest-xdist | pytest | 463 | gure | closed | 2019-08-22T15:06:08Z | 2019-08-22T15:06:11Z | https://github.com/pytest-dev/pytest-xdist/issues/463 | [] | vasilty | 0 | |
streamlit/streamlit | machine-learning | 10,041 | Implement browser session API | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
Browser sessions allow developers to track browser state in Streamlit, so that they can implement features like authentication, persistent drafts, or shopping carts, which require the ability to keep user state after the browser is refreshed or reopened.
### Why?
The current Streamlit session loses its state if users refresh or reopen their browser, and the effort to provide an API for writing cookies has been pending for years. I think providing a dedicated API to track browser sessions would be cleaner and easier to implement.
With this API, developers don't need to know how it works; it can be based on cookies, local storage, or anything else. Developers can combine it with a singleton pattern to keep per-browser state and persist whatever they want in Streamlit.
### How?
This feature will introduce several new APIs:
* `st.get_browser_session(gdpr_consent=False)`, which will set a unique session id in browser if it doesn't exist, and return it.
If `gdpr_consent` is set to True, a window will pop up to ask for user's consent before setting the session id.
* `st.clean_browser_session()`, which will remove the session id from browser.
The below is a POC of how `get_browser_session` can be used to implement a simple authentication solution:
```python
from streamlit.web.server.websocket_headers import _get_websocket_headers
from streamlit.components.v1 import html
import streamlit as st
from http.cookies import SimpleCookie
from uuid import uuid4
from time import sleep
def get_cookie():
try:
headers = st.context.headers
except AttributeError:
headers = _get_websocket_headers()
if headers is not None:
cookie_str = headers.get("Cookie")
if cookie_str:
return SimpleCookie(cookie_str)
def get_cookie_value(key):
cookie = get_cookie()
if cookie is not None:
cookie_value = cookie.get(key)
if cookie_value is not None:
return cookie_value.value
return None
def get_browser_session():
"""
use cookie to track browser session
this id is unique to each browser session
it won't change even if the page is refreshed or reopened
"""
if 'st_session_id' not in st.session_state:
session_id = get_cookie_value('ST_SESSION_ID')
if session_id is None:
session_id = uuid4().hex
st.session_state['st_session_id'] = session_id
html(f'<script>document.cookie = "ST_SESSION_ID={session_id}";</script>')
sleep(0.1) # FIXME: work around bug: Tried to use SessionInfo before it was initialized
st.rerun() # FIXME: rerun immediately so that html won't be shown in the final page
st.session_state['st_session_id'] = session_id
return st.session_state['st_session_id']
@st.cache_resource
def get_auth_state():
"""
A singleton to store authentication state
"""
return {}
st.set_page_config(page_title='Browser Session Demo')
session_id = get_browser_session()
auth_state = get_auth_state()
if session_id not in auth_state:
auth_state[session_id] = False
st.write(f'Your browser session ID: {session_id}')
if not auth_state[session_id]:
st.title('Input Password')
token = st.text_input('Token', type='password')
if st.button('Submit'):
if token == 'passw0rd!':
auth_state[session_id] = True
st.rerun()
else:
st.error('Invalid token')
else:
st.success('Authentication success')
if st.button('Logout'):
auth_state[session_id] = False
st.rerun()
st.write('You are free to refresh or reopen this page without re-authentication')
```
A more complicated example of using this method to work with oauth2 can be tried here: https://ai4ec.ikkem.com/apps/op-elyte-emulator/
### Additional Context
Related issues:
* https://github.com/streamlit/streamlit/issues/861
* https://github.com/streamlit/streamlit/issues/8518 | open | 2024-12-18T02:12:59Z | 2025-01-06T15:40:04Z | https://github.com/streamlit/streamlit/issues/10041 | [
"type:enhancement"
] | link89 | 2 |
donnemartin/data-science-ipython-notebooks | scikit-learn | 50 | code | Sir, please send me the code along with the churn dataset.
| closed | 2017-07-19T10:28:06Z | 2017-11-30T01:08:52Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/50 | [] | sabamehwish | 1 |
A3M4/YouTube-Report | seaborn | 11 | Module Can not be found | I am getting this error running python report.py
File "C:\Users\jacob\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\scipy\special\__init__.py", line 641, in <module>
from ._ufuncs import *
ImportError: DLL load failed: The specified module could not be found. | open | 2019-12-15T04:54:25Z | 2019-12-17T16:18:26Z | https://github.com/A3M4/YouTube-Report/issues/11 | [] | jbenzaquen42 | 4 |
aimhubio/aim | tensorflow | 3,153 | Failed to delete run in aim web ui | ## ๐ Bug
when deleting `run` in aim web ui, I got the following error, and the run is not deleted:
```
Error
Error while deleting runs.
Error
Failed to execute 'json' on 'Response': body stream already read
```
### To reproduce
deleting `run` in aim web ui.
### Expected behavior
`run` deleted.
### Environment
- Aim Version: 3.19.3
- Python version: 3.10.14
- pip version: 23.0.1
- OS (e.g., Linux): Linux
- Any other relevant information
### Additional context

| open | 2024-05-31T10:11:04Z | 2024-06-25T07:34:01Z | https://github.com/aimhubio/aim/issues/3153 | [
"type / bug",
"help wanted"
] | zhiyxu | 3 |
MaxHalford/prince | scikit-learn | 186 | Eigenvalue correction for MCA | Hello! I just recently started using this package for analyzing some categorical data, and I noticed that the `fit()` method of the `mca.py` file contains the setup (i.e., `self.K_`, `self.J_`) for inertia correction using either the _Benzecri_ or _Greenacre_ methods. However, it's not clear to me where the inertia correction is actually happening in the code.
Just wanted to kindly check if this correction step was fully implemented yet, thanks! | closed | 2025-03-07T18:28:02Z | 2025-03-07T22:04:26Z | https://github.com/MaxHalford/prince/issues/186 | [] | saatcheson | 2 |
dynaconf/dynaconf | django | 595 | [bug] SQLAlchemy URL object replaced with BoxList object | Using dynaconf with Flask and Flask-SQLAlchemy. If I initialize dynaconf, then assign a sqlalchemy `URL` object to a config key, the object becomes a `BoxList`, which causes sqlalchemy to fail later. Dynaconf should not replace arbitrary objects.
```python
app = Flask(__name__)
dynaconf.init_app(app)
app.config["SQLALCHEMY_DATABASE_URI"] = sa_url(
"postgresql", None, None, None, None, "example"
)
print(type(app.config["SQLALCHEMY_DATABASE_URI"]))
```
```
<class 'dynaconf.vendor.box.box_list.BoxList'>
```
This is a problem when using SQLAlchemy 1.4, which treats the URL as an object with attributes instead of a tuple.
cc @davidism | closed | 2021-06-01T19:15:32Z | 2021-08-19T14:14:32Z | https://github.com/dynaconf/dynaconf/issues/595 | [
"bug",
"HIGH",
"backport3.1.5"
] | trickardy | 2 |
pandas-dev/pandas | python | 61,125 | ENH: Supporting third-party engines for all `map` and `apply` methods | In #54666 and #61032 we introduce the `engine` parameter to `DataFrame.apply` which allows users to run the operation with a third-party engine.
The rest of `apply` and `map` methods can also benefit from this.
In a first phase we can do:
- `Series.map`
- `Series.apply`
- `DataFrame.map`
Then we can continue with the transform and group by ones. | open | 2025-03-15T03:21:13Z | 2025-03-20T15:31:54Z | https://github.com/pandas-dev/pandas/issues/61125 | [
"Apply"
] | datapythonista | 13 |
autogluon/autogluon | scikit-learn | 4,388 | How to obtain fitted values with TimeSeriesPredictor? | ## Description
When I run TimeSeriesPredictor and fit the model, I didn't find out whether or where the fitted in-sample values are provided.
Can anyone help me with it? Thanks!
| open | 2024-08-14T16:48:29Z | 2024-08-15T12:02:57Z | https://github.com/autogluon/autogluon/issues/4388 | [
"enhancement"
] | wenqiuma | 1 |
hzwer/ECCV2022-RIFE | computer-vision | 34 | A work-in-progress vulkan port :D | https://github.com/nihui/rife-ncnn-vulkan
| closed | 2020-11-25T03:46:44Z | 2020-11-28T04:37:44Z | https://github.com/hzwer/ECCV2022-RIFE/issues/34 | [] | nihui | 1 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,560 | Seed not returned via api | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
Hello!
When not setting a seed, a random seed is generated and correctly shown in the web GUI, but the API only returns "-1". That is correct so far, as this is the default command to let the framework know that it should use a random seed. But that seed is exactly what I need. Is there a workaround for it?
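One possible workaround, assuming the usual txt2img response shape in which the `info` field is a JSON-encoded string describing the generation that actually ran (and therefore carries the drawn seed, unlike `parameters`, which just echoes the request):

```python
import json

def extract_seed(res):
    """Pull the concrete seed out of a txt2img API response dict.

    Assumes `res["info"]` is a JSON string with a "seed" key; falls back to
    the echoed request parameters if `info` is absent.
    """
    info = res.get("info")
    if info:
        return json.loads(info).get("seed")
    return res.get("parameters", {}).get("seed")
```

If that assumption holds, `extract_seed(res)` returns the concrete random seed instead of -1.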
### Steps to reproduce the problem
Simply use the api for generating an image, do not set a seed and print out res['parameters']
### What should have happened?
the value of the random seed shall be delivered by the api
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo-2024-10-17-15-55.json](https://github.com/user-attachments/files/17415652/sysinfo-2024-10-17-15-55.json)
### Console logs
```Shell
{'prompt': 'A 30-year-old woman with middle-long brown hair and glasses is playing tennis wearing nothing but her glasses and holding a tennis racket while swinging it gracefully on an outdoor tennis court. A large tennis ball logo is prominently displayed on the court surface emphasizing the sport being played., impressive lighting', 'negative_prompt': 'nude, hands, Bokeh/DOF,flat, low contrast, oversaturated, underexposed, overexposed, blurred, noisy', 'styles': None, 'seed': -1, 'subseed': -1, 'subseed_strength': 0, 'seed_resize_from_h': -1, 'seed_resize_from_w': -1, 'sampler_name': None, 'batch_size': 1, 'n_iter': 1, 'steps': 5, 'cfg_scale': 1.5, 'width': 768, 'height': 1024, 'restore_faces': True, 'tiling': None, 'do_not_save_samples': False, 'do_not_save_grid': False, 'eta': None, 'denoising_strength': None, 's_min_uncond': None, 's_churn': None, 's_tmax': None, 's_tmin': None, 's_noise': None, 'override_settings': None, 'override_settings_restore_afterwards': True, 'refiner_checkpoint': None, 'refiner_switch_at': None, 'disable_extra_networks': False, 'comments': None, 'enable_hr': False, 'firstphase_width': 0, 'firstphase_height': 0, 'hr_scale': 2.0, 'hr_upscaler': None, 'hr_second_pass_steps': 0, 'hr_resize_x': 0, 'hr_resize_y': 0, 'hr_checkpoint_name': None, 'hr_sampler_name': None, 'hr_prompt': '', 'hr_negative_prompt': '', 'sampler_index': 'DPM++ SDE', 'script_name': None, 'script_args': [], 'send_images': True, 'save_images': False, 'alwayson_scripts': {}}
```
### Additional information
_No response_ | closed | 2024-10-17T15:56:46Z | 2024-10-24T01:14:49Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16560 | [
"not-an-issue"
] | Marcophono2 | 2 |
Sanster/IOPaint | pytorch | 242 | Switch to sd1.5 model failed | I have this problem when choosing any Stable-Diffusion model. How to fix it?


| closed | 2023-03-13T15:53:56Z | 2023-03-14T22:37:19Z | https://github.com/Sanster/IOPaint/issues/242 | [] | vasyaholly | 13 |
healthchecks/healthchecks | django | 1,134 | [Feature Request] Group Projects | Hi,
we've organized the different jobs into multiple projects, which is already nice. However, with more projects, it would be helpful to have some control over how the projects are organized on the start page. My use cases include, e.g., separating prod from dev jobs. One way of achieving this would probably be to group the projects.
Thanks,
skr5k | open | 2025-03-14T14:05:07Z | 2025-03-14T14:05:07Z | https://github.com/healthchecks/healthchecks/issues/1134 | [] | skr5k | 0 |
ckan/ckan | api | 7,579 | Function is dropped in CKAN 2.10 despite deprecation info | ## CKAN version
2.10
## Describe the bug
The `authz.auth_is_loggedin_user` function is dropped in CKAN 2.10. However, in CKAN 2.9, there was a deprecation notice _recommending_ this function, and there doesn't appear to be a clear replacement.
### Steps to reproduce
- Install a plugin that calls `auth_is_loggedin_user` on CKAN 2.9, such as https://github.com/qld-gov-au/ckanext-ytp-comments/
- Update to CKAN 2.10
- Perform an operation that calls the function, such as flagging a comment for moderation
### Expected behavior
There should be a notice in the code and/or the changelog to indicate what replaces `auth_is_loggedin_user`.
### Additional details
```
12:38:09,321 ERROR [ckan.config.middleware.flask_app] module 'ckan.authz' has no attribute 'auth_is_loggedin_user'
Traceback (most recent call last):
  File "/usr/lib/ckan/default/lib64/python3.7/site-packages/flask/app.py", line 1516, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/lib/ckan/default/lib64/python3.7/site-packages/flask/app.py", line 1502, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/mnt/local_data/ckan_venv/src/ckanext-ytp-comments/ckanext/ytp/comments/controllers/__init__.py", line 276, in flag
    if authz.auth_is_loggedin_user():
AttributeError: module 'ckan.authz' has no attribute 'auth_is_loggedin_user'
```
| open | 2023-05-09T03:04:46Z | 2023-05-09T13:57:04Z | https://github.com/ckan/ckan/issues/7579 | [] | ThrawnCA | 1 |
yt-dlp/yt-dlp | python | 12,109 | [Dropbox] Error: No video formats found! | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
World
### Provide a description that is worded well enough to be understood
This is the only video that returns this error
https://www.dropbox.com/s/fnxkf6gvr9zl7ow/IMG_3996.MOV?dl=0
I tried the full link and it's the same:
https://www.dropbox.com/scl/fi/8n13ei80sb3bmfm9nrcmw/IMG_3996.MOV?rlkey=t2mf7yg8m0vzenb432bklo0z0&e=1&dl=0
There is a playable video and it should download like all others
I tried everything in my power to fix it without results
Don't judge me on the video lmao
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-P', '/Archive/Twerk/KateCakes/', 'https://www.dropbox.com/s/fnxkf6gvr9zl7ow/IMG_3996.MOV?dl=0']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.01.15 from yt-dlp/yt-dlp [c8541f8b1] (zip)
[debug] Python 3.12.3 (CPython x86_64 64bit) - Linux-6.8.0-51-generic-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)
[debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2023.11.17, mutagen-1.46.0, pyxattr-0.8.1, requests-2.31.0, sqlite3-3.45.1, urllib3-2.0.7, websockets-10.4
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2025.01.15 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2025.01.15 from yt-dlp/yt-dlp)
[Dropbox] Extracting URL: https://www.dropbox.com/s/fnxkf6gvr9zl7ow/IMG_3996.MOV?dl=0
[Dropbox] fnxkf6gvr9zl7ow: Downloading webpage
ERROR: [Dropbox] fnxkf6gvr9zl7ow: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
Traceback (most recent call last):
File "/home/ok/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1637, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ok/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1793, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ok/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1852, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ok/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 2859, in process_video_result
self.raise_no_formats(info_dict)
File "/home/ok/.local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1126, in raise_no_formats
raise ExtractorError(msg, video_id=info['id'], ie=info['extractor'],
yt_dlp.utils.ExtractorError: [Dropbox] fnxkf6gvr9zl7ow: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
``` | closed | 2025-01-16T19:53:26Z | 2025-01-29T16:56:07Z | https://github.com/yt-dlp/yt-dlp/issues/12109 | [
"NSFW",
"site-bug"
] | BenderBRod | 2 |
scikit-learn/scikit-learn | data-science | 30,461 | from sklearn.datasets import make_regression FileNotFoundError | ### Describe the bug
When running examples/application/plot_prediction_latency.py, a FileNotFoundError occurs, as there is no file named make_regression in the datasets dir.
I have cloned the scikit-learn repo and installed it using `pip install -e .`
I am completely unable to `import scikit_learn` or `sklearn`, even though it shows up in `pip list` (`scikit-learn 1.7.dev0 /Users/user/scikit-learn`).
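A quick stdlib check of what the interpreter actually sees, independent of `pip list` (a diagnostic sketch, not a fix):

```python
import importlib.util

# Ask the import machinery directly whether sklearn is importable,
# and from where, without triggering the package's own __init__.
spec = importlib.util.find_spec("sklearn")
if spec is None:
    print("sklearn is not importable - the editable install is broken")
else:
    print("sklearn would load from:", spec.origin)
```

If this reports sklearn as not importable while `pip list` still shows the package, the editable install's build step failed (consistent with the missing ninja path in the traceback below), and reinstalling in a fresh environment is the usual remedy.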
### Steps/Code to Reproduce
from sklearn.datasets import make_regression
### Expected Results
No error is thrown
### Actual Results
Exception has occurred: FileNotFoundError
[Errno 2] No such file or directory: '/private/var/folders/0q/80gytspx42v3rtlkkq_h59jw0000gn/T/pip-build-env-53amsfeb/normal/bin/ninja'
### Versions
```shell
scikit-learn 1.7.dev0
```
| closed | 2024-12-11T10:13:52Z | 2024-12-11T11:19:18Z | https://github.com/scikit-learn/scikit-learn/issues/30461 | [
"Bug",
"Needs Triage"
] | kayo09 | 1 |
nerfstudio-project/nerfstudio | computer-vision | 2,952 | Docker/singularity container doesn't seem to contain ns-* commands | **Describe the bug**
I built a Singularity container from the Docker Hub address listed on the web page, then ran `singularity run --nv nerf.simg` and tried to find the `ns-*` commands, but I am unable to find them.
**To Reproduce**
Run:
```
singularity build nerf.simg docker://dromni/nerfstudio:1.0.2
singularity run --nv nerf.sif ns-process-data video --data /workspace/video.mp4
```
**Expected behavior**
No errors
**Result:**
==========
== CUDA ==
==========
CUDA Version 11.8.0
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
/opt/nvidia/nvidia_entrypoint.sh: line 67: exec: ns-process-data: not found
| open | 2024-02-23T22:05:32Z | 2024-09-05T22:07:46Z | https://github.com/nerfstudio-project/nerfstudio/issues/2952 | [] | cousins | 7 |
autokey/autokey | automation | 820 | keyboard.press_key freezes autokey | ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Crash/Hang/Data loss
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [X] autokey-gtk
- [X] autokey-qt
- [ ] beta
- [X] bug
- [ ] critical
- [ ] development
- [ ] documentation
- [ ] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [X] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
keyboard press_key
### Which Linux distribution did you use?
Ubuntu 20.04
### Which AutoKey GUI did you use?
Both
### Which AutoKey version did you use?
0.95.10
### How did you install AutoKey?
I installed it from the Software app, which uses my distro and some custom repositories
### Can you briefly describe the issue?
Autokey freezes when trying to run keyboard.press_key
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
1. Write `if keyboard.wait_for_keypress('c'):`
2. Inside the `if` body, put `keyboard.press_key("e")`
3. Run it and press c
### What should have happened?
It should have automatically pressed "e"; I had also added a way of disabling it with a toggle boolean
### What actually happened?
It froze. I checked with different keys, tried adding a modifier, and also removed the disabling part of the code, but it still freezes
### Do you have screenshots?


### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
I only know that its the keyboard.press_key command | closed | 2023-03-25T02:31:58Z | 2023-04-25T21:14:31Z | https://github.com/autokey/autokey/issues/820 | [
"scripting",
"invalid",
"user support"
] | NicoReXDlol | 16 |
zappa/Zappa | flask | 652 | [Migrated] ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the DescribeLogStreams | Originally from: https://github.com/Miserlou/Zappa/issues/1652 by [4lph4-Ph4un](https://github.com/4lph4-Ph4un)
When attempting to tail logs on dev, the tailing is successful; however, tailing an environment on another account fails with:
```
Traceback (most recent call last):
File "/opt/kidday/env/lib/python3.6/site-packages/zappa/cli.py", line 2693, in handle
sys.exit(cli.handle())
File "/opt/kidday/env/lib/python3.6/site-packages/zappa/cli.py", line 504, in handle
self.dispatch_command(self.command, stage)
File "/opt/kidday/env/lib/python3.6/site-packages/zappa/cli.py", line 595, in dispatch_command
force_colorize=self.vargs['force_color'] or None,
File "/opt/kidday/env/lib/python3.6/site-packages/zappa/cli.py", line 1064, in tail
filter_pattern=filter_pattern,
File "/opt/kidday/env/lib/python3.6/site-packages/zappa/core.py", line 2745, in fetch_logs
orderBy='LastEventTime'
File "/opt/kidday/env/lib/python3.6/site-packages/botocore/client.py", line 320, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/opt/kidday/env/lib/python3.6/site-packages/botocore/client.py", line 623, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the DescribeLogStreams operation: The specified log group does not exist.
```
Checking the deployment with status (zappa status prod) yields:
`No Lambda src-prod detected in eu-west-1 - have you deployed yet?`
Although the deployment has been successful and the Lambda name can be found in the AWS console itself.
## Possible Fix
Send in jneves! :D
| closed | 2021-02-20T12:32:27Z | 2024-04-13T17:36:31Z | https://github.com/zappa/Zappa/issues/652 | [
"no-activity",
"auto-closed"
] | jneves | 3 |
neuml/txtai | nlp | 235 | API should raise an error if attempting to modify a read-only index | Currently, the API silently skips add/index/upsert/delete operations and returns an HTTP 200 code when an index is not writable. This leads to confusing behavior.
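A minimal sketch of the proposed behavior (class and method names here are hypothetical, not txtai's actual API): write operations on a read-only index raise an error that the API layer can map to 403 Forbidden, instead of silently succeeding.

```python
class ReadOnlyIndexError(Exception):
    """Raised on write attempts; the API layer would map this to HTTP 403."""

class Index:
    def __init__(self, writable=True):
        self.writable = writable
        self.documents = []

    def _check_writable(self):
        # Fail loudly instead of silently skipping the operation,
        # so clients never see a misleading HTTP 200.
        if not self.writable:
            raise ReadOnlyIndexError("index is not writable")

    def add(self, document):
        self._check_writable()
        self.documents.append(document)

    def delete(self, document):
        self._check_writable()
        self.documents.remove(document)
```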
An error should be raised with a 403 Forbidden status code. | closed | 2022-03-01T18:58:49Z | 2022-03-01T18:59:58Z | https://github.com/neuml/txtai/issues/235 | [] | davidmezzetti | 0 |
lux-org/lux | jupyter | 376 | [BUG] Unexpected error in rendering Lux widget and recommendations when filtering does not produce results | **Describe the bug**
When filtering a dataframe on row values produces no results, the following error is thrown:
```
<user_path>site-packages\IPython\core\formatters.py:918: UserWarning:
Unexpected error in rendering Lux widget and recommendations. Falling back to Pandas display.
Please report the following issue on Github: https://github.com/lux-org/lux/issues

<user_path>site-packages\lux\core\frame.py:609: UserWarning:
Traceback (most recent call last):
  File "<user_path>site-packages\lux\core\frame.py", line 571, in _ipython_display_
    self.maintain_recs()
  File "<user_path>site-packages\lux\core\frame.py", line 428, in maintain_recs
    rec_df.show_all_column_vis()
  File "<user_path>site-packages\lux\core\frame.py", line 349, in show_all_column_vis
    vis = Vis(list(self.columns), self)
  File "<user_path>site-packages\lux\vis\Vis.py", line 39, in __init__
    self.refresh_source(self._source)
  File "<user_path>site-packages\lux\vis\Vis.py", line 356, in refresh_source
    Compiler.compile_vis(ldf, self)
  File "<user_path>site-packages\lux\processor\Compiler.py", line 58, in compile_vis
    Compiler.populate_data_type_model(ldf, [vis])
  File "<user_path>site-packages\lux\processor\Compiler.py", line 176, in populate_data_type_model
    clause.data_type = ldf.data_type[clause.attribute]
KeyError: 'CountryCode'
```
**To Reproduce**
1. Create a dataframe: `wdi_country_series_df = pd.read_csv('../lux_data/WDICountry-Series.csv')`
2. Filter the dataframe: `wdi_country_series_df[wdi_country_series_df['CountryCode'] == 'ARB']`

A country with this country code does not exist in the dataframe, so the error appears.
**Expected behavior**
An empty widget should be produced, with no recommendations.
| closed | 2021-05-18T10:29:07Z | 2021-05-18T21:21:07Z | https://github.com/lux-org/lux/issues/376 | [] | Innko | 1 |
mwaskom/seaborn | matplotlib | 3,675 | Defining plot size in seaborn objects ? | Hi seaborn community
I could not find how to set the figure size in the seaborn objects API tutorials or documentation.
I want to set the plot size in a concise manner that works with seaborn objects:
Example:
`lineplop.facet('shift', wrap = 3).share(x= True, y = False)`
I can only do so as follows
`lineplop.facet('shift', wrap = 3).share(x= True, y = False).on(mpl.Figure(figsize=(20, 10)))`
Is this the way I am supposed to set the size for e.g. facet grid plots that have many subplots (which are tiny by default), or is there a proper solution using .scale? | closed | 2024-04-13T08:40:00Z | 2024-04-18T13:23:59Z | https://github.com/mwaskom/seaborn/issues/3675 | [] | mat-ej | 1 |
sqlalchemy/alembic | sqlalchemy | 845 | failed to create process. Problem | When I try to run alembic with any command, it shows me "failed to create process." | closed | 2021-05-19T10:36:30Z | 2021-05-20T17:29:29Z | https://github.com/sqlalchemy/alembic/issues/845 | [
"question",
"awaiting info",
"cant reproduce"
] | imrankhan441 | 2 |
nsidnev/fastapi-realworld-example-app | fastapi | 270 | User registration failed "relation "users" does not exist" | Dear nsidnev,
I'm a fan of your architecture in this application. Unfortunately, I cannot figure out the error from the registration side:
"asyncpg.exceptions.UndefinedTableError: relation "users" does not exist"
I attached an image of how it looks. I hope we can figure this out.

<img width="1356" alt="Screenshot 2022-04-16 at 13 21 56" src="https://user-images.githubusercontent.com/65780729/163673079-8dacd2d0-c3e3-46b3-ac35-bb2e333519b1.png">
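The same failure mode can be sketched with stdlib sqlite3 (Postgres raises `UndefinedTableError`, sqlite the analogous `OperationalError`): the `users` table simply was never created, which typically means the database migrations were not applied before registering.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

try:
    conn.execute("SELECT * FROM users")  # fails: table was never created
except sqlite3.OperationalError as exc:
    print(exc)  # no such table: users

# This is what running the migrations does for you:
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.execute("INSERT INTO users (username) VALUES ('eternal-engine')")
print(conn.execute("SELECT username FROM users").fetchone()[0])  # eternal-engine
```

Since this repo ships Alembic migrations, the usual fix (hedged, assuming a standard setup) is to run them against the configured database, e.g. `alembic upgrade head`, before hitting the registration endpoint.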
Cheers and stay healthy. | closed | 2022-04-16T11:22:10Z | 2022-08-21T00:20:09Z | https://github.com/nsidnev/fastapi-realworld-example-app/issues/270 | [] | Eternal-Engine | 3 |
sgl-project/sglang | pytorch | 4,055 | [Feature] Apply structured output sampling after reasoning steps in Reasoning models | ### Checklist
- [ ] 1. If the issue you raised is not a feature but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 2. Please use English, otherwise it will be closed.
### Motivation
Only apply constrained sampling only in the answer for reasoning model. i.e. for DeepSeek R1 only enforce grammar inside after `</think>`
This would make Reasoning models more useful in agent workflow expecting structured output.
### Related resources
https://github.com/vllm-project/vllm/issues/12619
https://github.com/vllm-project/vllm/pull/12955 | open | 2025-03-04T07:58:42Z | 2025-03-24T07:04:02Z | https://github.com/sgl-project/sglang/issues/4055 | [] | xihuai18 | 10 |
freqtrade/freqtrade | python | 10,702 | Freqtrade process crashes | ## Describe your environment
* Operating system: Amazon Linux AMI
* Python Version: 3.9.16
* CCXT version: 4.3.88
* Freqtrade Version: 2024.8
## Describe the problem:
After running for a few hours, the service crashed.
### Steps to reproduce:
It's unclear how to reproduce the problem as it happened occasionally.
### Observed Results:
* Freqtrade process died
* Freqtrade process keeps running
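The final `RuntimeError: Event loop is closed` in the logs below comes from `Exchange.__del__` calling `run_until_complete` on a loop that has already been closed during shutdown; it can be reproduced in isolation with stdlib asyncio (a minimal sketch, not freqtrade code):

```python
import asyncio

async def close_session():
    # stands in for self._api_async.close() in Exchange.close()
    return "closed"

loop = asyncio.new_event_loop()
loop.close()  # the loop is already closed by the time __del__ runs

coro = close_session()
try:
    # mirrors self.loop.run_until_complete(...) in Exchange.close()
    loop.run_until_complete(coro)
except RuntimeError as exc:
    print(exc)  # Event loop is closed
finally:
    coro.close()  # avoid a "coroutine was never awaited" warning
```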
### Relevant code exceptions or logs
```
Sep 23 23:46:36 ip-172-31-23-170.ap-northeast-1.compute.internal systemd[99915]: Stopping freqtrade.service - Freqtrade Trader1 Dry Run...
Sep 23 23:46:36 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:36,977 - freqtrade.commands.trade_commands - INFO - worker found ... calling exit
Sep 23 23:46:36 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:36,978 - freqtrade.rpc.rpc_manager - INFO - Sending rpc message: {'type': status, 'status': 'process died'}
Sep 23 23:46:36 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:36,981 - freqtrade.freqtradebot - INFO - Cleaning up modules ...
Sep 23 23:46:36 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:36,983 - freqtrade.rpc.rpc_manager - INFO - Cleaning up rpc modules ...
Sep 23 23:46:36 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:36,984 - freqtrade.rpc.rpc_manager - INFO - Cleaning up rpc.apiserver ...
Sep 23 23:46:36 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:36,984 - freqtrade.rpc.api_server.webserver - INFO - Stopping API Server
Sep 23 23:46:37 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:37,084 - uvicorn.error - INFO - Shutting down
Sep 23 23:46:37 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:37,184 - uvicorn.error - INFO - Waiting for application shutdown.
Sep 23 23:46:37 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:37,185 - uvicorn.error - INFO - Application shutdown complete.
Sep 23 23:46:37 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:37,185 - uvicorn.error - INFO - Finished server process [120510]
Sep 23 23:46:37 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:37,185 - freqtrade.rpc.rpc_manager - INFO - Cleaning up rpc.telegram ...
Sep 23 23:46:37 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:37,870 - telegram.ext.Application - INFO - Application is stopping. This might take a moment.
Sep 23 23:46:37 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:37,870 - telegram.ext.Application - INFO - Application.stop() complete
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:38,362 - freqtrade - INFO - SIGINT received, aborting ...
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: Exception ignored in: <function Exchange.__del__ at 0xffff8b1bfee0>
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: Traceback (most recent call last):
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/home/ec2-user/freqtrade/freqtrade/exchange/exchange.py", line 297, in __del__
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: self.close()
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/home/ec2-user/freqtrade/freqtrade/exchange/exchange.py", line 309, in close
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: self.loop.run_until_complete(self._api_async.close())
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/usr/lib64/python3.9/asyncio/base_events.py", line 622, in run_until_complete
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: self._check_closed()
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/usr/lib64/python3.9/asyncio/base_events.py", line 515, in _check_closed
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: raise RuntimeError('Event loop is closed')
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: RuntimeError: Event loop is closed
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:38,576 - ccxt.base.exchange - WARNING - kucoin requires to release all resources with an explicit call to the .close() coroutine. If you are using the exchange instance with async coroutines, add `await exchange.close()` to your code into a place when you're done with the exchange and don't need the exchange instance anymore (at the end of your async coroutine).
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:38,578 - asyncio - ERROR - Task was destroyed but it is pending!
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: task: <Task pending name='Task-176263' coro=<Throttler.looper() done, defined at /home/ec2-user/freqtrade/.venv/lib64/python3.9/site-packages/ccxt/async_support/base/throttler.py:21> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0xffff76894310>()]>>
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:38,579 - asyncio - ERROR - Future exception was never retrieved
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: future: <Future finished exception=ClientOSError(1, '[SSL: APPLICATION_DATA_AFTER_CLOSE_NOTIFY] application data after close notify (_ssl.c:2770)')>
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: Traceback (most recent call last):
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/home/ec2-user/freqtrade/freqtrade/main.py", line 45, in main
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: return_code = args["func"](args)
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/home/ec2-user/freqtrade/freqtrade/commands/trade_commands.py", line 25, in start_trading
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: worker.run()
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/home/ec2-user/freqtrade/freqtrade/worker.py", line 78, in run
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: state = self._worker(old_state=state)
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/home/ec2-user/freqtrade/freqtrade/worker.py", line 119, in _worker
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: self._throttle(
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/home/ec2-user/freqtrade/freqtrade/worker.py", line 160, in _throttle
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: result = func(*args, **kwargs)
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/home/ec2-user/freqtrade/freqtrade/worker.py", line 194, in _process_running
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: self.freqtrade.process()
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/home/ec2-user/freqtrade/freqtrade/freqtradebot.py", line 262, in process
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: self.dataprovider.refresh(
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/home/ec2-user/freqtrade/freqtrade/data/dataprovider.py", line 449, in refresh
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: self._exchange.refresh_latest_ohlcv(final_pairs)
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/home/ec2-user/freqtrade/freqtrade/exchange/exchange.py", line 2515, in refresh_latest_ohlcv
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: results = self.loop.run_until_complete(gather_coroutines(dl_jobs_batch))
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/usr/lib64/python3.9/asyncio/base_events.py", line 634, in run_until_complete
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: self.run_forever()
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/usr/lib64/python3.9/asyncio/base_events.py", line 601, in run_forever
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: self._run_once()
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/usr/lib64/python3.9/asyncio/base_events.py", line 1869, in _run_once
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: event_list = self._selector.select(timeout)
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/usr/lib64/python3.9/selectors.py", line 469, in select
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: fd_event_list = self._selector.poll(timeout, max_ev)
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/home/ec2-user/freqtrade/freqtrade/commands/trade_commands.py", line 18, in term_handler
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: raise KeyboardInterrupt()
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: KeyboardInterrupt
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: During handling of the above exception, another exception occurred:
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: Traceback (most recent call last):
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/usr/lib64/python3.9/asyncio/sslproto.py", line 534, in data_received
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: ssldata, appdata = self._sslpipe.feed_ssldata(data)
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/usr/lib64/python3.9/asyncio/sslproto.py", line 206, in feed_ssldata
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: self._sslobj.unwrap()
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: File "/usr/lib64/python3.9/ssl.py", line 949, in unwrap
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: return self._sslobj.shutdown()
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: ssl.SSLError: [SSL: APPLICATION_DATA_AFTER_CLOSE_NOTIFY] application data after close notify (_ssl.c:2770)
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: The above exception was the direct cause of the following exception:
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: aiohttp.client_exceptions.ClientOSError: [Errno 1] [SSL: APPLICATION_DATA_AFTER_CLOSE_NOTIFY] application data after close notify (_ssl.c:2770)
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:38,582 - asyncio - ERROR - Unclosed client session
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: client_session: <aiohttp.client.ClientSession object at 0xffff8a0d1c40>
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: 2024-09-23 23:46:38,582 - asyncio - ERROR - Task was destroyed but it is pending!
Sep 23 23:46:38 ip-172-31-23-170.ap-northeast-1.compute.internal freqtrade[120510]: task: <Task pending name='Task-176288' coro=<TCPConnector._resolve_host_with_throttle() running at /home/ec2-user/freqtrade/.venv/lib64/python3.9/site-packages/aiohttp/connector.py:921> cb=[shield.<locals>._inner_done_callback() at /usr/lib64/python3.9/asyncio/tasks.py:890]>
Sep 23 23:46:39 ip-172-31-23-170.ap-northeast-1.compute.internal systemd[99915]: Stopped freqtrade.service - Freqtrade Trader1 Dry Run.
Sep 23 23:46:39 ip-172-31-23-170.ap-northeast-1.compute.internal systemd[99915]: freqtrade.service: Consumed 1h 24min 49.123s CPU time.
``` | closed | 2024-09-24T06:11:47Z | 2024-09-24T07:24:58Z | https://github.com/freqtrade/freqtrade/issues/10702 | [
"Question"
] | vecktor | 1 |
charlesq34/pointnet | tensorflow | 102 | There is a problem when I run [collect_indoor_3d.py]. | When I downloaded and unzipped the 4.09 GB dataset, I still couldn't run collect_indoor_3d.py correctly.
The error is: `/path/data/StanfordDatasets v1.2 Aligned Version/Area../xxxx 1/Annotations, 'ERROR!'`
I don't know how I can solve this problem.
Did I make some wrong operation?
Thanks for your help!
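For what it's worth, this 'ERROR!' print typically fires when a room folder has no readable Annotations subfolder. A minimal stdlib sketch (the Area/room/Annotations layout is an assumption about the S3DIS structure) to list offending rooms before running the script:

```python
import os

def find_rooms_missing_annotations(dataset_root):
    """Return room folders under dataset_root that lack an Annotations subfolder."""
    bad = []
    for area in sorted(os.listdir(dataset_root)):
        area_path = os.path.join(dataset_root, area)
        if not os.path.isdir(area_path):
            continue
        for room in sorted(os.listdir(area_path)):
            room_path = os.path.join(area_path, room)
            if os.path.isdir(room_path) and not os.path.isdir(
                    os.path.join(room_path, "Annotations")):
                bad.append(room_path)
    return bad
```

Folder names with unexpected spaces (as in the path above) are a common cause; renaming them to match the expected layout usually clears the error.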
| open | 2018-04-23T08:59:38Z | 2019-03-26T02:01:19Z | https://github.com/charlesq34/pointnet/issues/102 | [] | JinyuanShao | 1 |
pydantic/pydantic | pydantic | 11,576 | Invalid JSON Schema generated when constraints and validators are involved | ### Initial Checks
- [x] I confirm that I'm using Pydantic V2
### Description
The following generates an invalid JSON Schema:
```python
from typing import Annotated
from pydantic import BeforeValidator, Field, TypeAdapter
TypeAdapter(Annotated[int, Field(gt=2), BeforeValidator(lambda v: v), Field(lt=2)]).json_schema()
#> {'exclusiveMinimum': 2, 'lt': 2, 'type': 'integer'}
```
### Example Code
```Python
```
### Python, Pydantic & OS Version
```Text
2.10
``` | open | 2025-03-18T16:15:51Z | 2025-03-18T16:15:51Z | https://github.com/pydantic/pydantic/issues/11576 | [
"bug V2",
"topic-annotations"
] | Viicos | 0 |
FlareSolverr/FlareSolverr | api | 573 | [1337x] (testing) Exception (1337x): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Unable to process browser request. ProtocolError: Protocol error (Page.navigate): frameId not supported RemoteAgentError@chrome://remote/content/cdp/Error.jsm:29:5 | **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:v2.2.10
* **Last working FlareSolverr version**:v2.2.10
* **Operating system**:Linux x86_64
* **Are you using Docker**: [yes]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:Mozilla/5.0 (X11; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0
* **Are you using a proxy or VPN?** [no]
* **Are you using Captcha Solver:** [no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
Add https://1337x.nocensor.lol/ to Jackett
Test fails due to flaresolverr
### Logged Error Messages
```
2022-11-03T07:45:15+00:00 DEBUG REQ-3 Navigating to... https://1337x.nocensor.lol/cat/Movies/time/desc/1/
2022-11-03T07:45:34+00:00 ERROR REQ-3 Unexpected error: ProtocolError: Protocol error (Page.navigate): frameId not supported RemoteAgentError@chrome://remote/content/cdp/Error.jsm:29:5
UnsupportedError@chrome://remote/content/cdp/Error.jsm:106:1
navigate@chrome://remote/content/cdp/domains/parent/Page.jsm:103:13
execute@chrome://remote/content/cdp/domains/DomainCache.jsm:101:25
execute@chrome://remote/content/cdp/sessions/Session.jsm:64:25
execute@chrome://remote/content/cdp/sessions/TabSession.jsm:67:20
onPacket@chrome://remote/content/cdp/CDPConnection.jsm:248:36
onMessage@chrome://remote/content/server/WebSocketTransport.jsm:89:18
handleEvent@chrome://remote/content/server/WebSocketTransport.jsm:71:14

2022-11-03T07:45:34+00:00 INFO REQ-3 Response in 20.817 s
2022-11-03T07:45:34+00:00 ERROR REQ-3 Error: Unable to process browser request. ProtocolError: Protocol error (Page.navigate): frameId not supported RemoteAgentError@chrome://remote/content/cdp/Error.jsm:29:5
UnsupportedError@chrome://remote/content/cdp/Error.jsm:106:1
navigate@chrome://remote/content/cdp/domains/parent/Page.jsm:103:13
execute@chrome://remote/content/cdp/domains/DomainCache.jsm:101:25
execute@chrome://remote/content/cdp/sessions/Session.jsm:64:25
execute@chrome://remote/content/cdp/sessions/TabSession.jsm:67:20
onPacket@chrome://remote/content/cdp/CDPConnection.jsm:248:36
onMessage@chrome://remote/content/server/WebSocketTransport.jsm:89:18
handleEvent@chrome://remote/content/server/WebSocketTransport.jsm:71:14
```
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]

| closed | 2022-11-03T07:52:01Z | 2022-11-03T14:13:45Z | https://github.com/FlareSolverr/FlareSolverr/issues/573 | [
"duplicate"
] | karanrahar | 1 |
yeongpin/cursor-free-vip | automation | 41 | Unable to set a password |  | closed | 2025-01-30T17:07:47Z | 2025-02-05T12:14:01Z | https://github.com/yeongpin/cursor-free-vip/issues/41 | [] | 1837620622 | 1 |
BeanieODM/beanie | asyncio | 102 | Text Search? | I couldn't find anything in the docs to do text search (https://docs.mongodb.com/manual/reference/operator/query/text/)
I have a document :
```python
class Location(Document):
name: str
private: bool = False
class Meta:
table = "locations"
```
I want to do a search like "where name contains 'New York'"
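For context, these are the raw MongoDB filters I have in mind (plain PyMongo-style documents; whether Beanie accepts them directly is my assumption, not something I found in the docs):

```python
# Substring match via $regex -- works without any special index:
contains_filter = {"name": {"$regex": "New York"}}

# Full-text search via $text -- requires a text index on the "name" field:
text_filter = {"$text": {"$search": "New York"}}

print(contains_filter)  # {'name': {'$regex': 'New York'}}
```

If Beanie's `find()` / `find_many()` pass raw dict filters through to Motor, something like `Location.find(contains_filter)` might already cover the substring case, but I haven't verified that.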
Is this possible at the moment? | open | 2021-08-29T07:17:29Z | 2024-12-08T14:30:17Z | https://github.com/BeanieODM/beanie/issues/102 | [
"documentation"
] | tonybaloney | 4 |
roboflow/supervision | pytorch | 1,144 | Multi-cam tracking | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
I have 6 cams connected in a hallway and my task is to track and count people walking through it (there are always many people there), yet I do not understand how I can run inference on multiple cameras AND keep the same person IDs across cam 1, cam 2, ... cam 6. I use ultralytics for detection and tried their multi-streaming guide, yet if one camera catches a frame without objects, it shuts down. Is there any other way to run inference on multiple cameras, or am I missing something? Please help.
### Additional
_No response_ | closed | 2024-04-26T11:54:47Z | 2024-04-26T12:09:35Z | https://github.com/roboflow/supervision/issues/1144 | [
"question"
] | Vdol22 | 1 |
proplot-dev/proplot | data-visualization | 441 | Nonsticky bounds | ### Description
I think it makes sense to use nonsticky bounds for some plots, such as errorbar plots. Could you add an option for nonsticky bounds?
### Steps to reproduce
In this example, the errorbars in the edges are hidden.
```python
import numpy as np
import pandas as pd
import proplot as pplt
state = np.random.RandomState(51423)
data = state.rand(20, 8).cumsum(axis=0).cumsum(axis=1)[:, ::-1]
data = data + 20 * state.normal(size=(20, 8)) + 30
data = pd.DataFrame(data, columns=np.arange(0, 16, 2))
fig, ax = pplt.subplots()
h = ax.plot(data, means=True, label='label')
ax.legend(h)
```

### Proplot version
3.4.3
0.9.7 | open | 2023-11-17T19:23:57Z | 2023-11-17T19:23:57Z | https://github.com/proplot-dev/proplot/issues/441 | [] | kinyatoride | 0 |
plotly/plotly.py | plotly | 5,059 | `mpl_to_plotly` does not preserve axis labels (bar plots are useless) | While `mpl_to_plotly` is little known and receives little love, this bug is pretty easy to fix. Would you be open to a PR?
<details>
```python
import matplotlib.pyplot as plt
from plotly.tools import mpl_to_plotly
from plotly.offline import plot
```
</details>
In matplotlib this produces a barplot with labels:
```python
labels = ['a', 'b', 'c']
values = [1, 2, 3]
f = plt.figure(figsize=(6, 4))
plt.bar(labels, values)
plt.tight_layout()
```

But conversion to plotly looses the labels:
```
plotly_fig = mpl_to_plotly(f)
plot(plotly_fig)
```

A minimal fix would be to modify `prep_ticks` by appending:
```python
if axis_dict.get("type") == "date":
return axis_dict
vals = []
texts = []
for tick in axis.majorTicks:
vals.append(tick.get_loc())
texts.append(tick.label1.get_text())
if texts:
axis_dict = {}
axis_dict['tickmode'] = 'array'
axis_dict['tickvals'] = vals
axis_dict['ticktext'] = texts
return axis_dict
```
which produces:

`prep_ticks` is defined in:
https://github.com/plotly/plotly.py/blob/c54a2bdf1655caaaa3b7b71fbfc38a5584767bd5/plotly/matplotlylib/mpltools.py#L428-L514 | open | 2025-02-28T20:54:24Z | 2025-03-03T17:56:53Z | https://github.com/plotly/plotly.py/issues/5059 | [
"bug",
"P3"
] | krassowski | 1 |
pytest-dev/pytest-html | pytest | 530 | pytest-html doesn't always flush the results | I believe there is some sort of race condition going on, sometimes I get the report generated at the right time but the results are just not there.
My setup for pytest is very simple
```
# pytest.ini
[pytest]
addopts = --html=report.html --self-contained-html
``` | closed | 2022-07-14T12:19:01Z | 2023-03-05T16:18:37Z | https://github.com/pytest-dev/pytest-html/issues/530 | [
"needs more info"
] | 1Mark | 2 |
plotly/dash | flask | 2,423 | Add loading attribute to html.Img component | **Is your feature request related to a problem? Please describe.**
I'm trying to lazy load images using the built-in browser functionality, but I can't because that's not exposed in the html.Img component.
**Describe the solution you'd like**
I'd like the loading attribute to be added to the html.Img built in component, so I can use
```
html.Img(src=..., loading="lazy")
```
**Describe alternatives you've considered**
I tried using dangerously set html from the dcc markdown component and the dash-dangerously-set-html library. The former didn't work (I'm assuming something to do with the async nature of the markdown loading process). The latter works, but this component doesn't support serialisation like other dash components and broke some caching (standard Flask-Caching stuff) required for my particular use case.
**Additional context**
Discussed briefly on the plotly forum https://community.plotly.com/t/html-img-browser-based-lazy-loading/72637/3
| open | 2023-02-13T12:15:58Z | 2024-08-13T19:26:45Z | https://github.com/plotly/dash/issues/2423 | [
"feature",
"P3"
] | LiamLombard | 1 |
jacobgil/pytorch-grad-cam | computer-vision | 397 | 'numpy.int64' object is not iterable | The model was trained with 'autocast()'.
```python
MODEL_TYPE = 'efficientnet_b0'
model = CustomModel(1, config)
target_layers = [model.backbone.blocks[-1][-1]]
visual_cam = CAM(model, target_layers, type='GradCAM', use_cuda=DEVICE)
```
``` python
class CustomModel(nn.Module):
def __init__(self, num_classes, config):
super().__init__()
self.backbone = timm.create_model(config.MODEL_TYPE, pretrained=True)
        self.backbone_dim = self.backbone(torch.randn(1, 3, 512, 512)).shape[-1]
self.num_classes = num_classes
self.fc1 = nn.Linear(self.backbone_dim, num_classes)
self.config = config
# self.activation = nn.SiLU()
def forward(self, x):
x = self.backbone(x)
# x = self.activation(x)
x = F.dropout(x, p=self.config.DROPOUT)
x = self.fc1(x)
return x.squeeze()
```


| closed | 2023-03-04T03:44:21Z | 2023-03-06T15:56:42Z | https://github.com/jacobgil/pytorch-grad-cam/issues/397 | [] | Fly-Pluche | 1 |
comfyanonymous/ComfyUI | pytorch | 7,344 | Do I need to install Python, Visual Studio and Git to use the Windows Portable Package? | ### Your question
Do I need to install Python, Visual Studio and Git to use the Windows Portable Package?
### Logs
```powershell
```
### Other
_No response_ | open | 2025-03-21T17:00:56Z | 2025-03-22T10:07:57Z | https://github.com/comfyanonymous/ComfyUI/issues/7344 | [
"User Support"
] | Sdreamtale | 3 |
miguelgrinberg/Flask-SocketIO | flask | 1,670 | The __version__ attribute disappeared from version 5.0.1 to version 5.1.1 | **Describe the bug**
Hi, I have been using `Flask-SocketIO` version 5.0.1 until today and when I queried the version with the `__version__` attribute it returned the current version as follows:
```
>>> import flask_socketio
>>> flask_socketio.__version__
'5.0.1'
```
But after upgrading the package to version 5.1.1 this attribute has disappeared:
```
>>> import flask_socketio
>>> flask_socketio.__version__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'flask_socketio' has no attribute '__version__'
```
**Expected behavior**
It was expected that the attribute would remain. If it has been moved, is there any way to query the version again?
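One workaround that works regardless of whether a package exposes `__version__` (stdlib `importlib.metadata`, Python 3.8+), shown here as a sketch:

```python
from importlib.metadata import PackageNotFoundError, version

try:
    print(version("Flask-SocketIO"))  # reads the installed distribution metadata
except PackageNotFoundError:
    print("Flask-SocketIO is not installed")
```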
Thanks! | closed | 2021-08-27T06:46:20Z | 2021-08-27T08:38:03Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1670 | [] | ribes4 | 3 |
qubvel-org/segmentation_models.pytorch | computer-vision | 615 | Multi-class segmentation can't predict classes other than 0, 1 | Hello,
Thanks for your great contribution. I used your models to train on my image dataset, which has 5 classes. I tried Unet and DeepLabV3+ with different activation functions and loss = DiceLoss. However, I usually get a seemingly perfect dice loss and IoU because most pixels belong to class 0, but the model can never predict classes 2, 3, or 4. Do you know what's going wrong?
Thanks,
Wei | closed | 2022-06-29T17:58:45Z | 2023-12-29T20:12:01Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/615 | [
"Stale"
] | wfeng66 | 8 |
2noise/ChatTTS | python | 634 | How many texts can ChatTTS convert to speech at the same time? | I want to build an API for multiple users to use. When running concurrently, how many texts at most can ChatTTS convert to speech at once? My graphics cards are 4090 * 4 | closed | 2024-07-26T10:47:59Z | 2024-11-21T04:02:07Z | https://github.com/2noise/ChatTTS/issues/634 | [
"documentation",
"stale"
] | XuePeng87 | 2 |
erdewit/ib_insync | asyncio | 234 | I would like to get historical data on the future | Hello everyone,
```python
contracts = Future('ES', '20200619', 'GLOBEX', includeExpired=True)
ib.qualifyContracts(contracts)
# ib.reqMarketDataType(4)
bars = ib.reqHistoricalData(
    contracts,
    "",
    "5 Y",
    "1 day",
    "TRADES",
    True
)
print(bars)
```
I have got this error:
"Error 162, reqId 425: Historical Market Data Service error message:No market data permissions for GLOBEX FUT, contract: Contract(secType='FUT', conId=396336017, symbol='ES', lastTradeDateOrContractMonth='20210319', multiplier='50', exchange='GLOBEX', currency='USD', localSymbol='ESH1', tradingClass='ES')"
Can you help me?
What went wrong?
How can I solve this problem?
Best regards! | closed | 2020-04-10T18:30:42Z | 2020-04-20T15:37:17Z | https://github.com/erdewit/ib_insync/issues/234 | [] | jiany30 | 1 |
home-assistant/core | asyncio | 140,329 | Abort SmartThings flow if default_config is not enabled #139700 breaks existing working setup | ### The problem
Hi @joostlek ,
I hope you're doing well. I noticed that the recent update "Abort SmartThings flow if default_config is not enabled #139700" seems to abort the SmartThings flow when the cloud is not enabled. However, I have been using the SmartThings integration for months without the cloud being enabled, and this update has now broken my setup.
Could you please clarify if there were additional changes made that are now forcing this dependency on the cloud, or if this might have been an oversight?
### What version of Home Assistant Core has the issue?
core-2025.3.1
### What was the last working version of Home Assistant Core?
core-2025.2.4
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
smarthings
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/smartthings
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-11T00:31:26Z | 2025-03-15T16:07:26Z | https://github.com/home-assistant/core/issues/140329 | [
"integration: smartthings"
] | brendann993 | 2 |
sammchardy/python-binance | api | 1,277 | Execute trade with websocket? | Is it possible to execute a trade using the websocket? it looks like the create_order function just issues an http request. | closed | 2022-12-29T18:36:28Z | 2023-01-11T21:14:11Z | https://github.com/sammchardy/python-binance/issues/1277 | [] | OpenCoderX | 2 |
plotly/dash | data-science | 2,851 | Dash 2.17.0 prevents some generated App Studio apps from running | https://github.com/plotly/notebook-to-app/actions/runs/8974757424/job/24647808759#step:9:1283
We've reverted to 2.16.1 for the time being. | closed | 2024-05-06T20:27:02Z | 2024-07-26T13:45:34Z | https://github.com/plotly/dash/issues/2851 | [
"P2"
] | hatched | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,437 | [Feature Request]: Using two GPUs | Would it be possible to use 2 GPUs in one system to generate an image?
| open | 2024-04-04T13:21:35Z | 2024-04-13T00:33:06Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15437 | [
"enhancement"
] | roda37 | 0 |
dunossauro/fastapi-do-zero | sqlalchemy | 284 | Add a note about pyenv-win for the execution policy error | Command:
```powershell
Set-ExecutionPolicy Unrestricted -Scope CurrentUser -Force;
``` | closed | 2025-01-24T19:54:47Z | 2025-01-29T05:36:40Z | https://github.com/dunossauro/fastapi-do-zero/issues/284 | [] | dunossauro | 0 |
pywinauto/pywinauto | automation | 890 | Can not launch SnippingTool (elevation is required) | ## Expected Behavior
Launch SnippingTool.exe
## Actual Behavior
Error log below
```
(PYWINA~1) f:\PCKLIB_Python\WinAutomation>python main.py
Traceback (most recent call last):
File "C:\Users\PIAODA~1\Envs\PYWINA~1\lib\site-packages\pywinauto\application.py", line 1047, in start
start_info) # STARTUPINFO structure.
pywintypes.error: (2, 'CreateProcess', 'The system cannot find the file specified.')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "main.py", line 8, in <module>
app = Application(backend="uia").start('C:\WINDOWS\system32\SnippingTool.exe')
File "C:\Users\PIAODA~1\Envs\PYWINA~1\lib\site-packages\pywinauto\application.py", line 1052, in start
raise AppStartError(message)
pywinauto.application.AppStartError: Could not create the process "C:\WINDOWS\system32\SnippingTool.exe"
Error returned by CreateProcess: (2, 'CreateProcess', 'The system cannot find the file specified.')
```
## Short Example of Code to Demonstrate the Problem
I note that I tried both `uia` and `win32` for backend of Application.
```
from pywinauto import Desktop, Application
app = Application(backend="uia").start('C:\WINDOWS\system32\SnippingTool.exe')
```
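A guess at the cause (not verified): with 32-bit Python on 64-bit Windows, WOW64 file-system redirection silently maps `C:\Windows\System32` to `SysWOW64`, where `SnippingTool.exe` does not exist, which would explain `CreateProcess` error 2. The virtual `Sysnative` folder escapes the redirection. The helper below is my own illustration, not pywinauto API:

```python
def snipping_tool_path(is_32bit_python: bool, is_64bit_windows: bool,
                       system_root: str = r"C:\Windows") -> str:
    """Pick a SnippingTool.exe path that survives WOW64 redirection.

    A 32-bit process must go through the virtual 'Sysnative' folder to
    reach the real 64-bit System32.
    """
    subdir = "Sysnative" if (is_32bit_python and is_64bit_windows) else "System32"
    return system_root + "\\" + subdir + "\\SnippingTool.exe"

print(snipping_tool_path(True, True))    # C:\Windows\Sysnative\SnippingTool.exe
print(snipping_tool_path(False, True))   # C:\Windows\System32\SnippingTool.exe
```

With that path, `Application(backend='uia').start(snipping_tool_path(True, True))` might work; using a raw string (`r'C:\...'`) also avoids any backslash-escape surprises in the snippet above.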
## Specifications
- Pywinauto version: ***0.6.8***
- Python version and bitness: ***Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 21:26:53) [MSC v.1916 32 bit (Intel)] on win32***
- Platform and OS: ***Windows 10 64bit*** | open | 2020-02-16T13:41:58Z | 2021-03-22T01:28:22Z | https://github.com/pywinauto/pywinauto/issues/890 | [
"enhancement",
"question",
"Priority-Low"
] | 0xF217 | 9 |
keras-team/keras | deep-learning | 20,479 | [bug] TextVectorization + Sequential model doesn't work | Tensorflow version:
`2.19.0-dev20241108`
Keras version:
`3.7.0.dev2024111103`
Installation command: `pip install --pre tf-nightly`
Reproducing code:
```
import numpy as np
import tensorflow as tf
def get_text_vec_model(train_samples):
from tensorflow.keras.layers import TextVectorization
VOCAB_SIZE = 10
SEQUENCE_LENGTH = 16
EMBEDDING_DIM = 16
vectorizer_layer = TextVectorization(
max_tokens=VOCAB_SIZE,
output_mode="int",
output_sequence_length=SEQUENCE_LENGTH,
)
vectorizer_layer.adapt(train_samples)
model = tf.keras.Sequential(
[
vectorizer_layer,
tf.keras.layers.Embedding(
VOCAB_SIZE,
EMBEDDING_DIM,
name="embedding",
mask_zero=True,
),
tf.keras.layers.GlobalAveragePooling1D(),
tf.keras.layers.Dense(16, activation="relu"),
tf.keras.layers.Dense(1, activation="tanh"),
]
)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
return model
train_samples = np.array(["this is an example", "another example"], dtype=object)
train_labels = np.array([0.4, 0.2])
model = get_text_vec_model(train_samples)
# Error: ValueError: Invalid dtype: object
model.fit(train_samples, train_labels, epochs=1)
```
Error stack:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/weichen.xu/miniconda3/envs/mlflow/lib/python3.9/site-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/weichen.xu/miniconda3/envs/mlflow/lib/python3.9/site-packages/optree/ops.py", line 747, in tree_map
return treespec.unflatten(map(func, *flat_args))
ValueError: Invalid dtype: object
```
The same code works in "keras==3.6.0"
| closed | 2024-11-11T11:00:56Z | 2024-11-11T17:51:23Z | https://github.com/keras-team/keras/issues/20479 | [
"type:Bug"
] | WeichenXu123 | 3 |
graphql-python/graphene-django | django | 972 | Instantiate Middleware from string | **Is your feature request related to a problem? Please describe.**
I want to be able to put strings in the MIDDLEWARE setting, as in default Django settings:
```
GRAPHENE = {
'MIDDLEWARE': [
'package1.middleware',
'package2.middleware',
]
}
```
**Describe the solution you'd like**
The helper method `graphene_django.views.instantiate_middleware` should parse strings to classes and instantiate them.
You could use `django.utils.module_loading.import_string`, which has been around since Django 1.7: https://github.com/django/django/blob/1.7/django/utils/module_loading.py
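For illustration, a dependency-free sketch of the same mechanism (`import_string` reduced to stdlib `importlib`; the `instantiate_middleware` shape is my guess at the helper, not graphene-django's actual code):

```python
import importlib

def import_string(dotted_path):
    """Import an attribute from a dotted module path, e.g. 'json.JSONDecoder'."""
    module_path, _, attr = dotted_path.rpartition(".")
    if not module_path:
        raise ImportError(f"{dotted_path!r} is not a dotted module path")
    module = importlib.import_module(module_path)
    try:
        return getattr(module, attr)
    except AttributeError as exc:
        raise ImportError(f"{module_path!r} has no attribute {attr!r}") from exc

def instantiate_middleware(middlewares):
    """Yield middleware instances, resolving dotted-path strings first."""
    for middleware in middlewares:
        if isinstance(middleware, str):
            middleware = import_string(middleware)
        yield middleware() if isinstance(middleware, type) else middleware
```

With that in place, `GRAPHENE = {'MIDDLEWARE': ['package1.middleware', ...]}` could resolve strings the same way Django's own `MIDDLEWARE` setting does.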
**Describe alternatives you've considered**
I've currently subclassed `GraphQLView` to set self.middleware instead, but I would rather not have to.
**Additional context**
-
| closed | 2020-05-26T17:24:21Z | 2020-05-26T19:28:09Z | https://github.com/graphql-python/graphene-django/issues/972 | [
"โจenhancement"
] | wkoot | 1 |
supabase/supabase-py | fastapi | 486 | add documentation for update and delete in the supabase docs |
There are insert and fetch examples in the docs (https://supabase.com/docs/reference/python/insert), but there are no update or delete examples. I think they should be added. | closed | 2023-07-02T19:50:10Z | 2024-04-28T22:02:13Z | https://github.com/supabase/supabase-py/issues/486 | [
"good first issue"
] | IanEvers | 3 |
hack4impact/flask-base | sqlalchemy | 26 | Give flask-base a real task queue | Right now, async code (e.g. `send_email`) is implemented with threads. We should have a worker process always running which completes tasks from the task queue instead. One problem with the current approach is that clients of the `send_email` function do not know that it is asynchronous unless they read the implementation. I would much prefer something like this (with [rq](http://python-rq.org/)):
``` python
result = task_queue.enqueue(send_email, <args>)
```
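The pattern in miniature, as a toy in-process queue just to illustrate the contract (rq itself uses Redis and a separate worker process; this is not rq's implementation):

```python
from collections import deque

class TaskQueue:
    """Toy stand-in for a real task queue: callers enqueue, a worker drains."""

    def __init__(self):
        self._tasks = deque()

    def enqueue(self, func, *args, **kwargs):
        # Returns immediately -- the caller knows this is deferred work.
        self._tasks.append((func, args, kwargs))

    def work(self):
        """What the always-running worker process would do."""
        while self._tasks:
            func, args, kwargs = self._tasks.popleft()
            func(*args, **kwargs)

sent = []
def send_email(to, subject):
    sent.append((to, subject))

queue = TaskQueue()
queue.enqueue(send_email, "user@example.com", subject="Welcome")  # non-blocking
queue.work()  # the worker actually sends
```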
| closed | 2016-01-09T04:30:24Z | 2016-07-08T00:55:33Z | https://github.com/hack4impact/flask-base/issues/26 | [
"enhancement"
] | sandlerben | 4 |
Miserlou/Zappa | flask | 1,344 | Event schedule for async task is not updated | ## Context
I'm using DynamoDB triggers which call my Lambda function. I set this up in zappa_settings using the "events" list and deployed it. The DynamoDB triggers were created successfully.
There are two problems with it:
1. I tried to change batch_size attribute.
2. I have deleted configuration for one of the triggers
## Expected Behavior
1. DynamoDb trigger should be updated with new settings
2. DynamoDb trigger should be deleted if it don't exist in config anymore
## Actual Behavior
The trigger was not updated, nor deleted later. The script just said:
`dynamodb event schedule for func_name already exists - Nothing to do here.`
I have to remove it from the AWS console manually in order to get the changes applied.
## Possible Fix
Triggers have to be recreated either every time, or when config changes are detected.
| open | 2018-01-09T22:34:14Z | 2018-02-23T22:10:58Z | https://github.com/Miserlou/Zappa/issues/1344 | [
"enhancement",
"non-bug",
"good-idea"
] | chekan-o | 1 |
sktime/pytorch-forecasting | pandas | 1,794 | [MNT] Upgrade to `torch>2.2.2` because of CVE-2024-5480 | Hi!
I cannot install pytorch-forecasting in my organization because of https://www.cvedetails.com/cve/CVE-2024-5480/. Can you upgrade the dependency in the pyproject.tmol to torch>2.2.2, please?
Thanks a lot!
Best
Robert | open | 2025-03-13T08:00:10Z | 2025-03-20T10:55:33Z | https://github.com/sktime/pytorch-forecasting/issues/1794 | [
"maintenance"
] | Garve | 4 |
aminalaee/sqladmin | sqlalchemy | 23 | Enable SQLAlchemy V2 features | Need to check SQLAlchemy V2 migration steps. As far as I can see we're using SQLAlchemy 1.4 features, It should be ready, but needs checking and fixing. | closed | 2022-01-19T14:03:33Z | 2023-01-05T15:26:37Z | https://github.com/aminalaee/sqladmin/issues/23 | [
"enhancement"
] | aminalaee | 2 |
TencentARC/GFPGAN | deep-learning | 472 | Sai | open | 2023-12-10T02:56:13Z | 2023-12-10T02:56:13Z | https://github.com/TencentARC/GFPGAN/issues/472 | [] | sai9232 | 0 | |
capitalone/DataProfiler | pandas | 916 | `DATAPROFILER_SEED` global input validation testing | **Is your feature request related to a problem? Please describe.**
We reference `DATAPROFILER_SEED` in a variety of locations throughout the repo. So right now the way we work with this env variable is incorrect in nearly every location except [here](https://github.com/capitalone/DataProfiler/blob/dev/dataprofiler/data_readers/data_utils.py#L319)
**Describe the outcome you'd like:**
I would like `DATAPROFILER_SEED` to be updated in all locations in the code to be a similar format to the link above. Also there should be testing to validate that this env variable is use properly in every place.
**Additional context:**
Should be abstracted to a `dataprofiler/profilers/utils.py`
| closed | 2023-06-27T17:23:40Z | 2023-08-01T13:59:49Z | https://github.com/capitalone/DataProfiler/issues/916 | [
"New Feature"
] | micdavis | 4 |
strawberry-graphql/strawberry | graphql | 3,713 | multipart_uploads_enabled not propagated in AsyncGraphQLView, causing file uploads to fail | **Describe the Bug**
In strawberry-graphql==0.253.0 and strawberry-graphql-django==0.50.0, setting multipart_uploads_enabled=True in the AsyncGraphQLView does not enable multipart uploads as expected. The self.multipart_uploads_enabled attribute remains False, causing file uploads via multipart/form-data to fail with a 400 Bad Request error.
**To Reproduce**
1. Configure the GraphQL view in Django:
```
path(
'graphql/',
AsyncGraphQLView.as_view(
schema=schema,
graphiql=settings.DEBUG,
multipart_uploads_enabled=True,
),
name='graphql',
),
```
2. Attempt to perform a file upload mutation from the client.
Example cURL Command:
```
curl -X POST -H "Content-Type: multipart/form-data" \
-F 'operations={"query":"mutation createImages($data: [ImageInput!]!) { createImages(data: $data) { id imageWebUrl }}","variables":{"data":[{"image":null,"imageType":"PACK"}]}}' \
-F 'map={"0":["variables.data.0.image"]}' \
-F '0=@/path/to/logo.png' \
http://localhost:8080/graphql
```
3. Observe that the server responds with a 400 Bad Request error stating "Unsupported content type".
4. Inspect the self.multipart_uploads_enabled attribute inside the AsyncGraphQLView and find that it is False.
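For reference, this is my understanding of what the `operations`/`map` fields in the cURL request are meant to do under the GraphQL multipart request spec. The sketch below is mine, not Strawberry's code:

```python
def bind_files(operations: dict, file_map: dict, files: dict) -> dict:
    """Replace the nulls in `operations` at the dotted paths listed in
    `file_map` with the matching uploaded parts."""
    for part_name, dotted_paths in file_map.items():
        for dotted in dotted_paths:
            *parents, leaf = dotted.split(".")
            target = operations
            for key in parents:
                target = target[int(key)] if isinstance(target, list) else target[key]
            if isinstance(target, list):
                target[int(leaf)] = files[part_name]
            else:
                target[leaf] = files[part_name]
    return operations

ops = {"query": "mutation ...", "variables": {"data": [{"image": None, "imageType": "PACK"}]}}
bound = bind_files(ops, {"0": ["variables.data.0.image"]}, {"0": b"<png bytes>"})
print(bound["variables"]["data"][0]["image"])  # b'<png bytes>'
```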
**Expected Behavior**
Setting multipart_uploads_enabled=True should set self.multipart_uploads_enabled to True in the AsyncGraphQLView, enabling multipart uploads and allowing file uploads to work correctly.
**Actual Behavior**
Despite setting multipart_uploads_enabled=True, self.multipart_uploads_enabled remains False, causing the server to reject multipart/form-data requests.
**Additional Context**
- This issue did not occur in previous versions:
> - strawberry-graphql==0.219.1
> - strawberry-graphql-django==0.32.1
- According to the documentation:
> - [Breaking Changes in 0.243.0 - Multipart Uploads Disabled by Default](https://strawberry.rocks/docs/breaking-changes/0.243.0#multipart-uploads-disabled-by-default)
> - [Django Integration Options](https://strawberry.rocks/docs/integrations/django#options)
- The issue seems to be that multipart_uploads_enabled is not properly propagated to the AsyncGraphQLView instance.
If I force it to True in this method, everything works fine:
```
async def parse_http_body(
self, request: AsyncHTTPRequestAdapter
) -> GraphQLRequestData:
headers = {key.lower(): value for key, value in request.headers.items()}
content_type, _ = parse_content_type(request.content_type or "")
accept = headers.get("accept", "")
protocol: Literal["http", "multipart-subscription"] = "http"
if self._is_multipart_subscriptions(*parse_content_type(accept)):
protocol = "multipart-subscription"
if request.method == "GET":
data = self.parse_query_params(request.query_params)
elif "application/json" in content_type:
data = self.parse_json(await request.get_body())
elif self.multipart_uploads_enabled and content_type == "multipart/form-data":
data = await self.parse_multipart(request)
else:
raise HTTPException(400, "Unsupported content type")
return GraphQLRequestData(
query=data.get("query"),
variables=data.get("variables"),
operation_name=data.get("operationName"),
protocol=protocol,
)
```
**Question**
Am I misconfiguring something, or is this a bug in strawberry-graphql? Any guidance on how to fix or work around this issue would be appreciated.
| open | 2024-11-29T22:04:43Z | 2024-12-31T00:53:39Z | https://github.com/strawberry-graphql/strawberry/issues/3713 | [
"bug"
] | BranDavidSebastian | 2 |
OFA-Sys/Chinese-CLIP | nlp | 4 | Where is the RoBERTa-wwm-ext-base-chinese.json file? | The ViT-B-16.json file can be found under open-clip; where can I find this one? | closed | 2022-07-13T10:52:53Z | 2022-11-03T11:03:08Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/4 | [] | PineREN | 3 |
flasgger/flasgger | flask | 422 | property field marked as required but flasgger still accepts it | From the todo example:
```
def post(self):
"""
This is an example
---
tags:
- restful
parameters:
- in: body
name: body
schema:
$ref: '#/definitions/Task'
responses:
201:
description: The task has been created
schema:
$ref: '#/definitions/Task'
"""
args = parser.parse_args()
print(args)
todo_id = int(max(TODOS.keys()).lstrip('todo')) + 1
todo_id = 'todo%i' % todo_id
TODOS[todo_id] = {'task': args['task']}
return TODOS[todo_id], 201
```
Doing
```
curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' -d '{"potato" : "elefante"}' 'http://127.0.0.1:5000/todos'
```
Results in 201 answer with args as: `{'task': None}` | open | 2020-07-23T20:08:12Z | 2020-07-24T11:46:32Z | https://github.com/flasgger/flasgger/issues/422 | [] | patrickelectric | 1 |
huggingface/datasets | tensorflow | 7,040 | load `streaming=True` dataset with downloaded cache | ### Describe the bug
We build a dataset which contains several hdf5 files and write a script using `h5py` to generate the dataset. The hdf5 files are large and the processed dataset cache takes more disk space. So we hope to try streaming iterable dataset. Unfortunately, `h5py` can't convert a remote URL into a hdf5 file descriptor. So we use `fsspec` as an interface like below:
```python
def _generate_examples(self, filepath, split):
for file in filepath:
with fsspec.open(file, "rb") as fs:
with h5py.File(fs, "r") as fp:
# for event_id in sorted(list(fp.keys())):
event_ids = list(fp.keys())
......
```
### Steps to reproduce the bug
The `fsspec` approach works, but it takes 10+ minutes to print the first 10 examples, which is even longer than the download time. I'm not sure if it just caches the whole hdf5 file and then generates the examples.
### Expected behavior
So does the following make sense so far?
1. download the files
```python
dataset = datasets.load('path/to/myscripts', split="train", name="event", trust_remote_code=True)
```
2. load the iterable dataset faster (using the raw file cache at path `.cache/huggingface/datasets/downloads`)
```python
dataset = datasets.load('path/to/myscripts', split="train", name="event", trust_remote_code=True, streaming=true)
```
I ran some tests, but the code above doesn't give the expected result. I'm not sure if this is supported. I also found issue #6327. It seemed similar to mine, but I couldn't find a solution.
### Environment info
- `datasets` = 2.18.0
- `h5py` = 3.10.0
- `fsspec` = 2023.10.0 | open | 2024-07-11T11:14:13Z | 2024-07-11T14:11:56Z | https://github.com/huggingface/datasets/issues/7040 | [] | wanghaoyucn | 2 |
graphql-python/graphql-core | graphql | 167 | Loss of precision in floating point values | Hello,
We are observing some surprising behavior with floating point numbers. Specifically, ast_from_value() appears to be converting python float values to strings in a lossy manner.
This appears to be happening in [this line](https://github.com/graphql-python/graphql-core/blob/main/src/graphql/utilities/ast_from_value.py#L120)
Using `:g` rounds numbers and/or converts them to scientific notation with six significant digits (the default precision for the `g` format spec).
For example,
```
value_ast = ast_from_value(
{'x': 12345678901234.0},
type,
)
```
produces an AST with a FloatValueNode with string value of '1.23457e+13'
Printing back to a string:
```
printed_ast = printer.print_ast(value_ast)
print(printed_ast)
```
produces
```
{
x: 1.23457e+13
}
```
where we would expect it to be
```
{
x: 12345678901234.0
}
```
Similarly, a number like `1.1234567890123457` gets rounded to `1.12346`.
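Both effects are reproducible with plain format specs, independent of graphql-core (stdlib only; 6 is the default precision for `:g`):

```python
x = 12345678901234.0
assert f"{x:g}" == "1.23457e+13"      # six significant digits, scientific notation
assert str(x) == "12345678901234.0"   # str() round-trips the float exactly
assert float(str(x)) == x

assert f"{1.1234567890123457:g}" == "1.12346"
assert str(1.1234567890123457) == "1.1234567890123457"
```

This is consistent with `str(serialized)` helping: `str()`/`repr()` of a float is the shortest string that round-trips, so nothing is lost until float precision itself runs out (as in the 17-digit test cases).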
In our experiments, changing the line references above to
```
return FloatValueNode(value=str(serialized))
```
produces better results but is still limited by the underlying limitations of Python floats (see test cases below).
We think the ultimate solution may require using Decimal types instead of floats throughout graphql-core.
Here is a simple test cases to reproduce:
```
@pytest.mark.cdk
@pytest.mark.parametrize(
"name,input_num,expected_output_num",
[
pytest.param("large floating point", 12345678901234.123, "12345678901234.123"),
pytest.param("floating point precision", 1234567.987654321, "1234567.987654321"),
pytest.param("negative float", -12345678901234.123, "-12345678901234.123"),
pytest.param("no decimal", 12345678901234, "12345678901234.0"),
# these cases may require use of Decimal to avoid loss of precision:
pytest.param("floating point precision large", 12345678901.987654321, "12345678901.987654"),
pytest.param("floating point high precision", 1.1234567890123456789, "1.1234567890123457"),
pytest.param("floating point precision 17 digits", 123456789012345678.123456, "1.2345678901234568e+17"),
],
)
def test_python_type_to_graphql_string_floating_point_numbers(
name: str, input_num: float, expected_output_num: str, gql_schema_shapes
) -> None:
schema = gql_schema_shapes.customer
val = {"x": input_num}
value_ast = ast_from_value(
val,
schema.get_type("MyType"),
)
res = printer.print_ast(value_ast)
assert res == f'{{x: {expected_output_num}}}', f"{name} failed"
```
graphql-core version 3.2.0
| closed | 2022-04-04T18:15:59Z | 2022-04-10T16:50:49Z | https://github.com/graphql-python/graphql-core/issues/167 | [] | rpgreen | 4 |
microsoft/JARVIS | deep-learning | 86 | What does 72G of disk space refer to? | Dear JARVIS team,
I'm sure my device has more than 72G of space, but when I run download.sh it tells me "no space left on device". I used `df -h` to check my disk, and it was indeed full. Could you tell me what the 72G of disk space refers to? How much space do I need to run download.sh?
scikit-tda/kepler-mapper | data-visualization | 137 | Is it possible to calculate the Betti Numbers of the simplicial complex? | I would like to be able to evaluate the choice of parameter values for the Kepler Mapper using the Betti numbers rather than visually (by looking at the plotted simplicial complex). This would help in making a more informed choice of parameter values and, in addition, would open the door to persistent homology calculations. I am wondering whether it is currently possible to calculate Betti numbers with Kepler Mapper?
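Kepler Mapper does not expose Betti numbers directly, but the first two can be computed from the graph dictionary it returns, treating the output as a 1-skeleton. A minimal sketch, assuming the usual `{'nodes': ..., 'links': ...}` structure from `km.map()` (higher-dimensional simplices are ignored, so `b1` here is that of the underlying graph):

```python
# Sketch: Betti numbers b0 and b1 of the 1-skeleton of a Mapper graph.
def betti_0_1(graph):
    adj = {n: set() for n in graph["nodes"]}
    edges = 0
    for src, targets in graph.get("links", {}).items():
        for dst in targets:
            if dst not in adj.setdefault(src, set()):
                adj[src].add(dst)
                adj.setdefault(dst, set()).add(src)
                edges += 1
    seen, b0 = set(), 0
    for start in adj:                    # b0 = number of connected components
        if start in seen:
            continue
        b0 += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(adj[node] - seen)
    b1 = edges - len(adj) + b0           # Euler: V - E = b0 - b1 for a graph
    return b0, b1

# Toy example: a triangle A-B-C plus an isolated node D.
toy = {"nodes": {"A": [0], "B": [1], "C": [2], "D": [3]},
       "links": {"A": ["B", "C"], "B": ["C"]}}
print(betti_0_1(toy))  # (2, 1): two components, one loop
```

For a graph, the Euler characteristic gives `b0 - b1 = V - E`, which is what the last line of the function uses; a full persistent-homology treatment would need a dedicated library such as GUDHI or Ripser.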
| open | 2019-02-22T09:40:16Z | 2019-11-22T17:14:49Z | https://github.com/scikit-tda/kepler-mapper/issues/137 | [] | karinsasaki | 6 |
browser-use/browser-use | python | 914 | Unable to identify non-index HTML element | ### Bug Description
No index is available for "Add Participant", which is just a div. Is there any way to click a non-indexed HTML element such as a div? I would appreciate your guidance.

HTML: <div id="addParticipant" class="i-vertical span-add ">Add Participant</div>
### Reproduction Steps
Click on a non-indexed HTML element such as a div.
### Code Sample
```html
<div id="addParticipant" class="i-vertical span-add ">Add Participant</div>
```
### Version
latest
### LLM Model
GPT-4o
### Operating System
windows
### Relevant Log Output
```shell
``` | closed | 2025-03-01T22:57:58Z | 2025-03-05T11:33:08Z | https://github.com/browser-use/browser-use/issues/914 | [
"bug"
] | kalirajann | 4 |
errbotio/errbot | automation | 1,480 | Proposal : Slack backend deprecation plan | # Description
The current situation for Slack is causing confusion for users and developers. For example:
- Multiple PRs for the same features are being created.
- Features are applied to one slack backend but not the other.
- Inconsistent behaviour between backends makes debugging confusing.
- Users must be given special instruction to create legacy tokens for use with the current slack backend.
- Both `slack` and `slack_rtm` use deprecated upstream modules.
# Proposal
I recommend a deprecation plan be put in place to correct the situation in 4 phases (each phase spanning a 2 month period)
## phase 1 (start date: 1 Dec 2020)
- Merge PR #1451 once both RTM and Events API are completely integrated as the `slacksdk` backend.
- `slacksdk`: start to test and stabilise the backend for both RTM and Events API.
- Add warnings in logs to indicate the `slack` backend is deprecated for removal in 6 months.
- Add warnings in logs to indicate the `slack_rtm` backend is deprecated for removal in 2 months.
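The phase-1 log warnings could be as small as the following sketch (function and logger names are illustrative only, not actual errbot internals):

```python
import logging
import warnings

log = logging.getLogger("errbot.backends.slack")

def warn_deprecated_backend(name: str, replacement: str, removal: str) -> str:
    """Log a deprecation notice for a backend slated for removal (sketch)."""
    msg = (
        f"The '{name}' backend is deprecated and will be removed in {removal}. "
        f"Please migrate to the '{replacement}' backend."
    )
    log.warning(msg)
    warnings.warn(msg, DeprecationWarning, stacklevel=2)
    return msg

warn_deprecated_backend("slack", "slacksdk", "6 months")
warn_deprecated_backend("slack_rtm", "slacksdk", "2 months")
```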
## phase 2 (start date: 1 Feb 2021)
- `slacksdk`: continue to test and stabilise the backend for both RTM and Events API.
- `slack` remains in deprecated warning state.
- `slack_rtm` removed from errbot. (It's less tested than the `slack` backend and is the functional equivalent to `slacksdk`)
## phase 3 (start date: 3 April 2021)*
- `slacksdk`: continue to test and stabilise the backend for both RTM and Events API.
- rename `slacksdk` backend to `slack`.
- rename `slack` backend to `slack_legacy`. (backend remains for any users that still can't use `slacksdk`)
## phase 4 (start date: 1 June 2021)
- removal of `slack_legacy` from errbot.
- `slack` (aka `slacksdk`) remains as the only backend supporting both RTM and Events API.
*Avoid April fools day release - so people know the change isn't a joke. | closed | 2020-11-25T14:03:23Z | 2021-07-23T05:47:16Z | https://github.com/errbotio/errbot/issues/1480 | [
"backend: Slack",
"#release-process"
] | nzlosh | 6 |
rthalley/dnspython | asyncio | 401 | Refactor project documentation using epytext | In less than a month, `epydoc` will be [legacy](https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=881562) - "
```
Epydoc is basically unmaintained upstream. Also, it is only
supported for Python 2, so it will reach its end of life along with
Python 2 sometime in 2020.
```
"
This also means the markup language `epytext` is obsolete. The current build/dist process should at the very least have an explicit Sphinx option so the online documentation can be updated in the future. | closed | 2019-12-04T00:00:24Z | 2020-05-12T13:00:24Z | https://github.com/rthalley/dnspython/issues/401 | [
"Enhancement Request",
"Needs Author"
] | binaryflesh | 9 |
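To illustrate the migration the dnspython issue implies, here is the same docstring in epytext markup and in the Sphinx/reST markup that would replace it (the function itself is hypothetical):

```python
def to_text_epytext(name):
    """Convert a name to text (epytext markup, used by the legacy epydoc).

    @param name: the name to convert
    @type name: dns.name.Name
    @rtype: string
    """

def to_text_rest(name):
    """Convert a name to text (reStructuredText markup, rendered by Sphinx).

    :param name: the name to convert
    :type name: dns.name.Name
    :rtype: str
    """
```

Sphinx's autodoc renders the second form natively, so no epydoc tooling is needed at build time.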
FujiwaraChoki/MoneyPrinterV2 | automation | 56 | RuntimeError: Incorrect response | When I run the code, I get this error. How can I fix this bug?

| closed | 2024-03-05T03:33:09Z | 2024-03-06T02:10:51Z | https://github.com/FujiwaraChoki/MoneyPrinterV2/issues/56 | [] | 2679373161 | 2 |
tfranzel/drf-spectacular | rest-api | 565 | Add link to documentation in GitHub URL metadata | To make it easier to find the documentation for this project, consider adding a link to https://drf-spectacular.readthedocs.io/en/latest/ from the GitHub project page.
For example, see *Website*:
<img width="442" alt="Screen Shot 2021-10-12 at 1 41 44 PM" src="https://user-images.githubusercontent.com/10340167/137003876-440936a1-ebce-48d3-9824-4f5716b6117e.png">
| closed | 2021-10-12T17:42:48Z | 2021-10-12T18:18:52Z | https://github.com/tfranzel/drf-spectacular/issues/565 | [] | johnthagen | 2 |