| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ray-project/ray | python | 51,504 | CI test windows://python/ray/tests:test_global_state is consistently_failing | CI test **windows://python/ray/tests:test_global_state** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aaf1-9737-4a02-a7f8-1d7087c16fb1
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4156-97c5-9793049512c1
DataCaseName-windows://python/ray/tests:test_global_state-END
Managed by OSS Test Policy | closed | 2025-03-19T00:07:25Z | 2025-03-19T21:53:03Z | https://github.com/ray-project/ray/issues/51504 | [
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 3 |
marcomusy/vedo | numpy | 804 | AttributeError: 'NoneType' object has no attribute 'polydata' | Hi there,
I wrote some code to display my own .obj file.
The first time it worked with another .obj file, but unfortunately, the second time the error below has been driving me crazy.
What else can I try?
The code is:
```python
from vedo import Mesh

mesh = Mesh("3dobjtest.obj")
mesh.show()
```
Error: `AttributeError: 'NoneType' object has no attribute 'polydata'`
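In case it helps with debugging, here is a minimal pre-flight check (a sketch using only the standard library; the file name comes from the snippet above) that catches the two most common causes of a `None` mesh: a wrong path and a file that contains no vertex data:

```python
import os

def check_obj_file(path):
    """Sanity-check an OBJ file before handing it to vedo's Mesh()."""
    if not os.path.exists(path):
        return "missing"          # wrong path or working directory
    with open(path, "rb") as f:
        head = f.read(4096)
    # A Wavefront OBJ should contain vertex lines that start with "v ".
    if not head.startswith(b"v ") and b"\nv " not in head:
        return "no-vertices"      # file exists but holds no usable geometry
    return "ok"

print(check_obj_file("3dobjtest.obj"))
```

If this prints "ok" and the error persists, the file is probably malformed in a way vedo's reader rejects.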
Powered by a cup of coffee
| closed | 2023-02-06T15:47:51Z | 2023-02-06T23:27:09Z | https://github.com/marcomusy/vedo/issues/804 | [] | ghost | 6 |
identixone/fastapi_contrib | pydantic | 167 | Support docker secrets in configuration | ### Description
In order to use this library successfully as part of another system that is deployed via Docker Swarm, we need to support reading configuration variables from secrets file
| closed | 2021-02-28T21:13:36Z | 2021-03-01T10:38:01Z | https://github.com/identixone/fastapi_contrib/issues/167 | [
"enhancement"
] | levchik | 1 |
yt-dlp/yt-dlp | python | 12,506 | [Floatplane] Optimization for downloading a playlist/channel with an archive file | ### Checklist
- [x] I'm requesting a site-specific feature
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar requests **including closed ones**. DO NOT post duplicates
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
North America
### Example URLs
https://www.floatplane.com/post/oC5cEg2cI6
### Provide a description that is worded well enough to be understood
When downloading a channel or playlist from YouTube with an archive file, once the extractor reaches a video that is already logged in the provided archive file, it immediately skips that video and proceeds to the next in the list. This often means the playlist finishes almost immediately when all remaining entries are already downloaded.
Contrast this with Floatplane, where yt-dlp still downloads post data, video metadata, and sometimes more before recognizing that the post is already in the archive file. When using delays, this can make the playlist take a very long time to complete even when the remainder is already in the archive. Even without delays it still takes significantly longer because of the unnecessary network traffic.
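The behavior being requested here, consulting the archive before any network round-trips, can be sketched as follows (an illustration only, not yt-dlp's actual code; the `floatplane <id>` line format mirrors the download-archive convention of one `extractor id` entry per line):

```python
def already_archived(video_id, archive_path, extractor="floatplane"):
    """Return True if the post should be skipped before any metadata requests."""
    try:
        with open(archive_path, encoding="utf-8") as f:
            archived = {line.strip() for line in f}
    except FileNotFoundError:
        return False
    return f"{extractor} {video_id}" in archived

# Idea: call this with the ID taken from the post URL itself,
# before downloading post data or video metadata.
```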
```shell
[download] Downloading item 10 of 25
[Floatplane] Extracting URL: https://www.floatplane.com/post/oC5cEg2cI6
[Floatplane] Sleeping 15.0 seconds ...
[Floatplane] oC5cEg2cI6: Downloading post data
[Floatplane] Sleeping 15.0 seconds ...
[Floatplane] OdpuvEagFJ: Downloading video metadata
[Floatplane] Sleeping 15.0 seconds ...
[Floatplane] OdpuvEagFJ: Downloading video stream data
[download] OdpuvEagFJ: Keeping up with AI with the Minisforum X1 AI Mini PC Ft. 96GB Memory has already been recorded in the archive
```
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-4', '--continue', '--no-overwrites', '--ignore-errors', '--no-progress', '--limit-rate', '1M', '--sleep-interval', '15', '--max-sleep-interval', '300', '--sleep-requests', '15', '--sleep-subtitles', '15', '--fragment-retries', '5', '--output', '%(upload_date)s-%(uploader)s-%(title)s-%(id)s.%(ext)s', '--restrict-filenames', '--trim-filenames', '225', '--check-formats', '--match-filter', '!is_live', '--abort-on-unavailable-fragment', '--write-sub', '--write-auto-sub', '--sub-lang', 'en.*', '--write-description', '--playlist-end', '25', '--cookies', 'z-ytdl/ytdl-cookies.txt', '--download-archive', 'z-ytdl/_archive.txt', 'https://www.floatplane.com/channel/level1techs/home']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.02.28.232826 from yt-dlp/yt-dlp-nightly-builds [79ec2fdff] (zip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.0-122-generic-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: Cryptodome-3.11.0, brotli-1.0.9, certifi-2024.07.04, curl_cffi-0.7.1, mutagen-1.45.1, requests-2.25.1, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-1.26.5
[debug] Proxy map: {}
[debug] Request Handlers: urllib, curl_cffi
[debug] Plugin directories: none
[debug] Loaded 1843 extractors
[debug] Loading archive file 'z-ytdl/_archive.txt'
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2025.02.28.232826 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2025.02.28.232826 from yt-dlp/yt-dlp-nightly-builds)
[FloatplaneChannel] Extracting URL: https://www.floatplane.com/channel/level1techs/home
[FloatplaneChannel] level1techs: Downloading JSON metadata
[download] Downloading playlist: Level1Techs
[FloatplaneChannel] Sleeping 15.0 seconds ...
[FloatplaneChannel] level1techs: Downloading page 1
[FloatplaneChannel] Sleeping 15.0 seconds ...
[FloatplaneChannel] level1techs: Downloading page 2
[info] Writing playlist description to: NA-NA-Level1Techs-level1techs.description
[FloatplaneChannel] Playlist Level1Techs: Downloading 25 items
[download] Downloading item 1 of 25
[Floatplane] Extracting URL: https://www.floatplane.com/post/n6lUM57MH7
[Floatplane] Sleeping 15.0 seconds ...
[Floatplane] n6lUM57MH7: Downloading post data
[Floatplane] Sleeping 15.0 seconds ...
[Floatplane] fa7sR4bwXr: Downloading video metadata
[Floatplane] Sleeping 15.0 seconds ...
[Floatplane] fa7sR4bwXr: Downloading video stream data
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[download] fa7sR4bwXr: Phantom Gaming B850i Lighting WiFi Review has already been recorded in the archive
[download] Downloading item 2 of 25
[Floatplane] Extracting URL: https://www.floatplane.com/post/LXidkIZBAq
[Floatplane] Sleeping 15.0 seconds ...
^C
``` | closed | 2025-03-01T17:22:26Z | 2025-03-07T23:32:28Z | https://github.com/yt-dlp/yt-dlp/issues/12506 | [
"question"
] | RankoKohime | 4 |
ageitgey/face_recognition | python | 789 | How to get the results of landmarks with 68 points as dlib | ### Description
How can I get 68-point landmark results like dlib's?
I don't want to use `import dlib` etc., because it takes too much time.
When I tried to get landmarks, I used `api.face_landmarks(image)`, but what I got was 72 points. So I tried to manually convert the results to the 68 points dlib produces.
What I did was transform the 72 points into 68 points:
``` python
def compatibleToDlib(arr):
results = []
# 0 - 54
for i1 in range(55):
results.append(arr[i1])
# 55,56,57,58,59 <- arr[61-65]
for i2 in range(61,66):
results.append(arr[i2])
# 60 <- arr[59]
results.append(arr[59])
# 61,62,63 <- arr[58-56]
for i3 in range(0,3):
results.append(arr[-i3+58])
# 64 <- arr[55]
results.append(arr[55])
# 65,66,67 <- arr[70,69,68]
for i4 in range(0,3):
results.append(arr[-i4+70])
# print(len(results))
return results
```
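As a quick sanity check of the mapping (a condensed restatement with dummy indices standing in for (x, y) points; it verifies only the 72-to-68 count and index routing, not the semantic ordering dlib expects):

```python
def compatible_to_dlib(arr):
    # Same index routing as the snippet above, written compactly.
    results = list(arr[:55])                 # 0-54 unchanged
    results += list(arr[61:66])              # 55-59 <- arr[61..65]
    results.append(arr[59])                  # 60    <- arr[59]
    results += [arr[58], arr[57], arr[56]]   # 61-63 <- arr[58..56]
    results.append(arr[55])                  # 64    <- arr[55]
    results += [arr[70], arr[69], arr[68]]   # 65-67 <- arr[70..68]
    return results

dummy = list(range(72))            # placeholder indices instead of points
mapped = compatible_to_dlib(dummy)
print(len(mapped))                 # 68
```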
But it didn't work: the project I need this for is `deepfakes`, and when I tried the conversion step afterwards, it reported a misalignment:
`shapes (2,51) and (55,2) not aligned: 51 (dim 1) != 55 (dim 0)`. When I instead replace the landmarks with ones created by dlib, it works.
### So what I want to know is: how can I get the same aligned 68-point landmark results as dlib using only this repo (face_recognition)?
Thanks so much!!
| open | 2019-03-29T02:47:39Z | 2022-03-15T16:47:47Z | https://github.com/ageitgey/face_recognition/issues/789 | [] | zoeleesss | 1 |
aleju/imgaug | deep-learning | 607 | Fliplr and Flipud flips the same images when using random_state | When using Fliplr and Flipud with the same random_state they always flip the same images.
Here's some code to show it:
```python
import imgaug as ia
import imgaug.augmenters as iaa
import numpy as np
import matplotlib.pyplot as plt
def _get_light_image_augmenter(sq=None):
aug = iaa.Sequential([
iaa.Fliplr(0.5, random_state=sq),
iaa.Flipud(0.5, random_state=sq)
], random_order=True, random_state=sq)
return aug
img = ia.quokka()
aug_imgs = []
for i in range(20):
sq = np.random.SeedSequence()
img_aug = _get_light_image_augmenter(sq)
aug_img = img_aug.augment(image=img)
aug_imgs.append(aug_img)
plt.figure(figsize=(20, 10))
for i in range(20):
plt.subplot(1, 20, i+1)
plt.imshow(aug_imgs[i])
plt.show(block=True)
```
Is this intended? Would I need two different seed sequences for Fliplr and Flipud to flip different images?
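For what it's worth, NumPy's `SeedSequence` can spawn independent children from one parent, which is one way to give each augmenter its own stream while keeping a single top-level seed (a sketch with plain NumPy generators standing in for the augmenters):

```python
import numpy as np

parent = np.random.SeedSequence(12345)
sq_lr, sq_ud = parent.spawn(2)         # two independent child sequences

rng_lr = np.random.default_rng(sq_lr)  # stands in for Fliplr's stream
rng_ud = np.random.default_rng(sq_ud)  # stands in for Flipud's stream

# Per-image coin flips now come from independent streams, so Fliplr and
# Flipud would no longer always pick the same images.
flips_lr = rng_lr.random(20) < 0.5
flips_ud = rng_ud.random(20) < 0.5
print(flips_lr.tolist())
print(flips_ud.tolist())
```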
I'm using imgaug 0.3.0 and numpy 1.17.5 | open | 2020-02-10T19:45:35Z | 2020-02-11T18:18:46Z | https://github.com/aleju/imgaug/issues/607 | [] | gustavscholin | 3 |
milesmcc/shynet | django | 179 | "Allowed origins" should not default to '*' | I noticed when I copied and pasted a site snippet (including the site's UUID hardcoded) to another site that I was getting hits recorded from the new site. That was because I had not changed the default CORS header ("Allowed origins") in its config.
Could this be set to a more sensible (and secure) default, like the main site URL? I imagine that's 90+% of cases anyway. | open | 2021-11-30T23:33:10Z | 2022-02-05T00:42:38Z | https://github.com/milesmcc/shynet/issues/179 | [] | hughbris | 2 |
Yorko/mlcourse.ai | pandas | 721 | Incredibly large output in Feature selection | In the [topic 6](https://mlcourse.ai/book/topic06/topic6_feature_engineering_feature_selection.html) the output of the reverse_geocoder cell:
```python
import reverse_geocoder as revgc
revgc.search(list(zip(df.latitude, df.longitude)))
```
shows very huge useless output, like ~70% of the full page length. Can you cut it off? It's hard to scroll so much and I can't find any reason for it to be shown. | closed | 2022-09-13T17:54:51Z | 2022-09-13T23:00:50Z | https://github.com/Yorko/mlcourse.ai/issues/721 | [] | aulasau | 1 |
iperov/DeepFaceLab | machine-learning | 5,645 | AMD Driver Timeout / HRESULT failed with 0x887a0005 / Check failed: 0 <= new_num_elements (0 vs. -1) | ## Expected behavior
*Extract face set (dst or src) and train (Quick96 and SAEHD tested).*
## Actual behavior
*1. Extracting face set fails with `F tensorflow/core/common_runtime/dml/dml_upload_heap.cc:56] HRESULT failed with 0x887a0005: chunk->resource->Map(0, nullptr, &upload_heap_data)` or `F tensorflow/core/framework/tensor_shape.cc:332] Check failed: 0 <= new_num_elements (0 vs. -1)` instantly or after a few minutes, and then `Radeon (TM) RX 480 Graphics doesnt response, terminating it.`*
*2. Training fails with `F tensorflow/core/common_runtime/dml/dml_upload_heap.cc:56] HRESULT failed with 0x887a0005: chunk->resource->Map(0, nullptr, &upload_heap_data)` or `F tensorflow/core/framework/tensor_shape.cc:332] Check failed: 0 <= new_num_elements (0 vs. -1)` instantly or after a few minutes.*
## Steps to reproduce
*1. Download one of the DirectX 12 windows binaries.*
*2. Extract to whatever location.*
*3. Run `2) extract images from video data_src.bat`*
*4. Run `3) extract images from video data_dst FULL FPS.bat`*
*5. Run `4) data_src faceset extract.bat` or `5) data_dst faceset extract.bat`*
*6. Get `F tensorflow/core/common_runtime/dml/dml_upload_heap.cc:56] HRESULT failed with 0x887a0005: chunk->resource->Map(0, nullptr, &upload_heap_data)` or `F tensorflow/core/framework/tensor_shape.cc:332] Check failed: 0 <= new_num_elements (0 vs. -1)` instantly or after a few minutes, and then `Radeon (TM) RX 480 Graphics doesnt response, terminating it.`*
*7. Repeat step 5 multiple times to painfully get all the images to extract.*
*8. Run `6) train Quick96.bat` or `6) train SAEHD.bat`*
*9. Get `F tensorflow/core/common_runtime/dml/dml_upload_heap.cc:56] HRESULT failed with 0x887a0005: chunk->resource->Map(0, nullptr, &upload_heap_data)` or `F tensorflow/core/framework/tensor_shape.cc:332] Check failed: 0 <= new_num_elements (0 vs. -1)` instantly, or after a few minutes of training.*
## Other relevant information
- **Operating system and version:** Windows 10 Pro. Latest, 22H2, 19045.2728
- **Graphics driver version:** Latest, 23.3.1
- **CPU:** Intel i5-10400f
- **GPU:** AMD RX 480 8GB VRAM
- **RAM:** 16GB RAM
- **Pagefile:** 100GB
- **Python version:** Using windows binary.
- **Builds tried:** `DeepFaceLab_DirectX12_build_11_20_2021` & `DeepFaceLab_DirectX12_build_05_04_2022` | closed | 2023-03-18T21:49:53Z | 2023-05-22T21:23:31Z | https://github.com/iperov/DeepFaceLab/issues/5645 | [] | xzuyn | 2 |
GibbsConsulting/django-plotly-dash | plotly | 401 | The package generates Django migrations in the path of the pip package and not in the application migrations folder | # Symptoms
When running the Django `makemigrations` command on a simple Django project that is just containing a simple Dash page derived from the examples available for `django-plotly-dash`, the migrations containing the database table for the internal communication of `django-plotly-dash` are generated in the `migrations` folder of the package and not in the `migrations` folder of the Django application.
# Problem
This means that the migrations are not deployed into a Docker container running the Django app. The `migrate` command in this container thus complains about incomplete migrations.
# Solutions?
Is there any way to prevent the `django-plotly-dash` app from creating the migration files in the package folder? Copying the migration files manually from `site-packages` on the development machine into `site-packages` on the container works, but it is not an acceptable solution. The migrations need to be part of the codebase.
# Details
This is the output of the `makemigrations` command on the Django project:
```bash
manage.py@django_dash_demo > makemigrations
bash -cl "/Users/user/opt/anaconda3/envs/django_dash_demo/bin/python /Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/django_manage.py makemigrations /Users/user/Documents/code/django_dash_demo"
Tracking file by folder pattern: migrations
Migrations for 'django_plotly_dash':
/Users/user/opt/anaconda3/envs/django_dash_demo/lib/python3.10/site-packages/django_plotly_dash/migrations/0003_auto_20220514_1220.py
- Alter field id on dashapp
- Alter field id on statelessapp
```
Note that the migrations are placed in the `site-packages` folder.
The migrations performed on the Django container will complain about being incomplete:
```bash
migrations_1 | Operations to perform:
migrations_1 | Apply all migrations: admin, auth, contenttypes, django_plotly_dash, sessions
migrations_1 | Running migrations:
migrations_1 | No migrations to apply.
migrations_1 | Your models in app(s): 'django_plotly_dash' have changes that are not yet reflected in a migration, and so won't be applied.
migrations_1 | Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them.
```
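One workaround worth trying (a sketch, not an official fix; `my_project.dpd_migrations` is a hypothetical package name, create it with an empty `__init__.py`) is Django's `MIGRATION_MODULES` setting, which redirects where `makemigrations` writes migrations for a given app label:

```python
# settings.py
# Redirect django_plotly_dash migrations into the project codebase so they
# are version-controlled and deployed with the app instead of landing in
# site-packages.
MIGRATION_MODULES = {
    "django_plotly_dash": "my_project.dpd_migrations",
}
```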
## `diagram_app.py`
```python
import dash_core_components as dcc
import dash_html_components as html
import pandas as pd
import plotly.express as px
from django_plotly_dash import DjangoDash
app = DjangoDash('PlotlyApp')
# assume you have a "long-form" data frame
# see https://plotly.com/python/px-arguments/ for more options
df = pd.DataFrame({
"Fruit": ["Apples", "Oranges", "Bananas", "Apples", "Oranges", "Bananas"],
"Amount": [4, 1, 2, 2, 4, 5],
"City": ["SF", "SF", "SF", "Montreal", "Montreal", "Montreal"]
})
fig = px.bar(df, x="Fruit", y="Amount", color="City", barmode="group")
app.layout = html.Div(children=[
html.H1(children='Hello Dash'),
html.Div(children='''
Dash: A web application framework for your data.
'''),
dcc.Graph(
id='example-graph',
figure=fig
)
])
```
## `views.py`:
```python
# noinspection PyUnresolvedReferences
from django.shortcuts import render
# noinspection PyUnresolvedReferences
from . import diagram_app
def main(request):
return render(request, 'diagram/index.html')
```
## `index.html`
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
{% load plotly_dash %}
<title>Title</title>
{% plotly_header %}
</head>
<body>
<h1>Diagram</h1>
{% load plotly_dash %}
{% plotly_direct name="PlotlyApp" %}
{% plotly_footer %}
</body>
</html>
```
## `settings.py`
```python
from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/3.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'django-insecure-l(x8ue+or!t8ba+_c2y7z6)3kkg))hf@lrnqi%xhwz(81x9=jw'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django_plotly_dash.apps.DjangoPlotlyDashConfig',
'diagram'
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django_plotly_dash.middleware.BaseMiddleware'
]
ROOT_URLCONF = 'django_dash_demo.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [BASE_DIR / 'templates']
,
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'django_dash_demo.wsgi.application'
# Database
# https://docs.djangoproject.com/en/3.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': 'db.sqlite3',
}
}
# Password validation
# https://docs.djangoproject.com/en/3.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/3.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/3.2/howto/static-files/
STATIC_URL = '/static/'
# Default primary key field type
# https://docs.djangoproject.com/en/3.2/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
X_FRAME_OPTIONS = 'SAMEORIGIN'
```
If necessary, I can provide the complete project. However, even this most simple demo project is having a lot of files because it is aggregating different technologies. | closed | 2022-05-14T12:43:36Z | 2023-06-04T15:27:16Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/401 | [] | erl987 | 2 |
pydantic/pydantic | pydantic | 10,960 | Problem generating OpenAPI schema for models with an enum as the key of a dict | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
The code below used to work on 2.9.2, but it crashes on 2.10 with a message:
`RuntimeError: Cannot update undefined schema for $ref=#/components/schemas/__main____PrimaryColor-Input__1`.
The example uses FastAPI because it's the most concise way I found to reproduce it, and maybe FastAPI is doing something wrong, but it broke for me when upgrading pydantic.
### Example Code
```Python
from enum import StrEnum
from typing import Annotated
from fastapi import FastAPI
from pydantic import BaseModel, Field
class PrimaryColor(StrEnum):
RED = 'red'
GREEN = 'green'
BLUE = 'blue'
class Color(BaseModel):
primary_color_values: dict[PrimaryColor, Annotated[int, Field(ge=0, le=255)]]
white = Color(primary_color_values={PrimaryColor.RED: 255, PrimaryColor.GREEN: 255, PrimaryColor.BLUE: 255})
app = FastAPI()
@app.post("/color")
def color(color: Color) -> None:
pass
print(app.openapi())
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.1
pydantic-core version: 2.27.1
pydantic-core build: profile=release pgo=false
install path: .../venv/lib/python3.11/site-packages/pydantic
python version: 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 11.4.0]
platform: Linux-6.8.0-48-generic-x86_64-with-glibc2.35
related packages: fastapi-0.115.4 pydantic-settings-2.6.1 mypy-1.10.1 pydantic-extra-types-2.10.0 typing_extensions-4.12.2
commit: unknown
```
| closed | 2024-11-24T14:43:56Z | 2024-12-02T15:41:24Z | https://github.com/pydantic/pydantic/issues/10960 | [
"bug V2"
] | brianmedigate | 4 |
litestar-org/litestar | api | 3,477 | Bug: Multi-body response incompatible with LoggingMiddleware | ### Description
When using `ServerSentEvent` responses with the `StructlogPlugin`, the application raises an error.
Preliminary research led me to https://github.com/litestar-org/litestar/blob/main/litestar/middleware/logging.py#L180, where `scope.state._ls_connection_state.log_context` does not have any values during the second push of a `ServerSentEventMessage`.
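A minimal, dependency-free illustration of the failure mode (the key name is taken from the traceback below; this is not litestar's code): the first streamed message pops `http.response.start` out of the context, so a plain `pop(key)` on the second message raises `KeyError`, while `pop(key, None)` would not:

```python
log_context = {"http.response.start": {"status": 200}}

first = log_context.pop("http.response.start", None)   # first SSE message
second = log_context.pop("http.response.start", None)  # second SSE message

assert first == {"status": 200}
assert second is None  # a bare pop() here would raise KeyError instead
```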
### URL to code causing the issue
_No response_
### MCVE
```python
from asyncio import sleep
from collections.abc import AsyncGenerator
from litestar import Litestar, get
from litestar.plugins.structlog import StructlogPlugin
from litestar.response import ServerSentEvent, ServerSentEventMessage
from litestar.types import SSEData
async def my_generator() -> AsyncGenerator[SSEData, None]:
count = 0
while count < 10:
await sleep(0.01)
count += 1
yield ServerSentEventMessage(event="something-with-comment", retry=1000, comment="some comment")
@get(path="/count", sync_to_thread=False)
def sse_handler() -> ServerSentEvent:
return ServerSentEvent(my_generator())
app = Litestar(route_handlers=[sse_handler],
plugins=[StructlogPlugin()])
```
### Steps to reproduce
1. Run the server above
```bash
litestar --app main:app run
```
2. Send a request
```bash
curl localhost:8000/count
```
3. Observe error
### Screenshots
_No response_
### Logs
```bash
File "/.venv/lib/python3.11/site-packages/litestar/middleware/logging.py", line 194, in extract_response_data
connection_state.log_context.pop(HTTP_RESPONSE_START),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'http.response.start'
```
### Litestar Version
2.8.3
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-05-07T23:32:24Z | 2025-03-20T15:54:42Z | https://github.com/litestar-org/litestar/issues/3477 | [
"Bug :bug:"
] | keongalvin | 3 |
deepspeedai/DeepSpeed | pytorch | 6,795 | Does ZeRO++ Work on AMD GPU Mi200? | When I enable ZeRO++ on MI200, I hit the exception below:
```
[rank0]: File "/opt/conda/envs/ptca/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 179, in __init__
[rank0]: self.parameter_offload = self.initialize_ds_offload(
[rank0]: File "/opt/conda/envs/ptca/lib/python3.9/site-packages/deepspeed/runtime/zero/stage3.py", line 427, in initialize_ds_offload
[rank0]: return DeepSpeedZeRoOffload(module=module,
[rank0]: File "/opt/conda/envs/ptca/lib/python3.9/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 122, in __init__
[rank0]: self._convert_to_zero_parameters(ds_config, module, mpu)
[rank0]: File "/opt/conda/envs/ptca/lib/python3.9/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 197, in _convert_to_zero_parameters
[rank0]: Init(module=module,
[rank0]: File "/opt/conda/envs/ptca/lib/python3.9/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 990, in __init__
[rank0]: self.quantizer_module = CUDAQuantizer()
[rank0]: File "/opt/conda/envs/ptca/lib/python3.9/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 761, in __init__
[rank0]: CUDAQuantizer.quantizer_cuda_module = deepspeed.ops.op_builder.QuantizerBuilder().load()
[rank0]: File "/opt/conda/envs/ptca/lib/python3.9/site-packages/deepspeed/ops/op_builder/builder.py", line 508, in load
[rank0]: return self.jit_load(verbose)
[rank0]: File "/opt/conda/envs/ptca/lib/python3.9/site-packages/deepspeed/ops/op_builder/builder.py", line 555, in jit_load
[rank0]: op_module = load(name=self.name,
[rank0]: File "/opt/conda/envs/ptca/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1309, in load
[rank0]: return _jit_compile(
[rank0]: File "/opt/conda/envs/ptca/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1745, in _jit_compile
[rank0]: return _import_module_from_library(name, build_directory, is_python_module)
[rank0]: File "/opt/conda/envs/ptca/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 2143, in _import_module_from_library
[rank0]: module = importlib.util.module_from_spec(spec)
[rank0]: File "<frozen importlib._bootstrap>", line 565, in module_from_spec
[rank0]: File "<frozen importlib._bootstrap_external>", line 1173, in create_module
[rank0]: File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
[rank0]: ImportError: /home/aiscuser/.cache/torch_extensions/py39_cpu/quantizer/quantizer.so: cannot open shared object file: No such file or directory
```
ds config:
```yaml
ds_config:
  zero_optimization:
    stage: 3
    offload_param:
      device: none
      pin_memory: false
    offload_optimizer:
      device: cpu
      pin_memory: false
    overlap_comm: false
    reduce_scatter: true
    reduce_bucket_size: auto
    contiguous_gradients: true
    stage3_gather_16bit_weights_on_model_save: true
    stage3_max_live_parameters: 5000000000.0
    stage3_max_reuse_distance: 1000000000.0
    stage3_prefetch_bucket_size: auto
    stage3_param_persistence_threshold: auto
    zero_hpz_partition_size: 16
    zero_quantized_weights: true
    zero_quantized_gradients: true
  activation_checkpointing:
    partition_activations: true
    cpu_checkpointing: false
  zero_allow_untested_optimizer: true
```
ds_report
```
WARNING:root:Cannot import JIT optimized kernels. CUDA extension will be disabled.
 [WARNING]  Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
 [WARNING]  sparse_attn is not compatible with ROCM
--------------------------------------------------
DeepSpeed C++/CUDA extension op report
--------------------------------------------------
NOTE: Ops not installed will be just-in-time (JIT) compiled at
runtime if needed. Op compatibility means that your system
meet the required dependencies to JIT install the op.
--------------------------------------------------
JIT compiled ops requires ninja
ninja .................. [OKAY]
--------------------------------------------------
op name ................ installed .. compatible
--------------------------------------------------
async_io ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
 [WARNING]  Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
fp_quantizer ........... [NO] ....... [OKAY]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
 [WARNING]  sparse_attn is not compatible with ROCM
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
--------------------------------------------------
DeepSpeed general environment info:
torch install path ............... ['/opt/conda/envs/ptca/lib/python3.9/site-packages/torch']
torch version .................... 2.3.0+rocm6.0
deepspeed install path ........... ['/opt/conda/envs/ptca/lib/python3.9/site-packages/deepspeed']
deepspeed info ................... 0.14.4, unknown, unknown
torch cuda version ............... None
torch hip version ................ 6.0.32830-d62f6a171
nvcc version ..................... None
deepspeed wheel compiled w. ...... torch 2.3, hip 6.0
shared memory (/dev/shm) size .... 797.00 GB
[2024-11-26 05:15:05,200] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
WARNING:root:Cannot import JIT optimized kernels. CUDA extension will be disabled.
 [WARNING]  Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
 [WARNING]  sparse_attn is not compatible with ROCM
``` | closed | 2024-11-27T04:04:53Z | 2024-12-06T02:30:27Z | https://github.com/deepspeedai/DeepSpeed/issues/6795 | [] | unavailableun | 4 |
Gozargah/Marzban | api | 1,205 | ارور EOF بعد از ساخت اکانت با API روی بعضی از اینباندها | این مشکل رو با استفاده از صفحه OpenAPI مرزبان روی نسخه 0.6.0 تست کردیم و مشخص شد باگ هست. تو نسخه 0.5 یا 0.6 به وجود اومده. اگر یه ریکوئست POST به این شکل بزنم برای ایجاد یوزر جدید:
```json
{
  "username": "user1234",
  "proxies": { "vmess": {}, "vless": {} },
  "inbounds": { "vmess": [ "VMess Websocket" ], "vless": [ "Legacy", "de+usd" ] },
  "expire": 0,
  "data_limit": 0,
  "status": "active",
  "note": "Test1234"
}
```
then all the VLESS configs return EOF, but the VMess ones connect fine.
But if I send the POST request to create the user like this, so that all inbounds are selected:
```json
{
  "username": "user1234",
  "proxies": { "vmess": {}, "vless": {} },
  "inbounds": { "vmess": [ ], "vless": [ ] },
  "expire": 0,
  "data_limit": 0,
  "status": "active",
  "note": "Test1234"
}
```
This time exactly the same configs appear in the subscription, but they all work and respond to ping. | closed | 2024-07-31T10:02:31Z | 2024-08-03T11:57:31Z | https://github.com/Gozargah/Marzban/issues/1205 | [
"Bug"
] | automa-gen | 3 |
hatchet-dev/hatchet | fastapi | 1,115 | Hatchet Lite only generating localhost tokens | I'm trying to use hatchet lite as a simpler deployment for our small scale project, but I need a token that has a non-localhost host. Despite setting the `SERVER_GRPC_BROADCAST_ADDRESS` env variable in the docker compose for the hatchet-lite instance, tokens always seem to point to localhost.
I did this successfully using the go quickstart compose file and was able to connect using the non-localhost url that I set in that env var, but we're switching to hatchet lite in hopes of simpler deployment and this is a problem for us.
I'm hoping you can point me in the right direction. With the docker images the way they are, they're a black box to me so I'm not even sure where to look to try and see what I might be doing wrong or if there's a bug. Any help is appreciated.
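As a debugging aside (my sketch, not part of the original report): Hatchet client tokens are JWTs, so you can decode the payload locally to see which address was baked in at token-generation time. The claim name `grpc_broadcast_address` is my assumption and may differ by version — print the full decoded payload to be sure. A minimal stdlib sketch:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Synthetic token for demonstration; paste a real HATCHET_CLIENT_TOKEN instead.
demo_payload = base64.urlsafe_b64encode(
    json.dumps({"grpc_broadcast_address": "hatchet-engine:7077"}).encode()
).rstrip(b"=").decode()
demo_token = "eyJhbGciOiJub25lIn0." + demo_payload + ".sig"

print(jwt_claims(demo_token).get("grpc_broadcast_address"))  # hatchet-engine:7077
```

If a freshly generated token still decodes to a `localhost` address, the env var is not reaching token generation; if it decodes to `hatchet-engine:7077`, the problem is on the connection side instead.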
In context, I'm trying to add the worker to the compose file so it's simpler to run, though I imagine this will also be a problem when we've published. This is the compose file I'm using:
```yaml
version: "3.8"
name: "hatchet-lite-dev"
services:
postgres:
image: postgres:15.6
command: postgres -c 'max_connections=200'
restart: always
environment:
- POSTGRES_USER=hatchet
- POSTGRES_PASSWORD=hatchet
- POSTGRES_DB=hatchet
volumes:
- hatchet_lite_postgres_data:/var/lib/postgresql/data
healthcheck:
test: [ "CMD-SHELL", "pg_isready -d hatchet -U hatchet" ]
interval: 10s
timeout: 10s
retries: 5
start_period: 10s
hatchet-lite:
image: ghcr.io/hatchet-dev/hatchet/hatchet-lite:latest
# Set this to have a set hostname for other docker containers
hostname: hatchet-engine
ports:
- "8888:8888"
- "7077:7077"
networks:
# hatchet-bridge is a bridge network to connect disparate docker containers,
# anything on this network should be able to connect to hatchet-engine hostname
- hatchet-bridge
- default
depends_on:
postgres:
condition: service_healthy
environment:
RABBITMQ_DEFAULT_USER: "user"
RABBITMQ_DEFAULT_PASS: "password"
DATABASE_URL: "postgresql://hatchet:hatchet@postgres:5432/hatchet?sslmode=disable"
DATABASE_POSTGRES_PORT: "5432"
DATABASE_POSTGRES_HOST: "postgres"
SERVER_TASKQUEUE_RABBITMQ_URL: amqp://user:password@localhost:5672/
SERVER_AUTH_COOKIE_DOMAIN: localhost
SERVER_AUTH_COOKIE_INSECURE: "t"
SERVER_GRPC_BIND_ADDRESS: "0.0.0.0"
SERVER_GRPC_INSECURE: "t"
# Changed this to try and generate tokens that other docker containers
# can use to connect to this instance, but always seems to use localhost
SERVER_GRPC_BROADCAST_ADDRESS: hatchet-engine:7077
SERVER_GRPC_PORT: "7077"
SERVER_URL: http://localhost:8888
SERVER_AUTH_SET_EMAIL_VERIFIED: "t"
SERVER_LOGGER_LEVEL: warn
SERVER_LOGGER_FORMAT: console
DATABASE_LOGGER_LEVEL: warn
DATABASE_LOGGER_FORMAT: console
volumes:
- "hatchet_lite_rabbitmq_data:/var/lib/rabbitmq/mnesia"
- "hatchet_lite_config:/config"
healthcheck:
test: [ "CMD-SHELL", "curl -f http://localhost:8888/_health" ]
timeout: 10s
retries: 5
start_period: 10s
worker:
build:
context: .
dockerfile: worker-dev.Dockerfile
command: "go run ./cmd/worker"
networks:
- hatchet-bridge
- default
depends_on:
hatchet-lite:
condition: service_healthy
develop:
watch:
- action: sync+restart
path: ./**.go
target: /app
ignore:
- ./server
- action: rebuild
path: go.mod
volumes:
hatchet_lite_postgres_data:
hatchet_lite_rabbitmq_data:
hatchet_lite_config:
networks:
hatchet-bridge:
name: hatchet-bridge
driver: bridge
``` | closed | 2024-12-11T17:20:45Z | 2024-12-11T19:02:09Z | https://github.com/hatchet-dev/hatchet/issues/1115 | [] | obermillerk | 3 |
nonebot/nonebot2 | fastapi | 2,602 | Plugin: a2s查询 | ### PyPI project name
nonebot-plugin-a2s-query
### Plugin import package name
nonebot_plugin_a2s_query
### Tags
[{"label":"game server","color":"#ea5252"},{"label":"value","color":"#99ea52"}]
### Plugin configuration options
_No response_ | closed | 2024-03-10T14:02:44Z | 2024-06-23T14:09:59Z | https://github.com/nonebot/nonebot2/issues/2602 | [
"Plugin"
] | NanakaNeko | 3 |
littlecodersh/ItChat | api | 722 | Does the bot have an HTTP server that can accept messages sent by third-party programs? | Before submitting, please make sure you have checked the following!
- [x] You can log in to your WeChat account in a browser, but cannot log in with `itchat`
- [x] I have read and followed the instructions in the [documentation][document]
- [x] Your issue has not already been reported in [issues][issues]; otherwise, please report it under the existing issue
- [ ] This issue is indeed about `itchat`, not another project.
- [x] If your issue concerns stability, consider trying the [itchatmp][itchatmp] project, which places extremely low demands on network stability
Please run with `itchat.run(debug=True)` and paste the output below:
```
[paste the complete log here]
```
Your itchat version is: `[fill in the version number here]`. (It can be obtained via `python -c "import itchat;print(itchat.__version__)"`)
Any other content or a more detailed description of the issue can be added below:
> [your content]
[document]: http://itchat.readthedocs.io/zh/latest/
[issues]: https://github.com/littlecodersh/itchat/issues
[itchatmp]: https://github.com/littlecodersh/itchatmp
| closed | 2018-08-29T03:20:26Z | 2019-11-22T03:46:58Z | https://github.com/littlecodersh/ItChat/issues/722 | [] | leaout | 0 |
babysor/MockingBird | pytorch | 973 | Poor results when synthesizing with custom audio | **Summary [brief description of the issue (one sentence)]**
A clear and concise description of what the issue is.
The model runs fine, but when I test with audio I provided myself, the synthesized result is far worse than with the example audio. I'd like to ask whether the input audio undergoes any particular processing.
**Env & To Reproduce [environment and reproduction]**
Describe the environment, code version, and model you used
**Screenshots [if any]**
If applicable, add screenshots to help explain the issue.
| open | 2023-11-24T09:58:32Z | 2024-01-07T09:31:48Z | https://github.com/babysor/MockingBird/issues/973 | [] | zhou-zhi-hong | 1 |
gradio-app/gradio | deep-learning | 10,436 | Theme is only used on first tab | ### Describe the bug
Monochrome is set for the interface, but it only gets applied to the first tab. Tabs 2 and 3 still seem to be using the default theme.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks(title='Admin UI', theme=gr.themes.Monochrome) as app:
with gr.Tabs():
with gr.Tab("Tab 1"):
button1 = gr.Button("Button")
with gr.Tab("Tab 2"):
button2 = gr.Button("Button")
with gr.Tab("Tab 3"):
button3 = gr.Button("Button")
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.13.1
gradio_client version: 1.6.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.7
ffmpy: 0.5.0
gradio-client==1.6.0 is not installed.
httpx: 0.28.1
huggingface-hub: 0.27.1
jinja2: 3.1.5
markupsafe: 3.0.2
numpy: 2.2.1
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.5
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.3
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.45.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.28.1
huggingface-hub: 0.27.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
I can work around it | closed | 2025-01-24T22:41:03Z | 2025-01-27T19:15:26Z | https://github.com/gradio-app/gradio/issues/10436 | [
"bug",
"needs repro"
] | GregSommerville | 3 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,170 | Missing or murmuring words after synthesizing | Hi there,
I tried it more than a year ago and I remember it was working fine with "demo_cli". At the time, I was using macOS Big Sur.
Now, I tried again with macOS Monterey. Some words are missing or just murmured and not clear at all. I tried wav, mp3, and m4a, and it's reproducible.
Does anyone know what may be going wrong? Or any suggestions on what should be adjusted?
Thanks! | open | 2023-03-05T21:35:08Z | 2023-03-05T21:35:08Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1170 | [] | alann07 | 0 |
mljar/mercury | data-visualization | 386 | call to websocket keeps pending in mercury development server | Hi,
I execute `mercury run` from a folder containing some notebook files. Mercury start without any error. It opens the browser listing the notebooks from the folder. I select a notebook and it opens. So far so good.
But now the right side stays grayed out and shows 3 dots indicating that it is loading. With the network tab open, I can see that the call to the webserver backend keeps pending and never returns. The hanging call originated from Provider.tsx:205, making a request to ws://127.0.0.1:8000/ws/client/1/b67c9541-15....
Firewall is disabled.
Any ideas as to why this happens and how to fix or further debug it?
Thanks,
Robert
| closed | 2023-10-27T13:18:43Z | 2023-10-28T15:02:14Z | https://github.com/mljar/mercury/issues/386 | [] | robert-elles | 2 |
pydata/pandas-datareader | pandas | 940 | can not fetch data before 1970 on yahoo finance | Hi!
I run into an error when using pandas-datareader. I think it's probably related to the unix time stamp.
my code:

```python
from pandas_datareader import data as web
import datetime
import pandas as pd
import matplotlib.pyplot as plt

# S&P 500 index
stock = '%5EGSPC'
endDate = datetime.datetime(2022, 8, 30)
start_date = datetime.datetime(1928, 12, 29)
df = web.DataReader(stock, data_source='yahoo', start=start_date, end=endDate)
```
and I get this error:

```
envs\stock_predict\lib\site-packages\pandas_datareader\yahoo\daily.py in _get_params(self, symbol)
    124         # This needed because yahoo returns data shifted by 4 hours ago.
    125         four_hours_in_seconds = 14400
--> 126         unix_start = int(time.mktime(self.start.timetuple()))
    127         unix_start += four_hours_in_seconds
    128         day_end = self.end.replace(hour=23, minute=59, second=59)

OverflowError: mktime argument out of range
```
I tried a start date after 1970 and it works, so it seems to be due to the UNIX timestamp conversion.
Could you check it? Thanks!
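For illustration (my sketch, not from the report): `time.mktime` delegates to the platform's C library, which on some platforms (notably Windows) cannot represent dates before the 1970 epoch — hence the overflow in `_get_params`. A portable way to get the (negative) epoch value is pure-Python arithmetic:

```python
import calendar
from datetime import datetime, timezone

start = datetime(1928, 12, 29)

# calendar.timegm does the conversion in pure Python, treating the
# struct_time as UTC, so pre-1970 dates simply yield negative seconds.
unix_start = calendar.timegm(start.timetuple())
print(unix_start < 0)  # True

# Equivalent via timezone-aware datetime arithmetic:
assert unix_start == int(start.replace(tzinfo=timezone.utc).timestamp())
```

(A fix in the library itself would still need to handle the local-time shift that `mktime` applies, so this is only a sketch of the arithmetic, not a patch.)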
| open | 2022-09-01T19:40:29Z | 2023-01-10T15:21:21Z | https://github.com/pydata/pandas-datareader/issues/940 | [] | houpuli | 1 |
pytest-dev/pytest-cov | pytest | 677 | reporting coverage for a module imported with importlib | # Summary
I am using pytest [with "importlib" as import mode](https://github.com/martibosch/pylandstats/blob/main/pyproject.toml#L78) because I am building a cython extension and otherwise pytest imports from the folder named "pylandstats" (without the built cython extension) rather than the installed package (also named "pylandstats", with the built cython extension).
How can I get coverage to be reported within this setup? Of course an alternative would be to move my module to `src/pylandstats` and not use the importlib import mode, but can this be solved otherwise?
This may be related to https://github.com/nedbat/coveragepy/issues/1002
Thank you! | closed | 2025-02-13T16:09:03Z | 2025-02-27T09:29:44Z | https://github.com/pytest-dev/pytest-cov/issues/677 | [] | martibosch | 2 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 832 | Report on Single Voice Training Results | Hello @blue-fish and all,
I am running the demo_toolbox on Win10, under Anaconda3 (run as administrator), env: VoiceClone, using an NVidia GEForce RTS2070 Super on an EVGA 08G-P4-3172-KR card, 8GB GDDR6, using python 3.7, pytorch Win10/CUDA version 11.1, with all other requirements met. The toolbox GUI (demo_toolbox.py) works fine on this setup.
My project is to use the toolbox to clone 15 voices from a computer simulation (to be able to add additional voice material (.wav files) in those voices back into the sim), one voice at a time, using the Single Voice method described in Issue #437 [https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/437#issue-663639627](url)
**Method:** For proof of principle, I built a custom single voice dataset for one of those voices, using:
1. The instructions in Issue #437 ,
2. the README.TXT file from the [zip file](https://www.dropbox.com/s/bf4ti3i1iczolq5/logs-singlespeaker.zip?dl=0) provided by @blue-fish in #437, and
3. this direction for developing the folder structure in LibriTTS format: [Formatting from #437 ](https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/437#issuecomment-666099538)
This dataset (which I called V12F, a female voice) consists of 329 utterances (1.wav through 329.wav, 1.txt through 329.txt, a total of about 45 minutes worth of speech in 3-12 second increments) arranged using the LibriTTS method and schema, into folder ...dataset_root\LibriTTS\train-clean-100\speaker_012\book-001\
**Preprocessing Audio and Embeds:** I was able to successfully preprocess the single-voice data using synthesizer_preprocess_audio.py and synthesizer_preprocess_embeds.py, which produced proper audio, mels, embed, and training.txt data which properly went to "...datasets_root\SV2TTS\synthesizer" in folders "audio", "embeds", "mels" and file "train.txt", as it should.
**First Synthesizer Training attempt (V12F):** I then conducted synthesizer training (after correcting several issues (see below) for 20,000 steps, from scratch (took about 12 hours). Based upon the information saved during the training, I could tell that the output predicted voice wav and mels were a good representation of the speaker. Loss started ~4.9, and was at 0.125 when the 20000 steps were completed. I then ran it in the toolbox; the result was garbled output.
**Issues encountered and Solutions:**
1. The README instructed use of --summary_interval 125 --checkpoint_interval 100 as arguments for synthesizer_train.py.
pytorch would not accept --summary_interval 125 --checkpoint_interval 100 arguments. The only valid arguments (aside from the required run_id, syn_id and optional model_id and Force_Restart) allowed were --save_every xx (or -s SAVE_EVERY) --backup_every XXX (or -b BACKUP_EVERY) or --hparams. So, I used --save_every 100 as my argument
2. Win10 "pickle" issue: synthesizer_train.py failed with "AttributeError: Can't pickle local object 'train.<locals>.<lambda>'", which I corrected as outlined in issue #669 , and code implemented from here: blue-fish@89a9964
This fix worked perfectly for me.
**2nd Synthesizer Training on V12F:** I conducted synthesizer training again, this time on top of "pretrained.pt" (already pretrained using LibriSpeech to 295K steps) for 2000 steps. Using LibriSpeech pretrained.pt, loss started at 0.4974 (295K steps) and rapidly went below 0.3200 (295,500 steps). I trained 2000 single-voice steps (which took just a bit over an hour). Total steps 297,000. Average steps per second was ~0.45 steps/sec. using batch size 12, r=2. Final loss was 0.2776. Synthesizer output mels, attention files and .wavs looked and sounded good, based on the samples saved during the training.
**Toolbox Testing on V12F:** The single voice pretrained synthesizer worked very well in the toolbox after playing ~20 random utterances from the V12F dataset in the toolbox for embeds, using "Enhance vocoder" and the Griffin-Lim vocoder output. I was quite pleased with the results. Audio samples are available here:
[http://danforthhouse.com/files/V12F_Voice Samples.zip](http://danforthhouse.com/files/V12F_VoiceSamples.zip)
**Synthesizer Training on V13M:** I then ran a second single voice dataset (V13M), a male voice. For training V13M, I put the original LibriSpeech 295K step pretrained synthesizer "pretrained.pt" file into a folder in ...\synthesizer\saved_models\V13M_LS_pretrained, renamed the file "V13M_LS_pretrained.pt", and referenced it in this way in the command:
`python synthesizer_train.py V13M_LS_pretrained datasets_root/SV2TTS/synthesizer --save_every 100`
and trained it for ~2000 steps on top of the pretrained.pt LibriSpeech 295K synthesizer. V13M training ran a bit faster (avg 0.55 steps per second or ~1.85 sec per step, 1944 steps per hour), likely because the total number of samples was less than V12F (of 329 total samples, only 325 were used in V13M because he (V13M) talks faster than she (Voice 12F) does, so it looks like 4 were lost due to being too short in duration for the hparams to pass them. V12F used 328 of 329 (I think one utterance was too long for the params). Both V12F and V13M use the same (or nearly all the same) text phrases.
Loss on V13M started at 0.4909 and rapidly fell to 0.3542 by the end of the 10th Epoch.
At 1000 steps, loss = 0.3075. At 2000 steps, loss = 0.2940 and it took just a little over an hour to complete using the GPU, batch size 12, r=2.
**Toolbox Testing on V13M:** V13M was not as well vocoded as V12F. His voice sounded more like he had a bad sore throat. His (slight) southern (US) accent did not come through clearly. It was recognizable, but not nearly as good a quality as V12F. Voice samples are available here:
[http://danforthhouse.com/files/V13M_Voice Samples.zip](http://danforthhouse.com/files/V13M_VoiceSamples.zip)
**Vocoder training:** I also attempted vocoder preprocessing of the single voice pretrained synthesizer, and ran into several issues, which I will open in a new Issues thread. I could not get the vocoder_preprocess.py to work properly.
Overall, after learning how to properly do single-voice training, I was pleased with the output. It can be a lot better, but it should be fine for the purposes of my project. Any recommendations on improving the voice quality (especially of V13M) would be appreciated.
Regards,
Tomcattwo | closed | 2021-08-30T01:57:55Z | 2021-09-01T02:44:37Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/832 | [] | Tomcattwo | 3 |
AutoGPTQ/AutoGPTQ | nlp | 742 | [BUG] I am quantizing qwen2-7b with auto-gptq | **Describe the bug**
I've found a couple of bugs, and I also have a few questions.
Bug:
1. qwen2.5-7b-instruct has a very high avg loss for mlp.up_proj and mlp.gate_proj on the wikitext2 dataset. In a 28-layer network, it's around 200 for the first 10 layers. In the later layers, the loss gets close to 2000. Have you encountered this case? Also, roughly what avg loss is normal?
2. With qwen2.5-7b-instruct at num_sample=128, the quantized model's output contains a great many "!" characters interspersed throughout the sentence.

Question:
1. What is the appropriate setting for damp? There is no relevant experiment in the paper.
Thanks. | open | 2024-10-31T10:04:43Z | 2024-11-12T10:51:42Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/742 | [
"bug"
] | Ijustakid | 4 |
huggingface/datasets | nlp | 6,447 | Support one dataset loader per config when using YAML | ### Feature request
See https://huggingface.co/datasets/datasets-examples/doc-unsupported-1
I would like to use CSV loader for the "csv" config, JSONL loader for the "jsonl" config, etc.
### Motivation
It would be more flexible for the users
### Your contribution
No specific contribution | open | 2023-11-23T13:03:07Z | 2023-11-23T13:03:07Z | https://github.com/huggingface/datasets/issues/6447 | [
"enhancement"
] | severo | 0 |
miguelgrinberg/Flask-Migrate | flask | 84 | from alembic.revision import ResolutionError ImportError: No module named revision | I created a new SQLite database, which works fine. Then I made some changes to it, followed by
```
python manage.py db upgrade
```
but if I try to run
```
python manage.py runserver
```
then I get
```
from alembic.revision import ResolutionError
ImportError: No module named revision
```
I probably need to set PYTHONPATH, but where do I add it and how do I define it? Even after deleting the migrations folder as well as sqlite.db and restarting PyCharm, I am still getting the error above.
| closed | 2015-09-17T09:22:27Z | 2019-01-13T22:21:38Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/84 | [
"question",
"auto-closed"
] | scheung38 | 7 |
fastapi/sqlmodel | pydantic | 147 | How to mark a model field as "non-persisted", i.e. to not be pushed to the DB? | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Optional
from pydantic.fields import PrivateAttr
from sqlmodel import Field, Session, SQLModel, create_engine, select
class Hero(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
_name: str = PrivateAttr() # This field is not committed to the db
secret_name: str
sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
engine = create_engine(sqlite_url, echo=True)
if __name__ == "__main__":
SQLModel.metadata.create_all(engine)
hero_1 = Hero(secret_name="Dive Wilson")
hero_1._name = "hello"
    print(hero_1)  # hero_1 has a _name attribute here (set to "hello")
with Session(engine) as session:
session.add(hero_1)
session.commit()
with Session(engine) as session:
statement = select(Hero)
results = session.exec(statement)
for hero in results:
            print(hero)  # heroes read from the db will not have _name set
```
### Description
It is unclear how to add fields to a model that should not be written to the DB. I have achieved this by prepending the attribute name with an underscore as shown in the code sample above. I decided to achieve it that way based on how Pydantic handles these private attributes (https://pydantic-docs.helpmanual.io/usage/models/#automatically-excluded-attributes).
If there is a better way to do this I'd love to know. Alternatively I'd love to update the documentation if this is the correct way. I took a look at the file structure a bit but wasn't entirely sure where the best place to add it was, so I decided to open an issue instead of a PR.
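For what it's worth, the exclusion behavior can be demonstrated with plain Pydantic, independent of any database (my sketch, not from the docs): private attributes never appear in the model's exported field data, which is what gets mapped to columns.

```python
from pydantic import BaseModel, PrivateAttr

class Hero(BaseModel):
    secret_name: str
    _name: str = PrivateAttr(default="")

h = Hero(secret_name="Dive Wilson")
h._name = "hello"

# Private attributes live on the instance but are excluded from the
# exported data, so nothing ever tries to persist them.
assert h._name == "hello"
assert "_name" not in h.dict()
```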
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
Python 3.9.5
### Additional Context
_No response_ | closed | 2021-10-26T07:23:02Z | 2021-10-28T19:21:25Z | https://github.com/fastapi/sqlmodel/issues/147 | [
"question"
] | JLHasson | 4 |
agronholm/anyio | asyncio | 353 | 3.3.0: pytest is failing | Just normal build, install and test cycle used on building package from non-root account:
- "setup.py build"
- "setup.py install --root </install/prefix>"
- "pytest with PYTHONPATH pointing to setearch and sitelib inside </install/prefix>
```console
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-anyio-3.3.0-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-anyio-3.3.0-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.11, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/tkloczko/rpmbuild/BUILD/anyio-3.3.0, configfile: pyproject.toml, testpaths: tests
plugins: anyio-3.3.0, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, aiohttp-0.3.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, flaky-3.7.0, benchmark-3.4.1, xdist-2.3.0, pylama-7.7.1, datadir-1.3.1, regressions-2.2.0, cases-3.6.3, hypothesis-6.14.4, Faker-8.10.3, xprocess-0.18.1, black-0.3.12, checkdocs-2.7.1
collected 1242 items
tests/test_compat.py ....................................................................................... [ 7%]
tests/test_debugging.py FF..................... [ 8%]
tests/test_eventloop.py ......... [ 9%]
tests/test_fileio.py .........................s...........s...........................................................sss........................................... [ 21%]
.................... [ 22%]
tests/test_from_thread.py ............................................................................ [ 28%]
tests/test_lowlevel.py ........................... [ 30%]
tests/test_pytest_plugin.py FFFF.. [ 31%]
tests/test_signals.py ......... [ 32%]
tests/test_sockets.py .............................................................................................................................................. [ 43%]
.................................................................................................................................................................... [ 56%]
..................... [ 58%]
tests/test_subprocesses.py .................. [ 59%]
tests/test_synchronization.py ................................................................................................... [ 67%]
tests/test_taskgroups.py ........................................................................................................................................... [ 79%]
.....................................s [ 82%]
tests/test_to_process.py ..................... [ 83%]
tests/test_to_thread.py ........................ [ 85%]
tests/streams/test_buffered.py ............ [ 86%]
tests/streams/test_file.py .............................. [ 89%]
tests/streams/test_memory.py ................................................................. [ 94%]
tests/streams/test_stapled.py .................. [ 95%]
tests/streams/test_text.py ............... [ 97%]
tests/streams/test_tls.py .................................... [100%]
================================================================================= FAILURES =================================================================================
_______________________________________________________________________ test_main_task_name[asyncio] _______________________________________________________________________
tests/test_debugging.py:37: in test_main_task_name
for loop in [obj for obj in gc.get_objects()
tests/test_debugging.py:38: in <listcomp>
if isinstance(obj, asyncio.AbstractEventLoop)]:
/usr/lib/python3.8/site-packages/itsdangerous/_json.py:24: in __getattribute__
warnings.warn(
E DeprecationWarning: Importing 'itsdangerous.json' is deprecated and will be removed in ItsDangerous 2.1. Use Python's 'json' module instead.
___________________________________________________________________ test_main_task_name[asyncio+uvloop] ____________________________________________________________________
tests/test_debugging.py:37: in test_main_task_name
for loop in [obj for obj in gc.get_objects()
tests/test_debugging.py:38: in <listcomp>
if isinstance(obj, asyncio.AbstractEventLoop)]:
/usr/lib/python3.8/site-packages/itsdangerous/_json.py:24: in __getattribute__
warnings.warn(
E DeprecationWarning: Importing 'itsdangerous.json' is deprecated and will be removed in ItsDangerous 2.1. Use Python's 'json' module instead.
_______________________________________________________________________________ test_plugin ________________________________________________________________________________
/home/tkloczko/rpmbuild/BUILD/anyio-3.3.0/tests/test_pytest_plugin.py:65: in test_plugin
result.assert_outcomes(passed=3 * len(get_all_backends()), skipped=len(get_all_backends()))
E AssertionError: assert {'errors': 6,...pped': 0, ...} == {'errors': 0,...pped': 2, ...}
E Omitting 3 identical items, use -vv to show
E Differing items:
E {'skipped': 0} != {'skipped': 2}
E {'errors': 6} != {'errors': 0}
E {'passed': 2} != {'passed': 6}
E Use -v to get the full diff
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.11, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/anyio-3.3.0/.hypothesis/examples')
rootdir: /tmp/pytest-of-tkloczko/pytest-17/test_plugin0
plugins: anyio-3.3.0, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, toolbox-0.5, aiohttp-0.3.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, flaky-3.7.0, benchmark-3.4.1, xdist-2.3.0, pylama-7.7.1, datadir-1.3.1, regressions-2.2.0, cases-3.6.3, hypothesis-6.14.4, Faker-8.10.3, xprocess-0.18.1, black-0.3.12, checkdocs-2.7.1
collecting ... collected 8 items
test_plugin.py::test_marked_test[asyncio] PASSED [ 12%]
test_plugin.py::test_marked_test[trio] PASSED [ 25%]
test_plugin.py::test_async_fixture_from_marked_test[asyncio] ERROR [ 37%]
test_plugin.py::test_async_fixture_from_marked_test[trio] ERROR [ 50%]
test_plugin.py::test_async_fixture_from_sync_test[asyncio] ERROR [ 62%]
test_plugin.py::test_async_fixture_from_sync_test[trio] ERROR [ 75%]
test_plugin.py::test_skip_inline[asyncio] ERROR [ 87%]
test_plugin.py::test_skip_inline[trio] ERROR [100%]
==================================== ERRORS ====================================
________ ERROR at setup of test_async_fixture_from_marked_test[asyncio] ________
args = (), kwargs = {}
request = <SubRequest 'async_fixture' for <Function test_async_fixture_from_marked_test[asyncio]>>
def wrapper(*args, **kwargs): # type: ignore
request = kwargs["request"]
if strip_request:
del kwargs["request"]
# if neither the fixture nor the test use the 'loop' fixture,
# 'getfixturevalue' will fail because the test is not parameterized
# (this can be removed someday if 'loop' is no longer parameterized)
if "loop" not in request.fixturenames:
> raise Exception(
"Asynchronous fixtures must depend on the 'loop' fixture or "
"be used in tests depending from it."
)
E Exception: Asynchronous fixtures must depend on the 'loop' fixture or be used in tests depending from it.
/usr/lib64/python3.8/site-packages/aiohttp/pytest_plugin.py:84: Exception
_________ ERROR at setup of test_async_fixture_from_marked_test[trio] __________
args = (), kwargs = {}
request = <SubRequest 'async_fixture' for <Function test_async_fixture_from_marked_test[trio]>>
def wrapper(*args, **kwargs): # type: ignore
request = kwargs["request"]
if strip_request:
del kwargs["request"]
# if neither the fixture nor the test use the 'loop' fixture,
# 'getfixturevalue' will fail because the test is not parameterized
# (this can be removed someday if 'loop' is no longer parameterized)
if "loop" not in request.fixturenames:
> raise Exception(
"Asynchronous fixtures must depend on the 'loop' fixture or "
"be used in tests depending from it."
)
E Exception: Asynchronous fixtures must depend on the 'loop' fixture or be used in tests depending from it.
/usr/lib64/python3.8/site-packages/aiohttp/pytest_plugin.py:84: Exception
_________ ERROR at setup of test_async_fixture_from_sync_test[asyncio] _________
args = (), kwargs = {}
request = <SubRequest 'async_fixture' for <Function test_async_fixture_from_sync_test[asyncio]>>
def wrapper(*args, **kwargs): # type: ignore
request = kwargs["request"]
if strip_request:
del kwargs["request"]
# if neither the fixture nor the test use the 'loop' fixture,
# 'getfixturevalue' will fail because the test is not parameterized
# (this can be removed someday if 'loop' is no longer parameterized)
if "loop" not in request.fixturenames:
> raise Exception(
"Asynchronous fixtures must depend on the 'loop' fixture or "
"be used in tests depending from it."
)
E Exception: Asynchronous fixtures must depend on the 'loop' fixture or be used in tests depending from it.
/usr/lib64/python3.8/site-packages/aiohttp/pytest_plugin.py:84: Exception
__________ ERROR at setup of test_async_fixture_from_sync_test[trio] ___________
args = (), kwargs = {}
request = <SubRequest 'async_fixture' for <Function test_async_fixture_from_sync_test[trio]>>
def wrapper(*args, **kwargs): # type: ignore
request = kwargs["request"]
if strip_request:
del kwargs["request"]
# if neither the fixture nor the test use the 'loop' fixture,
# 'getfixturevalue' will fail because the test is not parameterized
# (this can be removed someday if 'loop' is no longer parameterized)
if "loop" not in request.fixturenames:
> raise Exception(
"Asynchronous fixtures must depend on the 'loop' fixture or "
"be used in tests depending from it."
)
E Exception: Asynchronous fixtures must depend on the 'loop' fixture or be used in tests depending from it.
/usr/lib64/python3.8/site-packages/aiohttp/pytest_plugin.py:84: Exception
_________________ ERROR at setup of test_skip_inline[asyncio] __________________
args = (), kwargs = {}
request = <SubRequest 'some_feature' for <Function test_skip_inline[asyncio]>>
def wrapper(*args, **kwargs): # type: ignore
request = kwargs["request"]
if strip_request:
del kwargs["request"]
# if neither the fixture nor the test use the 'loop' fixture,
# 'getfixturevalue' will fail because the test is not parameterized
# (this can be removed someday if 'loop' is no longer parameterized)
if "loop" not in request.fixturenames:
> raise Exception(
"Asynchronous fixtures must depend on the 'loop' fixture or "
"be used in tests depending from it."
)
E Exception: Asynchronous fixtures must depend on the 'loop' fixture or be used in tests depending from it.
/usr/lib64/python3.8/site-packages/aiohttp/pytest_plugin.py:84: Exception
___________________ ERROR at setup of test_skip_inline[trio] ___________________
args = (), kwargs = {}
request = <SubRequest 'some_feature' for <Function test_skip_inline[trio]>>
def wrapper(*args, **kwargs): # type: ignore
request = kwargs["request"]
if strip_request:
del kwargs["request"]
# if neither the fixture nor the test use the 'loop' fixture,
# 'getfixturevalue' will fail because the test is not parameterized
# (this can be removed someday if 'loop' is no longer parameterized)
if "loop" not in request.fixturenames:
> raise Exception(
"Asynchronous fixtures must depend on the 'loop' fixture or "
"be used in tests depending from it."
)
E Exception: Asynchronous fixtures must depend on the 'loop' fixture or be used in tests depending from it.
/usr/lib64/python3.8/site-packages/aiohttp/pytest_plugin.py:84: Exception
=========================== short test summary info ============================
ERROR test_plugin.py::test_async_fixture_from_marked_test[asyncio] - Exceptio...
ERROR test_plugin.py::test_async_fixture_from_marked_test[trio] - Exception: ...
ERROR test_plugin.py::test_async_fixture_from_sync_test[asyncio] - Exception:...
ERROR test_plugin.py::test_async_fixture_from_sync_test[trio] - Exception: As...
ERROR test_plugin.py::test_skip_inline[asyncio] - Exception: Asynchronous fix...
ERROR test_plugin.py::test_skip_inline[trio] - Exception: Asynchronous fixtur...
========================= 2 passed, 6 errors in 0.14s ==========================
pytest-xprocess reminder::Be sure to terminate the started process by running 'pytest --xkill' if you have not explicitly done so in your fixture with 'xprocess.getinfo(<process_name>).terminate()'.
_______________________________________________________________________________ test_asyncio _______________________________________________________________________________
/home/tkloczko/rpmbuild/BUILD/anyio-3.3.0/tests/test_pytest_plugin.py:138: in test_asyncio
result.assert_outcomes(passed=2, failed=1, errors=2)
E AssertionError: assert {'errors': 3,...pped': 0, ...} == {'errors': 2,...pped': 0, ...}
E Omitting 4 identical items, use -vv to show
E Differing items:
E {'errors': 3} != {'errors': 2}
E {'passed': 0} != {'passed': 2}
E Use -v to get the full diff
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.11, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/anyio-3.3.0/.hypothesis/examples')
rootdir: /tmp/pytest-of-tkloczko/pytest-17/test_asyncio0
plugins: anyio-3.3.0, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, toolbox-0.5, aiohttp-0.3.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, flaky-3.7.0, benchmark-3.4.1, xdist-2.3.0, pylama-7.7.1, datadir-1.3.1, regressions-2.2.0, cases-3.6.3, hypothesis-6.14.4, Faker-8.10.3, xprocess-0.18.1, black-0.3.12, checkdocs-2.7.1
collecting ... collected 4 items
test_asyncio.py::TestClassFixtures::test_class_fixture_in_test_method ERROR [ 25%]
test_asyncio.py::test_callback_exception_during_test FAILED [ 50%]
test_asyncio.py::test_callback_exception_during_setup ERROR [ 75%]
test_asyncio.py::test_callback_exception_during_teardown ERROR [100%]
==================================== ERRORS ====================================
____ ERROR at setup of TestClassFixtures.test_class_fixture_in_test_method _____
args = (), kwargs = {'anyio_backend': 'asyncio'}
request = <SubRequest 'async_class_fixture' for <Function test_class_fixture_in_test_method>>
def wrapper(*args, **kwargs): # type: ignore
request = kwargs["request"]
if strip_request:
del kwargs["request"]
# if neither the fixture nor the test use the 'loop' fixture,
# 'getfixturevalue' will fail because the test is not parameterized
# (this can be removed someday if 'loop' is no longer parameterized)
if "loop" not in request.fixturenames:
> raise Exception(
"Asynchronous fixtures must depend on the 'loop' fixture or "
"be used in tests depending from it."
)
E Exception: Asynchronous fixtures must depend on the 'loop' fixture or be used in tests depending from it.
/usr/lib64/python3.8/site-packages/aiohttp/pytest_plugin.py:84: Exception
____________ ERROR at setup of test_callback_exception_during_setup ____________
args = (), kwargs = {}
request = <SubRequest 'setup_fail_fixture' for <Function test_callback_exception_during_setup>>
def wrapper(*args, **kwargs): # type: ignore
request = kwargs["request"]
if strip_request:
del kwargs["request"]
# if neither the fixture nor the test use the 'loop' fixture,
# 'getfixturevalue' will fail because the test is not parameterized
# (this can be removed someday if 'loop' is no longer parameterized)
if "loop" not in request.fixturenames:
> raise Exception(
"Asynchronous fixtures must depend on the 'loop' fixture or "
"be used in tests depending from it."
)
E Exception: Asynchronous fixtures must depend on the 'loop' fixture or be used in tests depending from it.
/usr/lib64/python3.8/site-packages/aiohttp/pytest_plugin.py:84: Exception
__________ ERROR at setup of test_callback_exception_during_teardown ___________
args = (), kwargs = {}
request = <SubRequest 'teardown_fail_fixture' for <Function test_callback_exception_during_teardown>>
def wrapper(*args, **kwargs): # type: ignore
request = kwargs["request"]
if strip_request:
del kwargs["request"]
# if neither the fixture nor the test use the 'loop' fixture,
# 'getfixturevalue' will fail because the test is not parameterized
# (this can be removed someday if 'loop' is no longer parameterized)
if "loop" not in request.fixturenames:
> raise Exception(
"Asynchronous fixtures must depend on the 'loop' fixture or "
"be used in tests depending from it."
)
E Exception: Asynchronous fixtures must depend on the 'loop' fixture or be used in tests depending from it.
/usr/lib64/python3.8/site-packages/aiohttp/pytest_plugin.py:84: Exception
=================================== FAILURES ===================================
_____________________ test_callback_exception_during_test ______________________
def callback():
nonlocal started
started = True
> raise Exception('foo')
E Exception: foo
test_asyncio.py:22: Exception
=========================== short test summary info ============================
FAILED test_asyncio.py::test_callback_exception_during_test - Exception: foo
ERROR test_asyncio.py::TestClassFixtures::test_class_fixture_in_test_method
ERROR test_asyncio.py::test_callback_exception_during_setup - Exception: Asyn...
ERROR test_asyncio.py::test_callback_exception_during_teardown - Exception: A...
========================= 1 failed, 3 errors in 0.12s ==========================
pytest-xprocess reminder::Be sure to terminate the started process by running 'pytest --xkill' if you have not explicitly done so in your fixture with 'xprocess.getinfo(<process_name>).terminate()'.
________________________________________________________________________ test_autouse_async_fixture ________________________________________________________________________
/home/tkloczko/rpmbuild/BUILD/anyio-3.3.0/tests/test_pytest_plugin.py:175: in test_autouse_async_fixture
result.assert_outcomes(passed=len(get_all_backends()))
E AssertionError: assert {'errors': 2,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
E Omitting 4 identical items, use -vv to show
E Differing items:
E {'errors': 2} != {'errors': 0}
E {'passed': 0} != {'passed': 2}
E Use -v to get the full diff
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.11, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/anyio-3.3.0/.hypothesis/examples')
rootdir: /tmp/pytest-of-tkloczko/pytest-17/test_autouse_async_fixture0
plugins: anyio-3.3.0, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, toolbox-0.5, aiohttp-0.3.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, flaky-3.7.0, benchmark-3.4.1, xdist-2.3.0, pylama-7.7.1, datadir-1.3.1, regressions-2.2.0, cases-3.6.3, hypothesis-6.14.4, Faker-8.10.3, xprocess-0.18.1, black-0.3.12, checkdocs-2.7.1
collecting ... collected 2 items
test_autouse_async_fixture.py::test_autouse_backend[asyncio] ERROR [ 50%]
test_autouse_async_fixture.py::test_autouse_backend[trio] ERROR [100%]
==================================== ERRORS ====================================
_______________ ERROR at setup of test_autouse_backend[asyncio] ________________
args = (), kwargs = {'anyio_backend_name': 'asyncio'}
request = <SubRequest 'autouse_async_fixture' for <Function test_autouse_backend[asyncio]>>
def wrapper(*args, **kwargs): # type: ignore
request = kwargs["request"]
if strip_request:
del kwargs["request"]
# if neither the fixture nor the test use the 'loop' fixture,
# 'getfixturevalue' will fail because the test is not parameterized
# (this can be removed someday if 'loop' is no longer parameterized)
if "loop" not in request.fixturenames:
> raise Exception(
"Asynchronous fixtures must depend on the 'loop' fixture or "
"be used in tests depending from it."
)
E Exception: Asynchronous fixtures must depend on the 'loop' fixture or be used in tests depending from it.
/usr/lib64/python3.8/site-packages/aiohttp/pytest_plugin.py:84: Exception
_________________ ERROR at setup of test_autouse_backend[trio] _________________
args = (), kwargs = {'anyio_backend_name': 'trio'}
request = <SubRequest 'autouse_async_fixture' for <Function test_autouse_backend[trio]>>
def wrapper(*args, **kwargs): # type: ignore
request = kwargs["request"]
if strip_request:
del kwargs["request"]
# if neither the fixture nor the test use the 'loop' fixture,
# 'getfixturevalue' will fail because the test is not parameterized
# (this can be removed someday if 'loop' is no longer parameterized)
if "loop" not in request.fixturenames:
> raise Exception(
"Asynchronous fixtures must depend on the 'loop' fixture or "
"be used in tests depending from it."
)
E Exception: Asynchronous fixtures must depend on the 'loop' fixture or be used in tests depending from it.
/usr/lib64/python3.8/site-packages/aiohttp/pytest_plugin.py:84: Exception
=========================== short test summary info ============================
ERROR test_autouse_async_fixture.py::test_autouse_backend[asyncio] - Exceptio...
ERROR test_autouse_async_fixture.py::test_autouse_backend[trio] - Exception: ...
============================== 2 errors in 0.10s ===============================
pytest-xprocess reminder::Be sure to terminate the started process by running 'pytest --xkill' if you have not explicitly done so in your fixture with 'xprocess.getinfo(<process_name>).terminate()'.
__________________________________________________________________ test_cancel_scope_in_asyncgen_fixture ___________________________________________________________________
/home/tkloczko/rpmbuild/BUILD/anyio-3.3.0/tests/test_pytest_plugin.py:202: in test_cancel_scope_in_asyncgen_fixture
result.assert_outcomes(passed=len(get_all_backends()))
E AssertionError: assert {'errors': 2,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
E Omitting 4 identical items, use -vv to show
E Differing items:
E {'errors': 2} != {'errors': 0}
E {'passed': 0} != {'passed': 2}
E Use -v to get the full diff
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.11, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/anyio-3.3.0/.hypothesis/examples')
rootdir: /tmp/pytest-of-tkloczko/pytest-17/test_cancel_scope_in_asyncgen_fixture0
plugins: anyio-3.3.0, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, toolbox-0.5, aiohttp-0.3.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, flaky-3.7.0, benchmark-3.4.1, xdist-2.3.0, pylama-7.7.1, datadir-1.3.1, regressions-2.2.0, cases-3.6.3, hypothesis-6.14.4, Faker-8.10.3, xprocess-0.18.1, black-0.3.12, checkdocs-2.7.1
collecting ... collected 2 items
test_cancel_scope_in_asyncgen_fixture.py::test_cancel_in_asyncgen_fixture[asyncio] ERROR [ 50%]
test_cancel_scope_in_asyncgen_fixture.py::test_cancel_in_asyncgen_fixture[trio] ERROR [100%]
==================================== ERRORS ====================================
__________ ERROR at setup of test_cancel_in_asyncgen_fixture[asyncio] __________
args = (), kwargs = {}
request = <SubRequest 'asyncgen_fixture' for <Function test_cancel_in_asyncgen_fixture[asyncio]>>
def wrapper(*args, **kwargs): # type: ignore
request = kwargs["request"]
if strip_request:
del kwargs["request"]
# if neither the fixture nor the test use the 'loop' fixture,
# 'getfixturevalue' will fail because the test is not parameterized
# (this can be removed someday if 'loop' is no longer parameterized)
if "loop" not in request.fixturenames:
> raise Exception(
"Asynchronous fixtures must depend on the 'loop' fixture or "
"be used in tests depending from it."
)
E Exception: Asynchronous fixtures must depend on the 'loop' fixture or be used in tests depending from it.
/usr/lib64/python3.8/site-packages/aiohttp/pytest_plugin.py:84: Exception
___________ ERROR at setup of test_cancel_in_asyncgen_fixture[trio] ____________
args = (), kwargs = {}
request = <SubRequest 'asyncgen_fixture' for <Function test_cancel_in_asyncgen_fixture[trio]>>
def wrapper(*args, **kwargs): # type: ignore
request = kwargs["request"]
if strip_request:
del kwargs["request"]
# if neither the fixture nor the test use the 'loop' fixture,
# 'getfixturevalue' will fail because the test is not parameterized
# (this can be removed someday if 'loop' is no longer parameterized)
if "loop" not in request.fixturenames:
> raise Exception(
"Asynchronous fixtures must depend on the 'loop' fixture or "
"be used in tests depending from it."
)
E Exception: Asynchronous fixtures must depend on the 'loop' fixture or be used in tests depending from it.
/usr/lib64/python3.8/site-packages/aiohttp/pytest_plugin.py:84: Exception
=========================== short test summary info ============================
ERROR test_cancel_scope_in_asyncgen_fixture.py::test_cancel_in_asyncgen_fixture[asyncio]
ERROR test_cancel_scope_in_asyncgen_fixture.py::test_cancel_in_asyncgen_fixture[trio]
============================== 2 errors in 0.10s ===============================
pytest-xprocess reminder::Be sure to terminate the started process by running 'pytest --xkill' if you have not explicitly done so in your fixture with 'xprocess.getinfo(<process_name>).terminate()'.
========================================================================= short test summary info ==========================================================================
SKIPPED [1] tests/test_fileio.py:119: Drive only makes sense on Windows
SKIPPED [1] tests/test_fileio.py:159: Only makes sense on Windows
SKIPPED [3] tests/test_fileio.py:318: os.lchmod() is not available
SKIPPED [1] tests/test_taskgroups.py:967: Cancel messages are only supported on py3.9+
FAILED tests/test_debugging.py::test_main_task_name[asyncio] - DeprecationWarning: Importing 'itsdangerous.json' is deprecated and will be removed in ItsDangerous 2.1. U...
FAILED tests/test_debugging.py::test_main_task_name[asyncio+uvloop] - DeprecationWarning: Importing 'itsdangerous.json' is deprecated and will be removed in ItsDangerous...
FAILED tests/test_pytest_plugin.py::test_plugin - AssertionError: assert {'errors': 6,...pped': 0, ...} == {'errors': 0,...pped': 2, ...}
FAILED tests/test_pytest_plugin.py::test_asyncio - AssertionError: assert {'errors': 3,...pped': 0, ...} == {'errors': 2,...pped': 0, ...}
FAILED tests/test_pytest_plugin.py::test_autouse_async_fixture - AssertionError: assert {'errors': 2,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
FAILED tests/test_pytest_plugin.py::test_cancel_scope_in_asyncgen_fixture - AssertionError: assert {'errors': 2,...pped': 0, ...} == {'errors': 0,...pped': 0, ...}
================================================================ 6 failed, 1230 passed, 6 skipped in 35.12s ================================================================
pytest-xprocess reminder::Be sure to terminate the started process by running 'pytest --xkill' if you have not explicitly done so in your fixture with 'xprocess.getinfo(<process_name>).terminate()'.
```
| closed | 2021-07-31T09:50:54Z | 2021-08-01T12:15:32Z | https://github.com/agronholm/anyio/issues/353 | [] | kloczek | 22 |
dynaconf/dynaconf | fastapi | 835 | dynaconf_merge = false supported? | Hello, I would like to know if it is possible to disable merging for some keys of a config. In our case, we have a bunch of config files (~30) and merging them is the usual behavior. We do this through the `Dynaconf` parameter `merge_enabled`. Now, in a few cases the merging leads to undesired behavior, and it would be nice to turn off merging for these specific config keys. Is it possible to do this easily, without changing the default merge behavior or touching all the config files? I tried `dynaconf_merge = false` as a property, but this behavior doesn't seem to be supported.
Example:
`1.toml`
```toml
[general]
a = 1
[other]
a = 2
```
`2.toml`
```toml
[general]
b = 1
[other]
dynaconf_merge = false
b = 2
```
`config.py`
```python
from dynaconf import Dynaconf
settings = Dynaconf(
settings_files=["1.toml", "2.toml"],
merge_enabled=True,
)
print(settings.to_dict())
```
Output:
```
{'GENERAL': {'b': 1, 'a': 1}, 'OTHER': {'b': 2, 'a': 2}}
```
Expected Output:
```
{'GENERAL': {'b': 1, 'a': 1}, 'OTHER': {'b': 2}}
```
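For reference, the semantics asked for above can be approximated by post-processing the loaded dicts in plain Python. This is only a sketch of the desired behavior (honoring a `dynaconf_merge: false` marker), not a Dynaconf feature:

```python
def merge(base, override):
    """Deep-merge two config dicts; a section in `override` carrying
    'dynaconf_merge': False replaces the base section wholesale."""
    result = dict(base)
    for key, value in override.items():
        if isinstance(value, dict):
            value = dict(value)                      # avoid mutating caller data
            do_merge = value.pop("dynaconf_merge", True)
            if do_merge and isinstance(result.get(key), dict):
                result[key] = merge(result[key], value)
            else:
                result[key] = value                  # replace, do not merge
        else:
            result[key] = value
    return result

one = {"general": {"a": 1}, "other": {"a": 2}}
two = {"general": {"b": 1}, "other": {"dynaconf_merge": False, "b": 2}}
print(merge(one, two))  # {'general': {'a': 1, 'b': 1}, 'other': {'b': 2}}
```

This reproduces the expected output above (modulo key casing), at the cost of loading the files yourself.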
Thanks for your time. | closed | 2022-11-24T11:14:21Z | 2023-10-04T19:56:35Z | https://github.com/dynaconf/dynaconf/issues/835 | [
"question",
"RFC"
] | westphal-jan | 5 |
deepfakes/faceswap | deep-learning | 945 | How do I remove ugly dark shadows on my face after changing face? |
The converted face has black shadows. How can I eliminate them?
| closed | 2019-12-03T06:39:19Z | 2019-12-04T10:58:05Z | https://github.com/deepfakes/faceswap/issues/945 | [] | hnjiakai | 1 |
mljar/mljar-supervised | scikit-learn | 463 | change multiprocessing to loky because it crashed on MacOs BigSpur | When using multiprocessing in Golden Features it causes crashed in MacOs BigSur 11.5.2 Please change to loky package https://github.com/joblib/loky | closed | 2021-08-30T12:47:11Z | 2021-09-02T07:52:25Z | https://github.com/mljar/mljar-supervised/issues/463 | [
"bug"
] | pplonski | 1 |
pydantic/FastUI | pydantic | 354 | ServerLoad component appends prefix to url even when the url includes the http protocol | I have an application that communicates with several of my other servers, and I want the ServerLoad component to fetch from an endpoint on another server that returns FastUI components. However, when ServerLoad fetches the given URL, it appends the application prefix even if the URL is written with http/https, and it does not use the HTTP method I specify.
```python
def send_mail_btn(service: str):
serv = settings.get_service_by_name(service)
if not serv:
return []
url = serv.get_endpoint_url("send_mail")
if not url:
return []
_settings = get_settings()
return [
c.Button(
text="Send Mail",
on_click=e.PageEvent(name="send-mail"),
class_name="btn btn-outline-success",
html_type="button",
),
c.Modal(
title="Send Mail",
open_trigger=e.PageEvent(name="send-mail"),
body=[
c.ServerLoad(
path=url
+ "?mails="
+ encrypt(str(_settings.global_config.mails)),
load_trigger=e.PageEvent(name="send-mail"),
method="POST"
)
],
footer=[
c.Button(
text="Close",
on_click=e.PageEvent(name="send-mail", clear=True),
named_style="secondary"
)
]
)
]
```

I know I can create an endpoint inside the application, do the fetch there, and return the components. But I think it would be a cool feature to support this directly.
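The behavior the author expects could be sketched as a small resolution rule: prepend the application prefix only when the path is relative. This is illustrative only, and is not how the FastUI frontend is actually implemented:

```python
def resolve(path: str, prefix: str = "/api") -> str:
    """Hypothetical URL resolution for a ServerLoad-style fetch:
    absolute URLs pass through untouched (and would keep the given
    HTTP method); relative paths get the application prefix."""
    if path.startswith(("http://", "https://")):
        return path
    return prefix + path

print(resolve("/components"))                          # /api/components
print(resolve("https://other.example.com/send_mail"))  # unchanged
```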
| closed | 2024-09-25T13:05:05Z | 2024-12-15T18:18:02Z | https://github.com/pydantic/FastUI/issues/354 | [] | dchnkoo | 0 |
zappa/Zappa | django | 501 | [Migrated] Retries and timeout for dependencies. | Originally from: https://github.com/Miserlou/Zappa/issues/1316 by [Tipuch](https://github.com/Tipuch)
## Description
Added retries and timeout optional parameters in the cli, a retry scheme when downloading dependencies and a different exception handling when downloading packages.
## GitHub Issues
https://github.com/Miserlou/Zappa/issues/1235
https://github.com/Miserlou/Zappa/issues/1040
| closed | 2021-02-20T09:43:34Z | 2022-07-16T07:23:34Z | https://github.com/zappa/Zappa/issues/501 | [] | jneves | 1 |
jazzband/django-oauth-toolkit | django | 804 | push release 1.3.0 to pypi | Push release 1.3.0 to pypi
| closed | 2020-03-04T14:01:46Z | 2020-03-24T13:36:29Z | https://github.com/jazzband/django-oauth-toolkit/issues/804 | [] | n2ygk | 11 |
microsoft/nni | machine-learning | 5,587 | Bug: incompatibility between save & load functions and configs. | There is an incompatibility between the standard config and the code, as shown in the save and load functions:
https://github.com/microsoft/nni/blob/master/nni/tools/nnictl/nnictl_utils.py#L828
https://github.com/microsoft/nni/blob/master/nni/tools/nnictl/nnictl_utils.py#LL931C5-L931C22
It can be reproduced by saving and loading tutorial examples.
It's due to differences between "trialCodeDirectory" and "trial codeDir". | open | 2023-05-29T19:12:42Z | 2023-07-19T12:00:40Z | https://github.com/microsoft/nni/issues/5587 | [] | zjowowen | 2 |
mwaskom/seaborn | pandas | 3,439 | so.Plot not working in version 0.12.2 | The following code does not display the intended graph, just a blank white space.
```
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import seaborn.objects as so
dataLi = pd.read_csv("lithium_data.csv")
dataLi
(
so.Plot(dataLi,
x = "CV",
y = "Mean")
)
plt.show()
```
In contrast, the code below does produce a graph. Why does the code above not work?
```
dataLi = pd.read_csv("lithium_data.csv")
g = sns.relplot(data = dataLi,
x = "CV",
y = "Mean")
g.set_axis_labels("%CV", "Mean [Lithium], mmol/L")
plt.show()
plt.close
```
| closed | 2023-08-16T02:17:21Z | 2023-08-16T16:19:59Z | https://github.com/mwaskom/seaborn/issues/3439 | [] | RSelvaratnam | 1 |
d2l-ai/d2l-en | pytorch | 2,323 | How to load d2l modules on Colab to avoid ModuleNotFoundError? | I'd like to begin reading this book and running its examples. However, for some reasons I am not allowed to install Conda on the machine. So I have tried to run the notebook on Colab. However, on running the first cell, it gives an error like this:
```
%matplotlib inline
import math
import time
import numpy as np
import torch
from d2l import torch as d2l
```
```
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-7a344dafd5da> in <module>
4 import numpy as np
5 import torch
----> 6 from d2l import torch as d2l
ModuleNotFoundError: No module named 'd2l'
```
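For the record, the usual fix on Colab is `!pip install d2l` in the first cell. A small defensive helper along those lines (a sketch: `ensure` is a hypothetical name, and pinning the version that matches the book release is recommended):

```python
import importlib
import subprocess
import sys

def ensure(package: str):
    """Import a module, pip-installing it first if missing (Colab-style helper)."""
    try:
        return importlib.import_module(package)
    except ModuleNotFoundError:
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])
        return importlib.import_module(package)

math = ensure("math")    # already present: just imports
# d2l = ensure("d2l")    # on Colab, installs the book's package on first run
print(math.floor(2.7))   # 2
```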
Any thought on this that might help? | closed | 2022-10-03T16:15:12Z | 2022-11-20T02:04:17Z | https://github.com/d2l-ai/d2l-en/issues/2323 | [] | ghost | 8 |
seleniumbase/SeleniumBase | pytest | 2,225 | Struggling with Remote Debugging port | I am trying to run a script on a previously opened browser. I cannot connect to the browser; instead, my script opens a new browser with each run. The browser opened -> [https://imgur.com/a/afBEAJX](https://imgur.com/a/afBEAJX)
I use this to spawn my browser:
cd C:\Program Files\Google\Chrome\Application
chrome.exe --remote-debugging-port=0249 --user-data-dir="C:\browser"
My current code:
```Python
#simple.py
from seleniumbase import Driver
from seleniumbase.undetected import Chrome
from selenium.webdriver.chrome.options import Options
from seleniumbase.undetected import ChromeOptions
chrome_options = Options()
chrome_options.add_experimental_option("debuggerAddress" , "localhost:0249")
web = Driver(browser='chrome',chromium_arg=chrome_options,remote_debug=True)
body = web.find_element("/html/body")
print(body.text)
```
Things i have tried:
-not changing the port (9222)
-pytest simple.py --remote-debug
-uc=True with uc.chromeoptions
-chrome_options = Options()
-chrome_options = ChromeOptions()
-browser="remote"
-removing chromium_arg and adding port="9222"
-chromium_arg="remote-debugging-port=9222"
-example solution with SB in #2049
I am converting my large Selenium project into SeleniumBase, and being able to test in this way would be invaluable.
"question",
"UC Mode / CDP Mode"
] | Dylgod | 8 |
pytest-dev/pytest-html | pytest | 258 | Attaching video URL after every test run in pytest html report | After every test run I get the video URL, but when the report is generated, the URL shown for the 2nd test case is actually the 1st test's URL, and nothing is shown for the 1st test case.
Note: the video URLs are generated correctly for each test case; I am just not able to attach the right link to each test case in the report.
Can someone help me with this, please?
Below is my code snippet:
```python
@pytest.fixture(scope="function")
def test_obj(browser, browser_version, platform, os_version, request):
    global driver
    driver_fact = DriverFactory()
    test_name = os.environ.get('PYTEST_CURRENT_TEST').split(':')[-1].split(' ')[0]
    driver = driver_fact.get_web_driver(browser, browser_version, platform, os_version, test_name)
    print(driver)
    browserstack_obj = BrowserStack_Library()
    active_session_id = browserstack_obj.get_active_session_id()
    driver.maximize_window()
    request.cls.driver = driver
    yield
    driver.quit()
    global video_url
    video_url = browserstack_obj.get_session_url(active_session_id)


@pytest.mark.hookwrapper
def pytest_runtest_makereport(__multicall__, item):
    pytest_html = item.config.pluginmanager.getplugin('html')
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, 'extra', [])
    if report.when == "call":
        extra.append(pytest_html.extras.url(video_url))
        xfail = hasattr(report, 'wasxfail')
        if (report.skipped and xfail) or (report.failed and not xfail):
            file_name = report.nodeid.replace("::", "_") + ".png"
            # _capture_screenshot(file_name)
            if file_name:
                html = '<div><img src="%s" alt="screenshot" style="width:304px;height:228px;" ' \
                       'onclick="window.open(this.src)" align="right"/></div>' % file_name
                extra.append(pytest_html.extras.html(html))
        report.extra = extra
```
| open | 2020-01-30T19:38:43Z | 2020-10-23T01:25:03Z | https://github.com/pytest-dev/pytest-html/issues/258 | [
"question"
] | rahulbasu86 | 6 |
scanapi/scanapi | rest-api | 9 | Add package to brew | closed | 2019-07-21T19:24:21Z | 2019-08-02T13:59:08Z | https://github.com/scanapi/scanapi/issues/9 | [
"Feature"
] | camilamaia | 1 | |
miguelgrinberg/flasky | flask | 454 | InstrumentedList has no attribute 'paginate' | Hi, there is a strange problem.
When I run the app, it raises an error:
InstrumentedList has no attribute 'paginate':
pagination = user.followers.paginate(.....)
I used the type() function to check the type of user.followers; it's an InstrumentedList instead of the BaseQuery class.
I have searched Google for several days, but have no idea.
Can anybody help me? Thanks.
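For context, a relationship only yields a query object (with `paginate()` in Flask-SQLAlchemy) when it is declared with `lazy='dynamic'`; a plain relationship yields an `InstrumentedList`. A minimal, self-contained demonstration in bare SQLAlchemy (model names only loosely follow the book's `User`/`Follow`):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class Follow(Base):
    __tablename__ = "follows"
    follower_id = Column(Integer, ForeignKey("users.id"), primary_key=True)
    followed_id = Column(Integer, ForeignKey("users.id"), primary_key=True)

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    followers = relationship(
        "Follow",
        foreign_keys=[Follow.followed_id],
        lazy="dynamic",   # a query object instead of an InstrumentedList
    )

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = Session(engine)
user = User(id=1)
session.add(user)
session.commit()

print(type(user.followers).__name__)  # a query-like object supporting filter(), count(), ...
```

Without `lazy="dynamic"`, `user.followers` would be a plain list-like collection with no query methods, which matches the error above.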
| closed | 2019-12-30T09:41:19Z | 2019-12-30T10:53:10Z | https://github.com/miguelgrinberg/flasky/issues/454 | [] | MansonLuo | 1 |
vanna-ai/vanna | data-visualization | 637 | How to show results in numbers with a thousands separator and change the currency to not be $ | Hi,
How can we show results as numbers with a thousands separator, and change the currency so that it is not $?
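At the display layer this is ordinary number formatting; a plain-Python sketch (the helper name and the € symbol are just illustrative choices, not Vanna API):

```python
def fmt(value: float, symbol: str = "€") -> str:
    """Format a number with a thousands separator and a chosen currency symbol."""
    return f"{symbol}{value:,.0f}"

print(fmt(1234567.89))      # €1,234,568
print(fmt(1000, "$"))       # $1,000
```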
Thanks in advance | closed | 2024-09-13T07:00:46Z | 2024-09-16T14:09:03Z | https://github.com/vanna-ai/vanna/issues/637 | [] | Ismokgtar | 0 |
graphql-python/gql | graphql | 301 | Is there a way to export a gql response dynamically? | ### The problem:
**Exporting data with `graphql` is inefficient.**
for example lets look at this query:
```gql
query{
SiteById(id:852){
unitSet{
connableSet{
connablealertSet{
TidKey
TidValue
}
}
}
}
}
```
returns this JSON response:
```json
{
"SiteById": [
{
"unitSet": [
{
"connableSet": [
{
"connablealertSet": [
{ "TidKey": 151, "TidValue": "סכום בדיקת ROM שגוי" }
]
}
]
}
]
}
]
}
```
Let's assume I only need `TidKey` and `TidValue`.
To do this, here is what you would normally find on the web:
```python
import pandas as pd
pd.json_normalize(res['SiteById'][0]['unitSet'][0]['connableSet'][0]['connablealertSet'])
```
This results in:

### As you can see this is extremely inefficient for these reasons:
1. I need to know exactly what I queried for.
2. If the result had more fields, the code above would break.
3. If I changed the query slightly, I would also need to change how I export it.
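One generic workaround (plain Python, not a gql feature) is to walk the response and collect every record that contains the requested keys, which addresses all three points above: extra fields are ignored, and the nesting path no longer matters.

```python
def collect_records(node, keys):
    """Recursively gather dicts containing all the requested keys,
    regardless of how deeply the GraphQL response nests them."""
    found = []
    if isinstance(node, dict):
        if all(k in node for k in keys):
            found.append({k: node[k] for k in keys})
        for value in node.values():
            found.extend(collect_records(value, keys))
    elif isinstance(node, list):
        for item in node:
            found.extend(collect_records(item, keys))
    return found

res = {"SiteById": [{"unitSet": [{"connableSet": [{"connablealertSet": [
    {"TidKey": 151, "TidValue": "alert text"}]}]}]}]}
print(collect_records(res, ("TidKey", "TidValue")))
# [{'TidKey': 151, 'TidValue': 'alert text'}]
```

The resulting list of flat dicts can be handed straight to `pd.DataFrame(...)` without hard-coding the query shape.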
I hope I am missing something, because otherwise it's back to `REST` | closed | 2022-02-22T10:12:41Z | 2022-03-12T19:04:18Z | https://github.com/graphql-python/gql/issues/301 | [
"type: question or discussion"
] | nrbnlulu | 5 |
robertmartin8/MachineLearningStocks | scikit-learn | 32 | Question | Hi Robert,
This is another fine project together with your excellent portfolio optimisation work.
I hope you don't mind me asking a question as someone inexperienced in this area regarding the amount of fundamental data required to produce a viable model.
My local exchange is London (LSE/FTSE) and getting historic fundamentals is hard. I am able to extract these day by day but it will take some time to produce a significant amount.
So I was wondering: how many days would I need to have processed for a viable classification model? Say 6 months, 3 months, etc.? I have many fields, but at present the data only goes back a week.
Thank you in advance
Fig | open | 2020-05-23T15:05:13Z | 2020-05-30T12:35:57Z | https://github.com/robertmartin8/MachineLearningStocks/issues/32 | [] | lefig | 9 |
pennersr/django-allauth | django | 3,126 | Can django-allauth work with Google login today through dj-rest-auth, since Google Identity services is not yet supported? | django-allauth does not seem to support the latest Google Identity services at the moment. Does this mean django-allauth is unable to work with Google login through dj-rest-auth for the time being? | closed | 2022-07-18T02:13:44Z | 2023-06-18T21:52:23Z | https://github.com/pennersr/django-allauth/issues/3126 | [] | helpme1 | 1 |
widgetti/solara | fastapi | 161 | Sessions timeout / websocket disconnect | Thanks for the great project!
I am developing an app for annotating data. Some data is difficult to discern and takes a long time to annotate, and the app is quite hard to use if the session expires in the middle of that.
Is it possible to extend the session expiration date?
If there is information on this in the documentation or something, I apologize. | closed | 2023-06-20T10:53:57Z | 2023-10-11T11:30:31Z | https://github.com/widgetti/solara/issues/161 | [] | yugosuz | 5 |
keras-team/keras | deep-learning | 20,258 | Add AdEMAMix Optimizer | AdEMAMix integrates Adam and EMA optimization methods to tackle issues of slow convergence and subpar generalization in large language models and noisy datasets. It utilizes three beta parameters along with an alpha parameter to provide flexible momentum and adaptive learning rates.
Paper: https://arxiv.org/abs/2409.03137
I'm interested in adding this optimizer to Keras.
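From the paper's description, the update keeps Adam's fast first-moment EMA (beta1) and second moment (beta2), plus a slow third EMA (beta3) that is mixed in scaled by alpha. A rough single-parameter sketch of the rule (names and defaults are mine, not a Keras API):

```python
import math

def ademamix_step(theta, grad, state, lr=1e-2, b1=0.9, b2=0.999,
                  b3=0.9999, alpha=5.0, eps=1e-8):
    """One scalar AdEMAMix update (rough sketch of the paper's rule)."""
    state["t"] += 1
    # Fast EMA of the gradient, as in Adam's first moment.
    state["m1"] = b1 * state["m1"] + (1 - b1) * grad
    # Slow EMA -- the extra momentum buffer AdEMAMix adds (beta3).
    state["m2"] = b3 * state["m2"] + (1 - b3) * grad
    # Second moment, as in Adam (beta2).
    state["v"] = b2 * state["v"] + (1 - b2) * grad * grad
    m1_hat = state["m1"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    # Mix the bias-corrected fast EMA with the alpha-scaled slow EMA.
    return theta - lr * (m1_hat + alpha * state["m2"]) / (math.sqrt(v_hat) + eps)

# Minimize f(x) = x^2 from x = 1.0 as a smoke test of the update.
state = {"m1": 0.0, "m2": 0.0, "v": 0.0, "t": 0}
x = 1.0
for _ in range(300):
    x = ademamix_step(x, 2.0 * x, state)
```

The slow EMA is deliberately left without bias correction here, which is how I read the paper (it warms up gradually); a full Keras implementation would also need the paper's alpha and beta3 warmup schedulers.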
@fchollet | closed | 2024-09-14T20:33:32Z | 2024-09-16T02:30:05Z | https://github.com/keras-team/keras/issues/20258 | [] | IMvision12 | 4 |
FlareSolverr/FlareSolverr | api | 1,150 | [yggtorrent] (updating) Exception: System.Threading.Tasks.TaskCanceledException: The request was canceled due to the configured HttpClient.Timeout of 60 seconds elapsing. ---> System.TimeoutException: A task was canceled. ---> System.Threading.Tasks.TaskCanceledException: A task was canceled | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.10
- Last working FlareSolverr version: 3.10
- Operating system: Debian
- Are you using Docker: [yes/no] yes
- FlareSolverr User-Agent (see log traces or / endpoint):
- Are you using a VPN: [yes/no] no
- Are you using a Proxy: [yes/no] no
- Are you using Captcha Solver: [yes/no] no
- If using captcha solver, which one:
- URL to test this issue:https://www3.yggtorrent.qa/
```
### Description
```bash
curl -L -X POST 'http://localhost:8191/v1' -H 'Content-Type: application/json' --data-raw '{
  "cmd": "request.get",
  "url": "https://www3.yggtorrent.qa/",
  "maxTimeout": 60000
}'
```
### Logged Error Messages
```text
2024-04-12 12:13:19 INFO ReqId 140430599517952 Incoming request => POST /v1 body: {'cmd': 'request.get', 'url': 'https://www3.yggtorrent.qa/', 'maxTimeout': 60000}
2024-04-12 12:13:19 DEBUG ReqId 140430599517952 Launching web browser...
version_main cannot be converted to an integer
2024-04-12 12:13:19 DEBUG ReqId 140430599517952 Started executable: `/app/chromedriver` in a child process with pid: 469
2024-04-12 12:13:19 DEBUG ReqId 140430574339840 Waiting for title (attempt 20): Just a moment...
2024-04-12 12:13:20 DEBUG ReqId 140430599517952 New instance of webdriver has been created to perform the request
2024-04-12 12:13:20 DEBUG ReqId 140430565947136 Navigating to... https://www3.yggtorrent.qa/
2024-04-12 12:13:20 DEBUG ReqId 140430565947136 Response HTML:
<html lang="en-US" class="lang-en-us"><head><title>Just a moment...</title><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"><meta http-equiv="X-UA-Compatible" content="IE=Edge"><meta name="robots" content="noindex,nofoll ow"><meta name="viewport" content="width=device-width,initial-scale=1"><style>*{box-sizing:border-box;margin:0;padding:0}html{line-height:1.15;-webkit-text-size-adjust:100%;color:#313131}button,html{font-family:system-ui,-apple-system,Bli nkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol,Noto Color Emoji}@media (prefers-color-scheme:dark){body{background-color:#222;color:#d9d9d9}body a{color:#fff}body a:hover{color:#ee730a;text-decoration:underline}body .lds-ring div{border-color:#999 transparent transparent}body .font-red{color:#b20f03}body .big-button,body .pow-button{background-color:#4693ff;color:#1d1d1d}body #challenge-success-te xt{background-image:url(data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzMiIgaGVpZ2h0PSIzMiIgZmlsbD0ibm9uZSIgdmlld0JveD0iMCAwIDI2IDI2Ij48cGF0aCBmaWxsPSIjZDlkOWQ5IiBkPSJNMTMgMGExMyAxMyAwIDEgMCAwIDI2 IDEzIDEzIDAgMCAwIDAtMjZtMCAyNGExMSAxMSAwIDEgMSAwLTIyIDExIDExIDAgMCAxIDAgMjIiLz48cGF0aCBmaWxsPSIjZDlkOWQ5IiBkPSJtMTAuOTU1IDE2LjA1NS0zLjk1LTQuMTI1LTEuNDQ1IDEuMzg1IDUuMzcgNS42MSA5LjQ5NS05LjYtMS40Mi0xLjQwNXoiLz48L3N2Zz4=)}body #challenge-erro r-text{background-image:url(data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzMiIgaGVpZ2h0PSIzMiIgZmlsbD0ibm9uZSI+PHBhdGggZmlsbD0iI0IyMEYwMyIgZD0iTTE2IDNhMTMgMTMgMCAxIDAgMTMgMTNBMTMuMDE1IDEzLjAxNSAw IDAgMCAxNiAzbTAgMjRhMTEgMTEgMCAxIDEgMTEtMTEgMTEuMDEgMTEuMDEgMCAwIDEtMTEgMTEiLz48cGF0aCBmaWxsPSIjQjIwRjAzIiBkPSJNMTcuMDM4IDE4LjYxNUgxNC44N0wxNC41NjMgOS41aDIuNzgzem0tMS4wODQgMS40MjdxLjY2IDAgMS4wNTcuMzg4LjQwNy4zODkuNDA3Ljk5NCAwIC41OTYtLjQwNy 
45ODQtLjM5Ny4zOS0xLjA1Ny4zODktLjY1IDAtMS4wNTYtLjM4OS0uMzk4LS4zODktLjM5OC0uOTg0IDAtLjU5Ny4zOTgtLjk4NS40MDYtLjM5NyAxLjA1Ni0uMzk3Ii8+PC9zdmc+)}}body{display:flex;flex-direction:column;min-height:100vh}body.no-js .loading-spinner{visibility:h idden}body.no-js .challenge-running{display:none}body.dark{background-color:#222;color:#d9d9d9}body.dark a{color:#fff}body.dark a:hover{color:#ee730a;text-decoration:underline}body.dark .lds-ring div{border-color:#999 transparent transpar ent}body.dark .font-red{color:#b20f03}body.dark .big-button,body.dark .pow-button{background-color:#4693ff;color:#1d1d1d}body.dark #challenge-success-text{background-image:url(data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5v cmcvMjAwMC9zdmciIHdpZHRoPSIzMiIgaGVpZ2h0PSIzMiIgZmlsbD0ibm9uZSIgdmlld0JveD0iMCAwIDI2IDI2Ij48cGF0aCBmaWxsPSIjZDlkOWQ5IiBkPSJNMTMgMGExMyAxMyAwIDEgMCAwIDI2IDEzIDEzIDAgMCAwIDAtMjZtMCAyNGExMSAxMSAwIDEgMSAwLTIyIDExIDExIDAgMCAxIDAgMjIiLz48cGF0aC BmaWxsPSIjZDlkOWQ5IiBkPSJtMTAuOTU1IDE2LjA1NS0zLjk1LTQuMTI1LTEuNDQ1IDEuMzg1IDUuMzcgNS42MSA5LjQ5NS05LjYtMS40Mi0xLjQwNXoiLz48L3N2Zz4=)}body.dark #challenge-error-text{background-image:url(data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d 3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzMiIgaGVpZ2h0PSIzMiIgZmlsbD0ibm9uZSI+PHBhdGggZmlsbD0iI0IyMEYwMyIgZD0iTTE2IDNhMTMgMTMgMCAxIDAgMTMgMTNBMTMuMDE1IDEzLjAxNSAwIDAgMCAxNiAzbTAgMjRhMTEgMTEgMCAxIDEgMTEtMTEgMTEuMDEgMTEuMDEgMCAwIDEtMTEgMTEiLz48c GF0aCBmaWxsPSIjQjIwRjAzIiBkPSJNMTcuMDM4IDE4LjYxNUgxNC44N0wxNC41NjMgOS41aDIuNzgzem0tMS4wODQgMS40MjdxLjY2IDAgMS4wNTcuMzg4LjQwNy4zODkuNDA3Ljk5NCAwIC41OTYtLjQwNy45ODQtLjM5Ny4zOS0xLjA1Ny4zODktLjY1IDAtMS4wNTYtLjM4OS0uMzk4LS4zODktLjM5OC0uOTg0IDA tLjU5Ny4zOTgtLjk4NS40MDYtLjM5NyAxLjA1Ni0uMzk3Ii8+PC9zdmc+)}body.light{background-color:transparent;color:#313131}body.light a{color:#0051c3}body.light a:hover{color:#ee730a;text-decoration:underline}body.light .lds-ring div{border-color:# 595959 transparent transparent}body.light .font-red{color:#fc574a}body.light 
.big-button,body.light .pow-button{background-color:#003681;border-color:#003681;color:#fff}body.light #challenge-success-text{background-image:url(data:image/sv g+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzMiIgaGVpZ2h0PSIzMiIgZmlsbD0ibm9uZSIgdmlld0JveD0iMCAwIDI2IDI2Ij48cGF0aCBmaWxsPSIjMzEzMTMxIiBkPSJNMTMgMGExMyAxMyAwIDEgMCAwIDI2IDEzIDEzIDAgMCAwIDAtMjZtMCAyNGExMSAxM SAwIDEgMSAwLTIyIDExIDExIDAgMCAxIDAgMjIiLz48cGF0aCBmaWxsPSIjMzEzMTMxIiBkPSJtMTAuOTU1IDE2LjA1NS0zLjk1LTQuMTI1LTEuNDQ1IDEuMzg1IDUuMzcgNS42MSA5LjQ5NS05LjYtMS40Mi0xLjQwNXoiLz48L3N2Zz4=)}body.light #challenge-error-text{background-image:url(dat a:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzMiIgaGVpZ2h0PSIzMiIgZmlsbD0ibm9uZSI+PHBhdGggZmlsbD0iI2ZjNTc0YSIgZD0iTTE2IDNhMTMgMTMgMCAxIDAgMTMgMTNBMTMuMDE1IDEzLjAxNSAwIDAgMCAxNiAzbTAgMjRhMTEgMTEgMCA xIDEgMTEtMTEgMTEuMDEgMTEuMDEgMCAwIDEtMTEgMTEiLz48cGF0aCBmaWxsPSIjZmM1NzRhIiBkPSJNMTcuMDM4IDE4LjYxNUgxNC44N0wxNC41NjMgOS41aDIuNzgzem0tMS4wODQgMS40MjdxLjY2IDAgMS4wNTcuMzg4LjQwNy4zODkuNDA3Ljk5NCAwIC41OTYtLjQwNy45ODQtLjM5Ny4zOS0xLjA1Ny4zODktL jY1IDAtMS4wNTYtLjM4OS0uMzk4LS4zODktLjM5OC0uOTg0IDAtLjU5Ny4zOTgtLjk4NS40MDYtLjM5NyAxLjA1Ni0uMzk3Ii8+PC9zdmc+)}a{background-color:transparent;color:#0051c3;text-decoration:none;transition:color .15s ease}a:hover{color:#ee730a;text-decoratio n:underline}.main-content{margin:8rem auto;max-width:60rem;width:100%}.heading-favicon{height:2rem;margin-right:.5rem;width:2rem}@media (width <= 720px){.main-content{margin-top:4rem}.heading-favicon{height:1.5rem;width:1.5rem}}.footer,.m ain-content{padding-left:1.5rem;padding-right:1.5rem}.main-wrapper{align-items:center;display:flex;flex:1;flex-direction:column}.font-red{color:#b20f03}.spacer{margin:2rem 0}.h1{font-size:2.5rem;font-weight:500;line-height:3.75rem}.h2{fon t-weight:500}.core-msg,.h2{font-size:1.5rem;line-height:2.25rem}.body-text,.core-msg{font-weight:400}.body-text{font-size:1rem;line-height:1.25rem}@media (width <= 
720px){.h1{font-size:1.5rem;line-height:1.75rem}.h2{font-size:1.25rem}.cor e-msg,.h2{line-height:1.5rem}.core-msg{font-size:1rem}}#challenge-error-text{background-image:url(data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzMiIgaGVpZ2h0PSIzMiIgZmlsbD0ibm9uZSI+PHBhdGggZmlsbD 0iI2ZjNTc0YSIgZD0iTTE2IDNhMTMgMTMgMCAxIDAgMTMgMTNBMTMuMDE1IDEzLjAxNSAwIDAgMCAxNiAzbTAgMjRhMTEgMTEgMCAxIDEgMTEtMTEgMTEuMDEgMTEuMDEgMCAwIDEtMTEgMTEiLz48cGF0aCBmaWxsPSIjZmM1NzRhIiBkPSJNMTcuMDM4IDE4LjYxNUgxNC44N0wxNC41NjMgOS41aDIuNzgzem0tMS4w ODQgMS40MjdxLjY2IDAgMS4wNTcuMzg4LjQwNy4zODkuNDA3Ljk5NCAwIC41OTYtLjQwNy45ODQtLjM5Ny4zOS0xLjA1Ny4zODktLjY1IDAtMS4wNTYtLjM4OS0uMzk4LS4zODktLjM5OC0uOTg0IDAtLjU5Ny4zOTgtLjk4NS40MDYtLjM5NyAxLjA1Ni0uMzk3Ii8+PC9zdmc+);padding-left:34px}#challenge -error-text,#challenge-success-text{background-repeat:no-repeat;background-size:contain}#challenge-success-text{background-image:url(data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIzMiIgaGVpZ2h0PSI zMiIgZmlsbD0ibm9uZSIgdmlld0JveD0iMCAwIDI2IDI2Ij48cGF0aCBmaWxsPSIjMzEzMTMxIiBkPSJNMTMgMGExMyAxMyAwIDEgMCAwIDI2IDEzIDEzIDAgMCAwIDAtMjZtMCAyNGExMSAxMSAwIDEgMSAwLTIyIDExIDExIDAgMCAxIDAgMjIiLz48cGF0aCBmaWxsPSIjMzEzMTMxIiBkPSJtMTAuOTU1IDE2LjA1N S0zLjk1LTQuMTI1LTEuNDQ1IDEuMzg1IDUuMzcgNS42MSA5LjQ5NS05LjYtMS40Mi0xLjQwNXoiLz48L3N2Zz4=);padding-left:42px}.text-center{text-align:center}.big-button{border:.063rem solid #0051c3;border-radius:.313rem;font-size:.875rem;line-height:1.313re m;padding:.375rem 1rem;transition-duration:.2s;transition-property:background-color,border-color,color;transition-timing-function:ease}.big-button:hover{cursor:pointer}.captcha-prompt:not(.hidden){display:flex}@media (width <= 720px){.cap tcha-prompt:not(.hidden){flex-wrap:wrap;justify-content:center}}.pow-button{background-color:#0051c3;color:#fff;margin:2rem 0}.pow-button:hover{background-color:#003681;border-color:#003681;color:#fff}.footer{font-size:.75rem;line-height: 
1.125rem;margin:0 auto;max-width:60rem;width:100%}.footer-inner{border-top:1px solid #d9d9d9;padding-bottom:1rem;padding-top:1rem}.clearfix:after{clear:both;content:"";display:table}.clearfix .column{float:left;padding-right:1.5rem;width: 50%}.diagnostic-wrapper{margin-bottom:.5rem}.footer .ray-id{text-align:center}.footer .ray-id code{font-family:monaco,courier,monospace}.core-msg,.zone-name-title{overflow-wrap:break-word}@media (width <= 720px){.diagnostic-wrapper{displa y:flex;flex-wrap:wrap;justify-content:center}.clearfix:after{clear:none;content:none;display:initial;text-align:center}.column{padding-bottom:2rem}.clearfix .column{float:none;padding:0;width:auto;word-break:keep-all}.zone-name-title{marg in-bottom:1rem}}.loading-spinner{height:76.391px}.lds-ring{display:inline-block;position:relative}.lds-ring,.lds-ring div{height:1.875rem;width:1.875rem}.lds-ring div{animation:lds-ring 1.2s cubic-bezier(.5,0,.5,1) infinite;border:.3rem s olid transparent;border-radius:50%;border-top-color:#313131;box-sizing:border-box;display:block;position:absolute}.lds-ring div:first-child{animation-delay:-.45s}.lds-ring div:nth-child(2){animation-delay:-.3s}.lds-ring div:nth-child(3){a nimation-delay:-.15s}@keyframes lds-ring{0%{transform:rotate(0)}to{transform:rotate(1turn)}}@media screen and (-ms-high-contrast:active),screen and (-ms-high-contrast:none){.main-wrapper,body{display:block}}</style><meta http-equiv="refre sh" content="105"><script src="/cdn-cgi/challenge-platform/h/b/orchestrate/chl_page/v1?ray=8732c954df186f45"></script><script src="https://challenges.cloudflare.com/turnstile/v0/b/bcc5fb0a8815/api.js?onload=HrjuF1&render=explicit" asy nc="" defer="" crossorigin="anonymous"></script></head><body class="no-js"><div class="main-wrapper" role="main"><div class="main-content"><h1 class="zone-name-title h1">www3.yggtorrent.qa</h1><h2 id="challenge-running" class="h2">Verifyi ng you are human. 
This may take a few seconds.</h2><div id="challenge-stage"></div><div id="challenge-spinner" class="spacer loading-spinner" style="display: block; visibility: visible;"><div class="lds-ring"><div></div><div></div><div></ div><div></div></div></div><div id="challenge-body-text" class="core-msg spacer">www3.yggtorrent.qa needs to review the security of your connection before proceeding.</div><div id="challenge-success" style="display: none;"><div id="challe nge-success-text" class="h2">Verification successful</div><div class="core-msg spacer">Waiting for www3.yggtorrent.qa to respond...</div></div><noscript><div id="challenge-error-title"><div class="h2"><span id="challenge-error-text">Enabl e JavaScript and cookies to continue</span></div></div></noscript></div></div><script>(function(){window._cf_chl_opt={cvId: '3',cZone: "www3.yggtorrent.qa",cType: 'non-interactive',cNounce: '85995',cRay: '8732c954df186f45',cHash: '524222d a44b06bb',cUPMDTk: "\/?__cf_chl_tk=CGLmW5Cz9uxTqCU0mQCjE9b7aU5NIbQ0SuVNx1akPBA-1712920400-0.0.1.1-1279",cFPWv: 'b',cTTimeMs: '1000',cMTimeMs: '105000',cTplV: 5,cTplB: 'cf',cK: "visitor-time",fa: "\/?__cf_chl_f_tk=CGLmW5Cz9uxTqCU0mQCjE9b7a U5NIbQ0SuVNx1akPBA-1712920400-0.0.1.1-1279",md: "Wq2dEwSiAZtQA57EwrccFFA0heSnHa6rcdjoyFyGJPQ-1712920400-1.1.1.1-olSVMhfVtU3SFMGKY2k5qwTcMnTcAm.9dKHML3c8GFdCWoY8h6qpjpgm1DXWa.Y4mGq4DoiRpIf2lFcucn1qJlEArax6WwxTOSm3sRcte7lBLIf80pQJ2QKaWGK4aS mnkmVATo6SbpbLz_ur7eCKHkUqJdCatY95JWtYL6yxaVv_HIbc_YPE28UKHNknKVLRh0k_XrSxVi63Cf.Sy_xSBJx7PrQE9zVV8zuL7Yp6ims0RqAMAWhKtIWwPhI2.EUMvLNpqnwCDAojT4EmPYM0hWVPqigTf223rUyUSAuG3CVQWIgMD1TXpRDqLHiaJBF92lk1rXuwLx_MLcsschuRRFdVIC3p7p3rkCpMa0llXe0y XIXFebfjOb1mfiRFYDkg2HaQSVsiQ3M6nyA30g1EzVpE_v415yROZK6KxhcY7XF3BAQnamOZR_aCAajP2RXiPzJdxwRWUELy2ekMqtgAVABu051nw_4054187rK0GfIKFEqBoAmJIo_A3J95_5VeB4rrBPypobx4wnqZXxgjR_H5aVPSzNrMFHLn_RSogQBHrwZJrGRynxuAOacOOU4DZPb9mOm5PIaQazdJHuuGEzqLBr 
.wVrv8YG8CLrXkQ20t.pquZGhckN3xoJQsIKW7rbvd06_MUikwrb7qmS55RxHF5JNADDYGyTzRDdnJk5RcAx_s1CWvBzb8o.echpiLCDdprIww7YwMQk34T38mAUFJ5e6ATzZg7DWMq4icDaRpx76tftEnI3KmyKsJnuHZPTvGYjBhVymRZl5zVl3MfeIucQ9xO3WXPjrGs.98HWkfcqVtNK3R92pk5DNLAfOo7PvNTz2J YMUojSx_L6yLgTgewWZhRYLxX3sSnBFr_3jNVTvLY4prLZPhEYKUZZ0Y0NGdiUy_lASucD8qA9MsZUNAVEsFNJGI5YsSB5SOFyQquVS2JvQVwrEFbxT8ZKR4ual4qwTxCpSLcXgpBaUULqo95YCrejYtiNROJrdshIxzD01zbEj7X9A.WcNk3PLOfiuq4jLl05CkPbUMTqyW7afRwMNLjBXU2UJFA4AQA9CIEZWh2eHVkL cck9dEi9OfjX6d2bjjzRps7T1EJYsFBX0YrX1QFGDe3oikiaNlBeuwmP4gK221ubc_tRz.kqXrtrmXQcQZaMzdi6ajFPU0nXT7otaSNqjntFtHet5jRb5_6HYbdeJuF6w10k2L.pCF",mdrd: "Ka2LKz95l6JOkY8lc9m9DvHWkfGKiIZ1AaqKUm9DRhY-1712920400-1.1.1.1-Nnak7JMndAnymdAusen.mOer7fUj miKf25Qan1LRibQ4sEIm7.8odXI9eEqqlwGSFX8MjsnIe85XLfIAXBMdYNHbK1TQ__JbcYKz97KboFuyAyG8oUxQjEXRYRe8eGT86cQwHRgOvq7UtLsRx2q6.0q3gay8oQwUwErO8dd5s56QuKKkOAwXAool2h9vddyezORE1TZIZEl5AxYPSL_NBg4qPO85IvoUkANVwjmLUmmeHwKojQq5rkP3PT33T5Jl1saNiy7Ow6 bIq6nsS96fiG_rZirW2qOsH5KxHgvVCzbgXnw7SukCJ631xmM_ULc1TzErMYIAHg7QrzLVYrgTD0fUQdeae8q.bGL16roBm7CZSttfeReEJ211PUhgZOBL2u_ACJCEo.EoCF9cC2_hUUlkncXZnyTHTBQjnCyhCcJIeM8E83gV2A8gWwTJLZQllbooXhd5B3t0EIDfm_oWY5AHcRFVL5eAisKy5WxCi1p8_fJk8GbVTTeT IQWwMUow6JzNYHXSWfRc0g.CYpzhyacq0SfDxDRo4WLxDP_hjw9kCmOCTm5QyrOLPfEQBMHGcpMvPAaIzMwGA8qVuK5jpxqjMT8MQq2A0w5W8H5K0EKyXIuRw0taSRPrMaY3Q4PmqhEjkABM6Nn148v7K1JjrxgOuyAsa84CK2QLeBINF75u3dblXE9kRpVscgmNbeyeRjv02Vzsu6tffzuNtaJTe_WhwJGUQFn_Xb4btH AhFooGTPCPHDO.jQ5eysnLcyX3pyG0AnHsvKbp6KSnAU4mr78wZMs521DKvkKLQSfszO8YeFMV6vc5r3Av6oLWEX8ZRYc8qMf..Ue8p_Y66AJlGgrml6AL.Awiuk0cytRdc5P7xgO34nWz2VHXkZAplLsEbQcnmnieI4dIfwN0Atza7DbqZvGl4z1jBeJsOnYEKQBYTISQ2.ybPY72iHthA.lGHrJYt7MHjXmqI4mEW1bB .woMUTmoEw7zqT3eW0jP4HOArBE1za5JfxcEXvOAiYLeIYahUScxrmYBsqimuQylzAhPBanJvXpgi2XVIs0xseFLaCoZym04vZoFpNGBrYC1Gl5N8DKSiwI63qW5Jn_iKaZiEX6z3hKSSt6JIiQ3jVja7ZS8U9K9AIfns4mB6AzsH1tmo3ZhyTyIkBH.Koy9rvjr39F3Dw3lhr.bulH_NNmQq.pPEGfzyOesCE153BbEey 
NgQ.psN6Kp4uDNXiwtyGvSuWiErWXXfSM4vMZtkT9HDnDzuOo73rZEe1r1pG3.nUyEHu_fYSFbwzTuGC9T0DBhZZQX0cApwuzZpe5TmKj8Pido44rmzaI0fonHdnjxBE3zkYKGniHUpOYgl3UCBbTxQZp6X46n_SYTiincyVZVhZxBH.JyL30_szrRTEYYwbMjlHbnViaPxKmfAbSt59gbYKX02g7o3GvNypE7mAdhQZrB SKtVYz616B0qrHAEqINwqfdhtkit_XtPOOKljYgtE7jFLJ.alUEoPkFK7XANNcoBUZUYT_lnSOqtRchvotu1HUHMTLBK9SGT97dQaDbxvPtRuGqExXl.i_X3OUrZNsI2CRe22D3Zz6gVdHTm_3GSPbhOw6Sr0h9V1F5cG6sHOVTJm_GqwIBC.0MrbtiZhZVqgBsaMmhfc8PWkiq3sd3XFxwq5vLoSJICWYutc8EL1NF_4d Prag0goqdXmMdEMeu4eMg0ZNJ_QONDEyh2snGr6eRrLh26gPh2pZek8knMcubFJeEKT4PEB_5wvKoRX9q3qpaSE84h0tgPhM2GZa1wGZFjSr8GTFCG6JmILeYCfNIDJc_MZ0s9E2OfKPqLULJm6mWDUucTf.n8n2_8CFs2ESzAzEC4I9ueSkv9l8wcmAVcLbgQhUiAag3lcl3M1uzB7a60cX04bTdqySQvWMPM9AjoOFTg 0WW33kxfA82wOQ5XaBYBzlQVgoXdluE9lrLbBdAOU8MvgsvMPRnWmzid.AYyJllMdc.bd0LxMOEkw4oO0rvjCuuCgzosd1LNEjt.My8OOswqyfucA4scx0gV3X1tp8nvTR8jASNJrTnBK1jYyaxc0aXG.EAlcPNKRFu7ewPRhjkfEq3635lRQIjsE22l4_7TiP7amEzx65R85tlOipOWNkjrcXSvQckujj0ojY0KJ2cZ_m _DA6GI4TqiWO.jzDYIpQzBoWpEA1tC9Afr2LAaYYTAN6XhUcgij6_ydKSAZLIp3kFfZgC4NoN9VfLY3Cru389WewUCIi_144v7b6PifgDZPpJTsekVy8sa9Mo5XuRXzf.fJvBV7GfjKKzQjguwj2oHysJithMTNOqG.dzuBzaSVMbn55Jr95cTE3S0wzdmQc45dJYiGEdiBzQ2jQg4Wj8E7GCXW_cxxPjexkklN7c26Px6 aoGdSLDiN.xVGt5TG2JX1gMJI_NvcgBBFiGs8OXGTDF.LGERg1EGLUMHvMzfoBHrD.zpFK.bANlBOE59iSLF9IdCqAdQ_9UH6NKu0gZlECYtPoiRep27MfkvIZjVBVb0bTCq8ULT23.6sxrLZM1j93JOG_.IU_fybk9tCBsLnNI1d2twMZuiIYpHUWWDs.ZtN8Xagn9Ud2Jj5ec.84BwV8K8kXZdMzI7MJsnfdWM2GTmoz GTeydFSakvXQgOraEbds.PVgZ22UwoOc_sQzafbm4p2xuYSm6H3ipEkChJEZ2uJGVOWaMVdZi3n0E2423QPw8Jw4.P866e7NrpDdAfCR1mbVMvT6L62W0sX5m2TIvFcqI6PoceIhvVmicazY0GHyJtI86WqtdjP4ycsW0Q0.4u60OeJIZ2yQdkMpk_Pjp6IlXzxkMa0CKYhHIIMof.Z6g",cRq: {ru: 'aHR0cHM6Ly93 d3czLnlnZ3RvcnJlbnQucWEv',ra: 'TW96aWxsYS81LjAgKFgxMTsgTGludXggeDg2XzY0KSBBcHBsZVdlYktpdC81MzcuMzYgKEtIVE1MLCBsaWtlIEdlY2tvKSBDaHJvbWUvMTE5LjAuMC4wIFNhZmFyaS81MzcuMzY=',rm: 'R0VU',d: 'O+/xD9k5nLg18vRVDbY3lD/huxOvcM15CoN6CUpI6t7She+XGvrwlD 
Rc8jX4qKXINM3UKjLcb0RXjfCo4O+1u4I+mwLjGqHcUlMv/zN25t0xWFRqPGZhRZbtcxCPd7uta+AWQfUH6Wblvz6aQkRwGlLLaYQ0GRdnp+FVY0rPl8ElY/rD33qB3ngQ6dyuJwa049zcnfxCMpzRcDJIehzSzpCKjGCmRHWPSvnGcm6f0zH3id9vwqV0g+MSaWLO7Nf4ydIrlGWIMCcO0V6xJUQuO/ncnRE/f3/q80t1 nz3Up5ucMSzFBZF3GbjEKVaqkTqm5qEph2WMq7gatqWq9VO8WEXZQ26bxTiOUdLR77a2Vtp7iYIBQBGQfk5V0266CJC9tM4HXTG5I/C867KbqAwMMNu/z3uFeorWifCL7s+kdPE8DfLPzvQnQDjGBqs6Ss1vWxyPGMid0hZasRWaegrrZJn5zFgYBrBiiNieoYK3pOK1TmcQNUaE8/LEV0otbHgX/KnELSoxmbXIERz3Sl izWAaZDLQ/AvmdCBlR+Skl9bopqnpShPSyiYKCncCIAIa1sSz3KkSS5BfqqILXJZULfw==',t: 'MTcxMjkyMDQwMC4xMzgwMDA=',cT: Math.floor(Date.now() / 1000),m: 'JlKXJsvU8MM/H0F4Fa0ErJMc79psZRTnEzX1shlb3GU=',i1: 'U76N0yB7rFMqUk34kEShxQ==',i2: '/c6BmPooBAdCHTCK cuHN1g==',zh: 'ObVB+awCy48LbcOUADeWbRLDqPIk5LMPOZ23SqUDr/U=',uh: 'bUoawGaWAtzeNbuWELCTDO8ujwwB0V691kuXhuDVpdY=',hh: '0tk1LYBta9W31yRjxjdWRGO6ap1ntPL0Xqr1/LFVnn8=',}};var cpo = document.createElement('script');cpo.src = '/cdn-cgi/challenge -platform/h/b/orchestrate/chl_page/v1?ray=8732c954df186f45';window._cf_chl_opt.cOgUHash = location.hash === '' && location.href.indexOf('#') !== -1 ? '#' : location.hash;window._cf_chl_opt.cOgUQuery = location.search === '' && location.hr ef.slice(0, location.href.length - window._cf_chl_opt.cOgUHash.length).indexOf('?') !== -1 ? '?' 
: location.search;if (window.history && window.history.replaceState) {var ogU = location.pathname + window._cf_chl_opt.cOgUQuery + window._cf _chl_opt.cOgUHash;history.replaceState(null, null, "\/?__cf_chl_rt_tk=CGLmW5Cz9uxTqCU0mQCjE9b7aU5NIbQ0SuVNx1akPBA-1712920400-0.0.1.1-1279" + window._cf_chl_opt.cOgUHash);cpo.onload = function() {history.replaceState(null, null, ogU);}}doc ument.getElementsByTagName('head')[0].appendChild(cpo);}());</script><div class="footer" role="contentinfo"><div class="footer-inner"><div class="clearfix diagnostic-wrapper"><div class="ray-id">Ray ID: <code>8732c954df186f45</code></div> </div><div class="text-center" id="footer-text">Performance & security by Cloudflare</div></div></div></body></html>
2024-04-12 12:13:20 INFO ReqId 140430565947136 Challenge detected. Title found: Just a moment...
2024-04-12 12:13:20 DEBUG ReqId 140430565947136 Waiting for title (attempt 1): Just a moment...
2024-04-12 12:13:20 DEBUG ReqId 140430574339840 Timeout waiting for selector
2024-04-12 12:13:20 DEBUG ReqId 140430574339840 Try to find the Cloudflare verify checkbox...
2024-04-12 12:13:20 DEBUG ReqId 140430574339840 Cloudflare verify checkbox not found on the page.
2024-04-12 12:13:20 DEBUG ReqId 140430574339840 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:13:20 DEBUG ReqId 140430574339840 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:13:21 DEBUG ReqId 140430681773824 A used instance of webdriver has been destroyed
2024-04-12 12:13:21 ERROR ReqId 140430681773824 Error: Error solving the challenge. Timeout after 60.0 seconds.
2024-04-12 12:13:21 DEBUG ReqId 140430681773824 Response => POST /v1 body: {'status': 'error', 'message': 'Error: Error solving the challenge. Timeout after 60.0 seconds.', 'startTimestamp': 1712920340354, 'endTimestamp': 1712920401109 , 'version': '3.3.10'}
2024-04-12 12:13:21 INFO ReqId 140430681773824 Response in 60.755 s
2024-04-12 12:13:21 INFO ReqId 140430681773824 172.18.0.1 POST http://localhost:8191/v1 500 Internal Server Error
2024-04-12 12:13:21 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:13:21 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:13:21 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:13:21 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:13:21 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:13:23 DEBUG ReqId 140430565947136 Waiting for title (attempt 2): Just a moment...
2024-04-12 12:13:24 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:13:24 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:13:24 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:13:24 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:13:24 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:13:26 DEBUG ReqId 140430565947136 Waiting for title (attempt 3): Just a moment...
2024-04-12 12:13:27 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:13:27 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:13:27 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:13:27 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:13:27 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:13:29 DEBUG ReqId 140430565947136 Waiting for title (attempt 4): Just a moment...
2024-04-12 12:13:30 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:13:30 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:13:31 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:13:31 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:13:31 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:13:33 DEBUG ReqId 140430565947136 Waiting for title (attempt 5): Just a moment...
2024-04-12 12:13:34 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:13:34 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:13:34 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:13:34 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:13:34 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:13:36 DEBUG ReqId 140430565947136 Waiting for title (attempt 6): Just a moment...
2024-04-12 12:13:37 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:13:37 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:13:37 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:13:37 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:13:37 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:13:39 DEBUG ReqId 140430565947136 Waiting for title (attempt 7): Just a moment...
2024-04-12 12:13:40 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:13:40 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:13:40 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:13:40 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:13:40 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:13:42 DEBUG ReqId 140430565947136 Waiting for title (attempt 8): Just a moment...
2024-04-12 12:13:43 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:13:43 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:13:43 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:13:43 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:13:43 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:13:45 DEBUG ReqId 140430565947136 Waiting for title (attempt 9): Just a moment...
2024-04-12 12:13:46 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:13:46 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:13:46 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:13:46 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:13:46 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:13:48 DEBUG ReqId 140430565947136 Waiting for title (attempt 10): Just a moment...
2024-04-12 12:13:49 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:13:49 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:13:49 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:13:49 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:13:49 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:13:51 DEBUG ReqId 140430565947136 Waiting for title (attempt 11): Just a moment...
2024-04-12 12:13:52 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:13:52 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:13:52 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:13:52 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:13:52 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:13:54 DEBUG ReqId 140430565947136 Waiting for title (attempt 12): Just a moment...
2024-04-12 12:13:55 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:13:55 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:13:55 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:13:55 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:13:55 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:13:57 DEBUG ReqId 140430565947136 Waiting for title (attempt 13): Just a moment...
2024-04-12 12:13:58 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:13:58 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:13:58 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:13:58 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:13:58 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:14:00 DEBUG ReqId 140430565947136 Waiting for title (attempt 14): Just a moment...
2024-04-12 12:14:01 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:14:01 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:14:01 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:14:01 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:14:01 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:14:03 DEBUG ReqId 140430565947136 Waiting for title (attempt 15): Just a moment...
2024-04-12 12:14:04 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:14:04 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:14:04 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:14:04 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:14:04 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:14:06 DEBUG ReqId 140430565947136 Waiting for title (attempt 16): Just a moment...
2024-04-12 12:14:07 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:14:07 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:14:07 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:14:07 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:14:07 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:14:09 DEBUG ReqId 140430565947136 Waiting for title (attempt 17): Just a moment...
2024-04-12 12:14:10 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:14:10 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:14:11 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:14:11 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:14:11 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:14:13 DEBUG ReqId 140430565947136 Waiting for title (attempt 18): Just a moment...
2024-04-12 12:14:14 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:14:14 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:14:14 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:14:14 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:14:14 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:14:16 DEBUG ReqId 140430565947136 Waiting for title (attempt 19): Just a moment...
2024-04-12 12:14:17 DEBUG ReqId 140430565947136 Timeout waiting for selector
2024-04-12 12:14:17 DEBUG ReqId 140430565947136 Try to find the Cloudflare verify checkbox...
2024-04-12 12:14:17 DEBUG ReqId 140430565947136 Cloudflare verify checkbox not found on the page.
2024-04-12 12:14:17 DEBUG ReqId 140430565947136 Try to find the Cloudflare 'Verify you are human' button...
2024-04-12 12:14:17 DEBUG ReqId 140430565947136 The Cloudflare 'Verify you are human' button not found on the page.
2024-04-12 12:14:19 DEBUG ReqId 140430565947136 Waiting for title (attempt 20): Just a moment...
2024-04-12 12:14:20 DEBUG ReqId 140430599517952 A used instance of webdriver has been destroyed
2024-04-12 12:14:20 ERROR ReqId 140430599517952 Error: Error solving the challenge. Timeout after 60.0 seconds.
2024-04-12 12:14:20 DEBUG ReqId 140430599517952 Response => POST /v1 body: {'status': 'error', 'message': 'Error: Error solving the challenge. Timeout after 60.0 seconds.', 'startTimestamp': 1712920399515, 'endTimestamp': 1712920460163 , 'version': '3.3.10'}
```
### Screenshots
_No response_ | closed | 2024-04-12T11:41:04Z | 2024-04-12T11:49:00Z | https://github.com/FlareSolverr/FlareSolverr/issues/1150 | [] | GattusoGregory | 0 |
flasgger/flasgger | rest-api | 172 | Smarter merging of specs_dict | I tried implementing a custom decorator that autofills some repetitive documentation data using the `specs_dict` attribute on the decorated function (this is also used by `@swag_from`). However, because of [this line](https://github.com/rochacbruno/flasgger/blob/8adba148ec4b262ac304b8439e10f55ef521c285/flasgger/utils.py#L110), some of my autofilled data gets overwritten by data in the docblock. For example, if `specs_dict` contains `responses: {403: ...}` and my docblock has `responses: {200: ...}`, I would like the result to contain both of the responses.
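The merge I have in mind would behave roughly like this (a sketch of the desired behaviour, not flasgger's current code; `deep_merge` is my own helper name):

```python
def deep_merge(base, override):
    """Merge two spec dicts; on key collision, nested dicts are merged
    key-by-key instead of `override` clobbering `base` wholesale."""
    merged = dict(base)
    for key, value in override.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

specs = deep_merge({"responses": {403: "forbidden"}},   # from specs_dict
                   {"responses": {200: "ok"}})          # from the docblock
print(specs)  # -> {'responses': {403: 'forbidden', 200: 'ok'}}
```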
Do you have any thoughts on this? If there's nothing wrong with this feature, I'm willing to implement it. | closed | 2017-11-27T11:59:25Z | 2017-12-06T05:13:11Z | https://github.com/flasgger/flasgger/issues/172 | [] | janbuchar | 2 |
dunossauro/fastapi-do-zero | sqlalchemy | 133 | maume fast_zero | | Project link | Your git @ | Comment (optional) |
|-------------|-------------|-------------|
| [crono_task](https://github.com/mau-me/crono_task_backend) | [@mau-me](https://github.com/mau-me) | Task manager with a backend based on fast_zero | | closed | 2024-05-01T22:13:14Z | 2024-05-01T22:14:32Z | https://github.com/dunossauro/fastapi-do-zero/issues/133 | [] | mau-me | 0 |
djstein/modern-django | rest-api | 1 | Good articles | This is a good series of Django resources. Please keep it up and can't wait for the rest 👍 | closed | 2017-10-13T13:48:28Z | 2017-12-20T00:55:04Z | https://github.com/djstein/modern-django/issues/1 | [] | sfdye | 0 |
ageitgey/face_recognition | python | 1,499 | Can we train our own face encoding model? | Especially for comparing two faces, it uses the KNN algorithm. Can we train the model behind this algorithm? I could not find any documentation about this. | open | 2023-05-04T06:22:16Z | 2023-05-04T06:22:16Z | https://github.com/ageitgey/face_recognition/issues/1499 | [] | fishfree | 0 |
dpgaspar/Flask-AppBuilder | flask | 1,858 | select record to filter another view | ### Environment
Flask-Appbuilder version: 4.1.1
### Expected results
I'd like to define an action in MyViewA (I followed this [doc](https://flask-appbuilder.readthedocs.io/en/latest/actions.html)) to filter another view MyViewB. To create the filter I used this [doc](https://flask-appbuilder.readthedocs.io/en/latest/advanced.html#base-filtering).
models.py example
```python
class TableA(Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(100))
class TableB(Model):
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(100))
id_a = db.Column(db.ForeignKey('tablea.id'), index=True)
tablea = relationship('TableA', primaryjoin='TableB.id_a == TableA.id')
```
I need to create a view of TableA with an action on record selection that redirects to the view of TableB with a filter like:
```python
base_filters = [['id_a',FilterEqual, THE_ID_OF_THE_RECORD_SELECTED],]
```
Thanks in advance
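For what it's worth, a sketch of one way to do this (my own guess, not tested: the redirect URL and the `_flt_0_id_a` query-string filter follow FAB's list-view filter convention, but the endpoint name depends on your view class):

```python
from flask import redirect
from flask_appbuilder import ModelView
from flask_appbuilder.actions import action
from flask_appbuilder.models.sqla.interface import SQLAInterface


class TableAView(ModelView):
    datamodel = SQLAInterface(TableA)

    @action("filter_b", "Show related TableB rows", "", "fa-filter", single=True)
    def filter_b(self, item):
        # _flt_0_id_a=<id> applies an equality filter on TableB.id_a
        # in the target list view (URL filter convention of FAB).
        return redirect(f"/tablebview/list/?_flt_0_id_a={item.id}")
```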
| open | 2022-06-07T14:12:39Z | 2022-07-26T11:43:42Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1858 | [] | piccio123 | 1 |
eriklindernoren/ML-From-Scratch | data-science | 111 | No module named 'mlfromscratch.utils.loss_functions' | Traceback (most recent call last):
File "C:\G\ML-From-Scratch\mlfromscratch\examples\gradient_boosting_regressor.py", line 9, in <module>
from mlfromscratch.utils.loss_functions import SquareLoss
ModuleNotFoundError: No module named 'mlfromscratch.utils.loss_functions' | open | 2024-11-20T08:51:03Z | 2024-11-20T08:51:03Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/111 | [] | LeiYangGH | 0 |
comfyanonymous/ComfyUI | pytorch | 6,959 | wtf | ### Your question
cannot work
### Logs
```powershell
```
### Other
_No response_ | closed | 2025-02-24T15:46:29Z | 2025-02-24T15:46:42Z | https://github.com/comfyanonymous/ComfyUI/issues/6959 | [
"User Support"
] | drfai | 0 |
CPJKU/madmom | numpy | 263 | [Q] RNNBeatTracker filter choice | Hello guys. Fantastic work with the project. I am trying to port some parts of the project to tensorflow, and I am currently implementing the RNNBeatTracker RNN based on the paper you have linked.
I am wondering if there is any particular reason you choose to filter the spectrogram with a LogarithmicFilterBank instead of MelFilterBank?
Also, any reason why the power spectrum in the Spectrogram class is computed simply by taking the absolute value instead of squaring the absolute value of the Short Time Fourier Transform? | closed | 2017-03-06T00:54:08Z | 2019-02-06T10:35:46Z | https://github.com/CPJKU/madmom/issues/263 | [] | ghost | 14 |
Textualize/rich | python | 2,880 | [BUG] IPython extension: add ipython cell line to traceback | **Description**
When using the rich extension for IPython terminal (`%load_ext rich`), and when there is an error, I can't see the line from which the error arises, only its line number.
I often have pretty long cells and it's not easy to spot which line is causing the error since there are no line numbers displayed in IPython cells.
The default traceback does show the cell line in addition to its line number.
Would it be possible to have the same with the nice traceback of the rich extension?
**Code to reproduce**
```python
In [1]: import random
...:
...: random.choices([0, 1, 2, 3, 4], k=3, p=[0.1, 0.3, 0.1, 0.25, 0.25])
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[1], line 3
1 import random
----> 3 random.choices([0, 1, 2, 3, 4], k=3, p=[0.1, 0.3, 0.1, 0.25, 0.25])
TypeError: choices() got an unexpected keyword argument 'p'
In [2]: %load_ext rich
In [3]: import random
...:
...: random.choices([0, 1, 2, 3, 4], k=3, p=[0.1, 0.3, 0.1, 0.25, 0.25])
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <module>:3 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: choices() got an unexpected keyword argument 'p'
```
**Platform**
<details>
<summary>Click to expand</summary>
```
╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮
│ A high level console interface. │
│ │
│ ╭──────────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=213 ColorSystem.TRUECOLOR> │ │
│ ╰──────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = 'truecolor' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 54 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=213, height=54), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=213, │
│ is_terminal=True, │
│ encoding='utf-8', │
│ max_height=54, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=213, height=54) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 213 │
╰──────────────────────────────────────────────────────────────────────────────────╯
╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮
│ Windows features available. │
│ │
│ ╭───────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ ╰───────────────────────────────────────────────────╯ │
│ │
│ truecolor = False │
│ vt = False │
╰───────────────────────────────────────────────────────╯
╭────── Environment Variables ───────╮
│ { │
│ 'TERM': 'xterm-kitty', │
│ 'COLORTERM': 'truecolor', │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': None, │
│ 'JUPYTER_LINES': None, │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰────────────────────────────────────╯
platform="Linux"
```
</details>
| open | 2023-03-15T14:01:25Z | 2023-03-15T14:01:49Z | https://github.com/Textualize/rich/issues/2880 | [
"Needs triage"
] | Paul-Aime | 1 |
iperov/DeepFaceLab | machine-learning | 932 | Error extracting frames from data_src in deepfacelab colab | I was trying to run DFL_Colab, i imported the workspace folder successfully but got this error when i ran the extract frames cell
"/content
/!\ input file not found.
Done. "
please, any help regarding this? | open | 2020-10-30T00:34:27Z | 2023-06-08T21:51:12Z | https://github.com/iperov/DeepFaceLab/issues/932 | [] | jamesbright | 2 |
microsoft/nni | deep-learning | 4,810 | TPE tuner failed to load nested search space json | **Describe the issue**:
**Environment**:
- NNI version: 2.7
- Training service (local|remote|pai|aml|etc): local
- Client OS: ubuntu 18
- Server OS (for remote mode only):
- Python version:python3.7
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?: yes
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
```json
{
"C": { "_type": "choice", "_value": [0.01, 0.1, 1, 10, 100] },
"kernel": { "_type": "choice", "_value": [
{ "_name": "linear" },
{ "_name": "poly",
"degree": { "_type": "randint", "_value": [1, 11] },
"gamma": { "_type": "choice", "_value": ["scale", "auto"] }
},
{ "_name": "rbf",
"gamma": { "_type": "choice", "_value": ["scale", "auto"] }
},
{ "_name": "sigmoid",
"gamma": { "_type": "choice", "_value": ["scale", "auto"] }
}
]},
"probability": {"_type":"choice", "_value": [true]}
}
```
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
- trail log:
```
[2022-04-26 15:16:58] PRINT {'C': 0.1, 'kernel': OrderedDict([('_name', 'sigmoid'), ('gamma', 'auto')]), 'probability': True}
[2022-04-26 15:16:58] ERROR :
File "/usr/local/lib/python3.7/dist-packages/sklearn/svm/_base.py", line 255, in fit
fit(X, y, sample_weight, solver_type, kernel, random_seed=seed)
File "/usr/local/lib/python3.7/dist-packages/sklearn/svm/_base.py", line 333, in _dense_fit
random_seed=random_seed,
File "sklearn/svm/_libsvm.pyx", line 173, in sklearn.svm._libsvm.fit
ValueError: OrderedDict([('_name', 'sigmoid'), ('gamma', 'auto')]) is not in list
```
**How to reproduce it?**:
Use this search space to train the sklearn.svm.SVC model
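As a workaround on my side (my own sketch, not an official nni API): flatten the nested-choice value before handing the parameters to `SVC`, so sklearn never sees the nested dict:

```python
def flatten_nested_param(params):
    """Turn {'kernel': {'_name': 'sigmoid', 'gamma': 'auto'}} into
    {'kernel': 'sigmoid', 'gamma': 'auto'} so sklearn accepts it."""
    flat = {}
    for key, value in params.items():
        if isinstance(value, dict) and "_name" in value:
            sub = dict(value)           # also works for OrderedDict
            flat[key] = sub.pop("_name")
            flat.update(sub)
        else:
            flat[key] = value
    return flat

params = {"C": 0.1, "kernel": {"_name": "sigmoid", "gamma": "auto"}, "probability": True}
print(flatten_nested_param(params))
# -> {'C': 0.1, 'kernel': 'sigmoid', 'gamma': 'auto', 'probability': True}
```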
It is executable in version 2.5, but this problem occurs in version 2.7 | closed | 2022-04-26T07:32:07Z | 2022-09-08T03:10:20Z | https://github.com/microsoft/nni/issues/4810 | [] | tjlin-github | 3 |
MagicStack/asyncpg | asyncio | 276 | stored procedure returning a record | Hi, I have a stored procedure like this:
```
get_user_sp(user_email VARCHAR, user_password VARCHAR)
RETURNS RECORD AS $$
DECLARE
result RECORD;
BEGIN
SELECT user_id, is_active INTO result FROM users WHERE users.email = user_email AND users.password = user_password LIMIT 1;
RETURN result;
END;
$$ LANGUAGE plpgsql;
```
and execute it with (db is asyncpg Pool):
```
async with db.acquire() as connection:
user = await connection.fetchrow('''
select get_user_sp($1, $2)
''', email, password)
```
and result is:
```
get_user_sp=(1, True)
```
and when I use raw SQL:
```
async with db.acquire() as connection:
user = await connection.fetchrow('''
SELECT user_id, is_active FROM users WHERE email = $1 AND password = $2 LIMIT 1
''', email, password)
```
I get the desired result:
```
{ user_id: 1, is_active: True }
```
**Question**: how to get a result like this => `{ user_id: 1, is_active: True }`?
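In case it helps others: the record comes back anonymous because the function's return type is a bare `RECORD`. Two options I would try (sketches only; column names and types are taken from the example above):

```sql
-- Option 1: keep the function, but give PostgreSQL a column definition
-- list when selecting FROM it, so the anonymous record can be unpacked:
SELECT * FROM get_user_sp($1, $2) AS t(user_id integer, is_active boolean);

-- Option 2: declare OUT parameters so the shape is known up front:
CREATE OR REPLACE FUNCTION get_user_sp(
    user_email VARCHAR, user_password VARCHAR,
    OUT user_id integer, OUT is_active boolean) AS $$
BEGIN
    SELECT u.user_id, u.is_active INTO user_id, is_active
    FROM users u
    WHERE u.email = user_email AND u.password = user_password
    LIMIT 1;
END;
$$ LANGUAGE plpgsql;
```

With option 2, `await connection.fetchrow('SELECT * FROM get_user_sp($1, $2)', email, password)` should yield a record with named `user_id` and `is_active` fields; with option 1 the column definition list goes into the query itself.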
| closed | 2018-03-24T15:16:29Z | 2018-03-28T15:11:06Z | https://github.com/MagicStack/asyncpg/issues/276 | [
"question"
] | python-programmer | 1 |
microsoft/nni | machine-learning | 5,298 | cannot fix the mask of the interdependent layers | **Describe the issue**:
When I pruned the segmentation model and saved mask.pth, then ran speed-up, the mask could not fit the new architecture of the model.
**Environment**:
- NNI version:2.0
- Training service (local|remote|pai|aml|etc):local
- Client OS:ubuntu
- Server OS (for remote mode only):
- Python version:3.7
- PyTorch/TensorFlow version:pytorch
- Is conda/virtualenv/venv used?:yes
- Is running in Docker?:no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
```
mask_conflict.py, line 195, in fix_mask_conflict
    assert shape[0] % group == 0
AssertionError
```

When I print `shape[0]` and `group`:

```
32 32
32 32
16 1
96 96
96 96
24 320
```

The group == 320 is bigger than shape[0] == 24.
How can I fix the problem?
**How to reproduce it?**: | closed | 2022-12-23T11:12:32Z | 2023-02-24T02:36:58Z | https://github.com/microsoft/nni/issues/5298 | [] | sungh66 | 10 |
remsky/Kokoro-FastAPI | fastapi | 25 | [Bug]: losing connection while streaming does not interrupt generation | Reproduction steps:
1- start streaming a long piece of text via the openai compatible endpoint
2- close page / stop streaming / lose connection
Expectation : chunks generation stops on the server. Ready to start immediately for next query.
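A sketch of how I'd expect that to work (hypothetical wiring, not Kokoro's actual code: `is_disconnected` stands in for Starlette's `request.is_disconnected()`). Check for the client between chunks and abandon generation as soon as it is gone:

```python
import asyncio

async def stream_chunks(chunks, is_disconnected):
    """Yield audio chunks, stopping as soon as the client disconnects."""
    for chunk in chunks:
        if await is_disconnected():
            break  # stop generating instead of finishing the whole text
        yield chunk

async def main():
    calls = 0

    async def fake_disconnect():  # simulated client that drops after 2 chunks
        nonlocal calls
        calls += 1
        return calls > 2

    return [c async for c in stream_chunks(["a", "b", "c", "d"], fake_disconnect)]

print(asyncio.run(main()))  # -> ['a', 'b']
```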
Current behaviour: server is stuck completing previous generation. new requests are pending. | closed | 2025-01-11T17:02:18Z | 2025-01-14T15:07:50Z | https://github.com/remsky/Kokoro-FastAPI/issues/25 | [
"enhancement"
] | mrHBH | 4 |
STVIR/pysot | computer-vision | 210 | Training results are unstable: how do I evaluate whether a set of hyperparameters or a change is a real improvement? | I ran the siamrpn_r50_l234_dwxcorr_8gpu config provided here as-is, ran hp_search.py on VOT2018 once for each of epochs 11-20, and then used eval.py to take the best score among them.
Repeating the experiment above with different seeds in train.py, the best and worst final EAO end up differing by roughly 6 points.
With training results affected this much by randomness, if I want to try a new set of hyperparameters or study an improvement, how can I show that it is a real improvement rather than a lucky accident of the randomness?
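The usual remedy is to repeat each configuration over several seeds and compare mean plus/minus standard deviation, not single best runs (sketch below; the EAO numbers are made up for illustration):

```python
import statistics

def summarize_runs(scores):
    """Mean and sample standard deviation of a metric (e.g. EAO) over seeds."""
    return statistics.mean(scores), statistics.stdev(scores)

baseline = [0.38, 0.41, 0.35, 0.40]   # one run per seed
candidate = [0.43, 0.44, 0.41, 0.45]

b_mean, b_std = summarize_runs(baseline)
c_mean, c_std = summarize_runs(candidate)
# A change is convincing when the mean gap is large relative to the spread.
print(round(b_mean, 4), round(c_mean, 4))  # -> 0.385 0.4325
```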
Thanks for any reply. | closed | 2019-10-18T13:53:03Z | 2019-12-18T09:49:21Z | https://github.com/STVIR/pysot/issues/210 | [] | JamesLearning | 4 |
tensorflow/tensor2tensor | deep-learning | 1,745 | Using MOE utils | I have a question about using the MOE utils. How would I integrate these utilities into my own model to create a mixture of experts model where each expert is a convolutional neural network? | closed | 2019-11-15T20:05:20Z | 2020-04-15T19:42:42Z | https://github.com/tensorflow/tensor2tensor/issues/1745 | [] | zack466 | 0 |
plotly/dash-cytoscape | dash | 82 | Publish a new release | There have been changes to `Tree.py` that have not been published. Some of these changes fix import issues with Python 2.7. | closed | 2020-03-18T01:54:40Z | 2020-07-14T14:55:56Z | https://github.com/plotly/dash-cytoscape/issues/82 | [
"bug"
] | chriddyp | 0 |
home-assistant/core | asyncio | 140,942 | Weheat integration fails to connect | ### The problem
Hi, after installing the weheat integration I am trying to connect the integration to the server. I have logged in, however I get the following error:
```python
File "/usr/local/lib/python3.13/site-packages/weheat/exceptions.py", line 156, in from_response
raise ApiException(http_resp=http_resp, body=body, data=data)
weheat.exceptions.ApiException: (429)
Reason: Too Many Requests
HTTP response headers: <CIMultiDictProxy('Server': 'nginx/1.18.0 (Ubuntu)', 'Date': 'Thu, 13 Mar 2025 13:36:52 GMT', 'Content-Length': '0', 'Connection': 'keep-alive')>
```
What I've tried:
- Removing the integration and reinstalling it.
- Disabling the integration four a couple of hours and re-enabling it.
- It used to work fine before.
Note that the weheat app works fine.
Any suggestions on things I can try are very welcome!
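Not weheat-specific, just for context: 429 means the server is rate-limiting the client, and the standard client-side behaviour is exponential backoff between retries. A generic sketch of the delay schedule:

```python
def backoff_delays(base=1.0, factor=2.0, retries=5, cap=300.0):
    """Waits (in seconds) before each retry of a 429-rate-limited request."""
    return [min(cap, base * factor ** i) for i in range(retries)]

print(backoff_delays())  # -> [1.0, 2.0, 4.0, 8.0, 16.0]
```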
### What version of Home Assistant Core has the issue?
core-2025.2.4
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
weheat
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/weheat/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-19T17:47:26Z | 2025-03-24T06:37:09Z | https://github.com/home-assistant/core/issues/140942 | [
"integration: weheat"
] | Josnoww | 4 |
ets-labs/python-dependency-injector | flask | 342 | Please provide builds for ARM based systems | I wrote a program using the Dependency Injector and wanted to use it on an embedded system using Aarch64 architecture. Unfortunately though, it doesn't come with gcc to build it or any package manager to install the necessary prerequisites. Would you consider providing wheel builds for ARM based systems?
On [cibuildwheel](https://github.com/joerick/cibuildwheel) page there's a mention that Linux ARM builds are currently available only on Travis, which, considering the current situation around it, may prove problematic. But if you were going to migrate to GitHub Actions, I've found a PR https://github.com/joerick/cibuildwheel/pull/482 that seems to be on a good way to bring the ARM support via Qemu there. | closed | 2020-12-22T14:46:36Z | 2021-01-11T00:56:59Z | https://github.com/ets-labs/python-dependency-injector/issues/342 | [
"enhancement"
] | czukowski | 6 |
huggingface/transformers | deep-learning | 36,010 | ImportError: cannot import name 'GenerationMixin' from 'transformers.generation' | ### System Info
- `transformers` version: 4.47.1
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
- Python version: 3.11.11
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 3090
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Running Usage example on this [repo](https://github.com/segment-any-text/wtpsplit)
Run the codes on new minicoda virtual environmet only after `pip install wtpsplit` and `pip install torch`.
```
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[2], line 3
      1 from wtpsplit import SaT
----> 3 sat = SaT("sat-3l")
      4 # optionally run on GPU for better performance
      5 # also supports TPUs via e.g. sat.to("xla:0"), in that case pass `pad_last_batch=True` to sat.split
      6 sat.half().to("cuda")

File ~/miniforge3/envs/sat/lib/python3.11/site-packages/wtpsplit/__init__.py:514, in SaT.__init__(self, model_name_or_model, from_pretrained_kwargs, ort_providers, ort_kwargs, style_or_domain, language, lora_path, hub_prefix)
    511 except ModuleNotFoundError:
    512     raise ValueError("Please install `torch` to use WtP with a PyTorch model.")
--> 514 import wtpsplit.models  # noqa
    516 self.model = PyTorchWrapper(
    517     AutoModelForTokenClassification.from_pretrained(
    518         model_name_to_fetch, **(from_pretrained_kwargs or {})
    519     )
    520 )
    521 # LoRA LOADING

File ~/miniforge3/envs/sat/lib/python3.11/site-packages/wtpsplit/models.py:13
     8 from transformers import AutoModel, AutoModelForTokenClassification
     9 from transformers.modeling_outputs import (
    10     BaseModelOutputWithPoolingAndCrossAttentions,
    11     BaseModelOutputWithPastAndCrossAttentions,
    12 )
---> 13 from transformers.modeling_utils import ModuleUtilsMixin
    14 from transformers.models.bert.modeling_bert import BertEncoder, BertForTokenClassification, BertModel, BertPooler
    15 from transformers.models.canine.modeling_canine import (
    16     _PRIMES,
    17     ACT2FN,
   (...)
    33     TokenClassifierOutput,
    34 )

File ~/miniforge3/envs/sat/lib/python3.11/site-packages/transformers/modeling_utils.py:46
    44 from .configuration_utils import PretrainedConfig
    45 from .dynamic_module_utils import custom_object_save
---> 46 from .generation import CompileConfig, GenerationConfig, GenerationMixin
    47 from .integrations import PeftAdapterMixin, deepspeed_config, is_deepspeed_zero3_enabled
    48 from .loss.loss_utils import LOSS_MAPPING

ImportError: cannot import name 'GenerationMixin' from 'transformers.generation' (/home/bering-gpu-3/miniforge3/envs/sat/lib/python3.11/site-packages/transformers/generation/__init__.py)
```
### Expected behavior
There should be no ImportError, and it should print `['This is a test ', 'This is another test']` | open | 2025-02-03T08:40:41Z | 2025-03-15T08:04:24Z | https://github.com/huggingface/transformers/issues/36010 | [
"bug"
] | sij411 | 1 |
autogluon/autogluon | data-science | 4,378 | What does "gluon" mean? | Came up in a team meeting while chatting about you guys and we'd love to know what your name means! | closed | 2024-08-12T14:54:28Z | 2024-08-12T23:14:48Z | https://github.com/autogluon/autogluon/issues/4378 | [
"question"
] | skim2257 | 1 |
recommenders-team/recommenders | deep-learning | 1,675 | [BUG] python_chrono_split could create time leakage | The current behavior of python_chrono_split is to take x% of each user/item as train and the rest is the test dataset.
This could create time leakage because some items might become popular at some date t=0, but users whose train time happens before that (t<0) could easily be recommended that popular item from the future.
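A leakage-free alternative (a sketch of the idea, not the library's implementation) is a single global cutoff, so every test interaction is strictly later than every train interaction:

```python
def chrono_split_global(events, train_ratio=0.75):
    """Split interactions on ONE global timestamp cutoff."""
    events = sorted(events, key=lambda e: e["t"])
    cutoff = events[int(len(events) * train_ratio) - 1]["t"]
    train = [e for e in events if e["t"] <= cutoff]
    test = [e for e in events if e["t"] > cutoff]
    return train, test

events = [{"u": 1, "t": 3}, {"u": 2, "t": 1}, {"u": 1, "t": 2}, {"u": 2, "t": 4}]
train, test = chrono_split_global(events)
print([e["t"] for e in train], [e["t"] for e in test])  # -> [1, 2, 3] [4]
```

The trade-off is that some users may end up only in train or only in test, which per-user splitting avoids at the cost of the leakage described above.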
| open | 2022-03-14T21:52:08Z | 2022-03-22T18:07:56Z | https://github.com/recommenders-team/recommenders/issues/1675 | [
"bug"
] | chanansh | 3 |
kizniche/Mycodo | automation | 762 | Downsampling and retention policies for Influxdb | My perception (perhaps wrongly) is that influxdb is the weak point in the system, e.g. system hangs etc can often be tracked to influxdb stopping for some reason...
Currently as the systems runs.... the database just keeps getting bigger, and bigger...
I am currently sampling/recording data at 1 min intervals... so it rapidly accumulates, this makes extracting visualizing long term historical data slow (and risky to the system).
Whilst having the data at 1 min intervals is good for accurate pid control in the short term... for long term logging of conditions this is/may be unnecessary, e.g. long term having an hourly average, max and min may be perfectly adequate.
This could be done with continuous queries and retention policies in influxdb...
see [](https://atomstar.tweakblogs.net/blog/17748/influxdb-retention-policy-and-data-downsampling)
I am planning to try and set this up 'by hand' but wondering what the implications will be?
Probably it is better to implement when the influxdb database is first created?
It is also tricky to figure out the data in the db because the device_ids are uninformative (are they random? for ds18b20s etc, could the id just use the actual address of the device as these are already unique?):
> C,channel=0,device_id=118e128d-01de-4523-9f59-4f3ac61de63c,measure=temperature
> C,channel=0,device_id=290c02ab-6452-4da5-bb76-ecc223209514,measure=temperature
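For anyone trying this, the shape of what I have in mind (a sketch only: the database name, durations and the `value` field are guesses to adapt; the `C` measurement is taken from the entries above):

```sql
-- Keep raw 1-minute points for 30 days (becomes the default policy)...
CREATE RETENTION POLICY "raw_30d" ON "mycodo_db" DURATION 30d REPLICATION 1 DEFAULT

-- ...and keep hourly min/mean/max forever in a second policy.
CREATE RETENTION POLICY "downsampled" ON "mycodo_db" DURATION INF REPLICATION 1

CREATE CONTINUOUS QUERY "cq_hourly" ON "mycodo_db" BEGIN
  SELECT min("value"), mean("value"), max("value")
  INTO "mycodo_db"."downsampled"."C"
  FROM "C"
  GROUP BY time(1h), *
END
```

As the linked post notes, existing points do not move between policies when the default changes, so setting this up when the database is first created is indeed cleaner.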
| closed | 2020-04-02T10:19:49Z | 2021-02-02T14:18:01Z | https://github.com/kizniche/Mycodo/issues/762 | [] | drgrumpy | 10 |
ipython/ipython | data-science | 14,235 | TypeError when using `store` magic command. |
I'm using `ipython==8.17.2`, and I'm seeing the following traceback when running `%store -r objects_path` in a Jupyter notebook:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[2], line 1
----> 1 get_ipython().run_line_magic('store', '-r objects_path')
3 if "objects_path" not in dir():
4 objects_path = './data/data_objects/'
File ~/.pyenv/versions/3.10.8/envs/my_env/lib/python3.10/site-packages/IPython/core/interactiveshell.py:2454, in InteractiveShell.run_line_magic(self, magic_name, line, _stack_depth)
2452 kwargs['local_ns'] = self.get_local_scope(stack_depth)
2453 with self.builtin_trap:
-> 2454 result = fn(*args, **kwargs)
2456 # The code below prevents the output from being displayed
2457 # when using magics with decorator @output_can_be_silenced
2458 # when the last Python token in the expression is a ';'.
2459 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):
File ~/.pyenv/versions/3.10.8/envs/my_env/lib/python3.10/site-packages/IPython/extensions/storemagic.py:148, in StoreMagics.store(self, parameter_s)
146 for arg in args:
147 try:
--> 148 obj = db['autorestore/' + arg]
149 except KeyError:
150 try:
TypeError: 'PickleShareDB' object is not subscriptable
```
I'm now aware that I need to install the `pickleshare` library, but based on the changes in https://github.com/ipython/ipython/pull/14217/files, I'm wondering if a warning message should be raised instead - it would certainly be more informative. | closed | 2023-11-02T13:50:37Z | 2024-02-08T09:32:36Z | https://github.com/ipython/ipython/issues/14235 | [
"bug"
] | sagzest | 2 |
aminalaee/sqladmin | fastapi | 894 | Ability to leave a FileField empty when a photo is already uploaded | ### Checklist
- [x] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
At the moment, when going to layouts where there is a FileField and a photo has already been uploaded, it does not give the opportunity not to change the photo.
an error occurs in this line
AttributeError: 'str' object has no attribute 'name'
704 form_data.append((key, UploadFile(filename=f.name, file=f.open())))
file application.py
### Describe the solution you would like.
_No response_
### Describe alternatives you considered
_No response_
### Additional context
This is how I save photos to the database, via `on_model_change`:

```python
async def on_model_change(self, data, model, is_created, request):
    file = data.get("photo_name")
    file_folder = "products"
    if file:
        uploaded_photo_name = await upload_to_s3(
            file_folder=file_folder,
            file=file,
            model=model,
            is_created=is_created,
        )
        model.photo_name = uploaded_photo_name
        data["photo_name"] = uploaded_photo_name
```

`upload_to_s3` returns the filename of the uploaded file.
I don't use fastapi storage, I write the file name in the DB. As far as I understand, this is the reason for the error.
Sorry in advance for my English. I use a translator. | open | 2025-03-15T16:32:43Z | 2025-03-15T16:46:57Z | https://github.com/aminalaee/sqladmin/issues/894 | [] | burvelandrei | 0 |
Lightning-AI/pytorch-lightning | data-science | 19,738 | Loading from a checkpoint does not work properly in distributed training | ### Bug description
I train my model on multiple GPUs and save it with the `checkpoint callback` and `save_hyperparameters()`.
I get a directory which looks like this, so this part seems to work flawlessly:
```
/epoch=5
--/checkpoint
----mp_rank_00_model_states.pt
----zero_pp_rank_0_mp_rank_00_optim_states.pt
----zero_pp_rank_1_mp_rank_00_optim_states.pt
...
```
When I try to load the checkpoint with `MyModel.load_from_checkpoint()` on any of these files I only get errors. The tutorials only point me to some apparently outdated examples with a .ckpt file, which does not exist in these log directories.
Loading from the model directory does not help either.
When I load mp_rank_00_model_states.pt I get:
```
File "/.../projects/classifier_lightning/venv/lib/python3.10/site-packages/lightning/pytorch/core/saving.py", line 180, in _load_state
keys = obj.load_state_dict(checkpoint["state_dict"], strict=strict)
KeyError: 'state_dict'
```
When I load the other files I get a Lightning version error, even though it runs in the same environment.
So - how do I load my model from a checkpoint?
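In case the DeepSpeed layout is the blocker: those `zero_pp_rank_*`/`mp_rank_*` files are shards, not a regular checkpoint, and Lightning ships a utility to consolidate them into a single file first. A sketch (the paths and `MyModel` come from this report; double-check the import path against your Lightning version):

```python
from lightning.pytorch.utilities.deepspeed import (
    convert_zero_checkpoint_to_fp32_state_dict,
)

# Consolidate the sharded ZeRO checkpoint directory into one .ckpt file...
convert_zero_checkpoint_to_fp32_state_dict("logs/epoch=5", "logs/epoch=5.ckpt")

# ...which load_from_checkpoint can then read as usual.
model = MyModel.load_from_checkpoint("logs/epoch=5.ckpt")
```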
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | open | 2024-04-04T14:31:02Z | 2024-04-04T14:31:02Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19738 | [
"bug",
"needs triage"
] | asusdisciple | 0 |
SciTools/cartopy | matplotlib | 2,384 | Issue plotting Any maps. | ### Description
<!-- Please provide a general introduction to the issue/proposal. -->
Hello, basically every time I try to plot any map I get the same error below
<!--
If you are reporting a bug, attach the *entire* traceback from Python.
If you are proposing an enhancement/new feature, provide links to related articles, reference examples, etc.
If you are asking a question, please ask on StackOverflow and use the cartopy tag. All cartopy
questions on StackOverflow can be found at https://stackoverflow.com/questions/tagged/cartopy
-->
#### Code to reproduce
```python
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
import xarray as xr
from metpy.cbook import get_test_data
ds = xr.open_dataset(get_test_data('narr_example.nc', as_file_obj=False))
data_var = ds.metpy.parse_cf('Temperature')
x = data_var.x
y = data_var.y
im_data = data_var.isel(time=0).sel(isobaric=1000.)
fig = plt.figure(figsize=(14, 14))
ax = fig.add_subplot(1, 1, 1, projection=data_var.metpy.cartopy_crs)
ax.imshow(im_data, extent=(x.min(), x.max(), y.min(), y.max()),
cmap='RdBu', origin='lower' if y[0] < y[-1] else 'upper')
ax.coastlines(color='tab:green', resolution='10m')
ax.add_feature(cfeature.LAKES.with_scale('10m'), facecolor='none', edgecolor='tab:blue')
ax.add_feature(cfeature.RIVERS.with_scale('10m'), edgecolor='tab:blue')
plt.show()
```
#### Traceback
```
TypeError Traceback (most recent call last)
~/opt/anaconda3/lib/python3.8/site-packages/IPython/core/formatters.py in __call__(self, obj)
339 pass
340 else:
--> 341 return printer(obj)
342 # Finally look for special method names
343 method = get_real_method(obj, self.print_method)
~/opt/anaconda3/lib/python3.8/site-packages/IPython/core/pylabtools.py in <lambda>(fig)
246
247 if 'png' in formats:
--> 248 png_formatter.for_type(Figure, lambda fig: print_figure(fig, 'png', **kwargs))
249 if 'retina' in formats or 'png2x' in formats:
250 png_formatter.for_type(Figure, lambda fig: retina_figure(fig, **kwargs))
~/opt/anaconda3/lib/python3.8/site-packages/IPython/core/pylabtools.py in print_figure(fig, fmt, bbox_inches, **kwargs)
130 FigureCanvasBase(fig)
131
--> 132 fig.canvas.print_figure(bytes_io, **kw)
133 data = bytes_io.getvalue()
134 if fmt == 'svg':
~/opt/anaconda3/lib/python3.8/site-packages/matplotlib/backend_bases.py in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, pad_inches, bbox_extra_artists, backend, **kwargs)
2293 )
2294 with getattr(renderer, "_draw_disabled", nullcontext)():
-> 2295 self.figure.draw(renderer)
2296
2297 if bbox_inches:
~/opt/anaconda3/lib/python3.8/site-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
71 @wraps(draw)
72 def draw_wrapper(artist, renderer, *args, **kwargs):
---> 73 result = draw(artist, renderer, *args, **kwargs)
74 if renderer._rasterizing:
75 renderer.stop_rasterizing()
~/opt/anaconda3/lib/python3.8/site-packages/matplotlib/artist.py in draw_wrapper(artist, renderer)
48 renderer.start_filter()
49
---> 50 return draw(artist, renderer)
51 finally:
52 if artist.get_agg_filter() is not None:
~/opt/anaconda3/lib/python3.8/site-packages/matplotlib/figure.py in draw(self, renderer)
2835
2836 self.patch.draw(renderer)
-> 2837 mimage._draw_list_compositing_images(
2838 renderer, self, artists, self.suppressComposite)
2839
~/opt/anaconda3/lib/python3.8/site-packages/matplotlib/image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
130 if not_composite or not has_images:
131 for a in artists:
--> 132 a.draw(renderer)
133 else:
134 # Composite any adjacent images together
~/opt/anaconda3/lib/python3.8/site-packages/matplotlib/artist.py in draw_wrapper(artist, renderer)
48 renderer.start_filter()
49
---> 50 return draw(artist, renderer)
51 finally:
52 if artist.get_agg_filter() is not None:
~/opt/anaconda3/lib/python3.8/site-packages/cartopy/mpl/geoaxes.py in draw(self, renderer, inframe)
385 self._done_img_factory = True
386
--> 387 return matplotlib.axes.Axes.draw(self, renderer=renderer,
388 inframe=inframe)
389
TypeError: draw_wrapper() got an unexpected keyword argument 'inframe'
<Figure size 1008x1008 with 1 Axes>
```
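The last frame suggests a version mismatch: this cartopy release's `GeoAxes.draw` still forwards an `inframe` keyword that newer Matplotlib `Axes.draw` signatures no longer accept (my reading of the traceback; upgrading cartopy to a release matching the installed Matplotlib is the usual fix). A stdlib sketch of the mechanism, with illustrative names:

```python
def draw(renderer):  # newer-style signature: no 'inframe' parameter
    return "drawn"

try:
    draw(renderer=None, inframe=False)  # an older caller still passes it
except TypeError as exc:
    print(exc)  # → draw() got an unexpected keyword argument 'inframe'
```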
<details>
<summary>Full environment definition</summary>
<!-- fill in the following information as appropriate -->
### Operating system
macOS Monterey 12.6.2
### Cartopy version
0.18
### conda list
```
# Name Version Build Channel
_ipyw_jlab_nb_ext_conf 0.1.0 py38_0
alabaster 0.7.12 py_0
anaconda 2020.07 py38_0
anaconda-client 1.7.2 py38_0
anaconda-navigator 1.9.12 py38_0
anaconda-project 0.8.4 py_0
applaunchservices 0.2.1 py_0
appnope 0.1.0 py38_1001
appscript 1.1.1 py38haf1e3a3_0
argh 0.26.2 py38_0
asn1crypto 1.3.0 py38_1
astroid 2.4.2 py38_0
astropy 4.0.1.post1 py38h01d97ff_1
atomicwrites 1.4.0 py_0
attrs 19.3.0 py_0
autopep8 1.5.3 py_0
babel 2.8.0 py_0
backcall 0.2.0 py_0
backports 1.0 py_2
backports.functools_lru_cache 1.6.1 py_0
backports.shutil_get_terminal_size 1.0.0 py38_2
backports.tempfile 1.0 py_1
backports.weakref 1.0.post1 py_1
beautifulsoup4 4.9.1 py38_0
bitarray 1.4.0 py38haf1e3a3_0
bkcharts 0.2 py38_0
blas 1.0 mkl
bleach 3.1.5 py_0
blosc 1.19.0 hab81aa3_0
bokeh 2.1.1 py38_0
boto 2.49.0 py38_0
bottleneck 1.3.2 py38hf1fa96c_1
brotlipy 0.7.0 py38haf1e3a3_1000
bzip2 1.0.8 h1de35cc_0
ca-certificates 2020.6.24 0
cartopy 0.17.0 py38h9bcff04_1015 conda-forge
certifi 2020.6.20 py38_0
cffi 1.14.0 py38hc512035_1
cftime 1.6.3 pypi_0 pypi
chardet 3.0.4 py38_1003
click 7.1.2 py_0
cloudpickle 1.5.0 py_0
clyent 1.2.2 py38_1
colorama 0.4.3 py_0
conda 4.14.0 py38h50d1736_0 conda-forge
conda-build 3.18.11 py38_0
conda-env 2.6.0 1
conda-package-handling 1.6.1 py38h1de35cc_0
conda-verify 3.4.2 py_1
contextlib2 0.6.0.post1 py_0
contourpy 1.1.1 pypi_0 pypi
cryptography 2.9.2 py38ha12b0ac_0
curl 7.71.1 hb0a8c7a_1
cycler 0.10.0 py38_0
cython 0.29.21 py38hb1e8313_0
cytoolz 0.10.1 py38h1de35cc_0
dask 2.20.0 py_0
dask-core 2.20.0 py_0
dataclasses 0.8 pyhc8e2a94_3 conda-forge
dbus 1.13.16 h18a8e69_0
decorator 4.4.2 py_0
defusedxml 0.6.0 py_0
diff-match-patch 20200713 py_0
distributed 2.20.0 py38_0
docutils 0.16 py38_1
entrypoints 0.3 py38_0
et_xmlfile 1.0.1 py_1001
expat 2.2.9 hb1e8313_2
fastcache 1.1.0 py38h1de35cc_0
filelock 3.0.12 py_0
flake8 3.8.3 py_0
flask 1.1.2 py_0
fonttools 4.51.0 pypi_0 pypi
freetype 2.10.2 ha233b18_0
fsspec 0.7.4 py_0
future 0.18.2 py38_1
geos 3.8.1 h4a8c4bd_0 conda-forge
get_terminal_size 1.0.0 h7520d66_0
gettext 0.19.8.1 hb0f4f8b_2
gevent 20.6.2 py38haf1e3a3_0
glib 2.65.0 hc5f4afa_0
glob2 0.7 py_0
gmp 6.1.2 hb37e062_1
gmpy2 2.0.8 py38h6ef4df4_3
greenlet 0.4.16 py38haf1e3a3_0
h5py 2.10.0 py38h3134771_0
hdf5 1.10.4 hfa1e0ec_0
heapdict 1.0.1 py_0
html5lib 1.1 py_0
icu 58.2 h0a44026_3
idna 2.10 py_0
imageio 2.9.0 py_0
imagesize 1.2.0 py_0
importlib-metadata 1.7.0 py38_0
importlib-resources 6.4.0 pypi_0 pypi
importlib_metadata 1.7.0 0
intel-openmp 2019.4 233
intervaltree 3.0.2 py_1
ipykernel 5.3.2 py38h5ca1d4c_0
ipython 7.16.1 py38h5ca1d4c_0
ipython_genutils 0.2.0 py38_0
ipywidgets 7.5.1 py_0
isort 4.3.21 py38_0
itsdangerous 1.1.0 py_0
jbig 2.1 h4d881f8_0
jdcal 1.4.1 py_0
jedi 0.17.1 py38_0
jinja2 2.11.2 py_0
joblib 0.16.0 py_0
jpeg 9b he5867d9_2
json5 0.9.5 py_0
jsonschema 3.2.0 py38_1
jupyter 1.0.0 py38_7
jupyter_client 6.1.6 py_0
jupyter_console 6.1.0 py_0
jupyter_core 4.6.3 py38_0
jupyterlab 2.1.5 py_0
jupyterlab_server 1.2.0 py_0
keyring 21.2.1 py38_0
kiwisolver 1.2.0 py38h04f5b5a_0
krb5 1.18.2 h75d18d8_0
lazy-object-proxy 1.4.3 py38h1de35cc_0
lcms2 2.11 h92f6f08_0
libarchive 3.4.2 haa3ed63_0
libcurl 7.71.1 h8a08a2b_1
libcxx 10.0.0 1
libedit 3.1.20191231 h1de35cc_1
libffi 3.3 hb1e8313_2
libgfortran 3.0.1 h93005f0_2
libiconv 1.16 h1de35cc_0
liblief 0.10.1 h0a44026_0
libllvm9 9.0.1 h21ff451_1
libpng 1.6.37 ha441bb4_0
libsodium 1.0.18 h1de35cc_0
libspatialindex 1.9.3 h0a44026_0
libssh2 1.9.0 ha12b0ac_1
libtiff 4.1.0 hcb84e12_1
libxml2 2.9.10 h3b9e6c8_1
libxslt 1.1.34 h83b36ba_0
llvm-openmp 10.0.0 h28b9765_0
llvmlite 0.33.0 py38ha11be7d_1
locket 0.2.0 py38_1
lxml 4.5.2 py38h63b7cb6_0
lz4-c 1.9.2 h0a44026_0
lzo 2.10 h1de35cc_2
markupsafe 1.1.1 py38h1de35cc_1
matplotlib 3.5.2 pypi_0 pypi
mccabe 0.6.1 py38_1
metpy 1.5.1 pypi_0 pypi
mistune 0.8.4 py38h1de35cc_1001
mkl 2019.4 233
mkl-service 2.3.0 py38hfbe908c_0
mkl_fft 1.1.0 py38hc64f4ea_0
mkl_random 1.1.1 py38h959d312_0
mock 4.0.2 py_0
more-itertools 8.4.0 py_0
mpc 1.1.0 h6ef4df4_1
mpfr 4.0.2 h9066e36_1
mpmath 1.1.0 py38_0
msgpack-python 1.0.0 py38h04f5b5a_1
multipledispatch 0.6.0 py38_0
navigator-updater 0.2.1 py38_0
nbconvert 5.6.1 py38_1
nbformat 5.0.7 py_0
ncurses 6.2 h0a44026_1
netcdf4 1.6.5 pypi_0 pypi
networkx 2.4 py_1
nltk 3.5 py_0
nose 1.3.7 py38_1004
notebook 6.0.3 py38_0
numba 0.50.1 py38h959d312_1
numexpr 2.7.1 py38hce01a72_0
numpy 1.23.1 pypi_0 pypi
numpydoc 1.1.0 py_0
olefile 0.46 py_0
openpyxl 3.0.4 py_0
openssl 1.1.1g h1de35cc_0
owslib 0.30.0 pyhd8ed1ab_0 conda-forge
packaging 24.0 pypi_0 pypi
pandas 1.3.0 pypi_0 pypi
pandoc 2.10 0
pandocfilters 1.4.2 py38_1
parso 0.7.0 py_0
partd 1.1.0 py_0
path 13.1.0 py38_0
path.py 12.4.0 0
pathlib2 2.3.5 py38_1
pathtools 0.1.2 py_1
patsy 0.5.1 py38_0
pcre 8.44 hb1e8313_0
pep8 1.7.1 py38_0
pexpect 4.8.0 py38_1
pickleshare 0.7.5 py38_1001
pillow 7.2.0 py38ha54b6ba_0
pint 0.21.1 pypi_0 pypi
pip 24.0 pypi_0 pypi
pkginfo 1.5.0.1 py38_0
platformdirs 4.2.1 pypi_0 pypi
pluggy 0.13.1 py38_0
ply 3.11 py38_0
pooch 1.8.1 pypi_0 pypi
proj 7.0.0 h45baca5_5 conda-forge
prometheus_client 0.8.0 py_0
prompt-toolkit 3.0.5 py_0
prompt_toolkit 3.0.5 0
psutil 5.7.0 py38h1de35cc_0
ptyprocess 0.6.0 py38_0
py 1.9.0 py_0
py-lief 0.10.1 py38haf313ee_0
pycodestyle 2.6.0 py_0
pycosat 0.6.3 py38h1de35cc_1
pycparser 2.20 py_2
pycurl 7.43.0.5 py38ha12b0ac_0
pydocstyle 5.0.2 py_0
pyepsg 0.4.0 py_0 conda-forge
pyflakes 2.2.0 py_0
pygments 2.6.1 py_0
pykdtree 1.3.4 py38hbe852b5_2 conda-forge
pylint 2.5.3 py38_0
pyodbc 4.0.30 py38h0a44026_0
pyopenssl 19.1.0 py_1
pyparsing 2.4.7 py_0
pyproj 3.5.0 pypi_0 pypi
pyqt 5.9.2 py38h655552a_2
pyrsistent 0.16.0 py38h1de35cc_0
pyshp 2.3.1 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 py38_1
pytables 3.6.1 py38h4727e94_0
pytest 5.4.3 py38_0
python 3.8.3 h26836e1_2
python-certifi-win32 1.6.1 pypi_0 pypi
python-dateutil 2.8.1 py_0
python-jsonrpc-server 0.3.4 py_1
python-language-server 0.34.1 py38_0
python-libarchive-c 2.9 py_0
python.app 2 py38_10
python_abi 3.8 2_cp38 conda-forge
pytz 2020.1 py_0
pywavelets 1.1.1 py38h1de35cc_0
pyyaml 5.3.1 py38haf1e3a3_1
pyzmq 19.0.1 py38hb1e8313_1
qdarkstyle 2.8.1 py_0
qt 5.9.7 h468cd18_1
qtawesome 0.7.2 py_0
qtconsole 4.7.5 py_0
qtpy 1.9.0 py_0
readline 8.0 h1de35cc_0
regex 2020.6.8 py38haf1e3a3_0
requests 2.24.0 py_0
ripgrep 11.0.2 he32d670_0
rope 0.17.0 py_0
rtree 0.9.4 py38_1
ruamel_yaml 0.15.87 py38haf1e3a3_1
scikit-image 0.16.2 py38h6c726b0_0
scikit-learn 0.23.1 py38h603561c_0
scipy 1.5.0 py38hbab996c_0
seaborn 0.10.1 py_0
send2trash 1.5.0 py38_0
setuptools 69.5.1 pypi_0 pypi
setuptools-scm 8.1.0 pypi_0 pypi
shapely 1.7.1 py38h7843d1f_1 conda-forge
simplegeneric 0.8.1 py38_2
singledispatch 3.4.0.3 py38_0
sip 4.19.8 py38h0a44026_0
six 1.15.0 py_0
snappy 1.1.8 hb1e8313_0
snowballstemmer 2.0.0 py_0
sortedcollections 1.2.1 py_0
sortedcontainers 2.2.2 py_0
soupsieve 2.0.1 py_0
sphinx 3.1.2 py_0
sphinxcontrib 1.0 py38_1
sphinxcontrib-applehelp 1.0.2 py_0
sphinxcontrib-devhelp 1.0.2 py_0
sphinxcontrib-htmlhelp 1.0.3 py_0
sphinxcontrib-jsmath 1.0.1 py_0
sphinxcontrib-qthelp 1.0.3 py_0
sphinxcontrib-serializinghtml 1.1.4 py_0
sphinxcontrib-websupport 1.2.3 py_0
spyder 4.1.4 py38_0
spyder-kernels 1.9.2 py38_0
sqlalchemy 1.3.18 py38haf1e3a3_0
sqlite 3.32.3 hffcf06c_0
statsmodels 0.11.1 py38haf1e3a3_0
sympy 1.6.1 py38_0
tbb 2020.0 h04f5b5a_0
tblib 1.6.0 py_0
terminado 0.8.3 py38_0
testpath 0.4.4 py_0
threadpoolctl 2.1.0 pyh5ca1d4c_0
tk 8.6.10 hb0a8c7a_0
toml 0.10.1 py_0
tomli 2.0.1 pypi_0 pypi
toolz 0.10.0 py_0
tornado 6.0.4 py38h1de35cc_1
tqdm 4.47.0 py_0
traitlets 5.14.3 pypi_0 pypi
tropycal 1.2.1 pyhd8ed1ab_0 conda-forge
typing_extensions 3.7.4.2 py_0
ujson 1.35 py38h1de35cc_0
unicodecsv 0.14.1 py38_0
unixodbc 2.3.7 h1de35cc_0
urllib3 1.25.9 py_0
watchdog 0.10.3 py38haf1e3a3_0
wcwidth 0.2.5 py_0
webencodings 0.5.1 py38_1
werkzeug 1.0.1 py_0
wheel 0.34.2 py38_0
widgetsnbextension 3.5.1 py38_0
wrapt 1.11.2 py38h1de35cc_0
wurlitzer 2.0.1 py38_0
xarray 2023.1.0 pypi_0 pypi
xlrd 1.2.0 py_0
xlsxwriter 1.2.9 py_0
xlwings 0.19.5 py38_0
xlwt 1.3.0 py38_0
xmltodict 0.12.0 py_0
xz 5.2.5 h1de35cc_0
yaml 0.2.5 haf1e3a3_0
yapf 0.30.0 py_0
zeromq 4.3.2 hb1e8313_2
zict 2.0.0 py_0
zipp 3.1.0 py_0
zlib 1.2.11 h1de35cc_3
zope 1.0 py38_1
zope.event 4.4 py38_0
zope.interface 4.7.1 py38h1de35cc_0
zstd 1.4.5 h41d2c2f_0
```
### pip list
```
Package Version
---------------------------------- ----------------------
alabaster 0.7.12
anaconda-client 1.7.2
anaconda-navigator 1.9.12
anaconda-project 0.8.3
applaunchservices 0.2.1
appnope 0.1.0
appscript 1.1.1
argh 0.26.2
asn1crypto 1.3.0
astroid 2.4.2
astropy 4.0.1.post1
atomicwrites 1.4.0
attrs 19.3.0
autopep8 1.5.3
Babel 2.8.0
backcall 0.2.0
backports.functools-lru-cache 1.6.1
backports.shutil-get-terminal-size 1.0.0
backports.tempfile 1.0
backports.weakref 1.0.post1
beautifulsoup4 4.9.1
bitarray 1.4.0
bkcharts 0.2
bleach 3.1.5
bokeh 2.1.1
boto 2.49.0
Bottleneck 1.3.2
brotlipy 0.7.0
Cartopy 0.17.0
certifi 2020.6.20
cffi 1.14.0
cftime 1.6.3
chardet 3.0.4
click 7.1.2
cloudpickle 1.5.0
clyent 1.2.2
colorama 0.4.3
conda 4.14.0
conda-build 3.18.11
conda-package-handling 1.7.0+0.g7c4a471.dirty
conda-verify 3.4.2
contextlib2 0.6.0.post1
contourpy 1.1.1
cryptography 2.9.2
cycler 0.10.0
Cython 0.29.21
cytoolz 0.10.1
dask 2.20.0
dataclasses 0.8
decorator 4.4.2
defusedxml 0.6.0
diff-match-patch 20200713
distributed 2.20.0
docutils 0.16
entrypoints 0.3
et-xmlfile 1.0.1
fastcache 1.1.0
filelock 3.0.12
flake8 3.8.3
Flask 1.1.2
fonttools 4.51.0
fsspec 0.7.4
future 0.18.2
gevent 20.6.2
glob2 0.7
gmpy2 2.0.8
greenlet 0.4.16
h5py 2.10.0
HeapDict 1.0.1
html5lib 1.1
idna 2.10
imageio 2.9.0
imagesize 1.2.0
importlib-metadata 1.7.0
importlib_resources 6.4.0
intervaltree 3.0.2
ipykernel 5.3.2
ipython 7.16.1
ipython_genutils 0.2.0
ipywidgets 7.5.1
isort 4.3.21
itsdangerous 1.1.0
jdcal 1.4.1
jedi 0.17.1
Jinja2 2.11.2
joblib 0.16.0
json5 0.9.5
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.6
jupyter-console 6.1.0
jupyter-core 4.6.3
jupyterlab 2.1.5
jupyterlab-server 1.2.0
keyring 21.2.1
kiwisolver 1.2.0
lazy-object-proxy 1.4.3
libarchive-c 2.9
llvmlite 0.33.0+1.g022ab0f
locket 0.2.0
lxml 4.5.2
MarkupSafe 1.1.1
matplotlib 3.5.2
mccabe 0.6.1
MetPy 1.5.1
mistune 0.8.4
mkl-fft 1.1.0
mkl-random 1.1.1
mkl-service 2.3.0
mock 4.0.2
more-itertools 8.4.0
mpmath 1.1.0
msgpack 1.0.0
multipledispatch 0.6.0
navigator-updater 0.2.1
nbconvert 5.6.1
nbformat 5.0.7
netCDF4 1.6.5
networkx 2.4
nltk 3.5
nose 1.3.7
notebook 6.0.3
numba 0.50.1
numexpr 2.7.1
numpy 1.23.1
numpydoc 1.1.0
olefile 0.46
openpyxl 3.0.4
OWSLib 0.30.0
packaging 24.0
pandas 1.3.0
pandocfilters 1.4.2
parso 0.7.0
partd 1.1.0
path 13.1.0
pathlib2 2.3.5
pathtools 0.1.2
patsy 0.5.1
pep8 1.7.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 7.2.0
Pint 0.21.1
pip 24.0
pkginfo 1.5.0.1
platformdirs 4.2.1
pluggy 0.13.1
ply 3.11
pooch 1.8.1
prometheus-client 0.8.0
prompt-toolkit 3.0.5
psutil 5.7.0
ptyprocess 0.6.0
py 1.9.0
pycodestyle 2.6.0
pycosat 0.6.3
pycparser 2.20
pycurl 7.43.0.5
pydocstyle 5.0.2
pyepsg 0.4.0
pyflakes 2.2.0
Pygments 2.6.1
pykdtree 1.3.4
pylint 2.5.3
pyodbc 4.0.0-unsupported
pyOpenSSL 19.1.0
pyparsing 2.4.7
pyproj 3.5.0
pyrsistent 0.16.0
pyshp 2.3.1
PySocks 1.7.1
pytest 5.4.3
python-certifi-win32 1.6.1
python-dateutil 2.8.1
python-jsonrpc-server 0.3.4
python-language-server 0.34.1
pytz 2020.1
PyWavelets 1.1.1
PyYAML 5.3.1
pyzmq 19.0.1
QDarkStyle 2.8.1
QtAwesome 0.7.2
qtconsole 4.7.5
QtPy 1.9.0
regex 2020.6.8
requests 2.24.0
rope 0.17.0
Rtree 0.9.4
ruamel_yaml 0.15.87
scikit-image 0.16.2
scikit-learn 0.23.1
scipy 1.5.0
seaborn 0.10.1
Send2Trash 1.5.0
setuptools 69.5.1
setuptools-scm 8.1.0
Shapely 1.7.1
simplegeneric 0.8.1
singledispatch 3.4.0.3
six 1.15.0
snowballstemmer 2.0.0
sortedcollections 1.2.1
sortedcontainers 2.2.2
soupsieve 2.0.1
Sphinx 3.1.2
sphinxcontrib-applehelp 1.0.2
sphinxcontrib-devhelp 1.0.2
sphinxcontrib-htmlhelp 1.0.3
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.3
sphinxcontrib-serializinghtml 1.1.4
sphinxcontrib-websupport 1.2.3
spyder 4.1.4
spyder-kernels 1.9.2
SQLAlchemy 1.3.18
statsmodels 0.11.1
sympy 1.6.1
tables 3.6.1
tblib 1.6.0
terminado 0.8.3
testpath 0.4.4
threadpoolctl 2.1.0
toml 0.10.1
tomli 2.0.1
toolz 0.10.0
tornado 6.0.4
tqdm 4.47.0
traitlets 5.14.3
tropycal 1.2.1
typing-extensions 3.7.4.2
ujson 1.35
unicodecsv 0.14.1
urllib3 1.25.9
watchdog 0.10.3
wcwidth 0.2.5
webencodings 0.5.1
Werkzeug 1.0.1
wheel 0.34.2
widgetsnbextension 3.5.1
wrapt 1.11.2
wurlitzer 2.0.1
xarray 0.17.0
xlrd 1.2.0
XlsxWriter 1.2.9
xlwings 0.19.5
xlwt 1.3.0
xmltodict 0.12.0
yapf 0.30.0
zict 2.0.0
zipp 3.1.0
zope.event 4.4
zope.interface 4.7.1
```
</details>
| closed | 2024-05-12T03:11:26Z | 2024-05-12T03:41:13Z | https://github.com/SciTools/cartopy/issues/2384 | [] | Ernesto2425 | 0 |
sczhou/CodeFormer | pytorch | 243 | Output images are the originals from the input | I ran the command "python scripts/crop_align_face.py -i inputs/cropped_faces -o outputs/cropped_faces", but it didn't work as expected. The images in the output folder are no different from the images in the input folder.
My computer is a MacBook Pro M1. | open | 2023-06-08T10:43:56Z | 2023-06-08T10:43:56Z | https://github.com/sczhou/CodeFormer/issues/243 | [] | PatriotBo | 0
django-import-export/django-import-export | django | 1,831 | Resource field is empty when importing data | Hi,
I have Django 4.2.13 with django-import-export 4.0.2.
I have written an admin class that inherits from `ImportExportModelAdmin` and defined the `resource_classes` property.
But for some reason, the field `resource` is empty when trying to import. And the fields are not displayed in `This importer will import the following fields:`.
Here is a screenshot:

What could be the reason? What am I missing?
Thank you | closed | 2024-05-13T22:07:11Z | 2024-05-14T15:41:33Z | https://github.com/django-import-export/django-import-export/issues/1831 | [
"question"
] | nortigo | 5 |
lepture/authlib | flask | 175 | Authlib does not correctly work with httpx middleware | **Describe the bug**
A clear and concise description of what the bug is.
**Error Stacks**
```
Traceback (most recent call last):
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/uvicorn/protocols/http/httptools_impl.py", line 385, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/fastapi/applications.py", line 140, in __call__
await super().__call__(scope, receive, send)
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/starlette/applications.py", line 134, in __call__
await self.error_middleware(scope, receive, send)
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/starlette/middleware/errors.py", line 178, in __call__
raise exc from None
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/starlette/middleware/errors.py", line 156, in __call__
await self.app(scope, receive, _send)
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/starlette/middleware/sessions.py", line 75, in __call__
await self.app(scope, receive, send_wrapper)
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/starlette/exceptions.py", line 73, in __call__
raise exc from None
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/starlette/exceptions.py", line 62, in __call__
await self.app(scope, receive, sender)
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/starlette/routing.py", line 590, in __call__
await route(scope, receive, send)
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/starlette/routing.py", line 208, in __call__
await self.app(scope, receive, send)
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/starlette/routing.py", line 41, in app
response = await func(request)
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/fastapi/routing.py", line 127, in app
raw_response = await dependant.call(**values)
File "./proj/apps/auth/api.py", line 77, in callback_oauth
token = await current_oauth.authorize_access_token(request, redirect_uri=redirect_uri)
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/src/authlib/authlib/integrations/starlette_client/remote_app.py", line 39, in authorize_access_token
return await self.fetch_access_token(**params)
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/src/authlib/authlib/integrations/asgi_client/base_app.py", line 104, in fetch_access_token
token = await client.fetch_token(token_endpoint, **kwargs)
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/src/authlib/authlib/integrations/httpx_client/oauth2_client.py", line 105, in _fetch_token
resp = await self.post(
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/httpx/client.py", line 772, in post
return await self.request(
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/src/authlib/authlib/integrations/httpx_client/oauth2_client.py", line 86, in request
return await super(AsyncOAuth2Client, self).request(
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/httpx/client.py", line 259, in request
response = await self.send(
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/httpx/client.py", line 403, in send
response = await self.send_handling_redirects(
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/httpx/client.py", line 465, in send_handling_redirects
response = await self.send_handling_auth(
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/httpx/client.py", line 589, in send_handling_auth
request = next(auth_flow)
File "/home/ken/.local/share/virtualenvs/testproj-lMPjJ55e/lib/python3.8/site-packages/httpx/auth.py", line 62, in __call__
yield self.func(request)
TypeError: __call__() missing 1 required positional argument: 'get_response'
```
**To Reproduce**
1. Install Starlette/FastAPI
2. Use the current github version of authlib
3. Attempt to call authorize_access_token()
**Expected behavior**
It should work.
**Environment:**
- OS: Ubuntu 18.04
- Python Version: 3.8
- Authlib Version: Current master branch
**Additional context**
Why this is happening is pretty obvious when you look at the code. Here's what httpx is doing:
```
class FunctionAuth(Auth):
"""
Allows the 'auth' argument to be passed as a simple callable function,
that takes the request, and returns a new, modified request.
"""
def __init__(self, func: typing.Callable[[Request], Request]) -> None:
self.func = func
def __call__(self, request: Request) -> AuthFlow:
yield self.func(request)
```
Notice it's calling self.func(request).
Here's the class it's calling __call__ on:
```
class OAuth2ClientAuth(Middleware, ClientAuth):
async def __call__(
self, request: Request, get_response: typing.Callable
) -> Response:
return await auth_call(self, request, get_response)
```
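Reduced to a plain-Python sketch (illustrative names, not authlib's real classes), the mismatch is a two-argument `__call__` being invoked httpx-style with a single argument:

```python
class TwoArgAuth:
    def __call__(self, request, get_response):
        return get_response(request)

auth = TwoArgAuth()
try:
    auth("request")  # FunctionAuth does: self.func(request)
except TypeError as exc:
    print(exc)  # message names the missing 'get_response' argument
```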
get_response is not passed. | closed | 2019-12-23T20:26:46Z | 2020-02-11T10:59:42Z | https://github.com/lepture/authlib/issues/175 | [
"bug"
] | kkinder | 2 |
thp/urlwatch | automation | 362 | Option to configure SSL/TLS version and cipher suite for url job | Related earlier discussions: #265 #361
Related blog post: [Configuring TLS With Requests](https://lukasa.co.uk/2017/02/Configuring_TLS_With_Requests/) | open | 2019-02-08T17:32:16Z | 2020-07-21T09:39:53Z | https://github.com/thp/urlwatch/issues/362 | [
"enhancement"
] | cfbao | 4 |
vitalik/django-ninja | pydantic | 783 | AttributeError: 'NoneType' object has no attribute 'delete_cookie' | **Describe the bug**
I am able to create cookies but I am not able to delete them.
```python
from django.http import HttpResponse
@app.get('/logout', auth=JwtAuth())
def logout_user(request, response: HttpResponse):
response.delete_cookie("access")
response.delete_cookie("refresh")
return {'message': "You have been logged out Successfully"}
```
error = AttributeError: 'NoneType' object has no attribute 'delete_cookie'
| closed | 2023-07-03T15:00:11Z | 2023-07-03T17:12:28Z | https://github.com/vitalik/django-ninja/issues/783 | [] | ankushagar99 | 3 |
apache/airflow | python | 47,948 | ExternalTaskSensor with mode='reschedule' raising module 'airflow.settings' has no attribute 'engine' error | ### Apache Airflow version
main (development)
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
ExternalTaskSensor with mode='reschedule' raising module 'airflow.settings' has no attribute 'engine' error
```
Traceback (most recent call last):
  File "/opt/airflow/airflow/sensors/base.py", line 206, in _validate_input_values
    if self.reschedule and _is_metadatabase_mysql():
  File "/opt/airflow/airflow/sensors/base.py", line 60, in _is_metadatabase_mysql
    if settings.engine is None:
AttributeError: module 'airflow.settings' has no attribute 'engine'
```
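The failure reduces to a plain module-attribute lookup: the sensor's guard assumes `airflow.settings` exposes an `engine` attribute, which apparently no longer holds on main (an assumption based on the traceback). A stdlib sketch with a stand-in module:

```python
import types

settings = types.ModuleType("settings")  # stand-in for airflow.settings

try:
    settings.engine is None  # what _is_metadatabase_mysql() does
except AttributeError as exc:
    print(exc)  # → module 'settings' has no attribute 'engine'
```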
### What you think should happen instead?
_No response_
### How to reproduce
Have the DAG below in the dags folder and notice the import error:
```python
from airflow import DAG
from airflow.providers.standard.operators.empty import EmptyOperator
from airflow.sensors.external_task import ExternalTaskMarker, ExternalTaskSensor
from pendulum import today
start_date = today('UTC').add(days=-1)
with DAG(
dag_id="example_external_task_marker_parent",
start_date=start_date,
schedule=None,
tags=["core"],
) as parent_dag:
# [START howto_operator_external_task_marker]
parent_task = ExternalTaskMarker(
task_id="parent_task",
external_dag_id="example_external_task_marker_child",
external_task_id="child_task1",
)
# [END howto_operator_external_task_marker]
with DAG(
dag_id="example_external_task_marker_child",
start_date=start_date,
schedule=None,
tags=["core"],
) as child_dag:
# [START howto_operator_external_task_sensor]
child_task1 = ExternalTaskSensor(
task_id="child_task1",
external_dag_id=parent_dag.dag_id,
external_task_id=parent_task.task_id,
timeout=600,
allowed_states=["success"],
failed_states=["failed", "skipped"],
mode="reschedule",
)
# [END howto_operator_external_task_sensor]
child_task2 = EmptyOperator(task_id="child_task2")
child_task1 >> child_task2
```
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-19T11:01:00Z | 2025-03-23T17:07:01Z | https://github.com/apache/airflow/issues/47948 | [
"kind:bug",
"priority:medium",
"area:core",
"affected_version:3.0.0beta"
] | atul-astronomer | 1 |
pallets-eco/flask-wtf | flask | 192 | IntegerField always fails on validation | Whatever I enter into the form, validating the form always fails on the IntegerField.
```
from flask.ext.wtf import Form
from wtforms.fields import *
from wtforms.validators import *
class FooForm(Form):
a = StringField('a', validators=[InputRequired()])
b = IntegerField('b', validators=[InputRequired()])
```
I have data from an external source, and I try to validate that data:
```
data = {'a': 'test', 'b': '333'}
myform = FooForm(a = data['a'], b = data['b'])
if myform.validate():
print 'success'
else:
print myform.errors
```
The result is:
```
{'b': [u'This field is required.']}
```
I've also tried with:
```
b = IntegerField('b', validators=[NumberRange(min=0, max=1000)])
```
But this prints out:
```
{'b': [u'Number must be between 0 and 1000.']}
```
I'm using versions:
flask: 0.10.1
flask-wtf: 0.12
wtforms: 2.0.2
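For context, a conceptual sketch of one classic cause of this symptom (assuming it matches wtforms' semantics here): `InputRequired` inspects the *raw submitted data*, which is only populated when the form is bound to form/request data; passing values as Python keyword arguments sets `.data` but not `.raw_data`. Illustrative classes, not wtforms itself:

```python
class Field:
    def __init__(self, data=None, raw_data=None):
        self.data = data          # set by FooForm(b=...) keyword arguments
        self.raw_data = raw_data  # set only when parsing submitted formdata

def input_required(field):
    # InputRequired-style check: looks at raw_data, not data
    return bool(field.raw_data and field.raw_data[0])

kwarg_field = Field(data=333)                    # FooForm(b=data['b'])
bound_field = Field(data=333, raw_data=["333"])  # form bound to request data
print(input_required(kwarg_field))  # → False: "This field is required."
print(input_required(bound_field))  # → True
```

If that is what happens here, binding the form to the raw values as formdata (e.g. a `MultiDict`) instead of keyword arguments would be the thing to try.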
| closed | 2015-08-05T08:21:59Z | 2021-05-28T01:04:00Z | https://github.com/pallets-eco/flask-wtf/issues/192 | [] | canebat | 2 |
holoviz/panel | matplotlib | 7,178 | Unable to change the background colour for pn.indicators.Dial widget | #### version info: Panel 1.4.2
#### Description of expected behavior and the observed behavior:
I’ve been unable to change the background colour for my pn.indicators.Dial widget. I’ve tried CSS, customized designs, etc and nothing seems to work. Ultimately I’d like the background to be transparent so that the dashboard colour shows through. But I’m fine hard-coding a hex code if necessary. Currently, it only allows a pure white background, which somewhat stands out against my app background. Having posed the question on Holoviz Discourse I was asked by ahuang11 to raise a new issue here. Discourse topic can be found here: https://discourse.holoviz.org/t/changing-background-colour-of-pn-indicators-dial-widget/7564
#### self-contained example code
```
# Styling for Dial widgets
custom_style = {
"border": "0.5px solid grey",
"border-radius": "5px",
"margin": "5px",
}
grid[0:1, 1] = pn.Column(
pn.Spacer(height=30),
pn.Row(
pn.indicators.Dial(
name="Similarity",
value=60,
title_size="11px",
bounds=(0, 100),
width=150,
height=150,
colors=[(0, "grey"), (0.5, "grey"), (1, colour)],
margin=(0, 20, 0, 0),
styles=custom_style,
),
pn.indicators.Dial(
name="Strength",
value=68,
title_size="11px",
bounds=(0, 100),
width=150,
height=150,
colors=[(0, "grey"), (0.5, "grey"), (1, colour)],
margin=(0, 20, 0, 0),
styles=custom_style,
),
),
)
```
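For reference, the `styles` mapping is (as I understand it) applied as inline CSS on the component's outer container, so the transparent-background attempt described above would look something like this hypothetical variant of `custom_style`; whether the Dial's inner canvas honors it is exactly what this issue is about:

```python
# Hypothetical variant of the custom_style dict above.
custom_style = {
    "background": "transparent",  # or a hex code, e.g. "#1e1e2e"
    "border": "0.5px solid grey",
    "border-radius": "5px",
}
print(custom_style["background"])  # → transparent
```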
| closed | 2024-08-22T15:02:34Z | 2024-09-12T12:41:46Z | https://github.com/holoviz/panel/issues/7178 | [] | doughagey | 2 |
cvat-ai/cvat | tensorflow | 9,003 | Logging: establishing a relationship between log of cvat and log of nuclio | Hi,
I see that the event logging in CVAT is very detailed. For automatic annotation I am using self-written functions in NUCLIO. These functions log their activities to the NUCLIO log. In this example, you see the labeling activity of one function:
````
25.01.28 09:20:27.130 [34m(I)[0m [37msor.http.w1.python.logger[0m stat(eventid:476c2d08-412a-4a22-b2d1-d2d596be3fd4 elapsetime(sec.):5.0463547706604 #label:4 labels:1;0;0;C;) {"worker_id": "1"}
````
My intention is to establish a relationship between the event "call:function" in CVAT and the corresponding run of exactly this function in NUCLIO, so that I can say exactly which labels the function detected in this call for this frame of a job.
OK, I see an id for the request in the payload of the log line, like here:
````
{"function":{"id":"hot_gauge_v1.0.0-model_x-x-x-shs-pytorch-23.05"},"category":"interactive","parameters":{"frame":0},"request":{"id":"d7a8a011-ca83-4105-b1c4-cb97f56744c8"}}
````
The id seems to be CVAT-internal only; it is not the id of the event that is started in NUCLIO for this run of the function.
Exists a possibility to establish a relationship between a special function call in CVAT with its run in NUCLIO ?
Best Regards
Rose
| closed | 2025-01-28T09:51:26Z | 2025-01-31T13:44:58Z | https://github.com/cvat-ai/cvat/issues/9003 | [] | RoseDeSable | 4 |
ageitgey/face_recognition | python | 865 | How to set face_encodings to use CPU and RAM only? | * face_recognition version:1.2.3
* Python version:3.7.3
* Operating System: Win10
### Description
When I run face_locations with model='hog', there is no problem.
When I run face_encodings, I get the error below; it seems face_encodings allocates memory on the GPU. How can I set it to use the CPU and RAM only?
### What I Did
faces_locations = face_recognition.face_locations(rgb_small_frame,number_of_times_to_upsample=1, model='hog')
face_encodings = face_recognition.face_encodings(rgb_small_frame, faces_locations)
Traceback (most recent call last):
File "h:/WE4/_git/Photo-Manager.py", line 979, in FacesCropFromImage
face_encodings = face_recognition.face_encodings(curframe, faces)
File "C:\ProgramData\Anaconda3\envs\tensorflow_gpu\lib\site-packages\face_recognition\api.py", line 210, in face_encodings
return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]
File "C:\ProgramData\Anaconda3\envs\tensorflow_gpu\lib\site-packages\face_recognition\api.py", line 210, in <listcomp>
return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]
RuntimeError: Error while calling cudaMalloc(&data, new_size*sizeof(float)) in file I:\dlib-19.17\dlib\cuda\gpu_data.cpp:218. code: 2, reason: out of memory
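One commonly suggested workaround (hedged — whether dlib falls back cleanly depends on how it was compiled): hide all CUDA devices from the process before dlib is imported, so `cudaMalloc` is never attempted. The alternative is to reinstall dlib built without CUDA support. A minimal sketch of the environment-variable approach:

```python
import os

# Hide every CUDA device from this process. This MUST run before
# `import face_recognition` (which imports dlib), otherwise dlib has
# already initialized its GPU backend.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

# import face_recognition  # import only AFTER the variable is set
```

If dlib still errors with no visible devices, rebuilding dlib with CUDA disabled is the more reliable route.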
| open | 2019-06-27T15:09:35Z | 2019-07-09T01:11:30Z | https://github.com/ageitgey/face_recognition/issues/865 | [] | michaeldengxyz | 3 |
pytorch/vision | computer-vision | 8,877 | Setting a single negative value, or a pair of negative values, for the `sigma` argument of `ElasticTransform()` does not raise an error despite the documented error message | ### 🐛 Describe the bug
Setting a negative and a positive value together for the `sigma` argument of [ElasticTransform()](https://pytorch.org/vision/main/generated/torchvision.transforms.v2.ElasticTransform.html) produces the error messages shown below:
```python
from torchvision.datasets import OxfordIIITPet
from torchvision.transforms.v2 import ElasticTransform
my_data1 = OxfordIIITPet(
root="data", # ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
transform=ElasticTransform(sigma=[-100, 100])
)
my_data2 = OxfordIIITPet(
root="data", # ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
transform=ElasticTransform(sigma=[100, -100])
)
my_data1[0] # Error
my_data2[0] # Error
```
> ValueError: sigma should have positive values. Got [-100.0, 100.0]
> ValueError: sigma should have positive values. Got [100.0, -100.0]
But setting a single negative value, or a pair of negative values, for the `sigma` argument of `ElasticTransform()` does not raise an error, contradicting the above error message, as shown below:
```python
from torchvision.datasets import OxfordIIITPet
from torchvision.transforms.v2 import ElasticTransform
my_data1 = OxfordIIITPet(
root="data", # ↓ ↓ ↓ ↓ ↓
transform=ElasticTransform(sigma=-100)
)
my_data2 = OxfordIIITPet(
root="data", # ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓ ↓
transform=ElasticTransform(sigma=[-100, -100])
)
my_data1[0]
# (<PIL.Image.Image image mode=RGB size=394x500>, 0)
my_data2[0]
# (<PIL.Image.Image image mode=RGB size=394x500>, 0)
```
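For comparison, a validation helper that would reject all three cases up front — this is only a sketch that mirrors the quoted error message, not torchvision's actual implementation:

```python
import numbers

def validate_sigma(sigma):
    """Reject non-positive sigma whether given as a scalar or a sequence.

    Sketch only: mirrors the error message quoted above, but is NOT
    torchvision's real code path.
    """
    if isinstance(sigma, numbers.Number):
        sigma = [float(sigma), float(sigma)]
    sigma = [float(s) for s in sigma]
    if any(s <= 0 for s in sigma):
        raise ValueError(f"sigma should have positive values. Got {sigma}")
    return sigma
```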
### Versions
```python
import torchvision
torchvision.__version__ # '0.20.1'
``` | open | 2025-01-24T01:42:08Z | 2025-02-19T13:32:48Z | https://github.com/pytorch/vision/issues/8877 | [] | hyperkai | 1 |
polakowo/vectorbt | data-visualization | 43 | Error in Portfolio.from_signals | Hi, thanks for this lib. I ran into a problem while trying the test example.
```python
import vectorbt as vbt
import pandas as pd
import yfinance as yf
price = yf.Ticker("BTC-USD").history(period="max")
entries, exits = pd.Series.vbt.signals.generate_random_both(
price.shape[0], n=10, seed=42
)
portfolio = vbt.Portfolio.from_signals(
price['Close'], entries, exits,
fees=0.001,
init_capital=100,
freq='1D'
)
```
Then this error always occurred:
`raise ValueError("Only SizeType.Shares and SizeType.Cash are supported")`
my environment is:
python 3.8
pandas 1.1.1
vectorbt 0.13.7
| closed | 2020-09-01T10:32:31Z | 2020-09-16T22:32:04Z | https://github.com/polakowo/vectorbt/issues/43 | [] | Jaclong | 12 |
microsoft/nlp-recipes | nlp | 394 | [BUG] Not authorized to access variable group. | ### Description

### How do we replicate the bug?
I am not authorized to access the variable group which has the keyvault so the tests do not run.
### Expected behavior (i.e. solution)
Successful variable group access.
### Other Comments
| closed | 2019-09-03T13:19:45Z | 2019-10-23T16:36:31Z | https://github.com/microsoft/nlp-recipes/issues/394 | [
"bug"
] | bethz | 2 |
ultralytics/ultralytics | pytorch | 18,687 | YOLOv8 detection head intuitive feature specialization (e.g., small/medium/large object focus) | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
I have repeatedly read and observed that in the case of YOLOv3, the detection heads focus on small, medium, and large object detection respectively. I don't believe (and have not observed) this to be true for YOLOv8, and I am wondering if there is any equivalent or analogous intuitive semantic feature specialization for its detection heads.
For example, the following depicts the input image with the bounding box whose features correspond to the first, second, and third head respectively for YOLOv3.
<img width="380" alt="Image" src="https://github.com/user-attachments/assets/e051c349-a472-45fe-b144-80670e6bac0b" />
It's clear that the first/second/third head correspond to small/medium/large objects. It is not the case for YOLOv8:
<img width="378" alt="Image" src="https://github.com/user-attachments/assets/f9d48ed4-d559-41e0-a21f-6f237d607ac4" />
I am working with extracted activation maps from the YOLOv8 detection heads and it would be helpful if there was a sort of intuitive grouping between them as there is in YOLOv3, just wondering if such a grouping exists (even if it is not small/medium/large objects as it is in YOLOv3).
Further, what mechanism in the YOLOv3 architecture is responsible for this explicit specialization?
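For context, the mechanism in YOLOv3 is its fixed per-head anchor set: each head owns three anchor boxes of a particular size range, and a ground-truth box is assigned to the head whose anchor matches it best, which is what forces the small/medium/large split. YOLOv8 is anchor-free and uses dynamic task-aligned assignment across strides, so no such hard grouping exists. A toy sketch of the YOLOv3-style assignment (anchor values are the published Darknet defaults for a 416×416 input; the width/height-only IoU here is a simplification):

```python
# YOLOv3's fixed per-head anchors (width, height in pixels), from the original config.
YOLOV3_ANCHORS = {
    "P3 (52x52 grid, small objects)": [(10, 13), (16, 30), (33, 23)],
    "P4 (26x26 grid, medium objects)": [(30, 61), (62, 45), (59, 119)],
    "P5 (13x13 grid, large objects)": [(116, 90), (156, 198), (373, 326)],
}

def wh_iou(a, b):
    """IoU of two boxes compared by width/height only (both centered at origin)."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    return inter / (a[0] * a[1] + b[0] * b[1] - inter)

def best_head(gt_w, gt_h):
    """Pick the head owning the anchor that best matches a ground-truth box."""
    return max(
        YOLOV3_ANCHORS,
        key=lambda head: max(wh_iou((gt_w, gt_h), anc) for anc in YOLOV3_ANCHORS[head]),
    )
```

For example, `best_head(12, 14)` lands on the small-object head, while `best_head(300, 300)` lands on the large-object head.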
### Additional
_No response_ | open | 2025-01-14T19:59:12Z | 2025-01-15T18:15:15Z | https://github.com/ultralytics/ultralytics/issues/18687 | [
"question",
"detect"
] | leethologica | 4 |
alteryx/featuretools | data-science | 1,913 | Use polars for primitive calculation for faster feature value generation | - We could support [polars](https://github.com/pola-rs/polars) DataFrames, and use the functions from that library for generating features (in primitive calculations). | open | 2022-02-17T18:15:13Z | 2023-06-26T18:52:16Z | https://github.com/alteryx/featuretools/issues/1913 | [
"new feature",
"needs design",
"spike"
] | gsheni | 2 |
python-visualization/folium | data-visualization | 1,194 | HeatMap with fixed scale | ```python
hm = folium.plugins.HeatMapWithTime(data, scale_radius=False)
```
#### Problem description
Would it be possible to fix the scale for the map, like what is done for HeatMapWithTime?
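Until something built-in exists, one workaround sketch (an assumption about the data layout: each frame is a list of `[lat, lon, weight]` points, which is what `HeatMapWithTime` consumes) is to normalize every weight against a single global maximum so all frames share one fixed scale:

```python
# Two hypothetical frames of [lat, lon, weight] points.
data = [
    [[48.85, 2.35, 2.0], [48.86, 2.36, 5.0]],
    [[48.85, 2.35, 1.0], [48.86, 2.36, 10.0]],
]

# Divide by one global maximum instead of letting each frame auto-scale.
global_max = max(w for frame in data for _lat, _lon, w in frame)
data_fixed = [[[lat, lon, w / global_max] for lat, lon, w in frame] for frame in data]
# folium.plugins.HeatMapWithTime(data_fixed, ...)  # weights now comparable across frames
```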
#### Output of ``folium.__version__``
0.10. | closed | 2019-08-05T09:47:27Z | 2022-11-27T15:37:44Z | https://github.com/python-visualization/folium/issues/1194 | [] | betizad | 1 |
d2l-ai/d2l-en | computer-vision | 2,226 | issue with module show_list_len_pair_hist from d2l.torch | 
| closed | 2022-07-30T15:07:42Z | 2022-07-30T19:45:11Z | https://github.com/d2l-ai/d2l-en/issues/2226 | [] | ulfat191 | 2 |
fastapi-admin/fastapi-admin | fastapi | 33 | how to use own SQL database for this project? | I would love to use this admin panel and connect it to my own database. But I have no idea of how the tables and relations are built.. | closed | 2021-02-04T12:41:08Z | 2021-05-01T12:52:44Z | https://github.com/fastapi-admin/fastapi-admin/issues/33 | [] | vlori2k | 1 |
firerpa/lamda | automation | 100 | Abnormal touch control in the web client | Architecture: arm64-v8a
Model: M2012K11AC (14, 1080x2400)
The system is LineageOS 21.
This feels like a mapping problem: with touch feedback enabled in the developer options, you can see that touches do actually register, but the displayed points are confined to a very small square area in the top-left corner of the screen.
Searching existing issues turned up the workaround of adding the touch.backend=system property in properties.local, but it does not solve the problem I'm encountering.
The symptom is that after a reboot the entire display area is unclickable and shows "view only".
Also, with scrcpy, display and touch input work normally on this device. | open | 2025-01-01T13:15:32Z | 2025-01-03T11:15:27Z | https://github.com/firerpa/lamda/issues/100 | [] | koast18 | 3 |
tflearn/tflearn | tensorflow | 641 | AttributeError: 'DNN' object has no attribute 'predict_lables' | ```python
encode_decode = model.predict_lables(feed_dict={X:testX})
print(encode_decode)
```
While trying to run the above code I encounter the error AttributeError: 'DNN' object has no attribute 'predict_lables'.
I checked and DNN supports predict_labels
| closed | 2017-02-28T17:08:13Z | 2017-02-28T17:15:51Z | https://github.com/tflearn/tflearn/issues/641 | [] | jdvala | 0 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 194 | Will a 33B model be open-sourced? | ### Required checks before submitting
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the issues without finding a similar problem or solution.
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is also recommended to look for a solution in the corresponding project.
### Issue type
Other
### Base model
None
### Operating system
Linux
### Detailed description
Will you open-source a 33B model? The current 13B looks fairly limited in capability.
### Dependency information (required for code-related issues)
w
### Run logs or screenshots
w
w | closed | 2023-08-28T03:06:50Z | 2023-08-28T10:27:32Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/194 | [] | lucasjinreal | 3 |
Esri/arcgis-python-api | jupyter | 1,780 | Item.share() is not working when the groups parameter is specified | **Describe the bug**
I am not able to share items with specified groups with the arcgis.gis.Item.share() method. I tried a hosted feature layer and a view of that HFL. The _org_ and _everyone_ parameters do work, but the _groups_ parameter with a group ID string or with the Group object fails to change any sharing properties.
And the other way fails too: I set the group sharing manually in the Online Portal and then run _groups=None_, but nothing changes.
**To Reproduce**
```python
view_items = gis_target.content.search(query=rf'title:{target_view_name}, type:"Feature Service"')
target_group_id = "TARGET GROUP ID"
group = gis_target.groups.get(groupid=target_group_id)
view_items[0].share(everyone=False, org=False, allow_members_to_edit=False, groups=[group])
```
**Error**
No error message, just failure to share the item with the specified group.
**Screenshots**
Fiddler sessions from running the above code.

**Expected behavior**
I expect the item to be shared with my specified group.
**Platform (please complete the following information):**
- OS: Windows 11
- Browser: Chrome
- Python API Version: 2.2.0.1
-
**Additional context**
Add any other context about the problem here, attachments etc.
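As a stopgap while the Python API call misbehaves, the portal's REST sharing endpoint (`POST .../content/users/<owner>/items/<itemId>/share`) can be called directly. A sketch of just the form parameters — the token value and endpoint wiring are placeholders:

```python
def build_share_params(group_ids, everyone=False, org=False, token="PLACEHOLDER_TOKEN"):
    """Form parameters for the ArcGIS REST share-item call (sketch only)."""
    return {
        "f": "json",
        "token": token,  # a valid portal token in real use
        "everyone": str(everyone).lower(),
        "org": str(org).lower(),
        "groups": ",".join(group_ids),  # comma-separated group IDs
    }

params = build_share_params(["TARGET GROUP ID"])
```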
| open | 2024-03-19T19:16:50Z | 2025-01-07T17:56:53Z | https://github.com/Esri/arcgis-python-api/issues/1780 | [
"bug"
] | avstjohn | 2 |
huggingface/datasets | pytorch | 7,406 | Adding Core Maintainer List to CONTRIBUTING.md | ### Feature request
I propose adding a core maintainer list to the `CONTRIBUTING.md` file.
### Motivation
The Transformers and Liger-Kernel projects maintain lists of core maintainers for each module.
However, the Datasets project doesn't have such a list.
### Your contribution
I have nothing to add here. | closed | 2025-02-17T00:32:40Z | 2025-03-24T10:57:54Z | https://github.com/huggingface/datasets/issues/7406 | [
"enhancement"
] | jp1924 | 3 |
pandas-dev/pandas | data-science | 60,496 | ENH: astype(object) does not convert numpy strings to str | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas
import numpy
df_with_numpy_values = pandas.DataFrame(
{
"col_int": [numpy.int64(1), numpy.int64(2)],
"col_float": [numpy.float64(1.5), numpy.float64(2.5)],
"col_bool": [numpy.bool_(True), numpy.bool_(False)],
"col_str": [numpy.str_("a"), numpy.str_("b")],
}
)
df_as_object = df_with_numpy_values.astype(object)
for column in df_as_object.columns:
for value in df_as_object[column]:
assert type(value) in (
int,
float,
str,
bool,
), f"Value {value} in column {column} is not a Python type, but {type(value)}"
```
### Issue Description
Calling .astype(object) on a DataFrame with NumPy values converts the values to their Python equivalents, except for numpy.str_.
### Expected Behavior
I would expect that values with numpy.str_ would be turned into str.
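A workaround sketch in the meantime (hedged: the key fact is that `numpy.str_` subclasses both `str` and `numpy.generic`, so it survives `astype(object)` but can be unwrapped with `.item()`):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"col_str": [np.str_("a"), np.str_("b")]}).astype(object)

# Unwrap any remaining NumPy scalars (including np.str_) into plain Python objects.
df_fixed = df.apply(
    lambda col: col.map(lambda v: v.item() if isinstance(v, np.generic) else v)
)
```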
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.10.12
python-bits : 64
OS : Linux
OS-release : 5.15.0-126-generic
Version : #136-Ubuntu SMP Wed Nov 6 10:38:22 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 2.1.3
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 22.0.2
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| open | 2024-12-05T13:02:11Z | 2024-12-09T23:29:05Z | https://github.com/pandas-dev/pandas/issues/60496 | [
"Enhancement",
"Strings",
"Closing Candidate"
] | aktobii | 3 |