repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
TencentARC/GFPGAN | deep-learning | 523 | billing problem | billing problem | open | 2024-03-02T11:10:58Z | 2024-03-02T11:10:58Z | https://github.com/TencentARC/GFPGAN/issues/523 | [] | skshanu1234 | 0 |
coqui-ai/TTS | pytorch | 3,063 | [Bug] xtts fine tune error | ### Describe the bug
I got this error: AttributeError: 'Xtts' object has no attribute 'get_criterion'
on this line: trainer = Trainer(....)
I am trying to fine-tune XTTS with my own data. Can you please help me?
### To Reproduce
same
### Expected behavior
training
### Logs
```shell
AttributeError: 'Xtts' object has no attribute 'get_criterion'
```
### Environment
```shell
TTS/bin/train_tts.py
```
### Additional context
no | closed | 2023-10-13T06:33:48Z | 2024-02-07T08:10:30Z | https://github.com/coqui-ai/TTS/issues/3063 | [] | deryaguler95 | 6 |
agronholm/anyio | asyncio | 848 | raising a custom ExceptionGroup subclass from a TaskGroup gets replaced with a regular TaskGroup | ### Things to check first
- [X] I have searched the existing issues and didn't find my bug already reported there
- [X] I have checked that my bug is still present in the latest release
### AnyIO version
4.7.0
### Python version
3.12
### What happened?
see repro
### How can we reproduce the bug?
```python
import anyio
class MyExceptionGroup(ExceptionGroup):
pass
async def main():
async with anyio.create_task_group() as tg:
raise MyExceptionGroup("hello", [ValueError("bad")])
anyio.run(main, backend="asyncio")
```
I get the expected output on 3.12:
```
+ Exception Group Traceback (most recent call last):
| File "/home/graingert/projects/demo.py", line 9, in main
| raise MyExceptionGroup("hello", [ValueError("bad")])
| MyExceptionGroup: hello (1 sub-exception)
+-+---------------- 1 ----------------
| ValueError: bad
+------------------------------------
During handling of the above exception, another exception occurred:
+ Exception Group Traceback (most recent call last):
| File "/home/graingert/projects/demo.py", line 11, in <module>
| anyio.run(main, backend="asyncio")
| File "/home/graingert/.virtualenvs/anyio_fixture_methods/lib/python3.12/site-packages/anyio/_core/_eventloop.py", line 74, in run
| return async_backend.run(func, args, {}, backend_options)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/graingert/.virtualenvs/anyio_fixture_methods/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2347, in run
| return runner.run(wrapper())
| ^^^^^^^^^^^^^^^^^^^^^
| File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
| return self._loop.run_until_complete(task)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/usr/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
| return future.result()
| ^^^^^^^^^^^^^^^
| File "/home/graingert/.virtualenvs/anyio_fixture_methods/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2335, in wrapper
| return await func(*args)
| ^^^^^^^^^^^^^^^^^
| File "/home/graingert/projects/demo.py", line 8, in main
| async with anyio.create_task_group() as tg:
| File "/home/graingert/.virtualenvs/anyio_fixture_methods/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 815, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Exception Group Traceback (most recent call last):
| File "/home/graingert/projects/demo.py", line 9, in main
| raise MyExceptionGroup("hello", [ValueError("bad")])
| MyExceptionGroup: hello (1 sub-exception)
+-+---------------- 1 ----------------
| ValueError: bad
+------------------------------------
```
I get a different output on trio:
```
+ Exception Group Traceback (most recent call last):
| File "/home/graingert/projects/demo.py", line 11, in <module>
| anyio.run(main, backend="trio")
| File "/home/graingert/.virtualenvs/anyio_fixture_methods/lib/python3.12/site-packages/anyio/_core/_eventloop.py", line 74, in run
| return async_backend.run(func, args, {}, backend_options)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/graingert/.virtualenvs/anyio_fixture_methods/lib/python3.12/site-packages/anyio/_backends/_trio.py", line 1000, in run
| return trio.run(func, *args)
| ^^^^^^^^^^^^^^^^^^^^^
| File "/home/graingert/.virtualenvs/anyio_fixture_methods/lib/python3.12/site-packages/trio/_core/_run.py", line 2306, in run
| raise runner.main_task_outcome.error
| File "/home/graingert/projects/demo.py", line 8, in main
| async with anyio.create_task_group() as tg:
| File "/home/graingert/.virtualenvs/anyio_fixture_methods/lib/python3.12/site-packages/anyio/_backends/_trio.py", line 191, in __aexit__
| return await self._nursery_manager.__aexit__(exc_type, exc_val, exc_tb)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/graingert/.virtualenvs/anyio_fixture_methods/lib/python3.12/site-packages/trio/_core/_run.py", line 959, in __aexit__
| raise combined_error_from_nursery
| ExceptionGroup: Exceptions from Trio nursery (1 sub-exception)
+-+---------------- 1 ----------------
| Exception Group Traceback (most recent call last):
| File "/home/graingert/projects/demo.py", line 9, in main
| raise MyExceptionGroup("hello", [ValueError("bad")])
| ExceptionGroup: hello (1 sub-exception)
+-+---------------- 1 ----------------
| ValueError: bad
+------------------------------------
``` | closed | 2024-12-29T13:45:08Z | 2024-12-29T13:48:21Z | https://github.com/agronholm/anyio/issues/848 | [
"bug"
] | graingert | 1 |
anapaulagomes/pytest-picked | pytest | 20 | Initial Update | The bot created this issue to inform you that pyup.io has been set up on this repo.
Once you have closed it, the bot will open pull requests for updates as soon as they are available.
| closed | 2018-11-13T18:20:26Z | 2018-11-13T18:21:09Z | https://github.com/anapaulagomes/pytest-picked/issues/20 | [] | pyup-bot | 0 |
dmlc/gluon-nlp | numpy | 994 | Support Python 3.5 | We no longer support python 3.5
https://github.com/dmlc/gluon-nlp/blob/bfa5503e81ae53d26b9f202bce4fedcf09e47db4/setup.py#L35-L40 .
However, the default python3 in Ubuntu 16.04 still uses 3.5 and we should not drop the support. | closed | 2019-10-29T18:20:52Z | 2019-11-15T23:39:56Z | https://github.com/dmlc/gluon-nlp/issues/994 | [
"enhancement",
"release focus"
] | sxjscience | 22 |
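For reference, keeping 3.5 in the supported range comes down to the `python_requires` and classifier fields in `setup.py`. A hedged sketch of the relevant fields follows; the names and classifier list are illustrative, not the actual gluon-nlp metadata:

```python
# Illustrative setup.py fields only; the real gluon-nlp values may differ.
# python_requires makes pip refuse installation on unsupported interpreters,
# so ">=3.5" keeps Ubuntu 16.04's default python3 (3.5) working.
SETUP_KWARGS = {
    "name": "example-package",
    "python_requires": ">=3.5",
    "classifiers": [
        "Programming Language :: Python :: 3.5",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
    ],
}
```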
aiogram/aiogram | asyncio | 851 | Method to remove InlineKeyboardButton from markup | As I understand it, InlineKeyboardMarkup is now just a dict with lists, so a method like pop() would be useful and would make code cleaner, at least for newbies like me.
For now, to make a dynamic inline keyboard that deletes buttons after the user clicks on them, I need to recreate the entire markup from the changed list; this could be simplified by InlineKeyboardMarkup.pop() or something like that. | closed | 2022-02-27T07:30:32Z | 2023-08-04T18:24:57Z | https://github.com/aiogram/aiogram/issues/851 | [
"enhancement",
"good first issue",
"under discussion"
] | Sciti | 1 |
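A sketch of the helper requested in the issue above, modelling the inline keyboard as the list-of-button-rows structure it describes. The names here are illustrative, not aiogram API:

```python
# Hypothetical helper: remove a button (identified by its callback_data)
# from a keyboard modelled as a list of button rows, dropping any row
# that becomes empty, so the markup can be rebuilt without manual loops.
def pop_button(keyboard, callback_data):
    new_rows = []
    for row in keyboard:
        kept = [b for b in row if b.get("callback_data") != callback_data]
        if kept:
            new_rows.append(kept)
    return new_rows

kb = [
    [{"text": "A", "callback_data": "a"}, {"text": "B", "callback_data": "b"}],
    [{"text": "C", "callback_data": "c"}],
]
kb = pop_button(kb, "c")  # the second row disappears entirely
```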
lucidrains/vit-pytorch | computer-vision | 48 | Multi-label classification | My dataset contains 5 unique labels for instance:
1 0 NaN 1 0
Where 1 means it has that feature, 0 means it doesn't, and NaN means that we have no observation about that feature and it shouldn't be included in the loss.
Is it possible to make ViT multi-label for a dataset like this? | closed | 2020-12-23T20:54:23Z | 2024-05-29T14:00:11Z | https://github.com/lucidrains/vit-pytorch/issues/48 | [] | ramizdundar | 5 |
LibrePhotos/librephotos | django | 1,107 | HEIC selfie images face recondition positions are off | After #1103 was fixed, we noticed that selfie HEIC image files face recognition position calculation is off. | closed | 2023-12-29T19:32:09Z | 2024-01-02T15:20:33Z | https://github.com/LibrePhotos/librephotos/issues/1107 | [
"bug"
] | sefininio | 3 |
python-restx/flask-restx | api | 571 | Flask-restx with Marshmallow schemas | **Ask a question
I have a Marshmallow schema that I want to use in Flask-RESTX. The parameters are already defined in the Marshmallow schema, and I want to reuse them without defining them again with @api.expect().**
A clear and concise question
**Additional context**
Add any other context or screenshots about the feature request here.
| open | 2023-10-04T06:51:28Z | 2023-10-06T11:18:25Z | https://github.com/python-restx/flask-restx/issues/571 | [
"question"
] | DeepakView | 2 |
nltk/nltk | nlp | 2,579 | AttributeError: 'list' object has no attribute 'get' | The issue happens in the nltk/tag/sequential.py file when trying to tag a list of words:
Stacktrace:
File "/home/mario/Desktop/Projects/AlbanianNLP/simple_test.py", line 10, in main
tagged_words = unitager.tag(words)
File "/usr/local/lib/python3.8/dist-packages/nltk/tag/sequential.py", line 63, in tag
tags.append(self.tag_one(tokens, i, tags))
File "/usr/local/lib/python3.8/dist-packages/nltk/tag/sequential.py", line 83, in tag_one
tag = tagger.choose_tag(tokens, index, history)
File "/usr/local/lib/python3.8/dist-packages/nltk/tag/sequential.py", line 142, in choose_tag
return self._context_to_tag.get(context)
AttributeError: 'list' object has no attribute 'get'
| closed | 2020-08-04T21:29:39Z | 2020-08-24T13:31:46Z | https://github.com/nltk/nltk/issues/2579 | [] | mmargegaj | 4 |
twopirllc/pandas-ta | pandas | 798 | Bollinger Bands (BBANDS) error | This is my result when I test Bollinger Bands (BBANDS); a lot of points have NaN values!
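The NaN points at the start of each band are expected behaviour rather than a bug (consistent with the wontfix label on this issue): Bollinger Bands are built on a rolling mean and rolling standard deviation, which are undefined until a full window of data exists. A minimal sketch of the warm-up effect, using a plain rolling mean for illustration:

```python
# Sketch: a rolling statistic of window `length` cannot produce a value
# for the first length-1 points, so those rows come out as NaN/None.
def rolling_mean(values, length):
    out = []
    for i in range(len(values)):
        if i + 1 >= length:
            window = values[i - length + 1 : i + 1]
            out.append(sum(window) / length)
        else:
            out.append(None)  # not enough history yet
    return out

mids = rolling_mean(list(range(10)), 5)
# the first 4 entries have no full window yet; mids[4] is mean(0..4) = 2.0
```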
 | closed | 2024-06-11T22:26:59Z | 2024-06-12T00:02:47Z | https://github.com/twopirllc/pandas-ta/issues/798 | [
"bug",
"wontfix"
] | Khanhlinhdang | 2 |
pennersr/django-allauth | django | 3,497 | Inconsistent Domain in Email for Sending Email Confirmation Message | This report highlights an inconsistency in the email confirmation message content. When an email is sent to confirm an account, the attached link contains a reference to "localhost" instead of the corresponding server domain. This discrepancy can cause confusion among users when accessing the confirmation link. It is recommended to review and correct the email confirmation link generation process to ensure it accurately reflects the server domain.
The error lies within the build_absolute_uri function, which generates the absolute URL including the domain. This function is incorrectly generating the URL with "localhost" instead of using the corresponding server domain. To rectify the issue, it is necessary to adjust the logic in the build_absolute_uri function to consistently use the server domain in the email confirmation.
This error impacts the rendering of templates: email_confirmation_message | closed | 2023-10-26T04:53:41Z | 2023-10-27T08:03:08Z | https://github.com/pennersr/django-allauth/issues/3497 | [] | prietojulii | 1 |
plotly/dash-html-components | dash | 76 | Bug in webpack-cli breaking | I'm getting this known bug: https://github.com/webpack/webpack-cli/issues/607
E.g., in the CI here: https://circleci.com/gh/plotly/dash-html-components/328?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link
I think we just need to bump `webpack-cli`, I'll see if that works. | open | 2018-11-02T13:45:42Z | 2018-11-06T02:00:46Z | https://github.com/plotly/dash-html-components/issues/76 | [] | rmarren1 | 2 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,785 | module 'nodriver' has no attribute 'loop' |
```
    uc.loop().run_until_complete(main())
       ^^^^^^^
AttributeError: module 'nodriver' has no attribute 'loop'
```
I tried to run the example code listed on the `nodriver` GitHub page for the first time, but it doesn't work:
```python
import asyncio
import nodriver as uc

async def main():
    browser = await uc.start()
    page = await browser.get('https://www.nowsecure.nl')
    await page.save_screenshot()
    await page.get_content()
    await page.scroll_down(150)
    elems = await page.select_all('*[src]')
    for elem in elems:
        await elem.flash()
    page2 = await browser.get('https://twitter.com', new_tab=True)
    page3 = await browser.get('https://github.com/ultrafunkamsterdam/nodriver', new_window=True)
    for p in (page, page2, page3):
        await p.bring_to_front()
        await p.scroll_down(200)
        await p  # wait for events to be processed
        await p.reload()
        if p != page3:
            await p.close()

if __name__ == '__main__':
    # since asyncio.run never worked (for me)
    uc.loop().run_until_complete(main())
``` | closed | 2024-03-11T03:54:27Z | 2024-03-11T04:30:09Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1785 | [] | RobertAzovski | 1 |
modin-project/modin | pandas | 6,609 | HDK: to_pandas(): Cache pandas DataFrame | Since the HDK frame is immutable, the pandas DataFrames, returned by the to_pandas() method, are always identical. Adding a cache to this method could significantly improve performance of the methods, that are defaulting to pandas. | closed | 2023-09-27T14:57:37Z | 2024-05-16T11:01:37Z | https://github.com/modin-project/modin/issues/6609 | [
"new feature/request ๐ฌ",
"HDK"
] | AndreyPavlenko | 1 |
NullArray/AutoSploit | automation | 585 | Unhandled Exception (57c1d7b4d) | Autosploit version: `3.0`
OS information: `Linux-4.19.0-kali3-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error message: `argument of type 'NoneType' is not iterable`
Error traceback:
```
Traceback (most recent call):
File "/home/ghizoka/Downloads/AutoSploit-master/autosploit/main.py", line 117, in main
terminal.terminal_main_display(loaded_tokens)
File "/home/ghizoka/Downloads/AutoSploit-master/lib/term/terminal.py", line 474, in terminal_main_display
if "help" in choice_data_list:
TypeError: argument of type 'NoneType' is not iterable
```
Metasploit launched: `False`
| closed | 2019-03-24T03:54:27Z | 2019-04-02T20:25:18Z | https://github.com/NullArray/AutoSploit/issues/585 | [] | AutosploitReporter | 0 |
davidsandberg/facenet | computer-vision | 1,256 | i-JOiN APP | open | 2024-10-28T08:54:00Z | 2024-10-28T08:54:00Z | https://github.com/davidsandberg/facenet/issues/1256 | [] | orb-jaydee | 0 | |
encode/apistar | api | 574 | Component initialization and finalization | In 3.x it was possible to `yield` a component from the `init` function, so both initialization and deinitialization were possible. Currently the only workaround is to use hooks, which is a bit confusing because a component's code is then scattered across both components and hooks. However, even hooks don't work every time, because different component instances are given to the hook and the handler (#490, for example `on_error` hooks).
The impact of this is that it is effectively impossible to finalize components. The only workaround I can imagine is to use `threading.local`, which is really strange for a single-threaded model.
Any thoughts on this? | closed | 2018-05-30T03:33:49Z | 2018-06-03T22:26:29Z | https://github.com/encode/apistar/issues/574 | [] | pslacerda | 4 |
deepfakes/faceswap | deep-learning | 1,249 | Preview problem | Hello, I'm new; the preview does not match the output folder timelapse. Is that normal? Thank you!
pictures are similar...
reinstall does not fix :/ | closed | 2022-07-15T22:50:27Z | 2022-07-16T10:08:04Z | https://github.com/deepfakes/faceswap/issues/1249 | [] | machinbidule62 | 2 |
huggingface/text-generation-inference | nlp | 2,305 | Fixes #3908 | Fixes #3908

_Originally posted by @urugator in https://github.com/mobxjs/mobx/pull/3909_ | closed | 2024-07-25T11:05:07Z | 2024-07-25T15:09:40Z | https://github.com/huggingface/text-generation-inference/issues/2305 | [] | Paxosgold | 2 |
onnx/onnx | machine-learning | 5,988 | Version converter: No Adapter From Version 16 for Identity | # Ask a Question
### Question
I meet the following issue while trying to convert the onnx model of Opset16 to Opset15 :
"**adapter_lookup: Assertion `false` failed: No Adapter From Version $16 for Identity**"
If I have to use Opset 15 and currently only have the Opset 16 version of the relevant ONNX resources available, do you have any suggestions?

| open | 2024-03-02T13:51:14Z | 2024-03-06T19:14:23Z | https://github.com/onnx/onnx/issues/5988 | [
"question"
] | lsp2 | 4 |
graphql-python/graphql-core | graphql | 173 | GraphQLSchema serialization using pickle | Is it possible to serialize an instance of GraphQLSchema?
We would like to dump and load an instance of GraphQLSchema to/from a file for performance reasons.
It does not appear to be serializable using pickle by default.
```
> res = pickle.dumps(schema)
E AttributeError: Can't pickle local object 'extend_schema_impl.<locals>.build_object_type.<locals>.<lambda>'
```
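The failure above is inherent to how pickle treats local function objects. A minimal sketch (not graphql-core itself) of the same mechanism: an object holding a lambda defined inside a function cannot be pickled, because pickle serializes functions by reference and local objects are not importable.

```python
import pickle

# Minimal stand-in for a schema object whose builder stored a local
# lambda (as extend_schema_impl does); pickling it fails the same way.
class SchemaLike:
    def __init__(self):
        self.resolve_type = lambda obj: obj  # local object, unpicklable

try:
    pickle.dumps(SchemaLike())
    pickle_failed = False
except (AttributeError, pickle.PicklingError):
    pickle_failed = True
```

A workaround worth trying (an assumption, not a confirmed fix for your setup): serialize the schema's SDL text with graphql-core's `print_schema` and rebuild it with `build_schema` on load, instead of pickling the object graph.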
Thanks in advance. | closed | 2022-06-02T15:51:25Z | 2022-11-04T13:21:39Z | https://github.com/graphql-python/graphql-core/issues/173 | [
"enhancement"
] | rpgreen | 24 |
KaiyangZhou/deep-person-reid | computer-vision | 369 | Why does CUHK03 only have two cameras? | Hi Bro!
I ran your code to train on CUHK03, but there are only two cameras. However, in the description of CUHK03 there are exactly five cameras.
----------------------------------------
subset | # ids | # images | # cameras
----------------------------------------
train | 1467 | 14097 | 2
query | 700 | 1400 | 2
gallery | 700 | 5332 | 2
| open | 2020-08-28T01:30:03Z | 2020-08-28T10:41:40Z | https://github.com/KaiyangZhou/deep-person-reid/issues/369 | [] | kevinbro96 | 2 |
scikit-learn/scikit-learn | machine-learning | 30,991 | Halving searches crash when using a PredefinedSplit as cv | ### Describe the bug
In some cases, it might be necessary to use a predefined split with an explicit training and testing set instead of cross-validation (for example if the data has a specific distribution of properties that should be the same in a training and testing set).
However, when attempting to use a halving search (`HalvingRandomSearchCV ` or `HalvingGridSearchCV`) with a PredefinedSplit as cv, the search crashes with the error
> sklearn.utils._param_validation.InvalidParameterError: The 'n_samples' parameter of resample must be an int in the range [1, inf) or None. Got 0 instead.
(after a long stack of internal calls).
However, [as I understand it](https://scikit-learn.org/stable/modules/grid_search.html#successive-halving-user-guide), the basic idea of increasing a resource (like the number of samples taken from the (predefined) split) while reducing the amount of candidates should not depend on the specific type of split; therefore, using a predefined split should work with a halving search as it would with a non-halving search.
### Steps/Code to Reproduce
```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.experimental import enable_halving_search_cv
from sklearn.model_selection import PredefinedSplit, HalvingRandomSearchCV
import numpy as np
# 10 input features
# 4 output features
# 80 training data points
# 20 testing data points
train_input_values = np.random.rand(80, 10)
train_output_values = np.random.rand(80, 4)
test_input_values = np.random.rand(20, 10)
test_output_values = np.random.rand(20, 4)
# Define the search parameters
model = RandomForestRegressor()
hyperparameter_grid = {"n_estimators": [10, 100]}
# Define the train/test split
total_input_values = np.concat((train_input_values, test_input_values))
total_output_values = np.concat((train_output_values, test_output_values))
cv = PredefinedSplit([-1] * len(train_input_values) + [0] * len(test_input_values))
# Perform the search
random_search = HalvingRandomSearchCV(model, hyperparameter_grid, cv=cv)
random_search.fit(total_input_values, total_output_values)
```
### Expected Results
The halving random search should work with a predefined split just as it would with a cross-validation split, increasing the resources while reducing the amount of candidates.
### Actual Results
```
Traceback (most recent call last):
File "<python-input-0>", line 27, in <module>
random_search.fit(total_input_values, total_output_values)
~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "venv/lib/python3.13/site-packages/sklearn/base.py", line 1389, in wrapper
return fit_method(estimator, *args, **kwargs)
File "venv/lib/python3.13/site-packages/sklearn/model_selection/_search_successive_halving.py", line 253, in fit
super().fit(X, y=y, **params)
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "venv/lib/python3.13/site-packages/sklearn/base.py", line 1389, in wrapper
return fit_method(estimator, *args, **kwargs)
File "venv/lib/python3.13/site-packages/sklearn/model_selection/_search.py", line 1024, in fit
self._run_search(evaluate_candidates)
~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^
File "venv/lib/python3.13/site-packages/sklearn/model_selection/_search_successive_halving.py", line 357, in _run_search
results = evaluate_candidates(
candidate_params, cv, more_results=more_results
)
File "venv/lib/python3.13/site-packages/sklearn/model_selection/_search.py", line 982, in evaluate_candidates
for (cand_idx, parameters), (split_idx, (train, test)) in product(
~~~~~~~^
enumerate(candidate_params),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
enumerate(cv.split(X, y, **routed_params.splitter.split)),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "venv/lib/python3.13/site-packages/sklearn/model_selection/_search_successive_halving.py", line 41, in split
test_idx = resample(
test_idx,
...<2 lines>...
n_samples=int(self.fraction * len(test_idx)),
)
File "venv/lib/python3.13/site-packages/sklearn/utils/_param_validation.py", line 206, in wrapper
validate_parameter_constraints(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
parameter_constraints, params, caller_name=func.__qualname__
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "venv/lib/python3.13/site-packages/sklearn/utils/_param_validation.py", line 98, in validate_parameter_constraints
raise InvalidParameterError(
...<2 lines>...
)
sklearn.utils._param_validation.InvalidParameterError: The 'n_samples' parameter of resample must be an int in the range [1, inf) or None. Got 0 instead.
```
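Reading the traceback above, the crash comes from the halving splitter's `n_samples=int(self.fraction * len(test_idx))`: with a small predefined test fold, the requested sample count truncates to zero in early iterations. A minimal sketch of the arithmetic (the fraction value below is illustrative):

```python
# Mirror of the failing expression from _search_successive_halving.py:
# the test fold is subsampled by the fraction of resources used in the
# current halving iteration.
def subsampled_test_size(test_fold_len: int, fraction: float) -> int:
    return int(fraction * test_fold_len)

# With a 20-sample predefined test fold and a small first-iteration
# fraction, the count truncates to 0, which resample() then rejects
# with InvalidParameterError.
assert subsampled_test_size(20, 0.02) == 0
assert subsampled_test_size(20, 0.5) == 10
```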
### Versions
```shell
System:
python: 3.13.2 (main, Feb 4 2025, 14:51:09) [Clang 16.0.0 (clang-1600.0.26.6)]
executable: venv/bin/python
machine: macOS-14.3-arm64-arm-64bit-Mach-O
Python dependencies:
sklearn: 1.6.1
pip: 25.0
setuptools: 75.8.2
numpy: 2.2.3
scipy: 1.15.2
Cython: None
pandas: 2.2.3
matplotlib: None
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: openmp
internal_api: openmp
num_threads: 8
prefix: libomp
filepath: venv/lib/python3.13/site-packages/sklearn/.dylibs/libomp.dylib
version: None
``` | open | 2025-03-13T15:16:26Z | 2025-03-20T21:53:44Z | https://github.com/scikit-learn/scikit-learn/issues/30991 | [
"Bug"
] | Korne127 | 3 |
zwczou/weixin-python | flask | 68 | Can this API be used to access other WeChat Official Accounts and obtain their login token? | Can this API be used to access other WeChat Official Accounts and obtain their login token? | closed | 2020-05-12T03:29:12Z | 2020-12-02T08:32:47Z | https://github.com/zwczou/weixin-python/issues/68 | [] | LSHXS | 1 |
pytest-dev/pytest-xdist | pytest | 998 | how to debug crashed gw worker? | Hi,
we are running our tests environment using pytest-xdist inside kubernetes pods. during the last three weeks we started suffering from crashed workers and we don't know how to debug what is the root cause of the failure.
I would kindly ask someone to give me a hint on how to start looking at this, as I only see failures with crashed workers, not the reason why.
[gw8] Python 3.8.10 (default, May 26 2023, 14:05:08) -- [GCC 9.4.0]
13:33:50 [gw2] node down: Not properly terminated
13:33:50 [gw2] [ 50%] FAILED testsuite/ond/sdc/ono_oni_vz/ui/create_service/test_ond_41377_catalog_page_search.py::test_ond_41377_catalog_page_search
13:33:50
13:33:50 replacing crashed worker gw2
13:33:50
[gw9] linux Python 3.8.10 cwd: /home/jenkins/agent/workspace/QA/orion-orchestrator
13:33:50
13:33:51 | open | 2024-01-01T13:10:34Z | 2024-07-12T07:36:56Z | https://github.com/pytest-dev/pytest-xdist/issues/998 | [] | Formartha | 2 |
JaidedAI/EasyOCR | deep-learning | 900 | How to retrain the easyocr for custom Vietnamese dataset | I want to retrain the easyocr for Vietnamese text can someone guide how to retrain the model like step by step | open | 2022-12-06T07:02:43Z | 2023-04-11T14:42:42Z | https://github.com/JaidedAI/EasyOCR/issues/900 | [] | rama298 | 1 |
modAL-python/modAL | scikit-learn | 161 | Can I use modAL with estimators from other libraries than scikit-learn like xgboost? | Hi there,
I have already trained some good working estimators (xgboost, catboost & lightgbm).
I would like to add an active learner, because we need to decide which data to label continuously.
The documentation says that I need to use a scikit-learn estimator object. Does that mean I can't use the models from xgboost, catboost & lightgbm? I used the models from the libraries with the same names.
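On the estimator question: modAL, like most scikit-learn-style libraries, mostly relies on the estimator *interface* rather than a concrete class, and xgboost, lightgbm and catboost all ship scikit-learn-compatible wrapper classes (e.g. `XGBClassifier`), so those are worth trying directly. A toy sketch of the surface an estimator typically needs to expose (illustrative only, not modAL code):

```python
# Any object exposing fit / predict / predict_proba in the scikit-learn
# style can usually be passed where an "estimator" is expected; this toy
# majority-class model shows the minimal surface.
class SklearnLikeEstimator:
    def fit(self, X, y):
        y = list(y)
        self.majority_ = max(set(y), key=y.count)
        return self

    def predict(self, X):
        return [self.majority_ for _ in X]

    def predict_proba(self, X):
        return [[1.0] for _ in X]

est = SklearnLikeEstimator().fit([[0], [1], [2]], [1, 1, 0])
preds = est.predict([[5], [6]])  # -> [1, 1]
```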
And another question (for my understanding). Do I give an estimator that is already trained, or does the active learner train a model from scratch?
I am new to the field of active learning, so thank you very much! | open | 2022-10-27T12:52:41Z | 2023-06-28T08:48:10Z | https://github.com/modAL-python/modAL/issues/161 | [] | viandres | 1 |
explosion/spaCy | data-science | 13,222 | POS Tagging is Broken for Sliced Pipelines |
Hey everyone,
I'm trying to lemmatize a text which I cleaned earlier. The issue I had was due to runtime so I decided to cut down certain pipelines out since I wanted lemmas only. When I only enable lemmas I got some warnings but I also wanted to filter based on POS tags such as ['ADJ', 'NOUN', 'VERB', 'ADV']. In order to generate .pos_ attribute, I enabled pipeline components for that which documentation said `tagger` and `parser`. However using those only doesn'y really work here as I am not getting expected POS tags. When I use the full pipeline I get expected results but not when I use certain pipelines. Is this behaviour expected? If so, why? How do I know which pipelines to exclude as I am a bit of confused now.
Thanks in advance!
## How to reproduce the behaviour
Here is the code sample that doesn't work:
```
nlp = spacy.load('en_core_web_sm', enable=['lemmatizer', 'tagger', "parser", "attribute_ruler"])
text = """
If you like the taste of Sweet Low get this If you don t don t Couldn t get through one cup of coffee
I m gonna give Stevia Extract in the Raw a try It s made by the folks at Sugar in the Raw Here s
what they claim Stevia Extract In The Raw gets its delicious natural sweetness from Rebiana an
extract from the Stevia plant This extract is the sweetest part of the plant and has recently
been isolated to provide pure sweetening power without the licorice like aftertaste that many
of our predecessors exhibited All you get is the sweet flavor without any calories
We ll see Simply Stevia is simply nasty
"""
print([t.pos_ for t in nlp(text)])
```
The one that works:
```
nlp = spacy.load('en_core_web_sm')
print([t.pos_ for t in nlp(text)])
```
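One possible explanation to test (an assumption based on how the small spaCy pipelines share token representations, not something confirmed here): in `en_core_web_sm` the tagger gets its features from the shared `tok2vec` component, so enabling the tagger without `tok2vec` leaves it without usable input. Keeping `tok2vec` in the `enable` list may restore correct tags:

```python
# Hypothetical fix: keep the shared embedding component enabled
# alongside every component that listens to it.
ENABLE = ["tok2vec", "tagger", "parser", "attribute_ruler", "lemmatizer"]
# nlp = spacy.load("en_core_web_sm", enable=ENABLE)  # assumed fix, untested here
```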

## Your Environment
- **spaCy version:** 3.7.2
- **Platform:** Linux-5.15.133+-x86_64-with-glibc2.31
- **Python version:** 3.10.12
- **Pipelines:** en_core_web_sm (3.7.1), en_core_web_lg (3.7.1) | closed | 2024-01-06T23:35:39Z | 2024-01-08T15:38:59Z | https://github.com/explosion/spaCy/issues/13222 | [
"lang / en",
"models",
"feat / tagger"
] | lordsoffallen | 1 |
ultralytics/yolov5 | deep-learning | 13,089 | Unable to Detect faces in single face image and giving false positives | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Detection
### Bug
I know this sounds silly, but I'm stuck on face detection using YOLOv5. The algorithm works fine for some images with multiple faces, with great accuracy, but if we feed single-face images (only one face in the image), it does not detect the face and also gives a false positive which is not a face. (For another person's face image, I got this image as output.)

here are the examples;
Input Image:

Output Image for code below:

Is there any processing to be done to the image? In the result you can see the rotated image returned by YOLO.
and below is another example which it perfectly works with same code.
Input Image:

Output Image:

### Environment
- YOLO- YOLOv5 CUDA:0,CPU:i7-1270P, 16GB RAM
- OS - Ubuntu-24.04LTS
- Python - Python 3.11.6
### Minimal Reproducible Example
```
import os
import logging
import cv2
from PIL import Image, ImageDraw
def upscale_faces_with_boxes(input_image_path, output_image_path, a, confidence_threshold=0.5):
"""
Detect faces in an image using YOLO, draw bounding boxes around them, and save the resulting image.
:param input_image_path: Path to the input image
:param output_image_path: Path to the output image where the result will be saved
:param a: An instance of YOLODetector already initialized
:param confidence_threshold: Minimum confidence required to consider a detected face valid (default is 0.5)
"""
# Read the input image
img = cv2.imread(input_image_path)
if img is None:
logging.error(f"Could not read image {input_image_path}")
return
# Detect faces using YOLO
bounding_boxes = a.predict(img)[0]
logging.info('facial features: %s', bounding_boxes)
# Load the original image using PIL
original_image = Image.open(input_image_path)
if original_image.mode == 'RGBA':
original_image = original_image.convert('RGB')
# Draw bounding boxes on the original image
draw = ImageDraw.Draw(original_image)
for bbox_group in bounding_boxes:
for bbox in bbox_group:
left, top, right, bottom = bbox
draw.rectangle([left, top, right, bottom], outline="red", width=3)
# Save the image with bounding boxes
original_image.save(output_image_path)
logging.info(f"Image with bounding boxes saved to {output_image_path}")
input_image_path = 'IMG_20240613_165627217.jpg'
output_image_path = 'outputfolder/image_with_boxes.jpg'
from yoloface_master.YOLOFaceDetector import YoloDetector
a = YoloDetector()
upscale_faces_with_boxes(input_image_path, output_image_path, a)
```
### Additional
It is not working with single-face images, and I have tried a lot by adjusting parameters in the YOLO detector initialiser. Is there a fix for this?
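A hypothesis worth checking for the rotated output mentioned above (an assumption, not a confirmed diagnosis): `cv2.imread` and `PIL.Image.open` handle the JPEG EXIF orientation tag differently, so boxes detected on one orientation can get drawn on the other. Normalising both load paths to the same orientation should line the boxes up; a sketch of the coordinate remapping for a 90° clockwise rotation:

```python
# Map a (left, top, right, bottom) box from an image of height H onto
# the same image rotated 90 degrees clockwise.
# Under that rotation a point (x, y) maps to (H - y, x).
def rotate_box_90cw(box, img_height):
    left, top, right, bottom = box
    return (img_height - bottom, left, img_height - top, right)

# corners (10, 20) and (30, 40) in a 100-px-tall image map to
# (80, 10) and (60, 30), giving the box (60, 10, 80, 30)
assert rotate_box_90cw((10, 20, 30, 40), 100) == (60, 10, 80, 30)
```

In practice it may be simpler to apply `PIL.ImageOps.exif_transpose` (or OpenCV's EXIF handling) consistently on both the detection and drawing paths than to remap coordinates.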
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-06-13T13:19:07Z | 2024-10-20T19:47:51Z | https://github.com/ultralytics/yolov5/issues/13089 | [
"bug",
"Stale"
] | Raghucharan16 | 7 |
huggingface/text-generation-inference | nlp | 2,302 | How to use a model checkpoint in a local folder? | ### System Info
ghcr.io/huggingface/text-generation-inference 2.0.4
platform windows10
Docker version 27.0.3
llm model:lllyasviel/omost-llama-3-8b-4bits
cuda 12.3
gpu nvidia rtx A6000
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [ ] An officially supported command
- [ ] My own modifications
### Reproduction
C:\Users\Administrator>docker run --gpus all -p 8080:80 -v ./data:/data ghcr.io/huggingface/text-generation-inference:2.0.4 --model-id "F:\Omost-main\checkpoints\models--lllyasviel--omost-llama-3-8b-4bits" --max-total-tokens 9216 --cuda-memory-fraction 0.8
### Expected behavior
Even though I set model-id to my local path, Docker raises an error.

| open | 2024-07-25T04:26:44Z | 2024-08-25T01:57:54Z | https://github.com/huggingface/text-generation-inference/issues/2302 | [
"Stale"
] | zk19971101 | 2 |
tfranzel/drf-spectacular | rest-api | 1,115 | Customize ViewSet.list action to return a json-object instead of a list. | **Describe the bug**
I have a viewset that looks like this:
```python
class CadastreView(ReadOnlyModelViewSet):
pagination_class = PageNumberPagination
filterset_class = CadastreFilterSet
serializer_class = TableSerializer
def list(self, request, *args, **kwargs) -> Response:
...
```
for which spectacular renders this very cute schema (example value) in swagger:
```json
{
"count": 123,
"next": "http://api.example.org/accounts/?page=4",
"previous": "http://api.example.org/accounts/?page=2",
"results": [
{
"header": [
{
"id": 0,
"title": "string",
"editable": false,
"type": "string"
}
],
"body": [
[
{
"id": 0,
"value": "string",
"type": "string",
"comment": "string",
"column_id": 0
}
]
]
}
]
}
```
The problem here is that I don't need the results to return a list of objects. I want it to return an object. Like this:
```python
{'count': 1,
'next': None,
'previous': None,
'results': {'header': [{'id': None,
'title': 'ะะตัะธะพะด',
'editable': False,
'type': 'year'},
{'id': None,
'title': 'ะกัะฑัะตะบั ะ ะค',
'editable': False,
'type': 'federation_subject'},
{'id': 1, 'title': 'ะะพะบั', 'editable': True, 'type': 'indicator'}],
'body': [[{'id': 1,
'value': '2000',
'type': 'year',
'comment': None,
'column_id': None},
{'id': 1,
'value': 'ะะพัะบะฒะฐ',
'type': 'federation_subject',
'comment': None,
'column_id': None},
{'id': 1,
'value': '381.704671695895',
'type': 'indicator',
'comment': 'jYuSUDmInUAL',
'column_id': 1}]]}}
```
like with `forced_singular_serializer`, but with preserving all that pagination stuff | closed | 2023-11-28T14:39:14Z | 2023-11-29T09:46:13Z | https://github.com/tfranzel/drf-spectacular/issues/1115 | [] | Alkalit | 3 |
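One hedged way to get an object under `results` while keeping the pagination envelope is a custom pagination class. This is a sketch, not a drf-spectacular feature; `Response` and `PageNumberPagination` are stubbed below so the snippet runs without DRF installed — in a real project they come from `rest_framework.response` and `rest_framework.pagination`, and `self.page` is set by the real `paginate_queryset()`:

```python
class Response(dict):
    """Stand-in for rest_framework.response.Response."""

class PageNumberPagination:
    """Minimal stand-in for rest_framework.pagination.PageNumberPagination;
    the real class sets ``self.page`` in paginate_queryset()."""
    def get_next_link(self):
        return None
    def get_previous_link(self):
        return None

class SingularResultsPagination(PageNumberPagination):
    def get_paginated_response(self, data):
        # ``data`` is whatever the serializer produced for the page; pass it
        # through unchanged instead of assuming a list of objects.
        return Response(
            count=self.page.paginator.count,
            next=self.get_next_link(),
            previous=self.get_previous_link(),
            results=data,
        )
```

For the rendered schema, DRF paginators also expose `get_paginated_response_schema(...)`, which drf-spectacular consults when building the envelope, so overriding that the same way should fix the example value too (untested assumption).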
nteract/papermill | jupyter | 365 | Running papermill on Jupytext's `.py` notebook | I'm running papermill on [LSF](https://www.ibm.com/support/knowledgecenter/en/SSETD4_9.1.3/lsf_users_guide/job_submit.html) and for some reason I can run it locally fine but when submitting the job it just gets stuck at 60% for 20 minutes. I used [Jupytext](https://github.com/mwouts/jupytext) to convert my notebook into `.py` notebook and run it like a script with default parameters and it finished in less than 1 minute. I think this has to do with starting/running the kernel. What can I do to diagnose this?
Also, is it possible to use papermill to run Jupytext's converted `.py` file as a script, using the current env's Python interpreter instead of going through a Jupyter kernel? I usually develop scripts iteratively in a Jupyter notebook and then have to run them as Python scripts (partly because of the error seen above; our cluster is finicky). I really love papermill's ability to parametrize scripts without writing verbose `argparse`/CLI parser code, and it comes with YAML parsing. I can just document my arguments using markdown cells in the Jupyter notebook, which can be turned into comments in the `.py` file with Jupytext.
It would be amazing if I can use Jupytext to convert my notebook, papermill to inject parameter (by parsing the tags from the `.py` file, which would appear as a comment, ex: `# + {"tags": ["parameters"]}`), and run it like a normal script. The parametrized script can be a temp file and deleted after execution. Is it possible to do such thing? Where should I go first to try to implement such feature?
Basically, it would be running papermill with `--prepare-only` then convert to `.py` with jupytext, then run the script with no parameters. Right now, if I do `.py` as the output folder extension, it returns the file with the notebook's JSON, how do I make it pipe directly into Jupytext and possibly execute it? | open | 2019-05-16T23:33:35Z | 2020-06-29T08:18:53Z | https://github.com/nteract/papermill/issues/365 | [] | hoangthienan95 | 16 |
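The pipeline described above (parametrize with `--prepare-only`, convert with jupytext, run with the current interpreter) can be sketched as glue code. This is untested scaffolding, not an official papermill feature; it only builds the three commands (the flags are the documented `papermill`/`jupytext` CLIs) and leaves execution to the caller:

```python
import sys
import tempfile
from pathlib import Path

def parametrize_to_script(notebook, params, workdir=None):
    """Sketch of the proposed pipeline: inject parameters with
    ``papermill --prepare-only``, convert the prepared notebook to a ``.py``
    script with jupytext, then run it with the current interpreter.
    Returns the three command lists; run them in order, e.g. with
    ``subprocess.check_call(cmd)``."""
    workdir = Path(workdir or tempfile.mkdtemp())
    prepared = workdir / "prepared.ipynb"
    script = workdir / "prepared.py"
    # papermill's -p flag takes "name value" pairs
    param_args = [a for k, v in params.items() for a in ("-p", k, str(v))]
    return [
        ["papermill", "--prepare-only", *param_args, str(notebook), str(prepared)],
        ["jupytext", "--to", "py:percent", str(prepared), "-o", str(script)],
        [sys.executable, str(script)],
    ]
```

The temporary `workdir` can be deleted after execution, which matches the "parametrized script as a temp file" idea above.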
jofpin/trape | flask | 170 | Error 2 No such file directory problem please help | Error 2 no such file directory
I tried to unzip ngrok, but it shows I need the newest unzip version.
Please help; I am using an Ubuntu distribution. | open | 2019-06-25T16:19:43Z | 2019-06-25T16:19:43Z | https://github.com/jofpin/trape/issues/170 | [] | 303103 | 0 |
seleniumbase/SeleniumBase | web-scraping | 2,183 | `PytestAssertRewriteWarning` if initializing multithreaded tests via `python` | ## `PytestAssertRewriteWarning` if initializing multithreaded tests via `python`
Steps to reproduce:
Run the following script with `python`: (NOT `pytest` directly)
```python
import pytest
from seleniumbase import BaseCase
BaseCase.main(__name__, __file__, "-n4", "-s")
@pytest.mark.parametrize("", [[]] * 4)
def test_multi_threaded(sb):
sb.open("about:blank")
sb.sleep(1)
```
A warning will appear:
```
============================== warnings summary ==============================
../../../.virtualenvs/sbase12/lib/python3.12/site-packages/_pytest/config/__init__.py:1204
/Users/michael/.virtualenvs/sbase12/lib/python3.12/site-packages/_pytest/config/__init__.py:1204:
PytestAssertRewriteWarning: Module already imported so cannot be rewritten: seleniumbase
self._mark_plugins_for_rewrite(hook)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
======================== 4 passed, 1 warning in 3.31s ========================
```
--------
According to https://stackoverflow.com/a/54666289/7058266, I can use `subprocess.call()` instead of `pytest.main()` in those situations in order to avoid the warning (where the situation is running multithreaded pytest tests). | closed | 2023-10-13T04:33:08Z | 2023-10-13T20:08:28Z | https://github.com/seleniumbase/SeleniumBase/issues/2183 | [
"bug"
] | mdmintz | 1 |
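The `subprocess.call()` workaround mentioned at the end of the SeleniumBase issue above can be sketched as follows. It is a sketch of the linked Stack Overflow answer, not SeleniumBase API; the helper only builds the command list:

```python
import sys

def run_pytest_in_subprocess(test_file, *pytest_args):
    """Build a command that launches pytest in a fresh interpreter, so that
    seleniumbase is not imported before pytest can install its
    assertion-rewrite import hook.  Execute the returned command with
    ``subprocess.call(cmd)`` from the ``__main__`` guard instead of calling
    ``BaseCase.main()``/``pytest.main()``."""
    return [sys.executable, "-m", "pytest", test_file, *pytest_args]
```

In the test file this would replace the `BaseCase.main(...)` line with something like `raise SystemExit(subprocess.call(run_pytest_in_subprocess(__file__, "-n4", "-s")))` inside an `if __name__ == "__main__":` guard (hypothetical usage).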
sktime/pytorch-forecasting | pandas | 1,068 | [Feature Request] ExpectedNumInstanceSampler and BucketInstanceSampler | There are two samplers in GluonTS :
```python
class ExpectedNumInstanceSampler(InstanceSampler):
"""
Keeps track of the average time series length and adjusts the probability
per time point such that on average `num_instances` training examples are
generated per time series.
```
and
```python
class BucketInstanceSampler(InstanceSampler):
"""
This sample can be used when working with a set of time series that have a
skewed distributions. For instance, if the dataset contains many time
series with small values and few with large values.
The probability of sampling from bucket i is the inverse of its number of
elements.
```
Is there any way we can implement the same in pytorch-forecasting? I'm new to the library, but I'm open to contributing if it is possible.
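As a starting point, the first sampler's idea can be sketched in plain Python. This is an assumption about how the GluonTS behavior could be ported, not existing pytorch-forecasting code:

```python
import random

class ExpectedNumInstanceSampler:
    """Track the average series length seen so far and accept each time
    point with probability num_instances / avg_length, so that on average
    ``num_instances`` training windows are drawn per series (sketch of the
    GluonTS docstring above)."""

    def __init__(self, num_instances=1.0):
        self.num_instances = num_instances
        self.total_length = 0
        self.n_series = 0

    def __call__(self, series_length, rng=random):
        # update the running average of series lengths
        self.n_series += 1
        self.total_length += series_length
        avg_length = self.total_length / self.n_series
        p = min(1.0, self.num_instances / avg_length)
        # sample each candidate time point independently
        return [t for t in range(series_length) if rng.random() < p]
```

`BucketInstanceSampler` would follow the same shape, except the acceptance probability is the inverse of the size of the bucket the series falls into, to counteract skewed value distributions.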
Thoughts? | open | 2022-07-20T13:47:37Z | 2022-07-20T13:47:37Z | https://github.com/sktime/pytorch-forecasting/issues/1068 | [] | manujosephv | 0 |
huggingface/datasets | deep-learning | 7,444 | Excessive warnings when resuming an IterableDataset+buffered shuffle+DDP. | ### Describe the bug
I have a large dataset that I sharded into 1024 shards and saved to disk during pre-processing. During training, I load the dataset using load_from_disk(), convert it into an iterable dataset, shuffle it, and split the shards across the DDP nodes using the recommended method.
However, when the training is resumed mid-epoch, I get thousands of identical warning messages:
```
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
```
### Steps to reproduce the bug
1. Run a multi-node training job using the following python script and interrupt the training after a few seconds to save a mid-epoch checkpoint.
```python
#!/usr/bin/env python
import os
import time
from typing import Dict, List
import torch
import lightning as pl
from torch.utils.data import DataLoader
from datasets import Dataset
from datasets.distributed import split_dataset_by_node
import datasets
from transformers import AutoTokenizer
from more_itertools import flatten, chunked
from torchdata.stateful_dataloader import StatefulDataLoader
from lightning.pytorch.callbacks.on_exception_checkpoint import (
OnExceptionCheckpoint,
)
datasets.logging.set_verbosity_debug()
def dummy_generator():
# Generate 60 examples: integers from $0$ to $59$
# 64 sequences of different lengths
dataset = [
list(range(3, 10)),
list(range(10, 15)),
list(range(15, 21)),
list(range(21, 27)),
list(range(27, 31)),
list(range(31, 36)),
list(range(36, 45)),
list(range(45, 50)),
]
for i in range(8):
for j, ids in enumerate(dataset):
yield {"token_ids": [idx + i * 50 for idx in ids]}
def group_texts(
examples: Dict[str, List[List[int]]],
block_size: int,
eos_token_id: int,
bos_token_id: int,
pad_token_id: int,
) -> Dict[str, List[List[int]]]:
real_block_size = block_size - 2 # make space for bos and eos
# colapse the sequences into a single list of tokens and then create blocks of real_block_size
input_ids = []
attention_mask = []
for block in chunked(flatten(examples["token_ids"]), real_block_size):
s = [bos_token_id] + list(block) + [eos_token_id]
ls = len(s)
attn = [True] * ls
s += [pad_token_id] * (block_size - ls)
attn += [False] * (block_size - ls)
input_ids.append(s)
attention_mask.append(attn)
return {"input_ids": input_ids, "attention_mask": attention_mask}
def collate_fn(batch):
return {
"input_ids": torch.tensor(
[item["input_ids"] for item in batch], dtype=torch.long
),
"attention_mask": torch.tensor(
[item["attention_mask"] for item in batch], dtype=torch.long
),
}
class DummyModule(pl.LightningModule):
def __init__(self):
super().__init__()
# A dummy linear layer (not used for actual computation)
self.layer = torch.nn.Linear(1, 1)
self.ds = None
self.prepare_data_per_node = False
def on_train_start(self):
# This hook is called once training begins on each process.
print(f"[Rank {self.global_rank}] Training started.", flush=True)
self.data_file = open(f"data_{self.global_rank}.txt", "w")
def on_train_end(self):
self.data_file.close()
def training_step(self, batch, batch_idx):
# Print batch information to verify data loading.
time.sleep(5)
# print("batch", batch, flush=True)
print(
f"\n[Rank {self.global_rank}] Training step, epoch {self.trainer.current_epoch}, batch {batch_idx}: {batch['input_ids']}",
flush=True,
)
self.data_file.write(
f"[Rank {self.global_rank}] Training step, epoch {self.trainer.current_epoch}, batch {batch_idx}: {batch['input_ids']}\n"
)
# Compute a dummy loss (here, simply a constant tensor)
loss = torch.tensor(0.0, requires_grad=True)
return loss
def on_train_epoch_start(self):
epoch = self.trainer.current_epoch
print(
f"[Rank {self.global_rank}] Training epoch {epoch} started.",
flush=True,
)
self.data_file.write(
f"[Rank {self.global_rank}] Training epoch {epoch} started.\n"
)
def configure_optimizers(self):
# Return a dummy optimizer.
return torch.optim.SGD(self.parameters(), lr=0.001)
class DM(pl.LightningDataModule):
def __init__(self):
super().__init__()
self.ds = None
self.prepare_data_per_node = False
def set_epoch(self, epoch: int):
self.ds.set_epoch(epoch)
def prepare_data(self):
# download the dataset
dataset = Dataset.from_generator(dummy_generator)
# save the dataset
dataset.save_to_disk("dataset", num_shards=4)
def setup(self, stage: str):
# load the dataset
ds = datasets.load_from_disk("dataset").to_iterable_dataset(
num_shards=4
)
ds = ds.map(
group_texts,
batched=True,
batch_size=5,
fn_kwargs={
"block_size": 5,
"eos_token_id": 1,
"bos_token_id": 0,
"pad_token_id": 2,
},
remove_columns=["token_ids"],
).shuffle(seed=42, buffer_size=8)
ds = split_dataset_by_node(
ds,
rank=self.trainer.global_rank,
world_size=self.trainer.world_size,
)
self.ds = ds
def train_dataloader(self):
print(
f"[Rank {self.trainer.global_rank}] Preparing train_dataloader...",
flush=True,
)
rank = self.trainer.global_rank
print(
f"[Rank {rank}] Global rank: {self.trainer.global_rank}",
flush=True,
)
world_size = self.trainer.world_size
print(f"[Rank {rank}] World size: {world_size}", flush=True)
return StatefulDataLoader(
self.ds,
batch_size=2,
num_workers=2,
collate_fn=collate_fn,
drop_last=True,
persistent_workers=True,
)
if __name__ == "__main__":
print("Starting Lightning training", flush=True)
# Optionally, print some SLURM environment info for debugging.
print(f"SLURM_NNODES: {os.environ.get('SLURM_NNODES', '1')}", flush=True)
# Determine the number of nodes from SLURM (defaulting to 1 if not set)
num_nodes = int(os.environ.get("SLURM_NNODES", "1"))
model = DummyModule()
dm = DM()
on_exception = OnExceptionCheckpoint(
dirpath="checkpoints",
filename="on_exception",
)
# Configure the Trainer to use distributed data parallel (DDP).
trainer = pl.Trainer(
accelerator="gpu" if torch.cuda.is_available() else "cpu",
devices=1,
strategy=(
"ddp" if num_nodes > 1 else "auto"
), # Use DDP strategy for multi-node training.
num_nodes=num_nodes,
max_epochs=2,
logger=False,
enable_checkpointing=True,
num_sanity_val_steps=0,
enable_progress_bar=False,
callbacks=[on_exception],
)
# resume (uncomment to resume)
# trainer.fit(model, datamodule=dm, ckpt_path="checkpoints/on_exception.ckpt")
# train
trainer.fit(model, datamodule=dm)
```
```bash
#!/bin/bash
#SBATCH --job-name=pl_ddp_test
#SBATCH --nodes=2 # Adjust number of nodes as needed
#SBATCH --ntasks-per-node=1 # One GPU (process) per node
#SBATCH --cpus-per-task=3 # At least as many dataloader workers as required
#SBATCH --gres=gpu:1 # Request one GPU per node
#SBATCH --time=00:10:00 # Job runtime (adjust as needed)
#SBATCH --partition=gpu-preempt # Partition or queue name
#SBATCH -o script.out
# Disable Python output buffering.
export PYTHONUNBUFFERED=1
echo "SLURM job starting on $(date)"
echo "Running on nodes: $SLURM_NODELIST"
echo "Current directory: $(pwd)"
ls -l
# Launch the script using srun so that each process starts the Lightning module.
srun script.py
```
2. Uncomment the "resume" line (second to last) and comment the original `trainer.fit` call (last line).
It will produce the following log.
```
[Rank 0] Preparing train_dataloader...
[Rank 0] Global rank: 0
[Rank 0] World size: 2
[Rank 1] Preparing train_dataloader...
[Rank 1] Global rank: 1
[Rank 1] World size: 2
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Assigning 2 shards (or data sources) of the dataset to each node.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
node#0 dataloader worker#1, ': Starting to iterate over 1/2 shards.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
node#0 dataloader worker#0, ': Starting to iterate over 1/2 shards.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
node#0 dataloader worker#1, ': Finished iterating over 1/1 shards.
node#0 dataloader worker#0, ': Finished iterating over 1/1 shards.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
[Rank 0] Training started.
[Rank 0] Training epoch 0 started.
[Rank 0] Training epoch 1 started.
Assigning 2 shards (or data sources) of the dataset to each node.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
node#0 dataloader worker#1, ': Starting to iterate over 1/2 shards.
node#0 dataloader worker#0, ': Starting to iterate over 1/2 shards.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
node#1 dataloader worker#1, ': Starting to iterate over 1/2 shards.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
node#1 dataloader worker#0, ': Starting to iterate over 1/2 shards.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
node#0 dataloader worker#1, ': Finished iterating over 1/1 shards.
node#0 dataloader worker#0, ': Finished iterating over 1/1 shards.
`Trainer.fit` stopped: `max_epochs=2` reached.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
node#1 dataloader worker#1, ': Finished iterating over 1/1 shards.
node#1 dataloader worker#0, ': Finished iterating over 1/1 shards.
[Rank 1] Training started.
[Rank 1] Training epoch 0 started.
[Rank 1] Training epoch 1 started.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
node#1 dataloader worker#1, ': Starting to iterate over 1/2 shards.
node#1 dataloader worker#0, ': Starting to iterate over 1/2 shards.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Loading a state dict of a shuffle buffer of a dataset without the buffer content.The shuffle buffer will be refilled before starting to yield new examples.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
Set __getitem__(key) output type to arrow for no columns (when key is int or slice) and don't output other (un-formatted) columns.
node#1 dataloader worker#0, ': Finished iterating over 1/1 shards.
node#1 dataloader worker#1, ': Finished iterating over 1/1 shards.
```
I'm also attaching the relevant state_dict to make sure that the state is being checkpointed as expected.
```
{'_iterator_finished': True,
'_snapshot': {'_last_yielded_worker_id': 1,
'_main_snapshot': {'_IterableDataset_len_called': None,
'_base_seed': 3992758080362545099,
'_index_sampler_state': {'samples_yielded': 64},
'_num_workers': 2,
'_sampler_iter_state': None,
'_sampler_iter_yielded': 32,
'_shared_seed': None},
'_snapshot_step': 32,
'_worker_snapshots': {'worker_0': {'dataset_state': {'ex_iterable': {'shard_example_idx': 0,
'shard_idx': 1},
'num_examples_since_previous_state': 0,
'previous_state': {'shard_example_idx': 0,
'shard_idx': 1},
'previous_state_example_idx': 33},
'fetcher_state': {'dataset_iter_state': None,
'fetcher_ended': False},
'worker_id': 0},
'worker_1': {'dataset_state': {'ex_iterable': {'shard_example_idx': 0,
'shard_idx': 1},
'num_examples_since_previous_state': 0,
'previous_state': {'shard_example_idx': 0,
'shard_idx': 1},
'previous_state_example_idx': 33},
'fetcher_state': {'dataset_iter_state': None,
'fetcher_ended': False},
'worker_id': 1}}},
'_steps_since_snapshot': 0}
```
### Expected behavior
Since I'm following all the recommended steps, I don't expect to see any warning when resuming. Am I doing something wrong? Also, can someone explain why I'm seeing 20 identical messages in the log in this reproduction setting? I'm trying to understand why I see thousands of these messages with the actual dataset.
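Until the root cause is clear, one stopgap I would try is a deduplicating log filter. This is my own workaround, not a datasets API, and the logger name below is an assumption that needs checking against the datasets version in use:

```python
import logging

class DeduplicateFilter(logging.Filter):
    """Drop exact repeats of a log message so each distinct warning is
    printed once per process.  Note that each DataLoader worker is a
    separate process, so this only collapses repeats within one process."""

    def __init__(self):
        super().__init__()
        self._seen = set()

    def filter(self, record):
        message = record.getMessage()
        if message in self._seen:
            return False
        self._seen.add(message)
        return True

# Logger name is an assumption -- adjust to whichever datasets logger
# actually emits the shuffle-buffer warning.
logging.getLogger("datasets.iterable_dataset").addFilter(DeduplicateFilter())
```

This only reduces the noise; it does not answer why the buffer state is missing in the first place.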
One more surprising thing I noticed in the logs is the change in a number of shards per worker. In the following messages, the denominator changes from 2 to 1.
```
node#1 dataloader worker#1, ': Starting to iterate over 1/2 shards.
...
node#1 dataloader worker#1, ': Finished iterating over 1/1 shards.
```
### Environment info
python: 3.11.10
datasets: 3.3.2
lightning: 2.3.1
| open | 2025-03-11T16:34:39Z | 2025-03-11T16:36:01Z | https://github.com/huggingface/datasets/issues/7444 | [] | dhruvdcoder | 0 |
psf/black | python | 4,129 | `wrap_long_dict_value_in_parens` can introduce unnecessary nesting for dicts in dicts | Originally reported at https://github.com/psf/black/issues/4042#issuecomment-1852645541
Repro:
```
class Random:
def func():
random_service.status.active_states.inactive = (
make_new_top_level_state_from_dict(
{
"topLevelBase": {
"secondaryBase": {
"timestamp": 1234,
"latitude": 1,
"longitude": 2,
"actionTimestamp": Timestamp(
seconds=1530584000, nanos=0
).ToJsonString(),
}
},
}
)
)
```
Enabling just `wrap_long_dict_value_in_parens` turns this into:
```
class Random:
def func():
random_service.status.active_states.inactive = (
make_new_top_level_state_from_dict(
{
"topLevelBase": (
{
"secondaryBase": (
{
"timestamp": 1234,
"latitude": 1,
"longitude": 2,
"actionTimestamp": (
Timestamp(
seconds=1530584000, nanos=0
).ToJsonString()
),
}
)
}
),
}
)
)
``` | closed | 2023-12-28T06:14:14Z | 2024-01-20T01:13:28Z | https://github.com/psf/black/issues/4129 | [
"T: bug",
"C: preview style"
] | hauntsaninja | 0 |
unionai-oss/pandera | pandas | 1,330 | Getting failure cases with Pandera.PySpark | #### Getting failure cases with Pandera.PySpark
I noted that if I use a PySpark dataframe (in comparison with a Pandas one) I am not able to get the failure_cases and the indexes. Is there another way to still get the indexes of the rows that were invalid? I would like to drop these rows from the PySpark dataframe and store them somewhere else.
Thank you.
```python
import pandera.pyspark as pa
from pyspark.sql import SparkSession
from pyspark.sql import DataFrame
from pandera.pyspark import Check, Column, DataFrameSchema
from pandera.errors import SchemaError, SchemaErrors
schema = pa.DataFrameSchema(
columns={
"str_column": Column(str, Check.equal_to("a")),
},
strict=True
)
data = [
{'str_column': 'a'},
{'str_column': 'bL'},
{'str_column': 'Fc'}
]
df = spark.createDataFrame(data)
try:
x = schema.validate(df, lazy=False)
except (SchemaErrors, SchemaError) as exc:
print(exc.failure_cases["index"])
```
This results in:
```
[None]
```
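Since the PySpark backend returns `[None]` for the index above, one workaround sketch is to re-apply the failed check yourself and partition the rows. Shown here in plain Python as a stand-in; in PySpark the same split would be something like `cond = F.col("str_column") == "a"`, `valid_df = df.filter(cond)`, `invalid_df = df.filter(~cond)` (hedged, untested):

```python
def split_by_check(rows, column, predicate):
    """Partition rows into (valid, invalid) by re-applying the check,
    since the pandera PySpark backend reports no failing row indexes."""
    valid = [row for row in rows if predicate(row[column])]
    invalid = [row for row in rows if not predicate(row[column])]
    return valid, invalid
```

The invalid partition can then be written elsewhere while training continues on the valid one.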
| open | 2023-09-05T18:52:57Z | 2023-09-05T18:53:33Z | https://github.com/unionai-oss/pandera/issues/1330 | [
"question"
] | ThomasBoersma | 0 |
timkpaine/lantern | plotly | 120 | issue with plotting without type | closed | 2017-11-08T13:42:20Z | 2017-11-25T06:32:41Z | https://github.com/timkpaine/lantern/issues/120 | [
"bug",
"in progress"
] | timkpaine | 0 | |
FactoryBoy/factory_boy | django | 654 | Custom User factory fails when using a CustomUserManager in Django 2 | #### Description
Trying to use a factory to create a user instance for a user model that extends `AbstractUser` and implements `CustomUserManager` results in a `TypeError`.
#### To Reproduce
1. Implement a `CustomUserManager` in the way recommended here for a custom `User` model that extends the `AbstractUser` base class and doesn't override the `username` field. Add a field called `id`. https://docs.djangoproject.com/en/2.2/topics/auth/customizing/#django.contrib.auth.models.CustomUserManager
2. Create a factory for that `User` model and write a `@classmethod` as suggested in the FactoryBoy documentation: https://factoryboy.readthedocs.io/en/latest/recipes.html#custom-manager-methods
3. Try to create a new `User` using the `UserFactory` you just created.
##### Model / Factory code
```python
# Factories
class CompanyToProfileFactory(factory.DjangoModelFactory):
"""Factory for `client.CompanyToProfile` Django model."""
class Meta:
model = models.CompanyToProfile
company = factory.SubFactory(CompanyFactory)
profile = factory.SubFactory(ProfileFactory)
access = factory.SubFactory(AccessFactory)
created = factory.Faker("past_datetime", tzinfo=pytz.UTC)
updated = factory.Faker("past_datetime", tzinfo=pytz.UTC)
class ProfileFactory(factory.DjangoModelFactory):
"""Factory for `personal.Profile` Django model."""
class Meta:
model = models.Profile
class Params:
superuser = factory.Trait(is_admin=True, is_superuser=True, is_staff=True)
id = factory.Faker("uuid4")
title = factory.SubFactory(TitleFactory)
initials = factory.Faker("word")
date_of_birth = factory.Faker("past_date")
email = factory.Faker("email")
gender = factory.SubFactory(GenderFactory)
ethnicity = factory.SubFactory(EthnicityFactory)
phone_number = factory.Faker("phone_number")
mobile_number = factory.Faker("phone_number")
profile_type = factory.SubFactory(ProfileTypeFactory)
created = factory.Faker("past_datetime", tzinfo=pytz.UTC)
updated = factory.Faker("past_datetime", tzinfo=pytz.UTC)
@classmethod
def _create(cls, model_class, *args, **kwargs):
"""Override the default ``_create`` with our custom call."""
manager = cls._get_manager(model_class)
# The default would use ``manager.create(*args, **kwargs)``
return manager.create_user(*args, **kwargs)
# model and manager
class ProfileManager(BaseUserManager):
def create_user(self, username='', email='', password=None):
"""
Creates and saves a User with the given email, date of
birth and password.
"""
if not email:
raise ValueError('Users must have an email address')
try:
email = validate_email(email)
except ValidationError:
raise ValueError('Invalid email address')
user = self.model(
username=username,
email=self.normalize_email(email),
)
user.save(using=self._db)
user.set_password(password)
user.save(using=self._db)
return user
def create_superuser(self, username='', email='', password=''):
"""
Creates and saves a superuser with the given email, date of
birth and password.
"""
if not password:
raise ValueError('SuperUser must have password')
user = self.create_user(
username=username,
email=email,
password=password
)
user.is_admin = True
user.is_superuser = True
user.is_staff = True
user.email = email
user.save(using=self._db)
return user
class Profile(AbstractUser):
"""Basic information about a person that uses the system."""
objects = ProfileManager()
id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
# slug = AutoSlugField(populate_from=['first_name', 'initials', 'last_name', 'date_of_birth'])
title = models.ForeignKey(Title, null=True, on_delete=models.DO_NOTHING)
# first_name
initials = models.CharField(max_length=20, null=True, blank=True, default=None)
# last_name
date_of_birth = models.DateField(null=True)
email = models.EmailField(_('email address'), blank=True, unique=True)
gender = models.ForeignKey(Gender, null=True, on_delete=None)
ethnicity = models.ForeignKey(Ethnicity, null=True, on_delete=None)
phone_number = models.CharField(max_length=15, null=True, blank=True)
mobile_number = models.CharField(max_length=15, null=True, blank=True)
# profile type
profile_type = models.ForeignKey(ProfileType, null=True, default=None, on_delete=SET_NULL)
# Administrative Fields
created = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)
class Meta:
ordering = ['last_name']
indexes = [
models.Index(fields=['phone_number']),
models.Index(fields=['mobile_number']),
models.Index(fields=['email']),
models.Index(fields=['first_name', 'last_name']),
]
class CompanyToProfile(models.Model):
"""Relationship of a user to the company"""
company = models.ForeignKey(Company, on_delete=CASCADE)
profile = models.ForeignKey(Profile, on_delete=CASCADE)
access = models.ForeignKey(Access, null=True, default=None, on_delete=CASCADE)
# Administrative Fields
created = models.DateTimeField(auto_now_add=True)
updated = models.DateTimeField(auto_now=True)
def __str__(self):
return f'{self.profile.get_full_name()} - {self.access.access} - {self.company.company}'
```
##### The issue
Trying to create an object using a factory that relates in any way to the custom `User` factory (in this case `ProfileFactory`, which is a `SubFactory` of `CompanyToProfileFactory`; see code below) will cause a `TypeError` (see error below).
```python
# code that causes exception:
class CompanyToProfileTestCase(TestCase):
"""Tests for `client.CompanyToProfile` Django model."""
def setUp(self):
self.company_to_profile = CompanyToProfileFactory()
# error from running pytest in CLI:
@classmethod
def _create(cls, model_class, *args, **kwargs):
"""Override the default ``_create`` with our custom call."""
manager = cls._get_manager(model_class)
# The default would use ``manager.create(*args, **kwargs)``
> return manager.create_user(*args, **kwargs)
E TypeError: create_user() got an unexpected keyword argument 'id'
``` | closed | 2019-10-18T05:33:23Z | 2019-10-25T06:56:09Z | https://github.com/FactoryBoy/factory_boy/issues/654 | [
"Q&A"
] | blairg23 | 6 |
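The `TypeError: create_user() got an unexpected keyword argument 'id'` above happens because the factory passes every declared field (`id`, `title`, `initials`, ...) into `create_user()`, whose signature only accepts `username`/`email`/`password`. A hedged sketch of the usual fix is to accept `**extra_fields` and forward them to `self.model(...)`, the pattern Django's own `UserManager` uses; shown on a Django-free stand-in so it runs anywhere:

```python
class FakeProfile:
    """Stand-in for the Profile model so the snippet runs without Django."""
    def __init__(self, **fields):
        self.__dict__.update(fields)

    def save(self, using=None):
        pass

class ProfileManager:
    model = FakeProfile
    _db = None

    def create_user(self, username="", email="", password=None, **extra_fields):
        if not email:
            raise ValueError("Users must have an email address")
        # Forward the extra declared fields instead of rejecting them.
        user = self.model(username=username, email=email, **extra_fields)
        user.save(using=self._db)
        return user
```

With this signature, the factory's `_create` override can keep calling `manager.create_user(*args, **kwargs)` unchanged.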
nteract/papermill | jupyter | 301 | Guidance for how best to use papermill in a variety of situations that are common to industry | We don't have a home for guides or other advice on integration with other systems that works well or is easy to use. | open | 2019-02-04T20:00:48Z | 2019-05-06T13:49:40Z | https://github.com/nteract/papermill/issues/301 | [
"help wanted",
"new-contributor-friendly",
"docs"
] | MSeal | 0 |
paperless-ngx/paperless-ngx | django | 7,612 | [BUG] Error occurred while consuming: UnicodeDecodeError: 'utf-8' codec can't decode bytes in position 60-61: invalid continuation byte

Adding documents in German and getting bunch of errors like the following. Scanning the documents using ScanSnap IX1500
Here is my docker config
```
version: '3.8'

services:
  broker:
    image: redis
    read_only: true
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping || exit 1"]
    container_name: Paperless-NGX-REDIS
    security_opt:
      - no-new-privileges:true
    environment:
      REDIS_ARGS: "--save 60 10"
    restart: on-failure:5
    volumes:
      - /volume1/docker/paperless/redis:/data

  gotenberg:
    image: docker.io/gotenberg/gotenberg:8.7
    restart: on-failure:5
    security_opt:
      - no-new-privileges:true
    command:
      - "gotenberg"
      - "--chromium-disable-javascript=true"
      - "--chromium-allow-list=file:///tmp/.*"

  tika:
    image: docker.io/apache/tika:latest
    restart: on-failure:5

  db:
    image: postgres:16
    container_name: Paperless-NGX-DB
    restart: on-failure:5
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "paperless", "-U", "paperless"]
      timeout: 45s
      interval: 10s
      retries: 10
    security_opt:
      - no-new-privileges:true
    volumes:
      - /volume1/docker/paperless/db:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: paperless
      POSTGRES_USER: paperless
      POSTGRES_PASSWORD: paperless

  webserver:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    container_name: Paperless-NGX
    healthcheck:
      test: ["CMD", "curl", "-fs", "-S", "--max-time", "2", "http://localhost:8000"]
      interval: 30s
      timeout: 10s
      retries: 5
    security_opt:
      - no-new-privileges:true
    restart: on-failure:5
    depends_on:
      db:
        condition: service_healthy
      broker:
        condition: service_healthy
      tika:
        condition: service_started
      gotenberg:
        condition: service_started
    ports:
      - 8001:8000
    volumes:
      - /volume1/docker/paperless/data:/usr/src/paperless/data
      - /volume1/docker/paperless/media:/usr/src/paperless/media
      - /volume1/docker/paperless/export:/usr/src/paperless/export
      - /volume1/docker/paperless/consume:/usr/src/paperless/consume
    environment:
      PAPERLESS_REDIS: redis://broker:6379
      PAPERLESS_DBHOST: db
      PAPERLESS_OCR_SKIP_ARCHIVE_FILE: always
      PAPERLESS_OCR_PAGES: 1
      PAPERLESS_TIME_ZONE: Europe/Berlin
      PAPERLESS_ADMIN_USER: admin
      PAPERLESS_TIKA_ENABLED: 1
      PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
      PAPERLESS_TIKA_ENDPOINT: http://tika:9998
      PAPERLESS_FILENAME_FORMAT: "{correspondent}/{created_year}/{created} {title}"
      PAPERLESS_OCR_USER_ARGS: '{"invalidate_digital_signatures": true}'
      PAPERLESS_OCR_LANGUAGE: "aze+deu+eng+rus"
      PAPERLESS_OCR_LANGUAGES: "ces tur aze deu eng rus"
      PAPERLESS_DEBUG: false
```
### Steps to reproduce
1. Scan PDFs using ScanSnap ix1500
2. Upload them to Paperless
3. Get the error
### Webserver logs
```bash
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe8 in position 60: invalid continuation byte
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/asgiref/sync.py", line 327, in main_wrap
raise exc_info[1]
File "/usr/src/paperless/src/documents/consumer.py", line 598, in run
document_parser.parse(self.working_copy, mime_type, self.filename)
File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 435, in parse
raise ParseError(f"{e.__class__.__name__}: {e!s}") from e
documents.parsers.ParseError: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe8 in position 60: invalid continuation byte
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/paperless/src/documents/tasks.py", line 149, in consume_file
msg = plugin.run()
^^^^^^^^^^^^
File "/usr/src/paperless/src/documents/consumer.py", line 629, in run
self._fail(
File "/usr/src/paperless/src/documents/consumer.py", line 304, in _fail
raise ConsumerError(f"{self.filename}: {log_message or message}") from exception
documents.consumer.ConsumerError: 03092024_005.pdf: Error occurred while consuming document 03092024_005.pdf: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe8 in position 60: invalid continuation byte
```
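For context, the underlying Python error is easy to reproduce outside Paperless: byte 0xe8 opens a three-byte UTF-8 sequence, so decoding fails when it is not followed by valid continuation bytes. A minimal sketch (the byte string is made up for illustration):

```python
raw = b"Rechnung \xe8 2024"  # 0xe8 starts a 3-byte UTF-8 sequence, but no continuation bytes follow

try:
    raw.decode("utf-8")
except UnicodeDecodeError as err:
    print(err.reason)  # invalid continuation byte
    print(err.start)   # 9, the byte offset where decoding failed

# the same bytes decode fine under a single-byte encoding:
print(raw.decode("latin-1"))
```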
### Browser logs
_No response_
### Paperless-ngx version
2.11.6
### Host OS
Synology DSM 7.2.2
### Installation method
Docker - official image
### System status
_No response_
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-09-02T22:52:04Z | 2024-10-01T07:44:22Z | https://github.com/paperless-ngx/paperless-ngx/issues/7612 | [
"not a bug"
] | tural-ali | 1 |
tensorflow/tensor2tensor | machine-learning | 1,166 | Serving | Op type not registered 'PaddedBatchDatasetV2' in binary | After upgrading t2t and successfully training a model on a GPU machine, I tried taking the trained model, exporting it on a CPU machine, and serving it using tensorflow_model_server.
When I run the tensorflow_model_server executable, I get: "Op type not registered 'PaddedBatchDatasetV2' in binary".
I made sure the tensorflow_model_server is up to date using "sudo apt-get upgrade tensorflow-model-server" - but it didn't help
### Environment information
```
OS: ubuntu 16
$ pip freeze | grep tensor
tensorboard==1.11.0
tensorflow==1.11.0
tensorflow-hub==0.1.1
tensorflow-serving-api==1.11.1
tensorflow-tensorboard==1.5.0
$ python -V
Python 2.7.12
```
| closed | 2018-10-24T07:35:05Z | 2018-10-26T16:30:05Z | https://github.com/tensorflow/tensor2tensor/issues/1166 | [] | ndvbd | 6 |
netbox-community/netbox | django | 18,136 | Add branching provider to configuration | ### NetBox version
v4.1.7
### Feature type
New functionality
### Triage priority
N/A
### Proposed functionality
Add branching provider support as a configuration parameter and modify the script running code to take into account branching support.
### Use case
This will allow scripts with script parameters to be run while in the context of a branch. This requires changes to the core of NetBox in the script running code as well as the branching plugin to support this.
### Database changes
N/A
### External dependencies
N/A | closed | 2024-12-02T20:49:19Z | 2025-03-13T03:10:06Z | https://github.com/netbox-community/netbox/issues/18136 | [
"status: accepted",
"type: feature",
"complexity: medium"
] | arthanson | 1 |
Layout-Parser/layout-parser | computer-vision | 5 | Steps for training a model on custom data | There are no steps to understand how to train a model on a custom dataset. I'm really looking to use this in my current workflow and kudos for developing this! | open | 2021-01-05T07:39:07Z | 2024-02-29T07:37:32Z | https://github.com/Layout-Parser/layout-parser/issues/5 | [
"enhancement"
] | DhavalThkkar | 5 |
slackapi/python-slack-sdk | asyncio | 1,500 | Error occurred while updating the thread in slack: 'file' | (Filling out the following details about bugs will help us solve your issue sooner.)
### Reproducible in:
python version 3.8
#### The Slack SDK version
Not using slack sdk
#### Python runtime version
(Paste the output of `python --version`)
3.8
#### OS info
(Paste the output of `sw_vers && uname -v` on macOS/Linux or `ver` on Windows OS)
#### Steps to reproduce:
(Share the commands to run, source code, and project settings (e.g., setup.py))
```python
import json
import logging
import os

import requests
from requests.exceptions import HTTPError

logger = logging.getLogger(__name__)


def update_slack_message(channel_id, thread_ts, text, is_final=False):
    # `is_final` was undefined in the original snippet; added as a parameter so the sample runs
    slack_updatemessage_url = "https://slack.com/api/chat.update"
    update_data = {"channel": channel_id, "text": text, "ts": thread_ts}
    slack_data = text if is_final else update_data
    try:
        headers = {
            "Authorization": f"Bearer {get_bot_user_token()}",
            "Content-Type": "application/json; charset=utf-8",
        }
        logger.info(f"Updating the slack message: {json.dumps(slack_data)}")
        response = requests.post(url=slack_updatemessage_url, json=slack_data, headers=headers)
        response.raise_for_status()
        if response.status_code == 200:
            logger.info(f"Slack Message updated Successfully! {response.text}")
            thread_update_ts = json.loads(response.text).get("ts")
            upload_info = upload_files_to_slack(thread_update_ts, channel_id)
            logger.info(f"File id fetched Successfully! {upload_info}")
            if upload_info:
                logger.info(f"inside upload info! {upload_info}")
                current_directory = os.path.dirname(os.path.abspath(__file__))
                file_path = os.path.join(current_directory, 'file_name.xlsx')
                upload_url = upload_info['upload_url']
                file_id = upload_info['file']['id']
                logger.info(f"upload_url! {upload_url}")
                logger.info(f"file_id: {file_id}")
                with open(file_path, "rb") as file_content:
                    logger.info(f"inside file path! {file_path}")
                    upload_response = requests.put(upload_url, data=file_content)
                    logger.info(f"FILE upload to url Upload response {upload_response}")
                    if upload_response.status_code == 200:
                        complete_response = complete_upload(file_id, channel_id, thread_update_ts)
                        if complete_response:
                            logger.info("COMPLETED FILE UPLOAD")
                        else:
                            logger.error("FAILED FILE UPLOAD")
        return response
    except HTTPError as http_err:
        logger.error(f"Error occurred: {http_err}")
    except Exception as err:
        logger.error(f"Error occurred while updating the thread in slack: {err}")
    return


def upload_files_to_slack(thread_ts, channel_id):
    slack_get_upload_url = "https://slack.com/api/files.getUploadURLExternal"
    current_directory = os.path.dirname(os.path.abspath(__file__))
    file_path = os.path.join(current_directory, 'file_name.xlsx')
    filename = os.path.basename(file_path)
    file_length = os.path.getsize(file_path)
    try:
        headers = {
            "Authorization": f"Bearer {get_bot_user_token()}",
            "Content-Type": "application/x-www-form-urlencoded",
        }
        params = {
            "filename": filename,
            "length": file_length
        }
        response = requests.get(url=slack_get_upload_url, headers=headers, params=params)
        response.raise_for_status()
        logger.info(f"Slack file uploaded Successfully! {response.text}")
        return response.json()
    except HTTPError as http_err:
        logger.error(f"Http Error occurred: {http_err}")
    except Exception as err:
        logger.error(f"Error occurred while fetching bot_id: {err}")
    return None


def complete_upload(file_id, channel_id, thread_ts):
    logger.info("triggered complete upload")
    url = "https://slack.com/api/files.completeUploadExternal"
    data = {
        "files": [
            {
                "id": file_id
            }
        ],
        "channel_id": channel_id,
        "thread_ts": thread_ts,
        "initial_comment": "hi file!"
    }
    try:
        headers = {
            "Authorization": f"Bearer {get_bot_user_token()}",
            "Content-Type": "application/json; charset=utf-8",
        }
        response = requests.post(url=url, headers=headers, json=data)
        response.raise_for_status()
        logger.info(f"Slack file upload completed Successfully! {response.text}")
        return response.json()
    except HTTPError as http_err:
        logger.error(f"Http Error occurred: {http_err}")
    except Exception as err:
        logger.error(f"Error occurred while fetching bot_id: {err}")
    return None
```
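Note on the `'file'` message: a bare `KeyError` stringifies to just the missing key, which matches the log line exactly. If the `files.getUploadURLExternal` response carries the id at the top level as `file_id` (as Slack's docs describe) rather than nested under `file`, the `upload_info['file']['id']` lookup fails. A sketch with a made-up response dict (the exact response shape is an assumption here, not taken from this report):

```python
# hypothetical response shape: id at the top level, not nested under "file"
upload_info = {"ok": True, "upload_url": "https://example.invalid/upload", "file_id": "F123"}

try:
    file_id = upload_info["file"]["id"]
except KeyError as err:
    print(f"Error occurred while updating the thread in slack: {err}")
    # prints: Error occurred while updating the thread in slack: 'file'

file_id = upload_info["file_id"]  # the lookup that matches the assumed shape
print(file_id)
```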
### Expected result:
--- file uploaded successfully
### Actual result:
--- Error occurred while posting to slack thread: 'file'
### Requirements
For general questions/issues about Slack API platform or its server-side, could you submit questions at
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2024-05-28T02:07:33Z | 2024-07-15T00:04:37Z | https://github.com/slackapi/python-slack-sdk/issues/1500 | [
"question",
"web-client",
"Version: 3x",
"auto-triage-stale"
] | divyakrishna-devisetty | 3 |
andfanilo/streamlit-echarts | streamlit | 37 | working with mapbox | Awesome component! I just wonder if it is possible to include mapbox-gl in the frontend dependencies, so that the built-in support for mapbox-gl could actually be enabled? I am trying to draw a wind visualization, just like [this example](https://echarts.apache.org/examples/en/editor.html?c=global-wind-visualization&gl=1), but on mapbox. Thank you in advance!! | open | 2022-05-05T16:09:48Z | 2022-05-05T16:09:48Z | https://github.com/andfanilo/streamlit-echarts/issues/37 | [] | mzy2240 | 0 |
babysor/MockingBird | deep-learning | 868 | When will synthesizer training finish? | Running synthesizer training (RTX3060 12G GPU): C:\ProgramData\Anaconda3\envs\mockingbird\python.exe E:\workspace\MockingBird\control\cli\synthesizer_train.py mandarin e:\datasets\SV2TTS\synthesizer. It has been running for about 38 hours. The attention plots appear as follows, and I don't know when the program will finish:






| open | 2023-04-02T06:13:50Z | 2024-12-06T02:21:49Z | https://github.com/babysor/MockingBird/issues/868 | [] | hujb2000 | 1 |
cvat-ai/cvat | tensorflow | 8,781 | Getting 429 while annotating images (images are not being able to load) | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Open a job
2. Move to an image that has never been loaded
3. The previous image is grayed out and the new image is not loaded
### Expected Behavior
relevant images should be displayed
### Possible Solution
_No response_
### Context
The first ~30 images from each job are loaded successfully.
When trying to move to the next images, I'm getting timeout and 429 errors for this URL: http://{domain}/api/jobs/3/data?org=&quality=compressed&type=chunk&index=1
Any ideas what can be the problem?
In the server logs I'm able to see the following error:
[2024-12-05 11:05:12,804] INFO cvat.apps.engine.cache: Starting to prepare chunk: key segment_3_chunk_1_0
I'm not able to see the next log which is: "Ending to prepare chunk: key {key}"
Any ideas?
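Separate from the server-side fix, any client that polls the chunk endpoint needs to treat 429 as retryable; a generic exponential-backoff sketch (plain Python, not the CVAT API):

```python
import time

def fetch_with_backoff(fetch, retries=5, base_delay=0.5):
    """Call `fetch()`; on a RuntimeError mentioning 429, retry with exponential backoff."""
    for attempt in range(retries):
        try:
            return fetch()
        except RuntimeError as err:
            if "429" not in str(err) or attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# stub that rate-limits the first two calls
calls = {"n": 0}
def fake_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return b"chunk-bytes"

print(fetch_with_backoff(fake_fetch, base_delay=0.01))  # b'chunk-bytes'
```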
### Environment
```Markdown
cvat env based on k8s
```
| closed | 2024-12-05T11:16:48Z | 2024-12-10T08:45:08Z | https://github.com/cvat-ai/cvat/issues/8781 | [
"bug"
] | ErezAlster | 4 |
statsmodels/statsmodels | data-science | 9,500 | BUG: an extra degree of freedom for information criterions in statsmodels.tsa.ar_model | #### Describe the bug
Functions for aic, bic, aicc, and hqic in class AutoRegResults should not add 1 to self.df_model.
$\mathrm{AIC} = 2k - 2\ln(\hat{L})$, where $k$ is the number of estimated parameters and $\hat{L}$ the maximized likelihood.
Unlike in OLS (https://www.statsmodels.org/dev/_modules/statsmodels/regression/linear_model.html#OLS), the degree of freedom in class AutoReg (https://www.statsmodels.org/dev/_modules/statsmodels/tsa/ar_model.html#AutoReg) already accounts for intercept:
```python
@property
def df_model(self) -> int:
    """The model degrees of freedom."""
    return self._x.shape[1]
```
Hence, self.df_model should already be $k$.
#### Code Sample, a copy-pastable example if possible
As in:
https://www.statsmodels.org/dev/_modules/statsmodels/tsa/ar_model.html
```python
@cache_readonly
def aic(self):
    r"""
    Akaike Information Criterion using Lutkepohl's definition.

    :math:`-2 llf + \ln(nobs) (1 + df_{model})`
    """
    # This is based on loglike with dropped constant terms ?
    # Lutkepohl
    # return np.log(self.sigma2) + 1./self.model.nobs * self.k_ar
    # Include constant as estimated free parameter and double the loss
    # Stata defintion
    # nobs = self.nobs
    # return -2 * self.llf/nobs + 2 * (self.k_ar+self.k_trend)/nobs
    return eval_measures.aic(self.llf, self.nobs, self.df_model + 1)
```
In the comments of def aic(self), The formula :math:`-2 llf + \ln(nobs) (1 + df_{model})` should be for BIC, not AIC. It calls the eval_measures.aic function below as in:
https://www.statsmodels.org/dev/_modules/statsmodels/tools/eval_measures.html#aic
```python
def aic(llf, nobs, df_modelwc):
    """
    Akaike information criterion

    Parameters
    ----------
    llf : {float, array_like}
        value of the loglikelihood
    nobs : int
        number of observations
    df_modelwc : int
        number of parameters including constant

    Returns
    -------
    aic : float
        information criterion

    References
    ----------
    https://en.wikipedia.org/wiki/Akaike_information_criterion
    """
    return -2.0 * llf + 2.0 * df_modelwc
```
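The off-by-one is easy to quantify: since AIC is linear in the parameter count, passing `self.df_model + 1` when `df_model` already includes the intercept inflates the criterion by a constant (2 for AIC). A standalone check of the standard formula, with made-up numbers:

```python
def aic(llf, df_modelwc):
    """Standard AIC: -2*llf + 2*k, with k the number of estimated parameters."""
    return -2.0 * llf + 2.0 * df_modelwc

llf, k = -1300.0, 3  # e.g. an AR model whose df_model already counts the intercept
print(aic(llf, k))      # 2606.0
print(aic(llf, k + 1))  # 2608.0 -- the extra "+ 1" adds exactly 2
```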
#### Discrepancies regarding examples in AutoReg
https://www.statsmodels.org/dev/generated/statsmodels.tsa.ar_model.AutoReg.html
```python
>>> import statsmodels.api as sm
>>> from statsmodels.tsa.ar_model import AutoReg
>>> data = sm.datasets.sunspots.load_pandas().data['SUNACTIVITY']
>>> out = 'AIC: {0:0.3f}, HQIC: {1:0.3f}, BIC: {2:0.3f}'
>>> res = AutoReg(data, lags = [1, 11, 12]).fit()
>>> print(out.format(res.aic, res.hqic, res.bic))
AIC: 5.945, HQIC: 5.970, BIC: 6.007
```
Actual outputs:
AIC: 2608.546, HQIC: 2615.940, BIC: 2627.015
INSTALLED VERSIONS
------------------
Python 3.10.9
statsmodels
===========
Installed: 0.14.4 | open | 2025-02-02T14:10:39Z | 2025-02-02T14:10:39Z | https://github.com/statsmodels/statsmodels/issues/9500 | [] | cltdouglas | 0 |
aiogram/aiogram | asyncio | 837 | Implement utility to automatically send chat actions in background | Implement context manager that helps to send chat action every 5 seconds to the chat while long action is in progress, for example:
```python
async with chat_action("typing"):
    text = await long_operation()
    await message.answer(text)
``` | closed | 2022-02-16T22:11:37Z | 2022-02-20T13:04:44Z | https://github.com/aiogram/aiogram/issues/837 | [] | JrooTJunior | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1485 | AttributeError: Can't pickle local object 'get_transform.<locals>.<lambda>' | Need some help with this problem. | open | 2022-09-19T14:45:31Z | 2022-09-20T20:25:16Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1485 | [] | Zhou248 | 1 |
FujiwaraChoki/MoneyPrinter | automation | 19 | [-] Error: Could not find a backend to open `C:\Users\user\AppData\Local\Temp\tmp5ueaib09.png`` with iomode `ri` | Title shows the error message that is returned from the backend. It also gave a error that "Transperent" directory was not found in the back end? | closed | 2024-02-06T18:51:20Z | 2024-10-07T14:33:18Z | https://github.com/FujiwaraChoki/MoneyPrinter/issues/19 | [
"help wanted"
] | LiveQCC | 6 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 681 | convert_unicode deprecation warning with SQLAlchemy 1.3 | > _venv/lib/python3.7/site-packages/sqlalchemy/dialects/sqlite/base.py:1433: SADeprecationWarning: The create_engine.convert_unicode parameter and corresponding dialect-level parameters are deprecated, and will be removed in a future release. Modern DBAPIs support Python Unicode natively and this parameter is unnecessary.
> default.DefaultDialect.__init__(self, **kwargs)
>
I guess this comes from `get_engine()` (https://github.com/pallets/flask-sqlalchemy/blob/master/flask_sqlalchemy/__init__.py#L558). | closed | 2019-03-07T00:37:38Z | 2020-12-05T20:37:38Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/681 | [] | zgoda | 2 |
tqdm/tqdm | pandas | 916 | Call set_description without access to instance methods | Currently, the description of a progress bar can only be accessed with
```python
from tqdm import tqdm
import time
def foo(i):
    time.sleep(1)
    # cannot access prog here easily
    # tqdm.set_description('now waited', i) doesn't work

prog = tqdm(range(15))
for i in prog:
    prog.set_description('preparing to wait')
    foo(i)
```
Especially in the case of nested functions, it would be useful to be able to access the latest tqdm instance without explicitly passing the handle. This could be implemented e.g. by forwarding the call to `tqdm.tqdm.set_description` to the latest instance, or by implementing a function such as `plt.gca()`, i.e. `tqdm.gcb()` (get current bar)
```python
from tqdm import tqdm
import time
def foo(i):
    time.sleep(1)
    # access prog here easily via class method or gcb
    tqdm.set_description('now waited', i)
    tqdm.gcb().set_description('now waited', i)

prog = tqdm(range(15))
for i in prog:
    prog.set_description('preparing to wait')
    foo(i)
```
Is there any reason this hasn't been implemented yet?
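For what it's worth, the registry half of the proposal is tiny; a stand-in sketch (DemoBar is a made-up class, not the real tqdm API) of how a module-level `gcb()` could track the most recently created bar:

```python
_last_bar = None

class DemoBar:
    """Stand-in for tqdm: registers itself as the 'current bar' on creation."""
    def __init__(self, desc=""):
        global _last_bar
        self.desc = desc
        _last_bar = self

    def set_description(self, desc):
        self.desc = desc

def gcb():
    """Get current bar, by analogy with matplotlib's plt.gca()."""
    return _last_bar

def foo(i):
    gcb().set_description(f"now waited {i}")  # no handle passed in

bar = DemoBar()
foo(3)
print(bar.desc)  # now waited 3
```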
| open | 2020-03-16T13:01:33Z | 2020-03-16T17:49:50Z | https://github.com/tqdm/tqdm/issues/916 | [
"question/docs โฝ",
"p4-enhancement-future ๐งจ"
] | skjerns | 2 |
gradio-app/gradio | deep-learning | 10,356 | save_history ChatInterface parameter does not respect fill_height=True | ### Describe the bug
### Description
When using the `save_history` parameter in the ChatInterface, setting `fill_height=True` does not behave as expected. The chat interface does not expand to fill the height of its container.
### Expected Behavior
The chat interface should expand to fill the height of its container when `fill_height=True` is set, regardless of the `save_history` parameter.
### Actual Behavior
The chat interface does not expand to fill the height of its container when the `save_history` parameter is used, even if `fill_height=True` is set.
### Steps to Reproduce
1. Create a ChatInterface with `save_history=True` and `fill_height=True`.
2. Observe that the chat interface does not fill the height of its container.
### Environment
- **Repository:** gradio-app/gradio
- **Languages:** Python, Jupyter Notebook
- **Browser:** Edge
- **Operating System:** Windows
### Have you searched existing issues? ๐
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
demo = gr.ChatInterface(
    fn=chat,
    type='messages',
    multimodal=False,
    fill_height=True,
    fill_width=True,
    editable=True,
    save_history=True,
)
```
### Screenshot

### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.12.0
gradio_client version: 1.5.4
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.5
ffmpy: 0.4.0
gradio-client==1.5.4 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.11
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.7.3
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit: 0.12.0
typer: 0.13.0
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 24.2
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it | closed | 2025-01-14T20:55:00Z | 2025-01-17T22:54:26Z | https://github.com/gradio-app/gradio/issues/10356 | [
"bug"
] | nicholasdbrady | 1 |
marshmallow-code/flask-smorest | rest-api | 50 | Schema Validation on Patch | So I was trying t implement a simple Patch use case but its proving to be a difficult affair. As Patch allows for partial submission of schema fields I tried the obvious solution and added the partial=True argument to my schema declaration.
`@blp.arguments(UserInputSchema(partial=True))`
This however leads to a problem related to the marshmallow model schema class, since it immediately tries to create a model instance from the passed object.
Am I missing something or does patch just not play well with the intended behavior of marshmallow's ModelSchema?
Thanks for the help in advance. | closed | 2019-03-21T16:15:03Z | 2021-09-12T19:20:39Z | https://github.com/marshmallow-code/flask-smorest/issues/50 | [
"question"
] | DNCoelho | 6 |
jupyterhub/repo2docker | jupyter | 553 | Package order in requirements.txt | I'm trying to switch from a conda env to a pip env for building our mybinder containers. My hope is that this will speed-up the image launch for our users, because pip envs are much lighter than conda envs.
There is an issue, however. [pip does not guarantee the install order in requirements.txt](https://github.com/pypa/pip/issues/3480), which is silly but leads to situations where package won't install because numpy isn't installed (yet) although present in `requirements.txt`.
[there has been a recent PEP addressing the issue](https://github.com/pypa/pip/issues/5761) but this requires the package maintainers to update their `setup.py`, but obviously this hasn't been taken care of everywhere yet.
Any idea how to proceed in repo2docker for the time being? | closed | 2019-01-12T12:40:06Z | 2019-01-13T08:07:49Z | https://github.com/jupyterhub/repo2docker/issues/553 | [
"user-support",
"documentation"
] | fmaussion | 2 |
litestar-org/litestar | asyncio | 3,964 | Enhancement: Marshmallow | ### Summary
Hey,
What do you think to include `Marshmallow` as a supported library for API validation and OpenAPI?
### Basic Example
```
from marshmallow import Schema, fields, validates, ValidationError
class CompanySchema(Schema):
    name = fields.String(
        required=True,
        metadata={
            "description": "Legal company name.",
            "examples": ["Marine Charging Point Ltd"],
        },
        validate=lambda s: len(s) > 0  # Minimum length validation
    )
    country = fields.String(
        required=True,
        metadata={
            "description": "Legal company registration country.",
            "examples": ["Finland"],
        },
        validate=lambda s: len(s) > 0  # Minimum length validation
    )
    description = fields.String(
        required=False,
        allow_none=True,
        metadata={
            "description": "Short description.",
            "examples": ["Equipment supplier"],
        },
        validate=lambda s: len(s) > 0 if s else True  # Minimum length validation if provided
    )

# Example usage
data = {
    "name": "Marine Charging Point Ltd",
    "country": "Finland",
    "description": "Electric vehicle charging equipment supplier",
}

@route(...)
def update_company_via_id_view(self, data: CompanySchema) -> object: ...
```
### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_ | closed | 2025-01-21T13:26:43Z | 2025-01-21T13:35:58Z | https://github.com/litestar-org/litestar/issues/3964 | [
"Enhancement"
] | RoTorEx | 1 |
explosion/spaCy | nlp | 13,422 | Converting into exe file through pyinstaller-> spacy cannot find factory for 'curated transformer' | ```
import spacy
import spacy_curated_transformers
# import spacy_transformers
import curated_transformers
import spacy_alignments
import spacy_legacy
import spacy_loggers
import spacy_pkuseg
import os
nlp = spacy.load(os.getcwd()+'\\en_core_web_trf-3.7.3')
x= input()
doc= nlp(x)
result =[]
for sent in doc.sents:
    result.append(sent.text)
print(result)
```
I wanted to turn the above code into exe file.
However, [valueerror: [e002] can't find factory for 'curated transformer' for language english (en)] error occurs...
I used pyinstaller to convert it into exe file. In the pyinstaller, I included spacy, spacy_curated_transformers, curated_transformers into the hidden import.
I wonder how to make this executable file configure the curated transformer factory...
Please help me.

## My Environment
* Operating System: Windows 11
* Python Version Used: 3.11.8
* spaCy Version Used: 3.7.4
* Environment Information:
| closed | 2024-04-09T05:19:53Z | 2024-04-09T10:08:54Z | https://github.com/explosion/spaCy/issues/13422 | [
"install",
"feat / transformer"
] | estherkim083 | 1 |
mars-project/mars | pandas | 3,142 | [BUG] AttributeError: module 'asyncio' has no attribute 'create_task' | import mars.dataframe as md
Traceback (most recent call last):
File "F:/work/cii-pip-algo/mars_demo.py", line 3, in <module>
import mars.tensor as mt
File "C:\ProgramData\Anaconda3\envs\py36\lib\site-packages\mars\__init__.py", line 18, in <module>
from .core.context import get_context
File "C:\ProgramData\Anaconda3\envs\py36\lib\site-packages\mars\core\__init__.py", line 17, in <module>
from .base import ExecutionError
File "C:\ProgramData\Anaconda3\envs\py36\lib\site-packages\mars\core\base.py", line 18, in <module>
from ..serialization.core import Placeholder, fast_id
File "C:\ProgramData\Anaconda3\envs\py36\lib\site-packages\mars\serialization\__init__.py", line 15, in <module>
from .aio import AioSerializer, AioDeserializer
File "C:\ProgramData\Anaconda3\envs\py36\lib\site-packages\mars\serialization\aio.py", line 23, in <module>
from ..utils import lazy_import
File "C:\ProgramData\Anaconda3\envs\py36\lib\site-packages\mars\utils.py", line 91, in <module>
_create_task = asyncio.create_task
AttributeError: module 'asyncio' has no attribute 'create_task' | closed | 2022-06-14T07:37:25Z | 2022-06-20T05:39:18Z | https://github.com/mars-project/mars/issues/3142 | [] | zhangyuqi-1 | 3 |
dsdanielpark/Bard-API | nlp | 243 | Does this API use the new Gemini Pro model, instead of PaLM2? | **Solution you'd like**
- Please confirm if this API uses the Gemini Pro model instead of PaLM2
- If so, include Gemini Pro on the READMEs or docs
Thanks!
| closed | 2023-12-07T16:19:09Z | 2024-01-18T15:48:09Z | https://github.com/dsdanielpark/Bard-API/issues/243 | [
"documentation"
] | hansfzlorenzana | 3 |
fastapiutils/fastapi-utils | fastapi | 318 | [BUG] Required dependency `typing_inspect`? | **Describe the bug**
When using Pydantic 2, cbv.py imports the package `typing_inspect`. However, this is listed as an optional dependency.
**To Reproduce**
Steps to reproduce the behavior:
1. Install latest Fast API, fastapi-utils
2. Add a Resource
3. Run service using `fastapi dev ...`
4. See error
**Expected behavior**
It doesn't crash
**Screenshots**
```
โ /home/kevin/.../venv/lib/python3.11/site-packages/fastapi โ
โ _utils/cbv_base.py:5 in <module> โ
โ โ
โ 2 โ
โ 3 from fastapi import APIRouter, FastAPI โ
โ 4 โ
โ โฑ 5 from .cbv import INCLUDE_INIT_PARAMS_KEY, RETURN_TYPES_FUNC_KEY, _cbv โ
โ 6 โ
โ 7 โ
โ 8 class Resource: โ
โ โ
โ โญโโโโโโโโโโโโโโโโโโโโโโ locals โโโโโโโโโโโโโโโโโโโโโโโฎ โ
โ โ Any = typing.Any โ โ
โ โ APIRouter = <class 'fastapi.routing.APIRouter'> โ โ
โ โ Dict = typing.Dict โ โ
โ โ FastAPI = <class 'fastapi.applications.FastAPI'> โ โ
โ โ Optional = typing.Optional โ โ
โ โ Tuple = typing.Tuple โ โ
โ โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ โ
โ โ
โ /home/kevin/.../venv/lib/python3.11/site-packages/fastapi โ
โ _utils/cbv.py:21 in <module> โ
โ โ
โ 18 โ
โ 19 PYDANTIC_VERSION = pydantic.VERSION โ
โ 20 if PYDANTIC_VERSION[0] == "2": โ
โ โฑ 21 โ from typing_inspect import is_classvar โ
โ 22 else: โ
โ 23 โ from pydantic.typing import is_classvar # type: ignore[no-redef] โ
โ 24 โ
โ โ
โ โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ locals โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ โ
โ โ Any = typing.Any โ โ
โ โ APIRoute = <class 'fastapi.routing.APIRoute'> โ โ
โ โ APIRouter = <class 'fastapi.routing.APIRouter'> โ โ
โ โ Callable = typing.Callable โ โ
โ โ cast = <function cast at 0x74cc440459e0> โ โ
โ โ Depends = <function Depends at 0x74cc4224dda0> โ โ
โ โ get_type_hints = <function get_type_hints at 0x74cc44045b20> โ โ
โ โ inspect = <module 'inspect' from '/usr/lib/python3.11/inspect.py'> โ โ
โ โ List = typing.List โ โ
โ โ pydantic = <module 'pydantic' from โ โ
โ โ '/home/kevin/.../venv/lib/python3โฆ โ โ
โ โ PYDANTIC_VERSION = '2.7.4' โ โ
โ โ Route = <class 'starlette.routing.Route'> โ โ
โ โ Tuple = typing.Tuple โ โ
โ โ Type = typing.Type โ โ
โ โ TypeVar = <class 'typing.TypeVar'> โ โ
โ โ Union = typing.Union โ โ
โ โ WebSocketRoute = <class 'starlette.routing.WebSocketRoute'> โ โ
โ โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
ModuleNotFoundError: No module named 'typing_inspect'
```
**Environment:**
- OS: Linux
- FastAPI Utils, FastAPI, and Pydantic versions [e.g. `0.3.0`], get them with:
```Python
import fastapi_utils
import fastapi
import pydantic.utils
print(fastapi_utils.__version__)
print(fastapi.__version__)
print(pydantic.utils.version_info())
```
^ This also fails to run w/o typing_inspect.
After installing it:
```
0.7.0
0.111.0
/home/kevin/.../venv/lib/python3.11/site-packages/pydantic/_migration.py:283: UserWarning: `pydantic.utils:version_info` has been moved to `pydantic.version:version_info`.
warnings.warn(f'`{import_path}` has been moved to `{new_location}`.')
pydantic version: 2.7.4
pydantic-core version: 2.18.4
pydantic-core build: profile=release pgo=true
install path: /home/kevin/.../venv/lib/python3.11/site-packages/pydantic
python version: 3.11.6 (main, Oct 8 2023, 05:06:43) [GCC 13.2.0]
platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.38
related packages: typing_extensions-4.12.2 fastapi-0.111.0
commit: unknown
```
- Python version, get it with:
3.11.6
**Additional context**
Add any other context about the problem here.
| open | 2024-06-21T16:46:56Z | 2024-11-15T12:03:28Z | https://github.com/fastapiutils/fastapi-utils/issues/318 | [
"bug"
] | kevinhikaruevans | 4 |
coqui-ai/TTS | python | 3603 | [Bug] `formatter` set to None for the next iteration when calling `load_tts_samples` with more than one dataset | ### Describe the bug
```python
def load_tts_samples(
    datasets: Union[List[Dict], Dict],
    eval_split=True,
    formatter: Callable = None,
    eval_split_max_size=None,
    eval_split_size=0.01,
) -> Tuple[List[List], List[List]]:
    """Parse the dataset from the datasets config, load the samples as a List and load the attention alignments if provided.
    If `formatter` is not None, apply the formatter to the samples else pick the formatter from the available ones based
    on the dataset name.

    Args:
        ...
        formatter (Callable, optional): The preprocessing function to be applied to create the list of samples. It
            must take the root_path and the meta_file name and return a list of samples in the format of
            `[[text, audio_path, speaker_id], ...]]`. See the available formatters in `TTS.tts.dataset.formatter` as
            example. Defaults to None.
        ...

    Returns:
        Tuple[List[List], List[List]: training and evaluation splits of the dataset.
    """
    meta_data_train_all = []
    meta_data_eval_all = [] if eval_split else None
    if not isinstance(datasets, list):
        datasets = [datasets]
    for dataset in datasets:
        formatter_name = dataset["formatter"]
        dataset_name = dataset["dataset_name"]
        root_path = dataset["path"]
        meta_file_train = dataset["meta_file_train"]
        meta_file_val = dataset["meta_file_val"]
        ignored_speakers = dataset["ignored_speakers"]
        language = dataset["language"]
        # setup the right data processor
        if formatter is None:
            formatter = _get_formatter_by_name(formatter_name)
        # load train set
        meta_data_train = formatter(root_path, meta_file_train, ignored_speakers=ignored_speakers)
        assert len(meta_data_train) > 0, f" [!] No training samples found in {root_path}/{meta_file_train}"
        meta_data_train = add_extra_keys(meta_data_train, language, dataset_name)
        print(f" | > Found {len(meta_data_train)} files in {Path(root_path).resolve()}")
        # load evaluation split if set
        ...
        # set none for the next iter
        formatter = None
    return meta_data_train_all, meta_data_eval_all
```
### To Reproduce
```python
train_samples, eval_samples = load_tts_samples(
model_config.datasets,
eval_split = True,
formatter = metadata_formatter,
eval_split_max_size = model_config.eval_split_max_size,
```
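To make the effect concrete, here is a library-free sketch (simplified, illustrative names — not the TTS API): because the loop clears `formatter` at the end of each iteration, a caller-supplied formatter is only applied to the first dataset. Keeping the caller's formatter in a separate variable and re-applying it each iteration is one possible fix.

```python
# Library-free sketch of the reported behavior. load_samples stands in for
# load_tts_samples and BY_NAME for _get_formatter_by_name; all names here
# are illustrative, not TTS API.
BY_NAME = {"ljspeech": lambda root, meta, **kw: []}

def my_formatter(root_path, meta_file, **kwargs):  # a caller-supplied formatter
    return []

def load_samples_buggy(datasets, formatter=None):
    used = []
    for dataset in datasets:
        if formatter is None:
            formatter = BY_NAME[dataset["formatter"]]
        used.append("custom" if formatter is my_formatter else dataset["formatter"])
        formatter = None  # the reported bug: also discards the caller's formatter
    return used

def load_samples_fixed(datasets, formatter=None):
    used = []
    user_formatter = formatter  # remember what the caller passed in
    for dataset in datasets:
        formatter = user_formatter or BY_NAME[dataset["formatter"]]
        used.append("custom" if formatter is my_formatter else dataset["formatter"])
    return used

datasets = [{"formatter": "ljspeech"}, {"formatter": "ljspeech"}]
# load_samples_buggy(datasets, formatter=my_formatter) -> ["custom", "ljspeech"]
# load_samples_fixed(datasets, formatter=my_formatter) -> ["custom", "custom"]
```

With a custom formatter, the buggy loop silently falls back to the name-based formatter from the second dataset on, while the fixed variant applies the caller's formatter to every dataset.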
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [
"Tesla T4"
],
"available": true,
"version": "11.7"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "1.13.1+cu117",
"TTS": "0.16.1",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "",
"python": "3.10.13",
"version": "#1 SMP Debian 5.10.205-2 (2023-12-31)"
}
}
```
### Additional context
_No response_ | closed | 2024-02-23T11:40:26Z | 2025-01-03T09:47:56Z | https://github.com/coqui-ai/TTS/issues/3603 | [
"bug",
"wontfix"
] | yonas-g | 1 |
huggingface/pytorch-image-models | pytorch | 1,819 | Porting PyTorch weight to Jax | Assume we have a PyTorch Model and a Jax model. Is there a framework where you can port PyTorch layer weight to Jax? I might need to implement many models from PyTorch to Jax, and the only way I can think of that can test the correctness of the algorithm is by initializing and then porting the models.
| closed | 2023-05-21T03:48:39Z | 2023-05-21T22:49:47Z | https://github.com/huggingface/pytorch-image-models/issues/1819 | [
"enhancement"
] | ranlucienwang | 1 |
mwaskom/seaborn | data-science | 2,777 | Option to visualize entire range with vertical line in barplot | [Feature request]
With seaborn [barplots](https://seaborn.pydata.org/generated/seaborn.barplot.html), in addition to the bars, vertical lines (aka error bars) can be plotted to indicate the confidence interval (ci).
I have a use case where I would like to use the error bars to visualize the entire range spanned by the samples, i.e. the min and the max of the data. For boxplot, this is possible by specifying `whis=(0, 100)`, which apparently is passed through to matplotlib. However, it seems that this is not possible with barplot in the current version of seaborn (0.11.2).
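In the meantime, a user-side workaround sketch (plain Python; the names are illustrative and the matplotlib call is shown commented out, since this is not seaborn API): compute the per-category min and max yourself and pass asymmetric error bars to `plt.bar`.

```python
# Sketch of a user-side workaround for full-range ("entire range") error bars:
# compute per-category (min, max) and pass asymmetric yerr to matplotlib.
def full_range_errorbars(categories, values):
    """Return labels, means, and [lower, upper] error distances spanning min..max."""
    groups = {}
    for cat, val in zip(categories, values):
        groups.setdefault(cat, []).append(val)
    labels = sorted(groups)
    means = [sum(groups[c]) / len(groups[c]) for c in labels]
    lower = [m - min(groups[c]) for m, c in zip(means, labels)]
    upper = [max(groups[c]) - m for m, c in zip(means, labels)]
    return labels, means, [lower, upper]

# labels, means, yerr = full_range_errorbars(cats, vals)
# plt.bar(labels, means, yerr=yerr, capsize=4)  # matplotlib, not seaborn
```

This reproduces the visual effect of Proposal 1 without touching seaborn internals.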
I see 2 options to add this to seaborn:
* Proposal 1: Add a custom option for `ci`, e.g. `er` (for "entire range") similar to the already existing option `sd`.
* Proposal 2: Add an option to disable bootstrapping for determining the confidence interval. In this case, one should be able to disable bootstrapping and use `ci=100` to get the entire range.
I locally tried out proposal 1 and it seems to work fine. However, I'm not sure about potential negative side effects. Here is my small extension (relative to v0.11.2, line [1535](https://github.com/mwaskom/seaborn/blob/v0.11.2/seaborn/categorical.py#L1535)):
```
elif ci == 'er':
minVal = np.min(stat_data)
maxVal = np.max(stat_data)
confint[i].append((minVal, maxVal))
```
| closed | 2022-04-05T17:17:17Z | 2022-05-11T00:52:22Z | https://github.com/mwaskom/seaborn/issues/2777 | [] | cirnod | 1 |
JaidedAI/EasyOCR | pytorch | 794 | Adjusting Custom Model Hyperparameters is allowed, but not functional. | The codebase allows hyperparameters to be passed programmatically for custom models, even though doing so has no effect. I'm sure this is intentional, but there's no documentation on the issue. There should be some sort of warning/exception when adjusting a custom model's hyperparameters programmatically, as a user may otherwise waste time on ineffective hyperparameter tuning. | open | 2022-07-21T17:40:01Z | 2022-07-21T17:40:01Z | https://github.com/JaidedAI/EasyOCR/issues/794 | [] | macksjeremy | 0 |
robotframework/robotframework | automation | 5,310 | Add optional sort key for ignore_value_order argument | This is a follow-up thread on some proposed solutions for `ignore_value_order` in the case when items in lists are not sortable (for example, when they are dicts):
here https://github.com/robotframework/robotframework/pull/5220#issuecomment-2457742608 and here https://github.com/robotframework/robotframework/pull/5220#issuecomment-2468180335
issue occurs when dict looks like this
```
When compared dicts has a key that has a value of a list of another dictionaries
{
โ"a": 5,
  "a": 5,
  "b": [
    {"a": 3, "b": 4},
    {"a": 1, "b": 2}
  ]
```
definition of method is like this
```
def dictionaries_should_be_equal(self, dict1, dict2, msg=None, values=True,
ignore_keys=None, ignore_case=False,
ignore_value_order=False):
```
one idea is to introduce another argument `custom_sort_key=None` where we can pass `custom_sort_key=str` or `custom_sort_key=lambda x: x["id"]`
`def normalize(self, value)` in `Collections.py` can then consume this argument
The current implementation:
```
def normalize_list(self, value):
cls = type(value)
if self.ignore_order:
value = sorted(value)
value = [self.normalize(v) for v in value]
return self._try_to_preserve_type(value, cls)
```
use custom_sort_key in case default `sorted()` fails
```
def normalize_list(self, value, custom_sort_key=None):
cls = type(value)
if self.ignore_order:
try:
value = sorted(value)
except Exception as e:
if custom_sort_key:
value = sorted(value, key=custom_sort_key)
else:
raise e
value = [self.normalize(v) for v in value]
return self._try_to_preserve_type(value, cls)
```
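A small stdlib demonstration of the motivation (not Robot Framework code): plain `sorted()` raises `TypeError` on dicts, while a key function makes the comparison well-defined.

```python
items = [{"a": 3, "b": 4}, {"a": 1, "b": 2}]

try:
    sorted(items)  # dicts are not orderable
except TypeError:
    unsortable = True

by_repr = sorted(items, key=str)                # custom_sort_key=str
by_field = sorted(items, key=lambda d: d["a"])  # custom_sort_key=lambda x: x["a"]
```

Either key would let `normalize_list` fall back gracefully instead of propagating the `TypeError` to the test author.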
| open | 2025-01-07T11:59:25Z | 2025-01-08T14:49:21Z | https://github.com/robotframework/robotframework/issues/5310 | [] | MarcinGmurczyk | 1 |
hankcs/HanLP | nlp | 757 | Difference between the word net (词网) and the word graph (词图) | Could you explain which kinds of algorithms the word net (词网) and the word graph (词图) data structures are each suited to?
ViterbiSegment uses the word-net structure, where each row is a chain of words sharing a common prefix. I think that using the same word graph as in DijkstraSegment, and then picking the optimal solution node by node, should also be able to implement Viterbi shortest-path segmentation.
So what is the point of the separately defined word-net structure? Is it more efficient? | closed | 2018-02-02T04:57:13Z | 2020-01-01T10:50:47Z | https://github.com/hankcs/HanLP/issues/757 | [
"ignored"
] | yunsuyunsu | 2 |
huggingface/datasets | machine-learning | 7,243 | ArrayXD with None as leading dim incompatible with DatasetCardData | ### Describe the bug
Creating a dataset with ArrayXD features leads to errors when downloading from hub due to DatasetCardData removing the Nones
@lhoestq
### Steps to reproduce the bug
```python
import numpy as np
from datasets import Array2D, Dataset, Features, load_dataset
def examples_generator():
for i in range(4):
yield {
"array_1d": np.zeros((10,1), dtype="uint16"),
"array_2d": np.zeros((10, 1), dtype="uint16"),
}
features = Features(array_1d=Array2D((None,1), "uint16"), array_2d=Array2D((None, 1), "uint16"))
dataset = Dataset.from_generator(examples_generator, features=features)
dataset.push_to_hub("alex-hh/test_array_1d2d")
ds = load_dataset("alex-hh/test_array_1d2d")
```
Source of error appears to be DatasetCardData.to_dict invoking DatasetCardData._remove_none
```python
from huggingface_hub import DatasetCardData
from datasets.info import DatasetInfosDict
dataset_card_data = DatasetCardData()
DatasetInfosDict({"default": dataset.info.copy()}).to_dataset_card_data(dataset_card_data)
print(dataset_card_data.to_dict()) # removes Nones in shape
```
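A stdlib illustration of the suspected mechanism (a simplified stand-in written for demonstration, not the actual `huggingface_hub` code): recursively dropping `None` values also drops `None` from feature shapes, so `(None, 1)` round-trips through the dataset card as `(1,)`.

```python
# Simplified stand-in for the recursive None-removal (illustrative, not the
# actual huggingface_hub implementation): dropping None everywhere also drops
# it from feature shapes.
def remove_none(obj):
    if isinstance(obj, dict):
        return {k: remove_none(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, (list, tuple)):
        return type(obj)(remove_none(v) for v in obj if v is not None)
    return obj

card = {"features": {"array_1d": {"_type": "Array2D", "shape": [None, 1]}}}
cleaned = remove_none(card)
# cleaned["features"]["array_1d"]["shape"] == [1]  -> shape mismatch on load
```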
### Expected behavior
Should be possible to load datasets saved with shape None in leading dimension
### Environment info
3.0.2 and latest huggingface_hub | open | 2024-10-21T15:08:13Z | 2024-10-22T14:18:10Z | https://github.com/huggingface/datasets/issues/7243 | [] | alex-hh | 5 |
serengil/deepface | machine-learning | 984 | add preprocessing module to load image | load_image, load_base64, normalize_input should be moved to that module
Ref:
- https://github.com/serengil/deepface/blob/master/deepface/commons/functions.py#L87
- https://github.com/serengil/deepface/blob/master/deepface/commons/functions.py#L270
- https://github.com/serengil/deepface/blob/master/deepface/commons/functions.py#L71 | closed | 2024-01-29T11:57:14Z | 2024-01-31T23:45:42Z | https://github.com/serengil/deepface/issues/984 | [
"enhancement"
] | serengil | 1 |
docarray/docarray | fastapi | 1,601 | Handle `max_elements` from HNSWLibIndexer | By default, `max_elements` is set to 1024. I believe this max_elements should be recomputed and indexes resized dynamically | closed | 2023-05-31T13:08:32Z | 2023-06-01T08:00:59Z | https://github.com/docarray/docarray/issues/1601 | [] | JoanFM | 0 |
ivy-llc/ivy | numpy | 28,372 | Fix Frontend Failing Test: tensorflow - mathematical_functions.jax.numpy.minimum | closed | 2024-02-21T17:50:59Z | 2024-02-21T21:29:19Z | https://github.com/ivy-llc/ivy/issues/28372 | [
"Sub Task"
] | samthakur587 | 0 | |
httpie/cli | rest-api | 1,046 | Content-Length header is one byte too big | **Checklist**
- [x] I've searched for similar issues.
- [2.3.0] I'm using the latest version of HTTPie.
---
**What are the steps to reproduce the problem?**
1.
`echo "A" | http -v POST localhost:9999`
Request:
```
POST / HTTP/1.1
Host: localhost:9999
User-Agent: HTTPie/2.3.0
Accept-Encoding: gzip, deflate
Accept: application/json, */*;q=0.5
Connection: keep-alive
Content-Type: application/json
Content-Length: 2
A
```
2.
`curl -s -XPOST localhost:9999 --data A`
Request:
```
POST / HTTP/1.1
Host: localhost:9999
User-Agent: curl/7.64.1
Accept: */*
Content-Length: 1
Content-Type: application/x-www-form-urlencoded
A
```
**What is the expected result?**
Header Content-Length: 1 !
**What happens instead?**
Header Content-Length: 2
I think there's a newline added.
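That matches `echo`'s behavior — it appends a trailing newline, which becomes part of the request body that HTTPie measures. A quick shell check (a workaround on the caller's side, not a change to HTTPie):

```shell
# echo appends a trailing newline, which HTTPie then counts; printf does not.
echo "A"   | wc -c    # 2 bytes: "A" + "\n"
printf 'A' | wc -c    # 1 byte
# so a workaround is: printf 'A' | http POST localhost:9999
```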
**Debug output**
See top
| closed | 2021-03-15T08:02:37Z | 2021-03-15T09:08:02Z | https://github.com/httpie/cli/issues/1046 | [] | cynay | 1 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 438 | [Feature request] Support for live-room gifts and comments |
Hope you can provide an API for real-time gift data and comment data from live-stream rooms.
Thank you very much.
| open | 2024-07-05T01:45:41Z | 2024-10-31T16:31:39Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/438 | [
"enhancement"
] | liangzhupic | 1 |
recommenders-team/recommenders | deep-learning | 1,514 | [BUG] Amazon reviews data set download | ### Description
<!--- Describe your issue/bug/request in detail -->
When testing some of the DeepRec algorithms, the dataset does not download properly from Amazon reviews website.
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
Azure Data Science Virtual Machine.
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
Tried it in a conda environment with everything installed (`recommenders[all]`). Doing
```
pytest tests/unit/recommenders/models/test_deeprec_model.py -k slirec
```
shows the error
```
EOFError: Compressed file ended before the end-of-stream marker was reached
```
raised by the gzip reader function.
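That error usually indicates a truncated download. A small stdlib sketch (the file name is an assumption for illustration) that validates the archive up front and removes it so the next run re-downloads:

```python
import gzip
import os

def is_complete_gzip(path):
    """Return True if the gzip file can be read through to its end-of-stream marker."""
    try:
        with gzip.open(path, "rb") as fh:
            while fh.read(1 << 20):
                pass
        return True
    except (EOFError, OSError):
        return False

# Example guard before handing the file to the data loader:
# if not is_complete_gzip("reviews_Movies_and_TV_5.json.gz"):
#     os.remove("reviews_Movies_and_TV_5.json.gz")  # force a fresh download
```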
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
Test should pass successfully.
### Other Comments
| closed | 2021-09-01T11:58:14Z | 2021-09-09T12:03:30Z | https://github.com/recommenders-team/recommenders/issues/1514 | [
"bug"
] | anargyri | 1 |
apache/airflow | data-science | 48,178 | Update "suggest change on this page" links on Airflow website | ### Body
After https://github.com/apache/airflow/pull/47798, we need to update the links on airflow website to start pointing to the new links as 2.10.x and all others will be affected, either do that or add a redirect that would go back to the old path.
Conversation which is related https://github.com/apache/airflow/pull/47798#issuecomment-2746802098
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | open | 2025-03-24T07:21:29Z | 2025-03-24T08:49:28Z | https://github.com/apache/airflow/issues/48178 | [
"kind:documentation",
"kind:meta",
"type:doc-only"
] | amoghrajesh | 2 |
vaexio/vaex | data-science | 2,281 | [BUG-REPORT] How to use vaex.open partititioning argument? |
**Description**
Hello, I want to partially load a partitioned parquet file from vaex.
However, looking at the documentation, there is no such option, so I am posting this as a bug report.
In pyarrow, you can extract rows from partitioned data using the `filters` argument.
However, I couldn't find such functionality in vaex, so I'm asking.
**In other words, I wonder whether the filter functionality supported by pyarrow can be used from vaex, and would appreciate examples.**
**Software information**
- Vaex version
- {'vaex': '4.11.1', 'vaex-core': '4.11.1', 'vaex-viz': '0.5.2', 'vaex-hdf5': '0.12.3', 'vaex-server': '0.8.1', 'vaex-astro': '0.9.1', 'vaex-jupyter': '0.8.0', 'vaex-ml': '0.18.0'}
- Vaex was installed via: pip
- OS: ubuntu 18.04
**Additional information**
# generate data (paritioning)
```
import vaex
import numpy as np , pandas as pd
from sklearn.datasets import make_classification
import pyarrow as pa
import pyarrow.parquet as pq
X, y = make_classification(n_samples=100000, n_features=5, n_classes=2, random_state=1234)
X_pd= pd.DataFrame(X,columns =[ f"feature_{i}" for i in range(X.shape[1])])
X_pd['class'] =y
X_pd['class1'] =np.random.randint(0,5,len(y))
X_pd.to_parquet(
path="./test_vaex",
engine='pyarrow',
compression='snappy',
partition_cols=['class','class1']
)
```
# filter
## filter data using pyarrow parquet
```
filters = [("class","=",0),("class1","in",{1,2})]
df_pq_filtered = pq.read_table("./test_vaex",filters=filters )
df_pq_filtered.shape
```
output : (20106, 7)
## filter data using vaex (I am not sure )
```
df_vaex_filtered = vaex.open("./test_vaex",filters=filters) ## NOT WORKING
df_vaex_filtered.shape
```
output : (100000, 7)
## filter data using vaex (just try)
```
df_vaex_partition = vaex.open('./test_vaex/', partitioning=['class1'])
df_vaex_partition[df_vaex_partition['class1']=='class=0'] # raise Error
```
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/dist-packages/vaex/dataframe.py", line 4223, in __repr__
return self._head_and_tail_table(format='plain')
File "/usr/local/lib/python3.9/dist-packages/vaex/dataframe.py", line 3962, in _head_and_tail_table
N = _len(self)
File "/usr/local/lib/python3.9/dist-packages/vaex/dataframe.py", line 72, in _len
return o.__len__()
File "/usr/local/lib/python3.9/dist-packages/vaex/dataframe.py", line 4311, in __len__
self._cached_filtered_length = int(self.count())
File "/usr/local/lib/python3.9/dist-packages/vaex/dataframe.py", line 965, in count
return self._compute_agg('count', expression, binby, limits, shape, selection, delay, edges, progress, array_type=array_type)
File "/usr/local/lib/python3.9/dist-packages/vaex/dataframe.py", line 939, in _compute_agg
return self._delay(delay, progressbar.exit_on(var))
File "/usr/local/lib/python3.9/dist-packages/vaex/dataframe.py", line 1778, in _delay
self.execute()
File "/usr/local/lib/python3.9/dist-packages/vaex/dataframe.py", line 420, in execute
self.executor.execute()
File "/usr/local/lib/python3.9/dist-packages/vaex/execution.py", line 308, in execute
for _ in self.execute_generator():
File "/usr/local/lib/python3.9/dist-packages/vaex/execution.py", line 432, in execute_generator
yield from self.thread_pool.map(self.process_part, dataset.chunk_iterator(run.dataset_deps, chunk_size),
File "/usr/local/lib/python3.9/dist-packages/vaex/multithreading.py", line 110, in map
for value in iterator:
File "/usr/local/lib/python3.9/dist-packages/vaex/itertools.py", line 5, in buffer
values.append(next(i))
File "/usr/lib/python3.9/concurrent/futures/_base.py", line 609, in result_iterator
yield fs.pop().result()
File "/usr/lib/python3.9/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/usr/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/usr/lib/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.9/dist-packages/vaex/multithreading.py", line 86, in wrapped
return callable(self.local.index, *args, **kwargs, **kwargs_extra)
File "/usr/local/lib/python3.9/dist-packages/vaex/execution.py", line 500, in process_part
self.process_tasks(thread_index, i1, i2, chunks, run, df, tasks)
File "/usr/local/lib/python3.9/dist-packages/vaex/execution.py", line 520, in process_tasks
filter_mask = filter_scope.evaluate(vaex.dataframe.FILTER_SELECTION_NAME)
File "/usr/local/lib/python3.9/dist-packages/vaex/scopes.py", line 113, in evaluate
result = self[expression]
File "/usr/local/lib/python3.9/dist-packages/vaex/scopes.py", line 156, in __getitem__
mask_values = selection.evaluate(self.df, variable, self.i1, self.i2, self)#, self.filter_mask)
File "/usr/local/lib/python3.9/dist-packages/vaex/selections.py", line 132, in evaluate
result = scope.evaluate(self.boolean_expression)
File "/usr/local/lib/python3.9/dist-packages/vaex/scopes.py", line 119, in evaluate
result = eval(expression, expression_namespace, self)
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.9/dist-packages/vaex/arrow/numpy_dispatch.py", line 136, in wrapper
result = f(*args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/vaex/functions.py", line 48, in decorated
return f(x, *args, **kwargs)
File "/usr/local/lib/python3.9/dist-packages/vaex/functions.py", line 1006, in str_equals
x = _to_string_sequence(x)
File "/usr/local/lib/python3.9/dist-packages/vaex/column.py", line 607, in _to_string_sequence
return convert.column_from_arrow_array(x).string_sequence
File "/usr/local/lib/python3.9/dist-packages/vaex/arrow/convert.py", line 85, in column_from_arrow_array
return numpy_array_from_arrow_array(arrow_array)
File "/usr/local/lib/python3.9/dist-packages/vaex/arrow/convert.py", line 125, in numpy_array_from_arrow_array
dtype = vaex.array_types.to_numpy_type(arrow_array.type)
File "/usr/local/lib/python3.9/dist-packages/vaex/array_types.py", line 315, in to_numpy_type
return numpy_dtype_from_arrow_type(data_type, strict=strict)
File "/usr/local/lib/python3.9/dist-packages/vaex/array_types.py", line 332, in numpy_dtype_from_arrow_type
raise NotImplementedError(f'Cannot convert {arrow_type}')
NotImplementedError: Cannot convert dictionary<values=string, indices=int32, ordered=0>
``` | open | 2022-11-28T03:02:47Z | 2022-11-30T08:55:18Z | https://github.com/vaexio/vaex/issues/2281 | [] | sungreong | 4 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,843 | [Bug]: stable diffusion not using gpu | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
When using Stable Diffusion, the GPU of my RTX 3060 is not used; when I open Task Manager while generating images, it does not show the GPU being used. I want to make more use of the GPU since I have 12 GB of VRAM.
Is there a solution for using the GPU so that I can get the fastest image generation possible?

### Steps to reproduce the problem
1
### What should have happened?
1
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
1
### Console logs
```Shell
1
```
### Additional information
1 | open | 2025-02-14T23:45:26Z | 2025-02-21T13:27:18Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16843 | [] | Aivoice96 | 9 |
jazzband/django-oauth-toolkit | django | 665 | "+" in username for password grant? | Hello,
I am using the **resource password grant** option. I successfully manage to run the following console command:
```
curl -X POST -d "grant_type=password&username=johnsmith@some_email.com&password=<password>" -u"<client_id>:<client_secret>" http://localhost:8000/o/token/
```
However, when I run the following, I get an *Invalid credentials given* error:
```
curl -X POST -d "grant_type=password&username=johnsmith+1@some_email.com&password=<password>" -u"<client_id>:<client_secret>" http://localhost:8000/o/token/
```
Do you know what is going on? I am simply adding `+1` to the email and this issue happens.
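One likely cause worth ruling out (an assumption on my part, not something confirmed in this thread): in `application/x-www-form-urlencoded` bodies a literal `+` decodes to a space, so the backend would try to authenticate `johnsmith 1@some_email.com`. A stdlib check:

```python
from urllib.parse import parse_qs, urlencode

raw = "grant_type=password&username=johnsmith+1@some_email.com&password=pw"
assert parse_qs(raw)["username"] == ["johnsmith 1@some_email.com"]  # "+" became a space

# Percent-encoding the value first ("+" -> "%2B") keeps it intact:
safe = urlencode({"username": "johnsmith+1@some_email.com"})
assert parse_qs(safe)["username"] == ["johnsmith+1@some_email.com"]
```

With curl, this is what `--data-urlencode "username=johnsmith+1@some_email.com"` does for you.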
Thanks! | closed | 2018-11-08T18:52:05Z | 2018-11-08T19:03:29Z | https://github.com/jazzband/django-oauth-toolkit/issues/665 | [
"question"
] | bartmika | 1 |
flavors/django-graphql-jwt | graphql | 251 | Error: Cannot return null for non-nullable field RefreshToken.token. | Hi,
I'm trying to use django-graphene-jwt and django-graphene-auth with my Vue frontend. As saving the token and refreshToken via httpOnly cookie is recommended as the safest way possible I'm running into the problem that the mutation refreshToken is resulting in an error if no cookie is set and therefore can't send anything.
```
mutation RefreshSilently {
refreshToken {
token
payload
refreshExpiresIn
errors
}
}
=> "Error: Cannot return null for non-nullable field RefreshToken.token."
```
I've been searching all related repros, the web and stackoverflow and can't find a solution to this. What am I doing wrong?
```
Packages installed:
django-graphql-auth==0.3.15
django-graphql-jwt==0.3.1
graphene==2.1.8
graphene-django==2.15.0
graphql-core==2.3.2
graphql-relay==2.0.1
``` | closed | 2021-01-19T14:31:14Z | 2021-01-19T21:51:26Z | https://github.com/flavors/django-graphql-jwt/issues/251 | [] | holtergram | 1 |
neuml/txtai | nlp | 331 | POST Error indexing images via Embeddings API service | I'm getting the following error, when indexing images using POST to the txtai service url `http://txtai.default.127.0.0.1.sslip.io/add`.
`"detail":[{"loc":["body"],"msg":"value is not a valid list","type":"type_error.list"}]}`
Possibly related to the FastAPI endpoint?
The same cluster works with text documents, but I'm unsure how to index images.
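For the `/add` error specifically: FastAPI's `value is not a valid list` means the request body was not a JSON *list* of documents. A sketch of the expected shape (the `id`/`text` payload and the image-URL approach are assumptions about the configured workflow, so adjust to your setup):

```python
import json
import urllib.request

# /add expects a JSON list of documents, not a single object
docs = [
    {"id": "img1", "text": "s3://bucket/images/img1.jpg"},
    {"id": "img2", "text": "s3://bucket/images/img2.jpg"},
]
body = json.dumps(docs).encode()

req = urllib.request.Request(
    "http://txtai.default.127.0.0.1.sslip.io/add",
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)   # then hit /index to build the index
```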
Is it possible to periodically index images in a remote s3 directory via a workflow?
My current workflow YAML is:
```yaml
writable: true
path: /tmp/index.tar.gz
cloud:
provider: s3
container: index
key: "<key>"
secret: "<secret>"
host: txtai.s3.amazonaws.com
port: 80
embeddings:
path: "sentence-transformers/clip-ViT-B-32-multilingual-v1"
content: true
```
I'm hoping to implement the Images embedding search in a workflow configuration, as in the [examples/images.ipynb notebook](https://github.com/neuml/txtai/blob/8a9e5592291ebce120c010bd625af3c542545cf5/examples/images.py) | closed | 2022-09-08T13:28:11Z | 2022-10-12T13:36:11Z | https://github.com/neuml/txtai/issues/331 | [] | edanweis | 14 |
pallets/flask | flask | 4,868 | docs should explain the `@setupmethod` behavior | The [appcontext documentation](https://flask.palletsprojects.com/en/2.2.x/appcontext/) should explain the new `@setupmethod` behavior of `shell_context_processor`, linked to https://github.com/pallets/flask/pull/4577.
Flask raises an Assertion Error.
> AssertionError: The setup method 'shell_context_processor' can no
longer be called on the application. It has already handled its
first request, any changes will not be applied consistently. Make
sure all imports, decorators, functions, etc. needed to set up the
application are done before running it.
The error should be documented, and a correct implementation should be provided.
| closed | 2022-11-16T11:31:02Z | 2023-02-25T00:06:17Z | https://github.com/pallets/flask/issues/4868 | [
"docs"
] | HLFH | 3 |
trevorstephens/gplearn | scikit-learn | 37 | More Flexibility in User Defined Measure Function | I was trying to use a self defined fitness function.
Actually, my measure function won't use the `y_true` or `w` at all. Instead of using metrics that compares the difference between `y_true` and `y_pred`, I measure the fitness of individuals by feeding the `y_pred` into some other function(which would spit out a float) and get the returned values as fitness.
I think the following can be improved:
1) the measure function's arguments will not always be (y, y_pred, w), it could be more flexible and generalized.
2) the way you check if a np.float is returned by the function. I was searching for better ways of checking returned types in python, but didn't get satisfying results yet. It was actually quite tricky.
Thanks for your work on genetic programming. It's awesome and really easy to understand and use.
I'll be rather happy to help you improve this fantastic tool. There are much more can be added, things like selection function, mutation function, even user defined selection function, mutation function etc.
| closed | 2017-06-16T04:38:51Z | 2017-08-03T04:23:43Z | https://github.com/trevorstephens/gplearn/issues/37 | [] | chenyuan920911 | 3 |
ARM-DOE/pyart | data-visualization | 1,553 | Missing get_sweep_keys from xradar | * Py-ART version: git master
* Python version: 3.10
* Operating System: Linux from scratch
* [xradar](https://github.com/openradar/xradar): 0.<s>5</s>3.0
### Description
`ImportError: cannot import name 'get_sweep_keys' from 'xradar.util' (/usr/lib/python3.10/site-packages/xradar/util.py)`
### What I Did
```
import pyart
```
Introduced by https://github.com/openradar/xradar/issues/164 ? | closed | 2024-04-10T14:16:35Z | 2024-06-21T13:31:09Z | https://github.com/ARM-DOE/pyart/issues/1553 | [] | waarmond | 7 |
FactoryBoy/factory_boy | sqlalchemy | 1,092 | How to attach RelatedFactoryList result to instance? | Hi!
I have a question about using RelatedFactoryList in async SQLAlchemy. RelatedFactoryList creates instances but they are not attached to instance.
overridden for async base factory (from discussions in this repository):
```python
import inspect
from factory.alchemy import SESSION_PERSISTENCE_COMMIT, SESSION_PERSISTENCE_FLUSH, SQLAlchemyModelFactory
from factory.base import FactoryOptions
from factory.builder import StepBuilder, BuildStep, parse_declarations
from factory import FactoryError, RelatedFactoryList, CREATE_STRATEGY
from sqlalchemy import select
from sqlalchemy.exc import IntegrityError, NoResultFound
def use_postgeneration_results(self, step, instance, results):
return self.factory._after_postgeneration(
instance,
create=step.builder.strategy == CREATE_STRATEGY,
results=results,
)
FactoryOptions.use_postgeneration_results = use_postgeneration_results
class SQLAlchemyFactory(SQLAlchemyModelFactory):
@classmethod
async def _generate(cls, strategy, params):
if cls._meta.abstract:
raise FactoryError(
"Cannot generate instances of abstract factory %(f)s; "
"Ensure %(f)s.Meta.model is set and %(f)s.Meta.abstract "
"is either not set or False." % dict(f=cls.__name__)
)
step = AsyncStepBuilder(cls._meta, params, strategy)
return await step.build()
@classmethod
async def _create(cls, model_class, *args, **kwargs):
for key, value in kwargs.items():
if inspect.isawaitable(value):
kwargs[key] = await value
return await super()._create(model_class, *args, **kwargs)
@classmethod
async def create_batch(cls, size, **kwargs):
return [await cls.create(**kwargs) for _ in range(size)]
@classmethod
async def _save(cls, model_class, session, args, kwargs):
session_persistence = cls._meta.sqlalchemy_session_persistence
obj = model_class(*args, **kwargs)
session.add(obj)
if session_persistence == SESSION_PERSISTENCE_FLUSH:
await session.flush()
elif session_persistence == SESSION_PERSISTENCE_COMMIT:
await session.commit()
return obj
@classmethod
async def _get_or_create(cls, model_class, session, args, kwargs):
key_fields = {}
for field in cls._meta.sqlalchemy_get_or_create:
if field not in kwargs:
raise FactoryError(
"sqlalchemy_get_or_create - "
"Unable to find initialization value for '%s' in factory %s" % (field, cls.__name__)
)
key_fields[field] = kwargs.pop(field)
obj = (await session.execute(select(model_class).filter_by(*args, **key_fields))).scalars().one_or_none()
if not obj:
try:
obj = await cls._save(model_class, session, args, {**key_fields, **kwargs})
except IntegrityError as e:
                await session.rollback()
if cls._original_params is None:
raise e
get_or_create_params = {
lookup: value
for lookup, value in cls._original_params.items()
if lookup in cls._meta.sqlalchemy_get_or_create
}
if get_or_create_params:
try:
obj = (
(await session.execute(select(model_class).filter_by(**get_or_create_params)))
.scalars()
.one()
)
except NoResultFound:
# Original params are not a valid lookup and triggered a create(),
# that resulted in an IntegrityError.
raise e
else:
raise e
return obj
class AsyncStepBuilder(StepBuilder):
# Redefine build function that await for instance creation and awaitable postgenerations
async def build(self, parent_step=None, force_sequence=None):
"""Build a factory instance."""
# TODO: Handle "batch build" natively
pre, post = parse_declarations(
self.extras,
base_pre=self.factory_meta.pre_declarations,
base_post=self.factory_meta.post_declarations,
)
if force_sequence is not None:
sequence = force_sequence
elif self.force_init_sequence is not None:
sequence = self.force_init_sequence
else:
sequence = self.factory_meta.next_sequence()
step = BuildStep(
builder=self,
sequence=sequence,
parent_step=parent_step,
)
step.resolve(pre)
args, kwargs = self.factory_meta.prepare_arguments(step.attributes)
instance = await self.factory_meta.instantiate(
step=step,
args=args,
kwargs=kwargs,
)
postgen_results = {}
for declaration_name in post.sorted():
declaration = post[declaration_name]
declaration_result = declaration.declaration.evaluate_post(
instance=instance,
step=step,
overrides=declaration.context,
)
if inspect.isawaitable(declaration_result):
declaration_result = await declaration_result
if isinstance(declaration.declaration, RelatedFactoryList):
for idx, item in enumerate(declaration_result):
if inspect.isawaitable(item):
declaration_result[idx] = await item
postgen_results[declaration_name] = declaration_result
postgen = self.factory_meta.use_postgeneration_results(
instance=instance,
step=step,
results=postgen_results,
)
if inspect.isawaitable(postgen):
await postgen
return instance
```
**models.py**
```python
class TtzFile(Base):
"""ะะพะดะตะปั ัะฐะนะปะฐ ะขะขะ."""
__tablename__ = "ttz_files"
__mapper_args__ = {"eager_defaults": True}
id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
ttz_id: Mapped[int] = mapped_column(ForeignKey("ttz.id"))
attachment_id: Mapped[UUID] = mapped_column()
ttz: Mapped["Ttz"] = relationship(back_populates="files")
class Ttz(Base):
__tablename__ = "ttz"
id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
name: Mapped[str] = mapped_column(String(250))
files: Mapped[list["TtzFile"]] = relationship(cascade="all, delete-orphan", back_populates="ttz")
```
**factories.py**
```python
class TtzFactory(SQLAlchemyFactory):
    name = Sequence(lambda n: f"ТТЗ {n + 1}")
    start_date = FuzzyDate(parse_date("2024-02-23"))
    is_deleted = False
    output_message = None
    input_message = None
    error_output_message = None
    files = RelatedFactoryList("tests.factories.ttz.TtzFileFactory", 'ttz', 2)

    class Meta:
        model = Ttz
        sqlalchemy_get_or_create = ["name"]
        sqlalchemy_session_factory = Session
        sqlalchemy_session_persistence = SESSION_PERSISTENCE_FLUSH

    @classmethod
    def _after_postgeneration(cls, instance, create, results=None):
        session = cls._meta.sqlalchemy_session_factory()
        return session.refresh(instance, attribute_names=["files"])


class TtzFileFactory(SQLAlchemyFactory):
    ttz = SubFactory(TtzFactory)
    file_name = Faker("file_name")
    attachment_id = FuzzyUuid()

    class Meta:
        model = TtzFile
        sqlalchemy_get_or_create = ["attachment_id"]
        sqlalchemy_session_factory = Session
        sqlalchemy_session_persistence = SESSION_PERSISTENCE_FLUSH
```
To make Ttz.files available, I have to do a refresh:
```python
@classmethod
def _after_postgeneration(cls, instance, create, results=None):
    session = cls._meta.sqlalchemy_session_factory()
    return session.refresh(instance, attribute_names=["files"])
```
My question is: is this the only way to get Ttz.files? I mean, do I have to write an `_after_postgeneration` method in each factory where I need to get a related list?
| open | 2024-09-10T10:27:48Z | 2024-09-12T13:31:30Z | https://github.com/FactoryBoy/factory_boy/issues/1092 | [
"Q&A",
"SQLAlchemy"
] | albertalexandrov | 6 |
pennersr/django-allauth | django | 4,000 | Feature request: MFA remember device key | Thanks for adding all the fantastic MFA options. Many sites, such as Google Apps, provide MFA with an option to "remember the device" which differs from stay logged in. A user may have an expired session, but can skip the MFA step on the device during login. One implementation of this is to store a code somewhere on the device, perhaps localstorage. Then the code can automatically be provided and MFA is skipped as far as the end user is concerned. There are security implications. What if the device is stolen? Is the code encrypted (probably not). It wouldn't be much different from stealing a TOTP secret. One hopes the browser is sandboxed and applications cannot simply open it's storage and read the key. If they could do so - they could just read the session cookie and save some trouble. I'm curious on your thoughts for this feature. | closed | 2024-07-31T18:41:31Z | 2025-03-20T12:44:21Z | https://github.com/pennersr/django-allauth/issues/4000 | [
"Feature request"
] | bufke | 2 |
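To make the "remember this device" mechanism from the allauth feature request above concrete, here is a rough sketch of the token mechanics (names and layout are mine, not allauth's): the server signs a per-user token with an expiry, the browser stores it, and a still-valid token lets login skip the MFA step.

```python
# Sketch of a signed remember-device token. SECRET, the 30-day window, and
# the token layout are illustrative choices, not a vetted design.
import hashlib
import hmac

SECRET = b"server-side-secret-key"  # placeholder; a real app uses settings


def issue_token(user_id: str, now: float) -> str:
    """Sign a remember-device token that expires in 30 days."""
    payload = f"{user_id}:{int(now) + 30 * 24 * 3600}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"


def token_valid(token: str, user_id: str, now: float) -> bool:
    """True if the token belongs to this user, is unaltered, and unexpired."""
    try:
        uid, expires, sig = token.rsplit(":", 2)
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{uid}:{expires}".encode(), hashlib.sha256).hexdigest()
    return uid == user_id and hmac.compare_digest(sig, expected) and now < int(expires)


tok = issue_token("alice", now=0)
print(token_valid(tok, "alice", now=3600))            # True
print(token_valid(tok, "alice", now=31 * 24 * 3600))  # False (expired)
```

The security trade-offs raised in the request (stolen device, readable storage) apply regardless of the exact scheme; expiry and server-side revocation limit the blast radius.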
NVIDIA/pix2pixHD | computer-vision | 148 | What is the motivation for using a large kernel size at the input and output? | What is the motivation for using a large kernel size at the input and output?
https://github.com/NVIDIA/pix2pixHD/blob/master/models/networks.py#L190
https://github.com/NVIDIA/pix2pixHD/blob/master/models/networks.py#L207 | open | 2019-09-01T20:12:38Z | 2019-09-01T20:12:38Z | https://github.com/NVIDIA/pix2pixHD/issues/148 | [] | mrgloom | 0 |
huggingface/transformers | tensorflow | 36,041 | CVE-2024-11392 - AWS Scanner and Trivy Flagging Transformers 4.48.1 as Vulnerable | ### System Info
I have updated the transformers package to version 4.48.1, but both my AWS scanner and Trivy are still flagging this version as vulnerable. I have referred to the following GitHub thread, which discusses a similar issue, but unfortunately, I wasn't able to find a resolution:
https://github.com/huggingface/transformers/issues/34840
My company places a strong emphasis on not using vulnerable package versions, and this has become a roadblock in my deployment process. I'm unable to proceed with my deployment due to these security concerns.
Could anyone provide guidance on how this issue can be resolved or suggest any alternative solutions? Your help would be greatly appreciated.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install the transformers package version 4.48.1 by running pip install transformers==4.48.1
2. Run the AWS scanner or Trivy on the environment where the package is installed.
3. Both scanners flag the transformers==4.48.1 version as vulnerable, citing [CVE-2024-11392].
### Expected behavior
The transformers==4.48.1 package should not be flagged as vulnerable by AWS scanner or Trivy. After updating to this version, there should be no security vulnerabilities detected in the package, allowing for smooth deployment without triggering any security alerts from vulnerability scanners.
| open | 2025-02-05T06:28:29Z | 2025-03-20T11:29:28Z | https://github.com/huggingface/transformers/issues/36041 | [
"bug"
] | rajdeinno | 9 |
horovod/horovod | tensorflow | 3,807 | TF 2.11.0 (mixed_float16): 'LossScaleOptimizerV3' object has no attribute | **Environment:**
1. Framework: TensorFlow
2. Framework version:2.11.0
3. Horovod version:0.26.1
4. MPI version:4.1.4
5. CUDA version:11.6
6. NCCL version:2.11.4-1
7. Python version:3.8
8. OS and version: Ubuntu 20.04
**Bug report:**
When I run a training in TensorFlow 2.11.0 with mixed_float16 and Horovod, I get the following error message:
```bash
[1,0]<stderr>:Traceback (most recent call last):
[1,0]<stderr>: File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
[1,0]<stderr>: return _run_code(code, main_globals, None,
[1,0]<stderr>: File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
[1,0]<stderr>: exec(code, run_globals)
[1,0]<stderr>: File "/home/bruno/erx-ai/src/erxai/tf_train/tf_train.py", line 920, in <module>
[1,0]<stderr>: main(sys.argv[1:])
[1,0]<stderr>: File "/home/bruno/erx-ai/src/erxai/tf_train/tf_train.py", line 899, in main
[1,0]<stderr>: tf_train_semantic.run_train()
[1,0]<stderr>: File "/home/bruno/erx-ai/src/erxai/tf_train/tf_train.py", line 625, in run_train
[1,0]<stderr>: self.model.fit(
[1,0]<stderr>: File "/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py", line 70, in error_handler
[1,0]<stderr>: raise e.with_traceback(filtered_tb) from None
[1,0]<stderr>: File "/usr/local/lib/python3.8/dist-packages/horovod/_keras/callbacks.py", line 53, in on_batch_end
[1,0]<stderr>: hvd.broadcast_variables(self.model.optimizer.variables(),
[1,0]<stderr>:AttributeError: 'LossScaleOptimizerV3' object has no attribute '_variables'
```
| closed | 2023-01-05T13:50:55Z | 2023-09-22T18:14:06Z | https://github.com/horovod/horovod/issues/3807 | [
"bug"
] | RicoOscar | 6 |
huggingface/datasets | pandas | 7,243 | ArrayXD with None as leading dim incompatible with DatasetCardData | ### Describe the bug
Creating a dataset with ArrayXD features leads to errors when downloading from hub due to DatasetCardData removing the Nones
@lhoestq
### Steps to reproduce the bug
```python
import numpy as np
from datasets import Array2D, Dataset, Features, load_dataset
def examples_generator():
    for i in range(4):
        yield {
            "array_1d": np.zeros((10, 1), dtype="uint16"),
            "array_2d": np.zeros((10, 1), dtype="uint16"),
        }

features = Features(array_1d=Array2D((None, 1), "uint16"), array_2d=Array2D((None, 1), "uint16"))
dataset = Dataset.from_generator(examples_generator, features=features)
dataset.push_to_hub("alex-hh/test_array_1d2d")

ds = load_dataset("alex-hh/test_array_1d2d")
```
Source of error appears to be DatasetCardData.to_dict invoking DatasetCardData._remove_none
```python
from huggingface_hub import DatasetCardData
from datasets.info import DatasetInfosDict
dataset_card_data = DatasetCardData()
DatasetInfosDict({"default": dataset.info.copy()}).to_dataset_card_data(dataset_card_data)
print(dataset_card_data.to_dict()) # removes Nones in shape
```
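To make the failure mode concrete, here is a pure-Python reconstruction of what a recursive remove-None pass does to a shape like `(None, 1)` (my own re-implementation for illustration, not huggingface_hub's actual code):

```python
def remove_none(value):
    """Recursively drop None entries, the way a card-data cleanup might."""
    if isinstance(value, dict):
        return {k: remove_none(v) for k, v in value.items() if v is not None}
    if isinstance(value, (list, tuple)):
        return type(value)(remove_none(v) for v in value if v is not None)
    return value


feature = {"array_2d": {"shape": (None, 1), "dtype": "uint16"}}
cleaned = remove_none(feature)
print(cleaned)  # {'array_2d': {'shape': (1,), 'dtype': 'uint16'}}
# The variable-length leading dimension is silently lost, so the card no
# longer round-trips to the original Array2D((None, 1), "uint16").
```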
### Expected behavior
Should be possible to load datasets saved with shape None in leading dimension
### Environment info
3.0.2 and latest huggingface_hub | open | 2024-10-21T15:08:13Z | 2024-10-22T14:18:10Z | https://github.com/huggingface/datasets/issues/7243 | [] | alex-hh | 5 |
glumpy/glumpy | numpy | 156 | Mistaken post. | Sorry, I need to investigate the issue further before posting. I can't delete it with my GitHub rights. | closed | 2018-07-07T01:26:10Z | 2018-07-07T01:43:02Z | https://github.com/glumpy/glumpy/issues/156 | [] | jeff5048 | 1 |
litestar-org/litestar | api | 3,391 | Bug:AttributeError: module 'pydantic._migration' has no attribute 'JsonValue' | ### Description
I get this error when following the tutorial. It appears to be caused by pydantic's migration shim for some reason.
Here are my dependencies:
annotated-types==0.6.0
anyio==4.3.0
certifi==2024.2.2
click==8.1.7
EditorConfig==0.12.4
exceptiongroup==1.2.0
Faker==24.9.0
fast-query-parsers==1.0.3
h11==0.14.0
httpcore==1.0.5
httptools==0.6.1
httpx==0.27.0
idna==3.7
Jinja2==3.1.3
jsbeautifier==1.15.1
litestar==2.8.2
markdown-it-py==3.0.0
MarkupSafe==2.1.5
mdurl==0.1.2
msgspec==0.18.6
multidict==6.0.5
polyfactory==2.15.0
pydantic==2.7.0
pydantic_core==2.18.1
Pygments==2.17.2
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
PyYAML==6.0.1
rich==13.7.1
rich-click==1.7.4
six==1.16.0
sniffio==1.3.1
typing_extensions==4.11.0
uvicorn==0.29.0
uvloop==0.19.0
watchfiles==0.21.0
websockets==12.0
### URL to code causing the issue
_No response_
### MCVE
```python
# Your MCVE code here
```
### Steps to reproduce
```bash
1. Enter Hello world tutorial online
2. Use litestar run
```
### Screenshots
```bash
""
```
### Logs
```bash
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.10/bin/uvicorn", line 8, in <module>
sys.exit(main())
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/uvicorn/main.py", line 409, in main
run(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/uvicorn/main.py", line 575, in run
server.run()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/uvicorn/server.py", line 65, in run
return asyncio.run(self.serve(sockets=sockets))
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/uvicorn/server.py", line 69, in serve
await self._serve(sockets)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/uvicorn/server.py", line 76, in _serve
config.load()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/uvicorn/config.py", line 433, in load
self.loaded_app = import_from_string(self.app)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/uvicorn/importer.py", line 19, in import_from_string
module = importlib.import_module(module_str)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/Users/kodecreer/Documents/Python/Litestar/app.py", line 8, in <module>
app = Litestar([hello_world])
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/litestar/app.py", line 361, in __init__
plugins=self._get_default_plugins(list(plugins or [])),
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/litestar/app.py", line 521, in _get_default_plugins
from litestar.contrib.pydantic import (
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/litestar/contrib/pydantic/__init__.py", line 8, in <module>
from .pydantic_dto_factory import PydanticDTO
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/litestar/contrib/pydantic/pydantic_dto_factory.py", line 62, in <module>
pydantic_v2.JsonValue: Any,
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pydantic/__init__.py", line 210, in __getattr__
return _getattr_migration(attr_name)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pydantic/_migration.py", line 299, in wrapper
raise AttributeError(f'module {__name__!r} has no attribute {name!r}')
AttributeError: module 'pydantic._migration' has no attribute 'JsonValue'
```
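One thing I'd check (my suggestion; `pydantic.JsonValue` only exists from pydantic 2.6 onward as far as I can tell, so a clean 2.7.0 should have it): whether the interpreter is really importing the pydantic you think it is. A tiny helper for reasoning about the version boundary:

```python
# Sketch: decide whether an installed pydantic predates JsonValue.
# (Assumes JsonValue appeared in 2.6; pre-release suffixes are ignored
# for simplicity.)

def predates_jsonvalue(version: str) -> bool:
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) < (2, 6)


print(predates_jsonvalue("2.7.0"))  # False - JsonValue should exist
print(predates_jsonvalue("2.5.3"))  # True  - too old
```

At runtime, `python -c "import pydantic; print(pydantic.VERSION, pydantic.__file__)"` should show which copy is actually being imported; the `_migration` fallback in the traceback makes me suspect a stale or shadowed install rather than the pinned 2.7.0.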
### Litestar Version
2.82
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-04-14T03:26:15Z | 2025-03-20T15:54:35Z | https://github.com/litestar-org/litestar/issues/3391 | [
"Bug :bug:"
] | kodecreer | 6 |
seleniumbase/SeleniumBase | web-scraping | 2,327 | selenium.common.exceptions.JavascriptException: Message: javascript error: Failed to set the 'src' property on 'HTMLScriptElement': This document requires 'TrustedScriptURL' assignment | This error occurs when I try to use the code after logging in to google.es.
code:

```python
choice = sb.get_jqc_button_input(message, buttons)
```

error:

```
selenium.common.exceptions.JavascriptException: Message: javascript error: Failed to set the 'src' property on 'HTMLScriptElement': This document requires 'TrustedScriptURL' assignment.
```
Does anyone know how to fix this problem, or is there some way to work around it? | closed | 2023-11-29T19:32:19Z | 2023-11-30T17:08:40Z | https://github.com/seleniumbase/SeleniumBase/issues/2327 | [
"question",
"UC Mode / CDP Mode"
] | Gantaronee | 4 |
yt-dlp/yt-dlp | python | 12,235 | Filesize filter fails despite size information being available | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting a bug unrelated to a specific site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Example video with the problem: https://www.facebook.com/reel/1138662114574595
I've simplified the filters to focus on the main issue: trying to download just a video (whether it has audio or not), that's why the bv* filter.
`yt-dlp -f 'bv*' https://www.facebook.com/reel/1138662114574595`
> Download succeeds
`yt-dlp -f 'bv*[filesize<100M]' https://www.facebook.com/reel/1138662114574595`
> Requested format is not available.
`yt-dlp -F https://www.facebook.com/reel/1138662114574595`
[info] Available formats for 1138662114574595:
ID                EXT RESOLUTION │  FILESIZE    TBR PROTO │ VCODEC              VBR ACODEC     ABR ASR MORE INFO
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────
3002082459929957a m4a audio only │ ~147.76KiB    71k https │ audio only               mp4a.40.5  71k 44k DASH audio, m4a_dash
sd                mp4 unknown    │                   https │ unknown                  unknown
hd                mp4 unknown    │                   https │ unknown                  unknown
1289356715449440v mp4 720x1280   │ ~  1.51MiB   743k https │ av01.0.05M.08       743k video only          DASH video, mp4_dash
1146858630297695v mp4 720x1280   │ ~  2.33MiB  1143k https │ av01.0.05M.08      1143k video only          DASH video, mp4_dash
1368895307601435v mp4 720x1280   │ ~  2.84MiB  1393k https │ av01.0.05M.08      1393k video only          DASH video, mp4_dash
627037733034075v  mp4 720x1280   │ ~  3.71MiB  1819k https │ av01.0.05M.08      1819k video only          DASH video, mp4_dash
3905618026346225v mp4 1080x1920  │ ~  5.36MiB  2629k https │ av01.0.08M.08      2629k video only          DASH video, mp4_dash
I can see a size being reported on the video format 3905618026346225v, so I don't see why a [filesize<100M] filter should get rid of it. I've also tried filesize_approx and still the download fails, citing the format not being available.
I really hope I'm not wasting anyone's time misinterpreting something and apologies in advance if I missed something. But the above two examples seem straightforward. The filter gets rid of the option, despite size information being available in the format list. Probably worth looking into.
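Until the filter itself is fixed, a workaround I'd expect to help, using yt-dlp's documented `/` fallback syntax (whether `filesize_approx` matches these `~`-prefixed estimates is exactly what's in question here):

```shell
# Try the approx-size filter first, then the exact-size filter,
# then fall back to any best video-only format.
yt-dlp -f 'bv*[filesize_approx<100M]/bv*[filesize<100M]/bv*' 'https://www.facebook.com/reel/1138662114574595'
```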
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-f', 'bv*[filesize<100M]', 'https://www.facebook.com/reel/1138662114574595']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.01.28.232803 from yt-dlp/yt-dlp-nightly-builds (linux_exe)
[debug] Python 3.11.11 (CPython x86_64 64bit) - Linux-4.19.0-24-amd64-x86_64-with (OpenSSL 3.1.7 3 Sep 2024)
[debug] exe versions: ffmpeg 4.1.11-0, ffprobe 4.1.11-0
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.7.1, mutagen-1.47.0, requests-2.32.3, secretstorage-3.3.3, sqlite3-3.44.2, urllib3-2.3.0, websockets-14.2
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2025.01.28.232803 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2025.01.28.232803 from yt-dlp/yt-dlp-nightly-builds)
[facebook:reel] Extracting URL: https://www.facebook.com/reel/1138662114574595
[facebook] Extracting URL: https://m.facebook.com/watch/?v=1138662114574595&_rdr
[facebook] 1138662114574595: Downloading webpage
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
ERROR: [facebook] 1138662114574595: Requested format is not available. Use --list-formats for a list of available formats
Traceback (most recent call last):
File "yt_dlp/YoutubeDL.py", line 1637, in wrapper
File "yt_dlp/YoutubeDL.py", line 1793, in __extract_info
File "yt_dlp/YoutubeDL.py", line 1852, in process_ie_result
File "yt_dlp/YoutubeDL.py", line 2986, in process_video_result
yt_dlp.utils.ExtractorError: [facebook] 1138662114574595: Requested format is not available. Use --list-formats for a list of available formats
``` | closed | 2025-01-29T21:04:38Z | 2025-01-29T21:18:02Z | https://github.com/yt-dlp/yt-dlp/issues/12235 | [
"question"
] | martinoshub | 2 |
ymcui/Chinese-BERT-wwm | nlp | 38 | NER-MSRA results cannot be reproduced | Hello, could you please provide more details about NER-MSRA?
The results I am currently reproducing are 0.5-0.8 percentage points lower. | closed | 2019-09-07T02:21:57Z | 2019-10-21T11:01:10Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/38 | [] | shizhediao | 0 |
mljar/mljar-supervised | scikit-learn | 363 | [question] export best technique | Hi, thanks a lot for the easy-to-use repo @pplonski !!
I had a quick question - can we take the best model `MLJar` creates (say, KNN) and get it to output all the exact parameters it used to achieve that performance?
So that we can use those parameters in `scikit-learn` to reproduce the performance/accuracy achieved by MLJar? | closed | 2021-04-01T20:24:38Z | 2021-04-02T11:42:10Z | https://github.com/mljar/mljar-supervised/issues/363 | [] | neel04 | 3 |
serengil/deepface | machine-learning | 1,196 | getting different results from verify and find function. | I'm trying to extract unique faces from a set of folders. I initially saved all the faces in their respective folders, and now I'm trying to update another folder, say unique_faces. For each newly arriving image, the question is whether it is already in the unique_faces folder or not, so I tried the find function and the verify function (a nested loop for verify, of course). The verify function takes time but gives me the desired results; the find function returns results very fast but not accurately - some of the faces are getting missed. I don't have an issue with duplicates, and I tried all models and adjusted the distance threshold. But verify and find should return the same results at the same threshold, right? That isn't working either. Even though the find function computes representations for the newly added images, it is a lot quicker, because maybe only one to three images are added at a time. I have the code too.
My only concern is how I can improve the find method's accuracy while keeping it fast.
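Before the two snippets, here is a minimal, self-contained illustration of the one check I'd expect both paths to agree on. Plain dicts stand in for a verify result and for the candidate rows that find returns (as a list of DataFrames in recent versions, as far as I know); nothing here is DeepFace API.

```python
def has_match(rows, threshold=0.50):
    """True if any candidate row is within the distance threshold."""
    return any(row["distance"] < threshold for row in rows)


verify_result = {"distance": 0.41}  # shape of a DeepFace.verify result
find_rows = [                       # shape of DeepFace.find candidate rows
    {"identity": "db/a.jpg", "distance": 0.62},
    {"identity": "db/b.jpg", "distance": 0.47},
]

print(has_match([verify_result]))  # True
print(has_match(find_rows))        # True, via db/b.jpg at 0.47
```

If this same check over the same distances disagrees between the two code paths, the difference is in the distances themselves (model, detector backend, alignment), not in the threshold logic. Note also that `verified[0]['distance']` in the first snippet selects a whole DataFrame column, so comparing it with `<` inside an `if` tests a Series rather than one row.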
find method's code,
```
def verify_faces_in_database_2(face_folder, database_folder, hashmap):
logging.info('Starting verification in database in folder %s', face_folder)
db_filename = None
face_image_path = None
try:
face_files = os.listdir(face_folder)
for face_filename in face_files:
if face_filename.startswith('.'):
continue # Skip system files like .DS_Store
face_image_path = os.path.join(face_folder, face_filename)
# for db_filename in os.listdir(database_folder):
# if db_filename.startswith('.'):
# continue # Skip system files like .DS_Store
#db_image_path = os.path.join(database_folder, db_filename)
#if os.path.isfile(db_image_path):
# logging.info('face image path: %s', face_image_path)
#print('face image path: %s', face_image_path)
verified = DeepFace.find(img1_path=face_image_path, db_path=database_folder,
model_name='VGG-Face',
enforce_detection=False)
# logging.info('verified: %s', verified)
distance_threshold = 0.5
if verified[0]['distance'] < distance_threshold:
logging.info(f"Match found between {db_filename} and folder name {face_folder} and {face_filename}")
logging.info(verified[0])
break # Stop searching for matches if one is found
# else:
# verified['verified'] = False
# if verified['verified']:
# # print('verified: ', verified)
else:
logging.info('No match found, saving the image in the database with a unique ID')
unique_id = str(uuid.uuid4()) # Generate unique ID
# Save the image in the database with the unique ID
save_to_database_with_unique_id(face_image_path, unique_id, database_folder)
                call_aws_api(face_image_path, hashmap, database_folder)
    except Exception:
        # Except clause added so the snippet parses; the original post was
        # cut off before its error handling.
        logging.exception('face verification failed')
```
verify's code
```
def verify_faces_in_database_2(face_folder, database_folder, hashmap):
logging.info('Starting verification in database in folder %s', face_folder)
db_filename = None
face_image_path = None
try:
face_files = os.listdir(face_folder)
for face_filename in face_files:
if face_filename.startswith('.'):
continue # Skip system files like .DS_Store
face_image_path = os.path.join(face_folder, face_filename)
for db_filename in os.listdir(database_folder):
if db_filename.startswith('.'):
continue # Skip system files like .DS_Store
db_image_path = os.path.join(database_folder, db_filename)
if os.path.isfile(db_image_path):
# logging.info('face image path: %s', face_image_path)
print('face image path: %s', face_image_path)
verified = DeepFace.verify(img1_path=db_image_path, img2_path=face_image_path,
model_name='VGG-Face',
enforce_detection=False)
# logging.info('verified: %s', verified)
distance_threshold = 0.50
if verified['distance'] < distance_threshold:
verified['verified'] = True
else:
verified['verified'] = False
if verified['verified']:
# print('verified: ', verified)
logging.info(
f"Match found between {db_filename} and folder name {face_folder} and {face_filename}")
break # Stop searching for matches if one is found
else:
logging.info('No match found, saving the image in the database with a unique ID')
unique_id = str(uuid.uuid4()) # Generate unique ID
# Save the image in the database with the unique ID
save_to_database_with_unique_id(face_image_path, unique_id, database_folder)
                call_aws_api(face_image_path, hashmap, database_folder)
    except Exception:
        # Except clause added so the snippet parses; the original post was
        # cut off before its error handling.
        logging.exception('face verification failed')
``` | closed | 2024-04-18T13:19:36Z | 2024-04-18T17:19:58Z | https://github.com/serengil/deepface/issues/1196 | [
"question"
] | Raghucharan16 | 16 |
microsoft/JARVIS | deep-learning | 53 | Integrate "Segment Anything" from Meta | Is it possible to integrate the functionality of that project?
https://github.com/facebookresearch/segment-anything | closed | 2023-04-05T21:33:59Z | 2023-04-06T03:37:45Z | https://github.com/microsoft/JARVIS/issues/53 | [] | ekiwi111 | 1 |