| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
PeterL1n/RobustVideoMatting | computer-vision | 195 | Question about hardware-accelerated video encoding/decoding | #159 I have run into the same situation as that issue: when using a V100, model inference runs at only 2-3 frames per second, while speedtest reports 60-70. Is the following code causing this problem?
```python
if output_foreground is not None:
    writer_fgr.write(fgr[0])
if output_alpha is not None:
    writer_pha.write(pha[0])  # pha[0] [12,1,h,w]
if output_composition is not None:
    if output_type == 'video':
        com = fgr * pha + bgr * (1 - pha)
    else:
        fgr = fgr * pha.gt(0)
        com = torch.cat([fgr, pha], dim=-3)
    writer_com.write(com[0])
```
If it is not, where is the code that slows down inference? And if it is, how should I speed up video encoding/decoding? Thanks! | open | 2022-08-11T08:20:26Z | 2023-04-17T03:42:22Z | https://github.com/PeterL1n/RobustVideoMatting/issues/195 | [] | tayton42 | 1 |
graphql-python/gql | graphql | 497 | Add License to the classifier | Hello,
It seems the MIT license is missing from your package classifiers; is it possible to add it?
Thanks | closed | 2024-10-14T10:44:15Z | 2024-10-27T13:39:22Z | https://github.com/graphql-python/gql/issues/497 | [
"type: chore"
] | Elias-SELHAOUI | 0 |
InstaPy/InstaPy | automation | 6,398 | My bot doesn't work help me pls | This is the error; how can I resolve it?
My bot script is correct.
Here is the error:
File "C:\Users\Alessandro\Desktop\bot2.py", line 10, in <module>
session.like_by_tags(["minimal","techno","housemusic","minimalhouse"],amount=5)
File "C:\Users\Alessandro\AppData\Local\Programs\Python\Python310\lib\site-packages\instapy\instapy.py", line 1977, in like_by_tags
inappropriate, user_name, is_video, reason, scope = check_link(
File "C:\Users\Alessandro\AppData\Local\Programs\Python\Python310\lib\site-packages\instapy\like_util.py", line 618, in check_link
media = post_page[0]["shortcode_media"]
KeyError: 0
| open | 2021-11-18T17:14:43Z | 2022-01-23T05:00:28Z | https://github.com/InstaPy/InstaPy/issues/6398 | [] | AlessandroDiPatria | 5 |
ivy-llc/ivy | tensorflow | 28,655 | `__getnewargs__` | closed | 2024-03-21T11:52:10Z | 2024-03-21T12:01:39Z | https://github.com/ivy-llc/ivy/issues/28655 | [] | Ishticode | 0 | |
plotly/dash | dash | 2,552 | provide control item that allows selecting and ordering options | Like sortable lists in Ag Grid:
https://github.com/plotly/dash/assets/101562106/e35dcbbc-e6d3-436e-9bac-67edbabb2e4f
**Code for video app:**
app.py
```python
import dash_ag_grid as dag
from dash import dcc, html, Input, Output, State, Dash, ctx, callback
import pandas as pd
import dash

app = Dash(__name__)
my_options = ['A', 'B', 'C', 'D', 'E', 'F', 'G']


def draggable_options(id, options):
    df = pd.DataFrame(
        {
            'options': options,
            'del': ['X'] * len(options)
        })
    columnDefs = [
        {
            "field": "options",
            "rowDrag": True,
            'cellStyle': {'padding': '0px', 'margin-right': '15px'}
        },
        {
            "field": "del",
            "cellRenderer": "Button",
            'cellStyle': {'padding': '0px', 'margin-right': '5px', 'text-align': 'end'}
        }
    ]
    grid = dag.AgGrid(
        id=id,
        columnDefs=columnDefs,
        rowData=df.to_dict("records"),
        getRowId="params.data.options",
        columnSize="responsiveSizeToFit",
        dashGridOptions={"rowDragManaged": True, "domLayout": "autoHeight", "headerHeight": 0},
        style={"border": "none"},
    )
    return grid


app.layout = html.Div(
    [
        draggable_options(id="grid", options=[]),
        dcc.Dropdown(id='dropdown', options=my_options, multi=True, style={'width': '300px'}),
        html.Div(id="data-after-filter"),
    ],
    style={"margin": 20, "width": "fit-content"},
)


@callback(
    Output("grid", "rowTransaction"),
    Output("dropdown", "value"),
    Output("dropdown", "options"),
    Input("grid", "cellRendererData"),
    Input("dropdown", "value"),
    State("dropdown", "options"),
    prevent_initial_call=True
)
def showChange(click, value, current_options):
    if ctx.triggered_id == 'grid':
        rowTransaction = {"remove": [{'options': click['rowId']}]}
        new_options = current_options + [click['rowId']]
        return rowTransaction, dash.no_update, new_options
    elif ctx.triggered_id == "dropdown":
        rowTransaction = {"add": [{
            "options": value,
            "del": "X"
        }]}
        new_options = [option for option in current_options if option not in value]
        return rowTransaction, None, new_options


if __name__ == "__main__":
    app.run_server(debug=True)
```
assets/dashAgGridComponentFunctions.js
```js
var dagcomponentfuncs = window.dashAgGridComponentFunctions = window.dashAgGridComponentFunctions || {};

dagcomponentfuncs.Button = function (props) {
    const {setData, data} = props;
    function onClick() {
        setData();
    }
    return React.createElement(
        window.dash_html_components.Button,
        {
            onClick: onClick,
            style: {
                border: 'none',
                background: 'none',
            },
        },
        props.value
    );
};
```
| open | 2023-05-31T11:39:09Z | 2024-08-13T19:33:43Z | https://github.com/plotly/dash/issues/2552 | [
"feature",
"P3"
] | celia-lm | 0 |
aleju/imgaug | deep-learning | 460 | Perspective Changes in imgaug -- 3D perspective change | Hi,
I am experimenting with single images and want to apply some perspective changes (like a 3D shift). However, the current documentation does not cover this. Is it possible to do something like the transformation shown in the images below?
ORIGINAL

TRANSFORMED

Cheers,
Andi | open | 2019-10-18T08:53:31Z | 2019-10-19T19:04:35Z | https://github.com/aleju/imgaug/issues/460 | [] | AGrosserHH | 1 |
encode/httpx | asyncio | 3,324 | Improve robustness for HTTP/2 | We've got a handful of open issues around HTTP/2 robustness.
I'm going to collate these into a single issue, to help us keep the issue easily reviewable & improve prioritisation.
- [ ] #2983
- [ ] #3002
- [ ] #3072 | open | 2024-09-27T10:56:03Z | 2025-03-06T13:45:26Z | https://github.com/encode/httpx/issues/3324 | [
"http/2"
] | tomchristie | 5 |
seleniumbase/SeleniumBase | pytest | 2,934 | I would like to use a certificate, but I am not sure how to do so. | # Environment
Python: 3.10.11
SeleniumBase: 4.24.12
# Description
I would like to use a certificate, but I am not sure how to do so.
Below is the sample source code. In this example, I am unable to access "https://geo.brdtest.com/mygeo.json".
```python
import os
import sys
import ssl
from pathlib import Path

import certifi
from seleniumbase import Driver as MySBDriver

ssl_context = ssl.create_default_context(cafile=certifi.where())
ssl_context.load_verify_locations(str(Path(f"{os.path.sep.join(sys.argv[0].split(os.path.sep)[:-1])}/ca.crt")))

driver = MySBDriver(uc=True, proxy="http://brd-customer-hl_e43c514b-zone-residential_proxy3:1234@brd.superproxy.io:22225")
driver.get("https://geo.brdtest.com/mygeo.json")
```
Please let me know if there are any possible remedies or ideas that may help solve this issue. Thank you. | closed | 2024-07-18T03:39:10Z | 2024-07-18T04:56:07Z | https://github.com/seleniumbase/SeleniumBase/issues/2934 | [
"question",
"UC Mode / CDP Mode"
] | ShogoDev0212 | 1 |
httpie/cli | api | 1,433 | Allow flags to be specified after method | ## Checklist
- [x] I've searched for similar feature requests.
---
## Enhancement request
Allow flags to be specified after the method or URL, for example `http post -f example.com ...`
## Problem it solves
It's more human-readable than putting flags after the command. For example, "Make an HTTP post request with a form to example.com", compared to "Make an HTTP request with a form using a post request to example.com" Being able to write it in the order I'd say it out loud (or the order I'm thinking of it in my head) provides a better user experience. Of course, having the flag right after the command could still be accepted. I know you can also have the flag after the URL, but in my opinion this is better, and it shouldn't hurt anything to have another option.
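For what it's worth, this kind of order-insensitivity is already supported by Python's standard argparse, which httpie's CLI is built on as far as I know, via intermixed parsing. A hedged sketch with illustrative argument names, not httpie's actual parser definition:

```python
import argparse

# A toy parser with the same shape as `http METHOD URL`
parser = argparse.ArgumentParser(prog="http")
parser.add_argument("method")
parser.add_argument("url")
parser.add_argument("-f", "--form", action="store_true")

# parse_intermixed_args (Python 3.7+) lets the flag appear after the method
args = parser.parse_intermixed_args(["post", "-f", "example.com"])
print(args.method, args.url, args.form)  # -> post example.com True
```

Regular `parse_args` happens to accept this particular order too, but `parse_intermixed_args` is the documented way to mix positionals and optionals freely.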
---
## Additional information, screenshots, or code examples
N/A | open | 2022-09-01T00:55:26Z | 2022-09-01T01:00:59Z | https://github.com/httpie/cli/issues/1433 | [
"enhancement",
"new"
] | asportnoy | 0 |
piskvorky/gensim | machine-learning | 3,516 | SyntaxError: future feature annotations is not defined | I used Python 3.6 before and it ran successfully, but now it fails to run.
File "/opt/app/.venv/lib/python3.6/site-packages/gensim/utils.py", line 45, in <module>
from smart_open import open
File "/opt/app/.venv/lib/python3.6/site-packages/smart_open/__init__.py", line 34, in <module>
from .smart_open_lib import open, parse_uri, smart_open, register_compressor # noqa: E402
File "/opt/app/.venv/lib/python3.6/site-packages/smart_open/smart_open_lib.py", line 35, in <module>
from smart_open import doctools
File "/opt/app/.venv/lib/python3.6/site-packages/smart_open/doctools.py", line 21, in <module>
from . import transport
File "/opt/app/.venv/lib/python3.6/site-packages/smart_open/transport.py", line 104, in <module>
register_transport("smart_open.s3")
File "/opt/app/.venv/lib/python3.6/site-packages/smart_open/transport.py", line 49, in register_transport
submodule = importlib.import_module(submodule)
File "/opt/app/.venv/lib64/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/opt/app/.venv/lib/python3.6/site-packages/smart_open/s3.py", line 9
from __future__ import annotations
^
SyntaxError: future feature annotations is not defined | closed | 2024-03-14T09:38:27Z | 2024-03-15T04:04:59Z | https://github.com/piskvorky/gensim/issues/3516 | [] | ppdk-data | 4 |
2noise/ChatTTS | python | 84 | 运行报错 CUDNN_STATUS_NOT_SUPPORTED cuda 12.2 | 
| closed | 2024-05-30T06:43:32Z | 2024-05-30T07:45:52Z | https://github.com/2noise/ChatTTS/issues/84 | [] | gaoleihero | 0 |
explosion/spaCy | data-science | 13,248 | Cannot train Arabic models with a custom tokenizer | This issue was initially about a possible bug in the _training pipeline_, related to the _parser_ (see below). But now I believe that posing preliminary questions is more appropriate:
- is it possible to create a completely _custom tokenizer_, which does not define custom rules and a few methods, but just redefines the main `__call__` method?
- in that case, where can I find documentation on how the tokenizer should use the Vocabulary API to feed the vocabulary while tokenizing?
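On the first question: as far as I can tell, yes. The tokenizer slot in spaCy accepts any callable that takes the raw text and returns a `Doc`, so a fully custom tokenizer only needs `__call__`. Building the `Doc` via `Doc(vocab, words=...)` is also what interns new strings into the shared `StringStore`, which touches on the vocabulary question. A minimal whitespace-splitting sketch, following the pattern from spaCy's docs rather than any actual Arabic tokenizer:

```python
import spacy
from spacy.tokens import Doc

class WhitespaceTokenizer:
    """A bare-bones custom tokenizer: only __call__ is required."""

    def __init__(self, vocab):
        self.vocab = vocab  # share nlp.vocab so strings land in one StringStore

    def __call__(self, text):
        words = text.split(" ")
        # Constructing the Doc interns any unseen words into the vocab
        return Doc(self.vocab, words=words)

nlp = spacy.blank("ar")
nlp.tokenizer = WhitespaceTokenizer(nlp.vocab)
doc = nlp("a b c")
print([t.text for t in doc])  # -> ['a', 'b', 'c']
```

Real Arabic text obviously needs more than whitespace splitting; the point is only the interface that the training pipeline expects from a tokenizer.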
### Some context information
In the discussion _Arabic language support_, comment _[I'm willing to prototype a spaCy language model for Arabic (SMA)](https://github.com/explosion/spaCy/discussions/7146#discussioncomment-8094879)_, I reported on the choice of a _training set_ and on the unsatisfactory training results obtained using the native spaCy _tokenizer_. Then, I reported on the integration/adaptation of an alternative tokenizer whose output, according to the printout of the _debug data_ command, shows a better alignment with the tokens in the training set (after a minor modification of the training set itself).
With the [subsequent comment](https://github.com/explosion/spaCy/discussions/7146#discussioncomment-8115239), in the same discussion, I reported on
1. an exception emitted by a parser-related module of the spaCy training software, when executing the _train_ command with the same data and configuration as _debug data_;
2. the very bad results (low overall _score_) obtained with a reduced configuration, excluding the parser.
Below is an excerpt of the _Traceback_ related to the exception (point 1). You can find the full Traceback in the discussion referred to above.
```
(omissis)
⚠ Aborting and saving the final best model. Encountered exception:
KeyError("[E900] Could not run the full pipeline for evaluation. If you
specified frozen components, make sure they were already initialized and
trained. Full pipeline: ['tok2vec', 'tagger', 'morphologizer',
'trainable_lemmatizer', 'parser']")
Traceback (most recent call last):
File "C:\language310\lib\site-packages\spacy\training\loop.py", line 298, in evaluate
scores = nlp.evaluate(dev_corpus(nlp))
File "C:\language310\lib\site-packages\spacy\language.py", line 1459, in evaluate
for eg, doc in zip(examples, docs):
File "C:\language310\lib\site-packages\spacy\language.py", line 1618, in pipe
for doc in docs:
File "C:\language310\lib\site-packages\spacy\util.py", line 1685, in _pipe
yield from proc.pipe(docs, **kwargs)
File "spacy\pipeline\transition_parser.pyx", line 255, in pipe
File "C:\language310\lib\site-packages\spacy\util.py", line 1704, in raise_error
raise e
File "spacy\pipeline\transition_parser.pyx", line 252, in spacy.pipeline.transition_parser.Parser.pipe
File "spacy\pipeline\transition_parser.pyx", line 345, in spacy.pipeline.transition_parser.Parser.set_annotations
File "spacy\pipeline\_parser_internals\nonproj.pyx", line 176, in spacy.pipeline._parser_internals.nonproj.deprojectivize
File "spacy\pipeline\_parser_internals\nonproj.pyx", line 181, in spacy.pipeline._parser_internals.nonproj.deprojectivize
File "spacy\strings.pyx", line 160, in spacy.strings.StringStore.__getitem__
KeyError: "[E018] Can't retrieve string for hash '8206900633647566924'. This usually refers to an issue with the `Vocab` or `StringStore`."
The above exception was the direct cause of the following exception:
(omissis)
```
### My Environment
* Operating System: Windows 11
* Python Version Used: 3.10
* spaCy Version Used: 3.7
| open | 2024-01-18T21:32:32Z | 2024-02-09T22:44:40Z | https://github.com/explosion/spaCy/issues/13248 | [
"lang / ar",
"feat / tokenizer"
] | gtoffoli | 3 |
koaning/scikit-lego | scikit-learn | 22 | enhancement: add GMM setting for `RandomRegressor` | Once https://github.com/koaning/scikit-lego/pull/21 is merged there will be a `RandomRegressor`. This is a dummy model used for benchmarking. | closed | 2019-03-01T21:07:32Z | 2019-10-18T14:06:08Z | https://github.com/koaning/scikit-lego/issues/22 | [
"enhancement"
] | koaning | 1 |
saulpw/visidata | pandas | 1,805 | Add read-only flag | ## The problem
I personally only use VisiData to inspect the contents of SQLite databases used by software on my server, as this is much more secure and requires less maintenance than using a public-facing web interface. These databases may contain logs or user accounts, and I only need to read them; all the writing and updating is handled by the software that created the database. However, I'm not all too familiar with all the shortcuts that VisiData has, and sometimes I press one by accident, and suddenly a table is marked for deletion, at which point I need to make very sure I don't confirm this change, and should immediately quit the application.
## The solution
It would give me significant peace of mind if I could run `vd --read-only ./my_database.db` and then rest assured that no changes are ever written back to disk. An alternative would be to run `sudo -u myspecialuser vd ./my_database.db` and ensure that the user `myspecialuser` has read access but not write access to the file, but unfortunately my databases need a specific user and group assigned to it, and Linux and *BSD do not have fine-grained access control.
## The implementation
I can imagine several ways of implementing a read-only mode for VisiData, and they're all fine with me:
1. The [`nano`](https://en.wikipedia.org/wiki/GNU_nano) way: Similar to when a user runs `nano file.txt` and they have read but not write access to `file.txt`, the editor shows `File file.txt is unwritable` in the bottom, but still allows the user to perform any changes they want inside the editor. However, the editor gives an error when the user tries to save the file.
2. The other `nano` way: Similar to when a user runs `nano -v file.txt` or `nano --view file.txt`, VisiData refuses actions that would edit the database (such as deleting a table, inserting a new row, opening the editor for a record, etc.), showing the message `Key is invalid in view mode`.
3. The cheaty way: VisiData copies the database to `/tmp/visidata-<random>/database.db` and then opens the copy. The user can write changes, but they are stored to the temporary file, which is deleted afterwards, so the changes do not affect the original database. This can potentially be confusing for the user, though, and I'm not sure what the security implications are of copying a database to a directory where other users could potentially see it.
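For what it's worth, option 3 is only a few lines of standard library code. A sketch with placeholder file names (note that `tempfile.mkdtemp` creates the directory readable only by the creating user, which would address the concern about other users seeing the copy):

```python
import os
import shutil
import tempfile

def open_read_only_copy(db_path):
    # mkdtemp() makes the directory accessible only to the creating user,
    # so other (non-root) users cannot inspect the copied database
    tmpdir = tempfile.mkdtemp(prefix="visidata-")
    copy_path = os.path.join(tmpdir, os.path.basename(db_path))
    shutil.copy2(db_path, copy_path)
    return copy_path  # open this path instead of the original
```

Cleanup would still need an atexit hook or a context manager wrapped around the viewer session so the copy is removed on exit.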
| closed | 2023-03-13T10:59:12Z | 2024-09-17T08:13:48Z | https://github.com/saulpw/visidata/issues/1805 | [
"wishlist",
"wish granted"
] | FWDekker | 4 |
cs230-stanford/cs230-code-examples | computer-vision | 13 | forget about masking when compute accuracy | https://github.com/cs230-stanford/cs230-code-examples/blob/159df10a6187c7e1d6ec949c8e06d7f67f8f1cd2/tensorflow/nlp/model/model_fn.py#L68
The computation of accuracy and the accuracy metrics below lack masking. Because of this, the accuracy could be wrong due to predictions on padded tokens. | open | 2018-06-01T21:18:33Z | 2018-06-01T21:18:33Z | https://github.com/cs230-stanford/cs230-code-examples/issues/13 | [] | yinxusen | 0 |
falconry/falcon | api | 1,856 | Can't decode URL when preprocessing the request | Hello,
I am building a Falcon API that receives encoded URLs whose query parameters contain values in the following format: `eq%3foo`. I would like to preprocess these and pass them along as `eq:foo` before the request is routed.
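As a side note, the decoding itself is standard-library territory. A small illustration using the assumed value `eq%3Afoo` (`%3A` is the percent-encoding of the colon; the `eq%3foo` quoted above would actually decode to `eq?oo`, so it may be missing an `A`):

```python
from urllib.parse import unquote

encoded = "eq%3Afoo"  # assumed value: %3A is the encoded ":"
decoded = unquote(encoded)
print(decoded)  # -> eq:foo
```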
I am using a middleware to try to do the following:
```python
class RequestPreProcessing:
    def process_request(self, req, resp):
        req.uri = falcon.uri.decode(req.uri)
```
However, this assignment to `req.uri` fails with the message `can't set attribute`; I guess this attribute is read-only.
Do you know how I could achieve this goal? Am I taking the right approach?
Help would be very appreciated. Thank you in advance for your answers ! | closed | 2021-01-25T16:48:56Z | 2021-01-27T15:34:28Z | https://github.com/falconry/falcon/issues/1856 | [
"question"
] | cyrillay | 4 |
ivy-llc/ivy | tensorflow | 28,838 | [Bug]: `ivy.array(0)[None]` (adding a dimension to a 0-dimensional array) `UnboundLocalError: cannot access local variable 'array_queries' where it is not associated with a value` | ### Bug Explanation
```shell
> python -c "import ivy; ivy.array(0)[..., None]"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "./venv/lib/python3.11/site-packages/ivy/func_wrapper.py", line 643, in _handle_view_indexing
ret = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/ivy/data_classes/array/array.py", line 429, in __getitem__
return ivy.get_item(self._data, query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/ivy/func_wrapper.py", line 912, in _handle_nestable
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/ivy/func_wrapper.py", line 968, in _handle_partial_mixed_function
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/ivy/func_wrapper.py", line 643, in _handle_view_indexing
ret = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/ivy/func_wrapper.py", line 456, in _inputs_to_ivy_arrays
return fn(*ivy_args, **ivy_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/ivy/func_wrapper.py", line 328, in _handle_array_function
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/ivy/func_wrapper.py", line 761, in _handle_device
with ivy.DefaultDevice(ivy.default_device(dst_dev)):
File "./venv/lib/python3.11/site-packages/ivy/functional/ivy/device.py", line 128, in __exit__
raise exc_val
File "./venv/lib/python3.11/site-packages/ivy/func_wrapper.py", line 762, in _handle_device
return ivy.handle_soft_device_variable(*args, fn=fn, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/ivy/functional/ivy/device.py", line 133, in handle_soft_device_variable
return ivy.current_backend().handle_soft_device_variable(*args, fn=fn, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/ivy/functional/backends/numpy/device.py", line 68, in handle_soft_device_variable
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/ivy/functional/ivy/general.py", line 2812, in get_item
query, target_shape, vector_inds = _parse_query(query, x_shape)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "./venv/lib/python3.11/site-packages/ivy/functional/ivy/general.py", line 3001, in _parse_query
indices = array_queries.reshape((*target_shape, len(x_shape)))
^^^^^^^^^^^^^
UnboundLocalError: cannot access local variable 'array_queries' where it is not associated with a value
```
### Steps to Reproduce Bug
```shell
python -c "import ivy; ivy.array(0)[None]"
python -c "import ivy; ivy.array(0)[..., None]"
```
This works:
```shell
python -c "import numpy as np; np.array(0)[None]"
python -c "import numpy as np; np.array(0)[..., None]"
python -c "import ivy; ivy.array([0])[..., None]"
python -c "import ivy; ivy.set_backend('torch'); ivy.array(0)[..., None]"
python -c "import ivy; ivy.set_backend('numpy'); ivy.array(0)[..., None]"
```
### Environment
Ubuntu 22.04
### Ivy Version
0.0.9.16 and 1.0.0.0
### Backend
- [ ] NumPy
- [ ] TensorFlow
- [ ] PyTorch
- [ ] JAX
### Device
CPU | open | 2024-10-26T01:53:16Z | 2024-10-26T10:22:12Z | https://github.com/ivy-llc/ivy/issues/28838 | [
"Bug Report"
] | 34j | 2 |
recommenders-team/recommenders | data-science | 1,237 | [BUG] smoke test creates folders in rootdir that should have been temporary | ### Description
<!--- Describe your issue/bug/request in detail -->
When running test_extract_mind(size, tmp) in tests/smoke/test_mind.py, it creates MINDxxx_train.zip folders in the repository root that are left behind after the tests (xxx is either demo or small).
I would expect the folders to be created in /tmp and to be deleted after the test.
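The expected lifecycle can be sketched with the standard library alone (the folder name is a placeholder; an actual fix would presumably use pytest's `tmp_path` fixture inside the test):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    zip_dir = os.path.join(tmp, "MINDdemo_train.zip")
    os.makedirs(zip_dir)             # stand-in for the extraction target
    created = os.path.isdir(zip_dir)

leftover = os.path.isdir(zip_dir)    # the whole tree is removed with the context
print(created, leftover)  # -> True False
```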
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
All Linux platforms.
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
(reco_base) azureuser@dsvmd16:~/src/recommenders$ pytest tests/smoke -m "smoke and not spark and not gpu" --durations 0 -k "extract_mind"
======================================== test session starts ========================================
platform linux -- Python 3.6.11, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
rootdir: /home/azureuser/src/recommenders, configfile: pytest.ini
plugins: xdist-2.1.0, forked-1.3.0
collected 45 items / 43 deselected / 2 selected
tests/smoke/test_mind.py .. [100%]
========================================= warnings summary ==========================================
../../../../anaconda/envs/reco_base/lib/python3.6/site-packages/ansiwrap/core.py:6
/anaconda/envs/reco_base/lib/python3.6/site-packages/ansiwrap/core.py:6: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
../../../../anaconda/envs/reco_base/lib/python3.6/importlib/_bootstrap.py:219
/anaconda/envs/reco_base/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216, got 192
return f(*args, **kwds)
../../../../anaconda/envs/reco_base/lib/python3.6/importlib/_bootstrap.py:219
/anaconda/envs/reco_base/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
return f(*args, **kwds)
-- Docs: https://docs.pytest.org/en/stable/warnings.html
========================================= slowest durations =========================================
7.39s call tests/smoke/test_mind.py::test_extract_mind[small]
1.29s call tests/smoke/test_mind.py::test_extract_mind[demo]
(4 durations < 0.005s hidden. Use -vv to show these durations.)
=========================== 2 passed, 43 deselected, 3 warnings in 10.95s ===========================
(reco_base) azureuser@dsvmd16:~/src/recommenders$ ls -lt
total 2108
drwxrwxr-x 4 azureuser azureuser 4096 Nov 8 15:55 MINDsmall_train.zip
drwxrwxr-x 4 azureuser azureuser 4096 Nov 8 15:55 MINDdemo_train.zip
-rw-rw-r-- 1 azureuser azureuser 30304 Nov 8 15:25 output.ipynb
-rw-rw-r-- 1 azureuser azureuser 1274 Nov 8 14:49 reco_base.yaml
-rw-rw-r-- 1 azureuser azureuser 284 Nov 8 14:42 pytest.ini
-rw-rw-r-- 1 azureuser azureuser 4517 Nov 8 13:18 AUTHORS.md
-rw-rw-r-- 1 azureuser azureuser 19637 Nov 8 13:18 README.md
-rw-rw-r-- 1 azureuser azureuser 18380 Nov 8 13:18 SETUP.md
drwxrwxr-x 3 azureuser azureuser 4096 Nov 1 22:47 demo
drwxrwxr-x 7 azureuser azureuser 4096 Nov 1 19:10 tests
drwxrwxr-x 4 azureuser azureuser 4096 Oct 31 21:56 lightgbm_criteo.mml
drwxrwxr-x 4 azureuser azureuser 4096 Oct 31 20:06 tools
-rw-rw-r-- 1 azureuser azureuser 1979173 Oct 31 20:02 ml-100k.data
drwxrwxr-x 9 azureuser azureuser 4096 Oct 31 20:01 reco_utils
-rw-rw-r-- 1 azureuser azureuser 1352 Oct 31 19:52 reco_pyspark.yaml
drwxrwxr-x 8 azureuser azureuser 4096 Oct 31 19:49 scenarios
-rw-rw-r-- 1 azureuser azureuser 1556 Oct 31 19:49 setup.py
drwxrwxr-x 12 azureuser azureuser 4096 Oct 31 19:49 examples
drwxrwxr-x 3 azureuser azureuser 4096 Oct 31 19:49 docs
-rw-rw-r-- 1 azureuser azureuser 4583 Oct 31 19:49 CONTRIBUTING.md
-rw-rw-r-- 1 azureuser azureuser 7732 Oct 31 19:49 GLOSSARY.md
-rw-rw-r-- 1 azureuser azureuser 1162 Oct 31 19:49 LICENSE
-rw-rw-r-- 1 azureuser azureuser 4316 Oct 31 19:49 NEWS.md
-rw-rw-r-- 1 azureuser azureuser 2383 Oct 31 19:49 SECURITY.md
drwxrwxr-x 3 azureuser azureuser 4096 Oct 31 19:49 contrib
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
### Other Comments
| closed | 2020-11-08T15:57:28Z | 2021-05-13T13:36:51Z | https://github.com/recommenders-team/recommenders/issues/1237 | [
"bug"
] | wutaomsft | 1 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 42 | Could you share the file paths for these two datasets? Which files from the datasets are needed after downloading them directly? What does your directory layout look like? Could you provide a path tree? | closed | 2018-09-12T03:00:33Z | 2019-04-09T09:57:42Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/42 | [] | yuqinjh | 5 |
QingdaoU/OnlineJudge | django | 121 | Java and Python | Submitting code in both Java and Python results in RE | closed | 2018-01-29T13:23:05Z | 2018-01-30T03:58:24Z | https://github.com/QingdaoU/OnlineJudge/issues/121 | [] | yangdelu855 | 2 |
JoeanAmier/XHS-Downloader | api | 78 | I am on macOS, collecting data via commands, but got an SSLCertVerificationError; the error message is below: | The error message is as follows: Cannot connect to host www.xiaohongshu.com:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed
certificate in certificate chain (_ssl.c:1002)')]
Could you please tell me how to solve this problem?
| open | 2024-04-22T12:35:53Z | 2024-06-20T15:19:24Z | https://github.com/JoeanAmier/XHS-Downloader/issues/78 | [
"功能异常(bug)"
] | wufu-fire | 5 |
manbearwiz/youtube-dl-server | rest-api | 20 | Added videos not downloading automatically anymore | I updated the docker image yesterday and was wondering why no new videos are being downloaded.
In my case, I add videos through flexget via each channel's RSS feed and send them to the docker image through the curl command you provided (curl -X POST --data-urlencode "url={{url}}" http://{{address}}:8080/youtube-dl/q).
But since the update to the latest version yesterday they just sit there in the download queue and don't get resolved. The docker logs command shows the following for an example video.
> Added url https://www.youtube.com/watch?v=ds0cmAV-Yek to the download queue
> 192.168.170.7 - - [10/Dec/2018 21:22:14] "POST /youtube-dl/q HTTP/1.1" 200 100
and now it sits in the download queue /youtube-dl/q
> [\"https://www.youtube.com/watch?v=ds0cmAV-Yek\", {\"format\": null}]
Maybe the `"format": null` is the reason for this? | closed | 2018-12-11T07:16:56Z | 2018-12-11T09:11:51Z | https://github.com/manbearwiz/youtube-dl-server/issues/20 | [] | Fribb | 2 |
deepinsight/insightface | pytorch | 1,916 | Glint360k original data failed decompression | I downloaded the Glint360k dataset via Baidu cloud, using the link given in the README file. After the download finished, I tried to decompress the files using the `cat glint360k_* | tar -xzvf -` command.
The decompression failed and reported the error below:
```
cat glint360k_* | tar -xzvf -
glint360k/
glint360k/lfw.bin
glint360k/cfp_fp.bin
glint360k/cfp_ff.bin
glint360k/train.rec
gzip: stdin: invalid compressed data--format violated
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
```
I can assure you that I haven't done anything to corrupt the files. Please tell me what's going on; anything helpful will be appreciated.
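The `invalid compressed data` message usually means one of the downloaded parts is truncated or corrupted. Before re-downloading everything, the joined stream can be integrity-checked with `gzip -t`. A self-contained sketch that synthesizes a tiny split archive; with the real data you would run the same `cat` and `gzip -t` over the actual `glint360k_*` parts:

```shell
tmp=$(mktemp -d)
cd "$tmp"

# Build a small tar.gz and split it to simulate the multi-part download
echo "sample" > file.txt
tar -czf whole.tar.gz file.txt
split -b 64 whole.tar.gz glint360k_

# Rejoin the parts and test gzip integrity without extracting anything
cat glint360k_* > joined.tar.gz
gzip -t joined.tar.gz && echo "archive OK"
```

If `gzip -t` fails on the real parts, comparing each part's size against the source (or its md5, if the host publishes checksums) pinpoints which file to fetch again.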
| open | 2022-02-17T09:29:46Z | 2024-11-14T10:38:28Z | https://github.com/deepinsight/insightface/issues/1916 | [] | lujx1024 | 2 |
ml-tooling/opyrator | streamlit | 63 | More dynamic input modules for audio and video like webcam and 🎤 | <!--
Thanks for requesting a feature 🙌 ❤️
Before opening a new issue, please make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead. Also, be sure to check our documentation first.
-->
**Feature description:**
<!---
Provide a detailed description of the feature or improvement you are proposing. What specific solution would you like? What is the expected behaviour?
Add any other context, screenshots, or code snippets about the feature request here as well.
-->
**Problem and motivation:**
<!---
Why is this change important to you? What is the problem this feature would solve? How would you use it? How can it benefit other users?
-->
**Is this something you're interested in working on?**
<!--- Yes or No -->
| closed | 2022-10-11T08:23:45Z | 2023-01-25T02:28:27Z | https://github.com/ml-tooling/opyrator/issues/63 | [
"feature",
"stale"
] | aadityaverma | 1 |
pytorch/vision | computer-vision | 8,348 | Extracting custom EXIF data from PNG files | ### 🚀 The feature
Being able to use [torchvision.io.read_image(...)](https://pytorch.org/vision/main/generated/torchvision.io.read_image.html) to read PNG files alongside the custom metadata stored in the _exchangeable image file format_ (EXIF).
### Motivation, pitch
I am training different deep learning models and some of them rely on custom metadata stored in EXIF inside PNG files.
Currently I have to do
```python
from PIL import Image
with Image.open(file_path) as image:
image.load()
if image.format == "TIFF":
metadata = image.getexif()[37510] # 0x9286
else: # png
metadata = image.text
```
but would much rather just ditch [PIL.Image](https://pillow.readthedocs.io/en/stable/reference/Image.html#image-module) and do
```python
from torchvision.io import read_image
image, metadata = read_image(file_path, read_metadata=True)
```
### Alternatives
_No response_
### Additional context
_No response_
### Edit
added case distinction between TIFF and PNG files | open | 2024-03-21T07:50:05Z | 2024-03-21T11:37:18Z | https://github.com/pytorch/vision/issues/8348 | [] | frank690 | 2 |
roboflow/supervision | tensorflow | 956 | set_classes bugs when I use it in the yolo world model | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
If I use set_classes to set more than 7 classes, the model no longer works on an M1/M2 MacBook after I convert it to CoreML.
### Environment
python: 3.11.4
coremltools: 7.0
ultralytics: 8.1.19
### Minimal Reproducible Example
```python
def yolo_world_export():
    with torch.no_grad():
        # Initialize a YOLO model with pretrained weights
        model = YOLO(
            "yolov8s-world.pt"
        )  # You can also choose yolov8m/l-world.pt based on your needs

        # Define custom classes specific to your application
        custom_classes = [
            "girl",
            "ball",
            "flower",
            "vase",
            "lavander",
            "boy",
            "car",
        ]
        model.set_classes(custom_classes)

        # Save the model with the custom classes defined (modified code)
        model.save(
            "custom_yolov8s.pt"
        )  # This saves extra metadata required for CoreML conversion

        # Load the saved model with custom classes
        model = YOLO("custom_yolov8s.pt")

        # Export the model to CoreML format with non-maximum suppression enabled
        model.export(format="coreml", nms=True)
```
### Additional
```python
import torch
from ultralytics import YOLO

def yolo_world_export():
    with torch.no_grad():
        # Initialize a YOLO model with pretrained weights
        model = YOLO(
            "yolov8s-world.pt"
        )  # You can also choose yolov8m/l-world.pt based on your needs

        # Define custom classes specific to your application
        custom_classes = [
            "girl",
            "ball",
            "flower",
            "vase",
            "lavander",
            "boy",
            "car",
        ]
        model.set_classes(custom_classes)

        # Save the model with the custom classes defined (modified code)
        model.save(
            "custom_yolov8s.pt"
        )  # This saves extra metadata required for CoreML conversion

        # Load the saved model with custom classes
        model = YOLO("custom_yolov8s.pt")
        # Export the model to CoreML format with non-maximum suppression enabled
        model.export(format="coreml", nms=True)
```
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-02-28T15:57:47Z | 2024-02-28T16:15:25Z | https://github.com/roboflow/supervision/issues/956 | [
"bug"
] | qihuijia | 1 |
strawberry-graphql/strawberry-django | graphql | 211 | Automatic resolver for single types leaves `pk` argument optional | <!-- Provide a general summary of the bug in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
Given the following code:
```py
@strawberry.type
class Query:
    foo: Foo = strawberry.django.field()
```
Will generate the following schema:
```gql
foo(pk: ID): Foo!
```
When it should probably generate:
```gql
foo(pk: ID!): Foo!
```
If you accidentally forget to pass the `pk` argument, you get the somewhat misleading error `get() returned more than one Foo -- it returned xyz!` instead of a validation error.
## System Information
- Operating system: macOS/Linux
- Strawberry version (if applicable): ==0.133.5
## Additional Context
<!-- Add any other relevant information about the problem here. --> | closed | 2022-11-09T20:40:51Z | 2025-03-20T15:57:09Z | https://github.com/strawberry-graphql/strawberry-django/issues/211 | [
"bug"
] | fireteam99 | 6 |
littlecodersh/ItChat | api | 62 | Documentation of the reconnect feature after a short shutdown | For the first usage suggested in the documentation of the reconnect feature after a short shutdown, it seems it should be changed as below, so that the main loop is entered regardless of whether a login archive exists. I am not sure whether my understanding is correct.
``` python
import itchat

if not itchat.load_login_status():
    itchat.auto_login()
    itchat.dump_login_status()
    print('Config stored')

@itchat.msg_register('Text')
def simple_reply(msg):
    print(msg['Text'])

itchat.run()
itchat.dump_login_status()
```
| closed | 2016-08-09T09:43:29Z | 2016-08-16T07:19:17Z | https://github.com/littlecodersh/ItChat/issues/62 | [
"question"
] | brothertian | 1 |
netbox-community/netbox | django | 18,920 | Add user object type to event rules | ### NetBox version
v4.2.5
### Feature type
Data model extension
### Proposed functionality
It would be great to be able to trigger e.g. scripts based on events in the user object.
### Use case
This is needed, for example, to automatically run a script that configures newly created users, e.g. to assign them specific default groups.
### Database changes
_No response_
### External dependencies
_No response_ | open | 2025-03-17T10:48:33Z | 2025-03-20T17:34:00Z | https://github.com/netbox-community/netbox/issues/18920 | [
"type: feature",
"needs milestone",
"status: backlog",
"complexity: medium"
] | PaulR282 | 0 |
miguelgrinberg/flasky | flask | 133 | Selenium Tests fail with multiple tests per fixture | I've been getting an error when I try to use more than one test under the SeleniumTestCase fixture. I believe it is related to the client/driver not cleaning up properly, or perhaps it is related to the running server in the alternate thread (though it still does receive the request to /shutdown).
Simple steps to reproduce:
1) get a fresh clone of the repo
2) copy the test_admin_home_page() function in test_selenium.py, paste it below the original twice, once as test_admin_home_page_duplicate() and again as test_admin_home_page_triplicate(), as though those were two more tests under the SeleniumTestCase fixture
3) run manage.py test. The original test, test_admin_home_page, succeeds, but both _duplicate and _triplicate fail:
```
======================================================================
FAIL: test_admin_home_page_duplicate (test_selenium.SeleniumTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/me/flasky/tests/test_selenium.py", line 99, in test_admin_home_page_duplicate
    self.client.page_source))
AssertionError: None is not true

======================================================================
FAIL: test_admin_home_page_triplicate (test_selenium.SeleniumTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/me/flasky/tests/test_selenium.py", line 120, in test_admin_home_page_triplicate
    self.client.page_source))
AssertionError: None is not true

----------------------------------------------------------------------
Ran 34 tests in 15.892s

FAILED (failures=2)
```
This is the cleanest way to reproduce this I see, though for what it's worth I'm seeing several more related issues in the rest of my test suite related to --I think-- Selenium tests not cleaning up properly. Any thoughts here? I've tried moving the client (webdriver.Firefox()) out of class-level setup and teardown and into the usual setUp and tearDown, I've tried creating both a class-level client and a test-level client, closing out both at their respective levels of teardown... nothing seems to work yet in that whenever I have multiple Selenium tests (or rerun the same ones with sniffer (https://github.com/jeffh/sniffer/)) the tests seem to get polluted and fail, even though a clean run once through with only the one Selenium test present in the clean repo passes. Is the Selenium test here leaving something behind that's messing things up? Or am I misunderstanding how to use Selenium tests, and perhaps they're different from the usual unittest tests in handling of multiple-tests-per-fixture situations?
Thanks,
Dan
| closed | 2016-04-26T01:53:54Z | 2017-03-17T18:59:53Z | https://github.com/miguelgrinberg/flasky/issues/133 | [] | danwiesenthal | 3 |
tiangolo/uwsgi-nginx-flask-docker | flask | 88 | How to configure nginx config? | Hi, I have a docker container running with:
```dockerfile
FROM tiangolo/uwsgi-nginx-flask:python3.6
COPY requirements.txt /tmp/
RUN pip install -U pip
RUN pip install -r /tmp/requirements.txt
COPY ./app /app
```
When running (it runs flawless except for this), there is a particular action that causes the server to crash with the following message:
```bash
2018/10/02 13:06:43 [error] 10#10: *9 upstream sent too big header while reading response header from upstream
```
I'm aware that the problem is with the nginx config file, located at `/etc/nginx/conf.d/nginx.conf`. More specifically, I have to add the settings from the answer [here](https://stackoverflow.com/questions/23844761/upstream-sent-too-big-header-while-reading-response-header-from-upstream) to that file, because the header sent by the upstream is too large.
I have tried to modify that file while building the docker image, but haven't figured out exactly how to do it.
Here is my project tree:
```bash
├── Dockerfile
├── app
│ ├── config.py
│ ├── data
│ │ └── users.json
│ ├── forms.py
│ ├── main.py
│ ├── static
│ │ ├── ajax-loader.gif
│ │ └── logo.png
│ ├── templates
│ │ ├── 404.html
│ │ ├── 500.html
│ │ ├── base.html
│ │ └── upload.html
│ └── utils.py
└── requirements.txt
```
How exactly should I proceed?
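One low-ceremony option, sticking to the Dockerfile above, is to drop an extra config fragment into `/etc/nginx/conf.d/` at build time, since nginx includes every `*.conf` file there inside the `http` block. This is a sketch only: the directive names and sizes come from the linked answer and may need tuning (for a `uwsgi_pass` upstream, the `uwsgi_buffer*` variants may be the ones that matter), and if the image's entrypoint regenerates its config at startup, the fragment would need to be written from a custom entrypoint instead.

```dockerfile
FROM tiangolo/uwsgi-nginx-flask:python3.6

COPY requirements.txt /tmp/
RUN pip install -U pip && pip install -r /tmp/requirements.txt

# Extra nginx config picked up alongside the image's generated nginx.conf.
# Values are illustrative; raise them until the "too big header" error stops.
RUN printf '%s\n' \
    'proxy_buffer_size          128k;' \
    'proxy_buffers              4 256k;' \
    'proxy_busy_buffers_size    256k;' \
    'uwsgi_buffer_size          128k;' \
    'uwsgi_buffers              4 256k;' \
    'uwsgi_busy_buffers_size    256k;' \
    > /etc/nginx/conf.d/buffers.conf

COPY ./app /app
```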
Sorry for the question, appreciate any help and this awesome image!
| closed | 2018-10-02T13:26:22Z | 2018-11-23T13:09:15Z | https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/88 | [] | duarteocarmo | 16 |
amidaware/tacticalrmm | django | 1,939 | Feature Request - Store Hostname As Caps or Lower Case Option | **Is your feature request related to a problem? Please describe.**
When sorting reports using "order by" on hostname, case sensitivity seems to matter. For example, Apple, Bat, Cat would be in order, but another value such as apple would sort after Cat.
**Describe the solution you'd like**
A Global Option, for the Item Hostname Value to store as All Caps, or All Lowercase.
**Describe alternatives you've considered**
Create a custom field, run a PowerShell script to get the hostname and convert it to all caps, then reference the custom field instead of item.hostname.
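The behaviour described is plain case-sensitive (byte-wise) ordering: every uppercase letter sorts before any lowercase one. Besides normalising the stored value, the report query itself could sort case-insensitively. The Python equivalent of the two orderings (illustrative only; TRMM's reporting engine may expose this differently):

```python
hostnames = ["Apple", "Bat", "Cat", "apple"]

# Case-sensitive sort: uppercase letters order before any lowercase letter.
print(sorted(hostnames))                    # ['Apple', 'Bat', 'Cat', 'apple']

# Case-insensitive sort via a casefolded key: 'apple' lands next to 'Apple'.
print(sorted(hostnames, key=str.casefold))  # ['Apple', 'apple', 'Bat', 'Cat']
```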
| closed | 2024-07-26T20:07:51Z | 2024-07-26T20:31:15Z | https://github.com/amidaware/tacticalrmm/issues/1939 | [] | davesmith87 | 1 |
FlareSolverr/FlareSolverr | api | 355 | [yggtorrent] Exception (yggtorrent): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Cloudflare Error: Cloudflare has blocked this request. Probably your IP is banned for this site, check in your web browser.: FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Cloudflare Error: Cloudflare has blocked this request. Probably your IP is banned for this site, check in your web browser. (Test) | **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**: 2.2.2
* **Last working FlareSolverr version**: 2.2.2
* **Operating system**: Linux
* **Are you using Docker**: yes
* **FlareSolverr User-Agent (see log traces or / endpoint)**: Mozilla/5.0 (X11; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0"
* **Are you using a proxy or VPN?** no
* **Are you using Captcha Solver:** no
### Description
Same as the issue https://github.com/FlareSolverr/FlareSolverr/issues/330 before updating to 2.2.2.
### Logged Error Messages
An error occurred while testing this indexer
Exception (yggtorrent): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Cloudflare Error: Cloudflare has blocked this request. Probably your IP is banned for this site, check in your web browser.: FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Cloudflare Error: Cloudflare has blocked this request. Probably your IP is banned for this site, check in your web browser. | closed | 2022-04-06T18:33:02Z | 2022-04-17T07:59:26Z | https://github.com/FlareSolverr/FlareSolverr/issues/355 | [
"help wanted",
"confirmed"
] | EienFr | 28 |
huggingface/datasets | pytorch | 6,912 | Add MedImg for streaming | ### Feature request
Host the MedImg dataset (similar to ImageNet, but for biomedical images).
### Motivation
There is a clear need for biomedical image foundation models and large scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community.
### Your contribution
MedImg can be found [here](https://www.cuilab.cn/medimg/#). | open | 2024-05-22T00:55:30Z | 2024-09-05T16:53:54Z | https://github.com/huggingface/datasets/issues/6912 | [
"dataset request"
] | lhallee | 8 |
dask/dask | numpy | 11,614 | Incorrect indexing with boolean arrays | <!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**:
For certain chunkings of dask arrays and certain boolean arrays, indexing these dask arrays by boolean arrays gives incorrects results. This appears to have been introduced in dask 2024.9.
**Minimal Complete Verifiable Example**:
```python
>>> import dask.array
>>> import numpy as np
>>> x = np.arange(1980)
>>> dx = dask.array.from_array(x, chunks=[248])
>>> ib = np.zeros(1980, dtype=bool)
>>> ib[1560:1860] = True
>>> dx[ib].compute()
[1560 1816 1817 1561 1562 1818 1563 1819 1564 1820 1821 1565 1822 1566
1823 1567 1824 1568 1825 1569 1826 1570 1827 1571 1572 1828 1573 1829
1574 1830 1831 1575 1576 1832 1577 1833 1578 1834 1835 1579 1580 1836
1581 1837 1838 1582 1583 1839 1584 1840 1585 1841 1842 1586 1587 1843
1588 1844 1845 1589 1846 1590 1847 1591 1848 1592 1849 1593 1850 1594
1595 1851 1852 1596 1597 1853 1854 1598 1855 1599 1600 1856 1857 1601
1602 1858 1859 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613
1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627
1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641
1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655
1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669
1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683
1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 1697
1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711
1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725
1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739
1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753
1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767
1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781
1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795
1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809
1810 1811 1812 1813 1814 1815]
>>> x[ib]
[1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573
1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587
1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601
1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615
1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629
1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643
1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657
1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671
1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685
1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 1697 1698 1699
1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713
1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727
1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741
1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755
1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769
1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783
1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795 1796 1797
1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 1810 1811
1812 1813 1814 1815 1816 1817 1818 1819 1820 1821 1822 1823 1824 1825
1826 1827 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839
1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 1851 1852 1853
1854 1855 1856 1857 1858 1859]
```
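A compact self-check of the same comparison, with the dask import guarded so the sketch also runs where dask is absent (on affected versions, `in_order` comes back `False`):

```python
import numpy as np

x = np.arange(1980)
ib = np.zeros(1980, dtype=bool)
ib[1560:1860] = True          # 300 consecutive True values
expected = x[ib]              # 1560 .. 1859, in order

try:
    import dask.array as da

    got = da.from_array(x, chunks=248)[ib].compute()
    in_order = bool(np.array_equal(got, expected))
    # Even on affected versions the *set* of elements is right; only the
    # order is scrambled across chunk boundaries.
    same_elements = bool(np.array_equal(np.sort(got), expected))
except ImportError:
    in_order = same_elements = None  # dask not installed here
```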
**Anything else we need to know?**:
**Environment**:
- Dask version: 2024.9 onwards
- Numpy version: 1.26.4
- Python version: 3.11
- Operating System: Linux
- Install method (conda, pip, source): conda
| closed | 2024-12-19T12:47:32Z | 2024-12-20T10:46:42Z | https://github.com/dask/dask/issues/11614 | [
"needs triage"
] | stephenworsley | 2 |
DistrictDataLabs/yellowbrick | scikit-learn | 1,188 | Documentation Exceptions | **Describe the bug**
There are two `.. plot::` directive bugs that are causing images to not be built in the documentation, their tracebacks are below.
**To Reproduce**
TODO: fill in this section with the code from the plot directives.
**Dataset**
See plot directives for internal YB datasets used.
**Expected behavior**
The docs should build without exception and all plot directive images should be shown.
**Traceback**
Exception 1:
```
/github/workspace/docs/api/model_selection/learning_curve.rst:99: WARNING: Exception occurred in plotting learning_curve-3
from /github/workspace/docs/api/model_selection/learning_curve.rst:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/matplotlib/sphinxext/plot_directive.py", line 483, in run_code
exec(code, ns)
File "<string>", line 13, in <module>
File "/github/workspace/yellowbrick/model_selection/learning_curve.py", line 249, in fit
curve = sk_learning_curve(self.estimator, X, y, **sklc_kwargs)
File "/usr/local/lib/python3.8/site-packages/sklearn/utils/validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/sklearn/model_selection/_validation.py", line 1412, in learning_curve
results = parallel(delayed(_fit_and_score)(
File "/usr/local/lib/python3.8/site-packages/joblib/parallel.py", line 966, in __call__
n_jobs = self._initialize_backend()
File "/usr/local/lib/python3.8/site-packages/joblib/parallel.py", line 733, in _initialize_backend
n_jobs = self._backend.configure(n_jobs=self.n_jobs, parallel=self,
File "/usr/local/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 489, in configure
n_jobs = self.effective_n_jobs(n_jobs)
File "/usr/local/lib/python3.8/site-packages/joblib/_parallel_backends.py", line 525, in effective_n_jobs
elif n_jobs < 0:
TypeError: '<' not supported between instances of 'str' and 'int'
```
Exception 2:
```
/github/workspace/docs/api/text/umap_vis.rst:141: WARNING: Exception occurred in plotting umap_vis-1
from /github/workspace/docs/api/text/umap_vis.rst:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/matplotlib/sphinxext/plot_directive.py", line 483, in run_code
exec(code, ns)
File "<string>", line 17, in <module>
File "/github/workspace/yellowbrick/text/umap_vis.py", line 116, in umap
visualizer = UMAPVisualizer(
File "/github/workspace/yellowbrick/text/umap_vis.py", line 210, in __init__
raise YellowbrickValueError(
yellowbrick.exceptions.YellowbrickValueError: umap package doesn't seem to be installed.Please install UMAP via: pip install umap-learn
```
**Desktop (please complete the following information):**
- OS: ubuntu-latest
- Python Version 3.9
- Yellowbrick Version: branch develop
| closed | 2021-07-12T16:23:53Z | 2022-02-26T22:22:33Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1188 | [
"type: bug",
"type: documentation"
] | bbengfort | 1 |
widgetti/solara | flask | 406 | Discord link broken | The discord link is invalid. | closed | 2023-11-30T19:22:57Z | 2023-11-30T19:51:33Z | https://github.com/widgetti/solara/issues/406 | [] | langestefan | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,878 | solved | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
WebUI repeatedly fails to install torch across many restarts.
### Steps to reproduce the problem
1. Start Web UI for first time
2. Wait
### What should have happened?
WebUI should install torch
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
Can't generate; another bug pops up (specs are compatible).
### Console logs
```Shell
Creating venv in directory V:\stable-diffusion-webui\venv using python "C:\Users\USER\AppData\Local\Programs\Python\Python313\python.exe"
Requirement already satisfied: pip in v:\stable-diffusion-webui\venv\lib\site-packages (24.3.1)
Collecting pip
Downloading pip-25.0.1-py3-none-any.whl.metadata (3.7 kB)
Downloading pip-25.0.1-py3-none-any.whl (1.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 3.1 MB/s eta 0:00:00
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 24.3.1
Uninstalling pip-24.3.1:
Successfully uninstalled pip-24.3.1
Successfully installed pip-25.0.1
venv "V:\stable-diffusion-webui\venv\Scripts\Python.exe"
=============================================================================================================================
INCOMPATIBLE PYTHON VERSION
This program is tested with 3.10.6 Python, but you have 3.13.2.
If you encounter an error with "RuntimeError: Couldn't install torch." message,
or any other error regarding unsuccessful package (library) installation,
please downgrade (or upgrade) to the latest version of 3.10 Python
and delete current Python and "venv" folder in WebUI's directory.
You can download 3.10 Python from here: https://www.python.org/downloads/release/python-3106/
Alternatively, use a binary release of WebUI: https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre
Use --skip-python-version-check to suppress this warning.
=============================================================================================================================
Python 3.13.2 (tags/v3.13.2:4f8bb39, Feb 4 2025, 15:23:48) [MSC v.1942 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
ERROR: Could not find a version that satisfies the requirement torch==2.1.2 (from versions: 2.6.0)
ERROR: No matching distribution found for torch==2.1.2
Traceback (most recent call last):
File "V:\stable-diffusion-webui\launch.py", line 48, in <module>
main()
~~~~^^
File "V:\stable-diffusion-webui\launch.py", line 39, in main
prepare_environment()
~~~~~~~~~~~~~~~~~~~^^
File "V:\stable-diffusion-webui\modules\launch_utils.py", line 381, in prepare_environment
run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch", live=True)
~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "V:\stable-diffusion-webui\modules\launch_utils.py", line 116, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install torch.
Command: "V:\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install torch==2.1.2 torchvision==0.16.2 --extra-index-url https://download.pytorch.org/whl/cu121
Error code: 1
```
### Additional information
First launch; renamed webui-user to Stable Diffusion, added the line `git pull`, added the argument `--autolaunch`. | closed | 2025-03-03T15:38:47Z | 2025-03-14T16:30:52Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16878 | [
"bug-report"
] | Sensanko52123 | 2 |
bregman-arie/devops-exercises | python | 173 | Great Repo - Thank you. | This should be a required class for all CS majors.
Thank you for providing it; I've sent the link to my son.
| closed | 2021-11-10T18:04:12Z | 2021-11-11T12:34:48Z | https://github.com/bregman-arie/devops-exercises/issues/173 | [] | jfmatth | 1 |
statsmodels/statsmodels | data-science | 8,710 | Multivariate Granger causality | According to the documentation:
`The data for testing whether the time series in the second column Granger causes the time series in the first column.`
So it seems it should be possible to compute the GC using more time series. However, the following code:
```
import numpy as np
import statsmodels.api as sm
# Generate example data
nobs = 100
X1 = np.random.randn(nobs)
X2 = np.random.randn(nobs)
X3 = np.random.randn(nobs)
X4 = np.random.randn(nobs)
X5 = np.random.randn(nobs)
# Create numpy array with shape (nobs, 5)
data = np.column_stack((X1, X2, X3, X4, X5))
# Compute Granger causality tests for all pairs of variables and lags up to 2
maxlag = 2
results = sm.tsa.stattools.grangercausalitytests(data, maxlag, verbose=False)
# Print the p-values for the Granger causality tests
for lag in range(1, maxlag+1):
    print(f"Lag {lag}:")
    for i in range(data.shape[1]):
        for j in range(data.shape[1]):
            if i != j:
                print(f"X{i+1} -> X{j+1}: p-value = {results[lag][i+1][j]['ssr_ftest'][1]}")
```
raises the following error:
`ValueError: wrong shape for coefs`
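`grangercausalitytests` expects an array with exactly two columns (the candidate cause in the second column), which is why the five-column call above fails. A pairwise workaround sketch, with the statsmodels call guarded so the slicing logic runs anywhere; whether pairwise bivariate tests are an acceptable substitute for true multivariate GC is a modelling question not settled here:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((100, 5))
maxlag = 2

# Two-column slices: column 0 = series being caused, column 1 = candidate cause.
pairs = {(j, i): data[:, [j, i]]
         for i in range(data.shape[1])
         for j in range(data.shape[1]) if i != j}

try:
    from statsmodels.tsa.stattools import grangercausalitytests

    pvalues = {
        (j, i): grangercausalitytests(arr, maxlag, verbose=False)[maxlag][0]["ssr_ftest"][1]
        for (j, i), arr in pairs.items()
    }
except Exception:
    pvalues = None  # statsmodels unavailable here (or its API changed)
```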
| open | 2023-03-02T10:50:31Z | 2023-03-02T10:50:31Z | https://github.com/statsmodels/statsmodels/issues/8710 | [] | MatteoSerafino | 0 |
neuml/txtai | nlp | 785 | Add a microphone pipeline | Add a pipeline that supports reading audio from input devices such as microphones. This pipeline should have voice activity detection built in. | closed | 2024-09-20T16:29:46Z | 2024-09-20T17:36:07Z | https://github.com/neuml/txtai/issues/785 | [] | davidmezzetti | 0 |
huggingface/datasets | deep-learning | 6,838 | Remove token arg from CLI examples | As suggested by @Wauplin, see: https://github.com/huggingface/datasets/pull/6831#discussion_r1579492603
> I would not advertise the --token arg in the example as this shouldn't be the recommended way (best to login with env variable or huggingface-cli login) | closed | 2024-04-25T14:00:38Z | 2024-04-26T16:57:41Z | https://github.com/huggingface/datasets/issues/6838 | [] | albertvillanova | 0 |
OWASP/Nettacker | automation | 387 | Installation error on Kali | Hi, I am facing issues while installing Nettacker; can someone please help me fix them? When I run `python setup.py install`, the following errors are received.
Downloading https://files.pythonhosted.org/packages/60/e2/9c9b456c0ddaa1268f320bc2f739f1598290f5cf3acf8d65e2c36fde8afe/click-8.0.0a1.tar.gz#sha256=e4315a188403c0258bbc4a4e31863e48fc301c4e95b8007a8eeda0391158df13
Best match: click 8.0.0a1
Processing click-8.0.0a1.tar.gz
Writing /tmp/easy_install-gwlzMm/click-8.0.0a1/setup.cfg
Running click-8.0.0a1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-gwlzMm/click-8.0.0a1/egg-dist-tmp-K7Db2w
Traceback (most recent call last):
File "setup.py", line 82, in <module>
else "scripts/nettacker", "nettacker.py"] # script files for windows and other OS
File "/usr/lib/python2.7/dist-packages/setuptools/__init__.py", line 162, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib/python2.7/dist-packages/setuptools/command/install.py", line 67, in run
self.do_egg_install()
File "/usr/lib/python2.7/dist-packages/setuptools/command/install.py", line 117, in do_egg_install
cmd.run(show_deprecation=False)
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 449, in run
self.easy_install(spec, not self.no_deps)
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 691, in easy_install
return self.install_item(None, spec, tmpdir, deps, True)
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 738, in install_item
self.process_distribution(spec, dist, deps)
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 783, in process_distribution
[requirement], self.local_index, self.easy_install
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 782, in resolve
replace_conflicting=replace_conflicting
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 1065, in best_match
return self.obtain(req, installer)
File "/usr/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 1077, in obtain
return installer(requirement)
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 710, in easy_install
return self.install_item(spec, dist.location, tmpdir, deps)
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 736, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 921, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 1189, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/usr/lib/python2.7/dist-packages/setuptools/command/easy_install.py", line 1175, in run_setup
run_setup(setup_script, args)
File "/usr/lib/python2.7/dist-packages/setuptools/sandbox.py", line 253, in run_setup
raise
File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/lib/python2.7/dist-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__
self.gen.throw(type, value, traceback)
File "/usr/lib/python2.7/dist-packages/setuptools/sandbox.py", line 166, in save_modules
saved_exc.resume()
File "/usr/lib/python2.7/dist-packages/setuptools/sandbox.py", line 141, in resume
six.reraise(type, exc, self._tb)
File "/usr/lib/python2.7/dist-packages/setuptools/sandbox.py", line 154, in save_modules
yield saved
File "/usr/lib/python2.7/dist-packages/setuptools/sandbox.py", line 195, in setup_context
yield
File "/usr/lib/python2.7/dist-packages/setuptools/sandbox.py", line 250, in run_setup
_execfile(setup_script, ns)
File "/usr/lib/python2.7/dist-packages/setuptools/sandbox.py", line 45, in _execfile
exec(code, globals, locals)
File "/tmp/easy_install-gwlzMm/click-8.0.0a1/setup.py", line 3, in <module>
File "/usr/lib/python2.7/dist-packages/setuptools/__init__.py", line 162, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python2.7/distutils/core.py", line 124, in setup
dist.parse_config_files()
File "/usr/lib/python2.7/dist-packages/setuptools/dist.py", line 702, in parse_config_files
ignore_option_errors=ignore_option_errors)
File "/usr/lib/python2.7/dist-packages/setuptools/config.py", line 121, in parse_configuration
meta.parse()
File "/usr/lib/python2.7/dist-packages/setuptools/config.py", line 426, in parse
section_parser_method(section_options)
File "/usr/lib/python2.7/dist-packages/setuptools/config.py", line 399, in parse_section
self[name] = value
File "/usr/lib/python2.7/dist-packages/setuptools/config.py", line 184, in __setitem__
value = parser(value)
File "/usr/lib/python2.7/dist-packages/setuptools/config.py", line 515, in _parse_version
version = self._parse_attr(value, self.package_dir)
File "/usr/lib/python2.7/dist-packages/setuptools/config.py", line 349, in _parse_attr
module = import_module(module_name)
File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/tmp/easy_install-gwlzMm/click-8.0.0a1/src/click/__init__.py", line 7, in <module>
File "/tmp/easy_install-gwlzMm/click-8.0.0a1/src/click/core.py", line 91
f"{hint}. Command {base_command.name!r} is set to chain and"
^
SyntaxError: invalid syntax
_________________
**OS**: `Kali Linux`
**OS Version**: `2020.4`
**Python Version**: `2.7.18`
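The root cause is visible at the end of the traceback: setuptools resolved `click 8.0.0a1`, whose source uses f-strings, and Python 2.7 cannot parse that syntax, hence the bare `SyntaxError: invalid syntax`. A version guard of the kind many setup scripts add makes that failure explicit (a generic sketch, not Nettacker's actual setup code):

```python
import sys

def require_python(minimum=(3, 0)):
    """Fail fast, with a readable message, on unsupported interpreters."""
    if sys.version_info < minimum:
        raise RuntimeError(
            "Python %d.%d+ is required, found %d.%d"
            % (minimum + sys.version_info[:2])
        )
```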
| closed | 2021-02-05T04:11:16Z | 2021-02-05T04:15:36Z | https://github.com/OWASP/Nettacker/issues/387 | [] | aligrt | 0 |
freqtrade/freqtrade | python | 10,650 | Newbie Question about running multiple bots. | <!--
Have you searched for similar issues before posting it?
Did you have a VERY good look at the [documentation](https://www.freqtrade.io/en/latest/) and are sure that the question is not explained there
Please do not use the question template to report bugs or to request new features.
-->
## Describe your environment
* Operating system: ____
* Python Version: _____ (`python -V`)
* CCXT version: _____ (`pip freeze | grep ccxt`)
* Freqtrade Version: ____ (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
## Your question
Hi FreqTrade Team, greetings! I am relatively new to the Freqtrade bot and unable to find an answer to a particular use case, given below. Use case: one of our collaborating clients is interested in the following functionality of your crypto-trading bot: is it possible in FreqTrade to create multiple strategies within a single bot, or across multiple bots, as chosen by the end user? If yes, what is the solution for this? Looking forward to your inputs either way. Thanks and regards
| closed | 2024-09-13T06:21:16Z | 2024-09-13T06:56:56Z | https://github.com/freqtrade/freqtrade/issues/10650 | [
"Question"
] | vikashbaraiya | 1 |
charlesq34/pointnet | tensorflow | 120 | program for hand labeling, semantic segmentation | Hello,
Can someone give me any advise for a program to do the the semantic segmentation labeling to my own data? I usually work with Ubuntu 16 and ROS for visualization. I saw a lot of options, but none of them free or for Ubuntu. Is there any good program to do this on Ubuntu for free? Or what is a good free alternative in Windows?
Thanks | open | 2018-07-24T10:51:15Z | 2019-10-10T03:13:09Z | https://github.com/charlesq34/pointnet/issues/120 | [] | mescpla | 3 |
plotly/dash | jupyter | 3,210 | Devtools UI blocks mantine Notification | This is the app that I used to test this:
```
from dash import Dash, html, Input, Output, callback
import dash_mantine_components as dmc
from dash_iconify import DashIconify

app = Dash(__name__)

# Add Notifications to your app layout.
app.layout = dmc.MantineProvider(
[
dmc.NotificationProvider(),
html.Div(id="notifications-container"),
dmc.Button("Show Notification", id="notify"),
]
)
@callback(
Output("notifications-container", "children"),
Input("notify", "n_clicks"),
prevent_initial_call=True,
)
def show(n_clicks):
return dmc.Notification(
title="Hey there!",
id="simple-notify",
action="show",
autoClose=False,
position="bottom-right",
message="Notifications in Dash, Awesome!",
icon=DashIconify(icon="ic:round-celebration"),
)
if __name__ == '__main__':
app.run(debug=True)
```
I tried changing the z-index for the devtools, but it appears that the dmc Notification component uses position static, which means that it ignores z-index. Not sure what the solution is for that! If we switch to a footer for the devtools then this issue can be closed | open | 2025-03-11T18:51:38Z | 2025-03-17T18:20:48Z | https://github.com/plotly/dash/issues/3210 | [
"bug",
"P1"
] | marthacryan | 0 |
jonaswinkler/paperless-ng | django | 810 | [BUG] Debug Mode breaks start | Wanted to start paperless via docker in debug mode.
When I enable
`PAPERLESS_DEBUG=true`
and start the containers, I got the following error and the container restarts all the time:
```
webserver_1 | 2021-03-23 06:58:44,362 INFO spawned: 'gunicorn' with pid 55
webserver_1 | System check identified some issues:
webserver_1 |
webserver_1 | WARNINGS:
webserver_1 | ?: DEBUG mode is enabled. Disable Debug mode. This is a serious security issue, since it puts security overides in place which are meant to be only used during development. This also means that paperless will tell anyone various debugging information when something goes wrong.
webserver_1 | 07:58:44 [Q] INFO Q Cluster utah-wisconsin-speaker-winner starting.
webserver_1 | 07:58:44 [Q] INFO Process-1:1 ready for work at 58
webserver_1 | 07:58:44 [Q] INFO Process-1 guarding cluster utah-wisconsin-speaker-winner
webserver_1 | 07:58:44 [Q] INFO Process-1:3 monitoring at 60
webserver_1 | 07:58:44 [Q] INFO Process-1:4 pushing tasks at 61
webserver_1 | 07:58:44 [Q] INFO Q Cluster utah-wisconsin-speaker-winner running.
webserver_1 | 07:58:44 [Q] INFO Process-1:2 ready for work at 59
webserver_1 | [2021-03-23 06:58:44 +0000] [55] [INFO] Starting gunicorn 20.0.4
webserver_1 | [2021-03-23 06:58:45 +0000] [55] [INFO] Listening at: http://0.0.0.0:8000 (55)
webserver_1 | [2021-03-23 06:58:45 +0000] [55] [INFO] Using worker: uvicorn.workers.UvicornWorker
webserver_1 | [2021-03-23 06:58:45 +0000] [55] [INFO] Server is ready. Spawning workers
webserver_1 | [2021-03-23 06:58:45 +0000] [64] [INFO] Booting worker with pid: 64
webserver_1 | [2021-03-23 06:58:45 +0000] [65] [INFO] Booting worker with pid: 65
webserver_1 | 2021-03-23 06:58:45,715 INFO success: gunicorn entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
webserver_1 | [2021-03-23 07:58:45 +0100] [64] [ERROR] Exception in worker process
webserver_1 | Traceback (most recent call last):
webserver_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
webserver_1 | worker.init_process()
webserver_1 | File "/usr/local/lib/python3.7/site-packages/uvicorn/workers.py", line 63, in init_process
webserver_1 | super(UvicornWorker, self).init_process()
webserver_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 119, in init_process
webserver_1 | self.load_wsgi()
webserver_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi
webserver_1 | self.wsgi = self.app.wsgi()
webserver_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
webserver_1 | self.callable = self.load()
webserver_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in load
webserver_1 | return self.load_wsgiapp()
webserver_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp
webserver_1 | return util.import_app(self.app_uri)
webserver_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/util.py", line 358, in import_app
webserver_1 | mod = importlib.import_module(module)
webserver_1 | File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
webserver_1 | return _bootstrap._gcd_import(name[level:], package, level)
webserver_1 | File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
webserver_1 | File "<frozen importlib._bootstrap>", line 983, in _find_and_load
webserver_1 | File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
webserver_1 | File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
webserver_1 | File "<frozen importlib._bootstrap_external>", line 728, in exec_module
webserver_1 | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
webserver_1 | File "/usr/src/paperless/src/paperless/asgi.py", line 9, in <module>
webserver_1 | django_asgi_app = get_asgi_application()
webserver_1 | File "/usr/local/lib/python3.7/site-packages/django/core/asgi.py", line 12, in get_asgi_application
webserver_1 | django.setup(set_prefix=False)
webserver_1 | File "/usr/local/lib/python3.7/site-packages/django/__init__.py", line 24, in setup
webserver_1 | apps.populate(settings.INSTALLED_APPS)
webserver_1 | File "/usr/local/lib/python3.7/site-packages/django/apps/registry.py", line 91, in populate
webserver_1 | app_config = AppConfig.create(entry)
webserver_1 | File "/usr/local/lib/python3.7/site-packages/django/apps/config.py", line 116, in create
webserver_1 | mod = import_module(mod_path)
webserver_1 | File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
webserver_1 | return _bootstrap._gcd_import(name[level:], package, level)
webserver_1 | File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
webserver_1 | File "<frozen importlib._bootstrap>", line 983, in _find_and_load
webserver_1 | File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
webserver_1 | File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
webserver_1 | File "<frozen importlib._bootstrap_external>", line 728, in exec_module
webserver_1 | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
webserver_1 | File "/usr/local/lib/python3.7/site-packages/channels/apps.py", line 4, in <module>
webserver_1 | import daphne.server
webserver_1 | File "/usr/local/lib/python3.7/site-packages/daphne/server.py", line 20, in <module>
webserver_1 | asyncioreactor.install(twisted_loop)
webserver_1 | File "/usr/local/lib/python3.7/site-packages/twisted/internet/asyncioreactor.py", line 307, in install
webserver_1 | reactor = AsyncioSelectorReactor(eventloop)
webserver_1 | File "/usr/local/lib/python3.7/site-packages/twisted/internet/asyncioreactor.py", line 61, in __init__
webserver_1 | "SelectorEventLoop required, instead got: {}".format(_eventloop)
webserver_1 | TypeError: SelectorEventLoop required, instead got: <uvloop.Loop running=False closed=False debug=False>
webserver_1 | [2021-03-23 07:58:45 +0100] [64] [INFO] Worker exiting (pid: 64)
webserver_1 | [2021-03-23 07:58:45 +0100] [65] [ERROR] Exception in worker process
webserver_1 | Traceback (most recent call last):
webserver_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
webserver_1 | worker.init_process()
webserver_1 | File "/usr/local/lib/python3.7/site-packages/uvicorn/workers.py", line 63, in init_process
webserver_1 | super(UvicornWorker, self).init_process()
webserver_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 119, in init_process
webserver_1 | self.load_wsgi()
webserver_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 144, in load_wsgi
webserver_1 | self.wsgi = self.app.wsgi()
webserver_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
webserver_1 | self.callable = self.load()
webserver_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in load
webserver_1 | return self.load_wsgiapp()
webserver_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp
webserver_1 | return util.import_app(self.app_uri)
webserver_1 | File "/usr/local/lib/python3.7/site-packages/gunicorn/util.py", line 358, in import_app
webserver_1 | mod = importlib.import_module(module)
webserver_1 | File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
webserver_1 | return _bootstrap._gcd_import(name[level:], package, level)
webserver_1 | File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
webserver_1 | File "<frozen importlib._bootstrap>", line 983, in _find_and_load
webserver_1 | File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
webserver_1 | File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
webserver_1 | File "<frozen importlib._bootstrap_external>", line 728, in exec_module
webserver_1 | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
webserver_1 | File "/usr/src/paperless/src/paperless/asgi.py", line 9, in <module>
webserver_1 | django_asgi_app = get_asgi_application()
webserver_1 | File "/usr/local/lib/python3.7/site-packages/django/core/asgi.py", line 12, in get_asgi_application
webserver_1 | django.setup(set_prefix=False)
webserver_1 | File "/usr/local/lib/python3.7/site-packages/django/__init__.py", line 24, in setup
webserver_1 | apps.populate(settings.INSTALLED_APPS)
webserver_1 | File "/usr/local/lib/python3.7/site-packages/django/apps/registry.py", line 91, in populate
webserver_1 | app_config = AppConfig.create(entry)
webserver_1 | File "/usr/local/lib/python3.7/site-packages/django/apps/config.py", line 116, in create
webserver_1 | mod = import_module(mod_path)
webserver_1 | File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
webserver_1 | return _bootstrap._gcd_import(name[level:], package, level)
webserver_1 | File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
webserver_1 | File "<frozen importlib._bootstrap>", line 983, in _find_and_load
webserver_1 | File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
webserver_1 | File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
webserver_1 | File "<frozen importlib._bootstrap_external>", line 728, in exec_module
webserver_1 | File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
webserver_1 | File "/usr/local/lib/python3.7/site-packages/channels/apps.py", line 4, in <module>
webserver_1 | import daphne.server
webserver_1 | File "/usr/local/lib/python3.7/site-packages/daphne/server.py", line 20, in <module>
webserver_1 | asyncioreactor.install(twisted_loop)
webserver_1 | File "/usr/local/lib/python3.7/site-packages/twisted/internet/asyncioreactor.py", line 307, in install
webserver_1 | reactor = AsyncioSelectorReactor(eventloop)
webserver_1 | File "/usr/local/lib/python3.7/site-packages/twisted/internet/asyncioreactor.py", line 61, in __init__
webserver_1 | "SelectorEventLoop required, instead got: {}".format(_eventloop)
webserver_1 | TypeError: SelectorEventLoop required, instead got: <uvloop.Loop running=False closed=False debug=False>
webserver_1 | [2021-03-23 07:58:45 +0100] [65] [INFO] Worker exiting (pid: 65)
webserver_1 | [2021-03-23 06:58:45 +0000] [55] [INFO] Shutting down: Master
webserver_1 | [2021-03-23 06:58:45 +0000] [55] [INFO] Reason: Worker failed to boot.
webserver_1 | 2021-03-23 06:58:47,020 INFO exited: gunicorn (exit status 3; not expected)
```
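For context, the `TypeError` at the bottom of the traceback comes from twisted's `asyncioreactor`, which only accepts a `SelectorEventLoop`, while the uvicorn worker has installed a `uvloop` loop. The snippet below only reproduces twisted's type check with the stdlib to illustrate the mismatch; the suggested workaround in the comments (forcing the default selector policy, or using a non-uvloop worker class) is an assumption to verify against the actual gunicorn setup, not paperless code:

```
import asyncio

# twisted.internet.asyncioreactor refuses any loop that is not a
# SelectorEventLoop; uvloop.Loop is not one, hence the startup crash above.
loop = asyncio.SelectorEventLoop()
print(isinstance(loop, asyncio.SelectorEventLoop))  # True

# Possible workaround pattern (an assumption -- verify for your setup):
# make sure a selector loop policy is active before daphne/twisted is
# imported, e.g.
# asyncio.set_event_loop_policy(asyncio.DefaultEventLoopPolicy())
# or run gunicorn with a worker class that does not install uvloop.
loop.close()
```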
If I disable the debug mode, everything is fine.
I wanted to enable the debug code, to use the backend from the docker environment to have a look at the frontend development and to bypass CORS configuration | open | 2021-03-23T07:06:20Z | 2021-06-16T20:55:45Z | https://github.com/jonaswinkler/paperless-ng/issues/810 | [
"bug"
] | Bart1909 | 2 |
serengil/deepface | machine-learning | 823 | embeddings | How i can verify two face embeddings using the DeepFace library
| closed | 2023-08-14T21:21:31Z | 2023-08-15T17:28:50Z | https://github.com/serengil/deepface/issues/823 | [
"question"
] | oussaifi-majdi | 1 |
mlfoundations/open_clip | computer-vision | 546 | Support for training on spot instances | Big fan of the repo! I tried Horovod training with regular instances and the code worked great, wondering if there is support for spot instances with Horovod? My first guess would be no since it doesn't look like the repo uses [elastic horovod](https://horovod.readthedocs.io/en/stable/elastic_include.html) but wanted to get a second opinion before diving into the weeds to try and reinvent the wheel.
If it helps I'm trying to do stuff using the WebDataset format rather than say the CsvDataset which I imagine makes things harder unfortunately. | closed | 2023-06-13T16:54:00Z | 2024-05-10T18:42:05Z | https://github.com/mlfoundations/open_clip/issues/546 | [] | humzaiqbal | 0 |
dask/dask | pandas | 11,030 | Does not work with AWS - aiobotocore related error | Still getting [this error](https://github.com/dask/dask/issues/8335) when running cell 5 of this: https://nbviewer.org/github/NOAA-OWP/AORC-jupyter-notebooks/blob/master/jupyter_notebooks/AORC_Zarr_notebook.ipynb
`AttributeError: module 'aiobotocore' has no attribute 'AioSession'`
- Dask version:
- Python version:
- Operating System:
- Install method (conda, pip, source):
python 3.9.13
dask '2022.9.2'
s3fs '0.6.0'
aiobotocore '2.5.2'
windows, conda, | closed | 2024-03-27T18:44:20Z | 2024-04-02T14:13:37Z | https://github.com/dask/dask/issues/11030 | [
"needs triage"
] | openSourcerer9000 | 2 |
aws/aws-sdk-pandas | pandas | 2,845 | `athena.to_parquet` fails when `mode=overwrite_partitions` and `partition_cols` contains something like `hour(timestamp_col)`. | ### Describe the bug
When using `s3.to_parquet` to update a parquet file that is partitioned by a time interval or a timestamp "attribute" (such as year, month, hour, etc.), the function fails because for this mode the implementation assumes that the values of `partition_cols` are names of the parquet / table columns, and it does not find something like `hour(column)` in the dataframe columns.
I think the problem is [this line](https://github.com/aws/aws-sdk-pandas/blob/d554bea6c0e534df680bbbf0a814bcacd4321d00/awswrangler/athena/_write_iceberg.py#L452), which uses the function `delete_from_iceberg_table`, which expects column names.
### How to Reproduce
-
### Expected behavior
I expect the `partition_cols` option to accept anything that can be used to partition a parquet. In particular, anything that is accepted when the argument `mode` is `append` or `overwrite` instead of `overwrite_partitions`.
### Your project
_No response_
### Screenshots
_No response_
### OS
Ubuntu 22.04
### Python version
3.10
### AWS SDK for pandas version
3.7.3
### Additional context
_No response_ | open | 2024-06-04T18:02:10Z | 2024-08-19T14:42:09Z | https://github.com/aws/aws-sdk-pandas/issues/2845 | [
"bug",
"backlog"
] | useredsa | 3 |
ray-project/ray | python | 51,514 | [Autoscaler] Add Support for BatchingNodeProvider in Autoscaler Config Option | ### Description
[KubeRay](https://docs.ray.io/en/latest/cluster/kubernetes/user-guides/configuring-autoscaling.html#overview) currently uses the BatchingNodeProvider to manage clusters externally (using the KubeRay operator), which enables users to interact with external cluster management systems. However, to support custom providers with the BatchingNodeProvider, users must implement a module and integrate it as an external type provider, which leads to inconvenience.
On the other hand, [LocalNodeProvider](https://github.com/ray-project/ray/tree/master/python/ray/autoscaler/_private/local) offers the CoordinatorSenderNodeProvider to manage clusters externally through a coordinator server, [but the local type provider currently does not support updates for clusters](https://github.com/ray-project/ray/issues/39565).
To simplify custom cluster management, adding the BatchingNodeProvider and BatchingSenderNodeProvider would be highly beneficial. This would significantly assist users who wish to customize and use their own providers for managing clusters (on-premises or multi cloud environments).
For example, the following configuration could be used to add the BatchingNodeProvider to the provider type:
```yaml
provider:
type: batch
coordinator_address: "127.0.0.1:8000"
```
This would allow users to easily configure external cluster management with the BatchingNodeProvider, enhancing the flexibility and usability of the system.
### Use case
https://github.com/ray-project/ray/blob/8773682e49876627b9b4e10e2d2f4f32d961c0c9/python/ray/autoscaler/_private/providers.py#L184-L197
If the 'batch' type is additionally supported in the provider configuration, users will be able to manage the creation and deletion of cluster nodes externally in the coordinator server. | open | 2025-03-19T06:51:24Z | 2025-03-19T22:23:54Z | https://github.com/ray-project/ray/issues/51514 | [
"enhancement",
"P2",
"core"
] | nadongjun | 0 |
amidaware/tacticalrmm | django | 1,769 | Make the agent more resilient against DNS issues | **Is your feature request related to a problem? Please describe.**
When an agent loses DNS service it stops working entirely. (Mesh is unaffected, or at least less affected.)
**Describe the solution you'd like**
When a DNS resolution fails, the agent should fall back to the last known working IP that was returned, and probably change the color of the "connectivity icon" to yellow or similar.
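A minimal sketch of that fallback idea (the real agent is written in Go, so this is only an illustration of the caching logic, with hypothetical helper names):

```
import socket

def resolve_with_fallback(host, cache, resolver=socket.gethostbyname):
    """Resolve `host`; on DNS failure, fall back to the last known good IP.

    Returns (ip, degraded) where `degraded` is True when the cached value
    had to be used -- e.g. to color the connectivity icon yellow.
    """
    try:
        ip = resolver(host)
        cache[host] = ip               # remember the last working answer
        return ip, False
    except OSError:                    # socket.gaierror is an OSError subclass
        if host in cache:
            return cache[host], True   # degraded but still reachable
        raise                          # no cached IP -> genuinely down

# Example with a stubbed resolver that later starts failing:
cache = {}
ip, degraded = resolve_with_fallback("rmm.example.com", cache,
                                     resolver=lambda h: "203.0.113.7")
print(ip, degraded)  # 203.0.113.7 False

def broken(h):
    raise OSError("DNS failure")

ip, degraded = resolve_with_fallback("rmm.example.com", cache, resolver=broken)
print(ip, degraded)  # 203.0.113.7 True
```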
**Additional context**
https://discord.com/channels/736478043522072608/1204337171633283092
| open | 2024-02-23T14:10:45Z | 2024-03-13T07:19:19Z | https://github.com/amidaware/tacticalrmm/issues/1769 | [
"enhancement"
] | P6g9YHK6 | 3 |
nolar/kopf | asyncio | 699 | Is there a way to set my own context vars that are visible in all handlers? | ## Question
I want to load some settings from the environment that I'm going to use later while the operator is running. Although not important, for settings management I use [pydantic](https://pydantic-docs.helpmanual.io/usage/settings/).
My initial thought was to make use of the startup handler to load these settings into a context var, but it won't work because, I assume, the startup handler runs as a task in a different context from the other handlers, so the change will consequently not be visible to them.
e.g. this is what I had in mind:
```python
import logging
from contextvars import ContextVar

import kopf
from pydantic import BaseSettings

class OperatorSettings(BaseSettings):
    class Config:
        env_file = ".env"
        env_prefix = "OPERATOR_"
    ...

settings: ContextVar[OperatorSettings] = ContextVar("settings")

@kopf.on.startup(errors=kopf.ErrorsMode.PERMANENT)
async def init_settings(logger: logging.Logger, **kwargs):
    logger.info("loading settings..")
    settings.set(OperatorSettings())
    logger.info("loaded the following settings: %r", settings.get())

@kopf.on.create(...)
async def create_fn(...):
    use_settings(settings.get())  # settings.get() should not raise a LookupError
I can work around this by not using contextvars and use `settings` as a global variable instead but, if possible, I would like to avoid it.
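For reference, the global-variable workaround mentioned above can be sketched like this (illustrative helper names, not kopf API; the kopf wiring is shown only in comments):

```
# settings_holder.py -- module-level holder, importable from any handler module.
_settings = None

def set_settings(value):
    global _settings
    _settings = value

def get_settings():
    if _settings is None:
        raise RuntimeError("settings not initialised yet (startup handler not run?)")
    return _settings

# In the operator module (sketch):
# @kopf.on.startup()
# async def init_settings(**_):
#     set_settings(OperatorSettings())
#
# @kopf.on.create(...)
# async def create_fn(**_):
#     use_settings(get_settings())

# Plain-Python demonstration of the holder itself:
set_settings({"env_prefix": "OPERATOR_"})
print(get_settings()["env_prefix"])  # OPERATOR_
```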
## Checklist
- [x] I have read the [documentation](https://kopf.readthedocs.io/en/latest/) and searched there for the problem
- [x] I have searched in the [GitHub Issues](https://github.com/nolar/kopf/issues?utf8=%E2%9C%93&q=) for similar questions
## Keywords
contextvars, startup, settings, configuration
| closed | 2021-02-25T09:06:55Z | 2021-02-25T13:03:34Z | https://github.com/nolar/kopf/issues/699 | [
"question"
] | zoopp | 2 |
PrefectHQ/prefect | automation | 16,939 | Prefect run runtime not stopped when run is canceled | ### Bug summary
Hey!
We run our tasks on `fargate` in `ecs`.
Every once in a while our tasks silently fail, they are then marked as `Running` in prefect cloud UI, but the run is reporting close to no logs both in `Cloudwatch` or Prefect cloud Ui.
The only logs we see are:
```
EVENTS 1738079190030 15:46:25.912 | DEBUG | prefect.runner - Checking for cancelled flow runs... 1738079185912
EVENTS 1738079200025 15:46:37.091 | DEBUG | prefect.utilities.services.critical_service_loop - Starting run of functools.partial(<bound method Runner._check_for_cancelled_flow_runs of Runner(name='runner-ce1a9cc9-ebd1-447f-8f86-69c42bba3e61')>, should_stop=<function Runner.execute_flow_run.<locals>.<lambda> at 0x7f4edc7e6480>, on_stop=<bound method CancelScope.cancel of <anyio._backends._asyncio.CancelScope object at 0x7f4edaad3a90>>) 1738079197092
```
This is repeating every 10 seconds.
When we then cancel the flow run from the prefect cloud ui, the logs report:
```
EVENTS 1738079200025 15:46:37.545 | INFO | prefect.runner - Found 1 flow runs awaiting cancellation. 1738079197545
EVENTS 1738079200025 15:46:37.545 | WARNING | prefect.runner - Unable to kill process 17: The process was not found. Marking flow run as cancelled. 1738079197546
EVENTS 1738079200025 15:46:37.697 | DEBUG | Flow run 'secret-jackrabbit' - Running 1 deployment pull steps 1738079197698
EVENTS 1738079200025 15:46:37.723 | DEBUG | prefect.client - Connecting to API at https://api.prefect.cloud/api/accounts/.... 1738079197723
EVENTS 1738079200025 15:46:37.747 | DEBUG | prefect.client - Connecting to API at https://api.prefect.cloud/api/accounts/.... 1738079197748
EVENTS 1738079200025 15:46:37.749 | DEBUG | Flow run 'secret-jackrabbit' - Changing working directory to '/opt/prefect' 1738079197750
EVENTS 1738079200025 15:46:37.750 | DEBUG | Flow run 'secret-jackrabbit' - Importing flow code from 'flowlib/prefect/flow.py:load_and_transform' 1738079197751
EVENTS 1738079210023 15:46:46.662 | DEBUG | prefect.utilities.services.critical_service_loop - Starting run of functools.partial(<bound method Runner._check_for_cancelled_flow_runs of Runner(name='runner-ce1a9cc9-ebd1-447f-8f86-69c42bba3e61')>, should_stop=<function Runner.execute_flow_run.<locals>.<lambda> at 0x7f4edc7e6480>, on_stop=<bound method CancelScope.cancel of <anyio._backends._asyncio.CancelScope object at 0x7f4edaad3a90>>) 1738079206663
EVENTS 1738079210023 15:46:46.662 | DEBUG | prefect.runner - Checking for cancelled flow runs... 1738079206663
```
The run is then marked as cancelled in prefect cloud ui, but the ecs-tasks is still alive, and continues logging:
```
EVENTS 1738079210023 15:46:46.662 | DEBUG | prefect.utilities.services.critical_service_loop - Starting run of functools.partial(<bound method Runner._check_for_cancelled_flow_runs of Runner(name='runner-ce1a9cc9-ebd1-447f-8f86-69c42bba3e61')>, should_stop=<function Runner.execute_flow_run.<locals>.<lambda> at 0x7f4edc7e6480>, on_stop=<bound method CancelScope.cancel of <anyio._backends._asyncio.CancelScope object at 0x7f4edaad3a90>>) 1738079206663
EVENTS 1738079210023 15:46:46.662 | DEBUG | prefect.runner - Checking for cancelled flow runs... 1738079206663
```
It looks like the ecs-task keeps on running https://github.com/PrefectHQ/prefect/blob/26ae72909896078d4436e4b7e90075d586347f53/src/prefect/runner/runner.py#L803-L807 perhaps, and this loop is never ended.
We have so far only seen this in tasks using quite a bit of resources, but not given enough resources. For instance, when we have a task needing 14gb memory, but we only spec it to get 8gb. This job is also creating quite a bit of threads / prefect-tasks.
I have unfortunately not managed to reproduce this issue in a more controlled manner, but based on `EVENTS 1738079200025 15:46:37.545 | WARNING | prefect.runner - Unable to kill process 17: The process was not found. Marking flow run as cancelled. 1738079197546`, it looks a little like the process that the `prefect.runner` loop is responsible for killing on a `cancel` request is already gone, but nothing has caught this. How this process silently died is not reported anywhere I have managed to see. But as mentioned, I'm guessing the high memory/thread usage is involved. The flows we see this issue in are running [`dlt`](https://dlthub.com/) jobs.
Have you seen an issue like this before? If so, any suggestions on how to avoid this?
Another `critical_service_loop` that checks whether the process is actually still running, and kills all the other loops if no flow runs are running anymore, could maybe be a way of handling this. The current state of things is a little unfortunate for us, as we have to monitor for long-running ECS tasks; if a task has been running for longer than X, it might be a case of this type of run, and we manually have to kill the ECS task.
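As a sketch of that extra liveness check (hypothetical helper names; the runner internals may differ, and signal 0 probing is a POSIX-only assumption):

```
import errno
import os

def process_alive(pid: int) -> bool:
    """True if a process with this PID exists (signal 0 probes without killing)."""
    try:
        os.kill(pid, 0)
    except OSError as exc:
        if exc.errno == errno.ESRCH:   # no such process
            return False
        return True                    # e.g. EPERM: exists but not ours
    return True

# A watchdog like this could sit next to the existing
# "checking for cancelled flow runs" loop and stop the runner once the
# flow-run process has silently died:
def watchdog_should_stop(flow_run_pids):
    return all(not process_alive(pid) for pid in flow_run_pids)

print(process_alive(os.getpid()))  # True: we're alive
```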
### Version info
```Text
Tested with 2.20.2 and 3.1.14
Version: 2.20.2
API version: 0.8.4
Python version: 3.11.9
Git commit: 51c3f290
Built: Wed, Aug 14, 2024 11:27 AM
OS/Arch: linux/x86_64
Profile: dope
Server type: cloud
Version: 3.1.14
API version: 0.8.4
Python version: 3.11.9
Git commit: 5f1ebb57
Built: Thu, Jan 23, 2025 1:22 PM
OS/Arch: linux/x86_64
Profile: ephemeral
Server type: ephemeral
Pydantic version: 2.9.2
Server:
Database: sqlite
SQLite version: 3.37.2
Integrations:
prefect-docker: 0.6.2
```
### Additional context
_No response_ | open | 2025-02-03T11:45:50Z | 2025-02-03T11:45:50Z | https://github.com/PrefectHQ/prefect/issues/16939 | [
"bug"
] | mch-sb | 0 |
sunscrapers/djoser | rest-api | 766 | You have a duplicated operationId in your OpenAPI schema: meUser | Hey, that's what im getting while generating openapi schema
```
python manage.py generateschema --file openapi-schema.yml
C:\Users\style\.virtualenvs\newsltr-dOCyhCR8\Lib\site-packages\rest_framework\schemas\openapi.py:49: UserWarning: You have a duplicated operationId in your OpenAPI schema: meUser
Route: /api/v1/auth/users/me/, Method: get
Route: /api/v1/auth/users/me/, Method: put
An operationId has to be unique across your schema. Your schema may not work in other tools.
warnings.warn(
C:\Users\style\.virtualenvs\newsltr-dOCyhCR8\Lib\site-packages\rest_framework\schemas\openapi.py:49: UserWarning: You have a duplicated operationId in your OpenAPI schema: meUser
Route: /api/v1/auth/users/me/, Method: put
Route: /api/v1/auth/users/me/, Method: patch
An operationId has to be unique across your schema. Your schema may not work in other tools.
warnings.warn(
C:\Users\style\.virtualenvs\newsltr-dOCyhCR8\Lib\site-packages\rest_framework\schemas\openapi.py:49: UserWarning: You have a duplicated operationId in your OpenAPI schema: meUser
Route: /api/v1/auth/users/me/, Method: patch
Route: /api/v1/auth/users/me/, Method: delete
An operationId has to be unique across your schema. Your schema may not work in other tools.
warnings.warn(
```
I use bump.sh as documentation service and for every method in endpoint `me` im getting literally same result. I need to change operationId manually to make it work fine, what's wrong, how can i resolve that? | closed | 2023-09-29T23:15:34Z | 2023-10-08T17:24:58Z | https://github.com/sunscrapers/djoser/issues/766 | [] | style77 | 1 |
raphaelvallat/pingouin | pandas | 105 | Deprecating or replacing pingouin.plot_skipped_corr | For the sake of simplicity, I would be in favor of deprecating the [pingouin.plot_skipped_corr](https://pingouin-stats.org/generated/pingouin.plot_skipped_corr.html#pingouin.plot_skipped_corr) function which I think is too specific and most likely only used by very few users (but I may be wrong?). That said, I do like the left panel of the output plot (= standard scatterplot with robust regression line and outliers highlighted), and I could also imagine replacing this function by a more general ``pingouin.plot_robust_corr`` function where users can specificy the robust correlation method (i.e. not limited to only a skipped correlation).
Please feel free to let me know which options you prefer:
1. Keeping the function as is.
2. Deprecating the function.
3. Replacing the function by a more general ``pingouin.plot_robust_corr`` function which would only integrate the left panel of the current function, i.e. without the bootstrapped distribution.
Thanks! | closed | 2020-06-23T05:27:27Z | 2020-07-29T04:12:27Z | https://github.com/raphaelvallat/pingouin/issues/105 | [
"deprecation :skull:"
] | raphaelvallat | 3 |
autogluon/autogluon | computer-vision | 3,905 | Multilabel Predictor Issue | I have trained a model 'Multilabel Predictor' in my local computer. I need to run a airflow pipeline to predict the data and store predictions in a table in redshift. The issue with the model stored in my computer is that the pickle file has the hardcore path of my computer (screenshot 1: first line of the pickle file), so when airflow tries to predict, theres an error that the path cannot be recognized. Due this situation, i've trained the same model in SageMaker and i stored it in a path of S3. When i try to predict the model (the one stored in s3), theres another error that botocore cant locate the credentials. (screenshot 2: logs error airflow).
Please, can you provide me any information of what can i do to do a airflow pipeline with the multilabel predictor of autogluon, i already did this for tabular predictor and it worked perfect.

Screenshot 1

Screenshot 2
| open | 2024-02-06T18:57:17Z | 2024-11-25T22:47:10Z | https://github.com/autogluon/autogluon/issues/3905 | [
"bug",
"module: tabular",
"priority: 1"
] | YilanHipp | 0 |
keras-team/keras | pytorch | 20,778 | The error you're facing arises from using masking in the Embedding layer while building a POS Tagging model. | The error you're encountering is related to using mask_zero=True in the Embedding layer and ensuring proper handling of the propagated mask through subsequent layers like LSTM and TimeDistributed. Below is a refined explanation and the updated solution.
**Steps to Address the Issue**

1. **Mask propagation.** Ensure that all layers following the `Embedding` layer can handle the mask properly. `LSTM` and `Bidirectional` natively support masking, so no additional changes are needed there. However, ensure that the `TimeDistributed` layer processes the mask correctly.
2. **Loss function.** The `sparse_categorical_crossentropy` loss expects integer labels, not one-hot encoded outputs. Ensure your target labels (`Y_train`) meet this requirement.
3. **Input shapes.** Confirm that the input and output shapes align throughout the model pipeline.
4. **Eager execution.** TensorFlow 2.x defaults to eager execution, but if issues persist, ensure it is explicitly enabled.
**Corrected Code**

```python
import tensorflow as tf
from tensorflow import keras

# Define the model architecture
model = keras.Sequential([
    keras.Input(shape=(200,)),  # Match the padded sequence length
    keras.layers.Embedding(
        input_dim=vocab_len,
        output_dim=50,
        weights=[embedding_matrix],
        mask_zero=True  # Enable masking for padding tokens
    ),
    keras.layers.Bidirectional(
        keras.layers.LSTM(units=100, return_sequences=True)
    ),  # Handles mask natively
    keras.layers.Bidirectional(
        keras.layers.LSTM(units=100, return_sequences=True)
    ),
    keras.layers.TimeDistributed(
        keras.layers.Dense(units=tags_len, activation="softmax")
    )  # Outputs predictions for each time step
])

# Compile the model
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",  # Works with integer labels
    metrics=["accuracy"]
)

# Display the model summary
model.summary()

# Train the model
model.fit(X_train, Y_train, epochs=10)
```
**Changes and Fixes**

- **Masking compatibility.** The `Embedding` layer propagates the mask with `mask_zero=True`. `LSTM` and `Bidirectional` layers handle masking without additional adjustments. The `TimeDistributed` layer does not require special handling as long as its input shapes match.
- **Loss function.** Ensure `Y_train` contains integer-encoded labels corresponding to the POS tags.
- **Debugging with `tf.function` (optional).** If issues persist, use `@tf.function` to explicitly enable graph execution:

  ```python
  @tf.function
  def train():
      model.fit(X_train, Y_train, epochs=10)

  train()
  ```

- **Eager execution.** Explicitly enable eager execution (if not already) to facilitate debugging:

  ```python
  tf.config.run_functions_eagerly(True)
  ```

- **Data validation.** Confirm that `X_train` and `Y_train` are padded to the same sequence length (200), and ensure they are formatted as NumPy arrays or TensorFlow tensors.

**Additional Tips**

- **Handling masking with `TimeDistributed`.** If masking issues persist in the `TimeDistributed` layer, manually handle the mask by ensuring its propagation:

  ```python
  keras.layers.TimeDistributed(
      keras.layers.Dense(units=tags_len, activation="softmax")
  )
  ```

- **Debug input shapes.** Print the shapes of inputs and outputs at each step to ensure consistency:

  ```python
  print(X_train.shape, Y_train.shape)
  ```
| closed | 2025-01-17T21:05:43Z | 2025-01-20T06:48:39Z | https://github.com/keras-team/keras/issues/20778 | [] | ARforyou | 1 |
kymatio/kymatio | numpy | 357 | `cdgmm` doesn't check that both inputs are on the same device | This is at least true for the 2D torch `cdgmm`. It only complains when you try to multiply tensors. We should throw an error instead. | closed | 2019-03-01T21:18:40Z | 2019-05-29T11:09:12Z | https://github.com/kymatio/kymatio/issues/357 | [
"bug"
] | janden | 3 |
litestar-org/polyfactory | pydantic | 79 | How to access faker to generate realistic value | Hello, thanks for this useful project! 👍
As for the title of my question, in factory_boy we can access faker like this:
```
class RandomUserFactory(factory.Factory):
class Meta:
model = models.User
first_name = factory.Faker('first_name')
last_name = factory.Faker('last_name')
```
The goal is to generate realistic values, as for example first names, that will be handled by faker.
How to access faker, in a similar way it is possible in factory_boy?
I was able with
```
class PersonFactory(ModelFactory[Person]):
__model__ = Person
name = ModelFactory._get_faker().first_name
```
But since `_get_faker()` is private, I was wondering why that is, and whether there is a public way to do this.
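As a generic illustration of the pattern at play (plain Python only, no polyfactory imports, and `FIRST_NAMES` is a made-up stand-in for faker's data): the factory-style approach is to pass a zero-argument callable that gets evaluated once per generated instance, instead of reaching into private helpers.

```python
import random

FIRST_NAMES = ["Ada", "Grace", "Alan", "Barbara"]  # stand-in for faker output

def first_name():
    """Zero-argument provider, evaluated once per generated instance."""
    return random.choice(FIRST_NAMES)

def build(fields):
    """Mimics a factory calling field providers lazily at build time."""
    return {name: provider() for name, provider in fields.items()}

person = build({"name": first_name})
print(person["name"] in FIRST_NAMES)  # True
```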
| closed | 2022-10-08T23:59:06Z | 2022-10-09T09:15:34Z | https://github.com/litestar-org/polyfactory/issues/79 | [
"enhancement"
] | LeonardoGentile | 1 |
pandas-dev/pandas | data-science | 60,831 | BUG: `pd.Series.groupby` issues `FutureWarning` | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
# Some index that are not integers
index = pd.date_range(start='2000-01-01', periods=3, freq='YS')
# Integer as a name
data = pd.Series([1, 2, 3], index=index, name=2)
data.groupby(data) # FutureWarning: Series.__getitem__ treating keys as positions is deprecated. In a future version, integer keys will always be treated as labels (consistent with DataFrame behavior). To access a value by position, use `ser.iloc[pos]`
```
### Issue Description
The warning comes from this line:
https://github.com/pandas-dev/pandas/blob/0691c5cf90477d3503834d983f69350f250a6ff7/pandas/core/groupby/grouper.py#L1015
Here, `gpr.name` is `2`, which results in the warning. If the index consists of integers, this means `obj[gpr.name]` will actually return the element at that label (which then fails the comparison).
I have not checked on the main branch, but the same line might be present here:
https://github.com/pandas-dev/pandas/blob/d72f165eb327898b1597efe75ff8b54032c3ae7b/pandas/core/groupby/grouper.py#L853
### Expected Behavior
Don't issue `FutureWarning`
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.13.0
python-bits : 64
OS : Windows
OS-release : 11
Version : 10.0.22621
machine : AMD64
processor : Intel64 Family 6 Model 186 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : English_United Kingdom.1252
pandas : 2.2.3
numpy : 2.2.2
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 25.0
Cython : None
sphinx : None
IPython : 8.32.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : 3.10.0
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.15.1
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : 0.23.0
tzdata : 2025.1
qtpy : None
pyqt5 : None
</details>
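A possible interim workaround (a sketch, not an endorsed fix): group by the raw values so the integer name is never resolved as a label.

```python
import pandas as pd

# Non-integer index, integer series name: the setup from the report above
data = pd.Series([1, 2, 3], index=list("abc"), name=2)

# Grouping by the underlying values sidesteps the Series-name lookup entirely,
# so no FutureWarning about positional __getitem__ is triggered.
grouped = data.groupby(data.to_numpy()).sum()
print(grouped.to_dict())  # {1: 1, 2: 2, 3: 3}
```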
| closed | 2025-02-02T18:03:53Z | 2025-02-22T15:38:39Z | https://github.com/pandas-dev/pandas/issues/60831 | [
"Bug",
"Groupby",
"Warnings"
] | TiborVoelcker | 7 |
serengil/deepface | deep-learning | 690 | deepface.stream() ERROR | I am using Python 3.10.5 in VS Code and getting this error, and I have not been able to find any solution for it. Can you help me?
Code:
from deepface import DeepFace
DeepFace.stream(db_path=r"C:\Users\HP\Desktop\studies\coding\python\detection\database")
Error:
File "c:\Users\HP\Desktop\studies\coding\python\detection\streamdetection.py", line 36, in <module>
DeepFace.stream(db_path=r'C:\Users\HP\Desktop\studies\coding\python\detection\database',model_name="VGG-Face")
File "C:\Users\HP\AppData\Roaming\Python\Python310\site-packages\deepface\DeepFace.py", line 590, in stream
realtime.analysis(db_path, model_name, detector_backend, distance_metric, enable_face_analysis
File "C:\Users\HP\AppData\Roaming\Python\Python310\site-packages\deepface\commons\realtime.py", line 97, in analysis
img = functions.preprocess_face(img = employee, target_size = (input_shape_y, input_shape_x), enforce_detection = False, detector_backend = detector_backend)
AttributeError: module 'deepface.commons.functions' has no attribute 'preprocess_face'. Did you mean: 'preprocess_input'? | closed | 2023-03-04T07:24:01Z | 2023-03-05T07:58:27Z | https://github.com/serengil/deepface/issues/690 | [
"question"
] | Aditya6371 | 1 |
tox-dev/tox | automation | 3,037 | `tox -e docs` not making `index.html` | ## Issue
<!-- Describe what's the expected behaviour and what you're observing. -->
Just sharing some hiccups encountered with `tox` developers setup.
The first hiccup is that the development guide should include an explicit install step such as `pip install tox` or `pip install -e .`, since an installed `tox` is required to run `tox -e docs`. The [Setup](https://tox.wiki/en/latest/development.html#setup) section mentions using `pipx` to install `tox`, but IMO the guide should just spell out `pipx install tox`.
Next, when I run `tox -e docs` as documented in [Building documentation](https://tox.wiki/en/latest/development.html#building-documentation), `index.html` does not show up in `docs_out`.
What I see is:
```tree
.tox/docs_out
├── output.json
└── output.txt
```
## Environment
Provide at least:
- OS: macOS Monterey version 12.6
- `pip list` of the host Python where `tox` is installed:
```python
Package Version
------------- -------
cachetools 5.3.1
chardet 5.1.0
colorama 0.4.6
distlib 0.3.6
filelock 3.12.2
packaging 23.1
pip 23.0.1
platformdirs 3.5.3
pluggy 1.0.0
pyproject-api 1.5.2
setuptools 65.5.0
tomli 2.0.1
tox 4.6.2
virtualenv 20.23.1
```
## Output of running tox
Provide the output of `tox -rvv`:
## Minimal example
If possible, provide a minimal reproducer for the issue:
```console
tox -e docs
```
Which runs
```console
sphinx-build -d /path/to/code/tox/.tox/docs/tmp/doctree docs /path/to/code/tox/.tox/docs_out --color -b html -b linkcheck -W
python -c 'print(r"documentation available under file:///path.to/code/tox/.tox/docs_out/index.html")'
```
Which outputs:
```
( development: line 138) ok https://www.kessler.de/prd/smartbear/BestPracticesForPeerCodeReview.pdf
( index: line 4) ok https://www.devpi.net
build succeeded.
Look for any errors in the above output or in .tox/docs_out/output.txt
docs: commands[1]> python -c 'print(r"documentation available under file:///path/to/code/tox/.tox/docs_out/index.html")'
documentation available under file:///path/to/code/tox/.tox/docs_out/index.html
.pkg: _exit> python /path/to/.pyenv/versions/3.10.11/lib/python3.10/site-packages/pyproject_api/_backend.py True hatchling.build
docs: OK (28.41=setup[0.83]+cmd[27.55,0.03] seconds)
congratulations :) (28.55 seconds)
```
| closed | 2023-06-17T05:47:05Z | 2023-06-20T16:56:29Z | https://github.com/tox-dev/tox/issues/3037 | [] | jamesbraza | 1 |
autokey/autokey | automation | 998 | Ubuntu installation process needs to be revised for Wayland | ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Wayland
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Enhancement
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [ ] autokey-gtk
- [ ] autokey-qt
- [ ] beta
- [ ] bug
- [ ] critical
- [ ] development
- [ ] documentation
- [X] enhancement
- [X] installation/configuration
- [ ] phrase expansion
- [ ] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
develop branch
### Which Linux distribution did you use?
Ubuntu 24.04
### Which AutoKey GUI did you use?
Both
### Which AutoKey version did you use?
The develop branch
### How did you install AutoKey?
Manually, following the instructions in wayland_install.md
### Can you briefly describe the issue?
The existing process to build Ubuntu packages does not not address the new steps that are required to install the AutoKey for Wayland code.
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
The wayland_install.md file in the develop branch documents the manual steps that must be performed during installation. The Ubuntu installation process will need to be updated to handle the following steps described in that document in an automated manner:
>3.1) Install the autokey-gnome-extension GNOME Shell extension<br>3.2) Make system configuration changes to enable use of the uinput interface
The Ubuntu installation process will also need to be updated to provide the user with instructions and tools to help them perform the manual steps required to configure their userids to use AutoKey with Wayland:
>3.3 Reboot<br>3.4 Enable the GNOME Shell extension and add your userid to the "input" user group
I have created a solution for this for Fedora installations of AutoKey. The assets I created may help provide suggestions during the development of the equivalent process for Ubuntu. These assets include:
- the **fedora/autokey.spec** file which handles the automated tasks, 3.1 and 3.2, and provides a prompt to the user during installation
- the **autokey-user-config** script which can probably be used in Ubuntu as well to address step 3.4
### What should have happened?
_No response_
### What actually happened?
_No response_
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_
| open | 2024-12-22T05:47:39Z | 2024-12-22T07:30:13Z | https://github.com/autokey/autokey/issues/998 | [
"enhancement",
"installation/configuration",
"development",
"wayland"
] | dlk3 | 0 |
psf/requests | python | 6,112 | What is the correct way to pass the proxy username and proxy password ? | Hey,
After reading and trying out many things, I am confused about how the proxy username and password are passed in the following:
proxy = {'https' : 'http://{username}:{password}@host:port',
'http': 'http://{username}:{password}@host:port' }
requests.get(url, proxies=proxy)
1. Do I have to use quote() from urllib.parse to encode the username and password before making the proxy dict ?
2. My username and password contain an "@" sign and work fine without encoding. Do I still have to encode any other characters?
Version
requests = 2.20.0
urllib3 = 1.24.2
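For reference on question 1, a sketch using only the standard library (the credentials and proxy endpoint below are made up): percent-encode each component with `quote(..., safe="")` so reserved characters like `@` and `:` cannot be confused with the URL's own delimiters, then build the proxy dict.

```python
from urllib.parse import quote

username = "user@example"          # made-up credentials with reserved chars
password = "p@ss:word"
host, port = "proxy.local", 8080   # placeholder proxy endpoint

# safe="" ensures '@', ':' and '/' are all percent-encoded
creds = f"{quote(username, safe='')}:{quote(password, safe='')}"
proxy_url = f"http://{creds}@{host}:{port}"

proxies = {"http": proxy_url, "https": proxy_url}
print(proxy_url)
# http://user%40example:p%40ss%3Aword@proxy.local:8080
```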
| closed | 2022-04-26T18:12:02Z | 2023-04-28T00:03:31Z | https://github.com/psf/requests/issues/6112 | [] | pghole | 3 |
plotly/dash-html-components | dash | 60 | Re-sync development toolchain | ~This line causes an infinite loop:
https://github.com/plotly/dash-html-components/blob/2692da82580ebdab08c40a0222642214257619b7/package.json#L27~
Actually, I think we just need to get rid of the old toolchain stuff, e.g. `npm run prepublish` doesn't work anymore. | closed | 2018-08-29T20:36:42Z | 2018-09-10T18:37:11Z | https://github.com/plotly/dash-html-components/issues/60 | [] | rmarren1 | 0 |
dynaconf/dynaconf | flask | 762 | [RFC] Put newest version of documentation on readthedocs (dynaconf.com seems to be expired) | **Is your feature request related to a problem? Please describe.**
RTD only contains documentation for v2.2.3, and dynaconf.com seems to be expired, so the current version of the documentation can only be seen using Google cache for dynaconf.com, AFAIK.
**Describe the solution you'd like**
Update the documentation on RTD.
**Describe alternatives you've considered**
None
**Additional context**
None
| closed | 2022-06-24T07:38:06Z | 2022-06-24T11:56:50Z | https://github.com/dynaconf/dynaconf/issues/762 | [
"Not a Bug",
"RFC"
] | nizwiz | 2 |
ContextLab/hypertools | data-visualization | 96 | animate = static, rotate or trajectory | Instead of setting animate to `True` or `False`, we could support (at least) 3 plot options:
1. static plots - this would be the default plot style
2. rotating plots - with this option, the data would be static, but the camera would rotate around the plot, so that all the data can be visualized easily
3. trajectory - this would be the same as the current plot when passing `animate=True`, good for visualizing timeseries data | closed | 2017-04-18T14:53:23Z | 2017-04-25T18:12:24Z | https://github.com/ContextLab/hypertools/issues/96 | [
"enhancement",
"help wanted"
] | andrewheusser | 5 |
ultralytics/yolov5 | pytorch | 13,437 | Setting Custom Anchors in YOLOv5 for Mismatched Bounding Box Sizes | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi,
I'm facing a challenge with my object detection model. My training data is quite limited, so I am using a different dataset for training and reserving my primary dataset for testing. However, there's a significant mismatch between the bounding box sizes in the training and test datasets. The objects in the test dataset are much closer to the camera, resulting in larger bounding boxes compared to the training dataset.
To address this, I decided to use the anchor sizes derived from the test dataset as custom anchors during the training process. After researching and experimenting, I found a way to set custom anchors by:
1. Modifying the **hyperparameter file** to include my custom anchor sizes:

2. Changing a line in the YOLOv5 source code (`yolov5/models/yolo.py`):

3. Enabling the `--noautoanchor` flag during training.
I tried using the `model.yaml` file to set custom anchors as referenced in this [[GitHub issue](https://github.com/ultralytics/yolov5/issues/6838)](https://github.com/ultralytics/yolov5/issues/6838), but that approach didn't work for me.
Was my approach to setting custom anchors correct? Are there better or more efficient ways to achieve this? I'd greatly appreciate any insights or feedback.
Thanks in advance!
### Additional
_No response_ | closed | 2024-11-28T02:10:29Z | 2024-12-03T07:05:24Z | https://github.com/ultralytics/yolov5/issues/13437 | [
"question",
"detect"
] | shinzokuro | 2 |
bendichter/brokenaxes | matplotlib | 6 | Recover axes to origin with zoomed_inset_axes | I want to zoom in on part of a line plot and use broken axes for another part of the plot. After following your answer [here](https://stackoverflow.com/a/43684155/6829195), my broken axes works. But when I add a zoomed_inset_axes, the plot is split into two: one is almost empty, and my original plot with the broken axes is still there.
```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import brokenaxes as bka
from mpl_toolkits.axes_grid1.inset_locator import zoomed_inset_axes, mark_inset

solvers = ['s0', 's1', 's2', 's3']
colors = ['orange', 'dodgerblue', 'limegreen', 'pink', 'gray', 'purple', 'brown']
markers = ['d', 's', 'x', '+', 'o', '^', 'p']
def time_sovled(file):
data = pd.read_csv(file)
'''
s0,s1,s2,s3
0.01,0.02,0.03,0.04
0.02,0.03,0.04,0.05
0.03,0.04,0.05,0.06
'''
times = []
for solver in solvers:
time = data[solver]
time = [t for t in time if t <= 15.0]
times.append(time)
rows = len(times[1])
Xticks =[int(x) for x in np.linspace(0, rows, 51)]
zoom = []
bax = bka.brokenaxes(xlims=((0, int(rows/5)), (rows-1500, 7000)), hspace=.05)
for i, time in enumerate(times):
cumsum = []
for tick in Xticks:
cumsum.append(sum(time[:int(tick)]))
plt.xlim(0, rows+500)
bax.plot(Xticks, cumsum, label=solvers[i], color=colors[i], marker=markers[i], markersize=5)
zoom.append(cumsum[-5:])
# this line is the cause of problem, but an original axes is needed
fig, ax = plt.subplots()
plt.xlabel('Solved instances')
plt.ylabel('Cumulative running time/s')
plt.legend()
# draw zoom-in plot
# with this method the plot doesn't work
axins = zoomed_inset_axes(ax, 2, loc=2)
axins.set_xlim(6000, 6600)
axins.set_ylim(65, 95)
plt.xticks(visible=False)
plt.yticks(visible=False)
axins.xaxis.set_visible(False)
axins.yaxis.set_visible(False)
mark_inset(ax, axins, loc1=2, loc2=4, fc="none", ec='gray')
for i, t in enumerate(zoom):
axins.plot(Xticks[-5:], zoom[i], color=colors[i], marker=markers[i])
```
What I'm thinking is that if I can recover the original axes, then my code should work. | closed | 2017-08-21T07:45:43Z | 2018-05-25T11:41:53Z | https://github.com/bendichter/brokenaxes/issues/6 | [] | zhangysh1995 | 2 |
iperov/DeepFaceLive | machine-learning | 83 | QPainter::begin: Paint device returned engine == 0, type: 2 | ```
Running DeepFaceLive.
QPainter::begin: Paint device returned engine == 0, type: 2
QPainter::setCompositionMode: Painter not active
QPainter::end: Painter not active, aborted
Traceback (most recent call last):
File "main.py", line 254, in <module>
main()
File "main.py", line 247, in main
args.func(args)
File "main.py", line 187, in run_DeepFaceLive
deep_face_live_app_inst = DeepFaceLiveApp(userdata_path=userdata_path)
File "/home/administrator/Projects/deepfacelive/apps/DeepFaceLive/DeepFaceLiveApp.py", line 238, in __init__
self.initialize()
File "/home/administrator/Projects/deepfacelive/apps/DeepFaceLive/DeepFaceLiveApp.py", line 254, in initialize
self.dfl_wnd = self._dfl_wnd = QDFLAppWindow(userdata_path=self.userdata_path, settings_dirpath=self.settings_dirpath)
File "/home/administrator/Projects/deepfacelive/apps/DeepFaceLive/DeepFaceLiveApp.py", line 197, in __init__
q_live_swap = self.q_live_swap = QLiveSwap(userdata_path=self._userdata_path, settings_dirpath=self._settings_dirpath)
File "/home/administrator/Projects/deepfacelive/apps/DeepFaceLive/DeepFaceLiveApp.py", line 69, in __init__
self.q_file_source = QFileSource(self.file_source)
File "/home/administrator/Projects/deepfacelive/apps/DeepFaceLive/ui/QFileSource.py", line 22, in __init__
self.q_input_paths = QPathEditCSWPaths(cs.input_paths)
File "/home/administrator/Projects/deepfacelive/apps/DeepFaceLive/ui/widgets/QPathEditCSWPaths.py", line 30, in __init__
btn_open = self._btn_open = qtx.QXPushButton(image=QXImageDB.folder_open_outline(color='light gray'),
File "/home/administrator/Projects/deepfacelive/xlib/qt/widgets/QXPushButton.py", line 28, in __init__
self._set_image(image)
File "/home/administrator/Projects/deepfacelive/xlib/qt/widgets/QXPushButton.py", line 88, in _set_image
self._update_icon_size()
File "/home/administrator/Projects/deepfacelive/xlib/qt/widgets/QXPushButton.py", line 71, in _update_icon_size
pixmap_aspect = size.width() / size.height()
ZeroDivisionError: division by zero
```
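For reference, the crash comes from a zero-height icon size in `_update_icon_size`. A defensive guard like the following would avoid the division by zero (a hypothetical patch sketch, with `FakeSize` standing in for Qt's `QSize` so it runs without Qt installed):

```python
def safe_aspect(size):
    """Return width/height, falling back to 1.0 when height is 0."""
    h = size.height()
    return size.width() / h if h else 1.0

class FakeSize:
    """Stand-in for QSize so the guard can be exercised without Qt."""
    def __init__(self, w, h):
        self._w, self._h = w, h
    def width(self):
        return self._w
    def height(self):
        return self._h

print(safe_aspect(FakeSize(0, 0)))    # 1.0 instead of ZeroDivisionError
print(safe_aspect(FakeSize(32, 16)))  # 2.0
```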
Ubuntu 22.04
Help me please! | closed | 2022-09-10T08:42:46Z | 2023-01-25T00:04:02Z | https://github.com/iperov/DeepFaceLive/issues/83 | [
"linux"
] | nikell28 | 1 |
yt-dlp/yt-dlp | python | 12,465 | binary search to find start of date range | ### Checklist
- [x] I'm requesting a feature unrelated to a specific site
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar requests **including closed ones**. DO NOT post duplicates
### Provide a description that is worded well enough to be understood
When I was downloading some old videos from a channel in a specific date range, yt-dlp seemed to walk the uploads one by one in reverse chronological order, and it took a long time to reach the videos at the start of the range. Since the uploads are already sorted by date, I feel like a binary search for the start of the date range would work well here.
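The idea reduces to a classic binary search over upload dates. A pure-Python illustration, assuming the flat playlist is strictly newest-first and with `get_upload_date` standing in for a single per-entry metadata fetch (none of this is yt-dlp's actual internals):

```python
# Fake reverse-chronological upload dates (newest first), as YYYYMMDD ints
DATES = [20240105, 20230610, 20211122, 20190801, 20170315, 20150504]

def get_upload_date(i):
    """Stand-in for fetching the upload date of playlist entry i."""
    return DATES[i]

def first_index_not_after(datebefore, n):
    """Smallest playlist index whose upload date is <= datebefore.

    Needs only O(log n) metadata fetches instead of walking every entry.
    """
    lo, hi = 0, n
    while lo < hi:
        mid = (lo + hi) // 2
        if get_upload_date(mid) <= datebefore:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(first_index_not_after(20200101, len(DATES)))  # -> 3 (first 2019 video)
```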
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
<details>
```shell
--> yt-dlp -vU --newline --progress -i --write-description --write-info-json --write-thumbnail --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36" --concurrent-fragments 99999 --windows-filenames --cookies-from-browser brave --embed-metadata --embed-thumbnail --audio-multistreams --sponsorblock-chapter-title --live-from-start --video-multistreams --no-keep-video --output "f:/videos/yt_downloads/_main/%(uploader)s/%(playlist|)s/%(playlist_index|)s%(playlist_index& ----- |)s%(title)s---%(upload_date>%Y-%m-%d)s---%(resolution)s-vbr;%(vbr)s--ext;%(ext)s---%(id)s--%(format_id)s" --dateafter 20160101 --datebefore 20200101 `
> "https://www.youtube.com/@channel_name"
[debug] Command-line config: ['-vU', '--newline', '--progress', '-i', '--write-description', '--write-info-json', '--write-thumbnail', '--user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '--concurrent-fragments', '99999', '--windows-filenames', '--cookies-from-browser', 'brave', '--embed-metadata', '--embed-thumbnail', '--audio-multistreams', '--sponsorblock-chapter-title', '--live-from-start', '--video-multistreams', '--no-keep-video', '--output', 'f:/videos/yt_downloads/_main/%(uploader)s/%(playlist|)s/%(playlist_index|)s%(playlist_index& ----- |)s%(title)s---%(upload_date>%Y-%m-%d)s---%(resolution)s-vbr;%(vbr)s--ext;%(ext)s---%(id)s--%(format_id)s', '--dateafter', '20160101', '--datebefore', '20200101', 'https://www.youtube.com/@channel_name']
[debug] Portable config "C:\Users\irl_name\scoop\apps\yt-dlp\current\yt-dlp.conf": []
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.01.15 from yt-dlp/yt-dlp [c8541f8b1] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.1-full_build-www.gyan.dev (setts), ffprobe 7.1-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.1
[debug] Proxy map: {}
Extracting cookies from brave
[debug] Extracting cookies from: "C:\Users\irl_name\AppData\Local\BraveSoftware\Brave-Browser\User Data\Default\Network\Cookies"
Attempting to unlock cookies
[debug] Found local state file at "C:\Users\irl_name\AppData\Local\BraveSoftware\Brave-Browser\User Data\Local State"
Extracted 2139 cookies from brave
[debug] cookie version breakdown: {'v10': 2100, 'other': 0, 'unencrypted': 50}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Plugin directories: ['C:\\Users\\irl_name\\AppData\\Roaming\\yt-dlp\\plugins\\yt-dlp-ChromeCookieUnlock\\yt_dlp_plugins']
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp/releases/latest/download/_update_spec
[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp/releases/download/2025.02.19/SHA2-256SUMS
Current version: stable@2025.01.15 from yt-dlp/yt-dlp
Latest version: stable@2025.02.19 from yt-dlp/yt-dlp
Current Build Hash: 16af96fe1ba8f373c0c085aa881b05b2d4a86a1757d0c1c40b6acc235c86af76
Updating to stable@2025.02.19 from yt-dlp/yt-dlp ...
[debug] Downloading yt-dlp.exe from https://github.com/yt-dlp/yt-dlp/releases/download/2025.02.19/yt-dlp.exe
Updated yt-dlp to stable@2025.02.19 from yt-dlp/yt-dlp
[debug] Restarting: C:\Users\irl_name\scoop\apps\yt-dlp\current\yt-dlp.exe -vU --newline --progress -i --write-description --write-info-json --write-thumbnail --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36" --concurrent-fragments 99999 --windows-filenames --cookies-from-browser brave --embed-metadata --embed-thumbnail --audio-multistreams --sponsorblock-chapter-title --live-from-start --video-multistreams --no-keep-video --output "f:/videos/yt_downloads/_main/%(uploader)s/%(playlist|)s/%(playlist_index|)s%(playlist_index& ----- |)s%(title)s---%(upload_date>%Y-%m-%d)s---%(resolution)s-vbr;%(vbr)s--ext;%(ext)s---%(id)s--%(format_id)s" --dateafter 20160101 --datebefore 20200101 https://www.youtube.com/@channel_name
[debug] Command-line config: ['-vU', '--newline', '--progress', '-i', '--write-description', '--write-info-json', '--write-thumbnail', '--user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', '--concurrent-fragments', '99999', '--windows-filenames', '--cookies-from-browser', 'brave', '--embed-metadata', '--embed-thumbnail', '--audio-multistreams', '--sponsorblock-chapter-title', '--live-from-start', '--video-multistreams', '--no-keep-video', '--output', 'f:/videos/yt_downloads/_main/%(uploader)s/%(playlist|)s/%(playlist_index|)s%(playlist_index& ----- |)s%(title)s---%(upload_date>%Y-%m-%d)s---%(resolution)s-vbr;%(vbr)s--ext;%(ext)s---%(id)s--%(format_id)s', '--dateafter', '20160101', '--datebefore', '20200101', 'https://www.youtube.com/@channel_name']
[debug] Portable config "C:\Users\irl_name\scoop\apps\yt-dlp\current\yt-dlp.conf": []
[debug] Encodings: locale cp65001, fs utf-8, pref cp65001, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.02.19 from yt-dlp/yt-dlp [4985a4041] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.1-full_build-www.gyan.dev (setts), ffprobe 7.1-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2025.01.31, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-15.0
[debug] Proxy map: {}
Extracting cookies from brave
[debug] Extracting cookies from: "C:\Users\irl_name\AppData\Local\BraveSoftware\Brave-Browser\User Data\Default\Network\Cookies"
Attempting to unlock cookies
[debug] Found local state file at "C:\Users\irl_name\AppData\Local\BraveSoftware\Brave-Browser\User Data\Local State"
Extracted 2139 cookies from brave
[debug] cookie version breakdown: {'v10': 2100, 'other': 0, 'unencrypted': 50}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Plugin directories: ['C:\\Users\\irl_name\\AppData\\Roaming\\yt-dlp\\plugins\\yt-dlp-ChromeCookieUnlock\\yt_dlp_plugins']
[debug] Loaded 1841 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2025.02.19 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2025.02.19 from yt-dlp/yt-dlp)
[debug] [youtube:tab] Found YouTube account cookies
[youtube:tab] Extracting URL: https://www.youtube.com/@channel_name
[youtube:tab] @channel_name: Downloading webpage
[debug] [youtube:tab] Selected tab: 'videos' (videos), Requested tab: ''
[youtube:tab] Downloading all uploads of the channel. To download only the videos in a specific tab, pass the tab's URL
[youtube:tab] @channel_name/streams: Downloading webpage
[debug] [youtube:tab] Selected tab: 'streams' (live), Requested tab: 'streams'
[youtube:tab] @channel_name/shorts: Downloading webpage
[debug] [youtube:tab] Selected tab: 'shorts' (shorts), Requested tab: 'shorts'
[youtube:tab] Downloading as multiple playlists, separated by tabs. To download as a single playlist instead, pass https://www.youtube.com/playlist?list=censored_playlist_id?
[download] Downloading playlist: channel_name
[info] Writing playlist metadata as JSON to: f:\videos\yt_downloads\_main\channel_name\channel_name\0 ----- channel_name---NA---NA-vbr;NA--ext;NA---@channel_name--NA.info.json
[info] Writing playlist description to: f:\videos\yt_downloads\_main\channel_name\channel_name\0 ----- channel_name---NA---NA-vbr;NA--ext;NA---@channel_name--NA.description
Deleting existing file f:\videos\yt_downloads\_main\channel_name\channel_name\0 ----- channel_name---NA---NA-vbr;NA--ext;NA---@channel_name--NA.jpg
[info] Downloading playlist thumbnail avatar_uncropped ...
[info] Writing playlist thumbnail avatar_uncropped to: f:\videos\yt_downloads\_main\channel_name\channel_name\0 ----- channel_name---NA---NA-vbr;NA--ext;NA---@channel_name--NA.jpg
[youtube:tab] Playlist channel_name: Downloading 3 items of 3
[download] Downloading item 1 of 3
[download] Downloading playlist: channel_name - Videos
[youtube:tab] censored_channel_id? page 1: Downloading API JSON
[youtube:tab] censored_channel_id? page 2: Downloading API JSON
[youtube:tab] censored_channel_id? page 3: Downloading API JSON
[youtube:tab] censored_channel_id? page 4: Downloading API JSON
[youtube:tab] censored_channel_id? page 5: Downloading API JSON
[youtube:tab] censored_channel_id? page 6: Downloading API JSON
[youtube:tab] censored_channel_id? page 7: Downloading API JSON
[youtube:tab] censored_channel_id? page 8: Downloading API JSON
[youtube:tab] censored_channel_id? page 9: Downloading API JSON
[youtube:tab] censored_channel_id? page 10: Downloading API JSON
[youtube:tab] censored_channel_id? page 11: Downloading API JSON
[youtube:tab] censored_channel_id? page 12: Downloading API JSON
[youtube:tab] censored_channel_id? page 13: Downloading API JSON
[youtube:tab] censored_channel_id? page 14: Downloading API JSON
[youtube:tab] censored_channel_id? page 15: Downloading API JSON
[youtube:tab] censored_channel_id? page 16: Downloading API JSON
[youtube:tab] censored_channel_id? page 17: Downloading API JSON
[youtube:tab] censored_channel_id? page 18: Downloading API JSON
[youtube:tab] censored_channel_id? page 19: Downloading API JSON
[youtube:tab] censored_channel_id? page 20: Downloading API JSON
[youtube:tab] censored_channel_id? page 21: Downloading API JSON
[youtube:tab] censored_channel_id? page 22: Downloading API JSON
[youtube:tab] censored_channel_id? page 23: Downloading API JSON
[youtube:tab] censored_channel_id? page 24: Downloading API JSON
[youtube:tab] censored_channel_id? page 25: Downloading API JSON
[youtube:tab] censored_channel_id? page 26: Downloading API JSON
[youtube:tab] censored_channel_id? page 27: Downloading API JSON
[youtube:tab] censored_channel_id? page 28: Downloading API JSON
[youtube:tab] censored_channel_id? page 29: Downloading API JSON
[youtube:tab] censored_channel_id? page 30: Downloading API JSON
[youtube:tab] censored_channel_id? page 31: Downloading API JSON
[youtube:tab] censored_channel_id? page 32: Downloading API JSON
[youtube:tab] censored_channel_id? page 33: Downloading API JSON
[youtube:tab] censored_channel_id? page 34: Downloading API JSON
[youtube:tab] censored_channel_id? page 35: Downloading API JSON
[youtube:tab] censored_channel_id? page 36: Downloading API JSON
[youtube:tab] censored_channel_id? page 37: Downloading API JSON
[youtube:tab] censored_channel_id? page 38: Downloading API JSON
[info] Writing playlist metadata as JSON to: f:\videos\yt_downloads\_main\channel_name\channel_name - Videos\0000 ----- channel_name - Videos---NA---NA-vbr;NA--ext;NA---censored_channel_id?--NA.info.json
[info] Writing playlist description to: f:\videos\yt_downloads\_main\channel_name\channel_name - Videos\0000 ----- channel_name - Videos---NA---NA-vbr;NA--ext;NA---censored_channel_id?--NA.description
Deleting existing file f:\videos\yt_downloads\_main\channel_name\channel_name - Videos\0000 ----- channel_name - Videos---NA---NA-vbr;NA--ext;NA---censored_channel_id?--NA.jpg
[info] Downloading playlist thumbnail avatar_uncropped ...
[info] Writing playlist thumbnail avatar_uncropped to: f:\videos\yt_downloads\_main\channel_name\channel_name - Videos\0000 ----- channel_name - Videos---NA---NA-vbr;NA--ext;NA---censored_channel_id?--NA.jpg
[youtube:tab] Playlist channel_name - Videos: Downloading 1155 items of 1155
[download] Downloading item 1 of 1155
[debug] [youtube] Found YouTube account cookies
[youtube] Extracting URL: https://www.youtube.com/watch?v=censored_video_id
[youtube] censored_video_id: Downloading webpage
[youtube] censored_video_id: Downloading tv client config
[youtube] censored_video_id: Downloading player c8dbda2a
[youtube] censored_video_id: Downloading tv player API JSON
[debug] Loading youtube-nsig.c8dbda2a from cache
[debug] Discarding old cache from version 2025.01.15 (needs 2025.02.19)
[debug] Saving youtube-nsig.c8dbda2a to cache
[debug] [youtube] Decrypted nsig censored_video_nsig? => censored_video_nsig?
[debug] Loading youtube-nsig.c8dbda2a from cache
[debug] [youtube] Decrypted nsig censored_video_nsig? => censored_video_nsig?
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[download] 2025-02-23 upload date is not in range 2016-01-01 to 2020-01-01
[download] Downloading item 2 of 1155
[youtube] Extracting URL: https://www.youtube.com/watch?v=censored_video_id
[youtube] censored_video_id: Downloading webpage
[youtube] censored_video_id: Downloading tv client config
[youtube] censored_video_id: Downloading tv player API JSON
[debug] Loading youtube-nsig.c8dbda2a from cache
[debug] [youtube] Decrypted nsig censored_video_nsig? => censored_video_nsig?
[debug] Loading youtube-nsig.c8dbda2a from cache
[debug] [youtube] Decrypted nsig censored_video_nsig? => censored_video_nsig?
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[download] 2025-02-22 upload date is not in range 2016-01-01 to 2020-01-01
[download] Downloading item 3 of 1155
[youtube] Extracting URL: https://www.youtube.com/watch?v=censored_video_id
[youtube] censored_video_id: Downloading webpage
[youtube] censored_video_id: Downloading tv client config
[youtube] censored_video_id: Downloading tv player API JSON
[debug] Loading youtube-nsig.c8dbda2a from cache
[debug] [youtube] Decrypted nsig censored_video_nsig? => censored_video_nsig?
[debug] Loading youtube-nsig.c8dbda2a from cache
[debug] [youtube] Decrypted nsig censored_video_nsig? => censored_video_nsig?
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[download] 2025-02-18 upload date is not in range 2016-01-01 to 2020-01-01
[download] Downloading item 4 of 1155
[youtube] Extracting URL: https://www.youtube.com/watch?v=censored_video_id
[youtube] censored_video_id: Downloading webpage
[youtube] censored_video_id: Downloading tv client config
[youtube] censored_video_id: Downloading tv player API JSON
[debug] Loading youtube-nsig.c8dbda2a from cache
[debug] [youtube] Decrypted nsig censored_video_nsig? => censored_video_nsig?
[debug] Loading youtube-nsig.c8dbda2a from cache
[debug] [youtube] Decrypted nsig censored_video_nsig? => censored_video_nsig?
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[download] 2025-02-13 upload date is not in range 2016-01-01 to 2020-01-01
ERROR: Interrupted by user
ERROR: Interrupted by user
```
</details> | closed | 2025-02-23T23:28:35Z | 2025-03-03T20:12:39Z | https://github.com/yt-dlp/yt-dlp/issues/12465 | [
"enhancement",
"wontfix"
] | BPplays | 3 |
pydantic/pydantic | pydantic | 10,598 | When overriding __init__ with enclosed generic classes pydantic raises spurious validation errors | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
I have a use case where I'm using python generics (`typing.Generic`) with pydantic models which are nested and where the nesting model has a custom `__init__` function which instantiates the nested model. I think the issue may somehow be with the generic code using the generic `typing.TypeVar` to instantiate rather than the concrete class, but I have been unable to figure out how to debug this as the traceback never actually goes down to the place where the exception is raised. This will be easier to explain with a working example, I think:
```python
import typing
import pydantic
GC = typing.TypeVar("GC", bound=pydantic.BaseModel)
class EnclosedGeneric(pydantic.BaseModel, typing.Generic[GC]):
    version: int
    model: GC

class EnclosingGeneric(pydantic.BaseModel, typing.Generic[GC]):
    enclosed: EnclosedGeneric[GC]

class EnclosingGenericWithInit(pydantic.BaseModel, typing.Generic[GC]):
    def __init__(self, version: int, model: GC):
        super().__init__(enclosed=EnclosedGeneric[GC](version=version, model=model))

    enclosed: EnclosedGeneric[GC]
```
When using `EnclosingGeneric` everything works as expected.
```python
class ConcreteModel(pydantic.BaseModel):
    field: str

EnclosingGeneric[ConcreteModel](
    enclosed=EnclosedGeneric[ConcreteModel](version=1, model=ConcreteModel(field="val"))
)
```
However, when using `EnclosingGenericWithInit`:
```python
EnclosingGenericWithInit[ConcreteModel](version=1, model=ConcreteModel(field="val"))
```
I get the following exception:
```
Traceback (most recent call last):
File "/Users/julia.patrin/src/sa-8078-solution-accelerator-framework/pydantic-generics.py", line 32, in <module>
EnclosingGenericWithInit[ConcreteModel](version=1, model=ConcreteModel(field="val"))
File "/Users/julia.patrin/src/sa-8078-solution-accelerator-framework/pydantic-generics.py", line 20, in __init__
super().__init__(enclosed=EnclosedGeneric[GC](version=version, model=model))
File "/Users/julia.patrin/src/sa-8078-solution-accelerator-framework/.venv/lib/python3.11/site-packages/pydantic/main.py", line 212, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for EnclosingGenericWithInit[ConcreteModel]
enclosed
Input should be a valid dictionary or instance of EnclosedGeneric[ConcreteModel] [type=model_type, input_value=EnclosedGeneric(version=1...creteModel(field='val')), input_type=EnclosedGeneric]
```
If I remove the generics entirely everything works fine:
```python
class Enclosed(pydantic.BaseModel):
    version: int
    model: pydantic.BaseModel

class Enclosing(pydantic.BaseModel):
    enclosed: Enclosed

class EnclosingWithInit(pydantic.BaseModel):
    def __init__(self, version: int, model: pydantic.BaseModel):
        super().__init__(enclosed=Enclosed(version=version, model=model))

    enclosed: Enclosed

Enclosing(enclosed=Enclosed(version=1, model=ConcreteModel(field="val")))
EnclosingWithInit(version=1, model=ConcreteModel(field="val"))
```
I did find a workaround which works with the generics, and that is passing in a dictionary instead of a concrete class in the custom `__init__` method, but that seems like a hack:
```python
class EnclosingGenericWithInitWorkaround(pydantic.BaseModel, typing.Generic[GC]):
    def __init__(self, version: int, model: GC):
        super().__init__(enclosed={"version": version, "model": model})

    enclosed: EnclosedGeneric[GC]

EnclosingGenericWithInitWorkaround[ConcreteModel](version=1, model=ConcreteModel(field="val"))
```
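For anyone digging into this, the root cause appears to be that parametrizing a generic class does not substitute the type variable at runtime: inside `__init__`, `GC` is still the bare `TypeVar`, not `ConcreteModel`. A stdlib-only sketch of that behavior (the `Box` class here is made up for illustration):
```python
import typing

T = typing.TypeVar("T")

class Box(typing.Generic[T]):
    def __init__(self, value: T):
        # At runtime T is not replaced by the concrete parameter,
        # so Box[int](...) still sees the bare TypeVar here:
        self.seen_param = T

box = Box[int](3)
print(box.seen_param)  # ~T (the TypeVar itself, not int)
```
So `EnclosedGeneric[GC](...)` inside `__init__` builds an instance parametrized by the unbound `TypeVar`, which presumably explains why the validator rejects it while the plain-dict workaround passes.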
### Example Code
```Python
import typing
import pydantic
GC = typing.TypeVar("GC", bound=pydantic.BaseModel)
class EnclosedGeneric(pydantic.BaseModel, typing.Generic[GC]):
    version: int
    model: GC

class EnclosingGenericWithInit(pydantic.BaseModel, typing.Generic[GC]):
    def __init__(self, version: int, model: GC):
        super().__init__(enclosed=EnclosedGeneric[GC](version=version, model=model))

    enclosed: EnclosedGeneric[GC]

class ConcreteModel(pydantic.BaseModel):
    field: str

EnclosingGenericWithInit[ConcreteModel](version=1, model=ConcreteModel(field="val"))
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: /Users/.../.venv/lib/python3.11/site-packages/pydantic
python version: 3.11.4 (main, Sep 19 2023, 17:11:25) [Clang 15.0.0 (clang-1500.0.40.1)]
platform: macOS-15.0-arm64-arm-64bit
related packages: mypy-1.11.2 typing_extensions-4.12.2
commit: unknown
```
| closed | 2024-10-10T18:03:43Z | 2024-10-11T19:53:43Z | https://github.com/pydantic/pydantic/issues/10598 | [
"bug V2",
"pending"
] | reversefold | 6 |
waditu/tushare | pandas | 1,230 | The daily API cannot be accessed | When calling the SDK from Python 3:
data = pro.daily(ts_code=tsc, start_date='20151231', end_date='20191221')
Exception: 抱歉系统错误,请反馈我们送积分,感谢!请求ID()  (roughly: "Sorry, system error. Please report it to us and we will send bonus points, thanks! Request ID()")
tsc="000001.SZ"
tushare 1.2.48
The call used to work before; it has kept failing ever since.
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 362 | سعيد |
I am not maintaining this repo anymore (I explain why in the readme).
I keep issues open only because some old ones are useful.
I will not assist you in any way.
| closed | 2020-06-13T19:50:33Z | 2020-06-13T19:51:08Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/362 | [] | mooss11 | 1 |
streamlit/streamlit | deep-learning | 10,611 | `st.chat_input` does not expand to accommodate a multiline placeholder like it does for a multiline prompt | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
If you type a long prompt (or multiline prompt) into `st.chat_input`, the widget will grow to accommodate the size. However, if the placeholder text is long or multilined, it won't.
### Reproducible Code Example
```Python
import streamlit as st
L,R = st.columns(2)
L.chat_input("Meow "*20)
R.chat_input("Meow\n"*20)
```
### Steps To Reproduce
_No response_
### Expected Behavior
A multiline or long placeholder should be fully visible.
### Current Behavior
<img width="803" alt="Image" src="https://github.com/user-attachments/assets/52ca0f72-a46e-43f3-859e-5c509dc78b40" />
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version:
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_ | open | 2025-03-03T18:39:27Z | 2025-03-03T19:09:21Z | https://github.com/streamlit/streamlit/issues/10611 | [
"type:bug",
"status:confirmed",
"priority:P3",
"feature:st.chat_input"
] | sfc-gh-dmatthews | 1 |
deezer/spleeter | tensorflow | 341 | [Bug] name your bug |
## Description
Used in both cases: Spleet gui
result:
Spleeter works fine on Win7, but produces this, on Win10:
## Step to reproduce
Installed:
python-3.8.2.exe
Miniconda3-latest-Windows-x86_64.exe
then (without errors):
pip install spleeter
conda install numba
## Output
Informationen über das Aufrufen von JIT-Debuggen
anstelle dieses Dialogfelds finden Sie am Ende dieser Meldung.
************** Ausnahmetext **************
System.Security.SecurityException: Der angeforderte Registrierungszugriff ist unzulässig.
bei System.ThrowHelper.ThrowSecurityException(ExceptionResource resource)
bei Microsoft.Win32.RegistryKey.OpenSubKey(String name, Boolean writable)
bei System.Environment.SetEnvironmentVariable(String variable, String value, EnvironmentVariableTarget target)
bei spleetGUI.Form1.addtopath()
bei spleetGUI.Form1.InstallFFMPEG()
bei spleetGUI.Form1.button1_Click(Object sender, EventArgs e)
bei System.Windows.Forms.Control.OnClick(EventArgs e)
bei System.Windows.Forms.Button.OnClick(EventArgs e)
bei System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
bei System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
bei System.Windows.Forms.Control.WndProc(Message& m)
bei System.Windows.Forms.ButtonBase.WndProc(Message& m)
bei System.Windows.Forms.Button.WndProc(Message& m)
bei System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
bei System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
bei System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
Die Zone der Assembly, bei der ein Fehler aufgetreten ist:
MyComputer
************** Geladene Assemblys **************
mscorlib
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1063.1 built by: NETFXREL3STAGE.
CodeBase: file:///C:/Windows/Microsoft.NET/Framework/v4.0.30319/mscorlib.dll.
----------------------------------------
spleetGUI
Assembly-Version: 1.0.0.0.
Win32-Version: 1.0.0.0.
CodeBase: file:///C:/OSTRIP/%5BTOOLS%5D/SpleetGUI.v2/SpleetGUI.exe.
----------------------------------------
System.Windows.Forms
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/System.Windows.Forms/v4.0_4.0.0.0__b77a5c561934e089/System.Windows.Forms.dll.
----------------------------------------
System
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/System/v4.0_4.0.0.0__b77a5c561934e089/System.dll.
----------------------------------------
System.Drawing
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1068.2 built by: NETFXREL3STAGE.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/System.Drawing/v4.0_4.0.0.0__b03f5f7f11d50a3a/System.Drawing.dll.
----------------------------------------
Accessibility
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/Accessibility/v4.0_4.0.0.0__b03f5f7f11d50a3a/Accessibility.dll.
----------------------------------------
mscorlib.resources
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/mscorlib.resources/v4.0_4.0.0.0_de_b77a5c561934e089/mscorlib.resources.dll.
----------------------------------------
System.Windows.Forms.resources
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/System.Windows.Forms.resources/v4.0_4.0.0.0_de_b77a5c561934e089/System.Windows.Forms.resources.dll.
----------------------------------------
************** JIT-Debuggen **************
Um das JIT-Debuggen (Just-In-Time) zu aktivieren, muss in der
Konfigurationsdatei der Anwendung oder des Computers
(machine.config) der jitDebugging-Wert im Abschnitt system.windows.forms festgelegt werden.
Die Anwendung muss mit aktiviertem Debuggen kompiliert werden.
Zum Beispiel:
<configuration>
<system.windows.forms jitDebugging="true" />
</configuration>
Wenn das JIT-Debuggen aktiviert ist, werden alle nicht behandelten
Ausnahmen an den JIT-Debugger gesendet, der auf dem
Computer registriert ist, und nicht in diesem Dialogfeld behandelt.
## Environment
Firewall: disabled.
Host file: untouched from stock windows 10
| | |
| ----------------- | ------------------------------- |
| OS | Windows 10|
| Installation type | Conda / pip |
| RAM available | 4 GB |
| Hardware spec | Fujitsu Q702, GPU: Intel HD Graphics 4000, Intel(R) i3-3217U @ 1.80 GHz |
## Additional context
 | closed | 2020-04-25T18:31:35Z | 2020-04-27T08:32:31Z | https://github.com/deezer/spleeter/issues/341 | [
"bug",
"invalid"
] | Ry3yr | 0 |
microsoft/MMdnn | tensorflow | 668 | Error when using the converted pytorch model(mxnet->IR->pytorch) | Platform (ubuntu 14.04):
Python version: 3.5
Source framework with version (like mxnet 1.0.0 with GPU):
Destination framework with version (like pytorch 0.4.0 with GPU):
Pre-trained model path (LResNet34E , https://github.com/Microsoft/MMdnn/blob/72529a9f6b17f7298eacb7b2a0dae6dc5b9ce408/mmdnn/conversion/pytorch/README.md):
Running scripts:
>>> import torch
>>> import imp
>>> import numpy as np
>>> MainModel = imp.load_source('MainModel', "tf_pytorch_vgg19.py")
>>> the_model = torch.load("tf_pytorch_vgg19.pth")
>>> the_model.eval()
>>> x = np.random.random([112,112,3])
>>> x = np.transpose(x, (2, 0, 1))
>>> x = np.expand_dims(x, 0).copy()
>>> data = torch.from_numpy(x)
>>> data = torch.autograd.Variable(data, requires_grad = False).float()
>>> predict = the_model(data)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/dbmeng/TooLdevkit/anaconda3/envs/gluon/lib/python3.6/site-packages/torch/nn/modules/module.py", line 491, in __call__
result = self.forward(*input, **kwargs)
File "IR50_2.py", line 163, in forward
self.minusscalar0_second = torch.autograd.Variable(torch.from_numpy(__weights_dict['minusscalar0_second']['value']), requires_grad=False)
NameError: name '_KitModel__weights_dict' is not defined
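This looks like it could be Python's class-private name mangling: inside a class body, any identifier with two leading underscores is rewritten to `_ClassName__name` at compile time. A minimal stdlib-only reproduction of the same `NameError` (names here are illustrative, not taken from the generated file):
```python
# Module-level dict, analogous to the weights dict in the generated code.
__weights_dict = {"minusscalar0_second": {"value": 1.0}}

class KitModel:
    def forward(self):
        # Inside the class, __weights_dict is textually mangled to
        # _KitModel__weights_dict, which was never defined at module level,
        # so this raises NameError: name '_KitModel__weights_dict' is not defined
        return __weights_dict["minusscalar0_second"]["value"]

try:
    KitModel().forward()
except NameError as err:
    print(err)
```
Since `torch.load` unpickles the model without running `__init__`, the mangled module-level global that `__init__` would normally populate never gets created. One thing worth trying (assuming the generated file exposes the usual `KitModel(weight_file)` constructor) is instantiating the class directly instead of unpickling the whole model, so that `__init__` runs and fills in the weights dict.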
| open | 2019-06-03T16:39:44Z | 2020-06-11T01:51:55Z | https://github.com/microsoft/MMdnn/issues/668 | [] | MengDebin18 | 3 |
pnkraemer/tueplots | matplotlib | 20 | Related packages | It could be useful to add information about related packages somewhere (to the readme?). This way we can mix and match (and also explain what we are doing differently)
There are for instance:
* Seaborn: https://seaborn.pydata.org/index.html
* ProPlot: https://proplot.readthedocs.io/en/latest/cycles.html
* SciencePlots: https://github.com/garrettj403/SciencePlots
Any others? | closed | 2021-12-09T11:08:16Z | 2021-12-10T14:48:54Z | https://github.com/pnkraemer/tueplots/issues/20 | [] | pnkraemer | 2 |
jupyterlab/jupyter-ai | jupyter | 593 | Jupyter AI Installation Failure | The installation of Jupyter AI fails in the notebook, as shown below:
<img width="491" alt="2024-01-23_01h15_44" src="https://github.com/jupyterlab/jupyter-ai/assets/18661896/05390a69-3f71-428f-8df1-1362b6751f59">
| closed | 2024-01-22T19:46:29Z | 2024-01-24T17:06:53Z | https://github.com/jupyterlab/jupyter-ai/issues/593 | [
"bug"
] | YashzAlphaGeek | 10 |
xuebinqin/U-2-Net | computer-vision | 261 | Does anyone have the u2netp model trained on a human portrait dataset? | Constrained by GPU memory and speed, using the u2net model trained on the human portrait dataset provided by the author fills up my GPU, even with a batch size of one. Therefore, I sincerely hope someone can provide u2netp, the small version of u2net, trained on the human portrait set. It is only 4.6 MB, which makes it a piece of cake to upload. Hope and thanks, sincerely. | open | 2021-10-23T04:03:56Z | 2021-10-23T04:03:56Z | https://github.com/xuebinqin/U-2-Net/issues/261 | [] | liuyishoua | 0 |
jupyter-incubator/sparkmagic | jupyter | 739 | pyspark should upload / import packages from local by default | **Is your feature request related to a problem? Please describe.**
When pyspark starts the connection to the Spark cluster (Livy), it should load the packages in the local folder by default (or at least offer a way to specify that), so users can use these packages in the Spark session as well.
For example, in the PySpark kernel, if I do:
```
%%local
import matplotlib
```
It loads successfully. This is expected because "local" uses the matplotlib package installed on the JupyterLab machine.
But if I do:
```
import matplotlib
Starting Spark application
ID YARN Application ID Kind State Spark UI Driver log Current session?
32 application_1636505612345_0200 pyspark idle Link Link ✔
SparkSession available as 'spark'.
An error was encountered:
No module named 'matplotlib'
Traceback (most recent call last):
ModuleNotFoundError: No module named 'matplotlib'
```
As we can see, it errors out. It can't find the package on the Spark cluster because in this case the code runs on the cluster.
**Describe the solution you'd like**
People may ask: why not just install those packages on the Spark cluster? Well, most of the time end users don't have permission to do that. A way for the pyspark kernel to upload the packages when it starts the Spark session would be really helpful! For example, a config set before starting the session, in which users can specify which packages to upload.
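For what it's worth, sparkmagic's `%%configure` magic can already pass session settings through to Livy, which may serve as a partial workaround until such an option exists. A sketch (the paths below are hypothetical; `-f` restarts the session, and the archive/zip must be reachable from the cluster):
```
%%configure -f
{"conf": {"spark.yarn.dist.archives": "hdfs:///some/path/pyenv.tar.gz#environment"},
 "pyFiles": ["hdfs:///some/path/deps.zip"]}
```
That still requires the packages to be staged somewhere the cluster can read, so it doesn't fully replace the requested "upload from local" behavior.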
| open | 2021-11-11T02:53:38Z | 2024-05-13T21:07:57Z | https://github.com/jupyter-incubator/sparkmagic/issues/739 | [] | kdzhao | 3 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 558 | [BUG] fetch_user_live_videos_by_room_id issue | ***On which platform did the error occur?***
Douyin
***Which endpoint produced the error?***
https://douyin.wtf/api/douyin/web/fetch_user_live_videos_by_room_id
The problem above keeps occurring. I wonder if anyone else has run into it??
{
"code": 200,
"router": "/api/douyin/web/fetch_user_live_videos_by_room_id",
"data": {
"data": {
"message": "Request params error",
"prompts": "Request params error\t"
},
"extra": {
"now": 1740106115957
},
"status_code": 10011
}
}
| closed | 2025-02-21T03:09:28Z | 2025-02-22T19:32:12Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/558 | [
"BUG"
] | cary1123 | 2 |
deeppavlov/DeepPavlov | tensorflow | 1,261 | ner_f1 metric computed wrongly |
**DeepPavlov version** (you can look it up by running `pip show deeppavlov`): 0.10.0
**Python version**: 3.7.7
**Operating system** (ubuntu linux, windows, ...): Ubuntu 16.04
**Issue**: The 'ner_f1' metric is computed wrongly because of bugs in function `chunk_finder()`.
**The code to reproduce the error**:
```python
from deeppavlov.metrics.fmeasure import precision_recall_f1
test_samples = [
    {
        # true chunks: (0, 1); found chunks: (0, 3)
        'y_true': ['B-TAG', 'I-TAG', 'O-TAG', 'O-TAG'],
        # true chunks: (2, 3); found_chunks: (0, 1), (2, 3)
        'y_pred': ['O-TAG', 'O-TAG', 'B-TAG', 'I-TAG'],
        'correct_metrics': {
            'precision': 0, 'recall': 0, 'f1': 0, 'tp': 0, 'tn': 0, 'fp': 1, 'fn': 1}
    },
    {
        'y_true': ['B-TAG', 'E-TAG', 'O-TAG'],
        'y_pred': ['O-TAG', 'O-TAG', 'S-TAG'],
        'correct_metrics': {
            'precision': 0, 'recall': 0, 'f1': 0, 'tp': 0, 'tn': 0, 'fp': 1, 'fn': 1}
    }
]

for test_sample in test_samples:
    print("y_true:", test_sample['y_true'])
    print("y_pred:", test_sample['y_pred'])
    print(
        "calculated_metrics:\n",
        precision_recall_f1(
            test_sample['y_true'],
            test_sample['y_pred'],
            print_results=False)
    )
    print(
        "correct metrics:\n",
        test_sample['correct_metrics']
    )
    print("#" * 40)
```
| closed | 2020-06-30T12:21:03Z | 2020-06-30T12:27:51Z | https://github.com/deeppavlov/DeepPavlov/issues/1261 | [
"bug"
] | PeganovAnton | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,603 | [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed - error while running the server on macOS | For macOS users, it's highly likely that they may encounter the `SSL: CERTIFICATE_VERIFY_FAILED` problem when starting the visdom server.
Fortunately, there is a remedy. Depending on the Python version in use (mine is 3.9), attempt to execute the following command to install certificates:
```
/Applications/Python\ 3.9/Install\ Certificates.command
```
Then you should be able to run the server without any other issues.
| open | 2023-10-13T23:51:07Z | 2023-10-13T23:51:07Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1603 | [] | foxtrotdev | 0 |
dgtlmoon/changedetection.io | web-scraping | 2,756 | Price tracker - Exception: could not convert string to float: '' | all versions
https://www.firestormcards.co.uk/POK85321 | closed | 2024-10-31T11:09:39Z | 2024-11-01T09:56:28Z | https://github.com/dgtlmoon/changedetection.io/issues/2756 | [
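For context, this is presumably the underlying Python error when the extracted price text comes back empty; a one-line reproduction independent of the site:
```python
try:
    float("")  # what a price parser hits when extraction yields an empty string
except ValueError as err:
    print(err)  # could not convert string to float: ''
```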
"triage"
] | dgtlmoon | 0 |
deezer/spleeter | tensorflow | 49 | [Bug] Spleeter hangs endlessly on "Tears of a Clown" | ## Description
In this environment, Spleeter hangs endlessly on "Tears of a Clown" and certain other songs. It works for the demo example and the majority of songs, but loops endlessly on some.
The hang is similar whether using the spleeter-cpu or spleeter-gpu activation. The hang is similar whether using 2 stem or 4 stem output. The hang is similar whether using WAV or MP3 input format.
## Step to reproduce
git clone https://github.com/Deezer/spleeter
conda env create -f spleeter/conda/spleeter-cpu.yaml
conda env create -f spleeter/conda/spleeter-gpu.yaml
conda activate spleeter-cpu
or
conda activate spleeter-gpu
## Output
```bash
(spleeter-cpu) E:\git\people\deezer>spleeter separate -i spleeter/spleeter separate -i spleeter/SmokeyRobinsonTheMiracles-2004-Motown1s-17-TearsofaClown.m4a -p spleeter:2stems -o output
INFO:tensorflow:Using config: {'_model_dir': 'pretrained_models\\2stems', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': gpu_options {
per_process_gpu_memory_fraction: 0.7
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x0000017870639240>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
INFO:tensorflow:Calling model_fn.
WARNING:tensorflow:From c:\programdata\miniconda3\envs\spleeter-cpu\lib\site-packages\spleeter\model\functions\unet.py:29: The name tf.keras.initializers.he_uniform is deprecated. Please use tf.compat.v1.keras.initializers.he_uniform instead.
INFO:tensorflow:Apply unet for vocals_spectrogram
INFO:tensorflow:Apply unet for accompaniment_spectrogram
INFO:tensorflow:Done calling model_fn.
WARNING:tensorflow:From c:\programdata\miniconda3\envs\spleeter-cpu\lib\site-packages\tensorflow\python\ops\array_ops.py:1354: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
INFO:tensorflow:Graph was finalized.
WARNING:tensorflow:From c:\programdata\miniconda3\envs\spleeter-cpu\lib\site-packages\tensorflow\python\training\saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from pretrained_models\2stems\model
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
WARNING:tensorflow:The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.string
WARNING:tensorflow:The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.int32
WARNING:tensorflow:The dtype of the watched tensor must be floating (e.g. tf.float32), got tf.string
INFO:tensorflow:Loading audio b'spleeter/SmokeyRobinsonTheMiracles-2004-Motown1s-17-TearsofaClown.m4a' from 0.0 to 600.0
```
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Windows 10 64 bit |
| Installation type | Conda |
| RAM available | 32G |
| Hardware spec | CPU is Intel i7-6700K @ 4 GHz / GPU is NVidea GeForce GTX 1070
## Additional context
The particular song I used was Smokey Robinson and the Miracles "Tears of a Clown" from the 2004 CD Motown 1s. The song has changing tempo and some flutes and oboes that might make it tough to separate.
| closed | 2019-11-07T14:20:20Z | 2019-11-07T15:40:40Z | https://github.com/deezer/spleeter/issues/49 | [
"bug",
"duplicate"
] | beckerdo | 2 |
LAION-AI/Open-Assistant | python | 3,232 | Login failure by SSO | I registered for open assistant data platform when only email was available.
Now I have changed my device, and the email is not even in spam; using Google SSO I can't log in because it's a different auth type.
Because I just reached the top 50 it would be sad if I need to change my account and start from scratch again.
the email is flozi00@gmail.com | closed | 2023-05-26T06:57:24Z | 2023-08-27T21:10:35Z | https://github.com/LAION-AI/Open-Assistant/issues/3232 | [
"bug",
"website"
] | flozi00 | 9 |
dynaconf/dynaconf | flask | 1,093 | [RFC] Add option to perform a self token renew when using vault | **Is your feature request related to a problem? Please describe.**
I'm using the dynaconf vault loader with a simple token, but since the token is not renewed, dynaconf fails to authenticate.
**Describe the solution you'd like**
Add a configuration variable to perform `client.auth.token.self_renew()`
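As a sketch of what such an option could drive (the method name is copied from the request above; the one-hour default interval and the client shape are assumptions, not dynaconf features), a daemon timer that keeps calling the renew endpoint:

```python
import threading

def start_token_renewal(client, interval_s=3600):
    """Renew the Vault token now, and again every interval_s seconds."""
    def _renew():
        client.auth.token.self_renew()  # method name as proposed above
        timer = threading.Timer(interval_s, _renew)
        timer.daemon = True  # don't keep the process alive just for renewals
        timer.start()
    _renew()
```

With an `hvac`-style client this could run once at startup; the exact renew method name depends on the client library version.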
**Describe alternatives you've considered**
Switch to `approle` authentication
| open | 2024-04-29T15:10:03Z | 2024-07-08T18:37:54Z | https://github.com/dynaconf/dynaconf/issues/1093 | [
"Not a Bug",
"RFC"
] | asyd | 0 |
pallets-eco/flask-sqlalchemy | sqlalchemy | 685 | Supported Python and dependency versions | I'd like to get a version support matrix defined and documented. Currently, tox is setup to run
* Python 2.6, 2.7, 3.3-3.6, pypy w/ Flask and SA at their latest released versions
* Some of those same versions w/ Flask 0.10 and SA 0.8
I would assume support for 2.6 and 3.3 should be able to be dropped considering tox won't even run those versions. See: https://github.com/pallets/flask-sqlalchemy/pull/684
However, can we "officially" do that without a major point release? refs: #682 | closed | 2019-03-08T22:11:35Z | 2020-12-05T19:58:29Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/685 | [] | rsyring | 8 |
rthalley/dnspython | asyncio | 622 | mx-2-0.pickle missing from 2.1.0 release | `tests/mx-2-0.pickle` isn't included in `dnspython-2.1.0.zip`, which causes the test suite to fail.
I guess `*.pickle` needs adding to the manifest as a pattern to include? | closed | 2021-01-08T10:46:45Z | 2021-01-08T12:49:12Z | https://github.com/rthalley/dnspython/issues/622 | [
"Bug",
"Fixed"
] | atsampson | 1 |
timkpaine/lantern | plotly | 76 | matplotlib scatter and bubble no legend | closed | 2017-10-18T02:03:08Z | 2018-09-19T04:21:47Z | https://github.com/timkpaine/lantern/issues/76 | [
"bug",
"matplotlib/seaborn"
] | timkpaine | 0 | |
xinntao/Real-ESRGAN | pytorch | 588 | When training Real-ESRNet, performing validation reports a FileNotFoundError | When training Real-ESRNet, if I uncomment the validation sections and run, the program reports a FileNotFoundError.
```yaml
# general settings
name: train_RealESRNetx4plus_1000k_B12G4
model_type: RealESRNetModel
scale: 4
num_gpu: auto # auto: can infer from your visible devices automatically. official: 4 GPUs
manual_seed: 0
# ----------------- options for synthesizing training data in RealESRNetModel ----------------- #
gt_usm: True # USM the ground-truth
# the first degradation process
resize_prob: [0.2, 0.7, 0.1] # up, down, keep
resize_range: [0.15, 1.5]
gaussian_noise_prob: 0.5
noise_range: [1, 30]
poisson_scale_range: [0.05, 3]
gray_noise_prob: 0.4
jpeg_range: [30, 95]
# the second degradation process
second_blur_prob: 0.8
resize_prob2: [0.3, 0.4, 0.3] # up, down, keep
resize_range2: [0.3, 1.2]
gaussian_noise_prob2: 0.5
noise_range2: [1, 25]
poisson_scale_range2: [0.05, 2.5]
gray_noise_prob2: 0.4
jpeg_range2: [30, 95]
gt_size: 256
queue_size: 180
# dataset and data loader settings
datasets:
train:
name: CT_COVID
type: RealESRGANDataset
dataroot_gt: datasets/CT_COVID
meta_info: datasets/CT_COVID/meta_info/meta_info_CT_COVID_multiscale.txt
io_backend:
type: disk
blur_kernel_size: 21
kernel_list: ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso']
kernel_prob: [0.45, 0.25, 0.12, 0.03, 0.12, 0.03]
sinc_prob: 0.1
blur_sigma: [0.2, 3]
betag_range: [0.5, 4]
betap_range: [1, 2]
blur_kernel_size2: 21
kernel_list2: ['iso', 'aniso', 'generalized_iso', 'generalized_aniso', 'plateau_iso', 'plateau_aniso']
kernel_prob2: [0.45, 0.25, 0.12, 0.03, 0.12, 0.03]
sinc_prob2: 0.1
blur_sigma2: [0.2, 1.5]
betag_range2: [0.5, 4]
betap_range2: [1, 2]
final_sinc_prob: 0.8
gt_size: 256
use_hflip: True
use_rot: False
# data loader
use_shuffle: true
num_worker_per_gpu: 5
batch_size_per_gpu: 12
dataset_enlarge_ratio: 1
prefetch_mode: ~
# Uncomment these for validation
val:
name: validation
type: PairedImageDataset
dataroot_gt: path_to_gt
dataroot_lq: path_to_lq
io_backend:
type: disk
# network structures
network_g:
type: RRDBNet
num_in_ch: 3
num_out_ch: 3
num_feat: 64
num_block: 23
num_grow_ch: 32
# path
path:
pretrain_network_g: experiments/pretrained_models/ESRGAN_SRx4_DF2KOST_official-ff704c30.pth
param_key_g: params_ema
strict_load_g: true
resume_state: ~
# training settings
train:
ema_decay: 0.999
optim_g:
type: Adam
lr: !!float 2e-4
weight_decay: 0
betas: [0.9, 0.99]
scheduler:
type: MultiStepLR
milestones: [1000000]
gamma: 0.5
total_iter: 1000000
warmup_iter: -1 # no warm up
# losses
pixel_opt:
type: L1Loss
loss_weight: 1.0
reduction: mean
# Uncomment these for validation
# validation settings
val:
val_freq: !!float 5e3
save_img: True
metrics:
psnr: # metric name
type: calculate_psnr
crop_border: 4
test_y_channel: false
# logging settings
logger:
print_freq: 100
save_checkpoint_freq: !!float 5e3
use_tb_logger: true
wandb:
project: ~
resume_id: ~
# dist training settings
dist_params:
backend: nccl
port: 29500
```
The error is as follows:
```
Traceback (most recent call last):
File "realesrgan/train.py", line 11, in <module>
train_pipeline(root_path)
File "C:\ProgramData\Anaconda3\envs\python37\lib\site-packages\basicsr\train.py", line 120, in train_pipeline
result = create_train_val_dataloader(opt, logger)
File "C:\ProgramData\Anaconda3\envs\python37\lib\site-packages\basicsr\train.py", line 57, in create_train_val_dataloader
val_set = build_dataset(dataset_opt)
File "C:\ProgramData\Anaconda3\envs\python37\lib\site-packages\basicsr\data\__init__.py", line 34, in build_dataset
dataset = DATASET_REGISTRY.get(dataset_opt['type'])(dataset_opt)
File "C:\ProgramData\Anaconda3\envs\python37\lib\site-packages\basicsr\data\paired_image_dataset.py", line 63, in __init__
self.paths = paired_paths_from_folder([self.lq_folder, self.gt_folder], ['lq', 'gt'], self.filename_tmpl)
File "C:\ProgramData\Anaconda3\envs\python37\lib\site-packages\basicsr\data\data_util.py", line 219, in paired_paths_from_folder
input_paths = list(scandir(input_folder))
File "C:\ProgramData\Anaconda3\envs\python37\lib\site-packages\basicsr\utils\misc.py", line 74, in _scandir
for entry in os.scandir(dir_path):
FileNotFoundError: [WinError 3] 系统找不到指定的路径。: 'path_to_lq'
```
I want to know how to solve this problem.
Who can help me? Thank you very much! | open | 2023-03-17T03:46:08Z | 2023-03-27T09:46:28Z | https://github.com/xinntao/Real-ESRGAN/issues/588 | [] | GuangwuOfLongCity | 1 |
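The traceback bottoms out in `scandir('path_to_lq')`: the `dataroot_gt: path_to_gt` and `dataroot_lq: path_to_lq` entries in the `val:` block above are template placeholders, so they need to be replaced with real folders of ground-truth and low-quality validation images. A small pre-flight check (my own sketch, not part of BasicSR) makes this failure obvious before training starts:

```python
from pathlib import Path

def check_val_dirs(dataset_opt):
    """Fail fast if the validation dataroots are still template placeholders."""
    for key in ("dataroot_gt", "dataroot_lq"):
        path = Path(dataset_opt[key])
        if not path.is_dir():
            raise FileNotFoundError(
                f"{key} = {path} is not a directory; edit the val: block in the YAML"
            )
```

Calling this on the parsed `val:` options right after loading the YAML turns the deep `os.scandir` crash into a clear one-line message.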
deezer/spleeter | deep-learning | 260 | [Discussion] How to separate multiple music files with one call of the separate function on the terminal? | Hi, this is the quick start method to separate music files into vocals and accompaniment:
`git clone https://github.com/Deezer/spleeter`
`conda install -c conda-forge spleeter`
`spleeter separate -i spleeter/audio_example.mp3 -p spleeter:2stems -o output`
How can I call this separate function once for multiple mp3 files (100+) in a folder, so that I do not need to convert them one by one manually? | closed | 2020-02-06T07:34:23Z | 2021-04-27T16:22:12Z | https://github.com/deezer/spleeter/issues/260 | [
"question"
] | chenglong96 | 3 |
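A hedged answer sketch (not from this thread): build the same CLI call shown in the quick start above once per file and run it in a loop. The folder name is whatever holds your mp3 files:

```python
import glob
import subprocess

def spleeter_commands(folder, out="output"):
    """Build one `spleeter separate` invocation (quick-start syntax) per mp3."""
    return [
        ["spleeter", "separate", "-i", path, "-p", "spleeter:2stems", "-o", out]
        for path in sorted(glob.glob(f"{folder}/*.mp3"))
    ]

def separate_all(folder, out="output"):
    """Run the separation once for every mp3 in the folder."""
    for cmd in spleeter_commands(folder, out):
        subprocess.run(cmd, check=True)
```

`separate_all("songs")` would then process every mp3 in `songs/` without manual per-file calls; also check `spleeter separate --help`, since some releases accept several inputs in a single invocation.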
Miserlou/Zappa | flask | 1714 | read only files break zappa build on Windows | ## Context
On Windows only, zappa crashes on deploy/update when trying to delete the temporary zip file if any of the files being zipped are read-only. The temp files are also read-only and zappa crashes when trying to delete them.
The same problem for the same project does not exist on Mac OS.
## Expected Behavior
I shouldn't need to make my entire folder and sub-folders non-read-only to deploy.
## Actual Behavior
Zappa crashes if any file being zipped is read-only.
}
(v_lam) C:\src\aj\adm\web2py>zappa update
(botocore 1.12.52 (c:\web2py\v_lam\lib\site-packages), Requirement.parse('botocore<1.8.0,>=1.7.0'), set(['boto3']))
Calling update for stage dev..
Downloading and installing dependencies..
- psycopg2==2.7.3.1: Using locally cached manylinux wheel
- protobuf==3.4.0: Using locally cached manylinux wheel
- cryptography==2.0.3: Using locally cached manylinux wheel
- cffi==1.11.0: Using locally cached manylinux wheel
- coverage==4.3.4: Using locally cached manylinux wheel
- pillow==4.2.1: Using locally cached manylinux wheel
Packaging project as zip.
Oh no! An error occurred! :(
==============
Traceback (most recent call last):
File "c:\web2py\v_lam\lib\site-packages\zappa\cli.py", line 2712, in handle
sys.exit(cli.handle())
File "c:\web2py\v_lam\lib\site-packages\zappa\cli.py", line 509, in handle
self.dispatch_command(self.command, stage)
File "c:\web2py\v_lam\lib\site-packages\zappa\cli.py", line 556, in dispatch_command
self.update(self.vargs['zip'], self.vargs['no_upload'])
File "c:\web2py\v_lam\lib\site-packages\zappa\cli.py", line 891, in update
self.create_package()
File "c:\web2py\v_lam\lib\site-packages\zappa\cli.py", line 2227, in create_package
disable_progress=self.disable_progress
File "c:\web2py\v_lam\lib\site-packages\zappa\core.py", line 745, in create_lambda_zip
shutil.rmtree(temp_project_path)
File "c:\python27\Lib\shutil.py", line 261, in rmtree
rmtree(fullname, ignore_errors, onerror)
File "c:\python27\Lib\shutil.py", line 261, in rmtree
rmtree(fullname, ignore_errors, onerror)
File "c:\python27\Lib\shutil.py", line 261, in rmtree
rmtree(fullname, ignore_errors, onerror)
File "c:\python27\Lib\shutil.py", line 266, in rmtree
onerror(os.remove, fullname, sys.exc_info())
File "c:\python27\Lib\shutil.py", line 264, in rmtree
os.remove(fullname)
**WindowsError: [Error 5] Access is denied: 'c:\\users\\mike\\appdata\\local\\temp\\zappa-projectvk70j2\\applications\\admin\\modules\\__init__.py'**
==============
Need help? Found a bug? Let us know! :D
File bug reports on GitHub here: https://github.com/Miserlou/Zappa
And join our Slack channel here: https://slack.zappa.io
Love!,
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
Make your source files read only
zappa update
## Your Environment
* Zappa version used: 0.47.1
* Operating System and Python version: Windows7, Python 2.7.15
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.py`:
```json
{
    "dev": {
        "app_function": "gluon.main.wsgibase",
        "profile_name": "zappa",
        "project_name": "web2py",
        "aws_region": "us-east-1",
        "memory_size": 128,
        "keep_warm": false,
        "runtime": "python2.7",
        "s3_bucket": "zappa-XXXXX",
        "log_level": "DEBUG",
        "environment_variables": {
            "IS_AWS_LAMBDA": "True"
        }
    }
}
```
| open | 2018-11-28T19:30:32Z | 2018-11-28T19:30:32Z | https://github.com/Miserlou/Zappa/issues/1714 | [] | AppJarBiz | 0 |
svc-develop-team/so-vits-svc | pytorch | 92 | Error when training the clustering scheme | root@autodl-container-ecb6118152-031fe729:~/autodl-tmp/so-vits-svc# python cluster/train_cluster.py
train kmeans for moss...
INFO:__main__:Loading features from dataset/44k/moss
20it [00:00, 1820.13it/s]
0 7.01171875 MB , shape: (7180, 256) float32
INFO:__main__:Clustering features of shape: (7180, 256)
Traceback (most recent call last):
File "cluster/train_cluster.py", line 64, in <module>
x = train_cluster(in_dir, n_clusters, verbose=False)
File "cluster/train_cluster.py", line 30, in train_cluster
kmeans = MiniBatchKMeans(n_clusters=n_clusters,verbose=verbose, batch_size=4096, max_iter=80).fit(features)
File "/root/miniconda3/lib/python3.8/site-packages/sklearn/cluster/_kmeans.py", line 1972, in fit
self._check_params(X)
File "/root/miniconda3/lib/python3.8/site-packages/sklearn/cluster/_kmeans.py", line 1804, in _check_params
super()._check_params(X)
File "/root/miniconda3/lib/python3.8/site-packages/sklearn/cluster/_kmeans.py", line 828, in _check_params
raise ValueError(
ValueError: n_samples=7180 should be >= n_clusters=10000.
root@autodl-container-ecb6118152-031fe729:~/autodl-tmp/so-vits-svc# | closed | 2023-03-27T00:16:58Z | 2023-03-27T00:24:16Z | https://github.com/svc-develop-team/so-vits-svc/issues/92 | [
"not urgent"
] | Jerryjzr | 3 |
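The `ValueError` at the bottom is the whole story: the dataset produced only 7180 feature vectors, while `cluster/train_cluster.py` asks `MiniBatchKMeans` for 10000 clusters, and scikit-learn requires `n_samples >= n_clusters`. Either train on more audio or lower the cluster count; a hedged sketch (parameters copied from the traceback, clamping is my own addition) looks like:

```python
from sklearn.cluster import MiniBatchKMeans

def fit_kmeans(features, n_clusters=10000, batch_size=4096, max_iter=80):
    """MiniBatchKMeans as in cluster/train_cluster.py, with n_clusters
    clamped so it can never exceed the number of samples."""
    n_clusters = min(n_clusters, len(features))
    return MiniBatchKMeans(
        n_clusters=n_clusters, batch_size=batch_size, max_iter=max_iter
    ).fit(features)
```

Clamping silently reduces the codebook size, so for the intended 10000-entry codebook the real fix is more training data.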
ageitgey/face_recognition | machine-learning | 697 | don't know where to save the image I want to scan | * face_recognition version:
* Python version:9
* Operating System:OSX
I get an error whenever I try to run my test program. This is the error message:

```
Traceback (most recent call last):
  File "/Users/nathaadmin/Documents/face_recog.py", line 2, in <module>
    image = face_recognition.load_image_file("My_Image.png")
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/face_recognition/api.py", line 83, in load_image_file
    im = PIL.Image.open(file)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/PIL/Image.py", line 2609, in open
    fp = builtins.open(filename, "rb")
IsADirectoryError: [Errno 21] Is a directory: 'My_Image.png'
```
I named the photo I want scanned My_Image.png
This is my code:

```python
import face_recognition
image = face_recognition.load_image_file("My_Image.png")
face_locations = face_recognition.face_locations(image)
```
I have my photo saved in my Downloads folder | open | 2018-12-05T17:51:14Z | 2019-03-04T10:34:29Z | https://github.com/ageitgey/face_recognition/issues/697 | [] | naman108 | 1 |
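The `IsADirectoryError` means `"My_Image.png"` resolved to a directory rather than a regular file; since the photo lives in Downloads but the script runs from Documents, the relative name does not point at the image. A hedged helper (the Downloads location is taken from the report) that builds and checks the full path first:

```python
import os

def find_image(filename, folder="~/Downloads"):
    """Return the absolute path to the image, failing early with a clear
    message instead of deep inside PIL."""
    path = os.path.expanduser(os.path.join(folder, filename))
    if os.path.isdir(path):
        raise IsADirectoryError(f"{path} is a directory, not an image file")
    if not os.path.isfile(path):
        raise FileNotFoundError(path)
    return path
```

Then `face_recognition.load_image_file(find_image("My_Image.png"))` loads the photo regardless of the script's working directory.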
fohrloop/dash-uploader | dash | 107 | Multiple issues with Linux-Python3.6 and Windows-Python3.7 | First of all, thank you for making this wonderful package.
I got 2 problems when running on 2 environments:
Script:
- The "Simple example" provided here
Environments:
- Linux-Python 3.6.8 and Windows-Python 3.7.3
- dash-uploader==0.7.0a1
The problems:
1. ImportError: cannot import name 'final' from 'typing'
```
Traceback (most recent call last):
File "C:\Users\user123\eclipse-workspace\For fun - Python\test7.py", line 3, in <module>
import dash_uploader as du
File "C:\Users\user123\AppData\Local\Programs\Python\Python37\lib\site-packages\dash_uploader\__init__.py", line 7, in <module>
from dash_uploader.configure_upload import configure_upload
File "C:\Users\user123\AppData\Local\Programs\Python\Python37\lib\site-packages\dash_uploader\configure_upload.py", line 5, in <module>
from dash_uploader.httprequesthandler import HttpRequestHandler
File "C:\Users\user123\AppData\Local\Programs\Python\Python37\lib\site-packages\dash_uploader\httprequesthandler.py", line 11, in <module>
from dash_uploader.utils import retry
File "C:\Users\user123\AppData\Local\Programs\Python\Python37\lib\site-packages\dash_uploader\utils.py", line 2, in <module>
from typing import final
ImportError: cannot import name 'final' from 'typing' (C:\Users\user123\AppData\Local\Programs\Python\Python37\lib\typing.py)
```
This can be fixed by:
a. Commenting out `from typing import final` in utils.py (not sure what the side effects of doing this are)
b. Change `from typing import final` -> `from typing_extensions import final` (for python 3.7 and below)
For this problem, the requirement might need to change to Python 3.8+ (`typing.final` was added in Python 3.8), not Python 3.6+.
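A third option (a common compatibility pattern, not dash-uploader's confirmed fix) keeps Python 3.7 support without touching every call site; `typing.final` only exists from Python 3.8:

```python
try:
    from typing import final  # Python 3.8+
except ImportError:           # Python 3.7 and below: pip install typing_extensions
    from typing_extensions import final

@final
class ChunkWriter:  # hypothetical class, just to show the decorator still applies
    pass
```

The rest of the code then imports `final` from this module instead of from `typing` directly.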
2. pathlib.Path obj has no attribute 'write'
Linux: AttributeError: 'PosixPath' object has no attribute 'write'
Windows: AttributeError: 'WindowsPath' object has no attribute 'write'
Error message in Windows
```
Traceback (most recent call last):
File "C:\Users\user123\AppData\Local\Programs\Python\Python37\lib\site-packages\dash_uploader\httprequesthandler.py", line 85, in post
return self._post()
File "C:\Users\user123\AppData\Local\Programs\Python\Python37\lib\site-packages\dash_uploader\httprequesthandler.py", line 110, in _post
r.chunk_data.save(chunk_file)
File "C:\Users\user123\AppData\Local\Programs\Python\Python37\lib\site-packages\werkzeug\datastructures.py", line 2803, in save
copyfileobj(self.stream, dst, buffer_size)
File "C:\Users\user123\AppData\Local\Programs\Python\Python37\lib\shutil.py", line 82, in copyfileobj
fdst.write(buf)
AttributeError: 'WindowsPath' object has no attribute 'write'
```
To make this work, I had to edit `r.chunk_data.save(chunk_file)` -> `r.chunk_data.save(str(chunk_file))` in `_post()` ("httprequesthandler.py"). After this, files are uploaded normally.
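Why the `str()` cast helps, as a hedged sketch: Werkzeug's `FileStorage.save` (at least in the version shown in the traceback) treats a non-string destination as an already-open file object and pipes into it with `shutil.copyfileobj`, and a `pathlib.Path` has no `.write()`. Simplified dispatch, not Werkzeug's actual code:

```python
import shutil

def save_stream(stream, dst):
    """Mimic the old FileStorage.save dispatch: open str paths ourselves,
    otherwise assume dst is a writable file object."""
    if isinstance(dst, str):
        with open(dst, "wb") as f:
            shutil.copyfileobj(stream, f)
    else:
        shutil.copyfileobj(stream, dst)  # a Path lands here -> AttributeError
```

Newer Werkzeug releases accept `os.PathLike` destinations, so upgrading Werkzeug may remove the need for the `str()` edit.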
Am I missing anything to make this work without editing the files?
| open | 2022-10-23T22:24:12Z | 2022-10-24T16:50:22Z | https://github.com/fohrloop/dash-uploader/issues/107 | [] | chuhoaianh | 0 |
dynaconf/dynaconf | flask | 764 | [docs] Settings cannot load correctly in debug mode | When I use VS Code to debug my Python scripts, the settings cannot be loaded correctly, but they work fine when I run the scripts directly. How can I solve this problem?
When debugging:
![image](https://user-images.githubusercontent.com/39478897/176070694-6cac1d20-9b9f-445b-afda-26ee47866c06.png)
When run directly, it prints correctly:

A simple settings.yaml, just like this:

```yaml
daily:
  recent_days: 365
  m_volume_day_thres: 30
  mvmp_width: 2
  c_volume_width: 3
  ###
  price_qs_maxday: 30
  min_price_days: 90
```
| open | 2022-06-28T00:55:11Z | 2024-07-01T09:51:04Z | https://github.com/dynaconf/dynaconf/issues/764 | [
"RFC",
"django"
] | Kyridiculous2 | 6 |
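A likely cause, hedged since the loader code isn't shown in this row: the VS Code debugger often launches with a different working directory, and Dynaconf resolves relative `settings_files` entries against the current directory. Anchoring the path to the script removes that dependency:

```python
from pathlib import Path

def settings_file(name="settings.yaml"):
    """Absolute path to a settings file sitting next to this script, so the
    working directory (which differs under the debugger) no longer matters."""
    return Path(__file__).resolve().parent / name
```

`Dynaconf(settings_files=[settings_file()])` then loads the same file in both run and debug modes; alternatively, setting `"cwd"` in the VS Code launch configuration to the project folder should have the same effect.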