repo_name (string) | topic (string, 30 classes) | issue_number (int64) | title (string) | body (string) | state (string, 2 classes) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64)
|---|---|---|---|---|---|---|---|---|---|---|---|
DistrictDataLabs/yellowbrick | matplotlib | 868 | CI Tests fail due to NLTK download first byte timeout | **NLTK Downloader**
Here is a sample of an NLTK download in progress:
```
[nltk_data] | Downloading package twitter_samples to
[nltk_data] |     C:\Users\appveyor\AppData\Roaming\nltk_data...
[nltk_data] |   Unzipping corpora\twitter_samples.zip.
[nltk_data] | Downloading package omw to
[nltk_data] |     C:\Users\appveyor\AppData\Roaming\nltk_data...
[nltk_data] | Error downloading 'omw' from
[nltk_data] |     <https://raw.githubusercontent.com/nltk/nltk_data
[nltk_data] |     /gh-pages/packages/corpora/omw.zip>: HTTP Error
[nltk_data] |     503: first byte timeout
Error installing package. Retry? [n/y/e]
```
There is a related issue:
https://github.com/nltk/nltk/issues/1647
Could cached NLTK downloads provide the necessary NLTK data with sufficient freshness?
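A hedged sketch of the caching idea (the paths and URL are illustrative, and this uses plain urllib rather than the NLTK downloader): keep downloaded archives in a directory that CI restores between runs, and only hit the network on a cache miss.

```python
import os
import urllib.request

def ensure_download(url, cache_dir="nltk_data_cache"):
    """Return the cached path for url, fetching it only if missing."""
    os.makedirs(cache_dir, exist_ok=True)
    dest = os.path.join(cache_dir, os.path.basename(url))
    if not os.path.exists(dest):                  # cache miss: fetch once
        urllib.request.urlretrieve(url, dest)
    return dest

# Simulate a warm CI cache: the file already exists, so no network call happens.
os.makedirs("nltk_data_cache", exist_ok=True)
open(os.path.join("nltk_data_cache", "omw.zip"), "w").close()
print(ensure_download("https://example.invalid/packages/corpora/omw.zip"))
```

Freshness could then be handled by expiring the cache on a schedule rather than re-downloading on every build.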
<!-- This line alerts the Yellowbrick maintainers, feel free to use this
@ address to alert us directly in follow up comments -->
@DistrictDataLabs/team-oz-maintainers
| closed | 2019-05-29T20:25:17Z | 2019-08-28T23:48:40Z | https://github.com/DistrictDataLabs/yellowbrick/issues/868 | [
"type: bug",
"type: technical debt",
"level: intermediate"
] | nickpowersys | 2 |
d2l-ai/d2l-en | deep-learning | 2,056 | multi-head Attention code has a big problem. | I only checked the pytorch version.
```python
class MultiHeadAttention(nn.Module):
    """Multi-head attention.
    Defined in :numref:`sec_multihead-attention`"""
    def __init__(self, key_size, query_size, value_size, num_hiddens,
                 num_heads, dropout, bias=False, **kwargs):
        super(MultiHeadAttention, self).__init__(**kwargs)
        self.num_heads = num_heads
        self.attention = d2l.DotProductAttention(dropout)
        self.W_q = nn.Linear(query_size, num_hiddens, bias=bias)  # should not be 'query_size'
        self.W_k = nn.Linear(key_size, num_hiddens, bias=bias)    # should not be 'key_size'
        self.W_v = nn.Linear(value_size, num_hiddens, bias=bias)  # should not be 'value_size'
        self.W_o = nn.Linear(num_hiddens, num_hiddens, bias=bias)

    def forward(self, queries, keys, values, valid_lens):
        # Shape of `queries`, `keys`, or `values`:
        # (`batch_size`, no. of queries or key-value pairs, `num_hiddens`)
        # Shape of `valid_lens`:
        # (`batch_size`,) or (`batch_size`, no. of queries)
        # After transposing, shape of output `queries`, `keys`, or `values`:
        # (`batch_size` * `num_heads`, no. of queries or key-value pairs,
        # `num_hiddens` / `num_heads`)
        queries = transpose_qkv(self.W_q(queries), self.num_heads)  # here, the last dim of queries is num_hiddens!
        keys = transpose_qkv(self.W_k(keys), self.num_heads)
        values = transpose_qkv(self.W_v(values), self.num_heads)

        if valid_lens is not None:
            # On axis 0, copy the first item (scalar or vector) for
            # `num_heads` times, then copy the next item, and so on
            valid_lens = torch.repeat_interleave(
                valid_lens, repeats=self.num_heads, dim=0)

        # Shape of `output`: (`batch_size` * `num_heads`, no. of queries,
        # `num_hiddens` / `num_heads`)
        output = self.attention(queries, keys, values, valid_lens)

        # Shape of `output_concat`:
        # (`batch_size`, no. of queries, `num_hiddens`)
        output_concat = transpose_output(output, self.num_heads)
        return self.W_o(output_concat)
```
When training, if you change num_hiddens from 32 to 64, you will get "RuntimeError: mat1 dim 1 must match mat2 dim 0".
After debugging, I found that in the MultiHeadAttention block's forward function, the shape of X is
(`batch_size`, no. of queries or key-value pairs, `num_hiddens`),
i.e. num_hiddens is the last dim.
But self.W_q = nn.Linear(query_size, num_hiddens, bias=bias) makes the first dim of W_q query_size!
So you always have to set num_hiddens = query_size for the code to run, which is obviously wrong.
#######################################################
My suggestion is to change self.W_q = nn.Linear(query_size, num_hiddens, bias=bias) ==> self.W_q = nn.Linear(num_hiddens, num_hiddens, bias=bias)
But there may be another solution.
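The mismatch can be illustrated in pure Python (shapes only; this sketch mimics nn.Linear's shape rule rather than using torch):

```python
def linear_out(in_features, out_features, x_shape):
    """Mimic nn.Linear's shape rule: the input's last dim must equal in_features."""
    *batch, last = x_shape
    if last != in_features:
        raise ValueError(
            f"mat1 dim 1 must match mat2 dim 0 ({last} vs {in_features})")
    return (*batch, out_features)

query_size, num_hiddens = 32, 64
# W_q built as Linear(query_size, num_hiddens), but inside the encoder the
# queries arrive with last dim num_hiddens, so the multiply fails:
try:
    linear_out(query_size, num_hiddens, (2, 4, num_hiddens))
except ValueError as e:
    print(e)
# The suggested fix, Linear(num_hiddens, num_hiddens), accepts that input:
print(linear_out(num_hiddens, num_hiddens, (2, 4, num_hiddens)))  # (2, 4, 64)
```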
If my understanding is wrong, please correct me.
d2l is wonderful for sure.
P.S. The way of building a large single-head attention and then bending it into multi-head is not elegant; it would be much better if you guys could find another solution.
| open | 2022-03-03T18:25:48Z | 2022-04-19T12:05:00Z | https://github.com/d2l-ai/d2l-en/issues/2056 | [] | Y-H-Joe | 2 |
aiortc/aiortc | asyncio | 712 | Webcam.py --video-codec video/H264 --play-without-decoding weird behavior | I encountered a strange behavior regarding the standard webcam.py example.
I'm using the newest feature `--video-codec video/H264 --play-without-decoding` on my Raspberry Pi.
If I connect from any client over the local network (Android/iOS/Chrome/Firefox), it works the first time; after that, only Chrome on my Windows machine works, and the image quality gets significantly worse than on the first connect.
Steps to reproduce:
1. Start `python webcam.py --video-codec video/H264 --play-without-decoding`
2. Connect from a Firefox/iOS/Android browser and press start
3. Press the stop button
4. Refresh and press start again
I verified that ICE gathering is working and that the h264_omx encoder even starts running, but no feed is shown on the client side.
Only Chrome works, and it displays a much blurrier image.
| closed | 2022-04-29T19:01:38Z | 2022-09-23T04:00:10Z | https://github.com/aiortc/aiortc/issues/712 | [
"stale"
] | unitw | 17 |
ageitgey/face_recognition | python | 907 | Continually asks to install `face_recognition_models` although I've already installed it | * face_recognition version: face-recognition-1.2.3 face-recognition-models-0.3.0
* Python version: 3.6
* Operating System: Linux
### Description
I used `pip3 install face_recognition` to install `face_recognition_models` successfully, but when I `import face_recognition`, it keeps asking me to install it again. I've searched through many issues but haven't solved it. Could you share your solution if you've met the same problem?
### What I Did
```
(py36) xinshen@system-product-name:/devdata/guoqu/face_recognition$ pip3 install face_recognition
Collecting face_recognition
Downloading https://pypi.tuna.tsinghua.edu.cn/packages/3f/ed/ad9a28042f373d4633fc8b49109b623597d6f193d3bbbef7780a5ee8eef2/face_recognition-1.2.3-py2.py3-none-any.whl
Collecting dlib>=19.7 (from face_recognition)
Collecting Pillow (from face_recognition)
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/d6/98/0d360dbc087933679398d73187a503533ec0547ba4ffd2115365605559cc/Pillow-6.1.0-cp35-cp35m-manylinux1_x86_64.whl
Collecting face-recognition-models>=0.3.0 (from face_recognition)
Collecting Click>=6.0 (from face_recognition)
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/fa/37/45185cb5abbc30d7257104c434fe0b07e5a195a6847506c074527aa599ec/Click-7.0-py2.py3-none-any.whl
Collecting numpy (from face_recognition)
Using cached https://pypi.tuna.tsinghua.edu.cn/packages/69/25/eef8d362bd216b11e7d005331a3cca3d19b0aa57569bde680070109b745c/numpy-1.17.0-cp35-cp35m-manylinux1_x86_64.whl
Installing collected packages: dlib, Pillow, face-recognition-models, Click, numpy, face-recognition
Successfully installed Click-7.0 Pillow-5.3.0 dlib-19.17.0 face-recognition-1.2.3 face-recognition-models-0.3.0 numpy-1.17.0
You are using pip version 8.1.1, however version 19.2.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
(py36) xinshen@system-product-name:/devdata/guoqu/face_recognition$ python
Python 3.6.7 |Anaconda custom (64-bit)| (default, Oct 23 2018, 19:16:44)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import face_recognition
Please install `face_recognition_models` with this command before using `face_recognition`:
pip install git+https://github.com/ageitgey/face_recognition_models
(py36) xinshen@system-product-name:/devdata/guoqu/face_recognition$
```
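One detail in the log worth noting: pip fetched `cp35` wheels (e.g. `Pillow-6.1.0-cp35-cp35m-...`, `numpy-1.17.0-cp35-cp35m-...`) while the REPL reports Python 3.6.7, which suggests `pip3` may belong to a different interpreter than `python`. A quick diagnostic, run in the same REPL that fails (an assumption about the cause, not a confirmed fix):

```python
import importlib.util
import sys

# Which interpreter is running right now, and can it see the package?
print(sys.executable)
print(importlib.util.find_spec("face_recognition_models"))  # None if not visible here
```

If `find_spec` returns None, installing with `python -m pip install face_recognition` ties the install to the interpreter you actually run.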
| closed | 2019-08-21T02:24:54Z | 2024-09-05T18:09:14Z | https://github.com/ageitgey/face_recognition/issues/907 | [] | MrArcrM | 4 |
python-restx/flask-restx | api | 404 | How to specify schema for nested json in flask_restx? | I'm using flask_restx for Swagger APIs. The versions are as follows:
```
python - 3.8.0
flask - 2.0.2
flask_restx - 0.5.1
```
The following is the nested json I need to specify the schema for:
```
dr_status_fields = app_api.model('DR Status',{
fields.String: {
"elasticsearch": {
"backup_status": fields.String,
"backup_folder": fields.String,
},
"mongodb": {
"backup_status": fields.String,
"backup_folder": fields.String,
},
"postgresdb": {
"backup_status": fields.String,
"backup_folder": fields.String,
},
"overall_backup_status": fields.String
}
})
```
But with this, when I load the Swagger URL in a browser, I get an error in the command line:
```
Unable to render schema
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/flask_restx/api.py", line 571, in __schema__
self._schema = Swagger(self).as_dict()
File "/usr/local/lib/python3.8/site-packages/flask_restx/swagger.py", line 268, in as_dict
"definitions": self.serialize_definitions() or None,
File "/usr/local/lib/python3.8/site-packages/flask_restx/swagger.py", line 623, in serialize_definitions
return dict(
File "/usr/local/lib/python3.8/site-packages/flask_restx/swagger.py", line 624, in <genexpr>
(name, model.__schema__)
File "/usr/local/lib/python3.8/site-packages/flask_restx/model.py", line 76, in __schema__
schema = self._schema
File "/usr/local/lib/python3.8/site-packages/flask_restx/model.py", line 159, in _schema
properties[name] = field.__schema__
AttributeError: 'dict' object has no attribute '__schema__'
```
I have tried checking the [flask_restx][1] documentation for a sample usage example but could not find any. Please help with this.
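For reference, flask-restx normally declares nested objects as separate `api.model` definitions referenced via `fields.Nested(...)`; a plain dict cannot stand in for a field, because the schema serializer reads `__schema__` from every property value. A stdlib-only illustration of that failure mode (the class here is a stand-in, not flask_restx code):

```python
class StringField:  # stand-in for flask_restx's fields.String
    __schema__ = {"type": "string"}

properties = {
    "overall_backup_status": StringField(),
    "elasticsearch": {"backup_status": StringField()},  # nested plain dict
}
for name, value in properties.items():
    try:
        print(name, value.__schema__)
    except AttributeError as exc:
        # 'dict' object has no attribute '__schema__' -- same as the traceback
        print(name, "->", exc)
```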
[1]: https://flask-restplus.readthedocs.io/en/stable/swagger.html | open | 2022-01-07T05:43:44Z | 2022-01-25T16:17:44Z | https://github.com/python-restx/flask-restx/issues/404 | [
"question"
] | rakeshk121 | 1 |
ansible/ansible | python | 84,059 | regex_replace fails to work on passed-in strings, OK on variables | ### Summary
ansible.builtin.regex_replace works fine when used on a variable
```
set_fact:
var: "{{ aVariable | ansible.builtin.regex_replace('old','new') }}"
```
but does nothing to the text if it is previously filtered, e.g.
```
set_fact:
wrong: "{{ my_var1+my_var2 | string | ansible.builtin.regex_replace('old','new') }}"
```
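A plausible explanation (an assumption): in Jinja2, filters bind more tightly than `+`, so `a+b | string | regex_replace(...)` parses as `a + ((b | string) | regex_replace(...))`, applying the filters to `b` alone. A stdlib-only sketch of the two parses, using the paths from the reproduction below:

```python
import re

# Stand-ins for the Jinja filters involved (illustrative only).
string = str
regex_replace = lambda s, pat, repl: re.sub(pat, repl, str(s))

a = "/home/ian/Backupfiles/Bitfolk/setB/"
b = "/home/ian"

# {{ a+b | string | regex_replace('//','/') }} -- filters apply to b only:
wrong = a + regex_replace(string(b), "//", "/")
# {{ (a+b) | regex_replace('//','/') }} -- parentheses filter the whole value:
right = regex_replace(a + b, "//", "/")

print(wrong)  # /home/ian/Backupfiles/Bitfolk/setB//home/ian
print(right)  # /home/ian/Backupfiles/Bitfolk/setB/home/ian
```

If this is the cause, `{{ (bitfolkBackup+dir) | regex_replace('//','/') }}` should behave the same as the two-step version.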
### Issue Type
Bug Report
### Component Name
ansible.builtin.regex_replace
### Ansible Version
```console
$ ansible --version
ansible [core 2.16.11]
config file = /home/ian/.ansible.cfg
configured module search path = ['/home/ian/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/ian/.ansible/collections:/usr/share/ansible/collections
executable location = /home/ian/.local/bin/ansible
python version = 3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
BECOME_PASSWORD_FILE(/home/ian/.ansible.cfg) = /home/ian/.ansible/password
CONFIG_FILE() = /home/ian/.ansible.cfg
DEFAULT_BECOME(/home/ian/.ansible.cfg) = True
DEFAULT_HOST_LIST(/home/ian/.ansible.cfg) = ['/home/ian/Ansible/playbooks/config/hosts']
DEFAULT_LOG_PATH(/home/ian/.ansible.cfg) = /home/ian/Ansible/logfile
DEFAULT_VAULT_PASSWORD_FILE(/home/ian/.ansible.cfg) = /home/ian/.ansible/password
```
### OS / Environment
PRETTY_NAME="Ubuntu 24.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04.1 LTS (Noble Numbat)"
VERSION_CODENAME=noble
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=noble
LOGO=ubuntu-logo
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
# test.yml
#
- hosts: "elara.hcs"
become: yes
gather_facts: false
vars:
bitfolkBackup: "/home/ian/Backupfiles/Bitfolk/setB/"
dir: "/home/ian"
tasks:
- name: Compute restore dir
set_fact:
r_dir: "{{ bitfolkBackup+dir }}"
- name: Remove double slashes
set_fact:
restore_dir1: "{{ r_dir | regex_replace('//','/') }}"
- name: Do them both together
set_fact:
restore_dir2: "{{ bitfolkBackup+dir | string | regex_replace('//','/') }}"
- name: show result of 2 steps
debug:
var: restore_dir1
- name: show result of one step
debug:
var: restore_dir2
- name: Report error
debug: msg="The two results are not the same"
when: (restore_dir1 != restore_dir2)
```
### Expected Results
ansible.builtin.regex_replace('old','new') should replace old with new in both cases. It fails when the provided string is previously filtered.
### Actual Results
```console
PLAY [elara.hcs] ***********************************************************************************************
TASK [Compute restore dir] *************************************************************************************
ok: [elara.hcs]
TASK [Remove double slashes] ***********************************************************************************
ok: [elara.hcs]
TASK [Do them both together] ***********************************************************************************
ok: [elara.hcs]
TASK [show result of 2 steps] **********************************************************************************
ok: [elara.hcs] => {
"restore_dir1": "/home/ian/Backupfiles/Bitfolk/setB/home/ian"
}
TASK [show result of one step] *********************************************************************************
ok: [elara.hcs] => {
"restore_dir2": "/home/ian/Backupfiles/Bitfolk/setB//home/ian"
}
TASK [Report error] ********************************************************************************************
ok: [elara.hcs] => {
"msg": "The two results are not the same"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | closed | 2024-10-07T11:27:27Z | 2024-10-22T13:00:06Z | https://github.com/ansible/ansible/issues/84059 | [
"bug",
"affects_2.16"
] | hobson42 | 4 |
joke2k/django-environ | django | 10 | Typo doc | Hi,
In the documentation for the dict type, the prototype is this:
`dict (BAR=key=val;foo=bar)`
it should be this:
`dict (BAR=key=val,foo=bar)`
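For illustration, a rough sketch of the comma-separated parsing the corrected prototype implies (this is not django-environ's actual implementation):

```python
# Pairs are separated by commas; each pair is key=value.
raw = "key=val,foo=bar"
parsed = dict(item.split("=", 1) for item in raw.split(","))
print(parsed)  # {'key': 'val', 'foo': 'bar'}
```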
| closed | 2014-06-27T14:26:56Z | 2015-12-25T18:58:25Z | https://github.com/joke2k/django-environ/issues/10 | [] | moimael | 2 |
scikit-optimize/scikit-optimize | scikit-learn | 1,036 | Grid sampler is sampling less than n_samples | ```
from skopt.sampler import Grid
from skopt.space import Space
from scipy.spatial.distance import pdist
import numpy as np
n_samples = 10
space = Space([(0.0, 5)])
grid = Grid()
samples = np.array(grid.generate(space.dimensions, n_samples))
print(samples)
print(samples.shape)
assert(len(samples) == n_samples)
```
Expected: assertion to pass
Current: assertion failed | open | 2021-05-25T08:50:29Z | 2021-05-25T12:19:31Z | https://github.com/scikit-optimize/scikit-optimize/issues/1036 | [
"Bug"
] | eric-vader | 0 |
ranaroussi/yfinance | pandas | 2,251 | Incorrect output for YFxxx exceptions | ### Describe bug
For YFxxx exceptions the output is:
```
1 Failed download:
['QUATSCH']: YFPricesMissingError('$%ticker%: possibly delisted; no price data found (period=1d) (Yahoo error = "No data found, symbol may be delisted")')
```
I suspect the intended output is
```
1 Failed download:
YFPricesMissingError('$['QUATSCH']: possibly delisted; no price data found (period=1d) (Yahoo error = "No data found, symbol may be delisted")')
```
The error is in multi.py, in the `download` function. Instead of
```
for err in errors.keys():
    logger.error(f'{errors[err]}: ' + err)
```
it should be
```
for err in errors.keys():
    logger.error(err.replace('%ticker%', errors[err]))
```
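A quick sketch of the proposed substitution, mirroring the mapping used in that loop (the message string here is illustrative):

```python
# Keys are the message templates, values the associated tickers, as in multi.py.
errors = {
    "YFPricesMissingError('$%ticker%: possibly delisted; no price data found (period=1d)')":
        "['QUATSCH']",
}
for err, tickers in errors.items():
    message = err.replace('%ticker%', tickers)  # fill the placeholder in place
    print(message)
```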
### Simple code that reproduces your problem
```
import yfinance as yf
df = yf.download(tickers=["MSF.DE", "QUATSCH"], period="1d", progress=False)
```
### Debug log
DEBUG Entering download()
DEBUG Disabling multithreading because DEBUG logging enabled
DEBUG Entering history()
DEBUG Entering history()
DEBUG MSF.DE: Yahoo GET parameters: {'range': '1d', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering get()
DEBUG Entering _make_request()
DEBUG url=https://query2.finance.yahoo.com/v8/finance/chart/MSF.DE
DEBUG params={'range': '1d', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG loaded persistent cookie
DEBUG reusing cookie
DEBUG crumb = '0DlVzIFi41Z'
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=200
DEBUG Exiting _make_request()
DEBUG Exiting get()
DEBUG MSF.DE: yfinance received OHLC data: 2025-01-31 16:36:23 only
DEBUG MSF.DE: OHLC after cleaning: 2025-01-31 17:36:23+01:00 only
DEBUG MSF.DE: OHLC after combining events: 2025-01-31 00:00:00+01:00 only
DEBUG MSF.DE: yfinance returning OHLC: 2025-01-31 00:00:00+01:00 only
DEBUG Exiting history()
DEBUG Exiting history()
DEBUG Entering history()
DEBUG Entering _fetch_ticker_tz()
DEBUG Entering get()
DEBUG Entering _make_request()
DEBUG url=https://query2.finance.yahoo.com/v8/finance/chart/QUATSCH
DEBUG params=frozendict.frozendict({'range': '1d', 'interval': '1d'})
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG reusing cookie
DEBUG reusing crumb
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=404
DEBUG toggling cookie strategy basic -> csrf
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'csrf'
DEBUG Entering _get_crumb_csrf()
DEBUG loaded persistent cookie
DEBUG Failed to find "csrfToken" in response
DEBUG Exiting _get_crumb_csrf()
DEBUG toggling cookie strategy csrf -> basic
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG loaded persistent cookie
DEBUG reusing cookie
DEBUG crumb = '0DlVzIFi41Z'
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=404
DEBUG Exiting _make_request()
DEBUG Exiting get()
DEBUG Got error from yahoo api for ticker QUATSCH, Error: {'code': 'Not Found', 'description': 'No data found, symbol may be delisted'}
DEBUG Exiting _fetch_ticker_tz()
DEBUG Entering history()
DEBUG QUATSCH: Yahoo GET parameters: {'range': '1d', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering get()
DEBUG Entering _make_request()
DEBUG url=https://query2.finance.yahoo.com/v8/finance/chart/QUATSCH
DEBUG params={'range': '1d', 'interval': '1d', 'includePrePost': False, 'events': 'div,splits,capitalGains'}
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'basic'
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG reusing cookie
DEBUG reusing crumb
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=404
DEBUG toggling cookie strategy basic -> csrf
DEBUG Entering _get_cookie_and_crumb()
DEBUG cookie_mode = 'csrf'
DEBUG Entering _get_crumb_csrf()
DEBUG loaded persistent cookie
DEBUG Failed to find "csrfToken" in response
DEBUG Exiting _get_crumb_csrf()
DEBUG toggling cookie strategy csrf -> basic
DEBUG Entering _get_cookie_and_crumb_basic()
DEBUG loaded persistent cookie
DEBUG reusing cookie
DEBUG crumb = '0DlVzIFi41Z'
DEBUG Exiting _get_cookie_and_crumb_basic()
DEBUG Exiting _get_cookie_and_crumb()
DEBUG response code=404
DEBUG Exiting _make_request()
DEBUG Exiting get()
ERROR
1 Failed download:
ERROR ['QUATSCH']: YFPricesMissingError('$%ticker%: possibly delisted; no price data found (period=1d) (Yahoo error = "No data found, symbol may be delisted")')
DEBUG ['QUATSCH']: Traceback (most recent call last):
File "C:\Users\meyer\Projects\Github\Current\meindepot\.venv\Lib\site-packages\yfinance\multi.py", line 272, in _download_one
data = Ticker(ticker).history(
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\meyer\Projects\Github\Current\meindepot\.venv\Lib\site-packages\yfinance\utils.py", line 104, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\meyer\Projects\Github\Current\meindepot\.venv\Lib\site-packages\yfinance\base.py", line 81, in history
return self._lazy_load_price_history().history(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\meyer\Projects\Github\Current\meindepot\.venv\Lib\site-packages\yfinance\utils.py", line 104, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\meyer\Projects\Github\Current\meindepot\.venv\Lib\site-packages\yfinance\scrapers\history.py", line 245, in history
raise _exception
yfinance.exceptions.YFPricesMissingError: $%ticker%: possibly delisted; no price data found (period=1d) (Yahoo error = "No data found, symbol may be delisted")
DEBUG Exiting download()
### Bad data proof
No bad data
### `yfinance` version
0.2.52
### Python version
3.12.8
### Operating system
Windows 11 | open | 2025-01-31T17:49:33Z | 2025-01-31T21:58:14Z | https://github.com/ranaroussi/yfinance/issues/2251 | [] | detlefm | 2 |
tfranzel/drf-spectacular | rest-api | 835 | Is it possible to keep the openapi file dynamic as the APIs change over time? |
Consider that I have a specific structure in the **requestBody** argument of the _@extend_schema_ decorator. Over time, as the API changes, that structure has to change as well.
How can we keep the OpenAPI file dynamic without updating _@extend_schema_ for every upcoming API update?
| closed | 2022-10-12T18:52:08Z | 2022-11-21T16:35:03Z | https://github.com/tfranzel/drf-spectacular/issues/835 | [] | pshk04 | 2 |
apache/airflow | python | 47,521 | Test Deployment via breeze in KinD/K8s needs to relax Same Origin Policy | ### Body
When a local K8s cluster (via KinD) is started through `breeze k8s ...` and you attempt to open the exposed URL in the browser (it uses a random forwarded port, e.g. http://localhost:13582/), you get the following errors:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:8080/static/assets/index-gP-zsNBT.js. (Reason: CORS request did not succeed). Status code: (null).
Module source URI is not allowed in this document: “http://localhost:8080/static/assets/index-gP-zsNBT.js”.
Therefore the KinD/K8s deployment in breeze needs to be enhanced so that the forwarded ports are accepted by the web server / UI.
(I'm not an expert in this area, but might this be a general K8s deployment problem outside breeze as well? Or does breeze just need to set the port/hostname in the config dynamically?)
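One possible mitigation to explore (an assumption; the section and key names should be checked against Airflow's configuration reference): allow the forwarded-port origin through the webserver's CORS settings, e.g. in `airflow.cfg`:

```ini
[api]
# Illustrative values; the forwarded port changes per breeze k8s session,
# so breeze would need to template these dynamically.
access_control_allow_origins = http://localhost:13582
access_control_allow_methods = GET,POST,OPTIONS
access_control_allow_headers = Content-Type,Authorization
```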
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | closed | 2025-03-07T22:29:05Z | 2025-03-09T08:18:50Z | https://github.com/apache/airflow/issues/47521 | [
"kind:bug",
"area:dev-env",
"area:dev-tools",
"kind:meta",
"area:UI"
] | jscheffl | 2 |
tfranzel/drf-spectacular | rest-api | 415 | Support mark field deprecated | See https://spec.openapis.org/oas/v3.1.0#parameter-object | closed | 2021-06-02T07:40:54Z | 2023-04-09T19:27:23Z | https://github.com/tfranzel/drf-spectacular/issues/415 | [
"enhancement",
"fix confirmation pending"
] | elonzh | 5 |
localstack/localstack | python | 11,712 | feature request: Enable Stepfunctions to be provided the Mock Config | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
## Context
Step Functions in LocalStack uses the `amazon/aws-stepfunctions-local` image. With that image, AWS offers a way to test with a mocked config ([link here](https://docs.aws.amazon.com/step-functions/latest/dg/sfn-local-test-sm-exec.html)): we pass the mocked config in an environment variable `SFN_MOCK_CONFIG`, and when invoking the step function we just have to use the State Machine ARN + '#test_name'.
## Request
The functionality is already within the image.
Can LocalStack expose that environment variable, so that if I'm using a LocalStack image and provide the variable to my container, I can test my step function with the mocked config?
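For reference, a hypothetical `docker-compose` sketch of what this could look like if the variable were passed through (file names and mount paths are illustrative):

```yaml
services:
  localstack:
    image: localstack/localstack
    environment:
      # Would need to be forwarded to the embedded aws-stepfunctions-local process
      - SFN_MOCK_CONFIG=/tmp/MockConfigFile.json
    volumes:
      - ./MockConfigFile.json:/tmp/MockConfigFile.json
```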
### 🧑💻 Implementation
_No response_
### Anything else?
_No response_ | open | 2024-10-18T08:59:49Z | 2025-02-26T20:13:41Z | https://github.com/localstack/localstack/issues/11712 | [
"type: feature",
"status: backlog"
] | priyatoshkashyap | 2 |
CTFd/CTFd | flask | 2,598 | CTFD Docker install: Docker-entrypoint not found, no such file or directory | <!--
If this is a bug report please fill out the template below.
If this is a feature request please describe the behavior that you'd like to see.
-->
**Environment**: Docker
- CTFd Version/Commit: 3.7.3
- Operating System: Windows
- Web Browser and Version: Firefox 130.0
**What happened?**
I ran the Docker installation process as described in the documentation (https://docs.ctfd.io/docs/deployment/installation/). Cloning the repository and changing the compose file went fine, but when I brought the stack up with docker compose I got an error from the ctfd-1 container:
"ctfd-1 | exec /opt/CTFd/docker-entrypoint.sh: no such file or directory"
This halts the Docker setup process in Docker Desktop and prevents the setup from completing.
**What did you expect to happen?**
The images would download, the containers would be created, and the web application would open.
**How to reproduce your issue**
1. `git clone` the repository
2. Open the repository location in PowerShell, Command Prompt, or another Windows command line
3. Run `docker-compose up`
4. The error shows up in the command line
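A cause worth ruling out (an assumption, prompted by the Windows checkout): git converting the entrypoint script to CRLF line endings, which makes the kernel look for an interpreter literally named `/bin/sh\r`. A self-contained demonstration on a POSIX host:

```python
import os
import stat
import subprocess
import tempfile

def run_script(content):
    """Write content to a script, mark it executable, and try to run it."""
    fd, path = tempfile.mkstemp(suffix=".sh")
    with os.fdopen(fd, "w", newline="") as f:  # newline="" keeps \r\n intact
        f.write(content)
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    try:
        return subprocess.run([path], capture_output=True).returncode
    except OSError:
        # exec fails: the shebang's interpreter path ends with a stray \r
        return None

print(run_script("#!/bin/sh\necho ok\n"))      # 0 on a POSIX host
print(run_script("#!/bin/sh\r\necho ok\r\n"))  # None: '/bin/sh\r' does not exist
```

If this is the cause, re-cloning with `git config core.autocrlf input` (or `false`) before `docker-compose up` should avoid it.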
**Any associated stack traces or error logs**
| open | 2024-09-13T07:23:51Z | 2024-09-13T13:09:55Z | https://github.com/CTFd/CTFd/issues/2598 | [] | GrainedPanther | 1 |
Miserlou/Zappa | flask | 1,549 | zappa package fails with zappa from git and slim_handler | ## Context
If you're trying to run zappa as an editable install from a git repo, it works, except when you have slim_handler enabled.
## Expected Behavior
It should build a zip with zappa inside.
## Actual Behavior
It fails with a traceback like this when building the handler_venv:
```
Traceback (most recent call last):
File "/Users/joao/testes/zappa-1/1/ve/src/zappa/zappa/cli.py", line 2693, in handle
sys.exit(cli.handle())
File "/Users/joao/testes/zappa-1/1/ve/src/zappa/zappa/cli.py", line 504, in handle
self.dispatch_command(self.command, stage)
File "/Users/joao/testes/zappa-1/1/ve/src/zappa/zappa/cli.py", line 543, in dispatch_command
self.package(self.vargs['output'])
File "/Users/joao/testes/zappa-1/1/ve/src/zappa/zappa/cli.py", line 633, in package
self.create_package(output)
File "/Users/joao/testes/zappa-1/1/ve/src/zappa/zappa/cli.py", line 2171, in create_package
venv=self.zappa.create_handler_venv(),
File "/Users/joao/testes/zappa-1/1/ve/src/zappa/zappa/core.py", line 403, in create_handler_venv
copytree(os.path.join(current_site_packages_dir, z), os.path.join(venv_site_packages_dir, z))
File "/Users/joao/testes/zappa-1/1/ve/src/zappa/zappa/utilities.py", line 37, in copytree
lst = os.listdir(src)
NotADirectoryError: [Errno 20] Not a directory: '/Users/joao/testes/zappa-1/1/ve/lib/python3.6/site-packages/zappa.egg-link'
```
## Possible Fix
Use the same function for copying editables as when slim_handler is false.
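A hedged sketch of what that could look like (not Zappa's actual code): resolve an editable install's `.egg-link` pointer file to the source directory it names before calling copytree.

```python
import os

def resolve_editable(path):
    """If path is a .egg-link pointer file, return the directory it names."""
    if path.endswith(".egg-link") and os.path.isfile(path):
        with open(path) as f:
            return f.readline().strip()  # first line holds the project directory
    return path

# Demo with a fake pointer file like the one in the traceback:
with open("zappa.egg-link", "w") as f:
    f.write("/Users/joao/testes/zappa-1/1/ve/src/zappa\n.")
print(resolve_editable("zappa.egg-link"))
```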
## Steps to Reproduce
With the following zappa_settings.json:
```
{
"dev": {
"project_name": "1",
"runtime": "python3.6",
"slim_handler": true
}
}
```
Run on a virtualenv: `pip install -e git+https://github.com/miserlou/zappa#egg=zappa`
and then `zappa package`.
## Your Environment
* Zappa version used: 0.46.1 from git
* Operating System and Python version: Mac OS X 10.12.6, python 3.6.4 (from home brew)
* The output of `pip freeze`:
```
argcomplete==1.9.3
base58==1.0.0
boto3==1.7.45
botocore==1.10.45
certifi==2018.4.16
cfn-flip==1.0.3
chardet==3.0.4
click==6.7
docutils==0.14
durationpy==0.5
first==2.0.1
future==0.16.0
hjson==3.0.1
idna==2.7
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
pip-tools==2.0.2
placebo==0.8.1
python-dateutil==2.6.1
python-slugify==1.2.4
PyYAML==3.12
requests==2.19.1
s3transfer==0.1.13
six==1.11.0
toml==0.9.4
tqdm==4.19.1
troposphere==2.3.0
Unidecode==1.0.22
urllib3==1.23
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
-e git+https://github.com/miserlou/zappa@d99c193e32733946fb52a4f9b2bdfd1d2929ba49#egg=zappa
```
* Your `zappa_settings.py`: already above.
| open | 2018-06-26T16:01:02Z | 2020-04-03T17:17:31Z | https://github.com/Miserlou/Zappa/issues/1549 | [] | jneves | 5 |
deepfakes/faceswap | machine-learning | 1,145 | bug on require CPU (Mac Big Sur) | At the end of the pip requirements install:
Collecting pynvx==1.0.0
Using cached pynvx-1.0.0-cp39-cp39-macosx_10_9_x86_64.whl (119 kB)
ERROR: Could not find a version that satisfies the requirement tensorflow<2.5.0,>=2.2.0
ERROR: No matching distribution found for tensorflow<2.5.0,>=2.2.0
Any ideas on how to resolve this?
"bug"
] | SiNaPsEr0x | 8 |
babysor/MockingBird | pytorch | 928 | The system cannot find the specified path: 'encoder\\saved_models' | **Summary [one-sentence description]**
Running `python encoder_train.py bfx ./` fails with an error saying the path cannot be found.
**Env & To Reproduce [environment and reproduction]**
**Screenshots[截图(如有)]**

| closed | 2023-07-01T08:56:15Z | 2023-09-04T13:59:18Z | https://github.com/babysor/MockingBird/issues/928 | [] | 2265290305 | 0 |
jwkvam/bowtie | plotly | 240 | use yarn.lock | A yarn.lock (https://yarnpkg.com/lang/en/docs/yarn-lock/) will make for a consistent JS environment (other than node and yarn themselves, of course).
- [ ] release process to create yarn.lock from package.json
- [ ] use yarn.lock instead of package.json
Might not be able to do anything with component packages. I don't think it makes sense to have a yarn.lock per component...hmm | open | 2018-09-20T04:45:39Z | 2018-09-20T06:05:28Z | https://github.com/jwkvam/bowtie/issues/240 | [
"reliability"
] | jwkvam | 0 |
Esri/arcgis-python-api | jupyter | 1,835 | Feature Service Cloning renames Feature Service URL unnecessarily | **Describe the bug**
Cloning a Feature Service with a GUID in the name (e.g. as generated when Survey123 creates a Hosted Feature Service) from one Portal instance to another results in the renaming of the service name at the target, even if the end service name does not exist at the target.
For example:
https://source/../Hosted/**survey123_adfdf243f6be4944bbc46ab1b2d75d7f_results**/FeatureServer
becomes
https://target/../Hosted/**survey123_e09d2_results**/FeatureServer (or similar)
**To Reproduce**
Use .clone_items(clone_items, copy_data=True, search_existing_items=True) to clone a Survey123 source Hosted Feature Layer to a new Target portal
error:
N/A
**Screenshots**
N/A
**Expected behavior**
If the service name does not yet exist at the target, it should remain the same as the source.
**Platform (please complete the following information):**
- OS: Windows
- Browser : NA
- Python API Version: 2.3.0
**Additional context**
If I monkey patch ../clone.py, line 2945 as follows, no renaming occurs:
```
if not self.target.content.is_service_name_available(name, "featureService"):
    name = self._get_unique_name(self.target, name)
```
| open | 2024-05-23T00:45:04Z | 2024-05-23T09:43:34Z | https://github.com/Esri/arcgis-python-api/issues/1835 | [
"bug"
] | shane-pienaar-gbs | 0 |
art049/odmantic | pydantic | 265 | Model.update not detecting datetime change in FastAPI | # Bug
I'm trying to automatically update the _modified_at_ field in my **Server** model from a FastAPI endpoint, but the **Model.update** method doesn't update the field.
### Current Behavior
For this i have the base model to **create** a Server (simplified version):
```
class Server(Model):
id: UUID = Field(primary_field=True)
name: str
created_at: datetime = Field(default_factory=datetime.now)
modified_at: datetime = Field(default_factory=datetime.now)
```
and a model used to **patch** the Server that automatically updates the _modified_at_ field (simplified version):
```
class ServerUpdate(BaseModel):
name: Optional[str]
modified_at: Optional[datetime]
class Config:
validate_assignment = True
@root_validator
def number_validator(cls, values):
values["modified_at"] = datetime.now()
return values
```
In my FastAPI endpoint i have something like this:
```
@router.put("/{id}", response_model=Server)
async def update_server_by_id(id: UUID, patch: ServerUpdate):
server = await engine.find_one(Server, Server.id == id)
if server is None:
raise HTTPException(404)
server.update(patch)
try:
await engine.save(server)
return server
except Exception:
raise HTTPException(500)
```
The _modified_at_ field is **never updated**.
If I `print(patch.dict())` it gives me:
`{'name': None, 'modified_at': datetime.datetime(2022, 9, 11, 11, 4, 6, 319829)}`
If I `print(server.dict())` before the update it gives me:
`{'id': UUID('c4dfa7ac-d25d-45cb-8bc3-f59e5938a58c'), 'name': 'Server 1', 'created_at': datetime.datetime(2022, 9, 11, 9, 46, 45, 234000), 'modified_at': datetime.datetime(2022, 9, 11, 9, 46, 45, 234000)}`
so the **patch** instance has the current datetime.
The only way I found to make it work is to set **`patch.modified_at = patch.modified_at`** before `server.update(patch)`
### Expected behavior
The Model.update method should update the datetime field.
### Environment
- ODMantic version: 0.8.0
- MongoDB version: 5.0.12
- Pydantic infos (output of `python -c "import pydantic.utils; print(pydantic.utils.version_info())`):
```
pydantic version: 1.10.1
pydantic compiled: True
install path: D:\Projects\Python\vendor-system-fastapi\.venv\Lib\site-packages\pydantic
python version: 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
platform: Windows-10-10.0.19043-SP0
optional deps. installed: ['dotenv', 'typing-extensions']
```
- Version of additional modules (if relevant):
- fastapi: 0.82.0
**Additional context**
_Add any other context about the problem here._
| open | 2022-09-11T09:38:16Z | 2022-11-11T05:12:05Z | https://github.com/art049/odmantic/issues/265 | [
"bug"
] | M4rk3tt0 | 4 |
apachecn/ailearning | python | 652 | Ggg | Gggg```
[_****_]()
``` | closed | 2024-08-24T22:06:57Z | 2024-08-25T12:23:41Z | https://github.com/apachecn/ailearning/issues/652 | [] | ghost | 1 |
polakowo/vectorbt | data-visualization | 177 | import errors | I am getting the following errors
```
from vectorbt.portfolio.nb import auto_call_seq_ctx_nb
ImportError: cannot import name 'auto_call_seq_ctx_nb' from 'vectorbt.portfolio.nb'

from vectorbt.portfolio.nb import order_nb
ImportError: cannot import name 'order_nb' from 'vectorbt.portfolio.nb'
```
| closed | 2021-06-23T15:07:22Z | 2024-03-16T09:27:56Z | https://github.com/polakowo/vectorbt/issues/177 | [] | hayilmaz | 6 |
frol/flask-restplus-server-example | rest-api | 43 | ImportError: cannot import name 'force_auto_coercion' | On centos 6 and python 3.4 when executing invoke app.run following error occurs
`2017-01-11 13:04:17,614 [INFO] [tasks.app.dependencies] Project dependencies are installed.
2017-01-11 13:04:17,615 [INFO] [tasks.app.dependencies] Installing Swagger UI assets...
2017-01-11 13:04:17,616 [INFO] [tasks.app.dependencies] Downloading Swagger UI assets...
2017-01-11 13:04:17,909 [INFO] [tasks.app.dependencies] Unpacking Swagger UI assets...
2017-01-11 13:04:18,069 [INFO] [tasks.app.dependencies] Swagger UI is installed.
Traceback (most recent call last):
File "/usr/bin/invoke", line 11, in <module>
sys.exit(program.run())
File "/usr/lib/python3.4/site-packages/invoke/program.py", line 274, in run
self.execute()
File "/usr/lib/python3.4/site-packages/invoke/program.py", line 389, in execute
executor.execute(*self.tasks)
File "/usr/lib/python3.4/site-packages/invoke/executor.py", line 113, in execute
result = call.task(*args, **call.kwargs)
File "/usr/lib/python3.4/site-packages/invoke/tasks.py", line 111, in __call__
result = self.body(*args, **kwargs)
File "/root/flask-restplus-server-example-master/tasks/app/run.py", line 40, in run
app = create_app()
File "/root/flask-restplus-server-example-master/app/__init__.py", line 62, in create_app
from . import extensions
File "/root/flask-restplus-server-example-master/app/extensions/__init__.py", line 15, in <module>
from sqlalchemy_utils import force_auto_coercion, force_instant_defaults
ImportError: cannot import name 'force_auto_coercion'
`
| closed | 2017-01-11T07:38:47Z | 2017-01-11T09:18:19Z | https://github.com/frol/flask-restplus-server-example/issues/43 | [] | akashtalole | 2 |
ivy-llc/ivy | tensorflow | 28,475 | Fix Ivy Failing Test: paddle - activations.softplus | To-do List: https://github.com/unifyai/ivy/issues/27501 | closed | 2024-03-04T12:22:56Z | 2024-03-17T12:41:32Z | https://github.com/ivy-llc/ivy/issues/28475 | [
"Sub Task"
] | ZJay07 | 0 |
huggingface/text-generation-inference | nlp | 2,952 | CUDA Out of memory when using the benchmarking tool with batch size greater than 1 | ### System Info
- TGI v3.0.1
- OS: GCP Container-Optimized OS
- 4xL4 GPUs (24GB memory each)
- Model is `hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4`
As soon as I run the TGI benchmarking tool (text-generation-benchmark) with the desired input length for our use case and batch size of 2, I get CUDA Out of Memory and the TGI server stops.
TGI starting command:
```
docker run -d --network shared_network_no_internet \
--volume /var/lib/nvidia/lib64:/usr/local/nvidia/lib64 \
--volume /var/lib/nvidia/bin:/usr/local/nvidia/bin \
--volume /mnt/disks/model:/mnt/disks/model \
--device /dev/nvidia0:/dev/nvidia0 \
--device /dev/nvidia1:/dev/nvidia1 \
--device /dev/nvidia2:/dev/nvidia2 \
--device /dev/nvidia3:/dev/nvidia3 \
--device /dev/nvidia-uvm:/dev/nvidia-uvm \
--device /dev/nvidiactl:/dev/nvidiactl \
-e HF_HUB_OFFLINE=true \
-e CUDA_VISIBLE_DEVICES=0,1,2,3 \
--shm-size 1g \
--name tgi-llama \
ghcr.io/huggingface/text-generation-inference:3.0.1 \
--model-id /mnt/disks/model/models--hugging-quants--Meta-Llama-3.1-70B-Instruct-AWQ-INT4 \
--port 3000 \
--sharded true \
--num-shard 4 \
--max-input-length 16500 \
--max-total-tokens 17500 \
--max-batch-size 7 \
--max-batch-total-tokens 125000 \
--max-concurrent-requests 30 \
--quantize awq
```
When starting TGI without `max-batch-total-tokens`, the logs showed that I had a max batch total tokens budget of 134902 available. That's why I came up with a config like `--max-batch-size 7 --max-batch-total-tokens 125000`
```
INFO text_generation_launcher: Using prefill chunking = True
INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-3
INFO shard-manager: text_generation_launcher: Shard ready in 148.169397034s rank=3
INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=2
INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=0
INFO shard-manager: text_generation_launcher: Waiting for shard to be ready... rank=1
INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-1
INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-2
INFO text_generation_launcher: Server started at unix:///tmp/text-generation-server-0
INFO shard-manager: text_generation_launcher: Shard ready in 150.767169532s rank=2
INFO shard-manager: text_generation_launcher: Shard ready in 150.800946226s rank=0
INFO shard-manager: text_generation_launcher: Shard ready in 150.812528352s rank=1
INFO text_generation_launcher: Starting Webserver
INFO text_generation_router_v3: backends/v3/src/lib.rs:125: Warming up model
INFO text_generation_launcher: Using optimized Triton indexing kernels.
INFO text_generation_launcher: KV-cache blocks: 134902, size: 1
INFO text_generation_launcher: Cuda Graphs are enabled for sizes [32, 16, 8, 4, 2, 1]
INFO text_generation_router_v3: backends/v3/src/lib.rs:137: Setting max batch total tokens to 134902
WARN text_generation_router_v3::backend: backends/v3/src/backend.rs:39: Model supports prefill chunking. waiting_served_ratio and max_waiting_tokens will be ignored.
INFO text_generation_router_v3: backends/v3/src/lib.rs:166: Using backend V3
INFO text_generation_router::server: router/src/server.rs:1873: Using config Some(Llama)
```
This is how the GPU memory looks after server startup:
<img width="763" alt="Image" src="https://github.com/user-attachments/assets/40b9f9f4-25e0-425a-9844-5b571dc88f35" />
I then run the benchmarking tool like this and get the OOM error:
```
text-generation-benchmark \
--tokenizer-name /mnt/disks/model/models--hugging-quants--Meta-Llama-3.1-70B-Instruct-AWQ-INT4 \
--sequence-length 6667 \
--decode-length 1000 \
--batch-size 2
```
### Information
- [x] Docker
- [x] The CLI directly
### Tasks
- [x] An officially supported command
- [ ] My own modifications
### Reproduction
I run the benchmarking tool like this and get the OOM error:
```
text-generation-benchmark \
--tokenizer-name /mnt/disks/model/models--hugging-quants--Meta-Llama-3.1-70B-Instruct-AWQ-INT4 \
--sequence-length 6667 \
--decode-length 1000 \
--batch-size 2
```
This is what the error looks like:
```
2025-01-24T14:15:44.533276Z ERROR prefill{id=0 size=2}:prefill{id=0 size=2}: text_generation_client: backends/client/src/lib.rs:46: Server error: CUDA out of memory. Tried to allocate 182.00 MiB. GPU 1 has a total capacity of 22.06 GiB of which 103.12 MiB is free. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 21.40 GiB is allocated by PyTorch, with 24.46 MiB allocated in private pools (e.g., CUDA Graphs), and 41.60 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2025-01-24T14:15:44.535857Z ERROR prefill{id=0 size=2}:prefill{id=0 size=2}: text_generation_client: backends/client/src/lib.rs:46: Server error: CUDA out of memory. Tried to allocate 182.00 MiB. GPU 3 has a total capacity of 22.06 GiB of which 103.12 MiB is free. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 21.40 GiB is allocated by PyTorch, with 24.46 MiB allocated in private pools (e.g., CUDA Graphs), and 41.60 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2025-01-24T14:15:44.536587Z ERROR prefill{id=0 size=2}:prefill{id=0 size=2}: text_generation_client: backends/client/src/lib.rs:46: Server error: CUDA out of memory. Tried to allocate 182.00 MiB. GPU 0 has a total capacity of 22.06 GiB of which 103.12 MiB is free. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 21.40 GiB is allocated by PyTorch, with 24.46 MiB allocated in private pools (e.g., CUDA Graphs), and 41.60 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
2025-01-24T14:15:44.536907Z ERROR prefill{id=0 size=2}:prefill{id=0 size=2}: text_generation_client: backends/client/src/lib.rs:46: Server error: CUDA out of memory. Tried to allocate 182.00 MiB. GPU 2 has a total capacity of 22.06 GiB of which 103.12 MiB is free. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 21.40 GiB is allocated by PyTorch, with 24.46 MiB allocated in private pools (e.g., CUDA Graphs), and 41.60 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
### Expected behavior
Because of the reported `Setting max batch total tokens to 134902` I am expecting the TGI server to be able to handle requests of 6667 tokens in batches of 2. Is that not the case? What am I missing here?
Is it possible that the benchmarking tool is doing something weird?
Thank you! | open | 2025-01-24T14:06:33Z | 2025-01-24T14:17:51Z | https://github.com/huggingface/text-generation-inference/issues/2952 | [] | mborisov-bi | 0 |
MaartenGr/BERTopic | nlp | 1,156 | Arrangement of topic embeddings result | Hi @MaartenGr, thank you for this wonderful package as it really helped in our use case, though I want to clarify the arrangement of the `topic_embeddings_` output. If I have let's say topics output like [-1, 0, 1, ..., 13]. is the `topic_embeddings_` result also ordered in such a way? or [0, 1, ..., 13, -1]. Thank you very much. | closed | 2023-04-05T08:33:59Z | 2023-04-06T13:19:20Z | https://github.com/MaartenGr/BERTopic/issues/1156 | [] | tanstephenjireh | 1 |
jschneier/django-storages | django | 1,214 | django-storages and mypy integration | When using mypy, the next error appeared:
"Skipping analyzing "storages.backends.s3boto3": module is installed, but missing library stubs or py.typed marker [import]mypy"
As for third-party modules, the solution will be to add support by bulding a django-storages-stubs package, so it can integrate well.
For now, for any user that may face this error, in the mypy.ini, add the following lines so mypy ignores the error:
```
[mypy-storages.*]
ignore_missing_imports = true
```
| closed | 2023-02-12T03:28:09Z | 2023-10-22T00:46:44Z | https://github.com/jschneier/django-storages/issues/1214 | [] | KMoreno96 | 1 |
koaning/scikit-lego | scikit-learn | 34 | feature request: FeatureSmoother/ConstrainedSmoother | 
it might be epic if you could smooth out every column in `X` with regards to `y` as a transformer before it goes into an estimator. when looking at the loe(w)ess model this seems to be exactly what i want. not sure if it is super useful tho. | closed | 2019-03-18T13:29:28Z | 2020-01-24T21:40:33Z | https://github.com/koaning/scikit-lego/issues/34 | [] | koaning | 3
alteryx/featuretools | scikit-learn | 2,057 | Improve tests for primitives that use Datetime inputs | For most datetime primitives, improper conversion of a timezone aware datetime value to a different timezone or UTC could result in the primitives returning incorrect values. For example, converting a time in the US to UTC will result in a change in the hour of the datetime and could also result in a change in the Day, Month, Week, or Year, depending on the datetime values.
We need to be careful to avoid improper timezone conversions inside the primitive functions. We should add tests for all datetime primitives in which a conversion between timezones could result in an incorrect output to make sure we aren't inadvertently performing operations inside the primitives that cause incorrect results. | open | 2022-05-11T18:39:20Z | 2023-06-26T19:10:49Z | https://github.com/alteryx/featuretools/issues/2057 | [] | thehomebrewnerd | 0 |
deezer/spleeter | tensorflow | 566 | [Feature] --version ? | ## Description
I need to know which version of spleeter is installed
for this, I would use `spleeter --version` but I can't find this information anywhere.
Does this feature exist?
Thanks | closed | 2021-01-25T13:14:35Z | 2021-01-30T04:22:30Z | https://github.com/deezer/spleeter/issues/566 | [
"enhancement",
"feature"
] | martinratinaud | 3 |
Lightning-AI/pytorch-lightning | pytorch | 19,955 | Adam optimizer is slower after loading model from checkpoint | ### Bug description
When I was resuming my model training from a checkpoint, I noticed slow GPU utilization. I found the problem: Adam does a CUDA sync after restoring from a checkpoint. This is a problem if you have a lot of optimizers in your network.
The Adam implementation assumes that the `step` component of the state is a CPU tensor. The assumption is made [here](https://github.com/pytorch/pytorch/blob/main/torch/optim/optimizer.py#L98), which is executed in Adam [here](https://github.com/pytorch/pytorch/blob/main/torch/optim/adam.py#L414).
The problem is that Lightning puts all optimizer state on the GPU [here](https://github.com/Lightning-AI/pytorch-lightning/blob/master/src/lightning/fabric/utilities/optimizer.py#L34).
My current workaround is:
```python
def training_step(
self,
batch: Any,
batch_idx: int,
dataloader_idx: int = 0,
):
print("training_step")
optimizer = self.optimizers()
for _, vv in optimizer.state.items():
if "step" in vv and vv["step"].device.type == "cuda":
vv["step"] = vv["step"].cpu()
```
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```python
import os
from typing import Any, Tuple
import lightning.pytorch as plight
import lightning.pytorch as pl
import torch
import torch.nn as nn
from lightning.pytorch.callbacks import ModelCheckpoint
from torch.utils.data import DataLoader
num_features = 6875
num_responses = 7
batch_size = 32768
class CachedRandomTensorDataset(torch.utils.data.Dataset):
"""Very low overhead torch dataset for training for a given number of steps"""
def __init__(self, batch_size: int, num_features: int, num_responses: int, length: int) -> None:
self.x = torch.randn((batch_size, num_features))
self.y = torch.randn((batch_size, num_responses))
self.length = length
def __getitem__(self, idx: int) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
return self.x.clone(), self.y.clone()
def __len__(self) -> int:
return self.length
dataset = CachedRandomTensorDataset(
num_features=num_features,
num_responses=num_responses,
length=1013,
batch_size=batch_size,
)
train_dataloader = DataLoader(dataset, batch_size=None, pin_memory=False, num_workers=0, shuffle=False)
class MLP(nn.Module):
def __init__(
self,
in_dim,
hidden_dim,
out_dim,
):
super().__init__()
self.layers = len(hidden_dim)
self.LinearClass = nn.Linear
self.activation_fn = nn.ReLU()
module_dict = {}
for i in range(self.layers):
layer_input_size = in_dim if i == 0 else hidden_dim[i - 1]
module_dict[f"layer_{i}"] = nn.Linear(layer_input_size, hidden_dim[i])
module_dict["last_linear"] = nn.Linear(hidden_dim[-1], out_dim)
self.module_dict = nn.ModuleDict(module_dict)
def forward(self, x):
for i in range(self.layers):
x = self.module_dict[f"layer_{i}"](x)
x = self.activation_fn(x)
yhat = self.module_dict["last_linear"](x)
return yhat
class TestNetwork(pl.LightningModule):
def __init__(
self,
model: nn.Module,
num_it: int,
**kwargs: Any,
) -> None:
super().__init__(**kwargs)
self.automatic_optimization = False
self.model = model
self.mse = nn.MSELoss()
self.num_it = num_it
def configure_optimizers(self, name=None):
optimizer = torch.optim.Adam(self.parameters(), lr=0.01)
return optimizer
def training_step(
self,
batch: Any,
batch_idx: int,
dataloader_idx: int = 0,
):
print("training_step")
optimizer = self.optimizers()
for _ in range(self.num_it):
torch.cuda.nvtx.range_push("it step")
x, y = batch
yhat = self.model.forward(x)
loss = self.mse(yhat, y)
optimizer.zero_grad()
self.manual_backward(loss)
torch.cuda.nvtx.range_push("optimizer")
optimizer.step()
torch.cuda.nvtx.range_pop()
torch.cuda.nvtx.range_pop()
train_model = TestNetwork(
MLP(
num_features,
[2048, 1024, 512, 256],
num_responses,
),
200,
)
trainer_max_steps = 200
checkpoint_name = "debug3"
checkpoint_dir = "./model_checkpoint"
ckpt_path = f"{checkpoint_dir}/{checkpoint_name}-step={trainer_max_steps}.ckpt"
if os.path.isfile(ckpt_path):
print("training from checkpoint")
trainer_max_steps = trainer_max_steps + 1
else:
print("training new model")
ckpt_path = None
checkpoint_callback = ModelCheckpoint(
dirpath=checkpoint_dir,
save_top_k=10,
monitor="step",
mode="max",
filename=checkpoint_name + "-{step:02d}",
every_n_train_steps=100,
)
# TRAINER CREATION
trainer = plight.Trainer(
accelerator="gpu",
devices=1,
num_nodes=1,
max_steps=trainer_max_steps,
max_epochs=1,
log_every_n_steps=50,
logger=[],
enable_progress_bar=True,
enable_checkpointing=True,
enable_model_summary=True,
num_sanity_val_steps=0,
check_val_every_n_epoch=None,
callbacks=[checkpoint_callback],
)
torch.cuda.set_sync_debug_mode(1)
trainer.fit(
train_model,
train_dataloader,
ckpt_path=ckpt_path,
)
```
### Error messages and logs
```
# Error messages and logs here please
```
below some nsys traces


### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA A100-SXM4-80GB
- available: True
- version: 12.1
* Lightning:
- gpytorch: 1.11
- lightning: 2.2.5
- lightning-utilities: 0.11.2
- pytorch-lightning: 2.2.5
- torch: 2.3.1
- torchinfo: 1.8.0
- torchmetrics: 1.3.1
- torchtyping: 0.1.4
- torchvision: 0.18.0
- torchviz: 0.0.2
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.10.12
- release: 5.15.0-91-generic
- version: #101-Ubuntu SMP Tue Nov 14 13:30:08 UTC 2023
</details>
### More info
_No response_
cc @borda | closed | 2024-06-07T10:14:25Z | 2024-08-05T22:13:41Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19955 | [
"bug",
"help wanted",
"optimization",
"performance"
] | radomirgr | 27 |
SciTools/cartopy | matplotlib | 1,915 | PyProj transformers often reinstantiated | Profiling an animation I've been fiddling with, I noticed about 10 seconds (~15% of the total run time) was spent in `_safe_pj_transform`:
```
24631906 function calls (23998894 primitive calls) in 62.484 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
...
4660 0.025 0.000 11.467 0.002 geoaxes.py:117(transform_non_affine)
4660 0.407 0.000 11.441 0.002 crs.py:336(transform_points)
...
4660 0.016 0.000 10.998 0.002 crs.py:42(_safe_pj_transform)
4660 0.025 0.000 10.859 0.002 transformer.py:471(from_crs)
4660 0.033 0.000 10.813 0.002 transformer.py:298(__init__)
4660 0.009 0.000 10.764 0.002 transformer.py:91(__call__)
4660 10.724 0.002 10.746 0.002 {pyproj._transformer.from_crs}
```
Constantly creating new `Transformer` instances seems to be quite expensive. A quick hack improves things drastically:
```
sage: import functools
sage: ccrs.Transformer.from_crs = functools.cache(ccrs.Transformer.from_crs)
sage: cProfile.run('ani.save("/tmp/test.gif")', sort=pstats.SortKey.CUMULATIVE)
24871283 function calls (24218669 primitive calls) in 51.364 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
...
4660 0.020 0.000 0.322 0.000 geoaxes.py:117(transform_non_affine)
...
4660 0.147 0.000 0.301 0.000 crs.py:336(transform_points)
...
4660 0.019 0.000 0.123 0.000 crs.py:42(_safe_pj_transform)
...
1 0.000 0.000 0.003 0.003 transformer.py:471(from_crs)
1 0.000 0.000 0.003 0.003 transformer.py:298(__init__)
1 0.000 0.000 0.003 0.003 transformer.py:91(__call__)
...
1 0.003 0.003 0.003 0.003 {pyproj._transformer.from_crs}
```
Is there any chance Cartopy could take care of caching transformers in a similar manner? | closed | 2021-10-28T22:47:48Z | 2021-11-14T06:02:38Z | https://github.com/SciTools/cartopy/issues/1915 | [] | Bob131 | 5 |
vitalik/django-ninja | django | 1345 | Can I cache a variable in ModelSchema? | Code like this: I query `SomeModel` three times to build an `OutSchema`. Question: can I cache the `x` result to reduce DB queries?
```Python
class OutSchema(ModelSchema):
out_a: str
out_b: str,
out_c: str
class Config:
model = M
@staticmethod
def resolve_out_a(obj):
x = SomeModel.objects.get(name=obj.xxx)
return x.out_a
@staticmethod
def resolve_out_b(obj):
x = SomeModel.objects.get(name=obj.xxx)
return x.out_b
@staticmethod
def resolve_out_c(obj):
x = SomeModel.objects.get(name=obj.xxx)
return x.out_c
```
| open | 2024-11-26T07:25:01Z | 2024-11-26T08:40:24Z | https://github.com/vitalik/django-ninja/issues/1345 | [] | pyang30 | 1 |
albumentations-team/albumentations | deep-learning | 1,781 | Huggingface demo link in docs does not allow user uploaded images | ## Describe the bug
Huggingface demo link in docs does not allow user uploaded images.
### To Reproduce
Test here:
https://huggingface.co/spaces/qubvel-hf/albumentations-demo?transform=CLAHE | closed | 2024-06-08T19:35:46Z | 2024-09-19T04:31:04Z | https://github.com/albumentations-team/albumentations/issues/1781 | [
"feature request"
] | ogencoglu | 3 |
ansible/ansible | python | 84,131 | Cleanup conflicting `/etc/apt/sources.list.d/*.list` when adding `/etc/apt/sources.list.d/*.sources` | ### Summary
I've been using the new `deb822_repository` to declare apt repositories, but when I ran it on an existing machine that already had some repos configured the old way, I got a lot of warnings like this:
```
W: Target CNF (main/cnf/Commands-amd64) is configured multiple times in /etc/apt/sources.list.d/vscode.list:3 and /etc/apt/sources.list.d/vscode.sources:1
W: Target Packages (non-free/binary-all/Packages) is configured multiple times in /etc/apt/sources.list.d/spotify.list:1 and /etc/apt/sources.list.d/spotify.sources:1
...
```
Would it make sense to have an option in `deb822_repository` that would do cleanup beforehand to avoid these warnings?
### Issue Type
Feature Idea
### Component Name
deb822_repository
### Additional Information
I suppose it would require a new parameter to enable this behavior; the default would be disabled for compatibility reasons.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | closed | 2024-10-17T01:25:26Z | 2024-10-31T13:00:03Z | https://github.com/ansible/ansible/issues/84131 | [
"feature"
] | UnknownPlatypus | 2 |
keras-team/keras | data-science | 20,801 | Optimizer don't have apply() method | ### Environment:
_Python 3.12.7
Tensorflow 2.16.1
Keras 3.8.0_
### Problem:
None of the optimizers from `tf.keras.optimizers` have an `apply()` method for writing a training routine from [scratch](https://keras.io/guides/writing_a_custom_training_loop_in_tensorflow/).
But the docs state [that it exists](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam#apply).
```
AttributeError Traceback (most recent call last)
Cell In[11], line 1
----> 1 getattr(tf.keras.optimizers.Optimizer(name="test"), "apply")
AttributeError: 'Optimizer' object has no attribute 'apply'
```
### Possible solution
Need to use method `apply_gradients(zip(gradients, model_parameters))` | open | 2025-01-23T13:20:11Z | 2025-03-12T02:04:34Z | https://github.com/keras-team/keras/issues/20801 | [
"type:support",
"stat:awaiting response from contributor",
"stale"
] | konstantin-frolov | 4 |
littlecodersh/ItChat | api | 961 | Request: add a web-interface File Transfer Helper login method, to open a path for WeChat accounts banned from the web version. | Isn't UOS blocked as well?
My WeChat was banned from web login (https://wx.qq.com/) because I used itchat.
But the web File Transfer Helper released in December (https://filehelper.weixin.qq.com) is not restricted.
I'd like to ask whether there is any package that can call the File Transfer Helper web interface to send messages to my own WeChat.
There should be no reason to restrict me from sending messages to my own phone, right?
Something like
```
itchat.auto_login(hotReload=True,fileHelper=True)
itchat.send('hello filehelper','filehelper')
```
| open | 2022-02-10T09:09:31Z | 2022-05-06T09:35:18Z | https://github.com/littlecodersh/ItChat/issues/961 | [] | ftune | 2 |
NullArray/AutoSploit | automation | 1,051 | Unhandled Exception (aac44356a) | Autosploit version: `3.1.1`
OS information: `Linux-4.19.0-kali3-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error message: ``/root/AutoSploit/hosts.txt` and `/root/AutoSploit/hosts.txt` are the same file`
Error traceback:
```
Traceback (most recent call):
File "/root/AutoSploit/autosploit/main.py", line 116, in main
terminal.terminal_main_display(loaded_tokens)
File "/root/AutoSploit/lib/term/terminal.py", line 543, in terminal_main_display
self.do_load_custom_hosts(choice_data_list[-1])
File "/root/AutoSploit/lib/term/terminal.py", line 407, in do_load_custom_hosts
shutil.copy(file_path, lib.settings.HOST_FILE)
File "/usr/lib/python2.7/shutil.py", line 139, in copy
copyfile(src, dst)
File "/usr/lib/python2.7/shutil.py", line 83, in copyfile
raise Error("`%s` and `%s` are the same file" % (src, dst))
Error: `/root/AutoSploit/hosts.txt` and `/root/AutoSploit/hosts.txt` are the same file
```
Metasploit launched: `False`
| closed | 2019-04-20T09:35:43Z | 2019-09-03T21:17:02Z | https://github.com/NullArray/AutoSploit/issues/1051 | [
"bug"
] | AutosploitReporter | 1 |
Sanster/IOPaint | pytorch | 82 | Add resizable brush when alt+left mouse button is pressed | In photoshop, you could change the size of the brush when you pressed alt and held down the left mouse button. It would be convenient to use it here as well. Or as in [Ballon Translator](https://github.com/dmMaze/BallonsTranslator)
Example video:
https://user-images.githubusercontent.com/57861007/194719080-536bf0c8-318f-45d2-b068-d3000995936e.mp4
| closed | 2022-10-08T17:05:34Z | 2022-11-08T01:06:08Z | https://github.com/Sanster/IOPaint/issues/82 | [] | bropines | 2 |
nalepae/pandarallel | pandas | 19 | ArrowIOError: Could not connect to socket /tmp/test_plasma-pxm7ybav/plasma.sock | Hi,
I installed pandarallel via `pip install pandarallel`, and it cannot initialize properly. It seems that I cannot connect to the socket, and I cannot find the exact file mentioned in the error message even though the folder exists. Please see below.

| closed | 2019-04-09T03:13:56Z | 2019-11-09T22:57:39Z | https://github.com/nalepae/pandarallel/issues/19 | [] | Gabrielvon | 3 |
alteryx/featuretools | scikit-learn | 1,966 | How to restrict operations between two fields | I have four fields ['a', 'b', 'c', 'd']. How can I do the following with featuretools:
1. Generate a new feature, and the feature value is the ratio between two of the above fields;
2. The division between 'b' and 'd' is not allowed to generate new features, and the others can be used.
Thank you very much! | closed | 2022-03-16T11:14:58Z | 2022-03-18T14:37:02Z | https://github.com/alteryx/featuretools/issues/1966 | [] | jingsupo | 3 |
trevorstephens/gplearn | scikit-learn | 147 | Remove custom estimator checks when sklearn removes multi-class requirement | Ref: https://github.com/scikit-learn/scikit-learn/issues/6715#issuecomment-476087371 | closed | 2019-03-27T08:45:50Z | 2020-02-13T11:05:29Z | https://github.com/trevorstephens/gplearn/issues/147 | [
"dependencies"
] | trevorstephens | 2 |
assafelovic/gpt-researcher | automation | 555 | There is a minor bug in `DocumentLoader._load_document()` | __User Experience:__
* While reading directories that include document types not supported by the application, GPT-Researcher crashes.
__The Problem:__
* The function must return an iterable,
* which is what the `except Exception` handler is trying to manage by including a `return []`.
* However, if `loader = loader_dict.get(file_extension, None)` returns `None`,
* which it will if the `file_extension` is not in the `loader_dict` dictionary.
* the function is successfully completed, without ever hitting a `return` statement thus by default it returns None.
__The Solution:__
* The simplest solution would be to add an else on the if loader
```
if loader:
data = loader.load()
return data
else:
return [] # Fixes Bug
```
* My personal preference would be if your method is expected to return an iterable.
* Make that return the last line of the method, and don't allow other return paths.
* Rationale: This can help in maintainability as someone who does not know the code can immediately see that it returns an iterable.
```python
async def _load_document(self, file_path: str, file_extension: str) -> list:
    ret_data = []
    try:
        loader_dict = {
            "pdf": PyMuPDFLoader(file_path),
            "txt": TextLoader(file_path),
            "doc": UnstructuredWordDocumentLoader(file_path),
            "docx": UnstructuredWordDocumentLoader(file_path),
            "pptx": UnstructuredPowerPointLoader(file_path),
            "csv": UnstructuredCSVLoader(file_path, mode="elements"),
            "xls": UnstructuredExcelLoader(file_path, mode="elements"),
            "xlsx": UnstructuredExcelLoader(file_path, mode="elements"),
            "md": UnstructuredMarkdownLoader(file_path)
        }
        loader = loader_dict.get(file_extension, None)
        if loader:
            ret_data = loader.load()
    except Exception as e:
        print(f"Failed to load document : {file_path}")
        print(e)
    return ret_data
```
Note: I'm happy to do a PR, but the fix is simple, and I was hoping other folks actively coding might just slip it in.
Thanks to all who are building such a great product here!! | closed | 2024-05-31T15:03:08Z | 2024-06-01T12:50:28Z | https://github.com/assafelovic/gpt-researcher/issues/555 | [] | DanHUMassMed | 1 |
neuml/txtai | nlp | 795 | Add audio signal processing and mixing methods | Add common utility methods for audio signal processing and mixing of multiple audio streams (i.e. speech and music). | closed | 2024-10-09T12:48:07Z | 2024-10-11T12:36:14Z | https://github.com/neuml/txtai/issues/795 | [] | davidmezzetti | 0 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 535 | When I press "Synthesise Only" a error appears. | Here's the log:
```
Traceback (most recent call last):
  File "C:\Users\ABLPHA\Desktop\mods\Source SDK\Real-Time-Voice-Cloning-master\toolbox\__init__.py", line 121, in <lambda>
    func = lambda: self.synthesize() or self.vocode()
  File "C:\Users\ABLPHA\Desktop\mods\Source SDK\Real-Time-Voice-Cloning-master\toolbox\__init__.py", line 229, in synthesize
    specs = self.synthesizer.synthesize_spectrograms(texts, embeds)
  File "C:\Users\ABLPHA\Desktop\mods\Source SDK\Real-Time-Voice-Cloning-master\synthesizer\inference.py", line 101, in synthesize_spectrograms
    [(self.checkpoint_fpath, embeddings, texts)])[0]
  File "C:\ProgramData\Anaconda3\envs\deep-voice-cloning-no-avx\lib\site-packages\multiprocess\pool.py", line 276, in starmap
    return self._map_async(func, iterable, starmapstar, chunksize).get()
  File "C:\ProgramData\Anaconda3\envs\deep-voice-cloning-no-avx\lib\site-packages\multiprocess\pool.py", line 657, in get
    raise self._value
NameError: name 'self' is not defined
```
I have CUDA 10.1, cuDNN 8.0.3.33, TF 1.14 GPU with no AVX, Python 3.7.6, running on Win10 | closed | 2020-09-29T18:24:49Z | 2020-09-30T07:55:48Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/535 | [] | mineLdiver | 11 |
gunthercox/ChatterBot | machine-learning | 1,629 | Chatterbot |
Hi,
I want to know how to save the chat history between the user and the bot to a file.
I also want to create a dynamic chat without providing any data beforehand: we just need to train on the chat-history file, and through that the bot can hold the conversation. | closed | 2019-02-21T04:44:22Z | 2019-12-19T09:37:33Z | https://github.com/gunthercox/ChatterBot/issues/1629 | [] | mady143 | 3 |
DistrictDataLabs/yellowbrick | matplotlib | 539 | TSNE size & title bug | **Describe the bug**
Looks like our `TSNEVisualizer` might have a bug that causes an error on instantiation if either the `size` or `title` parameters are used.
**To Reproduce**
```python
from yellowbrick.text import TSNEVisualizer
from sklearn.feature_extraction.text import TfidfVectorizer
corpus = load_data('hobbies')
tfidf = TfidfVectorizer()
docs = tfidf.fit_transform(corpus.data)
labels = corpus.target
tsne = TSNEVisualizer(size=(1080, 720))
```
or
```python
tsne = TSNEVisualizer(title="My Special TSNE Visualizer")
```
**Dataset**
This bug was triggered using the YB hobbies corpus.
**Expected behavior**
Users should be able to influence the size of the visualizer on instantiation using the `size` parameter and a tuple with `(width, height)` in pixels, and the title of the visualizer using the `title` parameter and a string.
**Traceback**
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-59-120fbfcec07c> in <module>()
----> 1 tsne = TSNEVisualizer(size=(1080, 720))
2 tsne.fit(labels)
3 tsne.poof()
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/yellowbrick/text/tsne.py in __init__(self, ax, decompose, decompose_by, labels, classes, colors, colormap, random_state, **kwargs)
180
181 # TSNE Parameters
--> 182 self.transformer_ = self.make_transformer(decompose, decompose_by, kwargs)
183
184 def make_transformer(self, decompose='svd', decompose_by=50, tsne_kwargs={}):
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/yellowbrick/text/tsne.py in make_transformer(self, decompose, decompose_by, tsne_kwargs)
234 # Add the TSNE manifold
235 steps.append(('tsne', TSNE(
--> 236 n_components=2, random_state=self.random_state, **tsne_kwargs)))
237
238 # return the pipeline
TypeError: __init__() got an unexpected keyword argument 'size'
```
or for `title`:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-64-92c88e0bdd33> in <module>()
----> 1 tsne = TSNEVisualizer(title="My Special TSNE Visualizer")
2 tsne.fit(labels)
3 tsne.poof()
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/yellowbrick/text/tsne.py in __init__(self, ax, decompose, decompose_by, labels, classes, colors, colormap, random_state, **kwargs)
180
181 # TSNE Parameters
--> 182 self.transformer_ = self.make_transformer(decompose, decompose_by, kwargs)
183
184 def make_transformer(self, decompose='svd', decompose_by=50, tsne_kwargs={}):
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/yellowbrick/text/tsne.py in make_transformer(self, decompose, decompose_by, tsne_kwargs)
234 # Add the TSNE manifold
235 steps.append(('tsne', TSNE(
--> 236 n_components=2, random_state=self.random_state, **tsne_kwargs)))
237
238 # return the pipeline
TypeError: __init__() got an unexpected keyword argument 'title'
```
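The mechanism behind both tracebacks can be sketched without yellowbrick at all (assumed simplification: the two callables below are stand-ins for `TSNEVisualizer` and sklearn's `TSNE`, not the real code). Every leftover keyword, including figure-level ones like `size` and `title`, is forwarded to the transformer, which rejects unknown kwargs:

```python
def make_tsne(n_components=2, random_state=None, perplexity=30.0):
    # Stand-in for sklearn's TSNE.__init__, which rejects unknown kwargs.
    return {"n_components": n_components, "perplexity": perplexity}

class SketchVisualizer:
    # Stand-in for TSNEVisualizer: it forwards *all* leftover kwargs
    # to the transformer instead of consuming figure-level ones first.
    def __init__(self, **kwargs):
        self.transformer_ = make_tsne(n_components=2, **kwargs)

SketchVisualizer(perplexity=50.0)          # fine: a real transformer kwarg
try:
    SketchVisualizer(size=(1080, 720))     # figure kwarg leaks through
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'size'
```

The fix is for the visualizer to consume figure-level keywords before forwarding the remainder to the transformer.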
**Desktop (please complete the following information):**
- macOS
- Python Version 3.6
- Yellowbrick Version 0.8
| closed | 2018-08-01T17:43:21Z | 2018-11-02T08:04:26Z | https://github.com/DistrictDataLabs/yellowbrick/issues/539 | [
"type: bug",
"priority: high"
] | rebeccabilbro | 6 |
ultralytics/ultralytics | pytorch | 19,323 | YOLOV8 segment | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
After performing instance segmentation with YOLOv8, why are the edges of the segmented objects not smooth and instead appear jagged? Thank you.
### Additional
_No response_ | open | 2025-02-20T06:34:21Z | 2025-02-26T10:33:38Z | https://github.com/ultralytics/ultralytics/issues/19323 | [
"question",
"segment"
] | 123lj-z | 8 |
microsoft/qlib | deep-learning | 1,196 | AttributeError: module 'collector' has no attribute 'YahooNormalizeUS1dExtend' while running update_data_to_bin | ## 🐛 Bug Description
After downloading all the US tickers, I attempted to update the prices using `python collector.py update_data_to_bin`.
One observation: it took almost the same amount of time as downloading all the data. Also, at the end of the execution the script crashed:
```
2022-07-13 06:22:32.848 | INFO | data_collector.base:_collector:197 - error symbol nums: 0
2022-07-13 06:22:32.848 | INFO | data_collector.base:_collector:198 - current get symbol nums: 12754
2022-07-13 06:22:32.848 | INFO | data_collector.base:collector_data:211 - 1 finish.
2022-07-13 06:22:32.848 | INFO | data_collector.base:collector_data:218 - total 12754, error: 0
Traceback (most recent call last):
  File "collector.py", line 1203, in <module>
    fire.Fire(Run)
  File "/opt/conda/envs/qlib/lib/python3.8/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/opt/conda/envs/qlib/lib/python3.8/site-packages/fire/core.py", line 466, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/opt/conda/envs/qlib/lib/python3.8/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "collector.py", line 1178, in update_data_to_bin
    self.normalize_data_1d_extend(qlib_data_1d_dir)
  File "collector.py", line 1058, in normalize_data_1d_extend
    _class = getattr(self._cur_module, f"{self.normalize_class_name}Extend")
AttributeError: module 'collector' has no attribute 'YahooNormalizeUS1dExtend'
```
## To Reproduce
Steps to reproduce the behavior:
1. Download all prices from US region
2. Run python collector.py update_data_to_bin --source_dir /us_source --normalize_dir /us_data_normal --qlib_data_1d_dir /us_data --region US --trading_date 2022-07-09 --end_date 2022-07-12
## Expected Behavior
Prices are updated without crashing
## Environment
```
Linux
x86_64
Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.10
#1 SMP Wed Mar 2 00:30:59 UTC 2022
Python version: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 06:04:10) [GCC 10.3.0]
Qlib version: 0.8.6.99
numpy==1.23.1
pandas==1.4.3
scipy==1.8.1
requests==2.28.1
sacred==0.8.2
python-socketio==5.6.0
redis==4.3.4
python-redis-lock==3.7.0
schedule==1.1.0
cvxpy==1.2.1
hyperopt==0.1.2
fire==0.4.0
statsmodels==0.13.2
xlrd==2.0.1
plotly==5.5.0
matplotlib==3.5.1
tables==3.7.0
pyyaml==6.0
mlflow==1.27.0
tqdm==4.64.0
loguru==0.6.0
lightgbm==3.3.2
tornado==6.1
joblib==1.1.0
fire==0.4.0
ruamel.yaml==0.17.21
```
## Additional Notes
<!-- Add any other information about the problem here. -->
| closed | 2022-07-13T09:54:50Z | 2022-07-18T22:48:11Z | https://github.com/microsoft/qlib/issues/1196 | [
"bug"
] | rmallof | 3 |
open-mmlab/mmdetection | pytorch | 11,955 | Is there any plan for next release? | Current v3.3 has a compatibility issue that blocks upgrading mmcv and torch. Since the fix is already merged into the dev-3.x branch, is there any plan for the next release?
https://github.com/open-mmlab/mmdetection/pull/11764 | open | 2024-09-10T20:54:27Z | 2024-12-18T04:20:22Z | https://github.com/open-mmlab/mmdetection/issues/11955 | [] | rayhsieh | 1 |
dask/dask | scikit-learn | 11,115 | Previously working time series resampling breaks in new version of Dask | **Describe the issue**:
Resampling over a time series used to work with `dask==2024.2.1`. However, with `dask==2024.5.0` and `dask-expr==1.1.0` it throws an exception.
**Minimal Complete Verifiable Example**:
```python
import dask.dataframe as dd
df = dd.DataFrame.from_dict({
"Date":["11/26/2017", "11/26/2017"],
"Time":["17:00:00.067", "17:00:00.102"],
"Volume": [403, 3]
}, npartitions=1)
df["Timestamp"] = dd.to_datetime(df.Date) + dd.to_timedelta(df.Time)
df.to_parquet("test.parquet", write_metadata_file=True)
df = dd.read_parquet("test.parquet", index="Timestamp", calculate_divisions=True)
# This resampling breaks
minutely_volume = df[["Volume"]].resample("min").sum().compute()
print(minutely_volume)
```
**Anything else we need to know?**:
With old version:
```
<Deprecation warning about deprecated Dask version...>
Timestamp
2017-11-26 17:00:00 406
```
With new version:
```
Traceback (most recent call last):
File "/home/anders/src/daskbug/daskbug.py", line 14, in <module>
minutely_volume = df[["Volume"]].resample("min").sum().compute()
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/dask_expr/_collection.py", line 475, in compute
out = out.optimize(fuse=fuse)
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/dask_expr/_collection.py", line 590, in optimize
return new_collection(self.expr.optimize(fuse=fuse))
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/dask_expr/_expr.py", line 94, in optimize
return optimize(self, **kwargs)
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/dask_expr/_expr.py", line 3028, in optimize
return optimize_until(expr, stage)
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/dask_expr/_expr.py", line 2989, in optimize_until
expr = expr.lower_completely()
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/dask_expr/_core.py", line 436, in lower_completely
new = expr.lower_once()
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/dask_expr/_core.py", line 393, in lower_once
out = expr._lower()
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/dask_expr/_resample.py", line 74, in _lower
self.frame, new_divisions=self._resample_divisions[0], force=True
File "/usr/lib/python3.10/functools.py", line 981, in __get__
val = self.func(instance)
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/dask_expr/_resample.py", line 64, in _resample_divisions
return _resample_bin_and_out_divs(
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/dask/dataframe/tseries/resample.py", line 64, in _resample_bin_and_out_divs
temp = divs.resample(rule, closed=closed, label="left").count()
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/pandas/core/generic.py", line 9771, in resample
return get_resampler(
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/pandas/core/resample.py", line 2050, in get_resampler
return tg._get_resampler(obj, kind=kind)
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/pandas/core/resample.py", line 2229, in _get_resampler
_, ax, _ = self._set_grouper(obj, gpr_index=None)
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/pandas/core/resample.py", line 2529, in _set_grouper
obj, ax, indexer = super()._set_grouper(obj, sort, gpr_index=gpr_index)
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/pandas/core/groupby/grouper.py", line 403, in _set_grouper
indexer = self._indexer_deprecated = ax.array.argsort(
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/pandas/core/arrays/base.py", line 848, in argsort
return nargsort(
File "/home/anders/src/daskbug/venv2/lib/python3.10/site-packages/pandas/core/sorting.py", line 439, in nargsort
indexer = non_nan_idx[non_nans.argsort(kind=kind)]
TypeError: '<' not supported between instances of 'Timestamp' and 'int'
```
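The final `TypeError` points at a sequence that mixes timestamp objects with plain ints being sorted inside `_resample_bin_and_out_divs`. The failure mode can be reproduced in miniature (pure Python; `FakeTimestamp` is an assumed stand-in for `pandas.Timestamp`, not dask's real divisions handling):

```python
class FakeTimestamp:
    # Minimal stand-in for pandas.Timestamp: comparable with its own
    # kind, but not with plain ints.
    def __init__(self, value):
        self.value = value

    def __lt__(self, other):
        if not isinstance(other, FakeTimestamp):
            return NotImplemented  # mirrors Timestamp < int failing
        return self.value < other.value

divisions = [FakeTimestamp(1), 0]  # a divisions-like list with mixed types
try:
    sorted(divisions)
except TypeError as e:
    print(e)  # '<' not supported between instances of ...
```

This suggests the regression filled the divisions with a non-timestamp placeholder somewhere before sorting.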
**Environment**:
- Dask version: 2024.5.0
- Dask-expr version: 1.1.0
- Python version: 3.10.12
- Operating System: Ubuntu 22.04.3 LTS
- Install method (conda, pip, source): `pip`
| closed | 2024-05-10T06:19:39Z | 2024-05-14T17:13:50Z | https://github.com/dask/dask/issues/11115 | [
"dataframe",
"dask-expr"
] | andersglindstrom | 3 |
trevorstephens/gplearn | scikit-learn | 212 | Question about results | Hello,
I trained the model and got the results below. But I find that alpha_1 in the picture is a constant, not a variable; 'ts_sum' is my custom function. Can you explain what the reason might be?
Thank you!

| closed | 2020-11-30T02:33:38Z | 2022-04-30T23:41:40Z | https://github.com/trevorstephens/gplearn/issues/212 | [] | asherzhao8 | 0 |
lukasmasuch/streamlit-pydantic | pydantic | 39 | Forms for list-valued models do not work | This bug regarding lists-values was first described in a comment to #36. #36 in the strict sense is fixed by merged-not-yet-released PR #37. I open this issue to track the list-valued problem separately, which does not seem to be related to pydantic 2.
**Describe the bug:**
List-valued models do not work. There is code in streamlit-pydantic for handling list-valued models that suggests it might have worked earlier, but at least with current versions of `streamlit`, list-valued models do not render as forms.
List editing is handled in `streamlit-pydantic`, among other things, by adding a button: `self._render_list_add_button(key, add_col, data_list)` ([ui_renderer.py:1000](https://github.com/LukasMasuch/streamlit-pydantic/blob/1c72c4a801f84bc5476955bf9613022cabdc4a5d/src/streamlit_pydantic/ui_renderer.py#L1000)), which tries to add a `streamlit.button(...)` to the interface. However, adding a regular button within a form is disallowed by streamlit, which raises an exception ([button.py:407-411](https://github.com/streamlit/streamlit/blob/397530b0eac4e7b04f4430278304cdb544bc08e8/lib/streamlit/elements/button.py#L407-L411)). Only form-submit buttons are currently allowed within Streamlit forms. (The exception raising has been part of Streamlit since the "form" feature was added in April 2021, so it's not clear how list-valued editing was supposed to work in streamlit-pydantic to begin with.)
The exception then seems to be silently caught and ignored by streamlit-pydantic, so the form does not render, but no exception is displayed within the streamlit app either.
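The silent-failure pattern described above can be sketched in a few lines (an assumed simplification; none of this is the actual streamlit or streamlit-pydantic code):

```python
class StreamlitAPIException(Exception):
    pass

def render_add_button():
    # Stand-in for streamlit's guard that forbids st.button inside st.form.
    raise StreamlitAPIException("`st.button()` can't be used in an `st.form()`.")

def render_list_field():
    # Stand-in for the widget renderer: the exception is swallowed,
    # so the caller sees no form and no error message.
    try:
        render_add_button()
        return "rendered list field"
    except Exception:
        return None  # nothing rendered, nothing reported

print(render_list_field())  # None -> the form quietly fails to appear
```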
**Expected behaviour:**
List-valued models should also be rendered, and editing them should be possible with streamlit-pydantic.
**Steps to reproduce the issue:**
You can use the example used in #36 on a codebase which has #37 merged to reproduce. `OtherSettings` has `: list[NonNegativeInt]` values.
**Technical details:**
- Host Machine OS (Windows/Linux/Mac): Windows
- Browser (Chrome/Firefox/Safari): Firefox
**Notes:**
Given that the raising of the specific exception in Streamlit is quite intentional and seems to be a core design decision, an alternative mode to edit lists has to be found, if possible at all.
After a quick browse through streamlit elements, it can maybe be solved by a "data editor" element, though the result would likely be unsightly and cumbersome. Similarly cumbersome would be using a "text input" with some convention for splitting the values. Neither seems to be a particularly good solution; the solution space needs to be explored further.
| open | 2023-07-25T10:25:42Z | 2023-12-08T18:10:04Z | https://github.com/lukasmasuch/streamlit-pydantic/issues/39 | [
"type:bug"
] | szabi | 2 |
tqdm/tqdm | pandas | 1,619 | Keras TqdmCallback handles poorly models with many metrics | ## Issue description
At this time, when a keras model has many outputs and/or many metrics being tracked, the progress bar tries to display them all by default, with poor outcomes: necessarily, not all metrics are displayed, and because so many metrics are tentatively displayed, the progress bar breaks into multiple lines, as shown in the attached figure. In this specific screenshot, there is also the Keras `ModelCheckpoint` spamming between lines, but even without that the issue persists.
<img width="1792" alt="Screenshot 2024-09-29 alle 10 03 58" src="https://github.com/user-attachments/assets/d8291093-7ebd-4d5f-ad1a-36767671ca7b">
### Proposed solution
Since not all metrics can be reasonably shown at once, I wanted to add an argument that lets the user specify by name which metrics should be displayed. If you agree, I can go ahead with the pull request. I had in mind something of this nature:
```python
TqdmCallback(
    verbose=2,
    metrics=["val_loss", "loss"]
)
```
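The filtering itself could be very small. A sketch of the proposed behaviour (the names and semantics here are assumptions about the proposal, not the existing `tqdm.keras` API):

```python
def filter_metrics(logs, metrics=None):
    # metrics=None keeps the current show-everything behaviour;
    # otherwise only the requested keys are kept for display.
    if metrics is None:
        return dict(logs)
    return {k: v for k, v in logs.items() if k in metrics}

logs = {"loss": 0.31, "val_loss": 0.42, "out1_mae": 1.2, "out2_mae": 0.9}
print(filter_metrics(logs, ["val_loss", "loss"]))  # {'loss': 0.31, 'val_loss': 0.42}
```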
## Issue form
- [x] I have marked all applicable categories:
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead, I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [x] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
`4.66.5 3.12.5 | packaged by Anaconda, Inc. | (main, Sep 12 2024, 18:27:27) [GCC 11.2.0] Linux`
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
| open | 2024-09-29T08:10:39Z | 2024-09-29T08:11:01Z | https://github.com/tqdm/tqdm/issues/1619 | [] | LucaCappelletti94 | 0 |
agronholm/anyio | asyncio | 514 | `fail_after` deadline is set on initialization not context entry | Discussed at https://gitter.im/python-trio/AnyIO?at=63ac6d617de82d261600ea24
When using a `fail_after` context, the deadline is set at "initialization" time rather than `__enter__`. See https://github.com/agronholm/anyio/blob/0cbb84bfadd9078c5dad63bab43907ed0dd555a1/src/anyio/_core/_tasks.py#L112-L114
```python
import anyio

async def main():
    ctx = anyio.fail_after(5)
    await anyio.sleep(5)
    with ctx:
        for i in range(1, 6):
            print(i)
            await anyio.sleep(1)

anyio.run(main)
```
```
❯ python example.py
1
Traceback (most recent call last):
File "/Users/mz/dev/prefect/example.py", line 168, in main
await anyio.sleep(1)
File "/opt/homebrew/Caskroom/miniconda/base/envs/orion-dev-39/lib/python3.9/site-packages/anyio/_core/_eventloop.py", line 83, in sleep
return await get_asynclib().sleep(delay)
File "/opt/homebrew/Caskroom/miniconda/base/envs/orion-dev-39/lib/python3.9/asyncio/tasks.py", line 652, in sleep
return await future
asyncio.exceptions.CancelledError
```
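The two possible semantics can be modelled without anyio at all (a simplified sketch using a fake clock; `SketchFailAfter` is an assumption for illustration, not anyio's real `CancelScope`):

```python
class FakeClock:
    # Stand-in for the event-loop clock so the sketch runs instantly.
    def __init__(self):
        self.now = 0.0

    def sleep(self, seconds):
        self.now += seconds

class SketchFailAfter:
    # Simplified model of a timeout context whose deadline is fixed
    # either at construction or at __enter__.
    def __init__(self, clock, seconds, deadline_on_enter=False):
        self.clock, self.seconds = clock, seconds
        self.deadline_on_enter = deadline_on_enter
        if not deadline_on_enter:          # current behaviour: deadline at init
            self.deadline = clock.now + seconds

    def __enter__(self):
        if self.deadline_on_enter:         # behaviour the report expects
            self.deadline = self.clock.now + self.seconds
        return self

    def __exit__(self, *exc):
        return False

    def expired(self):
        return self.clock.now > self.deadline

clock = FakeClock()
init_style = SketchFailAfter(clock, 5)     # deadline already ticking
clock.sleep(5)                             # work done before entering
with init_style:
    clock.sleep(1)
print(init_style.expired())                # True: budget spent before entry

clock.now = 0.0
enter_style = SketchFailAfter(clock, 5, deadline_on_enter=True)
clock.sleep(5)
with enter_style:
    clock.sleep(1)
print(enter_style.expired())               # False: timer started at entry
```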
Since this is a context manager, the user expectation is that the timer starts when the context is entered. | open | 2022-12-28T16:34:34Z | 2022-12-28T17:11:00Z | https://github.com/agronholm/anyio/issues/514 | [] | zanieb | 1 |
kizniche/Mycodo | automation | 690 | "Error: Activate Conditional: Measurement must be set" | ## Mycodo Issue Report:
- Specific Mycodo Version: 7.7.3 & 7.7.4
#### Problem Description
Since the last 2 updates (7.7.3 & 7.7.4), my functions don't work anymore.
When I save the code, I get the green validation, but when I want to activate a function I get this error:
"Error: Activate Conditional: Measurement must be set"
https://www.cjoint.com/doc/19_09/IIsqhLUJzMO_Capture2.PNG
I also tried to recreate my function with a new measurement ID, but the same error occurs.
Here is part of the code (I removed the second part, which is not relevant now):
https://www.cjoint.com/doc/19_09/IIsqhuXwXdO_Capture.PNG
My last lines of logging:
```
2019-09-18 17:49:41,775 - INFO - mycodo.daemon - Mycodo daemon started in 5.417 seconds
2019-09-18 17:49:44,527 - INFO - mycodo.daemon - 40.10 MB RAM in use
2019-09-18 17:51:31,229 - INFO - mycodo.controllers.controller_conditional_8a38bf0c - Deactivated in 205.2 ms
2019-09-18 17:51:36,821 - INFO - mycodo.controllers.controller_conditional_8a38bf0c - Activated in 263.5 ms
2019-09-18 17:53:56,325 - INFO - mycodo.controllers.controller_conditional_8a38bf0c - Deactivated in 8.0 ms
```
thanks
| closed | 2019-09-18T16:19:41Z | 2019-09-19T02:28:46Z | https://github.com/kizniche/Mycodo/issues/690 | [] | buzzfuzz2k | 8 |
ageitgey/face_recognition | machine-learning | 813 | when I import face_recognition module Python stop working. | I don't know what the issue is. I reinstalled my Windows and only installed Visual Studio 2019, then Miniconda3. In the env I installed opencv and CMake, then ran `pip --no-cache-dir install face_recognition`; everything went well. But when I import face_recognition, a "Python has stopped working" error pops up. | closed | 2019-05-03T04:49:06Z | 2019-09-28T04:11:58Z | https://github.com/ageitgey/face_recognition/issues/813 | [] | ridarafisyed | 6 |
marimo-team/marimo | data-science | 3,816 | Styling of web components is difficult/impossible | ### Describe the bug
It seems to me that it is currently impossible to apply CSS styles from the notebook's custom CSS to UI components. I'm not sure whether this is intended behavior - but I think it's not.
I think this got introduced in #2184, which embeds the custom CSS directly into HTML. But then the custom stylesheet does not get copied to the shadow DOM because it gets skipped here:
https://github.com/marimo-team/marimo/blob/860a74f99efef7a99a8a3c1bd1d273046fc799f1/frontend/src/plugins/core/registerReactComponent.tsx#L477-L480
Changing this check alone does not fix the issue, because of the caching logic here which depends on `href`:
https://github.com/marimo-team/marimo/blob/860a74f99efef7a99a8a3c1bd1d273046fc799f1/frontend/src/plugins/core/registerReactComponent.tsx#L379-L383
I tried to find an entirely different solution by using the custom HEAD feature and including a CSS file from `/public/style.css` (which would pass the `shouldCopyStyleSheet` check), but that fails as well because it is loaded before the service worker that attaches `X-Notebook-Id` to the request is running, leading to a 404 response. In the end I made it work by crudely copying the logic for figuring out the notebook file from the other API endpoints instead of using `X-Notebook-Id` (see https://github.com/marimo-team/marimo/commit/4be10853f1e1b371cc0317fc018b8d7a2979b467 in my fork). I'd be happy to turn that into a PR, but I feel there must be a reason for the ServiceWorker machinery 😅
### Environment
<details>
```
{
"marimo": "0.11.5",
"OS": "Linux",
"OS Version": "6.11.0-17-generic",
"Processor": "x86_64",
"Python Version": "3.12.3",
"Binaries": {
"Browser": "--",
"Node": "v18.19.1"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.26.0",
"packaging": "24.2",
"psutil": "7.0.0",
"pygments": "2.19.1",
"pymdown-extensions": "10.14.3",
"pyyaml": "6.0.2",
"ruff": "0.9.6",
"starlette": "0.45.3",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "14.2"
},
"Optional Dependencies": {
"altair": "5.5.0",
"pandas": "2.2.3"
},
"Experimental Flags": {}
}
```
</details>
### Code to reproduce
`reproducer.py`
```
import marimo

__generated_with = "0.11.5"
app = marimo.App(width="medium", css_file="style.css")


@app.cell
def _():
    import marimo as mo
    return (mo,)


@app.cell
def _(mo):
    mo.ui.number(value=50)
    return


if __name__ == "__main__":
    app.run()
```
`style.css`
```
input[inputmode="numeric"] {
text-align: right;
}
```
... should give a right-aligned number input (which may be a good default anyway?). | closed | 2025-02-17T15:49:03Z | 2025-02-17T17:34:29Z | https://github.com/marimo-team/marimo/issues/3816 | [
"bug"
] | xqms | 1 |
explosion/spaCy | deep-learning | 12,524 | Showing ℹ 0 label(s) although labels present in the German dataset | <!-- NOTE: For questions or install related issues, please open a Discussion instead. -->
Hi, I'm trying to train the German transformer model. I have a dataset in the German language and modified it into this format: https://github.com/explosion/spaCy/blob/master/extra/example_data/ner_example_data/ner-token-per-line.json
After the successful conversion from JSON to spaCy format using the `convert` command, I ran the `debug data` command, and I get the response below, where **it shows ℹ 0 label(s) although labels are present.**
```
============================ Data file validation ============================
Some weights of the model checkpoint at bert-base-german-cased were not used when initializing BertModel: ['cls.predictions.transform.dense.weight', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.bias', 'cls.predictions.bias']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
✔ Pipeline can be initialized with data
✔ Corpus is loadable
=============================== Training stats ===============================
Language: de
Training pipeline: transformer, ner
1 training docs
1 evaluation docs
✔ No overlap between training and evaluation data
✘ Low number of examples to train a new pipeline (1)
It's recommended to use at least 2000 examples (minimum 100)
============================== Vocab & Vectors ==============================
ℹ 120114 total word(s) in the data (11228 unique)
10 most common words: ',' (5917), 'Die' (5168), '.' (4890), 'von' (3519), 'Zu'
(2685), 'A' (2414), 'Und' (2194), 'In' (2131), 'Das' (1414), '-' (1323)
ℹ No word vectors present in the package
========================== Named Entity Recognition ==========================
ℹ 0 label(s)
0 missing value(s) (tokens with '-' label)
Labels in train data:
✔ Good amount of examples for all labels
✔ Examples without occurrences available for all labels
✔ No entities consisting of or starting/ending with whitespace
✔ No entities crossing sentence boundaries
================================== Summary ==================================
✔ 7 checks passed
✘ 1 error
```
Below is the sample dataset in `json` format.
```
[
{
"id": 0,
"paragraphs": [
{
"sentences": [
{
"tokens": [
{
"ner": "O",
"orth": "Beispiele"
},
{
"ner": "O",
"orth": "von"
},
{
"ner": "O",
"orth": "Die"
},
{
"ner": "O",
"orth": "spaltend"
},
{
"ner": "O",
"orth": "Auswirkungen"
},
{
"ner": "O",
"orth": "von"
},
{
"ner": "O",
"orth": "rassisch"
},
{
"ner": "O",
"orth": "Gerrymandering"
},
{
"ner": "O",
"orth": "dürfen"
},
{
"ner": "O",
"orth": "Sei"
},
{
"ner": "O",
"orth": "gesehen"
},
{
"ner": "O",
"orth": "In"
},
{
"ner": "B-CARDINAL",
"orth": "zwei"
},
{
"ner": "O",
"orth": "Städte"
},
{
"ner": "O",
"orth": "--"
},
{
"ner": "B-GPE",
"orth": "Neu"
},
{
"ner": "I-GPE",
"orth": "York"
},
{
"ner": "O",
"orth": "Und"
},
{
"ner": "B-GPE",
"orth": "Birmingham"
},
{
"ner": "O",
"orth": ","
},
{
"ner": "B-GPE",
"orth": "Land"
},
{
"ner": "O",
"orth": "."
}
]
}
]
}
]
}
]
```
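A quick sanity check in plain Python (not a spaCy command; `sample` below is a trimmed copy of the JSON structure above) confirms the annotations do carry entity labels, which suggests the `0 label(s)` report comes from the conversion or config rather than from missing labels in the data:

```python
sample = [
    {"id": 0, "paragraphs": [{"sentences": [{"tokens": [
        {"ner": "O", "orth": "In"},
        {"ner": "B-CARDINAL", "orth": "zwei"},
        {"ner": "B-GPE", "orth": "Neu"},
        {"ner": "I-GPE", "orth": "York"},
    ]}]}]}
]

def ner_labels(docs):
    # Collect the entity types present, ignoring the 'O' (outside) tag.
    labels = set()
    for doc in docs:
        for para in doc["paragraphs"]:
            for sent in para["sentences"]:
                for tok in sent["tokens"]:
                    if tok["ner"] != "O":
                        labels.add(tok["ner"].split("-", 1)[1])
    return sorted(labels)

print(ner_labels(sample))  # ['CARDINAL', 'GPE'] -> the labels clearly exist
```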
## Your Environment
```
============================== Info about spaCy ==============================
spaCy version 3.5.1
Location /usr/local/lib/python3.9/dist-packages/spacy
Platform Linux-5.10.147+-x86_64-with-glibc2.31
Python version 3.9.16
Pipelines de_dep_news_trf (3.5.0), en_core_web_sm (3.5.0)
```
| closed | 2023-04-12T13:12:16Z | 2023-05-14T00:02:20Z | https://github.com/explosion/spaCy/issues/12524 | [
"usage",
"feat / cli"
] | srikamalteja | 3 |
tox-dev/tox | automation | 2,655 | extras with underscore cause missmatch during package dependency discovery | ## Issue
I don't have time to write a lot here right now, but the problem kind of explains itself starting from https://github.com/astropy/astropy/pull/14139#issuecomment-1343642377 and then a few comments that follow. Thanks!
cc @dhomeier @gaborbernat | closed | 2022-12-09T01:17:13Z | 2024-10-07T18:37:38Z | https://github.com/tox-dev/tox/issues/2655 | [
"bug:normal",
"help:wanted"
] | pllim | 16 |
rpicard/explore-flask | flask | 94 | Add translations | Hey man, Explore Flask is a very good site for those who want to learn Flask in the right way. I would like to translate it to Portuguese (my native language). So I was wondering if there is a possibility, and how I would organize the directory of this translation in the project: for example, whether I would add a folder called `en` at the same level as `static`, or something similar.
| closed | 2016-02-07T22:55:53Z | 2016-02-09T19:36:07Z | https://github.com/rpicard/explore-flask/issues/94 | [] | wilfilho | 3 |
python-restx/flask-restx | flask | 544 | I am using the expect decorator, and the result of model validation is eventually printed with flask_restx.abort. How do I customize this response? | I am using the expect decorator, and the result of model validation is eventually printed with flask_restx.abort. How do I customize this response? | open | 2023-05-31T08:50:07Z | 2023-05-31T08:51:33Z | https://github.com/python-restx/flask-restx/issues/544 | [
"question"
] | miaokela | 1 |
vimalloc/flask-jwt-extended | flask | 11 | Give a dictionary to create_access_token | Hey, it would be nice to pass a dictionary value when I'm creating a token, so I can use it in `add_claims_to_access_token` and I don't have to query the database again using the identity.
LOGIN: Here I have to query the database.
```
@custom_api.route('/auth', methods=['POST'])
def login():
    options = {
        'body': None,
        'status_code': 200
    }
    username = request.json.get('username', None)
    password = request.json.get('password', None)
    controller = Controller().query.filter_by(uuid=username).first()
    if controller and not safe_str_cmp(controller.jwt.encode('utf-8'), password.encode('utf-8')):
        raise NotAuthorizedException()
    options['body'] = {'access_token': create_access_token(controller)}
    return __build_response_msg(options=options)
```
Otherwise I have to query twice:
```
@jwt.user_claims_loader
def add_claims_to_access_token(identity):
controller = Controller().query.filter_by(uuid=identity).first()
return {
'id': controller.id,
'role': controller.role
}
```
It would be nice to do something like this so I can skip one lookup:
```
@jwt.user_claims_loader
def add_claims_to_access_token(identity):
# I can use every parameter from identity dictionary
return {
'id': identity.id,
'role': identity.role
}
```
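To illustrate what I mean — the claims are captured once from a dict at token-creation time, so no second database query is needed in the loader — here is a stdlib-only sketch. This is not flask-jwt-extended's actual API; the unsigned encoding and the `create_token`/`decode_token` helpers are purely illustrative:

```python
import base64
import json

def _b64(data):
    # URL-safe base64 without padding, as in real JWTs
    raw = json.dumps(data, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def _unb64(part):
    pad = "=" * (-len(part) % 4)
    return json.loads(base64.urlsafe_b64decode(part + pad))

def create_token(identity):
    # The claims are embedded from the caller-supplied dict at creation
    # time, so a claims loader would not need a second database query.
    header = {"alg": "none", "typ": "JWT"}
    payload = {
        "identity": identity["uuid"],
        "user_claims": {"id": identity["id"], "role": identity["role"]},
    }
    return f"{_b64(header)}.{_b64(payload)}."

def decode_token(token):
    return _unb64(token.split(".")[1])

controller = {"uuid": "abc-123", "id": 7, "role": "admin"}
claims = decode_token(create_token(controller))["user_claims"]
print(claims)  # {'id': 7, 'role': 'admin'} -- no second lookup needed
```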
| closed | 2016-10-16T16:04:50Z | 2016-10-24T16:40:17Z | https://github.com/vimalloc/flask-jwt-extended/issues/11 | [] | laranicolas | 10 |
Johnserf-Seed/TikTokDownload | api | 511 | [BUG] The program gets stuck at a certain point with no error; it just stops responding and stops refreshing | **Describe the error**

**Bug reproduction**
Steps to reproduce this behavior:
#510
Look here!!! (the following is an image)

**Desktop (please complete the following information):**
- OS: [Windows 10 Pro]
- VPN proxy: [off]
- Project version: [1.4.0]
- Python version: [3.11.4]
- Dependency library versions:

**Additional notes**
No log this time, because there is no error at all... I think the program is still running, but it got stuck somewhere?
| closed | 2023-08-14T22:54:25Z | 2023-08-18T11:44:01Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/511 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | EsperantoP | 3 |
AirtestProject/Airtest | automation | 1,277 | AirtestIDE cannot call the API to record the phone screen; it keeps getting stuck on the start-recording code | (Please fill in the information below as much as possible; it helps us locate and resolve the problem quickly. Thanks for your cooperation. Otherwise the issue will be closed directly.)
**(Important! Issue category)**
* AirtestIDE usage problems in the test/dev environment -> https://github.com/AirtestProject/AirtestIDE/issues
* Widget recognition, UI tree structure, poco library errors -> https://github.com/AirtestProject/Poco/issues
* Image recognition and device control problems -> follow the steps below
**Describe the bug**
(A clear and concise summary of the problem, or the error traceback.)
When calling the API to record the phone screen, it keeps getting stuck on the code that starts the recording.
```
(Paste the traceback or other error output here)
```
[Start running..]
do not connect device
save log in 'C:\Users\jingqiu.liang\AppData\Local\Temp\AirtestIDE\scripts\89a0feb80b5fc62fab6e79d73a01ded1'
[09:56:25][DEBUG]<airtest.core.android.adb> D:\android-sdk_r24.3.2-windows\android-sdk-windows\platform-tools\adb.exe -P 5037 -s d65db8c4 wait-for-device
[09:56:25][DEBUG]<airtest.core.android.adb> D:\android-sdk_r24.3.2-windows\android-sdk-windows\platform-tools\adb.exe -P 5037 -s d65db8c4 shell getprop ro.build.version.sdk
[09:56:25][DEBUG]<airtest.core.android.adb> D:\android-sdk_r24.3.2-windows\android-sdk-windows\platform-tools\adb.exe -P 5037 -s d65db8c4 shell ls -l /data/local/tmp/rotationwatcher.jar
[09:56:25][DEBUG]<airtest.core.android.rotation> install_rotationwatcher skipped
[09:56:25][DEBUG]<airtest.core.android.adb> D:\android-sdk_r24.3.2-windows\android-sdk-windows\platform-tools\adb.exe -P 5037 -s d65db8c4 shell app_process -Djava.class.path=/data/local/tmp/rotationwatcher.jar /data/local/tmp com.example.rotationwatcher.Main
[09:56:25][DEBUG]<airtest.utils.nbsp> [rotation_server]b'0'
[09:56:26][INFO]<airtest.core.android.rotation> update orientation None->0
[09:56:26][DEBUG]<airtest.core.android.adb> D:\android-sdk_r24.3.2-windows\android-sdk-windows\platform-tools\adb.exe -P 5037 -s d65db8c4 shell wm size
[09:56:26][DEBUG]<airtest.core.android.adb> D:\android-sdk_r24.3.2-windows\android-sdk-windows\platform-tools\adb.exe -P 5037 -s d65db8c4 shell getprop ro.sf.lcd_density
[09:56:26][DEBUG]<airtest.core.android.adb> D:\android-sdk_r24.3.2-windows\android-sdk-windows\platform-tools\adb.exe -P 5037 -s d65db8c4 shell dumpsys SurfaceFlinger
[09:56:26][DEBUG]<airtest.core.android.adb> D:\android-sdk_r24.3.2-windows\android-sdk-windows\platform-tools\adb.exe -P 5037 -s d65db8c4 shell dumpsys input
[09:56:26][DEBUG]<airtest.core.android.adb> D:\android-sdk_r24.3.2-windows\android-sdk-windows\platform-tools\adb.exe -P 5037 -s d65db8c4 shell dumpsys window displays
[09:56:27][DEBUG]<airtest.core.android.adb> D:\android-sdk_r24.3.2-windows\android-sdk-windows\platform-tools\adb.exe -P 5037 -s d65db8c4 shell getevent -p
[09:56:27][DEBUG]<airtest.core.android.adb> D:\android-sdk_r24.3.2-windows\android-sdk-windows\platform-tools\adb.exe -P 5037 -s d65db8c4 shell dumpsys package com.netease.nie.yosemite
[09:56:27][DEBUG]<airtest.core.android.adb> D:\android-sdk_r24.3.2-windows\android-sdk-windows\platform-tools\adb.exe -P 5037 -s d65db8c4 shell pm path com.netease.nie.yosemite
[09:56:27][DEBUG]<airtest.core.android.adb> D:\android-sdk_r24.3.2-windows\android-sdk-windows\platform-tools\adb.exe -P 5037 -s d65db8c4 shell CLASSPATH=/data/app/~~IPWjiSthdv-66pQ3_KIXLg==/com.netease.nie.yosemite-BK-q6DlUbeP4ZRAqJ_ci9A==/base.apk exec app_process -Dduration=1800 -Dbitrate=3204000 -Dvertical=off /system/bin com.netease.nie.yosemite.Recorder --start-record
[09:56:27][DEBUG]<airtest.utils.nbsp> [start_recording_1642513245744]b'---Name:: null---duration:: 1800---bitrate:: 3204000 --vertival:: off'
[09:56:27][DEBUG]<airtest.utils.nbsp> [start_recording_1642513245744]b'browsing http://192.168.88.149:8686/start<?filename=xxx.mp4> to start recording!'
[09:56:27][DEBUG]<airtest.utils.nbsp> [start_recording_1642513245744]b'then using http://192.168.88.149:8686/download to stop and download recording.'
[09:56:27][DEBUG]<airtest.utils.nbsp> [start_recording_1642513245744]b'or using http://192.168.88.149:8686/stop to stop the recording server.'
[09:56:28][DEBUG]<airtest.utils.nbsp> [start_recording_1642513245744]b'TESTherere'
[09:56:28][DEBUG]<airtest.utils.nbsp> [start_recording_1642513245744]b'record started, isVertical1: null'
[09:56:28][DEBUG]<airtest.utils.nbsp> [start_recording_1642513245744]b'record started, isVertical2: off'
[09:56:28][DEBUG]<airtest.utils.nbsp> [start_recording_1642513245744]b''
**Screenshots**
(Attach screenshots of the problem, if any)
(For image- and device-related problems produced in AirtestIDE, please also paste the relevant errors from the AirtestIDE console window)

**Steps to reproduce**
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
(What you expected to get or see)
Screen recording on the phone can be started successfully.
**Python version:** `python3.11`
**airtest version:** `1.3.3.1`
> The airtest version can be found with `pip freeze`
**Device:**
- Model: [e.g. google pixel 2]
- OS: [e.g. Android 8.1]
- (other info)
**Other environment info**
(Other runtime environments, e.g. abnormal on Linux Ubuntu 16.04 but normal on Windows.)
| open | 2025-02-20T02:15:16Z | 2025-02-20T02:15:16Z | https://github.com/AirtestProject/Airtest/issues/1277 | [] | 15876379684aa | 0 |
horovod/horovod | machine-learning | 3,354 | Trying to get in touch regarding a security issue | Hey there!
I belong to an open source security research community, and a member (@srikanthprathi) has found an issue, but doesn’t know the best way to disclose it.
If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future.
Thank you for your consideration, and I look forward to hearing from you!
(cc @huntr-helper) | closed | 2022-01-09T00:13:04Z | 2022-01-19T19:25:36Z | https://github.com/horovod/horovod/issues/3354 | [] | JamieSlome | 3 |
mars-project/mars | scikit-learn | 3,187 | [BUG] Ray executor raises KeyError | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
A clear and concise description of what the bug is.
```python
______________________________________________________________________________________________ test_index ______________________________________________________________________________________________
setup = <mars.deploy.oscar.session.SyncSession object at 0x14cb7f880>
def test_index(setup):
with option_context({"eager_mode": True}):
a = mt.random.rand(10, 5, chunk_size=5)
idx = slice(0, 5), slice(0, 5)
> a[idx] = 1
mars/tests/test_eager_mode.py:105:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
mars/tensor/indexing/setitem.py:375: in _setitem
ret = op(a, index, value)
mars/core/mode.py:77: in _inner
return func(*args, **kwargs)
mars/tensor/indexing/setitem.py:94: in __call__
return self.new_tensor(inputs, a.shape, order=a.order)
mars/tensor/operands.py:60: in new_tensor
return self.new_tensors(inputs, shape=shape, dtype=dtype, order=order, **kw)[0]
mars/tensor/operands.py:45: in new_tensors
return self.new_tileables(
mars/core/operand/core.py:277: in new_tileables
ExecutableTuple(tileables).execute()
mars/core/entity/executable.py:267: in execute
ret = execute(*self, session=session, **kw)
mars/deploy/oscar/session.py:1890: in execute
return session.execute(
mars/deploy/oscar/session.py:1684: in execute
execution_info: ExecutionInfo = fut.result(
../../.pyenv/versions/3.8.7/lib/python3.8/concurrent/futures/_base.py:439: in result
return self.__get_result()
../../.pyenv/versions/3.8.7/lib/python3.8/concurrent/futures/_base.py:388: in __get_result
raise self._exception
mars/deploy/oscar/session.py:1870: in _execute
await execution_info
../../.pyenv/versions/3.8.7/lib/python3.8/asyncio/tasks.py:695: in _wrap_awaitable
return (yield from awaitable.__await__())
mars/deploy/oscar/session.py:105: in wait
return await self._aio_task
mars/deploy/oscar/session.py:953: in _run_in_background
raise task_result.error.with_traceback(task_result.traceback)
mars/services/task/supervisor/processor.py:369: in run
await self._process_stage_chunk_graph(*stage_args)
mars/services/task/supervisor/processor.py:247: in _process_stage_chunk_graph
chunk_to_result = await self._executor.execute_subtask_graph(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <mars.services.task.execution.ray.executor.RayTaskExecutor object at 0x14cbd7310>, stage_id = 'GBWH8X4PjfyM0C0yy98hoelB'
subtask_graph = <mars.services.subtask.core.SubtaskGraph object at 0x1c45c2540>, chunk_graph = <mars.core.graph.entity.ChunkGraph object at 0x1c45c2130>, tile_context = {}, context = None
async def execute_subtask_graph(
self,
stage_id: str,
subtask_graph: SubtaskGraph,
chunk_graph: ChunkGraph,
tile_context: TileContext,
context: Any = None,
) -> Dict[Chunk, ExecutionChunkResult]:
if self._cancelled is True: # pragma: no cover
raise asyncio.CancelledError()
logger.info("Stage %s start.", stage_id)
# Make sure each stage use a clean dict.
self._cur_stage_first_output_object_ref_to_subtask = dict()
def _on_monitor_aiotask_done(fut):
# Print the error of monitor task.
try:
fut.result()
except asyncio.CancelledError:
pass
except Exception: # pragma: no cover
logger.exception(
"The monitor task of stage %s is done with exception.", stage_id
)
if IN_RAY_CI: # pragma: no cover
logger.warning(
"The process will be exit due to the monitor task exception "
"when MARS_CI_BACKEND=ray."
)
sys.exit(-1)
result_meta_keys = {
chunk.key
for chunk in chunk_graph.result_chunks
if not isinstance(chunk.op, Fetch)
}
# Create a monitor task to update progress and collect garbage.
monitor_aiotask = asyncio.create_task(
self._update_progress_and_collect_garbage(
stage_id,
subtask_graph,
result_meta_keys,
self._config.get_subtask_monitor_interval(),
)
)
monitor_aiotask.add_done_callback(_on_monitor_aiotask_done)
def _on_execute_aiotask_done(_):
# Make sure the monitor task is cancelled.
monitor_aiotask.cancel()
# Just use `self._cur_stage_tile_progress` as current stage progress
# because current stage is completed, its progress is 1.0.
self._cur_stage_progress = 1.0
self._pre_all_stages_progress += self._cur_stage_tile_progress
self._execute_subtask_graph_aiotask = asyncio.current_task()
self._execute_subtask_graph_aiotask.add_done_callback(_on_execute_aiotask_done)
task_context = self._task_context
output_meta_object_refs = []
self._pre_all_stages_tile_progress = (
self._pre_all_stages_tile_progress + self._cur_stage_tile_progress
)
self._cur_stage_tile_progress = (
self._tile_context.get_all_progress() - self._pre_all_stages_tile_progress
)
logger.info("Submitting %s subtasks of stage %s.", len(subtask_graph), stage_id)
result_meta_keys = {
chunk.key
for chunk in chunk_graph.result_chunks
if not isinstance(chunk.op, Fetch)
}
shuffle_manager = ShuffleManager(subtask_graph)
subtask_max_retries = self._config.get_subtask_max_retries()
for subtask in subtask_graph.topological_iter():
if subtask.virtual:
continue
subtask_chunk_graph = subtask.chunk_graph
input_object_refs = await self._load_subtask_inputs(
stage_id, subtask, task_context, shuffle_manager
)
# can't use `subtask_graph.count_successors(subtask) == 0` to check whether output meta,
# because a subtask can have some outputs which is dependent by downstream, but other outputs are not.
# see https://user-images.githubusercontent.com/12445254/168484663-a4caa3f4-0ccc-4cd7-bf20-092356815073.png
is_mapper, n_reducers = shuffle_manager.is_mapper(subtask), None
if is_mapper:
n_reducers = shuffle_manager.get_n_reducers(subtask)
output_keys, out_count = _get_subtask_out_info(
subtask_chunk_graph, is_mapper, n_reducers
)
subtask_output_meta_keys = result_meta_keys & output_keys
if is_mapper:
# shuffle meta won't be recorded in meta service.
output_count = out_count
else:
output_count = out_count + bool(subtask_output_meta_keys)
subtask_max_retries = subtask_max_retries if subtask.retryable else 0
output_object_refs = self._ray_executor.options(
num_returns=output_count, max_retries=subtask_max_retries
).remote(
subtask.task_id,
subtask.subtask_id,
serialize(subtask_chunk_graph),
subtask_output_meta_keys,
is_mapper,
*input_object_refs,
)
if output_count == 0:
continue
elif output_count == 1:
output_object_refs = [output_object_refs]
self._cur_stage_first_output_object_ref_to_subtask[
output_object_refs[0]
] = subtask
if subtask_output_meta_keys:
assert not is_mapper
meta_object_ref, *output_object_refs = output_object_refs
# TODO(fyrestone): Fetch(not get) meta object here.
output_meta_object_refs.append(meta_object_ref)
if is_mapper:
shuffle_manager.add_mapper_output_refs(subtask, output_object_refs)
else:
subtask_outputs = zip(output_keys, output_object_refs)
task_context.update(subtask_outputs)
logger.info("Submitted %s subtasks of stage %s.", len(subtask_graph), stage_id)
key_to_meta = {}
if len(output_meta_object_refs) > 0:
# TODO(fyrestone): Optimize update meta by fetching partial meta.
meta_count = len(output_meta_object_refs)
logger.info("Getting %s metas of stage %s.", meta_count, stage_id)
meta_list = await asyncio.gather(*output_meta_object_refs)
for meta in meta_list:
for key, (params, memory_size) in meta.items():
key_to_meta[key] = params
self._task_chunks_meta[key] = _RayChunkMeta(memory_size=memory_size)
assert len(key_to_meta) == len(result_meta_keys)
logger.info("Got %s metas of stage %s.", meta_count, stage_id)
chunk_to_meta = {}
# ray.wait requires the object ref list is unique.
output_object_refs = set()
for chunk in chunk_graph.result_chunks:
chunk_key = chunk.key
> object_ref = task_context[chunk_key]
E KeyError: '46283c0fe40539cc27ea3a8dc1ce4196_0'
mars/services/task/execution/ray/executor.py:551: KeyError
```
**To Reproduce**
To help us reproduce this bug, please provide the information below:
1. Your Python version
2. The version of Mars you use
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
| closed | 2022-07-12T11:01:38Z | 2022-07-28T03:35:27Z | https://github.com/mars-project/mars/issues/3187 | [
"type: bug",
"mod: ray integration"
] | fyrestone | 0 |
seleniumbase/SeleniumBase | web-scraping | 3,012 | Proxy Causing Browser Crash | 
When using a proxy, I get the above message in the browser... The stability definitely seems to be affected, as my program works fine when the proxy is removed.
`with SB(uc=True, test=True, slow=True, demo_sleep=3, proxy=<redacted>) as sb:`
Is there any way to keep stability intact and use a proxy with the SB context manager?
The portion of the test crashes right when the reCAPTCHA checkbox is selected and the image-selection pop-up appears. | closed | 2024-08-11T05:21:54Z | 2024-08-11T16:25:00Z | https://github.com/seleniumbase/SeleniumBase/issues/3012 | [
"question",
"invalid usage",
"UC Mode / CDP Mode"
] | ttraxxrepo | 5 |
davidteather/TikTok-Api | api | 92 | [BUG] Error when running getTiktoksByHashtag | Getting this error when running getTiktoksByHashtag
```
Converting response to JSON failed response is below (probably empty)
Traceback (most recent call last):
File "C:\Users\Asus\AppData\Local\Programs\Python\Python37\lib\site-packages\TikTokApi\tiktok.py", line 34, in getData
return r.json()
File "C:\Users\Asus\AppData\Local\Programs\Python\Python37\lib\site-packages\requests\models.py", line 898, in json
return complexjson.loads(self.text, **kwargs)
File "C:\Users\Asus\AppData\Local\Programs\Python\Python37\lib\json\__init__.py", line 348, in loads
return _default_decoder.decode(s)
File "C:\Users\Asus\AppData\Local\Programs\Python\Python37\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\Asus\AppData\Local\Programs\Python\Python37\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "getTiktoksByHashtag.py", line 7, in <module>
tiktoks = api.byHashtag('zendeechallenge', count=count)
File "C:\Users\Asus\AppData\Local\Programs\Python\Python37\lib\site-packages\TikTokApi\tiktok.py", line 190, in byHashtag
res = self.getData(api_url, b.signature, b.userAgent)
File "C:\Users\Asus\AppData\Local\Programs\Python\Python37\lib\site-packages\TikTokApi\tiktok.py", line 38, in getData
raise Exception('Invalid Response')
Exception: Invalid Response
``` | closed | 2020-05-18T02:27:10Z | 2021-02-03T22:03:29Z | https://github.com/davidteather/TikTok-Api/issues/92 | [
"bug"
] | cplasabas | 3 |
qwj/python-proxy | asyncio | 197 | http+socks5+ss on single port | Hello, is it possible to do that? How can I set auth params for this config?
Thanks a lot. | open | 2024-12-21T13:16:14Z | 2025-01-09T02:43:54Z | https://github.com/qwj/python-proxy/issues/197 | [] | MaximKiselev | 1 |
streamlit/streamlit | deep-learning | 10,598 | Avoid `StreamlitDuplicateElementId` error when the same widget is in the main area and sidebar | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Using the same widget in the main area and the sidebar results in a `StreamlitDuplicateElementId` error:
```python
import streamlit as st
st.button("Button")
st.sidebar.button("Button")
```

However, we could easily differentiate the automatically generated keys for these two elements, given that one of them is in the sidebar and the other isn't.
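To sketch the idea — this is purely an illustration, not Streamlit's actual ID algorithm — mixing the enclosing container into the auto-generated ID would be enough to disambiguate the two widgets:

```python
import hashlib

def compute_element_id(element_type, label, container="main"):
    # Hash the widget's parameters together with the container it lives
    # in, so identical widgets in the main area and the sidebar no
    # longer collide on the same auto-generated ID.
    key = f"{container}/{element_type}/{label}".encode()
    return hashlib.sha256(key).hexdigest()[:16]

main_id = compute_element_id("button", "Button", container="main")
sidebar_id = compute_element_id("button", "Button", container="sidebar")
assert main_id != sidebar_id  # no more StreamlitDuplicateElementId
```

(Until something like this lands, the workaround is to pass distinct `key=` arguments to the two widgets.)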
### Why?
Convenience, don't need to assign a key but it "just works".
### How?
_No response_
### Additional Context
_No response_ | open | 2025-03-02T19:48:13Z | 2025-03-17T11:55:52Z | https://github.com/streamlit/streamlit/issues/10598 | [
"type:enhancement",
"good first issue",
"feature:st.sidebar"
] | sfc-gh-jrieke | 7 |
huggingface/datasets | numpy | 6,905 | Extraction protocol for arrow files is not defined | ### Describe the bug
Passing files with the `.arrow` extension to the `data_files` argument is very slow, at least when `streaming=True`.
### Steps to reproduce the bug
Basically it goes through the `_get_extraction_protocol` method located [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L820)
The method first checks a list of known base extensions, in which `arrow` is not defined, so it proceeds to determine the compression with the magic-number method, which is slow when dealing with a lot of files stored in S3. Looking at the predefined list below, I don't see `arrow` there either, so in the end it returns None:
```
MAGIC_NUMBER_TO_COMPRESSION_PROTOCOL = {
bytes.fromhex("504B0304"): "zip",
bytes.fromhex("504B0506"): "zip", # empty archive
bytes.fromhex("504B0708"): "zip", # spanned archive
bytes.fromhex("425A68"): "bz2",
bytes.fromhex("1F8B"): "gzip",
bytes.fromhex("FD377A585A00"): "xz",
bytes.fromhex("04224D18"): "lz4",
bytes.fromhex("28B52FFD"): "zstd",
}
```
### Expected behavior
My expectation is that `arrow` would be in the known lists, so it would return None without going through the magic-number method.
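To illustrate the expected short-circuit (illustrative only — this is not the actual `_get_extraction_protocol` code, and the names are made up):

```python
import os

# Extensions that imply "no compression, nothing to extract"
BASE_KNOWN_EXTENSIONS = {"txt", "csv", "json", "jsonl", "tsv", "parquet", "arrow"}
EXTENSION_TO_COMPRESSION_PROTOCOL = {
    "gz": "gzip", "bz2": "bz2", "zip": "zip",
    "xz": "xz", "zst": "zstd", "lz4": "lz4",
}

def get_extraction_protocol(urlpath):
    extension = os.path.splitext(urlpath)[1].lstrip(".").lower()
    if extension in BASE_KNOWN_EXTENSIONS:
        return None  # short-circuit: no magic-number sniffing needed
    if extension in EXTENSION_TO_COMPRESSION_PROTOCOL:
        return EXTENSION_TO_COMPRESSION_PROTOCOL[extension]
    # Only now would we fall back to reading magic bytes (slow over S3).
    return "magic-number-check"

print(get_extraction_protocol("data/shard-0001.arrow"))  # None, no network round-trip
```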
### Environment info
datasets 2.19.0 | closed | 2024-05-17T16:01:41Z | 2025-02-06T19:50:22Z | https://github.com/huggingface/datasets/issues/6905 | [] | radulescupetru | 1 |
falconry/falcon | api | 1470 | No response is returned when requesting POST in 2.0.0a1 version | Hi, I might have found a bug in version 2.0.0a1.
I tried this code on version 2.0.0a1, but processing stops at `req.stream.read()` and no response is returned.
```python
import falcon
class SampleResource(object):
def on_post(self, req, resp):
data = req.stream.read()
        resp.media = {'status': 'ok'}  # Falcon ignores responder return values; set resp.media instead
app = falcon.API()
app.add_route("/", SampleResource())
if __name__ == "__main__":
from wsgiref import simple_server
httpd = simple_server.make_server("127.0.0.1", 8000, app)
httpd.serve_forever()
```
I executed this curl command, but no response is returned:
```bash
curl -i -XPOST \
-H "Accept:application/json" \
-H "Content-Type:application/json" \
"http://127.0.0.1:8000" \
-d '{"data": 2}'
```
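For what it's worth, the hang looks consistent with `wsgiref` blocking on an unbounded read of `wsgi.input`; bounding the read by `Content-Length` (which, if I understand the docs, is what Falcon's `req.bounded_stream` does) avoids it. A plain-WSGI sketch of the idea — the helper and environ below are illustrative, not Falcon internals:

```python
import io

def read_request_body(environ):
    # Reading wsgi.input without a limit can block under servers like
    # wsgiref, because the socket stays open after the body ends.
    # Bounding the read by Content-Length avoids the hang.
    length = int(environ.get("CONTENT_LENGTH") or 0)
    return environ["wsgi.input"].read(length)

environ = {
    "CONTENT_LENGTH": "11",
    "wsgi.input": io.BytesIO(b'{"data": 2}'),
}
print(read_request_body(environ))  # b'{"data": 2}'
```

So in the resource above, `req.bounded_stream.read()` in place of `req.stream.read()` should return under `wsgiref` (worth confirming against the 2.0.0a1 docs).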
| closed | 2019-03-04T14:31:46Z | 2019-03-06T17:47:19Z | https://github.com/falconry/falcon/issues/1470 | [
"documentation",
"needs-information"
] | shimakaze-git | 3 |
davidsandberg/facenet | tensorflow | 645 | Unable to load pretrained model | Hi, I would like to load a pretrained model when training on my data. When I type this in my command:
--pretrained_model model-20170512-110547.ckpt-250000
My program fails to load it. Could you tell me how I should write this in my command? | closed | 2018-02-17T17:27:39Z | 2018-10-23T06:52:39Z | https://github.com/davidsandberg/facenet/issues/645 | [] | ricky5151 | 2 |
influxdata/influxdb-client-python | jupyter | 279 | How to determine possible failures during write api | Hello, I am working on a piece of code to send InfluxDB-formatted files from the local filesystem to our InfluxDB server.
I would like to be able to ensure that a file's data has been successfully sent to the server before removing the local file.
In the example code, it seems that the return value of write is not checked (as far as I can tell):
https://github.com/influxdata/influxdb-client-python/blob/master/examples/import_data_set.py#L73
And from reading the API, it seems that if the request is synchronous, it returns None:
https://github.com/influxdata/influxdb-client-python/blob/master/influxdb_client/client/write_api.py#L257
I would like to have the option to take action under certain conditions, e.g.:
- A badly formatted line among a large number of well-formed lines
- A timeout longer than configured has occurred
- Incorrect authentication has been given
How would I go about detecting this in code?
Thank you!
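For context, the pattern I have in mind is try/except around a synchronous write (my understanding is the client signals failure by raising — e.g. an API exception for HTTP errors — rather than via a return value, but please confirm), deleting the local file only on success. A runnable sketch with a stub client — the exception class and client here are stand-ins, not the real influxdb-client API:

```python
class WriteError(Exception):
    """Stand-in for the client's API / timeout exceptions."""

class StubWriteApi:
    """Stand-in for a synchronous write_api: raises instead of returning a status."""
    def write(self, bucket, record):
        if not record or "|" in record:  # pretend this is a malformed line
            raise WriteError("unprocessable line protocol")

def send_file_lines(write_api, bucket, lines):
    """Return True only if every line was accepted, so the caller knows
    it is safe to delete the local file; otherwise keep it for a retry."""
    try:
        for line in lines:
            write_api.write(bucket, line)
    except WriteError:
        return False
    return True

api = StubWriteApi()
print(send_file_lines(api, "bucket", ["m,tag=a value=1"]))  # True -> safe to delete file
print(send_file_lines(api, "bucket", ["bad|line"]))         # False -> keep file
```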
| closed | 2021-06-29T20:25:20Z | 2021-12-10T13:52:12Z | https://github.com/influxdata/influxdb-client-python/issues/279 | [
"question",
"wontfix"
] | witanswer | 6 |
openapi-generators/openapi-python-client | rest-api | 657 | Add a way to override content types | Add `content_type_overrides` or similar to the formats accepted by `--config` so that folks can redirect a specialized content type to one of the implementations that exist in this project.
Specifically, one might solve #592 by adding something like:
```yaml
content_type_overrides:
"application/vnd.api+json": "application/json"
```
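Mechanically, the override would just be a lookup applied before dispatching on the content type — a possible sketch (the names are illustrative, not the generator's internals):

```python
CONTENT_TYPE_OVERRIDES = {"application/vnd.api+json": "application/json"}

def effective_content_type(content_type, overrides=CONTENT_TYPE_OVERRIDES):
    # Normalize by stripping any parameters (e.g. "; charset=utf-8"),
    # then map specialized types onto ones the generator already handles.
    base = content_type.split(";")[0].strip().lower()
    return overrides.get(base, base)

print(effective_content_type("application/vnd.api+json"))       # application/json
print(effective_content_type("application/json; charset=utf-8"))  # application/json
```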
See https://github.com/openapi-generators/openapi-python-client/discussions/655 for more info. | closed | 2022-08-21T17:36:18Z | 2023-08-13T01:32:59Z | https://github.com/openapi-generators/openapi-python-client/issues/657 | [
"✨ enhancement"
] | dbanty | 0 |
Lightning-AI/pytorch-lightning | machine-learning | 19,602 | Notebook crashes before training | ### Bug description
When training a T5 fine-tuner, `trainer.fit()` ends without any output. Attempts to run any other cells hang until the notebook is restarted, so I assume that the notebook has crashed.
### What version are you seeing the problem on?
master
### How to reproduce the bug
```python
from transformers import (
T5ForConditionalGeneration,
T5Tokenizer
)
import torch
from sklearn.model_selection import train_test_split
from torch.utils.data import Dataset, DataLoader
tokenizer = T5Tokenizer.from_pretrained('t5-base')
input_sequences = []
rewrite_prompts = [f"{prompt}" for prompt in data["transformation"]]
i = 0
for item in data["rewrite"]:
original = data['original'][i]
# format: original|new
line = f'{original}|{item}'
input_sequences.append(line)
i += 1
input_train, input_test, prompts_train, prompts_test = train_test_split(input_sequences, rewrite_prompts, test_size=0.1, random_state=42)
train_encodings = tokenizer(input_train, padding=True, truncation=True, return_tensors="pt", max_length=384)
train_labels = tokenizer(prompts_train, padding=True, truncation=True, return_tensors="pt", max_length=384).input_ids
# Tokenize testing data
test_encodings = tokenizer(input_test, padding=True, truncation=True, return_tensors="pt", max_length=384)
test_labels = tokenizer(prompts_test, padding=True, truncation=True, return_tensors="pt", max_length=384).input_ids
class TextDataset(Dataset):
def __init__(self, input_ids, labels):
self.input_ids = input_ids
self.labels = labels
def __len__(self):
return len(self.input_ids)
def __getitem__(self, idx):
item = {"input_ids": self.input_ids[idx], "labels": self.labels[idx]}
return item
train_dataset = TextDataset(input_ids=train_encodings["input_ids"], labels=train_labels)
validation_dataset = TextDataset(input_ids=test_encodings["input_ids"], labels=test_labels)
import pytorch_lightning as pl
import torch
from torch.utils.data import DataLoader
from transformers import T5ForConditionalGeneration, AdamW
class T5Tuner(pl.LightningModule):
def __init__(self, batchsize, t5model, t5tokenizer):
super(T5Tuner, self).__init__()
self.batch_size = batchsize
self.model = t5model
self.tokenizer = t5tokenizer
def forward(self, input_ids, labels=None):
return self.model(input_ids=input_ids, labels=labels)
def training_step(self, batch, batch_idx):
outputs = self.forward(batch['input_ids'], batch['labels'])
loss = outputs.loss
print('completed train step')
self.log("train_loss", loss, on_step=True, on_epoch=True, prog_bar=True, rank_zero_only=True)
print('logged')
return loss
def validation_step(self, batch, batch_idx):
outputs = self.forward(batch['input_ids'], batch['labels'])
loss = outputs.loss
print('completed val step')
self.log("val_loss", loss, on_step=True, on_epoch=True, prog_bar=True, rank_zero_only=True)
print('logged (2)')
return loss
def configure_optimizers(self):
optimizer = AdamW(self.parameters(), lr=3e-4, eps=1e-8)
return optimizer
def train_dataloader(self):
return DataLoader(train_dataset, batch_size=self.batch_size,
num_workers=4)
def val_dataloader(self):
return DataLoader(validation_dataset,
batch_size=self.batch_size,
num_workers=4)
# Model Fine-Tuning
t5_model = T5ForConditionalGeneration.from_pretrained('t5-base')
model = T5Tuner(16, t5_model, tokenizer)
trainer = pl.Trainer(max_epochs=3, accelerator="tpu", devices=1)
trainer.fit(model)
```
### Error messages and logs
```
INFO:pytorch_lightning.utilities.rank_zero:GPU available: False, used: False
INFO:pytorch_lightning.utilities.rank_zero:TPU available: True, using: 1 TPU cores
INFO:pytorch_lightning.utilities.rank_zero:IPU available: False, using: 0 IPUs
INFO:pytorch_lightning.utilities.rank_zero:HPU available: False, using: 0 HPUs
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow): Trainer
#- PyTorch Lightning Version (e.g., 1.5.0): latest
#- Lightning App Version (e.g., 0.5.2): n/a
#- PyTorch Version (e.g., 2.0): latest. torch-xla version is 1.13
#- Python version (e.g., 3.9): unknown (kaggle tpu)
#- OS (e.g., Linux): Linux
#- CUDA/cuDNN version: unknown
#- GPU models and configuration: TPU x1
#- How you installed Lightning(`conda`, `pip`, source): !pip install lightning
#- Running environment of LightningApp (e.g. local, cloud): n/a
<details>
<summary>Current environment</summary>
* CUDA:
- GPU: None
- available: False
- version: 12.1
* Lightning:
- torch: 2.1.0
- torch-xla: 2.1.0+libtpu
- torchaudio: 2.1.0
- torchdata: 0.7.0
- torchtext: 0.16.0
- torchvision: 0.16.0
* Packages:
- absl-py: 1.4.0
- accelerate: 0.27.2
- aiofiles: 22.1.0
- aiosqlite: 0.20.0
- anyio: 4.3.0
- argon2-cffi: 23.1.0
- argon2-cffi-bindings: 21.2.0
- array-record: 0.5.0
- arrow: 1.3.0
- astroid: 3.0.3
- asttokens: 2.4.1
- astunparse: 1.6.3
- attrs: 23.2.0
- audioread: 3.0.1
- autopep8: 2.0.4
- babel: 2.14.0
- beautifulsoup4: 4.12.3
- bleach: 6.1.0
- cachetools: 5.3.2
- certifi: 2024.2.2
- cffi: 1.16.0
- charset-normalizer: 3.3.2
- chex: 0.1.85
- click: 8.1.7
- cloud-tpu-client: 0.10
- cloudpickle: 3.0.0
- comm: 0.2.1
- contourpy: 1.2.0
- cycler: 0.12.1
- debugpy: 1.8.1
- decorator: 5.1.1
- defusedxml: 0.7.1
- diffusers: 0.26.3
- dill: 0.3.8
- distrax: 0.1.5
- dm-haiku: 0.0.12.dev0
- dm-tree: 0.1.8
- docstring-to-markdown: 0.15
- entrypoints: 0.4
- etils: 1.7.0
- exceptiongroup: 1.2.0
- executing: 2.0.1
- fastjsonschema: 2.19.1
- filelock: 3.13.1
- flake8: 7.0.0
- flatbuffers: 23.5.26
- flax: 0.8.1
- fonttools: 4.49.0
- fqdn: 1.5.1
- fsspec: 2024.2.0
- funcsigs: 1.0.2
- gast: 0.5.4
- gin-config: 0.5.0
- google-api-core: 1.34.1
- google-api-python-client: 1.8.0
- google-auth: 2.28.1
- google-auth-httplib2: 0.2.0
- google-auth-oauthlib: 1.2.0
- google-pasta: 0.2.0
- googleapis-common-protos: 1.62.0
- grpcio: 1.62.0
- gym: 0.26.2
- gym-notices: 0.0.8
- h5py: 3.10.0
- httplib2: 0.22.0
- huggingface-hub: 0.20.3
- idna: 3.6
- importlib-metadata: 7.0.1
- importlib-resources: 6.1.1
- ipykernel: 6.29.2
- ipython: 8.22.0
- ipython-genutils: 0.2.0
- isoduration: 20.11.0
- isort: 5.13.2
- jax: 0.4.23
- jaxlib: 0.4.23
- jedi: 0.19.1
- jinja2: 3.1.3
- jmp: 0.0.4
- joblib: 1.3.2
- jraph: 0.0.6.dev0
- json5: 0.9.17
- jsonpointer: 2.4
- jsonschema: 4.21.1
- jsonschema-specifications: 2023.12.1
- jupyter-client: 7.4.9
- jupyter-core: 5.7.1
- jupyter-events: 0.9.0
- jupyter-lsp: 1.5.1
- jupyter-server: 2.12.5
- jupyter-server-fileid: 0.9.1
- jupyter-server-terminals: 0.5.2
- jupyter-server-ydoc: 0.8.0
- jupyter-ydoc: 0.2.5
- jupyterlab: 3.6.7
- jupyterlab-pygments: 0.3.0
- jupyterlab-server: 2.25.3
- kagglehub: 0.1.9
- keras: 3.0.5
- keras-cv: 0.8.2
- keras-nlp: 0.8.1
- kiwisolver: 1.4.5
- lazy-loader: 0.3
- libclang: 16.0.6
- librosa: 0.10.1
- libtpu-nightly: 0.1.dev20231213
- llvmlite: 0.42.0
- markdown: 3.5.2
- markdown-it-py: 3.0.0
- markupsafe: 2.1.5
- matplotlib: 3.8.3
- matplotlib-inline: 0.1.6
- mccabe: 0.7.0
- mdurl: 0.1.2
- mistune: 3.0.2
- ml-dtypes: 0.2.0
- mpmath: 1.3.0
- msgpack: 1.0.7
- nbclassic: 1.0.0
- nbclient: 0.9.0
- nbconvert: 7.16.1
- nbformat: 5.9.2
- nest-asyncio: 1.6.0
- networkx: 3.2.1
- notebook: 6.5.6
- notebook-shim: 0.2.4
- numba: 0.59.0
- numpy: 1.26.4
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu12: 8.9.2.26
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-nccl-cu12: 2.18.1
- nvidia-nvjitlink-cu12: 12.3.101
- nvidia-nvtx-cu12: 12.1.105
- oauth2client: 4.1.3
- oauthlib: 3.2.2
- opencv-python-headless: 4.9.0.80
- opt-einsum: 3.3.0
- optax: 0.1.9
- orbax-checkpoint: 0.4.4
- overrides: 7.7.0
- packaging: 23.2
- pandas: 2.2.0
- pandocfilters: 1.5.1
- papermill: 2.5.0
- parso: 0.8.3
- pexpect: 4.9.0
- pillow: 10.2.0
- pip: 23.0.1
- platformdirs: 4.2.0
- pluggy: 1.4.0
- pooch: 1.8.1
- prometheus-client: 0.20.0
- promise: 2.3
- prompt-toolkit: 3.0.43
- protobuf: 3.20.3
- psutil: 5.9.8
- ptyprocess: 0.7.0
- pure-eval: 0.2.2
- pyasn1: 0.5.1
- pyasn1-modules: 0.3.0
- pycodestyle: 2.11.1
- pycparser: 2.21
- pydocstyle: 6.3.0
- pyflakes: 3.2.0
- pygments: 2.17.2
- pylint: 3.0.3
- pyparsing: 3.1.1
- python-dateutil: 2.8.2
- python-json-logger: 2.0.7
- python-lsp-jsonrpc: 1.1.2
- python-lsp-server: 1.10.0
- pytoolconfig: 1.3.1
- pytz: 2024.1
- pyyaml: 6.0.1
- pyzmq: 24.0.1
- referencing: 0.33.0
- regex: 2023.12.25
- requests: 2.31.0
- requests-oauthlib: 1.3.1
- rfc3339-validator: 0.1.4
- rfc3986-validator: 0.1.1
- rich: 13.7.0
- rope: 1.12.0
- rpds-py: 0.18.0
- rsa: 4.9
- safetensors: 0.4.2
- scikit-learn: 1.4.1.post1
- scipy: 1.12.0
- send2trash: 1.8.2
- setuptools: 65.5.1
- six: 1.16.0
- sniffio: 1.3.0
- snowballstemmer: 2.2.0
- soundfile: 0.12.1
- soupsieve: 2.5
- soxr: 0.3.7
- stack-data: 0.6.3
- sympy: 1.12
- tabulate: 0.9.0
- tenacity: 8.2.3
- tensorboard: 2.15.2
- tensorboard-data-server: 0.7.2
- tensorflow: 2.15.0
- tensorflow-datasets: 4.9.4
- tensorflow-estimator: 2.15.0
- tensorflow-hub: 0.16.1
- tensorflow-io: 0.36.0
- tensorflow-io-gcs-filesystem: 0.36.0
- tensorflow-metadata: 1.14.0
- tensorflow-probability: 0.23.0
- tensorflow-text: 2.15.0
- tensorstore: 0.1.45
- termcolor: 2.4.0
- terminado: 0.18.0
- tf-keras: 2.15.0
- threadpoolctl: 3.3.0
- tinycss2: 1.2.1
- tokenizers: 0.15.2
- toml: 0.10.2
- tomli: 2.0.1
- tomlkit: 0.12.3
- toolz: 0.12.1
- torch: 2.1.0
- torch-xla: 2.1.0+libtpu
- torchaudio: 2.1.0
- torchdata: 0.7.0
- torchtext: 0.16.0
- torchvision: 0.16.0
- tornado: 6.4
- tqdm: 4.66.2
- traitlets: 5.14.1
- transformers: 4.38.1
- trax: 1.4.1
- triton: 2.1.0
- types-python-dateutil: 2.8.19.20240106
- typing-extensions: 4.9.0
- tzdata: 2024.1
- ujson: 5.9.0
- uri-template: 1.3.0
- uritemplate: 3.0.1
- urllib3: 2.2.1
- wcwidth: 0.2.13
- webcolors: 1.13
- webencodings: 0.5.1
- websocket-client: 1.7.0
- werkzeug: 3.0.1
- whatthepatch: 1.0.5
- wheel: 0.42.0
- wrapt: 1.14.1
- y-py: 0.6.2
- yapf: 0.40.2
- ypy-websocket: 0.8.4
- zipp: 3.17.0
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor:
- python: 3.10.13
- release: 6.1.75+
- version: #1 SMP PREEMPT_DYNAMIC Fri Mar 1 15:14:26 UTC 2024
</details>
```
</details>
### More info
_No response_ | open | 2024-03-08T15:22:24Z | 2024-03-08T15:22:39Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19602 | [
"bug",
"needs triage",
"ver: 2.2.x"
] | JIBSIL | 0 |
pydantic/pydantic | pydantic | 11,328 | TypeAdapter.validate transforms a nested dictionary into an empty object | ### Initial Checks
- [x] I confirm that I'm using Pydantic V2
### Description
- action: use `TypeAdapter.validate_python` to validate a dict against a python `TypedDict`, the dict should not validate
- expected: exception since the value does not validate
- actual: `TypeAdapter` transforms the input value (it drops the offending keys) and outputs an object that complies with the input `TypedDict`
to reproduce:
```
❯ pyenv virtualenv 3.13.1 pydantic_test
❯ pyenv activate pydantic_test
❯ pip install pydantic
Collecting pydantic
Downloading pydantic-2.10.5-py3-none-any.whl.metadata (30 kB)
[snip]
Successfully installed annotated-types-0.7.0 pydantic-2.10.5 pydantic-core-2.27.2 typing-extensions-4.12.2
❯ python test.py
validate input: {'field': {'two': 'foo'}}
validate output: {'field': {}}
```
### Example Code
```Python
from typing import NotRequired, TypedDict
import pydantic
class Option1(TypedDict):
one: NotRequired[int]
class Option2(TypedDict):
two: NotRequired[bool]
class Unions(TypedDict):
field: Option1 | Option2
if __name__ == "__main__":
invalid = {"field": {"two": "foo"}}
result = pydantic.TypeAdapter(Unions).validate_python(invalid, strict=True)
print(f"validate input: {invalid}")
print(f"validate output: {result}")
```
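A plausible mechanism (my assumption, I have not verified pydantic's internals): TypedDict validation ignores extra keys by default, so `{"two": "foo"}` matches `Option1` as an empty dict before `Option2` is ever tried. A stdlib-only toy validator reproduces the behavior:

```python
def validate_typeddict(value, required, optional, ignore_extra=True):
    # toy stand-in for TypedDict validation, not pydantic's real logic
    out = {}
    for key, typ in {**required, **optional}.items():
        if key in value:
            if not isinstance(value[key], typ):
                raise ValueError(f"{key}: expected {typ.__name__}")
            out[key] = value[key]
    missing = required.keys() - value.keys()
    if missing:
        raise ValueError(f"missing keys: {missing}")
    if not ignore_extra:
        extra = value.keys() - required.keys() - optional.keys()
        if extra:
            raise ValueError(f"extra keys: {extra}")
    return out

# {"two": "foo"} has no keys Option1 knows about, so it "validates" as {}
result = validate_typeddict({"two": "foo"}, required={}, optional={"one": int})
```

If that is the cause, forbidding extra keys on the TypedDicts should make the original example raise instead of silently returning `{'field': {}}`.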
### Python, Pydantic & OS Version
```Text
❯ python -c "import pydantic.version; print(pydantic.version.version_info())"
pydantic version: 2.10.5
pydantic-core version: 2.27.2
pydantic-core build: profile=release pgo=false
install path: /Users/josteingogstad/.pyenv/versions/3.13.1/envs/pydantic_test/lib/python3.13/site-packages/pydantic
python version: 3.13.1 (main, Jan 8 2025, 13:21:34) [Clang 15.0.0 (clang-1500.3.9.4)]
platform: macOS-15.2-arm64-arm-64bit-Mach-O
related packages: typing_extensions-4.12.2
commit: unknown
``` | open | 2025-01-23T14:06:33Z | 2025-02-12T19:28:41Z | https://github.com/pydantic/pydantic/issues/11328 | [
"bug V2"
] | jgogstad | 3 |
microsoft/qlib | machine-learning | 1242 | US data displays negative annualized return for all the models | I tried the US data with all the models, but all of them show a negative annualized return (while the SP500 benchmark shows a 12% annualized return). For example, here is my workflow for LSTM; did I make any mistakes? (I also checked the SP500 from 2017 to 2020, and the portfolio should not be negative even with randomly selected stocks.) Thanks a lot!
```yaml
qlib_init:
    provider_uri: "~/.qlib/qlib_data/us_data"
    region: us
market: &market SP500
benchmark: &benchmark ^gspc
data_handler_config: &data_handler_config
    start_time: 2008-01-01
    end_time: 2020-11-01
    fit_start_time: 2008-01-01
    fit_end_time: 2014-12-31
    instruments: *market
    infer_processors:
        - class: RobustZScoreNorm
          kwargs:
              fields_group: feature
              clip_outlier: true
        - class: Fillna
          kwargs:
              fields_group: feature
    learn_processors:
        - class: DropnaLabel
        - class: CSRankNorm
          kwargs:
              fields_group: label
    label: ["Ref($close, -2) / Ref($close, -1) - 1"]
port_analysis_config: &port_analysis_config
    strategy:
        class: TopkDropoutStrategy
        module_path: qlib.contrib.strategy
        kwargs:
            signal:
                - <MODEL>
                - <DATASET>
            topk: 50
            n_drop: 5
    backtest:
        start_time: 2017-01-01
        end_time: 2020-11-01
        account: 100000000
        benchmark: *benchmark
        exchange_kwargs:
            deal_price: close
            open_cost: 0.0000229
            close_cost: 0
            min_cost: 0.01
task:
    model:
        class: LSTM
        module_path: qlib.contrib.model.pytorch_lstm
        kwargs:
            d_feat: 6
            hidden_size: 64
            num_layers: 2
            dropout: 0.0
            n_epochs: 200
            lr: 1e-3
            early_stop: 20
            batch_size: 800
            metric: loss
            loss: mse
            GPU: 7
    dataset:
        class: DatasetH
        module_path: qlib.data.dataset
        kwargs:
            handler:
                class: Alpha360
                module_path: qlib.contrib.data.handler
                kwargs: *data_handler_config
            segments:
                train: [2008-01-01, 2014-12-31]
                valid: [2015-01-01, 2016-12-31]
                test: [2017-01-01, 2020-11-01]
    record:
        - class: SignalRecord
          module_path: qlib.workflow.record_temp
          kwargs:
              model: <MODEL>
              dataset: <DATASET>
        - class: SigAnaRecord
          module_path: qlib.workflow.record_temp
          kwargs:
              ana_long_short: False
              ann_scaler: 252
        - class: PortAnaRecord
          module_path: qlib.workflow.record_temp
          kwargs:
              config: *port_analysis_config
```
```
'The following are analysis results of benchmark return(1day).'
                        risk
mean                0.000508
std                 0.013132
annualized_return   0.120839
information_ratio   0.596487
max_drawdown       -0.382494
'The following are analysis results of the excess return without cost(1day).'
                        risk
mean               -0.000012
std                 0.005776
annualized_return  -0.002809
information_ratio  -0.031517
max_drawdown       -0.191328
```
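For reference, the annualization convention behind the numbers above appears to be annualized_return = mean × N and information_ratio = mean / std × √N with N ≈ 238 trading days (inferred from the reported values themselves, not from qlib's source):

```python
import math

def risk_metrics(mean, std, n_days=238):
    # qlib-style annualization; n_days is inferred from the reported output
    return {
        "annualized_return": mean * n_days,
        "information_ratio": mean / std * math.sqrt(n_days),
    }

m = risk_metrics(0.000508, 0.013132)  # the benchmark row above
```

Plugging in the benchmark's mean and std reproduces 0.1208 and 0.5965 up to rounding, so the metrics themselves look internally consistent; the negative excess return is a separate question.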
| closed | 2022-08-01T19:33:34Z | 2022-12-17T18:01:54Z | https://github.com/microsoft/qlib/issues/1242 | [
"question",
"stale"
] | anonymous-papers | 3 |
pywinauto/pywinauto | automation | 891 | pywinauto fails to access controls when too many controls exist | ## Expected Behavior
This specific .NET application I am using works fine with pywinauto, except when the app retrieves a huge DataGridView (12k+ controls).
Controls do exist, however, as they can be verified with "Accessibility Insights" using the uia backend (equivalent to inspect.exe).
Many more (though not all!) controls can be found when using backend=win32, but I cannot do the automation with that.
Pywinauto works fine when DataGridView goes only up to 2.5k-3k controls.
## Actual Behavior
Pywinauto simply stops working properly and is not able to access any control methods or even list the controls (dump_tree/print_control_identifiers)
## Steps to Reproduce the Problem
1. Connect/start the application
2. Print controls (dump_tree/print_control_identifiers) when there are not too many controls; this works fine
3. Try to access/print controls when 12k+ controls exist; pywinauto fails
## Short Example of Code to Demonstrate the Problem
```
app_sci = pieces.pywinauto.application.Application(backend='uia')
app_sci = app_sci.connect(title_re='SCI - Sistema de Cadastro.*', found_index = 0)
for w in app_sci.windows():
print(w) # Only one window exists, the one that is grabbed by the line below
app_sci = app_sci.window(title_re='SCI - Sistema de Cadastro.*', top_level_only=True)
app_sci.wait('ready', timeout=10)
app_sci.dump_tree()
**It prints only... (xxx to conceal personal info)**
"""
Control Identifiers:
Dialog - 'SCI - xxx]' (L-8, T-8, R1928, B1048)
['Dialog', 'SCI xxx]Dialog', 'SCI - Sistema de Cadastro xxx']
child_window(title="SCI - Sistema de Cadastro xxx]", auto_id="MDIPrincipal", control_type="Window")
"""
```
Using app_sci.dump_tree() with backend win32 prints over 1050 lines of controls
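One mitigation worth trying (my suggestion, not part of the original report): limit the traversal depth so the huge DataGridView is not fully enumerated; pywinauto's `dump_tree`/`print_control_identifiers` accept a `depth` argument for this in recent versions. A generic depth-limited walk illustrates the idea:

```python
def dump_tree(tree, depth=1, level=0):
    # tree: {control_name: {child_name: {...}}}; stop descending below `depth`
    lines = []
    for name, subtree in tree.items():
        lines.append("  " * level + name)
        if level < depth:
            lines.extend(dump_tree(subtree, depth, level + 1))
    return lines

lines = dump_tree({"Dialog": {"Grid": {"Cell1": {}, "Cell2": {}}}}, depth=1)
# only two lines instead of four: the grid's cells are skipped
```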
## Specifications
- Pywinauto version: pywinauto==0.6.8
- Python version and bitness: 3.7.4 - 64bits
- Platform and OS: Windows Server 2016 - 64bits
| open | 2020-02-17T06:25:45Z | 2023-11-30T02:09:40Z | https://github.com/pywinauto/pywinauto/issues/891 | [
"performance"
] | Pestitschek | 7 |
ultralytics/ultralytics | computer-vision | 18,834 | How to enable Mixup and mosaic augmentation during training with --rect=True | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello. I would like to ask why mixup and mosaic augmentations are not applied during training with the --rect=True flag. Is there any way to enable them? I prepared a dataset where each picture has a resolution of 1248x704. Please advise on any available solution for this inquiry.
### Additional
_No response_ | closed | 2025-01-23T02:19:16Z | 2025-01-25T14:45:13Z | https://github.com/ultralytics/ultralytics/issues/18834 | [
"question"
] | WildTaras | 9 |
taverntesting/tavern | pytest | 408 | Remove 'payload' from MQTT response | Now that 'json' can take a single string/int/float/boolean value, it may make sense to just remove this and tell users to use the 'json' key for everything. The other option is to use the 'payload' key for everything.
Also simplify the handling of the difference between json and 'plain' responses in mqtt/response.py | closed | 2019-08-14T16:33:55Z | 2019-12-09T14:14:51Z | https://github.com/taverntesting/tavern/issues/408 | [
"Type: Maintenance"
] | michaelboulton | 1 |
graphql-python/graphene-sqlalchemy | sqlalchemy | 212 | Generate Mutation input from Model | Is there a way to have something like this ?
```python
class AddUserInput(SQLAlchemyInputObjectType):
class Meta:
model = UserModel
``` | closed | 2019-05-09T14:21:54Z | 2023-02-25T06:58:19Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/212 | [] | Rafik-Belkadi | 3 |
cupy/cupy | numpy | 8,803 | `cupy.convolve` gives a wrong result when the input contains `nan` | ### Description
`cupy.convolve` gives a wrong result when the input contains `nan`
### To Reproduce
```py
import numpy as np
import cupy as cp
print("np:", np.convolve(np.array([1, np.nan]), np.array([1, np.nan])))
print("cp:", cp.convolve(cp.array([1, cp.nan]), cp.array([1, cp.nan])))
```
Output:
```py
np: [ 1. nan nan]
cp: [nan nan nan]
```
By the definition of convolution, the first output element is `s[0] = a[0]*b[0]`. Hence `s[0]` should be `1`, not `nan`.
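To make that concrete, here is a minimal pure-Python sketch of the full convolution (illustrative only, not CuPy's kernel), showing that `s[0]` involves only `a[0]*b[0]` and therefore stays finite:

```python
def convolve_full(a, b):
    # full discrete convolution: s[n] = sum over k of a[k] * b[n - k]
    out = []
    for n in range(len(a) + len(b) - 1):
        acc = 0.0
        for k in range(len(a)):
            j = n - k
            if 0 <= j < len(b):
                acc += a[k] * b[j]
        out.append(acc)
    return out

s = convolve_full([1.0, float("nan")], [1.0, float("nan")])
# s[0] = 1.0; s[1] and s[2] are nan, matching NumPy's [1., nan, nan]
```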
### Installation
Wheel (`pip install cupy-***`)
### Environment
```
OS : Linux-6.8.0-49-generic-x86_64-with-glibc2.35
Python Version : 3.10.12
CuPy Version : 13.3.0
CuPy Platform : NVIDIA CUDA
NumPy Version : 2.1.0
SciPy Version : 1.13.1
Cython Build Version : 0.29.36
Cython Runtime Version : 0.29.37
CUDA Root : /usr/local/cuda
nvcc PATH : /usr/local/cuda/bin/nvcc
CUDA Build Version : 12060
CUDA Driver Version : 12040
CUDA Runtime Version : 12060 (linked to CuPy) / 12020 (locally installed)
CUDA Extra Include Dirs : []
cuBLAS Version : 120201
cuFFT Version : 11008
cuRAND Version : 10303
cuSOLVER Version : (11, 5, 2)
cuSPARSE Version : 12101
NVRTC Version : (12, 2)
Thrust Version : 200600
CUB Build Version : 200600
Jitify Build Version : <unknown>
cuDNN Build Version : None
cuDNN Version : None
NCCL Build Version : None
NCCL Runtime Version : None
cuTENSOR Version : None
cuSPARSELt Build Version : None
Device 0 Name : NVIDIA GeForce RTX 4070 Laptop GPU
Device 0 Compute Capability : 89
Device 0 PCI Bus ID : 0000:01:00.0
```
### Additional Information
_No response_ | closed | 2024-12-10T11:51:42Z | 2025-02-07T00:20:10Z | https://github.com/cupy/cupy/issues/8803 | [
"issue-checked"
] | AnonymousPlayer2000 | 2 |
microsoft/RD-Agent | automation | 2 | This repo is missing important files | There are important files that Microsoft projects should all have that are not present in this repository. A pull request has been opened to add the missing file(s). When the pr is merged this issue will be closed automatically.
Microsoft teams can [learn more about this effort and share feedback](https://docs.opensource.microsoft.com/releasing/maintain/templates/) within the open source guidance available internally.
[Merge this pull request](https://github.com/microsoft/RD-Agent/pull/1) | closed | 2024-04-03T03:39:56Z | 2024-04-03T04:04:15Z | https://github.com/microsoft/RD-Agent/issues/2 | [] | microsoft-github-policy-service[bot] | 0 |
QingdaoU/OnlineJudge | django | 99 | Some small details we'd like added | 1.
We'd like to be able to mark certain accounts (e.g. the admin accounts used to publish and test problems) as reserved accounts that are excluded from the ranking display.
After publishing a problem we need to run the reference solution, but doing so leaves too many submission records on the rank list.
The admin account that publishes problems keeps occupying first place on the leaderboard, which really doesn't look good.
Alternatively, providing a way to delete submission records would also work.
2.
The ID field when adding a problem is currently blank and has to be filled in manually. We'd like it to show a new ID by default.
3.
When uploading test cases, only .out files are supported, not .ans.
Test data downloaded from some sites comes in .ans format, so we have to extract it, rename the extensions, and re-compress it before uploading.
We'd like .ans files to be supported directly.
4.
Test cases apparently must start from 1.in/1.out? Sometimes the cases start from 0.in/0.out and then they are not recognized.
5. Under OI contest rules, the score of each test point defaults to 0.
But when a problem has ten or even twenty test points, we have to manually change each 0 to 10 or 5, which means ten or twenty edits.
It would be great if, after uploading the test data, the score defaulted to (100 / number of test points); tweaking individual test points from there would also be convenient.
6. The default memory limit is always 256MB.
On my server 256MB is not enough to start the Java virtual machine, so the minimum memory for every problem is 512MB.
But other problem setters don't know about this when adding problems, so the default 256MB often goes unchanged, and the problem is only discovered after someone submits in Java.
So could an option be provided to set the default memory size? Or could the last-used memory size be saved automatically?
These are some small details I found inconvenient while using this OJ system.
Since I had never worked with Python or Django before, I spent a long time fiddling under the hood trying to make these changes myself, without success.
So I can only file an issue and hope to draw the developers' attention.
Of course, if the developers think no changes are needed, that's fine too.
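A sketch of the default scoring proposed in point 5 (illustrative only, not the OJ's actual code), splitting the total evenly and giving any remainder to the last test point:

```python
def default_scores(n_testcases, total=100):
    # proposed default: split the total score evenly across test points
    base = total // n_testcases
    scores = [base] * n_testcases
    scores[-1] += total - base * n_testcases  # remainder goes to the last point
    return scores

s = default_scores(20)  # twenty test points: 5 each
```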
| closed | 2017-12-05T11:48:56Z | 2017-12-08T15:37:52Z | https://github.com/QingdaoU/OnlineJudge/issues/99 | [] | zqqian | 4 |
reloadware/reloadium | flask | 59 | Syntax error when imports are not on first line | **Describe the bug**
I have a number of files that contain preambles before the code (eg copyright notices, file description, etc).
All my python files then contain `from __future__ import annotations` for typing purposes.
When using reloadium with debug, I get the following SyntaxError:
```py
Traceback (most recent call last):
File "one.py", line 4
from __future__ import annotations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SyntaxError: from __future__ imports must occur at the beginning of the file
```
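For what it's worth, CPython itself accepts a module docstring before a `__future__` import; the grammar only forbids other statements ahead of it. That suggests (my assumption) that the reload instrumentation prepends code to the module before the docstring. A quick check with `compile`:

```python
# A docstring before a __future__ import is legal Python;
# the SyntaxError appears only when another statement comes first.
src_ok = '"""Docstring"""\nfrom __future__ import annotations\n'
src_bad = 'x = 1\nfrom __future__ import annotations\n'

compile(src_ok, "one.py", "exec")  # compiles cleanly

try:
    compile(src_bad, "one.py", "exec")
    raised = False
except SyntaxError:
    raised = True  # reproduces the reported error
```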
**To Reproduce**
Steps to reproduce the behavior:
1. create 2 files: `one.py` and `two.py`
2. contents of `one.py`
```py
"""
Docstring
"""
from __future__ import annotations
```
3. contents of `two.py`
```py
import one
```
4. In PyCharm, add the directory containing the two files to reloadable paths
5. Run and see the error
**Expected behavior**
No error occurs
**Desktop:**
- OS: MacOS
- OS version: 12.6 (Monterey)
- Reloadium package version: 0.9.4
- PyCharm plugin version: 0.8.8
- Editor: Pycharm
- Run mode: Debug
**Additional context**
Unfortunately, this is blocking me trying to debug why the Run configuration doesn't work for my project (for some reason docstrings on \_\_init\_\_ functions are being stripped when using reloadium) | closed | 2022-11-06T01:28:13Z | 2022-11-21T22:42:45Z | https://github.com/reloadware/reloadium/issues/59 | [] | bragradon | 1 |
JaidedAI/EasyOCR | machine-learning | 1,025 | Baselining inference times of various Word detectors. | I am in need of making a trade-off between inference time and accuracies of the various detection model.
I tried docTR on a sample set and it gave following inference times:
real 0m4.963s
user 0m13.920s
sys 0m2.695s
And with CRAFT, I got
real 0m7.633s
user 0m6.958s
sys 0m2.356s
How do I read into this? Also can someone suggest some good architectures that I can try that are also fast.
PS: all the SOTA ones are GPU heavy
PSS: I know this is not the most apt place to have this discussion but then again, I do not know any other place either. | open | 2023-05-21T10:23:13Z | 2023-05-21T10:23:13Z | https://github.com/JaidedAI/EasyOCR/issues/1025 | [] | ceyxasm | 0 |
blb-ventures/strawberry-django-plus | graphql | 162 | InputMutation / Mutation not working with List return type | Not sure if this is on purpose, but I think that a `gql.django.input_mutation` should be able to return a list type. There seems to be something off in the schema generation based on the type annotations: the generated payload union doesn't contain a list, but the bare type itself, and the mutation fails at runtime.
Example:
```
@gql.type
class Foo:
bar: str
@gql.type
class Mutation:
@gql.django.input_mutation
def do_something(self, info: Info, foo: str, bar: str) -> List[Foo]:
pass
```
results in:
```
input DoSomethingInput {
foo: String!
bar: String!
}
union DoSomethingPayload = Foo | OperationInfo
```
but should result in:
```
"""Input data for `doSomething` mutation"""
input DoSomethingInput {
foo: String!
bar: String!
}
union DoSomethingPayload = [Foo] | OperationInfo
```
At runtime the error is:
```
{
"data": null,
"errors": [
{
"message": "The type \"<class 'list'>\" cannot be resolved for the field \"doSomething\" , are you using a strawberry.field?",
"locations": [
{
"line": 2,
"column": 3
}
],
"path": [
"doSomething"
]
}
]
}
``` | closed | 2023-01-12T17:15:57Z | 2023-01-12T19:51:32Z | https://github.com/blb-ventures/strawberry-django-plus/issues/162 | [] | gersmann | 3 |
FactoryBoy/factory_boy | django | 128 | Use **kwargs instead of extra for overwriting fields when calling #attributes() | I come from the ruby world and really liked factory girl, so I'm very happy to see that there is a similar solution for python. Thanks for that!
I'm currently introducing factory-boy into one of our company's projects and stumbled over #attributes(), because its arguments behave differently from #build() or #create().
To make an example assume I have a User model and a UserFactory with the field `title`. To build / create an instance with a custom title, I would call `UserFactory.create(title='foobar')`. However, to get an attributes-dict with a custom title, I would call `UserFactory.attributes(extra={'title': 'foobar'})`.
I think that's a bit cumbersome and can be solved quickly by replacing the signature argument of `factory.attributes(cls, create=False, extra=None)` with something like `factory.attributes(cls, create=False, **kwargs)`.
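A minimal illustration of the proposed signature (a toy stand-in, not factory_boy's actual implementation): with `**kwargs`, `attributes()` accepts overrides the same way `build()`/`create()` do.

```python
class UserFactory:
    # stand-ins for the factory's declared fields
    _defaults = {"title": "Mr.", "name": "John"}

    @classmethod
    def attributes(cls, create=False, **kwargs):
        # overrides are passed directly as keyword arguments,
        # mirroring build(title=...) / create(title=...)
        attrs = dict(cls._defaults)
        attrs.update(kwargs)
        return attrs

attrs = UserFactory.attributes(title="foobar")
```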
What do you think?
| closed | 2014-02-03T06:49:20Z | 2015-05-06T05:06:48Z | https://github.com/FactoryBoy/factory_boy/issues/128 | [] | florianpilz | 2 |
ray-project/ray | tensorflow | 50,955 | [core] raylet memory leak | ### What happened + What you expected to happen
Seeing consistent memory growth in raylet process on a long-running cluster:
<img width="1371" alt="Image" src="https://github.com/user-attachments/assets/ebcdf7f7-0c1e-4ae7-91f0-7d18a0d7d183" />
<img width="1361" alt="Image" src="https://github.com/user-attachments/assets/67004d58-082b-443b-b666-4a1978150e00" />
### Versions / Dependencies
2.41
### Reproduction script
n/a
### Issue Severity
None | closed | 2025-02-27T18:02:35Z | 2025-03-08T00:20:31Z | https://github.com/ray-project/ray/issues/50955 | [
"bug",
"P0",
"core"
] | zcin | 1 |
huggingface/datasets | nlp | 6,478 | How to load data from lakefs | My dataset is stored on the company's lakefs server. How can I write code to load the dataset? It would be great if I could provide code examples or provide some references
| closed | 2023-12-06T09:04:11Z | 2024-07-03T19:13:57Z | https://github.com/huggingface/datasets/issues/6478 | [] | d710055071 | 3 |
tensorflow/tensor2tensor | deep-learning | 1,910 | Use of Layer Normalization | Hello! I would like to know why Layer Normalization and Residual connections have been used in the Transformer architecture. | open | 2022-03-19T20:35:23Z | 2022-03-19T20:35:23Z | https://github.com/tensorflow/tensor2tensor/issues/1910 | [] | Andre1998Shuvam | 0 |
aio-libs/aiomysql | asyncio | 303 | aiomysql Import Error _scramble | In new version of PyMySQL in connection.py _scramble and _scramble_323 has been removed and it cause an error when importing aiomysql. | closed | 2018-06-27T14:54:00Z | 2018-07-06T20:28:21Z | https://github.com/aio-libs/aiomysql/issues/303 | [] | farshid616 | 3 |
mwaskom/seaborn | matplotlib | 3,145 | Legend subtitles should not use hardcoded colors | Currently, in order to generate legend subtitles for multiple semantic types (i.e. hue, size, style), the relational plotter creates invisible legend entries and uses the labels as subtitles. If there is only one semantic type, then the actual "legend title" is used rather than these "hacked legend labels". See https://github.com/mwaskom/seaborn/pull/1483 and these lines:
https://github.com/mwaskom/seaborn/blob/bf4695466d742f301f361b8d0c8168c5c4bdf889/seaborn/relational.py#L205-L212
The "legend title" exhibits desirable behavior by reflecting `rcParams['text.color']` (changed to `'red'` below).
<img width="101" alt="image" src="https://user-images.githubusercontent.com/16842594/202290921-60495374-44c0-4a09-86b1-a50a2120e48e.png">
However, the "hacked legend labels" exhibit undesirable behavior:
<img width="120" alt="image" src="https://user-images.githubusercontent.com/16842594/202290817-64db8886-44e8-411b-9143-dcd3295963dc.png">
1) When `rcParams['legend.labelcolor'] = 'linecolor'`, the subtitles show up as white because the invisible handles are hardcoded to white ("w"). --> Should reflect `rcParams['text.color']` instead.
https://github.com/mwaskom/seaborn/blob/bf4695466d742f301f361b8d0c8168c5c4bdf889/seaborn/relational.py#L218-L220
2) Size and style labels are hardcoded to gray ("0.2"). --> Should reflect `rcParams['legend.labelcolor']` instead.
https://github.com/mwaskom/seaborn/blob/bf4695466d742f301f361b8d0c8168c5c4bdf889/seaborn/relational.py#L330
I'm not too familiar with this codebase, so there may be other places that also need de-hardcoding (to maintain consistency with these changes). | closed | 2022-11-16T21:13:56Z | 2022-12-27T23:25:40Z | https://github.com/mwaskom/seaborn/issues/3145 | [] | nikhilxb | 2 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 544 | Batch size of the validation/test set for Faster RCNN | Hi,
Thanks a lot for your very helpful code. I have a question about the batch size of the validation/test set for Faster RCNN. When I evaluate the validation/test set, I get an mAP of 0 if the batch size is higher than 1. If the batch size is changed to 1, the mAP is higher than 0. Why does the batch size of the validation/test set for Faster RCNN need to be set to 1? Thanks a lot.
| closed | 2022-04-29T20:25:53Z | 2022-05-02T18:04:46Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/544 | [] | lfangyu09 | 1 |
amidaware/tacticalrmm | django | 1,263 | Feature Request: RMM to add disk checks for discovered drives | **Is your feature request related to a problem? Please describe.**
In the Automation Manager, I have a server policy with multiple drive checks (C:, D:, E:, etc.); however, they are applied to all servers, and since not all servers have these drives, alerts are created for the missing ones.
**Describe the solution you'd like**
It would be great to have the RMM add disk checks based on the drives discovered by the agent. This way, we won't miss any drives which are added to servers. It might be handy as well to have the RMM discover if a certain drive is a USB or not and have the option to not add a check for drives which are smaller than xGB, that way it doesn't add disk checks falsely to the agent.
**Describe alternatives you've considered**
N/A - No workaround other than add the disk checks manually.
**Additional context**
If that's not possible, it might be worth having the agent ignore a disk check that is applied for a drive that doesn't exist.
| open | 2022-08-24T09:12:29Z | 2023-10-16T11:24:56Z | https://github.com/amidaware/tacticalrmm/issues/1263 | [
"enhancement"
] | mearkats | 1 |
davidsandberg/facenet | tensorflow | 1,031 | How to modify size of width and height of cropped image in facenet/src/align/align_dataset_mtcnn.py | The size of the cropped image is the same as the default length and width. How can I modify it to get the image size of 112*96? | open | 2019-06-05T04:13:42Z | 2019-06-17T07:05:27Z | https://github.com/davidsandberg/facenet/issues/1031 | [] | yangyingni | 1 |
ultralytics/yolov5 | machine-learning | 13,529 | drone resolutions for input yolov5 | ### Search before asking
- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello,
I am using a drone and trying to detect rice fields using YOLOv5. After successfully detecting the segmentation, I used the Ground Sampling Distance (GSD) method to calculate the area of the rice field segmentation based on the number of pixels. My problem now is that the drone images are about 20 MP, but YOLOv5 automatically resized them to 3008 x 2272 pixels. Is there any way for YOLO to keep the full 20 MP resolution?
I'd appreciate an answer from the professionals. Thank you, and good health always.
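For reference, the GSD area computation described above boils down to multiplying the segmentation's pixel count by the squared ground sampling distance (the numbers below are illustrative, not from the actual flight):

```python
def paddy_area_m2(mask_pixels, gsd_m_per_px):
    # Ground Sampling Distance: each pixel covers gsd**2 square meters,
    # so area = pixel count * GSD^2
    return mask_pixels * gsd_m_per_px ** 2

area = paddy_area_m2(150_000, 0.05)  # 150k mask pixels at 5 cm per pixel
```

Note that if the image is resized before inference, the effective GSD changes by the same scale factor, which must be accounted for in the area calculation.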
### Additional
_No response_ | open | 2025-03-10T09:03:52Z | 2025-03-15T22:39:39Z | https://github.com/ultralytics/yolov5/issues/13529 | [
"question",
"detect"
] | pepsodent72 | 9 |
pyg-team/pytorch_geometric | pytorch | 8,746 | diffpool view rendered docs | ### 📚 Describe the documentation issue
I can't view the rendered docs for [diffpool](https://pytorch-geometric.readthedocs.io/en/latest/_modules/torch_geometric/nn/dense/diff_pool.html).
### Suggest a potential alternative/fix
_No response_ | closed | 2024-01-10T06:35:55Z | 2024-01-10T06:37:54Z | https://github.com/pyg-team/pytorch_geometric/issues/8746 | [
"documentation"
] | willtryagain | 0 |