| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
microsoft/nlp-recipes | nlp | 618 | Cannot get the Aspect based sentiment model to work | Running into a number of issues:
1. First, the numpy version is incorrect, leading to the error: module 'numpy.random' has no attribute 'default_rng'
2. NLP Architect then throws a bad zipfile error | open | 2021-04-10T07:15:27Z | 2021-04-10T07:15:27Z | https://github.com/microsoft/nlp-recipes/issues/618 | [
"bug"
] | vkurpad | 0 |
localstack/localstack | python | 11,936 | bug: "AttributeError: 'NoneType' object has no attribute 'get'" during creation of RDS replica | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
During the creation of an RDS replica (PostgreSQL or MySQL) of an existing main database, an internal exception occurs with the error: "AttributeError: 'NoneType' object has no attribute 'get'"
### Expected Behavior
The replica is set up and running
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
$ localstack start
$ tflocal init
$ tflocal apply
Use this main.tf for reproduction: [main.tf.txt](https://github.com/user-attachments/files/17924482/main.tf.txt)
I am using OpenTofu, but I guess this will happen with Terraform as well.
### Environment
```markdown
- OS: Ubuntu 24.04.1 LTS
- LocalStack:
LocalStack version: 4.0.3.dev6
LocalStack Docker image sha: 78ea0f759ffd
LocalStack build date: 2024-11-26
LocalStack build git hash: 49f1032fb
```
### Anything else?
Main.tf for reproduction: [main.tf.txt](https://github.com/user-attachments/files/17924482/main.tf.txt)
My log entries while creating the rds instances: [localstack.log.txt](https://github.com/user-attachments/files/17924471/localstack.log.txt)
| open | 2024-11-26T18:49:52Z | 2024-11-28T07:06:23Z | https://github.com/localstack/localstack/issues/11936 | [
"type: bug",
"aws:rds",
"status: backlog"
] | pflueg | 0 |
elliotgao2/gain | asyncio | 2 | Regex selector support. | There are Css() and Xpath() selectors already.
I think a Regex() selector would be useful too.
```python
class Post(Item):
    id = Regex(r'\d{32}')
```
| closed | 2017-06-04T14:27:01Z | 2017-06-05T02:00:27Z | https://github.com/elliotgao2/gain/issues/2 | [] | elliotgao2 | 0 |
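A minimal sketch of what such a Regex() selector could look like, assuming an interface similar to the existing Css()/Xpath() selectors (the class layout here is hypothetical, not gain's actual code):

```python
import re

# Hypothetical Regex selector; the parse() interface mirrors what a
# Css()/Xpath()-style selector might expose, not gain's actual base class.
class Regex:
    def __init__(self, pattern):
        self.pattern = re.compile(pattern)

    def parse(self, html):
        # Return the first match in the page source, or None if absent.
        match = self.pattern.search(html)
        return match.group(0) if match else None

id_selector = Regex(r'\d{3}')
print(id_selector.parse("post id: 123, next: 456"))  # -> 123
```

An item field like `id = Regex('\d{32}')` would then just call `parse()` against the fetched page source.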
microsoft/hummingbird | scikit-learn | 43 | Minimum set of dependencies | Right now, users must install LGBM/XGBoost, etc. Can we have options in the `setup.py` file or some other options to not force users to install dependencies they won't be using? | closed | 2020-04-30T22:56:26Z | 2020-05-12T21:13:38Z | https://github.com/microsoft/hummingbird/issues/43 | [] | ksaur | 1 |
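One common way to make such dependencies opt-in is setuptools' `extras_require` mechanism; the sketch below is illustrative (the package names and grouping are assumptions, not hummingbird's actual `setup.py`):

```python
# Hypothetical optional-dependency groups; in setup.py this dict would be
# passed as setup(..., install_requires=["numpy"], extras_require=EXTRAS),
# letting users run e.g. `pip install some-package[lightgbm]` selectively.
EXTRAS = {
    "lightgbm": ["lightgbm"],
    "xgboost": ["xgboost"],
}
# An "all" extra aggregating every optional backend.
EXTRAS["all"] = sorted({dep for deps in EXTRAS.values() for dep in deps})

print(EXTRAS["all"])  # -> ['lightgbm', 'xgboost']
```

With this layout, the base install pulls in only the core requirements, and each converter backend is installed only when its extra is requested.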
ultralytics/yolov5 | machine-learning | 12,427 | yolov5 model export | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
There are two arguments for model export, dynamic and simplify. What do they mean?
### Additional
_No response_ | closed | 2023-11-25T11:07:16Z | 2024-01-05T00:21:05Z | https://github.com/ultralytics/yolov5/issues/12427 | [
"question",
"Stale"
] | Neloy262 | 2 |
sczhou/CodeFormer | pytorch | 403 | How to finetune stage 2? | Excellent work and thanks for sharing the repository.
## Problem
I am trying to finetune the CodeFormer model. I am focused on finetuning the encoder and transformer and hence, I am planning to train only Stage 2 using my own dataset of HQ images.
As per the config file provided in `options/CodeFormer_stage2.yml`, the model being used is `CodeFormerIdxModel`. However, I keep getting an `AttributeError` when I try to load the weights into the model.
## Code
Here is the code snippet to reproduce the problem.
```python
root_path = '.'
opt_path = './options/dummy.yml'
opt = parse(opt_path, root_path)
ckpt_path = './weights/CodeFormer/codeformer_stage2.pth'
ckpt = torch.load(ckpt_path)['params_ema']
model = CodeFormerIdxModel(opt)
model.load_state_dict(ckpt)
```
## Error
I get the following error when I execute the above code.
`AttributeError: 'CodeFormerIdxModel' object has no attribute 'load_state_dict'`.
## Queries
I have the following queries
1. The CodeFormerModel class has the `load_state_dict` method. Despite CodeFormerIdxModel having the same inheritance pattern, i.e. `BaseModel -> SRModel -> CodeFormerIdxModel`, why am I getting the AttributeError when I do the same for CodeFormerIdxModel?
2. There are several types of CodeFormer models in this repository, namely CodeFormerModel, CodeFormerIdxModel and CodeFormerJointModel. What is the purpose of each model?
3. Can I finetune CodeFormerModel in stage 2 using the pre-trained weights `codeformer_stage2.pth`? | closed | 2024-10-03T12:39:42Z | 2024-10-07T13:44:58Z | https://github.com/sczhou/CodeFormer/issues/403 | [] | tintin-py | 0 |
STVIR/pysot | computer-vision | 562 | train coco val2017 single gpu? | I just want to train SiamRPN++ with val2017 from the COCO dataset, load resnet50.model, and train with a single GPU. What should I do? Are there any training examples, like those for YOLOv5 or Darknet? This is my first contact with PySOT. | closed | 2022-01-25T06:59:07Z | 2022-01-26T06:02:21Z | https://github.com/STVIR/pysot/issues/562 | [] | WhXl | 0 |
RobertCraigie/prisma-client-py | pydantic | 56 | Add support for the BigInt type | https://www.prisma.io/docs/reference/api-reference/prisma-schema-reference#bigint
```prisma
model User {
id BigInt @id
name String
}
```
| closed | 2021-08-27T08:24:00Z | 2021-08-27T12:48:54Z | https://github.com/RobertCraigie/prisma-client-py/issues/56 | [
"kind/feature"
] | RobertCraigie | 0 |
dynaconf/dynaconf | flask | 201 | Document/Automate the Fedora packaging workflow | Thanks to @dmsimard dynaconf is being packaged for Fedora 31
https://koji.fedoraproject.org/koji/buildinfo?buildID=1336106
We need to document somewhere how to update/release the packages after a new release is done.
We can also take a look at how to automate the process on Azure Pipelines CI. | closed | 2019-07-26T18:33:18Z | 2020-09-12T03:44:51Z | https://github.com/dynaconf/dynaconf/issues/201 | [
"enhancement",
"Docs"
] | rochacbruno | 3 |
modin-project/modin | pandas | 7,370 | Define heuristics to automatically enable dynamic partitioning without performance penalty. | Dynamic partitioning works with different slowdowns for different functions, data and startup parameters (CPU count, MinPartitionSize and possibly others)
This task was created based on the results of these PRs: #7338 #7369 | open | 2024-08-19T09:03:12Z | 2025-02-02T00:38:31Z | https://github.com/modin-project/modin/issues/7370 | [
"Performance 🚀"
] | Retribution98 | 1 |
thewhiteh4t/pwnedOrNot | api | 58 | Is the password shown an old password? | Hi, are the passwords shown old breached passwords that no longer work? | closed | 2022-01-13T08:55:57Z | 2022-01-25T00:28:12Z | https://github.com/thewhiteh4t/pwnedOrNot/issues/58 | [] | jepunband | 1 |
huggingface/datasets | pandas | 7,400 | 504 Gateway Timeout when uploading large dataset to Hugging Face Hub | ### Description
I encountered consistent 504 Gateway Timeout errors while attempting to upload a large dataset (approximately 500GB) to the Hugging Face Hub. The upload fails during the process with a Gateway Timeout error.
I will continue trying to upload. While it might succeed in future attempts, I wanted to report this issue in the meantime.
### Reproduction
- I attempted the upload 3 times
- Each attempt resulted in the same 504 error during the upload process (not at the start, but in the middle of the upload)
- Using `dataset.push_to_hub()` method
### Environment Information
```
- huggingface_hub version: 0.28.0
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.39
- Python version: 3.11.10
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Running in Google Colab Enterprise ?: No
- Token path ?: /home/hotchpotch/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: hotchpotch
- Configured git credential helpers: store
- FastAI: N/A
- Tensorflow: N/A
- Torch: 2.5.1
- Jinja2: 3.1.5
- Graphviz: N/A
- keras: N/A
- Pydot: N/A
- Pillow: 10.4.0
- hf_transfer: N/A
- gradio: N/A
- tensorboard: N/A
- numpy: 1.26.4
- pydantic: 2.10.6
- aiohttp: 3.11.11
- ENDPOINT: https://huggingface.co
- HF_HUB_CACHE: /home/hotchpotch/.cache/huggingface/hub
- HF_ASSETS_CACHE: /home/hotchpotch/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/hotchpotch/.cache/huggingface/token
- HF_STORED_TOKENS_PATH: /home/hotchpotch/.cache/huggingface/stored_tokens
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
- HF_HUB_ETAG_TIMEOUT: 10
- HF_HUB_DOWNLOAD_TIMEOUT: 10
```
### Full Error Traceback
```python
Traceback (most recent call last):
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
response.raise_for_status()
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/hotchpotch/fineweb-2-edu-japanese.git/info/lfs/objects/batch
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/create_edu_japanese_ds/upload_edu_japanese_ds.py", line 12, in <module>
ds.push_to_hub("hotchpotch/fineweb-2-edu-japanese", private=True)
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/datasets/dataset_dict.py", line 1665, in push_to_hub
split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 5301, in _push_parquet_shards_to_hub
api.preupload_lfs_files(
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/hf_api.py", line 4215, in preupload_lfs_files
_upload_lfs_files(
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/_commit_api.py", line 395, in _upload_lfs_files
batch_actions_chunk, batch_errors_chunk = post_lfs_batch_info(
^^^^^^^^^^^^^^^^^^^^
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/lfs.py", line 168, in post_lfs_batch_info
hf_raise_for_status(resp)
File "/home/hotchpotch/src/github.com/hotchpotch/fineweb-2-edu-classifier-japanese/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 477, in hf_raise_for_status
raise _format(HfHubHTTPError, str(e), response) from e
huggingface_hub.errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/hotchpotch/fineweb-2-edu-japanese.git/info/lfs/objects/batch
```
| open | 2025-02-14T02:18:35Z | 2025-02-14T23:48:36Z | https://github.com/huggingface/datasets/issues/7400 | [] | hotchpotch | 4 |
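Until the server-side timeouts are resolved, a client-side retry with exponential backoff is a common workaround; the sketch below is generic, and wrapping `dataset.push_to_hub(...)` in such a helper is an assumption on my part, not a documented fix:

```python
import time

# Generic retry-with-backoff helper; `flaky` below only simulates the
# intermittent 504s -- in practice the callable would wrap push_to_hub.
def retry(fn, attempts=3, base_delay=1.0, retry_on=(RuntimeError,)):
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the last error
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff

calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("504 Gateway Time-out")
    return "ok"

print(retry(flaky, base_delay=0.01))  # -> ok  (after two simulated 504s)
```

For the real upload, `retry_on` would be the `HfHubHTTPError` from the traceback above rather than `RuntimeError`.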
kymatio/kymatio | numpy | 966 | frontends should expose TimeFrequencyScattering | Currently the frontends don't include TimeFrequencyScattering; one needs to do `from kymatio.scattering1d.frontend.xxx import TimeFrequencyScatteringxxx`. | closed | 2022-09-30T10:21:45Z | 2023-03-03T07:58:45Z | https://github.com/kymatio/kymatio/issues/966 | [] | lylyhan | 0 |
chezou/tabula-py | pandas | 50 | Error tokenizing data | # Summary of your issue
Error tokenizing data. C error: Expected 10 fields in line 18, saw 11
# Environment
Jupyter Notebook- Anaconda
Write and check your environment.
- [x] `python --version`: >3
- [ ] `java -version`: Version 8 update 111: 1.8.0_111
- [ ] OS and its version: Mac OS Sierra 10.12.4
- [ ] Your PDF URL: http://www.wrldc.in/9_reportNew/dailydata_01082017.pdf
# What did you do when you faced the problem?
I used read_pdf on above url and I received this error:
"Error tokenizing data. C error: Expected 10 fields in line 18, saw 11"
//write here
## Example code:
```
paste your core code
```
## Output:
```
paste your output
```
## What did you intend to be?
| closed | 2017-08-29T17:05:03Z | 2017-08-29T22:05:06Z | https://github.com/chezou/tabula-py/issues/50 | [] | jainsourabh | 1 |
mars-project/mars | scikit-learn | 3,279 | [BUG] mars.new_ray_session(backend="ray") raises TypeError: 'NoneType' object is not subscriptable when fetching data |
**Describe the bug**
```python
import mars
import mars.dataframe as md
import numpy as np
import pandas as pd
mars.new_ray_session(backend="ray", default=True)
s = np.random.RandomState(0)
raw = pd.DataFrame(s.rand(100, 4), columns=list("abcd"))
df = md.DataFrame(raw, chunk_size=30)
r = df.describe().execute()
print(r)
```
raises an exception:
```python
Traceback (most recent call last):
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/home/admin/mars/t1.py", line 15, in <module>
print(r)
File "/home/admin/mars/mars/core/entity/core.py", line 101, in __str__
return self._data.__str__()
File "/home/admin/mars/mars/dataframe/core.py", line 2159, in __str__
return self._to_str(representation=False)
File "/home/admin/mars/mars/dataframe/core.py", line 2131, in _to_str
corner_data = fetch_corner_data(self, session=self._executed_sessions[-1])
File "/home/admin/mars/mars/dataframe/utils.py", line 1165, in fetch_corner_data
return df_or_series._fetch(session=session)
File "/home/admin/mars/mars/core/entity/executable.py", line 161, in _fetch
return fetch(self, session=session, **kw)
File "/home/admin/mars/mars/deploy/oscar/session.py", line 1926, in fetch
return session.fetch(tileable, *tileables, **kwargs)
File "/home/admin/mars/mars/deploy/oscar/session.py", line 1705, in fetch
return asyncio.run_coroutine_threadsafe(coro, self._loop).result()
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/concurrent/futures/_base.py", line 444, in result
return self.__get_result()
File "/home/admin/.pyenv/versions/3.8.13/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
raise self._exception
File "/home/admin/mars/mars/deploy/oscar/session.py", line 1894, in _fetch
data = await session.fetch(tileable, *tileables, **kwargs)
File "/home/admin/mars/mars/deploy/oscar/session.py", line 1126, in fetch
await fetcher.append(chunk.key, meta, fetch_info.indexes)
File "/home/admin/mars/mars/services/task/execution/mars/fetcher.py", line 41, in append
storage_api = await self._get_storage_api(band)
File "/home/admin/mars/mars/lib/aio/lru.py", line 208, in wrapped
return await asyncio.shield(fut)
File "/home/admin/mars/mars/deploy/oscar/session.py", line 1084, in _get_storage_api
storage_api = await StorageAPI.create(self._session_id, band[0], band[1])
TypeError: 'NoneType' object is not subscriptable
```
This error is due to `new_session` being called with an incorrect backend in `new_ray_session`.
| closed | 2022-10-13T09:07:55Z | 2022-10-18T14:32:09Z | https://github.com/mars-project/mars/issues/3279 | [
"type: bug"
] | fyrestone | 0 |
frol/flask-restplus-server-example | rest-api | 83 | Questions regarding swagger-ui and oauth | I used a lot of your stuff for a current project, because your enhancements are just awesome!
But actually I'm unable to get everything working regarding the swagger-ui. The authorize button is going to be displayed as expected. And I get the list of defined scopes. But as soon as I want to get myself authorized for the swagger session there are two possibilities:
- password flow: If I try to use the password flow, then nothing happens, but the JavaScript console shows me that "realm" is not defined within swagger-ui.js
```
handleOauth2Login: function (auth) {
var host = window.location;
var pathname = location.pathname.substring(0, location.pathname.lastIndexOf('/'));
var defaultRedirectUrl = host.protocol + '//' + host.host + pathname + '/o2c.html';
var redirectUrl = window.oAuthRedirectUrl || defaultRedirectUrl;
var url = null;
var scopes = _.map(auth.get('scopes'), function (scope) {
return scope.scope;
});
var state, dets, ep;
window.OAuthSchemeKey = auth.get('title');
window.enabledScopes = scopes;
var flow = auth.get('flow');
if(auth.get('type') === 'oauth2' && flow && (flow === 'implicit' || flow === 'accessCode')) {
dets = auth.attributes;
url = dets.authorizationUrl + '?response_type=' + (flow === 'implicit' ? 'token' : 'code');
window.swaggerUi.tokenName = dets.tokenName || 'access_token';
window.swaggerUi.tokenUrl = (flow === 'accessCode' ? dets.tokenUrl : null);
state = window.OAuthSchemeKey;
}
else if(auth.get('type') === 'oauth2' && flow && (flow === 'application')) {
dets = auth.attributes;
window.swaggerUi.tokenName = dets.tokenName || 'access_token';
this.clientCredentialsFlow(scopes, dets.tokenUrl, window.OAuthSchemeKey);
return;
}
else if(auth.get('grantTypes')) {
// 1.2 support
var o = auth.get('grantTypes');
for(var t in o) {
if(o.hasOwnProperty(t) && t === 'implicit') {
dets = o[t];
ep = dets.loginEndpoint.url;
url = dets.loginEndpoint.url + '?response_type=token';
window.swaggerUi.tokenName = dets.tokenName;
}
else if (o.hasOwnProperty(t) && t === 'accessCode') {
dets = o[t];
ep = dets.tokenRequestEndpoint.url;
url = dets.tokenRequestEndpoint.url + '?response_type=code';
window.swaggerUi.tokenName = dets.tokenName;
}
}
}
redirect_uri = redirectUrl;
url += '&redirect_uri=' + encodeURIComponent(redirectUrl);
url += '&realm=' + encodeURIComponent(realm);
url += '&client_id=' + encodeURIComponent(clientId);
url += '&scope=' + encodeURIComponent(scopes.join(scopeSeparator));
url += '&state=' + encodeURIComponent(state);
for (var key in additionalQueryStringParams) {
url += '&' + key + '=' + encodeURIComponent(additionalQueryStringParams[key]);
}
window.open(url);
},
```
- implicit flow: If I use this flow for the security definition, then a click on "authorize" redirects me to the route /auth/oauth2/authorize, which seems to be expected. But the GET parameters "client_id" and "realm" are undefined there, so I'm unable to render the authorize page correctly.
I also saw that you set the client_id and realm for swagger within your config, but I don't know where you are going to render them for usage within the swagger-ui.
Can you help me?
BR Daniel | closed | 2017-11-13T14:39:24Z | 2018-04-26T06:33:14Z | https://github.com/frol/flask-restplus-server-example/issues/83 | [
"question"
] | dhofstetter | 6 |
ets-labs/python-dependency-injector | flask | 582 | Options pattern | Hello,
I'm a happy user of your package, it's awesome. I come from the .NET stack and find it very useful in Python.
I don't know if it is available or not in the package, but I miss a couple of things that I used a lot in .NET applications.
The first one is the possibility of loading configuration from a .json file. I know that I can use `.ini`, `.yaml`, and other formats, but I think it would be good to support `.json` also.
The second one, and the most important in my opinion, is don't have something similar to this out-of-the-box https://docs.microsoft.com/en-us/aspnet/core/fundamentals/configuration/options?view=aspnetcore-6.0
The idea behind this is to allow injecting "part" of the configuration as a typed object. For now, I'm using a workaround that works perfectly.
`config.json`
```json
{
"foo": {
"bar": "bar",
"baz": "baz"
},
"qux": "qux"
}
```
`foo_options.py`
```python
from dataclasses import dataclass
@dataclass
class FooOptions:
bar: str
baz: str
```
`containers.py`
```python
import json
from dacite import from_dict
from dependency_injector.containers import DeclarativeContainer, WiringConfiguration
from dependency_injector import providers
from foo_options import FooOptions
def _create_foo_options() -> FooOptions:
with open("config.json") as f:
return from_dict(FooOptions, json.loads(f.read())["foo"]) # using dacite package https://github.com/konradhalas/dacite
class Container(DeclarativeContainer):
wiring_config = WiringConfiguration(
modules=["__main__"]
)
foo_options = providers.Singleton(_create_foo_options)
```
`main.py`
```python
from dependency_injector.wiring import inject, Provide
from containers import Container
from foo_options import FooOptions
@inject
def main(foo_options: FooOptions = Provide[Container.foo_options]):
print(foo_options) # FooOptions(bar='bar', baz='baz')
if __name__ == '__main__':
container = Container()
main()
```
I would like to know your opinion about this. What I want to avoid is having a big object with all my configuration when a class only needs to be aware of a little part. I know that I can inject concrete values of config using Configuration provider or even the whole config like a dict, but I want to use dataclasses with types safety and don't use primitive values.
Congrats for your great job, I couldn't live without this package :)
| open | 2022-04-22T17:06:53Z | 2022-04-22T17:38:35Z | https://github.com/ets-labs/python-dependency-injector/issues/582 | [] | panicoenlaxbox | 1 |
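For reference, the dacite dependency in the workaround above can be replaced by a small stdlib-only binder; this is a sketch under the assumption that the options classes are flat dataclasses (no nesting):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class FooOptions:
    bar: str
    baz: str

def bind_options(cls, data):
    # Minimal stand-in for dacite.from_dict: keep only declared fields,
    # so extra keys in the config section are silently ignored.
    names = {f.name for f in fields(cls)}
    return cls(**{k: v for k, v in data.items() if k in names})

raw = json.loads('{"foo": {"bar": "bar", "baz": "baz"}, "qux": "qux"}')
print(bind_options(FooOptions, raw["foo"]))  # -> FooOptions(bar='bar', baz='baz')
```

The resulting factory could then be registered as a `providers.Singleton` exactly like `_create_foo_options` in the snippet above.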
horovod/horovod | pytorch | 3,597 | mpirun command stuck on warning | **Setup:**
I have 2 VMs each with 1 GPU.
I have horovod installed on both VMs.
The training script exists on both VMs
**From my first VM I run the following command:**
/path/to/mpirun -np 2 \
-H {VM_1_IP}:1,{VM_2_IP}:1 \
-bind-to none -map-by slot \
-x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH \
-mca pml ob1 -mca btl ^openib \
python training.py
**I get the following message:**
WARNING: Open MPI accepted a TCP connection from what appears to be a
another Open MPI process but cannot find a corresponding process
entry for that peer.
This attempted connection will be ignored; your MPI job may or may not
continue properly.
Local host: VM_2_IP
PID: {4_digit_number}
And then nothing happens...
What am I doing wrong? Any tips appreciated!
| closed | 2022-07-08T15:58:38Z | 2022-07-08T20:21:17Z | https://github.com/horovod/horovod/issues/3597 | [] | bluepra | 1 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 3 | Will the new model training consider adopting new techniques such as FlashAttention? | closed | 2023-07-19T03:13:29Z | 2023-07-22T04:03:06Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/3 | [] | cnzx05cnzx | 1 | 
flasgger/flasgger | flask | 227 | flasgger_static 404 | Here is my config
```python
SWAGGER_CONFIG = {
"headers": [
],
"specs": [
{
"endpoint": 'apispec',
"route": '/apispec.json',
"rule_filter": lambda rule: True, # all in
"model_filter": lambda tag: True, # all in
}
],
"static_url_path": '/playground/topic-clf-api/flasgger_static',
# "static_folder": "static", # must be set by user
"swagger_ui": True,
"specs_route": "/",
}
```
I can access http://host/playground/topic-clf-api and http://host/playground/topic-clf-api/apispec.json, but the static files are missing.

| open | 2018-08-15T19:31:24Z | 2020-03-06T11:29:47Z | https://github.com/flasgger/flasgger/issues/227 | [
"question",
"hacktoberfest"
] | ShaneKao | 3 |
graphistry/pygraphistry | pandas | 372 | Clear SSO exceptions | Clear exceptions that state the problem and, where applicable, the solution:
- [ ] Old graphistry server
- [ ] Unknown idp
- [ ] Org without an idp | closed | 2022-07-09T00:34:12Z | 2022-09-02T02:15:53Z | https://github.com/graphistry/pygraphistry/issues/372 | [] | lmeyerov | 1 |
localstack/localstack | python | 11,430 | bug: StartupScript runs twice with HELM deployment | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I am deploying a LocalStack container using Helm with `enableStartupScripts: true`. However, the script I have put in `startupScriptContent:` runs twice.
```
startupScriptContent: |
#!/bin/bash
echo "Hello World"
exit 0
```
### Expected Behavior
I expect that the script only runs once
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
Steps
1. Create a values yaml (see below for my example)
2. Deploy localstack using helm command like below
`helm upgrade --install --values=./localstack.values.yaml --namespace user test localstack/localstack`
```
image:
repository: localstack/localstack
tag: latest
pullPolicy: Always
extraEnvVars:
- name: SERVICES
value: "sqs,s3,sns"
- name: SQS_ENDPOINT_STRATEGY
value: "path"
service:
type: ClusterIP
ports:
- name: http
port: 4566
targetPort: 4566
podAnnotations:
sidecar.istio.io/inject: "false"
enableStartupScripts: true
startupScriptContent: |
#!/bin/bash
echo "Hello World"
exit 0
```
### Environment
```markdown
Mac OS: 14.4.1 (23E224
LocalStack version: 3.7.0
LocalStack build date: 2024-08-29
LocalStack build git hash: 6aa750570
```
### Anything else?
POD LOGS
```
LocalStack version: 3.7.0
LocalStack build date: 2024-08-29
LocalStack build git hash: 6aa750570
Hello World
Hello World
Ready.
``` | closed | 2024-08-29T10:13:50Z | 2025-02-18T14:00:13Z | https://github.com/localstack/localstack/issues/11430 | [
"type: bug",
"area: integration/kubernetes",
"status: backlog"
] | mosie7 | 5 |
moshi4/pyCirclize | matplotlib | 12 | error while plotting xticks & labels on user-specified position | Hello, I would like to plot CDS product labels at specified positions, but I got `SyntaxError: positional argument follows keyword argument`, as shown below. I have tried to modify the code but cannot figure it out. Could you let me know how to fix the problem?
### Source Code
```python
from pycirclize import Circos
from pycirclize.parser import Genbank
from pycirclize.utils import load_prokaryote_example_file
import numpy as np
from matplotlib.patches import Patch
# Load Genbank file
gbk = Genbank("/mnt/c/Users/Downloads/WGS/prokka_unicycler/L2_prokka/L2_prokka.gbk")
circos = Circos(sectors={gbk.name: gbk.range_size})
circos.text("Lactococcus species", size=12, r=20)
sector = circos.get_sector(gbk.name)
sector = circos.sectors[0]
cds_track = sector.add_track((90, 100))
cds_track.axis(fc="#EEEEEE", ec="none")
# Plot outer track with xticks
major_ticks_interval = 100000
minor_ticks_interval = 100000
outer_track = sector.add_track((98, 100))
outer_track.axis(fc="lightgrey")
outer_track.xticks_by_interval(
major_ticks_interval, label_formatter=lambda v: f"{v/ 10 ** 6:.1f} Mb"
)
outer_track.xticks_by_interval(minor_ticks_interval, tick_length=1, show_label=False)
# Plot Forward CDS, Reverse CDS track
f_cds_track = sector.add_track((90, 97), r_pad_ratio=0.1)
f_cds_track.genomic_features(gbk.extract_features("CDS", target_strand=1), fc="red")
r_cds_track = sector.add_track((90, 97), r_pad_ratio=0.1)
r_cds_track.genomic_features(gbk.extract_features("CDS", target_strand=-1), fc="blue")
# Add legend
handles = [
Patch(color="red", label="Forward CDS"),
Patch(color="blue", label="Reverse CDS"),
]
# Extract CDS product labels
pos_list, labels = [], []
for f in gbk.extract_features("CDS"):
start, end = int(str(f.location.end)), int(str(f.location.start))
pos = (start + end) / 2
label = f.qualifiers.get("product", [""])[0]
if label == "" or label.startswith("hypothetical"):
continue
if len(label) > 20:
label = label[:20] + "..."
pos_list.append(pos)
labels.append(label)
# Plot CDS product labels on outer position
cds_track.xticks(
pos_list=5866, 6609,
labels=("product"),
label_orientation="vertical",
show_bottom_line=True,
label_size=6,
line_kws=dict(ec="grey"),
)
fig = circos.plotfig()
_ = fig.legend(handles=handles, bbox_to_anchor=(0.5, 0.475), loc="center", fontsize=8)
```
### Error Output
```text
Cell In[252], line 61
)
^
SyntaxError: positional argument follows keyword argument
```
Thanks so much. | closed | 2023-02-28T07:52:34Z | 2024-05-03T02:36:59Z | https://github.com/moshi4/pyCirclize/issues/12 | [
"question"
] | KT-ABB | 2 |
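For reference, the reported SyntaxError comes from mixing a keyword argument (`pos_list=5866`) with a following bare positional value (`6609`); the likely fix is to pass the `pos_list` and `labels` lists built in the extraction loop. A pyCirclize-free sketch of the broken vs. corrected call shape (`xticks_demo` is a stand-in, not the real API):

```python
# Stand-in with the same calling shape as track.xticks(pos_list, labels, ...).
def xticks_demo(pos_list, labels, label_size=6):
    return list(zip(pos_list, labels))

# Broken form from the issue -- a SyntaxError before anything runs:
#   xticks_demo(pos_list=5866, 6609, labels=("product"))
# Corrected form: group the positions into one list and pass label strings.
pairs = xticks_demo(pos_list=[5866, 6609], labels=["geneA", "geneB"])
print(pairs)  # -> [(5866, 'geneA'), (6609, 'geneB')]
```

In the original snippet that would mean passing the accumulated `pos_list` and `labels` variables to `cds_track.xticks(...)` (keyword names per the pyCirclize version in use).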
sktime/sktime | scikit-learn | 7,484 | [ENH] Different exogenous features for different time series using a global model | Is there a way to use different exogenous features for different time series within a global model? For example, I would like to use feature_1 and feature_2 for time series ID_1, while using feature_3 and feature_4 for time series ID_2, and so on. Simply filling unused features with zeros for each time series ID is not a good idea. Does sktime have an existing solution for this scenario? | open | 2024-12-05T19:57:31Z | 2024-12-24T12:35:57Z | https://github.com/sktime/sktime/issues/7484 | [
"enhancement"
] | ncooder | 7 |
davidsandberg/facenet | tensorflow | 1,028 | which layers does train_softmax.py train? | So I was wondering: if I use train_softmax.py on my own dataset (training set) of just two classes, along with a --pretrained model of VGGFace2 (using Inception-ResNet-v1), does the network train just the final fully connected layers, without updating the previously learned conv/pooling layer weights?
Also, I have removed all the center loss terms and values from the code, as the center loss kept going higher and higher until it hit infinity. I think it's because of having just 2 classes. Now I don't have any problem like that.
In a nutshell, I was wondering: am I currently training just the CNN's final FC layers, keeping the weights of all previous layers constant? This is what I wish to do.
Thanks! | open | 2019-05-26T20:49:51Z | 2019-07-06T15:01:32Z | https://github.com/davidsandberg/facenet/issues/1028 | [] | shashi438 | 4 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 796 | low valued checkered effect using cycle-gan | Hi, I am training a paired dataset (normalised to [-1, 1]) with both CycleGAN and pix2pix. The pix2pix model performs well as expected, but the CycleGAN-predicted fake data always contains a low-valued checkered effect (ranging around [-0.1, 0.1]). Have you seen this before? Thanks in advance. | closed | 2019-10-15T11:14:11Z | 2019-10-30T15:51:13Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/796 | [] | yjs0704 | 2 |
jessevig/bertviz | nlp | 90 | Displaying results with Streamlit using JavaScript/HTML | Hi,
Hope this message reaches you well. I am trying to deploy BertViz in a Streamlit app, which can display a visualization if an HTML/JavaScript string is provided. From the repo code, it seems that the raw HTML/JavaScript is not returned after the visualization is rendered. While I understand this is probably to keep output clean in the Jupyter environment, I hope that `head_view` and `neuron_view` can optionally return the HTML/JavaScript code, so that I can embed the visualization in a web page. An example is available here: https://share.streamlit.io/cdpierse/transformers-interpret-streamlit/main/app.py (you can view the source code by clicking the top-right menu). It allows retrieving the visualization with `_repr_html_()`.
Thank you!
Charles | closed | 2022-02-13T21:58:56Z | 2022-04-02T13:55:54Z | https://github.com/jessevig/bertviz/issues/90 | [] | chen-yifu | 3 |
python-restx/flask-restx | flask | 324 | does Flask-restx will support flask 2.X.X ? | Hello,
Will Flask-RESTX support Flask 2.X.X?
I have to keep Flask < 2.0.0 to use Flask-RESTX.
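In the meantime, a workaround is to pin the constraint explicitly (for example in `requirements.txt`) so installs stay reproducible until Flask 2.x support lands:

```
flask<2.0.0
flask-restx
```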
Thank you | closed | 2021-05-25T13:20:04Z | 2021-05-27T07:02:29Z | https://github.com/python-restx/flask-restx/issues/324 | [
"question"
] | E-Kalla | 1 |
ipython/ipython | data-science | 14,648 | Jupyter shebang cells won't show partially written lines | When a shebang cell (e.g., %%sh, %%bash) in a Jupyter notebook writes/flushes output, the output doesn't show up until the script ends or a newline is written. This hurts the usability of programs that indicate progress by writing periodically on the same line.
Changing this ipython logic to read whatever text is available (rather than waiting for a newline) would fix the issue: https://github.com/ipython/ipython/blob/18e45295c06eb9/IPython/core/magics/script.py#L215. (I'm not familiar with asyncio's streams, so it might also be necessary to ensure the fd is in non-blocking mode.)
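A rough sketch of what that change could look like (illustrative only; `write` stands in for whatever callback IPython uses to display output, so this is not the actual internals): read whatever bytes are available instead of waiting for a full line, so partially written lines appear immediately.

```python
import asyncio

async def stream_output(reader: asyncio.StreamReader, write):
    # Instead of `await reader.readline()`, read whatever is available.
    # `read(n)` returns as soon as *any* data arrives, so a partial line
    # such as "progress: 42%" is displayed without waiting for "\n".
    while True:
        chunk = await reader.read(1024)
        if not chunk:  # b"" signals EOF
            break
        write(chunk.decode("utf-8", errors="replace"))
```

If the underlying fd were read directly rather than through an asyncio transport, it might additionally need to be put in non-blocking mode, as noted above.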
Below is a GIF demonstrating the issue. Observe that:
1. The output of a regular Python cell updates in realtime, even if it doesn't include a newline.
2. The output of an equivalent shebang cell does not update in realtime.
3. The output of a shebang cell that includes newlines updates in realtime.
 | closed | 2025-01-15T07:40:59Z | 2025-01-31T08:59:26Z | https://github.com/ipython/ipython/issues/14648 | [] | divyansshhh | 1 |
d2l-ai/d2l-en | pytorch | 2,155 | Acceleration by Hybridization... But not actually accelerating for PyTorch | https://d2l.ai/chapter_computational-performance/hybridize.html#acceleration-by-hybridization
In the PyTorch tab:
```python
net = get_net()
with Benchmark('Without torchscript'):
for i in range(1000): net(x)
net = torch.jit.script(net)
with Benchmark('With torchscript'):
for i in range(1000): net(x)
```
```python
Without torchscript: 1.6357 sec
With torchscript: 1.6502 sec
```
I am reading that torchscript increased the time by 0.02.
I tried it out on my end, and get;
```python
Without torchscript: 0.0580 sec
With torchscript: 0.0922 sec
```
Either I am getting the whole idea wrong, or torchscript isn't accelerating and
> As is observed in the above results, after an nn.Sequential instance is scripted using the torch.jit.script function, computing performance is improved through the use of symbolic programming.
doesn't hold.
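For anyone trying to reproduce: here is a self-contained version of the comparison (the book's `Benchmark` context manager replaced by a plain timer; the network and input sizes are my guesses at the book's setup, so treat absolute numbers with care):

```python
import time
import torch
from torch import nn

def get_net():
    # Small MLP standing in for the book's factory; sizes are assumptions.
    return nn.Sequential(nn.Linear(512, 256), nn.ReLU(),
                         nn.Linear(256, 128), nn.ReLU(),
                         nn.Linear(128, 2))

def bench(net, x, n=100):
    # Time n forward passes without autograd overhead.
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(n):
            net(x)
        return time.perf_counter() - start

x = torch.randn(1, 512)
net = get_net()
t_eager = bench(net, x)
t_script = bench(torch.jit.script(net), x)
print(f"eager: {t_eager:.4f}s, scripted: {t_script:.4f}s")
```

At these sizes the per-call Python overhead that torchscript removes is small relative to the matmul cost, which could explain why the speedup is hard to observe on CPU with a batch size of 1 — but that is speculation on my part.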
| open | 2022-06-14T02:49:06Z | 2022-06-23T17:19:38Z | https://github.com/d2l-ai/d2l-en/issues/2155 | [] | Risiamu | 1 |
google-research/bert | nlp | 1,369 | Is BERT capable of producing semantically close word embeddings for synonyms? | Hello everyone, I am currently working on my undergraduate thesis on matching job descriptions to resumes based on the contents of both. Recently, I came across the following statement by Schmitt et al., 2016: "[...] [Recruiters] and
job seekers [...] do not seem to speak the same language [...]. More precisely, CVs and job announcements tend to use different vocabularies, and same words might be used with different meanings".
Therefore, I wonder: is BERT able to create contextualized word embeddings that are semantically similar or close for synonyms, and semantically dissimilar or distant for the same words when they carry different meanings in the context of resumes and job postings?
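That is what contextual embeddings are supposed to provide, and it is easy to test empirically: extract the vector of the target word in each sentence (e.g. from BERT's last hidden layer) and compare the vectors with cosine similarity. Synonyms in similar contexts should score high, while the same surface word in a resume context vs. a job-posting context can score lower. The model-specific extraction step depends on which BERT library you use and is omitted here; the comparison itself is just:

```python
import numpy as np

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors, in [-1, 1].
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```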
Thank you very much in advance! | open | 2022-09-13T12:54:21Z | 2023-12-14T15:50:44Z | https://github.com/google-research/bert/issues/1369 | [] | niquet | 1 |
gunthercox/ChatterBot | machine-learning | 2,082 | OSError: [E050] Can't find model 'en'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory. | closed | 2020-12-08T10:11:00Z | 2025-02-17T19:23:20Z | https://github.com/gunthercox/ChatterBot/issues/2082 | [] | ajayyewale96 | 2 | |
jonaswinkler/paperless-ng | django | 1,514 | webserver container does not start, django updates failing | trying to set up follwng the docker-compose-route from readthedocs.
running on alpine-linux
createsuperuser fails.
setting PAPERLESS_ADMIN_USER and password fails.
running the migrations on webserver fails.
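Not a fix, but for context on the `database is locked` errors in the logs below: SQLite raises this when a second process tries to write while another connection holds an open write transaction on the same file — which is what happens if, for example, the webserver container is already running against the database while `docker-compose run` starts a second instance (this is my guess at the trigger here, not a confirmed diagnosis). The failure mode is easy to reproduce with two plain connections, nothing paperless-specific:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.sqlite3")
first = sqlite3.connect(path, timeout=0)   # timeout=0: fail immediately instead of waiting
second = sqlite3.connect(path, timeout=0)

first.execute("CREATE TABLE docs (id INTEGER)")
first.commit()
first.execute("INSERT INTO docs VALUES (1)")  # opens a write transaction, not committed

err = None
try:
    second.execute("CREATE TABLE migrations (id INTEGER)")  # second writer
except sqlite3.OperationalError as exc:
    err = str(exc)
print(err)  # database is locked
```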
```
docker-compose images
Container Repository Tag Image Id Size
-------------------------------------------------------------------------------------
paperless_broker_1 redis 6.0 5e9f874f2d50 111.7 MB
paperless_gotenberg_1 thecodingmachine/gotenberg latest 14e285f48574 1.436 GB
paperless_tika_1 apache/tika latest 95222fc811e0 454.1 MB
paperless_webserver_1 jonaswinkler/paperless-ng latest abaa6ea0f560 1.093 GB
```
```
alp:~/paperless-ng$ docker-compose run --rm webserver createsuperuser
Creating paperless_webserver_run ... done
Paperless-ng docker container starting...
Installing languages...
Get:1 http://security.debian.org/debian-security bullseye-security InRelease [44.1 kB]
Get:2 http://deb.debian.org/debian bullseye InRelease [116 kB]
Get:3 http://deb.debian.org/debian bullseye-updates InRelease [39.4 kB]
Get:4 http://security.debian.org/debian-security bullseye-security/main amd64 Packages [102 kB]
Get:5 http://deb.debian.org/debian bullseye/main amd64 Packages [8183 kB]
Get:6 http://deb.debian.org/debian bullseye-updates/main amd64 Packages [2592 B]
Fetched 8487 kB in 5s (1569 kB/s)
Reading package lists... Done
Package tesseract-ocr-deu already installed!
Package tesseract-ocr-eng already installed!
Creating directory ../data/index
Creating directory /tmp/paperless
Adjusting permissions of paperless files. This may take a while.
Apply database migrations...
Operations to perform:
Apply all migrations: admin, auth, authtoken, contenttypes, django_q, documents, paperless_mail, sessions
Running migrations:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 242, in _commit
return self.connection.commit()
sqlite3.OperationalError: database is locked
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/db/migrations/recorder.py", line 68, in ensure_schema
editor.create_model(self.Migration)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3/schema.py", line 36, in __exit__
super().__exit__(exc_type, exc_value, traceback)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/schema.py", line 120, in __exit__
self.atomic.__exit__(exc_type, exc_value, traceback)
File "/usr/local/lib/python3.9/site-packages/django/db/transaction.py", line 246, in __exit__
connection.commit()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 266, in commit
self._commit()
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 242, in _commit
return self.connection.commit()
File "/usr/local/lib/python3.9/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 242, in _commit
return self.connection.commit()
django.db.utils.OperationalError: database is locked
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/src/paperless/src/manage.py", line 11, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 89, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/core/management/commands/migrate.py", line 244, in handle
post_migrate_state = executor.migrate(
File "/usr/local/lib/python3.9/site-packages/django/db/migrations/executor.py", line 91, in migrate
self.recorder.ensure_schema()
File "/usr/local/lib/python3.9/site-packages/django/db/migrations/recorder.py", line 70, in ensure_schema
raise MigrationSchemaMissing("Unable to create the django_migrations table (%s)" % exc)
django.db.migrations.exceptions.MigrationSchemaMissing: Unable to create the django_migrations table (database is locked)
Search index out of date. Updating...
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 423, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.OperationalError: no such table: documents_document
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/paperless/src/manage.py", line 11, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/usr/src/paperless/src/documents/management/commands/document_index.py", line 23, in handle
index_reindex(progress_bar_disable=options['no_progress_bar'])
File "/usr/src/paperless/src/documents/tasks.py", line 29, in index_reindex
for document in tqdm.tqdm(documents, disable=progress_bar_disable):
File "/usr/local/lib/python3.9/site-packages/tqdm/std.py", line 992, in __init__
total = len(iterable)
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 262, in __len__
self._fetch_all()
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 1324, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 51, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "/usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1175, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.9/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 423, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such table: documents_document
Exception ignored in: <function tqdm.__del__ at 0x7f39cb7a6d30>
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/tqdm/std.py", line 1152, in __del__
File "/usr/local/lib/python3.9/site-packages/tqdm/std.py", line 1271, in close
AttributeError: 'tqdm' object has no attribute 'disable'
Executing management command createsuperuser
You have 83 unapplied migration(s). Your project may not work properly until you apply the migrations for app(s): admin, auth, authtoken, contenttypes, django_q, documents, paperless_mail, sessions.
Run 'python manage.py migrate' to apply them.
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 423, in execute
return Database.Cursor.execute(self, query, params)
sqlite3.OperationalError: no such table: auth_user
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/paperless/src/manage.py", line 11, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 419, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 413, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.9/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 79, in execute
return super().execute(*args, **options)
File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.9/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 100, in handle
default_username = get_default_username(database=database)
File "/usr/local/lib/python3.9/site-packages/django/contrib/auth/management/__init__.py", line 141, in get_default_username
auth_app.User._default_manager.db_manager(database).get(
File "/usr/local/lib/python3.9/site-packages/django/db/models/manager.py", line 85, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 431, in get
num = len(clone)
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 262, in __len__
self._fetch_all()
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 1324, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/usr/local/lib/python3.9/site-packages/django/db/models/query.py", line 51, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "/usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py", line 1175, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.9/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.9/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py", line 423, in execute
return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: no such table: auth_user
ERROR: 1
``` | open | 2021-12-30T14:00:00Z | 2022-01-01T13:39:36Z | https://github.com/jonaswinkler/paperless-ng/issues/1514 | [] | tspr | 0 |
TheKevJames/coveralls-python | pytest | 383 | Parallel builds in Codebuild | I am trying to run parallel builds in one repo (AWS Codebuild) and upload joined coverage to Coveralls.
```
build:
commands:
- poetry run coverage run -m pytest .
- COVERALLS_PARALLEL=true poetry run coveralls
post_build:
commands:
- curl -k "https://coveralls.io/webhook?repo_token=$COVERALLS_REPO_TOKEN" -d "payload[build_num]=$CODEBUILD_BUILD_NUMBER&payload[status]=done"
```
When running the command `curl -k "https://coveralls.io/webhook?repo_token=$COVERALLS_REPO_TOKEN" -d "payload[build_num]=$CODEBUILD_BUILD_NUMBER&payload[status]=done"`
I get the error {"error":"No build matching CI build number 19 found"}.
How can I add a parallel-builds webhook for CodeBuild? | closed | 2023-03-20T12:30:34Z | 2024-04-26T15:33:16Z | https://github.com/TheKevJames/coveralls-python/issues/383 | [] | nadiiaparsons | 1 |
SALib/SALib | numpy | 600 | Sobol sample returning same values | I am trying to run sensitivity analysis on an ODE model using the Sobol sample method. When I run
`sp.sample_sobol(N, skip_values=X).evaluate(model_wrapper).analyze_sobol()`
I just get 0 or NaN values for all the sensitivities. When I isolate the `sample_sobol()` method, i.e. `b = sp.sample_sobol(N,skip_values=X)`, the sampler just returns many identical arrays regardless of what I use for N or skip_values. Is there any workaround for this? | closed | 2023-12-15T23:24:31Z | 2024-01-05T11:24:19Z | https://github.com/SALib/SALib/issues/600 | [] | hudsonb22 | 10 |
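Regarding the identical-arrays symptom above: the Sobol' generator itself produces distinct points, so if every sampled row comes back the same, it is worth checking the problem definition — in particular bounds where lower == upper, since the raw points in [0, 1) are affinely mapped onto the bounds and a degenerate interval collapses every draw to the same value (this is a hypothesis, not a confirmed diagnosis). A quick sanity check of the raw sequence, using SciPy's generator here just to keep the example dependency-light:

```python
import numpy as np
from scipy.stats import qmc

sampler = qmc.Sobol(d=2, scramble=False)
points = sampler.random_base2(m=3)  # 2**3 = 8 points in [0, 1)^2
print(points)
```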
okken/pytest-check | pytest | 60 | 【question】How to write pytest check log to allure-Report | Hello, I have a question about pytest-check, illustrated in the screenshot below:

| closed | 2021-05-07T03:13:11Z | 2021-05-19T21:34:48Z | https://github.com/okken/pytest-check/issues/60 | [] | jamesz2011 | 1 |
pyg-team/pytorch_geometric | pytorch | 9,089 | Cat uncoalesced sparse_coo tensor. | ### 🐛 Describe the bug
I have a sparse_coo tensor in my dataset. When I want to load this tensor I get an error like this:
```
File "/home/amir/.pyenv/versions/3.9.13/lib/python3.9/site-packages/torch_geometric/data/batch.py", line 97, in from_data_list
batch, slice_dict, inc_dict = collate(
File "/home/amir/.pyenv/versions/3.9.13/lib/python3.9/site-packages/torch_geometric/data/collate.py", line 109, in collate
value, slices, incs = _collate(attr, values, data_list, stores,
File "/home/amir/.pyenv/versions/3.9.13/lib/python3.9/site-packages/torch_geometric/data/collate.py", line 232, in _collate
value = cat(values, dim=cat_dim)
File "/home/amir/.pyenv/versions/3.9.13/lib/python3.9/site-packages/torch_geometric/utils/sparse.py", line 686, in cat
return cat_coo(tensors, dim)
File "/home/amir/.pyenv/versions/3.9.13/lib/python3.9/site-packages/torch_geometric/utils/sparse.py", line 509, in cat_coo
indices.append(tensor.indices())
RuntimeError: Cannot get indices on an uncoalesced tensor, please call .coalesce() first
```
I do not want to coalesce my tensors because I will have some problems with my tensor dimensions.
I think the root cause of this issue is in the cat_coo method of torch_geometric.utils.sparse which calls tensor.indices(). I think it should be tensor._indices().
Can anyone help me with this issue?
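A minimal reproduction of the accessor difference, independent of PyG, so the suggestion above (using `_indices()` in `cat_coo`, or coalescing first) can be checked in isolation:

```python
import torch

# Duplicate coordinates make the tensor uncoalesced by construction:
# `sparse_coo_tensor` does not deduplicate or sort entries on its own.
idx = torch.tensor([[0, 0], [1, 1]])      # the entry (0, 1) appears twice
val = torch.tensor([1.0, 2.0])
t = torch.sparse_coo_tensor(idx, val, (2, 2))

print(t.is_coalesced())   # False
print(t._indices())       # the private accessor works on uncoalesced tensors

err = None
try:
    t.indices()           # the public accessor requires a coalesced tensor
except RuntimeError as exc:
    err = str(exc)
print(err)
```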
### Versions
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.18 (main, Sep 11 2023, 13:41:44) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.35
Is CUDA available: N/A
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration:
GPU 0: NVIDIA GeForce RTX 3080
GPU 1: NVIDIA GeForce RTX 3080
Nvidia driver version: 525.147.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: N/A
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 16
On-line CPU(s) list: 0-15
Vendor ID: GenuineIntel
Model name: 11th Gen Intel(R) Core(TM) i9-11900K @ 3.50GHz
CPU family: 6
Model: 167
Thread(s) per core: 2
Core(s) per socket: 8
Socket(s): 1
Stepping: 1
CPU max MHz: 5300.0000
CPU min MHz: 800.0000
BogoMIPS: 7008.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap avx512ifma clflushopt intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm md_clear flush_l1d arch_capabilities
L1d cache: 384 KiB (8 instances)
L1i cache: 256 KiB (8 instances)
L2 cache: 4 MiB (8 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-15
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] No relevant packages
[conda] No relevant packages
| closed | 2024-03-21T16:01:22Z | 2024-03-25T14:13:16Z | https://github.com/pyg-team/pytorch_geometric/issues/9089 | [
"bug"
] | amir7697 | 1 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 890 | How to implement the Real-Time-Voice-Cloning in other scripts? | This is a lovely program, but I'm searching for a way to implement the voice cloning in my other script. It should run automatically without using the toolbox, so that it can be used as a speech assistant that sounds the way I want. Is there a way to do this?
Thanks! | closed | 2021-11-13T12:23:08Z | 2021-12-28T12:33:20Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/890 | [] | Andredenise | 26 |
modin-project/modin | pandas | 6,864 | Concat is slow | Hi, I want to concatenate 5100 dataframes into one big dataframe. The memory usage of these 5100 dataframes is about 160G. However, the processing speed is really slow. I have been waiting for about 40 minutes, but it still hasn't finished. Here is my code:
```python
import modin.pandas as pd
import os
import ray
from tqdm import tqdm
ray.init()
def merge_feather_files(folder_path, suffix):
    # Get all filenames matching the suffix
files = [f for f in os.listdir(folder_path) if f.endswith(suffix + '.feather')]
    # Read all files and concatenate them
df_list = [pd.read_feather(os.path.join(folder_path, file)) for file in tqdm(files)]
merged_df = pd.concat(df_list)
    # Sort by the date column
# merged_df.sort_values(by='date', inplace=True)
merged_df.to_feather('factors_date/factors'+suffix+'.feather')
# Usage example
folder_path = 'factors_batch' # set the folder path
for i in range(1,25):
merged_df = merge_feather_files(folder_path, f'_{i}')
```
My machine has 128 cores; I am certain that they are not all being utilized for processing.
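Not an answer for the modin path itself, but as a point of comparison while debugging: a plain-pandas fallback that parallelizes only the file reads (pyarrow releases the GIL during feather I/O) and does a single concat at the end. The `reader` parameter is only there to make the sketch testable; in practice it would just be `pd.read_feather`:

```python
from concurrent.futures import ThreadPoolExecutor

import pandas as pd

def read_and_concat(paths, reader=pd.read_feather, workers=16):
    # Read all files concurrently, keeping input order, then concat once.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        frames = list(pool.map(reader, paths))
    return pd.concat(frames, ignore_index=True)
```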
| open | 2024-01-18T03:16:58Z | 2024-01-19T15:29:03Z | https://github.com/modin-project/modin/issues/6864 | [
"bug 🦗",
"Memory 💾",
"External",
"P3"
] | river7816 | 2 |
pandas-dev/pandas | pandas | 60,340 | DEPR: deprecate / warn about raising an error in __array__ when copy=False cannot be honored | NumPy 2.0 changed the behavior of the `copy` keyword in `__array__`, in particular making `copy=False` strict (raising an error when a zero-copy numpy array is not possible).
We only adjusted pandas to update the `copy` handling now in https://github.com/pandas-dev/pandas/pull/60046 (issue https://github.com/pandas-dev/pandas/issues/57739).
But that also introduced a breaking change for anyone doing `np.array(ser, copy=False)` (and who hasn't updated that when updating to numpy 2.0), which historically has always worked fine and could silently give a copy anyway.
The idea would be to still include a FutureWarning about this first before raising the error (as now in main) in pandas 3.0.
See https://github.com/pandas-dev/pandas/pull/60046#issuecomment-2457749926 for more context | closed | 2024-11-16T19:39:50Z | 2025-01-13T08:33:09Z | https://github.com/pandas-dev/pandas/issues/60340 | [
"Compat"
] | jorisvandenbossche | 12 |
mljar/mljar-supervised | scikit-learn | 793 | warning from shap | ```
[/home/piotr/.config/mljar-studio/jlab_server/lib/python3.11/site-packages/shap/plots/_beeswarm.py:738](http://localhost:35923/lab/tree/sandbox/website-mljar/.config/mljar-studio/jlab_server/lib/python3.11/site-packages/shap/plots/_beeswarm.py#line=737): FutureWarning: The NumPy global RNG was seeded by calling `np.random.seed`. In a future version this function will no longer use the global RNG. Pass `rng` explicitly to opt-in to the new behaviour and silence this warning.
``` | open | 2025-03-12T12:26:47Z | 2025-03-12T12:26:47Z | https://github.com/mljar/mljar-supervised/issues/793 | [] | pplonski | 0 |
MilesCranmer/PySR | scikit-learn | 816 | [BUG]: Loss saved in hall of fame is not identical to loss recalculated based on prediction | Hello,
I have identified an issue with some PySR regressions that I ran. The loss saved in the hall of fame is not identical to the loss calculated with the same dataset and the same custom loss function based on the PySR prediction.
Have similar problems occurred before or do you have any idea, what might be the reason for this?
As an additional test, i was thinking about trying to recalculate the loss of the individual inside PySR, the be able to reproduce the potentially faulty loss-calculation. Is there a way to do so?
Best regards and thanks in advance!
Jupyter Notebook Code:
```
import pickle
import numpy as np
import pandas as pd
from pysr import jl
from pysr.julia_helpers import jl_array
```
```
# Import object of class implementing regression with PySR (pysr_object.regression_object is PySRRegressor)
with open('cluster_2_round_0_runs_4_conv_suc_pysr_object_0103.pkl', 'rb') as file:
pysr_object = pickle.load(file)
```
```
# Get loss of 22. individual for 2. y-variable
loss_in_pysr = pysr_object.regression_object.equations_[2].iloc[22].loss
```
```
# Training dataset used for regression
training_dataset = pysr_object.dataset.dataset[pysr_object.training_data_bool].reset_index(drop=True)
# Predict y-variables for training data with the 22. individual for the 2. y-variable
y_pred= pd.DataFrame(
data=pysr_object.regression_object.predict(
training_dataset[pysr_object.x_variables],
index = [0,0,22,0,0]
),
columns=pysr_object.y_variables
)
# Get prediction and actual data for 2. y-variable
y2_pred = y_pred[pysr_object.y_variables[2]]
y2_act = training_dataset[pysr_object.y_variables[2]]
```
```
# Create julia loss function string based on dataset
def make_fitness_function_v2_5on(y):
# For code see attached PDF
```
```
# Create julia custom loss function string, apart from in- and outputs identical to the one implemented in pysr_object (see below)
fitness_function = make_fitness_function_v2_5on(training_dataset[pysr_object.y_variables])
print(fitness_function)
#Printed:
function custom_loss_function(y_pred, y)
if isapprox(y[1],2.2565436)
p5 = 1.2804586
p80 = 15.113608
a_1 = 0.6248880798745418
b_1 = 1.9206878219999999
a_2 = 2
b_3 = -6.09607886912224
a_3 = 18.0350585617555
c_3 = -9.43495030694149
elseif isapprox(y[1],0.04650065)
p5 = 0.0040257196
p80 = 1.8914138
a_1 = 11.144567754841631
b_1 = 0.006038579739964286
a_2 = 2
b_3 = 0.0400789096218153
a_3 = 3.86298538267220
c_3 = 1.23985086650155
elseif isapprox(y[1],0.6221858)
p5 = 0.46299943
p80 = 13.051656
a_1 = 1.0391895552455737
b_1 = 0.694499148
a_2 = 2
b_3 = 4.05385061247809
a_3 = 34.2110131049562
c_3 = -71.0354535190814
elseif isapprox(y[1],-0.45914933)
p5 = 0.41124585
p80 = 5.6052365
a_1 = 1.1026412343064227
b_1 = 0.616868799645
a_2 = 2
b_3 = 0.514240734249554
a_3 = 12.2389548644991
c_3 = -10.9601082467183
elseif isapprox(y[1],0.8109809)
p5 = 0.00813762
p80 = 17.448
a_1 = 7.838560195225847
b_1 = 0.012206430080501547
a_2 = 2
b_3 = 6.53098971058287
a_3 = 47.9579776388101
c_3 = -117.475032351869
end
y_border = zeros(length(y))
y_border[abs.(y).<p5] .= (a_1 * y[abs.(y).<p5]).^2 .+ b_1
y_border[(abs.(y).>=p5).&(abs.(y).<=p80)] .= a_2 * abs.(y[(abs.(y).>=p5).&(abs.(y).<=p80)])
y_border[abs.(y).>p80] .= a_3 * log.(abs.(y[abs.(y).>p80]) .+ b_3) .+ c_3
custom_error = (sum( ( (1 ./ y_border) .* (y_pred .- y) ) .^2 ) )/(length(y))
return custom_error
end
```
```
# For comparison: loss function of pysr_object
loss_function_pysr = pysr_object.regression_object.loss_function
print(loss_function_pysr)
# Printed:
function custom_loss_function(tree, dataset::Dataset{T,L}, options::Options, idx=nothing)::L where {T,L}
if isapprox(dataset.y[1],2.2565436)
p5 = 1.2804586
p80 = 15.113608
a_1 = 0.6248880798745418
b_1 = 1.9206878219999999
a_2 = 2
b_3 = -6.09607886912224
a_3 = 18.0350585617555
c_3 = -9.43495030694149
elseif isapprox(dataset.y[1],0.04650065)
p5 = 0.0040257196
p80 = 1.8914138
a_1 = 11.144567754841631
b_1 = 0.006038579739964286
a_2 = 2
b_3 = 0.0400789096218153
a_3 = 3.86298538267220
c_3 = 1.23985086650155
elseif isapprox(dataset.y[1],0.6221858)
p5 = 0.46299943
p80 = 13.051656
a_1 = 1.0391895552455737
b_1 = 0.694499148
a_2 = 2
b_3 = 4.05385061247809
a_3 = 34.2110131049562
c_3 = -71.0354535190814
elseif isapprox(dataset.y[1],-0.45914933)
p5 = 0.41124585
p80 = 5.6052365
a_1 = 1.1026412343064227
b_1 = 0.616868799645
a_2 = 2
b_3 = 0.514240734249554
a_3 = 12.2389548644991
c_3 = -10.9601082467183
elseif isapprox(dataset.y[1],0.8109809)
p5 = 0.00813762
p80 = 17.448
a_1 = 7.838560195225847
b_1 = 0.012206430080501547
a_2 = 2
b_3 = 6.53098971058287
a_3 = 47.9579776388101
c_3 = -117.475032351869
end
if idx == nothing
y_pred, complete = eval_tree_array(tree, dataset.X, options)
y = dataset.y
else
y_pred, complete = eval_tree_array(tree, dataset.X[:,idx], options)
y = dataset.y[idx]
end
y_border = zeros(length(y))
y_border[abs.(y).<p5] .= (a_1 * y[abs.(y).<p5]).^2 .+ b_1
y_border[(abs.(y).>=p5).&(abs.(y).<=p80)] .= a_2 * abs.(y[(abs.(y).>=p5).&(abs.(y).<=p80)])
y_border[abs.(y).>p80] .= a_3 * log.(abs.(y[abs.(y).>p80]) .+ b_3) .+ c_3
if !complete
return L(Inf)
end
custom_error = (sum( ( (1 ./ y_border) .* (y_pred .- y) ) .^2 ) )/(length(y))
return custom_error
end
```
```
# Translating elements to julia
fitness_function_jl = jl.seval(fitness_function)
y2_pred_jl = jl_array(np.array(y2_pred, dtype=np.float32).T)
y2_act_jl = jl_array(np.array(y2_act, dtype=np.float32).T)
loss_recalculated = fitness_function_jl(y2_pred_jl, y2_act_jl)
```
```
print('Loss saved in PySR: ' + str(loss_in_pysr))
print('Loss recalculated: ' + str(loss_recalculated))
# Printed:
Loss saved in PySR: 0.01720243
Loss recalculated: 0.022473989882165576
```
-> Loss recalculated and loss calculated in PySR are not identical!
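For debugging a mismatch like this, it can help to recompute the weighted error outside Julia. Below is a small, dependency-free Python sketch of the same piecewise weighting; the function and parameter names mirror the Julia code, but the parameter values in the demo are placeholders rather than any of the issue's fitted regimes:

```python
import math

def piecewise_weight(y_val, p5, p80, a1, b1, a2, b3, a3, c3):
    """y_border for one target value, mirroring the three regimes in the Julia loss."""
    y_abs = abs(y_val)
    if y_abs < p5:
        return (a1 * y_val) ** 2 + b1        # quadratic regime for small |y|
    if y_abs <= p80:
        return a2 * y_abs                    # linear regime
    return a3 * math.log(y_abs + b3) + c3    # logarithmic regime for large |y|

def weighted_mse(y_pred, y, params):
    """Mean of ((y_pred - y) / y_border)^2, as in custom_loss_function."""
    total = sum(((yp - yt) / piecewise_weight(yt, *params)) ** 2
                for yp, yt in zip(y_pred, y))
    return total / len(y)

# Placeholder parameters (p5, p80, a1, b1, a2, b3, a3, c3), not taken from the issue:
params = (1.0, 10.0, 2.0, 1.0, 2.0, 0.0, 1.0, 0.0)
err = weighted_mse([1.0], [0.5], params)     # weight = (2*0.5)^2 + 1 = 2, err = 0.0625
```

Running both of the issue's loss variants through the same standalone recomputation would make it easier to tell whether the gap comes from the loss formula itself or from how PySR evaluates it (for example, the `idx` batching branch in the printed `custom_loss_function`).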
### Version
1.3.1
### Operating System
Windows
### Package Manager
Conda
### Interface
Other (specify below)
### Relevant log output
```shell
```
### Extra Info
[loss_difference_pysr.pdf](https://github.com/user-attachments/files/18537892/loss_difference_pysr.pdf) | open | 2025-01-24T15:06:38Z | 2025-02-05T11:48:35Z | https://github.com/MilesCranmer/PySR/issues/816 | [
"bug"
] | BrotherHa | 13 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1,257 | I have no idea how to install it | This isn't so much an issue with the program as it is with me because I have never done anything like this before, but all of the programs online are "free", and then charge you to input your own voice, or write your own text, or make an account, etc. I downloaded and unzipped the zip, but I can't find any main file to run in python, and whenever I open any of those files in python it just closes immediately. When I open python normally it opens Windows PowerShell(is this like, the default python thing??? i have no idea). I'm using Windows 11. Some one-on-one help would be really useful, and maybe you also could add a document on the wiki for how to install and open the toolbox.
Thanks, Ember | open | 2023-09-26T02:26:01Z | 2023-10-18T19:46:40Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1257 | [] | TuffCat | 2 |
Kanaries/pygwalker | plotly | 278 | [bug] Incompatible version error in Kaggle | ```bash
!pip install -q pygwalker
```
Error
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
ibis-framework 6.2.0 requires sqlglot<18,>=10.4.3, but you have sqlglot 18.16.0 which is incompatible.
``` | closed | 2023-10-21T04:45:17Z | 2023-12-12T07:29:41Z | https://github.com/Kanaries/pygwalker/issues/278 | [
"bug"
] | ObservedObserver | 1 |
influxdata/influxdb-client-python | jupyter | 357 | Implement updating a bucket | __Proposal:__
Currently there is no way to update a bucket with the BucketsApi. With [the CLI](https://docs.influxdata.com/influxdb/v2.0/reference/cli/influx/bucket/update/) it is possible to update e.g. the name or the retention period of a bucket.
__Current behavior:__
no way
__Desired behavior:__
Being able to change the name of the bucket.
__Use case:__
Why is this important (helps with prioritizing requests)?
In my use case of InfluxDB, I always need to delete all data and write it again in order to have a complete dataset. Currently I first delete and then write the data again. A workflow that would result in less downtime and less incomplete data during writing would be:
1. create & write to 'temp' bucket
2. rename 'production' bucket to 'old'
3. rename 'temp' bucket to 'production'
4. delete 'old' bucket
That way I could write data to a separate bucket for as long as needed while still being able to use the main bucket, and quickly swap them once the writing is done.
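The four steps above can be sketched abstractly. The dict below just stands in for the server's bucket namespace; with the real client, steps 1 and 4 would map to `BucketsApi.create_bucket` and `BucketsApi.delete_bucket`, while the rename in steps 2 and 3 is exactly the missing update operation this issue asks for:

```python
def swap_buckets(buckets, new_data):
    """Simulate the zero-downtime swap; `buckets` stands in for the bucket namespace."""
    buckets["temp"] = new_data                      # 1. create & write to 'temp'
    buckets["old"] = buckets.pop("production")      # 2. rename 'production' -> 'old'
    buckets["production"] = buckets.pop("temp")     # 3. rename 'temp' -> 'production'
    del buckets["old"]                              # 4. delete 'old'
    return buckets

buckets = {"production": ["stale rows"]}
swap_buckets(buckets, ["fresh rows"])               # buckets == {"production": ["fresh rows"]}
```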
"enhancement"
] | SimonMayerhofer | 2 |
freqtrade/freqtrade | python | 10,781 | Ццу | <!--
Have you searched for similar issues before posting it?
If you have discovered a bug in the bot, please [search the issue tracker](https://github.com/freqtrade/freqtrade/issues?q=is%3Aissue).
If it hasn't been reported, please create a new issue.
Please do not use bug reports to request new features.
-->
## Describe your environment
* Operating system: ____
* Python Version: _____ (`python -V`)
* CCXT version: _____ (`pip freeze | grep ccxt`)
* Freqtrade Version: ____ (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
Note: All issues other than enhancement requests will be closed without further comment if the above template is deleted or not filled out.
## Describe the problem:
*Explain the problem you have encountered*
### Steps to reproduce:
1. _____
2. _____
3. _____
### Observed Results:
* What happened?
* What did you expect to happen?
### Relevant code exceptions or logs
Note: Please copy/paste text of the messages, no screenshots of logs please.
```
// paste your log here
```
| closed | 2024-10-12T18:11:27Z | 2024-10-12T18:15:52Z | https://github.com/freqtrade/freqtrade/issues/10781 | [
"Wont fix / Not a bug"
] | Zidane115 | 1 |
biolab/orange3 | scikit-learn | 6,880 | TypeError: can't compare offset-naive and offset-aware datetimes | <!--
Thanks for taking the time to report a bug!
If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3
To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability.
-->
**What's wrong?**
<!-- Be specific, clear, and concise. Include screenshots if relevant. -->
<!-- If you're getting an error message, copy it, and enclose it with three backticks (```). -->
When I want to download a plug-in and click Add-ons to open the plug-in loading page, I encounter the following problem:
```
Traceback (most recent call last):
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\orangecanvas\application\addons.py", line 510, in <lambda>
lambda config=config: (config, list_available_versions(config)),
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\orangecanvas\application\utils\addons.py", line 377, in list_available_versions
response = session.get(PYPI_API_JSON.format(name=p))
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\requests_cache\session.py", line 102, in get
return self.request('GET', url, params=params, **kwargs)
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\requests_cache\session.py", line 158, in request
return super().request(method, url, *args, headers=headers, **kwargs) # type: ignore
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\requests_cache\session.py", line 194, in send
actions.update_from_cached_response(cached_response, self.cache.create_key, **kwargs)
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\requests_cache\policy\actions.py", line 184, in update_from_cached_response
usable_response = self.is_usable(cached_response)
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\requests_cache\policy\actions.py", line 152, in is_usable
or (cached_response.is_expired and self._stale_while_revalidate is True)
File "D:\BaiduNetdiskDownload\Orange3_zh\Orange3-3.36.2\Orange\lib\site-packages\requests_cache\models\response.py", line 149, in is_expired
return self.expires is not None and datetime.utcnow() >= self.expires
TypeError: can't compare offset-naive and offset-aware datetimes
```

**How can we reproduce the problem?**
<!-- Upload a zip with the .ows file and data. -->
<!-- Describe the steps (open this widget, click there, then add this...) -->
After clicking Add-ons in the settings of version 3.36.2, a pop-up window shows the error above while loading the plug-in list, and plug-ins that have not been downloaded cannot be loaded.
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system:windows11 22631.4037
- Orange version: 3.36.2
- How you installed Orange: downloaded the 3.36.2 zip package from the Orange3 official website and extracted it locally.
| closed | 2024-08-23T03:11:57Z | 2025-01-17T09:29:13Z | https://github.com/biolab/orange3/issues/6880 | [
"bug report"
] | TonyEinstein | 3 |
ray-project/ray | deep-learning | 50,992 | [Serve] exceptions raised by request timeout are inconsistent | ### What happened + What you expected to happen
**UPDATE**
Better repro: https://github.com/ray-project/ray/issues/50992#issuecomment-2718209358
If a request times out or a user disconnects while a Deployment (either parent or child) is initializing, Serve will raise a `ray.serve.exceptions.RequestCancelledError`. After the Deployment has been init-ed, however, it appears that requests are cancelled by Serve cancelling the underlying `asyncio.Task` which results in the propagation of an empty `asyncio.CancelledError`.
This issue/edge case arose during load testing apps with Child Deployments that have slow-ish (~30s) startup times and `min_replicas=0`.
**repro**
```Python
import asyncio
import time
import requests
from ray import serve
REQUEST_TIMEOUT = 1
@serve.deployment(autoscaling_config={"min_replicas": 0})
class SlowDeployment:
def __init__(self) -> None:
time.sleep(REQUEST_TIMEOUT + 1)
async def __call__(self) -> str:
# this will *always* timeout
try:
await asyncio.sleep(REQUEST_TIMEOUT + 1)
except asyncio.CancelledError:
print(">>>>> asyncio.CancelledError")
return "ok"
serve.start(http_options={"request_timeout_s": REQUEST_TIMEOUT})
app = serve.run(SlowDeployment.bind())
# timeout during Child startup --> raises a `ray.serve.exceptions.RequestCancelledError`
print("\n\n")
print("-" * 50)
print(">> first request")
requests.get("http://localhost:8000/")
print("-" * 50)
# let SlowDeployment fully start
time.sleep(1)
print("\n\n")
# timeout after Child started --> raises asyncio.CancelledError
print("-" * 50)
print(">> second request")
requests.get("http://localhost:8000/")
```
### Versions / Dependencies
```bash
ray==2.41.0
Python==3.12
```
**console output**
```bash
--------------------------------------------------
>> first request
(ServeController pid=30811) INFO 2025-02-28 12:23:37,260 controller 30811 -- Upscaling Deployment(name='SlowDeployment', app='default') from 0 to 1 replicas. Current ongoing requests: 1.00, current running replicas: 0.
(ServeController pid=30811) INFO 2025-02-28 12:23:37,260 controller 30811 -- Adding 1 replica to Deployment(name='SlowDeployment', app='default').
--------------------------------------------------
(ProxyActor pid=30805) WARNING 2025-02-28 12:23:38,194 proxy 127.0.0.1 b04c2229-cdc9-49be-bfa1-f0828df75644 -- Request timed out after 1.0s.
(ProxyActor pid=30805) Task exception was never retrieved
(ProxyActor pid=30805) future: <Task finished name='Task-21' coro=<ProxyResponseGenerator._await_response_anext() done, defined at .venv/lib/python3.12/site-packages/ray/serve/_private/proxy_response_generator.py:115> exception=RequestCancelledError('b04c2229-cdc9-49be-bfa1-f0828df75644')>
(ProxyActor pid=30805) Traceback (most recent call last):
(ProxyActor pid=30805) File ".venv/lib/python3.12/site-packages/ray/serve/_private/proxy_response_generator.py", line 116, in _await_response_anext
(ProxyActor pid=30805) return await self._response.__anext__()
(ProxyActor pid=30805) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(ProxyActor pid=30805) File ".venv/lib/python3.12/site-packages/ray/serve/handle.py", line 566, in __anext__
(ProxyActor pid=30805) replica_result = await self._fetch_future_result_async()
(ProxyActor pid=30805) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(ProxyActor pid=30805) File ".venv/lib/python3.12/site-packages/ray/serve/handle.py", line 287, in _fetch_future_result_async
(ProxyActor pid=30805) raise RequestCancelledError(self.request_id) from None
(ProxyActor pid=30805) ray.serve.exceptions.RequestCancelledError: Request b04c2229-cdc9-49be-bfa1-f0828df75644 was cancelled.
--------------------------------------------------
>> second request
(ProxyActor pid=30805) WARNING 2025-02-28 12:23:40,206 proxy 127.0.0.1 cdc628a9-a6dc-41a8-b536-24a12e285eb6 -- Request timed out after 1.0s.
(ServeReplica:default:SlowDeployment pid=30806) INFO 2025-02-28 12:23:40,207 default_SlowDeployment qusrv7tw cdc628a9-a6dc-41a8-b536-24a12e285eb6 -- GET / CANCELLED 396.1ms
(ServeReplica:default:SlowDeployment pid=30806) >>>>> asyncio.CancelledError
```
### Reproduction script
see above.
### Issue Severity
Low: It annoys or frustrates me. | open | 2025-02-28T17:17:55Z | 2025-03-13T21:05:39Z | https://github.com/ray-project/ray/issues/50992 | [
"bug",
"triage",
"serve"
] | paul-twelvelabs | 9 |
itamarst/eliot | numpy | 6 | Add standard out destination for logging output | For simple examples/scripts it would be useful to have a built-in "log to stdout" destination.
| closed | 2014-04-14T18:08:09Z | 2018-09-22T20:59:10Z | https://github.com/itamarst/eliot/issues/6 | [
"API enhancement"
] | itamarst | 1 |
docarray/docarray | fastapi | 1,532 | Write blog to use DocArray with ImageBind from facebook or even integrate docarray there | closed | 2023-05-12T10:56:16Z | 2023-05-24T11:07:49Z | https://github.com/docarray/docarray/issues/1532 | [] | JoanFM | 0 | |
sktime/pytorch-forecasting | pandas | 1,336 | Decoding to get the original x and y back | This is probably a super simple issue but I could not find it in the doc. I want to descale my x (using some scalers on real known variables) and get the original classes for target prediction (those were strings, changed in 0 1 2 by the dataset). In the code below, I would simple like to transform the encoded x and y back into their original values.
```
trial_dataset = TimeSeriesDataSet.from_dataset(training_dataset,
dataset, min_prediction_idx=validation_cutoff + 1)
x, y = next(iter(trial_dataset))
```
I can see the scalers in `.get_parameters()`, but surely there must be a way to automatically reverse all the encoding so I don't need to do it one by one?
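What I am after is essentially an `inverse_transform` round trip per column. A tiny stand-in class illustrates the concept (this is not pytorch-forecasting's actual `NaNLabelEncoder`, just the pattern):

```python
class TinyLabelEncoder:
    """Minimal stand-in for a per-column label encoder with an inverse_transform."""
    def __init__(self, classes):
        self.classes_ = list(classes)
        self._to_code = {c: i for i, c in enumerate(self.classes_)}

    def transform(self, values):
        return [self._to_code[v] for v in values]

    def inverse_transform(self, codes):
        return [self.classes_[c] for c in codes]

enc = TinyLabelEncoder(["down", "flat", "up"])
codes = enc.transform(["up", "down", "flat"])   # [2, 0, 1]
labels = enc.inverse_transform(codes)           # ['up', 'down', 'flat']
```

With pytorch-forecasting itself I would expect something like `training_dataset.get_parameters()["categorical_encoders"]["y"].inverse_transform(y_codes)` and the analogous `inverse_transform` on each entry in `scalers`, but I have not verified those exact attribute names.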
In case it is helpful, this is my original dataset:
```
max_prediction_length = 1 # maximum prediction/decoder length, is this timesteps in future used to predict? yes, predict one value in the future.
max_encoder_length = 10
y = np.array(dataset['y'])
training_cutoff = (round(dataset['idx'].max() * 0.7)) - max_prediction_length # use 70 percent for training
training_dataset = TimeSeriesDataSet(
dataset[lambda x: x['idx'] <= training_cutoff],
time_idx='idx',
target="y",
group_ids=["filename"], # groups different time series
min_encoder_length=max_encoder_length,
max_encoder_length=max_encoder_length,
min_prediction_length=1,
max_prediction_length=max_prediction_length,
static_categoricals=[],
static_reals=[],
time_varying_known_categoricals=[], # if time shifted.
time_varying_known_reals=['ATR', 'Open', 'High', 'Low', 'Close', 'Volume', 'CC', 'Close_pct' , 'Volume_pct'], ## add other variables later on
time_varying_unknown_categoricals=['y'],
time_varying_unknown_reals=[], #list of continuous variables that change over time and are not know in the future
categorical_encoders={'filename': pytorch_forecasting.data.encoders.NaNLabelEncoder(add_nan=True), 'y': pytorch_forecasting.data.encoders.NaNLabelEncoder(add_nan=False)}, ## how are nans processed? there should be none.
# scalers= {"Open": None, "idx": None, 'Volume_pct':None}, #StandardScaler, Defaults to sklearn’s StandardScaler()
target_normalizer=pytorch_forecasting.data.encoders.NaNLabelEncoder(),
add_relative_time_idx=False,
add_target_scales=False, ## what is this?
add_encoder_length=False,
allow_missing_timesteps=False, # does not allow idx missing
predict_mode = False #To get only last output
)
```
| open | 2023-06-19T08:00:30Z | 2023-09-11T01:04:08Z | https://github.com/sktime/pytorch-forecasting/issues/1336 | [] | dorienh | 2 |
nvbn/thefuck | python | 694 | Strange proposals for docker ps | I wrote `docker ps ,a` and I get `docker ps -a` as a solution and `docker ps a-a` .... I think both don't work. Maybe check if the option is behind the `-`
The Fuck 3.23 using Python 3.6.2
Docker version 17.07.0-ce, build 87847530f7
Shell: zsh
System: Antergos/Arch
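A hypothetical sketch of the suggested check (this is not thefuck's actual rule API, just the string transformation): only propose a dash when the stray punctuation sits exactly where the dash belongs.

```python
import re

def fix_mistyped_dash(command):
    """Replace a single stray ',', '.' or ';' directly before an option word with '-'."""
    return re.sub(r"(?<=\s)[,.;](?=[A-Za-z]+\b)", "-", command)

fix_mistyped_dash("docker ps ,a")   # 'docker ps -a'
fix_mistyped_dash("docker ps -a")   # unchanged: 'docker ps -a'
```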
Debug output:
```
DEBUG: Run with settings: {'alter_history': True,
'debug': True,
'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},
'exclude_rules': [],
'history_limit': None,
'instant_mode': False,
'no_colors': False,
'priority': {},
'repeat': False,
'require_confirmation': True,
'rules': [<const: All rules enabled>],
'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],
'user_dir': PosixPath('/home/mdalheimer/.config/thefuck'),
'wait_command': 3,
'wait_slow_command': 15}
DEBUG: Received stdout:
DEBUG: Received stderr: "docker ps" accepts no argument(s).
See 'docker ps --help'.
Usage: docker ps [OPTIONS]
List containers
DEBUG: Call: docker ps .a; with env: {'XDG_SEAT_PATH': '/org/freedesktop/DisplayManager/Seat0', 'LANG': 'C', 'DISPLAY': ':0', 'SHLVL': '1', 'LOGNAME': 'mdalheimer', 'XDG_VTNR': '1', 'PWD': '/opt/workspace/Petsi', 'HG': '/usr/bin/hg', 'MOZ_PLUGIN_PATH': '/usr/lib/mozilla/plugins', 'XAUTHORITY': '/home/mdalheimer/.Xauthority', 'XDG_SESSION_CLASS': 'user', 'GIO_LAUNCHED_DESKTOP_FILE': '/usr/share/applications/terminator.desktop', 'COLORTERM': 'truecolor', 'XDG_SESSION_ID': 'c2', 'DESKTOP_SESSION': '/usr/share/xsessions/gnome', 'XDG_SESSION_DESKTOP': 'GNOME', 'GNOME_DESKTOP_SESSION_ID': 'this-is-deprecated', 'VTE_VERSION': '4803', 'SESSION_MANAGER': 'local/marvin-computer:@/tmp/.ICE-unix/973,unix/marvin-computer:/tmp/.ICE-unix/973', 'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1000/bus', 'TERMINATOR_UUID': 'urn:uuid:e872deb6-766d-4ad9-a17c-6e764e742235', 'MAIL': '/var/spool/mail/mdalheimer', 'XDG_DATA_DIRS': '/home/mdalheimer/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share', 'GJS_DEBUG_TOPICS': 'JS ERROR;JS LOG', 'TERMINATOR_DBUS_NAME': 'net.tenshu.Terminator20x1a6021154d881c', 'XDG_MENU_PREFIX': 'gnome-', 'TERMINATOR_DBUS_PATH': '/net/tenshu/Terminator2', 'SHELL': '/usr/bin/zsh', 'OLDPWD': '/home/mdalheimer', 'XDG_SESSION_TYPE': 'x11', 'GJS_DEBUG_OUTPUT': 'stderr', 'TERM': 'xterm-256color', 'GTK_MODULES': 'canberra-gtk-module', 'XDG_CURRENT_DESKTOP': 'GNOME', 'SSH_AUTH_SOCK': '/run/user/1000/keyring/ssh', 'EDITOR': '/usr/bin/nano', 'LC_COLLATE': 'de_DE.UTF-8', 'HOME': '/home/mdalheimer', 'BROWSER': '/usr/bin/chromium', 'XDG_SEAT': 'seat0', 'XDG_RUNTIME_DIR': '/run/user/1000', 'XDG_SESSION_PATH': '/org/freedesktop/DisplayManager/Session1', 'USER': 'mdalheimer', 'PATH': '/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl:/home/mdalheimer/.config/composer/vendor/bin:/home/mdalheimer/.node/bin', 'GIO_LAUNCHED_DESKTOP_FILE_PID': '10512', 'ZSH': '/home/mdalheimer/.oh-my-zsh', 
'UPDATE_ZSH_DAYS': '7', 'PAGER': 'less', 'LESS': '-R', 'LC_CTYPE': 'de_DE.UTF-8', 'LSCOLORS': 'Gxfxcxdxbxegedabagacad', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'ZSH_TMUX_TERM': 'screen-256color', '_ZSH_TMUX_FIXED_CONFIG': '/home/mdalheimer/.oh-my-zsh/plugins/tmux/tmux.only.conf', 'MANPATH': ':/usr/local/man:/home/mdalheimer/.node/share/man', 'NODE_PATH': ':/home/mdalheimer/.node/lib/node_modules', 'LC_ALL': 'C', 'HISTCONTROL': 'ignorespace', 'THEFUCK_DEBUG': 'true', '_': '/usr/bin/thefuck', 'GIT_TRACE': '1'}; is 
slow: took: 0:00:00.028029
DEBUG: Importing rule: ag_literal; took: 0:00:00.000732
DEBUG: Importing rule: apt_get; took: 0:00:00.001224
DEBUG: Importing rule: apt_get_search; took: 0:00:00.000489
DEBUG: Importing rule: apt_invalid_operation; took: 0:00:00.001605
DEBUG: Importing rule: aws_cli; took: 0:00:00.000543
DEBUG: Importing rule: brew_install; took: 0:00:00.000651
DEBUG: Importing rule: brew_link; took: 0:00:00.000553
DEBUG: Importing rule: brew_uninstall; took: 0:00:00.000514
DEBUG: Importing rule: brew_unknown_command; took: 0:00:00.000260
DEBUG: Importing rule: brew_update_formula; took: 0:00:00.000513
DEBUG: Importing rule: brew_upgrade; took: 0:00:00.000252
DEBUG: Importing rule: cargo; took: 0:00:00.000192
DEBUG: Importing rule: cargo_no_command; took: 0:00:00.000521
DEBUG: Importing rule: cd_correction; took: 0:00:00.002231
DEBUG: Importing rule: cd_mkdir; took: 0:00:00.000865
DEBUG: Importing rule: cd_parent; took: 0:00:00.000197
DEBUG: Importing rule: chmod_x; took: 0:00:00.000270
DEBUG: Importing rule: composer_not_command; took: 0:00:00.000637
DEBUG: Importing rule: cp_omitting_directory; took: 0:00:00.000711
DEBUG: Importing rule: cpp11; took: 0:00:00.000501
DEBUG: Importing rule: dirty_untar; took: 0:00:00.002459
DEBUG: Importing rule: dirty_unzip; took: 0:00:00.002836
DEBUG: Importing rule: django_south_ghost; took: 0:00:00.000343
DEBUG: Importing rule: django_south_merge; took: 0:00:00.000281
DEBUG: Importing rule: docker_not_command; took: 0:00:00.001258
DEBUG: Importing rule: dry; took: 0:00:00.000281
DEBUG: Importing rule: fab_command_not_found; took: 0:00:00.001118
DEBUG: Importing rule: fix_alt_space; took: 0:00:00.000794
DEBUG: Importing rule: fix_file; took: 0:00:00.009964
DEBUG: Importing rule: gem_unknown_command; took: 0:00:00.000646
DEBUG: Importing rule: git_add; took: 0:00:00.000938
DEBUG: Importing rule: git_add_force; took: 0:00:00.000525
DEBUG: Importing rule: git_bisect_usage; took: 0:00:00.000538
DEBUG: Importing rule: git_branch_delete; took: 0:00:00.000573
DEBUG: Importing rule: git_branch_exists; took: 0:00:00.000805
DEBUG: Importing rule: git_branch_list; took: 0:00:00.000564
DEBUG: Importing rule: git_checkout; took: 0:00:00.000560
DEBUG: Importing rule: git_diff_no_index; took: 0:00:00.000578
DEBUG: Importing rule: git_diff_staged; took: 0:00:00.000538
DEBUG: Importing rule: git_fix_stash; took: 0:00:00.000667
DEBUG: Importing rule: git_flag_after_filename; took: 0:00:00.000505
DEBUG: Importing rule: git_help_aliased; took: 0:00:00.000497
DEBUG: Importing rule: git_not_command; took: 0:00:00.000507
DEBUG: Importing rule: git_pull; took: 0:00:00.000505
DEBUG: Importing rule: git_pull_clone; took: 0:00:00.000501
DEBUG: Importing rule: git_pull_uncommitted_changes; took: 0:00:00.000514
DEBUG: Importing rule: git_push; took: 0:00:00.000526
DEBUG: Importing rule: git_push_force; took: 0:00:00.000496
DEBUG: Importing rule: git_push_pull; took: 0:00:00.000510
DEBUG: Importing rule: git_push_without_commits; took: 0:00:00.000582
DEBUG: Importing rule: git_rebase_merge_dir; took: 0:00:00.000525
DEBUG: Importing rule: git_rebase_no_changes; took: 0:00:00.000348
DEBUG: Importing rule: git_remote_seturl_add; took: 0:00:00.000353
DEBUG: Importing rule: git_rm_local_modifications; took: 0:00:00.000509
DEBUG: Importing rule: git_rm_recursive; took: 0:00:00.000507
DEBUG: Importing rule: git_rm_staged; took: 0:00:00.000494
DEBUG: Importing rule: git_stash; took: 0:00:00.000499
DEBUG: Importing rule: git_stash_pop; took: 0:00:00.000501
DEBUG: Importing rule: git_tag_force; took: 0:00:00.000569
DEBUG: Importing rule: git_two_dashes; took: 0:00:00.000506
DEBUG: Importing rule: go_run; took: 0:00:00.000455
DEBUG: Importing rule: gradle_no_task; took: 0:00:00.001093
DEBUG: Importing rule: gradle_wrapper; took: 0:00:00.000425
DEBUG: Importing rule: grep_arguments_order; took: 0:00:00.000497
DEBUG: Importing rule: grep_recursive; took: 0:00:00.000415
DEBUG: Importing rule: grunt_task_not_found; took: 0:00:00.000751
DEBUG: Importing rule: gulp_not_task; took: 0:00:00.000437
DEBUG: Importing rule: has_exists_script; took: 0:00:00.000472
DEBUG: Importing rule: heroku_not_command; took: 0:00:00.000428
DEBUG: Importing rule: history; took: 0:00:00.000171
DEBUG: Importing rule: hostscli; took: 0:00:00.000687
DEBUG: Importing rule: ifconfig_device_not_found; took: 0:00:00.000558
DEBUG: Importing rule: java; took: 0:00:00.000449
DEBUG: Importing rule: javac; took: 0:00:00.000408
DEBUG: Importing rule: lein_not_task; took: 0:00:00.000702
DEBUG: Importing rule: ln_no_hard_link; took: 0:00:00.000457
DEBUG: Importing rule: ln_s_order; took: 0:00:00.000461
DEBUG: Importing rule: ls_all; took: 0:00:00.000427
DEBUG: Importing rule: ls_lah; took: 0:00:00.000412
DEBUG: Importing rule: man; took: 0:00:00.000504
DEBUG: Importing rule: man_no_space; took: 0:00:00.000176
DEBUG: Importing rule: mercurial; took: 0:00:00.000422
DEBUG: Importing rule: missing_space_before_subcommand; took: 0:00:00.000184
DEBUG: Importing rule: mkdir_p; took: 0:00:00.000454
DEBUG: Importing rule: mvn_no_command; took: 0:00:00.000409
DEBUG: Importing rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000427
DEBUG: Importing rule: no_command; took: 0:00:00.000475
DEBUG: Importing rule: no_such_file; took: 0:00:00.000174
DEBUG: Importing rule: npm_missing_script; took: 0:00:00.000850
DEBUG: Importing rule: npm_run_script; took: 0:00:00.000419
DEBUG: Importing rule: npm_wrong_command; took: 0:00:00.000857
DEBUG: Importing rule: open; took: 0:00:00.000779
DEBUG: Importing rule: pacman; took: 0:00:00.000780
DEBUG: Importing rule: pacman_not_found; took: 0:00:00.000225
DEBUG: Importing rule: path_from_history; took: 0:00:00.000279
DEBUG: Importing rule: pip_unknown_command; took: 0:00:00.000656
DEBUG: Importing rule: port_already_in_use; took: 0:00:00.000392
DEBUG: Importing rule: python_command; took: 0:00:00.000530
DEBUG: Importing rule: python_execute; took: 0:00:00.000535
DEBUG: Importing rule: quotation_marks; took: 0:00:00.000185
DEBUG: Importing rule: react_native_command_unrecognized; took: 0:00:00.000555
DEBUG: Importing rule: remove_trailing_cedilla; took: 0:00:00.000189
DEBUG: Importing rule: rm_dir; took: 0:00:00.000503
DEBUG: Importing rule: rm_root; took: 0:00:00.000515
DEBUG: Importing rule: scm_correction; took: 0:00:00.000486
DEBUG: Importing rule: sed_unterminated_s; took: 0:00:00.000554
DEBUG: Importing rule: sl_ls; took: 0:00:00.000189
DEBUG: Importing rule: ssh_known_hosts; took: 0:00:00.000509
DEBUG: Importing rule: sudo; took: 0:00:00.000199
DEBUG: Importing rule: sudo_command_from_user_path; took: 0:00:00.000470
DEBUG: Importing rule: switch_lang; took: 0:00:00.000224
DEBUG: Importing rule: systemctl; took: 0:00:00.000783
DEBUG: Importing rule: test.py; took: 0:00:00.000137
DEBUG: Importing rule: tmux; took: 0:00:00.000355
DEBUG: Importing rule: touch; took: 0:00:00.000339
DEBUG: Importing rule: tsuru_login; took: 0:00:00.000348
DEBUG: Importing rule: tsuru_not_command; took: 0:00:00.000385
DEBUG: Importing rule: unknown_command; took: 0:00:00.000142
DEBUG: Importing rule: vagrant_up; took: 0:00:00.000348
DEBUG: Importing rule: whois; took: 0:00:00.000541
DEBUG: Importing rule: workon_doesnt_exists; took: 0:00:00.000539
DEBUG: Importing rule: yarn_alias; took: 0:00:00.000351
DEBUG: Importing rule: yarn_command_not_found; took: 0:00:00.000674
DEBUG: Importing rule: yarn_command_replaced; took: 0:00:00.000474
DEBUG: Importing rule: yarn_help; took: 0:00:00.000345
DEBUG: Trying rule: path_from_history; took: 0:00:00.000668
DEBUG: Trying rule: dry; took: 0:00:00.000077
DEBUG: Trying rule: git_stash_pop; took: 0:00:00.000029
DEBUG: Trying rule: test.py; took: 0:00:00.000003
DEBUG: Trying rule: ag_literal; took: 0:00:00.000024
DEBUG: Trying rule: aws_cli; took: 0:00:00.000022
DEBUG: Trying rule: brew_link; took: 0:00:00.000023
DEBUG: Trying rule: brew_uninstall; took: 0:00:00.000020
DEBUG: Trying rule: brew_update_formula; took: 0:00:00.000020
DEBUG: Trying rule: cargo; took: 0:00:00.000003
DEBUG: Trying rule: cargo_no_command; took: 0:00:00.000023
DEBUG: Trying rule: cd_correction; took: 0:00:00.000025
DEBUG: Trying rule: cd_mkdir; took: 0:00:00.000021
DEBUG: Trying rule: cd_parent; took: 0:00:00.000003
DEBUG: Trying rule: chmod_x; took: 0:00:00.000003
DEBUG: Trying rule: composer_not_command; took: 0:00:00.000021
DEBUG: Trying rule: cp_omitting_directory; took: 0:00:00.000023
DEBUG: Trying rule: cpp11; took: 0:00:00.000022
DEBUG: Trying rule: dirty_untar; took: 0:00:00.000022
DEBUG: Trying rule: dirty_unzip; took: 0:00:00.000022
DEBUG: Trying rule: django_south_ghost; took: 0:00:00.000004
DEBUG: Trying rule: django_south_merge; took: 0:00:00.000003
DEBUG: Trying rule: docker_not_command; took: 0:00:00.000029
DEBUG: Trying rule: fab_command_not_found; took: 0:00:00.000022
DEBUG: Trying rule: fix_alt_space; took: 0:00:00.000006
DEBUG: Trying rule: fix_file; took: 0:00:00.000052
DEBUG: Trying rule: gem_unknown_command; took: 0:00:00.000023
DEBUG: Trying rule: git_add; took: 0:00:00.000018
DEBUG: Trying rule: git_add_force; took: 0:00:00.000018
DEBUG: Trying rule: git_bisect_usage; took: 0:00:00.000030
DEBUG: Trying rule: git_branch_delete; took: 0:00:00.000019
DEBUG: Trying rule: git_branch_exists; took: 0:00:00.000018
DEBUG: Trying rule: git_branch_list; took: 0:00:00.000017
DEBUG: Trying rule: git_checkout; took: 0:00:00.000017
DEBUG: Trying rule: git_diff_no_index; took: 0:00:00.000017
DEBUG: Trying rule: git_diff_staged; took: 0:00:00.000017
DEBUG: Trying rule: git_fix_stash; took: 0:00:00.000018
DEBUG: Trying rule: git_flag_after_filename; took: 0:00:00.000018
DEBUG: Trying rule: git_help_aliased; took: 0:00:00.000017
DEBUG: Trying rule: git_not_command; took: 0:00:00.000018
DEBUG: Trying rule: git_pull; took: 0:00:00.000018
DEBUG: Trying rule: git_pull_clone; took: 0:00:00.000018
DEBUG: Trying rule: git_pull_uncommitted_changes; took: 0:00:00.000017
DEBUG: Trying rule: git_push; took: 0:00:00.000018
DEBUG: Trying rule: git_push_pull; took: 0:00:00.000018
DEBUG: Trying rule: git_push_without_commits; took: 0:00:00.000018
DEBUG: Trying rule: git_rebase_merge_dir; took: 0:00:00.000018
DEBUG: Trying rule: git_rebase_no_changes; took: 0:00:00.000018
DEBUG: Trying rule: git_remote_seturl_add; took: 0:00:00.000017
DEBUG: Trying rule: git_rm_local_modifications; took: 0:00:00.000017
DEBUG: Trying rule: git_rm_recursive; took: 0:00:00.000018
DEBUG: Trying rule: git_rm_staged; took: 0:00:00.000018
DEBUG: Trying rule: git_stash; took: 0:00:00.000018
DEBUG: Trying rule: git_tag_force; took: 0:00:00.000018
DEBUG: Trying rule: git_two_dashes; took: 0:00:00.000022
DEBUG: Trying rule: go_run; took: 0:00:00.000022
DEBUG: Trying rule: gradle_no_task; took: 0:00:00.000022
DEBUG: Trying rule: gradle_wrapper; took: 0:00:00.000021
DEBUG: Trying rule: grep_arguments_order; took: 0:00:00.000022
DEBUG: Trying rule: grep_recursive; took: 0:00:00.000021
DEBUG: Trying rule: grunt_task_not_found; took: 0:00:00.000021
DEBUG: Trying rule: gulp_not_task; took: 0:00:00.000021
DEBUG: Trying rule: has_exists_script; took: 0:00:00.000024
DEBUG: Trying rule: heroku_not_command; took: 0:00:00.000024
DEBUG: Trying rule: hostscli; took: 0:00:00.000023
DEBUG: Trying rule: ifconfig_device_not_found; took: 0:00:00.000021
DEBUG: Trying rule: java; took: 0:00:00.000021
DEBUG: Trying rule: javac; took: 0:00:00.000021
DEBUG: Trying rule: lein_not_task; took: 0:00:00.000023
DEBUG: Trying rule: ln_no_hard_link; took: 0:00:00.000009
DEBUG: Trying rule: ln_s_order; took: 0:00:00.000008
DEBUG: Trying rule: ls_all; took: 0:00:00.000022
DEBUG: Trying rule: ls_lah; took: 0:00:00.000019
DEBUG: Trying rule: man; took: 0:00:00.000022
DEBUG: Trying rule: mercurial; took: 0:00:00.000021
DEBUG: Trying rule: mkdir_p; took: 0:00:00.000005
DEBUG: Trying rule: mvn_no_command; took: 0:00:00.000021
DEBUG: Trying rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000019
DEBUG: Trying rule: no_such_file; took: 0:00:00.000864
DEBUG: Trying rule: npm_missing_script; took: 0:00:00.000026
DEBUG: Trying rule: npm_run_script; took: 0:00:00.000020
DEBUG: Trying rule: npm_wrong_command; took: 0:00:00.000020
DEBUG: Trying rule: open; took: 0:00:00.000037
DEBUG: Trying rule: pacman; took: 0:00:00.000006
DEBUG: Trying rule: pacman_not_found; took: 0:00:00.000007
DEBUG: Trying rule: pip_unknown_command; took: 0:00:00.000027
DEBUG: Trying rule: python_command; took: 0:00:00.000008
DEBUG: Trying rule: python_execute; took: 0:00:00.000024
DEBUG: Trying rule: quotation_marks; took: 0:00:00.000003
DEBUG: Trying rule: react_native_command_unrecognized; took: 0:00:00.000022
DEBUG: Trying rule: remove_trailing_cedilla; took: 0:00:00.000003
DEBUG: Trying rule: rm_dir; took: 0:00:00.000004
DEBUG: Trying rule: scm_correction; took: 0:00:00.000022
DEBUG: Trying rule: sed_unterminated_s; took: 0:00:00.000021
DEBUG: Trying rule: sl_ls; took: 0:00:00.000003
DEBUG: Trying rule: ssh_known_hosts; took: 0:00:00.000021
DEBUG: Trying rule: sudo; took: 0:00:00.000020
DEBUG: Trying rule: sudo_command_from_user_path; took: 0:00:00.000026
DEBUG: Trying rule: switch_lang; took: 0:00:00.000003
DEBUG: Trying rule: systemctl; took: 0:00:00.000023
DEBUG: Trying rule: tmux; took: 0:00:00.000021
DEBUG: Trying rule: touch; took: 0:00:00.000022
DEBUG: Trying rule: tsuru_login; took: 0:00:00.000021
DEBUG: Trying rule: tsuru_not_command; took: 0:00:00.000025
DEBUG: Trying rule: unknown_command; took: 0:00:00.000214
DEBUG: Trying rule: vagrant_up; took: 0:00:00.000025
DEBUG: Trying rule: whois; took: 0:00:00.000022
DEBUG: Trying rule: workon_doesnt_exists; took: 0:00:00.000022
DEBUG: Trying rule: yarn_alias; took: 0:00:00.000021
DEBUG: Trying rule: yarn_command_not_found; took: 0:00:00.000021
DEBUG: Trying rule: yarn_command_replaced; took: 0:00:00.000018
DEBUG: Trying rule: yarn_help; took: 0:00:00.000022
DEBUG: Trying rule: man_no_space; took: 0:00:00.000004
DEBUG: Trying rule: no_command; took: 0:00:00.000080
DEBUG: Trying rule: missing_space_before_subcommand; took: 0:00:00.048274
DEBUG: Trying rule: history; took: 0:00:00.019292
``` | open | 2017-09-19T13:43:16Z | 2017-11-22T11:15:33Z | https://github.com/nvbn/thefuck/issues/694 | [] | Rinma | 2 |
Lightning-AI/pytorch-lightning | deep-learning | 20,409 | Errors when deploying PyTorch Lightning Model to AWS SageMaker TrainingJobs: SMDDP does not support ReduceOp | ### Bug description
Hi, I am trying to follow the DDP (Distributed Data Parallel) guidance ([Guide 1](https://aws.amazon.com/blogs/machine-learning/run-pytorch-lightning-and-native-pytorch-ddp-on-amazon-sagemaker-training-featuring-amazon-search/), [Guide 2](https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel-modify-sdp-pt-lightning.html)) and deploy my deep learning models to AWS SageMaker. However, when running it, I am encountering the following error. [1 instance, 4 GPUs]
May I ask how I can fix this error? Any suggestions or comments would be greatly appreciated!
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
Model Code:
```python
from torch.distributed import init_process_group, destroy_process_group
from torchmetrics.functional import pairwise_cosine_similarity
from lightning.pytorch.callbacks import EarlyStopping, ModelCheckpoint
from lightning.pytorch.loggers import CSVLogger
import smdistributed.dataparallel.torch.torch_smddp
from lightning.pytorch.strategies import DDPStrategy
from lightning.fabric.plugins.environments.lightning import LightningEnvironment
import lightning as pl

env = LightningEnvironment()
env.world_size = lambda: int(os.environ["WORLD_SIZE"])
env.global_rank = lambda: int(os.environ["RANK"])


def main(args):
    train_samples = 1000
    val_samples = 200
    test_samples = 200

    csv_logger = CSVLogger(save_dir=args.model_dir, name=args.modelname)

    # Initialize the DataModule
    data_module = ImagePairDataModule(
        data_save_folder=args.data_dir,
        train_samples=train_samples,
        val_samples=val_samples,
        test_samples=test_samples,
        batch_size=args.batch_size,
        num_workers=12,
    )

    # Initialize the model
    model = Siamese()

    # Configure checkpoint callback to save the best model
    checkpoint_callback = ModelCheckpoint(
        monitor="val_loss",  # Monitor validation loss
        dirpath=args.model_dir,  # Directory to save model checkpoints
        filename="best-checkpoint-test",  # File name of the checkpoint
        save_top_k=1,  # Only save the best model
        mode="min",  # Save when validation loss is minimized
        save_on_train_epoch_end=False,
    )

    # Configure early stopping to stop training if validation loss doesn't improve
    early_stopping_callback = EarlyStopping(
        monitor="val_loss",  # Monitor validation loss
        patience=args.patience,  # Number of epochs with no improvement before stopping
        mode="min",  # Stop when validation loss is minimized
    )

    # Set up ddp on SageMaker
    # https://docs.aws.amazon.com/sagemaker/latest/dg/data-parallel-modify-sdp-pt-lightning.html
    ddp = DDPStrategy(
        cluster_environment=env,
        process_group_backend="smddp",
        accelerator="gpu"
    )

    # Initialize the PyTorch Lightning Trainer
    trainer = pl.Trainer(
        max_epochs=args.epochs,
        strategy=ddp,  # Distributed Data Parallel strategy
        devices=torch.cuda.device_count(),  # Use all available GPUs
        precision=16,  # Use mixed precision (16-bit)
        callbacks=[checkpoint_callback, early_stopping_callback],
        log_every_n_steps=10,
        logger=csv_logger,
    )

    # Train the model
    trainer.fit(model, datamodule=data_module)

    best_model_path = checkpoint_callback.best_model_path
    print(f"Saving best model to: {best_model_path}")

    # Destroy the process group if distributed training was initialized
    if torch.distributed.is_initialized():
        torch.distributed.destroy_process_group()


if __name__ == "__main__":
    # Set up argument parser for command-line arguments
    parser = argparse.ArgumentParser()

    # Adding arguments
    parser.add_argument('--epochs', type=int, default=10)
    parser.add_argument('--batch_size', type=int, default=256)
    parser.add_argument('--patience', type=int, default=10)
    parser.add_argument('--modelname', type=str, default='testing_model')

    # Container environment
    parser.add_argument("--hosts", type=list, default=json.loads(os.environ["SM_HOSTS"]))
    parser.add_argument("--current-host", type=str, default=os.environ["SM_CURRENT_HOST"])
    parser.add_argument("--model-dir", type=str, default=os.environ["SM_MODEL_DIR"])
    parser.add_argument("--data-dir", type=str, default=os.environ["SM_CHANNEL_TRAINING"])
    parser.add_argument("--num-gpus", type=int, default=os.environ["SM_NUM_GPUS"])

    # Parse arguments
    args = parser.parse_args()

    # Ensure the model directory exists
    os.makedirs(args.model_dir, exist_ok=True)

    # Launch the main function
    main(args)
```
Training Job Code:
```python
hyperparameters_set = {
    'epochs': 100,  # Total number of epochs
    'batch_size': 200,  # Input batch size on each device
    'patience': 10,  # Early stopping patience
    'modelname': model_task_name,  # Name for the model
}

estimator = PyTorch(
    entry_point="model_01.py",
    source_dir="./sage_code_300",
    output_path=jobs_folder + "/",
    code_location=jobs_folder,
    role=role,
    input_mode='FastFile',
    py_version="py310",
    framework_version="2.2.0",
    instance_count=1,
    instance_type="ml.g5.12xlarge",
    hyperparameters=hyperparameters_set,
    volume_size=800,
    distribution={'pytorchddp': {'enabled': True}},
    dependencies=["./sage_code/requirements.txt"],
)

estimator.fit({"training": inputs},
              job_name=job_name,
              wait=False,
              logs=True)
```
### Error messages and logs
```
2024-11-09 00:25:02 Uploading - Uploading generated training model
2024-11-09 00:25:02 Failed - Training job failed
---------------------------------------------------------------------------
UnexpectedStatusException Traceback (most recent call last)
Cell In[26], line 34
14 estimator = PyTorch(
15 entry_point = "model_01.py",
16 source_dir = "./sage_code_300",
(...)
29 dependencies=["./sage_code_300/requirements.txt"],
30 )
32 ######### Run the model #############
33 # Send the model to sage training jobs
---> 34 estimator.fit({"training": inputs},
35 job_name = job_name,
36 wait = True, # True
37 logs=True)
40 model_will_save_path = os.path.join(jobs_folder, job_name, "output", "model.tar.gz")
41 print(f"\nModel is saved at:\n\n{model_will_save_path}")
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sagemaker/workflow/pipeline_context.py:346, in runnable_by_pipeline.<locals>.wrapper(*args, **kwargs)
342 return context
344 return _StepArguments(retrieve_caller_name(self_instance), run_func, *args, **kwargs)
--> 346 return run_func(*args, **kwargs)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sagemaker/estimator.py:1376, in EstimatorBase.fit(self, inputs, wait, logs, job_name, experiment_config)
1374 forward_to_mlflow_tracking_server = True
1375 if wait:
-> 1376 self.latest_training_job.wait(logs=logs)
1377 if forward_to_mlflow_tracking_server:
1378 log_sagemaker_job_to_mlflow(self.latest_training_job.name)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sagemaker/estimator.py:2750, in _TrainingJob.wait(self, logs)
2748 # If logs are requested, call logs_for_jobs.
2749 if logs != "None":
-> 2750 self.sagemaker_session.logs_for_job(self.job_name, wait=True, log_type=logs)
2751 else:
2752 self.sagemaker_session.wait_for_job(self.job_name)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sagemaker/session.py:5945, in Session.logs_for_job(self, job_name, wait, poll, log_type, timeout)
5924 def logs_for_job(self, job_name, wait=False, poll=10, log_type="All", timeout=None):
5925 """Display logs for a given training job, optionally tailing them until job is complete.
5926
5927 If the output is a tty or a Jupyter cell, it will be color-coded
(...)
5943 exceptions.UnexpectedStatusException: If waiting and the training job fails.
5944 """
-> 5945 _logs_for_job(self, job_name, wait, poll, log_type, timeout)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sagemaker/session.py:8547, in _logs_for_job(sagemaker_session, job_name, wait, poll, log_type, timeout)
8544 last_profiler_rule_statuses = profiler_rule_statuses
8546 if wait:
-> 8547 _check_job_status(job_name, description, "TrainingJobStatus")
8548 if dot:
8549 print()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/sagemaker/session.py:8611, in _check_job_status(job, desc, status_key_name)
8605 if "CapacityError" in str(reason):
8606 raise exceptions.CapacityError(
8607 message=message,
8608 allowed_statuses=["Completed", "Stopped"],
8609 actual_status=status,
8610 )
-> 8611 raise exceptions.UnexpectedStatusException(
8612 message=message,
8613 allowed_statuses=["Completed", "Stopped"],
8614 actual_status=status,
8615 )
UnexpectedStatusException: Error for Training job model-res50-300k-noft-contra-aug-2024-11-09-00-18-33: Failed. Reason: AlgorithmError: ExecuteUserScriptError:
ExitCode 1
ErrorMessage "RuntimeError: SMDDP does not support: ReduceOp
Traceback (most recent call last)
File "/opt/conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.10/site-packages/mpi4py/__main__.py", line 7, in <module>
main()
File "/opt/conda/lib/python3.10/site-packages/mpi4py/run.py", line 230, in main
run_command_line(args)
File "/opt/conda/lib/python3.10/site-packages/mpi4py/run.py", line 47, in run_command_line
run_path(sys.argv[0], run_name='__main__')
File "/opt/conda/lib/python3.10/runpy.py", line 289, in run_path
return _run_module_code(code, init_globals, run_name,
File "/opt/conda/lib/python3.10/runpy.py", line 96, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "s4_mod_pl_cloud_01.py", line 369, in <module>
main(args)
File "s4_mod_pl_cloud_01. Check troubleshooting guide for common errors: https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-python-sdk-troubleshooting.html
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0): 2.4.0
#- PyTorch Version (e.g., 2.4): 2.2
#- Python version (e.g., 3.12): 3.10
#- OS (e.g., Linux): Linux2
#- CUDA/cuDNN version: 12.4
#- GPU models and configuration: A10G
#- How you installed Lightning(`conda`, `pip`, source): pip
```
</details>
### More info
_No response_ | closed | 2024-11-09T00:58:05Z | 2024-11-09T20:40:07Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20409 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | ZihanChen1995 | 0 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 591 | hello i have issues starting the program | Hello, I have issues starting the program. I got this error:
ImportError: cannot import name '_imaging'
Any idea? | closed | 2020-11-08T07:39:57Z | 2020-11-09T19:39:31Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/591 | [] | Z3ugm4 | 1 |
mckinsey/vizro | plotly | 176 | Updating Meta Tags | ### What's the problem this feature will solve?
One of the cool features of Dash Pages is that it’s easy to update the meta tags for each page. This is good for SEO and creating nice looking preview cards when sharing a link on social media.
Currently with the Vizro demo app, only the title is included in the meta tags. Here's what the link preview looks like:

However, Dash Pages will update the meta tags with an image and description if supplied. For example, see the [Dash Example Index](https://dash-example-index.herokuapp.com). It has over 100 pages, each with its own title, description and image:

### Describe the solution you'd like
It's easy to enable this feature in Vizro: simply include optional `description`, `image`, and `image_url` parameters in the `Page`.
I could do a pull request if you would like this feature enabled. I got it working locally. You can't see what the link looks like until the app is deployed, but when inspecting the browser, you can see that the meta tags now include an image and description.

### Alternative Solutions
No
### Additional context
None at this time.
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2023-11-17T16:38:44Z | 2024-05-21T11:20:55Z | https://github.com/mckinsey/vizro/issues/176 | [
"Feature Request :nerd_face:"
] | AnnMarieW | 9 |
kaliiiiiiiiii/Selenium-Driverless | web-scraping | 101 | Headless mode not working on linux distributions. | I run my script, no errors, but it stays stuck at the

It never gets to the "working" part.
But it does have a pid for chromium:

I am not smart in this kind of systems dev stuff. I just build using tools like selenium-driverless.
Thanks! | closed | 2023-10-27T03:43:25Z | 2023-10-27T04:09:15Z | https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/101 | [] | AdonisCodes | 1 |
PokeAPI/pokeapi | api | 1,020 | Effect and Short Effect are not Reliable |
There are some attacks that get repeatedly updated and buffed: the raw numbers get updated, but the effect texts do not.
Example, Fury Cutter: the long effect text describes an end multiplier of 16x, and the short effect says it is "maxing out after 5 turns".
Neither is accurate anymore, because the power now maxes out at a flat 160, which corresponded to 16x of the base power in Gens 2-4 and to the 5-turn ramp-up in Gen 5.
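For concreteness, the current Fury Cutter behaviour can be sketched numerically; the 160 cap comes from the report, while the base power of 40 is my assumption from recent generations:

```python
def fury_cutter_power(consecutive_hits, base=40, cap=160):
    # Power doubles on each successive hit and is capped at a flat value.
    return min(base * 2 ** (consecutive_hits - 1), cap)


print([fury_cutter_power(n) for n in range(1, 6)])
# [40, 80, 160, 160, 160] -> the cap is reached on the 3rd hit, not after 5 turns
```

Under these numbers, neither "16x" nor "maxing out after 5 turns" describes the move anymore.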
Example, Bind: the effect texts claim 1/16 of max HP damage, but that category of damage has since been buffed to 1/8 of max HP.
I did not check all moves, but I ran into this often enough that I can't call the effect text data reliable. | open | 2024-01-24T21:09:42Z | 2024-04-25T17:31:26Z | https://github.com/PokeAPI/pokeapi/issues/1020 | [] | GreatNovaDragon | 8 |
ijl/orjson | numpy | 337 | Datetime subclass serialization | Hello, I have created a subclass of Python's `datetime.datetime` called `TimezoneAwareDatetime` that enforces a timezone:
<details>
<summary>TimezoneAwareDatetime</summary>
```python
class TimezoneAwareDatetime(datetime.datetime):
    """
    Ultimately parses into a datetime just like the normal Pydantic validator but forces timezone information to be included.

    To have this pass type evaluation with the Pycharm Pydantic plugin, add this to your pyproject.toml:

    [tool.pydantic-pycharm-plugin.parsable-types]
    "shipwell_common_python.contrib.pydantic.custom_types.TimezoneAwareDatetime" = ["str", "datetime.datetime"]
    """

    def __new__(
        cls: type[TimezoneAwareDatetime_T],
        year: int,
        month: int,
        day: int,
        hour: int = 0,
        minute: int = 0,
        second: int = 0,
        microsecond: int = 0,
        tzinfo: datetime.tzinfo | None = None,
        *,
        fold: int = 0,
    ) -> TimezoneAwareDatetime_T:
        if tzinfo is None:
            raise ValueError("Timezone information is required")
        return super().__new__(
            cls,
            year=year,
            month=month,
            day=day,
            hour=hour,
            minute=minute,
            second=second,
            microsecond=microsecond,
            tzinfo=tzinfo,
            fold=fold,
        )

    @classmethod
    def combine(
        cls: type[TimezoneAwareDatetime_T],
        date: datetime.date,
        time: datetime.time,
        tzinfo: datetime.tzinfo | None = None,
    ) -> TimezoneAwareDatetime_T:
        tzinfo = tzinfo if tzinfo is not None else time.tzinfo
        combined_dt = super().combine(date, time, tzinfo=tzinfo)
        return cast(TimezoneAwareDatetime_T, combined_dt)

    @classmethod
    def strptime(
        cls: type[TimezoneAwareDatetime_T], date_string: str, format: str  # noqa: A002
    ) -> TimezoneAwareDatetime_T:
        # Super implementation already returns this proper type; simply override so we can correct the type signature
        dt = super().strptime(date_string, format)
        return cast(TimezoneAwareDatetime_T, dt)

    @classmethod
    def __get_validators__(
        cls,
    ) -> Generator[Callable[[Any], Any], None, None]:
        yield cls.validate

    @classmethod
    def validate(cls, value: datetime.datetime | StrBytesIntFloat) -> "TimezoneAwareDatetime":
        dt = parse_datetime(value)
        if dt.tzinfo is None or dt.tzinfo.utcoffset(dt) is None:
            raise ValueError("Timezone information required")
        return cls.combine(dt.date(), dt.time(), tzinfo=dt.tzinfo)

    def __copy__(self: TimezoneAwareDatetime_T) -> TimezoneAwareDatetime_T:
        # default implementation does not pass `tzinfo` to `__new__()`, ensure we do that here to avoid errors
        cls = self.__class__
        return cls.__new__(
            cls,
            year=self.year,
            month=self.month,
            day=self.day,
            hour=self.hour,
            minute=self.minute,
            second=self.second,
            microsecond=self.microsecond,
            tzinfo=self.tzinfo,
            fold=self.fold,
        )

    def __deepcopy__(
        self: TimezoneAwareDatetime_T, memodict: dict[str, Any]
    ) -> TimezoneAwareDatetime_T:
        # we can afford for our deepcopy to be same as shallow-copy because both this and sub-properties are immutable
        return self.__copy__()
```
</details>
However when I try to serialize this with orjson, orjson does not recognize it:
```python
import orjson

print(orjson.dumps({
    "date": TimezoneAwareDatetime(
        2023, 1, 25, 5, 30, 10, tzinfo=zoneinfo.ZoneInfo("America/Los_Angeles")
    )
}).decode())
# Results in: Type is not JSON serializable: TimezoneAwareDatetime
```
I suppose one of the things that makes orjson so fast is the lack of things like an `instanceof` check, but wonder if there's a way around it to handle it natively? The workaround I am currently using is:
```python
def _default_json(obj: Any) -> Any:
    if isinstance(obj, TimezoneAwareDatetime):
        return obj.isoformat()
    raise TypeError(f"Type is not JSON serializable: {obj.__class__}")


print(orjson.dumps({
    "date": TimezoneAwareDatetime(
        2023, 1, 25, 5, 30, 10, tzinfo=zoneinfo.ZoneInfo("America/Los_Angeles")
    )
}, default=_default_json).decode())
# prints {"date":"2023-01-25T05:30:10-08:00"}
``` | closed | 2023-01-25T18:05:46Z | 2023-02-09T14:57:37Z | https://github.com/ijl/orjson/issues/337 | [] | phillipuniverse | 3 |
apache/airflow | python | 47,373 | Deferred TI object has no attribute 'next_method' | ### Apache Airflow version
3.0.0b1
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
RetryOperator is failing.
```
[2025-03-05T07:12:25.188823Z] ERROR - Task failed with exception logger="task" error_detail=[{"exc_type":"AttributeError","exc_value":"'RuntimeTaskInstance' object has no attribute 'next_method'","syntax_error":null,"is_cause":false,"frames":[{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":605,"name":"run"},{"filename":"/opt/airflow/task_sdk/src/airflow/sdk/execution_time/task_runner.py","lineno":726,"name":"_execute_task"},{"filename":"/opt/airflow/airflow/models/baseoperator.py","lineno":168,"name":"wrapper"},{"filename":"/files/dags/retry.py","lineno":17,"name":"execute"},{"filename":"/usr/local/lib/python3.9/site-packages/pydantic/main.py","lineno":891,"name":"__getattr__"}]}]
```
### What you think should happen instead?
RetryOperator should show the same behaviour as Airflow 2.
### How to reproduce
Run the below DAG:
```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.exceptions import AirflowException
from airflow.models import BaseOperator
from airflow.triggers.testing import SuccessTrigger


class RetryOperator(BaseOperator):
    def execute(self, context):
        ti = context["ti"]
        has_next_method = bool(ti.next_method)
        try_number = ti.try_number
        self.log.info(
            f"In `execute`: has_next_method: {has_next_method}, try_number:{try_number}"
        )
        self.defer(
            trigger=SuccessTrigger(),
            method_name="next",
            kwargs={"execute_try_number": try_number},
        )

    def next(self, context, execute_try_number, event=None):
        self.log.info("In next!")
        ti = context["ti"]
        has_next_method = bool(ti.next_method)
        try_number = ti.try_number
        self.log.info(
            f"In `next`: has_next_method: {has_next_method}, try_number:{try_number}, excute_try_number: {execute_try_number}"
        )
        if try_number == 1:
            # Force a retry
            raise AirflowException("Force a retry")
        # Did we run `execute`?
        if execute_try_number != try_number:
            raise AirflowException("`execute` wasn't run during retry!")
        return None  # Success!


with DAG(
    "triggerer_retry", schedule=None, start_date=datetime(2021, 9, 13), tags=['core']
) as dag:
    RetryOperator(task_id="retry", retries=1, retry_delay=timedelta(seconds=15))
```
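Until this is fixed in the Task SDK, the failing lookup in the DAG above can be made defensive. This is a workaround sketch only; the class below is a stand-in for `RuntimeTaskInstance`, not the SDK object:

```python
# The traceback shows pydantic's __getattr__ raising AttributeError for
# `ti.next_method`, because the new RuntimeTaskInstance model does not define
# that field. getattr with a default restores the Airflow 2 meaning of
# "no deferral state yet".
class FakeRuntimeTI:  # illustration-only stand-in for RuntimeTaskInstance
    try_number = 1


ti = FakeRuntimeTI()
has_next_method = bool(getattr(ti, "next_method", None))
print(has_next_method)  # False
```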
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-05T07:39:18Z | 2025-03-12T08:31:50Z | https://github.com/apache/airflow/issues/47373 | [
"kind:bug",
"priority:high",
"area:core",
"affected_version:3.0.0beta"
] | atul-astronomer | 10 |
MagicStack/asyncpg | asyncio | 514 | Extra zero decimal digits after decoding NUMERIC (DECIMAL) type |
* **asyncpg version**: 0.18.3
* **PostgreSQL version**: PostgreSQL 10.10 / PostgreSQL 11.5
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce the issue with a local PostgreSQL install?**: Yes, the problem is reproduced using the PostgreSQL provided by AWS and a local install
* **Python version**: 3.7
* **Platform**: Linux / Darwin 18.6.0 x86_64
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: Yes
* **If you built asyncpg locally, which version of Cython did you use?**: No
* **Can the issue be reproduced under both asyncio and [uvloop](https://github.com/magicstack/uvloop)?**: Yes
I have found that asyncpg decodes NUMERIC values with extra zero decimal digits, despite the explicitly specified scale.
This code:
```python
import asyncio
import asyncpg


async def main():
    conn = await asyncpg.connect()

    await conn.execute('''
        CREATE TABLE test_numeric(
            id serial PRIMARY KEY,
            value NUMERIC(1000, 8)
        )
    ''')

    await conn.execute('''
        INSERT INTO test_numeric(value) VALUES('0.00003000')
    ''')

    row = await conn.fetchrow('SELECT * FROM test_numeric;')
    print(row)

    await conn.close()


asyncio.get_event_loop().run_until_complete(main())
```
Produces the following output (12 digits to the right of the point, while a scale of 8 was specified):
```
<Record id=1 value=Decimal('0.000030000000')>
```
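Until the decoding itself is fixed, the trailing digits can be trimmed client-side with `Decimal.quantize`; a workaround sketch, not a fix for asyncpg:

```python
from decimal import Decimal

# Re-quantize the decoded value to the column's declared scale (8 here).
decoded = Decimal("0.000030000000")  # what asyncpg returned
fixed = decoded.quantize(Decimal("1e-8"))
print(fixed)  # 0.00003000
```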
However, in the database everything looks fine:
```
test=# SELECT * FROM test_numeric;
id | value
----+------------
1 | 0.00003000
``` | closed | 2019-12-07T19:41:45Z | 2020-01-09T09:10:57Z | https://github.com/MagicStack/asyncpg/issues/514 | [] | vemikhaylov | 1 |
saulpw/visidata | pandas | 2,071 | IndexError in cursorRow | I couldn't reproduce this one, so if the traceback isn't enough to fix it, feel free to close it. I got this when `reload-modified` was active and reloading rows on a JSONL sheet.
```
Traceback (most recent call last):
  File "/home/ramrachum/.local/lib/python3.10/site-packages/visidata/sheets.py", line 196, in _colorize
    r = colorizer.func(self, col, row, value)
  File "/home/ramrachum/.local/lib/python3.10/site-packages/visidata/sheets.py", line 129, in <lambda>
    CellColorizer(3, 'color_current_cell', lambda s,c,r,v: c is s.cursorCol and r is s.cursorRow)
  File "/home/ramrachum/.local/lib/python3.10/site-packages/visidata/sheets.py", line 375, in cursorRow
    return self.rows[self.cursorRowIndex] if self.nRows > 0 else None
IndexError: list index out of range
``` | closed | 2023-10-21T11:59:15Z | 2023-10-23T03:48:04Z | https://github.com/saulpw/visidata/issues/2071 | [
"bug",
"fixed"
] | cool-RR | 3 |
sczhou/CodeFormer | pytorch | 350 | Anime version | Can you tell me what the dataset format is for training a CodeFormer model? Or do you know of any CodeFormer-like upscaler that handles anime? | open | 2024-02-18T04:28:32Z | 2024-02-18T04:28:32Z | https://github.com/sczhou/CodeFormer/issues/350 | [] | JamesKnight0001 | 0 |
liangliangyy/DjangoBlog | django | 309 | Static files | Will creating a new article generate static files? If I take the static files and put them directly on Alibaba Cloud OSS or a similar service, can I still use them?
| closed | 2019-08-17T07:47:09Z | 2019-08-31T07:32:17Z | https://github.com/liangliangyy/DjangoBlog/issues/309 | [] | javanan | 5 |
strawberry-graphql/strawberry | graphql | 3,400 | Errors when closing a subscription after authentication failure |
## Describe the Bug
When a permissions class's `has_permission` method returns False for a subscription, strawberry subsequently throws some errors while closing the connection.
## System Information
- Operating system: Ubuntu 22.04
- Strawberry version (if applicable): 0.216.1 (and 0.171.1, 0.219)
## Additional Context
When a subscription fails due to an authentication failure, we see log outputs that look like this
```
< TEXT '{"type":"connection_init","payload":{"Authoriza...bGt1hupQ9QVf1VYivKHw"}}' [844 bytes]
> TEXT '{"type": "connection_ack"}' [26 bytes]
< TEXT '{"id":"5","type":"start","payload":{"variables"...ypename\\n }\\n}\\n"}}' [3034 bytes]
> TEXT '{"type": "error", "id": "5", "payload": {"messa...buildingDataChanges"]}}' [144 bytes]
Not Authorized
GraphQL request:2:3
...
raise PermissionError(message)
PermissionError: Not Authorized
< TEXT '{"id":"5","type":"stop"}' [24 bytes]
Exception in ASGI application
Traceback (most recent call last):
...
File ".../strawberry/subscriptions/protocols/graphql_ws/handlers.py", line 193, in cleanup_operation
await self.subscriptions[operation_id].aclose()
KeyError: '5'
= connection is CLOSING
> CLOSE 1000 (OK) [2 bytes]
= connection is CLOSED
! failing connection with code 1006
closing handshake failed
Traceback (most recent call last):
...
File ".../websockets/legacy/protocol.py", line 935, in ensure_open
raise self.connection_closed_exc()
websockets.exceptions.ConnectionClosedError: sent 1000 (OK); no close frame received
connection closed
```
This seems to indicate that the permissions failure prevents the subscription from being created, which causes cleanup to fail since it assumes the subscription exists. If I modify https://github.com/strawberry-graphql/strawberry/blob/808d898a9041caffe74e4364314b585413e4e5e2/strawberry/subscriptions/protocols/graphql_ws/handlers.py#L192 to check for `operation_id` in `self.subscriptions` and `self.tasks` before accessing them, then both the `KeyError` and `closing handshake failed` errors go away.
Since an expired token can cause rapid subscription retries and failures, this can produce quite a lot of log spam.
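The guarded lookup described above could look roughly like this. This is a standalone sketch whose dict names mirror those in `graphql_ws/handlers.py`; it is not the actual patch:

```python
import asyncio


async def cleanup_operation(subscriptions, tasks, operation_id):
    # Tolerate operations that never produced a subscription, e.g. a "start"
    # that was rejected by a permission class before the generator was stored.
    subscription = subscriptions.pop(operation_id, None)
    if subscription is not None:
        await subscription.aclose()
    task = tasks.pop(operation_id, None)
    if task is not None:
        task.cancel()


# A "stop" for an id that failed before starting no longer raises KeyError:
asyncio.run(cleanup_operation({}, {}, "5"))
print("no KeyError")
```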
| open | 2024-02-27T16:48:04Z | 2025-03-20T15:56:37Z | https://github.com/strawberry-graphql/strawberry/issues/3400 | [
"bug"
] | wlaub | 1 |
aio-libs/aiopg | sqlalchemy | 697 | ResourceWarning on parallel queries. | Hi! I've multiple parallel queries running on postgres and all query functions follow the pattern:
```python
...
sql = $ANY_SQL_QUERY
async with dbpool.acquire() as conn:
    async with conn.cursor() as cursor:
        await cursor.execute(sql, (project,))
        rows = await cursor.fetchall()
        return [Bucket(*row) for row in rows] if rows else None
...
```
I'm getting the error message below:
```shell
...
2020-07-07 15:09:19,012 INFO: Obtendo a lista de PVC para a namespace cpfdigital-p...
/opt/python/venv/lib/python3.7/site-packages/aiopg/pool.py:245: ResourceWarning: Invalid transaction status on released connection: 1
ResourceWarning
...
```
I think this is caused by a failure to close or release connections, but I thought the aiopg connection pool was responsible for that.
Do I need to release or close the connections explicitly for each run? And what would be the impact of that approach on the aiopg connection pool?
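For reference, the trailing number in the warning is libpq's transaction status. A small lookup table makes it readable; the values are taken from the libpq `PQtransactionStatus` documentation, so treat them as an assumption rather than aiopg's own API:

```python
# libpq PQtransactionStatus values as psycopg2/aiopg see them. Status 1 means
# a query was still in flight when the connection went back to the pool,
# which commonly happens when the awaiting task is cancelled mid-query.
PQ_TRANSACTION_STATUS = {
    0: "IDLE",     # nothing running, safe to release
    1: "ACTIVE",   # a command is still executing
    2: "INTRANS",  # idle inside an open transaction block
    3: "INERROR",  # idle inside a failed transaction block
    4: "UNKNOWN",  # connection is bad
}

print(PQ_TRANSACTION_STATUS[1])  # ACTIVE
```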
Context:
Python3.7
aiohttp 3.6.2
aiopg 1.0.0
postgres 11 | open | 2020-07-08T13:51:58Z | 2020-07-08T13:57:21Z | https://github.com/aio-libs/aiopg/issues/697 | [] | vsadriano | 0 |
graphistry/pygraphistry | pandas | 51 | Set API key based on local user profile, if available | IPython already has a notion of user built in, so rather than each user baking in their API key (and the setting getting confused when notebooks get shared), dynamically look up the API key based on who is logged into IPython.
This has been becoming a bit of an issue in practice in team settings.
( + @thibaudh @padentomasello @briantrice )
| closed | 2016-01-23T22:21:32Z | 2016-02-25T00:35:21Z | https://github.com/graphistry/pygraphistry/issues/51 | [
"enhancement",
"p3"
] | lmeyerov | 3 |
modin-project/modin | pandas | 7,479 | move or copy the query compiler casting code to the API layer | open | 2025-03-21T19:29:18Z | 2025-03-21T19:29:18Z | https://github.com/modin-project/modin/issues/7479 | [] | sfc-gh-mvashishtha | 0 | |
wkentaro/labelme | deep-learning | 935 | [Question] Semantic and Instance level annotation. | ## What is your question? Please describe.
I'm using the **labelme** tool to annotate for **semantic** and **instance** segmentation, and I'm a bit confused about the overall process and need some assistance.
## Describe what you have tried so far
First, let's consider the following example.
<img src="https://user-images.githubusercontent.com/17668390/137580752-5c5c1be1-9660-48ed-9ba5-bc82f15414a8.jpg" width="500" height="500" />
It's a **single category** (i.e. a parrot). Let's first try to do **instance-level** annotation. (I know they are all the same category; it's just for demonstration purposes.) OK; I use the latest version of **labelme** (`4.5.13`) and start labeling as follows for **instance segmentation**.
<img src="https://user-images.githubusercontent.com/17668390/137580863-c4fd579a-a5c5-425d-be81-40f7921aaf7e.png" width="300" height="300" /> <img src="https://user-images.githubusercontent.com/17668390/137580954-8bee95d6-9c80-4b4b-b4f2-fc301e54d442.png" width="300" height="300" />
After drawing all the polygons.
<img src="https://user-images.githubusercontent.com/17668390/137581135-2a955e0b-9e97-4b43-9a9f-fc651b667adf.png" width="500" height="400" />
After saving the annotation file **(`.json`)**, I manually created the **`label.txt`** file. All files can be found [here](https://drive.google.com/drive/folders/1EU1XteMirdjJ9xhPRpM-sqPYPJ7tpwFX?usp=sharing).
---
So far, **the annotation process for semantic segmentation and instance segmentation looks pretty much the same** (please correct me if I misunderstood). In **semantic segmentation**, all `parrot` instances would be marked as the same category (one label), and in **instance-level segmentation**, each of them would have the same category **but** a different **label/id** (parrot 0, parrot 1, parrot 2, etc.); isn't it?
But while using the **labelme** tool, I didn't encounter any option to do such a thing. In other words, the annotation process I'm doing for semantic and instance segmentation is almost the same. I hope you understand my confusion. Please feel free to ask if you need any further explanation, and kindly check the above-mentioned Google Drive link where I share the raw image and its corresponding `.json` and `label.txt`.
FYI, I also tried to run your code [here](https://gist.github.com/wkentaro/84f14a29a7e4888d3cb5e4270c0398c0) but faced the following error:
```
python vis_coco_annotations.py aug.json
Traceback (most recent call last):
File "check.py", line 57, in <module>
main()
File "check.py", line 25, in main
class_names = [None] * (max(coco.getCatIds()) + 1)
File "....\site-packages\pycocotools\coco.py", line 171, in getCatIds
cats = self.dataset['categories']
KeyError: 'categories'
```
| closed | 2021-10-16T09:15:09Z | 2021-11-10T13:31:52Z | https://github.com/wkentaro/labelme/issues/935 | [] | innat | 1 |
prkumar/uplink | rest-api | 170 | httpbin for more comprehensive testing? | **Is your feature request related to a problem? Please describe.**
I am trying to run various tests for some features of REST with various async implementations... Currently the examples use a simple Flask app, which is good for looking into the details, but it doesn't have the broad coverage of an existing solution, and it might not be friendly enough for some users to run...
**Describe the solution you'd like**
Leveraging httpbin.org would be a quick way to get a large API surface and integration tests...
What do you think about it? | open | 2019-08-21T09:19:44Z | 2019-08-24T23:01:15Z | https://github.com/prkumar/uplink/issues/170 | [
"Testing"
] | asmodehn | 1 |
deepfakes/faceswap | deep-learning | 798 | CUDNN_STATUS_ALLOC_FAILED | **Describe the bug**
Critical error: I got the frames from effmpeg, loaded them into Extract, hit Extract, and then got the error below a few seconds later.
**Expected behavior**
Thought Faceswap would begin Extraction
Loading...
07/18/2019 07:52:24 INFO Log level set to: INFO
07/18/2019 07:52:26 INFO Output Directory: C:\Users\Philip\Documents\faceswap\Faceswap_NakedGun2_Pt1\Align_for_Training_A
07/18/2019 07:52:26 INFO Input Directory: C:\Users\Philip\Documents\faceswap\Faceswap_NakedGun2_Pt1\Img_A
07/18/2019 07:52:26 INFO Loading Detect from S3Fd plugin...
07/18/2019 07:52:26 INFO Loading Align from Fan plugin...
07/18/2019 07:52:26 INFO Starting, this may take a while...
07/18/2019 07:52:27 INFO Initializing S3FD Detector...
2019-07-18 07:52:38.449679: E tensorflow/stream_executor/cuda/cuda_dnn.cc:334] Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
2019-07-18 07:52:38.450371: E tensorflow/stream_executor/cuda/cuda_dnn.cc:334] Could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
07/18/2019 07:52:38 ERROR Caught exception in child process: 21412: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.\n [[node s3fd/convolution (defined at C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py:147) ]]\n [[node s3fd/transpose_99 (defined at C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py:147) ]]\n\nCaused by op 's3fd/convolution', defined at:\n File "<string>", line 1, in <module>\n File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\multiprocessing\spawn.py", line 105, in spawn_main\n exitcode = _main(fd)\n File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\multiprocessing\spawn.py", line 118, in _main\n return self._bootstrap()\n File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\multiprocessing\process.py", line 258, in _bootstrap\n self.run()\n File "C:\Users\Philip\faceswap\lib\multithreading.py", line 361, in run\n super().run()\n File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\multiprocessing\process.py", line 93, in run\n self._target(*self._args, **self._kwargs)\n File "C:\Users\Philip\faceswap\plugins\extract\detect\_base.py", line 129, in run\n self.detect_faces(*args, **kwargs)\n File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 75, in detect_faces\n super().detect_faces(*args, **kwargs)\n File "C:\Users\Philip\faceswap\plugins\extract\detect\_base.py", line 101, in detect_faces\n self.initialize(*args, **kwargs)\n File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 44, in initialize\n self.model = S3fd(self.model_path, self.target, tf_ratio, card_id, confidence)\n File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 132, in __init__\n self.graph = self.load_graph()\n File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 147, in load_graph\n self.tf.import_graph_def(graph_def, name="s3fd")\n File 
"C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func\n return func(*args, **kwargs)\n File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\importer.py", line 442, in import_graph_def\n _ProcessNewOps(graph)\n File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\importer.py", line 235, in _ProcessNewOps\n for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access\n File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 3433, in _add_new_tf_operations\n for c_op in c_api_util.new_tf_operations(self)\n File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 3433, in <listcomp>\n for c_op in c_api_util.new_tf_operations(self)\n File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 3325, in _create_op_from_tf_operation\n ret = Operation(c_op, self)\n File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__\n self._traceback = tf_stack.extract_stack()\n\nUnknownError (see above for traceback): Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.\n [[node s3fd/convolution (defined at C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py:147) ]]\n [[node s3fd/transpose_99 (defined at C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py:147) ]]\n
07/18/2019 07:52:38 ERROR Traceback:
Traceback (most recent call last):
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
return fn(*args)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node s3fd/convolution}}]]
[[{{node s3fd/transpose_99}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Philip\faceswap\plugins\extract\detect\_base.py", line 129, in run
self.detect_faces(*args, **kwargs)
File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 75, in detect_faces
super().detect_faces(*args, **kwargs)
File "C:\Users\Philip\faceswap\plugins\extract\detect\_base.py", line 101, in detect_faces
self.initialize(*args, **kwargs)
File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 71, in initialize
raise err
File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 44, in initialize
self.model = S3fd(self.model_path, self.target, tf_ratio, card_id, confidence)
File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 135, in __init__
self.session = self.set_session(target_size, vram_ratio, card_id)
File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 174, in set_session
session.run(self.output, feed_dict={self.input: placeholder})
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node s3fd/convolution (defined at C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py:147) ]]
[[node s3fd/transpose_99 (defined at C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py:147) ]]
Caused by op 's3fd/convolution', defined at:
File "<string>", line 1, in <module>
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\multiprocessing\spawn.py", line 118, in _main
return self._bootstrap()
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\multiprocessing\process.py", line 258, in _bootstrap
self.run()
File "C:\Users\Philip\faceswap\lib\multithreading.py", line 361, in run
super().run()
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Philip\faceswap\plugins\extract\detect\_base.py", line 129, in run
self.detect_faces(*args, **kwargs)
File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 75, in detect_faces
super().detect_faces(*args, **kwargs)
File "C:\Users\Philip\faceswap\plugins\extract\detect\_base.py", line 101, in detect_faces
self.initialize(*args, **kwargs)
File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 44, in initialize
self.model = S3fd(self.model_path, self.target, tf_ratio, card_id, confidence)
File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 132, in __init__
self.graph = self.load_graph()
File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 147, in load_graph
self.tf.import_graph_def(graph_def, name="s3fd")
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\importer.py", line 442, in import_graph_def
_ProcessNewOps(graph)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\importer.py", line 235, in _ProcessNewOps
for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 3433, in _add_new_tf_operations
for c_op in c_api_util.new_tf_operations(self)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 3433, in <listcomp>
for c_op in c_api_util.new_tf_operations(self)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 3325, in _create_op_from_tf_operation
ret = Operation(c_op, self)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
UnknownError (see above for traceback): Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node s3fd/convolution (defined at C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py:147) ]]
[[node s3fd/transpose_99 (defined at C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py:147) ]]
07/18/2019 07:52:43 ERROR Got Exception on main handler:
Traceback (most recent call last):
File "C:\Users\Philip\faceswap\lib\cli.py", line 122, in execute_script
process.process()
File "C:\Users\Philip\faceswap\scripts\extract.py", line 61, in process
self.run_extraction()
File "C:\Users\Philip\faceswap\scripts\extract.py", line 181, in run_extraction
self.extractor.launch()
File "C:\Users\Philip\faceswap\plugins\extract\pipeline.py", line 175, in launch
self.launch_detector()
File "C:\Users\Philip\faceswap\plugins\extract\pipeline.py", line 238, in launch_detector
raise ValueError("Error initializing Detector")
ValueError: Error initializing Detector
07/18/2019 07:52:43 CRITICAL An unexpected crash has occurred. Crash report written to 'C:\Users\Philip\faceswap\crash_report.2019.07.18.075243331247.log'. Please verify you are running the latest version of faceswap before reporting
Process exited.
**Desktop (please complete the following information):**
Windows 10
Intel E3 Xeon_v5 Skylake
32gb DDR4
Nvidia RTX 2080
**Additional context**
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node s3fd/convolution (defined at C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py:147) ]]
[[node s3fd/transpose_99 (defined at C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py:147) ]]
Caused by op 's3fd/convolution', defined at:
File "<string>", line 1, in <module>
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\multiprocessing\spawn.py", line 118, in _main
return self._bootstrap()
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\multiprocessing\process.py", line 258, in _bootstrap
self.run()
File "C:\Users\Philip\faceswap\lib\multithreading.py", line 361, in run
super().run()
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\multiprocessing\process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\Philip\faceswap\plugins\extract\detect\_base.py", line 129, in run
self.detect_faces(*args, **kwargs)
File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 75, in detect_faces
super().detect_faces(*args, **kwargs)
File "C:\Users\Philip\faceswap\plugins\extract\detect\_base.py", line 101, in detect_faces
self.initialize(*args, **kwargs)
File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 44, in initialize
self.model = S3fd(self.model_path, self.target, tf_ratio, card_id, confidence)
File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 132, in __init__
self.graph = self.load_graph()
File "C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py", line 147, in load_graph
self.tf.import_graph_def(graph_def, name="s3fd")
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\importer.py", line 442, in import_graph_def
_ProcessNewOps(graph)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\importer.py", line 235, in _ProcessNewOps
for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 3433, in _add_new_tf_operations
for c_op in c_api_util.new_tf_operations(self)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 3433, in <listcomp>
for c_op in c_api_util.new_tf_operations(self)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 3325, in _create_op_from_tf_operation
ret = Operation(c_op, self)
File "C:\Users\Philip\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
UnknownError (see above for traceback): Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node s3fd/convolution (defined at C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py:147) ]]
[[node s3fd/transpose_99 (defined at C:\Users\Philip\faceswap\plugins\extract\detect\s3fd.py:147) ]]
Traceback (most recent call last):
File "C:\Users\Philip\faceswap\lib\cli.py", line 122, in execute_script
process.process()
File "C:\Users\Philip\faceswap\scripts\extract.py", line 61, in process
self.run_extraction()
File "C:\Users\Philip\faceswap\scripts\extract.py", line 181, in run_extraction
self.extractor.launch()
File "C:\Users\Philip\faceswap\plugins\extract\pipeline.py", line 175, in launch
self.launch_detector()
File "C:\Users\Philip\faceswap\plugins\extract\pipeline.py", line 238, in launch_detector
raise ValueError("Error initializing Detector")
ValueError: Error initializing Detector
============ System Information ============
encoding: cp1252
git_branch: master
git_commits: 5f73418 Update icons. 0a82e51 Bugflx: Correct input variable to res_block. ae259e9 Add .mpg file extension support. afdf8c8 Image loader Variable name fix. 4371d50 Remove he_uniform from init from res block and use FS default
gpu_cuda: No global version found. Check Conda packages for Conda Cuda
gpu_cudnn: No global version found. Check Conda packages for Conda cuDNN
gpu_devices: GPU_0: GeForce RTX 2080
gpu_devices_active: GPU_0
gpu_driver: 419.35
gpu_vram: GPU_0: 8192MB
os_machine: AMD64
os_platform: Windows-10-10.0.17763-SP0
os_release: 10
py_command: C:\Users\Philip\faceswap\faceswap.py extract -i C:/Users/Philip/Documents/faceswap/Faceswap_NakedGun2_Pt1/Img_A -o C:/Users/Philip/Documents/faceswap/Faceswap_NakedGun2_Pt1/Align_for_Training_A -l 0.4 --serializer json -D s3fd -A fan -nm none -bt 0.0 -sz 256 -min 0 -een 10 -si 0 -L INFO -gui
py_conda_version: conda 4.7.5
py_implementation: CPython
py_version: 3.6.8
py_virtual_env: True
sys_cores: 8
sys_processor: Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
sys_ram: Total: 32713MB, Available: 21369MB, Used: 11343MB, Free: 21369MB
=============== Pip Packages ===============
absl-py==0.7.1
astor==0.7.1
certifi==2019.6.16
cloudpickle==1.2.1
cycler==0.10.0
cytoolz==0.9.0.1
dask==2.0.0
decorator==4.4.0
fastcluster==1.1.25
ffmpy==0.2.2
gast==0.2.2
grpcio==1.16.1
h5py==2.9.0
imageio==2.5.0
imageio-ffmpeg==0.3.0
joblib==0.13.2
Keras==2.2.4
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
kiwisolver==1.1.0
Markdown==3.1.1
matplotlib==2.2.2
mkl-fft==1.0.12
mkl-random==1.0.2
mkl-service==2.0.2
mock==3.0.5
networkx==2.3
numpy==1.16.2
nvidia-ml-py3==7.352.0
olefile==0.46
pathlib==1.0.1
Pillow==6.0.0
protobuf==3.8.0
psutil==5.6.3
pyparsing==2.4.0
pyreadline==2.1
python-dateutil==2.8.0
pytz==2019.1
PyWavelets==1.0.3
pywin32==223
PyYAML==5.1.1
scikit-image==0.15.0
scikit-learn==0.21.2
scipy==1.2.1
six==1.12.0
tensorboard==1.13.1
tensorflow==1.13.1
tensorflow-estimator==1.13.0
termcolor==1.1.0
toolz==0.9.0
toposort==1.5
tornado==6.0.3
tqdm==4.32.1
Werkzeug==0.15.4
wincertstore==0.2
============== Conda Packages ==============
# packages in environment at C:\Users\Philip\MiniConda3\envs\faceswap:
#
# Name Version Build Channel
_tflow_select 2.1.0 gpu
absl-py 0.7.1 py36_0
astor 0.7.1 py36_0
blas 1.0 mkl
ca-certificates 2019.5.15 0
certifi 2019.6.16 py36_0
cloudpickle 1.2.1 py_0
cudatoolkit 10.0.130 0
cudnn 7.6.0 cuda10.0_0
cycler 0.10.0 py36h009560c_0
cytoolz 0.9.0.1 py36hfa6e2cd_1
dask-core 2.0.0 py_0
decorator 4.4.0 py36_1
fastcluster 1.1.25 py36h830ac7b_1000 conda-forge
ffmpeg 4.1.3 h6538335_0 conda-forge
ffmpy 0.2.2 pypi_0 pypi
freetype 2.9.1 ha9979f8_1
gast 0.2.2 py36_0
grpcio 1.16.1 py36h351948d_1
h5py 2.9.0 py36h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha66f8fd_1
imageio 2.5.0 py36_0
imageio-ffmpeg 0.3.0 py_0 conda-forge
intel-openmp 2019.4 245
joblib 0.13.2 py36_0
jpeg 9c hfa6e2cd_1001 conda-forge
keras 2.2.4 0
keras-applications 1.0.8 py_0
keras-base 2.2.4 py36_0
keras-preprocessing 1.1.0 py_1
kiwisolver 1.1.0 py36ha925a31_0
libblas 3.8.0 8_mkl conda-forge
libcblas 3.8.0 8_mkl conda-forge
liblapack 3.8.0 8_mkl conda-forge
liblapacke 3.8.0 8_mkl conda-forge
libmklml 2019.0.3 0
libpng 1.6.37 h7602738_0 conda-forge
libprotobuf 3.8.0 h7bd577a_0
libtiff 4.0.10 h6512ee2_1003 conda-forge
libwebp 1.0.2 hfa6e2cd_2 conda-forge
lz4-c 1.8.3 he025d50_1001 conda-forge
markdown 3.1.1 py36_0
matplotlib 2.2.2 py36had4c4a9_2
mkl 2019.4 245
mkl-service 2.0.2 py36he774522_0
mkl_fft 1.0.12 py36h14836fe_0
mkl_random 1.0.2 py36h343c172_0
mock 3.0.5 py36_0
networkx 2.3 py_0
numpy 1.16.2 py36h19fb1c0_0
numpy-base 1.16.2 py36hc3f5095_0
nvidia-ml-py3 7.352.0 pypi_0 pypi
olefile 0.46 py36_0
opencv 4.1.0 py36hb4945ee_5 conda-forge
openssl 1.1.1c he774522_1
pathlib 1.0.1 py36_1
pillow 6.0.0 py36hdc69c19_0
pip 19.1.1 py36_0
protobuf 3.8.0 py36h33f27b4_0
psutil 5.6.3 py36he774522_0
pyparsing 2.4.0 py_0
pyqt 5.9.2 py36h6538335_2
pyreadline 2.1 py36_1
python 3.6.8 h9f7ef89_7
python-dateutil 2.8.0 py36_0
pytz 2019.1 py_0
pywavelets 1.0.3 py36h8c2d366_1
pywin32 223 py36hfa6e2cd_1
pyyaml 5.1.1 py36he774522_0
qt 5.9.7 hc6833c9_1 conda-forge
scikit-image 0.15.0 py36ha925a31_0
scikit-learn 0.21.2 py36h6288b17_0
scipy 1.2.1 py36h29ff71c_0
setuptools 41.0.1 py36_0
sip 4.19.8 py36h6538335_0
six 1.12.0 py36_0
sqlite 3.28.0 he774522_0
tensorboard 1.13.1 py36h33f27b4_0
tensorflow 1.13.1 gpu_py36h9006a92_0
tensorflow-base 1.13.1 gpu_py36h871c8ca_0
tensorflow-estimator 1.13.0 py_0
tensorflow-gpu 1.13.1 h0d30ee6_0
termcolor 1.1.0 py36_1
tk 8.6.8 hfa6e2cd_0
toolz 0.9.0 py36_0
toposort 1.5 py_3 conda-forge
tornado 6.0.3 py36he774522_0
tqdm 4.32.1 py_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.15.26706 h3a45250_4
werkzeug 0.15.4 py_0
wheel 0.33.4 py36_0
wincertstore 0.2 py36h7fe50ca_0
xz 5.2.4 h2fa13f4_1001 conda-forge
yaml 0.1.7 hc54c509_2
zlib 1.2.11 h2fa13f4_1004 conda-forge
zstd 1.4.0 hd8a0e53_0 conda-forge | closed | 2019-07-18T15:03:08Z | 2019-09-26T16:26:29Z | https://github.com/deepfakes/faceswap/issues/798 | [] | Aki113s | 5 |
nerfstudio-project/nerfstudio | computer-vision | 2,906 | Splatfacto during test: NotImplementedError: Saving images is not implemented yet | ```
Traceback (most recent call last):
File "/home/liubohan/anaconda3/envs/nerfstudio/bin/ns-eval", line 8, in <module>
sys.exit(entrypoint())
File "/home/liubohan/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/nerfstudio/scripts/eval.py", line 66, in entrypoint
tyro.cli(ComputePSNR).main()
File "/home/liubohan/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/nerfstudio/scripts/eval.py", line 49, in main
metrics_dict = pipeline.get_average_eval_image_metrics(output_path=self.render_output_path, get_std=True)
File "/home/liubohan/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/nerfstudio/utils/profiler.py", line 112, in inner
out = func(*args, **kwargs)
File "/home/liubohan/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/nerfstudio/pipelines/base_pipeline.py", line 380, in get_average_eval_image_metrics
raise NotImplementedError("Saving images is not implemented yet")
NotImplementedError: Saving images is not implemented yet
```
| closed | 2024-02-12T06:20:03Z | 2024-11-30T10:07:31Z | https://github.com/nerfstudio-project/nerfstudio/issues/2906 | [] | liu-bohan | 2 |
scrapy/scrapy | python | 6,644 | Unable to use CrawlerProcess to run a spider as a script to debug it at breakpoints |
### Description
I was trying to debug my Scrapy spider using CrawlerProcess, but I keep running into this problem.
### Steps to Reproduce
1. I made a Python file named runner.py (**it's in the same directory as my scrapy.cfg file**) with the following code
```
from scrapy.crawler import CrawlerProcess
from scrapy.utils import project
from glasses_shop_uk.spiders.bestsellers import BestsellersSpider
process = CrawlerProcess(settings=project.get_project_settings())
process.crawl(BestsellersSpider)
process.start()
```
2. In my spider script, bestsellers.py, this is the code (**breakpoint shown with the commented line**):
```
import scrapy
class BestsellersSpider(scrapy.Spider):
name = "bestsellers"
allowed_domains = ["www.glassesshop.com"]
start_urls = ["https://www.glassesshop.com/bestsellers"]
def parse(self, response):
# my breakpoint is here
products = response.xpath('//div[@id="product-lists"]/div/div[@class="pb-5 mb-lg-3 product-list-row text-center product-list-item"]')
for product in products:
product_name = product.xpath('.//descendant::a[starts-with(@class,"active product-img")]/img/@alt').get()
product_price = product.xpath('.//descendant::div[@class="p-price"]/div[starts-with(@class,"active")]/span/span[1]/text()').get()
product_url_relative = product.xpath('.//descendant::a[starts-with(@class,"active product-img")]/@href').get()
product_url_full = f"https://www.glassesshop.com{product_url_relative}"
yield scrapy.Request(product_url_full, callback=self.parse_product, meta={"product_name": product_name, "product_price": product_price})
def parse_product(self, response):
product_name = response.meta["product_name"]
product_price = response.meta["product_price"]
product_img_url = response.xpath('//*[@id="app"]/div/section/div/div/div[1]/div[1]/div[2]/div[2]/div/div/div[1]/img/@src').get()
yield {
"product_name": product_name,
"product_price": product_price,
"product_img_url": product_img_url,
"product_url" : response.url
}
```
3. Now I run the runner.py script in debugging mode and I get a traceback
**Expected behavior:**
I enter the debugger after the parse method has started on the response from the start_url
**Actual behavior:**
```
import sys; print('Python %s on %s' % (sys.version, sys.platform))
D:\Desktop\Company\PythonProject\.venv\Scripts\python.exe -X pycache_prefix=C:\Users\Risha\AppData\Local\JetBrains\PyCharm2024.3\cpython-cache "C:/Program Files/JetBrains/PyCharm 2024.3.1.1/plugins/python-ce/helpers/pydev/pydevd.py" --multiprocess --qt-support=auto --port 29781 --file D:\Desktop\Company\PythonProject\glasses_shop_uk\runner.py
2025-02-01 14:49:59 [scrapy.utils.log] INFO: Scrapy 2.12.0 started (bot: glasses_shop_uk)
2025-02-01 14:49:59 [scrapy.utils.log] INFO: Versions: lxml 5.3.0.0, libxml2 2.11.7, cssselect 1.2.0, parsel 1.10.0, w3lib 2.2.1, Twisted 24.11.0, Python 3.12.8 (tags/v3.12.8:2dc476b, Dec 3 2024, 19:30:04) [MSC v.1942 64 bit (AMD64)], pyOpenSSL 25.0.0 (OpenSSL 3.4.0 22 Oct 2024), cryptography 44.0.0, Platform Windows-11-10.0.26100-SP0
2025-02-01 14:49:59 [scrapy.addons] INFO: Enabled addons:
[]
2025-02-01 14:49:59 [asyncio] DEBUG: Using selector: SelectSelector
2025-02-01 14:49:59 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2025-02-01 14:49:59 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2025-02-01 14:49:59 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2025-02-01 14:49:59 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2025-02-01 14:49:59 [scrapy.extensions.telnet] INFO: Telnet Password: dc50ac58d34cb67d
2025-02-01 14:50:00 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2025-02-01 14:50:00 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'glasses_shop_uk',
'FEED_EXPORT_ENCODING': 'utf-8',
'NEWSPIDER_MODULE': 'glasses_shop_uk.spiders',
'ROBOTSTXT_OBEY': True,
'SPIDER_MODULES': ['glasses_shop_uk.spiders'],
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2025-02-01 14:50:00 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware',
'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2025-02-01 14:50:00 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2025-02-01 14:50:00 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2025-02-01 14:50:00 [scrapy.core.engine] INFO: Spider opened
2025-02-01 14:50:00 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2025-02-01 14:50:00 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2025-02-01 14:50:01 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.glassesshop.com/robots.txt> (referer: https://google.com)
2025-02-01 14:50:03 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.glassesshop.com/bestsellers> (referer: https://google.com)
2025-02-01 14:50:03 [asyncio] ERROR: Exception in callback <Task pending name='Task-1' coro=<SpiderMiddlewareManager.scrape_response.<locals>.process_callback_output() running at D:\Desktop\Company\PythonProject\.venv\Lib\site-packages\scrapy\core\spidermw.py:313> cb=[Deferred.fromFuture.<locals>.adapt() at D:\Desktop\Company\PythonProject\.venv\Lib\site-packages\twisted\internet\defer.py:1251]>()
handle: <Handle <Task pending name='Task-1' coro=<SpiderMiddlewareManager.scrape_response.<locals>.process_callback_output() running at D:\Desktop\Company\PythonProject\.venv\Lib\site-packages\scrapy\core\spidermw.py:313> cb=[Deferred.fromFuture.<locals>.adapt() at D:\Desktop\Company\PythonProject\.venv\Lib\site-packages\twisted\internet\defer.py:1251]>()>
Traceback (most recent call last):
File "C:\Users\Risha\AppData\Local\Programs\Python\Python312\Lib\asyncio\events.py", line 88, in _run
self._context.run(self._callback, *self._args)
TypeError: 'Task' object is not callable
2025-02-01 14:51:00 [scrapy.extensions.logstats] INFO: Crawled 2 pages (at 2 pages/min), scraped 0 items (at 0 items/min)
2025-02-01 14:52:00 [scrapy.extensions.logstats] INFO: Crawled 2 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2025-02-01 14:53:00 [scrapy.extensions.logstats] INFO: Crawled 2 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
```
**Reproduces how often:** every time
### Versions
```
scrapy version --verbose
Scrapy : 2.12.0
lxml : 5.3.0.0
libxml2 : 2.11.7
cssselect : 1.2.0
parsel : 1.10.0
w3lib : 2.2.1
Twisted : 24.11.0
Python : 3.12.8 (tags/v3.12.8:2dc476b, Dec 3 2024, 19:30:04) [MSC v.1942 64 bit (AMD64)]
pyOpenSSL : 25.0.0 (OpenSSL 3.4.0 22 Oct 2024)
cryptography : 44.0.0
Platform : Windows-11-10.0.26100-SP0
```
Please help as this is a really helpful feature for debugging a spider | closed | 2025-02-01T08:40:44Z | 2025-02-04T15:19:43Z | https://github.com/scrapy/scrapy/issues/6644 | [
"bug",
"not reproducible",
"upstream issue",
"asyncio"
] | RishavDaredevil | 5 |
scikit-learn/scikit-learn | data-science | 30,772 | Wrong Mutual Information Calculation | ### Describe the bug
#### Issue
I encountered a bug unexpectedly while reviewing some metrics in a project.
When calculating mutual information using `mutual_info_classif`, I noticed values higher than entropy, which is [impossible](https://en.wikipedia.org/wiki/Mutual_information#/media/File:Figchannel2017ab.svg). There is no such issue with `mutual_info_regression` (although there, the self-MI is far from entropy, which may be another interesting case).
##### Implication
Any algorithm sorting features based on `mutual_info_classif` or any metric based on this function may be affected.
Thanks a lot for putting time into this.
P.S. In the minimal example, the feature is fixed (all ones). However, I encountered the same issue in other scenarios as well; the example is just simplified. The problem persists on both Linux and Mac. I have attached session info from my personal computer.
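For intuition, the plug-in (discrete) estimator can be computed with the stdlib alone. Note this is not the kNN estimator that `mutual_info_classif` uses, so it only sanity-checks the expected values: a constant feature must give exactly zero mutual information.

```python
import math
from collections import Counter

def mutual_info(xs, ys):
    """Plug-in MI estimate (in nats) from joint and marginal counts."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        mi += (c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
    return mi

n = 1000
feature = [1] * n                                # constant feature, as in the repro
target = [1 if i < 10 else 0 for i in range(n)]  # rare positive class

print(mutual_info(feature, target))  # 0.0: a constant feature carries no information
print(mutual_info(target, target))   # ≈ 0.056 nats, i.e. H(target)
```

The self-MI equals the entropy of the target, matching the `scipy.stats.entropy` value in the repro up to sample size.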
### Steps/Code to Reproduce
```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif
big_n = 1_000_000
bug_df = pd.DataFrame({
'feature': np.ones(big_n),
'target': (np.arange(big_n) < 100).astype(int),
})
bug_df
mi = mutual_info_classif(bug_df[['feature']], bug_df['target'])
entropy = mutual_info_classif(bug_df[['target']], bug_df['target'])
print(f"mi: {mi[0] :.6f}")
print(f"self-mi (entropy): {entropy[0] :.6f}")
from scipy import stats
scipy_entropy = stats.entropy([bug_df['target'].mean(), 1 - bug_df['target'].mean()])
print(f"scipy entropy: {scipy_entropy :.6f}")
```
### Expected Results
```
mi: 0.000000
self-mi (entropy): 0.001023
scipy entropy: 0.001021
```
### Actual Results
```
mi: 0.215495
self-mi (entropy): 0.001023
scipy entropy: 0.001021
```
### Versions
```shell
System:
python: 3.10.13 (main, Sep 11 2023, 08:16:02) [Clang 14.0.6 ]
executable: /Users/*/miniconda3/envs/*/bin/python
machine: macOS-15.1.1-arm64-arm-64bit
Python dependencies:
sklearn: 1.6.1
pip: 23.3.1
setuptools: 68.2.2
numpy: 1.26.1
scipy: 1.15.1
Cython: None
pandas: 2.2.3
matplotlib: 3.8.2
joblib: 1.3.2
threadpoolctl: 3.2.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
num_threads: 8
prefix: libopenblas
filepath: /Users/*/miniconda3/envs/*/lib/python3.10/site-packages/numpy/.dylibs/libopenblas64_.0.dylib
version: 0.3.23.dev
threading_layer: pthreads
architecture: armv8
user_api: openmp
internal_api: openmp
num_threads: 8
prefix: libomp
filepath: /Users/*/miniconda3/envs/*/lib/python3.10/site-packages/sklearn/.dylibs/libomp.dylib
version: None
``` | open | 2025-02-05T16:46:04Z | 2025-02-10T22:08:07Z | https://github.com/scikit-learn/scikit-learn/issues/30772 | [
"Bug",
"Needs Investigation"
] | moinfar | 6 |
Colin-b/pytest_httpx | pytest | 19 | Set cookies in the response | I don't think that feature exists yet?
I need to set a cookie in a response 🙂 | closed | 2020-05-26T15:57:10Z | 2020-05-27T08:17:33Z | https://github.com/Colin-b/pytest_httpx/issues/19 | [
"question"
] | pawamoy | 3 |
localstack/localstack | python | 12,120 | feature request: aws kafkaconnect custom plugin compatibility | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Feature description
For my use case, I'm trying to orchestrate a CDC pipeline from a mongo/documentdb database through an MSK cluster to a Lambda function using the Debezium CDC connector for MongoDB. To do this in AWS I would need to create an MSK connect custom plugin referencing the connector JAR or ZIP in S3. It appears this functionality is not yet supported in localstack making it difficult to fully test this CDC pipeline.
### 🧑💻 Implementation
_No response_
### Anything else?
This is the example I'm trying to follow. https://docs.aws.amazon.com/msk/latest/developerguide/msk-connect-debeziumsource-connector-example-steps.html | open | 2025-01-09T20:10:51Z | 2025-01-16T10:05:37Z | https://github.com/localstack/localstack/issues/12120 | [
"type: feature",
"aws:kafka",
"status: backlog"
] | jaugat | 1 |
twopirllc/pandas-ta | pandas | 637 | bbands bug which equates PERCENT to BANDWIDTH when offset is set to >0 | Row 45 in the code is ```percent = bandwidth.shift(offset)``` and it should be ```percent = percent.shift(offset)``` | closed | 2023-01-17T10:11:08Z | 2023-05-09T22:18:25Z | https://github.com/twopirllc/pandas-ta/issues/637 | [
"bug"
] | dnentchev | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 746 | CUDA out of memory only in python demo_toolbox.py (python demo_cli.py works) | `python demo_toolbox.py
UserWarning: Unable to import 'webrtcvad'. This package enables noise removal and is recommended.
warn("Unable to import 'webrtcvad'. This package enables noise removal and is recommended.")
Arguments:
datasets_root: None
enc_models_dir: encoder\saved_models
syn_models_dir: synthesizer\saved_models
voc_models_dir: vocoder\saved_models
cpu: False
seed: None
no_mp3_support: False
Warning: you did not pass a root directory for datasets as argument.
The recognized datasets are:
LibriSpeech/dev-clean
LibriSpeech/dev-other
LibriSpeech/test-clean
LibriSpeech/test-other
LibriSpeech/train-clean-100
LibriSpeech/train-clean-360
LibriSpeech/train-other-500
LibriTTS/dev-clean
LibriTTS/dev-other
LibriTTS/test-clean
LibriTTS/test-other
LibriTTS/train-clean-100
LibriTTS/train-clean-360
LibriTTS/train-other-500
LJSpeech-1.1
VoxCeleb1/wav
VoxCeleb1/test_wav
VoxCeleb2/dev/aac
VoxCeleb2/test/aac
VCTK-Corpus/wav48
Feel free to add your own. You can still use the toolbox by recording samples yourself.
Loaded encoder "pretrained.pt" trained to step 1564501
Synthesizer using device: cuda
Trainable Parameters: 30.870M
Loaded synthesizer "pretrained.pt" trained to step 295000
+----------+---+
| Tacotron | r |
+----------+---+
| 295k | 2 |
+----------+---+
| Generating 1/1
Traceback (most recent call last):
File "C:\python\Real-Time-Voice-Cloning-master\toolbox\__init__.py", line 120, in <lambda>
func = lambda: self.synthesize() or self.vocode()
File "C:\python\Real-Time-Voice-Cloning-master\toolbox\__init__.py", line 227, in synthesize
specs = self.synthesizer.synthesize_spectrograms(texts, embeds)
File "C:\python\Real-Time-Voice-Cloning-master\synthesizer\inference.py", line 124, in synthesize_spectrograms
_, mels, alignments = self._model.generate(chars, speaker_embeddings)
File "C:\python\Real-Time-Voice-Cloning-master\synthesizer\models\tacotron.py", line 464, in generate
postnet_out = self.postnet(mel_outputs)
File "C:\python\pyto\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\python\Real-Time-Voice-Cloning-master\synthesizer\models\tacotron.py", line 142, in forward
conv_bank = torch.cat(conv_bank, dim=1)
RuntimeError: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 2.00 GiB total capacity; 1.07 GiB already allocated; 0 bytes free; 1.11 GiB reserved in total by PyTorch)`
I mean that demo_toolbox is not able to get a percentage of free memory...
Can someone help me with how I can reserve memory for the app?` | closed | 2021-04-25T15:02:10Z | 2021-05-04T17:21:51Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/746 | [] | AjaxFB | 2 |
pytest-dev/pytest-cov | pytest | 639 | Pytest-watch with coverage (if it doesn't exist yet) | #### What's the problem this feature will solve?
I can:
1. run tests with the `coverage` package by running `coverage run --rcfile=.coveragerc -m pytest`.
2. generate a coverage report by running `coverage report --show-missing`.
3. watch tests with the `pytest-watch` package by running `ptw --quiet --spool 200 --clear --nobeep --config pytest.ini --ext=.py --onfail="echo Tests failed, fix the issues" -v`
I would like to have a way to watch coverage while testing as well. Could you enlighten me?
#### Describe the solution you'd like
I wrote the following bash script with the desired behavior:
```
#!/bin/bash
clear
while true; do
coverage run --rcfile=.coveragerc -m pytest
coverage report --show-missing
sleep 5 # Adjust delay between test runs if needed
clear
done
```
#### Alternative Solutions
I have not found the proper way to proceed.
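A change-driven variant of the loop is possible with the stdlib alone. This is only a sketch: the `coverage` invocations are the same as in the script above and are assumed to be on PATH, and the watch loop itself is left commented out so the snippet just defines the helpers.

```python
import os
import subprocess
import time

CMDS = [
    ["coverage", "run", "--rcfile=.coveragerc", "-m", "pytest"],
    ["coverage", "report", "--show-missing"],
]

def snapshot(dirs):
    """Map every .py file under `dirs` to its mtime, to detect edits."""
    state = {}
    for d in dirs:
        for root, _, files in os.walk(d):
            for name in files:
                if name.endswith(".py"):
                    path = os.path.join(root, name)
                    state[path] = os.path.getmtime(path)
    return state

def watch(dirs=(".",), interval=2.0):
    """Rerun the coverage commands whenever a watched .py file changes."""
    last = None
    while True:
        current = snapshot(dirs)
        if current != last:
            last = current
            for cmd in CMDS:
                subprocess.run(cmd)
        time.sleep(interval)

# watch()  # uncomment to start watching
```

Unlike the fixed `sleep 5` loop, this only reruns the suite when a source file's mtime actually changes.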
| closed | 2024-04-03T14:01:46Z | 2024-09-15T15:58:10Z | https://github.com/pytest-dev/pytest-cov/issues/639 | [] | brunolnetto | 1 |
recommenders-team/recommenders | deep-learning | 1,238 | [BUG] Can you provide the code to generate the data in MINDSMALL_utils.zip? | There are embedding.npy, uid2index.pkl, word_dict.pkl, and nrms.yaml in MINDSMALL_utils.zip.
Can you share the code to generate these files? | open | 2020-11-09T07:29:54Z | 2020-11-12T11:53:40Z | https://github.com/recommenders-team/recommenders/issues/1238 | [
"bug"
] | bestpredicts | 1 |
coleifer/sqlite-web | flask | 121 | Editing and deleting not working | When I try to delete or edit a row I get this error
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/peewee.py", line 7024, in get
return clone.execute(database)[0]
File "/usr/local/lib/python3.10/dist-packages/peewee.py", line 4389, in __getitem__
return self.row_cache[item]
IndexError: list index out of range
```
but there are also exceptions that apparently happen while handling that exception, and the website shows a different error (along with exceptions that happened while handling that one), so I'm not exactly sure what the issue is

The errors only happen with some rows (seems like it might be due to it having a space or something?), trying to edit or delete other rows either shows the UI as expected, or returns a 404. | closed | 2023-07-16T08:54:32Z | 2023-09-05T01:22:46Z | https://github.com/coleifer/sqlite-web/issues/121 | [] | LunarTwilight | 9 |
gradio-app/gradio | python | 10,201 | Accordion - Expanding vertically to the right | - [x] I have searched to see if a similar issue already exists.
I would really like to have the ability to place an accordion vertically and expand to the right. I have scenarios where this would be a better UI solution, as doing so would automatically push the other components to the right of it forward.
I have no idea how to tweak this in CSS to make it work. If you have a simple CSS solution I would appreciate it until we have this feature. I am actually developing something that would really need this feature.
I made this drawing of what it would be like.

| closed | 2024-12-14T23:51:34Z | 2024-12-16T16:46:38Z | https://github.com/gradio-app/gradio/issues/10201 | [] | elismasilva | 1 |
Miserlou/Zappa | flask | 1,839 | Zappa Deploy FileExistsError | I am very new to Zappa and AWS. I successfully installed Zappa and managed to go through `zappa init`. However, when I try to deploy with `zappa deploy`, I keep getting the error below.
I cleared the temp directory and tried again and again but nothing changed.
**Error**
```
Traceback (most recent call last):
File "c:\users\xx\appdata\local\programs\python\python37-32\Lib\distutils\dir_util.py", line 70, in mkpath
os.mkdir(head, mode)
FileExistsError: [WinError 183] File exists
'C:\\Users\\xx\\AppData\\Local\\Temp\\zappa-project_jcpoxaq\\hjson'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 2779, in handle
sys.exit(cli.handle())
File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 509, in handle
self.dispatch_command(self.command, stage)
File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 546, in dispatch_command
self.deploy(self.vargs['zip'])
File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 718, in deploy
self.create_package()
File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\cli.py", line 2267, in create_package
disable_progress=self.disable_progress
File "c:\users\xx\desktop\botdeneme\botenv\lib\site-packages\zappa\core.py", line 629, in create_lambda_zip
copy_tree(temp_package_path, temp_project_path, update=True)
File "c:\users\xx\appdata\local\programs\python\python37-32\Lib\distutils\dir_util.py", line 159, in copy_tree
verbose=verbose, dry_run=dry_run))
File "c:\users\xx\appdata\local\programs\python\python37-32\Lib\distutils\dir_util.py", line 135, in copy_tree
mkpath(dst, verbose=verbose)
File "c:\users\xx\appdata\local\programs\python\python37-32\Lib\distutils\dir_util.py", line 74, in mkpath
"could not create '%s': %s" % (head, exc.args[-1]))
distutils.errors.DistutilsFileError: could not create 'C:\Users\xx\AppData\Local\Temp\zappa-project_jcpoxaq\hjson' File exists
``` | open | 2019-03-24T23:57:30Z | 2019-03-24T23:57:30Z | https://github.com/Miserlou/Zappa/issues/1839 | [] | enotuniq | 0 |
onnx/onnx | pytorch | 5,985 | Inputs from a subgraph disappear when the graph is used as second input in `onnx.compose.merge_models` | # Bug Report
### Describe the bug
Inputs used in a subgraph disappear from their graph when two models are merged through `onnx.compose.merge_models`. This only happens when the subgraph is in the second graph being merged.
If the two models have the same input names, the error does not appear, as the second graph relies on the input node of the first one.
See `Reproduction instructions`.
```
onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'base_two' of node:
name: Slice_22 OpType: Slice
is not output of any previous nodes.
```
### System information
- OS Platform and Distribution: Linux Ubuntu 20.04
- ONNX version: '1.15.0'
- Python version: 3.9
### Reproduction instructions
Full reproduction script. Generated ONNX files are attached.
[onnx_models.zip](https://github.com/onnx/onnx/files/14464800/onnx_models.zip)
```python
import onnx
import torch
from torch import nn
from torch import Tensor
class ModelGraphOne(nn.Module):
def __init__(self):
super().__init__()
def forward(self, inputs: Tensor) -> Tensor:
return 2 * inputs
@torch.jit._script_if_tracing
def extract_from_coords(base: Tensor, coords: Tensor):
n_extractions = coords.size(0)
extractions = torch.zeros(n_extractions, 10, 10)
for extract_id, extract_coords in enumerate(coords):
x1 = extract_coords[0]
x2 = extract_coords[1]
y1 = extract_coords[2]
y2 = extract_coords[3]
extractions[extract_id] = base[x1:x2, y1:y2]
return extractions
class ModelGraphTwo(nn.Module):
def __init__(self):
super().__init__()
def forward(self, base: Tensor, coords: Tensor) -> Tensor:
extractions = extract_from_coords(base, coords)
return extractions
base = torch.randn(100, 100)
coords = torch.tensor([[0, 10, 0, 10], [10, 20, 30, 40]])
model_one = ModelGraphOne()
model_two = ModelGraphTwo()
# Everything goes as planned as inputs names of model_one and model_two are the same
torch.onnx.export(model_one, args=(base, ), f="model_one.onnx", input_names=["base"], output_names=["base_mult"])
torch.onnx.export(model_two, args=(base, coords), f="model_two.onnx", input_names=["base", "coords"]) # has a subgraph
onnx_model_one = onnx.load("model_one.onnx")
onnx_model_two = onnx.load("model_two.onnx")
merged_model = onnx.compose.merge_models(onnx_model_one, onnx_model_two, io_map=[("base_mult", "base")])
print("Models merged successfully")
# Inputs names of model one and model two are now different
# If the model containing the subgraph is the first one during the merge, everything goes as planned
torch.onnx.export(model_one, args=(base, ), f="model_one.onnx", input_names=["base"], output_names=["base_mult"])
torch.onnx.export(model_two, args=(base, coords), f="model_two.onnx", input_names=["base_two", "coords"], output_names=["extract"]) # has a subgraph
onnx_model_one = onnx.load("model_one.onnx")
onnx_model_two = onnx.load("model_two.onnx")
onnx.compose.merge_models(onnx_model_two, onnx_model_one, io_map=[("extract", "base")])
print("Models merged successfully")
# If the model containing the subgraph is the second one during the merge -> KO
onnx.compose.merge_models(onnx_model_one, onnx_model_two, io_map=[("base_mult", "base_two")])
# onnx.onnx_cpp2py_export.checker.ValidationError: Nodes in a graph must be topologically sorted, however input 'base_two' of node:
# name: Slice_22 OpType: Slice
# is not output of any previous nodes.
#
# ==> Context: Bad node spec for node. Name: Loop_8 OpType: Loop
```
### Expected behavior
```python
onnx.compose.merge_models(onnx_model_one, onnx_model_two, io_map=[("base_mult", "base_two")])
```
should work.
| closed | 2024-03-01T18:54:46Z | 2024-03-05T04:18:45Z | https://github.com/onnx/onnx/issues/5985 | [
"bug"
] | Quintulius | 2 |
timkpaine/lantern | plotly | 103 | Remove y=0 line | closed | 2017-10-23T20:07:00Z | 2017-10-24T04:36:16Z | https://github.com/timkpaine/lantern/issues/103 | [
"bug"
] | timkpaine | 0 | |
graphistry/pygraphistry | pandas | 525 | [BUG] hop chain demo bug | in https://github.com/graphistry/pygraphistry/blob/master/demos/more_examples/graphistry_features/hop_and_chain_graph_pattern_mining.ipynb
line: ` 'to': names[j],`
should be: ` 'to': data[0]['usernameList'][j],`
---
The plots should be regenerated, or at least that line of code edited with the existing plots preserved. | closed | 2023-12-06T05:20:40Z | 2023-12-23T00:21:27Z | https://github.com/graphistry/pygraphistry/issues/525 | [
"bug",
"help wanted",
"docs",
"good-first-issue"
] | lmeyerov | 0 |
apache/airflow | machine-learning | 47,300 | LogTemplate table not seeded in custom schema causes TypeError when triggering DAG | ### Apache Airflow version
Other Airflow 2 version (please specify below)
### If "Other Airflow 2 version" selected, which one?
2.10.4
### What happened?
When using a custom schema for the Airflow metadata database, the log_template table is not automatically seeded with default data. As a result, when triggering a DAG, Airflow attempts to set the log_template_id using the following code:
`run.log_template_id = int(session.scalar(select(func.max(LogTemplate.__table__.c.id))))`
Because the custom schema’s log_template table is empty, the query returns None, and attempting to convert None to an integer raises:
`TypeError: int() argument must be a string, a bytes-like object or a real number, not 'NoneType'`
In contrast, when using the default public schema, the table is seeded with default data, and this error does not occur.
### What you think should happen instead?
The log_template table should be seeded with default data even when using a custom schema—just as it is for the public schema. Alternatively, the code should handle an empty log_template table gracefully (for example, by defaulting to 0 when no rows exist).
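A minimal sketch of the graceful-handling option (a hypothetical helper, not Airflow's actual code; the fallback of 0 follows the suggestion above and is not verified against Airflow's log-template semantics):

```python
def coerce_log_template_id(max_id):
    """Fall back to a default when the log_template table has no rows (hypothetical)."""
    return int(max_id) if max_id is not None else 0

# Hypothetical usage at the failing call site:
# run.log_template_id = coerce_log_template_id(
#     session.scalar(select(func.max(LogTemplate.__table__.c.id)))
# )
print(coerce_log_template_id(42))    # 42
print(coerce_log_template_id(None))  # 0
```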
### How to reproduce
Create a custom schema in your Airflow metadata database (e.g., my_custom_schema).
Update the Airflow configuration (in airflow.cfg, or via an environment variable) with **sql_alchemy_schema = ${SQL_ALCHEMY_SCHEMA}**
to use your custom schema.
- Start Airflow.
- Run db migrate (this will add permissions and tables inside the schema)
- Trigger a DAG via the UI or API.
- Observe that the DAG trigger fails with the following error:
- TypeError: int() argument must be a string, a bytes-like object or a real number, not 'NoneType'
because the log_template table in the custom schema is empty.

<img width="748" alt="Image" src="https://github.com/user-attachments/assets/599875fb-3a12-49a5-91a3-915e13983788" />
### Operating System
docker container- aws ecs
### Versions of Apache Airflow Providers
2.10.4 ( docker image)
### Deployment
Other Docker-based deployment
### Deployment details
We used the official Docker image from Airflow, ran a few commands on top of it, and deployed it as an AWS ECS service.
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-03T11:36:46Z | 2025-03-06T03:39:16Z | https://github.com/apache/airflow/issues/47300 | [
"kind:bug",
"area:logging",
"area:db-migrations",
"affected_version:2.10"
] | arvindmunna | 10 |
saulpw/visidata | pandas | 1,905 | Help with my test failures | On `=visidata-2.8`: https://ppb.chymera.eu/d8a1a0.log
On `=visidata-2.11`: https://ppb.chymera.eu/637ad6.log
Not sure whether this is the same error in both cases, but it appears to be.
Both logs end with:
```
diff --git a/tests/golden/errors.csv b/tests/golden/errors.csv
index 00eea70..1a12cc1 100644
Binary files a/tests/golden/errors.csv and b/tests/golden/errors.csv differ
``` | closed | 2023-05-25T06:09:39Z | 2023-05-26T02:44:03Z | https://github.com/saulpw/visidata/issues/1905 | [
"question"
] | TheChymera | 7 |
FlareSolverr/FlareSolverr | api | 868 | cloudflare solved but does not redirect? | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version: 3.3.2
- Last working FlareSolverr version: 3.3.2
- Operating system: Windows Server 2016
- Are you using Docker: no
- FlareSolverr User-Agent (see log traces or / endpoint): Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36
- Are you using a VPN: no
- Are you using a Proxy: no
- Are you using Captcha Solver: no
- URL to test this issue: https://www.cardkingdom.com/customer_login
```
### Description
I am trying to log in to the site, but it redirects back to the same page.
### Logged Error Messages
```text
2023-08-19 13:48:35 INFO ReqId 9656 Incoming request => POST /v1 body: {'postData': 'dest=%2Fmyaccount%2Fprofile&email=REMOVED&password=REMOVED&_token=REMOVED', 'maxTimeout': 120000, 'cmd': 'request.post', 'url': 'https://www.cardkingdom.com/customer_login'}
2023-08-19 13:48:35 DEBUG ReqId 9656 Launching web browser...
2023-08-19 13:48:36 DEBUG ReqId 9656 Started executable: `undetected_chromedriver\chromedriver.exe` in a child process with pid: 17624
2023-08-19 13:48:36 DEBUG ReqId 9656 New instance of webdriver has been created to perform the request
2023-08-19 13:48:36 DEBUG ReqId 20808 Navigating to... https://www.cardkingdom.com/customer_login
2023-08-19 13:48:45 INFO ReqId 20808 Challenge detected. Title found: Just a moment...
2023-08-19 13:48:45 DEBUG ReqId 20808 Waiting for title (attempt 1): Just a moment...
2023-08-19 13:48:46 DEBUG ReqId 20808 Timeout waiting for selector
2023-08-19 13:48:46 DEBUG ReqId 20808 Try to find the Cloudflare verify checkbox...
2023-08-19 13:48:46 DEBUG ReqId 20808 Cloudflare verify checkbox not found on the page.
2023-08-19 13:48:46 DEBUG ReqId 20808 Try to find the Cloudflare 'Verify you are human' button...
2023-08-19 13:48:46 DEBUG ReqId 20808 The Cloudflare 'Verify you are human' button not found on the page.
2023-08-19 13:48:48 DEBUG ReqId 20808 Waiting for title (attempt 2): Just a moment...
2023-08-19 13:48:48 DEBUG ReqId 20808 Waiting for title (attempt 2): DDoS-Guard
2023-08-19 13:48:48 DEBUG ReqId 20808 Waiting for selector (attempt 2): #cf-challenge-running
2023-08-19 13:48:48 DEBUG ReqId 20808 Waiting for selector (attempt 2): .ray_id
2023-08-19 13:48:48 DEBUG ReqId 20808 Waiting for selector (attempt 2): .attack-box
2023-08-19 13:48:48 DEBUG ReqId 20808 Waiting for selector (attempt 2): #cf-please-wait
2023-08-19 13:48:48 DEBUG ReqId 20808 Waiting for selector (attempt 2): #challenge-spinner
2023-08-19 13:48:48 DEBUG ReqId 20808 Waiting for selector (attempt 2): #trk_jschal_js
2023-08-19 13:48:48 DEBUG ReqId 20808 Waiting for selector (attempt 2): td.info #js_info
2023-08-19 13:48:48 DEBUG ReqId 20808 Waiting for selector (attempt 2): div.vc div.text-box h2
2023-08-19 13:48:48 DEBUG ReqId 20808 Waiting for redirect
2023-08-19 13:48:49 DEBUG ReqId 20808 Timeout waiting for redirect
2023-08-19 13:48:49 INFO ReqId 20808 Challenge solved!
2023-08-19 13:48:50 DEBUG ReqId 9656 A used instance of webdriver has been destroyed
2023-08-19 13:48:50 DEBUG ReqId 9656 Response => POST /v1 body: {'status': 'ok', 'message': 'Challenge solved!', 'solution': {'url': 'https://www.cardkingdom.com/customer_login', 'status': 200, 'cookies': [{'domain': '.cardkingdom.com', 'expiry': 1724014128, 'httpOnly': True, 'name': 'cf_clearance', 'path': '/', 'sameSite': 'None', 'secure': True, 'value': 'zIYAU7aX51dzG0_6Yt_FFPMo1K8GSAvjWADx1NnajbE-1692478127-0-1-48867304.a592bff2.1f872ce4-150.2.1692478127'}, {'domain': '.cardkingdom.com', 'httpOnly': True, 'name': '_cfuvid', 'path': '/', 'sameSite': 'None', 'secure': True, 'value': 'oTL6DPZeJ_marFZO.xCPwOEsAt9F4qMzFfYXPEM5eq0-1692478126990-0-604800000'}, {'domain': '.cardkingdom.com', 'httpOnly': True, 'name': '__cfruid', 'path': '/', 'sameSite': 'None', 'secure': True, 'value': '673377a6b876f9b43d27c64c6227803a8452a701-1692478126'}, {'domain': 'www.cardkingdom.com', 'expiry': 1695070127, 'httpOnly': True, 'name': 'laravel_session', 'path': '/', 'sameSite': 'Lax', 'secure': False, 'value': 'eyJpdiI6Im0zQlBEYkUvOWc1cnFLbXNyNkMvU3c9PSIsInZhbHVlIjoid3cxQU9CTXRxRmhDN1dmMkovZEU2emdEWEtaY2NaZ2FhVG9Ic1dOcUppWUpOcW4xZU0vWlJER2ovVzN5UHh4eHZPSW1zNENkVzhSS0RsVE9XbEV0MmdodFN6VHU3ck0rRElya3hpbWxIRUovdkZxUHpvU3pmYjdZNEFaR0h2eUoiLCJtYWMiOiJlYjY4Y2YyYjI0NjVmMzE1ZDM5NTEzNGY2NmEwYWMzNGZhMGI1MjFhNDA1MzIwOGZhMjhlYjY1MGQ5YmFlOTkxIiwidGFnIjoiIn0%3D'}], 'userAgent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/115.0.0.0 Safari/537.36', 'headers': {}, 'response': '<html lang="en"><head>\n<meta charset="utf-8">\n<meta name="viewport" content="width=device-width, initial-scale=1">\n<title>Page Expired</title>\n\n<script src="/cdn-cgi/apps/head/eP6FRgbMdwKe8bDs6MHi5IZZuxo.js"></script><link rel="preconnect" href="https://fonts.gstatic.com">\n<link href="https://fonts.googleapis.com/css2?family=Nunito&display=swap" rel="stylesheet">\n<style>\n /*! 
[... several KB of Cloudflare challenge-page HTML elided from the logged response body: normalize.css plus inline utility CSS, a "419 / Page Expired" message, and the invisible challenge-platform iframe/script (rayId 7f9542652f3746ce) ...]</body></html>'}, 'startTimestamp': 1692478115810, 'endTimestamp': 1692478130002, 'version': '3.3.2'}
2023-08-19 13:48:50 INFO ReqId 9656 Response in 14.192 s
2023-08-19 13:48:50 INFO ReqId 9656 127.0.0.1 POST http://localhost:8191/v1 200 OK
```
### Screenshots
_No response_ | closed | 2023-08-19T20:58:13Z | 2023-08-25T16:11:08Z | https://github.com/FlareSolverr/FlareSolverr/issues/868 | [
"help wanted"
] | asulwer | 6 |
serengil/deepface | deep-learning | 741 | Rotation in align is the wrong way around. | Hi,
While playing around with align, I noticed the align function was increasing the rotation instead of aligning the face.
The directions have been mixed up in:
https://github.com/serengil/deepface/blob/52ad8e46bec7b5100a2983b375573e1154174ccf/deepface/detectors/FaceDetector.py#L78-L93
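Concretely, alignment should rotate by the negated eye-line angle so the eyes end up horizontal; using the positive angle increases the tilt instead. A standalone numeric sketch of that relationship (not the `FaceDetector.py` code itself; the helper is hypothetical):

```python
import math

def alignment_angle(left_eye, right_eye):
    """Angle in degrees to rotate the image so the eyes become horizontal.

    The eye line's own tilt is atan2(dy, dx); rotating by its *negative*
    cancels the tilt, while rotating by the positive value doubles it,
    which is the swapped direction described in this issue.
    """
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    eye_line = math.degrees(math.atan2(dy, dx))
    return -eye_line

# Eyes whose line is tilted 10 degrees: the correction is -10 degrees.
print(round(alignment_angle((0.0, 0.0), (100.0, 100.0 * math.tan(math.radians(10)))), 1))
```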
I can do a corrective PR this afternoon or tomorrow.
Best,
Vincent | closed | 2023-05-03T09:22:25Z | 2023-05-03T15:59:54Z | https://github.com/serengil/deepface/issues/741 | [
"question"
] | Vincent-Stragier | 2 |
redis/redis-om-python | pydantic | 322 | find method does not work | ## What is the issue
I have created a FastAPI application, and I am trying to integrate aredis_om (but this fails with redis_om as well) with one of my rest endpoint modules.
I created a model called `Item`:
```python
# app/models/redis/item.py
from aredis_om import Field, HashModel
from app.db.redis.session import redis_conn
class Item(HashModel):
id: int = Field(index=True)
name: str = Field(index=True)
timestamp: float = Field(index=True)
class Meta:
database = redis_conn
```
```python
# app/chemas/redis/item.py
from pydantic import BaseModel
class ItemCreate(BaseModel):
id: int
name: str
```
```python
# app/db/redis/session.py
from aredis_om import get_redis_connection
from app.core.config import settings
redis_conn = get_redis_connection(
url=f"redis://{settings.REDIS_HOST}:{settings.REDIS_PORT}",
decode_responses=True
)
```
```python
# app/api/api_v1/endpoints/redis_item.py
import time
from typing import Any, List, Optional
from fastapi import APIRouter, HTTPException
from aredis_om import NotFoundError
from app.models.redis.item import Item
from app.schemas.redis.item import ItemCreate
router = APIRouter()
@router.get("/", response_model=List[Item])
async def list_redis_items(name: Optional[str] = None) -> Any:
items = []
pks = [pk async for pk in await Item.all_pks()]
for pk in pks:
item = await Item.get(pk)
if name is None:
items.append(item)
else:
if item.name == name:
items.append(item)
return items
@router.post("/", response_model=Item)
async def post_redis_item(item: ItemCreate) -> Any:
return await Item(id=item.id, name=item.name, timestamp=float(time.time())).save()
@router.get("/{id}", response_model=Item)
async def get_redis_item(id: int) -> Any:
items = []
pks = [pk async for pk in await Item.all_pks()]
for pk in pks:
item = await Item.get(pk)
if item.id == id:
return item
raise HTTPException(status_code=404, detail=f"Item {id} not found")
@router.put("/{id}", response_model=Item)
async def update_redis_item(id: int, patch: Item) -> Any:
try:
item = await Item.get(id)
except NotFoundError:
raise HTTPException(status_code=404, detail=f"Item {id} not found")
item.name = patch.name
return await item.save()
```
As you can see in my endpoints file, I had to make a workaround to be able to pull an individual item and to get a list of items from Redis. My last endpoint, I believe, is just wrong; I would have to change `id` to a `pk` in order to get that item, so the last endpoint can be ignored.
My attempt for the first endpoint was this:
```python
...
if name is None:
items = await Item.find().all()
else:
items = await Item.find(Item.name == name).all()
return items
```
When I hit the endpoint with the .find() method I received a traceback of:
```bash
INFO: 127.0.0.1:56128 - "GET /api/v1/redis_item/ HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/fastapi/applications.py", line 269, in __call__
await super().__call__(scope, receive, send)
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/starlette/middleware/cors.py", line 84, in __call__
await self.app(scope, receive, send)
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/starlette/exceptions.py", line 93, in __call__
raise exc
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__
await self.app(scope, receive, sender)
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/starlette/routing.py", line 670, in __call__
await route.handle(scope, receive, send)
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/starlette/routing.py", line 266, in handle
await self.app(scope, receive, send)
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/starlette/routing.py", line 65, in app
response = await func(request)
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/fastapi/routing.py", line 227, in app
raw_response = await run_endpoint_function(
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/fastapi/routing.py", line 160, in run_endpoint_function
return await dependant.call(**values)
File "/home/user/ApeWorx/Kerkopes/kerkopes/backend/app/./app/api/api_v1/endpoints/redis_item.py", line 17, in list_redis_items
return await Item.find().all()
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/aredis_om/model/model.py", line 760, in all
return await query.execute()
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/aredis_om/model/model.py", line 725, in execute
raw_result = await self.model.db().execute_command(*args)
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/aioredis/client.py", line 1085, in execute_command
return await self.parse_response(conn, command_name, **options)
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/aioredis/client.py", line 1101, in parse_response
response = await connection.read_response()
File "/home/user/.cache/pypoetry/virtualenvs/app-iUv8FE9o-py3.8/lib/python3.8/site-packages/aioredis/connection.py", line 919, in read_response
raise response from None
aioredis.exceptions.ResponseError: unknown command `ft.search`, with args beginning with: `:app.models.redis.item.Item:index`, `*`, `LIMIT`, `0`, `10`,
```
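A note on the final line of that traceback: `FT.SEARCH` comes from the RediSearch module, which is bundled with Redis Stack but absent from a plain `redis` server, and redis-om's `.find()` depends on it. So the usual fix is connecting to a `redis-stack` instance rather than plain Redis. A small hypothetical helper (not part of redis-om) that recognizes this failure mode from the error text:

```python
def needs_redis_stack(error_message: str) -> bool:
    """Return True when a Redis error indicates the RediSearch module is
    missing. redis-om's .find() issues FT.SEARCH, which only exists when
    RediSearch is loaded (bundled with Redis Stack); a plain `redis`
    server rejects it as an unknown command, exactly as in the traceback.
    """
    msg = error_message.lower()
    return "unknown command" in msg and "ft.search" in msg

print(needs_redis_stack("unknown command `ft.search`, with args beginning with: `:app.models.redis.item.Item:index`"))
```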
If you need more information from me, let me know! Thank you in advance. | closed | 2022-07-30T13:31:33Z | 2022-09-15T19:47:26Z | https://github.com/redis/redis-om-python/issues/322 | [] | johnson2427 | 7 |
wandb/wandb | tensorflow | 8,944 | [Bug]: Files deleted through API are not really deleted | ### Describe the bug
Recently there was a bug in the wandb API that was crashing the `file.delete()` call (issue #8753) in a simple deletion script such as:
```python
import wandb
entity = wandb.setup()._get_username()
api = wandb.Api()
project = ... # some project name
run_id = ... # some run ID
run = api.run(f"{entity}/{project}/{run_id}")
files = run.files()
files[0].delete() # Runs fine now but the files don't seem to get deleted
```
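To double-check deletion independently of the web UI, one can poll the file list through the API after calling `delete()`. A hypothetical helper with the wandb call injected as a callable, so the logic stays runnable without credentials (`list_files` would be something like `lambda: [f.name for f in api.run(path).files()]` against a re-fetched run):

```python
import time

def wait_until_deleted(list_files, name, timeout_s=30.0, poll_s=1.0, sleep=time.sleep):
    """Poll `list_files()` until `name` no longer appears, or time out.

    `list_files` is injected so the helper itself needs no wandb
    credentials; `sleep` is injectable for the same reason.
    """
    waited = 0.0
    while waited <= timeout_s:
        if name not in list_files():
            return True
        sleep(poll_s)
        waited += poll_s
    return False

# Stubbed example: the "file" disappears on the third poll.
snapshots = iter([["a.txt", "b.txt"], ["a.txt", "b.txt"], ["b.txt"]])
print(wait_until_deleted(lambda: next(snapshots), "a.txt", sleep=lambda s: None))
```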
The bug has seemingly been fixed. It no longer crashes; however, I believe the files are either not getting deleted or at least the deletion is not registered on the wandb web interface.
I have been running deletion scripts to selectively delete data that is no longer needed. However, nothing changed in the space reported on the web interface. I understand the interface might take a few hours to update. However, in my case it's several days now.
I am over quota and basically blocked right now, with no good way to delete the files I no longer need. | closed | 2024-11-25T10:14:55Z | 2024-12-13T10:32:47Z | https://github.com/wandb/wandb/issues/8944 | [
"ty:bug",
"a:app"
] | radekd91 | 7 |
modin-project/modin | pandas | 6,970 | Implement to/from_ray_dataset functions | closed | 2024-02-27T15:01:05Z | 2024-03-05T13:28:41Z | https://github.com/modin-project/modin/issues/6970 | [] | Retribution98 | 0 | |
AntonOsika/gpt-engineer | python | 881 | 404 | ## 👋 Welcome!
We’re using Discussions as a place to connect with other members of our community. We hope that you:
* Ask questions you’re wondering about.
* Share ideas.
* Engage with other community members.
* Welcome others and are open-minded. Remember that this is a community we
build together 💪.
To get started, comment below with an introduction of yourself and tell us about what you do with this community.
_Originally posted by @AntonOsika in https://github.com/AntonOsika/gpt-engineer/discussions/98_ | closed | 2023-12-01T04:53:54Z | 2023-12-01T08:22:21Z | https://github.com/AntonOsika/gpt-engineer/issues/881 | [] | creatorbd66 | 1 |
aminalaee/sqladmin | fastapi | 497 | Use relative URLs instead of absolute URLs | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
When I try out the package without HTTPS, all URLs work including static assets but when I turn on HTTPS, there are issues with mixed content as an absolute URL is generated for all assets instead of relative URLs. This is stopping me from using this package in PROD deployments where HTTPS is mandatory.
### Describe the solution you would like.
All URLs are relative to the admin URL
### Describe alternatives you considered
_No response_
### Additional context
_No response_ | closed | 2023-05-17T16:19:12Z | 2024-02-12T10:35:35Z | https://github.com/aminalaee/sqladmin/issues/497 | [] | allentv | 2 |
widgetti/solara | flask | 999 | When embedding PDFs with `Columns` it does not fill the entire height | <!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
When a column component added with `Columns` contains an `embed`, an `iframe`, etc., it fills the entire height
## Current Behavior
When using `solara.Columns([1, 1, 1])` with `solara.HTML(tag="embed", ...)` it doesn't appear possible to fill that component to the height of the column object
## Steps to Reproduce the Problem
<!--- Provide a link to a live example, for example via [PyCafe](https://py.cafe), and/or an unambiguous -->
<!--- set of steps to reproduce this bug. Include code, if relevant -->
Script to reproduce is in the gist:
https://gist.github.com/HeardACat/4c650a9c69a698ab995bc4d4d4806d8b
## Specifications
- Solara Version: 1.44
- Platform: macOS/Linux
- Affected Python Versions: 3.11
Screenshots:
<img width="1724" alt="Image" src="https://github.com/user-attachments/assets/df983d3c-2e6e-49df-a612-52f5cfc741cc" />
Another screenshot with dummy text for the other two columns
<img width="841" alt="Image" src="https://github.com/user-attachments/assets/93631c99-e2c5-44cf-9809-8bca005da67c" /> | closed | 2025-02-10T15:45:45Z | 2025-02-16T18:45:20Z | https://github.com/widgetti/solara/issues/999 | [] | HeardACat | 2 |
piskvorky/gensim | nlp | 2,752 | n | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
What are you trying to achieve? What is the expected result? What are you seeing instead?
#### Steps/code/corpus to reproduce
Include full tracebacks, logs and datasets if necessary. Please keep the examples minimal ("minimal reproducible example").
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
| closed | 2020-02-13T08:09:49Z | 2020-02-13T08:38:02Z | https://github.com/piskvorky/gensim/issues/2752 | [] | milind29 | 1 |
QuivrHQ/quivr | api | 3,572 | Retrieval + generation eval: run RAG on dataset questions | open | 2025-01-28T18:14:07Z | 2025-01-28T18:14:10Z | https://github.com/QuivrHQ/quivr/issues/3572 | [] | jacopo-chevallard | 1 | |
cleanlab/cleanlab | data-science | 355 | Juypter notebook tutorials: Better support for code drop-downs | Currently the drop-down cells are implemented via Markdown with special html tags.
This means the code to be displayed in these cells is duplicated:
* once to be displayed in the Markdown drop-down.
* once to be actually executed in a hidden code cell of the Jupyter notebook.
Would be good to avoid this code duplication if a better solution can be found. | open | 2022-08-23T23:35:47Z | 2023-03-06T14:29:13Z | https://github.com/cleanlab/cleanlab/issues/355 | [] | jwmueller | 0 |
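One way to avoid the duplication described above: generate the Markdown drop-down from the same source string that the hidden cell executes, so the snippet is written only once. An illustrative sketch (hypothetical helper, not cleanlab's actual tutorial tooling):

```python
FENCE = "`" * 3  # avoid literal triple backticks inside this example

def dropdown_markdown(code: str, summary: str = "Click to see the code") -> str:
    """Render a <details> drop-down around a Python snippet.

    Because the drop-down is built from the same `code` string that the
    hidden executed cell would use, the snippet exists in one place only.
    """
    return (
        f"<details><summary>{summary}</summary>\n\n"
        f"{FENCE}python\n{code}\n{FENCE}\n\n</details>"
    )

md = dropdown_markdown("import cleanlab\nprint(cleanlab.__version__)")
print("<details>" in md and "import cleanlab" in md)
```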
pywinauto/pywinauto | automation | 981 | How to access Popup Menu from Application | ## Expected Behavior
Be able to inspect / access the opened Popup Menu
## Actual Behavior
Popup Menu closes as far as I access the Application
## Steps to Reproduce the Problem
1. Connect to Application
2. Open Popup Menu
3. Try to access Popup Menu
4. Menu closes befor it can be accessed
## Short Example of Code to Demonstrate the Problem
```
import pywinauto
# Open App
clientProcess = pywinauto.Application('uia').start('C:\\tmp\\myapp.exe')
# Open Popup Menu
clientProcess.MyApp.Custom.children()[4].click_input()
# Click `Disconnect` Item in Popup Menu
clientProcess.MyApp.Disconnect.click_input() # Fails because accessing 'clientProcess.MyApp'
# triggers the Popup Menu to close
# Close App
clientProcess.kill()
```
## Specifications
- Pywinauto version: 0.6.6
- Python version and bitness: 3.7.2 64 Bit
- Platform and OS: Windwos 10
| closed | 2020-09-10T13:24:51Z | 2020-09-18T10:56:41Z | https://github.com/pywinauto/pywinauto/issues/981 | [
"question"
] | dschiller | 4 |
sqlalchemy/alembic | sqlalchemy | 1,343 | Is there a existing way to log each operation when running upgrade? | **Describe the use case**
We're having an issue where some of our revisions are _intermittently_ failing, i.e. running `alembic upgrade heads` against a clean database will usually work but sometimes 2 particular revisions will fail.
The two revisions contain many `op.{method}` statements to create tables and add foreign keys. And while the error implies that there is an issue creating the table, we don't want to rule out that it is actually a foreign key created after the table that fails, and that the error is misleading.
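There is no dedicated per-operation flag that I am aware of, but every statement an upgrade emits can be logged by raising SQLAlchemy's engine logger to INFO in the logging sections that `alembic init` writes into `alembic.ini` (a sketch assuming the default config layout):

```ini
# alembic.ini: default logging sections generated by `alembic init`.
# Raising sqlalchemy.engine from WARN to INFO logs every statement the
# upgrade emits (CREATE TABLE, ALTER TABLE ... ADD CONSTRAINT, ...),
# so the statement that actually fails is visible in the output.
[logger_sqlalchemy]
level = INFO
handlers =
qualname = sqlalchemy.engine
```

With that in place, a failing `op.create_foreign_key` would appear as the last logged `ALTER TABLE` before the traceback, which should settle whether the table creation or a later foreign key is the intermittent culprit.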
**Databases / Backends / Drivers targeted**
MySQL/pymysql
**Additional context**
I appreciate this is more of a support question than an issue, but I just assumed there would be an option to log each operation, not just each revision; looking through the code I don't spot anything. I also searched existing issues but nothing stood out to me. | closed | 2023-11-06T17:06:14Z | 2023-11-06T18:11:02Z | https://github.com/sqlalchemy/alembic/issues/1343 | [
"use case"
] | notatallshaw-gts | 0 |