| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/transformers | tensorflow | 36,157 | Add functionality to save model when training unexpectedly terminates | ### Feature request
I'm thinking of implementing it like this:
```python
try:
    trainer.train(resume_from_checkpoint=args.resume_from_checkpoint)
finally:
    trainer._save_checkpoint(trainer.model, None)
```
I want to utilize the characteristics of 'finally' to ensure that the model is saved at least once at the end,
even if the training terminates unexpectedly.
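For what it's worth, the guarantee `finally` gives can be sketched with plain Python (the names below are stand-ins, not the actual `Trainer` API): the cleanup runs on normal exit and on exceptions — including signals Python converts to exceptions, such as SIGINT — but not on SIGKILL, which no user code can intercept.

```python
checkpoints = []

def save_checkpoint(tag):
    # stand-in for trainer._save_checkpoint(trainer.model, None)
    checkpoints.append(tag)

def train():
    for step in range(10):
        if step == 3:
            # simulate a crash / a termination signal turned into an exception
            raise RuntimeError("training terminated unexpectedly")

try:
    try:
        train()
    finally:
        save_checkpoint("final")  # runs on normal exit AND on exceptions
except RuntimeError:
    pass

print(checkpoints)  # → ['final']
```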
### Motivation
Sometimes we need to terminate training unintentionally due to scheduling or various other issues.
If the model checkpoint hasn't been saved even after training has progressed to some extent,
all the training resources used until now are wasted.
### Your contribution
Therefore, I want to add functionality to save the model checkpoint unconditionally
even if the process is terminated by an error or kill signal unintentionally.
And I want to control this through train_args. | closed | 2025-02-13T07:37:29Z | 2025-02-14T11:30:52Z | https://github.com/huggingface/transformers/issues/36157 | [
"Feature request"
] | jp1924 | 3 |
tensorpack/tensorpack | tensorflow | 646 | do preprocess before making lmdb | A simple question: is it possible to do all the preprocessing before writing to LMDB, so that when training, we won't be bottlenecked by CPU preprocessing?
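One way to sketch the idea — run the expensive preprocessing once, offline, and have the training loop only read back the results — using stdlib `shelve` as a stand-in for LMDB (tensorpack's own dataflow serializers would be the idiomatic tool here; this only shows the shape of the approach):

```python
import os
import shelve
import tempfile

def preprocess(sample):
    # expensive CPU work, done once, offline
    return [x * 2 for x in sample]

raw_data = [[1, 2], [3, 4], [5, 6]]

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "preprocessed.db")

    # offline pass: preprocess and persist
    with shelve.open(path) as db:
        for i, sample in enumerate(raw_data):
            db[str(i)] = preprocess(sample)

    # training-time pass: cheap reads, no preprocessing
    with shelve.open(path) as db:
        loaded = [db[str(i)] for i in range(len(raw_data))]

print(loaded)  # → [[2, 4], [6, 8], [10, 12]]
```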
| closed | 2018-02-09T11:27:56Z | 2018-05-30T20:59:35Z | https://github.com/tensorpack/tensorpack/issues/646 | [
"usage"
] | twangnh | 2 |
tflearn/tflearn | tensorflow | 477 | About training by AlexNet | When I tried "tflearn/examples/images/alexnet.py", I found that "X, Y = oxflower17.load_data(one_hot=True, resize_pics=(227, 227))" loads a weight file, "17flowers.pkl", for training. If I did not use such a file and only used images for training, could I obtain a trained model similar to the one obtained via "17flowers.pkl"? | open | 2016-11-21T07:39:11Z | 2017-05-11T14:48:12Z | https://github.com/tflearn/tflearn/issues/477 | [] | joei1981 | 7 |
mwaskom/seaborn | data-visualization | 3,693 | Is there a plan to add internal axvline/axhline support to seaborn.objects soon? | I am currently in the process of updating my book for its second edition, and am changing the Python plotting code I suggest to readers to use `seaborn.objects` where possible.
However, the plotting code I use in the book contains quite a few `plt.axvline()` and `plt.axhline()` lines! I am now changing these to the fairly laborious `fig = plt.figure(); (so.Plot().on(fig).etc.); fig.axes[0].axhline()`.
I know there have been several issues previously about adding `axhline/axvline` support, and I know you've expressed interest in supporting it and that there are barriers to doing so. My main question is whether there's a plan to implement this on the horizon **soon**, i.e. I should be planning to revise my book text to reflect the new implementation before I publish the thing. If there's not, that's fine, I'm still planning to make the `seaborn.objects` switch. But this thing will be in print for quite a while and I'd rather it not get out of date *that* fast.
Thank you! | closed | 2024-05-10T21:23:12Z | 2025-01-26T15:36:55Z | https://github.com/mwaskom/seaborn/issues/3693 | [] | NickCH-K | 9 |
deepspeedai/DeepSpeed | machine-learning | 6,518 | nv-nightly CI test failure | The Nightly CI for https://github.com/microsoft/DeepSpeed/actions/runs/10783394303 failed.
| closed | 2024-09-10T02:58:16Z | 2024-09-11T15:16:18Z | https://github.com/deepspeedai/DeepSpeed/issues/6518 | [
"ci-failure"
] | github-actions[bot] | 1 |
huggingface/datasets | numpy | 6,846 | Unimaginable super slow iteration | ### Describe the bug
Assuming there is a dataset with 52,000 sentences, each with a length of 500, it takes 20 seconds to extract a single sentence from the dataset… Is there something wrong with my iteration?
### Steps to reproduce the bug
```python
import datasets
import time
import random
num_rows = 52000
num_cols = 500
random_input = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)]
random_output = [[random.randint(1, 100) for _ in range(num_cols)] for _ in range(num_rows)]
s = time.time()
d = {'random_input': random_input, 'random_output': random_output}
dataset = datasets.Dataset.from_dict(d)
print('from dict', time.time() - s)
print(dataset)
for i in range(len(dataset)):
    aa = time.time()
    a, b = dataset['random_input'][i], dataset['random_output'][i]
    print(time.time() - aa)
```
corresponding output
```bash
from dict 9.215498685836792
Dataset({
features: ['random_input', 'random_output'],
num_rows: 52000
})
19.129778146743774
19.329464197158813
19.27668261528015
19.28557538986206
19.247620582580566
19.624247074127197
19.28673791885376
19.301053047180176
19.290496110916138
19.291821718215942
19.357765197753906
```
### Expected behavior
Under normal circumstances, iteration should be very fast, as it involves nothing beyond getting items.
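As an aside, `dataset['random_input'][i]` asks for the entire column on every loop iteration, whereas row-first access (`dataset[i]['random_input']`) touches a single row. A stdlib-only toy model of that access-pattern difference (an analogy, not `datasets` itself):

```python
import time

num_rows, num_cols = 5_000, 50
store = {
    "random_input": [[1] * num_cols for _ in range(num_rows)],
    "random_output": [[1] * num_cols for _ in range(num_rows)],
}

class ColumnMajorDataset:
    """Toy stand-in: column access materializes a copy of the whole
    column (roughly what dataset['col'] has to do); row access does not."""
    def __init__(self, data):
        self._data = data

    def __getitem__(self, key):
        if isinstance(key, str):
            return list(self._data[key])  # full-column copy on every access
        return {k: v[key] for k, v in self._data.items()}  # one row

ds = ColumnMajorDataset(store)

t = time.time()
for i in range(100):
    a, b = ds["random_input"][i], ds["random_output"][i]  # column-first
col_time = time.time() - t

t = time.time()
for i in range(100):
    row = ds[i]
    a, b = row["random_input"], row["random_output"]      # row-first
row_time = time.time() - t

print(col_time > row_time)  # the column copy is paid on every iteration
```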
### Environment info
- `datasets` version: 2.19.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.13
- `huggingface_hub` version: 0.21.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | closed | 2024-04-28T05:24:14Z | 2024-05-06T08:30:03Z | https://github.com/huggingface/datasets/issues/6846 | [] | rangehow | 1 |
cupy/cupy | numpy | 8,091 | CSR & CSC sum along Axis slow | ### Description
I recently had a user notice a slowdown while summing a sparse CSR matrix along the major axis. After some tinkering, I created a new method for summing across the major axis that works for CSR and CSC matrices and doesn't rely on matrix multiplication.
All the speedups from 0.9.3 to 0.9.4 are because of the new kernel.
https://github.com/scverse/rapids_singlecell/issues/111#issuecomment-1879554381
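The gist of such a kernel — sketched here in pure Python over the CSR layout, not the linked CUDA code — is that a major-axis sum needs only one pass over `indptr`, with no ones-vector multiplication:

```python
# A 3x3 sparse matrix in CSR form:
#   [[1, 0, 2],
#    [0, 3, 0],
#    [0, 0, 4]]
data = [1.0, 2.0, 3.0, 4.0]
indices = [0, 2, 1, 2]      # column of each stored value
indptr = [0, 2, 3, 4]       # row r owns data[indptr[r]:indptr[r + 1]]

def csr_sum_major_axis(data, indptr):
    # one contiguous slice-sum per row; embarrassingly parallel on a GPU
    return [sum(data[indptr[r]:indptr[r + 1]]) for r in range(len(indptr) - 1)]

print(csr_sum_major_axis(data, indptr))  # → [3.0, 3.0, 4.0]
```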
Would this be something you are interested in?
### Additional Information
Link to kernel: https://github.com/scverse/rapids_singlecell/blob/main/src/rapids_singlecell/preprocessing/_kernels/_norm_kernel.py | open | 2024-01-08T10:29:26Z | 2024-01-09T05:46:35Z | https://github.com/cupy/cupy/issues/8091 | [
"contribution welcome",
"cat:performance"
] | Intron7 | 1 |
samuelcolvin/dirty-equals | pytest | 112 | Two test regressions related to IP addresses in Python 3.14 | To run tests in `tests/test_other.py` without Pydantic (which I can’t yet install in a Python 3.14 virtualenv), I hacked up `tests/test_other.py` by removing all the tests that call `IsUUID` or `IsUrl`. Then:
```
$ git clone https://github.com/samuelcolvin/dirty-equals.git
$ cd dirty-equals
$ python3.14 --version
Python 3.14.0a5
$ python3.14 -m venv _e
$ . _e/bin/activate
(_e) $ pip install -e .
(_e) $ pip install -r requirements/tests.txt
(_e) $ TZ=UTC python -m pytest
======================================== test session starts =========================================
platform linux -- Python 3.14.0a5, pytest-8.3.2, pluggy-1.5.0
rootdir: /home/ben/src/forks/dirty-equals
configfile: pyproject.toml
testpaths: tests
plugins: mock-3.14.0, examples-0.0.13, pretty-1.2.0
collected 581 items
tests/test_base.py ................................ [ 5%]
tests/test_boolean.py ............................................ [ 13%]
tests/test_datetime.py .................................................. [ 21%]
tests/test_dict.py .......................................................... [ 31%]
tests/test_docs.py ssssssssssssssssssssssssssssssssssssssssssssssssssssssssssss [ 41%]
tests/test_inspection.py ............................ [ 46%]
tests/test_list_tuple.py ..................................................................... [ 58%]
......... [ 60%]
tests/test_numeric.py ........................................................................ [ 72%]
........... [ 74%]
tests/test_other.py ................F [ 77%]
tests/test_other.py:116 test_is_ip_true[other1-dirty1] - AssertionError: assert IPv4Networ…
tests/test_other.py .F [ 77%]
tests/test_other.py:116 test_is_ip_true[other3-dirty3] - AssertionError: assert IPv6Networ…
tests/test_other.py ...................................................................... [ 89%]
tests/test_strings.py ........................................................... [100%]
============================================== FAILURES ==============================================
___________________________________ test_is_ip_true[other1-dirty1] ___________________________________
other = IPv4Network('43.48.0.0/12'), dirty = IsIP()
@pytest.mark.parametrize(
'other,dirty',
[
(IPv4Address('127.0.0.1'), IsIP()),
(IPv4Network('43.48.0.0/12'), IsIP()),
(IPv6Address('::eeff:ae3f:d473'), IsIP()),
(IPv6Network('::eeff:ae3f:d473/128'), IsIP()),
('2001:0db8:0a0b:12f0:0000:0000:0000:0001', IsIP()),
('179.27.154.96', IsIP),
('43.62.123.119', IsIP(version=4)),
('::ffff:2b3e:7b77', IsIP(version=6)),
('0:0:0:0:0:ffff:2b3e:7b77', IsIP(version=6)),
('54.43.53.219/10', IsIP(version=4, netmask='255.192.0.0')),
('::ffff:aebf:d473/12', IsIP(version=6, netmask='fff0::')),
('2001:0db8:0a0b:12f0:0000:0000:0000:0001', IsIP(version=6)),
(3232235521, IsIP()),
(b'\xc0\xa8\x00\x01', IsIP()),
(338288524927261089654018896845572831328, IsIP(version=6)),
(b'\x20\x01\x06\x58\x02\x2a\xca\xfe\x02\x00\x00\x00\x00\x00\x00\x01', IsIP(version=6)),
],
)
def test_is_ip_true(other, dirty):
> assert other == dirty
E AssertionError: assert IPv4Network('43.48.0.0/12') == IsIP()
tests/test_other.py:139: AssertionError
___________________________________ test_is_ip_true[other3-dirty3] ___________________________________
other = IPv6Network('::eeff:ae3f:d473/128'), dirty = IsIP()
@pytest.mark.parametrize(
'other,dirty',
[
(IPv4Address('127.0.0.1'), IsIP()),
(IPv4Network('43.48.0.0/12'), IsIP()),
(IPv6Address('::eeff:ae3f:d473'), IsIP()),
(IPv6Network('::eeff:ae3f:d473/128'), IsIP()),
('2001:0db8:0a0b:12f0:0000:0000:0000:0001', IsIP()),
('179.27.154.96', IsIP),
('43.62.123.119', IsIP(version=4)),
('::ffff:2b3e:7b77', IsIP(version=6)),
('0:0:0:0:0:ffff:2b3e:7b77', IsIP(version=6)),
('54.43.53.219/10', IsIP(version=4, netmask='255.192.0.0')),
('::ffff:aebf:d473/12', IsIP(version=6, netmask='fff0::')),
('2001:0db8:0a0b:12f0:0000:0000:0000:0001', IsIP(version=6)),
(3232235521, IsIP()),
(b'\xc0\xa8\x00\x01', IsIP()),
(338288524927261089654018896845572831328, IsIP(version=6)),
(b'\x20\x01\x06\x58\x02\x2a\xca\xfe\x02\x00\x00\x00\x00\x00\x00\x01', IsIP(version=6)),
],
)
def test_is_ip_true(other, dirty):
> assert other == dirty
E AssertionError: assert IPv6Network('::eeff:ae3f:d473/128') == IsIP()
tests/test_other.py:139: AssertionError
Summary of Failures
┏━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃ File ┃ Function ┃ Function Line ┃ Error Line ┃ Error ┃
┡━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ tests/test_other.py │ test_is_ip_true[oth… │ 117 │ 139 │ AssertionError │
│ tests/test_other.py │ test_is_ip_true[oth… │ 117 │ 139 │ AssertionError │
└───────────────────────┴────────────────────────┴─────────────────┴──────────────┴──────────────────┘
Results (0.32s):
2 failed
519 passed
60 skipped
``` | open | 2025-03-18T11:35:16Z | 2025-03-18T11:35:16Z | https://github.com/samuelcolvin/dirty-equals/issues/112 | [] | musicinmybrain | 0 |
huggingface/datasets | numpy | 6,485 | FileNotFoundError: [Errno 2] No such file or directory: 'nul' | ### Describe the bug
It seems that something is wrong with my setup. When I run this code, "import datasets",
I get this error: FileNotFoundError: [Errno 2] No such file or directory: 'nul'
### Steps to reproduce the bug
1. `import datasets`
### Expected behavior
I just run a single line of code and get stuck on this bug.
### Environment info
OS: Windows10
Datasets==2.15.0
python=3.10 | closed | 2023-12-11T08:52:13Z | 2023-12-14T08:09:08Z | https://github.com/huggingface/datasets/issues/6485 | [] | amanyara | 1 |
ExpDev07/coronavirus-tracker-api | rest-api | 75 | 1 day lag with JHU CSSE Data | Hello !
Thanks a lot for this API, it is exactly what I was looking for.
It seems that there is a lot of lag and the data is always one day behind.
Is there a way to fix it and be up to date ?
Thanks again ! | closed | 2020-03-18T14:07:29Z | 2020-03-24T19:11:23Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/75 | [
"feedback"
] | mploux | 7 |
allenai/allennlp | data-science | 5,003 | Train a region detector on the features from Visual Genome | This is a project in computer vision, rather than natural language processing. It is here because we have found this `RegionEmbedder` to be important for down-stream tasks that combine vision and language features.
In AllenNLP, `RegionDetector`s take an image and predict "regions of interest". Each region is represented by some coordinates and a vector expressing the contents of the region.
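Schematically (a hypothetical sketch, not AllenNLP's actual classes), each region pairs coordinates with a feature vector:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Region:
    box: Tuple[float, float, float, float]  # (x1, y1, x2, y2) in image coordinates
    features: List[float]                   # vector expressing the region's contents

r = Region(box=(10.0, 20.0, 110.0, 220.0), features=[0.1, 0.9])
print(r.box[2] - r.box[0])  # region width → 100.0
```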
[Visual Genome](http://visualgenome.org) is a dataset containing millions of such regions. This task about training a new region detector on the Visual Genome dataset.
Most of the meat of the model will not be implemented from scratch. Rather, we will use the components that `torchvision` gives us. Most of the work will be in writing a dataset reader that can read the visual genome features, and writing a model that is basically an adapter between the AllenNLP formats and the `torchvision` formats.
This project has many moving parts, and will likely be a bit on the difficult side. | open | 2021-02-19T03:01:09Z | 2021-03-09T23:41:02Z | https://github.com/allenai/allennlp/issues/5003 | [
"Contributions welcome",
"Models",
"hard"
] | dirkgr | 0 |
lk-geimfari/mimesis | pandas | 971 | Russian locale is unreasonably big | [person.json](https://github.com/lk-geimfari/mimesis/blob/master/mimesis/data/ru/person.json) for the Russian locale is oversized: compare its 158k lines with the 6k lines of the [english version](https://github.com/lk-geimfari/mimesis/blob/master/mimesis/data/en/person.json). The dictionaries contain a ton of junk entries like "Абдулгаллямов", "Некрут", "Прибыток", "Хач", "Шт", etc., which impacts performance and memory.
Rapid benchmarking
```python
#!/usr/bin/env python3
from mimesis import Person
import time
def run_test(locale, n):
    start_time = time.time()
    gen = Person(locale)
    for i in range(0, n):
        _ = gen.last_name()
    end_time = time.time()
    print('%s: %d operations, %f seconds' % (locale, n, end_time - start_time))

N = 1000000
run_test('ru', N)
run_test('en', N)
```
shows such results
```
ru: 1000000 operations, 8.939918 seconds
en: 1000000 operations, 1.817508 seconds
```
Dictionaries need to be washed. | closed | 2020-11-15T09:36:21Z | 2021-01-30T03:12:40Z | https://github.com/lk-geimfari/mimesis/issues/971 | [
"stale"
] | alexanderfefelov | 3 |
lazyprogrammer/machine_learning_examples | data-science | 94 | adding random forest tutorial section | I want to add a random forest tutorial section to this repository. I mistakenly created two PRs, which is why I deleted the previous one.
Please assign this issue to me; I honestly want to work on it. | closed | 2024-06-20T12:51:19Z | 2025-02-19T23:07:56Z | https://github.com/lazyprogrammer/machine_learning_examples/issues/94 | [] | RajKhanke | 1 |
autogluon/autogluon | data-science | 3,914 | Update on 2023 Roadmap Tasks and Request for 2024 Roadmap | Hello,
Could you please provide an update on the 2023 roadmap tasks? The last update was about two years ago (#2590). It would be helpful to know which tasks have been completed, which are currently underway, and if there are any that have been deferred or changed.
Additionally, as we're now in 2024, I was wondering if there is a roadmap for 2024 in the works. If a 2024 roadmap is being developed, could you share any insights or timelines on when it might be available?
Thank you for your time and consideration. | open | 2024-02-12T14:19:30Z | 2024-10-24T17:19:57Z | https://github.com/autogluon/autogluon/issues/3914 | [] | gkoulis | 0 |
s3rius/FastAPI-template | asyncio | 209 | Kafka container failed to start ... | Just started a new project with Kafka dependency, got this error when running docker-compose:
> docker-compose -f deploy/docker-compose.yml -f deploy/docker-compose.dev.yml --project-directory . up --build
```
kafka-1 | [2024-06-26 04:23:45,075] INFO Loading logs from log dirs ArrayBuffer(/bitnami/kafka/data) (kafka.log.LogManager)
kafka-1 | [2024-06-26 04:23:45,160] INFO Attempting recovery for all logs in /bitnami/kafka/data since no clean shutdown file was found (kafka.log.LogManager)
kafka-1 | [2024-06-26 04:23:45,514] INFO Loaded 0 logs in 438ms. (kafka.log.LogManager)
kafka-1 | [2024-06-26 04:23:45,535] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka-1 | [2024-06-26 04:23:45,606] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka-1 | [2024-06-26 04:23:46,012] INFO Starting the log cleaner (kafka.log.LogCleaner)
kafka-1 | [2024-06-26 04:23:47,366] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
kafka-1 | [2024-06-26 04:23:59,841] INFO [BrokerToControllerChannelManager broker=1 name=forwarding]: Starting (kafka.server.BrokerToControllerRequestThread)
dependency failed to start: container market_insights-kafka-1 exited (137)
make: *** [dev] Error 1
```
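(A hedged aside: exit code 137 is 128 + 9, i.e. the container received SIGKILL, which under Docker most often means a memory limit was hit and the OOM killer fired. The arithmetic can be checked locally:)

```shell
# Exit status 137 = 128 + 9: the process received SIGKILL (signal 9);
# under Docker this usually points at the OOM killer / a memory limit.
sh -c 'kill -9 $$'
echo $?   # prints 137
```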
How to understand this error? I'm running this on my Mac machine. Thanks | closed | 2024-06-26T04:33:06Z | 2024-06-27T04:52:09Z | https://github.com/s3rius/FastAPI-template/issues/209 | [] | rcholic | 1 |
mithi/hexapod-robot-simulator | dash | 3 | robot pose wrt to body not wrt to feet on ground | The body should be positioned with respect to the feet;
right now the feet are positioned with respect to the body. | closed | 2020-02-18T07:41:07Z | 2020-02-19T17:20:34Z | https://github.com/mithi/hexapod-robot-simulator/issues/3 | [
"bug"
] | mithi | 0 |
eriklindernoren/ML-From-Scratch | deep-learning | 49 | Genetic algorithm mutation rate | Issue with the genetic algorithm commit under Unsupervised learning.
The approach taken to trigger mutation is a bit unrealistic: the mutation rate provided by the user isn't treated as a rate in its original sense, but is instead compared directly with NumPy's random value, which makes it useless even if a high mutation rate was set. | closed | 2018-09-09T18:01:23Z | 2018-09-29T09:15:58Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/49 | [] | stillbigjosh | 7 |
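For contrast, the conventional treatment of a mutation rate as a per-gene probability looks like this (a generic sketch, not the repository's code):

```python
import random

def mutate(genome, mutation_rate):
    # each gene mutates independently with probability `mutation_rate`
    return [1 - g if random.random() < mutation_rate else g for g in genome]

print(mutate([0, 1, 0, 1], mutation_rate=1.0))  # rate 1.0 flips every gene: [1, 0, 1, 0]
print(mutate([0, 1, 0, 1], mutation_rate=0.0))  # rate 0.0 flips nothing: [0, 1, 0, 1]
```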
python-restx/flask-restx | api | 33 | Exception data is returned instead of custom error handle data | Hi,
I'm having an issue with the error handler which does not return to the client the correct data. Here's a simple test case that currently fail:
### **Code**
```python
def test_errorhandler_for_custom_exception_with_data(self, app, client):
    api = restx.Api(app)

    class CustomException(RuntimeError):
        data = "Foo Bar"

    @api.route('/test/', endpoint='test')
    class TestResource(restx.Resource):
        def get(self):
            raise CustomException('error')

    @api.errorhandler(CustomException)
    def handle_custom_exception(error):
        return {'message': str(error), 'test': 'value'}, 400

    response = client.get('/test/')
    assert response.status_code == 400
    assert response.content_type == 'application/json'
    data = json.loads(response.data.decode('utf8'))
    assert data == {
        'message': 'error',
        'test': 'value',
    }
```
```
E AssertionError: assert 'Foo Bar' == {'message': 'error', 'test': 'value'}
```
### **Repro Steps**
1. Register an error handler
2. Raise an exception having the attribute **data** (such as Marshmallow ValidationError)
3. The client gets back the value of the data attribute of the exception, not the one from the error handler
### **Expected Behavior**
Should return to the client the return value of the error handler
### **Actual Behavior**
Output the exception attribute instead
### **Environment**
- Python version: 3.6 and 3.7
- Flask version: 1.1.1
- Flask-RESTX version: 0.1.0
### **Additional Context**
The issue appears when the exception has an attribute **data**, because in **api.py** line 655 (*handle_error*) there's a ```data = getattr(e, 'data', default_data)```, so the handler returns the **data** attribute of the exception **e** instead of the custom handler data located in **default_data**
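A minimal illustration of why that `getattr` call shadows the handler's payload:

```python
class CustomException(RuntimeError):
    data = "Foo Bar"

default_data = {"message": "error", "test": "value"}
e = CustomException("error")

# getattr finds the class attribute `data` on the exception itself,
# so the handler's payload in `default_data` is never used:
print(getattr(e, "data", default_data))  # → Foo Bar
```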
Thanks!
| open | 2020-02-06T14:23:00Z | 2020-02-12T17:43:59Z | https://github.com/python-restx/flask-restx/issues/33 | [
"bug"
] | AchilleAsh | 2 |
wger-project/wger | django | 1,564 | Dynamic workouts | ## Use case
Be able to select which day (set of exercises) the client should do, based on a set of rules.
## Proposal
Instead of having a fixed schedule of which exercises should be done each day, introduce a new option to "code" which day you should do.
This decision could be based on the measurements of recent workouts, or on a measurement taken that same day after warming up.
For example, if you have a device to measure max strength or speed, you can take some measurements after warming up, enter them as "measurements", and let the dynamic workout choose which "day" (set of exercises) you should be doing.
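A sketch of the kind of rule such a dynamic day could encode (the thresholds are made up, purely illustrative):

```python
def choose_day(max_strength_kg: float) -> str:
    # hypothetical thresholds — the real rules would be user-configured
    if max_strength_kg >= 100:
        return "heavy day"
    if max_strength_kg >= 80:
        return "volume day"
    return "recovery day"

print(choose_day(85))  # → volume day
```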
Code:
https://github.com/adrianlzt/wger/tree/feature/dynamic_days
https://github.com/adrianlzt/wger-react/tree/feature/dynamic_days
https://github.com/wger-project/wger/assets/3237784/37612863-079d-4df2-8bea-8dfed0a24df2
| open | 2024-01-24T07:52:54Z | 2024-05-03T14:55:31Z | https://github.com/wger-project/wger/issues/1564 | [] | adrianlzt | 9 |
freqtrade/freqtrade | python | 11,465 | Exception when using download-data from Binance after some time | ## Describe your environment
* Operating system: Docker
* Python Version: 3.12.9 (`python -V`)
* CCXT version: 4.4.62 (`pip freeze | grep ccxt`)
* Freqtrade Version: 2025.2 (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
## Describe the problem:
When using download-data to fetch data from Binance, an exception occurs: ``Out of bounds nanosecond timestamp``. The way to make it show up is to download some data first and then, some days later (tested with 14 days), try to append the data again.
Even though the exception occurs, the data is still downloaded through the standard means, just not from Binance public data.
### Steps to reproduce:
1. Download data from Binance. ``freqtrade download-data -c ./user_data/config.json -t 5m --timerange 20231218-``
2. Wait two weeks.
3. Download the data again. ``freqtrade download-data -c ./user_data/config.json -t 5m --timerange 20231218-``
### Observed Results:
* What happened?
Exception occured and Freqtrade couldn't download the data from Binance's public data.
* What did you expect to happen?
No exception should occur and Freqtrade should download the data from Binance's public data.
### Relevant code exceptions or logs
```
2025-03-04 12:41:08,208 - freqtrade - INFO - freqtrade 2025.2
2025-03-04 12:41:08,863 - numexpr.utils - INFO - NumExpr defaulting to 16 threads.
2025-03-04 12:41:09,961 - freqtrade.configuration.load_config - INFO - Using config: ./user_data/config.json ...
2025-03-04 12:41:10,002 - freqtrade.loggers - INFO - Enabling colorized output.
2025-03-04 12:41:10,003 - freqtrade.loggers - INFO - Verbosity set to 0
2025-03-04 12:41:10,004 - freqtrade.configuration.configuration - INFO - Parameter --timerange detected: 20231218- ...
2025-03-04 12:41:10,323 - freqtrade.configuration.configuration - INFO - Using user-data directory: /freqtrade/user_data ...
2025-03-04 12:41:10,331 - freqtrade.configuration.configuration - INFO - Using data directory: /freqtrade/user_data/data/binance ...
2025-03-04 12:41:10,331 - freqtrade.configuration.configuration - INFO - timeframes --timeframes: ['5m']
2025-03-04 12:41:10,332 - freqtrade.configuration.configuration - INFO - Filter trades by timerange: 20231218-
2025-03-04 12:41:10,333 - freqtrade.exchange.check_exchange - INFO - Checking exchange...
2025-03-04 12:41:10,343 - freqtrade.exchange.check_exchange - INFO - Exchange "binance" is officially supported by the Freqtrade development team.
2025-03-04 12:41:10,343 - freqtrade.configuration.configuration - INFO - Using pairlist from configuration.
2025-03-04 12:41:10,344 - freqtrade.configuration.config_validation - INFO - Validating configuration ...
2025-03-04 12:41:11,312 - freqtrade.exchange.exchange - INFO - Instance is running with dry_run enabled
2025-03-04 12:41:11,313 - freqtrade.exchange.exchange - INFO - Using CCXT 4.4.62
2025-03-04 12:41:11,331 - freqtrade.exchange.exchange - INFO - Using Exchange "Binance"
2025-03-04 12:41:11,332 - freqtrade.resolvers.exchange_resolver - INFO - Using resolved exchange 'Binance'...
2025-03-04 12:41:11,333 - freqtrade.exchange.exchange - INFO - Markets were not loaded. Loading them now..
2025-03-04 12:41:13,775 - freqtrade.data.history.history_utils - INFO - About to download pairs: ['OM/USDT', 'VANRY/USDT', 'AMP/USDT', 'ACA/USDT', 'SANTOS/USDT', 'IQ/USDT', 'ASR/USDT', 'CREAM/USDT', 'FIS/USDT', 'BURGER/USDT', 'BSW/USDT',
'FARM/USDT', 'PEPE/USDT', 'FLOKI/USDT', 'RARE/USDT', 'FIDA/USDT', 'FTT/USDT'], intervals: ['5m'] to /freqtrade/user_data/data/binance
2025-03-04 12:41:13,904 - freqtrade.data.history.history_utils - INFO - Download history data for "OM/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:41:17,333 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:41:18,231 - freqtrade.data.history.history_utils - INFO - Downloaded data for OM/USDT with length 3237.
2025-03-04 12:41:18,565 - freqtrade.data.history.history_utils - INFO - Download history data for "VANRY/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:41:22,662 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:41:23,023 - freqtrade.data.history.history_utils - INFO - Downloaded data for VANRY/USDT with length 3237.
2025-03-04 12:41:23,212 - freqtrade.data.history.history_utils - INFO - Download history data for "AMP/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:41:26,397 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:41:26,769 - freqtrade.data.history.history_utils - INFO - Downloaded data for AMP/USDT with length 3237.
2025-03-04 12:41:27,084 - freqtrade.data.history.history_utils - INFO - Download history data for "ACA/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:41:30,450 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:41:31,498 - freqtrade.data.history.history_utils - INFO - Downloaded data for ACA/USDT with length 3237.
2025-03-04 12:41:31,739 - freqtrade.data.history.history_utils - INFO - Download history data for "SANTOS/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:41:35,106 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:41:35,475 - freqtrade.data.history.history_utils - INFO - Downloaded data for SANTOS/USDT with length 3237.
2025-03-04 12:41:35,735 - freqtrade.data.history.history_utils - INFO - Download history data for "IQ/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:41:38,859 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:41:39,213 - freqtrade.data.history.history_utils - INFO - Downloaded data for IQ/USDT with length 3237.
2025-03-04 12:41:39,420 - freqtrade.data.history.history_utils - INFO - Download history data for "ASR/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:41:42,696 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:41:43,056 - freqtrade.data.history.history_utils - INFO - Downloaded data for ASR/USDT with length 3237.
2025-03-04 12:41:43,311 - freqtrade.data.history.history_utils - INFO - Download history data for "CREAM/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:41:46,434 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:41:46,861 - freqtrade.data.history.history_utils - INFO - Downloaded data for CREAM/USDT with length 3237.
2025-03-04 12:41:47,058 - freqtrade.data.history.history_utils - INFO - Download history data for "FIS/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:41:50,389 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:41:50,957 - freqtrade.data.history.history_utils - INFO - Downloaded data for FIS/USDT with length 3237.
2025-03-04 12:41:51,164 - freqtrade.data.history.history_utils - INFO - Download history data for "BURGER/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:41:54,528 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:41:55,077 - freqtrade.data.history.history_utils - INFO - Downloaded data for BURGER/USDT with length 3237.
2025-03-04 12:41:55,460 - freqtrade.data.history.history_utils - INFO - Download history data for "BSW/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:41:59,201 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:42:00,284 - freqtrade.data.history.history_utils - INFO - Downloaded data for BSW/USDT with length 3237.
2025-03-04 12:42:00,511 - freqtrade.data.history.history_utils - INFO - Download history data for "FARM/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:43:09,069 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:43:09,912 - freqtrade.data.history.history_utils - INFO - Downloaded data for FARM/USDT with length 3237.
2025-03-04 12:43:10,170 - freqtrade.data.history.history_utils - INFO - Download history data for "PEPE/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:43:14,053 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:43:15,197 - freqtrade.data.history.history_utils - INFO - Downloaded data for PEPE/USDT with length 3237.
2025-03-04 12:43:15,369 - freqtrade.data.history.history_utils - INFO - Download history data for "FLOKI/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:43:18,723 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:43:19,537 - freqtrade.data.history.history_utils - INFO - Downloaded data for FLOKI/USDT with length 3237.
2025-03-04 12:43:19,778 - freqtrade.data.history.history_utils - INFO - Download history data for "RARE/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:43:23,452 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:43:23,834 - freqtrade.data.history.history_utils - INFO - Downloaded data for RARE/USDT with length 3237.
2025-03-04 12:43:24,022 - freqtrade.data.history.history_utils - INFO - Download history data for "FIDA/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:43:26,921 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:43:27,276 - freqtrade.data.history.history_utils - INFO - Downloaded data for FIDA/USDT with length 3237.
2025-03-04 12:43:27,501 - freqtrade.data.history.history_utils - INFO - Download history data for "FTT/USDT", 5m, spot and store in /freqtrade/user_data/data/binance. From 2025-02-21T06:55:00 to now
2025-03-04 12:43:30,628 - freqtrade.exchange.binance_public_data - WARNING - An exception raised: : Out of bounds nanosecond timestamp: 57111-06-14 00:00:00
2025-03-04 12:43:31,421 - freqtrade.data.history.history_utils - INFO - Downloaded data for FTT/USDT with length 3237.
```
| closed | 2025-03-04T12:53:06Z | 2025-03-05T19:19:15Z | https://github.com/freqtrade/freqtrade/issues/11465 | [
"Bug",
"Data download"
] | Debeet | 1 |
jpadilla/django-rest-framework-jwt | django | 482 | How to use this library by only using Http Only Cookie? | After using JWT tokens in an unsafe way for a little over a year, I've finally decided that I would like to fix my current setup.
I read everywhere that it is not good to save a JWT token on the client and that it is best to use an HttpOnly cookie.
I'm now trying to use JWT_AUTH_COOKIE in order to create an HttpOnly cookie.
I'm getting the cookie correctly returned by the server when using the getToken API. What I'm wondering now is how I can refresh the token.
When I call refreshToken I get the following response:
```
{"token":["This field is required."]}
```
True, I'm not sending any token in the request's HEADER and that is what I want since the client isn't supposed to keep it saved anywhere.
And that is where I'm getting confused:
If I'm not wrong, from now on the cookie should be added to every request the client makes to the server.
And that is where I'm getting confused:
Shouldn't the server check the cookie once it sees that no token has been passed in the header?
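To make the behaviour I'm expecting concrete, here is a framework-free sketch of the lookup I assumed the server performs (the names are illustrative, not the actual django-rest-framework-jwt internals):

```python
# Illustrative server-side lookup: prefer the Authorization header,
# fall back to the HttpOnly cookie. Names here are hypothetical,
# not the actual django-rest-framework-jwt internals.
def get_jwt_value(headers, cookies, cookie_name="jwt-token"):
    auth = headers.get("Authorization", "")
    if auth.startswith("JWT "):
        return auth.split(" ", 1)[1]
    return cookies.get(cookie_name)
```

With something like this on the server side, a refresh request that carries no header would still pick the token up from the cookie.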
| open | 2019-06-13T19:46:39Z | 2020-04-03T11:39:49Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/482 | [] | pinkynrg | 1 |
NullArray/AutoSploit | automation | 1,007 | Ekultek, you are correct. | Kek | closed | 2019-04-19T16:46:46Z | 2019-04-19T16:57:49Z | https://github.com/NullArray/AutoSploit/issues/1007 | [] | AutosploitReporter | 0 |
lgienapp/aquarel | data-visualization | 10 | Default themes do not specify line color | closed | 2022-08-12T22:36:03Z | 2022-08-12T22:38:56Z | https://github.com/lgienapp/aquarel/issues/10 | [
"bug"
] | lgienapp | 0 | |
deepfakes/faceswap | deep-learning | 833 | CUDA driver version is insufficient for CUDA runtime version | Loading...
08/12/2019 03:14:02 INFO Log level set to: DEBUG
08/12/2019 03:14:04 INFO Output Directory: C:\Users\Administrator\Desktop\out
08/12/2019 03:14:04 INFO Input Video: C:\Users\Administrator\Desktop\1.mp4
08/12/2019 03:14:04 VERBOSE Using 'json' serializer for alignments
08/12/2019 03:14:04 VERBOSE Alignments filepath: 'C:\Users\Administrator\Desktop\1_alignments.json'
08/12/2019 03:14:04 INFO Loading Detect from Mtcnn plugin...
08/12/2019 03:14:04 VERBOSE Loading config: 'C:\Users\Administrator\faceswap\config\extract.ini'
08/12/2019 03:14:04 INFO Loading Align from Fan plugin...
08/12/2019 03:14:04 VERBOSE Tesla V100-PCIE-16GB - 16288MB free of 16288MB
08/12/2019 03:14:05 INFO Starting, this may take a while...
08/12/2019 03:14:20 WARNING FFMPEG hung while attempting to obtain the frame count. Retrying 2 of 3
08/12/2019 03:14:22 INFO Initializing Face Alignment Network...
08/12/2019 03:14:22 VERBOSE Using device Tesla V100-PCIE-16GB with 16288MB free of 16288MB
08/12/2019 03:14:22 VERBOSE Reserving 2240MB for face alignments
08/12/2019 03:14:22 VERBOSE Initializing Face Alignment Network model...
2019-08-12 03:14:27.485169: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
2019-08-12 03:14:28.054730: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: Tesla V100-PCIE-16GB major: 7 minor: 0 memoryClockRate(GHz): 1.38
pciBusID: 0000:05:00.0
totalMemory: 15.91GiB freeMemory: 15.46GiB
2019-08-12 03:14:28.055514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
08/12/2019 03:14:28 ERROR Caught exception in child process: 4500
08/12/2019 03:14:28 ERROR Traceback:
Traceback (most recent call last):
File "C:\Users\Administrator\faceswap\plugins\extract\align\_base.py", line 112, in run
self.align(*args, **kwargs)
File "C:\Users\Administrator\faceswap\plugins\extract\align\_base.py", line 127, in align
self.initialize(*args, **kwargs)
File "C:\Users\Administrator\faceswap\plugins\extract\align\fan.py", line 47, in initialize
raise err
File "C:\Users\Administrator\faceswap\plugins\extract\align\fan.py", line 41, in initialize
self.model = FAN(self.model_path, ratio=tf_ratio)
File "C:\Users\Administrator\faceswap\plugins\extract\align\fan.py", line 199, in __init__
self.session = self.set_session(ratio)
File "C:\Users\Administrator\faceswap\plugins\extract\align\fan.py", line 221, in set_session
session = self.tf.Session(config=config)
File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 1551, in __init__
super(Session, self).__init__(target, graph, config=config)
File "C:\Users\Administrator\MiniConda3\envs\faceswap\lib\site-packages\tensorflow\python\client\session.py", line 676, in __init__
self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
08/12/2019 03:14:34 ERROR Got Exception on main handler:
Traceback (most recent call last):
File "C:\Users\Administrator\faceswap\lib\cli.py", line 125, in execute_script
process.process()
File "C:\Users\Administrator\faceswap\scripts\extract.py", line 62, in process
self.run_extraction()
File "C:\Users\Administrator\faceswap\scripts\extract.py", line 183, in run_extraction
self.extractor.launch()
File "C:\Users\Administrator\faceswap\plugins\extract\pipeline.py", line 171, in launch
self.launch_aligner()
File "C:\Users\Administrator\faceswap\plugins\extract\pipeline.py", line 206, in launch_aligner
raise ValueError("Error initializing Aligner")
ValueError: Error initializing Aligner
08/12/2019 03:14:34 CRITICAL An unexpected crash has occurred. Crash report written to 'C:\Users\Administrator\faceswap\crash_report.2019.08.12.031434375303.log'. Please verify you are running the latest version of faceswap before reporting
Process exited.
| closed | 2019-08-12T03:15:37Z | 2019-08-12T04:21:35Z | https://github.com/deepfakes/faceswap/issues/833 | [] | shadowzoom | 1 |
satwikkansal/wtfpython | python | 161 | Typos detected with codespell | $ __pip3 install codespell__
$ __codespell__
```
./CONTRIBUTING.md:32: Outupt ==> Output
./README.md:578: ocurred ==> occurred
./README.md:2854: Tim ==> Time, tim | disabled due to being a person's name
./wtfpython-pypi/content.md:1997: Tim ==> Time, tim | disabled due to being a person's name
./irrelevant/wtf.ipynb:1047: lenght ==> length
./irrelevant/wtf.ipynb:1292: ocurred ==> occurred
./irrelevant/wtf.ipynb:9831: Tim ==> Time, tim | disabled due to being a person's name
./irrelevant/wtf.ipynb:10787: valu ==> value
./irrelevant/wtf.ipynb:10899: valu ==> value
./irrelevant/notebook_instructions.md:24: tbe ==> the
./irrelevant/obsolete/add_categories:142: Implicity ==> Implicitly
./irrelevant/obsolete/initial.md:106: Implicity ==> Implicitly
./irrelevant/obsolete/initial.md:292: recongnizes ==> recognizes
./irrelevant/obsolete/initial.md:474: Tim ==> Time, tim | disabled due to being a person's name
./irrelevant/obsolete/initial.md:1529: Ouptut ==> Output
./irrelevant/obsolete/initial.md:2097: Implicity ==> Implicitly
The command "codespell" exited with 16.
```
bigscience-workshop/petals | nlp | 258 | How to start a monitor for the private swarm? | closed | 2023-02-15T08:43:06Z | 2023-02-16T14:32:42Z | https://github.com/bigscience-workshop/petals/issues/258 | [] | xu-song | 3 | |
mitmproxy/mitmproxy | python | 7,232 | mitmweb can't recognize my upstream proxy format | #### Problem Description
I wrote a demo with mitmproxy; the .py code looks like this:
```
from mitmproxy import http

def response(flow: http.HTTPFlow):
    print(f"Intercepted response from {flow.request.url}")
    if 'google' in flow.request.url:
        print(f"Intercepted response from {flow.request.url}")
```
And I used a command like 'mitmweb -q -s demo.py --mode upstream://user-10019679_27-country-KR-session-551067:password@proxy-as.ipoasis.com:8888', but it always returns the error 'Invalid server specification: //user-10019679_27-country-KR-session-551067:password@proxy-as.ipoasis.com:8888'. Is there any mistake in my script or command?
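To narrow it down, I checked with the stdlib that the spec itself is a well-formed URL, so the parser seems to reject the `user:pass@` userinfo part rather than the host. If so, the credentials may need to be supplied separately; newer mitmproxy versions document an `--upstream-auth user:pass` option for this, though I haven't verified it applies to 5.3.0:

```python
from urllib.parse import urlparse

# The failing spec, parsed as a plain URL (sanity check only).
spec = "upstream://user-10019679_27-country-KR-session-551067:password@proxy-as.ipoasis.com:8888"
parsed = urlparse(spec)

# The userinfo part is what the server-spec parser appears to reject.
print(parsed.username)
print(parsed.hostname, parsed.port)
```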
#### Steps to reproduce the behavior:
#### System Information
Mitmproxy: 5.3.0
Python: 3.7.0
OpenSSL: OpenSSL 1.1.1h 22 Sep 2020
Platform: Windows-10-10.0.19041-SP0
| closed | 2024-10-10T15:53:15Z | 2024-10-10T16:20:29Z | https://github.com/mitmproxy/mitmproxy/issues/7232 | [
"kind/triage"
] | guyujia | 1 |
cvat-ai/cvat | pytorch | 8,548 | pose | closed | 2024-10-16T07:33:31Z | 2024-10-16T08:16:42Z | https://github.com/cvat-ai/cvat/issues/8548 | [] | lbje | 0 | |
gunthercox/ChatterBot | machine-learning | 2,268 | Logic adapters not working at all on Mac M1 | I've been trying to use the specific_response logic adapter but it does not seem to be working. I then tried using all of the other logic adapters and found out that none of them work, as if they were not even added to begin with. If there is a way to fix this, please let me know. | open | 2022-09-06T14:35:29Z | 2022-09-06T14:35:29Z | https://github.com/gunthercox/ChatterBot/issues/2268 | [] | ilikeapple10 | 0 |
aidlearning/AidLearning-FrameWork | jupyter | 85 | Can't open port 80 for web service | nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
but I am using the root user | open | 2020-02-26T11:07:15Z | 2020-07-29T01:13:33Z | https://github.com/aidlearning/AidLearning-FrameWork/issues/85 | [
"question"
] | ThisisBillhe | 3 |
quantumlib/Cirq | api | 6,225 | Copy/paste errors in docstrings | **Description of the issue**
Density matrix is mentioned in docstrings for `_BufferedStateVector`, `SimulationState._perform_measurement`, and `Simulator`. These should be changed to "state vector" and `QuantumStateRepresentation` respectively.
"good first issue",
"kind/bug-report",
"triage/accepted",
"area/docstrings",
"area/docs"
] | daxfohl | 0 |
areed1192/interactive-broker-python-api | rest-api | 13 | How to get "secType" for an order | How did you find the secType to use in your orders?
For example, in the sample code the secType is "secType": "362673777:FUT".
Here's an example order:
```json
{
  "conid": 362698833,
  "secType": "362673777:FUT",
  "cOID": "YOUR_CONTRACT_ORDER_ID",
  "orderType": "MKT",
  "side": "BUY",
  "quantity": 1,
  "tif": "DAY"
}
```
 | closed | 2020-09-23T03:30:49Z | 2020-09-29T04:12:43Z | https://github.com/areed1192/interactive-broker-python-api/issues/13 | [] | cbora | 0 |
PablocFonseca/streamlit-aggrid | streamlit | 135 | Getting clicked cell information | Hello, how could I get clicked cell information (row and column) and display them in Streamlit?
I was only able to play with "selected cell" :)
Looks like `onCellClicked` is one of the grid options, but I wonder how to pass that to the Streamlit frontend.
https://www.ag-grid.com/javascript-data-grid/grid-events/
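In case it helps anyone else, this is the direction I've been experimenting with: building `gridOptions` as a plain dict that carries a JS `onCellClicked` handler. The JS body and the relay back to Streamlit are assumptions on my part, not a confirmed API:

```python
# Grid options built as a plain dict. The JS handler body is an
# assumption on my part, not a confirmed streamlit-aggrid API.
on_cell_clicked = """
function(event) {
    // event.rowIndex and event.colDef.field identify the clicked cell
    console.log('clicked', event.rowIndex, event.colDef.field);
}
"""

grid_options = {
    "columnDefs": [{"field": "name"}, {"field": "value"}],
    "onCellClicked": on_cell_clicked,
}
```

I'd then try passing this to `AgGrid(df, gridOptions=grid_options, allow_unsafe_jscode=True)` with the JS string wrapped in `JsCode`; whether the click event actually makes it back into the Streamlit return value is exactly what I'm unsure about.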
| closed | 2022-08-21T23:49:08Z | 2024-04-04T17:54:01Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/135 | [] | iheo | 2 |
huggingface/datasets | tensorflow | 7,088 | Disable warning when using with_format format on tensors | ### Feature request
If we write this code:
```python
"""Get data and define datasets."""
from enum import StrEnum

from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms


class Split(StrEnum):
    """Describes what type of split to use in the dataloader"""

    TRAIN = "train"
    TEST = "test"
    VAL = "validation"


class ImageNetDataLoader(DataLoader):
    """Create an ImageNetDataloader"""

    _preprocess_transform = transforms.Compose(
        [
            transforms.Resize(256),
            transforms.CenterCrop(224),
        ]
    )

    def __init__(self, batch_size: int = 4, split: Split = Split.TRAIN):
        dataset = (
            load_dataset(
                "imagenet-1k",
                split=split,
                trust_remote_code=True,
                streaming=True,
            )
            .with_format("torch")
            .map(self._preprocess)
        )
        super().__init__(dataset=dataset, batch_size=batch_size)

    def _preprocess(self, data):
        if data["image"].shape[0] < 3:
            data["image"] = data["image"].repeat(3, 1, 1)
        data["image"] = self._preprocess_transform(data["image"].float())
        return data


if __name__ == "__main__":
    dataloader = ImageNetDataLoader(batch_size=2)
    for batch in dataloader:
        print(batch["image"])
        break
```
This will trigger a user warning:
```bash
datasets\formatting\torch_formatter.py:85: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})
```
### Motivation
This happens because of the way the formatted tensor is returned in `TorchFormatter._tensorize`.
This function handles values of different types; according to some tests, the possible value types are `int`, `numpy.ndarray` and `torch.Tensor`.
In particular, this warning is triggered when the value type is `torch.Tensor`, because that is not the suggested PyTorch way of doing it:
- https://stackoverflow.com/questions/55266154/pytorch-preferred-way-to-copy-a-tensor
- https://discuss.pytorch.org/t/it-is-recommended-to-use-source-tensor-clone-detach-or-sourcetensor-clone-detach-requires-grad-true/101218#:~:text=The%20warning%20points%20to%20wrapping%20a%20tensor%20in%20torch.tensor%2C%20which%20is%20not%20recommended.%0AInstead%20of%20torch.tensor(outputs)%20use%20outputs.clone().detach()%20or%20the%20same%20with%20.requires_grad_(True)%2C%20if%20necessary.
### Your contribution
A solution that I found to be working is to change the current way of doing it:
```python
return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})
```
To:
```python
if isinstance(value, torch.Tensor):
    tensor = value.clone().detach()
    if self.torch_tensor_kwargs.get('requires_grad', False):
        tensor.requires_grad_()
    return tensor
else:
    return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})
``` | open | 2024-08-05T00:45:50Z | 2024-08-05T00:45:50Z | https://github.com/huggingface/datasets/issues/7088 | [
"enhancement"
] | Haislich | 0 |
python-security/pyt | flask | 202 | How to put it inside the proxy? | I want to put it in the proxy, then connect to the web application and check if there are any problems. What should I do? | closed | 2019-07-11T04:34:42Z | 2019-09-25T16:33:34Z | https://github.com/python-security/pyt/issues/202 | [] | angel482400 | 1 |
strawberry-graphql/strawberry | django | 3,580 | TestClient `asserts_errors` flag behaviour is counter intuitive | ## Feature Request Type
- [ ] Core functionality
- [x] Alteration (enhancement/optimization) of existing feature(s)
- [ ] New behavior
## Description
In the `BaseGraphQLTestClient` (`/test/client.py`), there is an option `asserts_errors` which is by default `True`, and asserts that `response.errors is None`.
I'm finding this to be counterintuitive for two reasons:
1. It's not the job of a query client to decide what should be in the response (especially by default), this is the job of my test code ("separation of concerns").
2. It's named incorrectly. "Asserts Errors" implies it asserts that there ARE errors, not that there AREN'T.
I think this check should be removed. If it is kept, the name should be made explicit (`assert_no_errors`) and ideally made opt-in rather than opt-out.
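If the flag is kept, a backwards-compatible rename could look roughly like the sketch below (illustrative only, not the actual strawberry code):

```python
import warnings


def resolve_assert_flag(asserts_errors=None, assert_no_errors=True):
    """Honour the deprecated `asserts_errors` name while preferring the
    explicit `assert_no_errors` (illustrative sketch, not strawberry code)."""
    if asserts_errors is not None:
        warnings.warn(
            "`asserts_errors` is deprecated; use `assert_no_errors` instead",
            DeprecationWarning,
            stacklevel=2,
        )
        return asserts_errors
    return assert_no_errors
```

A `query()` method could then pass both keyword arguments through this helper, keeping old call sites working while nudging people toward the clearer name.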
### Special concern
Is it typical for cases to arise in which errors would be returned *alongside* an otherwise valid response? Because if that's the case then I kind of understand why it's opt-out by default. Otherwise, testing for valid data response should be sufficient, AFAICT?
## Contributing
Probably much quicker for a regular maintainer but I'm happy to make a PR for this if accepted! | closed | 2024-07-25T15:48:33Z | 2025-03-20T15:56:48Z | https://github.com/strawberry-graphql/strawberry/issues/3580 | [] | thclark | 6 |
ydataai/ydata-profiling | pandas | 1,374 | KeyError 'tinyint' during profiling on Apache Spark DataFrame | ### Current Behaviour
I encountered an error while attempting to run profiling on an Apache Spark DataFrame. The Spark DataFrame contains data retrieved from parquet files. The specific error message I received is as follows:
```
Traceback (most recent call last):
File "/tmp/profile.py", line 41, in <module>
profile.to_html()
File "/home/spark/.local/lib/python3.7/site-packages/typeguard/__init__.py", line 1033, in wrapper
retval = func(*args, **kwargs)
File "/home/spark/.local/lib/python3.7/site-packages/ydata_profiling/profile_report.py", line 468, in to_html
return self.html
File "/home/spark/.local/lib/python3.7/site-packages/typeguard/__init__.py", line 1033, in wrapper
retval = func(*args, **kwargs)
File "/home/spark/.local/lib/python3.7/site-packages/ydata_profiling/profile_report.py", line 275, in html
self._html = self._render_html()
File "/home/spark/.local/lib/python3.7/site-packages/typeguard/__init__.py", line 1033, in wrapper
retval = func(*args, **kwargs)
File "/home/spark/.local/lib/python3.7/site-packages/ydata_profiling/profile_report.py", line 383, in _render_html
report = self.report
File "/home/spark/.local/lib/python3.7/site-packages/typeguard/__init__.py", line 1033, in wrapper
retval = func(*args, **kwargs)
File "/home/spark/.local/lib/python3.7/site-packages/ydata_profiling/profile_report.py", line 269, in report
self._report = get_report_structure(self.config, self.description_set)
File "/home/spark/.local/lib/python3.7/site-packages/typeguard/__init__.py", line 1033, in wrapper
retval = func(*args, **kwargs)
File "/home/spark/.local/lib/python3.7/site-packages/ydata_profiling/profile_report.py", line 256, in description_set
self._sample,
File "/home/spark/.local/lib/python3.7/site-packages/ydata_profiling/model/describe.py", line 73, in describe
config, df, summarizer, typeset, pbar
File "/home/spark/.local/lib/python3.7/site-packages/multimethod/__init__.py", line 315, in __call__
return func(*args, **kwargs)
File "/home/spark/.local/lib/python3.7/site-packages/ydata_profiling/model/spark/summary_spark.py", line 93, in spark_get_series_descriptions
executor.imap_unordered(multiprocess_1d, args)
File "/usr/lib64/python3.7/multiprocessing/pool.py", line 748, in next
raise value
File "/usr/lib64/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/home/spark/.local/lib/python3.7/site-packages/ydata_profiling/model/spark/summary_spark.py", line 88, in multiprocess_1d
return column, describe_1d(config, df.select(column), summarizer, typeset)
File "/home/spark/.local/lib/python3.7/site-packages/multimethod/__init__.py", line 315, in __call__
return func(*args, **kwargs)
File "/home/spark/.local/lib/python3.7/site-packages/ydata_profiling/model/spark/summary_spark.py", line 62, in spark_describe_1d
}[dtype]
KeyError: 'tinyint'
```
I believe the issue can be resolved by including data types such as "tinyint" and "smallint" in summary_spark.py.
Do you think this is the right solution? If yes, I could try submitting a PR.
https://github.com/ydataai/ydata-profiling/blob/cfb020d9ad0ce7ef3be53962763b7a57b88732f9/src/ydata_profiling/model/spark/summary_spark.py#L52-L62
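For concreteness, here is the shape of the fix I have in mind: extend that dtype-to-summary mapping with the missing integer types. The `describe_*` names below are stand-ins for the real callables in `summary_spark.py`:

```python
# Sketch of the extended dtype-to-summary mapping. The describe_* names
# are stand-ins for the real callables in summary_spark.py.
def describe_numeric_1d(*args, **kwargs): ...
def describe_categorical_1d(*args, **kwargs): ...

dtype_to_describe = {
    "boolean": describe_categorical_1d,
    "string": describe_categorical_1d,
    # Integer family, now including the small types that raise KeyError today.
    "tinyint": describe_numeric_1d,
    "smallint": describe_numeric_1d,
    "int": describe_numeric_1d,
    "bigint": describe_numeric_1d,
    "float": describe_numeric_1d,
    "double": describe_numeric_1d,
}
```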
### Expected Behaviour
Profiling runs
### Data Description
Private dataset
### Code that reproduces the bug
```python
from ydata_profiling import ProfileReport

df = ...

profile = ProfileReport(
    df,
    title='Title',
    infer_dtypes=False,
    interactions=None,
    missing_diagrams=None,
    correlations={'auto': {'calculate': False},
                  'pearson': {'calculate': True},
                  'spearman': {'calculate': True}},
)
```
### pandas-profiling version
v4.3.1
### Dependencies
```Text
...
```
### OS
Spark cluster
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | open | 2023-06-30T21:21:04Z | 2024-02-13T16:30:45Z | https://github.com/ydataai/ydata-profiling/issues/1374 | [
"help wanted 🙋",
"spark :zap:"
] | talgatomarov | 2 |
PaddlePaddle/PaddleHub | nlp | 1,791 | OCR fails to read images whose path contains Chinese characters | 1) PaddlePaddle version: PaddlePaddle 1.6.2
2) System environment: Windows, Python 3.8.12


This is not mentioned in the documentation; please add it to the docs as soon as possible. | closed | 2022-02-16T09:01:56Z | 2022-02-23T02:56:41Z | https://github.com/PaddlePaddle/PaddleHub/issues/1791 | [] | YouJianYue | 3 |
jowilf/starlette-admin | sqlalchemy | 491 | Enhancement: hook to populate UI with default values on entity creation | **Is your feature request related to a problem? Please describe.**
Currently the UI to add an entity does not populate it with default values.
https://github.com/jowilf/starlette-admin/pull/327 does not solve it. The UI still opens with blank values.
**Describe the solution you'd like**
I was expecting before_create from https://github.com/jowilf/starlette-admin/pull/327 to do this, but it seems to run AFTER the SAVE button is clicked. The UI is opened blank, still. Maybe we need another hook, before_ui_opens or something like that?
**Describe alternatives you've considered**
I have overridden before_create but it is only run AFTER. Same with before_edit.
**Additional context**
In before_create I have put logic to honor SQLAlchemy's default values:
```
async def before_create(self, request: Request, data: Dict[str, Any], knowledge_ingestion_profile: Any) -> None:
    # We basically want to honor all columns with default values defined by SQLAlchemy
    if knowledge_ingestion_profile.requests_per_second is None:
        knowledge_ingestion_profile.requests_per_second = KnowledgeIngestionProfile.requests_per_second.default.arg
    if knowledge_ingestion_profile.verify_ssl is None:
        knowledge_ingestion_profile.verify_ssl = KnowledgeIngestionProfile.verify_ssl.default.arg
    if knowledge_ingestion_profile.http_headers is None or len(knowledge_ingestion_profile.http_headers.strip()) <= 0:
        knowledge_ingestion_profile.http_headers = KnowledgeIngestionProfile.http_headers.default.arg
    ...
    ...
```
Unfortunately it does not run soon enough to populate the UI when the user clicks "New <MyEntity>"
| open | 2024-01-27T13:53:37Z | 2024-01-30T00:59:43Z | https://github.com/jowilf/starlette-admin/issues/491 | [
"enhancement"
] | sglebs | 5 |
pyg-team/pytorch_geometric | pytorch | 8,968 | 【 HELP ❗️ 】Error about PyG2.5.0 : NameError: name 'OptPairTensor' is not defined | ### 🐛 Describe the bug
When I run the code, the following error is raised:
```
Traceback (most recent call last):
File "/script/ComENet.py", line 824, in <listcomp>
SimpleInteractionBlock(
File "/script/ComENet.py", line 151, in __init__
self.conv1 = EdgeGraphConv(hidden_channels, hidden_channels)
File "/export/disk3/why/software/Miniforge3/envs/PyG250/lib/python3.8/site-packages/torch_geometric/nn/conv/graph_conv.py", line 55, in __init__
super().__init__(aggr=aggr, **kwargs)
File "/export/disk3/why/software/Miniforge3/envs/PyG250/lib/python3.8/site-packages/torch_geometric/nn/conv/message_passing.py", line 177, in __init__
signature=self._get_propagate_signature(),
File "/export/disk3/why/software/Miniforge3/envs/PyG250/lib/python3.8/site-packages/torch_geometric/nn/conv/message_passing.py", line 936, in _get_propagate_signature
param_dict = self.inspector.get_params_from_method_call(
File "/export/disk3/why/software/Miniforge3/envs/PyG250/lib/python3.8/site-packages/torch_geometric/inspector.py", line 382, in get_params_from_method_call
type=self.eval_type(type_repr),
File "/export/disk3/why/software/Miniforge3/envs/PyG250/lib/python3.8/site-packages/torch_geometric/inspector.py", line 44, in eval_type
return eval_type(value, self._globals)
File "/export/disk3/why/software/Miniforge3/envs/PyG250/lib/python3.8/site-packages/torch_geometric/inspector.py", line 422, in eval_type
return typing._eval_type(value, _globals, None) # type: ignore
File "/export/disk3/why/software/Miniforge3/envs/PyG250/lib/python3.8/typing.py", line 270, in _eval_type
return t._evaluate(globalns, localns)
File "/export/disk3/why/software/Miniforge3/envs/PyG250/lib/python3.8/typing.py", line 518, in _evaluate
eval(self.__forward_code__, globalns, localns),
File "<string>", line 1, in <module>
NameError: name 'OptPairTensor' is not defined
```
The coding is running well with PyG2.2.0 & Python=3.7 & Torch = 1.13 & cudatoolkit =11.3
The SimpleInteractionBlock is:
```
from torch_cluster import radius_graph
from torch_geometric.nn import GraphConv
from torch_geometric.nn import inits
from torch_geometric.nn.norm import GraphNorm
from torch_scatter import scatter, scatter_min
from torch.nn import Embedding
from torch import nn
import torch
from torch import nn
from torch import Tensor
import torch.nn.functional as F
import math
from math import sqrt
try:
import sympy as sym
except ImportError:
sym = None
class SimpleInteractionBlock(torch.nn.Module):
def __init__(
self,
hidden_channels,
middle_channels,
num_radial,
num_spherical,
num_layers,
output_channels,
act=swish,
norm=None
):
super(SimpleInteractionBlock, self).__init__()
self.act = act
self.conv1 = EdgeGraphConv(hidden_channels, hidden_channels)
self.conv2 = EdgeGraphConv(hidden_channels, hidden_channels)
self.lin1 = Linear(hidden_channels, hidden_channels)
self.lin2 = Linear(hidden_channels, hidden_channels)
self.lin_cat = Linear(2 * hidden_channels, hidden_channels)
self.norm = norm
if self.norm == 'layer':
self.norm_layer = nn.LayerNorm(hidden_channels)
elif self.norm =='batch':
self.norm_layer = nn.BatchNorm1d(hidden_channels)
elif self.norm == 'graph':
self.norm_layer = GraphNorm(hidden_channels)
else:
pass
# Transformations of Bessel and spherical basis representations.
self.lin_feature1 = TwoLayerLinear(num_radial * num_spherical ** 2, middle_channels, hidden_channels)
self.lin_feature2 = TwoLayerLinear(num_radial * num_spherical, middle_channels, hidden_channels)
# Dense transformations of input messages.
self.lin = Linear(hidden_channels, hidden_channels)
self.lins = torch.nn.ModuleList()
for _ in range(num_layers):
self.lins.append(Linear(hidden_channels, hidden_channels))
self.final = Linear(hidden_channels, output_channels)
self.reset_parameters()
def reset_parameters(self):
self.conv1.reset_parameters()
self.conv2.reset_parameters()
self.norm_layer.reset_parameters()
self.lin_feature1.reset_parameters()
self.lin_feature2.reset_parameters()
self.lin.reset_parameters()
self.lin1.reset_parameters()
self.lin2.reset_parameters()
self.lin_cat.reset_parameters()
for lin in self.lins:
lin.reset_parameters()
self.final.reset_parameters()
def forward(self, x, feature1, feature2, edge_index, batch):
x = self.act(self.lin(x))
feature1 = self.lin_feature1(feature1)
h1 = self.conv1(x, edge_index, feature1)
h1 = self.lin1(h1)
h1 = self.act(h1)
feature2 = self.lin_feature2(feature2)
h2 = self.conv2(x, edge_index, feature2)
h2 = self.lin2(h2)
h2 = self.act(h2)
h = self.lin_cat(torch.cat([h1, h2], 1))
h = h + x
for lin in self.lins:
h = self.act(lin(h)) + h
if 'graph' in self.norm:
h = self.norm_layer(h, batch)
else:
h = self.norm_layer(h)
h = self.final(h)
return h
class EdgeGraphConv(GraphConv):
def message(self, x_j, edge_weight) -> Tensor:
return x_j if edge_weight is None else edge_weight * x_j
```
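Digging a bit, the mechanism seems to be that the PyG 2.5 inspector evaluates the inherited `message` signature's string annotations in my module's globals, where `OptPairTensor` was never imported. The same failure reproduces with just the stdlib (no PyG needed):

```python
import typing

# A string annotation is only resolved when someone evaluates it in some
# globals namespace, similar to what PyG's inspector does.
def message(x_j: "OptPairTensor"):
    return x_j

try:
    typing.get_type_hints(message)  # evaluated in this module's globals
    raised = False
except NameError:
    raised = True
print("NameError raised:", raised)

class OptPairTensor:  # stand-in for torch_geometric.typing.OptPairTensor
    pass

# Once the name is visible to the evaluating namespace, it resolves.
hints = typing.get_type_hints(message, globalns={"OptPairTensor": OptPairTensor})
print(hints["x_j"] is OptPairTensor)
```

If that's right, adding `from torch_geometric.typing import Adj, OptPairTensor, OptTensor, Size` to the file defining `EdgeGraphConv` should let the annotations resolve, though I haven't confirmed that's the intended fix.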
### Versions
```
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 16.04.7 LTS (x86_64)
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.12) 5.4.0 20160609
Clang version: Could not collect
CMake version: version 3.24.0
Libc version: glibc-2.23
Python version: 3.8.18 | packaged by conda-forge | (default, Dec 23 2023, 17:21:28) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 11.7.64
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: TITAN RTX
GPU 1: TITAN RTX
GPU 2: TITAN RTX
GPU 3: TITAN RTX
GPU 4: TITAN RTX
GPU 5: TITAN RTX
GPU 6: TITAN RTX
GPU 7: TITAN RTX
Nvidia driver version: 450.51.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 2405.052
CPU max MHz: 3100.0000
CPU min MHz: 1200.0000
BogoMIPS: 4403.61
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0-9,20-29
NUMA node1 CPU(s): 10-19,30-39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.0.0
[pip3] torch-cluster==1.6.3
[pip3] torch_geometric==2.5.0
[pip3] torch-scatter==2.1.2
[pip3] torch-sparse==0.6.18
[pip3] triton==2.0.0
[conda] blas 2.121 mkl conda-forge
[conda] blas-devel 3.9.0 21_linux64_mkl conda-forge
[conda] cudatoolkit 11.7.0 hd8887f6_10 nvidia
[conda] libblas 3.9.0 21_linux64_mkl conda-forge
[conda] libcblas 3.9.0 21_linux64_mkl conda-forge
[conda] liblapack 3.9.0 21_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 21_linux64_mkl conda-forge
[conda] mkl 2024.0.0 ha957f24_49657 conda-forge
[conda] mkl-devel 2024.0.0 ha770c72_49657 conda-forge
[conda] mkl-include 2024.0.0 ha957f24_49657 conda-forge
[conda] numpy 1.24.4 py38h59b608b_0 conda-forge
[conda] pyg 2.5.0 py38_torch_2.0.0_cu117 pyg
[conda] pytorch 2.0.0 py3.8_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cluster 1.6.3 py38_torch_2.0.0_cu117 pyg
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorch-scatter 2.1.2 py38_torch_2.0.0_cu117 pyg
[conda] pytorch-sparse 0.6.18 py38_torch_2.0.0_cu117 pyg
[conda] torchtriton 2.0.0 py38 pytorch
``` | closed | 2024-02-26T02:07:12Z | 2024-02-26T20:24:55Z | https://github.com/pyg-team/pytorch_geometric/issues/8968 | [
"bug"
] | StefanIsSmart | 9 |
voila-dashboards/voila | jupyter | 647 | Unable to see widgets | I am not able to see any ipywidgets in a voila served notebook. I am running voila version 0.1.21.
The first cell of my notebook (your example) is simply
```
import ipywidgets as widgets
slider = widgets.FloatSlider(description='$x$', value=4)
text = widgets.FloatText(disabled=True, description='$x^2$')
def compute(*ignore):
text.value = str(slider.value ** 2)
slider.observe(compute, 'value')
widgets.VBox([slider, text])
```
I see the widgets fine in the Notebook.
When I start voila, I see no obvious errors related to this
```
root@af78caa85c46:/tmp/working# voila /tmp/working/notebooks
[Voila] Using /tmp to store connection files
[Voila] Storing connection files in /tmp/voila_9lv5ta76.
[Voila] Serving static files from /opt/conda/lib/python3.7/site-packages/voila/static.
[Voila] Voila is running at:
http://localhost:8866/
[Voila] WARNING | No web browser found: could not locate runnable browser.
ERROR:tornado.general:Could not open static file '/base/images/favicon.ico'
ERROR:tornado.general:Could not open static file '/style/style.min.css'
[Voila] WARNING | Notebook Untitled2-Copy1.ipynb is not trusted
[Voila] Kernel started: 85cebb55-bae9-401a-b537-3de34af4b927
WARNING:tornado.access:404 GET /api/kernels/?1594158156510 (10.254.31.40) 0.50ms
```
Is there anything else I can check? | open | 2020-07-07T21:45:43Z | 2020-07-14T14:46:47Z | https://github.com/voila-dashboards/voila/issues/647 | [] | marketneutral | 6 |
AntonOsika/gpt-engineer | python | 667 | Simplify UX for the "ask for file list" workflow to just one step | 1. If file_list.txt exists, ask for confirmation only once (same time as asking for confirmation about prompt to use, in `get_improve_prompt()`)
2. Store both "folders" to use, and files, in file_list.txt, to not have to update if new files are created in a folder | closed | 2023-09-03T13:57:25Z | 2023-09-24T08:31:17Z | https://github.com/AntonOsika/gpt-engineer/issues/667 | [] | AntonOsika | 1 |
tflearn/tflearn | tensorflow | 1,185 | Watch movies free with no ads! Tee Yod 2 full movie | Watch free with no ads! Tee Yod 2 full movie (cam), watch online (English title: Death Whisperer 2). Tee Yod 2 full movie, dubbed: the story of a giant's revenge as it keeps hunting the black-clad ghost. Watch Tee Yod 2 (2024) full movie, dubbed/subtitled: the story of the giant's relentless revenge hunt for the black-clad ghost. Watch movies online in crystal-clear quality
Watch here: [Tee Yod 2 full movie 2024, dubbed](https://t.co/KtetLN9Pxu)
Watch here: [Tee Yod 2 full movie 2024, dubbed](https://t.co/KtetLN9Pxu) | open | 2024-10-29T05:01:55Z | 2024-10-29T05:06:41Z | https://github.com/tflearn/tflearn/issues/1185 | [] | ghost | 0 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,327 | [Bug]: New Gallery icons (refresh, tree view, folder with magnifying lens, ...) are not present in offline mode | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
The new Gallery icons (refresh, tree view, folder with magnifying lens, ...) don't show up in offline mode. Buttons are empty, they still work.
(nice improvement btw; the tree view keeps closing if you move away from the tab, but I guess it is still WIP)
### Steps to reproduce the problem
offline mode
### What should have happened?
icons should be visible
### What browsers do you use to access the UI ?
Other
### Sysinfo
1
### Console logs
```Shell
1
```
### Additional information
_No response_ | open | 2024-03-19T19:36:53Z | 2024-05-20T21:03:51Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15327 | [
"bug-report"
] | dilectiogames | 7 |
plotly/plotly.py | plotly | 4,847 | Allow (or switch to) generating html that starts with `<!doctype html>` when using `to_html(full_html=True)` | Currently, plotly.py's [`to_html()`]([plotly.io](https://plotly.com/python-api-reference/plotly.io.html#module-plotly.io).to_html) function (when using the default `full_html=True` argument) generates html that doesn't start with a required [doctype](https://developer.mozilla.org/en-US/docs/Glossary/Doctype) (i.e. `<!doctype html>`). It would be good to support this, or switch to doing this.
The code can be found here: https://github.com/plotly/plotly.py/blob/095d2d80b81ef606874b88dc9f20dc65cb20c3dc/packages/python/plotly/plotly/io/_html.py#L343-L352
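In the meantime, a minimal workaround sketch (plain string handling, not tied to plotly's internals; the helper name is mine):

```python
def ensure_doctype(html: str) -> str:
    """Prepend the HTML5 doctype if the document doesn't already start with one."""
    if not html.lstrip().lower().startswith("<!doctype"):
        html = "<!doctype html>\n" + html
    return html

# e.g. page = ensure_doctype(fig.to_html(full_html=True))
```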
The reason I noticed this is that some plots I'm working with were formatted differently when I used `full_html=False` to include a plot into another html document that _did_ start with `<!doctype html>`. I narrowed the reason for this difference in formatting down to the presence or absence of that single line. Since plots can be included into html with or without a doctype, it would probably be good to be aware of this difference. | open | 2024-11-03T21:27:05Z | 2024-11-05T21:07:44Z | https://github.com/plotly/plotly.py/issues/4847 | [
"feature",
"P2"
] | cjerdonek | 0 |
albumentations-team/albumentations | machine-learning | 1,522 | [Feature request] Add keypoint support to GridDropout | Keypoint support comes quite naturally to Dropout transforms.
=> no reason why not to add such support to the GridDropout transform | closed | 2024-02-17T20:01:11Z | 2024-10-02T18:02:32Z | https://github.com/albumentations-team/albumentations/issues/1522 | [
"good first issue"
] | ternaus | 0 |
huggingface/pytorch-image-models | pytorch | 2,094 | DINOv2 worse performance compared to the original version | I've trained Vision Transformer (ViT) models, small and large, with DINOv2 pretrained weights from [Facebook](https://github.com/facebookresearch/dinov2) (vit_small_patch14_reg4_dinov2.lvd142m) and timm (dinov2_vits14_reg_lc). The timm version underperforms compared to Facebook's fine-tuned version, as seen in the feature and attention maps. Are there any known issues with timm's DINOv2 weights?
Despite this, timm's version is faster and uses less GPU VRAM than Facebook's. Which aspects of timm's implementation differ remains unclear to me.
I hope someone can explain them.
Thank you for the great work.
| closed | 2024-02-13T01:09:09Z | 2024-03-13T19:07:21Z | https://github.com/huggingface/pytorch-image-models/issues/2094 | [
"bug"
] | davissf | 5 |
Esri/arcgis-python-api | jupyter | 1,513 | QUESTION - loop list of "Fixes" when updating attributes | In my code below, I am manually entering fixes but I would rather feed it a list of needed fixes - any help would be appreciated:
ALL_fl = item.layers[0]
FIX = ("ENERTECH INDUSTRIES")
fix_features = [f for f in ALL_fset if f.attributes["MNFRS_3S"] == FIX]
edits = []
for feature in fix_features:
feature.attributes["MNFRS_3S"] = "ENERTECH"
edits.append(feature)
update_result = ALL_fl.edit_features(updates=edits)
print(update_result)
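One way to drive this from a list of fixes instead of one value at a time is a mapping of old to new names. A sketch against the same variables as above (the `FIXES` entries are placeholders, and `feature.attributes` is assumed to behave like a plain dict, as in the ArcGIS API for Python):

```python
# Hypothetical mapping of bad -> corrected manufacturer names.
FIXES = {
    "ENERTECH INDUSTRIES": "ENERTECH",
    "ACME CORP.": "ACME",
}

def collect_edits(features, field="MNFRS_3S", fixes=FIXES):
    """Mutate matching features in place and return the list to send to edit_features."""
    edits = []
    for feature in features:
        current = feature.attributes.get(field)
        if current in fixes:
            feature.attributes[field] = fixes[current]
            edits.append(feature)
    return edits

# Usage with the variables from the question:
# edits = collect_edits(ALL_fset)
# if edits:
#     print(ALL_fl.edit_features(updates=edits))
```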
| closed | 2023-03-31T20:47:01Z | 2023-04-21T15:33:34Z | https://github.com/Esri/arcgis-python-api/issues/1513 | [] | SPhillips2022 | 4 |
deeppavlov/DeepPavlov | nlp | 845 | Estimated time | Hello!
How can I find out the estimated time left during the training process of an NER network? | closed | 2019-05-17T13:47:00Z | 2019-05-29T15:51:19Z | https://github.com/deeppavlov/DeepPavlov/issues/845 | [] | serhio7 | 2 |
FactoryBoy/factory_boy | django | 141 | Making (or having the option to make) factory.django.FileField non-post-generation | I have a use case where the file is interacted with just before saving to the database (I'm making a sha256 hash of the file and storing this). When using `create()`, I get an error about the file not existing, because factory boy is building things for the file field _after_ saving. The workaround I have is to use `build()`, which works, and then `save()` the instance after all post generation has taken place.
Based on the way your code looks, it seems like it would probably be a little challenging to make FileField optionally post-generation (because it appears to be based on inheritance). Mainly, I'm curious if this is something that you would consider supporting, or if there is a good reason for forcing it to be post-generation. If it seems like an ok idea, I would consider contributing the PR for it, given your thoughts on an implementation.
| closed | 2014-04-29T18:10:54Z | 2015-03-26T22:33:00Z | https://github.com/FactoryBoy/factory_boy/issues/141 | [
"Feature"
] | ianawilson | 3 |
Netflix/metaflow | data-science | 1,810 | Unable to utilise pytest unit test for metaflow | from HelloWorld import HelloWorld
from metaflow import Flow, Metaflow
import pytest

@pytest.fixture
def get_flow():
    # Setup
    flow = HelloWorld()  # OSError: could not get source code

# Below is a simple Metaflow
from metaflow import FlowSpec, step

class HelloWorldFlow(FlowSpec):
    @step
    def start(self):
        print("Hello, World!")
        self.next(self.end)

    @step
    def end(self):
        print("Flow completed.")

if __name__ == '__main__':
    HelloWorldFlow()
| open | 2024-04-24T17:53:43Z | 2024-04-30T17:32:16Z | https://github.com/Netflix/metaflow/issues/1810 | [] | iamdansari | 1 |
sanic-org/sanic | asyncio | 2,698 | Getting first element of an array in request.form.get('arr_name') | I was getting the first element of an array (named 'search_strings').
Passing search_strings = ['V','F','X'] and req.form.get('search_strings') would only retrieve 'V'.
```python
print(req.form) #outputs {'search_strings': ['V', 'F', 'X']}
print(req.form.get("search_strings")) #outputs V
```
Had to use
```python
json.loads(str(req.form).replace("'", '"'))[
"search_strings"
]
```
Being passed as:

_Originally posted by @LucasGrasso in https://github.com/sanic-org/sanic/issues/288#issuecomment-1445742071_
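For what it's worth, `request.form` is a multidict, so `getlist` returns the full list without the JSON round-trip. A minimal stand-in illustrating the `get`/`getlist` contract (not Sanic's actual class, just the same semantics):

```python
class Params(dict):
    """Toy multidict with Sanic-style get()/getlist() semantics."""

    def get(self, name, default=None):
        # .get() returns only the FIRST value stored under the key.
        values = super().get(name)
        return values[0] if values else default

    def getlist(self, name, default=None):
        # .getlist() returns every value stored under the key.
        return super().get(name, default)

form = Params({"search_strings": ["V", "F", "X"]})
print(form.get("search_strings"))      # V
print(form.getlist("search_strings"))  # ['V', 'F', 'X']
```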
| closed | 2023-02-27T05:44:00Z | 2023-02-27T11:00:10Z | https://github.com/sanic-org/sanic/issues/2698 | [] | LucasGrasso | 1 |
eamigo86/graphene-django-extras | graphql | 26 | Warining: Abstract type is deprecated | I'm getting this warning when runing my server
```
/Users/sgaseretto/.virtualenvs/graphene/lib/python3.6/site-packages/graphene/types/abstracttype.py:9: DeprecationWarning: Abstract type is deprecated, please use normal object inheritance instead.
See more: https://github.com/graphql-python/graphene/blob/2.0/UPGRADE-v2.0.md#deprecations
"Abstract type is deprecated, please use normal object inheritance instead.\n"
/Users/sgaseretto/.virtualenvs/graphene/lib/python3.6/site-packages/graphene/types/abstracttype.py:9: DeprecationWarning: Abstract type is deprecated, please use normal object inheritance instead.
See more: https://github.com/graphql-python/graphene/blob/2.0/UPGRADE-v2.0.md#deprecations
"Abstract type is deprecated, please use normal object inheritance instead.\n"
[14/Mar/2018 20:02:27] "POST /graphql/ HTTP/1.1" 200 72700
```
Any idea of how to avoid it? | closed | 2018-03-14T20:06:40Z | 2018-03-16T01:06:05Z | https://github.com/eamigo86/graphene-django-extras/issues/26 | [] | sgaseretto | 3 |
httpie/cli | python | 537 | Collections in httpie? | I'm a regular user of Postman, and I have a lot of requests there in structured collections. Is there any recommended way to store or create a 'book' of requests for each project?
Thanks! | open | 2016-11-10T14:47:01Z | 2017-06-27T05:23:32Z | https://github.com/httpie/cli/issues/537 | [] | yangwao | 6 |
jina-ai/serve | fastapi | 5,627 | Document `jina deployment` CLI in Executor serve section | Document `jina deployment` CLI in Executor serve section | closed | 2023-01-26T08:31:27Z | 2023-02-15T13:42:08Z | https://github.com/jina-ai/serve/issues/5627 | [] | alaeddine-13 | 0 |
zappa/Zappa | django | 612 | [Migrated] Replace windows path separator with posix separator in Zip files #1358 | Originally from: https://github.com/Miserlou/Zappa/issues/1570 by [aldokkani](https://github.com/aldokkani)
## Description
Replace windows path separators \\ with posix separators /
## GitHub Issues
https://github.com/Miserlou/Zappa/issues/1358
| closed | 2021-02-20T12:26:38Z | 2024-07-13T08:17:56Z | https://github.com/zappa/Zappa/issues/612 | [
"no-activity",
"auto-closed"
] | jneves | 3 |
deezer/spleeter | tensorflow | 425 | [Discussion] How to set GPU device | I have multiple GPUs but I can't figure out how to set which GPU device the separator should use.
In my python script I've tried adding `os.environ['CUDA_VISIBLE_DEVICES'] = "1"` but it does nothing.
I've also tried adding `device_count={'GPU': 1}` to `ConfigProto`, but it does nothing either.
Has anyone got a clue how to set which device to use?
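One thing worth checking (a hedged note; this is general TensorFlow behaviour rather than anything spleeter-specific): `CUDA_VISIBLE_DEVICES` only takes effect if it is set before TensorFlow initialises, so it has to come before any import that pulls TensorFlow in:

```python
import os

# Must be set before TensorFlow (or anything importing it, e.g. spleeter) loads,
# otherwise the setting is silently ignored.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

# Only then import the library that uses the GPU, e.g.:
# from spleeter.separator import Separator
```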
I'm on Windows 10 with RTX2080 | open | 2020-06-18T15:35:58Z | 2020-06-22T13:01:50Z | https://github.com/deezer/spleeter/issues/425 | [
"question"
] | aidv | 2 |
horovod/horovod | pytorch | 3,711 | TF DataService example fails with tf 2.11 | Starting with Tensorflow 2.11, the DataService example fails for Gloo and MPI (but not Spark!) on GPU using CUDA (not reproducible locally with GPU but without CUDA). The identical MNIST example without DataService passes. | open | 2022-09-22T06:02:31Z | 2022-09-22T06:06:26Z | https://github.com/horovod/horovod/issues/3711 | [
"bug"
] | EnricoMi | 0 |
plotly/dash-table | plotly | 873 | Deactivate or Loading state for when Export Button is pressed | I am generating rather large datasets (10,000+ rows) of text data. The export feature works wonders but it can sometimes take many seconds for the download to appear. It would be nice if the export button went inactive, or if there was a way to wrap a dcc.Loading component around it, so that my users knew for sure that it was working. Thanks! | open | 2021-03-01T19:37:25Z | 2021-05-17T15:13:46Z | https://github.com/plotly/dash-table/issues/873 | [] | aboyher | 2 |
davidteather/TikTok-Api | api | 229 | [INSTALLATION] - Your error here | **Describe the error**
Put the error trace here.
**The buggy code**
Please insert the code that is throwing errors or is giving you weird unexpected results.
**Error Trace (if any)**
Put the error trace below if there's any error thrown.
```
# Error Trace Here
```
**Desktop (please complete the following information):**
- OS: [e.g. Windows 10]
- TikTokApi Version [e.g. 3.3.1] - if out of date upgrade before posting an issue
**Additional context**
Put what you have already tried. Your problem is probably in the closed issues tab already.
| closed | 2020-08-24T05:35:14Z | 2020-08-24T06:03:56Z | https://github.com/davidteather/TikTok-Api/issues/229 | [
"bug",
"installation_help"
] | dudabomm | 2 |
nolar/kopf | asyncio | 483 | [archival placeholder] | This is a placeholder for later issues/prs archival.
It is needed now to reserve the initial issue numbers before going with actual development (PRs), so that later these placeholders could be populated with actual archived issues & prs with proper intra-repo cross-linking preserved. | closed | 2020-08-18T20:07:45Z | 2020-08-18T20:07:46Z | https://github.com/nolar/kopf/issues/483 | [
"archive"
] | kopf-archiver[bot] | 0 |
ranaroussi/yfinance | pandas | 2,376 | yfinance Documentation page is down | ### Describe bug
yfinance Documentation page is down:
https://yfinance-python.org/
### Simple code that reproduces your problem
Go to: https://yfinance-python.org/
It returns a 404 response.
### Debug log from yf.enable_debug_mode()
Go to: https://yfinance-python.org/
It returns a 404 response.
### Bad data proof
Go to: https://yfinance-python.org/
It returns a 404 response.
### `yfinance` version
latest, main branch
### Python version
_No response_
### Operating system
_No response_ | closed | 2025-03-22T00:42:40Z | 2025-03-22T15:12:25Z | https://github.com/ranaroussi/yfinance/issues/2376 | [] | cibervicho | 2 |
jupyter/nbgrader | jupyter | 1,012 | Add warning when course root is not accessible from notebook root |
### Operating system
Ubuntu 16.04
### `nbgrader --version`
0.5.4
### `jupyterhub --version` (if used with JupyterHub)
0.8.1
### `jupyter notebook --version`
5.6.0
### Expected behavior
I set the `CourseDirectory.root` to `/opt/submissions`. I expected that this would allow me to have multiple teachers work on the same course (adding students, creating assignments, etc). OK, I know that I should use a proper database for that, but it's a first step.
### Actual behavior
With one of the teacher users I could create an assignment. However, when I clicked on it to actually edit it and create the assignment notebook, I got a 404. The URL the system looked at was `/user/teacher/submissions/source/<assignment>/`.
So it seems that somehow the `/opt` has become lost from the path, the `submissions` part was kept and put after the default `/user/teacher/`, which is the user's home.
If I change the root path to `/home/teacher/nbgrader`, everything works and the url is `/user/teacher/tree/nbgrader/source/<assignment>/`.
Note that the db was created in `/opt/submissions` just fine, I could add users... it's just the formgrader that has this problem.
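For reference, the relevant lines of my `nbgrader_config.py` (reconstructed from the paths above; the comments are mine):

```python
# nbgrader_config.py
c = get_config()

c.CourseDirectory.root = '/opt/submissions'          # formgrader returns 404
# c.CourseDirectory.root = '/home/teacher/nbgrader'  # this variant works
```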
### Steps to reproduce the behavior
Just set `CourseDirectory.root` to outside the user's home directory. | closed | 2018-09-17T09:47:18Z | 2019-06-01T16:01:39Z | https://github.com/jupyter/nbgrader/issues/1012 | [
"bug"
] | DavidNemeskey | 6 |
tqdm/tqdm | pandas | 1,630 | 4.67.0: pytest fails with `Unknown config option: asyncio_default_fixture_loop_scope` | - [ ] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [ ] visual output bug
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [ ] I have mentioned version numbers, operating system and
environment, where applicable:
I'm packaging your module as an rpm package so I'm using the typical PEP517 based build, install and test cycle used on building packages from non-root account.
- `python3 -sBm build -w --no-isolation`
- because I'm calling `build` with `--no-isolation` I'm using during all processes only locally installed modules
- install .whl file in </install/prefix> using `installer` module
- run pytest with $PYTHONPATH pointing to sitearch and sitelib inside </install/prefix>
- build is performed in env which is *`cut off from access to the public network`* (pytest is executed with `-m "not network"`)
<details>
<summary>Here is pytest output:</summary>
```console
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-tqdm-4.67.0-2.fc37.x86_64/usr/lib64/python3.10/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-tqdm-4.67.0-2.fc37.x86_64/usr/lib/python3.10/site-packages
+ /usr/bin/pytest -ra -m 'not network' -q --deselect tests/tests_perf.py::test_lock_args
============================= test session starts ==============================
platform linux -- Python 3.10.14, pytest-8.2.2, pluggy-1.5.0
rootdir: /home/tkloczko/rpmbuild/BUILD/tqdm-4.67.0
configfile: pyproject.toml
testpaths: tests
plugins: asyncio-0.23.8, timeout-2.3.1
asyncio: mode=strict
timeout: 30.0s
timeout method: signal
timeout func_only: False
collected 150 items / 1 deselected / 149 selected
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/usr/lib/python3.10/site-packages/_pytest/main.py", line 292, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/usr/lib/python3.10/site-packages/_pytest/main.py", line 345, in _main
INTERNALERROR> config.hook.pytest_collection(session=session)
INTERNALERROR> File "/usr/lib/python3.10/site-packages/pluggy/_hooks.py", line 513, in __call__
INTERNALERROR> return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
INTERNALERROR> File "/usr/lib/python3.10/site-packages/pluggy/_manager.py", line 120, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/usr/lib/python3.10/site-packages/pluggy/_callers.py", line 139, in _multicall
INTERNALERROR> raise exception.with_traceback(exception.__traceback__)
INTERNALERROR> File "/usr/lib/python3.10/site-packages/pluggy/_callers.py", line 122, in _multicall
INTERNALERROR> teardown.throw(exception) # type: ignore[union-attr]
INTERNALERROR> File "/usr/lib/python3.10/site-packages/_pytest/logging.py", line 794, in pytest_collection
INTERNALERROR> return (yield)
INTERNALERROR> File "/usr/lib/python3.10/site-packages/pluggy/_callers.py", line 122, in _multicall
INTERNALERROR> teardown.throw(exception) # type: ignore[union-attr]
INTERNALERROR> File "/usr/lib/python3.10/site-packages/_pytest/warnings.py", line 120, in pytest_collection
INTERNALERROR> return (yield)
INTERNALERROR> File "/usr/lib/python3.10/site-packages/pluggy/_callers.py", line 124, in _multicall
INTERNALERROR> teardown.send(result) # type: ignore[union-attr]
INTERNALERROR> File "/usr/lib/python3.10/site-packages/_pytest/config/__init__.py", line 1425, in pytest_collection
INTERNALERROR> self._validate_config_options()
INTERNALERROR> File "/usr/lib/python3.10/site-packages/_pytest/config/__init__.py", line 1447, in _validate_config_options
INTERNALERROR> self._warn_or_fail_if_strict(f"Unknown config option: {key}\n")
INTERNALERROR> File "/usr/lib/python3.10/site-packages/_pytest/config/__init__.py", line 1486, in _warn_or_fail_if_strict
INTERNALERROR> self.issue_config_time_warning(PytestConfigWarning(message), stacklevel=3)
INTERNALERROR> File "/usr/lib/python3.10/site-packages/_pytest/config/__init__.py", line 1538, in issue_config_time_warning
INTERNALERROR> warnings.warn(warning, stacklevel=stacklevel)
INTERNALERROR> pytest.PytestConfigWarning: Unknown config option: asyncio_default_fixture_loop_scope
============================ 1 deselected in 0.76s =============================
```
</details>
<details>
<summary>List of installed modules in build env:</summary>
```console
Package Version
------------------ -----------
build 1.2.2.post1
click 8.1.7
cloudpickle 3.1.0
dask 2024.6.2
distro 1.9.0
exceptiongroup 1.1.3
fsspec 2024.10.0
importlib_metadata 8.5.0
iniconfig 2.0.0
installer 0.7.0
locket 1.0.0
markdown-it-py 3.0.0
mdurl 0.1.2
numpy 1.26.4
packaging 24.0
pandas 2.2.1
partd 1.4.1
pluggy 1.5.0
Pygments 2.18.0
pyproject_hooks 1.2.0
pytest 8.2.2
pytest-asyncio 0.23.8
pytest-timeout 2.3.1
python-dateutil 2.9.0.post0
pytz 2024.2
PyYAML 6.0.2
rich 13.7.1
setuptools 75.1.0
setuptools-scm 8.1.0
tokenize_rt 6.1.0
tomli 2.0.1
toolz 1.0.0
wheel 0.44.0
zipp 3.20.2
```
</details>
Please let me know if you need more details or want me to perform some diagnostics.
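For context, my hedged guess at the root cause (not verified against tqdm's repository): `asyncio_default_fixture_loop_scope` is an ini option understood only by newer pytest-asyncio releases, so a `pyproject.toml` section like the following is rejected by the pytest-asyncio 0.23.8 listed above:

```toml
[tool.pytest.ini_options]
asyncio_default_fixture_loop_scope = "function"
```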
| closed | 2024-11-07T09:47:32Z | 2024-11-12T13:35:36Z | https://github.com/tqdm/tqdm/issues/1630 | [
"invalid ⛔",
"question/docs ‽"
] | kloczek | 6 |
allure-framework/allure-python | pytest | 305 | [allure-behave] Support @flaky tag for scenarios (to mark them as flaky in the allure report) | #### I'm submitting a ...
- [ ] bug report
- [x] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
In an allure report based on the output of the `allure-behave` formatter, a `@flaky` tag is simply shown in the list of all scenario tags.
But the scenario is not marked as flaky or special in any way.
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
Use the allure-behave formatter for testruns with scenarios containing a `@flaky` tag and view the corresponding allure test report.
#### What is the expected behavior?
It would be nice if a scenario tagged as flaky was marked as flaky in the json file produced by the `allure-behave` formatter (allure's flaky attribute set to `True`) - and thus shown as flaky test in a resulting allure test report.
#### What is the motivation / use case for changing the behavior?
Scenarios tagged as flaky could be more easily spotted in the overall allure test report.
Allure does support marking tests as flaky:
https://docs.qameta.io/allure/#_flaky_tests
Tagging a scenario or feature as flaky is apparently already supported by `allure-cucumber-jvm`:
https://docs.qameta.io/allure/#_test_markers
#### Please tell us about your environment:
- allure-behave 2.5.1
- allure-python-commons 2.5.1
- allure 2.3-SNAPSHOT
#### Other information
| open | 2018-11-05T20:17:06Z | 2023-01-23T08:24:46Z | https://github.com/allure-framework/allure-python/issues/305 | [
"theme:behave",
"task:new feature"
] | m-i-tk | 2 |
AirtestProject/Airtest | automation | 840 | Device-control issue: running from the command line keeps reporting 'device not ready' | (Please fill in the template below as much as possible; it helps us locate and solve the problem quickly, thanks for your cooperation. Otherwise the issue will be closed directly.)
**(Important! Issue category)**
* Device-control-related issue
**Describe the bug**
The script cannot be started from the command line, even though ADB commands can connect to the phone.
```
C:\Users\Administrator>airtest run "H:/work/_AirtestIDE/BC2UI_1.air" --device Android:///8BN0217830009097A --log
save log in 'H:/work/_AirtestIDE/BC2UI_1.air\log'
[11:13:03][DEBUG]<airtest.core.android.adb> c:\users\administrator\appdata\local\programs\python\python37\lib\site-packages\airtest\core\android\static\adb\windows\adb.exe -s 8BN0217830009097A wait-for-device
======================================================================
ERROR: setUpClass (airtest.cli.runner.AirtestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "c:\users\administrator\appdata\local\programs\python\python37\lib\site-packages\airtest\core\android\adb.py", line 298, in wait_for_device
self.cmd("wait-for-device", timeout=timeout)
File "c:\users\administrator\appdata\local\programs\python\python37\lib\site-packages\airtest\core\android\adb.py", line 178, in cmd
stdout, stderr = proc_communicate_timeout(proc, timeout)
File "c:\users\administrator\appdata\local\programs\python\python37\lib\site-packages\airtest\utils\compat.py", line 79, in proc_communicate_timeout
raise_from(exp, None)
File "<string>", line 3, in raise_from
RuntimeError: Command ['c:\\users\\administrator\\appdata\\local\\programs\\python\\python37\\lib\\site-packages\\airtest\\core\\android\\static\\adb\\windows\\adb.exe', '-s', '8BN0217830009097A', 'wait-for-device'] timed out after 5 seconds: stdout['b'''], stderr['b''']
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\users\administrator\appdata\local\programs\python\python37\lib\site-packages\airtest\cli\runner.py", line 28, in setUpClass
setup_by_args(args)
File "c:\users\administrator\appdata\local\programs\python\python37\lib\site-packages\airtest\cli\runner.py", line 144, in setup_by_args
auto_setup(dirpath, devices, args.log, project_root, compress)
File "c:\users\administrator\appdata\local\programs\python\python37\lib\site-packages\airtest\core\api.py", line 109, in auto_setup
connect_device(dev)
File "c:\users\administrator\appdata\local\programs\python\python37\lib\site-packages\airtest\core\api.py", line 58, in connect_device
dev = init_device(platform, uuid, **params)
File "c:\users\administrator\appdata\local\programs\python\python37\lib\site-packages\airtest\core\api.py", line 33, in init_device
dev = cls(uuid, **kwargs)
File "c:\users\administrator\appdata\local\programs\python\python37\lib\site-packages\airtest\core\android\android.py", line 46, in __init__
self.adb.wait_for_device()
File "c:\users\administrator\appdata\local\programs\python\python37\lib\site-packages\airtest\core\android\adb.py", line 300, in wait_for_device
raisefrom(DeviceConnectionError, "device not ready", e)
File "c:\users\administrator\appdata\local\programs\python\python37\lib\site-packages\airtest\utils\compat.py", line 55, in raisefrom
raise_from(exc_type(message), exc)
File "<string>", line 3, in raise_from
airtest.core.error.DeviceConnectionError: 'device not ready'
```
**Screenshots**
(Paste screenshots taken when the problem occurred, if any)
(For image- and device-related problems in AirtestIDE, please also paste the related error messages from the AirtestIDE console window)
**Expected behavior**
I hope to be able to run the script from the command line.
**Python version:** `python3.7.7`
**Airtest version:** `1.1.6`
> The airtest version can be found with `pip freeze`
**Device:**
- Model: [Honor 9y]
- OS: [Android 8.0]
- (other info)
**Other relevant environment info**
(Windows 10, 64-bit)
| closed | 2020-12-07T03:22:59Z | 2021-02-21T03:36:33Z | https://github.com/AirtestProject/Airtest/issues/840 | [] | RonnyHU0210 | 1 |
pytorch/pytorch | python | 148,983 | inference_mode Tensors do not always need to be guarded on | the following triggers a recompile
```python
import torch

with torch.inference_mode():
    x = torch.randn(3)
    y = torch.randn(3)

@torch.compile(backend="eager", fullgraph=True)
def f(x):
    return x.sin()

f(x)
f(y)
```
We saw this in vLLM | open | 2025-03-11T18:42:08Z | 2025-03-13T15:12:34Z | https://github.com/pytorch/pytorch/issues/148983 | [
"triaged",
"vllm-compile",
"dynamo-triage-jan2025"
] | zou3519 | 0 |
voila-dashboards/voila | jupyter | 907 | How to set a specific width and height for the output section? | Is there any way to modify the output section to specify a height and width and make the page shorter?
In my web application with voila, more than 100 lines of logs are shown as output; printing that output can make the page very tall, because its height depends on the number of output lines. Is there a way to give the output section a layout with a fixed height and width and make it auto-scroll?
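Something along these lines with an `Output` widget is what I am after (a sketch; the exact layout values are guesses):

```python
import ipywidgets as widgets

# Fixed-size output area; scrollbars appear once the content overflows the box.
out = widgets.Output(layout=widgets.Layout(width="100%",
                                           height="300px",
                                           overflow="auto"))
with out:
    for i in range(200):
        print(f"log line {i}")

out  # the last expression renders the widget in the notebook / voila app
```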
Please reply as soon as possible, I've been searching for this all over the web and haven't found any solution to my problem. | closed | 2021-06-19T16:10:37Z | 2021-06-23T13:56:22Z | https://github.com/voila-dashboards/voila/issues/907 | [] | pingme998 | 5 |
SYSTRAN/faster-whisper | deep-learning | 1,126 | Use multiple GPUs to process queue | I am trying to use both of my GPUs, which are passed through to my docker container.
```yaml
services:
  faster-whisper-server-cuda:
    image: fedirz/faster-whisper-server:latest-cuda
    build:
      dockerfile: Dockerfile.cuda
      context: .
      platforms:
        - linux/amd64
        - linux/arm64
    restart: unless-stopped
    ports:
      - 8162:8000
    environment:
      - WHISPER__MODEL=deepdml/faster-whisper-large-v3-turbo-ct2
      - WHISPER__INFERENCE_DEVICE=cuda
      - WHISPER__COMPUTE_TYPE=int8
      - WHISPER__NUM_WORKERS=4
      - WHISPER__CPU_THREADS=4
      - WHISPER_DEVICE=cuda
      - DEFAULT_LANGUAGE=en
      - PRELOAD_MODELS=["deepdml/faster-whisper-large-v3-turbo-ct2"]
    volumes:
      - hugging_face_cache:/root/.cache/huggingface
    privileged: true
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

volumes:
  hugging_face_cache:
```
I tried everything but it won't use more than 1 GPU even if:

| open | 2024-11-10T20:48:30Z | 2024-11-11T08:45:03Z | https://github.com/SYSTRAN/faster-whisper/issues/1126 | [] | theodufort | 1 |
home-assistant/core | asyncio | 140,984 | LaundryCare.Washer.EnumType.SpinSpeed.RPM700 undefined | ### The problem
HomeConnect stalls for Washing machine after certain programs and fails to update.
Reloading the integration helps.
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Home Connect
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/home_connect
### Diagnostics information
```
2025-03-20 05:49:12.139 ERROR (MainThread) [homeassistant] Error doing job: Task exception was never retrieved (None)
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/home_connect/coordinator.py", line 214, in _event_listener
self._call_event_listener(event_message)
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/home_connect/coordinator.py", line 288, in _call_event_listener
listener()
~~~~~~~~^^
File "/usr/src/homeassistant/homeassistant/components/home_connect/common.py", line 29, in _create_option_entities
for entity in get_option_entities_for_appliance(entry, appliance)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/home_connect/select.py", line 336, in _get_option_entities_for_appliance
HomeConnectSelectOptionEntity(entry.runtime_data, appliance, desc)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/home_connect/select.py", line 495, in __init__
super().__init__(
~~~~~~~~~~~~~~~~^
coordinator,
^^^^^^^^^^^^
appliance,
^^^^^^^^^^
desc,
^^^^^
)
^
File "/usr/src/homeassistant/homeassistant/components/home_connect/entity.py", line 45, in __init__
self.update_native_value()
~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/usr/src/homeassistant/homeassistant/components/home_connect/select.py", line 524, in update_native_value
self.entity_description.values_translation_key[option]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^
KeyError: 'LaundryCare.Washer.EnumType.SpinSpeed.RPM700'
```
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-20T09:50:24Z | 2025-03-20T11:52:37Z | https://github.com/home-assistant/core/issues/140984 | [
"integration: home_connect"
] | wolfgangbures | 5 |
google-research/bert | tensorflow | 639 | Classification fine tuning for Q & A | Hello.
I want to fine-tune BERT for Q&A in a different way than the SQuAD task:
I have pairs of (question, answer).
Part of them are the correct answer (label 1);
part of them are the incorrect answer (label 0).
I want to fine-tune BERT to learn the classification task: given a pair of (q, a), predict whether a is a correct answer for q.
What is the best way to do it?
Which model should I use?
What kind of task is this?
It's not classic classification, because `run_classifier` expects only the text and the label, while I have the answer as well.
It's also not classic Q&A like SQuAD...
What should I do?
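For what it's worth, a minimal sketch of how such (question, answer, label) triples map onto BERT's standard sentence-pair classification input — the `InputExample` class below is a local mirror of the one in `run_classifier.py`, included only for illustration:

```python
# Local stand-in for run_classifier.InputExample (guid, text_a, text_b, label);
# the real script feeds text_a/text_b through BERT as a [CLS] A [SEP] B [SEP] pair.
class InputExample:
    def __init__(self, guid, text_a, text_b=None, label=None):
        self.guid = guid
        self.text_a = text_a
        self.text_b = text_b
        self.label = label

# (question, answer, label) triples: "1" = correct answer, "0" = incorrect
pairs = [
    ("What is BERT?", "A transformer encoder.", "1"),
    ("What is BERT?", "A kind of pasta.", "0"),
]
examples = [
    InputExample(guid=f"qa-{i}", text_a=q, text_b=a, label=y)
    for i, (q, a, y) in enumerate(pairs)
]
print(len(examples))  # 2 pair-classification examples
```

Under this framing it is ordinary sentence-pair classification (like MRPC/QQP in the paper), so the usual `run_classifier` fine-tuning path would apply unchanged.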
Thanks | open | 2019-05-12T15:22:36Z | 2019-09-11T21:57:02Z | https://github.com/google-research/bert/issues/639 | [] | yonatanbitton | 6 |
Yorko/mlcourse.ai | scikit-learn | 767 | fix prophet and NumPy 2.0 | In [https://mlcourse.ai/book/topic09/assignment09_time_series_solution.html](https://mlcourse.ai/book/topic09/assignment09_time_series_solution.html) there's an issue importing prophet.
Same in topic 9 part 2 | closed | 2024-08-19T16:07:32Z | 2025-01-06T16:33:12Z | https://github.com/Yorko/mlcourse.ai/issues/767 | [
"bug"
] | Yorko | 0 |
sinaptik-ai/pandas-ai | data-science | 822 | Connector - MySQL database connection | ```python
from pandasai import SmartDataframe
from pandasai.connectors import MySQLConnector

mysql_connector = MySQLConnector(
    config={
        "host": "localhost",
        "port": 3306,
        "database": "mydb",
        "username": "root",
        "password": "root",
        "table": "loans",
        "where": [
            # this is optional and filters the data to
            # reduce the size of the dataframe
            ["loan_status", "=", "PAIDOFF"],
        ],
    }
)

df = SmartDataframe(mysql_connector)
df.chat('What is the total amount of loans in the last year?')
```
In `table`, how can I mention multiple tables? It is not taking a list of tables.
@gventuri @avelino @nautics889 @victor-hugo-dc | closed | 2023-12-15T12:02:44Z | 2023-12-15T13:45:08Z | https://github.com/sinaptik-ai/pandas-ai/issues/822 | [] | Sana555-Attar | 2 |
pytest-dev/pytest-html | pytest | 35 | Improve handling of multiple images | I'm using `pytest-html` with `pytest-selenium`. My selenium tests are doing an image diff.
I want to include the results of a failed image diff test in my pytest-html report.
However, if I just use `pytest_html.extras.append(...)` then my output gets mixed up with pytest-selenium's output and makes for a very incoherent report e.g.
<img width="1321" alt="screen shot 2016-03-01 at 3 51 00 pm" src="https://cloud.githubusercontent.com/assets/1796208/13446101/726bc5ac-dfc5-11e5-8f20-83ac240b5b03.png">
Full report at: https://s3.amazonaws.com/bokeh-travis/112986254/tests/pytest-report.html
I have hacked around this at the moment by overriding `longrepr` and `extras` completely so that I get something like this:
<img width="1322" alt="screen shot 2016-03-01 at 3 42 47 pm" src="https://cloud.githubusercontent.com/assets/1796208/13446138/a73b14ae-dfc5-11e5-804c-77871b92ac25.png">
Full report at: https://s3.amazonaws.com/bokeh-travis/112997379/tests/pytest-report.html
What I'd like is a hook to be able to add another row of my own custom test output.
| open | 2016-03-01T23:54:02Z | 2020-07-02T19:41:10Z | https://github.com/pytest-dev/pytest-html/issues/35 | [
"enhancement"
] | birdsarah | 19 |
robinhood/faust | asyncio | 358 | partitions argument for Table is ignored | ## Checklist
- [x] I have included information about relevant versions
- [x] I have verified that the issue persists when using the `master` branch of Faust.
## Steps to reproduce
```python
import faust
app = faust.App(
'test',
version=1,
broker='kafka://localhost:9092',
store='memory://',
topic_partitions=2,
processing_guarantee='exactly_once',
)
class Model(faust.Record):
a: int
b: int
test_topic = app.topic(
'test_topic',
value_type=Model,
partitions=2
)
test_table = app.Table(
'test_table',
default=int,
partitions=2
)
@app.agent(test_topic)
async def process(test_stream):
async for stream in test_stream:
test_table[stream.a] += 1
@app.timer(5)
async def produce():
for i in range(10):
obj = Model(i, i+1)
await test_topic.send(value=obj)
```
## Expected behavior
The above script should create 2 partitions for both `test_topic` and
`test-test_table-changelog`.
## Actual behavior
Kafka shows only 1 partition being created for `test_topic`:
```shell
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test_topic
Topic:test_topic PartitionCount:1 ReplicationFactor:1
Configs:
Topic: test_topic Partition: 0 Leader: 0 Replicas: 0 Isr: 0
```
The number of parition for `test-test_table-changelog` is correct:
```shell
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic test-test_table-changelog
Topic:test-test_table-changelog PartitionCount:2 ReplicationFactor:1
Configs:cleanup.policy=compact
Topic: test-test_table-changelog Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: test-test_table-changelog Partition: 1 Leader: 0 Replicas: 0 Isr: 0
```
Faust worker log:
```
┌Highwater - Active─────────┬───────────┬───────────┐
│ topic │ partition │ highwater │
├───────────────────────────┼───────────┼───────────┤
│ test-test_table-changelog │ 0 │ -1 │
│ test-test_table-changelog │ 1 │ -1 │
└───────────────────────────┴───────────┴───────────┘
[2019-06-04 13:56:34,786: INFO]: [^---Recovery]: active offsets at start of reading:
┌Reading Starts At - Active─┬───────────┬────────┐
│ topic │ partition │ offset │
├───────────────────────────┼───────────┼────────┤
│ test-test_table-changelog │ 0 │ -1 │
│ test-test_table-changelog │ 1 │ -1 │
└───────────────────────────┴───────────┴────────┘
[2019-06-04 13:56:35,793: INFO]: [^---Recovery]: standby offsets at start of reading:
┌Reading Starts At - Standby─┐
│ topic │ partition │ offset │
└───────┴───────────┴────────┘
```
## Full traceback
```pytb
[2019-06-04 13:56:37,800: ERROR]: [^---Agent*: test.process]: Crashed reason=PartitionsMismatch("The source topic 'test_topic' for table 'test_table'\nhas 1 partitions, but the changelog\ntopic 'test-test_table-changelog' has 2 partitions.\n\nPlease make sure the topics have the same number of partitions\nby configuring Kafka correctly.\n",)
Traceback (most recent call last):
File "/Users/sohaibfarooqi/projects/faust_test/.env/lib/python3.6/site-packages/faust/agents/agent.py", line 601, in _execute_task
await coro
File "/Users/sohaibfarooqi/projects/faust_test/test.py", line 32, in process
test_table[stream.a] += 1
File "/Users/sohaibfarooqi/projects/faust_test/.env/lib/python3.6/site-packages/mode/utils/collections.py", line 505, in __setitem__
self.on_key_set(key, value)
File "/Users/sohaibfarooqi/projects/faust_test/.env/lib/python3.6/site-packages/faust/tables/table.py", line 72, in on_key_set
self._send_changelog(event, key, value)
File "/Users/sohaibfarooqi/projects/faust_test/.env/lib/python3.6/site-packages/faust/tables/base.py", line 228, in _send_changelog
self._verify_source_topic_partitions(event)
File "/Users/sohaibfarooqi/projects/faust_test/.env/lib/python3.6/site-packages/faust/tables/base.py", line 257, in _verify_source_topic_partitions
change_n=change_n,
faust.exceptions.PartitionsMismatch: The source topic 'test_topic' for table 'test_table'
has 1 partitions, but the changelog
topic 'test-test_table-changelog' has 2 partitions.
Please make sure the topics have the same number of partitions
by configuring Kafka correctly.
```
# Versions
* Python version=3.6.5
* Faust version=1.6.0
* Operating system=OSX Mojave 10.14
* Kafka version=2.20
* RocksDB version=N/A
| closed | 2019-06-04T06:58:01Z | 2021-05-18T07:46:09Z | https://github.com/robinhood/faust/issues/358 | [] | sohaibfarooqi | 6 |
iperov/DeepFaceLab | machine-learning | 5,219 | X-Seg Copy and Paste mask idea | Hey a quick suggestion for the X-Seg tool. Is there a way to copy and paste a mask from the previous frame. When working on obstructions it would make it easier to work with the frames and also it would make the heads more uniform.
Thanks
| closed | 2020-12-27T11:17:13Z | 2020-12-27T13:19:31Z | https://github.com/iperov/DeepFaceLab/issues/5219 | [] | 4damAce | 1 |
littlecodersh/ItChat | api | 298 | How can I implement scheduled message sending? | As the title asks. | closed | 2017-03-24T23:38:27Z | 2020-05-03T17:57:49Z | https://github.com/littlecodersh/ItChat/issues/298 | [
"question"
] | djtu | 1 |
httpie/cli | rest-api | 1,594 | Add type hints for plugin developers | ## Checklist
- [X] I've searched for similar feature requests.
---
## Enhancement request
It would be great if the project could add type hints and expose them (`py.typed` file), so that plugin developers could benefit from type checkers like mypy
--- | open | 2024-08-23T09:00:09Z | 2024-08-23T09:00:09Z | https://github.com/httpie/cli/issues/1594 | [
"enhancement",
"new"
] | kasium | 0 |
lepture/authlib | flask | 280 | Same input causes exception in python 3.8.3 but not in python 3.6.8 | **Describe the bug**
An input for `authlib.jose.jwt.decode()` causes an exception on Python 3.8.3 while it works on Python 3.6.8.
I'm using:
```python
from authlib.jose import jwt
jwt.decode(mytoken, key=mykey, claims_options=something)
```
The causing parameter is `key`. In my case, the (shortened) value is :
```
{
"keys": [
{
"kid": "something2",
"kty": "RSA",
"alg": "RS256",
"use": "sig",
"n": "something3",
"e": "AQAB",
"x5c": ["something4"],
"x5t": "something5",
"x5t#S256": "something6"
}
]
}
```
However, if I set `mykey` to the key in the inner list:
```
{
"kid": "something2",
"kty": "RSA",
"alg": "RS256",
"use": "sig",
"n": "something3",
"e": "AQAB",
"x5c": ["something4"],
"x5t": "something5",
"x5t#S256": "something6"
}
```
then authlib accepts the input for both, python 3.6.8 and 3.8.3.
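As a stdlib-only workaround sketch (the helper name is mine, not authlib's API): pick the JWK whose `kid` matches the token header before calling `jwt.decode`:

```python
def select_key(jwks, kid):
    # jwks is the {"keys": [...]} document; return the entry matching kid
    for key in jwks.get("keys", []):
        if key.get("kid") == kid:
            return key
    raise KeyError(f"no JWK with kid={kid!r}")

jwks = {"keys": [{"kid": "something2", "kty": "RSA", "e": "AQAB"}]}
print(select_key(jwks, "something2")["kty"])  # RSA
```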
**Error Stacks**
```
[...]
File "/Users/njjn/.pyenv/versions/demo2/lib/python3.8/site-packages/authlib/jose/rfc7519/jwt.py", line 98, in decode
data = self._jws.deserialize_compact(s, load_key, decode_payload)
File "/Users/njjn/.pyenv/versions/demo2/lib/python3.8/site-packages/authlib/jose/rfc7515/jws.py", line 102, in deserialize_compact
algorithm, key = self._prepare_algorithm_key(jws_header, payload, key)
File "/Users/njjn/.pyenv/versions/demo2/lib/python3.8/site-packages/authlib/jose/rfc7515/jws.py", line 258, in _prepare_algorithm_key
key = algorithm.prepare_key(key)
File "/Users/njjn/.pyenv/versions/demo2/lib/python3.8/site-packages/authlib/jose/rfc7518/_cryptography_backends/_jws.py", line 41, in prepare_key
return RSAKey.import_key(raw_data)
File "/Users/njjn/.pyenv/versions/demo2/lib/python3.8/site-packages/authlib/jose/rfc7518/_cryptography_backends/_keys.py", line 116, in import_key
return import_key(
File "/Users/njjn/.pyenv/versions/demo2/lib/python3.8/site-packages/authlib/jose/rfc7518/_cryptography_backends/_keys.py", line 277, in import_key
cls.check_required_fields(raw)
File "/Users/njjn/.pyenv/versions/demo2/lib/python3.8/site-packages/authlib/jose/rfc7517/models.py", line 120, in check_required_fields
raise ValueError('Missing required field: "{}"'.format(k))
ValueError: Missing required field: "e"
```
**To Reproduce**
(untested): get a valid key and put it in the structure as depicted in my example above.
**Expected behavior**
The same input gets accepted without errors for both python versions.
**Environment:**
- OS: macos/linux
- Python Version: 3.8.3
- Authlib Version: v0.15 | closed | 2020-10-12T11:01:07Z | 2020-10-14T13:06:20Z | https://github.com/lepture/authlib/issues/280 | [
"bug"
] | juergen-kaiser-by | 12 |
iperov/DeepFaceLab | machine-learning | 533 | Could you provide a demo code to convert photo one by one | closed | 2019-12-26T08:12:38Z | 2020-01-02T01:31:47Z | https://github.com/iperov/DeepFaceLab/issues/533 | [] | Nicolaszh | 0 | |
long2ice/fastapi-cache | fastapi | 143 | recent merge causing coder exception | Hi, an issue appears on the main branch:
the first call works, and the JSON response is persisted to Redis.
The second call, where there's a cache hit, fails with the error below.
It can be reproduced with:
```python
@projects_router.get("")
@cached(expire=60)
def get_projects(
request: Request,
page: int = 1,
limit: int = 20,
start_date: str = None,
end_date: str = None
):
return {"foo":2}
```
2023-05-14T17:55:30.151362251Z {"asctime": "2023-05-14 17:55:30.145", "levelname": "ERROR", "message": "'str' object has no attribute 'decode'", "exc_info": "Traceback (most recent call last):\n File \"/usr/local/lib/python3.9/site-packages/anyio/streams/memory.py\", line 94, in receive\n return self.receive_nowait()\n File \"/usr/local/lib/python3.9/site-packages/anyio/streams/memory.py\", line 89, in receive_nowait\n raise WouldBlock\nanyio.WouldBlock\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py\", line 78, in call_next\n message = await recv_stream.receive()\n File \"/usr/local/lib/python3.9/site-packages/anyio/streams/memory.py\", line 114, in receive\n raise EndOfStream\nanyio.EndOfStream\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py\", line 162, in __call__\n await self.app(scope, receive, _send)\n File \"/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py\", line 108, in __call__\n response = await self.dispatch_func(request, call_next)\n File \"/app/pdt_api/main.py\", line 66, in add_headers_and_context\n response = await call_next(request)\n File \"/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py\", line 84, in call_next\n raise app_exc\n File \"/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py\", line 70, in coro\n await self.app(scope, receive_or_disconnect, send_no_error)\n File \"/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py\", line 84, in __call__\n await self.app(scope, receive, send)\n File \"/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py\", line 79, in __call__\n raise exc\n File \"/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py\", line 68, in 
__call__\n await self.app(scope, receive, sender)\n File \"/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py\", line 21, in __call__\n raise e\n File \"/usr/local/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py\", line 18, in __call__\n await self.app(scope, receive, send)\n File \"/usr/local/lib/python3.9/site-packages/starlette/routing.py\", line 718, in __call__\n await route.handle(scope, receive, send)\n File \"/usr/local/lib/python3.9/site-packages/starlette/routing.py\", line 276, in handle\n await self.app(scope, receive, send)\n File \"/usr/local/lib/python3.9/site-packages/starlette/routing.py\", line 66, in app\n response = await func(request)\n File \"/usr/local/lib/python3.9/site-packages/fastapi/routing.py\", line 237, in app\n raw_response = await run_endpoint_function(\n File \"/usr/local/lib/python3.9/site-packages/fastapi/routing.py\", line 163, in run_endpoint_function\n return await dependant.call(**values)\n File \"/usr/local/lib/python3.9/site-packages/fastapi_cache/decorator.py\", line 205, in inner\n result = cast(R, coder.decode_as_type(cached, type_=return_type))\n File \"/usr/local/lib/python3.9/site-packages/fastapi_cache/coder.py\", line 82, in decode_as_type\n result = cls.decode(value)\n File \"/usr/local/lib/python3.9/site-packages/fastapi_cache/coder.py\", line 110, in decode\n return json.loads(value.decode(), object_hook=object_hook)\nAttributeError: 'str' object has no attribute 'decode'", "ctx": { "x-request-id": null}}
https://github.com/long2ice/fastapi-cache/blob/915f3dd8f2dfe6699a7d6202d8b9c687dd9b15df/fastapi_cache/coder.py#L106
Maybe it shouldn't call `.decode()` on what is persisted as a JSON string (from the JSON response) rather than as bytes.
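A minimal defensive sketch of that idea (hypothetical, not the library's actual fix): only call `.decode()` when the cached value really is bytes:

```python
import json

def safe_json_decode(value):
    # Redis may hand back bytes, while a just-computed response is already str;
    # decoding only the bytes case avoids AttributeError: 'str' object has no attribute 'decode'
    if isinstance(value, bytes):
        value = value.decode()
    return json.loads(value)

print(safe_json_decode(b'{"foo": 2}'))  # {'foo': 2}
print(safe_json_decode('{"foo": 2}'))   # {'foo': 2}
```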
| closed | 2023-05-14T18:07:09Z | 2023-05-14T18:44:28Z | https://github.com/long2ice/fastapi-cache/issues/143 | [] | lb-ronyeh | 1 |
koxudaxi/datamodel-code-generator | fastapi | 1,483 | Unable to parse headers from openapi file | I have this (simplified) schema:
```yaml
openapi: 3.0.0
paths:
/initiate:
get:
parameters:
- $ref: '#/components/parameters/Fancy-ID'
requestBody:
content:
"application/json":
schema:
$ref: '#/components/schemas/FancySchema'
components:
headers:
Fancy-ID:
description: >
This is a fancy ID
schema:
type: string
example: "8476a9db-f82c-4713-824c-c6046521a947"
schemas:
FancySchema:
required:
- pizza
type: object
properties:
topping:
type: string
```
I'm using this command to generate the models:
```
datamodel-codegen --input bug.yaml --output bug.py --field-constraints
```
And I'm getting this output:
```python
# generated by datamodel-codegen:
# filename: bug.yaml
# timestamp: 2023-08-09T10:57:04+00:00
from __future__ import annotations
from typing import Optional
from pydantic import BaseModel
class FancySchema(BaseModel):
topping: Optional[str] = None
```
I'm not getting anything for `Fancy-ID`.
Would you know why this is happening? Am I missing something?
**Version:**
- OS: MacOs
- Python version: 3.11.4
- datamodel-code-generator version: `datamodel-code-generator = {extras = ["http"], version = "^0.21.3"}`
| closed | 2023-08-09T11:01:31Z | 2024-02-08T15:15:19Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1483 | [
"answered"
] | JPFrancoia | 2 |
frappe/frappe | rest-api | 31,491 | Map View Filter not working for child tables | <!--
Welcome to the Frappe Framework issue tracker! Before creating an issue, please heed the following:
1. This tracker should only be used to report bugs and request features / enhancements to Frappe
- For questions and general support, use https://stackoverflow.com/questions/tagged/frappe
- For documentation issues, refer to https://frappeframework.com/docs/user/en or the developer cheetsheet https://github.com/frappe/frappe/wiki/Developer-Cheatsheet
2. Use the search function before creating a new issue. Duplicates will be closed and directed to
the original discussion.
3. When making a bug report, make sure you provide all required information. The easier it is for
maintainers to reproduce, the faster it'll be fixed.
4. If you think you know what the reason for the bug is, share it with us. Maybe put in a PR 😉
-->
## Description of the issue
## Context information (for bug reports)
**Output of `bench version`**
```
erpnext 15.53.0
frappe 15.56.1
```
## Steps to reproduce the issue
1. Open the map view for a doc type with child table
2. filter for a child table field
### Observed result

### Expected result
A working filter
### Stacktrace / full error message
```
Traceback (most recent call last):
File "apps/frappe/frappe/app.py", line 114, in application
response = frappe.api.handle(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/api/__init__.py", line 49, in handle
data = endpoint(**arguments)
^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/api/v1.py", line 36, in handle_rpc_call
return frappe.handler.handle()
^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/handler.py", line 50, in handle
data = execute_cmd(cmd)
^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/handler.py", line 86, in execute_cmd
return frappe.call(method, **frappe.form_dict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/__init__.py", line 1726, in call
return fn(*args, **newargs)
^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/utils/typing_validations.py", line 31, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/geo/utils.py", line 14, in get_coords
coords = return_location(doctype, filters_sql)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "apps/frappe/frappe/geo/utils.py", line 63, in return_location
coords = frappe.db.sql(
^^^^^^^^^^^^^^
File "apps/frappe/frappe/database/database.py", line 230, in sql
self._cursor.execute(query, values)
File "env/lib/python3.11/site-packages/pymysql/cursors.py", line 153, in execute
result = self._query(query)
^^^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/pymysql/cursors.py", line 322, in _query
conn.query(q)
File "env/lib/python3.11/site-packages/pymysql/connections.py", line 563, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/pymysql/connections.py", line 825, in _read_query_result
result.read()
File "env/lib/python3.11/site-packages/pymysql/connections.py", line 1199, in read
first_packet = self.connection._read_packet()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/lib/python3.11/site-packages/pymysql/connections.py", line 775, in _read_packet
packet.raise_for_error()
File "env/lib/python3.11/site-packages/pymysql/protocol.py", line 219, in raise_for_error
err.raise_mysql_exception(self._data)
File "env/lib/python3.11/site-packages/pymysql/err.py", line 150, in raise_mysql_exception
raise errorclass(errno, errval)
pymysql.err.OperationalError: (1054, "Unknown column 'tabSupporter Network.network_type' in 'WHERE'")
```
| open | 2025-03-03T16:23:37Z | 2025-03-03T16:23:37Z | https://github.com/frappe/frappe/issues/31491 | [
"bug"
] | david-loe | 0 |
pydata/xarray | pandas | 9,332 | Enable multi-coord grouping from xarray | I'm working through some around non-trivial `groupby` from colleagues. In particular, grouping by multiple coordinates seems much harder in xarray than pandas. Flox actually does this really nicely, as per the comment from @dcherian:
> As an aside, the API isn't great but this works in flox (I think)
```python
import flox.xarray
flox.xarray.xarray_reduce(da, "labels1", "labels2", func="mean")
```
<img width="824" alt="image" src="https://github.com/user-attachments/assets/fb48a967-a185-4927-a1cb-3bf8695f48f0">
_Originally posted by @dcherian in https://github.com/pydata/xarray/issues/9278#issuecomment-2250993016_
Would be great if `da.groupby(["labels1", "labels2"]).mean()` worked too — is that just a simple translation to flox from the xarray code? (I can probably do it if so). Or is there something more complex going on? | closed | 2024-08-12T21:23:06Z | 2024-08-26T15:56:42Z | https://github.com/pydata/xarray/issues/9332 | [
"topic-groupby"
] | max-sixty | 3 |
glumpy/glumpy | numpy | 14 | Collection update | ### Implement a mechanism for easy updating of a collection item
When a new item is appended to a collection, it is generally baked into a set of vertices whose count might be different from the initial item. For example, a segment is specified using two points, but the baked segment is 4 vertices because of thickness. Updating such item is relatively easy in such case because the item won't change its size when updating. See [collection-update.py](https://github.com/glumpy/glumpy/blob/master/examples/collection-update.py).
Some collection are trickier to handle because an item can be updated with using different size (glyphs & paths). And finally, the polygon collection is also tricky because the topology of the points might influence the number of vertices in the item.
- [ ] RawPointCollection
- [ ] AggPointCollection
- [ ] MarkerPointCollection
- [ ] RawSegmentCollection
- [ ] AggSegmentCollection
- [ ] RawPathCollection
- [ ] AggPathCollection
- [ ] AggFastPathCollection
- [ ] RawPolygonCollection
- [ ] GlyphCollection
| open | 2015-01-02T19:55:36Z | 2015-01-17T18:18:52Z | https://github.com/glumpy/glumpy/issues/14 | [
"enhancement"
] | rougier | 0 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 560 | Error message keeps saying I need to download the pretrained models even though i already have? | 
| closed | 2020-10-16T00:58:35Z | 2020-10-16T05:24:14Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/560 | [] | icandancelikecool | 1 |
CanopyTax/asyncpgsa | sqlalchemy | 17 | Error on pg.init | On trying to invoke `pg.init`, the module throws this cryptic error. I have been looking for this required `init` keyword in the documentation but could not find anything. Am I missing something? Please help.
The code that triggered it:
```python
import asyncio
from asyncpgsa import pg
if __name__ == "__main__":
loop = asyncio.get_event_loop()
loop.run_until_complete( pg.init( dsn="postgres:///tranql_bot" ) )
```
And error:
> Traceback (most recent call last):
> File "/usr/lib/python3.5/runpy.py", line 193, in _run_module_as_main
> "__main__", mod_spec)
> File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
> exec(code, run_globals)
> File "/home/diwank/github.com/creatorrr/tranql-bot/app/__main__.py", line 12, in <module>
> loop.run_until_complete( pg.init( dsn="postgres:///tranql_bot" ) )
> File "/usr/lib/python3.5/asyncio/base_events.py", line 457, in run_until_complete
> return future.result()
> File "/usr/lib/python3.5/asyncio/futures.py", line 292, in result
> raise self._exception
> File "/usr/lib/python3.5/asyncio/tasks.py", line 239, in _step
> result = coro.send(None)
> File "/home/diwank/.envs/tranql/lib/python3.5/site-packages/asyncpgsa/pgsingleton.py", line 33, in init
> self.__pool = await create_pool(*args, **kwargs)
> File "/home/diwank/.envs/tranql/lib/python3.5/site-packages/asyncpgsa/pool.py", line 75, in create_pool
> **connect_kwargs)
> TypeError: __init__() missing 1 required keyword-only argument: 'init'
> | closed | 2017-03-05T23:13:06Z | 2017-03-07T03:39:52Z | https://github.com/CanopyTax/asyncpgsa/issues/17 | [] | creatorrr | 2 |
plotly/dash-recipes | dash | 15 | dash-global-cache.py does not work | When I run it, the error below pops up:
ValueError: cannot have a multithreaded and multi process server.
I modified `processes=6` to `1`, and then I get the following error:
redis.exceptions.ConnectionError: Error 10061 connecting to None:6379. No connection could be made because the target machine actively refused it. | closed | 2018-08-31T15:16:33Z | 2018-08-31T15:27:18Z | https://github.com/plotly/dash-recipes/issues/15 | [] | zlw8844 | 1 |
xinntao/Real-ESRGAN | pytorch | 415 | FileNotFoundError: [WinError 2] 系统找不到指定的文件 | Run this code: `ffmpeg.input(video_path).output('pipe:', format='rawvideo', pix_fmt='bgr24', loglevel='error').run_async(pipe_stdin=True, pipe_stdout=True, cmd=args.ffmpeg_bin))`. The error displayed is: `FileNotFoundError: [WinError 2] 系统找不到指定的文件` ("The system cannot find the file specified"). #415 | open | 2022-08-20T11:19:49Z | 2025-03-02T12:08:06Z | https://github.com/xinntao/Real-ESRGAN/issues/415 | [] | shengyang2 | 9
healthchecks/healthchecks | django | 1,008 | check display | I have very old logs (even from 2022). In the replication log I can see only month and day

It suggests that all checks from the current year should be in the format Mon DD (e.g. Jun 01), but those from previous years should also include the year (Mon DD, YYYY);
or
just to remove ambiguity and display a year next to all logs. | closed | 2024-06-03T17:11:56Z | 2024-06-17T11:07:19Z | https://github.com/healthchecks/healthchecks/issues/1008 | [] | pmatuszy | 0 |
twopirllc/pandas-ta | pandas | 841 | return in finally swallows exceptions | In https://github.com/twopirllc/pandas-ta/blob/b465491f226d9e07fffd4e59cd0affc9284521ca/pandas_ta/utils/_core.py#L45 there is a `return` statement in a `finally` block, which would swallow any in-flight exception.
This means that if any exception is raised from the `try` body (including `BaseException` such as `KeyboardInterrupt`), it will not propagate on as expected.
See also https://docs.python.org/3/tutorial/errors.html#defining-clean-up-actions.
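A self-contained illustration of the behavior being described (not pandas-ta's code):

```python
def swallowed():
    try:
        raise KeyboardInterrupt("in-flight exception")
    finally:
        # the return discards the pending exception — even a BaseException
        return "ok"

print(swallowed())  # prints "ok"; the KeyboardInterrupt never propagates
```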
| closed | 2024-10-24T13:53:46Z | 2024-11-09T22:02:46Z | https://github.com/twopirllc/pandas-ta/issues/841 | [
"bug"
] | iritkatriel | 1 |
gradio-app/gradio | python | 10,519 | [Gradio 5.15 container] - Width size: Something changed | ### Describe the bug
I was controlling width of main interface with custom css in class:
but in this new version its is not working.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
css ="""
.gradio-container {width: 95% !important}
div.gradio-container{
max-width: unset !important;
}
"""
with gr.Blocks(css=css) as app:
with gr.Tabs():
with gr.TabItem("Test"):
gallery = gr.Gallery(label="Generated Images", interactive=True, show_label=True, preview=True, allow_preview=True)
app.launch(inbrowser=True)
```
### Screenshot

### Logs
```shell
N/A
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.15.0
gradio_client version: 1.7.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.4.0
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.7.0 is not installed.
httpx: 0.27.0
huggingface-hub: 0.28.1
jinja2: 3.1.3
markupsafe: 2.1.5
numpy: 1.26.3
orjson: 3.10.6
packaging: 24.1
pandas: 2.2.2
pillow: 11.0.0
pydantic: 2.8.2
pydub: 0.25.1
python-multipart: 0.0.19
pyyaml: 6.0.1
ruff: 0.9.4
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit: 0.12.0
typer: 0.12.3
typing-extensions: 4.12.2
urllib3: 2.2.2
uvicorn: 0.30.5
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.2.0
httpx: 0.27.0
huggingface-hub: 0.28.1
packaging: 24.1
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it | closed | 2025-02-05T21:34:22Z | 2025-02-27T07:03:10Z | https://github.com/gradio-app/gradio/issues/10519 | [
"bug"
] | elismasilva | 4 |
encode/databases | sqlalchemy | 364 | pytest -- AssertionError: DatabaseBackend is not running | ```python
from fastapi.testclient import TestClient
from main import app

client = TestClient(app)

test_device_sn1 = test_device_sn2 = test_device_sn3 = ""

def test_read_devices():
    response = client.get("/devices/")
    assert response.status_code == 200
    assert test_device_sn1 in (d["sn"] for d in response.json())
    assert test_device_sn2 in (d["sn"] for d in response.json())
    assert test_device_sn3 in (d["sn"] for d in response.json())
```

Linux run command: `pytest`

```
FAILED test_api.py::test_read_devices - AssertionError: DatabaseBackend is not running
```

In db.py:

```python
database = databases.Database(DB_URI, min_size=5, max_size=20)
```

In main.py:

```python
@app.on_event("startup")
async def startup():
    await database.connect()

@app.on_event("shutdown")
async def shutdown():
    await database.disconnect()
```
| closed | 2021-07-21T07:04:44Z | 2021-07-21T09:07:58Z | https://github.com/encode/databases/issues/364 | [] | fantasyhh | 1 |
OpenBB-finance/OpenBB | machine-learning | 6,727 | [Bug] Docker build error | **Describe the bug**
I followed the instructions in the docs to build the Docker image, and I get a build error:
```
=> [builder 3/5] COPY ./openbb_platform ./openbb_platform 0.5s
=> ERROR [builder 4/5] RUN pip install /openbb/openbb_platform[all] 6.6s
------
> [builder 4/5] RUN pip install /openbb/openbb_platform[all]:
#13 1.523 Processing ./openbb_platform
...
...
...
#13 6.078 Collecting openbb-fmp<2.0.0,>=1.3.2 (from openbb==4.3.2)
#13 6.087 Downloading openbb_fmp-1.3.2-py3-none-any.whl.metadata (957 bytes)
#13 6.311 Collecting openbb-fred<2.0.0,>=1.3.2 (from openbb==4.3.2)
#13 6.321 Downloading openbb_fred-1.3.2-py3-none-any.whl.metadata (925 bytes)
#13 6.446 INFO: pip is looking at multiple versions of openbb to determine which version is compatible with other requirements. This could take a while.
#13 6.446 ERROR: Could not find a version that satisfies the requirement openbb-imf<2.0.0,>=1.0.0b (from openbb) (from versions: none)
#13 6.446 ERROR: No matching distribution found for openbb-imf<2.0.0,>=1.0.0b
#13 6.490
#13 6.490 [notice] A new release of pip is available: 24.0 -> 24.2
#13 6.490 [notice] To update, run: pip install --upgrade pip
------
executor failed running [/bin/sh -c pip install /openbb/openbb_platform[all]]: exit code: 1
```
**To Reproduce**
Cloned the repo
`docker build -f build/docker/platform.dockerfile -t openbb-platform:latest .`
**Desktop (please complete the following information):**
- OS: Windows 11
 - Python version: N/A, as it's a Docker build
| closed | 2024-10-02T15:12:50Z | 2024-10-10T17:48:48Z | https://github.com/OpenBB-finance/OpenBB/issues/6727 | [
"docker"
] | eugeneniemand | 2 |
dbfixtures/pytest-postgresql | pytest | 1,055 | Use pre-commit on CI | There's a config; some linters and checks can be defined in pre-commit.
However, pre-commit isn't supported by Dependabot... | closed | 2025-01-08T10:57:09Z | 2025-01-31T08:58:11Z | https://github.com/dbfixtures/pytest-postgresql/issues/1055 | [] | fizyk | 9 |
modin-project/modin | pandas | 6,908 | Make it clearer which parameters to pass into the engine initialization regarding number of workers | We have the following warnings when initializing an engine.
**Ray**
```bash
UserWarning: Ray execution environment not yet initialized. Initializing...
To remove this warning, run the following python code before doing dataframe operations:
import ray
ray.init()
```
**Dask**
```bash
UserWarning: Dask execution environment not yet initialized. Initializing...
To remove this warning, run the following python code before doing dataframe operations:
from distributed import Client
client = Client()
```
By default, Ray and Dask use all available cores to create workers for parallel computation, but it is not always beneficial to spawn a number of workers equal to the number of CPUs. We should make it clearer which parameters to pass into the engine initialization regarding the number of workers to spawn. This would be especially useful for Dask, which creates both worker threads and worker processes by default. | closed | 2024-02-02T20:09:11Z | 2024-02-06T16:43:28Z | https://github.com/modin-project/modin/issues/6908 | [
"new feature/request 💬",
"Dask ⚡",
"Ray ⚡"
] | YarShev | 0 |
microsoft/qlib | machine-learning | 1,621 | How to load dataset from pandas dataframe? | How can I load a dataset without needing the qlib_data directory, feeding the dataset directly from a pandas DataFrame? This would be necessary in HF (high-frequency) trading, since disk I/O is very slow. | closed | 2023-08-10T07:49:14Z | 2023-11-16T00:06:24Z | https://github.com/microsoft/qlib/issues/1621 | [
"question",
"stale"
] | 2young-2simple-sometimes-naive | 3 |
huggingface/datasets | nlp | 7,049 | Save nparray as list | ### Describe the bug
When I use the `map` function to convert images into features, `datasets` saves the NumPy array as a list. Some people use the `set_format` function to convert the column back, but doesn't this lose precision?
### Steps to reproduce the bug
the map function
```python
def convert_image_to_features(inst, processor, image_dir):
image_file = inst["image_url"]
file = image_file.split("/")[-1]
image_path = os.path.join(image_dir, file)
image = Image.open(image_path)
image = image.convert("RGBA")
inst["pixel_values"] = processor(images=image, return_tensors="np")["pixel_values"]
return inst
```
main function
```python
map_fun = partial(
convert_image_to_features, processor=processor, image_dir=image_dir
)
ds = ds.map(map_fun, batched=False, num_proc=20)
print(type(ds[0]["pixel_values"]))
```
### Expected behavior
(type: `<class 'list'>`)
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-4.19.91-009.ali4000.alios7.x86_64-x86_64-with-glibc2.35
- Python version: 3.11.5
- `huggingface_hub` version: 0.23.4
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0 | closed | 2024-07-15T11:36:11Z | 2024-07-18T11:33:34Z | https://github.com/huggingface/datasets/issues/7049 | [] | Sakurakdx | 5 |
hankcs/HanLP | nlp | 726 | Person-name recognition error: 党员来到村民焦玉莲家中 | <!--
Notes and the version number are required; otherwise the issue will not be answered. If you would like a quick reply, please fill in the template carefully. Thank you for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and did not find an answer:
 - [Home documentation](https://github.com/hankcs/HanLP)
 - [wiki](https://github.com/hankcs/HanLP/wiki)
 - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and did not find an answer.
* I understand that the open-source community is a free community formed out of shared interest and bears no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I have entered an x in the brackets to confirm the items above.
## Version
<!-- For release versions, give the jar file name without the extension; for the GitHub repository version, state whether it is the master or portable branch -->
The current latest version is:
The version I am using is:
<!-- The items above are required; the rest is free-form -->
## My problem
<!-- Please describe the problem in detail; the more detail, the more likely it gets solved -->
Person-name recognition error: 党员来到村民焦玉莲家中 ("a Party member visits villager Jiao Yulian's home")
## Reproduction
<!-- How did you trigger the problem? For example, did you modify the code, the dictionary, or the model? -->
### Steps
1. First…
2. Then…
3. Next…
### Triggering code
```java
HanLP.Config.enableDebug(true);
Segment segment = HanLP.newSegment().enableNameRecognize(true).enableOrganizationRecognize(true).enablePlaceRecognize(true).enableOffset(true).enablePartOfSpeechTagging(true).enableCustomDictionary(true);
```
### Expected output
<!-- What correct result do you expect? -->
```
党员/nnt, 来到/v, 村民/n, 焦玉莲/nr, 家中/s
```
### Actual output
<!-- What did HanLP actually output? What was the effect? Where is it wrong? -->
```
党员/nnt, 来到/v, 村民/n, 焦/ng, 玉莲/nz, 家中/s
```
## Other information
<!-- Any potentially useful information: screenshots, logs, configuration files, related issues, etc. -->
| closed | 2017-12-28T05:33:01Z | 2020-01-01T10:51:14Z | https://github.com/hankcs/HanLP/issues/726 | [
"ignored"
] | dbmove | 2 |
davidsandberg/facenet | tensorflow | 1,007 | Transferring Models to C++ using OpenCV | Hi @davidsandberg and thanks for sharing your great work.
I want to use the models in C++ using OpenCV.
This is what I have already tried and the corresponding errors:
1- Input layer not found:
First, I tried readNetFromTensorflow("model.pb") with no config file, using your .pb file, but it gives this error:
`OpenCV(4.0.0) Error: Unspecified error (Input layer not found: image_batch) in cv::dnn::dnn4_v20180917:: 'anonymous-namespace' :: TFImporter::connect, file C:build\master_winpack-build-win64-vc14\opencv\modules\dnn\src\tensorflow\src\tensorflow\tf_importer.cpp, line 497`
2- In the next step I produced a .pbtxt file using this script:

```python
with tf.gfile.GFile('model.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.train.write_graph(graph_def, '', 'model.pbtxt')
```

Now I get this error when I call readNetFromTensorflow("model.pb", "model.pbtxt"):
`[cv::Exception] = {msg="OpenCV(4.0.0) C:\\build\\master_winpack-build-win64-vc14\\opencv\\modules\\dnn\\src\\tensorflow\\tf_importer.cpp:616: error: (-215:Assertion failed) const_layers.insert(std::make_pair(name, li)).second in fun... ...}`
I also saw https://github.com/davidsandberg/facenet/issues/699, but nobody answered there.
My question is:
Is the model compatible with OpenCV's DNN implementation? | open | 2019-04-17T06:39:13Z | 2019-12-08T16:37:21Z | https://github.com/davidsandberg/facenet/issues/1007 | [] | Rasoul20sh | 4 |