repo_name stringlengths 9 75 | topic stringclasses 30
values | issue_number int64 1 203k | title stringlengths 1 976 | body stringlengths 0 254k | state stringclasses 2
values | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | url stringlengths 38 105 | labels listlengths 0 9 | user_login stringlengths 1 39 | comments_count int64 0 452 |
|---|---|---|---|---|---|---|---|---|---|---|---|
jazzband/django-oauth-toolkit | django | 1,348 | Sensitive data like access token is logged in logs. | It would be good if this logging were configurable, i.e. we could enable or disable logging of such details via a flag in settings.py
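Independent of django-oauth-toolkit's internals, one generic interim approach is a `logging.Filter` that redacts token-like values before records are emitted. This is only a sketch: the `access_token=` pattern below is an illustrative assumption, not the library's actual log format.

```python
import logging
import re

class RedactTokens(logging.Filter):
    """Redact anything that looks like an access token from log messages."""

    TOKEN_RE = re.compile(r"(access_token=)\S+")

    def filter(self, record: logging.LogRecord) -> bool:
        # Rewrite the message in place; returning True keeps the record.
        record.msg = self.TOKEN_RE.sub(r"\1[REDACTED]", str(record.msg))
        return True
```

Attaching such a filter to the relevant logger (possibly gated on a settings flag) would hide the sensitive values without silencing the rest of the log.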
| closed | 2023-10-24T11:21:49Z | 2023-10-31T12:48:26Z | https://github.com/jazzband/django-oauth-toolkit/issues/1348 | [
"enhancement"
] | ygg-basil | 3 |
jupyter-incubator/sparkmagic | jupyter | 527 | Jupyterhub-Sparkmagic-Livy-Kerberos | Is there any example of configuring Sparkmagic to use Kerberos authentication with Livy? Setting the authentication type to "Kerberos" in config.json doesn't work; I get a "401" error. But if I open a terminal in the user session in JupyterHub and run the klist command, I can see the TGT ticket in the ccache. I'm using the default spawner. Should I use another specific custom spawner? | closed | 2019-05-09T07:50:44Z | 2021-08-31T21:02:10Z | https://github.com/jupyter-incubator/sparkmagic/issues/527 | [] | aleevangelista | 5 |
scikit-multilearn/scikit-multilearn | scikit-learn | 43 | Python 2 vs Python 3 | What are the plans regarding Python 2 vs Python 3? Stick with Python 2 for the time being?
| closed | 2016-06-03T13:02:50Z | 2016-06-03T15:03:45Z | https://github.com/scikit-multilearn/scikit-multilearn/issues/43 | [] | queirozfcom | 2 |
aminalaee/sqladmin | asyncio | 624 | Customized sort_query | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
At the moment, the admin panel does not support customizing how objects are sorted, but this could be solved with a custom sort_query method, similar to list_query and search_query.
### Describe the solution you would like.
You can move the sorting from the main list method to separate methods.
```python
async def list(self, request: Request) -> Pagination:
...
sort_fields = self._get_sort_fields(request)
stmt = self.sort_query(stmt, sort_fields)
...
```
A separate method for getting fields.
```python
def _get_sort_fields(self, request: Request) -> List[Tuple[str, bool]]:
sort_by = request.query_params.get("sortBy", None)
sort = request.query_params.get("sort", "asc")
if sort_by:
sort_fields = [(sort_by, sort == "desc")]
else:
sort_fields = self._get_default_sort()
return sort_fields
```
And an overridable method that can be used at your discretion.
```python
def sort_query(self, stmt: Select, sort_fields: List[Tuple[str, bool]]) -> Select:
for sort_field, is_desc in sort_fields:
if is_desc:
stmt = stmt.order_by(desc(sort_field))
else:
stmt = stmt.order_by(asc(sort_field))
return stmt
```
The idea of overridable methods solves many problems and gives great flexibility.
Nothing really new is being added here, but it solves the problem of custom sorting.
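As a self-contained sketch (independent of sqladmin and SQLAlchemy), the query-parameter parsing proposed above boils down to:

```python
from typing import Dict, List, Tuple

def get_sort_fields(
    query_params: Dict[str, str],
    default: List[Tuple[str, bool]],
) -> List[Tuple[str, bool]]:
    """Return (field, is_descending) pairs from ?sortBy=...&sort=... params."""
    sort_by = query_params.get("sortBy")
    sort = query_params.get("sort", "asc")
    if sort_by:
        return [(sort_by, sort == "desc")]
    return default
```

For example, `get_sort_fields({"sortBy": "name", "sort": "desc"}, default=[("id", False)])` returns `[("name", True)]`, which the overridable `sort_query` would then translate into `ORDER BY` clauses.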
### Describe alternatives you considered
_No response_
### Additional context
_No response_ | closed | 2023-09-21T08:54:40Z | 2023-09-22T09:27:59Z | https://github.com/aminalaee/sqladmin/issues/624 | [] | YarLikviD | 1 |
netbox-community/netbox | django | 18,056 | Filter or Search by multi column | ### NetBox version
v4.1.6
### Feature type
New functionality
### Triage priority
I volunteer to perform this work (if approved)
### Proposed functionality
A filter/search function like the one in Excel or the other software shown below would be more convenient.


### Use case
It would make filtering on multiple columns much handier.
### Database changes
_No response_
### External dependencies
_No response_ | closed | 2024-11-21T09:09:24Z | 2025-02-26T04:24:05Z | https://github.com/netbox-community/netbox/issues/18056 | [
"type: feature",
"status: revisions needed",
"pending closure"
] | funbsd | 3 |
MaartenGr/BERTopic | nlp | 1,526 | Can BERTopic.fit_transform be accelerated with GPU support? | I am new to BERTopic. When I run
```
from bertopic import BERTopic
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)
```
all the processing happens on my CPU.
I was wondering: Are there parts of the computation that could be run on the GPU, and is there a way to enable that? | closed | 2023-09-12T14:54:11Z | 2023-09-19T19:30:08Z | https://github.com/MaartenGr/BERTopic/issues/1526 | [] | clstaudt | 4 |
ultralytics/ultralytics | machine-learning | 19,448 | ultralytics randomly restarts training run while training | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
I'm having the weirdest problem. I'm training a yolov11 model on my data with the following params:
```
epochs=50,
perspective=0.004,
shear=15,
imgsz=1024,
batch=0.6,
workers=2,
lr0=3e-4,
lrf=0.001,
cos_lr=True,
weight_decay=0.001,
model_type: str = "yolo11l-seg.pt",
optimizer: str = "auto",
fliplr: float = 0.5,
scale: float = 0,
imgsz: int = 1024, #: tuple = (768,1024),
profile: bool = False,
overlap_mask: bool = False,
degrees: float = 0,
```
Now at epoch 20ish, or sometimes at other epochs, the training run randomly restarts without any error (see image). I'm wondering if someone has ever had the same issue.
<img width="1162" alt="Image" src="https://github.com/user-attachments/assets/a2f57fa3-1c72-467a-aaa5-f34f6a9579fb" />
### Environment
Linux machine on modal.com, 24GB gpu L4.
### Minimal Reproducible Example
No idea...
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-02-26T17:37:40Z | 2025-02-27T06:03:10Z | https://github.com/ultralytics/ultralytics/issues/19448 | [
"bug",
"detect"
] | ThierryDeruyttere | 3 |
Lightning-AI/pytorch-lightning | machine-learning | 20,199 | LightningCLI: --help argument given after the subcommand fails | ### Bug description
I'm trying to use LightningCLI to configure my code from the command line, but LightningCLI seems to have trouble parsing the default logger of the trainer when I run the following:
```
python test.py fit -h
```
However, just running the command without the -h flag works.
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
```python
import torch
from torch.utils.data import DataLoader, Dataset
from pytorch_lightning import LightningModule
from pytorch_lightning.cli import LightningCLI
class RandomDataset(Dataset):
def __init__(self, size, length):
self.len = length
self.data = torch.randn(length, size)
def __getitem__(self, index):
return self.data[index]
def __len__(self):
return self.len
class RandomModel(LightningModule):
def __init__(self):
super().__init__()
self.layer = torch.nn.Linear(32, 2)
def forward(self, x):
return self.layer(x)
def training_step(self, batch, batch_idx):
loss = self(batch).sum()
self.log("train_loss", loss)
return {"loss": loss}
def configure_optimizers(self):
return torch.optim.SGD(self.layer.parameters(), lr=0.1)
def train_dataloader(self):
return DataLoader(RandomDataset(32, 64))
def main():
cli = LightningCLI(RandomModel)
if __name__=='__main__':
main()
```
### Error messages and logs
```
ValueError: Not possible to determine the import path for object typing.Iterable[pytorch_lightning.loggers.logger.Logger].
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA GeForce RTX 4090
- available: True
- version: 12.1
* Lightning:
- lightning: 2.4.0
- lightning-utilities: 0.11.6
- pytorch-lightning: 2.4.0
- torch: 2.4.0
- torchaudio: 2.4.0
- torchmetrics: 1.4.0.post0
- torchvision: 0.19.0
* Packages:
- accelerate: 0.21.0
- asttokens: 2.0.5
- autocommand: 2.2.2
- backcall: 0.2.0
- backports.tarfile: 1.2.0
- bottleneck: 1.3.7
- brotli: 1.0.9
- certifi: 2024.7.4
- charset-normalizer: 3.3.2
- colorama: 0.4.6
- comm: 0.2.2
- contourpy: 1.2.0
- cycler: 0.11.0
- debugpy: 1.6.7
- decorator: 5.1.1
- diffusers: 0.18.2
- docstring-parser: 0.16
- entrypoints: 0.4
- exceptiongroup: 1.2.0
- executing: 0.8.3
- filelock: 3.13.1
- fonttools: 4.51.0
- fsspec: 2024.6.1
- gmpy2: 2.1.2
- huggingface-hub: 0.23.1
- idna: 3.7
- importlib-metadata: 7.0.1
- importlib-resources: 6.4.0
- inflect: 7.3.1
- ipykernel: 6.29.5
- ipython: 8.15.0
- jaraco.context: 5.3.0
- jaraco.functools: 4.0.1
- jaraco.text: 3.12.1
- jedi: 0.19.1
- jinja2: 3.1.4
- joblib: 1.4.2
- jsonargparse: 4.32.0
- jupyter-client: 7.4.9
- jupyter-core: 5.7.2
- kiwisolver: 1.4.4
- lightning: 2.4.0
- lightning-utilities: 0.11.6
- markupsafe: 2.1.3
- matplotlib: 3.8.4
- matplotlib-inline: 0.1.6
- mkl-fft: 1.3.8
- mkl-random: 1.2.4
- mkl-service: 2.4.0
- more-itertools: 10.3.0
- mpmath: 1.3.0
- nest-asyncio: 1.6.0
- networkx: 3.2.1
- numexpr: 2.8.7
- numpy: 1.26.4
- ordered-set: 4.1.0
- packaging: 24.1
- pandas: 2.2.2
- parso: 0.8.3
- pickleshare: 0.7.5
- pillow: 10.4.0
- pip: 24.2
- platformdirs: 4.2.2
- ply: 3.11
- prompt-toolkit: 3.0.43
- psutil: 5.9.0
- pure-eval: 0.2.2
- pygments: 2.15.1
- pyparsing: 3.0.9
- pyqt5: 5.15.10
- pyqt5-sip: 12.13.0
- pysocks: 1.7.1
- python-dateutil: 2.9.0.post0
- pytorch-lightning: 2.4.0
- pytz: 2024.1
- pywin32: 305.1
- pyyaml: 6.0.1
- pyzmq: 24.0.1
- regex: 2024.7.24
- requests: 2.32.3
- scikit-learn: 1.5.1
- scipy: 1.13.1
- setuptools: 72.1.0
- sip: 6.7.12
- six: 1.16.0
- stack-data: 0.2.0
- sympy: 1.12
- threadpoolctl: 3.5.0
- tomli: 2.0.1
- torch: 2.4.0
- torchaudio: 2.4.0
- torchmetrics: 1.4.0.post0
- torchvision: 0.19.0
- tornado: 6.4.1
- tqdm: 4.66.5
- traitlets: 5.14.3
- typeguard: 4.3.0
- typeshed-client: 2.7.0
- typing-extensions: 4.11.0
- tzdata: 2023.3
- unicodedata2: 15.1.0
- urllib3: 2.2.2
- wcwidth: 0.2.5
- wheel: 0.43.0
- win-inet-pton: 1.1.0
- zipp: 3.17.0
* System:
- OS: Windows
- architecture:
- 64bit
- WindowsPE
- processor: Intel64 Family 6 Model 183 Stepping 1, GenuineIntel
- python: 3.9.19
- release: 10
- version: 10.0.22631
</details>
### More info
I'm wondering if this is a bug, or just something I'm doing wrong with my setup?
Also, without the -h flag, the code runs fine. However, my config file has the following output for the trainer:
```YAML
trainer:
accelerator: auto
strategy: auto
devices: auto
num_nodes: 1
precision: null
logger: null
callbacks: null
fast_dev_run: false
max_epochs: null
min_epochs: null
max_steps: -1
min_steps: null
max_time: null
limit_train_batches: null
limit_val_batches: null
limit_test_batches: null
limit_predict_batches: null
overfit_batches: 0.0
val_check_interval: null
check_val_every_n_epoch: 1
num_sanity_val_steps: null
log_every_n_steps: null
enable_checkpointing: null
enable_progress_bar: null
enable_model_summary: null
accumulate_grad_batches: 1
gradient_clip_val: null
gradient_clip_algorithm: null
deterministic: null
benchmark: null
inference_mode: true
use_distributed_sampler: true
profiler: null
detect_anomaly: false
barebones: false
plugins: null
sync_batchnorm: false
reload_dataloaders_every_n_epochs: 0
default_root_dir: null
```
Should there really be so many null values? | open | 2024-08-14T04:24:17Z | 2024-11-08T09:01:45Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20199 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | nisar2 | 5 |
dynaconf/dynaconf | fastapi | 212 | [bug] In Python 3.4 and below inspect.stack() returns just a tuple in the list, not a list of named tuples | **Describe the bug**
In Python 3.4 and below, `inspect.stack()` returns a list of plain tuples, not a list of named tuples. Only since version 3.5 does it return a list of named tuples.
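A version-portable workaround (a sketch, not dynaconf's actual fix) is to use positional indexing, which works on both the old plain tuples and the newer `FrameInfo` named tuples, since element 1 is the filename in both layouts:

```python
import inspect

def caller_filename() -> str:
    """Filename of the outermost stack frame, portable to Python < 3.5.

    inspect.stack() items are FrameInfo named tuples on 3.5+ and plain
    tuples before that; index [1] is the filename in both cases.
    """
    frame_info = inspect.stack()[-1]
    return frame_info[1]  # equivalent to frame_info.filename on 3.5+
```

Using `frame_info[1]` instead of `frame_info.filename` in `dynaconf/utils/files.py` would avoid the `AttributeError` on Python 3.4.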
**To Reproduce**
Steps to reproduce the behavior:
1. Having the following folder structure
<!-- Describe or use the command `$ tree -v` and paste below -->
<details>
<summary> Project structure </summary>
```bash
# example.py
# settings.ini
```
</details>
2. Having the following config files:
<!-- Please adjust if you are using different files and formats! -->
<details>
<summary> Config files </summary>
**settings.ini**
```ini
[default]
path = 'example.txt'
```
</details>
3. Having the following app code:
<details>
<summary> Code </summary>
**example.py**
```python
from dynaconf import settings
print(settings.PATH)
```
</details>
4. Executing under the following environment
<details>
<summary> Execution </summary>
```bash
$ python example.py
Traceback (most recent call last):
File "example.py", line 3, in <module>
print(settings.PATH)
File "C:\Python\Python34\lib\site-packages\dynaconf\base.py", line 93, in __getattr__
self._setup()
File "C:\Python\Python34\lib\site-packages\dynaconf\base.py", line 118, in _setup
default_settings.reload()
File "C:\Python\Python34\lib\site-packages\dynaconf\default_settings.py", line 74, in reload
start_dotenv(*args, **kwargs)
File "C:\Python\Python34\lib\site-packages\dynaconf\default_settings.py", line 61, in start_dotenv
or _find_file(".env", project_root=root_path)
File "C:\Python\Python34\lib\site-packages\dynaconf\utils\files.py", line 62, in find_file
script_dir = os.path.dirname(os.path.abspath(inspect.stack()[-1].filename))
AttributeError: 'tuple' object has no attribute 'filename'
```
</details>
**Expected behavior**
```python
from dynaconf import settings
print(settings.PATH)
# 'example.txt'
```
**Debug output**
<details>
<summary> Debug Output </summary>
```bash
export `DEBUG_LEVEL_FOR_DYNACONF=DEBUG` reproduce your problem and paste the output here
2019-08-26:16:46:23,988 DEBUG [default_settings.py:55 - start_dotenv] Starting Dynaconf Dotenv Base
2019-08-26:16:46:23,989 DEBUG [files.py:57 - find_file] No root_path for .env
Traceback (most recent call last):
File "example.py", line 3, in <module>
print(settings.PATH)
File "C:\Python\Python34\lib\site-packages\dynaconf\base.py", line 93, in __getattr__
self._setup()
File "C:\Python\Python34\lib\site-packages\dynaconf\base.py", line 118, in _setup
default_settings.reload()
File "C:\Python\Python34\lib\site-packages\dynaconf\default_settings.py", line 74, in reload
start_dotenv(*args, **kwargs)
File "C:\Python\Python34\lib\site-packages\dynaconf\default_settings.py", line 61, in start_dotenv
or _find_file(".env", project_root=root_path)
File "C:\Python\Python34\lib\site-packages\dynaconf\utils\files.py", line 62, in find_file
script_dir = os.path.dirname(os.path.abspath(inspect.stack()[-1].filename))
AttributeError: 'tuple' object has no attribute 'filename'
```
</details>
**Environment (please complete the following information):**
- OS: Windows 8.1
- Dynaconf Version 2.0.4
- Python Version 3.4.3
| closed | 2019-08-26T11:52:32Z | 2019-09-02T18:00:25Z | https://github.com/dynaconf/dynaconf/issues/212 | [
"help wanted",
"Not a Bug",
"good first issue"
] | Jazzis18 | 3 |
serpapi/google-search-results-python | web-scraping | 43 | Exception not handled on SerpApiClient.get_json | I am experiencing unexpected behavior when issuing thousands of queries: for some reason, the API sometimes returns an empty response. It happens at random (perhaps 1 time out of 10,000).
When this happens, the SerpApiClient.get_json method does not handle the empty response; as a consequence, json.loads() raises a JSONDecodeError.
I attach an image to clarify the issue.

It seems to be a problem with the API service. I'm not sure whether it should be solved with exception handling, by handling status code 204 (empty response), or whether there is a bug on the servers.
To reproduce the exception:
```python
import json
json.loads('')
```
Do you recommend any guidelines for handling the problem while you review the issue in the source code?
Thanks.
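As an interim client-side guard (a sketch only, not part of the serpapi package), the `json.loads` call can be wrapped so an empty body is reported explicitly instead of surfacing as a bare `JSONDecodeError`:

```python
import json
from typing import Any, Optional

def parse_response_body(text: str) -> Optional[Any]:
    """Parse an API response body, returning None for empty or invalid JSON."""
    if not text.strip():
        # Empty body (e.g. an HTTP 204): nothing to decode.
        return None
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Malformed body: treat it like an empty response instead of crashing.
        return None
```

A caller can then retry the query when `None` comes back, rather than aborting the whole batch.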
| open | 2023-04-12T15:36:37Z | 2023-04-13T09:53:22Z | https://github.com/serpapi/google-search-results-python/issues/43 | [] | danielperezefremova-tomtom | 1 |
deepinsight/insightface | pytorch | 2,566 | Commercial needs | Hi, much respect for your work on InsightFace!
We would like to use the antelopev2/buffalo_1 models in our commercial project, but unfortunately found that commercial use of the models and datasets is restricted. I'd like to know whether there is any commercial cooperation opportunity to achieve our goal, perhaps via a payment for usage.
Eagerly looking forward to your reply! | open | 2024-04-22T07:35:51Z | 2024-04-22T07:35:51Z | https://github.com/deepinsight/insightface/issues/2566 | [] | Zwe1 | 0 |
encode/uvicorn | asyncio | 1,345 | Connection aborted after HttpParserInvalidMethodError when making consecutive POST requests | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
After making a few `POST` requests, one of them fails with the connection aborted. The server logs a message about an invalid HTTP method, even though only the POST method is ever attempted.
### Steps to reproduce the bug
The bug is reproduced by making multiple `requests.post` calls against a service running the latest uvicorn.
Unfortunately I was not able to create a reliable reproduction; running locally, all my requests seemed to succeed.
### Expected behavior
All the requests used to pass in sequence without errors
### Actual behavior
One of the requests, after an inconsistent number of successful requests, fails with a connection error after sending the request. The server logs the warning shown in the debugging material below.
### Debugging material
Request aborted
```
File "/usr/local/lib/python3.10/site-packages/requests/api.py", line 117, in post
return request('post', url, data=data, json=json, **kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 529, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/sessions.py", line 645, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.10/site-packages/requests/adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
```
Server warning (`uvicorn.error`).
```
"Invalid HTTP request received."
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 132, in data_received
self.parser.feed_data(data)
File "httptools/parser/parser.pyx", line 212, in httptools.parser.parser.HttpParser.feed_data
httptools.parser.errors.HttpParserInvalidMethodError: Invalid method encountered
```
### Environment
- Running uvicorn 0.17.1 with CPython 3.10.1 on Linux
- Encountered with Kubernetes services, _without_ reverse proxy/ingress
### Additional context
Since the requests library uses keep-alive connections for the requests, a suspect change is the recent https://github.com/encode/uvicorn/pull/1332; reverting the uvicorn version seems to make the problem go away. | closed | 2022-01-30T20:12:07Z | 2022-02-03T14:15:25Z | https://github.com/encode/uvicorn/issues/1345 | [] | vogre | 11 |
coqui-ai/TTS | python | 3,854 | How can I customize a speaker on server? | My config looks like this, but it won't run:

I also ran this command, but the config still doesn't work:
`.\venv\Scripts\python.exe TTS/server/server.py --config_path .\TTS\server\conf.json --model_name tts_models/zh-CN/baker/tacotron2-DDC-GST` | closed | 2024-08-06T12:33:32Z | 2024-08-17T02:52:50Z | https://github.com/coqui-ai/TTS/issues/3854 | [
"feature request"
] | durantgod | 1 |
aminalaee/sqladmin | asyncio | 137 | Lazy support | ### Checklist
- [x] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
Hello. In my project I have a lot of relationships with `lazy="dynamic"`, but sqladmin doesn't support them.
### Describe the solution you would like.
I would like to see a config setting like load_lazy; if it is True, load all relationships.
### Describe alternatives you considered
_No response_
### Additional context
_No response_ | open | 2022-04-18T18:16:25Z | 2022-07-10T11:00:07Z | https://github.com/aminalaee/sqladmin/issues/137 | [
"hold"
] | badger-py | 1 |
openapi-generators/openapi-python-client | rest-api | 451 | Exception when generating list properties in multipart forms | After upgrading from 0.9.2 to 0.10.0 the client generation fails with:
```
Traceback (most recent call last):
File "REDACTED/.venv/bin/openapi-python-client", line 8, in <module>
sys.exit(app())
File "REDACTED/.venv/lib/python3.8/site-packages/typer/main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "REDACTED/.venv/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "REDACTED/.venv/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "REDACTED/.venv/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "REDACTED/.venv/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "REDACTED/.venv/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "REDACTED/.venv/lib/python3.8/site-packages/typer/main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "REDACTED/.venv/lib/python3.8/site-packages/openapi_python_client/cli.py", line 141, in generate
errors = create_new_client(
File "REDACTED/.venv/lib/python3.8/site-packages/openapi_python_client/__init__.py", line 314, in create_new_client
return project.build()
File "REDACTED/.venv/lib/python3.8/site-packages/openapi_python_client/__init__.py", line 108, in build
self._build_api()
File "REDACTED/.venv/lib/python3.8/site-packages/openapi_python_client/__init__.py", line 263, in _build_api
module_path.write_text(endpoint_template.render(endpoint=endpoint), encoding=self.file_encoding)
File "REDACTED/.venv/lib/python3.8/site-packages/jinja2/environment.py", line 1289, in render
self.environment.handle_exception()
File "REDACTED/.venv/lib/python3.8/site-packages/jinja2/environment.py", line 924, in handle_exception
raise rewrite_traceback_stack(source=source)
File "REDACTED/.venv/lib/python3.8/site-packages/openapi_python_client/templates/endpoint_module.py.jinja", line 38, in top-level template code
{{ multipart_body(endpoint) | indent(4) }}
File "REDACTED/.venv/lib/python3.8/site-packages/jinja2/runtime.py", line 828, in _invoke
rv = self._func(*arguments)
File "REDACTED/.venv/lib/python3.8/site-packages/openapi_python_client/templates/endpoint_macros.py.jinja", line 80, in template
{{ transform_multipart(property, property.python_name, destination) }}
File "REDACTED/.venv/lib/python3.8/site-packages/jinja2/utils.py", line 81, in from_obj
if hasattr(obj, "jinja_pass_arg"):
jinja2.exceptions.UndefinedError: the template 'property_templates/list_property.py.jinja' (imported on line 79 in 'endpoint_macros.py.jinja') does not export the requested name 'transform_multipart'
```
| closed | 2021-07-07T08:11:50Z | 2021-07-11T23:57:26Z | https://github.com/openapi-generators/openapi-python-client/issues/451 | [
"🐞bug"
] | dpursehouse | 11 |
qubvel-org/segmentation_models.pytorch | computer-vision | 342 | Expected more than 1 value per channel when training when using PAN, other architecture works fine | ```
import torch
import segmentation_models_pytorch as smp

model = smp.PAN(
encoder_name="efficientnet-b3",
encoder_weights=None,
in_channels=1,
classes=num_classes,
activation= None
)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
test = torch.rand(1, 1, 128, 128).cuda()
out = model(test)
out.shape
```
I got this error when running the code above:
`ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 32, 1, 1])`
The code works fine if you change PAN to another architecture. | closed | 2021-02-08T06:22:59Z | 2024-07-30T14:22:39Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/342 | [] | leocd91 | 5 |
bmoscon/cryptofeed | asyncio | 693 | FTX websockets url change | I believe "wss://ftexchange.com/ws/" should be replaced by "wss://ftx.com/ws/" for the FTX exchange, as per FTX's documentation. An error is returned by the API when using the former, and it first started appearing today. | closed | 2021-10-20T00:21:07Z | 2021-10-20T13:06:26Z | https://github.com/bmoscon/cryptofeed/issues/693 | [] | jfc1493 | 0 |
OthersideAI/self-operating-computer | automation | 164 | [Question] About the Third-party API | [Question] How can I integrate a third-party API? Is it enough to change the client.base_url in config.py to the URL of the third-party API, such as client.base_url = "https://api.example.com" ? Do I also need to add something like /v1/chat/completions to the URL?
Thank you very much! | open | 2024-02-17T02:35:31Z | 2024-12-01T10:27:01Z | https://github.com/OthersideAI/self-operating-computer/issues/164 | [] | lueluelue2006 | 1 |
kizniche/Mycodo | automation | 661 | RPM Control PID: Single PWM output for both raise and lower | ## Mycodo Issue Report:
- Specific Mycodo Version: 7.5.3
#### Problem Description
I have 2 PWM-controlled fans whose RPM I'm monitoring and want to control.
If I want to do this with a PID controller, I have to create 2 PWM outputs, because the raise and lower outputs aren't allowed to be the same.
Would it be possible to allow using the same output for both in a future version?
| closed | 2019-05-28T09:48:17Z | 2019-05-29T23:14:37Z | https://github.com/kizniche/Mycodo/issues/661 | [] | floriske | 4 |
dgtlmoon/changedetection.io | web-scraping | 1,954 | [feature] add setting to show watched price / any other filter in overview.. | **Version and OS**
Docker
**Describe the solution you'd like**
When a price element is watched, it would be helpful to see the price on the overview page, in the table between the website and last-checked columns. That way it's easy to tell which webpage has the lowest price without having to go through them one by one.
**Describe the use-case and give concrete real-world examples**
Example shown in the image below:

| closed | 2023-11-10T07:43:28Z | 2024-07-12T15:09:44Z | https://github.com/dgtlmoon/changedetection.io/issues/1954 | [
"enhancement",
"user-interface"
] | AlexGuld | 1 |
flaskbb/flaskbb | flask | 155 | Doesn't serve static files in standalone mode | If I set up flaskbb and start it running locally with "make run", it doesn't serve any of the static files.
I'm not sure if there's something I've missed during setup, since the docs don't entirely appear to line up with the current state of flaskbb.
| closed | 2015-12-28T18:17:14Z | 2018-04-15T07:47:37Z | https://github.com/flaskbb/flaskbb/issues/155 | [] | gordonjcp | 1 |
jmcnamara/XlsxWriter | pandas | 697 | can't combine sheet.right_to_left() with and set_column with justify-center? | Hi,
I am using XlsxWriter to write RTL data. I want the worksheet to be right to left aligned, but the contents of each cell to be horizontally and vertically justified to the middle/center.
When I remove `sheet.right_to_left()`, the vertical and horizontal alignment work perfectly; however, when I add it either before or after the `set_column()` call, all contents are justified to the right.
I am using Python version 3.8.1 and XlsxWriter 1.2.8 and Pages.
Here is some code that demonstrates the problem:
```python
import pandas as pd
top_movers_dict = {'שם יישוב': {350: 'מלכיאור לוי', 1207: 'שמואל'}, 'top_movers': {350: -0.03, 1207: -0.011}, '23_scoring_pct': {350: 0.563, 1207: 0.729}, '22_scoring_pct': {350: 0.593, 1207: 0.74}, 'זבז_23': {350: 12693, 1207: 22565}}
top_movers = pd.DataFrame.from_dict(top_movers_dict)
# Create a Pandas Excel writer using XlsxWriter as the engine.
writer = pd.ExcelWriter("pandas_simple.xlsx", engine="xlsxwriter")
workbook = writer.book
cell_format = workbook.add_format({"valign": "vcenter", "align": "center"})
# Convert the dataframe to an XlsxWriter Excel object.
top_movers.to_excel(writer, sheet_name="Sheet1", index=False)
worksheet = writer.sheets["Sheet1"] # pull worksheet object
worksheet.right_to_left()
for idx, col in enumerate(top_movers): # loop through all columns
series = top_movers[col]
max_len = (
max(
(
series.astype(str).map(len).max(), # len of largest item
len(str(series.name)), # len of column name/header
)
)
+ 1
) # adding a little extra space
worksheet.set_column(idx, idx, max_len, cell_format) # set column width
# Close the Pandas Excel writer and output the Excel file.
writer.save()
```
| closed | 2020-03-04T22:37:35Z | 2020-03-07T19:07:09Z | https://github.com/jmcnamara/XlsxWriter/issues/697 | [
"under investigation",
"awaiting user feedback"
] | SHxKM | 6 |
openapi-generators/openapi-python-client | fastapi | 621 | Support Client Side Certificates | It would be nice if the generated client would support client side certificates.
This would be an additional authentication method used in secured environments.
The underlaying httpx lib does support it with a named argument "cert":
https://www.python-httpx.org/advanced/#client-side-certificates
I was not able to get the kwargs from the openapi-python-client passed through to httpx. | closed | 2022-06-01T15:01:51Z | 2022-11-12T19:03:25Z | https://github.com/openapi-generators/openapi-python-client/issues/621 | [
"✨ enhancement"
] | marioland | 2 |
collerek/ormar | fastapi | 814 | can ormar.JSON accept ensure_ascii=False | I'm Chinese, and I want to keep the original Chinese characters when saving JSON to the DB.
So, is there any way to pass ensure_ascii=False to the ormar.JSON field type to do that?
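For context (plain standard-library `json`, independent of ormar), this is the difference `ensure_ascii` makes:

```python
import json

data = {"city": "北京"}

# Default: non-ASCII characters are escaped to \uXXXX sequences.
escaped = json.dumps(data)

# With ensure_ascii=False the original characters are kept verbatim.
verbatim = json.dumps(data, ensure_ascii=False)

print(escaped)   # {"city": "\u5317\u4eac"}
print(verbatim)  # {"city": "北京"}
```

Presumably the feature request amounts to ormar exposing this flag to its internal `json.dumps` call.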
| open | 2022-09-06T14:37:11Z | 2022-09-27T10:28:06Z | https://github.com/collerek/ormar/issues/814 | [
"enhancement"
] | ljj038 | 1 |
coqui-ai/TTS | python | 3,992 | Finetune XTTS for new languages | Hello everyone, below is my code for fine-tuning XTTS for a new language. It works well in my case with over 100 hours of audio.
https://github.com/nguyenhoanganh2002/XTTSv2-Finetuning-for-New-Languages | closed | 2024-09-08T08:18:10Z | 2025-01-25T12:14:49Z | https://github.com/coqui-ai/TTS/issues/3992 | [
"wontfix",
"feature request"
] | anhnh2002 | 25 |
3b1b/manim | python | 1,798 | UpdatersExample negative width error in example scenes | ### Describe the error
In the UpdatersExample in the [example scenes](https://3b1b.github.io/manim/getting_started/example_scenes.html), the line:
`lambda m: m.set_width(w0 * math.cos(self.time - now))` results in an error. In order for this to match the animation in the video it should read:
`lambda m: m.set_width(w0 * (math.cos(self.time - now - PI/2)+1))`
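The reason for the negative-width/overflow behavior is that `cos(t)` goes negative, while `cos(t - PI/2) + 1` stays in `[0, 2]`. A quick standard-library check of both expressions (independent of manim itself):

```python
import math

w0 = 2.0
ts = [i * 0.05 for i in range(400)]  # sample times over several periods

original = [w0 * math.cos(t) for t in ts]
fixed = [w0 * (math.cos(t - math.pi / 2) + 1) for t in ts]

# The original expression produces negative widths...
assert any(w < 0 for w in original)
# ...while the corrected one never does.
assert all(w >= 0 for w in fixed)
```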
### Code and Error
**Code**:
UpdatersExample
**Error**:
manim\manimlib\mobject\mobject.py:903: RuntimeWarning: overflow encountered in double_scalars
self.scale(length / old_length, **kwargs)
### Environment
**OS System**: Windows 10
**manim version**: master manimGL v1.6.1
**python version**: Python 3.9.6
| open | 2022-04-23T18:49:16Z | 2022-06-07T05:16:25Z | https://github.com/3b1b/manim/issues/1798 | [] | vchizhov | 1 |
ydataai/ydata-profiling | jupyter | 937 | The running process is not finished | I ran this example and the process never finishes:

The same happens with other datasets. I use Jupyter Notebook, Windows 10, and Python 3.8.12.
Is there any solution for this? I searched for a solution on the internet but found nothing that worked. | closed | 2022-03-05T06:36:49Z | 2022-10-05T16:08:04Z | https://github.com/ydataai/ydata-profiling/issues/937 | [
"information requested ❔"
] | haloapping | 2 |
miguelgrinberg/Flask-SocketIO | flask | 1,566 | Gunicorn and eventlet error | Hi Miguel, I'm developing a _Flask REST API_. I'm trying to use WebSockets to push messages from the backend to the frontend.
I can run the project locally, and everything works fine. But when I try to use **Gunicorn** to start up the project, the project does not work properly.
## This is my code / entry point
### wsgi.py file
```
#pylint: disable=wrong-import-position, wrong-import-order
import eventlet
eventlet.monkey_patch()
import os
import time
from flask import g, request
from flask_socketio import SocketIO
from app import create_app, db
app = create_app(os.environ.get("FLASK_ENV", 'development'))
socketio = SocketIO(app, async_mode='eventlet')
if __name__ == '__main__':
socketio.run(app, host="0.0.0.0", port=5000, debug=True, use_reloader=True, log_output=True)
```
### Running locally
```
export FLASK_ENV=development
export FLASK_APP=wsgi.py
export DEBUG=1
python wsgi.py
```
### Terminal output
```
Server initialized for eventlet.
* Restarting with stat
Server initialized for eventlet.
* Debugger is active!
* Debugger PIN: 124-101-814
(73886) wsgi starting up on http://0.0.0.0:5000
```
But when I try to use Gunicorn as it is described [here](https://flask-socketio.readthedocs.io/en/latest/deployment.html#gunicorn-web-server)
```
gunicorn -k eventlet -w 1 wsgi:app -b :5000
```
### Terminal output
```
[2021-06-07 19:26:26 -0300] [74095] [INFO] Starting gunicorn 20.1.0
[2021-06-07 19:26:26 -0300] [74095] [INFO] Listening at: http://0.0.0.0:5000 (74095)
[2021-06-07 19:26:26 -0300] [74095] [INFO] Using worker: eventlet
[2021-06-07 19:26:26 -0300] [74097] [INFO] Booting worker with pid: 74097
Server initialized for eventlet.
Traceback (most recent call last):
File "/home/jony/flask_app/venv/lib/python3.8/site-packages/eventlet/hubs/hub.py", line 476, in fire_timers
timer()
File "/home/jony/flask_app/venv/lib/python3.8/site-packages/eventlet/hubs/timer.py", line 59, in __call__
cb(*args, **kw)
File "/home/jony/flask_app/venv/lib/python3.8/site-packages/eventlet/greenthread.py", line 221, in main
result = function(*args, **kwargs)
File "/home/jony/flask_app/venv/lib/python3.8/site-packages/gunicorn/workers/geventlet.py", line 78, in _eventlet_serve
conn, addr = sock.accept()
File "/home/jony/flask_app/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", line 230, in accept
self._trampoline(fd, read=True, timeout=self.gettimeout(), timeout_exc=_timeout_exc)
File "/home/jony/flask_app/venv/lib/python3.8/site-packages/eventlet/greenio/base.py", line 208, in _trampoline
return trampoline(fd, read=read, write=write, timeout=timeout,
File "/home/jony/flask_app/venv/lib/python3.8/site-packages/eventlet/hubs/__init__.py", line 155, in trampoline
listener = hub.add(hub.READ, fileno, current.switch, current.throw, mark_as_closed)
File "/home/jony/flask_app/venv/lib/python3.8/site-packages/eventlet/hubs/kqueue.py", line 53, in add
self._control([event], 0, 0)
File "/home/jony/flask_app/venv/lib/python3.8/site-packages/eventlet/hubs/kqueue.py", line 39, in _control
return self.kqueue.control(events, max_events, timeout)
OSError: [Errno 9] Bad file descriptor
[2021-06-07 19:26:27 -0300] [74097] [ERROR] Exception in worker process
```
Any idea what I'm doing wrong?
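For reference, one commonly suggested check (an assumption on my part, not an official answer) is to skip the manual monkey-patching when running under gunicorn's eventlet worker, since that worker performs its own patching; sketched here as a testable helper:

```python
import sys

def should_monkey_patch(argv):
    """Return True when launched directly (python wsgi.py), False under
    gunicorn, whose eventlet worker performs its own monkey-patching
    (assumption based on gunicorn's geventlet worker, not verified here)."""
    return "gunicorn" not in (argv[0] if argv else "")

if should_monkey_patch(sys.argv):
    # here you would: import eventlet; eventlet.monkey_patch()
    pass
```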
Thanks in advance,
Juan R
Let me attach my requirements.txt, in case it helps.
[requirements.txt](https://github.com/miguelgrinberg/Flask-SocketIO/files/6612303/requirements.txt)
| closed | 2021-06-07T22:31:25Z | 2021-06-27T19:38:31Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1566 | [
"question"
] | jonyr | 6 |
jupyterlab/jupyter-ai | jupyter | 377 | /learn error with non-UTF-8 files | Hi,
When I run `/learn docs/` I get the message:
`Sorry, that path doesn't exist: C:\Users\sdonn\docs/`
That is not the working directory I'm using in Jupyter, so it appears to be in the wrong directory? Do I need to enter the working directory somehow? Tried this a few ways but to no avail... Thanks.
Edit: AI magics and chat are working perfectly fine. | closed | 2023-09-03T11:28:32Z | 2023-10-30T16:05:40Z | https://github.com/jupyterlab/jupyter-ai/issues/377 | [
"bug",
"scope:chat-ux"
] | scottdonnelly | 10 |
davidsandberg/facenet | tensorflow | 505 | Training on MSCELEB | When training on MSCELEB, did you do anything different than when training on CASIA, or was it just the addition of more data? | open | 2017-10-27T20:33:02Z | 2017-12-06T17:42:40Z | https://github.com/davidsandberg/facenet/issues/505 | [] | RakshakTalwar | 6 |
vimalloc/flask-jwt-extended | flask | 144 | Error with the basic example | Hi, I followed all the basic steps to install the package from http://flask-jwt-extended.readthedocs.io/en/stable/installation.html, and when I try to run the basic usage example from http://flask-jwt-extended.readthedocs.io/en/stable/basic_usage.html, this happens every time:

| closed | 2018-05-03T09:43:24Z | 2018-05-03T09:51:50Z | https://github.com/vimalloc/flask-jwt-extended/issues/144 | [] | CarlosCordoba96 | 0 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1,137 | Hi! | Hi!
For a tabletop RPG I need to change my voice, and I found this app (which is great, btw); I planned to run my sessions on Discord. I did the same thing as in the video so that other people in the voice chat could hear the change, but it didn't work; I don't know why. I don't know if I did something wrong or if it's a bug, but I'm posting it here in case someone else is having the same issue.
Thank you in advance, and have a great day!
__Originally posted by @BL4NK69 in https://github.com/symphonly/figaro/issues/65__ | closed | 2022-11-20T01:49:18Z | 2022-12-02T08:51:51Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1137 | [] | ImanuillKant1 | 0 |
pandas-dev/pandas | data-science | 60,779 | BUG: pd.read_csv Incorrect Checksum validation for COMPOSITE Checksum | ### Pandas version checks
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import base64
import zlib
import awswrangler as wr
import boto3
import pandas as pd
# DL DEV
AWS_ACCESS_KEY_ID = <Redacted>
AWS_SECRET_ACCESS_KEY = <Redacted>
AWS_SESSION_TOKEN = <Redacted>
session_west = boto3.Session(
aws_access_key_id=AWS_ACCESS_KEY_ID,
aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
aws_session_token=AWS_SESSION_TOKEN,
region_name="eu-west-1",
)
client = session_west.client("s3")
localpath = <Redacted>
bigfile = "bigfile.csv"
smallfile = "smallfile.csv"
bucket = "checksum-test-bucket"
s3path = "checksum-test"
for filetype in [smallfile, bigfile]:
with open(f"{localpath}{filetype}", "rb") as file:
# Calculate CRC32 ourselves for reference
crcval = zlib.crc32(file.read())
crc_bytes = crcval.to_bytes(4, "big")
crc = base64.b64encode(crc_bytes).decode("utf-8")
print(f"{filetype} - {crc}")
with open(f"{localpath}{filetype}", "rb") as file:
client.put_object(
Bucket=bucket, Key=f"{s3path}/put_object/{filetype}", Body=file
)
client.upload_file(
Bucket=bucket,
Key=f"{s3path}/upload_file/{filetype}",
Filename=f"{localpath}{filetype}",
)
for filetype in [smallfile, bigfile]:
for upload_method in ["put_object", "upload_file"]:
path = f"s3://{bucket}/{s3path}/{upload_method}/{filetype}"
print(path)
try:
fw: pd.DataFrame = wr.s3.read_csv(
path=path,
dtype="object",
boto3_session=session_west,
)
print(fw.shape)
except Exception as e:
print(f"wrangler failed - {e}")
try:
fp = pd.read_csv(
path,
storage_options={
"key": AWS_ACCESS_KEY_ID,
"secret": AWS_SECRET_ACCESS_KEY,
"token": AWS_SESSION_TOKEN,
},
)
print(fp.shape)
except Exception as e:
print(f"Pandas fail - {e}")
try:
client = session_west.client("s3")
fb = client.get_object(
Bucket=bucket,
Key=f"{s3path}/{upload_method}/{filetype}",
ChecksumMode="ENABLED",
)
print(f'{fb["ChecksumCRC32"]} - {fb["ChecksumType"]}')
except Exception as e:
print(f"boto error - {e}")
```
### Issue Description
Boto3 >=1.36.0 has modified its behaviour to add a CRC32 checksum by default where supported.
When accessing S3 objects with pd.read_csv, any object stored with a COMPOSITE checksum fails to read, because the checksum it is compared against is computed as a FULL_OBJECT checksum.
A composite checksum appears to be calculated when an object exceeds ~10 MB with boto3's upload_file(); seemingly it switches to a multipart upload behind the scenes at that threshold. Other explicit multipart uploads will presumably behave the same way.
A test using both Pandas and awswrangler is included for completeness.
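To make the FULL_OBJECT/COMPOSITE distinction concrete, here is a small self-contained sketch. The composite formula (a CRC32 computed over the concatenated part-level CRC32s, reported with a "-&lt;parts&gt;" suffix) is my reading of the AWS documentation, not something verified against S3 here:

```python
import base64
import zlib

def crc32_b64(data: bytes) -> str:
    """Base64 of the big-endian CRC32 of `data` (the form S3 reports)."""
    return base64.b64encode(zlib.crc32(data).to_bytes(4, "big")).decode()

body = b"x" * (20 * 1024 * 1024)   # stand-in for bigfile.csv (~20 MB)
part_size = 8 * 1024 * 1024        # assumed multipart chunk size
parts = [body[i:i + part_size] for i in range(0, len(body), part_size)]

full_object = crc32_b64(body)

# COMPOSITE (per my reading of the AWS docs): CRC32 computed over the
# concatenated 4-byte CRC32s of each part, reported as "<b64>-<parts>".
part_crcs = b"".join(zlib.crc32(p).to_bytes(4, "big") for p in parts)
composite = f"{crc32_b64(part_crcs)}-{len(parts)}"

print(full_object)
print(composite)  # ends in "-3"; almost certainly differs from full_object
```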
Output for failing versions
```pip show boto3 botocore s3transfer pandas awswrangler| egrep 'Name:|Version:'
Name: boto3
Version: 1.36.5
Name: botocore
Version: 1.36.5
Name: s3transfer
Version: 0.11.2
Name: pandas
Version: 2.2.3
Name: awswrangler
Version: 3.11.0
smallfile.csv - CbsfmA==
bigfile.csv - vGPIeA==
s3://checksum-test-bucket/checksum-test/put_object/smallfile.csv
(1461, 91)
(1461, 91)
CbsfmA== - FULL_OBJECT
s3://checksum-test-bucket/checksum-test/upload_file/smallfile.csv
(1461, 91)
(1461, 91)
CbsfmA== - FULL_OBJECT
s3://checksum-test-bucket/checksum-test/put_object/bigfile.csv
(20467, 91)
(20467, 91)
vGPIeA== - FULL_OBJECT
s3://checksum-test-bucket/checksum-test/upload_file/bigfile.csv
wrangler failed - Expected checksum DIoExg== did not match calculated checksum: vGPIeA==
Pandas fail - Expected checksum DIoExg== did not match calculated checksum: vGPIeA==
DIoExg==-2 - COMPOSITE
```
Using boto3 <1.36, all scenarios from the example code work.
Test with the older version:
`pip install "boto3<1.36.0"`
Output from working version
```pip show boto3 botocore s3transfer pandas awswrangler| egrep 'Name:|Version:'
Name: boto3
Version: 1.35.99
Name: botocore
Version: 1.35.99
Name: s3transfer
Version: 0.10.4
Name: pandas
Version: 2.2.3
Name: awswrangler
Version: 3.11.0
smallfile.csv - CbsfmA==
bigfile.csv - vGPIeA==
s3://checksum-test-bucket/checksum-test/put_object/smallfile.csv
(1461, 91)
(1461, 91)
None - None
s3://checksum-test-bucket/checksum-test/upload_file/smallfile.csv
(1461, 91)
(1461, 91)
None - None
s3://checksum-test-bucket/checksum-test/put_object/bigfile.csv
(20467, 91)
(20467, 91)
None - None
s3://checksum-test-bucket/checksum-test/upload_file/bigfile.csv
(20467, 91)
(20467, 91)
None - None```
### Expected Behavior
When reading using pd.read_csv the checksum calculated for comparison should be aware of whether the stored checksum is FULL_OBJECT or COMPOSITE and handle it correctly.
### Installed Versions
<details>
Working versions
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.11
python-bits : 64
OS : Linux
OS-release : 6.8.0-1021-aws
Version : #23~22.04.1-Ubuntu SMP Tue Dec 10 16:50:46 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.utf8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.12.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 16.0.0
pyreadstat : None
pytest : 8.2.0
python-calamine : None
pyxlsb : None
s3fs : 2024.12.0
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
None
Failing versions
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.11.11
python-bits : 64
OS : Linux
OS-release : 6.8.0-1021-aws
Version : #23~22.04.1-Ubuntu SMP Tue Dec 10 16:50:46 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.utf8
LOCALE : en_US.UTF-8
pandas : 2.2.3
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.12.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 16.0.0
pyreadstat : None
pytest : 8.2.0
python-calamine : None
pyxlsb : None
s3fs : 2024.12.0
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
None
</details>
| closed | 2025-01-24T11:48:54Z | 2025-01-27T12:34:44Z | https://github.com/pandas-dev/pandas/issues/60779 | [
"Bug",
"IO CSV",
"IO Network",
"Needs Triage"
] | munch9 | 2 |
waditu/tushare | pandas | 850 | [Bug] Suspension/resumption data contains duplicates | ID: araura@126.com
df = pro.suspend(ts_code='', suspend_date='20181119', resume_date='', fields='')
df[df['ts_code']=='002193.SZ']
or
df = pro.suspend(ts_code='', suspend_date='20181120', resume_date='', fields='')
df[df['ts_code']=='601600.SH']
Of these, the second case returns:
ts_code suspend_date resume_date suspend_reason
35 601600.SH 20181120 None Major event not yet announced
37 601600.SH 20181120 20181121 Major event not yet announced
Seeing this, I am starting to suspect the data may even be retroactively updating historical records.
https://tushare.pro/register?reg=126473 | closed | 2018-11-29T09:48:51Z | 2018-11-29T13:22:23Z | https://github.com/waditu/tushare/issues/850 | [] | araura2000 | 1 |
tflearn/tflearn | tensorflow | 264 | Is it possible to get the validation error in DNN.fit? | Hello, Thanks for this awesome work.
I just transfer the work to tflearn from torch. So I am pretty new here.
A quick question: When I apply the validation set in the DNN.fit, how can I get the prediction when the TFlearn.DNN.fit and get the best model?
If not, is it possible to write the code in the loop to evaluate the model and save it likes follows?
for steps in xrange(MAX_STEPS):
model.fit
error = model.evaluate/predict
if the error is the minimal:
save model
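The loop sketched above can be written out generically. This is only a sketch: `train_one_epoch`, `evaluate`, and `save` are hypothetical stand-ins for `model.fit`, `model.evaluate`, and `model.save`, not TFLearn's actual API:

```python
# Generic version of the loop above. train_one_epoch, evaluate and save
# are hypothetical stand-ins for model.fit / model.evaluate / model.save.
def run(train_one_epoch, evaluate, save, max_steps=5):
    best_error = float("inf")
    for step in range(max_steps):
        train_one_epoch()
        error = evaluate()
        if error < best_error:   # keep only the best checkpoint so far
            best_error = error
            save(step)
    return best_error

# Toy demo: validation error bottoms out at step 2, then rises again.
errors = iter([0.9, 0.5, 0.3, 0.4, 0.6])
saved = []
best = run(lambda: None, lambda: next(errors), saved.append)
print(best, saved)  # 0.3 [0, 1, 2]
```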
| closed | 2016-08-09T18:02:50Z | 2016-08-10T17:51:27Z | https://github.com/tflearn/tflearn/issues/264 | [] | ShownX | 3 |
home-assistant/core | python | 141,223 | Fitbit token refresh fails | ### The problem
I use the Fitbit integration and have no issues authenticating and fetching data. Unfortunately, the token refresh always fails and I have to reconfigure and reauthorize the integration with Fitbit every day or so.
I also have a different addon script that fetches extra Fitbit data and sends it to HA via MQTT and am encountering the same problem there:
```json
{
"errors": [{
"errorType": "invalid_grant",
"message": "Refresh token invalid: xxxxxx. Visit https://dev.fitbit.com/docs/oauth2 for more information on the Fitbit Web API authorization process."
}],
"success": false
}
```
### What version of Home Assistant Core has the issue?
2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Fitbit
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/fitbit/
### Diagnostics information
2025-03-23 10:24:40.875 DEBUG (MainThread) [homeassistant.components.fitbit.application_credentials] Client response error status=400, body=
2025-03-23 10:24:40.875 WARNING (MainThread) [homeassistant.config_entries] Config entry 'Bob X.' for fitbit integration could not authenticate: Bad Request error: 400, message='Bad Request', url='https://api.fitbit.com/oauth2/token'
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-23T16:25:49Z | 2025-03-23T21:31:41Z | https://github.com/home-assistant/core/issues/141223 | [
"integration: fitbit"
] | Cyberes | 6 |
521xueweihan/HelloGitHub | python | 2,340 | Open-source project: PRemoteM - a crisp and snappy remote desktop manager for Windows | ## Recommended Project
<!-- This is the submission entry for projects recommended in the HelloGitHub monthly; self-recommendations and recommendations of open-source projects are welcome. The only requirement: please introduce the project following the prompts below. -->
<!-- Click "Preview" above to view the submitted content immediately -->
<!-- Only open-source projects on GitHub are accepted; please fill in the GitHub project URL -->
- Project URL: https://github.com/1Remote/PRemoteM
<!-- Please choose from (C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning) -->
- Category: C#
<!-- Please describe what it does in roughly 20 words, like an article title, so it is clear at a glance -->
- Project title:
PRemoteM - a crisp and snappy remote desktop manager for Windows
<!-- What is this project, what can it be used for, what are its features or what pain points does it solve, what scenarios is it suited for, and what can beginners learn from it? Length 32-256 characters -->
- Project description:
This project is dedicated to providing an excellent remote desktop management experience; through its launcher design, it helps users quickly open remote connections to servers. It currently supports RDP, SSH, SFTP, FTP, RemoteApp, and other remote connection methods.
<!-- What makes it eye-catching? What sets it apart from similar projects? -->
- Highlights:
1. The project already supports multi-monitor, full-screen RDP remote desktop connections, and may be the only third-party RDP client on the market, apart from Microsoft's official one, that supports this feature.
2. Quickly launching remote desktops through the launcher gives a very pleasant experience.
3. The project has been released for free on the Microsoft Store and is well received. (The store's maximum rating is 5 stars; the project has received 61 ratings averaging 4.9 stars. See: https://www.microsoft.com/store/productId/9PNMNF92JNFP )
![image](https://user-images.githubusercontent.com/18471652/186555449-dee06f24-9f5d-4997-a2bb-3af1e3574857.png)
- Future update plans:
Long-term maintenance, fixing bugs, and incorporating improvement suggestions from all sides.
| closed | 2022-08-25T10:23:09Z | 2022-10-28T03:32:53Z | https://github.com/521xueweihan/HelloGitHub/issues/2340 | [
"已发布",
"C# 项目"
] | VShawn | 1 |
Evil0ctal/Douyin_TikTok_Download_API | api | 448 | [BUG] douyin/web/fetch_one_video return 400 code with error message "An error occurred." | ***Platform where the error occurred?***
Douyin
***The endpoint where the error occurred?***
https://douyin.wtf/api/douyin/web/fetch_one_video?aweme_id=7235604590035569976
***Submitted input value?***

***Have you tried again?***
Yes, I tried several times but got the same error.
***Have you checked the readme or interface documentation for this project?***
Yes, and I am quite sure that the problem is caused by the program.
| closed | 2024-07-14T06:33:13Z | 2024-07-17T21:36:19Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/448 | [
"BUG",
"enhancement"
] | akngg | 1 |
huggingface/datasets | deep-learning | 6,661 | Import error on Google Colab | ### Describe the bug
The library cannot be imported on Google Colab; the import throws the following error:
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
1. `! pip install -U datasets`
2. `import datasets`
### Expected behavior
Should be possible to use the library
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0 | closed | 2024-02-13T13:12:40Z | 2024-02-25T16:37:54Z | https://github.com/huggingface/datasets/issues/6661 | [] | kithogue | 4 |
deeppavlov/DeepPavlov | tensorflow | 954 | ValueError: Error when checking target: expected activation_5 to have shape (2,) but got array with shape (1,) | Hi,
I'm training a binary classifier, and I keep getting the error in the title whenever my training data is 1282 samples long.
I have this in my configuration:
```
"split_fields": [
"train",
"valid"
],
"split_proportions": [
0.9,
0.1
]
```
I did some debugging, and I'm pretty sure the problem is that the data is split into a training set of 1153 samples and a validation set containing the rest.
This causes `DataLearningIterator.gen_batches` to generate a final batch of just 1 sample (because 1153 divided by 64 leaves a remainder of 1).
This, in turn, causes a problem in `KerasClassificationModel.train_on_batch`.
Specifically, it seems like `np.squeeze(np.array(labels))` creates an array of shape `(2,)` instead of `(1,2)` when `len(labels)==1`.
The exception is then caused by `training_utils.standardize_input_data` in `Keras` reading this as it would an array with shape `(2,1)`, i.e. two label vectors of length 1 instead of one label vector of length 2.
Thanks | closed | 2019-07-30T10:48:59Z | 2019-08-20T11:54:09Z | https://github.com/deeppavlov/DeepPavlov/issues/954 | [] | drorvinkler | 1 |
yinkaisheng/Python-UIAutomation-for-Windows | automation | 180 | WebDriver API for this tool | This is just a suggestion to provide a WebDriver API to this project/tool. It could also be a 3rd party project for such support.
I created such support for AutoIt: https://github.com/daluu/autoitdriverserver (in Python); it could similarly be adapted for this project. | open | 2021-09-27T22:06:53Z | 2021-12-21T03:26:52Z | https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/180 | [] | daluu | 1 |
dbfixtures/pytest-postgresql | pytest | 1,102 | Drop PostgreSQL 13 from CI [Nov 20225] | In November 2025, PostgreSQL 13 ends its life: https://endoflife.date/postgresql | open | 2025-03-06T08:52:12Z | 2025-03-06T08:52:24Z | https://github.com/dbfixtures/pytest-postgresql/issues/1102 | [] | fizyk | 0 |
roboflow/supervision | tensorflow | 1,313 | Colormap for visualizing depth, normal and gradient images | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
hello :wave: @SkalskiP @LinasKo ,
what do you think about adding a feature to visualize depth images, normal image and graident images across various tasks?
### Use case
_No response_
### Additional
[here](https://learnopencv.com/applycolormap-for-pseudocoloring-in-opencv-c-python/) is a full reference of implementation in opencv
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2024-06-27T13:41:04Z | 2024-10-30T09:28:37Z | https://github.com/roboflow/supervision/issues/1313 | [
"enhancement"
] | hardikdava | 5 |
itamarst/eliot | numpy | 87 | Create neo4j input script for Eliot logs | neo4j seems pretty well suited for storing Eliot logs - it's a graph database, so it can represent relationships between actions, and it's also schemaless, so it should be able to handle arbitrary messages.
http://neo4j.com/guides/graph-concepts/
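As a rough illustration (pure Python, no neo4j driver), the action tree one would load into neo4j can be derived from the messages themselves. The parent/child rule below, "a message's parent is its task_level minus the last element", is a simplified assumption about Eliot's task_uuid/task_level fields, not Eliot's exact semantics:

```python
# Pure-Python sketch of deriving the action tree that would be loaded
# into neo4j. The task_level values and parent rule are a simplified
# assumption for illustration, not Eliot's exact semantics.
messages = [
    {"task_uuid": "t1", "task_level": [1], "action_type": "app:main"},
    {"task_uuid": "t1", "task_level": [2, 1], "action_type": "db:query"},
    {"task_uuid": "t1", "task_level": [2, 2], "action_type": "db:query"},
]

def parent_level(task_level):
    """Return the parent's level, or None for a root message."""
    return task_level[:-1] or None

edges = []  # each pair would become a CHILD_OF relationship in the graph
for m in messages:
    p = parent_level(m["task_level"])
    if p is not None:
        edges.append((p, m["task_level"]))

print(edges)  # [([2], [2, 1]), ([2], [2, 2])]
```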
| closed | 2014-05-26T20:19:52Z | 2019-05-09T18:18:08Z | https://github.com/itamarst/eliot/issues/87 | [] | itamarst | 1 |
yt-dlp/yt-dlp | python | 12,226 | [iqiyi] Can't find any video | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Poland
### Provide a description that is worded well enough to be understood
yt-dlp can't download videos from the Chinese video portal iqiyi.com, even though in Poland the page works correctly in a browser.
I analyzed how the video is fetched by the web page (not by the app), and it looks like they're using the .f4v format (served as octet-stream) to play video in the browser.
screenshot:

### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-F', 'https://www.iqiyi.com/w_19s90th4cd.html', '--print-traffic']
[debug] Encodings: locale cp1250, fs utf-8, pref cp1250, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.01.26 from yt-dlp/yt-dlp [3b4531934] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: none
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.12.14, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-14.2
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
director: Handler preferences for this request: urllib=0, requests=100, websockets=0, curl_cffi=-100
director: Checking if "requests" supports this request.
director: Sending request via "requests"
requests: Starting new HTTPS connection (1): api.github.com:443
send: b'GET /repos/yt-dlp/yt-dlp/releases/latest HTTP/1.1\r\nHost: api.github.com\r\nConnection: keep-alive\r\nUser-Agent: yt-dlp\r\nAccept: application/vnd.github+json\r\nAccept-Language: en-us,en;q=0.5\r\nSec-Fetch-Mode: navigate\r\nX-Github-Api-Version: 2022-11-28\r\nAccept-Encoding: gzip, deflate, br\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Date: Tue, 28 Jan 2025 21:45:03 GMT
header: Cache-Control: public, max-age=60, s-maxage=60
header: Vary: Accept,Accept-Encoding, Accept, X-Requested-With
header: ETag: "684d7398af29119adb161db329b5655683e47f8fee82ebf7fd2279bc873c3420"
header: Last-Modified: Sun, 26 Jan 2025 04:00:32 GMT
header: x-github-api-version-selected: 2022-11-28
header: Access-Control-Expose-Headers: ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Used, X-RateLimit-Resource, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type, X-GitHub-SSO, X-GitHub-Request-Id, Deprecation, Sunset
header: Access-Control-Allow-Origin: *
header: Strict-Transport-Security: max-age=31536000; includeSubdomains; preload
header: X-Frame-Options: deny
header: X-Content-Type-Options: nosniff
header: X-XSS-Protection: 0
header: Referrer-Policy: origin-when-cross-origin, strict-origin-when-cross-origin
header: Content-Security-Policy: default-src 'none'
header: Server: github.com
header: Content-Type: application/json; charset=utf-8
header: X-GitHub-Media-Type: github.v3; format=json
header: Content-Encoding: gzip
header: Accept-Ranges: bytes
header: X-RateLimit-Limit: 60
header: X-RateLimit-Remaining: 57
header: X-RateLimit-Reset: 1738104240
header: X-RateLimit-Resource: core
header: X-RateLimit-Used: 3
header: Content-Length: 5461
header: X-GitHub-Request-Id: CE73:3315F5:16DE73F:17907A8:67994FEB
Latest version: stable@2025.01.26 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2025.01.26 from yt-dlp/yt-dlp)
[iqiyi] Extracting URL: https://www.iqiyi.com/w_19s90th4cd.html
[iqiyi] temp_id: download video page
director: Handler preferences for this request: urllib=0, requests=100, websockets=0, curl_cffi=-100
director: Checking if "requests" supports this request.
director: Sending request via "requests"
requests: Starting new HTTPS connection (1): www.iqiyi.com:443
send: b'GET /w_19s90th4cd.html HTTP/1.1\r\nHost: www.iqiyi.com\r\nConnection: keep-alive\r\nUser-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.93 Safari/537.36\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language: en-us,en;q=0.5\r\nSec-Fetch-Mode: navigate\r\nAccept-Encoding: gzip, deflate, br\r\n\r\n'
reply: 'HTTP/1.1 200 OK\r\n'
header: Server: QWS
header: Content-Type: text/html; charset=utf-8
header: X-Powered-By: Next.js
header: Cache-Control: no-cache
header: Vue-SSR-Nginx-Cache: no-cache
header: Access-Control-Allow-Origin: *
header: Vary: Accept-Encoding
header: Date: Tue, 28 Jan 2025 21:45:17 GMT
header: Transfer-Encoding: chunked
header: Connection: keep-alive
header: Connection: Transfer-Encoding
header: Set-Cookie: QC005=2e68278c51f3640d6ffbd6dca4f0854a; path=/; expires=Thu, 04 Jan 2125 21:45:17 GMT; domain=iqiyi.com;
header: Set-Cookie: QC173=1; path=/; expires=Thu, 04 Jan 2125 21:45:17 GMT; domain=iqiyi.com;
header: Server-Timing: ak_p; desc="1738100716380_1551592314_481956731_125780_2054_42_50_-";dur=1
ERROR: [iqiyi] Can't find any video; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\iqiyi.py", line 352, in _real_extract
``` | closed | 2025-01-28T21:55:54Z | 2025-01-28T22:29:26Z | https://github.com/yt-dlp/yt-dlp/issues/12226 | [
"duplicate",
"site-bug"
] | KPniX | 1 |
iterative/dvc | data-science | 10,313 | `dvc.api.dataset` | API for new `datasets` feature
- [x] DVCX (highest priority) - return name, version
- [x] DVC (next priority) - return URL, path, sha
- [x] Cloud-versioned (nice to have) - return URL, version ID for each file
Needs spec | closed | 2024-02-22T13:46:37Z | 2024-02-28T08:56:20Z | https://github.com/iterative/dvc/issues/10313 | [
"p1-important",
"A: api"
] | dberenbaum | 0 |
OpenBB-finance/OpenBB | machine-learning | 6,867 | [🕹️] Side Quest: Integrate OpenBB into a dashboard | ### What side quest or challenge are you solving?
Side Quest: Integrate OpenBB into a dashboard or web application
### Points
Points: 300-750 Points
### Description
_No response_
### Provide proof that you've completed the task
https://github.com/soamn/oss-obb-dashboard | closed | 2024-10-24T20:36:18Z | 2024-11-02T07:41:36Z | https://github.com/OpenBB-finance/OpenBB/issues/6867 | [] | soamn | 7 |
WZMIAOMIAO/deep-learning-for-image-processing | pytorch | 307 | def __getitem__(self, index): | **System information**
* Have I written custom code:
* OS Platform(e.g., window10 or Linux Ubuntu 16.04):
* Python version:
* Deep learning framework and version(e.g., Tensorflow2.1 or Pytorch1.3):
* Use GPU or not:
* CUDA/cuDNN version(if you use GPU):
* The network you trained(e.g., Resnet34 network):
**Describe the current behavior**
**Error info / logs**
| closed | 2021-06-21T03:58:43Z | 2021-06-22T09:23:16Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/307 | [] | why228430 | 2 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 876 | TypeError: __init__() got an unexpected keyword argument 'merge_weights' | ### Check before submitting issues
- [X] Make sure to pull the latest code, as some issues and bugs have been fixed.
- [X] Due to frequent dependency updates, please ensure you have followed the steps in our [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/FAQ) AND searched for similar issues and did not find a similar problem or solution
- [X] Third-party plugin issues - e.g., [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), we recommend checking the corresponding project for solutions
- [X] Model validity check - Be sure to check the model's [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md). If the model is incorrect, we cannot guarantee its performance
### Type of Issue
Model training and fine-tuning
### Base Model
LLaMA-7B
### Operating System
Linux
### Describe your issue in detail
```
lr=1e-4
lora_rank=8
lora_alpha=32
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05
pretrained_model='daryl149/llama-2-7b-hf'
chinese_tokenizer_path='chinese_llama_lora_7b/'
dataset_dir='../../data/'
per_device_train_batch_size=1
per_device_eval_batch_size=1
gradient_accumulation_steps=8
output_dir=output_dir
peft_model=chinese_llama_lora_7b/
validation_file=../../data/alpaca_data_zh_51k.json
deepspeed_config_file=ds_zero2_no_offload.json
torchrun --nnodes 1 --nproc_per_node 1 run_clm_sft_with_peft.py \
--deepspeed ${deepspeed_config_file} \
--model_name_or_path ${pretrained_model} \
--tokenizer_name_or_path ${chinese_tokenizer_path} \
--dataset_dir ${dataset_dir} \
--validation_split_percentage 0.001 \
--per_device_train_batch_size ${per_device_train_batch_size} \
--per_device_eval_batch_size ${per_device_eval_batch_size} \
--do_train \
--do_eval \
--seed $RANDOM \
--fp16 \
--num_train_epochs 1 \
--lr_scheduler_type cosine \
--learning_rate ${lr} \
--warmup_ratio 0.03 \
--weight_decay 0 \
--logging_strategy steps \
--logging_steps 10 \
--save_strategy steps \
--save_total_limit 3 \
--evaluation_strategy steps \
--eval_steps 100 \
--save_steps 200 \
--gradient_accumulation_steps ${gradient_accumulation_steps} \
--preprocessing_num_workers 8 \
--max_seq_length 512 \
--output_dir ${output_dir} \
--overwrite_output_dir \
--ddp_timeout 30000 \
--logging_first_step True \
--lora_rank ${lora_rank} \
--lora_alpha ${lora_alpha} \
--trainable ${lora_trainable} \
--modules_to_save ${modules_to_save} \
--lora_dropout ${lora_dropout} \
--torch_dtype float16 \
--validation_file ${validation_file} \
--gradient_checkpointing \
--ddp_find_unused_parameters False \
--peft_path ${peft_model} \
```
### Execution logs or screenshots
```
model = PeftModel.from_pretrained(model, training_args.peft_path)
  File "/home/azime/.local/lib/python3.9/site-packages/peft/peft_model.py", line 323, in from_pretrained
    config = PEFT_TYPE_TO_CONFIG_MAPPING[
  File "/home/azime/.local/lib/python3.9/site-packages/peft/config.py", line 137, in from_pretrained
    config = config_cls(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'merge_weights'
``` | closed | 2024-01-02T14:05:26Z | 2024-01-24T22:02:07Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/876 | [
"stale"
] | IsraelAbebe | 2 |
allenai/allennlp | nlp | 5,459 | Output more predictions including sub-optimal ones | closed | 2021-11-08T11:16:29Z | 2021-11-09T06:04:42Z | https://github.com/allenai/allennlp/issues/5459 | [
"Feature request"
] | AlfredAM | 0 | |
dask/dask | pandas | 10,991 | Combined save and calculation is using excessive memory | Not clear if this is truly a "bug", but it seems highly undesirable + I can't work out how to avoid it
# Describe the issue:
We are trying to save large data arrays to file and *at the same time* check the data for any occurrences of a given value,
all while, hopefully, loading and processing each data chunk only once.
For each array, we use a `dask.delayed` function which takes as arguments both the (delayed) `da.store` and the collision-check computation on the same data.
But this seems to cause Dask to load the entire dataset into memory,
which, for big enough data, simply crashes.
### a bit more detail
When saving data to netcdf, we create a file and stream data from Dask arrays into the file variables with 'dask.array.store'.
( typically, our source data is also derived from netcdf files -- but this is not required to provoke the problem ).
We need at the same time to perform a check on the data, which determines whether there are any data points which are masked, or unmasked points matching a proposed "fill value".
Our existing code combines a delayed 'store' operation with computing the check function.
Since one is a "store" and one a "compute", they are combined by creating a delayed function which takes both as arguments.
The aim of this is that the data should only be fetched once, and streamed to the file one chunk at a time, so that we can handle files larger than memory.
What we have found is that in some cases, this operation is using memory equivalent to the size of the ***entire*** data variable, rather than a small number of its chunks.
# Minimal Complete Verifiable Example
```python
import tracemalloc
import dask
import dask.array as da
import numpy as np
# construct test data as a stack of random arrays
nt, nd = 50, 1000000
lazydata_all = da.stack([
da.random.uniform(
0, 1,
size=nd
)
for _ in range(nt)
])
# existing "target" array which we will store the result into
store_target = np.zeros((nt, nd), dtype=np.float64)
def array_size_mb(arr: np.ndarray):
return arr.nbytes * 1.0e-6
print(f"\nData full size {str(lazydata_all.shape).rjust(16)} = {array_size_mb(lazydata_all):8.1f} Mib")
print(f" .. chunk size {str(lazydata_all[0].shape).rjust(16)} = {array_size_mb(lazydata_all[0]):8.1f} Mib")
# Construct the combined store-and-calculate operation, as a delayed
@dask.delayed
def store_data_and_compute(store_operation, calculation):
return calculation
delayed_save = da.store(lazydata_all, store_target, compute=False, lock=False)
lazy_calculation = lazydata_all[0, 0]
delayed_result = store_data_and_compute(delayed_save, lazy_calculation)
# Measure peak additional memory claimed by operations
def compute_memory_mb(delayed_calc):
tracemalloc.start()
delayed_calc.compute()
_, peak_mem_bytes = tracemalloc.get_traced_memory()
tracemalloc.stop()
return peak_mem_bytes * 1.0e-6
print("\nCombined calculation:")
combined_operation_mb = compute_memory_mb(delayed_result)
chunk_mb = array_size_mb(lazydata_all[0])
print(f"Consumed memory ~ {combined_operation_mb:6.2f} Mb.")
print(f" --> {combined_operation_mb / chunk_mb:.1f} * chunksize")
```
Sample output :
```
Data full size (50, 1000000) = 400.0 Mib
.. chunk size (1000000,) = 8.0 Mib
Combined calculation:
Consumed memory ~ 400.25 Mb.
--> 50.0 * chunksize
```
### NOTE: the individual operations (store or calculate) do ***not*** consume large amounts of memory
```
store_only_mb = compute_memory_mb(delayed_save)
print(f"\nStore alone, takes {store_only_mb} Mb.")
calc_only_mb = compute_memory_mb(lazy_calculation)
print(f"Calculate alone, takes {calc_only_mb} Mb.")
```
Resulting
```
Store alone, takes 32.230089 Mb.
Calculate alone, takes 8.018998 Mb.
```
**NOTE:** this is on a machine with 4 CPUs and 4 dask workers.
Hence 32 ~= 4 * 8, which seems to make sense.
## Anything else we need to know?
### Environment
- Dask version: 2023.09.01 and 2024.02.01
- Python version: 3.11
- Operating System: linux
- Machine : 4 CPUs
- Install method (conda, pip, source): conda
| open | 2024-03-11T10:39:33Z | 2024-03-28T11:26:09Z | https://github.com/dask/dask/issues/10991 | [
"needs triage"
] | pp-mo | 3 |
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 298 | 'DummySession' object has no attribute 'query' in many-to-many relationship | Hi, I have a problem with a simple many-to-many relationship. I have two models, User and Role, with an association table defined as a helper table, as the Flask-SQLAlchemy documentation recommends.
After defining the corresponding schemas, I try to load a new user instance from JSON data, with a single role attached, but I encounter the error in the subject.
This is the code:
```
user_roles = db.Table('user_roles',
    db.Column('user_id', db.Integer, db.ForeignKey('users.user_id'), primary_key=True),
    db.Column('role_code', db.String(20), db.ForeignKey('roles.role_code'), primary_key=True)
)

class UserModel(db.Model):
    __tablename__ = 'users'
    user_id = db.Column(db.Integer, primary_key=True)
    user_name = db.Column(db.String(100), nullable=False)
    user_surname = db.Column(db.String(100), nullable=False)
    user_email = db.Column(db.String(100), nullable=False, unique=True)
    user_password = db.Column(db.String(100), nullable=False)
    user_active = db.Column(db.String(1), nullable=False, default='Y')
    roles = db.relationship('RoleModel', secondary=user_roles, lazy='subquery',
                            backref=db.backref('users', lazy=True))

class RoleModel(db.Model):
    __tablename__ = 'roles'
    role_code = db.Column(db.String(20), primary_key=True)
    role_name = db.Column(db.String(50), nullable=False)
    role_active = db.Column(db.String(1), nullable=False, default='Y')
```
```
class UserSchema(ma.SQLAlchemyAutoSchema):
    class Meta:
        model = UserModel
        load_instance=True
        include_fk=True
        load_only=('user_password',)
        dump_only=('user_id',)

    user_name = ma.auto_field(validate=validate.Length(min=1, error="Field can not be blank"),)
    user_surname = ma.auto_field(validate=validate.Length(min=1, error="Field can not be blank"),)
    user_email = ma.auto_field(validate=validate.Length(min=1, error="Field can not be blank"),)
    user_password = ma.auto_field(validate=validate.Length(min=1, error="Field can not be blank"),)
    roles = ma.Nested("RoleSchema", many=True, only=('role_code', 'role_name',))

class RoleSchema(ma.SQLAlchemyAutoSchema):
    class Meta:
        model = RoleModel
        load_instance=True
        include_fk=True
```
```
user = user_schema.load(data_json, partial=('roles.role_name',))
```
The json data:
```
{
"user_name": "John",
"user_surname": "Doe",
"user_email": "john.doe@domain.com",
"user_password": "xxxxx",
"user_active": "Y",
"roles": [
{
"role_code": "ADMIN"
}
]
}
```
Maybe it is better to define the association table as a model?
Thank you for the help and for this library! ;)
Ruggero | closed | 2020-04-03T20:51:55Z | 2021-05-14T14:25:06Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/298 | [] | ruggerotosc | 3 |
sktime/sktime | scikit-learn | 7,792 | [BUG] fix aggressive neural network printouts | The forecaster tests print the following repeatedly, dozens or hundreds of times.
The printout for whichever estimator is causing this should be turned off.
@benHeid, @geetu040, do you know which one this is?
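Until the offending estimator is identified, the banners can usually be silenced with the standard `logging` module (a sketch; the logger names are an assumption about where pytorch-lightning emits these `INFO` lines):

```python
import logging

# Raise the threshold of the loggers that (presumably) emit the INFO banners
# below; WARNING and above still get through.
for name in ("pytorch_lightning", "lightning.pytorch"):
    logging.getLogger(name).setLevel(logging.WARNING)
```

If the estimator forwards trainer kwargs, `enable_model_summary=False` / `enable_progress_bar=False` may also help (an assumption about the lightning `Trainer` API). For reference, this is the repeated output in question: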
```
INFO: GPU available: False, used: False
INFO: TPU available: False, using: 0 TPU cores
INFO: HPU available: False, using: 0 HPUs
INFO:
| Name | Type | Params | Mode
------------------------------------------------------------------------------------------------
0 | loss | QuantileLoss | 0 | train
1 | logging_metrics | ModuleList | 0 | train
2 | input_embeddings | MultiEmbedding | 0 | train
3 | prescalers | ModuleDict | 32 | train
4 | static_variable_selection | VariableSelectionNetwork | 0 | train
5 | encoder_variable_selection | VariableSelectionNetwork | 334 | train
6 | decoder_variable_selection | VariableSelectionNetwork | 132 | train
7 | static_context_variable_selection | GatedResidualNetwork | 88 | train
8 | static_context_initial_hidden_lstm | GatedResidualNetwork | 88 | train
9 | static_context_initial_cell_lstm | GatedResidualNetwork | 88 | train
10 | static_context_enrichment | GatedResidualNetwork | 88 | train
11 | lstm_encoder | LSTM | 160 | train
12 | lstm_decoder | LSTM | 160 | train
13 | post_lstm_gate_encoder | GatedLinearUnit | 40 | train
14 | post_lstm_add_norm_encoder | AddNorm | 8 | train
15 | static_enrichment | GatedResidualNetwork | 104 | train
16 | multihead_attn | InterpretableMultiHeadAttention | 49 | train
17 | post_attn_gate_norm | GateAddNorm | 48 | train
18 | pos_wise_ff | GatedResidualNetwork | 88 | train
19 | pre_output_gate_norm | GateAddNorm | 48 | train
20 | output_layer | Linear | 35 | train
------------------------------------------------------------------------------------------------
1.5 K Trainable params
0 Non-trainable params
1.5 K Total params
0.006 Total estimated model params size (MB)
176 Modules in train mode
0 Modules in eval mode
``` | open | 2025-02-08T17:58:21Z | 2025-02-15T08:51:12Z | https://github.com/sktime/sktime/issues/7792 | [
"bug",
"module:forecasting"
] | fkiraly | 10 |
coqui-ai/TTS | deep-learning | 3,440 | [Bug] Hi, I'm using `python TTS/tts/compute_attention_masks.py` for compute duration use tacotron2, but I met this question | ### Describe the bug
```
Traceback (most recent call last):
  File "TTS/bin/compute_attention_masks.py", line 14, in <module>
    from TTS.tts.utils.text.characters import make_symbols, phonemes, symbols
ImportError: cannot import name 'make_symbols' from 'TTS.tts.utils.text.characters'
```
### To Reproduce
run `python TTS/bin/compute_attention_masks.py`
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA GeForce RTX 3090",
"NVIDIA GeForce RTX 3090",
"NVIDIA GeForce RTX 3090",
"NVIDIA GeForce RTX 3090"
],
"available": true,
"version": "11.7"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.0+cu117",
"TTS": "0.13.2",
"numpy": "1.21.6"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.8.16",
"version": "#224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023"
}
}
```
### Additional context
_No response_ | closed | 2023-12-18T10:11:45Z | 2024-02-04T23:03:54Z | https://github.com/coqui-ai/TTS/issues/3440 | [
"bug",
"wontfix"
] | Henryplay | 1 |
automl/auto-sklearn | scikit-learn | 1,209 | Occasional test failure for `test_module_idempotent` | The tests occasionally fail when testing `GradientBoosting`, specifically in `test_module_idempotent`, as seen [here](https://github.com/automl/auto-sklearn/runs/3260866212#step:9:1531).
<details>
<summary>Failure log</summary>
<br>
_____________ GradientBoostingComponentTest.test_module_idempotent _____________
self = <test_pipeline.components.classification.test_gradient_boosting.GradientBoostingComponentTest testMethod=test_module_idempotent>
def test_module_idempotent(self):
if self.__class__ == BaseClassificationComponentTest:
return
def check_classifier(cls):
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1],
[0, 0], [0, 1], [1, 0], [1, 1],
[0, 0], [0, 1], [1, 0], [1, 1],
[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0,
0, 1, 1, 0,
0, 1, 1, 0,
0, 1, 1, 0])
params = []
for i in range(2):
try:
classifier.fit(X, y)
except ValueError as e:
if (
isinstance(e.args[0], str)
) and (
"Numerical problems in QDA" in e.args[0]
):
continue
elif (
"BaseClassifier in AdaBoostClassifier ensemble is "
"worse than random, ensemble can not be fit." in e.args[0]
):
continue
else:
raise e
except UnboundLocalError as e:
if "local variable 'raw_predictions_val' referenced before assignment" in \
e.args[0]:
continue
p = classifier.estimator.get_params()
if 'random_state' in p:
del p['random_state']
if 'base_estimator' in p:
del p['base_estimator']
for ignore_hp in self.res.get('ignore_hps', []):
del p[ignore_hp]
params.append(p)
if i > 0:
self.assertEqual(
params[-1],
params[0],
)
classifier = self.module
configuration_space = classifier.get_hyperparameter_search_space()
default = configuration_space.get_default_configuration()
classifier = classifier(random_state=np.random.RandomState(1),
**{hp_name: default[hp_name] for hp_name in
default if default[hp_name] is not None})
check_classifier(classifier)
for i in range(5):
classifier = self.module
config = configuration_space.sample_configuration()
classifier = classifier(random_state=np.random.RandomState(1),
**{hp_name: config[hp_name] for hp_name in
config if config[hp_name] is not None})
> check_classifier(classifier)
/home/runner/work/auto-sklearn/auto-sklearn/test/test_pipeline/components/classification/test_base.py:290:
/home/runner/work/auto-sklearn/auto-sklearn/test/test_pipeline/components/classification/test_base.py:271: in check_classifier
self.assertEqual(
E AssertionError: {'cat[192 chars]er': 128, 'max_leaf_nodes': 202, 'min_samples_[161 chars]True} != {'cat[192 chars]er': 512, 'max_leaf_nodes': 202, 'min_samples_[161 chars]True}
E {'categorical_features': None,
E 'early_stopping': True,
E 'l2_regularization': 1.8340340346818956e-05,
E 'learning_rate': 0.05749038268034438,
E 'loss': 'auto',
E 'max_bins': 255,
E 'max_depth': None,
E - 'max_iter': 128,
E ? -
E
E + 'max_iter': 512,
E ? +
E
E 'max_leaf_nodes': 202,
E 'min_samples_leaf': 2,
E 'monotonic_cst': None,
E 'n_iter_no_change': 2,
E 'scoring': 'loss',
E 'tol': 1e-07,
E 'validation_fraction': 0.39328363642167863,
E 'verbose': 0,
E 'warm_start': True}
</details> | closed | 2021-08-06T11:06:07Z | 2021-09-06T07:29:08Z | https://github.com/automl/auto-sklearn/issues/1209 | [
"maintenance"
] | eddiebergman | 1 |
pydantic/pydantic | pydantic | 11,126 | Fields with `any` type raise a ValidationError | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
It seems that starting with Pydantic 2.10, fields defined as `bar: any` fail with a ValidationError; prior to 2.10 this triggered just a warning.
You can use this snippet to reproduce:
```python
from pydantic import BaseModel, ConfigDict, SkipValidation
class Foo(BaseModel):
model_config = ConfigDict(
arbitrary_types_allowed=True,
)
bar: any
Foo(bar = "test")
```
With pydantic==2.9.2 I get the following warning when running the snippet above:
```
…/pydantic/_internal/_generate_schema.py:547: UserWarning: <built-in function any> is not a Python type (it may be an instance of an object), Pydantic will allow any object with no validation since we cannot even enforce that the input is an instance of the given type. To get rid of this error wrap the type with `pydantic.SkipValidation`.
warn(
```
With pydantic>=2.10 I get a validation error (with a message that wasn’t very clear):
```
Traceback (most recent call last):
File "....py", line 10, in <module>
Foo(bar = "test")
File "/usr/local/google/home/amirha/venv2/lib/python3.11/site-packages/pydantic/main.py", line 214, in __init__
validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for Foo
bar
Arguments must be a tuple, list or a dictionary [type=arguments_type, input_value='test', input_type=str]
For further information visit https://errors.pydantic.dev/2.10/v/arguments_type
```
Annotating the field with SkipValidation fixes the validation error with 2.10 (interestingly it doesn’t silence the warning with 2.9.2)
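For what it's worth, my guess at the root of the confusing `arguments_type` message (an interpretation, not confirmed against pydantic internals): a bare `any` annotation refers to the builtin function, not `typing.Any`, so the field may be getting treated as a callable whose arguments need validating:

```python
import typing

# `any` (lowercase) is the builtin function, not the typing construct:
print(any)  # <built-in function any>
assert callable(any)
assert any is not typing.Any
```

Using `typing.Any` (or `SkipValidation`, as mentioned above) avoids the ambiguity.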
### Example Code
```Python
from pydantic import BaseModel, ConfigDict, SkipValidation
class Foo(BaseModel):
model_config = ConfigDict(
arbitrary_types_allowed=True,
)
bar: any
Foo(bar = "test")
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.3
pydantic-core version: 2.27.1
pydantic-core build: profile=release pgo=false
install path: /usr/local/google/home/amirha/venv2/lib/python3.11/site-packages/pydantic
python version: 3.11.9 (main, Jun 19 2024, 00:38:48) [GCC 13.2.0]
platform: Linux-6.10.11-1rodete2-amd64-x86_64-with-glibc2.39
related packages: typing_extensions-4.12.2
commit: unknown
```
| closed | 2024-12-16T17:33:49Z | 2024-12-16T18:48:37Z | https://github.com/pydantic/pydantic/issues/11126 | [
"bug V2",
"pending"
] | amirh | 3 |
polakowo/vectorbt | data-visualization | 82 | SizeType.TargetPercent and Order rejected: Not enough cash to long | Hi, I am trying to write a stock factor analysis,
but `SizeType.TargetPercent` does not work well:
```python
from datetime import datetime
import numpy as np
import pandas as pd
import yfinance as yf
import vectorbt as vbt
from vectorbt.portfolio.enums import SizeType, CallSeqType
# Define params
assets = ['FB', 'AMZN', 'NFLX', 'GOOG', 'AAPL']
start_date = datetime(2017, 1, 1)
end_date = datetime(2020, 12, 31)
# Download data
asset_price = pd.DataFrame({
s: yf.Ticker(s).history(start=start_date, end=end_date)['Close']
for s in assets
}, columns=pd.Index(assets, name='asset'))
print(asset_price.shape)
asset_price.to_pickle('asset_price.pkl')
close = pd.read_pickle('asset_price.pkl')
# modify data, it will lead to RejectedOrderError
close.iloc[:-100, -2:-1] = np.nan
# calculate factor
factor = close.pct_change()
# split 3 parts
q = 3
labels = factor.rank(axis=1, pct=True) * q // 1
# rebalance every 10 days
labels.iloc[np.arange(len(labels)) % 10 != 0] = np.nan
labels_0 = close.where(labels == 0, np.nan)
# equal weight
weights = np.sign(labels_0)
weights = weights.divide(weights.abs().sum(axis=1), axis=0)
# not work
# idx = ~weights.isna().all(axis=1)
# weights[idx] = weights[idx].fillna(0)
portfolio = vbt.Portfolio.from_orders(
close=close,
size=weights,
size_type=SizeType.TargetPercent,
cash_sharing=True,
call_seq=CallSeqType.Auto,
group_by=True,
freq='1D',
incl_unrealized=True,
raise_reject=True,
)
# %%
print(portfolio.value())
```
```python
portfolio = vbt.Portfolio.from_orders(
File "D:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\vectorbt\portfolio\base.py", line 1149, in from_orders
order_records, log_records = nb.simulate_from_orders_nb(
File "d:\Users\Kan\miniconda3\envs\py38_vectorbt\lib\site-packages\vectorbt\portfolio\nb.py", line 437, in process_order_nb
raise RejectedOrderError("Order rejected: Not enough cash to long")
vectorbt.portfolio.enums.RejectedOrderError: Order rejected: Not enough cash to long
``` | closed | 2021-01-08T17:52:44Z | 2021-01-09T17:30:05Z | https://github.com/polakowo/vectorbt/issues/82 | [] | wukan1986 | 3 |
koxudaxi/datamodel-code-generator | fastapi | 2,098 | Expose ids into template context | **Is your feature request related to a problem? Please describe.**
I want to implement a basic schema registry out of the generated models, so I need a way to include extra identifying info in my models.
**Describe the solution you'd like**
It would be nice for JSON Schema's `$id` and OpenAPI's `operationId` to be available within the template. Potentially something like `x-id` might be ok too, but if we're going this way, it is better to expose all of the `x-` fields.
The ideal solution would be the ability to do something like this in the template:
```jinja2
{%- if id_ %}
_id: ClassVar[str] = "{{ id_ }}"
{%- endif %}
```
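As a rough illustration of what that template snippet would emit (a plain-Python stand-in for the jinja logic; names are illustrative only):

```python
def render(id_=None):
    """Mimic the `{%- if id_ %}` guard: emit the attribute only when an id exists."""
    lines = ["class Model:"]
    if id_:
        lines.append(f'    _id: str = "{id_}"')  # ClassVar[str] in the real template
    lines.append("    ...")
    return "\n".join(lines)

print(render("https://example.com/schemas/user.json"))
```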
**Describe alternatives you've considered**
I've glanced at the code, but it looks like there is no obvious way to extract ids. `self.reference.path` of `DataModel` instances looks unique enough, but that's not exactly an id defined in the schema. Given the nature of the schemas I work with, exposing it would be enough for now. | open | 2024-10-04T16:14:08Z | 2024-10-28T18:46:27Z | https://github.com/koxudaxi/datamodel-code-generator/issues/2098 | [] | ZipFile | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,201 | No gradient penalty in WGAN-GP | When computing WGAN-GP loss, the cal_gradient_penalty function is not called, and gradient penalty is not applied. | open | 2020-11-29T14:58:05Z | 2020-11-29T14:58:05Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1201 | [] | GwangPyo | 0 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,049 | [WITHOUT addons]: Specific contents of a page cause UC to mangle DOM and crash renderer | Tested WITHOUT addons. Same result.
While working with UC, I stumbled on some pages that would cause UC and only UC to mangle the contents of DOM to a degree where the whole thing would break down after the timeout was reached.
All content after `<head>`, `<body>` and a few others will be wiped clean. No tag is closed. The strange thing is, the page is partly loaded:
the DOM is partly loaded, as you can see the page being rendered up to a point, and then BOOM, broken.
The thing is, when I am logged in, everything works until that post is loaded in via API.
But it also causes the page/UC to break when loading that post logged in or logged out. Without API.
I could not track down why this happens, but I tested it with pure Selenium and there it works without problems.
So it has to be something specific to UC. This has been happening for a few versions now. I tested it just now on 3.4.4 and it still happens.
To reproduce, just do the following:
```
import undetected_chromedriver as uc
options = uc.ChromeOptions()
options.add_experimental_option('prefs', {'intl.accept_languages': 'en'})
driver = uc.Chrome(options=options)
driver.get('https://patreon.com')
driver.add_cookie({'domain': '.patreon.com', 'name': 'randomcookie', 'value': 'somebullshit'}) #do it like this or suffer
#Open Dev Tools
driver.get('https://www.patreon.com/posts/updated-in-2022-66910536')
#Go to element view
```
After that the chromedriver will wait for a timeout and exit with "timed out renderer" error message. Which... yeah, kinda makes sense when DOM is borked in such a way. | closed | 2023-02-08T17:10:04Z | 2024-01-17T20:50:56Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1049 | [] | slowengine | 8 |
aminalaee/sqladmin | sqlalchemy | 784 | Ckeditor <TypeError: Cannot convert undefined or null to object> | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
Following the guide [sqladmin ckeditor](https://aminalaee.dev/sqladmin/cookbook/using_wysiwyg/), I got an error:
```
TypeError: Cannot convert undefined or null to object
at Function.keys (<anonymous>)
at ou.init (datacontroller.ts:344:42)
at ou.<anonymous> (observablemixin.ts:277:33)
at ou.fire (emittermixin.ts:241:31)
at <computed> [as init] (observablemixin.ts:281:17)
at classiceditor.ts:227:31
```
### Steps to reproduce the bug
Follow the ckeditor embedding documentation
### Expected behavior
_No response_
### Actual behavior
```
TypeError: Cannot convert undefined or null to object
at Function.keys (<anonymous>)
at ou.init (datacontroller.ts:344:42)
at ou.<anonymous> (observablemixin.ts:277:33)
at ou.fire (emittermixin.ts:241:31)
at <computed> [as init] (observablemixin.ts:281:17)
at classiceditor.ts:227:31
```
### Debugging material
_No response_
### Environment
Sqladmin 0.17.0
### Additional context
Everything starts working fine if you add the following code
```
<script src="https://cdn.ckeditor.com/ckeditor5/39.0.1/classic/ckeditor.js"></script>
<script>
ClassicEditor
.create(document.querySelector('#content'))
.catch(error => {
console.error(error);
});
</script>
```
straight into the editor.html template, before the last closing block; however, I don't think this is the correct behavior. | closed | 2024-06-24T17:13:29Z | 2024-06-25T10:03:29Z | https://github.com/aminalaee/sqladmin/issues/784 | [] | A-V-tor | 1 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 529 | 请问是否有办法下载2k和4k无水印的抖音视频? | 已经添加了cookies,但是下载下来的画质还是只有1080p,网页版显示原视频有2k和4k的画质,但直接网页解析下载的码率和大小都很差,有没有办法解析到2k/4k的高码率无水印视频呢? | closed | 2024-12-28T15:01:24Z | 2025-02-26T08:01:31Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/529 | [] | Kavka-7 | 5 |
scikit-learn/scikit-learn | machine-learning | 30,183 | The Affinity Matrix Is NON-BINARY with`affinity="precomputed_nearest_neighbors"` | ### Describe the issue linked to the documentation
## Issue Source:
https://github.com/scikit-learn/scikit-learn/blob/59dd128d4d26fff2ff197b8c1e801647a22e0158/sklearn/cluster/_spectral.py#L452-L454
## Issue Description
The Affinity Matrix Is _non-binary_ with`affinity`=`"precomputed_nearest_neighbors"`. I.e., when a precomputed distance matrix is given as `x`, the affinity matrix from SpectralClustering.fit().affinity_matrix_ is NOT binary (as described in the document). It has 3 values: 0.0, 1.0, and 0.5.
## Reproducible Code Snippet
Generate a random distance ,a
```python
from sklearn.cluster import SpectralClustering
import numpy as np
## generate a random distance matrix --> symmetric
np.random.seed(0)
distmat=np.random.rand(200,200)
distmat=(np.triu(distmat,1)+np.triu(distmat,1).T)/2
print(f"Check asymmetric locations (if any):\t{np.where(distmat!=distmat.T)}")
## affinity matrix
aff_mat=SpectralClustering(n_clusters=30,affinity='precomputed_nearest_neighbors',assign_labels='discretize', n_neighbors=50 ,n_jobs=-1).fit(distmat).affinity_matrix_.toarray()
print(f"Unique values (ought to be 'binary'):\t{np.unique(aff_mat)}")
```
## Machine & Version Info
```python
System:
python: 3.8.3 (default, Jul 2 2020, 16:21:59) [GCC 7.3.0]
executable: /opt/share/linux-rocky8-x86_64/gcc-12.2.0/anaconda3-2020.07-yv6vdwqiouaru27jxhpezh6t6mdpqf3e/bin/python
machine: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.10
Python dependencies:
pip: 20.1.1
setuptools: 65.6.3
sklearn: 0.23.1
numpy: 1.22.3
scipy: 1.5.0
Cython: 0.29.21
pandas: 1.4.2
Built with OpenMP: True
```
### Suggest a potential alternative/fix
Since the affinity matrix is calculated as `(connectivity+connectivity.T)*0.5` [source_code](https://github.com/scikit-learn/scikit-learn/blob/59dd128d4d26fff2ff197b8c1e801647a22e0158/sklearn/cluster/_spectral.py#L715C8-L720C74), and that the `connectivity` is calculated by `kneighbors_graph` [source_code](https://github.com/scikit-learn/scikit-learn/blob/59dd128d4d26fff2ff197b8c1e801647a22e0158/sklearn/neighbors/_base.py#L998C5-L1002C1), it is intrinsically not symmetric -- i might be j's K nearest neighbor, while j could not be i's when `n_quirey `== `n_samples`.
| open | 2024-10-31T07:07:44Z | 2024-11-04T10:35:54Z | https://github.com/scikit-learn/scikit-learn/issues/30183 | [
"Documentation",
"Needs Investigation"
] | OrangeAoo | 2 |
ultralytics/yolov5 | pytorch | 12,825 | There is any way to find specific point in bounding box in yolov5 ? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi 🤗
I am trying to detect a specific object in an image using yolov5.
My question is: is it possible to detect a specific point inside the detected bounding box using yolov5?
First, the user clicks on a desired point in the image, so I store the x and y coordinates of that specific point.
Now, when the bounding box is detected, I want that specific point to be found and scaled inside the box.
For example:
First, the user clicks on the car's driver in the image, so I store the x,y coordinates in a txt file.
Then I detect the car in the (online) image (the same car that was used for training).
Now I want to find that specific point (the location of the driver) in the online image.
**Notice that the car in online image may be rotated or flipped...**
Reference Image:

Online In Image:

### Additional
What I tried :
I tried using SIFT and a homography to extract features, key points and descriptors.
Then I scaled the point that the user clicked in the image (the driver's coordinates) and calculated the coordinates of that point in the online image.
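Whatever detector produces the matches, the homography step itself is cheap: once a 3x3 matrix `H` has been estimated (e.g. via `cv2.findHomography`; `H` below is just a made-up translation), mapping the stored click is a single matrix product:

```python
import numpy as np

def map_point(H, pt):
    """Map an (x, y) point through a 3x3 homography H."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return (v[0] / v[2], v[1] / v[2])

# Hypothetical homography: pure translation by (10, 5) pixels.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0,  5.0],
              [0.0, 0.0,  1.0]])
print(map_point(H, (100.0, 200.0)))  # (110.0, 205.0)
```

On Raspberry Pi / Jetson class hardware, lighter binary features such as ORB are a common substitute for SIFT in this kind of matching.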
But there are problems here:
1: the SIFT algorithm is too heavy for embedded systems such as the Raspberry Pi or Jetson platforms.
2: this method has some problems that create errors in the system. | closed | 2024-03-18T17:02:09Z | 2024-05-12T00:23:42Z | https://github.com/ultralytics/yolov5/issues/12825 | [
"question",
"Stale"
] | SJavad | 8 |
yezyilomo/django-restql | graphql | 311 | Add DELETE operation | The purpose of this is to delete nested objects i.e
```json
{
"add": [1,2,4],
"delete": [8]
}
```
Just like the `REMOVE` operation, `DELETE` will support the `__all__` value too, i.e.
```json
{
"add": [1,2,4],
"delete": "__all__"
}
```
It will also support an `allow_delete_all` kwarg, just like `allow_remove_all`.
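Independent of django-restql internals, the intended semantics could be sketched in plain Python like this (names are purely illustrative):

```python
def apply_nested_ops(current_ids, ops):
    """Return (remaining_ids, ids_to_delete) after applying `add`/`delete`.

    Unlike `remove` (which only disassociates), ids under `delete` are
    also meant to be deleted from the database by the caller.
    """
    ids = set(current_ids)
    delete = ops.get("delete", [])
    to_delete = set(ids) if delete == "__all__" else set(delete) & ids
    ids -= to_delete
    ids |= set(ops.get("add", []))
    return sorted(ids), sorted(to_delete)


print(apply_nested_ops([5, 8], {"add": [1, 2, 4], "delete": [8]}))
# ([1, 2, 4, 5], [8])
```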
**Another idea:** Make `__all__` value configurable through global `django-restql` settings | open | 2023-11-09T13:51:09Z | 2023-11-09T13:51:09Z | https://github.com/yezyilomo/django-restql/issues/311 | [] | yezyilomo | 0 |
graphql-python/graphene | graphql | 898 | Proposal: explicitly declaring resolvers with decorator | ```
class MySecondAttempt(ObjectType):
name = String()
age = Int(required=True)
@name.resolver
def resolve_name(self, ...): ...
@age.resolver
def resolve_age(self, ...): ...
```
Explicit is better than implicit.
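For illustration, the decorator mechanics this implies can be sketched in plain Python, independent of graphene internals (toy classes, not a proposed implementation):

```python
class Field:
    """Toy stand-in for a graphene field that can register its own resolver."""

    def __init__(self):
        self._resolver = None

    def resolver(self, fn):
        """Decorator: attach `fn` as this field's resolver, return it unchanged."""
        self._resolver = fn
        return fn

    def resolve(self, obj):
        return self._resolver(obj) if self._resolver else None


class MyType:
    name = Field()

    @name.resolver
    def resolve_name(self):
        return "alice"


print(MyType.name.resolve(MyType()))  # alice
```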
The resolvers could have any arbitrary name, and could also be defined before or after the class definition. Resolvers could still be defined using the `resolve_` prefix convention if the explicit decorator isn't used. | closed | 2019-01-28T03:52:04Z | 2024-06-22T10:26:08Z | https://github.com/graphql-python/graphene/issues/898 | [
"✨ enhancement"
] | DrPyser | 8 |
home-assistant/core | asyncio | 140,865 | [Overkiz] Incomplete handling of PositionableDualRollerShutter / io:DualRollerShutterIOComponent | ### The problem
Using a TaHoMa Switch I have connected various Velux Windows and their shutters.
Specifially two devices that home assistant shows as PositionableDualRollerShutter by Velux and of hardware type io:DualRollerShutterIOComponent.
Within the TaHoMa App these are shown to have an upper and a lower shutter which can be operated separately. This was validated in field use.
In Home Assistant this type of device is shown as a typical cover and the upper and lower parts are not recognized. This limits the control functionality severely.
I assume this is a bug rather than intended behavior?
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Overkiz
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/overkiz/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-18T10:52:53Z | 2025-03-22T07:35:06Z | https://github.com/home-assistant/core/issues/140865 | [
"integration: overkiz"
] | GuiHue | 5 |
lucidrains/vit-pytorch | computer-vision | 280 | Using vision transformers for different image resolutions | Hi, I am working on using vision transformers (not only the vanilla ViT, but other models as well) on the UMDAA-2 dataset. This dataset has an image resolution of 128*128. Would it be better to resize the images to a typical ViT input resolution like 224 or 256, or to keep 128 and adjust the other vision transformer parameters (such as dim, depth, and heads) to this resolution? | open | 2023-09-24T22:24:24Z | 2023-10-02T21:42:02Z | https://github.com/lucidrains/vit-pytorch/issues/280 | [] | Oussamab21 | 1
s3rius/FastAPI-template | fastapi | 207 | Pre Commits Hooks fails when installed with Python-3.12.1 | Hello,
`Flake8`, `add-trailing-comma`, `pre-commit-hooks`, and `language-formatters-pre-commit-hooks` versions need to be updated. Otherwise it fails with the following errors on a newly generated project:
``` shell
boilerplate (master) ✗ poetry run pre-commit install
pre-commit installed at .git/hooks/pre-commit
boilerplate (master) ✗ poetry run pre-commit run -a
Check python ast.........................................................Passed
Trim Trailing Whitespace.................................................Passed
Check Toml...............................................................Passed
Fix End of Files.........................................................Passed
Add trailing commas......................................................Failed
- hook id: add-trailing-comma
- exit code: 1
Traceback (most recent call last):
File "/Users/m4hi2/.cache/pre-commit/repoqonxr4s2/py_env-python3.12/bin/add-trailing-comma", line 5, in <module>
from add_trailing_comma._main import main
ModuleNotFoundError: No module named 'add_trailing_comma'
Traceback (most recent call last):
File "/Users/m4hi2/.cache/pre-commit/repoqonxr4s2/py_env-python3.12/bin/add-trailing-comma", line 5, in <module>
from add_trailing_comma._main import main
ModuleNotFoundError: No module named 'add_trailing_comma'
Traceback (most recent call last):
File "/Users/m4hi2/.cache/pre-commit/repoqonxr4s2/py_env-python3.12/bin/add-trailing-comma", line 5, in <module>
from add_trailing_comma._main import main
ModuleNotFoundError: No module named 'add_trailing_comma'
Traceback (most recent call last):
File "/Users/m4hi2/.cache/pre-commit/repoqonxr4s2/py_env-python3.12/bin/add-trailing-comma", line 5, in <module>
from add_trailing_comma._main import main
ModuleNotFoundError: No module named 'add_trailing_comma'
Traceback (most recent call last):
File "/Users/m4hi2/.cache/pre-commit/repoqonxr4s2/py_env-python3.12/bin/add-trailing-comma", line 5, in <module>
from add_trailing_comma._main import main
ModuleNotFoundError: No module named 'add_trailing_comma'
Traceback (most recent call last):
File "/Users/m4hi2/.cache/pre-commit/repoqonxr4s2/py_env-python3.12/bin/add-trailing-comma", line 5, in <module>
from add_trailing_comma._main import main
ModuleNotFoundError: No module named 'add_trailing_comma'
Traceback (most recent call last):
File "/Users/m4hi2/.cache/pre-commit/repoqonxr4s2/py_env-python3.12/bin/add-trailing-comma", line 5, in <module>
from add_trailing_comma._main import main
ModuleNotFoundError: No module named 'add_trailing_comma'
Traceback (most recent call last):
File "/Users/m4hi2/.cache/pre-commit/repoqonxr4s2/py_env-python3.12/bin/add-trailing-comma", line 5, in <module>
from add_trailing_comma._main import main
ModuleNotFoundError: No module named 'add_trailing_comma'
Pretty format YAML.......................................................Failed
- hook id: pretty-format-yaml
- exit code: 1
Traceback (most recent call last):
File "/Users/m4hi2/.cache/pre-commit/repoh6sog_z7/py_env-python3.12/bin/pretty-format-yaml", line 5, in <module>
from language_formatters_pre_commit_hooks.pretty_format_yaml import pretty_format_yaml
File "/Users/m4hi2/.cache/pre-commit/repoh6sog_z7/py_env-python3.12/lib/python3.12/site-packages/language_formatters_pre_commit_hooks/__init__.py", line 8, in <module>
import pkg_resources
ModuleNotFoundError: No module named 'pkg_resources'
Traceback (most recent call last):
File "/Users/m4hi2/.cache/pre-commit/repoh6sog_z7/py_env-python3.12/bin/pretty-format-yaml", line 5, in <module>
from language_formatters_pre_commit_hooks.pretty_format_yaml import pretty_format_yaml
File "/Users/m4hi2/.cache/pre-commit/repoh6sog_z7/py_env-python3.12/lib/python3.12/site-packages/language_formatters_pre_commit_hooks/__init__.py", line 8, in <module>
import pkg_resources
ModuleNotFoundError: No module named 'pkg_resources'
autoflake................................................................Passed
Format with Black........................................................Passed
isort....................................................................Passed
Check with Flake8........................................................Failed
- hook id: flake8
- exit code: 1
Traceback (most recent call last):
File "/Users/m4hi2/Library/Caches/pypoetry/virtualenvs/boilerplate-eD2Xgr7s-py3.12/bin/flake8", line 8, in <module>
sys.exit(main())
^^^^^^
File "/Users/m4hi2/Library/Caches/pypoetry/virtualenvs/boilerplate-eD2Xgr7s-py3.12/lib/python3.12/site-packages/flake8/main/cli.py", line 22, in main
app.run(argv)
File "/Users/m4hi2/Library/Caches/pypoetry/virtualenvs/boilerplate-eD2Xgr7s-py3.12/lib/python3.12/site-packages/flake8/main/application.py", line 375, in run
self._run(argv)
File "/Users/m4hi2/Library/Caches/pypoetry/virtualenvs/boilerplate-eD2Xgr7s-py3.12/lib/python3.12/site-packages/flake8/main/application.py", line 363, in _run
self.initialize(argv)
File "/Users/m4hi2/Library/Caches/pypoetry/virtualenvs/boilerplate-eD2Xgr7s-py3.12/lib/python3.12/site-packages/flake8/main/application.py", line 343, in initialize
self.find_plugins(config_finder)
File "/Users/m4hi2/Library/Caches/pypoetry/virtualenvs/boilerplate-eD2Xgr7s-py3.12/lib/python3.12/site-packages/flake8/main/application.py", line 157, in find_plugins
self.check_plugins = plugin_manager.Checkers(local_plugins.extension)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/m4hi2/Library/Caches/pypoetry/virtualenvs/boilerplate-eD2Xgr7s-py3.12/lib/python3.12/site-packages/flake8/plugins/manager.py", line 363, in __init__
self.manager = PluginManager(
^^^^^^^^^^^^^^
File "/Users/m4hi2/Library/Caches/pypoetry/virtualenvs/boilerplate-eD2Xgr7s-py3.12/lib/python3.12/site-packages/flake8/plugins/manager.py", line 243, in __init__
self._load_entrypoint_plugins()
File "/Users/m4hi2/Library/Caches/pypoetry/virtualenvs/boilerplate-eD2Xgr7s-py3.12/lib/python3.12/site-packages/flake8/plugins/manager.py", line 261, in _load_entrypoint_plugins
eps = importlib_metadata.entry_points().get(self.namespace, ())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'EntryPoints' object has no attribute 'get'
Validate types with MyPy.................................................Passed
```
Workaround: update Flake8 in the `pyproject.toml` file, run `poetry update`, and update the pre-commit hooks by running:
``` shell
poetry run pre-commit autoupdate
``` | open | 2024-04-11T00:09:47Z | 2024-07-12T08:16:00Z | https://github.com/s3rius/FastAPI-template/issues/207 | [] | m4hi2 | 2 |
guohongze/adminset | django | 130 | Unable to log in to the demo environment; could you fix it? | Unable to log in to the demo environment; could you fix it? | closed | 2020-01-17T07:08:26Z | 2020-05-14T13:07:44Z | https://github.com/guohongze/adminset/issues/130 | [] | leeredstar | 0
ymcui/Chinese-BERT-wwm | nlp | 103 | Loading the model with Transformers | When I load the model using the following method, will Transformers re-download the model, or can it use the model I have already downloaded?
tokenizer = BertTokenizer.from_pretrained("MODEL_NAME")
model = BertModel.from_pretrained("MODEL_NAME") | closed | 2020-04-09T13:34:55Z | 2020-04-17T07:11:05Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/103 | [] | haozheshz | 2 |
jumpserver/jumpserver | django | 14,425 | [Feature] Feature Request: Option to Display Only Mouse Cursor in Remote Desktop Sessions | ### Product Version
v4.1.0
### Product Edition
- [X] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online Installation (One-click command installation)
- [X] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [X] Kubernetes
- [ ] Source Code
### ⭐️ Feature Description
When using JumpServer to manage remote machines via RDP and VNC, the interaction can become cumbersome, especially during window resizing. This is due to the simultaneous display of both local and remote cursors, which often do not align perfectly. This misalignment can make precise control difficult, negatively impacting the usability of remote management tools. The problem is particularly pronounced when adjusting window sizes or performing detailed graphical tasks on the remote machine.
### Proposed Solution
I propose adding an optional feature that allows users to toggle between displaying only the remote cursor or both cursors. This feature would mirror functionality found in VMware's ESXi and noVNC, where only the remote cursor is displayed, effectively hiding the local cursor. This approach would simplify cursor tracking on the remote desktop and eliminate the confusion and lack of precision caused by displaying two cursors.
### Additional Information
This issue often arises in environments with high-latency or during intensive graphical operations on the remote desktop, where cursor lag and misalignment can significantly disrupt workflow. Implementing this feature as a toggleable option provides flexibility for users to choose the cursor display mode that best suits their needs, depending on their specific use case and network conditions. This enhancement would align JumpServer with other leading remote management solutions, improving its overall functionality and user experience. | open | 2024-11-10T10:44:28Z | 2024-11-28T09:57:14Z | https://github.com/jumpserver/jumpserver/issues/14425 | [
"⏳ Pending feedback",
"⭐️ Feature Request",
"📦 z~release:Version TBD"
] | Galac666 | 6 |
FactoryBoy/factory_boy | django | 254 | Readthedocs fail to detect version | Hello,
I would like to report that readthedocs fails to detect the version of factory_boy. See http://readthedocs.org/projects/factoryboy/versions/. I have no idea why it happens.
Greetings,
| closed | 2015-11-28T16:01:48Z | 2016-01-05T20:29:45Z | https://github.com/FactoryBoy/factory_boy/issues/254 | [
"Doc"
] | ad-m | 1 |
errbotio/errbot | automation | 1,483 | Command with a default arg fails to run when omitting the argument | In order to let us help you better, please fill out the following fields as best you can:
### I am...
* [x] Reporting a bug
* [ ] Suggesting a new feature
* [ ] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### I am running...
* Errbot version: 6.1.6
* OS version: ubuntu 18.04
* Python version: 3.6.9
* Using a virtual environment: docker
* Backend: slack
### Issue description
I have a command with an argument. I'd like it to work with a default value if the argument is omitted.
```
@arg_botcmd("service_name", type=str, default="myservice", help="Service name")
def getstatus(self, msg, service_name):
...
```
I get an error when I try to run it w/o an argument.
```
User: !getstatus myservice
Errbot: ok
User: !getstatus
Errbot: I couldn't parse the arguments; the following arguments are required: service_name
```
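For context, `arg_botcmd` is (as far as I understand) argparse-based, and plain `argparse` behaves the same way: a positional with only a `default` is still required unless `nargs` makes it optional. A standalone sketch of that behavior (not errbot code):

```python
import argparse

# A positional declared with only `default=` is still mandatory:
strict = argparse.ArgumentParser(prog="getstatus")
strict.add_argument("service_name", type=str, default="myservice")

# Making it truly optional needs nargs="?" alongside the default:
lenient = argparse.ArgumentParser(prog="getstatus")
lenient.add_argument("service_name", type=str, nargs="?", default="myservice")

print(lenient.parse_args([]).service_name)         # -> myservice
print(lenient.parse_args(["other"]).service_name)  # -> other

try:
    strict.parse_args([])
except SystemExit:
    print("strict parser rejected the empty argument list")
```

So if `arg_botcmd` forwards its kwargs to argparse, passing `nargs="?"` along with `default=` might be the missing piece.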
The correct/expected result would be for the command to pick the default value `myservice` for the missing argument `service_name`. I also tried to use a default arg value in the method signature, but it did not help.
```
def getstatus(self, msg, service_name="myservice"):
``` | open | 2020-12-01T18:06:14Z | 2020-12-19T06:37:08Z | https://github.com/errbotio/errbot/issues/1483 | [
"type: bug",
"backend: Common"
] | alpinweis | 1 |
holoviz/panel | jupyter | 7,442 | JSComponent should document `_rename` | Without the following, bokeh raises a serialization error `bokeh.core.serialization.SerializationError: can't serialize <class 'function'>`, but https://panel.holoviz.org/reference/custom_components/JSComponent.html doesn't mention anything about it.
```
_rename = {"event_drop_callback": None}
```
```python
import panel as pn
import param
from panel.custom import JSComponent
pn.extension()
class SimpleFullCalendar(JSComponent):
event_drop_callback = param.Callable(
doc="""
A callback that will be called when an event is dropped on the calendar.
"""
)
_esm = """
import { Calendar } from '@fullcalendar/core';
import dayGridPlugin from '@fullcalendar/daygrid';
import interactionPlugin from '@fullcalendar/interaction';
export function render({ model, el }) {
let calendar = new Calendar(el, {
plugins: [ dayGridPlugin, interactionPlugin ],
editable: true,
eventDrop(info) {
model.send_msg({event_drop: JSON.stringify(info)})
},
events: [
{ title: 'event 1', date: '2024-10-01' },
{ title: 'event 2', date: '2024-10-02' }
]
});
calendar.render()
}
"""
_importmap = {
"imports": {
"@fullcalendar/core": "https://cdn.skypack.dev/@fullcalendar/core@6.1.15",
"@fullcalendar/daygrid": "https://cdn.skypack.dev/@fullcalendar/daygrid@6.1.15",
"@fullcalendar/interaction": "https://cdn.skypack.dev/@fullcalendar/interaction@6.1.15",
}
}
_rename = {"event_drop_callback": None}
def _handle_msg(self, msg):
import json
if "event_drop" in msg:
if self.event_drop_callback:
self.event_drop_callback(json.loads(msg["event_drop"]))
calendar = SimpleFullCalendar(event_drop_callback=lambda event: print(event))
calendar.show()
``` | open | 2024-10-24T19:04:17Z | 2024-11-03T07:16:51Z | https://github.com/holoviz/panel/issues/7442 | [
"type: docs"
] | ahuang11 | 1 |
benbusby/whoogle-search | flask | 246 | [QUESTION] Public Instance whooglesearch.net | Hi, I have created this public instance on a Linode VPS (https://whooglesearch.net/). I will try to keep it up to date and working as long as possible. I hope you can include it in the readme file. I hope to collaborate in this way with the community. | closed | 2021-03-30T15:23:56Z | 2021-03-31T02:16:09Z | https://github.com/benbusby/whoogle-search/issues/246 | [
"question"
] | drugal | 1 |
tfranzel/drf-spectacular | rest-api | 787 | Read only fields included in required list for input serializer | **Describe the bug**
In a serializer with a field defined as `read_only=True`, the field is also in the 'required' list. This means that it must be supplied as input, when it is read-only and cannot be changed.
**To Reproduce**
```python
class TopicSerializer(serializers.ModelSerializer):
impacted_systems_count = serializers.IntegerField(read_only=True)
class Meta:
model = models.RuleTopic
fields = (
'name', 'slug', 'description', 'featured',
'enabled', 'impacted_systems_count',
)
```
Generates (compressed for clarity):
```json
"Topic": {
"type": "object",
"description": "Topics group rules by a tag shared by all the rules.",
"properties": {
"name": {"type": "string", "maxLength": 80},
"slug": {"type": "string", "maxLength": 20},
"description": {"type": "string"},
"tag": {"type": "string"},
"featured": {"type": "boolean"},
"enabled": {"type": "boolean"},
"impacted_systems_count": {"type": "integer", "readOnly": true}
},
"required": [
"description", "impacted_systems_count", "name", "slug", "tag"
]
},
```
**Expected behavior**
The schema I expect this to generate is:
```json
"Topic": {
"type": "object",
"description": "Topics group rules by a tag shared by all the rules.",
"properties": {
"name": {"type": "string", "maxLength": 80},
"slug": {"type": "string", "maxLength": 20},
"description": {"type": "string"},
"tag": {"type": "string"},
"featured": {"type": "boolean"},
"enabled": {"type": "boolean"},
"impacted_systems_count": {"type": "integer", "readOnly": true}
},
"required": [
"description", "name", "slug", "tag"
]
},
```
The `impacted_systems_count` field is not required.
I also tried using:
```python
impacted_systems_count = serializers.IntegerField(read_only=True, required=False)
```
But this made no difference to the output schema. | closed | 2022-08-16T23:19:21Z | 2022-08-17T05:51:51Z | https://github.com/tfranzel/drf-spectacular/issues/787 | [] | PaulWay | 2 |
wq/django-rest-pandas | rest-api | 27 | Datetimefield is serialized to str | When I serialize this model
```python
class Well(models.Model):
site = models.ForeignKey(Site, on_delete=models.CASCADE)
name = models.CharField(max_length=100, )
start_date_time = models.DateTimeField(auto_now=False, auto_now_add=False, verbose_name='Date and time at which the well is deployed.')
```
with this serializer
```
class FileSerializer(serializers.ModelSerializer):
class Meta:
model = Well
fields = ('id', 'name', 'site', )
```
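For reference, the conversion in question is just ISO-8601 string to `datetime` parsing; a stdlib-only sketch with hypothetical sample rows:

```python
from datetime import datetime

# Hypothetical serialized rows as they come back from the API,
# with start_date_time rendered as an ISO-8601 string.
rows = [
    {"id": 1, "name": "well-a", "start_date_time": "2017-08-24T13:08:53"},
    {"id": 2, "name": "well-b", "start_date_time": "2017-09-01T20:43:54"},
]

# Convert the strings back into datetime objects, as a
# transform_dataframe hook would do with pandas.to_datetime.
for row in rows:
    row["start_date_time"] = datetime.fromisoformat(row["start_date_time"])

print(type(rows[0]["start_date_time"]).__name__)  # -> datetime
```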
The resulting dataframe has a column `start_date_time` but this is a string rather than a datetime object. I work around this by using `transform_dataframe` and convert the column using `pandas.to_datetime` function. Is this the way to go or am I missing someting? | closed | 2017-08-24T13:08:53Z | 2017-09-01T20:43:54Z | https://github.com/wq/django-rest-pandas/issues/27 | [] | martinwk | 1 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 3,039 | hub image's pip packages are installed as root making subsequent installs require root | In https://github.com/jupyterhub/zero-to-jupyterhub-k8s/pull/3003 @pnasrat concluded that one may need to be root to do `pip` install in a Dockerfile with `FROM jupyterhub/k8s-hub:2.0.0`.
I'm not sure, but I'm thinking that isn't desired. | closed | 2023-02-28T07:53:23Z | 2023-02-28T09:23:18Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3039 | [] | consideRatio | 2 |
open-mmlab/mmdetection | pytorch | 11,205 | Use backbone network implemented in MMPretrain, raise KeyError(f'Cannot find {norm_layer} in registry under | https://github.com/Jano-rm-rf/Test/blob/main/TestChannels.py
I wanted to use the SparseResNet implemented in mmpretrain, but I got an error.

| open | 2023-11-23T11:58:53Z | 2024-09-21T13:33:30Z | https://github.com/open-mmlab/mmdetection/issues/11205 | [] | Jano-rm-rf | 1 |
marshmallow-code/flask-smorest | rest-api | 214 | Webargs "unknown=EXCLUDE" not working for updated Marshmallow | I wrote an app with flask-smorest that is supposed to ignore unknown webargs, but when I upgraded to Marshmallow 3.10.0 (from 3.9.1), it stopped ignoring unknown args in a web request.
Possibly related to #211. Not sure if this is a problem with Marshmallow, flask-marshmallow, or flask-smorest.
Software Versions:
```
flask-smorest==0.27.0
flask-marshmallow==0.14.0
marshmallow==3.10.0
flask-sqlalchemy==2.4.4
flask==1.1.2
```
Here is an example application that demonstrates the error:
```python
from flask import Flask
from flask.views import MethodView
from marshmallow import EXCLUDE
from flask_marshmallow import Marshmallow
from flask_sqlalchemy import SQLAlchemy
from flask_smorest import Api, Blueprint
app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:////tmp/test.db"
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
app.config["API_TITLE"] = "My API"
app.config["API_VERSION"] = "v1"
app.config["OPENAPI_VERSION"] = "3.0.2"
db = SQLAlchemy(app)
ma = Marshmallow(app)
api = Api(app)
blp = Blueprint(
"widgets", "widgets", url_prefix="/widgets", description="Operations on widgets"
)
class Widget(db.Model):
id = db.Column(db.Integer, primary_key=True, autoincrement=True)
name = db.Column(db.String(32), unique=True, nullable=False)
color = db.Column(db.String(32), nullable=False, default="blue")
def __repr__(self):
return "<Widget %r>" % self.name
class WidgetSchema(ma.SQLAlchemySchema):
class Meta:
model = Widget
# vvv This is ignored in flask-smorest 0.27.0/Webargs 7
unknown = EXCLUDE
name = ma.auto_field(required=True)
color = ma.auto_field(required=False)
@blp.route("/")
class Widgets(MethodView):
@blp.response(WidgetSchema(many=True))
def get(self):
return Widget.query.all()
# vvv Even using unknown=None in the schema here doesn't work
@blp.arguments(WidgetSchema(unknown=None))
@blp.response(WidgetSchema, code=201)
def post(self, new_data):
w = Widget(**new_data)
db.session.add(w)
return w
app.register_blueprint(blp)
if __name__ == "__main__":
app.run(debug=True)
```
And here is a test file that triggers the error:
```python
import pytest
from app import Widget, app
@pytest.fixture
def db():
from app import db
with app.app_context():
db.create_all()
yield db
db.session.close()
db.drop_all()
db.session.commit()
@pytest.fixture
def client(db):
with app.app_context():
with app.test_client() as client:
yield client
def test_app(client, db):
widget = Widget(name="Thingy")
db.session.add(widget)
db.session.commit()
resp = client.get("/widgets/")
got = resp.get_json()
assert resp.status_code == 200, f"Bad status {resp.status_code}:\n{got}"
assert len(got) == 1
assert got[0]["name"] == "Thingy"
# 'detail' field is unknown, so should be ignored/excluded
new = {"name": "Whats-it", "color": "red", "detail": "this is ignored"}
resp = client.post("/widgets/", json=new)
got = resp.get_json()
# but this assert fails because it is not excluded as expected:
assert resp.status_code == 201, f"Bad status {resp.status_code}:\n{got}"
assert got["name"] == new["name"]
```
| closed | 2021-01-20T21:21:16Z | 2021-01-22T19:17:32Z | https://github.com/marshmallow-code/flask-smorest/issues/214 | [] | camercu | 3 |
Lightning-AI/LitServe | api | 452 | Potential Memory Leak in Response Buffer | ## 🐛 Bug
The [response_buffer](https://github.com/Lightning-AI/LitServe/blob/08a9caa4360aeef94ee585fc5e88f721550d267b/src/litserve/server.py#L74) dictionary could grow indefinitely if requests fail before their responses are processed.
Implement a cleanup mechanism or timeout for orphaned entries
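One possible shape for such a cleanup, as a standalone sketch (invented names, not LitServe code): stamp each buffered entry with an expiry and periodically sweep expired entries.

```python
import time


class ExpiringBuffer:
    """Dict-like response buffer whose entries expire after `ttl` seconds."""

    def __init__(self, ttl=60.0, clock=time.monotonic):
        self._ttl = ttl
        self._clock = clock
        self._items = {}  # uid -> (expires_at, payload)

    def put(self, uid, payload):
        self._items[uid] = (self._clock() + self._ttl, payload)

    def pop(self, uid, default=None):
        entry = self._items.pop(uid, None)
        return default if entry is None else entry[1]

    def sweep(self):
        """Drop orphaned entries whose TTL has elapsed; return dropped uids."""
        now = self._clock()
        dead = [uid for uid, (exp, _) in self._items.items() if exp <= now]
        for uid in dead:
            del self._items[uid]
        return dead

    def __len__(self):
        return len(self._items)


# Demo with a fake clock so the expiry is deterministic:
t = [0.0]
buf = ExpiringBuffer(ttl=5.0, clock=lambda: t[0])
buf.put("req-1", b"response-bytes")
t[0] = 10.0  # pretend 10 s pass without the response being consumed
print(buf.sweep())  # -> ['req-1']
```

The sweep could run from a background task or be piggybacked on each put.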
#### Code sample
<!-- Ideally attach a minimal code sample to reproduce the decried issue.
Minimal means having the shortest code but still preserving the bug. -->
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
### Environment
If you published a Studio with your bug report, we can automatically get this information. Otherwise, please describe:
- PyTorch/Jax/Tensorflow Version (e.g., 1.0):
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, source):
- Build command you used (if compiling from source):
- Python version:
- CUDA/cuDNN version:
- GPU models and configuration:
- Any other relevant information:
### Additional context
<!-- Add any other context about the problem here. -->
| open | 2025-03-12T11:32:40Z | 2025-03-16T05:14:43Z | https://github.com/Lightning-AI/LitServe/issues/452 | [
"bug",
"good first issue",
"help wanted"
] | aniketmaurya | 1 |
autokey/autokey | automation | 773 | Update Python versions in the develop branch | ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Enhancement
### Which Linux distribution did you use?
_No response_
### Which AutoKey GUI did you use?
None
### Which AutoKey version did you use?
_No response_
### How did you install AutoKey?
_No response_
### Can you briefly describe the issue?
All references to Python versions in the **develop** branch of the source code need to be updated to currently-supported versions.
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
Submit a pull request to the project.
### What should have happened?
It should pass all tests.
### What actually happened?
It fails on Python versions.
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
| open | 2023-02-18T21:52:59Z | 2023-03-08T22:24:37Z | https://github.com/autokey/autokey/issues/773 | [
"0.96.1"
] | Elliria | 10 |
mljar/mljar-supervised | scikit-learn | 499 | X has feature names, but StandardScaler was fitted without feature names | This issue has been coming up when I use,
automl.predict_proba(input)
I am using the requirements.txt in a venv. Shouldn't the input have feature names?
This message never used to come up, and I don't know why. | closed | 2021-12-05T01:07:26Z | 2024-03-01T07:05:05Z | https://github.com/mljar/mljar-supervised/issues/499 | [
"help wanted",
"good first issue"
] | JacobMarley | 7 |
slackapi/python-slack-sdk | asyncio | 1,421 | SlackApiError: 'dict' object has no attribute 'status_code' | `slack_sdk.web` reports the `'dict' object has no attribute 'status_code'` error when attempting to retrieve the SlackApiError's `status_code`.
### Reproducible in:
```bash
$ pip list | grep slack
slack-sdk 3.23.0
$ pip freeze | grep slack
-e git+ssh://git@github.com/python-slack-sdk.git@9967dc0a206d974f7ef2c09dd6c49f70b98ecf1e#egg=slack_sdk
$ python --version
Python 3.9.6
$ sw_vers
ProductName: macOS
ProductVersion: 13.6.1
BuildVersion: 22G313
```
#### Steps to reproduce:
1. When catching SlackApiError, decide the different process logics by retrieving and checking the returned response's `status_code`, similar to the example code of checking ratelimit `status_code 429` in [slack.dev ](https://slack.dev/python-slack-sdk/web/index.html#rate-limits) website. The code looks like below:
```python
try:
response = send_slack_message(channel, message)
except SlackApiError as e:
if err_status_code := e.response.status_code >= 500:
print(f"SlackApiError returned status code: {err_status_code}")
else:
<omitted>
```
2. Wait for Slack Service Unavailable e.g. 503 to trigger the except processing in the above code
### Expected result:
When SlackApiError happens, the code (by calling `slack_sdk.web`) can retrieve `status_code` from SlackApiError's [response](https://github.com/slackapi/python-slack-sdk/blob/v3.23.0/slack_sdk/web/slack_response.py#L72) attribute successfully, w/o throwing out any further error.
### Actual result:
When a real Slack service outage happened, the following AttributeError `'dict' object has no attribute 'status_code'` was thrown unexpectedly:
```bash
slack_sdk.errors.SlackApiError: Received a response in a non-JSON format: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"><html><head><title>503 Service Unavailable</title>...
The server responded with: {'status': 503, 'headers': {'date': '<day>, <date> <time>', 'server': 'Apache', 'content-length': '299', 'content-type': 'text/html; charset=iso-8859-1', 'x-envoy-upstream-service-time': '1', 'x-backend': 'main_normal main_canary_with_overflow main_control_with_overflow', 'x-server': 'slack-www-hhvm-main-iad-vdno', 'via': 'envoy-www-iad-rplytdoy, envoy-edge-iad-zbjxhisv', 'x-slack-shared-secret-outcome': 'no-match', 'x-edge-backend': 'envoy-www', 'x-slack-edge-shared-secret-outcome': 'no-match', 'connection': 'close'}, 'body': '<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">\n<html><head>\n<title>503 Service Unavailable</title>\n</head><body>\n<h1>Service Unavailable</h1>\n<p>The server is temporarily unable to service your\nrequest due to maintenance downtime or capacity\nproblems. Please try again later.</p>\n</body></html>\n'}
During handling of the above exception, another exception occurred:
AttributeError: 'dict' object has no attribute 'status_code'
```
### Probable cause:
When encountering API return's json decoding error (that's the case of 500 error), the response is a dict (returned by [_perform_urllib_http_request()](https://github.com/slackapi/python-slack-sdk/blob/v3.23.0/slack_sdk/web/base_client.py#L323)) - this implementation in [slack_sdk.web.base_client](https://github.com/slackapi/python-slack-sdk/blob/v3.23.0/slack_sdk/web/base_client.py#L294-L302) module must have been there for a long time, even before v3.0 :
```python
response = self._perform_urllib_http_request(url=url, args=request_args)
response_body = response.get("body", None) # skipcq: PTC-W0039
response_body_data: Optional[Union[dict, bytes]] = response_body
if response_body is not None and not isinstance(response_body, bytes):
try:
response_body_data = json.loads(response["body"])
except json.decoder.JSONDecodeError:
message = _build_unexpected_body_error_message(response.get("body", ""))
raise err.SlackApiError(message, response)
```
But SlackApiError's [response](https://github.com/slackapi/python-slack-sdk/blob/v3.23.0/slack_sdk/errors/__init__.py#L22) attribute should be the type of [SlackResponse](https://github.com/slackapi/python-slack-sdk/blob/v3.23.0/slack_sdk/web/slack_response.py#L12) instead of dict that doesn't have `status_code`:
```python
class SlackApiError(SlackClientError):
"""Error raised when Slack does not send the expected response.
Attributes:
response (SlackResponse): The SlackResponse object containing all of the data sent back from the API.
```
### Probable solution:
There are probably several solutions. To make the code easy to reference (always provide `status_code` return for whatever SlackApiError) and consistent (use only one type of response for SlackApiError and one entry to SlackApiError), we can re-assemble the response body data by leveraging the message output from [_build_unexpected_body_error_message()](https://github.com/slackapi/python-slack-sdk/blob/v3.23.0/slack_sdk/web/internal_utils.py#L293) (that's a str), with one line change only:
```python
except json.decoder.JSONDecodeError:
message = _build_unexpected_body_error_message(response.get("body", ""))
# raise err.SlackApiError(message, response)
response_body_data = {'ok': False, 'error': message}
```
Then this payload will be sent to SlackResponse's [validate()](https://github.com/slackapi/python-slack-sdk/blob/v3.23.0/slack_sdk/web/base_client.py#L309), that will trigger SlackApiError eventually:
```python
return SlackResponse(
client=self,
http_verb="POST", # you can use POST method for all the Web APIs
api_url=url,
req_args=request_args,
data=response_body_data,
headers=dict(response["headers"]),
status_code=response["status"],
).validate()
```
Seeking feedback and suggestions - I can raise a PR if the fix proposed above is acceptable.
### Mock test:
Using the above proposed solution, mock test passed like below - no longer trigger another unexpected AttributeError, also provide status_code access and consistent error message:
```python
>>> with patch('slack_sdk.WebClient._perform_urllib_http_request') as mock_urllib:
... http_error_response = HTTPError("https://example.com/api/data", 503, "503 Service Unavailable", None, BytesIO(b'<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"><html><head><title>503 Service Unavailable</title>'))
... mock_urllib.side_effect = http_error_response
... try:
... ret = client.chat_postMessage(channel="chan", text="hello")
... except HTTPError as e:
... response_headers = {'date': '<day>, <date> <time>', 'server': 'Apache', 'content-length': '299', 'content-type': 'text/html; charset=iso-8859-1'}
... response = {"status": e.code, "headers": response_headers, "body": e.read().decode("utf-8")}
... print(f"The type of returned response: {type(response)}")
... response_body = response.get("body", None)
... response_body_data: Optional[Union[dict, bytes]] = response_body
... if response_body is not None and not isinstance(response_body, bytes):
... try:
... response_body_data = json.loads(response["body"])
... except json.decoder.JSONDecodeError:
... message = _build_unexpected_body_error_message(response.get("body", ""))
... # raise SlackApiError(message, response)
... response_body_data = {'ok': False, 'error': message}
... try:
... SlackResponse(
... client=client,
... http_verb="POST", # you can use POST method for all the Web APIs
... api_url=url,
... req_args=request_args,
... data=response_body_data,
... headers=dict(response["headers"]),
... status_code=response["status"],
... ).validate()
... except SlackApiError as e:
... print(f"SlackApiError attribute response's status_code: {e.response.status_code}")
... print(f"SlackApiError attribute response's data: {e.response.data}")
...
The type of returned response: <class 'dict'>
SlackApiError attribute response's status_code: 503
SlackApiError attribute response's data: {'ok': False, 'error': 'Received a response in a non-JSON format: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"><html><head><title>503 Service Unavailable</title>'}
>>>
``` | closed | 2023-11-07T12:24:15Z | 2023-11-13T08:56:54Z | https://github.com/slackapi/python-slack-sdk/issues/1421 | [
"bug",
"web-client",
"Version: 3x"
] | vinceta | 3 |
betodealmeida/shillelagh | sqlalchemy | 23 | Documentation needed | Some docs would be nice. Showing how to use sqlalchemy dialects & dbapi, with, say, google sheets.
At a minimum, putting examples (or a link to them) in the README | closed | 2021-06-18T15:59:40Z | 2021-06-21T17:49:44Z | https://github.com/betodealmeida/shillelagh/issues/23 | [] | polvoazul | 2 |
falconry/falcon | api | 1,628 | Advice needed on API responder. | Greetings,
I am not sure where to post this question; if I should raise it elsewhere, please advise.
Building a JSON-restful API with Falcon is pretty straightforward. However, what does not fall within the scope of the documentation is backend storage integration and parsing/validation of data; that is outside the scope of the Falcon project. The question I have, though, is: how are people validating and storing data from JSON POST/PATCH requests? You can't possibly be writing validation code in each responder (what a waste), and JSON Schema doesn't cut it, nor does OpenAPI integration (since it's based on JSON Schema validation). You have to validate the data before storing it or processing some form of transaction. So what approach are other Falcon developers using here, developers who can't afford to write hundreds or thousands of lines of code across many different responders just to validate input? I have not yet seen a common approach to this problem; many projects differ, and some are even flawed in design. SQLAlchemy is great, but it doesn't offer validation of, for example, the format of a phone number or email, and you don't always want to store the data in SQL :-) sometimes you just want to validate it. So whatever the solution is, it needs to handle validation and pass the data on to storage such as SQL when needed.
Any pointers? | closed | 2020-01-03T07:11:32Z | 2022-01-05T06:49:37Z | https://github.com/falconry/falcon/issues/1628 | [
"question",
"community"
] | cfrademan | 4 |
jupyter/nbviewer | jupyter | 193 | Can I use nbviewer with private repos/gists? | Is there any way I can give nbviewer access to a private repo/gist so I can use it privately with a collaborator?
| closed | 2014-02-19T16:44:47Z | 2024-04-10T23:32:01Z | https://github.com/jupyter/nbviewer/issues/193 | [] | rhiever | 5 |
nvbn/thefuck | python | 1,368 | Python Unicode Warning (but on Git Bash) | The output of `thefuck --version`:
> The Fuck 3.32 using Python 3.11.1 and Bash
Your system (Debian 7, ArchLinux, Windows, etc.):
> Windows
How to reproduce the bug:
> Put the eval stuff on the .bashrc and run a new terminal
The warning:
> C:\Program Files\Python311\Lib\site-packages\win_unicode_console\__init__.py:31: RuntimeWarning: sys.stdin.encoding == 'utf-8', whereas sys.stdout.encoding == 'cp1252', readline hook consumer may assume they are the same readline_hook.enable(use_pyreadline=use_pyreadline)
I've read about this exact error and how to solve it for PowerShell, but how can I solve it for Git Bash?
| closed | 2023-03-29T15:32:56Z | 2023-03-29T15:34:33Z | https://github.com/nvbn/thefuck/issues/1368 | [] | LuizLoyola | 1 |
chaos-genius/chaos_genius | data-visualization | 819 | [BUG] Yaxis labels getting cutoff in charts with large data during formatting and display | ## Describe the bug
A clear and concise description of what the bug is.
yAxis labels are getting cut off during formatting of large numbers
<img width="241" alt="Screenshot 2022-03-12 at 1 02 04 PM" src="https://user-images.githubusercontent.com/50948001/158008633-b1017c5f-9d22-4782-bbd0-3c1bb8b88482.png">
| closed | 2022-03-12T07:32:24Z | 2022-03-23T06:50:07Z | https://github.com/chaos-genius/chaos_genius/issues/819 | [
"🖥️ frontend"
] | ChartistDev | 1 |
adamerose/PandasGUI | pandas | 126 | json.decoder.JSONDecodeError when import pandasgui | PLEASE FILL OUT THE TEMPLATE
**Describe the bug**
use the following code, and Python reports an error:
```python
import pandas as pd
from pandasgui import show

df = pd.DataFrame({'a':[1,2,3], 'b':[4,5,6], 'c':[7,8,9]})
show(df)
```
error message:
```
Traceback (most recent call last):
  File "test_pandagui.py", line 2, in <module>
    from pandasgui import show
  File "C:\ProgramData\Anaconda3\envs\analyze_excel\lib\site-packages\pandasgui\__init__.py", line 15, in <module>
    from pandasgui.gui import show
  File "C:\ProgramData\Anaconda3\envs\analyze_excel\lib\site-packages\pandasgui\gui.py", line 13, in <module>
    from pandasgui.store import PandasGuiStore, PandasGuiDataFrameStore
  File "C:\ProgramData\Anaconda3\envs\analyze_excel\lib\site-packages\pandasgui\store.py", line 550, in <module>
    SETTINGS_STORE = SettingsStore()
  File "C:\ProgramData\Anaconda3\envs\analyze_excel\lib\site-packages\pandasgui\store.py", line 96, in __init__
    saved_settings = read_saved_settings()
  File "C:\ProgramData\Anaconda3\envs\analyze_excel\lib\site-packages\pandasgui\store.py", line 35, in read_saved_settings
    saved_settings = json.load(f)
  File "C:\ProgramData\Anaconda3\envs\analyze_excel\lib\json\__init__.py", line 293, in load
    return loads(fp.read(),
  File "C:\ProgramData\Anaconda3\envs\analyze_excel\lib\json\__init__.py", line 357, in loads
    return _default_decoder.decode(s)
  File "C:\ProgramData\Anaconda3\envs\analyze_excel\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\ProgramData\Anaconda3\envs\analyze_excel\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
```
**Environment**
OS: (eg. Windows 10)
Python: (eg. 3.8.4)
IDE: (eg. PyCharm)
**Package versions**
install anaconda environment
dependencies:
- ca-certificates=2021.1.19=haa95532_1
- certifi=2020.12.5=py38haa95532_0
- openssl=1.1.1k=h2bbff1b_0
- pip=21.0.1=py38haa95532_0
- python=3.8.8=hdbf39b2_4
- setuptools=52.0.0=py38haa95532_0
- sqlite=3.35.2=h2bbff1b_0
- vc=14.2=h21ff451_1
- vs2015_runtime=14.27.29016=h5e58377_2
- wheel=0.36.2=pyhd3eb1b0_0
- wincertstore=0.2=py38_0
- pip:
- appdirs==1.4.4
- astor==0.8.1
- backcall==0.2.0
- colorama==0.4.4
- cycler==0.10.0
- decorator==4.4.2
- et-xmlfile==1.0.1
- future==0.18.2
- ipython==7.22.0
- ipython-genutils==0.2.0
- jedi==0.18.0
- kiwisolver==1.3.1
- matplotlib==3.4.0
- numexpr==2.7.3
- numpy==1.20.1
- openpyxl==3.0.7
- pandas==1.2.3
- pandasgui==0.2.10.1
- pandastable==0.12.2.post1
- parso==0.8.1
- pickleshare==0.7.5
- pillow==8.1.2
- plotly==4.14.3
- prompt-toolkit==3.0.18
- pyarrow==3.0.0
- pygments==2.8.1
- pynput==1.7.3
- pyparsing==2.4.7
- pyqt5==5.15.4
- pyqt5-qt5==5.15.2
- pyqt5-sip==12.8.1
- pyqtwebengine==5.15.4
- pyqtwebengine-qt5==5.15.2
- python-dateutil==2.8.1
- pytz==2021.1
- retrying==1.3.3
- six==1.15.0
- tkintertable==1.3.2
- traitlets==5.0.5
- wcwidth==0.2.5
- wordcloud==1.8.1
- xlrd==2.0.1
- xlsxwriter==1.3.7
- xlwt==1.3.0
| closed | 2021-03-27T04:59:30Z | 2021-04-23T04:44:17Z | https://github.com/adamerose/PandasGUI/issues/126 | [
"bug"
] | andrewmabc | 3 |
autogluon/autogluon | computer-vision | 4,636 | Figure out how to do source install with uv on Colab/Kaggle | We should have working instructions for AutoGluon source installs with UV on Colab/Kaggle notebooks. | open | 2024-11-10T01:26:32Z | 2025-02-25T20:33:38Z | https://github.com/autogluon/autogluon/issues/4636 | [
"install",
"priority: 0"
] | Innixma | 0 |
ultralytics/ultralytics | deep-learning | 19,648 | can not run tensorrt,bug error: module 'tensorrt' has no attribute '__version__' | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Install
### Bug
I downloaded the right CUDA, cuDNN, torch and torchvision versions, and then I downloaded TensorRT 8.5 GA on my Windows machine.
When I run this demo code:
```
from ultralytics import YOLO
if __name__ == '__main__':
model = YOLO('./yolo11n.pt')
model.export(
format='engine',
imgsz=640,
keras=False,
optimize=False,
half=False,
int8=False,
dynamic=False,
simplify=True,
opset=None,
workspace=5.0,
nms=False,
batch=1,
device='0',
)
```
it fails with the error below:
```
(yolo11) E:\yolo11>python tensorrt.py
Ultralytics 8.3.87 🚀 Python-3.9.21 torch-2.2.2+cu118 CUDA:0 (NVIDIA GeForce RTX 3060 Laptop GPU, 6144MiB)
ONNX: starting export with onnx 1.17.0 opset 17...
ONNX: slimming with onnxslim 0.1.48...
ONNX: export success ✅ 3.5s, saved as 'yolo11n.onnx' (10.2 MB)
TensorRT: export failure ❌ 3.5s: module 'tensorrt' has no attribute '__version__'
Traceback (most recent call last):
File "E:\yolo11\tensorrt.py", line 9, in <module>
model.export(
File "E:\yolo11\ultralytics\engine\model.py", line 742, in export
return Exporter(overrides=args, _callbacks=self.callbacks)(model=self.model)
File "E:\yolo11\ultralytics\engine\exporter.py", line 429, in __call__
f[1], _ = self.export_engine(dla=dla)
File "E:\yolo11\ultralytics\engine\exporter.py", line 182, in outer_func
raise e
File "E:\yolo11\ultralytics\engine\exporter.py", line 177, in outer_func
f, model = inner_func(*args, **kwargs)
File "E:\yolo11\ultralytics\engine\exporter.py", line 855, in export_engine
check_version(trt.__version__, ">=7.0.0", hard=True)
AttributeError: module 'tensorrt' has no attribute '__version__'
```
### Environment
Python 3.9.21
torch 2.2 with their vision
tensorrt 8.5
cuda 11.8 cudnn for 11.8
windows 10
```
(yolo11) E:\yolo11>pip list
Package Version
------------------- ------------
certifi 2025.1.31
charset-normalizer 3.4.1
colorama 0.4.6
coloredlogs 15.0.1
contourpy 1.3.0
cycler 0.12.1
filelock 3.17.0
flatbuffers 25.2.10
fonttools 4.56.0
fsspec 2025.3.0
humanfriendly 10.0
idna 3.10
importlib_resources 6.5.2
Jinja2 3.1.6
kiwisolver 1.4.7
MarkupSafe 3.0.2
matplotlib 3.9.4
mpmath 1.3.0
networkx 3.2.1
numpy 1.24.0
onnx 1.17.0
onnxruntime-gpu 1.19.2
onnxslim 0.1.48
opencv-python 4.11.0.86
packaging 24.2
pandas 2.2.3
pillow 11.1.0
pip 25.0
protobuf 6.30.0
psutil 7.0.0
py-cpuinfo 9.0.0
pyparsing 3.2.1
pyreadline3 3.5.4
python-dateutil 2.9.0.post0
pytz 2025.1
PyYAML 6.0.2
requests 2.32.3
scipy 1.13.1
seaborn 0.13.2
setuptools 75.8.0
six 1.17.0
sympy 1.13.1
tensorrt 8.5.1.7
torch 2.2.2+cu118
torchaudio 2.2.2+cu118
torchvision 0.17.2+cu118
tqdm 4.67.1
typing_extensions 4.12.2
tzdata 2025.1
ultralytics 8.3.87
ultralytics-thop 2.0.14
urllib3 2.3.0
wheel 0.45.1
zipp 3.21.0
```
### Minimal Reproducible Example
```
from ultralytics import YOLO
if __name__ == '__main__':
model = YOLO('./yolo11n.pt')
model.export(
format='engine',
imgsz=640,
keras=False,
optimize=False,
half=False,
int8=False,
dynamic=False,
simplify=True,
opset=None,
workspace=5.0,
nms=False,
batch=1,
device='0',
)
```
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR! | closed | 2025-03-11T18:20:51Z | 2025-03-12T07:18:29Z | https://github.com/ultralytics/ultralytics/issues/19648 | [
"dependencies",
"exports"
] | Hitchliff | 4 |
pydata/bottleneck | numpy | 409 | Python crashed in fuzzing test of 7 APIs | **Describe the bug**
[API-list](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/API-list.txt): bn.nanmedian, bn.nanmean, bn.nanstd, bn.median, bn.ss, bn.nanmin, bn.nanmax.
Python crashed in our fuzzing test of 7 APIs
All tests were run on the latest developing branch.
**To Reproduce**
To assist in reproducing the bug, please include the following:
1. [Script](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/random_shape.py)
*example:* python random_shape.py input1 bn.nanmedian
2. Python 3.8 & Ubuntu 18.04
3. pip 21.2.4
4. [pip list](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/pip-list)
**Expected behavior**
Python should not crash.
**Additional context**
[input1](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/input1)
[input2](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/input2)
[input3](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/input3)
[input4](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/input4)
[input5](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/input5)
[input6](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/input6)
[input7](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/input7)
Crash information:
[median](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/crash_info-median.txt)
[nanmean](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/crash_info-nanmean.txt)
[nanmedian](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/crash_info-nanmedian.txt)
[nanmin](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/crash_info-nanmin.txt)
[nanstd](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/crash_info-nanstd.txt)
[ss](https://github.com/baltsers/polyfuzz/blob/main/bottleneck/crash_info-ss.txt)
nanmin and nanmax crashed at the same position. | open | 2022-05-15T18:59:51Z | 2023-05-31T10:22:37Z | https://github.com/pydata/bottleneck/issues/409 | [
"bug"
] | baltsers | 1 |
K3D-tools/K3D-jupyter | jupyter | 419 | "No module named `k3d`" on macos | * K3D version: 2.15.2
* Python version: 3.11.3 (via homebrew)
* Operating System: macos 13.3.1 (22E261) (intel)
### Description
After installing k3d, importing the module does not work:

### What I Did
Create a new venv and install k3d:
```sh
pip install k3d
```
Run `jupyter-notebook` locally and connect to it.
| closed | 2023-04-21T07:48:01Z | 2023-04-22T02:58:39Z | https://github.com/K3D-tools/K3D-jupyter/issues/419 | [] | bsekura | 1 |
tox-dev/tox | automation | 3,474 | TOML configuration of `set_env` should also support loading from environment files | ## What's the problem this feature will solve?
Loading environment files in `set_env` was implemented by @gaborbernat in #1668 and is currently documented as being generally usable. However, it appears that the `file|` syntax and custom parsing code is incompatible with a TOML configuration of `set_env`.
## Describe the solution you'd like
TOML configuration of `set_env` should support the ability to load environment files. This would probably be most elegant with some new syntax, as the `file|` syntax doesn't map cleanly onto the expected type of `set_env` in TOML.
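For context, "loading an environment file" here amounts to reading `KEY=VALUE` pairs into a mapping; a generic sketch (not tox's actual parser) might look like:

```python
def parse_env_file(text: str) -> dict:
    """Generic sketch of env-file parsing (illustration only, not tox code):
    skip blanks and comments, split each remaining line on the first '='."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env
```

The open question in this issue is only how to reference such a file from the TOML form of `set_env`, where values are a plain table rather than `file|`-prefixed strings.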
## Alternative Solutions
Alternatively, the documentation should be updated to clarify that the environment file loading / `file|` syntax is only supported in the legacy INI configuration file format. | closed | 2025-01-29T18:36:07Z | 2025-03-06T22:18:17Z | https://github.com/tox-dev/tox/issues/3474 | [
"help:wanted",
"enhancement"
] | brianhelba | 2 |
pytest-dev/pytest-qt | pytest | 369 | PyQt6 support breaks PyQt5 on Ubuntu | I'm currently facing an issue in my CI which looks related to a recent change in pytest-qt:
The culprit is this line here: https://github.com/pytest-dev/pytest-qt/blob/master/src/pytestqt/qt_compat.py#L144 which was introduced 3 months ago.
System: Ubuntu Focal
Qt version: 5.12.8
PyQt5 version: 5.14.1
log:
```
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR>   File "/usr/lib/python3/dist-packages/_pytest/main.py", line 202, in wrap_session
INTERNALERROR>     config._do_configure()
INTERNALERROR>   File "/usr/lib/python3/dist-packages/_pytest/config/__init__.py", line 723, in _do_configure
INTERNALERROR>     self.hook.pytest_configure.call_historic(kwargs=dict(config=self))
INTERNALERROR>   File "/usr/lib/python3/dist-packages/pluggy/hooks.py", line 308, in call_historic
INTERNALERROR>     res = self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR>   File "/usr/lib/python3/dist-packages/pluggy/manager.py", line 92, in _hookexec
INTERNALERROR>     return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR>   File "/usr/lib/python3/dist-packages/pluggy/manager.py", line 83, in <lambda>
INTERNALERROR>     self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
INTERNALERROR>   File "/usr/lib/python3/dist-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR>     return outcome.get_result()
INTERNALERROR>   File "/usr/lib/python3/dist-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR>     raise ex[1].with_traceback(ex[2])
INTERNALERROR>   File "/usr/lib/python3/dist-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR>     res = hook_impl.function(*args)
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/pytestqt/plugin.py", line 203, in pytest_configure
INTERNALERROR>     qt_api.set_qt_api(config.getini("qt_api"))
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/pytestqt/qt_compat.py", line 144, in set_qt_api
INTERNALERROR>     self.isdeleted = _import_module("sip").isdeleted
INTERNALERROR>   File "/usr/local/lib/python3.8/dist-packages/pytestqt/qt_compat.py", line 101, in _import_module
INTERNALERROR>     return getattr(m, module_name)
INTERNALERROR> AttributeError: module 'PyQt5' has no attribute 'sip'
[Testcase: testtest_lib] ... FAILURE!
``` | closed | 2021-06-04T16:47:37Z | 2021-06-07T08:50:14Z | https://github.com/pytest-dev/pytest-qt/issues/369 | [] | machinekoder | 7 |
healthchecks/healthchecks | django | 857 | Building a Docker image fails on Windows 10 | Hi there,
`>docker-compose up` fails on my windows 10 with:
```bash
=> [web stage-1 4/9] COPY --from=builder /wheels /wheels 0.2s
=> [web stage-1 5/9] RUN apt update && apt install -y libcurl4 libpq5 libmariadb3 libxml2 && rm -rf 10.1s
=> [web stage-1 6/9] RUN pip install --upgrade pip && pip install --no-cache /wheels/* 11.2s
=> [web stage-1 7/9] COPY --from=builder /opt/healthchecks/ /opt/healthchecks/ 0.2s
=> ERROR [web stage-1 8/9] RUN rm -f /opt/healthchecks/hc/local_settings.py && DEBUG=False SECRET_KEY=bu 0.7s
------
> [web stage-1 8/9] RUN rm -f /opt/healthchecks/hc/local_settings.py && DEBUG=False SECRET_KEY=build-key ./manage.py collectstatic --noinput && DEBUG=False SECRET_KEY=build-key ./manage.py compress:
#0 0.680 /usr/bin/env: ‘python\r’: No such file or directory
#0 0.680 /usr/bin/env: use -[v]S to pass options in shebang lines
------
time="2023-07-03T13:28:19-04:00" level=warning msg="buildx: git was not found in the system. Current commit information was not captured by the build"
failed to solve: process "/bin/sh -c rm -f /opt/healthchecks/hc/local_settings.py && DEBUG=False SECRET_KEY=build-key ./manage.py collectstatic --noinput && DEBUG=False SECRET_KEY=build-key ./manage.py compress" did not complete successfully: exit code: 127
```
Yes, I don't have `git` on that machine, but is that the actual issue? I don't see any requirement to have it in order to get this running in Docker.
using latest from master:
```bash
>git log
commit 1bfd1d22b0f2518e6a7a102e48e731d492456a4d (HEAD -> master, origin/master, origin/HEAD)
Author: Viktor Szépe <viktor@szepe.net>
Date: Sun Jul 2 14:32:20 2023 +0200
```
UPD:
Steps to reproduce:
- `git clone` repo
- copy project to my box without git installed (via RDP)
- `cd healthchecks/docker`
- `cp .env.example .env`
- `vim .env` and upd all fields listed in wiki https://github.com/healthchecks/healthchecks/tree/master/docker#getting-started
- `docker-compose up`
Expected: a new image is built
Actual: failed with exception (see above)
| closed | 2023-07-03T17:43:28Z | 2023-07-05T18:12:26Z | https://github.com/healthchecks/healthchecks/issues/857 | [] | kyxap | 4 |
jwkvam/bowtie | plotly | 17 | support for periodic tasks | This would enable, among other things, real-time streaming.
| closed | 2016-08-27T19:56:55Z | 2016-08-28T16:43:05Z | https://github.com/jwkvam/bowtie/issues/17 | [] | jwkvam | 0 |
jina-ai/serve | machine-learning | 5,562 | chore: draft release note 3.13.2 | ## Release Note (`3.13.2`)
This release contains 1 bug fix.
## 🐞 Bug Fixes
### fix: respect timeout_ready when generating startup probe ([#5560](https://github.com/jina-ai/jina/pull/5560))
As Kubernetes Startup Probes were added to all deployments in [release v3.13.0](https://github.com/jina-ai/jina/releases/tag/v3.13.0), we added default values for all probe configurations. However, if those default configurations were not enough to wait for an Executor that takes time to load and become ready, the Executor deployment would become subject to the configured restart policy.
Therefore, Executors that are slow to load would keep restarting forever.
In this patch, this behavior is fixed by making sure that Startup Probe configurations respect the `timeout_ready` argument of Executors.
Startup Probes are configured like so:
* `periodSeconds` always set to 5 seconds
* `timeoutSeconds` always set to 10 seconds
* `failureThreshold` varies according to `timeout_ready`. The formula used is `failureThreshold = timeout_ready / 5` and
in all cases, it will be at least 3. If `timeout_ready` is `-1` (in jina it means waiting forever for the Executor to
become ready), since waiting forever is not supported in Kubernetes, the value for `failureThreshold` will be 120.
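The threshold rule above can be sketched as follows (an illustration of the formula in this note, not the actual Jina source; `timeout_ready` is assumed to be expressed in seconds here):

```python
def startup_failure_threshold(timeout_ready: float) -> int:
    """Sketch of the failureThreshold derivation described above.

    Assumes timeout_ready is in seconds; -1 means "wait forever".
    """
    if timeout_ready == -1:
        # Kubernetes cannot wait forever, so this case is capped at 120.
        return 120
    # periodSeconds is 5, so one probe fires every 5 seconds; never below 3.
    return max(3, int(timeout_ready / 5))
```

With `periodSeconds` fixed at 5, this means an Executor gets roughly `timeout_ready` seconds of probing before the restart policy kicks in.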
## 🤘 Contributors
We would like to thank all contributors to this release:
- AlaeddineAbdessalem ([@alaeddine-13](https://github.com/alaeddine-13))
| closed | 2022-12-30T10:22:13Z | 2022-12-30T11:13:45Z | https://github.com/jina-ai/serve/issues/5562 | [] | alaeddine-13 | 0 |