| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
wkentaro/labelme | computer-vision | 728 | [BUG] Pascal VOC conversion - white outlining | **Describe the bug**
The Pascal VOC conversion for image segmentation doesn't apply the white object outlining that the Pascal VOC dataset uses.
**To Reproduce**
Steps to reproduce the behavior:
1. Follow the example here: https://github.com/wkentaro/labelme/tree/master/examples/instance_segmentation
**Expected behavior**
An output that matches the Pascal VOC dataset. Here is an example from the dataset itself:

However, the output doesn't have the same white outlining:

**Desktop (please complete the following information):**
- Windows Anaconda
**Additional context**
Everything else about instance segmentation seems good!
| closed | 2020-07-17T00:24:31Z | 2024-03-08T07:30:15Z | https://github.com/wkentaro/labelme/issues/728 | [
"issue::bug"
] | crouchjt | 6 |
psf/requests | python | 6,303 | response.reason is None instead of the expected reason string |
`response.reason` is `None` instead of the expected reason string.
## Expected Result
Print `'INTERNAL SERVER ERROR'` or something similar
## Actual Result
`'None'` is printed
## Reproduction Steps
```python
# `requests_mock` is the pytest fixture provided by the requests-mock package
requests_mock.get(
    'https://google.com',
    json={},
    status_code=500
)
response = requests.get('https://google.com')
print(response.reason)
```
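When the adapter doesn't supply a reason phrase (as the mocked adapter apparently doesn't here), one possible stdlib fallback is to map the status code back to the standard phrase via `http.client.responses`; a sketch (the helper name is my own, not part of requests):

```python
import http.client

def reason_or_default(status_code, reason=None):
    # Fall back to the standard HTTP reason phrase when the adapter
    # (e.g. a mocked one) leaves response.reason unset.
    return reason or http.client.responses.get(status_code, "Unknown")

print(reason_or_default(500))        # -> Internal Server Error
print(reason_or_default(200, "OK"))  # -> OK
```

This sidesteps the issue for display purposes, although the underlying mock arguably should populate the reason itself.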
## System Information
$ python -m requests.help
```json
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "2.1.1"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.4"
},
"implementation": {
"name": "CPython",
"version": "3.11.0"
},
"platform": {
"release": "22.1.0",
"system": "Darwin"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.28.1"
},
"system_ssl": {
"version": "1010113f"
},
"urllib3": {
"version": "1.26.13"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
```
| closed | 2022-12-08T01:31:27Z | 2023-12-09T00:03:13Z | https://github.com/psf/requests/issues/6303 | [] | westy92 | 3 |
donnemartin/data-science-ipython-notebooks | numpy | 61 | PyTorch tutorials | Hey, I see that there are no tutorial notebooks for implementing machine learning algorithms and neural networks in PyTorch in this repo yet. PyTorch is gaining a lot of traction lately and is really going to be one of the most popular frameworks due to its dynamic computational graph & eager execution.
I would like to add such tutorial notebooks in PyTorch. | open | 2018-10-30T13:56:07Z | 2024-04-28T13:01:09Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/61 | [
"help wanted",
"feature-request"
] | gokriznastic | 4 |
miguelgrinberg/Flask-Migrate | flask | 188 | Should be easier to run upgrades with one transaction per version | When starting from a blank database, some data migration versions fail because they rely on tables created by previous versions. This seems to be because all version upgrades are done in a single transaction.
Example:
- Version 1 initialises the database by creating a "contact" table for the `Contact` class.
- Version 2 creates a bunch of pre-defined `Contact` objects in the database.
When starting from a blank (PostgreSQL) database, version 2 fails because the relation "contact" does not exist yet. If you run the two versions as separate upgrades then it works fine.
It's possible to get Alembic to run migrations with one transaction per version. This makes a lot of sense because each version should stand alone, so if one fails you want to roll-back that one but not all the previous versions that succeeded. However, it's difficult to work out how to configure flask-migrate to do a transaction per version.
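For reference, Alembic exposes this as the `transaction_per_migration` flag on `EnvironmentContext.configure()`. With Flask-Migrate, the natural place to set it is the generated `migrations/env.py`; a sketch, assuming the stock template:

```python
# migrations/env.py, inside run_migrations_online() (sketch):
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    transaction_per_migration=True,  # one transaction per revision
)
```

Flask-Migrate also appears to forward extra keyword arguments from `Migrate(...)` into `context.configure()`, so `Migrate(app, db, transaction_per_migration=True)` may work without touching env.py, though I haven't verified that path.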
IMHO one transaction per version should be the default, but I'd be happy just with an easy way to enable it. | closed | 2018-02-21T21:36:37Z | 2022-09-04T19:27:56Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/188 | [
"question",
"auto-closed"
] | quantoid | 12 |
aminalaee/sqladmin | asyncio | 737 | SqlAlchemy dataclass support missing | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
If a model is mapped as a dataclass (`MappedAsDataclass`), creating a new record raises an error at `sqladmin._queries` (lines 194/206)

### Steps to reproduce the bug
_No response_
### Expected behavior
_No response_
### Actual behavior
_No response_
### Debugging material
_No response_
### Environment
- all
### Additional context
_No response_ | open | 2024-04-02T19:54:53Z | 2024-05-07T07:39:54Z | https://github.com/aminalaee/sqladmin/issues/737 | [] | Goradii | 6 |
voxel51/fiftyone | data-science | 5,633 | [FR] able to change class name of a bounding box and edit a bounding box | ### Proposal Summary
Currently, in order to change the class name of a bounding box, the user needs to launch an annotation tool like CVAT. This is cumbersome.
Besides this, it would be great if we could edit a bounding box directly in FiftyOne.
### Motivation
- What is the use case for this feature? easier change of annotation
- Why is this use case valuable to support for FiftyOne users in general? ease of use for bounding box annotation
- Why is this use case valuable to support for your project(s) or organization? bounding box annotation
- Why is it currently difficult to achieve this use case? need to always load to CVAT and it's cumbersome
### What areas of FiftyOne does this feature affect?
- [x] App: FiftyOne application
- [ ] Core: Core `fiftyone` Python library
- [ ] Server: FiftyOne server
### Details
P1: able to change the class name of a bounding box in FiftyOne
P2: able to edit bounding box coordinates if the bounding box is inaccurate
P3: able to create / delete a bbox
### Willingness to contribute
The FiftyOne Community welcomes contributions! Would you or another member of your organization be willing to contribute an implementation of this feature?
- [ ] Yes. I can contribute this feature independently
- [ ] Yes. I would be willing to contribute this feature with guidance from the FiftyOne community
- [x] No. I cannot contribute this feature at this time
| open | 2025-03-24T02:23:32Z | 2025-03-24T18:17:39Z | https://github.com/voxel51/fiftyone/issues/5633 | [
"feature"
] | 201power | 1 |
chaoss/augur | data-visualization | 3,051 | AttributeError: module 'configparser' has no attribute 'SafeConfigParser' | **Description:**
While following the [Augur installation guide](https://oss-augur.readthedocs.io/en/dev/getting-started/installation.html#general-augur-installation-steps-irrespective-of-operating-system), I'm getting an error after running `make install`. I suspect this error is because of my Python version:
```
$ python3 --version
Python 3.12.2
```
I made some changes while creating the virtual environment, and the installation command then worked with no errors.
Here are the commands that I ran:
```
python3.10 -m venv $HOME/.virtualenvs/augur_env
source $HOME/.virtualenvs/augur_env/bin/activate
pip install --upgrade setuptools wheel
make install
```
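For context, the traceback almost certainly comes from `configparser.SafeConfigParser`, which was deprecated since Python 3.2 and finally removed in Python 3.12, which is why the build succeeds under 3.10. The drop-in replacement is `ConfigParser`; a minimal illustration:

```python
import configparser

# SafeConfigParser was removed in Python 3.12; ConfigParser has been
# the drop-in replacement since Python 3.2.
parser = configparser.ConfigParser()
parser.read_string("[section]\nkey = value\n")
print(parser.get("section", "key"))  # -> value
```

So the proper fix is for the dependency that still references `SafeConfigParser` to switch to `ConfigParser`; pinning to Python 3.10 is just a workaround.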
**How to reproduce:**
1. Use python3.12.2 while creating virtual environment
2. run `make install` after following the installation guide of augur
**Expected behavior:**
`make install` should work with no errors.
**Log files**
https://pastebin.com/6z58zKJU
**Software versions:**
- Augur: N/A
- OS: Linux Mint 21.2 x86_64
- Browser: (if applicable) | open | 2025-03-12T17:19:01Z | 2025-03-12T18:48:46Z | https://github.com/chaoss/augur/issues/3051 | [] | officialasishkumar | 2 |
ray-project/ray | tensorflow | 51,628 | [Core] Unable to build Ray wheel on Windows using Docker due to private image access issues | ### What happened + What you expected to happen
When trying to build a Ray wheel locally on Windows using Docker with the command python -m ci.ray_ci.build_in_docker_windows wheel --python-version=3.12 --operating-system=windows, I encounter an authorization error. The build process attempts to pull a Docker image from a private ECR repository that I don't have access to.
Error message:
Expected behavior:
I expected to be able to build Ray wheels locally using Docker without requiring access to private ECR repositories. The build process should either use publicly available Docker images or provide clear documentation on how to set up a local build environment without these private images.
### Versions / Dependencies
Ray: Latest from repository
Python: 3.12
OS: Windows 11
Docker: 28.0.1
### Reproduction script
```python
# Simply run the following command from the Ray repository root
import subprocess
import sys


def main():
    cmd = [
        sys.executable,
        "-m",
        "ci.ray_ci.build_in_docker_windows",
        "wheel",
        "--python-version=3.12",
        "--operating-system=windows",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)
    print(result.stderr)
    if result.returncode != 0:
        print(f"Command failed with exit code {result.returncode}")


if __name__ == "__main__":
    main()
```
### Issue Severity
Medium: It is a significant difficulty but I can work around it. | open | 2025-03-23T02:07:05Z | 2025-03-23T02:07:05Z | https://github.com/ray-project/ray/issues/51628 | [
"bug",
"triage"
] | KimChow | 0 |
xmu-xiaoma666/External-Attention-pytorch | pytorch | 82 | Could you add the core CCNN code? | Repository: https://github.com/david-knigge/ccnn | open | 2022-09-08T13:01:03Z | 2022-09-08T13:01:03Z | https://github.com/xmu-xiaoma666/External-Attention-pytorch/issues/82 | [] | luyifanlu | 0 |
LibrePhotos/librephotos | django | 1,117 | Ability to select individual photo for face scan | Feature request:
Some photos, for some reason, are not face-scanned even though the face is obvious.
Maybe have a feature to allow us to select that particular photo for a rescan of the face. | open | 2023-10-04T07:29:06Z | 2024-01-06T14:05:42Z | https://github.com/LibrePhotos/librephotos/issues/1117 | [] | D1n0Bot | 2 |
openapi-generators/openapi-python-client | fastapi | 117 | Header parameters not supported? | **Describe the bug**
It seems header parameters are not supported? Marking this as a bug because it generates a traceback.
```
Traceback (most recent call last):
File "/home/user/.local/bin/openapi-python-client", line 8, in <module>
sys.exit(app())
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.7/site-packages/typer/main.py", line 213, in __call__
return get_command(self)()
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.7/site-packages/typer/main.py", line 496, in wrapper
return callback(**use_params) # type: ignore
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.7/site-packages/openapi_python_client/cli.py", line 91, in generate
create_new_client(url=url, path=path)
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.7/site-packages/openapi_python_client/__init__.py", line 48, in create_new_client
project = _get_project_for_url_or_path(url=url, path=path)
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.7/site-packages/openapi_python_client/__init__.py", line 31, in _get_project_for_url_or_path
openapi = OpenAPI.from_dict(data_dict)
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.7/site-packages/openapi_python_client/openapi_parser/openapi.py", line 227, in from_dict
endpoint_collections_by_tag = EndpointCollection.from_dict(d["paths"])
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.7/site-packages/openapi_python_client/openapi_parser/openapi.py", line 44, in from_dict
endpoint = Endpoint.from_data(data=method_data, path=path, method=method, tag=tag)
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.7/site-packages/openapi_python_client/openapi_parser/openapi.py", line 150, in from_data
endpoint._add_parameters(data)
File "/home/user/.local/pipx/venvs/openapi-python-client/lib/python3.7/site-packages/openapi_python_client/openapi_parser/openapi.py", line 136, in _add_parameters
raise ValueError(f"Don't know where to put this parameter: {param_dict}")
ValueError: Don't know where to put this parameter: {'name': 'Content-Disposition', 'in': 'header', 'required': True, 'description': 'Name of the file to save', 'schema': {'type': 'string'}}
```
**To Reproduce**
Add a header parameter in your spec, try to generate a client from it.
**Expected behavior**
Header parameters would be supported. How? I don't know :grin:
**OpenAPI Spec File**
I'm not sure I can share it. I'll try to give a minimal example later.
**Desktop (please complete the following information):**
- OS: RedHat 7
- Python Version: 3.7.8
- openapi-python-client version 0.4.2
| closed | 2020-08-03T15:53:47Z | 2020-08-11T13:23:29Z | https://github.com/openapi-generators/openapi-python-client/issues/117 | [
"✨ enhancement"
] | pawamoy | 3 |
zihangdai/xlnet | nlp | 233 | What is the best CPU inference acceleration solution for BERT now? | Thank you very much.
| open | 2019-09-20T02:52:20Z | 2019-09-20T02:52:20Z | https://github.com/zihangdai/xlnet/issues/233 | [] | guotong1988 | 0 |
3b1b/manim | python | 1,273 | Example Scene OpeningManimExample throws an out of range error since submobjects is empty. | I have just downloaded and installed Manim exactly as described in the instructions. I have MikTex and Ffmpeg etc installed too. I can run the SquaresToCircles example but this example fails.
```
File "D:\Development\Manim\manimlib\extract_scene.py", line 155, in main
scene = SceneClass(**scene_kwargs)
File "D:\Development\Manim\manimlib\scene\scene.py", line 75, in __init__
self.construct()
File "example_scenes.py", line 20, in construct
title = TextMobject("This is some \\LaTeX")
File "D:\Development\Manim\manimlib\mobject\svg\tex_mobject.py", line 150, in __init__
self.break_up_by_substrings()
File "D:\Development\Manim\manimlib\mobject\svg\tex_mobject.py", line 190, in break_up_by_substrings
sub_tex_mob.move_to(self.submobjects[last_submob_index], RIGHT)
IndexError: list index out of range
``` | open | 2020-11-16T18:23:38Z | 2021-06-09T09:17:05Z | https://github.com/3b1b/manim/issues/1273 | [] | JamieMair | 4 |
pyg-team/pytorch_geometric | pytorch | 9,312 | HeteroConv | ### 🐛 Describe the bug
```python
tempHeteroDict = {}
for key in sageLayerNameList:
    tempHeteroDict[key] = GCNConv(sage_dim_in, sage_dim_in)
self.hetero_conv = HeteroConv(tempHeteroDict, aggr='lstm')
```
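For what it's worth, the failure seems to come from how `HeteroConv`'s `group()` dispatches the aggregation: the name is resolved with `getattr(torch, aggr)` and called as a plain reduction, so `'lstm'` resolves to the low-level `torch.lstm` RNN kernel, which has a completely different signature. A torch-free sketch of that dispatch (the class and function bodies here are simplified stand-ins, not the real implementation):

```python
class FakeTorch:
    """Stand-in for the torch module, just for illustration."""

    @staticmethod
    def sum(xs, dim=0):
        # Plain reduction with a (tensor-like, dim=...) signature.
        return [sum(col) for col in zip(*xs)]

    @staticmethod
    def lstm(data, batch_sizes, hx, params, has_biases,
             num_layers, dropout, train, bidirectional):
        raise TypeError("lstm() expects RNN arguments")


def group(xs, aggr, torch=FakeTorch):
    # Same dispatch idea as torch_geometric.nn.conv.hetero_conv.group().
    return getattr(torch, aggr)(xs, dim=0)


print(group([[1, 2], [3, 4]], 'sum'))  # -> [4, 6]
# group([[1, 2], [3, 4]], 'lstm') raises TypeError, matching the report.
```

Until inter-type LSTM aggregation is supported there, reductions such as `'sum'`, `'mean'`, `'max'`, or `'min'` should dispatch cleanly.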
### Versions
Hi,
Thank you for your amazing work.
I encountered a bug while using HeteroConv with the aggregation scheme being set as "lstm". The error detail can be found at the bottom of this message. The other aggregation schemes are working fine. The output of "collect_env.py" can be found below. It would be great if you have any suggestions.
Thank you.
Collecting environment information...
PyTorch version: 2.1.2
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.4.1 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.1.0.2.5)
CMake version: Could not collect
Libc version: N/A
Python version: 3.11.5 | packaged by conda-forge | (main, Aug 27 2023, 03:33:12) [Clang 15.0.7 ] (64-bit runtime)
Python platform: macOS-14.4.1-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M3 Pro
Versions of relevant libraries:
[pip3] flake8==6.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.3
[pip3] numpydoc==1.5.0
[pip3] torch==2.1.2
[pip3] torch-cluster==1.6.3
[pip3] torch-geometric==2.6.0
[pip3] torch-scatter==2.1.2
[pip3] torch-sparse==0.6.18
[pip3] torch-spline-conv==1.2.2
[pip3] torchdata==0.7.1
[conda] numpy 1.24.3 py311hb57d4eb_0
[conda] numpy-base 1.24.3 py311h1d85a46_0
[conda] numpydoc 1.5.0 py311hca03da5_0
[conda] torch 2.1.2 pypi_0 pypi
[conda] torch-cluster 1.6.3 pypi_0 pypi
[conda] torch-geometric 2.6.0 pypi_0 pypi
[conda] torch-scatter 2.1.2 pypi_0 pypi
[conda] torch-sparse 0.6.18 pypi_0 pypi
[conda] torch-spline-conv 1.2.2 pypi_0 pypi
[conda] torchdata 0.7.1 pypi_0 pypi
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File <timed exec>:32
Cell In[8], line 82, in CustomGNN.fit(self, graph_data_x, x_dict, edge_index_dict, homo_graph_data, epochs)
77 for epoch in range(epochs+1):
78
79 # Train
80 optimizer.zero_grad()
---> 82 _, out = self(x_dict, edge_index_dict, homo_graph_data);
84 loss = criterion(out[graph_data_x.train_mask], graph_data_x.y[graph_data_x.train_mask])
85 acc = accuracy(out[graph_data_x.train_mask].argmax(dim=1),graph_data_x.y[graph_data_x.train_mask])
File ~/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File ~/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
Cell In[8], line 56, in CustomGNN.forward(self, x_dict, edge_index_dict, homo_graph_data)
49 """
50 Sage's foward function parameters
51 The first parameter is the data matrix, with rows being nodes and colomns being features
52 The Second parameter is edge index with a dimension of 2xNumberOfEdges, each row is an edge
53 """
54 #======================================
---> 56 out_dict = self.hetero_conv(x_dict, edge_index_dict);
57 hSum = out_dict[list(out_dict.keys())[0]];
58 #======================================
File ~/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
1516 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1517 else:
-> 1518 return self._call_impl(*args, **kwargs)
File ~/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
1522 # If we don't have any hooks, we want to skip the rest of the logic in
1523 # this function, and just call forward.
1524 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1525 or _global_backward_pre_hooks or _global_backward_hooks
1526 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1527 return forward_call(*args, **kwargs)
1529 try:
1530 result = None
File ~/anaconda3/lib/python3.11/site-packages/torch_geometric/nn/conv/hetero_conv.py:166, in HeteroConv.forward(self, *args_dict, **kwargs_dict)
163 out_dict[dst].append(out)
165 for key, value in out_dict.items():
--> 166 out_dict[key] = group(value, self.aggr)
168 return out_dict
File ~/anaconda3/lib/python3.11/site-packages/torch_geometric/nn/conv/hetero_conv.py:24, in group(xs, aggr)
22 else:
23 out = torch.stack(xs, dim=0)
---> 24 out = getattr(torch, aggr)(out, dim=0)
25 out = out[0] if isinstance(out, tuple) else out
26 return out
TypeError: lstm() received an invalid combination of arguments - got (Tensor, dim=int), but expected one of:
* (Tensor data, Tensor batch_sizes, tuple of Tensors hx, tuple of Tensors params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional)
* (Tensor input, tuple of Tensors hx, tuple of Tensors params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) | closed | 2024-05-12T04:25:47Z | 2024-07-16T20:09:42Z | https://github.com/pyg-team/pytorch_geometric/issues/9312 | [
"bug"
] | ZhenjiangFan | 2 |
man-group/arctic | pandas | 164 | ChunkStore - update without a multiindex results in cartesian product and out of memory errors | I ran into a problem with ChunkStore where I was just using a date index, rather than a date/security index, on my DataFrames. ChunkStore merges existing chunks with updated chunks using DataFrame.combine_first. Since both of the chunks had just the date index, and the date was identical, there was no good way for combine_first to do this. It ended up with an exponential explosion of records (the Cartesian product). After a few iterations, this gets too big to calculate.
Need to look into alternatives to combine_first when a multiindex is not used.
| closed | 2016-07-12T18:01:23Z | 2016-08-02T15:56:02Z | https://github.com/man-group/arctic/issues/164 | [] | bmoscon | 1 |
sqlalchemy/alembic | sqlalchemy | 465 | Fails to rename a boolean column with constraint | **Migrated issue, originally created by Rick Salevsky**
SQLAlchemy creates a constraint for boolean columns if the database doesn't support native boolean values. In my case I use MariaDB 10.2.10 with the pymysql driver, which doesn't support native booleans, so SQLAlchemy creates a CHECK constraint.
The table looks like this:
```
CREATE TABLE `firewall_groups_v2` (
`id` varchar(36) NOT NULL,
`name` varchar(255) DEFAULT NULL,
`description` varchar(1024) DEFAULT NULL,
`project_id` varchar(255) DEFAULT NULL,
`status` varchar(16) DEFAULT NULL,
`admin_state_up` tinyint(1) DEFAULT NULL,
`public` tinyint(1) DEFAULT NULL,
`egress_firewall_policy_id` varchar(36) DEFAULT NULL,
`ingress_firewall_policy_id` varchar(36) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `egress_firewall_policy_id` (`egress_firewall_policy_id`),
KEY `ingress_firewall_policy_id` (`ingress_firewall_policy_id`),
KEY `ix_firewall_groups_v2_project_id` (`project_id`),
CONSTRAINT `firewall_groups_v2_ibfk_1` FOREIGN KEY (`egress_firewall_policy_id`) REFERENCES `firewall_policies_v2` (`id`),
CONSTRAINT `firewall_groups_v2_ibfk_2` FOREIGN KEY (`ingress_firewall_policy_id`) REFERENCES `firewall_policies_v2` (`id`),
CONSTRAINT `CONSTRAINT_1` CHECK (`admin_state_up` in (0,1)),
CONSTRAINT `CONSTRAINT_2` CHECK (`public` in (0,1))
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
```
So before Alembic can rename the column, it has to remove the constraint and add it back after the rename is completed.
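A sketch of the manual workaround such a migration currently needs (the constraint and table names are taken from the DDL above; the new column name is hypothetical, and this is hand-written, not something Alembic autogenerates):

```python
from alembic import op
import sqlalchemy as sa

def upgrade():
    # MariaDB named the generated CHECK constraint CONSTRAINT_1
    # (see the SHOW CREATE TABLE output above); drop it first.
    op.drop_constraint("CONSTRAINT_1", "firewall_groups_v2", type_="check")
    op.alter_column(
        "firewall_groups_v2",
        "admin_state_up",
        new_column_name="admin_state_up_v2",  # hypothetical new name
        existing_type=sa.Boolean(),
    )
    # Re-create the constraint against the renamed column.
    op.create_check_constraint(
        "CONSTRAINT_1",
        "firewall_groups_v2",
        sa.column("admin_state_up_v2").in_([0, 1]),
    )
```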
| closed | 2017-11-17T18:25:54Z | 2017-11-27T22:37:14Z | https://github.com/sqlalchemy/alembic/issues/465 | [
"bug"
] | sqlalchemy-bot | 4 |
sqlalchemy/alembic | sqlalchemy | 683 | Documentation Notes On "Building an Up to Date Database from Scratch" | My goal was simple enough:
- Extract ~40+ table MySQL schema via SQLAlchemy.inspect (done, but the script's as rough as you'd expect) to a vanilla models.py
- Write unit tests against SQLite and Alembic
- Deploy to a production MySQL with another migration to handle things like FULLTEXT indices.
Challenges encountered:
- Using the stamp() operation with the models.py and "head" gave me a headache (heh).
- I really needed to prepare a 0th revision and then use that revision number in stamp().
- Then I could prepare another 1st revision and use bulk_insert().
- But to get bulk_insert() to work, I had to say, in my revision script. . .
`from models import SomeTable`
`. . .`
`op.bulk_insert(SomeTable.__table__,[{row_dict0},{row_dict1}...])`
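A lightweight alternative that avoids importing models.py from the revision script at all is to declare a table stub inline (a sketch; the revision ids, table, and column names are made up):

```python
from alembic import op
import sqlalchemy as sa

revision = "0001"       # hypothetical revision ids
down_revision = "0000"

def upgrade():
    some_table = sa.table(
        "some_table",
        sa.column("id", sa.Integer),
        sa.column("name", sa.String),
    )
    op.bulk_insert(some_table, [
        {"id": 1, "name": "first"},
        {"id": 2, "name": "second"},
    ])
```

This keeps the migration self-contained, so it won't break later when models.py changes.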
**Summary:** I am very grateful for this tool and hope someday to contribute documentation of better quality than this ransom note.
Cheers! | open | 2020-04-09T23:12:34Z | 2020-04-09T23:16:39Z | https://github.com/sqlalchemy/alembic/issues/683 | [
"documentation"
] | smitty1eGH | 0 |
tox-dev/tox | automation | 2,819 | No tty / color support on Windows | ## Issue
Color output in Windows from tools does not work as expected.
Expected output:
<img width="843" alt="image" src="https://user-images.githubusercontent.com/238652/210615058-c9f2403d-c789-4598-9bdb-71e3e8c8be2c.png">
Actual output within tox:
<img width="940" alt="image" src="https://user-images.githubusercontent.com/238652/210615178-2e927658-b7c8-4ece-ad75-24c3a6dabd6f.png">
This should have been fixed by @masenf in https://github.com/tox-dev/tox/pull/2711 So maybe there was a regression since then.
## Environment
Provide at least:
- OS: Windows
- `pip list` of the host Python where `tox` is installed:
```console
Package Version
------------- -------
cachetools 5.2.0
chardet 5.1.0
colorama 0.4.6
distlib 0.3.6
filelock 3.9.0
packaging 22.0
pip 22.3.1
platformdirs 2.6.2
pluggy 1.0.0
pyproject_api 1.4.0
setuptools 65.6.3
tomli 2.0.1
tox 4.2.1
virtualenv 20.17.1
wheel 0.38.4
```
## Minimal example
If possible, provide a minimal reproducer for the issue:
```ini
[testenv]
deps = pytest
commands =
    python -c 'import sys; print("sys.stdout.isatty(): %s" % sys.stdout.isatty()); print("to stdout, should be colorless\n"); print("sys.stderr.isatty(): %s" % sys.stderr.isatty()); print("to stderr, should be red?\n", file=sys.stderr)'
    pytest {posargs}
```
test_test.py
```python
def test():
    assert False
``` | closed | 2023-01-04T17:36:40Z | 2023-01-08T15:31:21Z | https://github.com/tox-dev/tox/issues/2819 | [
"area:documentation",
"help:wanted"
] | schlamar | 5 |
ansible/awx | django | 15,343 | Enhance Audit Capabilities for Node Management in AWX | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Feature type
New Feature
### Feature Summary
Currently, in AAP, when a node is disabled, there is no logging or audit trail that captures who performed this action or why it was performed. This lack of transparency can lead to security and operational challenges, especially in environments where multiple teams or automated services interact with the platform.
*Proposal:*
I propose that AWX includes enhanced audit logging features that:
- Log the identity of the user or service account that performs the action of enabling or disabling a node in the activity streams.
- Provide an option to include a mandatory comment field when disabling a node, where the user can specify the reason for the action.
- Make these logs easily accessible within the AWX UI and via API endpoints for integration with external monitoring and audit systems.
### Select the relevant components
- [X] UI
- [X] API
- [X] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Steps to reproduce
Disable and enable a node in AAP, nothing can be seen in the activity stream. There is no way to tell who in the team, or what serviceaccount disabled the node or what happened.
### Current results
Disable and enable a node in AAP, nothing can be seen in the activity stream. There is no way to tell who in the team, or what serviceaccount disabled the node or what happened.
### Sugested feature result
For me it would make sense to at least have this action in the activity stream, so you can see the time and the service account/admin member who disabled or enabled the node. Additionally, it would be great to be able to track the reason why something was disabled or enabled.
### Additional information
If this makes sense I am willing to fork and work on it myself, once we agree on what we want to implement. | open | 2024-07-09T13:03:13Z | 2024-07-10T17:16:52Z | https://github.com/ansible/awx/issues/15343 | [
"type:enhancement",
"component:api",
"component:ui",
"component:docs",
"community"
] | jangel97 | 0 |
AutoGPTQ/AutoGPTQ | nlp | 661 | [BUG]safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer | **Describe the bug**
root@ac6edc15b00f:/workspace/quantization# python test_gptq.py
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 53%|██████████████████████████████████████████████▉ | 16/30 [09:03<07:55, 34.00s/it]
Traceback (most recent call last):
File "/workspace//quantization/test_gptq.py", line 27, in <module>
model = AutoGPTQForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
File "/opt/conda/lib/python3.10/site-packages/auto_gptq/modeling/auto.py", line 76, in from_pretrained
return GPTQ_CAUSAL_LM_MODEL_MAP[model_type].from_pretrained(
File "/opt/conda/lib/python3.10/site-packages/auto_gptq/modeling/_base.py", line 787, in from_pretrained
model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path, **merged_kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 563, in from_pretrained
return model_class.from_pretrained(
File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3677, in from_pretrained
) = cls._load_pretrained_model(
File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4084, in _load_pretrained_model
state_dict = load_state_dict(shard_file, is_quantized=is_quantized)
File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 507, in load_state_dict
with safe_open(checkpoint_file, framework="pt") as f:
**safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer**
**Hardware details**
A800
root@ac6edc15b00f:/workspace/code/qwen/quantization2# nvidia-smi
Tue Apr 30 06:00:36 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.54.03 Driver Version: 535.54.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA A800 80GB PCIe Off | 00000000:17:00.0 Off | 0 |
| N/A 38C P0 65W / 300W | 64181MiB / 81920MiB | 0% Default |
| | | Disabled |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA A800 80GB PCIe Off | 00000000:31:00.0 Off | 0 |
| N/A 36C P0 65W / 300W | 62285MiB / 81920MiB | 0% Default |
| | | Disabled |
**Software version**
Linux ac6edc15b00f 5.4.0-177-generic #197-Ubuntu SMP Thu Mar 28 22:45:47 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Python 3.10.13
root@ac6edc15b00f:/workspace/quantization2# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Mon_Apr__3_17:16:06_PDT_2023
Cuda compilation tools, release 12.1, V12.1.105
Build cuda_12.1.r12.1/compiler.32688072_0
torch Version: 2.2.1
accelerate Version: 0.29.3
transformers Version: 4.40.1
**To Reproduce**
**Expected behavior**
**Screenshots**
**Additional context**
**The llama3-instruct model was already downloaded; this error happens when trying to quantize it. Is it caused by insufficient GPU memory?**
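For what it's worth, `MetadataIncompleteBuffer` from safetensors typically points at a truncated or corrupt shard file on disk rather than running out of VRAM, so re-downloading the affected `model-*.safetensors` shard is usually the fix. A quick local sanity check could look like this (a sketch based on the documented safetensors layout of an 8-byte little-endian header length followed by a JSON header; the function name is mine):

```python
import json
import struct

def check_safetensors_header(path):
    """Illustrative integrity check for a safetensors shard: read the
    8-byte little-endian header length, then verify the JSON header is
    fully present and parseable. A failure here usually means the file
    was truncated during download."""
    with open(path, "rb") as f:
        prefix = f.read(8)
        if len(prefix) < 8:
            return False, "file shorter than the 8-byte length prefix"
        (header_len,) = struct.unpack("<Q", prefix)
        header = f.read(header_len)
    if len(header) < header_len:
        return False, "file truncated before the end of the JSON header"
    json.loads(header)  # raises if the header JSON is corrupt
    return True, "header parses"
```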
| closed | 2024-04-30T06:28:35Z | 2024-06-27T21:55:30Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/661 | [
"bug"
] | chuangzhidan | 1 |
aleju/imgaug | deep-learning | 231 | seq.to_deterministic() augments images and bounding boxes differently if an image has no corresponding bounding box | For example, if i augment a list of numpy images and a corresponding list of ia.BoundingBoxesOnImage:
```python
import imgaug as ia
import imgaug.augmenters as iaa

seq = iaa.Sequential(
    [iaa.Fliplr(0.5),
     iaa.Affine(
         rotate=(-30, 30),
         mode='edge')
     ],
    random_order=True
)
seq_det = seq.to_deterministic()
image_aug = seq_det.augment_images(np_img_array)
bbs_aug = seq_det.augment_bounding_boxes(bbs_on_img_array)
```
If one of the images has no bounding box, `bbs_on_img_array` may look something like this after the annotation XMLs have been parsed:
```
[BoundingBoxesOnImage([BoundingBox(x1=266.0000, y1=633.0000, x2=299.0000, y2=692.0000, label=person)], shape=(720, 1280)),
BoundingBoxesOnImage([BoundingBox(x1=283.0000, y1=637.0000, x2=312.0000, y2=701.0000, label=person)], shape=(720, 1280)),
BoundingBoxesOnImage([BoundingBox(x1=279.0000, y1=656.0000, x2=308.0000, y2=720.0000, label=person)], shape=(720, 1280)),
BoundingBoxesOnImage([], shape=(720, 1280)),
BoundingBoxesOnImage([BoundingBox(x1=272.0000, y1=632.0000, x2=301.0000, y2=696.0000, label=person)], shape=(720, 1280)),
BoundingBoxesOnImage([BoundingBox(x1=266.0000, y1=634.0000, x2=305.0000, y2=692.0000, label=person)], shape=(720, 1280)),
BoundingBoxesOnImage([BoundingBox(x1=261.0000, y1=637.0000, x2=321.0000, y2=689.0000, label=person)], shape=(720, 1280))]
```
And this will cause the augmentations after the 4th bounding box to be jumbled up. Currently, I have added the images without bounding boxes to a separate queue to be augmented. But I believe this shouldn't be the intended behaviour, as it can be quite difficult to figure out the underlying cause of the bounding boxes and images not being augmented correctly when the dataset contains images with no bounding boxes.
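For reference, the "separate queue" workaround mentioned above can be sketched in plain Python (an illustrative helper, not imgaug code) to keep the image and bounding-box lists aligned:

```python
def split_by_has_boxes(images, boxes_per_image):
    """Partition (image, boxes) pairs into two aligned batches: one where
    every image has at least one box, one with box-less images that can
    be augmented on their own without desynchronizing the random state."""
    paired, boxless = [], []
    for image, boxes in zip(images, boxes_per_image):
        (paired if boxes else boxless).append((image, boxes))
    return paired, boxless
```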
| open | 2019-01-15T06:41:11Z | 2019-02-05T20:05:50Z | https://github.com/aleju/imgaug/issues/231 | [] | sun-yitao | 2 |
tableau/server-client-python | rest-api | 1,536 | `populate_connections` doesn't fetch `datasource_id` | **Describe the bug**
Hi,
I'm encountering an issue with the Tableau Python client. Specifically, the `datasource_id` is missing from the `ConnectionItem` object.
Here’s the scenario: after retrieving a datasource by id, I populate its connections, but the `ConnectionItem` objects lack the datasource information.
From reviewing the source code:
https://github.com/tableau/server-client-python/blob/1d98fdad189ebed130fb904e8fa5dca2207f9011/tableauserverclient/server/endpoint/datasources_endpoint.py#L106
it seems the REST method being used by the module is: `GET /api/api-version/sites/site-id/datasources/datasource-id/connections`.
According to the <a href="https://help.tableau.com/current/api/rest_api/en-us/REST/rest_api_ref_data_sources.htm#query_data_sources" target="_blank">official documentation</a>, this method does not appear to return datasource information.
**Versions**
- Tableau Cloud 2024.3
- Python version: 3.11.0
- TSC library version: 0.34
**To Reproduce**
```python
import os
import tableauserverclient as TSC
token_name = 'my_token'
token_value = os.environ['TABLEAU_TOKEN']
tableau_site = 'my-site-it'
tableau_server = 'https://eu-west-1a.online.tableau.com'
datasource_id = 'REPLACE_ME'
def main():
tableau_auth = TSC.PersonalAccessTokenAuth(token_name, token_value, site_id=tableau_site)
server = TSC.Server(tableau_server, use_server_version=True)
with server.auth.sign_in(tableau_auth):
ds = server.datasources.get_by_id(datasource_id)
server.datasources.populate_connections(ds)
# This will fail
assert ds.connections[0].datasource_id
if __name__ == '__main__':
main()
```
**Results**
`connections[0].datasource_id` is `None`, data is not fetched from the REST API.
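Until the REST payload includes the parent id, one client-side workaround sketch is to stamp it on ourselves after `populate_connections` (hypothetical helper; `FakeConnection` stands in for `TSC.ConnectionItem`, and this assumes the attribute is settable in your TSC version):

```python
class FakeConnection:
    """Stand-in for TSC.ConnectionItem (illustration only)."""
    def __init__(self):
        self.datasource_id = None

def attach_parent_id(datasource_id, connections):
    """Since the /connections payload omits datasource info, attach the
    parent datasource id to each connection ourselves."""
    for connection in connections:
        connection.datasource_id = datasource_id
    return connections
```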
Thanks :) | closed | 2024-11-21T16:18:36Z | 2025-01-04T03:22:58Z | https://github.com/tableau/server-client-python/issues/1536 | [
"bug",
"0.35"
] | LuigiCerone | 7 |
ivy-llc/ivy | tensorflow | 28,705 | Fix Frontend Failing Test: paddle - creation.jax.numpy.triu | To-do List: https://github.com/unifyai/ivy/issues/27500 | closed | 2024-03-29T12:18:54Z | 2024-03-29T16:45:11Z | https://github.com/ivy-llc/ivy/issues/28705 | [
"Sub Task"
] | ZJay07 | 0 |
graphdeco-inria/gaussian-splatting | computer-vision | 1,023 | Guide for who have error about finding 'cub' libraries when building simple-knn | Hi, I struggled to build simple-knn submodule with **Pytorch 2.1.0** + **CUDA 12.1**, which show errors about finding 'cub' libraries.
And I finally find out venv setting to make build process success.
1. Find submodule from the author's gitlab.
`cd <somewhere> && git submodule add https://gitlab.inria.fr/bkerbl/simple-knn`
2. Resolve dependency by installing cccl.
Key is to install `cccl` which resolves finding 'cub'.
For conda, type `conda install cccl`
3. Build simple-knn
`python -m pip install ./simple-knn`
In the end, I found the error comes from the fact that the 'cub' headers typically live in `/usr/local/cuda/include`, so if `/usr/local/bin/nvcc` is the compiler being called, the build goes smoothly.
But in my case, I installed the `pytorch`, `pytorch-cuda`, `cuda`, and `cuda-toolkit` packages to make my venv completely independent.
In this situation, 'cub' is not installed in `<venv path>/include`, so `<venv path>/bin/nvcc` is called and the build fails. The solution is to install the cccl package, which provides cub and the other CUDA core compute libraries. | open | 2024-10-24T20:16:18Z | 2024-10-24T20:21:42Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/1023 | [] | intMinsu | 1 |
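The include-path reasoning above can be sketched as a quick check (the directory layout is illustrative; conda env layouts can differ):

```python
import os
import shutil

def cub_visible_to_nvcc(nvcc_path=None):
    """Find the nvcc actually on PATH and check whether its sibling
    include/ directory carries the 'cub' headers. Illustrative only."""
    nvcc = nvcc_path or shutil.which("nvcc") or "/usr/local/cuda/bin/nvcc"
    include_dir = os.path.join(os.path.dirname(os.path.dirname(nvcc)), "include")
    has_cub = os.path.isdir(os.path.join(include_dir, "cub"))
    return include_dir, has_cub
```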
pallets/flask | python | 5,062 | CONTRIBUTING: Change from venv name "env" to ".venv" for Windows as well? | https://github.com/pallets/flask/blob/182ce3dd15dfa3537391c3efaf9c3ff407d134d4/CONTRIBUTING.rst?plain=1#L123-L124
I think in line 124 `env` should also be renamed to `.venv` | closed | 2023-04-14T10:31:46Z | 2023-04-30T00:05:48Z | https://github.com/pallets/flask/issues/5062 | [
"docs"
] | Zocker1999NET | 1 |
scikit-image/scikit-image | computer-vision | 7,472 | skimage.draw.ellipsoid_stats doesn't give the result for a sphere if a=b=c | ### Description:
Calculation of surface area and volume using skimage.draw.ellipsoid_stats doesn't converge to the properties of a sphere.
## Expected
We should get the surface area of a sphere when the three major axis dimensions (a, b, c) are same
## Result
Get a divide by zero error
### Way to reproduce:
```python
import skimage as ski
a = b = c = 10
ellipsoid = ski.draw.ellipsoid(a, b, c)
vol, surf = ski.draw.ellipsoid_stats(a, b, c)
print(f'Theoretical volume = {vol}\nTheoretical Surface Area = {surf}')
```

## Error Message
Traceback (most recent call last):
File "c:\Users\...\test.py", line 5, in <module>
vol, surf = ski.draw.ellipsoid_stats(a, b, c)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\...\Lib\site-packages\skimage\draw\draw3d.py", line 105, in ellipsoid_stats
m = (a ** 2 * (b ** 2 - c ** 2) /
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ZeroDivisionError: float division by zero
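As a stop-gap, one could special-case the degenerate sphere with the closed-form formulas (an illustrative workaround, not scikit-image code):

```python
import math

def ellipsoid_stats_sphere_safe(a, b, c, eps=1e-12):
    """Fall back to the exact sphere formulas when the three semi-axes
    coincide, sidestepping the division by zero in the ellipsoid
    formula; defer to scikit-image otherwise."""
    if abs(a - b) < eps and abs(b - c) < eps:
        r = float(a)
        vol = 4.0 / 3.0 * math.pi * r ** 3
        surf = 4.0 * math.pi * r ** 2
        return vol, surf
    import skimage as ski  # only needed for the genuine ellipsoid case
    return ski.draw.ellipsoid_stats(a, b, c)
```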
### Version information:
```Shell
3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:42:31) [MSC v.1937 64 bit (AMD64)]
Windows-11-10.0.22631-SP0
scikit-image version: 0.22.0
numpy version: 1.26.4
```
| closed | 2024-07-18T19:45:40Z | 2024-07-26T07:52:55Z | https://github.com/scikit-image/scikit-image/issues/7472 | [
":bug: Bug"
] | pamitabh | 18 |
ultralytics/ultralytics | computer-vision | 19,064 | Decision which layers get weight decay regularization | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hey,
This is a logger output before the training starts:
`optimizer: Adam(lr=0.01, momentum=0.937) with parameter groups 63 weight(decay=0.0), 73 weight(decay=0.0005), 72 bias(decay=0.0)`
How it is decided which parameters receive weight decay? Are only those parameter groups taken that are useful for the training?
Thanks for clarification!
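For reference, the usual YOLO-style split behind that log line puts normalization-layer weights and biases in the `decay=0.0` groups and all remaining weights in the decayed group. A framework-free sketch (illustrative, not the exact Ultralytics code; `params` stands in for `model.named_parameters()`):

```python
def split_decay_groups(params, decay=5e-4):
    """Split parameters into the three groups seen in the log line:
    normalization weights (no decay), other weights (decay), biases
    (no decay). `params` is a list of (name, is_norm_layer) tuples."""
    norm_w, decay_w, biases = [], [], []
    for name, is_norm in params:
        if name.endswith(".bias"):
            biases.append(name)
        elif is_norm:
            norm_w.append(name)  # e.g. BatchNorm2d.weight
        else:
            decay_w.append(name)  # conv / linear weights
    return [
        {"params": norm_w, "weight_decay": 0.0},
        {"params": decay_w, "weight_decay": decay},
        {"params": biases, "weight_decay": 0.0},
    ]
```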
### Additional
_No response_ | closed | 2025-02-04T12:39:55Z | 2025-02-08T03:47:58Z | https://github.com/ultralytics/ultralytics/issues/19064 | [
"question"
] | Petros626 | 4 |
sqlalchemy/alembic | sqlalchemy | 394 | Alembic's edit fails when editor contains a space | **Migrated issue, originally created by David Pärsson ([@davidparsson](https://github.com/davidparsson))**
Alembic's `edit` command fails when `$EDITOR` contains a space:
```
$ echo $EDITOR
subl --wait
$ alembic edit head
FAILED: Error executing editor ([Errno 2] No such file or directory: 'subl --wait')
$ which subl
/usr/local/bin/subl
```
I'm on Alembic v0.8.8.
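The usual fix for this class of bug is to split the `$EDITOR` value with `shlex` instead of treating it as a single executable name; a sketch (hypothetical helper, not Alembic's code):

```python
import shlex

def editor_command(editor_env, filename):
    """Build the argv for the external editor: 'subl --wait' becomes
    ['subl', '--wait', <file>] instead of one unfindable token."""
    return shlex.split(editor_env) + [filename]
```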
| closed | 2016-11-03T14:02:26Z | 2016-11-04T13:04:28Z | https://github.com/sqlalchemy/alembic/issues/394 | [
"bug",
"low priority"
] | sqlalchemy-bot | 3 |
Miserlou/Zappa | flask | 1,938 | Zappa deployment does not work if the virtualenv name is the same as the app name | ## Context
When running Zappa in a virtualenv, I found that if the name of your Django app and the name of your virtualenv are the same, then you will get a
```
ModuleNotFoundError: No module named '<app-name>'
```
when deploying your app.
## Expected Behavior
Zappa should deploy normally. e.g.
```
Your updated Zappa deployment is live!: https://api.com (https://...api.us-west-2.amazonaws.com/dev)
```
## Actual Behavior
Zappa logs return:
```
[1570156390363] Instancing..
[1570156392032] No module named '<app>': ModuleNotFoundError
Traceback (most recent call last):
File "/var/task/handler.py", line 580, in lambda_handler
return LambdaHandler.lambda_handler(event, context)
File "/var/task/handler.py", line 245, in lambda_handler
handler = cls()
File "/var/task/handler.py", line 151, in __init__
wsgi_app_function = get_django_wsgi(self.settings.DJANGO_SETTINGS)
File "/var/task/zappa/ext/django_zappa.py", line 20, in get_django_wsgi
return get_wsgi_application()
File "/var/task/django/core/wsgi.py", line 12, in get_wsgi_application
django.setup(set_prefix=False)
File "/var/task/django/__init__.py", line 19, in setup
configure_logging(settings.LOGGING_CONFIG, settings.LOGGING)
File "/var/task/django/conf/__init__.py", line 79, in __getattr__
self._setup(name)
File "/var/task/django/conf/__init__.py", line 66, in _setup
self._wrapped = Settings(settings_module)
File "/var/task/django/conf/__init__.py", line 157, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "/var/lang/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 941, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
ModuleNotFoundError: No module named '<app>'
```
and the
```
zappa deploy <stage>`
```
command returns
```
Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 500 response code.
```
## Possible Fix
I'm unsure, but if someone could point me in the right direction I can take a look.
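Purely as a guess at the mechanism: if the packaging step excludes every path containing the virtualenv's name, an app directory sharing that name would be dropped too. A hypothetical illustration (not Zappa's actual code):

```python
def files_to_package(project_paths, venv_name):
    """Hypothetical packager filter: drop any path with a component
    matching the venv name. When the Django app directory shares that
    name, its modules vanish from the bundle, which would produce the
    ModuleNotFoundError seen above."""
    return [
        path for path in project_paths
        if venv_name not in path.split("/")
    ]
```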
## Steps to Reproduce
(Assuming you are using virtualenvwrapper)
1. `mkvirtualenv <app> `
2. Name the directory of your main Django app (where the `settings.py` and `urls.py` live) '<app>'.
3. `zappa deploy/update <stage>`
## Your Environment
* Zappa version used: 0.47.0
* Operating System and Python version: Ubuntu 18.04, Python 3.6
* The output of `pip freeze`:
```
awscli==1.16.152
boto3==1.9.152
djangorestframework==3.10.3
django==2.2.5
django-allauth==0.38.0
django-cors-headers==3.1.0
django-extensions==2.1.6
django-filter==2.1.0
django-ranged-fileresponse==0.1.2
django-rest-auth==0.9.5
django-storages==1.7.1
django-environ==0.4.5
future==0.16.0
gunicorn==19.9.0
jmespath==0.9.3
pinax-badges==2.0.0
psycopg2-binary==2.8.2
python-dateutil==2.6.1
sentry-sdk==0.11.2
whoosh==2.7.4
zappa==0.47.0
```
* Link to your project (optional):
* Your `zappa_settings.py`: Can't display.
| closed | 2019-10-04T02:49:39Z | 2019-10-17T20:36:49Z | https://github.com/Miserlou/Zappa/issues/1938 | [] | piraka9011 | 2 |
postmanlabs/httpbin | api | 237 | Connection header in Google Chrome browser | Google Chrome says it's sending `Connection: keep-alive` but I don't see it echoed in the response.
| closed | 2015-06-11T14:05:10Z | 2018-04-26T17:51:07Z | https://github.com/postmanlabs/httpbin/issues/237 | [] | smarts | 4 |
harry0703/MoneyPrinterTurbo | automation | 484 | Compression error, even though the path contains no Chinese or special characters | 


| open | 2024-08-30T05:54:37Z | 2024-08-30T05:59:39Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/484 | [] | GKDSSS | 1 |
neuml/txtai | nlp | 460 | Using external transform and similarity function reports an error ValueError: shapes (1,512) and (768,1) not aligned: 512 (dim 1) != 768 (dim 0) | hi,can you help me
txtai version: 5.4.0
```python
import numpy as np
import requests
from txtai.embeddings import Embeddings

# uses SentenceTransformer, returns 768 dims
def transform(inputs):
    response = requests.get("http://127.0.0.1:8080/embedding/1?q=" + inputs[0])
    return np.array([response.json()['vec']], dtype=np.float32)

embeddings = Embeddings({"method": "external", "transform": transform})
npa = np.asarray([[x for x in range(768)]], dtype=np.float32)
embeddings.similarity('hello', npa)
```
ValueError: shapes (1,512) and (768,1) not aligned: 512 (dim 1) != 768 (dim 0)
But if I don't use external, this error will not appear | closed | 2023-04-15T10:48:51Z | 2023-04-17T12:47:22Z | https://github.com/neuml/txtai/issues/460 | [] | jqsl2012 | 2 |
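The shapes in the error suggest the query string is being embedded to 512 dims somewhere while the supplied array is 768-dim. A dimension guard makes that mismatch explicit before the dot product; a pure-Python sketch (illustrative, not txtai code):

```python
def safe_similarity(query_vec, data_vec):
    """Cosine similarity with an explicit dimension check, a stand-in
    for the numpy dot product that raises 'shapes not aligned'."""
    if len(query_vec) != len(data_vec):
        raise ValueError(
            f"dimension mismatch: query has {len(query_vec)} dims, "
            f"data has {len(data_vec)}"
        )
    num = sum(q * d for q, d in zip(query_vec, data_vec))
    qn = sum(q * q for q in query_vec) ** 0.5
    dn = sum(d * d for d in data_vec) ** 0.5
    return num / (qn * dn)
```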
google-research/bert | nlp | 829 | how to set to use four GPU | open | 2019-08-29T16:03:18Z | 2019-10-31T07:21:23Z | https://github.com/google-research/bert/issues/829 | [] | luluforever | 1 | |
ansible/awx | django | 15,351 | Flaky integration test failure | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
Collection integration test checks failed
```
TASK [bulk_job_launch : Delete Job Template] ***********************************
task path: /home/runner/.ansible/collections/ansible_collections/awx/awx/tests/output/.tmp/integration/bulk_job_launch-j8j3ssrr-ÅÑŚÌβŁÈ/tests/integration/targets/bulk_job_launch/tasks/main.yml:67
Using module file /home/runner/.ansible/collections/ansible_collections/awx/awx/plugins/modules/job_template.py
Pipelining is enabled.
<testhost> ESTABLISH LOCAL CONNECTION FOR USER: runner
<testhost> EXEC /bin/sh -c '/usr/bin/python3 && sleep 0'
The full traceback is:
File "/tmp/ansible_job_template_payload_vrx_hk7z/ansible_job_template_payload.zip/ansible_collections/awx/awx/plugins/module_utils/controller_api.py", line 507, in make_request
response = self.session.open(
File "/tmp/ansible_job_template_payload_vrx_hk7z/ansible_job_template_payload.zip/ansible/module_utils/urls.py", line 899, in open
r = urllib.request.urlopen(request, None, timeout)
File "/usr/lib/python3.10/urllib/request.py", line 216, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.10/urllib/request.py", line 525, in open
response = meth(req, response)
File "/usr/lib/python3.10/urllib/request.py", line 634, in http_response
response = self.parent.error(
File "/usr/lib/python3.10/urllib/request.py", line 563, in error
return self._call_chain(*args)
File "/usr/lib/python3.10/urllib/request.py", line 496, in _call_chain
result = func(*args)
File "/usr/lib/python3.10/urllib/request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
fatal: [testhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"allow_simultaneous": null,
"ask_credential_on_launch": null,
"ask_diff_mode_on_launch": null,
"ask_execution_environment_on_launch": null,
"ask_forks_on_launch": null,
"ask_instance_groups_on_launch": null,
"ask_inventory_on_launch": null,
"ask_job_slice_count_on_launch": null,
"ask_job_type_on_launch": null,
"ask_labels_on_launch": null,
"ask_limit_on_launch": null,
"ask_scm_branch_on_launch": null,
"ask_skip_tags_on_launch": null,
"ask_tags_on_launch": null,
"ask_timeout_on_launch": null,
"ask_variables_on_launch": null,
"ask_verbosity_on_launch": null,
"become_enabled": null,
"controller_config_file": null,
"controller_host": null,
"controller_oauthtoken": null,
"controller_password": null,
"controller_username": null,
"copy_from": null,
"credential": null,
"credentials": null,
"custom_virtualenv": null,
"description": null,
"diff_mode": null,
"execution_environment": null,
"extra_vars": null,
"force_handlers": null,
"forks": null,
"host_config_key": null,
"instance_groups": null,
"inventory": null,
"job_slice_count": null,
"job_tags": null,
"job_type": null,
"labels": null,
"limit": null,
"name": "AWX-Collection-tests-bulk_job_launch-pFcZYEmgQUttbjTH",
"new_name": null,
"notification_templates_error": null,
"notification_templates_started": null,
"notification_templates_success": null,
"organization": null,
"playbook": null,
"prevent_instance_group_fallback": null,
"project": null,
"request_timeout": null,
"scm_branch": null,
"skip_tags": null,
"start_at_task": null,
"state": "absent",
"survey_enabled": null,
"survey_spec": null,
"timeout": null,
"use_fact_cache": null,
"validate_certs": null,
"vault_credential": null,
"verbosity": null,
"webhook_credential": null,
"webhook_service": null
}
},
"msg": "You don't have permission to DELETE to /api/v2/job_templates/8/ (HTTP 403)."
}
```
This appears to be flaky, so documenting here.
### AWX version
devel
### Select the relevant components
- [ ] UI
- [ ] UI (tech preview)
- [X] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
N/A
### Modifications
no
### Ansible version
_No response_
### Operating system
_No response_
### Web browser
_No response_
### Steps to reproduce
flaky
### Expected results
passing
### Actual results
sometimes may fail
### Additional information
_No response_ | open | 2024-07-09T19:08:02Z | 2024-07-09T19:08:14Z | https://github.com/ansible/awx/issues/15351 | [
"type:bug",
"component:api",
"needs_triage"
] | AlanCoding | 0 |
Kanaries/pygwalker | matplotlib | 76 | [Feat] Reduce unnecessary storage consumption, both memory and disk | closed | 2023-03-10T06:41:18Z | 2023-09-08T01:44:18Z | https://github.com/Kanaries/pygwalker/issues/76 | [
"enhancement"
] | Asm-Def | 0 | |
jmcnamara/XlsxWriter | pandas | 267 | Question about Windows Indexing XlsxWriter Created Files | We're using xlsxwriter on a Centos server to create and mail out xlsx workbooks - all works well and the workbooks can be opened by the recipient. We've found though that Windows doesn't seem to index the files so the contents are not searchable through Windows Explorer or Outlook.
Opening the file in Excel and immediately saving it results in a file size change as Excel creates an additional sharedStrings.xml (amongst other changes) and the file contents will now be found in a Windows search.
Just wondering if anyone else has experienced this and found a workaround, or if this is something that XlsxWriter could even influence.
| closed | 2015-06-18T08:55:01Z | 2015-06-18T12:09:28Z | https://github.com/jmcnamara/XlsxWriter/issues/267 | [
"question"
] | MatthewJWalters | 6 |
iperov/DeepFaceLab | deep-learning | 5,239 | Model collapse when training with GAN + Adabelief (no Learning Rate Dropout) and Gradient Clipping | Iperov reply: "Offer a fix. Or dont use gan" Clearly you're the developer and users are testers, we find bugs and report them so you can fix it. You can keep acting like a child or investigate the issue, if you think gradient clipping works fine with adabelief and LRD doesn't cause any issues too test it and prove it works.
Using the latest release, training a pretrained DF-UD 320 model with dims 320/72/72/16 on bunch of random faces before actual training.
Model has AB enabled from the start (both pretraining and training done with it enabled). I've ran during the training set most options, starting with Random Warp, then no RW, Uniform Yaw, bit of true face, mouth and eye priority until I've enabled learning rate dropout for a while and then disabled it and enabled GAN at 0.1 power with default patch size and dims.
Model trained fine for some time until it collapsed, I've decided to enabled gradient clipping despite reading up about some potential issues with it running on models using adabelief optimizer but still it also collapsed after some time producing results like this: https://imgur.com/a/fHDQc4v
When gradient clipping was disabled the collapsed face previews were completely black with DST faces looking fine, with gradient clipping enabled I'm getting these black/blue swirls and DST faces are all distorted and just look bad.
In both cases loss values go up to 4.100 like values. This happened also when the GAN values for patch size and dims were changed (lower than default).
Rest of the setup: updated Windows 10, AVX supported CPU, 1080Ti, Win 10 fix applied (GPU scheduling one)
This is really frustrating as I don't see any obvious reasons for why this model would collapse 3 times already, old GAN never caused any such issues and I was also able to turn on and off gradient clipping with no issues too, now with AB turning gradient clipping causes issues and training with GAN collapses model after just 1-3 hours of training no matter if gradient clipping is enabled or not.
Other thing I've noticed too is that models sometimes seem to train much faster, before AB models would always run at the same speed and use up same amount of VRAM, now with AB enabled models VRAM requirements for the same model can differ and sometimes the model may use 1-1,5GB more or less, which also affects training speed which when working correctly is under 800ms and when it's consuming more for some reason speed also goes down with iterations reaching 3000ms with usual 16th iteration raises going as high as 8000ms. I always make sure the available VRAM is the same before running the model and doesn't fully load the cards VRAM (usage never goes above 10,6GB on 11GB card and minimum usage when just the display is being driven by the GPU ranges between 0.4-0.8GB before launching the training. | closed | 2021-01-11T03:26:59Z | 2021-01-11T16:30:59Z | https://github.com/iperov/DeepFaceLab/issues/5239 | [] | ThomasBardem | 3 |
allenai/allennlp | nlp | 5,033 | Publish info about each model implementation in models repo | From our discussion on Slack.
This could be markdown docs, ideally automatically generated. Could also publish to our API docs.
Kind of related to #4720 | open | 2021-03-02T18:12:27Z | 2021-03-02T18:12:27Z | https://github.com/allenai/allennlp/issues/5033 | [] | epwalsh | 0 |
ultralytics/yolov5 | pytorch | 12,507 | How is the number of c3 residuals in the backbone network determined? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
"Hello, after getting to know YOLOv5 in depth, I am very fond of it. Subsequent YOLO versions cannot match V5's level in either coding style or in performance on our tasks. However, I have a question: in the backbone network, how is the number of residual blocks in each C3 determined? Is the repeat ratio of the C3 blocks, progressing from shallow to deep as 1, 2, 3, 1, designed to better balance the detection of small, medium, and large objects? I would greatly appreciate it if you could provide insights into my question. Thank you!"
### Additional
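For context, the 1, 2, 3, 1 pattern matches the yolov5s scaling of the base 3, 6, 9, 3 backbone repeats with `depth_multiple=0.33`; deeper model sizes scale the same pattern up rather than rebalancing it per object size. A sketch of that scaling rule (base values from the public yolov5s.yaml; treat the helper as illustrative):

```python
def scaled_repeats(base_repeats, depth_multiple):
    """Scale the per-stage C3 repeat counts from the model yaml by the
    model size's depth_multiple, rounding with a floor of 1, in the
    spirit of YOLOv5's parse_model."""
    return [max(round(n * depth_multiple), 1) for n in base_repeats]

# e.g. yolov5s (depth_multiple=0.33): [3, 6, 9, 3] -> [1, 2, 3, 1]
```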
_No response_ | closed | 2023-12-15T01:48:52Z | 2024-10-20T19:34:22Z | https://github.com/ultralytics/yolov5/issues/12507 | [
"question",
"Stale"
] | SijieLuo | 4 |
manrajgrover/halo | jupyter | 81 | Add tests for checking color of spinner in stdout |
## Description
Currently tests run on cleaned `stdout` with no ansi escapings. Actual color in output is not tested. This issue aims to track and fix it.
See https://github.com/manrajgrover/halo/pull/78#pullrequestreview-155713163 for more information.
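A color-aware assertion helper for such tests might look like this (an illustrative sketch, not Halo's test code):

```python
import re

ANSI_SGR = re.compile(r"\x1b\[[0-9;]*m")

def sgr_codes(raw_output):
    """Collect the ANSI SGR escape sequences from raw stdout so a test
    can assert on the spinner's color instead of stripping escapes
    before comparing."""
    return ANSI_SGR.findall(raw_output)
```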
## People to notify
@manrajgrover
| closed | 2018-09-15T13:07:56Z | 2019-06-04T08:00:25Z | https://github.com/manrajgrover/halo/issues/81 | [
"up-for-grabs",
"hacktoberfest",
"test"
] | manrajgrover | 0 |
replicate/cog | tensorflow | 2,215 | How to run Cog models as Docker images? A toy example | This seems to be a replicate of https://github.com/replicate/cog/issues/1804. I could not find any document about this. The whole cog file seems to be a black box.
I tried to build a toy example, where the predicion is simply to return the input float number. However I failed. Any help would be useful!
`predict.py`:
```
# Prediction interface for Cog ⚙️
# https://cog.run/python
from cog import BasePredictor, Input, Path
class Predictor(BasePredictor):
def predict(
self,
# scale: float = Input(description="Factor to scale image by", ge=0, le=10, default=1.5),
scale: float = 1.5,
) -> float:
"""Run a single prediction on the model"""
print(scale)
return scale
```
I avoid the pydantic-style `Input(...)` usage, because I do not know how to get the value out of the `scale` variable. The `print(scale)` will give `<class 'pydantic.fields.FieldInfo'>` instead of the actual number `1.5`. This would be my first question.
Then I build this image using `cog build` and serve it using `docker run`.
I write another file to test it:
`send.py`
```
import requests
import pdb
input = {
"scale": 12.5
}
response = requests.post('http://localhost:5001/predictions', json=input)
print(response.text)
```
The output is
`{"input":{},"output":1.5,"id":null,"version":null,"created_at":null,"started_at":"2025-03-19T11:12:51.872927+00:00","completed_at":"2025-03-19T11:12:51.875585+00:00","logs":"--------------\n1.5\n<class 'float'>\n3.0\n--------------\n","error":null,"status":"succeeded","metrics":{"predict_time":0.002658}}`
This is unexpected because the output value is still `1.5`, which is the default value, not the value `12.5` I provided. Any idea how this can be solved? Thanks!
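The `"input":{}` in the response is a strong hint: Cog's HTTP API expects the fields wrapped in an `input` object, so the POST body should be `{"input": {"scale": 12.5}}` rather than `{"scale": 12.5}`. A tiny sketch of building that payload (helper name is mine):

```python
def prediction_payload(**fields):
    """Wrap prediction inputs the way Cog's /predictions endpoint
    expects: {"input": {...}}."""
    return {"input": fields}

payload = prediction_payload(scale=12.5)
# requests.post('http://localhost:5001/predictions', json=payload)
```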
| closed | 2025-03-19T11:33:53Z | 2025-03-19T15:56:38Z | https://github.com/replicate/cog/issues/2215 | [] | wzm2256 | 2 |
JoeanAmier/TikTokDownloader | api | 340 | Crashes when about to start downloading after scraping completes | As described in the title; the details are as follows:
Please select the source of account links:
1. Use account links from the accounts_urls_tiktok parameter (recommended)
2. Manually enter the account links to scrape
3. Read the account links to scrape from a text file
2
Please enter the account homepage link: https://www.tiktok.com/@austin_animations
Starting to process account 1
Retrieved published-post data for 68 items from the account
Starting to extract post data
Account nickname: Austin Animations; account handle: Austin Animations; account ID: 7399690820750132257
Number of filtered posts for the current account: 68
Starting to download post files
Shutting down the program
Traceback (most recent call last):
File "main.py", line 11, in <module>
File "asyncio\runners.py", line 194, in run
File "asyncio\runners.py", line 118, in run
File "asyncio\base_events.py", line 687, in run_until_complete
File "main.py", line 7, in main
File "src\application\TikTokDownloader.py", line 327, in run
File "src\application\TikTokDownloader.py", line 212, in main_menu
File "src\application\TikTokDownloader.py", line 295, in compatible
File "src\application\TikTokDownloader.py", line 220, in complete
File "src\application\main_complete.py", line 1522, in run
File "src\application\main_complete.py", line 239, in account_acquisition_interactive_tiktok
File "src\application\main_complete.py", line 281, in __account_secondary_menu
File "src\application\main_complete.py", line 898, in __multiple_choice
File "src\application\main_complete.py", line 359, in account_detail_inquire_tiktok
File "src\application\main_complete.py", line 389, in __account_detail_handle
File "src\application\main_complete.py", line 460, in deal_account_detail
File "src\application\main_complete.py", line 524, in _batch_process_detail
File "src\application\main_complete.py", line 571, in download_account_detail
File "src\downloader\download.py", line 123, in run
File "src\downloader\download.py", line 151, in run_batch
File "src\downloader\download.py", line 275, in batch_processing
File "src\downloader\download.py", line 296, in downloader_chart
File "src\tools\retry.py", line 17, in inner
File "src\downloader\download.py", line 471, in request_file
File "httpx\_models.py", line 761, in raise_for_status
httpx.HTTPStatusError: Client error '403 Forbidden' for url 'https://v16-webapp-prime.us.tiktok.com/video/tos/useast2a/tos-useast2a-ve-0068c001-euttp/o8f9ZbjnlAhiBUBIQEWxyDAZUlNo2zwCBCIrik/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=2184&bt=1092&cs=0&ds=6&ft=4KJMyMzm8Zmo0B8nrb4jV3U~dpWrKsd.&mime_type=video_mp4&qs=0&rc=OTc1NzZoNWg0ZGdmNGc0NUBpam44amw5cmtwdjMzZjczM0AxYjBhNWM0NWExLWMvMzYuYSMtYGRmMmRzYWlgLS1kMWNzcw%3D%3D&btag=e00088000&expire=1733069904&l=20241201101756F2F31F1B1D9894AEBDB9&ply_type=2&policy=2&signature=be4c52ef73c63557fbb0af5afe357f35&tk=tt_chain_token'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
[4932] Failed to execute script 'main' due to unhandled exception!
D:\视频下载工具\TikTokDownloader_V5.4_WIN> | closed | 2024-12-01T10:27:21Z | 2025-01-12T10:17:03Z | https://github.com/JoeanAmier/TikTokDownloader/issues/340 | [] | jialin9716 | 1 |
eriklindernoren/ML-From-Scratch | machine-learning | 41 | Suppport Vector Machine Problem | I had implemented my own svm based on your implementation. Wierdly, the accuracy was very low.
I copied your code and use sklearn has a benchemark. Your svm implementation gives me back almost random prediction.
Here is the code for you to check.
https://github.com/tchaton/interviews_prep/blob/master/code_prep/mlfromscratch/supervised/svm.py | open | 2018-05-20T16:03:36Z | 2019-03-25T18:38:10Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/41 | [] | tchaton | 2 |
apachecn/ailearning | scikit-learn | 552 | Derivation of the logistic regression weights | The gradient derivation for the weights should be based on finding the extremum of the maximum-likelihood function. Since we are seeking the maximum of the likelihood function, it is called gradient ascent; if we negate the likelihood function, it becomes gradient descent. The gradient being computed is not the gradient of z = w0*x0 + w1*x1 + .... This is my understanding of "Machine Learning in Action". | closed | 2019-10-15T06:50:17Z | 2021-09-07T17:45:05Z | https://github.com/apachecn/ailearning/issues/552 | [] | upider | 1 |
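The gradient-ascent update described in the issue above can be sketched as follows (illustrative, simpler than the book's vectorized version): maximizing the log-likelihood of logistic regression gives the gradient X^T (y - sigmoid(Xw)), so the weights move *up* the gradient; negating the objective turns the same step into gradient descent.

```python
import math

def gradient_ascent_step(X, y, w, lr=0.1):
    """One gradient-ascent step on the logistic-regression
    log-likelihood: grad_j = sum_i x_ij * (y_i - sigmoid(x_i . w))."""
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))
    grad = [0.0] * len(w)
    for xi, yi in zip(X, y):
        err = yi - sigmoid(sum(a * b for a, b in zip(xi, w)))
        for j, xij in enumerate(xi):
            grad[j] += xij * err
    return [wj + lr * gj for wj, gj in zip(w, grad)]
```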
mljar/mercury | data-visualization | 217 | Embedded Mercury into vue.js | Hi,
I am wondering if there are any tips/examples on how to directly embed a Mercury notebook into a vue.js website?
Thanks a lot, and good continuation | closed | 2023-02-21T13:44:40Z | 2023-02-23T07:56:20Z | https://github.com/mljar/mercury/issues/217 | [] | julian-passebecq | 2 |
LAION-AI/Open-Assistant | machine-learning | 3,122 | Add more semantics, logical & reasoning datasets | multilingual | I believe that such data can help the model become more meaningful; at the moment, when tested on semantic tasks (e.g., evaluating which words from a list are synonyms), the model gives terrible results.
I will gradually develop this idea (as well as refine this issue) and find (then adapt in QnA/instruction format) datasets that could potentially help solve this problem.
---
**Very** useful source: https://allenai.org/data
---
### Semantics:
- [x] **[semantics] [multilingual]** Tatoeba Q&A Translation Dataset; resolved by https://github.com/LAION-AI/Open-Assistant/pull/3114
- [x] **[semantics] [multilingual]** Word similarity dataset; resolved by https://github.com/LAION-AI/Open-Assistant/pull/3200
- [x] ~~**[semantics] [russian]** RUSSE is a series of workshops on Russian Semantic Evaluation. Each workshop is centered around a shared task on a specific topic related to the semantic processing of the Russian language; https://github.com/nlpub/russe-evaluation~~ (too many entries; the word-similarity dataset above covered Russian too, so no need)
- [ ] Find more...
### Reasoning & Logic:
- [x] **[reasoning, logic] [russian]** Russian Riddles; resolved by https://github.com/LAION-AI/Open-Assistant/pull/3074
- [x] **[reasoning, logic] [bulgarian]** reasoning_bg; resolved by https://github.com/LAION-AI/Open-Assistant/pull/3137
- [x] **[reasoning, logic] [english]** gsm8k; resolved by https://github.com/LAION-AI/Open-Assistant/pull/3149
- [x] **[reasoning, logic] [english]** gsm-hard; resolved by https://github.com/LAION-AI/Open-Assistant/pull/3149
- [ ] Find more...
### Instructions:
- [ ] **[instruction-following] [russian]** ru-turbo-alpaca https://huggingface.co/datasets/IlyaGusev/ru_turbo_alpaca
--- | open | 2023-05-10T19:49:52Z | 2023-05-20T10:09:29Z | https://github.com/LAION-AI/Open-Assistant/issues/3122 | [
"data"
] | echo0x22 | 0 |
ansible/awx | django | 15,891 | UI_next shows page half in Dutch, half in English | ### Please confirm the following
- [x] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [x] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [x] I understand that AWX is open source software provided for free and that I might not receive a timely response.
- [x] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.)
### Bug Summary
I'm running AWX 24.6.1.
Browser languages: English (United Kingdom), English (United States), English
When switching to ui_next, I get a popup 'Welcome to the new Ansible user interface', with an English text, and a Dutch 'Sluiten' button.
The rest of the interface is also mostly Dutch, although a few things are in English: 'Instellingen' appears next to the English 'User Preferences', 'Access Management' next to 'Authentication', and 'Taken' is in Dutch while 'View all jobs' next to it is in English.
### AWX version
24.6.1
### Select the relevant components
- [ ] UI
- [x] UI (tech preview)
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Installation method
kubernetes
### Modifications
no
### Ansible version
2.17.0
### Operating system
Windows
### Web browser
Chrome, Edge
### Steps to reproduce
Chrome languages

The popup when opening ui_next for the first time

'Taken' in Dutch, 'View all Jobs' in English

'Inventarissen' in Dutch, 'Synced/Synced failures' in English

'Instellingen' in Dutch, 'User preferences' in English

### Expected results
The entire website in English, as this is my browser default.
### Actual results
A mix of English and Dutch.
### Additional information
I'm using SAML to log in with my Entra account, which has 'NL' as usage location. | open | 2025-03-12T07:05:18Z | 2025-03-12T07:07:35Z | https://github.com/ansible/awx/issues/15891 | [
"type:bug",
"needs_triage",
"community"
] | ildjarnisdead | 0 |
SYSTRAN/faster-whisper | deep-learning | 172 | CUDA + fp16 + windows, faster on Large-v1 than on Medium ? | Hello, I ran some benchmarks on models on a RTX 2070 and I end up with the following results :
```txt
Medium GPU cuda float16 | file 874 seconds : 114 seconds working
Large GPU cuda float16 | file 874 seconds : 107 seconds working
```
Any idea why it's faster on Large-v1? | closed | 2023-04-24T08:43:49Z | 2024-11-20T14:30:13Z | https://github.com/SYSTRAN/faster-whisper/issues/172 | [] | ExtReMLapin | 3 |
yt-dlp/yt-dlp | python | 12,691 | Member Only Subtitle [info] HmSG1F97z1A: Downloading 1 format(s): 248+251 | ### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar questions **including closed ones**. DO NOT post duplicates
### Please make sure the question is worded well enough to be understood
hello! I'm trying to download subtitles from a members-only YouTube video. I'm already a member of this channel and have already downloaded a few videos. I've been trying to download subtitles using this command:
yt-dlp --skip-download --sub-langs all --cookies-from-browser firefox
It's not showing any error, but I can't find the subtitle file in the yt-dlp folder. I would really appreciate it if someone could explain what the output below means:
Extracting cookies from firefox
Extracted 304 cookies from firefox
[youtube] Extracting URL: https://youtu.be/HmSG1F97z1A?si=X08unn-QV2jQs15i
[youtube] HmSG1F97z1A: Downloading webpage
[youtube] HmSG1F97z1A: Downloading tv client config
[youtube] HmSG1F97z1A: Downloading player 69f581a5
[youtube] HmSG1F97z1A: Downloading tv player API JSON
[info] HmSG1F97z1A: Downloading 1 format(s): 248+251
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
[debug] Command-line config: ['-vU', 'yt-dlp', '--skip-download', '--sub-langs', 'all', '--cookies-from-browser', 'firefox', 'https://youtu.be/HmSG1F97z1A?si=HIFSJu4POizywObS', "'verbose':", 'True']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.03.21 from yt-dlp/yt-dlp [f36e4b6e6] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg N-118776-g26c5d8cf5d-20250315 (setts), ffprobe N-118776-g26c5d8cf5d-20250315
[debug] Optional libraries: Cryptodome-3.22.0, brotli-1.1.0, certifi-2025.01.31, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-15.0.1
[debug] Proxy map: {}
Extracting cookies from firefox
[debug] Extracting cookies from: "C:\Users\D&P Office\AppData\Roaming\Mozilla\Firefox\Profiles\a7ay8nlx.default-release-1742288539706\cookies.sqlite"
Extracted 311 cookies from firefox
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Plugin directories: none
[debug] Loaded 1847 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2025.03.21 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2025.03.21 from yt-dlp/yt-dlp)
[CommonMistakes] Extracting URL: yt-dlp
ERROR: [CommonMistakes] You've asked yt-dlp to download the URL "yt-dlp". That doesn't make any sense. Simply remove the parameter in your command or configuration.
File "yt_dlp\extractor\common.py", line 747, in extract
File "yt_dlp\extractor\commonmistakes.py", line 25, in _real_extract
[debug] [youtube] Found YouTube account cookies
[youtube] Extracting URL: https://youtu.be/HmSG1F97z1A?si=HIFSJu4POizywObS
[youtube] HmSG1F97z1A: Downloading webpage
[youtube] HmSG1F97z1A: Downloading tv client config
[youtube] HmSG1F97z1A: Downloading player 69f581a5
[youtube] HmSG1F97z1A: Downloading tv player API JSON
[debug] Loading youtube-nsig.69f581a5 from cache
[debug] [youtube] Decrypted nsig 0Q3WZHG3H4xBlA6Xu => adlaAWr2D9bUuA
[debug] Loading youtube-nsig.69f581a5 from cache
[debug] [youtube] Decrypted nsig 22RCOzba1lvA1XcUA => fsPiFLDy5XGq6w
[debug] Loading youtube-nsig.69f581a5 from cache
[debug] [youtube] Decrypted nsig M0K7QJFRG4_TrzPSA => H17yi1gJCJy9gg
[debug] [youtube] HmSG1F97z1A: web client https formats require a GVS PO Token which was not provided. They will be skipped as they may yield HTTP Error 403. You can manually pass a GVS PO Token for this client with --extractor-args "youtube:po_token=web.gvs+XXX". For more information, refer to https://github.com/yt-dlp/yt-dlp/wiki/PO-Token-Guide . To enable these broken formats anyway, pass --extractor-args "youtube:formats=missing_pot"
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] HmSG1F97z1A: Downloading 1 format(s): 248+251
[generic] Extracting URL: 'verbose':
ERROR: [generic] "'verbose':" is not a valid URL. Set --default-search "ytsearch" (or run yt-dlp "ytsearch:'verbose':" ) to search YouTube
File "yt_dlp\extractor\common.py", line 747, in extract
File "yt_dlp\extractor\generic.py", line 2374, in _real_extract
[generic] Extracting URL: True
ERROR: [generic] 'True' is not a valid URL. Set --default-search "ytsearch" (or run yt-dlp "ytsearch:True" ) to search YouTube
File "yt_dlp\extractor\common.py", line 747, in extract
File "yt_dlp\extractor\generic.py", line 2374, in _real_extract | open | 2025-03-22T02:33:32Z | 2025-03-22T04:47:29Z | https://github.com/yt-dlp/yt-dlp/issues/12691 | [
"question"
] | serenakhala | 8 |
deezer/spleeter | deep-learning | 693 | [Bug] line 219 in _call_with_frames_removed | poetry run pytest tests/
plugins: forked-1.3.0
collecting ... Fatal Python error: Illegal instruction
Current thread 0x00007f7be6985740 (most recent call first):
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 922 in create_module
File "<frozen importlib._bootstrap>", line 571 in module_from_spec
File "<frozen importlib._bootstrap>", line 658 in _load_unlocked
File "<frozen importlib._bootstrap>", line 955 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 971 in _find_and_load
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib64/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 64 in <module>
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 678 in exec_module
File "<frozen importlib._bootstrap>", line 665 in _load_unlocked
File "<frozen importlib._bootstrap>", line 955 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 971 in _find_and_load
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1023 in _handle_fromlist
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib64/python3.6/site-packages/tensorflow/python/pywrap_tfe.py", line 28 in <module>
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 678 in exec_module
File "<frozen importlib._bootstrap>", line 665 in _load_unlocked
File "<frozen importlib._bootstrap>", line 955 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 971 in _find_and_load
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1023 in _handle_fromlist
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib64/python3.6/site-packages/tensorflow/python/eager/context.py", line 35 in <module>
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 678 in exec_module
File "<frozen importlib._bootstrap>", line 665 in _load_unlocked
File "<frozen importlib._bootstrap>", line 955 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 971 in _find_and_load
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1023 in _handle_fromlist
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib64/python3.6/site-packages/tensorflow/python/__init__.py", line 40 in <module>
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 678 in exec_module
File "<frozen importlib._bootstrap>", line 665 in _load_unlocked
File "<frozen importlib._bootstrap>", line 955 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 971 in _find_and_load
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 941 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 971 in _find_and_load
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib64/python3.6/site-packages/tensorflow/__init__.py", line 41 in <module>
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 678 in exec_module
File "<frozen importlib._bootstrap>", line 665 in _load_unlocked
File "<frozen importlib._bootstrap>", line 955 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 971 in _find_and_load
File "/heisler/spleeter/spleeter/audio/adapter.py", line 14 in <module>
File "<frozen importlib._bootstrap>", line 219 in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 678 in exec_module
File "<frozen importlib._bootstrap>", line 665 in _load_unlocked
File "<frozen importlib._bootstrap>", line 955 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 971 in _find_and_load
File "/heisler/spleeter/tests/test_eval.py", line 18 in <module>
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/assertion/rewrite.py", line 170 in exec_module
File "<frozen importlib._bootstrap>", line 665 in _load_unlocked
File "<frozen importlib._bootstrap>", line 955 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 971 in _find_and_load
File "<frozen importlib._bootstrap>", line 994 in _gcd_import
File "/usr/lib64/python3.6/importlib/__init__.py", line 126 in import_module
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/pathlib.py", line 524 in import_path
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/python.py", line 578 in _importtestmodule
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/python.py", line 500 in _getobj
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/python.py", line 291 in obj
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/python.py", line 516 in _inject_setup_module_fixture
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/python.py", line 503 in collect
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/runner.py", line 341 in <lambda>
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/runner.py", line 311 in from_call
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/runner.py", line 341 in pytest_make_collect_report
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/pluggy/manager.py", line 87 in <lambda>
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/runner.py", line 458 in collect_one_node
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/main.py", line 808 in genitems
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/main.py", line 811 in genitems
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/main.py", line 634 in perform_collect
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/main.py", line 333 in pytest_collection
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/pluggy/manager.py", line 87 in <lambda>
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/main.py", line 322 in _main
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/main.py", line 269 in wrap_session
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/main.py", line 316 in pytest_cmdline_main
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/pluggy/callers.py", line 187 in _multicall
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/pluggy/manager.py", line 87 in <lambda>
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/pluggy/hooks.py", line 286 in __call__
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/config/__init__.py", line 163 in main
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/lib/python3.6/site-packages/_pytest/config/__init__.py", line 185 in console_main
File "/heisler/.cache/pypoetry/virtualenvs/spleeter-DNv974mv-py3.6/bin/pytest", line 8 in <module>
Illegal instruction (core dumped)
| closed | 2021-12-20T00:21:47Z | 2021-12-22T18:09:27Z | https://github.com/deezer/spleeter/issues/693 | [
"bug",
"invalid"
] | heislersin | 1 |
litestar-org/litestar | asyncio | 4,048 | Enhancement: OpenAPI discriminant for Pydantic tagged unions | ### Description
The OpenAPI discriminator is not generated when using Pydantic tagged unions.
### URL to code causing the issue
_No response_
### MCVE
```python
import typing
import litestar
import pydantic
class First(pydantic.BaseModel):
type: typing.Literal["first"] = "first"
class Second(pydantic.BaseModel):
type: typing.Literal["second"] = "second"
AnyModel = typing.Annotated[First | Second, pydantic.Discriminator("type")]
@litestar.get("/")
def hello_world() -> AnyModel:
return First()
app = litestar.Litestar(route_handlers=[hello_world])
def test_pydantic_has_discriminator() -> None:
assert pydantic.TypeAdapter(AnyModel).json_schema() == { # noqa: S101
"$defs": {
"First": {
"properties": {
"type": {
"const": "first",
"default": "first",
"title": "Type",
"type": "string",
}
},
"title": "First",
"type": "object",
},
"Second": {
"properties": {
"type": {
"const": "second",
"default": "second",
"title": "Type",
"type": "string",
}
},
"title": "Second",
"type": "object",
},
},
"discriminator": {
"mapping": {"first": "#/$defs/First", "second": "#/$defs/Second"},
"propertyName": "type",
},
"oneOf": [{"$ref": "#/$defs/First"}, {"$ref": "#/$defs/Second"}],
}
def test_openapi_schema_has_discriminator() -> None:
assert app.openapi_schema.to_schema() == { # noqa: S101
"info": {"title": "Litestar API", "version": "1.0.0"},
"openapi": "3.1.0",
"servers": [{"url": "/"}],
"paths": {
"/": {
"get": {
"summary": "HelloWorld",
"operationId": "HelloWorld",
"responses": {
"200": {
"description": "Request fulfilled, document follows",
"headers": {},
"content": {
"application/json": {
"schema": {
"discriminator": {
"propertyName": "type",
"mapping": {
"first": "#/components/schemas/First",
"second": "#/components/schemas/Second",
},
},
"oneOf": [
{"$ref": "#/components/schemas/First"},
{"$ref": "#/components/schemas/Second"},
],
}
}
},
}
},
"deprecated": False,
}
}
},
"components": {
"schemas": {
"First": {
"properties": {
"type": {
"type": "string",
"const": "first",
"default": "first",
}
},
"type": "object",
"required": [],
"title": "First",
},
"Second": {
"properties": {
"type": {
"type": "string",
"const": "second",
"default": "second",
}
},
"type": "object",
"required": [],
"title": "Second",
},
}
},
}
```
### Steps to reproduce
1. Create `app.py` with content above
2. Run `uv run --with pydantic --with litestar --with pytest pytest app.py -vv`
3. See passing Pydantic test correctly generating json schema and failing Litestar test generating OpenAPI Schema
### Screenshots
_No response_
### Logs
Diff from pytest:
```diff
'paths': {
'/': {
'get': {
'deprecated': False,
'operationId': 'HelloWorld',
'responses': {
'200': {
'content': {
'application/json': {
'schema': {
- 'discriminator': {
- 'mapping': {
- 'first': '#/components/schemas/First',
- 'second': '#/components/schemas/Second',
- },
- 'propertyName': 'type',
- },
'oneOf': [
{
'$ref': '#/components/schemas/First',
},
{
'$ref': '#/components/schemas/Second',
},
],
},
},
},
'description': 'Request fulfilled, document follows',
'headers': {},
},
},
'summary': 'HelloWorld',
},
},
},
```
Full pytest output
```
============================= test session starts ==============================
platform darwin -- Python 3.13.2, pytest-8.3.5, pluggy-1.5.0 -- /Users/me/.cache/uv/archive-v0/-uH6G2i2mFt96ooBr2XNm/bin/python
cachedir: .pytest_cache
rootdir: /private/tmp
plugins: Faker-36.2.2, anyio-4.8.0
collecting ... collected 2 items
app.py::test_pydantic_has_discriminator PASSED [ 50%]
app.py::test_openapi_schema_has_discriminator FAILED [100%]
=================================== FAILURES ===================================
____________________ test_openapi_schema_has_discriminator _____________________
def test_openapi_schema_has_discriminator() -> None:
> assert app.openapi_schema.to_schema() == { # noqa: S101
"info": {"title": "Litestar API", "version": "1.0.0"},
"openapi": "3.1.0",
"servers": [{"url": "/"}],
"paths": {
"/": {
"get": {
"summary": "HelloWorld",
"operationId": "HelloWorld",
"responses": {
"200": {
"description": "Request fulfilled, document follows",
"headers": {},
"content": {
"application/json": {
"schema": {
"discriminator": {
"propertyName": "type",
"mapping": {
"first": "#/components/schemas/First",
"second": "#/components/schemas/Second",
},
},
"oneOf": [
{"$ref": "#/components/schemas/First"},
{"$ref": "#/components/schemas/Second"},
],
}
}
},
}
},
"deprecated": False,
}
}
},
"components": {
"schemas": {
"First": {
"properties": {
"type": {
"type": "string",
"const": "first",
"default": "first",
}
},
"type": "object",
"required": [],
"title": "First",
},
"Second": {
"properties": {
"type": {
"type": "string",
"const": "second",
"default": "second",
}
},
"type": "object",
"required": [],
"title": "Second",
},
}
},
}
E AssertionError: assert {'info': {'title': 'Litestar API', 'version': '1.0.0'}, 'openapi': '3.1.0', 'servers': [{'url': '/'}], 'paths': {'/': {'get': {'summary': 'HelloWorld', 'operationId': 'HelloWorld', 'responses': {'200': {'description': 'Request fulfilled, document follows', 'headers': {}, 'content': {'application/json': {'schema': {'oneOf': [{'$ref': '#/components/schemas/First'}, {'$ref': '#/components/schemas/Second'}]}}}}}, 'deprecated': False}}}, 'components': {'schemas': {'First': {'properties': {'type': {'type': 'string', 'const': 'first', 'default': 'first'}}, 'type': 'object', 'required': [], 'title': 'First'}, 'Second': {'properties': {'type': {'type': 'string', 'const': 'second', 'default': 'second'}}, 'type': 'object', 'required': [], 'title': 'Second'}}}} == {'info': {'title': 'Litestar API', 'version': '1.0.0'}, 'openapi': '3.1.0', 'servers': [{'url': '/'}], 'paths': {'/': {'get': {'summary': 'HelloWorld', 'operationId': 'HelloWorld', 'responses': {'200': {'description': 'Request fulfilled, document follows', 'headers': {}, 'content': {'application/json': {'schema': {'discriminator': {'propertyName': 'type', 'mapping': {'first': '#/components/schemas/First', 'second': '#/components/schemas/Second'}}, 'oneOf': [{'$ref': '#/components/schemas/First'}, {'$ref': '#/components/schemas/Second'}]}}}}}, 'deprecated': False}}}, 'components': {'schemas': {'First': {'properties': {'type': {'type': 'string', 'const': 'first', 'default': 'first'}}, 'type': 'object', 'required': [], 'title': 'First'}, 'Second': {'properties': {'type': {'type': 'string', 'const': 'second', 'default': 'second'}}, 'type': 'object', 'required': [], 'title': 'Second'}}}}
E
E Common items:
E {'components': {'schemas': {'First': {'properties': {'type': {'const': 'first',
E 'default': 'first',
E 'type': 'string'}},
E 'required': [],
E 'title': 'First',
E 'type': 'object'},
E 'Second': {'properties': {'type': {'const': 'second',
E 'default': 'second',
E 'type': 'string'}},
E 'required': [],
E 'title': 'Second',
E 'type': 'object'}}},
E 'info': {'title': 'Litestar API', 'version': '1.0.0'},
E 'openapi': '3.1.0',
E 'servers': [{'url': '/'}]}
E Differing items:
E {'paths': {'/': {'get': {'deprecated': False, 'operationId': 'HelloWorld', 'responses': {'200': {'content': {...}, 'description': 'Request fulfilled, document follows', 'headers': {}}}, 'summary': 'HelloWorld'}}}} != {'paths': {'/': {'get': {'deprecated': False, 'operationId': 'HelloWorld', 'responses': {'200': {'content': {...}, 'description': 'Request fulfilled, document follows', 'headers': {}}}, 'summary': 'HelloWorld'}}}}
E
E Full diff:
E {
E 'components': {
E 'schemas': {
E 'First': {
E 'properties': {
E 'type': {
E 'const': 'first',
E 'default': 'first',
E 'type': 'string',
E },
E },
E 'required': [],
E 'title': 'First',
E 'type': 'object',
E },
E 'Second': {
E 'properties': {
E 'type': {
E 'const': 'second',
E 'default': 'second',
E 'type': 'string',
E },
E },
E 'required': [],
E 'title': 'Second',
E 'type': 'object',
E },
E },
E },
E 'info': {
E 'title': 'Litestar API',
E 'version': '1.0.0',
E },
E 'openapi': '3.1.0',
E 'paths': {
E '/': {
E 'get': {
E 'deprecated': False,
E 'operationId': 'HelloWorld',
E 'responses': {
E '200': {
E 'content': {
E 'application/json': {
E 'schema': {
E - 'discriminator': {
E - 'mapping': {
E - 'first': '#/components/schemas/First',
E - 'second': '#/components/schemas/Second',
E - },
E - 'propertyName': 'type',
E - },
E 'oneOf': [
E {
E '$ref': '#/components/schemas/First',
E },
E {
E '$ref': '#/components/schemas/Second',
E },
E ],
E },
E },
E },
E 'description': 'Request fulfilled, document follows',
E 'headers': {},
E },
E },
E 'summary': 'HelloWorld',
E },
E },
E },
E 'servers': [
E {
E 'url': '/',
E },
E ],
E }
app.py:63: AssertionError
=============================== warnings summary ===============================
app.py:18
/private/tmp/app.py:18: LitestarWarning: Use of a synchronous callable <function hello_world at 0x107410680> without setting sync_to_thread is discouraged since synchronous callables can block the main thread if they perform blocking operations. If the callable is guaranteed to be non-blocking, you can set sync_to_thread=False to skip this warning, or set the environmentvariable LITESTAR_WARN_IMPLICIT_SYNC_TO_THREAD=0 to disable warnings of this type entirely.
@litestar.get("/")
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED app.py::test_openapi_schema_has_discriminator - AssertionError: assert {'info': {'title': 'Litestar API', 'version': '1.0.0'}, 'openapi': '3.1.0', 'servers': [{'url': '/'}], 'paths': {'/': {'get': {'summary': 'HelloWorld', 'operationId': 'HelloWorld', 'responses': {'200': {'description': 'Request fulfilled, document follows', 'headers': {}, 'content': {'application/json': {'schema': {'oneOf': [{'$ref': '#/components/schemas/First'}, {'$ref': '#/components/schemas/Second'}]}}}}}, 'deprecated': False}}}, 'components': {'schemas': {'First': {'properties': {'type': {'type': 'string', 'const': 'first', 'default': 'first'}}, 'type': 'object', 'required': [], 'title': 'First'}, 'Second': {'properties': {'type': {'type': 'string', 'const': 'second', 'default': 'second'}}, 'type': 'object', 'required': [], 'title': 'Second'}}}} == {'info': {'title': 'Litestar API', 'version': '1.0.0'}, 'openapi': '3.1.0', 'servers': [{'url': '/'}], 'paths': {'/': {'get': {'summary': 'HelloWorld', 'operationId': 'HelloWorld', 'responses': {'200': {'description': 'Request fulfilled, document follows', 'headers': {}, 'content': {'application/json': {'schema': {'discriminator': {'propertyName': 'type', 'mapping': {'first': '#/components/schemas/First', 'second': '#/components/schemas/Second'}}, 'oneOf': [{'$ref': '#/components/schemas/First'}, {'$ref': '#/components/schemas/Second'}]}}}}}, 'deprecated': False}}}, 'components': {'schemas': {'First': {'properties': {'type': {'type': 'string', 'const': 'first', 'default': 'first'}}, 'type': 'object', 'required': [], 'title': 'First'}, 'Second': {'properties': {'type': {'type': 'string', 'const': 'second', 'default': 'second'}}, 'type': 'object', 'required': [], 'title': 'Second'}}}}
Common items:
{'components': {'schemas': {'First': {'properties': {'type': {'const': 'first',
'default': 'first',
'type': 'string'}},
'required': [],
'title': 'First',
'type': 'object'},
'Second': {'properties': {'type': {'const': 'second',
'default': 'second',
'type': 'string'}},
'required': [],
'title': 'Second',
'type': 'object'}}},
'info': {'title': 'Litestar API', 'version': '1.0.0'},
'openapi': '3.1.0',
'servers': [{'url': '/'}]}
Differing items:
{'paths': {'/': {'get': {'deprecated': False, 'operationId': 'HelloWorld', 'responses': {'200': {'content': {...}, 'description': 'Request fulfilled, document follows', 'headers': {}}}, 'summary': 'HelloWorld'}}}} != {'paths': {'/': {'get': {'deprecated': False, 'operationId': 'HelloWorld', 'responses': {'200': {'content': {...}, 'description': 'Request fulfilled, document follows', 'headers': {}}}, 'summary': 'HelloWorld'}}}}
Full diff:
{
'components': {
'schemas': {
'First': {
'properties': {
'type': {
'const': 'first',
'default': 'first',
'type': 'string',
},
},
'required': [],
'title': 'First',
'type': 'object',
},
'Second': {
'properties': {
'type': {
'const': 'second',
'default': 'second',
'type': 'string',
},
},
'required': [],
'title': 'Second',
'type': 'object',
},
},
},
'info': {
'title': 'Litestar API',
'version': '1.0.0',
},
'openapi': '3.1.0',
'paths': {
'/': {
'get': {
'deprecated': False,
'operationId': 'HelloWorld',
'responses': {
'200': {
'content': {
'application/json': {
'schema': {
- 'discriminator': {
- 'mapping': {
- 'first': '#/components/schemas/First',
- 'second': '#/components/schemas/Second',
- },
- 'propertyName': 'type',
- },
'oneOf': [
{
'$ref': '#/components/schemas/First',
},
{
'$ref': '#/components/schemas/Second',
},
],
},
},
},
'description': 'Request fulfilled, document follows',
'headers': {},
},
},
'summary': 'HelloWorld',
},
},
},
'servers': [
{
'url': '/',
},
],
}
==================== 1 failed, 1 passed, 1 warning in 0.61s ====================
```
### Litestar Version
2.15.1
Pydantic 2.10.6
### Platform
- [ ] Linux
- [x] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2025-03-06T11:22:35Z | 2025-03-06T14:19:24Z | https://github.com/litestar-org/litestar/issues/4048 | [
"Enhancement"
] | vrslev | 2 |
python-restx/flask-restx | flask | 412 | add password to protect the UI page | **Ask a question**
The UI page is exposed to the public and everyone can play with the endpoints. This is dangerous if we cannot add a password to protect the UI page.
| open | 2022-02-11T20:43:02Z | 2022-02-17T13:52:21Z | https://github.com/python-restx/flask-restx/issues/412 | [
"question"
] | kinizumi | 1 |
vitalik/django-ninja | pydantic | 1,251 | [BUG] Sync-only Authentication Callbacks not Working on Async Operations | **Describe the bug**
If an authentication callback only works in a sync context, it will not work on async views.
Example Code:
```python
from ninja import NinjaAPI
from ninja.security import django_auth

api = NinjaAPI()


@api.get("/foobar", auth=django_auth)
async def foobar(request) -> str:
    return "foobar"
```
Accessing the view will raise: `django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async.`
This is because accessing `request.user` involves a DB query (when using Django's default session engine) and can only be run in a sync context. But `AsyncOperation._run_authentication()` does not switch to a sync context before invoking the authentication callback.
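As a rough illustration of the usual workaround, the sync-only callback can be offloaded to a worker thread from the async authentication path. This sketch uses plain `asyncio`; in a real Django project `asgiref.sync.sync_to_async` plays this role, and all names below are stand-ins rather than Django Ninja's actual internals:

```python
import asyncio


def sync_only_auth(token):
    # Stand-in for a callback that must run synchronously, like touching
    # request.user when the DB-backed session engine is in use.
    return {"user": "alice"} if token == "secret" else None


async def run_authentication(token):
    # Offload the sync callback to a worker thread instead of calling it
    # directly, so the event loop thread never performs the blocking work.
    return await asyncio.to_thread(sync_only_auth, token)


result = asyncio.run(run_authentication("secret"))
print(result)
```

The worker thread has no running event loop, which is why this pattern (and `sync_to_async`) sidesteps Django's `SynchronousOnlyOperation` check.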
**Versions (please complete the following information):**
- Python version: 3.12
- Django version: 5.0.7
- Django-Ninja version: 1.2.2
- Pydantic version: 2.8.2
| open | 2024-08-03T23:49:21Z | 2024-08-04T04:59:54Z | https://github.com/vitalik/django-ninja/issues/1251 | [] | Xdynix | 1 |
tensorpack/tensorpack | tensorflow | 1,489 | Could not reproduce imagenet-resnet50 accuracy using keras | Hi Tensorpack team!
I'm trying to reproduce resnet50 validation accuracy (75%) using the code in:
https://github.com/tensorpack/tensorpack/blob/master/examples/keras/imagenet-resnet-keras.py
running on 8 Titan X GPUs with TOTAL_BATCH_SIZE=512.
The first experiment uses the code as is; the maximum validation accuracy is 71.5%.
The second experiment adds weight decay as described [here](https://jricheimer.github.io/keras/2019/02/06/keras-hack-1/#:~:text=Weight%20decay%2C%20or%20L2%20regularization,decrease%20during%20the%20training%20process.) with alpha=1e-5; the accuracy stays the same.
Is there anything I should change in the code / environment in order to get the 75%?
Thanks,
Yochay
| closed | 2020-10-21T13:45:09Z | 2020-10-28T17:45:44Z | https://github.com/tensorpack/tensorpack/issues/1489 | [] | YochayTzur | 4 |
matplotlib/mplfinance | matplotlib | 489 | Bug Report: high and low tick marks inverse | **Describe the bug**
Sometimes the high and low tick marks are inverted. For example, instead of the high marker being on top of the candlebox, it is within the candlebox.
**To Reproduce**
Steps to reproduce the behavior:
use this code
```
import mplfinance as fplt  # the alias used below

for start, end in r.items():
    image = fplt.plot(
        df2.loc[start:end],
        type="candle",
        # mav=(2),
        figratio=(12, 8),
        volume=False,
        axisoff=True,
        show_nontrading=False,
        style='classic',
        savefig=path1 + str(f) + str(start) + str(end) + ".png",
    )
```
**Expected behavior**
It is expected that all the candleboxes will have the high and low tick markers above or below the box, not within it.
**Desktop (please complete the following information):**
- OS: Mac
- Browser chrome
**Additional context**
<img width="687" alt="Screen Shot 2022-01-05 at 9 20 51 PM" src="https://user-images.githubusercontent.com/96645673/148327633-9cdb5f58-7e2e-4b52-a6af-1ea30d390ff3.png">
| closed | 2022-01-06T04:24:21Z | 2022-01-11T16:26:10Z | https://github.com/matplotlib/mplfinance/issues/489 | [
"bug"
] | spicker22 | 2 |
manrajgrover/halo | jupyter | 124 | Bug: Halo install requires UTF-8 locale to be set | Thank you for the software. I recently added it to an internal tool and now am installing Halo on CI.
I hit this issue when installing Halo on some CI machines.
Some systems do not have UTF-8 set as the default locale.
Halo does not work in these cases.
This is because "README.md" is a UTF-8 encoded file, but `open()` uses the default encoding.
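A sketch of the kind of fix this points at: pass an explicit `encoding` to `open()` instead of relying on the locale default. The temp file below stands in for the UTF-8 `README.md`; on Python 2 the same is done with `io.open`.

```python
import os
import tempfile

# Stand-in for README.md: valid UTF-8 bytes that are not pure ASCII.
fd, path = tempfile.mkstemp(suffix=".md")
with os.fdopen(fd, "wb") as f:
    f.write("Halo \u2713 spinners".encode("utf-8"))

# The failing pattern is open(path) with no encoding argument, which uses
# the locale's preferred encoding and blows up under an ASCII locale.
# An explicit encoding behaves the same everywhere:
with open(path, encoding="utf-8") as infile:
    long_description = infile.read()

os.remove(path)
print(long_description)
```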
One way to reproduce this is with Docker:
```
$ docker run -it ubuntu /bin/bash
# apt-get -qq update
# apt-get -qq install -y python3 python3-pip
# python3 -c 'import locale; print(locale.getpreferredencoding(False))'
ANSI_X3.4-1968 # << Note, this is not UTF-8
# pip3 install halo
Collecting halo
  Downloading https://files.pythonhosted.org/packages/d5/14/e2b6180addc38803b8170afb798a06c2e407e79efb8e14591e8820e718d3/halo-0.0.23.tar.gz
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/tmp/pip-build-5gyk692_/halo/setup.py", line 10, in <module>
        long_description = infile.read()
      File "/usr/lib/python3.6/encodings/ascii.py", line 26, in decode
        return codecs.ascii_decode(input, self.errors)[0]
    UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 3604: ordinal not in range(128)
    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-5gyk692_/halo
``` | closed | 2019-05-02T13:48:05Z | 2019-05-29T04:26:40Z | https://github.com/manrajgrover/halo/issues/124 | [] | adamtheturtle | 0 |
nerfstudio-project/nerfstudio | computer-vision | 3,542 | Re-train a reconstructed scene, adding extra images | Hello,
I know that with COLMAP and HLOC is possible to register new images in an existing reconstrurtion.
My question is:
If I add some new images, does it need to train from the beggining the nerfacto model?
Is there anyway to continue the training from the last checkpoint but with new images or do I need to train a new model from the beginning?
Thanks in advance
| open | 2024-12-06T10:37:07Z | 2025-01-04T07:24:20Z | https://github.com/nerfstudio-project/nerfstudio/issues/3542 | [] | vrahnos3 | 1 |
jeffknupp/sandman2 | rest-api | 27 | maturity stage of sandman2 | Hi there,
What is the current maturity level of sandman2? Can it be used in production, or would it be better to use sandman1?
| closed | 2016-01-31T23:53:11Z | 2016-02-01T19:36:40Z | https://github.com/jeffknupp/sandman2/issues/27 | [] | lerrua | 3 |
FactoryBoy/factory_boy | django | 261 | float_to_decimal fails when FloatOperation trapped in Decimal context | Hi,
If I set
`getcontext().traps[FloatOperation] = True`
then FuzzyDecimal fails because it does indeed convert a float directly to a decimal without first converting it to a string.
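The trap behaviour is reproducible with just the standard library (on a modern Python 3, where `FloatOperation` lives in `decimal`):

```python
from decimal import Decimal, FloatOperation, getcontext

getcontext().traps[FloatOperation] = True

# A float passed straight to Decimal now raises FloatOperation...
try:
    Decimal(1.5)
    trapped = False
except FloatOperation:
    trapped = True

# ...while going through str() first stays within the trap's rules
# (and avoids binary-float rounding noise as a bonus).
value = Decimal(str(1.5))
print(trapped, value)
```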
This is happening for Python 2.6 but not above. Is there a reason for this? Or, am I misunderstanding something about the use of Decimals?
Many thanks
| closed | 2016-01-05T19:24:35Z | 2016-02-09T23:09:57Z | https://github.com/FactoryBoy/factory_boy/issues/261 | [] | dschien | 6 |
jschneier/django-storages | django | 677 | SSLError: [SSL] malloc failure (_ssl.c:2990) / AWS S3 | My rig:
Python 3.7.2
Django 2.1.7
BOTO3 1.9.106
botocore 1.12.106
dj-storages 1.7.1
Hmmmm
What to do...
I have turned off use_ssl for now just to make it work...
Let's see what happens....
Hopefully it can work without SSL
otherwise my site is fucked... | closed | 2019-03-04T08:40:48Z | 2019-09-08T03:10:31Z | https://github.com/jschneier/django-storages/issues/677 | [
"s3boto"
] | gotexis | 3 |
replicate/cog | tensorflow | 1,829 | Force docker buildx to build OCI compatible images | Currently the code in the docker build section of cog does not specify the format to use and delegates this responsibility to the host platform https://github.com/replicate/cog/blob/eae972dd69dad240eef95e80ba45deb4782bcf48/pkg/docker/build.go#L42
I would suggest we use `--type=oci` to be more compatible with a wider range of products (or allow the type to be defined as a user argument). | open | 2024-07-25T21:30:56Z | 2024-07-25T21:30:57Z | https://github.com/replicate/cog/issues/1829 | [] | 8W9aG | 0 |
piccolo-orm/piccolo | fastapi | 1,118 | Encode strings as JSON in `where` clauses | We just added more powerful JSON filtering, but when integrating it into Piccolo Admin, I realised there's a bug here:
https://github.com/piccolo-orm/piccolo/blob/448a818e3e7420e6f052b6e79473bbeee0b3e76f/piccolo/query/operators/json.py#L15-L16
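As a sketch of the intended rule, JSON-encode plain strings such as `'hello world'`, but leave values that already parse as JSON untouched. The helper name here is hypothetical, not Piccolo's actual API:

```python
import json


def encode_where_value(value: str) -> str:
    """Hypothetical helper: return `value` unchanged if it is already
    valid JSON, otherwise JSON-encode it so the database receives a
    JSON string literal."""
    try:
        json.loads(value)
    except json.JSONDecodeError:
        return json.dumps(value)
    return value


print(encode_where_value("hello world"))                 # "hello world"
print(encode_where_value('{"message": "hello world"}'))  # unchanged
```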
We should also encode strings as JSON e.g. `'hello world'` -> `'"hello world"'`. However, if the string is already valid JSON, we should leave it alone (e.g. `'{"message": "hello world"}'`). | open | 2024-10-25T17:24:39Z | 2024-10-25T19:54:32Z | https://github.com/piccolo-orm/piccolo/issues/1118 | [
"bug"
] | dantownsend | 0 |
stanford-oval/storm | nlp | 293 | question mark causes error on windows | When a user types a query that includes a question mark - e.g. "What are floating point numbers?" it tries to create a directory with the question mark, which throws an error. | closed | 2025-01-05T19:53:32Z | 2025-03-08T09:02:28Z | https://github.com/stanford-oval/storm/issues/293 | [] | BBC-Esq | 2 |
axnsan12/drf-yasg | rest-api | 475 | Is there a way to represent multiple responses? | Hi, sorry for my bad English.
I want to represent multiple responses with two serializers.
I found an answer, and the following is a link to it.
[https://stackoverflow.com/questions/55772347/documenting-a-drf-get-endpoint-with-multiple-responses-using-swagger](https://stackoverflow.com/questions/55772347/documenting-a-drf-get-endpoint-with-multiple-responses-using-swagger)
But the answer above uses JSON and requires writing down all of the fields manually.
Is there a way to do it using a serializer?
Or can I make two documents with one URL? | open | 2019-10-18T06:15:48Z | 2025-03-07T12:16:29Z | https://github.com/axnsan12/drf-yasg/issues/475 | [
"triage"
] | darkblank | 0 |
modoboa/modoboa | django | 2,612 | Missing sitestatic/bootstrap/dist/css/bootstrap.min.css.map | # Impacted versions
* OS Type: Debian
* OS Version: bullseye
* Database Type: PostgreSQL
* Database version: 13.8-0+deb11u1
* Modoboa:2.0.1
* installer used: Yes
* Webserver: Nginx
# Current behavior
Hello,
When a page loads in the v1 GUI, there is always this request for a CSS source map, which is not found:
Failed to load resource: the server responded with a status of 404 (Not Found)
https://mail.xxxx.fr/sitestatic/bootstrap/dist/css/bootstrap.min.css.map
Running `locate bootstrap.min.css.map` finds nothing.
```
root@mailhub:/srv/modoboa# ll /srv/modoboa/instance/sitestatic/css/
total 52
-rw-r--r-- 1 modoboa modoboa 4370 Sep 20 12:31 custom.css
-rw-r--r-- 1 modoboa modoboa 80 Sep 20 12:31 jquery.sortable.css
-rw-r--r-- 1 modoboa modoboa 226 Sep 20 12:31 login.css
-rw-r--r-- 1 modoboa modoboa 550 Sep 20 12:31 logo-icon.png
-rw-r--r-- 1 modoboa modoboa 13670 Sep 20 12:31 modoboa.png
-rw-r--r-- 1 modoboa modoboa 630 Sep 20 12:31 offline.css
-rw-r--r-- 1 modoboa modoboa 348 Sep 20 12:31 searchbar.css
-rw-r--r-- 1 modoboa modoboa 2545 Sep 20 12:31 spinner.gif
-rw-r--r-- 1 modoboa modoboa 2485 Sep 20 12:31 viewmail.css
```
| closed | 2022-09-23T08:25:58Z | 2022-09-26T06:32:15Z | https://github.com/modoboa/modoboa/issues/2612 | [] | stefaweb | 2 |
mwaskom/seaborn | data-visualization | 3,741 | Seaborn with scanpy and statannotations | Hello,
I am trying to add statistical annotation on a boxplot I generated with scanpy but I get those errors :
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
scanpy 1.10.2 requires seaborn>=0.13, but you have seaborn 0.11.2 which is incompatible.
So I upgraded seaborn
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
statannotations 0.6.0 requires seaborn<0.12,>=0.9.0, but you have seaborn 0.13.0 which is incompatible.
How do I solve that ?
Best regards,
Lionel Lenoir | closed | 2024-07-30T07:48:35Z | 2024-07-30T12:00:31Z | https://github.com/mwaskom/seaborn/issues/3741 | [] | LioLenr | 1 |
pyg-team/pytorch_geometric | deep-learning | 9,638 | Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults | ### 🐛 Describe the bug
## Bug Description
The latest release mentions fixing the issue of converting the model to TorchScript when it contains message_passing. However, we tested it and found that this bug remains.
### Bug Details:
#### The error message:
> Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
> File "some_path/pytorch_geometric-master/torch_geometric/nn/conv/message_passing.py", line 425
> edge_index: Adj,
> size: Size = None,
> **kwargs: Any,
> ~~~~~~~ <--- HERE
> ) -> Tensor:
> r"""The initial call to start propagating messages.
#### The toy model taken from [official tutorial](https://pytorch-geometric.readthedocs.io/en/latest/notes/create_gnn.html):
```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.nn import MessagePassing
from torch.nn import Linear, Parameter
from torch_geometric.utils import add_self_loops, degree


class GCNConv(MessagePassing):
    def __init__(self, in_channels, out_channels):
        super().__init__(aggr='add')
        self.lin = Linear(in_channels, out_channels, bias=False)
        self.bias = Parameter(torch.empty(out_channels))
        self.reset_parameters()

    def reset_parameters(self):
        self.lin.reset_parameters()
        self.bias.data.zero_()

    def forward(self, x, edge_index):
        edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))
        x = self.lin(x)
        row, col = torch.split(edge_index, 1, dim=0)  # <-- We modified this line
        deg = degree(col, x.size(0), dtype=x.dtype)
        deg_inv_sqrt = deg.pow(-0.5)
        deg_inv_sqrt[deg_inv_sqrt == float('inf')] = 0
        norm = deg_inv_sqrt[row] * deg_inv_sqrt[col]
        out = self.propagate(edge_index, x=x, norm=norm)
        out = out + self.bias
        return out

    def message(self, x_j, norm):
        return norm.view(-1, 1) * x_j


test_model = GCNConv(10, 10)
```
#### The line of code that triggered the issue:
`script_test_model = torch.jit.script(test_model)`
### A side bug we found during this process:
If we completely follow the official tutorial's code without modifying that line with a comment, i.e.,
```python
# the original code in the tutorial
row, col = edge_index
```
The error message will be:
> Tensor cannot be used as a tuple:
> File "/var/folders/3f/5776h3152rs7rlmtldx8gyxh0000gn/T/ipykernel_15097/3913400558.py", line 14
> x = self.lin(x)
> # row, col = torch.split(edge_index, 1, dim=0)
> row, col = edge_index
> ~~~~~~~~~~ <--- HERE
> deg = degree(col, x.size(0), dtype=x.dtype)
> deg_inv_sqrt = deg.pow(-0.5)
### Versions (Corrected)
I am testing this on my Mac in an environment where torch_geometric is not installed; I downloaded the latest code base and imported it directly from that local source. The issue persists when we test our model on a GPU machine where version 2.5.3 is installed.
Versions of relevant libraries:
[pip3] flake8==3.8.4
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.24.4
[pip3] numpydoc==1.1.0
[pip3] torch==2.2.1
[pip3] torchaudio==2.2.1
[pip3] torchvision==0.17.1
[conda] blas 1.0 mkl
[conda] mkl 2021.4.0 hecd8cb5_637
[conda] mkl-service 2.4.0 py38h9ed2024_0
[conda] mkl_fft 1.3.1 py38h4ab4a9b_0
[conda] mkl_random 1.2.2 py38hb2f4e1b_0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] numpydoc 1.1.0 pyhd3eb1b0_1
[conda] torch 2.2.1 pypi_0 pypi
[conda] torchaudio 2.2.1 pypi_0 pypi
[conda] torchvision 0.17.1 pypi_0 pypi | open | 2024-09-04T01:17:24Z | 2024-09-13T17:37:13Z | https://github.com/pyg-team/pytorch_geometric/issues/9638 | [
"bug"
] | KevinHooah | 5 |
RobertCraigie/prisma-client-py | asyncio | 314 | Add support for composite types | ## Problem
In 3.10, Prisma added support for embedded types. In the schema it looks like this:
```prisma
model User {
  id    Int    @id @default(autoincrement())
  name  String
  photo Photo
}

type Photo {
  width  Int
  height Int
  data   Bytes
}
```
This is currently only supported for MongoDB although there are plans to add support for this to other database providers in the future.
## Suggested solution
I have not looked into the API that Prisma provides for this yet but I imagine that this will be very similar to relational fields.
https://www.prisma.io/docs/concepts/components/prisma-client/composite-types | open | 2022-03-03T16:18:08Z | 2024-11-17T21:42:36Z | https://github.com/RobertCraigie/prisma-client-py/issues/314 | [
"kind/feature",
"topic: client",
"level/advanced",
"priority/medium"
] | RobertCraigie | 5 |
davidsandberg/facenet | tensorflow | 1,044 | how to | closed | 2019-07-01T18:21:01Z | 2019-07-01T18:27:53Z | https://github.com/davidsandberg/facenet/issues/1044 | [] | FredAkh | 0 | |
wkentaro/labelme | computer-vision | 1,164 | I want to select a box and move it, but it always draws a point, which makes it awkward to use and lowers work efficiency | ### Provide environment information
I want to move a box, but it always draws a point, which makes it very awkward to use.
### What OS are you using?
Ubuntu
### Describe the Bug
I want to move a box, but it always draws a point, which makes it very awkward to use.
### Expected Behavior
_No response_
### To Reproduce
_No response_ | open | 2022-08-26T07:28:58Z | 2022-09-26T15:00:28Z | https://github.com/wkentaro/labelme/issues/1164 | [
"issue::bug",
"status: wip-by-author"
] | cqray1990 | 3 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,572 | [SOLVED] No longer scraping undetected after 117 update | Anyone else experiencing their code crashing after the Chrome 117 update? I am suddenly being detected by my target site I'm trying to scrape and unable to get through some dumb javascript puzzle they are sending me. Anyone know any tips they have used to get past this? | open | 2023-09-17T20:40:35Z | 2023-09-27T17:54:45Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1572 | [] | jstoxrocky | 2 |
flasgger/flasgger | api | 248 | Global Model definitions |
Olhei as issues relacionadas e PRs, mas não consegui entender se tem ou não implementado e como fazer para ter um o model definition em um local único, que fique visível para o swagger e possamos usar:
```
schema:
$ref: '#/definitions/My Model'
``` | open | 2018-09-26T17:14:30Z | 2021-03-18T11:23:25Z | https://github.com/flasgger/flasgger/issues/248 | [
"hacktoberfest"
] | andersonkxiass | 2 |
thunlp/OpenPrompt | nlp | 233 | [help] How to do NER using OpenPrompt? | Hello,
Despite my best efforts, I could not succeed in NER tagging using `OpenPrompt`. The docs show an example of a prompt for NER tagging, but I haven't managed to put all the pieces together. I assumed it should be done using `PromptForGeneration` and was trying to adapt https://github.com/thunlp/OpenPrompt/blob/main/tutorial/2.1_conditional_generation.py, but to no avail.
Could someone post a simple example of tagging entities in a sentence? That would be extremely useful and appreciated. | open | 2023-01-15T19:51:56Z | 2023-03-30T07:51:58Z | https://github.com/thunlp/OpenPrompt/issues/233 | [] | megaduks | 2 |
mwaskom/seaborn | data-visualization | 3,817 | ValueError: array must not contain infs or NaNs in seaborn.histplot(... kde=True) | ```
>>> import seaborn
>>> seaborn.__version__
'0.13.2'
>>> seaborn.histplot(x=range(5), weights=[0,0,3,0,0], kde=False)
<Axes: ylabel='Count'>
>>> seaborn.histplot(x=range(5), weights=[0,0,3,0,0], kde=True)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python311\Lib\site-packages\seaborn\distributions.py", line 1416, in histplot
    p.plot_univariate_histogram(
  File "C:\Python311\Lib\site-packages\seaborn\distributions.py", line 447, in plot_univariate_histogram
    densities = self._compute_univariate_density(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\seaborn\distributions.py", line 345, in _compute_univariate_density
    density, support = estimator(observations, weights=weights)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\seaborn\_statistics.py", line 193, in __call__
    return self._eval_univariate(x1, weights)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\seaborn\_statistics.py", line 154, in _eval_univariate
    kde = self._fit(x, weights)
          ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\seaborn\_statistics.py", line 143, in _fit
    kde = gaussian_kde(fit_data, **fit_kws)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\scipy\stats\_kde.py", line 226, in __init__
    self.set_bandwidth(bw_method=bw_method)
  File "C:\Python311\Lib\site-packages\scipy\stats\_kde.py", line 574, in set_bandwidth
    self._compute_covariance()
  File "C:\Python311\Lib\site-packages\scipy\stats\_kde.py", line 586, in _compute_covariance
    self._data_cho_cov = linalg.cholesky(self._data_covariance,
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\scipy\linalg\_decomp_cholesky.py", line 101, in cholesky
    c, lower = _cholesky(a, lower=lower, overwrite_a=overwrite_a, clean=True,
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\scipy\linalg\_decomp_cholesky.py", line 18, in _cholesky
    a1 = asarray_chkfinite(a) if check_finite else asarray(a)
         ^^^^^^^^^^^^^^^^^^^^
  File "C:\Python311\Lib\site-packages\numpy\lib\function_base.py", line 630, in asarray_chkfinite
    raise ValueError(
ValueError: array must not contain infs or NaNs
``` | open | 2025-01-21T13:25:30Z | 2025-01-21T20:33:13Z | https://github.com/mwaskom/seaborn/issues/3817 | [] | petsuter | 5 |
pytorch/vision | computer-vision | 8,661 | references/segmentation/coco_utils might require merging rles? | https://github.com/pytorch/vision/blob/6d7851bd5e2bedc294e40e90532f0e375fcfee04/references/segmentation/coco_utils.py#L27-L41 The code above seems to assume that objects are not occluded, since it does not merge the RLEs returned by `frPyObjects`. In that case, I think it must be changed to
```python
rles = coco_mask.frPyObjects(polygons, height, width)
rle = coco_mask.merge(rles)
mask = coco_mask.decode(rle)
```
Is there any specific reason for this, or am I wrong? | open | 2024-09-26T02:53:47Z | 2024-10-11T13:36:25Z | https://github.com/pytorch/vision/issues/8661 | [] | davidgill97 | 1 |
strawberry-graphql/strawberry | django | 3,567 | Upload inputs not being properly validated | If a client provides invalid input for the `Upload` field (e.g. a string or number), strawberry does not raise any errors and executes the related resolver.
## Describe the Bug
```python
@strawberry.type
class Mutation:
    @strawberry.mutation
    def mutation(self, file: Upload) -> bool:
        return True
```
```graphql
mutation { mutation (file: "just-a-string") }
```
If the client provides invalid input for the `Upload` field of such a mutation, the mutation will be executed without any errors.
<details>
<summary>Test that shows the issue</summary>
Ordinary fields are validated fine, but `Upload` fields are not validated
```python
import pytest
from pytest_mock import MockerFixture
from starlette.testclient import TestClient

import strawberry
from strawberry.file_uploads import Upload
from tests.fastapi.app import create_app


@strawberry.type
class Query:
    empty: None = None


@strawberry.input
class SimpleInput:
    value: bool


@strawberry.input
class UploadInput:
    value: Upload


@pytest.mark.parametrize(
    ("input_value_annotation", "graphql_type", "bad_variable"),
    [
        (bool, "Boolean", "not a boolean"),
        (SimpleInput, "SimpleInput", "just a string"),
        (SimpleInput, "SimpleInput", {"value": "not a boolean"}),
        (UploadInput, "UploadInput", "just a string"),
        (UploadInput, "UploadInput", {"value": "not an upload"}),  # this is currently failing
        (Upload, "Upload", "not an upload"),  # this is currently failing
    ],
)
async def test_mutation_input_validation(
    mocker: MockerFixture, input_value_annotation, graphql_type, bad_variable
):
    mock = mocker.Mock()

    def resolver(value) -> bool:
        mock()
        return True

    # dynamic addition of input field annotation:
    resolver.__annotations__ = {"value": input_value_annotation}

    @strawberry.type
    class Mutation:
        mutation = strawberry.mutation(resolver, graphql_type=bool)

    app = create_app(schema=strawberry.Schema(Query, mutation=Mutation))

    response = TestClient(app).post(
        "/graphql",
        json={
            "query": f"mutation($value: {graphql_type}!) {{ mutation(value: $value) }}",
            "variables": {"value": bad_variable},
        },
    )
    response_json = response.json()

    assert mock.call_count == 0
    assert response_json["data"] is None
    assert response_json["errors"] is not None
```
</details>
## System Information
- Operating system: Linux
- Strawberry version (if applicable): 0.235.2
| open | 2024-07-11T10:13:42Z | 2025-03-20T15:56:47Z | https://github.com/strawberry-graphql/strawberry/issues/3567 | [
"bug"
] | Nnonexistent | 0 |
ageitgey/face_recognition | machine-learning | 1,018 | Is this GPU overkill for face recognition? | * face_recognition version: 1.2.3
* Python version: 3.6.6
* Operating System: Ubuntu 18.04
I am looking ar buying a GPU for face recognition (image and video). I am not sure if there is are diminishing returns on the number of CUDA cores, ram, and clock speed. I see in these notes that the authors recommended the GTX 1050ti as a suitable graphics card (768 cores, 4 GB, 1290/1392 clock speed). The cost on Amazon is about $150-$200. I could also get an ASUS GTX 1660 (1408 cores, 6 GB, 1800/1830 clock speed) for $230. Is this card worth the extra cost, or have I reached the law of diminishing returns and the extra cores, ram, and clock speed will not provide a significant boost in face recognition performance? I am not a gamer, so that added benefit of this card is wasted on me!
Thanks,
Mark | open | 2019-12-28T20:33:02Z | 2019-12-28T20:33:02Z | https://github.com/ageitgey/face_recognition/issues/1018 | [] | pmi123 | 0 |
proplot-dev/proplot | data-visualization | 120 | Overlapping gridlines in cartopy plots | The below is copied from the discussion in #78.
When the grid is not centered on 0, ProPlot used to draw overlapping grids to the right of the dateline:
```python
import proplot as plot
season = "SON"
f, axs = plot.subplots(proj='cyl', proj_kw={'lon_0':180}, width=6)
axs.format(
geogridlinewidth=0.5, geogridcolor='gray8', geogridalpha=0.5, labels=True,
coast=True, suptitle=season+" snow cover bias", ocean=True, oceancolor='gray4'
)
m = axs[0].contourf(
season_clim_diff[0], cmap='ColdHot', levels=np.arange(-45,50,5), extend='max', norm='midpoint'
)
f.colorbar(m, label="Snow Area Fraction [%]")
```

This is related to some fancy gridliner repairs proplot tries to do. Without going into too much detail, my choices were (a) have overlapping gridlines or (b) have non-overlapping gridlines, but the labels above +180 degrees east disappear. I picked the former.
However, I've now added a "monkey patch" in [version 0.2.3](https://proplot.readthedocs.io/en/latest/changelog.html#proplot-v0-2-3-2019-12-05) that fixes both issues; no overlapping gridlines, and labels above +180 degrees are permitted. Might submit a PR to cartopy but will wait until SciTools/cartopy#1117 is released in version 0.18 -- the new gridliner API may solve this issue.
https://github.com/lukelbd/proplot/blob/15ed28fbaa31e6483c4c2eb4615bf5328ae340b1/proplot/axes.py#L3164-L3188
https://github.com/lukelbd/proplot/blob/15ed28fbaa31e6483c4c2eb4615bf5328ae340b1/proplot/axes.py#L3253-L3255 | closed | 2020-02-10T21:55:12Z | 2022-01-22T12:18:14Z | https://github.com/proplot-dev/proplot/issues/120 | [
"bug"
] | lukelbd | 1 |
Anjok07/ultimatevocalremovergui | pytorch | 588 | error | Last Error Received:
Process: VR Architecture
If this error persists, please contact the developers with the error details.
Raw Error Details:
MemoryError: "Unable to allocate 3.86 GiB for an array with shape (2, 673, 384860) and data type complex64"
Traceback Error: "
File "UVR.py", line 4592, in process_start
File "separate.py", line 619, in seperate
File "separate.py", line 743, in inference_vr
"
Error Time Stamp [2023-05-31 14:29:16]
Full Application Settings:
vr_model: 5_HP-Karaoke-UVR
aggression_setting: 10
window_size: 320
batch_size: 4
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: v3 | UVR_Model_1
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Choose Model
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: True
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
save_format: WAV
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems | open | 2023-05-31T06:43:27Z | 2023-05-31T06:43:27Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/588 | [] | Fisory | 0 |
collerek/ormar | sqlalchemy | 781 | select_for_update() QuerySet Method | Implement `select_for_update()` QuerySet method to lock rows until the end of the transaction.
```python
async with database.transaction():
    first = await Person.objects.select_for_update().get(id=1)
    second = await Person.objects.select_for_update().get(id=2)
    first.balance -= 10
    await first.save()
    second.balance += 10
    await second.save()
``` | open | 2022-08-13T07:16:22Z | 2022-08-13T07:16:22Z | https://github.com/collerek/ormar/issues/781 | [
"enhancement"
] | SepehrBazyar | 0 |
skypilot-org/skypilot | data-science | 4,154 | [K8s] Optimizer still shows kubernetes as candidate when the cluster is all occupied | <!-- Describe the bug report / feature request here -->
Currently, when the k8s cluster is fully occupied, the optimizer still shows it as a candidate. For example, in the replica resources optimization result below, it selects k8s as the chosen resource, but actually launches on GCP.
```bash
(base) root@49aaf5a031fc:/skycamp-tutorial/03_inferencing_and_serving# sky serve up service.yaml -n llm-service --env BUCKET_NAME
Service from YAML spec: service.yaml
Verifying bucket for storage skycamp24-finetune-f98d-0
Storage type StoreType.GCS already exists.
Service Spec:
Readiness probe method: GET /v1/models
Readiness initial delay seconds: 1200
Readiness probe timeout seconds: 15
Replica autoscaling policy: Fixed 2 replicas
Spot Policy: No spot fallback policy
Each replica will use the following resources (estimated):
Considered resources (1 node):
----------------------------------------------------------------------------------------------------------------------------------------------------
CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
----------------------------------------------------------------------------------------------------------------------------------------------------
Kubernetes 8CPU--16GB--1L4 8 16 L4:1 gke_skycamp-skypilot-fastchat_us-central1-c_skycamp-gke-test 0.00 ✔
GCP g2-standard-8 8 32 L4:1 us-east4-a 0.85
----------------------------------------------------------------------------------------------------------------------------------------------------
Launching a new service 'llm-service'. Proceed? [Y/n]:
Verifying bucket for storage skycamp24-finetune-f98d-0
Launching controller for 'llm-service'...
Considered resources (1 node):
--------------------------------------------------------------------------------------------------------------------------------------------------
CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
--------------------------------------------------------------------------------------------------------------------------------------------------
Kubernetes 4CPU--4GB 4 4 - gke_skycamp-skypilot-fastchat_us-central1-c_skycamp-gke-test 0.00 ✔
GCP n2-standard-4 4 16 - us-central1-a 0.19
--------------------------------------------------------------------------------------------------------------------------------------------------
⚙︎ Launching serve controller on Kubernetes.
└── Pod is up.
✓ Cluster launched: sky-serve-controller-9f92c97d. View logs at: ~/sky_logs/sky-2024-10-23-00-41-06-916345/provision.log
⚙︎ Mounting files.
Syncing (to 1 node): /tmp/service-task-llm-service-6vgt96ab -> ~/.sky/serve/llm_service/task.yaml.tmp
Syncing (to 1 node): /tmp/tmpz9j3n79w -> ~/.sky/serve/llm_service/config.yaml
✓ Files synced. View logs at: ~/sky_logs/sky-2024-10-23-00-41-06-916345/file_mounts.log
⚙︎ Running setup on serve controller.
Check & install cloud dependencies on controller: done.
✓ Setup completed. View logs at: ~/sky_logs/sky-2024-10-23-00-41-06-916345/setup-*.log
⚙︎ Service registered.
Service name: llm-service
Endpoint URL: [34.55.247.200:30001](http://34.55.247.200:30001/)
📋 Useful Commands
├── To check service status: sky serve status llm-service [--endpoint]
├── To teardown the service: sky serve down llm-service
├── To see replica logs: sky serve logs llm-service [REPLICA_ID]
├── To see load balancer logs: sky serve logs --load-balancer llm-service
├── To see controller logs: sky serve logs --controller llm-service
├── To monitor the status: watch -n10 sky serve status llm-service
└── To send a test request: curl [34.55.247.200:30001](http://34.55.247.200:30001/)
✓ Service is spinning up and replicas will be ready shortly.
(base) root@49aaf5a031fc:/skycamp-tutorial/03_inferencing_and_serving# sky serve status llm-service
Services
NAME VERSION UPTIME STATUS REPLICAS ENDPOINT
llm-service - - NO_REPLICA 0/2 [34.55.247.200:30001](http://34.55.247.200:30001/)
Service Replicas
SERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGION
llm-service 1 1 - - - PENDING -
llm-service 2 1 - - - PENDING -
(base) root@49aaf5a031fc:/skycamp-tutorial/03_inferencing_and_serving# sky serve status llm-service
Services
NAME VERSION UPTIME STATUS REPLICAS ENDPOINT
llm-service - - NO_REPLICA 0/2 [34.55.247.200:30001](http://34.55.247.200:30001/)
Service Replicas
SERVICE_NAME ID VERSION ENDPOINT LAUNCHED RESOURCES STATUS REGION
llm-service 1 1 - a few secs ago 1x GCP({'L4': 1}) PROVISIONING us-east4
llm-service 2 1 - a few secs ago 1x GCP({'L4': 1}) PROVISIONING us-east4
``` | closed | 2024-10-23T00:52:31Z | 2024-12-19T09:31:43Z | https://github.com/skypilot-org/skypilot/issues/4154 | [] | cblmemo | 2 |
Textualize/rich | python | 2,995 | Upgrade markdown-it-py to 3.0.0 | - [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
In one of my projects I use [Bandit](https://github.com/PyCQA/bandit), which uses this project. GitHub's Dependabot has now created a PR in my project to upgrade markdown-it-py from 2.2.0 to 3.0.0, which fails with the following error.
```
...
ERROR: Cannot install -r config/requirements/dev_lock.txt (line 1349) and markdown-it-py==3.0.0 because these package versions have conflicting dependencies.
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
The conflict is caused by:
The user requested markdown-it-py==3.0.0
rich 13.4.1 depends on markdown-it-py<3.0.0 and >=2.2.0
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
Error: Process completed with exit code 1.
```
Please loosen the range for markdown-it-py so that markdown-it-py can be upgraded.
| closed | 2023-06-09T15:07:49Z | 2023-06-13T15:43:28Z | https://github.com/Textualize/rich/issues/2995 | [
"Needs triage"
] | epicserve | 4 |
mljar/mercury | data-visualization | 200 | mercury run only seems to run the mercury demo app | Hello,
I have a typical Python project with some Jupyter notebooks. I added a new notebook `mynotebook.ipynb` in the same location as my others, then proceeded to replicate the "getting started" app in that new notebook, per instructions here: https://mercury-docs.readthedocs.io/en/latest/get-started/.
To run this "getting started" app, I navigated in my terminal to the location of `mynotebook.ipynb`, then executed `mercury run mynotebook.ipynb`. It starts.
Then, when opening my browser to `http://127.0.0.1:8000/`, unfortunately, I can only see the demo that's normally viewed via `mercury run demo`. And, indeed, the terminal shows that the built-in demo notebook is being served: `"GET /media/demo-notebook.html HTTP/1.1" 304 0` and not my `mynotebook.ipynb` app.
How can I get `mercury run mynotebook.ipynb` to run my app?
FWIW, yes, the raw YAML config is in the `mynotebook.ipynb` notebook:
```
---
title: Hello
description: Hello App
params:
planet:
input: select
label: Please select a planet
value: Earth
choices: [Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune ]
---
```
Thank you!
| closed | 2023-02-06T02:56:55Z | 2023-02-16T16:37:41Z | https://github.com/mljar/mercury/issues/200 | [
"bug"
] | chris-brinkman | 4 |
huggingface/datasets | tensorflow | 7,359 | There are multiple 'mteb/arguana' configurations in the cache: default, corpus, queries with HF_HUB_OFFLINE=1 | ### Describe the bug
Hey folks,
I am trying to run this code -
```python
from datasets import load_dataset, get_dataset_config_names
ds = load_dataset("mteb/arguana")
```
with HF_HUB_OFFLINE=1
But I get the following error -
```python
Using the latest cached version of the dataset since mteb/arguana couldn't be found on the Hugging Face Hub (offline mode is enabled).
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[2], line 1
----> 1 ds = load_dataset("mteb/arguana")
File ~/env/lib/python3.10/site-packages/datasets/load.py:2129, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2124 verification_mode = VerificationMode(
2125 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
2126 )
2128 # Create a dataset builder
-> 2129 builder_instance = load_dataset_builder(
2130 path=path,
2131 name=name,
2132 data_dir=data_dir,
2133 data_files=data_files,
2134 cache_dir=cache_dir,
2135 features=features,
2136 download_config=download_config,
2137 download_mode=download_mode,
2138 revision=revision,
2139 token=token,
2140 storage_options=storage_options,
2141 trust_remote_code=trust_remote_code,
2142 _require_default_config_name=name is None,
2143 **config_kwargs,
2144 )
2146 # Return iterable dataset in case of streaming
2147 if streaming:
File ~/env/lib/python3.10/site-packages/datasets/load.py:1886, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)
1884 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=dataset_name)
1885 # Instantiate the dataset builder
-> 1886 builder_instance: DatasetBuilder = builder_cls(
1887 cache_dir=cache_dir,
1888 dataset_name=dataset_name,
1889 config_name=config_name,
1890 data_dir=data_dir,
1891 data_files=data_files,
1892 hash=dataset_module.hash,
1893 info=info,
1894 features=features,
1895 token=token,
1896 storage_options=storage_options,
1897 **builder_kwargs,
1898 **config_kwargs,
1899 )
1900 builder_instance._use_legacy_cache_dir_if_possible(dataset_module)
1902 return builder_instance
File ~/env/lib/python3.10/site-packages/datasets/packaged_modules/cache/cache.py:124, in Cache.__init__(self, cache_dir, dataset_name, config_name, version, hash, base_path, info, features, token, repo_id, data_files, data_dir, storage_options, writer_batch_size, **config_kwargs)
122 config_kwargs["data_dir"] = data_dir
123 if hash == "auto" and version == "auto":
--> 124 config_name, version, hash = _find_hash_in_cache(
125 dataset_name=repo_id or dataset_name,
126 config_name=config_name,
127 cache_dir=cache_dir,
128 config_kwargs=config_kwargs,
129 custom_features=features,
130 )
131 elif hash == "auto" or version == "auto":
132 raise NotImplementedError("Pass both hash='auto' and version='auto' instead")
File ~/env/lib/python3.10/site-packages/datasets/packaged_modules/cache/cache.py:84, in _find_hash_in_cache(dataset_name, config_name, cache_dir, config_kwargs, custom_features)
72 other_configs = [
73 Path(_cached_directory_path).parts[-3]
74 for _cached_directory_path in glob.glob(os.path.join(cached_datasets_directory_path_root, "*", version, hash))
(...)
81 )
82 ]
83 if not config_id and len(other_configs) > 1:
---> 84 raise ValueError(
85 f"There are multiple '{dataset_name}' configurations in the cache: {', '.join(other_configs)}"
86 f"\nPlease specify which configuration to reload from the cache, e.g."
87 f"\n\tload_dataset('{dataset_name}', '{other_configs[0]}')"
88 )
89 config_name = cached_directory_path.parts[-3]
90 warning_msg = (
91 f"Found the latest cached dataset configuration '{config_name}' at {cached_directory_path} "
92 f"(last modified on {time.ctime(_get_modification_time(cached_directory_path))})."
93 )
ValueError: There are multiple 'mteb/arguana' configurations in the cache: queries, corpus, default
Please specify which configuration to reload from the cache, e.g.
load_dataset('mteb/arguana', 'queries')
```
It works when I run the same code with HF_HUB_OFFLINE=0, but after the data is downloaded, I turn off the HF hub cache with HF_HUB_OFFLINE=1, and then this error appears.
Are there some files I am missing with hub disabled?
### Steps to reproduce the bug
```python
from datasets import load_dataset, get_dataset_config_names

ds = load_dataset("mteb/arguana")
```

with HF_HUB_OFFLINE=1 (after already running it with HF_HUB_OFFLINE=0 and populating the datasets cache)
### Expected behavior
Dataset loaded successfully as it does with HF_HUB_OFFLINE=1
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.15.148.2-2.cm2-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.27.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1 | open | 2025-01-06T17:42:49Z | 2025-01-06T17:43:31Z | https://github.com/huggingface/datasets/issues/7359 | [] | Bhavya6187 | 1 |
rasbt/watermark | jupyter | 76 | `-d` doesn't print date, only when combined with `-u` | There is currently an issue that `-d` alone doesn't print the date. It only works with `-u`. In my opinion `-d` should work as a standalone argument.

| closed | 2021-02-24T04:04:06Z | 2024-09-22T20:08:51Z | https://github.com/rasbt/watermark/issues/76 | [
"bug"
] | rasbt | 1 |
neuml/txtai | nlp | 553 | Document how to run API via HTTPS | Add an HTTPS section to the API documentation covering the following configuration options for HTTPS with FastAPI.
- [FastAPI HTTPS](https://fastapi.tiangolo.com/deployment/https/)
- [Uvicorn documentation](https://www.uvicorn.org/deployment/) | closed | 2023-09-11T16:04:06Z | 2023-09-19T12:45:46Z | https://github.com/neuml/txtai/issues/553 | [] | davidmezzetti | 0 |
explosion/spaCy | data-science | 13,747 | Avoid using pip for download models | I use misaki package from pypi which uses spacy.
When I run a script that uses it, it first tries to download something from PyPI using pip, but I use `uv`, which doesn't install `pip` by default.
I can see in
- https://github.com/explosion/spaCy/blob/b3c46c315eb16ce644bddd106d31c3dd349f6bb2/spacy/cli/download.py#L161
that you spawn pip to fetch models.
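For illustration, the release URL that the pip invocation ultimately targets could be built and fetched directly. The filename scheme in this sketch is my assumption; verify it against the spacy-models releases page:

```python
def model_wheel_url(model: str, version: str) -> str:
    """Build the GitHub release URL for a spaCy model wheel.

    The URL pattern mirrors what the pip invocation resolves to; the
    exact filename scheme here is an assumption, not a documented API.
    """
    filename = f"{model}-{version}-py3-none-any.whl"
    return (
        "https://github.com/explosion/spacy-models/releases/download/"
        f"{model}-{version}/{filename}"
    )

print(model_wheel_url("en_core_web_sm", "3.7.1"))
```

The wheel could then be downloaded with `requests`/`urllib` and installed by whatever tool the environment actually has (`uv pip install`, plain pip, or an unzip into site-packages).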
Can you fetch it with requests instead, or let the user decide where and when to fetch it? | open | 2025-02-09T06:57:16Z | 2025-02-28T15:17:06Z | https://github.com/explosion/spaCy/issues/13747 | [] | thewh1teagle | 2 |
d2l-ai/d2l-en | tensorflow | 1,639 | pip install only finds 0.16.0, not 0.16.1 | In the Google Colab notebooks, d2l is installed with version 0.16.1
`!pip install d2l==0.16.1`
However, the newest pip version right now seems to be 0.16.0 ([see here](https://pypi.org/project/d2l/#history)).
Therefore, this line will always fail with:
```
ERROR: Could not find a version that satisfies the requirement d2l==0.16.1 (from versions: 0.8.2, 0.8.5, 0.8.6, 0.8.7, 0.9.1, 0.9.2, 0.10, 0.10.1, 0.10.2, 0.10.3, 0.11.0, 0.11.1, 0.11.2, 0.11.3, 0.11.4, 0.12.0, 0.13.0, 0.13.1, 0.13.2, 0.14.0, 0.14.1, 0.14.2, 0.14.3, 0.14.4, 0.15.0, 0.15.1, 0.16.0)
ERROR: No matching distribution found for d2l==0.16.1
```
Either the wrong version number was published to PyPI, or the wrong version is referenced in Google Colab.
| closed | 2021-01-19T16:26:25Z | 2021-01-19T21:42:45Z | https://github.com/d2l-ai/d2l-en/issues/1639 | [] | floriandonhauser | 2 |
roboflow/supervision | tensorflow | 1,720 | The character's ID changed after a brief loss | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Hello, I am using supervision's ByteTrack. In the video, the male ID is 2 and the female ID is 3. However, when the woman continues to move forward and her track is briefly lost, her ID is no longer 3 once she is detected again; instead the male and female IDs swap. What should I do? Thank you.
Male ID: 2
Girl ID: 3


Male ID: 3
Female ID: 2

### Additional
_No response_ | closed | 2024-12-09T11:06:03Z | 2024-12-09T11:12:54Z | https://github.com/roboflow/supervision/issues/1720 | [
"question"
] | DreamerYinYu | 1 |
home-assistant/core | python | 140,923 | ServiceCall home_connect.set_program_and_options seems to use wrong temperature enums | ### The problem
When setting up a script using the service call `home_connect.set_program_and_options` in the UI editor and choosing the option Temperature set to 40°C, the YAML view shows the option as
`laundry_care_washer_option_temperature: laundry_care_washer_enum_type_temperature_g_c40`
When I then execute the script I get the following error message in the logs:
`Waschmaschine starten Programm Mix 40°C: Error executing script. Invalid data for call_service at pos 1: value must be one of ['laundry_care_washer_enum_type_temperature_cold', 'laundry_care_washer_enum_type_temperature_g_c_20', 'laundry_care_washer_enum_type_temperature_g_c_30', 'laundry_care_washer_enum_type_temperature_g_c_40', 'laundry_care_washer_enum_type_temperature_g_c_50', 'laundry_care_washer_enum_type_temperature_g_c_60', 'laundry_care_washer_enum_type_temperature_g_c_70', 'laundry_care_washer_enum_type_temperature_g_c_80', 'laundry_care_washer_enum_type_temperature_g_c_90', 'laundry_care_washer_enum_type_temperature_ul_cold', 'laundry_care_washer_enum_type_temperature_ul_extra_hot', 'laundry_care_washer_enum_type_temperature_ul_hot', 'laundry_care_washer_enum_type_temperature_ul_warm'] for dictionary value @ data['laundry_care_washer_option_temperature']`
It seems that there is a missing underscore in the enums. When I change it to the correct value in the YAML editor, the script does work.
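The pattern looks like a snake-casing inconsistency between what the UI editor emits and what the validator expects. A toy reproduction; the regexes here are my guesses, not Home Assistant's actual code:

```python
import re

def ui_style_slug(name: str) -> str:
    # Split before every upper-case letter only: "GC40" -> "g_c40".
    return re.sub(r"(?<=[A-Za-z0-9])(?=[A-Z])", "_", name).lower()

def validator_style_slug(name: str) -> str:
    # Additionally split between a letter and a digit: "GC40" -> "g_c_40".
    s = re.sub(r"(?<=[A-Za-z0-9])(?=[A-Z])", "_", name)
    s = re.sub(r"(?<=[A-Za-z])(?=[0-9])", "_", s)
    return s.lower()

print(ui_style_slug("GC40"))         # g_c40
print(validator_style_slug("GC40"))  # g_c_40
```

Whichever converter the UI uses apparently misses the letter-to-digit split, producing `g_c40` where the schema expects `g_c_40`.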
### What version of Home Assistant Core has the issue?
core-2025-3-3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Home Connect
### Link to integration documentation on our website
_No response_
### Diagnostics information
[config_entry-home_connect-01JHACN27N8J8XGCY8ZNDAJCSK.json](https://github.com/user-attachments/files/19339145/config_entry-home_connect-01JHACN27N8J8XGCY8ZNDAJCSK.json)
### Example YAML snippet
```yaml
# This is generated when using the UI editor:
action: home_connect.set_program_and_options
metadata: {}
data:
device_id: bf1a6eed85ea8c5c80c93f491d7b5dfa
affects_to: active_program
program: laundry_care_washer_program_mix
laundry_care_washer_option_temperature: laundry_care_washer_enum_type_temperature_g_c40
# And this SHOULD be the correct form:
action: home_connect.set_program_and_options
metadata: {}
data:
device_id: bf1a6eed85ea8c5c80c93f491d7b5dfa
affects_to: active_program
program: laundry_care_washer_program_mix
laundry_care_washer_option_temperature: laundry_care_washer_enum_type_temperature_g_c_40
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-19T11:03:18Z | 2025-03-20T21:52:47Z | https://github.com/home-assistant/core/issues/140923 | [
"integration: home_connect"
] | sotima | 1 |
vitalik/django-ninja | django | 765 | Add Documentation For Unit Test of Multiple File Upload | Django Ninja is awesome. You guys have made the Django ecosystem so much more useful. I ran into some difficulties trying to figure out how to use your test client (via django-ninja-extra) to POST a multipart request with multiple files. The syntax that was required was not what I was expecting, and I couldn't find it documented anywhere.
Here was the POST endpoint I'm writing a test for (defined with `django-ninja-extra` helpers):
```python
@route.post(
"",
auth=AsyncJWTAuth(),
response=CollectionModelSchema,
url_name="create",
summary="Create a new document collection and queue the documents for processing.",
)
async def create_collection(
self,
title: str = Form(...),
description: str = Form(...),
files: list[UploadedFile] = File(...),
):
```
I ultimately had to do something like this in my test:
```python
from django.core.files.uploadedfile import SimpleUploadedFile
from django.utils.datastructures import MultiValueDict

file_content = b"test content"
# Test data
collection_data = {
"title": "Test Collection",
"description": "A test collection",
}
files = MultiValueDict({
"files": [
SimpleUploadedFile("document1.txt", file_content),
SimpleUploadedFile("document1.txt", file_content)
]
})
response = await self.client.post(
"",
data=collection_data,
FILES=files,
headers=self.headers,
)
```
This is a bit different than what you guys did in your [test](https://github.com/vitalik/django-ninja/blob/1fa3090eef6f122bf174b6b5a28cd00eee10ec2b/tests/test_with_django/test_multi_param_parsing.py#L13) as you tested a form with only a single file. For me, having to pass a FILES parameter that was not only a dict but wrapped in MultiValueDict was unexpected. I am still working to improve my Django test knowledge, however, so perhaps I should have known this. In any case, I thought you might want to add some additional documentation on how to test multi-file uploads like this?
**Describe the solution you'd like**
I'd like to open a PR with some documentation and possibly a unit test for Django Ninja itself if you want to test multiple file multipart uploads.
| open | 2023-05-12T16:11:01Z | 2023-05-12T16:11:01Z | https://github.com/vitalik/django-ninja/issues/765 | [] | JSv4 | 0 |
jwkvam/bowtie | jupyter | 28 | add support for more complex login process | redirect `/` to a `/login` page which is supplied by the user?
The current proposal is to have the user provide the following:
- a login function which returns a bool indicating if the login was successful or not
- a test of whether someone is logged in (i.e. used to decide whether to redirect to the login page or continue to the web app)
- a login page (perhaps as a string or file, maybe provide a default template to make things easy)
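A sketch of what those three user-supplied pieces could look like; all names and signatures below are hypothetical, not an existing Bowtie API:

```python
LOGIN_PAGE = """
<form method="post" action="/login">
  <input name="username"><input name="password" type="password">
  <button>Log in</button>
</form>
"""

def login(username: str, password: str) -> bool:
    """Return True if the login succeeded (user-supplied logic)."""
    return (username, password) == ("admin", "secret")

def is_logged_in(session: dict) -> bool:
    """Predicate the framework uses to redirect to /login or serve the app."""
    return bool(session.get("user"))

print(login("admin", "secret"))  # True
print(is_logged_in({}))          # False
```

The framework would call `is_logged_in` on each request to `/`, serving `LOGIN_PAGE` (or a default template) when it returns False.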
| open | 2016-09-26T18:00:38Z | 2018-07-24T01:43:26Z | https://github.com/jwkvam/bowtie/issues/28 | [
"low-priority",
"moderate"
] | jwkvam | 0 |
aiortc/aioquic | asyncio | 456 | `QuicStreamAdapter` should implement more methods of `BaseTransport` | TLDR: at least the following need to be implemented:
- `is_closing`
- `close`
## Long Story
It seems that `StreamWriter` returned by `QuicConnectionProtocol#create_stream` will cause error when GC-ed.
Here is the minimal example:
```python
import asyncio
from aioquic.asyncio import QuicConnectionProtocol
async def tmp():
proto = QuicConnectionProtocol(None)
reader, writer = proto._create_stream(123)
del writer
asyncio.run(tmp())
```
The following exception occurred:
```
Exception ignored in: <function StreamWriter.__del__ at 0x1034b5e40>
Traceback (most recent call last):
File ".../python3.11/asyncio/streams.py", line 395, in __del__
if not self._transport.is_closing():
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../python3.11/asyncio/transports.py", line 25, in is_closing
raise NotImplementedError
NotImplementedError:
```
It is called in `StreamWriter#__del__`:
```python
class StreamWriter:
...
def close(self):
return self._transport.close()
def __del__(self):
if not self._transport.is_closing():
self.close()
``` | closed | 2024-01-16T12:18:16Z | 2024-01-17T01:51:30Z | https://github.com/aiortc/aioquic/issues/456 | [] | lotabout | 2 |
sammchardy/python-binance | api | 1,120 | APIError(code=-2010): This action disabled is on this account. | **Describe the bug**
```
>>> client.create_order(symbol="ETHUPUSDT", side=SIDE_BUY, type=ORDER_TYPE_MARKET, quantity=1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Finn\AppData\Local\Programs\Python\Python310\lib\site-packages\binance\client.py", line 1385, in create_order
    return self._post('order', True, data=params)
  File "C:\Users\Finn\AppData\Local\Programs\Python\Python310\lib\site-packages\binance\client.py", line 374, in _post
    return self._request_api('post', path, signed, version, **kwargs)
  File "C:\Users\Finn\AppData\Local\Programs\Python\Python310\lib\site-packages\binance\client.py", line 334, in _request_api
    return self._request(method, uri, signed, **kwargs)
  File "C:\Users\Finn\AppData\Local\Programs\Python\Python310\lib\site-packages\binance\client.py", line 315, in _request
    return self._handle_response(self.response)
  File "C:\Users\Finn\AppData\Local\Programs\Python\Python310\lib\site-packages\binance\client.py", line 324, in _handle_response
    raise BinanceAPIException(response, response.status_code, response.text)
binance.exceptions.BinanceAPIException: APIError(code=-2010): This action disabled is on this account.
```
**To Reproduce**
```python
from binance.client import Client
from binance.enums import *

client = Client(api_key, secret_key)
client.create_order(symbol="ETHUPUSDT", side=SIDE_BUY, type=ORDER_TYPE_MARKET, quantity=1)
```
**Expected behavior**
I was trying to buy ETHUP, but it said the function was disabled. How do I enable it?
| open | 2022-01-18T23:40:44Z | 2022-01-19T20:28:01Z | https://github.com/sammchardy/python-binance/issues/1120 | [] | Fo3nix | 2 |
quasarstream/python-ffmpeg-video-streaming | dash | 44 | CMAF support | **Is your feature request related to a problem? Please describe.**
CMAF is a newer packaging method that claims to save server space by half and provide low latency.
Here's a [description of CMAF and low latency](https://www.ubik-ingenierie.com/blog/video-streaming-cmaf-and-low-latency/).
It seems FFMPEG also supports this. However, I am absolutely unaware of the switches to be used to make things work.
**Describe the solution you'd like**
Methods to add CMAF support to the library.
| closed | 2020-11-17T02:29:41Z | 2021-03-20T12:20:24Z | https://github.com/quasarstream/python-ffmpeg-video-streaming/issues/44 | [] | swagato-c | 2 |
pytest-dev/pytest-cov | pytest | 642 | Error combining coverage when running with pytest-xdist and a custom coverage plugin | # Summary
I am working on a custom coverage plugin that returns custom file tracers for Python files. When I use this coverage plugin together with pytest-xdist, it fails when combining the coverage data from the xdist workers (see traceback below).
To debug this, I added `keep=True` to the calls to `Coverage.combine` in `pytest_cov/engine.py`.
In an example with three xdist workers:
* Four `.coverage.*` files are created: one from each worker proceses, and one from the master process.
* All four coverage files contain the same set of files in the "files" table.
* The coverage file generated from the master process has an empty "arc" table, and no entry in the "tracer" table.
When combining the coverage files, coverage checks if the "tracer" entries of all coverage files match. If a file is listed in the "files" table, but does not have an entry in the "tracer" table, it uses "" as the default tracer. This conflicts with the "tracer" info from the other data files, which causes coverage to error out.
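A toy model of that consistency check (heavily simplified; the real code in `coverage/sqldata.py` compares SQLite tables rather than dicts):

```python
class DataError(Exception):
    pass

def merge_tracers(datafiles):
    """Merge {measured_file: tracer_name} maps like `coverage combine` does.

    A file present in the "files" table but absent from "tracer" defaults
    to "", which is exactly what the empty master data file contributes.
    """
    merged = {}
    for tracers in datafiles:
        for path, name in tracers.items():
            if path in merged and merged[path] != name:
                raise DataError(
                    f"Conflicting file tracer name for '{path}': "
                    f"'{merged[path]}' vs '{name}'"
                )
            merged[path] = name
    return merged

worker = {"tests/test_example.py": "example_plugin.ExampleCoveragePlugin"}
master = {"tests/test_example.py": ""}  # listed in "files", no "tracer" row
try:
    merge_tracers([worker, master])
except DataError as exc:
    print(exc)
```

With the empty master data file contributing "" for every measured file, the combine step is guaranteed to conflict with any worker that recorded a plugin tracer.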
To resolve this issue, `pytest-cov` could avoid creating that extra, empty coverage data file from the master process.
## Traceback
```
Traceback (most recent call last):
File "<venv_path>/lib/python3.10/site-packages/_pytest/main.py", line 271, in wrap_session
session.exitstatus = doit(config, session) or 0
File "<venv_path>/lib/python3.10/site-packages/_pytest/main.py", line 325, in _main
config.hook.pytest_runtestloop(session=session)
File "<venv_path>/lib/python3.10/site-packages/pluggy/_hooks.py", line 501, in __call__
return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
File "<venv_path>/lib/python3.10/site-packages/pluggy/_manager.py", line 119, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "<venv_path>/lib/python3.10/site-packages/pluggy/_callers.py", line 155, in _multicall
teardown[0].send(outcome)
File "<venv_path>/lib/python3.10/site-packages/pytest_cov/plugin.py", line 339, in pytest_runtestloop
self.cov_controller.finish()
File "<venv_path>/lib/python3.10/site-packages/pytest_cov/engine.py", line 46, in ensure_topdir_wrapper
return meth(self, *args, **kwargs)
File "<venv_path>/lib/python3.10/site-packages/pytest_cov/engine.py", line 355, in finish
self.cov.combine()
File "<venv_path>/lib/python3.10/site-packages/coverage/control.py", line 836, in combine
combine_parallel_data(
File "<venv_path>/lib/python3.10/site-packages/coverage/data.py", line 179, in combine_parallel_data
data.update(new_data, aliases=aliases)
File "<venv_path>/lib/python3.10/site-packages/coverage/sqldata.py", line 760, in update
raise DataError(
coverage.exceptions.DataError: Conflicting file tracer name for '<project_path>/tests/test_example.py': '' vs 'example_plugin.ExampleCoveragePlugin'
```
## Versions
Python 3.10.11 on Windows
```
coverage 7.4.4
pytest 7.4.3
pytest-cov 5.0.0
pytest-xdist 3.5.0
```
The same also fails on Linux with the same package versions.
## Config
`pytest` is run via `python -m pytest --cov --cov-config=pyproject.toml -v --numprocesses=3 --maxschedchunk=1`.
`coverage` is configured as follows:
```toml
[tool.coverage.run]
branch = true
plugins = ["example_plugin"]
concurrency = ["thread", "multiprocessing"]
```
| open | 2024-04-19T14:20:45Z | 2024-09-17T23:09:01Z | https://github.com/pytest-dev/pytest-cov/issues/642 | [] | slanzmich | 4 |
iperov/DeepFaceLive | machine-learning | 159 | RTX3070 Mobile GPU = low performance? | Hello,
I downloaded DFL for NVIDIA and tried to use it, but as soon as I start it I only get 1-2 frames of my webcam in the DFL software.
It looks like it has issues with rendering: it is not responding quickly and does not show any "video", just 1-2 frames, and only for the camera source.
I use a laptop with an RTX 3070, a Ryzen 9 and 16 GB of RAM.
Is my performance too low for using this software? Or are there any hidden settings to improve it? | closed | 2023-05-22T23:11:02Z | 2023-05-23T11:09:57Z | https://github.com/iperov/DeepFaceLive/issues/159 | [] | OnlyStopOnTop | 2 |
giotto-ai/giotto-tda | scikit-learn | 634 | [BUG] Instalation problems | I've tried to run giotto-tda from GitHub, and from a copy of the GitHub repo in PyCharm, installing the package both ways (user and developer installation). In the end the same thing always happens:
"No module named 'gtda.externals.modules'"
It happens with the repository cloned directly from git (git clone on cmd). I've run setup.py before trying some examples, and if I do so:
"Cannot find reference 'packaging' in '__init__.py'" (in reference of pkg_resources.extern.packaging)
The function `version` is defined in pkg_resources' `__init__.py`, but for some reason `extern/__init__.py` fails to call it or something.
I have Boost and CMake installed as the giotto-tda website indicates (Python 3.7.13), and I tried to follow the solution in https://github.com/giotto-ai/giotto-tda/issues/463 for the installation, but when I type it in cmd this happens:
(before the error a lot of code appeared, I only tagged the error)
C:\Users\isbjo\gtda_test_git>python -m pip install -e ".[dev]"
ERROR: Command errored out with exit status 1:
command: 'C:\Users\isbjo\anaconda3\envs\gtda_tests\python.exe' -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\isbjo\\gtda_test_git\\setup.py'"'"'; __file__='"'"'C:\\Users\\isbjo\\gtda_test_git\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps
cwd: C:\Users\isbjo\gtda_test_git\
Complete output (100 lines):
running develop
running egg_info
creating giotto_tda.egg-info
writing giotto_tda.egg-info\PKG-INFO
writing dependency_links to giotto_tda.egg-info\dependency_links.txt
writing requirements to giotto_tda.egg-info\requires.txt
writing top-level names to giotto_tda.egg-info\top_level.txt
writing manifest file 'giotto_tda.egg-info\SOURCES.txt'
reading manifest file 'giotto_tda.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
adding license file 'LICENSE'
writing manifest file 'giotto_tda.egg-info\SOURCES.txt'
running build_ext
Submodule 'gtda/externals/eigen' (https://gitlab.com/libeigen/eigen) registered for path 'gtda/externals/eigen'
Submodule 'gtda/externals/gudhi-devel' (https://github.com/giotto-ai/gudhi-devel) registered for path 'gtda/externals/gudhi-devel'
Submodule 'gtda/externals/hera' (https://github.com/grey-narn/hera) registered for path 'gtda/externals/hera'
Submodule 'gtda/externals/pybind11' (https://github.com/pybind/pybind11) registered for path 'gtda/externals/pybind11'
Cloning into 'C:/Users/isbjo/gtda_test_git/gtda/externals/eigen'...
warning: redirecting to https://gitlab.com/libeigen/eigen.git/
Cloning into 'C:/Users/isbjo/gtda_test_git/gtda/externals/gudhi-devel'...
Cloning into 'C:/Users/isbjo/gtda_test_git/gtda/externals/hera'...
Cloning into 'C:/Users/isbjo/gtda_test_git/gtda/externals/pybind11'...
Submodule path 'gtda/externals/eigen': checked out '25424d91f60a9f858e7dc1c7936021cc1dd72019'
Submodule path 'gtda/externals/gudhi-devel': checked out 'a265b030effa9b34a99a09b0e1b5073e8bb50cb6'
Submodule path 'gtda/externals/hera': checked out '2c5e6c606ee37cd68bbe9f9915dba99f7677dd87'
Submodule path 'gtda/externals/pybind11': checked out '8fa70e74838e93f0db38417f3590ba792489b958'
-- Building for: Visual Studio 17 2022
-- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.19044.
-- The CXX compiler identification is MSVC 19.32.31328.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.32.31326/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- pybind11 v2.6.0 dev1
CMake Warning (dev) at C:/Program Files/CMake/share/cmake-3.23/Modules/CMakeDependentOption.cmake:89 (message):
Policy CMP0127 is not set: cmake_dependent_option() supports full Condition
Syntax. Run "cmake --help-policy CMP0127" for policy details. Use the
cmake_policy command to set the policy and suppress this warning.
Call Stack (most recent call first):
gtda/externals/pybind11/CMakeLists.txt:91 (cmake_dependent_option)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Found PythonInterp: C:/Users/isbjo/anaconda3/envs/gtda_tests/python.exe (found version "3.7.13")
-- Found PythonLibs: C:/Users/isbjo/anaconda3/envs/gtda_tests/libs/python37.lib
-- Performing Test HAS_MSVC_GL_LTCG
-- Performing Test HAS_MSVC_GL_LTCG - Success
-- BOOST_ROOT_PIPELINE:
-- BOOST_ROOT: C:/local;
CMake Error at C:/Program Files/CMake/share/cmake-3.23/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
Could NOT find Boost (missing: Boost_INCLUDE_DIR) (Required is at least
version "1.56")
Call Stack (most recent call first):
C:/Program Files/CMake/share/cmake-3.23/Modules/FindPackageHandleStandardArgs.cmake:594 (_FPHSA_FAILURE_MESSAGE)
C:/Program Files/CMake/share/cmake-3.23/Modules/FindBoost.cmake:2375 (find_package_handle_standard_args)
cmake/HelperBoost.cmake:23 (find_package)
CMakeLists.txt:7 (include)
-- Configuring incomplete, errors occurred!
See also "C:/Users/isbjo/gtda_test_git/build/temp.win-amd64-3.7/Release/CMakeFiles/CMakeOutput.log".
C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\site-packages\setuptools\dist.py:760: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
% (opt, underscore_opt)
C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\site-packages\setuptools\command\easy_install.py:147: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
EasyInstallDeprecationWarning,
C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\isbjo\gtda_test_git\setup.py", line 158, in <module>
cmdclass=dict(build_ext=CMakeBuild))
File "C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\site-packages\setuptools\__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\site-packages\setuptools\_distutils\core.py", line 148, in setup
return run_commands(dist)
File "C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\site-packages\setuptools\_distutils\core.py", line 163, in run_commands
dist.run_commands()
File "C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\site-packages\setuptools\_distutils\dist.py", line 967, in run_commands
self.run_command(cmd)
File "C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\site-packages\setuptools\dist.py", line 1214, in run_command
super().run_command(command)
File "C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\site-packages\setuptools\_distutils\dist.py", line 986, in run_command
cmd_obj.run()
File "C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\site-packages\setuptools\command\develop.py", line 34, in run
self.install_for_development()
File "C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\site-packages\setuptools\command\develop.py", line 114, in install_for_development
self.run_command('build_ext')
File "C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\site-packages\setuptools\_distutils\cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\site-packages\setuptools\dist.py", line 1214, in run_command
super().run_command(command)
File "C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\site-packages\setuptools\_distutils\dist.py", line 986, in run_command
cmd_obj.run()
File "C:\Users\isbjo\gtda_test_git\setup.py", line 105, in run
self.build_extension(ext)
File "C:\Users\isbjo\gtda_test_git\setup.py", line 136, in build_extension
cwd=self.build_temp, env=env)
File "C:\Users\isbjo\anaconda3\envs\gtda_tests\lib\subprocess.py", line 363, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', 'C:\\Users\\isbjo\\gtda_test_git', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=C:\\Users\\isbjo\\gtda_test_git\\gtda\\externals\\modules', '-DPYTHON_EXECUTABLE=C:\\Users\\isbjo\\anaconda3\\envs\\gtda_tests\\python.exe', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=C:\\Users\\isbjo\\gtda_test_git\\gtda\\externals\\modules', '-A', 'x64']' returned non-zero exit status 1.
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\isbjo\anaconda3\envs\gtda_tests\python.exe' -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\isbjo\\gtda_test_git\\setup.py'"'"'; __file__='"'"'C:\\Users\\isbjo\\gtda_test_git\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps Check the logs for full command output.
I think this error is also due to setup.py failing in some way, but I don't know what else to do.
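Looking at it again, the decisive line in the log seems to be `Could NOT find Boost (missing: Boost_INCLUDE_DIR)`, i.e. CMake never locates the Boost headers (the log shows it searching under `C:/local`). If that's right, something like the following should point the build at them — the `C:\local\boost_1_78_0` path is just an example and must match wherever the Boost headers actually live:

```shell
:: cmd.exe sketch -- the Boost path below is an assumption; adjust it
:: to the directory that contains the "boost" header folder.
set BOOST_ROOT=C:\local\boost_1_78_0
set BOOST_INCLUDEDIR=C:\local\boost_1_78_0
python -m pip install -e ".[dev]"
```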
"bug"
] | Bernat-Jorda-Carbonell | 13 |
hankcs/HanLP | nlp | 666 | Parsing personal résumés in txt format | Hi, I have a question I'd like to ask. I currently have the following requirement:
I need to **parse personal job-application résumés in txt format**.
First, extracting the applicant's basic information (name, gender, address, email, etc.) through word segmentation works fine,
**but now I need to extract the applicant's project/work experience from the résumé, and that seems much harder. I haven't found a good way to extract the project-experience information completely — could you give me some guidance on this?**
Thanks!!!
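The best idea I've come up with so far is to split the résumé by section headings and take everything between "项目经验" (project experience) and the next known heading — a rough sketch (the heading list and the résumé text here are made-up examples):

```python
import re

# Made-up example résumé; assumes headings like "项目经验" mark section boundaries.
resume = """姓名:张三
性别:男
项目经验
2016-2017 电商平台开发,负责后端接口。
2017-2018 数据分析系统,负责数据清洗。
教育背景
2012-2016 某大学 计算机科学
"""

SECTION_HEADINGS = ("项目经验", "工作经历", "教育背景")

def extract_section(text, heading):
    """Return the text after `heading`, up to the next known heading (or EOF)."""
    others = "|".join(h for h in SECTION_HEADINGS if h != heading)
    match = re.search(rf"{heading}\s*(.*?)(?=(?:{others})|\Z)", text, flags=re.S)
    return match.group(1).strip() if match else ""

projects = extract_section(resume, "项目经验")
print(projects)
```

This obviously only works when the résumé actually contains such headings, which is why I'm hoping there is a better approach.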
"ignored"
] | altraman00 | 27 |
sigmavirus24/github3.py | rest-api | 907 | UnprocessableEntity when trying to obtain an authorization token with 2FA enabled | ## Version Information
Please provide:
- The version of Python you're using **3.6.5**
- The version of pip you used to install github3.py **pipenv version 2018.7.1 and pip 18.0**
- The version of github3.py, requests, uritemplate, and dateutil installed
- github3.py: 1.2.0
- uritemplate: 3.0.0
- requests: 2.20.1
- dateutil: 2.7.5
## Minimum Reproducible Example
```python
import github3

# `user` and `password` hold the account credentials (defined elsewhere)
def two_fa_callback():
    return input("Please enter 2FA Token: ")

auth = github3.authorize(user, password, two_factor_callback=two_fa_callback, scopes=["repo"])
```
This asks me for the 2FA one-time password. If I enter it, I get the traceback below:
```
File "/Users/xxx/.local/share/virtualenvs/github-cards-4cr74AxL/lib/python3.6/site-packages/github3/api.py", line 26, in deprecation_wrapper
return func(*args, **kwargs)
File "/Users/xxx/.local/share/virtualenvs/github-cards-4cr74AxL/lib/python3.6/site-packages/github3/api.py", line 59, in authorize
client_secret)
File "/Users/xxx/.local/share/virtualenvs/github-cards-4cr74AxL/lib/python3.6/site-packages/github3/github.py", line 462, in authorize
json = self._json(self._post(url, data=data), 201)
File "/Users/xxx/.local/share/virtualenvs/github-cards-4cr74AxL/lib/python3.6/site-packages/github3/models.py", line 156, in _json
raise exceptions.error_for(response)
github3.exceptions.AuthenticationFailed: 401 Must specify two-factor authentication OTP code.
```
## Exception information
When I use the same arguments with `github3.authorize` to simply log in, similar to the approach shown here: https://github3.readthedocs.io/en/master/examples/two_factor_auth.html, everything works fine. However, in that case I need to enter the 2FA code on each request, which is quite annoying.
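What I'd like instead is to obtain the token once (which is what the failing `github3.authorize` call above is for) and then reuse it on later runs via `github3.login(token=...)`. A sketch of the persistence part — the github3 calls are commented out since they need network access, and the file location and stand-in values are just examples:

```python
import json
import os
import tempfile

# Persist the token returned by github3.authorize(...) once, then reuse it
# later via github3.login(token=...) so the 2FA callback is only needed on
# the very first run. The path below is just an example location.
cred_path = os.path.join(tempfile.mkdtemp(), "credentials.json")

def save_token(token, auth_id):
    with open(cred_path, "w") as fh:
        json.dump({"token": token, "id": auth_id}, fh)

def load_token():
    with open(cred_path) as fh:
        data = json.load(fh)
    return data["token"], data["id"]

# auth = github3.authorize(user, password, scopes=["repo"],
#                          two_factor_callback=two_fa_callback)
# save_token(auth.token, auth.id)
save_token("dummy-token", 42)  # stand-in values for this offline sketch
token, auth_id = load_token()
# gh = github3.login(token=token)  # later runs: no 2FA prompt needed
print(token, auth_id)
```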
| open | 2018-11-22T08:04:45Z | 2019-08-25T14:59:06Z | https://github.com/sigmavirus24/github3.py/issues/907 | [] | larsrinn | 7 |