repo_name (string) | topic (string, 30 classes) | issue_number (int64) | title (string) | body (string) | state (string, 2 classes) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|
agronholm/anyio | asyncio | 574 | poetry add anyio[trio] not working | ### Things to check first
- [X] I have searched the existing issues and didn't find my bug already reported there
- [X] I have checked that my bug is still present in the latest release
### AnyIO version
3.7
### Python version
3.10
### What happened?
I installed anyio with `poetry install`, but when I try to install the `trio` backend with `poetry install anyio[trio]` I receive this error:
`no matches found: anyio[trio]`
How do I install the anyio `trio` backend?
### How can we reproduce the bug?
run `poetry install anyio[trio]` | closed | 2023-06-05T06:45:22Z | 2023-06-05T07:30:43Z | https://github.com/agronholm/anyio/issues/574 | [
"bug"
] | deivydaslipskis | 5 |
mitmproxy/pdoc | api | 653 | "View Source" button for @property, @cached_property methods | #### Problem Description
pdoc nicely collects docstrings from both @property as well as @cached_property methods and shows them as documentation for the respective instance attributes. But it doesn't give a "View Source" button to view the source code of the corresponding methods.
I'm not sure if this is intended behaviour. I also couldn't find any directly related issue. Anyway, I'd like to suggest adding a "View Source" button in both cases (and maybe for other related decorators?). I'd say that usually an attribute is a @property (instead of just an instance variable) because there is something interesting inside the @property method, which a reader of the API documentation may well want to see.
I've looked into the source code of pdoc and found the option to produce "View Source" buttons used only in the jinja template. But I couldn't figure out:
* do I just need to add similar code somewhere else in the jinja template (where attributes are treated)?
* or is extra handling needed to extract the source code of a @property in the parsing stage of pdoc and associate it to the corresponding attribute?
Maybe you could give me a hint; then I'd try to provide a pull request.
#### Steps to reproduce the behavior:
Save this to bla.py:
```python
from functools import cached_property


class A:
    """An example class to be documented by pdoc."""

    @property
    def la(self):
        """Print la."""
        print("la")

    @cached_property
    def li(self):
        """Print li."""
        print("li")
```
then:
```bash
pdoc bla.py
```
#### System Information
Paste the output of "pdoc --version" here.
| closed | 2023-12-20T10:35:22Z | 2023-12-22T23:14:03Z | https://github.com/mitmproxy/pdoc/issues/653 | [
"enhancement"
] | tmeyier | 8 |
coqui-ai/TTS | deep-learning | 2,383 | [Feature request] TTS command line tool: Possibility to convert text file to speech |
**🚀 Feature Description**
It would be nice to be able to convert a whole text file into speech, not just a small piece of text.
Especially for blind people there is a lot of text to handle, for example books or articles.
Converting just one sentence or a small string of text at a time is pretty difficult.
**Solution**
Add the above possibility: a command line argument to specify a .txt file for conversion.
Thanks.
| closed | 2023-03-05T19:04:56Z | 2023-06-05T08:11:55Z | https://github.com/coqui-ai/TTS/issues/2383 | [
"help wanted",
"feature request"
] | domasofan | 15 |
marshmallow-code/flask-marshmallow | rest-api | 277 | New release from dev | Hello,
Thank you for this great extension. I love it
I use it with SQLAlchemy, and I can't update flask-sqlalchemy anymore because of the new extension's structure.
Would it be possible to create a new release from the dev branch? The problems are solved there but not yet released, as seen in these issues and PRs:
#247
#249
#260 | closed | 2023-12-19T10:48:18Z | 2023-12-20T07:11:59Z | https://github.com/marshmallow-code/flask-marshmallow/issues/277 | [] | jgriffon | 2 |
ymcui/Chinese-LLaMA-Alpaca-2 | nlp | 205 | How to specify which GPU to use for training during instruction fine-tuning | ### Pre-submission checklist
- [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
- [X] I have read the [project documentation](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki) and the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/wiki/常见问题), and I have searched the existing issues without finding a similar problem or solution.
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [LangChain](https://github.com/hwchase17/langchain), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), etc.; it is recommended to look for solutions in the corresponding projects as well.
### Issue type
Model training and fine-tuning
### Base model
Alpaca-2-13B
### Operating system
Linux
### Detailed description of the problem
As the title says: there are multiple GPUs, but I only want to use one of them for training.
### Dependencies (required for code-related issues)
```
# Paste your dependency information here (inside this code block)
```
### Run logs or screenshots
```
# Paste your run log here (inside this code block)
``` | closed | 2023-08-30T05:37:02Z | 2024-03-18T08:36:59Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca-2/issues/205 | [] | czhcc | 2 |
lanpa/tensorboardX | numpy | 499 | TypeError: 'GraphDef' object does not support indexing | Getting the error
```
Traceback (most recent call last):
File "main.py", line 144, in <module>
train(epoch)
File "main.py", line 97, in train
w.add_graph(net, (input,))
File "/home/x/miniconda3/envs/py3.6_tensorboard/lib/python3.6/site-packages/tensorboardX/writer.py", line 738, in add_graph
self._get_file_writer().add_graph(graph(model, input_to_model, verbose, **kwargs))
File "/home/x/miniconda3/envs/py3.6_tensorboard/lib/python3.6/site-packages/tensorboardX/writer.py", line 138, in add_graph
graph = graph_profile[0]
TypeError: 'GraphDef' object does not support indexing
```
when I try to dump the graph of **Resnet18** (model code is [here](https://github.com/kuangliu/pytorch-cifar))
```python
with SummaryWriter(comment='resnet18') as w:
    w.add_graph(net, (input,))
```
I am using Pytorch 1.2.0 and TensorboardX 1.8
Could you help me to fix this problem? | closed | 2019-08-26T09:18:38Z | 2019-10-23T15:30:14Z | https://github.com/lanpa/tensorboardX/issues/499 | [] | xuehui1991 | 2 |
holoviz/panel | plotly | 7,437 | SyntaxError in ChatFeed reference guide | Looking at the `ChatFeed` reference guide https://panel.holoviz.org/reference/chat/ChatFeed.html for panel==1.5.3:

```bash
Traceback (most recent call last):
File "/Users/runner/work/panel/panel/panel/io/mime_render.py", line 169, in exec_with_return
exec(compile(last_ast, "<ast>", "exec"), global_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<ast>", line 3
SyntaxError: 'await' outside function
```
| closed | 2024-10-24T06:01:49Z | 2024-10-29T17:01:07Z | https://github.com/holoviz/panel/issues/7437 | [
"type: docs"
] | MarcSkovMadsen | 0 |
huggingface/pytorch-image-models | pytorch | 2,390 | [BUG] Usefulness of `GlobalResponseNormMlp` | **Describe the bug**
Does `GlobalResponseNormMlp` make sense for 1D inputs? The "Mlp" in the name and the implementation suggest that the layer can be fed 1D inputs (`BLC`, where L is the sequence length and C the number of features), but the `Grn` implementation requires 2D inputs, and if I'm not wrong GRN only makes sense for 2D inputs since it computes the norm over the spatial dimensions.
Am I right, or could this layer actually be used with the `use_conv=False` flag?
I can propose a PR if the implementation needs to be changed.
**Desktop (please complete the following information):**
- OS: macOS
- This repository version: 1.0.12
- PyTorch version: 2.5 | closed | 2025-01-02T14:29:48Z | 2025-01-03T15:51:12Z | https://github.com/huggingface/pytorch-image-models/issues/2390 | [
"bug"
] | laclouis5 | 1 |
donnemartin/system-design-primer | python | 658 | System design | open | 2022-04-16T11:57:31Z | 2022-04-23T13:17:59Z | https://github.com/donnemartin/system-design-primer/issues/658 | [
"needs-review"
] | vk34code | 0 | |
plotly/dash | dash | 2,273 | [BUG] dcc.Dropdown use outdated styles for react-virtualized-select (cursor should be pointer) | 
Cursor should be "pointer". The underlying library [has fixed this](https://github.com/bvaughn/react-virtualized-select/commit/b2c5fe394ec3145319bde37158d05b3508fbf84a) and dash uses a version which includes the fix (3.1.3), but the stylesheet is ["hardcoded"](https://github.com/plotly/dash/blob/dev/components/dash-core-components/src/components/css/react-virtualized-select%403.1.0.css) to 3.1.0.
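Until the bundled stylesheet is updated, a hedged workaround (the class name comes from react-virtualized-select's own stylesheet; verify it against your rendered DOM) is to override the rule from a custom CSS file in the app's `assets/` folder:

```css
/* Mirror of the upstream 3.1.3 fix: make option rows show a pointer */
.VirtualizedSelectOption {
  cursor: pointer;
}
```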
```
> conda list | grep dash
dash 2.6.1 pyhd8ed1ab_0 conda-forge
dash-bootstrap-components 1.2.1 pyhd8ed1ab_0 conda-forge
dash-core-components 2.0.0 pypi_0 pypi
dash-daq 0.5.0 pypi_0 pypi
dash-html-components 2.0.0 pypi_0 pypi
dash-table 5.0.0 pypi_0 pypi
``` | closed | 2022-10-16T13:08:11Z | 2022-11-02T19:33:25Z | https://github.com/plotly/dash/issues/2273 | [] | olejorgenb | 0 |
ultralytics/ultralytics | machine-learning | 19,499 | Why is the Inference Speed of YOLOv8 ONNX Much Slower Compared to PyTorch (.pt) Model | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
When using YOLOv8 for object detection, I noticed that after converting the PyTorch model (.pt) to ONNX format, the inference speed significantly decreased. Specifically:
ONNX:
Speed: 122.5ms preprocess, 27.4ms inference, 15.2ms postprocess per image at shape (1, 3, 640, 640)
PT:
Speed: 2.8ms preprocess, 23.5ms inference, 14.2ms postprocess per image at shape (1, 3, 384, 640)
I use:
```python
from ultralytics import YOLO

onnx_model = YOLO(r"D:\pythonProject1\my_yolov8\yolov8n.pt")
# Run inference
results = onnx_model(r"D:\useing\Final_Test")
```
On different hardware (e.g., CPU, GPU), the performance of ONNX was consistently worse than the PyTorch model.
I have tried the following optimization measures:
Used ONNX Runtime for inference with CUDA acceleration enabled.
Ensured the ONNX model was exported with optimal settings (e.g., opset version, dynamic axes).
Despite these efforts, the ONNX model still performs slower. Could this be due to differences in model optimization, runtime overhead, or specific limitations of ONNX for YOLOv8? Are there additional steps or configurations I can apply to improve ONNX inference speed?
### Additional
_No response_ | open | 2025-03-03T10:01:04Z | 2025-03-03T10:54:27Z | https://github.com/ultralytics/ultralytics/issues/19499 | [
"question",
"detect",
"exports"
] | WindFreeFoliage | 3 |
google-research/bert | tensorflow | 497 | XNLI fine-tuning download link invalid | I want to test the fine-tuning example at https://github.com/google-research/bert/blob/master/multilingual.md, however the dataset download link https://s3.amazonaws.com/xnli/XNLI-1.0.zip doesn't work. Where can I get the data? I downloaded XNLI from https://github.com/facebookresearch/XNLI; is that what is now required? | closed | 2019-03-13T05:09:14Z | 2019-07-19T08:25:38Z | https://github.com/google-research/bert/issues/497 | [] | yelunightroad | 2 |
laurentS/slowapi | fastapi | 85 | $.ajax and $.getJSON request not limited | Hi everyone, I'm having a problem. When I make too many requests from the browser or Postman, the APIs correctly block me as configured, but if I make a request via AJAX or jquery's $.getJSON, I'm not limited. How can I solve this? | closed | 2022-02-17T20:03:47Z | 2022-02-27T22:57:50Z | https://github.com/laurentS/slowapi/issues/85 | [] | Matt0550 | 2 |
dagster-io/dagster | data-science | 28,615 | Support specifying configuration for backfills | ### What's the use case?
This is a long-standing issue with Dagster. Launching a backfill with a customized configuration is not possible, so users (for example, myself) have to manually launch multiple runs while editing the config for each run.
Given the recent improvements around working with backfills, I feel like it should not be hard to implement.
### Ideas of implementation
Allow selecting multiple partitions in the drop-down partitions menu in the launchpad.
### Additional information
I was really surprised this issue didn't exist! Or at least I couldn't find a duplicate.
### Message from the maintainers
Impacted by this issue? Give it a 👍! We factor engagement into prioritization. | closed | 2025-03-19T19:39:38Z | 2025-03-24T18:30:13Z | https://github.com/dagster-io/dagster/issues/28615 | [
"type: feature-request"
] | danielgafni | 4 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,089 | Can not update from 4.4.4 to 4.4.6 | **Describe the bug**
Cannot update from 4.4.4 to 4.4.6.
Error message:
```
CW: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://deb.globaleaks.org bionic/ Release: The following signatures were invalid: BADSIG 32E6792624045008 GlobaLeaks software signing key <info@globaleaks.org>
W: Failed to fetch http://deb.globaleaks.org/bionic/Release.gpg The following signatures were invalid: BADSIG 32E6792624045008 GlobaLeaks software signing key <info@globaleaks.org>
W: Some index files failed to download. They have been ignored, or old ones used instead.
```
| closed | 2021-11-04T08:02:44Z | 2021-11-04T08:36:49Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3089 | [] | elbill | 1 |
tortoise/tortoise-orm | asyncio | 917 | Bug in nested QuerySet when defined via `objects = CustomManager(...)` | In continuation to https://github.com/tortoise/tortoise-orm/issues/864
Now it works if `manager` is defined like this:
```python
class ManagerModel(AbstractManagerModel):
class Meta:
manager = StatusManager(queryset_cls=StatusQuerySet)
```
_(Thats from tests/testmodels.py)_
However it will not work if `manager` is defined like this:
```python
class ManagerModel(AbstractManagerModel):
objects = StatusManager(queryset_cls=StatusQuerySet)
```
However, it works when line 321 of `tortoise/queryset.py` is changed as follows:
```diff
- queryset = self.__class__.__new__(QuerySet)
+ queryset = self.__class__.__new__(self.__class__)
``` | open | 2021-09-20T19:00:55Z | 2021-09-20T19:00:55Z | https://github.com/tortoise/tortoise-orm/issues/917 | [] | maxcoredev | 0 |
allenai/allennlp | data-science | 5,322 | Switch over to using torch.testing.assert_close in tests | `assert_allclose` is being deprecated. See https://github.com/pytorch/pytorch/issues/61844. | open | 2021-07-19T18:55:19Z | 2021-07-26T18:30:09Z | https://github.com/allenai/allennlp/issues/5322 | [] | epwalsh | 2 |
graphql-python/graphql-core | graphql | 124 | Recursively finding implemented interfaces | Given a GraphQL schema with interface types and types that implement interfaces, it's possible to have a "chain" of implementations. For instance, a schema might have an interface `Entity` implemented by the interface `LivingEntity`, which is then implemented by the interface `Animal`, which is in turn implemented by the object type `Dog`.
Given the `ObjectTypeDefinitionNode` for `Dog` in the schema (e.g. while using a visitor object), is there a way to find all interfaces it implements (i.e. `{Entity, LivingEntity, Animal}`? Right now, the `ObjectTypeDefinitionNode`'s `interfaces` field is a `NamedTypeNode` list rather than an `InterfaceTypeDefinitionNode` list. That makes it impossible to use the `interfaces` field to do this recursively, since we'd find the `NamedTypeNode` for `Animal` and have no way of finding the `InterfaceTypeDefinitionNode` for `Animal`.
The `GraphQLSchema` type does have a private `_implementations_map` dictionary, but it is a mapping from interface to implementations, rather than the other way around. | closed | 2021-03-18T19:38:25Z | 2021-03-29T21:05:11Z | https://github.com/graphql-python/graphql-core/issues/124 | [] | LWprogramming | 4 |
LibrePhotos/librephotos | django | 1,198 | torch causes Fatal Python error: Floating point exception |
# 🐛 Bug Report
## 📝 Description of issue:
The log is filled with python exception traces like the below. I'm scanning in tens of thousands of photos on a fresh Docker install.
> 00:31:21 [Q] CRITICAL reincarnated worker Process-e59e78ff6711490fb016575816db4f62 after death
00:31:21 [Q] INFO Process-5affe1a61cf44377ab85d669f69acbb0 ready for work at 11707
00:31:21 [Q] INFO Process-5affe1a61cf44377ab85d669f69acbb0 processing coffee-uniform-ack-papa 'api.directory_watcher.handle_new_image'
INFO:ownphotos:job f61d95b4-fbe3-4bda-a5e9-3e591c2aefed: calculate aspect ratio: /data/XXXXXXPATHTOMYPHOTOXXXXX.jpg, elapsed: 1.269778
Fatal Python error: Floating point exception
> Current thread 0x00007fa671ffd040 (most recent call first):
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/conv.py", line 456 in _conv_forward
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/conv.py", line 460 in forward
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1520 in _call_impl
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1511 in _wrapped_call_impl
File "/code/api/places365/wideresnet.py", line 95 in forward
File "/code/api/places365/places365.py", line 140 in inference_places365
File "/code/api/models/photo.py", line 271 in _generate_captions
File "/code/api/directory_watcher.py", line 168 in handle_new_image
File "/usr/local/lib/python3.11/dist-packages/django_q/worker.py", line 97 in worker
File "/usr/lib/python3.11/multiprocessing/process.py", line 108 in run
File "/usr/lib/python3.11/multiprocessing/process.py", line 314 in _bootstrap
File "/usr/lib/python3.11/multiprocessing/popen_fork.py", line 71 in _launch
File "/usr/lib/python3.11/multiprocessing/popen_fork.py", line 19 in __init__
File "/usr/lib/python3.11/multiprocessing/context.py", line 281 in _Popen
File "/usr/lib/python3.11/multiprocessing/context.py", line 224 in _Popen
File "/usr/lib/python3.11/multiprocessing/process.py", line 121 in start
File "/usr/local/lib/python3.11/dist-packages/django_q/cluster.py", line 191 in spawn_process
File "/usr/local/lib/python3.11/dist-packages/django_q/cluster.py", line 198 in spawn_worker
File "/usr/local/lib/python3.11/dist-packages/django_q/cluster.py", line 227 in reincarnate
File "/usr/local/lib/python3.11/dist-packages/django_q/cluster.py", line 306 in guard
File "/usr/local/lib/python3.11/dist-packages/django_q/cluster.py", line 167 in start
File "/usr/local/lib/python3.11/dist-packages/django_q/cluster.py", line 158 in __init__
File "/usr/lib/python3.11/multiprocessing/process.py", line 108 in run
File "/usr/lib/python3.11/multiprocessing/process.py", line 314 in _bootstrap
File "/usr/lib/python3.11/multiprocessing/popen_fork.py", line 71 in _launch
File "/usr/lib/python3.11/multiprocessing/popen_fork.py", line 19 in __init__
File "/usr/lib/python3.11/multiprocessing/context.py", line 281 in _Popen
File "/usr/lib/python3.11/multiprocessing/context.py", line 224 in _Popen
File "/usr/lib/python3.11/multiprocessing/process.py", line 121 in start
File "/usr/local/lib/python3.11/dist-packages/django_q/cluster.py", line 66 in start
File "/usr/local/lib/python3.11/dist-packages/django_q/management/commands/qcluster.py", line 37 in handle
File "/usr/local/lib/python3.11/dist-packages/django/core/management/base.py", line 458 in execute
File "/usr/local/lib/python3.11/dist-packages/django/core/management/base.py", line 412 in run_from_argv
File "/usr/local/lib/python3.11/dist-packages/django/core/management/__init__.py", line 436 in execute
File "/usr/local/lib/python3.11/dist-packages/django/core/management/__init__.py", line 442 in execute_from_command_line
File "/code/manage.py", line 31 in <module>
> Extension modules: psutil._psutil_linux, psutil._psutil_posix, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, markupsafe._speedups, charset_normalizer.md, _cffi_backend, torch._C, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, PIL._imaging, PIL._imagingft, yaml._yaml, matplotlib._c_internal_utils, matplotlib._path, kiwisolver._cext, pandas._libs.tslibs.ccalendar, pandas._libs.tslibs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.tslibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._libs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pandas._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join, pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, matplotlib._image, scipy._lib._ccallback_c, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.linalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, 
scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_update, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.linalg._propack._spropack, scipy.sparse.linalg._propack._dpropack, scipy.sparse.linalg._propack._cpropack, scipy.sparse.linalg._propack._zpropack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flow, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, scipy.spatial._ckdtree, scipy._lib.messagestream, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.special._ufuncs_cxx, scipy.special._cdflib, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial.transform._rotation, scipy.ndimage._nd_image, _ni_label, scipy.ndimage._ni_label, scipy.optimize._minpack2, scipy.optimize._group_columns, scipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, scipy.optimize._zeros, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.linalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.optimize._direct, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integrate._lsoda, scipy.special.cython_special, scipy.stats._stats, scipy.stats.beta_ufunc, scipy.stats._boost.beta_ufunc, scipy.stats.binom_ufunc, scipy.stats._boost.binom_ufunc, scipy.stats.nbinom_ufunc, scipy.stats._boost.nbinom_ufunc, scipy.stats.hypergeom_ufunc, scipy.stats._boost.hypergeom_ufunc, scipy.stats.ncf_ufunc, 
scipy.stats._boost.ncf_ufunc, scipy.stats.ncx2_ufunc, scipy.stats._boost.ncx2_ufunc, scipy.stats.nct_ufunc, scipy.stats._boost.nct_ufunc, scipy.stats.skewnorm_ufunc, scipy.stats._boost.skewnorm_ufunc, scipy.stats.invgauss_ufunc, scipy.stats._boost.invgauss_ufunc, scipy.interpolate._fitpack, scipy.interpolate.dfitpack, scipy.interpolate._bspl, scipy.interpolate._ppoly, scipy.interpolate.interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._rgi_cython, scipy.stats._biasedurn, scipy.stats._levy_stable.levyst, scipy.stats._stats_pythran, scipy._lib._uarray._uarray, scipy.stats._ansari_swilk_statistics, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._mvn, scipy.stats._rcont.rcont, scipy.stats._unuran.unuran_wrapper, scipy.cluster._vq, scipy.cluster._hierarchy, scipy.cluster._optimal_leaf_ordering, sklearn.__check_build._check_build, sklearn.utils._isfinite, sklearn.utils.murmurhash, sklearn.utils._openmp_helpers, sklearn.metrics.cluster._expected_mutual_info_fast, sklearn.utils.sparsefuncs_fast, sklearn.preprocessing._csr_polynomial_expansion, sklearn.preprocessing._target_encoder_fast, sklearn.metrics._dist_metrics, sklearn.metrics._pairwise_distances_reduction._datasets_pair, sklearn.utils._cython_blas, sklearn.metrics._pairwise_distances_reduction._base, sklearn.metrics._pairwise_distances_reduction._middle_term_computer, sklearn.utils._heap, sklearn.utils._sorting, sklearn.metrics._pairwise_distances_reduction._argkmin, sklearn.metrics._pairwise_distances_reduction._argkmin_classmode, sklearn.utils._vector_sentinel, sklearn.metrics._pairwise_distances_reduction._radius_neighbors, sklearn.metrics._pairwise_distances_reduction._radius_neighbors_classmode, sklearn.metrics._pairwise_fast, sklearn.neighbors._partition_nodes, sklearn.neighbors._ball_tree, sklearn.neighbors._kd_tree, sklearn.utils.arrayfuncs, sklearn.utils._random, sklearn.utils._seq_dataset, sklearn.linear_model._cd_fast, sklearn._loss._loss, sklearn.svm._liblinear, sklearn.svm._libsvm, 
sklearn.svm._libsvm_sparse, sklearn.utils._weight_vector, sklearn.linear_model._sgd_fast, sklearn.linear_model._sag_fast, sklearn.decomposition._online_lda_fast, sklearn.decomposition._cdnmf_fast, hdbscan.dist_metrics, hdbscan._hdbscan_linkage, hdbscan._hdbscan_tree, hdbscan._hdbscan_reachability, hdbscan._hdbscan_boruvka, sklearn._isotonic, sklearn.tree._utils, sklearn.tree._tree, sklearn.tree._splitter, sklearn.tree._criterion, sklearn.neighbors._quad_tree, sklearn.manifold._barnes_hut_tsne, sklearn.manifold._utils, hdbscan._prediction_utils, PIL._imagingmath, PIL._webp (total: 232)
## 🔁 How can we reproduce it:
Unsure. This happened on a fresh install. I reproduced it by deleting all the librephotos and database folders and running again. I'm running on podman instead of docker but the web interface is working well and I can see that it has found my photos. I don't think the torch library should cause the librephotos job to crash like this. Does it need some exception handling to fail more gracefully?
It's certainly possible this is an artifact of using podman. Here is the podman kube file I'm using with podman play kube (note that in podman Pods, all containers share an IP address and localhost):
```
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-4.9.3
# NOTE: If you generated this yaml from an unprivileged and rootless podman container on an SELinux
# enabled system, check the podman generate kube man page for steps to follow to ensure that your pod/container
# has the right permissions to access the volumes added.
---
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2024-04-08T09:10:59Z"
labels:
app: librephotos
name: librephotos
spec:
containers:
- args:
- postgres
- -c
- fsync=off
- -c
- synchronous_commit=off
- -c
- full_page_writes=off
- -c
- random_page_cost=1.0
env:
- name: POSTGRES_USER
value: docker
- name: POSTGRES_PASSWORD
value: MYPASSWORDHERE
- name: POSTGRES_DB
value: librephotos
image: docker.io/library/postgres:13
name: db
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: storage-storage-librephotos-data-db-host-0
- args:
- nginx
- -g
- daemon off;
image: docker.io/reallibrephotos/librephotos-proxy:latest
name: proxy
ports:
- containerPort: 80
hostPort: 3000
volumeMounts:
- mountPath: /data
name: storage-pictures-host-0
readOnly: true
- mountPath: /protected_media
name: storage-storage-librephotos-data-protected_media-host-1
- image: docker.io/reallibrephotos/librephotos-frontend:latest
name: frontend
securityContext: {}
- env:
- name: DB_PORT
value: "5432"
- name: BACKEND_HOST
value: backend
- name: DB_NAME
value: librephotos
- name: DB_BACKEND
value: postgresql
- name: DB_PASS
value: MYPASSWORDHERE
- name: DB_USER
value: docker
- name: DB_HOST
value: localhost
- name: DEBUG
value: "0"
- name: WEB_CONCURRENCY
value: "1"
- name: ALLOW_UPLOAD
value: "false"
image: docker.io/reallibrephotos/librephotos:latest
name: backend
volumeMounts:
- mountPath: /root/.cache
name: storage-storage-librephotos-data-cache-host-0
- mountPath: /data
name: storage-pictures-host-1
readOnly: true
- mountPath: /protected_media
name: storage-storage-librephotos-data-protected_media-host-2
- mountPath: /logs
name: storage-storage-librephotos-data-logs-host-3
volumes:
- hostPath:
path: /storage/librephotos/data/db
type: Directory
name: storage-storage-librephotos-data-db-host-0
- hostPath:
path: /pictures
type: Directory
name: storage-pictures-host-0
- hostPath:
path: /storage/librephotos/data/protected_media
type: Directory
name: storage-storage-librephotos-data-protected_media-host-1
- hostPath:
path: /storage/librephotos/data/cache
type: Directory
name: storage-storage-librephotos-data-cache-host-0
- hostPath:
path: /pictures
type: Directory
name: storage-pictures-host-1
- hostPath:
path: /storage/librephotos/data/protected_media
type: Directory
name: storage-storage-librephotos-data-protected_media-host-2
- hostPath:
path: /storage/librephotos/data/logs
type: Directory
name: storage-storage-librephotos-data-logs-host-3
```
## Please provide additional information:
- 💻 Operating system: Linux (Fedora CoreOS)
- ⚙ Architecture (x86 or ARM): x86_64
- 🔢 Librephotos version: 2024w14p1 (docker latest)
- 📸 Librephotos installation method (Docker, Kubernetes, .deb, etc.): Docker (but using podman on Fedora CoreOS)
* 🐋 If Docker or Kubernetes, provide docker-compose image tag: latest
- 📁 How is your picture library mounted (Local file system (Type), NFS, SMB, etc.): Local file system
| closed | 2024-04-09T04:45:29Z | 2024-06-11T06:47:40Z | https://github.com/LibrePhotos/librephotos/issues/1198 | [
"bug"
] | rw57 | 3 |
modin-project/modin | pandas | 7,207 | BUG: Falling back to standard Pandas implementation when assigning a dataframe to a columnar selection | ### Modin version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest released version of Modin.
- [X] I have confirmed this bug exists on the main branch of Modin. (In order to do this you can follow [this guide](https://modin.readthedocs.io/en/stable/getting_started/installation.html#installing-from-the-github-main-branch).)
### Reproducible Example
```python
import sys
import modin
import modin.pandas as pd
data = {
'A': [1.234, 5.678],
'B': [1, 2],
'C': ['one', 'two']
}
df = pd.DataFrame(data)
df_selected = df[['A', 'C']]
df[df_selected.columns] = df_selected
```
### Issue Description
The example outputs the following warning
```
UserWarning: `DataFrame.setitem_unhashable_key` is not currently supported by PandasOnRay, defaulting to pandas implementation.
Please refer to https://modin.readthedocs.io/en/stable/supported_apis/defaulting_to_pandas.html for explanation.
UserWarning: Distributing <class 'pandas.core.frame.DataFrame'> object. This may take some time.
```
The same behavior occurs when using `Dask` engine.
### Expected Behavior
I would expect to be able to make this assignment without converting the dataset to pandas and then back to a Modin DataFrame, since I'm working with very large datasets and this operation can become very costly (CPU/IO wise).
There is no code to show an expected behavior.
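As a possible interim workaround (a hypothetical, untested sketch, not an official Modin recommendation): if the fallback is triggered only by the list-valued key, assigning column by column gives `__setitem__` a hashable key each time and may stay on the distributed path. Worth verifying against your Modin version. Shown with plain pandas, whose API `modin.pandas` mirrors, so the snippet is self-contained.

```python
# Hypothetical workaround sketch: assign one column at a time so that
# __setitem__ receives a single hashable label instead of a list of labels.
import pandas as pd  # with modin: import modin.pandas as pd

df = pd.DataFrame({"A": [1.234, 5.678], "B": [1, 2], "C": ["one", "two"]})
df_selected = df[["A", "C"]]

for col in df_selected.columns:  # each key is a single hashable label
    df[col] = df_selected[col]

print(df["C"].tolist())  # ['one', 'two']
```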
### Error Logs
<details>
```python-traceback
UserWarning: `DataFrame.setitem_unhashable_key` is not currently supported by PandasOnRay, defaulting to pandas implementation.
Please refer to https://modin.readthedocs.io/en/stable/supported_apis/defaulting_to_pandas.html for explanation.
UserWarning: Distributing <class 'pandas.core.frame.DataFrame'> object. This may take some time.
```
</details>
### Installed Versions
INSTALLED VERSIONS
------------------
commit : e9dbcc127913db77473a83936e8b6bb94ef84f0d
python : 3.10.13.final.0
python-bits : 64
OS : Linux
OS-release : 5.10.0-28-cloud-amd64
Version : #1 SMP Debian 5.10.209-2 (2024-01-31)
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
Modin dependencies
------------------
modin : 0.29.0+9.ge9dbcc12
ray : 2.9.3
dask : 2024.4.1
distributed : 2024.4.1
hdk : None
pandas dependencies
-------------------
pandas : 2.2.1
numpy : 1.25.2
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 69.1.1
pip : 24.0
Cython : None
pytest : 8.1.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 3.1.3
IPython : 8.21.0
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2024.2.0
gcsfs : 2024.2.0
matplotlib : 3.8.3
numba : 0.58.1
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : 0.22.0
pyarrow : 15.0.1
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.11.4
sqlalchemy : 2.0.28
tables : None
tabulate : None
xarray : 2024.3.0
xlrd : None
zstandard : 0.22.0
tzdata : 2024.1
qtpy : None
pyqt5 : None | closed | 2024-04-19T20:58:22Z | 2024-04-26T15:06:23Z | https://github.com/modin-project/modin/issues/7207 | [
"new feature/request 💬",
"External"
] | cw-igormorgado | 1 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,624 | [Feature Request]: | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
NVIDIA Align Your Steps sampler support?
### Proposed workflow
1. Image generation should allow the user to select the NVIDIA Align Your Steps sampler
### Additional information
You may find the details here : https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/
Also it seems comfui already supports this: https://github.com/comfyanonymous/ComfyUI/commit/644a3ae58d426ffbbc02ef4104034c98e8fc6513 | closed | 2024-04-25T08:09:52Z | 2024-04-28T04:22:33Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15624 | [
"enhancement"
] | risedangel | 5 |
iterative/dvc | machine-learning | 10,419 | bug: running exp with `--queue` always take workspace version for python deps | # Bug Report
## Issue name
exp run --queue: experiment runs with the workspace version of code, ignoring changes made in the experiment.
## Description
When using imports with `dvc exp run --queue`, the imported file is always the workspace version, regardless of its state at the time the experiment was queued.
### Reproduce
1. Create empty git repo.
2. `dvc init`.
3. Create two files: `main.py` and `dep.py`
main.py:
```python
from dep import my_str
print(my_str)
```
dep.py:
```python
my_str = 'main'
```
4. Create dvc.yaml
```yaml
stages:
main:
cmd: python /home/aleksei/git/dvc-simple/main.py
deps:
- /home/aleksei/git/dvc-simple/main.py
- /home/aleksei/git/dvc-simple/dep.py
```
Use absolute paths for `--queue` to work properly.
5. Run `dvc repro`, you'll see `main` printed to stdout, that's ok.
6. Set `my_str = 'queue'` in `dep.py`
7. Run `dvc exp run --name 'bug' --queue`
8. Change `my_str = 'main'` back in `dep.py`.
9. Make sure `git status` says that `dep.py` is not changed.
10. Run `dvc queue start`
11. Check logs of the experiment.
12. You'll see `main` printed to stdout, though you created experiment with `"queue"` in dep.py.
### Expected
Expected stdout to be "queue", not "main".
### Environment information
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 3.50.1 (pip)
-------------------------
Platform: Python 3.10.13 on Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Subprojects:
dvc_data = 3.15.1
dvc_objects = 5.1.0
dvc_render = 1.0.2
dvc_task = 0.4.0
scmrepo = 3.3.2
Supports:
http (aiohttp = 3.9.5, aiohttp-retry = 2.8.3),
https (aiohttp = 3.9.5, aiohttp-retry = 2.8.3)
Config:
Global: /home/aleksei/.config/dvc
System: /etc/xdg/dvc
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: None
Workspace directory: ext4 on /dev/sdb
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/44d5031bae74ad1636ec6d61087b9602
```
**Additional Information (if any):**
| closed | 2024-05-08T06:04:50Z | 2024-05-10T15:32:21Z | https://github.com/iterative/dvc/issues/10419 | [
"awaiting response"
] | alekseik1 | 4 |
marimo-team/marimo | data-visualization | 3,501 | Unable to run ibis in marimo WebAssembly notebooks | ### Describe the bug
Both Ibis and its DuckDB backend work on Pyodide, but somehow they don't work when used in marimo.
I have created an issue over at the ibis-project's issue tracker https://github.com/ibis-project/ibis/issues/10687.
### Environment
<details>
```
{
"marimo": "0.10.14",
"OS": "Darwin",
"OS Version": "24.2.0",
"Processor": "arm",
"Python Version": "3.12.7",
"Binaries": {
"Browser": "132.0.6834.83",
"Node": "v18.16.0"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.22.0",
"packaging": "24.2",
"psutil": "6.1.1",
"pygments": "2.19.1",
"pymdown-extensions": "10.14",
"pyyaml": "6.0.2",
"ruff": "0.9.2",
"starlette": "0.45.2",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "14.1"
},
"Optional Dependencies": {
"anywidget": "0.9.13",
"duckdb": "1.1.3",
"ibis-framework": "9.5.0",
"pandas": "2.2.3",
"pyarrow": "17.0.0"
}
}
```
</details>
### Code to reproduce
```python
import marimo
__generated_with = "0.10.14"
app = marimo.App(width="medium")
@app.cell
def _():
import marimo as mo
return (mo,)
@app.cell
def _():
import ibis
return (ibis,)
@app.cell
def _(ibis):
ibis.duckdb.connect(":memory:")
return
@app.cell
def _():
return
if __name__ == "__main__":
app.run()
```
```sh
marimo export html-wasm minimal.py -o output_dir --mode edit
``` | closed | 2025-01-19T16:56:49Z | 2025-01-20T15:09:34Z | https://github.com/marimo-team/marimo/issues/3501 | [
"bug"
] | kyrre | 4 |
ContextLab/hypertools | data-visualization | 113 | grand challenge: streaming brain decoding | Achieving this grand challenge requires:
1. Support for [streaming data](https://github.com/ContextLab/hypertools/issues/101)
2. [Interactive feature/event labels](https://github.com/ContextLab/hypertools/issues/111)
3. [On-the-fly decoding](https://github.com/ContextLab/hypertools/issues/112)
4. Reading in brain data on the fly, e.g. from an OpenBCI device (e.g. see [this](https://github.com/OpenBCI/OpenBCI_Python) project)
Here's the vision:
The user wears their brain recording device, streaming data into hypertools. Periodically, they focus hard on imagining a word (e.g. picture an apple as intensely as possible for a few seconds). As this happens, they [press spacebar and tag that brain pattern/event with the label "apple"](https://github.com/ContextLab/hypertools/issues/111). This repeats for dozens (hundreds?) of words, and several presentations of each word. The decoding model (labeled brain patterns) is saved out to disk.
Now we switch to "decode" mode. Load in the decoding model and start streaming data from the headset again. Now the user picks a word from the labeled set and focuses hard on bringing that word to mind. Imagine their shock and delight when their brain trajectory moves to the appropriate labeled point, and [the word they were thinking of is highlighted on the display](https://github.com/ContextLab/hypertools/issues/112)!
Here's another (related) vision:
This tool could be used as a benchmark for brain decoding challenges. For example, suppose someone writes a `feature_extractor` function for translating raw brain data into arbitrary features (power spectra, some sort of deep neural network's re-representation of the data, classifier outputs, etc.). In other words, the `feature_extractor` allows us to focus in on the features/components of brain activity we think are important.
We also need a decoding function (`decoder`). This could be based on the Euclidean distance between the current brain patterns and labeled patterns, correlation between patterns, etc. The decoder tells us how to map between extracted features and labeled points, ideally in a robust way.
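A minimal sketch of that `decoder` idea (pure Python, Euclidean nearest neighbor over labeled patterns; the names here are illustrative, not an existing hypertools API):

```python
import math

def decoder(pattern, labeled_patterns):
    """Return the label whose stored pattern is nearest (Euclidean) to `pattern`."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(labeled_patterns, key=lambda label: dist(pattern, labeled_patterns[label]))

# toy labeled set: two "brain patterns" in a 2-D reduced space
labeled = {"apple": [1.0, 0.0], "house": [0.0, 1.0]}
print(decoder([0.9, 0.1], labeled))  # apple
```

Swapping in a correlation-based distance, or a k-nearest-neighbor vote, only changes the `dist`/`min` logic; the interface stays the same.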
Now brain decoding is a matter of finding the right `feature_extractor` function and `decoder` function. | open | 2017-04-28T00:44:53Z | 2018-01-11T19:50:56Z | https://github.com/ContextLab/hypertools/issues/113 | [
"enhancement",
"help wanted",
"awesome",
"pie in the sky"
] | jeremymanning | 2 |
alteryx/featuretools | data-science | 2,012 | Add Vincenty primitive for LatLongs | - We currently have a Haversine primitive to calculate distance (assuming Earth is a sphere)
- [Vincenty distance](https://en.wikipedia.org/wiki/Vincenty%27s_formulae) uses a more accurate ellipsoidal model | open | 2022-04-12T14:53:04Z | 2023-06-26T19:10:16Z | https://github.com/alteryx/featuretools/issues/2012 | [] | gsheni | 0 |
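For reference, the iterative Vincenty inverse formula linked in the request above can be sketched directly (illustrative only: WGS-84 constants, no special handling for nearly antipodal points where the iteration can fail to converge):

```python
import math

def vincenty_m(lat1, lon1, lat2, lon2, tol=1e-12, max_iter=200):
    """Vincenty inverse distance in meters on the WGS-84 ellipsoid (sketch)."""
    a, f = 6378137.0, 1 / 298.257223563       # WGS-84 semi-major axis, flattening
    b = (1 - f) * a
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    L = math.radians(lon2 - lon1)
    U1, U2 = math.atan((1 - f) * math.tan(phi1)), math.atan((1 - f) * math.tan(phi2))
    sinU1, cosU1 = math.sin(U1), math.cos(U1)
    sinU2, cosU2 = math.sin(U2), math.cos(U2)
    lam = L
    for _ in range(max_iter):
        sin_lam, cos_lam = math.sin(lam), math.cos(lam)
        sin_sigma = math.hypot(cosU2 * sin_lam, cosU1 * sinU2 - sinU1 * cosU2 * cos_lam)
        if sin_sigma == 0:
            return 0.0                         # coincident points
        cos_sigma = sinU1 * sinU2 + cosU1 * cosU2 * cos_lam
        sigma = math.atan2(sin_sigma, cos_sigma)
        sin_alpha = cosU1 * cosU2 * sin_lam / sin_sigma
        cos2_alpha = 1 - sin_alpha ** 2
        # cos2_alpha == 0 means an equatorial line
        cos_2sm = cos_sigma - 2 * sinU1 * sinU2 / cos2_alpha if cos2_alpha else 0.0
        C = f / 16 * cos2_alpha * (4 + f * (4 - 3 * cos2_alpha))
        lam_new = L + (1 - C) * f * sin_alpha * (
            sigma + C * sin_sigma * (cos_2sm + C * cos_sigma * (-1 + 2 * cos_2sm ** 2)))
        if abs(lam_new - lam) < tol:
            lam = lam_new
            break
        lam = lam_new
    u2 = cos2_alpha * (a * a - b * b) / (b * b)
    A = 1 + u2 / 16384 * (4096 + u2 * (-768 + u2 * (320 - 175 * u2)))
    B = u2 / 1024 * (256 + u2 * (-128 + u2 * (74 - 47 * u2)))
    d_sigma = B * sin_sigma * (cos_2sm + B / 4 * (cos_sigma * (-1 + 2 * cos_2sm ** 2)
              - B / 6 * cos_2sm * (-3 + 4 * sin_sigma ** 2) * (-3 + 4 * cos_2sm ** 2)))
    return b * A * (sigma - d_sigma)

print(round(vincenty_m(0, 0, 0, 1), 2))  # one degree of longitude at the equator
```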
unit8co/darts | data-science | 2,616 | [QUESTION] indices of groups when using TimeSeries.from_group_dataframe | **Describe the issue linked to the documentation**
I am using `TimeSeries.from_group_dataframe` to create timeseries for weekly sales of some items. Series should be grouped by `'item_id'`, with items sharing potentially useful covariates (eg `'item_category'`)
I am not sure how to handle indices of timeseries. More specifically, when creating series via `TimeSeries.from_group_dataframe`:
- if I specify `drop_group_cols=['item_id']`, then item_id is removed from static covariates. However I don't know how to map elements in the list of series to their `item_id`'s
- if I do NOT specify `drop_group_cols`, then item_id is added to static_covariates. However using this as a covariate is not useful - sometimes even detrimental - for downstream models
Please consider that in my pipeline I am also removing series from the created list depending on some conditions, so I believe there should be a way to handle grouping indices separately from static_covariates.
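In the meantime, one way to keep the id mapping outside the series objects (a self-contained sketch: plain pandas stands in for the darts calls, and the ordering assumption is flagged in the comments):

```python
# Idea: build an explicit item_id -> sub-frame mapping alongside
# TimeSeries.from_group_dataframe, instead of carrying item_id as a static
# covariate.  ASSUMPTION to verify against your darts version: groupby with
# sort=True matches the order in which from_group_dataframe emits its list,
# so index i of that list corresponds to sorted key i.
import pandas as pd

df = pd.DataFrame({
    "item_id": ["b", "b", "a", "a"],
    "week": [1, 2, 1, 2],
    "sales": [10, 11, 20, 21],
})

groups = {key: g.drop(columns="item_id") for key, g in df.groupby("item_id", sort=True)}
order = list(groups)              # ['a', 'b'], parallel to the series list
kept = [0]                        # e.g. indices of series that survived filtering
kept_ids = [order[i] for i in kept]
print(kept_ids)                   # ['a']
```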
Thanks,
Marco
| closed | 2024-12-11T15:19:24Z | 2024-12-12T09:00:32Z | https://github.com/unit8co/darts/issues/2616 | [
"question"
] | marcoscattolin | 2 |
PaddlePaddle/ERNIE | nlp | 771 | A small question from reading the NER source code | In the following code from `evaluation` in the file finetune_ner.py:
The `_dic` dictionary does not store the `lo` arrays corresponding to the [UNK] token, but the corresponding positions are not removed from the `la` array afterwards; only the pad entries are removed.
Suppose the original `lo` has shape 50; after being processed through `_dic` it becomes `merged_lo`, whose shape is only 43. Suppose `la` has shape 69. The code seems to pad directly to 69 and then compute F1. Wouldn't that cause a misalignment when computing the F1 score?
```python
merged_lo = np.array([np.array(l).mean(0) for _, l in six.iteritems(_dic)])
merged_preds = np.argmax(merged_lo, -1)
la = la[np.where(la != (other_tag_id + 1))]  # remove pad
```
"wontfix"
] | promisejia | 1 |
youfou/wxpy | api | 20 | Messages are sent out successfully but then raise the exception "KeyError: 'self'" | Using Python 3.5, on Linux.
```
In [43]: my_wife.send("test")
Out[43]: <ItchatReturnValue: {'MsgID': '8287475435897459665', 'LocalID': '14904491571103', 'BaseResponse': {'RawMsg': '请求成功', 'Ret': 0, 'ErrMsg': '请求成功'}}>

In [44]: Traceback (most recent call last):
  File "/home/mint/.local/lib/python3.5/site-packages/itchat/components/login.py", line 240, in maintain_loop
    msgList = produce_msg(self, msgList)
  File "/home/mint/.local/lib/python3.5/site-packages/itchat/components/messages.py", line 61, in produce_msg
    produce_group_chat(core, m)
  File "/home/mint/.local/lib/python3.5/site-packages/itchat/components/messages.py", line 250, in produce_group_chat
    atFlag = '@' + (chatroom['self']['DisplayName']
KeyError: 'self'
```
| closed | 2017-03-25T13:42:37Z | 2017-03-29T11:49:11Z | https://github.com/youfou/wxpy/issues/20 | [] | slxiao | 2 |
neuml/txtai | nlp | 532 | Add metadata support for client-server databases | Currently, metadata can be stored in embedded databases such as SQLite and DuckDB.
This issue will expand support to client-server [databases via SQLAlchemy](https://docs.sqlalchemy.org/en/20/dialects/index.html) that have [JSON support](https://docs.sqlalchemy.org/en/20/core/type_basics.html#sqlalchemy.types.JSON).
This includes: PostgreSQL, MariaDB/MySQL and Microsoft SQL Server. Oracle has recently added JSON support but it's not supported by SQLAlchemy as of 2.0.20. | closed | 2023-08-25T20:31:17Z | 2023-08-26T01:07:26Z | https://github.com/neuml/txtai/issues/532 | [] | davidmezzetti | 0 |
google-research/bert | nlp | 1,399 | BERT Chinese discussion group, for sharing application and training experience | 
| open | 2024-01-22T03:52:08Z | 2024-03-08T11:45:53Z | https://github.com/google-research/bert/issues/1399 | [] | guozhencs | 2 |
tableau/server-client-python | rest-api | 792 | Publish DataSource with multiple database connections - need multiple connection credentials | It appears the datasources.publish method does not have functionality to use multiple connection_credentials objects. We have several sources that connect to different databases, and while the source will publish and you can connect a workbook with no issues, you cannot refresh the source without manually entering the credentials. | open | 2021-02-04T22:09:34Z | 2023-08-05T20:18:58Z | https://github.com/tableau/server-client-python/issues/792 | [
"bug",
"in-progress"
] | bis4fun | 7 |
lucidrains/vit-pytorch | computer-vision | 198 | RuntimeError: stack expects a non-empty TensorList | Hi,
I tried to add the Extractor and Recorder methods to access the attns and embeddings but can't make it work.
Without those new lines, my code runs well.
Any idea what might be the issue here?
Thank you
```python
model = model.to(device)
# create train loop
for epoch in range(self.epochs):
    phase_loss_running = 0
    phase_acc_running = 0
    for i_train, (images, phase_train_label, video_idx) in tqdm(enumerate(train_dataloader, start=1)):
        if torch.cuda.is_available():
            images = images.to(device)
            phase_train_label = phase_train_label.to(device)
        optimizer.zero_grad(set_to_none=set_grad_to_none)
        if auto_cast:
            with autocast():
                model = Recorder(model)
                pred_train, attns = model(images)
                if i_train == 1:
                    print("attensions.shape: ", attns.shape)
                model = model.eject()
                model = Extractor(model)
                logits, embeddings = model(images)
                if i_train == 1:
                    print("embeddings.shape: ", embeddings.shape)
```
The Error:
```
  File "train_vit.py", line 280, in __init__
    pred_train, attns = model(images)
  File "/home/localadmin/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/localadmin/.local/lib/python3.8/site-packages/vit_pytorch/recorder.py", line 58, in forward
    attns = torch.stack(recordings, dim = 1)
RuntimeError: stack expects a non-empty TensorList
```
localstack/localstack | python | 12,169 | bug: Value for x-amz-checksum-crc32 header is invalid when uploading to S3 via signed URL using AWS SDK v3 for JS | ### Is there an existing issue for this?
- [x] I have searched the existing issues
### Current Behavior
LocalStack throws an error when I'm trying to upload a file using a signed URL generated by AWS SDK v3 (for Javascript).
The part code example:
```ts
const putObjectCommand = new PutObjectCommand({
Bucket: AWS_S3_BUCKET_NAME,
Key: testFileName,
});
const signedUrl = await getSignedUrl(s3Client, putObjectCommand, { expiresIn: 3600 });
const response = await fetch(signedUrl, {
method: 'PUT',
body: testFileContent,
headers: {
'Content-Type': 'text/plain',
},
});
if (!response.ok) {
const responseBody = await response.text();
console.error('Response body:', responseBody);
throw new Error(`Upload failed: ${response.status} ${response.statusText}`);
}
```
The error example:
```xml
<?xml version='1.0' encoding='utf-8'?>
<Error><Code>InvalidRequest</Code><Message>Value for x-amz-checksum-crc32 header is invalid.</Message><RequestId>5cade7dd-e5f4-4b8c-88c5-65f34ae39209</RequestId></Error>
```
### Expected Behavior
The expected behaviour is to get the file uploaded without errors.
The issue happens only when I use **AWS SDK v3 + LocalStack (v4 and v3)**.
It works perfectly when I use AWS SDK v2 + LocalStack (v4 and v3).
It works perfectly when I use the same code with AWS SDK v3 + real AWS.
Also, I found a temporary trick to get it to work:
```ts
const putObjectCommand = new PutObjectCommand({
Bucket: AWS_S3_BUCKET_NAME,
Key: testFileName,
ChecksumCRC32: '', // <<-- This helped me to make it work, but it is not how I would expect it to work.
});
```
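For context on the header LocalStack is rejecting: as far as I understand AWS's flexible-checksums encoding, `x-amz-checksum-crc32` carries the base64 encoding of the big-endian 4-byte CRC32 of the request body. A quick stdlib sketch of the value the header should hold (Python used here just for brevity; the issue itself is about the JS SDK):

```python
import base64
import zlib

def s3_crc32_header(body: bytes) -> str:
    """Value S3 expects in x-amz-checksum-crc32: base64 of the
    big-endian 4-byte CRC32 of the payload."""
    return base64.b64encode(zlib.crc32(body).to_bytes(4, "big")).decode("ascii")

print(s3_crc32_header(b"hello"))  # NhCmhg==
```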
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
To simplify reproducing I've created a repo with the demonstration of the issue: https://github.com/ifree92/localstack-s3-upload-issue-demo
Below is the full code I'm running to reproduce it.
### docker-compose.yml
```yml
networks:
experiments_aws:
name: experiments_aws
driver: bridge
services:
localstack:
image: localstack/localstack:4
ports:
- '4566:4566'
- '4510-4559:4510-4559'
environment:
- DEBUG=0
- DOCKER_HOST=unix:///var/run/docker.sock
- LAMBDA_EXECUTOR=docker-reuse
- DISABLE_CORS_CHECKS=1
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- experiments_aws
```
### main.tf
The terraform file I'm using to create resources
```hcl
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "5.81.0"
}
}
}
provider "aws" {
access_key = "foobar"
region = "us-west-2"
secret_key = "foobar"
# only required for non virtual hosted-style endpoint use case.
# https://registry.terraform.io/providers/hashicorp/aws/latest/docs#s3_use_path_style
s3_use_path_style = true
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
apigateway = "http://localhost:4566"
cloudformation = "http://localhost:4566"
cloudwatch = "http://localhost:4566"
dynamodb = "http://localhost:4566"
es = "http://localhost:4566"
firehose = "http://localhost:4566"
iam = "http://localhost:4566"
kinesis = "http://localhost:4566"
kms = "http://localhost:4566"
lambda = "http://localhost:4566"
route53 = "http://localhost:4566"
redshift = "http://localhost:4566"
s3 = "http://localhost:4566"
secretsmanager = "http://localhost:4566"
ses = "http://localhost:4566"
sns = "http://localhost:4566"
sqs = "http://localhost:4566"
ssm = "http://localhost:4566"
stepfunctions = "http://localhost:4566"
sts = "http://localhost:4566"
}
}
# ====================== S3 ======================
resource "aws_s3_bucket" "system-assets" {
bucket = "system-assets"
}
resource "aws_s3_bucket_cors_configuration" "system-assets-cors" {
bucket = aws_s3_bucket.system-assets.id
cors_rule {
allowed_headers = ["*"]
allowed_methods = ["GET", "PUT", "POST", "DELETE", "HEAD"]
allowed_origins = [
"http://localhost:3000",
"http://localstack:3000",
"http://127.0.0.1:3000",
]
expose_headers = ["ETag"]
max_age_seconds = 3000
}
}
output "s3-bucket_system-assets" {
value = aws_s3_bucket.system-assets.bucket
}
```
### app-s3.ts
```ts
import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
const { AWS_ENDPOINT, AWS_S3_BUCKET_NAME } = process.env;
function provideS3Client() {
return AWS_ENDPOINT
? new S3Client({
region: 'us-west-2',
endpoint: 'http://localhost:4566',
forcePathStyle: true,
useAccelerateEndpoint: false,
credentials: {
accessKeyId: 'foobar',
secretAccessKey: 'foobar',
},
})
: new S3Client({ useAccelerateEndpoint: false });
}
async function main() {
const s3Client = provideS3Client();
const testFileContent = `Content ${Math.random()}`;
const testFileName = `test-file_${new Date().toISOString()}.txt`;
const putObjectCommand = new PutObjectCommand({
Bucket: AWS_S3_BUCKET_NAME,
Key: testFileName,
});
const signedUrl = await getSignedUrl(s3Client, putObjectCommand, { expiresIn: 3600 });
console.log('signedUrl ->', signedUrl);
// Upload the file content using the signed URL
const response = await fetch(signedUrl, {
method: 'PUT',
body: testFileContent,
headers: {
'Content-Type': 'text/plain',
},
});
if (!response.ok) {
const responseBody = await response.text();
console.error('Response body:', responseBody);
throw new Error(`Upload failed: ${response.status} ${response.statusText}`);
}
console.log(`Successfully uploaded ${testFileName} to S3`);
}
main().catch(console.error);
```
### .local.env
```bash
AWS_ACCESS_KEY_ID=foobar
AWS_SECRET_ACCESS_KEY=foobar
AWS_REGION=us-west-2
AWS_ENDPOINT=http://localstack:4566
AWS_S3_BUCKET_NAME=system-assets
```
### package.json
```json
{
"name": "aws-sdk-experiments",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"start": "ts-node src/app-s3.ts"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"@aws-sdk/client-dynamodb": "^3.576.0",
"@aws-sdk/client-s3": "^3.204.0",
"@aws-sdk/credential-providers": "^3.204.0",
"@aws-sdk/lib-dynamodb": "^3.576.0",
"@aws-sdk/s3-request-presigner": "^3.732.0",
"aws-sdk": "^2.1692.0",
"express": "^4.21.2",
"sqs-consumer": "^3.8.0",
"uuid": "^9.0.1"
},
"devDependencies": {
"@types/express": "^5.0.0",
"@types/node": "^18.11.9",
"@types/sqs-consumer": "^5.0.0",
"@types/uuid": "^9.0.8",
"nodemon": "^3.1.9",
"ts-node": "^10.9.2"
}
}
```
How to run:
```
$ npm install
$ docker compose up -d
$ terraform init & terraform apply -auto-approve
$ set -a && source .local.env && set +a && npm start
```
### Environment
```markdown
- OS: macOS 15.2
- Docker version 27.4.0, build bde2b89
- LocalStack:
LocalStack version: 4
LocalStack Docker image sha: https://hub.docker.com/layers/localstack/localstack/4.0/images/sha256-8b1d40975ca01d830d7bb69131e68b9fdfbce9eee1c14405ee21457c869b5904
LocalStack build date: ^^
LocalStack build git hash: ^^
```
### Anything else?
_No response_ | closed | 2025-01-22T23:26:01Z | 2025-03-07T17:31:21Z | https://github.com/localstack/localstack/issues/12169 | [
"type: bug",
"status: resolved/fixed",
"aws:s3"
] | ifree92 | 2 |
aiogram/aiogram | asyncio | 1,218 | Redis storage fails with redis.exceptions.ConnectionError: Error UNKNOWN while writing to socket. Connection lost. | ## Context
I have FSMStorage running on Redis. Redis, like the bot and the API, are on Digital Ocean.
Periodically, I receive this error:
redis.exceptions.ConnectionError: Error UNKNOWN while writing to socket. Connection lost.
I thought it might be due to writing a lot to Redis. So, I replicated Redis, now there are two nodes, but the issue persists. I thought it might be due to high load, but right now there is no load and the error still appears occasionally.
Perhaps there's another issue with Redis, but the API communicates a lot with it and there are no problems. The Redis metrics are also good, everything works. I've loaded Redis and everything is okay with it too.
What can be done about this problem? How to catch and handle it? It emerges too often. Usually with the GET state requests.
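On the "how to catch and handle it" part: absent a fix in the storage itself, one option is to wrap the failing calls in a small retry helper (a generic asyncio sketch; in this setup `redis.exceptions.ConnectionError` would be the exception class to pass in, and the `flaky` coroutine below is only a stand-in for the real Redis call):

```python
import asyncio

async def with_retries(coro_factory, exceptions, attempts=3, base_delay=0.1):
    """Call coro_factory() until it succeeds, retrying the given
    exception types with exponential backoff."""
    for attempt in range(attempts):
        try:
            return await coro_factory()
        except exceptions:
            if attempt == attempts - 1:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt)

# demo: fails twice, then succeeds
calls = {"n": 0}
async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("Error UNKNOWN while writing to socket.")
    return "state"

print(asyncio.run(with_retries(flaky, ConnectionError)))  # state
```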
* Operating System: ubuntu
* Python Version: 3.10.6
* aiogram version: 3.0.0b7
* aiohttp version: 3.8.4
* uvloop version (if installed): 0.17.0
### Failure Logs
Ids and host are changed
[traceback.txt](https://github.com/aiogram/aiogram/files/12050350/traceback.txt)
| closed | 2023-07-14T12:27:23Z | 2023-07-16T21:20:01Z | https://github.com/aiogram/aiogram/issues/1218 | [
"upstream"
] | BobaZooba | 1 |
JaidedAI/EasyOCR | pytorch | 1,172 | Add Gujarati Language | For Group 3 (Devanagari)
Request to add Gujarati Language
[gu.txt](https://github.com/JaidedAI/EasyOCR/files/13487204/gu.txt)
[gu_char.txt](https://github.com/JaidedAI/EasyOCR/files/13487208/gu_char.txt) | open | 2023-11-28T11:14:16Z | 2024-09-08T12:36:41Z | https://github.com/JaidedAI/EasyOCR/issues/1172 | [] | ashudhatma | 1 |
MagicStack/asyncpg | asyncio | 541 | When using connection pool, session parameters are not preserved | When using a connection pool, session parameters such as `search_path`, `session time zone`, `application_name` are not preserved when connection is returned to the pool. As a result, I have to use a setup coroutine that is called every time I acquire a connection from the pool:
```python
async def conn_setup(conn):
await conn.execute("set search_path to my_schema,public")
await conn.execute("set session time zone 'UTC'")
await conn.execute("set application_name to 'my_app'")
pool = await asyncpg.create_pool(..., setup=conn_setup)
```
It means that 3 additional operations will be performed every time I need to run a simple query on a pool. What is the reason that these session parameters are cleared each time? Can we change this logic? | closed | 2020-03-12T23:35:06Z | 2023-08-11T06:29:07Z | https://github.com/MagicStack/asyncpg/issues/541 | [] | sergeyspatar | 3 |
Lightning-AI/pytorch-lightning | pytorch | 20,151 | Support computing parameter count in ModelSummary for FSDP models | ### Description & Motivation
Models that are set up with FSDP (or DTensor) do not show the total parameter count in the ModelSummary.
### Pitch
Compute the shapes correctly (similar to the DeepSpeed summary).
### Alternatives
_No response_
### Additional context
_No response_
cc @borda @awaelchli @carmocca | closed | 2024-08-02T08:07:02Z | 2024-08-05T14:57:33Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20151 | [
"feature",
"callback: model summary",
"strategy: fsdp"
] | awaelchli | 0 |
mwaskom/seaborn | pandas | 3,835 | sns plot to pickle -> itertools deprecation warning for python 3.14 | For some of my apps, I cache figures in [as files via pickle](https://stackoverflow.com/a/12734723/9501624). This works both for "pure matplotlib" as well as seaborn figures.
As of late, I am seeing an itertools depcreciation warning when storing sns plots this way:
```
import pickle
import seaborn as sns
tips = sns.load_dataset("tips")
ax = sns.boxplot(x="day", y="total_bill", data=tips)
pickle.dump(ax.get_figure(), open("temp.pkl", "wb"))
```
_DeprecationWarning: Pickle, copy, and deepcopy support will be removed from itertools in Python 3.14._
This warning does not occur when storing pure matplotlib plots. My guess is that this storing technique will fail for sns plots with python >= 3.14. Is there a way to avoid this?
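Until the upstream pickling path changes, one pragmatic option is to filter that specific warning around the dump. Note this only silences the message; it does not future-proof the pickle for 3.14, where the support is actually removed. A sketch:

```python
import io
import pickle
import warnings

def dump_quietly(obj, fh):
    """Pickle obj to file handle fh, suppressing the itertools pickling
    DeprecationWarning emitted on Python 3.12+."""
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=".*support will be removed from itertools.*",
            category=DeprecationWarning,
        )
        pickle.dump(obj, fh)

# demo with a plain object (a real figure would go here)
buf = io.BytesIO()
dump_quietly({"a": 1}, buf)
print(len(buf.getvalue()) > 0)  # True
```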
Versions:
sns: 0.13.2
plt: 3.10.1 | open | 2025-03-20T06:31:41Z | 2025-03-20T12:10:50Z | https://github.com/mwaskom/seaborn/issues/3835 | [] | ChrisOKay | 1 |
milesmcc/shynet | django | 56 | Document primary-key integration | Readme page says that I can associate visitors in Shynet with their user accounts on my site. But how? I can't find any clue to this. | closed | 2020-07-03T12:11:04Z | 2020-07-07T14:07:10Z | https://github.com/milesmcc/shynet/issues/56 | [] | mrspartak | 5 |
amdegroot/ssd.pytorch | computer-vision | 8 | Not using RandomHorizontalFlip? | The train_transform() is not used in the base_transform. So does this project use RandomHorizontalFlip?
Or is this function called somewhere else? | closed | 2017-04-17T14:23:14Z | 2017-05-01T19:47:29Z | https://github.com/amdegroot/ssd.pytorch/issues/8 | [] | miraclebiu | 2
BeastByteAI/scikit-llm | scikit-learn | 84 | [Feature Request]: Batched Async Prediction | Hi,
The current scikit-llm is implemented in a synchronous way: the prompts are sent to the API one by one.
This is not ideal when we have a large dataset and a high-tier (high TPM/RPM) account. Is it possible to incorporate a batched async feature?
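Absent library support, the pattern this request describes can be sketched with a semaphore-bounded gather (stdlib only; `predict_one` below is just a stand-in for whatever per-prompt API call scikit-llm would issue):

```python
import asyncio

async def predict_batch(prompts, predict_one, max_concurrency=8):
    """Run predict_one(p) for every prompt with at most max_concurrency
    requests in flight; results keep input order."""
    sem = asyncio.Semaphore(max_concurrency)
    async def bounded(p):
        async with sem:
            return await predict_one(p)
    return await asyncio.gather(*(bounded(p) for p in prompts))

async def predict_one(prompt):          # stand-in for an API call
    await asyncio.sleep(0.01)
    return prompt.upper()

results = asyncio.run(predict_batch(["a", "b", "c"], predict_one, max_concurrency=2))
print(results)  # ['A', 'B', 'C']
```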
Reference:
[oaib](https://github.com/SpellcraftAI/oaib) | open | 2024-02-13T00:33:21Z | 2024-06-14T10:49:17Z | https://github.com/BeastByteAI/scikit-llm/issues/84 | [] | WindChimeRan | 2 |
hbldh/bleak | asyncio | 1,311 | client.write_gatt_char [WinError -2147483629] object close | * bleak version: bleak==0.20.2
bleak-winrt==1.2.0
* Python version: 3.10.5
* Operating System: Windows 10 (Windows Feature Experience Pack 1000.19041.1000.0)
* BlueZ version (`bluetoothctl -v`) in case of Linux:
### Description
Traceback (most recent call last):
File "D:\develop\py\Flappy-bird-python\app\service\imu_service.py", line 152, in init_device
await self.init_device(fd, retry-1)
│ │ │ └ 4
│ │ └ <_io.BufferedWriter name='C:/Users/Administrator/nx-game/data\\20230515-103758\\imu-im948-LeftFore-V3.01-30-20230515-103831.m...
│ └ <function IMUService.init_device at 0x0000023ED4A367A0>
└ <app.service.imu_service.IMUService object at 0x0000023ED4F4D000>
> File "D:\develop\py\Flappy-bird-python\app\service\imu_service.py", line 130, in init_device
await self._imu_client.write_gatt_char(par_write_characteristic, wakestr)
│ │ │ │ └ b')'
│ │ │ └ 5
│ │ └ <function BleakClient.write_gatt_char at 0x0000023EAD002D40>
│ └ <BleakClient, 26:C0:10:F7:CE:EC, <class 'bleak.backends.winrt.client.BleakClientWinRT'>>
└ <app.service.imu_service.IMUService object at 0x0000023ED4F4D000>
File "D:\develop\py\Flappy-bird-python\venv\lib\site-packages\bleak\__init__.py", line 659, in write_gatt_char
await self._backend.write_gatt_char(char_specifier, data, response)
│ │ │ │ │ └ False
│ │ │ │ └ b')'
│ │ │ └ 5
│ │ └ <function BleakClientWinRT.write_gatt_char at 0x0000023ED4F4E950>
│ └ <bleak.backends.winrt.client.BleakClientWinRT object at 0x0000023ED4F64070>
└ <BleakClient, 26:C0:10:F7:CE:EC, <class 'bleak.backends.winrt.client.BleakClientWinRT'>>
File "D:\develop\py\Flappy-bird-python\venv\lib\site-packages\bleak\backends\winrt\client.py", line 874, in write_gatt_char
await characteristic.obj.write_value_with_result_async(buf, response),
│ │ │ │ └ <GattWriteOption.WRITE_WITHOUT_RESPONSE: 1>
│ │ │ └ <_bleak_winrt_Windows_Storage_Streams.Buffer object at 0x0000023ED4F1E410>
│ │ └ <method 'write_value_with_result_async' of '_bleak_winrt_Windows_Devices_Bluetooth_GenericAttributeProfile.GattCharacteristic...
│ └ <_bleak_winrt_Windows_Devices_Bluetooth_GenericAttributeProfile.GattCharacteristic object at 0x0000023ED4F1E8B0>
└ <bleak.backends.winrt.characteristic.BleakGATTCharacteristicWinRT object at 0x0000023ED4F679A0>
OSError: [WinError -2147483629] The object has been closed.
### What I Did
``` python
par_write_characteristic=0x0005
wakestr=bytes([0x29])
self._imu_client = BleakClient(device, disconnected_callback=self.disconnected_callback, timeout=5.0, )
await self._imu_client.connect()
await self._imu_client.write_gatt_char(par_write_characteristic, wakestr)
```
### Logs
2023-05-15 10:38:03.259 | INFO | app.service.imu_service:connect:90 - connect device im948-LeftFore-V3.01
10:38:03 client.py[line:253] DEBUG Connecting to BLE device @ 26:C0:10:F7:CE:EC
10:38:03 client.py[line:632] DEBUG getting services (service_cache_mode=None, cache_mode=None)...
10:38:03 client.py[line:656] DEBUG calling get_gatt_services_async
10:38:03 client.py[line:692] DEBUG returned from get_gatt_services_async
10:38:03 client.py[line:704] DEBUG calling get_characteristics_async
10:38:03 client.py[line:712] DEBUG returned from get_characteristics_async
10:38:03 client.py[line:721] DEBUG calling get_descriptors_async
10:38:03 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:03 client.py[line:704] DEBUG calling get_characteristics_async
10:38:05 client.py[line:712] DEBUG returned from get_characteristics_async
10:38:05 client.py[line:721] DEBUG calling get_descriptors_async
10:38:05 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:05 client.py[line:721] DEBUG calling get_descriptors_async
10:38:06 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:06 client.py[line:721] DEBUG calling get_descriptors_async
10:38:06 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:06 client.py[line:721] DEBUG calling get_descriptors_async
10:38:06 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:06 client.py[line:721] DEBUG calling get_descriptors_async
10:38:07 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:07 client.py[line:721] DEBUG calling get_descriptors_async
10:38:07 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:07 client.py[line:704] DEBUG calling get_characteristics_async
10:38:08 client.py[line:744] DEBUG disposing service objects
10:38:08 client.py[line:283] DEBUG closing requester
10:38:08 client.py[line:300] DEBUG closing session
2023-05-15 10:38:08.491 | ERROR | app.service.imu_service:connect:98 -
2023-05-15 10:38:08.492 | INFO | app.service.imu_service:connect:90 - connect device im948-LeftFore-V3.01
10:38:08 client.py[line:253] DEBUG Connecting to BLE device @ 26:C0:10:F7:CE:EC
10:38:08 client.py[line:632] DEBUG getting services (service_cache_mode=None, cache_mode=None)...
10:38:08 client.py[line:656] DEBUG calling get_gatt_services_async
10:38:08 client.py[line:692] DEBUG returned from get_gatt_services_async
10:38:08 client.py[line:704] DEBUG calling get_characteristics_async
10:38:08 client.py[line:712] DEBUG returned from get_characteristics_async
10:38:08 client.py[line:721] DEBUG calling get_descriptors_async
10:38:08 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:08 client.py[line:704] DEBUG calling get_characteristics_async
10:38:08 client.py[line:712] DEBUG returned from get_characteristics_async
10:38:08 client.py[line:721] DEBUG calling get_descriptors_async
10:38:08 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:08 client.py[line:721] DEBUG calling get_descriptors_async
10:38:08 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:08 client.py[line:721] DEBUG calling get_descriptors_async
10:38:08 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:08 client.py[line:721] DEBUG calling get_descriptors_async
10:38:08 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:08 client.py[line:721] DEBUG calling get_descriptors_async
10:38:08 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:08 client.py[line:721] DEBUG calling get_descriptors_async
10:38:08 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:08 client.py[line:704] DEBUG calling get_characteristics_async
10:38:08 client.py[line:712] DEBUG returned from get_characteristics_async
10:38:08 client.py[line:721] DEBUG calling get_descriptors_async
10:38:08 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:08 client.py[line:721] DEBUG calling get_descriptors_async
10:38:09 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:09 client.py[line:704] DEBUG calling get_characteristics_async
10:38:10 client.py[line:712] DEBUG returned from get_characteristics_async
10:38:10 client.py[line:721] DEBUG calling get_descriptors_async
10:38:10 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:10 client.py[line:721] DEBUG calling get_descriptors_async
10:38:10 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:10 client.py[line:704] DEBUG calling get_characteristics_async
10:38:11 client.py[line:712] DEBUG returned from get_characteristics_async
10:38:11 client.py[line:721] DEBUG calling get_descriptors_async
10:38:12 client.py[line:729] DEBUG returned from get_descriptors_async
10:38:30 client.py[line:331] DEBUG session_status_changed_event_handler: id: BluetoothLE#BluetoothLEf4:4e:fc:04:a6:a8-26:c0:10:f7:ce:ec, error: BluetoothError.SUCCESS, status: GattSessionStatus.CLOSED
10:38:30 client.py[line:348] DEBUG max_pdu_size_changed_handler: 23
10:38:30 client.py[line:267] DEBUG 26:C0:10:F7:CE:EC: services changed
10:38:30 client.py[line:267] DEBUG 26:C0:10:F7:CE:EC: services changed
10:38:30 client.py[line:267] DEBUG 26:C0:10:F7:CE:EC: services changed
10:38:30 client.py[line:267] DEBUG 26:C0:10:F7:CE:EC: services changed
10:38:30 client.py[line:267] DEBUG 26:C0:10:F7:CE:EC: services changed
10:38:30 client.py[line:267] DEBUG 26:C0:10:F7:CE:EC: services changed
10:38:31 client.py[line:331] DEBUG session_status_changed_event_handler: id: BluetoothLE#BluetoothLEf4:4e:fc:04:a6:a8-26:c0:10:f7:ce:ec, error: BluetoothError.SUCCESS, status: GattSessionStatus.ACTIVE
2023-05-15 10:38:31.149 | INFO | app.service.imu_service:run:175 - connect im948-LeftFore-V3.01 success ...
2023-05-15 10:38:31.150 | INFO | app.service.imu_service:create_fd:116 - write data C:/Users/Administrator/nx-game/data\20230515-103758\imu-im948-LeftFore-V3.01-30-20230515-103831.mex success ...
2023-05-15 10:38:31.151 | ERROR | app.service.imu_service:init_device:140 - [WinError -2147483629] 该对象已关闭。
Traceback (most recent call last):
File "D:\develop\py\Flappy-bird-python\app\service\imu_service.py", line 181, in run
await self.init_device(fd)
│ │ └ <_io.BufferedWriter name='C:/Users/Administrator/nx-game/data\\20230515-103758\\imu-im948-LeftFore-V3.01-30-20230515-103831.m...
│ └ <function IMUService.init_device at 0x0000023ED4A367A0>
└ <app.service.imu_service.IMUService object at 0x0000023ED4F4D000>
> File "D:\develop\py\Flappy-bird-python\app\service\imu_service.py", line 130, in init_device
await self._imu_client.write_gatt_char(par_write_characteristic, wakestr)
│ │ │ │ └ b')'
│ │ │ └ 5
│ │ └ <function BleakClient.write_gatt_char at 0x0000023EAD002D40>
│ └ <BleakClient, 26:C0:10:F7:CE:EC, <class 'bleak.backends.winrt.client.BleakClientWinRT'>>
└ <app.service.imu_service.IMUService object at 0x0000023ED4F4D000>
File "D:\develop\py\Flappy-bird-python\venv\lib\site-packages\bleak\__init__.py", line 659, in write_gatt_char
await self._backend.write_gatt_char(char_specifier, data, response)
│ │ │ │ │ └ False
│ │ │ │ └ b')'
│ │ │ └ 5
│ │ └ <function BleakClientWinRT.write_gatt_char at 0x0000023ED4F4E950>
│ └ <bleak.backends.winrt.client.BleakClientWinRT object at 0x0000023ED4F64070>
└ <BleakClient, 26:C0:10:F7:CE:EC, <class 'bleak.backends.winrt.client.BleakClientWinRT'>>
File "D:\develop\py\Flappy-bird-python\venv\lib\site-packages\bleak\backends\winrt\client.py", line 874, in write_gatt_char
await characteristic.obj.write_value_with_result_async(buf, response),
│ │ │ │ └ <GattWriteOption.WRITE_WITHOUT_RESPONSE: 1>
│ │ │ └ <_bleak_winrt_Windows_Storage_Streams.Buffer object at 0x0000023ED4F1DEF0>
│ │ └ <method 'write_value_with_result_async' of '_bleak_winrt_Windows_Devices_Bluetooth_GenericAttributeProfile.GattCharacteristic...
│ └ <_bleak_winrt_Windows_Devices_Bluetooth_GenericAttributeProfile.GattCharacteristic object at 0x0000023ED4F1E8B0>
└ <bleak.backends.winrt.characteristic.BleakGATTCharacteristicWinRT object at 0x0000023ED4F679A0>
OSError: [WinError -2147483629] 该对象已关闭。
2023-05-15 10:38:31.153 | ERROR | app.service.imu_service:init_device:140 - [WinError -2147483629] 该对象已关闭。
Traceback (most recent call last):
File "D:\develop\py\Flappy-bird-python\app\service\imu_service.py", line 152, in init_device
await self.init_device(fd, retry-1)
│ │ │ └ 10
│ │ └ <_io.BufferedWriter name='C:/Users/Administrator/nx-game/data\\20230515-103758\\imu-im948-LeftFore-V3.01-30-20230515-103831.m...
│ └ <function IMUService.init_device at 0x0000023ED4A367A0>
└ <app.service.imu_service.IMUService object at 0x0000023ED4F4D000>
> File "D:\develop\py\Flappy-bird-python\app\service\imu_service.py", line 130, in init_device
await self._imu_client.write_gatt_char(par_write_characteristic, wakestr)
│ │ │ │ └ b')'
│ │ │ └ 5
│ │ └ <function BleakClient.write_gatt_char at 0x0000023EAD002D40>
│ └ <BleakClient, 26:C0:10:F7:CE:EC, <class 'bleak.backends.winrt.client.BleakClientWinRT'>>
└ <app.service.imu_service.IMUService object at 0x0000023ED4F4D000>
File "D:\develop\py\Flappy-bird-python\venv\lib\site-packages\bleak\__init__.py", line 659, in write_gatt_char
await self._backend.write_gatt_char(char_specifier, data, response)
│ │ │ │ │ └ False
│ │ │ │ └ b')'
│ │ │ └ 5
│ │ └ <function BleakClientWinRT.write_gatt_char at 0x0000023ED4F4E950>
│ └ <bleak.backends.winrt.client.BleakClientWinRT object at 0x0000023ED4F64070>
└ <BleakClient, 26:C0:10:F7:CE:EC, <class 'bleak.backends.winrt.client.BleakClientWinRT'>>
File "D:\develop\py\Flappy-bird-python\venv\lib\site-packages\bleak\backends\winrt\client.py", line 874, in write_gatt_char
await characteristic.obj.write_value_with_result_async(buf, response),
│ │ │ │ └ <GattWriteOption.WRITE_WITHOUT_RESPONSE: 1>
│ │ │ └ <_bleak_winrt_Windows_Storage_Streams.Buffer object at 0x0000023ED4F1EC30>
│ │ └ <method 'write_value_with_result_async' of '_bleak_winrt_Windows_Devices_Bluetooth_GenericAttributeProfile.GattCharacteristic...
│ └ <_bleak_winrt_Windows_Devices_Bluetooth_GenericAttributeProfile.GattCharacteristic object at 0x0000023ED4F1E8B0>
└ <bleak.backends.winrt.characteristic.BleakGATTCharacteristicWinRT object at 0x0000023ED4F679A0>
OSError: [WinError -2147483629] 该对象已关闭。
| open | 2023-05-15T02:48:27Z | 2023-08-31T19:47:24Z | https://github.com/hbldh/bleak/issues/1311 | [
"Backend: WinRT",
"more info required"
] | xiasanshi | 4 |
sgl-project/sglang | pytorch | 3,717 | [Bug] TCPStore Error processing client message: Too many keys being waited. keys: 3891110078061282660, max: 13107 | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [ ] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug

### Reproduction
Start on two H100 nodes with NCCL IB, using the commands below:
```
# node 0
docker run --gpus all \
--shm-size 64g \
--network=host \
--privileged \
-v ~/.cache/huggingface:/root/.cache/huggingface \
-v /home/xxx:/home/xxx \
--name sglang_multinode1 \
-it \
-e NCCL_SOCKET_IFNAME=ib0 \
-e NCCL_DEBUG=INFO \
-e GLOO_SOCKET_IFNAME=ib0 \
-e GLOO_DEBUG=INFO \
--rm \
--env "HF_TOKEN=$HF_TOKEN" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server --model-path /home/xxx/deepseek-r1 --tp 16 --dist-init-addr IP:20000 --nnodes 2 --node-rank 0 --trust-remote-code --host localhost --port 1105 --enable-torch-compile --torch-compile-max-bs 8 --api-key=xxxxx

# node 1
docker run --gpus all \
--shm-size 64g \
--network=host \
--privileged \
-v ~/.cache/huggingface:/root/.cache/huggingface \
-v /home/xxx:/home/xxx \
--name sglang_multinode2 \
-it \
-e NCCL_SOCKET_IFNAME=ib0 \
-e NCCL_DEBUG=INFO \
-e GLOO_SOCKET_IFNAME=ib0 \
-e GLOO_DEBUG=INFO \
--rm \
--env "HF_TOKEN=$HF_TOKEN" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server --model-path /home/xxx/deepseek-r1 --tp 16 --dist-init-addr IP:20000 --nnodes 2 --node-rank 1 --trust-remote-code --host localhost --port 1105 --enable-torch-compile --torch-compile-max-bs 8 --api-key=xxxxx
```
### Environment
INFO 02-20 03:28:59 __init__.py:190] Automatically detected platform cuda.
Python: 3.12.8 (main, Jan 27 2025, 17:53:37) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H100 80GB HBM3
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 550.127.05
PyTorch: 2.5.1+cu124
sgl_kernel: 0.0.3.post6
flashinfer: 0.2.1.post2+cu124torch2.5
triton: 3.1.0
transformers: 4.48.3
torchao: 0.8.0
numpy: 1.26.4
aiohttp: 3.11.12
fastapi: 0.115.8
hf_transfer: 0.1.9
huggingface_hub: 0.29.0
interegular: 0.3.3
modelscope: 1.23.0
orjson: 3.10.15
packaging: 24.2
psutil: 7.0.0
pydantic: 2.10.6
multipart: 0.0.20
zmq: 26.2.1
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
openai: 1.63.2
tiktoken: 0.9.0
anthropic: 0.46.0
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 NIC6 NIC7 NIC8 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV18 NV18 NV18 NV18 NV18 NV18 NV18 PIX NODE NODE NODE SYS SYS SYS SYS NODE 0-55,112-167 0 N/A
GPU1 NV18 X NV18 NV18 NV18 NV18 NV18 NV18 NODE PIX NODE NODE SYS SYS SYS SYS NODE 0-55,112-167 0 N/A
GPU2 NV18 NV18 X NV18 NV18 NV18 NV18 NV18 NODE NODE PIX NODE SYS SYS SYS SYS NODE 0-55,112-167 0 N/A
GPU3 NV18 NV18 NV18 X NV18 NV18 NV18 NV18 NODE NODE NODE PIX SYS SYS SYS SYS NODE 0-55,112-167 0 N/A
GPU4 NV18 NV18 NV18 NV18 X NV18 NV18 NV18 SYS SYS SYS SYS PIX NODE NODE NODE SYS 56-111,168-223 1 N/A
GPU5 NV18 NV18 NV18 NV18 NV18 X NV18 NV18 SYS SYS SYS SYS NODE PIX NODE NODE SYS 56-111,168-223 1 N/A
GPU6 NV18 NV18 NV18 NV18 NV18 NV18 X NV18 SYS SYS SYS SYS NODE NODE PIX NODE SYS 56-111,168-223 1 N/A
GPU7 NV18 NV18 NV18 NV18 NV18 NV18 NV18 X SYS SYS SYS SYS NODE NODE NODE PIX SYS 56-111,168-223 1 N/A
NIC0 PIX NODE NODE NODE SYS SYS SYS SYS X NODE NODE NODE SYS SYS SYS SYS NODE
NIC1 NODE PIX NODE NODE SYS SYS SYS SYS NODE X NODE NODE SYS SYS SYS SYS NODE
NIC2 NODE NODE PIX NODE SYS SYS SYS SYS NODE NODE X NODE SYS SYS SYS SYS NODE
NIC3 NODE NODE NODE PIX SYS SYS SYS SYS NODE NODE NODE X SYS SYS SYS SYS NODE
NIC4 SYS SYS SYS SYS PIX NODE NODE NODE SYS SYS SYS SYS X NODE NODE NODE SYS
NIC5 SYS SYS SYS SYS NODE PIX NODE NODE SYS SYS SYS SYS NODE X NODE NODE SYS
NIC6 SYS SYS SYS SYS NODE NODE PIX NODE SYS SYS SYS SYS NODE NODE X NODE SYS
NIC7 SYS SYS SYS SYS NODE NODE NODE PIX SYS SYS SYS SYS NODE NODE NODE X SYS
NIC8 NODE NODE NODE NODE SYS SYS SYS SYS NODE NODE NODE NODE SYS SYS SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_5
NIC4: mlx5_6
NIC5: mlx5_7
NIC6: mlx5_8
NIC7: mlx5_9
NIC8: mlx5_bond_0
ulimit soft: 1048576 | open | 2025-02-20T03:29:21Z | 2025-02-20T04:29:01Z | https://github.com/sgl-project/sglang/issues/3717 | [] | Superskyyy | 2 |
ScottfreeLLC/AlphaPy | scikit-learn | 34 | Features Generation BugFixes | Please see the attached patch, which fixes a number of bugs around feature generation in mflow.
It includes the following:
1. A fix for this issue:
https://github.com/ScottfreeLLC/AlphaPy/issues/33
2. Train mode now correctly ignores the `--pdate` argument rather than falling over.
3. Arrays of NaN in feature generation were being dropped rather than added as a column of sentinels.
4. Multi-feature generation was failing because the feature-name count did not match the feature count; the asserts around this were improved and the feature names were fixed.
5. Disabled the scipy signal-to-noise ratio feature, as it appears to be long-since deprecated (my understanding is that the scipy version currently in use would fail for this feature).
[mflow_bugfixes.txt](https://github.com/ScottfreeLLC/AlphaPy/files/4701284/mflow_bugfixes.txt)
| open | 2020-05-29T10:36:17Z | 2020-08-25T23:53:34Z | https://github.com/ScottfreeLLC/AlphaPy/issues/34 | [
"bug"
] | sykesdev | 1 |
coqui-ai/TTS | python | 3,522 | [Bug] result audio repeats some words many times at the end | ### Describe the bug
python: 3.10.12
tts: 0.22.0
model: tts_models/zh-CN/baker/tacotron2-DDC-GST
text: 我们要去看电影了你去不去 (roughly, "We are going to see a movie; are you going or not?")
result audio: 我们要去看电影了你去不去去不去去去去去去去去去去 (the final words are repeated many times)
### To Reproduce
tts --text "我们要去看电影了你去不去" --model_name tts_models/zh-CN/baker/tacotron2-DDC-GST --out_path a.wav
### Expected behavior
When the audio file a.wav is played, it should say "我们要去看电影了你去不去", but it actually speaks "我们要去看电影了你去不去去不去去去去去去去去去去".
### Logs
```shell
> tts_models/zh-CN/baker/tacotron2-DDC-GST is already downloaded.
> Using model: tacotron2
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log10
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:0
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:True
| > symmetric_norm:True
| > mel_fmin:50.0
| > mel_fmax:7600.0
| > pitch_fmin:0.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:/d/tts/.local/share/tts/tts_models--zh-CN--baker--tacotron2-DDC-GST/scale_stats.npy
| > base:10
| > hop_length:256
| > win_length:1024
> Model's reduction rate `r` is set to: 2
> Text: 我们要去看电影了你去不去
> Text splitted to sentences.
['我们要去看电影了你去不去']
Building prefix dict from the default dictionary ...
DEBUG:jieba:Building prefix dict from the default dictionary ...
Loading model from cache /tmp/jieba.cache
DEBUG:jieba:Loading model from cache /tmp/jieba.cache
Loading model cost 0.386 seconds.
DEBUG:jieba:Loading model cost 0.386 seconds.
Prefix dict has been built successfully.
DEBUG:jieba:Prefix dict has been built successfully.
> Decoder stopped with `max_decoder_steps` 500
> Processing time: 3.709467649459839
> Real-time factor: 0.3077915650798868
> Saving output to a.wav
```
### Environment
```shell
tts@chat:~$ python3 get_env.py
{
"CUDA": {
"GPU": [],
"available": false,
"version": "12.1"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.2+cu121",
"TTS": "0.22.0",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.12",
"version": "#92-Ubuntu SMP Mon Aug 14 09:30:42 UTC 2023"
}
}
```
### Additional context
_No response_ | closed | 2024-01-17T03:45:40Z | 2024-02-24T10:55:15Z | https://github.com/coqui-ai/TTS/issues/3522 | [
"bug",
"wontfix"
] | yumoqing | 1 |
dnouri/nolearn | scikit-learn | 197 | nolearn.lasagne.visualize.plot_conv_activity should use the new net.get_output | closed | 2016-01-15T23:13:55Z | 2016-03-11T02:51:25Z | https://github.com/dnouri/nolearn/issues/197 | [] | dnouri | 2 | |
biolab/orange3 | pandas | 7,017 | Resize Feature Statistics Widget Window | Hi all,
After changing the colour of the distribution, I cannot resize the window of the Feature Statistics widget because the legend is too long. On my Mac I cannot get to the bottom of the window. Do you have any suggestions?
<img width="1507" alt="Image" src="https://github.com/user-attachments/assets/83ba3ac5-8697-45d5-bccd-61d106e46d45" />
| open | 2025-02-04T18:18:50Z | 2025-02-07T13:17:07Z | https://github.com/biolab/orange3/issues/7017 | [
"bug report"
] | TheItalianDataGuy | 2 |
omnilib/aiomultiprocess | asyncio | 15 | How to execute async generator in Pool | ### Description
Hi, great library, thanks!
I want to run an async generator in a process pool using the aiomultiprocess lib. How can I do that? Any pointers? Currently I'm getting an error like:
`File "/home/rohankar/anaconda3/lib/python3.6/site-packages/aiomultiprocess/core.py", line 93, in __init__
raise ValueError(f"target must be coroutine function")
ValueError: target must be coroutine function`
Script that I tried;
```
import asyncio

async def ait(nr):
    for i in range(nr):
        await asyncio.sleep(0.1)
        yield i

from aiomultiprocess import Worker

async def main():
    # This Works
    # async for i in ait(10):
    #     print(i)

    # This throw error
    p = Worker(target=ait, args=(20,))
    p.start()
    print(await p)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```
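The `ValueError` is raised because `Worker` requires `target` to be a coroutine function, while `ait` is an async generator function, which is a different kind of object. One workaround (a minimal sketch, assuming all of the generator's values are wanted at once; the `consume` name is illustrative) is to wrap consumption of the generator in a plain coroutine and pass that wrapper as the target, e.g. `Worker(target=consume, args=(20,))`. The pure-asyncio core of the idea:

```python
import asyncio

async def ait(nr):
    # async generator function: not a coroutine function,
    # so Worker(target=ait, ...) is rejected
    for i in range(nr):
        await asyncio.sleep(0)
        yield i

async def consume(nr):
    # plain coroutine function: acceptable as a Worker target;
    # it drains the generator and returns the collected values
    return [i async for i in ait(nr)]

result = asyncio.run(consume(5))
print(result)  # [0, 1, 2, 3, 4]
```

(`asyncio.run` needs Python 3.7+; on 3.6 use `loop.run_until_complete` as in the original script.) If the values are needed incrementally rather than all at once, something queue-based would be required instead; that is not shown here.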
### Details
* OS: Ubuntu 18.04
* Python version: 3.6
* aiomultiprocess version: 0.5.0
* Can you repro on master? didn't try
* Can you repro in a clean virtualenv? didn't try
| closed | 2019-02-23T12:32:08Z | 2019-03-30T20:09:52Z | https://github.com/omnilib/aiomultiprocess/issues/15 | [] | sagarr | 1 |
FactoryBoy/factory_boy | sqlalchemy | 776 | Add Dynamodb ORM Factory | #### The problem
I haven't found support for creating a factory with the DynamoDB ORM [pynamodb](https://github.com/pynamodb/PynamoDB). Sometimes I use a Django-supported ORM, for which `DjangoModelFactory` works great, and sometimes I need a NoSQL DB.
#### Proposed solution
I assume this would include implementing the `base.Factory` interface, though I'm pretty unfamiliar with what's under the hood of factory_boy.
Edit: ORM (Object Relational Mapping) for a NoSQL DB is a misnomer :-P ONSQLM (Object NoSQL Mapping) would be more appropriate | open | 2020-08-28T15:46:34Z | 2022-01-25T12:57:30Z | https://github.com/FactoryBoy/factory_boy/issues/776 | [
"Feature",
"DesignDecision"
] | ezbc | 6 |
nikitastupin/clairvoyance | graphql | 53 | help | 2022-10-14 20:34:28 INFO | Starting blind introspection on https://site.com/graphql/...
```
2022-10-14 20:34:29 DEBUG | Root typenames are: {'queryType': None, 'mutationType': None, 'subscriptionType': None}
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/boss/tools/clairvoyance/clairvoyance/__main__.py", line 4, in <module>
    cli()
  File "/home/boss/tools/clairvoyance/clairvoyance/cli.py", line 109, in cli
    asyncio.run(
  File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/home/boss/tools/clairvoyance/clairvoyance/cli.py", line 67, in blind_introspection
    schema = await oracle.clairvoyance(
  File "/home/boss/tools/clairvoyance/clairvoyance/oracle.py", line 485, in clairvoyance
    typename = await probe_typename(input_document)
  File "/home/boss/tools/clairvoyance/clairvoyance/oracle.py", line 402, in probe_typename
    raise Exception(f'Expected "{errors}" to match any of "{wrong_field_regexes}".')
Exception: Expected "[{'message': "Validation error of type FieldUndefined: Field 'imwrongfield' in type 'Query' is undefined @ 'imwrongfield'", 'locations': [{'line': 1, 'column': 9}], 'extensions': {'classification': 'ValidationError'}}]" to match any of "['Cannot query field [\'"]imwrongfield[\'"] on type [\'"](?P<typename>[_0-9a-zA-Z\\[\\]!]*)[\'"].', 'Field [\'"][_0-9a-zA-Z\\[\\]!]*[\'"] must not have a selection since type [\'"](?P<typename>[_A-Za-z\\[\\]!][_0-9a-zA-Z\\[\\]!]*)[\'"] has no subfields.', 'Field [\'"][_0-9a-zA-Z\\[\\]!]*[\'"] of type [\'"](?P<typename>[_A-Za-z\\[\\]!][_0-9a-zA-Z\\[\\]!]*)[\'"] must not have a sub selection.']".
2022-10-14 20:34:29 ERROR | Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x7f744a55f8e0>
2022-10-14 20:34:29 ERROR | Unclosed connector
connections: ['[(<aiohttp.client_proto.ResponseHandler object at 0x7f744a462e80>, 94397.773572156)]']
connector: <aiohttp.connector.TCPConnector object at 0x7f744a55f670>
```
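Incidentally, the exception text above pinpoints the failure: this endpoint returns graphql-java-style validation errors ("Validation error of type FieldUndefined: Field 'x' in type 'Query' is undefined"), and none of the `wrong_field_regexes` listed in the message match that wording. The mismatch can be checked with the standard library alone (the second pattern is only an illustrative guess at a regex that would match this server's wording, not clairvoyance's actual code):

```python
import re

message = ("Validation error of type FieldUndefined: Field 'imwrongfield' "
           "in type 'Query' is undefined @ 'imwrongfield'")

# One of the patterns from the traceback above -- it does not match:
shipped = re.compile(
    r'Cannot query field [\'"]imwrongfield[\'"] on type '
    r'[\'"](?P<typename>[_0-9a-zA-Z\[\]!]*)[\'"].')
print(shipped.search(message))  # None

# Hypothetical pattern for graphql-java's wording -- it does match:
candidate = re.compile(
    r"Field '[_0-9A-Za-z]+' in type '(?P<typename>[_0-9A-Za-z\[\]!]+)' is undefined")
match = candidate.search(message)
print(match.group("typename") if match else None)  # Query
```

A fix would presumably involve clairvoyance shipping an additional pattern for this error format.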
Command I used: `python3 -m clairvoyance -vv -o schema.json -w google-10000-english.txt https://site.com/graphql/` | open | 2022-10-14T20:39:15Z | 2024-09-17T17:20:57Z | https://github.com/nikitastupin/clairvoyance/issues/53 | [
"bug",
"question"
] | vansh1 | 9 |
iterative/dvc | data-science | 10,645 | dvc exp show does not visualize experiments | # Bug Report
After running some experiments in branch A, I pushed the experiments with DVC, then squashed and merged branch A into main, and finally deleted branch A.
Now I can still see the experiments with `dvc exp list --all`.
I can apply the experiments with `dvc exp apply`.
But if I do `dvc exp show`, I cannot see any experiments.
### Reproduce
- run some experiments in a branch
- push the experiments `dvc exp push origin --all`
- squash and merge the branch
- delete the branch
- run `dvc exp show --all-commit --all-branches`
### Expected

### Environment information
```console
$ dvc doctor
DVC version: 3.56.0 (pip)
-------------------------
Platform: Python 3.10.12 on Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.35
Subprojects:
  dvc_data = 3.16.7
  dvc_objects = 5.1.0
  dvc_render = 1.0.2
  dvc_task = 0.40.2
  scmrepo = 3.3.8
Supports:
  http (aiohttp = 3.11.4, aiohttp-retry = 2.9.1),
  https (aiohttp = 3.11.4, aiohttp-retry = 2.9.1),
  s3 (s3fs = 2024.10.0, boto3 = 1.35.36)
Config:
  Global: /home/leopra96/.config/dvc
  System: /etc/xdg/dvc
Cache types: hardlink, symlink
Cache directory: ext4 on /dev/sdc
Caches: local
Remotes: s3
Workspace directory: ext4 on /dev/sdc
Repo: dvc (subdir), git
Repo.site_cache_dir: /var/tmp/dvc/repo/c956f7904aee9f04196cd0369a56f204
```
**Additional Information (if any):**
| closed | 2024-12-06T19:32:10Z | 2024-12-07T20:23:03Z | https://github.com/iterative/dvc/issues/10645 | [
"triage",
"A: experiments"
] | OS-leonardopratesi | 3 |
smarie/python-pytest-cases | pytest | 74 | [Tiny bug] Wrong deprecation warning for parametrize_plus | `main_fixtures.py`
warn("`parametrize_plus` is deprecated. Please use the new alias `parametrize_plus`. ")
| closed | 2020-02-18T13:40:42Z | 2020-02-18T17:21:33Z | https://github.com/smarie/python-pytest-cases/issues/74 | [] | jitsejan | 2 |
marshmallow-code/flask-marshmallow | sqlalchemy | 44 | 'DummySession' object has no attribute 'query' | We have been using flask-marshmallow 0.6.0 with marshmallow-sqlalchemy 0.3.0 for some time now, and have been quite happy. However, in trying to upgrade our packages, we have encountered the error message above.
It appears that marshmallow-sqlalchemy is now trying to actually manage adding/merging the models with the session. Personally, I don't want that. I want marshmallow to handle the deserialization and leave it to me to decide when and how I want to add or merge the model with the session, as we have been doing for some time. I have suggested on their issue board, where others had brought up the issue, that it would be nice if their code just skipped the session-management parts if the session was None (see [https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/62](https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/62)). If they did that, then you wouldn't need to have a DummySession class at all. I do not know how amenable they will be to that suggestion.
The alternative, unfortunately, is to make DummySession implement methods to avoid generating errors, but this requires not just the query method but also, it would appear, filter_by, one, and first.
Or perhaps there is an alternative workaround that you already have in place. If so, I would be anxious to hear it.
Thanks.
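For illustration, the "make DummySession implement methods" alternative described above could look roughly like this sketch: a stand-in session whose query chain always answers "nothing found", so `make_instance` falls through to constructing a new object. The method names come from the traceback below; this is hypothetical code, not flask-marshmallow's actual `DummySession`:

```python
class DummySession:
    """Stand-in session: every lookup chains back to itself and ends
    in "nothing found", so no real database session is required."""

    def query(self, *entities):
        return self

    def filter_by(self, **criteria):
        return self

    def first(self):
        return None

    def one(self):
        # SQLAlchemy's .one() would raise NoResultFound here; returning
        # None keeps this illustrative sketch simple.
        return None


session = DummySession()
print(session.query(object).filter_by(id=1).first())  # None
```

Whether `.one()` should instead raise, as SQLAlchemy's real `Query.one()` does, is one of the awkward design questions this workaround runs into.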
Oh, the full traceback is:

```
Traceback (most recent call last):
  File "tests.py", line 245, in runTest
    (obj, errors) = schema.loads(example_str[name])
  File "/home/davism/Src/atsdb/venv/lib/python2.7/site-packages/marshmallow/schema.py", line 564, in loads
    return self.load(data, many=many, partial=partial)
  File "/home/davism/Src/atsdb/venv/lib/python2.7/site-packages/marshmallow_sqlalchemy/schema.py", line 186, in load
    return super(ModelSchema, self).load(data, *args, **kwargs)
  File "/home/davism/Src/atsdb/venv/lib/python2.7/site-packages/marshmallow/schema.py", line 542, in load
    result, errors = self._do_load(data, many, partial=partial, postprocess=True)
  File "/home/davism/Src/atsdb/venv/lib/python2.7/site-packages/marshmallow/schema.py", line 646, in _do_load
    result = self._invoke_load_processors(POST_LOAD, result, many, original_data=data)
  File "/home/davism/Src/atsdb/venv/lib/python2.7/site-packages/marshmallow/schema.py", line 767, in _invoke_load_processors
    data=data, many=many, original_data=original_data)
  File "/home/davism/Src/atsdb/venv/lib/python2.7/site-packages/marshmallow/schema.py", line 865, in _invoke_processors
    data = utils.if_none(processor(data), data)
  File "/home/davism/Src/atsdb/venv/lib/python2.7/site-packages/marshmallow_sqlalchemy/schema.py", line 169, in make_instance
    instance = self.instance or self.get_instance(data)
  File "/home/davism/Src/atsdb/venv/lib/python2.7/site-packages/marshmallow_sqlalchemy/schema.py", line 154, in get_instance
    return self.session.query(
AttributeError: 'DummySession' object has no attribute 'query'
```
| open | 2016-05-24T21:57:05Z | 2024-03-06T06:13:46Z | https://github.com/marshmallow-code/flask-marshmallow/issues/44 | [] | medavis | 15 |
openapi-generators/openapi-python-client | fastapi | 146 | Specify additional headers | **Is your feature request related to a problem? Please describe.**
When trying to download a file in chunks, I need to be able to specify the `Range` header. There is currently no way of specifying any additional headers outside of the call to the provided client's `get_headers()`.
**Describe the solution you'd like**
There are two ways of going about this, and I feel both have their uses:
1. Specify custom headers when creating a `Client` to be used with endpoint functions. This is useful in cases where a header needs to be used/reused frequently. The only downside is this is a bit inconvenient in use cases where the headers _aren't_ used/reused frequently. There could be a `set_headers` method or something similar, but the easiest way to handle single-use headers would be:
1. Add a `headers: Optional[Dict[str, Any]]` param to endpoint functions. This allows headers to be specified on a per-call basis such that using something like the `Range` header wouldn't require creating a new `Client` (or updating an existing Client's headers) for each call
I think implementing both would be best. It gives the flexibility of per-call headers, while also allowing users to have clients with a pre-set (albeit updatable) set of headers for easy reuse
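To make option 1 concrete, a minimal standard-library sketch of a client with pre-set but updatable headers might look like the following (class and method names here are illustrative, not the generated client's actual API):

```python
from dataclasses import dataclass, field, replace
from typing import Dict

@dataclass(frozen=True)
class Client:
    base_url: str
    headers: Dict[str, str] = field(default_factory=dict)

    def with_headers(self, extra: Dict[str, str]) -> "Client":
        # Returns a copy, so the original client's headers stay untouched.
        return replace(self, headers={**self.headers, **extra})

    def get_headers(self) -> Dict[str, str]:
        return dict(self.headers)

base = Client(base_url="https://api.example.com")
ranged = base.with_headers({"Range": "bytes=0-1023"})
print(ranged.get_headers())  # {'Range': 'bytes=0-1023'}
print(base.get_headers())    # {}
```

Option 2 would then just merge a per-call `headers` argument into `client.get_headers()` inside each endpoint function.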
| closed | 2020-08-10T16:59:39Z | 2020-08-11T13:23:29Z | https://github.com/openapi-generators/openapi-python-client/issues/146 | [
"✨ enhancement"
] | emann | 1 |
PaddlePaddle/models | nlp | 4,764 | If I only use TSN for feature extraction and want to output the extracted features, how should I implement that? | If I only use TSN for feature extraction and want to output the extracted features, how should I implement that? | open | 2020-07-23T06:47:48Z | 2020-07-27T03:22:23Z | https://github.com/PaddlePaddle/models/issues/4764 | [] | liu824 | 6 |
pytest-dev/pytest-cov | pytest | 149 | Add newline after --no-cov warning (trivial) | I would open a PR but I can't for silly legal reasons - can somebody take this small change on? Thanks!
---
Suggested change:
From:
```python
terminalreporter.write('WARNING: %s' % msg, red=True, bold=True)
```
into:
```python
terminalreporter.write('WARNING: %s\n' % msg, red=True, bold=True)
```
---
When running `pytest --no-cov`, this will fix the output, from:
```
Coverage disabled via --no-cov switch!=========== pytest-warning summary ===========
```
to:
```
WARNING: Coverage disabled via --no-cov switch!
=========== pytest-warning summary ===========
```
---
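The fused line is exactly what plain `write` semantics produce, as a self-contained toy reproduction shows (`io.StringIO` stands in for the terminal reporter here):

```python
import io

def report(trailing_newline: bool) -> str:
    out = io.StringIO()
    warning = "WARNING: Coverage disabled via --no-cov switch!"
    out.write(warning + ("\n" if trailing_newline else ""))
    out.write("=========== pytest-warning summary ===========\n")
    return out.getvalue()

print(report(False))  # warning and summary header fused on one line
print(report(True))   # warning on its own line
```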
| closed | 2017-02-15T13:35:25Z | 2017-02-16T13:07:06Z | https://github.com/pytest-dev/pytest-cov/issues/149 | [] | sitaktif | 0 |
browser-use/browser-use | python | 116 | ERROR: 5 consecutive failures | I verified the API token's validity and used the sample provided in the README.md file of this repo:
```
import asyncio
from browser_use import Agent
from langchain_community.chat_models import ChatOpenAI
async def main():
agent = Agent(
task="Find a one-way flight from Bali to Oman on 12 January 2025 on Google Flights. Return me the cheapest option.",
llm=ChatOpenAI(model="gpt-4o"),
)
result = await agent.run()
print(result)
asyncio.run(main())
```
<img width="340" alt="image" src="https://github.com/user-attachments/assets/5a5e8315-fda8-4bcd-961c-8408e3c254e6" />
| open | 2024-12-24T18:48:39Z | 2025-01-07T10:18:41Z | https://github.com/browser-use/browser-use/issues/116 | [] | EssamMohamedAbo-ElMkarem | 14 |
ray-project/ray | pytorch | 51,075 | [core] add tests to ensure the ConcurrencyGroupManager creates the correct number of threads | ### Description
as title
### Use case
_No response_ | open | 2025-03-04T23:03:42Z | 2025-03-04T23:03:51Z | https://github.com/ray-project/ray/issues/51075 | [
"enhancement",
"core"
] | kevin85421 | 0 |
vllm-project/vllm | pytorch | 15,230 | [Bug]: jinja2.exceptions.TemplateSyntaxError: expected token 'end of print statement', got 'name' | ### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
vllm==0.7.2 in Kaggle.
</details>
### 🐛 Describe the bug
When manually applying [QwQ-32B Chat Template](https://huggingface.co/Qwen/QwQ-32B/blob/main/tokenizer_config.json#L230), `TemplateSyntaxError` is raised. Why does it work when it uses `tokenizer.chat_template`?
```python
chat_template = """
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0]['role'] == 'system' %}
{{- messages[0]['content'] }}
{%- else %}
{{- '' }}
{%- endif %}
{{- "\n\n# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0]['role'] == 'system' %}
{{- '<|im_start|>system\n' + messages[0]['content'] + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- for message in messages %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" and not message.tool_calls %}
{%- set content = message.content %}
{%- if not loop.last %}
{%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
{%- endif %}
{{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{%- set content = message.content %}
{%- if not loop.last %}
{%- set content = message.content.split('</think>')[-1].lstrip('\n') %}
{%- endif %}
{{- '<|im_start|>' + message.role }}
{%- if message.content %}
{{- '\n' + content }}
{%- endif %}
{%- for tool_call in message.tool_calls %}
{%- if tool_call.function is defined %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '\n<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{{- tool_call.arguments | tojson }}
{{- '}\n</tool_call>' }}
{%- endfor %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if (loop.index0 == 0) or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n<think>\n' }}
{%- endif %}
"""
import jinja2
template = jinja2.Template(chat_template)
print(template.render(messages=[{"role": "user", "content": "2+2=?"}]))
```
Changing the outer double quotes to single quotes on line 14 of the chat_template (so the embedded `\"` escapes are no longer needed), i.e. from
`{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}`
to
`{{- '\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{"name": <function-name>, "arguments": <args-json-object>}\n</tool_call><|im_end|>\n' }}`
fixes the issue.
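The escaping mismatch can be shown with jinja2 alone, independent of the model template (the second form is what you get after pasting the already-decoded JSON value into Python source):

```python
import jinja2

# One level of escaping intact: jinja's lexer unescapes \" inside the literal.
ok = jinja2.Template(r'{{- "{\"name\": 1}" }}')
print(ok.render())  # {"name": 1}

# Escaping already consumed: the string literal closes early at `{"`,
# `name` becomes a stray token, and jinja reports
# "expected token 'end of print statement', got 'name'".
try:
    jinja2.Template('{{- "{"name": 1}" }}')
except jinja2.TemplateSyntaxError as err:
    print(type(err).__name__)  # TemplateSyntaxError
```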
Related Issue: https://github.com/runpod-workers/worker-vllm/issues/129
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | closed | 2025-03-20T15:43:29Z | 2025-03-20T15:53:05Z | https://github.com/vllm-project/vllm/issues/15230 | [
"bug"
] | SmartManoj | 1 |
ni1o1/transbigdata | data-visualization | 94 | Hexagonal grid does not cover the whole bounds area | Hello,
While using your library I noticed the following:
When I set the grid type to hexagonal (`hexa`), as in `(2)`, a point such as longitude/latitude 104.1545..., 30.8061... (which lies inside `self.bounds`, see `(1)`) maps to grid id (21, -13, -34), obtained by running `(3)`. I then found that this id is not present in `self.grid`.
```
self.bounds = [103.90, 30.52, 104.26, 30.81] (1)
self.grid, self.params = tbd.area_to_grid(self.bounds, accuracy=1200, method='hexa') (2)
tbd.GPS_to_grid(104.15457922898818, 30.80613863513823, self.params) (3)
```
Could it be that the hexagonal grid does not cover the entire bounds area? | open | 2024-03-30T14:03:59Z | 2024-03-30T14:03:59Z | https://github.com/ni1o1/transbigdata/issues/94 | [] | FvNCCR228 | 0
sgl-project/sglang | pytorch | 4,489 | Capture cuda graph failed | #### command
docker run --gpus '"device=4,5"' -d --name ifusion_sglang_qwen2.5_72b --shm-size=32g -p 30000:30000 -v /data/:/data --ipc=host lmsysorg/sglang:latest python3 -m sglang.launch_server --model-path /data/models/Qwen2.5-72B-Instruct --host 0.0.0.0 --port 30000 --tp 2 --dp 2 --mem-fraction-static 0.8 --quantization gptq_marlin --chunked-prefill-size 8192 --context-length 32768 --enable-dp-attention --enable-torch-compile
#### Error
Traceback (most recent call last):
File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 1748, in run_scheduler_process
scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, dp_rank)
File "/sgl-workspace/sglang/python/sglang/srt/managers/scheduler.py", line 218, in __init__
self.tp_worker = TpWorkerClass(
File "/sgl-workspace/sglang/python/sglang/srt/managers/tp_worker_overlap_thread.py", line 63, in __init__
self.worker = TpModelWorker(server_args, gpu_id, tp_rank, dp_rank, nccl_port)
File "/sgl-workspace/sglang/python/sglang/srt/managers/tp_worker.py", line 74, in __init__
self.model_runner = ModelRunner(
File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 166, in __init__
self.initialize(min_per_gpu_memory)
File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 207, in initialize
self.init_cuda_graphs()
File "/sgl-workspace/sglang/python/sglang/srt/model_executor/model_runner.py", line 881, in init_cuda_graphs
self.cuda_graph_runner = CudaGraphRunner(self)
File "/sgl-workspace/sglang/python/sglang/srt/model_executor/cuda_graph_runner.py", line 254, in __init__
raise Exception(
Exception: Capture cuda graph failed: shape mismatch: value tensor of shape [160, 4, 128] cannot be broadcast to indexing result of shape [160, 8, 128]
Possible solutions:
1. disable cuda graph by --disable-cuda-graph
2. set --mem-fraction-static to a smaller value (e.g., 0.8 or 0.7)
3. disable torch compile by not using --enable-torch-compile
4. set --cuda-graph-max-bs to a smaller value (e.g., 32)
Open an issue on GitHub https://github.com/sgl-project/sglang/issues/new/choose
### How to solve it | closed | 2025-03-17T05:36:25Z | 2025-03-17T08:11:42Z | https://github.com/sgl-project/sglang/issues/4489 | [] | White-Friday | 0 |
noirbizarre/flask-restplus | api | 817 | python3-flask-restplus_0.13.0.bb | Is python3-flask-restplus_0.13.0.bb provided?
I currently need it for a Yocto build.
"bug"
] | p35420102 | 0 |
aiortc/aiortc | asyncio | 334 | new offer/RTCPeerConnection from server side to client | Hi!
Would anyone be willing to share with me some tips on how to create additional RTCPeerConnections FROM the server to a client/peer?
(I am currently experimenting with the Server example)
For example once the first connection is created from the client/peer offer I would like to then create several additional connections to the peer FROM the server to send several video tracks (as I understand that it is not currently possible to add multiple tracks to a stream).
Is this sort of the idea?
- create a RTCPeerConnection instance (pc)
- create an offer (pc.createOffer()) and set it as the localDescription
- create a RemoteDescription using the existing connection's SDP
- ?
I imagine I would need to extend the implementation in the client.js file to handle an offer from the server (some handler for "onTrack"?)?
Or is it easier to send a message over the data channel to the client with something like "create new connection for track X" and have the client then create a new offer and add track X to it somehow?
Any help would be greatly appreciated!
Cheers
Adam
| closed | 2020-04-14T14:38:37Z | 2022-06-11T03:04:43Z | https://github.com/aiortc/aiortc/issues/334 | [
"question",
"stale"
] | adamteale | 6 |
huggingface/datasets | pytorch | 7,318 | Introduce support for PDFs | ### Feature request
The idea (discussed in the Discord server with @lhoestq ) is to have a Pdf type like Image/Audio/Video. For example [Video](https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py) was recently added and contains how to decode a video file encoded in a dictionary like {"path": ..., "bytes": ...} as a VideoReader using decord. We want to do the same with pdf and get a [pypdfium2.PdfDocument](https://pypdfium2.readthedocs.io/en/stable/_modules/pypdfium2/_helpers/document.html#PdfDocument).
### Motivation
In many cases PDFs contain very valuable information beyond text (e.g. images, figures). Support for PDFs would help create datasets where all the information is preserved.
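The decode-to-`pypdfium2.PdfDocument` idea described above could be sketched roughly as follows — the class shape and the lazy `pypdfium2` import are assumptions modeled on the existing `Video` feature, not the final design:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Pdf:
    """Hypothetical Pdf feature storing examples as {"path": ..., "bytes": ...}."""
    decode: bool = True

    def encode_example(self, value: Union[str, bytes, dict]) -> dict:
        if isinstance(value, str):          # a path on disk
            return {"path": value, "bytes": None}
        if isinstance(value, bytes):        # raw file contents
            return {"path": None, "bytes": value}
        if isinstance(value, dict) and {"path", "bytes"} <= value.keys():
            return value                    # already encoded
        raise TypeError(f"unsupported pdf value: {value!r}")

    def decode_example(self, value: dict):
        import pypdfium2  # assumed dependency, loaded lazily like decord for Video
        source = value["bytes"] if value["bytes"] is not None else value["path"]
        return pypdfium2.PdfDocument(source)

print(Pdf().encode_example("paper.pdf"))  # {'path': 'paper.pdf', 'bytes': None}
```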
### Your contribution
I can start the implementation of the Pdf type :) | open | 2024-12-10T16:59:48Z | 2024-12-12T18:38:13Z | https://github.com/huggingface/datasets/issues/7318 | [
"enhancement"
] | yabramuvdi | 6 |
scikit-learn/scikit-learn | data-science | 30,652 | Inconsistent FutureWarning when using `force_int_remainder_cols=True` in `ColumnTransformer` | ### Describe the bug
Calling fit on a pipeline that includes a `ColumnTransformer` step with `remainder="passthrough"` and `force_int_remainder_cols=True` (the default value as in v1.6) raises a
`FutureWarning:
The format of the columns of the 'remainder' transformer in ColumnTransformer.transformers_ will change in version 1.7 to match the format of the other transformers.`
Calling a cross-validation doesn't.
### Steps/Code to Reproduce
```python
import pandas as pd
from sklearn.compose import make_column_selector as selector
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OrdinalEncoder
from sklearn.ensemble import HistGradientBoostingClassifier
data = pd.DataFrame({
"quarters": ["Q1", "Q2", "Q3", "Q1", "Q3"],
"profit": [4.20, 7.70, 9.20, 4.26, 1.84],
"expenses": [3.32, 3.32, 3.32, 2.21, 2.21],
}
)
target = pd.Series([0, 1, 0, 1, 0])
categorical_columns_selector = selector(dtype_include=object)
categorical_columns = categorical_columns_selector(data)
categorical_preprocessor = OrdinalEncoder(
handle_unknown="use_encoded_value", unknown_value=-1
)
preprocessor = ColumnTransformer(
[("categorical", categorical_preprocessor, categorical_columns)],
remainder="passthrough",
)
model = make_pipeline(preprocessor, HistGradientBoostingClassifier())
model.fit(data, target) # raises FutureWarning
cross_validate(model, data, target, cv=2) # does not raise FutureWarning
```
### Expected Results
Warning should be raised when cross-validating as well.
At least for the first internal fit.
### Actual Results
Warning is not raised when cross-validating.
### Versions
```shell
Python dependencies:
sklearn: 1.6.1
pip: 24.3.1
setuptools: 75.6.0
numpy: 2.2.0
scipy: 1.14.1
Cython: None
pandas: 2.2.3
matplotlib: 3.10.0
joblib: 1.4.2
threadpoolctl: 3.5.0
``` | closed | 2025-01-15T16:20:16Z | 2025-01-20T14:43:42Z | https://github.com/scikit-learn/scikit-learn/issues/30652 | [
"Bug"
] | ArturoAmorQ | 3 |
oegedijk/explainerdashboard | dash | 292 | Update component plots when selecting data | Hello, I'm making a custom dashboard with ExplainerDashboard components and a map. The idea is to be able to select a region on the map to filter the data and recalculate the SHAP values, in order to understand a certain area's predictions by seeing the feature importances in that particular area. However, since I'm not an expert in Dash, I haven't been able to update the components. After being initialized correctly, once I select an area of the map and trigger the callback, the component plots end up empty. This is my (shortened) code:
dash.py (omitting initial setup)
```
app = Dash(__name__)
server = app.server
map_tab = RegressionDashboard(consolidated, eb_explainer, model, model_type, name="Regression Dashboard", app=app)
app.layout = html.Div([
map_tab.layout()
])
map_tab.register_callbacks(app)
if __name__ == "__main__":
log.info('Starting dashboard server ...')
app.run(port=6660, host='0.0.0.0')
```
regression_dashboard.py
```
class RegressionDashboard(ExplainerComponent):
def __init__(self, consolidated, explainer, model, model_type, app, source_crs='EPSG:32719',name=None,**kwargs):
super().__init__(explainer, title="Map")
# a lot of self.(something) lines
self.contrib = ShapContributionsGraphComponent(explainer,
hide_selector=True, hide_cats=True,
hide_depth=True, hide_sort=True,
**kwargs)
self.shap_summary = ShapSummaryComponent(explainer, hide_selector=True, hide_cats=True,
hide_depth=True, hide_sort=True, hide_type=True,
**kwargs) #Feature importances basically, edit title
self.shap_dependance = ShapDependenceComponent(explainer, hide_selector=True, hide_cats=True,
hide_depth=True, hide_sort=True, plot_sample=100000,
**kwargs)
self.shap_dependance_connector = ShapSummaryDependenceConnector(self.shap_summary, self.shap_dependance)
#terrible layout, just for testing purposes
def layout(self):
self.map_fig = self.create_map()
return html.Div(
html.Div([
html.Div(
dcc.Graph(figure=self.map_fig, id="preds_map", style={'height': '45vh'}),
style={
'width': '50%',
'display': 'inline-block',
'border': 'thin lightgrey solid',
'boxSizing': 'border-box',
'height': '50vh'
}
),
html.Div([
self.contrib.layout(),
self.shap_summary.layout(),
self.shap_dependance.layout(),
],
)
],
style={
'width': '100%',
'height': '60vh'
}),
id='layout-container')
def update_layout_components(self):
return html.Div([
html.Div(
dcc.Graph(figure=self.map_fig, id="preds_map", style={'height': '45vh'}),
style={
'width': '50%',
'display': 'inline-block',
'border': 'thin lightgrey solid',
'boxSizing': 'border-box',
'height': '50vh'
}
),
html.Div([
self.contrib.layout(),
self.shap_summary.layout(),
self.shap_dependance.layout(),
]),
],
style={
'width': '100%',
'height': '60vh'
})
def create_map(self, filtered_data = None, max_points = None):
#map code, irrelevant
return fig
def transform_coordinates(self, df, x_col, y_col, source_crs):
# transform coordinates from one system to another, irrelevant
return df
#I want to filter by coordinates but right now I'm just trying to update the plots by just making a
# random subsample of the data to prove
# the plots are updating
def update_components(self):
predictor = self.model.steps[-1][1]
X_transformed, blockids = consolidated_to_X(self.consolidated.sample(n=3000, random_state=42), self.model)
X_transformed.drop(['long', 'lat'], axis=1, inplace=True)
explainer = RegressionExplainer(model=predictor, X=X_transformed, n_jobs=-1, index_name="Block ID",
precision="float32", target="DEPVAR")
shap_explainer = shap.Explainer(predictor, X_transformed)
shap_values = shap_explainer.shap_values(X_transformed, check_additivity=False, approximate=True)
base_values = shap_explainer.expected_value
explainer.set_shap_values(base_values, shap_values)
self.contrib = ShapContributionsGraphComponent(explainer,
hide_selector=True, hide_cats=True,
hide_depth=True, hide_sort=True,
)
self.shap_summary = ShapSummaryComponent(explainer, hide_selector=True, hide_cats=True,
hide_depth=True, hide_sort=True, hide_type=True,
) #Feature importances basically, edit title
self.shap_dependance = ShapDependenceComponent(explainer, hide_selector=True, hide_cats=True,
hide_depth=True, hide_sort=True, plot_sample=100000,
)
self.shap_dependance_connector = ShapSummaryDependenceConnector(self.shap_summary, self.shap_dependance)
def component_callbacks(self, app):
@app.callback(
Output('layout-container', 'children'),
Input('preds_map', 'selectedData'),
prevent_initial_call=True)
def update_selected_data(selectedData):
if not selectedData:
raise PreventUpdate
self.update_components()
new_layout = self.update_layout_components()
return new_layout
```
What am I missing here? I know there's probably a lot of unnecesary code here and it's really messy, but I'm really losing my mind over this. Any help is greatly appreciated. Thanks! | closed | 2023-12-28T20:02:37Z | 2024-01-25T14:43:40Z | https://github.com/oegedijk/explainerdashboard/issues/292 | [] | soundgarden134 | 3 |
labmlai/annotated_deep_learning_paper_implementations | pytorch | 262 | mha.py array shapes | I wonder why the array shapes in mha are (C, B, D) rather than (B, C, D). I thought it was the convention that the batch is the first dimension. Specifically, here are the first few lines of the `forward` method of class `MultiHeadAttention`:
```
def forward(self, *,
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
mask: Optional[torch.Tensor] = None):
"""
`query`, `key` and `value` are the tensors that store
collection of *query*, *key* and *value* vectors.
They have shape `[seq_len, batch_size, d_model]`. <<<<<<<<
`mask` has shape `[seq_len, seq_len, batch_size]` and
`mask[i, j, b]` indicates whether for batch `b`,
query at position `i` has access to key-value at position `j`.
"""
```
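On the convention itself: if memory serves, PyTorch's own `nn.MultiheadAttention` also defaulted to `(seq_len, batch, d_model)` for a long time, with `batch_first=True` added only later, and moving between the two layouts is just a transpose of the first two dimensions (`x.transpose(0, 1)` in PyTorch). A dependency-free sketch of the shape bookkeeping:

```python
def to_seq_first(shape):
    """(batch, seq_len, d_model) -> (seq_len, batch, d_model)."""
    b, s, d = shape
    return (s, b, d)

def to_batch_first(shape):
    """(seq_len, batch, d_model) -> (batch, seq_len, d_model)."""
    s, b, d = shape
    return (b, s, d)

print(to_seq_first((32, 128, 512)))    # (128, 32, 512)
print(to_batch_first((128, 32, 512)))  # (32, 128, 512)
```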
Thanks. | open | 2024-07-13T02:59:37Z | 2024-11-14T03:09:35Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/262 | [] | erlebach | 1 |
psf/black | python | 3,758 | cannot parse assignment expression in preview style since 23.1 | Hello,
I seem to have found a bug in black's preview style regarding assignment expressions, that has been present since version 23.1.0.
**Describe the bug**
When using an assignment expression in my code example, black with preview=true complains it cannot parse the line. Black preview=false accepts the code happily and leaves no changes.
**To Reproduce**
Here's my code example:
```python
# file.py
from pydriller import Commit
commits: list[Commit] = []
update_hashes: list[str] = []
upstream_messages: list[str] = []
parsed = [
{
"hash": commit.hash,
"author": f"{commit.author.name} <{commit.author.email}>",
# black 23.1 --preview can't parse the following line:
"is_update": (up := commit.hash in update_hashes),
"is_upstream": up and commit.msg in upstream_messages,
}
for commit in commits
]
```
And run it with these arguments:
```sh
$ black file.py --target-version py311 --preview
```
The resulting error is:
> `cannot format file.py: Cannot parse: 12:24: "is_update": up := commit.hash in update_hashes,`
**Expected behavior**
It should parse the line like it does with preview=false.
Also, look at the error above. The line of code shown doesn't include the parentheses like it does in my source. Without the parens, cpython can't parse it either!
**Environment**
- Black's version: 23.3, also tested on main branch at g839ef35.
- OS and Python version: Linux/Python 3.11.4
**Additional context**
I love you lots <3
| closed | 2023-06-30T22:52:59Z | 2024-01-17T19:04:16Z | https://github.com/psf/black/issues/3758 | [
"T: bug",
"C: preview style"
] | wizpig64 | 2 |
miguelgrinberg/python-socketio | asyncio | 852 | I am trying to integrate a Rasa bot into a website and an auth error occurs | **Describe the bug**
<html>
<body>
<div id="rasa-chat-widget" data-websocket-url="http://localhost:5005/"></div>
<script src="https://unpkg.com/@rasahq/rasa-chat" type="application/javascript"></script>
</body>
</html>
The above code is available on the Rasa website; I am using it to integrate the chatbot into a website.
Rasa Version : 2.8.16
Minimum Compatible Version: 2.8.9
Rasa SDK Version : 2.8.3
Rasa X Version : None
Python Version : 3.6.8
Operating System : Windows-10-10.0.19041-SP0
Python 3.6.8
connect async handler error
Traceback (most recent call last):
File "c:\program files\python36\lib\site-packages\engineio\asyncio_server.py", line 423, in _trigger_event
ret = await self.handlers[event](*args)
File "c:\program files\python36\lib\site-packages\socketio\asyncio_server.py", line 519, in _handle_eio_connect
return await self._handle_connect(sid, '/')
File "c:\program files\python36\lib\site-packages\socketio\asyncio_server.py", line 419, in _handle_connect
self.environ[sid])
File "c:\program files\python36\lib\site-packages\socketio\asyncio_server.py", line 501, in _trigger_event
ret = await self.handlers[namespace][event](*args)
TypeError: connect() missing 1 required positional argument: 'auth'
This is the error I see in the command prompt,
and this error is shown in the browser inspector:
rasa-chat:2
GET http://localhost:5005/socket.io/?EIO=4&transport=polling&t=Nvdoisu net::ERR_CONNECTION_REFUSED
| closed | 2022-01-17T11:30:31Z | 2022-04-29T23:32:20Z | https://github.com/miguelgrinberg/python-socketio/issues/852 | [
"documentation"
] | HarshMagiya7 | 7 |
open-mmlab/mmdetection | pytorch | 12,014 | SwinL weights for Grounding DINO | Hello!
I wanted to ask you if there is a specific reason why you have not released weights for mmdetection Grounding DINO with the SwinL transformer as the backbone.
I guess it could be done easily by just applying this converter, no? https://github.com/open-mmlab/mmdetection/blob/main/tools/model_converters/groundingdino_to_mmdet.py Or maybe I am missing something.
Thanks 😄 | open | 2024-10-24T13:17:25Z | 2024-10-24T13:17:42Z | https://github.com/open-mmlab/mmdetection/issues/12014 | [] | german36-del | 0 |
slackapi/python-slack-sdk | asyncio | 791 | Bad redirect URI error | i have follow this url for authentication
https://slack.dev/python-slackclient/auth.html
But below code return error `{'ok': False, 'error': 'bad_redirect_uri'}`
# Request the auth tokens from Slack
response = client.oauth_v2_access(
client_id=client_id,
client_secret=client_secret,
code=code_param
)
print(response) | closed | 2020-08-31T15:58:49Z | 2023-03-25T18:53:19Z | https://github.com/slackapi/python-slack-sdk/issues/791 | [
"question"
] | chiragkanhasoft | 2 |
microsoft/nni | machine-learning | 5,709 | Adding back the sensitivity analysis tool to v3.0 | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
Are there any plans to add back the sensitivty analysis tool to v3.0 ?
**Why is this needed**:
It would be great to have this tool back as a debug tool, to target better the layers that are not sensible to pruning. This tool was available in the previous versions (until [v2.6](https://nni.readthedocs.io/en/v2.6/Compression/CompressionUtils.html#sensitivity-analysis)) but was removed later on.
**Without this feature, how does current nni work**:
Without this tool, it's more difficult to identify which layers can be pruned further instead of using a uniform sparsity rate for the whole network without goint into the tedious process of trial & error. Which can be a big overhead for big models with long evaluation processes.
**Components that may involve changes**:
the utils package
**Brief description of your proposal if any**:
| open | 2023-11-08T15:01:19Z | 2023-11-08T15:01:19Z | https://github.com/microsoft/nni/issues/5709 | [] | mehdi-nait | 0 |
strawberry-graphql/strawberry-django | graphql | 625 | Ordering does not work properly with federation gateway | I'm not sure if this is a problem with this library but definitely the current ordering design doesn't work with @apollo/gateway.
The sort order depends on the order of the keys: when requesting the server directly it works as it should, but when requesting through the federation gateway the order of the keys changes to alphabetical.
Perhaps the ordering body should be an array to avoid dependence on the order of the keys.
I understand that this is a big change in design but in this case we are dependent on how the dictionaries are implemented in one or another part of the system.
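To illustrate the difference (shapes hypothetical): a dict-based ordering relies on key insertion order, which a gateway is free to normalize, while a list-based ordering survives any re-serialization:

```python
import json

# Dict-based: meaning lives in key order, which JSON itself does not guarantee
# and which @apollo/gateway apparently rewrites alphabetically.
dict_order = {"name": "ASC", "age": "DESC"}
alphabetized = json.loads(json.dumps(dict_order, sort_keys=True))
print(list(alphabetized))            # ['age', 'name'] -- sort intent lost

# List-based: the order is explicit data, not an accident of serialization.
list_order = [
    {"field": "name", "direction": "ASC"},
    {"field": "age", "direction": "DESC"},
]
roundtripped = json.loads(json.dumps(list_order, sort_keys=True))
print([o["field"] for o in roundtripped])  # ['name', 'age'] -- preserved
```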
django-import-export/django-import-export | django | 1,205 | AutoField and pk fk with type Varchar - CharField | I have models
```
class Users(models.Model):
user_login = models.CharField(primary_key=True, max_length=255, verbose_name="Логин")
user_password = models.CharField(max_length=255, verbose_name="Пароль")
first_name = models.CharField(max_length=255, verbose_name="Имя")
middle_name = models.CharField(max_length=255, blank=True, null=True, verbose_name="Фамилия")
sur_name = models.CharField(max_length=255, verbose_name="Отчество")
birth_date = models.DateField(verbose_name="Дата рождения")
phone_num = models.CharField(max_length=255, verbose_name="Номер телефона", unique=True)
email_addr = models.EmailField(max_length=255, verbose_name="E-mail", unique=True)
logical_delete_status = models.BooleanField(default=False, verbose_name="Логическое удаление")
user_image_src = models.ImageField(max_length=255, upload_to=upload_location, verbose_name="Путь до аватарки пользователя")
def __str__(self):
return self.user_login
class Meta:
verbose_name = "Пользователь"
verbose_name_plural = "Пользователи"
db_table = 'users'
def image_img(self):
if self.user_image_src:
from django.utils.safestring import mark_safe
return mark_safe(u'<a href="{0}" target="_blank"><img src="{0}" width="100"/></a>'.format(self.user_image_src.url))
else:
return '(Нет изображения)'
image_img.short_description = 'Картинка'
image_img.allow_tags = True
class Workgroups(models.Model):
workgroup_id = models.AutoField(primary_key=True, verbose_name="Код группы")
workgroup_name = models.CharField(unique=True, max_length=255, verbose_name="Название группы")
def __str__(self):
return '%s' % self.workgroup_name
class Meta:
db_table = 'workgroups'
verbose_name = 'Рабочая группа'
verbose_name_plural = 'Рабочие группы'
```
And in admin.py
```
class WorkGroupsResource(resources.ModelResource):
class Meta:
model = models.Workgroups
# fields = ('workgroup_name')
exclude = ('workgroup_id', )
import_id_fields = ('workgroup_id', )
class WorkgroupsModel(ImportExportModelAdmin, admin.ModelAdmin):
resources = WorkGroupsResource
```
But export doesn't exclude `workgroup_id`, and import doesn't work, failing with this error:
Line number: 1 - 'id'
1, Test
Traceback (most recent call last):
File "/home/artem/Desktop/djangoAdminDiplom/env/lib/python3.8/site-packages/import_export/resources.py", line 639, in import_row
instance, new = self.get_or_init_instance(instance_loader, row)
File "/home/artem/Desktop/djangoAdminDiplom/env/lib/python3.8/site-packages/import_export/resources.py", line 334, in get_or_init_instance
instance = self.get_instance(instance_loader, row)
File "/home/artem/Desktop/djangoAdminDiplom/env/lib/python3.8/site-packages/import_export/resources.py", line 321, in get_instance
import_id_fields = [
File "/home/artem/Desktop/djangoAdminDiplom/env/lib/python3.8/site-packages/import_export/resources.py", line 322, in <listcomp>
self.fields[f] for f in self.get_import_id_fields()
KeyError: 'id'
Import data
[
{
"workgroup_id": 1,
"workgroup_name": "TestImport"
}
]
second try
[
{
"workgroup_name": "TestImport"
}
]
etc
[
{
"workgroup_id": null,
"workgroup_name": "TestImport"
}
] | closed | 2020-10-31T11:23:57Z | 2020-11-09T13:27:30Z | https://github.com/django-import-export/django-import-export/issues/1205 | [
"question"
] | Artemka-py | 0 |
yihong0618/running_page | data-visualization | 788 | Can Strava data be synced to Keep? | closed | 2025-03-05T13:17:38Z | 2025-03-12T03:16:35Z | https://github.com/yihong0618/running_page/issues/788 | [] | chensoul | 1 |
matplotlib/mplfinance | matplotlib | 365 | Marking / Highlighting After Hours | I'm working on styling for my charts and I'm wondering if there's a good way to highlight / grey out / mark after hours trading periods in some way like some graphing programs do.
Here's an example of one of my charts at the moment:

Here's a quick photoshop markup of what I want to do:

I'm currently pulling from an api that gives data on a multitude of periods and intervals. I saw there was functionality for highlighting between timestamps with the fill_between feature. However, I'm a bit stumped on how to make sure I cover all after hours periods in any given period. Any pointers in the right direction on doing this properly would be greatly appreciated! | open | 2021-03-22T04:31:21Z | 2021-04-26T20:55:48Z | https://github.com/matplotlib/mplfinance/issues/365 | [
"question"
] | Jellayy | 4 |
iterative/dvc | data-science | 9,723 | dvc.api.params_show: LockError: Unable to acquire lock - when running multiple processes with `torchrun` | # Bug Report
## Description
I'm running a standard `torchrun` to kick off my python script, and the first thing I do is grab the parameters from dvc using a line like this:
`dvc_params = dvc.api.params_show(stages=dvc_stage_name)`
Of course, that takes a dvc lock under the covers, and apparently that takes too long sometimes, because I am getting this error:
`LockError: Unable to acquire lock. Most likely another DVC process is running or
was terminated abruptly. Check the page
<https://dvc.org/doc/user-guide/troubleshooting#lock-issue> for other
possible reasons and to learn how to resolve this.`
When you use `torchrun`, it kicks off as many processes as there are GPUs, so 8 in this case. So I expect that it would take just a little while for each process to run, although frankly less than the default lock timeout of what appears to be 3 seconds, but I don't know what all dvc is doing under the covers when I call that.
Is there a better way for me to grab the parameters somehow without risking a lock timeout? I can't just look at the params file because I am using the ability to override parameters on the command line.
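As a stop-gap, one could wrap the call in a retry with back-off so the eight processes serialize on the lock instead of failing — the exception to catch is an assumption here; check what `dvc.api.params_show` actually raises:

```python
import random
import time

def with_lock_retry(load_params, retries=8, base_delay=0.5):
    """Call `load_params()` and retry on transient lock errors (sketch)."""
    for attempt in range(retries):
        try:
            return load_params()
        except Exception:  # stand-in for dvc's LockError
            if attempt == retries - 1:
                raise
            # Jittered back-off so the ranks don't stampede the lock again.
            time.sleep(base_delay * (attempt + 1) * (0.5 + random.random()))

# Toy loader that fails twice before succeeding:
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("Unable to acquire lock")
    return {"lr": 0.001}

result = with_lock_retry(flaky, base_delay=0.01)
print(result)  # {'lr': 0.001}
```

A cleaner fix in a torchrun job might be to let only local rank 0 read the params and broadcast them to the other ranks, but that requires the process group to be initialized first.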
### Reproduce
Launch a python script 8 times simultaneously, with each one calling: `dvc_params = dvc.api.params_show(stages=dvc_stage_name)`
### Expected
I could either avoid the lock timeout by specifying I'm okay with a longer timeout, or this would be fast enough that I could call it across 8 processes without getting a timeout error.
| open | 2023-07-11T18:25:47Z | 2023-07-19T13:18:50Z | https://github.com/iterative/dvc/issues/9723 | [
"p2-medium",
"A: api"
] | Taytay | 5 |
sgl-project/sglang | pytorch | 4,594 | [Bug] cannot load prequantized model with scalar weight scale | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
Right now, after loading the model and converting the weight scale to channel-wise, there's an implicit assumption that the weight scale tensors in the model weights are 1-D tensors. This is not the case for modelopt-quantized FP8 on FP8-cutlass-supported hardware, since `QKVParallelLinear` will go through a requantization to the same scale.
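For illustration, a guard along these lines before the channelwise conversion would avoid the crash (sketch only, using numpy arrays as a stand-in for torch tensors; `ensure_1d` is a hypothetical helper, not existing sglang code):

```python
import numpy as np

def ensure_1d(weight_scale):
    """Promote a scalar (0-dim) weight scale, as modelopt emits for fused QKV
    after requantization to a single scale, to a length-1 vector so the
    per-shard indexing in convert_to_channelwise (weight_scale[idx]) works."""
    return weight_scale.reshape(1) if weight_scale.ndim == 0 else weight_scale

print(ensure_1d(np.array(0.025)).shape)       # scalar scale -> (1,)
print(ensure_1d(np.array([0.1, 0.2])).shape)  # already 1-D -> unchanged (2,)
```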
### Reproduction
```python
import sglang as sgl

if __name__ == '__main__':
    llm = sgl.Engine(
        model_path="nvidia/Llama-3.1-8B-Instruct-FP8",
        quantization="modelopt",
        revision="13858565416dbdc0b4e7a4a677fadfbd5b9e5bb9",
        log_level="debug",
    )
```
Error:
```
[2025-03-19 20:37:24 TP0] Scheduler hit an exception: Traceback (most recent call last):
File "/home/jobuser/sglang/python/sglang/srt/managers/scheduler.py", line 1809, in run_scheduler_process
scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, dp_rank)
File "/home/jobuser/sglang/python/sglang/srt/managers/scheduler.py", line 227, in __init__
self.tp_worker = TpWorkerClass(
File "/home/jobuser/sglang/python/sglang/srt/managers/tp_worker_overlap_thread.py", line 63, in __init__
self.worker = TpModelWorker(server_args, gpu_id, tp_rank, dp_rank, nccl_port)
File "/home/jobuser/sglang/python/sglang/srt/managers/tp_worker.py", line 74, in __init__
self.model_runner = ModelRunner(
File "/home/jobuser/sglang/python/sglang/srt/model_executor/model_runner.py", line 168, in __init__
self.initialize(min_per_gpu_memory)
File "/home/jobuser/sglang/python/sglang/srt/model_executor/model_runner.py", line 178, in initialize
self.load_model()
File "/home/jobuser/sglang/python/sglang/srt/model_executor/model_runner.py", line 383, in load_model
self.model = get_model(
File "/home/jobuser/sglang/python/sglang/srt/model_loader/__init__.py", line 22, in get_model
return loader.load_model(
File "/home/jobuser/sglang/python/sglang/srt/model_loader/loader.py", line 382, in load_model
quant_method.process_weights_after_loading(module)
File "/home/jobuser/sglang/python/sglang/srt/layers/quantization/modelopt_quant.py", line 169, in process_weights_after_loading
max_w_scale = convert_to_channelwise(max_w_scale, layer.logical_widths)
File "/home/jobuser/sglang/python/sglang/srt/layers/quantization/utils.py", line 81, in convert_to_channelwise
weight_scale_channel[start:end, :] = weight_scale[idx]
IndexError: invalid index of a 0-dim tensor. Use `tensor.item()` in Python or `tensor.item<T>()` in C++ to convert a 0-dim tensor to a number
```
### Environment
```
Python: 3.10.14 (main, Jul 14 2024, 22:24:12) [GCC 11.2.0]
CUDA available: True
GPU 0: NVIDIA H100 80GB HBM3
GPU 0 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.6, V12.6.77
CUDA Driver Version: 550.54.15
PyTorch: 2.5.1+cu124
sglang: 0.4.4.post1
sgl_kernel: 0.0.5.post3
flashinfer: 0.2.3
triton: 3.1.0
transformers: 4.48.3
torchao: 0.9.0
numpy: 1.26.4
aiohttp: 3.11.14
fastapi: 0.115.11
hf_transfer: 0.1.9
huggingface_hub: 0.29.3
interegular: 0.3.3
modelscope: 1.24.0
orjson: 3.10.15
packaging: 24.2
psutil: 7.0.0
pydantic: 2.10.6
multipart: 0.0.20
zmq: 26.3.0
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
openai: 1.66.3
tiktoken: 0.9.0
anthropic: 0.49.0
decord: 0.6.0
NVIDIA Topology:
GPU0 NIC0 NIC1 NIC2 NIC3 NIC4 NIC5 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PIX SYS SYS SYS SYS SYS 0-63,128-191 0 N/A
NIC0 PIX X SYS SYS SYS SYS SYS
NIC1 SYS SYS X PIX SYS SYS SYS
NIC2 SYS SYS PIX X SYS SYS SYS
NIC3 SYS SYS SYS SYS X SYS SYS
NIC4 SYS SYS SYS SYS SYS X SYS
NIC5 SYS SYS SYS SYS SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NIC1: mlx5_1
NIC2: mlx5_2
NIC3: mlx5_3
NIC4: mlx5_4
NIC5: mlx5_5
ulimit soft: 10000000
``` | closed | 2025-03-19T20:44:35Z | 2025-03-22T07:47:54Z | https://github.com/sgl-project/sglang/issues/4594 | [] | yundai424 | 0 |
matplotlib/matplotlib | matplotlib | 28,922 | [Bug]: Title of middle subplot does not auto-wrap like subplots on edge | ### Bug summary
When creating a grid of subplots that all share x and y axes, and then the title of each subplot is set with auto-wrapping requested, the title of the plot in the middle does not wrap properly.
### Code for reproduction
```Python
import matplotlib.pyplot as plt
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True, sharex=True)
y_lims = (-12, 12)
x_lims = (-5, 12)
ax1.set_ylim(*y_lims)
ax1.set_xlim(*x_lims)
ax1.set_title("A long long long long title that needs to wrap", wrap=True)
ax2.set_title("Another long long long title that will need to wrap", wrap=True)
ax3.set_title("The last long long long title that will need to wrap", wrap=True)
fig.tight_layout()
fig.show()
```
### Actual outcome

### Expected outcome
The title of the middle subplot should wrap like the subplots to either side of it.
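(For what it's worth, pre-wrapping the titles with the stdlib `textwrap` module works around it; the width of 25 below is just an arbitrary guess for this figure size:)

```python
import textwrap

def wrap_title(text: str, width: int = 25) -> str:
    """Pre-wrap a title manually so Matplotlib's wrap=True isn't needed."""
    return "\n".join(textwrap.wrap(text, width))

print(wrap_title("A long long long long title that needs to wrap"))
```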
### Additional information
_No response_
### Operating system
Ubuntu 22.04.5 LTS
### Matplotlib Version
3.9.2
### Matplotlib Backend
qtagg
### Python version
3.10.14
### Jupyter version
_No response_
### Installation
pip | closed | 2024-10-02T16:07:43Z | 2024-10-02T17:10:13Z | https://github.com/matplotlib/matplotlib/issues/28922 | [] | bielsnohr | 2 |
adap/flower | scikit-learn | 4,346 | Train stops when a client fails | ### Describe the bug
When a round encounters failures because the gRPC bridge is closed for one of the clients, the whole training stops.
First, it wasn't doing evaluation after fitting. Thus, I disabled evaluation.
Now if the first round has failures, the second round doesn't start!
### Steps/Code to Reproduce
I am using the code example here
https://flower.ai/docs/examples/embedded-devices.html
Most of the time, my topology works, but when the gRPC bridge closes (not sure why), the training stops.
### Expected Results
The training should continue, ignoring the failed devices when accept_failures is True. Or, the server should try to create a new gRPC connection (bridge).
### Actual Results
The training doesn't continue when accept_failures is True. | closed | 2024-10-21T16:00:37Z | 2025-03-12T16:30:28Z | https://github.com/adap/flower/issues/4346 | [
"bug",
"part: examples"
] | oabuhamdan | 2 |
marcomusy/vedo | numpy | 1,230 | Class Line's find_index_at_position() method returns wrong indices in some instances | I just came across this: the find_index_at_position method of the Line Class seems to return faulty indices. My intent was to enter a new point at a certain position on the Line and to make that point the very first one of the vertices.
Only for some of my lines, the returned index was more than 200 off the correct value. I am so sorry, i cannot share the line data but i can try to share the code reproduce it? if it helps i can try to send the exact line i am showcasing here.
I tried my best to make some kind of minimal example:
```
import math
from itertools import chain

import vedo

# lines_long and tens_liner_instance come from my own (unshareable) data
lines_long_after_seam_order = []
eval_fraction = tens_liner_instance.seam_position
line = lines_long[27]
eval_fraction = 0.28  # overrides the value above for this example
new_idx_0_point = line.eval(eval_fraction)
idx_fraction = line.find_index_at_position(new_idx_0_point)
idx_before = math.floor(idx_fraction)
idx_after = math.ceil(idx_fraction)
vertices_new = [[new_idx_0_point], line.vertices[idx_after:-1], line.vertices[0:idx_before]]
vertices_new = list(chain.from_iterable(vertices_new))
line_new = vedo.Line(vertices_new, closed=True)
lines_long_after_seam_order.append(line_new)

idx = 0
labs = lines_long_after_seam_order[idx].labels2d()
vedo.show([lines_long_after_seam_order[idx], labs])
```
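A brute-force cross-check like this (plain Python, no vedo; just my own throwaway helper) could confirm which vertex index is actually closest to the sampled point:

```python
def nearest_vertex_index(vertices, point):
    """Index of the vertex closest to `point`, by squared Euclidean distance."""
    def dist2(v):
        return sum((a - b) ** 2 for a, b in zip(v, point))
    return min(range(len(vertices)), key=lambda i: dist2(vertices[i]))

# e.g. nearest_vertex_index(line.vertices.tolist(), new_idx_0_point)
# should land near math.floor/ceil of find_index_at_position's result.
```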
As the image shows, the sampled point was not between the identified indices, but somewhere entirely else:
<img width="667" alt="Image" src="https://github.com/user-attachments/assets/9eee8a97-22d7-4ced-8800-65cebdf6356a" /> | closed | 2025-03-03T22:28:29Z | 2025-03-05T15:35:19Z | https://github.com/marcomusy/vedo/issues/1230 | [
"bug"
] | natabma | 2 |
pinry/pinry | django | 255 | Docker pinry - cant login | Hi all,
I've been battling this for a few hours now.
I'm not new at all to Linux server config, but I am new to using Docker.
I've followed the docs...
"docker pull getpinry/pinry" - all good
"docker run -d=true -p=80:80 -v=/opt/docker-data/pinry:/data pinry/pinry"
This seemed to download the container again? should this be "getpinry/pinry" too?
Anyway, it appears to start fine. After that i can open the local pinry webpage, but i cant login.
So, first, I can't find a "local_settings.py" file anywhere, and the docs don't say where it should be located.
I tried creating the file in "/opt/docker-data/pinry/local_settings.py", then restarting the docker container. No change.
So where should this go?
I also tried creating the superuser as noted on the "Updating Passwords" doc page.
I open a docker container shell using "docker exec -it <containername> /bin/bash".
I get:
```
root@17c4f0bede75:/srv/www/pinry# python manage.py createsuperuser --settings=pinry.settings.docker
Traceback (most recent call last):
  File "manage.py", line 8, in <module>
    from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
root@17c4f0bede75:/srv/www/pinry#
```
So this is a bunch of fail so far.
... no wonder I've read some reports around the net of this being difficult to set up :(
Any help would be appreciated. Thanks. | closed | 2021-03-05T11:05:04Z | 2021-03-08T14:00:15Z | https://github.com/pinry/pinry/issues/255 | [] | MWP | 8 |
Lightning-AI/pytorch-lightning | pytorch | 20,184 | MLFlowLogger does not save config.yaml for each run | ### Bug description
The `MLFlowLogger` seems to save the `config.yaml` in the top-level `save_dir` (e.g. `./mlruns`) directory (not even inside the experiment directory), instead of the specific run directory as for the other loggers. See below for minimal example. When running the same experiment twice, this results in an error because the `config.yaml` already exists.
Here is an example folder structure where you can see the `config.yaml` being at the top-level.
```shell
mlruns/
├── 557060468949431600 (experiment ID)
│ ├── 14625fca5e654f7faff19061b1ed44fa (run ID)
│ ├── 8b0a025336d6492391929adb37c18d2b (run ID)
│ └── meta.yaml
└── config.yaml
```
**Expected behavior:** just like with the default logger, we expect the `config.yaml` to be saved for inside the directory of each run of the given experiment.
```shell
mlruns/
└── 519079607625374876 (experiment ID)
├── 71d8f4b93eac490c8046d07bf7b49d31 (run ID)
│ ├── ...
│ └── config.yaml
├── 81a4e345f552487ea0d591e6bc14c881 (run ID)
│ ├── ...
│ └── config.yaml
└── meta.yaml
```
**Solution idea:** two lines of interest seem to be:
- [lightning/pytorch/loggers/mlflow.py#L302](https://github.com/Lightning-AI/pytorch-lightning/blob/master/src/lightning/pytorch/loggers/mlflow.py#L302)
- [lightning/pytorch/trainer/trainer.py#L1227](https://github.com/Lightning-AI/pytorch-lightning/blob/master/src/lightning/pytorch/trainer/trainer.py#L1227)
**Workaround 1:** we can just avoid the error with `LightningCLI(save_config_kwargs={"overwrite": True})` as suggested in the error message. However this does not save the config per-run.
**Workaround 2:** We can override [cli.SaveConfigCallback.save_config](https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.cli.SaveConfigCallback.html#lightning.pytorch.cli.SaveConfigCallback) to set `save_to_log_dir=False`, and implement logic to save in the correct folder by using the experiment ID and run ID.
```python
from pathlib import Path

from lightning.fabric.utilities.cloud_io import get_filesystem
from lightning.pytorch.cli import LightningCLI, SaveConfigCallback
from lightning.pytorch.demos.boring_classes import DemoModel, BoringDataModule


class MLFlowSaveConfigCallback(SaveConfigCallback):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.save_to_log_dir = False

    def save_config(self, trainer, pl_module, stage):
        dir_runs = Path(trainer.logger.save_dir)
        dir_run = dir_runs / trainer.logger.experiment_id / trainer.logger.run_id
        path_config = dir_run / self.config_filename
        fs = get_filesystem(dir_run)
        fs.makedirs(dir_run, exist_ok=True)
        self.parser.save(
            self.config, path_config, skip_none=False, overwrite=self.overwrite, multifile=self.multifile
        )


def cli_main():
    LightningCLI(DemoModel, BoringDataModule,
                 save_config_callback=MLFlowSaveConfigCallback)


if __name__ == "__main__":
    cli_main()
```
### What version are you seeing the problem on?
v2.4
### How to reproduce the bug
With the files below, run `python main.py fit --config config.yaml` twice. The first run will succeed, and the second one will fail with the error message below.
**main.py**
```python
from lightning.pytorch.cli import LightningCLI
from lightning.pytorch.demos.boring_classes import DemoModel, BoringDataModule


def cli_main():
    LightningCLI(DemoModel, BoringDataModule)


if __name__ == "__main__":
    cli_main()
```
**config.yaml**
```yaml
# lightning.pytorch==2.4.0
trainer:
logger:
class_path: lightning.pytorch.loggers.MLFlowLogger
```
### Error messages and logs
```shell
RuntimeError: SaveConfigCallback expected ./mlruns/config.yaml to NOT exist. Aborting to avoid overwriting results of a previous run. You can delete the previous config file, set `LightningCLI(save_config_callback=None)` to disable config saving, or set `LightningCLI(save_config_kwargs={"overwrite": True})` to overwrite the config file.
```
### Environment
<details>
<summary>Current environment</summary>
* CUDA:
- GPU:
- NVIDIA RTX 2000 Ada Generation Laptop GPU
- available: True
- version: 12.1
* Lightning:
- efficientnet-pytorch: 0.7.1
- lightning: 2.4.0
- lightning-utilities: 0.11.3.post0
- pytorch-lightning: 2.3.1
- segmentation-models-pytorch: 0.3.3
- torch: 2.3.1
- torchgeo: 0.5.2
- torchmetrics: 1.4.0.post0
- torchvision: 0.18.1
* Packages:
- aenum: 3.1.15
- affine: 2.4.0
- aiohttp: 3.9.5
- aiosignal: 1.3.1
- albucore: 0.0.12
- albumentations: 1.4.10
- alembic: 1.13.2
- aniso8601: 9.0.1
- annotated-types: 0.7.0
- antlr4-python3-runtime: 4.9.3
- asttokens: 2.4.1
- async-timeout: 4.0.3
- attrs: 23.2.0
- basemap: 1.4.1
- basemap-data: 1.3.2
- bitsandbytes: 0.43.1
- blinker: 1.8.2
- cachetools: 5.3.3
- certifi: 2024.6.2
- charset-normalizer: 3.3.2
- click: 8.1.7
- click-plugins: 1.1.1
- cligj: 0.7.2
- cloudpickle: 3.0.0
- comm: 0.2.2
- contourpy: 1.2.1
- cycler: 0.12.1
- databricks-sdk: 0.29.0
- debugpy: 1.8.2
- decorator: 5.1.1
- deprecated: 1.2.14
- docker: 7.1.0
- docstring-parser: 0.16
- efficientnet-pytorch: 0.7.1
- einops: 0.8.0
- entrypoints: 0.4
- exceptiongroup: 1.2.1
- executing: 2.0.1
- filelock: 3.15.4
- fiona: 1.9.6
- flask: 3.0.3
- fonttools: 4.53.0
- frozenlist: 1.4.1
- fsspec: 2024.6.1
- gitdb: 4.0.11
- gitpython: 3.1.43
- google-auth: 2.33.0
- graphene: 3.3
- graphql-core: 3.2.3
- graphql-relay: 3.2.0
- greenlet: 3.0.3
- gunicorn: 22.0.0
- huggingface-hub: 0.23.4
- hydra-core: 1.3.2
- idna: 3.7
- imageio: 2.34.2
- importlib-metadata: 7.2.1
- importlib-resources: 6.4.0
- ipykernel: 6.29.5
- ipython: 8.26.0
- itsdangerous: 2.2.0
- jedi: 0.19.1
- jinja2: 3.1.4
- joblib: 1.4.2
- jsonargparse: 4.31.0
- jupyter-client: 8.6.2
- jupyter-core: 5.7.2
- kiwisolver: 1.4.5
- kornia: 0.7.3
- kornia-rs: 0.1.4
- lazy-loader: 0.4
- lightly: 1.5.8
- lightly-utils: 0.0.2
- lightning: 2.4.0
- lightning-utilities: 0.11.3.post0
- mako: 1.3.5
- markdown: 3.6
- markdown-it-py: 3.0.0
- markupsafe: 2.1.5
- matplotlib: 3.8.4
- matplotlib-inline: 0.1.7
- mdurl: 0.1.2
- mlflow: 2.15.1
- mlflow-skinny: 2.15.1
- mpmath: 1.3.0
- multidict: 6.0.5
- munch: 4.0.0
- nest-asyncio: 1.6.0
- networkx: 3.3
- numpy: 1.26.4
- nvidia-cublas-cu12: 12.1.3.1
- nvidia-cuda-cupti-cu12: 12.1.105
- nvidia-cuda-nvrtc-cu12: 12.1.105
- nvidia-cuda-runtime-cu12: 12.1.105
- nvidia-cudnn-cu12: 8.9.2.26
- nvidia-cufft-cu12: 11.0.2.54
- nvidia-curand-cu12: 10.3.2.106
- nvidia-cusolver-cu12: 11.4.5.107
- nvidia-cusparse-cu12: 12.1.0.106
- nvidia-ml-py: 12.535.161
- nvidia-nccl-cu12: 2.20.5
- nvidia-nvjitlink-cu12: 12.5.82
- nvidia-nvtx-cu12: 12.1.105
- nvitop: 1.3.2
- omegaconf: 2.3.0
- opencv-python-headless: 4.10.0.84
- opentelemetry-api: 1.26.0
- opentelemetry-sdk: 1.26.0
- opentelemetry-semantic-conventions: 0.47b0
- packaging: 23.2
- pandas: 2.2.2
- parso: 0.8.4
- pexpect: 4.9.0
- pillow: 10.4.0
- pip: 24.1.1
- platformdirs: 4.2.2
- pretrainedmodels: 0.7.4
- prompt-toolkit: 3.0.47
- protobuf: 5.27.2
- psutil: 6.0.0
- ptyprocess: 0.7.0
- pure-eval: 0.2.2
- pyarrow: 15.0.2
- pyasn1: 0.6.0
- pyasn1-modules: 0.4.0
- pydantic: 2.8.0
- pydantic-core: 2.20.0
- pygments: 2.18.0
- pyparsing: 3.1.2
- pyproj: 3.6.1
- pyshp: 2.3.1
- python-dateutil: 2.9.0.post0
- pytorch-lightning: 2.3.1
- pytz: 2024.1
- pyyaml: 6.0.1
- pyzmq: 26.0.3
- querystring-parser: 1.2.4
- rasterio: 1.3.10
- requests: 2.32.3
- rich: 13.7.1
- rsa: 4.9
- rtree: 1.2.0
- safetensors: 0.4.3
- scikit-image: 0.24.0
- scikit-learn: 1.5.0
- scipy: 1.14.0
- segmentation-models-pytorch: 0.3.3
- setuptools: 65.5.0
- shapely: 2.0.4
- six: 1.16.0
- smmap: 5.0.1
- snuggs: 1.4.7
- sqlalchemy: 2.0.32
- sqlparse: 0.5.1
- stack-data: 0.6.3
- sympy: 1.12.1
- tensorboardx: 2.6.2.2
- termcolor: 2.4.0
- threadpoolctl: 3.5.0
- tifffile: 2024.6.18
- timm: 0.9.2
- tomli: 2.0.1
- torch: 2.3.1
- torchgeo: 0.5.2
- torchmetrics: 1.4.0.post0
- torchvision: 0.18.1
- tornado: 6.4.1
- tqdm: 4.66.4
- traitlets: 5.14.3
- triton: 2.3.1
- typeshed-client: 2.5.1
- typing-extensions: 4.12.2
- tzdata: 2024.1
- urllib3: 2.2.2
- wcwidth: 0.2.13
- werkzeug: 3.0.3
- wrapt: 1.16.0
- yarl: 1.9.4
- zipp: 3.19.2
* System:
- OS: Linux
- architecture:
- 64bit
- ELF
- processor: x86_64
- python: 3.10.14
- release: 6.5.0-1025-oem
- version: #26-Ubuntu SMP PREEMPT_DYNAMIC Tue Jun 18 12:35:22 UTC 2024
</details>
### More info
_No response_ | open | 2024-08-10T00:14:26Z | 2024-08-10T03:35:29Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20184 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | jeangud | 0 |
ultralytics/ultralytics | python | 19,150 | How to set the anchor frame parameters of yolov8? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
After running with the yolov8 segement S model, I found that there is always some missing at the edge of the recognition of a large target, and the recognition effect of a smaller target is very good, so I changed how to adjust the parameters during training.I've done it before with displacement, rotation, symmetry, and nothing too good,If I want to change the anchor box parameters, how do I change them?Is there any other way to make the big target recognition not missing a piece?
### Additional
_No response_ | open | 2025-02-10T03:16:00Z | 2025-02-10T06:36:16Z | https://github.com/ultralytics/ultralytics/issues/19150 | [
"question",
"segment"
] | zhoujzhouj | 2 |
deepspeedai/DeepSpeed | machine-learning | 6,605 | [REQUEST] Inquiry about code for Domino | I saw in [Domino](https://arxiv.org/pdf/2409.15241) that the code would be released here. Could you let me know when will the code be released to the public?
| closed | 2024-10-07T23:04:46Z | 2025-02-05T23:48:33Z | https://github.com/deepspeedai/DeepSpeed/issues/6605 | [
"enhancement"
] | s1ghhh | 5 |
ahmedfgad/GeneticAlgorithmPython | numpy | 61 | fitness_func() Repetition Issues | https://github.com/ahmedfgad/GeneticAlgorithmPython/blob/c87641bb9f774cebc40a45e70834832b04ae32b5/pygad.py#L3080
`fitness_func()` is repeatedly called whenever I call `best_solution()` (for example, in `on_generation`). Maybe it's called in the order `best_solution()` -> `cal_pop_fitness()` -> `fitness_func()`.
I think it needs to change to `pop_fitness = self.last_generation_fitness` on line 3094, or fix all the examples where `best_solution()` is called without an argument.
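For now, the caller-side workaround looks like this (sketch; `pop_fitness` is the keyword argument that `best_solution()` already accepts, so the cached fitness is reused instead of re-running `fitness_func()`):

```python
# Caller-side workaround: pass the cached fitness so best_solution()
# does not go through cal_pop_fitness() / fitness_func() again.
def on_generation(ga_instance):
    solution, fitness, idx = ga_instance.best_solution(
        pop_fitness=ga_instance.last_generation_fitness
    )
    print(fitness)
```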
<s> Also, in `run()`, a `fitness_func()` is called unnecessarily because of `cal_pop_fitness()` above the `main for statement`. </s> | closed | 2021-08-12T09:08:30Z | 2023-02-25T19:55:38Z | https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/61 | [
"help wanted"
] | sogNok | 1 |
vitalik/django-ninja | django | 739 | customizing field names in output | I need to override the fields being output in the response to an entirely custom value, so i think the "Example Camel Case mode" is not applicable here. Actually the values I need are the verbose names of the django model's fields.
```python
class A(models.Model):
a = models.CharField("field a", ...)
class AOut(Schema):
a: str # need `"field a": value` in output
``` | closed | 2023-04-13T08:59:53Z | 2023-04-13T12:06:24Z | https://github.com/vitalik/django-ninja/issues/739 | [] | minusf | 6 |
allure-framework/allure-python | pytest | 811 | Not implemented type for Arrow list to pandas: fixed_size_binary[16] | I'm submitting a ...
- [X] bug report
- [ ] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
using pandas dataframe with dtype as pd.ArrowDtype(pa.list_(pa.binary(16)) in pytest
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
test.py:
```
import pytest
from uuid import UUID
import pandas as pd
import pyarrow as pa
uuid=UUID('5d212a78-cc48-e3b1-4235-b4d91473ee87').bytes
df=pd.DataFrame({'a': [[uuid, uuid, uuid], [uuid,uuid, uuid]]},dtype=pd.ArrowDtype(pa.list_(pa.binary(16))))
class TestUuid():
@pytest.mark.parametrize("data",[df])
def test_uuid(self,data):
pass
````
run codes:
pytest ./test.py --alluredir=allure-results
result:
```
(py311) PS D:\code\test> pytest ./test.py --alluredir=allure-results
======================================================================================================== test session starts ========================================================================================================
platform win32 -- Python 3.11.5, pytest-8.1.1, pluggy-1.5.0
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: D:\code\test
plugins: allure-pytest-2.13.5, benchmark-4.0.0, html-4.1.1, metadata-3.0.0, ordering-0.6, rerunfailures-13.0, xdist-3.5.0
collected 1 item
test.py E [100%]
============================================================================================================== ERRORS ===============================================================================================================
____________________________________________________________________________________________ ERROR at setup of TestUuid.test_uuid[data0] ____________________________________________________________________________________________
self = <allure_pytest.listener.AllureListener object at 0x0000024B939297D0>, item = <Function test_uuid[data0]>
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_setup(self, item):
if not self._cache.get(item.nodeid):
uuid = self._cache.push(item.nodeid)
test_result = TestResult(name=item.name, uuid=uuid, start=now(), stop=now())
self.allure_logger.schedule_test(uuid, test_result)
yield
self._update_fixtures_children(item)
uuid = self._cache.get(item.nodeid)
test_result = self.allure_logger.get_test(uuid)
params = self.__get_pytest_params(item)
param_id = self.__get_pytest_param_id(item)
test_result.name = allure_name(item, params, param_id)
full_name = allure_full_name(item)
test_result.fullName = full_name
test_result.testCaseId = md5(full_name)
test_result.description = allure_description(item)
test_result.descriptionHtml = allure_description_html(item)
current_param_names = [param.name for param in test_result.parameters]
> test_result.parameters.extend([
Parameter(name=name, value=represent(value))
for name, value in params.items()
if name not in current_param_names
])
D:\software\Anaconda3\envs\py311\Lib\site-packages\allure_pytest\listener.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
D:\software\Anaconda3\envs\py311\Lib\site-packages\allure_pytest\listener.py:117: in <listcomp>
Parameter(name=name, value=represent(value))
D:\software\Anaconda3\envs\py311\Lib\site-packages\allure_commons\utils.py:93: in represent
return repr(item)
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\core\frame.py:1214: in __repr__
return self.to_string(**repr_params)
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\util\_decorators.py:333: in wrapper
return func(*args, **kwargs)
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\core\frame.py:1394: in to_string
return fmt.DataFrameRenderer(formatter).to_string(
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\io\formats\format.py:962: in to_string
string = string_formatter.to_string()
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\io\formats\string.py:29: in to_string
text = self._get_string_representation()
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\io\formats\string.py:44: in _get_string_representation
strcols = self._get_strcols()
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\io\formats\string.py:35: in _get_strcols
strcols = self.fmt.get_strcols()
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\io\formats\format.py:476: in get_strcols
strcols = self._get_strcols_without_index()
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\io\formats\format.py:740: in _get_strcols_without_index
fmt_values = self.format_col(i)
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\io\formats\format.py:754: in format_col
return format_array(
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\io\formats\format.py:1161: in format_array
return fmt_obj.get_result()
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\io\formats\format.py:1194: in get_result
fmt_values = self._format_strings()
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\io\formats\format.py:1528: in _format_strings
array = np.asarray(values, dtype=object)
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\core\arrays\arrow\array.py:663: in __array__
return self.to_numpy(dtype=dtype)
D:\software\Anaconda3\envs\py311\Lib\site-packages\pandas\core\arrays\arrow\array.py:1399: in to_numpy
result = data._pa_array.to_numpy()
pyarrow\table.pxi:509: in pyarrow.lib.ChunkedArray.to_numpy
???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> ???
E pyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: fixed_size_binary[16]
pyarrow\error.pxi:91: ArrowNotImplementedError
========================================================================================================= warnings summary ==========================================================================================================
test.py::TestUuid::test_uuid[data0]
D:\software\Anaconda3\envs\py311\Lib\site-packages\_pytest\runner.py:240: PluggyTeardownRaisedWarning: A plugin raised an exception during an old-style hookwrapper teardown.
Plugin: allure_listener, Hook: pytest_runtest_setup
ArrowNotImplementedError: Not implemented type for Arrow list to pandas: fixed_size_binary[16]
For more information see https://pluggy.readthedocs.io/en/stable/api_reference.html#pluggy.PluggyTeardownRaisedWarning
lambda: runtest_hook(item=item, **kwds), when=when, reraise=reraise
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
====================================================================================================== short test summary info ======================================================================================================
ERROR test.py::TestUuid::test_uuid[data0] - pyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: fixed_size_binary[16]
==================================================================================================== 1 warning, 1 error in 0.62s ====================================================================================================
```
#### What is the expected behavior?
no error raise
#### Please tell us about your environment:
- python: 3.11.5
- Test framework: pytest@8.1.1
- Allure adaptor: allure-pytest@2.13.5
- pandas: 2.2.2
- pyarrow: 14.0.1
| open | 2024-04-26T08:11:30Z | 2024-04-26T09:51:51Z | https://github.com/allure-framework/allure-python/issues/811 | [] | jbShi1017 | 0 |
deepfakes/faceswap | machine-learning | 481 | Extract working not working with GPU, only CPU, is there a way to get it to work with GPU? | **Note: Please only report bugs in this repository. Just because you are getting an error message does not automatically mean you have discovered a bug. If you don't have a lot of experience with this type of project, or if you need for setup help and other issues in using the faceswap tool, please refer to the [faceswap-playground](https://github.com/deepfakes/faceswap-playground/issues) instead. The faceswap-playground is also an excellent place to ask questions and submit feedback.**
## Expected behavior
I'm trying to get the extract portion of the code to work with the GPU, but it's only using the CPU, which makes it 10 or 20 time slower than if the GPU was doing it. Is there a way to get the GPU to do the Extract or is it only CPU enabled?
## Steps to reproduce
python faceswap.py extract -i [my folder with the sequence of frames] -o [the folder with the extracted frames] + parameters such as dlib-cnn or with ae as well
No matter what I choose for the parameters, only the CPU kicks in.
For the training, the GPU works like a charm.
## Other relevant information
- **Operating system and version: Windows OS 10
- **Python version: 3.6.4
- **Faceswap version: the latest current Master (163942f69ad37736c6424cbae56995e5a895b0a9)
- **Faceswap method: For Extract, CPU only, sadly
| closed | 2018-08-24T03:19:46Z | 2018-08-28T16:58:51Z | https://github.com/deepfakes/faceswap/issues/481 | [] | deepfaceswap12345 | 5 |
keras-team/keras | deep-learning | 20,423 | AttributeError: 'KerasHistory' object has no attribute 'layer' | I'm encountering the error "AttributeError: 'KerasHistory' object has no attribute 'layer'"
while working with a Keras model.
I'm trying to access layer information, but it seems I'm referencing the wrong object. The version of TensorFlow is 2.17.0. I tried changing the attribute name `layer` to `operation`, but it's not working.
This is the code:
```python
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.initializers import glorot_uniform
from tensorflow.keras.layers import Input, ZeroPadding2D, Conv2D, MaxPooling2D, BatchNormalization, Activation, Add, AveragePooling2D, Flatten, Dense, Dropout

input_shape = (96, 96, 1)
X_input = Input(input_shape)
X = ZeroPadding2D((3, 3))(X_input)
X = Conv2D(64, (7, 7), strides=(2, 2), name='conv1', kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3, name='bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)

# res_block is my own helper, defined earlier in the notebook
X = res_block(X, filter=[64, 64, 256], stage=2)
X = res_block(X, filter=[128, 128, 512], stage=3)

X = AveragePooling2D((2, 2), name='Averagea_Pooling')(X)
X = Flatten()(X)
X = Dense(4096, activation='relu')(X)
X = Dropout(0.2)(X)
X = Dense(2048, activation='relu')(X)
X = Dropout(0.1)(X)
X = Dense(30, activation='relu')(X)

model_1_facialKeyPoints = Model(inputs=X_input, outputs=X)
model_1_facialKeyPoints.summary()
```
```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-366-fd266d53d661> in <cell line: 34>()
     32
     33
---> 34 model_1_facialKeyPoints = Model( inputs= X_input, outputs = X)
     35 model_1_facialKeyPoints.summary()

4 frames
/usr/local/lib/python3.10/dist-packages/tensorflow/python/keras/engine/functional.py in _validate_graph_inputs_and_outputs(self)
    692       # Check that x is an input tensor.
    693       # pylint: disable=protected-access
    694
--> 695       layer = x._keras_history.layer
    696       if len(layer._inbound_nodes) > 1 or (

AttributeError: 'KerasHistory' object has no attribute 'layer'
```
 | closed | 2024-10-29T03:37:36Z | 2024-10-29T18:43:01Z | https://github.com/keras-team/keras/issues/20423 | [
"type:Bug"
] | Neta-Robinzon-Butbul | 3 |
autogluon/autogluon | computer-vision | 4,602 | `common.features.infer_types.check_if_nlp_feature` fails on bytes columns | https://github.com/autogluon/autogluon/blob/ca3e0b5cadb064e256cd836b4214046aefae66bd/common/src/autogluon/common/features/infer_types.py#L141-L144
This try/catch only handles `AttributeError`, but if a sequence of bytes is passed, a `TypeError` occurs due to `.str.split()` | closed | 2024-10-30T21:08:14Z | 2024-11-25T21:57:01Z | https://github.com/autogluon/autogluon/issues/4602 | [
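A minimal sketch of both the failure mode and a defensive fix. This is not the AutoGluon code itself: `is_nlp_like`, the `" "` separator, and the `> 3` threshold are illustrative assumptions; the point is that bytes values make pandas' `.str.split(" ")` raise `TypeError`, which the `AttributeError`-only handler misses.

```python
import pandas as pd

def is_nlp_like(series: pd.Series) -> bool:
    """Guarded version of the heuristic: columns whose values cannot be
    split as strings (e.g. bytes) are treated as non-NLP instead of crashing."""
    try:
        # bytes elements raise TypeError ("a bytes-like object is required,
        # not 'str'") here, not AttributeError
        mean_words = series.str.split(" ").str.len().mean()
    except (AttributeError, TypeError):
        return False
    return bool(mean_words > 3)

text_col = pd.Series(["the quick brown fox jumps over", "another fairly long sentence here also"])
bytes_col = pd.Series([b"raw bytes payload", b"more raw bytes"])
```

With the broader `except`, the bytes column is simply classified as non-NLP rather than aborting feature inference.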
"bug",
"module: tabular",
"module: features"
] | samg-stripe | 2 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 645 | Compatibility update with newer librosa version in Colab | Hi, recently the program stopped working in Colab due to the librosa 0.8.0 update, giving the error "module 'librosa' has no attribute 'output'". Please solve this problem. | closed | 2021-01-31T21:40:08Z | 2021-02-15T08:13:18Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/645 | [] | Gero39 | 4 |
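For context on the issue above: `librosa.output` (including `librosa.output.write_wav`) was removed in librosa 0.8.0, and the commonly suggested replacement is `soundfile.write(path, wav, sr)`. As a dependency-free illustration of what that call does, the sketch below writes a float waveform as 16-bit PCM WAV using only the standard library; `write_wav` here is a hypothetical helper, not a librosa or soundfile API.

```python
import math
import struct
import wave

def write_wav(path, samples, sr):
    """Write mono float samples in [-1, 1] to a 16-bit PCM WAV file."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(sr)
        frames = b"".join(
            struct.pack("<h", max(-32768, min(32767, int(s * 32767))))
            for s in samples
        )
        f.writeframes(frames)

# 0.1 s of a 440 Hz tone at 16 kHz
sr = 16000
tone = [0.5 * math.sin(2 * math.pi * 440 * t / sr) for t in range(sr // 10)]
write_wav("tone.wav", tone, sr)
```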
home-assistant/core | python | 140,478 | IHC component fails | ### The problem
IHC fails to set up and does not appear in the integrations list. Sometimes when Home Assistant is restarted it starts up and I have access to the items, but after the next restart it fails again.
```txt
Logger: ihcsdk.ihcconnection
Source: components/ihc/auto_setup.py:89
First occurrence: 21.54.37 (1 occurrence)
Last logged: 21.54.37

soap request exception ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
```
### What version of Home Assistant Core has the issue?
core-2025.3.2
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
IHC
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/ihc
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
ihc:
- url: 'http://192.168.1.xx'
username: 'admin'
password: 'xxxxxx'
info: true
```
### Anything in the logs that might be useful for us?
```txt
Logger: homeassistant.setup
Source: setup.py:422
First occurrence: 21.54.37 (1 occurrence)
Last logged: 21.54.37
Error during setup of component ihc: a bytes-like object is required, not 'bool'
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/setup.py", line 422, in _async_setup_component
result = await task
^^^^^^^^^^
File "/usr/local/lib/python3.13/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/src/homeassistant/homeassistant/components/ihc/__init__.py", line 36, in setup
if not ihc_setup(hass, config, controller_conf, index):
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/ihc/__init__.py", line 64, in ihc_setup
if controller_conf[CONF_AUTOSETUP] and not autosetup_ihc_products(
~~~~~~~~~~~~~~~~~~~~~~^
hass, config, ihc_controller, controller_id
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
):
^
File "/usr/src/homeassistant/homeassistant/components/ihc/auto_setup.py", line 89, in autosetup_ihc_products
if not (project_xml := ihc_controller.get_project()):
~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/usr/local/lib/python3.13/site-packages/ihcsdk/ihccontroller.py", line 142, in get_project
self._project = self.client.get_project_in_segments()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/usr/local/lib/python3.13/site-packages/ihcsdk/ihcclient.py", line 122, in get_project_in_segments
buffer.write(self.get_project_segment(s, projectMajor, projectMinor))
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: a bytes-like object is required, not 'bool'
```
### Additional information
It never worked as it should.
Running on a Raspberry Pi 5. I have reinstalled Home Assistant several times with no luck.
| open | 2025-03-12T21:10:07Z | 2025-03-23T19:37:46Z | https://github.com/home-assistant/core/issues/140478 | [
"integration: ihc"
] | Dannikorsholm | 4 |
tensorpack/tensorpack | tensorflow | 1,043 | Bug in resnet_model.py | In `tensorpack/examples/ResNet/resnet_model.py` line 105
`def resnet_group(name, l, block_func, features, count, stride):`
should be
`def resnet_group(l, name, block_func, features, count, stride):`
Otherwise, `load-resnet.py` doesn't work.
| closed | 2019-01-11T06:58:42Z | 2019-01-11T08:50:53Z | https://github.com/tensorpack/tensorpack/issues/1043 | [] | leix28 | 0 |
huggingface/datasets | nlp | 7,420 | better correspondence between cached and saved datasets created using from_generator | ### Feature request
At the moment `.from_generator` can only create a dataset that lives in the cache. The cached dataset cannot be loaded with `load_from_disk` because the cache folder is missing `state.json`. So the only way to convert this cached dataset to a regular is to use `save_to_disk` which needs to create a copy of the cached dataset. For large datasets this can end up wasting a lot of space. In my case the saving operation failed so I am stuck with a large cached dataset and no clear way to convert to a `Dataset` that I can use. The requested feature is to provide a way to be able to load a cached dataset using `.load_from_disk`. Alternatively `.from_generator` can create the dataset at a specified location so that it can be loaded from there with `.load_from_disk`.
### Motivation
I have the following workflow which has exposed some awkwardness about the Datasets saving/caching.
1. I created a cached dataset using `.from_generator` which was cached in a folder. This dataset is rather large (~600GB) with many shards.
2. I tried to save this dataset using `.save_to_disk` to another location so that I can use later as a `Dataset`. This essentially creates another copy (for a total of 1.2TB!) of what is already in the cache... In my case the saving operation keeps dying for some reason and I am stuck with a cached dataset and no copy.
3. Now I am trying to "save" the existing cached dataset, but it is not clear how to access the cached files after `.from_generator` has finished, e.g. from a different process. I should not even be looking at the cache, but I really do not want to waste another 2 hours regenerating the set only to have it fail again (I have already done this a couple of times).
- I tried `.load_from_disk` but it does not work with cached files and complains that this is not a `Dataset` (!).
- I looked at `.from_file`, which takes one file, but the cache has many files (shards), so I am not sure how to make this work.
- I tried `.load_dataset`, but this seems to either "download" a copy (of a file which is already in the local file system!) which I will then need to save, or require `streaming=False` to create an `IterableDataset` which I then need to convert (using the cache) to a `Dataset` so that I can save it. With both options I will end up with 3 copies of the same dataset for a total of ~2TB! I am hoping there is another way to do this...
Maybe I am missing something here: I looked at docs and forums but no luck. I have a bunch of arrow files cached by `Dataset.from_generator` and no clean way to make them into a `Dataset` that I can use.
This all could be so much easier if `load_from_disk` could recognize the cached files and produce a `Dataset`: after the cache is created I would not have to "save" it again and I could just load it when I need it. At the moment `load_from_disk` needs `state.json`, which is lacking in the cache folder. So perhaps `.from_generator` could be made to "finalize" (e.g. create `state.json`) the dataset once it is done so that it can be loaded easily. Or provide `.from_generator` with a `save_to_dir` parameter in addition to `cache_dir` which can be used for the whole process, including creating the `state.json` at the end.
As a proof of concept I just created `state.json` by hand and `load_from_disk` worked using the cache! So it seems to be the missing piece here.
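For reference, a standard-library-only sketch of that by-hand step. The exact `state.json` schema is an internal detail of `datasets` that may change between versions, so treat the key set below as an assumption modeled on what `save_to_disk` emits (the report implies `dataset_info.json` already exists in the cache folder):

```python
import json
import os
import tempfile

def finalize_cache_dir(cache_dir, split=None):
    """Write a minimal state.json so load_from_disk() could accept the cached
    shards. WARNING: the key set mirrors what save_to_disk() writes in recent
    `datasets` versions, but it is not a public contract."""
    shards = sorted(f for f in os.listdir(cache_dir) if f.endswith(".arrow"))
    state = {
        "_data_files": [{"filename": name} for name in shards],
        "_fingerprint": None,
        "_format_columns": None,
        "_format_kwargs": {},
        "_format_type": None,
        "_output_all_columns": False,
        "_split": split,
    }
    with open(os.path.join(cache_dir, "state.json"), "w") as f:
        json.dump(state, f, indent=2)

# demo on a fake cache directory with two empty shard files
tmp = tempfile.mkdtemp()
for i in range(2):
    open(os.path.join(tmp, f"data-0000{i}-of-00002.arrow"), "wb").close()
finalize_cache_dir(tmp)
```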
### Your contribution
Time permitting I can look into `.from_generator` to see if adding `state.json` is feasible. | open | 2025-02-24T22:14:37Z | 2025-02-26T03:10:22Z | https://github.com/huggingface/datasets/issues/7420 | [
"enhancement"
] | vttrifonov | 0 |
pandas-dev/pandas | data-science | 60,690 | ENH: frozensets are shown in parentheses (like tuples) | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- Query: `is:issue in:title frozenset`
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
s = pd.Series([frozenset([1])])
print(s)
```
### Issue Description
```
0 (1)
dtype: object
```
### Expected Behavior
The same as `s.map(repr)`:
```
0 frozenset({1})
dtype: object
```
Or if you insist on an abbreviated option, maybe something like this:
```
0 f{1}
dtype: object
```
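In the meantime, a workaround that sidesteps pandas' built-in pretty-printer is to map the column through `repr` before display (a sketch; the series is rebuilt here rather than taken from the report):

```python
import pandas as pd

s = pd.Series([frozenset([1]), frozenset([2])])

# pandas' pretty-printer renders frozensets like tuples, so force repr() per element
shown = s.map(repr)
print(shown)
```

This yields the full `frozenset({...})` rendering for each element, at the cost of converting the column to strings for display.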
### Installed Versions
<details>
```
INSTALLED VERSIONS
------------------
commit : 3aba767f3ac4507185d911ed120a49969cdee63d
python : 3.12.8
python-bits : 64
OS : Linux
OS-release : 5.4.0-204-generic
Version : #224-Ubuntu SMP Thu Dec 5 13:38:28 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : fr_CA.UTF-8
LOCALE : fr_CA.UTF-8
pandas : 3.0.0.dev0+1815.g3aba767f3a
numpy : 2.1.0.dev0+git20240403.e59c074
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : 8.22.2
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pytz : 2024.1
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
```
</details>
| closed | 2025-01-09T23:55:12Z | 2025-02-05T17:49:32Z | https://github.com/pandas-dev/pandas/issues/60690 | [
"Enhancement",
"Output-Formatting"
] | wjandrea | 2 |
shibing624/text2vec | nlp | 64 | Using your shibing624/text2vec-base-chinese model, the output word embeddings are 768-dimensional; can the dimensionality be reduced, e.g. to 128? | Using your shibing624/text2vec-base-chinese model, the output word embeddings are 768-dimensional; can the dimensionality be reduced, e.g. to 128?
| closed | 2023-04-27T13:29:18Z | 2023-08-17T13:19:32Z | https://github.com/shibing624/text2vec/issues/64 | [
"question"
] | JonGates | 1 |
xuebinqin/U-2-Net | computer-vision | 321 | What is the time complexity? | Hi,
what is the expected time complexity in big-O notation for an inference session on the U-2-Net model?
Thanks! | open | 2022-07-21T19:51:54Z | 2022-07-22T05:31:19Z | https://github.com/xuebinqin/U-2-Net/issues/321 | [] | BennyTheDev | 1 |
drivendataorg/cookiecutter-data-science | data-science | 420 | Cut a 2.0.1 release | After closing #336 and fixing #419 we'll be ready for a 2.0.1 release, and then can add in a number of great features that are on the docket! | closed | 2025-02-16T18:56:59Z | 2025-02-26T17:03:27Z | https://github.com/drivendataorg/cookiecutter-data-science/issues/420 | [] | chrisjkuch | 1 |
flairNLP/flair | nlp | 3,116 | [Question]: Uptrain existing model | ### Question
Hi, I'm new here. I've trained an NER model with Flair and it works just fine. However, I need to add another custom NER entity to the existing model. How could I do that? Unfortunately, Flair doesn't produce a config.json, so I don't understand how to upload my .pt model to Hugging Face to get it into the training pipeline.
Or maybe this is not related to Flair itself, but I still don't get how to do what I need :(
"question"
] | GeorgeKontsevik | 4 |