| repo_name (string) | topic (string, 30 classes) | issue_number (int64) | title (string) | body (string) | state (string, 2 classes) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|
wkentaro/labelme | computer-vision | 329 | How to crop images from json annotations? | I would like to crop the images that I have already annotated. Is this possible? | closed | 2019-02-21T17:56:39Z | 2019-04-27T01:58:32Z | https://github.com/wkentaro/labelme/issues/329 | [] | andrewsueg | 3 |
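One way to do this, sketched under the assumption that the annotations follow labelme's standard JSON layout (`shapes[*].points` polygon coordinates): compute each shape's axis-aligned bounding box, then crop with any imaging library. Apart from those field names, everything below is illustrative.

```python
def shape_bbox(points):
    """Axis-aligned bounding box (left, top, right, bottom) of a polygon."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs), max(ys))

def crop_boxes(annotation):
    """One crop box per annotated shape in a labelme-style annotation dict."""
    return {s["label"]: shape_bbox(s["points"]) for s in annotation["shapes"]}

# Illustrative dict resembling the content of a labelme JSON file:
ann = {"shapes": [{"label": "cat",
                   "points": [[10, 20], [110, 20], [110, 90], [10, 90]]}]}
print(crop_boxes(ann))  # {'cat': (10, 20, 110, 90)}
# With Pillow, each region could then be cut out via Image.open(path).crop(box).
```

Note that a real script would loop over the JSON files on disk and handle duplicate labels; this only shows the box computation.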
huggingface/peft | pytorch | 1,583 | cannot import name 'prepare_model_for_int8_training' from 'peft' | When I run the finetuning sample code from llama-recipes, i.e. peft_finetuning.ipynb, I get this error. My code is , and I am using python=3.9, peft=0.10.0

the error is

Has this method been deprecated? Or does my version not match? Thanks for the reply!
| closed | 2024-03-24T07:44:42Z | 2024-05-15T13:49:13Z | https://github.com/huggingface/peft/issues/1583 | [] | Eyict | 8 |
horovod/horovod | pytorch | 3,515 | Question: How can I run distributed training using Horovod for a regressor model | Question:
I have a regressor model that does not use libraries such as TensorFlow or PyTorch. Is there an example for that? | closed | 2022-04-20T14:14:41Z | 2022-04-20T14:50:14Z | https://github.com/horovod/horovod/issues/3515 | [] | mChowdhury-91 | 0 |
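Horovod's optimizer wrappers are framework-specific, but the core idea they implement (averaging gradients across workers every step) is framework-agnostic and can be sketched in plain Python. This is pure illustration, not a Horovod API:

```python
def allreduce_mean(grads_per_worker):
    """Average one gradient vector across workers (what hvd.allreduce does)."""
    n = len(grads_per_worker)
    dim = len(grads_per_worker[0])
    return [sum(g[i] for g in grads_per_worker) / n for i in range(dim)]

# Two hypothetical workers computed gradients on their own data shards;
# each would apply the averaged gradient to its local model copy.
print(allreduce_mean([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]
```

In a real setup the averaging would run over MPI/NCCL across processes; a framework-free model could also reach for mpi4py directly instead of Horovod.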
jupyter-incubator/sparkmagic | jupyter | 215 | Bad error messages when you forget arguments to magics | ```
%%delete
```
This throws a confusing exception rather than explaining to the user that they're missing a `-s` argument.
| closed | 2016-03-30T18:16:46Z | 2016-04-04T21:31:21Z | https://github.com/jupyter-incubator/sparkmagic/issues/215 | [
"kind:bug"
] | msftristew | 0 |
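A generic way to surface such failures more clearly (purely illustrative, not sparkmagic's actual code) is to validate the magic-line arguments up front and raise with a message that names the missing flag:

```python
def parse_magic_args(argv, required=("-s",)):
    """Validate magic-line arguments, naming any missing required flag.

    Illustrative only: a real implementation would delegate to the
    magic's argument parser instead of scanning the list directly.
    """
    missing = [flag for flag in required if flag not in argv]
    if missing:
        raise ValueError(
            f"missing required argument(s): {', '.join(missing)} "
            f"(e.g. `%%delete -s session_name`)"
        )
    return dict(zip(argv[::2], argv[1::2]))

print(parse_magic_args(["-s", "my_session"]))  # {'-s': 'my_session'}
```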
graphistry/pygraphistry | jupyter | 372 | Clear SSO exceptions | Emit clear exceptions that state the problem and, where applicable, the solution:
- [ ] Old graphistry server
- [ ] Unknown idp
- [ ] Org without an idp | closed | 2022-07-09T00:34:12Z | 2022-09-02T02:15:53Z | https://github.com/graphistry/pygraphistry/issues/372 | [] | lmeyerov | 1 |
openapi-generators/openapi-python-client | rest-api | 557 | Types named datetime cause conflict with Python's datetime | **Describe the bug**
I am creating an OAS3 file from an existing API, and one of the schemas includes a property named `datetime`, which causes an AttributeError when the generated model is imported.
**To Reproduce**
Steps to reproduce the behavior:
1. Convert an OAS3 file that contains a property named `datetime`:
2. Conversion succeeds with no errors or warnings
3. When executing generated code that has the offending model imported, an AttributeError will be thrown with message:
`'Unset' object has no attribute 'datetime'`
**Expected behavior**
A property named `datetime` should be renamed with an underscore suffix, like other built-in type names.
**OpenAPI Spec File**
```yaml
components:
schemas:
fubar:
type: object
properties:
datetime:
type: string
format: date-time
```
**Desktop (please complete the following information):**
- OS: macOS 12.1
- Python Version: 3.9.6
- openapi-python-client version 0.10.8
**Additional context**
This looks like a simple fix. I'll work on a PR.
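For illustration, the rename pass might look like the sketch below. The reserved-name set and the underscore-suffix convention here are assumptions, not the generator's actual code:

```python
import keyword

# Hypothetical set of names that would shadow Python built-ins or
# imported modules in the generated model files.
RESERVED = {"datetime", "date", "dict", "list", "type", "id"}

def safe_property_name(name):
    """Append an underscore when a schema property would shadow a reserved name."""
    if name in RESERVED or keyword.iskeyword(name):
        return name + "_"
    return name

print(safe_property_name("datetime"))  # datetime_
print(safe_property_name("created"))  # created
```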
| closed | 2021-12-23T21:36:21Z | 2022-01-17T19:45:45Z | https://github.com/openapi-generators/openapi-python-client/issues/557 | [
"🐞bug"
] | kmray | 0 |
Johnserf-Seed/TikTokDownload | api | 265 | [BUG] Each image-set post gets its own folder | **Describe the bug**
Each image-set (photo album) post is downloaded into its own separate folder.
**Bug reproduction**
Steps to reproduce this behavior:
1. It occurred while downloading posts from a user's profile page.
| closed | 2022-12-07T15:18:14Z | 2022-12-07T15:34:37Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/265 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | WYF03 | 1 |
huggingface/diffusers | pytorch | 10,722 | RuntimeError: The size of tensor a (4608) must match the size of tensor b (5120) at non-singleton dimension 2 during DreamBooth Training with Prior Preservation | ### Describe the bug
I am trying to run "train_dreambooth_lora_flux.py" on my dataset, but the error below occurs whenever `--with_prior_preservation` is used.
**Who can help me? Thanks!**
### Reproduction
python ./examples/dreambooth/train_dreambooth_lora_flux.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--instance_data_dir=$INSTANCE_DIR \
--output_dir=$OUTPUT_DIR \
--with_prior_preservation \
--class_data_dir="my_file" \
--class_prompt="A photo" \
--instance_prompt="A sks photo" \
--resolution=1024 \
--rank=32 \
--max_train_steps=5000 \
--checkpointing_steps=100 \
--seed="0" \
--mixed_precision="bf16" \
--train_batch_size=1 \
--guidance_scale=1 \
--gradient_accumulation_steps=4 \
--optimizer="prodigy" \
--learning_rate=1. \
--report_to="tensorboard" \
--lr_scheduler="constant" \
--lr_warmup_steps=0
### Logs
```shell
Traceback (most recent call last):
File "/data4/work/yinguowei/code/diffusers/./examples/dreambooth/train_dreambooth_lora_flux.py", line 1926, in <module>
main(args)
File "/data4/work/yinguowei/code/diffusers/./examples/dreambooth/train_dreambooth_lora_flux.py", line 1720, in main
model_pred = transformer(
File "/data/miniconda3/envs/diffusers_ygw/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/miniconda3/envs/diffusers_ygw/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/data/miniconda3/envs/diffusers_ygw/lib/python3.10/site-packages/accelerate/utils/operations.py", line 819, in forward
return model_forward(*args, **kwargs)
File "/data/miniconda3/envs/diffusers_ygw/lib/python3.10/site-packages/accelerate/utils/operations.py", line 807, in __call__
return convert_to_fp32(self.model_forward(*args, **kwargs))
File "/data/miniconda3/envs/diffusers_ygw/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 44, in decorate_autocast
return func(*args, **kwargs)
File "/data4/work/yinguowei/code/diffusers/src/diffusers/models/transformers/transformer_flux.py", line 529, in forward
encoder_hidden_states, hidden_states = block(
File "/data/miniconda3/envs/diffusers_ygw/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/miniconda3/envs/diffusers_ygw/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/data4/work/yinguowei/code/diffusers/src/diffusers/models/transformers/transformer_flux.py", line 188, in forward
attention_outputs = self.attn(
File "/data/miniconda3/envs/diffusers_ygw/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/data/miniconda3/envs/diffusers_ygw/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/data4/work/yinguowei/code/diffusers/src/diffusers/models/attention_processor.py", line 595, in forward
return self.processor(
File "/data4/work/yinguowei/code/diffusers/src/diffusers/models/attention_processor.py", line 2325, in __call__
query = apply_rotary_emb(query, image_rotary_emb)
File "/data4/work/yinguowei/code/diffusers/src/diffusers/models/embeddings.py", line 1204, in apply_rotary_emb
out = (x.float() * cos + x_rotated.float() * sin).to(x.dtype)
RuntimeError: The size of tensor a (4608) must match the size of tensor b (5120) at non-singleton dimension 2
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-5.4.119-19.0009.44-x86_64-with-glibc2.28
- Running on Google Colab?: No
- Python version: 3.10.16
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.27.1
- Transformers version: 4.48.1
- Accelerate version: 1.3.0
- PEFT version: 0.14.0
- Bitsandbytes version: not installed
- Safetensors version: 0.5.2
- xFormers version: not installed
- Accelerator: NVIDIA A800-SXM4-80GB, 81920 MiB
NVIDIA A800-SXM4-80GB, 81920 MiB
NVIDIA A800-SXM4-80GB, 81920 MiB
NVIDIA A800-SXM4-80GB, 81920 MiB
NVIDIA A800-SXM4-80GB, 81920 MiB
NVIDIA A800-SXM4-80GB, 81920 MiB
NVIDIA A800-SXM4-80GB, 81920 MiB
NVIDIA A800-SXM4-80GB, 81920 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_ | open | 2025-02-05T08:48:35Z | 2025-03-13T15:03:47Z | https://github.com/huggingface/diffusers/issues/10722 | [
"bug",
"stale"
] | yinguoweiOvO | 5 |
ultralytics/yolov5 | pytorch | 12,638 | Training is slow on the second step of each epoch | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello, I recently built an ML rig to improve my training time. This allowed me to train my datasets (around 150k images) 10 times faster than before. I currently train with batch size = 1000, and in each epoch the first step takes around 24 sec (against ~7 min before) while the second step takes around 1 min 30 sec.
In my experience the second step was always faster than the first one, so I wonder why it is now almost 4 times longer.

For the second step, the training time didn't improve that much and I don't really understand why. What does the second step do exactly? Do you have any idea how I can improve the speed of the training?
I also have the feeling that I am CPU-limited and that not all cores are used during the training process. Is there a solution for that too?
### Additional
_No response_ | closed | 2024-01-16T11:24:15Z | 2024-10-20T19:37:30Z | https://github.com/ultralytics/yolov5/issues/12638 | [
"question",
"Stale"
] | Busterfake | 3 |
vanna-ai/vanna | data-visualization | 628 | a minor bug in azure ai search | in azure ai search in similiar sql question it is extracting the sql in the format of question it is getting in text but it should get in question | closed | 2024-09-04T08:19:06Z | 2024-09-12T14:41:40Z | https://github.com/vanna-ai/vanna/issues/628 | [
"bug"
] | Jaya-sys | 2 |
LibrePhotos/librephotos | django | 680 | 6gb+ docker image | **Describe the enhancement you'd like**
All of my many Docker images combined are <15 GB... yet one layer alone (00856e006cf8) of the LibrePhotos image is 6 GB!
**Describe why this will benefit the LibrePhotos**
On a slow connection, 6 GB takes 'forever' to download, and excess/unused 'stuff' must be being retained in that layer (given that apps far more complex than LibrePhotos take up far less).
**Additional context**
```
Pulling image: reallibrephotos/singleton:latest
IMAGE ID [1965417284]: Pulling from reallibrephotos/singleton.
IMAGE ID [e96e057aae67]: Pulling fs layer. Downloading 100% of 29 MB. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [01eca18ab462]: Pulling fs layer. Downloading 100% of 60 MB. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [9207938ee730]: Pulling fs layer. Downloading 100% of 983 KB. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [ee2b68d68fec]: Pulling fs layer. Downloading 100% of 53 KB. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [b164cc3bd207]: Pulling fs layer. Downloading 100% of 106 KB. Download complete. Extracting. Pull complete.
IMAGE ID [dc5062a40050]: Pulling fs layer. Downloading 100% of 67 KB. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [4f4fb700ef54]: Pulling fs layer. Downloading 100% of 32 B. Verifying Checksum. Download complete. Extracting. Pull complete.
IMAGE ID [00856e006cf8]: Pulling fs layer. Downloading 13% of 6 GB.
IMAGE ID [17ec9298135e]: Pulling fs layer. Downloading 100% of 845 KB. Verifying Checksum. Download complete.
IMAGE ID [9b9b1629d62f]: Pulling fs layer. Downloading 100% of 1 MB. Verifying Checksum. Download complete.
IMAGE ID [4ceafbd41d85]: Pulling fs layer. Downloading 100% of 122 B.
IMAGE ID [570003f2242e]: Pulling fs layer. Downloading 100% of 5 KB. Verifying Checksum. Download complete.
IMAGE ID [504f653315c2]: Pulling fs layer. Downloading 100% of 530 B. Verifying Checksum. Download complete.
IMAGE ID [8bd51bdb2792]: Pulling fs layer. Downloading 100% of 5 KB. Verifying Checksum. Download complete.
IMAGE ID [84af761819c0]: Pulling fs layer. Downloading 100% of 174 B. Verifying Checksum. Download complete
``` | closed | 2022-11-21T11:43:13Z | 2023-01-22T14:44:28Z | https://github.com/LibrePhotos/librephotos/issues/680 | [
"enhancement"
] | techie2000 | 6 |
mljar/mljar-supervised | scikit-learn | 751 | warning in test: tests/tests_automl/test_dir_change.py::AutoMLDirChangeTest::test_compute_predictions_after_dir_change | ```
============================= test session starts ==============================
platform linux -- Python 3.12.3, pytest-8.3.2, pluggy-1.5.0 -- /home/adas/mljar/mljar-supervised/venv/bin/python3
cachedir: .pytest_cache
rootdir: /home/adas/mljar/mljar-supervised
configfile: pytest.ini
plugins: cov-5.0.0
collecting ... collected 1 item
tests/tests_automl/test_dir_change.py::AutoMLDirChangeTest::test_compute_predictions_after_dir_change AutoML directory: automl_testing_A/automl_testing
The task is regression with evaluation metric rmse
AutoML will use algorithms: ['Baseline', 'Linear', 'Decision Tree', 'Random Forest', 'Xgboost', 'Neural Network']
AutoML will ensemble available models
AutoML steps: ['simple_algorithms', 'default_algorithms', 'ensemble']
* Step simple_algorithms will try to check up to 3 models
1_Baseline rmse 141.234462 trained in 0.25 seconds
2_DecisionTree rmse 89.485706 trained in 0.24 seconds
3_Linear rmse 0.0 trained in 0.25 seconds
* Step default_algorithms will try to check up to 3 models
4_Default_Xgboost rmse 55.550016 trained in 0.33 seconds
5_Default_NeuralNetwork rmse 9.201656 trained in 0.29 seconds
6_Default_RandomForest rmse 82.006449 trained in 0.57 seconds
* Step ensemble will try to check up to 1 model
Ensemble rmse 0.0 trained in 0.17 seconds
AutoML fit time: 5.97 seconds
AutoML best model: 3_Linear
FAILED
=================================== FAILURES ===================================
________ AutoMLDirChangeTest.test_compute_predictions_after_dir_change _________
cls = <class '_pytest.runner.CallInfo'>
func = <function call_and_report.<locals>.<lambda> at 0x7d808995b2e0>
when = 'call'
reraise = (<class '_pytest.outcomes.Exit'>, <class 'KeyboardInterrupt'>)
@classmethod
def from_call(
cls,
func: Callable[[], TResult],
when: Literal["collect", "setup", "call", "teardown"],
reraise: type[BaseException] | tuple[type[BaseException], ...] | None = None,
) -> CallInfo[TResult]:
"""Call func, wrapping the result in a CallInfo.
:param func:
The function to call. Called without arguments.
:type func: Callable[[], _pytest.runner.TResult]
:param when:
The phase in which the function is called.
:param reraise:
Exception or exceptions that shall propagate if raised by the
function, instead of being wrapped in the CallInfo.
"""
excinfo = None
start = timing.time()
precise_start = timing.perf_counter()
try:
> result: TResult | None = func()
venv/lib/python3.12/site-packages/_pytest/runner.py:341:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
venv/lib/python3.12/site-packages/_pytest/runner.py:242: in <lambda>
lambda: runtest_hook(item=item, **kwds), when=when, reraise=reraise
venv/lib/python3.12/site-packages/pluggy/_hooks.py:513: in __call__
return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
venv/lib/python3.12/site-packages/pluggy/_manager.py:120: in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
venv/lib/python3.12/site-packages/_pytest/threadexception.py:92: in pytest_runtest_call
yield from thread_exception_runtest_hook()
venv/lib/python3.12/site-packages/_pytest/threadexception.py:68: in thread_exception_runtest_hook
yield
venv/lib/python3.12/site-packages/_pytest/unraisableexception.py:95: in pytest_runtest_call
yield from unraisable_exception_runtest_hook()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def unraisable_exception_runtest_hook() -> Generator[None, None, None]:
with catch_unraisable_exception() as cm:
try:
yield
finally:
if cm.unraisable:
if cm.unraisable.err_msg is not None:
err_msg = cm.unraisable.err_msg
else:
err_msg = "Exception ignored in"
msg = f"{err_msg}: {cm.unraisable.object!r}\n\n"
msg += "".join(
traceback.format_exception(
cm.unraisable.exc_type,
cm.unraisable.exc_value,
cm.unraisable.exc_traceback,
)
)
> warnings.warn(pytest.PytestUnraisableExceptionWarning(msg))
E pytest.PytestUnraisableExceptionWarning: Exception ignored in: <_io.FileIO [closed]>
E
E Traceback (most recent call last):
E File "/home/adas/mljar/mljar-supervised/supervised/base_automl.py", line 218, in load
E self._data_info = json.load(open(data_info_path))
E ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E ResourceWarning: unclosed file <_io.TextIOWrapper name='automl_testing_B/automl_testing/data_info.json' mode='r' encoding='UTF-8'>
venv/lib/python3.12/site-packages/_pytest/unraisableexception.py:85: PytestUnraisableExceptionWarning
=========================== short test summary info ============================
FAILED tests/tests_automl/test_dir_change.py::AutoMLDirChangeTest::test_compute_predictions_after_dir_change
============================== 1 failed in 8.06s ===============================
``` | closed | 2024-08-23T09:19:11Z | 2024-08-29T08:16:42Z | https://github.com/mljar/mljar-supervised/issues/751 | [] | a-szulc | 1 |
skypilot-org/skypilot | data-science | 4,876 | [Python API] Add `asyncio` support | Many frameworks (e.g., temporal) are asyncio native and it would be much easier to integrate with them if SkyPilot APIs returned await-able coroutines. For example, `request_id = sky.launch(); sky.get(request_id)` should be replaceable with an `await sky.launch()`.
Example of how ray works with asyncio: https://docs.ray.io/en/latest/ray-core/actors/async_api.html#asyncio-for-actors
Current workaround is to use `asyncio.to_thread` to call `sky.get`. | open | 2025-03-05T00:54:44Z | 2025-03-05T00:54:44Z | https://github.com/skypilot-org/skypilot/issues/4876 | [] | romilbhardwaj | 0 |
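The `asyncio.to_thread` workaround can be sketched generically; here `blocking_launch` is a stand-in for a blocking SDK call such as `sky.get(request_id)`:

```python
import asyncio
import time

def blocking_launch():
    """Stand-in for a blocking SDK call such as sky.get(request_id)."""
    time.sleep(0.1)  # simulate waiting on the request
    return "done"

async def launch():
    # Run the blocking call in a worker thread so the event loop stays free.
    return await asyncio.to_thread(blocking_launch)

print(asyncio.run(launch()))  # done
```

Native `async` APIs would remove the per-call thread hop, which is what makes first-class asyncio support preferable to this wrapper.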
google-deepmind/graph_nets | tensorflow | 77 | How to use modules.SelfAttention | Hi
I tried to use modules.SelfAttention like this:
`graph_network = modules.SelfAttention()`
`input_graphs = utils_tf.data_dicts_to_graphs_tuple(graph_dicts)`
`output_graphs = graph_network(input_graphs)`
Then, I get this error:
`TypeError: _build() missing 3 required positional arguments: 'node_keys', 'node_queries', and 'attention_graph'`
I wonder if there's a notebook explaining how to use modules.SelfAttention.
Best regards,
Ron
| closed | 2019-05-24T07:04:25Z | 2019-05-28T08:27:07Z | https://github.com/google-deepmind/graph_nets/issues/77 | [] | ronsoohyeong | 2 |
donnemartin/system-design-primer | python | 198 | Can you share with us the technique you used to create the design images? | Can you share what app you used to create the design images, along with the source files?
( opening study_guide.graffle in omnigraffle doesn't look like your pictures.)
Thanks.
(and great knowledge source, btw) | closed | 2018-08-11T16:12:01Z | 2018-08-12T03:02:47Z | https://github.com/donnemartin/system-design-primer/issues/198 | [] | yperry | 1 |
allure-framework/allure-python | pytest | 299 | Highlight test step | Hello,
Is there a way to highlight a step in an Allure report (generated by Robot Framework)?
I mean something like a blue font or adding an icon...

| closed | 2018-10-09T07:10:18Z | 2018-11-20T07:05:59Z | https://github.com/allure-framework/allure-python/issues/299 | [] | ric79 | 1 |
svc-develop-team/so-vits-svc | pytorch | 125 | [Help]: Running on Google Colab, an error appears at the inference step: ModuleNotFoundError: No module named 'torchcrepe' | ### Please tick the confirmation boxes below.
- [X] I have carefully read [README.md](https://github.com/svc-develop-team/so-vits-svc/blob/4.0/README_zh_CN.md) and the [Quick solution](https://github.com/svc-develop-team/so-vits-svc/wiki/Quick-solution) in the wiki.
- [X] I have investigated the problem through various search engines; the issue I am raising is not a common one.
- [X] I am not using a one-click package/environment package provided by a third-party user.
### System platform version
win10
### GPU model
525.85.12
### Python version
Python 3.9.16
### PyTorch version
2.0.0+cu118
### sovits branch
4.0 (default)
### Dataset source (used to judge dataset quality)
VTuber livestream audio processed with UVR
### Step where the problem occurred / command executed
Inference
### Problem description
The following error appears during inference:
Traceback (most recent call last):
File "/content/so-vits-svc/inference_main.py", line 11, in <module>
from inference import infer_tool
File "/content/so-vits-svc/inference/infer_tool.py", line 20, in <module>
import utils
File "/content/so-vits-svc/utils.py", line 20, in <module>
from modules.crepe import CrepePitchExtractor
File "/content/so-vits-svc/modules/crepe.py", line 8, in <module>
import torchcrepe
ModuleNotFoundError: No module named 'torchcrepe'
### Logs
```python
#@title Synthesize audio (inference)
#@markdown Upload the audio to the so-vits-svc/raw folder, then set the model path, config file path, and the name of the audio to synthesize
!python inference_main.py -m "logs/44k/G_32000.pth" -c "configs/config.json" -n "live1.wav" -s qq
Traceback (most recent call last):
File "/content/so-vits-svc/inference_main.py", line 11, in <module>
from inference import infer_tool
File "/content/so-vits-svc/inference/infer_tool.py", line 20, in <module>
import utils
File "/content/so-vits-svc/utils.py", line 20, in <module>
from modules.crepe import CrepePitchExtractor
File "/content/so-vits-svc/modules/crepe.py", line 8, in <module>
import torchcrepe
ModuleNotFoundError: No module named 'torchcrepe'
```
### Screenshot the `so-vits-svc` and `logs/44k` folders and paste them here

### Additional notes
_No response_ | closed | 2023-04-05T14:49:38Z | 2023-04-09T05:14:42Z | https://github.com/svc-develop-team/so-vits-svc/issues/125 | [
"help wanted"
] | Asgardloki233 | 1 |
statsmodels/statsmodels | data-science | 9,232 | How to get in touch regarding a security concern | Hello 👋
I run a security community that finds and fixes vulnerabilities in OSS. A researcher (@tvnnn) has found a potential issue, which I would be eager to share with you.
Could you add a `SECURITY.md` file with an e-mail address for me to send further details to? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) a security policy to ensure issues are responsibly disclosed, and it would help direct researchers in the future.
Looking forward to hearing from you 👍
(cc @huntr-helper) | open | 2024-04-24T09:19:55Z | 2024-07-10T07:59:42Z | https://github.com/statsmodels/statsmodels/issues/9232 | [] | psmoros | 10 |
mithi/hexapod-robot-simulator | dash | 47 | Improve code quality of 4 specific modules | - [x] `hexapod.models`
- [x] `hexapod.linkage`
- [x] `hexapod.ik_solver.ik_solver`
- [x] `hexapod.ground_contact_solver`
https://github.com/mithi/hexapod-robot-simulator/blob/master/hexapod/models.py
https://github.com/mithi/hexapod-robot-simulator/blob/master/hexapod/linkage.py
https://github.com/mithi/hexapod-robot-simulator/blob/master/hexapod/ground_contact_solver.py
https://github.com/mithi/hexapod-robot-simulator/blob/master/hexapod/ik_solver/ik_solver.py | closed | 2020-04-13T21:27:01Z | 2020-04-14T12:02:58Z | https://github.com/mithi/hexapod-robot-simulator/issues/47 | [
"PRIORITY",
"code quality"
] | mithi | 2 |
milesmcc/shynet | django | 122 | Ignore inactive browser tabs | Sometimes users who visit my site leave their tab open while no longer actually using the site. Currently I have a session that has been going on for 11 hours, and this also affects my average session duration.
Maybe we should send the `document.visibilityState` in each heartbeat, and have a configurable option per site to ignore or not ignore heartbeats when this value is `hidden`.
> The Document of the top level browsing context can be in one of the following visibility states:
> `hidden`
> The Document is not visible at all on any screen.
> `visible`
> The Document is at least partially visible on at least one screen. This is the same condition under which the hidden attribute is set to false.
| closed | 2021-04-22T11:01:32Z | 2021-04-22T18:26:18Z | https://github.com/milesmcc/shynet/issues/122 | [] | CasperVerswijvelt | 3 |
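A server-side sketch of the proposed option (the payload key and per-site flag names are assumptions, not Shynet's actual schema):

```python
def should_count_heartbeat(payload, site_ignores_hidden):
    """Drop heartbeats from hidden tabs when the site opts in.

    `payload` is the heartbeat body; `visibilityState` would be filled in
    client-side from document.visibilityState (defaults to "visible" so
    older trackers keep working).
    """
    visibility = payload.get("visibilityState", "visible")
    return not (site_ignores_hidden and visibility == "hidden")

print(should_count_heartbeat({"visibilityState": "hidden"}, True))   # False
print(should_count_heartbeat({"visibilityState": "hidden"}, False))  # True
```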
SciTools/cartopy | matplotlib | 1,734 | Plot etopo | I'd like to suggest a new function for cartopy. As far as I'm aware, plotting an etopo bathymetry map with cartopy isn't easy. Basemap has a convenient function to do this: https://basemaptutorial.readthedocs.io/en/latest/backgrounds.html#etopo.
It would be great if cartopy has something similar! | open | 2021-02-18T18:08:07Z | 2021-03-12T21:32:03Z | https://github.com/SciTools/cartopy/issues/1734 | [
"Type: Enhancement",
"Experience-needed: low",
"Component: Raster source",
"Component: Feature source"
] | yz3062 | 0 |
graphistry/pygraphistry | jupyter | 475 | [BUG] donations demo fails midway | More merge branch testing:
1. umap() calls print memoization failure warnings
Ex:
```python
g = graphistry.nodes(ndf).bind(point_title='Category')
g2 = g.umap(X=['Why?'], y = ['Category'],
min_words=50000, # encode as topic model by setting min_words high
n_topics_target=4, # turn categories into a 4dim vector of regressive targets
n_topics=21, # latent embedding size
cardinality_threshold_target=2, # make sure that we throw targets into topic model over targets
)
```
=>
```
! Failed umap speedup attempt. Continuing without memoization speedups.* Ignoring target column of shape (285, 4) in UMAP fit, as it is not one dimensional
```
2. Failure starting with this cell:
```python
# pretend you have a minibatch of new data -- transform under the fit from the above
new_df, new_y = ndf.sample(5), ndf.sample(5) # pd.DataFrame({'Category': ndf['Category'].sample(5)})
a, b = g2.transform(new_df, new_y, kind='nodes')
a
```
=>
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[20], line 3
1 # pretend you have a minibatch of new data -- transform under the fit from the above
2 new_df, new_y = ndf.sample(5), ndf.sample(5) # pd.DataFrame({'Category': ndf['Category'].sample(5)})
----> 3 a, b = g2.transform(new_df, new_y, kind='nodes')
4 a
TypeError: cannot unpack non-iterable Plotter object
``` | closed | 2023-05-01T06:21:32Z | 2023-05-26T23:29:48Z | https://github.com/graphistry/pygraphistry/issues/475 | [
"bug"
] | lmeyerov | 12 |
smarie/python-pytest-cases | pytest | 222 | Make the CI workflow execute the tests based on installed package, not source | As pointed out in https://github.com/smarie/python-pytest-cases/issues/220
The nox and CI build currently run the tests against the source, not the installed package. This is not nice, as it could miss some packaging issues.
In order to make this move, we would need the `tests/` folder to be outside of the `pytest_cases` folder, otherwise an import issue happens as noted by @kloczek in #220. This is due to the fact that `pytest` has a package auto-detection mechanism recursively climbing up the folders above `conftest.py` and stopping at the last `__init__.py`.
Let's take some time to figure out if this is the **ONLY** way to go ?
| closed | 2021-06-16T14:44:57Z | 2021-11-09T09:10:14Z | https://github.com/smarie/python-pytest-cases/issues/222 | [
"packaging (rpm/apt/...)"
] | smarie | 2 |
matterport/Mask_RCNN | tensorflow | 2,138 | 0 bbox & mask loss when training on large images with plentiful annotations | Hi guys,
I ran into a problem: when training on large images (4000x2000) with plentiful annotations (100~300 annotated masks), the
mrcnn_bbox_loss
mrcnn_mask_loss
val_mrcnn_bbox_loss:
val_mrcnn_mask_loss
are always 0.0000e+00.
And the trained model cannot detect any instance on the testing images.
Training on small images (200x200) with just a few annotations (1~5 annotated masks) works fine with good results.
Any ideas? I would really appreciate them.
| open | 2020-04-22T00:27:55Z | 2020-04-23T19:13:19Z | https://github.com/matterport/Mask_RCNN/issues/2138 | [] | yhc1994 | 1 |
nteract/papermill | jupyter | 468 | Workflows: Executing notebooks as a DAG? | **Note:** This should be tagged as question / suggestion but I don't think I can do that myself.
I have a use case where I want to enable data scientists to execute the notebooks they create as a DAG. This would be part of their development workflow, in order to ensure a set of notebooks work as an integrated pipeline before it is scheduled in a production environment using something like Airflow.
My question: is there currently a way to achieve this using papermill? The Readme mentions workflows, but papermill engines are the closest thing I can find to a workflow (interpreted as a pipeline of notebooks).
And if there's no such functionality, what would be the best way to integrate this with papermill, bookstore, scrapbook etc?
@MSeal, I am particularly interested in your feedback if you have the time.
Some very high level details about the environment this is situated in;
- Data scientists author individual Jupyter notebooks
- When moved to production, these are scheduled as a DAG using Airflow
- Each Airflow task spawns its own papermill kernel on a remote compute cluster, which is responsible for executing only the notebook described by the task instance.
- nteract-scrapbook collects logs and provides feedback to the task scheduler.
What I'm working on is a way for data scientists to formally define their set of notebooks as a DAG and execute it during their development workflow, hopefully producing a better-integrated pipeline of notebooks without requiring the data scientist to work with (or have knowledge of) the task scheduler (Airflow) and compute cluster internals.
This is similar to this component of Metaflow by Netflix;

| closed | 2020-02-02T19:07:19Z | 2021-06-11T11:07:11Z | https://github.com/nteract/papermill/issues/468 | [
"question"
] | matthiasdv | 3 |
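The minimal shape of such a notebook DAG runner can be sketched with the standard library; `run` below is a stand-in for a call like `papermill.execute_notebook`, and the file names are illustrative:

```python
from graphlib import TopologicalSorter

def run_dag(dag, run):
    """Execute notebooks in dependency order.

    `dag` maps each notebook to the set of notebooks it depends on;
    `run` executes one notebook (e.g. papermill.execute_notebook).
    """
    order = list(TopologicalSorter(dag).static_order())
    for nb in order:
        run(nb)
    return order

executed = []
print(run_dag({"train.ipynb": {"prep.ipynb"}, "prep.ipynb": set()},
              executed.append))
# ['prep.ipynb', 'train.ipynb']
```

A real runner would also pass parameters and outputs between stages (e.g. via scrapbook), but the ordering core is this small.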
davidteather/TikTok-Api | api | 176 | Give likes or follow user | Since the library uses Puppeteer, is it possible to program it to click the like or follow button on a user's profile page or on a TikTok?
I have no experience with Puppeteer, so I don't know if it's possible.
Btw, great work!
| closed | 2020-07-11T08:59:57Z | 2020-07-11T14:56:14Z | https://github.com/davidteather/TikTok-Api/issues/176 | [] | elblogbruno | 2 |
tensorflow/datasets | numpy | 8,140 | Xxxx | /!\ PLEASE INCLUDE THE FULL STACKTRACE AND CODE SNIPPET
**Short description**
Description of the bug.
**Environment information**
* Operating System: <os>
* Python version: <version>
* `tensorflow-datasets`/`tfds-nightly` version: <package and version>
* `tensorflow`/`tf-nightly` version: <package and version>
* Does the issue still exists with the last `tfds-nightly` package (`pip install --upgrade tfds-nightly`) ?
**Reproduction instructions**
```python
<put a code snippet, a link to a gist or a colab here>
```
If you share a colab, make sure to update the permissions to share it.
**Link to logs**
If applicable, <link to gist with logs, stack trace>
**Expected behavior**
What you expected to happen.
**Additional context**
Add any other context about the problem here. | closed | 2024-11-25T17:51:02Z | 2024-12-09T14:26:23Z | https://github.com/tensorflow/datasets/issues/8140 | [
"bug"
] | Maxeboi1 | 0 |
ageitgey/face_recognition | python | 612 | Fails on Chinese people | * face_recognition version:
* Python version: 3.5.2
* Operating System: Ubuntu 16.04
### The library can't distinguish Chinese people
I tried finding
https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=images&cd=&cad=rja&uact=8&ved=2ahUKEwjKwbLM6JTdAhWGMewKHekXBIYQjRx6BAgBEAU&url=https%3A%2F%2Flaotiantimes.com%2F2016%2F09%2F05%2Fchinese-prime-minister-to-visit-laos%2F&psig=AOvVaw0jNXiklPM4YaxtrqIEOjQe&ust=1535719348182900
in
https://www.google.com/url?sa=i&rct=j&q=&esrc=s&source=images&cd=&cad=rja&uact=8&ved=2ahUKEwiWwOmD55TdAhVNMewKHaGnBo0QjRx6BAgBEAU&url=https%3A%2F%2Fwww.japantimes.co.jp%2Fnews%2F2015%2F01%2F09%2Fnational%2Fhistory%2Fchina-brings-sex-slave-issue-spotlight%2F&psig=AOvVaw0jNXiklPM4YaxtrqIEOjQe&ust=1535719348182900
image and it identified the first person as the target person. Thus it can't accurately compare faces. | open | 2018-08-30T12:54:10Z | 2018-08-30T18:17:59Z | https://github.com/ageitgey/face_recognition/issues/612 | [] | f3uplift | 1 |
horovod/horovod | pytorch | 3,691 | NVTabular Docker image does not build | The `docker/horovod-nvtabular/Dockerfile` Docker image does not build in master any more:
https://github.com/horovod/horovod/runs/8261962616?check_suite_focus=true#step:8:2828
```
#27 40.96 Traceback (most recent call last):
#27 40.96 File "<string>", line 1, in <module>
#27 40.96 File "/root/miniconda3/lib/python3.8/site-packages/tensorflow/__init__.py", line 37, in <module>
#27 40.96 from tensorflow.python.tools import module_util as _module_util
#27 40.96 File "/root/miniconda3/lib/python3.8/site-packages/tensorflow/python/__init__.py", line 37, in <module>
#27 40.96 from tensorflow.python.eager import context
#27 40.96 File "/root/miniconda3/lib/python3.8/site-packages/tensorflow/python/eager/context.py", line 29, in <module>
#27 40.96 from tensorflow.core.framework import function_pb2
#27 40.96 File "/root/miniconda3/lib/python3.8/site-packages/tensorflow/core/framework/function_pb2.py", line 7, in <module>
#27 40.96 from google.protobuf import descriptor as _descriptor
#27 40.96 File "/root/miniconda3/lib/python3.8/site-packages/google/protobuf/descriptor.py", line 40, in <module>
#27 40.96 from google.protobuf.internal import api_implementation
#27 40.96 File "/root/miniconda3/lib/python3.8/site-packages/google/protobuf/internal/api_implementation.py", line 104, in <module>
#27 40.96 from google.protobuf.pyext import _message
#27 40.96 TypeError: bases must be types
```
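The `TypeError: bases must be types` raised from `from google.protobuf.pyext import _message` is the signature of a protobuf major-version mismatch (3.x vs. 4.x) against code generated for the other version. One way to snapshot the relevant versions inside the image, for diffing against the last known-good build log, is a stdlib-only sketch like the following (the package list here is just an assumption about the usual suspects):

```python
from importlib import metadata

# Snapshot the packages most plausibly involved in the protobuf
# "bases must be types" breakage, for diffing against an older,
# working build log.
versions = {}
for name in ("protobuf", "tensorflow", "tensorflow-metadata"):
    try:
        versions[name] = metadata.version(name)
    except metadata.PackageNotFoundError:
        versions[name] = "not installed"
print(versions)
```

Running this in both the old and new images (e.g. via `docker run ... python -c ...`) gives two dicts that can be diffed directly.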
It is likely that conflicting dependencies break tensorflow (e.g. #3684). Try comparing the versions that were installed back when the image still built with the versions being installed now. | closed | 2022-09-09T06:00:42Z | 2022-09-15T09:41:56Z | https://github.com/horovod/horovod/issues/3691 | [
"bug"
] | EnricoMi | 4 |
slackapi/bolt-python | fastapi | 380 | Document about the ways to write unit tests for Bolt apps | I am looking to write unit tests for my Bolt API, which consists of an app with several handlers for events / messages. The handlers are both _decorated_ using the `app.event` decorator, _and_ make use of the `app` object to access things like the `db` connection that has been put on it. For example:
```python
# in main.py
from slack_bolt import App

app = App(
token=APP_TOKEN,
signing_secret=SIGNING_SECRET,
process_before_response=True,
token_verification_enabled=token_verification_enabled,
)
app.db = db
# in api.py:
from .main import app
@app.command("/slashcommand")
def slash_command(ack, respond):
ack()
results = app.db.do_query()
respond(...)
```
The thing is, I cannot find any framework pointers, or documentation, on how I would write reasonable unit tests for this. Presumably ones where the kwargs like `ack` and `respond` are replaced by test doubles. How do I do this?
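One common approach (not official Bolt documentation, just a sketch using the stdlib's `unittest.mock`) is to call the handler function directly with test doubles for the injected kwargs, assuming the decorator leaves the function importable and callable (it does in recent Bolt versions, but worth verifying for yours). A self-contained illustration with a stand-in handler:

```python
from unittest.mock import MagicMock

# Stand-in mirroring the handler from the issue; in a real test you
# would `from api import slash_command` instead, and patch `app.db`
# (e.g. with unittest.mock.patch) rather than passing db explicitly.
def slash_command(ack, respond, db):
    ack()
    respond(db.do_query())

ack, respond, db = MagicMock(), MagicMock(), MagicMock()
db.do_query.return_value = ["row"]

slash_command(ack, respond, db)

ack.assert_called_once()                  # the handler acknowledged
respond.assert_called_once_with(["row"])  # and responded with the query result
```

The same pattern works for event and message handlers: each injected kwarg (`ack`, `respond`, `say`, `client`, ...) becomes a `MagicMock`, and the assertions check how the handler used them.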
### The page URLs
* https://slack.dev/bolt-python/
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| open | 2021-06-15T21:30:22Z | 2025-03-22T19:18:06Z | https://github.com/slackapi/bolt-python/issues/380 | [
"docs",
"enhancement",
"area:async",
"area:sync"
] | offbyone | 19 |
httpie/cli | api | 1,010 | Output not properly displayed in UTF-8 | Hi,
I'm using httpie v2.3.0, and it's really wonderful! While playing around with my website today, I encountered a strange effect in the displayed output: all special characters, like € (EUR) or Ä/Ö/Ü, are not shown properly; it seems like there's an encoding error. `curl`, on the other hand, does the job correctly.
Here are the headers:
```
# http -F --headers stocksport1.at:
HTTP/1.1 200 OK
accept-ranges: bytes
age: 39
content-encoding: gzip
content-length: 854
content-type: text/html
date: Sun, 03 Jan 2021 11:20:03 GMT
etag: "6cb-5b0869f17f6b5"
last-modified: Wed, 30 Sep 2020 11:58:44 GMT
server: Apache
vary: Accept-Encoding
# curl stocksport1.at -L -D -
HTTP/1.1 200 OK
accept-ranges: bytes
age: 39
content-encoding: gzip
content-length: 854
content-type: text/html
date: Sun, 03 Jan 2021 11:20:03 GMT
etag: "6cb-5b0869f17f6b5"
last-modified: Wed, 30 Sep 2020 11:58:44 GMT
server: Apache
vary: Accept-Encoding
```
The charset is defined in the head of the HTML file, but I guess this isn't interpreted by httpie. So, what am I doing wrong, or what is httpie doing wrong?
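For what it's worth, the symptom (€/Ä/Ö/Ü rendered as garbage) is classic mojibake from decoding UTF-8 bytes with the wrong charset: since the `Content-Type` header here carries no `charset` parameter, httpie has to guess, while curl just writes the raw bytes to the terminal, which then decodes them as UTF-8. A quick stdlib illustration of the effect:

```python
raw = "€ Ä Ö Ü".encode("utf-8")   # the bytes the server actually sends

print(raw.decode("utf-8"))        # correct charset: € Ä Ö Ü
print(raw.decode("latin-1"))      # mojibake: each UTF-8 byte shown as its own char
```

Adding `charset=utf-8` to the server's `Content-Type`, or using an httpie release that can override the response charset, should avoid the guessing.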
Many thanks in advance, regards, Thomas
| closed | 2021-01-03T11:30:56Z | 2021-02-19T21:00:24Z | https://github.com/httpie/cli/issues/1010 | [] | thmsklngr | 3 |
deepfakes/faceswap | deep-learning | 1,198 | add the function of selecting multiple faces in the UI of manual | the gui is wonderful, but i cant choose multiple faces that i want to delete,
i have to delete them one by one..
so , is it possbile to add the function so we can quickly choose lots of faces that we want to delete | closed | 2021-12-11T13:47:27Z | 2021-12-12T03:02:21Z | https://github.com/deepfakes/faceswap/issues/1198 | [] | leeso2021 | 1 |
fugue-project/fugue | pandas | 503 | [COMPATIBILITY] Deprecate python 3.7 support | Python 3.7 reached end of life in June 2023; 0.8.6 is our last Fugue version to support Python 3.7. It has several major bug fixes, and after this release, there will be breaking changes that will not work with Python 3.7. | closed | 2023-08-16T05:30:52Z | 2023-08-16T07:28:51Z | https://github.com/fugue-project/fugue/issues/503 | [
"python deprecation",
"compatibility"
] | goodwanghan | 0 |
iperov/DeepFaceLab | deep-learning | 5,608 | Деловое предложение | Иван добрый день , меня зовут Андрей запустил проекты Антивирус Гризли про , online-ocenka.ru оценка для нотариусов , к вам есть деловое предложение. Свяжитесь со мной пожалуйста , не могу найти как вам написать. ТГ @andrei_kratkiy .
Сорри за спам ветки. | open | 2023-01-09T09:48:08Z | 2023-06-08T23:08:15Z | https://github.com/iperov/DeepFaceLab/issues/5608 | [] | Andreypetrov10 | 1 |
aleju/imgaug | machine-learning | 5 | Changes : Review documentation style | I have added (some) documentation for the Augmenter and Noop classes in the augmenters.py file.
Review the changes in [My fork](https://github.com/SarthakYadav/imgaug) and make suggestions.
This is just a glimpse. Much work will be needed on documentation (as more often than not I am not quite sure what a parameter signifies, etc). | closed | 2016-12-11T12:26:10Z | 2016-12-11T18:34:25Z | https://github.com/aleju/imgaug/issues/5 | [] | SarthakYadav | 5 |
tflearn/tflearn | tensorflow | 624 | Bug using batch_norm in combination with tf==1.0.0 | I am experiencing very strange `batch_norm` behaviour, probably caused by the commit https://github.com/tflearn/tflearn/commit/3cfc4f471a6184f9acd36b50b239329911de5ab2 by @WHAAAT
The `except` branch in the code is always catching the following exception (which is very strange):
```
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tflearn/layers/conv.py", line 1147, in residual_block
resnet = tflearn.batch_normalization(resnet)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tflearn/layers/normalization.py", line 79, in batch_normalization
trainable=False, restore=restore)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/contrib/framework/python/ops/arg_scope.py", line 177, in func_with_args
return func(*args, **current_args)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tflearn/variables.py", line 66, in variable
validate_shape=validate_shape)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 988, in get_variable
custom_getter=custom_getter)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 890, in get_variable
custom_getter=custom_getter)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 348, in get_variable
validate_shape=validate_shape)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 333, in _true_getter
caching_device=caching_device, validate_shape=validate_shape)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 684, in _get_single_variable
validate_shape=validate_shape)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 226, in __init__
expected_shape=expected_shape)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/ops/variables.py", line 303, in _init_from_args
initial_value(), name="initial_value", dtype=dtype)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 673, in <lambda>
shape.as_list(), dtype=dtype, partition_info=partition_info)
TypeError: __init__() got multiple values for keyword argument 'dtype'
```
and the code in the `except` block is always being run.
This is causing the following problems for me:
1. I am unable to load old tflearn models, with the following error:
```
.....
model.load(model_file)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tflearn/models/dnn.py", line 278, in load
self.trainer.restore(model_file, weights_only, **optargs)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tflearn/helpers/trainer.py", line 449, in restore
self.restorer.restore(self.session, model_file)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1439, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 767, in run
run_metadata_ptr)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 965, in _run
feed_dict_string, options, run_metadata)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1015, in _do_run
target_list, options, run_metadata)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1035, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Tensor name "BatchNormalization/moving_mean_1" not found in checkpoint files mymodel/model.ckpt-20
[[Node: save_1/RestoreV2_2 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save_1/Const_0, save_1/RestoreV2_2/tensor_names, save_1/RestoreV2_2/shape_and_slices)]]
Caused by op u'save_1/RestoreV2_2', defined at:
....
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tflearn/models/dnn.py", line 64, in __init__
best_val_accuracy=best_val_accuracy)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tflearn/helpers/trainer.py", line 145, in __init__
keep_checkpoint_every_n_hours=keep_checkpoint_every_n_hours)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1051, in __init__
self.build()
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1081, in build
restore_sequentially=self._restore_sequentially)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 675, in build
restore_sequentially, reshape)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 402, in _AddRestoreOps
tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 242, in restore_op
[spec.tensor.dtype])[0])
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 668, in restore_v2
dtypes=dtypes, name=name)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
op_def=op_def)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2395, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/Users/janzikes/anaconda2/envs/research/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1264, in __init__
self._traceback = _extract_stack()
NotFoundError (see above for traceback): Tensor name "BatchNormalization/moving_mean_1" not found in checkpoint files mymodel/model.ckpt-20
[[Node: save_1/RestoreV2_2 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save_1/Const_0, save_1/RestoreV2_2/tensor_names, save_1/RestoreV2_2/shape_and_slices)]]
```
2. An interesting thing happens when I change the code in tflearn from:
```
try:
moving_mean = vs.variable('moving_mean', input_shape[-1:], initializer=tf.zeros_initializer,
trainable=False, restore=restore)
except:
moving_mean = vs.variable('moving_mean', input_shape[-1:], initializer=tf.zeros_initializer(),
trainable=False, restore=restore)
```
to the original state:
```
moving_mean = vs.variable('moving_mean', input_shape[-1:], initializer=tf.zeros_initializer(),
trainable=False, restore=restore)
```
This loads the old model for me, but unfortunately predicts different stuff (worse stuff) than it used to predict with `tensorflow==0.11` and older tflearn (pre 0.3.0 commit).
3. During the training phase everything works well, except that it always ends up in the `except` block: the code in the `try` block always fails with the exception `TypeError: __init__() got multiple values for keyword argument 'dtype'`.
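As background for the `try` failure: in TensorFlow 1.0, `tf.zeros_initializer` changed from a plain initializer function into a class that must be instantiated, so passing the class itself makes TF call it with `(shape, dtype=..., partition_info=...)` and produces exactly this duplicate-`dtype` `TypeError`. The version-agnostic pattern can be sketched with pure-Python stand-ins (no TF needed here; swap in the real module, where the stand-ins below only mimic the API shapes):

```python
class ZerosInitializer:                    # stand-in for TF >= 1.0 tf.zeros_initializer
    def __call__(self, shape, dtype=None, partition_info=None):
        return [0.0] * shape[0]

def zeros_initializer(shape, dtype=None):  # stand-in for the pre-1.0 function form
    return [0.0] * shape[0]

def as_initializer(init):
    """Return a callable initializer whether `init` is the old function
    or the new class (same intent as tflearn's try/except, no exceptions)."""
    return init() if isinstance(init, type) else init

for candidate in (ZerosInitializer, zeros_initializer):
    print(as_initializer(candidate)([3]))  # [0.0, 0.0, 0.0] in both cases
```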
My Python packages are the following: `tflearn==0.3.0` and `tensorflow==1.0.0`. | closed | 2017-02-23T13:07:59Z | 2017-02-27T15:20:27Z | https://github.com/tflearn/tflearn/issues/624 | [] | ziky90 | 4 |
ultralytics/ultralytics | machine-learning | 18,832 | Segmentation mask error in training batch when using multiprocess | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Train
### Bug
I've noticed a very strange bug that occurs when starting YOLOv8 training in a separate process (starting each training run in a separate multiprocess):
The generated training batches (for some reason always batches 2 and 4) contain weird training masks. I'm not sure if they are applied during the training or just appear on the training batch. It happens across all YOLOv8 versions.
At first I thought it had a connection with the albumentations library, but I was wrong.
I've attached the generated training batches as files.
kfold2_train_batch1_wrong:
has almost ALL masks wrong except images 0755
[](https://github.com/user-attachments/assets/e4b81136-f607-43e0-be52-77ebcbb331ad)
As a comparison, I've uploaded kfold4_train_batch_0, where you can clearly see that image 0938 (top-left-ish) has the correct mask compared to kfold2_train_batch1_wrong.
[](https://github.com/user-attachments/assets/b5a8c3a3-004b-42f9-bc4c-d9d9b31df487)
The other training batch with the same problem is kfold3_train_batch2; here only 2 images appear to be wrong (0691 and 0660).
[](https://github.com/user-attachments/assets/bd4bfcf5-05f8-4ffc-80b4-37bee938174b)
I can guarantee the mask conversion from the original dataset to YOLOv8 annotation format is correct and this isn't the problem. This only happens when using the multiprocess.
Any alternative way that I can use something which acts like multiprocess? I really need to start the trainig in a seperate process and get the results then close it.
### Environment
```
Ultralytics 8.3.65 🚀 Python-3.11.8 torch-2.5.1+cu124 CUDA:0 (Tesla V100-SXM2-16GB, 16144MiB)
Setup complete ✅ (80 CPUs, 503.8 GB RAM, 379.5/439.0 GB disk)
OS Linux-5.15.0-1065-nvidia-x86_64-with-glibc2.35
Environment Linux
Python 3.11.8
Install pip
RAM 503.76 GB
Disk 379.5/439.0 GB
CPU Intel Xeon E5-2698 v4 2.20GHz
CPU count 80
GPU Tesla V100-SXM2-16GB, 16144MiB
GPU count 1
CUDA 12.4
numpy ✅ 1.26.4>=1.23.0
numpy ✅ 1.26.4<2.0.0; sys_platform == "darwin"
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.1>=1.4.1
torch ✅ 2.5.1>=1.8.0
torch ✅ 2.5.1!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 6.1.1
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
```
### Minimal Reproducible Example
```
import os
import yaml

from ultralytics import YOLO
import torch.multiprocessing as mp
from multiprocessing import Queue
ROOT_DIR = os.getcwd()
DATA_DIR = os.path.join(ROOT_DIR,'data')
TRAINING_DIR = os.path.join(ROOT_DIR,'training')
def init_yaml(kfold_number=1):
data = {
'path': f'{DATA_DIR}/{kfold_number}',
'train': 'train/images',
'val': 'val/images',
'names': {
0: 'wound'
}
}
with open(os.path.join(ROOT_DIR, "config.yaml"), "w") as f:
yaml.dump(data, f, default_flow_style=False)
def train_model_function(i, result_queue):
model = YOLO('yolov8x-seg.pt')
results_training = model.train(data=os.path.join(ROOT_DIR, "config.yaml"), project=TRAINING_DIR, name="train_test_" + str(i),
epochs=1, imgsz=512, batch=16, deterministic=True, plots=True, seed=1401,
close_mosaic=0, augment=False, hsv_h=0, hsv_s=0, hsv_v=0, degrees=0, translate=0, scale=0,
shear=0.0, perspective=0, flipud=0, fliplr=0, bgr=0, mosaic=0, mixup=0, copy_paste=0, erasing=0,crop_fraction=0)
r = {
'train_map': results_training.seg.map,
'train_map50': results_training.seg.map50,
'train_map75': results_training.seg.map75,
'train_precision': float(results_training.seg.p[0]),
'train_recall': float(results_training.seg.r[0]),
'train_f1': float(results_training.seg.f1[0]),
}
result_queue.put((r))
def start_training_in_process(i):
result_queue = Queue()
train_process = mp.Process(
target=train_model_function,
args=(i, result_queue)
)
train_process.name = "TrainingProcess"
train_process.start()
train_process.join()
if not result_queue.empty():
r = result_queue.get()
return r
else:
print("No results returned from process")
return None
if __name__ == '__main__':
#loop through different kfolds of dataset
for i in range(1,6):
init_yaml(i)
start_training_in_process(i)
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-01-22T23:49:08Z | 2025-02-26T06:34:19Z | https://github.com/ultralytics/ultralytics/issues/18832 | [
"bug",
"segment"
] | armanivers | 37 |
yihong0618/running_page | data-visualization | 560 | After switching from Nike to Strava, how do I delete duplicated historical data? | After switching from Nike to Strava, my historical data contains duplicates, presumably because Strava re-uploaded my old Nike records (they are not exactly the same; there are small differences). I'm using GitHub Actions. How can I remove the duplicates? Thanks!
My website:
https://schenxia.github.io/running_page/ | closed | 2023-12-04T05:29:57Z | 2023-12-05T16:40:47Z | https://github.com/yihong0618/running_page/issues/560 | [] | schenxia | 2 |
plotly/plotly.py | plotly | 4,415 | Empty Facets on partially filled MultiIndex DataFrame with datetime.date in bar plot | I have a pandas dataframe with a multicolumn index consisting of three levels, which I want to plot using a plotly.express.bar. First dimension `Station` (str) goes into the facets, the second dimension `Night` (datetime.date) goes onto the x-axis and the third dimension `Limit` (int) is going to be the stacked bars.
The bug is, that the resulting figure does not show any data on facets which do only have data at one `Night`. The expected outcome would be that even though data exists only on one date this data is presented.
A reproducible example looks like this:
```python
import datetime
import pandas as pd
import plotly.express as px
df = pd.DataFrame([
["station-01", datetime.date(2023,10,1), 50, 0.1],
["station-01", datetime.date(2023,10,1), 100, 0.15],
["station-01", datetime.date(2023,10,2), 50, 0.2],
["station-01", datetime.date(2023,10,2), 100, 0.22],
["station-01", datetime.date(2023,10,3), 50, 0.05],
["station-01", datetime.date(2023,10,3), 100, 0.02],
["station-02", datetime.date(2023,10,1), 50, 0.5],
["station-02", datetime.date(2023,10,1), 100, 0.2],
["station-03", datetime.date(2023,10,1), 50, 0.5],
["station-03", datetime.date(2023,10,1), 100, 0.5],
], columns=["Station", "Night", "Limit", "Relative Duration"])
df = df.set_index(["Station", "Night", "Limit"])
px.bar(
df,
x=df.index.get_level_values("Night"),
y="Relative Duration",
range_y=[0, 1],
color=df.index.get_level_values("Limit"),
facet_col=df.index.get_level_values("Station"),
)
```
The figure created looks like this:

When replacing the `datetime.date` type with a string, the figure looks as expected:
```python
import datetime
import pandas as pd
import plotly.express as px
df = pd.DataFrame([
["station-01", "101", 50, 0.1],
["station-01", "101", 100, 0.15],
["station-01", "102", 50, 0.2],
["station-01", "102", 100, 0.22],
["station-01", "103", 50, 0.05],
["station-01", "103", 100, 0.02],
["station-02", "101", 50, 0.5],
["station-02", "101", 100, 0.2],
["station-03", "101", 50, 0.5],
["station-03", "101", 100, 0.3],
], columns=["Station", "Night", "Limit", "Relative Duration"])
df = df.set_index(["Station", "Night", "Limit"])
px.bar(
df,
x=df.index.get_level_values("Night"),
y="Relative Duration",
range_y=[0, 1],
color=df.index.get_level_values("Limit"),
facet_col=df.index.get_level_values("Station"),
)
```

Due to this behavior, I suspect the plot produced from the initial DataFrame is incorrect and there is a bug in the plotly.express code.
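As a workaround (an assumption based on the two examples above, since the string-keyed version renders correctly), converting the `datetime.date` level to ISO-format strings or `pd.Timestamp` before plotting should sidestep the issue; the conversion itself is plain Python:

```python
import datetime

nights = [datetime.date(2023, 10, 1), datetime.date(2023, 10, 2)]

# ISO strings sort lexicographically in date order, so the x-axis
# ordering is preserved while plotly treats them like the working
# string-based example.
night_labels = [d.isoformat() for d in nights]
print(night_labels)
```

In the `px.bar` call that would be, e.g., `x=[d.isoformat() for d in df.index.get_level_values("Night")]`.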
Thanks for helping out. | open | 2023-11-06T13:59:49Z | 2024-08-12T13:41:39Z | https://github.com/plotly/plotly.py/issues/4415 | [
"bug",
"sev-2",
"P3"
] | jonashoechst | 0 |
CorentinJ/Real-Time-Voice-Cloning | python | 411 | poor performance in compare to the main paper? | Hi,
For those of you who are working with this repo to synthesize different voices:
Have you noticed a huge difference between the voices generated by this repo and the samples released by the main paper [here](https://google.github.io/tacotron/publications/speaker_adaptation/)?
If yes, let's discuss and find out the reason(s). | closed | 2020-07-09T09:11:52Z | 2020-07-22T21:41:48Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/411 | [] | amintavakol | 6 |
tflearn/tflearn | tensorflow | 951 | Save model and load it on another model with same structure | I have two DNN-models with the same structure and they are trained independently, but I want to transfer the weights of the first model to the second model at runtime.
```
import tflearn
from tflearn.layers.core import input_data, fully_connected
from tflearn.layers.estimator import regression
#import copy
def create_model(shape):
fc = fully_connected(shape, n_units=12, activation="relu")
fc2 = fully_connected(fc, n_units=2, activation="softmax")
regressor = regression(fc2)
return tflearn.DNN(regressor)
shape = input_data([None, 2])
model1 = create_model(shape) #first model
model2 = create_model(shape) #second model
"""
...
train model1
...
"""
path = "models/test_model.tfl"
model1.save(path)
model2.load(path, weights_only=True) #transfer the weights of the first model
#model2 = copy.deepcopy(model1) throws Error because of thread.Lock Object
```
If I run this I get the following Error:
```
2017-11-05 17:42:47.978858: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\framework\op_kernel.cc:1192] Not found: Key FullyConnected_3/b not found in checkpoint
2017-11-05 17:42:47.981882: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\framework\op_kernel.cc:1192] Not found: Key FullyConnected_3/W not found in checkpoint
2017-11-05 17:42:47.982010: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\framework\op_kernel.cc:1192] Not found: Key FullyConnected_2/b not found in checkpoint
2017-11-05 17:42:47.982181: W C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\framework\op_kernel.cc:1192] Not found: Key FullyConnected_2/W not found in checkpoint
Traceback (most recent call last):
File "D:\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1323, in _do_call
return fn(*args)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1302, in _run_fn
status, run_metadata)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 473, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.NotFoundError: Key FullyConnected_3/b not found in checkpoint
[[Node: save_6/RestoreV2_7 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save_6/Const_0_0, save_6/RestoreV2_7/tensor_names, save_6/RestoreV2_7/shape_and_slices)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/Users/Jonas/PycharmProjects/ReinforcementLearning/TflearnSaveLoadTest.py", line 20, in <module>
model2.load(path, weights_only=True)
File "D:\Python\Python36\lib\site-packages\tflearn\models\dnn.py", line 308, in load
self.trainer.restore(model_file, weights_only, **optargs)
File "D:\Python\Python36\lib\site-packages\tflearn\helpers\trainer.py", line 492, in restore
self.restorer_trainvars.restore(self.session, model_file)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 1666, in restore
{self.saver_def.filename_tensor_name: save_path})
File "D:\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 889, in run
run_metadata_ptr)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1120, in _run
feed_dict_tensor, options, run_metadata)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1317, in _do_run
options, run_metadata)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 1336, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Key FullyConnected_3/b not found in checkpoint
[[Node: save_6/RestoreV2_7 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save_6/Const_0_0, save_6/RestoreV2_7/tensor_names, save_6/RestoreV2_7/shape_and_slices)]]
Caused by op 'save_6/RestoreV2_7', defined at:
File "C:/Users/Jonas/PycharmProjects/ReinforcementLearning/TflearnSaveLoadTest.py", line 16, in <module>
model2 = create_model(shape)
File "C:/Users/Jonas/PycharmProjects/ReinforcementLearning/TflearnSaveLoadTest.py", line 11, in create_model
return tflearn.DNN(regressor)
File "D:\Python\Python36\lib\site-packages\tflearn\models\dnn.py", line 65, in __init__
best_val_accuracy=best_val_accuracy)
File "D:\Python\Python36\lib\site-packages\tflearn\helpers\trainer.py", line 155, in __init__
allow_empty=True)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 1218, in __init__
self.build()
File "D:\Python\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 1227, in build
self._build(self._filename, build_save=True, build_restore=True)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 1263, in _build
build_save=build_save, build_restore=build_restore)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 751, in _build_internal
restore_sequentially, reshape)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 427, in _AddRestoreOps
tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\training\saver.py", line 267, in restore_op
[spec.tensor.dtype])[0])
File "D:\Python\Python36\lib\site-packages\tensorflow\python\ops\gen_io_ops.py", line 1020, in restore_v2
shape_and_slices=shape_and_slices, dtypes=dtypes, name=name)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 2956, in create_op
op_def=op_def)
File "D:\Python\Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 1470, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access
NotFoundError (see above for traceback): Key FullyConnected_3/b not found in checkpoint
[[Node: save_6/RestoreV2_7 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save_6/Const_0_0, save_6/RestoreV2_7/tensor_names, save_6/RestoreV2_7/shape_and_slices)]]
```
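The `FullyConnected_2`/`FullyConnected_3` names in the error are the key: both models are built in the same default graph, so the second model's layers get uniqued names that never existed in the checkpoint (which holds `FullyConnected` and `FullyConnected_1`). A stand-alone illustration of that uniquing, in plain Python mimicking TF's behaviour (building each `DNN` in its own `tf.Graph`, or inside a fixed variable scope, keeps the names aligned):

```python
from collections import defaultdict

class Graph:
    """Minimal stand-in for TF's per-graph layer-name uniquing."""
    def __init__(self):
        self._counts = defaultdict(int)

    def unique_name(self, base):
        n = self._counts[base]
        self._counts[base] += 1
        return base if n == 0 else f"{base}_{n}"

shared = Graph()                                  # what the issue's code effectively does
model1 = [shared.unique_name("FullyConnected") for _ in range(2)]
model2 = [shared.unique_name("FullyConnected") for _ in range(2)]
print(model1)  # names saved in the checkpoint
print(model2)  # names looked up by model2's restorer, hence "not found"
```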
I've also tried to use the deepcopy function of the copy module to duplicate the model, but that doesn't seem to work:
`TypeError: can't pickle _thread.lock objects` | open | 2017-11-05T17:13:42Z | 2017-11-07T06:28:13Z | https://github.com/tflearn/tflearn/issues/951 | [] | TheRealfanibu | 1 |
plotly/plotly.py | plotly | 4,317 | Inconsistent Behaviour Between PlotlyJS and Plotly Python | If you take the figure object, the formatting string: '.2%f' will format values as percents when created with the Javascript library, but not with the Python library. '.2%', however, works with both libraries. | closed | 2023-08-11T16:15:15Z | 2023-08-11T17:51:34Z | https://github.com/plotly/plotly.py/issues/4317 | [] | msillz | 2 |
hack4impact/flask-base | sqlalchemy | 228 | ModuleNotFoundError: No module named 'flask.ext' | I have cloned the repo and followed the README.md to start the app.
But after running the command ```honcho start -e config.env -f Local```
I got the error below; it seems the module has been deprecated.
```
(.flask) user@flask:~/flask-base$ python -m honcho start -e config.env -f Local
15:44:52 system | redis.1 started (pid=3830)
15:44:53 system | web.1 started (pid=3834)
15:44:53 system | worker.1 started (pid=3835)
15:44:53 redis.1 | 3833:C 07 Mar 2024 15:44:53.027 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
15:44:53 redis.1 | 3833:C 07 Mar 2024 15:44:53.032 # Redis version=7.0.15, bits=64, commit=00000000, modified=0, pid=3833, just started
15:44:53 redis.1 | 3833:C 07 Mar 2024 15:44:53.033 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
15:44:53 redis.1 | 3833:M 07 Mar 2024 15:44:53.035 * Increased maximum number of open files to 10032 (it was originally set to 1024).
15:44:53 redis.1 | 3833:M 07 Mar 2024 15:44:53.036 * monotonic clock: POSIX clock_gettime
15:44:53 redis.1 | 3833:M 07 Mar 2024 15:44:53.037 * Running mode=standalone, port=6379.
15:44:53 redis.1 | 3833:M 07 Mar 2024 15:44:53.040 # Server initialized
15:44:53 redis.1 | 3833:M 07 Mar 2024 15:44:53.041 * Ready to accept connections
15:44:53 worker.1 | Traceback (most recent call last):
15:44:53 worker.1 | File "userflask-base/manage.py", line 5, in <module>
15:44:53 worker.1 | from flask_migrate import Migrate, MigrateCommand
15:44:53 worker.1 | File "/home/user/.flask/lib/python3.11/site-packages/flask_migrate/__init__.py", line 3, in <module>
15:44:53 worker.1 | from flask.ext.script import Manager
15:44:53 worker.1 | ModuleNotFoundError: No module named 'flask.ext'
15:44:53 web.1 | Traceback (most recent call last):
15:44:53 web.1 | File "/home/user/flask-base/manage.py", line 5, in <module>
15:44:53 web.1 | from flask_migrate import Migrate, MigrateCommand
15:44:53 web.1 | File "/home/user/.flask/lib/python3.11/site-packages/flask_migrate/__init__.py", line 3, in <module>
15:44:53 web.1 | from flask.ext.script import Manager
15:44:53 web.1 | ModuleNotFoundError: No module named 'flask.ext'
15:44:53 system | worker.1 stopped (rc=1)
15:44:53 system | sending SIGTERM to redis.1 (pid 3830)
15:44:53 system | sending SIGTERM to web.1 (pid 3834)
15:44:53 redis.1 | 3833:signal-handler (1709826293) Received SIGTERM scheduling shutdown...
15:44:53 system | web.1 stopped (rc=-15)
15:44:53 redis.1 | 3833:M 07 Mar 2024 15:44:53.444 # User requested shutdown...
15:44:53 redis.1 | 3833:M 07 Mar 2024 15:44:53.444 * Saving the final RDB snapshot before exiting.
15:44:53 redis.1 | 3833:M 07 Mar 2024 15:44:53.447 * DB saved on disk
15:44:53 redis.1 | 3833:M 07 Mar 2024 15:44:53.447 # Redis is now ready to exit, bye bye...
15:44:53 system | redis.1 stopped (rc=-15)``` | open | 2024-03-07T16:01:03Z | 2024-05-10T17:31:19Z | https://github.com/hack4impact/flask-base/issues/228 | [] | sathish-sign | 1 |
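For anyone hitting this: the `flask.ext.*` import namespace was removed in Flask 1.0, so a traceback like the one above usually means a very old Flask-Migrate release is installed alongside a modern Flask. A possible fix via version pins in `requirements.txt` (the bounds below are illustrative assumptions, not tested against this repo):

```
Flask>=1.0
Flask-Script>=2.0        # still needed by manage.py's Manager
Flask-Migrate>=2.0,<3.0  # no flask.ext import, but keeps MigrateCommand (removed in 3.0)
```

Alternatively, upgrade Flask-Migrate past 3.0 and drop `MigrateCommand` from `manage.py` in favour of the `flask db` CLI.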
cobrateam/splinter | automation | 515 | Suggested updates to the `FlaskClient` driver | Hi - first off thank you for this wonderful library. I'm using it in conjunction with **pytest-bdd** and **Flask** and for the most part it's been plain sailing, however I am currently using a custom `FlaskClient` that overrides the `_do_method` to resolve 2 issues/traits I came across.
### `302` and `303` behaviour
By default most browsers (correctly or incorrectly) will use the `GET` method when redirecting based on a `302` response, and `303`s (I believe) specifically address this confusion by declaring that the `303` response from a `POST` request redirects to a `GET` resource.
At the moment using the `FlaskClient` all `30X` responses retain the current method, so if a `303` response is received from a `POST` request then the `_do_method` will make another request using the `post` not the intended `get` method.
### `args` and `form` data
**Flask** provides access to data transmitted via `POST` or `PUT` through `request.form`, alternatively data transmitted in the URL (e.g via `GET`) is available through `request.args` (there's also `request.values` which allows you to access data in either but that's not important here).
Using the `FlaskClient`, when you submit a form with a `method` attribute of `GET` (e.g. `<form method="GET" ...>`) the request will be made using the `GET` method, but the data will be sent as if `POST`ed and will end up in the `request.form` attribute. As you can imagine this then causes issues because the view being called is looking for the submitted data in `request.args`.
The cause of the issue is that data is always sent in the `_do_method` using the `data` keyword argument when calling the `func_method`, but for a `GET` request (I believe) the `data` keyword should be sent as `None` and the data should instead be set against the URL as a query.
### Fixing these issues
As I mentioned at the start of the issue I'm currently using a patched driver based on the existing `FlaskClient` to resolve these issues/traits. I've included the code below in case others come across the same issues in the short-term and as a suggestion as to how they might be fixed. If you'd like me to contribute the changes as a pull request I'm more than happy to have a look at doing so over the weekend.
> **Note:** My patch is using Python3.4+, hence `from urllib import parse` so would need to be made compatible with 2.7+ though I think that would be a relatively easy task.
``` python
from urllib import parse

from splinter.browser import _DRIVERS
from splinter.driver import flaskclient

__all__ = ['FlaskClient']


class FlaskClient(flaskclient.FlaskClient):
    """
    A patched `FlaskClient` driver that implements more standard `302`/`303`
    behaviour and that sets data for `GET` requests against the URL.
    """

    driver_name = 'flask'

    def _do_method(self, method, url, data=None):
        # Set the initial URL and client/HTTP method
        self._url = url
        func_method = getattr(self._browser, method.lower())

        # Continue to make requests until a non-30X response is received
        while True:
            self._last_urls.append(url)

            # If we're making a GET request set the data against the URL as a
            # query.
            if method.lower() == 'get':
                # Parse the existing URL and its query
                url_parts = parse.urlparse(url)
                url_params = parse.parse_qs(url_parts.query)

                # Update any existing query dictionary with the `data` argument
                url_params.update(data or {})
                url_parts = url_parts._replace(
                    query=parse.urlencode(url_params, doseq=True))

                # Rebuild the URL
                url = parse.urlunparse(url_parts)

                # As the `data` argument will be passed as a keyword argument
                # to the `func_method`, we set it to `None` to prevent it from
                # populating `flask.request.form` on `GET` requests.
                data = None

            # Call the flask client
            self._response = func_method(
                url,
                headers=self._custom_headers,
                data=data,
                follow_redirects=False
            )

            # Implement more standard `302`/`303` behaviour
            if self._response.status_code in (302, 303):
                func_method = getattr(self._browser, 'get')

            # If the response was not in the `30X` range we're done
            if self._response.status_code not in (301, 302, 303, 305, 307):
                break

            # If the response was in the `30X` range get the next URL to request
            url = self._response.headers['Location']

        self._url = self._last_urls[-1]
        self._post_load()


# Patch the default `FlaskClient` driver
_DRIVERS['flask'] = FlaskClient
```
| closed | 2016-09-20T16:37:13Z | 2020-03-08T14:54:24Z | https://github.com/cobrateam/splinter/issues/515 | [] | anthonyjb | 9 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 37 | Question: Usage of model with multiple inputs | Hello,
I want to train a pre-trained BERT model, which accepts three arguments (token_ids, token_types, attention_mask), all of them tensors (N x L). As far as I understand from your source code, during training with a `trainer` instance, I am able to pass only one tensor into the model.
What would you recommend: 1. wrap the model in another model that accepts stacked tensors and splits them, 2. edit the source code, or 3. something else? Or maybe I am wrong; please correct me if so. | closed | 2020-04-08T16:59:29Z | 2020-04-09T16:36:19Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/37 | [
"Frequently Asked Questions"
] | lebionick | 4 |
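If helpful, option 1 can be quite small. Here is a minimal sketch; the `(N, 3, L)` stacking layout and the names below are my assumptions, not pytorch-metric-learning's API:

```python
import torch


class BertWrapper(torch.nn.Module):
    """Accepts one stacked tensor of shape (N, 3, L) and splits it into the
    three inputs the wrapped BERT model expects."""

    def __init__(self, bert):
        super().__init__()
        self.bert = bert

    def forward(self, stacked):
        # Assumed layout: dim 1 holds token_ids, token_types, attention_mask.
        token_ids, token_types, attention_mask = stacked.unbind(dim=1)
        return self.bert(token_ids, token_types, attention_mask)
```

The trainer then only ever sees the single stacked tensor, e.g. `stacked = torch.stack([token_ids, token_types, attention_mask], dim=1)`.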
custom-components/pyscript | jupyter | 32 | bug with list comprehension since 0.3 | After updating to 0.3, I suddenly see this message in my error logs:
`SyntaxError: no binding for nonlocal 'entity_id' found`
Reproduce with:
```python
def white_or_cozy(group_entity_id):
    entity_ids = state.get_attr(group_entity_id)['entity_id']
    attrs = [state.get_attr(entity_id) for entity_id in entity_ids]
``` | closed | 2020-10-09T08:45:29Z | 2020-10-09T17:28:27Z | https://github.com/custom-components/pyscript/issues/32 | [] | basnijholt | 2 |
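Until the comprehension handling is fixed, rewriting it as an explicit loop may sidestep the scoping issue. A sketch, with pyscript's `state.get_attr` abstracted into a parameter since it only exists inside the pyscript runtime:

```python
def get_attrs(entity_ids, get_attr):
    # Explicit loop instead of: [get_attr(entity_id) for entity_id in entity_ids]
    attrs = []
    for entity_id in entity_ids:
        attrs.append(get_attr(entity_id))
    return attrs
```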
recommenders-team/recommenders | data-science | 2,064 | [BUG] NameError in ImplicitCF | ### Description
<!--- Describe your issue/bug/request in detail -->
```
2024-02-19T18:34:57.2553239Z @pytest.mark.gpu
2024-02-19T18:34:57.2568702Z def test_model_lightgcn(deeprec_resource_path, deeprec_config_path):
2024-02-19T18:34:57.2569997Z data_path = os.path.join(deeprec_resource_path, "dkn")
2024-02-19T18:34:57.2571489Z yaml_file = os.path.join(deeprec_config_path, "lightgcn.yaml")
2024-02-19T18:34:57.2572675Z user_file = os.path.join(data_path, r"user_embeddings.csv")
2024-02-19T18:34:57.2573837Z item_file = os.path.join(data_path, r"item_embeddings.csv")
2024-02-19T18:34:57.2574736Z
2024-02-19T18:34:57.2575330Z df = movielens.load_pandas_df(size="100k")
2024-02-19T18:34:57.2576298Z train, test = python_stratified_split(df, ratio=0.75)
2024-02-19T18:34:57.2577150Z
2024-02-19T18:34:57.2577757Z > data = ImplicitCF(train=train, test=test)
2024-02-19T18:34:57.2578784Z E NameError: name 'ImplicitCF' is not defined
2024-02-19T18:34:57.2579411Z
2024-02-19T18:34:57.2580002Z tests/smoke/recommenders/recommender/test_deeprec_model.py:251: NameError
```
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
https://github.com/recommenders-team/recommenders/actions/runs/7963399372/job/21738878728
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
### Other Comments
FYI @SimonYansenZhao this is similar to the issue we got in #2022 | closed | 2024-02-19T19:30:03Z | 2024-04-05T14:07:32Z | https://github.com/recommenders-team/recommenders/issues/2064 | [
"bug"
] | miguelgfierro | 3 |
pallets-eco/flask-wtf | flask | 593 | Support overriding RECAPTCHA_ERROR_CODES | The recaptcha error codes are in English, see https://github.com/wtforms/flask-wtf/blob/main/src/flask_wtf/recaptcha/validators.py#L10 and I would to overrride them with my (Dutch) translation.
```python
app.config['RECAPTCHA_ERROR_CODES'] = {
    'missing-input-secret': 'De geheime parameter ontbreekt.',
    'invalid-input-secret': 'De geheime parameter is ongeldig of misvormd.',
    'missing-input-response': 'De responsparameter ontbreekt.',
    'invalid-input-response': 'De responsparameter is ongeldig of misvormd.',
}
```
Even better would be if translations (via the existing gettext i18n?) for [all supported languages](https://docs.hcaptcha.com/languages) were shipped with Flask-WTF. As with the widget's content, the browser's locale would be used, but could also be overridden with:

```python
app.config['RECAPTCHA_PARAMETERS'] = {'hl': 'nl'}
```
It should work for recaptcha and hcaptcha. But overriding with a dict as shown above would be a good first step.
See also https://github.com/wtforms/flask-wtf/issues/583 | closed | 2024-01-08T23:11:15Z | 2024-01-23T00:54:30Z | https://github.com/pallets-eco/flask-wtf/issues/593 | [] | PanderMusubi | 1 |
OpenInterpreter/open-interpreter | python | 1,127 | Repeating output | ### Describe the bug
When rendering Markdown content, particularly within code snippets or text areas, the text is being repeated multiple times, despite it being intended to display only once. This repetition occurs within the rendered output, making it difficult to read and understand the Markdown content properly.
This may be an issue with PowerShell. If yes, please create the GUI fast.
### Reproduce
Use this as the prompt (or any prompt that will make it generate output longer than the monitor's display size): "Make me a simple Pomodoro app."
### Expected behavior
The Markdown content should be displayed exactly once, providing a clear and concise representation of the intended text.
### Screenshots

### Open Interpreter version
0.2.4
### Python version
3.10.7
### Operating System name and version
Windows 11
### Additional context
_No response_ | open | 2024-03-25T08:19:21Z | 2024-12-14T17:35:34Z | https://github.com/OpenInterpreter/open-interpreter/issues/1127 | [
"Bug"
] | qwertystars | 5 |
StratoDem/sd-material-ui | dash | 408 | Update font icon | https://material-ui.com/components/icons/ | closed | 2020-08-11T14:32:55Z | 2020-08-17T18:17:13Z | https://github.com/StratoDem/sd-material-ui/issues/408 | [] | coralvanda | 0 |
sktime/sktime | data-science | 7,950 | [BUG] Link to documentation in ExpandingGreedySplitter display goes to incorrect URL | **Describe the bug**
The image displayed when you instantiate an `ExpandingGreedySplitter` object has a `?` icon in the top right that links to the API reference for the object. However, rather than linking to the `latest` or `stable` page, it links to the version of `sktime` that you have installed. In my case, that is `v0.36.0`, which displays "404 Documentation page not found".
i.e. `https://www.sktime.net/en/v0.36.0/api_reference/auto_generated/sktime.split.expandinggreedy.ExpandingGreedySplitter.html`
**To Reproduce**
```python
from sktime.split import ExpandingGreedySplitter
ExpandingGreedySplitter(12)
```
**Expected behavior**
The URL should link a page that works. Either `https://www.sktime.net/en/latest/api_reference/auto_generated/sktime.split.ExpandingGreedySplitter.html` or `https://www.sktime.net/en/stable/api_reference/auto_generated/sktime.split.ExpandingGreedySplitter.html`. I do not know whether `latest` or `stable` is preferred.
**Additional context**
<!--
Add any other context about the problem here.
-->
**Versions**
<details>
<!--
Please run the following code snippet and paste the output here:
from sktime import show_versions; show_versions()
-->
```python
Python dependencies:
pip: 25.0
sktime: 0.36.0
sklearn: 1.6.1
skbase: 0.12.0
numpy: 2.0.1
scipy: 1.15.1
pandas: 2.2.3
matplotlib: 3.10.0
joblib: 1.4.2
numba: None
statsmodels: 0.14.4
pmdarima: 2.0.4
statsforecast: None
tsfresh: None
tslearn: None
torch: None
tensorflow: None
```
</details>
| open | 2025-03-07T17:55:13Z | 2025-03-22T14:23:35Z | https://github.com/sktime/sktime/issues/7950 | [
"bug",
"documentation"
] | gbilleyPeco | 5 |
roboflow/supervision | machine-learning | 1,641 | DetectionDataset merge fails when class name contains capital letter | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar bug report.
### Bug
Hello, thanks for this great library! I'm facing an issue while trying to merge 2 datasets when any of the class names contain a capital letter.
Error:
```
ValueError: Class Animal not found in target classes. source_classes must be a subset of target_classes.
```
The issue stems from the ```merge_class_lists function``` at https://github.com/roboflow/supervision/blob/37cacec70443a2c28ea6642f6bc54e6c5151c111/supervision/dataset/utils.py#L53
where the class names are converted to lower-case, but ```build_class_index_mapping``` keeps the class names as it is. For my use case, I was able to get around by removing the lower-case conversion.
### Environment
- Supervision 0.24.0
- OS: Windows 10
- Python 3.10.14
### Minimal Reproducible Example
Example: I downloaded 2 roboflow datasets - https://universe.roboflow.com/cvlab-6un5p/cv-lab-kpdek and https://universe.roboflow.com/padidala-indhu-e1dhl/animals-gzsxr and tried to merge them
```python
import supervision as sv
def main():
ds1 = sv.DetectionDataset.from_coco("data/CV-LAB.v1i.coco/train", "data/CV-LAB.v1i.coco/train/_annotations.coco.json")
ds2 = sv.DetectionDataset.from_coco("data/Animals.v1i.coco/train", "data/Animals.v1i.coco/train/_annotations.coco.json")
sv.DetectionDataset.merge([ds1, ds2])
if __name__ == '__main__':
main()
```
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2024-11-01T08:49:54Z | 2024-11-02T06:16:37Z | https://github.com/roboflow/supervision/issues/1641 | [
"bug"
] | Suhas-G | 5 |
dnouri/nolearn | scikit-learn | 13 | Ability to Shuffle Data Before Each Epoch | This will improve SGD and deal with missing residual data.
See https://github.com/dnouri/nolearn/pull/11#issuecomment-68298607.
| closed | 2014-12-29T21:48:04Z | 2015-02-09T09:43:09Z | https://github.com/dnouri/nolearn/issues/13 | [] | cancan101 | 5 |
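As a sketch of the idea (independent of nolearn's actual `BatchIterator` API), per-epoch shuffling only requires drawing a fresh permutation before slicing batches, which also handles the residual partial batch at the end:

```python
import numpy as np


def iterate_minibatches(X, y, batch_size, rng):
    """Yield (X, y) minibatches in a freshly shuffled order each call (= each epoch)."""
    order = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = order[start:start + batch_size]
        yield X[idx], y[idx]
```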
TheKevJames/coveralls-python | pytest | 157 | coveralls installation issue? | #coveralls-1.1 (https://pypi.python.org/pypi/coveralls)
C:\Users\RD\Anaconda3\pkgs_rd\coveralls-1.1>pip install setup.py
Error: could not find the version that satisfies the requirement setup.py
Version: Python 3.6.1 |Anaconda 4.4.0 (32-bit)| (default, date, time) [MSC v.1900 32 bit (Intel)] on win32
miguelgrinberg/microblog | flask | 255 | GitHub Dependabot alert: pycrypto (pip) | Hi Miguel, I'm learning/playing around with GitHub's Dependabot alert features and have been getting CVE (Common Vulnerabilities and Exposures) warnings about **pycrypto** for a while.
**I'm wondering: how can I trace back which pip package has pulled pycrypto into my projects?** I've been doing several Python courses in the same environment, so all package installs share the same development environment.
**Microblog**
https://github.com/miguelgrinberg/microblog/blob/master/requirements.txt
**Testing a Flask Application using pytest**
https://gitlab.com/patkennedy79/flask_user_management_example/-/blob/master/requirements.txt
**pycrypto**
CVE-2013-7459
CVE-2018-6594 | closed | 2020-08-19T10:36:29Z | 2020-08-21T09:37:37Z | https://github.com/miguelgrinberg/microblog/issues/255 | [
"question"
] | mrbiggleswirth | 2 |
wkentaro/labelme | computer-vision | 895 | [Feature] | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| closed | 2021-07-25T17:34:06Z | 2021-07-25T20:34:59Z | https://github.com/wkentaro/labelme/issues/895 | [] | Apidwalin | 0 |
TracecatHQ/tracecat | fastapi | 132 | Suggestion for this task - Bring-your-own LLM (OpenAI, Mistral, Anthropic etc.) | There is one great open source project on GitHub called LiteLLM.
https://github.com/BerriAI/litellm
If that project is integrated into this one, that to-do checklist task can be fulfilled easily, without struggling through the documentation for any number of different LLMs: LiteLLM has a simple schema/structure in which people can fill in their info and easily use any LLM (LiteLLM supports hundreds).
It creates a local proxy server and uses the OpenAI API structure, so we don't need much extra and don't have to study each LLM's documentation to integrate it into this solution. Things will be much easier, so I would suggest studying it and implementing it to make things much easier for yourself and for users too :-))
Many great projects have implemented it, and I have given this same suggestion to some of them, so it can be helpful for all :-))
Thank you for this great project
"enhancement"
] | Greatz08 | 5 |
scikit-image/scikit-image | computer-vision | 7,301 | `morphology.flood` fails for boolean image type if tolerance is set | ### Description:
This is because neither `numpy.finfo` nor `numpy.iinfo` works for a boolean dtype in
https://github.com/scikit-image/scikit-image/blob/f4c1b34ac968d9fda332d7d9a63c83499aaac1f6/skimage/morphology/_flood_fill.py#L275-L280
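A possible fix (my sketch, not the project's actual patch) is to special-case the boolean dtype before falling back to `finfo`/`iinfo`:

```python
import numpy as np


def dtype_max(dtype):
    """Upper bound of an image dtype, covering bool as well as ints and floats."""
    dtype = np.dtype(dtype)
    if dtype == bool:
        return True
    if np.issubdtype(dtype, np.floating):
        return np.finfo(dtype).max
    return np.iinfo(dtype).max
```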
### Way to reproduce:
```python
import numpy as np
import skimage as ski
image = np.empty([10, 10], dtype=bool)
ski.morphology.flood(image, (1, 1), tolerance=2)
```
<details><summary>Traceback</summary>
<p>
```
Traceback (most recent call last):
File "/home/lg/Res/scikit-image/skimage/morphology/_flood_fill.py", line 276, in flood
max_value = np.finfo(working_image.dtype).max
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lg/.local/lib/micromamba/envs/skimage2numpy2/lib/python3.12/site-packages/numpy/_core/getlimits.py", line 519, in __new__
raise ValueError("data type %r not inexact" % (dtype))
ValueError: data type <class 'numpy.bool'> not inexact
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/lg/.local/lib/micromamba/envs/skimage2numpy2/lib/python3.12/site-packages/IPython/core/interactiveshell.py", line 3553, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-2c35428dde4a>", line 5, in <module>
ski.morphology.flood(image, (1, 1), tolerance=2)
File "/home/lg/Res/scikit-image/skimage/morphology/_flood_fill.py", line 279, in flood
max_value = np.iinfo(working_image.dtype).max
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/lg/.local/lib/micromamba/envs/skimage2numpy2/lib/python3.12/site-packages/numpy/_core/getlimits.py", line 697, in __init__
raise ValueError("Invalid integer data type %r." % (self.kind,))
ValueError: Invalid integer data type 'b'.
```
</p>
</details>
### Version information:
```Shell
3.12.1 | packaged by conda-forge | (main, Dec 23 2023, 08:03:24) [GCC 12.3.0]
Linux-6.7.0-arch3-1-x86_64-with-glibc2.38
scikit-image version: 0.23.0rc0.dev0+git20240118.4f65ab74a
numpy version: 2.0.0.dev0+git20240113.d2f60ff
```
| closed | 2024-01-18T13:17:21Z | 2024-08-13T19:26:23Z | https://github.com/scikit-image/scikit-image/issues/7301 | [
":sleeping: Dormant",
":bug: Bug"
] | lagru | 5 |
wkentaro/labelme | deep-learning | 347 | Polygon label list causes missing objects (bug reported) | Hi there, I think I might have discovered a bug related to label list. If we don't sort the list accordingly, some labelled objects will be gone in the generated VOC-type files. For instance, if object A is within object B, you have to put A at the end of the list, so as to make A appear on the final VOC-like image.
Please refer to the "Polygon label" section and see the difference:


| closed | 2019-03-15T08:23:57Z | 2019-04-27T02:10:36Z | https://github.com/wkentaro/labelme/issues/347 | [] | rocklinsuv | 2 |
jupyter/nbgrader | jupyter | 1,095 | Use db for exchange mechanism | Perhaps for Edinburgh-Hackathon...
The exchange directory ties assignment movement to a shared filesystem. I'm wondering if storing files in a database (or first pass just having a higher level abstraction) would be more flexible for hubs with distributed notebook servers. | closed | 2019-05-30T05:16:46Z | 2022-07-13T15:16:57Z | https://github.com/jupyter/nbgrader/issues/1095 | [] | ryanlovett | 9 |
robinhood/faust | asyncio | 112 | Error upon start up on faust==1.0.12 | I get the following error when running faust 1.0.12:
```
Traceback (most recent call last):
File "/home/robinhood/env/lib/python3.6/site-packages/mode/services.py", line 685, in _execute_task
await task
File "/home/robinhood/env/lib/python3.6/site-packages/faust/transport/consumer.py", line 292, in _commit_handler
await self.commit()
File "/home/robinhood/env/lib/python3.6/site-packages/faust/transport/consumer.py", line 331, in commit
return await self.force_commit(topics)
File "/home/robinhood/env/lib/python3.6/site-packages/mode/services.py", line 417, in _and_transition
return await fun(self, *args, **kwargs)
File "/home/robinhood/env/lib/python3.6/site-packages/faust/transport/consumer.py", line 345, in force_commit
did_commit = await self._commit_tps(commit_tps)
File "/home/robinhood/env/lib/python3.6/site-packages/faust/transport/consumer.py", line 355, in _commit_tps
await self._handle_attached(commit_offsets)
File "/home/robinhood/env/lib/python3.6/site-packages/faust/transport/consumer.py", line 389, in _handle_attached
await producer.wait_many(pending)
File "/home/robinhood/env/lib/python3.6/site-packages/mode/services.py", line 600, in wait_many
return await self._wait_one(coro, timeout=timeout)
File "/home/robinhood/env/lib/python3.6/site-packages/mode/services.py", line 625, in _wait_one
f.result() # propagate exceptions
File "/home/robinhood/python-3.6.3/lib/python3.6/asyncio/tasks.py", line 304, in wait
raise ValueError('Set of coroutines/Futures is empty.')
ValueError: Set of coroutines/Futures is empty.
```
The app crashes after this error. | closed | 2018-06-07T00:18:57Z | 2018-07-31T14:39:16Z | https://github.com/robinhood/faust/issues/112 | [] | vineetgoel | 0 |
jeffknupp/sandman2 | sqlalchemy | 30 | Document further query parameters | Thanks for making such great code. I've got an instance running and am trying to document the ways that I can query the API.
So far, I've found:
- _?page=2_
- _?[columnheader]=[value]_
- _?[columnheader1]=[value1]&[columnheader3]=[value4]_ (returns results that have both value1 and value4)
- _?[columnheader1]=[value1]&[columnheader1]=[value2]_ (returns results that have value1 OR value2)
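For instance, the OR-style query above can be built programmatically; the resource and column names below are hypothetical, not part of sandman2 itself:

```python
from urllib.parse import urlencode

base = "http://localhost:5000/track"  # hypothetical sandman2 resource
params = [("AlbumId", 1), ("AlbumId", 2), ("page", 2)]  # AlbumId=1 OR AlbumId=2, page 2
url = f"{base}?{urlencode(params)}"
```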
I've been trying to figure these out but haven't had any luck:
- set number of resources per page (e.g. _?limit=30_)
- search (e.g. _?q=value5_)
- sort (e.g. _?sort=-[columnhearder2]_)
- exclude (e.g. _?[columnheader1]=[value1]&[columnheader3]=-[value4]_) (return all entries that have value1 in columnheader1 but exclude those that have value4 in columnheader3).
- return only certain fields (e.g. _?field=columnheader2_)
I've looked pretty hard but sorry if I missed these being explained elsewhere. Thanks for any help with this.
| open | 2016-03-18T21:43:01Z | 2018-10-05T14:39:54Z | https://github.com/jeffknupp/sandman2/issues/30 | [
"enhancement",
"help wanted"
] | gbinal | 3 |
allure-framework/allure-python | pytest | 432 | In Scenario Outline of Behave, Allure reports include Skipped test even when using --no-skipped or show_skipped = false | I'm submitting a ...
- [.] bug report
- [ ] feature request
What is the current behavior?
In a **Scenario Outline** in Behave, skipped scenarios appear in the Allure results even if the `--no-skipped` flag is set or behave.ini is set to `show_skipped = false`.
Steps:
1) I have written 2 scenario outlines. Each scenario outline loops through 4 kits (C, Java, .Net, .Net Core) for 2 different examples.
2) In the first scenario outline, after the first scenario (i.e., C), I skipped the remaining 3 scenarios (Java, .Net, .Net Core).
3) When it rolls around to the second scenario outline, in the Allure report I can see the skipped scenario results of the first scenario outline, which should not happen (highlighted in yellow in one of the attachments).
4) behave skipped the scenarios properly





What is the expected behavior?
The skipped scenario results of the previous scenario outline should not appear in the next scenario outline's results.
Please tell us about your environment:
- Allure-version: allure-behave@2.8.5
- Test framework: behave@1.2.6
| open | 2019-09-23T12:29:40Z | 2023-07-08T22:33:25Z | https://github.com/allure-framework/allure-python/issues/432 | [
"bug",
"theme:behave"
] | Syed8787 | 3 |
plotly/plotly.py | plotly | 4,501 | imshow() animation_frame behaves incorrectly with binary_string = True | Hi, I think there's something strange going on with:
- imshow()
- animation_frame
- binary_string
From my tests, binary string (BS) causes the animation to glitch out and freeze, sometimes making the slider pop out of place, and the only way to "reset" the bug is to double-click the image to auto-resize it.
Attached: two videos showing the problem. It's a simple `px.imshow(image, animation_frame=0, binary_string=True)`.
TL;DR: BS images load faster but freeze; there are no freezes when `binary_string=False`.
https://github.com/plotly/plotly.py/assets/52300841/4dc37d5a-f7d7-476e-8f1c-c37e4d3356c7
https://github.com/plotly/plotly.py/assets/52300841/2b9c5e99-399d-4ea9-80c8-c81bea42c771
| open | 2024-02-02T04:46:42Z | 2024-08-13T13:08:01Z | https://github.com/plotly/plotly.py/issues/4501 | [
"bug",
"P3"
] | hhdtan | 3 |
ghtmtt/DataPlotly | plotly | 292 | IndexError when removing my last plot in a layout | **Describe the bug**
* Setting up a single plot in a layout
* Removing the single plot from layout
* Then Python error
`index = 0`
Index Error L203
get_polygon_filter [plot_layout_item.py:203]
```
def get_polygon_filter(self, index=0):
    if self.linked_map and self.plot_settings[index].properties.get('layout_filter_by_map', False):
``` | closed | 2022-06-16T14:32:52Z | 2022-08-21T14:42:30Z | https://github.com/ghtmtt/DataPlotly/issues/292 | [
"bug"
] | Gustry | 0 |
streamlit/streamlit | deep-learning | 10,872 | Infer page title and favicon from `st.title` instead of requiring `st.set_page_config` | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Today, to change the page title/favicon shown in the browser tab, you need to call `st.set_page_config`. What if we instead inferred that title and favicon from the first call to `st.title`, but still let the dev overwrite it with `st.set_page_config` if they want to have a different title in the app and in the browser tab.
### Why?
- It's very annoying to always need to call `st.set_page_config`. Especially since in most cases you want the same title as shown in the app.
- If devs don't call `st.set_page_config`, the browser tab will just have a default title, which doesn't look great.
### How?
- Set the page title to the first use of `st.title` on the page.
- Ideally, we can also extract an icon from the `st.title` call, e.g. if it's at the beginning or end of the string.
- This should also work if you show the title with markdown, e.g. `st.markdown("# The title")`.
- I think we should not use `st.sidebar.title` for this, since it might be something completely unrelated. But might be worth looking at a few apps.
- If `st.set_page_config` is set, it should always overwrite the title inferred from `st.title`.
### Additional Context
_No response_ | open | 2025-03-21T19:39:38Z | 2025-03-21T19:40:12Z | https://github.com/streamlit/streamlit/issues/10872 | [
"type:enhancement",
"feature:st.set_page_config",
"feature:st.title"
] | jrieke | 1 |
mckinsey/vizro | pydantic | 541 | Mobile version layout bugs | ### Description
Here are some configurations where the layout is not working as expected:
1. Table in one container, graph in second
<img width="294" alt="image" src="https://github.com/mckinsey/vizro/assets/35569332/f4a6c52f-72d0-4392-b678-340486a39cf5">
2. Table in one container, graph in second in horizontal orientation
<img width="843" alt="image" src="https://github.com/mckinsey/vizro/assets/35569332/ecde59a2-1c55-42bf-a40f-0ef1d117a9fa">
3. Two graphs in horizontal orientation plus card
<img width="841" alt="image" src="https://github.com/mckinsey/vizro/assets/35569332/1e165435-3a88-4714-9659-f873d7d634be">
4. AgGrid title overlaps second container tab
<img width="295" alt="image" src="https://github.com/mckinsey/vizro/assets/35569332/f694f78f-88a2-42f6-ac78-d738a0facb1e">
5. AgGrid is unreachable with another graph in horizontal orientation
<img width="840" alt="image" src="https://github.com/mckinsey/vizro/assets/35569332/f8f94bf8-8d96-4d9c-90ab-978aa6f8df0d">
6. No graph displayed with two tables on the same page
<img width="290" alt="image" src="https://github.com/mckinsey/vizro/assets/35569332/6a03b35f-171e-4597-970b-fec933359b1c">
7. No graph displayed with lots of components in one container
<img width="292" alt="image" src="https://github.com/mckinsey/vizro/assets/35569332/8d41f06a-ca49-453c-ae9e-41250070ac5a">
### Expected behavior
_No response_
### Which package?
vizro
### Package version
0.1.17
### Python version
3.9
### OS
Mac, Linux
### How to Reproduce
Just run tests examples from `vizro-qa`
### Output
_No response_
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | open | 2024-06-21T13:06:28Z | 2024-06-27T08:28:34Z | https://github.com/mckinsey/vizro/issues/541 | [
"Bug Report :bug:"
] | l0uden | 0 |
gradio-app/gradio | python | 10,621 | @gr.render causes the button to fail to trigger a click event in some scenarios which simliar to Multipage Apps | ### Describe the bug
Background: I create one page that includes multiple tabs, and every tab contains a Blocks which contains an @gr.render flag and its contents.
This is similar to Multipage Apps, but with no route flag; I just use Blocks.render in each tab to avoid having too much code in the same file.
Example: the main page includes page_a.py in tab 1, page_b.py in tab 2, and page_c.py in tab 3.
The business code is in the @gr.render blocks.
My question is:
All components on the 3 pages display, but only one button on one of the three pages can trigger its click event, and which page's button works is random.
When I refresh the browser, the working page may change to another one.
In the screenshot cases, only Button b triggers its click event normally; Button a and Button c can't trigger click events.
If I remove the @gr.render flags, all buttons trigger click events normally.
Please help me, thanks.
Code is in the Reproduction part.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
main_page.py

```python
import gradio as gr

import page_a
import page_b
import page_c

with gr.Blocks() as main:
    gr.Markdown("page main")
    with gr.Tabs():
        with gr.Tab("tabs"):
            with gr.TabItem("tab1"):
                page_a.pa.render()
            with gr.TabItem("tab2"):
                page_b.pb.render()
            with gr.TabItem("tab3"):
                page_c.pc.render()

if __name__ == "__main__":
    main.launch()
```

page_a.py

```python
import gradio as gr

def click(input):
    return "click " + input

with gr.Blocks() as pa:
    @gr.render(trigger_mode='always_last')
    def refresh():
        gr.Markdown("page a")
        input = gr.Textbox("input a", label="input a")
        btn = gr.Button("buttion a")
        btn.click(click, inputs=input, outputs=input)
```

page_b.py

```python
import gradio as gr

def click(input):
    return "click " + input

with gr.Blocks() as pb:
    @gr.render(trigger_mode='always_last')
    def refresh():
        gr.Markdown("page b")
        input = gr.Textbox("input b", label="input b")
        btn = gr.Button("buttion b")
        btn.click(click, inputs=input, outputs=input)
```

page_c.py

```python
import gradio as gr

def click(input):
    return "click " + input

with gr.Blocks() as pc:
    @gr.render(trigger_mode='always_last')
    def refresh():
        gr.Markdown("page c")
        input = gr.Textbox("input c", label="input c")
        btn = gr.Button("buttion c")
        btn.click(click, inputs=input, outputs=input)
```
### Screenshot
<img width="1337" alt="Image" src="https://github.com/user-attachments/assets/d55c03ba-9844-4879-a9d7-28c72d5dc372" />
<img width="1069" alt="Image" src="https://github.com/user-attachments/assets/18cc8dc0-bc0d-4d92-ad32-fbe4953d6cdc" />
<img width="1327" alt="Image" src="https://github.com/user-attachments/assets/fcb08a23-a373-4733-ad37-6f665472bbce" />
### Logs
```shell
```
### System Info
```shell
Gradio version :5.16.1
Browser:chrome 133.0.6943.55 (arm64)
IDE : PyCharm 2023.3.2 (Professional Edition)
Hardware:MacBook Pro (13-inch, M1, 2020)
```
### Severity
Blocking usage of gradio | open | 2025-02-18T11:26:30Z | 2025-02-18T11:26:30Z | https://github.com/gradio-app/gradio/issues/10621 | [
"bug"
] | suxiaofei | 0 |
pydantic/pydantic-core | pydantic | 947 | Type stubs: Incorrect return type on validate_assignment | Is it possible that the below type stub is incorrect?
https://github.com/pydantic/pydantic-core/blob/6a139753af85fc7bb6b34f26c1328994506f94ee/python/pydantic_core/_pydantic_core.pyi#L174-L183
In the below example the returned value seems to be `a` of type `A`.
```py
from pydantic import BaseModel
class A(BaseModel, validate_assignment=True):
i: int
a = A(i=1)
x = a.__pydantic_validator__.validate_assignment(a, "i", 3)
print(type(x)) # <class '__main__.A'>
print(x is a) # True
```
| open | 2023-09-05T21:07:36Z | 2024-08-28T17:42:45Z | https://github.com/pydantic/pydantic-core/issues/947 | [
"bug"
] | hassec | 5 |
ydataai/ydata-profiling | jupyter | 916 | Interactions not work as expected when i change advanced usage configs | When I set the interactions config after the "ProfileReport()" call, the interactions part of the report seems buggy, like this:
I just set the option for which columns are visualized with scatter plots; I chose just 3 columns (['HEIGHT','LENGHT','AREA']) and got this:

Also there is a **bug** with choosing columns via the `profile.config.interactions.targets = ['column_a','column_b','column_c']` option.
When I write the column names in **uppercase**, the profiling process works without problems; otherwise (lowercase column names etc.) it does not work and the process gets stuck somewhere.
**To Reproduce**
Data:
Page Block dataset used for this experiment. You can find dataset from [here](https://archive.ics.uci.edu/ml/datasets/Page+Blocks+Classification)
_Code:_ Preferably, use this code format:
```python
import pandas as pd
from pandas_profiling import ProfileReport

df = pd.read_csv(r"pageblock.csv")

profile = ProfileReport(
    df, title="pageblock", html={"style": {"full_width": True}},
    sort=None
)
profile.config.interactions.targets = ['HEIGHT', 'LENGHT', 'AREA']
profile.to_notebook_iframe()
```
**Version information:**
_Python version_: 3.7.5
_Environment_: Jupyter Notebook (local)
_`pip`_:
absl-py==0.13.0
adal==1.2.6
alembic==1.4.1
altair==4.1.0
altgraph==0.17.2
amqp==2.6.1
analytics-python==1.4.0
apispec==3.3.2
appdirs==1.4.4
argcomplete==1.12.3
astroid==2.3.1
astunparse==1.6.3
atomicwrites==1.4.0
attrs==20.3.0
autopep8==1.5
azure-common==1.1.26
azure-graphrbac==0.61.1
azure-mgmt-authorization==0.61.0
azure-mgmt-containerregistry==2.8.0
azure-mgmt-keyvault==2.2.0
azure-mgmt-resource==12.0.0
azure-mgmt-storage==11.2.0
azureml-core==1.23.0
Babel==2.8.0
backcall==0.1.0
backoff==1.10.0
backports.tempfile==1.0
backports.weakref==1.0.post1
bcrypt==3.2.0
beautifulsoup4==4.9.0
billiard==3.6.3.0
black==21.9b0
bleach==3.1.0
bokeh==2.3.1
Boruta==0.3
boto==2.49.0
boto3==1.12.9
botocore==1.15.9
Bottleneck==1.3.2
Brotli==1.0.9
bs4==0.0.1
bson==0.5.9
cached-property==1.5.2
cachelib==0.1.1
cachetools==4.2.1
celery==4.4.7
certifi==2019.9.11
cffi==1.14.3
chardet==3.0.4
chart-studio==1.1.0
clang==5.0
click==7.1.2
cloudpickle==1.6.0
colorama==0.4.1
colorcet==2.0.6
colorlover==0.3.0
colour==0.1.5
confuse==1.4.0
contextlib2==0.6.0.post1
croniter==0.3.34
cryptography==3.2
cssselect==1.1.0
cufflinks==0.17.3
cx-Oracle==7.2.3
cycler==0.11.0
d6tcollect==1.0.5
d6tstack==0.2.0
dash==1.16.1
dash-core-components==1.12.1
dash-html-components==1.1.1
dash-renderer==1.8.1
dash-table==4.10.1
databricks-cli==0.14.2
dataclasses==0.6
debugpy==1.5.1
decorator==4.4.0
defusedxml==0.6.0
dnspython==2.0.0
docker==4.4.4
docopt==0.6.2
docutils==0.15.2
dtreeviz==1.3
email-validator==1.1.1
entrypoints==0.3
et-xmlfile==1.0.1
exitstatus==1.4.0
extratools==0.8.2.1
fake-useragent==0.1.11
feature-selector===N-A
ffmpy==0.3.0
findspark==1.4.2
Flask==1.1.1
Flask-AppBuilder==3.0.1
Flask-Babel==1.0.0
Flask-CacheBuster==1.0.0
Flask-Caching==1.9.0
Flask-Compress==1.5.0
Flask-Cors==3.0.10
Flask-JWT-Extended==3.24.1
Flask-Login==0.4.1
Flask-Migrate==2.5.3
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.5.1
flask-talisman==0.7.0
Flask-WTF==0.14.3
flatbuffers==1.12
fonttools==4.28.2
future==0.18.2
gast==0.4.0
gensim==3.8.1
geographiclib==1.50
geopy==2.0.0
gitdb==4.0.5
GitPython==3.1.14
google-api-core==1.26.0
google-auth==1.27.0
google-auth-oauthlib==0.4.6
google-cloud-core==1.6.0
google-cloud-storage==1.36.1
google-crc32c==1.1.2
google-pasta==0.2.0
google-resumable-media==1.2.0
googleapis-common-protos==1.53.0
gradio==2.3.7
graphviz==0.17
great-expectations==0.13.19
grpcio==1.40.0
WARNING: Could not generate requirement for distribution -atplotlib 3.4.1 (c:\users\enes_\appdata\roaming\python\python37\site-packages): Parse error at "'-atplotl'": Expected W:(abcd...)
WARNING: Could not generate requirement for distribution -ywin32 300 (c:\users\enes_\appdata\local\programs\python\python37\lib\site-packages): Parse error at "'-ywin32='": Expected W:(abcd...)
WARNING: Could not generate requirement for distribution -illow 7.1.1 (c:\users\enes_\appdata\local\programs\python\python37\lib\site-packages): Parse error at "'-illow=='": Expected W:(abcd...)
gunicorn==20.0.4
h5py==3.1.0
heatmapz==0.0.4
htmlmin==0.1.12
humanize==2.6.0
idna==2.8
ImageHash==4.2.0
imageio==2.9.0
imbalanced-learn==0.5.0
imblearn==0.0
imgkit==1.2.2
importlib-metadata==1.7.0
imutils==0.5.4
iniconfig==1.1.1
instaloader==4.7.1
ipykernel==6.4.2
ipython==7.29.0
ipython-genutils==0.2.0
ipywidgets==7.5.1
isodate==0.6.0
isort==4.3.21
itsdangerous==1.1.0
jdcal==1.4.1
jedi==0.18.0
jeepney==0.6.0
Jinja2==2.11.2
jmespath==0.9.5
joblib==1.0.1
json5==0.8.5
jsonpatch==1.32
jsonpickle==2.0.0
jsonpointer==2.1
jsonschema==3.0.2
jupyter==1.0.0
jupyter-client==6.1.11
jupyter-console==6.2.0
jupyter-contrib-core==0.3.3
jupyter-contrib-nbextensions==0.5.1
jupyter-core==4.7.0
jupyter-highlight-selected-word==0.2.0
jupyter-latex-envs==1.4.6
jupyter-nbextensions-configurator==0.4.1
jupyterlab==1.1.3
jupyterlab-server==1.0.6
jupyterthemes==0.20.0
karateclub==1.0.11
keras==2.6.0
Keras-Preprocessing==1.1.2
keras-tuner==1.0.4
kiwisolver==1.3.2
kombu==4.6.11
kt-legacy==1.0.4
kubernetes==12.0.1
lazy-object-proxy==1.4.2
lesscpy==0.14.0
lightgbm==2.2.3
llvmlite==0.35.0
lxml==4.5.0
Mako==1.1.3
Markdown==3.2.2
markdown2==2.4.1
MarkupSafe==2.0.1
marshmallow==3.8.0
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.23.1
matplotlib==3.5.0
matplotlib-inline==0.1.3
mccabe==0.6.1
MechanicalSoup==0.12.0
metakernel==0.27.5
missingno==0.4.2
mistune==0.8.4
mleap==0.16.1
mlxtend==0.17.3
monotonic==1.6
msgpack==1.0.0
msrest==0.6.21
msrestazure==0.6.4
multimethod==1.6
mypy-extensions==0.4.3
natsort==7.0.1
nbconvert==5.6.0
nbformat==4.4.0
ndg-httpsclient==0.5.1
networkx==2.6.3
nltk==3.6.7
notebook==6.0.1
numba==0.52.0
numpy==1.21.4
oauthlib==3.1.0
opencv-python==4.5.5.62
openpyxl==3.0.6
opt-einsum==3.3.0
packaging==20.9
pandas==1.1.5
pandas-profiling==3.1.0
pandocfilters==1.4.2
param==1.10.1
paramiko==2.7.2
parse==1.15.0
parsedatetime==2.6
parso==0.8.2
pathlib2==2.3.5
pathspec==0.9.0
patsy==0.5.1
pefile==2021.9.3
pexpect==4.8.0
phik==0.11.2
pickleshare==0.7.5
Pillow==8.4.0
pipreqs==0.4.10
platformdirs==2.4.0
plotly==4.14.3
pluggy==0.13.1
ply==3.11
polyline==1.4.0
prefixspan==0.5.2
prison==0.1.3
prometheus-client==0.7.1
prometheus-flask-exporter==0.18.1
prompt-toolkit==2.0.9
protobuf==3.15.4
psutil==5.7.0
psycopg2==2.8.6
ptyprocess==0.6.0
py==1.10.0
pyarrow==3.0.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycodestyle==2.5.0
pycparser==2.20
pycryptodome==3.11.0
pyct==0.4.8
pydantic==1.8.2
pydot==1.4.2
pydub==0.25.1
pyee==7.0.2
Pygments==2.4.2
pygogo==0.13.2
PyGSP==0.5.1
pyinstaller==4.5.1
pyinstaller-hooks-contrib==2021.3
PyJWT==1.7.1
pylint==2.4.2
pymssql==2.1.5
PyNaCl==1.4.0
pyodbc==4.0.27
pyOpenSSL==20.0.1
pyparsing==3.0.4
pyppeteer==0.2.2
pyquery==1.4.1
pyrsistent==0.15.4
pysftp==0.2.9
PySocks==1.7.1
pytest==6.2.4
python-dateutil==2.8.2
python-dotenv==0.14.0
python-editor==1.0.4
python-louvain==0.13
python3-openid==3.2.0
pytz==2021.3
PyWavelets==1.1.1
pywin32==227
pywin32-ctypes==0.2.0
pywinpty==0.5.5
PyYAML==5.3
pyzmq==18.1.0
qtconsole==5.0.1
QtPy==1.9.0
querystring-parser==1.2.4
regex==2021.10.8
requests==2.25.1
requests-html==0.10.0
requests-oauthlib==1.3.0
retrying==1.3.3
rsa==4.7.2
ruamel.yaml==0.16.12
ruamel.yaml.clib==0.2.2
s3transfer==0.3.3
scikit-image==0.18.1
scikit-learn==1.0.2
scikit-plot==0.3.7
scipy==1.7.1
seaborn==0.11.1
SecretStorage==3.3.1
Send2Trash==1.5.0
setuptools-scm==6.3.2
shap==0.36.0
Shapely==1.7.1
six==1.16.0
sklearn==0.0
slicer==0.0.7
smart-open==1.9.0
smmap==3.0.5
sortedcontainers==2.3.0
soupsieve==2.0
spylon==0.3.0
spylon-kernel==0.4.1
SQLAlchemy==1.3.19
SQLAlchemy-Utils==0.36.8
sqlparse==0.4.1
statsmodels==0.9.0
tabulate==0.8.9
tangled-up-in-unicode==0.1.0
tensorboard==2.6.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
tensorflow==2.6.0
tensorflow-estimator==2.6.0
termcolor==1.1.0
terminado==0.8.2
testpath==0.4.2
threadpoolctl==2.1.0
tifffile==2021.4.8
toml==0.10.2
tomli==1.2.1
toolz==0.11.1
torch==1.10.1
torchaudio==0.10.1
torchvision==0.11.2
tornado==6.0.3
tqdm==4.60.0
traitlets==4.3.2
tweepy==3.8.0
twitter-scraper==0.4.2
typed-ast==1.4.3
typing-extensions==3.10.0.2
tzlocal==2.1
urllib3==1.25.9
vine==1.3.0
virtualenv==16.7.9
visions==0.7.4
w3lib==1.22.0
waitress==1.4.4
wcwidth==0.1.7
webencodings==0.5.1
websocket-client==0.58.0
websockets==8.1
Werkzeug==1.0.0
widgetsnbextension==3.5.1
wrapt==1.12.1
WTForms==2.3.3
xgboost==1.1.1
xlrd==1.2.0
XlsxWriter==1.2.2
yarg==0.1.9
yellowbrick==0.7
zipp==3.6.0 | open | 2022-02-03T13:41:20Z | 2022-05-16T18:05:04Z | https://github.com/ydataai/ydata-profiling/issues/916 | [
"bug 🐛"
] | enesMesut | 1 |
ultralytics/yolov5 | machine-learning | 12,946 | RuntimeError: The size of tensor a (24) must match the size of tensor b (20) at non-singleton dimension 2 | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi, I send multiple images concurrently for detection. I use torch.hub.load to load the same model and call it from a thread function for detection. However, I get the error "RuntimeError: The size of tensor a (24) must match the size of tensor b (20) at non-singleton dimension 2." I have used the latest yolov5 master version but the issue still exists. May I know how to solve this issue? Thanks
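For now I work around it by serializing inference with a lock, since several threads calling one shared model object at the same time can clash in its internal buffers. A minimal stdlib sketch (`safe_detect` and the stand-in model are illustrative, not YOLOv5 API):

```python
import threading

# One shared model guarded by a lock; per-thread model copies avoid the
# lock entirely at the cost of memory.
model_lock = threading.Lock()

def safe_detect(model, img):
    with model_lock:  # serialize inference on the shared model
        return model(img)

# demo with a stand-in "model" (a plain function):
results = []
threads = [
    threading.Thread(target=lambda i=i: results.append(safe_detect(len, [0] * i)))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 2, 3]
```

Loading a separate model instance per thread also avoids the clash, at the cost of extra memory.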
### Additional
_No response_ | closed | 2024-04-21T08:21:34Z | 2024-06-07T00:22:20Z | https://github.com/ultralytics/yolov5/issues/12946 | [
"question",
"Stale"
] | KnightInsight | 6 |
agronholm/anyio | asyncio | 99 | Possible to run an async task in context of synchronious function? | I'm not sure whether I'm overlooking something obvious, but is it possible to run a single asynchronous task in the context of a non-asynchronous function (as is possible with asyncio.create_task())?
The following is code that works with asyncio; is there an equivalent for anyio?
``` python
async def main_func():
    # some stuff...

    def event_happened(source, event_type, event_details):
        print("event_type: {}".format(event_type))

        async def wrap():
            childs = await source.get_values()
            print(childs)

        asyncio.create_task(wrap())

    pub.subscribe(event_happened, "topic-name")
```
Context is a pubsub library that doesn't support coroutines, so I'm forced to use synchronous handler functions.
I can't use anyio.run, since that requires no event loop currently running, whereas in my case there's already one. Happy for any suggestions about how to organize my code differently too, if this is not possible with anyio directly...
| closed | 2020-02-04T23:40:40Z | 2020-02-05T11:41:07Z | https://github.com/agronholm/anyio/issues/99 | [] | makkus | 7 |
huggingface/datasets | machine-learning | 7,080 | Generating train split takes a long time | ### Describe the bug
Loading a simple webdataset takes ~45 minutes.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M")
```
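For anyone else hitting this: streaming mode skips split generation entirely. A sketch (the `load_streaming` helper name is mine, not a datasets API):

```python
def load_streaming(name, config_name=None):
    """Hypothetical helper: stream the dataset instead of generating splits."""
    from datasets import load_dataset  # deferred so this sketch stays cheap
    return load_dataset(name, config_name, streaming=True)

# usage (downloads over the network, so not run here):
# ds = load_streaming("PixArt-alpha/SAM-LLaVA-Captions10M")
# first = next(iter(ds["train"]))
```

With `streaming=True` you get an `IterableDataset` that yields samples on the fly instead of materializing Arrow files first.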
### Expected behavior
The dataset should load immediately as it does when loaded through a normal indexed WebDataset loader. Generating splits should be optional and there should be a message showing how to disable it.
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | open | 2024-07-29T01:42:43Z | 2024-10-02T15:31:22Z | https://github.com/huggingface/datasets/issues/7080 | [] | alexanderswerdlow | 2 |
huggingface/datasets | deep-learning | 7,048 | ImportError: numpy.core.multiarray when using `filter` | ### Describe the bug
I can't apply the filter method on my dataset.
### Steps to reproduce the bug
The following snippet generates a bug:
```python
from datasets import load_dataset
ami = load_dataset('kamilakesbi/ami', 'ihm')
ami['train'].filter(
lambda example: example["file_name"] == 'EN2001a'
)
```
I get the following error:
`ImportError: numpy.core.multiarray failed to import (auto-generated because you didn't call 'numpy.import_array()' after cimporting numpy; use '<void>numpy._import_array' to disable if you are certain you don't need it).`
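This particular ImportError usually points at a binary-incompatible NumPy (for example a NumPy 2.x install alongside extensions compiled against 1.x). A hedged workaround, not an official fix:

```shell
pip install --upgrade pyarrow datasets
# or pin NumPy below 2 if upgrading isn't an option:
pip install "numpy<2"
```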
### Expected behavior
It should work properly!
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | closed | 2024-07-15T11:21:04Z | 2024-07-16T10:11:25Z | https://github.com/huggingface/datasets/issues/7048 | [] | kamilakesbi | 4 |
aminalaee/sqladmin | asyncio | 845 | Icon for category | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
I want to add an icon for a category
### Describe the solution you would like.
We can add `category_icon` to `BaseView`
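Illustratively (a proposal sketch only; `category_icon` does not exist in sqladmin today, and the attribute name is an assumption):

```python
# proposal sketch, not a working example
class UserAdmin(ModelView, model=User):
    category = "Accounts"
    category_icon = "fa-solid fa-users"  # proposed attribute (hypothetical)
```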
### Describe alternatives you considered
_No response_
### Additional context
_No response_ | closed | 2024-10-28T13:00:23Z | 2024-11-04T10:53:31Z | https://github.com/aminalaee/sqladmin/issues/845 | [] | sheldygg | 3 |
jonaswinkler/paperless-ng | django | 339 | Where is the difference or advantages/disadvantages to the Tika Docker-Compose? | First of all, thanks for the cool project. Is there a list where the different Docker variants are explained. Where is the difference or advantages/disadvantages to the Tika version? Postgres and Sqlite are clear to me. | closed | 2021-01-14T08:57:31Z | 2021-01-14T12:43:08Z | https://github.com/jonaswinkler/paperless-ng/issues/339 | [
"documentation"
] | unknownFalleN | 1 |
ansible/ansible | python | 84,753 | ansible.builtin.b64decode undocumented encoding attribute | ### Summary
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/b64decode_filter.html
Does not mention the "encoding" attribute, which can be used, for example, to decode Unicode text files from Windows with the code `b64decode(encoding='utf-16-le')`.
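For reference, what the filter does with that parameter, reproduced in plain stdlib Python:

```python
import base64

# A UTF-16-LE payload (typical for text files written on Windows):
encoded = base64.b64encode("hello".encode("utf-16-le")).decode("ascii")

# What {{ encoded | b64decode(encoding='utf-16-le') }} does, in plain Python:
decoded = base64.b64decode(encoded).decode("utf-16-le")
print(decoded)  # hello
```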
### Issue Type
Documentation Report
### Component Name
ansible.builtin.b64decode
### Ansible Version
```console
$ ansible --version
Not relevant
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
Not relevant
```
### OS / Environment
Not relevant
### Additional Information
I was able to determine that it exists from issue #67478.
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct | closed | 2025-02-25T16:15:02Z | 2025-03-14T13:00:02Z | https://github.com/ansible/ansible/issues/84753 | [
"has_pr"
] | juresaht2 | 2 |
sunscrapers/djoser | rest-api | 618 | User inactive or deleted | Registering a new user in the database (via curl or Postman) actually works and a token is issued, but when I try retrieving the user instance I get a
>> `User inactive or deleted.` response. Can this be sorted out? The funny thing is that when I create the user directly in the database, it works. | open | 2021-06-14T11:27:26Z | 2021-06-14T11:27:26Z | https://github.com/sunscrapers/djoser/issues/618 | [] | peter-evance | 0 |
davidsandberg/facenet | computer-vision | 266 | "sess.run" the certain layer based on the pre-trained model (aim at feature extraction) | Thanks a lot for your excellent work and code!
I've success running the code and I can extract a 128-dim feature via some code like this
> embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0")
> emb = sess.run(embeddings, feed_dict=feed_dict)
Now, I would like to get a feature whose dimension is bigger than 128; however, I cannot find detailed info about the network structure or the name of the layer I need. Could you help me with it?
Best wishes! | closed | 2017-05-07T16:21:52Z | 2020-08-13T19:39:46Z | https://github.com/davidsandberg/facenet/issues/266 | [] | AlbertDu | 11 |
JaidedAI/EasyOCR | deep-learning | 1,264 | original dataset | Hi, I'm wondering how much data was used for the English model and the Korean model. Is there an original dataset that I can download? | open | 2024-06-06T07:57:48Z | 2024-06-06T07:57:48Z | https://github.com/JaidedAI/EasyOCR/issues/1264 | [] | wkdwldnd7487 | 0 |
jupyter-incubator/sparkmagic | jupyter | 634 | Way to move data from spark to local? | Would it be possible to move data (like a pandas dataframe or pyspark dataframe) from the spark cluster to the local env? i.e. similar to `%%send_to_spark`, except in the opposite direction? | closed | 2020-02-20T23:27:03Z | 2020-02-27T18:51:35Z | https://github.com/jupyter-incubator/sparkmagic/issues/634 | [] | sid-kap | 9 |
huggingface/transformers | nlp | 36,068 | cannot import name 'is_timm_config_dict' from 'transformers.utils.generic' | ### System Info
Transformers version 4.48.2
platform kaggle L4*4 or P40
timm version 1.0.12 or1.0.14 or None
Python version 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
from vllm.platforms import current_platform
then get
```
ImportError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
1792 missing_backends = self._object_missing_backend[name]
-> 1793
1794 class Placeholder(metaclass=DummyObject):
/usr/lib/python3.10/importlib/__init__.py in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
127
/usr/lib/python3.10/importlib/_bootstrap.py in _gcd_import(name, package, level)
/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load(name, import_)
/usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)
/usr/lib/python3.10/importlib/_bootstrap.py in _load_unlocked(spec)
/usr/lib/python3.10/importlib/_bootstrap_external.py in exec_module(self, module)
/usr/lib/python3.10/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)
/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py in <module>
42 )
---> 43 from .utils.generic import is_timm_config_dict
44
ImportError: cannot import name 'is_timm_config_dict' from 'transformers.utils.generic' (/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py)
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
<ipython-input-13-46a92ab71489> in <cell line: 1>()
----> 1 from vllm.platforms import current_platform
2 device_name = current_platform.get_device_name().lower()
3 print(device_name)
/usr/local/lib/python3.10/dist-packages/vllm/__init__.py in <module>
4 import torch
5
----> 6 from vllm.engine.arg_utils import AsyncEngineArgs, EngineArgs
7 from vllm.engine.async_llm_engine import AsyncLLMEngine
8 from vllm.engine.llm_engine import LLMEngine
/usr/local/lib/python3.10/dist-packages/vllm/engine/arg_utils.py in <module>
9
10 import vllm.envs as envs
---> 11 from vllm.config import (CacheConfig, CompilationConfig, ConfigFormat,
12 DecodingConfig, DeviceConfig, HfOverrides,
13 KVTransferConfig, LoadConfig, LoadFormat, LoRAConfig,
/usr/local/lib/python3.10/dist-packages/vllm/config.py in <module>
15 import torch
16 from pydantic import BaseModel, Field, PrivateAttr
---> 17 from transformers import PretrainedConfig
18
19 import vllm.envs as envs
/usr/lib/python3.10/importlib/_bootstrap.py in _handle_fromlist(module, fromlist, import_, recursive)
/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in __getattr__(self, name)
1779 def __dir__(self):
1780 result = super().__dir__()
-> 1781 # The elements of self.__all__ that are submodules may or may not be in the dir already, depending on whether
1782 # they have been accessed or not. So we only add the elements of self.__all__ that are not already in the dir.
1783 for attr in self.__all__:
/usr/local/lib/python3.10/dist-packages/transformers/utils/import_utils.py in _get_module(self, module_name)
1793
1794 class Placeholder(metaclass=DummyObject):
-> 1795 _backends = missing_backends
1796
1797 def __init__(self, *args, **kwargs):
RuntimeError: Failed to import transformers.configuration_utils because of the following error (look up to see its traceback):
cannot import name 'is_timm_config_dict' from 'transformers.utils.generic' (/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py)
```
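For what it's worth, `configuration_utils` importing a helper that its own `utils.generic` lacks usually points at two transformers versions mixed in the same `site-packages`. A hedged cleanup to try (not an official fix):

```shell
pip uninstall -y transformers
pip install "transformers==4.48.2"
```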
### Expected behavior
No error
@zucchini-nlp | open | 2025-02-06T10:43:47Z | 2025-03-21T11:39:38Z | https://github.com/huggingface/transformers/issues/36068 | [
"bug"
] | xiezhipeng-git | 19 |
graphdeco-inria/gaussian-splatting | computer-vision | 202 | convert.py missing glew32.dll? | I'm sure this is not a bug, but some kind of dependency issue on my end. Hopefully it's ok to post this question here, presuming 'convert.py' was written by the authors.
I've manage to generate a sparse point cloud in colmap using footage taken with a 360 camera. The camera model used was 'opencv_fisheye' since that was the only model that could successful tackle this.
Obviously I now need to convert to (simple_)pinhole, however when using the conver.py script it gives a "**_System Error: ... glew32.dll was not found. Reinstalling the program may fix this problem._**"
A pretty basic openGL dependency, so could this be a simple matter of fixing some environment variables? If so, which?
"Plan B" was to use Colmap GUI (since that has glew32.dll in it's 3rd party subfolder) to undistort the images. I remember being able to manually overwrite the camera model, resave it and then trick train.py to run (using previously undistorted images and "fake" simple_pinhole cameras), but that doesn't seem to work in this particular case (maybe because there is only 1 camera shared between 1000s of images). | closed | 2023-09-15T18:37:43Z | 2023-09-15T18:45:13Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/202 | [] | divljikunic | 1 |
serengil/deepface | machine-learning | 745 | Unsupported depth of input image:'VDepth::contains(depth)'where 'depth' is 6 (CV_64F) |
I'm using the video streaming method and get the following error. From what I've read, it suggests there is a problem with the image; can I handle it somewhere, or is there another method I can call? Any ideas are greatly appreciated ^_^ Best wishes
```bash
File "C:\Users\山河已无恙\AppData\Roaming\Python\Python310\site-packages\deepface\detectors\FaceDetector.py", line 71, in detect_faces
obj = detect_face_fn(face_detector, img, align)
File "C:\Users\山河已无恙\AppData\Roaming\Python\Python310\site-packages\deepface\detectors\MtcnnWrapper.py", line 19, in detect_face
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # mtcnn expects RGB but OpenCV read BGR
cv2.error: OpenCV(4.7.0) d:\a\opencv-python\opencv-python\opencv\modules\imgproc\src\color.simd_helpers.hpp:94: error: (-2:Unspecified error) in function '__cdecl cv::impl::`anonymous-namespace'::CvtHelper<struct cv::impl::`anonymous namespace'::Set<3,4,-1>,struct cv::impl::A0x981fb336::Set<3,4,-1>,struct cv::impl::A0x981fb336::Set<0,2,5>,2>::CvtHelper(const class cv::_InputArray &,const class cv::_OutputArray &,int)'
> Unsupported depth of input image:
> 'VDepth::contains(depth)'
> where
> 'depth' is 6 (CV_64F)
```
```python
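The `'depth' is 6 (CV_64F)` part says the frame reaching `cv2.cvtColor` is `float64`, which OpenCV's color conversion rejects. Converting the frame to 8-bit first avoids the error; a sketch of the conversion with NumPy, assuming values are in `[0, 1]`:

```python
import numpy as np

# A float64 frame in [0, 1], the kind cv2.cvtColor rejects as CV_64F:
frame = np.random.rand(4, 4, 3)

# Scale and convert to 8-bit before passing it to cv2 / the detector:
frame_u8 = np.clip(frame * 255.0, 0, 255).astype(np.uint8)
print(frame_u8.dtype)  # uint8
```

If the frames already hold values in `[0, 255]`, drop the `* 255.0` scaling and only cast the dtype.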
DeepFace.stream( db_path="database",
model_name="Facenet",
detector_backend="mtcnn",
source="rtsp://admin:hik12345@192.168.1.60:554/Streaming/Channels/101?transportmode=multicast")
``` | closed | 2023-05-06T06:21:04Z | 2023-05-08T09:22:15Z | https://github.com/serengil/deepface/issues/745 | [
"dependencies"
] | LIRUILONGS | 2 |
mljar/mljar-supervised | scikit-learn | 439 | Error running the code in google collab | Here's the code
```python
import pandas as pd
from sklearn.model_selection import train_test_split

!pip install AutoML
!pip install mljar-supervised

from supervised.automl import AutoML

df = pd.read_csv(
    "https://raw.githubusercontent.com/Rovky123/Rovky123/main/1k40.csv",
    skipinitialspace=True,
)

X_train, X_test, y_train, y_test = train_test_split(
    df[df.columns[:-1]], df["income"], test_size=0.25
)

automl = AutoML()
automl.fit(X_train, y_train)
predictions = automl.predict(X_test)
```
I've found some errors loading AutoML in Google Colab.
I want to solve a linear problem (1, 2, 3, 4 and so on) and a non-linear problem (1, 2, 1, 2, 1, 2, 1, 2).
I think that's the basic problem.
Thanks in advance. | open | 2021-07-20T22:46:53Z | 2021-07-21T07:19:43Z | https://github.com/mljar/mljar-supervised/issues/439 | [
"docs"
] | Rovky123 | 1 |
wagtail/wagtail | django | 11,996 | Rich text external-to-internal link converter not working when using a non-root path for `wagtail_serve` | ### Issue Summary
Entering an Internal link in the External Link for Rich Text Editor isn't converting to an internal link.
### Steps to Reproduce
1. in settings.py enable Link Conversion by setting
- WAGTAILADMIN_EXTERNAL_LINK_CONVERSION = "confirm"
or
- WAGTAILADMIN_EXTERNAL_LINK_CONVERSION = "all"
2. Open a page in wagtail to edit
3. Add a paragraph tag, link some text using the link builder

I have tried a relative link and a fully qualified URL in my local, feature, and production environments, but the link isn't converting.
- /the-ascent/credit-cards/best-hotel-credit-cards/
**To get this functionality to work locally,**
I had to hack the **.venv/lib/python3.11/site-packages/wagtail/admin/views/chooser.py** file to remove the first instance of things in the components when doing a page match.
``` python
components = [
component for component in url.split("/") if component
]
if "localhost" in settings.WAGTAILADMIN_BASE_URL:
components = components[1:]
route = Site.objects.get(pk=pk).root_page.specific.route(
request,
components,
)
```
If the setting is "confirm",
I'd expect to get the popup:

If it's "all", I'd expect that the next time I edit the link it would automatically be set to "Internal Link", with the page it currently points to as the highlighted option. **NOTE: this is not what is highlighted**; its parent page, or the parent page of the page I'm linking from, is what is highlighted:

- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: (no)
### Technical details
- Python version:3.11.1
- Django version: 5.0.3
- Wagtail version: 6.1.2
- Browser version: Chrome on Mac 125.0.6422.113
| closed | 2024-05-31T04:20:32Z | 2024-06-25T16:59:38Z | https://github.com/wagtail/wagtail/issues/11996 | [
"type:Bug"
] | maliahavlicek | 2 |
axnsan12/drf-yasg | django | 823 | Responses with Nested Serializers | # Feature Request
I'd like to be able to nest serializers within the response documentation under string keys.
## Description
In my api, all of our responses are nested under a `"data"` key like so:
```json
{
"data": {
"first_name": "Foo",
"email": "test@example.com"
},
"message": "User account successfully created!"
}
```
To achieve this response the routes return responses that look something like this...
```python
return Response(
ResponseSerializer(
dict(
data=UserSerializer(user).data,
message="User account successfully created!",
)
).data,
status=status.HTTP_200_OK,
)
```
## Describe the solution you'd like
I'd like to be able to do this:
```python
USER_RESPONSES = {
200: {
"data": UserSerializer(),
"message": "User account successfully created!"
}
}
```
However that doesn't work, so I'm forced to manually write out all the attributes within the `UserSerializer` again within the documentation.
## Describe alternatives you've considered
Are there any workarounds? I've investigated, but I haven't been able to find any good options for how to achieve this behaviour.
| open | 2022-11-08T13:47:04Z | 2025-03-07T12:10:48Z | https://github.com/axnsan12/drf-yasg/issues/823 | [
"triage"
] | jamesstonehill | 0 |
lanpa/tensorboardX | numpy | 406 | add_histogram does not display data | add_histogram wasn't plotting any data (but creating an empty plot) when placed at the end of a function. I found that I could resolve the issue by adding a writer.file_writer.flush() after the add_histogram() call.
**Environment**
googleapis-common-protos 1.5.8
protobuf 3.6.1
tensorboardX 1.6
torch 1.0.0
torchvision 0.2.1
Edit: for larger arrays, the flush does not solve the problem. Adding a time.sleep(20) works, though. Any advice? | closed | 2019-04-08T14:56:21Z | 2019-04-08T18:47:40Z | https://github.com/lanpa/tensorboardX/issues/406 | [] | mbanani | 1 |
gtalarico/django-vue-template | rest-api | 45 | Whitenoise 4.0 breaks Django 3.0 | Hi, this project's Pipfile specifies Python 3.6, which I don't have installed locally, so I thought I'd bump it to 3.7 for my project. (Edit: this may or may not be actually related; maybe if someone has 3.6 installed they can check).
The Pipfile also specifies `django = "*"` (so Django 3.0 got installed) and `whitenoise = "==4.0"`. However, whitenoise 4 is apparently not compatible with Django 3, because it tries to import from `django.utils.six`, which I guess has gone away in Django 3 ([before](https://docs.djangoproject.com/en/2.2/_modules/django/utils/six/), [after](https://docs.djangoproject.com/en/3.0/ref/utils/)). This resulted in errors like:
```
Exception in thread django-main-thread:
Traceback (most recent call last):
File "/home/ben/.local/share/virtualenvs/my-env-heqyJ-0V/lib/python3.7/site-packages/django/core/servers/basehttp.py", line 45, in get_internal_wsgi_application
return import_string(app_path)
File "/home/ben/.local/share/virtualenvs/my-env-heqyJ-0V/lib/python3.7/site-packages/django/utils/module_loading.py", line 17, in import_string
module = import_module(module_path)
File "/home/ben/.local/share/virtualenvs/my-env-heqyJ-0V/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/ben/local/my-env/backend/wsgi.py", line 18, in <module>
application = get_wsgi_application()
File "/home/ben/.local/share/virtualenvs/my-env-heqyJ-0V/lib/python3.7/site-packages/django/core/wsgi.py", line 13, in get_wsgi_application
return WSGIHandler()
File "/home/ben/.local/share/virtualenvs/my-env-heqyJ-0V/lib/python3.7/site-packages/django/core/handlers/wsgi.py", line 127, in __init__
self.load_middleware()
File "/home/ben/.local/share/virtualenvs/my-env-heqyJ-0V/lib/python3.7/site-packages/django/core/handlers/base.py", line 35, in load_middleware
middleware = import_string(middleware_path)
File "/home/ben/.local/share/virtualenvs/my-env-heqyJ-0V/lib/python3.7/site-packages/django/utils/module_loading.py", line 17, in import_string
module = import_module(module_path)
File "/home/ben/.local/share/virtualenvs/my-env-heqyJ-0V/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/ben/.local/share/virtualenvs/my-env-heqyJ-0V/lib/python3.7/site-packages/whitenoise/middleware.py", line 10, in <module>
from django.utils.six.moves.urllib.parse import urlparse
ModuleNotFoundError: No module named 'django.utils.six'
```
Whitenoise 5 is out, so this seemed to fix the problem for my purposes:
```diff
--- a/Pipfile
+++ b/Pipfile
@@ -7,11 +7,11 @@ name = "pypi"
 django = "*"
 djangorestframework = "*"
 gunicorn = "*"
-whitenoise = "==4.0"
+whitenoise = ">=5"
 dj-database-url = "*"
 psycopg2-binary = "*"
 
 [dev-packages]
 
 [requires]
-python_version = "3.6"
+python_version = "3.7"
```
Or alternatively you could specify Django 2.2 if you don't want to bump the Python and the Django versions. | open | 2019-12-15T20:37:29Z | 2022-08-08T03:39:43Z | https://github.com/gtalarico/django-vue-template/issues/45 | [] | BenQuigley | 2 |
slackapi/bolt-python | fastapi | 605 | Route socket exceptions through the custom exception handler |
### Reproducible in:
```
slack_bolt >= 1.11.1
slack_sdk>=3.9.0,<4
```
#### Python runtime version
`python3.9`
#### OS info
Not relevant
#### Steps to reproduce:
```python
import json
import logging
import os
import re
from typing import Optional
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler
from slack_bolt.error import BoltUnhandledRequestError
from slack_bolt.response import BoltResponse
from slack_sdk import WebClient
app_token = os.environ.get("SLACK_APP_TOKEN")
bot_token = os.environ.get("SLACK_BOT_TOKEN")
http_proxy = os.environ.get("SLACK_HTTP_PROXY")
signing_secret = os.environ.get("SLACK_SIGNING_SECRET")
client = WebClient(
token=bot_token,
proxy=http_proxy,
)
app = App(
client=client,
token=bot_token,
ssl_check_enabled=False,
signing_secret=signing_secret,
raise_error_for_unhandled_request=True,
)
logger = logging.getLogger("blinkbot")
@time_metric("message")
@app.message(re.compile("{.*}"))
def decorate_link(message, say, body) -> None:
...
@app.event("message")
def handle_message_events():
"""Stub to avoid warnings."""
...
@app.error
def custom_error_handler(error: Exception) -> Optional[BoltResponse]:
if isinstance(error, BrokenPipeError) or isinstance(error, BlockingIOError):
logger.warning("Socket issues: %s", error)
return BoltResponse(status=200, body="")
elif isinstance(error, BoltUnhandledRequestError):
# This may be noisy
logger.info("BoltUnhandledRequestError: %s", error, exc_info=True)
return BoltResponse(status=200, body="")
logging.exception("Uncaught exception: %s", error)
return None
if __name__ == "__main__":
handler = SocketModeHandler(app, app_token, proxy=http_proxy)
handler.start()
```
### Expected/Actual result:
We expect the `custom_error_handler` to catch all configured exceptions, but these exceptions leak through.
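For clarity, the routing we're asking for is just type-based dispatch — a stdlib-only sketch (no Bolt involved; `dispatch` is a hypothetical stand-in for the framework's error routing) of the behaviour we expect to apply to every raised exception, including socket-level ones:

```python
from typing import Optional

# Hypothetical dispatcher mirroring custom_error_handler above:
# socket-level errors should be swallowed with a 200, everything
# else falls through to the default behaviour.
def dispatch(error: Exception) -> Optional[int]:
    if isinstance(error, (BrokenPipeError, BlockingIOError)):
        return 200
    return None

print(dispatch(BrokenPipeError()))  # 200 — handled, not leaked
```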
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2022-02-28T19:12:49Z | 2022-03-01T15:08:10Z | https://github.com/slackapi/bolt-python/issues/605 | [
"question",
"area:adapter"
] | gpiks | 3 |
deezer/spleeter | tensorflow | 751 | [Discussion] your question |
ERROR: Cannot uninstall 'llvmlite'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. | closed | 2022-04-17T21:48:13Z | 2022-04-29T09:14:45Z | https://github.com/deezer/spleeter/issues/751 | [
"question"
] | sstefanovski21 | 0 |
pyg-team/pytorch_geometric | pytorch | 10,024 | PGExplainer: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! | ### 🐛 Describe the bug
Hello everyone,
I'm trying to implement PGExplainer in my model, but I'm encountering a frustrating error:
`Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0!`
I’ve already tried converting all inputs to CUDA, but the issue persists. Below is a snippet of my code for reference.
Does anyone know how to fix this? Any help would be greatly appreciated!
```python
explainer = Explainer(
model=model,
algorithm=PGExplainer(epochs=50, lr=0.001),
explanation_type='phenomenon',
edge_mask_type="object",
model_config=ModelConfig(mode="binary_classification",
task_level="graph",
return_type="raw"))
# threshold_config=ThresholdConfig(threshold_type="topk", value=topk))
data = data.cuda()
for epoch in range(50):
loss = explainer.algorithm.train(epoch=epoch,
model=model.cuda(),
x=torch.cat((data.x, data.pcc), dim=1).cuda(),
edge_index=data.edge_index.cuda(),
target=data.y.cuda(),
batch=None)
```
The full error is:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[26], line 25
23 data = data.cuda()
24 for epoch in range(50):
---> 25 loss = explainer.algorithm.train(epoch=epoch,
26 model=model.cuda(),
27 x=torch.cat((data.x, data.pcc), dim=1).cuda(),
28 edge_index=data.edge_index.cuda(),
29 target=data.y.cuda(),
30 batch=None)
File ~/miniconda/envs/gamotisi-torch/lib/python3.10/site-packages/torch_geometric/explain/algorithm/pg_explainer.py:130, in PGExplainer.train(self, epoch, model, x, edge_index, target, index, **kwargs)
127 temperature = self._get_temperature(epoch)
129 inputs = self._get_inputs(z, edge_index, index)
--> 130 logits = self.mlp(inputs).view(-1)
131 edge_mask = self._concrete_sample(logits, temperature)
132 set_masks(model, edge_mask, edge_index, apply_sigmoid=True)
File ~/miniconda/envs/gamotisi-torch/lib/python3.10/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/miniconda/envs/gamotisi-torch/lib/python3.10/site-packages/torch/nn/modules/module.py:1747, in Module._call_impl(self, *args, **kwargs)
1742 # If we don't have any hooks, we want to skip the rest of the logic in
1743 # this function, and just call forward.
1744 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1745 or _global_backward_pre_hooks or _global_backward_hooks
1746 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1747 return forward_call(*args, **kwargs)
1749 result = None
1750 called_always_called_hooks = set()
File ~/miniconda/envs/gamotisi-torch/lib/python3.10/site-packages/torch/nn/modules/container.py:250, in Sequential.forward(self, input)
248 def forward(self, input):
249 for module in self:
--> 250 input = module(input)
251 return input
File ~/miniconda/envs/gamotisi-torch/lib/python3.10/site-packages/torch/nn/modules/module.py:1736, in Module._wrapped_call_impl(self, *args, **kwargs)
1734 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1735 else:
-> 1736 return self._call_impl(*args, **kwargs)
File ~/miniconda/envs/gamotisi-torch/lib/python3.10/site-packages/torch/nn/modules/module.py:1844, in Module._call_impl(self, *args, **kwargs)
1841 return inner()
1843 try:
-> 1844 return inner()
1845 except Exception:
1846 # run always called hooks if they have not already been run
1847 # For now only forward hooks have the always_call option but perhaps
1848 # this functionality should be added to full backward hooks as well.
1849 for hook_id, hook in _global_forward_hooks.items():
File ~/miniconda/envs/gamotisi-torch/lib/python3.10/site-packages/torch/nn/modules/module.py:1790, in Module._call_impl.<locals>.inner()
1787 bw_hook = BackwardHook(self, full_backward_hooks, backward_pre_hooks)
1788 args = bw_hook.setup_input_hook(args)
-> 1790 result = forward_call(*args, **kwargs)
1791 if _global_forward_hooks or self._forward_hooks:
1792 for hook_id, hook in (
1793 *_global_forward_hooks.items(),
1794 *self._forward_hooks.items(),
1795 ):
1796 # mark that always called hook is run
File ~/miniconda/envs/gamotisi-torch/lib/python3.10/site-packages/torch_geometric/nn/dense/linear.py:147, in Linear.forward(self, x)
141 def forward(self, x: Tensor) -> Tensor:
142 r"""Forward pass.
143
144 Args:
145 x (torch.Tensor): The input features.
146 """
--> 147 return F.linear(x, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
```
### Versions
torch 2.5.1
torch-geometric 2.6.1 | closed | 2025-02-12T16:38:04Z | 2025-02-15T14:01:33Z | https://github.com/pyg-team/pytorch_geometric/issues/10024 | [
"bug",
"explain"
] | giuseppeantoniomotisi | 2 |
plotly/dash | flask | 2,731 | Improve react-docgen usage | Right now, component libraries need to include `react-docgen` specified as a devdep, and it's stuck on the 5.x series even though `react-docgen` itself is up to 7.x, because it's actually _used_ by `extract-meta.js` in this repo, via the `dash-generate-components` command. Can we either:
- Use a version of `react-docgen` provided by Dash - if we did this we would either need to include a built `react-docgen` with dash, or have `dash-generate-components` run `npm i` inside Dash, since component authors would not generally have built Dash itself.
- At least update `extract-meta.js` (and `dash-generate-components` if necessary) to support `react-docgen` 7.x (but maintaining support for 5.x for compatibility with existing components). Not ideal, since components have no direct dependency on `react-docgen`, but at least component authors could keep their dependencies up to date.
Also note: It looks like Typescript component generation pulls in Typescript from the component's `node_modules`, and this should not change. Typescript projects are built with a specific Typescript version so should be parsed with that same version. This means we'll need to keep `extract-meta.js` working with any changes that might be introduced in Typescript, but that seems unavoidable (and so far hasn't been an issue 🙏)
@T4rk1n curious your thoughts. | open | 2024-01-24T14:35:58Z | 2024-08-13T19:45:23Z | https://github.com/plotly/dash/issues/2731 | [
"feature",
"P3"
] | alexcjohnson | 0 |
apache/airflow | machine-learning | 47,295 | Pgbouncer exporter doesn't support metrics exposed by updated pgbouncer. | ### Apache Airflow version
2.10.5
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
PGBouncer has been updated and now exposes more metrics than before - the exporter no longer supports the full list of metrics and the following error can be seen in the logs of the exporter:
> could not get store result: could not get stats: unexpected column: total_server_assignment_count
Support for these has been added upstream in [pg_exporter](https://github.com/Vonng/pg_exporter/blob/main/pg_exporter.yml#L5441C12-L5441C12), which supports pgbouncer 1.24 and up.
The pgbouncer base image was updated to 1.24 in the [airflow-pgbouncer-2025.01.10-1.24.0](https://hub.docker.com/layers/apache/airflow/airflow-pgbouncer-2025.01.10-1.24.0/images/sha256-e8fd120604e8113082e9ad070e638b715cf512c279299782e76cc5ad431a25ad) docker image; however, the exporter has not been updated to match.
The [defaults for the helm chart](https://github.com/apache/airflow/blob/main/chart/values.yaml#L116-L122) are currently:
```
pgbouncer:
tag: airflow-pgbouncer-2025.01.10-1.24.0
pgbouncerExporter:
tag: airflow-pgbouncer-exporter-2024.06.18-0.17.0
```
Which are not compatible.
### What you think should happen instead?
The exporter should be compatible with the version of pgbouncer deployed.
### How to reproduce
Deploy the latest helm chart enabling pgbouncer, the logs for the `metrics-exporter` container in the pgbouncer pod will indicate the error
> could not get store result: could not get stats: unexpected column: total_server_assignment_count
### Operating System
Ubuntu 22.04
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
pgbouncer enabled
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-03T10:02:47Z | 2025-03-07T17:32:51Z | https://github.com/apache/airflow/issues/47295 | [
"kind:bug",
"good first issue",
"area:helm-chart",
"area:metrics"
] | mjmammoth | 3 |
supabase/supabase-py | fastapi | 300 | Attempting to get a table of data using rpc in python fails | **Describe the bug**
Attempting to get a table of data using RPC in Python fails.
**To Reproduce**
1. Create a simple database function like below:
```
create or replace function add_planet(name text)
returns TABLE(INTERVAL_DATETIME timestamp, DUID text)
language plpgsql
as $$
begin
return query
select "INTERVAL_DATETIME", "DUID" from bidding_data limit 10;
end;
$$;
```
2. Call the function in Python:
```
url = os.environ.get("SUPABASE_URL")
key = os.environ.get("SUPABASE_KEY")
supabase = create_client(url, key)
data = supabase.rpc('add_planet', {'name': 'pop'}).execute()
print(data)
# data=[] count=None
```
**Expected behavior**
Should return data.
**Screenshots**
Calling the function works via the Supabase SQL editor:

**Desktop (please complete the following information):**
- OS: Windows 10 Pro for Workstations
- supabase: 0.7.1
- python: 3.9.2
| closed | 2022-11-02T07:59:32Z | 2022-11-03T00:37:11Z | https://github.com/supabase/supabase-py/issues/300 | [] | nick-gorman | 1 |
mherrmann/helium | web-scraping | 85 | Text and position cannot locate button | I tried to locate the element below with `button(text)`, `text`, and `find_all(text)`, but none of them can locate the button.

`<button data-v-3295b14d="" type="button" class="el-button el-button--primary el-button--small" style="width: 100px;"><!----><!----><span>确 认</span></button>`
XPath can locate the button:
| closed | 2022-06-23T08:41:13Z | 2022-07-08T02:17:17Z | https://github.com/mherrmann/helium/issues/85 | [] | 3293406747 | 0 |
LibreTranslate/LibreTranslate | api | 600 | Downloaded models are not compatible with installed version of libretranslate | I have a docker service based on `libretranslate/libretranslate:v1.3.8`, and it used to work fine. However, if I launch a new instance (or manually update models) some models are no longer compatible:
> 500: Internal Server Error
>
> Cannot translate text: Unsupported model binary version. This executable supports models with binary version v5 or below, but the model has binary version v6. This usually means that the model was generated by a later version of CTranslate2. (Forward compatibility is not guaranteed.)
This was when translating from Swedish to French.
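For reference, one way to avoid automatic re-downloads is to persist the model directory in a named volume, so that recreating the container reuses the already-downloaded (still-compatible) models instead of fetching new ones. A docker-compose sketch — the mount path is an assumption based on the image's home-directory layout and is not verified against v1.3.8:

```yaml
# Sketch only: service name and mount path are assumptions, not verified.
services:
  libretranslate:
    image: libretranslate/libretranslate:v1.3.8
    restart: unless-stopped
    ports:
      - "5000:5000"
    volumes:
      - lt_models:/home/libretranslate/.local   # persist downloaded models

volumes:
  lt_models:
```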
Is there any way to make sure compatible models are downloaded, or do I need to manually cache models to make sure they are not automatically updated to an incompatible version? | closed | 2024-03-08T19:51:52Z | 2024-05-05T22:16:24Z | https://github.com/LibreTranslate/LibreTranslate/issues/600 | [
"possible bug"
] | deadbeef84 | 3 |
pallets/flask | python | 5,031 | URL Hack? | 

Im not too sure what the person was trying to do with this, but they were able to clear my MongoDB. I don't even use phpadmin so I'm not sure either what they were doing there, but yeah.
If this is just a issue with me and how i setup my site, would be nice if someone could tell me what i did wrong, since I've used flask for ages and I've never had this happen.
**new issue in comments** | closed | 2023-03-23T17:07:41Z | 2023-04-11T00:05:42Z | https://github.com/pallets/flask/issues/5031 | [] | Rehold | 2 |
joerick/pyinstrument | django | 117 | Where are the reports stored by default? | Hi, maybe I am too blind to find the answer myself: where does `pyinstrument` store the reports by default? I.e., from where are they loaded when using the `--load-prev` flag?
And since I am asking, is this what is changed when specifying `--outfile`?
Thanks :) | closed | 2021-01-20T13:35:54Z | 2021-01-21T14:23:15Z | https://github.com/joerick/pyinstrument/issues/117 | [] | GittiHab | 6 |
pyg-team/pytorch_geometric | pytorch | 8,886 | Is there a bug in `FeatureStore` + `NegativeSampling`? | ### 🐛 Describe the bug
I am using `FeatureStore` in a distributed, large-scale setting. I find that the features of negative samples are **sometimes** different from what I expect. Is there a bug?
```
import torch
from torch_geometric.sampler import NegativeSampling
from torch_geometric.data import HeteroData
from torch_geometric.loader import LinkNeighborLoader
from torch_geometric.testing import (
MyFeatureStore,
MyGraphStore,
get_random_edge_index,
)
def test_custom_hetero_link_neighbor_loader():
data = HeteroData()
feature_store = MyFeatureStore()
graph_store = MyGraphStore()
# Set up node features:
x = torch.arange(10)
data['paper'].x = x
feature_store.put_tensor(x, group_name='paper', attr_name='x', index=None)
x = torch.arange(10, 300)
data['author'].x = x
feature_store.put_tensor(x, group_name='author', attr_name='x', index=None)
# Set up edge indices (GraphStore does not support `edge_attr` at the
# moment):
edge_index = get_random_edge_index(10, 10, 50)
data['paper', 'to', 'paper'].edge_index = edge_index
graph_store.put_edge_index(edge_index=(edge_index[0], edge_index[1]),
edge_type=('paper', 'to', 'paper'),
layout='coo', size=(10, 10))
edge_index = get_random_edge_index(10, 20, 100)
data['paper', 'to', 'author'].edge_index = edge_index
graph_store.put_edge_index(edge_index=(edge_index[0], edge_index[1]),
edge_type=('paper', 'to', 'author'),
layout='coo', size=(10, 20))
edge_index = get_random_edge_index(20, 10, 100)
data['author', 'to', 'paper'].edge_index = edge_index
graph_store.put_edge_index(edge_index=(edge_index[0], edge_index[1]),
edge_type=('author', 'to', 'paper'),
layout='coo', size=(20, 10))
loader1 = LinkNeighborLoader(
data,
num_neighbors=[-1] * 2,
edge_label_index=('paper', 'to', 'author'),
batch_size=20,
neg_sampling= NegativeSampling(
mode="triplet", amount=1)
)
loader2 = LinkNeighborLoader(
(feature_store, graph_store),
num_neighbors=[-1] * 2,
edge_label_index=('paper', 'to', 'author'),
batch_size=20,
neg_sampling= NegativeSampling(
mode="triplet", amount=1
)
)
assert str(loader1) == str(loader2)
for (batch1, batch2) in zip(loader1, loader2):
print("negative node ids batch1", batch1['author'].dst_neg_index)
print("negative node ids batch2", batch2['author'].dst_neg_index)
common_indices = list(set(batch1['author'].dst_neg_index.tolist()).intersection(set(batch2['author'].dst_neg_index.tolist())))
# get just one common node id
common_indices = common_indices[0]
print("common node id in negatives", common_indices)
mask_common_index1 = batch1["author"].dst_neg_index == common_indices
mask_common_index2 = batch2["author"].dst_neg_index == common_indices
# node ids
# and let's limit to one element only (there might be more than one match)
node_id1 = batch1["author"].dst_neg_index[mask_common_index1][0]
node_id2 = batch2["author"].dst_neg_index[mask_common_index2][0]
# PASSES!
assert node_id2 == node_id1
print(batch1["author"].x)
print(batch1["author"].x[node_id1])
print(batch2["author"].x[node_id2])
# NOPE! (sometimes, so run it multiple times)
assert batch1["author"].x[node_id1] == batch2["author"].x[node_id2]
```
output
```
negative node ids batch1 tensor([28, 9, 14, 27, 21, 18, 29, 30, 25, 20, 16, 7, 22, 23, 24, 5, 19, 26,
15, 17])
negative node ids batch2 tensor([11, 12, 5, 2, 9, 14, 0, 7, 0, 4, 3, 5, 3, 4, 11, 7, 15, 3,
16, 15])
common node id in negatives 5
tensor([ 10, 11, 13, 14, 16, 17, 18, 19, 21, 22, 25, 26, 27, 29,
45, 56, 65, 73, 77, 81, 93, 98, 108, 127, 138, 176, 186, 193,
209, 279, 281, 23, 15, 28, 12, 20, 24])
tensor(17)
tensor(16)
====================================================================== short test summary info =======================================================================
FAILED tests/ray_utils/test_ll.py::test_custom_hetero_link_neighbor_loader - assert tensor(17) == tensor(16)
```
### Versions
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: macOS 14.3 (arm64)
GCC version: Could not collect
Clang version: 15.0.0 (clang-1500.1.0.2.5)
CMake version: version 3.26.4
Libc version: N/A
Python version: 3.9.16 (main, Mar 8 2023, 04:29:24) [Clang 14.0.6 ] (64-bit runtime)
Python platform: macOS-14.3-arm64-arm-64bit
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Apple M1 Max
Versions of relevant libraries:
[pip3] mypy==1.7.0
[pip3] mypy-extensions==1.0.0
[pip3] mypy-protobuf==1.23
[pip3] numpy==1.23.5
[pip3] torch==2.0.0
[pip3] torch-cluster==1.6.1
[pip3] torch-geometric==2.3.1
[pip3] torch-scatter==2.1.1
[pip3] torch-sparse==0.6.17
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.15.2
[conda] numpy 1.23.5 pypi_0 pypi
[conda] torch 2.0.0 pypi_0 pypi
[conda] torch-cluster 1.6.1 pypi_0 pypi
[conda] torch-geometric 2.3.1 pypi_0 pypi
[conda] torch-scatter 2.1.1 pypi_0 pypi
[conda] torch-sparse 0.6.17 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi | closed | 2024-02-08T14:53:01Z | 2024-02-12T07:42:08Z | https://github.com/pyg-team/pytorch_geometric/issues/8886 | [
"bug"
] | denadai2 | 4 |