| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
gradio-app/gradio | data-visualization | 10,380 | render=True doesn't work in Dataframe if value is HTML code | ### Describe the bug
I use gr.Dataframe to read a CSV file and display it on a web page. One column's value is the full path of an audio file, and I want that column to render as a play button: when the user clicks it, the audio at that position should play.
However, I can only see the raw HTML code in that column; the Dataframe doesn't seem to render it.
When I open the browser's debugger, these strings are always parsed as plain strings:
```html
<span tabindex="-1" role="button" class="svelte-q8uklq"><audio controls><source src="/asr/users/yi.liu/result_2025-01-09//asr/users/yi.liu/result_2025-01-09/pcm/1530346663179644545.wav" type="audio/wav"></audio></span>
```
The string is shown wrapped in double quotes. When I remove the quotes, the output shows the correct web component.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
server.py
```python
from function import *
with gr.Blocks() as demo:
with gr.Row():
load_btn = gr.Button("Load", elem_id="load_btn", interactive=False)
with gr.Row():
table_output = gr.Dataframe(type="pandas", render=True)
load_btn.click(
fn=load_file,
inputs=source_file,
outputs=[table_output, load_btn]
)
if __name__ == '__main__':
demo.launch(server_name='localhost', server_port=54321, share=False, debug=True)
```
function.py
```python
from os import PathLike
import getpass  # needed by load_file below
import pandas as pd  # needed for pd.read_csv
import gradio as gr
USER_SESSIONS = {}
def __render_audio_column__(audio_path: PathLike):
return f'<audio controls><source src="{audio_path}" type="audio/wav"></audio>'
def load_file(src_file: PathLike):
username = getpass.getuser()
global CURRENT_SESSION
if username in USER_SESSIONS:
CURRENT_SESSION = USER_SESSIONS.get(username)
else:
USER_SESSIONS[username] = UserInfo()
CURRENT_SESSION = USER_SESSIONS[username]
CURRENT_SESSION.df = pd.read_csv(src_file, header=0)
CURRENT_SESSION.df['Audio'] = CURRENT_SESSION.df['Audio'].apply(__render_audio_column__)
return CURRENT_SESSION.df, gr.Button.update(interactive=False)
```
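A possible workaround (an assumption on my side, not verified against this Gradio version) is to tell the Dataframe which columns contain raw HTML via its `datatype` argument, so the cell is rendered instead of escaped. Below is a minimal sketch of the cell-building part; the `render_audio_cell` name and the `datatype` usage in the trailing comment are hypothetical:

```python
# Hypothetical sketch: build the audio cell as an HTML string, as the
# reporter's __render_audio_column__ does.
def render_audio_cell(audio_path: str) -> str:
    return (f'<audio controls><source src="{audio_path}" '
            f'type="audio/wav"></audio>')

cell = render_audio_cell("/tmp/1530346663179644545.wav")
assert cell.startswith("<audio controls>")
assert 'type="audio/wav"' in cell

# Assumed gradio usage (untested): declare the audio column as HTML so the
# component renders it rather than showing the escaped string.
# table_output = gr.Dataframe(type="pandas", datatype=["str", "html"])
```

If `datatype="html"` is not supported in the installed version, `"markdown"` cells may behave similarly.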

### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
System: Ubuntu 22.04
Python: 3.10.0
gradio: 3.32.0
Pandas: 2.0.3
```
### Severity
I cannot work around it | closed | 2025-01-17T08:41:01Z | 2025-01-20T07:13:47Z | https://github.com/gradio-app/gradio/issues/10380 | [
"bug",
"pending clarification"
] | Yb2S3Man | 3 |
laughingman7743/PyAthena | sqlalchemy | 218 | init cursor from previously run query_id | We have an application where we'd like to restore results from previous Athena queries.
If a user has run a query, we store the query id, and we'd like to use pyAthena to open a previously run query.
Is there any way this can be achieved with PyAthena, such as initializing the cursor with a query_id? | closed | 2021-02-25T08:12:34Z | 2022-08-07T12:02:45Z | https://github.com/laughingman7743/PyAthena/issues/218 | [] | moshir | 7 |
learning-at-home/hivemind | asyncio | 500 | Would you consider to add some CV examples with hivemind?[Feature Request] | **Is your feature request related to a problem? Please describe.**
CV models usually include batch normalization, which might cause problems when using hivemind. Would you consider applying the framework to CV models?
| closed | 2022-08-02T15:23:43Z | 2022-08-08T13:06:17Z | https://github.com/learning-at-home/hivemind/issues/500 | [
"enhancement",
"help wanted"
] | elricwan | 2 |
sinaptik-ai/pandas-ai | data-science | 603 | Update the code in documentation Home page | ### Rewrite code on homepage for Documentation
```python
import pandas as pd
from pandasai import PandasAI
# Sample DataFrame
df = pd.DataFrame({
"country": ["United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan", "China"],
"gdp": [19294482071552, 2891615567872, 2411255037952, 3435817336832, 1745433788416, 1181205135360, 1607402389504, 1490967855104, 4380756541440, 14631844184064],
"happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12]
})
# Instantiate a LLM
from pandasai.llm.openai import OpenAI
llm = OpenAI()
pandas_ai = PandasAI(llm)
pandas_ai.run(df, prompt='Which are the 5 happiest countries?')
```
The code above works when pasted directly, but the code at https://github.com/gventuri/pandas-ai/blob/main/docs/index.md on that page does not work.
It throws the error below.
```sh
AttributeError: 'SmartDatalake' object has no attribute '_llm'. Did you mean: 'llm'?
``` | closed | 2023-10-02T14:59:16Z | 2024-07-27T21:36:49Z | https://github.com/sinaptik-ai/pandas-ai/issues/603 | [] | snapfast | 10 |
koxudaxi/fastapi-code-generator | fastapi | 43 | [suggestion] Explicitly stating the required python version | I tried to install this library for quite a while, but pip couldn't find an appropriate version which confused me quite a lot. After a while, I realized I had python 3.7 and maybe that was the issue. After seeing the badge with python 3.8, I realized I had to use 3.8 and it worked. However, this requirement is not explicitly stated anywhere as far as I could find, apart from the button.
I think it would be clearer if this were stated explicitly somewhere. I also don't know whether 3.8 is a hard requirement or whether it has only been tested on that version, but I'll leave that open. | closed | 2020-11-01T16:11:20Z | 2020-11-14T11:04:27Z | https://github.com/koxudaxi/fastapi-code-generator/issues/43 | [
"released"
] | Baukebrenninkmeijer | 3 |
Nekmo/amazon-dash | dash | 147 | Scapy update | ### What is the purpose of your *issue*?
- [x] Other
### Description
Scapy 2.4.3 is available, which solves the installation issues versions 2.4.1/2.4.2 had.
Note that Scapy 2.4.0 (which is currently pinned) is vulnerable to https://github.com/secdev/scapy/security/advisories/GHSA-q5wg-mj9r-hp59, and is now more than a year old.
I'd encourage to update. | closed | 2019-11-14T22:26:41Z | 2020-04-11T14:11:56Z | https://github.com/Nekmo/amazon-dash/issues/147 | [
"enhancement"
] | gpotter2 | 1 |
ultralytics/ultralytics | computer-vision | 19,824 | WeightsUnpickler error | Hello;
I am working on a project and using the Yolov8.yaml file to train the model from scratch. Everything was fine, but I have hit this problem and I am stuck. I have used this repo before with no problems or errors, but when I came back to the project, I ran into this issue.
This is the error message:
```
File "/usr/local/lib/python3.11/dist-packages/torch/serialization.py", line 1470, in load
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL ultralytics.nn.tasks.DetectionModel was not an allowed global by default. Please use `torch.serialization.add_safe_globals([DetectionModel])` or the `torch.serialization.safe_globals([DetectionModel])` context manager to allowlist this global if you trust this class/function.
```
could anyone help, please? | open | 2025-03-22T13:36:31Z | 2025-03-23T00:22:18Z | https://github.com/ultralytics/ultralytics/issues/19824 | [
"detect"
] | Salmankm93 | 2 |
ionelmc/pytest-benchmark | pytest | 235 | Tracking success rate of benchmarked functions | I have a use case for tracking the performance and success rate of non-deterministic functions.
The following function serves to outline the scenario:
```python
def foo():
time.sleep(base_time + abs(random.gauss(0, 0.01)))
if random.random() < error_rate:
raise RuntimeError
```
I have played around and arrived at the following result:
```python
def benchmark_pedantic_with_count(benchmark, function, *args, **kwargs):
successes = []
@wraps(function)
def wrapper(*args, **kwargs):
try:
result = function(*args, **kwargs)
successes.append(True)
return result
except:
successes.append(False)
benchmark.pedantic(wrapper, *args, **kwargs)
benchmark.extra_info['success_count'] = sum(successes)
new_stats_fields = list(benchmark.stats.stats.fields)
new_stats_fields.append('succ')
benchmark.stats.stats.fields = new_stats_fields
benchmark.stats.stats.succ = sum(successes) / len(successes)
```
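The wrapping idea can be exercised on its own, without the benchmark fixture. A small self-contained sketch (the `count_successes` name is mine, not part of pytest-benchmark):

```python
from functools import wraps

def count_successes(function):
    # Wrap `function`, recording True/False per call instead of propagating
    # exceptions, mirroring the wrapper in the snippet above.
    successes = []

    @wraps(function)
    def wrapper(*args, **kwargs):
        try:
            result = function(*args, **kwargs)
            successes.append(True)
            return result
        except Exception:
            successes.append(False)

    return wrapper, successes

outcomes = iter(["ok", RuntimeError, "ok"])

def flaky():
    value = next(outcomes)
    if value is RuntimeError:
        raise RuntimeError
    return value

wrapped, successes = count_successes(flaky)
for _ in range(3):
    wrapped()
assert successes == [True, False, True]
assert sum(successes) / len(successes) == 2 / 3
```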
To get the new column `succ` actually displayed, I had to also:
* Add `succ` to `pytest_benchmark.utils.ALLOWED_COLUMNS`.
* Overwrite `pytest_benchmark.table.display` so it shows `succ`.
(How exactly to achieve those two things is left as an exercise for the reader.)
While this does work, I am unsure if my solution could be upstreamed easily.
How should I do it if I want my solution to be merged into `pytest-benchmark`?
Alternate and related approaches:
* Add an argument to `benchmark.pedantic` that makes it continue on exceptions, but gives it an argument of the list of exceptions caught (like `[None, None, RuntimeError, None, RuntimeError]`).
* Add an argument to `benchmark.pedantic` to change the return type to a list of all results, then set up the benchmarked function so that it catches relevant exceptions and returns whatever I want.
* Allow `extra_info` keys in the terminal table. | open | 2023-03-02T22:41:27Z | 2023-05-25T12:38:00Z | https://github.com/ionelmc/pytest-benchmark/issues/235 | [] | amoskopp | 1 |
graphdeco-inria/gaussian-splatting | computer-vision | 761 | Training with mask | I would like to know if it is possible to train with a mask of the target object. I just want to obtain the Gaussian splatting (GS) model of the target object.
ex:

| closed | 2024-04-17T10:59:38Z | 2024-04-20T19:16:48Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/761 | [] | nuko7055 | 1 |
psf/requests | python | 6,906 | Improve "import requests" time by delaying ssl context preload | Requests is not accepting feature requests at this time.
In requests/adapters.py there is initialization for _preloaded_ssl_context in global scope.
In corporate environment this operation consumes significant portion of the "import requests" time.
Wrapping this initialization into a function that is called where this context is actually needed would eliminate the overhead.
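For illustration, the suggested wrapper could look roughly like this (the function name is mine; requests actually uses a module-level `_preloaded_ssl_context` in adapters.py):

```python
import ssl
from functools import lru_cache

@lru_cache(maxsize=1)
def preloaded_ssl_context() -> ssl.SSLContext:
    # Built on first use instead of at import time; cached afterwards,
    # so later callers pay nothing.
    return ssl.create_default_context()

# Repeated calls return the same cached context object.
assert preloaded_ssl_context() is preloaded_ssl_context()
```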
Estimates based on my machine:
```shell
# original:
$ time python -c 'import requests'
real    0m0.918s
...

# with a function wrapper:
$ time python -c 'import requests'
real    0m0.356s
...
```
| closed | 2025-02-23T18:11:10Z | 2025-02-23T18:11:20Z | https://github.com/psf/requests/issues/6906 | [
"Feature Request",
"actions/autoclose-feat"
] | gregory-shklover | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,485 | Add support for Debian Bookworm (12) | ### Proposal
This ticket is to track the work related to extend support to [Debian Bookworm (12)](https://www.debian.org/News/2023/20230610) released in date June 10th, 2023 | closed | 2023-06-13T20:26:55Z | 2023-11-05T10:26:34Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3485 | [
"C: Packaging"
] | evilaliv3 | 0 |
biolab/orange3 | numpy | 6,825 | Help not working on windows | <!--
Thanks for taking the time to report a bug!
If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3
To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability.
-->
**What's wrong?**
F1 or the help button (below) gives a grey blank window

**How can we reproduce the problem?**
Just clicking the button or pressing F1
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system: Windows 10
- Orange version: 3.37.0
- How you installed Orange:
Windows
Standalone installer (default)
[Orange3-3.37.0-Miniconda-x86_64.exe](https://download.biolab.si/download/files/Orange3-3.37.0-Miniconda-x86_64.exe)
Can be used without administrative priviledges.
| closed | 2024-06-10T08:50:44Z | 2024-11-23T08:47:25Z | https://github.com/biolab/orange3/issues/6825 | [
"bug report"
] | simonaubertbd | 1 |
2noise/ChatTTS | python | 818 | the file is too long | Have you noticed this: for the first text I input just ten words, for the second text I wrote 40 words, but the output wav files are the same size. The second wav file contains some blank (silence), so do you know how to trim the blank? | closed | 2024-11-09T14:51:48Z | 2024-11-10T03:26:48Z | https://github.com/2noise/ChatTTS/issues/818 | [] | wuabc0954 | 1 |
ydataai/ydata-profiling | data-science | 792 | GPU Support | **It would be great if pandas-profiling can run on GPU**
| open | 2021-05-18T13:05:21Z | 2021-05-18T14:08:09Z | https://github.com/ydataai/ydata-profiling/issues/792 | [
"feature request 💬",
"help wanted 🙋"
] | salmanea | 1 |
dask/dask | pandas | 11,252 | Local memory explodes on isin() | When doing a Series.isin() with PyArrow strings, local memory just explodes.
It works fine (with version 2023.9.1) when the column is of type "object".
A workaround is to disable the string conversion (`dask.config.set({"dataframe.convert-string": False})`), but that is not ideal. Any idea why this happens now?
**Minimal Complete Verifiable Example**:
```python
import dask.dataframe as dd
import random
import string
test = dd.from_dict(
{
"id": [''.join(random.choices(string.ascii_uppercase + string.digits, k=35)) for _ in range(1000000)],
},
npartitions=1
)
users = [''.join(random.choices(string.ascii_uppercase + string.digits, k=35)) for _ in range(5000000)]
test[test.id.isin(users)].compute()
```
**Environment**:
- Dask version: 2024.2.1
- Python version: 3.10
- Operating System: Linux
- Install method (conda, pip, source): pip
| open | 2024-07-25T11:38:17Z | 2024-07-25T12:37:42Z | https://github.com/dask/dask/issues/11252 | [
"dataframe",
"upstream"
] | manschoe | 1 |
flairNLP/flair | pytorch | 2,706 | corpus.make_tag_dictionary(tag_type=tag_type) NOT working when trying to train custom Dataset | **Describe the bug**
My code was working fine before, but now when I try to execute the command `corpus.make_tag_dictionary(tag_type=tag_type)` I get an error that says: 1522 for sentence in _iter_dataset(self.get_all_sentences()):
1523 for token in sentence.tokens:
-> 1524 tag_dictionary.add_item(token.get_tag(tag_type).value)
1525 tag_dictionary.add_item("<START>")
1526 tag_dictionary.add_item("<STOP>")
AttributeError: 'Token' object has no attribute 'get_tag'
**To Reproduce**
I'm trying to train flair with my custom Dataset
**Expected behavior**
the command should work like before without any problem
**Screenshots**

**Environment (please complete the following information):**
- OS [ Colab]:
- Version [e.g. flair-0.3.2]:
| closed | 2022-04-06T12:05:48Z | 2022-09-09T02:02:27Z | https://github.com/flairNLP/flair/issues/2706 | [
"bug",
"wontfix"
] | hmiche | 2 |
airtai/faststream | asyncio | 1,363 | Bug: AsyncConfluentProducer / AsyncConfluentConsumer are not that async | Both classes from `faststream.confluent.client` call blocking code in their constructors, `self.producer.list_topics()` and `create_topics(topics=self.topics, config=self.config)` respectively. When Kafka cluster is not reachable (DNS issues, firewall, ...), FastAPI application does not reach started state (uvicorn does not print `INFO: Uvicorn running on http://127.0.0.1:8080 (Press CTRL+C to quit)`) and endpoints do not respond to requests.
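For context, the usual asyncio pattern for such blocking probes is to offload them to a worker thread so the event loop keeps serving requests during startup. A minimal stdlib-only sketch (the function names are stand-ins, not FastStream or confluent-kafka code):

```python
import asyncio
import time

def list_topics_blocking():
    # Stand-in for a blocking metadata probe such as producer.list_topics();
    # the sleep simulates network latency.
    time.sleep(0.05)
    return {"topics": []}

async def connect():
    # Offload the blocking call so the event loop is not frozen while
    # the broker is slow or unreachable.
    return await asyncio.to_thread(list_topics_blocking)

metadata = asyncio.run(connect())
assert metadata == {"topics": []}
```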
**How to reproduce**
Start example at https://faststream.airt.ai/latest/getting-started/integrations/fastapi/#__tabbed_1_2 without broker, try to fetch http://localhost:8080/docs (I'm getting timeout)
With broker still unreachable, commenting https://github.com/airtai/faststream/blob/63a4453f79c8963fccdb523a092a28a4f4ce0893/faststream/confluent/client.py#L150 and https://github.com/airtai/faststream/blob/63a4453f79c8963fccdb523a092a28a4f4ce0893/faststream/confluent/client.py#L354 allows successful startup, with `connection refused` errors in log. Then start
```
docker run -d -p 9092:9092 --name kafka \
-e KAFKA_ENABLE_KRAFT=yes \
-e KAFKA_CFG_NODE_ID=1 \
-e KAFKA_CFG_PROCESS_ROLES=broker,controller \
-e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
-e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \
-e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \
-e KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092 \
-e KAFKA_BROKER_ID=1 \
-e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kafka:9093 \
-e ALLOW_PLAINTEXT_LISTENER=yes \
--add-host kafka:127.0.0.1 bitnami/kafka:3.7.0
```
and observe that error log messages from faststream consumer about `connection refused` stopped and `/docs` still responds. | closed | 2024-04-11T17:09:56Z | 2024-06-26T07:03:18Z | https://github.com/airtai/faststream/issues/1363 | [
"bug",
"Confluent"
] | lecko-cngroup | 1 |
huggingface/datasets | numpy | 6,908 | Fail to load "stas/c4-en-10k" dataset since 2.16 version | ### Describe the bug
After updating the datasets library to version 2.16+ (I tested it on 2.16, 2.19.0 and 2.19.1), I use the following code to load the stas/c4-en-10k dataset:
```python
from datasets import load_dataset, Dataset
dataset = load_dataset('stas/c4-en-10k')
```
and it then raises a UnicodeDecodeError like this:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2523, in load_dataset
builder_instance = load_dataset_builder(
File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 2195, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1846, in dataset_module_factory
raise e1 from None
File "/home/*/conda3/envs/watermark/lib/python3.10/site-packages/datasets/load.py", line 1798, in dataset_module_factory
can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
File "/home/*/conda3/envs/watermark/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
I found that fs.open loads a gzip file, but its content is parsed as plain text with a UTF-8 decoder.
```python
from huggingface_hub import HfFileSystem

fs = HfFileSystem('https://huggingface.co')
f = fs.open("datasets/stas/c4-en-10k/c4-en-10k.py", "rb")
data = f.read()  # data is gzip bytes beginning with b'\x1f\x8b\x08\x00\x00\tn\x88\x00...'
data2 = unzip_gzip_bytes(data)  # data2 is what we want: '# coding=utf-8\n# Copyright 2020 The HuggingFace Datasets...'
```
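(`unzip_gzip_bytes` above is a placeholder helper, not a library function; a stdlib version could look like this:)

```python
import gzip

def unzip_gzip_bytes(data: bytes) -> str:
    # gzip streams start with the magic bytes 0x1f 0x8b, which is exactly
    # what the UTF-8 decoder chokes on in the traceback above.
    assert data[:2] == b"\x1f\x8b"
    return gzip.decompress(data).decode("utf-8")

payload = gzip.compress(b"# coding=utf-8\n")
assert unzip_gzip_bytes(payload) == "# coding=utf-8\n"
```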
### Steps to reproduce the bug
1. Install datasets between version 2.16 and 2.19
2. Use `datasets.load_dataset` method to load `stas/c4-en-10k` dataset.
### Expected behavior
Load dataset normally.
### Environment info
Platform = Linux-5.4.0-159-generic-x86_64-with-glibc2.35
Python = 3.10.14
Datasets = 2.19 | closed | 2024-05-20T02:43:59Z | 2024-05-24T10:58:09Z | https://github.com/huggingface/datasets/issues/6908 | [] | guch8017 | 2 |
pydata/bottleneck | numpy | 60 | ZeroDivisionError of bn.nanstd with arrays of size 1 and ddof=1 | In [310]: bn.nanstd(np.array([1.]), ddof=1)
ZeroDivisionError Traceback (most recent call last)
<ipython-input-310-96d0501de6e0> in <module>()
----> 1 bn.nanstd(np.array([1.]), ddof=1)
/usr/local/lib/python2.7/dist-packages/Bottleneck-0.6.0-py2.7-linux-i686.egg/bottleneck/func.so in func.nanstd (bottleneck/src/func/32bit/func.c:59697)()
/usr/local/lib/python2.7/dist-packages/Bottleneck-0.6.0-py2.7-linux-i686.egg/bottleneck/func.so in func.nanstd_1d_float64_axisNone (bottleneck/src/func/32bit/func.c:64868)()
ZeroDivisionError: float division
Numpy std simply returns 0.0 for that case. The integer functions show nan, instead of 0.0.
I just did some further tests, it seems like it generally fails, if there is only one valid value, e.g. np.array([1, np.nan, np.nan]) fails the same way.
| closed | 2013-03-04T12:25:37Z | 2013-03-06T16:53:32Z | https://github.com/pydata/bottleneck/issues/60 | [] | ml31415 | 10 |
babysor/MockingBird | deep-learning | 135 | How can I merge multiple datasets and train on them to reduce the loss? | How can I merge multiple datasets and train on them to reduce the loss?
I have already downloaded three datasets:
<datasets_root>
├──aidatatang_200zh
│ ├──corpus
│ │ ├──dev
│ │ ├──test
│ │ └──train
│ └──transcript
├──data_aishell3
│ ├──test
│ │ └──wav
│ └──train
│ └──wav
├──MAGICDATA
│ └──train
│ ├──14_3466
│ ├──14_3664
│ ........
│ └──5_970
└──SV2TTS
└──synthesizer
├──audio
├──embeds
└──mels
Now, how should I mix these three together? | open | 2021-10-11T00:20:49Z | 2022-03-02T13:40:47Z | https://github.com/babysor/MockingBird/issues/135 | [] | twilightsg | 3 |
graphql-python/graphene-sqlalchemy | graphql | 110 | How to filter the child of the main query? | I'm a relative newcomer to Graphene, I'm trying to make a query like this
Query example
```
{
user(username: "Jon") {
name
last_name
username
posts(in_draft : true) {
title
text
in_draft
update_at
}
}
}
```
Filter the posts that are in draft. Is this possible?
Query:
```
class Query(graphene.ObjectType):
user = graphene.Field(lambda: User, username=graphene.String())
def resolve_user(self, info, username):
query = User.get_query(info)
return query.filter(UserModel.username == username).first()
posts = graphene.List(lambda: Post, in_draft=graphene.Boolean())
def resolve_posts(self, info, in_draft):
query = Post.get_query(info)
return query.filter(PostModel.in_draft == in_draft).all()
```
| closed | 2018-01-31T19:36:52Z | 2023-02-24T14:55:59Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/110 | [] | afr-dt | 8 |
reloadware/reloadium | pandas | 99 | Reloadium crashes with SIGSEGV signal | ## Describe the bug
When updating a previous function in the stack, reloadium crashes with SIGSEGV signal.
## To Reproduce
Run [this script](https://github.com/ChillarAnand/avilpage.com/blob/master/scripts/dynamic.py) in debug mode.
After the breakpoint is hit, try to change the array values.
## Expected behavior
It shouldn't crash and update the functions.
## Screenshots
<img width="1280" alt="Screenshot 2023-02-06 at 13 30 10" src="https://user-images.githubusercontent.com/4463796/216916611-3d86f6b4-b511-4804-a978-03ca91ecd6be.png">
## Desktop or remote (please complete the following information)
OS: Mac
OS version: 13.0.1
M1 chip: yes
Reloadium package version: 0.9.10
PyCharm plugin version: 0.9.5
Editor: PyCharm
Python Version: 3.9.12
Python Architecture:64
Run mode: Debug
| closed | 2023-02-06T08:03:09Z | 2023-02-06T11:57:38Z | https://github.com/reloadware/reloadium/issues/99 | [] | ChillarAnand | 3 |
python-restx/flask-restx | api | 1 | Rename flask-restplus -> flask-restx | All occurrences of Flask-RESTPlus, flask-restplus e.t.c. should be replaced with Flaks-RESTX, flask-restx e.t.c. In addition, the Flask-RESTPlus logo should also be removed until a Flask-RESTX logo is created. | closed | 2020-01-09T13:47:02Z | 2020-01-15T19:09:12Z | https://github.com/python-restx/flask-restx/issues/1 | [] | SteadBytes | 3 |
Avaiga/taipy | automation | 2,245 | Support for Dash interactive plots | ### Description
Taipy already supports Plotly; however, the charts are static. It would be great to support Dash objects so that the charts can be interactive.
### Solution Proposed
It would be great to support Dash objects so that the charts can be interactive.
### Acceptance Criteria
- [ ] If applicable, a new demo code is provided to show the new feature in action.
- [ ] Integration tests exhibiting how the functionality works are added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-11-14T00:26:42Z | 2025-01-06T09:11:09Z | https://github.com/Avaiga/taipy/issues/2245 | [
"🖰 GUI",
"✨New feature",
"💬 Discussion"
] | mjpan | 14 |
gradio-app/gradio | deep-learning | 10,034 | Support examples for gr.Gallery | ### Describe the bug
Hi Gradio Development Team,
I suspect there may be an issue with the `Examples` mechanism when using the `gr.Gallery` component. The same `Examples` implementation works perfectly with the `gr.Image` component. Here's a detailed explanation of the issue:
Recently, I updated my Gradio application by replacing the `gr.Image` component with `gr.Gallery`. However, this resulted in a `PermissionError: [Errno 13] Permission denied: 'C:\\my\\path'`.
Upon investigation, it appears that the issue may be related to the `component.as_example(ex)` function in `gradio\components\dataset.py`.
To debug, I added a print statement in the `__init__` method of `dataset.py`. Below are the console logs for comparison:
**When using `gr.Image`, the console log shows:**
<details>
component:<gradio.components.image.Image object at 0x00000215AB195E40>
ex:power.jpg
component.as_example(ex):path='power.jpg' url=None size=None orig_name='power.jpg' mime_type=None is_stream=False meta={'_type': 'gradio.FileData'}
</details>
**When using `gr.Gallery`, the console log shows:**
<details>
component:<gradio.components.gallery.Gallery object at 0x000001CEE1667070>
ex:power.jpg
component.as_example(ex):root=[GalleryImage(image=FileData(path='p', url=None, size=None, orig_name='p', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='o', url=None, size=None, orig_name='o', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='w', url=None, size=None, orig_name='w', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='e', url=None, size=None, orig_name='e', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='r', url=None, size=None, orig_name='r', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='.', url=None, size=None, orig_name='', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='j', url=None, size=None, orig_name='j', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='p', url=None, size=None, orig_name='p', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None), GalleryImage(image=FileData(path='g', url=None, size=None, orig_name='g', mime_type=None, is_stream=False, meta={'_type': 'gradio.FileData'}), caption=None)]
Traceback (most recent call last):
File "C:\my\path\app.py", line 469, in <module>
main()
File "C:\my\path\app.py", line 449, in main
gr.Examples(
File "C:\my\path\venv\lib\site-packages\gradio\helpers.py", line 56, in create_examples
examples_obj = Examples(
File "C:\my\path\venv\lib\site-packages\gradio\helpers.py", line 264, in __init__
self.dataset = components.Dataset(
File "C:\my\path\venv\lib\site-packages\gradio\component_meta.py", line 179, in wrapper
return fn(self, **kwargs)
File "C:\my\path\venv\lib\site-packages\gradio\components\dataset.py", line 117, in __init__
processing_utils.move_files_to_cache(
File "C:\my\path\venv\lib\site-packages\gradio\processing_utils.py", line 516, in move_files_to_cache
return client_utils.traverse(
File "C:\my\path\venv\lib\site-packages\gradio_client\utils.py", line 1009, in traverse
new_obj.append(traverse(item, func, is_root))
File "C:\my\path\venv\lib\site-packages\gradio_client\utils.py", line 1004, in traverse
new_obj[key] = traverse(value, func, is_root)
File "C:\my\path\venv\lib\site-packages\gradio_client\utils.py", line 1000, in traverse
return func(json_obj)
File "C:\my\path\venv\lib\site-packages\gradio\processing_utils.py", line 490, in _move_to_cache
temp_file_path = block.move_resource_to_block_cache(payload.path)
File "C:\my\path\venv\lib\site-packages\gradio\blocks.py", line 347, in move_resource_to_block_cache
temp_file_path = processing_utils.save_file_to_cache(
File "C:\my\path\venv\lib\site-packages\gradio\processing_utils.py", line 277, in save_file_to_cache
temp_dir = hash_file(file_path)
File "C:\my\path\venv\lib\site-packages\gradio\processing_utils.py", line 206, in hash_file
with open(file_path, "rb") as f:
PermissionError: [Errno 13] Permission denied: 'C:\\my\\path'
</details>
Could you please help investigate and confirm this behavior? Thank you!
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def main():
with gr.Blocks() as demo:
with gr.Column():
#image = gr.Image(type="pil", image_mode="RGBA", label="Input")
gallery = gr.Gallery(columns=5, rows=5, show_share_button=False, interactive=True, height="500px", label="Input")
gr.Examples(
[["power.jpg"]],
inputs=[
gallery,
],
)
demo.queue(max_size=10)
demo.launch(inbrowser=True)
if __name__ == "__main__":
main()
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
The testing environment is Windows 10 with Python 3.10.9 and Gradio 5.6.0.
```
### Severity
Blocking usage of gradio | open | 2024-11-25T08:33:59Z | 2024-12-01T06:03:16Z | https://github.com/gradio-app/gradio/issues/10034 | [
"bug",
"enhancement"
] | avan06 | 2 |
graphql-python/graphql-core | graphql | 228 | Version 0.3.0 fails to install on python 3.12.3/ubuntu 24.04 with latest pip | ```
WARNING: Ignoring version 0.3.0 of graphql-ws since it has invalid metadata:
Requested graphql-ws<=0.3.0 from https://files.pythonhosted.org/packages/4b/32/85a8c99131149b1657347baca5528867046453272296452513da5d9d21ef/graphql_ws-0.3.0-py2.py3-none-any.whl (from ax) has invalid metadata: Expected matching RIGHT_PARENTHESIS for LEFT_PARENTHESIS, after version specifier
graphql-core (>=2.0<3)
~~~~~~^
Please use pip<24.1 if you need to use this version.
```
I get this error. In the metadata of the wheel, I see this:
`Requires-Dist: graphql-core (>=2.0<3)`
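For what it's worth, the specifier appears to be missing a comma: PEP 508 accepts `graphql-core (>=2.0,<3)` but not `(>=2.0<3)`. A quick check with the `packaging` library (assuming it is installed):

```python
from packaging.requirements import InvalidRequirement, Requirement

# The comma-separated form parses fine under PEP 508
req = Requirement("graphql-core (>=2.0,<3)")
assert req.name == "graphql-core"
assert len(req.specifier) == 2

# The published metadata omits the comma, which newer pip/packaging reject
try:
    Requirement("graphql-core (>=2.0<3)")
    raise AssertionError("expected InvalidRequirement")
except InvalidRequirement:
    pass
```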
Does this really *have* to be there, since newer pip versions seemingly cannot handle that? | open | 2024-09-26T22:26:49Z | 2024-09-27T22:13:11Z | https://github.com/graphql-python/graphql-core/issues/228 | [] | NormanTUD | 1 |
lux-org/lux | jupyter | 330 | [BUG] | **Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Please describe the steps needed to reproduce the behavior. For example:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Additional context**
Add any other context about the problem here. | closed | 2021-03-27T07:47:09Z | 2021-03-27T15:42:05Z | https://github.com/lux-org/lux/issues/330 | [] | ghost | 0 |
Farama-Foundation/Gymnasium | api | 1,032 | [Proposal] Space.seed should be reproducible for Space.seed | ### Proposal
`Space.seed` will return a list of seeds, i.e., `Discrete(3).seed(None) -> [1234]`
However, we can't do `Discrete(3).seed([1234])` to generate the same RNG
My proposal is that the following code should work for all gymnasium spaces:
```python
seeding_values = space.seed(None)
samples = [space.sample() for _ in range(3)]
space.seed(seeding_values)
new_samples = [space.sample() for _ in range(3)]
assert samples == new_samples
```
### Motivation
One of the aims of Gymnasium is reproducibility. For space seeding, a user currently must have a known seed for this to be possible, as `space.seed(None)` does not guarantee that you get back all the information needed to recreate the space (particularly for subspaces).
### Pitch
Modify the necessary `space.seed` functions to enable this feature
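A stand-in sketch of the proposed contract using a toy class (not Gymnasium's actual `Space` implementation): `seed(None)` draws and returns the seed actually used, and `seed([s])` must recreate the identical RNG state.

```python
import random

# Toy space illustrating the proposed seeding contract. In the real
# proposal, Gymnasium's Space.seed would behave analogously.
class ToySpace:
    def seed(self, seeding_values=None):
        if seeding_values is None:
            seeding_values = [random.randrange(2**31)]
        self._rng = random.Random(seeding_values[0])
        return seeding_values

    def sample(self):
        return self._rng.randrange(3)

space = ToySpace()
seeding_values = space.seed(None)
samples = [space.sample() for _ in range(3)]
space.seed(seeding_values)
new_samples = [space.sample() for _ in range(3)]
print(samples == new_samples)
```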
### Alternatives
This has worked in the past and no-one has asked for it yet
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2024-04-18T16:15:25Z | 2024-05-21T07:53:34Z | https://github.com/Farama-Foundation/Gymnasium/issues/1032 | [
"enhancement"
] | pseudo-rnd-thoughts | 2 |
abhiTronix/vidgear | dash | 299 | [Question]: urllib3 Error | Everything was fine and working healthily; then I took a tea break, and when I returned I started getting this error. I reinstalled Python and even recompiled OpenCV, but I still get the same error. I'm going crazy, please help.
OS: Windows 10
Python: 3.8.3
File "C:\Program Files\Python38\lib\site-packages\vidgear\gears\helper.py", line 44, in <module>
from requests.packages.urllib3.util.retry import Retry
ModuleNotFoundError: No module named 'requests.packages.urllib3.util.retry'; 'requests.packages.urllib3.util' is not a package | closed | 2022-03-14T15:12:28Z | 2022-04-01T11:51:14Z | https://github.com/abhiTronix/vidgear/issues/299 | [
"QUESTION :question:",
"SOLVED :checkered_flag:"
] | sweetngx | 3 |
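The `ModuleNotFoundError` in the traceback above is typically caused by newer `requests` releases dropping the vendored `requests.packages.urllib3` path. A hedged workaround sketch (assumption: importing `Retry` from the standalone `urllib3` package works on affected setups):

```python
# Hedged workaround sketch: prefer the standalone urllib3 package and fall
# back to the old vendored path, so neither import failure is fatal here.
try:
    from urllib3.util.retry import Retry
except ImportError:
    try:  # very old environments that only ship the vendored copy
        from requests.packages.urllib3.util.retry import Retry
    except ImportError:
        Retry = None  # neither import path is available in this environment

resolved = Retry is not None
print("fallback chain completed; Retry resolved:", resolved)
```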
zihangdai/xlnet | tensorflow | 222 | Is xlnet indeed context aware? | Hi All
I've been playing with Spacy and BERT and I'm trying to see how the embedding of each word varies across different context.
For example, for the following three sentences:
```python
nlp = spacy.load("en_pytt_bertbaseuncased_lg")

apple1 = nlp("Apple shares rose on the news.")
apple2 = nlp("Apple sold fewer iPhones this quarter.")
apple3 = nlp("Apple pie is delicious.")

print(apple1[0].similarity(apple2[0]))  # 0.73428553
print(apple1[0].similarity(apple3[0]))  # 0.43365782
```

```
0.7342856
0.43365765
```
As one would expect. So far so good. However, if I do the same w/
```python
nlp_xlnet = spacy.load("en_pytt_xlnetbasecased_lg")

apple1 = nlp_xlnet("Apple shares rose on the news.")
apple2 = nlp_xlnet("Apple sold fewer iPhones this quarter.")
apple3 = nlp_xlnet("Apple pie is delicious.")

print(apple1[0].similarity(apple2[0]))  # 0.73428553
print(apple1[0].similarity(apple3[0]))  # 0.43365782
```

```
0.9853272
0.9792127
```
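For reference, spaCy's `.similarity()` here computes cosine similarity over the token vectors being compared; a hedged stdlib sketch of that computation:

```python
import math

# Cosine similarity between two vectors: dot product divided by the
# product of the norms. Toy vectors below, purely for illustration.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

print(round(cosine([1.0, 0.0, 1.0], [1.0, 1.0, 0.0]), 4))
```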
It means that xlnet (at least in this example) is completely unaware of the context. Given xlnet's stellar GLUE and Squad2 results, I was really surprised by this finding. Granted, it's only a super trivial example, but still, it causes me to pause and scratch my head.
Has anyone else experienced similar results? Or maybe I've done something wrong, or simply missed how the whole thing was supposed to work?
Thank you for your input.
SH | open | 2019-08-28T23:33:06Z | 2020-08-01T16:03:15Z | https://github.com/zihangdai/xlnet/issues/222 | [] | studiocardo | 5 |
LAION-AI/Open-Assistant | python | 2,718 | instructions not correct and missing e2e stack setup | Hi, when starting with profile `inference` as in the docs there is no service described to connect to for prompting (`text-client` does not exist). I also could not find instructions to start the entire stack including the frontend. | closed | 2023-04-18T13:27:09Z | 2023-04-20T10:53:36Z | https://github.com/LAION-AI/Open-Assistant/issues/2718 | [] | Morriz | 1 |
BlinkDL/RWKV-LM | pytorch | 71 | Suggestion: use the concept of brain regions | Suggestion: use the concept of brain regions
Hello developers, would it be feasible to design the neural network the way the human brain is organized, using a partitioned, blockchain-like block structure?
Note: I don't know programming, but if this brings you new ideas, that would be great.
Partition the parameters, for example: 0 language, 1 images, 2 short-term memory, 3 audio
| closed | 2023-04-05T09:04:50Z | 2023-04-06T16:15:39Z | https://github.com/BlinkDL/RWKV-LM/issues/71 | [] | win10ogod | 1 |
microsoft/nni | tensorflow | 4,869 | Are there any tutorials to build NNI on existing frameworks (e.g., mmdetection/detectron2) | closed | 2022-05-18T10:41:13Z | 2022-08-01T02:40:11Z | https://github.com/microsoft/nni/issues/4869 | [
"question",
"documentation"
] | liming-ai | 5 | |
httpie/cli | rest-api | 1,463 | cookie is not being set | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
see httpie debug output below from WSL. also repro'd on Ubuntu 22.10
## Current result
`sid` cookie is not being set
## Expected result
`sid` cookie should be set. it works correctly using Chrome, Firefox, curl, and Python/requests with or without a `session`.
```py
import requests
base_url = 'https://gsroka-neto.oktapreview.com'
token = '...'
# Not using `session`:
r = requests.get(base_url + '/login/sessionCookieRedirect?redirectUrl=/&token=' + token)
sid = r.cookies.get('sid')
print(sid)
print(r.headers['set-cookie'])
u = requests.get(base_url + '/api/v1/users/me', cookies={'sid': sid}).json()
print(u['id'])
```
---
## Debug output
Please re-run the command with `--debug`, then copy the entire command & output and paste both below:
I've redacted actual token and cookie values with `XXX123`.
```bash
$ https -vv --debug --session=./cookies.json "https://gsroka-neto.oktapreview.com/login/sessionCookieRedirect?redirectUrl=/&token=token123"
HTTPie 3.2.1
Requests 2.25.1
Pygments 2.11.2
Python 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]
/usr/bin/python3
Linux 4.4.0-19041-Microsoft
<Environment {'apply_warnings_filter': <function Environment.apply_warnings_filter at 0x7f219af4e950>,
'args': Namespace(),
'as_silent': <function Environment.as_silent at 0x7f219af4e830>,
'colors': 256,
'config': {'__meta__': {'about': 'HTTPie configuration file',
'help': 'https://httpie.org/doc#config',
'httpie': '1.0.3'},
'default_options': []},
'config_dir': PosixPath('/home/gabrielsroka/.httpie'),
'devnull': <property object at 0x7f219af3a980>,
'is_windows': False,
'log_error': <function Environment.log_error at 0x7f219af4e8c0>,
'program_name': 'https',
'quiet': 0,
'rich_console': <functools.cached_property object at 0x7f219af357e0>,
'rich_error_console': <functools.cached_property object at 0x7f219af37310>,
'show_displays': True,
'stderr': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>,
'stderr_isatty': True,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>,
'stdin_encoding': 'utf-8',
'stdin_isatty': True,
'stdout': <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>
<PluginManager {'adapters': [],
'auth': [<class 'httpie.plugins.builtin.BasicAuthPlugin'>,
<class 'httpie.plugins.builtin.DigestAuthPlugin'>,
<class 'httpie.plugins.builtin.BearerAuthPlugin'>],
'converters': [],
'formatters': [<class 'httpie.output.formatters.headers.HeadersFormatter'>,
<class 'httpie.output.formatters.json.JSONFormatter'>,
<class 'httpie.output.formatters.xml.XMLFormatter'>,
<class 'httpie.output.formatters.colors.ColorFormatter'>]}>
>>> requests.request(**{'auth': None,
'data': RequestJSONDataDict(),
'headers': <HTTPHeadersDict('User-Agent': b'HTTPie/3.2.1')>,
'method': 'get',
'params': <generator object MultiValueOrderedDict.items at 0x7f219ac00f20>,
'url': 'https://gsroka-neto.oktapreview.com/login/sessionCookieRedirect?redirectUrl=/&token=token123'})
GET /login/sessionCookieRedirect?redirectUrl=/&token=token123 HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: gsroka-neto.oktapreview.com
User-Agent: HTTPie/3.2.1
HTTP/1.1 302 Found
Connection: keep-alive
Content-Length: 0
Date: Fri, 30 Dec 2022 14:14:38 GMT
Public-Key-Pins-Report-Only: pin-sha256="jZomPEBSDXoipA9un78hKRIeN/+U4ZteRaiX8YpWfqc="; pin-sha256="axSbM6RQ+19oXxudaOTdwXJbSr6f7AahxbDHFy3p8s8="; pin-sha256="SE4qe2vdD9tAegPwO79rMnZyhHvqj3i5g1c2HkyGUNE="; pin-sha256="ylP0lMLMvBaiHn0ihLxHjzvlPVQNoyQ+rMiaj0da/Pw="; max-age=60; report-uri="https://okta.report-uri.com/r/default/hpkp/reportOnly"
Server: nginx
Strict-Transport-Security: max-age=315360000; includeSubDomains
X-Robots-Tag: noindex,nofollow
cache-control: no-cache, no-store
content-language: en
content-security-policy: default-src 'self' gsroka-neto.oktapreview.com *.oktacdn.com; connect-src 'self' gsroka-neto.oktapreview.com gsroka-neto-admin.oktapreview.com *.oktacdn.com *.mixpanel.com *.mapbox.com app.pendo.io data.pendo.io pendo-static-5634101834153984.storage.googleapis.com pendo-static-5391521872216064.storage.googleapis.com *.mtls.oktapreview.com gsroka-neto.kerberos.oktapreview.com https://oinmanager.okta.com data:; script-src 'unsafe-inline' 'unsafe-eval' 'self' gsroka-neto.oktapreview.com *.oktacdn.com; style-src 'unsafe-inline' 'self' gsroka-neto.oktapreview.com *.oktacdn.com app.pendo.io cdn.pendo.io pendo-static-5634101834153984.storage.googleapis.com pendo-static-5391521872216064.storage.googleapis.com; frame-src 'self' gsroka-neto.oktapreview.com gsroka-neto-admin.oktapreview.com login.okta.com; img-src 'self' gsroka-neto.oktapreview.com *.oktacdn.com *.tiles.mapbox.com *.mapbox.com app.pendo.io data.pendo.io cdn.pendo.io pendo-static-5634101834153984.storage.googleapis.com pendo-static-5391521872216064.storage.googleapis.com data: blob:; font-src 'self' gsroka-neto.oktapreview.com data: *.oktacdn.com fonts.gstatic.com; frame-ancestors 'self'
expect-ct: report-uri="https://oktaexpectct.report-uri.com/r/t/ct/reportOnly", max-age=0
expires: 0
location: https://gsroka-neto.oktapreview.com/
p3p: CP="HONK"
pragma: no-cache
set-cookie: sid=""; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/, autolaunch_triggered=""; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/, JSESSIONID=jession123; Path=/; Secure; HttpOnly, t=summer; Path=/, DT=dt123;Version=1;Path=/;Max-Age=63072000;Secure;Expires=Sun, 29 Dec 2024 14:14:38 GMT;HttpOnly, sid=sid123; Path=/; Secure
x-frame-options: SAMEORIGIN
x-okta-request-id: req123
x-rate-limit-limit: 850
x-rate-limit-remaining: 849
x-rate-limit-reset: 1672409738
x-xss-protection: 0
```
## Additional information, screenshots, or code examples
note that the `sid` cookie appears twice in the `set-cookie` header: once at the beginning to clear it, once at the end to set it. i'm not sure if this is related.
i guess these are technically 2 `set-cookie` headers, but they're all joined with a `,`, whereas curl, etc, show them as separate headers -- which is useful for debugging. is there a way to show these separately using httpie?
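For illustration, here is a hedged toy sketch of pulling the final `sid` value out of a folded `Set-Cookie` header like the one above. Naive comma-splitting breaks on `Expires` dates (which contain commas), so this matches `sid=` assignments directly; the header string is shortened from the debug output and this is not HTTPie's or requests' real parser:

```python
import re

# Shortened, illustrative folded Set-Cookie value based on the debug output.
folded = (
    'sid=""; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/, '
    'JSESSIONID=abc; Path=/; Secure; HttpOnly, '
    'sid=sid123; Path=/; Secure'
)

# Match every `sid=` assignment; the last one wins per RFC 6265 processing.
sids = re.findall(r'(?:^|, )sid=("?[^;,]*"?)', folded)
print(sids[-1])
```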
Edit:
https://www.rfc-editor.org/rfc/rfc6265
> User agents MUST implement the more liberal processing rules defined in Section 5, in order to maximize interoperability with existing servers that do not conform to the well-behaved profile defined in Section 4.
> Origin servers SHOULD NOT fold multiple Set-Cookie header fields into a single header field. The usual mechanism for folding HTTP headers fields (i.e., as defined in [RFC2616]) might change the semantics of the Set-Cookie header field because the %x2C (",") character is used by Set-Cookie in a way that conflicts with such folding. | open | 2022-12-30T14:59:05Z | 2022-12-31T00:23:28Z | https://github.com/httpie/cli/issues/1463 | [
"bug",
"new"
] | gabrielsroka | 0 |
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 126 | Merging values into existing object from database | I could not find any documentation around merging incoming json fields with attributes already present in the database object in memory. I came up with the following code to achieve that. Wondering if there is an easier way to do this:
```python
from sqlalchemy.orm.attributes import set_attribute, flag_modified

@post_load
def make_author(self, data):
    author_from_db = db_helper.find_one_by_id(data['author_id'])
    if author_from_db:
        for attr, value in data.iteritems():
            set_attribute(author_from_db, attr, value)
            flag_modified(author_from_db, attr)
        return author_from_db
    else:
        return Author(**data)
```
In the above, `db_helper` is my custom helper to get the object from db like below:
```python
def find_one_by_author_id(author_id):
    authors = find_all_by_author_id([author_id])
    if authors:
        return authors[0]
    return None

def find_all_by_author_id(author_ids):
    session = db_setup.get_session()
    query = session.query(Author).filter(
        Author.author_id.in_(author_ids))
    return query.all()
``` | closed | 2018-03-08T20:38:38Z | 2018-12-04T18:05:56Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/126 | [] | tispratik | 7 |
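A minimal Python 3 sketch of the same merge-by-attribute idea, on plain objects rather than SQLAlchemy models (note that `dict.iteritems()` in the snippet above is Python 2; on Python 3 it is `dict.items()`):

```python
# Toy stand-in for the SQLAlchemy model; only shows the merge mechanics.
class Author:
    def __init__(self, author_id, name=None):
        self.author_id = author_id
        self.name = name

def merge_into(obj, data):
    # Copy each incoming field onto the existing object, in place.
    for attr, value in data.items():
        setattr(obj, attr, value)
    return obj

existing = Author(author_id=1, name="old name")
merged = merge_into(existing, {"name": "new name"})
print(merged is existing, merged.name)
```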
python-gitlab/python-gitlab | api | 2,349 | `project-merge-request cancel-merge-when-pipeline-succeeds` fails with `gitlab.exceptions.GitlabMROnBuildSuccessError: 404: 404 Not Found` | ## Description of the problem, including code/CLI snippet
When a pipeline is marked a merge on successful pipeline it seems this cannot be cancelled as the request is failing with a 404.
Running python-gitlab with `-d` shows that's it's doing a `PUT`:
```
DEBUG:urllib3.connectionpool:https://gitlab.<redacted>:443 "PUT /api/v4/projects/10/merge_requests/26/cancel_merge_when_pipeline_succeeds HTTP/1.1" 404 25
^^^^^
```
The documentation, however, specifies that a `POST` should be used: https://docs.gitlab.com/ee/api/merge_requests.html#cancel-merge-when-pipeline-succeeds
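Until the library issues a POST for this endpoint, a hedged stdlib sketch of building the call as a POST directly (placeholder names throughout; no request is actually sent here):

```python
import urllib.request

# Build the request as a POST, matching the documented API. `base_url`,
# `token`, `project_id`, and `mr_iid` are placeholders for illustration.
def cancel_on_success_request(base_url, token, project_id, mr_iid):
    url = (f"{base_url}/api/v4/projects/{project_id}"
           f"/merge_requests/{mr_iid}/cancel_merge_when_pipeline_succeeds")
    return urllib.request.Request(url, method="POST",
                                  headers={"PRIVATE-TOKEN": token})

req = cancel_on_success_request("https://gitlab.example.com", "token", 10, 26)
print(req.get_method())
```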
## Expected Behavior
The request to merge is cancelled when pipeline succeeded.
## Actual Behavior
```
# gitlab -v project-merge-request cancel-merge-when-pipeline-succeeds --project-id 10 --iid 26
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/gitlab/exceptions.py", line 333, in wrapped_f
return f(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/gitlab/v4/objects/merge_requests.py", line 188, in cancel_merge_when_pipeline_succeeds
server_data = self.manager.gitlab.http_put(path, **kwargs)
File "/usr/lib/python3.10/site-packages/gitlab/client.py", line 1067, in http_put
result = self.http_request(
File "/usr/lib/python3.10/site-packages/gitlab/client.py", line 798, in http_request
raise gitlab.exceptions.GitlabHttpError(
gitlab.exceptions.GitlabHttpError: 404: 404 Not Found
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/bin/gitlab", line 33, in <module>
sys.exit(load_entry_point('python-gitlab==3.10.0', 'console_scripts', 'gitlab')())
File "/usr/lib/python3.10/site-packages/gitlab/cli.py", line 377, in main
gitlab.v4.cli.run(
File "/usr/lib/python3.10/site-packages/gitlab/v4/cli.py", line 542, in run
data = g_cli.run()
File "/usr/lib/python3.10/site-packages/gitlab/v4/cli.py", line 81, in run
return self.do_custom()
File "/usr/lib/python3.10/site-packages/gitlab/v4/cli.py", line 102, in do_custom
return getattr(class_instance, method_name)(**self.args)
File "/usr/lib/python3.10/site-packages/gitlab/cli.py", line 71, in wrapped_f
return f(*args, **kwargs)
File "/usr/lib/python3.10/site-packages/gitlab/exceptions.py", line 335, in wrapped_f
raise error(e.error_message, e.response_code, e.response_body) from e
gitlab.exceptions.GitlabMROnBuildSuccessError: 404: 404 Not Found
```
## Specifications
- python-gitlab version: 3.10.0
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): 15.5.1 (but also observed on 15.3.3)
| closed | 2022-10-30T04:10:12Z | 2023-11-06T01:16:59Z | https://github.com/python-gitlab/python-gitlab/issues/2349 | [] | TheDJVG | 2 |
babysor/MockingBird | pytorch | 733 | 1 validation error for ParsingModel[Input] root -> 语音解码模型 none is not an allowed value (type=type_error.none.not_allowed) | **Summary [the AI voice mimicry page on the web reports this error: 1 validation error for ParsingModel[Input] root -> 语音解码模型 none is not an allowed value (type=type_error.none.not_allowed)]**
A clear and concise description of what the issue is.
**Env & To Reproduce[复现与环境]**
python3.9 Windows Server 2019
**Screenshots[截图(如有)]**
If applicable, add screenshots to help
| open | 2022-09-05T06:11:16Z | 2022-09-05T06:11:16Z | https://github.com/babysor/MockingBird/issues/733 | [] | 2563411574 | 0 |
comfyanonymous/ComfyUI | pytorch | 6,920 | ERROR lora diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 2048]' is invalid for input of size 491520 | ### Your question
ERROR lora diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight shape '[640, 2048]' is invalid for input of size 491520
### Logs
```powershell
```
### Other
_No response_ | open | 2025-02-22T10:58:21Z | 2025-03-24T11:30:31Z | https://github.com/comfyanonymous/ComfyUI/issues/6920 | [
"User Support",
"Stale"
] | lvhao007 | 1 |
slackapi/bolt-python | fastapi | 643 | 403 response (Received an unexpected response for handshake) when running socket handler in AWS ECS inside VPC | When I try to run my slack bot in AWS ECS, inside a VPC, I get a 403 error on attempting to start the handler. I am running the exact same code locally where it works fine. I have double checked that my local and AWS instances of my bot are using identical tokens.
### Reproducible in:
#### The `slack_bolt` version
slack-bolt==1.13.1
#### Python runtime version
Python 3.9.1
#### OS info
Can't provide - its running in an AWS ECS container
#### Steps to reproduce:
I have the below file deployed to ECS in a very simple docker container:
```python
import os

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

from loggers import LOGGER

SLACK_BOT_TOKEN = os.environ['SLACK_BOT_TOKEN']
SLACK_APP_TOKEN = os.environ['SLACK_APP_TOKEN']

app = App(
    token=SLACK_BOT_TOKEN,
)

if __name__ == "__main__":
    LOGGER.debug('Attempting to start Slack socket handler')
    handler = SocketModeHandler(app, SLACK_APP_TOKEN, logger=LOGGER, trace_enabled=True)
    LOGGER.debug(f'Created handler: {handler}')
    handler.start()
```
### Expected result:
The handler should connect successfully, as it does locally
### Actual result:
The handler fails to connect and I get a 403 error. Here's the traceback I am getting from Cloudwatch:

## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2022-05-10T12:27:54Z | 2022-08-08T00:10:59Z | https://github.com/slackapi/bolt-python/issues/643 | [
"question",
"need info",
"auto-triage-stale"
] | chris104957 | 9 |
pytorch/pytorch | numpy | 148,938 | [triton 3.3] `AOTInductorTestABICompatibleGpu.test_triton_kernel_tma_descriptor_1d_dynamic_False_cuda` | ### 🐛 Describe the bug
1. Update triton to `release/3.3.x` https://github.com/triton-lang/triton/tree/release/3.3.x
2. run `python test/inductor/test_aot_inductor.py -vvv -k test_triton_kernel_tma_descriptor_1d_dynamic_False_cuda`
Possibly an easier repro is
```
TORCHINDUCTOR_CPP_WRAPPER=1 python test/inductor/test_triton_kernels.py -k test_tma_descriptor_1d_dynamic_False_backend_inductor
```
errors:
<details>
```
/home/dberard/local/triton-env2/pytorch/torch/backends/cudnn/__init__.py:108: UserWarning: PyTorch was compiled without cuDNN/MIOpen support. To use cuDNN/MIOpen, rebuild PyTorch making sure the library is visible to the build system.
warnings.warn(
/home/dberard/local/triton-env2/pytorch/torch/backends/mkldnn/__init__.py:78: UserWarning: TF32 acceleration on top of oneDNN is available for Intel GPUs. The current Torch version does not have Intel GPU Support. (Triggered internally at /home/dberard/local/triton-env2/pytorch/aten/src/ATen/Context.cpp:148.)
torch._C._set_onednn_allow_tf32(_allow_tf32)
W0310 18:58:17.091000 2102274 torch/_export/__init__.py:67] +============================+
W0310 18:58:17.091000 2102274 torch/_export/__init__.py:68] | !!! WARNING !!! |
W0310 18:58:17.092000 2102274 torch/_export/__init__.py:69] +============================+
W0310 18:58:17.092000 2102274 torch/_export/__init__.py:70] torch._export.aot_compile()/torch._export.aot_load() is being deprecated, please switch to directly calling torch._inductor.aoti_compile_and_package(torch.export.export())/torch._inductor.aoti_load_package() instead.
ETEST SUITE EARLY TERMINATION due to torch.cuda.synchronize() failure
CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
======================================================================
ERROR: test_triton_kernel_tma_descriptor_1d_dynamic_False_cuda (__main__.AOTInductorTestABICompatibleGpu)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_comparison.py", line 1221, in not_close_error_metas
pair.compare()
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_comparison.py", line 700, in compare
self._compare_values(actual, expected)
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_comparison.py", line 830, in _compare_values
compare_fn(
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_comparison.py", line 1009, in _compare_regular_values_close
matches = torch.isclose(
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_internal/common_utils.py", line 3150, in wrapper
method(*args, **kwargs)
File "/home/dberard/local/triton-env2/pytorch/test/inductor/test_torchinductor.py", line 12836, in new_test
return value(self)
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_internal/common_utils.py", line 552, in instantiated_test
test(self, **param_kwargs)
File "/home/dberard/local/triton-env2/pytorch/test/inductor/test_aot_inductor.py", line 2568, in test_triton_kernel_tma_descriptor_1d
self.check_model(
File "/home/dberard/local/triton-env2/pytorch/test/inductor/test_aot_inductor_utils.py", line 207, in check_model
self.assertEqual(actual, expected, atol=atol, rtol=rtol)
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_internal/common_utils.py", line 4052, in assertEqual
error_metas = not_close_error_metas(
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_comparison.py", line 1228, in not_close_error_metas
f"Comparing\n\n"
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_comparison.py", line 367, in __repr__
body = [
File "/home/dberard/local/triton-env2/pytorch/torch/testing/_comparison.py", line 368, in <listcomp>
f" {name}={value!s},"
File "/home/dberard/local/triton-env2/pytorch/torch/_tensor.py", line 590, in __repr__
return torch._tensor_str._str(self, tensor_contents=tensor_contents)
File "/home/dberard/local/triton-env2/pytorch/torch/_tensor_str.py", line 710, in _str
return _str_intern(self, tensor_contents=tensor_contents)
File "/home/dberard/local/triton-env2/pytorch/torch/_tensor_str.py", line 631, in _str_intern
tensor_str = _tensor_str(self, indent)
File "/home/dberard/local/triton-env2/pytorch/torch/_tensor_str.py", line 363, in _tensor_str
formatter = _Formatter(get_summarized_data(self) if summarize else self)
File "/home/dberard/local/triton-env2/pytorch/torch/_tensor_str.py", line 146, in __init__
tensor_view, torch.isfinite(tensor_view) & tensor_view.ne(0)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
To execute this test, run the following from the base repo dir:
python test/inductor/test_aot_inductor.py AOTInductorTestABICompatibleGpu.test_triton_kernel_tma_descriptor_1d_dynamic_False_cuda
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
----------------------------------------------------------------------
Ran 1 test in 5.612s
FAILED (errors=1)
inline_call []
unimplemented []
stats [('calls_captured', 2), ('unique_graphs', 1)]
inductor [('extern_calls', 4), ('async_compile_cache_miss', 2), ('benchmarking.InductorBenchmarker.benchmark_gpu', 2), ('pattern_matcher_count', 1), ('pattern_matcher_nodes', 1), ('async_compile_cache_hit', 1)]
graph_break []
aten_mm_info []
```
</details>
errors w/ compute-sanitizer:
https://gist.github.com/davidberard98/ecd9fefff91393b3a3fa0725dea96e22
### Versions
triton: release/3.3.x
pytorch: viable/strict from mar 10
cc @chauhang @penguinwu @voznesenskym @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @kadeng @muchulee8 @amjames @aakhundov @bertmaher @int3 @nmacchioni @embg @peterbell10 @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @yushangdi @oulgen | open | 2025-03-11T02:00:45Z | 2025-03-13T16:58:07Z | https://github.com/pytorch/pytorch/issues/148938 | [
"oncall: pt2",
"module: inductor",
"upstream triton",
"oncall: export",
"module: aotinductor",
"module: user triton"
] | davidberard98 | 1 |
benbusby/whoogle-search | flask | 1,206 | [BUG] Getting Sponsored results (french language) | **Describe the bug**
Same issue as https://github.com/benbusby/whoogle-search/issues/1172, I have sponsored links showing in results. I guess there is no filtering for french sponsored links ("Sponsorisé" in french).
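A hedged sketch of the locale-aware filtering idea (toy code, not Whoogle's actual filter; the marker list is illustrative):

```python
# Treat a result as an ad if its label matches any localized "sponsored"
# marker, not just the English one. Markers below are examples only.
SPONSORED_MARKERS = {"sponsored", "sponsorisé", "anzeige", "patrocinado"}

def is_ad(label):
    return label.strip().lower() in SPONSORED_MARKERS

print(is_ad("Sponsorisé"))
```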
**To Reproduce**
Steps to reproduce the behavior:
1. Search anything
2. An ad could appear, always at the top
**Deployment Method**
- Docker
**Version of Whoogle Search**
- Latest build from Docker
**Desktop (please complete the following information):**
- OS: Win10
- Browser Firefox
- Version 133
See example:

Tell me if you need any information. Thank you!
| closed | 2024-12-18T16:20:19Z | 2025-01-17T00:31:51Z | https://github.com/benbusby/whoogle-search/issues/1206 | [
"bug"
] | Althior | 1 |
gunthercox/ChatterBot | machine-learning | 2,221 | prevent chatbot from learning curse words & replying with them | is it possible to prevent a bot from either learning foul language, or replying with it entirely?
i let other people test my chatbot and they immediately taught it how to swear | open | 2021-12-06T00:46:59Z | 2022-01-06T12:20:26Z | https://github.com/gunthercox/ChatterBot/issues/2221 | [] | ilikeapple10 | 2 |
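One common approach to the question above is to filter statements before the bot trains on them. A hedged stdlib sketch of the filtering idea only (ChatterBot's real training hooks are not shown; `BANNED` is a placeholder list):

```python
# Reject any statement containing a banned word so it is never learned.
# "badword1"/"badword2" stand in for an actual profanity list.
BANNED = {"badword1", "badword2"}

def is_clean(text):
    return not any(word in BANNED for word in text.lower().split())

print(is_clean("hello there"))
print(is_clean("you badword1"))
```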
thtrieu/darkflow | tensorflow | 695 | How to modify cfg/v1/yolo-full.cfg to run with different number of classes | For tiny-yolo it's clearly explained that in the cfg file we need to modify "filters" and "classes". What should be modified in the .cfg file (except, obviously, "classes") to run full YOLO with the desired number of classes? | open | 2018-04-03T21:32:15Z | 2018-05-24T23:21:13Z | https://github.com/thtrieu/darkflow/issues/695 | [] | ssusie | 1 |
axnsan12/drf-yasg | django | 605 | support coreapi.autoschema | Hi, I just switched to yasg, and I noticed my manual parameters are ignored in the resulting swagger page. I use coreapi.AutoSchema.get_manual_fields to add these. The definitions of the parameters are in the ApiView. How do I get DRF-YASG to include these?
```
class Django_API_Call(APIView):
    manual_fields = [
        coreapi.Field("list",
                      required=False,
                      location="query",
                      description="only return a list of available cron classes",
                      schema=coreschema.Boolean()),
        coreapi.Field("cron_class",
                      required=False,
                      location="query",
                      description="execute only this cron class",
                      schema=coreschema.String(), ),
    ]
    schema = AutoSchema(manual_fields=manual_fields)

    def get(self, request):
        return Response(data='data')
``` | closed | 2020-06-17T08:55:49Z | 2020-10-25T23:34:06Z | https://github.com/axnsan12/drf-yasg/issues/605 | [] | JorisBenschop | 1 |
deepset-ai/haystack | nlp | 8,991 | Drop greater than or equal to python 3.9 checks in type serialization | As a follow up to https://github.com/deepset-ai/haystack/issues/8971 and https://github.com/deepset-ai/haystack/issues/8894 we should drop the python 3.8 specific behavior used here https://github.com/deepset-ai/haystack/blob/c4fafd9b04a6d0988a23ecda626c2473891ef7e5/haystack/utils/type_serialization.py#L125
This will help to simplify the code and will eventually be needed to tackle the `typing` library deprecation. E.g. https://stackoverflow.com/questions/66738753/python-typing-deprecation | open | 2025-03-06T12:22:25Z | 2025-03-14T14:31:17Z | https://github.com/deepset-ai/haystack/issues/8991 | [
"P3"
] | sjrl | 0 |
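The Python 3.9+ behavior that makes the pre-3.9 branch unnecessary can be shown with a hedged stdlib sketch (toy illustration, not Haystack's actual `type_serialization` code):

```python
import typing

# On Python 3.9+, builtin generics such as list[int] are directly
# introspectable, so a separate pre-3.9 code path is no longer needed.
origin = typing.get_origin(list[int])
args = typing.get_args(list[int])
print(origin, args)
```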
yinkaisheng/Python-UIAutomation-for-Windows | automation | 245 | Win 11 x64 系统无法获取元素 | open | 2023-04-02T10:11:26Z | 2024-06-04T11:54:31Z | https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/245 | [] | qiwei123 | 3 | |
ansible/awx | django | 15,867 | NFS daemon is not coming up after glibc upgrade to 2.40 | I have a query regarding the compatibility between nfsd and glibc. In our system, we've upgraded glibc to version 2.40 and are using NFS version 2.1.1 with NFSv3. Previously, with glibc 2.23, everything was working fine, and we weren’t using libtirpc. However, after the glibc upgrade, libtirpc was included and enabled in NFS as well. Now, none of the NFS-related services (nfsd, rpc.statd, rpc.mountd, portmap) are running.
When attempting to start nfsd, the following errors occur:
"unable to set any sockets for nfsd"
"writing fd to kernel failed: errno 89 (Destination address required)" or "errno 111 (Connection refused)"
Console logs show:
"svc: failed to register nfsdv3 RPC service (errno 111)."
After upgrading glibc, the --enable-obsolete-rpc option has been removed from glibc. Can anyone provide guidance on how to debug or resolve this issue? | closed | 2025-03-03T11:33:11Z | 2025-03-12T15:31:23Z | https://github.com/ansible/awx/issues/15867 | [
"needs_triage",
"community"
] | melsamathew | 3 |
home-assistant/core | asyncio | 140,609 | SwitchBot Meter Pro (CO2 Monitor) Unvailable After Update | ### The problem
After updating Switchbot to V1.6, I no longer receive any data in homeassistant.
### What version of Home Assistant Core has the issue?
core-2025.3.1
### What was the last working version of Home Assistant Core?
core-2025.3.1
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Switchbot Bluetooth
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/switchbot
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-14T17:02:59Z | 2025-03-14T17:51:48Z | https://github.com/home-assistant/core/issues/140609 | [
"integration: switchbot"
] | hans-sein | 1 |
vitalik/django-ninja | django | 767 | [BUG] Union does not work properly | Here is my code required for minimal setup for the app:
models.py
```py
from django.db import models
from polymorphic.models import PolymorphicModel
class Page(models.Model):
name = models.CharField(max_length=50, unique=True)
class Section(PolymorphicModel):
page = models.ForeignKey(Page, on_delete=models.CASCADE)
class SimpleSection(Section):
text = models.CharField(max_length=255,blank=True)
class FeatureSection(Section):
name = models.CharField(max_length=50)
class FeatureCell(models.Model):
feature_section = models.ForeignKey(FeatureSection, on_delete=models.CASCADE, related_name='cells')
text = models.TextField()
class GallerySection(Section):
name = models.CharField(max_length=50)
class GalleryImage(models.Model):
gallery_section = models.ForeignKey(GallerySection, on_delete=models.CASCADE, related_name='images')
image = models.ImageField()
```
schema.py
```py
from typing import List, Union
from ninja import ModelSchema
from .models import (FeatureCell, FeatureSection, GalleryImage, GallerySection,
Page, Section, SimpleSection)
class SimpleSectionSchema(ModelSchema):
kind: str
class Config:
model = SimpleSection
model_fields = ['id', 'page', 'text']
@staticmethod
def resolve_kind(obj: Section) -> str:
return obj.__class__.__name__
class FeatureCellSchema(ModelSchema):
class Config:
model = FeatureCell
model_fields = ['id', 'text']
class FeatureSectionSchema(ModelSchema):
cells: List['FeatureCellSchema']
kind: str
class Config:
model = FeatureSection
model_fields = ['id', 'page', 'name']
@staticmethod
def resolve_kind(obj: Section) -> str:
return obj.__class__.__name__
class GalleryImageSchema(ModelSchema):
class Config:
model = GalleryImage
model_fields = ['id', 'image']
class GallerySectionSchema(ModelSchema):
images: List['GalleryImageSchema']
kind: str
class Config:
model = GallerySection
model_fields = ['id', 'name']
@staticmethod
def resolve_kind(obj: Section) -> str:
return obj.__class__.__name__
class PageSchema(ModelSchema):
sections: List[Union[SimpleSectionSchema, FeatureSectionSchema, GallerySectionSchema]]
class Config:
model = Page
model_fields = ['id', 'name']
@staticmethod
def resolve_sections(obj: Page) -> List[Union[SimpleSectionSchema, FeatureSectionSchema, GallerySectionSchema]]:
return obj.section_set.all()
```
api.py
```py
from typing import List
from django.shortcuts import get_object_or_404
from ninja import Router
from .models import Page
from .schemas import PageSchema
router = Router()
@router.get('/{page_id}', response=PageSchema)
def get_page(request, page_id: int):
return get_object_or_404(Page, id=page_id)
```
Django Ninja seems to get the schema very right:

However, when I query, the response treats all the records as if they were `SimpleSection` records (judging by the fields visible in the response). The fields specific to the `FeatureSection` and `GallerySection` kinds, `cells` and `images` respectively, are missing.
```json
{
"id": 1,
"name": "Test Page",
"sections": [
{
"id": 1,
"page": 1,
"text": "text goes here",
"kind": "SimpleSection"
},
{
"id": 2,
"page": 1,
"text": null,
"kind": "FeatureSection"
},
{
"id": 3,
"page": 1,
"text": null,
"kind": "GallerySection"
}
]
}
```
I swear it was working just a day ago, but now it has stopped, and I have no idea what changed. All package versions are locked in the requirements.txt file, so there is no way any of the packages was upgraded.
To me it looks like it could be a caching issue in the schema generation process, though I have not dived into the source code just yet.
Can you see if I am doing anything wrong, or is it indeed a bug?
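For what it's worth, pydantic v1 resolves a plain `Union` by trying members left to right and coercing, so the first schema that validates wins — which matches the `SimpleSection`-shaped output above. The usual fix is a discriminated union keyed on an explicit tag. Stripped of pydantic, the dispatch idea looks like this (all names below are illustrative stand-ins, not the real django-ninja/pydantic API):

```python
# Sketch of discriminator-based dispatch, which is what a pydantic
# discriminated union (Field(discriminator="kind")) does: pick the schema
# from an explicit tag instead of letting Union try members left-to-right,
# where the first schema whose fields coerce wins.

class SimpleSection:      # stand-ins for the polymorphic Django models
    pass

class FeatureSection:
    pass

class GallerySection:
    pass

# Fields each schema would serialize for its kind.
SCHEMA_FIELDS = {
    "SimpleSection": ("id", "page", "text", "kind"),
    "FeatureSection": ("id", "page", "name", "cells", "kind"),
    "GallerySection": ("id", "name", "images", "kind"),
}

def fields_for(section) -> tuple:
    """Dispatch on the concrete class name, i.e. the `kind` discriminator."""
    return SCHEMA_FIELDS[type(section).__name__]
```

In pydantic ≥ 1.9 the declarative equivalent is giving each schema a `kind: Literal["..."]` field and annotating the union with `Field(discriminator="kind")`; whether django-ninja 0.21 passes that through is something to verify.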
**Versions (please complete the following information):**
- Python version: 3.9
- Django version: 4.2
- Django-Ninja version: 0.21.0
- Pydantic version: 1.10.7
- Django-Polymorphic: 3.1.0
| closed | 2023-05-26T21:52:14Z | 2024-03-18T09:28:49Z | https://github.com/vitalik/django-ninja/issues/767 | [] | an0o0nym | 4 |
lexiforest/curl_cffi | web-scraping | 157 | tcp/ip request | Hello. I'm facing a problem. I work with a proxy that has a limit on the number of TCP/IP connections. I use `Connection: close` headers and also close the session with `session.close()`, but it still doesn't help; connections still remain in TIME_WAIT in TCPView. If someone has encountered this, I will be glad of your help, thank you!
MolSSI/cookiecutter-cms | pytest | 135 | GitHub actions to work with sphinx docs and conda packages | Hi! Our lab is recently working with your cookie-cutter to begin every new project. Thanks for the initiative and the work to keep it operative and well documented.
We'd like to suggest the inclusion of two GitHub actions that might be useful for most of the users. We missed the automatization of the two following workflows:
- The sphinx HTML documentation creation and its pushing to a gh-pages to be served by GitHub pages.
- The production of new conda packages with new releases and pre-releases, and their upload to Anaconda.
Because of this reason, we developed, based on other public actions, two actions to cover these needs:
https://github.com/uibcdf/action-sphinx-docs-to-gh-pages
https://github.com/uibcdf/action-build-and-upload-conda-packages
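For readers who want to roll their own, the first workflow can be sketched roughly like this (action versions, branch name, Python version, and paths are placeholders to adapt to your cookiecutter layout):

```yaml
# .github/workflows/docs.yml -- build Sphinx HTML and push it to gh-pages.
name: docs
on:
  push:
    branches: [main]
jobs:
  docs:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install sphinx
      - run: sphinx-build -b html docs docs/_build/html
      - uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: docs/_build/html
```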
I am not sure if every user of this cookie-cutter would find these useful. Maybe not; what's your opinion?
If you think that these workflows can be helpful to other colleagues. Please, feel free to copy or re-cook the scripts to be included here. No problem. Maybe MolSSI could have their own GitHub actions.
Thanks again for offering this tool to the community.
| open | 2021-07-23T15:51:09Z | 2021-07-30T16:21:05Z | https://github.com/MolSSI/cookiecutter-cms/issues/135 | [] | dprada | 4 |
ultralytics/yolov5 | deep-learning | 12,446 | yolov5-v7.0 training custom data encounter cpu problem | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
when I started training with custom data with following command
`python train.py --data ~/yolov5-7.0/data/custom_data.yaml --weights '' --cfg models/yolov5s.yaml --batch 16 --epochs 300 --hyp data/hyps/hyp.scratch-high.yaml --device 7`
Have no error. It will start training.
But the key problem is that it only uses two CPU cores in the dataloader. These CPU cores run at 100% and make training very slow.
I expected it to use the other CPU cores as well.
I reinstalled the environment several times, but that did not help.
In other projects, like PaddleDetection, the other CPU cores are used.
Any idea to fix this problem?
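Before digging into the dataloader itself, it may be worth checking whether the process is restricted by CPU affinity (common under some schedulers or containers), since that would squeeze all workers onto two cores regardless of yolov5's `--workers` setting. A stdlib-only check (Linux-only, as it relies on `sched_getaffinity`):

```python
import os

def cpu_visibility():
    """Return (total logical cores, cores this process is allowed to use)."""
    total = os.cpu_count() or 0
    usable = len(os.sched_getaffinity(0))  # Linux-only API
    return total, usable

if __name__ == "__main__":
    total, usable = cpu_visibility()
    print(f"cpu_count={total}, affinity-usable={usable}")
```

If `usable` comes back as 2 while `total` is much larger, the fix is at the scheduler/container level (e.g. `taskset -c 0-15 python train.py ...`), not in the training script.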
### Additional
env: python 3.8, pytorch 1.10.1, cuda 11.3 | closed | 2023-11-30T01:52:23Z | 2024-10-20T19:32:52Z | https://github.com/ultralytics/yolov5/issues/12446 | [
"question"
] | jiaminglei-lei | 4 |
Nike-Inc/koheesio | pydantic | 43 | [FEATURE] Add missing merge clause types in DeltaTableWriter | ## Is your feature request related to a problem? Please describe.
**DeltaTableWriter** provides the option to configure the writer in **MERGE** mode via the **output_mode_params** field, by providing a list of merge clauses under its **merge_builder** key. This is especially useful when the Delta table or the DataFrame to be merged is not available upfront. Not all merge clauses are currently supported (`whenMatchedUpdateAll` and `whenNotMatchedInsertAll` are not covered).
## Describe the solution you'd like
It should be possible to use all clauses in the configuration
## Describe alternatives you've considered
No alternative solutions can be used in this case because this is the only way of configuring the writer in MERGE mode when the table and the DataFrame are not available when we create the DeltaTableWriter
## Additional context
...
| closed | 2024-06-07T15:21:56Z | 2024-06-10T14:26:27Z | https://github.com/Nike-Inc/koheesio/issues/43 | [
"enhancement"
] | riccamini | 0 |
holoviz/panel | matplotlib | 7,407 | Cannot upload accepted_filetypes pdf | I'm on panel 1.5.2 using the FileDropper and trying to make a working example for https://discourse.holoviz.org/t/problem-with-accessing-pdf-file-s-input-in-panel/8339.

```python
import io
import panel as pn
from PyPDF2 import PdfReader

pn.extension('filedropper')

def transform_pdfs(value):
    pages = {}
    # FileDropper.value is a dict mapping file names to their byte contents
    for key, data in value.items():
        reader = PdfReader(io.BytesIO(data))
        pages[key] = reader.pages[0]
    print(pages)
file_input = pn.widgets.FileDropper(accepted_filetypes=[".pdf"])
pn.Column(
file_input, pn.bind(transform_pdfs, file_input)
).servable()
```
I've also tried `accepted_filetypes=["pdf"]` without success.
| closed | 2024-10-16T09:13:19Z | 2024-12-23T07:58:59Z | https://github.com/holoviz/panel/issues/7407 | [
"duplicate"
] | MarcSkovMadsen | 3 |
pytorch/vision | machine-learning | 8,916 | [free threading] Support for free threading Python in torchvision | The nightly builds for Python 3.13t are now enabled in Linux, Linux aarch64, MacOS and Windows:
https://hud.pytorch.org/hud/pytorch/vision/nightly/1?per_page=50&name_filter=3_13t
We would like to:
1. audit torchvision code for thread safety
2. perform parallel testing and identify failures
3. Work on fixing these failures
References:
https://py-free-threading.github.io/porting/
https://docs.python.org/3/howto/free-threading-python.html
cc @NicolasHug @malfet @albanD @scotts @rgommers
### Versions
0.22.0 nightly | open | 2025-02-18T14:43:35Z | 2025-02-18T14:46:41Z | https://github.com/pytorch/vision/issues/8916 | [] | atalman | 0 |
nicodv/kmodes | scikit-learn | 159 | How to properly match new data to existing centroids? | Hey
After a successful fit/predict I get a list of centroids using kprototype.cluster_centroids_ that looks like this (one cluster only):
`array(['-0.3', '1.4', '0.1', '-0.7', '1.4', '2.1', '0.4', '0.3', '2016-06-04', 'Berlin', 'XYZ', 'ABC'], dtype='<U32')`
The init-property of KPrototypes expects numerical and categorical centroids seperated, so I need to split the centroids like that:
```
numerical_centroids = kprototype.cluster_centroids_[:, :8]
categorical_centroids = kprototype.cluster_centroids_[:, 8:12]
```
Then I can initiate a new prediction process like this:
```
kprototype2 = KPrototypes(
n_jobs = -1,
n_clusters = 5,
init = [numerical_centroids, categorical_centroids],
random_state = 0
)
```
This results in this error:
`invalid literal for int() with base 10: '2016-06-04'
`
My question is: am I doing something wrong when providing initial centroids, or is this a bug? When I look into the source, I see that KPrototypes expects two lists, the first of type float and the second of type int:
```
centroids = [np.asarray(init[0], dtype=np.float64),
np.asarray(init[1], dtype=np.uint16)]
```
This is somewhat surprising, because I assume the second list contains the categorical centroids, not numerical ones.
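That dtype hint is the key: for `init`, k-prototypes expects the categorical centroids already label-encoded to the integer codes used during the original fit, which is why a raw `'2016-06-04'` string blows up in `int()`. A dependency-free sketch of that encoding step (the code assignment below is illustrative; it must mirror whatever encoding the original `fit` applied to the training data):

```python
# Encode string-valued categorical centroids to integer codes, column by
# column, before handing them to KPrototypes(init=[numerical, categorical]).

def encode_categorical(centroids, encoders=None):
    """centroids: list of rows of strings -> (list of rows of int codes, encoders)."""
    n_cols = len(centroids[0])
    if encoders is None:
        encoders = [{} for _ in range(n_cols)]
    encoded = []
    for row in centroids:
        out = []
        for j, value in enumerate(row):
            codes = encoders[j]
            if value not in codes:
                codes[value] = len(codes)   # assign the next unseen code
            out.append(codes[value])
        encoded.append(out)
    return encoded, encoders
```

The same `encoders` mapping would then be needed to interpret the clusters the second KPrototypes run produces.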
So, any help is highly appreciated, as I'm kind of stuck here :)
cheers
| closed | 2021-06-29T07:12:25Z | 2021-06-29T12:53:35Z | https://github.com/nicodv/kmodes/issues/159 | [] | nickyreinert | 1 |
pyro-ppl/numpyro | numpy | 1,865 | Refactor argument validation? | What are your thoughts on refactoring the argument validation code below into a separate method `validate_args()` (or another name) so it can be invoked on an existing instance? The motivation is that I often have jitted functions that return distribution instances which I would like to validate.
https://github.com/pyro-ppl/numpyro/blob/94f4b99710d855bea456210cf91e6e55eeac3926/numpyro/distributions/distribution.py#L231-L246 | closed | 2024-09-23T00:24:01Z | 2024-09-24T21:38:01Z | https://github.com/pyro-ppl/numpyro/issues/1865 | [
"enhancement"
] | tillahoffmann | 1 |
mckinsey/vizro | data-visualization | 713 | [Docs] Remove redundant provision of `id` in docs examples | We still have some examples where an `id` is provided to a component even though it is not required.
1. Look through the code examples in our docs e.g. `vizro-core/docs` and `vizro-ai/docs`
2. Remove the `id` from `vm.Graph`, `vm.Table`, `vm.AgGrid` or `vm.Card` if **it is not required**
#### When is it not required?
The `id` is normally not required if that component is not the target of any kind of action e.g. filter_interaction, export, filters or parameters. A good rule of thumb is, if the `id` appears only once in the entire app configuration, it's probably not required.
**Example of a redundant `id` provision** (and the first example where you can remove it from the docs):
In the first example the `id="scatter_chart"` is not required, because the Graph is not being targeted by any action. Also the `id` only appears once in the entire app configuration. In the second example it is required though, because it is now the target of the Filter.
```
from vizro import Vizro
import vizro.plotly.express as px
import vizro.models as vm
iris = px.data.iris()
page = vm.Page(
title="My first page",
components=[
vm.Graph(id="scatter_chart", figure=px.scatter(iris, x="sepal_length", y="petal_width", color="species")),
],
)
dashboard = vm.Dashboard(pages=[page])
Vizro().build(dashboard).run()
```
**Example where the `id` is required:**
```
from vizro import Vizro
import vizro.plotly.express as px
import vizro.models as vm
iris = px.data.iris()
page = vm.Page(
title="My first page",
components=[
vm.Graph(id="scatter_chart", figure=px.scatter(iris, x="sepal_length", y="petal_width", color="species")),
vm.Graph(id="scatter_chart2", figure=px.scatter(iris, x="petal_length", y="sepal_width", color="species")),
],
controls=[
vm.Filter(column="petal_length",targets=["scatter_chart"],selector=vm.RangeSlider(step=1)),
],
)
dashboard = vm.Dashboard(pages=[page])
Vizro().build(dashboard).run()
``` | closed | 2024-09-17T13:10:10Z | 2024-11-25T14:37:34Z | https://github.com/mckinsey/vizro/issues/713 | [
"Docs :spiral_notepad:",
"Good first issue :baby_chick:",
"hacktoberfest"
] | huong-li-nguyen | 3 |
Esri/arcgis-python-api | jupyter | 1,613 | OSError: [Errno 22] Invalid argument when I created a new map widgets or WebMap. Any solutions to solve this? | **Describe the bug**
I tried to run it both in Spyder and in Jupyter Notebook through Anaconda, with Python 3.9.17 and the ArcGIS API package installed. Every time I execute `gis.map()` or `WebMap()`, the error pops up.
**To Reproduce**
Steps to reproduce the behavior:
```python
from arcgis.gis import GIS
from arcgis.mapping import WebMap

gis = GIS()
map1 = gis.map()
# or
empty_webmap = WebMap()
```
error:
```python
OSError Traceback (most recent call last)
Cell In[9], line 2
1 from arcgis.gis import GIS
----> 2 map1 = gis.map()
File ~\.conda\envs\Python3917\lib\site-packages\arcgis\gis\__init__.py:1325, in GIS.map(self, location, zoomlevel, mode, geocoder)
1323 mapwidget = MapView(gis=self, item=location, mode=mode)
1324 else:
-> 1325 mapwidget = MapView(gis=self, mode=mode)
1327 # Geocode the location
1328 if isinstance(location, str):
File ~\.conda\envs\Python3917\lib\site-packages\arcgis\widgets\_mapview\_mapview.py:900, in MapView.__init__(self, gis, item, mode, **kwargs)
893 def __init__(self, gis=None, item=None, mode="2D", **kwargs):
894 """Constructor of Map widget.
895 Accepts the following keyword arguments:
896 gis The gis instance with which the map widget works, used for authentication, and adding secure layers and
897 private items from that GIS
898 item web map item from portal with which to initialize the map widget
899 """
--> 900 super(MapView, self).__init__(**kwargs)
901 self._uuid = str(uuid4())
903 # Set up the visual display of the layout
File ~\.conda\envs\Python3917\lib\site-packages\ipywidgets\widgets\widget.py:480, in Widget.__init__(self, **kwargs)
477 super(Widget, self).__init__(**kwargs)
479 Widget._call_widget_constructed(self)
--> 480 self.open()
File ~\.conda\envs\Python3917\lib\site-packages\ipywidgets\widgets\widget.py:503, in Widget.open(self)
500 if self._model_id is not None:
501 args['comm_id'] = self._model_id
--> 503 self.comm = Comm(**args)
File ~\.conda\envs\Python3917\lib\site-packages\ipykernel\comm\comm.py:73, in Comm.__init__(self, *args, **kwargs)
71 def __init__(self, *args, **kwargs):
72 # Comm takes positional arguments, LoggingConfigurable does not, so we explicitly forward arguments
---> 73 traitlets.config.LoggingConfigurable.__init__(self, **kwargs)
74 # drop arguments not in BaseComm
75 kwargs.pop("kernel", None)
File ~\.conda\envs\Python3917\lib\site-packages\traitlets\config\configurable.py:86, in Configurable.__init__(self, **kwargs)
83 config = kwargs.pop("config", None)
85 # load kwarg traits, other than config
---> 86 super().__init__(**kwargs)
88 # record traits set by config
89 config_override_names = set()
File ~\.conda\envs\Python3917\lib\site-packages\traitlets\traitlets.py:1367, in HasTraits.__init__(self, *args, **kwargs)
1364 self.notify_change(changes[key])
1366 try:
-> 1367 super().__init__(*super_args, **super_kwargs)
1368 except TypeError as e:
1369 arg_s_list = [repr(arg) for arg in super_args]
File ~\.conda\envs\Python3917\lib\site-packages\comm\base_comm.py:56, in BaseComm.__init__(self, target_name, data, metadata, buffers, comm_id, primary, target_module, topic, _open_data, _close_data, **kwargs)
52 self._closed = True
54 if self.primary:
55 # I am primary, open my peer.
---> 56 self.open(data=data, metadata=metadata, buffers=buffers)
57 else:
58 self._closed = False
File ~\.conda\envs\Python3917\lib\site-packages\comm\base_comm.py:80, in BaseComm.open(self, data, metadata, buffers)
78 comm_manager.register_comm(self)
79 try:
---> 80 self.publish_msg(
81 "comm_open",
82 data=data,
83 metadata=metadata,
84 buffers=buffers,
85 target_name=self.target_name,
86 target_module=self.target_module,
87 )
88 self._closed = False
89 except Exception:
File ~\.conda\envs\Python3917\lib\site-packages\ipykernel\comm\comm.py:33, in BaseComm.publish_msg(self, msg_type, data, metadata, buffers, **keys)
30 if self.kernel is None:
31 self.kernel = Kernel.instance()
---> 33 self.kernel.session.send(
34 self.kernel.iopub_socket,
35 msg_type,
36 content,
37 metadata=json_clean(metadata),
38 parent=self.kernel.get_parent("shell"),
39 ident=self.topic,
40 buffers=buffers,
41 )
File ~\.conda\envs\Python3917\lib\site-packages\jupyter_client\session.py:850, in Session.send(self, stream, msg_or_type, content, parent, ident, buffers, track, header, metadata)
848 if self.adapt_version:
849 msg = adapt(msg, self.adapt_version)
--> 850 to_send = self.serialize(msg, ident)
851 to_send.extend(buffers)
852 longest = max([len(s) for s in to_send])
File ~\.conda\envs\Python3917\lib\site-packages\jupyter_client\session.py:719, in Session.serialize(self, msg, ident)
717 content = self.none
718 elif isinstance(content, dict):
--> 719 content = self.pack(content)
720 elif isinstance(content, bytes):
721 # content is already packed, as in a relayed message
722 pass
File ~\.conda\envs\Python3917\lib\site-packages\jupyter_client\session.py:95, in json_packer(obj)
93 """Convert a json object to a bytes."""
94 try:
---> 95 return json.dumps(
96 obj,
97 default=json_default,
98 ensure_ascii=False,
99 allow_nan=False,
100 ).encode("utf8", errors="surrogateescape")
101 except (TypeError, ValueError) as e:
102 # Fallback to trying to clean the json before serializing
103 packed = json.dumps(
104 json_clean(obj),
105 default=json_default,
106 ensure_ascii=False,
107 allow_nan=False,
108 ).encode("utf8", errors="surrogateescape")
File ~\.conda\envs\Python3917\lib\json\__init__.py:234, in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
232 if cls is None:
233 cls = JSONEncoder
--> 234 return cls(
235 skipkeys=skipkeys, ensure_ascii=ensure_ascii,
236 check_circular=check_circular, allow_nan=allow_nan, indent=indent,
237 separators=separators, default=default, sort_keys=sort_keys,
238 **kw).encode(obj)
File ~\.conda\envs\Python3917\lib\json\encoder.py:199, in JSONEncoder.encode(self, o)
195 return encode_basestring(o)
196 # This doesn't pass the iterator directly to ''.join() because the
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
File ~\.conda\envs\Python3917\lib\json\encoder.py:257, in JSONEncoder.iterencode(self, o, _one_shot)
252 else:
253 _iterencode = _make_iterencode(
254 markers, self.default, _encoder, self.indent, floatstr,
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
File ~\.conda\envs\Python3917\lib\site-packages\jupyter_client\jsonutil.py:111, in json_default(obj)
109 if isinstance(obj, datetime):
110 obj = _ensure_tzinfo(obj)
--> 111 return obj.isoformat().replace('+00:00', 'Z')
113 if isinstance(obj, bytes):
114 return b2a_base64(obj, newline=False).decode('ascii')
File ~\.conda\envs\Python3917\lib\site-packages\dateutil\tz\tz.py:222, in tzlocal.utcoffset(self, dt)
219 if dt is None and self._hasdst:
220 return None
--> 222 if self._isdst(dt):
223 return self._dst_offset
224 else:
File ~\.conda\envs\Python3917\lib\site-packages\dateutil\tz\tz.py:291, in tzlocal._isdst(self, dt, fold_naive)
288 return False
290 # Check for ambiguous times:
--> 291 dstval = self._naive_is_dst(dt)
292 fold = getattr(dt, 'fold', None)
294 if self.is_ambiguous(dt):
File ~\.conda\envs\Python3917\lib\site-packages\dateutil\tz\tz.py:260, in tzlocal._naive_is_dst(self, dt)
258 def _naive_is_dst(self, dt):
259 timestamp = _datetime_to_timestamp(dt)
--> 260 return time.localtime(timestamp + time.timezone).tm_isdst
OSError: [Errno 22] Invalid argument
```
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Expected behavior**
A new map should be created.
**Platform (please complete the following information):**
- OS: Window 10
- Browser: Edge
- Python API Version: 2.0.0
**Additional context**
Add any other context about the problem here, attachments etc.
| closed | 2023-07-27T07:44:14Z | 2023-07-30T18:17:04Z | https://github.com/Esri/arcgis-python-api/issues/1613 | [
"bug"
] | locnguyenle123 | 5 |
kennethreitz/responder | graphql | 119 | Specify version of marshmallow | According to marshmallow's [changelog](https://github.com/marshmallow-code/marshmallow/blob/dev/CHANGELOG.rst#300b7-2018-02-03), the schema's `dump` method returns data directly only for versions >= `3.0.0b7`; for older versions `dump` returns a [MarshalResult](https://github.com/marshmallow-code/marshmallow/blob/3.0.0b6/marshmallow/schema.py#L26).
By [default](https://pypi.org/project/marshmallow/#history), an older version of marshmallow was installed on my system.
```bash
➜ responder_python_test mkvirtualenv --python python3.7 test_marshmallow_default_version
(test_marshmallow_default_version) ➜ responder_python_test pip install responder
(test_marshmallow_default_version) ➜ responder_python_test pip freeze | grep marshmallow
marshmallow==2.16.0
```
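Given the title's suggestion, the minimal fix on responder's side would be declaring a floor on the dependency; as a sketch (the exact packaging file responder uses is an assumption):

```python
# setup.py dependency spec (sketch): pin the minimum marshmallow version
# whose Schema.dump() returns data directly instead of a MarshalResult.
install_requires = [
    "marshmallow>=3.0.0b7",
]
```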
So [this](http://python-responder.org/en/latest/tour.html#openapi-schema-support) example is incorrect until you update marshmallow manually.
Perhaps the best solution would be to fix the documentation. | closed | 2018-10-22T16:42:14Z | 2018-10-22T21:06:50Z | https://github.com/kennethreitz/responder/issues/119 | [] | Pentusha | 0 |
huggingface/diffusers | pytorch | 10,553 | All training scripts might be wrong when using gradients accumulation! | Here is a simple case:
```
loss = loss.mean()
accelerator.backward(loss)
if accelerator.sync_gradients:
    params_to_clip = flux_transformer.parameters()
    accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
```
should be updated to
```
loss = loss.mean()
accelerator.backward(loss)
if accelerator.sync_gradients:
    params_to_clip = flux_transformer.parameters()
    accelerator.clip_grad_norm_(params_to_clip, args.max_grad_norm)
    optimizer.step()
    lr_scheduler.step()
    optimizer.zero_grad()
```
@yiyixuxu | closed | 2025-01-13T06:50:13Z | 2025-01-15T01:52:43Z | https://github.com/huggingface/diffusers/issues/10553 | [] | chenbinghui1 | 7 |
facebookresearch/fairseq | pytorch | 5,591 | question | > tamam orjinal comsensetive agresifasttoessto
_Originally posted by @contens3 in [d871f61](https://github.com/facebookresearch/fairseq/commit/d871f6169f8185837d1c11fb28da56abfd83841c#r151588777)_ | open | 2025-01-21T16:54:33Z | 2025-01-21T16:54:33Z | https://github.com/facebookresearch/fairseq/issues/5591 | [] | contens3 | 0 |
pallets-eco/flask-sqlalchemy | flask | 450 | flask-sqlalchemy ignores the setting "app.json_encoder" | When I provide a customized JSON encoder, it is ignored by Flask-SQLAlchemy.
Let's say I have:
```
from flask import Flask
app = Flask(__name__)
app.json_encoder = MyCustomJSONEncoder
```
Then this encoder should be used when creating the engine, like this:
```
sqlalchemy.create_engine(
...,
json_serializer=lambda o: json.dumps(o, cls=app.json_encoder),
json_deserializer=lambda j: app.json_encoder.loads(j),
...)
``` | closed | 2016-12-05T16:45:31Z | 2020-12-05T19:58:29Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/450 | [
"config"
] | TimoStolz | 2 |
charlesq34/pointnet | tensorflow | 194 | SUNRGBD prediction dump | Hi,
thanks a lot for providing the code.
Is there any chance to get the pickled predictions for the SunRGBD dataset used in the paper?
Thanks
| closed | 2019-08-27T15:14:55Z | 2019-08-27T15:45:48Z | https://github.com/charlesq34/pointnet/issues/194 | [] | kilianyp | 0 |
pyg-team/pytorch_geometric | deep-learning | 9,384 | Add docs for environment setup on XPU device | ### 📚 Describe the documentation issue
Currently the environment setting for XPU device is very concise, I only found two of them:
- One in benchmark folder, see [here](https://github.com/pyg-team/pytorch_geometric/tree/master/benchmark/multi_gpu/training#environment-setup), see screenshot below.

- One in example folder, in the comment of the code, see [here](https://github.com/pyg-team/pytorch_geometric/blob/master/examples/multi_gpu/distributed_sampling_xpu.py#L5), see screenshot below.

However, it assumes users have good knowledge of setting up the environment on Intel GPUs, which is not the case for beginners. Just following the brief guides above, users are unlikely to be able to launch the scripts on an Intel GPU.
### Suggest a potential alternative/fix
Give a more detailed guide for setting up the environment on Intel GPUs. I would like to help with the following docs:
- A doc giving a detailed guide on setting up the environment on a bare-metal server or in a Docker runtime, with related links
- A Dockerfile with everything set up in advance, which users could use to build the image on their own server
"documentation"
] | zhouyu5 | 3 |
pennersr/django-allauth | django | 3,831 | ratelimit traceback with `secure_admin_login()` | We upgraded to 0.63.1 and added `secure_admin_login()` per the [doc](https://docs.allauth.org/en/latest/common/admin.html) but `runserver` gives us a traceback. I've recreated with the regular-django example in this project's source. Here's the patch I applied (following the docs) to recreate the issue:
```$ git diff
diff --git a/examples/regular-django/example/urls.py b/examples/regular-django/example/urls.py
index ffdf8ec2..c099529d 100644
--- a/examples/regular-django/example/urls.py
+++ b/examples/regular-django/example/urls.py
@@ -2,8 +2,11 @@ from django.contrib import admin
from django.urls import include, path
from django.views.generic.base import TemplateView
+from allauth.account.views import login as secure_admin_login
+
admin.autodiscover()
+admin.site.login = secure_admin_login(admin.site.login)
urlpatterns = [
path("", TemplateView.as_view(template_name="index.html")),
````
Once you create the virtualenv per `examples/regular-django/example/Readme.org` you'll get the following traceback with `runserver`:
```
$ python manage.py runserver 8999
Watching for file changes with StatReloader
Performing system checks...
Exception in thread django-main-thread:
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/django/core/management/commands/runserver.py", line 133, in inner_run
self.check(display_num_errors=True)
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/django/core/management/base.py", line 486, in check
all_issues = checks.run_checks(
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/django/core/checks/registry.py", line 88, in run_checks
new_errors = check(app_configs=app_configs, databases=databases)
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/django/core/checks/urls.py", line 14, in check_url_config
return check_resolver(resolver)
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/django/core/checks/urls.py", line 24, in check_resolver
return check_method()
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/django/urls/resolvers.py", line 519, in check
for pattern in self.url_patterns:
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/django/utils/functional.py", line 47, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/django/urls/resolvers.py", line 738, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/django/utils/functional.py", line 47, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/django/urls/resolvers.py", line 731, in urlconf_module
return import_module(self.urlconf_name)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/me/devel/source/django-allauth/examples/regular-django/example/urls.py", line 9, in <module>
admin.site.login = secure_admin_login(admin.site.login)
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/django/views/generic/base.py", line 104, in view
return self.dispatch(request, *args, **kwargs)
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/django/utils/decorators.py", line 48, in _wrapper
return bound_method(*args, **kwargs)
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/allauth/decorators.py", line 10, in wrap
resp = ratelimit.consume_or_429(request, action=action, **rl_kwargs)
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/allauth/core/ratelimit.py", line 129, in consume_or_429
if not consume(request, *args, **kwargs):
File "/home/me/devel/source/django-allauth/examples/regular-django/venv/lib/python3.10/site-packages/allauth/core/ratelimit.py", line 95, in consume
if not request or request.method == "GET":
AttributeError: 'function' object has no attribute 'method'
``` | closed | 2024-05-20T13:53:16Z | 2024-05-20T14:30:13Z | https://github.com/pennersr/django-allauth/issues/3831 | [] | rcj4747 | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,547 | I tried to input 1224*370 images for training |
Thank you for your dedication to this project. I am attempting to directly input 1224*370 images for training, and I set
`'--preprocess', type=str, default='none'`. However, I have found that my GPU with 12 GB of memory is not sufficient to meet the training requirements. Is there any other way for me to succeed? | open | 2023-03-03T13:04:45Z | 2023-03-03T13:04:45Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1547 | [] | a-free-a | 0
GibbsConsulting/django-plotly-dash | plotly | 126 | 'loading' page displayed when rendering a template with a plotly_dash app embedded in it | FYI versions are:
dash 0.36.0
dash-core-components 0.43.0
dash-html-components 0.13.5
django-plotly-dash 0.9.8
Installation of django-plotly-dash is OK,
but when rendering the template, it always gets stuck at loading the dash app.
Template file:
......{% load plotly_dash %}
{% plotly_app name="SimpleExample" %}
......
So, what can I do next? Thanks!
-----------------
mysite/setting.py:
-----------------
INSTALLED_APPS = [
'main.apps.MainConfig',
'chart',
'userAuth',
'django_dash',
'django_plotly_dash.apps.DjangoPlotlyDashConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django_plotly_dash.middleware.BaseMiddleware',
]
ROOT_URLCONF = 'mysite.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'mysite.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.1/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/2.1/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.1/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'Asia/Shanghai'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.1/howto/static-files/
STATIC_ROOT = '/var/www/mysite/static/'
STATIC_URL = '/static/'
-----------------------------
mysite/urls.py
----------------------------
from django.contrib import admin
#from django.urls import path
from django.urls import path, include
from django.conf.urls import url
# Use static() to add url mapping to serve static files during development (only)
from django.conf import settings
from django.conf.urls.static import static
from django_plotly_dash.views import add_to_session
urlpatterns = [
path('',include('main.urls')),
path('chart/',include('chart.urls')),
url('^django_plotly_dash/', include('django_plotly_dash.urls')),
path('django_dash/',include('django_dash.urls')),
path('admin/', admin.site.urls),
path('userAuth/', include('userAuth.urls')),
#dash app
#path('dash_within_django/', include('dash_within_django.urls')),
]
------------------------
Django_dash/urls.py
------------------------
from django.views.generic import TemplateView
from django.urls import path, re_path, include
from django.conf.urls import url
from . import views
from . import plotly_apps
from django_plotly_dash.views import add_to_session
app_name = 'django_dash'
urlpatterns = [
url('^$', views.index_view, name="index_view"),
]
| closed | 2019-03-10T05:18:22Z | 2019-04-16T07:41:56Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/126 | [] | fengjunCN | 5 |
KaiyangZhou/deep-person-reid | computer-vision | 273 | Can I train on my own data? | I am new to ReID. I see that several projects use Duke, Market, etc. Can I create a dataset that contains my own images? Thanks in advance | closed | 2019-12-09T06:48:47Z | 2020-05-18T10:09:53Z | https://github.com/KaiyangZhou/deep-person-reid/issues/273 | [] | lingbo666 | 1
python-visualization/folium | data-visualization | 1,422 | CustomIcons using TimestampedGeoJson | **Describe the bug**
I'm using folium maps with the TimestampedGeoJson plugin and just realized that custom icons are not working with it.
**To Reproduce**
Use TimestampedGeoJson example trying to use Custom Icons for markers.
**Expected behavior**
Plot CustomIcons over maps using TimestampedGeoJson :).
**Environment (please complete the following information):**
- Firefox
- Jupyter Notebook
- Python 3.8.3
- folium 0.11.0
- branca 0.4.1
**Possible solutions**
Add possibility for using CustomIcons :) | closed | 2020-11-27T11:00:45Z | 2023-11-30T16:24:47Z | https://github.com/python-visualization/folium/issues/1422 | [] | carluqcor | 6 |
plotly/dash-core-components | dash | 362 | Input with n_blur loses focus and gets the initial values back in the component | This is the source of the Input percy diffs I have been trying to debug for the last week. The component loses focus after a text assert, and the initial value somehow gets prepended to the value.
https://percy.io/plotly/dash/builds/1118236 | closed | 2018-11-02T18:48:51Z | 2018-12-04T20:15:40Z | https://github.com/plotly/dash-core-components/issues/362 | [
"dash-type-bug"
] | T4rk1n | 1 |
huggingface/datasets | tensorflow | 7,107 | load_dataset broken in 2.21.0 | ### Describe the bug
`eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
used to work till 2.20.0 but doesn't work in 2.21.0
In 2.20.0:

in 2.21.0:

### Steps to reproduce the bug
1. Spin up a new google collab
2. `pip install datasets==2.21.0`
3. `import datasets`
4. `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
5. Will throw an error.
### Expected behavior
Try steps 1-5 again but replace datasets version with 2.20.0, it will work
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.5
- PyArrow version: 17.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2024.5.0
| closed | 2024-08-16T14:59:51Z | 2024-08-18T09:28:43Z | https://github.com/huggingface/datasets/issues/7107 | [] | anjor | 4 |
graphistry/pygraphistry | pandas | 370 | [BUG] Personal org should not propagate to uploads | When `register(api=3, username=..., password=...)`, with no `org_name`, the client<>server action gets confused on legacy pre-org servers
* [x] the client should NOT send org_name to the server, e.g., it should NOT send `org_name: None` or `org_name: <personalorg>` <-- this seems to currently be confusing existing clients <> old servers
* [x] arrow_uploader should record `None` for the org_name instead of not recording it: https://github.com/graphistry/pygraphistry/blob/322be2d30784842571cf9f087193d26c1e1633da/graphistry/arrow_uploader.py#L163
* [x] api calls should drop key `org_name` when it is `None`, e.g., https://github.com/graphistry/pygraphistry/blob/322be2d30784842571cf9f087193d26c1e1633da/graphistry/arrow_uploader.py#L173
* [x] audit: `arrow_uploader`, `ArrowFileUploader` (via `as_files=True`, called by `plot()` -> `arrow_uploader`), login
* [x] instead, the server should autofill `org = <users's default org>` <--- File/Dataset via AccessControlModel seems to do `org = self.author.organization`, which is personal org, not default organization | open | 2022-07-07T00:20:47Z | 2022-07-15T15:16:35Z | https://github.com/graphistry/pygraphistry/issues/370 | [
"bug"
] | lmeyerov | 1 |
ClimbsRocks/auto_ml | scikit-learn | 310 | let the user pass in their own model training code? | we expect it to take in a scipy sparse matrix, and return a trained model.
that trained model must then have a .predict (and ideally .predict_proba if a classifier) that takes in a scipy sparse matrix, and returns a list of predictions (one per prediction row).
this lets the user get crazy, and implement any algos they want.
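The contract described above can be sketched as follows. Everything here is hypothetical illustration (none of these names exist in auto_ml): a user-supplied callable takes a scipy sparse matrix plus labels and returns a trained object exposing `.predict` / `.predict_proba`:

```python
import numpy as np
from scipy import sparse

class UserModel:
    """Hypothetical user-supplied model honoring the sparse-in/list-out contract."""

    def fit(self, X, y):
        # toy "training": remember the majority class
        vals, counts = np.unique(y, return_counts=True)
        self.majority_ = int(vals[np.argmax(counts)])
        return self

    def predict(self, X):
        # one prediction per row of the sparse matrix
        return [self.majority_] * X.shape[0]

    def predict_proba(self, X):
        p = [0.0, 1.0] if self.majority_ == 1 else [1.0, 0.0]
        return [list(p) for _ in range(X.shape[0])]

def user_training_code(X_sparse, y):
    """The shape of the hook: sparse matrix in, trained model out."""
    return UserModel().fit(X_sparse, y)
```

A real implementation would swap the toy logic for any algorithm; the only hard requirements are the signatures.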
combine this with .transform_only, and we're pretty covered. | open | 2017-08-02T01:59:52Z | 2017-08-02T01:59:52Z | https://github.com/ClimbsRocks/auto_ml/issues/310 | [] | ClimbsRocks | 0 |
dpgaspar/Flask-AppBuilder | rest-api | 2,304 | Issue while enabling okta on Airflow 2.10.4 | Hi Airflow community, I was trying to enable okta for the first time in our airflow application but facing challenges. Can someone please help us validate our configs and let us know if we are missing something on our end?
```
Airflow version: 2.10.4 running on python3.9
oauthlib 2.1.0
authlib-1.4.1
flask-oauthlib-0.9.6
flask-oidc-2.2.2
requests-oauthlib-1.1.0
Okta-2.9.0
```
Below is our Airflow webserver.cfg file
```
#Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Default configuration for the Airflow webserver"""
import os
from airflow.www.fab_security.manager import AUTH_OAUTH
#from flask_appbuilder.security.manager import AUTH_OAUTH
basedir = os.path.abspath(os.path.dirname(__file__))
# Flask-WTF flag for CSRF
WTF_CSRF_ENABLED = True
# ----------------------------------------------------
# AUTHENTICATION CONFIG
# ----------------------------------------------------
# For details on how to set up each of the following authentication, see
# http://flask-appbuilder.readthedocs.io/en/latest/security.html#authentication-methods
# for details.
# The authentication type
AUTH_TYPE = AUTH_OAUTH
# Uncomment to setup Full admin role name
AUTH_ROLE_ADMIN = 'Admin'
# When using OAuth Auth, uncomment to setup provider(s) info
# Google OAuth example:
OAUTH_PROVIDERS = [{
'name':'okta',
'token_key':'access_token',
'icon':'fa-circle-o',
'remote_app': {
'client_id': 'xxxxxxxxxxxxx',
'client_secret': 'xxxxxxxxxxxxxxxxxxx',
'api_base_url': 'https://xxxxxxx.com/oauth2/v1/',
'client_kwargs':{'scope': 'openid profile email groups'},
# 'redirect_uri': 'https://xxxxxxx.com/oauth-authorized/okta',
'access_token_url': 'https://xxxxxxx.com/oauth2/v1/token',
'authorize_url': 'https://xxxxxxx.com/oauth2/v1/authorize',
'jwks_uri': 'https://xxxxxxx.com/oauth2/v1/keys'
# 'server_metadata_url': 'https://xxxxxxx.com/.well-known/openid-configuration'
}
}]
# Will allow user self registration
AUTH_USER_REGISTRATION = True
# The default user self registration role
AUTH_USER_REGISTRATION_ROLE = "Admin"
AUTH_ROLES_MAPPING = {
"Admin": ["Admin"]
}
# if we should replace ALL the user's roles each login, or only on registration
AUTH_ROLES_SYNC_AT_LOGIN = True
# force users to re-auth after 12hr of inactivity (to keep roles in sync)
PERMANENT_SESSION_LIFETIME = 43200
```
Error I am getting in the webserver logs is as below (Internal Server Error):
```
[2025-01-29 19:55:59 +0000] [21] [CRITICAL] WORKER TIMEOUT (pid:92)
[2025-01-29 19:55:59 +0000] [92] [ERROR] Error handling request /oauth-authorized/okta?code=xxxxxxxxxxxxxx&state=xxxxxxxxxxx
Traceback (most recent call last):
File "/opt/app-root/lib64/python3.9/site-packages/gunicorn/workers/sync.py", line 134, in handle
self.handle_request(listener, req, client, addr)
File "/opt/app-root/lib64/python3.9/site-packages/gunicorn/workers/sync.py", line 177, in handle_request
respiter = self.wsgi(environ, resp.start_response)
File "/opt/app-root/lib64/python3.9/site-packages/flask/app.py", line 2552, in __call__
return self.wsgi_app(environ, start_response)
File "/opt/app-root/lib64/python3.9/site-packages/flask/app.py", line 2529, in wsgi_app
response = self.full_dispatch_request()
File "/opt/app-root/lib64/python3.9/site-packages/flask/app.py", line 1823, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/app-root/lib64/python3.9/site-packages/flask/app.py", line 1799, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/opt/app-root/lib64/python3.9/site-packages/flask_appbuilder/security/views.py", line 679, in oauth_authorized
resp = self.appbuilder.sm.oauth_remotes[provider].authorize_access_token()
File "/opt/app-root/lib64/python3.9/site-packages/authlib/integrations/flask_client/apps.py", line 101, in authorize_access_token
token = self.fetch_access_token(**params, **kwargs)
File "/opt/app-root/lib64/python3.9/site-packages/authlib/integrations/base_client/sync_app.py", line 347, in fetch_access_token
token = client.fetch_token(token_endpoint, **params)
File "/opt/app-root/lib64/python3.9/site-packages/authlib/oauth2/client.py", line 217, in fetch_token
return self._fetch_token(
File "/opt/app-root/lib64/python3.9/site-packages/authlib/oauth2/client.py", line 366, in _fetch_token
resp = self.session.post(
File "/opt/app-root/lib64/python3.9/site-packages/requests/sessions.py", line 637, in post
return self.request("POST", url, data=data, json=json, **kwargs)
File "/opt/app-root/lib64/python3.9/site-packages/authlib/integrations/requests_client/oauth2_session.py", line 112, in request
return super().request(
File "/opt/app-root/lib64/python3.9/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/opt/app-root/lib64/python3.9/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/opt/app-root/lib64/python3.9/site-packages/requests/adapters.py", line 667, in send
resp = conn.urlopen(
File "/opt/app-root/lib64/python3.9/site-packages/urllib3/connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
File "/opt/app-root/lib64/python3.9/site-packages/urllib3/connectionpool.py", line 404, in _make_request
self._validate_conn(conn)
File "/opt/app-root/lib64/python3.9/site-packages/urllib3/connectionpool.py", line 1060, in _validate_conn
conn.connect()
File "/opt/app-root/lib64/python3.9/site-packages/urllib3/connection.py", line 419, in connect
self.sock = ssl_wrap_socket(
File "/opt/app-root/lib64/python3.9/site-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(
File "/opt/app-root/lib64/python3.9/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "/usr/lib64/python3.9/ssl.py", line 501, in wrap_socket
return self.sslsocket_class._create(
File "/usr/lib64/python3.9/ssl.py", line 1074, in _create
self.do_handshake()
File "/usr/lib64/python3.9/ssl.py", line 1343, in do_handshake
self._sslobj.do_handshake()
File "/opt/app-root/lib64/python3.9/site-packages/gunicorn/workers/base.py", line 204, in handle_abort
sys.exit(1)
SystemExit: 1
``` | open | 2025-01-29T20:57:23Z | 2025-01-29T20:57:23Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/2304 | [] | sumanth567 | 0 |
autogluon/autogluon | computer-vision | 4,795 | Low GPU utilization with TabPFNMix model when using presets | I'm using the TabPFNMix model with AutoGluon and noticed a significant difference in GPU utilization depending on whether presets are used in the fit() function.
**Steps to Reproduce:**
Define the hyperparameters for TabPFNMix:
```python
tabpfnmix_default = {
"model_path_classifier": "autogluon/tabpfn-mix-1.0-classifier",
"model_path_regressor": "autogluon/tabpfn-mix-1.0-regressor",
"n_ensembles": 1,
"max_epochs": 30,
}
hyperparameters = {
"TABPFNMIX": [
tabpfnmix_default,
],
}
```
**Train the TabPFNMix model without any preset:**
```python
predictor = TabularPredictor(label='label', path='model_save_path', eval_metric='accuracy', problem_type='binary')
predictor.fit(train_data, hyperparameters=hyperparameters, verbosity=3, time_limit=3600, num_gpus=1)
```
GPU utilization: ~11.6 GB VRAM + 9 GB on my dataset.
RAM on CPU: ~2 GB.
Train the same model with a preset (e.g., best_quality):
```python
predictor = TabularPredictor(label='label', path='model_save_path', eval_metric='accuracy', problem_type='binary')
predictor.fit(train_data, presets='best_quality', hyperparameters=hyperparameters, verbosity=3, time_limit=3600, num_gpus=1)
```
GPU utilization: <2 GB VRAM.
Training and inference are significantly slower.
**Expected Behavior:**
When using a preset like best_quality, GPU utilization should remain high (similar to the no-preset scenario), ensuring faster training and inference times.
**Observed Behavior:**
Using presets reduces GPU usage drastically, leading to slower training and inference.
**Questions:**
- Is there a way to ensure high GPU utilization when using presets with TabPFNMix?
- Are there specific parameters or configurations that could mitigate this issue?
- Is this a known limitation or a bug related to the presets' implementation? | open | 2025-01-14T16:15:03Z | 2025-02-23T14:16:48Z | https://github.com/autogluon/autogluon/issues/4795 | [
"bug",
"module: tabular"
] | Killer3048 | 4 |
piskvorky/gensim | machine-learning | 3,360 | KeyedVectors.load_word2vec_format() can't load GoogleNews-vectors-negative300.bin | #### Problem description
KeyedVectors.load_word2vec_format() can't load GoogleNews-vectors-negative300.bin.
This is my code.
```
from gensim.models.keyedvectors import KeyedVectors
gensim_model = KeyedVectors.load_word2vec_format(
'./GoogleNews-vectors-negative300.bin', binary=True, limit=300000)
```
This is the error output:
```
Traceback (most recent call last):
File "D:\desktop\2\word2vec.py", line 4, in <module>
gensim_model = KeyedVectors.load_word2vec_format(
File "C:\Users\admin\anaconda3\envs\dl\lib\site-packages\gensim\models\keyedvectors.py", line 1723, in load_word2vec_format
return _load_word2vec_format(
File "C:\Users\admin\anaconda3\envs\dl\lib\site-packages\gensim\models\keyedvectors.py", line 2063, in _load_word2vec_format
vocab_size, vector_size = [int(x) for x in header.split()] # throws for invalid file format
ValueError: not enough values to unpack (expected 2, got 0)
```
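For what it's worth, that particular traceback means the first line of the file split into zero tokens instead of the expected `<vocab_size> <vector_size>` header, which usually indicates an empty, truncated, or still-compressed download rather than a gensim problem. A quick sanity check (my own sketch, not a gensim API):

```python
def check_word2vec_header(path):
    """Return (ok, header_bytes); a healthy GoogleNews .bin starts with
    a line like b'3000000 300'."""
    with open(path, "rb") as f:
        header = f.readline().strip()
    parts = header.split()
    ok = len(parts) == 2 and all(p.isdigit() for p in parts)
    return ok, header

# e.g. check_word2vec_header("./GoogleNews-vectors-negative300.bin")
```

If `ok` comes back False (or the file is only a few bytes), re-download the archive and make sure it is fully extracted before calling `load_word2vec_format`.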
#### Versions
python 3.9.1
gensim 4.2.0
| closed | 2022-07-01T12:54:46Z | 2022-07-02T06:30:12Z | https://github.com/piskvorky/gensim/issues/3360 | [] | xwz-19990627 | 2 |
Farama-Foundation/Gymnasium | api | 936 | [Question] Are results on mujoco games v3 and v4 comparable | ### Question
Hi! I want to compare results generated on mujoco games (v4) to those generated on v3 games. Is there any change in v4 that modifies the underlying environment dynamics? | closed | 2024-02-24T02:07:21Z | 2024-03-04T10:45:54Z | https://github.com/Farama-Foundation/Gymnasium/issues/936 | [
"question"
] | yiwan-rl | 1 |
JoeanAmier/TikTokDownloader | api | 97 | Can the caption (desc) be saved separately? And can audio alone be collected? | I see that the desc field can be used in the filename to store the caption, i.e., the video description, but some captions are very long, and sometimes the captions themselves need to be collected. Could a feature be added so that, while collecting a video, its caption is also saved separately to a txt file?
Also, for some videos I only want the audio. Could audio be collected on its own, without the video? | open | 2023-12-14T19:10:24Z | 2023-12-15T10:16:17Z | https://github.com/JoeanAmier/TikTokDownloader/issues/97 | [] | HiColinn | 1
Kav-K/GPTDiscord | asyncio | 315 | Allow the conversation_starter to be disabled | **Is your feature request related to a problem? Please describe.**
In its current implementation, the conversation_starter_pretext and conversation_starter_pretext_minimal are always invoked when interacting with the bot. This leads to an issue with large opener prompts hitting the max_conversation_length and not being passed through.
**Describe the solution you'd like**
Change the behavior of the commands.py cog to support something like "use_pretext" = True/False when creating a conversation with the bot
**Describe alternatives you've considered**
Merging pretext and openers so that only 1 is used at a time
**Additional context**
I posed the question on the discord and the solution posed was to zero out the 2 pretext files, which seems unnecessary.
| closed | 2023-05-10T05:33:46Z | 2023-10-21T05:47:55Z | https://github.com/Kav-K/GPTDiscord/issues/315 | [] | jeffe | 2 |
explosion/spaCy | machine-learning | 12,370 | ValueError('[E1010] Unable to set entity information for token 3 which is included in more than one span in entities, blocked, missing or outside.') |
## How to reproduce the behaviour
First here is the code:
```
ents_data = json["ents_data"]
_ents = []        # note: these two lists are initialized once, outside the document loop
_temp_ents = []
for ents in ents_data:
# print(ents, ":")
doc = nlp(ents["text"])
for ent in ents["ents"]:
start = ent["start"]
end = ent["end"]
label = ent["type"]
span = doc.char_span(int(start), int(end), label=label)
if not _temp_ents.count(span.text):
_ents.append(span)
_temp_ents.append(span.text)
doc.ents = _ents
db.add(doc)
```
And here is my json:
```
"ents_data":
[
{
"text": "I have a fever and stomach hurt",
"ents": [
{
"start": "9",
"end": "14",
"type": "SYMPTOM"
},
{
"start": "19",
"end": "31",
"type": "SYMPTOM"
}
]
},
{
"text": "I'm experiencing head pain and a fever",
"ents": [
{
"start": "17",
"end": "26",
"type": "SYMPTOM"
}
]
}
]
```
When I run the above code, I get this error: ValueError('[E1010] Unable to set entity information for token 3 which is included in more than one span in entities, blocked, missing or outside.')
But I checked: there are only three spans in the entities array, namely "Fever", "Stomach hurt", and "Head pain". There's no way "Head pain" overlaps with "Stomach hurt", right? I just can't figure out why.
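One pattern that produces exactly this error (a hedged guess, since the full script isn't shown): if `_ents` and `_temp_ents` are created once, outside the `for ents in ents_data:` loop, then spans built from the first `Doc` (where "fever" sits around token 3) are still in `_ents` when `doc.ents = _ents` runs for the second `Doc`, where "head pain" also starts around token 3. Resetting both per document avoids the overlap. The stubs below stand in for spaCy only so the pattern is runnable on its own; with real spaCy, keep your `nlp` and just move the two resets inside the loop:

```python
class _Span:  # stand-in for spacy.tokens.Span
    def __init__(self, text, label):
        self.text, self.label_ = text, label

class _Doc:  # stand-in for spacy.tokens.Doc
    def __init__(self, text):
        self.text, self.ents = text, ()
    def char_span(self, start, end, label):
        return _Span(self.text[start:end], label)

def nlp(text):  # stand-in for the loaded pipeline
    return _Doc(text)

def build_docs(ents_data, db):
    for ents in ents_data:
        doc = nlp(ents["text"])
        spans = []    # reset per document
        seen = set()  # dedupe only within this document
        for ent in ents["ents"]:
            span = doc.char_span(int(ent["start"]), int(ent["end"]), label=ent["type"])
            if span is not None and span.text not in seen:
                spans.append(span)
                seen.add(span.text)
        doc.ents = spans
        db.append(doc)
    return db
```

If the goal really is to dedupe entity strings across documents, keep a global `seen`, but still rebuild `spans` fresh for every `doc` so no span outlives its own document.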
## My Environment
* Operating System: Windows 11
* Python Version Used: 3.10.10
* spaCy Version Used: v3.5
| closed | 2023-03-06T11:24:46Z | 2023-03-06T14:41:12Z | https://github.com/explosion/spaCy/issues/12370 | [] | Yukari-Tryhard | 1 |
widgetti/solara | flask | 987 | `use_task` example | In your async task example
```python
import asyncio
import solara
from solara.lab import use_task, Task
@solara.component
def Page():
number = solara.use_reactive(4)
async def square():
await asyncio.sleep(1)
return number.value**2
result: Task[int] = use_task(square, dependencies=[number.value])
solara.InputInt("Square", value=number, continuous_update=True)
if result.finished:
solara.Success(f"Square of {number} == {result.value}")
solara.ProgressLinear(result.pending)
```
you annotate the `Task` with `int` as the return type of `use_task`. I think this may no longer be accurate. My editor (VS Code) suggests the return type of `use_task` is `Task[(), Coroutine[Any, Any, int]]`, thus not allowing me to properly type it. | open | 2025-01-27T06:25:06Z | 2025-01-28T15:39:26Z | https://github.com/widgetti/solara/issues/987 | [] | edan-bainglass | 1
replicate/cog | tensorflow | 1,258 | Building headless colmap (without gui) using cog's yaml file | **Steps to reproduce**
cog build the following yaml file
```
build:
# set to true if your model requires a GPU
gpu: true
cuda: "12.1"
# a list of ubuntu apt packages to install
system_packages:
- "libgl1-mesa-glx"
- "libglib2.0-0"
- "git"
- "cmake"
- "ninja-build"
- "build-essential"
- "libboost-program-options-dev"
- "libboost-filesystem-dev"
- "libboost-graph-dev"
- "libboost-system-dev"
- "libboost-test-dev"
- "libeigen3-dev"
- "libflann-dev"
- "libfreeimage-dev"
- "libmetis-dev"
- "libgoogle-glog-dev"
- "libgflags-dev"
- "libsqlite3-dev"
- "libglew-dev"
- "qtbase5-dev"
- "libqt5opengl5-dev"
- "libcgal-dev"
- "libceres-dev"
# - "colmap"
- "ffmpeg"
# python version in the form '3.11' or '3.11.4'
python_version: "3.10"
# a list of packages in the format <package-name>==<version>
python_packages:
# - "torch"
# - "torchvision"
# - "torchaudio"
# - "tensorflow"
- "pillow"
- "rembg"
- "rembg[cli]"
# commands run after the environment is setup
run:
- git clone https://gitlab.com/medoalmasry/headless-colmap.git && cd headless-colmap/build
- cmake ../CMakeLists.txt -GNinja -DCMAKE_CUDA_ARCHITECTURES=native
- ninja && ninja install
# predict.py defines how predictions are run on your model
predict: "predict.py:Predictor"
```
**Error Produced:**
```
> [stage-0 11/14] RUN cmake ../CMakeLists.txt -GNinja -DCMAKE_CUDA_ARCHITECTURES=native:
0.237 CMake Error: The source directory "/" does not appear to contain CMakeLists.txt.
0.237 Specify --help for usage, or press the help button on the CMake GUI.
------
Dockerfile:47
--------------------
45 | RUN --mount=type=cache,target=/root/.cache/pip pip install -r /tmp/requirements.txt
46 | RUN git clone https://gitlab.com/medoalmasry/headless-colmap.git && cd headless-colmap/build
47 | >>> RUN cmake ../CMakeLists.txt -GNinja -DCMAKE_CUDA_ARCHITECTURES=native
48 | RUN ninja && ninja install
49 | WORKDIR /src
--------------------
ERROR: failed to solve: process "/bin/sh -c cmake ../CMakeLists.txt -GNinja -DCMAKE_CUDA_ARCHITECTURES=native" did not complete successfully: exit code: 1
ⅹ Failed to build Docker image: exit status 1
```
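A hedged observation, since I can't test against this exact repo: each entry under `run:` appears to become its own Dockerfile `RUN` layer, so the `cd headless-colmap/build` in the first entry does not carry over to the second, and `cmake` then runs from `/` (hence the "source directory \"/\"" message). `cmake ../CMakeLists.txt` also points at a file where cmake expects a source directory. Chaining the steps in a single entry keeps one shell and one working directory:

```yaml
run:
  - >-
    git clone https://gitlab.com/medoalmasry/headless-colmap.git &&
    mkdir -p headless-colmap/build && cd headless-colmap/build &&
    cmake .. -GNinja -DCMAKE_CUDA_ARCHITECTURES=native &&
    ninja && ninja install
```

This is untested against this particular repository; the point is keeping the clone, configure, and build steps in one shell so the working directory survives between them.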
My reference for building colmap is this [dockerfile](https://github.com/colmap/colmap/blob/dev/docker/Dockerfile).
| closed | 2023-08-12T22:58:33Z | 2023-08-13T21:07:43Z | https://github.com/replicate/cog/issues/1258 | [] | Medoalmasry | 0 |
graphql-python/graphene | graphql | 813 | Commercial support? | I love this lib - thanks so much for your hard work, @syrusakbary!
With that said, I'd happily contribute to paying for more of your time to extend Graphene, and fill in a few missing pieces that would make it more viable to use in production.
Namely:
- Error catching, per https://github.com/graphql-python/graphql-core/issues/177, and https://github.com/graphql-python/graphql-core/issues/202. System exceptions bubble up to user output by default, which is a dangerous default IMO. I'd love to see a cleaner way of logging exceptions and controlling error output that could be defined in one place, rather than defensive `try/except` blocks in every mutation.
- Core improvements / re-factoring. Thread safety (#43), resolving promise issues (https://github.com/syrusakbary/promise/issues/57), moving to `async/await` per [your comment](https://github.com/graphql-python/graphene/issues/612#issuecomment-347066815), etc. There are a few core things that are beyond my immediate experience with how things work under the hood, that seem like they have some potential to throw weird/unexpected issues that are hard to diagnose.
- Documented subscription support/patterns, per #781.
- Docs synced with releases. There have been a few occasions where trying doc examples has thrown errors, such as #812 today or https://github.com/graphql-python/flask-graphql/issues/52 which I ran into a few days ago. When this happens, monkey-patched workarounds are usually suggested by the community, which makes for more brittle code.
- Some more tooling to provide official solutions for #772, etc. Calculating the cost of queries, parsing the AST tree and preempting query joins that might be necessary, etc. Outside the purview of the core, perhaps, but necessary stuff at scale that would save devs reinventing the wheel.
- Closing issues faster. This is just a general thing, but there are issues going back 2-3 years that would be good to get unstuck. Much of it might even be redundant now or have other solutions, but clearing the backlog would make a clearer case for using the lib in production, knowing those same issues are unlikely to resurface.
Obviously, your time is valuable and the fact that you've put together _anything_ at all - let alone something as cool as Graphene - is amazing, so thank you.
But perhaps there's a way, as a community, we could buy more of your time to address some of the above? I'd happily chuck in a few bucks on the reg to keep this lib up-to-date. | closed | 2018-08-15T15:10:29Z | 2020-06-21T19:49:23Z | https://github.com/graphql-python/graphene/issues/813 | [
"📖 documentation"
] | leebenson | 11 |
allenai/allennlp | nlp | 4,955 | Learning rate scheduler does not work on AllenNLP v2 | Hello, I'm porting my code to the v2 version, and realized that the learning rate scheduler was not working.
After inspecting the training code, I realized that in `def _try_train(self)` the variable
`this_epoch_val_metric: float = 0.0` never changes its value.
However, the scheduler API requires a validation metric:
```
# The Scheduler API is agnostic to whether your schedule requires a validation metric -
# if it doesn't, the validation metric passed here is ignored.
if self._learning_rate_scheduler:
self._learning_rate_scheduler.step(this_epoch_val_metric)
if self._momentum_scheduler:
self._momentum_scheduler.step(this_epoch_val_metric)
```
I guess it is related to the _metric_tracker change, which now receives a list of metrics. But @dirkgr would probably solve it more elegantly, as he was the designer of the improved metric tracker. | closed | 2021-02-02T21:26:48Z | 2021-02-04T19:21:59Z | https://github.com/allenai/allennlp/issues/4955 | [
"bug"
] | bratao | 4 |
gee-community/geemap | jupyter | 1,772 | keep the camelCase methods to mirror GEE javascript map functions | I asked this question during the breakout session at Geo4Good. If I understand correctly, you are planning on dropping the methods that mirror the GEE javascript ones.
Also, while I completely understand that respecting the Python convention makes the geemap package more consistent, I wanted to highlight 2 advantages:
- if you are coming from the code editor you know by design that this function is perfectly mimicking the behaviour you would get in the javascript interface
- It gives Python users an insight: this is an earthengine-API call, which is very relevant for commercial users of the lib. That's the reasoning I'm following to [refactor gee_tools](https://github.com/gee-community/gee_tools/discussions/121).
I think keeping them is painless, as it's a one-liner at the end of the class:
```
class Map:
    # ...
    addLayer = add_ee_layer  # class-level alias; `self.` is not available in a class body
```
_Originally posted by @12rambau in https://github.com/gee-community/geemap/discussions/1770_ | closed | 2023-10-13T15:51:25Z | 2023-10-18T01:08:58Z | https://github.com/gee-community/geemap/issues/1772 | [] | 12rambau | 1 |
custom-components/pyscript | jupyter | 639 | state_trigger stopped working after update to 2024.10 | I have a few scripts with the state trigger that stopped working once I updated to 2024.10.
These were working before. The time_trigger does work as expected.
Example of the state_trigger:
```python
@state_trigger("light.hallway_light_1.*")
def test1(**args):
    log.info(f"light changed {args}")


@state_trigger("binary_sensor.hallway_motion_sensor_occupancy")
def motion_state_changed(**args):
    log.info("something is present")
    context = args["context"]
    if hallway.enable_debug():
        log.debug(
            f"[{hallway.name()}] Presence sensor binary_sensor.hallway_motion_sensor_occupancy was triggered with state {args['value']} from {context.parent_id}"
        )
    ceiling_light_flow()
``` | closed | 2024-10-04T07:17:42Z | 2024-10-05T14:04:18Z | https://github.com/custom-components/pyscript/issues/639 | [] | cjlapao | 3 |
pydantic/pydantic-settings | pydantic | 266 | Pydantic settings not reloading env vars when .env file is updated | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
## Bug
Context: I was using Pydantic's BaseSettings to configure some Database settings.
The issue I faced was that no matter what values I updated in the `.env` file, the output of the settings always remained the same.
This led me to:
- Commenting out all values in the `.env`
- Deleting the `.env` file
- Changing the `env_file` field in `model_config` to random paths/strings
All of which resulted in the same values being output (the cached values I had set before, which were not updating).
My initial bug report was going to be that Pydantic was reading off a non-existent `.env` file. However, after more debugging, I realized that these environment variables were persistently set for my current directory (i.e., creating a new shell session did not fix the issue).
This means that Pydantic settings somehow previously set persistent environment variables that are not being overridden by the `.env` ones. I am unsure if this is expected behavior, but this feels like a bug to me; please correct me if I'm wrong.
Additionally, Pydantic is not validating the `.env` file path. For example, I can set `env_file` in `model_config` to any random string, and as long as the environment variables the model is expecting exist, no errors will be thrown.
E.g., this throws no errors
```
model_config = SettingsConfigDict(
env_file="random string",
env_file_encoding="utf-8",
env_prefix="DB_",
case_sensitive=False,
extra="ignore",
)
```
## How to reproduce
1. Set environment variables of what your model is expecting in your current shell
2. Define model_config in your setting class to expect an `env_file`
3. Put any path/string, valid or invalid, to the `env_file`
4. Print out the values in your setting class. No errors will be thrown given an invalid `.env` path, updated values will not be populated given a valid `.env` path.
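For what it's worth, this matches pydantic-settings' documented default precedence, where already-set environment variables rank above `.env` file values. A stdlib-only sketch of that precedence (the helpers `parse_dotenv` and `resolve` are hypothetical illustrations, not pydantic-settings API):

```python
# Sketch of default source precedence: real environment variables first,
# then the .env file. Exported variables therefore shadow the file's values.

def parse_dotenv(text):
    """Minimal .env parser: KEY=VALUE lines; blank lines and '#' comments ignored."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

def resolve(name, dotenv_values, environ):
    # Higher-priority source first: the process environment, then the .env file.
    if name in environ:
        return environ[name]
    return dotenv_values.get(name)

dotenv_values = parse_dotenv("DB_USERNAME=test_user\nDB_HOST=test_server")
environ = {"DB_USERNAME": "user_to_be_overridden"}  # exported in the shell earlier
print(resolve("DB_USERNAME", dotenv_values, environ))  # exported value wins
print(resolve("DB_HOST", dotenv_values, environ))      # falls back to the .env value
```

Under that rule, the output in the report is expected: the exported `user_to_be_overridden` values shadow the `.env` file until they are unset.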
### Example Code
```Python
# .env
DB_USERNAME=test_user
DB_PASSWORD=test_password
DB_HOST=test_server
# Before running config.py
export DB_USERNAME=user_to_be_overridden
export DB_PASSWORD=password_to_be_overridden
export DB_HOST=server_to_be_overrriden
# config.py
import sqlalchemy
from pydantic import SecretStr, ValidationInfo, field_validator
from pydantic_settings import BaseSettings, SettingsConfigDict
class DatabaseSettings(BaseSettings):
    model_config = SettingsConfigDict(
        env_file=".env",  # Can be replaced with an invalid path/string
        env_file_encoding="utf-8",
        env_prefix="DB_",
        case_sensitive=False,
        extra="ignore",
    )

    USERNAME: str
    PASSWORD: SecretStr
    HOST: str
    NAME: str
    PORT: str | None = None
    URL: str | None = None

    @field_validator("URL")
    @classmethod
    def assemble_db_url(cls, v: str, info: ValidationInfo) -> str:
        if isinstance(v, str):
            return v
        url = sqlalchemy.URL.create(
            drivername="mssql+pyodbc",
            username=info.data.get("USERNAME"),
            password=info.data.get("PASSWORD").get_secret_value(),
            host=info.data.get("HOST"),
            port=info.data.get("PORT"),
            database=info.data.get("NAME"),
            query=dict(driver="ODBC Driver 17 for SQL Server"),
        )
        return url.render_as_string(hide_password=False)


if __name__ == "__main__":
    from pprint import pprint

    database_settings = DatabaseSettings(NAME="TEST")
    print("Database settings model dump:")
    pprint(database_settings.model_dump())
    print("Database settings model config:")
    pprint(DatabaseSettings.model_config)
"""
OUTPUT:
Database settings model dump:
{'HOST': 'server_to_be_overrriden', # Not value from .env
'NAME': 'TEST',
'PASSWORD': SecretStr('**********'),
'PORT': None,
'URL': 'mssql+pyodbc://user_to_be_overridden:password_to_be_overridden@server_to_be_overrriden/TEST?driver=ODBC+Driver+17+for+SQL+Server', # Not value from .env
'USERNAME': 'user_to_be_overridden'} # Not value from .env
Database settings model config:
{'arbitrary_types_allowed': True,
'case_sensitive': False,
'env_file': '.env',
'env_file_encoding': 'utf-8',
'env_ignore_empty': False,
'env_nested_delimiter': None,
'env_parse_none_str': None,
'env_prefix': 'DB_',
'extra': 'ignore',
'json_file': None,
'json_file_encoding': None,
'protected_namespaces': ('model_', 'settings_'),
'secrets_dir': None,
'toml_file': None,
'validate_default': True,
'yaml_file': None,
'yaml_file_encoding': None}
"""
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.6.4
pydantic-core version: 2.16.3
pydantic-core build: profile=release pgo=true
install path: C:\Users\dan\AppData\Local\anaconda3\envs\traceability\Lib\site-packages\pydantic
python version: 3.11.8 | packaged by Anaconda, Inc. | (main, Feb 26 2024, 21:34:05) [MSC v.1916 64 bit (AMD64)]
platform: Windows-10-10.0.19045-SP0
related packages: pydantic-settings-2.2.1 typing_extensions-4.10.0
commit: unknown
```
| closed | 2024-04-09T17:58:29Z | 2024-04-11T20:40:57Z | https://github.com/pydantic/pydantic-settings/issues/266 | [] | pongpatapee | 2 |
tortoise/tortoise-orm | asyncio | 1,396 | Nested select_related raises KeyError inside _init_from_db | **Describe the bug**
When using a nested select_related of the form:
`modelA__modelB`
I'm getting a KeyError when the ORM tries to assign values to modelB.
The issue arises because the kwargs passed to the `_init_from_db` function get a sliced version of each model attribute's name: for example, instead of the 'city_obj' attribute, 'city_ob' is passed to kwargs; likewise, 'buildin' is passed instead of the 'building' attribute, etc.
Actual error:
```
Traceback (most recent call):
  File "/usr/local/lib/python3.9/site-packages/tortoise/queryset.py", line 1008, in _execute
    instance_list = await self._db.executor_class(
  File "/usr/local/lib/python3.9/site-packages/tortoise/backends/base/executor.py", line 155, in execute_select
    obj = model._init_from_db(
  File "/usr/local/lib/python3.9/site-packages/tortoise/models.py", line 747, in _init_from_db
    setattr(self, key, meta.fields_map[key].to_python_value(value))
KeyError: 'buildin'
```
ORM version:
tortoise-orm==0.19.3
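To illustrate the failure mode, here is a hypothetical, self-contained sketch (not tortoise-orm's actual code) of the kind of off-by-one slice that would turn 'building' into 'buildin' or 'city_obj' into 'city_ob' when stripping a `related__` prefix from result-row keys:

```python
# Hypothetical reconstruction of the truncation: a stray -1 in the slice
# drops the last character of the attribute name, so the truncated key is
# later missing from meta.fields_map and raises KeyError.

def strip_prefix_buggy(key, prefix):
    # Bug: the trailing -1 drops the last character of the attribute name.
    return key[len(prefix) + len("__"):-1]

def strip_prefix_fixed(key, prefix):
    return key[len(prefix) + len("__"):]

print(strip_prefix_buggy("modelA__building", "modelA"))  # 'buildin' -> KeyError in fields_map
print(strip_prefix_fixed("modelA__building", "modelA"))  # 'building'
```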
| open | 2023-05-30T06:56:06Z | 2024-09-19T18:02:58Z | https://github.com/tortoise/tortoise-orm/issues/1396 | [] | Tauassar | 2 |
jacobgil/pytorch-grad-cam | computer-vision | 118 | Possibility to declare our own augmentations for aug_smooth = True | Hi,
Thanks for the library. Would it be possible for us to declare our own augmentations? Right now I am subclassing your class. | closed | 2021-07-29T07:54:03Z | 2021-09-09T14:51:54Z | https://github.com/jacobgil/pytorch-grad-cam/issues/118 | [] | mhashas | 8 |
zappa/Zappa | flask | 673 | [Migrated] Inconsistent amount of scheduled events in outputs of "update" & "status" commands | Originally from: https://github.com/Miserlou/Zappa/issues/1725 by [tunghim](https://github.com/tunghim)
<!--- Provide a general summary of the issue in the Title above -->
## Context
When I run
`$ zappa update {env_name}`
Scheduled events are being shown in the output properly.

But when I run
`$ zappa status {env_name}`
Only some of the scheduled events are shown.

<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
## Expected Behavior
<!--- Tell us what should happen -->
They should be the same.
## Actual Behavior
<!--- Tell us what happens instead -->
They are not the same.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.47.1
* Operating System and Python version: Lambda v13, Python v3.6.7
| closed | 2021-02-20T12:32:45Z | 2024-04-13T17:36:51Z | https://github.com/zappa/Zappa/issues/673 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
OpenVisualCloud/CDN-Transcode-Sample | dash | 2 | Feature request to integrate with Kubernetes | closed | 2019-04-15T06:01:54Z | 2019-06-12T08:42:41Z | https://github.com/OpenVisualCloud/CDN-Transcode-Sample/issues/2 | [
"enhancement"
] | czhou26 | 2 | |
nonebot/nonebot2 | fastapi | 3,374 | Plugin: nonebot-plugin-tieba-monitor | ### PyPI 项目名
nonebot-plugin-tieba-monitor
### 插件 import 包名
nonebot_plugin_tieba_monitor
### 标签
[]
### 插件配置项
```dotenv
```
### 插件测试
- [ ] 如需重新运行插件测试,请勾选左侧勾选框 | open | 2025-03-16T16:42:42Z | 2025-03-24T04:29:37Z | https://github.com/nonebot/nonebot2/issues/3374 | [
"Plugin",
"Publish"
] | su-liu-guang | 3 |
NullArray/AutoSploit | automation | 361 | Unhandled Exception (e7b1da053) | Autosploit version: `3.0`
OS information: `Linux-4.15.0-20-generic-x86_64-with-Ubuntu-18.04-bionic`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
  File "/home/verhe054/exploit/Autosploit/autosploit/main.py", line 113, in main
    loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
  File "/home/verhe054/exploit/Autosploit/lib/jsonize.py", line 61, in load_exploits
    except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
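The `NameError` itself comes from the handler referencing `Except`, which is not a Python name; the built-in is `Exception`. A minimal sketch of the presumed fix in `load_exploits` (the function body here is a hypothetical reconstruction, not AutoSploit's actual code):

```python
import json

def load_exploits(path):
    """Hypothetical reconstruction: return the parsed exploit list, or [] on any failure."""
    try:
        with open(path) as f:
            return json.load(f)
    except Exception:  # was `except Except:`, which raises NameError instead of handling the error
        return []

print(load_exploits("/nonexistent/exploits.json"))  # -> []
```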
| closed | 2019-01-13T15:01:30Z | 2019-01-14T18:03:32Z | https://github.com/NullArray/AutoSploit/issues/361 | [] | AutosploitReporter | 0 |
cvat-ai/cvat | pytorch | 8,682 | Export problem (COCO File) | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce


### Expected Behavior
I want to export the data without losing my annotations.
### Possible Solution
_No response_
### Context
_No response_
### Environment
_No response_ | closed | 2024-11-11T16:15:55Z | 2024-11-11T18:37:34Z | https://github.com/cvat-ai/cvat/issues/8682 | [
"bug"
] | isu-jahan | 1 |
RobertCraigie/prisma-client-py | pydantic | 60 | Validate query arguments using pydantic | ## Problem
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Currently if any invalid arguments are passed then a very verbose and potentially confusing error message is raised, for example:
```prisma
model Post {
id String @id @default(cuid())
title String
published Boolean
}
```
```py
await client.post.create({})
```
```
prisma.errors.MissingRequiredValueError: Failed to validate the query: `Unable to match input value to any allowed input type for the field. Parse errors: [Query parsing/validation error at `Mutation.createOnePost.data.PostCreateInput.published`: A value is required but not set., Query parsing/validation error at `Mutation.createOnePost.data.PostUncheckedCreateInput.published`: A value is required but not set.]` at `Mutation.createOnePost.data`
```
## Suggested solution
<!-- A clear and concise description of what you want to happen. -->
Use pydantic's `@validate_arguments` [decorator](https://pydantic-docs.helpmanual.io/usage/validation_decorator/).
The above example would then error with something like:
```
pydantic.error_wrappers.ValidationError: 2 validation errors for PostCreateInput
title
field required (type=value_error.missing)
published
field required (type=value_error.missing)
```
This feature should however have a schema option and a programmatic method for disabling validation as validation incurs a runtime performance cost and provides no benefits when static type checkers are used.
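As a stdlib-only illustration of the suggested behaviour (not the actual Prisma Client Python or pydantic code), the kind of runtime check that would turn the empty-dict mistake above into a readable per-field error list could look like:

```python
# Validate a create() argument dict against the required fields of the Post
# model above, collecting all problems into one pydantic-style error message.
REQUIRED_POST_FIELDS = {"title": str, "published": bool}  # mirrors the Post schema

def validate_create_input(data):
    errors = []
    for field, expected in REQUIRED_POST_FIELDS.items():
        if field not in data:
            errors.append(f"{field}\n  field required (type=value_error.missing)")
        elif not isinstance(data[field], expected):
            errors.append(f"{field}\n  expected {expected.__name__}")
    if errors:
        raise ValueError(
            f"{len(errors)} validation errors for PostCreateInput\n" + "\n".join(errors)
        )
    return data

try:
    validate_create_input({})
except ValueError as exc:
    print(exc)  # lists both missing fields at once
```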
## Additional context
<!-- Add any other context or screenshots about the feature request here. -->
I suspect that the `@validate_arguments` decorator doesn't handle forward references properly so we'll probably have to do some horrible monkey patching to get this to work.
| open | 2021-08-31T18:51:14Z | 2022-02-01T15:30:47Z | https://github.com/RobertCraigie/prisma-client-py/issues/60 | [
"kind/feature",
"topic: client",
"level/advanced",
"priority/medium"
] | RobertCraigie | 0 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 272 | Toolbox Can't Synthesize Voice after Recording | ```C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master>demo_toolbox.py
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\modules.py:91: The name tf.nn.rnn_cell.RNNCell is deprecated. Please use tf.compat.v1.nn.rnn_cell.RNNCell instead.
Arguments:
datasets_root: None
enc_models_dir: encoder\saved_models
syn_models_dir: synthesizer\saved_models
voc_models_dir: vocoder\saved_models
low_mem: False
Warning: you did not pass a root directory for datasets as argument.
The recognized datasets are:
LibriSpeech/dev-clean
LibriSpeech/dev-other
LibriSpeech/test-clean
LibriSpeech/test-other
LibriSpeech/train-clean-100
LibriSpeech/train-clean-360
LibriSpeech/train-other-500
LibriTTS/dev-clean
LibriTTS/dev-other
LibriTTS/test-clean
LibriTTS/test-other
LibriTTS/train-clean-100
LibriTTS/train-clean-360
LibriTTS/train-other-500
LJSpeech-1.1
VoxCeleb1/wav
VoxCeleb1/test_wav
VoxCeleb2/dev/aac
VoxCeleb2/test/aac
VCTK-Corpus/wav48
Feel free to add your own. You can still use the toolbox by recording samples yourself.
Loaded encoder "pretrained.pt" trained to step 1564501
Found synthesizer "pretrained" trained to step 278000
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\inference.py:57: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.
Constructing model: Tacotron
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\tacotron2.py:15: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\tacotron2.py:21: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\tacotron.py:86: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, there are two
options available in V2.
- tf.py_function takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means `tf.py_function`s can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
- tf.numpy_function maintains the semantics of the deprecated tf.py_func
(it is not differentiable, and manipulates numpy arrays). It drops the
stateful argument making all functions stateful.
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\tacotron.py:123: The name tf.train.replica_device_setter is deprecated. Please use tf.compat.v1.train.replica_device_setter instead.
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\tacotron.py:135: The name tf.get_variable is deprecated. Please use tf.compat.v1.get_variable instead.
WARNING:tensorflow:From C:\Users\Name\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\ops\init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\modules.py:112: LSTMCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is equivalent as tf.keras.layers.LSTMCell, and will be replaced by that in Tensorflow 2.0.
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\modules.py:421: conv1d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.keras.layers.Conv1D` instead.
WARNING:tensorflow:Entity <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x00000197E2DD3D08>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x00000197E2DD3D08>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\modules.py:422: batch_normalization (from tensorflow.python.layers.normalization) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.BatchNormalization instead. In particular, `tf.control_dependencies(tf.GraphKeys.UPDATE_OPS)` should not be used (consult the `tf.keras.layers.batch_normalization` documentation).
WARNING:tensorflow:Entity <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x00000197E2DCF808>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x00000197E2DCF808>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\modules.py:425: dropout (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dropout instead.
WARNING:tensorflow:Entity <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x00000197E2DCF808>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x00000197E2DCF808>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x0000019782779C48>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x0000019782779C48>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x00000197E2DEF748>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x00000197E2DEF748>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x0000019783FB4F08>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x0000019783FB4F08>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x0000019782779B88>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x0000019782779B88>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x0000019783FE9F88>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x0000019783FE9F88>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x0000019783FE32C8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x0000019783FE32C8>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\modules.py:236: bidirectional_dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `keras.layers.Bidirectional(keras.layers.RNN(cell))`, which is equivalent to this API
WARNING:tensorflow:From C:\Users\Name\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\ops\rnn.py:464: dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.
Instructions for updating:
Please use `keras.layers.RNN(cell)`, which is equivalent to this API
WARNING:tensorflow:From C:\Users\Name\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\ops\rnn_cell_impl.py:961: calling Zeros.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
WARNING:tensorflow:Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x00000197822AF948>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x00000197822AF948>>: AttributeError: module 'gast' has no attribute 'Num'
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\modules.py:156: The name tf.nn.rnn_cell.LSTMStateTuple is deprecated. Please use tf.compat.v1.nn.rnn_cell.LSTMStateTuple instead.
WARNING:tensorflow:From C:\Users\Name\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\ops\rnn.py:244: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x00000197822AFCC8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x00000197822AFCC8>>: AttributeError: module 'gast' has no attribute 'Num'
WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x0000019783FE9CC8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x0000019783FE9CC8>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\attention.py:158: The name tf.layers.Conv1D is deprecated. Please use tf.compat.v1.layers.Conv1D instead.
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\attention.py:161: The name tf.layers.Dense is deprecated. Please use tf.compat.v1.layers.Dense instead.
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\modules.py:305: MultiRNNCell.__init__ (from tensorflow.python.ops.rnn_cell_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This class is equivalent as tf.keras.layers.StackedRNNCells, and will be replaced by that in Tensorflow 2.0.
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\modules.py:269: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x00000197BF9EC408>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x00000197BF9EC408>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x00000197BFA22688>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x00000197BFA22688>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x00000197BFA6EDC8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x00000197BFA6EDC8>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x00000197BFA6EDC8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x00000197BFA6EDC8>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method MultiRNNCell.call of <tensorflow.python.ops.rnn_cell_impl.MultiRNNCell object at 0x00000197AEC0EDC8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method MultiRNNCell.call of <tensorflow.python.ops.rnn_cell_impl.MultiRNNCell object at 0x00000197AEC0EDC8>>: AttributeError: module 'gast' has no attribute 'Num'
WARNING:tensorflow:Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x00000197AEC0E648>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x00000197AEC0E648>>: AttributeError: module 'gast' has no attribute 'Num'
WARNING:tensorflow:Entity <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x00000197AEC0EBC8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method LSTMCell.call of <tensorflow.python.ops.rnn_cell_impl.LSTMCell object at 0x00000197AEC0EBC8>>: AttributeError: module 'gast' has no attribute 'Num'
WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x00000197ABB17F88>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x00000197ABB17F88>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x00000197AEBEB588>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x00000197AEBEB588>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x00000197ABB2B4C8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x00000197ABB2B4C8>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x00000197AEC2D0C8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x00000197AEC2D0C8>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x00000197BFA556C8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x00000197BFA556C8>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x00000197F943CC88>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x00000197F943CC88>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x00000197F945A608>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x00000197F945A608>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x00000197F945A608>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x00000197F945A608>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x00000197BC5C0E08>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x00000197BC5C0E08>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x00000197F940FBC8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x00000197F940FBC8>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x00000197F94DBC88>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x00000197F94DBC88>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x00000197BC5B8688>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x00000197BC5B8688>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x00000197FAB00A48>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x00000197FAB00A48>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x00000197BC5B8688>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x00000197BC5B8688>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x00000197F94DBF88>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x00000197F94DBF88>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x00000197F94FD348>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x00000197F94FD348>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x00000197F94DBF88>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x00000197F94DBF88>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x00000197F940FE48>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Conv1D.call of <tensorflow.python.layers.convolutional.Conv1D object at 0x00000197F940FE48>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x00000197FAB332C8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method BatchNormalization.call of <tensorflow.python.layers.normalization.BatchNormalization object at 0x00000197FAB332C8>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x0000019783FA01C8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dropout.call of <tensorflow.python.layers.core.Dropout object at 0x0000019783FA01C8>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:Entity <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x00000197F94BFEC8>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Dense.call of <tensorflow.python.layers.core.Dense object at 0x00000197F94BFEC8>>: AssertionError: Bad argument number for Name: 3, expecting 4
initialisation done /gpu:0
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\tacotron.py:286: The name tf.trainable_variables is deprecated. Please use tf.compat.v1.trainable_variables instead.
Initialized Tacotron model. Dimensions (? = dynamic shape):
Train mode: False
Eval mode: False
GTA mode: False
Synthesis mode: True
Input: (?, ?)
device: 0
embedding: (?, ?, 512)
enc conv out: (?, ?, 512)
encoder out (cond): (?, ?, 768)
decoder out: (?, ?, 80)
residual out: (?, ?, 512)
projected residual out: (?, ?, 80)
mel out: (?, ?, 80)
<stop_token> out: (?, ?)
Tacotron Parameters 28.439 Million.
Loading checkpoint: synthesizer\saved_models\logs-pretrained\taco_pretrained\tacotron_model.ckpt-278000
2020-01-29 16:25:15.751548: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-01-29 16:25:15.757543: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library nvcuda.dll
2020-01-29 16:25:15.847611: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:01:00.0
2020-01-29 16:25:15.852256: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 1 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.582
pciBusID: 0000:02:00.0
2020-01-29 16:25:15.856989: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2020-01-29 16:25:15.861116: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0, 1
2020-01-29 16:25:16.627557: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-01-29 16:25:16.630706: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0 1
2020-01-29 16:25:16.632612: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N N
2020-01-29 16:25:16.634549: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 1: N N
2020-01-29 16:25:16.637899: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8367 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-01-29 16:25:16.644516: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 8788 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0, compute capability: 6.1)
WARNING:tensorflow:From C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\tacotron2.py:62: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.
WARNING:tensorflow:From C:\Users\Name\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\training\saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
2020-01-29 16:25:17.702371: E tensorflow/stream_executor/cuda/cuda_dnn.cc:319] Loaded runtime CuDNN library: 7.2.1 but source was compiled with: 7.4.1. CuDNN library major and minor version needs to match or have higher minor version in case of CuDNN 7.0 or later version. If using a binary install, upgrade your CuDNN library. If building from sources, make sure the library loaded at runtime is compatible with the version specified during compile configuration.
2020-01-29 16:25:17.712958: E tensorflow/stream_executor/cuda/cuda_dnn.cc:319] Loaded runtime CuDNN library: 7.2.1 but source was compiled with: 7.4.1. CuDNN library major and minor version needs to match or have higher minor version in case of CuDNN 7.0 or later version. If using a binary install, upgrade your CuDNN library. If building from sources, make sure the library loaded at runtime is compatible with the version specified during compile configuration.
Traceback (most recent call last):
File "C:\Users\Name\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\client\session.py", line 1356, in _do_call
return fn(*args)
File "C:\Users\Name\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\client\session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "C:\Users\Name\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\client\session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node Tacotron_model/inference/encoder_convolutions/conv_layer_1_encoder_convolutions/conv1d/conv1d}}]]
[[Tacotron_model/inference/add/_269]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node Tacotron_model/inference/encoder_convolutions/conv_layer_1_encoder_convolutions/conv1d/conv1d}}]]
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\toolbox\__init__.py", line 173, in synthesize
specs = self.synthesizer.synthesize_spectrograms(texts, embeds)
File "C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\inference.py", line 78, in synthesize_spectrograms
specs, alignments = self._model.my_synthesize(embeddings, texts)
File "C:\Users\Name\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\tacotron2.py", line 86, in my_synthesize
feed_dict=feed_dict)
File "C:\Users\Name\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\client\session.py", line 950, in run
run_metadata_ptr)
File "C:\Users\Name\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\client\session.py", line 1173, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\Name\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _do_run
run_metadata)
File "C:\Users\Name\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\client\session.py", line 1370, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node Tacotron_model/inference/encoder_convolutions/conv_layer_1_encoder_convolutions/conv1d/conv1d (defined at \Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\modules.py:421) ]]
[[Tacotron_model/inference/add/_269]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node Tacotron_model/inference/encoder_convolutions/conv_layer_1_encoder_convolutions/conv1d/conv1d (defined at \Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\modules.py:421) ]]
0 successful operations.
0 derived errors ignored.
Original stack trace for 'Tacotron_model/inference/encoder_convolutions/conv_layer_1_encoder_convolutions/conv1d/conv1d':
File "\Desktop\All\Spook\Real-Time-Voice-Cloning-master\demo_toolbox.py", line 32, in <module>
Toolbox(**vars(args))
File "\Desktop\All\Spook\Real-Time-Voice-Cloning-master\toolbox\__init__.py", line 51, in __init__
self.ui.start()
File "\Desktop\All\Spook\Real-Time-Voice-Cloning-master\toolbox\ui.py", line 497, in start
self.app.exec_()
File "\Desktop\All\Spook\Real-Time-Voice-Cloning-master\toolbox\__init__.py", line 173, in synthesize
specs = self.synthesizer.synthesize_spectrograms(texts, embeds)
File "\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\inference.py", line 77, in synthesize_spectrograms
self.load()
File "\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\inference.py", line 58, in load
self._model = Tacotron2(self.checkpoint_fpath, hparams)
File "\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\tacotron2.py", line 28, in __init__
split_infos=split_infos)
File "\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\tacotron.py", line 145, in initialize
encoder_outputs = encoder_cell(embedded_inputs, tower_input_lengths[i])
File "\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\architecture_wrappers.py", line 36, in __call__
conv_output = self._convolutions(inputs)
File "\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\modules.py", line 192, in __call__
"conv_layer_{}_".format(i + 1) + self.scope)
File "\Desktop\All\Spook\Real-Time-Voice-Cloning-master\synthesizer\models\modules.py", line 421, in conv1d
padding="same")
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\util\deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\layers\convolutional.py", line 218, in conv1d
return layer.apply(inputs)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 1479, in apply
return self.__call__(inputs, *args, **kwargs)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\layers\base.py", line 537, in __call__
outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 634, in __call__
outputs = call_fn(inputs, *args, **kwargs)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 146, in wrapper
), args, kwargs)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 446, in converted_call
return _call_unconverted(f, args, kwargs)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\autograph\impl\api.py", line 253, in _call_unconverted
return f(*args, **kwargs)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\layers\convolutional.py", line 373, in call
return super(Conv1D, self).call(inputs)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\keras\layers\convolutional.py", line 196, in call
outputs = self._convolution_op(inputs, self.kernel)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 1079, in __call__
return self.conv_op(inp, filter)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 635, in __call__
return self.call(inp, filter)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 234, in __call__
name=self.name)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 223, in _conv1d
name=name)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\util\deprecation.py", line 574, in new_func
return func(*args, **kwargs)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\util\deprecation.py", line 574, in new_func
return func(*args, **kwargs)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 1624, in conv1d
name=name)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 1161, in conv2d
data_format=data_format, dilations=dilations, name=name)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\framework\ops.py", line 3616, in create_op
op_def=op_def)
File "\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflow\python\framework\ops.py", line 2005, in __init__
    self._traceback = tf_stack.extract_stack()
**This is the error I get in my terminal when I press "Synthesize only" after a few recordings.**
It should be noted that I run the toolbox from the terminal without using the "python" prefix:
`> demo_toolbox.py` instead of `> python demo_toolbox.py`
I do this since `> python demo_toolbox.py` doesn't do anything.
Another weird error that occurs is that after the terminal prints out the error, the toolbox window extends downwards, going off the screen. Any attempts to resize the window will send it off-screen, and there is no way to get it back.
I'm sure I'm missing something obvious here, but I'm not too experienced with any of these libraries, so I don't know exactly what to do.
Thanks for the help. | closed | 2020-01-30T00:45:54Z | 2020-07-04T23:14:35Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/272 | [] | Ixiplious | 1 |
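The cuDNN error above states the compatibility rule TensorFlow enforces: the loaded runtime library must match the compiled major version, and its minor version must be equal or higher. The sketch below is a small diagnostic that parses such a log line and applies that rule — it is an illustration derived from the error text itself, not part of the toolbox, and the stated fix remains the one the error names (upgrade the cuDNN library, here to at least 7.4).

```python
import re

# Matches lines like:
#   "Loaded runtime CuDNN library: 7.2.1 but source was compiled with: 7.4.1"
# Captures runtime major.minor and compiled major.minor (patch is ignored).
_PATTERN = re.compile(
    r"Loaded runtime CuDNN library: (\d+)\.(\d+)(?:\.\d+)? "
    r"but source was compiled with: (\d+)\.(\d+)"
)

def cudnn_compatible(log_line: str) -> bool:
    """Apply TensorFlow's stated rule: major versions must match, and the
    runtime minor version must be equal or higher. Raises ValueError if
    the line is not a cuDNN version-mismatch message."""
    m = _PATTERN.search(log_line)
    if m is None:
        raise ValueError("not a cuDNN version-mismatch line")
    rt_major, rt_minor, cc_major, cc_minor = map(int, m.groups())
    return rt_major == cc_major and rt_minor >= cc_minor
```

For the report above, `cudnn_compatible(...)` returns `False` for runtime 7.2.1 against compiled 7.4.1, which is exactly why the convolution kernels fail to initialize.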
aio-libs/aiomysql | asyncio | 556 | AioMysql breaking under latest pymysql release | **Compiling it under Buildozer with Kivy; the following logs are from an Android phone:**
01-11 20:14:22.728 16052 16090 I python : Android kivy bootstrap done. name is main
01-11 20:14:22.728 16052 16090 I python : AND: Ran string
01-11 20:14:22.728 16052 16090 I python : Run user program, change dir and execute entrypoint
01-11 20:14:22.844 16052 16090 I python : Traceback (most recent call last):
01-11 20:14:22.844 16052 16090 I python : File "/home/dan/python/apps/sunsaturn/helloworld/.buildozer/android/app/main.py", line 5, in <module>
01-11 20:14:22.844 16052 16090 I python : File "/home/dan/python/apps/sunsaturn/helloworld/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/helloworld/aiomysql/__init__.py", line 32, in <module>
01-11 20:14:22.844 16052 16090 I python : File "/home/dan/python/apps/sunsaturn/helloworld/.buildozer/android/platform/build-armeabi-v7a/build/python-installs/helloworld/aiomysql/connection.py", line 19, in <module>
01-11 20:14:22.844 16052 16090 I python : ModuleNotFoundError: No module named 'pymysql.util'
01-11 20:14:22.844 16052 16090 I python : Python for android ended. | closed | 2021-01-12T03:28:45Z | 2022-01-13T17:35:46Z | https://github.com/aio-libs/aiomysql/issues/556 | [
"enhancement",
"pymysql"
] | syleishere | 1 |
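The `ModuleNotFoundError` above is the symptom of aiomysql importing `pymysql.util`, a module that newer PyMySQL releases no longer ship. A minimal guard can turn the late crash into an early, readable failure — the 1.0 version cutoff below is an assumption inferred from this report's timeframe, so verify it against your installed packages before relying on it:

```python
# aiomysql versions from around this report import `pymysql.util`, which
# later PyMySQL releases removed -- hence the ModuleNotFoundError above.
# The `>= 1` cutoff is an assumption; check your package versions.
def check_pymysql_compat(pymysql_version: str) -> None:
    major = int(pymysql_version.split(".")[0])
    if major >= 1:
        raise RuntimeError(
            "PyMySQL %s no longer provides pymysql.util; "
            "pin an older PyMySQL or upgrade aiomysql" % pymysql_version
        )
```

In a Buildozer build the same idea is usually applied as a pin in the spec's `requirements` line (e.g. pinning PyMySQL below the cutoff), rather than a runtime check.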
strawberry-graphql/strawberry | fastapi | 3,372 | CI: send coverage reports to Codecov once when all tests are done | Looks like Codecov is still failing a lot; we send coverage info for every single test. I think it would be best to collect all the coverage and then send it only once.
Not sure if this is possible, but it would be neat 😊 | closed | 2024-02-05T15:36:42Z | 2025-03-20T15:56:35Z | https://github.com/strawberry-graphql/strawberry/issues/3372 | [
"help wanted",
"good first issue"
] | patrick91 | 0 |
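One common shape for the "collect, then upload once" idea in the issue above is to have each test job publish its coverage file as an artifact and a single final job upload everything to Codecov. The sketch below is a hypothetical GitHub Actions fragment — job names, file names, and the matrix are assumptions, not Strawberry's actual workflow:

```yaml
jobs:
  tests:
    strategy:
      matrix: { group: [1, 2, 3] }
    steps:
      - run: pytest --cov --cov-report=xml:coverage-${{ matrix.group }}.xml
      - uses: actions/upload-artifact@v4
        with: { name: coverage-${{ matrix.group }}, path: coverage-*.xml }

  upload-coverage:
    needs: tests          # runs once, only after every test job finishes
    steps:
      - uses: actions/download-artifact@v4
        with: { pattern: coverage-*, merge-multiple: true }
      - uses: codecov/codecov-action@v4   # single upload of all reports
```

This reduces Codecov traffic to one upload per workflow run, which is usually what makes flaky-upload failures disappear.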
widgetti/solara | fastapi | 910 | Needed mutation detection improvements | With #595 merged, we'll move these todo items from the PR to issues as a reminder to look at them before releasing Solara 2.0.
TODO:
- [ ] Trigger calls to check_mutations() from several places: after an event handler (requires a change in reacton) and after a component run.
- [ ] We probably do not want two equals functions in solara/reacton, reconcile this.
- [ ] #983
- [ ] support reactive.get(copy=True) to always get a copy, even when _CHECK_MUTATIONS is False
- [ ] Do we need support reactive.get(reference=True) to always get a reference, or is the opt-out enough? | open | 2024-12-04T15:00:16Z | 2025-01-23T12:27:35Z | https://github.com/widgetti/solara/issues/910 | [] | iisakkirotko | 0 |
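The `reactive.get(copy=True)` item in the checklist above can be sketched as a copy-on-read accessor. This is a hypothetical minimal model of the idea — not Solara's actual `Reactive` implementation — showing why a deep copy makes caller-side mutation harmless even when mutation checking is disabled:

```python
from copy import deepcopy

class Reactive:
    """Minimal model of a reactive container whose get() can hand out a
    deep copy, so callers cannot mutate the stored value behind the
    library's back. A sketch of the TODO item, not Solara's real API."""

    def __init__(self, value):
        self._value = value

    def get(self, copy: bool = False):
        # copy=True -> mutations on the returned object never reach the
        # stored value, regardless of any mutation-checking flag.
        return deepcopy(self._value) if copy else self._value

    def set(self, value):
        self._value = value
```

Under this model, the remaining question in the checklist (`reference=True`) is just the explicit spelling of today's default: `get()` without `copy=True` already returns the stored reference.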