| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
Johnserf-Seed/TikTokDownload | api | 555 | Hope to add a feature for batch downloading videos from banned Douyin accounts | Currently, for a banned account the download links return 403; the videos can be downloaded after logging into the banned account. | open | 2023-09-21T15:17:53Z | 2023-12-26T11:59:13Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/555 | [
"不修复(wontfix)"
] | allcdn | 2 |
google-research/bert | nlp | 839 | Windows fatal exception: access violation | When I was running the MRPC example, this line of code reported a fatal error.
```py
with tf.io.gfile.GFile(FLAGS.input_meta_data_path, 'rb') as reader:
    input_meta_data = json.loads(reader.read().decode('utf-8'))
```
 | open | 2019-09-05T02:14:55Z | 2022-08-11T04:36:36Z | https://github.com/google-research/bert/issues/839 | [] | BruceLee66 | 2 |
openapi-generators/openapi-python-client | rest-api | 525 | Add support for basic auth | **Is your feature request related to a problem? Please describe.**
OpenAPI 3 supports expressing basic auth support: https://swagger.io/docs/specification/authentication/basic-authentication/
While basic auth is often not ideal for production, during development basic auth can be quite handy. Currently it is not possible to directly use basic auth with the generated Python client.
**Describe the solution you'd like**
Detect if an API supports basic auth and provide it as an alternative `AuthenticatedClient`.
An example implementation:
```py
from base64 import b64encode
from typing import Dict

import attr


@attr.s(auto_attribs=True)
class BasicAuthAuthenticatedClient(Client):
    """A Client which has been authenticated for use on secured endpoints"""

    username: str
    password: str

    def get_headers(self) -> Dict[str, str]:
        """Get headers to be used in authenticated endpoints"""
        encoded_credentials = b64encode(f"{self.username}:{self.password}".encode()).decode()
        return {"Authorization": f"Basic {encoded_credentials}", **self.headers}
```
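As a quick sanity check of the header value this produces (the credentials below are hypothetical, used purely to illustrate the encoding):

```python
from base64 import b64encode

# Hypothetical credentials; any username/password pair encodes the same way.
encoded = b64encode("user:pass".encode()).decode()
header_value = f"Basic {encoded}"
print(header_value)  # → Basic dXNlcjpwYXNz
```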
**Describe alternatives you've considered**
The `AuthenticatedClient` could be made to take either a `token` or `username`/`password`.
| closed | 2021-10-25T19:09:40Z | 2023-08-13T01:56:52Z | https://github.com/openapi-generators/openapi-python-client/issues/525 | [
"✨ enhancement",
"🍭 OpenAPI Compliance"
] | johnthagen | 0 |
alteryx/featuretools | scikit-learn | 2,755 | Is it possible to support Azure Synapse as a SQL dialect? | All the data in my business is on Azure SQL Synapse.
Is it possible to support this with Featuretools? Otherwise I can't use the package due to infrastructure constraints.
| open | 2024-10-07T09:20:04Z | 2024-10-10T09:13:23Z | https://github.com/alteryx/featuretools/issues/2755 | [
"new feature"
] | Fish-Soup | 1 |
seleniumbase/SeleniumBase | web-scraping | 3,226 | SeleniumBase freezing when initializing in the debugger | I'm having a problem when initializing the Driver. After installing the SeleniumBase library, running through the VS Code debugger works normally once. However, when I run it again through the debugger, it freezes when declaring the Driver with `uc=True`. Without that parameter, it runs normally.
Disclaimer: it runs normally if I run it without the debugger. | closed | 2024-10-27T02:40:27Z | 2024-10-27T03:49:34Z | https://github.com/seleniumbase/SeleniumBase/issues/3226 | [
"invalid usage",
"external",
"UC Mode / CDP Mode"
] | TheHolsback | 1 |
MaartenGr/BERTopic | nlp | 1,130 | LookupError in fit_transform | Hello,
Running `fit_transform` gave me the following output.
> LookupError Traceback (most recent call last) Cell In[14], line 1 ----> 1 topics, probabilities = model.fit_transform(cleaned_articles) File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\bertopic\_bertopic.py:366, in BERTopic.fit_transform(self, documents, embeddings, y) 363 documents = self._sort_mappings_by_frequency(documents) 365 # Extract topics by calculating c-TF-IDF --> 366 self._extract_topics(documents) 368 # Reduce topics 369 if self.nr_topics: File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\bertopic\_bertopic.py:2980, in BERTopic._extract_topics(self, documents) 2971 """ Extract topics from the clusters using a class-based TF-IDF 2972 2973 Arguments: (...) 2977 c_tf_idf: The resulting matrix giving a value (importance score) for each word per topic 2978 """ 2979 documents_per_topic = documents.groupby(['Topic'], as_index=False).agg({'Document': ' '.join}) -> 2980 self.c_tf_idf_, words = self._c_tf_idf(documents_per_topic) 2981 self.topic_representations_ = self._extract_words_per_topic(words, documents) 2982 self._create_topic_vectors() File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\bertopic\_bertopic.py:3121, in BERTopic._c_tf_idf(self, documents_per_topic, fit, partial_fit)
A model is produced — I know because when I save it the file is not empty (it is some MBs in size) — but the error propagates to subsequent analyses. For example, running `model.get_topic(6)` produces the following output:
> TypeError Traceback (most recent call last) Cell In[18], line 1 ----> 1 model.get_topic(6) File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\bertopic\_bertopic.py:1346, in BERTopic.get_topic(self, topic) 1331 """ Return top n words for a specific topic and their c-TF-IDF scores 1332 1333 Arguments: (...) 1343 ``` 1344 """ 1345 check_is_fitted(self) -> 1346 if topic in self.topic_representations_: 1347 return self.topic_representations_[topic] 1348 else: TypeError: argument of type 'NoneType' is not iterable
What could have happened here? | closed | 2023-03-27T21:27:19Z | 2023-05-23T09:25:53Z | https://github.com/MaartenGr/BERTopic/issues/1130 | [] | PanosP | 4 |
rougier/from-python-to-numpy | numpy | 48 | Typos 3.3 | > Thus, if you need fancy indexing, it's better to keep a copy of **your** fancy index (especially if it was complex to compute it) and to work with it:
...
> If you are unsure if the result of **your** indexing is a view or a copy, you can check what is the base of your result. If it is None, then you result is a copy:
...
> However, if your arrays are big, then you have to be careful with such **expressions** and wonder if you can do it differently | closed | 2017-01-26T17:06:17Z | 2017-01-26T19:47:40Z | https://github.com/rougier/from-python-to-numpy/issues/48 | [] | pylang | 1 |
google-research/bert | tensorflow | 557 | masked_lm_accuracy is low at 0.51, but next_sentence_accuracy is high at 0.93 | How can this be explained?
My training set is about 1M lines, running 50,000 steps with a batch size of 32. | closed | 2019-04-06T03:11:54Z | 2019-10-24T11:56:38Z | https://github.com/google-research/bert/issues/557 | [] | SeekPoint | 4 |
google-research/bert | tensorflow | 1,250 | Where is the pre-trained model tf_examples.tfrecord? | I tried running this command:
```shell
python run_pretraining.py \
  --input_file=/tmp/tf_examples.tfrecord \
  --output_dir=/tmp/pretraining_output \
  --do_train=True \
  --do_eval=True \
  --bert_config_file=$BERT_BASE_DIR/bert_config.json \
  --init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt \
  --train_batch_size=32 \
  --max_seq_length=128 \
  --max_predictions_per_seq=20 \
  --num_train_steps=20 \
  --num_warmup_steps=10 \
  --learning_rate=2e-5
```
I have already downloaded uncased_L-12_H-768_A-12_2.zip, which has the .json and .ckpt files, but it does not have tf_examples.tfrecord.
Thank you .. | open | 2021-08-04T10:26:50Z | 2022-06-05T09:05:49Z | https://github.com/google-research/bert/issues/1250 | [] | nicejava | 1 |
vitalik/django-ninja | rest-api | 1,027 | Use `class Meta` or `class Config` | So the documentation states for `ModelSchema` we should define it like
```python
from django.contrib.auth.models import User
from ninja import ModelSchema
class UserSchema(ModelSchema):
    class Meta:
        model = User
        fields = ['id', 'username', 'first_name', 'last_name']
```
However, I am converting everything in the response to camelCase for my API endpoints.
The documentation specifies that we should override pydantic configs with `class Config`:
```python
class UserSchema(ModelSchema):
    class Config:
        model = User
        model_fields = ["id", "email"]
        alias_generator = to_camel
        populate_by_name = True  # !!!!!! <------
```
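For reference, the `alias_generator` in the snippet above only rewrites field names; a minimal snake_case-to-camelCase converter (my own sketch, not necessarily the `to_camel` helper imported in my project) behaves like this:

```python
def to_camel(name: str) -> str:
    # Split on underscores and capitalize every word after the first.
    head, *rest = name.split("_")
    return head + "".join(word.capitalize() for word in rest)

print(to_camel("first_name"))  # → firstName
print(to_camel("id"))          # → id
```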
Which one are we supposed to use? I would like to have consistency across my codebase.
Can we also use `class Config` instead of `class Meta` for any Ninja Schema?
Also, by the way, thank you so much for developing this library — I absolutely love it! It's the best way to build a web framework using Python.
I love the FastAPI syntax and the Django ORM, and this lib lets you have both : )
| open | 2023-12-22T17:14:02Z | 2023-12-26T18:14:08Z | https://github.com/vitalik/django-ninja/issues/1027 | [] | alexwolf22 | 1 |
coqui-ai/TTS | python | 3,233 | [Bug] In tokenizer.py (line 180), `_abbreviations` is missing a "zh-cn" option, causing the validation dataset to be blank | ### Describe the bug

In tokenizer.py (line 180), `_abbreviations` is missing a "zh-cn" option, which causes the validation dataset to be blank.
### To Reproduce
python ..\TTS\recipes\ljspeech\xtts_v2\train_gpt_xtts.py
### Expected behavior
Model training completes successfully.
### Logs
_No response_
### Environment
```shell
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.1.0+cpu",
"TTS": "0.20.4",
"numpy": "1.22.0"
},
"System": {
"OS": "Windows",
"architecture": [
"64bit",
"WindowsPE"
],
"processor": "Intel64 Family 6 Model 63 Stepping 2, GenuineIntel",
"python": "3.9.13",
"version": "10.0.22000"
}
}
```
### Additional context
_No response_ | closed | 2023-11-16T02:20:16Z | 2023-11-21T21:24:18Z | https://github.com/coqui-ai/TTS/issues/3233 | [
"bug"
] | jackyin68 | 5 |
NullArray/AutoSploit | automation | 818 | Divided by zero exception88 | Error: Attempted to divide by zero.88 | closed | 2019-04-19T16:01:10Z | 2019-04-19T16:37:33Z | https://github.com/NullArray/AutoSploit/issues/818 | [] | AutosploitReporter | 0 |
jupyter/nbgrader | jupyter | 995 | Released assignments are still there after nbgrader db assignment remove | OS: Ubuntu 16.04.4 LTS
nbgrader version 0.5.4
jupyterhub version 0.9.0
jupyter notebook version 5.5.0
Expected behavior:
When I use the command `nbgrader db assignment remove <assignment_name>`, it must remove all the relevant files and mentions.
Actual behavior:
When I use the command `nbgrader db assignment remove <assignment_name>` and then go to the "Assignments" tab, the assignments are still there under "Released assignments" and I can "Fetch" them. I have deleted the files in the "source" and "released" folders as well as the user-downloaded files. I see now that they are under `/srv/nbgrader/exchange/<course_dir>/outbound/`, but I do not see the value of keeping those after removing the assignment. Actually, this can be confusing for new students.
Thanks!
| open | 2018-07-14T14:15:23Z | 2022-06-23T10:21:11Z | https://github.com/jupyter/nbgrader/issues/995 | [
"enhancement",
"documentation"
] | maryamdev | 3 |
huggingface/transformers | python | 36,640 | [Feature Request]: refactor _update_causal_mask to a public utility | ### Feature request
refactor _update_causal_mask to a public utility
### Motivation
After this PR https://github.com/huggingface/transformers/pull/35235/files#diff-06392bad3b9e97be9ade60d4ac46f73b6809388f4d507c2ba1384ab872711c51
all the attention implementations were refactored to use ALL_ATTENTION_FUNCTIONS, and people can register their own implementation very easily.
I notice that there is still another function, _update_causal_mask, that is copy-and-pasted everywhere and is related to the attention modules.
If someone registers an attention implementation, _update_causal_mask will add an attention_mask whenever it is not flash_attention_2, so I hope this function can be refactored too.
### Your contribution
I can do some testing and submit a PR; we could add a Ulysses implementation as a third-party example. | open | 2025-03-11T07:14:39Z | 2025-03-12T15:13:09Z | https://github.com/huggingface/transformers/issues/36640 | [
"Feature request"
] | Irvingwangjr | 2 |
proplot-dev/proplot | data-visualization | 391 | When subplotting, all lines are plotted only in the last image plot | Very basic example code: I am plotting an image on each axis. Additionally, for each axis, I am plotting some lines with `coco.plot.showAnns(anns)`, which in turn calls several `pplt.pyplot.plot()` functions.
For some reason, instead of having lines plotted in each image, all of them are plotted only in the last one.
| closed | 2022-09-23T15:46:02Z | 2023-03-03T22:45:39Z | https://github.com/proplot-dev/proplot/issues/391 | [
"support"
] | Robotatron | 5 |
kynan/nbstripout | jupyter | 10 | ions | closed | 2016-02-04T04:53:54Z | 2016-02-15T21:51:18Z | https://github.com/kynan/nbstripout/issues/10 | [
"resolution:invalid"
] | mforbes | 0 | |
piskvorky/gensim | nlp | 3,001 | save_word2vec_format TypeError when specifying count in KeyedVectors initialization | #### Problem description
When using preallocation for the initialization of KeyedVectors, the model cannot be stored with `save_word2vec_format`.
This prevents iteratively filling the model with `add_vector` as it would incur a big performance hit.
#### Steps/code/corpus to reproduce
Minimal example:
```python
from gensim.models import KeyedVectors
from gensim.test.utils import get_tmpfile
import numpy as np
keys = [str(x) for x in range(0, 100)]
vectors = np.random.rand(100, 25)
# This works as expected
model_1 = KeyedVectors(vector_size=25)
model_1.add_vectors(keys, vectors)
model_1.save_word2vec_format(get_tmpfile("test1.w2v"))
# But this fails during storing
model_2 = KeyedVectors(vector_size=25, count=100)
model_2.add_vectors(keys, vectors)
model_2.save_word2vec_format(get_tmpfile("test2.w2v"))
```
Traceback of model 2:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-34-687b0e2e8c4f> in <module>
15 model_2= KeyedVectors(vector_size=25, count=100)
16 model_2.add_vectors(keys, vectors)
---> 17 model_2.save_word2vec_format(get_tmpfile("test2.w2v"))
~/miniconda3/envs/myenv/lib/python3.8/site-packages/gensim/models/keyedvectors.py in save_word2vec_format(self, fname, fvocab, binary, total_vec, write_header, prefix, append, sort_attr)
1573 fout.write(f"{total_vec} {self.vector_size}\n".encode('utf8'))
1574 for key in keys_to_write:
-> 1575 key_vector = self[key]
1576 if binary:
1577 fout.write(f"{prefix}{key} ".encode('utf8') + key_vector.astype(REAL).tobytes())
~/miniconda3/envs/myenv/lib/python3.8/site-packages/gensim/models/keyedvectors.py in __getitem__(self, key_or_keys)
382 return self.get_vector(key_or_keys)
383
--> 384 return vstack([self.get_vector(key) for key in key_or_keys])
385
386 def get_index(self, key, default=None):
TypeError: 'NoneType' object is not iterable
```
#### Versions
```
Linux-3.10.0-1062.9.1.el7.x86_64-x86_64-with-glibc2.10
Python 3.8.5 | packaged by conda-forge | (default, Aug 21 2020, 18:21:27)
[GCC 7.5.0]
Bits 64
NumPy 1.19.1
SciPy 1.5.2
gensim 4.0.0beta
FAST_VERSION 1
```
| open | 2020-11-19T09:35:55Z | 2022-07-04T08:25:54Z | https://github.com/piskvorky/gensim/issues/3001 | [] | Iseratho | 4 |
python-restx/flask-restx | api | 179 | How to define a generic class in my return class | The unified return object we defined in the project is a generic object. How should I express it on swagger? The generic class is as follows:
```py
from typing import TypeVar, Generic

from flask import jsonify

T = TypeVar('T')


class ResponseResult(Generic[T]):
    errorCode: str
    errorMessage: str
    errorType: str
    data: T
```
How should I use api.model to define my generic class?
```py
user.model('ResponseResult', {
    ....
})
```
 | open | 2020-07-21T12:32:41Z | 2020-07-21T12:32:41Z | https://github.com/python-restx/flask-restx/issues/179 | [
"question"
] | somta | 0 |
TheKevJames/coveralls-python | pytest | 251 | Update 3.0.0 Release Notes | Thank you for the great work on this package.
Upon running a build today on GitHub actions, coveralls.io submission failed with a `422 Client Error: Unprocessable Entity` error when submitting to coveralls.io.
The release notes look to have a typo (it says to set `service-name` vs `service`) on the command to override the service name.
With my GitHub actions build I had to run:
```
coveralls --service=github
```
When setting this option, the submission to coveralls started working again.
| closed | 2021-01-12T13:17:55Z | 2021-01-15T21:25:27Z | https://github.com/TheKevJames/coveralls-python/issues/251 | [] | davidmezzetti | 4 |
pydantic/pydantic-ai | pydantic | 205 | logfire included in base install | Hey, I just wanted to start off by saying thank you for making this amazing package. Contrary to the installation docs (see https://ai.pydantic.dev/install/), it looks like `logfire-api` is installed with the base version of `pydantic-ai`.
```
matthewlemay@Matthews-Laptop-2 ~/G/brugge (main)> uv add pydantic-ai
Resolved 219 packages in 223ms
Built brugge @ file:///Users/matthewlemay/Github/brugge
Prepared 3 packages in 230ms
Uninstalled 1 package in 0.71ms
Installed 13 packages in 33ms
~ brugge==0.1.0 (from file:///Users/matthewlemay/Github/brugge)
+ cachetools==5.5.0
+ colorama==0.4.6
+ eval-type-backport==0.2.0
+ google-auth==2.36.0
+ griffe==1.5.1
+ groq==0.13.0
+ logfire-api==2.6.2
+ pyasn1==0.6.1
+ pyasn1-modules==0.4.1
+ pydantic-ai==0.0.12
+ pydantic-ai-slim==0.0.12
+ rsa==4.9
``` | closed | 2024-12-10T18:15:32Z | 2024-12-10T19:12:26Z | https://github.com/pydantic/pydantic-ai/issues/205 | [
"question"
] | mplemay | 2 |
davidsandberg/facenet | tensorflow | 281 | How to get the model in .pb format? The model I got from facenet_train_classifier.py is in .ckpt format | closed | 2017-05-18T03:43:28Z | 2017-11-21T18:56:07Z | https://github.com/davidsandberg/facenet/issues/281 | [] | bingjilin | 5 |
jina-ai/serve | fastapi | 5,554 | Specify openapi_url to customize openapi.json serving | Hi, is there a way to specify a custom location for the openapi.json, in a similar fashion to what FastAPI does? Any help would be appreciated. Often you want different gateway permissions for your Swagger docs served at /docs, and it's handy to have all assets served there.
```
app = FastAPI(
    title="..",
    description="..",
    version="0.1",
    openapi_url="/docs/openapi.json",
)
``` | closed | 2022-12-22T08:31:56Z | 2022-12-22T15:55:30Z | https://github.com/jina-ai/serve/issues/5554 | [] | masc-it | 1 |
mwaskom/seaborn | matplotlib | 3,644 | Histogram with fixed binwidth: unexpected results for the last column | When creating a simple histogram with a binwidth of 1, I was surprised that the last two numbers were merged into a single column.
`fig = sns.histplot([0,1,1,1,3,4,6,7], binwidth=1)`

Similarly, the last bar in the other example is placed directly adjacent to the previous one although I would expect a gap here:
`fig = sns.histplot([0,1,1,1,3,4,6], binwidth=1)`

Is this a bug or is there an explanation for this? | closed | 2024-02-29T14:05:24Z | 2024-03-01T12:26:25Z | https://github.com/mwaskom/seaborn/issues/3644 | [] | KathSe1984 | 4 |
zappa/Zappa | django | 661 | [Migrated] `exclude` setting doesn't work with packages installed as editable (from github, etc.,) | Originally from: https://github.com/Miserlou/Zappa/issues/1680 by [jnoortheen](https://github.com/jnoortheen)
<!--- Provide a general summary of the issue in the Title above -->
## Context
I have installed `lambda-packages` from its `git` repo URL, but this makes it end up in the final zip package. Having inspected the code, editable packages are copied without considering the exclude setting.
## Expected Behavior
`lambda_packages` folder should completely be not present in the final zip package.
## Actual Behavior
It ends up in the zip, making it more than 100 MB in size.
## Possible Fix
I am creating a PR to consider the exclude setting for the editable packages copying as well.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.47
* Operating System and Python version: Manjaro & 3.7
| closed | 2021-02-20T12:32:35Z | 2024-04-13T17:36:41Z | https://github.com/zappa/Zappa/issues/661 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
remsky/Kokoro-FastAPI | fastapi | 105 | add timestamps for each word | I would like to have timestamps for each word in the generated text-to-speech output. This would improve the accuracy of syncing the audio with other media.
I could also submit this as a PR if I get some guidance.
| closed | 2025-01-31T12:52:53Z | 2025-02-02T09:59:20Z | https://github.com/remsky/Kokoro-FastAPI/issues/105 | [
"enhancement"
] | merouanezouaid | 1 |
RobertCraigie/prisma-client-py | asyncio | 424 | Can't create nested relation | ## Bug description
When trying to create entity `Foo` with relation `bar` and without relation `baz` it errors with:
```
{
"errors": [
{
"error": 'Error in query graph construction: QueryParserError(QueryParserError { path: QueryPath { segments: ["Mutation", "createOneFoo", "data"] }, error_kind: InputUnionParseError { parsing_errors: [QueryParserError { path: QueryPath { segments: ["Mutation", "createOneFoo", "data", "FooCreateInput", "bar"] }, error_kind: FieldNotFoundError }, QueryParserError { path: QueryPath { segments: ["Mutation", "createOneFoo", "data", "FooUncheckedCreateInput", "baz"] }, error_kind: FieldNotFoundError }] } })',
"user_facing_error": {
"is_panic": False,
"message": "Failed to validate the query: `Unable to match input value to any allowed input type for the field. Parse errors: [Query parsing/validation error at `Mutation.createOneFoo.data.FooCreateInput.bar`: Field does not exist on enclosing type., Query parsing/validation error at `Mutation.createOneFoo.data.FooUncheckedCreateInput.baz`: Field does not exist on enclosing type.]` at `Mutation.createOneFoo.data`",
"meta": {
"query_validation_error": "Unable to match input value to any allowed input type for the field. Parse errors: [Query parsing/validation error at `Mutation.createOneFoo.data.FooCreateInput.bar`: Field does not exist on enclosing type., Query parsing/validation error at `Mutation.createOneFoo.data.FooUncheckedCreateInput.baz`: Field does not exist on enclosing type.]",
"query_position": "Mutation.createOneFoo.data",
},
"error_code": "P2009",
},
}
]
}
```
## How to reproduce
1. Create schema with entity `Foo` that has relation `bar` and `baz`
2. Call `prisma.foo.create(data={ "bar": { "create": {} } })`
3. See error
## Expected behavior
To create entity with nested relation
## Prisma information
```prisma
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
generator client {
provider = "prisma-client-py"
interface = "asyncio"
}
model Bar {
id String @id
created_at DateTime @default(now())
updated_at DateTime @updatedAt
}
model Baz {
id String @id
created_at DateTime @default(now())
updated_at DateTime @updatedAt
}
model Foo {
id String @id
created_at DateTime @default(now())
updated_at DateTime @updatedAt
bar Bar? @relation(fields: [bar_id], references: [id])
bar_id String?
baz Baz? @relation(fields: [baz_id], references: [id])
baz_id String?
}
```
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: Debian
- Database: PostgreSQL, MySQL, MariaDB or SQLite
- Python version: 3.8.12
- Prisma version:
```
pisma : 3.13.0
prisma client python : 0.6.6
platform : debian-openssl-1.1.x
engines : efdf9b1183dddfd4258cd181a72125755215ab7b
install path : /home/gitpod/.pyenv/versions/3.8.12/lib/python3.8/site-packages/prisma
installed extras : []
```
| closed | 2022-06-11T20:19:29Z | 2022-06-26T09:41:46Z | https://github.com/RobertCraigie/prisma-client-py/issues/424 | [
"kind/question"
] | iddan | 4 |
vimalloc/flask-jwt-extended | flask | 148 | Should be able to unset access and refresh token cookies independently. | I am looking specifically to be able to unset access token cookies without unsetting refresh token cookies.
My reason for this is that I am handling JWTs before dispatching to the view function (I have written a JWT session extension), and I would like to return a 401 when I receive an expired token, removing only the access tokens in that response. This preserves the refresh tokens should the user choose to refresh, while allowing subsequent requests to be made without an access token (empty session) rather than with an invalid access token (unresolvable 401).
I am proposing introducing two new functions:
unset_access_cookies
unset_refresh_cookies
symmetrical to the `set_*_cookies` functions, in addition to the existing `unset_jwt_cookies` function.
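For concreteness, here is a minimal sketch of what the proposed `unset_access_cookies` could do. The cookie names below assume the library's default configuration; the real implementation would read names, paths, and domains from the app's JWT config, as the existing helpers do:

```python
def unset_access_cookies(response):
    # Expire only the access-token cookie (and its CSRF double-submit
    # cookie), leaving any refresh-token cookies untouched.
    response.set_cookie('access_token_cookie', value='', expires=0)
    response.set_cookie('csrf_access_token', value='', expires=0)
    return response
```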
I have a work-around in place, but it's far more elegant to make this change and I think it's probably generally useful.
PR incoming. | closed | 2018-05-05T06:15:00Z | 2018-05-05T17:47:46Z | https://github.com/vimalloc/flask-jwt-extended/issues/148 | [] | matthewstory | 1 |
ultralytics/yolov5 | pytorch | 13,167 | How can a model trained on Ultralytics HUB perform inference prediction on the test set? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I have successfully trained several models on Ultralytics HUB, and now I want to run inference on the final test set. However, Ultralytics HUB can only perform online inference by uploading a few images. I found in the docs that entering the relevant command locally can perform local inference, such as `python segment/predict.py --weights yolov5m-seg.pt --data data/images/bus.jpg`, where the weights are replaced with the trained best.pt file and the data is replaced with a custom test set. However, during code execution, it shows
YOLOv5n6u summary (fused): 253 layers, 4126316 parameters, 0 gradients, 7.2 GFLOPs
Traceback (most recent call last):
File "D:\Desktop\YOLO5\yolov5\detect.py", line 312, in <module>
main(opt)
File "D:\Desktop\YOLO5\yolov5\detect.py", line 307, in main
run(**vars(opt))
File "C:\Users\Y\AppData\Local\anaconda3\envs\yolov5\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:\Desktop\YOLO5\yolov5\detect.py", line 200, in run
s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string
~~~~~^^^^^^^^
KeyError: 5059
However, when I use the best.pt file of a model trained via the local command line, the above method works for inference. Which step went wrong? Is it possible that the best.pt files from the two kinds of training cannot be used interchangeably? Or are there other methods for batch testing on Ultralytics HUB?
### Additional
_No response_ | closed | 2024-07-05T02:39:14Z | 2024-10-20T19:49:32Z | https://github.com/ultralytics/yolov5/issues/13167 | [
"question",
"Stale"
] | Aq114 | 3 |
huggingface/datasets | numpy | 7,357 | Python process aborted with GIL issue when using image dataset | ### Describe the bug
The issue is visible only with the latest `datasets==3.2.0`.
When using image dataset the Python process gets aborted right before the exit with the following error:
```
Fatal Python error: PyGILState_Release: thread state 0x7fa1f409ade0 must be current when releasing
Python runtime state: finalizing (tstate=0x0000000000ad2958)
Thread 0x00007fa33d157740 (most recent call first):
<no Python frame>
Extension modules: numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._boun
ded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, pyarrow.lib, pandas._libs.tslibs.ccalendar, pandas._libs.ts
libs.np_datetime, pandas._libs.tslibs.dtypes, pandas._libs.tslibs.base, pandas._libs.tslibs.nattype, pandas._libs.tslibs.timezones, pandas._libs.tslibs.fields, pandas._libs.tslibs.timedeltas, pandas._libs.t
slibs.tzconversion, pandas._libs.tslibs.timestamps, pandas._libs.properties, pandas._libs.tslibs.offsets, pandas._libs.tslibs.strptime, pandas._libs.tslibs.parsing, pandas._libs.tslibs.conversion, pandas._l
ibs.tslibs.period, pandas._libs.tslibs.vectorized, pandas._libs.ops_dispatch, pandas._libs.missing, pandas._libs.hashtable, pandas._libs.algos, pandas._libs.interval, pandas._libs.lib, pyarrow._compute, pan
das._libs.ops, pandas._libs.hashing, pandas._libs.arrays, pandas._libs.tslib, pandas._libs.sparse, pandas._libs.internals, pandas._libs.indexing, pandas._libs.index, pandas._libs.writers, pandas._libs.join,
pandas._libs.window.aggregations, pandas._libs.window.indexers, pandas._libs.reshape, pandas._libs.groupby, pandas._libs.json, pandas._libs.parsers, pandas._libs.testing, charset_normalizer.md, requests.pa
ckages.charset_normalizer.md, requests.packages.chardet.md, yaml._yaml, markupsafe._speedups, PIL._imaging, torch._C, torch._C._dynamo.autograd_compiler, torch._C._dynamo.eval_frame, torch._C._dynamo.guards
, torch._C._dynamo.utils, torch._C._fft, torch._C._linalg, torch._C._nested, torch._C._nn, torch._C._sparse, torch._C._special, sentencepiece._sentencepiece, sklearn.__check_build._check_build, psutil._psut
il_linux, psutil._psutil_posix, scipy._lib._ccallback_c, scipy.sparse._sparsetools, _csparsetools, scipy.sparse._csparsetools, scipy.linalg._fblas, scipy.linalg._flapack, scipy.linalg.cython_lapack, scipy.l
inalg._cythonized_array_utils, scipy.linalg._solve_toeplitz, scipy.linalg._decomp_lu_cython, scipy.linalg._matfuncs_sqrtm_triu, scipy.linalg.cython_blas, scipy.linalg._matfuncs_expm, scipy.linalg._decomp_up
date, scipy.sparse.linalg._dsolve._superlu, scipy.sparse.linalg._eigen.arpack._arpack, scipy.sparse.linalg._propack._spropack, scipy.sparse.linalg._propack._dpropack, scipy.sparse.linalg._propack._cpropack,
scipy.sparse.linalg._propack._zpropack, scipy.sparse.csgraph._tools, scipy.sparse.csgraph._shortest_path, scipy.sparse.csgraph._traversal, scipy.sparse.csgraph._min_spanning_tree, scipy.sparse.csgraph._flo
w, scipy.sparse.csgraph._matching, scipy.sparse.csgraph._reordering, scipy.special._ufuncs_cxx, scipy.special._ufuncs, scipy.special._specfun, scipy.special._comb, scipy.special._ellip_harm_2, scipy.spatial
._ckdtree, scipy._lib.messagestream, scipy.spatial._qhull, scipy.spatial._voronoi, scipy.spatial._distance_wrap, scipy.spatial._hausdorff, scipy.spatial.transform._rotation, scipy.optimize._group_columns, s
cipy.optimize._trlib._trlib, scipy.optimize._lbfgsb, _moduleTNC, scipy.optimize._moduleTNC, scipy.optimize._cobyla, scipy.optimize._slsqp, scipy.optimize._minpack, scipy.optimize._lsq.givens_elimination, sc
ipy.optimize._zeros, scipy.optimize._highs.cython.src._highs_wrapper, scipy.optimize._highs._highs_wrapper, scipy.optimize._highs.cython.src._highs_constants, scipy.optimize._highs._highs_constants, scipy.l
inalg._interpolative, scipy.optimize._bglu_dense, scipy.optimize._lsap, scipy.optimize._direct, scipy.integrate._odepack, scipy.integrate._quadpack, scipy.integrate._vode, scipy.integrate._dop, scipy.integr
ate._lsoda, scipy.interpolate._fitpack, scipy.interpolate._dfitpack, scipy.interpolate._bspl, scipy.interpolate._ppoly, scipy.interpolate.interpnd, scipy.interpolate._rbfinterp_pythran, scipy.interpolate._r
gi_cython, scipy.special.cython_special, scipy.stats._stats, scipy.stats._biasedurn, scipy.stats._levy_stable.levyst, scipy.stats._stats_pythran, scipy._lib._uarray._uarray, scipy.stats._ansari_swilk_statis
tics, scipy.stats._sobol, scipy.stats._qmc_cy, scipy.stats._mvn, scipy.stats._rcont.rcont, scipy.stats._unuran.unuran_wrapper, scipy.ndimage._nd_image, _ni_label, scipy.ndimage._ni_label, sklearn.utils._isf
inite, sklearn.utils.sparsefuncs_fast, sklearn.utils.murmurhash, sklearn.utils._openmp_helpers, sklearn.metrics.cluster._expected_mutual_info_fast, sklearn.preprocessing._csr_polynomial_expansion, sklearn.p
reprocessing._target_encoder_fast, sklearn.metrics._dist_metrics, sklearn.metrics._pairwise_distances_reduction._datasets_pair, sklearn.utils._cython_blas, sklearn.metrics._pairwise_distances_reduction._bas
e, sklearn.metrics._pairwise_distances_reduction._middle_term_computer, sklearn.utils._heap, sklearn.utils._sorting, sklearn.metrics._pairwise_distances_reduction._argkmin, sklearn.metrics._pairwise_distanc
es_reduction._argkmin_classmode, sklearn.utils._vector_sentinel, sklearn.metrics._pairwise_distances_reduction._radius_neighbors, sklearn.metrics._pairwise_distances_reduction._radius_neighbors_classmode, s
klearn.metrics._pairwise_fast, PIL._imagingft, google._upb._message, h5py._errors, h5py.defs, h5py._objects, h5py.h5, h5py.utils, h5py.h5t, h5py.h5s, h5py.h5ac, h5py.h5p, h5py.h5r, h5py._proxy, h5py._conv,
h5py.h5z, h5py.h5a, h5py.h5d, h5py.h5ds, h5py.h5g, h5py.h5i, h5py.h5o, h5py.h5f, h5py.h5fd, h5py.h5pl, h5py.h5l, h5py._selector, _cffi_backend, pyarrow._parquet, pyarrow._fs, pyarrow._azurefs, pyarrow._hdfs
, pyarrow._gcsfs, pyarrow._s3fs, multidict._multidict, propcache._helpers_c, yarl._quoting_c, aiohttp._helpers, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket, frozenlist._frozenlist, xxhash
._xxhash, pyarrow._json, pyarrow._acero, pyarrow._csv, pyarrow._dataset, pyarrow._dataset_orc, pyarrow._parquet_encryption, pyarrow._dataset_parquet_encryption, pyarrow._dataset_parquet, regex._regex, scipy
.io.matlab._mio_utils, scipy.io.matlab._streams, scipy.io.matlab._mio5_utils, PIL._imagingmath, PIL._webp (total: 236)
Aborted (core dumped)
```
### Steps to reproduce the bug
1. Install `datasets==3.2.0`
2. Run the following script:
```python
import datasets

DATASET_NAME = "phiyodr/InpaintCOCO"
NUM_SAMPLES = 10

def preprocess_fn(example):
    return {
        "prompts": example["inpaint_caption"],
        "images": example["coco_image"],
        "masks": example["mask"],
    }

default_dataset = datasets.load_dataset(
    DATASET_NAME, split="test", streaming=True
).filter(lambda example: example["inpaint_caption"] != "").take(NUM_SAMPLES)

test_data = default_dataset.map(
    lambda x: preprocess_fn(x), remove_columns=default_dataset.column_names
)

for data in test_data:
    print(data["prompts"])
```
### Expected behavior
The script should not hang or crash.
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.31
- Python version: 3.11.0
- `huggingface_hub` version: 0.25.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.2.0 | open | 2025-01-06T11:29:30Z | 2025-03-08T15:59:36Z | https://github.com/huggingface/datasets/issues/7357 | [] | AlexKoff88 | 1 |
Teemu/pytest-sugar | pytest | 173 | Keep consistency in the output | Right now, when a test fails the output looks like this:
```
...
Results (2.34s):
5 passed
1 failed
- path/to/your_test.py:666 test_something_wrong
```
The way the failing test is displayed (ie `PATH:LINE TEST`) is not the *natural* way pytest displays *test paths* (ie `PATH::TEST`), and I think it is impractical for a couple of reasons:
- I frequently need to copy/paste this line to pass it as an argument to a new pytest invocation in my shell, for example with new debugging options. Then I have to remove the line number, add a colon, and delete a space. Once is not a big deal, but it is a bit annoying when you do this dozens of times a day.
- Using thing like [kitty hints](https://sw.kovidgoyal.net/kitty/kittens/hints.html), the space in the report makes kitty only catch the filename, and not the whole *test path*, making the *hint* feature useless in that situation.
A more practical way to display the report would be to keep the *test path* the way pytest uses it. For example, what do you think of this output?
```
...
Results (2.34s):
5 passed
1 failed
- path/to/your_test.py::test_something_wrong :666
```
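For reference, a rough sketch of the conversion I currently do by hand (the regex and function name are purely illustrative); since it is this mechanical, the report could arguably just emit the node id directly:

```python
import re

def to_node_id(report_line: str) -> str:
    """Turn '- path/to/your_test.py:666 test_name' into the pytest
    node id 'path/to/your_test.py::test_name'."""
    match = re.match(r"-?\s*(?P<path>\S+?):(?P<line>\d+)\s+(?P<name>\S+)",
                     report_line.strip())
    if not match:
        raise ValueError(f"unrecognized report line: {report_line!r}")
    return f"{match['path']}::{match['name']}"
```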
If you agree, I am willing to make a pull request. What do you think?
Cheers | open | 2019-03-26T11:08:47Z | 2020-08-25T18:28:34Z | https://github.com/Teemu/pytest-sugar/issues/173 | [
"question"
] | azmeuk | 11 |
jonaswinkler/paperless-ng | django | 1,323 | [FEATURE] Make it possible to individually change the tesseract language to (re)ocr | CURRENT SITUATION:
When setting up paperless-ng, there is only one place in the config to set the tesseract languages to use.
My current config (since I have tons of scientific documents in different languages) is...
`PAPERLESS_OCR_LANGUAGE=bul+cat+chi_sim+dan+deu+eng+est+fin+fra+ita+jpn+kor+lat+lav+nld+nor+osd+pol+por+rus+tur`
... which is obviously not ideal and does make mistakes.
WANTED SITUATION:
Identify the language before OCR-ing, and if identification fails, fall back to the default settings...
(I know tesseract can identify the script)
`PAPERLESS_OCR_LANGUAGE=eng+deu+fra+lat`
... for instance
And to be able to re-ocr documents on individual basis with a different language set if needed.
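To make the wanted behaviour concrete, here is a minimal sketch of the fallback logic (the `detect_language` callable is hypothetical; it would wrap tesseract's script/language detection):

```python
DEFAULT_LANGS = "eng+deu+fra+lat"

def ocr_languages_for(document, detect_language):
    """Pick a per-document tesseract language string, falling back to
    the configured defaults when detection fails."""
    detected = detect_language(document)  # e.g. wraps tesseract's script detection
    return detected or DEFAULT_LANGS
```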
| open | 2021-09-17T11:15:43Z | 2021-09-17T11:15:43Z | https://github.com/jonaswinkler/paperless-ng/issues/1323 | [] | bwakkie | 0 |
supabase/supabase-py | fastapi | 192 | Can't query float values | Querying the database with gte, lte, gt, or lt on float or np.float32 values throws an API error.
Error message:
> APIError: {'message': 'invalid input syntax for type real: ""4.6079545""', 'code': '22P02', 'details': None, 'hint': None}
The fields I tried querying are of the float4 and float8 data types.
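As a workaround sketch (table and column names below are made up), coercing numpy scalars to plain Python floats before building the filter avoids the doubly quoted value shown in the error:

```python
def coerce_numeric(value):
    # Plain Python floats serialize as 4.6079545 rather than the
    # quoted "4.6079545" that numpy scalars can end up as.
    return float(value)

# Hypothetical usage:
# supabase.table("readings").select("*").gte("temperature", coerce_numeric(np.float32(4.61))).execute()
```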
**Additional context**
When querying by ID (a numeric data type), all functions work perfectly fine. | closed | 2022-04-18T20:24:18Z | 2022-05-01T11:54:48Z | https://github.com/supabase/supabase-py/issues/192 | [
"bug"
] | Sharaddition | 8 |
matterport/Mask_RCNN | tensorflow | 2,254 | The server has only CPU but no GPU, how to call multiple CPUs to run the program? | The server has 48 CPUs, but when I run the code, I find that only one CPU is used. How can I use all 48 CPUs? | open | 2020-06-24T03:28:57Z | 2020-06-24T04:08:43Z | https://github.com/matterport/Mask_RCNN/issues/2254 | [] | Romuns-Nicole | 0 |
pytest-dev/pytest-xdist | pytest | 521 | xdist master freezes if socketserver worker calls pytest.exit() | - test_exit.py
```
import pytest

def test_a():
    pass

def test_b():
    pytest.exit('system state unrecoverable, destroy pytest session.')
```
- socketserver https://bitbucket.org/hpk42/execnet/raw/2af991418160/execnet/script/socketserver.py
- expected: `pytest -d --tx socket=localhost:8888 test_exit.py --rsyncdir=.` should not freeze
- observed: it freezes until Ctrl+C is input.
- `--tx ssh=localhost` is fine.
- log
```
root@0fa7f92c43b4:~# pytest -d --tx socket=localhost:8888 test_exit.py --rsyncdir=. # socketserver
==================================================================================== test session starts =====================================================================================
platform linux2 -- Python 2.7.13, pytest-4.6.9, py-1.8.1, pluggy-0.13.1
rootdir: /root
plugins: xdist-1.31.0, forked-1.1.3
gw0 [2]
.[gw0] node down: keyboard-interrupt
f
replacing crashed worker gw0
^C ### pytest freezes until I input Ctrl+C
========================================================================================== FAILURES ==========================================================================================
_____________________________________________________________________________________ root/test_exit.py ______________________________________________________________________________________
[gw0] linux2 -- Python 2.7.13 /usr/bin/python
worker 'gw0' crashed while running 'root/test_exit.py::test_b'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! KeyboardInterrupt !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
/usr/local/lib/python2.7/dist-packages/execnet/gateway_socket.py:28: KeyboardInterrupt
(to show a full traceback on KeyboardInterrupt use --fulltrace)
============================================================================= 1 failed, 1 passed in 2.58 seconds =============================================================================
root@0fa7f92c43b4:~# pytest -d --tx ssh=localhost test_exit.py --rsyncdir=. # ssh
==================================================================================== test session starts =====================================================================================
platform linux2 -- Python 2.7.13, pytest-4.6.9, py-1.8.1, pluggy-0.13.1
rootdir: /root
plugins: xdist-1.31.0, forked-1.1.3
gw0 [2]
.[gw0] node down: keyboard-interrupt
f
replacing crashed worker gw0
gw1 C
========================================================================================== FAILURES ==========================================================================================
_____________________________________________________________________________________ root/test_exit.py ______________________________________________________________________________________
[gw0] linux2 -- Python 2.7.13 /usr/bin/python
worker 'gw0' crashed while running 'root/test_exit.py::test_b'
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: <WorkerController gw0> received keyboard-interrupt !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================================================= 1 failed, 1 passed in 1.08 seconds =============================================================================
```
| open | 2020-04-22T05:26:13Z | 2020-04-27T00:16:54Z | https://github.com/pytest-dev/pytest-xdist/issues/521 | [] | cielavenir | 1 |
yihong0618/running_page | data-visualization | 574 | Sync Actions fails to generate the analysis SVG | When updating the data, it reports that the update finished, but the latest total SVG image is not generated; the output is as follows:
```
Run python run_page/gen_svg.py --from-db --title "Ryan's Running" --type github --athlete "Ryan" --special-distance 10 --special-distance2 20 --special-color yellow --special-color2 red --output assets/github.svg --use-localtime --min-distance 0.5
All tracks: 28
After filter tracks: [28](https://github.com/85Ryan/gooorun/actions/runs/7239220950/job/19720846244#step:21:29)
Creating poster of type github with 26 tracks and storing it in file assets/github.svg...
All tracks: 28
After filter tracks: 28
All tracks: 28
After filter tracks: 28
All tracks: 28
After filter tracks: 28
Creating poster of type github with 26 tracks and storing it in file assets/github_2023.svg...
Cannot set locale to "zh_CN": unsupported locale setting
```
Also, I replaced the map start-point and end-point icons under the `assets` folder: `start.svg` and `end.svg`. They display correctly locally, but after deploying to GitHub Pages the new icons do not show; the previous icons are still displayed.
How can I fix this? Thanks! | closed | 2023-12-17T15:47:08Z | 2023-12-18T00:57:11Z | https://github.com/yihong0618/running_page/issues/574 | [] | 85Ryan | 0 |
graphql-python/graphene-django | graphql | 1,523 | API mutation for google translate | Hi all, is there any way to create a mutation that calls the Google Translate API? | closed | 2024-05-22T05:36:41Z | 2024-05-22T16:04:41Z | https://github.com/graphql-python/graphene-django/issues/1523 | [
"🐛bug"
] | abdulhafeez1724 | 1 |
httpie/cli | rest-api | 1,608 | Installing the httpie-edgegrid plugin fails | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
1. Execute `httpie cli plugins install httpie-edgegrid`
## Current result
```
Installing httpie-edgegrid...
Collecting httpie-edgegrid
Using cached httpie_edgegrid-2.1.4-py3-none-any.whl.metadata (3.5 kB)
Collecting httpie==3.2.2 (from httpie-edgegrid)
Using cached httpie-3.2.2-py3-none-any.whl.metadata (7.6 kB)
Collecting edgegrid-python==1.3.1 (from httpie-edgegrid)
Using cached edgegrid_python-1.3.1-py3-none-any.whl.metadata (754 bytes)
Collecting pyOpenSSL==24.1.0 (from httpie-edgegrid)
Using cached pyOpenSSL-24.1.0-py3-none-any.whl.metadata (12 kB)
Requirement already satisfied: urllib3<3.0.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie-edgegrid) (2.2.3)
Requirement already satisfied: requests>=2.3.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from edgegrid-python==1.3.1->httpie-edgegrid) (2.32.3)
Requirement already satisfied: requests-toolbelt>=0.9.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from edgegrid-python==1.3.1->httpie-edgegrid) (1.0.0)
Collecting ndg-httpsclient (from edgegrid-python==1.3.1->httpie-edgegrid)
Using cached ndg_httpsclient-0.5.1-py3-none-any.whl.metadata (6.2 kB)
Collecting pyasn1 (from edgegrid-python==1.3.1->httpie-edgegrid)
Using cached pyasn1-0.6.1-py3-none-any.whl.metadata (8.4 kB)
Requirement already satisfied: pip in /opt/homebrew/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (24.2)
Requirement already satisfied: charset-normalizer>=2.0.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (3.4.0)
Requirement already satisfied: defusedxml>=0.6.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (0.7.1)
Requirement already satisfied: Pygments>=2.5.2 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (2.18.0)
Requirement already satisfied: multidict>=4.7.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (6.1.0)
Requirement already satisfied: setuptools in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (75.3.0)
Requirement already satisfied: rich>=9.10.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (13.9.4)
Collecting cryptography<43,>=41.0.5 (from pyOpenSSL==24.1.0->httpie-edgegrid)
Using cached cryptography-42.0.8-cp39-abi3-macosx_10_12_universal2.whl.metadata (5.3 kB)
Collecting cffi>=1.12 (from cryptography<43,>=41.0.5->pyOpenSSL==24.1.0->httpie-edgegrid)
Using cached cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl.metadata (1.5 kB)
Requirement already satisfied: idna<4,>=2.5 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from requests>=2.3.0->edgegrid-python==1.3.1->httpie-edgegrid) (3.10)
Requirement already satisfied: certifi>=2017.4.17 in /opt/homebrew/opt/certifi/lib/python3.13/site-packages (from requests>=2.3.0->edgegrid-python==1.3.1->httpie-edgegrid) (2024.8.30)
Requirement already satisfied: PySocks!=1.5.7,>=1.5.6 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from requests[socks]>=2.22.0->httpie==3.2.2->httpie-edgegrid) (1.7.1)
Requirement already satisfied: markdown-it-py>=2.2.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from rich>=9.10.0->httpie==3.2.2->httpie-edgegrid) (3.0.0)
Collecting pycparser (from cffi>=1.12->cryptography<43,>=41.0.5->pyOpenSSL==24.1.0->httpie-edgegrid)
Using cached pycparser-2.22-py3-none-any.whl.metadata (943 bytes)
Requirement already satisfied: mdurl~=0.1 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from markdown-it-py>=2.2.0->rich>=9.10.0->httpie==3.2.2->httpie-edgegrid) (0.1.2)
Using cached httpie_edgegrid-2.1.4-py3-none-any.whl (9.2 kB)
Using cached edgegrid_python-1.3.1-py3-none-any.whl (17 kB)
Using cached httpie-3.2.2-py3-none-any.whl (127 kB)
Using cached pyOpenSSL-24.1.0-py3-none-any.whl (56 kB)
Using cached cryptography-42.0.8-cp39-abi3-macosx_10_12_universal2.whl (5.9 MB)
Using cached ndg_httpsclient-0.5.1-py3-none-any.whl (34 kB)
Using cached pyasn1-0.6.1-py3-none-any.whl (83 kB)
Using cached cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl (178 kB)
Using cached pycparser-2.22-py3-none-any.whl (117 kB)
Installing collected packages: pycparser, pyasn1, cffi, httpie, cryptography, pyOpenSSL, ndg-httpsclient, edgegrid-python, httpie-edgegrid
Attempting uninstall: httpie
Found existing installation: httpie 3.2.4
Can't install 'httpie-edgegrid'
```
## Expected result
Successful installation
---
## Debug output
Please re-run the command with `--debug`, then copy the entire command & output and paste both below:
Command: `httpie cli plugins install httpie-edgegrid --debug`
Output:
```bash
HTTPie 3.2.4
Requests 2.32.3
Pygments 2.18.0
Python 3.13.0 (main, Oct 7 2024, 05:02:14) [Clang 15.0.0 (clang-1500.3.9.4)]
/opt/homebrew/Cellar/httpie/3.2.4/libexec/bin/python
Darwin 23.5.0
<Environment {'apply_warnings_filter': <function Environment.apply_warnings_filter at 0x1019dcc20>,
'args': Namespace(),
'as_silent': <function Environment.as_silent at 0x1019dcae0>,
'colors': 256,
'config': {'default_options': []},
'config_dir': PosixPath('/Users/glen.thomas/.config/httpie'),
'devnull': <property object at 0x1019cd260>,
'is_windows': False,
'log_error': <function Environment.log_error at 0x1019dcb80>,
'program_name': 'httpie',
'quiet': 0,
'rich_console': <functools.cached_property object at 0x10196d350>,
'rich_error_console': <functools.cached_property object at 0x1019d03b0>,
'show_displays': True,
'stderr': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>,
'stderr_isatty': True,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>,
'stdin_encoding': 'utf-8',
'stdin_isatty': True,
'stdout': <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>
<PluginManager {'adapters': [],
'auth': [<class 'httpie.plugins.builtin.BasicAuthPlugin'>,
<class 'httpie.plugins.builtin.DigestAuthPlugin'>,
<class 'httpie.plugins.builtin.BearerAuthPlugin'>],
'converters': [],
'formatters': [<class 'httpie.output.formatters.headers.HeadersFormatter'>,
<class 'httpie.output.formatters.json.JSONFormatter'>,
<class 'httpie.output.formatters.xml.XMLFormatter'>,
<class 'httpie.output.formatters.colors.ColorFormatter'>]}>
Installing httpie-edgegrid...
Collecting httpie-edgegrid
Using cached httpie_edgegrid-2.1.4-py3-none-any.whl.metadata (3.5 kB)
Collecting httpie==3.2.2 (from httpie-edgegrid)
Using cached httpie-3.2.2-py3-none-any.whl.metadata (7.6 kB)
Collecting edgegrid-python==1.3.1 (from httpie-edgegrid)
Using cached edgegrid_python-1.3.1-py3-none-any.whl.metadata (754 bytes)
Collecting pyOpenSSL==24.1.0 (from httpie-edgegrid)
Using cached pyOpenSSL-24.1.0-py3-none-any.whl.metadata (12 kB)
Requirement already satisfied: urllib3<3.0.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie-edgegrid) (2.2.3)
Requirement already satisfied: requests>=2.3.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from edgegrid-python==1.3.1->httpie-edgegrid) (2.32.3)
Requirement already satisfied: requests-toolbelt>=0.9.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from edgegrid-python==1.3.1->httpie-edgegrid) (1.0.0)
Collecting ndg-httpsclient (from edgegrid-python==1.3.1->httpie-edgegrid)
Using cached ndg_httpsclient-0.5.1-py3-none-any.whl.metadata (6.2 kB)
Collecting pyasn1 (from edgegrid-python==1.3.1->httpie-edgegrid)
Using cached pyasn1-0.6.1-py3-none-any.whl.metadata (8.4 kB)
Requirement already satisfied: pip in /opt/homebrew/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (24.2)
Requirement already satisfied: charset-normalizer>=2.0.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (3.4.0)
Requirement already satisfied: defusedxml>=0.6.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (0.7.1)
Requirement already satisfied: Pygments>=2.5.2 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (2.18.0)
Requirement already satisfied: multidict>=4.7.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (6.1.0)
Requirement already satisfied: setuptools in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (75.3.0)
Requirement already satisfied: rich>=9.10.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from httpie==3.2.2->httpie-edgegrid) (13.9.4)
Collecting cryptography<43,>=41.0.5 (from pyOpenSSL==24.1.0->httpie-edgegrid)
Using cached cryptography-42.0.8-cp39-abi3-macosx_10_12_universal2.whl.metadata (5.3 kB)
Collecting cffi>=1.12 (from cryptography<43,>=41.0.5->pyOpenSSL==24.1.0->httpie-edgegrid)
Using cached cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl.metadata (1.5 kB)
Requirement already satisfied: idna<4,>=2.5 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from requests>=2.3.0->edgegrid-python==1.3.1->httpie-edgegrid) (3.10)
Requirement already satisfied: certifi>=2017.4.17 in /opt/homebrew/opt/certifi/lib/python3.13/site-packages (from requests>=2.3.0->edgegrid-python==1.3.1->httpie-edgegrid) (2024.8.30)
Requirement already satisfied: PySocks!=1.5.7,>=1.5.6 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from requests[socks]>=2.22.0->httpie==3.2.2->httpie-edgegrid) (1.7.1)
Requirement already satisfied: markdown-it-py>=2.2.0 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from rich>=9.10.0->httpie==3.2.2->httpie-edgegrid) (3.0.0)
Collecting pycparser (from cffi>=1.12->cryptography<43,>=41.0.5->pyOpenSSL==24.1.0->httpie-edgegrid)
Using cached pycparser-2.22-py3-none-any.whl.metadata (943 bytes)
Requirement already satisfied: mdurl~=0.1 in /opt/homebrew/Cellar/httpie/3.2.4/libexec/lib/python3.13/site-packages (from markdown-it-py>=2.2.0->rich>=9.10.0->httpie==3.2.2->httpie-edgegrid) (0.1.2)
Using cached httpie_edgegrid-2.1.4-py3-none-any.whl (9.2 kB)
Using cached edgegrid_python-1.3.1-py3-none-any.whl (17 kB)
Using cached httpie-3.2.2-py3-none-any.whl (127 kB)
Using cached pyOpenSSL-24.1.0-py3-none-any.whl (56 kB)
Using cached cryptography-42.0.8-cp39-abi3-macosx_10_12_universal2.whl (5.9 MB)
Using cached ndg_httpsclient-0.5.1-py3-none-any.whl (34 kB)
Using cached pyasn1-0.6.1-py3-none-any.whl (83 kB)
Using cached cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl (178 kB)
Using cached pycparser-2.22-py3-none-any.whl (117 kB)
Installing collected packages: pycparser, pyasn1, cffi, httpie, cryptography, pyOpenSSL, ndg-httpsclient, edgegrid-python, httpie-edgegrid
Attempting uninstall: httpie
Found existing installation: httpie 3.2.4
Can't install 'httpie-edgegrid'
```
## Additional information, screenshots, or code examples
…
| closed | 2024-11-04T17:36:34Z | 2024-11-04T20:42:15Z | https://github.com/httpie/cli/issues/1608 | [
"bug",
"new"
] | glenthomas | 2 |
wagtail/wagtail | django | 12,584 | Eliminate use of .listing styles on group permission tables | ### Issue Summary
Under Settings -> Groups -> Page permissions in Wagtail 6.3, the page chooser is squeezed into an undersized table cell causing every letter to be wrapped:

This appears to have been introduced as a result of the `overflow-wrap: anywhere;` rule added in #12430 / #12431. Rather than just preventing wrapping, though, it might be better to do something about the excessive whitespace on the right - I notice that when the minimap is expanded, the `max-width: 840px;` on `.w-form-width` becomes `max-width: 80%;`, which makes it proportionally wider but still squashed...

- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
- Python version: 3.12.4
- Django version: 5.0.9
- Wagtail version: 6.3
- Browser version: Chrome 131.0.6778.70
### Working on this
<!--
Do you have thoughts on skills needed?
Are you keen to work on this yourself once the issue has been accepted?
Please let us know here.
-->
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| open | 2024-11-15T15:12:37Z | 2024-12-14T04:16:12Z | https://github.com/wagtail/wagtail/issues/12584 | [
"type:Cleanup/Optimisation",
"component:Frontend"
] | gasman | 2 |
plotly/dash-bio | dash | 135 | Make setup.py require dash | Really just a suggestion, but I believe it would be nice to have setup.py require dash by default:
https://github.com/plotly/dash-bio/blob/bd5ce53c4529ce3e0c783cb61eb6eab7b298dd93/setup.py#L11-L20
That way, when the user installs dash-bio, they simply need to create a venv and run:
```
pip install dash-bio
```
and dash, dash-html-components, and dash_renderer will be installed as well.
It could be implemented along the lines of:
```python
setup(
    name=package_name,
    version=package["version"],
    author=package['author'],
    packages=[
        package_name,
        '{}/utils'.format(package_name),
        '{}/component_factory'.format(package_name),
    ],
    include_package_data=True,
    license=package['license'],
    description=package['description'] if 'description' in package else package_name,
    install_requires=[
        'dash', 'dash-html-components', 'dash_renderer'
    ]
)
| closed | 2019-01-25T20:01:16Z | 2019-06-11T14:54:04Z | https://github.com/plotly/dash-bio/issues/135 | [] | xhluca | 19 |
plotly/dash | jupyter | 2,917 | When using keyword arguments in a background callback with Celery, no_update does not work | **Describe your context**
```
dash 2.17.1
dash-ag-grid 2.4.0
dash-auth-external 1.2.1
dash-auth0-oauth 0.1.5
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-cytoscape 1.0.0
dash-extensions 0.0.71
dash-html-components 2.0.0
dash-iconify 0.1.2
dash-loading-spinners 1.0.3
dash-mantine-components 0.12.1
dash-table 5.0.0
Celery 5.3.6
```
**Describe the bug**
There is a background callback.
The background manager is Celery.
The output was specified as a [keyword argument](https://dash.plotly.com/flexible-callback-signatures#keyword-arguments), and the callback returned no_update.
However, the result received by the browser was `{"_dash_no_update": "_dash_no_update"}`.
The `no_update` did not work correctly.
```
@callback(
    output=dict(
        foo=Output("foo", "children", allow_duplicate=True),
    ),
    inputs=dict(
        bar=Input("bar", "value"),
    ),
    prevent_initial_call=True,
    background=True,
)
def background_callback(bar: str):
    return dict(foo=dash.no_update)
```
The value `{"_dash_no_update": "_dash_no_update"}` is assigned to foo, causing an unintended third callback to be invoked.
I suspect [this part of the code](https://github.com/plotly/dash/blob/v2.17.1/dash/_callback.py#L452). For the dict type, the transformation from `{"_dash_no_update": "_dash_no_update"}` to NoUpdate is not performed. I believe this is the cause of the unintended bug. (As a side note, I use a multi-page app.)
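To illustrate the missing case (all names here are hypothetical, not dash's actual internals), a recursive conversion that also handles dict-shaped outputs would look roughly like:

```python
_SENTINEL = {"_dash_no_update": "_dash_no_update"}

class NoUpdate:
    """Stand-in for dash's no_update marker (local sentinel for the sketch)."""

def restore_no_update(value):
    # Recursively map the serialized sentinel back to NoUpdate,
    # including inside dicts returned by keyword-argument callbacks.
    if value == _SENTINEL:
        return NoUpdate
    if isinstance(value, dict):
        return {k: restore_no_update(v) for k, v in value.items()}
    if isinstance(value, list):
        return [restore_no_update(v) for v in value]
    return value
```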
**Expected behavior**
When using a background callback with Celery, returning no_update for an Output specified via keyword arguments should leave that output unchanged, as intended.
| open | 2024-07-09T09:49:24Z | 2024-08-13T19:54:35Z | https://github.com/plotly/dash/issues/2917 | [
"bug",
"P3"
] | JeongMinSik | 0 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 450 | How to replace the cookie in Docker | There are two things I don't quite understand!
First, how do I replace the cookie inside Docker?
I watched the video; you were using local Python there, where it can be replaced directly! How do I do that in Docker?
Also, how do I use this API in a script? I'm new to APIs, so I'm just asking! | closed | 2024-07-14T15:58:10Z | 2024-07-25T18:07:36Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/450 | [
"enhancement"
] | xilib | 3 |
encode/databases | asyncio | 550 | Support for UNIX domain socket for MySQL | Issue similar to #422, but for MySQL connection strings.
In short, the `unix_socket` option is not passed to the underlying `aiomysql` or `asyncmy` backends, which results in a failed connection to the database.
Originally described by [coryvirok](https://github.com/coryvirok) in #239 pull request (fixes only `aiomysql` backend) and later in opened another pull request #503 by [ryanrasti](https://github.com/ryanrasti). Credits to them!!!
Since this issue is blocking deployment to Google Cloud Run, I created a fix for both backends. Most of the work was done in #422; here I am only passing the `unix_socket` value to the `create_pool` function.
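For context, a minimal sketch of pulling `unix_socket` out of the connection-string query so it can be forwarded to the backend's `create_pool` (the DSN below is made up):

```python
from urllib.parse import parse_qs, urlsplit

def connection_options(url: str) -> dict:
    """Extract query-string options (e.g. unix_socket) from a DSN."""
    return {key: values[0] for key, values in parse_qs(urlsplit(url).query).items()}

opts = connection_options(
    "mysql+aiomysql://user:pass@/appdb?unix_socket=/cloudsql/project:region:instance"
)
# opts["unix_socket"] is the value that needs forwarding to aiomysql/asyncmy create_pool
```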
| closed | 2023-05-09T22:21:13Z | 2023-07-12T01:12:10Z | https://github.com/encode/databases/issues/550 | [] | wojtasiq | 0 |
Guovin/iptv-api | api | 878 | [Bug]: Question about IPv6 results | ### Don't skip these steps | 不要跳过这些步骤
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field | 我明白,如果我“故意”删除或跳过任何强制性的\*字段,我将被**限制**
- [x] I am sure that this is a running error exception problem and will not submit any problems unrelated to this project | 我确定这是运行报错异常问题,不会提交任何与本项目无关的问题
- [x] I have searched and double-checked that there are no similar issues that have been created | 我已经通过搜索并仔细检查过没有存在已经创建的类似问题
### Occurrence environment | 触发环境
- [x] Workflow | 工作流
- [ ] GUI | 软件
- [ ] Docker
- [ ] Command line | 命令行
### Bug description | 具体描述
ipv_type_prefer = IPV6  # causes result.txt to be written out empty
### Error log | 报错日志
Writing: 0%| | 0/1872 [00:00<?, ?it/s]
Sorting: 100%|██████████| 2984/2984 [08:13<00:00, 6.05it/s]
Writing: 0%| | 0/1872 [00:00<?, ?it/s] | closed | 2025-01-26T07:02:00Z | 2025-01-26T07:13:18Z | https://github.com/Guovin/iptv-api/issues/878 | [
"invalid"
] | GSD-3726 | 3 |
CorentinJ/Real-Time-Voice-Cloning | python | 928 | This project needs a maintainer | This is blue-fish. Unfortunately, I cannot participate in open source anymore. It has been my pleasure to contribute to this project. I wish you all the best! | closed | 2021-12-01T09:31:19Z | 2021-12-28T12:34:18Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/928 | [] | ghost | 2 |
serengil/deepface | deep-learning | 1,248 | What about IQA | ### Description
First of all, thanks for your work. I wonder if there is an opportunity to add face quality assessment models to the pipeline?
Or maybe someone could advise a repo to explore. I tried PyIQA but was not satisfied.
### Additional Info
_No response_ | closed | 2024-06-02T21:12:57Z | 2024-06-03T09:49:45Z | https://github.com/serengil/deepface/issues/1248 | [
"enhancement"
] | wauxhall | 1 |
ResidentMario/missingno | data-visualization | 170 | Heatmap ValueError:could not convert string to float: '--' | Im trying missingno.heatmap on the NYPD Motor Vehicle Collisions Dataset.
`import pandas`
`import missingno`
`df = pandas.read_csv('Motor_Vehicle_Collisions_-_Crashes_20240322.csv')`
`missingno.heatmap(df)`
Then this error occured
`ValueError Traceback (most recent call last)
Cell In[3], [line 1](vscode-notebook-cell:?execution_count=3&line=1)
----> [1](vscode-notebook-cell:?execution_count=3&line=1) missingno.heatmap(df)
File [c:\Users\Admin\anaconda3\envs\jup2\lib\site-packages\missingno\missingno.py:398](file:///C:/Users/Admin/anaconda3/envs/jup2/lib/site-packages/missingno/missingno.py:398), in heatmap(df, filter, n, p, sort, figsize, fontsize, labels, label_rotation, cmap, vmin, vmax, cbar, ax)
[395](file:///C:/Users/Admin/anaconda3/envs/jup2/lib/site-packages/missingno/missingno.py:395) ax0.patch.set_visible(False)
[397](file:///C:/Users/Admin/anaconda3/envs/jup2/lib/site-packages/missingno/missingno.py:397) for text in ax0.texts:
--> [398](file:///C:/Users/Admin/anaconda3/envs/jup2/lib/site-packages/missingno/missingno.py:398) t = float(text.get_text())
[399](file:///C:/Users/Admin/anaconda3/envs/jup2/lib/site-packages/missingno/missingno.py:399) if 0.95 <= t < 1:
[400](file:///C:/Users/Admin/anaconda3/envs/jup2/lib/site-packages/missingno/missingno.py:400) text.set_text('<1')
ValueError: could not convert string to float: '--'`
| open | 2024-03-24T18:17:37Z | 2024-05-14T18:32:58Z | https://github.com/ResidentMario/missingno/issues/170 | [] | dvcchamhocvcl | 4 |
sinaptik-ai/pandas-ai | pandas | 1,459 | I use fastapi+pandasai to provide data-generated image services, but as the number of requests increases, many figures do not exit automatically, so I want to find out if there is a way to exit these figures? please |
I use fastapi+pandasai to provide a data-generated image service, but as the number of requests increases, many figures are never closed automatically, so I want to find out if there is a way to close these figures. Please!

| closed | 2024-12-09T08:39:25Z | 2024-12-12T20:27:03Z | https://github.com/sinaptik-ai/pandas-ai/issues/1459 | [] | lwdnxu | 2 |
521xueweihan/HelloGitHub | python | 2,098 | [Open-source self-recommendation] Yank Note - a Markdown note-taking app for programmers | ## Project recommendation
- Project URL: https://github.com/purocean/yn
- Category: JS
- Planned future updates:
  - Add a plugin hub: release some features as plugins, making it easier for users to share plugins
  - Add an inverted index: structure document information to make it easier to search and to use programmatically
  - Strengthen document linking: show which documents a note references and which documents reference it
- Project description:
Yank Note is a local Markdown note-taking application for programmers. It supports embedding runnable code blocks, mind maps, and various diagrams (Drawio, Mermaid, PlantUML) in documents, and supports plugin extensions and rolling back to historical document versions.
- Why it is recommended:
Yank Note is a great fit for programmers, and in the following scenarios it offers an experience unlike other note-taking tools:
1. Taking technical notes: run code blocks directly in a document (JavaScript is supported natively; other languages need environment setup) to bring notes to life.
2. Building helper tools: embed HTML components in a document to build small utilities - you can even [fly a drone with Markdown](https://github.com/purocean/yn/issues/65#issuecomment-962472316) :)
3. Writing technical designs and articles: supports embedding many kinds of diagrams (mind maps, PlantUML, Drawio, Mermaid, ECharts), so writing and diagramming happen in one flow.
4. Keeping work logs: supports to-do task lists, and the "macro replacement" feature makes it easy to generate daily and weekly reports.
- Screenshot:

| closed | 2022-02-10T06:55:32Z | 2022-02-28T02:03:13Z | https://github.com/521xueweihan/HelloGitHub/issues/2098 | [
"已发布",
"JavaScript 项目"
] | purocean | 3 |
streamlit/streamlit | deep-learning | 9,917 | Use index value instead of row position in `session_state` of `st.data_editor` | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
The index numbers in the `edited_rows` and `deleted_rows` properties of a `data_editor` are not the DataFrame's defined index values but rather the positional row numbers.
### Reproducible Code Example
```Python
def update_problems():
editedRows = st.session_state.peditor.get('edited_rows', {})
deletedRows = st.session_state.peditor.get('deleted_rows', {})
print(deletedRows)
st.session_state.probdata = conn.query(sql='select * from problems', index_col='problem_id', ttl=0)
st.session_state.probdata = st.data_editor(st.session_state.probdata, hide_index=False, key="peditor", num_rows='dynamic', on_change=update_problems)
```
### Steps To Reproduce
_No response_
### Expected Behavior
The index column of the DataFrame should be used, not the row number
### Current Behavior
The row's position in the DataFrame is used instead of its index value
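A hedged illustration of what this means in practice (the helper name is mine, not Streamlit API): to recover index labels today, the positional keys have to be mapped through `df.index` manually:

```python
import pandas as pd

def positions_to_index(df: pd.DataFrame, edited_rows: dict) -> dict:
    # st.data_editor keys edited_rows by row position; map each
    # position back to the DataFrame's own index label.
    return {df.index[pos]: changes for pos, changes in edited_rows.items()}
```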
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version:
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_ | open | 2024-11-24T22:42:16Z | 2024-11-25T14:03:21Z | https://github.com/streamlit/streamlit/issues/9917 | [
"type:enhancement",
"feature:st.data_editor"
] | mhupfauer | 3 |
paperless-ngx/paperless-ngx | django | 7,634 | [BUG] Google invoice throws with "File type application/octet-stream not supported" | ### Description
Google Invoice is not parsed.
Direct upload and automatic mail scan both fail.
Error message:
File type application/octet-stream not supported
Importing Google Invoices in the past was never a problem. So either Google Invoice changed something in their PDF, or there is a regression bug in paperless. I don't know.
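As a quick local diagnostic (a hedged sketch, not Paperless's actual detection code), one can check the file's leading bytes: a well-formed PDF starts with the magic bytes `%PDF-`, and files that don't are commonly sniffed as generic `application/octet-stream`:

```python
def looks_like_pdf(path: str) -> bool:
    # Content sniffing: a well-formed PDF begins with the magic
    # bytes b"%PDF-". Files that don't are commonly detected as
    # generic application/octet-stream by content-based detectors.
    with open(path, "rb") as fh:
        head = fh.read(8)
    return head.startswith(b"%PDF-")
```

Running this on an old (working) and a new (failing) Google invoice would show whether Google changed its PDF producer.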
### Steps to reproduce
Upload a google invoice newer than 2024-08.
### Webserver logs
```bash
[2024-09-06 00:13:42,434] [ERROR] [paperless.consumer] Unsupported mime type application/octet-stream
[2024-09-06 00:13:42,435] [ERROR] [paperless.tasks] ConsumeTaskPlugin failed: 5052774727.pdf: Unsupported mime type application/octet-stream
Traceback (most recent call last):
File "/usr/src/paperless/src/documents/tasks.py", line 149, in consume_file
msg = plugin.run()
^^^^^^^^^^^^
File "/usr/src/paperless/src/documents/consumer.py", line 550, in run
self._fail(
File "/usr/src/paperless/src/documents/consumer.py", line 304, in _fail
raise ConsumerError(f"{self.filename}: {log_message or message}") from exception
documents.consumer.ConsumerError: 5052774727.pdf: Unsupported mime type application/octet-stream
```
### Browser logs
_No response_
### Paperless-ngx version
2.11.6
### Host OS
Archlinux
### Installation method
Docker - official image
### System status
_No response_
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-09-05T22:24:33Z | 2024-10-08T03:10:27Z | https://github.com/paperless-ngx/paperless-ngx/issues/7634 | [
"not a bug"
] | draptik | 7 |
strawberry-graphql/strawberry | graphql | 2,828 | Allow codegen to process multiple queries | <!--- Provide a general summary of the changes you want in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Feature Request Type
- [ ] Core functionality
- [x] Alteration (enhancement/optimization) of existing feature(s)
- [x] New behavior
## Description
<!-- A few sentences describing what it is. -->
codegen currently involves importing the schema from the user's app. Depending on the structure of the app, that could be a very expensive operation (in my case, this pulls in our ORM and a number of other dependencies). In my current application with only 5 or 6 queries, **this is currently taking about 19s to generate all the output files**. If I only import the schema once and run the generator on all the queries in the same process, **this time drops to about 4s**.
I propose a couple minor changes to the `strawberry codegen` CLI (and underlying user-facing APIs). First, I propose we add `nargs=-1` to the query CLI argument. This will allow multiple queries to be passed on the command line.
The existing plugins use a hardcoded output file name. e.g. `types.py` or `types.js`. Changing this would likely be problematic because scripts might assume this is the output file and move it to a different name per query. Because of this, the builtin plugins do not support processing multiple query files without some additional shenanigans. There are two ways we could handle this:
* If there is 1 query, use the existing pattern of a hardcoded `types.py`/`types.js`. If there are more than one query passed, use the `query.stem + extension` (where extension would vary by plugin).
* Add a flag to the plugin class that dictates whether the plugin will output unique files per input query file.
I have a branch downstream where I implemented the second, but implementing the first would be relatively simple as well.
Generally, to make this work, I have adjusted the `QueryCodegenPlugin` to accept a file path in its `__init__`. It's unlikely that people are overriding this currently, but if they are, we can `try`/`except` the `TypeError` and slap the `query` property on the class after initializing it but before doing anything else with it. This should solve most backward compatibility issues (we'd only have problems if they call `__init__` directly with no arguments -- e.g. from `super().__init__()` -- and even that _could_ be alleviated by having a default value that could be sniffed out, e.g. `Path.home()`; it doesn't make sense to have a query's filename be a directory).
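A minimal sketch of that `try`/`except` shim (the function name is mine, purely illustrative):

```python
from pathlib import Path

def construct_plugin(plugin_cls, query: Path):
    try:
        return plugin_cls(query)   # new-style plugin: accepts the query path
    except TypeError:
        plugin = plugin_cls()      # old-style no-args plugin
        plugin.query = query       # attach the property before anything else uses it
        return plugin
```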
A potentially simpler alternative is we could just keep everything with a no-args `__init__` and always just add the `query` property in a loop after we've constructed all the user-space plugins. From the perspective of the exposed API, the `query` filename will always be present and not `None`. This probably requires a little white lie to the type-system for a very short period, but in my experience, `mypy` is willing to believe a class has the properties that you claim in spite of what `__init__` does.
A proof of concept implementation is at https://github.com/strawberry-graphql/strawberry/compare/main...mgilson:strawberry:multiple-codegen
but this could easily be tweaked in any of the ways listed above very easily.
Thoughts? | closed | 2023-06-08T14:56:07Z | 2025-03-20T15:56:12Z | https://github.com/strawberry-graphql/strawberry/issues/2828 | [] | mgilson | 2 |
Kinto/kinto | api | 2,643 | Run some tests in GitHub Actions | I think it would be wise to have at least one set of CI validation coming from GitHub Actions, so that we have two sources of code quality, especially for times like now where open source builds on Travis are delayed for several hours. (As of writing, GHA are finishing within 10 minutes of a PR being opened, whereas Travis is just now running checks for a PR submitted 9 hours ago.)
I don't think we need to do the multiple Python version testing, but at least testing on the latest Python in GHA would be nice. | closed | 2020-10-26T23:47:37Z | 2021-10-15T15:54:44Z | https://github.com/Kinto/kinto/issues/2643 | [] | dstaley | 6 |
pydantic/pydantic-ai | pydantic | 699 | Why not use abstract classes to define agents? | First off, congratulations on the project! Since there’s no dedicated discussions tab, I decided to open an issue instead.
All mainstream frameworks seem to use the same approach for defining agents: passing arguments to the agent class constructor.
```python
@dataclass
class SupportDependencies:
customer_id: int
db: DatabaseConn
class SupportResult(BaseModel):
support_advice: str = Field(description='Advice returned to the customer')
block_card: bool = Field(description="Whether to block the customer's card")
risk: int = Field(description='Risk level of query', ge=0, le=10)
support_agent = Agent(
'openai:gpt-4o',
deps_type=SupportDependencies,
result_type=SupportResult,
system_prompt=(
'You are a support agent in our bank, give the '
'customer support and judge the risk level of their query.'
),
)
@support_agent.tool
async def customer_balance(
ctx: RunContext[SupportDependencies], include_pending: bool
) -> float:
"""
Returns the customer's current account balance.
Args:
include_pending: Checks whether to include pending from the customer_balance database
"""
balance = await ctx.deps.db.customer_balance(
id=ctx.deps.customer_id,
include_pending=include_pending,
)
return balance
```
However, in my complete ignorance on the matter, I can't help but draw a parallel between agents and classes. And when I first heard about pydantic-ai, the first thing that came to mind was constructing agents using abstract classes (e.g., having a BaseAgent, similar to a BaseModel).
```python
class SupportAgent(BaseAgent):
"""
You are a support agent in our bank, give the
customer support and judge the risk level of their query.
""" # <<< System Prompt
model='openai:gpt-4o'
result_type=SupportResult
deps:SupportDependencies
async def __customer_balance(
self, include_pending: Annotated[bool, "Checks whether to include pending from the customer_balance database"]
) -> float:
"""Returns the customer's current account balance."""
balance = await self.deps.db.customer_balance(
id=self.deps.customer_id,
include_pending=include_pending,
)
return balance
async def main() -> None:
deps = SupportDependencies(customer_id=123, db=DatabaseConn())
support_agent = SupportAgent(deps=deps)
result = await support_agent.run('What is my balance?')
print(result.data)
```
I personally find the second option more intuitive to read. What do you think? Would it be feasible to implement something like this? What would be the limitations of this approach? | closed | 2025-01-16T10:59:37Z | 2025-02-07T14:07:08Z | https://github.com/pydantic/pydantic-ai/issues/699 | [
"question",
"Stale"
] | lucasmsoares96 | 7 |
JaidedAI/EasyOCR | machine-learning | 992 | EasyOCR fails when the image is very tall, such as a long WeChat screenshot containing several smaller screenshots | open | 2023-04-17T04:07:42Z | 2023-04-17T04:07:42Z | https://github.com/JaidedAI/EasyOCR/issues/992 | [] | crazyn2 | 0 |
plotly/dash | flask | 2,539 | Wrong typing on Dash init | Types in the docstring or init of the Dash class should be valid types for the checker.
Right now it has `boolean` instead of `bool`, `string` instead of `str`, and an amalgamation of types joined with `or` where it should be a union. | open | 2023-05-24T14:11:18Z | 2024-08-13T19:33:11Z | https://github.com/plotly/dash/issues/2539 | [
"bug",
"P3"
] | T4rk1n | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 4,024 | Can't send mails with internal SMTP server | ### What version of GlobaLeaks are you using?
Globaleaks 4.14.8 hosted on Debian 6.1.76-1
### What browser(s) are you seeing the problem on?
All
### What operating system(s) are you seeing the problem on?
Windows, macOS
### Describe the issue
We tried to configure mail notifications using our internal SMTP server (SMTP relay is enabled). SMTP server is Exchange.
Our actual configuration is:
> SMTP email address: noreply@ourdomain.tld
> SMTP server address: our.internal.smtp.server.net
> SMTP server port: 25
> Security: SMTP/TLS
> Require authentication is disabled
> Notification roles: Admin, Analyst, Custodian, Recipient
Trying to send either a test notification or a password reset generates and infinite loading which ends in timeout.
This is the excerpt of _globaleaks.log_:
> 2024-03-18 15:09:47+0100 [-] [E] [1] SMTP connection failed (Exception: )
> 2024-03-18 15:09:47+0100 [-] Starting factory <twisted.mail.smtp.ESMTPSenderFactory object at 0x7f5ee96512d0>
> 2024-03-18 15:09:47+0100 [-] Stopping factory <twisted.mail.smtp.ESMTPSenderFactory object at 0x7f5ee96512d0>
> 2024-03-18 15:11:02+0100 [-] [E] [1] SMTP connection failed (Exception: )
Following some issues already reported here, we disabled Tor anonymization for outgoing connections, without success. Mails still can't be delivered and the system times out.
The server is able to reach the SMTP server. We also tried to telnet into the SMTP server from the Globaleaks server, and from there emails get delivered.
We also tried to set up a trace with tcpdump, which shows that the SMTP session gets initiated by Globaleaks and the SMTP server responds; after the STARTTLS/Client Hello, Globaleaks starts contacting Tor network IP addresses even though anonymization is disabled.
### Proposed solution
_No response_ | open | 2024-03-18T15:15:14Z | 2024-03-18T16:46:23Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/4024 | [
"T: Bug",
"Triage"
] | diegodipalma | 1 |
home-assistant/core | asyncio | 140,462 | Smartthings integration broken and cannot be fixed. Getting 400 error trying to get token? | ### The problem
Since I updated to 2025.3.2 this error shows up and is not resolving for me. I thought that it might get resolved after waiting for some time, but a whole day in and this doesn't seem to get fixed.

Anyone have an idea on if the issue is on my end? This was working fine for ages until today.
### What version of Home Assistant Core has the issue?
core-2025.3.2
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Smartthings
### Link to integration documentation on our website
_No response_
### Diagnostics information
[home-assistant_smartthings_2025-03-12T17-41-40.799Z.log](https://github.com/user-attachments/files/19215054/home-assistant_smartthings_2025-03-12T17-41-40.799Z.log)
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-12T17:55:39Z | 2025-03-13T15:14:52Z | https://github.com/home-assistant/core/issues/140462 | [
"integration: smartthings"
] | zainag | 10 |
jazzband/django-oauth-toolkit | django | 902 | A limit on redirect_uri only up to 255 chars | **Describe the bug**
Cannot handle a `redirect_uri` longer than 255 characters. [RFC 7230](https://tools.ietf.org/html/rfc7230#section-3.1.1) recommends designing systems to support URIs of at least 8000 octets.
> It is RECOMMENDED that all HTTP senders and recipients support, at a minimum, request-line lengths of 8000 octets.
**To Reproduce**
Send a request to `AuthorizationView` with `redirect_uri` longer than 255 chars.
**Expected behavior**
To save `redirect_uri` into `Grant` model.
**Version**
1.3.3
<!-- Have you tested with the latest version and/or master branch? -->
<!-- Replace '[ ]' with '[x]' to indicate that. -->
- [X] I have tested with the latest published release and it's still a problem.
- [x] I have tested with the master branch and it's still a problem.
| closed | 2020-12-11T08:49:37Z | 2020-12-18T10:20:06Z | https://github.com/jazzband/django-oauth-toolkit/issues/902 | [
"bug"
] | shaddeus | 10 |
tflearn/tflearn | data-science | 776 | Will there be a weighted_cross_entropy_with_logits? | open | 2017-05-29T09:46:35Z | 2017-06-06T15:40:52Z | https://github.com/tflearn/tflearn/issues/776 | [] | noeagles | 1 | |
activeloopai/deeplake | tensorflow | 2,630 | [BUG] | ### Severity
P0 - Critical breaking issue or missing functionality
### Current Behavior
Hello,
I am trying to deploy an app on Google Cloud Functions and I have a dependency issue that results from deeplake. I am using version 3.7.1, and the dependency issue comes from: ERROR: Cannot install -r requirements.txt (line 10), -r requirements.txt (line 85), aiobotocore[boto3]==2.4.2 and botocore==1.31.53 because these package versions have conflicting dependencies.
Can someone please help me with this one?
Best,
K
### Steps to Reproduce
When deploying on Google Cloud Functions; it works locally without any issue.
### Expected/Desired Behavior
The app deploys to Google Cloud Functions without any errors.
### Python Version
3.10.11
### OS
_No response_
### IDE
vs code
### Packages
_No response_
### Additional Context
_No response_
### Possible Solution
_No response_
### Are you willing to submit a PR?
- [ ] I'm willing to submit a PR (Thank you!) | closed | 2023-09-29T19:06:58Z | 2024-09-24T17:09:31Z | https://github.com/activeloopai/deeplake/issues/2630 | [
"bug"
] | kabsikabs | 2 |
tqdm/tqdm | pandas | 1,484 | reversed(tqdm(some_list)) does not yield a reversed list | - [x] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [ ] visual output bug
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
Version info: `4.65.0 3.10.11 (main, Apr 5 2023, 00:00:00) [GCC 12.2.1 20221121 (Red Hat 12.2.1-4)] linux`
When `reversed` is used on tqdm, it unexpectedly returns everything in the original, unreversed, order
```
>>> from tqdm import tqdm
>>> list(reversed(tqdm([1, 2, 3, 4])))
100%|██████████████████████████| 4/4 [00:00<00:00, 47798.34it/s]
[1, 2, 3, 4]
>>>
```
This is because `__reversed__` swaps in a reversed iterator, stores the generator it gets from calling `self.__iter__()`, and swaps the original iterator back before returning that generator. Crucially, because `__iter__()` is a generator function, none of its code executes before the first element is requested. So by the time you use the generator, it's using the original iterator again.
There are a number of ways to solve this. The top answer on https://stackoverflow.com/q/5724009/1961666 suggests moving most of the code in `__iter__` into an inner generator function, so that `__iter__` is not itself a generator and its first lines run immediately when called.
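A self-contained sketch of the mechanism (not tqdm's actual code) - `Lazy` reproduces the bug, and `Fixed` applies the suggested restructuring:

```python
class Lazy:
    def __init__(self, seq):
        self.seq = seq

    def __iter__(self):
        # A generator function: none of this body runs until the
        # first element is requested from the returned generator.
        for x in self.seq:
            yield x

    def __reversed__(self):
        orig = self.seq
        self.seq = list(reversed(self.seq))  # swap in reversed data
        gen = self.__iter__()                # generator created, body NOT started
        self.seq = orig                      # restored before any element is pulled
        return gen                           # so it yields the original order


class Fixed(Lazy):
    def __iter__(self):
        seq = self.seq  # runs immediately when __iter__ is called

        def gen():
            for x in seq:
                yield x

        return gen()


print(list(reversed(Lazy([1, 2, 3, 4]))))   # -> [1, 2, 3, 4]  (the bug)
print(list(reversed(Fixed([1, 2, 3, 4]))))  # -> [4, 3, 2, 1]  (the fix)
```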
| open | 2023-07-23T17:56:34Z | 2023-07-23T19:28:53Z | https://github.com/tqdm/tqdm/issues/1484 | [] | harmenwassenaar | 0 |
gradio-app/gradio | machine-learning | 10,399 | TabbedInterface does not work with Chatbot defined in ChatInterface | ### Describe the bug
When defining a `Chatbot` in `ChatInterface`, the `TabbedInterface` does not render it properly.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
def chat():
return "Hello"
chat_ui = gr.ChatInterface(
fn=chat,
type="messages",
chatbot=gr.Chatbot(type="messages"),
)
demo = gr.TabbedInterface([chat_ui], ["Tab 1"])
demo.launch()
```
### Screenshot

### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.12.0
gradio_client version: 1.5.4
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.5.0
gradio-client==1.5.4 is not installed.
httpx: 0.27.2
huggingface-hub: 0.27.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.2
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.5
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.4.10
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.27.2
huggingface-hub: 0.27.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
Blocking usage of gradio | open | 2025-01-21T17:11:08Z | 2025-01-22T17:19:36Z | https://github.com/gradio-app/gradio/issues/10399 | [
"bug"
] | arnaldog12 | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 898 | Generator samples from other dataset | I trained a CycleGAN with default parameters (but images loaded at 128x128 without cropping) for 200 epochs. The images generated are bad, but that's not the point. The strange thing is that sometimes the generator samples modified images from destination dataset. Have you ever seen this problem? Do you have any idea what can cause this problem? | closed | 2020-01-11T15:42:17Z | 2020-01-15T18:48:37Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/898 | [] | domef | 3 |
betodealmeida/shillelagh | sqlalchemy | 482 | More backends | Let's write some more backends!
- [X] apsw
- [X] Postgres (multicorn2)
- [ ] duckdb
- [ ] sqlglot | open | 2024-11-01T14:51:25Z | 2024-11-01T14:51:42Z | https://github.com/betodealmeida/shillelagh/issues/482 | [
"enhancement",
"help wanted",
"developer"
] | betodealmeida | 0 |
pandas-dev/pandas | data-science | 60,467 | QST: Is Using pandas.test() Equivalent to Running pytest Directly? | ### Research
- [X] I have searched the [[pandas] tag](https://stackoverflow.com/questions/tagged/pandas) on StackOverflow for similar questions.
- [X] I have asked my usage related question on [StackOverflow](https://stackoverflow.com).
### Link to question on StackOverflow
https://stackoverflow.com/questions/79244236/is-using-pandas-test-equivalent-to-running-pytest-directly
### Question about pandas
I'm working with the pandas codebase and using the _tester.py module located at pandas/util/_tester.py to execute tests. Specifically, I'm calling the pandas.test() function to run the test suite. Here’s how I’m doing it:
```
import pandas
pandas.test()
```
I can run the same tests directly with a pytest command, for example:
```
python3.12 -m pytest --cov=pandas --cov-report=term-missing --cov-branch pandas/tests/api/test_api.py::TestApi::test_api_indexers
```
Are these two approaches (pandas.test() and running pytest directly) functionally equivalent?
Does using pandas.test() introduce any additional overhead compared to directly invoking pytest? For example:
Does the wrapper preprocess or filter the test suite in any way?
Are there any significant differences in performance or the way results are handled?
In large test suites, would one approach be more efficient or recommended over the other?
Thanks in advance! | closed | 2024-12-02T13:18:15Z | 2024-12-02T18:23:34Z | https://github.com/pandas-dev/pandas/issues/60467 | [
"Usage Question",
"Needs Triage"
] | angiolye24 | 1 |
aws/aws-sdk-pandas | pandas | 2,509 | Exponential backoff | **Is your idea related to a problem? Please describe.**
When using awswrangler with millions of calls to databases, I constantly receive throttling errors. I have had to write an exponential backoff decorator in Python and create wrappers for all of the wrangler functions I am using.
**Describe the solution you'd like**
Since throttling is quite a common issue, I suggest adding it internally to awswrangler. (I can provide my code if needed)
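For concreteness, a minimal sketch of such a decorator (names and retryable error codes are illustrative; in practice one would catch `botocore.exceptions.ClientError` specifically rather than bare `Exception`):

```python
import functools
import random
import time

def with_backoff(max_retries=5, base_delay=1.0, max_delay=30.0,
                 retryable=("ThrottlingException", "TooManyRequestsException")):
    """Retry the wrapped call with exponential backoff plus jitter."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    # boto3/botocore errors carry the AWS error code here
                    code = getattr(exc, "response", {}).get("Error", {}).get("Code", "")
                    if code not in retryable or attempt == max_retries - 1:
                        raise
                    delay = min(max_delay, base_delay * 2 ** attempt)
                    time.sleep(delay + random.uniform(0, delay / 2))
        return wrapper
    return decorator

# usage sketch: read = with_backoff()(wr.athena.read_sql_query)  # wr = awswrangler
```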
| closed | 2023-11-01T13:20:03Z | 2023-11-15T15:04:18Z | https://github.com/aws/aws-sdk-pandas/issues/2509 | [
"enhancement"
] | awspiv | 2 |
ansible/ansible | python | 84,660 | meta: end_role only works for a single server on a group | ### Summary
When a role does a `meta: end_role` with a `when` clause, only the first server in a group honors the `end_role` statement.
### Issue Type
Bug Report
### Component Name
ansible.builtin.meta
### Ansible Version
```console
$ ansible --version
ansible [core 2.18.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gmonells/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/gmonells/possible_bug/venv/lib/python3.11/site-packages/ansible
ansible collection location = /home/gmonells/.ansible/collections:/usr/share/ansible/collections
executable location = /home/gmonells/possible_bug/venv/bin/ansible
python version = 3.11.11 (main, Dec 4 2024, 08:55:08) [GCC 9.4.0] (/home/gmonells/possible_bug/venv/bin/python3.11)
jinja version = 3.1.5
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$ ansible-config dump --only-changed -t all
CONFIG_FILE() = /etc/ansible/ansible.cfg
GALAXY_SERVERS:
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$
```
### OS / Environment
Ubuntu WSL, python3.11, ansible-core==2.18.2 under a virtualenv
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$ cat roles/bug/tasks/main.yml
---
- name: Show "bug" variable
ansible.builtin.debug:
var: bug
- name: Skip role if it is not enabled
ansible.builtin.meta: end_role
when: not bug
- name: Do some tasks
ansible.builtin.debug:
msg: "This host has not been bypassed"
...
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$ cat playbook.yml
---
- hosts: all
roles:
- bug
...
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$ cat inventory.yml
all:
vars:
ansible_python_interpreter: auto_silent
bug: false
hosts:
server01:
server02:
server03:
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$
```
### Expected Results
As all the servers have the "bug" variable set to false, the role should `end_role` and bypass the remaining tasks on ALL servers.
Instead, only the first server in the group gets bypassed. Notice that "group" here can also refer to an ad-hoc list of hosts (see below).
### Actual Results
```console
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$ ansible-playbook -i inventory.yml playbook.yml -l server01
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [server01]
TASK [bug : Show "bug" variable] **********************************************************************************************************************************************************************************
ok: [server01] => {
"bug": false
}
TASK [bug : Skip role if it is not enabled] ***********************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************
server01 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$ ansible-playbook -i inventory.yml playbook.yml -l server02
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [server02]
TASK [bug : Show "bug" variable] **********************************************************************************************************************************************************************************
ok: [server02] => {
"bug": false
}
TASK [bug : Skip role if it is not enabled] ***********************************************************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************************************************************************
server02 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$ ansible-playbook -i inventory.yml playbook.yml -l "server02:server01:server03"
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [server01]
ok: [server02]
ok: [server03]
TASK [bug : Show "bug" variable] **********************************************************************************************************************************************************************************
ok: [server01] => {
"bug": false
}
ok: [server02] => {
"bug": false
}
ok: [server03] => {
"bug": false
}
TASK [bug : Skip role if it is not enabled] ***********************************************************************************************************************************************************************
TASK [bug : Do some tasks] ****************************************************************************************************************************************************************************************
ok: [server02] => {
"msg": "This host has not been bypassed"
}
ok: [server03] => {
"msg": "This host has not been bypassed"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
server01 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server02 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server03 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$ ansible-playbook -i inventory.yml playbook.yml -l all
PLAY [all] ********************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok: [server03]
ok: [server01]
ok: [server02]
TASK [bug : Show "bug" variable] **********************************************************************************************************************************************************************************
ok: [server01] => {
"bug": false
}
ok: [server02] => {
"bug": false
}
ok: [server03] => {
"bug": false
}
TASK [bug : Skip role if it is not enabled] ***********************************************************************************************************************************************************************
TASK [bug : Do some tasks] ****************************************************************************************************************************************************************************************
ok: [server02] => {
"msg": "This host has not been bypassed"
}
ok: [server03] => {
"msg": "This host has not been bypassed"
}
PLAY RECAP ********************************************************************************************************************************************************************************************************
server01 : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server02 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
server03 : ok=3 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
(venv) gmonells@LAPTOP-R1RBCIUC:~/possible_bug$
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct | closed | 2025-02-03T17:02:12Z | 2025-02-25T14:00:05Z | https://github.com/ansible/ansible/issues/84660 | [
"module",
"bug",
"has_pr",
"affects_2.18"
] | Sirtea | 1 |
waditu/tushare | pandas | 1,393 | document bug in hsgt_top10: the 沪深股通十大成交股 (HSGT top-10 traded stocks) documentation describes a field incorrectly |
HSGT top-10 traded stocks (沪深股通十大成交股): https://tushare.pro/document/2?doc_id=48
change | float | 涨跌额 (price change amount)
The documentation describes this field as the price change amount (涨跌额), but it is actually the percentage change (涨跌百分比).
| open | 2020-07-13T12:28:35Z | 2020-07-13T12:28:35Z | https://github.com/waditu/tushare/issues/1393 | [] | zergscut2017 | 0 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 457 | "OSError: [WinError 193] %1 is not a valid Win32 application" on attempt to run python scripts | I'm set up with an Anaconda environment for Python 3.7.7
Running on a 64-bit Windows 10 system with an AMD processor.
Installed PYTorch and ffmpeg through `conda install pytorch torchvision cudatoolkit=10.2 -c pytorch` and `conda install ffmpeg` (unsure if that may introduce some difference), and the rest through the Pip requirements file.
Merged the pretrained models into the file structure, then went to test the configuration with `python demo_cli.py` from the conda environment, with the working directory set to the root of the project.

Similar error with `python demo_toolbox.py`:

This error seems supremely uninformative, does anyone have any insight? | closed | 2020-07-28T20:43:30Z | 2020-08-07T07:43:34Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/457 | [] | EnergeticSpaceCore | 9 |
PokeAPI/pokeapi | graphql | 670 | GraphQL API seems to be down? |
Steps to Reproduce:
opening https://beta.pokeapi.co/graphql/v1beta in the browser returns a 502 bad gateway
OR
trying to query the API with react ApolloClient results in a CORS error:
```
const pokemonClient = new ApolloClient({
uri: "https://beta.pokeapi.co/graphql/v1beta",
cache: new InMemoryCache(),
});
```
```
const TEST_QUERY = gql`
query MyQuery {
pokemon_v2_pokemon(where: { id: { _eq: 10 } }) {
id
name
}
}
`;
```
```
pokemonClient.query({ query: TEST_QUERY }).then((response) => {
console.log(response.data.name);
});
```
> Access to fetch at 'https://beta.pokeapi.co/graphql/v1beta' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
| closed | 2021-11-18T10:13:32Z | 2022-09-07T08:23:35Z | https://github.com/PokeAPI/pokeapi/issues/670 | [
"graphql"
] | MiroStW | 9 |
matplotlib/mplfinance | matplotlib | 124 | how to addplot with different lengths? | I have plotted the normal data,
but there are some addplots that are either shorter or longer, which I want to plot on top of the main plot; now it complains about their length:
```
File "....Python\Python38\lib\site-packages\matplotlib\axes\_base.py", line 269, in _xy_from_xy
raise ValueError("x and y must have same first dimension, but "
ValueError: x and y must have same first dimension, but have shapes (401,) and (171,)
```
although I could do this with normal plots easily by just plotting on top of the previous one to get shapes like this:

now for example I have this code:
```
added_plots = [mplfinance.make_addplot(tcdfohlcv),
mplfinance.make_addplot(tcdfvalues),
mplfinance.make_addplot(tcdfpredicted)]
mplfinance.plot(ohlc, addplot=added_plots, type='candle', volume=True, style='yahoo')
```
So `tcdfohlcv` is the same length as the `ohlc` data, but `tcdfvalues` is shorter and `tcdfpredicted` is longer.
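For what it's worth, a workaround that is sometimes suggested (sketched here with made-up stand-in data, not the real `tcdf*` series) is to reindex every extra series onto the main frame's index, so shorter series get NaN padding (NaN points are simply not drawn) and longer ones are truncated:

```python
import pandas as pd

# Hypothetical stand-ins for ohlc.index / tcdfvalues / tcdfpredicted.
idx = pd.date_range("2020-01-01", periods=6, freq="D")   # main ohlc index
tcdfvalues = pd.Series([1.0, 2.0, 3.0], index=idx[:3])   # shorter series
tcdfpredicted = pd.Series(range(8), dtype="float",
                          index=pd.date_range("2020-01-01", periods=8))  # longer

# Align both onto the main index: missing rows become NaN
# (left unplotted), rows beyond the main index are dropped.
aligned_values = tcdfvalues.reindex(idx)
aligned_pred = tcdfpredicted.reindex(idx)
print(len(aligned_values), len(aligned_pred))  # prints: 6 6
```

With all lengths aligned to `ohlc.index`, the aligned series can be handed to `mplfinance.make_addplot` as before.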
Is there any way I can plot these together? | closed | 2020-05-04T22:24:02Z | 2021-08-05T01:10:53Z | https://github.com/matplotlib/mplfinance/issues/124 | [
"question"
] | allahyarzadeh | 7 |
JaidedAI/EasyOCR | pytorch | 473 | The file cannot be found when running after packaging with PyInstaller | I don't know if anyone else has run into this. After packaging with PyInstaller, the program cannot find a data file at runtime. The specific error is as follows:
```python
Traceback (most recent call last):
File "main.py", line 363, in main
File "captcha/easy_ocr.py", line 46, in easy_ocr
File "easyocr/easyocr.py", line 199, in __init__
File "easyocr/easyocr.py", line 256, in setLanguageList
FileNotFoundError: [Errno 2] No such file or directory: '/var/folders/v7/4z00fvss71n435b2g4xvymjr0000gn/T/_MEIUYXJ9t/easyocr/character/ch_sim_char.txt'
``` | closed | 2021-06-29T10:17:11Z | 2022-11-11T23:06:36Z | https://github.com/JaidedAI/EasyOCR/issues/473 | [] | yqchilde | 9 |
yihong0618/running_page | data-visualization | 334 | Joyrun (悦跑圈) data export issue | Hello, following the documented method I successfully exported my Joyrun (悦跑圈) data and recorded the uid and sid, but after I log into the Joyrun app (I tried both verification-code and WeChat login) the sid becomes invalid and I can no longer fetch data.
So the current problem is that staying logged in on the phone app and exporting Joyrun data on a daily schedule are mutually exclusive; only one of the two works at a time. Is this caused by something I am doing wrong, or by something else?

| closed | 2022-11-03T06:22:33Z | 2022-11-03T07:16:31Z | https://github.com/yihong0618/running_page/issues/334 | [] | w749 | 3 |
horovod/horovod | pytorch | 3,494 | [Elastic] Deadlock when a node dies gracefully and # survivors < min_np | **Environment:**
1. Framework: PyTorch
2. Framework version: 1.10
3. Horovod version: latest
**Bug report:**
The deadlock scenario:
- We currently have `n = min_np` nodes running
- One or more nodes **_gracefully_** dies, resulting in the number of active nodes `n < min_np`
- For the surviving nodes, this is a [node removal event](https://github.com/horovod/horovod/blob/master/horovod/common/elastic.py#L96), and [state sync gets skipped](https://github.com/horovod/horovod/blob/master/horovod/common/elastic.py#L161)
- Since `n < min_np`, training hangs waiting for new nodes to join
- Now new nodes join and we again have `n = min_np`
- Training resumes with two different situations:
  - For the new nodes, `skip_sync = False`, so state sync (involving a number of broadcasts) happens, and the new nodes hang there
  - For the old nodes, `skip_sync = True` (since this was a node removal event, on which they were waiting); these nodes continue until some other collective operation (e.g. another broadcast or an allreduce) happens, where they also hang
- DeadLock!
**Suggested solutions:**
I have two possible resolutions in mind
1. Simply skip the special logic for node removal, i.e. always sync state even if a node got removed
2. Perform an `and` AllReduce on `skip_sync` before this [line](https://github.com/horovod/horovod/blob/master/horovod/common/elastic.py#L161); such that if any process wants to sync, then all will sync.
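A dependency-free sketch of suggestion 2, with plain Python standing in for the ranks (a real implementation would use a min/AND reduction through Horovod's collective API; the function below is only an emulation):

```python
def and_allreduce(local_flags):
    """Emulate a logical-AND allreduce: the reduced value is True
    only if every rank contributed True, and every rank receives it."""
    return all(local_flags)

# Each rank decided skip_sync locally: the old survivors saw a node
# removal event (True), while the freshly joined ranks did not (False).
local_skip_sync = [True, True, False, False]

# After the reduction all ranks share one value, so either every rank
# enters the state-sync broadcasts or none does, avoiding the deadlock.
agreed_skip_sync = and_allreduce(local_skip_sync)
print(agreed_skip_sync)  # prints: False
```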
@EnricoMi @tgaddair | open | 2022-03-25T15:24:56Z | 2022-11-12T03:08:48Z | https://github.com/horovod/horovod/issues/3494 | [
"bug"
] | ASDen | 1 |
sinaptik-ai/pandas-ai | data-science | 1,405 | conversational failed | ### System Info
OS version: windows 10
Python version: 3.12.7
The current version of pandasai being used: 2.3.0
### 🐛 Describe the bug
It seems the pipeline is not able to correctly categorize the conversation.
Here is the config for the Agent:
`config = {"llm":llm,"verbose": True, "direct_sql": False,"enable_cache": True,"response_parser": StreamlitResponse2, 'conversational':True}`
Here is the pipeline log: the first attempt with "Hello" is handled correctly, but with "How are you?" it tries to generate code instead of skipping:
```
2024-10-21 19:03:01 [INFO] Question: hello
2024-10-21 19:03:01 [INFO] Running PandasAI with azure-openai LLM...
2024-10-21 19:03:01 [INFO] Prompt ID: 0e905df9-dec5-4279-9c2e-cd16bff1b15a
2024-10-21 19:03:01 [INFO] Executing Pipeline: GenerateChatPipeline
2024-10-21 19:03:01 [INFO] Executing Step 0: ValidatePipelineInput
2024-10-21 19:03:01 [INFO] Executing Step 1: CacheLookup
2024-10-21 19:03:01 [INFO] Using cached response
2024-10-21 19:03:01 [INFO] Executing Step 2: PromptGeneration
2024-10-21 19:03:01 [INFO] Executing Step 2: Skipping...
2024-10-21 19:03:01 [INFO] Executing Step 3: CodeGenerator
2024-10-21 19:03:01 [INFO] Executing Step 3: Skipping...
2024-10-21 19:03:01 [INFO] Executing Step 4: CachePopulation
2024-10-21 19:03:01 [INFO] Executing Step 4: Skipping...
2024-10-21 19:03:01 [INFO] Executing Step 5: CodeCleaning
2024-10-21 19:03:01 [INFO]
Code running:
```
result = {'type': 'string', 'value': 'Hello! How can I assist you today?'}
print(result)
```
2024-10-21 19:03:01 [INFO] Executing Step 6: CodeExecution
2024-10-21 19:03:01 [INFO] Executing Step 7: ResultValidation
2024-10-21 19:03:01 [INFO] Answer: {'type': 'string', 'value': 'Hello! How can I assist you today?'}
2024-10-21 19:03:01 [INFO] Executing Step 8: ResultParsing
2024-10-21 19:03:07 [INFO] Question: how are you?
2024-10-21 19:03:07 [INFO] Running PandasAI with azure-openai LLM...
2024-10-21 19:03:07 [INFO] Prompt ID: 03e3f142-d68f-48b7-9475-f7e163e49234
2024-10-21 19:03:07 [INFO] Executing Pipeline: GenerateChatPipeline
2024-10-21 19:03:07 [INFO] Executing Step 0: ValidatePipelineInput
2024-10-21 19:03:07 [INFO] Executing Step 1: CacheLookup
2024-10-21 19:03:07 [INFO] Executing Step 2: PromptGeneration
2024-10-21 19:03:07 [INFO] Using prompt:
....
2024-10-21 19:03:07 [INFO] Executing Step 3: CodeGenerator
2024-10-21 19:03:11 [ERROR] Pipeline failed on step 3: No code found in the response
```
| closed | 2024-10-21T17:17:55Z | 2025-01-28T16:01:46Z | https://github.com/sinaptik-ai/pandas-ai/issues/1405 | [
"bug"
] | HAL9KKK | 3 |
modin-project/modin | pandas | 6,620 | TypeError: bins argument only works with numeric data. | When running the Modin tests the following way, I see this error:
```bash
MODIN_CPUS=44 MODIN_ENGINE=ray python -m pytest modin/pandas/test/test_general.py
FAILED modin/pandas/test/test_general.py::test_value_counts[True-3-False] - TypeError: bins argument only works with numeric data.
``` | closed | 2023-10-02T14:15:42Z | 2023-10-18T09:57:27Z | https://github.com/modin-project/modin/issues/6620 | [
"bug 🦗",
"P2"
] | YarShev | 2 |
MaartenGr/BERTopic | nlp | 2,296 | Duplicate Document Entries in Documents from get_document_info(corpus) | Hello!
I am working on analyzing research funding trends using data collected from NIH ExPORTER.
My team and I have run into a problem: there are no duplicate documents in our raw data (no repeated ABSTRACT, APPLICATION_ID, or PI_names), but the documents extracted from the BERTopic results using get_document_info(corpus) contain repeated entries with exactly the same ABSTRACT, APPLICATION_ID, and PI_names. Even the budget start and end dates are the same.
I don't think the problem lies in preprocessing, as our corpus creation is rather simple:
```
def clean_text(text):
text = text.lower().strip() # Lowercase
text = re.sub(r'\s+', ' ', text) # Remove extra spaces
text = re.sub(r'[^\w\s]', '', text) # Remove punctuation
text = re.sub(r'\b\d+\b', '', text) # Remove standalone numbers
return text
data['CLEANED_ABSTRACT'] = data['ABSTRACT_TEXT'].astype(str).apply(clean_text)
corpus = data['CLEANED_ABSTRACT'].astype(str).tolist()
```
There shouldn't be repeated documents, since there are no resubmissions or renewals(which would mean a different start date on the budget, but the same abstract and PI names, but there aren't any repeats in our raw data file).
The repeated documents only appear in the doc_info retrieved via get_document_info(corpus). We're thinking that duplicates may be appearing because some documents are assigned 2 or more topics at once, replacing some of the other documents assigned to those same topics with lower probabilities. However, we don't see these repeated documents across different topics, only within the same ones. And in our manual validation, where we picked 15 documents for each of 15 topics, we get anywhere from 2 to 8 repeated documents per topic.
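One possible, unconfirmed source of such duplicates lies upstream of BERTopic: get_document_info returns one row per input document, so duplicate rows imply duplicates in the corpus list itself, and the clean_text step can collapse distinct raw abstracts into identical cleaned strings. A quick stdlib check along those lines (the sample strings are made up):

```python
import re
from collections import Counter

def clean_text(text):
    # Same cleaning steps as in the preprocessing above.
    text = text.lower().strip()
    text = re.sub(r'\s+', ' ', text)
    text = re.sub(r'[^\w\s]', '', text)
    text = re.sub(r'\b\d+\b', '', text)
    return text

# Distinct raw abstracts that nevertheless clean to the same string:
raw = ["Funding for X.", "funding  for x", "Other grant."]
cleaned = [clean_text(t) for t in raw]
dups = [t for t, n in Counter(cleaned).items() if n > 1]
print(dups)  # prints: ['funding for x']
```

Running the same Counter over the real cleaned corpus would confirm or rule this hypothesis out before digging into the topic assignment itself.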
Any idea is appreciated. Please let me know if more information is needed to answer this question. Thank you. | open | 2025-02-25T04:41:02Z | 2025-03-04T12:18:24Z | https://github.com/MaartenGr/BERTopic/issues/2296 | [] | zerubael | 5 |
flairNLP/flair | nlp | 2,997 | TARS Zero Shot Classifier Predictions | Here is the example code to use TARS Zero Shot Classifier
```
from flair.models import TARSClassifier
from flair.data import Sentence
# 1. Load our pre-trained TARS model for English
tars = TARSClassifier.load('tars-base')
# 2. Prepare a test sentence
sentence = Sentence("I am so glad you liked it!")
# 3. Define some classes that you want to predict using descriptive names
classes = ["happy", "sad"]
#4. Predict for these classes
tars.predict_zero_shot(sentence, classes)
# Print sentence with predicted labels
print(sentence)
print(sentence.labels[0].value)
print(round(sentence.labels[0].score,2))
```
Now this code is wrapped into the following function so that I can use it to get predictions for multiple sentences in a dataset.
```
def tars_zero(example):
sentence = Sentence(example)
tars.predict_zero_shot(sentence,classes)
print(sentence)
inputs = ["I am so glad you liked it!", "I hate it"]
for input in inputs:
tars_zero(input)
#output:
Sentence: "I am so glad you liked it !" → happy (0.8667)
Sentence: "I hate it"
```
**Here, the model gives a prediction only for the first instance.**
| closed | 2022-11-24T07:37:38Z | 2023-05-21T15:36:59Z | https://github.com/flairNLP/flair/issues/2997 | [
"bug",
"wontfix"
] | ghost | 2 |
Lightning-AI/pytorch-lightning | data-science | 19,574 | Constructor arguments in init_args get instantiated while parsing arguments of LightningModule | ### Bug description
I have a model whose layers can be specified as constructor arguments. For example, one can set a different normalization layer on the boring model below with a call like ```model = BoringNN(norm_layer=nn.InstanceNorm1d)```:
BoringNN:
```
class BoringNN(nn.Module) :
def __init__(
self,
norm_layer: nn.Module = nn.BatchNorm1d,
activation_layer: nn.Module = nn.ReLU,
):
super().__init__()
self.layer = nn.Linear(3,32)
self.norm = norm_layer(32)
self.activation = activation_layer()
def forward(self, x):
x = self.layer(x)
x = self.norm(x)
x = self.activation(x)
return x
```
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
Here, I am trying to configure this model with Lightning CLI as below:
BoringModel:
```
class BoringModel(L.LightningModule) :
def __init__(
self,
model: nn.Module,
):
super().__init__()
self.save_hyperparameters()
self.model = model
...
```
config.yaml
```
model:
class_path: boring_model.BoringModel
init_args:
model:
class_path: boring_model.BoringNN
init_args:
norm_layer:
class_path: torch.nn.BatchNorm2d
activation_layer:
class_path: torch.nn.ReLU
...
```
### Error messages and logs
But then it gives this error:
```
usage: main.py [-h] [-c CONFIG] [--print_config[=flags]] {fit,validate,test,predict} ...
error: Parser key "model":
Problem with given class_path 'boring_model.BoringModel':
Parser key "model":
Problem with given class_path 'boring_model.BoringNN':
Parser key "norm_layer":
Problem with given class_path 'torch.nn.BatchNorm2d':
Validation failed: Key "num_features" is required but not included in config object or its value is None.
```
### Environment
<details>
<summary>Current environment</summary>
```
- Lightning Component: Trainer, LightningModule, LightningCLI
- PyTorch Lightning Version: 2.2.0.post0
- PyTorch Version: 2.0.0+cu118
- Python version: 3.10.13
- OS (e.g., Linux): Linux
- jsonargparse: 4.27.5
- omegaconf: 2.3.0
```
</details>
### More info
As far as I understand, it is trying to instantiate a norm_layer object while parsing the arguments, but the constructor ```torch.nn.BatchNorm2d``` requires an argument, e.g., as in ```norm = torch.nn.BatchNorm2d(32)```. This argument is not provided in the config file, so it fails to instantiate it.
However, I do not want to pass an object of the specified type to ```BoringNN```, I just want to pass the constructor so that the object will be created inside ```BoringNN```.
I think that the current behavior is also reasonable, as some may want to pass in some objects as arguments. Therefore, I want to ask if there is a way to distinguish these two types of init_args in the config file, (1) objects to be instantiated and (2) object constructors (as discussed above).
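One direction that may help (an assumption based on jsonargparse's support for callable type hints; not verified against this exact setup) is to annotate such parameters as `Callable[..., nn.Module]` instead of `nn.Module`, so the parser treats the config value as a class to pass through rather than an object to instantiate. A torch-free sketch of the distinction:

```python
from typing import Callable

class Norm:
    """Stand-in for nn.BatchNorm1d: needs its size at construction time."""
    def __init__(self, num_features: int):
        self.num_features = num_features

class BoringNN:
    # Annotated as a constructor (Callable), not an instance: the object
    # is built inside the model, exactly as the original code intends.
    def __init__(self, norm_layer: Callable[..., Norm] = Norm):
        self.norm = norm_layer(32)

model = BoringNN()
print(model.norm.num_features)  # prints: 32
```

In the config, such a parameter could then be given as a bare class path with no init_args, leaving instantiation to the model itself.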
Side note: To avoid issues with logging arguments like ```nn.BatchNorm2d```, I run with ```save_config_callback=None``` set in the ```main.py```.
```
# main.py
from lightning.pytorch.cli import LightningCLI
import lightning as L
def cli_main():
# note: don't call fit!!
cli = LightningCLI(L.LightningModule, L.LightningDataModule,
subclass_mode_model=True, subclass_mode_data=True,
auto_configure_optimizers=False,
save_config_callback=None
)
if __name__ == "__main__":
cli_main()
# note: it is good practice to implement the CLI in a function and call it in the main if block
```
cc @carmocca @mauvilsa | closed | 2024-03-05T11:28:01Z | 2024-04-23T08:07:36Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19574 | [
"question",
"lightningcli"
] | tommycwh | 12 |
nolar/kopf | asyncio | 877 | RuntimeError: Session is closed in kopf.testing.KopfRunner | ### Long story short
I encounter the following error a majority of the time when running python kubernetes queries inside the kopf.testing.KopfRunner context. Occasionally, it will succeed, but most of the time it fails as shown below.
Error:
```
self = <aiohttp.client.ClientSession object at 0x12258f8e0>, method = 'get', str_or_url = 'https://0.0.0.0:49746/apis/admissionregistration.k8s.io/v1'
async def _request(
self,
method: str,
str_or_url: StrOrURL,
*,
params: Optional[Mapping[str, str]] = None,
data: Any = None,
json: Any = None,
cookies: Optional[LooseCookies] = None,
headers: Optional[LooseHeaders] = None,
skip_auto_headers: Optional[Iterable[str]] = None,
auth: Optional[BasicAuth] = None,
allow_redirects: bool = True,
max_redirects: int = 10,
compress: Optional[str] = None,
chunked: Optional[bool] = None,
expect100: bool = False,
raise_for_status: Optional[bool] = None,
read_until_eof: bool = True,
proxy: Optional[StrOrURL] = None,
proxy_auth: Optional[BasicAuth] = None,
        timeout: Union[ClientTimeout, object] = sentinel,
        verify_ssl: Optional[bool] = None,
        fingerprint: Optional[bytes] = None,
        ssl_context: Optional[SSLContext] = None,
        ssl: Optional[Union[SSLContext, bool, Fingerprint]] = None,
        proxy_headers: Optional[LooseHeaders] = None,
        trace_request_ctx: Optional[SimpleNamespace] = None,
        read_bufsize: Optional[int] = None,
) -> ClientResponse:
# NOTE: timeout clamps existing connect and read timeouts. We cannot
# set the default to None because we need to detect if the user wants
# to use the existing timeouts by setting timeout to None.
if self.closed:
> raise RuntimeError("Session is closed")
E RuntimeError: Session is closed
```
However, if I shell out to kubectl I do not encounter this issue.
### Kopf version
1.35.3
### Kubernetes version
v1.19.2+k3s1
### Python version
3.8.12
### Code
```python
def test_operator(_handlers, _setup):
with kopf.testing.KopfRunner(
[
"run",
"-A",
"--verbose",
_handlers,
]
) as runner:
cr = CLIENT.get_namespaced_custom_object("test-stack-us-east-2b")
```
### Logs
_No response_
### Additional information
_No response_ | open | 2021-12-24T21:36:17Z | 2022-03-13T01:33:00Z | https://github.com/nolar/kopf/issues/877 | [
"bug"
] | retr0h | 3 |
huggingface/transformers | tensorflow | 36,579 | AutoModel failed with empty tensor error | ### System Info
- `transformers` version: 4.50.0.dev0
- Platform: Linux-4.18.0-553.16.1.el8_10.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: 1.4.0.dev0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_CPU
- mixed_precision: bf16
- use_cpu: True
- debug: False
- num_processes: 4
- machine_rank: 0
- num_machines: 4
- main_process_ip: 127.0.0.1
- main_process_port: 29500
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- ipex_config: {'ipex': False}
- mpirun_config: {'mpirun_ccl': '1', 'mpirun_hostfile': '/home/jiqingfe/jiqing_hf/HuggingFace/tests/workloads/fine-tune/hostfile'}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- DeepSpeed version: not installed
- PyTorch version (GPU?): 2.6.0+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@SunMarc @ArthurZucker @Rocketknight1
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following codes:
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("meta-llama/Llama-3.1-8B-Instruct", device_map="auto")
```
Error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jiqingfe/transformers/src/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
return model_class.from_pretrained(
File "/home/jiqingfe/transformers/src/transformers/modeling_utils.py", line 271, in _wrapper
return func(*args, **kwargs)
File "/home/jiqingfe/transformers/src/transformers/modeling_utils.py", line 4535, in from_pretrained
dispatch_model(model, **device_map_kwargs)
File "/home/jiqingfe/accelerate/src/accelerate/big_modeling.py", line 496, in dispatch_model
model.to(device)
File "/home/jiqingfe/transformers/src/transformers/modeling_utils.py", line 3262, in to
return super().to(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1343, in to
return self._apply(convert)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 903, in _apply
module._apply(fn)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 930, in _apply
param_applied = fn(param)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1336, in convert
raise NotImplementedError(
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
```
### Expected behavior
Expected to get the base model. | closed | 2025-03-06T07:57:25Z | 2025-03-13T17:18:16Z | https://github.com/huggingface/transformers/issues/36579 | [
"bug"
] | jiqing-feng | 1 |
AirtestProject/Airtest | automation | 1,066 | When running automation scripts from PyCharm, is it possible to mirror the phone screen in real time? | When running automation scripts from PyCharm, is it possible to mirror the phone screen in real time? | open | 2022-07-08T10:27:24Z | 2022-07-08T10:27:24Z | https://github.com/AirtestProject/Airtest/issues/1066 | [] | Ray-W-u | 0 |
jupyterhub/jupyterhub-deploy-docker | jupyter | 80 | Use Lab as default route | I would like to use JupyterLab as the default application. Although it is properly installed and reachable (by manually navigating to the /lab endpoint), /tree (the classic notebook UI) is still the default route. I found some old posts (e.g., #26), and there is even an [example](https://github.com/jupyterhub/jupyterhub-deploy-docker/tree/master/examples/jupyterlab) config, but it seems to be outdated.
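For reference, the setting usually suggested for this (hedged: behavior depends on the JupyterHub and notebook versions baked into the image) is the spawner's default URL in `jupyterhub_config.py`:

```python
# jupyterhub_config.py fragment (verify against your JupyterHub version)
c.Spawner.default_url = '/lab'
```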
How could one set the default route to JupyterLab? | closed | 2019-02-05T12:47:49Z | 2022-12-05T00:52:46Z | https://github.com/jupyterhub/jupyterhub-deploy-docker/issues/80 | [] | inkrement | 5 |
explosion/spaCy | data-science | 11,949 | PhraseMatcher not matching correctly on attr when tokenization is customized | I have an example where I have `$` in my infix tokenization rules; the `PhraseMatcher` then fails to match on the `LOWER` attr.
## How to reproduce the behaviour
```python
import spacy
from spacy.matcher import PhraseMatcher
from spacy.util import compile_infix_regex
nlp = spacy.load("blank:en")
nlp.tokenizer.infix_finditer = compile_infix_regex(
list(nlp.Defaults.infixes) + [r"[$]"]
).finditer
doc = nlp("It amounted to US$ 5")
# the following works where I match on the actual case of the token
matcher_working = PhraseMatcher(nlp.vocab, attr="LOWER", validate=True)
matcher_working.add("USD", [nlp("US$")])
assert len(matcher_working(doc)) == 1
# the following does not work where I match on the lowercase of the token
matcher = PhraseMatcher(nlp.vocab, attr="LOWER", validate=True)
matcher.add("USD", [nlp("us$")])
assert len(matcher(doc)) == 1
```
If I don't add `[r"[$]"]` to my infixes then it works fine. I assume that's a bug!?
## Info about spaCy
- **spaCy version:** 3.4.1
- **Platform:** macOS-12.6.1-arm64-arm-64bit
- **Python version:** 3.9.9
| closed | 2022-12-08T10:18:54Z | 2022-12-08T12:47:36Z | https://github.com/explosion/spaCy/issues/11949 | [] | NixBiks | 2 |
joerick/pyinstrument | django | 347 | Timeline view is difficult to scroll around on | I like the new timeline view, but the scrolling behavior is a little wonky. I think it needs to be much harder to zoom in and out. Ideally it would lock either horizontal scrolling or zooming, so that you can scroll the timeline without zooming at the same time. Maybe it should also require more of a vertical scroll to actually activate zooming.
Another idea would be to make scrolling on the timeline itself only do panning (this would also fix #346), and make it so that to zoom you have to scroll or drag on the top bar. This is a common UI for zooming a chart. | open | 2024-10-14T19:00:35Z | 2024-10-15T20:40:16Z | https://github.com/joerick/pyinstrument/issues/347 | [] | asmeurer | 2 |
CPJKU/madmom | numpy | 91 | add convenience methods to MIDIFile to add notes, set tempo and time signature | It would be nice to have some convenience methods to:
- add notes
- set tempo
- set time signature
of a `MIDIFile`; the methods should accept input given either in seconds or in beats.
These methods should be added to `MIDIFile` since the events need to be put into a track, but the tempo and time signature events can be in another track.
Suggestion: let each method accept an argument indicating the unit to be used (seconds/beats); if none is given, it should use the (recently removed) instance attribute.
| closed | 2016-02-18T12:20:57Z | 2018-03-04T11:55:08Z | https://github.com/CPJKU/madmom/issues/91 | [
"enhancement",
"feature request"
] | superbock | 1 |
clovaai/donut | nlp | 48 | Different input resolution throws error | Following is the error we get when we try to pass an input size of 512\*2,512\*3:
Are different input resolutions/sizes not supported currently?
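For context, a quick divisibility check (assuming Donut's released Swin defaults: patch size 4, window size 10, four stages, i.e. three downsamplings) suggests each input side must be a multiple of 4 * 10 * 2**3 = 320, which 1024 and 1536 are not, while the default 2560x1920 is:

```python
# Hypothetical sanity check; the constants are assumptions taken from
# the released Donut config (patch_size=4, window_size=10, 4 stages).
def side_ok(n, patch=4, window=10, stages=4):
    # After patch embedding and each merge, the side must stay
    # divisible by the window size at every stage.
    return n % (patch * window * 2 ** (stages - 1)) == 0

for side in (2560, 1920, 1024, 1536):
    print(side, side_ok(side))
```

If this holds, padding or resizing the inputs to multiples of 320 would be the corresponding workaround.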
Traceback (most recent call last):
File "train.py", line 149, in <module>
train(config)
File "train.py", line 57, in train
model_module = DonutModelPLModule(config)
File "/home/souvic/Desktop/upwork1/donut/donut/lightning_module.py", line 35, in __init__
ignore_mismatched_sizes=True,
File "/home/souvic/Desktop/upwork1/donut/donut/donut/model.py", line 595, in from_pretrained
model = super(DonutModel, cls).from_pretrained(pretrained_model_name_or_path, revision="official", *model_args, **kwargs)
File "/home/souvic/anaconda3/envs/donut_official/lib/python3.7/site-packages/transformers/modeling_utils.py", line 2113, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/home/souvic/Desktop/upwork1/donut/donut/donut/model.py", line 387, in __init__
name_or_path=self.config.name_or_path,
File "/home/souvic/Desktop/upwork1/donut/donut/donut/model.py", line 70, in __init__
num_classes=0,
File "/home/souvic/anaconda3/envs/donut_official/lib/python3.7/site-packages/timm/models/swin_transformer.py", line 500, in __init__
downsample=PatchMerging if (i < self.num_layers - 1) else None
File "/home/souvic/anaconda3/envs/donut_official/lib/python3.7/site-packages/timm/models/swin_transformer.py", line 408, in __init__
for i in range(depth)])
File "/home/souvic/anaconda3/envs/donut_official/lib/python3.7/site-packages/timm/models/swin_transformer.py", line 408, in <listcomp>
for i in range(depth)])
File "/home/souvic/anaconda3/envs/donut_official/lib/python3.7/site-packages/timm/models/swin_transformer.py", line 281, in __init__
mask_windows = window_partition(img_mask, self.window_size) # num_win, window_size, window_size, 1
File "/home/souvic/anaconda3/envs/donut_official/lib/python3.7/site-packages/timm/models/swin_transformer.py", line 111, in window_partition
x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
RuntimeError: shape '[1, 25, 10, 38, 10, 1]' is invalid for input of size 98304 | closed | 2022-09-11T04:23:52Z | 2024-06-18T22:56:55Z | https://github.com/clovaai/donut/issues/48 | [] | Souvic | 3 |
snarfed/granary | rest-api | 143 | Access source URLs in retweets? | I've recently downloaded my tweets from Twitter, and am trying to use Granary to convert them into static HTML files.
I'm basically doing this:
```python
import json
import sys

from granary import twitter

posts = json.loads(open(sys.argv[1], 'r').read())
for post in posts:
    decoded = twitter.Twitter('token', 'secret', 'smerrill').tweet_to_object(post)
    print(decoded)
```
For replies, I see the `inReplyTo` element in the resultant object; but retweets are simply the full text of the thing I retweeted, prefixed with "RT @screenname ". The screenname is linked to the root of that user's account, **not** the ID of the tweet that I am retweeting.
I see that the source JSON does contain the ID of the original tweet, so it should be possible to link to that, instead of just the user's Twitter home page. This would preserve the fidelity of the retweets, provided that the original hasn't been deleted.
Before I get into munging the Granary object with data from the original source JSON, how hard would it be to construct a more representational retweet in Granary directly? | closed | 2018-04-01T21:35:56Z | 2018-04-02T13:47:29Z | https://github.com/snarfed/granary/issues/143 | [] | skpy | 4 |
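Until granary links a retweet to its original, the original tweet's ID is present in Twitter's raw export under `retweeted_status`, so the permalink can be reconstructed in a post-processing step. A sketch, assuming the archive rows carry that field (the field names match Twitter's API; the helper itself is illustrative):

```python
def retweet_permalink(tweet):
    """Return the URL of the original tweet for a retweet, or None otherwise."""
    original = tweet.get("retweeted_status")
    if original is None:
        return None
    screen_name = original["user"]["screen_name"]
    return f"https://twitter.com/{screen_name}/status/{original['id_str']}"

# Minimal shape of a retweet row from the archive:
rt = {
    "retweeted_status": {
        "id_str": "123456",
        "user": {"screen_name": "someone"},
    }
}
print(retweet_permalink(rt))  # https://twitter.com/someone/status/123456
print(retweet_permalink({"full_text": "not a retweet"}))  # None
```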
ultrafunkamsterdam/undetected-chromedriver | automation | 1,394 | i only can pass cloudflare with devtool opened | anyone have a solution to get around it, it just rotates | open | 2023-07-14T09:22:11Z | 2023-07-17T03:42:32Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1394 | [] | NCLnclNCL | 5 |
lorien/grab | web-scraping | 370 | Segmentation fault 11 | Segmentation fault 11 error occurs when i call "go" method of Grab class in mac OS | closed | 2018-12-24T13:51:23Z | 2022-02-25T07:56:47Z | https://github.com/lorien/grab/issues/370 | [] | akoikelov | 2 |
airtai/faststream | asyncio | 1,100 | Bug: FastAPI 0.106.0 broke the integration | **Describe the bug**
Installing fastapi version 0.106.0 or above breaks the tests.
**How to reproduce**
```sh
pip install fastapi==0.106.0
pytest tests
``` | closed | 2023-12-26T20:54:30Z | 2023-12-27T18:46:50Z | https://github.com/airtai/faststream/issues/1100 | [
"bug"
] | davorrunje | 0 |
plotly/dash | jupyter | 2,569 | Dash crashes when I deploy it with Gunicorn | **Describe your context**
```
dash 2.10.2
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
I am trying to deploy a Dash app to production on Ubuntu 22.04 behind Nginx, using Gunicorn as the WSGI server.
The problem comes when I try to access my domain: Dash returns an Internal Server Error originating in Flask:
`TypeError: Flask.__call__() missing 1 required positional argument: 'start_response'`
I have not found too much information about this error, and I think it is an internal error
**Expected behavior**
The app should work fine; it runs correctly on my local machine, but when I run it with Gunicorn it crashes.
| open | 2023-06-18T20:57:34Z | 2024-08-13T19:34:22Z | https://github.com/plotly/dash/issues/2569 | [
"bug",
"P3"
] | manumartinm | 2 |
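The `start_response` TypeError above means the Flask app was invoked without the second argument of the WSGI protocol. A common pitfall with Dash deployments (an assumption here, not confirmed in the thread) is pointing Gunicorn at the wrong object — the usual pattern is `server = app.server` in `app.py` and `gunicorn app:server`. The WSGI contract itself can be sketched without Dash or Flask installed:

```python
class WsgiApp:
    """Stand-in for flask.Flask: a proper WSGI callable."""
    def __call__(self, environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"ok"]

app = WsgiApp()

# Correct invocation, as Gunicorn performs it:
body = app({}, lambda status, headers: None)

# Invoking with only `environ` reproduces the reported message:
try:
    app({})
except TypeError as exc:
    error_message = str(exc)

print(body)           # [b'ok']
print(error_message)  # ...missing 1 required positional argument: 'start_response'
```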
mars-project/mars | pandas | 2,899 | [BUG] mars storage put too much shuffle meta to data manager and supervisor | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
**To Reproduce**
To help us reproduce this bug, please provide the information below:
1. Your Python version: 3.7.9
2. The version of Mars you use: master
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
Too much meta is put to the data manager in the main pool:
[put_data_info.log](https://github.com/mars-project/mars/files/8401983/put_data_info.log)
Too much meta is set on the supervisor:
[set_meta.log](https://github.com/mars-project/mars/files/8401982/set_meta.log)
5. Minimized code to reproduce the error.
**Expected behavior**
Mars should store far less chunk info in the supervisor to avoid making it a single-point bottleneck.
There are about `1000~2000` mapper subtasks, and the meta for each subtask is about 1~3 MB, so the serialization and storage cost for those subtasks would be very large and the supervisor would become the bottleneck of the system.
| closed | 2022-04-02T06:43:28Z | 2022-04-09T12:01:38Z | https://github.com/mars-project/mars/issues/2899 | [
"type: bug",
"mod: meta service"
] | chaokunyang | 5 |
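A back-of-the-envelope for the scale described above, taking midpoints of the reported ranges (assumptions, not measurements):

```python
mapper_subtasks = 1500       # midpoint of the reported 1000~2000
meta_mb_per_subtask = 2      # midpoint of the reported 1~3 MB

total_mb = mapper_subtasks * meta_mb_per_subtask
print(f"~{total_mb} MB (~{total_mb / 1024:.1f} GB) of shuffle meta "
      "serialized and pushed through the supervisor")
```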
piccolo-orm/piccolo | fastapi | 436 | IDE type hints expects "Where", "WhereRaw", "And", "Or" | My editor doesn't like this statement, since the type checker sees it as evaluating to a `bool`, which doesn't match the type signature `Combinable = t.Union["Where", "WhereRaw", "And", "Or"]`

As far as I can see in the docs, this is the correct way to build a where clause, so it would be good if the type hints supported it. Not sure if it's as easy as adding a `bool` to the type hint `Union` (probably not)
| open | 2022-02-16T09:13:48Z | 2025-01-21T21:35:40Z | https://github.com/piccolo-orm/piccolo/issues/436 | [] | trondhindenes | 13 |
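Piccolo's columns override `__eq__` to return a `Where` object at runtime, but type checkers assume `__eq__` returns `bool` unless the override is annotated otherwise — which is why the IDE flags the statement. A minimal illustration of the pattern (the class names are hypothetical stand-ins, not Piccolo's actual implementation):

```python
class Where:
    """Hypothetical stand-in for piccolo's Where clause object."""
    def __init__(self, column, value):
        self.column = column
        self.value = value

class Column:
    """Hypothetical column: == builds a query fragment at runtime."""
    def __init__(self, name):
        self.name = name

    def __eq__(self, other) -> "Where":  # type: ignore[override]
        # Type checkers expect object.__eq__ to return bool, hence the
        # IDE complaint; at runtime a Where comes back instead.
        return Where(self, other)

    __hash__ = object.__hash__  # keep the class hashable despite __eq__

clause = Column("name") == "rosa"
print(type(clause).__name__, clause.value)  # Where rosa
```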
saulpw/visidata | pandas | 2,703 | Create a plugin list somewhere | Is there a collection of plugins somewhere? I couldn't find a list.
It might be helpful if there was at least an "awesome"-style list cataloging what's out there. | open | 2025-02-09T20:17:19Z | 2025-02-12T03:49:25Z | https://github.com/saulpw/visidata/issues/2703 | [
"question"
] | deliciouslytyped | 3 |
getsentry/sentry | django | 87,279 | Filtering Transaction > Summary > Spans by releases returns no results | ### Environment
SaaS (https://sentry.io/)
### Steps to Reproduce
1. Navigate to `Transaction > Summary > Spans`.
2. Attempt to filter search by a well known release # with corresponding events.
### Expected Result
Span operations are shown that occur within the given release.
### Actual Result
Results seem to always be empty.
### Product Area
Unknown
### Link
_No response_
### DSN
_No response_
### Version
_No response_ | open | 2025-03-18T15:15:15Z | 2025-03-18T15:17:11Z | https://github.com/getsentry/sentry/issues/87279 | [
"Bug"
] | bcoe | 1 |
hatchet-dev/hatchet | fastapi | 539 | docs: Kubernetes Quickstart examples are incomplete | The example workers found at https://docs.hatchet.run/self-hosting/kubernetes#run-your-first-worker do not work without major changes. It would be really useful for users to know which configuration properties they have to set in order to get it working.
In my case, the following config worked (typescript example):
```json
{
"token": "xxxxx",
"api_url": "http://localhost:7000/",
"tenant_id": "xxxxx",
"host_port": "7000",
"tls_config": {
"tls_strategy": "none"
}
}
```
Also, I'm not quite sure what the `dotenv` dependency adds to the example, since it doesn't seem to do anything. | closed | 2024-05-29T15:13:07Z | 2024-06-14T13:58:36Z | https://github.com/hatchet-dev/hatchet/issues/539 | [] | kosmoz | 1 |
pytorch/vision | machine-learning | 8,390 | AttributeError: module 'torchvision.transforms' has no attribute 'v2' | ### 🐛 Describe the bug
I am getting the following error:
> AttributeError: module 'torchvision.transforms' has no attribute 'v2'
### Versions
I am using the following versions:
```python
torch version: 2.2.2, torchvision version: 0.17.2
``` | closed | 2024-04-20T12:46:43Z | 2024-04-29T09:42:37Z | https://github.com/pytorch/vision/issues/8390 | [] | ImahnShekhzadeh | 1 |
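With torch 2.2.2 / torchvision 0.17.2 the `v2` namespace does ship, so a likely cause (an assumption matching a common Python pitfall, not a confirmed diagnosis for this report) is accessing it as an attribute without importing it — `from torchvision.transforms import v2` works where `torchvision.transforms.v2` after a bare `import torchvision` may not, because a package's submodule only becomes an attribute once it has been imported. The same behaviour is visible with the stdlib:

```python
import sys

# Simulate a fresh interpreter for the urllib package:
for name in [m for m in sys.modules if m == "urllib" or m.startswith("urllib.")]:
    del sys.modules[name]

import urllib
before = hasattr(urllib, "request")  # submodule not imported yet

import urllib.request  # explicit import binds the attribute on the package
after = hasattr(urllib, "request")

print(before, after)  # False True
```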
ranaroussi/yfinance | pandas | 1,288 | Debian - yfinance - ERROR: Could not build wheels for cryptography, which are required to inatall pyproject.toml-based projects | ### Still think it's a bug? YES, definitely.
- Info about your system:
- yfinance version - UNABLE TO INSTALL **ANY** yfinance version using pip3 install yfinance
Python: 3.7.3
platform: Linux-4.19.0-22-686-i686-with-debian-10.13
pip: n/a
setuptools: 65.6.3
setuptools_rust: 1.5.2
rustc: 1.66.0 (69f9c33d7 2022-12-12)
Summary of errors at bottom of terminal console:
error: command '/usr/bin/i686-linux-gnu-gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for cryptography
Failed to build cryptography
ERROR: Could not build wheels for cryptography, which is required to install pyproject.toml-based projects
Selected relevant info from attempt to install yfinance
$ pip3 install yfinance
Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting yfinance
Using cached https://www.piwheels.org/simple/yfinance/yfinance-0.2.3-py2.py3-none-any.whl (50 kB)
...
...
Collecting cryptography>=3.3.2
Using cached cryptography-39.0.0.tar.gz (603 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
...
...
Building wheels for collected packages: cryptography
Building wheel for cryptography (pyproject.toml) ... error
error: subprocess-exited-with-error
Building wheel for cryptography (pyproject.toml) did not run successfully.
exit code: 1
[300 lines of output]
...
Perhaps install an older version of yfinance (before cryptography became a requirement)? How can I fix this problem?
Please help. | closed | 2023-01-10T21:55:10Z | 2023-01-15T18:24:05Z | https://github.com/ranaroussi/yfinance/issues/1288 | [] | SymbReprUnlim | 14 |
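One workaround worth trying (an assumption based on cryptography's public changelog — the Rust-based core landed in version 3.4 — rather than anything from this thread): the log above shows yfinance only requires `cryptography>=3.3.2`, so pinning below 3.4, e.g. `pip3 install "cryptography>=3.3.2,<3.4" yfinance`, avoids the Rust requirement entirely (a plain C build may still need the `libssl-dev`/`libffi-dev` headers). The cutoff as a tiny helper:

```python
def needs_rust_toolchain(version):
    """True if this cryptography version builds a Rust core from source
    when no prebuilt wheel exists for the platform (cutoff: 3.4)."""
    major, minor = (int(part) for part in version.split(".")[:2])
    return (major, minor) >= (3, 4)

print(needs_rust_toolchain("3.3.2"))   # False - predates the Rust core
print(needs_rust_toolchain("39.0.0"))  # True - the version failing above
```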
littlecodersh/ItChat | api | 965 | itchat fills server memory to 100% within a day | I use itchat to run multiple WeChat accounts, and within a day the server's memory usage reaches 100%. Is this caused by receiving too much friend and group info? How can I stop receiving that info? I only need to log in and be able to send messages. Hoping someone can answer. | open | 2022-07-18T04:28:20Z | 2022-07-18T04:29:21Z | https://github.com/littlecodersh/ItChat/issues/965 | [] | cheduiwang | 2 |