| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ageitgey/face_recognition | machine-learning | 1,560 | image de-duplication by comparing face images | * face_recognition version: 1.3.0
* Python version: 3.11.6
* Operating System: Ubuntu (23.10)
### Description
Requirement:
I have 2 folders, each containing n images, and each image may contain several different faces. I need to extract only the unique faces from the images folder.
I have a folder of images, each of which may contain multiple faces. I need to extract every face from each image and, if the face is unique, save it to a unique_faces folder. Likewise, I have to check every image in the folder and save the unique faces into the unique_faces folder.
Given below is a nested loop that iterates over every image from both folders (I extracted only the faces from the images and saved them in the 2 folders) and uses face recognition to compare each pair of images. On a folder of 100 images I'm getting 5-7 false positives, and the run time is approximately 25 min. I searched for other recognition libraries such as InsightFace, DeepFace, and CompreFace; they were quicker, but produced more false positives. I also tried threading; there is some increase in performance, but not much, and it still takes about 15 min.
Primary Goal: to eliminate the duplicate images.
Does anyone have a solution for this?
### What I Did
```
for img1_filename in os.listdir(folder_1):
    img1_path = os.path.join(folder_1, img1_filename)
    for img2_filename in os.listdir(folder_2):
        img2_path = os.path.join(folder_2, img2_filename)
        if verify_faces(img1_path, img2_path):
            # Save the matched image from folder_1 into the output folder
            output_img_path = os.path.join(output_folder, img2_filename)
            shutil.copyfile(img1_path, output_img_path)
            print(f"Match found: {img1_filename} and {img2_filename}. Saved as {output_img_path}")
            break  # If a match is found, break the inner loop
        else:
            print("No Match")
```
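A possible restructuring (not from the library docs, just a sketch): compute each face's encoding once, then de-duplicate greedily in encoding space, so no image pair is ever re-loaded or re-encoded inside the loop. Toy 2-D vectors stand in for face_recognition's 128-d encodings, and the 0.6 tolerance mirrors its default:

```python
import math

def dedupe(encodings, tolerance=0.6):
    """Greedy de-dup: keep an encoding only if no kept encoding is within tolerance."""
    unique = []
    for enc in encodings:
        if all(math.dist(enc, kept) > tolerance for kept in unique):
            unique.append(enc)
    return unique

# Toy 2-D "encodings"; the second is a near-duplicate of the first.
encs = [(0.0, 0.0), (0.01, 0.0), (5.0, 5.0)]
print(len(dedupe(encs)))  # 2
```

With real data you would call `face_recognition.face_encodings` once per detected face up front, then run only this cheap distance loop, which removes the repeated image loading from the inner loop.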
| open | 2024-04-16T08:42:36Z | 2024-04-16T08:42:36Z | https://github.com/ageitgey/face_recognition/issues/1560 | [] | Raghucharan16 | 0 |
thtrieu/darkflow | tensorflow | 557 | NaN training loss *after* first iteration | I am attempting to further train the yolo network on the coco dataset as a starting point for "training my own data." I converted the coco json into .xml annotations, but when I try to train, I get NaN loss starting at the second step. Most issues regarding NaN loss seem to center on incorrect annotations; however, I have checked over mine multiple times for correctness. I have copied coco.names and overwritten labels.txt with it.
I use the following command to train:
`./flow --model cfg/yolo.cfg --load bin/yolo.weights --train --dataset /path/to/JPEGImages/ --annotation /path/to/Annotations/ --gpu 0.95`
I get the following when training:
```
Training statistics:
Learning rate : 1e-05
Batch size : 16
Epoch number : 1000
Backup every : 2000
step 1 - loss 9.513481140136719 - moving ave loss 9.51348114013672
step 2 - loss nan - moving ave loss nan
step 3 - loss nan - moving ave loss nan
```
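Since most NaN reports do trace back to annotations, a quick machine check of the converted .xml files may still be worth running; degenerate or out-of-bounds boxes can slip past visual inspection. A sketch (tag names follow the usual PASCAL VOC layout; the validity rules are my assumption):

```python
import xml.etree.ElementTree as ET

def bad_boxes(xml_text):
    """Return (xmin, ymin, xmax, ymax) tuples that are degenerate or out of bounds."""
    root = ET.fromstring(xml_text)
    w = int(root.findtext("size/width"))
    h = int(root.findtext("size/height"))
    bad = []
    for obj in root.iter("object"):
        box = tuple(int(obj.findtext("bndbox/" + k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        xmin, ymin, xmax, ymax = box
        if xmin >= xmax or ymin >= ymax or xmin < 0 or ymin < 0 or xmax > w or ymax > h:
            bad.append(box)
    return bad

sample = """<annotation><size><width>100</width><height>100</height></size>
<object><bndbox><xmin>10</xmin><ymin>10</ymin><xmax>5</xmax><ymax>50</ymax></bndbox></object>
</annotation>"""
print(bad_boxes(sample))  # [(10, 10, 5, 50)] because xmin >= xmax
```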
I've read in other issues that there is a parse history that one can delete if there was previously a mistake in the annotations before, but I cannot find it. Looking for any ideas on reason for this to happen. | closed | 2018-02-02T20:39:53Z | 2022-05-15T16:29:16Z | https://github.com/thtrieu/darkflow/issues/557 | [] | matlabninja | 7 |
davidsandberg/facenet | tensorflow | 412 | How to run your code to mark one's name in one image? (accomplish face recognition) | If I want to test one image, and the result should circle out the faces in the image and label each person's name beside the bounding boxes, how do I run this code to train my own model, and how do I modify this code to accomplish my goal?
Thanks very much! | closed | 2017-08-06T06:48:12Z | 2022-05-31T16:38:57Z | https://github.com/davidsandberg/facenet/issues/412 | [] | Victoria2333 | 13 |
yihong0618/running_page | data-visualization | 386 | A beginner who doesn't know how to tinker | How do I export my runs from Joyrun (悦跑圈)? I don't know how to do any of this. | closed | 2023-03-14T08:28:35Z | 2024-03-01T10:09:48Z | https://github.com/yihong0618/running_page/issues/386 | [] | woodanemone | 5 |
amdegroot/ssd.pytorch | computer-vision | 44 | RGB vs BGR? | Hello,
I was looking at your implementation and I believe the input to your model is an image with RGB ordering. I was also looking at the [keras implementation](https://github.com/rykov8/ssd_keras) and they use BGR values. I have also been testing with a [map evaluation script](https://github.com/oarriaga/single_shot_multibox_detector/blob/master/src/evaluate.py), and it seems that I get better results, using the weights that you provided from the original Caffe implementation, when I use BGR instead of RGB. Do you happen to know which order we should follow when using the original Caffe weights?
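For context, converting between the two orderings is just a reversal of the channel axis; with NumPy arrays the equivalent one-liner is `img[..., ::-1]`. A tiny sketch with plain tuples:

```python
# A 1x2 "image" with (R, G, B) pixels; reversing the last axis gives (B, G, R).
rgb = [[(255, 0, 0), (0, 128, 64)]]
bgr = [[px[::-1] for px in row] for row in rgb]
print(bgr[0][0])  # (0, 0, 255)
```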
Thank you very much :) | closed | 2017-07-28T16:38:43Z | 2017-09-22T10:14:39Z | https://github.com/amdegroot/ssd.pytorch/issues/44 | [] | oarriaga | 4 |
public-apis/public-apis | api | 3,649 | Responsive-Login-Form-master5555.zip |
| closed | 2023-09-26T16:36:14Z | 2023-09-26T19:45:32Z | https://github.com/public-apis/public-apis/issues/3649 | [] | Shopeepromo | 0 |
babysor/MockingBird | deep-learning | 653 | Can you provide the dataset format? | If I want to use my own dataset, how should I format my data?
Thanks
Edit 1:
Below is the format of the magicdata_dev_set. Can I simply format my own dataset according to this structure?
```
.
└── MAGICDATA_dev_set/
├── TRANS.txt
├── speaker_id_1/
│ ├── utterance_1.wav
│ ├── utterance_2.wav
│ └── ...
├── speaker_id_2/
│ ├── utterance_3.wav
│ ├── utterance_4.wav
│ └── ...
├── speaker_id_3/
│ ├── utterance_5.wav
│ ├── utterance_6.wav
│ └── ...
└── ...
```
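If it helps, producing a TRANS.txt in the shape shown below from a folder tree like the one above can be scripted. This is only a sketch (the tab separator and the placeholder transcription text are my assumptions, so check them against MockingBird's preprocessing script):

```python
import os
import tempfile

def build_trans(root):
    """Walk speaker_id_*/ dirs and emit TRANS.txt rows: utterance, speaker, text."""
    rows = ["UtteranceID\tSpeakerID\tTranscription"]
    for speaker in sorted(os.listdir(root)):
        spk_dir = os.path.join(root, speaker)
        if not os.path.isdir(spk_dir):
            continue
        for wav in sorted(os.listdir(spk_dir)):
            if wav.endswith(".wav"):
                rows.append(f"{wav}\t{speaker}\tTODO_transcription")
    return "\n".join(rows)

# Toy layout: one speaker directory with two utterances.
with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "speaker_id_1"))
    for name in ("utterance_1.wav", "utterance_2.wav"):
        open(os.path.join(root, "speaker_id_1", name), "w").close()
    trans = build_trans(root)

print(trans)
```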
TRANS.txt:
```
UtteranceID SpeakerID Transcription
utterance_1.wav speaker_id_1 vocals_1
utterance_2.wav speaker_id_2 vocals_2
...
``` | closed | 2022-07-18T03:56:01Z | 2022-07-18T06:23:05Z | https://github.com/babysor/MockingBird/issues/653 | [] | markm812 | 2 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1,299 | 01453367 | But that's for sure | open | 2024-05-21T13:42:25Z | 2024-05-21T13:43:43Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1299 | [] | Oscar199632 | 0 |
huggingface/datasets | pytorch | 6,899 | List of dictionary features get standardized | ### Describe the bug
Hi, I'm trying to create an HF dataset from a list using Dataset.from_list.
Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets library standardizes all the dictionaries under a feature, adding every key that appears in any of them (with None values) to each dictionary under that feature.
How can I keep the same set of keys as in the original list for each dictionary under a feature?
### Steps to reproduce the bug
```
from datasets import Dataset
# Define a function to generate a sample with the "feature_1" feature
def generate_sample():
    # Generate random sample data
    sample_data = {
        "text": "Sample text",
        "feature_1": []
    }

    # Add feature_1 with random keys for this sample
    feature_1 = [{"key1": "value1"}, {"key2": "value2"}]  # Example feature_1 with random keys
    sample_data["feature_1"].extend(feature_1)
    return sample_data

# Generate multiple samples
num_samples = 10
samples = [generate_sample() for _ in range(num_samples)]

# Create a Hugging Face Dataset
dataset = Dataset.from_list(samples)
dataset[0]
```
```{'text': 'Sample text', 'feature_1': [{'key1': 'value1', 'key2': None}, {'key1': None, 'key2': 'value2'}]}```
### Expected behavior
```{'text': 'Sample text', 'feature_1': [{'key1': 'value1'}, {'key2': 'value2'}]}```
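One workaround that may apply in the meantime (my own assumption, not a library-endorsed pattern): serialize each heterogeneous dict to a JSON string before calling `Dataset.from_list`, so the feature is a plain list of strings and Arrow never unifies the key sets. A stdlib-only sketch of the round trip:

```python
import json

sample = {"text": "Sample text",
          "feature_1": [{"key1": "value1"}, {"key2": "value2"}]}

# Encode: store each dict as a JSON string (a plain string feature for Arrow).
encoded = {**sample, "feature_1": [json.dumps(d) for d in sample["feature_1"]]}

# Decode on access: the original key sets survive the round trip.
decoded = [json.loads(s) for s in encoded["feature_1"]]
print(decoded)  # [{'key1': 'value1'}, {'key2': 'value2'}]
```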
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-5.15.0-1040-nvidia-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.0
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | open | 2024-05-15T14:11:35Z | 2024-05-15T14:11:35Z | https://github.com/huggingface/datasets/issues/6899 | [] | sohamparikh | 0 |
biolab/orange3 | numpy | 7,051 | Box Plot: Sometimes the statistic does not appear under the graph. | **What's wrong?**
When the graph grows a lot vertically, the text of the statistics (Student's t, ANOVA, etc.) is not visible.
**How can we reproduce the problem?**
Load dataset "Bank Marketing"
Add widget Box Plot
Connect both
In Box Plot choose the variable "age" and for Subgroups "education" or "month"
No text appears below the graph.
**What's your environment?**
- Operating system: Linux Mint 22.1
- Orange version: 3.38.1
- How you installed Orange: With miniconda, envs, ...
| open | 2025-03-23T22:48:03Z | 2025-03-23T23:05:02Z | https://github.com/biolab/orange3/issues/7051 | [
"bug report"
] | gmolledaj | 1 |
dask/dask | pandas | 11,433 | `array.broadcast_shapes` to return a tuple of `int`, not a tuple of NumPy scalars | <!-- Please do a quick search of existing issues to make sure that this has not been asked before. -->
Before NumPy v2, the `repr` of NumPy scalars returned a plain number. This changed in [NEP 51](https://numpy.org/neps/nep-0051-scalar-representation.html), which changes how the result of `broadcast_shapes()` is printed:
- Before: `(240, 37, 49)`
- After: `(np.int64(240), np.int64(37), np.int64(49))`
There is no functional difference here, but it is much less intuitive to read - especially problematic for documentation/notebooks.
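Concretely, the change being proposed amounts to coercing each dimension back to a built-in `int` before returning the tuple (a sketch; where exactly the coercion would live inside `broadcast_shapes` is up to the maintainers):

```python
import numpy as np

# A shape tuple as dask currently produces it: NumPy scalar dimensions.
shape = (np.int64(240), np.int64(37), np.int64(49))

# The suggested normalization: coerce every dimension to a built-in int.
plain = tuple(int(d) for d in shape)
print(plain)  # (240, 37, 49)
```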
What does everyone think of adding a line into `broadcast_shapes` to ensure that only plain integers are returned? Would this have any unintended consequences? Many thanks. | closed | 2024-10-16T13:39:49Z | 2024-10-16T16:44:57Z | https://github.com/dask/dask/issues/11433 | [
"array"
] | trexfeathers | 4 |
Asabeneh/30-Days-Of-Python | numpy | 544 | Lack of error handling for invalid inputs. | ['The provided code appears to be a function that summarizes text using sentence extraction. However, there are potential issues with how it handles invalid inputs.', 'Current Behavior:', 'The code assumes that the input text is valid and does not perform any checks for empty or null inputs. If an empty string or null value is passed as the input, the code will fail without providing any error message or handling.', 'Expected Behavior:', 'The function should gracefully handle invalid inputs and provide informative error messages to the user.', 'Possible Solutions:', 'Add a check at the beginning of the function to ensure that the input text is not empty or null.', 'If the input is invalid, raise a ValueError or TypeError with a clear error message.', 'Alternatively, you could return an empty string or a default value to indicate that the input was invalid.'] | open | 2024-07-03T12:07:05Z | 2024-07-03T12:07:05Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/544 | [] | aakashrajaraman2 | 0 |
zihangdai/xlnet | nlp | 267 | colab notebook can not run under tensorflow 2.0 | XLNet-imdb-GPU.ipynb
Error: module 'tensorflow._api.v2.train' has no attribute 'Optimizer'
Add this fix:
`%tensorflow_version 1.x` | open | 2020-06-10T03:41:23Z | 2020-06-10T03:41:23Z | https://github.com/zihangdai/xlnet/issues/267 | [] | jlff | 0 |
neuml/txtai | nlp | 7 | Add unit tests and integrate Travis CI | Add testing framework and integrate Travis CI | closed | 2020-08-17T14:36:45Z | 2021-05-13T15:02:42Z | https://github.com/neuml/txtai/issues/7 | [] | davidmezzetti | 0 |
matterport/Mask_RCNN | tensorflow | 2,900 | ModuleNotFoundError: No module named 'maturin' | open | 2022-11-07T06:20:55Z | 2022-11-07T06:20:55Z | https://github.com/matterport/Mask_RCNN/issues/2900 | [] | xuqq0318 | 0 | |
marshmallow-code/apispec | rest-api | 395 | Representing openapi's "Any Type" | I am trying to use `marshmallow.fields.Raw` to represent an "arbitrary JSON" field. Currently I do not see a way to represent a field in marshmallow that will produce an OpenAPI property that is empty when passed through apispec.
The code in `apispec.marshmallow.openapi.OpenAPIConverter.field2type_and_format` (see https://github.com/marshmallow-code/apispec/blob/dev/apispec/ext/marshmallow/openapi.py#L159-L173) unconditionally sets the key `type` in the property object. However it seems that it is legal in openapi to have an empty schema (without the `type` key) which permits any type (see the "Any Type" section at the bottom of https://swagger.io/docs/specification/data-models/data-types/)
Please advise if this is possible with the current codebase or if I will need to implement a custom solution. Thanks. | closed | 2019-02-22T00:05:34Z | 2025-01-21T18:22:07Z | https://github.com/marshmallow-code/apispec/issues/395 | [] | kaya-zekioglu | 4 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 349 | Problem with scrapegraphai/graphs/pdf_scraper_graph.py | Hello,

The output of FetchNode is being fed directly into RAGNode, which won't work because RAGNode expects the second argument to be a list of str. However, FetchNode outputs a list of LangChain Documents.
When I perform the `run` function on an instance of PDFScraperGraph, I get the following error
```
ValidationError                           Traceback (most recent call last)
Cell In[3], line 13
      1 from scrapegraphai.graphs import PDFScraperGraph
      3 pdf_scraper = PDFScraperGraph(
      4     prompt="Which company sponsored the research?",
      5     source="/Users/tindo/Desktop/lang_graph/data/lorem_ipsum.pdf",
   (...)
     11     },
     12 )
---> 13 result = pdf_scraper.run()

File ~/Desktop/lang_graph/env/lib/python3.10/site-packages/scrapegraphai/graphs/pdf_scraper_graph.py:105, in PDFScraperGraph.run(self)
     97 """
     98 Executes the web scraping process and returns the answer to the prompt.
     99
    100 Returns:
    101     str: The answer to the prompt.
    102 """
    104 inputs = {"user_prompt": self.prompt, self.input_key: self.source}
--> 105 self.final_state, self.execution_info = self.graph.execute(inputs)
    107 return self.final_state.get("answer", "No answer found.")

File ~/Desktop/lang_graph/env/lib/python3.10/site-packages/scrapegraphai/graphs/base_graph.py:171, in BaseGraph.execute(self, initial_state)
    169     return (result["_state"], [])
    170 else:
--> 171     return self._execute_standard(initial_state)

File ~/Desktop/lang_graph/env/lib/python3.10/site-packages/scrapegraphai/graphs/base_graph.py:110, in BaseGraph._execute_standard(self, initial_state)
    107 current_node = next(node for node in self.nodes if node.node_name == current_node_name)
    109 with get_openai_callback() as cb:
--> 110     result = current_node.execute(state)
    111 node_exec_time = time.time() - curr_time
    112 total_exec_time += node_exec_time

File ~/Desktop/lang_graph/env/lib/python3.10/site-packages/scrapegraphai/nodes/rag_node.py:85, in RAGNode.execute(self, state)
     82 chunked_docs = []
     84 for i, chunk in enumerate(doc):
---> 85     doc = Document(
     86         page_content=chunk,
     87         metadata={
     88             "chunk": i + 1,
     89         },
     90     )
     91     chunked_docs.append(doc)
     93 self.logger.info("--- (updated chunks metadata) ---")

File ~/Desktop/lang_graph/env/lib/python3.10/site-packages/langchain_core/documents/base.py:22, in Document.__init__(self, page_content, **kwargs)
     20 def __init__(self, page_content: str, **kwargs: Any) -> None:
     21     """Pass page_content in as positional or named arg."""
---> 22     super().__init__(page_content=page_content, **kwargs)

File ~/Desktop/lang_graph/env/lib/python3.10/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
    339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
    340 if validation_error:
--> 341     raise validation_error
    342 try:
    343     object_setattr(__pydantic_self__, '__dict__', values)

ValidationError: 1 validation error for Document
page_content
  str type expected (type=type_error.str)
```
I believe the developer expects us to run the pipeline FetchNode -> ParseNode -> RAGNode instead, because `text_splitter.split_text()` in ParseNode turns a list of Documents into a list of str. However, this may not make sense in the PDF scraping scenario. Thanks! | closed | 2024-06-06T04:53:16Z | 2024-06-16T11:33:15Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/349 | [] | tindo2003 | 6 |
roboflow/supervision | pytorch | 676 | Add legend text on video frames | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Hi All!
I'm trying to find a way to add legend text on each frame (with the frame number shown) during object tracking. Does supervision have a way to do that? I checked the available annotators, but I didn't find anything relevant.
Thank you!
### Additional
_No response_ | closed | 2023-12-15T15:05:19Z | 2023-12-18T08:58:53Z | https://github.com/roboflow/supervision/issues/676 | [
"question"
] | dimpolitik | 4 |
huggingface/datasets | deep-learning | 7,071 | Filter hangs | ### Describe the bug
When trying to filter my custom dataset, the process hangs, regardless of the lambda function used. It appears to be an issue with the way the Images are being handled. The dataset in question is a preprocessed version of https://huggingface.co/datasets/danaaubakirova/patfig where notably, I have converted the data to the Parquet format.
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('lcolonn/patfig', split='test')
ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y')
```
Eventually I press Ctrl+C and obtain this stack trace:
```
>>> ds_filtered = ds.filter(lambda row: row['cpc_class'] != 'Y')
Filter: 0%| | 0/998 [00:00<?, ? examples/s]Filter: 0%| | 0/998 [00:35<?, ? examples/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/fingerprint.py", line 482, in wrapper
out = func(dataset, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3714, in filter
indices = self.map(
^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 602, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 567, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3161, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3552, in _map_single
batch = apply_function_on_filtered_inputs(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 3421, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 6478, in get_indices_from_mask_function
num_examples = len(batch[next(iter(batch.keys()))])
~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 273, in __getitem__
value = self.format(key)
^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 376, in format
return self.formatter.format_column(self.pa_table.select([key]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 443, in format_column
column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/formatting/formatting.py", line 219, in decode_column
return self.features.decode_column(column, column_name) if self.features else column
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in decode_column
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 2008, in <listcomp>
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/features.py", line 1351, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/datasets/features/image.py", line 188, in decode_example
image.load() # to avoid "Too many open files" errors
^^^^^^^^^^^^
File "/home/l-walewski/miniconda3/envs/patentqa/lib/python3.11/site-packages/PIL/ImageFile.py", line 293, in load
n, err_code = decoder.decode(b)
^^^^^^^^^^^^^^^^^
KeyboardInterrupt
```
Warning! This can even seem to cause some computers to crash.
### Expected behavior
Should return the filtered dataset
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-6.5.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.0
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | open | 2024-07-25T15:29:05Z | 2024-07-25T15:36:59Z | https://github.com/huggingface/datasets/issues/7071 | [] | lucienwalewski | 0 |
donnemartin/data-science-ipython-notebooks | tensorflow | 8 | Add requirements file to help with installation for users who prefer not to use Anaconda | closed | 2015-07-12T11:35:04Z | 2015-07-14T11:47:19Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/8 | [
"enhancement"
] | donnemartin | 0 | |
seleniumbase/SeleniumBase | web-scraping | 2,842 | Suddenly unable to bypass CloudFlare challenge (Ubuntu Server) | Hello, overnight my instances of seleniumbase became unable to bypass the CloudFlare challenge (which uses CloudFlare Turnstile).
I was using an older version of SB, so I updated to the latest (4.27.4), and it is still not passing the challenge.

I am using your demo code for clicking on the CloudFlare turnstile captcha:
```
from seleniumbase import SB

def open_the_turnstile_page(sb):
    url = "https://wildbet.gg/"
    sb.driver.uc_open_with_reconnect(url, reconnect_time=5)

def click_turnstile_and_verify(sb):
    sb.switch_to_frame("iframe")
    sb.driver.uc_click("span")
    sb.assert_element("img#captcha-success", timeout=3)

with SB(uc=True, test=True) as sb:
    open_the_turnstile_page(sb)
    try:
        click_turnstile_and_verify(sb)
    except Exception:
        open_the_turnstile_page(sb)
        click_turnstile_and_verify(sb)
    sb.set_messenger_theme(location="top_left")
    sb.post_message("SeleniumBase wasn't detected", duration=3)
```
if I instead use:
`sb.driver.uc_open_with_reconnect(url, reconnect_time=9999)`
and click manually, it works. Does this mean they are detecting something?
I also tried adding `reconnect_time=5` on uc_click and it did not help.
I'm a big fan of your project and I've been using it for some time :) | closed | 2024-06-07T21:54:20Z | 2025-01-21T11:11:59Z | https://github.com/seleniumbase/SeleniumBase/issues/2842 | [
"workaround exists",
"feature or fix already exists",
"UC Mode / CDP Mode",
"Fun"
] | Jobine23 | 73 |
elliotgao2/toapi | flask | 115 | Production Deployment Instructions | Hello,
I am relatively new to Python web development. While I am mainly working on a mobile app, I found `toapi` to be a perfect companion for my backend requirements.
I am now almost ready to launch my app, but I am struggling to find a good production hosting environment for the `toapi` server code.
I am mainly looking at using `heroku`, `aws`, or `google app engine` for hosting the server.
I was wondering if you can provide some instructions for deploying to a production-quality server. I did go over this [deploy link](http://flask.pocoo.org/docs/0.12/deploying/), but I am still not able to connect the content to the actual toapi codebase.
Any advice on how I can move forward with this would be appreciated.
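In case a concrete starting point helps, the generic Flask-behind-gunicorn pattern from that deploy doc looks like this (a sketch only; `myapp:app` assumes your entry module `myapp.py` exposes the underlying Flask WSGI app object as `app` — how your toapi `Api` instance exposes its Flask app is my assumption, so check the toapi source):

```shell
# Install a production WSGI server alongside toapi's Flask dependency
pip install gunicorn

# Serve the Flask app with 2 workers on port 8000 (module:attribute are placeholders)
gunicorn --bind 0.0.0.0:8000 --workers 2 myapp:app
```

Heroku, AWS, and Google App Engine each wrap the same idea: their process entry point (a Procfile, for example) is pointed at a command like the gunicorn one above.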
Thank you again, | closed | 2018-03-02T01:32:31Z | 2018-03-04T12:12:05Z | https://github.com/elliotgao2/toapi/issues/115 | [] | ahetawal-p | 3 |
jonaswinkler/paperless-ng | django | 1,705 | [BUG] Database migrations error | Hi,
I'm trying to install paperless-ng on my Raspberry Pi. When running the superuser creation command after following the guide for the Docker install, the following error appears. How do I fix this?
```
docker-compose run --rm webserver createsuperuser
Starting paperless-ng_broker_1 ... done
Paperless-ng docker container starting...
Creating directory /tmp/paperless
Adjusting permissions of paperless files. This may take a while.
Apply database migrations...
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 354, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 398, in execute
    output = self.handle(*args, **options)
  File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 89, in wrapped
```
Cheers, Gamma
python-visualization/folium | data-visualization | 1,744 | multiple TimeSliderChoropleth fails | **Describe the bug**
I was trying to visualise multiple time series in one map using TimeSliderChoropleth. I tried multiple variations of the code you can see on the screenshot, but every time it returned a map with blue patches. I would like to ask if there is any solution to this issue.

**To Reproduce**
```
wales = folium.Map([52.395180, -3.511841], zoom_start=7.5)

ts1 = TimeSliderChoropleth(area_sites, name='A', styledict=sd_p,
                           overlay=True, show=False)
ts2 = TimeSliderChoropleth(area_sites, name='B', styledict=sd_m, control=True,
                           overlay=True, show=False)

ts1.add_to(wales)
ts2.add_to(wales)
folium.LayerControl().add_to(wales)
wales  # .save("testmap.html")
```
**Expected behavior**
Would it be possible to add multiple TimeSliderChoropleth?
**Environment (please complete the following information):**
- Browser Chrome
- Jupyter Notebook
- Python version sys.version_info(major=3, minor=10, micro=4, releaselevel='final', serial=0)
- folium version 0.14.0
- branca version 0.6.0
| closed | 2023-04-03T16:39:44Z | 2023-05-17T14:06:31Z | https://github.com/python-visualization/folium/issues/1744 | [
"enhancement",
"plugin",
"work in progress"
] | Sponka | 3 |
supabase/supabase-py | fastapi | 533 | install not possible because of version conflicts | **Describe the bug**
Trying to install this package with `poetry add supabase` gives the following error:
```
Because no versions of supabase match >1.0.4,<2.0.0
 and supabase (1.0.4) depends on postgrest (>=0.10.8,<0.11.0), supabase (>=1.0.4,<2.0.0) requires postgrest (>=0.10.8,<0.11.0).
Because postgrest (0.10.8) depends on pydantic (>=2.1.0,<3.0)
 and no versions of postgrest match >0.10.8,<0.11.0, postgrest (>=0.10.8,<0.11.0) requires pydantic (>=2.1.0,<3.0).
Thus, supabase (>=1.0.4,<2.0.0) requires pydantic (>=2.1.0,<3.0).
So, because main depends on both pydantic (^1.10.4) and supabase (^1.0.4), version solving failed.
```
**To Reproduce**
`poetry add supabase`
**Expected behavior**
It installs
**Desktop (please complete the following information):**
- OS: osx
| closed | 2023-08-31T14:55:38Z | 2023-09-18T15:03:05Z | https://github.com/supabase/supabase-py/issues/533 | [] | digi604 | 1 |
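Reading the solver output above: `supabase 1.0.4` requires `postgrest >=0.10.8,<0.11.0`, which requires `pydantic >=2.1,<3`, so the project's own `pydantic ^1.10.4` pin is what makes the solve fail. Assuming the rest of the code can run on pydantic v2, aligning the constraint resolves it; a hedged `pyproject.toml` sketch (the Python constraint is a placeholder, keep whatever your project already declares):

```toml
[tool.poetry.dependencies]
python = "^3.10"   # placeholder: keep your existing constraint
pydantic = "^2.1"  # was ^1.10.4; must overlap supabase's transitive range
supabase = "^1.0.4"
```

If migrating to pydantic v2 is not an option, the other direction is to pin an older supabase release from before its pydantic 2 migration.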
huggingface/text-generation-inference | nlp | 2,362 | AttributeError: 'Idefics2ForConditionalGeneration' object has no attribute 'model' | ### System Info
1xL40 node on Runpod
Latest `huggingface/text-generation-inference:latest` docker image.
Command: `--model-id HuggingFaceM4/idefics2-8b --port 8080 --max-input-length 3000 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 --speculate 3 --lora-adapters orionsoftware/rater-adapter-v0.0.1`
### Information
- [X] Docker
- [ ] The CLI directly
### Tasks
- [X] An officially supported command
- [ ] My own modifications
### Reproduction
I'm trying to deploy an idefics2 LoRA using the `huggingface/text-generation-inference:latest` docker image.
The command I'm running is `--model-id HuggingFaceM4/idefics2-8b --port 8080 --max-input-length 3000 --max-total-tokens 4096 --max-batch-prefill-tokens 4096 --speculate 3 --lora-adapters orionsoftware/rater-adapter-v0.0.1`
I also have a correct HF token to access orionsoftware/rater-adapter-v0.0.1.
It works well without the `--lora-adapters orionsoftware/rater-adapter-v0.0.1` part. But once I add the LoRA, I'm getting this error starting up:
```
2024-08-07T14:53:12.382413544Z 2024-08-07T14:53:12.382183Z  INFO text_generation_launcher: Loading adapter weights into model: orionsoftware/rater-adapter-v0.0.1
2024-08-07T14:53:12.526786055Z 2024-08-07T14:53:12.526533Z ERROR text_generation_launcher: Error when initializing model
2024-08-07T14:53:12.526839016Z Traceback (most recent call last):
2024-08-07T14:53:12.526843694Z File "/opt/conda/bin/text-generation-server", line 8, in <module>
2024-08-07T14:53:12.526847692Z sys.exit(app())
2024-08-07T14:53:12.526851690Z File "/opt/conda/lib/python3.10/site-packages/typer/main.py", line 311, in __call__
2024-08-07T14:53:12.526855617Z return get_command(self)(*args, **kwargs)
2024-08-07T14:53:12.526858793Z File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
2024-08-07T14:53:12.526862660Z return self.main(*args, **kwargs)
2024-08-07T14:53:12.526865856Z File "/opt/conda/lib/python3.10/site-packages/typer/core.py", line 778, in main
2024-08-07T14:53:12.526869113Z return _main(
2024-08-07T14:53:12.526872248Z File "/opt/conda/lib/python3.10/site-packages/typer/core.py", line 216, in _main
2024-08-07T14:53:12.526875775Z rv = self.invoke(ctx)
2024-08-07T14:53:12.526879843Z File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
2024-08-07T14:53:12.526883249Z return _process_result(sub_ctx.command.invoke(sub_ctx))
2024-08-07T14:53:12.526886255Z File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
2024-08-07T14:53:12.526889331Z return ctx.invoke(self.callback, **ctx.params)
2024-08-07T14:53:12.526892567Z File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 783, in invoke
2024-08-07T14:53:12.526916632Z return __callback(*args, **kwargs)
2024-08-07T14:53:12.526919849Z File "/opt/conda/lib/python3.10/site-packages/typer/main.py", line 683, in wrapper
2024-08-07T14:53:12.526922954Z return callback(**use_params) # type: ignore
2024-08-07T14:53:12.526925820Z File "/opt/conda/lib/python3.10/site-packages/text_generation_server/cli.py", line 109, in serve
2024-08-07T14:53:12.526929326Z server.serve(
2024-08-07T14:53:12.526932332Z File "/opt/conda/lib/python3.10/site-packages/text_generation_server/server.py", line 274, in serve
2024-08-07T14:53:12.526935638Z asyncio.run(
2024-08-07T14:53:12.526938885Z File "/opt/conda/lib/python3.10/asyncio/runners.py", line 44, in run
2024-08-07T14:53:12.526941910Z return loop.run_until_complete(main)
2024-08-07T14:53:12.526945066Z File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 636, in run_until_complete
2024-08-07T14:53:12.526948322Z self.run_forever()
2024-08-07T14:53:12.526951238Z File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 603, in run_forever
2024-08-07T14:53:12.526954875Z self._run_once()
2024-08-07T14:53:12.526957790Z File "/opt/conda/lib/python3.10/asyncio/base_events.py", line 1909, in _run_once
2024-08-07T14:53:12.526960856Z handle._run()
2024-08-07T14:53:12.526964022Z File "/opt/conda/lib/python3.10/asyncio/events.py", line 80, in _run
2024-08-07T14:53:12.526967298Z self._context.run(self._callback, *self._args)
2024-08-07T14:53:12.526971727Z > File "/opt/conda/lib/python3.10/site-packages/text_generation_server/server.py", line 229, in serve_inner
2024-08-07T14:53:12.526974853Z model = get_model_with_lora_adapters(
2024-08-07T14:53:12.526977828Z File "/opt/conda/lib/python3.10/site-packages/text_generation_server/models/__init__.py", line 1216, in get_model_with_lora_adapters
2024-08-07T14:53:12.526983248Z 1 if layer_name == "lm_head" else len(model.model.model.layers)
2024-08-07T14:53:12.526986344Z File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1729, in __getattr__
2024-08-07T14:53:12.526989230Z raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
2024-08-07T14:53:12.526992286Z AttributeError: 'Idefics2ForConditionalGeneration' object has no attribute 'model'
```
This is on a 1xL40 node on Runpod.
`orionsoftware/rater-adapter-v0.0.1` was trained using the `transformers` `Trainer` and looks like this:

I'm curious as to what I'm doing wrong. Unfortunately, my weak Python skills prevent me from debugging this further.
### Expected behavior
The expectation was for the model to be served correctly with no errors. | open | 2024-08-06T10:38:39Z | 2024-09-11T15:43:35Z | https://github.com/huggingface/text-generation-inference/issues/2362 | [] | komninoschatzipapas | 2 |
jupyter/nbgrader | jupyter | 1,681 | Installation: nbgrader menus don't show up in jupyterhub | ### Operating system
Ubuntu 20.04.5 LTS
### `nbgrader version`
0.8.1
### `jupyterhub version` (if used with JupyterHub)
1.5.0 20220918101831
### `jupyter notebook --version`
6.4.12
### Expected behavior
after `pip install nbgrader` the JupyterHub web UI contains the Formgrader, Courses and Assignment menus
### Actual behavior
Neither of the nbgrader-specific menus shows up
### Steps to reproduce the behavior
I have also tried installing with "sudo" and activating the server extensions manually as described in https://nbgrader.readthedocs.io/en/stable/user_guide/installation.html
with no result. No error is reported during the installation or activation of the extensions. | closed | 2022-10-05T08:10:04Z | 2022-10-07T07:32:49Z | https://github.com/jupyter/nbgrader/issues/1681 | [] | kliegr | 4 |
waditu/tushare | pandas | 1,338 | Unknown error when fetching daily data with Python | ID:361673
Traceback (most recent call last):
File "C:/Users/KING/Desktop/meiduo/Desktop/test5.py", line 41, in <module>
df = pro.daily(ts_code=i, start_date=start, end_date=end)
File "D:\Python37\lib\site-packages\tushare\pro\client.py", line 44, in query
raise Exception(result['msg'])
Exception: 系统未知错误,欢迎上报! 谢谢! (Unknown system error, please report it. Thanks!) request-id(704dbe287d5111ea9a5fc1a4690e31ab1586759931818779) | open | 2020-04-13T07:02:58Z | 2020-04-13T07:02:58Z | https://github.com/waditu/tushare/issues/1338 | [] | zbqing | 0 |
lyhue1991/eat_tensorflow2_in_30_days | tensorflow | 98 | 3-1,3-2,3-3 | Hello, in the forward-propagation functions of the classification models in 3-1, 3-2 and 3-3, which use `@tf.function(input_signature=[tf.TensorSpec(shape = [None,2], dtype = tf.float32)])`: how should the `shape` parameter be defined? | open | 2022-10-26T08:34:23Z | 2022-10-26T08:34:23Z | https://github.com/lyhue1991/eat_tensorflow2_in_30_days/issues/98 | [] | gaopfnice | 0 |
torchbox/wagtail-grapple | graphql | 290 | bug: order is not applied to search results | When using both searchQuery and order, the results are not ordered according to order. | closed | 2023-01-05T18:42:31Z | 2023-01-14T02:40:31Z | https://github.com/torchbox/wagtail-grapple/issues/290 | [
"bug"
] | dopry | 1 |
tflearn/tflearn | tensorflow | 389 | variable_scope() got multiple values for argument 'reuse' | I'm getting the following error:
```
Traceback (most recent call last):
File "book.py", line 15, in <module>
net = tflearn.fully_connected(net, 64)
File "/usr/local/lib/python3.5/site-packages/tflearn/layers/core.py", line 146, in fully_connected
with tf.variable_scope(scope, name, values=[incoming], reuse=reuse) as scope:
File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 133, in helper
return _GeneratorContextManager(func, args, kwds)
File "/usr/local/Cellar/python3/3.5.2_1/Frameworks/Python.framework/Versions/3.5/lib/python3.5/contextlib.py", line 38, in __init__
self.gen = func(*args, **kwds)
TypeError: variable_scope() got multiple values for argument 'reuse'
```
by executing this code:
```
import tflearn
from tflearn.data_utils import image_preloader

dataset_file = 'my_dataset.txt'
X, Y = image_preloader(dataset_file, image_shape=(128, 128), mode='file',
                       categorical_labels=True, normalize=True)

# Classification
tflearn.init_graph(num_cores=8, gpu_memory_fraction=0.5)

net = tflearn.input_data(shape=[None, 784])
net = tflearn.fully_connected(net, 64)
net = tflearn.dropout(net, 0.5)
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='adam', loss='categorical_crossentropy')

model = tflearn.DNN(net)
model.fit(X, Y)
```
Please advise.
| closed | 2016-10-11T19:19:11Z | 2016-10-15T23:55:48Z | https://github.com/tflearn/tflearn/issues/389 | [] | ror6ax | 8 |
tqdm/tqdm | pandas | 958 | Bug in tqdm.write() in MINGW64 on Windows 10 | - [x] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [x] visual output bug
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
Version output:
4.45.0 3.7.6 (default, Jan 8 2020, 20:23:39) [MSC v.1916 64 bit (AMD64)] win32
On Windows 10 using the MINGW64 bash emulator, the output of `tqdm.write()` does not appear during the loop, but appears all at once after the loop has finished. In the default Windows terminal it works as expected.
Test code:
```python
from tqdm import tqdm
from time import sleep
for i in tqdm(range(0, 3000)):
sleep(0.001)
if i%1000 == 0:
tqdm.write("Time passed")
```
Output in MINGW64 (the three "Time passed" messages appear simultaneously after the loop has finished):
```
$ python test.py
100%|##########| 3000/3000 [00:05<00:00, 535.52it/s]
Time passed
Time passed
Time passed
```
Output in Windows Powershell (working as expected):
```
PS C:\xxx\yyy\zzz> python .\test.py
Time passed
Time passed
Time passed
100%|█████████████████████████████████████████████████████████████████████████████| 3000/3000 [00:05<00:00, 527.41it/s]
```
Output in Windows prompt (working as expected):
```
(dvc) C:\xxx\yyy\zzz>python test.py
Time passed
Time passed
Time passed
100%|█████████████████████████████████████| 3000/3000 [00:05<00:00, 565.41it/s]
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
| open | 2020-05-01T10:20:11Z | 2020-05-01T10:20:22Z | https://github.com/tqdm/tqdm/issues/958 | [] | charlesbaynham | 0 |
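MinTTY (the terminal that MINGW64's bash runs in) presents stdout to Python as a pipe rather than a console, so Python block-buffers it; that matches the symptom above. A hedged, stdlib-only sketch of the usual workaround, flushing after every message (running with `python -u` or `PYTHONUNBUFFERED=1` should behave the same):

```python
import sys

def write_now(stream, message):
    """Write one line and flush immediately so output is not held back
    when the stream is a block-buffered pipe (e.g. under MinTTY/MINGW64)."""
    stream.write(message + "\n")
    stream.flush()

if __name__ == "__main__":
    for _ in range(3):
        write_now(sys.stdout, "Time passed")
```

This is independent of tqdm; with tqdm itself, calling `sys.stdout.flush()` right after `tqdm.write(...)` is the equivalent workaround.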
mlfoundations/open_clip | computer-vision | 527 | Customized pretrained models | Hi, thanks for the awesome repo. Is there any way to load a pre-trained model and change its layer config? For instance, I need to load ViT-B-16-plus-240, change the embed dim, and then train it. Please note that I need to initialize the model's layer weights from the pre-trained model, except for the layers which are changed. | closed | 2023-05-10T17:22:59Z | 2023-05-18T16:52:20Z | https://github.com/mlfoundations/open_clip/issues/527 | [] | kyleub | 2 |
dgtlmoon/changedetection.io | web-scraping | 2,879 | Exception: 'ascii' codec can't encode character '\xf6' in position 5671: ordinal not in range(128) | **Describe the bug**
I now get the following error on the latest version (0.48.05):
Exception: 'ascii' codec can't encode character '\xf6' in position 5720: ordinal not in range(128)
**Version**
*Exact version* in the top right area: 0.48.05
**How did you install?**
Docker
**To Reproduce**
Steps to reproduce the behavior:
1. Install changedetection.io
2. Add a change detection watch
3. Wait for it to check the website and observe the error
| closed | 2025-01-02T21:58:40Z | 2025-01-03T08:13:55Z | https://github.com/dgtlmoon/changedetection.io/issues/2879 | [
"triage"
] | doggi87 | 1 |
0b01001001/spectree | pydantic | 240 | Deactivate validation_error_status value | Is there a way to remove the `validation_error_status` feature for specific endpoints?
Use case: A `GET` endpoint with no query parameters but path parameters only, so it will never be possible to return a `422` or any error with the `ValidationError` form. | closed | 2022-07-27T17:02:51Z | 2023-03-24T02:26:59Z | https://github.com/0b01001001/spectree/issues/240 | [] | Joseda8 | 2 |
autokey/autokey | automation | 271 | trying to emulate shift+end with autokey | ## Classification:
Question
## Reproducibility:
Always
## Version
AutoKey version: autokey-gtk 0.95.1
Used GUI (Gtk, Qt, or both): gtk
Installed via: (PPA, pip3, …).
sudo apt-get install autokey-gtk
Linux Distribution: ubuntu 18.04.2 (gnome shell)
## Summary
Trying to emulate the shift+end key combination with AutoKey does not work.
## Steps to Reproduce (if applicable)
I tried the following scripts (more info in this [Stack Overflow](https://stackoverflow.com/questions/55647612/trying-to-emulate-shiftend-with-autokey) question):
```
keyboard.send_key('<shift>+<end>')
---
keyboard.send_key('<shift+end>')
keyboard.send_key('shift+end')
etc...
---
keyboard.press_key('<shift>')
keyboard.send_key('<end>')
keyboard.release_key('<shift>')
```
I can't emulate the shift-end key combination
## Expected Results
- The shift+end key combination should be sent.
## Actual Results
- Instead, it does nothing, or with the last variant it only works when you press the key combination the seventh time (???)
| closed | 2019-04-13T06:38:36Z | 2019-04-23T18:28:25Z | https://github.com/autokey/autokey/issues/271 | [] | opensas | 3 |
gtalarico/django-vue-template | rest-api | 46 | Initial settings: disableHostCheck, Cross-Origin | Hi, I appreciate your work!
When following the installation guide, it is not working out of the box. In
`vue.config.js`
I have to add
```
devServer: {
disableHostCheck: true,
...
}
```
to see the vue start page.
The web browser console is repeatedly reporting Cross-Origin blocks. Is some communication between Vue and Django still blocked by a Cross-Origin rule of Django or Vue?
Am I doing something wrong, or is it intended to be like this? Is Django's `django-cors-headers` package needed to allow communication?
| open | 2020-01-16T17:44:34Z | 2020-01-19T16:48:33Z | https://github.com/gtalarico/django-vue-template/issues/46 | [] | totobrei | 1 |
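For anyone hitting the same Cross-Origin messages: an alternative to enabling CORS on the Django side is to route API calls through the dev server's proxy, so the browser only ever talks to one origin. A hedged `vue.config.js` sketch (the `/api` prefix and the Django address `http://localhost:8000` are assumptions; adjust them to this template's actual routes):

```javascript
// vue.config.js
module.exports = {
  devServer: {
    disableHostCheck: true, // workaround from the report above
    proxy: {
      '/api': {
        // assumed Django dev-server address; change if yours differs
        target: 'http://localhost:8000',
        changeOrigin: true,
      },
    },
  },
};
```

With the proxy in place, the frontend can request `/api/...` on its own origin and the dev server forwards it to Django, which usually removes the need for `django-cors-headers` during development.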
robotframework/robotframework | automation | 5,250 | Allow removing tags using `-tag` syntax also in `Test Tags` | `Test Tags` from `*** Settings ***` and `[Tags]` from Test Case behave differently when removing tags: while it is possible to remove tags with Test Case's `[Tag] -something`, Settings `Test Tags -something` introduces a new tag `-something`.
Running tests with these robot files (also [attached](https://github.com/user-attachments/files/17566740/TagsTest.zip)):
* `__init__.robot`:
```
*** Settings ***
Test Tags something
```
* `-SomethingInSettings.robot`:
```
*** Settings ***
Test Tags -something
*** Test Cases ***
-Something In Settings
Should Be Empty ${TEST TAGS}
```
* `-SomethingInTestCase.robot`:
```
*** Test Cases ***
-Something In Test Case
[Tags] -something
Should Be Empty ${TEST TAGS}
```
gives the following output:
```
> robot .
==============================================================================
TagsTest
==============================================================================
TagsTest.-SomethingInSettings
==============================================================================
-Something In Settings | FAIL |
'['-something', 'something']' should be empty.
------------------------------------------------------------------------------
TagsTest.-SomethingInSettings | FAIL |
1 test, 0 passed, 1 failed
==============================================================================
TagsTest.-SomethingInTestCase
==============================================================================
-Something In Test Case | PASS |
------------------------------------------------------------------------------
TagsTest.-SomethingInTestCase | PASS |
1 test, 1 passed, 0 failed
==============================================================================
TagsTest | FAIL |
2 tests, 1 passed, 1 failed
==============================================================================
```
(https://forum.robotframework.org/t/removing-tags-from-the-test-tags-setting/7513/6?u=romanliv confirms this as an issue to be fixed) | open | 2024-10-30T05:56:30Z | 2024-11-01T11:05:54Z | https://github.com/robotframework/robotframework/issues/5250 | [
"enhancement",
"priority: medium",
"backwards incompatible",
"effort: medium"
] | romanliv | 1 |
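To make the proposed semantics concrete, here is an illustrative pure-Python model of the intended merge (an assumption about the eventual behavior, not Robot Framework's actual implementation): test-level `-name` entries remove inherited suite tags, and all other entries are added.

```python
def merge_tags(suite_tags, test_tags):
    """Model of the proposed semantics: a test-level tag written as
    '-name' removes 'name' inherited from the suite-level Test Tags;
    matching is case-insensitive, as Robot Framework tags are."""
    removals = {t[1:].lower() for t in test_tags if t.startswith("-")}
    additions = [t for t in test_tags if not t.startswith("-")]
    kept = [t for t in suite_tags if t.lower() not in removals]
    return sorted(set(kept + additions), key=str.lower)

print(merge_tags(["something"], ["-something"]))  # -> []
```

Under this model, the `-SomethingInTestCase.robot` example above yields an empty tag list, which is exactly what `[Tags] -something` already does and what `Test Tags -something` would do after the change.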
autogluon/autogluon | scikit-learn | 4,681 | torch.load Compatibility Issue: Unsupported Global fastcore.foundation.L with weights_only=True | Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.
Detailed Traceback:
Traceback (most recent call last):
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\trainer\abstract_trainer.py", line 2103, in _train_and_save
model = self._train_single(**model_fit_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\trainer\abstract_trainer.py", line 1993, in _train_single
model = model.fit(X=X, y=y, X_val=X_val, y_val=y_val, X_test=X_test, y_test=y_test, total_resources=total_resources, **model_fit_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\abstract\abstract_model.py", line 925, in fit
out = self._fit(**kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\stacker_ensemble_model.py", line 270, in _fit
return super()._fit(X=X, y=y, time_limit=time_limit, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\bagged_ensemble_model.py", line 298, in _fit
self._fit_folds(
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\bagged_ensemble_model.py", line 724, in _fit_folds
fold_fitting_strategy.after_all_folds_scheduled()
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\fold_fitting_strategy.py", line 317, in after_all_folds_scheduled
self._fit_fold_model(job)
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\fold_fitting_strategy.py", line 322, in _fit_fold_model
fold_model = self._fit(self.model_base, time_start_fold, time_limit_fold, fold_ctx, self.model_base_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\ensemble\fold_fitting_strategy.py", line 358, in _fit
fold_model.fit(X=X_fold, y=y_fold, X_val=X_val_fold, y_val=y_val_fold, time_limit=time_limit_fold, num_cpus=num_cpus, num_gpus=num_gpus, **kwargs_fold)
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\core\models\abstract\abstract_model.py", line 925, in fit
out = self._fit(**kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\tabular\models\fastainn\tabular_nn_fastai.py", line 365, in _fit
self.model.fit_one_cycle(epochs, params["lr"], cbs=callbacks)
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\callback\schedule.py", line 121, in fit_one_cycle
self.fit(n_epoch, cbs=ParamScheduler(scheds)+L(cbs), reset_opt=reset_opt, wd=wd, start_epoch=start_epoch)
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\learner.py", line 266, in fit
self._with_events(self._do_fit, 'fit', CancelFitException, self._end_cleanup)
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\learner.py", line 203, in _with_events
self(f'after_{event_type}'); final()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\learner.py", line 174, in __call__
def __call__(self, event_name): L(event_name).map(self._call_one)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\fastcore\foundation.py", line 159, in map
def map(self, f, *args, **kwargs): return self._new(map_ex(self, f, *args, gen=False, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\fastcore\basics.py", line 910, in map_ex
return list(res)
^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\fastcore\basics.py", line 895, in __call__
return self.func(*fargs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\learner.py", line 178, in _call_one
for cb in self.cbs.sorted('order'): cb(event_name)
^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\callback\core.py", line 64, in __call__
except Exception as e: raise modify_exception(e, f'Exception occured in `{self.__class__.__name__}` when calling event `{event_name}`:\n\t{e.args[0]}', replace=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\callback\core.py", line 62, in __call__
try: res = getcallable(self, event_name)()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\autogluon\tabular\models\fastainn\callbacks.py", line 116, in after_fit
self.learn.load(f"{self.fname}", with_opt=self.with_opt)
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\learner.py", line 422, in load
load_model(file, self.model, self.opt, device=device, **kwargs)
File "C:\Users\celes\anaconda3\Lib\site-packages\fastai\learner.py", line 53, in load_model
state = torch.load(file, map_location=device, **torch_load_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\celes\anaconda3\Lib\site-packages\torch\serialization.py", line 1455, in load
raise pickle.UnpicklingError(_get_wo_message(str(e))) from None
_pickle.UnpicklingError: Exception occured in `AgSaveModelCallback` when calling event `after_fit`:
Weights only load failed. This file can still be loaded, to do so you have two options, do those steps only if you trust the source of the checkpoint.
(1) Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
(2) Alternatively, to load with `weights_only=True` please check the recommended steps in the following error message.
WeightsUnpickler error: Unsupported global: GLOBAL fastcore.foundation.L was not an allowed global by default. Please use `torch.serialization.add_safe_globals([L])` or the `torch.serialization.safe_globals([L])` context manager to allowlist this global if you trust this class/function. | closed | 2024-11-23T11:41:14Z | 2024-11-23T14:04:34Z | https://github.com/autogluon/autogluon/issues/4681 | [
"enhancement"
] | celestinoxp | 0 |
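The quoted error message already names the actual fix: call `torch.serialization.add_safe_globals([L])` (with `from fastcore.foundation import L`) before fitting, or fall back to `weights_only=False` only for checkpoints you trust. The allowlisting mechanism behind `weights_only=True` can be modeled in pure Python as a restricted unpickler; this is a toy sketch of the idea, not torch's real implementation:

```python
import io
import pickle
from fractions import Fraction

class AllowlistUnpickler(pickle.Unpickler):
    """Toy model of torch.load(weights_only=True): only globals that were
    explicitly allowlisted (cf. torch.serialization.add_safe_globals) may
    be reconstructed; anything else raises UnpicklingError."""

    def __init__(self, data, allowed):
        super().__init__(io.BytesIO(data))
        self.allowed = allowed  # set of (module, qualname) pairs

    def find_class(self, module, name):
        if (module, name) not in self.allowed:
            raise pickle.UnpicklingError(f"{module}.{name} is not allowlisted")
        return super().find_class(module, name)

data = pickle.dumps(Fraction(1, 2))
# Allowlisted, so it loads, just as add_safe_globals([L]) would permit
# fastcore.foundation.L in the fastai checkpoint above.
print(AllowlistUnpickler(data, {("fractions", "Fraction")}).load())
```

An empty allowlist rejects the same payload, which is exactly what happened to `fastcore.foundation.L` here after PyTorch flipped the `torch.load` default to `weights_only=True`.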
pyg-team/pytorch_geometric | deep-learning | 8,967 | /lib/python3.10/site-packages/torch_geometric/utils/sparse.py:268 : Sparse CSR tensor support is in beta state | ### 🐛 Describe the bug
Hello,
I updated my pytorch to 2.2.1+cu121 using pip, and also updated pyg by `pip install torch_geometric`. Then I found there is a warning when I imported the dataset:
`/lib/python3.10/site-packages/torch_geometric/utils/sparse.py:268 : Sparse CSR tensor support is in beta state, please submit a feature request to https://github.com/pytorch/pytorch/issues. (Triggered internally at ../aten/src/ATen/SparseCsrTensorImpl.cpp:53.)`
`adj = torch.sparse_csr_tensor(`
This is the code related to this:
```
def get_products():
root = osp.join(osp.dirname(osp.realpath('__file__')), '..', 'data', 'products')
dataset = PygNodePropPredDataset('ogbn-products', root)
data = dataset[0]
data = T.ToSparseTensor()(data)
data.y = data.y.view(-1)
split_idx = dataset.get_idx_split()
data.train_mask = index2mask(split_idx['train'], data.num_nodes)
data.val_mask = index2mask(split_idx['valid'], data.num_nodes)
data.test_mask = index2mask(split_idx['test'], data.num_nodes)
return data, dataset.num_features, dataset.num_classes
data, in_channels, out_channels = get_products()
```
This never happened before the update. Now my `data.adj_t` becomes a plain tensor instead of a sparse tensor, and calls like
`data.adj_t = data.adj_t.set_diag()` raise errors. I am wondering how I can fix this.
Besides, after updating pyg, it was returning
`lib/python3.10/site-packages/torch_cluster/_version_cuda.so: undefined symbol: _ZN3c1017RegisterOperatorsD1Ev. `
I fixed it by updating torch_cluster. I am not sure if this is related.
Any help would be appreciated.
### Versions
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.5
Libc version: glibc-2.35
Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
Nvidia driver version: 535.129.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.2
[pip3] pytorch-warmup==0.1.1
[pip3] torch==2.2.1
[pip3] torch-cluster==1.6.3
[pip3] torch_geometric==2.5.0
[pip3] torch-scatter==2.1.2
[pip3] torch-sparse==0.6.17
[pip3] torch_spline_conv==1.2.2+pt22cu121
[pip3] torchaudio==2.2.1
[pip3] torchvision==0.17.1
[pip3] triton==2.2.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.8.0 h6a678d5_0
[conda] mkl 2023.1.0 h213fc3f_46343
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.26.0 pypi_0 pypi
[conda] numpy-base 1.25.2 py310hb5e798b_0
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] pytorch-warmup 0.1.1 pypi_0 pypi
[conda] torch 2.0.1 pypi_0 pypi
[conda] torch-cluster 1.6.3 pypi_0 pypi
[conda] torch-geometric 2.5.0 pypi_0 pypi
[conda] torch-scatter 2.1.2 pypi_0 pypi
[conda] torch-sparse 0.6.17 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt22cu121 pypi_0 pypi
[conda] torchaudio 2.2.1 pypi_0 pypi
[conda] torchvision 0.17.1 pypi_0 pypi
[conda] triton 2.2.0 pypi_0 pypi | open | 2024-02-26T01:22:59Z | 2024-03-07T13:44:29Z | https://github.com/pyg-team/pytorch_geometric/issues/8967 | [
"bug"
] | RX28666 | 15 |
netbox-community/netbox | django | 17,774 | The rename of SSO from Microsoft Azure AD to Entra ID doesn't work as expected | ### Deployment Type
Self-hosted
### Triage priority
N/A
### NetBox Version
v4.1.4
### Python Version
3.10
### Steps to Reproduce
Update from NetBox v4.1.1 to v4.1.4 (SSO with Entra ID enabled)
### Expected Behavior
According to #15829, the new label **Microsoft Entra ID** was expected when SSO with Entra ID is enabled on NetBox.
### Observed Behavior
The login screen doesn't show the **Microsoft Entra ID** label

The link associated with the SSO button is `.../oauth/login/azuread-oauth2/?next=%2F`. With reference to the [Microsoft Entra ID](https://github.com/netbox-community/netbox/blob/develop/docs/administration/authentication/microsoft-entra-id.md) doc, if I change the _Redirect URI_ (Azure App Registrations) from `/oauth/complete/azuread-oauth2/` to `/oauth/complete/entraid-oauth2/`, login doesn't work anymore.
"type: bug",
"status: accepted",
"severity: low"
] | lucafabbri365 | 8 |
glato/emerge | data-visualization | 16 | SyntaxError: invalid syntax when starting emerge | Getting the following error when trying to start emerge:
```sh
(app) ➜ app git:(master) emerge
Traceback (most recent call last):
File "/Users/dillon.jones/.pyenv/versions/app/bin/emerge", line 33, in <module>
sys.exit(load_entry_point('emerge-viz==1.1.0', 'console_scripts', 'emerge')())
File "/Users/dillon.jones/.pyenv/versions/app/bin/emerge", line 25, in importlib_load_entry_point
return next(matches).load()
File "/Users/dillon.jones/.pyenv/versions/3.7.10/envs/app/lib/python3.7/site-packages/importlib_metadata/__init__.py", line 167, in load
module = import_module(match.group('module'))
File "/Users/dillon.jones/.pyenv/versions/3.7.10/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/dillon.jones/.pyenv/versions/3.7.10/envs/app/lib/python3.7/site-packages/emerge/main.py", line 8, in <module>
from emerge.appear import Emerge
File "/Users/dillon.jones/.pyenv/versions/3.7.10/envs/app/lib/python3.7/site-packages/emerge/appear.py", line 14, in <module>
from emerge.languages.javaparser import JavaParser
File "<fstring>", line 1
(result=)
^
SyntaxError: invalid syntax
```
Seems like python doesn't like this line? https://github.com/glato/emerge/blob/ac8adeb7144fef1c99e9ee9c1872551be9936cdc/emerge/languages/javaparser.py#L162
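If it helps, I believe the failing construct is an f-string debug specifier (`f'{result=}'`), which was only added in Python 3.8, so on the Python 3.7.10 reported below it is a SyntaxError. A quick illustration (runs on 3.8+):

```python
# Self-documenting f-strings ("debug specifiers") were added in Python 3.8.
result = 42
text = f"{result=}"  # evaluates fine on Python >= 3.8
print(text)          # prints: result=42
# On Python 3.7, the same expression raises: SyntaxError: invalid syntax
```

So either the package should declare `python_requires>=3.8`, or that line should avoid the `=` specifier.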
**Describe your environment**
Python 3.7.10
MacOS 11.6.2, intel cpu
emerge-viz 1.1.0
**To Reproduce**
Steps to reproduce the behavior:
Install emerge. I had to `brew install graphviz` and then run the following to avoid a wheel issue
```sh
pip install emerge-viz --global-option=build_ext --global-option="-I/usr/local/Cellar/graphviz/2.50.0/include/" --global-option="-L/usr/local/Cellar/graphviz/2.50.0/lib/" pygraphviz
```
run emerge
**Expected behavior**
emerge starts
| closed | 2022-02-17T17:04:17Z | 2022-02-25T19:45:28Z | https://github.com/glato/emerge/issues/16 | [
"bug"
] | dj0nes | 2 |
lk-geimfari/mimesis | pandas | 1,408 | finance.company() presented inconsistently | # Bug report
<!--
Hi, thanks for submitting a bug. We appreciate that.
But, we will need some information about what's wrong to help you.
-->
## What's wrong
The names of companies are not presented consistently within and between locales. Within locales some company names include the suffix (e.g. Ltd., Corp, etc.) while others do not, and between locales:
- EN has capitalised company names
- EN_GB names have the stock ticker in brackets at the end
- EN_AU are in ALL CAPS
Most other locales seem to adhere to EN's style, but I have not checked them all.
## How it should be
Company name formatting should be consistent. I think omitting the suffix would be most useful.
## System information
System-idependent
| closed | 2023-09-07T09:17:32Z | 2023-09-13T13:35:25Z | https://github.com/lk-geimfari/mimesis/issues/1408 | [
"enhancement"
] | lunik1 | 1 |
autogluon/autogluon | computer-vision | 4,406 | Improve CPU training times for catboost | Related to https://github.com/catboost/catboost/issues/2722
Problem: Catboost takes 16x more time to train than a similar Xgboost model.
```
catboost: 1.2.5
xgboost: 2.0.3
autogluon: 1.1.1
Python: 3.10.14
OS: Windows 11 Pro (10.0.22635)
CPU: Intel(R) Core(TM) i7-1165G7
GPU: Integrated Graphics
RAM: 16 GB
```
Example with data:
```python
from autogluon.tabular import TabularDataset, TabularPredictor
import numpy as np
from sklearnex import patch_sklearn
patch_sklearn()
# data
label = 'signature'
data_url = 'https://raw.githubusercontent.com/mli/ag-docs/main/knot_theory/'
train_data = TabularDataset(f'{data_url}train.csv')
test_data = TabularDataset(f'{data_url}test.csv')
# train
np.random.seed(2024)
predictor = TabularPredictor(label=label, problem_type='multiclass', eval_metric='log_loss')
predictor.fit(train_data, included_model_types=['XGB', 'CAT'])
# report
metrics = ['model', 'score_test', 'score_val', 'eval_metric', 'pred_time_test', 'fit_time']
predictor.leaderboard(test_data)[metrics]
```
model | score_test | score_val | eval_metric | pred_time_test | fit_time
-- | -- | -- | -- | -- | --
WeightedEnsemble_L2 | -0.155262 | -0.138425 | log_loss | 0.649330 | 263.176814
CatBoost | -0.158654 | -0.150310 | log_loss | 0.237857 | 247.344303
XGBoost | -0.171801 | -0.144754 | log_loss | 0.398456 | 15.676711
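One workaround worth trying is capping CatBoost's work via AutoGluon's per-model hyperparameters; a sketch (the `fit` call is commented out, and the CatBoost parameter values are illustrative, not tuned):

```python
# Sketch: pass model-specific settings through AutoGluon.
# "CAT"/"XGB" are AutoGluon's model keys; the inner dicts are forwarded to the
# underlying libraries, so "iterations"/"thread_count" are CatBoost parameters.
hyperparameters = {
    "CAT": {"iterations": 500, "thread_count": 8},  # fewer boosting rounds, fixed threads
    "XGB": {},  # keep XGBoost defaults
}
# predictor.fit(train_data, hyperparameters=hyperparameters)  # not run here
```

This does not fix the underlying CPU inefficiency, but it should bound the fit time.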
| closed | 2024-08-18T03:19:42Z | 2024-08-20T04:36:45Z | https://github.com/autogluon/autogluon/issues/4406 | [
"enhancement",
"wontfix",
"module: tabular"
] | crossxwill | 1 |
airtai/faststream | asyncio | 1,384 | Bug: Publisher Direct Usage | **Describe the bug**
Can't use direct publishing
[Documentation](https://faststream.airt.ai/latest/getting-started/publishing/direct/)
**How to reproduce**
```python
from faststream import FastStream
from faststream.rabbit import RabbitBroker
broker = RabbitBroker("amqp://guest:guest@localhost:5672")
app = FastStream(broker)
await broker.connect()
publisher = broker.publisher("another-queue")
@broker.subscriber("another-queue")
async def handle_next(msg: str):
assert msg == "Hi!"
await publisher.publish("Hi")
```
Error:
```
AssertionError: Please, `connect()` the broker first
```
*When the publisher setup is added*:
```python
from faststream import FastStream
from faststream.rabbit import RabbitBroker
broker = RabbitBroker("amqp://guest:guest@localhost:5672")
app = FastStream(broker)
await broker.connect()
publisher = broker.publisher("another-queue")
@broker.subscriber("another-queue")
async def handle_next(msg: str):
assert msg == "Hi!"
publisher.setup(producer=broker._producer, app_id=broker.app_id, virtual_host=broker.virtual_host)
await publisher.publish("Hi")
```
Output:
```
DeliveredMessage(delivery=<Basic.Return object at 0xffff6e0be3f0>, header=<pamqp.header.ContentHeader object at 0xffff6fa5c590>, body=b'Hi', channel=<Channel: "1" at 0xffff6e2350e0>)
```
**Screenshots**
<img width="667" alt="image" src="https://github.com/airtai/faststream/assets/74822918/28da790c-befc-482e-a6ff-5bcb31f8f98f">
<img width="824" alt="image" src="https://github.com/airtai/faststream/assets/74822918/2c29b1b4-057a-4c93-a562-b92bbac587b8">
<img width="1238" alt="image" src="https://github.com/airtai/faststream/assets/74822918/1b9795ed-4348-43f3-8a23-62700b26c766">
| closed | 2024-04-18T15:41:31Z | 2024-04-18T16:55:06Z | https://github.com/airtai/faststream/issues/1384 | [
"bug"
] | taras0024 | 4 |
nikitastupin/clairvoyance | graphql | 118 | Parameter to limit the amount of fields sent | There is a limitation on the number of fields that can be sent at the root level in some implementations; the following screenshots showcase the problem:
Request

Response

This problem seems to happen only at the root level, and the tool doesn't seem to be aware of it, so it would be nice to have a parameter to limit the number of fields sent. | open | 2024-11-14T14:20:28Z | 2024-11-14T14:20:28Z | https://github.com/nikitastupin/clairvoyance/issues/118 | [] | rollinx1 | 0
matplotlib/mplfinance | matplotlib | 360 | How to improve the speed of saving pictures | I need to loop hundreds of times to save chart images. At present, saving runs at about one image per second. Is there any way to improve the saving speed?
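One approach that often helps is to create the figure once, use a non-interactive backend, and clear the axes between saves instead of closing the figure each time; a minimal plain-matplotlib sketch (not mplfinance-specific, and untimed against this exact workload):

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # non-interactive backend: no GUI work per save
import matplotlib.pyplot as plt

outdir = tempfile.mkdtemp()
fig, ax = plt.subplots()  # create the figure once, outside the loop
saved = []
for i in range(3):  # hundreds of iterations in practice
    ax.plot([0, 1, 2], [0, i, 0])  # draw this iteration's data
    path = os.path.join(outdir, f"chart_{i}.png")
    fig.savefig(path)
    saved.append(path)
    ax.cla()  # clear only the axes; keep the figure object alive
plt.close(fig)  # close once at the end
```

With mplfinance itself, `mpf.plot(..., returnfig=True)` returns the figure and axes, so a similar reuse pattern may be possible, but I have not verified the speedup.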
I heard that I can reuse the figure and then call plt.clf() instead of plt.close("all"); how do I do that? I can't find it in the official documentation. | closed | 2021-03-19T16:42:27Z | 2021-03-21T06:18:18Z | https://github.com/matplotlib/mplfinance/issues/360 | [
"question"
] | jaried | 7 |
pytest-dev/pytest-mock | pytest | 91 | UnicodeEncodeError in detailed introspection of assert_called_with | Comparing called arguments with `assert_called_with` fails with `UnicodeEncodeError` when one of the arguments (on either side) is a non-ASCII unicode string.
Python 2.7.13
Below are two test cases that look like they *should* work:
```python
def test_assert_called_with_unicode_wrong_argument(mocker):
stub = mocker.stub()
stub('l\xc3\xb6k'.decode('UTF-8'))
with pytest.raises(AssertionError):
stub.assert_called_with(u'lak')
def test_assert_called_with_unicode_correct_argument(mocker):
stub = mocker.stub()
stub('l\xc3\xb6k'.decode('UTF-8'))
stub.assert_called_with('l\xc3\xb6k'.decode('UTF-8'))
```
Result:
```
test_pytest_mock.py::test_assert_called_with_unicode_wrong_argument FAILED
test_pytest_mock.py::test_assert_called_with_unicode_correct_argument PASSED
==== FAILURES ====
______test_assert_called_with_unicode_wrong_argument _______
mocker = <pytest_mock.MockFixture object at 0x104868f90>
def test_assert_called_with_unicode_wrong_argument(mocker):
stub = mocker.stub()
stub('l\xc3\xb6k'.decode('UTF-8'))
with pytest.raises(AssertionError):
> stub.assert_called_with(u'lak')
E UnicodeEncodeError: 'ascii' codec can't encode character u'\xf6' in position 10: ordinal not in range(128)
test_pytest_mock.py:544: UnicodeEncodeError
```
Truncated traceback:
```
> pytest-mock/test_pytest_mock.py(544)test_assert_called_with_unicode_wrong_argument()
-> stub.assert_called_with(u'lak')
pytest-mock/pytest_mock.py(211)wrap_assert_called_with()
-> *args, **kwargs)
pytest-mock/pytest_mock.py(192)assert_wrapper()
-> msg += '\nArgs:\n' + str(e)
``` | closed | 2017-09-15T08:00:02Z | 2017-09-15T23:57:05Z | https://github.com/pytest-dev/pytest-mock/issues/91 | [
"bug"
] | AndreasHogstrom | 2 |
pallets/quart | asyncio | 130 | Weird error on page load | Hi,
So I wrote a simple discord bot and tried making a dashboard for it.
After loggin in it should go to the dashboard that lists all servers. (Note this worked when I ran everything local (127.0.0.1)
This is the error:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/aiohttp/connector.py", line 936, in _wrap_create_connection
return await self._loop.create_connection(*args, **kwargs) # type: ignore # noqa
File "/usr/lib/python3.8/asyncio/base_events.py", line 1025, in create_connection
raise exceptions[0]
File "/usr/lib/python3.8/asyncio/base_events.py", line 1010, in create_connection
sock = await self._connect_sock(
File "/usr/lib/python3.8/asyncio/base_events.py", line 924, in _connect_sock
await self.sock_connect(sock, address)
File "/usr/lib/python3.8/asyncio/selector_events.py", line 494, in sock_connect
return await fut
File "/usr/lib/python3.8/asyncio/selector_events.py", line 499, in _sock_connect
sock.connect(address)
OSError: [Errno 99] Cannot assign requested address
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/quart/app.py", line 1467, in handle_request
return await self.full_dispatch_request(request_context)
File "/usr/local/lib/python3.8/dist-packages/quart/app.py", line 1492, in full_dispatch_request
result = await self.handle_user_exception(error)
File "/usr/local/lib/python3.8/dist-packages/quart/app.py", line 968, in handle_user_exception
raise error
File "/usr/local/lib/python3.8/dist-packages/quart/app.py", line 1490, in full_dispatch_request
result = await self.dispatch_request(request_context)
File "/usr/local/lib/python3.8/dist-packages/quart/app.py", line 1536, in dispatch_request
return await self.ensure_async(handler)(**request_.view_args)
File "/root/Isha/dashboard.py", line 39, in dashboard
guild_count = await ipc_client.request("get_guild_count")
File "/usr/local/lib/python3.8/dist-packages/discord/ext/ipc/client.py", line 99, in request
await self.init_sock()
File "/usr/local/lib/python3.8/dist-packages/discord/ext/ipc/client.py", line 63, in init_sock
self.multicast = await self.session.ws_connect(self.url, autoping=False)
File "/usr/local/lib/python3.8/dist-packages/aiohttp/client.py", line 721, in _ws_connect
resp = await self.request(method, url,
File "/usr/local/lib/python3.8/dist-packages/aiohttp/client.py", line 480, in _request
conn = await self._connector.connect(
File "/usr/local/lib/python3.8/dist-packages/aiohttp/connector.py", line 523, in connect
proto = await self._create_connection(req, traces, timeout)
File "/usr/local/lib/python3.8/dist-packages/aiohttp/connector.py", line 858, in _create_connection
_, proto = await self._create_direct_connection(
File "/usr/local/lib/python3.8/dist-packages/aiohttp/connector.py", line 1004, in _create_direct_connection
raise last_exc
File "/usr/local/lib/python3.8/dist-packages/aiohttp/connector.py", line 980, in _create_direct_connection
transp, proto = await self._wrap_create_connection(
File "/usr/local/lib/python3.8/dist-packages/aiohttp/connector.py", line 943, in _wrap_create_connection
raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host localhost:20000 ssl:default [Cannot assign requested address]
Executing <Task pending name='Task-20' coro=<ASGIHTTPConnection.handle_request() running at /usr/local/lib/python3.8/dist-packages/quart/asgi.py:102> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f1bde6afbb0>()] created at /usr/lib/python3.8/asyncio/base_events.py:422> cb=[_wait.<locals>._on_completion() at /usr/lib/python3.8/asyncio/tasks.py:507] created at /usr/local/lib/python3.8/dist-packages/quart/asgi.py:46> took 0.111 seconds
[2021-07-05 20:25:28,238] [IPADDRESSTATIREMOVED]:64462 GET /dashboard 1.1 500 158977 132152
``` | closed | 2021-07-05T18:28:03Z | 2022-07-05T01:58:55Z | https://github.com/pallets/quart/issues/130 | [] | rem9000 | 2 |
python-restx/flask-restx | flask | 166 | [From flask-restplus #589] Stop using failing ujson as default serializer | Flask-restx uses `ujson` for serialization by default, falling back to normal `json` if unavailable.
As has been mentioned before `ujson` is quite flawed, with some people reporting rounding errors.
Additionally, `ujson.dumps` does not take the `default` argument, which allows one to pass any custom method as the serializer. Since my project uses some custom serialization, I have been forced to fork flask-restx and just remove the ujson import line...
You could add an option to choose which lib to use, or completely remove ujson from there. I think it is bad practice to have to maintain the code for two lbraries with different APIs.
An alternative could be to make ujson an optional dependency and not list it in the package dependencies, but keep the current imports as they are. As a result, anyone wanting to use ujson can install it and it will take over transparently.
Original issue: `https://github.com/noirbizarre/flask-restplus/issues/589`
I would gladly make the PR if you are happy with one of these solutions | open | 2020-06-30T12:58:32Z | 2024-09-04T03:33:07Z | https://github.com/python-restx/flask-restx/issues/166 | [
"bug",
"good first issue"
] | Anogio | 3 |
Nemo2011/bilibili-api | api | 100 | [Question] How to handle frequent refresh of SESSDATA and bili_jct in cookies | **Python version:** 3.8.5
**Module version:** 3.1.3
**Runtime environment:** Linux
---
Whether with `bilibili_api.utils.Verify` in 3.1.3 or `bilibili_api.utils.Credential` in the current latest version, user authentication requires the SESSDATA and bili_jct values from the cookies. Although this differs from the AK/SK authentication of most SDKs, for the past two years or so our service has always succeeded in uploading videos automatically with bilibili_api, thanks to SESSDATA and bili_jct staying unchanged for long stretches of time.
Recently, however, automatic video uploads keep failing because SESSDATA and bili_jct change frequently.
So I would like to understand the refresh mechanism of SESSDATA and bili_jct, how to avoid upload failures caused by their refresh, and whether there is any other way to achieve persistent user authentication.
Hoping to hear back soon. Thanks! | closed | 2022-11-04T07:35:32Z | 2023-01-30T00:59:14Z | https://github.com/Nemo2011/bilibili-api/issues/100 | [] | nicliuqi | 10
gunthercox/ChatterBot | machine-learning | 2,370 | Bot | Hello
| closed | 2024-06-19T14:08:38Z | 2024-08-20T11:28:37Z | https://github.com/gunthercox/ChatterBot/issues/2370 | [] | wr-tiger | 0 |
streamlit/streamlit | python | 10,521 | Slow download of CSV file when using the built-in download-as-CSV function for tables displayed as dataframes in the Edge browser | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Issue is only in MS Edge Browser:
When pressing "Download as CSV" on a table, the download is really slow. I am running on a ThinkPad P16 Gen 1:
15 columns x 20 k rows takes 9-11 sec
15 columns x 50 k rows takes 19-22 sec
When I do it with my own function using `to_csv` from the pandas library, it takes less than 1 second for both 20k and 50k rows.
**The issue only occurs in the Edge browser.**
Brave and Firefox work just fine with the built-in download.
### Reproducible Code Example
```Python
import streamlit as st
import pandas as pd
import numpy as np
# tested in both 1.38 and 1.42.2
# Name: streamlit
# Version: 1.39.0 / 1.42.2
# Define number of rows and columns
num_rows = 20000 # 20 k rows takes 9-11 sec to download via inbuild download as csv
# num_rows = 50000 # 50 k rows takes 19-22 sec to download via inbuild download as csv
num_cols = 15
# Generate random data
data = {
f"Col_{i+1}": np.random.choice(['A', 'B', 'C', 'D', 1, 2, 3, 4, 5, 10.5, 20.8, 30.1], num_rows)
for i in range(num_cols)
}
data = pd.DataFrame(data)
st.write(data) # the same issue when using st.dataframe(data)
# the below method takes less a secound for both 20 k and 50 k rows
# to_csv() is from the pandas libary which also are used in the streamlit package.
csv = data.to_csv(index=False).encode('utf-8')
# Download button
st.download_button(
label="Download as CSV OWN",
data=csv,
file_name='data.csv',
mime='text/csv',
)
```
### Steps To Reproduce
Hover over the table, click "Download as CSV", and watch your download folder: the file comes down slowly, at only about 50-100 KB per second.
Then try the custom-made button "Download as CSV OWN": the file downloads instantly.
### Expected Behavior
I would expect the built-in download-as-CSV function to be as fast as the pandas `to_csv()` function.
I tried it on a ThinkPad T14 Gen 3, a P16 Gen 1, and on a Linux server; all show the same issue.

### Current Behavior
No error message, but it is just super slow.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.39.0 and 1.42.2
- Python version: 3.12.1
- Operating System: Windows 11 / windows 10, Linux server
- Browser: Edge for business: Version 133.0.3065.82 (Official build) (64-bit)
### Additional Information
_No response_ | open | 2025-02-26T08:43:56Z | 2025-03-03T11:49:27Z | https://github.com/streamlit/streamlit/issues/10521 | [
"type:bug",
"feature:st.dataframe",
"status:confirmed",
"priority:P3",
"feature:st.download_button",
"feature:st.data_editor"
] | LazerLars | 2 |
holoviz/panel | matplotlib | 6,991 | Make it possible and easy to use the pyscript editor with Panel | I hope that one day it will be possible and easy to use the pyscript editor with Panel for embedding on web pages etc.
Currently I cannot get it working.
## Reproducible example
**mini-coi.js**
```javascript
/*! coi-serviceworker v0.1.7 - Guido Zuidhof and contributors, licensed under MIT */
/*! mini-coi - Andrea Giammarchi and contributors, licensed under MIT */
(({ document: d, navigator: { serviceWorker: s } }) => {
if (d) {
const { currentScript: c } = d;
s.register(c.src, { scope: c.getAttribute('scope') || '.' }).then(r => {
r.addEventListener('updatefound', () => location.reload());
if (r.active && !s.controller) location.reload();
});
}
else {
addEventListener('install', () => skipWaiting());
addEventListener('activate', e => e.waitUntil(clients.claim()));
addEventListener('fetch', e => {
const { request: r } = e;
if (r.cache === 'only-if-cached' && r.mode !== 'same-origin') return;
e.respondWith(fetch(r).then(r => {
const { body, status, statusText } = r;
if (!status || status > 399) return r;
const h = new Headers(r.headers);
h.set('Cross-Origin-Opener-Policy', 'same-origin');
h.set('Cross-Origin-Embedder-Policy', 'require-corp');
h.set('Cross-Origin-Resource-Policy', 'cross-origin');
return new Response(body, { status, statusText, headers: h });
}));
});
}
})(self);
```
**config.toml**
```toml
packages = [
"https://cdn.holoviz.org/panel/1.4.4/dist/wheels/bokeh-3.4.1-py3-none-any.whl",
"https://cdn.holoviz.org/panel/1.4.4/dist/wheels/panel-1.4.4-py3-none-any.whl"
]
```
**script.html**
```html
<!DOCTYPE html>
<html>
<head>
<script src="./mini-coi.js" scope="./"></script>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<title>🦜 Panel Editor Template</title>
<link rel="stylesheet" href="https://pyscript.net/releases/2024.7.1/core.css">
<script type="module" src="https://pyscript.net/releases/2024.7.1/core.js"></script>
</head>
<body>
<script type="py-editor" config="./config.toml">
import panel as pn
pn.extension(sizing_mode="stretch_width")
slider = pn.widgets.FloatSlider(start=0, end=10, name='Amplitude')
def callback(new):
return f'Amplitude is: {new}'
pn.Row(slider, pn.bind(callback, slider)).servable(target="out");
</script>
<div id="out"></div>
</body>
</html>
```
At first I don't see any output in the web page or errors in the console.

After click the run button multiple times I see

## Additional Context
I've tried without `target="out"`. I've tried without `.servable(target="out")`. But I cannot get it to display the Panel app.
I would really like the version without `.servable(target="out")` to work as that would be the easiest code to explain to users. | open | 2024-07-16T06:48:19Z | 2024-07-17T11:07:34Z | https://github.com/holoviz/panel/issues/6991 | [
"TRIAGE"
] | MarcSkovMadsen | 4 |
gradio-app/gradio | data-visualization | 10,583 | Unable to correctly handle file download with multiple deployments | ### Describe the bug
I notice a weird problem while using the download functionality inside gr.File() when I deploy my app in multiple pods.
Consider the following case and two pods:
<img width="1887" alt="Image" src="https://github.com/user-attachments/assets/d001cf2d-1c20-493b-aab7-5e317b4c099d" />
- I have a function that processes user requests which generates an Excel file at the end, the process is handled by pod A, so the file is stored locally inside pod A.
- Now the user wants to click download, but sometimes this download request is handled by pod B. Therefore, the file download request will fail since the file is never generated in pod B.
- If the user clicks the download button **multiple times**, he will eventually download the file successfully -- when the download request is handled by pod A.
Although gr.File() can take a URL as input/output, for corporate scenarios, we can't upload the file to somewhere public.
What I can think of in the first place is that, when I finish generating the file, I upload it to a private GCS bucket, and when the user clicks download, I download it from GCS so I can ensure every pod has the file ready.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
5.16.0
```
### Severity
Blocking usage of gradio | open | 2025-02-13T06:35:06Z | 2025-02-28T17:53:24Z | https://github.com/gradio-app/gradio/issues/10583 | [
"bug",
"cloud"
] | jamie0725 | 1 |
vaexio/vaex | data-science | 1,368 | [FEATURE-REQUEST]:Porting str.extract functionality of pandas | Hi Team,
There is a `str.extract` function in pandas; its main use is to extract relevant groups of text from strings.
The same is not available in vaex. Is it feasible to port it?
Please find below (PFB) the functionality:

| open | 2021-05-24T07:32:17Z | 2021-05-24T10:59:10Z | https://github.com/vaexio/vaex/issues/1368 | [] | datascientistpur | 2 |
saulpw/visidata | pandas | 1,576 | [json] saving a file discards columns that contain only null values | **Small description**
When I save a JSON file that has a column containing only null values, the column is lost.
**Expected result**
When I load this JSON file and save it, I expect the save process to preserve nulls_column.
```
[
{"index": 1, "nulls_column": null}
]
```
**Actual result**
The saved file is instead:
```
[
{"index": 1}
]
```
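For comparison, a plain round-trip through the stdlib `json` module preserves the null-valued key, which is the behavior I expect here:

```python
import json

rows = [{"index": 1, "nulls_column": None}]
roundtripped = json.loads(json.dumps(rows))
# the key survives: None (null) values are not dropped by json itself
assert roundtripped == [{"index": 1, "nulls_column": None}]
```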
**Steps to reproduce with sample data and a .vd**
Create a file "input.json" containing:
```
[
{"index": 1, "nulls_column": null}
]
```
In the same directory, create a file "cmdlog.vdj" containing:
```
#!vd -p
{"longname": "open-file", "input": "input.json", "keystrokes": "o"}
{"sheet": "input", "col": "", "row": "", "longname": "save-sheet", "input": "output.json", "keystrokes": "Ctrl+S", "comment": "save current sheet to filename in format determined by extension (default .tsv)"}
{"col": "", "row": "", "longname": "open-file", "input": "output.json", "keystrokes": "o", "comment": "Open file or URL"}
```
Run ```vd -p cmdlog.vdj```
**Additional context**
Version:
VisiData v2.10.2, python 3.8.10, Ubuntu 20.04.5
The loss of the columns is caused by loaders/json.py, line 81:
https://github.com/saulpw/visidata/blob/924563a0b7eeede80834dc02d8c1f237fe1949c1/visidata/loaders/json.py#L81 | closed | 2022-10-27T22:51:02Z | 2023-10-18T23:31:59Z | https://github.com/saulpw/visidata/issues/1576 | [
"bug",
"fixed"
] | midichef | 2 |
PokeAPI/pokeapi | api | 308 | Requests to the V2 API time out sometimes | **Issue**: the server returns a 504 Gateway time-out error seemingly at random times throughout the day. This has happened more than once in one day. See screenshot.
**Description**: I've been working on an app recently that uses the V2 of the PokeAPI, and I'm not making too many requests per day while testing. Sometimes, I haven't made any requests in a while, and I get this 504. Have you been experiencing any issues with the servers recently?
<img width="1111" alt="pokeapi-504" src="https://user-images.githubusercontent.com/4595734/31727109-cb8b50ba-b3f6-11e7-8a5b-4cb56a7eb976.png">
| closed | 2017-10-18T15:25:33Z | 2018-09-08T04:23:08Z | https://github.com/PokeAPI/pokeapi/issues/308 | [] | uicowboy | 5 |
keras-team/keras | machine-learning | 20,356 | Request for developer guide: multi-node TPU distributed training with JAX | ### Multi-node TPU Training with JAX
The [multi-GPU JAX training guide](https://keras.io/guides/distributed_training_with_jax/) is helpful, but it's unclear how to extend this to multi-node TPU setups.
We are currently using `tpu v4-256` with `tf.distribute.cluster_resolver.TPUClusterResolver` and `tf.distribute.TPUStrategy` for data-parallel training. We're transitioning to JAX and need the equivalent approach.
Specifically:
1. How to configure TPU runtime for jax.
2. How to handle cluster resolution for TPUs (similar to `TPUClusterResolver`).
3. Examples for multi-node TPU data-parallel training with jax.
Detailed examples would be helpful. Thank you.
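For reference, here is the rough shape we imagine such a guide covering. This is an unverified sketch: `jax.distributed.initialize()` and the `keras.distribution.DataParallel` API are our assumptions about the intended approach, and the calls are commented out because they require a real TPU cluster:

```python
import os

os.environ["KERAS_BACKEND"] = "jax"  # must be set before importing keras

# Run on every TPU worker (hypothetical; not validated on v4-256):
# import jax
# jax.distributed.initialize()  # does TPU pods' cluster discovery come from
#                               # TPU metadata, with no TPUClusterResolver
#                               # equivalent needed?
# import keras
# keras.distribution.set_distribution(
#     keras.distribution.DataParallel(devices=jax.devices("tpu"))
# )
```

Confirming (or correcting) this skeleton in the guide would already answer most of our questions.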
| open | 2024-10-15T13:04:50Z | 2025-01-27T19:07:04Z | https://github.com/keras-team/keras/issues/20356 | [
"type:support",
"stat:awaiting keras-eng"
] | rivershah | 5 |
replicate/cog | tensorflow | 1,489 | how to import docker images? | I used cog to push to my private registry and pulled from another machine. Can I import the Docker image and avoid building it again?
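In case it helps, standard Docker can move an image between machines without rebuilding via `docker save`/`docker load`; a sketch (the image name below is a placeholder, assuming the image was already pulled once):

```shell
# On the machine that already has the image:
docker save -o cog-image.tar registry.example.com/my-model:latest

# Copy cog-image.tar over (USB drive, rsync, etc.), then on the other machine:
docker load -i cog-image.tar
```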
My network here is terribly bad (China); I tried to build for almost 48 hours but failed. | open | 2024-01-17T10:13:33Z | 2024-01-17T10:13:33Z | https://github.com/replicate/cog/issues/1489 | [] | deerleo | 0
roboflow/supervision | tensorflow | 1,787 | how to use yolov11s-seg supervision onnx runtime? | Dear @onuralpszr, I saw a similar case in #1626 and tried some customization for my own segmentation use case, but it doesn't seem to be working properly.
Here is how I am exporting my model with Ultralytics:
```python
ft_loaded_best_model.export(
format="onnx",
nms=True,
data="/content/disease__instance_segmented/data.yaml",
) # creates 'best.onnx'
```
which prints the following to the console:
```console
Ultralytics 8.3.75 🚀 Python-3.11.11 torch-2.5.1+cu124 CPU (Intel Xeon 2.00GHz)
YOLO11s-seg summary (fused): 265 layers, 10,068,364 parameters, 0 gradients, 35.3 GFLOPs
PyTorch: starting from '/content/drive/MyDrive/ML/DENTAL_THESIS/fine_tuned/segment/train/weights/best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) ((1, 300, 38), (1, 32, 160, 160)) (19.6 MB)
ONNX: starting export with onnx 1.17.0 opset 19...
ONNX: slimming with onnxslim 0.1.48...
ONNX: export success ✅ 4.2s, saved as '/content/drive/MyDrive/ML/DENTAL_THESIS/fine_tuned/segment/train/weights/best.onnx' (38.7 MB)
Export complete (5.5s)
Results saved to /content/drive/MyDrive/ML/DENTAL_THESIS/fine_tuned/segment/train/weights
Predict: yolo predict task=segment model=/content/drive/MyDrive/ML/DENTAL_THESIS/fine_tuned/segment/train/weights/best.onnx imgsz=640
Validate: yolo val task=segment model=/content/drive/MyDrive/ML/DENTAL_THESIS/fine_tuned/segment/train/weights/best.onnx imgsz=640 data=/content/dental_disease__instance_segmented-7/data.yaml
Visualize: https://netron.app/
/content/drive/MyDrive/ML/DENTAL_THESIS/fine_tuned/segment/train/weights/best.onnx
```
I have 4 classes in my model
Since I applied NMS, my output0 is already transposed, I think:
the first 4 indices are the bbox, the 5th is the probability, the 6th is the class id, the 7th onward (32 values) are the mask coefficients, and the 300 means the model will detect up to 300 results. Please educate me if my interpretation is wrong.
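That interpretation can be sanity-checked by slicing a dummy array with the same layout (0-based indices as in my code below: 0-3 bbox, 4 confidence, 5 class id, 6-37 mask coefficients):

```python
import numpy as np

# Dummy (300, 38) detections tensor in the NMS-export layout described above
preds = np.zeros((300, 38), dtype=np.float32)
preds[0, :4] = [100.0, 120.0, 50.0, 80.0]  # xywh box
preds[0, 4] = 0.9                          # confidence
preds[0, 5] = 2.0                          # class id
preds[0, 6:] = 0.5                         # 32 mask coefficients

boxes = preds[:, :4]
scores = preds[:, 4]
class_ids = preds[:, 5].astype(int)
mask_coeffs = preds[:, 6:]
# mask_coeffs has shape (300, 32), matching the 32 mask prototypes
```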
Here is my implementation:
```python
import cv2
import numpy as np
import onnxruntime


def xywh2xyxy(x):
y = np.copy(x)
y[..., 0] = x[..., 0] - x[..., 2] / 2
y[..., 1] = x[..., 1] - x[..., 3] / 2
y[..., 2] = x[..., 0] + x[..., 2] / 2
y[..., 3] = x[..., 1] + x[..., 3] / 2
return y
class YOLOv11:
def __init__(self, path, conf_thres=0.7, iou_thres=0.5):
self.conf_threshold = conf_thres
self.iou_threshold = iou_thres
# Initialize the ONNX model
self.initialize_model(path)
def __call__(self, image):
return self.detect_objects(image)
def initialize_model(self, path):
self.session = onnxruntime.InferenceSession(
path, providers=onnxruntime.get_available_providers()
)
self.get_input_details()
self.get_output_details()
def detect_objects(self, image):
input_tensor = self.prepare_input(image)
outputs = self.inference(input_tensor)
self.boxes, self.scores, self.class_ids, self.masks = self.process_output(outputs)
return self.boxes, self.scores, self.class_ids, self.masks
def prepare_input(self, image):
self.img_height, self.img_width = image.shape[:2]
input_img = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
input_img = cv2.resize(input_img, (self.input_width, self.input_height))
input_img = input_img / 255.0
input_img = input_img.transpose(2, 0, 1)
input_tensor = input_img[np.newaxis, :, :, :].astype(np.float32)
return input_tensor
def inference(self, input_tensor):
outputs = self.session.run(self.output_names, {self.input_names[0]: input_tensor})
return outputs
def process_output(self, outputs):
"""
Process model outputs:
- outputs[0]: shape (1, 300, 38)
* 0-3: bounding box (xywh)
* 4: confidence score
* 5: class id
* 6-37: segmentation coefficients (32 values)
        - outputs[1]: shape (1, 32, 160, 160) mask prototypes
"""
# Remove batch dimension from detections
predictions = np.squeeze(outputs[0]) # shape (300, 38)
        mask_protos = outputs[1]  # shape (1, 32, 160, 160)
# Filter predictions using the confidence score (index 4)
conf_scores = predictions[:, 4]
valid = conf_scores > self.conf_threshold
predictions = predictions[valid]
scores = conf_scores[valid]
if len(scores) == 0:
return [], [], [], []
# Extract bounding boxes (indices 0-3)
boxes = self.extract_boxes(predictions)
# Extract class ids (index 5) and cast them to int
class_ids = predictions[:, 5].astype(np.int32)
# Extract segmentation masks using segmentation coefficients (indices 6-37)
masks = self.extract_masks(predictions, mask_protos)
return boxes, scores, class_ids, masks
def extract_boxes(self, predictions):
boxes = predictions[:, :4] # xywh format
boxes = self.rescale_boxes(boxes)
boxes = xywh2xyxy(boxes)
return boxes
def rescale_boxes(self, boxes):
# Scale boxes from network input dimensions to original image dimensions
input_shape = np.array([self.input_width, self.input_height, self.input_width, self.input_height])
boxes = np.divide(boxes, input_shape, dtype=np.float32)
boxes *= np.array([self.img_width, self.img_height, self.img_width, self.img_height])
return boxes
def extract_masks(self, predictions, mask_protos):
"""
Compute segmentation masks:
- predictions: (num_detections, 38) with segmentation coefficients at indices 6-37
        - mask_protos: (1, 32, 160, 160); its 32 channels match the 32 coefficients.
"""
# Get segmentation coefficients from predictions (32 coefficients)
seg_coeffs = predictions[:, 6:38] # shape: (num_detections, 32)
# Use the first 32 channels from mask prototypes
mask_protos = mask_protos[0, :32, :, :] # shape: (32, 160, 160)
# Compute per-detection masks as a weighted sum over mask prototypes
masks = np.einsum('nc,chw->nhw', seg_coeffs, mask_protos)
# Apply sigmoid to get values between 0 and 1
masks = 1 / (1 + np.exp(-masks))
# Threshold masks to produce binary masks
masks = masks > 0.5
# Resize each mask to the original image dimensions
final_masks = []
for mask in masks:
mask_uint8 = (mask.astype(np.uint8)) * 255
mask_resized = cv2.resize(mask_uint8, (self.img_width, self.img_height), interpolation=cv2.INTER_NEAREST)
final_masks.append(mask_resized)
final_masks = np.array(final_masks)
return final_masks
def get_input_details(self):
model_inputs = self.session.get_inputs()
self.input_names = [inp.name for inp in model_inputs]
self.input_shape = model_inputs[0].shape
self.input_height = self.input_shape[2]
self.input_width = self.input_shape[3]
def get_output_details(self):
model_outputs = self.session.get_outputs()
self.output_names = [out.name for out in model_outputs]
``` | closed | 2025-02-15T14:26:55Z | 2025-02-17T17:32:33Z | https://github.com/roboflow/supervision/issues/1787 | [
"question"
] | pranta-barua007 | 23 |
onnx/onnx | tensorflow | 5,978 | Need to know the setuptools version for using onnx in developer mode | # Ask a Question
I am trying to install onnx using `pip install -e .`. But I get the following error
> Traceback (most recent call last):
> File "/project/setup.py", line 321, in <module>
> setuptools.setup(
> File "/usr/local/lib/python3.10/dist-packages/setuptools/__init__.py", line 153, in setup
> return distutils.core.setup(**attrs)
> File "/usr/lib/python3.10/distutils/core.py", line 148, in setup
> dist.run_commands()
> File "/usr/lib/python3.10/distutils/dist.py", line 966, in run_commands
> self.run_command(cmd)
> File "/usr/lib/python3.10/distutils/dist.py", line 985, in run_command
> cmd_obj.run()
> File "/project/setup.py", line 253, in run
> return super().run()
> File "/usr/local/lib/python3.10/dist-packages/setuptools/command/develop.py", line 34, in run
> self.install_for_development()
> File "/usr/local/lib/python3.10/dist-packages/setuptools/command/develop.py", line 114, in install_for_development
> self.run_command('build_ext')
> File "/usr/lib/python3.10/distutils/cmd.py", line 313, in run_command
> self.distribution.run_command(command)
> File "/usr/lib/python3.10/distutils/dist.py", line 985, in run_command
> cmd_obj.run()
> File "/project/setup.py", line 259, in run
> return super().run()
> File "/usr/local/lib/python3.10/dist-packages/setuptools/command/build_ext.py", line 79, in run
> _build_ext.run(self)
> File "/usr/lib/python3.10/distutils/command/build_ext.py", line 340, in run
> self.build_extensions()
> File "/project/setup.py", line 288, in build_extensions
> if self.editable_mode:
> File "/usr/lib/python3.10/distutils/cmd.py", line 103, in __getattr__
> raise AttributeError(attr)
> AttributeError: editable_mode
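For context — my hypothesis, worth verifying against the setuptools changelog: the `editable_mode` attribute was only added to setuptools' build commands around version 64 (the PEP 660 editable-install rework), so older versions hit exactly this `AttributeError`. A tiny version gate illustrating the idea:

```python
def supports_editable_mode(setuptools_version: str) -> bool:
    # Assumption: `editable_mode` appeared on build commands in setuptools 64.0 (PEP 660)
    major = int(setuptools_version.split(".")[0])
    return major >= 64

print(supports_editable_mode("59.5.0"))   # False -> upgrade setuptools before `pip install -e .`
print(supports_editable_mode("68.2.2"))   # True
```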
The problem is probably the setuptools version.
My current version of setuptools is 59.5.0. Can anyone help me figure out the last working version of setuptools? Also, requirements.txt does not have hardcoded versions. | closed | 2024-02-29T16:12:38Z | 2024-03-02T13:45:06Z | https://github.com/onnx/onnx/issues/5978 | [
"question"
] | Abhishek-TyRnT | 2 |
neuml/txtai | nlp | 747 | Fix issue with setting quantize=False in HFTrainer pipeline | Setting quantize to False is causing an Exception. | closed | 2024-07-12T16:33:42Z | 2024-07-15T00:54:49Z | https://github.com/neuml/txtai/issues/747 | [
"bug"
] | davidmezzetti | 0 |
plotly/dash-bio | dash | 429 | Can someone help me on how to implement the whole Circos app please. | **Describe the bug**
I am having a lot of trouble implementing the whole Circos app on Windows.
I can't install dash-bio-utils.
I am having an issue installing parmed.
**To Reproduce**
Steps to reproduce the behavior:
- pip3 install dash-bio-utils
**Python version: [e.g., 3.7.2]**
**Python environment (all installed packages in your current environment):**
-
| closed | 2019-10-19T20:31:40Z | 2022-09-27T09:44:28Z | https://github.com/plotly/dash-bio/issues/429 | [] | davilen | 3 |
django-import-export/django-import-export | django | 1,452 | how to ONLY export what I want? | Thanks for this app.
I'm using it to export data from the Django admin.
But I face a problem: how do I export ONLY the data after filtering?
Say there are 30,000 records in the DB table; after filtering, I get 200 records. How do I export ONLY these 200 records?
I do not want to export them all.
Thank you.
| closed | 2022-06-23T10:32:31Z | 2023-04-12T12:58:08Z | https://github.com/django-import-export/django-import-export/issues/1452 | [
"question"
] | monalisir | 7 |
plotly/dash | data-visualization | 2,227 | [BUG] Dash Background Callbacks do not work if you import SQLAlchemy | Dash dependencies:
```
dash 2.6.1
dash-bootstrap-components 1.2.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- OS: macOS Monterey
- Browser: Chrome
- Version: 105.0.5195.102
If I take the first Background Callbacks example at https://dash.plotly.com/background-callbacks and add `from sqlalchemy import create_engine` to the imports I get the following stack trace when I click the `Run Job!` button for the second time:
```
Dash is running on http://127.0.0.1:8050/
* Serving Flask app 'main'
* Debug mode: on
Process Process-3:
Traceback (most recent call last):
File "/Users/one/.pyenv/versions/live/lib/python3.9/site-packages/multiprocess/process.py", line 315, in _bootstrap
self.run()
File "/Users/one/.pyenv/versions/live/lib/python3.9/site-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/Users/one/.pyenv/versions/live/lib/python3.9/site-packages/dash/long_callback/managers/diskcache_manager.py", line 179, in job_fn
cache.set(result_key, user_callback_output)
File "/Users/one/.pyenv/versions/live/lib/python3.9/site-packages/diskcache/core.py", line 796, in set
with self._transact(retry, filename) as (sql, cleanup):
File "/Users/one/.pyenv/versions/3.9.9/lib/python3.9/contextlib.py", line 119, in __enter__
return next(self.gen)
File "/Users/one/.pyenv/versions/live/lib/python3.9/site-packages/diskcache/core.py", line 710, in _transact
sql = self._sql
File "/Users/one/.pyenv/versions/live/lib/python3.9/site-packages/diskcache/core.py", line 648, in _sql
return self._con.execute
File "/Users/one/.pyenv/versions/live/lib/python3.9/site-packages/diskcache/core.py", line 623, in _con
con = self._local.con = sqlite3.connect(
sqlite3.OperationalError: disk I/O error
```
I tried this using SQLAlchemy 1.4.41.
The Background Callbacks example is as follows with the SQLAlchemy import added:
```
# This stops the background callback from working.
from sqlalchemy import create_engine
import time
import os
import dash
from dash import DiskcacheManager, CeleryManager, Input, Output, html
if 'REDIS_URL' in os.environ:
# Use Redis & Celery if REDIS_URL set as an env variable
from celery import Celery
celery_app = Celery(__name__, broker=os.environ['REDIS_URL'], backend=os.environ['REDIS_URL'])
background_callback_manager = CeleryManager(celery_app)
else:
# Diskcache for non-production apps when developing locally
import diskcache
cache = diskcache.Cache("./cache")
background_callback_manager = DiskcacheManager(cache)
app = dash.Dash(__name__)
app.layout = html.Div(
[
html.Div([html.P(id="paragraph_id", children=["Button not clicked"])]),
html.Button(id="button_id", children="Run Job!"),
]
)
@dash.callback(
output=Output("paragraph_id", "children"),
inputs=Input("button_id", "n_clicks"),
background=True,
manager=background_callback_manager,
)
def update_clicks(n_clicks):
time.sleep(2.0)
return [f"Clicked {n_clicks} times"]
if __name__ == "__main__":
app.run_server(debug=True)
```
See https://stackoverflow.com/questions/73696237/dash-background-callbacks-not-working-with-sqlalchemy for the related StackOverflow question. | closed | 2022-09-13T07:47:48Z | 2024-07-24T15:13:58Z | https://github.com/plotly/dash/issues/2227 | [] | jongillham | 1 |
CPJKU/madmom | numpy | 73 | unify negative indices behaviour of FramedSignal | The behaviour of negative indices for the `FramedSignal` is not consistent:
- if a single frame at position -1 is requested, the frame left of the first one is returned (as documented),
- if a slice [-1:] is requested, the last frame is returned.
The idea of returning the frame left of the first one was to be able to calculate a correct first order difference, but it is somehow not really what people expect.
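A toy illustration (mine, not madmom's actual implementation) of the two behaviours described above:

```python
class ToyFramedSignal:
    """Mimics the reported inconsistency: a single negative index returns the
    frame left of the first one, while a negative slice is numpy-style."""
    def __init__(self, num_frames):
        self.num_frames = num_frames
    def __getitem__(self, item):
        if isinstance(item, slice):
            return list(range(self.num_frames))[item]  # [-1:] -> [last frame]
        return item if item < 0 else item % self.num_frames  # [-1] -> frame index -1

fs = ToyFramedSignal(5)
print(fs[-1])   # -1   (frame left of the first one, as documented)
print(fs[-1:])  # [4]  (the last frame)
```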
| closed | 2016-01-29T11:01:58Z | 2016-02-18T12:05:10Z | https://github.com/CPJKU/madmom/issues/73 | [] | superbock | 0 |
miguelgrinberg/Flask-Migrate | flask | 9 | multidb support | Alembic and Flask-SQLAlchemy have support for multiple databases. Flask-Migrate should work with them.
| closed | 2013-10-09T17:22:52Z | 2019-06-13T22:50:41Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/9 | [] | iurisilvio | 9 |
pyjanitor-devs/pyjanitor | pandas | 750 | [DOC] Update Pull Request Template with `netlify` option | # Brief Description of Fix
<!-- Please describe the fix in terms of a "before" and "after". In other words, what's not so good about the current docs
page, and what you would like to see it become.
Example starter wording is provided. -->
Currently, the recommended approach for reviewing documentation updates is to build the docs locally. This may still be the recommended approach, but a note should be included that lets developers know that they can preview the docs from the PR checks as well (with the new `netlify` inclusion).
I would like to propose a change such that the docs now include examples of how to use the `netlify` doc previews from a PR.
# Relevant Context
<!-- Please put here, in bullet points, links to the relevant docs page. A few starting template points are available
to get you started. -->
- [Link to exact file to be edited](https://github.com/ericmjl/pyjanitor/blob/dev/.github/pull_request_template.md)
- Maybe we want to add a line to the `CONTRIBUTING.rst` files as well? 🤷♂️
| open | 2020-09-18T00:52:33Z | 2020-10-19T00:41:45Z | https://github.com/pyjanitor-devs/pyjanitor/issues/750 | [
"good first issue",
"docfix",
"available for hacking",
"hacktoberfest"
] | loganthomas | 5 |
modAL-python/modAL | scikit-learn | 66 | missing 'inputs' positional argument with ActiveLearner function | All of my relevant code:
```python
#!/usr/bin/env python3.5
from data_generator import data_generator as dg
# standard imports
from keras.models import load_model
from keras.utils import to_categorical
from keras.wrappers.scikit_learn import KerasClassifier
from os import listdir
import pandas as pd
import numpy as np
from modAL.models import ActiveLearner
######## NEW STUFF ########
# get filenames and folder names
data_location = './sensor_preprocessed_dataset/flow_rates_pressures/'
subfolders = ['true','false']
###########################
classifier = KerasClassifier(load_model('./0.7917.h5'))
(X_train, y_train), (X_test, y_test) = dg.load_data_for_model(data_location, subfolders)
WINDOW_SIZE = X_train[0].shape[0]
CHANNELS = X_train[0].shape[1]
# reshape and retype the data for the classifier
X_train = X_train.reshape(X_train.shape[0], WINDOW_SIZE, CHANNELS, 1)
X_test = X_test.reshape(X_test.shape[0], WINDOW_SIZE, CHANNELS, 1)
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# assemble initial data
n_initial = 30
initial_idx = np.random.choice(range(len(X_train)), size=n_initial, replace=False)
X_initial = X_train[initial_idx]
y_initial = y_train[initial_idx]
learner = ActiveLearner(
estimator=classifier,
X_training=X_train,
y_training=y_train,
verbose=1
)
X_pool = X_test
y_pool = y_test
n_queries = 10
for idx in range(n_queries):
print('Query no. %d' % (idx + 1))
query_idx, query_instance = learner.query(X_pool, n_instances=100, verbose=0)
learner.teach(
X=X_pool[query_idx], y=y_pool[query_idx], only_new=True,
verbose=1
)
X_pool = np.delete(X_pool, query_idx, axis=0)
y_pool = np.delete(y_pool, query_idx, axis=0)
```
Messages, Warnings, and Errors:
```
Using TensorFlow backend.
WARNING:tensorflow:From /home/jazz/.local/lib/python3.5/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2020-01-24 10:03:54.427147: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-01-24 10:03:54.447927: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2712000000 Hz
2020-01-24 10:03:54.448529: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x4c01c00 executing computations on platform Host. Devices:
2020-01-24 10:03:54.448599: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): <undefined>, <undefined>
WARNING:tensorflow:From /home/jazz/.local/lib/python3.5/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /home/jazz/.local/lib/python3.5/site-packages/tensorflow/python/ops/math_grad.py:102: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
Traceback (most recent call last):
File "./classifier.py", line 45, in <module>
y_training=y_train
File "/home/jazz/.local/lib/python3.5/site-packages/modAL/models/learners.py", line 79, in __init__
X_training, y_training, bootstrap_init, **fit_kwargs)
File "/home/jazz/.local/lib/python3.5/site-packages/modAL/models/base.py", line 63, in __init__
self._fit_to_known(bootstrap=bootstrap_init, **fit_kwargs)
File "/home/jazz/.local/lib/python3.5/site-packages/modAL/models/base.py", line 106, in _fit_to_known
self.estimator.fit(self.X_training, self.y_training, **fit_kwargs)
File "/home/jazz/.local/lib/python3.5/site-packages/keras/wrappers/scikit_learn.py", line 210, in fit
return super(KerasClassifier, self).fit(x, y, **kwargs)
File "/home/jazz/.local/lib/python3.5/site-packages/keras/wrappers/scikit_learn.py", line 139, in fit
**self.filter_sk_params(self.build_fn.__call__))
TypeError: __call__() missing 1 required positional argument: 'inputs'
```
I honestly don't even know where to begin to solve this; my code is based on your example here: https://modal-python.readthedocs.io/en/latest/content/examples/Keras_integration.html
And I've read the docs here: https://modal-python.readthedocs.io/en/latest/content/apireference/models.html
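In case it helps anyone triaging: my guess (an assumption, not a confirmed diagnosis) is that `KerasClassifier` expects `build_fn` to be a zero-argument callable returning a model, whereas here a loaded `Model` instance is passed, so the wrapper ends up invoking `Model.__call__`, which requires `inputs`. A framework-free illustration of that mistake:

```python
# Framework-free illustration (no keras needed): passing an instance where a
# zero-argument factory callable is expected triggers the same TypeError shape.
class FakeModel:
    def __call__(self, inputs):        # like keras.Model.__call__(inputs)
        return "forward pass"

def build_fn():                        # what the sklearn wrapper wants
    return FakeModel()

model = build_fn()                     # correct: factory -> model instance
try:
    FakeModel()()                      # wrong: wrapper "calls" the instance itself
    error_msg = None
except TypeError as exc:
    error_msg = str(exc)
print(error_msg)                       # ... missing 1 required positional argument: 'inputs'
```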
Any input is appreciated. | open | 2020-01-24T16:15:25Z | 2020-02-28T07:58:26Z | https://github.com/modAL-python/modAL/issues/66 | [] | zbrasseaux | 3 |
davidsandberg/facenet | computer-vision | 1,201 | How to judge the unknown face | I want to modify your code to output unknown faces when they don't exist in the dataset, instead of randomly outputting a face information | open | 2021-06-06T11:02:21Z | 2021-06-06T11:03:02Z | https://github.com/davidsandberg/facenet/issues/1201 | [] | niminjian | 1 |
pytorch/vision | machine-learning | 8,735 | `_skip_resize` ignored on detector inferenece | ### 🐛 Describe the bug
### Background
The FCOS object detector accepts `**kwargs`, one of which is the `_skip_resize` flag, to be passed directly to `GeneralizedRCNNTransform` at FCOS init, [here](https://github.com/pytorch/vision/blob/main/torchvision/models/detection/fcos.py#L420). If not specified, any input image is resized according to the default parameters (min_size=800, max_size=1333).
### Bug
However, **in inference mode**, even when passing `_skip_resize=True` when constructing FCOS, it is **ignored** by `GeneralizedRCNNTransform`, which will resize the image anyway. This can be clearly seen [here](https://github.com/pytorch/vision/blob/main/torchvision/models/detection/transform.py#L189).
The code in [transform.py](https://github.com/pytorch/vision/blob/main/torchvision/models/detection/transform.py#L189) should be changed from this (only skipping resize at training):
```python
if self.training:
    if self._skip_resize:
        return image, target
    size = self.torch_choice(self.min_size)
else:
    size = self.min_size[-1]
image, target = _resize_image_and_masks(image, size, self.max_size, target, self.fixed_size)
```
to this (skipping resize anyway):
```python
if self._skip_resize:
    return image, target
if self.training:
    size = self.torch_choice(self.min_size)
else:
    size = self.min_size[-1]
image, target = _resize_image_and_masks(image, size, self.max_size, target, self.fixed_size)
```
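To make the behavioural difference concrete, here is a toy distillation (mine, not torchvision code) of the two branch structures described above:

```python
# Toy distillation of the two branch structures (not torchvision code)
def current_behavior(training, skip_resize):
    if training:
        if skip_resize:
            return "skip"
        return "resize"
    return "resize"        # eval path never consults _skip_resize -> the bug

def proposed_behavior(training, skip_resize):
    if skip_resize:        # honoured in both modes
        return "skip"
    return "resize"

print(current_behavior(False, True))   # resize  (flag ignored at inference)
print(proposed_behavior(False, True))  # skip
```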
### Versions
Collecting environment information...
PyTorch version: 1.12.1+cu116
Is debug build: False
CUDA used to build PyTorch: 11.6
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-107-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.5.119
CUDA_MODULE_LOADING set to:
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: 13th Gen Intel(R) Core(TM) i9-13900K
CPU family: 6
Model: 183
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 1
CPU max MHz: 5800.0000
CPU min MHz: 800.0000
BogoMIPS: 5990.40
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq tme rdpid movdiri movdir64b fsrm md_clear serialize pconfig arch_lbr flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 896 KiB (24 instances)
L1i cache: 1.3 MiB (24 instances)
L2 cache: 32 MiB (12 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI BHI_DIS_S
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.26.1
[pip3] torch==1.12.1+cu116
[pip3] torchaudio==0.12.1+cu116
[pip3] torchvision==0.13.1+cu116
[conda] Could not collect | open | 2024-11-20T07:56:31Z | 2024-12-01T11:32:10Z | https://github.com/pytorch/vision/issues/8735 | [] | talcs | 3 |
mitmproxy/mitmproxy | python | 7,172 | request dissect curl upload file size error. | #### Problem Description
As photo show blew, when I use curl upload file and use file_content = flow.request.multipart_form.get(b'file') get file content and len(file_content) is not correct, thanks.
<img width="951" alt="image" src="https://github.com/user-attachments/assets/433b1604-687e-468d-aee7-031c9eed6586">
<img width="1472" alt="image" src="https://github.com/user-attachments/assets/bf822214-ce4f-43cf-a0c7-32a4a0b11819">
<img width="1019" alt="image" src="https://github.com/user-attachments/assets/6739a7be-b429-4fc4-9091-a2f922092b60">
#### System Information
<img width="514" alt="image" src="https://github.com/user-attachments/assets/c0e88f59-aadd-4a0e-a439-e9ce0131ccf2">
| closed | 2024-09-09T07:29:03Z | 2024-09-09T09:06:03Z | https://github.com/mitmproxy/mitmproxy/issues/7172 | [
"kind/triage"
] | zjwangmin | 4 |
allenai/allennlp | pytorch | 5,019 | Caption-Based Image Retrieval Model | We want to implement the Caption-Based Image Retrieval task from https://api.semanticscholar.org/CorpusID:199453025.
The [COCO](https://cocodataset.org/) and [Flickr30k](https://www.kaggle.com/hsankesara/flickr-image-dataset) datasets contain a large number of images with image captions. The task here is to train a model to pick the right image given the caption. The image must be picked from four images, one of which is the real one, and the other three are other random images from the dataset.
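A minimal sketch (my own, with hypothetical helper names) of the negative-example construction this implies — one true image plus three random distractors per caption:

```python
import random

def make_instance(captions, images, idx, k=3, rng=random):
    # one caption, its true image, plus k random distractor images, shuffled
    negatives = rng.sample([i for i in range(len(images)) if i != idx], k)
    choices = [idx] + negatives
    rng.shuffle(choices)
    return {"caption": captions[idx],
            "images": [images[i] for i in choices],
            "label": choices.index(idx)}  # position of the true image

inst = make_instance(["a cat"] * 5, ["img0", "img1", "img2", "img3", "img4"], 0,
                     rng=random.Random(0))
print(len(inst["images"]), inst["images"][inst["label"]])  # 4 img0
```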
You will have to write `Step`s that produce a `DatasetDict` for Flickr30k and COCO, including code that can produce the negative examples. Each instance will consist of a caption with four images. You will also need to write a model that can solve this task. The underlying component for the model will be VilBERT, and the [VQA model](https://github.com/allenai/allennlp-models/blob/main/allennlp_models/vision/models/vision_text_model.py) is probably a good place to steal some code to get started. | open | 2021-02-24T23:05:18Z | 2021-08-28T00:27:13Z | https://github.com/allenai/allennlp/issues/5019 | [
"Contributions welcome",
"Models",
"hard"
] | dirkgr | 0 |
dbfixtures/pytest-postgresql | pytest | 319 | How to enable the plugin for gitlab-CI / dockerized tests | ```yaml
# .gitlab-ci.yml
image: python:3.6
services:
## https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#available-settings-for-services
- name: kartoza/postgis:11.5-2.5
alias: db
variables:
# In gitlab-ci, the host connection name should be the service name or alias
# (with all / replaced with - or _) and everything after the colon ( : ) is stripped
POSTGRES_HOST: db # try to use service-alias (similar to docker-compose)
POSTGRES_PORT: 5432
ALLOW_IP_RANGE: 0.0.0.0/0
POSTGRES_MULTIPLE_EXTENSIONS: postgis,postgis_topology
# Using the admin-db and user allows the CI to do anything
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASS: '' # no passwd on gitlab-CI
SQLALCHEMY_DATABASE_URI: "postgres://${POSTGRES_USER}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}"
# etc etc
```
This Docker provisioning seems to work for creating a DB instance, but this plugin tries to create another one. However, the postgresql factory can't find `pg_ctl` to start a new instance.
```
> raise child_exception_type(errno_num, err_msg, err_filename)
E FileNotFoundError: [Errno 2] No such file or directory: '/usr/lib/postgresql/11/bin/pg_ctl': '/usr/lib/postgresql/11/bin/pg_ctl'
/usr/local/lib/python3.6/subprocess.py:1364: FileNotFoundError
```
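Given the missing `pg_ctl` above, the fix I'd guess at (package name is an assumption for the Debian-based `python:3.6` image, untested) is installing the server binaries inside the job container:

```yaml
# sketch: install PostgreSQL server binaries inside the job container
before_script:
  - apt-get update && apt-get install -y --no-install-recommends postgresql
  # pg_ctl should then exist under /usr/lib/postgresql/<version>/bin/
```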
It seems like this plugin needs an installation of PostgreSQL (e.g. via apt install). | closed | 2020-08-17T21:59:27Z | 2021-04-20T15:22:05Z | https://github.com/dbfixtures/pytest-postgresql/issues/319 | [] | dazza-codes | 2
ydataai/ydata-profiling | jupyter | 1,479 | Add a report on outliers | ### Missing functionality
I'm missing an easy report to see outliers.
### Proposed feature
An outlier to me is some value more than 3 std dev away from the mean.
I calculate this as:
```python
mean = X.mean()
std = X.std()
lower, upper = mean - 3*std, mean + 3*std
outliers = X[(X < lower) | (X > upper)]
100 * outliers.count() / X.count()
```
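A stdlib-only mirror of that snippet on toy data (note: with only a handful of points a single outlier can never exceed 3 sample standard deviations, so the toy series needs ~21 values):

```python
import statistics

# Self-contained demo of the 3-sigma rule described above
X = [2] * 20 + [100]
mean, std = statistics.mean(X), statistics.stdev(X)
lower, upper = mean - 3 * std, mean + 3 * std
outliers = [x for x in X if x < lower or x > upper]
pct = 100 * len(outliers) / len(X)
print(outliers, round(pct, 2))  # [100] 4.76
```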
It would be nice if an interactive report on the outliers were added.
### Alternatives considered
See code above :)
### Additional context
_No response_ | open | 2023-10-15T08:25:24Z | 2023-10-16T20:56:52Z | https://github.com/ydataai/ydata-profiling/issues/1479 | [
"feature request 💬"
] | svaningelgem | 1 |
QuivrHQ/quivr | api | 3,503 | Mine good responses + with context | Crafted manually -> use intent classifier to get most asked '*type'* of questions | closed | 2024-11-28T08:47:01Z | 2025-03-03T16:07:01Z | https://github.com/QuivrHQ/quivr/issues/3503 | [
"Stale"
] | linear[bot] | 2 |
matplotlib/mplfinance | matplotlib | 248 | Candle stick colors incorrectly colored | Hi Daniel,
I'm running the following code and getting incorrect colors for the candlesticks. Output and screenshot are given below. Please advise.
Code
```
import mplfinance as mpf
import matplotlib.pyplot as plt
import pandas as pd
print("mpf version:", mpf.__version__)
AAL_csv = pd.read_csv('./data/F.csv', index_col='timestamp', parse_dates=True)
print(AAL_csv.head())
mpf.plot(AAL_csv[['open', 'high', 'low', 'close']][-20:], type='candlestick', style='yahoo')
plt.show()
```
Output
```
mpf version: 0.12.7a0
open high low close volume
timestamp
2020-08-19 6.89 7.02 6.860 6.87 44158123
2020-08-18 6.98 7.01 6.885 6.89 40444196
2020-08-17 7.05 7.06 6.870 6.98 64307597
2020-08-14 6.97 7.11 6.930 7.04 43517662
2020-08-13 7.03 7.18 7.000 7.03 50066758
```
Screenshot of plot

| closed | 2020-08-24T08:27:28Z | 2020-08-24T23:58:29Z | https://github.com/matplotlib/mplfinance/issues/248 | [
"question"
] | lcobiac | 4 |
BeanieODM/beanie | pydantic | 269 | Allow comparison operators for datetime | It would be great if it were possible to find documents by datetime field using comparison operators, e.g:
```python
products = await Product.find(Product.date < datetime.datetime.now()).to_list()
``` | closed | 2022-05-14T11:19:50Z | 2023-02-05T02:40:56Z | https://github.com/BeanieODM/beanie/issues/269 | [
"Stale"
] | rdfsx | 2 |
taverntesting/tavern | pytest | 341 | Recursive string interpolation | I'd like to use recursive string substitution when building the name for a variable. I know this uses the limited Python formatting, but test data that depends on multiple parameters (environment, running instance, etc) is more flexibly picked up from an external file like this.
Example (imagine ENV is something coming from a function):
```yaml
# config.yaml
---
name: Config
description: Common test information
variables:
stage_host: https://some-stage-host.com
production_host: https://some-production-host.com
extra_environment_host: https://extra-environment-host.com
```
```yaml
test_name: Test for stage or production
includes:
- !include config.yaml
stages:
- name: Call some endpoint
request:
url: "{{ENV}_host}/some_endpoint"
method: GET
response:
status_code: 200
```
Are there any plans to allow some form of recursive interpolation in the future (i.e. on `1.0`)? Would contributions in this area be welcome? | closed | 2019-04-17T15:07:36Z | 2019-05-09T14:00:57Z | https://github.com/taverntesting/tavern/issues/341 | [] | bogdanbranescu | 2 |
shaikhsajid1111/facebook_page_scraper | web-scraping | 17 | No posts were found! | Hey! Thanks for your script.
But when I try to run your example, I get the 'no posts were found' error.
Is it because of the new layout?
Thanks! | open | 2022-03-03T16:55:31Z | 2022-08-14T09:13:15Z | https://github.com/shaikhsajid1111/facebook_page_scraper/issues/17 | [] | abigmeh | 9 |
sinaptik-ai/pandas-ai | data-visualization | 594 | I need to use Petals LLM on to PandasAI | ### 🚀 The feature
Hello PandasAI team,
I would like to propose a new feature for the PandasAI library: adding support for the Petals Language Learning Model (LLM).
Petals LLM operates on a peer-to-peer (P2P) network and allows us to use large models like meta-llama/Llama-2-70b-hf in a distributed manner. This could be a valuable addition to the LLMs currently supported by PandasAI.
### Motivation, pitch
The motivation for this proposal is to enhance the capabilities of the PandasAI library by adding support for the Petals Language Learning Model (LLM). The Petals LLM operates on a peer-to-peer (P2P) network and allows users to use large models like meta-llama/Llama-2-70b-hf in a distributed manner. This feature could be particularly useful for users who are working with large datasets and need to generate text at scale.
Pitch:
The Petals LLM has proven to be a powerful tool for generating text, and its P2P network allows it to scale efficiently. By integrating the Petals LLM into the PandasAI library, we can provide users with more options for text generation and make it easier for them to leverage the capabilities of the Petals LLM. This could potentially lead to better performance and more innovative uses of the PandasAI library.
This feature is not related to a specific problem, but rather it is a proactive measure to enhance the capabilities of the PandasAI library and provide users with more options for text generation. We believe that this feature would be beneficial to many users of the PandasAI library and look forward to discussing its potential further.
### Alternatives
_No response_
### Additional context
```python
from ..prompts.base import Prompt
from .base import LLM
from petals import AutoDistributedModelForCausalLM
from transformers import AutoTokenizer


class PetalsLLM(LLM):
    """Petals LLM"""

    def __init__(self, model_name="meta-llama/Llama-2-70b-hf"):
        self.model_name = model_name
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

    @property
    def type(self) -> str:
        return "petals"

    def call(self, instruction: Prompt, suffix: str = "") -> str:
        self.last_prompt = instruction.to_string() + suffix
        inputs = self.tokenizer(self.last_prompt, return_tensors="pt")["input_ids"]
        outputs = self.model.generate(inputs, max_new_tokens=5)
        return self.tokenizer.decode(outputs[0])
``` | closed | 2023-09-25T20:40:11Z | 2024-06-01T00:18:16Z | https://github.com/sinaptik-ai/pandas-ai/issues/594 | [] | databenti | 2 |
coqui-ai/TTS | python | 2,510 | [Bug] Unable to find compute_embedding_from_clip | ### Describe the bug
Unable to find compute_embedding_from_clip
### To Reproduce
tts --text "This is a demo text." --speaker_wav "my_voice.wav"
### Expected behavior
_No response_
### Logs
```shell
> tts_models/en/ljspeech/tacotron2-DDC is already downloaded.
> vocoder_models/en/ljspeech/hifigan_v2 is already downloaded.
> Using model: Tacotron2
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:True
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:2.718281828459045
| > hop_length:256
| > win_length:1024
> Model's reduction rate `r` is set to: 1
> Vocoder Model: hifigan
> Setting up Audio Processor...
| > sample_rate:22050
| > resample:False
| > num_mels:80
| > log_func:np.log
| > min_level_db:-100
| > frame_shift_ms:None
| > frame_length_ms:None
| > ref_level_db:20
| > fft_size:1024
| > power:1.5
| > preemphasis:0.0
| > griffin_lim_iters:60
| > signal_norm:False
| > symmetric_norm:True
| > mel_fmin:0
| > mel_fmax:8000.0
| > pitch_fmin:1.0
| > pitch_fmax:640.0
| > spec_gain:1.0
| > stft_pad_mode:reflect
| > max_norm:4.0
| > clip_norm:True
| > do_trim_silence:False
| > trim_db:60
| > do_sound_norm:False
| > do_amp_to_db_linear:True
| > do_amp_to_db_mel:True
| > do_rms_norm:False
| > db_level:None
| > stats_path:None
| > base:2.718281828459045
| > hop_length:256
| > win_length:1024
> Generator Model: hifigan_generator
> Discriminator Model: hifigan_discriminator
Removing weight norm...
> Text: This is a demo text.
> Text splitted to sentences.
['This is a demo text.']
Traceback (most recent call last):
File "/opt/homebrew/bin/tts", line 8, in <module>
sys.exit(main())
File "/opt/homebrew/lib/python3.10/site-packages/TTS/bin/synthesize.py", line 396, in main
wav = synthesizer.tts(
File "/opt/homebrew/lib/python3.10/site-packages/TTS/utils/synthesizer.py", line 316, in tts
speaker_embedding = self.tts_model.speaker_manager.compute_embedding_from_clip(speaker_wav)
AttributeError: 'NoneType' object has no attribute 'compute_embedding_from_clip'
```
### Environment
```shell
- TTS Version latest
- Machine Mac M1
- pip3
```
### Additional context
_No response_ | closed | 2023-04-12T22:52:08Z | 2023-04-21T09:55:55Z | https://github.com/coqui-ai/TTS/issues/2510 | [
"bug"
] | faizulhaque | 5 |
jwkvam/bowtie | jupyter | 77 | document subscribing functions to more than one event | closed | 2016-12-31T22:59:26Z | 2017-01-03T05:00:41Z | https://github.com/jwkvam/bowtie/issues/77 | [] | jwkvam | 0 | |
scikit-image/scikit-image | computer-vision | 7,641 | import paddleocr and raise_build_error occurs | ### Description:
When I just import paddleocr in a Python file, an ImportError occurs:
```shell
python test_import.py
/home/fengt/anaconda3/envs/py37/lib/python3.7/site-packages/paddle/fluid/core.py:214: UserWarning: Load version
/home/fengt/local/usr/lib/x86_64-linux-gnu/libgomp.so.1 failed
warnings.warn("Load {} failed".format(dso_absolute_path))
Traceback (most recent call last):
File "/home/fengt/anaconda3/envs/py37/lib/python3.7/site-packages/skimage/__init__.py", line 151, in <module>
from ._shared import geometry
ImportError: dlopen: cannot load any more object with static TLS
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test_import.py", line 1, in <module>
from paddleocr import PaddleOCR
File "/home/fengt/anaconda3/envs/py37/lib/python3.7/site-packages/paddleocr/__init__.py", line 14, in <module>
from .paddleocr import *
File "/home/fengt/anaconda3/envs/py37/lib/python3.7/site-packages/paddleocr/paddleocr.py", line 48, in <module>
from tools.infer import predict_system
File "/home/fengt/anaconda3/envs/py37/lib/python3.7/site-packages/paddleocr/tools/infer/predict_system.py", line 32, in <module>
import tools.infer.predict_rec as predict_rec
File "/home/fengt/anaconda3/envs/py37/lib/python3.7/site-packages/paddleocr/tools/infer/predict_rec.py", line 31, in <module>
from ppocr.postprocess import build_post_process
File "/home/fengt/anaconda3/envs/py37/lib/python3.7/site-packages/paddleocr/ppocr/postprocess/__init__.py", line 33, in <module>
from .pg_postprocess import PGPostProcess
File "/home/fengt/anaconda3/envs/py37/lib/python3.7/site-packages/paddleocr/ppocr/postprocess/pg_postprocess.py", line 25, in <module>
from ppocr.utils.e2e_utils.pgnet_pp_utils import PGNet_PostProcess
File "/home/fengt/anaconda3/envs/py37/lib/python3.7/site-packages/paddleocr/ppocr/utils/e2e_utils/pgnet_pp_utils.py", line 25, in <module>
from extract_textpoint_slow import *
File "/home/fengt/anaconda3/envs/py37/lib/python3.7/site-packages/paddleocr/ppocr/utils/e2e_utils/extract_textpoint_slow.py", line 24, in <module>
from skimage.morphology._skeletonize import thin
File "/home/fengt/anaconda3/envs/py37/lib/python3.7/site-packages/skimage/__init__.py", line 154, in <module>
_raise_build_error(e)
File "/home/fengt/anaconda3/envs/py37/lib/python3.7/site-packages/skimage/__init__.py", line 133, in _raise_build_error
%s""" % (e, msg))
ImportError: dlopen: cannot load any more object with static TLS
It seems that scikit-image has not been built correctly.
Your install of scikit-image appears to be broken.
Try re-installing the package following the instructions at:
https://scikit-image.org/docs/stable/install.html
```
the test_import.py is:
```python
from paddleocr import PaddleOCR
```
### Way to reproduce:
test_import.py
```python
from paddleocr import PaddleOCR
```
python test_import.py in a virtual conda env.
pip list:
```bash
Package Version
----------------------- ------------------
absl-py 2.1.0
antlr4-python3-runtime 4.9.3
anyio 3.7.1
appdirs 1.4.4
astor 0.8.1
attrdict 2.0.1
Babel 2.14.0
bce-python-sdk 0.9.25
beautifulsoup4 4.12.3
black 21.4b2
cachetools 5.5.0
certifi 2022.12.7
charset-normalizer 3.4.0
click 8.1.7
cloudpickle 2.2.1
cssselect 1.2.0
cssutils 2.7.1
cycler 0.11.0
Cython 3.0.11
decorator 5.1.1
detectron2 0.6+cpu
et-xmlfile 1.1.0
exceptiongroup 1.2.2
fire 0.7.0
Flask 2.2.5
flask-babel 3.1.0
fonttools 4.38.0
future 1.0.0
fvcore 0.1.5.post20221221
google-auth 2.37.0
google-auth-oauthlib 0.4.6
grpcio 1.62.3
h11 0.14.0
httpcore 0.17.3
httpx 0.24.1
hydra-core 1.3.2
idna 3.10
imageio 2.31.2
imgaug 0.4.0
importlib-metadata 6.7.0
importlib-resources 5.12.0
iopath 0.1.9
itsdangerous 2.1.2
Jinja2 3.1.4
joblib 1.3.2
kiwisolver 1.4.5
lmdb 1.5.1
lxml 5.3.0
Markdown 3.4.4
MarkupSafe 2.1.5
matplotlib 3.5.3
mypy-extensions 1.0.0
networkx 2.6.3
numpy 1.21.6
oauthlib 3.2.2
omegaconf 2.3.0
opencv-contrib-python 4.6.0.66
opencv-python 4.6.0.66
opencv-python-headless 4.10.0.84
openpyxl 3.1.3
opt-einsum 3.3.0
packaging 24.0
paddleocr 2.7.0.2
paddlepaddle 2.5.2
pandas 1.3.5
pathspec 0.11.2
pdf2docx 0.5.8
Pillow 9.5.0
pip 22.3.1
portalocker 2.7.0
premailer 3.10.0
protobuf 3.20.3
psutil 6.1.1
pyasn1 0.5.1
pyasn1-modules 0.3.0
pyclipper 1.3.0.post6
pyclustering 0.10.1.2
pycocotools 2.0.7
pycryptodome 3.21.0
pydot 2.0.0
PyMuPDF 1.20.2
pyparsing 3.1.4
python-dateutil 2.9.0.post0
python-docx 1.1.0
python-pptx 0.6.23
pytz 2024.2
PyWavelets 1.3.0
PyYAML 6.0.1
rapidfuzz 3.4.0
rarfile 4.2
regex 2024.4.16
requests 2.31.0
requests-oauthlib 2.0.0
rsa 4.9
scikit-image 0.19.3
scikit-learn 1.0.2
scipy 1.7.3
setuptools 65.6.3
shapely 2.0.6
six 1.17.0
sniffio 1.3.1
soupsieve 2.4.1
tabulate 0.9.0
tensorboard 2.11.2
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
termcolor 2.3.0
threadpoolctl 3.1.0
tifffile 2021.11.2
toml 0.10.2
tqdm 4.67.1
typed-ast 1.5.5
typing_extensions 4.7.1
urllib3 2.0.7
visualdl 2.5.3
Werkzeug 2.2.3
wheel 0.38.4
XlsxWriter 3.2.0
yacs 0.1.8
zipp 3.15.0
```
### Version information:
```Shell
3.7.16 (default, Jan 17 2023, 22:20:44)
[GCC 11.2.0]
Linux-4.4.0-31-generic-x86_64-with-debian-jessie-sid
scikit-image version: 0.19.3
numpy version: 1.21.6
```
| open | 2024-12-20T03:13:03Z | 2025-01-09T00:03:56Z | https://github.com/scikit-image/scikit-image/issues/7641 | [
":bug: Bug"
] | Tom89757 | 1 |
netbox-community/netbox | django | 18,553 | Virtualization: Modifying a cluster field also modifies member virtual machine sites | ### Deployment Type
NetBox Cloud
### NetBox Version
v4.2.2
### Python Version
3.12
### Steps to Reproduce
1. Create a cluster with a blank scope
2. Create some VMs, assign them to the cluster, and assign them to a site
3. Edit a field on the cluster (eg description) and save
### Expected Behavior
Virtual Machines are not modified.
### Observed Behavior
Virtual Machine site fields are cleared to match the cluster scope.
Note - there is some validation in NetBox whereby, if a cluster is scoped to a site, you cannot add VMs from a different site. This is expected given the cluster has been explicitly scoped.
However, modifying a field in the cluster should not impact virtual machines.
One big use case for this functionality is when the cluster is not scoped to a specific site - for example, an overlay cluster spanning multiple sites, in which case virtual machines within a single cluster may be in multiple sites. | open | 2025-01-31T21:24:21Z | 2025-03-03T21:24:00Z | https://github.com/netbox-community/netbox/issues/18553 | [
"type: bug",
"status: under review",
"severity: medium"
] | cruse1977 | 1 |
davidsandberg/facenet | computer-vision | 527 | how to set optional parameters for slim.batch_norm | here is my batch_norm_params, which is soon fed into normalizer_params.

However, when I print tf.trainable_variables, there are only the mean, variance and beta for BN; gamma is missing.

How can I change the default settings, e.g. add gamma, or keep only the mean and variance?
| open | 2017-11-14T06:57:37Z | 2017-11-14T06:57:37Z | https://github.com/davidsandberg/facenet/issues/527 | [] | patienceFromZhou | 0 |
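For reference, in `slim.batch_norm` the offset beta is controlled by the `center` argument and the scale gamma by `scale`; `scale` defaults to `False`, which is why gamma never shows up in `tf.trainable_variables`. A minimal sketch of the params dict (the decay/epsilon values are just illustrative, and the surrounding network setup is assumed):

```python
# Passed to slim.conv2d(..., normalizer_fn=slim.batch_norm,
#                       normalizer_params=batch_norm_params)
batch_norm_params = {
    "decay": 0.995,    # illustrative value
    "epsilon": 0.001,  # illustrative value
    "center": True,    # create the trainable offset beta (the default)
    "scale": True,     # create the trainable scale gamma (default is False)
}
```

The moving mean/variance are updated through the batch-norm update ops rather than trained by the optimizer; if they appear in `tf.trainable_variables`, that usually comes from a `variables_collections` entry routing them there, so removing them is a matter of changing that collection rather than these flags.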
aws/aws-sdk-pandas | pandas | 2,461 | Repairing table works in the AWS Console, but not when using athena.repair_table | ### Describe the bug
I'm trying to repair a table using `athena.repair_table`, like so:
```
wr.athena.repair_table(
database=db_name,
table=table["name"],
boto3_session=athena_session,
workgroup=settings.aws_dev_athena_workgroup,
)
```
This code generates the following exception:
```Python
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/vscode/.local/lib/python3.11/site-packages/awswrangler/_config.py", line 733, in wrapper
return function(**args)
^^^^^^^^^^^^^^^^
File "/home/vscode/.local/lib/python3.11/site-packages/awswrangler/athena/_utils.py", line 530, in repair_table
response: Dict[str, Any] = _executions.wait_query(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vscode/.local/lib/python3.11/site-packages/awswrangler/_config.py", line 733, in wrapper
return function(**args)
^^^^^^^^^^^^^^^^
File "/home/vscode/.local/lib/python3.11/site-packages/awswrangler/athena/_executions.py", line 238, in wait_query
raise exceptions.QueryFailed(response["Status"].get("StateChangeReason"))
awswrangler.exceptions.QueryFailed: [ErrorCategory:USER_ERROR, ErrorCode:DDL_FAILED], Detail:FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
```
I also tried running the query (`MSCK REPAIR TABLE <TABLE NAME>`) using `athena.read_sql_query`, but got the same error.
Using boto3 yields a similar error.
I'm able to run `SELECT` queries just fine.
Running this query on the AWS Console (Athena Query editor) works correctly.
### How to Reproduce
I'm trying to repair a table in a development account, after copying parquet files from a production account. My idea was to create a prod -> dev sync script.
### OS
Linux
### Python version
3.11
### AWS SDK for pandas version
3.3.0
| closed | 2023-09-14T17:20:50Z | 2023-09-15T13:24:28Z | https://github.com/aws/aws-sdk-pandas/issues/2461 | [
"bug"
] | CarlosDomingues | 2 |
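For anyone isolating the failure, the same DDL can be submitted directly with boto3 to rule awswrangler out. The sketch below only builds the statement; the database and workgroup names are assumptions, and the actual call is commented out since it needs AWS credentials:

```python
def build_repair_query(table_name: str) -> str:
    # MSCK REPAIR TABLE asks Athena/Hive to scan the table's S3 location
    # and register any partitions missing from the Glue catalog.
    return f"MSCK REPAIR TABLE `{table_name}`"

query = build_repair_query("my_table")

# With credentials configured, this could be submitted roughly as:
# import boto3
# athena = boto3.client("athena")
# athena.start_query_execution(
#     QueryString=query,
#     QueryExecutionContext={"Database": "my_database"},
#     WorkGroup="primary",
# )
```

If the direct boto3 call fails with the same DDL_FAILED error, the problem is on the Athena/Glue side (permissions or workgroup configuration) rather than in awswrangler.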
SALib/SALib | numpy | 345 | Morris method normal distribution sample | Hi,
I was wondering how to create a sample with a normal distribution using the Morris sampler.
Since 'dists' is not yet supported in the morris sampler, I tried a workaround (tested with the saltelli-sampler that has 'dists' and it worked), but I receive lots of infinite-values in the sample with the morris method when converting uniform to normal distributions. Is there a solution for this?
Tried code:
```
from SALib.sample import morris
import pandas as pd
import scipy.stats.distributions as ssd
import lhsmdu
# problem definition ('num_vars', 'names', 'bounds', 'dists')
prob_dists_code = {'num_vars': 3,
'names': ['P1', 'P2', 'P3'],
'groups': None,
'bounds': [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]]}
np_array = morris.sample(prob_dists_code, N=1000, num_levels=4, optimal_trajectories=None)
p1 = ssd.uniform(loc=0.9, scale=0.2)
p2 = ssd.norm(loc=45, scale=10)
p3 = ssd.norm(loc=0, scale=1)
new_samples = np_array
new_samples[:, 0] = lhsmdu.inverseTransformSample(p1, np_array[:, 0])
new_samples[:, 1] = lhsmdu.inverseTransformSample(p2, np_array[:, 1])
new_samples[:, 2] = lhsmdu.inverseTransformSample(p3, np_array[:, 2])
df = pd.DataFrame(new_samples)
df.columns = ['P1', 'P2', 'P3']
print(df)
```
Kind regards,
Matthias | open | 2020-09-10T14:45:09Z | 2020-10-24T07:25:22Z | https://github.com/SALib/SALib/issues/345 | [] | MatthiVH | 4 |
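The infinities are expected from this composition: Morris trajectories land exactly on the level-grid endpoints 0.0 and 1.0, and the inverse normal CDF diverges there. One workaround (an illustration, not SALib API) is to clip the uniform sample into the open interval before the inverse transform, shown here with the stdlib `statistics.NormalDist` in place of scipy/lhsmdu:

```python
import math
from statistics import NormalDist

def uniform_to_normal(samples, mu, sigma, eps=1e-9):
    # Morris levels include the exact endpoints 0.0 and 1.0, where the
    # inverse normal CDF is -inf/+inf; that is where the infinite values
    # in the transformed sample come from. Clip into (0, 1) first.
    dist = NormalDist(mu=mu, sigma=sigma)
    return [dist.inv_cdf(min(max(p, eps), 1.0 - eps)) for p in samples]

# e.g. for P2 ~ Normal(45, 10) over the num_levels=4 grid:
levels = [0.0, 1.0 / 3.0, 2.0 / 3.0, 1.0]
p2 = uniform_to_normal(levels, mu=45.0, sigma=10.0)
```

Clipping slightly truncates the tails; for Morris screening this is usually acceptable, but widening `eps` or explicitly truncating the distribution are alternatives.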
graphql-python/graphene-sqlalchemy | graphql | 112 | Generate Input Arguments from SQLAlchemy Class? | Hello,
Do you know if it's possible to generate input arguments dynamically from the SQLAlchemy class that will be transformed by the mutation?
Example:
My input arguments for a `CreatePerson` mutation look like this:
```python
class CreatePersonInput(graphene.InputObjectType):
"""Arguments to create a person."""
name = graphene.String(required=True, description="Name of the person to be created.")
height = graphene.String(default_value="unknown", description="Height of the person to be created.")
mass = graphene.String(default_value="unknown", description="Mass of the person to be created.")
hair_color = graphene.String(default_value="unknown", description="Hair color of the person to be created.")
skin_color = graphene.String(default_value="unknown", description="Skin color of the person to be created.")
eye_color = graphene.String(default_value="unknown", description="Eye color of the person to be created.")
birth_year = graphene.String(default_value="unknown", description="Birth year of the person to be created.")
gender = graphene.String(default_value="unknown", description="Gender of the person to be created.")
planet_id = graphene.ID(default_value="unknown", description="Global Id of the planet from which the person to be created comes from.")
url = graphene.String(default_value="unknown", description="URL of the person in the Star Wars API.")
class CreatePerson(graphene.Mutation):
"""Mutation to create a person."""
person = graphene.Field(lambda: People, description="Person created by this mutation.")
class Arguments:
input = CreatePersonInput(required=True)
...
```
In the meantime, the input arguments for my `UpdatePerson` mutation look like this:
```python
class UpdatePersonInput(graphene.InputObjectType):
"""Arguments to update a person."""
id = graphene.ID(required=True)
name = graphene.String()
height = graphene.String()
mass = graphene.String()
hair_color = graphene.String()
skin_color = graphene.String()
eye_color = graphene.String()
birth_year = graphene.String()
gender = graphene.String()
planet_id = graphene.ID()
url = graphene.String()
class UpdatePerson(graphene.Mutation):
"""Update a person."""
person = graphene.Field(lambda: People, description="Person updated by this mutation.")
class Arguments:
input = UpdatePersonInput(required=True)
...
```
Finally, my SQLAlchemy class looks like this:
```python
class ModelPeople(Base):
"""People model."""
__tablename__ = 'people'
id = Column('id', Integer, primary_key=True)
name = Column('name', String)
height = Column('height', String)
mass = Column('mass', String)
hair_color = Column('hair_color', String)
skin_color = Column('skin_color', String)
eye_color = Column('eye_color', String)
birth_year = Column('birth_year', String)
gender = Column('gender', String)
planet_id = Column('planet_id', Integer, ForeignKey('planet.id'))
created = Column('created', String)
edited = Column('edited', String)
url = Column('url', String)
...
```
This is all pretty redundant and it would be ideal if we could just reuse the SQLAlchemy class attributes in the `InputObjectType` | open | 2018-02-09T05:44:41Z | 2018-04-24T18:13:29Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/112 | [] | alexisrolland | 3 |
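There is no built-in helper for this in graphene-sqlalchemy (hence the issue), but the boilerplate can be generated by walking the model's columns. The sketch below keeps the column walk library-free so only the mapping logic is shown; the commented final step indicates roughly how the specs would become an `InputObjectType` via `type()` (the names and the type map are assumptions):

```python
# Map SQLAlchemy column type names to graphene scalar names.
SA_TO_GRAPHENE = {"Integer": "Int", "String": "String", "Boolean": "Boolean"}

def input_field_specs(columns, for_update=False):
    """columns: iterable of (name, sa_type_name, is_primary_key) tuples,
    e.g. derived from ModelPeople.__table__.columns."""
    specs = {}
    for name, sa_type, is_pk in columns:
        if is_pk:
            # primary keys become IDs, required only on update mutations
            specs[name] = ("ID", for_update)
        else:
            specs[name] = (SA_TO_GRAPHENE.get(sa_type, "String"), False)
    return specs

people_columns = [
    ("id", "Integer", True),
    ("name", "String", False),
    ("height", "String", False),
]
update_specs = input_field_specs(people_columns, for_update=True)

# With graphene imported, the specs could be materialised roughly as:
# attrs = {n: getattr(graphene, t)(required=req)
#          for n, (t, req) in update_specs.items()}
# UpdatePersonInput = type("UpdatePersonInput",
#                          (graphene.InputObjectType,), attrs)
```

Per-field defaults and descriptions could be carried the same way, e.g. by extending the tuples with a `default_value` entry.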
jina-ai/serve | fastapi | 6,166 | After killing the process, it becomes a zombie process | 1. nohup jina flow --uses flow.yml --name bf7e2d0198df3388a73525a3f3c7f87f --workspace bf7e2d0198df3388a73525a3f3c7f87f >bf7e2d0198df3388a73525a3f3c7f87f/jina.log 2>&1 &
2. ps aux|grep 'jina flow '|grep 'name bf7e2d0198df3388a73525a3f3c7f87f' |grep -v grep |awk '{print $2}'| xargs echo
The output is: 1719527 1719529 1721841
3. kill -9 1719527 1719529 1721841
4. The processes (1719527, 1719529, 1721841) become zombie processes
| closed | 2024-04-28T02:15:05Z | 2024-11-13T00:22:57Z | https://github.com/jina-ai/serve/issues/6166 | [
"Stale"
] | iamfengdy | 10 |
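A `kill -9`'d process remains a zombie until its parent calls `wait()` on it; the signal alone never reaps the process-table entry. When driving the flow from Python, a kill-and-reap sketch for a whole process group looks like this (Linux; `sleep` stands in for the jina flow process):

```python
import os
import signal
import subprocess

# start_new_session=True gives the child its own process group, so the
# whole group (the flow plus any workers it spawned) can be killed at once.
proc = subprocess.Popen(["sleep", "60"], start_new_session=True)

os.killpg(proc.pid, signal.SIGKILL)  # with setsid, the pgid equals the child pid

# Reaping is what removes the zombie entry from the process table.
proc.wait()
```

When the parent (here, the shell that ran `nohup`) has already exited, orphaned zombies are normally reaped by init, so zombies that persist usually point to a still-alive parent that never waits on its children.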
microsoft/nni | tensorflow | 5,149 | Remove trials from experiment | <!-- Please only use this template for submitting enhancement requests -->
**What would you like to be added**:
Remove trials from experiment, either in the web page or in the command line.
**Why is this needed**:
Sometimes one trial may get a strange result due to a numerical issue, which is difficult to anticipate in advance. The strange result will prevent users from reading the results at an appropriate scale.
<img width="1118" alt="image" src="https://user-images.githubusercontent.com/25512742/193263571-efe0ea1d-c1b3-4012-b632-71030a5d971f.png">
<img width="1158" alt="image" src="https://user-images.githubusercontent.com/25512742/193263757-361b0e44-85ac-43e8-86cb-7b36fb5fb50f.png">
**Without this feature, how does current nni work**:
The strange value (if quite large) will push the other informative values into a small portion of the axis, making it impossible to get any insight from the hyper-parameter panel.
**Components that may involve changes**:
Web page, and the database to store all the trials for one experiment.
**Brief description of your proposal if any**:
| open | 2022-09-30T11:55:03Z | 2022-10-08T09:35:44Z | https://github.com/microsoft/nni/issues/5149 | [
"new feature",
"WebUI",
"need more info"
] | DDDOH | 4 |
huggingface/transformers | deep-learning | 36,626 | save_only_model with FSDP throws FileNotFoundError error | ### System Info
* Transformers (4.50.0.dev0) main branch at commit [94ae1ba](https://github.com/huggingface/transformers/commit/94ae1ba5b55e79ba766582de8a199d8ccf24a021)
* (also tried) transformers==4.49
* python==3.12
* accelerate==1.0.1
### Who can help?
@muellerzr @SunMarc @ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run a simple FSDP training with state dict type `SHARDED_STATE_DICT` with `save_only_model` option.
```
File "/opt/conda/lib/python3.11/site-packages/transformers/trainer.py", line 2241, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/transformers/trainer.py", line 2639, in _inner_training_loop
self._maybe_log_save_evaluate(tr_loss, grad_norm, model, trial, epoch, ignore_keys_for_eval, start_time)
File "/opt/conda/lib/python3.11/site-packages/transformers/trainer.py", line 3092, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial)
File "/opt/conda/lib/python3.11/site-packages/transformers/trainer.py", line 3211, in _save_checkpoint
self.state.save_to_json(os.path.join(output_dir, TRAINER_STATE_NAME))
File "/opt/conda/lib/python3.11/site-packages/transformers/trainer_callback.py", line 144, in save_to_json
with open(json_path, "w", encoding="utf-8") as f:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: './train_output/checkpoint-1/trainer_state.json'
```
### Expected behavior
Report the incompatibility early in the training lifecycle rather erroring out at the first checkpoint save event. | closed | 2025-03-10T06:56:16Z | 2025-03-13T16:17:37Z | https://github.com/huggingface/transformers/issues/36626 | [
"bug"
] | kmehant | 1 |
Kanaries/pygwalker | pandas | 677 | [BUG] When using the chat feature, the app crashes after a period of loading. | **Describe the bug**
When using the chat feature, the app crashes after a period of loading.
I launch PyGWalker from a .py file on my computer. A web page then opens, and I use the chat feature on that page. After a while, it crashes.
The data I use: https://www.kaggle.com/datasets/dgomonov/new-york-city-airbnb-open-data/data?select=AB_NYC_2019.csv
Here is my code.

If I run the code in a .ipynb file, I can use the chat feature.
**To Reproduce**
Steps to reproduce the behavior:
1. Use this code to launch Pygwalker from the .py file.
2. In the chat feature, input: "With room_type on the x-axis and the mean price on the y-axis."

It is the same when using the chat here.

3. After a while, it will crash.
**Expected behavior**
Chat feature should work.
**Versions**
- pygwalker version: 0.4.9.14
- python version: 3.12.9
- browser: chrome | open | 2025-03-05T03:55:48Z | 2025-03-07T03:39:23Z | https://github.com/Kanaries/pygwalker/issues/677 | [
"bug",
"P1"
] | vanbolin | 5 |
scrapy/scrapy | python | 6,705 | Provide coroutine/Future alternatives to public Deferred APIs | If we want better support for native asyncio, we need to somehow provide `async def` alternatives to such public APIs as `CrawlerProcess.crawl()`, `ExecutionEngine.download()` or `ExecutionEngine.stop()`. It doesn't seem possible right away, because users can expect that e.g. `addCallback()` works on a result of any such function, but we may be able to do that in stages, in a backwards incompatible way, or e.g. by providing separate APIs.
Related to #6677 and to #6219. Also #6047 shows a potential problem as when `CrawlerProcess.crawl()` starts returning a coroutine you really need to await on it explicitly. | open | 2025-03-08T17:15:47Z | 2025-03-08T17:15:47Z | https://github.com/scrapy/scrapy/issues/6705 | [
"enhancement",
"discuss",
"asyncio"
] | wRAR | 0 |
Lightning-AI/pytorch-lightning | deep-learning | 19,718 | Training stuck when running on Slurm with multiprocessing | ### Bug description
Hi,
I'm trying to train a model on Slurm using a single GPU, and in the training_step I call multiprocessing.Pool() to parallelize some function calls (the function is executed on every example in the training data).
When I run multiprocessing.Pool from the training_step, the call never ends. I added multiple logs and prints to the code, and I see that all function calls were executed, but the pool was never joined and it stays "hanging".
I tried running the same code not on Slurm and it works as expected. I also tried running the same function using multiprocess on Slurm outside the training_step and it also worked.
The only thing that doesn't work is running the program on Slurm and using multiprocessing inside the training_step.
The trainer definition is:
```
trainer = pl.Trainer(
max_epochs=100,
callbacks=callbacks,
default_root_dir=root_dir
)
```
The training step (simplified):
```
def training_step(self, batch: List[torch.Tensor], batch_idx: int) -> torch.Tensor:
x, y = batch
if self.project:
with Pool() as pool:
x_projected = pool.starmap(self.projector.project, list(zip(x.cpu().numpy(), y.cpu().numpy())))
x = torch.tensor(x_projected).float().to(x.device)
logits = self(x)
return self.loss(logits, y)
```
And the Slurm file:
```
#! /bin/sh
#SBATCH --job-name=job
#SBATCH --output=logs/job.out # redirect stdout
#SBATCH --error=logs/job.err # redirect stderr
#SBATCH --partition=killable
#SBATCH --time=1000
#SBATCH --signal=USR1@120 # how to end job when time's up
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --mem=2500 # CPU memory (MB)
#SBATCH -c 8 # 8 cores
#SBATCH --gpus=1
python job.py
```
The lightning version I'm using is lightning 2.1.2.
Any ideas on what the problem is and how to solve it?
Thanks in advance!
### What version are you seeing the problem on?
v2.1
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_
cc @awaelchli | open | 2024-03-31T15:31:36Z | 2024-03-31T16:03:55Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19718 | [
"question",
"environment: slurm",
"ver: 2.1.x"
] | talshechter | 1 |
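One common culprit for exactly this pattern (works locally, hangs inside the GPU job) is a fork-based `Pool` created in a process that has already initialised CUDA or spawned library threads: the forked workers inherit locks and CUDA state and can deadlock. A hedged workaround, assuming the projection function is picklable, is a spawn context sized to the Slurm allocation:

```python
import os
from multiprocessing import get_context

def project_batch(project_fn, xs, ys, start_method="spawn"):
    # "spawn" starts fresh interpreters instead of forking the
    # CUDA-initialised training process, avoiding inherited-lock hangs.
    # Size the pool from the cgroup allocation (-c 8), not os.cpu_count(),
    # which sees every CPU on the node.
    workers = len(os.sched_getaffinity(0))
    ctx = get_context(start_method)
    with ctx.Pool(processes=workers) as pool:
        return pool.starmap(project_fn, zip(xs, ys))
```

Spawn pools are slow to start, so creating one long-lived pool in `setup()` instead of per `training_step`, or moving the projection into the `DataLoader` workers, would avoid paying that startup cost on every step.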
Josh-XT/AGiXT | automation | 1,205 | 3D Printing Extension | ### Feature/Improvement Description
I would like to make automated prints, maybe using OctoPrint or some other API/SDK that allows printing to a 3D printer, with slicing functions or by just sending G-code or STL files to it.
### Proposed Solution
OctoPrint may work. There is one I found called OctoRest (https://github.com/dougbrion/OctoRest) which may work, but I'd prefer to find a more actively maintained one so I don't put something that's totally outdated into the code stack.
### Acknowledgements
- [X] I have searched the existing issues to make sure this feature has not been requested yet.
- [X] I have provided enough information for everyone to understand why this feature request is needed in AGiXT. | closed | 2024-06-07T07:16:12Z | 2025-01-20T02:34:59Z | https://github.com/Josh-XT/AGiXT/issues/1205 | [
"type | request | new feature"
] | birdup000 | 1 |
MaartenGr/BERTopic | nlp | 2,149 | TypeError: unhashable type: 'numpy.ndarray' | Hi.
I followed the tutorial [here](https://huggingface.co/docs/hub/en/bertopic#:~:text=topic%2C%20prob%20%3D%20topic_model.transform(%22This%20is%20an%20incredible%20movie!%22)%0Atopic_model.topic_labels_%5Btopic%5D)
but got an error when running:
`topic_model.topic_labels_[topic]`
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[36], line 1
----> 1 topic_model.topic_labels_[topic]
TypeError: unhashable type: 'numpy.ndarray'
```
Then I tried:
```
if isinstance(topic, np.ndarray):
topic = topic[0]
label = topic_model.topic_labels_[int(topic)]
```
Is this the correct approach, or am I missing something?
Thanks. | closed | 2024-09-13T11:43:16Z | 2024-09-13T13:04:21Z | https://github.com/MaartenGr/BERTopic/issues/2149 | [] | cindyangelira | 2 |
skypilot-org/skypilot | data-science | 4,255 | Implement Automatic Bucket Creation and Data Transfer in `with_data` API | To implement the `with_data` API in #4254:
- **Automatic Bucket Creation**
Add logic to create a storage bucket automatically when `with_data` is called. The API should link the upstream task’s output path to this bucket, streamlining data storage setup.
- **Data Transfer and Downstream Access**
Enable automatic transfer of the upstream task’s output to the bucket, and configure the downstream task to retrieve data from it directly, ensuring seamless data flow without manual setup. | open | 2024-11-04T00:43:48Z | 2024-12-19T23:09:06Z | https://github.com/skypilot-org/skypilot/issues/4255 | [] | andylizf | 1 |
ckan/ckan | api | 8,162 | PEDIR DINERO FALSO EN LÍNEA Telegram@Goodeuro BILLETES FALSOS DISPONIBLES EN LÍNEA |
PEDIR DINERO FALSO EN LÍNEA Telegram@Goodeuro
BILLETES FALSOS DISPONIBLES EN LÍNEA
Compre billetes de euro falsos de alta calidad en líneaA. Compra dinero falso en una tienda de confianza y no te ganes la vida pagando
Los billetes falsos se han vuelto extremadamente populares hoy en día debido a sus obvios beneficios. Primero, son más baratos que los originales y no tienes que pagar más para usarlos en línea. En segundo lugar, es una excelente manera de finalmente poder permitirnos artículos costosos en los que quizás no hubiéramos pensado antes.
Contáctenos
Telegrama: @Kingbanknotes
Correo electrónico: viceakame@gmail.
Elementos de seguridad de nuestros billetes en euros falsificados online
Estas características hacen que nuestras facturas sean 100% no detectadas, 100% seguras y seguras para usar en cualquiera de estas áreas:
CASINO, ATM, CAMBISTAS, TIENDAS, GASOLINERAS.
Hologramas y tiras holográficas.
Microcartas
Tinta e hilo metálicos.
Marcas de agua
Detección por infrarrojos
Características ultravioleta
Ver a través de funciones
Diferentes números de serie
#compra dinero falso indetectable,
#comprar licencias de conducir reales y falsas
#Comprar billetes falsos
#dónde comprar billetes de dólares falsos
#dónde comprar billetes falsos
#dónde comprar billetes falsos
#comprar billetes falsos
#dónde comprar billetes falsos
#¿dónde puedo comprar billetes de dólares falsos?
#dónde comprar billetes falsos
#Comprar billetes de euro falsos
#Compre billetes de euros falsos en línea
#Dónde comprar libras británicas falsas
#billetes falsos a la venta en Reino Unido
#billetes de banco falsos a la venta
#Billetes en euros falsificados a la venta
#billetes de banco falsos a la venta
#Dónde comprar billetes de euros falsos
#Dónde a libras esterlinas
#Billetes de banco falsos a la venta en Alemania
#Billetes de banco falsificados a la venta en China
#Comprar billetes de moneda Japón
#Dónde comprar billetes de euros online
#dinero falso a la venta
#dinero de utilería a la venta
#dinero falso a la venta
#dinero en venta
#árbol del dinero en venta
#dinero falso indetectable a la venta
#dinero real a la venta
#dinero confederado a la venta
#dinero falso legítimo a la venta
| closed | 2024-04-07T06:50:03Z | 2024-04-07T12:15:13Z | https://github.com/ckan/ckan/issues/8162 | [] | akimevice | 0 |