| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
matplotlib/mplfinance | matplotlib | 60 | how to add text to figure | Hi Daniel,
thanks for your great work! I want to know if there is a way to add text to a figure. Thank you! | open | 2020-03-22T04:00:59Z | 2023-05-08T09:17:59Z | https://github.com/matplotlib/mplfinance/issues/60 | [
"question"
] | liaoshuren | 23 |
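For anyone who finds this row later: a commonly suggested approach (an assumption on my part, not an official mplfinance answer) is to request the figure back via `mpf.plot(df, returnfig=True)` and then annotate it with ordinary matplotlib calls. The matplotlib half of that idea can be sketched on a plain axes; the coordinates and text here are made up:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

# With mplfinance the reported pattern is:
#   fig, axlist = mpf.plot(df, returnfig=True)
#   axlist[0].text(...)
# Below, only the matplotlib annotation step is demonstrated.
fig, ax = plt.subplots()
ax.plot([1, 2, 3], [10, 12, 11])
ax.text(2, 11.5, "note: breakout here", fontsize=9, color="red")
```

`ax.annotate` works the same way when an arrow pointing at a candle is needed.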
Zeyi-Lin/HivisionIDPhotos | fastapi | 226 | How do I modify the configuration file to add the photo paper sizes I want to print? | How do I modify the configuration file to add the photo paper sizes I want to print? | open | 2025-01-07T10:12:12Z | 2025-01-21T11:40:54Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/226 | [] | cchuycchuy | 0 |
datawhalechina/fantastic-matplotlib | matplotlib | 3 | Issue on page /็ฌฌไธๅ๏ผๅธๅฑๆ ผๅผๅฎๆนๅ/index.html: found a typo | (screenshot attachment)
| open | 2023-07-14T01:31:41Z | 2023-07-14T01:31:41Z | https://github.com/datawhalechina/fantastic-matplotlib/issues/3 | [] | Geek3600 | 0 |
ranaroussi/yfinance | pandas | 1,393 | stock.info.get("preMarketPrice") returning None, even though the pre-market price exists on the website | Hey, I've encountered a problem scraping the pre-market price of stocks since the last API update. Many tickers return None during pre-market, even though I can see the price on the website. Here's an example:
(screenshot of the pre-market price on the website)
Now I'm running this code:
```python
import yfinance as yf

ticker = yf.Ticker("OPBK")
price = ticker.info.get("preMarketPrice")
print(f"pre market price = {price}")
```
Resulting output:
```
pre market price = None
```
Note: it used to work before the last API update.
Any ideas?
| open | 2023-02-03T13:31:26Z | 2025-01-08T13:17:26Z | https://github.com/ranaroussi/yfinance/issues/1393 | [] | xxredxoctoberxx | 6 |
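Until the upstream scrape is fixed, a defensive lookup with a fallback chain keeps downstream code from crashing on the `None`. This is a generic sketch over the `info` dictionary; the fallback key names beyond `preMarketPrice` are assumptions about what may be present:

```python
def best_price(info: dict):
    """Return the first available price from a fallback chain, or None."""
    for key in ("preMarketPrice", "regularMarketPrice", "previousClose"):
        value = info.get(key)
        if value is not None:
            return value
    return None

# Simulated `ticker.info` payload where the pre-market field came back None:
broken = {"preMarketPrice": None, "regularMarketPrice": 10.5}
print(best_price(broken))  # 10.5
```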
huggingface/diffusers | deep-learning | 10,749 | Please add support for GGUF in Lumina2 pipeline | **Is your feature request related to a problem? Please describe.**
GGUF is already available, please add support in pipeline
https://huggingface.co/calcuis/lumina-gguf/tree/main
**Describe the solution you'd like.**
```python
import torch
from diffusers import Lumina2Text2ImgPipeline, Lumina2Transformer2DModel, GGUFQuantizationConfig
bfl_repo = "Alpha-VLLM/Lumina-Image-2.0"
dtype = torch.bfloat16
transformer_path = "https://huggingface.co/calcuis/lumina-gguf/blob/main/lumina2-q8_0.gguf"
transformer = Lumina2Transformer2DModel.from_single_file(
transformer_path,
quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
torch_dtype=dtype,
config=bfl_repo,
subfolder="transformer"
)
pipe = Lumina2Text2ImgPipeline.from_pretrained(
bfl_repo,
transformer=transformer,
torch_dtype=dtype,
)
pipe.enable_model_cpu_offload()
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()
inference_params = {
"prompt": "Portrait of a young woman in a Victorian-era outfit with brass goggles and leather straps. Background shows an industrial revolution cityscape with smoky skies and tall, metal structures",
"height": 1024,
"width": 576,
"guidance_scale": 4.0,
"num_inference_steps": 30,
"generator": torch.Generator(device="cpu").manual_seed(0),
}
image = pipe(**inference_params).images[0]
output_path = "lumina2.png"  # hypothetical path; `output_path` was undefined in the original snippet
image.save(output_path)
```
**Describe alternatives you've considered.**
BnB int4 / int8 works, with GGUF we may achieve further memory reduction.
**Additional context.**
```
(venv) C:\aiOWN\diffuser_webui>python lumina2_gguf.py
Traceback (most recent call last):
  File "C:\aiOWN\diffuser_webui\lumina2_gguf.py", line 6, in <module>
    transformer = Lumina2Transformer2DModel.from_single_file(
AttributeError: type object 'Lumina2Transformer2DModel' has no attribute 'from_single_file'
```
@zhuole1025 | closed | 2025-02-08T16:42:05Z | 2025-02-12T13:24:52Z | https://github.com/huggingface/diffusers/issues/10749 | [] | nitinmukesh | 2 |
ultralytics/ultralytics | deep-learning | 19,696 | May I ask how to mark the key corners of the box? There is semantic ambiguity | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
May I ask how to mark the key corners of the box? There is semantic ambiguity
### Additional
_No response_ | open | 2025-03-14T09:45:40Z | 2025-03-16T04:33:07Z | https://github.com/ultralytics/ultralytics/issues/19696 | [
"question"
] | missTL | 4 |
noirbizarre/flask-restplus | flask | 50 | Support ORM (MongoKit) models? | I've been trying to generate `ApiModel`s from MongoKit models:
Consider the following MongoKit model:
``` python
@api.model() # this would be awesome
class User(Document):
structure = {
'name': unicode,
'email': unicode,
}
use_dot_notation=True
```
Currently I've tried this
``` python
user_model = api.model('User', fields=User.structure)
```
Kind of expected this to work automagically, but looks like I'm missing something.
``` python
File "/home/mike/.virtualenvs/cmdb/lib/python2.7/site-packages/flask_restplus/swagger.py", line 357, in serialize_schema
raise ValueError('Model {0} not registered'.format(model))
ValueError: Model <function wrapper at 0x7fb5a6cb1578> not registered
```
Not sure how the whole model mapping process works, could you please provide some details?
Thanks!
| closed | 2015-06-03T18:19:25Z | 2015-11-04T15:54:08Z | https://github.com/noirbizarre/flask-restplus/issues/50 | [
"wontfix"
] | mikeroll | 1 |
donnemartin/data-science-ipython-notebooks | machine-learning | 96 | solving issue | data-science-ipython-notebooks | open | 2023-03-31T14:29:24Z | 2023-03-31T14:29:24Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/96 | [] | Sandyah06 | 0 |
mirumee/ariadne | api | 151 | Raise ValueError when `field` or `source` decorator was called incorrectly | Currently there's no error when the developer forgets to follow the `field` or `source` decorator with `("name")`, tricking them into thinking that decorated function has been registered while in fact it wasn't.
We could update implementation for those functions to raise ValueError when `name` attr is not `str`. | closed | 2019-05-06T15:58:00Z | 2019-05-07T11:22:40Z | https://github.com/mirumee/ariadne/issues/151 | [
"enhancement",
"roadmap"
] | rafalp | 0 |
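The behaviour being requested can be illustrated with a standalone decorator factory (a sketch of the idea only, not ariadne's implementation): calling the decorator without a string name fails loudly at decoration time instead of silently registering nothing.

```python
def field(name):
    """Decorator factory: @field("Query") registers a resolver under `name`."""
    if not isinstance(name, str):
        raise ValueError(
            'field() must be called with a string name, e.g. @field("Query"); '
            f"got {name!r}"
        )

    def register(fn):
        fn._registered_as = name  # stand-in for real registration
        return fn

    return register

@field("Query")
def resolve_hello(*_):
    return "Hello!"

try:
    @field  # the mistake this issue describes: decorator used without ("name")
    def resolve_oops(*_):
        ...
except ValueError as exc:
    print(exc)
```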
jupyter-book/jupyter-book | jupyter | 1,889 | Error using jupyter-book (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')) | ### Describe the bug
When I try to build my jupyter-book, I get an error. Other people working on the same git-repo don't get the same error. I am working on a M1 Mac and have installed python through homebrew.
This is the error message I get
```console
$ jupyter-book build mybook
sphinx.errors.ExtensionError: Could not import extension myst_nb (exception: dlopen(PATH1, 0x0002): tried: 'PATH1' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), 'PATH2' (no such file), 'PATH1' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')))
```
### Reproduce the bug
1. Use a M1 Mac
2. (Install brew if not installed)
3. Open terminal of choice
4. Write "brew install python"
5. Write "pip install jupyter-book"
6. Write "jupyter build <your-jupyter-book>"
### List your environment
Ironically, I cannot even use jupyter-book --version, as I get the same error.
jupyter-book-0.13.1 | closed | 2022-11-29T16:05:04Z | 2022-12-05T10:49:40Z | https://github.com/jupyter-book/jupyter-book/issues/1889 | [
"bug"
] | jacobInav | 6 |
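A quick diagnostic for this class of error is to check which architecture the interpreter itself runs under; if it reports `x86_64` on an M1 Mac, the Python (and everything pip-installed into it) is likely an Intel build running under Rosetta, and reinstalling a native arm64 Python plus its packages is the usual remedy (that remedy is an inference from the error, not a confirmed fix):

```python
import platform
import sys

# Prints the interpreter's CPU architecture (e.g. arm64 or x86_64) and version.
print(platform.machine())
print(sys.version)
```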
onnx/onnxmltools | scikit-learn | 324 | Provide command line interface | Like https://github.com/onnx/tensorflow-onnx. This is useful when you already have a model on disk. | open | 2019-07-18T08:04:08Z | 2019-08-06T22:44:49Z | https://github.com/onnx/onnxmltools/issues/324 | [
"contribution welcome"
] | letmaik | 0 |
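A minimal argument-parsing skeleton for such a CLI could look like the following; the command name, flags, and framework choices are all hypothetical, since onnxmltools currently has no CLI (which is the point of this request):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        prog="onnxmltools-convert",  # hypothetical command name
        description="Convert a saved ML model on disk to ONNX.",
    )
    parser.add_argument("--input", required=True, help="path to the source model file")
    parser.add_argument("--output", required=True, help="path for the .onnx output")
    parser.add_argument("--framework", default="sklearn",
                        choices=["sklearn", "xgboost", "lightgbm"],
                        help="which converter to dispatch to")
    return parser

args = build_parser().parse_args(["--input", "model.pkl", "--output", "model.onnx"])
print(args.framework)  # sklearn
```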
Asabeneh/30-Days-Of-Python | numpy | 307 | No code of conduct and contributing files in the root repo | Both the `code_of_conduct.md` and `contributing.md` files are a must for a project.
They help contributors understand how the owner/org wants commits to be done and which rules to follow when opening a pull request.
I can work on them, if assigned to me. | open | 2022-10-01T23:51:48Z | 2022-10-02T12:05:35Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/307 | [] | chemben17 | 1 |
recommenders-team/recommenders | data-science | 1,559 | Error when I want to pull docker image | When I try to pull the Docker image, I get this error:
> Unable to find image 'recommenders:cpu' locally
docker: Error response from daemon: pull access denied for recommenders, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
Also, I am logged in with my Docker Hub ID. | closed | 2021-10-27T15:38:12Z | 2021-10-30T10:32:01Z | https://github.com/recommenders-team/recommenders/issues/1559 | [
"help wanted"
] | ahforoughi | 2 |
cleanlab/cleanlab | data-science | 1,203 | Can I use CleanLab for a regression task dataset with numerous (>40) numerical and categorical variables? | Hi,
I would like to use CleanLab to analyze a tabular dataset I have with ~6000 rows and ~40 columns. The columns are mostly numerical, but some of them are low-cardinality categorical. I dummy-encode the categorical variables, which increases the number of input features to between 50 and 60. The task is a regression one, i.e., I have a single target column which is a float. Can I use CleanLab to identify possibly noisy samples? I'm using mostly tree-based models such as xgboost and RandomForests (which work surprisingly well for my issue, probably because there's *a lot* of noise in the data).
| closed | 2024-09-19T09:04:25Z | 2024-11-19T08:05:10Z | https://github.com/cleanlab/cleanlab/issues/1203 | [
"question"
] | AndreaPi | 1 |
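cleanlab's documentation describes regression support that consumes out-of-sample predictions from any model, including xgboost and random forests. The underlying idea, scoring each row by how badly held-out predictions miss its label, can be sketched in plain Python (an illustration of the principle, not cleanlab's actual scoring):

```python
def label_quality_scores(y_true, y_pred):
    """Smaller score = more suspicious label (based on absolute residuals)."""
    residuals = [abs(t - p) for t, p in zip(y_true, y_pred)]
    worst = max(residuals) or 1.0  # avoid division by zero when all residuals are 0
    # Map residuals onto [0, 1]: 1.0 means the held-out prediction matched exactly.
    return [1.0 - r / worst for r in residuals]

y_true = [1.0, 2.0, 3.0, 100.0]   # the last label looks wrong
y_pred = [1.1, 1.9, 3.2, 4.0]     # out-of-fold predictions from any regressor
scores = label_quality_scores(y_true, y_pred)
flagged = min(range(len(scores)), key=scores.__getitem__)
print(flagged)  # 3, the index of the most suspicious row
```

In real use the predictions must come from cross-validation (never from a model fit on the same rows), exactly as cleanlab requires.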
keras-team/keras | tensorflow | 20,603 | Request for multi backend support for timeseries data loading | Hi,
I wonder whether it would be possible to implement the keras.utils.timeseries_dataset_from_array() method for other backends (e.g. JAX)?
It would be nice not to have to add a TF dependency just for this module.
https://github.com/keras-team/keras/blob/v3.7.0/keras/src/utils/timeseries_dataset_utils.py#L7 | closed | 2024-12-06T08:35:40Z | 2025-01-21T07:02:07Z | https://github.com/keras-team/keras/issues/20603 | [
"type:support",
"stat:awaiting response from contributor"
] | linomi | 4 |
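As a backend-agnostic stopgap, the windowing half of `timeseries_dataset_from_array` can be reproduced with NumPy alone, after which the arrays feed any backend. The sketch below covers only basic `sequence_length`/`sequence_stride` behaviour (no shuffling, batching, or end-index options), and the target-alignment convention in the comment is my reading of the Keras utility:

```python
import numpy as np

def make_windows(data, targets, sequence_length, sequence_stride=1):
    """Return (windows, aligned_targets) as plain NumPy arrays."""
    data = np.asarray(data)
    starts = np.arange(0, len(data) - sequence_length + 1, sequence_stride)
    windows = np.stack([data[s : s + sequence_length] for s in starts])
    # Assumed convention of the Keras utility: the target for the window
    # starting at index s is targets[s] (targets pre-offset by the caller).
    aligned = np.asarray(targets)[starts] if targets is not None else None
    return windows, aligned

x = np.arange(10)
wins, tgt = make_windows(x, targets=x, sequence_length=4, sequence_stride=2)
print(wins.shape)  # (4, 4)
```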
ymcui/Chinese-LLaMA-Alpaca | nlp | 428 | Question about dialogue instruction fine-tuning results | I used your training code to train a LoRA on multi-turn dialogue data, but the responses feel far too short overall. What did I do wrong?
>Hi!
Hello, world!
>How can I assist you today?
What would you like to know?
>who is Trump?
I don't know.
>Who is the US president?
Barack Obama
>Where is the US capital?
Washington D.C.
>And the capital of China?
Beijing
I used the train_3.5m dataset, which also included the alpaca_cn dataset. In theory the answers shouldn't be this short. My inference settings:
```python
output_ids = model.generate(
torch.as_tensor(input_ids).cuda(),
do_sample=True,
temperature=0.7,
max_new_tokens=1024,
)
``` | closed | 2023-05-25T03:22:05Z | 2023-06-05T22:02:13Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/428 | [
"stale"
] | lucasjinreal | 2 |
graphdeco-inria/gaussian-splatting | computer-vision | 1,174 | How to input stereo camera parameters | I have two sets of images. One set can be sparsely reconstructed using COLMAP, while the other set fails due to insufficient feature points caused by camera characteristics. I plan to use the COLMAP output of the first set as input for 3D Gaussian Splatting (3DGS).
Additionally, I know the intrinsic parameters of both cameras and their relative extrinsic parameters. My question is: if I modify the COLMAP output of the first camera to match the intrinsic and extrinsic parameters of the second camera, can I use this as 3DGS input for the second camera?
I modified the `cameras.bin` and `images.bin` files, but the results seem to be quite poor. | open | 2025-02-27T04:20:15Z | 2025-02-27T04:20:15Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/1174 | [] | zhuchi1121 | 0 |
InstaPy/InstaPy | automation | 6,594 | Cannot detect post media type , tried many web solutions | Hey, I get the error "**Cannot detect post media type**" when I use a like function. I'm asking this again because I have already followed many steps suggested on the internet, for example [#6346](https://github.com/InstaPy/InstaPy/pull/6346)
That previous solution suggested changing line 905 of like_util.py to:
```python
post_category = element.find_element_by_xpath(
"//a[@href='/p/"
+ post_href.split("/")[-2]
+ "/']/child::div[@class='u7YqG']/child::div/*[name()='svg']"
).get_attribute("aria-label")
```
Another online suggestion is to change `span` to `div` in the same place mentioned above, but neither of these solutions worked for me. Has anyone in 2022 run into the same issue and failed to solve it with those fixes? Or, even better, can anyone suggest a working solution?
| open | 2022-04-25T23:38:16Z | 2022-04-25T23:38:16Z | https://github.com/InstaPy/InstaPy/issues/6594 | [] | adrielkirch | 0 |
pyeve/eve | flask | 1,006 | Relational Lookups/Insertions | Hi,
I am having two objects: `user`, `event`
A user can create events. The `GET` on my `event` endpoint should give back only events created by the respective user.
Can I have something similar to a foreign key in my `event` schema that my endpoint checks against?
Can `filter` provide such functionality?
~ For Frodo
| closed | 2017-03-27T11:44:00Z | 2017-03-27T12:42:47Z | https://github.com/pyeve/eve/issues/1006 | [] | der-daniel | 1 |
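For reference, Eve's schema language has a `data_relation` rule for exactly this kind of foreign-key-style link, and `auth_field` is its documented mechanism for restricting a resource's items to their creator. A sketch of the relevant `DOMAIN` fragment, with resource and field names invented for illustration:

```python
# Hypothetical Eve DOMAIN fragment: each event points at the user who created it.
DOMAIN = {
    "user": {
        "schema": {
            "name": {"type": "string"},
        },
    },
    "event": {
        # With an authentication class configured, auth_field makes Eve filter
        # GETs so users only see documents whose creator field matches them.
        "auth_field": "creator",
        "schema": {
            "title": {"type": "string"},
            "creator": {
                "type": "objectid",
                "data_relation": {"resource": "user", "field": "_id"},
            },
        },
    },
}

print(DOMAIN["event"]["schema"]["creator"]["data_relation"]["resource"])  # user
```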
dask/dask | numpy | 10,934 | [DISCUSSION] What is the timeline for `dask.dataframe` deprecation | Many users and down-stream libraries were a bit surprised to see a loud deprecation warning when importing `dask.dataframe` after the `2024.2.0` release. The dask-expr migration was certainly obvious for anyone watching github. However, the discussion/decision over the specific timeline was largely internal to Coiled.
Could we use this issue to establish a basic timeline for users and down-stream libraries to use as a reference? Note that I am not asking that we try to reach a consensus on these kinds of decisions. It would just be very useful to know what the plan is (so it can be communicated easily to others).
Critical Questions:
- What is the earliest date that the `"dataframe.query-planning"` default will change from `"False"` to `"True"`? For example, will it be `2024.2.1`, or is the plan to do this in `2024.3.0` or later?
- What is the earliest date that `"dataframe.query-planning": "False"` will be disabled entirely? | closed | 2024-02-16T22:27:10Z | 2024-11-04T23:17:57Z | https://github.com/dask/dask/issues/10934 | [
"dataframe",
"discussion",
"deprecation"
] | rjzamora | 9 |
huggingface/diffusers | deep-learning | 11,062 | Error in loading Civit AI Lora: LCMTurboMix_Euler_A_fix | ### Describe the bug
[This CIVITAI Lora](https://civitai.com/models/216190/lora) has over 20k downloads and doesn't work with the SDXL pipeline. It raises a `lora_unet_down_blocks_0_downsamplers_0_conv.alpha` not supported error. I have uploaded the model to Hugging Face. The error appears in the `load_lora_weights()` call.
### Reproduction
```python
from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
pipe.load_lora_weights("RhaegarKhan/LCMTurboMix_Euler_A_fix")
prompt = "<lora:LCMTurboMix2fix:1>,abstract portrait of 1girl,undefined gender,fragmented visual style,red and black color palette,evokes feelings of rebellion,passion,and freedom,blurred boundaries,high resolution,aesthetic,"
image = pipe(prompt).images[0]
```
### Logs
```shell
Loading pipeline components...: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 7/7 [00:02<00:00, 3.35it/s]
LCMTurboMix_Euler_A_fix.safetensors: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 13.0M/13.0M [00:00<00:00, 38.8MB/s]
Traceback (most recent call last):
File "/home/user/runware/Ali/sd-base-api/lora.py", line 4, in <module>
pipe.load_lora_weights("RhaegarKhan/LCMTurboMix_Euler_A_fix")
File "/home/user/runware/shehzad/temp/sd-base-api/diffusers/src/diffusers/loaders/lora_pipeline.py", line 545, in load_lora_weights
state_dict, network_alphas = self.lora_state_dict(
^^^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/3.7/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/user/runware/shehzad/temp/sd-base-api/diffusers/src/diffusers/loaders/lora_pipeline.py", line 695, in lora_state_dict
state_dict = _maybe_map_sgm_blocks_to_diffusers(state_dict, unet_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/runware/shehzad/temp/sd-base-api/diffusers/src/diffusers/loaders/lora_conversion_utils.py", line 59, in _maybe_map_sgm_blocks_to_diffusers
raise ValueError(f"Checkpoint not supported because layer {layer} not supported.")
ValueError: Checkpoint not supported because layer lora_unet_down_blocks_0_downsamplers_0_conv.alpha not supported.
```
### System Info
Diffusers version: Version: 0.33.0.dev0
Python: 3.12.9
### Who can help?
@sayakpaul | open | 2025-03-14T17:22:09Z | 2025-03-19T14:52:43Z | https://github.com/huggingface/diffusers/issues/11062 | [
"bug",
"lora"
] | ali-afridi26 | 1 |
pennersr/django-allauth | django | 3,503 | Some templates missing {% load allauth %} | At least the templates `django-allauth/allauth/templates/socialaccount/login_cancelled.html`and `django-allauth/allauth/templates/account/verified_email_required.html` are missing {% load allauth %} to define the {% element %} tag. | closed | 2023-10-28T18:12:20Z | 2023-10-28T19:42:59Z | https://github.com/pennersr/django-allauth/issues/3503 | [] | msapiro | 0 |
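The fix is a one-line addition near the top of each affected template; for example (a sketch with the template body abridged):

```django
{% load i18n %}
{% load allauth %}  {# defines the {% element %} tag used in the template body #}
```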
oegedijk/explainerdashboard | plotly | 193 | `pd.DataFrame.append` method is deprecated | I found a lot of `FutureWarning`s in the logs, introduced by pandas 1.4
```
/opt/conda-envs/envs/explainer/lib/python3.8/site-packages/explainerdashboard/explainer_methods.py:1098: FutureWarning:
The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.
``` | closed | 2022-03-01T02:51:01Z | 2022-03-03T15:24:46Z | https://github.com/oegedijk/explainerdashboard/issues/193 | [] | achimgaedke | 0 |
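The replacement pandas recommends is `pandas.concat`; a generic before/after (not the exact explainerdashboard call site in `explainer_methods.py`):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2]})
row = pd.DataFrame({"a": [3]})

# Deprecated (pandas >= 1.4 warns; removed in pandas 2.0):
#   df = df.append(row, ignore_index=True)

# Replacement:
df = pd.concat([df, row], ignore_index=True)
print(df["a"].tolist())  # [1, 2, 3]
```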
piskvorky/gensim | data-science | 2,925 | Change parameter used in dtm_coherence() (in DTM wrapper) to avoid persistent warning | Within the DTM wrapper, using `dtm_coherence()` always produces a warning:
> "The parameter `num_words` is deprecated, will be removed in 4.0.0, use `topn` instead."
Although the function works, obviously the intention is to deprecate the parameter at some point.
This can be traced back to `show_topic()`, which is where `num_words` was switched to `topn`. It is a very simple fix to just change the parameter used in the `dtm_coherence()` function accordingly.
PR incoming with fix. | closed | 2020-08-28T10:30:11Z | 2020-09-03T12:03:54Z | https://github.com/piskvorky/gensim/issues/2925 | [] | MeganStodel | 3 |
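For reference, the usual pattern for this kind of rename is to keep accepting the old keyword, warn, and forward it to the new one; a standalone sketch of that pattern (not gensim's actual code):

```python
import warnings

def show_topic(topicid, topn=10, num_words=None):
    """`num_words` is the deprecated alias for `topn`."""
    if num_words is not None:
        warnings.warn(
            "The parameter `num_words` is deprecated, use `topn` instead.",
            DeprecationWarning,
        )
        topn = num_words
    return topn  # stand-in for the real topic-word lookup

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = show_topic(0, num_words=5)

print(result, len(caught))  # 5 1
```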
JoeanAmier/XHS-Downloader | api | 192 | Error when running from source | Running from source produces the following error. What is the cause?
```
PS G:\XHS-Downloader-master\XHS-Downloader-master> python main.py
Traceback (most recent call last):
  File "G:\XHS-Downloader-master\XHS-Downloader-master\main.py", line 6, in <module>
    from source import Settings
  File "G:\XHS-Downloader-master\XHS-Downloader-master\source\__init__.py", line 1, in <module>
    from .CLI import cli
  File "G:\XHS-Downloader-master\XHS-Downloader-master\source\CLI\__init__.py", line 1, in <module>
    from .main import cli
  File "G:\XHS-Downloader-master\XHS-Downloader-master\source\CLI\main.py", line 19, in <module>
    from source.application import XHS
  File "G:\XHS-Downloader-master\XHS-Downloader-master\source\application\__init__.py", line 1, in <module>
    from .app import XHS
  File "G:\XHS-Downloader-master\XHS-Downloader-master\source\application\app.py", line 24, in <module>
    from source.module import DataRecorder
  File "G:\XHS-Downloader-master\XHS-Downloader-master\source\module\__init__.py", line 2, in <module>
    from .manager import Manager
  File "G:\XHS-Downloader-master\XHS-Downloader-master\source\module\manager.py", line 15, in <module>
    from .static import HEADERS
  File "G:\XHS-Downloader-master\XHS-Downloader-master\source\module\static.py", line 7
    PROJECT = f"XHS-Downloader V{VERSION_MAJOR}.{
              ^
SyntaxError: unterminated string literal (detected at line 7)
```
| open | 2024-11-09T08:30:27Z | 2025-03-06T09:28:16Z | https://github.com/JoeanAmier/XHS-Downloader/issues/192 | [] | uaaazcc | 2 |
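The traceback is consistent with running the project's source on Python older than 3.12: `static.py` splits an f-string replacement field across lines, which PEP 701 only made legal in 3.12. That diagnosis is an inference from the traceback, not confirmed by the maintainer; the snippet below reproduces the check:

```python
import sys

# Multi-line expressions inside f-string braces are only valid on 3.12+ (PEP 701).
source = (
    'PROJECT = f"XHS-Downloader V{VERSION_MAJOR}.{\n'
    'VERSION_MINOR}"\n'
)

try:
    compile(source, "<static.py>", "exec")
    verdict = "ok"            # Python >= 3.12 accepts this
except SyntaxError:
    verdict = "syntax-error"  # Python < 3.12 rejects it, as in the report

print(sys.version_info[:2], verdict)
```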
DistrictDataLabs/yellowbrick | matplotlib | 964 | Rank1D graph has an argument `color`. But when we pass a color as a string its simply not working. | **Describe the bug**
I'm using yellowbrick version 0.9.1, and I can't change the colour of the Rank1D bar diagram. It's set to blue by default and does not change.
**To Reproduce**
```python
from yellowbrick.features import Rank1D
# (dataset loading omitted; X and y are assumed to be defined)
# Instantiate the 1D visualizer with the Shapiro ranking algorithm
visualizer = Rank1D(algorithm='shapiro', color='red')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof()
```
**Dataset**
I used the classical diabetes data
**Expected behavior**
I would like to change the colour of the Rank1D graph from blue to red, green, or any other colour.
**Desktop (please complete the following information):**
- OS: [Ubuntu 18.04.1]
- Python Version [3.6]
- Yellowbrick Version [0.9.1]
| closed | 2019-09-05T16:50:31Z | 2019-09-06T05:46:10Z | https://github.com/DistrictDataLabs/yellowbrick/issues/964 | [] | sanu-s | 5 |
miguelgrinberg/Flask-SocketIO | flask | 957 | Dynamic data and possible lost emits treatment | Let's suppose I emit to a client while that client is in a no-network zone or in an elevator, so the emit is never received. How should a socket-based app treat that scenario?
Should I load dynamic data that could change with emits on every connect, instead of in the HTML? I mean, every connect event on the front end would use a socket event to fetch the latest data, so if a reconnection happens I get the newest data even if the client wasn't fully online when the emit happened. Thanks. | closed | 2019-04-23T02:58:48Z | 2019-08-04T16:02:37Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/957 | [
"question"
] | valentin-ballester | 4 |
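One common treatment (a general pattern, not a Flask-SocketIO API) is to version the server state and reconcile on every `connect`: the client remembers the last version it saw and requests anything newer, so an emit missed in an elevator is recovered at reconnect. A plain-Python simulation of that bookkeeping, with the real `socketio.emit` / `@socketio.on('connect')` plumbing left out:

```python
class Server:
    def __init__(self):
        self.version = 0
        self.state = {}

    def update(self, key, value):
        self.version += 1
        self.state[key] = value  # in a real app: also emit to connected clients

    def sync(self, client_version):
        """What the client's on-connect handler would request."""
        if client_version < self.version:
            return self.version, dict(self.state)  # send a fresh snapshot
        return client_version, None                # client is already current

class Client:
    def __init__(self):
        self.version = 0
        self.state = {}

    def on_connect(self, server):
        self.version, snapshot = server.sync(self.version)
        if snapshot is not None:
            self.state = snapshot

server, client = Server(), Client()
client.on_connect(server)
server.update("price", 42)   # emitted while the client was offline
client.on_connect(server)    # reconnect: the missed update is recovered
print(client.state)          # {'price': 42}
```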
plotly/dash | dash | 3,001 | race condition when updating dcc.Store | Hello!
```
dash 2.18.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: macOS Sonoma
- Browser: Tested in Firefox and Chrome
- FF Version: 129.0.2
- Chrome Version: 128.0.6613.121
**Describe the bug**
When 2 callbacks perform partial updates to a dcc.Store at the same time (or nearly the same time), only 1 of those updates is reflected in the store. I tested and found the same behaviour in dash versions 2.17.0 and 2.18.0, and this happens for all storage types (memory, session, and local).
A minimal example is below. Most of this example is setting up preconditions to cause the race condition, but it roughly matches our real-world use-case and can reliably exhibit the behaviour.
The example app works like this:
We have multiple components on the page that need to load, and each has 2 elements to manage: the content Div and the Loading indicator. We also have a dispatcher (Interval component + `loading_dispatcher` callback) that kicks off the loading of these components in chunks. For each component, the dispatcher first turns on the Loading indicator, which then triggers the content Div to load (`load_component` function), which then triggers the Loading indicator to stop (`stop_spinner` function). We also have a cleanup function (`mark_loaded`) that waits for the components to finish loading, then pushes data to the store about which components have loaded. Finally, the `set_status` function checks the store, and if all of the components have loaded it updates the status Div at the bottom to indicate everything is fully loaded.
**Minimal Example**
```python
from dash import Dash, html, callback, Output, Input, State, no_update, dcc, MATCH, ALL, Patch, callback_context, clientside_callback
from dash.exceptions import PreventUpdate
import time
import random

app = Dash(__name__)

NUM_COMPONENTS = 21
STORAGE_TYPE = 'local'

slow_components = [
    html.Div([
        html.Div(children='loading...', id={'type': 'slow-component', 'index': i}),
        dcc.Loading(id={'type': 'slow-component-animation', 'index': i}, display='hide')
    ])
    for i in range(NUM_COMPONENTS)]

app.layout = html.Div(
    slow_components +
    [
        html.Hr(),
        html.Div(id='status', children='not all loaded'),
        dcc.Interval(id='timer', interval=2000, max_intervals=10),
        dcc.Store(id='loading-data', data={}, storage_type=STORAGE_TYPE, clear_data=True),
    ]
)

@callback(Output({'type': 'slow-component-animation', 'index': ALL}, 'display'),
          Input('timer', 'n_intervals'), prevent_initial_call=True)
def loading_dispatcher(n):
    # Kicks off loading for 3 components at a time
    if n is None or n > NUM_COMPONENTS / 3:
        raise PreventUpdate()
    output_list = [no_update] * NUM_COMPONENTS
    current_chunk_start = list(range(0, NUM_COMPONENTS, 3))[n - 1]
    output_list[current_chunk_start:current_chunk_start + 3] = ['show'] * 3
    return output_list

@callback(
    Output({'type': 'slow-component', 'index': MATCH}, 'children'),
    Input({'type': 'slow-component-animation', 'index': MATCH}, 'display'),
    State({'type': 'slow-component-animation', 'index': MATCH}, 'id'),
    State({'type': 'slow-component', 'index': MATCH}, 'children'),
    prevent_initial_call=True
)
def load_component(display, id_, current_state):
    # "Loads" data for 1 second, updates loading text
    if current_state == 'loaded':
        raise PreventUpdate()
    print(f"loading {id_['index']}, {current_state}")
    time.sleep(1)
    print(f"loaded {id_['index']}")
    return 'loaded'

@callback(
    Output({'type': 'slow-component-animation', 'index': MATCH}, 'display', allow_duplicate=True),
    Input({'type': 'slow-component', 'index': MATCH}, 'children'),
    prevent_initial_call=True
)
def stop_spinner(loading_text):
    # After loading, removes spinner
    if loading_text == 'loaded':
        return 'hide'
    return no_update

@callback(
    Output('loading-data', 'data', allow_duplicate=True),
    Input({'type': 'slow-component-animation', 'index': ALL}, 'display'),
    prevent_initial_call=True
)
def mark_loaded(components):
    # When a component is fully loaded, mark it as such in the data store
    print('checking if components are loaded')
    update_dict = {}
    for component in callback_context.triggered:
        if component['value'] == 'hide':
            component_id = callback_context.triggered_prop_ids[component['prop_id']]['index']
            print(f'component {component_id} loaded')
            update_dict[component_id] = 'loaded'
    patch = Patch()
    patch.update(update_dict)
    print(f'adding to data store: {update_dict}')
    return patch  # <- This is where the race condition happens. If 2 callbacks patch the store at the same time, only 1 of those patches is applied

@callback(
    Output('status', 'children'),
    Output('loading-data', 'data', allow_duplicate=True),
    Input('loading-data', 'data'),
    prevent_initial_call=True
)
def set_status(loading_data):
    # Once all components are loaded, update the status bar to show we are fully loaded
    print(f'{loading_data=}')
    if loading_data is None:
        return no_update, no_update
    if len(loading_data) == NUM_COMPONENTS:
        print('FULLY LOADED')
        return 'FULLY LOADED', {}
    return no_update, no_update

if __name__ == '__main__':
    app.run(debug=True)
```
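The lost update described in the comment above can be reproduced without Dash at all: if two patches are computed against the same snapshot of the store and written back last-writer-wins, one vanishes; applying each patch to the current value, serially, keeps both. A plain-dict illustration of the race (an illustration only, not Dash internals):

```python
store = {}
snapshot_a = dict(store)  # callback A reads the store
snapshot_b = dict(store)  # callback B reads the same, now-stale snapshot

patch_a = {0: "loaded"}
patch_b = {1: "loaded"}

# Last-writer-wins: each callback writes back its own snapshot plus its patch.
lost = {**snapshot_a, **patch_a}
lost = {**snapshot_b, **patch_b}   # clobbers callback A's write

# Merge semantics: each patch is applied to the *current* value, serially.
merged = {}
for patch in (patch_a, patch_b):
    merged = {**merged, **patch}

print(lost)    # {1: 'loaded'}  -- update 0 was lost
print(merged)  # {0: 'loaded', 1: 'loaded'}
```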
**Expected behavior**
The app should load each component, and once they are finished the bottom text would update to say "FULLY LOADED".
The logs would also show that after each item is added to the store, the next time "loading_data=" is printed it would contain all of the component indices that have been added to the store. At the end of the logs we would see every number from 0-20 as a key in the `loading_data` dictionary.
Example (abbreviated):
```
loading 0, loading...
loading 1, loading...
loading 2, loading...
loaded 2
loaded 1
loaded 0
checking if components are loaded
component 2 loaded
component 1 loaded
adding to data store: {2: 'loaded', 1: 'loaded'}
checking if components are loaded
component 0 loaded
adding to data store: {0: 'loaded'}
loading_data={'0': 'loaded', '1': 'loaded', '2': 'loaded'}
loading 5, loading...
loading 4, loading...
checking if components are loaded
adding to data store: {}
loading 3, loading...
loading_data={'0': 'loaded', '1': 'loaded', '2': 'loaded'}
loaded 5
loaded 4
loaded 3
checking if components are loaded
component 5 loaded
adding to data store: {5: 'loaded'}
checking if components are loaded
component 4 loaded
adding to data store: {4: 'loaded'}
checking if components are loaded
component 3 loaded
adding to data store: {3: 'loaded'}
loading_data={'0': 'loaded', '1': 'loaded', '2': 'loaded', '3': 'loaded', '4': 'loaded', '5': 'loaded'}
...
loading_data={'0': 'loaded', '1': 'loaded', '2': 'loaded', '3': 'loaded', '4': 'loaded', '5': 'loaded', ... '20': 'loaded'}
FULLY LOADED
```
**Exhibited Behaviour**
After all components are loaded, the bottom text does not update to say "FULLY LOADED" and we see that the "loading_data" dictionary has not received all of the updates that were sent to it, as it does not include every index from 0 to 20.
```
loading_data=None
loading 2, loading...
loading 1, loading...
checking if components are loaded
adding to data store: {}
loading 0, loading...
loading_data={}
loaded 1
loaded 0
loaded 2
checking if components are loaded
component 0 loaded
adding to data store: {0: 'loaded'}
checking if components are loaded
component 1 loaded
adding to data store: {1: 'loaded'}
checking if components are loaded
component 2 loaded
adding to data store: {2: 'loaded'}
loading_data={'2': 'loaded'}
loading 5, loading...
loading 4, loading...
checking if components are loaded
adding to data store: {}
loading 3, loading...
loading_data={'2': 'loaded'}
loaded 5
loaded 4
loaded 3
checking if components are loaded
component 5 loaded
adding to data store: {5: 'loaded'}
checking if components are loaded
component 4 loaded
component 3 loaded
adding to data store: {4: 'loaded', 3: 'loaded'}
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded'}
loading 8, loading...
loading 7, loading...
checking if components are loaded
adding to data store: {}
loading 6, loading...
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded'}
loaded 8
loaded 6
loaded 7
checking if components are loaded
component 8 loaded
adding to data store: {8: 'loaded'}
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '8': 'loaded'}
checking if components are loaded
component 7 loaded
component 6 loaded
adding to data store: {7: 'loaded', 6: 'loaded'}
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded'}
loading 11, loading...
loading 10, loading...
checking if components are loaded
adding to data store: {}
loading 9, loading...
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded'}
loaded 11
loaded 9
loaded 10
checking if components are loaded
component 11 loaded
adding to data store: {11: 'loaded'}
checking if components are loaded
component 9 loaded
component 10 loaded
adding to data store: {9: 'loaded', 10: 'loaded'}
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded', '9': 'loaded', '10': 'loaded'}
loading 14, loading...
loading 13, loading...
checking if components are loaded
adding to data store: {}
loading 12, loading...
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded', '9': 'loaded', '10': 'loaded'}
loaded 14
loaded 12
loaded 13
checking if components are loaded
component 14 loaded
adding to data store: {14: 'loaded'}
checking if components are loaded
component 13 loaded
component 12 loaded
adding to data store: {13: 'loaded', 12: 'loaded'}
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded', '9': 'loaded', '10': 'loaded', '12': 'loaded', '13': 'loaded'}
loading 17, loading...
loading 16, loading...
checking if components are loaded
adding to data store: {}
loading 15, loading...
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded', '9': 'loaded', '10': 'loaded', '12': 'loaded', '13': 'loaded'}
loaded 17
loaded 16
loaded 15
checking if components are loaded
component 17 loaded
adding to data store: {17: 'loaded'}
checking if components are loaded
component 16 loaded
component 15 loaded
adding to data store: {16: 'loaded', 15: 'loaded'}
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded', '9': 'loaded', '10': 'loaded', '12': 'loaded', '13': 'loaded', '15': 'loaded', '16': 'loaded'}
loading 20, loading...
loading 19, loading...
checking if components are loaded
adding to data store: {}
loading 18, loading...
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded', '9': 'loaded', '10': 'loaded', '12': 'loaded', '13': 'loaded', '15': 'loaded', '16': 'loaded'}
loaded 20
loaded 19
loaded 18
checking if components are loaded
component 18 loaded
adding to data store: {18: 'loaded'}
checking if components are loaded
component 19 loaded
component 20 loaded
adding to data store: {19: 'loaded', 20: 'loaded'}
loading_data={'2': 'loaded', '3': 'loaded', '4': 'loaded', '6': 'loaded', '7': 'loaded', '8': 'loaded', '9': 'loaded', '10': 'loaded', '12': 'loaded', '13': 'loaded', '15': 'loaded', '16': 'loaded', '19': 'loaded', '20': 'loaded'}
```
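The gaps visible in the log above (for example, `{5: 'loaded'}` is written to the store, yet `'5'` never appears in any later `loading_data` snapshot) are the signature of a lost update: concurrent callbacks each read the store, modify their own copy, and write the whole dict back, so the last writer wins. A minimal illustration in plain Python (not Dash code):

```python
# Two "callbacks" race on a shared store: each reads a snapshot,
# adds its own key, and writes the whole snapshot back.
store = {}

snapshot_a = dict(store)  # callback A reads {}
snapshot_b = dict(store)  # callback B reads {} before A writes

snapshot_a["5"] = "loaded"
store = snapshot_a        # store is now {"5": "loaded"}

snapshot_b["6"] = "loaded"
store = snapshot_b        # B's write clobbers A's: {"6": "loaded"}

print(store)  # {'6': 'loaded'} -- the "5" update is lost
```

A merge-style update (for example, Dash's `Patch` partial-property updates) avoids this whole-dict read-modify-write pattern.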
| open | 2024-09-12T16:54:42Z | 2024-09-12T18:13:47Z | https://github.com/plotly/dash/issues/3001 | [
"bug",
"P3"
] | logankopas | 0 |
Netflix/metaflow | data-science | 1,771 | `python hello.py batch step --help` is broken | leads to `TypeError: sequence item 0: expected str instance, NoneType found` | closed | 2024-03-25T14:20:52Z | 2024-06-18T14:05:02Z | https://github.com/Netflix/metaflow/issues/1771 | [] | madhur-ob | 0 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,566 | [Bug]: Linux: SDXL-based models fail to load, PyTorch error | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [X] The issue has been reported before but has not been fixed yet
### What happened?
Whenever I select an SDXL model from the dropdown list at the top of the page, including the SDXL base model, it fails to load. The terminal output shows the following error: `AttributeError: module 'torch' has no attribute 'float8_e4m3fn'`.
### Steps to reproduce the problem
1. Launch the WebUI.
2. Click the "down" arrow below "Stable Diffusion checkpoint" at the top left of the page.
3. Select an SDXL model from the dropdown list.
4. After a few seconds of processing, the error is printed to the terminal output and the selection returns to the previously selected model.
### What should have happened?
The model should load.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
[sysinfo-2024-04-18-15-34.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/15027195/sysinfo-2024-04-18-15-34.json)
### Console logs
```Shell
################################################################
Launching launch.py...
################################################################
Python 3.11.8 (main, Feb 12 2024, 14:50:05) [GCC 13.2.1 20230801]
Version: v1.9.0
Commit hash: adadb4e3c7382bf3e4f7519126cd6c70f4f8557b
Launching Web UI with arguments: --skip-torch-cuda-test --upcast-sampling --opt-sub-quad-attention --medvram-sdxl
2024-04-18 12:28:22.419346: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
==============================================================================
You are running torch 2.0.1+rocm5.4.2.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.
Use --skip-version-check commandline argument to disable this check.
==============================================================================
*** "Disable all extensions" option was set, will only load built-in extensions ***
Loading weights [fbc31a67aa] from /opt/stable-diffusion-web-ui/models/Stable-diffusion/instruct-pix2pix-00-22000.safetensors
Running on local URL: http://127.0.0.1:7860
Creating model from config: /opt/stable-diffusion-web-ui/configs/instruct-pix2pix.yaml
LatentDiffusion: Running in eps-prediction mode
Applying attention optimization: sub-quadratic... done.
Model loaded in 2.1s (load weights from disk: 0.5s, create model: 0.2s, apply weights to model: 1.1s, calculate empty prompt: 0.2s).
To create a public link, set `share=True` in `launch()`.
Startup time: 17.6s (import torch: 2.6s, import gradio: 1.1s, setup paths: 10.3s, other imports: 0.4s, load scripts: 0.4s, create ui: 0.4s, gradio launch: 2.2s).
Loading model sd_xl_base_1.0.safetensors [31e35c80fc] (2 out of 2)
Loading weights [31e35c80fc] from /opt/stable-diffusion-web-ui/models/Stable-diffusion/sd_xl_base_1.0.safetensors
Creating model from config: /opt/stable-diffusion-web-ui/repositories/generative-models/configs/inference/sd_xl_base.yaml
changing setting sd_model_checkpoint to sd_xl_base_1.0.safetensors [31e35c80fc]: AttributeError
Traceback (most recent call last):
File "/opt/stable-diffusion-web-ui/modules/options.py", line 165, in set
option.onchange()
File "/opt/stable-diffusion-web-ui/modules/call_queue.py", line 13, in f
res = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/stable-diffusion-web-ui/modules/initialize_util.py", line 181, in <lambda>
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: sd_models.reload_model_weights()), call=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/stable-diffusion-web-ui/modules/sd_models.py", line 860, in reload_model_weights
sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/stable-diffusion-web-ui/modules/sd_models.py", line 826, in reuse_model_from_already_loaded
load_model(checkpoint_info)
File "/opt/stable-diffusion-web-ui/modules/sd_models.py", line 748, in load_model
load_model_weights(sd_model, checkpoint_info, state_dict, timer)
File "/opt/stable-diffusion-web-ui/modules/sd_models.py", line 448, in load_model_weights
module.to(torch.float8_e4m3fn)
^^^^^^^^^^^^^^^^^^^
AttributeError: module 'torch' has no attribute 'float8_e4m3fn'
```
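The traceback bottoms out at `module.to(torch.float8_e4m3fn)`, and the installed torch 2.0.1+rocm5.4.2 (see the startup warning above) simply does not define that dtype; the webui is tested against 2.1.2, where it exists. A sketch of the kind of guard that would avoid the crash (illustrative, not the webui's actual code; the 2.1 cutoff is inferred from the versions in this report):

```python
def parse_version(v: str) -> tuple:
    """Turn '2.0.1+rocm5.4.2' into (2, 0, 1), ignoring the local build tag."""
    return tuple(int(p) for p in v.split("+", 1)[0].split("."))

def fp8_available(torch_version: str) -> bool:
    """float8_e4m3fn is absent in the 2.0 line but present by 2.1.2."""
    return parse_version(torch_version) >= (2, 1)

print(fp8_available("2.0.1+rocm5.4.2"))  # False -> would hit the AttributeError
print(fp8_available("2.1.2"))            # True
```

At runtime, `hasattr(torch, "float8_e4m3fn")` is a more robust check than version parsing; either way, torch 2.0.1 takes the failing path unless the fp8 cast is skipped.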
### Additional information
SD1.5 models work. Tested on fully up-to-date EndeavourOS. | open | 2024-04-18T15:39:28Z | 2024-05-01T23:53:37Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15566 | [
"asking-for-help-with-local-system-issues"
] | prmbittencourt | 6 |
huggingface/transformers | tensorflow | 36,124 | Speaker Verification: All Speakers Getting Perfect 1.000 Similarity Scores | ### System Info
### Bug Report
<!-- Important information -->
Model name (e.g. bert-base-cased): pyannote/embedding
Language (if applicable): English
Framework (PyTorch, TensorFlow, etc...): PyTorch
### Description
Using pyannote/embedding for speaker verification, getting perfect similarity scores (1.000) for all speakers, even between obviously different voices in an audiobook.
### Code To Reproduce The Issue
```python
import torch
import torchaudio
from pyannote.audio import Model
import torch.nn.functional as F

# Setup
device = torch.device("cuda")
embedding_model = Model.from_pretrained("pyannote/embedding",
                                        use_auth_token='xxx').to(device)

# Load and process reference audio
reference_waveform, sample_rate = torchaudio.load("reference.flac")
reference_waveform = reference_waveform.mean(dim=0, keepdim=True).to(device)
reference_features = embedding_model(reference_waveform.unsqueeze(0))
reference_features = F.normalize(reference_features, p=2, dim=1)

# Load test audio segment
test_waveform, _ = torchaudio.load("test.flac")
test_waveform = test_waveform.mean(dim=0, keepdim=True).to(device)
test_embedding = embedding_model(test_waveform.unsqueeze(0))
test_embedding = F.normalize(test_embedding, p=2, dim=1)

# Calculate similarity
similarity = F.cosine_similarity(reference_features, test_embedding, dim=1).mean()
print(f"Similarity: {similarity.item():.6f}")
```
### Expected Results
Different speakers should have varying similarity scores below 1.000
### Actual Results
All speakers get perfect 1.000 similarity scores:
- Speaker A vs Reference: 1.000000
- Speaker B vs Reference: 0.999998
- Speaker C vs Reference: 1.000000
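For scale: cosine similarity of L2-normalized embeddings reaches 1.0 only when two vectors point in exactly the same direction, so uniform 1.000 scores mean every input is mapping to (near-)identical embeddings; the metric itself is behaving normally. A plain-Python sanity check:

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

same = cosine([0.6, 0.8], [0.6, 0.8])       # identical direction
different = cosine([0.6, 0.8], [0.8, 0.6])  # distinct unit vectors

print(f"{same:.6f}")       # 1.000000
print(f"{different:.6f}")  # 0.960000
```

If distinct waveforms still embed to the same vector, the problem is upstream of the similarity computation, e.g. in how the model is being invoked on raw waveforms.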
### Environment
- pyannote.audio: 3.1.1
- torch: 2.5.1+cu124
- Platform: Google Colab (Ubuntu Linux)
- CUDA: Yes
- GPU: Tesla T4
- Python: 3.11
- torchaudio: 2.5.1+cu124
### Additional Context
- Using professional audiobook with distinct voices
- Reference is 10-minute high-quality audio
- Testing with 4-hour audiobook
- Consistent 1.000 similarity across all different speakers
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Install dependencies:
pip install pyannote.audio==3.1.1 torch==2.5.1+cu124 torchaudio==2.5.1+cu124
2. Use reference audio (10-minute FLAC file) and test audio (different speaker, FLAC file)
3. Run the provided code:
- Load model and audio files
- Extract embeddings
- Calculate similarity
4. Observe that similarity scores are always 1.000 regardless of speaker differences
Full code provided in the description above. This can be reproduced with any two different speakers' audio files.
### Expected behavior
The similarity scores should:
- Be less than 1.000 for different speakers
- Show variation between different voices
- Have lower scores for more dissimilar voices
- Only approach 1.000 for the same speaker
Instead, we're getting perfect 1.000 similarity scores for all speakers, even between obviously different voices (male/female) from a professional audiobook. | closed | 2025-02-10T20:58:01Z | 2025-03-21T08:04:37Z | https://github.com/huggingface/transformers/issues/36124 | [
"bug"
] | misterpathologist | 2 |
skypilot-org/skypilot | data-science | 4,657 | [Catalog] AWS H200 with 0 price | There are two VMs in the AWS catalog with H200 but without a price
https://github.com/skypilot-org/skypilot-catalog/blob/master/catalogs/v6/aws/vms.csv
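The two empty fields in each row below sit between the quoted `GpuInfo` blob and the region, which appear to be the price columns; a quick stdlib check (row copied from the excerpt, column meaning assumed) shows they are empty strings rather than zeros:

```python
import csv
import io

# One of the catalog rows from the excerpt below (GpuInfo shortened).
row_text = ('p5e.48xlarge,H200,8.0,192.0,2048.0,'
            '"{\'Gpus\': [{\'Name\': \'H200\'}]}",,,eu-north-1,eun1-az1')

(row,) = csv.reader(io.StringIO(row_text))
price, spot_price = row[6], row[7]  # assumed: on-demand and spot price columns
print(repr(price), repr(spot_price))  # '' ''
```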
```
p5e.48xlarge,H200,8.0,192.0,2048.0,"{'Gpus': [{'Name': 'H200', 'Manufacturer': 'NVIDIA', 'Count': 8, 'MemoryInfo': {'SizeInMiB': 144384}}], 'TotalGpuMemoryInMiB': 1155072}",,,eu-north-1,eun1-az1
p5e.48xlarge,H200,8.0,192.0,2048.0,"{'Gpus': [{'Name': 'H200', 'Manufacturer': 'NVIDIA', 'Count': 8, 'MemoryInfo': {'SizeInMiB': 144384}}], 'TotalGpuMemoryInMiB': 1155072}",,,us-east-2,use2-az3
``` | open | 2025-02-06T09:20:41Z | 2025-02-10T23:50:02Z | https://github.com/skypilot-org/skypilot/issues/4657 | [] | SalikovAlex | 4 |
kizniche/Mycodo | automation | 669 | Mycodo DHT22 humidity readings | ## Mycodo Issue Report:
- Specific Mycodo Version:7.5.10
#### Problem Description
Please list:
I'm using a DHT22 but I'm getting weird humidity readings and errors.
It worked correctly in previous versions, before 7.x.
The humidity constantly reads 0% or 1%; sometimes it gives a higher reading.
Since I still had this issue, I upgraded from 7.5.3 to 7.5.10 today, but no luck yet.
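For what it's worth, DHT22 sensors are notorious for transient bad reads over their single-wire protocol, which would explain both the spurious 0%/1% values and the intermittent read errors below. A generic retry-and-sanity-filter wrapper of the kind often put around such sensors (illustrative only, not Mycodo's internal code; the plausibility bounds are application-chosen):

```python
def read_with_retry(read_fn, retries=3, lo=2.0, hi=100.0):
    """Call read_fn() until it returns a plausible humidity (%RH) or retries run out.

    read_fn should return a float, or None / raise on a failed read.
    lo/hi are application-chosen sanity bounds (0% indoors is implausible).
    """
    last = None
    for _ in range(retries):
        try:
            value = read_fn()
        except Exception:
            value = None
        if value is not None and lo <= value <= hi:
            return value
        last = value
    raise RuntimeError(f"no plausible reading after {retries} tries (last={last!r})")

# Stub sensor: two bogus reads, then a good one.
readings = iter([0.0, None, 48.7])
print(read_with_retry(lambda: next(readings)))  # 48.7
```

With a real sensor, `read_fn` would be the actual DHT22 read call.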
### Errors
2019-07-03 19:09:49,403 - ERROR - mycodo.controller_input_b90028c8 - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
2019-07-03 19:09:51,689 - ERROR - mycodo.controller_input_5e076805 - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
2019-07-03 19:10:49,437 - ERROR - mycodo.controller_input_b90028c8 - StopIteration raised. Possibly could not read input. Ensure it's connected properly and detected.
### Steps to Reproduce the issue:
Watch the Dashboard, Live or logs.
### Additional Notes
Is there anything that should be added to make it easier
to address this issue? | closed | 2019-07-03T17:16:04Z | 2019-07-07T03:26:36Z | https://github.com/kizniche/Mycodo/issues/669 | [] | ralphknoops | 5 |
littlecodersh/ItChat | api | 566 | This line has a bug | https://github.com/littlecodersh/ItChat/blob/fc81ba6e53a8c5f7ddeb7edfc8e6e5e7dedde924/itchat/components/messages.py#L242
It should use chatroomUserName here instead.
"bug"
] | raywill | 1 |
serengil/deepface | machine-learning | 650 | Unable to use DBSCAN clustering | Using the face encoding data from `DeepFace.representation`, I'm attempting to cluster faces with `DBSCAN`, but I am unable to determine why it is not clustering properly. | closed | 2023-01-30T12:30:31Z | 2023-02-21T14:57:44Z | https://github.com/serengil/deepface/issues/650 | [
"documentation"
] | alenpaulvarghese | 8 |
exaloop/codon | numpy | 83 | pip installer on linux | The [Python decorator part](https://docs.exaloop.io/codon/interoperability/decorator) mentions the codon library can be installed via pip install.
The example only shows a workaround on macOS via
`python3 -m pip install codon-0.13.0-cp39-cp39-macosx_12_0_arm64.whl`
It doesn't seem to work on Linux yet.
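Notably, the traceback below is not Codon's own build failing: per the temp path, pip is building an sdist named `cogent`, whose `setup.py` still uses the Python-2 `print` statement, and Python 3 rejects that at compile time:

```python
# Python 3 refuses to even compile a Python-2 print statement, which is
# exactly the SyntaxError pip surfaces while running that setup.py.
try:
    compile('print "Failed to build html due to ImportErrors for sphinx"',
            "<setup.py>", "exec")
except SyntaxError as exc:
    print(type(exc).__name__)  # SyntaxError
```

Which suggests the PyPI name resolves to an unrelated, Python-2-era package rather than Codon itself.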
A direct `python3 -m pip install codon` throws the following error
```
ERROR: Command errored out with exit status 1:
command: /opt/anaconda3/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-xek5nan9/cogent/setup.py'"'"'; __file__='"'"'/tmp/pip-install-xek5nan9/cogent/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-27df80_7
cwd: /tmp/pip-install-xek5nan9/cogent/
Complete output (6 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-xek5nan9/cogent/setup.py", line 61
print "Failed to build html due to ImportErrors for sphinx"
^
SyntaxError: Missing parentheses in call to 'print'. Did you mean print("Failed to build html due to ImportErrors for sphinx")?
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
``` | closed | 2022-12-10T13:10:00Z | 2022-12-14T10:08:51Z | https://github.com/exaloop/codon/issues/83 | [] | vavrines | 4 |
mckinsey/vizro | pydantic | 1,058 | Error "No value for argument 'points_data' in function call" in Custom Action | ### Question
Description:
Hello, Vizro team,
I am testing a Custom Action example from the documentation, and while the code runs correctly, Visual Studio Code displays the following error: No value for argument 'points_data' in function call
It seems to be related to static type analysis or the function definition. However, when executing the code, no runtime errors occur.
Could you clarify if this is a known issue or if there is a recommended configuration to avoid this message in VSC?
I am using:
Vizro (version: 0.1.34 )
Python (version: 3.11.19)
Thank you in advance for your help.
Best regards,
Francisco

### Code/Examples
```py
import vizro.models as vm
import vizro.plotly.express as px
from vizro import Vizro
from vizro.models.types import capture
df = px.data.iris()
@capture("action")
def my_custom_action(show_species: bool, points_data: dict):
"""Custom action."""
clicked_point = points_data["points"][0]
x, y = clicked_point["x"], clicked_point["y"]
text = f"Clicked point has sepal length {x}, petal width {y}"
if show_species:
species = clicked_point["customdata"][0]
text += f" and species {species}"
return text
page = vm.Page(
title="Action with clickData as input",
components=[
vm.Graph(
id="scatter_chart",
figure=px.scatter(df, x="sepal_length", y="petal_width",
color="species", custom_data=["species"]),
actions=[
vm.Action(
function=my_custom_action(show_species=True),
inputs=["scatter_chart.clickData"],
outputs=["my_card.children"],
),
],
),
vm.Card(id="my_card", text="Click on a point on the above graph."),
],
)
dashboard = vm.Dashboard(pages=[page])
Vizro().build(dashboard).run()
```
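The warning comes from the linter reading `my_custom_action`'s original signature, while `@capture("action")` turns the call into a deferred one whose remaining arguments are supplied later from `inputs`. A stripped-down illustration of the pattern (a toy stand-in, not Vizro's implementation):

```python
import functools

def capture(_kind):
    """Toy stand-in for vizro's @capture: calling the decorated function
    with only some arguments returns a deferred partial instead of running it."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(**bound):
            return functools.partial(func, **bound)
        return wrapper
    return decorator

@capture("action")
def my_custom_action(show_species: bool, points_data: dict):
    return f"{show_species} {points_data['points'][0]['x']}"

# Static analyzers see my_custom_action(show_species=True) as missing
# points_data, but at runtime the framework supplies it later:
partial_action = my_custom_action(show_species=True)
print(partial_action(points_data={"points": [{"x": 5.1}]}))  # True 5.1
```

Silencing it per call (e.g. `# pylint: disable=no-value-for-parameter`) is the usual workaround for decorators that change a function's effective signature.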
### Which package?
vizro
### Code of Conduct
- [x] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2025-03-10T03:21:36Z | 2025-03-10T15:08:21Z | https://github.com/mckinsey/vizro/issues/1058 | [
"Needs triage :mag:",
"General Question :question:"
] | fpeucelle | 2 |
jmcnamara/XlsxWriter | pandas | 860 | question: I am looking to get the current format of a cell after I open the existing Excel work book. I need to copy that format to another worksheet. | ### Question
I am looking to get the current format of a cell after I open an existing Excel workbook, so that I can copy that format to another worksheet.
I have two existing workbooks.
I need to copy the cell format from one workbook and apply that format to a range of cells in the second workbook.
I saw the `add_format()` function, but could not find any function to retrieve the format from the workbook I opened.
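For context on why that function is missing: XlsxWriter is write-only by design, so an existing cell's format has to come from a reader library such as openpyxl, and can then be re-applied. A rough sketch (assumes openpyxl is installed; the workbook here is built in memory just to keep the example self-contained — with a real file you would use `openpyxl.load_workbook("src.xlsx")`):

```python
from copy import copy
from openpyxl import Workbook
from openpyxl.styles import Font

# Build a small workbook in memory so the example is self-contained.
wb = Workbook()
src_ws = wb.active
src_ws["A1"] = "header"
src_ws["A1"].font = Font(bold=True, size=14)
dst_ws = wb.create_sheet("Copy")

# Style attributes come back as read-only proxies; copy() before re-assigning.
for cell in ("B2", "B3", "B4"):
    dst_ws[cell].font = copy(src_ws["A1"].font)

print(dst_ws["B2"].font.bold)  # True
```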
Please suggest | closed | 2022-02-21T06:51:38Z | 2022-02-21T08:22:41Z | https://github.com/jmcnamara/XlsxWriter/issues/860 | [
"question"
] | vivek-k-aggarwal | 1 |
mitmproxy/mitmproxy | python | 7,510 | [Not a bug] Thank you & congratulations for mitmproxy | Hello,
I'm the maintainer of [websockets](https://github.com/python-websockets/websockets). Over the week-end, I added support for connecting through a SOCKS proxy. I expected that writing tests for this feature would be hellish because it would require running a SOCKS proxy with various configurations.
Then I came across mitmproxy, which I could configure and run within my Python process with [just a few lines of code](https://github.com/python-websockets/websockets/blob/4a89e5616ffed1a8662fe195ad14827bb93a9bed/tests/proxy.py#L36-L64), even though it was never designed for that! I created a trivial addon to record connections to the proxy and I was ready to write tests.
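That "trivial addon" pattern is just a plain class whose method names match mitmproxy's event hooks; mitmproxy calls them as traffic flows through. A sketch of a recording addon in that spirit (illustrative; the linked `tests/proxy.py` is the real version):

```python
from types import SimpleNamespace

class RecordFlows:
    """Collect (method, url) pairs for every request the proxy handles."""

    def __init__(self):
        self.requests = []

    def request(self, flow):  # mitmproxy's per-request hook
        self.requests.append((flow.request.method, flow.request.pretty_url))

# The addon would be registered on mitmproxy's master; here we just
# exercise the hook directly with a stand-in flow object.
addon = RecordFlows()
addon.request(SimpleNamespace(
    request=SimpleNamespace(method="GET", pretty_url="http://example.com/")))
print(addon.requests)  # [('GET', 'http://example.com/')]
```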
This is a testimony to how well designed mitmproxy is. Well done & thank you :-) | closed | 2025-01-26T21:58:56Z | 2025-01-27T10:52:34Z | https://github.com/mitmproxy/mitmproxy/issues/7510 | [
"kind/feature"
] | aaugustin | 1 |
thtrieu/darkflow | tensorflow | 1,004 | Darkflow is not configured properly | I am trying to run darkflow on a Raspberry Pi. I have successfully executed Python scripts for object detection using darkflow earlier. Having said that, I do not know what is wrong now.
I installed opencv-python, tensorflow and keras using pip3. When I import these libraries in python3, I do not get any error.
I built darkflow using: `python3 setup.py build_ext --inplace`
When I try to run even `python3 flow --h`, I get the following error:
```
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: compiletime version 3.4 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.5
return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: builtins.type size changed, may indicate binary incompatibility. Expected 432, got 412
return f(*args, **kwds)
Traceback (most recent call last):
File "/home/pi/Desktop/darkflow-master/run_img.py", line 9, in <module>
from darkflow.net.build import TFNet
File "/home/pi/Desktop/darkflow-master/darkflow/net/build.py", line 5, in <module>
from .ops import op_create, identity
File "/home/pi/Desktop/darkflow-master/darkflow/net/ops/__init__.py", line 1, in <module>
from .simple import *
File "/home/pi/Desktop/darkflow-master/darkflow/net/ops/simple.py", line 1, in <module>
import tensorflow.contrib.slim as slim
File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/__init__.py", line 40, in <module>
from tensorflow.contrib import distribute
File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/distribute/__init__.py", line 33, in <module>
from tensorflow.contrib.distribute.python.tpu_strategy import TPUStrategy
File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/distribute/python/tpu_strategy.py", line 27, in <module>
from tensorflow.contrib.tpu.python.ops import tpu_ops
File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/tpu/__init__.py", line 69, in <module>
from tensorflow.contrib.tpu.python.ops.tpu_ops import *
File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/tpu/python/ops/tpu_ops.py", line 39, in <module>
resource_loader.get_path_to_datafile("_tpu_ops.so"))
File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/contrib/util/loader.py", line 56, in load_op_library
ret = load_library.load_op_library(path)
File "/home/pi/.local/lib/python3.5/site-packages/tensorflow/python/framework/load_library.py", line 61, in load_op_library
lib_handle = py_tf.TF_LoadLibrary(library_filename)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Invalid name:
An op that loads optimization parameters into HBM for embedding. Must be
preceded by a ConfigureTPUEmbeddingHost op that sets up the correct
embedding table configuration. For example, this op is used to install
parameters that are loaded from a checkpoint before a training loop is
executed.
parameters: A tensor containing the initial embedding table parameters to use in embedding
lookups using the Adagrad optimization algorithm.
accumulators: A tensor containing the initial embedding table accumulators to use in embedding
lookups using the Adagrad optimization algorithm.
table_name: Name of this table; must match a name in the
TPUEmbeddingConfiguration proto (overrides table_id).
num_shards: Number of shards into which the embedding tables are divided.
shard_id: Identifier of shard for this operation.
table_id: Index of this table in the EmbeddingLayerConfiguration proto
(deprecated).
(Did you use CamelCase?); in OpDef: name: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" input_arg { name: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" description: "\nAn op that loads optimization parameters into HBM for embedding. 
Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" type: DT_FLOAT type_attr: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" number_attr: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. 
For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" type_list_attr: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" } input_arg { name: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. 
For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" description: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" type: DT_FLOAT type_attr: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. 
For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" number_attr: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" type_list_attr: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. 
For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" } attr { name: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" type: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. 
For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" default_value { i: -1 } description: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" has_minimum: true minimum: -1 } attr { name: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. 
For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" type: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" default_value { s: "" } description: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. 
For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" } attr { name: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" type: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. 
For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" description: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" } attr { name: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. 
For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" type: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" description: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. 
For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" } summary: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" description: "\nAn op that loads optimization parameters into HBM for embedding. Must be\npreceded by a ConfigureTPUEmbeddingHost op that sets up the correct\nembedding table configuration. 
For example, this op is used to install\nparameters that are loaded from a checkpoint before a training loop is\nexecuted.\n\nparameters: A tensor containing the initial embedding table parameters to use in embedding\nlookups using the Adagrad optimization algorithm.\naccumulators: A tensor containing the initial embedding table accumulators to use in embedding\nlookups using the Adagrad optimization algorithm.\ntable_name: Name of this table; must match a name in the\n TPUEmbeddingConfiguration proto (overrides table_id).\nnum_shards: Number of shards into which the embedding tables are divided.\nshard_id: Identifier of shard for this operation.\ntable_id: Index of this table in the EmbeddingLayerConfiguration proto\n (deprecated).\n" is_stateful: true
>>>
```
My OpenCV version is 3.4.4.
My TensorFlow version is 1.13.1.
If you know what's wrong, please help.
Otherwise, please tell me which versions it's working with for you and I will try them.
| closed | 2019-03-19T07:02:20Z | 2019-03-20T06:21:49Z | https://github.com/thtrieu/darkflow/issues/1004 | [] | knl-kolhe | 1 |
nolar/kopf | asyncio | 393 | AttributeError: 'NoneType' object has no attribute 'loader' | > <a href="https://github.com/chungktran"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/49414458?v=4"></a> An issue by [chungktran](https://github.com/chungktran) at _2020-08-18 19:29:06+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/issues/393
>
## Long story short
Getting `AttributeError: 'NoneType' object has no attribute 'loader'` error when running inside of k8s.
## Description
Kopf runs fine when running outside of k8s. It does exactly what I want it to do when running outside of the cluster, which is to delete pods that have an annotation set to `"true"`. However, when built into a container and run in k8s, the error below is thrown.
```
Traceback (most recent call last):
File "/usr/local/bin/kopf", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kopf/cli.py", line 36, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/decorators.py", line 73, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kopf/cli.py", line 75, in run
modules=modules,
File "/usr/local/lib/python3.7/site-packages/kopf/utilities/loaders.py", line 36, in preload
module = importlib.util.module_from_spec(spec)
File "<frozen importlib._bootstrap>", line 580, in module_from_spec
AttributeError: 'NoneType' object has no attribute 'loader'
```
<details><summary>The code snippet to reproduce the issue</summary>
```python
import kopf
import kubernetes

DEFAULT_ANNOTATION = 'kopf.example.com/restart'


@kopf.timer('example.com', 'v1', 'restarts', interval=10)
def restart(spec, status, logger, **kwargs):
    anno = spec.get('annotation', DEFAULT_ANNOTATION)
    coreV1 = kubernetes.client.CoreV1Api()
    pods = coreV1.list_pod_for_all_namespaces(watch=False)
    # Get pods that have the opted-in annotation and should be restarted
    tbd_pods = [
        {
            'name': p.metadata.name,
            'namespace': p.metadata.namespace,
        }
        for p in pods.items
        if p.metadata.annotations and p.metadata.annotations.get(anno, '').lower() == 'true'
    ]
    # Delete only the opted-in pods collected above
    for pod in tbd_pods:
        coreV1.delete_namespaced_pod(pod['name'], pod['namespace'])
```
</details>
<details><summary>The exact command to reproduce the issue</summary>
```bash
kopf run --liveness http://:8080/healthz --verbose restarter.py
```
</details>
<details><summary>The full output of the command that failed</summary>
```
Traceback (most recent call last):
File "/usr/local/bin/kopf", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kopf/cli.py", line 36, in wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/decorators.py", line 73, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/kopf/cli.py", line 75, in run
modules=modules,
File "/usr/local/lib/python3.7/site-packages/kopf/utilities/loaders.py", line 36, in preload
module = importlib.util.module_from_spec(spec)
File "<frozen importlib._bootstrap>", line 580, in module_from_spec
AttributeError: 'NoneType' object has no attribute 'loader'
```
</details>
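For what it's worth, the final frame of this traceback is reproducible with the standard library alone: `importlib.util.spec_from_file_location` returns `None` whenever it cannot match a loader to the given location (for instance, a path without a recognized suffix such as `.py`), and `module_from_spec` then fails on the `spec.loader` attribute access. The sketch below only demonstrates that mechanism; it is not a claim about what exactly was wrong in this container:

```python
import importlib.util

# No loader matches a location without a recognized suffix, so the spec is None:
spec = importlib.util.spec_from_file_location("restarter", "restarter")
assert spec is None

# module_from_spec() immediately reads spec.loader, reproducing the error above:
try:
    importlib.util.module_from_spec(spec)
except AttributeError as err:
    print(err)  # 'NoneType' object has no attribute 'loader'
```

In practice this suggests checking that the path passed to `kopf run` resolves to the actual `.py` file inside the container (working directory, `COPY` destination, and so on).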
## Environment
* Kopf version: `kopf, version 0.27`
* Kubernetes version: `v1.17.5`
* Python version: `Python 3.8.5`
* OS/platform: `Linux x86_64 GNU/Linux`
<details><summary>Python packages installed</summary>
```
aiohttp==3.6.2
aiojobs==0.2.2
async-timeout==3.0.1
attrs==19.3.0
cachetools==4.1.1
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
google-auth==1.20.1
idna==2.10
iso8601==0.1.12
kopf==0.27
kubernetes==11.0.0
logzero==1.5.0
multidict==4.7.6
oauthlib==3.1.0
pip==20.2.2
pyasn1==0.4.8
pyasn1-modules==0.2.8
pykube-ng==20.7.2
python-consul==1.1.0
python-dateutil==2.8.1
PyYAML==5.3.1
requests==2.24.0
requests-oauthlib==1.3.0
rsa==4.6
setuptools==49.2.0
six==1.15.0
typing-extensions==3.7.4.2
urllib3==1.25.10
websocket-client==0.57.0
wheel==0.34.2
yapf==0.30.0
yarl==1.5.1
```
</details>
---
> <a href="https://github.com/chungktran"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/49414458?v=4"></a> Commented by [chungktran](https://github.com/chungktran) at _2020-08-19 15:43:56+00:00_
>
I figured out the issue. | open | 2020-08-18T20:05:28Z | 2020-10-09T21:21:02Z | https://github.com/nolar/kopf/issues/393 | [
"bug",
"archive"
] | kopf-archiver[bot] | 6 |
influxdata/influxdb-client-python | jupyter | 401 | Cannot create new bucket | I am trying to create a new bucket with the API but it throws **ValueError: Invalid value for `org_id`, must not be `None`** error if I do not insert the "org_id" param.
Code:
```python
from influxdb_client import InfluxDBClient, BucketRetentionRules

url = "http://localhost:8086"
token = "my-token"
org = "my-org"

with InfluxDBClient(url=url, token=token) as client:
    buckets_api = client.buckets_api()

    """
    Create Bucket with retention policy set to 3600 seconds and name "bucket-by-python"
    """
    print(f"------- Create -------\n")
    retention_rules = BucketRetentionRules(type="expire", every_seconds=3600)
    created_bucket = buckets_api.create_bucket(bucket_name="bucket-by-python",
                                               retention_rules=retention_rules,
                                               org=org)
```
Even if I pass the org_id param, it continues throwing the same error.
influx_db_client version: '1.25.0'
influx_db version:1.8.10
| closed | 2022-02-01T08:04:50Z | 2022-02-17T08:47:58Z | https://github.com/influxdata/influxdb-client-python/issues/401 | [
"wontfix"
] | jimazikerlan | 4 |
pytest-dev/pytest-mock | pytest | 175 | pytest-mock 1.13.0: catching side-effects breaks spy | Hello,
Since #173 was merged (and pytest-mock 1.13.0 released), `mocker.spy` can't be called successfully once a spied function raised an exception.
The issue is that `mocker.spy` relies on a side-effect to wrap all the calls: https://github.com/pytest-dev/pytest-mock/blob/7bddcd53d287a59150d22e6496bcf20af44c3378/src/pytest_mock/plugin.py#L125
But now that we assign a new side-effect after an exception was raised, the spy will always raise the exception instead of calling the wrapper.
Here is a test case to reproduce the issue:
```python
import pytest


def test_spy_side_effect(mocker):
    class Foo:
        def bar(self, arg):
            if arg > 0:
                return arg
            raise RuntimeError("I'm an error")

    foo = Foo()
    mocker.spy(foo, 'bar')

    assert foo.bar(42) == 42
    foo.bar.assert_called_with(42)

    with pytest.raises(RuntimeError) as exc_info:
        foo.bar(-1)
    assert str(exc_info.value) == "I'm an error"
    foo.bar.assert_called_with(-1)

    # with pytest-mock 1.13.0 this will raise a RuntimeError instead of returning 21
    assert foo.bar(21) == 21
    foo.bar.assert_called_with(21)
```
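The shadowing can be demonstrated with plain `unittest.mock`, independent of pytest-mock: a `Mock` whose `side_effect` is a callable delegates every call to it, but once `side_effect` is reassigned to an exception instance, every later call raises that exception. The `call_through` function below is only an illustrative stand-in for the spy's real wrapper:

```python
from unittest import mock

def call_through(arg):
    # Illustrative stand-in for the spy's call-through side effect.
    if arg > 0:
        return arg
    raise RuntimeError("I'm an error")

spy = mock.Mock(side_effect=call_through)
assert spy(42) == 42  # delegated to call_through

# What pytest-mock 1.13.0 effectively does after a raising call:
spy.side_effect = RuntimeError("I'm an error")

# The wrapper is now shadowed, so even a valid call raises:
try:
    spy(21)
except RuntimeError as err:
    print(err)  # I'm an error
```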
A possible solution would be to assign the exception to `result.return_value` instead of `result.side_effect` as proposed initially in #173. However I understand that this is not perfect either. | closed | 2019-12-09T16:08:27Z | 2020-01-04T18:48:18Z | https://github.com/pytest-dev/pytest-mock/issues/175 | [] | k4nar | 5 |
521xueweihan/HelloGitHub | python | 2,012 | java | ## Project Recommendation
- Project URL: (only open-source projects hosted on GitHub are accepted; please fill in the GitHub project URL)
- Category: (please choose one of: C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Swift, Other, Books, Machine Learning)
- Project's follow-up update plan:
- Project description:
  - Required: what this project is, what it can be used for, and what features it has or what pain point it solves
  - Optional: what scenarios it is suited for, and what beginners can learn from it
  - Description length (excluding example code): 10 - 256 characters
- Reason for recommendation: what is the eye-catching highlight, and what pain point does it solve?
- Example code: (optional) length: 1-20 lines
- Screenshot: (optional) gif/png/jpg
## Note: please delete the content below when submitting!
> Click "Preview" above to read the content below more conveniently:
Ways to raise the probability of your project being included:
1. Search for the project URL you plan to recommend on the HelloGitHub homepage (https://hellogithub.com) to check whether it has already been recommended.
2. Revise the project according to the [project review criteria](https://github.com/521xueweihan/HelloGitHub/issues/271)
3. If the project you recommend is successfully included in "HelloGitHub", your GitHub account will be shown in the [contributors list](https://github.com/521xueweihan/HelloGitHub/blob/master/content/contributors.md), **and you will be notified in this issue**.
Thank you again for your support of the HelloGitHub project!
| closed | 2021-12-11T06:19:47Z | 2021-12-11T06:19:52Z | https://github.com/521xueweihan/HelloGitHub/issues/2012 | [
"ๆถๆissue"
] | showjx | 1 |
pytorch/pytorch | deep-learning | 149,222 | inconsistent result of torch.equal API from API documentation. | ### ๐ Describe the bug
I expect this to assert false, as the two tensors have different types (the documentation indicates the tensors should have the same elements), but an assertion error is thrown instead.
```python
def test_different_dtypes(self):
    # Test with tensors of different data types
    tensor1 = torch.tensor([1, 2, 3], dtype=torch.int32)
    tensor2 = torch.tensor([1, 2, 3], dtype=torch.float32)
    self.assertFalse(torch.equal(tensor1, tensor2))
```
```
======================================================================
FAIL: test_different_dtypes (__main__.TestTorchEqual)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/user/projects/api_guided_testgen/out/bug_detect_gpt4o/exec/basic_rag_apidoc/torch/torch.equal.py", line 28, in test_different_dtypes
self.assertFalse(torch.equal(tensor1, tensor2))
AssertionError: True is not false
```
### Versions
Collecting environment information...
PyTorch version: 2.5.0
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.5.0-1ubuntu1~22.04) 9.5.0
Clang version: 14.0.0-1ubuntu1.1
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) E-2224G CPU @ 3.50GHz
CPU family: 6
Model: 158
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: 10
CPU max MHz: 4700.0000
CPU min MHz: 800.0000
BogoMIPS: 6999.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 128 KiB (4 instances)
L1i cache: 128 KiB (4 instances)
L2 cache: 1 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT disabled
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Mitigation; TSX disabled
Versions of relevant libraries:
[pip3] numpy==2.0.1
[pip3] torch==2.5.0
[pip3] torchaudio==2.5.0
[pip3] torchvision==0.20.0
[conda] blas 1.0 mkl
[conda] cpuonly 2.0 0 pytorch
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] libjpeg-turbo 2.0.0 h9bf148f_0 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py39h5eee18b_2
[conda] mkl_fft 1.3.11 py39h5eee18b_0
[conda] mkl_random 1.2.8 py39h1128e8f_0
[conda] numpy 2.0.1 py39h5f9d8c6_1
[conda] numpy-base 2.0.1 py39hb5e798b_1
[conda] pytorch 2.5.0 py3.9_cpu_0 pytorch
[conda] pytorch-mutex 1.0 cpu pytorch
[conda] torchaudio 2.5.0 py39_cpu pytorch
[conda] torchvision 0.20.0 py39_cpu pytorch
cc @svekars @sekyondaMeta @AlannaBurke @albanD | closed | 2025-03-14T20:51:04Z | 2025-03-21T03:44:51Z | https://github.com/pytorch/pytorch/issues/149222 | [
"module: docs",
"triaged",
"module: python frontend"
] | sjh0849 | 2 |
explosion/spaCy | machine-learning | 13,263 | spacy.load() on python 3.12 with vscode | I think this is technically not a bug in this repo but it will likely affect other people too.
I have also found the same behavior in this https://github.com/carpedm20/emoji/issues/280, which led to the creation of this ticket https://github.com/microsoft/debugpy/issues/1496
## How to reproduce the behaviour
Call `spacy.load()` under Python 3.12 when launching from VS Code.
## Your Environment
* Operating System: OSX
* Python Version Used: 3.12.1
* spaCy Version Used: 3.7
* Environment Information:
| closed | 2024-01-23T15:11:42Z | 2024-01-24T09:34:34Z | https://github.com/explosion/spaCy/issues/13263 | [
"third-party"
] | lsmith77 | 3 |
miguelgrinberg/microblog | flask | 326 | Project dependencies may have API risk issues | Hi! In **microblog**, inappropriate dependency versioning constraints can cause risks.
Below are the dependencies and version constraints that the project is using
```
alembic==1.6.5
Babel==2.9.1
blinker==1.4
certifi==2021.5.30
chardet==4.0.0
click==8.0.1
dnspython==2.1.0
dominate==2.6.0
elasticsearch==7.13.3
email-validator==1.1.3
Flask==2.0.1
Flask-Babel==2.0.0
Flask-Bootstrap==3.3.7.1
Flask-HTTPAuth==4.4.0
Flask-Login==0.5.0
Flask-Mail==0.9.1
Flask-Migrate==3.0.1
Flask-Moment==1.0.1
Flask-SQLAlchemy==2.5.1
Flask-WTF==0.15.1
greenlet==1.1.0
httpie==2.4.0
idna==2.10
itsdangerous==2.0.1
Jinja2==3.0.1
langdetect==1.0.9
Mako==1.1.4
MarkupSafe==2.0.1
Pygments==2.9.0
PyJWT==2.1.0
PySocks==1.7.1
python-dateutil==2.8.1
python-dotenv==0.18.0
python-editor==1.0.4
pytz==2021.1
redis==3.5.3
requests==2.25.1
requests-toolbelt==0.9.1
rq==1.9.0
six==1.16.0
SQLAlchemy==1.4.20
urllib3==1.26.6
visitor==0.1.3
Werkzeug==2.0.1
WTForms==2.3.3
```
The version constraint **==** introduces a risk of dependency conflicts because the scope of the dependencies is too strict.
The version constraints **No Upper Bound** and **\*** introduce a risk of missing-API errors because the latest versions of the dependencies may remove some APIs.
After further analysis, in this project,
The version constraint of dependency **alembic** can be changed to *>=0.1.0,<=0.1.1*.
The version constraint of dependency **Flask-Babel** can be changed to *>=0.9,<=2.0.0*.
The version constraint of dependency **Flask-HTTPAuth** can be changed to *>=3.0.0,<=4.7.0*.
The version constraint of dependency **Flask-Login** can be changed to *>=0.1.3,<=0.6.2*.
The version constraint of dependency **Flask-Mail** can be changed to *>=0.7.0,<=0.7.6*.
The version constraint of dependency **Flask-Mail** can be changed to *>=0.9.0,<=0.9.1*.
The version constraint of dependency **Flask-Moment** can be changed to *>=0.1.0,<=0.11.0*.
The version constraint of dependency **Flask-Moment** can be changed to *>=1.0.1,<=1.0.2*.
The version constraint of dependency **Flask-SQLAlchemy** can be changed to *>=0.16,<=3.0.0a1*.
The version constraint of dependency **PyJWT** can be changed to *>=0.1.1,<=1.1.0*.
The version constraint of dependency **redis** can be changed to *>=3.0.0,<=4.3.3*.
The version constraint of dependency **requests** can be changed to *>=0.2.1,<=0.2.3*.
The version constraint of dependency **requests** can be changed to *>=0.7.0,<=2.24.0*.
The version constraint of dependency **requests** can be changed to *==2.26.0*.
The version constraint of dependency **rq** can be changed to *>=0.3.3,<=1.10.1*.
The version constraint of dependency **SQLAlchemy** can be changed to *>=0.5.0beta3,<=1.4.41*.
The version constraint of dependency **Werkzeug** can be changed to *>=0.9,<=2.1.2*.
The version constraint of dependency **WTForms** can be changed to *>=1.0.2,<=3.0.1*.
The above modification suggestions reduce dependency conflicts as much as possible, and adopt the latest versions as far as possible without introducing call errors into the project.
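For illustration, applying three of the suggested ranges above would change the corresponding `requirements.txt` lines from exact pins to bounded ranges (the bounds are the ones proposed above, not independently verified):

```text
# before (exact pins)
Flask-Login==0.5.0
redis==3.5.3
WTForms==2.3.3

# after (suggested bounded ranges)
Flask-Login>=0.1.3,<=0.6.2
redis>=3.0.0,<=4.3.3
WTForms>=1.0.2,<=3.0.1
```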
The invocation of the current project includes all the following methods.
<details><summary>The calling methods from the alembic</summary>
<pre>alembic.op.create_index
alembic.op.drop_table
alembic.op.add_column
alembic.op.drop_index
alembic.op.drop_column
alembic.op.create_table
alembic.context.run_migrations
alembic.context.is_offline_mode
alembic.context.begin_transaction
alembic.context.configure
</pre>
</details>
<details><summary>The calling methods from the Flask-Babel</summary>
<pre>flask_babel.Babel.init_app
flask_babel.get_locale
flask_babel.lazy_gettext
flask_babel.Babel
</pre>
</details>
<details><summary>The calling methods from the Flask-HTTPAuth</summary>
<pre>flask_httpauth.HTTPTokenAuth
</pre>
</details>
<details><summary>The calling methods from the Flask-Login</summary>
<pre>flask_login.login_user
flask_login.LoginManager
flask_login.logout_user
flask_login.LoginManager.init_app
</pre>
</details>
<details><summary>The calling methods from the Flask-Mail</summary>
<pre>flask_mail.Message.attach
flask_mail.Mail
flask_mail.Message
flask_mail.Mail.init_app
</pre>
</details>
<details><summary>The calling methods from the Flask-Moment</summary>
<pre>flask_moment.Moment
flask_moment.Moment.init_app
</pre>
</details>
<details><summary>The calling methods from the Flask-SQLAlchemy</summary>
<pre>flask_sqlalchemy.SQLAlchemy
flask_sqlalchemy.SQLAlchemy.init_app
</pre>
</details>
<details><summary>The calling methods from the PyJWT</summary>
<pre>jwt.encode
jwt.decode
</pre>
</details>
<details><summary>The calling methods from the redis</summary>
<pre>redis.Redis.from_url
</pre>
</details>
<details><summary>The calling methods from the requests</summary>
<pre>requests.post
</pre>
</details>
<details><summary>The calling methods from the rq</summary>
<pre>rq.get_current_job
</pre>
</details>
<details><summary>The calling methods from the SQLAlchemy</summary>
<pre>sqlalchemy.PrimaryKeyConstraint
sqlalchemy.String
sqlalchemy.Column
sqlalchemy.ForeignKeyConstraint
sqlalchemy.Boolean
sqlalchemy.Text
sqlalchemy.Float
sqlalchemy.Integer
sqlalchemy.DateTime
</pre>
</details>
<details><summary>The calling methods from the Werkzeug</summary>
<pre>werkzeug.urls.url_parse
werkzeug.security.check_password_hash
werkzeug.security.generate_password_hash
</pre>
</details>
<details><summary>The calling methods from the WTForms</summary>
<pre>wtforms.validators.Length
wtforms.validators.ValidationError
wtforms.validators.EqualTo
wtforms.validators.Email
wtforms.validators.DataRequired
</pre>
</details>
<details><summary>The calling methods from the all methods</summary>
<pre>sqlalchemy.engine_from_config.connect
self.SearchForm.super.__init__
app.api.auth.basic_auth.current_user
sqlalchemy.PrimaryKeyConstraint
click.argument
flask.render_template
flask_migrate.Migrate
app.logger.addHandler
flask_babel.Babel
flask_login.current_user.followed_posts.paginate
app.create_app.app_context
flask.request.args.get
flask_migrate.Migrate.init_app
os.mkdir
flask.Flask.register_blueprint
rq.Queue
flask_login.logout_user
alembic.op.drop_index
sys.exc_info
alembic.op.drop_column
list
os.path.join
username.data.User.query.filter_by.first
flask.Blueprint
wtforms.BooleanField
app.auth.forms.ResetPasswordRequestForm
user.id.followers.c.followed_id.self.followed.filter.count
alembic.context.begin_transaction
flask_mail.Mail
Post.query.join
app.models.User.query.get_or_404.from_dict
flask_bootstrap.Bootstrap
app.models.Post
sqlalchemy.engine_from_config
logging.StreamHandler.setLevel
self.followers.count
elasticsearch.Elasticsearch
User.query.get
username.User.query.filter_by.first.check_password
app.auth.forms.LoginForm
sqlalchemy.String
run_migrations_online
dotenv.load_dotenv
flask.current_app.elasticsearch.index
app.models.Message.timestamp.desc
rq.get_current_job.get_id
app.models.Post.timestamp.asc
str
Message.query.filter_by
datetime.datetime.utcnow
username.User.query.filter_by.first
self.followed.filter
hashlib.md5
app.logger.info
werkzeug.http.HTTP_STATUS_CODES.get
threading.Thread
username.User.query.filter_by.first_or_404
_set_task_progress
app.api.auth.token_auth.current_user
requests.post.json
flask_mail.Message
app.db.session.rollback
email.data.User.query.filter_by.first
self.name.Task.query.filter_by.first
msg.current_app._get_current_object.send_async_email.Thread.start
app.main.forms.MessageForm
flask.current_app.config.get
self.followed.count
requests.post
user.posts.count
Task.query.filter_by
app.models.User.query.filter_by
app.models.User.verify_reset_password_token.set_password
app.logger.setLevel
app.api.errors.error_response
app.db.relationship
cls.id.in_
flask.current_app._get_current_object
werkzeug.urls.url_parse
app.auth.forms.RegistrationForm
app.models.Notification.timestamp.asc
flask_login.current_user.notifications.filter
app.mail.send
wtforms.validators.Length
app.models.User.query.get_or_404
flask.request.accept_languages.best_match
os.remove
app.models.Message.timestamp.desc.current_user.messages_received.order_by.paginate
flask.redirect
self.set_password
user.get_reset_password_token
app.models.User.verify_reset_password_token.check_password
form.username.data.User.query.filter_by.first
app.db.session.commit
self.last_seen.isoformat
getattr
flask_sqlalchemy.SQLAlchemy.init_app
jwt.decode
flask_babel.get_locale
flask_login.current_user.follow
wtforms.validators.DataRequired
logging.getLogger
query.paginate
os.urandom.base64.b64encode.decode
int
alembic.op.f
self.avatar
token.User.query.filter_by.first
wtforms.PasswordField
flask.Flask
flask_login.current_user.add_notification
alembic.context.configure
since.Notification.timestamp.current_user.notifications.filter.order_by
super
flask_babel.Babel.init_app
app.search.add_to_index
redis.Redis.from_url
Post.query.filter_by
wtforms.SubmitField
n.get_data
app.main.forms.EditProfileForm
flask_login.current_user.get_task_in_progress
os.path.dirname
config.get_main_option
app.create_app
alembic.op.create_table
logging.handlers.RotatingFileHandler
alembic.context.is_offline_mode
flask.flash
self.followed.append
app.main.forms.MessageForm.validate_on_submit
wtforms.validators.ValidationError
rq.job.Job.fetch
os.system
self.email.lower
app.api.errors.bad_request
os.environ.get
app.db.ForeignKey
app.main.forms.EmptyForm
flask_login.current_user.followed_posts
engine.connect.close
self.is_following
app.models.User.query.get_or_404.to_dict
flask_babel.lazy_gettext
logging.config.fileConfig
app.api.auth.basic_auth.current_user.get_token
flask_login.current_user.unfollow
User.query.filter_by
app.cli.register
min
id.User.query.get_or_404.to_dict
os.environ.get.replace
datetime.datetime
self.username.data.User.query.filter_by.first
flask_login.current_user.launch_task
range
flask_login.login_user
wants_json_response
app.models.User.check_token
name.self.notifications.filter_by.delete
own.followed.union.order_by
flask.request.get_json
app.db.backref
json.loads
json.dumps
app.models.Post.timestamp.desc.Post.query.order_by.paginate
Notification
cls.query.filter
recipient.User.query.filter_by.first_or_404.add_notification
logging.StreamHandler
translate.command
app.db.event.listen
app.app_context.push
sqlalchemy.Column
Post.user_id.followers.c.followed_id.followers.Post.query.join.filter.union
sqlalchemy.ForeignKeyConstraint
app.search.query_index
logging.Formatter
app.cli.group
app.api.auth.token_auth.current_user.revoke_token
logging.getLogger.info
self.get_rq_job
flask.current_app.elasticsearch.search
flask_login.LoginManager.init_app
self.posts.count
flask_babel._
user.posts.order_by
os.path.abspath
flask_mail.Message.attach
recipient.User.query.filter_by.first_or_404.new_messages
flask_httpauth.HTTPBasicAuth
Task
app.auth.forms.ResetPasswordForm
app.errors.bp.app_errorhandler
run_migrations_offline
alembic.op.add_column
app.db.String
app.models.User.to_collection_dict
app.app_context
alembic.context.run_migrations
alembic.op.create_index
app.db.Column
len
flask.jsonify
sqlalchemy.Text
wtforms.validators.EqualTo
when.append
app.logger.error
app.api.bp.route
werkzeug.security.generate_password_hash
app.models.Post.timestamp.desc.user.posts.order_by.paginate
wtforms.validators.Email
flask_login.current_user.messages_received.order_by
job.meta.get
app.translate.translate
data.append
app.main.bp.route
app.db.Table
self.followed.remove
flask.current_app.elasticsearch.delete
self.notifications.filter_by
form.email.data.User.query.filter_by.first
logging.handlers.RotatingFileHandler.setFormatter
flask_bootstrap.Bootstrap.init_app
app.models.User.verify_reset_password_token
post.timestamp.isoformat
app.auth.bp.route
datetime.timedelta
flask.abort
setattr
app.models.Task.query.get
app.models.User
flask_mail.Mail.init_app
app.db.case
app.models.Post.timestamp.desc
rq.get_current_job
flask.current_app.task_queue.enqueue
app.models.Message
flask.g.search_form.validate
sqlalchemy.Boolean
flask_httpauth.HTTPTokenAuth
app.main.forms.SearchForm
cls.query.filter_by
Post.timestamp.desc
error_response
isinstance
self.Task.query.filter_by.all
sqlalchemy.DateTime
self.email.lower.encode.md5.hexdigest
app.db.session.add
self.Message.query.filter_by.filter
rq.job.Job.fetch.get_id
flask_moment.Moment.init_app
recipient.User.query.filter_by.first_or_404
wtforms.TextAreaField
task.user.add_notification
app.models.Post.search
app.search.remove_from_index
langdetect.detect
Post.user_id.followers.c.followed_id.followers.Post.query.join.filter
RuntimeError
flask.url_for
app.models.Post.query.order_by
flask_moment.Moment
data.User.query.filter_by.first
app.auth.forms.ResetPasswordForm.validate_on_submit
script.upgrade_ops.is_empty
time.sleep
rq.get_current_job.save_meta
logging.handlers.SMTPHandler
flask_sqlalchemy.SQLAlchemy
app.config.from_object
flask_login.LoginManager
app.main.forms.PostForm
sqlalchemy.Integer
sqlalchemy.Float
alembic.op.drop_table
config.set_main_option
jwt.encode
self.EditProfileForm.super.__init__
base64.b64encode
logging.handlers.SMTPHandler.setLevel
time.time
app.email.send_email
app.models.User.query.get
wtforms.StringField
config.get_section
werkzeug.security.check_password_hash
os.path.exists
last_read_time.Message.timestamp.self.Message.query.filter_by.filter.count
ids.cls.id.in_.cls.query.filter.order_by
os.urandom
app.auth.email.send_password_reset_email
item.to_dict
logging.handlers.RotatingFileHandler.setLevel
format
self.email.lower.encode
</pre>
</details>
@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much. | closed | 2022-10-26T01:58:50Z | 2022-10-26T14:32:26Z | https://github.com/miguelgrinberg/microblog/issues/326 | [] | PyDeps | 1 |
huggingface/datasets | tensorflow | 7,289 | Dataset viewer displays wrong statistics | ### Describe the bug
In [my dataset](https://huggingface.co/datasets/speedcell4/opus-unigram2), there is a column called `lang2` with 94 different classes in total, but the viewer says there are only 83 values. This issue arises only in the `train` split. The total number of values is also 94 in the `test` and `dev` splits, and the viewer reports the correct number for them.
<img width="177" alt="image" src="https://github.com/user-attachments/assets/78d76ef2-fe0e-4fa3-85e0-fb2552813d1c">
### Steps to reproduce the bug
```python3
from datasets import load_dataset
ds = load_dataset('speedcell4/opus-unigram2').unique('lang2')
for key, lang2 in ds.items():
    print(key, len(lang2))
```
This script prints the following, showing that the `train` split has 94 values in the `lang2` column.
```
train 94
dev 94
test 94
zero 5
```
### Expected behavior
94 in the viewer.
### Environment info
Collecting environment information...
PyTorch version: 2.4.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: CentOS Linux release 8.2.2004 (Core) (x86_64)
GCC version: (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5)
Clang version: Could not collect
CMake version: version 3.11.4
Libc version: glibc-2.28
Python version: 3.9.20 (main, Oct 3 2024, 07:27:41) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 525.85.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 64
On-line CPU(s) list: 0-63
Thread(s) per core: 1
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7542 32-Core Processor
Stepping: 0
CPU MHz: 3389.114
BogoMIPS: 5789.40
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 16384K
NUMA node0 CPU(s): 0-15
NUMA node1 CPU(s): 16-31
NUMA node2 CPU(s): 32-47
NUMA node3 CPU(s): 48-63
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.4.1+cu121
[pip3] torchaudio==2.4.1+cu121
[pip3] torchdevice==0.1.1
[pip3] torchglyph==0.3.2
[pip3] torchmetrics==1.5.0
[pip3] torchrua==0.5.1
[pip3] torchvision==0.19.1+cu121
[pip3] triton==3.0.0
[pip3] datasets==3.0.1
[conda] numpy 1.26.4 pypi_0 pypi
[conda] torch 2.4.1+cu121 pypi_0 pypi
[conda] torchaudio 2.4.1+cu121 pypi_0 pypi
[conda] torchdevice 0.1.1 pypi_0 pypi
[conda] torchglyph 0.3.2 pypi_0 pypi
[conda] torchmetrics 1.5.0 pypi_0 pypi
[conda] torchrua 0.5.1 pypi_0 pypi
[conda] torchvision 0.19.1+cu121 pypi_0 pypi
[conda] triton 3.0.0 pypi_0 pypi | closed | 2024-11-11T03:29:27Z | 2024-11-13T13:02:25Z | https://github.com/huggingface/datasets/issues/7289 | [] | speedcell4 | 1 |
statsmodels/statsmodels | data-science | 8,885 | DOC: links of notebooks in statsmodels.tsa are not working | #### Describe the bug
On the description page of the tsa (time series analysis) module of the statsmodels library (https://www.statsmodels.org/stable/tsa.html), all the links redirecting to the notebooks are broken. When we try to access them, the following error message is displayed:
<img width="1432" alt="image" src="https://github.com/statsmodels/statsmodels/assets/112933842/774d19be-59df-469e-88a0-cb24215de1bd">
Below are all the links that do not work:
https://www.statsmodels.org/examples/notebooks/generated/autoregressions.html
https://www.statsmodels.org/examples/notebooks/generated/tsa_arma_0.html
https://www.statsmodels.org/examples/notebooks/generated/tsa_arma_1.html
https://www.statsmodels.org/examples/notebooks/generated/exponential_smoothing.html
https://www.statsmodels.org/examples/notebooks/generated/autoregressive_distributed_lag.html
https://www.statsmodels.org/examples/notebooks/generated/markov_regression.html
https://www.statsmodels.org/examples/notebooks/generated/markov_autoregression.html
https://www.statsmodels.org/examples/notebooks/generated/tsa_filters.html
https://www.statsmodels.org/examples/notebooks/generated/deterministics.html
https://www.statsmodels.org/examples/notebooks/generated/stl_decomposition.html
After searching a bit, I found out that these notebooks still exist (except one) on the website, but the links are slightly different:
https://www.statsmodels.org/stable/examples/notebooks/generated/autoregressions.html
https://www.statsmodels.org/stable/examples/notebooks/generated/tsa_arma_0.html
https://www.statsmodels.org/stable/examples/notebooks/generated/tsa_arma_1.html
https://www.statsmodels.org/stable/examples/notebooks/generated/exponential_smoothing.html
https://www.statsmodels.org/stable/examples/notebooks/generated/autoregressive_distributed_lag.html (still doesn't work)
https://www.statsmodels.org/stable/examples/notebooks/generated/markov_regression.html
https://www.statsmodels.org/stable/examples/notebooks/generated/markov_autoregression.html
https://www.statsmodels.org/stable/examples/notebooks/generated/tsa_filters.html
https://www.statsmodels.org/stable/examples/notebooks/generated/deterministics.html
https://www.statsmodels.org/stable/examples/notebooks/generated/stl_decomposition.html
For the autoregressive_distributed_lag notebook, it is available in the statsmodels GitHub repository at the following link:
https://github.com/statsmodels/statsmodels/blob/main/examples/notebooks/autoregressive_distributed_lag.ipynb
#### Expected Output
All the broken links should be replaced with the correct links that I have provided above.
Thanks. | closed | 2023-05-17T18:07:17Z | 2023-10-27T09:57:36Z | https://github.com/statsmodels/statsmodels/issues/8885 | [] | Cheergui | 0 |
nonebot/nonebot2 | fastapi | 2,594 | Plugin: nonebot-plugin-vits-tts | ### PyPI project name
nonebot-plugin-vits-tts
### ๆไปถ import ๅ
ๅ
nonebot_plugin_vits_tts
### ๆ ็ญพ
[{"label":"VITS","color":"#ea5252"},{"label":"TTS","color":"#52dbea"}]
### ๆไปถ้
็ฝฎ้กน
```dotenv
VITS__DEVIC=0
VITS__VMODEL_PATH=models
VITS__AT_BOT=false
VITS__COOLDOWN=0
VITS__VMODEL_FILE_NAME=model.pth
VITS__CONFIG_FILE_NAME=config
VITS__TENCENT_SECRET_ID=
VITS__TENCENT_SECRET_KEY=
VITS__DEFAULT_LENGTH_SCALE=1
VITS__DEFAULT_NOISE_SCALE=0.667
VITS__DEFAULT_NOISE_SCALE_W=0.6
```
| closed | 2024-03-03T12:11:18Z | 2024-03-04T09:11:48Z | https://github.com/nonebot/nonebot2/issues/2594 | [
"Plugin"
] | Redmomn | 6 |
ultralytics/ultralytics | pytorch | 18,934 | Although the model is generally successful, it sometimes labels completely irrelevant locations. | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I am training an object detector on 2200 images of 1920x1080 using the YOLOv8s model. During training, the image size was set to 1920 and the batch size to 16, and training ran for a total of 600 epochs. However, although the model's accuracy is generally high, it occasionally labels completely irrelevant locations. What could be the possible reasons for this problem, and how can I solve it?
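Spurious boxes like this are often low-confidence predictions, so one common first check (this is a generic, framework-agnostic sketch, and the 0.5 threshold is only an example value) is to filter detections by a higher confidence threshold and see whether the irrelevant labels disappear:

```python
# Generic post-filtering sketch: keep only detections whose confidence
# clears a threshold. Detections are (class_name, confidence) pairs here;
# in a real pipeline they would also carry box coordinates.

def filter_detections(detections, conf_threshold=0.5):
    """Drop detections below the confidence threshold."""
    return [d for d in detections if d[1] >= conf_threshold]

preds = [("plate", 0.91), ("plate", 0.88), ("plate", 0.12)]  # last one is a likely false positive
print(filter_detections(preds, conf_threshold=0.5))
```

If the bad labels persist even at high confidence, the training data itself (mislabeled or visually similar backgrounds) is the more likely cause.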
### Additional
_No response_ | open | 2025-01-29T05:55:17Z | 2025-02-01T17:55:46Z | https://github.com/ultralytics/ultralytics/issues/18934 | [
"question",
"detect"
] | oztrkoguz | 7 |
dynaconf/dynaconf | flask | 652 | Release 3.1.5 does not contain all commits since last release | Hey @rochacbruno, I updated to the latest version of dynaconf this morning, but the strategies implementation is not included. I took a look at the latest [release commit](https://github.com/rochacbruno/dynaconf/commit/083f3c2497be8998524e16cae2cb2e24afc1332f) and it says it doesn't belong to a branch in this project and was likely created from a fork? Looking at the [parent](https://github.com/rochacbruno/dynaconf/commit/b0774d7d50e72fb4fa9d6ab60c0171db3612b400) of that commit, it comes after the prefix strategy implementation, but that [commit](https://github.com/rochacbruno/dynaconf/commit/0e47bf2a22d6f0c565085c3e8ea63bbb625ec150) doesn't seem to be included.
Any idea what has happened here? | closed | 2021-09-03T09:00:43Z | 2021-09-03T11:11:06Z | https://github.com/dynaconf/dynaconf/issues/652 | [
"question"
] | zzZIMAWAKE | 3 |
Kludex/mangum | fastapi | 212 | How does Lambda maintain the state of the application? | My question is more about Lambda and less about Mangum itself. I have been struggling with this lately.
Lambda runs a chunk of code every time it is invoked, and Mangum makes it possible to route multiple paths to the same Lambda instead of one Lambda per route.
My question is: **Is the FastAPI server always running in the background between Lambda invocations, or is it started every time the Lambda is invoked?**
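A rough mental model (a plain-Python sketch, not actual AWS code): the module, including the FastAPI app and any globals, is loaded once per container and then reused across invocations while the container stays warm; a new container starts from scratch:

```python
# Simulates Lambda's execution model: module-level code runs once per
# container ("cold start"), the handler runs once per invocation, and
# globals persist only for the lifetime of that container.

invocation_count = 0  # module-level "global state", set at cold start

def handler(event):
    global invocation_count
    invocation_count += 1
    return invocation_count

# Two invocations routed to the same warm container share state:
first = handler({})   # 1
second = handler({})  # 2, the global survived between invocations
print(first, second)
```

So the app object is created at import time and reused while the container is warm, but AWS may recycle containers or add new ones at any time, so globals are best treated as a cache rather than a source of truth.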
I am asking this because, what if I have global state which I use over the lifetime of my application? Will it be reset between invocations of the Lambda? | closed | 2021-12-19T14:33:53Z | 2021-12-20T10:55:18Z | https://github.com/Kludex/mangum/issues/212 | [] | santosh | 0
3b1b/manim | python | 1,346 | 'TexText' is not defined | I have installed manim by the following.
```
pip3 install manimlib
```
However, the following error occurred.
I'm using Ubuntu on WSL1.
How do I fix this?
```
manim example_scenes.py OpeningManimExample
```
```
Media will be written to /home/maru/manim/media/. You can change this behavior with the --media_dir flag.
Traceback (most recent call last):
File "/home/maru/.local/lib/python3.8/site-packages/manimlib/extract_scene.py", line 155, in main
scene = SceneClass(**scene_kwargs)
File "/home/maru/.local/lib/python3.8/site-packages/manimlib/scene/scene.py", line 75, in __init__
self.construct()
File "example_scenes.py", line 13, in construct
title = TexText("This is some \\LaTeX")
NameError: name 'TexText' is not defined
``` | closed | 2021-02-03T20:25:04Z | 2021-02-09T03:24:34Z | https://github.com/3b1b/manim/issues/1346 | [] | Maruoka842 | 1 |
ccxt/ccxt | api | 24,802 | C#: System.StackOverflowException in System.Collections.Concurrent | ### Operating System
Windows 10
### Programming Languages
C#
### CCXT Version
4.4.42
### Description
I'm basically looping through all the exchanges, and if they support websockets and a number of different symbols I use `await exchange.WatchTrades(symbol)` on each symbol. Each WatchTrades is run in a separate async Task. I'm running WatchTrades on around 50 different exchanges with 16 different symbols, so around 800 tasks running (awaiting trades). Anywhere between one and four hours later I get a `System.StackOverflowException` for `System.Collections.Concurrent`. I'm not using the namespace myself, but in the CCXT source I found the following:
https://github.com/ccxt/ccxt/blob/master/cs/ccxt/ws/CustomConcurrentDictionary.cs
I believe that's the culprit. So far I've been unable to reproduce the issue reliably. It happens only now and then, but when it does, it crashes the whole application (WPF) without any error except in the Windows event viewer:
```
Fault bucket 0, type 5
Event Name: CLR20r3
Response: Not available
Cab Id: 0
Problem signature:
P1: <app name>
P2: 1.0.0.0
P3: 67200000
P4: System.Collections.Concurrent
P5: 9.0.24.52809
P6: b2656325
P7: ca
P8: 1b4
P9: System.StackOverflowException
P10:
```
I'm trying to pinpoint the issue, but so far no luck. I'll try upgrading my CCXT version next, and if I manage to reproduce the crash I'll publish the code here. | open | 2025-01-08T14:34:24Z | 2025-01-27T07:19:29Z | https://github.com/ccxt/ccxt/issues/24802 | [] | alert101 | 10
graphistry/pygraphistry | pandas | 238 | [BUG] hypergraph dask_cudf exn | **Describe the bug**
`dask_cudf` engine failing on hypergraph unit test
**To Reproduce**
```python
import cudf

edges_gdf = cudf.DataFrame({'x': ['a'], 'y': ['c']})
n_dgdf, e_dgdf = edges_to_hypergraph(edges_gdf, {'direct': True, 'engine': 'dask_cudf'})
# g = graphistry.hypergraph(edges_gdf, **cleaned_opts)['graph'] ...
n_gdf = await gpu_client.compute(n_dgdf)
e_gdf = await gpu_client.compute(e_dgdf)
```
**Actual behavior**
```
____________________________________________________________________ Test_edges_to_hypergraph.test_edges_to_hypergraph_dask_dask_cudf_explicit ____________________________________________________________________
self = <test_hypergraph.Test_edges_to_hypergraph object at 0x7ff0e0160110>, gpu_client = <Client: 'tcp://172.21.0.2:8786' processes=1 threads=1, memory=7.63 GiB>
@pytest.mark.skipif(not is_gpu(), reason='not is_gpu()')
@pytest.mark.timeout(60)
@pytest.mark.asyncio
async def test_edges_to_hypergraph_dask_dask_cudf_explicit(self, gpu_client):
edges_gdf = cudf.DataFrame({'x': ['a'], 'y': ['c']})
> n_dgdf, e_dgdf = edges_to_hypergraph(edges_gdf, {'direct': True, 'engine': 'dask_cudf'})
test/server/client/graph/test_hypergraph.py:160:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
server/client/graph/hypergraph.py:138: in edges_to_hypergraph
g = graphistry.hypergraph(edges_gdf, **cleaned_opts)['graph']
/conda/envs/rapids/lib/python3.7/site-packages/graphistry/pygraphistry.py:489: in hypergraph
engine=engine, npartitions=npartitions, chunksize=chunksize)
/conda/envs/rapids/lib/python3.7/site-packages/graphistry/hyper.py:23: in hypergraph
engine=engine, npartitions=npartitions, chunksize=chunksize)
/conda/envs/rapids/lib/python3.7/site-packages/graphistry/hyper_dask.py:739: in hypergraph
entities = format_entities(events, entity_types, defs, direct, drop_na, engine_resolved, npartitions, chunksize, debug) # type: ignore
/conda/envs/rapids/lib/python3.7/site-packages/graphistry/hyper_dask.py:336: in format_entities
mt_df = mt_nodes(defs, events, entity_types, direct, engine)
/conda/envs/rapids/lib/python3.7/site-packages/graphistry/hyper_dask.py:302: in mt_nodes
.head(0)
/conda/envs/rapids/lib/python3.7/site-packages/dask/dataframe/core.py:1049: in head
return self._head(n=n, npartitions=npartitions, compute=compute, safe=True)
/conda/envs/rapids/lib/python3.7/site-packages/dask/dataframe/core.py:1082: in _head
result = result.compute()
/conda/envs/rapids/lib/python3.7/site-packages/dask/base.py:284: in compute
(result,) = compute(self, traverse=False, **kwargs)
```
| open | 2021-06-29T23:25:46Z | 2021-06-29T23:26:35Z | https://github.com/graphistry/pygraphistry/issues/238 | [
"bug"
] | lmeyerov | 0 |
Lightning-AI/pytorch-lightning | deep-learning | 19,906 | Add functionality to save nn.Modules supplied as arguments when initialising LightningModule | ### Description & Motivation
There are scenarios where it makes sense to supply nn.Modules as arguments when initialising a LightningModule; indeed, this seems to be endorsed in some of the Lightning docs. However, it is recommended to ignore the nn.Modules when calling `self.save_hyperparameters()`. This pattern is inconvenient when it comes to saving/loading models: if you simply save the LightningModule, you will be unable to load it again, as you will not have the necessary information to instantiate the nn.Modules (although their weights will be stored in the checkpoint).
### Pitch
When supplying nn.Modules as arguments to LightningModules, checkpoints currently save only the weights of the nn.Modules, which is insufficient to instantiate the nn.Modules as part of loading the LightningModule.
Add functionality to seamlessly save nn.Modules provided as arguments to LightningModules such that the LightningModule can be loaded without having to separately save the initialisation arguments of the nn.Modules and initialise the nn.Modules before supplying them as arguments when loading the LightningModule from the checkpoint.
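One way to picture the requested behaviour (a pure-Python sketch of the idea, not Lightning's actual API) is to record each submodule's class and constructor arguments at init time, so a checkpoint could rebuild the submodule before loading its weights:

```python
# Sketch: capture enough information at construction time to re-instantiate
# a submodule later. A real implementation would store this dict inside the
# checkpoint next to the state_dict.

class Backbone:
    def __init__(self, hidden_dim):
        self.hidden_dim = hidden_dim

def describe(module_cls, **init_kwargs):
    """Build the module and keep a recipe for rebuilding it."""
    recipe = {"class": module_cls, "kwargs": init_kwargs}
    return module_cls(**init_kwargs), recipe

backbone, recipe = describe(Backbone, hidden_dim=128)

# "Loading": rebuild the submodule from the stored recipe alone.
rebuilt = recipe["class"](**recipe["kwargs"])
print(rebuilt.hidden_dim)  # 128
```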
### Alternatives
_No response_
### Additional context
_No response_
cc @borda | closed | 2024-05-25T00:21:19Z | 2024-05-26T18:51:32Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19906 | [
"feature",
"needs triage"
] | tom-hehir | 0 |
vitalik/django-ninja | pydantic | 1,308 | [BUG] Paginated Discriminated Annotated Unions response schemas are overridden in OpenAPI docs by make_response_paginated | **Describe the bug**
Hi,
In my company project we make extensive use of annotated discriminated unions for both input and output schemas.
We recently encountered wrongly assigned schemas in the OpenAPI doc after we added a paginated endpoint.
Here is a minimal reproducible API example:
```python
from typing import Annotated
from typing import Literal
from typing import Union
from ninja import Field
from ninja import NinjaAPI
from ninja import Router
from ninja import Schema
from ninja.pagination import paginate
api = NinjaAPI(
title="Nova API",
version="1.0.0",
)
router_foo = Router(auth=None, tags=["RouterFoo"])
router_bar = Router(auth=None, tags=["RouterBar"])
class Foo1Schema(Schema):
id: int
disc: Literal["foo1"]
class Foo2Schema(Schema):
id: int
disc: Literal["foo2"]
class Bar1Schema(Schema):
id: int
disc: Literal["bar1"]
class Bar2Schema(Schema):
id: int
disc: Literal["bar2"]
DiscriminatedFooUnion = Annotated[
Union[Foo1Schema, Foo2Schema],
Field(discriminator="disc"),
]
DiscriminatedBarUnion = Annotated[
Union[Bar1Schema, Bar2Schema],
Field(discriminator="disc"),
]
@router_foo.get("/", response={200: list[DiscriminatedFooUnion]})
@paginate
def foos_endpoint(request):
return []
@router_bar.get("/", response={200: list[DiscriminatedBarUnion]})
@paginate
def bars_endpoint(request):
return []
api.add_router("/router_foo", router_foo)
api.add_router("/router_bar", router_bar)
```
This code will result in wrongly assigned Paged schemas in the OpenAPI doc:

I believe this is caused by `make_response_paginated`, and more precisely by the type creation done with `new_name = f"Paged{item_schema.__name__}"`, which results in `new_name = "PagedAnnotated"`, always the same name for Annotated schemas.
Since it's always the same name, it probably overwrites previous schemas of the same name in the schema registry.

**Versions (please complete the following information):**
- Python version: 3.12.4
- Django version: 5.0.9
- Django-Ninja version: 1.3.0
- Pydantic version: 2.9.2
This is kind of problematic for us, as our frontend team generates TS client type validations from the OpenAPI schemas.
| open | 2024-10-02T16:55:50Z | 2024-10-04T14:13:47Z | https://github.com/vitalik/django-ninja/issues/1308 | [] | M3te0r | 0 |
horovod/horovod | machine-learning | 4,110 | [+[!๐
๐๐๐ ๐๐๐๐๐๐!]+]Sophie Rain Spiderman Video Original Video Link Sophie Rain Spiderman Video Viral On Social Media X Trending Now | 20 seconds ago
L๐aked Video Sophie Rain Spiderman Original Video Viral Video L๐aked on X Twitter Telegram
[๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ](https://usnews-daily.com/free-watch/)
[๐ด ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐==โบโบ ๐ฃ๐๐๐๐
๐๐บ๐ฝ ๐ญ๐๐](https://usnews-daily.com/free-watch/?t)
..
..
<a href="https://usnews-daily.com/free-watch/?y" rel="nofollow" data-target="animated-image.originalLink"><img src="https://i.imgur.com/vN3eWE7.png"></a>
..
..
[-wATCH-]โ Sophie Rain Spiderman Video Original Video Link Sophie Rain Spiderman Video Viral On Social Media X Trending Now
[-wATCH-]โ Sophie Rain Spiderman สแดแดแดแดแด
Video แด ษชสแดส On Social Media หฃ แตสทโฑแตแตแตสณ
[-wATCH-]โ Sophie Rain Spiderman สแดแดแดแดแด
Video แด ษชสแดส On Social Media หฃ แตสทโฑแตแตแตสณ
[-wATCH-]โ Sophie Rain Spiderman Video Original Video Link Sophie Rain Spiderman Video Viral On Social Media X Trending Now
Sophie Rain Spiderman Original Video video took the internet by storm and amazed viewers on various social media platforms. Sophie Rain Spiderman, a young and talented digital creator, recently became famous thanks to this interesting video.
L๐aked Video Sophie Rain Spiderman Original Video Viral Video L๐aked on X Twitter
Sophie Rain Spiderman Original Video video oficial twitter
L๐aked Video Sophie Rain Spiderman Original Video Viral Video L๐aked on X Twitter..
.
.
.
..
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
..
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
. | closed | 2024-11-17T17:23:26Z | 2024-11-20T12:23:49Z | https://github.com/horovod/horovod/issues/4110 | [] | ghost | 1 |
vi3k6i5/flashtext | nlp | 18 | Do keywords not support regular expressions? | closed | 2017-11-14T06:35:22Z | 2017-11-14T06:37:22Z | https://github.com/vi3k6i5/flashtext/issues/18 | [] | korterling | 0 |
httpie/cli | python | 730 | [Feature] Allow bypassing .netrc | The default support for .netrc is super useful, but sometimes I'd like to set the Authorization header to a different value than the one I have configured in my .netrc, and sometimes it would be nice not to have the Authorization header set at all, to see how an API behaves when not authorised.
Having a flag, say `--ignore-netrc` that enables this behaviour would be nice.
Having an Authorization header supplied as a CLI parameter take precedence over what's in .netrc would be nice too.
If you use curl with -n (to enable using .netrc) and also set `-H "Authorization: Bearer abcdef"` then the CLI version of the header takes precedence, and the values from .netrc aren't used. | closed | 2018-11-19T11:56:14Z | 2019-08-31T10:10:15Z | https://github.com/httpie/cli/issues/730 | [] | gregadevinta | 1 |
sanic-org/sanic | asyncio | 2,539 | app.run(workers=2) in bugs | ```
app.run(host=app.config['HOST'], port=app.config['PORT'], debug=app.config['DEBUG'], auto_reload=app.config['AUTO_RELOAD'], access_log=False,workers=1)
```
## There is no problem with the above code.
```
app.run(host=app.config['HOST'], port=app.config['PORT'], debug=app.config['DEBUG'], auto_reload=app.config['AUTO_RELOAD'], access_log=False,workers=2)
```
## The above code will report the following error after running.
<img width="1327" alt="image" src="https://user-images.githubusercontent.com/7685337/188152567-448afff1-3cc6-4ad8-8a04-cfc93a31f7c6.png"> | closed | 2022-09-02T13:07:45Z | 2022-09-04T11:19:48Z | https://github.com/sanic-org/sanic/issues/2539 | [] | jiayouzl | 16 |
GibbsConsulting/django-plotly-dash | plotly | 345 | How to use django models to store input data from dash components to django's database?? | I have built a dash application and have integrated it into the Django web application. Now I want to save the input data in the database. How do I do it? | closed | 2021-07-13T15:00:47Z | 2022-04-19T10:43:09Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/345 | [] | nikhilnaregal | 4 |
horovod/horovod | deep-learning | 3,005 | Dynamic system environment variables modification doesn't work when using Spark as backend | **Environment:**
1. Framework: (TensorFlow, Keras, PyTorch, MXNet): TensorFlow
2. Framework version: 2.5.0
3. Horovod version: 0.22.1
4. MPI version: 4.0.2
5. CUDA version: 11.2
6. NCCL version: 2.9.9
7. Python version: 3.8.5
8. Spark / PySpark version: 3.1.1
9. Ray version:
10. OS and version: Ubuntu 20.04
11. GCC version: 9.3
12. CMake version: 3.18.5
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? Yes
**Bug report:**
Environment: Spark Standalone mode, with 4 GPU equipped.
Since the TF1 API has been changed in TF2, I tried to use `os.environ['CUDA_VISIBLE_DEVICES']` to control visible GPU devices for TF.
But putting this code inside [train_fn](https://github.com/horovod/horovod/blob/master/examples/spark/keras/keras_spark3_rossmann.py#L398) doesn't work: TF2 is still able to detect all 4 GPUs no matter which number I pass to `CUDA_VISIBLE_DEVICES`.
the code is like
``` python
def train_fn(model_bytes):
...
...
hvd.init()
os.environ['CUDA_VISIBLE_DEVICES'] = str(hvd.rank())
context.context().reinitialize_physical_devices()
gpus = tf.config.experimental.list_physical_devices('GPU')
print(gpus) # 4 GPU devices are shown.
horovod.spark.run(train_fn....)
```
Another thing I noticed is, the default environment vars are not passed to [mpi_run](https://github.com/horovod/horovod/blob/master/horovod/spark/mpi_run.py#L23) . I print this env, and there's nothing. so I have to pass env to `horovod.spark.run` manually.
the code is like:
```python
horovod.spark.run(train_fn, args=(model_bytes,), num_proc=4, env={'LD_LIBRARY_PATH':"......", "PATH":....}
```
Or it will complain like
```
Was unable to run mpirun --version:
/bin/sh: 1: mpirun: not found
```
Not sure if I should post it as another issue.
| open | 2021-06-28T09:24:42Z | 2021-06-28T22:27:01Z | https://github.com/horovod/horovod/issues/3005 | [
"bug"
] | wjxiz1992 | 4 |
newpanjing/simpleui | django | 326 | When using the django-import-export plugin and overriding content, both the Export and Import buttons/functions turn into deleting data | **Bug description**
Briefly describe the bug encountered:
**Steps to reproduce**
1.
2.
3.
**Environment**
1. Operating System:
(Windows/Linux/MacOS)....
2. Python Version: 3.7.9
3. Django Version: 3.1.4
4. SimpleUI Version: 2021.1.1
**Description**
| closed | 2020-12-04T09:56:47Z | 2021-03-17T09:26:51Z | https://github.com/newpanjing/simpleui/issues/326 | [
"bug"
] | 68110923 | 6 |
open-mmlab/mmdetection | pytorch | 11,897 | mm_grounding_dino finetune based on swin large | mm_grounding_dino is a really good work, thanks for sharing.
Your documentation about "mm_grounding_dino finetune" only covers Swin-Tiny, and I want to use Swin-Large. But when I change the config and use the pretrained model grounding_dino_swin-l_pretrain_all-56d69e78.pth to init the weights, there is an error:
```python
File "mmdet/models/backbones/swin.py", line 728, in init_weights
table_current = self.state_dict()[table_key]
KeyError: 'backbone.stages.0.blocks.0.attn.w_msa.relative_position_bias_table'
```
I think this is because your model does not have the weight. How can I solve this?
| closed | 2024-08-06T10:09:22Z | 2024-08-14T07:16:21Z | https://github.com/open-mmlab/mmdetection/issues/11897 | [
"reimplementation"
] | zhaishengfu | 0 |
quokkaproject/quokka | flask | 56 | change internal comment system to a modular/detached/async | Currently Comment is an embedded document, and it is the most used pattern for Mongo, but it is not easy to interchange comment system in this way.
The idea is to create a separate module **quokka-comments**, detached from Content, and use the same approach as Disqus: use the content identifier (which can be a URL or id) to store comments separately.
This will also allow switching from the internal commenting system to Disqus, IntenseDebate, Facebook, or Google comments as external plugins.
| closed | 2013-10-19T17:08:28Z | 2015-07-16T02:56:49Z | https://github.com/quokkaproject/quokka/issues/56 | [
"enhancement"
] | rochacbruno | 2 |
onnx/onnx | scikit-learn | 5,784 | [Feature request] onnx.printer / parser support ID with '/', ':', etc | ### System information
_No response_
### What is the problem that this feature solves?
Currently the onnx.printer prints identifiers without quoting them, like:
```
<
ir_version: 7,
opset_import: [ "" : 10 ]
>
agraph (float[N, 128] X, float[128, 10] W, float[10] B) => (float[N, 10] C)
{
Foo = MatMul(X, W)
Bar = Add(Foo, B)
C = Softmax(Bar)
}
```
It is fine if the ID contains only `[a-zA-Z_]`; however, many models have special characters in node IDs. For example, llama has a node named `/model/layers.0/self_attn/Mul_3_output_0`, which contains `.` and `/`, and some other ops even have `:`. I want to enhance the printer / parser, but I am not sure which spec is better:
1. Single-quoted IDs: any character except `'` can be used in the name. The printed ID is quoted, and the parser respects that too.
2. Leave IDs unquoted and just treat `/`, `:`, and `.` like `_`. But I am not sure whether this would get confused with other syntax.
Does anyone have any suggestions? Thank you.
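For what it's worth, option 1 can be sketched in a few lines. The helpers `print_id`/`parse_id` below are hypothetical, purely to illustrate the proposed spec (the real change would live in the printer/parser): quote only when needed, so existing text specs stay valid as-is.

```python
import re

SAFE_ID = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def print_id(name: str) -> str:
    """Emit plain IDs unchanged; single-quote anything with '/', ':', '.', etc."""
    return name if SAFE_ID.match(name) else "'" + name + "'"

def parse_id(token: str) -> str:
    """Strip the single quotes again on the parser side."""
    if len(token) >= 2 and token.startswith("'") and token.endswith("'"):
        return token[1:-1]
    return token

for name in ["Foo", "/model/layers.0/self_attn/Mul_3_output_0", "a:b"]:
    token = print_id(name)
    assert parse_id(token) == name  # round-trips
    print(token)
```

Since `'` itself is excluded from names under option 1, no escaping mechanism is needed inside the quotes.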
### Alternatives considered
_No response_
### Describe the feature
Quote the ID, or extend the set of acceptable characters in the parser.
### Will this influence the current api (Y/N)?
_No response_
### Feature Area
_No response_
### Are you willing to contribute it (Y/N)
Yes
### Notes
_No response_ | closed | 2023-11-30T21:37:27Z | 2024-12-23T06:45:00Z | https://github.com/onnx/onnx/issues/5784 | [
"topic: enhancement",
"stale"
] | yocox | 1 |
PaddlePaddle/PaddleHub | nlp | 1455 | Problems with the officially provided documentation | We welcome your feedback on PaddleHub usage issues; thank you very much for your contribution to PaddleHub!
When leaving your question, please also provide the following information:
- Version / environment information
1) hub 2.1.0
2) Python 3.8; OS: standard CentOS on Alibaba Cloud.
- Reproduction steps: following the service setup flow in the official documentation works fine, but when a Python client program accesses the service, the request always fails, with a message saying a predict_args field is missing.
https://www.paddlepaddle.org.cn/hubdetail?name=lac&en_category=LexicalAnalysis
The error message is as follows:

Also, `import request` kept reporting a problem; it turned out an 's' was missing, and it should be `import requests`... | closed | 2021-06-07T09:19:20Z | 2021-06-09T11:47:23Z | https://github.com/PaddlePaddle/PaddleHub/issues/1455 | [
"nlp"
] | allenxln | 1 |
zappa/Zappa | django | 547 | [Migrated] Zappa update fails with "import pip" command | Originally from: https://github.com/Miserlou/Zappa/issues/1446 by [iwitaly](https://github.com/iwitaly)
I use Gitlab CI for updating my Zappa app with the following script.
```
- export PIPENV_VENV_IN_PROJECT=true
- pip install pipenv
- pipenv install
- export VIRTUAL_ENV=.venv/
- export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID_DEV
- export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY_DEV
- export ENVIRONMENT=dev
- pipenv run python manage.py migrate --settings=admin_dashboard.settings.dev
- pipenv run zappa update dev
```
I also use `lambci/lambda:build-python3.6` as a base image.
Until today all updates were fine, but today I got this error:

What could that mean?
| closed | 2021-02-20T12:22:34Z | 2022-07-16T07:12:32Z | https://github.com/zappa/Zappa/issues/547 | [] | jneves | 1 |
jupyter/nbviewer | jupyter | 620 | When I run "Image Classification and Filter Visualization" by using Google-net, I get some trouble. | ## when I use Googlenet instead of Caffenet, I get this:
# My code:
net.blobs['data'].data[...] = transformed_image
output = net.forward()
output_prob = output['prob'][0] # the output probability vector for the first image in the batch
print 'predicted class is:', output_prob.argmax()
# error:
---
ValueError Traceback (most recent call last)
<ipython-input-14-c5d1c39289dd> in <module>()
1 # copy the image data into the memory allocated for the net
----> 2 net.blobs['data'].data[...] = transformed_image
3 ### perform classification
4 output = net.forward()
5 output_prob = output['prob'][0] # the output probability vector for the first image in the batch
ValueError: could not broadcast input array from shape (3,224,224) into shape (50,3,227,227)
## But when I comment out this part of the code, everything is okay!
net.blobs['data'].reshape(50, # batch size
3, # 3-channel (BGR) images
227, 227) # image size is 227x227
| closed | 2016-07-18T10:27:58Z | 2016-07-19T17:33:25Z | https://github.com/jupyter/nbviewer/issues/620 | [] | ghost | 1 |
Urinx/WeixinBot | api | 195 | Is it possible to send a text message that is actually a link when the text is tapped? | open | 2017-05-09T08:01:34Z | 2017-05-18T01:34:50Z | https://github.com/Urinx/WeixinBot/issues/195 | [] | huangzk | 1 |
blb-ventures/strawberry-django-plus | graphql | 139 | Optimizer throws an exception for union queries | You should be able to detect that it's a union query and use prefetch_related instead of select_related.
```
File "/app/.heroku/python/lib/python3.10/site-packages/strawberry_django_plus/type.py", line 318, in <lambda>
lambda *args, **kwargs: resolve_connection(
File "/app/.heroku/python/lib/python3.10/site-packages/strawberry_django_plus/utils/resolvers.py", line 533, in resolve_connection
nodes = ext.optimize(nodes, info=info)
File "/app/.heroku/python/lib/python3.10/site-packages/strawberry_django_plus/optimizer.py", line 639, in optimize
return optimize(qs, info, config=config, store=store)
File "/app/.heroku/python/lib/python3.10/site-packages/strawberry_django_plus/optimizer.py", line 375, in optimize
qs = store.apply(qs, info=info, config=config)
File "/app/.heroku/python/lib/python3.10/site-packages/strawberry_django_plus/optimizer.py", line 530, in apply
qs = qs.select_related(*self.select_related)
File "/app/.heroku/python/lib/python3.10/site-packages/django/db/models/query.py", line 1049, in select_related
self._not_support_combined_queries('select_related')
File "/app/.heroku/python/lib/python3.10/site-packages/django/db/models/query.py", line 1398, in _not_support_combined_queries
raise NotSupportedError(
django.db.utils.NotSupportedError: Calling QuerySet.select_related() after union() is not supported.
``` | open | 2022-11-01T16:46:42Z | 2022-11-01T17:48:04Z | https://github.com/blb-ventures/strawberry-django-plus/issues/139 | [
"bug"
] | eloff | 1 |
iperov/DeepFaceLab | machine-learning | 712 | Result video ending up as an image | Deepfacelab version: DeepFaceLab_NVIDIA_build_04_06_2020
When I am done with the training, merging, and converting, I only get an image with some sound (a video with just a single image). | open | 2020-04-12T01:39:04Z | 2023-06-08T20:28:01Z | https://github.com/iperov/DeepFaceLab/issues/712 | [] | Tanatorlol | 1 |
dynaconf/dynaconf | flask | 237 | [RFC] When trying to load yamls, we would like to load `.local.yaml` files last and auto-merge keys | **Is your feature request related to a problem? Please describe.**
We are trying to replace https://github.com/seandst/yaycl usage with Dynaconf, and yaycl has a feature where it lets you have YAML files of 2 types:
1) file.yaml
2) file.local.yaml
When you load the YAMLs from your conf directory, yaycl loads both files, and key-value pairs from the `.local.yaml` file override values from `.yaml` in the AttrDict that is finally loaded.
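The desired behavior boils down to a load order plus a last-wins merge. A sketch with plain dicts standing in for parsed YAML files (`local_last` is a hypothetical helper, not Dynaconf's API):

```python
def local_last(filenames):
    """Sort so every *.local.yaml sorts right after its base *.yaml."""
    return sorted(filenames,
                  key=lambda f: (f.replace(".local.yaml", ".yaml"),
                                 f.endswith(".local.yaml")))

parsed = {
    "file.yaml": {"host": "prod.example.com", "debug": False},
    "file.local.yaml": {"host": "localhost"},
}

settings = {}
for name in local_last(parsed):
    settings.update(parsed[name])  # later files win -> .local overrides

print(settings)  # {'host': 'localhost', 'debug': False}
```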
**Describe the solution you'd like**
I'd like to have a similar ability in Dynaconf.
**Describe alternatives you've considered**
Tried using dynaconf_merge and dynaconf_merge_unique options as Bruno suggested but didn't work.
**Additional context**
According to my team lead, it is common practice to have a `filename.local.yaml` file that can override values in the `filename.yaml` file with the same `filename`.
| closed | 2019-09-19T17:25:25Z | 2019-09-26T19:40:42Z | https://github.com/dynaconf/dynaconf/issues/237 | [
"Not a Bug",
"RFC"
] | kedark3 | 2 |
geopandas/geopandas | pandas | 2,684 | BUG: GeoDataFrame.iterfeatures with na='drop' crashes on non-scalar columns | - [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of geopandas.
- [x] (optional) I have confirmed this bug exists on the main branch of geopandas.
---
#### Code Sample, a copy-pastable example
```python
import geopandas
gdf = geopandas.GeoDataFrame(dict(geometry=geopandas.GeoSeries.from_wkt(['POINT EMPTY']), test=[[1, 2]]))
print(list(gdf.iterfeatures(na='drop')))
```
#### Problem description
The code crashes with
```
Traceback (most recent call last):
File "<ipython-input-49-8489f3b42ff4>", line 1, in <module>
list(gdf.iterfeatures(na='drop'))
File "/home/nofitserov/.cache/pypoetry/virtualenvs/test-RPhZg3RA-py3.9/lib/python3.9/site-packages/geopandas/geodataframe.py", line 885, in iterfeatures
properties_items = {
File "/home/nofitserov/.cache/pypoetry/virtualenvs/test-RPhZg3RA-py3.9/lib/python3.9/site-packages/geopandas/geodataframe.py", line 886, in <dictcomp>
k: v for k, v in zip(properties_cols, row) if not pd.isnull(v)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
due to auto-magic type confusion in this code:
```python
if na == "drop":
properties_items = {
k: v for k, v in zip(properties_cols, row) if not pd.isnull(v)
}
```
When `v` is not a scalar (e.g. a list or array with more than one element), `pd.isnull` returns a boolean array instead of a single value, breaking the logic completely. I think explicitly excluding non-scalars from this check should help, as they should never be dropped here anyway?
#### Expected Output
```
[{'id': '0', 'type': 'Feature', 'properties': {'test': [1, 2]}, 'geometry': None}]
```
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.9.9 (main, Nov 19 2021, 00:00:00) [GCC 10.3.1 20210422 (Red Hat 10.3.1-1)]
executable : /home/nofitserov/.cache/pypoetry/virtualenvs/test-RPhZg3RA-py3.9/bin/python
machine : Linux-5.14.18-100.fc33.x86_64-x86_64-with-glibc2.32
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.11.1
GEOS lib : None
GDAL : 3.4.3
GDAL data dir: /home/nofitserov/.cache/pypoetry/virtualenvs/test-RPhZg3RA-py3.9/lib64/python3.9/site-packages/fiona/gdal_data
PROJ : 9.1.0
PROJ data dir: /home/nofitserov/.cache/pypoetry/virtualenvs/test-RPhZg3RA-py3.9/lib64/python3.9/site-packages/pyproj/proj_dir/share/proj
PYTHON DEPENDENCIES
-------------------
geopandas : 0.12.2
numpy : 1.23.5
pandas : 1.5.2
pyproj : 3.4.1
shapely : 2.0.0
fiona : 1.8.22
geoalchemy2: None
geopy : None
matplotlib : 3.6.2
mapclassify: 2.4.3
pygeos : None
pyogrio : v0.4.2
psycopg2 : None
pyarrow : 10.0.1
rtree : None
</details>
| closed | 2022-12-21T17:42:37Z | 2023-04-10T08:53:33Z | https://github.com/geopandas/geopandas/issues/2684 | [
"bug",
"good first issue"
] | himikof | 4 |
ARM-DOE/pyart | data-visualization | 624 | NEXRAD reflectivity map blank | I'm very new to this module, so forgive me if I missed something. I downloaded Level 2 NEXRAD data directly off of NCDC, downloaded one of the files as-is into a directory, and copied your example from pyart/examples/plotting/plot_nexrad_reflectivity.py word for word, only replacing your filename with the appropriate one that I downloaded. This resulted in a graph appearing with nothing on it: no title, no axis titles, nada.

| closed | 2016-11-25T20:33:25Z | 2017-02-02T17:21:36Z | https://github.com/ARM-DOE/pyart/issues/624 | [] | troyofathens | 10 |
piskvorky/gensim | machine-learning | 3075 | HDP random seed not working | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
I am trying to reproduce results using the same random seed (all other elements remain equal), but the results are different every time I run the model.
#### Code
Here is my code :
```python
hdpmodel = HdpModel(corpus=bowCorpus, id2word=dictionary, alpha = 0.5, random_state = 1)
```
Is there any way to fix this on my side, or is it a technical problem / missing development?
Best Regards,
Evangelia ZVE | closed | 2021-03-15T14:36:53Z | 2021-03-16T08:43:48Z | https://github.com/piskvorky/gensim/issues/3075 | [] | evangeliazve | 6 |
pytest-dev/pytest-html | pytest | 164 | screen shot attachment to the pytest report | Hi,
As I am using unittest with Selenium and Python, I am using pytest for reporting. I need help capturing screenshots in the pytest reports for both pass and fail scenarios. Can you please provide a sample template of the script and code for attaching screenshots to the pytest report?
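For reference, the commonly used `conftest.py` pattern from the pytest-html docs looks roughly like this (the screenshot path, and how the file gets produced, e.g. via Selenium's `save_screenshot`, are assumptions to fill in for your setup):

```python
# conftest.py
import pytest

def should_capture(report):
    """Capture on real failures and on xfail-marked skips; adjust the
    condition if you also want screenshots for passing tests."""
    if report.when != "call":
        return False
    xfail = hasattr(report, "wasxfail")
    return (report.skipped and xfail) or (report.failed and not xfail)

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    pytest_html = item.config.pluginmanager.getplugin("html")
    if pytest_html is not None and should_capture(report):
        extra = getattr(report, "extra", [])
        # assumed path: your test (or a fixture) saved this file beforehand
        extra.append(pytest_html.extras.image("screenshots/%s.png" % item.name))
        report.extra = extra
```

To also attach screenshots for passing tests, drop the failure checks in `should_capture` and append the image unconditionally for `report.when == "call"`.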
Thanks for the help | closed | 2018-05-10T10:39:08Z | 2018-05-10T17:23:37Z | https://github.com/pytest-dev/pytest-html/issues/164 | [] | writetomaha14 | 2 |
sinaptik-ai/pandas-ai | data-visualization | 1,439 | ModuleNotFoundError: No module named 'seaborn' | ### System Info
OS version: `Debian GNU/Linux 12 (bookworm)`
Python version: `Python 3.11.2`
The current version of pandasai being used: `2.4.0`
### ๐ Describe the bug
When instantiating a new Agent, the following error occurs, indicating that **seaborn** should not be an optional dependency:
```
Traceback (most recent call last):
File ".../streamlit/runtime/scriptrunner/exec_code.py", line 88, in exec_func_with_error_handling
result = func()
^^^^^^
File ".../streamlit/runtime/scriptrunner/script_runner.py", line 579, in code_to_exec
exec(code, module.__dict__)
File ".../app.py", line 1, in <module>
from pandasai import Agent
File ".../pandasai/__init__.py", line 6, in <module>
from pandasai.smart_dataframe import SmartDataframe
File ".../pandasai/smart_dataframe/__init__.py", line 27, in <module>
from pandasai.agent import Agent
File ".../pandasai/agent/__init__.py", line 1, in <module>
from .agent import Agent
File ".../pandasai/agent/agent.py", line 5, in <module>
from pandasai.agent.base import BaseAgent
File ".../pandasai/agent/base.py", line 8, in <module>
from pandasai.agent.base_security import BaseSecurity
File ".../pandasai/agent/base_security.py", line 2, in <module>
from pandasai.pipelines.pipeline import Pipeline
File ".../pandasai/pipelines/__init__.py", line 3, in <module>
from .pipeline import Pipeline
File ".../pandasai/pipelines/pipeline.py", line 5, in <module>
from pandasai.config import load_config_from_json
File ".../pandasai/config.py", line 4, in <module>
from . import llm
File ".../pandasai/llm/__init__.py", line 5, in <module>
from .google_gemini import GoogleGemini
File ".../pandasai/llm/google_gemini.py", line 16, in <module>
from ..helpers.optional import import_dependency
File ".../pandasai/helpers/optional.py", line 22, in <module>
from pandasai.safe_libs.restricted_seaborn import RestrictedSeaborn
File ".../pandasai/safe_libs/restricted_seaborn.py", line 1, in <module>
import seaborn as sns
ModuleNotFoundError: No module named 'seaborn'
```
After installing **seaborn** using `poetry add seaborn`, the following error occurs, indicating that **pyyaml** is also required:
```
Traceback (most recent call last):
File ".../streamlit/runtime/scriptrunner/exec_code.py", line 88, in exec_func_with_error_handling
result = func()
^^^^^^
File ".../streamlit/runtime/scriptrunner/script_runner.py", line 579, in code_to_exec
exec(code, module.__dict__)
File ".../app.py", line 1, in <module>
from pandasai import Agent
File ".../pandasai/__init__.py", line 6, in <module>
from pandasai.smart_dataframe import SmartDataframe
File ".../pandasai/smart_dataframe/__init__.py", line 27, in <module>
from pandasai.agent import Agent
File ".../pandasai/agent/__init__.py", line 1, in <module>
from .agent import Agent
File ".../pandasai/agent/agent.py", line 5, in <module>
from pandasai.agent.base import BaseAgent
File ".../pandasai/agent/base.py", line 8, in <module>
from pandasai.agent.base_security import BaseSecurity
File ".../pandasai/agent/base_security.py", line 2, in <module>
from pandasai.pipelines.pipeline import Pipeline
File ".../pandasai/pipelines/__init__.py", line 3, in <module>
from .pipeline import Pipeline
File ".../pandasai/pipelines/pipeline.py", line 5, in <module>
from pandasai.config import load_config_from_json
File ".../pandasai/config.py", line 6, in <module>
from .schemas.df_config import Config
File ".../pandasai/schemas/df_config.py", line 4, in <module>
from pandasai.helpers.dataframe_serializer import DataframeSerializerType
File ".../pandasai/helpers/dataframe_serializer.py", line 4, in <module>
import yaml
ModuleNotFoundError: No module named 'yaml'
```
To resolve this, I had to run `poetry add pyyaml`, and then everything worked correctly.
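A guard along these lines would surface the problem at the call site with an actionable message, instead of a bare `ModuleNotFoundError` deep inside the library (`import_optional` is a hypothetical helper, not pandasai's actual code):

```python
import importlib

def import_optional(name, install_hint):
    """Import an optional dependency or fail with install instructions."""
    try:
        return importlib.import_module(name)
    except ImportError as exc:
        raise ImportError(
            f"'{name}' is required for this feature; "
            f"install it with: {install_hint}"
        ) from exc

json_mod = import_optional("json", "(stdlib, always present)")  # succeeds
print(json_mod.dumps({"ok": True}))

try:
    import_optional("seaborn_missing_example", "poetry add seaborn")
except ImportError as exc:
    print(exc)  # actionable message naming the missing package
```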
### Suggestion
1. **Make seaborn a required dependency**: It appears that **seaborn** is being imported without being marked as a required dependency, causing errors when it's missing. It should either be made a required dependency or a check should be added to verify if it is present in the environment before importing.
2. **Make pyyaml a required dependency**: Similarly, **pyyaml** should be listed as a required dependency, as it is necessary for proper functionality. | closed | 2024-11-27T19:51:57Z | 2025-01-02T16:54:26Z | https://github.com/sinaptik-ai/pandas-ai/issues/1439 | [
"bug"
] | desertproject | 2 |
numba/numba | numpy | 9,207 | no implementation for __rmul__ | ```python
from __future__ import annotations  # needed so the `-> Vec2` annotations below don't raise NameError

import numba as nb
import numba.experimental as nbexp
import numba.extending as nbex
from numba import types as nbt
@nbexp.jitclass([ ('_x', nbt.float32),
('_y', nbt.float32), ])
class Vec2:
def __init__(self, x : float, y : float):
self._x = x
self._y = y
@property
def x(self) -> float: return self._x
@property
def y(self) -> float: return self._y
def __rmul__(self, other) -> Vec2: return Vec2(0,0)
@nb.njit(nogil=True)
def run_test1():
return 2 * Vec2(1,1)
print( run_test1() )
```
error
```
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<built-in function mul>) found for signature:
>>> mul(Literal[int](2), instance.jitclass.Vec2#226da1bb310<_x:float32,_y:float32>)
``` | closed | 2023-09-22T08:16:30Z | 2023-09-25T14:03:19Z | https://github.com/numba/numba/issues/9207 | [
"duplicate"
] | iperov | 1 |
streamlit/streamlit | streamlit | 10,574 | streamlet-bokeh missing Python3.9 support due to bokeh3 version | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
streamlit-bokeh doesn't work with Python3.9 since it pins Bokeh 3.6.3, which only supports Python3.10+. Bokeh 3.4.3 is the last version of Bokeh3 which supports Python3.9.
Is it possible to add Python 3.9 support for streamlit-bokeh? (Presumably by reducing the required Bokeh version to 3.4.3.)
### Reproducible Code Example
```
python3.9 -m pip install streamlit-bokeh
```
### Steps To Reproduce
_No response_
### Expected Behavior
_No response_
### Current Behavior
```
ERROR: Cannot install streamlit-bokeh==3.6.0, streamlit-bokeh==3.6.1 and streamlit-bokeh==3.6.2 because these package versions have conflicting dependencies.
The conflict is caused by:
streamlit-bokeh 3.6.2 depends on bokeh==3.6.3
streamlit-bokeh 3.6.1 depends on bokeh==3.6.2
streamlit-bokeh 3.6.0 depends on bokeh==3.6.1
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
WARNING: There was an error checking the latest version of pip.
```
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.42.2
- Streamlit-bokeh version: 3.6.X
- Python version: 3.9.16
- Operating System: MacOS
- Browser: Safari
### Additional Information
| open | 2025-03-01T02:35:09Z | 2025-03-01T13:03:44Z | https://github.com/streamlit/streamlit/issues/10574 | [
"type:enhancement",
"feature:st.bokeh_chart"
] | SimonHeim | 3 |
ipython/ipython | data-science | 14,542 | Resolve build docs error by removing one unnecessary line | In the latest several commits, the `Build docs` CI seems to fail. See the following for example.
https://github.com/ipython/ipython/actions/runs/11258467460/job/31305131520#step:6:71
The error message says `sphinx.errors.SphinxWarning: Calling get_html_theme_path is deprecated. If you are calling it to define html_theme_path, you are safe to remove that code.`
Indeed the readme of https://github.com/Pennsieve/sphinx_rtd_theme mentions that since v0.2.5 that line is no longer needed, and `docs/requirements.txt` specifies `sphinx_rtd_theme>=1.2.0`.
I think it suffices to create a PR to remove the following line and I am willing to do it.
https://github.com/ipython/ipython/blob/a49046c77e94025501c64a8856498107589b729a/docs/source/conf.py#L134
One thing I am not sure about is why this warning becomes an error...
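On that last point: a warning turning the build red usually means sphinx-build runs with `-W` (for example via `SPHINXOPTS=-W` in the docs Makefile or the CI step), which promotes every warning to an error. The fix itself would leave only the theme name; a sketch of the relevant part of `conf.py` (surrounding lines may differ):

```python
# docs/source/conf.py
html_theme = "sphinx_rtd_theme"
# html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]  # <- line to delete
```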
System information:
```
{'commit_hash': 'a49046c77',
'commit_source': 'repository',
'default_encoding': 'utf-8',
'ipython_path': '/Users/kevin1kevin1k/ipython/IPython',
'ipython_version': '8.29.0.dev',
'os_name': 'posix',
'platform': 'macOS-14.1-x86_64-i386-64bit',
'sys_executable': '/Users/kevin1kevin1k/ipython/venv/bin/python3',
'sys_platform': 'darwin',
'sys_version': '3.10.15 (main, Sep 7 2024, 00:20:06) [Clang 15.0.0 '
'(clang-1500.3.9.4)]'}
``` | closed | 2024-10-17T14:51:31Z | 2024-10-19T18:14:34Z | https://github.com/ipython/ipython/issues/14542 | [] | kevin1kevin1k | 0 |
google-research/bert | tensorflow | 1,000 | bert run_classifier key error = '0' | File "run_classifier.py", line 981, in <module>
tf.app.run()
File "C:\Users\Parveen\ishan\bertenv\lib\site-packages\tensorflow_core\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\Parveen\ishan\bertenv\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Users\Parveen\ishan\bertenv\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "run_classifier.py", line 942, in main
predict_file)
File "run_classifier.py", line 490, in file_based_convert_examples_to_features
max_seq_length, tokenizer)
File "run_classifier.py", line 459, in convert_single_example
label_id = label_map[example.label]
KeyError: '0'
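The failing line builds `label_map` from the processor's `get_labels()` and then looks up the raw label string read from the TSV. A minimal reproduction of that logic (hypothetical label values: the point is that every `example.label`, including the placeholder `"0"` used for the test set, must appear in `get_labels()` with exactly the same type and value):

```python
def build_label_map(label_list):
    # mirrors run_classifier.convert_single_example
    return {label: i for i, label in enumerate(label_list)}

label_map = build_label_map(["0", "1"])   # get_labels() returning strings
print(label_map["0"])                     # 0 -> lookup succeeds

label_map = build_label_map([0, 1])       # ints (or renamed labels) instead
try:
    label_map["0"]                        # the TSV always yields the STRING "0"
except KeyError as exc:
    print("KeyError:", exc)               # the reported failure
```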
I have changed the labels in the ColaProcessor class and my training is successful, but I am getting this error during testing. Please help | closed | 2020-02-11T05:51:45Z | 2020-08-04T06:44:03Z | https://github.com/google-research/bert/issues/1000 | [] | agarwalishan | 1 |
inventree/InvenTree | django | 8,852 | [Reporting] Support generation of DataMatrix codes | Hi,
Would it be possible to print DataMatrix codes?
_Originally posted by @gab696 in https://github.com/inventree/InvenTree/discussions/8819_ | closed | 2025-01-07T09:55:38Z | 2025-03-21T10:01:43Z | https://github.com/inventree/InvenTree/issues/8852 | [
"enhancement",
"barcode",
"report"
] | SchrodingersGat | 2 |
marshmallow-code/apispec | rest-api | 354 | RFC: Remove extracting reference from field metadata | Currently the Marshmallow plugin looks for 'ref' key containing a JSON reference path in field. An example from the [tests](https://github.com/marshmallow-code/apispec/blob/29881b18e6723295870422f08c17851d49f83caf/tests/test_openapi.py#L600):
```python
class PetSchema(Schema):
category = fields.Nested(CategorySchema, many=True, ref='#/definitions/Category')
```
This functionality seems to be redundant with the plugin's ability to automatically store references. It also seems like a more fragile way to pass reference information into the spec.
This [code block](https://github.com/marshmallow-code/apispec/blob/29881b18e6723295870422f08c17851d49f83caf/apispec/ext/marshmallow/openapi.py#L343) extracts the reference and can probably be completely removed, because the unbound self-referencing case does not occur when a schema instance is passed to `schema2jsonschema`, which is the primary case currently (we could probably enforce that by instantiating the schema within `schema2jsonschema`). | closed | 2019-01-02T03:37:45Z | 2019-02-03T18:57:08Z | https://github.com/marshmallow-code/apispec/issues/354 | [
"backwards incompat"
] | Bangertm | 2 |
horovod/horovod | pytorch | 4,046 | Horovod with Spark - Job Not Distributing Across Worker Nodes | Problem Description: Horovod with Spark - Job Not Distributing Across Worker Nodes
**Environment:**
- Cluster Setup: 1 Master Node, 2 Worker Nodes
- Software Versions:
  - Horovod: >= 0.19.0
  - TensorFlow: >= 1.12.0
  - Spark: >= 2.3.2
  - Python: 3.x
  - MPI Version: Open MPI 4.0.5
- Deployment Mode: YARN
Issue Summary: I am experiencing an issue where my distributed training job using Horovod on Spark is not properly utilizing the worker nodes in my cluster. Instead, all computation appears to be executed on the master node, leading to resource exhaustion on the master while the worker nodes remain idle.
Details:
- I configured my Spark and Horovod environment following the official Horovod documentation.
- My setup involves one master node and two worker nodes, with the master node not participating as a worker (confirmed via the Hadoop UI).
- The job is submitted using mpirun with a spark-submit command embedded within it.

Symptoms:
- The master node (lila) shows high CPU and memory usage, almost to the point of exhaustion.
- Worker nodes (worker1, worker2) show no significant CPU or memory usage.
- Both the Hadoop and Spark UIs confirm that only the master node is active during the job execution.
- The custom callback to print the hostname confirms that only the master node is processing the data.
Commands Used: Here is the mpirun command used to submit the job:
```
mpirun -np 4 -bind-to none -map-by slot \
    -x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH \
    -mca pml ob1 -mca btl ^openib \
    spark-submit --master yarn --deploy-mode cluster \
    --conf spark.executor.cores=4 --conf spark.executor.instances=2 \
    --conf spark.driver.memory=4g --conf spark.executor.memory=6g \
    --conf spark.dynamicAllocation.enabled=false --conf spark.yarn.maxAppAttempts=1 \
    codes/estimator_example.py > output/estimator_example.txt
```
Code Snippet: Here is a simplified version of the code being used:
```
from tensorflow import keras
import tensorflow as tf
import horovod.spark.keras as hvd
from pyspark.sql import SparkSession
import numpy as np
import os
import socket
# Initialize Horovod
hvd.init()
# Set up Spark session
spark = SparkSession.builder.appName("HorovodOnSparkExample").getOrCreate()
# Generate random data
def generate_data(num_samples):
    num_features = 2
    data = np.random.rand(num_samples, num_features)
    labels = (data[:, 0] + data[:, 1] > 1).astype(int)
    return spark.createDataFrame([(float(x[0]), float(x[1]), int(y)) for x, y in zip(data, labels)], ["feature1", "feature2", "label"])
train_df = generate_data(1000)
test_df = generate_data(200)
# Build a simple Keras model
model = keras.models.Sequential([
    keras.layers.Dense(8, input_dim=2, activation='tanh'),
    keras.layers.Dense(1, activation='sigmoid')
])
# Optimizer and loss
optimizer = keras.optimizers.SGD(learning_rate=0.1)
loss = 'binary_crossentropy'
# Store for checkpointing
store = hvd.spark.common.store.HDFSStore('/user/username/experiments')
# Define the KerasEstimator
keras_estimator = hvd.KerasEstimator(
    num_proc=4,
    store=store,
    model=model,
    optimizer=optimizer,
    loss=loss,
    feature_cols=['feature1', 'feature2'],
    label_cols=['label'],
    batch_size=32,
    epochs=10
)
# Fit the model
keras_model = keras_estimator.fit(train_df).setOutputCols(['predict'])
# Transform the test data
predict_df = keras_model.transform(test_df)
predict_df.show()
# Custom callback to log worker information
class WorkerInfoCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        hostname = socket.gethostname()
        rank = hvd.rank()
        print(f"Epoch {epoch} ended. Worker rank: {rank}, Hostname: {hostname}")
# Enable Horovod timeline
os.environ["HOROVOD_TIMELINE"] = "/home/hadoop/horovod_timeline_gan.json"
os.environ["HOROVOD_TIMELINE_MARK_CYCLES"] = "1"
# Add the custom callback to the list of callbacks
callbacks = [
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
    hvd.callbacks.MetricAverageCallback(),
    WorkerInfoCallback()
]
# Train the model
keras_model.fit(x_train, y_train,
                batch_size=128,
                callbacks=callbacks,
                epochs=2,
                verbose=2 if hvd.rank() == 0 else 0,
                validation_data=(x_test, y_test))
# Save the model
if hvd.rank() == 0:
    keras_model.save('/home/hadoop/keras_model.h5')
# Stop Spark session
spark.stop()
```
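One way to sanity-check whether work actually lands on the executors is to ship a tiny function to each rank and collect the hostnames. Sketch below — on a cluster this function would be passed to `horovod.spark.run`, which launches its own MPI job from inside Spark, so it should not itself be wrapped in an outer `mpirun`:

```python
import socket

def report_host():
    # executed once per Horovod rank; if training is really distributed,
    # the returned hostnames should include worker1/worker2, not just the master
    return socket.gethostname()

# on the cluster (not runnable outside Spark):
#   import horovod.spark
#   hosts = horovod.spark.run(report_host, num_proc=4)
#   print(set(hosts))   # expect more than one distinct hostname
print(report_host())
```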
Questions:
1. How can I ensure that the training job is properly distributed across the worker nodes?
2. Are there any additional configurations or steps required to ensure that mpirun properly utilizes the worker nodes?
3. Are there specific debugging steps I should follow to identify why the worker nodes are not being utilized?
Any insights or suggestions to resolve this issue would be greatly appreciated.
Thank you!
| open | 2024-06-12T09:12:08Z | 2025-01-31T23:14:46Z | https://github.com/horovod/horovod/issues/4046 | [
"wontfix"
] | omarmujahidgithub | 3 |
cvat-ai/cvat | pytorch | 8,407 | Missing Label in CVAT After Labeling Two Objects | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
Here's the task link where this issue occurs: https://app.cvat.ai/tasks/616492/jobs/710592
### Expected Behavior
When labeling two objects, I expected the first object to be labeled with '0' and the second with '1'. However, I only receive label '0' for both objects, and label '1' is missing. I'm not sure what the issue is.
### Possible Solution
I've already tried checking the label configuration, refreshing the browser, and restarting the task, but the issue remains. I've successfully labeled objects with '0' and '1' in previous tasks and exports without any issues. However, now it only gives me label '0'. This seems to be a problem with CVAT itself, as I haven't changed my process, and it worked before. Could you please look into this issue? This is for a very important project.
### Context
This issue has significantly affected my ability to continue working on a critical project. I need to label objects with both '0' and '1', but now I can only apply label '0'. I am annotating vegetation and power lines, where vegetation is labeled as '0' and power lines as '1'. Previously, it applied the labels correctly, but now only label '0' is being applied. This is causing delays in my project, as I need both labels to properly categorize the objects. Resolving this is crucial for me to move forward with my work.
I'm attaching my task link for reference: https://app.cvat.ai/tasks/616492/jobs/710592
### Environment
```Markdown
I'm attaching my task link for reference: https://app.cvat.ai/tasks/616492/jobs/710592
```
| closed | 2024-09-06T04:34:20Z | 2024-09-10T19:05:56Z | https://github.com/cvat-ai/cvat/issues/8407 | [
"need info"
] | QuifaAbas | 5 |
robusta-dev/robusta | automation | 1,520 | [Feature] Add the posibility to create tables in the create_finding action | **Is your feature request related to a problem?**
No
**Describe the solution you'd like**
Currently I am using the action `create_finding`. I would like to have a way to create tables in the `description`. Something along the lines of:
```
actions:
- create_finding:
title: 'Workflow $labels.workflow_name failed'
aggregation_key: WorkflowFailures
severity: HIGH
description: |
# Following is a pseudocode of a table creation
| workflow name | namespace | severity|
----
| $labels.workflow_name | $labels.namespace | HIGH |
```
**Additional context**
It would be nice to have this and more Markdown features in order to further enrich the description in my `create_finding` action.
| open | 2024-08-06T16:40:47Z | 2024-08-06T16:41:13Z | https://github.com/robusta-dev/robusta/issues/1520 | [] | crileroro | 1 |
Lightning-AI/LitServe | api | 271 | Is it possible to support multiple endpoints for one server? | ## 🚀 Feature
Multiple endpoints like `/embedding` or `/vlm/predict` or `/ocr/predict`.
### Motivation
I would like to host multiple models on a single GPU for different purposes. It would be ideal to support numerous (small) models while maintaining high performance, such as through batching.
Additionally, I believe starting multiple `litserve` instances with different ports may introduce unnecessary complexity, compared to starting a single server with different endpoints.
| open | 2024-09-06T07:57:46Z | 2025-01-02T09:08:21Z | https://github.com/Lightning-AI/LitServe/issues/271 | [
"enhancement",
"help wanted",
"question"
] | arkohut | 18 |
lepture/authlib | flask | 256 | CSRF validation failure when running in docker | I have a flask / authlib client which works perfectly when run on localhost from the shell, but which consistently fails with a CSRF state-mismatch when run in a docker container. I put up a stackexchange [question](https://stackoverflow.com/questions/63228209/flask-authlib-csrf-state-mismatch-only-when-running-in-docker) a couple of days ago which narrates the problem in more detail.
this may well not be an authlib problem, but my minimal example is very minimal and the problem is cropping up in authlib, so this seemed like the best place to report. apologies if that's in error.
**Error Stacks**
```
Traceback (most recent call last):
File "/usr/local/lib64/python3.8/site-packages/flask/app.py", line 2464, in __call__
return self.wsgi_app(environ, start_response)
File "/usr/local/lib64/python3.8/site-packages/flask/app.py", line 2450, in wsgi_app
response = self.handle_exception(e)
File "/usr/local/lib64/python3.8/site-packages/flask/app.py", line 1867, in handle_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib64/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib64/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib64/python3.8/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib64/python3.8/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib64/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/local/lib64/python3.8/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib64/python3.8/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/app/src/app/flask_interface/user.py", line 17, in auth_callback
token = extensions.auth0.authorize_access_token()
File "/usr/local/lib/python3.8/site-packages/authlib/integrations/flask_client/remote_app.py", line 74, in authorize_access_token
params = self.retrieve_access_token_params(flask_req, request_token)
File "/usr/local/lib/python3.8/site-packages/authlib/integrations/base_client/base_app.py", line 149, in retrieve_access_token_params
params = self._retrieve_oauth2_access_token_params(request, params)
File "/usr/local/lib/python3.8/site-packages/authlib/integrations/base_client/base_app.py", line 130, in _retrieve_oauth2_access_token_params
raise MismatchingStateError()
```
**To Reproduce**
A minimal example can be found [here](https://github.com/circius/flask-authlib-docker-bug). It should be possible to reproduce it by registering an oauth2 client and adding the client configuration to the code.
**Expected behavior**
Authentication should succeed when the client is run in a docker container.
**Environment:**
- OS: Fedora 32
- Python Version: 3.8.3
- Authlib Version: 0.14.3
**Additional context**
The docker environment is based on python:slim in the code examples above, but I have tested this with a dockerfile based on the fedora:latest package with the same result.
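One common cause of exactly this symptom — an assumption on my part, not something the report confirms — is a Flask `SECRET_KEY` that differs between the request that sets the session and the request that reads it (for example `app.secret_key = os.urandom(16)` evaluated once per worker process, or a container restart between the redirects). The session cookie is signed, so a key change silently discards the whole session; a self-contained sketch of the mechanism:

```python
import hashlib
import hmac
import os

def sign(value: bytes, key: bytes) -> str:
    # stand-in for the signature Flask puts on the session cookie
    return hmac.new(key, value, hashlib.sha256).hexdigest()

state = b"w4SnGtxMsOrledSvTHg3YzXkwCItHU"
stable_key = b"one-fixed-secret-from-config"

# same key on the /login and /auth_callback requests: cookie verifies
assert sign(state, stable_key) == sign(state, stable_key)

# a fresh per-process key: verification fails, the session comes back empty,
# and authlib finds no `_auth0_authlib_state_` -> MismatchingStateError
assert sign(state, os.urandom(32)) != sign(state, stable_key)
```

With multiple gunicorn/uwsgi workers the same failure appears when the callback request hits a different worker than the login did.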
The problem seems to arise from non-persistence of the session between the setting of `session[_auth0_authlib_state_]` before the login attempt and its getting when the callback is fired. Here's some pertinent logging:
```
inside set_session_data:
request: <Request 'http://localhost:5000/login' [GET]>
key: state,
value: w4SnGtxMsOrledSvTHg3YzXkwCItHU
sess_key: _auth0_authlib_state_.
setting session[_auth0_authlib_state_] to w4SnGtxMsOrledSvTHg3YzXkwCItHU
session:[_auth0_authlib_state_]: w4SnGtxMsOrledSvTHg3YzXkwCItHU
inside app.flask_interface.user.auth_callback
inside _retrieve_oath2_access_token_params
inside get_session_data:
request: <Request 'http://localhost:5000/auth_callback?code=Sssrq1Yun_qhHkk4&state=w4SnGtxMsOrledSvTHg3YzXkwCItHU' [GET]>
key_param: state
generated session key: _auth0_authlib_state_
getting session.pop(_auth0_authlib_state_, None): no such key
back in app.flask_interface.user.auth_callback
request_state: w4SnGtxMsOrledSvTHg3YzXkwCItHU,
state: None
```
| closed | 2020-08-05T14:55:38Z | 2020-08-12T15:16:01Z | https://github.com/lepture/authlib/issues/256 | [] | circius | 5 |
dropbox/sqlalchemy-stubs | sqlalchemy | 114 | Enum interpreted as str | I have an SQLAlchemy model that makes use of the `Enum` column type. When accessing the field of an instance of this model, mypy believes that the type of the field is `str` even though it is actually an enum (e.g. `MyEnum`). This is annoying since, when I do want to access its value, mypy fails with `error: "str" has no attribute "value"`.
Although it cannot be run as such, the following snippet demonstrates the behavior when run through mypy. I would expect mypy to infer that `m.state` is of type `MyEnum` (really, in this snippet, it will be `None`, but in real code, it will be a `MyEnum`).
```python
import enum
from sqlalchemy import Column, Enum
from sqlalchemy.ext.declarative import declarative_base
class MyEnum(enum.Enum):
A = 'A'
B = 'B'
Base = declarative_base()
class MyModel(Base):
state = Column(Enum(MyEnum), nullable=False)
m = MyModel()
m.state.value
``` | open | 2019-10-02T08:24:55Z | 2020-05-04T07:58:36Z | https://github.com/dropbox/sqlalchemy-stubs/issues/114 | [
"enhancement",
"priority-normal",
"topic-stubs"
] | qsantos | 10 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,221 | [Bug]: sd_model.model.diffusion_model.dtype for SDXL still reports float when using --precison half | ### Checklist
- [X] The issue exists after disabling all (other) extensions
- [ ] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
`sd_models_xl.extend_sdxl()` adds a `shared.sd_model.model.diffusion_model.dtype` attribute to SDXL models, which is not updated after casting to float16 when using `--precision half`.
Anything relying on this attribute to determine the dtype of SDXL models will see float instead of float16.
SD1.5 models are unaffected from what I've seen.
I know of at least one extension that checks this attribute to determine the unet's dtype and is broken due to the misreported dtype: https://github.com/aria1th/sd-webui-deepcache-standalone/issues/9
**Hardcoding the extension to use float16 or using `devices.dtype_unet` effectively works around the bug.**
### Steps to reproduce the problem
1. Run webui with `--precision half`
2. Load any SDXL model.
3. Attempt to use [DeepCache](https://github.com/aria1th/sd-webui-deepcache-standalone) with `Refreshes caches when step is divisible by number` > 1
4. Exception due to extension expecting `float` based on misreported dtype but receiving `float16` instead
### What should have happened?
Extension runs without crashing.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
[sysinfo-2024-07-18-14-49.json](https://github.com/user-attachments/files/16284953/sysinfo-2024-07-18-14-49.json)
### Console logs
```Shell
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on hope user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
python venv already activate or run without venv: /home/hope/src/sd/stable-diffusion-webui/venv
################################################################
################################################################
Launching launch.py...
################################################################
glibc version is 2.39
Check TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/usr/lib/libtcmalloc_minimal.so.4
Python 3.11.9 (main, Apr 30 2024, 07:54:26) [GCC 13.2.1 20240417]
Version: v1.9.4-168-ge5dfc253
Commit hash: e5dfc2539efe017106c0539b12247cae45e9bb99
Launching Web UI with arguments: --api --flash-attn --precision half
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
ldm/sgm GroupNorm32 replaced with normal torch.nn.GroupNorm due to `--precision half`.
Loading weights [461c3bbd5c] from /home/hope/src/sd/stable-diffusion-webui/models/Stable-diffusion/SeaArtFurryXL1.0.safetensors
Creating model from config: /home/hope/src/sd/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 6.4s (prepare environment: 1.1s, import torch: 2.3s, import gradio: 0.4s, setup paths: 0.7s, other imports: 0.3s, load scripts: 0.6s, create ui: 0.6s, add APIs: 0.3s).
Loading VAE weights from user metadata: /home/hope/src/sd/stable-diffusion-webui/models/VAE/sdxl-vae-fp16-fix.safetensors
Applying attention optimization: flash_attn... done.
Textual inversion embeddings loaded(3): feffyxl1, feffyxl2, feffyxl3
Textual inversion embeddings skipped(4): boring_e621_fluffyrock_v4, boring_e621_unbound_lite, boring_e621_unbound_plus, detailed_e621
Model loaded in 4.5s (load weights from disk: 0.3s, create model: 0.7s, apply weights to model: 3.0s, calculate empty prompt: 0.2s).
0%| | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(e0wcb11dlj6by43)', <gradio.routes.Request object at 0x781745ecf190>, '', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, '<p style="margin-bottom:0.75em">Keyframe Format: <br>Seed | Prompt or just Prompt</p>', '', 25, True, 5.0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/home/hope/src/sd/stable-diffusion-webui/modules/call_queue.py", line 58, in f
res = list(func(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/modules/processing.py", line 847, in process_images
res = process_images_inner(p)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/modules/processing.py", line 984, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/modules/processing.py", line 1342, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 218, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_samplers_common.py", line 272, in launch_sampling
return func()
^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 218, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py", line 244, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_models_xl.py", line 43, in apply_model
return self.model(x, t, cond)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_hijack_utils.py", line 34, in __call__
return self.__sub_func(self.__orig_func, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/modules/sd_hijack_unet.py", line 50, in apply_model
result = orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/repositories/generative-models/sgm/modules/diffusionmodules/wrappers.py", line 28, in forward
return self.diffusion_model(
^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/extensions/sd-webui-deepcache-standalone/deepcache.py", line 126, in hijacked_unet_forward
emb = unet.time_embed(t_emb)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/container.py", line 217, in forward
input = module(input)
^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 527, in network_Linear_forward
return originals.Linear_forward(self, input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hope/src/sd/stable-diffusion-webui/venv/lib/python3.11/site-packages/torch/nn/modules/linear.py", line 116, in forward
return F.linear(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: mat1 and mat2 must have the same dtype, but got Float and Half
---
```
### Additional information
_No response_ | open | 2024-07-17T09:56:30Z | 2024-07-18T14:50:35Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16221 | [
"bug-report"
] | feffy380 | 3 |
dynaconf/dynaconf | flask | 1,183 | [bug] order of validators changes order of nested settings | **Describe the bug**
I'm not exactly sure if this is a bug or was intended when using validators, but it certainly caught me by surprise since it wasn't in the [documentation](https://www.dynaconf.com/) and [dicts have been ordered by default since python3.7](https://docs.python.org/3.7/whatsnew/3.7.html)
The order of validators affects the order of nested settings (i.e. nested dictionaries), as seen below. The only way to prevent it is either to reverse the order of the validators for nested settings, or to use `copy.deepcopy()` on `dynaconf_obj` before running the validators on it, so that the original order is preserved.
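The `copy.deepcopy()` workaround, reduced to plain dicts (a sketch — with Dynaconf one would snapshot `dynaconf_obj.as_dict()` before calling `validators.validate_all()`):

```python
import copy

settings = {"03_key": {"01_nested_key": "a", "02_nested_key": "b"}}

# snapshot taken before validators run keeps the order from the settings file
snapshot = copy.deepcopy(settings)

# simulate what validation appears to do today: nested keys are re-set
# in validator registration order, which reorders the dict
reordered = {k: settings["03_key"][k] for k in ("02_nested_key", "01_nested_key")}
settings["03_key"] = reordered

assert list(settings["03_key"]) == ["02_nested_key", "01_nested_key"]  # mutated
assert list(snapshot["03_key"]) == ["01_nested_key", "02_nested_key"]  # preserved
```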
**To Reproduce**
Steps to reproduce the behavior:
1. Having the following folder structure
<!-- Describe or use the command `$ tree -v` and paste below -->
<details>
<summary> Project structure </summary>
```bash
โ ~ tree -v dynaconf-bug
dynaconf-bug
โโโ __pycache__
โโโ pdm.lock
โโโ pyproject.toml
โโโ settings.yaml
โโโ src
โโโ dynaconf_bug
โโโ __init__.py
โโโ __main__.py
โโโ __pycache__
โโโ __init__.cpython-311.pyc
โโโ __main__.cpython-311.pyc
4 directories, 7 files
```
</details>
2. Having the following config files:
<!-- Please adjust if you are using different files and formats! -->
<details>
<summary> Config files </summary>
**dynaconf-bug/pyproject.toml**
```toml
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[project]
name = "dynaconf-bug"
version = "0.0.1"
requires-python = ">=3.11"
dependencies = [
"dynaconf",
"rich",
]
[project.scripts]
dynaconf_bug = "dynaconf_bug.__main__:main"
```
and
**dynaconf-bug/settings.yaml**
```yaml
01_key: '01_val'
02_key: '02_val'
03_key:
01_nested_key: '01_nested_val'
02_nested_key: '02_nested_val'
03_nested_key:
01_nested_nested_key: '01_nested_nested_val'
02_nested_nested_key: '02_nested_nested_val'
03_nested_nested_key: '03_nested_nested_val'
```
</details>
3. Having the following app code:
<details>
<summary> Code </summary>
**dynaconf-bug/src/dynaconf_bug/\_\_main\_\_.py**
```python
import dynaconf
import rich.pretty
import rich.panel


def main() -> None:
    dynaconf_obj = dynaconf.Dynaconf(settings_file="../../settings.yaml")

    settings_yaml_before_validation = dynaconf_obj.as_dict()
    rich.print(
        rich.panel.Panel(
            rich.pretty.Pretty(settings_yaml_before_validation, indent_guides=True, expand_all=True),
            title="settings_yaml_before_validation"
        )
    )

    def create_placeholder_validator(setting):
        return dynaconf.Validator(setting, condition=lambda value: True)

    dynaconf_obj.validators.register(
        create_placeholder_validator("01_key"),
        create_placeholder_validator("02_key"),
        create_placeholder_validator("03_key.01_nested_key"),
        create_placeholder_validator("03_key.02_nested_key"),
        create_placeholder_validator("03_key.03_nested_key.01_nested_nested_key"),
        create_placeholder_validator("03_key.03_nested_key.02_nested_nested_key"),
        create_placeholder_validator("03_key.03_nested_key.03_nested_nested_key"),
    )
    dynaconf_obj.validators.validate_all()

    settings_yaml_after_validation = dynaconf_obj.as_dict()
    rich.print(
        rich.panel.Panel(
            rich.pretty.Pretty(settings_yaml_after_validation, indent_guides=True, expand_all=True),
            title="settings_yaml_after_validation"
        )
    )

    dynaconf_obj.reload()
    settings_yaml_reverse_validation = dynaconf_obj.as_dict()
    dynaconf_obj.validators.register(
        create_placeholder_validator("01_key"),
        create_placeholder_validator("02_key"),
        create_placeholder_validator("03_key.02_nested_key"),
        create_placeholder_validator("03_key.01_nested_key"),
        create_placeholder_validator("03_key.03_nested_key.03_nested_nested_key"),
        create_placeholder_validator("03_key.03_nested_key.02_nested_nested_key"),
        create_placeholder_validator("03_key.03_nested_key.01_nested_nested_key"),
    )
    dynaconf_obj.validators.validate_all()
    rich.print(
        rich.panel.Panel(
            rich.pretty.Pretty(settings_yaml_reverse_validation, indent_guides=True, expand_all=True),
            title="settings_yaml_reverse_validation"
        )
    )


if __name__ == "__main__":
    main()
```
</details>
4. Executing under the following environment
<details>
<summary> Execution </summary>
```bash
➜  ~ pdm update --project dynaconf-bug
  0:00:00 🔒 Lock successful.
All packages are synced to date, nothing to do.
✔ Update dynaconf-bug 0.0.1 -> 0.0.1 successful
  0:00:00 🎉 All complete! 0/0
INFO: PDM 2.18.2 is installed, while 2.19.1 is available.
Please run `brew upgrade pdm` to upgrade.
Run `pdm config check_update false` to disable the check.
➜  ~ source dynaconf-bug/.venv/bin/activate
(dynaconf-bug-3.11) ➜  ~ dynaconf_bug
── settings_yaml_before_validation ──
{
    '01_KEY': '01_val',
    '02_KEY': '02_val',
    '03_KEY': {
        '01_nested_key': '01_nested_val',
        '02_nested_key': '02_nested_val',
        '03_nested_key': {
            '01_nested_nested_key': '01_nested_nested_val',
            '02_nested_nested_key': '02_nested_nested_val',
            '03_nested_nested_key': '03_nested_nested_val'
        }
    }
}
── settings_yaml_after_validation ──
{
    '01_KEY': '01_val',
    '02_KEY': '02_val',
    '03_KEY': {
        '03_nested_key': {
            '03_nested_nested_key': '03_nested_nested_val',
            '02_nested_nested_key': '02_nested_nested_val',
            '01_nested_nested_key': '01_nested_nested_val'
        },
        '02_nested_key': '02_nested_val',
        '01_nested_key': '01_nested_val'
    }
}
── settings_yaml_reverse_validation ──
{
    '01_KEY': '01_val',
    '02_KEY': '02_val',
    '03_KEY': {
        '01_nested_key': '01_nested_val',
        '02_nested_key': '02_nested_val',
        '03_nested_key': {
            '01_nested_nested_key': '01_nested_nested_val',
            '02_nested_nested_key': '02_nested_nested_val',
            '03_nested_nested_key': '03_nested_nested_val'
        }
    }
}
```
</details>
**Expected behavior**
I don't think the order of nested settings (i.e., nested dictionaries) should change, seeing as the outermost keys are not affected.
But if that is the expected behavior, then it should be documented on the [Validation page](https://www.dynaconf.com/validation/) at the very least.
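For background, the observed reordering is consistent with plain `dict` semantics in Python: updating an existing key in place keeps its position, while removing and re-inserting it moves the key to the end. A generic illustration (plain dicts, not dynaconf internals):

```python
# Illustration only: how dict operations affect key order.
d = {"a": 1, "b": 2, "c": 3}

d["a"] = 10                      # in-place update: order unchanged
assert list(d) == ["a", "b", "c"]

v = d.pop("a")                   # remove + re-insert: key moves to the end
d["a"] = v
assert list(d) == ["b", "c", "a"]
```

If the validation path re-inserts nested keys rather than updating them in place, that would produce exactly the after-validation ordering shown above.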
**Environment (please complete the following information):**
- OS: [e.g. Linux/Fedora29, Windows/x.x.x, Linux/Ubuntu16.x]
```
➜  ~ fastfetch
....
.',:clooo: .:looooo:. ------------------------
.;looooooooc .oooooooooo' OS: Ubuntu focal 20.04 x86_64
.;looooool:,''. :ooooooooooc Host: Precision 5570
;looool;. 'oooooooooo, Kernel: Linux 5.15.0-88-generic
;clool' .cooooooc. ,, Uptime: 14 days, 10 mins
... ...... .:oo, Packages: 2267 (dpkg), 15 (snap), 207 (brew)
.;clol:,. .loooo' Shell: zsh 5.9
:ooooooooo,  'ooool Display (LG HDR 4K): 3840x2160 @ 60 Hz in 31″ [External] *
'ooooooooooo. loooo. Display (SHP1515): 1920x1200 @ 60 Hz in 16″ [Built-in]
'ooooooooool coooo. Display (ARZOPA): 1920x1080 @ 30 Hz in 16″ [External]
,loooooooc. .loooo. Display (SyncMaster): 1920x1200 @ 60 Hz in 24″ [External]
.,;;;'. ;ooooc DE: GNOME 3.36.9
... ,ooool. WM: Mutter (X11)
.cooooc. ..',,'. .cooo. WM Theme: Adwaita
;ooooo:. ;oooooooc. :l. Theme: Adwaita [GTK2/3/4]
.coooooc,.. coooooooooo. Icons: Adwaita [GTK2/3/4]
.:ooooooolc:. .ooooooooooo' Font: Cantarell (11pt) [GTK2/3/4]
.':loooooo; ,oooooooooc Cursor: Adwaita (24px)
..';::c' .;loooo:' Terminal: tmux 3.4
CPU: 12th Gen Intel(R) Core(TM) i9-12900H (20) @ 5.00 GHz
GPU 1: NVIDIA Device 25BA (3D)
GPU 2: Intel Device 46A6 (VGA compatible) @ 1.45 GHz [Integrated]
Memory: 19.66 GiB / 62.47 GiB (31%)
Swap: 447.38 MiB / 2.00 GiB (22%)
Disk (/): 367.54 GiB / 929.04 GiB (40%) - ext4
Local IP (enx349971e7b9cf): 192.168.178.48/24
Battery (DELL M59JH32): 100% [AC Connected]
Locale: en_US.UTF-8
```
- Dynaconf Version [e.g. 2.0.0/source]
```
➜  ~ grep -B1 -A8 'name = "dynaconf"' dynaconf-bug/pdm.lock
[[package]]
name = "dynaconf"
version = "3.2.6"
requires_python = ">=3.8"
summary = "The dynamic configurator for your Python Project"
groups = ["default"]
files = [
{file = "dynaconf-3.2.6-py2.py3-none-any.whl", hash = "sha256:3911c740d717df4576ed55f616c7cbad6e06bc8ef23ffca444b6e2a12fb1c34c"},
{file = "dynaconf-3.2.6.tar.gz", hash = "sha256:74cc1897396380bb957730eb341cc0976ee9c38bbcb53d3307c50caed0aedfb8"},
]
```
- Frameworks in use (Flask, Django? versions..)
N/A
**Additional context**
Add any other context about the problem here.
| open | 2024-10-01T16:05:12Z | 2024-10-04T11:51:49Z | https://github.com/dynaconf/dynaconf/issues/1183 | [
"bug"
] | tan-wei-xin-alez | 1 |
koxudaxi/fastapi-code-generator | pydantic | 131 | Handle all request parameter types | Hi,
Thanks for this tool!
I'm trying to infer the request parameter type based on what's inside the "in" field of an operation's parameters.
It appears that only the Query parameter type is handled at the moment; is this a known issue? I'm willing to help and open a PR if needed.
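For reference, OpenAPI constrains `in` to `query`, `path`, `header`, or `cookie`; a minimal, hypothetical sketch (names are illustrative, not the generator's actual internals) of dispatching on that field:

```python
# Hypothetical sketch: map an OpenAPI parameter's "in" field to the
# corresponding FastAPI parameter function name. Not the actual
# fastapi-code-generator implementation.
PARAM_TYPES = {
    "query": "Query",
    "path": "Path",
    "header": "Header",
    "cookie": "Cookie",
}

def param_type_for(parameter: dict) -> str:
    # OpenAPI 3.x requires "in" to be one of query/path/header/cookie;
    # fall back to Query, matching the current behavior.
    location = parameter.get("in", "query")
    return PARAM_TYPES.get(location, "Query")

assert param_type_for({"in": "header", "name": "X-Token"}) == "Header"
```

In FastAPI terms, `Query`/`Path`/`Header`/`Cookie` are the parameter functions the generated signatures would then use.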
| open | 2021-04-02T07:59:19Z | 2021-04-05T15:12:22Z | https://github.com/koxudaxi/fastapi-code-generator/issues/131 | [] | rambobinator | 1 |
timkpaine/lantern | plotly | 114 | fix plot function to use clearly enumerated lists | closed | 2017-10-26T00:02:19Z | 2017-11-25T06:29:43Z | https://github.com/timkpaine/lantern/issues/114 | [
"bug",
"in progress"
] | timkpaine | 1 | |
pallets-eco/flask-wtf | flask | 50 | from flask.ext.wtf import * raises Exception | Trying that will raise:
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: Item in ``from list'' not a string
But doing
```
from flask.ext.wtf import Form
```
works fine. This is using Flask 0.9, WTForms 1.0.2 and Flask-WTF 0.8 on Debian 6 with python 2.6.
Doing
```
from flask.ext.wtf import *
```
on Windows 7 with Python 2.7 does work, though.
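For background, this `TypeError` is what Python raises when `from module import *` encounters a non-string item in the module's `__all__` (on Python 2, unicode entries triggered it; the message wording differs by version). A synthetic reproduction, not Flask-WTF itself:

```python
import sys
import types

# Synthetic module standing in for flask.ext.wtf: its __all__ contains a
# non-string item, which is what trips up `import *`.
mod = types.ModuleType("wtf_demo")
mod.Form = type("Form", (), {})
mod.__all__ = ["Form", 42]  # 42 is not a str
sys.modules["wtf_demo"] = mod

try:
    # `from ... import *` is only legal at module level, so exec a tiny module.
    exec("from wtf_demo import *", {})
except TypeError as exc:
    print("caught:", exc)
```

Importing a specific name (`from flask.ext.wtf import Form`) never touches `__all__`, which is why that form works.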
| closed | 2012-08-27T12:14:04Z | 2021-05-30T01:24:45Z | https://github.com/pallets-eco/flask-wtf/issues/50 | [] | rhyek | 5 |
ultralytics/ultralytics | computer-vision | 19,250 | mode.val(save_json=True),COCO API AssertionError: Results do not correspond to current coco set. | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
trian.py
```
if __name__ == '__main__':
from ultralytics import YOLO
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
model.train(data="coco8.yaml",device=0,batch=-1)
```
yolo2json.py
```
import os
import json
from PIL import Image
# Set dataset paths
output_dir = r"D:\YOLOv11\datasets\coco8"  # change to the YOLO-format dataset path
dataset_path = r"D:\YOLOv11\datasets\coco8"  # change to the COCO-format output dataset path
images_path = os.path.join(dataset_path, "images")
labels_path = os.path.join(dataset_path, "labels")
# Category id mapping
categories = [
{"id": 0, "name": "person"},
{"id": 1, "name": "bicycle"},
{"id": 2, "name": "car"},
{"id": 3, "name": "motorcycle"},
{"id": 4, "name": "airplane"},
{"id": 5, "name": "bus"},
{"id": 6, "name": "train"},
{"id": 7, "name": "truck"},
{"id": 8, "name": "boat"},
{"id": 9, "name": "traffic light"},
{"id": 10, "name": "fire hydrant"},
{"id": 11, "name": "stop sign"},
{"id": 12, "name": "parking meter"},
{"id": 13, "name": "bench"},
{"id": 14, "name": "bird"},
{"id": 15, "name": "cat"},  # modified here
{"id": 16, "name": "dog"},
{"id": 17, "name": "horse"},
{"id": 18, "name": "sheep"},
{"id": 19, "name": "cow"},
{"id": 20, "name": "elephant"},
{"id": 21, "name": "bear"},
{"id": 22, "name": "zebra"},
{"id": 23, "name": "giraffe"},
{"id": 24, "name": "backpack"},
{"id": 25, "name": "umbrella"},
{"id": 26, "name": "handbag"},
{"id": 27, "name": "tie"},
{"id": 28, "name": "suitcase"},
{"id": 29, "name": "frisbee"},
{"id": 30, "name": "skis"},
{"id": 31, "name": "snowboard"},
{"id": 32, "name": "sports ball"},
{"id": 33, "name": "kite"},
{"id": 34, "name": "baseball bat"},
{"id": 35, "name": "baseball glove"},
{"id": 36, "name": "skateboard"},
{"id": 37, "name": "surfboard"},
{"id": 38, "name": "tennis racket"},
{"id": 39, "name": "bottle"},
{"id": 40, "name": "wine glass"},
{"id": 41, "name": "cup"},
{"id": 42, "name": "fork"},
{"id": 43, "name": "knife"},
{"id": 44, "name": "spoon"},
{"id": 45, "name": "bowl"},
{"id": 46, "name": "banana"},
{"id": 47, "name": "apple"},
{"id": 48, "name": "sandwich"},
{"id": 49, "name": "orange"},
{"id": 50, "name": "broccoli"},
{"id": 51, "name": "carrot"},
{"id": 52, "name": "hot dog"},
{"id": 53, "name": "pizza"},
{"id": 54, "name": "donut"},
{"id": 55, "name": "cake"},
{"id": 56, "name": "chair"},
{"id": 57, "name": "couch"},
{"id": 58, "name": "potted plant"},
{"id": 59, "name": "bed"},
{"id": 60, "name": "dining table"},
{"id": 61, "name": "toilet"},
{"id": 62, "name": "tv"},
{"id": 63, "name": "laptop"},
{"id": 64, "name": "mouse"},
{"id": 65, "name": "remote"},
{"id": 66, "name": "keyboard"},
{"id": 67, "name": "cell phone"},
{"id": 68, "name": "microwave"},
{"id": 69, "name": "oven"},
{"id": 70, "name": "toaster"},
{"id": 71, "name": "sink"},
{"id": 72, "name": "refrigerator"},
{"id": 73, "name": "book"},
{"id": 74, "name": "clock"},
{"id": 75, "name": "vase"},
{"id": 76, "name": "scissors"},
{"id": 77, "name": "teddy bear"},
{"id": 78, "name": "hair drier"},
{"id": 79, "name": "toothbrush"}
]
# Convert a YOLO-format box to a COCO-format box
def convert_yolo_to_coco(x_center, y_center, width, height, img_width, img_height):
    x_min = (x_center - width / 2) * img_width
    y_min = (y_center - height / 2) * img_height
    width = width * img_width
    height = height * img_height
    return [x_min, y_min, width, height]
# Initialize the COCO data structure
def init_coco_format():
    return {
        "images": [],
        "annotations": [],
        "categories": categories
    }
# Process each dataset split
for split in ['train', 'val']:  # 'test'
    coco_format = init_coco_format()
    annotation_id = 1
    for img_name in os.listdir(os.path.join(images_path, split)):
        if img_name.lower().endswith(('.png', '.jpg', '.jpeg')):
            img_path = os.path.join(images_path, split, img_name)
            label_path = os.path.join(labels_path, split, img_name.replace("jpg", "txt"))
            img = Image.open(img_path)
            img_width, img_height = img.size
            image_info = {
                "file_name": img_name,
                "id": len(coco_format["images"]) + 1,
                "width": img_width,
                "height": img_height
            }
            coco_format["images"].append(image_info)
            if os.path.exists(label_path):
                with open(label_path, "r") as file:
                    for line in file:
                        category_id, x_center, y_center, width, height = map(float, line.split())
                        bbox = convert_yolo_to_coco(x_center, y_center, width, height, img_width, img_height)
                        annotation = {
                            "id": annotation_id,
                            "image_id": image_info["id"],
                            "category_id": int(category_id) + 1,
                            "bbox": bbox,
                            "area": bbox[2] * bbox[3],
                            "iscrowd": 0
                        }
                        coco_format["annotations"].append(annotation)
                        annotation_id += 1

    # Save a JSON file for each split
    with open(os.path.join(output_dir, f"{split}_coco_format.json"), "w") as json_file:
        json.dump(coco_format, json_file, indent=4)
```
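As a quick sanity check, the center-normalized YOLO box to top-left-absolute COCO box conversion above can be exercised on its own (the function is repeated here so the snippet is self-contained):

```python
# Same conversion as in yolo2json.py above, repeated for a standalone check:
# (x_center, y_center, w, h) normalized -> [x_min, y_min, w, h] in pixels.
def convert_yolo_to_coco(x_center, y_center, width, height, img_width, img_height):
    x_min = (x_center - width / 2) * img_width
    y_min = (y_center - height / 2) * img_height
    return [x_min, y_min, width * img_width, height * img_height]

# A box centered in a 640x480 image covering half of each dimension:
assert convert_yolo_to_coco(0.5, 0.5, 0.5, 0.5, 640, 480) == [160.0, 120.0, 320.0, 240.0]
```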
vail.py
```
if __name__ == '__main__':
from ultralytics import YOLO
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
model = YOLO("runs/detect/train11/weights/best.pt") # load a pretrained model (recommended for training)
results=model.val(data="coco8.yaml",save_json=True,device=0,batch=1)
anno = COCO("D:/YOLOv11/datasets/coco8/val_coco_format.json") # Load your JSON annotations
pred = anno.loadRes(f"{results.save_dir}/predictions.json") # Load predictions.json
val = COCOeval(anno, pred, "bbox")
val.evaluate()
val.accumulate()
val.summarize()
```
vail.py error output:
```
(yolov11) D:\YOLOv11>python vail.py
Ultralytics 8.3.18 🚀 Python-3.11.7 torch-2.6.0+cu126 CUDA:0 (NVIDIA GeForce RTX 4060 Ti, 16380MiB)
YOLO11n summary (fused): 238 layers, 2,616,248 parameters, 0 gradients, 6.5 GFLOPs
val: Scanning D:\YOLOv11\datasets\coco8\labels\val.cache... 4 images, 0 backgrounds, 0 corrupt: 100%|██████████| 4/4 [00:00<?, ?it/s]
Class Images Instances Box(P R mAP50 mAP50-95): 100%|██████████| 4/4 [00:01<00:00, 2.88it/s]
all 4 17 0.802 0.66 0.864 0.593
person 3 10 0.82 0.461 0.695 0.347
dog 1 1 0.707 1 0.995 0.697
horse 1 2 0.835 1 0.995 0.473
elephant 1 2 0.779 0.5 0.508 0.153
umbrella 1 1 0.669 1 0.995 0.995
potted plant 1 1 1 0 0.995 0.895
Speed: 2.2ms preprocess, 26.6ms inference, 0.0ms loss, 14.8ms postprocess per image
Saving runs\detect\val18\predictions.json...
Results saved to runs\detect\val18
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Loading and preparing results...
Traceback (most recent call last):
  File "D:\YOLOv11\vail.py", line 56, in <module>
    pred = anno.loadRes(f"{results.save_dir}/predictions.json")  # Load predictions.json
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\ProgramData\anaconda3\Lib\site-packages\pycocotools\coco.py", line 327, in loadRes
    assert set(annsImgIds) == (set(annsImgIds) & set(self.getImgIds())), \
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Results do not correspond to current coco set
```
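For what it's worth, `COCO.loadRes` raises this assertion when a prediction's `image_id` is absent from the ground-truth image ids; Ultralytics derives `image_id` from the image filename stem, while the converter above assigns sequential ids, so the two have to be reconciled. A hypothetical reconciliation sketch (key names and paths are assumptions):

```python
import json  # only needed for the commented-out usage below

# Hypothetical sketch: remap prediction image_ids (filename stems) to the
# sequential ids assigned by the converter above.
def remap_image_ids(gt: dict, preds: list) -> list:
    stem_to_id = {img["file_name"].rsplit(".", 1)[0]: img["id"]
                  for img in gt["images"]}
    for p in preds:
        # note: Ultralytics turns purely numeric stems into ints, so
        # zero-padded filenames may need extra handling.
        p["image_id"] = stem_to_id.get(str(p["image_id"]), p["image_id"])
    return preds

# usage (paths assumed):
# gt = json.load(open("val_coco_format.json"))
# preds = json.load(open("predictions.json"))
# json.dump(remap_image_ids(gt, preds), open("predictions_fixed.json", "w"))
```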
### Additional
I don't know why there's an error | closed | 2025-02-14T15:42:45Z | 2025-02-18T11:19:04Z | https://github.com/ultralytics/ultralytics/issues/19250 | [
"question",
"detect"
] | SDIX-7 | 7 |
ResidentMario/missingno | pandas | 129 | Add legend/labeling to graphs | The graphs on the example page are vague as to whether the indicators are for present or missing data. The library name would lead one to believe that the black sections might be missing data, and it's only on reading the accompanying text/description that it's clear that the reverse is true.
A simple legend or label of some sort should be added to the graphs indicating which colors are missing data vs. present data. Without this, the only way for graphs made elsewhere to be clear is for users to track back to the descriptions on the example page here or check manually, which rather defeats the purpose of the library.
Otherwise fantastic library; thanks so much for your great work! | closed | 2021-03-09T14:31:15Z | 2022-02-20T01:06:22Z | https://github.com/ResidentMario/missingno/issues/129 | [
"feature request"
] | tomshaffner | 1 |
biolab/orange3 | numpy | 6,876 | Discretize: rounding problem | <!--
Thanks for taking the time to report a bug!
If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3
To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability.
-->
**What's wrong?**
<!-- Be specific, clear, and concise. Include screenshots if relevant. -->
<!-- If you're getting an error message, copy it, and enclose it with three backticks (```). -->
Using PCA on the Titanic dataset and discretizing the output results in strange rounding of the values by the discretization. This results in multiple values with the same "name".
This is the workflow I have (I have also included the .ows):

Here on the left is the Data Table that shows the results of the PCA (pay attention to the PC7 attribute). On the right we can see the results of the Discretize widget, the discretized PC7 attribute has been rounded strangely, there are also multiple PC7 values with the same "name" (highlighted).

**How can we reproduce the problem?**
Zip of the workflow:
[discretize_bug.zip](https://github.com/user-attachments/files/16691162/discretize_bug.zip)
To reproduce the problem, set the PCA components to 8 in the provided workflow.

**What's your environment?**
<!-- To find your Orange version, see "Help โ About โ Version" or `Orange.version.full_version` in code -->
- Operating system: Windows 10
- Orange version: 3.38
- How you installed Orange: Using pip in a conda environment
| closed | 2024-08-21T12:10:14Z | 2024-11-23T18:39:02Z | https://github.com/biolab/orange3/issues/6876 | [
"bug",
"snack"
] | ZanMervic | 3 |