| column | type | range |
|---|---|---|
| repo_name | stringlengths | 9-75 |
| topic | stringclasses | 30 values |
| issue_number | int64 | 1-203k |
| title | stringlengths | 1-976 |
| body | stringlengths | 0-254k |
| state | stringclasses | 2 values |
| created_at | stringlengths | 20-20 |
| updated_at | stringlengths | 20-20 |
| url | stringlengths | 38-105 |
| labels | listlengths | 0-9 |
| user_login | stringlengths | 1-39 |
| comments_count | int64 | 0-452 |
roboflow/supervision
pytorch
1,325
comment utilser les resultas de ce model hors connectiom
### Search before asking - [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests. ### Question j'ai projet sur le quel je travail qui consiste a calculer les dimensions reelles en mm d'un oeuf a partir de son image et je voulais savoir comment integrer ce model a mon projet et comment utiliser ses resultats pour avoir les dimensions en pixels de l'oeuf sur l'image ### Additional _No response_
closed
2024-07-04T03:04:03Z
2024-07-05T11:41:13Z
https://github.com/roboflow/supervision/issues/1325
[ "question" ]
Tkbg237
0
flairNLP/flair
pytorch
2,982
'TextClassifier' object has no attribute 'embeddings'
`TARSClassifier.load` raises an error:

```
AttributeError                            Traceback (most recent call last)
<ipython-input-13-710c2b4d40e4> in <module>
----> 1 tars = TARSClassifier.load('/content/drive/MyDrive/Text_classification/final-model.pt')

/usr/local/lib/python3.7/dist-packages/flair/nn/model.py in load(cls, model_path)
    147         state = torch.load(f, map_location="cpu")
    148
--> 149         model = cls._init_model_with_state_dict(state)
    150
    151         if "model_card" in state:

/usr/local/lib/python3.7/dist-packages/flair/models/tars_model.py in _init_model_with_state_dict(cls, state, **kwargs)
    739             label_dictionary=state.get("label_dictionary"),
    740             label_type=state.get("label_type", "default_label"),
--> 741             embeddings=state.get("tars_model").embeddings,
    742             num_negative_labels_to_sample=state.get("num_negative_labels_to_sample"),
    743             **kwargs,

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
   1206                 return modules[name]
   1207         raise AttributeError("'{}' object has no attribute '{}'".format(
-> 1208             type(self).__name__, name))
   1209
   1210     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:

AttributeError: 'TextClassifier' object has no attribute 'embeddings'
```
closed
2022-11-08T04:36:58Z
2022-11-09T15:51:44Z
https://github.com/flairNLP/flair/issues/2982
[ "bug" ]
pranavan-rbg
6
labmlai/annotated_deep_learning_paper_implementations
pytorch
144
bug in switch transformer when using torch.bfloat16
https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/25ad4d675039f1eccabb2f7ca6c14b11ee8d02c1/labml_nn/transformers/switch/__init__.py#L139 Here `final_output.dtype` is `torch.float32` while `expert_output[i].dtype` is `torch.bfloat16`. The dtype of `final_output` should be set to match, e.g. `final_output = x.new_zeros(x.shape, dtype=expert_output[0].dtype)`
closed
2022-08-24T12:22:48Z
2022-10-13T11:17:13Z
https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/144
[ "enhancement" ]
DogeWatch
1
huggingface/datasets
computer-vision
7,440
IterableDataset raises FileNotFoundError instead of retrying
### Describe the bug

In https://github.com/huggingface/datasets/issues/6843 it was noted that the streaming feature of `datasets` is highly susceptible to outages and doesn't back off for long (or even *at all*). I was training a model while streaming SlimPajama and training crashed with a `FileNotFoundError`. I can only assume that this was due to a momentary outage, considering the file in question, `train/chunk9/example_train_3889.jsonl.zst`, [exists like all other files in SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B/blob/main/train/chunk9/example_train_3889.jsonl.zst).

```python
...
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 2226, in __iter__
    for key, example in ex_iterable:
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1499, in __iter__
    for x in self.ex_iterable:
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1067, in __iter__
    yield from self._iter()
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1231, in _iter
    for key, transformed_example in iter_outputs():
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1207, in iter_outputs
    for i, key_example in inputs_iterator:
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1111, in iter_inputs
    for key, example in iterator:
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 371, in __iter__
    for key, pa_table in self.generate_tables_fn(**gen_kwags):
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/packaged_modules/json/json.py", line 99, in _generate_tables
    for file_idx, file in enumerate(itertools.chain.from_iterable(files)):
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/utils/track.py", line 50, in __iter__
    for x in self.generator(*self.args):
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/utils/file_utils.py", line 1378, in _iter_from_urlpaths
    raise FileNotFoundError(urlpath)
FileNotFoundError: zstd://example_train_3889.jsonl::hf://datasets/cerebras/SlimPajama-627B@2d0accdd58c5d5511943ca1f5ff0e3eb5e293543/train/chunk9/example_train_3889.jsonl.zst
```

That final `raise` is at the bottom of the following snippet:

https://github.com/huggingface/datasets/blob/f693f4e93aabafa878470c80fd42ddb10ec550d6/src/datasets/utils/file_utils.py#L1354-L1379

So clearly, something choked up in `xisfile`.

### Steps to reproduce the bug

This happens when streaming a dataset and iterating over it. In my case, that iteration is done in Trainer's `inner_training_loop`, but this is not relevant to the iterator.

```python
  File "/miniconda3/envs/draft/lib/python3.11/site-packages/accelerate/data_loader.py", line 835, in __iter__
    next_batch, next_batch_info = self._fetch_batches(main_iterator)
```

### Expected behavior

This bug and the linked issue have one thing in common: *when streaming fails to retrieve an example, the entire program gives up and crashes*. As users, we cannot even protect ourselves from this: when we are iterating over a dataset, we can't make `datasets` skip over a bad example or wait a little longer to retry the iteration, because when a Python generator/iterator raises an error, it loses all its context. In other words: if you have something that looks like `for b in a: for c in b: for d in c:`, errors in the innermost loop can only be caught by a `try ... except` in `c.__iter__()`.

There should be such exception handling in `datasets` and it should have a **configurable exponential back-off**: first wait and retry after 1 minute, then 2 minutes, then 4 minutes, then 8 minutes, ... and after a given amount of retries, **skip the bad example**, and **only after** skipping a given amount of examples, give up and crash. This was requested in https://github.com/huggingface/datasets/issues/6843 too, since currently there is only linear backoff *and* it is clearly not applied to `xisfile`.

### Environment info

- `datasets` version: 3.3.2 *(the latest version)*
- Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.7
- `huggingface_hub` version: 0.26.5
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2024.10.0
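The requested behavior can be sketched as a standalone retry wrapper. Nothing here is a `datasets` API: `make_iter`, the retry count, and the delays are all illustrative, and the "skip the bad example" fallback is left out for brevity.

```python
import time


def iterate_with_backoff(make_iter, max_retries=5, base_delay=60.0, sleep=time.sleep):
    """Drain an iterator, retrying transient failures with exponential backoff:
    base_delay, 2*base_delay, 4*base_delay, ...

    `make_iter(skip)` must return a fresh iterator that skips the first `skip`
    items, so iteration can resume after the last successfully yielded example
    (a raised generator cannot be resumed in place).
    """
    yielded = 0   # how many examples have been delivered so far
    retries = 0   # consecutive failures since the last successful example
    while True:
        try:
            for item in make_iter(yielded):
                yielded += 1
                retries = 0          # reset the backoff after any success
                yield item
            return                   # source exhausted normally
        except FileNotFoundError:
            if retries >= max_retries:
                raise                # give up only after max_retries in a row
            sleep(base_delay * (2 ** retries))
            retries += 1
```

The key design point is that the wrapper owns the `try ... except`, which is exactly the layer users cannot reach from outside a nested generator chain.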
open
2025-03-07T19:14:18Z
2025-03-22T21:48:02Z
https://github.com/huggingface/datasets/issues/7440
[]
bauwenst
5
pytest-dev/pytest-qt
pytest
530
How to use QAbstractItemModelTester?
I've read the docs at https://doc.qt.io/qt-6/qabstractitemmodeltester.html and https://pytest-qt.readthedocs.io/en/latest/modeltester.html, but I still fail to understand how to use `QAbstractItemModelTester` to test a custom `QAbstractItemModel`. The following code/pseudocode (in my actual code I instantiate the model/view and add data properly) does not seem to do anything, or else I am not properly configuring things to get the reports:

```python
app = QApplication(sys.argv)

model = ...  # this is a custom tree model derived from QAbstractItemModel
# add some data to model...

view = ...  # custom tree view derived from QTreeView
view.setModel(model)
view.show()

tester = QAbstractItemModelTester(model, QAbstractItemModelTester.FailureReportingMode.Fatal)

app.exec()
# GOOD: at this point the model is displayed as expected
# BAD: now I manipulate the model in a way that causes a seg fault,
# but I don't see how to get any info from QAbstractItemModelTester as to why the seg fault is occurring
```

I'm hoping to get some debug info as to why the seg fault is occurring (yes, I've already tried commenting out tons of things in my code, but I still haven't been able to track it down, so I'm hoping to use QAbstractItemModelTester). However, I don't see where QAbstractItemModelTester might be logging any such info, if it is working at all. Probably I'm not understanding how to use it correctly, so any help would be awesome, as I find the docs mentioned above uninformative. Note, I am using VSCode if that matters. Cheers
closed
2023-11-23T19:20:40Z
2023-11-27T16:43:44Z
https://github.com/pytest-dev/pytest-qt/issues/530
[]
marcel-goldschen-ohm
8
cleanlab/cleanlab
data-science
357
CI/Docs: Be able to build docs locally with flags to skip certain tutorials
By default, the docs build executes all tutorials, but we want an optional flag users can add to specify which tutorials to skip when building the docs locally.
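A minimal sketch of such a flag, assuming a hypothetical argparse-based entry point (cleanlab's actual docs build scripts may be wired differently):

```python
import argparse


def parse_args(argv=None):
    """Parse docs-build options, including which tutorials to skip."""
    parser = argparse.ArgumentParser(description="Build docs locally")
    parser.add_argument(
        "--skip-tutorial",
        action="append",        # flag may be repeated, collecting a list
        default=[],
        metavar="NAME",
        help="Tutorial to skip; repeat the flag to skip several.",
    )
    return parser.parse_args(argv)


def tutorials_to_run(all_tutorials, skipped):
    """Filter the tutorial list, preserving the original build order."""
    skipped = set(skipped)
    return [t for t in all_tutorials if t not in skipped]
```

Users could then run e.g. `build_docs --skip-tutorial outliers --skip-tutorial audio` to execute everything except those two notebooks.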
closed
2022-08-23T23:37:23Z
2022-11-28T21:06:53Z
https://github.com/cleanlab/cleanlab/issues/357
[ "enhancement" ]
jwmueller
4
ray-project/ray
tensorflow
50,682
[Core] ray distributed debugger, always connecting to cluster..
### What happened + What you expected to happen I have a problem with the Ray distributed debugger. The VSCode plugin shows "Connecting to cluster..." but I can't figure out where the problem is. ![Image](https://github.com/user-attachments/assets/3f89ce1f-c83a-4c22-abff-ca2b9ad64e1d) If I want to solve this problem, where should I start? Are there any logs on the Ray cluster side when the VSCode plugin tries to connect to the cluster? Is the code of the VSCode plugin open source? If there is source code, I think I can fix it myself. Does the VSCode plugin not support IPv6? We added IPv6 support on the Ray cluster side ourselves. ### Versions / Dependencies 2.10 ### Reproduction script None ### Issue Severity None
closed
2025-02-18T13:07:37Z
2025-03-07T03:09:52Z
https://github.com/ray-project/ray/issues/50682
[ "bug", "P1", "debugger" ]
MissiontoMars
6
ray-project/ray
python
51,622
Release test training_ingest_benchmark-task=image_classification.skip_training failed
Release test **training_ingest_benchmark-task=image_classification.skip_training** failed. See https://buildkite.com/ray-project/release/builds/36666#0195bc7b-c2f9-4ebf-8c1e-25b99e6432b0 for more details. Managed by OSS Test Policy
open
2025-03-22T19:53:43Z
2025-03-22T19:53:47Z
https://github.com/ray-project/ray/issues/51622
[ "bug", "P0", "triage", "release-test", "ray-test-bot", "weekly-release-blocker", "stability", "ml" ]
can-anyscale
1
flasgger/flasgger
flask
631
How to show parameters and responses in Swagger UI
I'm trying to make the parameters and responses defined in the user.yaml file appear in the Swagger UI, because I'm referencing this file in my application's user.py via `@swag_from("swagger/user.yaml")`. ![Image](https://github.com/user-attachments/assets/eeae1957-fcd5-4988-a8c7-3ed8e6c35992) ![Image](https://github.com/user-attachments/assets/537483d6-340c-4d4a-b513-9eb8abc5918e)
open
2025-02-19T14:57:05Z
2025-02-19T14:57:05Z
https://github.com/flasgger/flasgger/issues/631
[]
pedrohaherzog-2005
0
coqui-ai/TTS
deep-learning
2,724
[Bug] where is tts.model.bark?
### Describe the bug

This file cannot be found:

```
Traceback (most recent call last):
ModuleNotFoundError: No module named 'TTS.tts.models.bark'
```

### To Reproduce

```python
from TTS.tts.models.bark import BarkAudioConfig
```

This file cannot be found:

```
Traceback (most recent call last):
ModuleNotFoundError: No module named 'TTS.tts.models.bark'
```

### Expected behavior

_No response_

### Logs

```shell
development
```

### Environment

```shell
this file cannot be found
Traceback (most recent call last):
ModuleNotFoundError: No module named 'TTS.tts.models.bark'
```

### Additional context

_No response_
closed
2023-06-30T00:59:50Z
2023-06-30T12:02:21Z
https://github.com/coqui-ai/TTS/issues/2724
[ "bug" ]
acordova200
1
alteryx/featuretools
scikit-learn
2,756
how to add a dataframe that rows are valid for a period of time with featuretools
I am working on a dataset with multiple tables and using the featuretools library for feature engineering. One of the tables, which is NOT the target dataframe, comes with several columns. Three of the columns are relevant here: ['rating', 'valid_from', 'valid_to']. I use valid_from as the time_index but am not sure how to incorporate the valid_to column. If this were the target dataframe I could have used valid_to as cutoffs, but since it's not the target dataframe I don't know how to set up the problem so there is no data leakage. I also thought of using valid_to as the time_index, but then I am not sure how to incorporate the valid_from column.
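Independent of Featuretools' API (which this sketch does not use), the leakage-safe rule behind the question is: a row may contribute to features at a cutoff time t only if `valid_from <= t < valid_to`. A plain-Python illustration with hypothetical rows:

```python
from datetime import datetime


def rows_valid_at(rows, cutoff):
    """Keep only rows whose validity window contains the cutoff time,
    so features never see a rating that was not yet (or no longer) in effect.
    A valid_to of None means the row is still valid."""
    return [
        r for r in rows
        if r["valid_from"] <= cutoff
        and (r["valid_to"] is None or cutoff < r["valid_to"])
    ]
```

Applying this filter per cutoff before handing the table to any feature-engineering step enforces the same constraint that cutoff times enforce on the target dataframe.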
open
2024-10-30T21:57:13Z
2024-11-11T04:24:20Z
https://github.com/alteryx/featuretools/issues/2756
[]
eddyfathi
2
plotly/dash
jupyter
2,421
[BUG]dcc.Store in main layout, ctx.triggered_id and prevent_initial_call are abnormal
Thank you so much for helping improve the quality of Dash! We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.

**Describe your context** Please provide us your environment, so we can easily reproduce the issue.

- result of `pip list | grep dash`:

```
dash                 2.7.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-renderer        not installed
dash-table           5.0.0
```

- OS: win10
- Browser: edge
- Version: 109.0.1518.78

**Describe the bug** If dcc.Store is in the main layout:
1. when the page is reloaded, ctx.triggered_id is not None.
2. when prevent_initial_call = True and the page is reloaded, the callback is executed.

**Expected behavior** If dcc.Store is in the main layout:
1. when the page is reloaded, ctx.triggered_id is None.
2. when prevent_initial_call = True and the page is reloaded, the callback is not executed.

**Screenshots** This project can reproduce the bug: https://github.com/AnnMarieW/dash-multi-page-app-demos/tree/main/multi_page_store

```python
# the code location: multi_page_store/pages/graph.py
@callback(
    Output("store", "data"),
    Input("year", "value"),
    prevent_initial_call=True  # !!! does not work
)
def get_data(year):
    print(ctx.triggered_id)  # !!! when the page is reloaded, ctx.triggered_id is not None
    dff = df.query(f"year=={year}")
    store = {
        "data": dff.to_dict("records"),
        "columns": [{"name": i, "id": i} for i in dff.columns],
    }
    return store
```
closed
2023-02-11T10:08:49Z
2024-07-25T13:05:31Z
https://github.com/plotly/dash/issues/2421
[]
xiongyifan
2
alirezamika/autoscraper
automation
64
scraper.build returns a blank list
Here is the code to reproduce:

```python
from autoscraper import AutoScraper

class Scraper():
    wanted_list = ["0.79"]
    origUrl = 'https://www.sec.gov/Archives/edgar/data/0001744489/000174448921000105/fy2021_q2xprxex991.htm'
    newUrl = 'https://www.sec.gov/Archives/edgar/data/0001744489/000174448921000179/fy2021_q3xprxex991.htm'
    path = "Alpaca/Scraper/sec/file.txt"

    def scrape(self):
        scraper = AutoScraper()
        result = scraper.build(self.origUrl, self.wanted_list)
        print(result)
        result = scraper.get_result_exact(self.newUrl)
        print(result)

if __name__ == '__main__':
    scraper = Scraper()
    scraper.scrape()
```

Here is the log:

```
[]
[]
```

Expected:

```
[0.79]
[0.80]
```
closed
2021-08-15T01:56:13Z
2021-12-01T08:29:51Z
https://github.com/alirezamika/autoscraper/issues/64
[]
p595285902
1
pydata/pandas-datareader
pandas
435
FRED no longer working
FRED seems to have changed the URL structure for downloading CSV. I'm using the latest development version of pandas-datareader. Using the example from the docs I get:

```
pandas_datareader._utils.RemoteDataError: Unable to read URL: http://research.stlouisfed.org/fred2/series/GDP/downloaddata/GDP.csv
```
closed
2018-01-07T09:47:57Z
2018-01-13T10:57:45Z
https://github.com/pydata/pandas-datareader/issues/435
[]
jhoodsmith
8
flairNLP/flair
pytorch
2,953
How to use the trained model for named entity recognition
I've trained a named-entity-related model; now how can I use the model to recognize named entities in my own sentences? Can you provide relevant examples? I looked at this example:

```python
model = SequenceTagger.load('resources/taggers/example-upos/final-model.pt')
sentence = Sentence('I love Berlin')
model.predict(sentence)
print(sentence.to_tagged_string())
```

But the results seemed different from what I wanted.
closed
2022-10-01T03:10:17Z
2023-04-02T16:54:25Z
https://github.com/flairNLP/flair/issues/2953
[ "question", "wontfix" ]
yaoysyao
1
albumentations-team/albumentations
machine-learning
2,100
[Add transform] Add RandomMedianBlur
Add `RandomMedianBlur`, an alias of `MedianBlur` with the same API as Kornia's https://kornia.readthedocs.io/en/latest/augmentation.module.html#kornia.augmentation.RandomMedianBlur
closed
2024-11-08T15:52:57Z
2024-11-17T01:30:23Z
https://github.com/albumentations-team/albumentations/issues/2100
[ "enhancement" ]
ternaus
1
Kaliiiiiiiiii-Vinyzu/patchright-python
web-scraping
15
[feature request] support `browser use`
*Bug Description*

I can't seem to integrate this with https://github.com/browser-use/browser-use.

Reproduction Steps
1. Install Browser Use from https://github.com/browser-use/browser-use
2. `pip install patchright` from https://github.com/Kaliiiiiiiiii-Vinyzu/patchright-python
3. `patchright install chromium` from https://github.com/Kaliiiiiiiiii-Vinyzu/patchright-python
4. Run the script

Code Sample

```python
import asyncio
from langchain_google_genai import ChatGoogleGenerativeAI
from browser_use import Agent, Browser, BrowserConfig, SystemPrompt
from browser_use.browser.context import BrowserContext, BrowserContextConfig

# patchright here!
from patchright.async_api import async_playwright

async def main():
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=False)
        page = await browser.new_page()
        await page.goto('https://chat.deepseek.com/sign_in')
        await page.wait_for_load_state('networkidle')
        await asyncio.sleep(3)

        config = BrowserConfig(
            headless=False,
            disable_security=True,
            extra_chromium_args=[
                "--no-sandbox",
                "--disable-setuid-sandbox",
                "--disable-dev-shm-usage",
                "--disable-gpu",
                "--disable-web-security",
                "--disable-dev-mode",    # disable dev mode
                "--disable-debug-mode",  # disable debug mode
                "--remote-debugging-port=9223",
            ],
        )

        browser_context = BrowserContext(
            config=BrowserContextConfig(maximum_wait_page_load_time=60),
            browser=browser
        )

        llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash-exp")

        agent = Agent(
            task="type apples@jack.com in email address",
            llm=llm,
            browser=browser,
            browser_context=browser_context,
        )

        result = await agent.run()
        await asyncio.sleep(10)
        await browser.close()

asyncio.run(main())
```

Version
0.1.29

LLM Model
Other (specify in description)

Operating System
Windows 11

Relevant Log Output

```
(Browser use) D:\AI\Browser use>py test_new.py
INFO     [browser_use] BrowserUse logging setup complete with level info
INFO     [root] Anonymized telemetry enabled. See https://github.com/browser-use/browser-use for more information.
INFO     [agent] 🚀 Starting task: type apples@jack.com in email address
INFO     [agent] 📍 Step 1
WARNING  [browser] Page load failed, continuing...
ERROR    [agent] ❌ Result failed 1/3 times: 'Browser' object has no attribute 'get_playwright_browser'
INFO     [agent] 📍 Step 1
WARNING  [browser] Page load failed, continuing...
ERROR    [agent] ❌ Result failed 2/3 times: 'Browser' object has no attribute 'get_playwright_browser'
INFO     [agent] 📍 Step 1
WARNING  [browser] Page load failed, continuing...
ERROR    [agent] ❌ Result failed 3/3 times: 'Browser' object has no attribute 'get_playwright_browser'
ERROR    [agent] ❌ Stopping due to 3 consecutive failures
WARNING  [agent] No history to create GIF from
```
closed
2025-01-31T04:25:26Z
2025-02-10T22:41:09Z
https://github.com/Kaliiiiiiiiii-Vinyzu/patchright-python/issues/15
[ "enhancement", "third-party" ]
scaruslooner
2
Guovin/iptv-api
api
719
[Bug]: Stuck again at the Sorting stage
### Don't skip these steps
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
- [X] I have checked through the search that there are no similar issues that already exist
- [X] I will not submit any issues that are not related to this project
### Occurrence environment
- [X] Workflow
- [ ] GUI
- [ ] Docker
- [ ] Command line
### Bug description
It's stuck again.
### Error log
_No response_
closed
2024-12-21T11:30:24Z
2024-12-23T09:25:19Z
https://github.com/Guovin/iptv-api/issues/719
[ "bug" ]
zhycn9033
1
Evil0ctal/Douyin_TikTok_Download_API
api
45
It says there is an update, and then nothing happens
The shortcut says an update is available, but tapping update reports an invalid URL. Without updating, downloads no longer work either... Please take a look at what's going on. Thanks a lot!
closed
2022-06-28T09:20:49Z
2022-06-29T01:54:40Z
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/45
[]
stormblizzard
3
mwaskom/seaborn
data-visualization
2,919
How to do various things in the next gen Seaborn
Hello, I'm sure this is all on the radar / plan for the next gen Seaborn anyway, but just thought I'd flag some features that I couldn't work out how to do today using the next gen syntax (most likely because these features haven't landed yet or aren't in the next gen documentation yet). They are: - title - subtitle - caption - annotations - axis ticks - legend layout - saving plots to file They're all from here: https://aeturrell.github.io/python4DS/communicate-plots.html, a training page on data vis that goes through how to do various things with next gen Seaborn and which has placeholders in for these features for now. I am so impressed with the next gen version, can't wait to see more of it!
closed
2022-07-23T10:58:05Z
2022-08-26T10:48:14Z
https://github.com/mwaskom/seaborn/issues/2919
[ "question", "objects-plot" ]
aeturrell
9
Avaiga/taipy
automation
2,291
Improving data interaction and visualization features in (pie_multiple.py)
### Description There is a need for interactive data filtering, export functionality for charts, and historical trend visualization. These features will improve user engagement, data accessibility, and context for decision-making. ### Solution Proposed I can add a dropdown or multi-select for filtering data by region or emissions type, so users can focus on what they care about; include a button to export pie charts as PNG or PDF for easy sharing; and finally, add a line or bar chart next to the pies to show historical trends and give more context. ### Impact of Solution These features will make Taipy more user-friendly and versatile: filtering adds focus, exporting improves usability, and historical trends provide deeper insights. Just make sure performance stays smooth with large datasets and the layout stays clean with the added charts. ### Acceptance Criteria - [x] If applicable, a new demo code is provided to show the new feature in action. - [x] Integration tests exhibiting how the functionality works are added. - [x] Any new code is covered by a unit test. - [x] Check code coverage is at least 90%. - [x] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated. ### Code of Conduct - [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+). - [X] I am willing to work on this issue (optional)
closed
2024-11-29T09:05:01Z
2025-01-17T13:51:28Z
https://github.com/Avaiga/taipy/issues/2291
[ "✨New feature" ]
Kritika75
3
flasgger/flasgger
api
131
File Upload
Hi, one of my applications requires uploading a file through flasgger. I am not sure how to define the conf. Below is what I did after reading the Swagger docs:

```python
"""
This API lets you train word embeddings.
Call this api passing your file and get the word embeddings.
---
tags:
  - Train (Word Embeddings)
consumes:
  - multipart/form-data
parameters:
  - in: formData
    name: body
    required: true
    description: Upload your file.
responses:
  500:
    description: ERROR Failed!
  200:
    description: INFO Success!
"""
```

Please point out the error.
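For reference, in the Swagger 2.0 specification a file-upload parameter is declared with `type: file` in `formData` (the docstring above omits the `type`). A sketch of how that section might look, untested against this app and keeping the parameter name hypothetical:

```yaml
---
tags:
  - Train (Word Embeddings)
consumes:
  - multipart/form-data
parameters:
  - in: formData
    name: file        # hypothetical field name; must match the form field
    type: file        # required by Swagger 2.0 for file uploads
    required: true
    description: Upload your file.
responses:
  200:
    description: INFO Success!
  500:
    description: ERROR Failed!
```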
closed
2017-07-06T12:45:50Z
2017-07-09T06:46:52Z
https://github.com/flasgger/flasgger/issues/131
[]
prakhar21
1
Farama-Foundation/PettingZoo
api
888
[Bug Report] ClipOutOfBoundsWrapper.step() AttributeError: 'int' object has no attribute 'shape'
### Describe the bug

When using Ray RLlib to try the 'pistonball_v6' env, it raises an AttributeError caused by `int.shape`. Here is the full stack trace:

```
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/ray/rllib/utils/actor_manager.py", line 183, in apply
    raise e
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/ray/rllib/utils/actor_manager.py", line 174, in apply
    return func(self, *args, **kwargs)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/ray/rllib/execution/rollout_ops.py", line 86, in <lambda>
    lambda w: w.sample(), local_worker=False, healthy_only=True
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/ray/rllib/evaluation/rollout_worker.py", line 914, in sample
    batches = [self.input_reader.next()]
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/ray/rllib/evaluation/sampler.py", line 92, in next
    batches = [self.get_data()]
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/ray/rllib/evaluation/sampler.py", line 277, in get_data
    item = next(self._env_runner)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/ray/rllib/evaluation/env_runner_v2.py", line 323, in run
    outputs = self.step()
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/ray/rllib/evaluation/env_runner_v2.py", line 379, in step
    self._base_env.send_actions(actions_to_send)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/ray/rllib/env/multi_agent_env.py", line 656, in send_actions
    raise e
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/ray/rllib/env/multi_agent_env.py", line 645, in send_actions
    obs, rewards, terminateds, truncateds, infos = env.step(agent_dict)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/ray/rllib/env/wrappers/pettingzoo_env.py", line 207, in step
    obss, rews, terminateds, truncateds, infos = self.par_env.step(action_dict)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/conversions.py", line 157, in step
    self.aec_env.step(actions[agent])
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/supersuit/utils/base_aec_wrapper.py", line 44, in step
    super().step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/wrappers/order_enforcing.py", line 75, in step
    super().step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/wrappers/base.py", line 108, in step
    self.env.step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/wrappers/order_enforcing.py", line 75, in step
    super().step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/wrappers/base.py", line 108, in step
    self.env.step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/conversions.py", line 292, in step
    obss, rews, terminations, truncations, infos = self.env.step(self._actions)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/supersuit/generic_wrappers/utils/shared_wrapper_util.py", line 125, in step
    observations, rewards, terminations, truncations, infos = super().step(actions)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/wrappers/base_parallel.py", line 39, in step
    res = self.env.step(actions)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/conversions.py", line 157, in step
    self.aec_env.step(actions[agent])
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/supersuit/utils/base_aec_wrapper.py", line 44, in step
    super().step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/wrappers/order_enforcing.py", line 75, in step
    super().step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/wrappers/base.py", line 108, in step
    self.env.step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/supersuit/utils/base_aec_wrapper.py", line 44, in step
    super().step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/wrappers/order_enforcing.py", line 75, in step
    super().step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/wrappers/base.py", line 108, in step
    self.env.step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/supersuit/utils/base_aec_wrapper.py", line 44, in step
    super().step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/wrappers/order_enforcing.py", line 75, in step
    super().step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/wrappers/base.py", line 108, in step
    self.env.step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/wrappers/order_enforcing.py", line 75, in step
    super().step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/wrappers/base.py", line 108, in step
    self.env.step(action)
File "/home/roots/anaconda3/envs/all/lib/python3.8/site-packages/pettingzoo/utils/wrappers/clip_out_of_bounds.py", line 31, in step
    space.shape == action.shape
AttributeError: 'int' object has no attribute 'shape'
```

### Code example

```python
if __name__ == "__main__":
    env_name = "pistonball_v6"
    register_env(env_name, lambda config: ParallelPettingZooEnv(env_creator(config)))

    config = (
        PPOConfig()
        .rollouts(num_rollout_workers=4, rollout_fragment_length="auto")
        .training(
            train_batch_size=512,
            lr=2e-5,
            gamma=0.99,
            lambda_=0.9,
            use_gae=True,
            clip_param=0.4,
            grad_clip=None,
            entropy_coeff=0.1,
            vf_loss_coeff=0.25,
            sgd_minibatch_size=64,
            num_sgd_iter=10,
        )
        .environment(env=env_name, clip_actions=True)
        .debugging(log_level="ERROR")
        .framework(framework="torch")
        .resources(num_gpus=int(os.environ.get("RLLIB_NUM_GPUS", "0")))
    )

    tune.run(
        "PPO",
        name="PPO",
        stop={"timesteps_total": 5000000},
        checkpoint_freq=10,
        local_dir="~/ray_results/" + env_name,
        config=config.to_dict(),
    )
```

### System info

- env: pip
- pettingzoo version: 1.22.3
- ray version: 2.3.0
- OS: Ubuntu 20.04.4 LTS

### Additional context

_No response_

### Checklist

- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/PettingZoo/issues) in the repo
closed
2023-02-27T04:34:21Z
2023-05-12T17:15:04Z
https://github.com/Farama-Foundation/PettingZoo/issues/888
[ "bug" ]
luorq3
1
davidsandberg/facenet
tensorflow
664
Tensorflow Java Working Example
Hello, I am new to deep learning and TensorFlow in general. I am writing a basic server app for my thesis where I need to accept calls in real time to classify a face image. At the moment I am able to load the trained FaceNet model (graph) in Java and generate an embeddings vector to use later in classification with an SVM, but the question is: how accurate are these generated features, and does the model need more training before being used for classification? Below is a snippet of how I generate the embeddings vector in Java:

```java
Tensor<Float> result = session.runner()
        .feed("input:0", image)
        .feed("phase_train:0", Tensors.create(false))
        .fetch("embeddings:0").run().get(0)
        .expect(Float.class);
result.writeTo(FloatBuffer.wrap(embeddings));
```

Can someone guide me on how to train the model for more accurate results, the same way it was done in Python, but in Java?
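One way to sanity-check the quality of extracted embeddings, regardless of the runtime that produced them, is to compare cosine similarity for same-person vs different-person pairs; FaceNet embeddings are L2-normalized, so this reduces to a dot product. A minimal, runtime-agnostic check (the vectors here are made up for illustration):

```python
import math


def cosine_similarity(a, b):
    """Cosine similarity of two equal-length vectors: 1.0 means identical
    direction, 0.0 means orthogonal. For good embeddings, images of the
    same person should score much higher than images of different people."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

If same-person pairs do not clearly separate from different-person pairs under this score, the embeddings (or the preprocessing feeding them) likely need work before an SVM on top will help.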
open
2018-03-12T20:07:48Z
2019-02-12T02:57:55Z
https://github.com/davidsandberg/facenet/issues/664
[]
mhashem
1
X-PLUG/MobileAgent
automation
34
TypeError: annotate() got an unexpected keyword argument 'labels'
Could someone please take a look at what is causing the error below? Python version: 3.9.13, OS: Windows 10.

Traceback (most recent call last):
  File "D:\Project\script\MobileAgent-main\Mobile-Agent-v2\run.py", line 286, in <module>
    perception_infos, width, height = get_perception_infos(adb_path, screenshot_file)
  File "D:\Project\script\MobileAgent-main\Mobile-Agent-v2\run.py", line 190, in get_perception_infos
    coordinates = det(screenshot_file, "icon", groundingdino_model)
  File "D:\Project\script\MobileAgent-main\Mobile-Agent-v2\MobileAgent\icon_localization.py", line 45, in det
    result = groundingdino_model(inputs)
  File "D:\Env\Python\lib\site-packages\modelscope\pipelines\base.py", line 220, in __call__
    output = self._process_single(input, *args, **kwargs)
  File "D:\Env\Python\lib\site-packages\modelscope\pipelines\base.py", line 255, in _process_single
    out = self.forward(out, **forward_params)
  File "C:\Users\xiaomi\.cache\modelscope\modelscope_modules\GroundingDINO\ms_wrapper.py", line 35, in forward
    return self.model(inputs,**forward_params)
  File "D:\Env\Python\lib\site-packages\modelscope\models\base\base_torch_model.py", line 36, in __call__
    return self.postprocess(self.forward(*args, **kwargs))
  File "C:\Users\xiaomi\.cache\modelscope\modelscope_modules\GroundingDINO\ms_wrapper.py", line 66, in forward
    annotated_frame = annotate(image_source=image_source, boxes=boxes, logits=logits, phrases=phrases)
  File "C:\Users\xiaomi\.cache\modelscope\modelscope_modules\GroundingDINO\groundingdino\util\inference.py", line 97, in annotate
    annotated_frame = box_annotator.annotate(
  File "D:\Env\Python\lib\site-packages\supervision\utils\conversion.py", line 23, in wrapper
    return annotate_func(self, scene, *args, **kwargs)
TypeError: annotate() got an unexpected keyword argument 'labels'
open
2024-07-22T03:32:42Z
2024-07-22T03:58:29Z
https://github.com/X-PLUG/MobileAgent/issues/34
[]
hulk-zhk
1
CorentinJ/Real-Time-Voice-Cloning
python
1,169
Audio instead of text input to synthesize or vocode? (target audio prompt)
Hi, I'm new to Python and ML in general. I've got it to work on my Mac M1, so that's nice. I've got the text-to-speech working, but I was wondering: is it possible to learn a voice from a dataset and use that voice to replace a recorded voice (so audio input instead of text)? Let's say I'm singing something but I want the voice of someone else, so I replace my voice with the AI voice. Is that feature available? Does anyone know?
open
2023-03-02T11:43:09Z
2023-03-02T12:38:21Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1169
[]
remycoopermusic
1
ResidentMario/missingno
data-visualization
71
"UnboundLocalError: local variable 'ax2' referenced before assignment" on msno.bar(df)
Hello! Just tried to run a `msno.bar(df)`, but it returned the following ``` --------------------------------------------------------------------------- UnboundLocalError Traceback (most recent call last) <command-823280> in <module>() ----> 1 msno.bar(df) /databricks/python/lib/python3.5/site-packages/missingno/missingno.py in bar(df, figsize, fontsize, labels, log, color, inline, filter, n, p, sort) 245 ax3.grid(False) 246 --> 247 for ax in [ax1, ax2, ax3]: 248 ax.spines['top'].set_visible(False) 249 ax.spines['right'].set_visible(False) UnboundLocalError: local variable 'ax2' referenced before assignment ```
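The failure is a plain Python scoping pattern: `ax2` is presumably only assigned inside a conditional branch of `bar()`, so the later `for ax in [ax1, ax2, ax3]` loop can hit it unbound. A stripped-down illustration of the bug and the conventional fix (names made up, not missingno's actual code):

```python
def buggy(flag):
    if flag:
        ax2 = "axis"  # only bound on one code path
    return [ax2]      # UnboundLocalError when flag is False

def fixed(flag):
    ax2 = None        # bind on every path up front
    if flag:
        ax2 = "axis"
    return [ax for ax in [ax2] if ax is not None]

try:
    buggy(False)
except UnboundLocalError as e:
    print("reproduced:", e)

print(fixed(False))  # no crash, ax2 simply skipped
```

Which option actually triggers the missing branch in `msno.bar` (e.g. `labels` or `log`) would need checking against the library source for your installed version.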
closed
2018-06-28T21:26:27Z
2018-06-29T16:25:48Z
https://github.com/ResidentMario/missingno/issues/71
[]
paulochf
1
vllm-project/vllm
pytorch
15,093
[Usage]: `torch.compile` is turned on, but the model LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct does not support it.
### Your current environment ```text `torch.compile` is turned on, but the model LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct does not support it. Facing this issue when trying to serve this model. ``` ### How would you like to use vllm I want to run inference of [LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct](https://huggingface.co/LGAI-EXAONE/EXAONE-3.5-2.4B-Instruct). It is not working. ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
open
2025-03-19T05:47:13Z
2025-03-19T09:36:13Z
https://github.com/vllm-project/vllm/issues/15093
[ "usage" ]
Bhaveshdhapola
1
microsoft/qlib
machine-learning
1,658
Request for data update #BUG ALSO
## 🌟 Request for data update

The convenient data retrieval method (like `python -m qlib.run.get_data qlib_data --target_dir ~/.qlib/qlib_data/cn_data --region cn`) can only get data up to 2020-09-24. However, fetching the data from Yahoo manually takes a lot of time (more than 8 hours here just for the download) and raises errors. When normalization begins, it fails with:

[58410:MainThread](2023-09-25 06:27:36,588) ERROR - qlib.workflow - [utils.py:41] - An exception has been raised[TypeError: can't compare offset-naive and offset-aware datetimes].
  File "scripts/data_collector/yahoo/collector.py", line 1207, in <module>
    fire.Fire(Run)
  File "/Users/bernoulli_hermes/opt/anaconda3/envs/qlibenv/lib/python3.8/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/Users/bernoulli_hermes/opt/anaconda3/envs/qlibenv/lib/python3.8/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/Users/bernoulli_hermes/opt/anaconda3/envs/qlibenv/lib/python3.8/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "scripts/data_collector/yahoo/collector.py", line 1182, in update_data_to_bin
    self.normalize_data_1d_extend(qlib_data_1d_dir)
  File "scripts/data_collector/yahoo/collector.py", line 1072, in normalize_data_1d_extend
    yc.normalize()
  File "/Users/bernoulli_hermes/opt/anaconda3/envs/qlibenv/lib/python3.8/site-packages/qlib/scripts/data_collector/base.py", line 319, in normalize
    for _ in worker.map(self._executor, file_list):
  File "/Users/bernoulli_hermes/opt/anaconda3/envs/qlibenv/lib/python3.8/concurrent/futures/process.py", line 484, in _chain_from_iterable_of_lists
    for element in iterable:
  File "/Users/bernoulli_hermes/opt/anaconda3/envs/qlibenv/lib/python3.8/concurrent/futures/_base.py", line 619, in result_iterator
    yield fs.pop().result()
  File "/Users/bernoulli_hermes/opt/anaconda3/envs/qlibenv/lib/python3.8/concurrent/futures/_base.py", line 444, in result
    return self.__get_result()
  File "/Users/bernoulli_hermes/opt/anaconda3/envs/qlibenv/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
TypeError: can't compare offset-naive and offset-aware datetimes
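The final `TypeError` is a generic Python/pandas pitfall rather than anything qlib-specific: one side of a timestamp comparison carries tzinfo (likely the freshly downloaded Yahoo data) while the other is timezone-naive (the existing 1d data). A stdlib-only reproduction and the usual normalization, as an illustration:

```python
from datetime import datetime, timezone

naive = datetime(2023, 9, 25)                       # no tzinfo
aware = datetime(2023, 9, 25, tzinfo=timezone.utc)  # tz-aware

try:
    naive < aware  # ordering comparisons between the two raise
except TypeError as e:
    print("reproduced:", e)

# Normalize before comparing: either strip the tzinfo ...
assert naive == aware.replace(tzinfo=None)
# ... or attach one to the naive side.
assert naive.replace(tzinfo=timezone.utc) == aware
```

With pandas indexes, the analogous calls are `DatetimeIndex.tz_localize(None)` to strip the zone and `tz_localize(...)`/`tz_convert(...)` to attach or change one.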
open
2023-09-25T00:05:07Z
2023-11-24T08:13:27Z
https://github.com/microsoft/qlib/issues/1658
[ "enhancement" ]
Imbernoulli
5
davidsandberg/facenet
computer-vision
328
About Linting: The Code is not Linting Well
Hi @davidsandberg, I've been studying machine learning for face recognition recently and found that your project is a great starting point, so I read through all the Python code last week. (I prefer TensorFlow over Torch because I'm a Google fan, and I came here after OpenFace. ;) However, I found that the code does not always follow linting rules such as `pylint`, `pep8`, `flake8`, etc. While reading the code, I changed it to follow those linting rules, since that makes the code clearer and more readable. I'm wondering if you would like to merge that work; it's just lots of re-formatting and does not change any code logic. If so, I'd like to send you a Pull Request with the changes I have made. Thank you for the great project, I really appreciate your work.
open
2017-06-14T05:50:43Z
2017-06-14T05:50:43Z
https://github.com/davidsandberg/facenet/issues/328
[]
huan
0
KevinMusgrave/pytorch-metric-learning
computer-vision
135
only support batch_size=1 when I set indices_tuple in TripletMarginLoss?
In my dataset's `__getitem__`, for one index I return an anchor, positive, and negative along with their labels, so in one minibatch the labels' shape could be (batch_size, 3). When I set indices_tuple=labels, there was an error because indices_tuple's length can only be 3 or 4. How can I specify the indices_tuple?
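As I understand the library's error message, `indices_tuple` is not the label tensor: for `TripletMarginLoss` it is a tuple of three index tensors, `(anchor_idx, positive_idx, negative_idx)`, pointing into the flattened embedding batch. With pre-mined triplets, one can flatten the batch to shape `(3 * batch_size, emb_dim)` and build the indices explicitly. A pure-Python sketch of that construction (no torch; the layout `[a0, p0, n0, a1, p1, n1, ...]` is an assumption about how the batch is flattened):

```python
def triplet_indices(batch_size):
    """Indices into a flattened (3 * batch_size, dim) embedding tensor
    laid out as [a0, p0, n0, a1, p1, n1, ...]."""
    anchors   = [3 * i     for i in range(batch_size)]
    positives = [3 * i + 1 for i in range(batch_size)]
    negatives = [3 * i + 2 for i in range(batch_size)]
    return anchors, positives, negatives

print(triplet_indices(2))  # ([0, 3], [1, 4], [2, 5])
```

With torch, these three lists become `LongTensor`s and are passed as `indices_tuple=(anchors, positives, negatives)` alongside the flattened embeddings; check the library docs for whether labels may be omitted in your installed version.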
closed
2020-07-08T05:49:09Z
2020-07-25T14:19:07Z
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/135
[ "Frequently Asked Questions", "question" ]
DogeWatch
4
iperov/DeepFaceLab
deep-learning
5,397
Extraction workflow, script data_src util add landmarks debug images and manual_output_debug_fix
## Expected behavior I expect to be able to generate debug images in a way that is useful for manual_output_debug_fix. It is not clear to me why normal extraction creates a debug folder while generating debug images afterwards does not use the same approach. ## Actual behavior _4) data_src faceset extract.bat_ creates a debug directory at the same level as the aligned directory. _4.2) data_src util add landmarks debug images.bat_ creates images in the aligned folder with a debug prefix, which does not seem to be usable with manual_output_debug_fix. ## Steps to reproduce run _4.2) data_src util add landmarks debug images.bat_ use a modified _5) data_dst faceset MANUAL RE-EXTRACT DELETED ALIGNED_DEBUG.bat_ for src data ## Other relevant information Windows DeepFaceLab_NVIDIA_RTX3000_series_build_09_06_2021 ## Additional note Regarding extraction in general, the readme says faceswap stayed in the past. Well, there are a few things that were nice: I recall some GUI tool, probably related to faceswap, which was used for easy management of extracted alignments.
It is similar to the manual extraction editor, though it had nice options to jump to the next missing alignment, find alignments with multiple faces, and so on (without sorting). I am not an expert, but I think it would be convenient to have a single folder of frames that I could simply reuse to generate alignments whenever necessary (only for the new frames): - extract alignments from frames, only for the ones missing in alignments - delete weird, wrongly extracted alignments - extract the missing alignments manually (as clearly they failed the first time, hence the alignment removal) If the result is fine: - do face processing, remove aligned images which are not useful - remove frames, or somehow mark them as not useful, to avoid extracting useless data again How it looks now: - I need to generate the debug folder, while I think this could probably be done on the fly in a GUI (faceswap probably did this); if I clearly see right away that an alignment is wrong, I don't need to fiddle with debug, but for the current workflow I would have to delete debug anyway - if I didn't create debugs and would like to create them, the existing script creates files in the aligned directory which are not used for the manual extraction fix - I cannot automatically generate alignments for missing files. Continue seems to continue from the n-th index (-128 actually, not sure why; -1 would also seem to work), instead of looking for missing files. Right now, when new frames are added to the source, it is messy. For small data sets, it simply wants to create the alignments again.
I am not sure whether it is part of the design, but couldn't we change this in Extractor.py: `input_image_paths = input_image_paths[ [ Path(x).stem for x in input_image_paths ].index ( Path(output_images_paths[-128]).stem.split('_')[0] ) : ]` to `input_image_paths = input_image_paths[ [ Path(x).stem for x in input_image_paths ].index ( Path(output_images_paths[-1]).stem.split('_')[0] ) : ]` or actually use DeletedFilesSearcherSubprocessor to only re-extract missing faces automatically when some source files were added? I think that would simplify growing data sets and having multiple source files. The user could then, possibly under an option: - add new files to frames and generate only the new alignments automatically, regardless of frame count - remove alignments that seem wrong and do a manual fix only for the missing alignments (not even needing debug, as a preview of alignments could be created on the fly in the editor) Update for the last part: it seems MVE addresses some of the issues mentioned above, so if MVE + DFL is expected to be the future, then this topic might be less important.
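The "only re-extract missing faces" idea reduces to a set difference between frame stems and aligned-file stems (aligned names carry a face-index suffix like `_0`, which is the same `split('_')[0]` convention the Extractor.py snippet above relies on). A hypothetical sketch of that matching, with made-up file names:

```python
from pathlib import Path

def frames_missing_alignment(frame_files, aligned_files):
    """Frames with no aligned face file (aligned names look like '<frame_stem>_<face_idx>')."""
    aligned_stems = {Path(f).stem.split("_")[0] for f in aligned_files}
    return [f for f in frame_files if Path(f).stem not in aligned_stems]

frames = ["00001.png", "00002.png", "00003.png"]
aligned = ["00001_0.jpg", "00003_0.jpg", "00003_1.jpg"]
print(frames_missing_alignment(frames, aligned))  # ['00002.png']
```

Feeding only that list back into extraction would avoid re-running the whole folder whenever new source frames are added, which is essentially what DeletedFilesSearcherSubprocessor could be reused for.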
open
2021-09-21T12:50:52Z
2023-06-08T22:40:47Z
https://github.com/iperov/DeepFaceLab/issues/5397
[]
berniejeromski
3
pyjanitor-devs/pyjanitor
pandas
1,399
Proposed alternative for `join_apply` in deprecation notice does not replicate its behavior
# Brief Description Currently, the docs for `join_apply` state its deprecation, and advise to use `transform_column` instead. However, `join_apply` works row-wise, and `transform_column` is either single-column or multiple columns but with the same function applied to each column individually. At this point, it is unclear what would be the correct replacement for `join_apply`. A reference to an alternative approach, or a snippet would make the documentation clearer. For the record, I filed this under "Documentation fix" as it's not a code problem, but a documentation problem. A possible alternative may be to suggest the actual code `join_apply` had: ``` df = df.copy().join(df.apply(fn, axis=1).rename(new_column_name)) ``` # Relevant Context - [Link to documentation page](https://pyjanitor-devs.github.io/pyjanitor/api/functions/#janitor.functions.join_apply)
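For concreteness, a small sketch of what the suggested replacement snippet does (pandas assumed available; the column names and row-wise function are made up):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

def fn(row):
    # Row-wise function, the kind join_apply used to take.
    return row["a"] + row["b"]

# Plain-pandas equivalent of the deprecated df.join_apply(fn, "total"):
result = df.join(df.apply(fn, axis=1).rename("total"))
# result now has columns a, b, total, with total == [4, 6]
```

`df.apply(fn, axis=1)` produces a Series aligned to the index, `.rename` names it, and `.join` attaches it as a new column, which is exactly the row-wise behavior `transform_column` does not cover.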
closed
2024-09-11T08:02:27Z
2024-09-14T13:31:42Z
https://github.com/pyjanitor-devs/pyjanitor/issues/1399
[]
lbeltrame
4
thunlp/OpenPrompt
nlp
105
Misuse of loss function in 0_basic.py tutorial?
Here is the basic flow that captures the loss computation in 0_basic.py tutorial. ``` ... myverbalizer = ManualVerbalizer(tokenizer, num_classes=3, label_words=[["yes"], ["no"], ["maybe"]]) ... prompt_model = PromptForClassification(plm=plm,template=mytemplate, verbalizer=myverbalizer, freeze_plm=False) ... loss_func = torch.nn.CrossEntropyLoss() ... logits = prompt_model(inputs) labels = inputs['label'] loss = loss_func(logits, labels) ``` The ```post_log_softmax``` of ```ManualVerbalizer``` defaults to True, which computes the log softmax on the output logits of the PLM. Therefore, ```logits = prompt_model(inputs)``` gives the log(softmax(x)) results. Shouldn't we use ```NLLLoss``` after that? If we use ```CrossEntropyLoss```, it will compute softmax again. I don't understand...
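A quick pure-Python check (illustrative, no torch) suggests the tutorial is actually consistent in terms of the loss value: log-softmax is idempotent, because the exponentials of log-probabilities already sum to 1, so applying `CrossEntropyLoss` (which is log-softmax followed by NLL) to log-softmax outputs yields the same value as `NLLLoss` on them. And since the composed functions are identical, the gradients match too.

```python
import math

def log_softmax(xs):
    m = max(xs)  # max-subtraction for numerical stability
    lse = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - lse for x in xs]

def nll_loss(log_probs, target):
    return -log_probs[target]

def cross_entropy(logits, target):
    # CrossEntropyLoss == log_softmax followed by NLLLoss
    return nll_loss(log_softmax(logits), target)

logits = [2.0, 0.5, -1.0]
log_probs = log_softmax(logits)  # what the verbalizer emits when post_log_softmax=True

a = cross_entropy(logits, 0)     # textbook usage on raw logits
b = nll_loss(log_probs, 0)       # NLLLoss on the log-probs
c = cross_entropy(log_probs, 0)  # the tutorial's pattern: CE on log-probs

# sum(exp(log_probs)) == 1, so the second log_softmax is the identity
# and a, b, c all coincide.
print(a, b, c)
```

This assumes the verbalizer output really is plain `log(softmax(x))`; if `post_log_softmax` also applies calibration or other transforms, the equivalence would need re-checking.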
closed
2022-01-21T12:01:25Z
2022-01-24T02:31:31Z
https://github.com/thunlp/OpenPrompt/issues/105
[]
guoxuxu
1
pydata/xarray
pandas
9,277
⚠️ Nightly upstream-dev CI failed ⚠️
[Workflow Run URL](https://github.com/pydata/xarray/actions/runs/11944030848) <details><summary>Python 3.12 Test Summary</summary> ``` xarray/tests/test_backends.py::TestInstrumentedZarrStore::test_append: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestInstrumentedZarrStore::test_region_write: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_zero_dimensional_variable[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_store[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_test_data[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_load[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_dataset_compute[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_pickle[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_pickle_dataarray[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_None_variable[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_object_dtype[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_string_data[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' 
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_string_encoded_characters[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_numpy_datetime_data[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_cftime_datetime_data[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_timedelta_data[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_float64_data[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_example_1_netcdf[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_coordinates[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_global_coordinates[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_coordinates_with_space[2]: Failed: DID NOT WARN. No warnings of type (<class 'xarray.coding.variables.SerializationWarning'>,) were emitted. Emitted warnings: []. 
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_boolean_dtype[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_orthogonal_indexing[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_vectorized_indexing[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_vectorized_indexing_negative_step[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_outer_indexing_reversed[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_isel_dataarray[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_array_type_after_indexing[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_dropna[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_ondisk_after_print[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_bytes_with_fill_value[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_string_with_fill_value_nchar[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_empty_vlen_string_array[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' 
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_mask_and_scale[2-dtype0-create_unsigned_masked_scaled_data-create_encoded_unsigned_masked_scaled_data]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_mask_and_scale[2-dtype0-create_signed_masked_scaled_data-create_encoded_signed_masked_scaled_data]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_mask_and_scale[2-dtype0-create_masked_and_scaled_data-create_encoded_masked_and_scaled_data]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_unsigned[2-fill_value0-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_unsigned[2-fill_value1-True]: Failed: DID NOT WARN. No warnings of type (<class 'xarray.coding.variables.SerializationWarning'>,) were emitted. Emitted warnings: []. xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_unsigned[2--1-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_unsigned[2-255-True]: Failed: DID NOT WARN. No warnings of type (<class 'xarray.coding.variables.SerializationWarning'>,) were emitted. Emitted warnings: []. xarray/tests/test_backends.py::TestZarrDictStore::test_coordinate_variables_after_dataset_roundtrip[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_grid_mapping_and_bounds_are_coordinates_after_dataarray_roundtrip[2]: Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted. Emitted warnings: []. 
xarray/tests/test_backends.py::TestZarrDictStore::test_coordinates_encoding[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_endian[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_invalid_dataarray_names_raise[2]: AssertionError: Regex pattern did not match. Regex: 'string or None' Input: "MemoryStore.__init__() got an unexpected keyword argument 'mode'" xarray/tests/test_backends.py::TestZarrDictStore::test_encoding_kwarg[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_encoding_kwarg_dates[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_default_fill_value[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_explicitly_omit_fill_value[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_explicitly_omit_fill_value_via_encoding_kwarg[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_explicitly_omit_fill_value_in_coord[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_explicitly_omit_fill_value_in_coord_via_encoding_kwarg[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_encoding_same_dtype[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_multiindex_not_implemented[2]: TypeError: MemoryStore.__init__() got an unexpected 
keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_consolidated[2-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_consolidated[2-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_consolidated[2-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_read_non_consolidated_warning[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_non_existent_store[2]: AssertionError: Regex pattern did not match. Regex: 'No such file or directory' Input: 'Unable to find group: file:///tmp/pytest-of-runner/pytest-0/test_non_existent_store_2_0/ca4396e8-f602-4b95-892c-f886072a0d21' xarray/tests/test_backends.py::TestZarrDictStore::test_auto_chunk[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_manual_chunk[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_warning_on_bad_chunks[2]: Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted. Emitted warnings: []. 
xarray/tests/test_backends.py::TestZarrDictStore::test_write_uneven_dask_chunks[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_chunk_encoding[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_chunk_encoding_with_dask[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_drop_encoding[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_hidden_zarr_keys[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_persistence_modes[2-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_persistence_modes[2-group1]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_compressor_encoding[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_group[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_zarr_mode_w_overwrites_encoding[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_dataset_caching[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_mode_rplus_success[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_mode_rplus_fails[2]: TypeError: 
MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_invalid_dim_raises[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_no_dims_raises[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_append_dim_not_set_raises[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_mode_not_a_raises[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_existing_encoding_raises[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_string_length_mismatch_raises[2-U]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_string_length_mismatch_raises[2-S]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_check_encoding_is_consistent_after_append[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_new_variable[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_append_dim_no_overwrite[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_to_zarr_compute_false_roundtrip[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' 
xarray/tests/test_backends.py::TestZarrDictStore::test_to_zarr_append_compute_false_roundtrip[2]: Failed: DID NOT WARN. No warnings of type (<class 'xarray.coding.variables.SerializationWarning'>,) were emitted. Emitted warnings: []. xarray/tests/test_backends.py::TestZarrDictStore::test_save_emptydim[2-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_save_emptydim[2-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_no_warning_from_open_emptydim_with_chunks[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-False-False-False-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-False-False-False-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-False-False-False-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-False-False-True-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-False-False-True-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-False-False-True-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-False-True-False-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' 
xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-False-True-False-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-False-True-False-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-False-True-True-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-False-True-True-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-False-True-True-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-True-False-False-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-True-False-False-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-True-False-False-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-True-False-True-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-True-False-True-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-True-False-True-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-True-True-False-False]: TypeError: MemoryStore.__init__() got 
an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-True-True-False-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-True-True-False-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-True-True-True-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-True-True-True-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-True-True-True-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-None-False-False-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-None-False-False-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-None-False-False-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-None-False-True-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-None-False-True-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-None-False-True-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-None-True-False-False]: 
TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-None-True-False-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-None-True-False-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-None-True-True-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-None-True-True-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[2-None-True-True-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region_mode[2-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region_mode[2-r+]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region_mode[2-a]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_preexisting_override_metadata[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region_errors[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_encoding_chunksizes[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_chunk_encoding_with_partial_dask_chunks[2]: TypeError: 
MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_chunk_encoding_with_larger_dask_chunks[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_open_zarr_use_cftime[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_read_select_write[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_attributes[2-obj0]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_attributes[2-obj1]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_chunked_datetime64_or_timedelta64[2-datetime64[ns]]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_chunked_datetime64_or_timedelta64[2-timedelta64[ns]]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_chunked_cftime_datetime[2]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDirectoryStore::test_non_existent_store[2]: AssertionError: Regex pattern did not match. Regex: 'No such file or directory' Input: 'Unable to find group: file:///tmp/pytest-of-runner/pytest-0/test_non_existent_store_2_1/ca800d35-ff58-46d4-a670-d887b3d458a7' xarray/tests/test_backends.py::TestZarrDirectoryStore::test_manual_chunk[2]: ValueError: ndarray is not C-contiguous xarray/tests/test_backends.py::TestZarrDirectoryStore::test_warning_on_bad_chunks[2]: Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted. Emitted warnings: []. 
xarray/tests/test_backends.py::TestZarrDirectoryStore::test_write_uneven_dask_chunks[2]: ValueError: ndarray is not C-contiguous xarray/tests/test_backends.py::TestZarrDirectoryStore::test_encoding_chunksizes[2]: ValueError: ndarray is not C-contiguous xarray/tests/test_backends.py::TestZarrWriteEmpty::test_non_existent_store[2]: AssertionError: Regex pattern did not match. Regex: 'No such file or directory' Input: 'Unable to find group: file:///tmp/pytest-of-runner/pytest-0/test_non_existent_store_2_2/07b52aff-ff7d-48a0-9754-dce50afa69db' xarray/tests/test_backends.py::TestZarrWriteEmpty::test_manual_chunk[2]: ValueError: ndarray is not C-contiguous xarray/tests/test_backends.py::TestZarrWriteEmpty::test_warning_on_bad_chunks[2]: Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted. Emitted warnings: []. xarray/tests/test_backends.py::TestZarrWriteEmpty::test_write_uneven_dask_chunks[2]: ValueError: ndarray is not C-contiguous xarray/tests/test_backends.py::TestZarrWriteEmpty::test_encoding_chunksizes[2]: ValueError: ndarray is not C-contiguous xarray/tests/test_backends.py::TestZarrDictStore::test_zero_dimensional_variable[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_store[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_test_data[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_load[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_dataset_compute[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_pickle[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' 
xarray/tests/test_backends.py::TestZarrDictStore::test_pickle_dataarray[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_None_variable[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_object_dtype[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_string_data[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_string_encoded_characters[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_numpy_datetime_data[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_cftime_datetime_data[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_timedelta_data[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_float64_data[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_example_1_netcdf[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_coordinates[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_global_coordinates[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' 
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_coordinates_with_space[3]: Failed: DID NOT WARN. No warnings of type (<class 'xarray.coding.variables.SerializationWarning'>,) were emitted. Emitted warnings: []. xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_boolean_dtype[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_orthogonal_indexing[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_vectorized_indexing[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_vectorized_indexing_negative_step[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_outer_indexing_reversed[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_isel_dataarray[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_array_type_after_indexing[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_dropna[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_ondisk_after_print[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_bytes_with_fill_value[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_string_with_fill_value_nchar[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' 
xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_empty_vlen_string_array[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_mask_and_scale[3-dtype0-create_unsigned_masked_scaled_data-create_encoded_unsigned_masked_scaled_data]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_mask_and_scale[3-dtype0-create_signed_masked_scaled_data-create_encoded_signed_masked_scaled_data]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_mask_and_scale[3-dtype0-create_masked_and_scaled_data-create_encoded_masked_and_scaled_data]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_unsigned[3-fill_value0-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_unsigned[3-fill_value1-True]: Failed: DID NOT WARN. No warnings of type (<class 'xarray.coding.variables.SerializationWarning'>,) were emitted. Emitted warnings: []. xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_unsigned[3--1-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_unsigned[3-255-True]: Failed: DID NOT WARN. No warnings of type (<class 'xarray.coding.variables.SerializationWarning'>,) were emitted. Emitted warnings: []. 
xarray/tests/test_backends.py::TestZarrDictStore::test_coordinate_variables_after_dataset_roundtrip[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_grid_mapping_and_bounds_are_coordinates_after_dataarray_roundtrip[3]: Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted. Emitted warnings: []. xarray/tests/test_backends.py::TestZarrDictStore::test_coordinates_encoding[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_invalid_dataarray_names_raise[3]: AssertionError: Regex pattern did not match. Regex: 'string or None' Input: "MemoryStore.__init__() got an unexpected keyword argument 'mode'" xarray/tests/test_backends.py::TestZarrDictStore::test_encoding_kwarg[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_encoding_kwarg_dates[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_default_fill_value[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_explicitly_omit_fill_value[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_explicitly_omit_fill_value_via_encoding_kwarg[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_explicitly_omit_fill_value_in_coord[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_explicitly_omit_fill_value_in_coord_via_encoding_kwarg[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' 
xarray/tests/test_backends.py::TestZarrDictStore::test_encoding_same_dtype[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_multiindex_not_implemented[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_consolidated[3-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_consolidated[3-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_roundtrip_consolidated[3-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_read_non_consolidated_warning[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_non_existent_store[3]: AssertionError: Regex pattern did not match. Regex: 'No such file or directory' Input: 'Unable to find group: file:///tmp/pytest-of-runner/pytest-0/test_non_existent_store_3_0/76d6a872-2f54-497e-bca2-12bdaaa83697' xarray/tests/test_backends.py::TestZarrDictStore::test_auto_chunk[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_manual_chunk[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_warning_on_bad_chunks[3]: Failed: DID NOT WARN. No warnings of type (<class 'UserWarning'>,) were emitted. Emitted warnings: []. 
xarray/tests/test_backends.py::TestZarrDictStore::test_write_uneven_dask_chunks[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_chunk_encoding[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_chunk_encoding_with_dask[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_drop_encoding[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_dimension_names[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_persistence_modes[3-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_persistence_modes[3-group1]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_compressor_encoding[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_group[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_zarr_mode_w_overwrites_encoding[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_dataset_caching[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_mode_rplus_success[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_mode_rplus_fails[3]: TypeError: 
MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_invalid_dim_raises[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_no_dims_raises[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_append_dim_not_set_raises[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_mode_not_a_raises[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_existing_encoding_raises[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_string_length_mismatch_works[3-U]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_string_length_mismatch_works[3-S]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_check_encoding_is_consistent_after_append[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_new_variable[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_append_with_append_dim_no_overwrite[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_to_zarr_compute_false_roundtrip[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' 
xarray/tests/test_backends.py::TestZarrDictStore::test_to_zarr_append_compute_false_roundtrip[3]: Failed: DID NOT WARN. No warnings of type (<class 'xarray.coding.variables.SerializationWarning'>,) were emitted. Emitted warnings: []. xarray/tests/test_backends.py::TestZarrDictStore::test_save_emptydim[3-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_save_emptydim[3-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_no_warning_from_open_emptydim_with_chunks[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-False-False-False-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-False-False-False-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-False-False-False-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-False-False-True-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-False-False-True-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-False-False-True-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-False-True-False-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' 
xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-False-True-False-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-False-True-False-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-False-True-True-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-False-True-True-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-False-True-True-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-True-False-False-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-True-False-False-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-True-False-False-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-True-False-True-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-True-False-True-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-True-False-True-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-True-True-False-False]: TypeError: MemoryStore.__init__() got 
an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-True-True-False-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-True-True-False-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-True-True-True-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-True-True-True-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-True-True-True-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-None-False-False-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-None-False-False-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-None-False-False-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-None-False-True-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-None-False-True-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-None-False-True-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-None-True-False-False]: 
TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-None-True-False-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-None-True-False-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-None-True-True-False]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-None-True-True-True]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region[3-None-True-True-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region_mode[3-None]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region_mode[3-r+]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region_mode[3-a]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_preexisting_override_metadata[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_region_errors[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_encoding_chunksizes[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_chunk_encoding_with_partial_dask_chunks[3]: TypeError: 
MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_chunk_encoding_with_larger_dask_chunks[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_open_zarr_use_cftime[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_write_read_select_write[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_attributes[3-obj0]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_attributes[3-obj1]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_chunked_datetime64_or_timedelta64[3-datetime64[ns]]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_chunked_datetime64_or_timedelta64[3-timedelta64[ns]]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDictStore::test_chunked_cftime_datetime[3]: TypeError: MemoryStore.__init__() got an unexpected keyword argument 'mode' xarray/tests/test_backends.py::TestZarrDirectoryStore::test_non_existent_store[3]: AssertionError: Regex pattern did not match. Regex: 'No such file or directory' Input: 'Unable to find group: file:///tmp/pytest-of-runner/pytest-0/test_non_existent_store_3_1/417a58b9-514b-4a71-a6dd-816600e4aba7' xarray/tests/test_backends.py::TestZarrWriteEmpty::test_non_existent_store[3]: AssertionError: Regex pattern did not match. 
Regex: 'No such file or directory' Input: 'Unable to find group: file:///tmp/pytest-of-runner/pytest-0/test_non_existent_store_3_2/3c505bf3-9c7e-4c22-9de2-2f5dc965fbca' xarray/tests/test_cftimeindex.py::test_multiindex: KeyError: '2001-01' xarray/tests/test_dask.py::TestToDaskDataFrame::test_to_dask_dataframe: AssertionError: DataFrame.index are different Attribute "dtype" are different [left]: object [right]: StringDtype(storage=pyarrow, na_value=<NA>) xarray/tests/test_distributed.py::test_async: IndexError: tuple index out of range xarray/tests/test_formatting.py::test_display_nbytes: AssertionError: assert '<xarray.Data...197 1198 1199' == '<xarray.Data...197 1198 1199' Skipping 86 identical leading characters in diff, use -v to show - 8, 1199], dtype=int16) + 8, 1199], shape=(1200,), dtype=int16) ? +++++++++++++++ Coordinates: * foo (foo) int16 2kB 0 1 2 3 4 5 6 ... 1194 1195 1196 1197 1198 1199 xarray/tests/test_plot.py::TestContour::test_colors_np_levels: assert False + where False = isinstance(array([[0., 0., 0., 1.],\n [1., 0., 0., 1.],\n [1., 1., 1., 1.]]), list) xarray/tests/test_variable.py::TestVariableWithDask::test_datetime64_conversion: assert True == False xarray/tests/test_variable.py::TestVariableWithDask::test_timedelta64_conversion: assert True == False xarray/tests/test_variable.py::TestVariableWithDask::test_multiindex: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() ``` </details>
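Most of the failures above share one shape: a keyword argument (`mode`) that a newer constructor no longer accepts. A generic, library-agnostic sketch of passing a kwarg only when the callee's signature declares it — the store classes here are hypothetical stand-ins, not the real zarr API:

```python
import inspect

class OldStore:
    def __init__(self, mode="w"):
        self.mode = mode

class NewStore:
    def __init__(self):  # the 'mode' kwarg was dropped in this version
        pass

def make_store(cls, **kwargs):
    # Keep only the kwargs that the constructor actually declares.
    params = inspect.signature(cls.__init__).parameters
    accepted = {k: v for k, v in kwargs.items() if k in params}
    return cls(**accepted)

old = make_store(OldStore, mode="a")  # 'mode' is supported, so it is kept
new = make_store(NewStore, mode="a")  # 'mode' is silently dropped
```

This is only a compatibility-shim pattern; the proper fix in a test suite is to follow the library's migration guide for the new constructor signature.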
closed
2024-07-25T00:23:37Z
2024-11-21T22:21:59Z
https://github.com/pydata/xarray/issues/9277
[ "CI" ]
github-actions[bot]
10
plotly/dash
plotly
2,235
Flask contexts not available inside background callback
**Describe your context** ``` dash 2.6.1 dash-core-components 2.0.0 dash-html-components 2.0.0 dash-table 5.0.0 ``` **Describe the bug** > Background callbacks don't have Flask's app and request contexts inside. **Expected behavior** > Background callbacks have Flask's app and request contexts inside. So, after creating the callback function and before providing it to Celery, we should provide Flask contexts inside this function to imitate the default callback behaviour ```python3 with flask_app.app_context(): return (copy_current_request_context(job_fn) if has_request_context() else job_fn)(*args, **kwargs) ``` Any recommendations for now?
closed
2022-09-17T19:49:22Z
2024-10-23T19:41:37Z
https://github.com/plotly/dash/issues/2235
[ "bug", "P3" ]
ArtsiomAntropau
8
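The workaround quoted in the Dash issue above generalizes to a small wrapper that re-enters one or more contexts around a background job before it is handed to the worker. This is a minimal stdlib-only sketch: `make_job` and the context-factory calling convention are hypothetical names for illustration, not Dash or Flask API; with Flask you would pass `flask_app.app_context` as a factory.

```python
import contextlib
import functools


def make_job(job_fn, *context_factories):
    """Wrap job_fn so each context (e.g. an app context, a request
    context) is re-entered around every invocation in the worker."""
    @functools.wraps(job_fn)
    def wrapped(*args, **kwargs):
        with contextlib.ExitStack() as stack:
            # Enter the contexts in the order given, exit in reverse.
            for factory in context_factories:
                stack.enter_context(factory())
            return job_fn(*args, **kwargs)
    return wrapped
```

A Celery task could then register `make_job(job_fn, flask_app.app_context)` instead of `job_fn`, mirroring the `app_context` / `copy_current_request_context` pattern shown in the issue.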
marcomusy/vedo
numpy
267
How to render a mesh with vertex indices(close loop) and face indices(each part) with specific colors?
![freestyle2_](https://user-images.githubusercontent.com/34391447/102007708-5c558880-3d66-11eb-97af-b685d1a25e84.png) There are some old interfaces in the issues, I hope to get the latest answers, thank you very very very much! ref: [line](https://github.com/marcomusy/vedo/issues/219#issuecomment-699074563) + [face](https://github.com/marcomusy/vedo/issues/102) data: [airplane.zip](https://github.com/marcomusy/vedo/files/5684419/airplane.zip) ``` from vedo import * import numpy as np settings.screenshotTransparentBackground = True path_mesh = "airplane_after.obj" path_idx_v0 = "airplane_vert_0.txt" path_idx_f0 = "airplane_face_0.txt" path_idx_f1 = "airplane_face_1.txt" mesh = load(path_mesh, force=True) mesh.c("gray").bc("black") mesh.rotateX(90) idx_v0 = np.loadtxt(path_idx_v0, delimiter=",").astype(np.int) idx_f0 = np.loadtxt(path_idx_f0, delimiter=",").astype(np.int) idx_f1 = np.loadtxt(path_idx_f1, delimiter=",").astype(np.int) l1 = Line(mesh.points()[idx_v0], closed=True, c="r", lw=3) scals_f = np.zeros(mesh.NCells()) scals_f[idx_f0] = 1 scals_f[idx_f1] = 2 mesh.addCellArray(scals_f, "mycellscalars") # mesh.cellColors(scals_f) show(mesh, l1) screenshot("render.png") ```
closed
2020-12-13T09:14:21Z
2020-12-13T13:25:58Z
https://github.com/marcomusy/vedo/issues/267
[]
LogWell
2
Textualize/rich
python
2,677
Can not print text in square brackets
closed
2022-11-30T05:32:15Z
2022-11-30T06:23:09Z
https://github.com/Textualize/rich/issues/2677
[ "Needs triage" ]
willmcgugan
3
deezer/spleeter
tensorflow
418
[Discussion] How fast should spleeting be using spleeter-gpu?
Hello all, I'm running spleeter-gpu (installed via miniconda) on an ec2 GPU instance. I'm still seeing roughly ~25 seconds for it to run 2stem on a 5 minute track, and a bit longer for 5stem. Does this seem about right? Is this as fast as it gets? Thanks
open
2020-06-12T15:19:24Z
2021-12-27T12:17:21Z
https://github.com/deezer/spleeter/issues/418
[ "question" ]
zsaraf
7
developmentseed/lonboard
jupyter
263
Support general color maps
It seems your `apply_continuous_cmap` is very tied to palettable. I experienced this as a point of friction, as I'm used to using Colorcet. So I ended up creating some functionality to convert from colorcet to palettable in #262. I'm not very well versed in colormaps, but I would think that tying it to one provider is friction.
closed
2023-11-26T07:18:52Z
2023-12-04T21:10:00Z
https://github.com/developmentseed/lonboard/issues/263
[]
MarcSkovMadsen
1
home-assistant/core
asyncio
140,590
Tibber integration not connected anymore
### The problem Hi, I have been using the tibber integration for over a year now, but since a week or so it seems to not be available anymore. At least I don't get any data anymore. Tried to reinstall the integration, not possible anymore. Also created a new API Token, same result. ### What version of Home Assistant Core has the issue? core-2024.1.6 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running? Home Assistant OS ### Integration causing the issue tibber ### Link to integration documentation on our website _No response_ ### Diagnostics information after I deleted the tibber integration, I'm not able to install it anymore; when I try to connect, the error says: Connection failed ### Example YAML snippet ```yaml ``` ### Anything in the logs that might be useful for us? ```txt ``` ### Additional information _No response_
open
2025-03-14T11:06:40Z
2025-03-20T06:50:15Z
https://github.com/home-assistant/core/issues/140590
[ "integration: tibber" ]
sabom2d
21
samuelcolvin/dirty-equals
pytest
100
Maintenance status of dirty_equals?
Hi, We started depending on dirty_equals in a couple of test suites instead of further complicating a bunch of ad-hoc, homegrown hacks. However I noticed there hasn't been activity on this repo since November last year, so I want to kindly check on the maintenance status of this project. Just to be sure we're not betting on the wrong horse (including contributing back).
closed
2024-07-15T09:41:04Z
2024-08-13T20:18:45Z
https://github.com/samuelcolvin/dirty-equals/issues/100
[]
soxofaan
4
scrapy/scrapy
python
6,443
Enable caching using 'HTTPCACHE_ENABLED=True' on Windows. Slow second-run speed.
<!-- Thanks for taking an interest in Scrapy! If you have a question that starts with "How to...", please see the Scrapy Community page: https://scrapy.org/community/. The GitHub issue tracker's purpose is to deal with bug reports and feature requests for the project itself. Keep in mind that by filing an issue, you are expected to comply with Scrapy's Code of Conduct, including treating everyone with respect: https://github.com/scrapy/scrapy/blob/master/CODE_OF_CONDUCT.md The following is a suggested template to structure your issue, you can find more guidelines at https://doc.scrapy.org/en/latest/contributing.html#reporting-bugs --> ### Description Enable caching on Windows using 'HTTPCACHE_ENABLED=True'. During the second run, each request takes about 2 seconds due to the enabled proxy. ### Steps to Reproduce 1. Enable caching by setting HTTPCACHE_ENABLED = True. 2. Crawl the website xxx and wait for the process to finish. 3. The second run completes the task quickly. 4. Use Clash to enable a proxy, or set up the proxy directly in the system. 5. On running again, you find that the speed is very slow, approximately 2 seconds per item. 6. Debugging reveals that the slowdown is caused by the line proxy_bypass(parsed.hostname) in the file scrapy\downloadermiddleware\httpproxy.py, within the HttpProxyMiddleware class, in the process_request function. **Expected behavior:** Hope to use the cache directly when it is available. **Actual behavior:** Check the system proxy and then check the IP of the xxx website when the cache is available... **Reproduces how often:** 100% ### Versions Scrapy : 2.11.2 lxml : 5.2.2.0 libxml2 : 2.11.7 cssselect : 1.2.0 parsel : 1.9.1 w3lib : 2.2.1 Twisted : 24.3.0 Python : 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)] pyOpenSSL : 24.1.0 (OpenSSL 3.2.2 4 Jun 2024) cryptography : 42.0.8 Platform : Windows-10-10.0.19045-SP0
closed
2024-07-24T08:59:44Z
2024-08-18T11:42:20Z
https://github.com/scrapy/scrapy/issues/6443
[ "needs more info" ]
pengkua
1
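As an illustration of one way to dodge the per-request cost described in the Scrapy report above (this is not Scrapy's actual fix), the expensive system proxy lookup can be memoized per hostname, since `urllib.request.proxy_bypass` re-reads system proxy settings on every call:

```python
import functools
from urllib.request import proxy_bypass


@functools.lru_cache(maxsize=1024)
def cached_proxy_bypass(hostname: str) -> bool:
    # Only the first lookup per hostname pays the system-settings cost;
    # repeated calls are served from the in-process cache.
    return bool(proxy_bypass(hostname))
```

A subclass of `HttpProxyMiddleware` could call `cached_proxy_bypass` from `process_request`, so only the first request per host pays the lookup cost. Note the cache assumes system proxy settings do not change mid-crawl.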
pmaji/crypto-whale-watching-app
plotly
2
Adding ETHBTC ratio
[appv0.1.txt](https://github.com/pmaji/eth_python_tracker/files/1684175/appv0.1.txt) Hi, I'm nowhere near a coder, just a sysadmin debugging perl/python/bash code sometimes. I was playing around to see if I was able to add the ethbtc chart, and I think I got it. (Don't really understand the details; it's all copy-pasta from your code ;) ) Feel free to take a look and adapt it if you want to merge in the code! Thanks again for sharing!
closed
2018-02-01T04:23:53Z
2018-02-01T04:35:13Z
https://github.com/pmaji/crypto-whale-watching-app/issues/2
[]
arsenicks
2
AirtestProject/Airtest
automation
412
Screen-recording problem when adding the --recording flag to a command-line run
(Please fill in the information below as prompted; it helps us locate and fix the problem quickly, thanks for your cooperation. Otherwise the issue will be closed directly.) **(Important! Issue category)** * Test/development environment AirtestIDE usage issues -> https://github.com/AirtestProject/AirtestIDE/issues **Describe the bug** (Summarize the problem clearly and concisely, or paste the error traceback.) 1. Copy the command-line run statement from the IDE 2. Append --recording at the very end and run 3. After the run finishes, open the log folder: there is only an all-black, 0 KB recording_0.MP4 file with no actual screen-recording content 4. With --recording a.MP4 appended at the end of the command line instead, a screen-recording video file is generated normally after the run, but the file name is recording_1.MP4 **Device:** - Model: Huawei P20 PRO - System: 9
open
2019-05-22T04:49:54Z
2019-05-27T11:15:35Z
https://github.com/AirtestProject/Airtest/issues/412
[ "bug" ]
Jimmy36506
1
paulbrodersen/netgraph
matplotlib
48
Curved layout does not work for multi-component graphs.
Hi Paul, I am having a problem with the scale parameter of the Graph class. First of all, I am not assigning any value to it and I leave it as the default (1.0 1.0). Debugging a bit, I figure out that that value is changed internally by the function "get_layout_for_multiple_components" to (0.67, 1). In this way I always get an error saying that some of the nodes are outside the origin-scale area. I tried with the spring, circular and multipartite layout and all of them produce the same error. Am I setting something wrong? ``` Graph(G, node_layout = 'circular', node_size = 8, node_color = node_color, node_labels = node_label, node_edge_width = border, node_label_fontdict = dict(size=font_size), node_edge_color = edge_color, node_label_offset = 0.15, node_alpha = 1, arrows = True, edge_layout = 'curved', edge_label = show_edge_labels, edge_labels = edge_label, edge_label_fontdict = dict(size=font_size), edge_color = edge_color, edge_width = edge_width, edge_label_position = 0.35) ```
closed
2022-07-17T16:55:58Z
2022-08-04T14:10:21Z
https://github.com/paulbrodersen/netgraph/issues/48
[ "bug" ]
lcastri
4
amidaware/tacticalrmm
django
1,053
Agent AutoUpdate does not work
**Server Info (please complete the following information):** - OS: [Debian 10] - Browser: [all] - RMM Version (as shown in top left of web UI): 0.12.2 **Installation Method:** - [x] Standard **Agent Info (please complete the following information):** - Agent version (as shown in the 'Summary' tab of the agent from web UI): 1.8.0 - Agent OS: Win 10 **Describe the bug** I am sorry to reopen a thread regarding the update of the agent, but I can't manage to understand why it does not work. All the exclusions have been made on Windows Defender and there is nothing in the protection history `Add-MpPreference -ExclusionPath C:\TEMP\TRMM` `Add-MpPreference -ExclusionPath C:\Program Files\TacticalAgent\*` `Add-MpPreference -ExclusionPath C:\Windows\Temp\winagent-v*.exe` `Add-MpPreference -ExclusionPath C:\Program Files\Mesh Agent\*` Agent v2.0.2 is downloaded on the computer, and the update is started but never completes; see log: ![image](https://user-images.githubusercontent.com/84961534/162381104-020c71cd-4661-4928-9546-99ca97c2361f.png) > time="2022-04-08T08:42:17+02:00" level=info msg="Agent updating from 1.8.0 to 2.0.2" time="2022-04-08T08:42:17+02:00" level=info msg="Downloading agent update from https://agents.tacticalrmm.com/api/v1/winagents/?version=2.0.2&arch=64&token=xxxxxxxxxxxxxxxxxxxxxxxxxx" time="2022-04-08T08:42:24+02:00" level=info msg="RPC service started" A manual update works when running an elevated PowerShell `Start-Process -FilePath ".\winagent-v2.0.2.exe" -ArgumentList ('/VERYSILENT /SUPPRESSMSGBOXES /FORCECLOSEAPPLICATIONS') -Wait` Please help me troubleshoot this!
closed
2022-04-08T07:02:06Z
2022-04-08T10:24:56Z
https://github.com/amidaware/tacticalrmm/issues/1053
[]
guillaumebottollier
3
tableau/server-client-python
rest-api
1,336
Replace custom time handling with datetime/timedelta
From #1299: The use of a custom datetime module caught me off guard in the project. I'd suggest using datetime and timedelta as the source for seconds and minutes instead of manually keeping track of what the seconds and minutes ought to be, but didn't want to make any more changes than necessary to fix the problem.
open
2024-01-13T14:59:36Z
2024-01-13T14:59:36Z
https://github.com/tableau/server-client-python/issues/1336
[ "enhancement" ]
jacalata
0
google-research/bert
tensorflow
476
Crash issue when best_non_null_entry is None on SQuAD 2.0
If the n best entries are all null, we would get 'None' for best_non_null_entry and the program will crash in the next few lines. I made a workaround as following by assigning `score_diff = FLAGS.null_score_diff_threshold + 1.0` to fix this issue in `run_squad.py`. Please fix it in the official release. ``` #line 885 best_non_null_entry = None for entry in nbest: total_scores.append(entry.start_logit + entry.end_logit) if not best_non_null_entry: if entry.text: best_non_null_entry = entry ...... #line 905 if not FLAGS.version_2_with_negative: all_predictions[example.qas_id] = nbest_json[0]["text"] else: # predict "" iff the null score - the score of best non-null > threshold if best_non_null_entry: score_diff = score_null - best_non_null_entry.start_logit - ( best_non_null_entry.end_logit) scores_diff_json[example.qas_id] = score_diff else: score_diff = FLAGS.null_score_diff_threshold + 1.0 if score_diff > FLAGS.null_score_diff_threshold: all_predictions[example.qas_id] = "" else: all_predictions[example.qas_id] = best_non_null_entry.text ```
open
2019-03-05T01:26:38Z
2019-03-05T01:26:38Z
https://github.com/google-research/bert/issues/476
[]
xianzhez
0
huggingface/transformers
nlp
36,701
Some questions of `Gemma3` processor
Thanks for bringing us a nice implementation of the `Gemma3` model! After reading the code, I have a question about `gemma3.processing.py`. This segment of code is as follows: [code](https://github.com/huggingface/transformers/blob/d84569387fb1f88c86fb8d82a41f20c9e207d09e/src/transformers/models/gemma3/processing_gemma3.py#L126C16-L133C60) ```python for batch_idx, (prompt, images, num_crops) in enumerate(zip(text, batched_images, batch_num_crops)): image_indexes = [m.start() for m in re.finditer(self.boi_token, prompt)] if len(images) != len(image_indexes): raise ValueError( f"Prompt contained {len(image_indexes)} image tokens but received {len(images)} images." ) # Insert additional image tokens for Pan-and-Scan crops for num, idx in reversed(list(zip(num_crops, image_indexes))): if num: formatted_image_text = ( f"Here is the original image {self.boi_token} and here are some crops to help you see better " + " ".join([self.boi_token] * num) ) prompt = prompt[:idx] + formatted_image_text + prompt[idx + len(self.boi_token) :] text_with_crops[batch_idx] = prompt ``` I can see that this code is handling the placeholders for the image and when `Pan-and-Scan` is on, the crops of the image will also be added to the sentence before feeding into the tokenizer. But `text_with_crops` seems never to be used after that. The sub-fig in a sentence is a nice feature for me! Is there something I missed or is the code in `if num` incomplete? @RyanMullins @zucchini-nlp
closed
2025-03-13T15:43:16Z
2025-03-14T12:07:57Z
https://github.com/huggingface/transformers/issues/36701
[ "VLM" ]
Kuangdd01
2
iterative/dvc
data-science
10,378
datasets: include uri in api output for dvcx
See https://github.com/iterative/dvcx/pull/1321/files#r1547778555. `dvc.api.get_dataset()` should return something like `{"name": "dogs-and-cats", "version": 1, "uri": "ds://dogs-and-cats@v1"}` (adding the `"uri"` field).
open
2024-04-02T12:35:30Z
2024-10-05T22:55:13Z
https://github.com/iterative/dvc/issues/10378
[ "p2-medium", "A: api", "A: data-management" ]
dberenbaum
0
plotly/plotly.py
plotly
4,121
Feature request - dual renderer
# The problem Whenever you have an interactive figure in a Jupyter notebook, the plot will not show if notebook is exported to pdf using nbconvert or if notebook is uploaded to an environment such as Github. This can be solved by inserting `pio.renderers.default = "png"`, and then execute all cells again. But then interactivity is lost... It's undesirable having to switch the default renderer back and forth all the time. # Partial solution A partial solution is to render everything twice: ```Python fig.show(renderer='png') fig.show() ``` Then notebooks on Github or exported to pdf, will appear with static rendered figures, but in an interactive environment everything will appear twice. This, is obviously not desirable either, and causes bloat. # Proposal It would be nice if one could display a figure with something like this: ```Python fig.show(renderer=['plotly_mimetype', 'png']) ``` I'm not fully into how Jupyter cell-output works, and what is possible, but I could imagine it being something like this: Plotly would either: - Somehow render the interactive figure on top of the static figure cell-output. - Or, render in separate cell-outputs, but have the interactive figure hide the static figure. As said, I don't know if this is possible, or maybe this request is not possible within plotly, but should be posted on IPython/Jupyter.
closed
2023-03-24T09:30:28Z
2023-03-25T22:43:49Z
https://github.com/plotly/plotly.py/issues/4121
[]
KaareH
4
thtrieu/darkflow
tensorflow
436
def preprocess(self, im, allobj = None) im shape
If `im`'s shape is not equal to `self.meta['inp_size']`, `allobj` is wrong
open
2017-11-21T08:46:55Z
2017-11-21T08:46:55Z
https://github.com/thtrieu/darkflow/issues/436
[]
adeagle
0
chezou/tabula-py
pandas
325
Superscript numbers in PDF coerce to be a normal number
<!--- Provide a general summary of your changes in the Title above --> Superscript numbers show up concatenated as normal numbers <!-- Write the summary of your issue here --> I am attempting to extract some data that contains superscripts. Image of the number in question: [https://i.stack.imgur.com/tdXKR.png](url) <!--- Write and check the following questionaries. --> - [x] Did you read [FAQ](https://tabula-py.readthedocs.io/en/latest/faq.html)? - [x] (Optional, but really helpful) Your PDF URL: PDF in question, page 147 is the table [https://edisciplinas.usp.br/pluginfile.php/4557662/mod_resource/content/1/CRC%20Handbook%20of%20Chemistry%20and%20Physics%2095th%20Edition.pdf](url) - [x] Paste the output of `import tabula; tabula.environment_info()` on Python REPL: Python version: 3.7.12 | packaged by conda-forge | (default, Oct 26 2021, 06:08:53) [GCC 9.4.0] Java version: openjdk version "11.0.16" 2022-07-19 OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu120.04) OpenJDK 64-Bit Server VM (build 11.0.16+8-post-Ubuntu-0ubuntu120.04, mixed mode, sharing) tabula-py version: 2.5.1 platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid uname: uname_result(system='Linux', node='2e4bec642b2a', release='5.15.65+', version='#1 SMP Sat Oct 22 09:37:52 UTC 2022', machine='x86_64', processor='x86_64') linux_distribution: ('Ubuntu', '20.04', 'focal') mac_ver: ('', ('', '', ''), '') If not possible to execute `tabula.environment_info()`, please answer following questions manually. - [x] Paste the output of `python --version` command on your terminal: 3.7.12 - [x] Paste the output of `java -version` command on your terminal: 11.0.16 - [x] Does `java -h` command work well?; Ensure your java command is included in `PATH` - [x] Write your OS and it's version: OpenJDK Runtime Environment (build 11.0.16+8-post-Ubuntu-0ubuntu120.04) Kaggle kernel tabula-py version: 2.5.1 # What did you do when you faced the problem? 
<!--- Provide your information to reproduce the issue. --> Currently no workaround found, searching for numbers seem to be assigned as standard number and not callable by regex calls for superscript numbers. ## Code: ``` import tabula tabula.read_pdf(pdf_path, pages=147, multiple_tables=False, stream=True, guess=False, area = (54.2, 53.8, 794.3, 615.0), columns = (70.1, 152.9, 236.8, 287.7, 324.9, 351.9, 387.0, 423.2, 456.8, 487.9, 514.3, 559.9)) ``` ## Expected behavior: <!--- Write your expected results/outputs --> ``` 250^9 or superscript just ignored so 250 ``` ## Actual behavior: <!--- Put the actual results/outputs --> ``` 2509 ``` ## Related Issues:
closed
2022-10-26T14:42:10Z
2022-10-26T14:44:11Z
https://github.com/chezou/tabula-py/issues/325
[]
drewbeh
2
dsdanielpark/Bard-API
api
63
Error when trying to execute the C# translation code
There are so many errors when you try to execute the translated code in C#. It will display many errors once you run the code as it is, from the missing variables to the regex expression and the bad-request error.
closed
2023-06-17T13:39:12Z
2023-06-20T03:37:11Z
https://github.com/dsdanielpark/Bard-API/issues/63
[]
SalimLouDev
0
wsvincent/awesome-django
django
138
Add navigation to sidebar
I guess it can be a good idea to add navigation through the sections to the sidebar on https://awesomedjango.org/
closed
2021-09-23T16:39:27Z
2021-09-23T18:29:10Z
https://github.com/wsvincent/awesome-django/issues/138
[]
sergeyshevch
0
ckan/ckan
api
7,909
`asbool` cannot handle empty strings
## CKAN version 2.10 ## Describe the bug `ckan.common.asbool` can handle empty lists, empty dicts, empty sets, `0`, and `None`, all of which evaluate to `False`. However, if given an empty string, it will error out with "String is not true/false". ### Expected behavior An empty string should be evaluated as `False`, like every other empty object.
closed
2023-11-13T00:01:45Z
2023-11-16T18:35:53Z
https://github.com/ckan/ckan/issues/7909
[]
ThrawnCA
3
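A sketch of how `asbool` could treat the empty string like every other empty object, as the CKAN report above expects. This is hypothetical illustration code, not CKAN's actual implementation; the truthy/falsy string sets are assumptions:

```python
TRUTHY = {"true", "yes", "on", "y", "t", "1"}
FALSY = {"false", "no", "off", "n", "f", "0", ""}  # note: "" counts as falsy


def asbool(obj):
    """Coerce strings and other objects to bool, treating "" as False."""
    if isinstance(obj, str):
        s = obj.strip().lower()
        if s in TRUTHY:
            return True
        if s in FALSY:
            return False
        raise ValueError(f"String is not true/false: {obj!r}")
    # Empty lists, dicts, sets, 0 and None all evaluate to False here.
    return bool(obj)
```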
ranaroussi/yfinance
pandas
1,386
download and history methods fail with proxy
Description yfinance.download() and Ticker.history() methods do not work with a proxy and return no data with a message like: 1 Failed download: - AMZN: No data found for this range, symbol may be delisted yFinance: 0.2.9 + hotfix/proxy (base.py and fundamentals.py) python: 3.9.12 OS: Windows 10 20H2 Analysis Bug is in the base.py module, in the history() function at line 571: ``` data = get_fn( url=url, params=params, timeout=timeout ) ``` Fix To fix it, simply add `proxy=proxy` as a fourth parameter. Enjoy!! MV
closed
2023-01-31T11:32:08Z
2023-07-21T11:59:06Z
https://github.com/ranaroussi/yfinance/issues/1386
[]
vidalmarco
3
OpenBB-finance/OpenBB
python
6,858
Unlocking Finance for All: Spreading the Word with OpenBB
### What side quest or challenge are you solving? I'm tackling the No-Code Side Quest for OpenBB Finance! My challenge is to create engaging Twitter threads to spread the word about this amazing AI-powered financial research tool. Helping to grow awareness and build a community around #OpenBB while making finance accessible for all. 🌍💡 ### Points 150-500 Points ### Description I contributed to the No-Code Side Quest for OpenBB Finance by crafting engaging Twitter content to raise awareness about the platform. My task involved creating tweets and threads that highlight OpenBB’s AI-powered research and analytics tools, promoting its features, and encouraging community involvement. This helps make financial tools more accessible and educates users about the power of open-source finance. ### Provide proof that you've completed the task Here's the link to the tweets showcasing my contribution: https://x.com/snigdha_1234567/status/1849315580719063113 https://x.com/snigdha_1234567/status/1849315583067873290
closed
2024-10-24T05:34:41Z
2024-10-24T05:36:59Z
https://github.com/OpenBB-finance/OpenBB/issues/6858
[]
SNIDGHA
0
pandas-dev/pandas
data-science
61,160
ENH: Accept no fields for groupby by
### Feature Type - [ ] Adding new functionality to pandas - [x] Changing existing functionality in pandas - [ ] Removing existing functionality in pandas ### Problem Description Hello, Sometimes you have no fields to group by when aggregating. I know there is the aggregate function for that, but it would help to have more dynamic code by allowing the use of groupby without any grouping field, instead of this error: ![Image](https://github.com/user-attachments/assets/02d42ecd-fb72-4e25-999c-dd9e1deb14ff) Best regards, Simon ### Feature Description Just the ability to select no fields in the by argument ### Alternative Solutions A conditional function that uses groupby or aggregate ### Additional Context _No response_
open
2025-03-21T18:19:54Z
2025-03-24T02:43:26Z
https://github.com/pandas-dev/pandas/issues/61160
[ "Enhancement", "Needs Triage" ]
simonaubertbd
6
pydata/xarray
numpy
10,098
`xr.open_datatree` generates duplicate dask keys
### What happened? Pretty serious and sneaky bug in `open_datatree`, which can cause all nodes to load exactly the same data when dask is used. ```python import xarray as xr # Write out an example tree ds = xr.tutorial.open_dataset("air_temperature").chunk(time=-1) dt = xr.DataTree() dt["air1"] = ds dt["air2"] = ds * 2 dt.to_zarr("test.zarr", mode="w") # Without dask, looks good dt = xr.open_datatree("test.zarr", engine="zarr", chunks=None) print(dt.air1.air.equals(dt.air2.air)) # False # With dask dt = xr.open_datatree("test.zarr", engine="zarr", chunks={}) print(dt.air1.air.equals(dt.air2.air)) # True, uhoh ``` The dask tasks for each open are labeled identically for each node: ``` print(dt.air1.air.data.__dask_graph__().dependencies) print(dt.air2.air.data.__dask_graph__().dependencies) {'original-open_dataset-air-86fc3dbefb498bddaa9d70756e1c8822': set(), 'open_dataset-air-86fc3dbefb498bddaa9d70756e1c8822': {'original-open_dataset-air-86fc3dbefb498bddaa9d70756e1c8822'}} {'original-open_dataset-air-86fc3dbefb498bddaa9d70756e1c8822': set(), 'open_dataset-air-86fc3dbefb498bddaa9d70756e1c8822': {'original-open_dataset-air-86fc3dbefb498bddaa9d70756e1c8822'}} ``` Which is because here, we just pass through the root directory as the `filename_or_obj` for every node into the tokenizer: https://github.com/pydata/xarray/blob/282235f4f3e3432c9defaee45777ecef256d8684/xarray/backends/api.py#L447-L463 I think instead we need to pass through something like `os.path.join(filename_or_obj, path)` for each node's dataset. ### MVCE confirmation - [x] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray. - [x] Complete example — the example is self-contained, including all data and the text of any traceback. - [x] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result. 
- [x] New issue — a search of GitHub Issues suggests this is not a duplicate. - [x] Recent environment — the issue occurs with the latest version of xarray and its dependencies. ### Environment <details> xarray==2025.01.2 </details>
closed
2025-03-05T17:52:08Z
2025-03-07T01:12:27Z
https://github.com/pydata/xarray/issues/10098
[ "bug", "topic-dask", "topic-DataTree" ]
slevang
4
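The fix direction suggested in the xarray report above, feeding each node's group path (not just the root store path) into whatever generates the dask token, can be illustrated with a stdlib-only sketch; the function name and token format here are made up for illustration and are not xarray API:

```python
import hashlib
import os


def node_token(filename_or_obj: str, node_path: str, var: str) -> str:
    # Joining the group path onto the store path makes the key unique
    # per node, so sibling groups like /air1 and /air2 no longer
    # collide into the same dask task name.
    key = os.path.join(str(filename_or_obj), node_path.lstrip("/"))
    digest = hashlib.md5(f"open_dataset-{var}-{key}".encode()).hexdigest()
    return f"open_dataset-{var}-{digest}"
```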
eralchemy/eralchemy
sqlalchemy
11
Unable to import psycopg2 error in brew install.
I installed eralchemy using brew, but when I attempt to run it, I'm getting the following error: ``` $ eralchemy -i 'postgresql+psycopg2://dakota@localhost:5432/logbook' -o logbook.pdf Please install psycopg2 using "pip install psycopg2". ``` So for the sake of not reporting an obvious issue; I went ahead and ran the command as requested: ``` $ pip install psycopg2 Requirement already satisfied (use --upgrade to upgrade): psycopg2 in /usr/local/lib/python2.7/site-packages ``` And of course, opening a Python terminal and importing psycopg2 works. I even tried creating a virtual environment, installing pscyopg2 there, and running eralchemy. No joy. ERAlchemy is working for me now; I uninstalled the brew installation and used pip to install the latest and am now good to go. Not sure if anyone else has had this issue; and I don't really know how to resolve it.
closed
2016-02-10T14:37:14Z
2016-04-10T20:10:30Z
https://github.com/eralchemy/eralchemy/issues/11
[]
bbengfort
3
nltk/nltk
nlp
3,104
Using Mobile Hotspot worked like magic. Thank you @AurekC
Using Mobile Hotspot worked like magic. Thank you @AurekC _Originally posted by @ishandutta0098 in https://github.com/nltk/nltk/issues/1981#issuecomment-646984229_ yes, hotspot truly does work like magic
closed
2023-01-08T04:34:33Z
2023-11-14T02:25:48Z
https://github.com/nltk/nltk/issues/3104
[]
TrinityNe0
1
seleniumbase/SeleniumBase
pytest
2,766
Update examples to use the newer CF Turnstile selector
## Update examples to use the newer CF Turnstile selector The Cloudflare Turnstile checkbox selector changed from `span.mark` to just `span`. I've already updated the tests for it: https://github.com/seleniumbase/SeleniumBase/commit/e693775f56d0ad2904112577217d934437715125 This is not surprising, considering that they are watching me: https://www.youtube.com/watch?v=2pTpBtaE7SQ&t=1907s <img width="400" alt="Cloudflare found me" src="https://github.com/seleniumbase/SeleniumBase/assets/6788579/4375b981-766b-4256-9611-3ad386a5f90c"> People should be aware that websites can change, which causes CSS Selectors to change too. Thankfully, no SeleniumBase changes were needed this time, unlike last time: https://github.com/seleniumbase/SeleniumBase/issues/2626 In case there's any confusion, this is all you need to do: <img width="500" alt="Replace All" src="https://github.com/seleniumbase/SeleniumBase/assets/6788579/5bafa865-91d5-42a6-a8f6-93c4d28a36cf"> ...and that should solve the issue!
closed
2024-05-10T16:20:34Z
2024-06-23T23:57:47Z
https://github.com/seleniumbase/SeleniumBase/issues/2766
[ "documentation", "tests", "UC Mode / CDP Mode", "Fun" ]
mdmintz
1
google/trax
numpy
1,431
Effective train/eval batch_size is always 1 due to batcher default arg "variable_shapes=True"
### Description When providing inputs with a constant shape - for instance imagenet32 where examples are always of length *3072*, but it also applies e.g. to this config: https://github.com/google/trax/blob/master/trax/supervised/configs/reformer_imagenet64.gin and not specifying `variable_shapes=False`, as it isn't in the config above, the effective training and evaluation `batch_size` is always equal to **1**. The reason for that is default argument `variable_shapes` in this function set to True, which enables the bucketer to do some magic so that the effective train/eval batch becomes 1, regardless of what was specified by the user: https://github.com/google/trax/blob/master/trax/data/inputs.py#L791 That seems like an annoying bug that causes **huge** unexplained variance among eval batches and makes training on a batch bigger than 1 per device possible only virtually (without even being aware of this `variable_shapes` arg and using constant shape data). My repro confirming that has been done using the latest trax dev (>=1.3.7), but the problem probably exists also in 1.3.6 and earlier. Setting `variable_shapes=False` in the gin config explicitly solves the problem, however needing to specify it there doesn't seem like a good default behaviour and can lead many further people to this bug. ... ### Environment information ``` environment independent problem (the issue is in the logic) ``` ### For bugs: reproduction and error logs # Steps to reproduce: ### To make this repro work, variable_shapes shouldn't be specified in config.gin (the default value is True and it causes the issue), and the input should be of constant shape ```python from trax.data.inputs import batcher import gin gin.parse_config_file('config.gin') b = batcher() ev = b.eval_stream(1) print(next(ev)[0].shape) # That prints (1, ...) regardless of the train/eval bs specified in config.gin ``` # Error logs: ``` None - I have noticed this by printing eval batch shapes in debugger ```
open
2021-02-05T11:18:24Z
2021-02-06T11:52:25Z
https://github.com/google/trax/issues/1431
[]
syzymon
0
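The explicit workaround mentioned in the trax report above can be pinned in the gin config itself. This is a sketch, assuming the batcher is configured under its default gin scope; the parameter names follow `trax.data.inputs.batcher`, and the batch sizes are placeholder values:

```
# config.gin: pin variable_shapes so constant-shape data keeps the
# requested batch size instead of collapsing to 1
batcher.variable_shapes = False
batcher.batch_size_per_device = 8
batcher.eval_batch_size = 8
```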
influxdata/influxdb-client-python
jupyter
376
Parameter `location` for the `aggregateWindow` function
I'm trying to use the new `location` attribute for the `aggregateWindow` function. https://docs.influxdata.com/flux/v0.x/stdlib/universe/aggregatewindow/ I tried to give the parameter like this : `aggregateWindow(every: 1m, fn: mean, location: "UTC")` But the following error occured : `{"code":"invalid","message":"error @1:324-1:396: expected {zone:string, offset:duration} but found string (argument location)"}` So I tried like this (from [this doc](https://docs.influxdata.com/flux/v0.x/stdlib/timezone/#constants)) : `aggregateWindow(every: 1m, fn: mean,location: {zone: "UTC", offset: 0h})` and even like this `aggregateWindow(every: 1m, fn: mean,location: {"zone": "UTC", "offset": 0h})` but a `KeyError: 'zone'` occured in both cases. Is it failing because it's not yet implemented in the library or because I use it with the wrong syntax ? As it's a new parameter it's not yet documented in this library but the first error suggested that the parameter is available so I'm a bit confused here. __Specifications:__ - Client Version: 1.24.0 - Platform: Windows
closed
2021-12-06T11:20:53Z
2021-12-07T11:22:00Z
https://github.com/influxdata/influxdb-client-python/issues/376
[ "wontfix" ]
Yaronn44
2
rasbt/watermark
jupyter
23
Newline is not taking into account
Generally, a programmer may want more than one library package to be listed. ``` %watermark -a "author" -d -t -v -m -p numpy,pandas,scipy,sklearn,statsmodels,matplotlib,seaborn,bokeh,xgboost,`\n h2o,pymc3,lifelines,theano,altair ``` Is there any scope for including this functionality?
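A possible workaround, assuming the root cause is that `%watermark` is a line magic (line magics cannot continue across a `\` line break): split the package list over separate invocations, e.g.:

```
%watermark -a "author" -d -t -v -m -p numpy,pandas,scipy,sklearn,statsmodels,matplotlib,seaborn
%watermark -p bokeh,xgboost,h2o,pymc3,lifelines,theano,altair
```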
closed
2016-12-13T16:31:45Z
2016-12-13T16:58:41Z
https://github.com/rasbt/watermark/issues/23
[]
chandrad
2
vanna-ai/vanna
data-visualization
181
Add a function to summarize results
We need to add a function that will summarize the results in natural language. The function will have to take in the tabular results and/or the chart image.
closed
2024-01-23T14:34:58Z
2024-03-02T04:15:38Z
https://github.com/vanna-ai/vanna/issues/181
[]
zainhoda
1
google-research/bert
nlp
590
Can you update the BibTex of the paper?
I want to cite the NAACL paper instead of the arXiv paper.
open
2019-04-19T08:50:22Z
2019-04-19T08:50:22Z
https://github.com/google-research/bert/issues/590
[]
Das-Boot
0
scikit-learn/scikit-learn
python
30,830
⚠️ CI failed on Wheel builder (last failure: Feb 14, 2025) ⚠️
**CI failed on [Wheel builder](https://github.com/scikit-learn/scikit-learn/actions/runs/13322079886)** (Feb 14, 2025)
closed
2025-02-14T04:42:36Z
2025-02-15T04:48:20Z
https://github.com/scikit-learn/scikit-learn/issues/30830
[ "Needs Triage" ]
scikit-learn-bot
3
serengil/deepface
machine-learning
1,373
[QUESTION]: <FMR for default verification thresholds>
### Description Hi, the verification functions have preset thresholds for each model. Is there a certain FMR (false match rate) used for defining those thresholds? ### Additional Info _No response_
closed
2024-10-21T08:02:39Z
2024-10-21T09:24:52Z
https://github.com/serengil/deepface/issues/1373
[ "enhancement" ]
Tuulimylly-Jack
1
wkentaro/labelme
computer-vision
1,486
Windows cmd: Error "Fatal error in launcher: Unable to find an appended archive" after entering labelme
### Discussed in https://github.com/labelmeai/labelme/discussions/1485 <div type='discussions-op-text'> <sup>Originally posted by **ZGB0414** August 23, 2024</sup> (labelme) F:\software\anaconda3\envs\labelme\Lib\site-packages>labelme Fatal error in launcher: Unable to find an appended archive.</div>
open
2024-08-23T08:17:44Z
2024-08-23T08:17:44Z
https://github.com/wkentaro/labelme/issues/1486
[]
ZGB0414
0
huggingface/datasets
nlp
6,589
After `2.16.0` version, there are `PermissionError` when users use shared cache_dir
### Describe the bug - We use shared `cache_dir` using `HF_HOME="{shared_directory}"` - After dataset version 2.16.0, datasets uses `filelock` package for file locking #6445 - But, `filelock` package make `.lock` file with `644` permission - Dataset is not available to other users except the user who created the lock file via `load_dataset`. ### Steps to reproduce the bug 1. `pip install datasets==2.16.0` 2. `export HF_HOME="{shared_directory}"` 3. download dataset with `load_dataset` 4. logout and login another user 5. `pip install datasets==2.16.0` 6. `export HF_HOME="{shared_directory}"` 7. download dataset with `load_dataset` 8. `PermissionError` occurs ### Expected behavior - Users can share `cache_dir` using environment variable `HF_HOME` ### Environment info - python == 3.9.10 - datasets == 2.16.0 - ubuntu 22.04 - shared_directory has ACL ![image (1)](https://github.com/huggingface/datasets/assets/106717516/5ca759db-ad0c-4883-9a97-9c8fccd00d8a) - users are same group (developers)
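For context, here is a minimal sketch (file name is illustrative) of why a shared cache breaks: a new file's mode is the creation mode masked by the process umask, and since `filelock` creates locks with an explicit mode (0o644 by default in recent releases, matching the 644 permission reported above), the group-write bit can never appear, whatever umask the second user sets:

```python
import os
import stat
import tempfile

# Illustrative path; filelock's real lock files live under the HF cache.
path = os.path.join(tempfile.gettempdir(), "shared_cache_demo.lock")
if os.path.exists(path):
    os.remove(path)

# A file is created with (requested_mode & ~umask); umask can only clear
# bits, never add them, so a 0o644 request never yields group write.
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
os.close(fd)

mode = stat.S_IMODE(os.stat(path).st_mode)
os.remove(path)
```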
closed
2024-01-15T06:46:27Z
2024-02-02T07:55:38Z
https://github.com/huggingface/datasets/issues/6589
[]
minhopark-neubla
2
taverntesting/tavern
pytest
786
Multiple MQTT Responses to Single Request - Support out of order messages
As referenced in #385, which is currently implemented in the feature-2.0 branch, the MQTT messages can arrive out of order, as MQTT provides no order guarantee. While the test infrastructure supports a YAML spec that allows a list of response objects, the test only passes if the messages are received in the same order as the test response expects, which may or may not happen. The scope of this issue would be to - Design any modifications to the YAML spec to specify if order is required or not to pass - Implement necessary modifications to the logic, such as starting a thread to accumulate responses, to support out-of-order - Create a system test that can check out of order responses - Verify the system test passes with out of order responses - Link necessary MQTT spec documentation in support of "order guarantee" in MQTT > I neglected to realise that could be the case with MQTT, it probably just needs to start a separated thread for each response and collect it at the end. _Originally posted by @michaelboulton in https://github.com/taverntesting/tavern/issues/385#issuecomment-1146628963_
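Independent of Tavern's internals, the order-free check itself can be sketched as multiset containment over the accumulated messages (names here are illustrative):

```python
def match_unordered(expected, received):
    """True if every expected message can be paired with a distinct received one,
    regardless of arrival order."""
    remaining = list(received)
    for exp in expected:
        try:
            remaining.remove(exp)  # consume one matching message
        except ValueError:
            return False  # an expected message never arrived
    return True
```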
closed
2022-06-06T16:43:44Z
2023-01-16T10:04:43Z
https://github.com/taverntesting/tavern/issues/786
[]
RFRIEDM-Trimble
7
Neoteroi/BlackSheep
asyncio
388
Support Pydantic v2
Currently BlackSheep supports Pydantic v1. There are several breaking changes in Pydantic v2 that require changes.
open
2023-07-01T07:36:14Z
2023-07-03T05:01:40Z
https://github.com/Neoteroi/BlackSheep/issues/388
[ "document" ]
RobertoPrevato
0
google-research/bert
nlp
1,000
bert run_classifier key error = '0'
File "run_classifier.py", line 981, in <module> tf.app.run() File "C:\Users\Parveen\ishan\bertenv\lib\site-packages\tensorflow_core\python\platform\app.py", line 40, in run _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef) File "C:\Users\Parveen\ishan\bertenv\lib\site-packages\absl\app.py", line 299, in run _run_main(main, args) File "C:\Users\Parveen\ishan\bertenv\lib\site-packages\absl\app.py", line 250, in _run_main sys.exit(main(argv)) File "run_classifier.py", line 942, in main predict_file) File "run_classifier.py", line 490, in file_based_convert_examples_to_features max_seq_length, tokenizer) File "run_classifier.py", line 459, in convert_single_example label_id = label_map[example.label] KeyError: '0' I have changed the labels in the colaProcessor class and my training is successful, I am getting this error during testing. Please help
closed
2020-02-11T05:51:45Z
2020-08-04T06:44:03Z
https://github.com/google-research/bert/issues/1000
[]
agarwalishan
1
junyanz/pytorch-CycleGAN-and-pix2pix
pytorch
1,102
Same ouput images for difference input images in pix2pix model
I tried to implement the pix2pix model with the KAIST thermal-visible dataset to translate thermal images to visible images. I trained for around 20 epochs and the test results are very unexpected: the generated fake images for all the different test images are the same, with no detail. I have tried many variations while training and end up with the same problem. @junyanz please help me with this issue.
open
2020-07-24T06:34:54Z
2020-07-24T08:54:24Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1102
[]
mAmulya
1
apragacz/django-rest-registration
rest-api
62
How to report vulnerabilities
It would be nice if you added some information to the README how you expect vulnerabilities to be reported. (I'd like to report one.)
closed
2019-06-29T18:46:04Z
2019-07-07T23:11:54Z
https://github.com/apragacz/django-rest-registration/issues/62
[]
peterthomassen
3
ymcui/Chinese-LLaMA-Alpaca
nlp
638
Claims to be "a product of OpenAI" during inference
### The following must be checked before submitting - [X] Please make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed. - [X] Since the related dependencies are updated frequently, please make sure to follow the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki) - [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the existing issues, finding no similar issue or solution - [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects ourselves - [X] Model correctness check: be sure to verify the model against [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); with a wrong model, neither the results nor normal operation can be guaranteed ### Issue type Effectiveness issue ### Base model Alpaca-Plus-13B ### Operating system Windows ### Detailed description of the problem During inference I noticed that the Alpaca-Plus-7B and Alpaca-Plus-13B models, when refusing to answer or stating what they are, claim that they are `a product developed by OpenAI, named ChatGPT`, which is wrong and may cause unnecessary trouble. Could this be resolved or suppressed by using cleaner data or special fine-tuning? ### Dependencies (must be provided for code-related issues) N/A ### Run logs or screenshots * * * * I did not deliberately steer the model; all of the following are first or second questions. ## Entirely in Chinese ![Created by OpenAI](https://github.com/ymcui/Chinese-LLaMA-Alpaca/assets/63788100/79e4e56d-3a4a-4f71-8793-d07dfad9e0d7) ![My name is ChatGPT, a language model based on artificial intelligence.](https://github.com/ymcui/Chinese-LLaMA-Alpaca/assets/63788100/9acc4dad-10b6-4bea-8133-7fcd8b45b795) ## Entirely in English ![created by OpenAI](https://github.com/ymcui/Chinese-LLaMA-Alpaca/assets/63788100/b0becc79-2dba-4145-a59a-5ade583a2b7f) * Sometimes it can avoid the name issue, but sometimes it cannot. ![Sometimes it can avoid the name issue, but sometimes it cannot](https://github.com/ymcui/Chinese-LLaMA-Alpaca/assets/63788100/b91ae3e5-9e51-4c30-a69f-e96677ea2d26) * It even says it is GPT-2? ![It even says it is GPT-2?](https://github.com/ymcui/Chinese-LLaMA-Alpaca/assets/63788100/f231fa5f-5fd7-473b-8bed-7d237fe97440)
closed
2023-06-19T09:15:03Z
2023-06-26T23:53:49Z
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/638
[ "stale" ]
PJ-568
2
huggingface/datasets
tensorflow
7,335
Too many open files: '/root/.cache/huggingface/token'
### Describe the bug I ran this code: ``` from datasets import load_dataset dataset = load_dataset("common-canvas/commoncatalog-cc-by", cache_dir="/datadrive/datasets/cc", num_proc=1000) ``` And got this error. Before, it was some other file though (like something...incomplete). Running ``` ulimit -n 8192 ``` did not help at all. ### Steps to reproduce the bug Run the code I sent ### Expected behavior There should be no errors ### Environment info Linux, JupyterLab.
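One thing worth checking (a sketch, not datasets-specific): `ulimit -n` only changes the shell it runs in, so a JupyterLab kernel started by a different parent process keeps its old limit. The soft limit can instead be raised from inside the running process, and a much smaller `num_proc` also helps, since every worker holds files open:

```python
import resource

# Raise the soft open-files limit up to the hard limit (no privileges needed).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if hard == resource.RLIM_INFINITY:
    target = 8192
else:
    target = min(8192, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))

new_soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
```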
open
2024-12-16T21:30:24Z
2024-12-16T21:30:24Z
https://github.com/huggingface/datasets/issues/7335
[]
kopyl
0
LAION-AI/Open-Assistant
machine-learning
2,972
Improve Dataset Entry to add system tag for back-and-forth conversations
Add system tag for each answer in a back and forth conversation. So we have to convert `[Q1, A1, Q2, A2]` to `<prompter>q1<eos><system>attrib1<eos><assistant>a1<eos><prompter>q2<eos><system>attrib2<eos><assistant>a2<eos>` This also includes changing the prompter and system order
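The described conversion can be sketched as follows (names are illustrative, with `<eos>` spelled out literally for clarity):

```python
def linearize(qa_turns, attributes, eos="<eos>"):
    """Render [Q1, A1, Q2, A2, ...] plus one system attribute per turn
    as <prompter>q<eos><system>attrib<eos><assistant>a<eos>..."""
    questions = qa_turns[0::2]
    answers = qa_turns[1::2]
    parts = []
    for q, a, attrib in zip(questions, answers, attributes):
        parts.append(f"<prompter>{q}{eos}<system>{attrib}{eos}<assistant>{a}{eos}")
    return "".join(parts)
```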
closed
2023-04-29T12:39:47Z
2023-06-05T08:27:24Z
https://github.com/LAION-AI/Open-Assistant/issues/2972
[ "ml" ]
CloseChoice
0
pydata/xarray
numpy
9,496
`concat()` very slow when inserting `NaN` into Dask arrays
### What is your issue? Given the following situation: - a small Dataset with a few variables and a single dimension `dim1` , backed by Dask - a large Dataset with a single variable and a single dimension `dim1`, backed by Dask When I `concat()` them along `dim1`, xarray extends the variables that appear in the first Dataset but not in the second Dataset with `NaN`. I would expect this to be lazy and to execute almost instantly, but it turns out to be very slow on my machine. Example code: ```python3 import xarray as xr import dask.array as da import numpy as np ds1 = xr.Dataset( data_vars=dict( var1=('dim1', da.arange(10, dtype=np.float64, chunks=-1)), var2=('dim1', da.arange(10, dtype=np.float64, chunks=-1)), var3=('dim1', da.arange(10, dtype=np.float64, chunks=-1)), var4=('dim1', da.arange(10, dtype=np.float64, chunks=-1)), var5=('dim1', da.arange(10, dtype=np.float64, chunks=-1)), var6=('dim1', da.arange(10, dtype=np.float64, chunks=-1)), var7=('dim1', da.arange(10, dtype=np.float64, chunks=-1)) ), ) ds2 = xr.Dataset( data_vars=dict( var1=('dim1', da.arange(100_000, dtype=np.float64, chunks=20_000)), ), ) print(ds1) print('var1 chunks:', ds1['var1'].chunksizes) print() print(ds2) print('var1 chunks:', ds2['var1'].chunksizes) print() concat = xr.concat([ds1, ds2], dim='dim1') print(concat) print('var1 chunks:', concat['var1'].chunksizes) print('var2 chunks:', concat['var2'].chunksizes) ``` Output: ``` <xarray.Dataset> Size: 560B Dimensions: (dim1: 10) Dimensions without coordinates: dim1 Data variables: var1 (dim1) float64 80B dask.array<chunksize=(10,), meta=np.ndarray> var2 (dim1) float64 80B dask.array<chunksize=(10,), meta=np.ndarray> var3 (dim1) float64 80B dask.array<chunksize=(10,), meta=np.ndarray> var4 (dim1) float64 80B dask.array<chunksize=(10,), meta=np.ndarray> var5 (dim1) float64 80B dask.array<chunksize=(10,), meta=np.ndarray> var6 (dim1) float64 80B dask.array<chunksize=(10,), meta=np.ndarray> var7 (dim1) float64 80B dask.array<chunksize=(10,), 
meta=np.ndarray> var1 chunks: Frozen({'dim1': (10,)}) <xarray.Dataset> Size: 800kB Dimensions: (dim1: 100000) Dimensions without coordinates: dim1 Data variables: var1 (dim1) float64 800kB dask.array<chunksize=(20000,), meta=np.ndarray> var1 chunks: Frozen({'dim1': (20000, 20000, 20000, 20000, 20000)}) <xarray.Dataset> Size: 6MB Dimensions: (dim1: 100010) Dimensions without coordinates: dim1 Data variables: var1 (dim1) float64 800kB dask.array<chunksize=(10,), meta=np.ndarray> var2 (dim1) float64 800kB dask.array<chunksize=(10,), meta=np.ndarray> var3 (dim1) float64 800kB dask.array<chunksize=(10,), meta=np.ndarray> var4 (dim1) float64 800kB dask.array<chunksize=(10,), meta=np.ndarray> var5 (dim1) float64 800kB dask.array<chunksize=(10,), meta=np.ndarray> var6 (dim1) float64 800kB dask.array<chunksize=(10,), meta=np.ndarray> var7 (dim1) float64 800kB dask.array<chunksize=(10,), meta=np.ndarray> var1 chunks: Frozen({'dim1': (10, 20000, 20000, 20000, 20000, 20000)}) var2 chunks: Frozen({'dim1': (10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, ... ``` The last output line is followed by many more `10`s. This takes about 10-20 seconds to run on my machine. Is there any reason for this being so slow? I would've expected the code to execute almost instantly, such that the `NaN` chunks are being added lazily, e.g. upon calling `compute()`. 
Here is my output of `xr.show_versions()`: ``` INSTALLED VERSIONS ------------------ commit: None python: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] python-bits: 64 OS: Linux OS-release: 5.15.133.1-microsoft-standard-WSL2 machine: x86_64 processor: x86_64 byteorder: little LC_ALL: None LANG: C.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.14.2 libnetcdf: 4.9.3-development xarray: 2024.9.0 pandas: 2.2.2 numpy: 1.26.4 scipy: 1.14.1 netCDF4: 1.7.1.post2 pydap: None h5netcdf: 1.3.0 h5py: 3.7.0 zarr: 2.12.0 cftime: 1.6.4 nc_time_axis: 1.4.1 iris: None bottleneck: 1.4.0 dask: 2024.9.0 distributed: 2024.9.0 matplotlib: 3.9.2 cartopy: None seaborn: 0.13.2 numbagg: 0.8.1 fsspec: 2024.9.0 cupy: None pint: None sparse: None flox: 0.9.11 numpy_groupies: 0.11.2 setuptools: 74.1.2 pip: 22.0.2 conda: None pytest: None mypy: None IPython: 8.27.0 sphinx: None ```
open
2024-09-14T19:28:22Z
2024-11-15T14:33:41Z
https://github.com/pydata/xarray/issues/9496
[ "topic-dask", "topic-combine" ]
pschlo
7
aiogram/aiogram
asyncio
672
check_ip can't parse ips when there is multiple proxies/load balancers for webhook url
when there is multiple load balancers or proxies in the way, `X-Forwarded-For` header should be as follow: `X-Forwarded-For: <client>, <proxy1>, <proxy2>` and `WebhookRequestHandler` can't parse it correctly ```python3 # For reverse proxy (nginx) forwarded_for = self.request.headers.get('X-Forwarded-For', None) if forwarded_for: return forwarded_for, _check_ip(forwarded_for) ``` example: `ipaddress.AddressValueError: Expected 4 octets in '91.108.6.70,::ffff:10.42.144.238'`
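A sketch of the fix, independent of aiogram's internals: take the left-most entry of the header, which is the originating client, before validating it:

```python
import ipaddress

def client_ip(x_forwarded_for: str) -> str:
    # De-facto format: "<client>, <proxy1>, <proxy2>" --
    # the left-most entry is the originating client.
    first = x_forwarded_for.split(",")[0].strip()
    ipaddress.ip_address(first)  # raises ValueError if not a valid address
    return first
```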
closed
2021-08-25T18:31:59Z
2021-08-25T19:28:39Z
https://github.com/aiogram/aiogram/issues/672
[]
astronuttt
0
betodealmeida/shillelagh
sqlalchemy
66
Call `atexit.register(self.close)` on the base class
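A minimal sketch of the requested pattern (the class name is illustrative; the real base class lives in shillelagh's DB API layer):

```python
import atexit

class BaseConnection:
    """Register close() at construction so resources are released
    at interpreter exit even if the user forgets to close."""

    def __init__(self):
        self.closed = False
        atexit.register(self.close)

    def close(self):
        self.closed = True

conn = BaseConnection()
atexit.unregister(conn.close)  # unregistered here only to keep the demo side-effect free
conn.close()
```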
closed
2021-07-05T18:51:26Z
2021-07-07T01:27:46Z
https://github.com/betodealmeida/shillelagh/issues/66
[ "enhancement", "good first issue" ]
betodealmeida
0
PokeAPI/pokeapi
graphql
550
missing sprites and artwork
Hi, using the PokeAPI as a Pokédex, I see some sprites and artwork are missing. For pokemon ids 896 and 897, all sprites and artwork are missing. For pokemon ids 894-895-896-897-898, and for these ones (probably not all of them have an official artwork) 10027-10028-10029-10030-10031-10032-10061-10080-10081-10082-10083-10084-10085, and from 10091 to 10219, the official artwork is missing. I can search for artwork or sprites if they are missing, but you have to guide me a little (I am a frontend dev).
closed
2021-01-02T18:14:38Z
2022-01-12T09:22:55Z
https://github.com/PokeAPI/pokeapi/issues/550
[]
aabeborn
7
autogluon/autogluon
data-science
4,470
TimeSeries forecast result has no change with different factors combination
Hi, I use time series models including RecursiveTabular, Theta, TemporalFusionTransformer, SimpleFeedForward, PatchTST, DirectTabular, DeepAR, DLinear, Chronos and AutoETS to fit and forecast. With the same target label and freq, in the first round I used about 150 factors as train_data to fit and got a forecast result. In the second round, I used about half of the factor data to do the same job, and the forecast result is exactly the same... I am not sure where the problem is, or how to fix it?
open
2024-09-16T09:53:57Z
2024-09-16T09:53:57Z
https://github.com/autogluon/autogluon/issues/4470
[ "enhancement" ]
luochixq
0
davidsandberg/facenet
computer-vision
1,148
A fully convolutional network: want to change the input size, which is not the same as the training data size, and test on a whole big image
Hello, FaceNet is a great work. Recently I redefined a fully convolutional network using FaceNet, and the training data is 32x32x1 patches. Now I want to use the network, which was trained on 32x32 image patches, to test a whole big image with a size of 512x512. Sorry, as a beginner with TF I don't know how to change the input size of the original 32x32 training network. I read the code, and the author seems to use "get_tensor_by_name" to get the input tensor, and it is fixed at 32x32, my training size. I tried to use a placeholder to create a new tensor, but I met an error. Can someone here help me? Thanks a lot.
open
2020-04-02T14:49:24Z
2020-04-02T14:49:24Z
https://github.com/davidsandberg/facenet/issues/1148
[]
liuliustar
0
roboflow/supervision
pytorch
1,359
Help with using Supervision on real time feed
### Search before asking - [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests. ### Question I am using a Lucid Machine Vision Camera and I want to use supervision for detection, speed estimation of objects from real time feed. Since some of the functions/annotators are having the arguments as the ```video_info``` which has ```fps``` and ```total_frames``` which cannot be calculated on a real time feed and also I am using a machine vision camera which are connected with ethernet cable so getting feed from the cameras is also very different compared to web cam's since ```cv2.VideoCapture()``` doesn't work. # I am attaching the code how I get the camera feed below. ``` import time import cv2 import numpy as np from arena_api import enums from arena_api.buffer import BufferFactory from arena_api.system import system window_width = 800 window_height = 600 def select_device_from_user_input(): device_infos = system.device_infos if len(device_infos) == 0: print("No camera connected\nPress enter to search again") input() print("Devices found:") selected_index = 0 for i in range(len(device_infos)): if device_infos[i]['serial'] == "camera_serial_number": selected_index = i selected_model = device_infos[selected_index]['model'] print(f"\nCreate device: {selected_model}...") device = system.create_device(device_infos=device_infos[selected_index])[0] return device def apply_gamma_correction(frame, gamma): corrected_frame = frame.astype(np.float32) / 255.0 corrected_frame = np.power(corrected_frame, gamma) corrected_frame = (corrected_frame * 255.0).astype(np.uint8) return corrected_frame def is_moving(frame1, frame2): diff = np.sqrt(np.mean(np.square(frame1 - frame2))) print(diff) return diff def get_image_buffers(is_color_camera=False): device = select_device_from_user_input() device.tl_stream_nodemap.get_node( 'StreamBufferHandlingMode').value = 'NewestOnly' 
device.tl_stream_nodemap.get_node('StreamPacketResendEnable').value = True device.tl_stream_nodemap.get_node( 'StreamAutoNegotiatePacketSize').value = True isp_bayer_pattern = device.nodemap.get_node('IspBayerPattern').value is_color_camera = False device.nodemap.get_node('Width').value = 3072 device.nodemap.get_node('Height').value = 1080 if isp_bayer_pattern != 'NONE': is_color_camera = True if is_color_camera == True: device.nodemap.get_node('PixelFormat').value = "BayerRG8" else: device.nodemap.get_node('PixelFormat').value = "Mono8" device.nodemap.get_node('DeviceStreamChannelPacketSize').value = 1500 device.nodemap.get_node('AcquisitionMode').value = "Continuous" device.nodemap.get_node('AcquisitionFrameRateEnable').value = True device.nodemap.get_node('AcquisitionFrameRate').value = 30.0 device.nodemap.get_node('AcquisitionFrameRateEnable').value = True device.nodemap.get_node('GainAuto').value = "Off" device.nodemap.get_node('Gain').value = 0.0 device.nodemap.get_node('BalanceWhiteEnable').value = True device.nodemap.get_node('BalanceWhiteAuto').value = "Continuous" device.nodemap['GammaEnable'].value = True device.nodemap.get_node('ExposureAuto').value = "Off" device.nodemap.get_node('ExposureTime').value = 4000.00 device.nodemap['Gamma'].value = 0.350 device.nodemap['ColorTransformationEnable'].value = True key = -1 cv2.namedWindow("Image-1", cv2.WINDOW_NORMAL) device.start_stream() # Initialize FPS calculation fps_start_time = time.time() fps_counter = 0 while True: image_buffer = device.get_buffer() # optional args nparray = np.ctypeslib.as_array(image_buffer.pdata, shape=(image_buffer.height, image_buffer.width, int( image_buffer.bits_per_pixel / 8))).reshape(image_buffer.height, image_buffer.width, int(image_buffer.bits_per_pixel / 8)) if is_color_camera == True: display_img = cv2.cvtColor(nparray, cv2.COLOR_BayerBG2BGR) nparray = cv2.cvtColor(display_img, cv2.COLOR_BGR2GRAY) else: display_img = cv2.cvtColor(nparray, cv2.COLOR_GRAY2BGR) decoded_img = 
display_img # decoded_img = display_img[700:2000, :] print(decoded_img.shape) # decoded_img = apply_gamma_correction(decoded_img, gamma=0.5) # Calculate and display FPS fps_counter += 1 if time.time() - fps_start_time >= 1: fps = fps_counter / (time.time() - fps_start_time) fps_start_time = time.time() fps_counter = 0 cv2.putText(decoded_img, f'FPS: {fps:.2f}', (50, 150), cv2.FONT_HERSHEY_SIMPLEX, 6, (0, 255, 0), 2) print(fps) cv2.imshow("Image-1", decoded_img) key = cv2.waitKey(1) & 0xFF if key == ord("q"): break device.requeue_buffer(image_buffer) if __name__ == "__main__": get_image_buffers(is_color_camera=True) ``` Don't worry about ```device.nodemap``` or ```device.nodemap.get_node``` these are the settings to get a clear image from the camera. 🔍 Seeking Expert Help 🔍 Dear @LinasKo, @skylargivens, @iurisilvio, @sberan, I hope you can assist with a challenging issue I've encountered. Thanks Likith ### Additional ### I've been deeply engaged in refining the following code snippet to achieve a dual goal: detecting objects and calculating speed estimation. But sadly unable to achieve what I am looking for, The core of our computations revolves around the ```decoded_image```(have a look at the code below). Your expertise in this area could provide crucial insights into overcoming the current challenges. I hope this context gives you a clear understanding of the work at hand. Your input on this matter would be greatly appreciated. 
``` import argparse from collections import defaultdict, deque from pathlib import Path import cv2 import numpy as np from shapely.geometry import Polygon, Point from ultralytics import YOLO from ultralytics.utils.plotting import Annotator, colors import time from arena_api import enums from arena_api.buffer import BufferFactory from arena_api.system import system import torch import supervision as sv SOURCE = np.array([[0, 0], [3072, 0], [3072, 1080], [0, 1080]]) TARGET_WIDTH = 0.7388 TARGET_HEIGHT = 0.2594 TARGET = np.array([ [0, 0], [TARGET_WIDTH, 0], [TARGET_WIDTH, TARGET_HEIGHT], [0, TARGET_HEIGHT] ]) # Global variables window_width = 800 window_height = 600 device = 'cuda' if torch.cuda.is_available() else 'cpu' print("Current Device:", device) is_color_camera = False gamma = 0.1 gamma_table = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in np.arange(0, 256)]).astype(np.uint8) def parse_arguments() -> argparse.Namespace: parser = argparse.ArgumentParser( description="Speed Estimation using Ultralytics and Supervision" ) parser.add_argument( "--source_video_path", required=False, default="/home/harvestedlabs/Desktop/Codes/39l.mp4", help="Path to the source video file", type=str ) return parser.parse_args() class ViewTransformer: def __init__(self, source=np.ndarray, target=np.ndarray): source = source.astype(np.float32) target = target.astype(np.float32) self.m = cv2.getPerspectiveTransform(source, target) def transform_points(self, points: np.ndarray) -> np.ndarray: if points.size == 0: print("Warning: No points to transform.") return np.array([]) reshaped_points = points.reshape(-1, 1, 2).astype(np.float32) transformed_points = cv2.perspectiveTransform(reshaped_points, self.m) return transformed_points.reshape(-1, 2) def select_device_from_user_input(): device_infos = system.device_infos if len(device_infos) == 0: print("No camera connected\nPress enter to search again") input() print("Devices found:") selected_index = 0 for i in 
range(len(device_infos)): if device_infos[i]['serial'] == "222600043": selected_index = i # 222600043 223200992 selected_model = device_infos[selected_index]['model'] print(f"\nCreate device: {selected_model}...") device = system.create_device(device_infos=device_infos[selected_index])[0] return device def get_image_buffers(is_color_camera=False): """Captures and processes image buffers from the camera.""" device = select_device_from_user_input() # Camera configuration device.tl_stream_nodemap.get_node('StreamBufferHandlingMode').value = 'NewestOnly' device.tl_stream_nodemap.get_node('StreamPacketResendEnable').value = True device.tl_stream_nodemap.get_node('StreamAutoNegotiatePacketSize').value = True isp_bayer_pattern = device.nodemap.get_node('IspBayerPattern').value is_color_camera = isp_bayer_pattern != 'NONE' device.nodemap.get_node('Width').value = 3072 device.nodemap.get_node('Height').value = 2048 if is_color_camera: device.nodemap.get_node('PixelFormat').value = "BayerRG8" else: device.nodemap.get_node('PixelFormat').value = "Mono8" # Features device.nodemap.get_node('BalanceWhiteAuto').value = "Continuous" device.nodemap.get_node('DeviceStreamChannelPacketSize').value = 1500 device.nodemap.get_node('AcquisitionMode').value = "Continuous" device.nodemap.get_node('AcquisitionFrameRateEnable').value = True device.nodemap['ColorTransformationEnable'].value = True device.nodemap['BalanceWhiteEnable'].value = True device.nodemap['GammaEnable'].value = True device.nodemap['Gamma'].value = 0.350 device.nodemap.get_node('ExposureAuto').value = "Off" device.nodemap.get_node('ExposureTime').value = 2000.00 device.nodemap.get_node('GainAuto').value = "Off" device.nodemap.get_node('Gain').value = 0.0 device.start_stream() fps_start_time = time.time() # Initialize start_time fps_counter = 0 # Initialize fps_counter return device, fps_start_time, fps_counter def process_frames(device, fps_start_time, fps_counter): """Process frames from the device and calculate FPS.""" 
model = YOLO("/home/harvestedlabs/Desktop/Codes/Likith/token.pt") print("YOLO model loaded.") byte_track = sv.ByteTrack(frame_rate=0) # Placeholder, will be updated later print("ByteTrack initialized.") # Obtain the resolution of the camera feed width = device.nodemap.get_node('Width').value height = device.nodemap.get_node('Height').value resolution_wh = (width, height) thickness = sv.calculate_optimal_line_thickness(resolution_wh=resolution_wh) text_scale = sv.calculate_optimal_text_scale(resolution_wh=resolution_wh) bounding_box_annotator = sv.BoundingBoxAnnotator(thickness=thickness, color_lookup=sv.ColorLookup.TRACK) label_annotator = sv.LabelAnnotator(text_scale=text_scale, text_thickness=thickness, text_position=sv.Position.BOTTOM_CENTER, color_lookup=sv.ColorLookup.TRACK) trace_annotator = sv.TraceAnnotator(thickness=thickness, trace_length=0, position=sv.Position.BOTTOM_CENTER, color_lookup=sv.ColorLookup.TRACK) # Placeholder, will be updated later polygon_zone = sv.PolygonZone(SOURCE) zone_annotator = sv.PolygonZoneAnnotator(zone=polygon_zone, color=sv.Color.WHITE, thickness=6, text_thickness=6, text_scale=4) view_transformer = ViewTransformer(SOURCE, TARGET) coordinates = defaultdict(lambda: deque(maxlen=0)) # Placeholder, will be updated later # Define video_info video_info = sv.VideoInfo(width=width, height=height, fps=30, total_frames=None) # Change fps if necessary with sv.VideoSink(target_path='target.mp4', video_info=video_info) as sink: while True: image_buffer = device.get_buffer() nparray = np.ctypeslib.as_array(image_buffer.pdata, shape=(image_buffer.height, image_buffer.width, int( image_buffer.bits_per_pixel / 8))).reshape(image_buffer.height, image_buffer.width, int(image_buffer.bits_per_pixel / 8)) if is_color_camera: display_img = cv2.cvtColor(nparray, cv2.COLOR_BayerBG2BGR) nparray = cv2.cvtColor(display_img, cv2.COLOR_BGR2GRAY) else: display_img = cv2.cvtColor(nparray, cv2.COLOR_GRAY2BGR) decoded_img = display_img image = 
cv2.resize(decoded_img, (960, 540)) # Calculate and display FPS fps_counter += 1 if time.time() - fps_start_time >= 1: fps = fps_counter / (time.time() - fps_start_time) fps_start_time = time.time() fps_counter = 0 cv2.putText(decoded_img, f'FPS: {fps:.2f}', (50, 150), cv2.FONT_HERSHEY_SIMPLEX, 6, (0, 255, 0), 2) print(fps) byte_track.frame_rate = fps trace_annotator.trace_length = fps * 2 coordinates.default_factory = lambda: deque(maxlen=fps) try: result = model(image) print("Frame processed by model.") if not result: print("No result for the frame, skipping.") continue detections = sv.Detections.from_ultralytics(result[0]) detections = detections[polygon_zone.trigger(detections)] detections = byte_track.update_with_detections(detections=detections) points = detections.get_anchors_coordinates(anchor=sv.Position.BOTTOM_CENTER) if points.size > 0: points = view_transformer.transform_points(points=points) else: print("No points detected in the frame.") labels = [] for tracker_id, [_, point] in zip(detections.tracker_id, points): coordinates[tracker_id].append(point) points = np.array(coordinates[tracker_id], np.int32) if points.size > 0: speeds = sv.calculate_speed( points=points, fps=byte_track.frame_rate, scaler=3600 / 1000) label = f'{speeds[-1]:.2f} km/h' labels.append(label) zone_annotator.annotate(frame=image) bounding_box_annotator.annotate(frame=image, detections=detections) label_annotator.annotate(frame=image, detections=detections, labels=labels) trace_annotator.annotate(frame=image, detections=detections, tracker_coordinates=coordinates) except Exception as e: print(f"Error processing frame: {e}") sink.write_frame(image) cv2.imshow("Processed Frame", image) if cv2.waitKey(1) & 0xFF == ord('q'): break device.stop_stream() system.destroy_device(device) def main() -> None: args = parse_arguments() device, fps_start_time, fps_counter = get_image_buffers() process_frames(device, fps_start_time, fps_counter) if __name__ == "__main__": main() ```
closed
2024-07-15T04:48:35Z
2024-07-15T09:43:58Z
https://github.com/roboflow/supervision/issues/1359
[ "question" ]
likith1908
0
httpie/cli
python
596
Error at Query String Parameters
I got these bugs when I tried this. ![cap12](https://user-images.githubusercontent.com/11556048/28555286-87cfc6de-711c-11e7-8f1b-c5d52ad9f6fc.png) ![cap14](https://user-images.githubusercontent.com/11556048/28555296-9a1af408-711c-11e7-8d8b-70f46f7ba9f3.png)
closed
2017-07-25T04:02:33Z
2017-08-02T21:38:35Z
https://github.com/httpie/cli/issues/596
[]
sriyanfernando
3
Evil0ctal/Douyin_TikTok_Download_API
web-scraping
91
The new TikTok endpoint has stopped working (2022-10-20)
The endpoint https://api-h2.tiktokv.com/aweme/v1/feed/?version_code=2613&aweme_id= no longer works, boss.
closed
2022-10-20T06:43:16Z
2022-10-20T06:50:35Z
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/91
[ "wontfix" ]
5wcx
0
gee-community/geemap
jupyter
1,790
geemap.download_ee_image
```
---------------------------------------------------------------------------
HttpError                                 Traceback (most recent call last)
File d:\ProgramData\Miniconda3\envs\geeclone\lib\site-packages\ee\data.py:354, in _execute_cloud_call(call, num_retries)
    353 try:
--> 354     return call.execute(num_retries=num_retries)
    355 except googleapiclient.errors.HttpError as e:

File d:\ProgramData\Miniconda3\envs\geeclone\lib\site-packages\googleapiclient\_helpers.py:130, in positional.<locals>.positional_decorator.<locals>.positional_wrapper(*args, **kwargs)
    129     logger.warning(message)
--> 130 return wrapped(*args, **kwargs)

File d:\ProgramData\Miniconda3\envs\geeclone\lib\site-packages\googleapiclient\http.py:938, in HttpRequest.execute(self, http, num_retries)
    937 if resp.status >= 300:
--> 938     raise HttpError(resp, content, uri=self.uri)
    939 return self.postproc(resp, content)

HttpError: <HttpError 400 when requesting https://earthengine.googleapis.com/v1/projects/earthengine-legacy/value:compute?prettyPrint=false&alt=json returned "Computation timed out.". Details: "Computation timed out.">

During handling of the above exception, another exception occurred:

EEException                               Traceback (most recent call last)
Cell In[4], line 1
----> 1 geemap.download_ee_image(
      2     image=FVC,
      3     filename=pathFVC,
      4     region=table.geometry(),
      5     crs_transform=crs_transform,
      6     crs=crs,
      7     scale=10,
      8 )
     10 geemap.download_ee_image(
     11     image=FR,
     12     filename=pathFR,
   (...)
     16     scale=10,
     17 )
     20 geemap.download_ee_image(
     21     image=slope,
     22     filename=pathslope,
   (...)
     26     scale=10,
     27 )

File d:\ProgramData\Miniconda3\envs\geeclone\lib\site-packages\geemap\common.py:12440, in download_ee_image(image, filename, region, crs, crs_transform, scale, resampling, dtype, overwrite, num_threads, max_tile_size, max_tile_dim, shape, scale_offset, unmask_value, **kwargs)
  12437     kwargs["scale_offset"] = scale_offset
  12439 img = gd.download.BaseImage(image)
> 12440 img.download(filename, overwrite=overwrite, num_threads=num_threads, **kwargs)

File d:\ProgramData\Miniconda3\envs\geeclone\lib\site-packages\geedim\download.py:841, in BaseImage.download(self, filename, overwrite, num_threads, max_tile_size, max_tile_dim, **kwargs)
    838     raise FileExistsError(f'(unknown) exists')
    840 # prepare (resample, convert, reproject) the image for download
--> 841 exp_image, profile = self._prepare_for_download(**kwargs)
    843 # get the dimensions of an image tile that will satisfy GEE download limits
    844 tile_shape, num_tiles = self._get_tile_shape(exp_image, max_tile_size=max_tile_size, max_tile_dim=max_tile_dim)

File d:\ProgramData\Miniconda3\envs\geeclone\lib\site-packages\geedim\download.py:526, in BaseImage._prepare_for_download(self, set_nodata, **kwargs)
    519 """
    520 Prepare the encapsulated image for tiled GeoTIFF download. Will reproject, resample, clip and convert the image
    521 according to the provided parameters.
    522
    523 Returns the prepared image and a rasterio profile for the downloaded GeoTIFF.
    524 """
    525 # resample, convert, clip and reproject image according to download params
--> 526 exp_image = self._prepare_for_export(**kwargs)
    527 # see float nodata workaround note in Tile.download(...)
    528 nodata_dict = dict(
    529     float32=self._float_nodata,
    530     float64=self._float_nodata,
   (...)
    536     int32=np.iinfo('int32').min,
    537 )  # yapf: disable

File d:\ProgramData\Miniconda3\envs\geeclone\lib\site-packages\geedim\download.py:440, in BaseImage._prepare_for_export(self, crs, crs_transform, shape, region, scale, resampling, dtype, scale_offset)
    432 if not self.has_fixed_projection:
    433     # if the image has no fixed projection, either crs, region, & scale; or crs, crs_transform and shape
    434     # must be specified
    435     raise ValueError(
    436         f'This image does not have a fixed projection, you need to specify a crs, region & scale; or a '
    437         f'crs, crs_transform & shape.'
    438     )
--> 440 if (not self.bounded) and (not region and (not crs or not crs_transform or not shape)):
    441     # if the image has no footprint (i.e. it is 'unbounded'), either region; or crs, crs_transform and shape
    442     # must be specified
    443     raise ValueError(
    444         f'This image is unbounded, you need to specify a region; or a crs, crs_transform and '
    445         f'shape.'
    446     )
    448 if self.crs == 'EPSG:4326' and not scale and not shape:
    449     # If the image is in EPSG:4326, either scale (in meters); or shape must be specified.
    450     # Note that ee.Image.prepare_for_export() expects a scale in meters, but if the image is EPSG:4326,
    451     # the default scale is in degrees.

File d:\ProgramData\Miniconda3\envs\geeclone\lib\site-packages\geedim\download.py:237, in BaseImage.bounded(self)
    235 # TODO: an unbounded region could also have these bounds
    236 unbounded_bounds = (-180, -90, 180, 90)
--> 237 return (self.footprint is not None) and (features.bounds(self.footprint) != unbounded_bounds)

File d:\ProgramData\Miniconda3\envs\geeclone\lib\site-packages\geedim\download.py:168, in BaseImage.footprint(self)
    165 @property
    166 def footprint(self) -> Optional[Dict]:
    167     """ Geojson polygon of the image extent. None if the image is a composite. """
--> 168     if ('properties' not in self._ee_info) or ('system:footprint' not in self._ee_info['properties']):
    169         return None
    170     return self._ee_info['properties']['system:footprint']

File d:\ProgramData\Miniconda3\envs\geeclone\lib\site-packages\geedim\download.py:105, in BaseImage._ee_info(self)
    103 """ Earth Engine image metadata. """
    104 if self.__ee_info is None:
--> 105     self.__ee_info = self._ee_image.getInfo()
    106 return self.__ee_info

File d:\ProgramData\Miniconda3\envs\geeclone\lib\site-packages\ee\image.py:116, in Image.getInfo(self)
    108 def getInfo(self) -> Optional[Any]:
    109     """Fetch and return information about this image.
    110
    111     Returns:
   (...)
    114       properties - Dictionary containing the image's metadata properties.
    115     """
--> 116     return super().getInfo()

File d:\ProgramData\Miniconda3\envs\geeclone\lib\site-packages\ee\computedobject.py:105, in ComputedObject.getInfo(self)
     99 def getInfo(self) -> Optional[Any]:
    100     """Fetch and return information about this object.
    101
    102     Returns:
    103       The object can evaluate to anything.
    104     """
--> 105     return data.computeValue(self)

File d:\ProgramData\Miniconda3\envs\geeclone\lib\site-packages\ee\data.py:1021, in computeValue(obj)
   1018 body = {'expression': serializer.encode(obj, for_cloud_api=True)}
   1019 _maybe_populate_workload_tag(body)
-> 1021 return _execute_cloud_call(
   1022     _get_cloud_projects()
   1023     .value()
   1024     .compute(body=body, project=_get_projects_path(), prettyPrint=False)
   1025 )['result']

File d:\ProgramData\Miniconda3\envs\geeclone\lib\site-packages\ee\data.py:356, in _execute_cloud_call(call, num_retries)
    354     return call.execute(num_retries=num_retries)
    355 except googleapiclient.errors.HttpError as e:
--> 356     raise _translate_cloud_exception(e)

EEException: Computation timed out.
```

I first downloaded an image using geemap.download_ee_image, and no error was reported. But after I refreshed and used geemap.download_ee_image again to download the image, an `EEException: Computation timed out` occurred.
closed
2023-10-22T07:52:03Z
2023-10-23T02:20:13Z
https://github.com/gee-community/geemap/issues/1790
[ "bug" ]
qianmoeast
3
matplotlib/matplotlib
data-science
29,337
KeyError: 'buttons' when plotting with matplotlib / ipympl backend in Jupyter Notebook within VSCode presumably due to typo
Using a current install of Python and notebook, matplotlib, ipympl etc. on Windows 11 within VSCode, ``` %matplotlib ipympl #%matplotlib widget import matplotlib.pyplot as plt ``` and ``` fig, ax = plt.subplots() ... ax.plot(...) ``` raise `KeyError: 'buttons'` in `c:\Users\user\AppData\Local\Programs\Python\Python313\Lib\site-packages\matplotlib\backends\backend_webagg_core.py:295` respectively https://github.com/matplotlib/matplotlib/blob/f8900ead0d9381a7652568768b065324f929734e/lib/matplotlib/backends/backend_webagg_core.py#L295 Changing the key to `button` resolves the error (for my use case).
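The crash happens because the handler indexes the event dict with a key (`'buttons'`) that the frontend's pointer events do not carry. A defensive pattern, sketched here in plain Python rather than matplotlib's actual handler, is to read optional event fields with `dict.get`:

```python
def handle_mouse_event(event: dict) -> int:
    """Return the pressed-button value, defaulting to 0 when absent.

    Reading event.get("button") instead of event["buttons"] avoids the
    KeyError when an event arrives without that field.
    """
    return event.get("button", 0)
```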
closed
2024-12-17T16:49:00Z
2024-12-18T06:51:44Z
https://github.com/matplotlib/matplotlib/issues/29337
[ "status: downstream fix required" ]
gnbl
6
smarie/python-pytest-cases
pytest
152
Use @pytest.mark.usefixtures decorator for case function
Can I somehow use the decorator @pytest.mark.usefixtures for a case function? I prefer it when the fixture sets some system state but returns nothing.
closed
2020-12-02T10:01:45Z
2020-12-02T20:53:15Z
https://github.com/smarie/python-pytest-cases/issues/152
[]
arut-grigoryan
4
mljar/mljar-supervised
scikit-learn
252
add traceback to error reports
closed
2020-11-27T11:45:21Z
2020-11-27T11:47:29Z
https://github.com/mljar/mljar-supervised/issues/252
[ "enhancement" ]
pplonski
0
ccxt/ccxt
api
25,260
bitmex watchPosition and fetchPositions returns incorrect symbol ('XBTF25') for closed positions
### Operating System

windows 11

### Programming Languages

JavaScript

### CCXT Version

4.4.58

### Description

When using bitmex's watchPositions or fetchPositions method, the symbol is incorrectly parsed as 'XBTF25' for closed positions, like this:

```json
{
    "info": {
        "_comment1": "fake account id",
        "account": 2222222,
        "symbol": "ETHUSD",
        "currency": "XBt",
        "underlying": "ETH",
        "quoteCurrency": "USD",
        "homeNotional": 0,
        "currentQty": 0,
        "_comment2": ". . . (ellipsis)"
    },
    "_comment1": "fake account id",
    "id": "0000000",
    "_comment1": "exchange symbol is ETHUSD but public symbol is XBTF25",
    "symbol": "XBTF25",
    "contracts": 0,
    "contractSize": 100,
    "notional": 0,
    "_comment2": ". . . (ellipsis)"
}
```

```json
{
    "info": {
        "_comment1": "fake account id",
        "account": 1111111,
        "symbol": "DOTUSDT",
        "currency": "USDt",
        "currentQty": 0,
        "_comment2": ". . . (ellipsis)"
    },
    "_comment1": "fake account id",
    "id": "1111111",
    "_comment1": "exchange symbol is DOTUSDT but public symbol is XBTF25",
    "symbol": "XBTF25",
    "contracts": 0,
    "contractSize": 0.001,
    "_comment2": ". . . (ellipsis)"
}
```

Data actually received by the websocket, as checked in the Chrome Developer Tools Network tab:

```json
{
    "table": "position",
    "action": "update",
    "data": [
        {
            "account": 238210,
            "symbol": "ETHUSD",
            "currency": "XBt",
            "currentQty": 0,
            "markPrice": 2589.75,
            "liquidationPrice": null,
            "timestamp": "2025-02-12T04:26:55.348Z"
        }
    ]
}
```

Additionally, a small issue unrelated to the main one: there is a typo in line 28 (watchPostions => watchPositions) https://github.com/ccxt/ccxt/blob/8e8bc018dc712a01964f3106321767bccca484a4/ts/src/pro/bitmex.ts#L16-L32

### Code

```js
// ...
while (shouldContinue) {
    try {
        const positions: CcxtPosition[] = await exchange.watchPositions(
            symbols,
            since,
            limit,
            params
        )
        const xbtf25Position = positions.find((p) => p.symbol === 'XBTF25')
        if (xbtf25Position) {
            console.log('xbtf25Position watched', xbtf25Position)
        }
    }
    // ...
}
// ...
```
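Until the parser is fixed, a hedged client-side workaround is to fall back to the raw exchange symbol in `info` whenever the pre-parsed unified symbol may be stale. This sketch is plain Python (the report's example is JavaScript), and `raw_to_unified` is a made-up mapping for illustration, not a ccxt API:

```python
def reconcile_symbol(position: dict, raw_to_unified: dict) -> str:
    """Prefer the unified symbol derived from the raw exchange symbol.

    For closed positions the pre-parsed 'symbol' field may be wrong
    (e.g. 'XBTF25'), while info['symbol'] still holds the real market.
    """
    raw = position.get("info", {}).get("symbol")
    return raw_to_unified.get(raw, position.get("symbol"))
```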
open
2025-02-12T04:39:42Z
2025-02-12T05:02:46Z
https://github.com/ccxt/ccxt/issues/25260
[]
dnjsgur0629
1
mckinsey/vizro
plotly
709
Contribute `Gantt` to Vizro visual vocabulary
## Thank you for contributing to our visual-vocabulary! 🎨 Our visual-vocabulary is a dashboard, that serves a a comprehensive guide for selecting and creating various types of charts. It helps you decide when to use each chart type, and offers sample Python code using [Plotly](https://plotly.com/python/), and instructions for embedding these charts into a [Vizro](https://github.com/mckinsey/vizro) dashboard. Take a look at the dashboard here: https://huggingface.co/spaces/vizro/demo-visual-vocabulary The source code for the dashboard is here: https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary ## Instructions 0. Get familiar with the dev set-up (this should be done already as part of the initial intro sessions) 1. Read through the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary) of the visual vocabulary 2. Follow the steps to contribute a chart. Take a look at other examples. This [commit](https://github.com/mckinsey/vizro/pull/634/commits/417efffded2285e6cfcafac5d780834e0bdcc625) might be helpful as a reference to see which changes are required to add a chart. 3. Ensure the app is running without any issues via `hatch run example visual-vocabulary` 4. List out the resources you've used in the [README](https://github.com/mckinsey/vizro/tree/main/vizro-core/examples/visual-vocabulary) 5. Raise a PR **Useful resources:** - Gantt: https://plotly.com/python/gantt/ - Data chart mastery: https://www.atlassian.com/data/charts/how-to-choose-data-visualization
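Plotly's Gantt support (`px.timeline`, linked in the resources above) expects one row per task with start and finish timestamps. A hedged sketch of that data shape using only the standard library; the task names and dates are invented, and the actual chart call is left as a comment since it requires plotly to be installed:

```python
from datetime import date

# One record per Gantt bar: a task name plus start/finish dates.
tasks = [
    {"Task": "Research", "Start": date(2024, 1, 1), "Finish": date(2024, 1, 10)},
    {"Task": "Build",    "Start": date(2024, 1, 8), "Finish": date(2024, 2, 1)},
]

# Derived bar lengths in days, useful for sanity-checking the schedule.
durations = {t["Task"]: (t["Finish"] - t["Start"]).days for t in tasks}

# With plotly installed, the same records feed the Gantt chart directly:
# import plotly.express as px
# fig = px.timeline(tasks, x_start="Start", x_end="Finish", y="Task")
```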
closed
2024-09-17T12:31:04Z
2024-10-14T07:50:55Z
https://github.com/mckinsey/vizro/issues/709
[ "Good first issue :baby_chick:", "GHC: chart/dashboard track" ]
huong-li-nguyen
3
jonaswinkler/paperless-ng
django
1,521
[BUG] Database is locked message during import
**Describe the bug**
When I import PDF files, it sometimes gives me a "database is locked" error (the document is not imported). I think this happens when I import multiple files at the same time.

**To Reproduce**
Import several files at the same time, even though I don't think this happens for everyone.

**Expected behavior**
Importing the files without errors :P

**Idea**
I'm not sure, but my db file is located on a low-speed hard drive. The container's mount points are: /data/documents:/data and /data/config/paperlessng:/config (/data is the mount point of the slow HDD on my system). I changed the mount to a docker volume located on an SSD, and there seems to be no more issue. I'm posting this to report that it may be a problem with slow hard drives, in case it can be solved or is informative. From what I looked at, the db is not in /data, meaning it's inside the

**Webserver logs**
```
[2022-01-03 19:27:49,249] [ERROR] [paperless.consumer] The following error occured while consuming facture_eau_partie_1_sur_2_02_12_2021.pdf: database is locked
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  File "/usr/local/lib/python3.8/dist-packages/django/db/backends/sqlite3/base.py", line 423, in execute
    return Database.Cursor.execute(self, query, params)
sqlite3.OperationalError: database is locked

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/paperless/src/documents/consumer.py", line 287, in try_consume_file
    document = self._store(
  File "/app/paperless/src/documents/consumer.py", line 382, in _store
    document = Document.objects.create(
  File "/usr/local/lib/python3.8/dist-packages/django/db/models/manager.py", line 85, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/django/db/models/query.py", line 453, in create
    obj.save(force_insert=True, using=self.db)
  File "/usr/local/lib/python3.8/dist-packages/django/db/models/base.py", line 726, in save
    self.save_base(using=using, force_insert=force_insert,
  File "/usr/local/lib/python3.8/dist-packages/django/db/models/base.py", line 763, in save_base
    updated = self._save_table(
  File "/usr/local/lib/python3.8/dist-packages/django/db/models/base.py", line 868, in _save_table
    results = self._do_insert(cls._base_manager, using, fields, returning_fields, raw)
  File "/usr/local/lib/python3.8/dist-packages/django/db/models/base.py", line 906, in _do_insert
    return manager._insert(
  File "/usr/local/lib/python3.8/dist-packages/django/db/models/manager.py", line 85, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/django/db/models/query.py", line 1270, in _insert
    return query.get_compiler(using=using).execute_sql(returning_fields)
  File "/usr/local/lib/python3.8/dist-packages/django/db/models/sql/compiler.py", line 1416, in execute_sql
    cursor.execute(sql, params)
  File "/usr/local/lib/python3.8/dist-packages/django/db/backends/utils.py", line 66, in execute
    return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
  File "/usr/local/lib/python3.8/dist-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
    return executor(sql, params, many, context)
  File "/usr/local/lib/python3.8/dist-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  File "/usr/local/lib/python3.8/dist-packages/django/db/utils.py", line 90, in __exit__
    raise dj_exc_value.with_traceback(traceback) from exc_value
  File "/usr/local/lib/python3.8/dist-packages/django/db/backends/utils.py", line 84, in _execute
    return self.cursor.execute(sql, params)
  File "/usr/local/lib/python3.8/dist-packages/django/db/backends/sqlite3/base.py", line 423, in execute
    return Database.Cursor.execute(self, query, params)
django.db.utils.OperationalError: database is locked
```

**Relevant information**
- archlinux (running on docker using the linuxserver.io image)
- chrome
- Installation method: docker
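SQLite raises "database is locked" when a second connection tries to write while another still holds the write lock and the waiter's busy timeout runs out, which is easy to hit on slow storage. A minimal stdlib reproduction, with no Django involved (the temp-file database is an artifact of this sketch):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")

# First connection takes and holds the write lock.
writer = sqlite3.connect(path, isolation_level=None)  # autocommit mode
writer.execute("CREATE TABLE docs (id INTEGER)")
writer.execute("BEGIN IMMEDIATE")  # acquire the write lock now
writer.execute("INSERT INTO docs VALUES (1)")

# Second connection fails fast instead of waiting for the lock.
other = sqlite3.connect(path, timeout=0)
try:
    other.execute("INSERT INTO docs VALUES (2)")
    locked = False
except sqlite3.OperationalError as exc:
    locked = "locked" in str(exc)

writer.execute("COMMIT")
```

This points at the same fix space the reporter found: faster storage for the SQLite file, a larger busy timeout, or moving off SQLite entirely.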
open
2022-01-03T19:35:24Z
2022-06-27T12:18:51Z
https://github.com/jonaswinkler/paperless-ng/issues/1521
[]
eephyne
3
chiphuyen/stanford-tensorflow-tutorials
nlp
102
training taking lot of time for me :/
open
2018-03-17T10:34:48Z
2018-03-19T15:11:54Z
https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/102
[]
saxindo
2
dropbox/PyHive
sqlalchemy
178
Not compatible with impyla in Python 3 due to TCLIService modified
The TCLIService module is replaced by impyla with an incompatible one; I also reported the issue here: https://github.com/cloudera/impyla/issues/277 I suggest vendoring the TCLIService module to avoid it being modified by other packages, so we import it via `hive.TCLIService`. This would also avoid potential conflicts with other packages that depend on the TCLIService module.
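The clash exists because both distributions ship a top-level `TCLIService` module, and `sys.modules` keys on that bare name: whichever is imported first wins. A stdlib sketch of why vendoring under the package (a `pyhive.TCLIService`-style name) avoids the collision; the module objects here are simulated, not the real packages:

```python
import sys
import types

# Simulate two distributions that both ship a top-level `TCLIService`.
first = types.ModuleType("TCLIService")
first.origin = "impyla"
sys.modules["TCLIService"] = first

# A later import keeps resolving to the cached top-level entry.
import TCLIService
resolved = TCLIService.origin

# Vendoring under the package gives a distinct sys.modules key, so no clash.
vendored = types.ModuleType("pyhive.TCLIService")
vendored.origin = "pyhive (vendored)"
sys.modules["pyhive.TCLIService"] = vendored
```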
open
2017-11-13T07:24:37Z
2017-11-13T07:24:37Z
https://github.com/dropbox/PyHive/issues/178
[]
guyskk
0
Neoteroi/BlackSheep
asyncio
521
Scalar integration
##### _Note: consider using [Discussions](https://github.com/Neoteroi/BlackSheep/discussions) to open a conversation about new features…_ **🚀 Feature Request** Do you plan on integrating with other API documentation platforms, like Scalar for example? It's more beautiful than Swagger.
open
2025-01-21T00:26:52Z
2025-01-21T00:26:52Z
https://github.com/Neoteroi/BlackSheep/issues/521
[]
arthurbrenno
0