repo_name
stringlengths
9
75
topic
stringclasses
30 values
issue_number
int64
1
203k
title
stringlengths
1
976
body
stringlengths
0
254k
state
stringclasses
2 values
created_at
stringlengths
20
20
updated_at
stringlengths
20
20
url
stringlengths
38
105
labels
listlengths
0
9
user_login
stringlengths
1
39
comments_count
int64
0
452
jupyter/nbgrader
jupyter
1,417
nbgrader feedback fail
When I click the Generate Feedback button, it shows the following message: ``` [FeedbackApp | WARNING] Config option `template_path` not recognized by `HTMLExporter`. Did you mean one of: `extra_template_paths, template_name, template_paths`? [FeedbackApp | INFO] Copying /home/grader/workshop2_2021/autograded/1002/sample/timestamp.txt -> /home/grader/workshop2_2021/feedback/1002/sample/timestamp.txt [FeedbackApp | INFO] Converting notebook /home/grader/workshop2_2021/autograded/1002/sample/Sample01.ipynb <jinja2.environment.Environment object at 0x7f5aeae95340> feedback.tpl [FeedbackApp | ERROR] There was an error processing assignment: /home/grader/workshop2_2021/autograded/1002/sample [FeedbackApp | ERROR] Traceback (most recent call last): File "/opt/anaconda3/lib/python3.8/site-packages/nbgrader/converters/base.py", line 336, in convert_notebooks self.convert_single_notebook(notebook_filename) File "/opt/anaconda3/lib/python3.8/site-packages/nbgrader/converters/base.py", line 292, in convert_single_notebook output, resources = self.exporter.from_filename(notebook_filename, resources=resources) File "/opt/anaconda3/lib/python3.8/site-packages/nbconvert/exporters/exporter.py", line 181, in from_filename return self.from_file(f, resources=resources, **kw) File "/opt/anaconda3/lib/python3.8/site-packages/nbconvert/exporters/exporter.py", line 199, in from_file return self.from_notebook_node(nbformat.read(file_stream, as_version=4), resources=resources, **kw) File "/opt/anaconda3/lib/python3.8/site-packages/nbconvert/exporters/html.py", line 119, in from_notebook_node return super().from_notebook_node(nb, resources, **kw) File 
"/opt/anaconda3/lib/python3.8/site-packages/nbconvert/exporters/templateexporter.py", line 384, in from_notebook_node output = self.template.render(nb=nb_copy, resources=resources) File "/opt/anaconda3/lib/python3.8/site-packages/nbconvert/exporters/templateexporter.py", line 148, in template self._template_cached = self._load_template() File "/opt/anaconda3/lib/python3.8/site-packages/nbconvert/exporters/templateexporter.py", line 355, in _load_template return self.environment.get_template(template_file) File "/opt/anaconda3/lib/python3.8/site-packages/jinja2/environment.py", line 883, in get_template return self._load_template(name, self.make_globals(globals)) File "/opt/anaconda3/lib/python3.8/site-packages/jinja2/environment.py", line 857, in _load_template template = self.loader.load(self, name, globals) File "/opt/anaconda3/lib/python3.8/site-packages/jinja2/loaders.py", line 430, in load raise TemplateNotFound(name) jinja2.exceptions.TemplateNotFound: feedback.tpl [FeedbackApp | WARNING] Removing failed assignment: /home/grader/workshop2_2021/feedback/1002/sample [FeedbackApp | ERROR] There was an error processing assignment 'sample' for student '1002' [FeedbackApp | ERROR] Please see the the above traceback for details on the specific errors on the above failures. ``` ### Operating system ubuntu 18.04 ### `nbgrader --version` nbgrader version 0.6.1 ### `jupyterhub --version` (if used with JupyterHub) 1.0.0 ### `jupyter notebook --version` 6.1.4 ### Expected behavior generate feedback ### Actual behavior CANNOT generate feedback; the following warning is given: ``` [WARNING] Config option `template_path` not recognized by `HTMLExporter`. Did you mean one of: `extra_template_paths, template_name, template_paths`? ``` Not sure if this is the cause of the feedback.tpl not-found error: ``` File "/opt/anaconda3/lib/python3.8/site-packages/jinja2/loaders.py", line 430, in load raise TemplateNotFound(name) jinja2.exceptions.TemplateNotFound: feedback.tpl ```
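The warning in the log points at nbconvert 6's config rename: the singular `template_path` option became the plural `template_paths` list. A minimal sketch of migrating an old-style config dict — the option names come from the warning itself, but the `migrate` helper and the example paths are hypothetical:

```python
def migrate(config):
    # nbconvert 6 renamed `template_path` (a list of template search
    # paths) to `template_paths`; move the value under the new key
    exporter = dict(config.get("Exporter", {}))
    if "template_path" in exporter:
        exporter["template_paths"] = exporter.pop("template_path")
    return {**config, "Exporter": exporter}

old = {"Exporter": {"template_path": ["/srv/templates", "."]}}
new = migrate(old)
```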
closed
2021-03-05T16:39:40Z
2021-03-25T23:34:29Z
https://github.com/jupyter/nbgrader/issues/1417
[ "duplicate" ]
2010hexi
1
Nemo2011/bilibili-api
api
613
[Bug] Error when getting the best streams for a video with Dolby audio
**Python version:** 3 **Module version:** main/dev **Environment:** MacOS **Module path:** `bilibili_api.video.detect_best_streams()` **Interpreter:** cpython **Error message:** ```python Traceback (most recent call last): File "bilibili-api/server.py", line 181, in <module> sync(test()) File "bilibili-api/bilibili_api/utils/sync.py", line 33, in sync return loop.run_until_complete(coroutine) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "bilibili-api/server.py", line 178, in test streams = detecter.detect_best_streams() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "bilibili-api/bilibili_api/video.py", line 2421, in detect_best_streams data = self.detect( ^^^^^^^^^^^^ File "bilibili-api/bilibili_api/video.py", line 2348, in detect dolby_stream_url = dolby_data["audio"]["baseUrl"] ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^ TypeError: list indices must be integers or slices, not str ``` **Offending code:** ```python if dolby_data and (not no_dolby_audio): if dolby_data["audio"]: dolby_stream_url = dolby_data["audio"]["baseUrl"] dolby_stream_quality = AudioQuality(dolby_data["audio"]["id"]) dolby_stream = AudioStreamDownloadURL( url=dolby_stream_url, audio_quality=dolby_stream_quality ) streams.append(dolby_stream) return streams ``` --- When using this bvid: BV1gw411W7aU, the Dolby audio field is an array <img width="1275" alt="image" src="https://github.com/Nemo2011/bilibili-api/assets/19368870/384c071c-e517-4c48-afc6-1aa7fa40a5e5">
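The traceback shows `dolby_data["audio"]` being indexed with `"baseUrl"` while, for this video, it is actually a list. A hedched sketch of normalizing both shapes — `first_dolby_stream` is a hypothetical helper; the `baseUrl` and `id` keys come from the snippet above:

```python
def first_dolby_stream(dolby_data):
    # `audio` may be a single stream dict or, as reported for
    # BV1gw411W7aU, a list of stream dicts; normalize to one dict
    audio = dolby_data.get("audio")
    if not audio:
        return None
    if isinstance(audio, list):
        audio = audio[0]
    return {"url": audio["baseUrl"], "quality": audio["id"]}

as_dict = first_dolby_stream({"audio": {"baseUrl": "u1", "id": 30250}})
as_list = first_dolby_stream({"audio": [{"baseUrl": "u2", "id": 30251}]})
```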
closed
2023-12-28T01:23:12Z
2023-12-31T00:40:08Z
https://github.com/Nemo2011/bilibili-api/issues/613
[ "bug", "solved" ]
nooblong
3
ivy-llc/ivy
pytorch
28,741
[Bug]: Failed to sign up
### Bug Explanation I followed the instructions (https://unify.ai/docs/ivy/overview/get_started.html) to sign up and added the API key to .ivy/key.pem. However, ivy still asks me to sign up. ### Steps to Reproduce Bug The test code (located at `~/test_ivy.py`) is as simple as follows: ```python import ivy ivy.trace_graph() ivy.transpile() ivy.unify() ``` The API key is at `~/.ivy/key.pem` (ends with '='). When I run `python test_ivy.py`, an error is raised: ```bash Traceback (most recent call last): File "/home/zdk/test_ivy.py", line 4, in <module> ivy.trace_graph() File "/home/zdk/miniconda3/envs/new_env/lib/python3.10/site-packages/ivy/compiler/compiler.py", line 97, in trace_graph from ._compiler import trace_graph as _trace_graph File "_compiler.pyx", line 74, in init combined_source._compiler File "_compiler.pyx", line 26, in combined_source._compiler File "VX.pyx", line 21, in init VX File "MC.pyx", line 141, in MC.connect_to_unify Exception: Please sign up for free pilot access here [https://console.unify.ai/], in order to make use of ivy.trace, ivy.transpile and ivy.unify. ``` I also tried `export IVY_ROOT=~/.ivy`; the error is the same. ### Environment ubuntu 18.04, python 3.10, ivy 0.0.9.0. ### Ivy Version 0.0.9.0 ### Backend - [x] NumPy - [x] TensorFlow - [x] PyTorch - [x] JAX ### Device _No response_
closed
2024-04-21T03:56:48Z
2024-06-17T08:25:01Z
https://github.com/ivy-llc/ivy/issues/28741
[ "Bug Report" ]
zhangdongkun98
2
fugue-project/fugue
pandas
95
[FEATURE] Fillna as builtin
**Is your feature request related to a problem? Please describe.** We should include a fillna method in the execution engine. **Describe the solution you'd like** It just needs to be simple like the Spark implementation. No need for forward fill and backfill. **Additional context** It will have a very similar implementation to the existing dropna method in the execution engine.
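Fugue's eventual `fillna` signature is not specified in the request, so the sketch below models the Spark-like semantics (a scalar applied everywhere, or a per-column mapping, with no forward/backward fill) on a toy list-of-dicts table; the helper name and table shape are assumptions:

```python
def fillna(rows, value):
    # Spark-style semantics: `value` is either a scalar used for every
    # column, or a {column: default} mapping; nulls keep their column's
    # default, non-nulls pass through unchanged
    def fill(col, v):
        if v is not None:
            return v
        if isinstance(value, dict):
            return value.get(col)
        return value
    return [{col: fill(col, v) for col, v in row.items()} for row in rows]

rows = [{"a": 1.0, "b": None}, {"a": None, "b": 2.0}]
out = fillna(rows, {"a": 0.0, "b": -1.0})
```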
closed
2020-11-02T07:45:44Z
2020-11-08T15:01:56Z
https://github.com/fugue-project/fugue/issues/95
[ "enhancement", "Fugue SQL", "programming interface", "core feature" ]
kvnkho
0
ResidentMario/geoplot
matplotlib
74
Marker size
Hello, Is there a way to specify the marker size in a pointplot? Keith
closed
2019-03-20T22:25:25Z
2019-03-20T22:46:31Z
https://github.com/ResidentMario/geoplot/issues/74
[]
vool
1
tqdm/tqdm
pandas
1,494
buf-size [cs]hould default to bigger size (when using `--bytes`?)
Hi! Just stumbled upon this today (mpire linked on HN > mpire's readme links to tqdm for progress bar) and noticed the nice stand-alone program. As someone frequently using `pv` to visualize how much data went through a pipe when moving data around, I figured it'd be a good replacement (possibly more available, pretty unicode progress bar); but for example, testing with `dd if=/dev/zero bs=1M | tqdm --bytes > /dev/null`, I was surprised to see tqdm was much slower than pv (getting ~2GB/s with pv, 280MB/s with tqdm). Bumping `--buf-size` to just 64k makes tqdm catch up to be almost as fast as pv (~1.9GB/s). (Using a bigger buffer (e.g. 1MB) lowers speed a bit to ~1.8GB/s, but further lowers tqdm's cpu usage (at this speed, 90% of a core -> 60% of a core), so it might actually be preferable depending on the pipeline, as that's still good enough for most real-world networked/disk usages -- in comparison, 280MB/s is slower than some high-end setups and would bother me.) I wouldn't use tqdm for lines, but a quick test shows increasing the buf-size for the default "iteration" mode doesn't seem to change anything -- I think it'd make sense to use a different default size when `--bytes` is requested, though. Thanks! (tested on fedora 38's python3-tqdm-4.65.0-2.fc38.noarch (python 3.11.4) as well as today's git master with identical results)
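The speedup reported above is mostly per-chunk overhead: each read goes through Python, so the number of loop iterations (and progress updates) scales inversely with `--buf-size`. A toy model of the pipe loop, just counting iterations (the function name is hypothetical; 256 is, I believe, the CLI's default buffer size):

```python
import io

def count_chunks(total_bytes, bufsize):
    # model of the tqdm pipe loop: one Python-level iteration (and one
    # progress update) per `bufsize`-byte read from the stream
    src = io.BytesIO(b"\0" * total_bytes)
    iterations = 0
    while src.read(bufsize):
        iterations += 1
    return iterations

small = count_chunks(1 << 20, 256)    # 1 MiB in 256-byte chunks
large = count_chunks(1 << 20, 65536)  # same data in 64k chunks
```

At 256 bytes per chunk, 1 MiB costs 4096 iterations versus 16 with a 64k buffer — a 256x difference in per-chunk overhead, consistent with the throughput gap observed.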
open
2023-08-12T06:55:36Z
2023-08-12T06:55:36Z
https://github.com/tqdm/tqdm/issues/1494
[]
martinetd
0
rthalley/dnspython
asyncio
307
Benchmarking functionality
I looked through the examples and around on the websites and didn't find anything about this specifically, but please let me know if I missed it. It would be nice for this library to have several benchmarking features. We would like to use it to keep tabs on the status of DNS servers and their performance in various regions. That means 1. knowing the internal resolution time (I believe `dig` interrogates a DNS server for this information automatically) and 2. knowing the roundtrip time of a hostname resolution. I see that `dns.message.make_query` resolves similarly to dig, but can it also provide the internal resolution time like dig does? And if there are no plans to do internal timing of the request, how much time loss is involved in scripting it? Would it be accurate to within a couple milliseconds to time that function call?
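For point 2 (roundtrip time), timing the call client-side is usually accurate to well within a couple of milliseconds, since the Python overhead around a single blocking call is small compared to network latency. A stdlib-only sketch of the idea — it uses the system resolver rather than dnspython, so the names here are not dnspython's API:

```python
import socket
import time

def timed_resolve(host):
    # measure total client-side latency of one resolution; this is the
    # roundtrip time as seen by the caller, not the server's internal
    # resolution time (which only the server can report)
    start = time.monotonic()
    addr = socket.gethostbyname(host)
    return addr, time.monotonic() - start

addr, rtt = timed_resolve("localhost")
```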
closed
2018-05-03T20:41:36Z
2020-07-19T13:19:27Z
https://github.com/rthalley/dnspython/issues/307
[ "Enhancement Request" ]
isaiahtaylor
1
amdegroot/ssd.pytorch
computer-vision
328
about coco dataset labels
I have two questions. 1. Why, in config.py, is the number of classes in COCO 201? Should it be 81 or 82? 2. Why does coco_labels.py have two kinds of index? Thanks!
open
2019-04-25T06:51:02Z
2019-11-07T03:20:34Z
https://github.com/amdegroot/ssd.pytorch/issues/328
[]
qiufeng1994
4
huggingface/datasets
numpy
6,982
cannot split dataset when using load_dataset
### Describe the bug When I use the load_dataset method to load mozilla-foundation/common_voice_7_0, it can successfully download and extract the dataset, but it cannot generate the arrow document. This bug happened on my server and my laptop, same as #6906, but it doesn't happen in Google Colab. I have worked on it for days; even when I load the dataset from a local path, it can generate the train and validation splits, but the bug happens again on the test split. ### Steps to reproduce the bug from datasets import load_dataset, load_metric, Audio common_voice_train = load_dataset("mozilla-foundation/common_voice_7_0", "ja", split="train", token=selftoken, trust_remote_code=True) ### Expected behavior ``` { "name": "ValueError", "message": "Instruction \"train\" corresponds to no data!", "stack": "--------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[2], line 3 1 from datasets import load_dataset, load_metric, Audio ----> 3 common_voice_train = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"ja\", split=\"train\",token='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')#,trust_remote_code=True)#,streaming=True) 4 common_voice_test = load_dataset(\"mozilla-foundation/common_voice_7_0\", \"ja\", split=\"test\",token='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')#,trust_remote_code=True)#,streaming=True) File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\load.py:2626, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2622 # Build dataset for splits 2623 keep_in_memory = ( 2624 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 2625 ) -> 2626 ds = builder_instance.as_dataset(split=split,
verification_mode=verification_mode, in_memory=keep_in_memory) 2627 # Rename and cast features to match task schema 2628 if task is not None: 2629 # To avoid issuing the same warning twice File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1266, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory) 1263 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS) 1265 # Create a dataset for each of the given splits -> 1266 datasets = map_nested( 1267 partial( 1268 self._build_single_dataset, 1269 run_post_process=run_post_process, 1270 verification_mode=verification_mode, 1271 in_memory=in_memory, 1272 ), 1273 split, 1274 map_tuple=True, 1275 disable_tqdm=True, 1276 ) 1277 if isinstance(datasets, dict): 1278 datasets = DatasetDict(datasets) File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\utils\\py_utils.py:484, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc) 482 if batched: 483 data_struct = [data_struct] --> 484 mapped = function(data_struct) 485 if batched: 486 mapped = mapped[0] File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1296, in DatasetBuilder._build_single_dataset(self, split, run_post_process, verification_mode, in_memory) 1293 split = Split(split) 1295 # Build base dataset -> 1296 ds = self._as_dataset( 1297 split=split, 1298 in_memory=in_memory, 1299 ) 1300 if run_post_process: 1301 for resource_file_name in self._post_processing_resources(split).values(): File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\builder.py:1370, in DatasetBuilder._as_dataset(self, split, in_memory) 1368 if self._check_legacy_cache(): 1369 dataset_name = self.name -> 1370 dataset_kwargs = ArrowReader(cache_dir, self.info).read( 1371 name=dataset_name, 1372 
instructions=split, 1373 split_infos=self.info.splits.values(), 1374 in_memory=in_memory, 1375 ) 1376 fingerprint = self._get_dataset_fingerprint(split) 1377 return Dataset(fingerprint=fingerprint, **dataset_kwargs) File c:\\Users\\cybes\\.conda\\envs\\ECoG\\lib\\site-packages\\datasets\\arrow_reader.py:256, in BaseReader.read(self, name, instructions, split_infos, in_memory) 254 msg = f'Instruction \"{instructions}\" corresponds to no data!' 255 #msg = f'Instruction \"{self._path}\",\"{name}\",\"{instructions}\",\"{split_infos}\" corresponds to no data!' --> 256 raise ValueError(msg) 257 return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) ValueError: Instruction \"train\" corresponds to no data!" } ``` ### Environment info Environment: python 3.9 windows 11 pro VScode+jupyter
closed
2024-06-19T08:07:16Z
2024-07-08T06:20:16Z
https://github.com/huggingface/datasets/issues/6982
[]
cybest0608
3
nerfstudio-project/nerfstudio
computer-vision
2,983
NeRF-W
Hi, nice work! I am wondering if there is any chance I can run NeRF-W via ns-train directly?
open
2024-03-05T02:53:32Z
2024-03-14T11:06:08Z
https://github.com/nerfstudio-project/nerfstudio/issues/2983
[]
cfeng16
1
Lightning-AI/pytorch-lightning
deep-learning
20,562
`BatchSizeFinder` with method `fit` and separate batch_size attributes for train and val (e.g., `self.train_batch_size` and `self.val_batch_size`)
### Outline & Motivation I suggest allowing `batch_arg_name` to accept a list of arg names, e.g. `tuner.scale_batch_size(..., batch_arg_name=["train_batch_size", "val_batch_size"])`. ### Pitch Fit uses both train and val dataloaders. They can have their own batch sizes. ### Additional context _No response_ cc @lantiga @justusschock
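The requested behavior can be sketched outside Lightning: accept either a single attribute name or a list of names, and write the discovered batch size to each. The `apply_batch_size` helper and the stand-in module below are hypothetical, not Lightning API:

```python
def apply_batch_size(module, batch_arg_name, new_size):
    # accept "batch_size" or ["train_batch_size", "val_batch_size"]:
    # normalize to a list, then set every named attribute
    names = [batch_arg_name] if isinstance(batch_arg_name, str) else list(batch_arg_name)
    for name in names:
        setattr(module, name, new_size)
    return names

class Module:  # stand-in for a LightningModule with separate sizes
    train_batch_size = 32
    val_batch_size = 32

m = Module()
touched = apply_batch_size(m, ["train_batch_size", "val_batch_size"], 128)
```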
open
2025-01-24T18:21:04Z
2025-01-24T18:21:26Z
https://github.com/Lightning-AI/pytorch-lightning/issues/20562
[ "refactor", "needs triage" ]
ibro45
0
PokeAPI/pokeapi
graphql
1,119
Little problem with some pokemons
Hi, thank you for the API! I'm using it in some of my profile projects. I'm using React and I'm having problems with 3 Pokémon: - gible - ralts - slakoth Error message: SyntaxError: Unexpected end of JSON input Thank you very much!
closed
2024-08-12T16:16:13Z
2024-10-05T14:30:14Z
https://github.com/PokeAPI/pokeapi/issues/1119
[]
jrdelrio
3
miguelgrinberg/python-socketio
asyncio
477
Client Randomly Disconnecting
Hey guys, I recently observed the following error when using python-socketio as a server with four connected clients. After some time (about ten minutes) one client disconnects with the following log output: ``` INFO:werkzeug:127.0.0.1 - - [05/May/2020 10:23:28] "GET /socket.io/?transport=polling&EIO=3&sid=b91e51d9d9b043ab98293875123a744f&t=1588674208.5054448 HTTP/1.1" 500 - WARNING:engineio.client:Unexpected status code 500 in server response, aborting ``` I enabled all the debugging options that are available (to the best of my knowledge). However, it seems that the server is unable to handle the GET request at this particular time, but I can't see a reason for this. Is there any way I can get more information about this 500 error? Actually, this seems like an engineio problem, and it's maybe related to this issue: https://github.com/miguelgrinberg/python-socketio/issues/475 Thanks in advance :)
closed
2020-05-05T10:30:34Z
2020-05-05T18:08:40Z
https://github.com/miguelgrinberg/python-socketio/issues/477
[]
anon767
7
great-expectations/great_expectations
data-science
10,603
add_or_update_expectation_suite is gone. Will there be an update added to the SuiteFactory?
It looks like in v1.2.0 the suite factory only has CRUD methods: delete, add, and all. Will there be an update so that a suite of a particular name can be updated, as was possible in previous versions where you could run add_or_update_expectation_suite on the Great Expectations context, like in version 0.18.17?
closed
2024-10-30T22:45:22Z
2025-02-14T16:53:55Z
https://github.com/great-expectations/great_expectations/issues/10603
[ "feature-request" ]
isaacmartin1
5
comfyanonymous/ComfyUI
pytorch
7,347
[Errno 2] No such file or directory: 'none'
### Your question I am trying to run Live Portrait Lip Sync and I am getting the error message: [Errno 2] No such file or directory: 'none'. Not sure what this means or how to address it. Thanks ### Logs ```powershell ``` ### Other _No response_
closed
2025-03-22T02:53:56Z
2025-03-23T01:29:33Z
https://github.com/comfyanonymous/ComfyUI/issues/7347
[ "User Support", "Custom Nodes Bug" ]
mb0021
3
tensorflow/tensor2tensor
deep-learning
1,121
[Question] ASR Transformer performance vs. Google Speech-to-Text
### Description We used the ["ASR with Transformer" colab notebook](https://colab.research.google.com/github/tensorflow/tensor2tensor/blob/master/tensor2tensor/notebooks/asr_transformer.ipynb) which let us load the pre-trained checkpoints of the ASR problems trained on the librispeech and Common Voice datasets. We tried out a few sentences and the results were not very good, for example for librispeech: Target: "Hello world" Output: "HALLOW WORLDS" Target: "To which address can we send the official documents?" Output: "THE WITCH ANNA IS COMING SCENT OFFICIAL LOGAMENTS" If we compare this to the performance of the Google Cloud Speech-to-Text API (and the service used in Google Translate from voice, which I assume is the same API), that performance is very, very good. In the [paper](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46687.pdf), the architecture used is an encoder-decoder one with attention, just like the Transformer. However, a separate language model is used. Does that make such a huge difference? Or are the checkpoints in the colab notebook not trained on as much data / for as long as the Speech-to-Text API? In general, would it be possible to achieve a result better than the one in the ASR colab notebook with the Transformer architecture (and for different languages)? Thanks!
open
2018-10-09T13:09:11Z
2019-12-15T16:46:50Z
https://github.com/tensorflow/tensor2tensor/issues/1121
[]
mabergerx
4
ets-labs/python-dependency-injector
flask
556
[delete] Core container as singleton for the entire app
[App Package (Container) Diagram](https://online.visual-paradigm.com/community/share/example-app-ub4sde1um) The `CoreContainer` container contains: ```python class CoreContainer( containers.DeclarativeContainer ): arguments = providers.Resource( parse_arguments ) config = providers.Resource( parse_config, arguments=arguments ) _logging = providers.Resource( init_logging, arguments=arguments, config=config ) logger_factory = providers.Factory( logging.getLogger ).provider ``` The main question is about `CoreContainer`. Is there a way to make it shared across the entire application? The only way I found is, when importing top-level containers (`CleanContainer`, `DatabaseInitContainer`, `ParseContainer`, ...), to specify the `CoreContainer` container as a dependency and pass it into the child containers. If I do this: ```python class DatabaseContainer( containers.DeclarativeContainer ): core: CoreContainer = providers.Container( CoreContainer ) ... class DownloadContainer( containers.DeclarativeContainer ): core: CoreContainer = providers.Container( CoreContainer ) ... class ParseContainer( containers.DeclarativeContainer ): core: CoreContainer = providers.Container( CoreContainer ) ``` then all elements in `CoreContainer` are initialized multiple times. It would be convenient if the container itself behaved like a Singleton for the entire application.
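Outside the library, the behavior being asked for is ordinary singleton sharing: one `CoreContainer` instance, with its resources initialized exactly once, visible to every consumer. A plain-Python sketch of that contract (this is not python-dependency-injector API; the class name is a stand-in):

```python
class SharedCore:
    # stand-in for CoreContainer: __new__ always returns the same
    # instance, and the expensive initialization runs exactly once
    _instance = None
    init_count = 0

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls.init_count += 1  # models parse_arguments/parse_config running once
        return cls._instance

a = SharedCore()
b = SharedCore()
```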
closed
2022-02-01T16:14:43Z
2022-02-02T11:27:48Z
https://github.com/ets-labs/python-dependency-injector/issues/556
[]
VasyaGaikin
0
jupyterlab/jupyter-ai
jupyter
289
Configurable vector store
### Problem Users would like to use alternative retrievers for RAG instead of being restricted to using FAISS locally. ### Proposed Solution 1. Offer alternative retrievers, such as Pinecone, Kendra, OpenSearch, etc. 2. Offer retriever configurability to allow operators to switch between them. ### Additional context Not sure if retriever configurability should be exclusively implemented via traitlets; I would prefer an admin UI with authorization and the ability to change the retriever at runtime (instead of prior to server start), but I'm not sure if this is possible within our time constraints.
open
2023-07-24T16:23:14Z
2023-07-27T23:45:33Z
https://github.com/jupyterlab/jupyter-ai/issues/289
[ "enhancement", "scope:RAG" ]
dlqqq
0
RobertCraigie/prisma-client-py
pydantic
811
Generated Includes for Partial Types
## Problem When you use partial types as return values for functions that fetch from your database, you need to include all the relations of the corresponding partial type each time. That is very repetitive, and it would be nice if partial-type includes were generated when generating the partial types. For example, it should be possible to use it like this: ```python async def read_user(id: str) -> UserFull: await db.user.find_unique( where={"id": id}, include=generated_user_full_include ) ``` ## Suggested solution Generate partial-type includes when generating partial types. An example python script that defines a function generating the include-dicts for all passed partial types; it has to be called like: `create_partial_type_include(["UserFull", "UserWithoutPosts"])` https://gist.github.com/TimonGaertner/10e2f232521158a7f8842c5b88fa385d
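The generator being requested largely reduces to mapping a partial type's relation names to a Prisma-style `include` dict. A minimal sketch of that step — `build_include` and the relation names are hypothetical illustrations, not the gist's actual code:

```python
def build_include(relations):
    # produce the `include` argument that enables each relation of a
    # partial type, so call sites don't have to repeat it by hand
    return {name: True for name in relations}

# hypothetical relations of a UserFull partial type
generated_user_full_include = build_include(["posts", "profile"])
```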
open
2023-08-29T14:07:19Z
2023-08-29T14:13:06Z
https://github.com/RobertCraigie/prisma-client-py/issues/811
[]
TimonGaertner
0
gunthercox/ChatterBot
machine-learning
1,912
how to delete one of the learned data training by list or corpus
Hello, I want to delete just one piece of the learned training data, not all of it. But I ran into a problem: I can't delete it. It reported an error like this ![捕获](https://user-images.githubusercontent.com/43271293/73932551-8acc3100-4915-11ea-93e7-854c8f29a30f.PNG)
open
2020-02-06T11:19:22Z
2020-02-06T11:19:22Z
https://github.com/gunthercox/ChatterBot/issues/1912
[]
qibinaoe
0
dhaitz/mplcyberpunk
matplotlib
36
ModuleNotFoundError: No module named 'pkg_resources'
``` Traceback (most recent call last): File "C:\Users\Raj Dave\Desktop\New folder\main.py", line 5, in <module> import mplcyberpunk File "C:\Users\Raj Dave\AppData\Local\Programs\Python\Python312\Lib\site-packages\mplcyberpunk\__init__.py", line 4, in <module> import pkg_resources ModuleNotFoundError: No module named 'pkg_resources' ``` python 3.12 happens when I'm trying to import it
closed
2024-01-30T19:52:37Z
2024-11-26T21:20:50Z
https://github.com/dhaitz/mplcyberpunk/issues/36
[]
Rajdave69
3
chatopera/Synonyms
nlp
126
The nearby method's output is not the nearest words
## ![image](https://user-images.githubusercontent.com/26009601/130062044-36d167ce-3aed-4cdc-a207-7c7aae535581.png) ## Expected behavior Output words that are close in meaning and have high scores ## Operating system Ubuntu 20.04 ## Solution ## Code version ## From - Industry: - Company/team website: ## Open Source for the World by Chatopera [![chatoper banner][co-banner-image]][co-url] [co-banner-image]: https://user-images.githubusercontent.com/3538629/42383104-da925942-8168-11e8-8195-868d5fcec170.png [co-url]: https://www.chatopera.com
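The expected behavior — nearest, highest-scoring words first — amounts to ordering the candidates by score descending. A sketch of that post-processing step in plain Python (this is not the Synonyms API; the words and scores are made up):

```python
def rank_nearby(words, scores):
    # pair each candidate word with its similarity score and sort by
    # descending score, so the closest words come first
    return sorted(zip(words, scores), key=lambda ws: ws[1], reverse=True)

ranked = rank_nearby(["b", "a", "c"], [0.5, 0.9, 0.7])
```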
open
2021-08-19T11:36:29Z
2022-04-24T08:32:41Z
https://github.com/chatopera/Synonyms/issues/126
[ "bug" ]
Miaotxy
2
pytest-dev/pytest-django
pytest
682
pytest.ini directory is not set to rootdir in 3.4.4
If I set up a pytest.ini file and then run pytest in a subfolder, it looks like the rootdir is no longer set to the directory that pytest.ini is in. This causes the path to DJANGO_SETTINGS_MODULE to no longer be found. So if I have a directory structure like this: root --apps --test --project ----settings ------test pytest.ini and set the pytest.ini file to this: [pytest] DJANGO_SETTINGS_MODULE=project.settings.test and then run pytest from the test folder, it will find the pytest.ini file but will then throw a "project" module not found error. However, it is found if I set PYTHONPATH, so maybe something that sets the rootdir broke? PYTHONPATH=.. pytest This happens in 3.4.4, but not in 3.4.3 and earlier versions
closed
2018-12-18T16:09:08Z
2019-02-26T15:18:44Z
https://github.com/pytest-dev/pytest-django/issues/682
[ "bug" ]
danlittlejohn
2
PaddlePaddle/PaddleHub
nlp
1,700
paddlehub 1.8.1 raises "float division by zero" when training a model
I tried paddlepaddle versions 1.7.2, 1.8.0, and 1.8.4. When training the model, the step cls_task.finetune_and_eval() raises the error "float division by zero". The strategy and config steps all run fine. ![f63769ad95d82e216b90dcf1fc1ddad](https://user-images.githubusercontent.com/89830138/143004474-ba77d9a6-deea-41cb-91fd-9c0bef769548.png) ![b53ea7b2a899cd25beaa2be28901895](https://user-images.githubusercontent.com/89830138/143004565-13567d34-0556-44d2-9780-60cfea6a4159.png) ![058d7aabe23c834ef93aace593f77d0](https://user-images.githubusercontent.com/89830138/143004640-659bfe81-8c16-43ed-a0fc-970b09919554.png) ![3e1c014b73265f378e1844e4a2f3876](https://user-images.githubusercontent.com/89830138/143004697-c8ac0120-bab6-4b0f-8218-f79ba3d53e1f.png)
open
2021-11-23T09:52:25Z
2021-11-24T13:50:34Z
https://github.com/PaddlePaddle/PaddleHub/issues/1700
[]
yuanharry
7
mailgyc/doudizhu
sqlalchemy
10
Keeps raising an error — does Python 3.5 not work? Only 3.6?
self.db: torndb.Connection = self.application.db ^ The error points here. The settings configuration has already been changed.
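The caret points at a PEP 526 variable annotation (`self.db: torndb.Connection = ...`), which is Python 3.6+ syntax and a SyntaxError on 3.5 — which would explain the version dependence. A sketch of the 3.5-compatible form using a type comment; the `Handler` and `App` classes here are stand-ins, not the project's code:

```python
class Handler:
    def __init__(self, application):
        # inline annotations on assignments require Python 3.6;
        # a type comment carries the same information on 3.5
        self.db = application.db  # type: object

class App:  # stand-in for the tornado application with a db attribute
    db = "connection"

h = Handler(App())
```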
closed
2018-08-01T16:25:16Z
2018-12-24T05:59:59Z
https://github.com/mailgyc/doudizhu/issues/10
[ "invalid" ]
vinplezhang
3
davidteather/TikTok-Api
api
520
[BUG] - tiktok changed api
api_url now includes "/post": ```api_url = "{}api/post/item_list/?{}&{}"``` "items" changed to "itemList": ``` if "itemList" in res.keys(): for t in res["itemList"]: response.append(t) ``` "maxCursor" changed to "cursor": ```maxCursor = res["cursor"]```
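The key renames above can be folded into one response parser; a hedged sketch, where `parse_item_list` is a hypothetical helper and only the key names come from the report:

```python
def parse_item_list(res):
    # new field names per the report: "itemList" replaces "items",
    # "cursor" replaces "maxCursor"
    items = list(res.get("itemList", []))
    cursor = res.get("cursor")
    return items, cursor

items, cursor = parse_item_list({"itemList": [{"id": 1}], "cursor": "42"})
```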
closed
2021-03-08T11:43:07Z
2021-03-08T20:12:42Z
https://github.com/davidteather/TikTok-Api/issues/520
[ "bug" ]
11348
2
Evil0ctal/Douyin_TikTok_Download_API
fastapi
54
TikTok request returns empty data
Request result: Target link: https://www.tiktok.com/@jelenew_official/video/7108906863097367851 The extracted TikTok video ID is 7108906863097367851 Requesting API link: https://api.tiktokv.com/aweme/v1/multi/aweme/detail/?aweme_ids=%5B7108906863097367851%5D Video ID: 7108906863097367851 Requesting API link: https://api.tiktokv.com/aweme/v1/multi/aweme/detail/?aweme_ids=%5B7108906863097367851%5D {'status': 'failed', 'reason': JSONDecodeError('Expecting value: line 1 column 1 (char 0)'), 'function': 'Scraper.tiktok()', 'value': 'https://www.tiktok.com/@jelenew_official/video/7108906863097367851'} Error: 'url_type' ![image](https://user-images.githubusercontent.com/62318192/180733203-ac975e79-5ada-41f4-988f-ce3b1b611068.png) Method: ![image](https://user-images.githubusercontent.com/62318192/180733116-a0f91aa2-8377-4099-b318-65791c3a8104.png)
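The `JSONDecodeError('Expecting value: line 1 column 1 (char 0)')` is exactly what `json.loads` raises on an empty body, consistent with the API returning no data. A sketch of the guarded parse that produces the `{'status': 'failed', ...}` shape seen above (`safe_parse` is a hypothetical helper, not the project's code):

```python
import json

def safe_parse(text, source_url):
    # an empty or non-JSON body raises at char 0; surface a structured
    # failure instead of letting the exception crash the caller
    try:
        return {"status": "success", "data": json.loads(text)}
    except json.JSONDecodeError as e:
        return {"status": "failed", "reason": str(e), "value": source_url}

ok = safe_parse('{"aweme_id": 1}', "https://example.com/v")
bad = safe_parse("", "https://example.com/v")
```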
closed
2022-07-25T08:29:15Z
2022-08-01T05:34:22Z
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/54
[ "API Down", "Fixed" ]
holmes1849082248
10
ultralytics/ultralytics
python
19,770
How do I run YOLO11 on macOS with an Apple M-series chip?
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions. ### Question /Users/tianlong/anaconda3/envs/yoloenv/bin/python /Users/tianlong/Desktop/develop/yolo/yolo11/train.py Matplotlib is building the font cache; this may take a moment. New https://pypi.org/project/ultralytics/8.3.93 available 😃 Update with 'pip install -U ultralytics' Ultralytics 8.3.56 🚀 Python-3.12.9 torch-2.2.2 Traceback (most recent call last): File "/Users/tianlong/Desktop/develop/yolo/yolo11/train.py", line 39, in <module> main() File "/Users/tianlong/Desktop/develop/yolo/yolo11/train.py", line 20, in main train_results = model.train( ^^^^^^^^^^^^ File "/Users/tianlong/Desktop/develop/yolo/yolo11/ultralytics/engine/model.py", line 800, in train self.trainer = (trainer or self._smart_load("trainer"))(overrides=args, _callbacks=self.callbacks) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/tianlong/Desktop/develop/yolo/yolo11/ultralytics/engine/trainer.py", line 103, in __init__ self.device = select_device(self.args.device, self.args.batch) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/tianlong/Desktop/develop/yolo/yolo11/ultralytics/utils/torch_utils.py", line 192, in select_device raise ValueError( ValueError: Invalid CUDA 'device=0' requested. Use 'device=cpu' or pass valid CUDA device(s) if available, i.e. 'device=0' or 'device=0,1,2,3' for Multi-GPU. torch.cuda.is_available(): False torch.cuda.device_count(): 0 os.environ['CUDA_VISIBLE_DEVICES']: None See https://pytorch.org/get-started/locally/ for up-to-date torch install instructions if no CUDA devices are seen by torch. 
Process finished with exit code 1 ### Additional
open
2025-03-19T03:46:27Z
2025-03-19T06:13:41Z
https://github.com/ultralytics/ultralytics/issues/19770
[ "question", "dependencies" ]
tianlong19910404
2
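The traceback in the record above comes from passing `device=0` (a CUDA index) on a machine with no CUDA. On Apple M-series chips, PyTorch exposes the Metal backend as `"mps"`, so passing `device="mps"` (or falling back to `"cpu"`) to `model.train(...)` avoids the CUDA check. A minimal device-selection sketch in pure Python; `pick_device` is a hypothetical helper, and in practice the two flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Return an Ultralytics-style device string for the best available backend."""
    if cuda_available:
        return "0"    # first CUDA GPU
    if mps_available:
        return "mps"  # Apple-silicon Metal backend
    return "cpu"

# On an M-series Mac without CUDA:
print(pick_device(cuda_available=False, mps_available=True))  # → mps
```

The returned string would then be passed as `model.train(..., device=pick_device(...))`.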
d2l-ai/d2l-en
computer-vision
2,626
Numpy version mismatch in d2l installation JAX
I am trying to install d2l for jax but [Installation Link](https://d2l.ai/chapter_installation/index.html) and I am getting Error in numpy package version when i install `d2l` by ``` pip install d2l==1.0.3 ``` error: ``` ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. chex 0.1.82 requires numpy>=1.25.0, but you have numpy 1.23.5 which is incompatible. tensorflow 2.18.0 requires numpy<2.1.0,>=1.26.0, but you have numpy 1.23.5 which is incompatible. Successfully installed d2l-1.0.3 numpy-1.23.5 ```
open
2024-11-22T17:42:22Z
2024-11-22T17:42:22Z
https://github.com/d2l-ai/d2l-en/issues/2626
[]
itahang
0
autogluon/autogluon
computer-vision
4,702
[BUG] RaySystemError: System error: Failed to unpickle serialized exception
**Bug Report Checklist** **Describe the bug** <!-- A clear and concise description of what the bug is. --> Using many columns ex: 11000 the ray reproduces an error related to catboost... The error is confusing, because the root of the problem is the number of columns (many)... if you use, for example, only 1,000 there are no errors, but 11,000 the error appears... ![image](https://github.com/user-attachments/assets/aa6e29ec-7480-472f-a217-4be0bbb81f14) ```python Skipping CatBoost_r177_BAG_L1 due to exception: RaySystemError: System error: Failed to unpickle serialized exception traceback: Traceback (most recent call last): File "[C:\Users\celes\anaconda3\Lib\site-packages\ray\exceptions.py", line 51](file:///C:/Users/celes/anaconda3/Lib/site-packages/ray/exceptions.py#line=50), in from_ray_exception return pickle.loads(ray_exception.serialized_exception) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ModuleNotFoundError: No module named '_catboost' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "[C:\Users\celes\anaconda3\Lib\site-packages\ray\_private\serialization.py", line 460](file:///C:/Users/celes/anaconda3/Lib/site-packages/ray/_private/serialization.py#line=459), in deserialize_objects obj = self._deserialize_object(data, metadata, object_ref) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "[C:\Users\celes\anaconda3\Lib\site-packages\ray\_private\serialization.py", line 342](file:///C:/Users/celes/anaconda3/Lib/site-packages/ray/_private/serialization.py#line=341), in _deserialize_object return RayError.from_bytes(obj) ^^^^^^^^^^^^^^^^^^^^^^^^ File "[C:\Users\celes\anaconda3\Lib\site-packages\ray\exceptions.py", line 45](file:///C:/Users/celes/anaconda3/Lib/site-packages/ray/exceptions.py#line=44), in from_bytes return RayError.from_ray_exception(ray_exception) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "[C:\Users\celes\anaconda3\Lib\site-packages\ray\exceptions.py", line 
54](file:///C:/Users/celes/anaconda3/Lib/site-packages/ray/exceptions.py#line=53), in from_ray_exception raise RuntimeError(msg) from e RuntimeError: Failed to unpickle serialized exception ``` **Expected behavior** <!-- A clear and concise description of what you expected to happen. --> **To Reproduce** <!-- A minimal script to reproduce the issue. Links to Colab notebooks or similar tools are encouraged. If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com. In short, we are going to copy-paste your code to run it and we expect to get the same result as you. --> ```python import pandas as pd import numpy as np from autogluon.tabular import TabularPredictor # dataset 11.000 columns and 1.000 rows num_rows = 1000 num_columns = 11000 data = pd.DataFrame(np.random.rand(num_rows, num_columns), columns=[f'col_{i}' for i in range(num_columns)]) # create column with 's', 'd' or 'i' values data['n2_sdi'] = np.random.choice(['s', 'd', 'i'], size=num_rows) # convert data type from object to category data['n2_sdi'] = data['n2_sdi'].astype('category') # define the target column target_column = 'n2_sdi' predictor = TabularPredictor(label=target_column, path='autogluon_ray_investigation') predictor.fit( train_data=data, presets='best_quality', hyperparameters={'CAT': {}}, # only catBoost to investigate time_limit=1000 ) ``` **Screenshots / Logs** <!-- If applicable, add screenshots or logs to help explain your problem. 
--> **Installed Versions** <!-- Please run the following code snippet: --> <details> ```python NSTALLED VERSIONS ------------------ date : 2024-11-28 time : 14:07:47.346096 python : 3.11.9.final.0 OS : Windows OS-release : 10 Version : 10.0.22631 machine : AMD64 processor : AMD64 Family 23 Model 96 Stepping 1, AuthenticAMD num_cores : 16 cpu_ram_mb : 32175.171875 cuda version : None num_gpus : 1 gpu_ram_mb : [3935] avail_disk_size_mb : None autogluon : None autogluon.common : 1.2 autogluon.core : 1.2 autogluon.features : 1.2 autogluon.tabular : 1.2 boto3 : 1.35.55 catboost : 1.2.7 einops : 0.8.0 fastai : 2.7.18 huggingface-hub : 0.26.3 hyperopt : 0.2.7 imodels : 2.0.0 lightgbm : 4.5.0.99 matplotlib : 3.10.0rc1 networkx : 3.4.2 numpy : 1.26.4 onnx : 1.17.0 onnxruntime : 1.20.1 onnxruntime-gpu : 1.20.1 pandas : 2.2.3 psutil : 6.1.0 pyarrow : 18.0.0 ray : 2.39.0 requests : 2.28.2 scikit-learn : 1.5.2 scikit-learn-intelex: 2025.0.0 scipy : 1.14.1 skl2onnx : 1.17.0 spacy : 3.8.2 tabpfn : 0.1.10 torch : 2.6.0.dev20241127+cu118 tqdm : 4.65.2 vowpalwabbit : None xgboost : 2.1.3 ``` </details>
closed
2024-11-28T14:11:29Z
2025-01-13T11:37:56Z
https://github.com/autogluon/autogluon/issues/4702
[ "bug", "module: tabular", "Needs Triage", "priority: 0" ]
celestinoxp
4
nalepae/pandarallel
pandas
57
Can't call progress_apply(lambda x: function(x), axis=0)
The API only works when axis=1.
closed
2019-12-04T07:57:47Z
2019-12-04T08:06:47Z
https://github.com/nalepae/pandarallel/issues/57
[]
lhduc94
0
huggingface/datasets
computer-vision
6,506
Incorrect test set labels for RTE and CoLA datasets via load_dataset
### Describe the bug The test set labels for the RTE and CoLA datasets when loading via datasets load_dataset are all -1. Edit: It appears this is also the case for every other dataset except for MRPC (stsb, sst2, qqp, mnli (both matched and mismatched), qnli, wnli, ax). Is this intended behavior to safeguard the test set for evaluation purposes? ### Steps to reproduce the bug !pip install datasets from datasets import load_dataset rte_data = load_dataset('glue', 'rte') cola_data = load_dataset('glue', 'cola') print(rte_data['test'][0:30]['label']) print(cola_data['test'][0:30]['label']) Output: [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] The non-label test data seems to be fine: e.g. rte_data['test'][1] is: {'sentence1': "Authorities in Brazil say that more than 200 people are being held hostage in a prison in the country's remote, Amazonian-jungle state of Rondonia.", 'sentence2': 'Authorities in Brazil hold 200 people as hostage.', 'label': -1, 'idx': 1} Training and validation data are also fine: e.g. rte_data['train][0] is: {'sentence1': 'No Weapons of Mass Destruction Found in Iraq Yet.', 'sentence2': 'Weapons of Mass Destruction Found in Iraq.', 'label': 1, 'idx': 0} ### Expected behavior Expected the labels to be binary 0/1 values; Got all -1s instead ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 10.0.1 - Pandas version: 1.5.3 - `fsspec` version: 2023.6.0
closed
2023-12-16T22:06:08Z
2023-12-21T09:57:57Z
https://github.com/huggingface/datasets/issues/6506
[]
emreonal11
1
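The `-1` values reported above are GLUE's convention for withheld test labels (the benchmark keeps them hidden for leaderboard evaluation), so local evaluation has to use the validation split or drop the unlabeled rows. A toy sketch of that filtering, using plain dicts rather than a real `datasets` object:

```python
rows = [
    {"sentence1": "a", "label": -1},  # hidden test label
    {"sentence1": "b", "label": 1},
    {"sentence1": "c", "label": 0},
]

# Keep only rows whose label was actually released.
labeled = [r for r in rows if r["label"] != -1]
print([r["label"] for r in labeled])  # → [1, 0]
```

With a real loaded dataset the equivalent would be roughly `rte_data["test"].filter(lambda r: r["label"] != -1)`, though for GLUE that leaves the test split empty.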
pytorch/pytorch
deep-learning
148,937
[AOTInductor]Only support one model instance when use AOTIModelPackageLoader load aot model?
When I use an AOTI model in C++, I try to run inference in parallel using multiple threads and multiple streams, like this: ```cpp torch::inductor::AOTIModelPackageLoader loader("model.pt2"); torch::inductor::AOTIModelContainerRunner* runner = loader.get_runner(); for thread_id in threads: // in different threads auto outputs = runner->run(inputs, streams[thread_id]); ``` But I found that the other calls are blocked while one inference is running. Then I found in the torch code that only one model instance is initialized when AOTIModelPackageLoader is used: ```cpp std::string cubin_dir = temp_dir_ + k_separator + model_directory; runner_ = registered_aoti_runner[device]( so_path, 1, device, cubin_dir, run_single_threaded); // here, only one model instance is created ``` Is this a feature or a bug? How can I run inference in parallel? cc @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1
closed
2025-03-11T01:59:53Z
2025-03-19T02:18:33Z
https://github.com/pytorch/pytorch/issues/148937
[ "triaged", "oncall: pt2", "oncall: export", "module: aotinductor" ]
zzq96
3
aiogram/aiogram
asyncio
1,477
wrong TypeHint for ReplyKeyboardBuilder button
### Checklist - [X] I am sure the error is coming from aiogram code - [X] I have searched in the issue tracker for similar bug reports, including closed ones ### Operating system ubuntu 20 ### Python version 3.11 ### aiogram version 3.5 ### Expected behavior `request_user` and `request_chat` in `ReplyKeyboardBuilder().button()` ```py from aiogram.utils.keyboard import ReplyKeyboardBuilder from aiogram.types import KeyboardButtonRequestUser, KeyboardButtonRequestChat x = ReplyKeyboardBuilder().button(request_user=KeyboardButtonRequestUser(**params)).as_markup() y = ReplyKeyboardBuilder().button(request_chat=KeyboardButtonRequestChat(**params)).as_markup() ``` should suggest ```py Optional[KeyboardButtonRequestUser] = None Optional[KeyboardButtonRequestChat] = None ``` ### Current behavior `request_user` and `request_chat` in `ReplyKeyboardBuilder().button()` ```py from aiogram.utils.keyboard import ReplyKeyboardBuilder x = ReplyKeyboardBuilder().button(request_user=True).as_markup() y = ReplyKeyboardBuilder().button(request_chat=True).as_markup() ``` currently suggest ```py Optional[bool] = None ``` ### Steps to reproduce change button function in ReplyKeyboardBuilder class in keyboard.py file ```py @no_type_check def button( self, *, text: str, request_user: Optional[KeyboardButtonRequestUser] = None, request_chat: Optional[KeyboardButtonRequestChat] = None, request_contact: Optional[bool] = None, request_location: Optional[bool] = None, request_poll: Optional[KeyboardButtonPollType] = None, web_app: Optional[WebAppInfo] = None, **kwargs: Any, ) -> "KeyboardBuilder[KeyboardButton]": ... ``` ### Code example _No response_ ### Logs _No response_ ### Additional information _No response_
closed
2024-05-03T11:41:23Z
2024-05-03T12:10:08Z
https://github.com/aiogram/aiogram/issues/1477
[ "bug" ]
HadiH2o
0
aiortc/aiortc
asyncio
303
Strange `trackId` value in stats
I call `pc.addTransceiver()` with a `track` (generated by a `MediaPlayer`) whose `id` is "296ff180-3e3a-44de-b9f9-e0440d686e84". However, when I print the `transceiver.sender.getStats()` I see a strange `trackId` value in it: ```json { "bytesSent": 8643, "id": "outbound-rtp_4575985168", "kind": "audio", "packetsSent": 2881, "ssrc": 3795462943, "timestamp": 1582673816.716741, "trackId": "4575764560", "transportId": "transport_4575984272", "type": "outbound-rtp" } ``` Here we have `"trackId": "4575764560"`, which does not make much sense.
closed
2020-02-25T23:44:38Z
2022-08-07T03:09:36Z
https://github.com/aiortc/aiortc/issues/303
[ "stale" ]
ibc
9
globaleaks/globaleaks-whistleblowing-software
sqlalchemy
3,392
Notification on password expiration
### Proposal A nice feature would be a notification to users on password expiration - an idea? /soren ### Motivation and context We have had multiple requests from recipients who would like to get a notification on password expiration. This feature will actually make the recipients log in to the platform - even if there haven't been any reports - hence their familiarization with the platform will increase :-)
open
2023-03-20T13:21:28Z
2023-03-20T15:17:46Z
https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3392
[ "C: Backend", "F: Notification", "T: Feature" ]
schris-dk
1
onnx/onnx
pytorch
6,406
ONNX Windows build relies on MS Store python, fails with official python launcher
# Bug Report ### Description I'm using a Windows Arm-based PC, and the protobuf compilation stage of the pip installation fails with a fatal error. The error indicates that the mypy plugin is failing because the `python` command isn't found. I believe this is because I'm using the official download from Python dot org that only installs the `py` executable, which is a launcher for Python. This is listed as the recommended option at [docs.python.org/3/using/windows.html](https://docs.python.org/3/using/windows.html), with the Microsoft Store version of `python` listed as an alternative. Unfortunately the Store version of Python is still x86-based, so I can't use it on a Windows/Arm PC (technically it can run under emulation but having both installed causes other problems). This also seems like a common setup in the Windows world, regardless of architecture, so it would be nice to support it. ### System information - Windows 11, ARM - ONNX version 1.18 - Python version: 3.11 (Arm) ### Reproduction instructions ```bash git clone https://github.com/onnx/onnx cd onnx py setup.py install ``` This also currently happens with plain `pip install onnx`. ### Expected behavior I expect the wheel to be built and installed. ### Actual behavior The build fails with the following error messages (trimmed to the most relevant): ```bash Building Custom Rule C:/Users/pete/onnx/CMakeLists.txt Running C++ protocol buffer compiler on C:/Users/pete/onnx/.setuptools-cmake-build/onnx/onn x-ml.proto Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut from Settings > Manage App Execution Aliases. --mypy_out: protoc-gen-mypy: Plugin failed with status code 9009. 
C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Microsoft\VC\v170\Microsoft.C ppCommon.targets(254,5): error MSB8066: Custom build for 'C:\Users\pete\onnx\.setuptools-cmak e-build\CMakeFiles\1cc123d7a88b38a07b5d702bea4bbab7\onnx-ml.proto.rule;C:\Users\pete\onnx\.se tuptools-cmake-build\CMakeFiles\1cc123d7a88b38a07b5d702bea4bbab7\onnx-ml.pb.cc.rule;C:\Users\ pete\onnx\.setuptools-cmake-build\CMakeFiles\2613932fd8912cf9addf99599c963206\gen_onnx_proto. rule;C:\Users\pete\onnx\CMakeLists.txt' exited with code 1. [C:\Users\pete\onnx\.setuptools-c make-build\gen_onnx_proto.vcxproj] Traceback (most recent call last): ... raise CalledProcessError(retcode, cmd) subprocess.CalledProcessError: Command '['C:\\Program Files\\CMake\\bin\\cmake.EXE', '--build', '.', '--config', 'Release', '--', '/maxcpucount:12']' returned non-zero exit status 1. ``` ### Notes From my debugging, this is caused by the [tools/protoc-gen-mypy.bat](https://github.com/onnx/onnx/blob/main/tools/protoc-gen-mypy.bat) batch file calling the `python` command directly. I will be submitting a patch to try both versions of the command, so that: ```bash python -u "%~dp0\protoc-gen-mypy.py" ``` becomes ```bash python -u "%~dp0\protoc-gen-mypy.py" || py -u "%~dp0\protoc-gen-mypy.py" ``` I will submit this as a PR linking back to this issue shortly.
closed
2024-09-30T21:11:35Z
2024-11-04T19:19:47Z
https://github.com/onnx/onnx/issues/6406
[ "bug" ]
petewarden
2
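The fix proposed in the record above relies on the shell/batch `||` operator: the right-hand command runs only when the left-hand one exits non-zero (e.g. command not found). A POSIX-shell sketch of the same fallback pattern; the command name is a deliberately nonexistent placeholder:

```shell
# If the first command is missing (or fails), run the fallback instead.
definitely_missing_cmd_xyz 2>/dev/null || echo "fallback used"
```

In the batch file from the issue, `python` plays the role of the first command and `py` the fallback.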
fastapi/fastapi
api
12,554
Update docs include syntax for source examples
### Privileged issue - [X] I'm @tiangolo or he asked me directly to create an issue here. ### Issue Content This is a good first contribution. :nerd_face: The code examples shown in the docs are actual Python files. They are even tested in CI, that's why you can always copy paste an example and it will always work, the example is tested. The way those examples are included in the docs used a specific format. But now there's a new format available that is much simpler and easier to use than the previous one, in particular in complex cases, for example when there are examples in multiple versions of Python. But not all the docs have the new format yet. The docs should use the new format to include examples. That is the task. :nerd_face: **It should be done as one PR per page updated.** ## Simple Example Before, the format was like: ````markdown ```Python hl_lines="3" {!../../docs_src/first_steps/tutorial001.py!} ``` ```` Now the new format looks like: ````markdown {* ../../docs_src/first_steps/tutorial001.py hl[3] *} ```` * Instead of `{!` and `!}` it uses `{*` and `*}` * It no longer has a line above with: ````markdown ```Python ```` * And it no longer has a line below with: ````markdown ``` ```` * The highlight is no longer a line with e.g. `hl_lines="3"` (to highlight line 3), but instead in the same line there's a `hl[3]`. An example PR: https://github.com/fastapi/fastapi/pull/12552 ## Multiple Python Versions There are some cases where there are variants of the same example for multiple versions of Python, or for using `Annotated` or not. In those cases, the current include examples have syntax for tabs, and notes saying `Annotated` should be preferred. 
For example: ````markdown //// tab | Python 3.9+ ```Python hl_lines="4 8 12" {!> ../../docs_src/security/tutorial006_an_py39.py!} ``` //// //// tab | Python 3.8+ ```Python hl_lines="2 7 11" {!> ../../docs_src/security/tutorial006_an.py!} ``` //// //// tab | Python 3.8+ non-Annotated /// tip Prefer to use the `Annotated` version if possible. /// ```Python hl_lines="2 6 10" {!> ../../docs_src/security/tutorial006.py!} ``` //// ```` In these cases, it should be updated to only include the first one (the others will be included automatically :sunglasses: ): ````markdown {* ../../docs_src/security/tutorial006_an_py39.py hl[4,8,12] *} ```` * The syntax for tabs is also removed, all the other variants are included automatically. * The highlight lines are included for that same first file, the fragment with `hl_lines="4 8 12"` is replaced with `hl[4,8,12]` An example PR: https://github.com/fastapi/fastapi/pull/12553 ## Highlight Lines ### Simple Lines When there's a fragment like: ````markdown hl_lines="4 8 12" ```` That means it is highlighting the lines 4, 8, and 12. The new syntax is on the same include line: ````markdown hl[4,8,12] ```` * It separates individual lines by commas. * It uses `hl`, with square brackets around. ### Line Ranges When there are line ranges, like: ````markdown hl_lines="4-6" ```` That means it is highlighting lines from 4 to 6 (so, 4, 5, and 6). The new syntax uses `:` instead of `-` for the ranges: ````markdown hl[4:6] ```` ### Multiple Highlights There are some highlights that include individual lines and also line ranges, for example the old syntax was: ````markdown hl_lines="2 4-6 8-11 13" ```` That means it is highlighting: * Line 2 * Lines from 4 to 6 (so, 4, 5, and 6) * Lines from 8 to 11 (so, 8, 9, 10, and 11) * Line 13 The new syntax separates by commas instead of spaces: ````markdown hl[2,4:6,8:11,13] ```` ## Include Specific Lines In some cases, there are specific lines included instead of the entire file. 
For example, the old syntax was: ````markdown ```Python hl_lines="7" {!> ../../docs_src/separate_openapi_schemas/tutorial001_py310.py[ln:1-7]!} # Code below omitted 👇 ``` ```` In this example, the lines included are from line 1 to line 7 (lines 1, 2, 3, 4, 5, 6, 7). In the old syntax, it's defined with the fragment: ````markdown [ln:1-7] ```` In the new syntax, the included code from above would be: ````markdown {* ../../docs_src/separate_openapi_schemas/tutorial001_py310.py ln[1:7] hl[7] *} ```` * The lines to include that were defined with the fragment `[ln:1-7]`, are now defined with `ln[1:7]` The new syntax `ln` as in `ln[1:7]` also supports multiple lines and ranges to include. ### Comments Between Line Ranges In the old syntax, when there are ranges of code included, there are comments like: ````markdown # Code below omitted 👇 ```` The new syntax generates those comments automatically based on the line ranges. ### Real Example A more real example of the include with the old syntax looked like this: ````markdown //// tab | Python 3.10+ ```Python hl_lines="7" {!> ../../docs_src/separate_openapi_schemas/tutorial001_py310.py[ln:1-7]!} # Code below omitted 👇 ``` <details> <summary>👀 Full file preview</summary> ```Python {!> ../../docs_src/separate_openapi_schemas/tutorial001_py310.py!} ``` </details> //// //// tab | Python 3.9+ ```Python hl_lines="9" {!> ../../docs_src/separate_openapi_schemas/tutorial001_py39.py[ln:1-9]!} # Code below omitted 👇 ``` <details> <summary>👀 Full file preview</summary> ```Python {!> ../../docs_src/separate_openapi_schemas/tutorial001_py39.py!} ``` </details> //// //// tab | Python 3.8+ ```Python hl_lines="9" {!> ../../docs_src/separate_openapi_schemas/tutorial001.py[ln:1-9]!} # Code below omitted 👇 ``` <details> <summary>👀 Full file preview</summary> ```Python {!> ../../docs_src/separate_openapi_schemas/tutorial001.py!} ``` </details> //// ```` In the new syntax, that is replaced with this: ````markdown {* 
../../docs_src/separate_openapi_schemas/tutorial001_py310.py ln[1:7] hl[7] *} ```` * The only file that needs to be included and defined is the first one, and the lines to include and highlight are also needed for the first file. * All the other file includes, full file preview, comments, etc. are generated automatically. --- An example PR: https://github.com/fastapi/fastapi/pull/12555 ## Help Do you want to help? Please do! Remember **it should be done as one PR per page updated.** If you see a page that doesn't fit these cases, leave it as is, I'll take care of it later. Before submitting a PR, check if there's another one already handling that file. Please name the PR including the file path, for example: ````markdown 📝 Update includes for `docs/tutorial/create-db-and-table.md` ````
closed
2024-10-26T13:06:48Z
2024-11-18T02:45:54Z
https://github.com/fastapi/fastapi/issues/12554
[ "good first issue" ]
tiangolo
30
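The highlight-syntax change described above is mechanical (spaces become commas, `-` ranges become `:`), so it can be scripted when updating a page. A small sketch; `convert_hl` is a hypothetical helper, not part of FastAPI's tooling:

```python
import re

def convert_hl(old: str) -> str:
    """Convert an old `hl_lines="2 4-6 8-11 13"` fragment to the new `hl[...]` form."""
    nums = re.search(r'hl_lines="([^"]+)"', old).group(1)
    # Each space-separated entry is either a single line or a `-` range.
    parts = [p.replace("-", ":") for p in nums.split()]
    return "hl[" + ",".join(parts) + "]"

print(convert_hl('hl_lines="2 4-6 8-11 13"'))  # → hl[2,4:6,8:11,13]
```

The same idea extends to the `[ln:1-7]` → `ln[1:7]` include fragments, with a different regex.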
modelscope/data-juicer
streamlit
375
Efficient processing OPs for scanned images and pdf
### Search before continuing - [X] I have searched the Data-Juicer issues and found no similar feature requests. ### Description There is a large amount of valuable data in the format of scanned images and PDFs. We can continuously discuss and list related processing operations to be added into DJ in this thread. Some pioneering works include [MAP-NEO](https://github.com/multimodal-art-projection/MAP-NEO/tree/main/Matrix/document-convert) and [PDF-Extract-Kit](https://github.com/opendatalab/PDF-Extract-Kit). ### Use case _No response_ ### Additional _No response_ ### Are you willing to submit a PR for this feature? - [X] Yes I'd like to help by submitting a PR!
closed
2024-07-30T03:03:15Z
2024-09-29T09:32:18Z
https://github.com/modelscope/data-juicer/issues/375
[ "enhancement", "dj:multimodal", "stale-issue", "dj:op" ]
yxdyc
5
omar2535/GraphQLer
graphql
11
Add logging
Implement logging instead of using prints
closed
2021-11-30T07:56:28Z
2023-10-15T03:47:12Z
https://github.com/omar2535/GraphQLer/issues/11
[ "➕enhancement" ]
omar2535
1
graphql-python/graphene
graphql
1,226
Proposal: Alternative API for mutations and subscriptions
# The problem Currently the mutation api is quite clunky and not very intuitive: https://docs.graphene-python.org/en/latest/types/mutations/ Often users get confused about the `mutate` function being a static method and that you need to define a separate `Mutation` class that extends `ObjectType` with each mutation in it. Also the `Output` field is not a very intuitive way of defining the return value of the `Mutation`. See: https://github.com/graphql-python/graphene/issues/1163 https://github.com/graphql-python/graphene/issues/810 https://github.com/graphql-python/graphene-django/issues/759 and others Subscriptions in Graphene are currently a very new and experimental feature and the current API has some downsides as outlined here: https://github.com/graphql-python/graphene/pull/1107#issuecomment-657194024 # The solution Implement a new decorator based API for both mutations and subscriptions and expose a new `mutations` and `subscriptions` argument to `Schema` that accepts a list of mutation and subscription functions. This has the benefit of not only being simpler but also making it super clear that the mutation/subscribe function is a static method. ## Examples: ### Mutations: ```python from graphene import mutation, String, Schema from .definitions import User, Query @mutation(User, required=True, arguments={"name": String(required=True)}) def update_user(root, info, name): # Update the user... return User(name=name) schema = Schema(query=Query, mutations=[update_user]) ``` <details> <summary>Equivalent functionality with the current API</summary> ```python from graphene import Mutation, String, Schema, ObjectType, Field class UpdateUser(Mutation): class Arguments: name = String(required=True) Output = User def mutate(root, info, name): # Update the user... 
return User(name=name) class MyMutation(ObjectType): update_user = UpdateUser.Field(required=True) schema = Schema(query=Query, mutation=MyMutation) ``` </details> ### Subscriptions: ```python from graphene import subscription, Int, Schema from .definitions import Query @subscription(Int, required=True) async def count_to_ten(root, info): count = 0 while count < 10: count += 1 yield count schema = Schema(query=Query, subscriptions=[count_to_ten]) ``` # Drawbacks Using decorators like this is quite different to the current API so it would require a lot of documentation and education to get users to migrate to it (though the 2 APIs could coexist quite easily). Since there should only be 1 way of defining mutations we should try and make it as easy as possible for people to migrate. # Alternatives We could double down on the current Mutation API and improve the documentation and add more validation to help users use it correctly. The Subscription API could also follow the Mutation API so that they are consistent.
closed
2020-07-12T12:40:24Z
2020-11-09T17:05:07Z
https://github.com/graphql-python/graphene/issues/1226
[ "✨ enhancement" ]
jkimbo
13
nerfstudio-project/nerfstudio
computer-vision
2,822
How is scale regularization (use_scale_regularization) used in splatfacto?
How is scale regularization (use_scale_regularization) used in splatfacto? https://github.com/nerfstudio-project/nerfstudio/blob/3bade3a9dcee19bf0bf28f40c3615891b9b9b444/nerfstudio/models/splatfacto.py#L853
closed
2024-01-25T13:37:23Z
2024-01-25T19:12:36Z
https://github.com/nerfstudio-project/nerfstudio/issues/2822
[]
pknmax
1
coqui-ai/TTS
pytorch
2,785
[Bug] KeyError: 'use_phonemes'
### Describe the bug I tried to train a vocoder model and I tried to utilize it with tts-server but this throws KeyError: 'use_phonemes'. This is the full error that I get: (venv) PS D:\GitHub\Coqui> tts-server --config_path D:\GitHub\Coqui\models\LJSpeech-1.1\config.json --model_path D:\GitHub\Coqui\models\LJSpeech-1.1\best_model.pth D:\GitHub\Coqui\venv\lib\site-packages\torchaudio\compliance\kaldi.py:22: UserWarning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xf . Check the section C-API incompatibility at the Troubleshooting ImportError section at https://numpy.org/devdocs/user/troubleshooting-importerror.html#c-api-incompatibility for indications on how to solve this problem . (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:84.) EPSILON = torch.tensor(torch.finfo(torch.float).eps) Traceback (most recent call last): File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 86, in _run_code exec(code, run_globals) File "D:\GitHub\Coqui\venv\Scripts\tts-server.exe\__main__.py", line 4, in <module> File "D:\GitHub\Coqui\venv\lib\site-packages\TTS\server\server.py", line 104, in <module> synthesizer = Synthesizer( File "D:\GitHub\Coqui\venv\lib\site-packages\TTS\utils\synthesizer.py", line 91, in __init__ self._load_tts(tts_checkpoint, tts_config_path, use_cuda) File "D:\GitHub\Coqui\venv\lib\site-packages\TTS\utils\synthesizer.py", line 182, in _load_tts if self.tts_config["use_phonemes"] and self.tts_config["phonemizer"] is None: File "D:\GitHub\Coqui\venv\lib\site-packages\coqpit\coqpit.py", line 614, in __getitem__ return self.__dict__[arg] KeyError: 'use_phonemes' ### To Reproduce 1. 
Run the command "tts-server --config_path config path --model_path model path" 2. error ### Expected behavior _No response_ ### Logs _No response_ ### Environment ```shell { "CUDA": { "GPU": [], "available": false, "version": null }, "Packages": { "PyTorch_debug": false, "PyTorch_version": "2.0.1+cpu", "TTS": "0.15.6", "numpy": "1.22.0" }, "System": { "OS": "Windows", "architecture": [ "64bit", "WindowsPE" ], "processor": "AMD64 Family 25 Model 33 Stepping 0, AuthenticAMD", "python": "3.10.11", "version": "10.0.22621" } } ``` ### Additional context _No response_
closed
2023-07-19T20:04:17Z
2023-10-26T19:24:24Z
https://github.com/coqui-ai/TTS/issues/2785
[ "bug", "wontfix" ]
MVR3S
4
NullArray/AutoSploit
automation
438
Unhandled Exception (6ee01e0a5)
Autosploit version: `2.2.3` OS information: `Linux-4.19.0-kali1-amd64-x86_64-with-Kali-kali-rolling-kali-rolling` Running context: `autosploit.py` Error meesage: `[Errno 17] File exists: '/root/Downloads/AutoSploit/autosploit_out/2019-02-05_05h10m21s/'` Error traceback: ``` Traceback (most recent call): File "/root/Downloads/AutoSploit/autosploit/main.py", line 123, in main terminal.terminal_main_display(loaded_exploits) File "/root/Downloads/AutoSploit/lib/term/terminal.py", line 319, in terminal_main_display self.exploit_gathered_hosts(loaded_mods) File "/root/Downloads/AutoSploit/lib/term/terminal.py", line 253, in exploit_gathered_hosts exploiter.start_exploit() File "/root/Downloads/AutoSploit/lib/exploitation/exploiter.py", line 102, in start_exploit makedirs(current_host_path) File "/usr/lib/python2.7/os.py", line 157, in makedirs mkdir(name, mode) OSError: [Errno 17] File exists: '/root/Downloads/AutoSploit/autosploit_out/2019-02-05_05h10m21s/' ``` Metasploit launched: `True`
closed
2019-02-05T10:10:25Z
2019-02-19T04:22:44Z
https://github.com/NullArray/AutoSploit/issues/438
[]
AutosploitReporter
0
pydata/xarray
numpy
9,702
inconsistent 1D interp on 1D and ND DataArrays with NaNs
### What happened? May be a duplicate of #5852, and as stated there, interp with nans isn't really supported. But I noticed that in some cases `xr.DataArray.interp` drops valid data if there are NaNs in the array, and that this behavior is different depending on the number of dimensions, even if only a single dimension is specified (so `interp1d` should be used in every case). Not sure if this warrants a different issue, but the NaN handling appears to be different depending on the number of dimensions, even if we're interpolating along a single dimension (so should be essentially broadcasting `interp1d`: Here's a MRE: ```python import xarray as xr import numpy as np a = xr.DataArray( [np.nan, 1, 2], dims=["x"], coords=[range(0, 5, 2)], ) print("a:\n") print(a) b = xr.DataArray( [[np.nan, 1, 2]], dims=["y", "x"], coords=[["first"], range(0, 5, 2)], ) print("\n\nb:\n") print(b) print("\n\na.interp(x=range(5)):\n") print(a.interp(x=range(5))) print("\n\nb.interp(x=range(5)):\n") print(b.interp(x=range(5))) print("\n\nxarray version:") print(xr.__version__) ``` gives ``` a: <xarray.DataArray (x: 3)> Size: 24B array([nan, 1., 2.]) Coordinates: * x (x) int64 24B 0 2 4 b: <xarray.DataArray (y: 1, x: 3)> Size: 24B array([[nan, 1., 2.]]) Coordinates: * y (y) <U5 20B 'first' * x (x) int64 24B 0 2 4 a.interp(x=range(5)): <xarray.DataArray (x: 5)> Size: 40B array([nan, nan, 1. , 1.5, 2. ]) Coordinates: * x (x) int64 40B 0 1 2 3 4 b.interp(x=range(5)): <xarray.DataArray (y: 1, x: 5)> Size: 40B array([[nan, nan, nan, 1.5, 2. ]]) Coordinates: * y (y) <U5 20B 'first' * x (x) int64 40B 0 1 2 3 4 xarray version: 2024.10.0 ``` Note that the second interpolation case, in which the array is 2D, drops a valid value (1, in position x=2), even though it is present in the original array. The first case correctly interpolates between 1 and 2 and does not drop the first value. 
Apologies - I don't have time now to diagnose whether this is an xarray issue with the new 2D interp handling or simply a scipy interp1d broadcasting issue, but it was certainly unexpected! ### What did you expect to happen? _No response_ ### Minimal Complete Verifiable Example _No response_ ### MVCE confirmation - [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray. - [X] Complete example — the example is self-contained, including all data and the text of any traceback. - [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result. - [X] New issue — a search of GitHub Issues suggests this is not a duplicate. - [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies. ### Relevant log output _No response_ ### Anything else we need to know? _No response_ ### Environment <details> INSTALLED VERSIONS ------------------ commit: None python: 3.11.4 | packaged by conda-forge | (main, Jun 10 2023, 18:08:41) [Clang 15.0.7 ] python-bits: 64 OS: Darwin OS-release: 24.0.0 machine: arm64 processor: arm byteorder: little LC_ALL: None LANG: en_US.UTF-8 LOCALE: ('en_US', 'UTF-8') libhdf5: 1.14.0 libnetcdf: 4.9.2 xarray: 2024.10.0 pandas: 2.2.3 numpy: 1.24.3 scipy: 1.14.1 netCDF4: 1.6.4 pydap: None h5netcdf: None h5py: 3.8.0 zarr: 2.18.3 cftime: 1.6.2 nc_time_axis: None iris: None bottleneck: 1.3.7 dask: 2024.8.0 distributed: 2024.8.0 matplotlib: 3.7.1 cartopy: 0.24.0 seaborn: 0.12.2 numbagg: None fsspec: 2023.6.0 cupy: None pint: None sparse: None flox: 9.11 numpy_groupies: 0.11.2 setuptools: 67.7.2 pip: 23.1.2 conda: None pytest: None mypy: None IPython: 8.14.0 sphinx: None </details>
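For reference, the 1D result in the report is plain linear interpolation over the non-NaN points (2, 1.0) and (4, 2.0), which preserves the value 1.0 at x=2; a pure-Python check of those expected values:

```python
def lerp(x, x0, y0, x1, y1):
    # Linear interpolation between (x0, y0) and (x1, y1).
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Dropping the NaN at x=0 leaves the valid points (2, 1.0) and (4, 2.0):
values = [lerp(x, 2, 1.0, 4, 2.0) for x in (2, 3, 4)]
print(values)  # [1.0, 1.5, 2.0]
```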
closed
2024-10-31T12:41:26Z
2024-12-02T15:31:58Z
https://github.com/pydata/xarray/issues/9702
[ "bug", "topic-interpolation" ]
delgadom
6
InstaPy/InstaPy
automation
6,038
InstaPy installation on Digital Ocean
Hi community, I am a big fan of instapy and would like to run it on a server. I tried to follow the instructions: https://github.com/InstaPy/instapy-docs/blob/master/How_Tos/How_To_DO_Ubuntu_on_Digital_Ocean.md I tried installing both browsers and also just Firefox. I followed the instructions step by step. They all ran through without error. I then created a python file with nano and just have the very basic lines needed i.e. importing and initiating a session. The python file works fine on my computer but on the server I get the following error message: ![image](https://user-images.githubusercontent.com/58906672/104904671-88070680-59d5-11eb-81db-f35b0cef1f4e.png) I also further tried #5672 i.e. changing webdriverdownloader.py and also installing webdrivermanager==0.9.0 but I get the same error message. Is there anything else I could try? Many thanks for your help. Tom
closed
2021-01-18T10:40:31Z
2021-01-19T10:27:34Z
https://github.com/InstaPy/InstaPy/issues/6038
[]
tomnewg
1
httpie/cli
python
1,401
Bad filename when using non-ASCII characters
## Checklist - [x] I've searched for similar issues. - [x] I'm using the latest version of HTTPie. --- ## Minimal reproduction code and steps 1. save a dummy image as `天狗.png` 2. issue a multipart/form-data, such as: `https --offline --multipart https://example.org name='John Doe' file_field@/home/john/天狗.png` > /tmp/httpie_result.txt ## Current result The `Content-Disposition` header generated will be something like: `Content-Disposition: form-data; name="file_field"; filename="§©Áãó.png"` … ## Expected result The `Content-Disposition` header should conform to [rfc2231](https://tools.ietf.org/html/rfc2231). So this should rather be: `Content-Disposition: form-data; name="file_field"; filename*=UTF-8''%E5%A4%A9%E7%8B%97.png` You can generate the proper encoded filename in accordance with rfc2231 using the perl command line `perl -MEncode -MURI::Escape::XS -lE 'say URI::Escape::XS::uri_escape( Encode::decode_utf8("天狗.png") )'` … --- ## Debug output Please re-run the command with `--debug`, then copy the entire command & output and paste both below: ```bash $ http --debug <COMPLETE ARGUMENT LIST THAT TRIGGERS THE ERROR> <COMPLETE OUTPUT> ``` ## Additional information, screenshots, or code examples …
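The same `filename*` value the perl one-liner produces can be sketched with the Python standard library (the function name here is illustrative, not part of HTTPie):

```python
from urllib.parse import quote

def rfc2231_filename(filename: str) -> str:
    # Percent-encode the UTF-8 bytes; safe="" escapes everything outside
    # the unreserved set, as RFC 2231 extended values require.
    return "UTF-8''" + quote(filename, safe="")

print(rfc2231_filename("天狗.png"))  # UTF-8''%E5%A4%A9%E7%8B%97.png
```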
open
2022-05-16T03:55:37Z
2024-10-30T10:53:32Z
https://github.com/httpie/cli/issues/1401
[ "bug", "new" ]
jackdeguest
4
nonebot/nonebot2
fastapi
3,209
Plugin: 他们在聊什么 (What Are They Talking About)
### PyPI project name nonebot-plugin-whats-talk-gemini ### Plugin import package name nonebot_plugin_whats_talk_gemini ### Tags [{"label":"群聊总结","color":"#03f744"},{"label":"AI","color":"#feee06"},{"label":"Gemini","color":"#0609fe"}] ### Plugin configuration ```dotenv WT_AI_KEYS=["xxxxxx"] ``` ### Plugin test - [ ] To re-run the plugin test, check the checkbox on the left
closed
2024-12-26T13:11:00Z
2024-12-28T15:38:32Z
https://github.com/nonebot/nonebot2/issues/3209
[ "Plugin", "Publish" ]
hakunomiko
1
CTFd/CTFd
flask
2,006
API documentation is broken
I'm trying to access the API documentation page of CTFd, https://docs.ctfd.io/docs/api/Swagger%20UI, but it's broken and doesn't show Swagger UI. I didn't find any other contacts, so I'm reporting it here.
closed
2021-10-13T17:35:09Z
2021-10-14T08:09:56Z
https://github.com/CTFd/CTFd/issues/2006
[]
MrSuicideParrot
2
SciTools/cartopy
matplotlib
2,315
Cannot import cartopy.feature (missing 'config')
### Description I'm a first time cartopy user. It installed just fine, but when I try to import cartopy.feature I get an import error related to the config file. As far as I can tell, this _should_ be made automatically by the `__init__` file, but this doesn't seem to be the case. #### Code to reproduce ``` import cartopy.feature as ccrs ``` #### Traceback ``` Traceback (most recent call last): File "/Users/elizabeth/mambaforge/envs/IMAU/lib/python3.10/code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 9, in <module> File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/Users/elizabeth/mambaforge/envs/IMAU/lib/python3.10/site-packages/cartopy/feature/__init__.py", line 19, in <module> import cartopy.io.shapereader as shapereader File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/Users/elizabeth/mambaforge/envs/IMAU/lib/python3.10/site-packages/cartopy/io/__init__.py", line 19, in <module> from cartopy import config ImportError: cannot import name 'config' from 'cartopy' (unknown location) ``` <details> ### Operating system Version: Python: 3.10 MacOS: Sonoma 14.2.1 (Apple M1) ### Cartopy version 0.22.0 </details>
closed
2024-01-18T17:47:19Z
2024-01-18T18:04:58Z
https://github.com/SciTools/cartopy/issues/2315
[]
Elizabethcase
1
521xueweihan/HelloGitHub
python
2,671
[Open-source self-recommendation] cmd-wrapped, a shell-history analysis CLI written in Rust
## Recommended project - Project URL: https://github.com/YiNNx/cmd-wrapped - Category: Rust - Project title: A CLI written in Rust that analyzes and summarizes your shell history - Project description: A shell-history analysis CLI written in Rust. It reads your command-line history and generates an analysis summary: - Activity statistics for any year, such as the most active hours of each day and the most frequently used commands. - A GitHub-style annual command distribution chart - Supports Zsh, Bash, Fish and atuin - Highlights: With cmd-wrapped you get a fun data recap that shows your past command-line habits, usage counts and more at a glance - Screenshot: ![090684D4-C54C-45D7-808B-78E771CE95A5](https://github.com/521xueweihan/HelloGitHub/assets/86649490/75eb1beb-5012-42cf-a7a8-bd1a3b6fe09a)
closed
2024-01-08T09:32:50Z
2024-01-26T02:26:46Z
https://github.com/521xueweihan/HelloGitHub/issues/2671
[ "已发布", "Rust 项目" ]
YiNNx
2
Lightning-AI/pytorch-lightning
data-science
20,152
Typing for `_restricted_classmethod` (e.g. for `LightningModule.load_from_checkpoint`) has stopped working for mypy 1.11
### Bug description The current release of mypy 1.11 (1.11.0 & 1.11.1) raises an error for methods decorated with `_restricted_classmethod` (e.g. for `LightningModule.load_from_checkpoint`). From what I have found, there used to be a workaround for tricking type checkers into expected behaviour (from `pytorch_lightning/utilities/model_helpers.py`): ```python # trick static type checkers into thinking it's a @classmethod # https://github.com/microsoft/pyright/issues/5865 _restricted_classmethod = classmethod if TYPE_CHECKING else _restricted_classmethod_impl ``` but it seems to have stopped working for mypy 1.11. I am not sure whether this problem is on lightning's side (maybe the workaround needs some update) or this is a mypy bug. ### What version are you seeing the problem on? v2.2, master ### How to reproduce the bug Create a file with following contents: ```python from pytorch_lightning.demos.boring_classes import BoringModel model = BoringModel.load_from_checkpoint("some.ckpt") ``` Run mypy 1.11 (1.11.0 or 1.11.1) on this file: ``` mypy lit_typing.py ``` ### Error messages and logs ``` lit_typing.py:3: error: "object" not callable [operator] Found 1 error in 1 file (checked 1 source file) ``` ### Environment <details> <summary>Current environment</summary> * CUDA: - GPU: None - available: False - version: None * Lightning: - lightning-utilities: 0.11.5 - pytorch-lightning: 2.3.3 - torch: 2.3.1 - torchmetrics: 1.4.0.post0 - torchvision: 0.18.1 * Packages: - aiohttp: 3.9.5 - aiohttp-retry: 2.8.3 - aiosignal: 1.3.1 - amqp: 5.2.0 - annotated-types: 0.7.0 - antlr4-python3-runtime: 4.9.3 - anyio: 4.4.0 - appdirs: 1.4.4 - appnope: 0.1.4 - argon2-cffi: 23.1.0 - argon2-cffi-bindings: 21.2.0 - arrow: 1.3.0 - astroid: 3.2.3 - asttokens: 2.4.1 - async-lru: 2.0.4 - asyncssh: 2.15.0 - atpublic: 4.1.0 - attrs: 23.2.0 - autopep8: 2.3.1 - babel: 2.15.0 - backcall: 0.2.0 - beautifulsoup4: 4.12.3 - billiard: 4.2.0 - black: 24.4.2 - bleach: 6.1.0 - blinker: 1.8.2 - build: 1.2.1 - 
catppuccin: 2.3.0 - celery: 5.4.0 - certifi: 2024.7.4 - cffi: 1.16.0 - charset-normalizer: 3.3.2 - click: 8.1.7 - click-didyoumean: 0.3.1 - click-plugins: 1.1.1 - click-repl: 0.3.0 - colorama: 0.4.6 - comm: 0.2.2 - configobj: 5.0.8 - contourpy: 1.2.1 - cryptography: 42.0.8 - cycler: 0.12.1 - dash: 2.17.1 - dash-core-components: 2.0.0 - dash-html-components: 2.0.0 - dash-table: 5.0.0 - debugpy: 1.8.2 - decorator: 5.1.1 - defusedxml: 0.7.1 - dictdiffer: 0.9.0 - dill: 0.3.8 - diskcache: 5.6.3 - distlib: 0.3.8 - distro: 1.9.0 - docopt: 0.6.2 - docstring-parser: 0.16 - dpath: 2.2.0 - dulwich: 0.22.1 - dvc: 3.51.2 - dvc-data: 3.15.1 - dvc-http: 2.32.0 - dvc-objects: 5.1.0 - dvc-render: 1.0.2 - dvc-studio-client: 0.21.0 - dvc-task: 0.4.0 - einops: 0.8.0 - entrypoints: 0.4 - executing: 2.0.1 - fastjsonschema: 2.20.0 - filelock: 3.15.4 - flake8: 7.1.0 - flask: 3.0.3 - flatten-dict: 0.4.2 - flufl.lock: 7.1.1 - fonttools: 4.53.1 - fqdn: 1.5.1 - frozenlist: 1.4.1 - fsspec: 2024.6.1 - funcy: 2.0 - gitdb: 4.0.11 - gitpython: 3.1.43 - grandalf: 0.8 - greenlet: 3.0.3 - gto: 1.7.1 - h11: 0.14.0 - h5py: 3.11.0 - httpcore: 1.0.5 - httpx: 0.27.0 - hydra-core: 1.3.2 - idna: 3.7 - imageio: 2.34.2 - importlib-metadata: 8.0.0 - importlib-resources: 6.4.0 - iniconfig: 2.0.0 - ipdb: 0.13.13 - ipykernel: 6.29.5 - ipython: 8.12.3 - ipywidgets: 8.1.3 - isoduration: 20.11.0 - isort: 5.13.2 - iterative-telemetry: 0.0.8 - itsdangerous: 2.2.0 - jedi: 0.19.1 - jinja2: 3.1.4 - joblib: 1.4.2 - json5: 0.9.25 - jsonargparse: 4.32.0 - jsonpointer: 3.0.0 - jsonschema: 4.23.0 - jsonschema-specifications: 2023.12.1 - jupyter-client: 8.6.2 - jupyter-core: 5.7.2 - jupyter-events: 0.10.0 - jupyter-lsp: 2.2.5 - jupyter-server: 2.14.2 - jupyter-server-mathjax: 0.2.6 - jupyter-server-terminals: 0.5.3 - jupyterlab: 4.2.3 - jupyterlab-pygments: 0.3.0 - jupyterlab-server: 2.27.2 - jupyterlab-widgets: 3.0.11 - kiwisolver: 1.4.5 - klepto: 0.2.5 - kombu: 5.3.7 - lazy-loader: 0.4 - lightning-utilities: 0.11.5 - 
markdown-it-py: 3.0.0 - markupsafe: 2.1.5 - matplotlib: 3.9.1 - matplotlib-inline: 0.1.7 - mccabe: 0.7.0 - mdurl: 0.1.2 - mistune: 3.0.2 - mpmath: 1.3.0 - msgpack: 1.0.8 - multidict: 6.0.5 - mypy: 1.11.1 - mypy-extensions: 1.0.0 - nbclient: 0.10.0 - nbconvert: 7.16.4 - nbdime: 4.0.1 - nbformat: 5.10.4 - neovim: 0.3.1 - nest-asyncio: 1.6.0 - networkx: 3.3 - notebook-shim: 0.2.4 - numpy: 2.0.0 - omegaconf: 2.3.0 - orjson: 3.10.6 - overrides: 7.7.0 - packaging: 24.1 - pandas: 2.2.2 - pandocfilters: 1.5.1 - parso: 0.8.4 - pathspec: 0.12.1 - pexpect: 4.9.0 - pickleshare: 0.7.5 - pillow: 10.4.0 - pip: 24.0 - pip-tools: 7.4.1 - pipreqs: 0.5.0 - platformdirs: 3.11.0 - plotly: 5.22.0 - pluggy: 1.5.0 - pox: 0.3.4 - prometheus-client: 0.20.0 - prompt-toolkit: 3.0.47 - psutil: 6.0.0 - ptyprocess: 0.7.0 - pure-eval: 0.2.2 - pycodestyle: 2.12.0 - pycparser: 2.22 - pydantic: 2.8.2 - pydantic-core: 2.20.1 - pydocstyle: 6.3.0 - pydot: 2.0.0 - pyflakes: 3.2.0 - pygit2: 1.15.1 - pygments: 2.18.0 - pygtrie: 2.5.0 - pylint: 3.2.5 - pynvim: 0.5.0 - pyparsing: 3.1.2 - pyproject-hooks: 1.1.0 - pytest: 8.2.2 - pytest-mock: 3.14.0 - python-dateutil: 2.9.0.post0 - python-json-logger: 2.0.7 - pytoolconfig: 1.3.1 - pytorch-lightning: 2.3.3 - pytz: 2024.1 - pyyaml: 6.0.1 - pyzmq: 26.0.3 - referencing: 0.35.1 - rentry: 1.0.1 - requests: 2.32.3 - retrying: 1.3.4 - rfc3339-validator: 0.1.4 - rfc3986-validator: 0.1.1 - rich: 13.7.1 - rope: 1.13.0 - rpds-py: 0.19.0 - ruamel.yaml: 0.18.6 - ruamel.yaml.clib: 0.2.8 - scikit-image: 0.24.0 - scikit-learn: 1.5.1 - scipy: 1.14.0 - scmrepo: 3.3.6 - semver: 3.0.2 - send2trash: 1.8.3 - setuptools: 70.3.0 - shellingham: 1.5.4 - shortuuid: 1.0.13 - shtab: 1.7.1 - six: 1.16.0 - smmap: 5.0.1 - sniffio: 1.3.1 - snowballstemmer: 2.2.0 - soupsieve: 2.5 - sqltrie: 0.11.0 - stack-data: 0.6.3 - sympy: 1.13.0 - tabulate: 0.9.0 - tenacity: 8.5.0 - terminado: 0.18.1 - threadpoolctl: 3.5.0 - tifffile: 2024.7.2 - tinycss2: 1.3.0 - tomlkit: 0.13.0 - torch: 2.3.1 - 
torchmetrics: 1.4.0.post0 - torchvision: 0.18.1 - tornado: 6.4.1 - tqdm: 4.66.4 - traitlets: 5.14.3 - typer: 0.12.3 - types-python-dateutil: 2.9.0.20240316 - typeshed-client: 2.7.0 - typing-extensions: 4.12.2 - tzdata: 2024.1 - uri-template: 1.3.0 - urllib3: 2.2.2 - vine: 5.1.0 - virtualenv: 20.26.3 - voluptuous: 0.15.2 - wcwidth: 0.2.13 - webcolors: 24.6.0 - webencodings: 0.5.1 - websocket-client: 1.8.0 - werkzeug: 3.0.3 - wheel: 0.43.0 - widgetsnbextension: 4.0.11 - yarg: 0.1.9 - yarl: 1.9.4 - zc.lockfile: 3.0.post1 - zipp: 3.19.2 * System: - OS: Darwin - architecture: - 64bit - - processor: arm - python: 3.12.3 - release: 23.5.0 - version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000 </details> ### More info The problem does not occur for previous mypy versions e.g. mypy 1.10.
closed
2024-08-02T09:05:10Z
2024-08-03T13:50:03Z
https://github.com/Lightning-AI/pytorch-lightning/issues/20152
[ "bug", "help wanted", "code quality", "ver: 2.2.x" ]
maciejzj
1
tableau/server-client-python
rest-api
1,364
Retrieving subscriptions failed for frequency = Daily, hour (interval) = 24
tableau_server.subscriptions fails if any subscriptions have schedule frequency="Daily" & interval hours="24" **Versions** - Tableau Cloud: 2024.1.0 - Python version: 3.11.4 - TSC library version: 0.30 **To Reproduce** tableau_server_obj.subscriptions **Results** ValueError: Invalid interval 24.0 not in {0.25, 0.5, 2, 1, 4, 6, 8, 12} **To fix** update interval_item.py, line 139: ``` VALID_INTERVALS = {0.25, 0.5, 1, 2, 4, 6, 8, 12, 24} ```
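The failure and the proposed fix can be checked directly, since Python compares `24.0` equal to the integer `24` (the sets below mirror the report, not the actual tableauserverclient source):

```python
VALID_INTERVALS = {0.25, 0.5, 1, 2, 4, 6, 8, 12}      # set before the fix
FIXED_INTERVALS = {0.25, 0.5, 1, 2, 4, 6, 8, 12, 24}  # set after the fix

interval = 24.0  # hour interval parsed from the Daily subscription schedule
print(interval in VALID_INTERVALS)  # False -> triggers the ValueError
print(interval in FIXED_INTERVALS)  # True  -> 24.0 == 24, so membership passes
```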
closed
2024-03-15T16:38:14Z
2024-05-09T23:06:20Z
https://github.com/tableau/server-client-python/issues/1364
[]
ryanstryker
2
aeon-toolkit/aeon
scikit-learn
2,660
[ENH] Implement U-Shapelets clusterer
### Describe the feature or idea you want to propose It would be nice to have an implementation of the Unsupervised Shapelets (U-Shapelets) https://ieeexplore.ieee.org/document/6413851 https://epubs.siam.org/doi/pdf/10.1137/1.9781611974010.101 https://link.springer.com/article/10.1007/s10618-015-0411-4 ### Describe your proposed solution Implement the U-Shapelets algorithms and possibly scalability improvements. It will be on the contributor to evaluate the accuracy of their implementation to published code and results. ### Describe alternatives you've considered, if relevant _No response_ ### Additional context _No response_
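A bare-bones sketch of the core selection criterion from the linked papers — the "gap" score over sorted subsequence distances — assuming plain Euclidean subsequence distance (the papers use z-normalized distance, omitted here for brevity):

```python
import math
from statistics import mean, stdev

def sdist(shapelet, series):
    # Minimum length-normalized Euclidean distance between the shapelet
    # and any subsequence of the series.
    m = len(shapelet)
    best = math.inf
    for i in range(len(series) - m + 1):
        d = math.sqrt(sum((a - b) ** 2 for a, b in zip(shapelet, series[i:i + m])) / m)
        best = min(best, d)
    return best

def gap_score(shapelet, dataset):
    # Sort distances to all series, try every split into a "near" group A
    # and a "far" group B, and score the gap = (mu_B - sigma_B) - (mu_A + sigma_A).
    dists = sorted(sdist(shapelet, s) for s in dataset)
    best = 0.0
    for k in range(2, len(dists) - 1):
        a, b = dists[:k], dists[k:]
        best = max(best, (mean(b) - stdev(b)) - (mean(a) + stdev(a)))
    return best

# Toy data: two series containing a bump, two (nearly) flat ones.
data = [[0, 0, 1, 2, 1, 0, 0], [0, 1, 2, 1, 0, 0, 0],
        [0, 0, 0, 0, 0, 0, 0], [0, 0.1, 0, 0.1, 0, 0.1, 0]]
bump = [1, 2, 1]
flat = [0, 0, 0]
print(gap_score(bump, data) > gap_score(flat, data))  # True: the bump separates the data
```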
open
2025-03-20T13:14:14Z
2025-03-20T13:59:05Z
https://github.com/aeon-toolkit/aeon/issues/2660
[ "enhancement", "clustering", "implementing algorithms" ]
MatthewMiddlehurst
1
sloria/TextBlob
nlp
270
Python 3.5: no mudule named textblob
When I start this program in Python 3.5, I get a 'no module' error, but in Python 2 it works. Why?
closed
2019-06-12T12:07:26Z
2019-09-16T12:51:55Z
https://github.com/sloria/TextBlob/issues/270
[]
Schokolino1
1
graphql-python/graphene
graphql
637
Docs link to 'Babel Relay Plugin' 404s
Hi! On this page: http://docs.graphene-python.org/projects/django/en/latest/introspection/ ...the `Babel Relay Plugin` link (pointing at https://facebook.github.io/relay/docs/guides-babel-plugin.html) 404s. I had a rummage to try and find the new location but didn't have much luck.
closed
2017-12-29T18:00:13Z
2017-12-29T18:10:12Z
https://github.com/graphql-python/graphene/issues/637
[]
edmorley
1
gunthercox/ChatterBot
machine-learning
1,697
Can ChatterBot handle 10 million database entries?
I want to be able to save as much data as possible, so what is the maximum size that ChatterBot can handle? At what size does it get too slow? Can it handle 10 million entries?
closed
2019-04-06T23:52:58Z
2020-08-22T19:28:29Z
https://github.com/gunthercox/ChatterBot/issues/1697
[ "answered" ]
EdgarAI
2
kensho-technologies/graphql-compiler
graphql
260
NotImplementedError when calling toGremlin in FoldedContextField
Hey guys! First of all, thanks a lot for this awesome work. I was testing the compiler in combination with Gremlin. The following GraphQL is mentioned in your Readme, but causes a NotImplementedError when trying to generate a Gremlin statement out of it: ``` Animal { name @output(out_name: "name") out_Animal_ParentOf @fold { _x_count @filter(op_name: ">=", value: ["$min_children"]) @output(out_name: "number_of_children") name @filter(op_name: "has_substring", value: ["$substr"]) @output(out_name: "child_names") } } ``` Is it a bug or is it just not implemented? Many thanks!
open
2019-05-07T12:36:42Z
2019-05-08T19:47:14Z
https://github.com/kensho-technologies/graphql-compiler/issues/260
[ "enhancement", "help wanted", "good first issue" ]
dirkkolb
5
iterative/dvc
machine-learning
9,757
dvc exp show: External s3 address not properly shown
# Bug Report <!-- ## dvc exp show: External s3 address not properly shown --> ## Description Hello, I extended the example from https://github.com/iterative/dvc/issues/9713. Thank you so much for addressing that so quickly! This is much appreciated! When now using an external s3 address `s3://<BUCKET>/<FILE_NAME>` (e.g., `s3://my_bucket/model.pkl`) as an output location in DVC 3.7, `workspace` and `master` branch in `dvc exp show` use two different names to refer to the s3 location, neither of which seems correct: `master` uses `<REPO_PATH>/<BUCKET>/<FILE_NAME>`, `workspace` uses `<BUCKET>/<FILENAME>`. Both are missing the prefix `s3://` ### Reproduce For reproducing, please specify the respective `<BUCKET>` and `<FILE_NAME>` in the following: ``` git init -q dvc_issue cd dvc_issue dvc init -q cat <<EOT >> .dvc/config [cache] type = symlink EOT cat <<EOT >> dvc.yaml vars: - uri_model: s3://<BUCKET>/<FILE_NAME> stages: train: cmd: python train.py deps: - train.py outs: - \${uri_model}: cache: false evaluate: cmd: python evaluate.py deps: - evaluate.py - \${uri_model} metrics: - metrics.json: cache: false EOT cat <<EOT >> train.py import boto3 def main(): bucket_name = <BUCKET> file_name = <FILE_NAME> data = b"weights: 1, 2, 3" s3 = boto3.resource('s3') object = s3.Object(bucket_name, file_name) object.put(Body=data) print("Finished train.") if __name__ == "__main__": main() EOT cat <<EOT >> evaluate.py import json def main(): metrics_filename = "metrics.json" data = {"auc": 0.29} with open(metrics_filename, 'w') as f: json.dump(data, f) print("Finished evaluate.") if __name__ == "__main__": main() EOT dvc repro -q git add . git commit -q -m "initial" dvc exp show -v ``` ### Expected A single column with the entry `s3://<BUCKET>/<FILENAME>`. 
### Environment information **Output of `dvc doctor`:** ```console $ dvc doctor VC version: 3.7.0 (pip) ------------------------ Platform: Python 3.10.8 on Linux-3.10.0-1127.8.2.el7.x86_64-x86_64-with-glibc2.17 Subprojects: dvc_data = 2.6.0 dvc_objects = 0.23.1 dvc_render = 0.5.3 dvc_task = 0.3.0 scmrepo = 1.0.4 Supports: http (aiohttp = 3.8.5, aiohttp-retry = 2.8.3), https (aiohttp = 3.8.5, aiohttp-retry = 2.8.3), s3 (s3fs = 2023.6.0, boto3 = 1.26.0) Config: Global: /home/kpetersen/.config/dvc System: /etc/xdg/dvc ``` -->
open
2023-07-25T00:01:48Z
2024-10-23T08:06:33Z
https://github.com/iterative/dvc/issues/9757
[ "bug", "p2-medium", "ui", "A: experiments" ]
kpetersen-hf
1
autogluon/autogluon
data-science
4,457
[BUG] refit_every_n_windows doesn't reduce training time for long time series
**Bug Report Checklist** <!-- Please ensure at least one of the following to help the developers troubleshoot the problem: --> - [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install --> - [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred --> - [x] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked --> **Describe the bug** I have a time series dataset with a length of one year and a resolution of one minute (so 365*24*60 datapoints). If I fit a `TimeSeriesPredictor` to forecast a window with `prediction_length=15`, training is very fast. But I want to evaluate it on a full day (keeping the `prediction_length=15`), so I set `num_val_windows=24*4` but also `refit_every_n_windows=None` to keep a similar training runtime (since models will only be trained once and then tested on all validation windows). In the second case however, training takes vastly longer, which I wouldn't expect. Is this a bug? Essentially, the `refit_every_n_windows` option doesn't seem to do anything for this dataset. Interestingly though, I've also tested this for a dataset with only a day of information (in minute resolution, so 24*60 datapoints), here `refit_every_n_windows=None` did seem to reduce training runtime significantly, essentially giving the same training runtime as when `num_val_windows=1`. **Expected behavior** I expect a test with `refit_every_n_windows=None` to take roughly the same time for the same dataset, no matter the size of `num_val_windows`. 
**To Reproduce** ```python import pandas as pd import numpy as np from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor df = pd.DataFrame( { "timestamp": pd.date_range("2023-01-01", "2024-01-01", freq="min", inclusive="left"), "target": np.sin(np.arange(365*24*60)), "item_id": ["item_one"]*365*24*60, } ) train_data = TimeSeriesDataFrame.from_data_frame( df, id_column="item_id", timestamp_column="timestamp" ) # this will train fast since num_val_windows=1 predictor1 = TimeSeriesPredictor( prediction_length=15, path="refit_every_n_windows_test", target="target", eval_metric="MAE", ) predictor1.fit( train_data, presets="fast_training", num_val_windows=1, ) # this is obviously much slower since num_val_windows=24*4 predictor2 = TimeSeriesPredictor( prediction_length=15, path="refit_every_n_windows_test", target="target", eval_metric="MAE", ) predictor2.fit( train_data, presets="fast_training", num_val_windows=24*4, ) # I'd expect this to be as fast as predictor1 (at least in training time), but it's more like predictor 2 (much slower) predictor3 = TimeSeriesPredictor( prediction_length=15, path="refit_every_n_windows_test", target="target", eval_metric="MAE", ) predictor3.fit( train_data, presets="fast_training", num_val_windows=24*4, refit_every_n_windows=None, ) ``` **Screenshots / Logs** ``` OUTPUT FOR predictor1: Beginning AutoGluon training... 
AutoGluon will save models to 'refit_every_n_windows_test' =================== System Info =================== AutoGluon Version: 1.1.1 Python Version: 3.10.13 Operating System: Windows Platform Machine: AMD64 Platform Version: 10.0.22631 CPU Count: 12 GPU Count: 0 Memory Avail: 1.08 GB / 15.69 GB (6.9%) Disk Space Avail: 101.43 GB / 235.67 GB (43.0%) =================================================== Setting presets to: fast_training Fitting with arguments: {'enable_ensemble': True, 'eval_metric': MAE, 'hyperparameters': 'very_light', 'known_covariates_names': [], 'num_val_windows': 1, 'prediction_length': 15, 'quantile_levels': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], 'random_seed': 123, 'refit_every_n_windows': 1, 'refit_full': False, 'skip_model_selection': False, 'target': 'target', 'verbosity': 2} Inferred time series frequency: 'min' Provided train_data has 525600 rows, 1 time series. Median time series length is 525600 (min=525600, max=525600). Provided data contains following columns: target: 'target' AutoGluon will gauge predictive performance using evaluation metric: 'MAE' This metric's sign has been flipped to adhere to being higher_is_better. The metric score can be multiplied by -1 to get the metric value. =================================================== Starting training. Start time is 2024-09-06 11:39:32 Models that will be trained: ['Naive', 'SeasonalNaive', 'RecursiveTabular', 'DirectTabular', 'ETS', 'Theta'] Training timeseries model Naive. -1.0222 = Validation score (-MAE) 0.36 s = Training runtime 0.09 s = Validation (prediction) runtime Training timeseries model SeasonalNaive. -0.7292 = Validation score (-MAE) 0.34 s = Training runtime 0.09 s = Validation (prediction) runtime Training timeseries model RecursiveTabular. -0.0006 = Validation score (-MAE) 36.15 s = Training runtime 2.41 s = Validation (prediction) runtime Training timeseries model DirectTabular. 
-0.0008 = Validation score (-MAE) 28.96 s = Training runtime 0.53 s = Validation (prediction) runtime Training timeseries model ETS. -0.6342 = Validation score (-MAE) 0.31 s = Training runtime 0.11 s = Validation (prediction) runtime Training timeseries model Theta. -0.9614 = Validation score (-MAE) 0.31 s = Training runtime 0.20 s = Validation (prediction) runtime Fitting simple weighted ensemble. Ensemble weights: {'DirectTabular': 0.62, 'RecursiveTabular': 0.38} -0.0005 = Validation score (-MAE) 0.39 s = Training runtime 2.94 s = Validation (prediction) runtime Training complete. Models trained: ['Naive', 'SeasonalNaive', 'RecursiveTabular', 'DirectTabular', 'ETS', 'Theta', 'WeightedEnsemble'] Total runtime: 70.57 s Best model: WeightedEnsemble Best model score: -0.0005 OUTPUT FOR predictor2 (I never managed to finish it, seemed to get stuck on RecursiveTabular): Beginning AutoGluon training... AutoGluon will save models to 'refit_every_n_windows_test' =================== System Info =================== AutoGluon Version: 1.1.1 Python Version: 3.10.13 Operating System: Windows Platform Machine: AMD64 Platform Version: 10.0.22631 CPU Count: 12 GPU Count: 0 Memory Avail: 2.95 GB / 15.69 GB (18.8%) Disk Space Avail: 100.57 GB / 235.67 GB (42.7%) =================================================== Setting presets to: fast_training Fitting with arguments: {'enable_ensemble': True, 'eval_metric': MAE, 'hyperparameters': 'very_light', 'known_covariates_names': [], 'num_val_windows': 96, 'prediction_length': 15, 'quantile_levels': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], 'random_seed': 123, 'refit_every_n_windows': 1, 'refit_full': False, 'skip_model_selection': False, 'target': 'target', 'verbosity': 2} Inferred time series frequency: 'min' Provided train_data has 525600 rows, 1 time series. Median time series length is 525600 (min=525600, max=525600). 
Provided data contains following columns: target: 'target' AutoGluon will gauge predictive performance using evaluation metric: 'MAE' This metric's sign has been flipped to adhere to being higher_is_better. The metric score can be multiplied by -1 to get the metric value. =================================================== Starting training. Start time is 2024-09-06 14:35:41 Models that will be trained: ['Naive', 'SeasonalNaive', 'RecursiveTabular', 'DirectTabular', 'ETS', 'Theta'] Training timeseries model Naive. -0.8330 = Validation score (-MAE) 52.87 s = Training runtime 0.08 s = Validation (prediction) runtime Training timeseries model SeasonalNaive. -0.6931 = Validation score (-MAE) 38.15 s = Training runtime 0.08 s = Validation (prediction) runtime Training timeseries model RecursiveTabular. OUTPUT FOR predictor3: Beginning AutoGluon training... AutoGluon will save models to 'refit_every_n_windows_test' =================== System Info =================== AutoGluon Version: 1.1.1 Python Version: 3.10.13 Operating System: Windows Platform Machine: AMD64 Platform Version: 10.0.22631 CPU Count: 12 GPU Count: 0 Memory Avail: 2.34 GB / 15.69 GB (14.9%) Disk Space Avail: 100.90 GB / 235.67 GB (42.8%) =================================================== Setting presets to: fast_training Fitting with arguments: {'enable_ensemble': True, 'eval_metric': MAE, 'hyperparameters': 'very_light', 'known_covariates_names': [], 'num_val_windows': 96, 'prediction_length': 15, 'quantile_levels': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], 'random_seed': 123, 'refit_full': False, 'skip_model_selection': False, 'target': 'target', 'verbosity': 2} Inferred time series frequency: 'min' Provided train_data has 525600 rows, 1 time series. Median time series length is 525600 (min=525600, max=525600). 
Provided data contains following columns: target: 'target' AutoGluon will gauge predictive performance using evaluation metric: 'MAE' This metric's sign has been flipped to adhere to being higher_is_better. The metric score can be multiplied by -1 to get the metric value. =================================================== Starting training. Start time is 2024-09-06 12:14:39 Models that will be trained: ['Naive', 'SeasonalNaive', 'RecursiveTabular', 'DirectTabular', 'ETS', 'Theta'] Training timeseries model Naive. -0.8330 = Validation score (-MAE) 38.34 s = Training runtime 0.19 s = Validation (prediction) runtime Training timeseries model SeasonalNaive. -0.6931 = Validation score (-MAE) 32.74 s = Training runtime 0.10 s = Validation (prediction) runtime Training timeseries model RecursiveTabular. -0.0003 = Validation score (-MAE) 292.69 s = Training runtime 2.82 s = Validation (prediction) runtime Training timeseries model DirectTabular. -0.0007 = Validation score (-MAE) 75.64 s = Training runtime 0.43 s = Validation (prediction) runtime Training timeseries model ETS. -0.6369 = Validation score (-MAE) 139.11 s = Training runtime 0.12 s = Validation (prediction) runtime Training timeseries model Theta. -0.8383 = Validation score (-MAE) 87.63 s = Training runtime 0.21 s = Validation (prediction) runtime Fitting simple weighted ensemble. Ensemble weights: {'DirectTabular': 0.08, 'RecursiveTabular': 0.92} -0.0003 = Validation score (-MAE) 47.86 s = Training runtime 3.25 s = Validation (prediction) runtime Training complete. 
Models trained: ['Naive', 'SeasonalNaive', 'RecursiveTabular', 'DirectTabular', 'ETS', 'Theta', 'WeightedEnsemble']
Total runtime: 738.78 s
Best model: WeightedEnsemble
Best model score: -0.0003
```

**Installed Versions**

```python
INSTALLED VERSIONS
------------------
date                   : 2024-09-06
time                   : 15:32:03.974121
python                 : 3.10.13.final.0
OS                     : Windows
OS-release             : 10
Version                : 10.0.22631
machine                : AMD64
processor              : Intel64 Family 6 Model 186 Stepping 3, GenuineIntel
num_cores              : 12
cpu_ram_mb             : 16068.703125
cuda version           : None
num_gpus               : 0
gpu_ram_mb             : []
avail_disk_size_mb     : None

accelerate             : 0.21.0
autogluon              : 1.1.1
autogluon.common       : 1.1.1
autogluon.core         : 1.1.1
autogluon.features     : 1.1.1
autogluon.multimodal   : 1.1.1
autogluon.tabular      : 1.1.1
autogluon.timeseries   : 1.1.1
boto3                  : 1.34.27
catboost               : 1.2.5
defusedxml             : 0.7.1
evaluate               : 0.4.2
fastai                 : 2.7.17
gluonts                : 0.15.1
hyperopt               : 0.2.7
imodels                : None
jinja2                 : 3.1.3
joblib                 : 1.3.2
jsonschema             : 4.21.1
lightgbm               : 4.3.0
lightning              : 2.2.1
matplotlib             : 3.8.2
mlforecast             : 0.10.0
networkx               : 3.2.1
nlpaug                 : 1.1.11
nltk                   : 3.9.1
nptyping               : 2.4.1
numpy                  : 1.26.3
nvidia-ml-py3          : 7.352.0
omegaconf              : 2.2.3
onnxruntime-gpu        : None
openmim                : 0.3.9
optimum                : 1.18.1
optimum-intel          : None
orjson                 : 3.10.7
pandas                 : 2.2.0
pdf2image              : 1.17.0
Pillow                 : 10.2.0
psutil                 : 5.9.8
pytesseract            : 0.3.10
pytorch-lightning      : 2.2.1
pytorch-metric-learning: 2.3.0
ray                    : 2.10.0
requests               : 2.32.3
scikit-image           : 0.20.0
scikit-learn           : 1.4.0
scikit-learn-intelex   : None
scipy                  : 1.12.0
seqeval                : 1.2.2
setuptools             : 68.2.2
skl2onnx               : None
statsforecast          : 1.4.0
tabpfn                 : None
tensorboard            : 2.15.1
text-unidecode         : 1.3
timm                   : 0.9.16
torch                  : 2.3.1
torchmetrics           : 1.2.1
torchvision            : 0.18.1
tqdm                   : 4.66.5
transformers           : 4.39.3
utilsforecast          : 0.0.10
vowpalwabbit           : None
xgboost                : 2.0.3
```
</details>
open
2024-09-06T13:37:42Z
2025-01-06T10:32:15Z
https://github.com/autogluon/autogluon/issues/4457
[ "bug: unconfirmed", "Needs Triage", "module: timeseries" ]
loek-scholt
1
benbusby/whoogle-search
flask
1,136
[unable to start] <errno 13 Permission denied: whoogle.key>
**Describe the bug**
After a server reboot whoogle refuses to start. Docker logs indicate a permissions issue for the file whoogle.key, which to the best of my knowledge has never been present in my deployment.

**To Reproduce**
Steps to reproduce the behavior:
1. Running Ubuntu 22.04 (latest patches as of Monday 8th April 2024)
2. Run Whoogle:latest in Docker using Docker-compose
3. Reboot server.
4. See error

**Deployment Method**
- [ ] Heroku (one-click deploy)
- [X] Docker
- [ ] `run` executable
- [ ] pip/pipx
- [ ] Other: [describe setup]

**Version of Whoogle Search**
- [ ] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc)
- [X] benbusby/whoogle-search latest 2348c4b7a1b0 4 weeks ago 136MB
- [ ] Not sure

**Desktop (please complete the following information):**
- OS: Windows 11
- Browser: Chrome
- Version: 123.0.6312.106 (Official Build) (64-bit)

**Additional context**
Creating whoogle ... done
Running on http://0.0.0.0:5000
Apr 08 12:11:07.001 [notice] Tor 0.4.6.9 running on Linux with Libevent 2.1.12-stable, OpenSSL 1.1.1w, Zlib 1.2.12, Liblzma 5.2.5, Libzstd 1.5.0 and Unknown N/A as libc.
Apr 08 12:11:07.001 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Apr 08 12:11:07.001 [notice] Read configuration file "/etc/tor/torrc".
Apr 08 12:11:07.006 [notice] Opening Socks listener on 127.0.0.1:9050
Apr 08 12:11:07.006 [notice] Opened Socks listener connection (ready) on 127.0.0.1:9050
Apr 08 12:11:07.006 [notice] Opening Control listener on 127.0.0.1:9051
Apr 08 12:11:07.006 [notice] Opened Control listener connection (ready) on 127.0.0.1:9051
Traceback (most recent call last):
  File "<frozen runpy>", line 189, in _run_module_as_main
  File "<frozen runpy>", line 148, in _get_module_details
  File "<frozen runpy>", line 112, in _get_module_details
  File "/whoogle/app/__init__.py", line 107, in <module>
    with open(app_key_path, 'w') as key_file:
         ^^^^^^^^^^^^^^^^^^^^^^^
PermissionError: [Errno 13] Permission denied: '/config/whoogle.key'
closed
2024-04-08T12:11:29Z
2024-04-19T18:51:00Z
https://github.com/benbusby/whoogle-search/issues/1136
[ "bug" ]
laoistom
0
deeppavlov/DeepPavlov
nlp
1,667
Repl: KeyboardInterrupt, EOFError
- When exiting interactive mode by pressing Ctrl-C or Ctrl-D, the script crashes with an error.
- An empty line is counted as context or a question; ideally, the prompt should simply ask for the data again.

**DeepPavlov version** (you can look it up by running `pip show deeppavlov`): 1.4.0

**Python version**: 3.10.13

**Operating system** (ubuntu linux, windows, ...): linux

**Command that led to error**:
```
python -m deeppavlov interact squad_bert -d
```

**Error (including full traceback)**:
```
KeyboardInterrupt
EOFError
```
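A minimal sketch of the requested behavior — an interactive loop that exits cleanly on Ctrl-C/Ctrl-D and re-prompts on empty input. Names like `safe_interact` are hypothetical illustrations, not DeepPavlov's actual API:

```python
def safe_interact(prompt, handle):
    """REPL loop that exits cleanly on Ctrl-C / Ctrl-D and re-prompts on empty input."""
    while True:
        try:
            line = prompt()
        except (KeyboardInterrupt, EOFError):
            # Ctrl-C or Ctrl-D: leave the loop instead of crashing with a traceback
            return
        if not line.strip():
            # blank line: ask again instead of counting it as context or a question
            continue
        handle(line)


def make_prompt(lines):
    """Simulated stdin: yields the given lines, then raises EOFError (like Ctrl-D)."""
    it = iter(lines)

    def prompt():
        try:
            return next(it)
        except StopIteration:
            raise EOFError

    return prompt


seen = []
safe_interact(make_prompt(["what is NLP?", "", "some context"]), seen.append)
# the blank line is skipped and EOF ends the loop without an exception
```

In a real CLI, `prompt` would simply be `input` with the model's prompt string.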
open
2023-10-26T14:20:00Z
2023-10-26T14:20:00Z
https://github.com/deeppavlov/DeepPavlov/issues/1667
[ "bug", "enhancement" ]
optk2k
0
marimo-team/marimo
data-science
3,531
Feature request: Support collapsible sections in app mode
### Description

Hi there, first things first: thank you _so much_ for conceiving marimo. It is really a pleasure to work with, full of so many details expanding upon but re-thinking Jupyter and other notebook ideas and technologies. @WalBeh and I love it.

### Status quo

We are very pleased with the "collapsible section" feature, as described and implemented over here:

- https://github.com/marimo-team/marimo/issues/1085
- https://github.com/marimo-team/marimo/discussions/1682
- https://github.com/marimo-team/marimo/pull/1826

We are currently exploring building a little chrome for a small operational support utility, and are giving it a shot with marimo:

- https://github.com/crate/cratedb-toolkit/pull/350

In this scenario, the notebook is getting longer and longer, which makes it uncomfortable to work with in edit mode, but navigating it efficiently in view/app mode is also difficult.

With kind regards,
Andreas.

### Suggested solution

Since marimo already includes the fundamental implementation, we would dearly like to see the feature become available in app mode as well. We would like the notebook to be able to start in fully collapsed mode, so the user can effectively use the section headers as a primary navigation element to drill down into different parts without needing to read sequentially.

While the grid view is also nice for end-user/app-mode presentations, we think elaborating this specific feature (and possibly a few friends) in traditional notebook/cell mode would yield excellent usability improvements, specifically when approaching marimo from an application developer's perspective.
---

NB: _We found that @vangberg already reported the same request 1:1 over here:_

- https://github.com/marimo-team/marimo/discussions/2660

_but we wanted to emphasize it by handing in a dedicated feature request, because we think it would be such an important feature, at least for us, and would like to give more people the chance to upvote it more prominently. If you think it is not applicable, feel free to close, and we will add our voice to GH-2660 instead._
open
2025-01-21T23:10:28Z
2025-01-24T02:06:00Z
https://github.com/marimo-team/marimo/issues/3531
[ "enhancement" ]
amotl
7
tensorpack/tensorpack
tensorflow
918
Add examples to tensorpack import
I vote for adding the examples to the tensorpack package (at least add an `__init__.py`). I end up copying or symlinking these files. But something along the lines of

```python
from tensorpack.examples.GAN import GanTrainer
from tensorpack.examples.OpticalFlow.FlowNet import Model
```

would be super useful. I know these examples might change, but the same is true for symlinks, and ending up with several copies of the same file is not a good solution either.
closed
2018-10-04T13:13:16Z
2019-06-26T00:03:50Z
https://github.com/tensorpack/tensorpack/issues/918
[ "enhancement" ]
PatWie
6
graphql-python/graphene
graphql
1,496
@include and @skip directives lead to irrelevant errors
I am unsure whether to put this under bugs or feature requests, because I would consider this to be both.

The directives `@include` and `@skip` are awesome concepts, but they miss the mark when a certain field is not queryable. That is, if the boolean variable on which the `@include` directive is conditioned evaluates to false, then it would be reasonable to say that it is not relevant whether the field is queryable or not, because the field will simply be omitted in the result anyway.

Why is this important? Well, consider we have two APIs that are very similar and share a big part of their graphql schema definition. I work with django-graphene combined with relay, so I'll present my example use case with that context.

Suppose both APIs contain design logic for a fruit store. Both stores have a model they call `Banana`. One store also has a model called `BananaBox`, because the store has some kind of shipping system, whereas the other store does not define `BananaBox` because it does not ship bananas in boxes anywhere. The store that has `BananaBox` simply added a ForeignKey from `Banana` to `BananaBox` in order to keep track of which box a banana is in. Both APIs define appropriate django-graphene Nodes for each model, and both APIs define a query endpoint for bananas.

Now, if we want to perform a query somewhat like the following:

```graphql
query bananas {
  banana {
    id
    banana_box {
      id
      size
    }
  }
}
```

That query will fail when executed on the store without the `BananaBox` model. However, one elegant way of solving this problem would be to use the following query:

```graphql
query bananas($fetch_box: Boolean!) {
  banana {
    id
    banana_box @include(if: $fetch_box) {
      id
      size
    }
  }
}
```

And then specifying `fetch_box: true` or `fetch_box: false` in the store with and without box respectively.
However, this will result in the following error for the store without the BananaBox: `Cannot query field "banana_box" on type "BananaNode".`

I believe this error is irrelevant to answering the query, because having `fetch_box: false` clearly indicates to the server that we are not interested in the `banana_box` field. The [graphql documentation about directives](https://graphql.org/learn/queries/#directives) mentions that the `@include` and `@skip` directives determine what fields are included in the result, and I would argue that excluding a field means that composing the result can be completed without knowledge of whether or not the field is queryable at all.

If this BananaNode has a lot of queryable attributes, and we are interested in all of them, defining the `bananas` query twice just because the two APIs differ by one field is quite silly and repetitive, especially when working with more than two similar APIs. If this use case seems far-fetched, please understand that it is really relevant to the application I am developing.
closed
2023-02-22T13:10:26Z
2023-02-27T16:29:52Z
https://github.com/graphql-python/graphene/issues/1496
[ "question" ]
DrumsnChocolate
2
nteract/testbook
pytest
97
tb contains method
> Also, it would be nice if the following check is supported:
> `'happy_fraction' in tb`

Implement a `__contains__` method which returns whether the variable/function is present in the notebook or not (in memory).

_Originally posted by @rohitsanj in https://github.com/nteract/testbook/issues/96#issuecomment-819540146_
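A minimal sketch of what such a `__contains__` could look like. In real testbook the membership check would have to be executed inside the notebook kernel; here a plain dict stands in for the kernel namespace, so class and method names are illustrative only:

```python
class NotebookStub:
    """Sketch of a testbook-style client supporting `name in tb`.

    The real implementation would run the membership check inside the
    notebook kernel; a dict stands in for the kernel namespace here.
    """

    def __init__(self, namespace):
        self._namespace = namespace

    def _name_exists_in_kernel(self, name):
        # hypothetical kernel round-trip, e.g. evaluating `name in globals()`
        # in the running kernel and reading back the boolean result
        return name in self._namespace

    def __contains__(self, name):
        return self._name_exists_in_kernel(name)


tb = NotebookStub({"happy_fraction": 0.42})
```

With this in place, `'happy_fraction' in tb` evaluates via the kernel-side check, matching the syntax requested above.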
open
2021-04-14T13:59:33Z
2021-05-20T09:34:19Z
https://github.com/nteract/testbook/issues/97
[ "enhancement", "good first issue", "sprint-friendly" ]
rohitsanj
0
allenai/allennlp
pytorch
5,064
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling cublasSgemm
## Checklist

- [x] I have verified that the issue exists against the `master` branch of AllenNLP.
- [x] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/master/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [x] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [x] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/master/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/master) to find out if the bug was already fixed in the master branch.
- [x] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [x] I have included in the "Related issues or possible duplicates" section below all related issues and possible duplicate issues (If there are none, check this box anyway).
- [x] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [x] I have included in the "Environment" section below the output of `pip freeze`.
- [x] I have included in the "Steps to reproduce" section below a minimally reproducible example.

## Description

When I train RoBERTa (or BERT, but let's just stick with RoBERTa in this issue in the interest of simplicity) on MNLI, I get an odd CUDA error.
```
  File "/opt/conda/lib/python3.7/site-packages/allennlp/models/basic_classifier.py", line 116, in forward
    embedded_text = self._text_field_embedder(tokens)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/allennlp/modules/text_field_embedders/basic_text_field_embedder.py", line 103, in forward
    token_vectors = embedder(**tensors, **forward_params_values)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/allennlp/modules/token_embedders/pretrained_transformer_embedder.py", line 201, in forward
    transformer_output = self.transformer_model(**parameters)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py", line 822, in forward
    return_dict=return_dict,
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py", line 515, in forward
    output_attentions,
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py", line 436, in forward
    self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output
  File "/opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1819, in apply_chunking_to_forward
    return forward_fn(*input_tensors)
  File "/opt/conda/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py", line 447, in feed_forward_chunk
    intermediate_output = self.intermediate(attention_output)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py", line 348, in forward
    hidden_states = self.dense(hidden_states)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/linear.py", line 94, in forward
    return F.linear(input, self.weight, self.bias)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py", line 1753, in linear
    return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`
```

## Environment

I made a docker image that reproduces this issue at: https://hub.docker.com/r/nfliu/torch1.8.0-sgemm-execution-debugging . The associated dockerfile is https://gist.github.com/nelson-liu/f80d76f5557d48f2a52b2082b1bf86da . In short, it is based off of the NVIDIA cuda 11.1 container, and installs allennlp and allennlp-models off the most recent commits, and also pytorch 1.8.0+cu111. The python is python 3.7.

Here's the output of nvidia-smi (for things like driver version, etc.):

```
$ nvidia-smi
Fri Mar 19 13:26:08 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.56       Driver Version: 460.56       CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  TITAN Xp            On   | 00000000:5E:00.0 Off |                  N/A |
| 23%   26C    P8     9W / 250W |      1MiB / 12196MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```

## Steps to reproduce

1. Go to a machine with a titanxp and a driver that supports cuda 11.1
2. `nvidia-docker run --rm -it nfliu/torch1.8.0-sgemm-execution-debugging`
3. `allennlp train https://gist.githubusercontent.com/nelson-liu/2164bb51097c5a8f9f9e8d7784f8473e/raw/ce93da75558489177556355c8d54ca4949417b8b/roberta_base_mnli.jsonnet -s output`
4. You should see the error above within the first epoch. If not, it'd be great to know that you can't reproduce the issue.

The config is at https://gist.github.com/nelson-liu/2164bb51097c5a8f9f9e8d7784f8473e , it's exactly the same as the RoBERTa MNLI config except I'm using RoBERTa base and a batch size of 8, since the titanxp has a bit less memory.
closed
2021-03-19T20:25:55Z
2021-06-03T18:29:55Z
https://github.com/allenai/allennlp/issues/5064
[ "bug" ]
nelson-liu
17
unionai-oss/pandera
pandas
1,071
`validate` is slow when coercing several hundred columns.
**Describe the bug**
Validating against a `SchemaModel` with several hundred columns and `coerce` enabled takes a lot of time, even if the dataframe is already valid. It doesn't occur when there is no `coerce`.

- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandera.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandera.

#### Code Sample, a copy-pastable example

```python3
import pandas as pd
import pandera as pa
import numpy as np
from pandera.typing import Series


class TestCoerce(pa.SchemaModel):
    a: Series[float] = pa.Field(alias="a\d+", regex=True, coerce=True)


class TestNoCoerce(pa.SchemaModel):
    a: Series[float] = pa.Field(alias="a\d+", regex=True, coerce=False)


def gen_df(
    value: float = 1.618,
    col_number: int = 40,
    row_number: int = int(1e6),
):
    return pd.DataFrame(
        {f"a{i}": np.full(row_number, value) for i in range(col_number)}
    )


df = gen_df()
TestCoerce.validate(df)
```

In this gist you will find a script that compares execution time with and without coerce: https://gist.github.com/koalp/0e70303c014712a6f7f790b5743482a3

#### Expected behavior

That the coercion doesn't take so much time when the dtype is already correct. It would be even better if it were not slow when all the columns must be converted.

#### Desktop (please complete the following information):

- OS: linux
- Python 3.9, 3.10
- Pandera 0.13.4

#### Additional context

After running benchmarks, I found out that the `__setattr__` function¹ from pandas (replacing a column) takes a lot of time to run. (python 3.9)

If [I modify pandera to only `setattr` when the result from `try_coercion` differs from the previous column](https://github.com/unionai-oss/pandera/commit/88ecc0c585fa3f568308121e97bd0c8cf5ae59eb), it solves my issue, as I currently have at most one column that needs to be changed (wrong dtype). However, it isn't a generic solution, as it doesn't help when a lot of columns have a wrong dtype.
On discord, a modification was suggested:

> I think an alternative and potentially faster solution would be to check if the dtype of `obj[matched_colname]` is the same as `col_schema.dtype`. If so, then coercion isn't necessary. If not, then apply coercion and reassign the column.
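A sketch of that suggested dtype check using a hypothetical helper (not pandera's actual internals), which skips the costly column reassignment when the dtype already matches:

```python
import numpy as np
import pandas as pd


def coerce_column_if_needed(df, colname, target_dtype):
    """Reassign a column only when its dtype actually differs.

    Skipping the reassignment avoids pandas' expensive
    column-replacement path when the data already has the right dtype.
    """
    if df[colname].dtype == np.dtype(target_dtype):
        return df  # already the right dtype: nothing to do
    df[colname] = df[colname].astype(target_dtype)
    return df


df = pd.DataFrame({"a0": np.full(10, 1.618), "a1": np.arange(10)})
coerce_column_if_needed(df, "a0", "float64")  # no-op: already float64
coerce_column_if_needed(df, "a1", "float64")  # actually coerced from int
```

This addresses the already-valid case; as noted above, it would not speed up dataframes where many columns genuinely need conversion.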
open
2023-01-20T13:05:07Z
2025-02-14T22:11:55Z
https://github.com/unionai-oss/pandera/issues/1071
[ "bug" ]
koalp
4
dfki-ric/pytransform3d
matplotlib
178
Photogrammetry Example?
In the Readme, there is an example of a tree trunk where the computed position of the camera is displayed together with the reconstructed 3D mesh. Is there an example that does this at the current moment?
closed
2022-01-12T23:49:44Z
2022-01-13T08:27:58Z
https://github.com/dfki-ric/pytransform3d/issues/178
[]
stanleyshly
0
jina-ai/clip-as-service
pytorch
608
word embedding for emojis
Hi, I'm using `bc.encode()` to generate word vectors, but the text contains some emojis. Does BertClient encode these emojis properly, or should I remove the emojis before encoding? If they can be encoded properly, can you briefly explain how? Thank you!
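If removal turns out to be necessary, here is one rough pre-processing sketch (an assumption about the workflow, not BertClient's behavior). It strips characters outside the Basic Multilingual Plane, which covers most, though not all, emoji — some, like ☺ (U+263A), live inside the BMP and would survive this filter:

```python
import re

# Matches any character outside the Basic Multilingual Plane
# (code points U+10000 and above), where most emoji are defined.
_NON_BMP = re.compile(r"[\U00010000-\U0010FFFF]")


def strip_emoji(text: str) -> str:
    """Remove non-BMP characters (most emoji) before encoding."""
    return _NON_BMP.sub("", text)


strip_emoji("great product \U0001F600!")  # the grinning-face emoji is dropped
```

Whether this improves the resulting vectors depends on how the underlying BERT tokenizer handles emoji (often as `[UNK]` tokens), so it is worth comparing embeddings with and without the filter.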
open
2020-12-03T13:29:25Z
2020-12-03T13:29:25Z
https://github.com/jina-ai/clip-as-service/issues/608
[]
jianGuoQiang
0
BeanieODM/beanie
asyncio
572
[BUG] Fetch_Links & Inheritance don't work if Link not on Parent Class
Hi,

To avoid a circular import error with Links, I put my "Base_House" in a separate module from houses.py and house_parts.py. [In reality my modules are much larger.]

This is now causing an issue whereby, when I use .find() on Base_House (which really is just an abstract class in a sense — it just has the features common to its children classes), fetch_links=True doesn't work. My guess is that this is because the Link is not defined on Base_House. If I search on the children classes on their own, everything works fine.

```
from beanie import Document, BackLink, Link
from pydantic import Field

# base.py  <--- Separate module to houses.py to avoid circular import error with house_parts.py
class Base_House(Document):
    name: str

    class Settings:
        is_root = True

# houses.py
class Apartment(Base_House):
    name: str
    door: Link["Door"]

class House(Base_House):
    name: str
    door: Link["Door"]

# house_parts.py
class Door(Document):
    height: int = 2
    width: int = 1
    house: BackLink[Base_House] = Field(original_field="door")

# people
class Person(Document):
    name: str
    house: Link[Base_House]

doc = await Person.find_one(with_children=True, fetch_links=True)
doc.door  # -> Link  <-- this won't fetch the link, because - I think - no Link on Base_House
```
closed
2023-05-24T13:29:56Z
2023-07-10T02:10:34Z
https://github.com/BeanieODM/beanie/issues/572
[ "Stale" ]
mg3146
3
fastapi/sqlmodel
fastapi
337
select specific columns returning Row objects instead of ORM class, load_only() doesn't work
### First Check

- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorials in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).

### Commit to Help

- [X] I commit to help with one of those options 👆

### Example Code

```python
# 1)
from sqlalchemy.orm import load_only
from sqlmodel import Field, Session, SQLModel, create_engine, select
from chalicelib.models import Tx

engine = create_engine(url, echo=True)

with Session(engine) as session:
    results = session.exec(
        select(Tx.Id, Tx.party, Tx.time).order_by(Tx.time.desc())
    ).all()

myResults = []
for result in results:
    myResults.append(result.toDict())


# 2)
from sqlalchemy.orm import load_only
from sqlmodel import Field, Session, SQLModel, create_engine, select
from chalicelib.models import Tx

engine = create_engine(url, echo=True)

with Session(engine) as session:
    results = session.exec(
        select(Tx).options(load_only("id", "party", "time")).order_by(Tx.time.desc())
    ).all()

myResults = []
for result in results:
    myResults.append(result.toDict())
```

### Description

Basically I'm following the two methods mentioned here: https://github.com/tiangolo/sqlmodel/issues/232

1) Selecting columns inside `select()` returns a Row class object instead of the desired ORM class Tx like before. I expected `Tx(id=2, party='new', time='0000:00:00')` but instead got a Row object that I am now having trouble converting to a dict with `.toDict()`, which normally converts that Tx object into a dict.
2) Using sqlalchemy's `load_only` filter gets ignored. I expected `Tx(id=2, party='new', time='0000:00:00')` but instead got all the associated columns.

I want to know if I'm doing this the "right" way, seeing as there is no documentation and I could only find this information after coming across #232 via a Google search.

### Operating System

Linux

### Operating System Details

_No response_

### SQLModel Version

0.0.6

### Python Version

3.8

### Additional Context

_No response_
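On the Row-to-dict part: SQLAlchemy 1.4-style Row objects behave like named tuples, so a sketch with `collections.namedtuple` as a stand-in illustrates the conversion (the `TxRow` type here is purely illustrative; real Rows expose the same `_asdict()` method, plus a `._mapping` view usable as `dict(row._mapping)`):

```python
from collections import namedtuple

# Stand-in for a SQLAlchemy Row returned by select(Tx.id, Tx.party, Tx.time)
TxRow = namedtuple("TxRow", ["id", "party", "time"])

row = TxRow(2, "new", "0000:00:00")
as_dict = row._asdict()  # named-tuple-style conversion, no .toDict() needed
```

So instead of calling a model-level `.toDict()`, the list comprehension `[dict(r._mapping) for r in results]` (or `[r._asdict() for r in results]`) may work directly on the rows, under the assumption that the installed SQLAlchemy is 1.4 or newer.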
open
2022-05-11T20:40:49Z
2024-08-07T23:42:11Z
https://github.com/fastapi/sqlmodel/issues/337
[ "question" ]
johnatr
2
scikit-optimize/scikit-optimize
scikit-learn
489
random_state not working?
Greetings! If you run the following code:

```python
import numpy as np
from skopt import forest_minimize

def f(x):
    # print(x)
    return np.sin(5 * x[0]) * (1 - np.tanh(x[0] ** 2)) * np.random.randn() * 0.1

res1 = forest_minimize(f, [(-2.0, 2.0)], random_state=np.random.RandomState(1))
res2 = forest_minimize(f, [(-2.0, 2.0)], random_state=np.random.RandomState(1))
print(np.concatenate((res1.x_iters, res2.x_iters), axis=1))
```

you will notice that both columns are different. But I have set the same random state.
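One possible contributing factor (an assumption, not a confirmed diagnosis of scikit-optimize): the objective itself calls `np.random.randn()`, which draws from NumPy's *global* RNG that `random_state` does not control, so the observed function values differ between the two runs and the optimization trajectories can diverge. The effect can be seen without scikit-optimize at all:

```python
import numpy as np


def noisy(x):
    # draws from the *global* NumPy RNG, which no optimizer
    # random_state argument controls
    return np.sin(5 * x) * np.random.randn() * 0.1


np.random.seed(1)
run_a = [noisy(0.5) for _ in range(3)]

np.random.seed(1)
run_b = [noisy(0.5) for _ in range(3)]

# the two runs match only because the global RNG was re-seeded in between
```

Under this reading, reproducibility would require seeding the global RNG (or making the objective deterministic) in addition to passing `random_state`.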
closed
2017-08-29T11:04:02Z
2017-08-29T11:55:43Z
https://github.com/scikit-optimize/scikit-optimize/issues/489
[]
echo66
1
scrapy/scrapy
web-scraping
5,842
Values sent from api do not work in custom_settings
### Description

I send some `custom_settings` values through the API. While these values are successfully received in kwargs, they are not applied in the crawler settings. I wrote two different versions of the code for the same thing, but neither succeeded; maybe Scrapy has a bug in this part, or I made a mistake.

### First code:
<pre>
from scrapy.spiders import CrawlSpider, Rule
from scrapy.loader import ItemLoader
from kavoush.lxmlhtml import LxmlLinkExtractor as LinkExtractor
from kavoush.items import PageLevelItem

class PageSpider(CrawlSpider):
    name = 'github'

    def __init__(self, *args, **kwargs):
        self.start_urls = kwargs.get('host_name')
        self.allowed_domains = [self.start_urls]
        self.custom_settings = {
            'CONCURRENT_REQUESTS': int(kwargs.get('CONCURRENT_REQUESTS')),
            'ROBOTSTXT_OBEY': kwargs.get('ROBOTSTXT_OBEY') == 'True',
        }
        self.logger.info(f'CONCURRENT_REQUESTS? {self.custom_settings}')
        self.rules = (
            Rule(LinkExtractor(allow=(self.start_urls), deny=('\.webp'), unique=True),
                 callback='parse', follow=True),
        )
        super(PageSpider, self).__init__(*args, **kwargs)

    def parse(self, response):
        loader = ItemLoader(item=PageLevelItem(), response=response)
        loader.add_xpath('page_source_html_lang', "//html/@lang")
        yield loader.load_item()

    def errback_domain(self, failure):
        self.logger.error(repr(failure))
</pre>

And as the photo shows, the settings I sent via the API do not appear under "Overridden settings", even though the log confirms the correct values were sent and received successfully.

<img src="https://i.ibb.co/P6pgtx2/pp1.png">

The photo below shows the request that I sent through the API, using scrapyd.

<img src="https://i.ibb.co/L6CSggD/apiin.png">

The second code has the same problem. I actually expected one of the two versions (the first code or the second code) to work, but neither applies the settings.

### Second code:
<pre>
from scrapy.spiders import CrawlSpider, Rule
from scrapy.loader import ItemLoader
from kavoush.lxmlhtml import LxmlLinkExtractor as LinkExtractor
from kavoush.items import PageLevelItem
from scrapy.settings import Settings

class PageSpider(CrawlSpider):
    name = 'github1'

    def __init__(self, *args, **kwargs):
        self.start_urls = kwargs.get('host_name')
        self.allowed_domains = [self.start_urls]
        self.custom_settings = {
            'CONCURRENT_REQUESTS': int(kwargs.get('CONCURRENT_REQUESTS')),
            'ROBOTSTXT_OBEY': kwargs.get('ROBOTSTXT_OBEY') == 'True',
        }
        self.logger.info(f'CONCURRENT_REQUESTS? {self.custom_settings}')
        for key, value in kwargs.items():
            if key in self.custom_settings:
                self.custom_settings[key] = value
        settings = Settings()
        settings.setdict(self.custom_settings, priority='spider')
        self.settings = settings
        self.rules = (
            Rule(LinkExtractor(allow=(self.start_urls), deny=('\.webp'), unique=True),
                 callback='parse', follow=True),
        )
        super(PageSpider, self).__init__(*args, **kwargs)

    def parse(self, response):
        loader = ItemLoader(item=PageLevelItem(), response=response)
        loader.add_xpath('page_source_html_lang', "//html/@lang")
        yield loader.load_item()

    def errback_domain(self, failure):
        self.logger.error(repr(failure))
</pre>

### Versions
<pre>
Scrapy       : 2.7.1
lxml         : 4.9.2.0
libxml2      : 2.9.14
cssselect    : 1.2.0
parsel       : 1.7.0
w3lib        : 2.1.1
Twisted      : 22.10.0
Python       : 3.10.9
pyOpenSSL    : 22.1.0 (OpenSSL 3.0.7 1 Nov 2022)
cryptography : 38.0.1
Platform     : Linux-5.15.0-50-generic-x86_64-with-glibc2.35
</pre>

### Additional context

I already asked my question on Stack Overflow; the answer there was similar to my first code, but it doesn't work in Scrapy.

In short, I want to send some `custom_settings` via the API, but I haven't succeeded. Thank you, friends.
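For context, a likely explanation (a sketch of the mechanism, not verified against this exact Scrapy version): Scrapy reads `custom_settings` from the spider *class* before any instance is created, so values assigned to `self.custom_settings` inside `__init__` arrive too late to be applied. The timing can be illustrated without Scrapy:

```python
class FakeCrawler:
    """Mimics how a framework consults custom_settings on the *class*."""

    @staticmethod
    def read_settings(spider_cls):
        # settings are read from the class, before any instance exists
        return getattr(spider_cls, "custom_settings", None) or {}


class SpiderWithInitSettings:
    # assigning in __init__ is invisible at class-inspection time
    def __init__(self):
        self.custom_settings = {"CONCURRENT_REQUESTS": 8}


class SpiderWithClassSettings:
    # a class attribute is visible before instantiation
    custom_settings = {"CONCURRENT_REQUESTS": 8}


init_style = FakeCrawler.read_settings(SpiderWithInitSettings)    # {}
class_style = FakeCrawler.read_settings(SpiderWithClassSettings)  # applied
```

Under this assumption, per-run values would need another channel than `custom_settings` in `__init__` — for example, scrapyd's own `setting=KEY=VALUE` parameters on the schedule request, which feed the settings before the spider class is used.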
closed
2023-03-03T07:08:56Z
2023-03-03T16:48:09Z
https://github.com/scrapy/scrapy/issues/5842
[]
SardarDelha
3
scrapy/scrapy
python
5,763
Cookie Injection won't work to all allowed url in LinkExtractor of CrawlSpider class
### Description

### Steps to Reproduce

1. Create a CrawlSpider class with a start_requests method

```
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy import Request

class MyCustomCrawler(CrawlSpider):
    name = "test_crawler"
    # some domain and rules here..
    # start urls here ...
    # etc.

    # inject cookie here
    def start_requests(self):
        self.currencyCode = 'USD'
        for url in self.start_urls:
            yield Request(url, cookies={'currencyCode': self.currencyCode}, callback=self.parse_myitem)
```

2. Enable and debug cookies in settings.py

`COOKIES_DEBUG = True`

3. Run the spider with the following command

`scrapy crawl test_crawler`

**Expected behavior:** Cookies should be applied to all URLs listed in start urls and rules.

**Actual behavior:** Cookies are applied only to start urls and the execution stops (i.e. the allowed rules are not crawled because I overrode start_requests).

**Reproduces how often:** 100% reproducible in the CrawlSpider class but not in the Spider class.

### Versions

Scrapy 2.7.1
closed
2022-12-18T16:13:46Z
2022-12-18T16:26:20Z
https://github.com/scrapy/scrapy/issues/5763
[]
arcans1998
0
nolar/kopf
asyncio
432
[archival placeholder]
This is a placeholder for later issues/prs archival. It is needed now to reserve the initial issue numbers before going with actual development (PRs), so that later these placeholders could be populated with actual archived issues & prs with proper intra-repo cross-linking preserved.
closed
2020-08-18T20:06:28Z
2020-08-18T20:06:29Z
https://github.com/nolar/kopf/issues/432
[ "archive" ]
kopf-archiver[bot]
0
noirbizarre/flask-restplus
flask
613
Can the body only be called "payload"?
In the generated document, can the name of the body only be "payload"? I want to change the name to make more sense. ``` @au.route('/authenticate') @au.response(400, 'params error') class Authenticate(Resource): @au.doc('Get a accessToken') @au.doc(body=auth) @au.marshal_with(token) def post(self): ```
open
2019-03-27T07:42:34Z
2020-01-16T02:58:23Z
https://github.com/noirbizarre/flask-restplus/issues/613
[ "Needed: Feedback" ]
ELvisZxc
3
ploomber/ploomber
jupyter
331
Notebook/script templates generated by "ploomber scaffold" should contain a cell with the autoreload magic
If `ploomber scaffold` finds a `pipeline.yaml`, it checks all `tasks[*].sources` and creates files for all tasks whose source is missing. e.g., ```yaml tasks: - source: scripts/some-script.py product: output.ipynb ``` If `scripts/some-script.py` does not exist, it creates one. The created script contains a basic structure for users to get started. It'd be useful to add the following as the first cell: ```python # uncomment the next two lines to enable auto-reloading # for more info: https://ipython.readthedocs.io/en/stable/config/extensions/autoreload.html # %load_ext autoreload # %autoreload 2 ``` This is a pretty handy but relatively unknown feature of IPython that auto-reloads modified code. Documentation: https://ipython.readthedocs.io/en/stable/config/extensions/autoreload.html File to modify: https://github.com/ploomber/ploomber/blob/ac915ea998f4da0177204f2189f8f608f3404fc6/src/ploomber/resources/ploomber_add/task.py This is the line that triggers file creation when executing "ploomber scaffold": https://github.com/ploomber/ploomber/blob/ac915ea998f4da0177204f2189f8f608f3404fc6/src/ploomber/cli/cli.py#L67 Add tests here: https://github.com/ploomber/ploomber/blob/master/tests/cli/test_scaffold.py
closed
2021-09-04T15:19:46Z
2021-10-08T18:49:53Z
https://github.com/ploomber/ploomber/issues/331
[ "good first issue" ]
edublancas
0
google-deepmind/graph_nets
tensorflow
80
Placeholders from data dicts with no edges
Hi, I am currently trying out graph_nets and came across a problem, that I wasn't able to solve. My graphs have node and global properties as well as senders/receivers, but use no edge properties at all. I tried to create a placeholder from such a data dict and provide it as input for my model, however, session.run calls involving this placeholder yield the following error. ``` Traceback (most recent call last): File "graphNN.py", line 312, in <module> graphs_tuple_np = sess.run(graphs_tuple_ph, feed_dict) File "/home/felix/python/tf/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 929, in run run_metadata_ptr) File "/home/felix/python/tf/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1137, in _run self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles) File "/home/felix/python/tf/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 471, in __init__ self._fetch_mapper = _FetchMapper.for_fetch(fetches) File "/home/felix/python/tf/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 261, in for_fetch return _ListFetchMapper(fetch) File "/home/felix/python/tf/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 370, in __init__ self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches] File "/home/felix/python/tf/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 370, in <listcomp> self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches] File "/home/felix/python/tf/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 258, in for_fetch type(fetch))) TypeError: Fetch argument None has invalid type <class 'NoneType'> ``` Minimal code, that reproduces my error: ``` n_node = 3 senders = [0, 1, 2] # Indices of nodes sending the edges receivers = [1, 2, 0] # Indices of nodes receiving the edges data_dict = { "n_node": n_node, "senders": senders, "receivers": receivers, } tf.reset_default_graph() 
graphs_tuple_ph = utils_tf.placeholders_from_data_dicts([data_dict]) with tf.Session() as sess: feed_dict = utils_tf.get_feed_dict(graphs_tuple_ph, utils_np.data_dicts_to_graphs_tuple([data_dict])) graphs_tuple_np = sess.run(graphs_tuple_ph, feed_dict) ``` Why is this not working?
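One workaround sketch (untested against this exact code; `utils_tf.set_zero_edge_features` may be an alternative): when the data dict has no `edges` key, the placeholder's edge field stays `None`, which is what `sess.run` rejects. Giving every edge an explicit zero-width feature vector keeps the GraphsTuple fully populated:

```python
import numpy as np

# Sketch of a workaround: populate "edges" with a (n_edge, 0) float array so
# graphs without edge features still produce a fully-defined GraphsTuple
# placeholder (no field is None).
def with_empty_edge_features(data_dict):
    d = dict(data_dict)
    n_edge = len(d["senders"])
    d["edges"] = np.zeros((n_edge, 0), dtype=np.float32)  # zero-width features
    return d
```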
closed
2019-06-11T14:37:04Z
2019-06-24T09:15:13Z
https://github.com/google-deepmind/graph_nets/issues/80
[]
felbecker
2
plotly/dash-core-components
dash
443
Upgrade to Plotly.js 1.44.1
https://github.com/plotly/plotly.js/releases/tag/v1.44.1
closed
2019-01-23T20:24:42Z
2019-01-25T03:02:34Z
https://github.com/plotly/dash-core-components/issues/443
[ "dash-type-maintenance" ]
Marc-Andre-Rivet
0
mirumee/ariadne
graphql
1188
make_federated_schema is unable to parse repeatable directives
Hello, I'm extending our application to be integrated in GraphQL federation and I encountered a weird bug. We are heavily utilizing directives in our schema, to extend resolving logic of our fields and we used to consume schema with ariadne function `make_executable_schema`. During integration of our service to GraphQL federation we had to switch and load our schema with `make_federated_schema` instead of `make_executable_schema`. A lot of our directives are repeatable, but it seems that `make_federated_schema` is failing to load those. If I remove repeatable attribute from all directives or I use `make_executable_schema` application loads correctly accepting directive definitions. I am showing this on simple GraphQL app exposed by uvicorn server: Schema: ```GraphQL directive @some_directive repeatable on FIELD_DEFINITION | OBJECT type Query { query: String @some_directive } ``` main.py: ```python from ariadne.asgi import GraphQL from ariadne import load_schema_from_path from ariadne.contrib.federation import make_federated_schema from directives import ( SomeDirective, # definition is irrelevant and not stated in this issue ) type_defs = load_schema_from_path("schema_.graphql") schema = make_federated_schema( type_defs, directives={ "some_directive": SomeDirective, }, ) app = GraphQL(schema) ``` Actual error: ``` uvicorn main:app --reload --port 5000 INFO: Will watch for changes in these directories: ['/home/name/PycharmProjects/supergraph/supergraph-integration-poc/pyxis'] INFO: Uvicorn running on http://127.0.0.1:5000 (Press CTRL+C to quit) INFO: Started reloader process [51173] using StatReload Process SpawnProcess-1: Traceback (most recent call last): File "/usr/lib64/python3.12/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/usr/lib64/python3.12/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/home/name/PycharmProjects/supergraph/venv/lib64/python3.12/site-packages/uvicorn/_subprocess.py", 
line 78, in subprocess_started target(sockets=sockets) File "/home/name/PycharmProjects/supergraph/venv/lib64/python3.12/site-packages/uvicorn/server.py", line 62, in run return asyncio.run(self.serve(sockets=sockets)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/python3.12/asyncio/runners.py", line 194, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/usr/lib64/python3.12/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/python3.12/asyncio/base_events.py", line 687, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File "/home/name/PycharmProjects/supergraph/venv/lib64/python3.12/site-packages/uvicorn/server.py", line 69, in serve config.load() File "/home/name/PycharmProjects/supergraph/venv/lib64/python3.12/site-packages/uvicorn/config.py", line 433, in load self.loaded_app = import_from_string(self.app) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/name/PycharmProjects/supergraph/venv/lib64/python3.12/site-packages/uvicorn/importer.py", line 19, in import_from_string module = importlib.import_module(module_str) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/lib64/python3.12/importlib/__init__.py", line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 1387, in _gcd_import File "<frozen importlib._bootstrap>", line 1360, in _find_and_load File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 935, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 995, in exec_module File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed File "/home/name/PycharmProjects/supergraph/supergraph-integration-poc/pyxis/main.py", line 10, in <module> schema = make_federated_schema( ^^^^^^^^^^^^^^^^^^^^^^ File 
"/home/name/PycharmProjects/supergraph/venv/lib64/python3.12/site-packages/ariadne/contrib/federation/schema.py", line 68, in make_federated_schema type_token = "extend type" if has_query_type(sdl) else "type" ^^^^^^^^^^^^^^^^^^^ File "/home/name/PycharmProjects/supergraph/venv/lib64/python3.12/site-packages/ariadne/contrib/federation/schema.py", line 46, in has_query_type ast_document = parse(type_defs) ^^^^^^^^^^^^^^^^ File "/home/name/PycharmProjects/supergraph/venv/lib64/python3.12/site-packages/graphql/language/parser.py", line 113, in parse return parser.parse_document() ^^^^^^^^^^^^^^^^^^^^^^^ File "/home/name/PycharmProjects/supergraph/venv/lib64/python3.12/site-packages/graphql/language/parser.py", line 241, in parse_document definitions=self.many(TokenKind.SOF, self.parse_definition, TokenKind.EOF), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/name/PycharmProjects/supergraph/venv/lib64/python3.12/site-packages/graphql/language/parser.py", line 1149, in many nodes = [parse_fn()] ^^^^^^^^^^ File "/home/name/PycharmProjects/supergraph/venv/lib64/python3.12/site-packages/graphql/language/parser.py", line 288, in parse_definition return getattr(self, f"parse_{method_name}")() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/name/PycharmProjects/supergraph/venv/lib64/python3.12/site-packages/graphql/language/parser.py", line 990, in parse_directive_definition self.expect_token(TokenKind.AT) File "/home/name/PycharmProjects/supergraph/venv/lib64/python3.12/site-packages/graphql/language/parser.py", line 1045, in expect_token raise GraphQLSyntaxError( graphql.error.syntax_error.GraphQLSyntaxError: Syntax Error: Expected '@', found Name 'repeatable'. GraphQL request:1:11 1 | directive repeatable on FIELD_DEFINITION | OBJECT | ^ 2 | ``` Can somebody please take a look on that if I am doing something wrong or this should be fixed in library?
closed
2024-07-08T12:26:19Z
2024-11-19T16:46:17Z
https://github.com/mirumee/ariadne/issues/1188
[]
ezopezo
1
mage-ai/mage-ai
data-science
5264
[BUG] MAGE_BASE_PATH issue with fonts (.ttf)
### Mage version 0.9.72 ### Describe the bug Hello Mage team, after upgrading from 0.9.71 to 0.9.72, MAGE_BASE_PATH no longer works for fonts. For example, with MAGE_BASE_PATH=test, font URLs should look like https://mageurl-example.com/test/fonts/~~~, but the frontend is loading fonts from https://mageurl-example.com/fonts/~~~ Thank you! ### To reproduce _No response_ ### Expected behavior _No response_ ### Screenshots _No response_ ### Operating system _No response_ ### Additional context _No response_
closed
2024-07-13T01:37:06Z
2024-07-15T08:35:52Z
https://github.com/mage-ai/mage-ai/issues/5264
[ "bug" ]
farmboy-dev
0
akfamily/akshare
data-science
5355
option_dce_daily output is incorrect
Calling d1, d2 = akshare.option_dce_daily("玉米期权", trade_date="20241121") produces the d2 output below, which incorrectly includes corn starch option contracts: 合约系列 (contract series) 隐含波动率(%) (implied volatility) 0 c2501 13.18 1 c2503 10.13 2 c2505 14.28 3 c2507 15.57 4 c2509 13.44 5 c2511 10.9 6 cs2501 13.24 7 cs2503 10.99 8 cs2505 13.69 9 cs2507 13.69 10 cs2509 10.52 11 cs2511 10.52 The cause is that the source code filters by the corn symbol "c", which also matches the corn starch symbol "cs", so the corn starch contracts are pulled in as well. Please fix this logic, thanks!
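A minimal sketch of the exact-prefix filter suggested above (an illustration, not the actual akshare source): anchor the commodity symbol and require the next character to be a digit, so "c" no longer matches "cs" contract series.

```python
import re

# Sketch of the suggested fix (not the real akshare code): require the
# symbol to be followed immediately by a digit, so filtering corn ("c")
# no longer picks up corn starch ("cs") contract series.
def filter_series(series_list, symbol):
    pattern = re.compile(rf"^{re.escape(symbol)}\d")
    return [s for s in series_list if pattern.match(s)]
```

For example, `filter_series(["c2501", "cs2501"], "c")` keeps only `"c2501"`.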
closed
2024-11-21T11:42:16Z
2024-11-21T12:21:50Z
https://github.com/akfamily/akshare/issues/5355
[ "bug" ]
akkezhu
1
pytorch/vision
computer-vision
8,967
Failed to install torchvision with Python 3.13t (free-threaded) on Windows
### 🐛 Describe the bug **Reproduce steps:** [Windows 11 OS] conda create -n nogil2 --override-channels -c conda-forge python-freethreading conda activate nogil2 pip install torch torchvision torchaudio --pre --index-url https://download.pytorch.org/whl/nightly/cu128 ERROR: Cannot install torchvision==0.22.0.dev20250226+cu128, torchvision==0.22.0.dev20250227+cu128, torchvision==0.22.0.dev20250228+cu128, torchvision==0.22.0.dev20250301+cu128, torchvision==0.22.0.dev20250302+cu128, torchvision==0.22.0.dev20250303+cu128, torchvision==0.22.0.dev20250304+cu128, torchvision==0.22.0.dev20250306+cu128, torchvision==0.22.0.dev20250307+cu128, torchvision==0.22.0.dev20250308+cu128, torchvision==0.22.0.dev20250309+cu128, torchvision==0.22.0.dev20250310+cu128, torchvision==0.22.0.dev20250311+cu128 and torchvision==0.22.0.dev20250312+cu128 because these package versions have conflicting dependencies. The conflict is caused by: torchvision 0.22.0.dev20250312+cu128 depends on numpy torchvision 0.22.0.dev20250311+cu128 depends on numpy torchvision 0.22.0.dev20250310+cu128 depends on numpy torchvision 0.22.0.dev20250309+cu128 depends on numpy torchvision 0.22.0.dev20250308+cu128 depends on numpy torchvision 0.22.0.dev20250307+cu128 depends on numpy torchvision 0.22.0.dev20250306+cu128 depends on numpy torchvision 0.22.0.dev20250304+cu128 depends on numpy torchvision 0.22.0.dev20250303+cu128 depends on numpy torchvision 0.22.0.dev20250302+cu128 depends on numpy torchvision 0.22.0.dev20250301+cu128 depends on numpy torchvision 0.22.0.dev20250228+cu128 depends on numpy torchvision 0.22.0.dev20250227+cu128 depends on numpy torchvision 0.22.0.dev20250226+cu128 depends on numpy To fix this you could try to: loosen the range of package versions you've specified remove package versions to allow pip to attempt to solve the dependency conflict ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts ### Versions 
Versions Collecting environment information... PyTorch version: N/A Is debug build: N/A CUDA used to build PyTorch: N/A ROCM used to build PyTorch: N/A OS: Microsoft Windows Server 2022 Datacenter Evaluation (10.0.20348 64-bit) GCC version: Could not collect Clang version: Could not collect CMake version: Could not collect Libc version: N/A Python version: 3.13.2 experimental free-threading build | packaged by conda-forge | (main, Feb 17 2025, 13:52:36) [MSC v.1942 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-2022Server-10.0.20348-SP0 Is CUDA available: N/A CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: N/A GPU models and configuration: Could not collect Nvidia driver version: Could not collect cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: N/A CPU: Name: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz Manufacturer: GenuineIntel Family: 179 Architecture: 9 ProcessorType: 3 DeviceID: CPU0 CurrentClockSpeed: 2594 MaxClockSpeed: 2594 L2CacheSize: 18432 L2CacheSpeed: None Revision: 21767 Name: Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz Manufacturer: GenuineIntel Family: 179 Architecture: 9 ProcessorType: 3 DeviceID: CPU1 CurrentClockSpeed: 2594 MaxClockSpeed: 2594 L2CacheSize: 18432 L2CacheSpeed: None Revision: 21767 Versions of relevant libraries: [pip3] No relevant packages [conda] No relevant packages
open
2025-03-13T07:28:42Z
2025-03-14T09:06:47Z
https://github.com/pytorch/vision/issues/8967
[]
jameszhouyi
2
plotly/jupyter-dash
jupyter
54
call-backs from Dash-plotly to JupyterLab
![Screen Shot 2021-02-12 at 7 45 19](https://user-images.githubusercontent.com/19770769/107736067-d1632100-6d09-11eb-956c-9b05b73eb71f.png) Hi, I am trying to incorporate ArcGIS WebScenes in my Dash-Plotly workflow. I think I have two main options: 1. Use an iframe, in which case I lose the callback (as far as I know); it looks great but I am very limited in modification and callbacks. 2. Incorporate Dash-Plotly callbacks into JupyterLab, as in the example in the attached image; **I was able one time to have a callback from the map to the output view on the left side**. Obviously I prefer the Dash-Plotly -> JupyterLab callback option. I have much more control in modifying the WebScene in the JupyterLab embedded cell (moving the camera, changing visualisation). Even though I can modify the WebScene and update the iframe, its performance is inferior to that of a WebScene embedded in JupyterLab. So in short - I am looking to send a callback from Dash-Plotly back to JupyterLab, and **I want it to work more than once**. (It's now a one-time thing.) I tried both the 'Inline' and the 'JupyterLab' options. Thanks in advance.
open
2021-02-12T06:16:52Z
2021-02-12T06:16:52Z
https://github.com/plotly/jupyter-dash/issues/54
[]
Shai2u
0
OpenInterpreter/open-interpreter
python
1,018
Function calling broken with Azure OpenAI
### Describe the bug To confirm the model works, I asked it to write a haiku: ![image](https://github.com/KillianLucas/open-interpreter/assets/32345320/974cd95f-d6cb-4fa1-9321-d8affce0bc93) Next, to test the project, I asked it for _python script for hello world_ ![image](https://github.com/KillianLucas/open-interpreter/assets/32345320/10f33a58-2d47-4e5c-bece-c00650728ab0) It asked me if I want to execute the script, to which I replied yes. This is when it failed. Here is the complete output: > Yes Python Version: 3.12.2 Pip Version: 24.0 Open-interpreter Version: cmd:Interpreter, pkg: 0.2.0 OS Version and Architecture: Windows-10-10.0.19045-SP0 CPU Info: Intel64 Family 6 Model 183 Stepping 1, GenuineIntel RAM Info: 63.69 GB, used: 24.82, free: 38.87 # Interpreter Info Vision: False Model: azure/gpt-35-turbo-16k Function calling: True Context window: 16000 Max tokens: None Auto run: False API base: None Offline: False Curl output: Not local # Messages System Message: You are Open Interpreter, a world-class programmer that can complete any goal by executing code. First, write a plan. **Always recap the plan between each code block** (you have extreme short-term memory loss, so you need to recap the plan between each message block to retain it). When you execute code, it will be executed **on the user's machine**. The user has given you **full and complete permission** to execute any code necessary to complete the task. Execute the code. If you want to send data between programming languages, save the data to a txt or json. You can access the internet. Run **any code** to achieve the goal, and if at first you don't succeed, try again and again. You can install new packages. When a user refers to a filename, they're likely referring to an existing file in the directory you're currently executing code in. Write messages to the user in Markdown. In general, try to **make plans** with as few steps as possible. 
As for actually executing code to carry out that plan, for *stateful* languages (like python, javascript, shell, but NOT for html which starts from 0 every time) **it's critical not to try to do everything in one code block.** You should try something, print information about it, then continue from there in tiny, informed steps. You will never get it on the first try, and attempting it in one go will often lead to errors you cant see. You are capable of **any** task. {'role': 'user', 'type': 'message', 'content': 'python script for hello world'} {'role': 'assistant', 'type': 'message', 'content': 'Sure! Here\'s a Python script that prints "Hello, World!" to the console:\n\n```python\nprint("Hello, World!")\n```\n\nWould you like me to execute this script for you?'} {'role': 'user', 'type': 'message', 'content': 'Yes'} Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "C:\Users\[redacted]\Personal Projects\openinterpreter\Scripts\interpreter.exe\__main__.py", line 7, in <module> File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\core.py", line 25, in start_terminal_interface start_terminal_interface(self) File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 684, in start_terminal_interface interpreter.chat() File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\core.py", line 86, in chat for _ in self._streaming_chat(message=message, display=display): File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\core.py", line 113, in _streaming_chat yield from terminal_interface(self, message) File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\terminal_interface\terminal_interface.py", line 135, in terminal_interface for chunk in 
interpreter.chat(message, display=False, stream=True): File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\core.py", line 148, in _streaming_chat yield from self._respond_and_store() File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\core.py", line 194, in _respond_and_store for chunk in respond(self): File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\respond.py", line 49, in respond for chunk in interpreter.llm.run(messages_for_llm): File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\llm\llm.py", line 191, in run yield from run_function_calling_llm(self, params) File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\llm\run_function_calling_llm.py", line 66, in run_function_calling_llm arguments = parse_partial_json(arguments) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\[redacted]\Personal Projects\openinterpreter\Lib\site-packages\interpreter\core\llm\utils\parse_partial_json.py", line 8, in parse_partial_json return json.loads(s) ^^^^^^^^^^^^^ File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.12_3.12.752.0_x64__qbz5n2kfra8p0\Lib\json\__init__.py", line 339, in loads raise TypeError(f'the JSON object must be str, bytes or bytearray, ' TypeError: the JSON object must be str, bytes or bytearray, not NoneType ### Reproduce Command run: ` interpreter --model azure/gpt-35-turbo-16k --temperature 0.11 --context_window 16000 --llm_supports_functions` Model version: 0613 (which according to [OpenAI](https://platform.openai.com/docs/guides/function-calling) supports function calling. ### Expected behavior It would run the print() statement and output "Hello, World!" 
### Screenshots _No response_ ### Open Interpreter version 0.2.0 ### Python version Tested on: Python 3.11.8 and Python 3.12.2 ### Operating System name and version Windows 10 Enterprise ### Additional context _No response_
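A hedged guess from the traceback above: Azure's function-call stream can deliver a chunk whose `arguments` field is still `None`, and `parse_partial_json` passes that straight to `json.loads`, which raises the `TypeError`. A defensive variant (illustrative names; not Open Interpreter's actual code) would treat a missing payload as empty and an incomplete one as "not ready yet":

```python
import json

# Sketch of a defensive parse: None means no arguments streamed yet, so
# return an empty object; malformed/partial JSON returns None so the caller
# can retry once more content has arrived.
def parse_partial_json_safe(s):
    if s is None:
        return {}
    try:
        return json.loads(s)
    except json.JSONDecodeError:
        return None  # incomplete chunk; wait for more streamed content
```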
open
2024-02-15T22:16:01Z
2024-03-20T21:59:34Z
https://github.com/OpenInterpreter/open-interpreter/issues/1018
[ "Bug" ]
shouryan01
2
FactoryBoy/factory_boy
django
241
Distinction for django_get_or_create between unique and unique_together
Once #239 has been fixed, it would be nice to be able to differentiate between a list where any property on the list can be used (more than one property with unique flags), where providing multiple properties at build/create will have the system check for those and fail if they don't match what is in the DB, unless none of them are in the DB (example 1 below) ``` class ThingFactory(factory.django.DjangoModelFactory) class Meta: django_get_or_create = ('prop1', 'prop2') # prop1 and prop2 are both separately unique ThingFactory.create(prop1='a', prop2='b') ``` The create should only succeed when there is no instance of prop1=a or prop2=b, or when both exist on the same object. If either exists and the other is not on the same object, it should cause an error as it would in the database. For unique_together, there would need to be some intelligence around specifying those as well, and I think that should be left to the programmer with a new django_get_or_create_together Meta property. It would function the same as above, but instead of failing when only one of them matches (or both exist, but not together), a new object is created, while it is only gotten if both exist on the same object. I have created a state table below to show what would happen for each method. Basically AND with current unique results in a get, and NOR with current unique results in a create, and XOR is an error. Similarly, AND for unique_together results in a get, and NAND for unique together results in a create. ``` values | unique | unique_together !a, !b | create | create !a, b | ERROR | create a, !b | ERROR | create a, b | get | get ```
open
2015-10-28T18:49:41Z
2020-10-13T07:51:45Z
https://github.com/FactoryBoy/factory_boy/issues/241
[]
stephenross
1
ageitgey/face_recognition
python
791
Comparing new encoding with saved encodings from .dat file
* face_recognition version: probably 1.2.2 * Python version: 3.6.3 * Operating System: CentOS 7 ### Description Hello, I have access to lots of pictures and 64 cores CPU. I am not that great at python. I noticed that face_recognition can't use multiple core while it is loading images, faces, encodings,it only uses multiple cores while comparing the encodings. Because of this I can't do a real-time comparison of a picture with all the others fast enough. ### What I Did I started creating a script to dump all encodings to a file (tested with 1k samples first), using your guide [here](https://github.com/ageitgey/face_recognition/wiki/How-do-I-save-face-encodings-to-a-file%3F): ``` import face_recognition import pickle import os import glob import concurrent.futures #go to img directory os.chdir("/home/samples") #all img files IMG_FILES = glob.glob("*.jpg") FILES_COUNT = len(IMG_FILES) print (f'\033[96m We have {FILES_COUNT} jpg files.\033[00m') #dump file DUMP_ENCODINGS = "/home/known_faces.dat" #other vars all_face_encodings = {} #create face encoding def face_encode(img): try: img_load = face_recognition.load_image_file(img) except: print (f'\033[91m File {img} - could not be loaded (corrupted?).\033[00m') return False try: a_face_encoding = face_recognition.face_encodings(img_load)[0] except: print (f'\033[91m File {img} - no face detected.\033[00m') return False return a_face_encoding #work the cpus with concurrent.futures.ProcessPoolExecutor() as executor: for IMG_FILE, the_face_encoding in zip(IMG_FILES, executor.map(face_encode, IMG_FILES)): if the_face_encoding is False: all_face_encodings[os.path.splitext(os.path.basename(IMG_FILE))[0]] = None continue all_face_encodings[os.path.splitext(os.path.basename(IMG_FILE))[0]] = the_face_encoding #dump with open(DUMP_ENCODINGS, 'ab+') as file_dump: pickle.dump(all_face_encodings, file_dump) ``` **This worked very well and pretty fast - all cores used.** See 3 random arrays from the results. 
[3random_arrays.txt](https://github.com/ageitgey/face_recognition/files/3032491/3random_arrays.txt) Now I want to check new images against all this encodings. I tried the idea from [here](https://github.com/ageitgey/face_recognition/wiki/How-do-I-save-face-encodings-to-a-file%3F), but I always get this error: ``` Traceback (most recent call last): File "compare_face.py", line 37, in <module> main() File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/Click-7.0-py3.6.egg/click/core.py", line 764, in __call__ return self.main(*args, **kwargs) File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/Click-7.0-py3.6.egg/click/core.py", line 717, in main rv = self.invoke(ctx) File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/Click-7.0-py3.6.egg/click/core.py", line 956, in invoke return ctx.invoke(self.callback, **ctx.params) File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/Click-7.0-py3.6.egg/click/core.py", line 555, in invoke return callback(*args, **kwargs) File "compare_face.py", line 33, in main result = test_img(unknown_face, face_names, face_encds) File "compare_face.py", line 9, in test_img result = face_recognition.compare_faces(face_encodings, unknown_face) File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/face_recognition-1.2.3-py3.6.egg/face_recognition/api.py", line 222, in compare_faces return list(face_distance(known_face_encodings, face_encoding_to_check) <= tolerance) File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/face_recognition-1.2.3-py3.6.egg/face_recognition/api.py", line 72, in face_distance return np.linalg.norm(face_encodings - face_to_compare, axis=1) ValueError: operands could not be broadcast together with shapes (10,) (128,) ``` I'm guessing I don't read the encodings properly from the .dat file. 
Code: ``` import face_recognition import pickle import os import click import numpy as np def test_img(unknown_face, face_names, face_encodings): result = face_recognition.compare_faces(face_encodings, unknown_face) names_with_result = list(zip(face_names, result)) return [names_with_result] @click.command() @click.argument('image_to_check') def main (image_to_check): #unknown unknown_image = face_recognition.load_image_file(image_to_check) unknown_face = face_recognition.face_encodings(unknown_image)[0] #dump file DUMP_ENCODINGS = "/home/known_faces.dat" #other vars with open(DUMP_ENCODINGS, 'rb') as file_dump_read: all_face_encodings = pickle.load(file_dump_read) face_names = list(all_face_encodings.keys()) face_encds = np.array(list(all_face_encodings.values())) result = test_img(unknown_face, face_names, face_encds) print (result) if __name__ == "__main__": main() ``` If you have any ideas please do help! If you think I should change the approach, please tell! Do you think I should save the encodings in a different way for faster comparison (like a db)? I still need to make this second part to use multiple cores. In the end I will probably have like 400 000 encodings saved and I will need to compare a new face with the old ones ASAP. I am willing to use KNN if it will be more accurate (right now I it does not detect faces in many, many pictures). Regards,
open
2019-04-02T07:58:52Z
2019-06-06T11:52:08Z
https://github.com/ageitgey/face_recognition/issues/791
[]
franckyz0r
4
httpie/cli
python
1,331
Sessions don't support multiple headers sharing the same name
## Checklist - [x] I've searched for similar issues. - [x] I'm using the latest version of HTTPie. --- ## Minimal reproduction code and steps Run `http --offline : hello:world hello:people --session test` ## Current result The header `hello: world` is not saved in the session file ``` $ cat ~/.httpie/sessions/localhost/test.json { "__meta__": { "about": "HTTPie session file", "help": "https://httpie.io/docs#sessions", "httpie": "3.1.0" }, "auth": { "password": null, "type": null, "username": null }, "cookies": [], "headers": { "hello": "people" } } ``` ## Expected result Both `hello: world` and `hello: people` should be saved in the session file --- ## Debug output Please re-run the command with `--debug`, then copy the entire command & output and paste both below: ```bash HTTPie 3.1.0 Requests 2.22.0 Pygments 2.7.2 Python 3.8.5 (default, May 27 2021, 13:30:53) [GCC 9.3.0] /usr/bin/python3 Linux 5.10.102.1-microsoft-standard-WSL2 <Environment {'apply_warnings_filter': <function Environment.apply_warnings_filter at 0x7ffa8d6de8b0>, 'as_silent': <function Environment.as_silent at 0x7ffa8d6de790>, 'colors': 256, 'config': {'default_options': []}, 'config_dir': PosixPath('/home/ducaale/.httpie'), 'devnull': <property object at 0x7ffa8d6d8900>, 'is_windows': False, 'log_error': <function Environment.log_error at 0x7ffa8d6de820>, 'program_name': 'http', 'quiet': 0, 'stderr': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>, 'stderr_isatty': True, 'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>, 'stdin_encoding': 'utf-8', 'stdin_isatty': True, 'stdout': <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>, 'stdout_encoding': 'utf-8', 'stdout_isatty': True}> <PluginManager {'adapters': [], 'auth': [<class 'httpie.plugins.builtin.BasicAuthPlugin'>, <class 'httpie.plugins.builtin.DigestAuthPlugin'>, <class 'httpie.plugins.builtin.BearerAuthPlugin'>], 'converters': [], 'formatters': [<class 
'httpie.output.formatters.headers.HeadersFormatter'>, <class 'httpie.output.formatters.json.JSONFormatter'>, <class 'httpie.output.formatters.xml.XMLFormatter'>, <class 'httpie.output.formatters.colors.ColorFormatter'>]}> >>> requests.request(**{'auth': None, 'data': RequestJSONDataDict(), 'headers': <HTTPHeadersDict('User-Agent': b'HTTPie/3.1.0', 'hello': b'world', 'hello': b'people')>, 'method': 'get', 'params': <generator object MultiValueOrderedDict.items at 0x7ffa8d346510>, 'url': 'http://localhost'}) GET / HTTP/1.1 Accept: */* Accept-Encoding: gzip, deflate Connection: keep-alive Host: localhost User-Agent: HTTPie/3.1.0 hello: world hello: people ``` ## Additional information, screenshots, or code examples N/A
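The collapse described above is exactly what happens when headers are persisted as a plain JSON object, where a later duplicate key silently overwrites an earlier one; keeping duplicates requires storing a list of name/value pairs instead. A minimal sketch of the difference (hypothetical helper names, not HTTPie's actual session code):

```python
import json

def headers_as_object(headers):
    # JSON object: later duplicate names silently overwrite earlier ones
    return json.dumps(dict(headers))

def headers_as_pairs(headers):
    # list of [name, value] pairs: duplicates survive a round-trip
    return json.dumps([list(h) for h in headers])

headers = [("hello", "world"), ("hello", "people")]
print(headers_as_object(headers))  # {"hello": "people"} - "world" is lost
print(headers_as_pairs(headers))   # [["hello", "world"], ["hello", "people"]]
```

This also suggests why fixing it is a product-design question: changing the session file's `headers` field from an object to a list of pairs would break older session files.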
closed
2022-03-19T19:02:31Z
2022-04-03T13:48:31Z
https://github.com/httpie/cli/issues/1331
[ "enhancement", "needs product design" ]
ducaale
0
ultralytics/ultralytics
pytorch
18,982
YOLOv11 vs SSD performance on 160x120 infrared images
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions. ### Question Hello, in our previous project, we successfully implemented object detection in images from a 160x120 infrared camera on Raspberry Pi 4. We used a SqueezeNet-SSD network trained with the CaffeSSD framework. Its performance was very good for our needs (about 60 ms per frame) with excellent accuracy on normal-sized objects (confidence mostly over 90%) but lower accuracy on smaller objects (mostly detected correctly with very low confidence of 30%). Later, we stripped SqueezeNet's fire modules to simple squeeze-expand blocks, added feature fusion for the first SSD layer and modified priorbox ratios to match our dataset. We reached a detection speed of about 30 ms per frame and excellent accuracy for all objects. In our upcoming project, we are continuing with a similar task but we would like to use a more modern approach, because the Caffe framework has not been maintained for years. We're experimenting with the Ultralytics framework and it looks very modern to us. We're also thinking about switching to Raspberry Pi 5, maybe with the Hailo8 kit, which is not supported by CaffeSSD, so Ultralytics seems to be a good way to go. Our dataset consists of 5000 training grayscale images and 1000 testing images with a resolution of 160x120. Many augmented versions of each training image were added to the training dataset, so it has over 40000 images. We identify 5 types of objects - example: face (about 64x100) and eyes (45x10). It's exactly the same dataset that was used for training our SSD networks. Now we have trained several versions of YOLOv11 with a batch size of 128 for 300 epochs. Results are good, but not as good as our original SSD network.
Here, I would like to share our benchmarks with others: **Detection speed** ``` RPi5 Rock 4B+ RPi4 RPi 5 + Hailo 8 ---------------------------------------------------------------------------------------------------- SEnet-FSSD-160x120 7.542 ms 27.478 ms 29.263 ms - SqueezeNet-SSD 10.074 ms 32.615 ms 38.491 ms - Yolo11n-160 12.317 ms 49.212 ms 45.283 ms 4.252 ms Yolo11n-320 48.207 ms 177.076 ms 178.268 ms 7.236 ms Yolo11s-160 30.835 ms 129.767 ms 127.677 ms 10.999 ms Yolo11m-320 313.738 ms 1121.319 ms 1180.839 ms 24.829 ms ``` As you can see, even the nano version of YOLOv11 is much slower than the original SqueezeNet-SSD. Although we would prefer better times, it is still usable for our needs, especially when we're thinking about Hailo8. **Detection accuracy** I don't have specific objective statistics here, but it is visually worse. Even the yolo11m-320 version produces worse results. Rectangles are not as exact, confidences are lower and there is a somewhat higher number of false results. Just for illustration on 1000 validation images: (mean wIoU is the average IoU over all detections with a threshold of 50, weighted by the confidence) SEnet-FSSD-160x120 - total detections: 2014, false positives: 6, false negatives: 9, mean wIoU: 0.892 SqueezeNet-SSD - total detections: 2029, false positives: 59, false negatives: 47, mean wIoU: 0.855 Yolo11n-160 - total detections: 2027, false positives: 28, false negatives: 18, mean wIoU: 0.851 Yolo11s-160 - total detections: 2023, false positives: 26, false negatives: 20, mean wIoU: 0.859 Yolo11m-320 - total detections: 2078, false positives: 71, false negatives: 12, mean wIoU: 0.845 ``` $ ./test image.png senet-fssd-160x120 0: class = 3, confidence = 1.000000, [61, 10, 124, 107] 1: class = 1, confidence = 0.999990, [72, 49, 115, 57] $ ./test image.png squeezenet-ssd-160x120 0: class = 3, confidence = 1.000000, [61, 10, 123, 108] 1: class = 1, confidence = 0.774772, [72, 49, 114, 57] $ ./test image.png yolo11n-160.onnx 0: class = 3, 
confidence = 0.920182, [60, 11, 123, 99] 1: class = 1, confidence = 0.766865, [71, 49, 115, 57] $ ./test image.png yolo11m-320.onnx 0: class = 3, confidence = 0.895741, [61, 12, 123, 103] 1: class = 1, confidence = 0.745349, [72, 50, 115, 56] ``` Maybe the problem lies in the training hyperparameters. We just set the batch size to 128 and the number of epochs to 300. I would appreciate any ideas. Thank you! Meanwhile, I've been trying to simulate our SEnet-FSSD using a YAML model in Ultralytics. I don't know if it is a good idea; I just would like to see if it changes anything. I made a pure copy of our network, but it is not possible to train it because of layer size mismatches. MaxPool2d layers don't seem to downscale the resolution in the same way as in the Caffe framework. There is also no Eltwise (element sum) layer, so I had to change it to a Concat layer. Adding padding=1 to all MaxPool2d layers works, but it automatically changes the input resolution to 192. And the results are practically very similar to the other YOLO models, not to our original network. Here is the model that is a 1:1 rewrite of our SSD network. 
Maybe, someone will be able to fix it: ``` nc: 5 activation: nn.ReLU6() backbone: - [-1, 1, Conv, [64, 3, 2, 0]] # 0,conv1 - [-1, 1, nn.MaxPool2d, [3, 2]] # 1,pool1 - [-1, 1, Conv, [32, 1, 1, 0] ] # 2,fire2 squeeze - [-1, 1, Conv, [64, 3, 1, 1] ] # 3,fire2 expand - [-1, 1, Conv, [32, 1, 1, 0] ] # 4,fire3 squeeze - [-1, 1, Conv, [64, 3, 1, 1] ] # 5,fire3 expand - [-1, 1, nn.MaxPool2d, [3, 2]] # 6,pool3 - [-1, 1, Conv, [64, 1, 1, 0] ] # 7,fire4 squeeze - [-1, 1, Conv, [128, 3, 1, 1] ] # 8,fire4 expand - [-1, 1, Conv, [64, 1, 1, 0] ] # 9,fire5 squeeze - [-1, 1, Conv, [128, 3, 1, 1] ] # 10,fire5 expand - [-1, 1, nn.MaxPool2d, [3, 2]] # 11,pool5 - [-1, 1, Conv, [96, 1, 1, 0] ] # 12,fire6 squeeze - [-1, 1, Conv, [192, 3, 1, 1] ] # 13,fire6 expand - [-1, 1, Conv, [96, 1, 1, 0] ] # 14,fire7 squeeze - [-1, 1, Conv, [192, 3, 1, 1] ] # 15,fire7 expand - [-1, 1, Conv, [64, 1, 1, 0] ] # 16,fire8 squeeze - [-1, 1, Conv, [256, 3, 1, 1] ] # 17,fire8 expand - [-1, 1, Conv, [64, 1, 1, 0] ] # 18,fire9 squeeze - [-1, 1, Conv, [256, 3, 1, 1] ] # 19,fire9 expand head: - [-1, 1, nn.MaxPool2d, [3, 2]] # 20,pool9 - [-1, 1, Conv, [96, 1, 1, 0] ] # 21,fire10 squeeze - [-1, 1, Conv, [384, 3, 1, 1] ] # 22,fire10 expand - [-1, 1, nn.MaxPool2d, [3, 2]] # 23,pool10 - [-1, 1, Conv, [64, 1, 1, 0] ] # 24,fire11 squeeze - [-1, 1, Conv, [256, 3, 1, 1] ] # 25,fire11 expand # feature-fusion layers - [19, 1, Conv, [128, 1, 1, 0] ] # 26 - [-1, 1, nn.Upsample, [None, 2, "nearest"]] # 27 - [22, 1, Conv, [128, 1, 1, 0] ] # 28 - [-1, 1, nn.Upsample, [None, 4, "nearest"]] # 29 - [[29, 27, 10], 1, Concat, [1]] # 30 - [-1, 1, nn.BatchNorm2d, []] # 31 - [-1, 1, Conv, [64, 1, 1, 0] ] # 32 - [-1, 1, Conv, [128, 3, 1, 1] ] # 33 - [[25, 22, 19, 33], 1, Detect, [nc]] ``` ### Additional EDIT: I forgot to provide information that all detections are done using OpenCV in C++
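One concrete difference worth checking for the layer-size mismatches: Caffe computes pooling output sizes with ceiling rounding, while PyTorch's `nn.MaxPool2d` defaults to floor rounding. The two formulas can be compared with plain arithmetic, no framework needed:

```python
import math

def caffe_pool_out(size, kernel, stride, pad=0):
    # Caffe rounds up: ceil((H + 2p - k) / s) + 1
    return math.ceil((size + 2 * pad - kernel) / stride) + 1

def torch_pool_out(size, kernel, stride, pad=0):
    # PyTorch with ceil_mode=False rounds down: floor((H + 2p - k) / s) + 1
    return math.floor((size + 2 * pad - kernel) / stride) + 1

# pool1 on a 160-pixel dimension with kernel 3, stride 2:
print(caffe_pool_out(160, 3, 2))  # 80
print(torch_pool_out(160, 3, 2))  # 79
```

If this is indeed the mismatch, passing `ceil_mode=True` to each `nn.MaxPool2d` (it is the sixth positional argument after kernel_size, stride, padding, dilation and return_indices) may reproduce Caffe's downscaling without the `padding=1` workaround that forces the input resolution to 192 - though whether the YAML loader forwards all six positional arguments is something to verify.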
open
2025-02-03T20:01:01Z
2025-02-27T22:16:34Z
https://github.com/ultralytics/ultralytics/issues/18982
[ "question", "detect", "embedded" ]
BigMuscle85
32
fastapi-users/fastapi-users
asyncio
188
user handling with HTML template
Hello, how do I pass a ValidationError or 400 error back to the login.html template? For example, if a user enters an empty username or an incorrect password, they just get a plain-text response: ``` 1 validation error for Request body -> password field required (type=value_error.missing) Validation Error: ``` How can I pass this error back to the login.html template as a warning? Thank you
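FastAPI lets you override the default `RequestValidationError` handler and render a template instead of returning that plain-text response; the part worth sketching is turning pydantic's error structure into messages a template can display. A sketch of just that step (the surrounding handler, with its `app` and `templates` objects, is assumed and not shown):

```python
def validation_errors_to_messages(errors):
    """Flatten pydantic-style error dicts into strings a template can show.

    `errors` mirrors the list returned by RequestValidationError.errors(),
    e.g. [{"loc": ("body", "password"), "msg": "field required", ...}].
    """
    messages = []
    for err in errors:
        # drop the leading "body"/"query" part of the location tuple
        field = ".".join(str(part) for part in err["loc"][1:]) or str(err["loc"][0])
        messages.append(f"{field}: {err['msg']}")
    return messages

errors = [{"loc": ("body", "password"), "msg": "field required", "type": "value_error.missing"}]
print(validation_errors_to_messages(errors))  # ['password: field required']
```

In an actual app you would register a handler with `@app.exception_handler(RequestValidationError)` and pass something like `{"warnings": validation_errors_to_messages(exc.errors())}` into `templates.TemplateResponse("login.html", ...)`; those surrounding names follow the usual FastAPI/Jinja2 setup and may differ in your project.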
closed
2020-05-17T18:41:44Z
2023-05-09T12:46:41Z
https://github.com/fastapi-users/fastapi-users/issues/188
[ "question" ]
perfecto25
4
WeblateOrg/weblate
django
14,174
Batch triggering of source checks
### Describe the problem The source checks like MultipleFailingCheck or LongUntranslatedCheck are currently the slowest paths in the `update_checks` task. These could probably be done using a single SQL query instead of doing a query per string. ### Describe the solution you would like Similarly to the target checks, there should be a way to batch these. It should be used in both translation update and in scheduled checks update. ### Describe alternatives you have considered _No response_ ### Screenshots _No response_ ### Additional context _No response_
open
2025-03-12T09:05:37Z
2025-03-21T12:36:55Z
https://github.com/WeblateOrg/weblate/issues/14174
[ "enhancement", "Area: Quality checks" ]
nijel
3
thp/urlwatch
automation
282
Xpath filtering doesn't work with text() and node()
Example: this gives me the first result in a Google search: ``` name: "Test" url: "https://www.google.com/search?q=something&tbm=nws&source=lnt&tbs=qdr:h&biw=1920&bih=957&dpr=1" filter: xpath://div[@class='g'][1]/descendant::a/text() ``` Expected Behavior: Monitoring the changes on the first result's text - inside the <a> tag Result: ``` /home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/urlwatch/reporters.py:335: FutureWarning: Possible nested set at position 1 line = re.sub(WDIFF_REMOVED_RE, lambda x: self._red(x.group(0)), line) =========================================================================== 01. ERROR: Test =========================================================================== --------------------------------------------------------------------------- ERROR: Test (https://www.google.com/search?q=something&tbm=nws&source=lnt&tbs=qdr:h&biw=1920&bih=957&dpr=1) --------------------------------------------------------------------------- Traceback (most recent call last): File "/home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/urlwatch/handler.py", line 92, in process data = FilterBase.process(filter_kind, subfilter, self, data) File "/home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/urlwatch/filters.py", line 90, in process return filtercls(state.job, state).filter(data, subfilter) File "/home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/urlwatch/filters.py", line 382, in filter for element in tree.xpath(subfilter)) File "/home/ubuntu/anaconda3/envs/py37/lib/python3.7/site-packages/urlwatch/filters.py", line 382, in <genexpr> for element in tree.xpath(subfilter)) File "src/lxml/etree.pyx", line 3350, in lxml.etree.tostring TypeError: Type 'lxml.etree._ElementUnicodeResult' cannot be serialized. --------------------------------------------------------------------------- ``` The same behavior occurs using the xpath://node() operator.
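The traceback shows the cause: `text()` (and parts of `node()`) XPath expressions yield string results rather than element nodes, and `tostring()` only accepts elements. The usual fix is to branch on the result type before serializing. A sketch using the stdlib ElementTree (urlwatch itself uses lxml, where the same `isinstance(item, str)` check works because lxml's "smart strings" subclass `str`):

```python
import xml.etree.ElementTree as ET

def serialize_xpath_results(results):
    # XPath queries can yield element nodes *or* bare strings
    # (text() / attribute results); only elements go through tostring().
    return "\n".join(
        item if isinstance(item, str)
        else ET.tostring(item, encoding="unicode")
        for item in results
    )

root = ET.fromstring("<div><a>first result</a></div>")
mixed = [root.find("a"), root.find("a").text]  # one element, one string
print(serialize_xpath_results(mixed))
```

Applied to urlwatch, the same type check would go around the `tostring` call at the line mentioned in the traceback (`filters.py:382`).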
closed
2018-09-19T18:20:15Z
2018-10-23T18:24:19Z
https://github.com/thp/urlwatch/issues/282
[]
eduardohayashi
0
strawberry-graphql/strawberry
graphql
2,813
Exception handling with status codes, as graphql-strawberry always returns status code 200 OK
I am new to GraphQL - how do I handle status codes in graphql-strawberry? ## Feature Request Type - [ ] Core functionality - [ ] Alteration (enhancement/optimization) of existing feature(s) - [ ] New behavior ## Description def default_resolver(root, field): """resolver""" try: return operator.getitem(root, field) except KeyError: return getattr(root, field) config = StrawberryConfig( default_resolver=default_resolver ) schema = strawberry.Schema(query=Query, config=config) graphql_app = GraphQLRouter(schema, graphiql = env != 'production')
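For context on why 200 is expected here: GraphQL-over-HTTP servers conventionally answer 200 and report failures in-band through the response's `errors` array, with `extensions` as the spec-sanctioned place for machine-readable metadata. A sketch of such a payload (the overall shape follows the GraphQL spec; the `http_status` extension key is my own illustration, not a Strawberry API):

```python
def graphql_error_payload(message, code, http_status):
    # GraphQL responses carry errors in-band; clients branch on
    # "extensions" rather than on the HTTP status line.
    return {
        "data": None,
        "errors": [
            {
                "message": message,
                "extensions": {"code": code, "http_status": http_status},
            }
        ],
    }

payload = graphql_error_payload("User not found", "NOT_FOUND", 404)
print(payload["errors"][0]["extensions"])  # {'code': 'NOT_FOUND', 'http_status': 404}
```

In Strawberry, raising an exception inside a resolver surfaces it as an entry in `errors` while the HTTP response stays 200; producing a genuinely non-200 response would have to happen at the ASGI/router layer, not in the schema.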
closed
2023-06-06T08:08:09Z
2025-03-20T15:56:12Z
https://github.com/strawberry-graphql/strawberry/issues/2813
[]
itsckguru
1
pyg-team/pytorch_geometric
pytorch
8,894
Torch 2.2
### 😵 Describe the installation problem Is PyG compatible with torch 2.2 yet? The readme says tests pass on 2.2, but the conda channel still caps torch at 2.1. ### Environment * PyG version: * PyTorch version: 2.2 * OS: * Python version: * CUDA/cuDNN version: * How you installed PyTorch and PyG (`conda`, `pip`, source): conda * Any other relevant information (*e.g.*, version of `torch-scatter`):
open
2024-02-10T16:12:27Z
2024-02-10T17:09:34Z
https://github.com/pyg-team/pytorch_geometric/issues/8894
[ "installation" ]
Will-Zhao0
1
Yorko/mlcourse.ai
scikit-learn
596
links are broken on mlcourse.ai
example: https://festline.github.io/notebooks/blob/master/jupyter_english/topic02_visual_data_analysis/topic2_visual_data_analysis.ipynb?flush_cache=true
closed
2019-05-30T02:15:44Z
2019-06-27T14:54:06Z
https://github.com/Yorko/mlcourse.ai/issues/596
[]
nickcorona
4
pandas-dev/pandas
data-science
61,018
BUG: df.plot() "Subplots" changes behavior of how values are stacked using the "Stacked" property
### Pandas version checks - [x] I have checked that this issue has not already been reported. - [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas. - [x] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas. ### Reproducible Example ```python import matplotlib.pyplot as plt import pandas as pd import numpy as np df = pd.DataFrame([(30, 10, 10), (20, 20, 20), (10, 30, 30)], columns=list('ABC')) df.plot(kind="bar", stacked=True) df.plot(subplots= [('A','B')],kind="bar", stacked=True) plt.show() print(df) ``` ### Issue Description Using both the "stacked" and "subplots" options when drawing a bar graph changes how the bar graph is stacked. Illustrated with the image below, where instead of numerically stacking the values for A and B it just physically overlays them. My guess is it doesn't properly use the "bottom" attribute when drawing B. ![Image](https://github.com/user-attachments/assets/30d4655d-1a88-4321-be17-6ea88bbf2646) ### Expected Behavior The behavior of the subplot version should be in line with the behavior when the option is not used. 
So the total of the values should equal A+B for each element ### Installed Versions <details> INSTALLED VERSIONS ------------------ commit : 0691c5cf90477d3503834d983f69350f250a6ff7 python : 3.11.0 python-bits : 64 OS : Windows OS-release : 10 Version : 10.0.22631 machine : AMD64 processor : AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD byteorder : little LC_ALL : None LANG : en_US.UTF-8 LOCALE : English_United States.1252 pandas : 2.2.3 numpy : 1.24.2 pytz : 2025.1 dateutil : 2.8.2 pip : 25.0 Cython : None sphinx : None IPython : 8.12.0 adbc-driver-postgresql: None adbc-driver-sqlite : None bs4 : 4.13.3 blosc : None bottleneck : None dataframe-api-compat : None fastparquet : None fsspec : None html5lib : None hypothesis : None gcsfs : None jinja2 : 3.1.2 lxml.etree : 5.3.0 matplotlib : 3.10.0 numba : None numexpr : None odfpy : None openpyxl : None pandas_gbq : None psycopg2 : None pymysql : None pyarrow : None pyreadstat : None pytest : None python-calamine : None pyxlsb : None s3fs : None scipy : None sqlalchemy : 2.0.9 tables : None tabulate : None xarray : None xlrd : None xlsxwriter : None zstandard : None tzdata : 2025.1 qtpy : None pyqt5 : None </details>
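The "bottom not applied" suspicion can be checked numerically: in a stacked bar chart, each series' bars should start at the running sum of the series below it. A small helper computing those offsets (what `bottom=` should receive), using the example frame's values:

```python
def stacked_bottoms(columns):
    """Given per-column value lists, return the bottom offsets each
    column's bars need in a stacked bar chart."""
    n = len(columns[0])
    bottoms, totals = [], [0] * n
    for values in columns:
        bottoms.append(list(totals))
        totals = [t + v for t, v in zip(totals, values)]
    return bottoms

# columns A and B from the example DataFrame
A, B = [30, 20, 10], [10, 20, 30]
print(stacked_bottoms([A, B]))  # [[0, 0, 0], [30, 20, 10]]
```

A correctly stacked subplot of A and B would draw B's bars with bottom=[30, 20, 10]; the overlaid rendering in the screenshot is consistent with `bottom` never being set.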
open
2025-02-28T05:41:34Z
2025-03-02T13:55:10Z
https://github.com/pandas-dev/pandas/issues/61018
[ "Bug", "Visualization" ]
eicchen02
3
microsoft/MMdnn
tensorflow
281
Error when converting Pytorch model to IR
Platform (like ubuntu 16.04/win10): ubuntu 16.04 Python version: python 3.5.2 Source framework with version (like Tensorflow 1.4.1 with GPU): Pytorch 0.4.0 with GPU Destination framework with version (like CNTK 2.3 with GPU): caffe Pre-trained model path (webpath or webdisk path): https://github.com/DagnyT/hardnet/tree/master/pretrained/train_liberty_with_aug Running scripts: import sys import torch import torch.nn as nn import torch.nn.functional as F from torch.autograd import Variable import torch.backends.cudnn as cudnn import time import os import cv2 import math import numpy as np class L2Norm(nn.Module): def __init__(self): super(L2Norm,self).__init__() self.eps = 1e-10 def forward(self, x): norm = torch.sqrt(torch.sum(x * x, dim = 1) + self.eps) x= x / norm.unsqueeze(-1).expand_as(x) return x class L1Norm(nn.Module): def __init__(self): super(L1Norm,self).__init__() self.eps = 1e-10 def forward(self, x): norm = torch.sum(torch.abs(x), dim = 1) + self.eps x= x / norm.expand_as(x) return x class HardNet(nn.Module): """HardNet model definition """ def __init__(self): super(HardNet, self).__init__() self.features = nn.Sequential( nn.Conv2d(1, 32, kernel_size=3, padding=1, bias = False), nn.BatchNorm2d(32, affine=False), nn.ReLU(), nn.Conv2d(32, 32, kernel_size=3, padding=1, bias = False), nn.BatchNorm2d(32, affine=False), nn.ReLU(), nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1, bias = False), nn.BatchNorm2d(64, affine=False), nn.ReLU(), nn.Conv2d(64, 64, kernel_size=3, padding=1, bias = False), nn.BatchNorm2d(64, affine=False), nn.ReLU(), nn.Conv2d(64, 128, kernel_size=3, stride=2,padding=1, bias = False), nn.BatchNorm2d(128, affine=False), nn.ReLU(), nn.Conv2d(128, 128, kernel_size=3, padding=1, bias = False), nn.BatchNorm2d(128, affine=False), nn.ReLU(), nn.Dropout(0.1), nn.Conv2d(128, 128, kernel_size=8, bias = False), nn.BatchNorm2d(128, affine=False), ) def input_norm(self,x): flat = x.view(x.size(0), -1) mp = torch.mean(flat, dim=1) sp = 
torch.std(flat, dim=1) + 1e-7 return (x - mp.detach().unsqueeze(-1).unsqueeze(-1).unsqueeze(-1).expand_as(x)) / sp.detach().unsqueeze(-1).unsqueeze(-1).unsqueeze(1).expand_as(x) def forward(self, input): x_features = self.features(self.input_norm(input)) x = x_features.view(x_features.size(0), -1) return L2Norm()(x) if __name__ == '__main__': DO_CUDA = True try: input_img_fname = sys.argv[1] output_fname = sys.argv[2] if len(sys.argv) > 3: DO_CUDA = sys.argv[3] != 'cpu' except: print("Wrong input format. Try ./extract_hardnet_desc_from_hpatches_file.py imgs/ref.png out.txt gpu") sys.exit(1) model_weights = '../pretrained/train_liberty_with_aug/checkpoint_liberty_with_aug.pth' model = HardNet() checkpoint = torch.load(model_weights) model.load_state_dict(checkpoint['state_dict']) from mmdnn.conversion.pytorch.pytorch_parser import PytorchParser size=32 pytorchparser = PytorchParser(model, [1, size, size]) IR_file = 'HardNet' pytorchparser.run(IR_file) When I run the code, an error occurs: /usr/local/lib/python3.5/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters Traceback (most recent call last): File "./mmdnn_ori_extract_hardnet_desc_from_hpatches_file.py", line 105, in <module> pytorchparser = PytorchParser(model, [1, size, size]) File "/home/kangrong/envs/mmdnntest/lib/python3.5/site-packages/mmdnn/conversion/pytorch/pytorch_parser.py", line 67, in __init__ if not os.path.exists(model_file_name): File "/home/kangrong/envs/mmdnntest/lib/python3.5/genericpath.py", line 19, in exists os.stat(path) TypeError: argument should be string, bytes or integer, not HardNet I want to convert the Pytorch model to caffe, but when I convert Pytorch to IR, errors occur. How can I solve this error? And how can I convert the Pytorch model to a caffe model?
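The traceback points at the cause: `PytorchParser` treats its first argument as a *file path* (it goes straight into `os.path.exists`), so passing the in-memory `HardNet` instance fails. The stdlib part of that failure can be reproduced without MMdnn or torch:

```python
import os

class HardNet:  # stand-in for the in-memory PyTorch model object
    pass

raised = None
try:
    # what PytorchParser does with its first argument
    os.path.exists(HardNet())
except TypeError as exc:
    raised = exc
print(type(raised).__name__)  # TypeError
```

The likely fix, hedged because MMdnn versions differ in what they expect: serialize the model first, e.g. `torch.save(model, 'hardnet.pth')`, and pass the `'hardnet.pth'` path (not the model object) to `PytorchParser`; some MMdnn versions expect a full model saved with `torch.save(model, path)` rather than just a state dict.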
closed
2018-07-03T03:11:27Z
2019-10-10T00:29:04Z
https://github.com/microsoft/MMdnn/issues/281
[]
Miranda0920
10
facebookresearch/fairseq
pytorch
5,111
how to finetune a language pair that is not in the language dictionary
## ❓ Questions and Help #### What is your question? Hello, I'm using the m2m100 model and I want to finetune a language pair that is not in the language dictionary. What should I do? #### What have you tried? #### What's your environment? - fairseq Version (e.g., 1.0 or main): - PyTorch Version (e.g., 1.0) - OS (e.g., Linux): - How you installed fairseq (`pip`, source): - Build command you used (if compiling from source): - Python version: - CUDA/cuDNN version: - GPU models and configuration: - Any other relevant information:
open
2023-05-22T07:22:40Z
2023-05-22T07:22:40Z
https://github.com/facebookresearch/fairseq/issues/5111
[ "question", "needs triage" ]
ecoli-hit
0
proplot-dev/proplot
data-visualization
381
Error when passing keyword arguments like edgecolor and linewidth to Quiver function
### Description Absolutely love the project, thank you for all your efforts with it! Issue: Keyword arguments for altering the appearance (e.g. edgecolor and linewidth) of quiver arrows appear not to be supported in the PlotAxes.quiver() function. ### Steps to reproduce ```python import numpy as np import proplot as plot X = np.arange(-3, 3, 1) Y = np.arange(-3, 3, 1) U, V = np.meshgrid(X, Y) fig, axs = plot.subplots() axs.quiver(X, Y, U, V, edgecolor='r', linewidth = 1) plot.show() ``` **Expected behavior**: (output from matplotlib equivalent below) ![matplotlib_quiver](https://user-images.githubusercontent.com/25443238/182264958-14ef9e4a-8117-418b-8c31-31b8e6498b97.png) **Actual behavior**: Long traceback ending in: AttributeError: 'Quiver' object has no property 'markeredgecolor' ### Equivalent steps in matplotlib ```python import numpy as np import matplotlib.pyplot as plt X = np.arange(-3, 3, 1) Y = np.arange(-3, 3, 1) U, V = np.meshgrid(X, Y) plt.quiver(X, Y, U, V, edgecolor='r', linewidth = 1) plt.show() ``` ### Proplot version 3.4.3 0.9.5
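Until the keyword translation is fixed, a possible workaround is to style the returned artist after the call: matplotlib's `Quiver` is a `PolyCollection`, so `set_edgecolor`/`set_linewidth` are available on it. Shown with plain matplotlib; the same setter calls should apply to the object proplot's `quiver` returns, though that is untested here:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the example
import numpy as np
import matplotlib.pyplot as plt

X = np.arange(-3, 3, 1)
Y = np.arange(-3, 3, 1)
U, V = np.meshgrid(X, Y)

fig, ax = plt.subplots()
q = ax.quiver(X, Y, U, V)
# style after creation instead of via keyword arguments,
# sidestepping the edgecolor -> markeredgecolor translation
q.set_edgecolor("r")
q.set_linewidth(1)
```

This avoids sending `edgecolor`/`linewidth` through proplot's keyword-standardization path entirely.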
open
2022-08-02T00:26:14Z
2023-03-29T08:42:43Z
https://github.com/proplot-dev/proplot/issues/381
[ "bug" ]
jordanbrook
0