Dataset schema:
- repo_name: string (length 9 to 75)
- topic: string (30 classes)
- issue_number: int64 (1 to 203k)
- title: string (length 1 to 976)
- body: string (length 0 to 254k)
- state: string (2 classes)
- created_at: string (length 20)
- updated_at: string (length 20)
- url: string (length 38 to 105)
- labels: list (length 0 to 9)
- user_login: string (length 1 to 39)
- comments_count: int64 (0 to 452)
JaidedAI/EasyOCR
pytorch
672
I'm trying to select the image using OpenCV selectROI then save it, then use the saved image for OCR but it is showing this error
I'm trying to select a region of the image using OpenCV's `selectROI`, save it, then use the saved image for OCR, but it is showing this error:

```
Traceback (most recent call last):
  File "e:\Python\OCR\tempCodeRunnerFile.py", line 11, in <module>
    bounds = reader.readtext("x.jpg", detail=0)
  File "C:\Users\lenovo\AppData\Local\Programs\Python\Python38\lib\site-packages\easyocr\easyocr.py", line 385, in readtext
    horizontal_list, free_list = self.detect(img, min_size, text_threshold,\
  File "C:\Users\lenovo\AppData\Local\Programs\Python\Python38\lib\site-packages\easyocr\easyocr.py", line 275, in detect
    text_box_list = get_textbox(self.detector, img, canvas_size, mag_ratio,
  File "C:\Users\lenovo\AppData\Local\Programs\Python\Python38\lib\site-packages\easyocr\detection.py", line 95, in get_textbox
    bboxes_list, polys_list = test_net(canvas_size, mag_ratio, detector,
  File "C:\Users\lenovo\AppData\Local\Programs\Python\Python38\lib\site-packages\easyocr\detection.py", line 55, in test_net
    boxes, polys, mapper = getDetBoxes(
  File "C:\Users\lenovo\AppData\Local\Programs\Python\Python38\lib\site-packages\easyocr\craft_utils.py", line 236, in getDetBoxes
    boxes, labels, mapper = getDetBoxes_core(textmap, linkmap, text_threshold, link_threshold, low_text, estimate_num_chars)
  File "C:\Users\lenovo\AppData\Local\Programs\Python\Python38\lib\site-packages\easyocr\craft_utils.py", line 31, in getDetBoxes_core
    nLabels, labels, stats, centroids = cv2.connectedComponentsWithStats(text_score_comb.astype(np.uint8), connectivity=4)
cv2.error: Unknown C++ exception from OpenCV code
```
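One plausible cause (an assumption, not confirmed by the report) is that the saved crop is empty or zero-area — `selectROI` returns `(0, 0, 0, 0)` when the selection is cancelled — which can make OpenCV's C++ layer throw inside `connectedComponentsWithStats`. A minimal sketch of validating the ROI before saving; `validate_crop` is a hypothetical helper, not part of EasyOCR or OpenCV:

```python
import numpy as np

def validate_crop(img, x, y, w, h):
    """Return img[y:y+h, x:x+w], refusing empty or out-of-bounds ROIs."""
    if img is None or img.size == 0:
        raise ValueError("empty source image")
    if w <= 0 or h <= 0:
        raise ValueError("zero-area ROI (selectROI returns (0, 0, 0, 0) on cancel)")
    height, width = img.shape[:2]
    if x < 0 or y < 0 or x + w > width or y + h > height:
        raise ValueError("ROI outside image bounds")
    return img[y:y + h, x:x + w]
```

With cv2, the flow would be `x, y, w, h = cv2.selectROI(img)` followed by `cv2.imwrite("x.jpg", validate_crop(img, x, y, w, h))`, so a bad crop fails loudly before OCR ever runs.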
open
2022-03-01T06:30:53Z
2022-03-23T09:14:15Z
https://github.com/JaidedAI/EasyOCR/issues/672
[]
vsatyamesc
3
serengil/deepface
machine-learning
783
DeepFace: BGR to RGB
Hi serengil! For black-and-white images, does DeepFace/VGG handle the vectorization of such images? Also, does the image need to be in BGR or RGB format? I noticed the output vector differs between BGR-type and RGB-type inputs. Thanks!
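The BGR/RGB effect can be checked independently of DeepFace: channel order is just the last axis reversed, so a model trained on one ordering genuinely sees a different tensor when fed the other. A small numpy illustration (no DeepFace calls, just the channel swap):

```python
import numpy as np

# A 2x2 "image" with distinct values in each channel.
bgr = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)
rgb = bgr[..., ::-1]  # reverse the channel axis: BGR <-> RGB

assert not np.array_equal(bgr, rgb)          # different input tensors
assert np.array_equal(rgb[..., ::-1], bgr)   # the swap is its own inverse
```

Since `cv2.imread` returns BGR while most pretrained vision models expect RGB, applying the swap once before embedding is common practice — worth confirming against DeepFace's own preprocessing rather than assuming.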
closed
2023-06-22T07:19:05Z
2023-06-28T14:14:19Z
https://github.com/serengil/deepface/issues/783
[]
jsnleong
2
mwaskom/seaborn
pandas
3,781
Polars error for plotting when a datetime column is present, even when that column is not plotted
https://github.com/mwaskom/seaborn/blob/b4e5f8d261d6d5524a00b7dd35e00a40e4855872/seaborn/_core/data.py#L313C9-L313C55

```python
import polars as pl
import seaborn as sns

df = pl.LazyFrame({
    'col1': [1, 2, 3],
    'col2': [1, 2, 3],
    'duration_col': [1, 2, 3],
})
df = df.with_columns(pl.duration(days=pl.col('duration_col')).alias('duration_col')).collect()
df

sns.scatterplot(df, x='duration_col', y='col1')
```

which gives the error:

```
NotImplementedError                       Traceback (most recent call last)
File /opt/conda/envs/ds/lib/python3.12/site-packages/seaborn/_core/data.py:313, in convert_dataframe_to_pandas(data)
    306 try:
    307     # This is going to convert all columns in the input dataframe, even though
    308     # we may only need one or two of them. It would be more efficient to select
    (...)
    311     # interface where variables passed in Plot() may only be referenced later
    312     # in Plot.add(). But noting here in case this seems to be a bottleneck.
--> 313     return pd.api.interchange.from_dataframe(data)
    314 except Exception as err:

File /opt/conda/envs/ds/lib/python3.12/site-packages/pandas/core/interchange/from_dataframe.py:71, in from_dataframe(df, allow_copy)
     69     raise ValueError("`df` does not support __dataframe__")
---> 71 return _from_dataframe(
     72     df.__dataframe__(allow_copy=allow_copy), allow_copy=allow_copy
     73 )

File /opt/conda/envs/ds/lib/python3.12/site-packages/pandas/core/interchange/from_dataframe.py:94, in _from_dataframe(df, allow_copy)
     93 for chunk in df.get_chunks():
---> 94     pandas_df = protocol_df_chunk_to_pandas(chunk)
     95     pandas_dfs.append(pandas_df)

File /opt/conda/envs/ds/lib/python3.12/site-packages/pandas/core/interchange/from_dataframe.py:150, in protocol_df_chunk_to_pandas(df)
    149 elif dtype == DtypeKind.DATETIME:
--> 150     columns[name], buf = datetime_column_to_ndarray(col)
    151 else:

File /opt/conda/envs/ds/lib/python3.12/site-packages/pandas/core/interchange/from_dataframe.py:396, in datetime_column_to_ndarray(col)
    384 data = buffer_to_ndarray(
    385     dbuf,
    386     (
    (...)
    393     length=col.size(),
    394 )
--> 396 data = parse_datetime_format_str(format_str, data)  # type: ignore[assignment]
    397 data = set_nulls(data, col, buffers["validity"])

File /opt/conda/envs/ds/lib/python3.12/site-packages/pandas/core/interchange/from_dataframe.py:361, in parse_datetime_format_str(format_str, data)
    359     return data
--> 361 raise NotImplementedError(f"DateTime kind is not supported: {format_str}")

NotImplementedError: DateTime kind is not supported: tDu

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
Cell In[20], line 10
      7 df = df.with_columns(pl.duration(days=pl.col('duration_col')).alias('duration_col')).collect()
      8 df
---> 10 sns.scatterplot(df, x='duration_col', y='col1')

File /opt/conda/envs/ds/lib/python3.12/site-packages/seaborn/relational.py:615, in scatterplot(data, x, y, hue, size, style, palette, hue_order, hue_norm, sizes, size_order, size_norm, markers, style_order, legend, ax, **kwargs)
    606 def scatterplot(
    607     data=None, *,
    608     x=None, y=None, hue=None, size=None, style=None,
    (...)
    612     **kwargs
    613 ):
--> 615     p = _ScatterPlotter(
    616         data=data,
    617         variables=dict(x=x, y=y, hue=hue, size=size, style=style),
    618         legend=legend
    619     )
    621     p.map_hue(palette=palette, order=hue_order, norm=hue_norm)
    622     p.map_size(sizes=sizes, order=size_order, norm=size_norm)

File /opt/conda/envs/ds/lib/python3.12/site-packages/seaborn/relational.py:396, in _ScatterPlotter.__init__(self, data, variables, legend)
    387 def __init__(self, *, data=None, variables={}, legend=None):
    392     self._default_size_range = (
    393         np.r_[.5, 2] * np.square(mpl.rcParams["lines.markersize"])
    394     )
--> 396     super().__init__(data=data, variables=variables)
    398     self.legend = legend

File /opt/conda/envs/ds/lib/python3.12/site-packages/seaborn/_base.py:634, in VectorPlotter.__init__(self, data, variables)
    633 self._var_ordered = {"x": False, "y": False}  # alt., used DefaultDict
--> 634 self.assign_variables(data, variables)

File /opt/conda/envs/ds/lib/python3.12/site-packages/seaborn/_base.py:679, in VectorPlotter.assign_variables(self, data, variables)
    678 self.input_format = "long"
--> 679 plot_data = PlotData(data, variables)
    680 frame = plot_data.frame
    681 names = plot_data.names

File /opt/conda/envs/ds/lib/python3.12/site-packages/seaborn/_core/data.py:57, in PlotData.__init__(self, data, variables)
---> 57 data = handle_data_source(data)
     58 frame, names, ids = self._assign_variables(data, variables)

File /opt/conda/envs/ds/lib/python3.12/site-packages/seaborn/_core/data.py:275, in handle_data_source(data)
    272 if isinstance(data, pd.DataFrame) or hasattr(data, "__dataframe__"):
--> 275     data = convert_dataframe_to_pandas(data)
    276 elif data is not None and not isinstance(data, Mapping):
    277     err = f"Data source must be a DataFrame or Mapping, not {type(data)!r}."

File /opt/conda/envs/ds/lib/python3.12/site-packages/seaborn/_core/data.py:319, in convert_dataframe_to_pandas(data)
    314 except Exception as err:
    315     msg = (
    316         "Encountered an exception when converting data source "
    317         "to a pandas DataFrame. See traceback above for details."
    318     )
--> 319     raise RuntimeError(msg) from err

RuntimeError: Encountered an exception when converting data source to a pandas DataFrame. See traceback above for details.
```
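Until the interchange path supports Duration dtypes, one workaround (my reading of the traceback, not a confirmed fix from the maintainers) is to hand seaborn a pandas frame yourself: `handle_data_source` takes the `isinstance(data, pd.DataFrame)` fast path and never calls `pd.api.interchange.from_dataframe`. A pandas-only sketch of the resulting frame; polars users would get the same effect with `df.to_pandas()` before calling `sns.scatterplot`:

```python
import pandas as pd

# Build the equivalent frame directly in pandas, duration column included.
pdf = pd.DataFrame({
    'col1': [1, 2, 3],
    'duration_col': pd.to_timedelta([1, 2, 3], unit='D'),
})

# The duration survives as timedelta64, and seaborn receives a pd.DataFrame,
# so the dataframe-interchange protocol is never invoked.
assert pdf['duration_col'].dtype.kind == 'm'  # 'm' = timedelta
```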
closed
2024-11-09T04:29:10Z
2024-11-10T04:29:58Z
https://github.com/mwaskom/seaborn/issues/3781
[]
zacharygibbs
2
ludwig-ai/ludwig
data-science
3,714
Ray `session.get_dataset_shard` deprecation "warning" stops training
**Describe the bug**
While trying to recreate https://ludwig.ai/0.8/examples/llms/llm_text_generation/ on my setup using proprietary data, I ran into a Ray deprecation "warning" that continued to crash my training runs. It appears to be complaining about something that changed as of Ray 2.3, but I saw that Ludwig allows Ray versions from `2.2.0` to `<2.5` (I'm using `2.4`), so it seems strange that it would be a syntax issue...

Data shape: `jsonl` (one dictionary/map per line; flat with column labels as keys; seems to load okay)

Training config:

```yaml
model_type: llm
base_model: meta-llama/Llama-2-7b-hf
adapter:
  type: lora
prompt:
  template: |
    Below is an instruction that describes a task. Write a response that appropriately completes the request.

    ### Instruction:
    Generate 2 questions from the context:

    Context: {context}

    ### Response:
input_features:
  - name: prompt
    type: text
    preprocessing:
      max_sequence_length: 512
output_features:
  - name: questions
    type: text
    preprocessing:
      max_sequence_length: 256
trainer:
  type: finetune
  learning_rate: 0.0003
  batch_size: 1
  gradient_accumulation_steps: 16
  epochs: 3
  learning_rate_scheduler:
    warmup_fraction: 0.01
postprocessing:
  sample_ratio: 0.1
```

**To Reproduce**
1. Submit `experiment` job to Ray cluster
2. Data gets loaded
3. Model gets loaded
4. Training begins...
then

```
File "/home/ml/virtualenv/lib/python3.10/site-packages/ludwig/backend/ray.py", line 501, in <lambda>
  lambda config: train_fn(**config),
File "/home/ml/virtualenv/lib/python3.10/site-packages/ludwig/backend/ray.py", line 193, in train_fn
  val_shard = RayDatasetShard(val_shard, features, training_set_metadata)
File "/home/ml/virtualenv/lib/python3.10/site-packages/ludwig/data/dataset/ray.py", line 244, in __init__
  self.create_epoch_iter()
File "/home/ml/virtualenv/lib/python3.10/site-packages/ludwig/data/dataset/ray.py", line 268, in create_epoch_iter
  self.epoch_iter = self.dataset_shard.repeat().iter_epochs()
File "/home/ml/virtualenv/lib/python3.10/site-packages/ray/data/_internal/dataset_iterator/dataset_iterator_impl.py", line 48, in __getattr__
  raise DeprecationWarning(
DeprecationWarning: session.get_dataset_shard returns a ray.data.DatasetIterator instead of a Dataset/DatasetPipeline as of Ray v2.3. Use iter_torch_batches(), to_tf(), or iter_batches() to iterate over one epoch. See https://docs.ray.io/en/latest/data/api/dataset_iterator.html for full DatasetIterator docs.
2023-10-11 02:07:43,868 ERROR tune.py:941 -- Trials did not complete: [TorchTrainer_e1cf1_00000]
```

...
```
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/ml/virtualenv/bin/ludwig", line 8, in <module>
    sys.exit(main())
  File "/home/ml/virtualenv/lib/python3.10/site-packages/ludwig/cli.py", line 191, in main
    CLI()
  File "/home/ml/virtualenv/lib/python3.10/site-packages/ludwig/cli.py", line 71, in __init__
    getattr(self, args.command)()
  File "/home/ml/virtualenv/lib/python3.10/site-packages/ludwig/cli.py", line 96, in experiment
    experiment.cli(sys.argv[2:])
  File "/home/ml/virtualenv/lib/python3.10/site-packages/ludwig/experiment.py", line 528, in cli
    experiment_cli(**vars(args))
  File "/home/ml/virtualenv/lib/python3.10/site-packages/ludwig/experiment.py", line 217, in experiment_cli
    (eval_stats, train_stats, preprocessed_data, output_directory) = model.experiment(
  File "/home/ml/virtualenv/lib/python3.10/site-packages/ludwig/api.py", line 1337, in experiment
    (train_stats, preprocessed_data, output_directory) = self.train(
  File "/home/ml/virtualenv/lib/python3.10/site-packages/ludwig/api.py", line 640, in train
    train_stats = trainer.train(
  File "/home/ml/virtualenv/lib/python3.10/site-packages/ludwig/backend/ray.py", line 500, in train
    trainer_results = runner.run(
  File "/home/ml/virtualenv/lib/python3.10/site-packages/ludwig/backend/ray.py", line 439, in run
    return trainer.fit()
  File "/home/ml/virtualenv/lib/python3.10/site-packages/ray/train/base_trainer.py", line 614, in fit
    raise TrainingFailedError(
```

I tried setting `epochs: 1`, but that didn't seem to work around the `self.create_epoch_iter()` call that is part of the stack trace here. I also fiddled with the batch size to no avail, but since this isn't an OOM, I didn't expect many of the config options to make a difference anyway.
**Expected behavior**
I expected it to train without a hitch 😅

**Environment (please complete the following information):**
- Amazon Linux 2 (container) running via Docker (with NVIDIA container toolkit) on a CentOS 7 AMI
- CUDA 12.1
- Python 3.10.13
- `ludwig[full]==0.8.4`
- `ray==2.4.0`
- `torch==2.1.0`

**Additional context**
One of the biggest differences with my setup is the OS, because I have to build all my own images (from scratch). It seems unlikely that this kind of error would arise from my OS, though; the deprecation warning seems more related to syntax than anything. It appears that you're building your GPU images from Ray 2.3.1 here: https://github.com/ludwig-ai/ludwig/blob/master/docker/ludwig-ray-gpu/Dockerfile#L12 so I'll probably try pinning Ray in my setup to `2.3.1` as a first pass to see if that solves the issue.

P.S. Ray Train went GA (stable API) as of `2.7`; are there plans to upgrade? I'm definitely interested (and hopefully able to make time) to help out with such an endeavor!
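The first-pass pin described above can be expressed as a plain requirements constraint (fragment; the file name depends on how your image installs its Python dependencies):

```
# requirements.txt (fragment): match the Ray version Ludwig's GPU image builds from
ray==2.3.1
```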
closed
2023-10-11T03:10:52Z
2023-10-13T18:40:12Z
https://github.com/ludwig-ai/ludwig/issues/3714
[]
trevenrawr
2
erdewit/ib_insync
asyncio
689
reqAccountSummary often times out
Hey team -- I noticed that when I call `reqAccountSummary` it often times out, even when I give a timeout of 2 minutes. Does anyone know why this might be? Here is the relevant code snippet:

```python
def get_account_balance_with_timeout(self, timeout):
    # Request account summary
    ib.reqAccountSummary()

    # Wait for the account summary update with a timeout
    if not ib.waitOnUpdate(timeout=timeout):
        raise Exception("Fetching account balance timed out.")

    # Filter for TotalCashBalance
    account_summary = ib.accountSummary()
    for item in account_summary:
        if item.tag == 'TotalCashBalance' and item.currency == 'USD':
            return item.value
    return None
```
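One thing worth noting (my reading of the snippet, not a confirmed diagnosis): `waitOnUpdate` wakes on *any* update, which may be an unrelated ticker or account event, so a single wait can return before the summary has actually arrived, or appear to hang when nothing happens on the socket. A generic poll-until-predicate helper, independent of ib_insync's API, makes the intent explicit — keep polling until the value you actually want is present:

```python
import time

def wait_for(predicate, timeout, poll=0.25):
    """Poll `predicate` until it returns a truthy value or the deadline passes.

    Generic sketch of the wait-with-timeout pattern: success is defined by the
    predicate (e.g. "TotalCashBalance is present"), not by any update arriving.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")
```

In the snippet above, the predicate would scan `ib.accountSummary()` for the `TotalCashBalance` / `USD` entry, so an unrelated update can no longer end the wait early.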
open
2024-02-01T20:21:16Z
2024-02-06T22:27:59Z
https://github.com/erdewit/ib_insync/issues/689
[]
thorpep138
3
yihong0618/running_page
data-visualization
140
[feat] change the font.
@shaonianche @MFYDev Can we change the base font to the one used on @shaonianche's running page? It looks cooler. ![image](https://user-images.githubusercontent.com/15976103/120890492-9cbb0280-c635-11eb-84b5-133e223f9061.png)
open
2021-06-05T11:39:08Z
2021-06-05T13:20:49Z
https://github.com/yihong0618/running_page/issues/140
[ "documentation" ]
yihong0618
4
microsoft/MMdnn
tensorflow
318
Converted PyTorch to Keras without error, but inference results are different
Hello,

I have converted a ResNet-152 model that I trained on a custom dataset from PyTorch to Keras, but it seems the conversion was not successful, because when testing on an image the outputs of the two networks are different.

Here's how I performed the conversion:

1. Convert PyTorch to IR: `mmtoir -f pytorch -d out --inputShape 3 224 224 -n resnet152_pytorch.pth`. Three files are produced: `out.json`, `out.pb` and `out.npy`.
2. Convert IR to a Keras code snippet: `mmtocode -f keras --IRModelPath out.pb --dstModelPath resnet152_keras.py`
3. Save the weights to a Keras .h5: `python -m mmdnn.conversion.examples.keras.imagenet_test -n resnet152_keras.py -w out.npy --dump resnet152_keras.h5`

And here's how I tested them:

1. First read an image, resize it and convert it to float format (it's just for testing, so we don't need proper pre-processing):

```python
image = cv2.imread(image_path, 1)
image = cv2.resize(image, (224, 224), interpolation=cv2.INTER_CUBIC)
image = image.astype(float)
```

2. Load the model and do inference for PyTorch:

```python
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = torch.load('resnet152_pytorch.pth')
image_cwh = np.rollaxis(image, 2, 0)  # convert the image to CWH format
im_tensor = torch.from_numpy(image_cwh)
im_tensor.unsqueeze_(0)  # add batch dimension
# send the input and the model to the GPU device
im_tensor = im_tensor.to(device)
model = model.to(device)
# prediction
output = model(im_tensor)
print(output)
```

3. Load the model and do inference for Keras:

```python
model = keras.models.load_model('resnet152_keras.h5')
output = model.predict_on_batch(np.expand_dims(image, axis=0))
print(output)
```

And this is what I obtained:

> Pytorch: tensor([[-0.7422, -0.9134, -0.5829, ..., -0.0954, -1.2378, 0.0789]], device='cuda:0')
> Keras: [[ -52.964676 -101.15237 -71.54291 ... 20.734167 -69.53998 -58.768787]]

Thank you very much in advance for your help!
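Two classic causes of a PyTorch/Keras mismatch after conversion (assumptions worth ruling out, not a confirmed diagnosis) are the PyTorch model being left in training mode — the test code above never calls `model.eval()` — and preprocessing differences. Training-mode BatchNorm normalizes with the current batch statistics instead of the stored running statistics, which alone changes the outputs. A numpy sketch of that difference (simplified BatchNorm, no learned scale/shift):

```python
import numpy as np

def batchnorm(x, running_mean, running_var, training, eps=1e-5):
    """Simplified BatchNorm: batch stats in training mode, running stats in eval."""
    if training:
        mean, var = x.mean(axis=0), x.var(axis=0)
    else:
        mean, var = running_mean, running_var
    return (x - mean) / np.sqrt(var + eps)

x = np.array([[1.0, 2.0], [3.0, 4.0]])
train_out = batchnorm(x, np.zeros(2), np.ones(2), training=True)
eval_out = batchnorm(x, np.zeros(2), np.ones(2), training=False)
assert not np.allclose(train_out, eval_out)  # same input, different outputs
```

Calling `model.eval()` (and wrapping inference in `torch.no_grad()`) before comparing outputs rules this cause out cheaply.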
closed
2018-07-17T17:28:27Z
2020-04-17T09:38:44Z
https://github.com/microsoft/MMdnn/issues/318
[]
netw0rkf10w
8
ray-project/ray
deep-learning
50,819
CI test linux://python/ray/air:test_tensor_extension is flaky
CI test **linux://python/ray/air:test_tensor_extension** is flaky. Recent failures: - https://buildkite.com/ray-project/postmerge/builds/8495#01952b30-22c6-4a0f-9857-59a7988f67d8 - https://buildkite.com/ray-project/postmerge/builds/8491#01952b00-e020-4d4e-b46a-209c0b3dbf5b - https://buildkite.com/ray-project/postmerge/builds/8491#01952ad9-1225-449b-84d0-29cfcc6a048c DataCaseName-linux://python/ray/air:test_tensor_extension-END Managed by OSS Test Policy
closed
2025-02-22T01:46:52Z
2025-03-05T18:19:06Z
https://github.com/ray-project/ray/issues/50819
[ "bug", "triage", "data", "flaky-tracker", "ray-test-bot", "ci-test", "weekly-release-blocker", "stability" ]
can-anyscale
42
mwaskom/seaborn
pandas
3,095
Add flit to dev extra
closed
2022-10-18T12:59:12Z
2022-10-18T23:25:03Z
https://github.com/mwaskom/seaborn/issues/3095
[ "infrastructure" ]
mwaskom
1
PablocFonseca/streamlit-aggrid
streamlit
299
is it possible to get selected rows without refresh the grid?
I have a big dataframe and want to use AgGrid to manage the selection, but it seems the selection responses like "rowClicked" come together with a grid refresh; since the DF is big, the refresh takes a while. Is it possible to get the selected rows without REFRESHING the grid?
closed
2024-11-09T03:54:23Z
2025-03-05T19:25:16Z
https://github.com/PablocFonseca/streamlit-aggrid/issues/299
[]
sharkblue2009
2
ymcui/Chinese-BERT-wwm
nlp
190
TF2 cannot load hfl/chinese-roberta-wwm-ext
Hi, I downloaded the corresponding h5 model, and loading it raises an error.

transformers: 2.2.2
tensorflow: 2.1

`model = TFBertModel.from_pretrained(path, output_hidden_states=True)`

where `path` is the model directory. This successfully loads bert-base, but loading roberta raises the following error:

```
File "classifiacation.py", line 160, in <module>
  transformer_layer = TFBertModel.from_pretrained(MODEL, output_hidden_states=True)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py", line 309, in from_pretrained
  model.load_weights(resolved_archive_file, by_name=True)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py", line 234, in load_weights
  return super(Model, self).load_weights(filepath, by_name, skip_mismatch)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/network.py", line 1220, in load_weights
  f, self.layers, skip_mismatch=skip_mismatch)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 777, in load_weights_from_hdf5_group_by_name
  str(weight_values[i].shape) + '.')
ValueError: Layer #0 (named "bert"), weight <tf.Variable 'tf_bert_model/bert/encoder/layer_._0/attention/self/query/kernel:0' shape=(768, 768) dtype=float32, numpy=
array([[-0.01850067, -0.01887354,  0.00046411, ..., -0.02237962,  0.0132857 , -0.01035117],
       [-0.0011026 , -0.01686522,  0.00017086, ..., -0.01813387, -0.01236598,  0.01903026],
       [ 0.02472041,  0.02698529, -0.00301668, ..., -0.0238625 ,  0.00780853, -0.01740931],
       ...,
       [-0.00500965, -0.0014657 ,  0.02582165, ..., -0.00806629, -0.01069776,  0.02885169],
       [ 0.03499781,  0.01101323, -0.03752618, ...,  0.01265424, -0.00410191,  0.01200508],
       [-0.00900458,  0.01460658, -0.0131218 , ..., -0.01634052,  0.02017507, -0.00059968]],
      dtype=float32)> has shape (768, 768), but the saved weight has shape (768, 12, 64).
```
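The shape mismatch itself is benign-looking: `(768, 12, 64)` holds exactly the same 589,824 weights as `(768, 768)`, just stored split per attention head (12 heads × 64 dims each). My assumption is that transformers 2.2.2 compares raw HDF5 shapes without reshaping, which newer releases handle, so upgrading transformers is the first thing to try. The layout equivalence:

```python
import numpy as np

# Attention query kernel stored per head vs. flattened.
w = np.zeros((768, 12, 64))
assert w.size == 768 * 768                      # identical parameter count
assert w.reshape(768, 768).shape == (768, 768)  # per-head layout flattens cleanly
```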
closed
2021-07-14T09:48:41Z
2021-07-16T05:46:10Z
https://github.com/ymcui/Chinese-BERT-wwm/issues/190
[]
kscp123
3
3b1b/manim
python
1,523
Object being "left behind" even though animation is supposed to move it away
### Describe the bug

When I apply MoveToTarget to a mobject, the mobject does move, but a copy of it stays in the old position.

**Code**: the full construct function:

```python
tensordot = TexText("$ \otimes $")
vector = TexText(r"$v^{i}$").next_to(tensordot, LEFT).set_color_by_gradient(GREEN)
covector = TexText(r"$w_{j}$").next_to(tensordot, RIGHT).set_color_by_gradient(YELLOW)
self.play(Write(tensordot), Write(vector), Write(covector))
text1 = TexText(r"$ \leftarrow $ values $ \rightarrow $").next_to(tensordot, DOWN)
dvector = TexText(r"$v = \begin{bmatrix}1\\2\end{bmatrix}$").next_to(text1, LEFT)
dcovector = TexText(r"$w = \begin{bmatrix} 3, 4 \end{bmatrix}$").next_to(text1, RIGHT)
self.wait()
self.play(Write(text1), Write(dvector), Write(dcovector))
tgroup = VGroup()
tgroup.add(tensordot, vector, covector)
self.wait()
self.play(FadeOut(text1), FadeOut(dvector), FadeOut(dcovector))
matrix = TexText(r"$ W^{i}_{j} = v^{i}w_{j} $").set_color_by_gradient(BLUE)
self.play(Transform(tgroup, matrix).set_run_time(2))
self.wait()
m12 = TexText(r"$M^{1}_{2} = v^{1}w_{2} = 1 \cdot 4 = 4$").set_color_by_gradient(RED)
m11 = TexText(r"$M^{1}_{1} = v^{1}w_{1} = 1 \cdot 3 = 3$").next_to(m12, UP).set_color_by_gradient(RED)
m21 = TexText(r"$M^{2}_{1} = v^{2}w_{1} = 2 \cdot 3 = 6$").next_to(m12, DOWN).set_color_by_gradient(RED)
m22 = TexText(r"$M^{2}_{2} = v^{2}w_{2} = 2 \cdot 4 = 8$").next_to(m21, DOWN).set_color_by_gradient(RED)
matrix.generate_target()
matrix.target.shift(2*UP)
self.play(MoveToTarget(matrix))  # supposed to move but was stuck
self.play(Write(m11))
self.play(Write(m12))
self.play(Write(m21))
self.play(Write(m22))
self.wait()
```

**Wrong display or Error traceback**: the object "matrix" gets stuck in the old position and a copy of it moves to the target.
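A toy model of what the scene is doing (plain strings stand in for mobjects; no manim needed). In manim, `Transform(a, b)` keeps `a` on screen restyled to look like `b` — `b` itself is never added — so playing `MoveToTarget(b)` afterwards adds `b` as a second object, which appears as a copy "left behind". `ReplacementTransform` removes `a` and adds `b`, avoiding the duplicate:

```python
scene = []  # mobjects currently drawn

def transform(a, b):              # models manim's Transform
    scene.append(a)               # `a` stays; it only *looks* like `b`

def replacement_transform(a, b):  # models manim's ReplacementTransform
    if a in scene:
        scene.remove(a)
    scene.append(b)

def play_move_to_target(m):       # models playing MoveToTarget(m)
    if m not in scene:
        scene.append(m)           # an off-screen target gets added first

transform("tgroup", "matrix")
play_move_to_target("matrix")
assert scene == ["tgroup", "matrix"]  # duplicate: the stuck copy + the mover

scene.clear()
replacement_transform("tgroup", "matrix")
play_move_to_target("matrix")
assert scene == ["matrix"]            # single object, moves as expected
```

In the reported code, swapping `Transform(tgroup, matrix)` for `ReplacementTransform(tgroup, matrix)` should let the later `MoveToTarget(matrix)` move the visible object instead of spawning a copy.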
closed
2021-05-27T14:19:02Z
2021-06-17T17:06:42Z
https://github.com/3b1b/manim/issues/1523
[ "bug" ]
y0n1n1
2
vitalik/django-ninja
rest-api
1,095
Creating data with POST request
I have a model in which a field is a ManyToManyField referencing another model. It throws an error saying it can't bulk-assign the data and suggests using `.set(data)`. Is there any other way to create the data?

```
PrimaryModel:
    id = UUIDField(primary_key=True, editable=False)
    actors = ManyToManyField(Actor, blank=True)
```

```
Actor:
    id = UUIDField(primary_key=True, editable=False)
    name = CharField(max_length=50)
```

While there is a `post` request to the PrimaryModel, the payload comes as `[id, id]`.

```python
@router.post("/", response={201: MovieOut})
def create_movie(request, data: MovieIn):
    list_id = data.list_id
    result = Term.objects.create(**data.dict(exclude=["actors"]))
    result.actor.set(data.actor)
    return 201, result
```
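The error message points at the usual Django pattern: many-to-many values can't be passed to `create()`, so you create the row first and then attach the ids with `.set()`, which accepts a list of primary keys. A pure-Python sketch of the flow — the classes below mock Django's ORM, and all names are illustrative:

```python
class RelatedManager:
    """Mocks Django's many-to-many manager: .set() replaces the relation."""
    def __init__(self):
        self.ids = []
    def set(self, ids):
        self.ids = list(ids)

class PrimaryModel:
    def __init__(self, **fields):   # mocks PrimaryModel.objects.create(**fields)
        self.__dict__.update(fields)
        self.actors = RelatedManager()

payload = {"id": "uuid-1", "actors": ["actor-1", "actor-2"]}
obj = PrimaryModel(**{k: v for k, v in payload.items() if k != "actors"})  # create without the M2M list
obj.actors.set(payload["actors"])   # attach the [id, id] list afterwards
assert obj.actors.ids == ["actor-1", "actor-2"]
```

In the view above that corresponds to creating with `**data.dict(exclude={"actors"})` and then calling `.set(...)` on the relation — note that pydantic's `exclude` takes a set, and the snippet mixes `actor`/`actors` field names, which is worth double-checking too.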
closed
2024-02-23T14:36:18Z
2024-02-24T16:17:35Z
https://github.com/vitalik/django-ninja/issues/1095
[]
Harshav0423
2
vitalik/django-ninja
rest-api
576
How to return nested dict? response={200: Dict...
So right now I have `response={200: Dict, codes_4xx: DefaultMessageSchema}`. I am working on a Stripe integration, thus I cannot use a defined Schema; I need to pass Stripe data. Their dict is nested and quite a big one... It goes something like:

```
This is stripe data and also the expected data which I want to get as a response
{
  "data": [
    {
      "billing_details": {
        "address": {
          "city": null,
          "country": null,
          "line1": null,
          "line2": null,
          "postal_code": null,
          "state": null
        },
        "email": null,
        "name": null,
        "phone": null
      },
      "card": {
        "brand": "visa",
        "checks": {
          "address_line1_check": null,
          "address_postal_code_check": null,
          "cvc_check": "pass"
        },
        "country": "US",
        "exp_month": 9,
        "exp_year": 2023,
        "fingerprint": "zsdqFS23F3Aw",
        "funding": "credit",
        "generated_from": null,
        "last4": "4242",
        "networks": {
          "available": [
            "visa"
          ],
```

The way I have currently defined the response, it does not recognize nested dictionaries. Meaning instead of "card" it will return 'card' / instead of a list it will return a string, etc... How can I make it work?
open
2022-09-28T12:16:49Z
2022-09-30T18:32:33Z
https://github.com/vitalik/django-ninja/issues/576
[]
ssandr1kka
3
lyhue1991/eat_tensorflow2_in_30_days
tensorflow
15
Question about the structured-data modeling workflow in section 1-1
In section 1-1, the author uses `y_test = dftest_raw['Survived'].values`, but `dftest_raw` does not have a `Survived` column, so this raises an error. Is the test data the official test data, or a portion split off from the training data? Thanks!
closed
2020-04-08T08:53:42Z
2020-04-13T00:44:29Z
https://github.com/lyhue1991/eat_tensorflow2_in_30_days/issues/15
[]
Tokkiu
4
gee-community/geemap
jupyter
1,520
Customizing legend seems to have no effect
Mon Apr 24 23:02:13 2023 Eastern Daylight Time

- OS: Windows; CPU(s): 20; Machine: AMD64; Architecture: 64bit; RAM: 63.9 GiB
- Environment: Jupyter; Python 3.8.16 (default, Jan 17 2023, 22:25:28) [MSC v.1916 64 bit (AMD64)]
- geemap 0.20.4; ee 0.1.339; ipyleaflet 0.17.2; folium 0.13.0; jupyterlab 3.5.3; notebook 6.5.2; ipyevents 2.0.1; geopandas 0.12.2
- Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture

### Description
Running the unsupervised classification tutorial, I tried to change the colors and the legend entries to replace the "one, two, three, four, etc." labels, but it seems to ignore the updates and continues to show the standard legend.

### Ran the following code:

```python
legend_classes = ['Urban1', 'Urban2', 'Veg1', 'Veg2', 'Water']
legend_cols = ['#595959', '#d9d9d9', '#004d1a', '#00cc44', '#0066ff']

# Reclassify the map
result = result.remap([0, 1, 2, 3, 4], [1, 2, 3, 4, 5])

Map = geemap.Map()
Map.centerObject(point, 12)
Map.addLayer(
    result, {'min': 1, 'max': 5, 'palette': legend_cols}, 'Labelled clusters'
)
Map.add_legend(
    legend_keys=legend_classes, legend_colors=legend_cols, position='bottomright'
)
Map
```

![Screenshot 2023-04-24 230752](https://user-images.githubusercontent.com/38561308/234164832-f17f1ae9-7b6e-4b64-b3aa-7c8def239e9a.png)
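One thing worth trying (an assumption based on geemap's API, not a confirmed fix): `add_legend` also accepts a single `legend_dict` mapping label to color, which sidesteps any mismatch between parallel key/color lists. Building one from the lists above is cheap:

```python
legend_classes = ['Urban1', 'Urban2', 'Veg1', 'Veg2', 'Water']
legend_cols = ['#595959', '#d9d9d9', '#004d1a', '#00cc44', '#0066ff']

# Label -> hex color, preserving the pairing between the two lists.
legend_dict = dict(zip(legend_classes, legend_cols))
assert legend_dict['Water'] == '#0066ff'
# Map.add_legend(legend_dict=legend_dict, position='bottomright')
```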
closed
2023-04-25T03:10:01Z
2023-04-26T11:03:22Z
https://github.com/gee-community/geemap/issues/1520
[ "bug" ]
jportolese
2
pydata/xarray
numpy
9,596
DataTree broadcasting
### What is your issue? _From https://github.com/xarray-contrib/datatree/issues/199_ Currently you can perform arithmetic with datatrees, e.g. `dt + dt`. (In fact the current implementation lets you apply arbitrary operations on n trees that return 1 to n new trees, see [`map_over_subtree`](https://github.com/xarray-contrib/datatree/blob/49f7a2f58d0a4d496639afdb54b56c4058965c05/datatree/mapping.py#L106).) However currently these trees must have the same structure of nodes (i.e. be ["isomorphic"](https://github.com/xarray-contrib/datatree/blob/49f7a2f58d0a4d496639afdb54b56c4058965c05/datatree/mapping.py#L23)). It would be useful to generalise tree operations to handle trees of different structure. I'm going to call this "tree broadcasting" (not to be confused with array broadcasting).
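A toy illustration with plain dicts standing in for DataTree nodes (a sketch, not the datatree implementation): today's behavior requires isomorphic trees, while "tree broadcasting" would instead align subtrees by name and carry missing branches through:

```python
def tree_add(a, b):
    """Add two 'trees' (nested dicts of numbers) node by node.

    Mirrors the current strict behavior: non-isomorphic trees are an error.
    A broadcasting version would match subtrees by name and fill in or skip
    branches that exist in only one tree.
    """
    if isinstance(a, dict) and isinstance(b, dict):
        if a.keys() != b.keys():
            raise ValueError("trees are not isomorphic")
        return {k: tree_add(a[k], b[k]) for k in a}
    return a + b

t = {"fine": {"x": 1}, "coarse": {"x": 10}}
assert tree_add(t, t) == {"fine": {"x": 2}, "coarse": {"x": 20}}
```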
open
2024-10-08T16:45:16Z
2024-10-08T16:45:16Z
https://github.com/pydata/xarray/issues/9596
[ "API design", "topic-DataTree" ]
TomNicholas
0
tpvasconcelos/ridgeplot
plotly
39
CI checks not triggered for "Upgrade dependencies" pull request
The "CI checks" GitHub action should be automatically triggered for all "Upgrade dependencies" pull requests. However, this is currently not the case (see #38, for an example). References: - https://stackoverflow.com/questions/72432651/github-actions-auto-approve-not-working-on-pull-request-created-by-github-action - https://stackoverflow.com/questions/73079924/github-workflows-not-triggered-by-automatically-created-prs
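The linked answers point at the usual cause: events created with the default `GITHUB_TOKEN` deliberately do not trigger further workflows, so a PR opened by an automation step needs a personal access token instead. A sketch of the relevant workflow fragment (the action choice and secret name are assumptions):

```yaml
# Open the "Upgrade dependencies" PR with a PAT instead of GITHUB_TOKEN,
# so the "CI checks" workflow fires on the resulting pull_request event.
- uses: peter-evans/create-pull-request@v5
  with:
    token: ${{ secrets.REPO_PAT }}
```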
closed
2022-09-26T07:48:13Z
2023-08-14T15:55:48Z
https://github.com/tpvasconcelos/ridgeplot/issues/39
[ "BUG", "dependencies", "CI/CD" ]
tpvasconcelos
0
biolab/orange3
pandas
6,410
Drag-from-widget to add next widget constantly produces error messages
**What's wrong?** Since recently, I can no longer easily drag a connector from a widget and select a next widget in the contextual menu. The first time directly after dragging, an Unexpected Error window comes up, with a report that can be submitted. Next, a larger error message window pops up. If I click Ignore, I can finally select the next widget. Every next time I want to add new widgets this way, two of these large-size error message windows pop up consecutively. After clicking Ignore on each, again I can proceed. It appears this started happening since (1) I updated Mac OS to Ventura 13.3.1 and (2) I updated the Text mining add-on to v. 1.12.2 **How can we reproduce the problem?** See attached movie: [Screen Recording 2023-04-13 at 15.42.50.mov.zip](https://github.com/biolab/orange3/files/11222910/Screen.Recording.2023-04-13.at.15.42.50.mov.zip) **What's your environment?** <!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code --> - Operating system: Mac OS to Ventura 13.3.1 (Silicon) - Orange version: 3.34.1 - How you installed Orange: from DMG _edit_: updated add-on was text mining, not timeseries.
closed
2023-04-13T14:03:53Z
2023-05-02T09:50:44Z
https://github.com/biolab/orange3/issues/6410
[ "bug" ]
wvdvegte
6
mars-project/mars
numpy
2,875
Add a web page to show Mars process stacks
We need a web page to show the stack of each Mars process. `sys._current_frames()` may be used.
closed
2022-03-26T05:17:11Z
2022-03-28T09:21:55Z
https://github.com/mars-project/mars/issues/2875
[ "type: feature", "mod: web" ]
wjsi
0
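As context for the suggestion above, a minimal hedged sketch (plain Python, not Mars code) of how `sys._current_frames()` can be turned into per-thread stack text suitable for such a page:

```python
import sys
import threading
import traceback

def dump_thread_stacks():
    """Return a {thread name: formatted stack} mapping for the current process."""
    id_to_name = {t.ident: t.name for t in threading.enumerate()}
    stacks = {}
    for thread_id, frame in sys._current_frames().items():
        name = id_to_name.get(thread_id, str(thread_id))
        stacks[name] = "".join(traceback.format_stack(frame))
    return stacks
```

Each process would expose something like this over an HTTP endpoint; aggregating across processes would be left to Mars's web layer.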
BeanieODM/beanie
asyncio
208
Error with type checking on sort in PyCharm
Hi. Using the documented mode of sorting, `sort(Class.field)`, results in a warning in PyCharm. Is something off in the type definition? It seems to work fine. <img width="690" alt="Screen Shot 2022-02-17 at 5 28 01 PM" src="https://user-images.githubusercontent.com/2035561/154600174-d4e37588-55a7-47fd-9740-c7b4038dd0bc.png">
closed
2022-02-18T01:29:32Z
2022-06-30T19:03:51Z
https://github.com/BeanieODM/beanie/issues/208
[]
mikeckennedy
18
dynaconf/dynaconf
django
320
[bug] Small bug in docs/customexts: variable used before assignment.
**Describe the bug** In docs/customexts/aafig.py, line 120 uses a variable **text** that has not been assigned yet at that point. [Link to file](https://github.com/rochacbruno/dynaconf/blob/bb6282cf04214f13c0bcbacdb4cee65d4c9ddafb/docs/customexts/aafig.py#L120) `img.replace_self(nodes.literal_block(text, text))` ```python def render_aafig_images(app, doctree): format_map = app.builder.config.aafig_format merge_dict(format_map, DEFAULT_FORMATS) if aafigure is None: app.builder.warn('aafigure module not installed, ASCII art images ' 'will be redered as literal text') for img in doctree.traverse(nodes.image): if not hasattr(img, 'aafig'): continue if aafigure is None: img.replace_self(nodes.literal_block(text, text)) continue options = img.aafig['options'] text = img.aafig['text'] ```
closed
2020-03-18T10:20:03Z
2020-08-06T19:07:13Z
https://github.com/dynaconf/dynaconf/issues/320
[ "bug" ]
Bernardoow
1
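The fix is simply to read `text` from the node before any use. A simplified, runnable sketch of the corrected control flow (plain dicts stand in for docutils nodes; this is an illustration, not the actual patch):

```python
def render_aafig_images(images, aafigure_available):
    """Simplified model of the loop: `text` is assigned before it is used."""
    results = []
    for img in images:
        if "aafig" not in img:
            continue
        text = img["aafig"]["text"]  # assigned first, so the fallback below works
        if not aafigure_available:
            # previously this branch referenced `text` before assignment -> NameError
            results.append(("literal_block", text))
            continue
        results.append(("rendered", text, img["aafig"]["options"]))
    return results
```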
pytest-dev/pytest-html
pytest
796
report.html is created at the start of pytest run instead of after in v4
We run two containers in one pod. One container runs a bash script that checks the existence of the report.html then shuts the pods down after uploading the html file to azure blob storage. The second container runs the pytest command. Due to the change in report.html being created at the start of the pytest run now, it no longer waits for the test on the second container to finish. It uploads the unfinished version of the report instead. As this is a breaking change, could you please make a note of this in the release notes or maybe fix this by making a temporary file in `/tmp` then move it to the report directory at the end of the test Part of the script: ``` while [ ! -f /tmp/report.html ]; do sleep 2 done ```
open
2024-01-24T14:20:51Z
2024-03-04T14:30:12Z
https://github.com/pytest-dev/pytest-html/issues/796
[]
thenu97
2
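A workaround on the producer side (hedged sketch, not a pytest-html change): write the report to a temporary file and move it into place atomically, so a watcher never observes a half-written copy. Paths here are illustrative.

```shell
# Sketch: produce the file atomically; `mv` on the same filesystem is atomic,
# so the watcher only ever sees the finished report.
tmp="$(mktemp)"
printf '<html>report</html>' > "$tmp"
mv "$tmp" /tmp/report.html
```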
jina-ai/serve
fastapi
5,422
`metadata.uid` field doesn't exist in generated k8s YAMLs
**Describe the bug** <!-- A clear and concise description of what the bug is. --> In the [generated YAML file](https://github.com/jina-ai/jina/blob/master/jina/resources/k8s/template/deployment-executor.yml), it has a reference to `metadata.uid`: `fieldPath: metadata.uid`. But looks like in `metadata` there's no `uid`: ``` metadata: name: {name} namespace: {namespace} ``` This is sometimes causing problem for our Operator use case; we had to manually remove the `POD_UID` block generated by `jina`. Are we going to address this? **Describe how you solve it** <!-- copy past your code/pull request link --> --- <!-- Optional, but really help us locate the problem faster --> **Environment** <!-- Run `jina --version-full` and copy paste the output here --> **Screenshots** <!-- If applicable, add screenshots to help explain your problem. -->
closed
2022-11-22T02:22:01Z
2022-12-01T08:33:06Z
https://github.com/jina-ai/serve/issues/5422
[]
zac-li
0
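For context (hedged — this is standard Kubernetes behaviour, not jina-specific): `metadata.uid` is populated by the API server when the Pod is created, so a downward-API reference to it can be valid at runtime even though the template's `metadata:` block never lists a `uid`. The fragment in question looks like:

```yaml
# Downward API: POD_UID is filled in by Kubernetes at runtime;
# manifests never declare metadata.uid themselves.
env:
  - name: POD_UID
    valueFrom:
      fieldRef:
        fieldPath: metadata.uid
```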
OFA-Sys/Chinese-CLIP
nlp
184
TensorRT format conversion fails with CUDA error 700
Environment: virtual environment installed under conda. Machine version: ![image](https://github.com/OFA-Sys/Chinese-CLIP/assets/32260115/8d65497e-0b83-4e8b-bfb7-ea459493dee0) Virtual environment: Python 3.8 ![image](https://github.com/OFA-Sys/Chinese-CLIP/assets/32260115/24994076-8823-433f-9b6e-d1b960cbb2d8) ![image](https://github.com/OFA-Sys/Chinese-CLIP/assets/32260115/8da18ba2-90ff-478b-88c8-9d1c2b73ae7a) Running pytorch2onnx as in the demo works fine: ![image](https://github.com/OFA-Sys/Chinese-CLIP/assets/32260115/fc61183f-dead-40e3-8130-65ab37af4c76) onnx2tensorrt fails with: In the autotuner, CUDA error 700 from 'cuEventSynchronize(end)': an illegal memory access was encountered. ![image](https://github.com/OFA-Sys/Chinese-CLIP/assets/32260115/07e49819-b67f-4d10-b4a0-0c7f24c6c382) Loading the TRT file downloaded as in the demo also fails: ![image](https://github.com/OFA-Sys/Chinese-CLIP/assets/32260115/d51e2fe7-a690-4790-966d-30cb504d1767)
open
2023-08-11T07:31:11Z
2023-08-29T10:08:04Z
https://github.com/OFA-Sys/Chinese-CLIP/issues/184
[]
WittyLLL
2
Zeyi-Lin/HivisionIDPhotos
fastapi
93
Works great, thank you very much
![image](https://github.com/user-attachments/assets/09d65400-6496-4c7e-8a74-06ae7a317cbb) ![image](https://github.com/user-attachments/assets/2e8843fd-8173-4c27-98e3-f809720d8529)
closed
2024-09-10T10:55:56Z
2024-09-11T04:30:18Z
https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/93
[]
zmwv823
2
matplotlib/mplfinance
matplotlib
44
Implement regression tests for the new API
Upon a Pull Request or Merge, the repository presently kicks off TravisCI tests for the old API. We should extend this for the new API as well. The new tests can easily be based off code in the jupyter notebooks in the examples folder.
closed
2020-03-05T18:48:04Z
2020-05-06T01:33:42Z
https://github.com/matplotlib/mplfinance/issues/44
[ "enhancement", "released" ]
DanielGoldfarb
1
python-visualization/folium
data-visualization
1,734
ColorMap Legend Position is always set to topRight. Change it by specifying the position
**Is your feature request related to a problem? Please describe.** I generated a choropleth map using folium.Choropleth(). It creates the map perfectly and also adds the legend to the top-right corner of the map, but there is no way to move the legend to the top left. **Describe the solution you'd like** I would like folium.Choropleth() to take an optional parameter, legend_position, which defaults to topRight but can be set to topLeft, bottomRight, bottomLeft, etc. **Additional context** This is the map I created: ![image](https://user-images.githubusercontent.com/3126387/223203967-285711b6-ca9a-4635-97c4-2517dd5326b7.png) Here is the output I want; here I changed the HTML code: ![image](https://user-images.githubusercontent.com/3126387/223204095-81e4e2e0-6a49-4dba-b60b-52b734f529a9.png) Is there any other way to do that, in case this feature is not considered useful?
closed
2023-03-06T18:55:24Z
2023-03-12T13:03:33Z
https://github.com/python-visualization/folium/issues/1734
[]
muneebable
1
OWASP/Nettacker
automation
289
Bug(Recursion) in calling warn, error from api
This is actually an interesting piece of recursive logic: we are not allowed to call the warn/error functions when using the API, yet __die_failure() itself uses error(). The error function checks whether the command contains "--start-api" and, if so, prints nothing. An interesting one — I will fix it soon. ![image](https://user-images.githubusercontent.com/32503192/84069944-f966eb80-a9e8-11ea-9aec-d4040895dc22.png) ![image](https://user-images.githubusercontent.com/32503192/84069910-e81ddf00-a9e8-11ea-9f67-2468826d417a.png) Please describe the issue or question and share your OS and Python version. _________________ **OS**: `Linux` **OS Version**: `Ubuntu 18.04` **Python Version**: `3.6.9`
closed
2020-06-08T19:04:57Z
2020-06-09T10:19:18Z
https://github.com/OWASP/Nettacker/issues/289
[ "bug" ]
aman566
1
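A runnable, simplified model of the guard described above (hypothetical names, not Nettacker's actual code): error() suppresses output in API mode, which is what protects the __die_failure() → error() call chain from recursing:

```python
import sys

def error(message, argv=None):
    """Return the message unless running in API mode, where output is suppressed."""
    argv = sys.argv if argv is None else argv
    if "--start-api" in argv:
        return None  # API mode: emit nothing instead of looping back into the logger
    return f"ERROR: {message}"

def die_failure(message, argv=None):
    """Fail hard; delegates to error(), which must not call back into here."""
    return error(message, argv)
```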
streamlit/streamlit
deep-learning
10,023
Add `height` and `key` to `st.columns`
### Checklist - [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests. - [X] I added a descriptive title and summary to this issue. ### Summary It would be great if the features of st.columns() and st.container() could be combined in a single API call, say st.columns(). For example, if one wants to create two scrollable fixed-height side-by-side columns one has to first instantiate two columns with cols = st.columns(2), and then create a st.container() within each column for a specified fixed height. ### Why? Presently, st.columns() allows one to create multiple columns side by side, but one can neither make them fixed height scrollable. And with st.container(), one can make the container fixed height & scrollable, but there is no way to place containers side by side. This is frustrating when writing code since one has to remember these two types of behavior for the two functions when they could easily be rolled into a single function. ### How? The present st.columns() API call is: st.columns(spec, *, gap="small", vertical_alignment="top", border=False) and the st.container() API call is: st.container(*, height=None, border=None, key=None) Simply add to st.columns() three of the parameters of st.container(), namely height, border and key. If you can do this this, there will be no need for st.container(), which will make it far easier for programmers. ### Additional Context _No response_
open
2024-12-13T23:56:31Z
2024-12-22T02:18:15Z
https://github.com/streamlit/streamlit/issues/10023
[ "type:enhancement", "feature:st.columns" ]
iscoyd
4
httpie/cli
rest-api
1,002
HTTP/1.1 404 Not Found error
(base) PS C:\Users\charu> http --stream -f -a Aaron https://stream.twitter.com/1/statuses/filter.json track='Justin Bieber' http: password for Aaron@stream.twitter.com: HTTP/1.1 404 Not Found cache-control: no-cache, no-store, max-age=0 content-length: 0 date: Mon, 07 Dec 2020 03:12:59 GMT server: tsa_a set-cookie: personalization_id="v1_7M0qAqT4IHizTm3vckq/tQ=="; Max-Age=63072000; Expires=Wed, 07 Dec 2022 03:12:59 GMT; Path=/; Domain=.twitter.com; Secure; SameSite=None set-cookie: guest_id=v1%3A160731077930093670; Max-Age=63072000; Expires=Wed, 07 Dec 2022 03:12:59 GMT; Path=/; Domain=.twitter.com; Secure; SameSite=None strict-transport-security: max-age=631138519 x-connection-hash: bbc073d81faa811ca0a7bf6f7a3234b9 x-response-time: 5 x-tsa-request-body-time: 1 This is more information about environment. Windows 10 professional, Python 3.8.3 (default, Jul 2 2020, 17:30:36) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
closed
2020-12-07T03:15:17Z
2020-12-21T11:12:41Z
https://github.com/httpie/cli/issues/1002
[]
charujing
1
JaidedAI/EasyOCR
machine-learning
529
Can't recognise dollar sign
I'm using the following setting to read invoice data and noticed that the model can't read dollar sign. reader = easyocr.Reader(['en','la']) When I change the model to latin_g1, the model can pick up dollar signs most of the time. I wonder if dollar sign was not included in the training dataset.
closed
2021-08-30T08:00:18Z
2021-10-06T08:52:55Z
https://github.com/JaidedAI/EasyOCR/issues/529
[]
cypresswang
1
tflearn/tflearn
tensorflow
639
ValueError: Only call `softmax_cross_entropy_with_logits` with named arguments (labels=..., logits=..., ...)
Anaconda 3.5 TensorFlow 1.0 TFLearn from latest Git pull In: > tflearn\examples\extending_tensorflow> python trainer.py > Traceback (most recent call last): > File "trainer.py", line 39, in <module> > loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(net, Y)) > File "C:\Anaconda3\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 1578, in softmax_cross_entropy > labels, logits) > File "C:\Anaconda3\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 1533, in _ensure_xent_args > "named arguments (labels=..., logits=..., ...)" % name) > ValueError: Only call `softmax_cross_entropy_with_logits` with named arguments (labels=..., logits=..., ...)
open
2017-02-28T15:36:45Z
2018-11-08T19:16:09Z
https://github.com/tflearn/tflearn/issues/639
[]
EricPerbos
18
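The fix in trainer.py is to pass keyword arguments: `tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=net)`. The pure-Python function below mirrors the keyword-only signature TensorFlow 1.0 enforces (an illustrative model of the math, not TensorFlow itself):

```python
import math

def softmax_cross_entropy_with_logits(*, labels, logits):
    """Keyword-only, like TF >= 1.0: positional calls raise TypeError."""
    # numerically stable log-sum-exp of the logits
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    # cross-entropy between `labels` and softmax(logits)
    return sum(y * (log_sum_exp - z) for y, z in zip(labels, logits))
```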
ets-labs/python-dependency-injector
asyncio
50
Remove or replace pypi.in badges
https://pypip.in/ is not reachable anymore, need to remove or replace its badges from README
closed
2015-05-05T16:49:11Z
2015-05-12T16:09:54Z
https://github.com/ets-labs/python-dependency-injector/issues/50
[ "bug" ]
rmk135
1
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
1,626
SAR to Optical image generation
@junyanz, @taesungp I am using the CycleGAN model for SAR-to-optical image generation. The results are just okay (not too bad). I want to know how to improve the accuracy of the generated optical images.
open
2024-02-28T10:53:14Z
2024-10-10T10:54:12Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1626
[]
arrrrr3186
1
reloadware/reloadium
django
149
Why does it automatically jump to a breakpoint far away from X?
While debugging, when I add a comment on line X, and there is a breakpoint above and below line X, the program automatically jumps to some breakpoint far away from X after saving. I would like to know how to solve this — thank you very much.
closed
2023-05-28T09:12:04Z
2023-05-28T10:55:59Z
https://github.com/reloadware/reloadium/issues/149
[ "wontfix" ]
sajsxj
8
InstaPy/InstaPy
automation
6,665
It's not working
PS C:\Users\JUAN ROJAS\xsx> python quickstart.py InstaPy Version: 0.6.16 ._. ._. ._. ._. ._. ._. ._. ._. ._. ._. Workspace in use: "C:/Users/JUAN ROJAS/InstaPy" OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO INFO [2022-12-15 19:50:00] [gouppers] Session started! oooooooooooooooooooooooooooooooooooooooooooooooooooooo INFO [2022-12-15 19:50:00] [gouppers] -- Connection Checklist [1/2] (Internet Connection Status) INFO [2022-12-15 19:50:00] [gouppers] - Internet Connection Status: ok INFO [2022-12-15 19:50:00] [gouppers] - Current IP is "186.155.15.159" and it's from "Colombia/CO" INFO [2022-12-15 19:50:00] [gouppers] -- Connection Checklist [2/2] (Hide Selenium Extension) INFO [2022-12-15 19:50:00] [gouppers] - window.navigator.webdriver response: True WARNING [2022-12-15 19:50:00] [gouppers] - Hide Selenium Extension: error INFO [2022-12-15 19:50:03] [gouppers] - Cookie file not found, creating cookie... WARNING [2022-12-15 19:50:19] [gouppers] Login A/B test detected! Trying another string... WARNING [2022-12-15 19:50:24] [gouppers] Could not pass the login A/B test. Trying last string... ERROR [2022-12-15 19:50:30] [gouppers] Login A/B test failed! 
b"Message: Unable to locate element: //div[text()='Log In']\nStacktrace:\nRemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8\nWebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:182:5\nNoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:394:5\nelement.find/</<@chrome://remote/content/marionette/element.sys.mjs:275:16\n" Traceback (most recent call last): File "C:\Users\JUAN ROJAS\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\login_util.py", line 348, in login_user login_elem = browser.find_element( File "C:\Users\JUAN ROJAS\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 861, in find_element return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"] File "C:\Users\JUAN ROJAS\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 444, in execute self.error_handler.check_response(response) File "C:\Users\JUAN ROJAS\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 249, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //button[text()='Log In'] Stacktrace: RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8 WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:182:5 NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:394:5 element.find/</<@chrome://remote/content/marionette/element.sys.mjs:275:16 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\JUAN ROJAS\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\login_util.py", line 354, in login_user login_elem = browser.find_element( File "C:\Users\JUAN 
ROJAS\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 861, in find_element return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"] File "C:\Users\JUAN ROJAS\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 444, in execute self.error_handler.check_response(response) File "C:\Users\JUAN ROJAS\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 249, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: //a[text()='Log in'] Stacktrace: RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8 WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:182:5 NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:394:5 element.find/</<@chrome://remote/content/marionette/element.sys.mjs:275:16 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\JUAN ROJAS\AppData\Local\Programs\Python\Python39\lib\site-packages\instapy\login_util.py", line 361, in login_user login_elem = browser.find_element( File "C:\Users\JUAN ROJAS\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 861, in find_element return self.execute(Command.FIND_ELEMENT, {"using": by, "value": value})["value"] File "C:\Users\JUAN ROJAS\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 444, in execute self.error_handler.check_response(response) File "C:\Users\JUAN ROJAS\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 249, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.NoSuchElementException: Message: Unable to locate 
element: //div[text()='Log In'] Stacktrace: RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8 WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:182:5 NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:394:5 element.find/</<@chrome://remote/content/marionette/element.sys.mjs:275:16 ...................................................................................................................... CRITICAL [2022-12-15 19:50:30] [gouppers] Unable to login to Instagram! You will find more information in the logs above. '''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' ERROR [2022-12-15 19:50:30] [gouppers] You have too few comments, please set at least 10 distinct comments to avoid looking suspicious. INFO [2022-12-15 19:50:32] [gouppers] Sessional Live Report: |> No any statistics to show [Session lasted 41.04 seconds] OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO INFO [2022-12-15 19:50:32] [gouppers] Session ended! ooooooooooooooooooooooooooooooooooooooooooooooooooooo
open
2022-12-16T00:56:31Z
2022-12-26T14:49:34Z
https://github.com/InstaPy/InstaPy/issues/6665
[]
juanrojasm
1
fugue-project/fugue
pandas
186
[BUG] PandasDataFrame print slow
**Minimal Code To Reproduce** ```python with FugueWorkflow() as dag: dag.load("verylargedataset").show() ``` **Describe the bug** This can be very slow because the default `head` implementation is not good for pandas **Expected behavior** This should be very fast **Environment (please complete the following information):** - Backend: pandas/dask/ray? - Backend version: - Python version: - OS: linux/windows
closed
2021-04-22T20:51:28Z
2021-04-22T21:31:28Z
https://github.com/fugue-project/fugue/issues/186
[ "bug", "core feature" ]
goodwanghan
0
graphql-python/graphene-django
graphql
1,389
InputObjectType causing encoding issues with JSONField
**Note: for support questions, please use stackoverflow**. This repository's issues are reserved for feature requests and bug reports. * **What is the current behavior?** I have an `InputObjectType` with multiple fields and receive it in my `mutation` as a param. But it throws [Object of type proxy is not JSON serializable](https://stackoverflow.com/questions/48454398/typeerror-at-en-object-of-type-proxy-is-not-json-serializable) on passing it as a `JSONField` parameter for object creation. * **If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem** via a github repo, https://repl.it or similar. I have the following Django model with a JSONField inside: ``` from django_jsonfield_backport import models as mysql class RaceReport(TimeStampedModel): user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='race_reports') parameters = mysql.JSONField(blank=True, default=dict) ``` Then, there's a custom `InputObjectType` which looks like this: ``` class ParametersInput(graphene.InputObjectType): start_point = BigInteger(required=True) end_point = BigInteger(required=True) data_type = DataType() ``` I am using the above input type inside the following `Mutation`: ``` class CreateRaceReport(graphene.Mutation): class Arguments: parameters = ParametersInput(required=True) def mutate(_root, info, parameters): # noqa: N805 race_report = RaceReport.objects.create(user=info.context.user, parameters=parameters) # origin of the error return CreateRaceReport(race_report=race_report) ``` * **What is the expected behavior?** The `InputObjectType` should be resolved after we access its attributes or use`parameters.__dict__` but it stays lazy and causes the JSON encoding errors. * **What is the motivation / use case for changing the behavior?** Better compatibility with `JSONField`. * **Please tell us about your environment:** - Version: 2.1.8 - Platform: MacOS 13.0.1 * **Other information** (e.g. 
detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. StackOverflow) - https://stackoverflow.com/questions/75412492/graphene-inputobjecttype-causing-encoding-issues-with-jsonfield
open
2023-02-15T08:54:39Z
2023-02-15T08:54:39Z
https://github.com/graphql-python/graphene-django/issues/1389
[ "🐛bug" ]
adilhussain540
0
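A common workaround (hedged — `to_plain` is a hypothetical helper, not part of graphene) is to coerce the lazy input object into plain builtins before handing it to the JSONField, e.g. `parameters=to_plain(parameters)` inside `mutate`:

```python
def to_plain(value):
    """Recursively convert mapping-like inputs (e.g. a graphene InputObjectType,
    which exposes .items()) into JSON-serialisable builtins."""
    if hasattr(value, "items"):
        return {str(k): to_plain(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [to_plain(v) for v in value]
    return value
```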
xuebinqin/U-2-Net
computer-vision
335
How to test using my trained model?
Hello! I am using your repository. I trained on my own image data and got a weights file (u2net.pthu2net_bce_itr_8000_train_0.195329_tar_0.024132). But how do I run u2net_test.py with my trained model? I added this code: > elif(model_name=='u2net.pthu2net_bce_itr_8000_train_0.195329_tar_0.024132'): print("...my model") net = u2net.pthu2net_bce_itr_8000_train_0.195329_tar_0.024132(3,1) How should I modify "net = u2net.pthu2net_bce_itr_8000_train_0.195329_tar_0.024132(3,1)"? Thank you for reading.
open
2022-10-01T09:06:40Z
2023-10-09T06:37:41Z
https://github.com/xuebinqin/U-2-Net/issues/335
[]
hic9507
1
CorentinJ/Real-Time-Voice-Cloning
python
1,014
Dumb Question about a setup error
When I run the: `python demo_toolbox.py` command I get an error telling me that the download for _synthesizer.pt_ failed and I have to download it from a Google Drive link. When I have the file downloaded, can I just unzip the .zip and put the files in the: `Real-Time-Voice-Cloning` directory? Or do I have to do something else to get the program to work?
closed
2022-02-19T17:13:54Z
2022-03-27T15:17:54Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1014
[]
TheChillzed
1
autogluon/autogluon
scikit-learn
4,990
[timeseries] Prediction fails for irregular data if refit_full=True and skip_model_selection=True
**Bug Report Checklist** <!-- Please ensure at least one of the following to help the developers troubleshoot the problem: --> - [ ] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install --> - [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred --> - [x] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked --> **Describe the bug** <!-- A clear and concise description of what the bug is. --> **Expected behavior** <!-- A clear and concise description of what you expected to happen. --> **To Reproduce** ```python from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor data = TimeSeriesDataFrame.from_path( "https://autogluon.s3.amazonaws.com/datasets/timeseries/grocery_sales/test.csv", ) # Create an irregularly sampled dataframe data_irreg = data.iloc[[1, 31, 34, 62, 63, 64, 65, 66, 67, 68, 69, 70]] predictor = TimeSeriesPredictor(prediction_length=1, freq="W", target="unit_sales") predictor.fit( data_irreg, hyperparameters={"Croston": {}}, skip_model_selection=True, refit_full=True, ) predictor.predict(data_irreg) ``` **Screenshots / Logs** ``` Frequency 'W' stored as 'W-SUN' Beginning AutoGluon training... 
=================== System Info =================== AutoGluon Version: 1.2 Python Version: 3.11.10 Operating System: Linux Platform Machine: x86_64 Platform Version: #1 SMP Sat Feb 22 01:31:51 UTC 2025 CPU Count: 32 GPU Count: 4 Memory Avail: 230.86 GB Disk Space Avail: 533.94 GB =================================================== Fitting with arguments: {'enable_ensemble': True, 'eval_metric': WQL, 'freq': 'W-SUN', 'hyperparameters': {'Croston': {}}, 'known_covariates_names': [], 'num_val_windows': 1, 'prediction_length': 1, 'quantile_levels': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9], 'random_seed': 123, 'refit_every_n_windows': 1, 'refit_full': True, 'skip_model_selection': True, 'target': 'unit_sales', 'verbosity': 2} train_data with frequency 'None' has been resampled to frequency 'W-SUN'. Provided train_data has 14 rows (NaN fraction=14.3%), 3 time series. Median time series length is 4 (min=1, max=9). Provided data contains following columns: target: 'unit_sales' past_covariates: categorical: [] continuous (float): ['scaled_price', 'promotion_email'] AutoGluon will ignore following non-numeric/non-informative columns: ignored covariates: ['promotion_homepage'] To learn how to fix incorrectly inferred types, please see documentation for TimeSeriesPredictor.fit AutoGluon will gauge predictive performance using evaluation metric: 'WQL' This metric's sign has been flipped to adhere to being higher_is_better. The metric score can be multiplied by -1 to get the metric value. =================================================== Starting training. Start time is 2025-03-20 15:07:19 Models that will be trained: ['Croston'] Training timeseries model Croston. 0.01 s = Training runtime Training complete. Models trained: ['Croston'] Total runtime: 0.02 s Best model: Croston WARNING: refit_full functionality for TimeSeriesPredictor is experimental and is not yet supported by all models. 
Refitting models via `refit_full` using all of the data (combined train and validation)... Models trained in this way will have the suffix '_FULL' and have NaN validation score. This process is not bound by time_limit, but should take less time than the original `fit` call. Fitting model: Croston_FULL | Skipping fit via cloning parent ... Refit complete. Models trained: ['Croston_FULL'] Total runtime: 0.01 s Updated best model to 'Croston_FULL' (Previously 'Croston'). AutoGluon will default to using 'Croston_FULL' for predict(). data with frequency 'None' has been resampled to frequency 'W-SUN'. Model not specified in predict, will default to the model with the best validation score: Croston_FULL Warning: Croston_FULL failed for 3 time series (100.0%). Fallback model SeasonalNaive was used for these time series. Model Croston_FULL failed to predict with the following exception: Traceback (most recent call last): File "/uv_envs/stable/lib/python3.11/site-packages/autogluon/timeseries/trainer/abstract_trainer.py", line 1224, in get_model_pred_dict model_pred_dict[model_name] = self._predict_model( ^^^^^^^^^^^^^^^^^^^^ File "/uv_envs/stable/lib/python3.11/site-packages/autogluon/timeseries/trainer/abstract_trainer.py", line 1151, in _predict_model return model.predict(data, known_covariates=known_covariates) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/uv_envs/stable/lib/python3.11/site-packages/autogluon/timeseries/models/abstract/abstract_timeseries_model.py", line 432, in predict predictions = self._predict(data=data, known_covariates=known_covariates, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/uv_envs/stable/lib/python3.11/site-packages/autogluon/timeseries/models/local/abstract_local_model.py", line 186, in _predict predictions_df.index = get_forecast_horizon_index_ts_dataframe(data, self.prediction_length, freq=self.freq) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/uv_envs/stable/lib/python3.11/site-packages/autogluon/timeseries/utils/forecast.py", line 39, in get_forecast_horizon_index_ts_dataframe timestamps = np.dstack([last_ts + step * offset for step in range(1, prediction_length + 1)]).ravel() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/uv_envs/stable/lib/python3.11/site-packages/autogluon/timeseries/utils/forecast.py", line 39, in <listcomp> timestamps = np.dstack([last_ts + step * offset for step in range(1, prediction_length + 1)]).ravel() ~~~~~^~~~~~~~ TypeError: unsupported operand type(s) for *: 'int' and 'NoneType' ```
open
2025-03-20T15:10:26Z
2025-03-20T15:10:32Z
https://github.com/autogluon/autogluon/issues/4990
[ "bug", "module: timeseries" ]
shchur
0
voila-dashboards/voila
jupyter
618
ipywidget filelink does not work in voila
When using the ipywidgets FileLink with Jupyter Notebook it works well. After running the same with Voilà I get the message 403: Forbidden.
closed
2020-05-22T12:58:33Z
2020-08-31T09:26:34Z
https://github.com/voila-dashboards/voila/issues/618
[]
omontue
4
pytorch/pytorch
numpy
148,966
DISABLED test_parity__foreach_abs_fastpath_inplace_cuda_bfloat16 (__main__.TestForeachCUDA)
Platforms: linux, slow This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_parity__foreach_abs_fastpath_inplace_cuda_bfloat16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/38551199174). Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_parity__foreach_abs_fastpath_inplace_cuda_bfloat16` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. 
<details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/var/lib/jenkins/workspace/test/test_foreach.py", line 227, in test_parity actual = func( File "/var/lib/jenkins/workspace/test/test_foreach.py", line 90, in __call__ assert mta_called == (expect_fastpath and (not zero_size)), ( AssertionError: mta_called=False, expect_fastpath=True, zero_size=False, self.func.__name__='_foreach_abs_', keys=('aten::_foreach_abs_', 'Unrecognized', 'cudaLaunchKernel', 'Lazy Function Loading', 'cudaDeviceSynchronize') During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1159, in test_wrapper return test(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/mock.py", line 1833, in _inner return f(*args, **kw) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1972, in wrap_fn return fn(self, *args, **kwargs) File "/var/lib/jenkins/workspace/test/test_foreach.py", line 234, in test_parity with self.assertRaises(type(e)): File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 226, in __exit__ self._raiseFailure("{} not raised".format(exc_name)) File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 163, in _raiseFailure raise self.test_case.failureException(msg) AssertionError: AssertionError not raised The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper method(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper method(*args, **kwargs) File 
"/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 3150, in wrapper method(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 454, in instantiated_test result = test(self, **param_kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_utils.py", line 1612, in wrapper fn(*args, **kwargs) File "/opt/conda/envs/py_3.10/lib/python3.10/site-packages/torch/testing/_internal/common_device_type.py", line 1171, in test_wrapper raise e_tracked from e Exception: Caused by sample input at index 0: SampleInput(input=TensorList[Tensor[size=(20, 20), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(19, 19), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(18, 18), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(17, 17), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(16, 16), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(15, 15), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(14, 14), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(13, 13), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(12, 12), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(11, 11), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(10, 10), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(9, 9), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(8, 8), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(7, 7), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(6, 6), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(5, 5), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(4, 4), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(3, 3), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(2, 2), device="cuda:0", dtype=torch.bfloat16], Tensor[size=(1, 1), device="cuda:0", dtype=torch.bfloat16]], args=(), kwargs={}, broadcasts_input=False, name='') To execute this test, run the 
following from the base repo dir:
PYTORCH_OPINFO_SAMPLE_INPUT_INDEX=0 PYTORCH_TEST_CUDA_MEM_LEAK_CHECK=1 PYTORCH_TEST_WITH_SLOW_GRADCHECK=1 python test/test_foreach.py TestForeachCUDA.test_parity__foreach_abs_fastpath_inplace_cuda_bfloat16
This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `test_foreach.py` cc @clee2000 @crcrpar @mcarilli @janeyx99
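The assertion at the top of the traceback compares `mta_called` (whether a fused multi-tensor-apply CUDA kernel was seen by the profiler) against `expect_fastpath and not zero_size`. A minimal sketch of that check, using the profiler keys quoted in the error above (the substring heuristic and helper name are assumptions, not the actual test code):

```python
def mta_called(profiler_keys):
    """True if a fused multi_tensor_apply kernel shows up in the
    profiler trace (substring rule is our assumption, not the real
    test's logic)."""
    return any("multi_tensor_apply" in k for k in profiler_keys)

# Profiler keys quoted in the failure above: the fastpath kernel is
# absent, which is why `mta_called == (expect_fastpath and not zero_size)` failed.
keys = ("aten::_foreach_abs_", "Unrecognized", "cudaLaunchKernel",
        "Lazy Function Loading", "cudaDeviceSynchronize")
assert not mta_called(keys)
```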
open
2025-03-11T15:42:48Z
2025-03-11T15:42:53Z
https://github.com/pytorch/pytorch/issues/148966
[ "triaged", "module: flaky-tests", "skipped", "module: mta" ]
pytorch-bot[bot]
1
ultralytics/yolov5
machine-learning
12,863
Some issues regarding the function of save-txt in segmentation prediction task.
### Search before asking

- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.

### Question

When I was predicting on the segmentation task, I enabled `--save-txt`, but it didn't work as I expected. The result is as follows:

**Inference results**
![true](https://github.com/ultralytics/yolov5/assets/57213928/970a1036-0c79-446a-b4d3-6e3eee3b53bc)

**Saved txt**
[saved_seg_label.txt](https://github.com/ultralytics/yolov5/files/14818433/saved_seg_label.txt)

**Add outlines to the original image based on the txt file.** But it did not save all the points of the mask; each instance only saves a portion of the mask's outline points.
![added_outline](https://github.com/ultralytics/yolov5/assets/57213928/8bb7d32f-709d-4aae-8b1b-0df1d502f38c)

**Inference commands used**
`python segment/predict.py --weights runs/bare_soil_uncover-seg/v0.0.001/weights/best.pt --source test_sources/images/baresoil/test.jpg --save-txt --classes 0`

**Code of add outlines**

```python
if save_txt:
    segments = [
        scale_segments(im0.shape if retina_masks else im.shape[2:], x, im0.shape, normalize=True)
        for x in reversed(masks2segments(masks))]

# Print results
for c in det[:, 5].unique():
    n = (det[:, 5] == c).sum()  # detections per class
    s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string

# Mask plotting
annotator.masks(
    masks,
    colors=[colors(x, True) for x in det[:, 5]],
    im_gpu=torch.as_tensor(im0, dtype=torch.float16).to(device).permute(2, 0, 1).flip(0).contiguous() / 255
    if retina_masks else im[i])

# Write results
for j, (*xyxy, conf, cls) in enumerate(reversed(det[:, :6])):
    if save_txt:  # Write to file
        seg = segments[j].reshape(-1)  # (n,2) to (n*2)
        line = (cls, *seg, conf) if save_conf else (cls, *seg)  # label format
        with open(f'{txt_path}.txt', 'a') as f:
            f.write(('%g ' * len(line)).rstrip() % line + '\n')

    if save_img or save_crop or view_img:  # Add bbox to image
        c = int(cls)  # integer class
        label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
        annotator.box_label(xyxy, label, color=colors(c, True))
        # annotator.draw.polygon(segments[j], outline=colors(c, True), width=3)
    if save_crop:
        save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)

# Stream results
im0 = annotator.result()
if view_img:
    if platform.system() == 'Linux' and p not in windows:
        windows.append(p)
        cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO)  # allow window resize (Linux)
        cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0])
    cv2.imshow(str(p), im0)
    if cv2.waitKey(1) == ord('q'):  # 1 millisecond
        exit()

# Save results (image with detections)
if save_img:
    if dataset.mode == 'image':
        # add outlines ------------------------------------------------------------------
        for area in segments:
            # Obtain pixel coordinates based on scale
            area[:, 0] *= 1920
            area[:, 1] *= 1080
            area = area.astype(np.int64)
            print("area:", area)
            cv2.polylines(im0, [area], 1, (255, 0, 0), 5)
        # add outlines -----------------------------------------------------------------
        cv2.imwrite(save_path, im0)
    else:  # 'video' or 'stream'
        if vid_path[i] != save_path:  # new video
            vid_path[i] = save_path
            if isinstance(vid_writer[i], cv2.VideoWriter):
                vid_writer[i].release()  # release previous video writer
            if vid_cap:  # video
                fps = vid_cap.get(cv2.CAP_PROP_FPS)
                w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
            else:  # stream
                fps, w, h = 30, im0.shape[1], im0.shape[0]
            save_path = str(Path(save_path).with_suffix('.mp4'))  # force *.mp4 suffix on results videos
            vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h))
        vid_writer[i].write(im0)
```

### Additional

_No response_
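For anyone post-processing these labels: each saved line is `cls x1 y1 x2 y2 …` with coordinates normalized to the image size, so the outline should be rescaled by the actual width/height rather than a hard-coded 1920/1080. (The missing points may also come from `masks2segments`, whose default strategy keeps a single contour per mask.) A small parsing sketch — the helper name is ours, not part of YOLOv5:

```python
def parse_seg_label(line, img_w, img_h):
    """Parse one YOLOv5 segment label line ('cls x1 y1 x2 y2 ...',
    normalized) into (class_id, [(x_px, y_px), ...])."""
    vals = line.split()
    cls = int(float(vals[0]))
    coords = [float(v) for v in vals[1:]]
    pts = [(round(coords[i] * img_w), round(coords[i + 1] * img_h))
           for i in range(0, len(coords), 2)]
    return cls, pts

cls, pts = parse_seg_label("0 0.5 0.5 0.25 0.75 0.75 0.75", 1920, 1080)
# pts == [(960, 540), (480, 810), (1440, 810)]
# convert to an int array and draw with cv2.polylines(im0, [arr], True, (255, 0, 0), 5)
```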
closed
2024-04-01T02:12:32Z
2024-10-20T19:42:33Z
https://github.com/ultralytics/yolov5/issues/12863
[ "question" ]
CHR1122
4
FlareSolverr/FlareSolverr
api
623
[hddolby] (updating) FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Unable to process browser request. ProtocolError: Protocol error (Page.navigate): frameId not supported RemoteAgentError@chrome://remote/content/cdp/Error.jsm:29:5UnsupportedError@chrome://remote/content/cdp/Error.jsm:106:1navigate@chrome://remote/content/cdp/domains/parent/Page.jsm:103:13execute@chrome://remote/content/cdp/domains/DomainCache.jsm:101:25execute@chrome://remote/content/cdp/sessions/Session.jsm:64:25execute@chrome://remote/content/cdp/sessions/TabSession.jsm:67:20onPacket@chrome://remote/content/cdp/CDPConnection.jsm:248:36onMessage@chrome://remote/content/server/WebSocketTransport.jsm:89:18handleEvent@chrome://remote/content/server/WebSocketTransport.jsm:71:14
**Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue. Check closed issues as well, because your issue may have already been fixed. ### How to enable debug and html traces [Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace) ### Environment * **FlareSolverr version**: * **Last working FlareSolverr version**: * **Operating system**: * **Are you using Docker**: [yes/no] * **FlareSolverr User-Agent (see log traces or / endpoint)**: * **Are you using a proxy or VPN?** [yes/no] * **Are you using Captcha Solver:** [yes/no] * **If using captcha solver, which one:** * **URL to test this issue:** ### Description [List steps to reproduce the error and details on what happens and what you expected to happen] ### Logged Error Messages [Place any relevant error messages you noticed from the logs here.] [Make sure you attach the full logs with your personal information removed in case we need more information] ### Screenshots [Place any screenshots of the issue here if needed]
closed
2022-12-17T14:28:16Z
2022-12-17T22:15:33Z
https://github.com/FlareSolverr/FlareSolverr/issues/623
[ "duplicate", "invalid" ]
rmjsaxs
1
microsoft/unilm
nlp
1,058
text tokenizer for beitv3?
**Describe** Model I am using (UniLM, MiniLM, LayoutLM ...): BEiT-3. The tokenizer for visual images uses beitv2: https://github.com/microsoft/unilm/blob/master/beit2/test_get_code.py but the tokenizer for text is not mentioned?
closed
2023-04-10T13:27:33Z
2023-04-26T07:05:10Z
https://github.com/microsoft/unilm/issues/1058
[]
PanXiebit
8
scrapy/scrapy
web-scraping
6,561
Improve the contribution documentation
It would be nice to have something like [this](https://github.com/scrapy/scrapy/issues/1615#issuecomment-2497663596) in a section of the contribution docs that we can link easily to such questions.
closed
2024-11-25T10:52:01Z
2024-12-12T10:38:31Z
https://github.com/scrapy/scrapy/issues/6561
[ "enhancement", "docs" ]
Gallaecio
2
lepture/authlib
django
303
authorize_access_token() doesn't add client_secret to query with GET request
When you send GET request to get access token client_secret is not added to query. Here is some prints from methods: request method: GET <authlib.integrations.httpx_client.oauth2_client.OAuth2ClientAuth object at 0x00000203BB662B80> SOME_CLIENT_SECRET send method: <authlib.integrations.httpx_client.oauth2_client.OAuth2ClientAuth object at 0x00000203BB662B80> SOME_CLIENT_SECRET _send_handling_auth method: response <Response [401 Unauthorized]> b'{"error":"invalid_client","error_description":"client_secret is undefined"}' As you can see everything fine with Auth object and client_secret is in it, but something wrong with _send_handling_auth Full error callback: client_secret <authlib.integrations.httpx_client.oauth2_client.OAuth2ClientAuth object at 0x0000020867A03B80> SOME_CLIENT_SECRET Traceback (most recent call last): File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 394, in run_asgi result = await app(self.scope, self.receive, self.send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 45, in __call__ return await self.app(scope, receive, send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\fastapi\applications.py", line 179, in __call__ await super().__call__(scope, receive, send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\applications.py", line 111, in __call__ await self.middleware_stack(scope, receive, send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\middleware\errors.py", line 181, in __call__ raise exc from None File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\middleware\errors.py", line 159, in __call__ await self.app(scope, receive, _send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\middleware\cors.py", line 78, in 
__call__ await self.app(scope, receive, send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\middleware\sessions.py", line 75, in __call__ await self.app(scope, receive, send_wrapper) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\exceptions.py", line 82, in __call__ raise exc from None File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\exceptions.py", line 71, in __call__ await self.app(scope, receive, sender) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\routing.py", line 566, in __call__ await route.handle(scope, receive, send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\routing.py", line 227, in handle await self.app(scope, receive, send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\routing.py", line 41, in app response = await func(request) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\fastapi\routing.py", line 182, in app raw_response = await run_endpoint_function( File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\fastapi\routing.py", line 133, in run_endpoint_function return await dependant.call(**values) File ".\src\app\auth\routes.py", line 47, in vk_auth token = await oauth.vk.authorize_access_token(request, method='GET') File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\authlib\integrations\starlette_client\integration.py", line 64, in authorize_access_token return await self.fetch_access_token(**params) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\authlib\integrations\base_client\async_app.py", line 105, in fetch_access_token token = await client.fetch_token(token_endpoint, **kwargs) File 
"c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\authlib\integrations\httpx_client\oauth2_client.py", line 133, in _fetch_token return self.parse_response_token(resp.json()) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\authlib\oauth2\client.py", line 380, in parse_response_token self.handle_error(error, description) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\authlib\integrations\httpx_client\oauth2_client.py", line 82, in handle_error raise OAuthError(error_type, error_description) authlib.integrations.base_client.errors.OAuthError: invalid_client: client_secret is undefined ERROR:uvicorn.error:Exception in ASGI application Traceback (most recent call last): File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 394, in run_asgi result = await app(self.scope, self.receive, self.send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 45, in __call__ return await self.app(scope, receive, send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\fastapi\applications.py", line 179, in __call__ await super().__call__(scope, receive, send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\applications.py", line 111, in __call__ await self.middleware_stack(scope, receive, send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\middleware\errors.py", line 181, in __call__ raise exc from None File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\middleware\errors.py", line 159, in __call__ await self.app(scope, receive, _send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\middleware\cors.py", line 78, in __call__ await self.app(scope, receive, send) File 
"c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\middleware\sessions.py", line 75, in __call__ await self.app(scope, receive, send_wrapper) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\exceptions.py", line 82, in __call__ raise exc from None File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\exceptions.py", line 71, in __call__ await self.app(scope, receive, sender) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\routing.py", line 566, in __call__ await route.handle(scope, receive, send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\routing.py", line 227, in handle await self.app(scope, receive, send) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\starlette\routing.py", line 41, in app response = await func(request) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\fastapi\routing.py", line 182, in app raw_response = await run_endpoint_function( File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\fastapi\routing.py", line 133, in run_endpoint_function return await dependant.call(**values) File ".\src\app\auth\routes.py", line 47, in vk_auth token = await oauth.vk.authorize_access_token(request, method='GET') File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\authlib\integrations\starlette_client\integration.py", line 64, in authorize_access_token return await self.fetch_access_token(**params) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\authlib\integrations\base_client\async_app.py", line 105, in fetch_access_token token = await client.fetch_token(token_endpoint, **kwargs) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\authlib\integrations\httpx_client\oauth2_client.py", line 133, in _fetch_token 
return self.parse_response_token(resp.json()) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\authlib\oauth2\client.py", line 380, in parse_response_token self.handle_error(error, description) File "c:\users\ander\appdata\local\programs\python\python39\lib\site-packages\authlib\integrations\httpx_client\oauth2_client.py", line 82, in handle_error raise OAuthError(error_type, error_description) authlib.integrations.base_client.errors.OAuthError: invalid_client: client_secret is undefined

**To Reproduce**

```python
oauth.register(
    name='vk',
    client_id=CLIENT_ID,
    client_secret=CLIENT_SECRET,
    authorize_url='https://oauth.vk.com/authorize',
    access_token_url='https://oauth.vk.com/access_token',
)


@router.get('/login/vk')
async def vk_login(request: Request):
    redirect_uri = request.url_for('vk_auth')
    return await oauth.vk.authorize_redirect(request, redirect_uri)


@router.get('/')
async def vk_auth(request: Request):
    token = await oauth.vk.authorize_access_token(request, method='GET')  # <- error here
```

**Expected behavior**

`await oauth.vk.authorize_access_token(request, method='GET')` makes a request with a url like https://some-site.com/access-token?client_id=id&client_secret=secret&redirect_uri=uri&code=code&state=state

**Environment:**
- OS: Windows 10
- Python Version: 3.9
- Authlib Version: 0.15.2
- starlette: 0.13.6
- fastapi: 0.61.2
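VK's token endpoint expects the client credentials in the query string of a GET request. Until the client sends them there, a manual workaround is to build the token URL yourself; this sketch only constructs the URL (the helper name is ours, and the HTTP call and token parsing are left to the caller):

```python
from urllib.parse import urlencode


def vk_token_url(client_id, client_secret, redirect_uri, code):
    """Build VK's access_token URL with the credentials in the query
    string, the way the endpoint expects for a GET request.
    (Workaround sketch; parameter handling here is ours, not Authlib's.)"""
    params = {
        "client_id": client_id,
        "client_secret": client_secret,  # VK rejects requests without this in the query
        "redirect_uri": redirect_uri,
        "code": code,
    }
    return "https://oauth.vk.com/access_token?" + urlencode(params)


url = vk_token_url("id", "secret", "https://example.com/auth", "abc")
# fetch with e.g. httpx.get(url) and parse the JSON token response
```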
closed
2020-12-14T11:27:10Z
2020-12-18T06:15:42Z
https://github.com/lepture/authlib/issues/303
[ "bug" ]
Ander813
3
agronholm/anyio
asyncio
411
include comparison with other python SC libs?
It may help (the community) to include a "related projects" or comparisons area in the docs somewhere. I'm not sure how big this list would be realistically. I found [quattro](https://github.com/Tinche/quattro/issues/1) and, although I'm not sure why it exists, it may make sense to do a comparison with it if that's worthwhile.
closed
2022-01-13T02:04:32Z
2022-01-13T08:42:23Z
https://github.com/agronholm/anyio/issues/411
[]
parity3
0
pallets-eco/flask-sqlalchemy
sqlalchemy
993
SQLite databases are not created in the instance directory
In Flask-SQLAlchemy 2.5.1, when I create a SQLite database connection with a relative path, the database is created in the application root directory rather than the instance directory. It looks like this should have changed in #537, but while that code was merged into `main` in May 2020, it has not yet been included in a release.

### Actual Behavior

##### `config.py`:

```python
SQLALCHEMY_DATABASE_URI = 'sqlite:///development.db'
```

The file `development.db` is created in `app.root_path`. (In this case, `./my-package`, which is under source control.)

### Expected Behavior

The file `development.db` is created in `app.instance_path`. (In this case `./instance`, which is git-ignored.)

Environment:

- Python version: 3.9.5
- Flask-SQLAlchemy version: 2.5.1
- SQLAlchemy version: 1.4.23
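As a stopgap until the instance-folder behavior ships in a release, the URI can be made absolute under the instance path by hand. A sketch (the helper name is ours; in a real app you would pass `app.instance_path`):

```python
import os
import tempfile


def instance_sqlite_uri(instance_path, db_name="development.db"):
    """Build an absolute sqlite:/// URI under the app's instance folder,
    so the database file is not created relative to app.root_path."""
    os.makedirs(instance_path, exist_ok=True)
    return "sqlite:///" + os.path.join(instance_path, db_name)


uri = instance_sqlite_uri(os.path.join(tempfile.gettempdir(), "myapp-instance"))
# app.config['SQLALCHEMY_DATABASE_URI'] = instance_sqlite_uri(app.instance_path)
```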
closed
2021-08-28T18:21:17Z
2022-10-03T00:21:51Z
https://github.com/pallets-eco/flask-sqlalchemy/issues/993
[]
chrisbouchard
2
huggingface/datasets
tensorflow
6,907
Support the deserialization of json lines files comprised of lists
### Feature request

I manage a somewhat large and popular Hugging Face dataset known as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus). I recently updated my corpus to be stored in a json lines file where each line is an array and each element represents a value at a particular column. Previously, my corpus was stored as a json lines file where each line was a dictionary and the keys were the fields. Essentially, a line in my json lines file used to look like this:

```json
{"version_id":"","type":"","jurisdiction":"","source":"","citation":"","url":"","when_scraped":"","text":""}
```

And now it looks like this:

```json
["","","","","","","",""]
```

This saves 65 bytes per document and allows me to very quickly serialise and deserialise documents via `msgspec`. After making this change, I found that `datasets` was incapable of deserialising my corpus without a custom loading script, even if I ensured that the `dataset_info` field in my dataset card contained the desired names of my features. I would like to request that functionality be added to support this format, which is more memory-efficient and faster than using dictionaries.

### Motivation

The [documentation](https://huggingface.co/docs/datasets/en/dataset_script) for creating dataset loading scripts asserts that:

> In the next major release, the new safety features of 🤗 Datasets will disable running dataset loading scripts by default, and you will have to pass trust_remote_code=True to load datasets that require running a dataset script.

I would rather not require my users to pass `trust_remote_code=True`, which means that I will need built-in support for this format.

### Your contribution

I would be happy to submit a PR for this if this is something you would incorporate into `datasets` and if I can be pointed to where the code would need to go.
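Until such support lands, the list-per-line layout can be turned back into dict rows by zipping each array with the column names from the dataset card. A sketch (column list taken from the example line above; the generator name is ours):

```python
import json

COLUMNS = ["version_id", "type", "jurisdiction", "source",
           "citation", "url", "when_scraped", "text"]


def rows_to_dicts(jsonl_lines, columns=COLUMNS):
    """Turn list-style json-lines rows back into the dict rows that
    `datasets` understands today (column order must match the card)."""
    for line in jsonl_lines:
        values = json.loads(line)
        yield dict(zip(columns, values))


rows = list(rows_to_dicts(['["1","act","nsw","src","cit","u","now","body"]']))
```

The generator can then be fed to e.g. `datasets.Dataset.from_generator`.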
open
2024-05-18T05:07:23Z
2024-05-18T08:53:28Z
https://github.com/huggingface/datasets/issues/6907
[ "enhancement" ]
umarbutler
1
zappa/Zappa
django
448
[Migrated] Zappa doesn't work with case insensetive headers.
Originally from: https://github.com/Miserlou/Zappa/issues/1188 by [Speedrockracer](https://github.com/Speedrockracer) Edit: Figured it out if the header is "content-type" zappa doesn't pass it to flask. "Content-Type" gets passed. According to the specs headers should be case insensetive. ## Context Request with lowercase content-type header ``` [1508701474917] [DEBUG] 2017-10-22T19:44:34.917Z 6e53dd5a-b761-11e7-9c1b-affcee69afd7 Zappa Event: {'resource': '/{proxy+}', 'path': '/auth/', 'httpMethod': 'POST', 'headers': {'Accept': 'application/json', 'Accept-Encoding': 'gzip;q=1.0, compress;q=0.5', 'Accept-Language': 'en-NL;q=1.0', 'CloudFront-Forwarded-Proto': 'https', 'CloudFront-Is-Desktop-Viewer': 'true', 'CloudFront-Is-Mobile-Viewer': 'false', 'CloudFront-Is-SmartTV-Viewer': 'false', 'CloudFront-Is-Tablet-Viewer': 'false', 'CloudFront-Viewer-Country': 'NL', 'content-type': 'application/json', 'Host': 'api.urlyapp.io', 'User-Agent': 'Urly/1.0 (com.esvaru.urlyapp; build:2; iOS 10.3.2) Alamofire/4.5.1', 'Via': '2.0 25d8d373b361f7af9e59da6c842223d0.cloudfront.net (CloudFront)', 'X-Amz-Cf-Id': 'G7kmFCmIMOU_3QdJ3dEDwVhHDCMeQ33hS-m_WjLC4PRVk7EM3k4iEA==', 'X-Amzn-Trace-Id': 'Root=1-59ecf522-084dee24514d78a36554bebe', 'X-Forwarded-For': '86.80.153.197, 216.137.58.67', 'X-Forwarded-Port': '443', 'X-Forwarded-Proto': 'https'}, 'queryStringParameters': None, 'pathParameters': {'proxy': 'auth'}, 'stageVariables': None, 'requestContext': {'path': '/auth/', 'accountId': '338913649781', 'resourceId': 'pkmwck', 'stage': 'production', 'requestId': '6e538fad-b761-11e7-8981-e5887a02dba4', 'identity': {'cognitoIdentityPoolId': None, 'accountId': None, 'cognitoIdentityId': None, 'caller': None, 'apiKey': '', 'sourceIp': '86.80.153.197', 'accessKey': None, 'cognitoAuthenticationType': None, 'cognitoAuthenticationProvider': None, 'userArn': None, 'userAgent': 'Urly/1.0 (com.esvaru.urlyapp; build:2; iOS 10.3.2) Alamofire/4.5.1', 'user': None}, 'resourcePath': '/{proxy+}', 'httpMethod': 
'POST', 'apiId': 'pdbjoebwd7'}, 'body': '{"title":"test","type":2,"username":"admin","password":"*******"}', 'isBase64Encoded': False} [1508701879629] ========= Printing request.headers: [1508701474918] Content-Length: 63 [1508701474918] Accept: application/json [1508701474918] Accept-Encoding: gzip;q=1.0, compress;q=0.5 [1508701474918] Accept-Language: en-NL;q=1.0 [1508701474918] Cloudfront-Is-Tablet-Viewer: false [1508701474918] Cloudfront-Viewer-Country: NL [1508701474918] Host: api.urlyapp.io [1508701474918] User-Agent: Urly/1.0 (com.esvaru.urlyapp; build:2; iOS 10.3.2) Alamofire/4.5.1 [1508701474918] Via: 2.0 25d8d373b361f7af9e59da6c842223d0.cloudfront.net (CloudFront) [1508701474918] X-Amz-Cf-Id: G7kmFCmIMOU_3QdJ3dEDwVhHDCMeQ33hS-m_WjLC4PRVk7EM3k4iEA== [1508701474918] X-Amzn-Trace-Id: Root=1-59ecf522-084dee24514d78a36554bebe [1508701474918] X-Forwarded-For: 86.80.153.197, 216.137.58.67 [1508701474918] X-Forwarded-Port: 443 [1508701474918] X-Forwarded-Proto: https [1508701474918] Cloudfront-Forwarded-Proto: https [1508701474918] Cloudfront-Is-Desktop-Viewer: true [1508701474918] Cloudfront-Is-Mobile-Viewer: false [1508701474918] Cloudfront-Is-Smarttv-Viewer: false [1508701879629] ========= Printing request.headers.get('Content-type'): [1508701474918] None [1508701474918] None [1508701879629] ========= Printing raw and parsed body: [1508701474918] b'{"title":"test","type":2,"username":"admin","password":"******"}' [1508701474918] b'{"title":"test","type":2,"username":"admin","password":"******"}' [1508701474919] ImmutableMultiDict([]) [1508701474919] None [1508701474920] None ``` As you can see the content type header is printed in zappa event but not present in the headers. 
Here are the logs for a "seemingly" same request made with a Content-Type header ``` [1508701879629] [DEBUG] 2017-10-22T19:51:19.614Z 5f8a0bf8-b762-11e7-ad60-29f5fbcdec28 Zappa Event: {'resource': '/{proxy+}', 'path': '/auth/', 'httpMethod': 'POST', 'headers': {'Accept': 'application/json', 'Accept-Encoding': 'gzip, deflate', 'cache-control': 'no-cache', 'CloudFront-Forwarded-Proto': 'https', 'CloudFront-Is-Desktop-Viewer': 'true', 'CloudFront-Is-Mobile-Viewer': 'false', 'CloudFront-Is-SmartTV-Viewer': 'false', 'CloudFront-Is-Tablet-Viewer': 'false', 'CloudFront-Viewer-Country': 'NL', 'Content-Type': 'application/json', 'Host': 'api.urlyapp.io', 'Postman-Token': '529c1777-5861-41cd-8edd-39b1b6ecd3c2', 'User-Agent': 'PostmanRuntime/6.3.2', 'Via': '1.1 48e3cf3ee71856983cda4c8805113c56.cloudfront.net (CloudFront)', 'X-Amz-Cf-Id': 'NjwfT3tNZNMSdt-Vu4L9oImqLkpwO_EFarNwkTA6-T7rGuT3VcCmxQ==', 'X-Amzn-Trace-Id': 'Root=1-59ecf6b7-7a14636508fe1f6d65b89905', 'X-Forwarded-For': '86.80.153.197, 216.137.58.67', 'X-Forwarded-Port': '443', 'X-Forwarded-Proto': 'https'}, 'queryStringParameters': None, 'pathParameters': {'proxy': 'auth'}, 'stageVariables': None, 'requestContext': {'path': '/auth/', 'accountId': '338913649781', 'resourceId': 'pkmwck', 'stage': 'production', 'requestId': '5f83f1f2-b762-11e7-b6b8-b16418e5696d', 'identity': {'cognitoIdentityPoolId': None, 'accountId': None, 'cognitoIdentityId': None, 'caller': None, 'apiKey': '', 'sourceIp': '86.80.153.197', 'accessKey': None, 'cognitoAuthenticationType': None, 'cognitoAuthenticationProvider': None, 'userArn': None, 'userAgent': 'PostmanRuntime/6.3.2', 'user': None}, 'resourcePath': '/{proxy+}', 'httpMethod': 'POST', 'apiId': 'pdbjoebwd7'}, 'body': '{\n "title": "test",\n "type": 2,\n "username": "admin",\n "password": "*******"\n }', 'isBase64Encoded': False} [1508701879629] ========= Printing request.headers: [1508701879629] Content-Type: application/json [1508701879629] Content-Length: 108 [1508701879629] Accept: 
application/json [1508701879629] Accept-Encoding: gzip, deflate [1508701879629] Cloudfront-Is-Mobile-Viewer: false [1508701879629] Cloudfront-Is-Smarttv-Viewer: false [1508701879629] Cloudfront-Is-Tablet-Viewer: false [1508701879629] Host: api.urlyapp.io [1508701879629] Postman-Token: 529c1777-5861-41cd-8edd-39b1b6ecd3c2 [1508701879629] User-Agent: PostmanRuntime/6.3.2 [1508701879629] Via: 1.1 48e3cf3ee71856983cda4c8805113c56.cloudfront.net (CloudFront) [1508701879629] X-Amz-Cf-Id: NjwfT3tNZNMSdt-Vu4L9oImqLkpwO_EFarNwkTA6-T7rGuT3VcCmxQ== [1508701879629] X-Amzn-Trace-Id: Root=1-59ecf6b7-7a14636508fe1f6d65b89905 [1508701879629] X-Forwarded-For: 86.80.153.197, 216.137.58.67 [1508701879629] X-Forwarded-Port: 443 [1508701879629] X-Forwarded-Proto: https [1508701879629] Cache-Control: no-cache [1508701879629] Cloudfront-Forwarded-Proto: https [1508701879629] Cloudfront-Is-Desktop-Viewer: true [1508701879629] Cloudfront-Viewer-Country: NL [1508701879629] ========= Printing request.headers.get('Content-type'): [1508701879629] application/json [1508701879629] application/json [1508701879629] ========= Printing raw and parsed body: [1508701879629] b'{"title": "test","type": 2,"username": "admin","password": "*******"}' [1508701879629] b'{"title": "test","type": 2,"username": "admin","password": "*******"}' [1508701879629] ImmutableMultiDict([]) [1508701879629] {'title': 'test', 'type': 2, 'username': 'admin', 'password': '*******'} [1508701879629] {'title': 'test', 'type': 2, 'username': 'admin', 'password': '*******'} ``` I tried: ## Expected Behavior The content type header should be available and the request body parsed. ## Actual Behavior Content type header is None and the request body is not parsed. 
## Steps to Reproduce

Works:

```
curl -i -X POST -H "Content-Type: application/x-www-form-urlencoded" -d "password=tester&title=Django%E2%80%99s%20MacBook%20Pro&type=2&username=tester" https://api.urlyapp.io/auth/
```

Doesn't work:

```
curl -i -X POST -H "content-type: application/x-www-form-urlencoded" -d "password=tester&title=Django%E2%80%99s%20MacBook%20Pro&type=2&username=tester" https://api.urlyapp.io/auth/
```

## Your Environment

* Zappa version used: 0.44.3
* Operating System and Python version: AWS Lambda python 3.6
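Header field names are case-insensitive per the HTTP spec, so the fix amounts to normalizing incoming keys before any lookups. A sketch of that normalization (not Zappa's actual code):

```python
def normalize_headers(headers):
    """Title-case header names so 'content-type' and 'Content-Type'
    collapse to one key before the request environ is built."""
    return {k.title(): v for k, v in headers.items()}


h = normalize_headers({"content-type": "application/json", "Host": "api.urlyapp.io"})
# h["Content-Type"] == "application/json" regardless of the sender's casing
```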
closed
2021-02-20T08:34:59Z
2022-07-16T07:33:06Z
https://github.com/zappa/Zappa/issues/448
[]
jneves
1
nonebot/nonebot2
fastapi
2,588
Bug: Error when using the Minecraft adapter
### Operating System Windows ### Python Version 3.12.2 ### NoneBot Version 1.3.1 ### Adapter Minecraft 1.0.8 ### Protocol Side Not required ### Describe the Problem pydantic raises an error when loading the Minecraft adapter ### Steps to Reproduce 1. Install nonebot normally 2. Select the Minecraft adapter when creating the project 3. Leave the remaining parameters at their defaults 4. Start normally to reproduce ### Expected Result The Minecraft adapter loads normally ### Screenshots or Logs ![image](https://github.com/nonebot/nonebot2/assets/99112592/dfbd9116-3cd0-4ffa-a38b-cfd694083127)
closed
2024-02-23T13:43:54Z
2024-02-24T03:50:14Z
https://github.com/nonebot/nonebot2/issues/2588
[ "question" ]
MingriLingran
2
lepture/authlib
flask
97
request an example of openid intergration
Can I replace flask-openid with Authlib? If the answer is 'yes', could you please show an example? Many thanks!
closed
2018-11-19T03:33:57Z
2018-11-21T00:23:02Z
https://github.com/lepture/authlib/issues/97
[]
jor112358
1
docarray/docarray
pydantic
1,530
Bug: DocIndex search fails when it's empty
```python
import numpy as np
from docarray import BaseDoc
from docarray.index import InMemoryExactNNIndex
from docarray.typing import NdArray
from pydantic import parse_obj_as


class MyDoc(BaseDoc):
    emb: NdArray


index = InMemoryExactNNIndex[MyDoc]()
query = parse_obj_as(NdArray, np.random.rand(5))
docs, _ = index.find(query, search_field="emb", limit=10)
```

```shell
ValueError: need at least one array to stack
```
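The `ValueError` comes from stacking an empty list of embeddings, so a guard for the zero-document case avoids the crash. A caller-side sketch with hypothetical names (dot-product scoring stands in for docarray's actual metric):

```python
import numpy as np


def safe_find(embeddings, query, limit):
    """Exact-NN lookup that tolerates an empty index: np.stack on an
    empty list is what raises 'need at least one array to stack'."""
    if len(embeddings) == 0:
        return [], np.empty(0)  # no documents indexed yet
    db = np.stack(embeddings)            # (n_docs, dim)
    scores = db @ query                  # dot-product similarity
    top = np.argsort(scores)[::-1][:limit]
    return top.tolist(), scores[top]


docs, scores = safe_find([], np.random.rand(5), limit=10)  # empty, no crash
```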
closed
2023-05-11T12:39:58Z
2023-05-15T09:03:12Z
https://github.com/docarray/docarray/issues/1530
[ "type/bug" ]
jupyterjazz
1
mithi/hexapod-robot-simulator
plotly
41
Recomputed Hexapod is sometimes not equal to Updated Hexapod. Why?
https://mithi.github.io/robotics-blog/blog/hexapod-simulator/3-prerelease-2/
open
2020-04-13T08:26:02Z
2020-06-18T21:25:13Z
https://github.com/mithi/hexapod-robot-simulator/issues/41
[]
mithi
4
Nike-Inc/koheesio
pydantic
67
[FEATURE] Dummy issue
<!-- We follow Design thinking principles to bring the new feature request to life. Please read through [Design thinking](https://www.interaction-design.org/literature/article/5-stages-in-the-design-thinking-process) principles if you are not familiar. --> <!-- This is the [Board](https://github.com/orgs/Nike-Inc/projects/4) your feature request would go through, so keep in mind that there would be more back and forth on this. If you are very clear with all phases, please describe them here for faster development. --> ## Is your feature request related to a problem? Please describe. <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> ... ## Describe the solution you'd like <!-- A clear and concise description of what you want to happen. --> ... ## Describe alternatives you've considered <!-- A clear and concise description of any alternative solutions or features you've considered. --> ... ## Additional context <!-- Add any other context or screenshots about the feature request here. --> ...
closed
2024-09-27T09:33:31Z
2024-11-08T11:18:50Z
https://github.com/Nike-Inc/koheesio/issues/67
[ "enhancement" ]
femilian-6582
0
chatanywhere/GPT_API_free
api
313
How can I resolve the "pdf is undefined" error when using GPT's Ask PDF feature?
The error is as follows: ![屏幕截图 2024-10-28 202736](https://github.com/user-attachments/assets/a41bde65-84e9-414a-805d-a35e0519bfdc)
open
2024-10-28T12:27:48Z
2024-11-04T00:52:04Z
https://github.com/chatanywhere/GPT_API_free/issues/313
[]
royyyylyyy226
3
jina-ai/serve
deep-learning
5,658
python vs YAML section should mention Executor/Deployment and Gateway, not just Flow
The Python vs YAML section (https://docs.jina.ai/concepts/preliminaries/coding-in-python-yaml/) should not only mention the Flow. It should also span Executor/Gateway/Deployment YAML and be showcased as a general concept in Jina.
closed
2023-02-06T09:31:57Z
2023-02-13T14:48:34Z
https://github.com/jina-ai/serve/issues/5658
[]
alaeddine-13
0
rthalley/dnspython
asyncio
620
dns.query.xfr() - timeout error
Seems that setting a timeout in dns.query.xfr() throws an error.

```
python test.py
Traceback (most recent call last):
  File "test.py", line 7, in <module>
    axfr_data = dns.zone.from_xfr( dns.query.xfr(server, zone, timeout=30.00))
  File "/usr/local/lib/python3.7/site-packages/dns/zone.py", line 1106, in from_xfr
    for r in xfr:
  File "/usr/local/lib/python3.7/site-packages/dns/query.py", line 611, in xfr
    if mexpiration is None or mexpiration > expiration:
TypeError: '>' not supported between instances of 'float' and 'NoneType'
```

Test code:

```
import dns.zone
import dns.query

zone = 'zonetransfer.me.'
server = 'nsztm1.digi.ninja.'

axfr_data = dns.zone.from_xfr( dns.query.xfr(server, zone, timeout=30.00))
print(axfr_data.to_text().decode())
```

* Python 3.7.9

```
pkg info py37-dnspython-1.16.0
py37-dnspython-1.16.0
Name           : py37-dnspython
Version        : 1.16.0
Installed on   : Sun Jan 3 00:15:38 2021 GMT
Origin         : dns/py-dnspython
Architecture   : FreeBSD:12:*
Prefix         : /usr/local
Categories     : python dns
Licenses       : ISCL
Maintainer     : rm@FreeBSD.org
WWW            : http://www.dnspython.org/
Comment        : DNS toolkit for Python
Options        :
    EXAMPLES       : on
    PYCRYPTODOME   : off
Annotations    :
    flavor         : py37
    repo_type      : binary
    repository     : FreeBSD
Flat size      : 1.29MiB
Description    :
dnspython is a DNS toolkit for Python. It supports almost all record types.
It can be used for queries, zone transfers, and dynamic updates. It supports
TSIG authenticated messages and EDNS0.

dnspython provides both high and low level access to DNS. The high level
classes perform queries for data of a given name, type, and class, and return
an answer set. The low level classes allow direct manipulation of DNS zones,
messages, names, and records.

WWW: http://www.dnspython.org/
```
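The traceback points at an ordering comparison between a float deadline and `None`, which Python 3 rejects. A minimal sketch of the None-aware comparison the xfr loop appears to need (a guess at the shape of the fix, not dnspython's actual patch):

```python
def min_expiration(mexpiration, expiration):
    """Earlier of two deadlines, where None means 'no deadline at all'."""
    if mexpiration is None:
        return expiration
    if expiration is None:
        return mexpiration
    return min(mexpiration, expiration)


# None never shadows a concrete deadline, and no TypeError is raised:
print(min_expiration(30.0, None))
print(min_expiration(None, 30.0))
print(min_expiration(10.0, 30.0))
```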
closed
2021-01-04T23:42:33Z
2021-01-05T10:22:41Z
https://github.com/rthalley/dnspython/issues/620
[]
schrodyn
4
great-expectations/great_expectations
data-science
10,499
Failing `test_connection` in `TableAsset.test_connection` for uppercase schema name defined in SQL Server / mssql
Using GX Core version: 1.1.1

Currently I'm not able to create a table data asset from the `SQLDatasource`'s `add_table_asset` method when the schema of the table I'm trying to connect to is defined in the database in the uppercase form (e.g., `MY_SCHEMA`):

```python
datasource.add_table_asset(
    name="asset-name",
    schema_name="S_EC",
    table_name="my_table",
)
```

Instead I got the following error:

```
great_expectations.datasource.fluent.interfaces.TestConnectionError: Attempt to connect to table: "my_schema.my_table" failed because the schema "my_schema" does not exist
```

Looking at the lines below, it seems that the `self.schema_name not in inspector.get_schema_names()` check doesn't do case-insensitive comparison.

https://github.com/great-expectations/great_expectations/blob/f9aa879a3d3ef0750bd72a24a003ea5631a6f29e/great_expectations/datasource/fluent/sql_datasource.py#L908-L912

Attempting to bracket the schema with quote characters (e.g., `'MY_SCHEMA'`) does persist the uppercase, but the test fails as well, complaining:

```
great_expectations.datasource.fluent.interfaces.TestConnectionError: Attempt to connect to table: ""MY_SCHEMA".my_table" failed because the schema ""MY_SCHEMA"" does not exist
```

I can still create query asset with `add_query_asset` method, though.
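A sketch of the case-insensitive membership test the linked check could use instead. This is an assumption about the fix, not GX code: dialects like mssql may report schema names in a different case than the user supplied.

```python
def schema_exists(schema_name, schema_names):
    """Case-insensitive version of `schema_name in inspector.get_schema_names()`."""
    return schema_name.lower() in {name.lower() for name in schema_names}


print(schema_exists("S_EC", ["s_ec", "dbo", "sys"]))   # matches despite the case
print(schema_exists("missing", ["s_ec", "dbo", "sys"]))
```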
open
2024-10-10T03:01:44Z
2024-10-24T20:38:23Z
https://github.com/great-expectations/great_expectations/issues/10499
[ "stack:mssql", "community-supported" ]
amirulmenjeni
6
tensorpack/tensorpack
tensorflow
1,400
ValueError: Variable conv0/W/Momentum/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
I too tried to replicate the experiment in https://github.com/czhu95/ternarynet/blob/master/README.md, using a very old release of TF/tensorpack:

+ Ubuntu 16.04
+ Python 2.7
+ TF release 1.1.0
+ Old tensorpack as included in the ternarynet repo

And got this error:

```
ValueError: Variable conv0/W/Momentum/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
```

Being a newbie in TF, I have no idea how to fix it. Any idea?

Full trace:

```
Traceback (most recent call last):
  File "./tw-cifar10-resnet.py", line 227, in <module>
    SyncMultiGPUTrainer(config).train()
  File "/home/akwok/ML/ternarynet-master/tensorpack/train/multigpu.py", line 80, in train
    self.config.optimizer.apply_gradients(grads, get_global_step_var()),
  File "/home/akwok/.local/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 446, in apply_gradients
    self._create_slots([_get_variable_for(v) for v in var_list])
  File "/home/akwok/.local/lib/python2.7/site-packages/tensorflow/python/training/momentum.py", line 63, in _create_slots
    self._zeros_slot(v, "momentum", self._name)
  File "/home/akwok/.local/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 766, in _zeros_slot
    named_slots[_var_key(var)] = slot_creator.create_zeros_slot(var, op_name)
  File "/home/akwok/.local/lib/python2.7/site-packages/tensorflow/python/training/slot_creator.py", line 174, in create_zeros_slot
    colocate_with_primary=colocate_with_primary)
  File "/home/akwok/.local/lib/python2.7/site-packages/tensorflow/python/training/slot_creator.py", line 146, in create_slot_with_initializer
    dtype)
  File "/home/akwok/.local/lib/python2.7/site-packages/tensorflow/python/training/slot_creator.py", line 66, in _create_slot_var
    validate_shape=validate_shape)
  File "/home/akwok/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1049, in get_variable
    use_resource=use_resource, custom_getter=custom_getter)
  File "/home/akwok/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 948, in get_variable
    use_resource=use_resource, custom_getter=custom_getter)
  File "/home/akwok/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 356, in get_variable
    validate_shape=validate_shape, use_resource=use_resource)
  File "/home/akwok/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 341, in _true_getter
    use_resource=use_resource)
  File "/home/akwok/.local/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 671, in _get_single_variable
    "VarScope?" % name)
ValueError: Variable conv0/W/Momentum/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
```
closed
2020-02-23T00:44:40Z
2020-04-24T10:52:06Z
https://github.com/tensorpack/tensorpack/issues/1400
[]
aywkwok
2
xinntao/Real-ESRGAN
pytorch
518
invalid gpu device
```
vkEnumerateInstanceExtensionProperties failed -3
vkEnumerateInstanceExtensionProperties failed -3
vkEnumerateInstanceExtensionProperties failed -3
invalid gpu device
```

Using the integrated Intel Graphics 2500; the GPU cannot be recognized. This is the portable ("green") release, running reconstruction locally.
open
2022-12-09T16:03:15Z
2023-02-14T08:51:29Z
https://github.com/xinntao/Real-ESRGAN/issues/518
[]
791814
1
sinaptik-ai/pandas-ai
data-science
1,348
Return plots as json strings
### 🚀 The feature Return the plots as JSON strings rather than images ### Motivation, pitch The current setup for plots is to return a path to the .png created from the plot. This sometimes becomes cumbersome when further processing these plots and doing post-processing before showing them to the users. Returning them as JSON strings (plotly `to_json` or a similar operation) would help generalise the provided solution. ### Alternatives _No response_ ### Additional context _No response_
closed
2024-09-01T11:22:22Z
2025-03-08T16:00:08Z
https://github.com/sinaptik-ai/pandas-ai/issues/1348
[ "enhancement" ]
sachinkumarUI
2
brightmart/text_classification
tensorflow
151
Multi-class classification question
Hello, can your code support taking a single sentence as input and outputting multiple labels?
open
2024-06-04T09:01:13Z
2024-06-04T09:01:13Z
https://github.com/brightmart/text_classification/issues/151
[]
cutecharmingkid
0
HIT-SCIR/ltp
nlp
645
userDict
Version 4.1.1: could a delete operation be provided for the userdict dictionary? Scenario: each of several texts has its own specific user dictionary. If all texts share one common userdict, segmentation errors occur, yet LTP only offers an add-dictionary operation. I hope support for deleting the current dictionary can be added.
open
2023-05-20T07:48:34Z
2023-05-23T04:28:07Z
https://github.com/HIT-SCIR/ltp/issues/645
[ "feature-request" ]
zhihuashan
0
ets-labs/python-dependency-injector
flask
681
Ressource not closed in case of Exception in the function
Hello,

We have some functions that use `Closing[Provide["session"]]` to inject the database session into the method. It works well, except that if any exception is raised in the function, the closing method will not be called and the resource will stay open.

This is really easy to reproduce with a minimal case on FastAPI; the shutdown (`db.close()`) is never called in this case. This is a huge problem, as on the next call the injector will re-inject the resource (the session), which is already in a failed state.

I did not find anything in the documentation regarding error management. Did we miss something? Thanks for your help.

Container:

```python
class Container(containers.DeclarativeContainer):
    wiring_config = containers.WiringConfiguration(packages=[XX])
    session = providers.Resource(get_db)
```

Resource DB:

```python
def get_db() -> Iterator[Session]:
    db: Session = SessionMaker()
    try:
        yield db
    finally:
        db.close()
```

Router:

```python
@router.get("/hello-world")
@inject
async def hello_world(
    session: Session = Depends(Closing[Provide["session"]]),
):
    raise Exception()
```
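A stdlib-only sketch of the leak mechanics (illustrative names, not dependency-injector internals): a generator-based resource only reaches its `finally` block once something closes the generator, so if the injector skips its shutdown step after the handler raises, the session is never released.

```python
closed = []


def get_db():
    db = object()            # stands in for SessionMaker()
    try:
        yield db
    finally:
        closed.append(db)    # stands in for db.close()


gen = get_db()
session = next(gen)          # the resource "injected" into the handler
# The handler raises here; if nothing ever calls gen.close(),
# the finally block above has not run yet:
print(len(closed))
gen.close()                  # the teardown Closing is expected to perform
print(len(closed))
```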
open
2023-03-20T16:09:16Z
2023-03-30T11:33:42Z
https://github.com/ets-labs/python-dependency-injector/issues/681
[]
BaptisteSaves
1
mlfoundations/open_clip
computer-vision
269
GradCAM visualizations
Has anyone tried saliency map visualizations with open_clip models? I came across these examples, but they only use OpenAI ResNet-based models. https://colab.research.google.com/github/kevinzakka/clip_playground/blob/main/CLIP_GradCAM_Visualization.ipynb https://huggingface.co/spaces/njanakiev/gradio-openai-clip-grad-cam
closed
2022-11-30T07:04:57Z
2023-04-16T18:04:04Z
https://github.com/mlfoundations/open_clip/issues/269
[]
usuyama
2
StackStorm/st2
automation
5,180
Special characters in st2admin account causing st2 key failure
Just upgraded to version 3.4 and my keyvault is having problems. I believe it's due to my st2admin password containing special characters.

```
[root@stackstorm workflows]# st2 key list --scope=all
Traceback (most recent call last):
  File "/bin/st2", line 10, in <module>
    sys.exit(main())
  File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2client/shell.py", line 470, in main
    return Shell().run(argv)
  File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2client/shell.py", line 385, in run
    config = self._parse_config_file(args=args, validate_config_permissions=False)
  File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2client/base.py", line 183, in _parse_config_file
    result = parser.parse()
  File "/opt/stackstorm/st2/lib/python3.6/site-packages/st2client/config_parser.py", line 197, in parse
    value = get_func(section, key)
  File "/usr/lib64/python3.6/configparser.py", line 800, in get
    d)
  File "/usr/lib64/python3.6/configparser.py", line 394, in before_get
    self._interpolate_some(parser, option, L, value, section, defaults, 1)
  File "/usr/lib64/python3.6/configparser.py", line 444, in _interpolate_some
    "found: %r" % (rest,))
configparser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: '%3C#V~Bvg%3E3t+'
```

This password above is what I used to install stackstorm. Or at least part of it. I've since changed the password via the documented htpasswd method, but the issue persists. Any tips? Left the password in for research purposes.

```
curl -sSL https://stackstorm.com/packages/install.sh | bash -s -- --user=st2admin --password='q7j/t%3C#V~Bvg%3E3t+'
```
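A stdlib sketch of what is likely going on and two ways around it (the section and key names here are hypothetical, not st2's actual config layout): Python's default `ConfigParser` treats `%` as an interpolation marker, so a literal `%` in a value must be doubled, while `RawConfigParser` skips interpolation entirely.

```python
import configparser

password = "q7j/t%3C#V~Bvg%3E3t+"

# Option 1: RawConfigParser performs no %-interpolation at all.
raw = configparser.RawConfigParser()
raw.read_string("[credentials]\npassword = q7j/t%3C#V~Bvg%3E3t+\n")
print(raw.get("credentials", "password"))

# Option 2: with the interpolating default parser, double every literal %.
interp = configparser.ConfigParser()
interp.read_string("[credentials]\npassword = q7j/t%%3C#V~Bvg%%3E3t+\n")
print(interp.get("credentials", "password"))
```

Both calls recover the same literal password; the unescaped `%3C` under the default parser is exactly what raises `InterpolationSyntaxError`.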
closed
2021-03-04T18:06:44Z
2021-06-14T14:11:16Z
https://github.com/StackStorm/st2/issues/5180
[ "bug" ]
maxfactor1
4
matterport/Mask_RCNN
tensorflow
2,724
tensorflow
ModuleNotFoundError: No module named 'tensorflow'

![ff7c28a77d7c98e93e60821e71c1350](https://user-images.githubusercontent.com/90297716/141607587-b33f3234-2b2e-40f5-ad2a-12edc1004355.png)
![eed5804707f041a3475e14392a94951](https://user-images.githubusercontent.com/90297716/141607593-236ab86c-cbfd-4f02-b361-0b5828872cae.png)

But my Keras version is 2.6.0 and tensorflow-gpu is 2.6.0. Thanks a lot.
open
2021-11-13T05:53:31Z
2022-01-11T04:55:13Z
https://github.com/matterport/Mask_RCNN/issues/2724
[]
LY-happy
4
tableau/server-client-python
rest-api
1,539
ENHANCEMENT: Access to /customviews (Search)
## Summary TSC does not appear to have support for /customviews ## Request Type Enhancement: specifically Search, for my use case. **Type 1: support a REST API:** https://help.tableau.com/current/api/rest_api/en-us/REST/rest_api_ref.htm#get_custom_view Example: https://servername/api/3.19/sites/{{site_luid}}/customviews **Type 2: add a REST API and support it in TSC.** Available in the REST API - links above. **Type 3: new functionality** This is not for new API functionality. ## Description TSC currently does not appear to support /customviews. In our use case I need to search the site for all custom views and then parse them by name, as there does not appear to be a tag option for a custom view.
closed
2024-11-25T13:33:46Z
2024-11-25T13:47:18Z
https://github.com/tableau/server-client-python/issues/1539
[ "enhancement", "needs investigation" ]
cpare
3
plotly/dash
data-visualization
2,697
[BUG] implementation of hashlib in _utils.py fails in FIPS environment
**Describe your context**

CentOS FIPS environment, Python 3.11.5

```
dash
dash-bootstrap-components
dash-leaflet
numpy
pandas
plotly
pyproj
scipy
xarray
tables
```

**Describe the bug**

When deployed using rstudio-connect, the app fails to initialize with the following error:

```
...dash/_utils.py", line 144, in _concat
    hashed_inputs = hashlib.md5(
ValueError: [digital envelope routines: EVP_DigestInit_ex] disabled for FIPS
```

The error can be replicated simply with python3 in a similar environment, as was done in this Red Hat bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1744670

There is a fair amount of discussion about this issue in other projects (where reporters on average do a better job of describing the issue):

https://github.com/dask/dask/issues/8608
https://github.com/Linuxfabrik/lib/issues/30
https://github.com/PrefectHQ/prefect/issues/7615

The suggested fix is to use a different hasher (e.g. blake3) or to pass hashlib.md5 the optional flag `usedforsecurity=False` (Python >= 3.9) if you're not hashing sensitive data. For some reason I did not see this error on Python 3.8. I believe this is a necessary fix to make Dash compatible with FIPS environments.
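A sketch of the two commonly suggested directions (the payload is illustrative, and neither call is Dash's actual code): flag the MD5 call as a non-security use, which requires Python 3.9+, or switch to a hash that FIPS mode does not block.

```python
import hashlib

payload = b"callback-input-fingerprint"  # stands in for the concatenated inputs

# Option 1: MD5 explicitly flagged as not used for security (Python 3.9+),
# which FIPS-mode OpenSSL builds generally permit:
md5_digest = hashlib.md5(payload, usedforsecurity=False).hexdigest()

# Option 2: avoid MD5 entirely with a FIPS-approved stdlib hash:
sha_digest = hashlib.sha256(payload).hexdigest()

print(md5_digest)
print(sha_digest)
```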
closed
2023-11-20T20:56:55Z
2024-04-16T13:34:45Z
https://github.com/plotly/dash/issues/2697
[]
caplinje-NOAA
5
PokeAPI/pokeapi
graphql
735
Route 13 Crystal walk encounters missing time restriction
I was pulling Crystal encounter data and noticed that [Pidgeotto](https://pokeapi.co/api/v2/pokemon/pidgeotto/encounters) had one encounter location which strangely wasn't time-restricted like its other encounter locations, Kanto Route 13. [Bulbapedia](https://bulbapedia.bulbagarden.net/wiki/Kanto_Route_13#Generation_II) said this encounter was restricted to morning and day, like its other encounters, so I decided to test in-game. I loaded the VC edition of Crystal (to my knowledge, the only difference between GBC Crystal and VC Crystal with regards to encounters is the ability to obtain the [GS Ball](https://bulbapedia.bulbagarden.net/wiki/GS_Ball) without an event) and did 30 encounters at night, without encountering a single Pidgeotto. At the given 20% spawn rate, there would be approximately a 0.1% chance of not getting a Pidgeotto. Then, I repeated in the morning and day, getting a Pidgeotto on the third and second encounters respectively, consistent with the given encounter rate. Notably, the JSON has two identical `encounter_details` objects for this encounter, each of which appears to simply be missing an object in `condition_values`. 
Compare the (likely correct) Silver entry to the Crystal entry:

```
{
  "encounter_details": [
    {
      "chance": 20,
      "condition_values": [
        {
          "name": "time-day",
          "url": "https://pokeapi.co/api/v2/encounter-condition-value/4/"
        }
      ],
      "max_level": 25,
      "method": {
        "name": "walk",
        "url": "https://pokeapi.co/api/v2/encounter-method/1/"
      },
      "min_level": 25
    },
    {
      "chance": 20,
      "condition_values": [
        {
          "name": "time-morning",
          "url": "https://pokeapi.co/api/v2/encounter-condition-value/3/"
        }
      ],
      "max_level": 25,
      "method": {
        "name": "walk",
        "url": "https://pokeapi.co/api/v2/encounter-method/1/"
      },
      "min_level": 25
    }
  ],
  "max_chance": 40,
  "version": {
    "name": "silver",
    "url": "https://pokeapi.co/api/v2/version/5/"
  }
}
```

```
{
  "encounter_details": [
    {
      "chance": 20,
      "condition_values": [],
      "max_level": 25,
      "method": {
        "name": "walk",
        "url": "https://pokeapi.co/api/v2/encounter-method/1/"
      },
      "min_level": 25
    },
    {
      "chance": 20,
      "condition_values": [],
      "max_level": 25,
      "method": {
        "name": "walk",
        "url": "https://pokeapi.co/api/v2/encounter-method/1/"
      },
      "min_level": 25
    }
  ],
  "max_chance": 40,
  "version": {
    "name": "crystal",
    "url": "https://pokeapi.co/api/v2/version/6/"
  }
}
```
closed
2022-07-18T19:20:43Z
2022-09-16T02:09:02Z
https://github.com/PokeAPI/pokeapi/issues/735
[]
Eiim
2
yzhao062/pyod
data-science
7
implement Connectivity-based outlier factor (COF)
See https://dl.acm.org/citation.cfm?id=693665 for more information.
closed
2018-06-14T14:01:29Z
2021-01-10T01:21:47Z
https://github.com/yzhao062/pyod/issues/7
[ "help wanted" ]
yzhao062
2
hbldh/bleak
asyncio
1,495
asyncio.exceptions.TimeoutError connecting to BLE (WinRT)
* bleak version: 0.21.1
* Python version: 3.9.4
* Operating System: Windows 11 64-bit
* BlueZ version (`bluetoothctl -v`) in case of Linux:

### Description

I'm trying to connect to a BLE device, SP107E. My bluetooth-scanning app on my phone works correctly, but Bleak times out.

### What I Did

```
import asyncio
from bleak import BleakClient, discover

device_name = "SP107E"
service_uuid = "FFB0"
characteristic_uuid = "0000ffe1-0000-1000-8000-00805f9b34fb"
command = b'\x00\x00\x00\x01'

async def main():
    devices = await discover()
    device = next((d for d in devices if d.name == device_name), None)
    async with BleakClient(device, services=[service_uuid], timeout=60.0) as client:
        print(f"listening...")
        await client.start_notify(characteristic_uuid, notification_handler)
        print(f"sending command")
        await client.write_gatt_char(characteristic_uuid, command)
        print("Command sent")
        await asyncio.sleep(30)
        await client.stop_notify(characteristic_uuid)

asyncio.run(main())
```

I've tried:

- Not filtering by service
- Larger timeouts
- Checking with another app (LightBlue on my iPhone)

### Logs

Wireshark capture: [sp107e-pcapng.zip](https://github.com/hbldh/bleak/files/13991964/sp107e-pcapng.zip)

```
2024-01-19 16:48:25,199 bleak.backends.winrt.client MainThread DEBUG: Connecting to BLE device @ 62:20:04:01:AF:29
2024-01-19 16:48:25,259 bleak.backends.winrt.client MainThread DEBUG: getting services (service_cache_mode=None, cache_mode=None)...
2024-01-19 16:48:25,360 bleak.backends.winrt.client Dummy-1 DEBUG: session_status_changed_event_handler: id: BluetoothLE#BluetoothLEe8:48:b8:c8:20:00-62:20:04:01:af:29, error: <BluetoothError.SUCCESS: 0>, status: <GattSessionStatus.ACTIVE: 1>
2024-01-19 16:48:25,422 bleak.backends.winrt.client Dummy-2 DEBUG: max_pdu_size_changed_handler: 131
2024-01-19 16:48:25,662 bleak.backends.winrt.client Dummy-1 DEBUG: 62:20:04:01:AF:29: services changed
2024-01-19 16:48:35,269 bleak.backends.winrt.client MainThread DEBUG: closing requester
2024-01-19 16:48:35,270 bleak.backends.winrt.client MainThread DEBUG: closing session
Traceback (most recent call last):
  File "C:\git\mpg\venv\lib\site-packages\bleak\backends\winrt\client.py", line 480, in connect
    self.services = await self.get_services(
  File "C:\git\mpg\venv\lib\site-packages\bleak\backends\winrt\client.py", line 724, in get_services
    await FutureLike(
  File "C:\git\mpg\venv\lib\site-packages\bleak\backends\winrt\client.py", line 1122, in __await__
    yield self  # This tells Task to wait for completion.
asyncio.exceptions.CancelledError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\git\mpg\main.py", line 35, in <module>
    asyncio.run(main())
  File "C:\Python39\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete
    return future.result()
  File "C:\git\mpg\main.py", line 26, in main
    async with BleakClient(device, services=[service_uuid]) as client:
  File "C:\git\mpg\venv\lib\site-packages\bleak\__init__.py", line 565, in __aenter__
    await self.connect()
  File "C:\git\mpg\venv\lib\site-packages\bleak\__init__.py", line 605, in connect
    return await self._backend.connect(**kwargs)
  File "C:\git\mpg\venv\lib\site-packages\bleak\backends\winrt\client.py", line 480, in connect
    self.services = await self.get_services(
  File "C:\git\mpg\venv\lib\site-packages\async_timeout\__init__.py", line 141, in __aexit__
    self._do_exit(exc_type)
  File "C:\git\mpg\venv\lib\site-packages\async_timeout\__init__.py", line 228, in _do_exit
    raise asyncio.TimeoutError
asyncio.exceptions.TimeoutError
```
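Not a fix for the underlying WinRT behaviour, but a common mitigation is to retry the connect a few times. A runnable sketch of that pattern; `connect_once` is a stand-in callable (not Bleak API), and the `flaky` coroutine below only simulates a connection that succeeds on the third attempt.

```python
import asyncio


async def connect_with_retry(connect_once, attempts=3, delay=1.0):
    """Await connect_once() up to `attempts` times, pausing `delay`s between tries."""
    last_exc = None
    for _ in range(attempts):
        try:
            return await connect_once()
        except asyncio.TimeoutError as exc:
            last_exc = exc
            await asyncio.sleep(delay)
    raise last_exc  # all attempts timed out


# Demo with a simulated connection: fails twice, succeeds on the third try.
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise asyncio.TimeoutError
    return "connected"

result = asyncio.run(connect_with_retry(flaky, attempts=3, delay=0))
print(result)
```

With the real client, `connect_once` would wrap constructing and entering a fresh `BleakClient`.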
open
2024-01-19T16:56:51Z
2024-05-04T18:07:21Z
https://github.com/hbldh/bleak/issues/1495
[ "3rd party issue", "Backend: WinRT" ]
kierenj
3
Kanaries/pygwalker
pandas
454
ASK AI should work with open source self hosted/cloud hosted LLMs in open source pygwalker
**Is your feature request related to a problem? Please describe.** I am unable to integrate or use Amazon Bedrock, Azure GPT, Gemini, or my own Llama 2 LLM for the Ask AI feature. **Describe the solution you'd like** Either I should have the option to remove the screen real estate used by the Ask AI bar in the open-source PyGWalker, or I should be able to integrate custom LLM endpoints.
closed
2024-03-01T22:32:01Z
2024-04-13T01:47:17Z
https://github.com/Kanaries/pygwalker/issues/454
[ "good first issue", "proposal" ]
rishabh-dream11
5
tfranzel/drf-spectacular
rest-api
1,324
OAuth2Authentication
**Describe the bug** There is no mark on the protected endpoint => swagger does not transfer the access token when accessing the protected endpoint.

**To Reproduce** ![1](https://github.com/user-attachments/assets/a2611576-a908-466e-b3f6-77dfca3afb82)

**Expected behavior** ![2](https://github.com/user-attachments/assets/2159b1af-48ee-438d-bea3-8ef37e77032c)

Settings:

```python
REST_FRAMEWORK = {
    'DEFAULT_SCHEMA_CLASS': 'drf_spectacular.openapi.AutoSchema',
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'oauth2_provider.contrib.rest_framework.OAuth2Authentication',
    ],
    'DEFAULT_PAGINATION_CLASS': 'apps.utils.pagination.DefaultPagination',
    'DEFAULT_FILTER_BACKENDS': [
        'django_filters.rest_framework.DjangoFilterBackend',
        'rest_framework.filters.SearchFilter',
    ],
}

OAUTH2_PROVIDER = {
    "SCOPES": {
        "read": "Read scope",
        "write": "Write scope",
        "groups": "Access to groups",
    },
}

SPECTACULAR_SETTINGS = {
    'TITLE': 'Your Project API',
    'DESCRIPTION': 'Your project description',
    'VERSION': '1.0.0',
    'SERVE_INCLUDE_SCHEMA': False,
    "SWAGGER_UI_SETTINGS": {
        "swagger": "2.0",
        "deepLinking": True,
        "filter": True,
        "persistAuthorization": True,
    },
    'OAUTH2_FLOWS': ['password'],
    'OAUTH2_AUTHORIZATION_URL': 'auth/authorize/',
    'OAUTH2_TOKEN_URL': 'auth/token/',
    'OAUTH2_REFRESH_URL': 'auth/revoke_token/',
    'OAUTH2_SCOPES': 'read write groups',
    'SWAGGER_UI_OAUTH2_CONFIG': {
        'clientId': env.str('OAUTH2_CLIENTID'),
        'clientSecret': env.str('OAUTH2_CLIENTSECRET'),
        'appName': env.str('OAUTH2_APPNAME'),
    },
}
```
open
2024-11-01T16:17:32Z
2024-11-02T17:30:40Z
https://github.com/tfranzel/drf-spectacular/issues/1324
[]
ArtemKAF
3
MorvanZhou/tutorials
numpy
27
TensorFlow data-visualization bug: tf.train.SummaryWriter("/Users/taw/logs", sess.graph) raises an error
Error output:

```
Traceback (most recent call last):
  File "/Users/taw/PycharmProjects/stractTest/tensorTest4.py", line 46, in <module>
    writer = tf.train.SummaryWriter("/Users/taw/logs", sess.graph)
  File "/Users/taw/anaconda/lib/python2.7/site-packages/tensorflow/python/training/summary_io.py", line 82, in __init__
    self.add_graph(graph_def)
  File "/Users/taw/anaconda/lib/python2.7/site-packages/tensorflow/python/training/summary_io.py", line 128, in add_graph
    event = event_pb2.Event(wall_time=time.time(), graph_def=graph_def)
  File "/Users/taw/anaconda/lib/python2.7/site-packages/google/protobuf/internal/python_message.py", line 519, in init
    _ReraiseTypeErrorWithFieldName(message_descriptor.name, field_name)
  File "/Users/taw/anaconda/lib/python2.7/site-packages/google/protobuf/internal/python_message.py", line 450, in _ReraiseTypeErrorWithFieldName
    six.reraise(type(exc), exc, sys.exc_info()[2])
  File "/Users/taw/anaconda/lib/python2.7/site-packages/google/protobuf/internal/python_message.py", line 517, in init
    copy.MergeFrom(new_val)
  File "/Users/taw/anaconda/lib/python2.7/site-packages/google/protobuf/internal/python_message.py", line 1208, in MergeFrom
    "expected %s got %s." % (cls.__name__, type(msg).__name__))
TypeError: Parameter to MergeFrom() must be instance of same class: expected GraphDef got Graph. for field Event.graph_def
```
closed
2017-01-04T08:16:17Z
2017-01-05T02:38:10Z
https://github.com/MorvanZhou/tutorials/issues/27
[]
bobeneba
1
PokeAPI/pokeapi
graphql
538
Weight not correct
When I go to "https://pokeapi.co/api/v2/pokemon/charizard". Charizards weight says 905, but it is 90.5kg (according to [pokemondb.net](https://pokemondb.net/pokedex/charizard). I am assuming that kilograms is the unit of measurement since the numbers are correct, just no decimal point. Haven't checked other pokemon yet.
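For reference, PokeAPI reports `weight` in hectograms (and `height` in decimetres), so 905 is consistent with 90.5 kg; the API simply omits the decimal point by design. A one-line conversion:

```python
def hectograms_to_kg(weight_hg):
    """PokeAPI reports Pokémon weight in hectograms; divide by 10 for kilograms."""
    return weight_hg / 10


print(hectograms_to_kg(905))
```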
closed
2020-11-10T10:42:49Z
2020-11-11T01:50:39Z
https://github.com/PokeAPI/pokeapi/issues/538
[]
LachlynR
2
deepinsight/insightface
pytorch
1,975
A link to buffalo_l models?
The SimSwap uses insightface and requires to download antelope.zip, and obviously provides a link to a proper model. Insightface as a default now uses buffalo_l ; I would want to test how it would work with this model. However, I can't find a link to buffalo_l models; could you provide one?
open
2022-04-15T08:47:08Z
2022-04-15T08:47:08Z
https://github.com/deepinsight/insightface/issues/1975
[]
szopeno
0
ultralytics/ultralytics
deep-learning
19,750
YOLO12x-OBB pretrained weight request!
### Search before asking - [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests. ### Description Hi, is there any plans to share yolo12x-obb pretrained weights .pt files? ### Use case _No response_ ### Additional _No response_ ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR!
open
2025-03-17T21:53:46Z
2025-03-18T16:29:37Z
https://github.com/ultralytics/ultralytics/issues/19750
[ "enhancement", "OBB" ]
hillsonghimire
2
PaddlePaddle/PaddleNLP
nlp
9,698
[Question]: How do I specify the model path for Taskflow when the task is information_extraction?
### Please state your question

Setting this via either `task_path` or `home_path` raises an error:

```
Traceback (most recent call last):
  File "/root/work/filestorage/liujc/paddle/main.py", line 7, in <module>
    model = Taskflow(
  File "/usr/local/lib/python3.10/dist-packages/paddlenlp/taskflow/taskflow.py", line 809, in __init__
    self.task_instance = task_class(
  File "/usr/local/lib/python3.10/dist-packages/paddlenlp/taskflow/information_extraction.py", line 536, in __init__
    self._get_inference_model()
  File "/usr/local/lib/python3.10/dist-packages/paddlenlp/taskflow/task.py", line 372, in _get_inference_model
    self._prepare_static_mode()
  File "/usr/local/lib/python3.10/dist-packages/paddlenlp/taskflow/task.py", line 227, in _prepare_static_mode
    self.predictor = paddle.inference.create_predictor(self._config)
RuntimeError: (Unavailable) Not allowed to load partial data via load_combine_op, please use load_op instead.
  [Hint: Expected buffer->eof() == true, but received buffer->eof():0 != true:1.] (at /paddle/paddle/phi/kernels/impl/load_combine_kernel_impl.h:81)
  [operator < load_combine > error]
```

Also, the inference.pdmodel file is missing from the saved model path.
closed
2024-12-26T01:30:55Z
2025-03-18T00:21:48Z
https://github.com/PaddlePaddle/PaddleNLP/issues/9698
[ "question", "stale" ]
liujiachang
10
roboflow/supervision
deep-learning
698
track_ids of Detections in different videos
### Search before asking - [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests. ### Question In different videos (or from different angles), are the track_ids of Detections the same? I want to use the consistency of track_id for cross-video tracking. ### Additional _No response_
closed
2023-12-28T09:40:15Z
2023-12-28T12:04:00Z
https://github.com/roboflow/supervision/issues/698
[ "question" ]
kenwaytis
1
pallets-eco/flask-sqlalchemy
flask
496
How do I declare unique constraint on multiple columns?
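In Flask-SQLAlchemy the usual answer is a table-level constraint, e.g. `__table_args__ = (db.UniqueConstraint('user_id', 'group_id'),)` on the model (column names here are hypothetical). The SQL this emits is a multi-column UNIQUE constraint, whose behaviour the stdlib sqlite3 demo below verifies without needing SQLAlchemy installed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE membership ("
    " user_id INTEGER, group_id INTEGER,"
    " UNIQUE (user_id, group_id))"  # uniqueness over the pair, not each column
)
conn.execute("INSERT INTO membership VALUES (1, 1)")
conn.execute("INSERT INTO membership VALUES (1, 2)")  # same user, new group: allowed
try:
    conn.execute("INSERT INTO membership VALUES (1, 1)")  # duplicate pair
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print(duplicate_rejected)
```

Note that a plain `unique=True` on each column would be stricter: it forbids any repeat of either value, while the paired constraint only forbids repeating the combination.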
closed
2017-05-09T23:13:41Z
2020-12-05T20:21:46Z
https://github.com/pallets-eco/flask-sqlalchemy/issues/496
[]
xiangfeidongsc
2
FactoryBoy/factory_boy
django
296
Unable to access factory_parent in RelatedFactory LazyAttribute.
In https://factoryboy.readthedocs.org/en/latest/recipes.html#copying-fields-to-a-subfactory, it's shown that it's possible to copy fields from the parent factory, using as `LazyAttribute` that access `factory_parent`. However, this is not set on a `RelatedFactory`, making it impossible to access a value that exists on the factory, but not on the object. I was hoping to use one field to populate several derived objects, but these require using a `RelatedFactory`, as they have a foreign key to the object that is in the parent factory.
closed
2016-04-21T07:38:33Z
2016-05-21T09:12:02Z
https://github.com/FactoryBoy/factory_boy/issues/296
[ "Q&A" ]
schinckel
7
BeanieODM/beanie
pydantic
318
Why does an input model need to be initialized with Beanie?
Hi I'm using beanie with fastapi. this is my model: ``` class UserBase(Document): username: str | None parent_id: str | None role_id: str | None payment_type: str | None disabled: bool | None note: str | None access_token: str | None class UserIn(UserBase): username: str password: str disabled: bool = False class User(UserBase): username: str password: str disabled: bool = False salt: str class Settings: name = "users" class UserOut(UserBase): note: str | None access_token: str | None ``` my routes: ``` USERS = APIRouter() @USERS.post('/users', response_model=UserOut) async def post_user(user: UserIn): user_to_crate = User(**user.dict(), salt=get_salt()) await user_to_crate.save() user_to_response = UserOut(**user_to_crate.dict()) return user_to_response ``` my beane init function: ``` async def init(): # Create Motor client client = motor.motor_asyncio.AsyncIOMotorClient( f"mongodb://{getenv('MONGO_USER')}:{getenv('MONGO_PASSWORD')}" f"@{getenv('MONGO_HOST')}/{getenv('MONGO_DATABASE')}?" 
f"replicaSet={getenv('MONGO_REPLICA_SET')}" f"&authSource={getenv('MONGO_DATABASE')}" ) # Init beanie await init_beanie(database=client[f"{getenv('MONGO_DATABASE')}"], document_models=[User]) ``` if i'm not put UserIn to the init beanie function, then call /users, i will get this error: ``` INFO: 172.16.16.7:55512 - "POST /users HTTP/1.0" 500 Internal Server Error ERROR: Exception in ASGI application Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi result = await app(self.scope, self.receive, self.send) File "/usr/local/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__ return await self.app(scope, receive, send) File "/usr/local/lib/python3.10/site-packages/fastapi/applications.py", line 269, in __call__ await super().__call__(scope, receive, send) File "/usr/local/lib/python3.10/site-packages/starlette/applications.py", line 124, in __call__ await self.middleware_stack(scope, receive, send) File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 184, in __call__ raise exc File "/usr/local/lib/python3.10/site-packages/starlette/middleware/errors.py", line 162, in __call__ await self.app(scope, receive, _send) File "/usr/local/lib/python3.10/site-packages/starlette/exceptions.py", line 93, in __call__ raise exc File "/usr/local/lib/python3.10/site-packages/starlette/exceptions.py", line 82, in __call__ await self.app(scope, receive, sender) File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__ raise e File "/usr/local/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__ await self.app(scope, receive, send) File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 670, in __call__ await route.handle(scope, receive, send) File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 266, in handle 
await self.app(scope, receive, send) File "/usr/local/lib/python3.10/site-packages/starlette/routing.py", line 65, in app response = await func(request) File "/usr/local/lib/python3.10/site-packages/fastapi/routing.py", line 217, in app solved_result = await solve_dependencies( File "/usr/local/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 557, in solve_dependencies ) = await request_body_to_args( # body_params checked above File "/usr/local/lib/python3.10/site-packages/fastapi/dependencies/utils.py", line 692, in request_body_to_args v_, errors_ = field.validate(value, values, loc=loc) File "pydantic/fields.py", line 857, in pydantic.fields.ModelField.validate File "pydantic/fields.py", line 1074, in pydantic.fields.ModelField._validate_singleton File "pydantic/fields.py", line 1121, in pydantic.fields.ModelField._apply_validators File "pydantic/class_validators.py", line 313, in pydantic.class_validators._generic_validator_basic.lambda12 File "pydantic/main.py", line 686, in pydantic.main.BaseModel.validate File "/usr/local/lib/python3.10/site-packages/beanie/odm/documents.py", line 138, in __init__ self.get_motor_collection() File "/usr/local/lib/python3.10/site-packages/beanie/odm/interfaces/getters.py", line 13, in get_motor_collection return cls.get_settings().motor_collection File "/usr/local/lib/python3.10/site-packages/beanie/odm/documents.py", line 779, in get_settings raise CollectionWasNotInitialized beanie.exceptions.CollectionWasNotInitialized ``` Then i put UserIn in the beanie init function. Everything is working. ``` # Init beanie await init_beanie(database=client[f"{getenv('MONGO_DATABASE')}"], document_models=[User,UserIn]) ``` I just want the UserIn model to validate input data and beanie try to find it in the db. Models not mapped to db should not have anything to do with the db. Right?
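A toy illustration of what is going on (plain Python stand-ins, not Beanie's or Pydantic's real classes): instantiating a `beanie.Document` subclass touches its collection, which fails unless `init_beanie` registered the class. The usual fix is therefore to make pure input/output schemas inherit from `pydantic.BaseModel` and reserve `Document` for the one persisted model.

```python
class Document:
    """Toy stand-in for beanie.Document: construction requires that the
    class was registered, mimicking CollectionWasNotInitialized."""
    _initialized = set()

    def __init__(self, **data):
        if type(self).__name__ not in Document._initialized:
            raise RuntimeError("CollectionWasNotInitialized")
        self.__dict__.update(data)

class PlainModel:
    """Toy stand-in for pydantic.BaseModel: no database coupling at all."""
    def __init__(self, **data):
        self.__dict__.update(data)

    def dict(self):
        return dict(self.__dict__)

# Recommended split: only the persisted model inherits Document.
class UserIn(PlainModel):
    pass

class User(Document):
    pass

class BadUserIn(Document):  # input schema wrongly inheriting Document
    pass

Document._initialized.add("User")  # what init_beanie(document_models=[User]) does

user_in = UserIn(username="a", password="b")   # works without any DB init
user = User(**user_in.dict(), salt="s")        # works: User was registered
try:
    BadUserIn(username="a")
    raised = False
except RuntimeError:
    raised = True
print(user.username, raised)  # → a True
```

So yes: models not mapped to the database should not inherit `Document`, and then they do not need to appear in `init_beanie`.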
closed
2022-07-29T04:42:10Z
2022-07-30T05:58:07Z
https://github.com/BeanieODM/beanie/issues/318
[]
nghianv19940
3
lucidrains/vit-pytorch
computer-vision
217
why not combine the key and query linear layers into one
I looked into some ViT code (e.g. MaxViT, MobileViT) and found that the attention modules look like: ``` # x is the input key = nn.Linear(..., bias=False)(x) query = nn.Linear(..., bias=False)(x) similar_matrix = torch.matmul(query, key.transpose(...)) ``` Because a Linear layer can be viewed as a matrix, I think: ``` key = K^T @ x query = Q^T @ x similar_matrix = query^T @ key = x^T @ (Q @ K^T) @ x ``` (K, Q are the learnable matrices (linear weights), @ means matmul, ^T means transpose.) Here Q @ K^T could be combined into a single matrix to reduce the number of parameters and the amount of computation. Why isn't this done? Is it just for readability, or because training suffers after combining? Thanks
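The algebraic equivalence the question relies on is real, and can be checked numerically with a small pure-Python sketch (toy sizes and weights are made up): the split form (X·Wq)(X·Wk)^T equals the combined form X·(Wq·Wk^T)·X^T. The usual reason for keeping the split is that Wq·Wk^T is constrained to rank at most d_k; with multi-head attention d_k = d/heads, so the factorized form has 2·d·d_k parameters versus d² for a combined matrix, i.e. it is the factorization that saves parameters when d_k < d/2.

```python
def matmul(a, b):
    """Plain-list matrix multiply: (m x k) @ (k x n) -> (m x n)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(row) for row in zip(*m)]

# Toy shapes (assumptions): 2 tokens, model dim d=3, head dim d_k=2.
X  = [[1.0, 2.0, 0.5], [0.0, 1.0, 3.0]]   # token embeddings as rows
Wq = [[0.2, 0.1], [0.0, 0.5], [1.0, 0.3]]  # d x d_k query weights
Wk = [[0.4, 0.0], [0.7, 0.2], [0.1, 0.9]]  # d x d_k key weights

# Split form: (X Wq)(X Wk)^T, as written in the attention modules.
scores_split = matmul(matmul(X, Wq), transpose(matmul(X, Wk)))
# Combined form: X (Wq Wk^T) X^T, as suggested in the question.
scores_combined = matmul(matmul(X, matmul(Wq, transpose(Wk))), transpose(X))

same = all(abs(a - b) < 1e-9
           for ra, rb in zip(scores_split, scores_combined)
           for a, b in zip(ra, rb))
print(same)  # → True
```

A further practical reason to keep keys separate is caching: in decoding and cross-attention, keys can be computed once and reused, which the combined form does not allow.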
open
2022-05-06T01:48:21Z
2022-05-06T06:08:54Z
https://github.com/lucidrains/vit-pytorch/issues/217
[]
locysty
1
davidsandberg/facenet
tensorflow
528
In center loss, the way of centers update
Hi, I don't understand your center-loss demo. Your demo updates the centers as follows: ``` diff = (1 - alfa) * (centers_batch - features) centers = tf.scatter_sub(centers, label, diff) ``` But according to the paper, the centers should be updated like this: ``` diff = centers_batch - features unique_label, unique_idx, unique_count = tf.unique_with_counts(labels) appear_times = tf.gather(unique_count, unique_idx) appear_times = tf.reshape(appear_times, [-1, 1]) diff = diff / tf.cast((1 + appear_times), tf.float32) diff = alpha * diff # update centers centers = tf.scatter_sub(centers, labels, diff) ```
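A pure-Python sketch of the paper's update rule (Wen et al., center loss), with the 1/(1 + appear_times) normalization the issue points out; the numbers are toy values, and this mirrors what the TensorFlow snippet computes without needing TF:

```python
def update_centers(centers, features, labels, alpha):
    """Paper's rule: each center moves toward its batch features, scaled by
    alpha / (1 + n), where n is how often the class appears in the batch.
    diffs are computed against the pre-update centers and accumulated,
    matching tf.scatter_sub with repeated indices."""
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    new_centers = {lab: list(c) for lab, c in centers.items()}
    for feat, lab in zip(features, labels):
        for d in range(len(feat)):
            diff = (centers[lab][d] - feat[d]) * alpha / (1 + counts[lab])
            new_centers[lab][d] -= diff
    return new_centers

centers = {0: [0.0, 0.0]}
features = [[2.0, 0.0], [4.0, 0.0]]  # two samples of class 0
labels = [0, 0]
updated = update_centers(centers, features, labels, alpha=0.5)
print(updated)  # center moves toward the batch mean, damped by 1/(1+2)
```

With alpha = 0.5 and the class appearing twice, each per-sample step is scaled by 0.5/3, so the center moves to roughly [1.0, 0.0] rather than overshooting when a class is frequent in the batch; that damping is exactly what the repo's `(1 - alfa)` version omits.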
closed
2017-11-14T12:29:49Z
2018-04-04T14:29:51Z
https://github.com/davidsandberg/facenet/issues/528
[]
biubug6
1
rougier/scientific-visualization-book
numpy
88
AttributeError: 'Arrow3D' object has no attribute 'do_3d_projection'
I try to run [`code/scales-projections/projection-3d-frame.py`](https://github.com/rougier/scientific-visualization-book/blob/master/code/scales-projections/projection-3d-frame.py) and then there's an error: `AttributeError: 'Arrow3D' object has no attribute 'do_3d_projection'` my env: python: 3.11, matplotlib: 3.8.0 then I found a possible solution here -> [matplotlib issues#21688](https://github.com/matplotlib/matplotlib/issues/21688#issuecomment-974912574) when I change class [`Arrow3D`](https://github.com/rougier/scientific-visualization-book/blob/2efaca2bcabf15d74c46c62fb7d4606347ebe78a/code/scales-projections/projection-3d-frame.py#L79) to : ```python class Arrow3D(mpatches.FancyArrowPatch): def __init__(self, xs, ys, zs, *args, **kwargs): mpatches.FancyArrowPatch.__init__(self, (0, 0), (0, 0), *args, **kwargs) self._verts3d = xs, ys, zs def do_3d_projection(self, renderer=None): xs3d, ys3d, zs3d = self._verts3d xs, ys, zs = proj3d.proj_transform(xs3d, ys3d, zs3d, self.axes.M) self.set_positions((xs[0],ys[0]),(xs[1],ys[1])) return np.min(zs) ``` it works, so I think maybe `projection-3d-frame.py` need to be updated?
closed
2024-01-17T10:53:37Z
2024-01-22T13:21:55Z
https://github.com/rougier/scientific-visualization-book/issues/88
[]
EmmetZ
3
Tinche/aiofiles
asyncio
48
Please tag 0.4.0 release in git
In Fedora we package from github tarball rather than PyPI, so this would help us updating to 0.4.0. Thanks!
closed
2018-09-02T17:17:34Z
2018-09-02T23:14:14Z
https://github.com/Tinche/aiofiles/issues/48
[]
ignatenkobrain
1
wkentaro/labelme
deep-learning
1,302
Not able to get the annotations labelled
### Provide environment information OS - macOS Big Sur Python - 3.8.8 ### What OS are you using? macOS 11.7.4 ### Describe the Bug The annotations are created, saved as a JSON file, and closed. When I try to reopen, only the images open, without the labels; it's as if I had never annotated them. Only the images appear in the tool, not the JSON files. ### Expected Behavior When labelme opens -> Open directory -> go to the folder that has both the images and the JSON labels. Both the images and labels should load together in the app. ### To Reproduce _No response_
open
2023-07-28T20:35:48Z
2023-07-28T20:35:48Z
https://github.com/wkentaro/labelme/issues/1302
[ "issue::bug" ]
makamnilisha
0
DistrictDataLabs/yellowbrick
scikit-learn
361
ClassificationScoreVisualizers should return accuracy
See #358 and #213: classification score visualizers should return accuracy when `score()` is called. If accuracy (or F1) is not already shown, it should also be included in the figure.
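A hedged sketch of the requested behavior (toy classes only; the real visualizers wrap scikit-learn estimators and draw with matplotlib): `score()` computes accuracy from the wrapped estimator's predictions, returns it, and stashes it for annotation on the figure.

```python
class ToyClassificationScoreVisualizer:
    """Sketch of the issue's request: score() returns accuracy and keeps
    it on the instance so the figure title/annotation can display it."""
    def __init__(self, estimator):
        self.estimator = estimator

    def score(self, X, y):
        y_pred = self.estimator.predict(X)
        correct = sum(1 for yp, yt in zip(y_pred, y) if yp == yt)
        self.score_ = correct / len(y)  # stashed for the figure annotation
        return self.score_

class AlwaysOne:
    """Trivial estimator that predicts class 1 for every sample."""
    def predict(self, X):
        return [1 for _ in X]

viz = ToyClassificationScoreVisualizer(AlwaysOne())
acc = viz.score([[0], [1], [2], [3]], [1, 1, 0, 1])
print(acc)  # → 0.75
```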
closed
2018-03-22T16:42:34Z
2018-07-16T18:38:32Z
https://github.com/DistrictDataLabs/yellowbrick/issues/361
[ "priority: low", "type: technical debt", "level: novice" ]
bbengfort
4
statsmodels/statsmodels
data-science
8,772
ENH: GAM Mixin, penalized splines for other models than GLM
I guess it would not be too difficult to split penalized splines out of GAM so that they can be used with models outside the GLM families. GAM already uses the penalization mixin, so we mainly need the spline and penalization parts for the model, plus the extra post-estimation methods for the results classes. #7128 is one application where we may want to add a spline as an extra term, e.g. control functions in #8745, to make the statistics more semi-parametric; but the initial candidates would be the count models in discrete. That needs adjustments to GAM for the extra params (NBP, GPP, or multi-part models like hurdle or zero-inflated).
open
2023-04-04T20:16:12Z
2023-04-04T20:16:12Z
https://github.com/statsmodels/statsmodels/issues/8772
[ "type-enh", "comp-discrete", "topic-penalization", "comp-causal" ]
josef-pkt
0
netbox-community/netbox
django
18,916
DynamicModelChoiceField doesn't render error message on submit
### Deployment Type Self-hosted ### NetBox Version v.4.2.5 ### Python Version 3.11 ### Steps to Reproduce While developing a plugin I realized that the DynamicModelChoiceField doesn't display an error if `required=True` and nothing is selected. The form just doesn't submit, and the view doesn't give a hint why. 1. create a form with two choice fields (one django.forms.ModelChoiceField and one DynamicModelChoiceField) 2. set both to be required ### Expected Behavior When both are empty the form should show an error message on submit ### Observed Behavior Only the forms.ModelChoiceField shows an error message ![Image](https://github.com/user-attachments/assets/459d1426-4054-40b8-8482-71aba22e22e0)
open
2025-03-16T07:19:30Z
2025-03-18T19:02:58Z
https://github.com/netbox-community/netbox/issues/18916
[ "type: bug", "status: needs owner", "severity: low" ]
chii0815
2
ets-labs/python-dependency-injector
asyncio
498
Setter method calls are missing
Hi Contributors :) I'm trying to set an optional dependency (a logger) using setter method injection, but I haven't found a way to do that with this project. A sample implementation can be found here https://symfony.com/doc/current/service_container/calls.html for the PHP language. Thanks.
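For reference, python-dependency-injector providers support attribute injection after construction (see `add_attributes` in the library's docs; worth verifying against your version). The pattern itself, build first and then apply configured setters/attributes for optional dependencies, can be sketched in plain Python:

```python
class Factory:
    """Toy provider with setter/attribute injection: construct the object,
    then apply the configured attribute assignments (optional deps)."""
    def __init__(self, cls, **init_kwargs):
        self.cls = cls
        self.init_kwargs = init_kwargs
        self.attributes = {}

    def add_attributes(self, **kwargs):
        self.attributes.update(kwargs)
        return self  # allow chaining, like dependency_injector providers

    def __call__(self):
        obj = self.cls(**self.init_kwargs)
        for name, value in self.attributes.items():
            setattr(obj, name, value)  # the "setter" step after construction
        return obj

class Service:
    def __init__(self, name):
        self.name = name
        self.logger = None  # optional dependency, defaults to None

provider = Factory(Service, name="svc").add_attributes(logger="my-logger")
service = provider()
print(service.logger)  # → my-logger
```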
closed
2021-08-30T16:16:51Z
2021-08-31T15:00:47Z
https://github.com/ets-labs/python-dependency-injector/issues/498
[ "question" ]
aminclip
3
OpenBB-finance/OpenBB
machine-learning
6,969
[IMPROVE] `obb.equity.screener`: Make Input Of Country & Exchange Uniform Across Providers
In the `economy` module, the `country` parameter can be entered as names or two-letter ISO codes. The same treatment should be applied to the `obb.equity.screener` endpoint. Additionally, the "exchange" parameter should reference ISO MICs, i.e., XNAS instead of NASDAQ, XNYS instead of NYSE, etc.
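A hedged sketch of the requested normalization (the lookup tables here are tiny hypothetical samples; a real implementation would cover all ISO 3166-1 names and each provider's exchange aliases):

```python
# Hypothetical sample mappings; not OpenBB's actual tables.
COUNTRY_TO_ISO2 = {"united states": "US", "germany": "DE", "japan": "JP"}
EXCHANGE_TO_MIC = {"NASDAQ": "XNAS", "NYSE": "XNYS"}

def normalize_country(value):
    """Accept either a country name or a two-letter code; return ISO-2."""
    v = value.strip()
    if len(v) == 2 and v.upper() in COUNTRY_TO_ISO2.values():
        return v.upper()
    try:
        return COUNTRY_TO_ISO2[v.lower()]
    except KeyError:
        raise ValueError(f"unrecognized country: {value!r}")

def normalize_exchange(value):
    """Map legacy exchange names to ISO 10383 MICs; pass MICs through."""
    v = value.strip().upper()
    return EXCHANGE_TO_MIC.get(v, v)

print(normalize_country("Germany"), normalize_country("us"))     # → DE US
print(normalize_exchange("NASDAQ"), normalize_exchange("XNYS"))  # → XNAS XNYS
```

Applying the same normalizers in both `economy` and `equity.screener` would make the input behavior uniform across providers, which is the point of the request.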
open
2024-11-27T04:48:54Z
2024-11-27T04:48:54Z
https://github.com/OpenBB-finance/OpenBB/issues/6969
[ "enhancement", "platform" ]
deeleeramone
0
plotly/dash
jupyter
2,886
[BUG] Error generating typescript components
I started seeing the following error while generating typescript components: ``` I:\ds\projects\dash-salt\node_modules\typescript\lib\typescript.js:50379 if (symbol.flags & 33554432 /* SymbolFlags.Transient */) ^ TypeError: Cannot read properties of undefined (reading 'flags') at getSymbolLinks (I:\projects\node_modules\typescript\lib\typescript.js:50379:24) at getExportsOfModule (I:\projects\node_modules\typescript\lib\typescript.js:52656:25) at Object.getExportsOfModuleAsArray [as getExportsOfModule] (I:\projects\node_modules\typescript\lib\typescript.js:52595:35) at C:\python3.11\site-packages\dash\extract-meta.js:652:33 at Array.forEach (<anonymous>) at gatherComponents (C:\python3.11\site-packages\dash\extract-meta.js:649:33) at C:\python3.11\site-packages\dash\extract-meta.js:176:21 at Array.forEach (<anonymous>) at C:\\python3.11\site-packages\dash\extract-meta.js:173:40 at Array.forEach (<anonymous>) Node.js v18.20.2 Error generating metadata in dash_salt (status=1) ``` **Describe your context** I have the following dependencies: [tool.poetry.dependencies] python = "^3.11" wheel = "^0.43.0" build = "^1.2.1" dash = {extras = ["dev"], version = "^2.17.1"} plotly = "^5.22.0" This is happening on Windows. Note that the generation script was previously working fine. In fact, this very component was previously built successfully, and we haven't changed it since then. The error first appeared during building a different component. We removed it, and it started showing up for this component, even though we hadn't changed it. I suspect it's something with the environment, but not sure.
closed
2024-06-14T22:16:58Z
2024-06-14T22:31:26Z
https://github.com/plotly/dash/issues/2886
[]
tsveti22
1
plotly/dash-bio
dash
521
XYZ Files
Hey, I updated the parser so that it can read XYZ files. If you want me to contribute it, let me know. Cool project! Best, MQ
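For readers unfamiliar with the format: XYZ files are plain text with the atom count on line 1, a free-form comment on line 2, and one "Element x y z" line per atom. A minimal, hedged parser sketch (not the contributor's actual code):

```python
def parse_xyz(text):
    """Parse the plain XYZ chemical file format into (comment, atoms)."""
    lines = text.strip().splitlines()
    n_atoms = int(lines[0])          # line 1: number of atoms
    comment = lines[1]               # line 2: free-form comment
    atoms = []
    for line in lines[2:2 + n_atoms]:
        element, x, y, z = line.split()[:4]
        atoms.append((element, float(x), float(y), float(z)))
    if len(atoms) != n_atoms:
        raise ValueError(f"expected {n_atoms} atoms, found {len(atoms)}")
    return comment, atoms

sample = """3
water molecule
O  0.000  0.000  0.117
H  0.000  0.757 -0.470
H  0.000 -0.757 -0.470
"""
comment, atoms = parse_xyz(sample)
print(comment, len(atoms))  # → water molecule 3
```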
closed
2020-10-22T18:25:34Z
2020-12-22T07:07:16Z
https://github.com/plotly/dash-bio/issues/521
[]
cap-jmk
5
kymatio/kymatio
numpy
202
need for a cross-module `shape` kwarg to constructors
An issue that appeared in discussion of #194 It is already on its way to be solved because of PR #195 (which incorporated github.com/lostanlen/kymatio/pull/2 by @eickenberg) But I'm still opening it so that we can add it to the alpha milestone and be reminded of it in our API changelog
closed
2018-11-27T03:02:32Z
2018-11-27T04:58:41Z
https://github.com/kymatio/kymatio/issues/202
[ "API" ]
lostanlen
0