Dataset schema (column → type and observed range):
- repo_name: string, length 9–75
- topic: string, 30 classes
- issue_number: int64, 1–203k
- title: string, length 1–976
- body: string, length 0–254k
- state: string, 2 classes
- created_at: string, length 20
- updated_at: string, length 20
- url: string, length 38–105
- labels: list, length 0–9
- user_login: string, length 1–39
- comments_count: int64, 0–452
quantumlib/Cirq
api
6,265
Adding tags to Circuit objects
**Is your feature request related to a use case or problem? Please describe.** I'm passing around a `cirq.FrozenCircuit` to take data and would like to have some descriptive information about the circuit attached to it, which other entities can access to help them understand key information about the circuits they ran. **Describe the solution you'd like** Attach metadata "tags" to `Circuit` objects, similar to `Operation.with_tags`. **[optional] Describe alternatives/workarounds you've considered** It descends into hacks, like wrapping the circuit in a `CircuitOperation` and tagging that, or subclassing `cirq.FrozenCircuit`. **[optional] Additional context (e.g. screenshots)** **What is the urgency from your perspective for this issue? Is it blocking important work?** I hesitate to actually call this "P0" although I would use it immediately for something fairly important and useful. P0 - this should land no later than a week
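A minimal, stdlib-only sketch of the tagging API this request describes. The `TaggedCircuit` wrapper below is hypothetical: it stands in for the real workaround of wrapping a `cirq.FrozenCircuit` in a `cirq.CircuitOperation` and calling `.with_tags()`, so the sketch runs without Cirq installed.

```python
from dataclasses import dataclass, field
from typing import Any, FrozenSet

@dataclass(frozen=True)
class TaggedCircuit:
    """Hypothetical wrapper, not a Cirq class: attaches metadata tags
    to a circuit object the way Operation.with_tags does for operations."""
    circuit: Any  # the wrapped (frozen) circuit
    tags: FrozenSet[str] = field(default_factory=frozenset)

    def with_tags(self, *new_tags: str) -> "TaggedCircuit":
        # Mirrors the requested API: returns a new tagged object,
        # leaving the original untouched.
        return TaggedCircuit(self.circuit, self.tags | set(new_tags))

c = TaggedCircuit(circuit="<frozen circuit>")
c2 = c.with_tags("calibration", "run-42")
print(sorted(c2.tags))  # → ['calibration', 'run-42']
```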
closed
2023-08-28T21:11:29Z
2023-08-29T00:37:42Z
https://github.com/quantumlib/Cirq/issues/6265
[ "kind/feature-request" ]
kjsatz
0
holoviz/panel
jupyter
7,631
Should `doc.session_context` always have a `session` attribute?
When I convert a panel app, I get an error that `doc.session_context` does not have a `session` attribute. So maybe we want to replace `and doc.session_context.session)` with `and hasattr(doc.session_context, 'session')` in [this line](https://github.com/holoviz/panel/blob/62cd70da9eef14b8515874ccba736992137c4256/panel/io/state.py#L301).
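The difference between the crashing attribute access and the proposed `hasattr` guard can be sketched without Bokeh; the `SessionContext` stub below is hypothetical, standing in for `doc.session_context` in a converted (Pyodide/WASM) app.

```python
class SessionContext:
    """Hypothetical stand-in for doc.session_context in a converted
    app, where no `session` attribute exists."""

doc_session_context = SessionContext()

# Direct attribute access raises AttributeError:
try:
    _ = doc_session_context.session
    direct_access_failed = False
except AttributeError:
    direct_access_failed = True

# The guard proposed above degrades gracefully instead:
has_session = hasattr(doc_session_context, "session")
print(direct_access_failed, has_session)  # → True False
```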
closed
2025-01-20T08:44:59Z
2025-01-20T21:05:51Z
https://github.com/holoviz/panel/issues/7631
[ "webassembly" ]
wachsylon
0
plotly/dash
data-visualization
2,519
[BUG] `dash.get_relative_path()` docstring out of date
Docstrings for `dash.get_relative_path()` and `dash.strip_relative_path()` still refer to the `app` way of accessing those functions, which creates inconsistency in the docs: ![Screen Shot 2023-05-02 at 2 44 32 PM](https://user-images.githubusercontent.com/4672118/235759684-d386ad8c-cee1-48a4-ba6c-9b54fb442440.png) https://dash.plotly.com/reference#dash.get_relative_path
closed
2023-05-02T18:57:09Z
2023-05-15T20:29:16Z
https://github.com/plotly/dash/issues/2519
[]
emilykl
0
tox-dev/tox
automation
2,657
Numeric factor in environment name wrongly causes "conflicting factors" error
https://github.com/tox-dev/tox/pull/2597 had the unintended effect of interpreting a numeric factor (e.g. `2105`) as a candidate base python, breaking previously valid environment names (e.g. `py37-foo-2105`) by raising `ValueError: conflicting factors py37, 2105 in py37-foo-2105`. Example GitHub workflow build: https://github.com/galaxyproject/planemo/actions/runs/3654111042/jobs/6174245903 for `tox.ini` https://github.com/galaxyproject/planemo/blob/nsoranzo-patch-1/tox.ini _Originally posted by @nsoranzo in https://github.com/tox-dev/tox/pull/2597#discussion_r1044065606_
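The reported misinterpretation can be illustrated with a toy factor parser; this is a sketch, not tox's actual implementation.

```python
import re

def naive_base_python_factors(env_name):
    """Illustrative only: treat a factor as a candidate base python if
    it looks like `pyXY` or is all digits -- the latter rule is what
    breaks previously valid names like `py37-foo-2105`."""
    candidates = []
    for factor in env_name.split("-"):
        if re.fullmatch(r"py\d+", factor) or factor.isdigit():
            candidates.append(factor)
    return candidates

# Two "conflicting" candidates are found, even though 2105 is just an
# application version factor, not a Python version:
print(naive_base_python_factors("py37-foo-2105"))  # → ['py37', '2105']
```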
closed
2022-12-09T04:06:52Z
2023-06-17T01:18:17Z
https://github.com/tox-dev/tox/issues/2657
[ "bug:minor", "help:wanted" ]
nsoranzo
2
huggingface/pytorch-image-models
pytorch
1,312
[BUG] Unknown model (vit_base_patch16_224_dino)
closed
2022-06-19T20:47:35Z
2022-06-19T20:48:44Z
https://github.com/huggingface/pytorch-image-models/issues/1312
[ "bug" ]
matheusboaro
1
GibbsConsulting/django-plotly-dash
plotly
424
Updates to the dependencies haven't propagated to pypi
Hi there, It seems there is an issue with the PyPI manifest (?) The changes for tag v2.1.0 ( 14be68ade2618595bcef123f91a0f0e92d8bef4bc ) included "Relax historical constraints" #413, but it seems the dependency changes weren't carried over to setup.py. Straight from the PyPI [json ](https://pypi.org/pypi/django-plotly-dash/json): ![image](https://user-images.githubusercontent.com/37381718/200354855-d9452577-3277-4912-ae3f-e7bb8e8355af.png) > 42 "dash-bootstrap-components (<1)", This doesn't reflect the changes made in 2.1.0, unless I'm missing something. (It should be:) > 42 "dash-bootstrap-components",
closed
2022-11-07T16:01:34Z
2022-11-09T15:05:23Z
https://github.com/GibbsConsulting/django-plotly-dash/issues/424
[ "bug" ]
AdrianUlrich
2
pennersr/django-allauth
django
3,232
Google Login breaks when `SESSION_COOKIE_SAMESITE` is set to `Strict`
Hi, I have set up Google and Facebook social sign-in using allauth with templates (i.e. not DRF, etc). Google login stopped working at some point. After a lot of digging, I have found that whenever I have the following setting ```python # in settings SESSION_COOKIE_SAMESITE = 'Strict' ``` it fails **most of the time** on production. The only account I have had success with is a Google Workspace account (non-Gmail). However, taking a hint from #2982, as soon as I remove that setting (it [defaults to](https://docs.djangoproject.com/en/3.2/ref/settings/#session-cookie-samesite) `Lax`) everything seems to work fine. The Facebook provider works fine with or without that setting, which leads to the question: is it possible to get the Google provider to work similarly to the way the Facebook provider works? It would be great for some sites to have `SESSION_COOKIE_SAMESITE` set to `Strict`. Many thanks
closed
2023-01-12T04:01:53Z
2024-04-21T09:37:42Z
https://github.com/pennersr/django-allauth/issues/3232
[]
100cube
5
ivy-llc/ivy
numpy
28,102
Fix Frontend Failing Test: paddle - attribute.paddle.real
To-do List: https://github.com/unifyai/ivy/issues/27500
closed
2024-01-28T19:07:57Z
2024-01-29T13:08:53Z
https://github.com/ivy-llc/ivy/issues/28102
[ "Sub Task" ]
Sai-Suraj-27
0
wandb/wandb
tensorflow
8,620
[Bug]: expose `create_and_run_agent` as a public API
### Describe the bug Currently the `create_and_run_agent` function can be imported from `wandb.sdk.launch._launch`. However, this makes it seem as if this is a private api and should therefore not be imported. To reduce this ambiguity, we should add the function to `__all__` in `wandb/sdk/launch/__init__.py`
closed
2024-10-15T13:09:08Z
2024-11-08T15:26:49Z
https://github.com/wandb/wandb/issues/8620
[ "ty:bug", "c:launch" ]
marijncv
4
ultralytics/ultralytics
computer-vision
19,148
Yolov8 OBB output for documentation purposes
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions. ### Question Hello guys, I'm trying to find the output parameters for the OBB model. It should be something similar to this example from a paper: ![Image](https://github.com/user-attachments/assets/77f11b94-5cce-43c2-9756-db92625a1a54) As far as I know, YOLOv8 OBB outputs an angle [0...90°], the class, and a bounding box. How do I properly map these outputs, with the corresponding numbers (e.g. 512x256xnumber for the output), to the image? Thanks ### Additional _No response_
open
2025-02-09T19:11:31Z
2025-02-21T05:19:09Z
https://github.com/ultralytics/ultralytics/issues/19148
[ "documentation", "question", "OBB" ]
Petros626
10
davidteather/TikTok-Api
api
934
I am selling TikTok Private API + (X-Gorgon, X-Khronos, X-Argus, X-Ladon, and more)
https://github.com/tiktoksapi/TikTok-Private-API
closed
2022-08-19T23:24:30Z
2023-08-08T22:07:33Z
https://github.com/davidteather/TikTok-Api/issues/934
[ "feature_request" ]
ghost
1
keras-team/keras
deep-learning
20,109
Value Error
```python --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[244], line 1 ----> 1 training_history = Plant_Detector.fit(x= training_set, validation_data = validation_set, epochs = 10) File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\engine\training.py:1193, in Model.fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 1186 with trace.Trace( 1187 'train', 1188 epoch_num=epoch, 1189 step_num=step, 1190 batch_size=batch_size, 1191 _r=1): 1192 callbacks.on_train_batch_begin(step) -> 1193 tmp_logs = self.train_function(iterator) 1194 if data_handler.should_sync: 1195 context.async_wait() File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\eager\def_function.py:885, in Function.__call__(self, *args, **kwds) 882 compiler = "xla" if self._jit_compile else "nonXla" 884 with OptionalXlaContext(self._jit_compile): --> 885 result = self._call(*args, **kwds) 887 new_tracing_count = self.experimental_get_tracing_count() 888 without_tracing = (tracing_count == new_tracing_count) File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\eager\def_function.py:933, in Function._call(self, *args, **kwds) 930 try: 931 # This is the first call of __call__, so we have to initialize. 932 initializers = [] --> 933 self._initialize(args, kwds, add_initializers_to=initializers) 934 finally: 935 # At this point we know that the initialization is complete (or less 936 # interestingly an exception was raised) so we no longer need a lock. 
937 self._lock.release() File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\eager\def_function.py:759, in Function._initialize(self, args, kwds, add_initializers_to) 756 self._lifted_initializer_graph = lifted_initializer_graph 757 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph) 758 self._concrete_stateful_fn = ( --> 759 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access 760 *args, **kwds)) 762 def invalid_creator_scope(*unused_args, **unused_kwds): 763 """Disables variable creation.""" File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\eager\function.py:3066, in Function._get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 3064 args, kwargs = None, None 3065 with self._lock: -> 3066 graph_function, _ = self._maybe_define_function(args, kwargs) 3067 return graph_function File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\eager\function.py:3463, in Function._maybe_define_function(self, args, kwargs) 3459 return self._define_function_with_shape_relaxation( 3460 args, kwargs, flat_args, filtered_flat_args, cache_key_context) 3462 self._function_cache.missed.add(call_context_key) -> 3463 graph_function = self._create_graph_function(args, kwargs) 3464 self._function_cache.primary[cache_key] = graph_function 3466 return graph_function, filtered_flat_args File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\eager\function.py:3298, in Function._create_graph_function(self, args, kwargs, override_flat_arg_shapes) 3293 missing_arg_names = [ 3294 "%s_%d" % (arg, i) for i, arg in enumerate(missing_arg_names) 3295 ] 3296 arg_names = base_arg_names + missing_arg_names 3297 graph_function = ConcreteFunction( -> 3298 func_graph_module.func_graph_from_py_func( 3299 self._name, 3300 self._python_function, 3301 args, 3302 kwargs, 3303 self.input_signature, 3304 autograph=self._autograph, 3305 autograph_options=self._autograph_options, 3306 
arg_names=arg_names, 3307 override_flat_arg_shapes=override_flat_arg_shapes, 3308 capture_by_value=self._capture_by_value), 3309 self._function_attributes, 3310 function_spec=self.function_spec, 3311 # Tell the ConcreteFunction to clean up its graph once it goes out of 3312 # scope. This is not the default behavior since it gets used in some 3313 # places (like Keras) where the FuncGraph lives longer than the 3314 # ConcreteFunction. 3315 shared_func_graph=False) 3316 return graph_function File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\framework\func_graph.py:1007, in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses) 1004 else: 1005 _, original_func = tf_decorator.unwrap(python_func) -> 1007 func_outputs = python_func(*func_args, **func_kwargs) 1009 # invariant: `func_outputs` contains only Tensors, CompositeTensors, 1010 # TensorArrays and `None`s. 1011 func_outputs = nest.map_structure(convert, func_outputs, 1012 expand_composites=True) File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\eager\def_function.py:668, in Function._defun_with_scope.<locals>.wrapped_fn(*args, **kwds) 664 with default_graph._variable_creator_scope(scope, priority=50): # pylint: disable=protected-access 665 # __wrapped__ allows AutoGraph to swap in a converted function. We give 666 # the function a weak reference to itself to avoid a reference cycle. 
667 with OptionalXlaContext(compile_with_xla): --> 668 out = weak_wrapped_fn().__wrapped__(*args, **kwds) 669 return out File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\framework\func_graph.py:994, in func_graph_from_py_func.<locals>.wrapper(*args, **kwargs) 992 except Exception as e: # pylint:disable=broad-except 993 if hasattr(e, "ag_error_metadata"): --> 994 raise e.ag_error_metadata.to_exception(e) 995 else: 996 raise ValueError: in user code: C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\engine\training.py:862 train_function * return step_function(self, iterator) C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\engine\training.py:852 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:1286 run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2849 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:3632 _call_for_each_replica return fn(*args, **kwargs) C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\engine\training.py:845 run_step ** outputs = model.train_step(data) C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\engine\training.py:803 train_step loss = self.compiled_loss( C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\engine\compile_utils.py:204 __call__ loss_value = loss_obj(y_t, y_p, sample_weight=sw) C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\losses.py:155 __call__ losses = 
call_fn(y_true, y_pred) C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\losses.py:259 call ** return ag_fn(y_true, y_pred, **self._fn_kwargs) C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\util\dispatch.py:206 wrapper return target(*args, **kwargs) C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\losses.py:1679 categorical_crossentropy return backend.categorical_crossentropy( C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\util\dispatch.py:206 wrapper return target(*args, **kwargs) C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\backend.py:4875 categorical_crossentropy target.shape.assert_is_compatible_with(output.shape) C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\framework\tensor_shape.py:1161 assert_is_compatible_with raise ValueError("Shapes %s and %s are incompatible" % (self, other)) ValueError: Shapes (None, 9) and (None, 1024) are incompatible ```
closed
2024-08-10T16:10:25Z
2024-09-12T01:58:57Z
https://github.com/keras-team/keras/issues/20109
[ "type:support", "stat:awaiting response from contributor", "stale" ]
LuvolwethuTokwe
4
JaidedAI/EasyOCR
pytorch
732
make the result json serializable
According to https://github.com/JaidedAI/EasyOCR/blob/7a685cb8c4ba14f2bc246f89c213f1a56bbc2107/easyocr/recognition.py#L137-L149 the confidence is a `numpy.array`, and it's not JSON serializable. It would be great to make it JSON serializable. And according to https://github.com/JaidedAI/EasyOCR/blob/7a685cb8c4ba14f2bc246f89c213f1a56bbc2107/easyocr/detection.py#L92-L110 the text box coords are an ndarray too. However, I got some coords returned as int, not ndarray. That is weird. Sorry, I cannot provide the source image since it contains patient information.
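A common stdlib-only pattern for the request above is a `default` hook for `json.dumps` that converts array-like values to plain Python types. The names here (`to_jsonable`, `FakeConfidence`) are illustrative stand-ins, not EasyOCR API; a real numpy scalar exposes `.item()` the same way.

```python
import json

def to_jsonable(obj):
    """Fallback for json.dumps: convert array-like objects (numpy
    arrays/scalars expose .tolist()/.item()) to plain Python types."""
    if hasattr(obj, "tolist"):
        return obj.tolist()
    if hasattr(obj, "item"):
        return obj.item()
    raise TypeError(f"not JSON serializable: {type(obj)!r}")

# Minimal stand-in for a numpy float scalar, so the sketch is
# self-contained without numpy installed:
class FakeConfidence:
    def __init__(self, value):
        self.value = value
    def item(self):
        return self.value

result = {"text": "hello", "confidence": FakeConfidence(0.97)}
print(json.dumps(result, default=to_jsonable))  # → {"text": "hello", "confidence": 0.97}
```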
open
2022-05-20T00:56:38Z
2025-01-14T06:53:58Z
https://github.com/JaidedAI/EasyOCR/issues/732
[]
hubutui
2
piskvorky/gensim
nlp
3,160
Inconsistency between versions 3.2.0, 3.8.1 and 4.0.1
The Phrases and Phraser classes of the `models` package cannot be used; they are not accessible in the versions specified in the issue's title. This inconsistency should be removed, at least from the latest version.
closed
2021-05-30T19:46:00Z
2021-05-31T20:24:01Z
https://github.com/piskvorky/gensim/issues/3160
[ "need info" ]
recepsirin
1
pandas-dev/pandas
data-science
60,575
DOC: fix docstring validation errors for pandas.core and pandas.errors
> follow up on issues #59698, #59458 and #58063 pandas has a script for validating docstrings: > > https://github.com/pandas-dev/pandas/blob/b0192c70610a9db593968374ea60d189daaaccc7/ci/code_checks.sh#L86-L100 > > Currently, some methods fail docstring validation check. The task here is: > > * take 2-4 methods > * run: `scripts/validate_docstrings.py <method-name>` > * run `pytest pandas/tests/scalar/test_nat.py::test_nat_doc_strings` > * fix the docstrings according to whatever error is reported > * remove those methods from `code_checks.sh` script > * commit, push, open pull request > > Example: > > ``` > scripts/validate_docstrings.py pandas.Timedelta.ceil > ``` > > pandas.Timedelta.ceil fails with the SA01 and ES01 errors > > ``` > ################################################################################ > ################################## Validation ################################## > ################################################################################ > > 2 Errors found for `pandas.Timedelta.ceil`: > ES01 No extended summary found > SA01 See Also section not found > ``` > > Please don't comment `take` as multiple people can work on this issue. You also don't need to ask for permission to work on this, just comment on which methods are you going to work. > > If you're new contributor, please check the [contributing guide](https://pandas.pydata.org/docs/dev/development/contributing.html) > > thanks @natmokval for giving me the idea for this issue.
closed
2024-12-15T13:58:58Z
2024-12-15T14:07:24Z
https://github.com/pandas-dev/pandas/issues/60575
[]
sunlight798
0
huggingface/datasets
pandas
6,602
Index error when data is large
### Describe the bug At the `save_to_disk` step, the `max_shard_size` by default is `500MB`. However, one row of the dataset might be larger than `500MB`, in which case saving will throw an index error. Without looking at the source code, the bug is due to a wrong calculation of the number of shards, which I think is `total_size / min(max_shard_size, row_size)` and should be `total_size / max(max_shard_size, row_size)`. The workaround is setting a larger `max_shard_size`. ### Steps to reproduce the bug 1. create a dataset with large dense tensors per row 2. set a small `max_shard_size` say 1MB 3. `save_to_disk` ### Expected behavior ``` raise IndexError(f"Index {index} out of range for dataset of size {size}.") IndexError: Index 10 out of range for dataset of size 10. ``` ### Environment info - `datasets` version: 2.16.0 - Platform: Linux-5.10.201-168.748.amzn2int.x86_64-x86_64-with-glibc2.26 - Python version: 3.10.13 - `huggingface_hub` version: 0.20.2 - PyArrow version: 14.0.2 - Pandas version: 2.1.4 - `fsspec` version: 2023.12.2
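The shard-count arithmetic hypothesized above can be illustrated numerically. Note this reproduces the reporter's guess at the formula, not the confirmed `datasets` source.

```python
import math

def num_shards(total_size, max_shard_size, max_row_size):
    """Sketch of the proposed fix: when a single row exceeds
    max_shard_size, size shards by the larger of the two so no shard
    is ever asked to hold less than one row."""
    effective = max(max_shard_size, max_row_size)
    return math.ceil(total_size / effective)

MB = 1024 * 1024
# 10 rows of ~600 MB each, max_shard_size left at the 500 MB default:
total, row = 10 * 600 * MB, 600 * MB
# min() (the suspected bug) yields more shards than there are rows,
# which would index past the end of the dataset:
buggy = math.ceil(total / min(500 * MB, row))
print(buggy, num_shards(total, 500 * MB, row))  # → 12 10
```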
open
2024-01-18T23:00:47Z
2024-01-18T23:00:47Z
https://github.com/huggingface/datasets/issues/6602
[]
ChenchaoZhao
0
tensorpack/tensorpack
tensorflow
1,247
How to convert quantized weights to integer ?
After training the (8,8,32) AlexNet with dorefa-alexnet.py, I use the recommended fw.py to quantize the weights:

```python
import tensorflow as tf
from tensorflow.python import pywrap_tensorflow
import numpy as np
import os

model_dir = "./train_log/alexnet-dorefa-8,8,32/"
checkpoint_path = os.path.join(model_dir, "model-5000")
reader = pywrap_tensorflow.NewCheckpointReader(checkpoint_path)
var_to_shape_map = reader.get_variable_to_shape_map()

print("############# PRINT WEIGHT ##############")
key = 'conv0/W'
print("tensor_name: ", key)
weight0 = reader.get_tensor(key)
print(weight0)

print("########### Quantized Weight #############")
bitW = 8

def quantize(x, k):
    n = float(2**k - 1)
    return tf.round(x * n) / n

def fw(x):
    if bitW == 32:
        return x
    if bitW == 1:  # BWN
        E = tf.stop_gradient(tf.reduce_mean(tf.abs(x)))
        return tf.sign(x / E) * E
    x = tf.tanh(x)
    x = x / tf.reduce_max(tf.abs(x)) * 0.5 + 0.5
    return 2 * quantize(x, bitW) - 1

sess = tf.Session()
print(sess.run(fw(weight0)))
```

I get the following weights.txt, but the weights are **float, not integer**. What can I do? Thank you!
`[-0.01960784 0.10588241 -0.19999999 0.1686275 0.00392163 0.30196083 -0.10588235 -0.03529412 0.00392163 0.01176476 -0.04313725 -0.16862744 -0.27843136 0.01176476 -0.00392157 -0.16862744 0.18431377 -0.09803921 -0.10588235 0.20000005 -0.26274508 0.09019613 0.07450986 -0.05098039 0.24705887 -0.01176471 -0.14509803 0.09019613 -0.40392154 -0.1372549 -0.0745098 -0.02745098 0.00392163 -0.02745098 -0.38823527 -0.06666666 0.06666672 -0.08235294 0.0196079 -0.08235294 0.09019613 -0.34117645 -0.12156862 0.01176476 -0.11372548 -0.11372548 -0.12941176 0.05098045 0.14509809 0.16078436 0.07450986 0.082353 -0.08235294 -0.0745098 0.00392163 -0.27058822 -0.00392157 -0.6392157 0.17647064 -0.18431371 -0.09019607 0.43529415 0.06666672 0.20784318 -0.15294117 0.14509809 -0.14509803 0.04313731 -0.0745098 -0.06666666 0.02745104 -0.1607843 -0.02745098 0.17647064 -0.0745098 0.2941177 0.38823533 -0.34117645 0.20784318 0.21568632 0.17647064 -0.3333333 0.02745104 0.00392163 0.02745104 -0.12941176 -0.21568626 -0.21568626 0.09803927 -0.06666666 0.5529412 -0.05098039 0.09803927 0.11372554 0.0196079 -0.29411763 0.3176471 0.05098045 -0.31764704 -0.09803921 -0.2235294 0.36470592 -0.02745098 0.04313731 -0.00392157 -0.04313725 0.00392163 0.3803922 0.02745104 0.09019613 0.254902 0.17647064 0.04313731 0.3176471`
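Assuming the DoReFa-style `fw()` above, the printed values lie on the grid `2*m/n - 1` for `m = 0..n` with `n = 2**bitW - 1`, so the integer level of each quantized weight can be recovered by inverting that mapping. The helper name below is illustrative, not tensorpack API.

```python
bitW = 8
n = (1 << bitW) - 1  # 255 levels for 8-bit quantization

def to_int_codes(weights):
    """Invert w = 2*m/n - 1 to recover the integer code m = round((w+1)/2 * n)."""
    return [round((w + 1.0) / 2.0 * n) for w in weights]

# First few values from the weights.txt above, plus the grid endpoints:
quantized = [-0.01960784, 0.10588241, -0.19999999, 1.0, -1.0]
print(to_int_codes(quantized))  # → [125, 141, 102, 255, 0]
```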
closed
2019-06-29T12:06:13Z
2019-07-01T07:28:55Z
https://github.com/tensorpack/tensorpack/issues/1247
[ "examples" ]
changchun-zhou
3
deeppavlov/DeepPavlov
nlp
1,381
👩‍💻📞 DeepPavlov Community Call #5
> Update: [DeepPavlov Community Call #5 Recording](http://bit.ly/DPCommunityCall5_Video) > Subscribe for future calls here (last Thursday of the month, 8am Pacific/7pm MSK): > [http://bit.ly/MonthlyDPCommunityCall2021](http://bit.ly/MonthlyDPCommunityCall2021) Dear DeepPavlov community, The online DeepPavlov Community Call is scheduled for January 28th! This is the first call in 2021, and it will be all about our plans for DeepPavlov Library Refactoring Project. It is a big project for us, and we can’t wait to share our news and hear your thoughts. As always, you are welcome to suggest topics and call in! **DeepPavlov Community Call #5 (January 28, 2021)** **We’ll hold the fifth one on January 28, 2021 at 8:00am Pacific (7pm MSK/4 or 5pm UTC depending on DST).** **Add to your calendar: [http://bit.ly/MonthlyDPCommunityCall2021](http://bit.ly/MonthlyDPCommunityCall2021)** Last two years were fantastic for us. Participation in famous challenges like Alexa Prize 3, Alexa Prize 4 as well as Russian Pro//Chtenie helped us to battle test our DeepPavlov Library in the real conditions. It was fascinating for us to see how our Library fueled our endeavors in the Conversational AI space! DeepPavlov lab was created to enable building super smart, intelligent multiskill AI Assistants. To make this happen, we’ve created 3 key components: - **DeepPavlov Library** with a huge (200+) number of different configs that include NLP models that span across basic NLP tasks like entity extraction, sentiment analysis, classification; frameworks for building complex skills like Go-Bot and FAQ; integration with different systems; and myriads of examples. First release was shipped on February 5, 2018. - **DeepPavlov Agent** which is our Conversational AI Orchestrator. It enabled our efforts on Alexa Prize 3, DeepPavlov Dream AI Assistant Demo, Deepy, and is currently the heart of our Alexa Prize 4 Dream socialbot. 
It’s also versatile enough to support solutions for sister NLP problems like language comprehension that helped us to win at one of Pro//Chtenie challenges. First release was shipped on June 28, 2019. - **DeepPavlov Deepy** which is our platform for building Multiskill AI Assistants. It was born from our experimental DeepPavlov Dream AI Assistant Demo which itself was born from our Alexa Prize 3 Dream socialbot. Deepy provides a number of different goal-oriented, FAQ-based, and chit-chat skills, and several examples of Distributions that showcase different kinds of multiskill AI assistants you can build with DeepPavlov technology. First release was shipped on December 8, 2020. Releases of Agent and Deepy allowed us to round up our efforts in helping you to build an end-to-end solution. However, while some of the components of the original Library were efficiently integrated with Agent and Deepy, some remained in their experimental form. Others became effectively obsolete. It’s now time to begin our Library’s transformation on its path to v1.0. This process will involve a lot of restructuring, refactoring that will in turn lead to significant changes. Here are some of the areas: - Migration to PyTorch - Documentation - Deprecation of Old Models & Code - Models Version Control - Updates to Configs These are some of the top areas that we will focus in the coming months. We welcome you to join us at our DeepPavlov Community Call #5 to let us know what you think about these changes, share your expectations from v1 of the Library, and tell us how DeepPavlov Library helps you in your projects! **Agenda for the DeepPavlov Community Call #5:** > 7:00pm–7:30pm (MSK) | Welcome and overview of DeepPavlov Library changes in 2021H1 > 7:30pm–8:00pm (MSK) | Time for Feedback & Q&A with the team > In case you’ve missed the fourth one, we’ve uploaded a record — [see the playlist](https://bit.ly/DPCommunityCall3_Video). Check it out! 
**DeepPavlov Library Feedback Survey** **_We want to hear your thoughts. Starting today, you can fill in this form to let us know how you use the DeepPavlov Library and what you want us to add or improve!_** **We are eager to hear your thoughts! [http://bit.ly/DPLibrary2021Survey](http://bit.ly/DPLibrary2021Survey)** **Interested?** Please let us know and leave a comment with any topics or questions you'd like to hear about! We can't promise to cover everything, but we'll do our best next week or in a future call. After calling in or watching, please do fill in the survey to let us know if you have any feedback on the format or content: https://bit.ly/dpcallsurvey See you! The DeepPavlov team
closed
2021-01-21T12:08:35Z
2021-02-18T16:17:20Z
https://github.com/deeppavlov/DeepPavlov/issues/1381
[ "discussion" ]
moryshka
0
taverntesting/tavern
pytest
740
Tavern 2.0.0a2 False Negatives
Tavern 2.0.0a2 reports erroneous failures. ``` /tavern_tests/test_gdcp_display_priorities.tavern.yaml::Volume, no adjustment ______________Format variables: hmi_app_connect_topic = 'hmi/app_connect' high_pri_app = 'MCX_Client' hmi_app_connect_response_topic = 'hmi/app_connect_response' high_pri_app = 'MCX_Client' Source test stage (line 37): - name: Step 1 - Connect high priority app mqtt_publish: topic: "{hmi_app_connect_topic}" json: application: "{high_pri_app}" application_ref: 1000 mqtt_response: topic: "{hmi_app_connect_response_topic}" json: application: "{high_pri_app}" application_ref: 1000 result_code: 0 timeout: 1 Formatted stage: mqtt_publish: json: application: 'MCX_Client' application_ref: 1000 topic: 'hmi/app_connect' mqtt_response: json: application: 'MCX_Client' application_ref: 1000 result_code: 0 timeout: 1 topic: 'hmi/app_connect_response' name: Step 1 - Connect high priority app Errors: E tavern.util.exceptions.TestFailError: Test 'Step 1 - Connect high priority app' failed: - Expected '{'application': 'MCX_Client', 'application_ref': 1000, 'result_code': 0}' on topic 'hmi/app_connect_response' but no such message received ``` ``` Dec 1 10:25:39 5c02662b3add user.debug HMI[12]: [7fdcf1d3b740]on_log16 : Client HMI received PUBLISH (d0, q0, r0, m0, 'hmi/app_connect', ... 
(54 bytes)) Dec 1 10:25:39 5c02662b3add user.debug HMI[12]: [7fdcf1d3b740]virtual void MosquittoClient::on_message(const mosquitto_message*) Dec 1 10:25:39 5c02662b3add user.debug HMI[12]: [7fdcf1d3b740]AppRequestHandler::handleAppConnect Dec 1 10:25:39 5c02662b3add user.debug HMI[12]: [7fdcf1d3b740]HMIConfiguration::Instance Dec 1 10:25:39 5c02662b3add user.debug HMI[12]: [7fdcf1d3b740]HMIConfiguration::validate_json_against_schema - ../json_schemas/HMI/target_app_connect_schema.json Dec 1 10:25:39 5c02662b3add user.debug HMI[12]: [7fdcf1d3b740]AppRequestHandler::handleAppConnect data validated against schema Dec 1 10:25:39 5c02662b3add user.debug HMI[12]: [7fdcf1d3b740]AppRequestHandler::isAppAlreadyConnected Dec 1 10:25:39 5c02662b3add user.debug HMI[12]: [7fdcf1d3b740]AppRequestHandler::handleAppConnect - new app: MCX_Client Dec 1 10:25:39 5c02662b3add user.debug HMI[12]: [7fdcf1d3b740]HMIConfiguration::Instance Dec 1 10:25:39 5c02662b3add user.debug HMI[12]: [7fdcf1d3b740]HMIConfiguration::getPriorityForApp Dec 1 10:25:39 5c02662b3add user.debug HMI[12]: [7fdcf1d3b740]HMIConfiguration::getPriorityForApp - found App 'MCX_Client' - priority = 2 Dec 1 10:25:39 5c02662b3add user.debug HMI[12]: [7fdcf1d3b740]AppRequestHandler::handleKeyRegistration Dec 1 10:25:39 5c02662b3add user.debug HMI[12]: [7fdcf1d3b740]AppRequestHandler::sendAppConnectResponse 0 Dec 1 10:25:39 5c02662b3add user.debug HMI[12]: [7fdcf1d3b740]publish_to_topic (topic = 'hmi/app_connect_response', message = '{"application":"MCX_Client","application_ref":1000,"result_code":0}', qos = '0', retain = '0') ``` This happens for every stage in the test.
closed
2021-12-01T11:15:16Z
2023-01-16T10:05:31Z
https://github.com/taverntesting/tavern/issues/740
[]
KenStorey
1
amidaware/tacticalrmm
django
1,312
Parameterize Linux Installer Script
**Is your feature request related to a problem? Please describe.** When installing the agent on Linux systems, you have to download the script onto your machine first, then copy it over to the proper system, make it executable, then run it. It would be nice to simplify this process. **Describe the solution you'd like** The only items changing in the script every time it's run are a few variables. It would make more sense for these to simply be parameters for the script. Then, you could pipe the script from curl to bash (bad practice, I know, but we could put a warning or something) with the parameters on the machine directly, without having to download the script and copy it over. The command could then be provided in the web interface. Consider adding the following snippet to the installer script: ```bash # Simple Bash Argument Handler while [[ "$#" -gt 0 ]]; do case $1 in -t|--token) token="$2"; shift ;; -c|--clientid) clientID="$2"; shift ;; -s|--siteid) siteID="$2"; shift ;; -a|--agenttype) agentType="$2"; shift;; --apiurl) apiURL="$2"; shift;; --agentdl) agentDL="$2"; shift;; --meshdl) meshDL="$2"; shift;; *) echo "Unknown parameter passed: $1"; exit 1 ;; esac shift done ``` Then, from the TacticalRMM web interface, when you select "Install New Agent" and select "Linux", the following command would appear at the bottom instead of (or next to) the "Download Linux Script" button: ```bash curl -fL https://rmm.example.com/agent-install.sh | sudo bash -s -- -t xxxxxx -c 1 -s 1 -a server --agentdl 'https://agents.tacticalrmm.com/api/v2/agents/?version=2.4.0&...' --meshdl 'https://mesh.example.com/meshagents?id=...' ``` **Describe alternatives you've considered** NA **Additional context** The only thing to consider would be making the script publicly available; however, as it stands, I can already download it from GitHub without the parameters, so this would essentially accomplish the same thing, except moving the variables from within the script to parameters.
open
2022-10-11T15:55:41Z
2022-10-11T15:55:41Z
https://github.com/amidaware/tacticalrmm/issues/1312
[]
SoarinFerret
0
pallets/flask
flask
5,023
send_file won't accept filenames with a comma
If the user requests a file from the server such as "CORAÇÕES,.html", the browser will return this error: `ERR_RESPONSE_HEADERS_MULTIPLE_CONTENT_DISPOSITION`

This is caused by the comma inside the filename. If the name is replaced with "C,.html", it works.

To replicate this error, you can run this function:

```
send_from_directory(
    "CORAÇÕES,.html",
    as_attachment=True,
)
```

Environment:

- Python version: 3.10.9
- Flask version: 2.2.3

Solution:

The filename can be percent-encoded for the HTTP header. It won't work with the plain name, such as `"Corações,.html"`, but it will work with `urllib.parse`, such as:

```
from urllib.parse import quote

send_from_directory(
    "CORAÇÕES,.html",
    as_attachment=True,
    download_name=quote("CORAÇÕES,.html"),
)
```

Maybe we can add `urllib.parse.quote` as the default for `download_name`?
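The `quote()` workaround described in this report works because percent-encoding removes both the comma and the non-ASCII bytes from the header value. A standalone sketch using only `urllib.parse` (no Flask involved):

```python
from urllib.parse import quote

# The comma (",") would otherwise be treated as a header-value separator and
# split Content-Disposition into multiple values; quote() turns it into %2C
# and encodes the non-ASCII characters as UTF-8 percent-escapes, leaving a
# single-valued, ASCII-safe token.
name = "CORAÇÕES,.html"
print(quote(name))  # CORA%C3%87%C3%95ES%2C.html
```

Note that `quote()` leaves `/` unescaped by default (its `safe` parameter), which is irrelevant for a bare filename but worth knowing before reusing this for paths.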
closed
2023-03-10T15:26:30Z
2023-03-25T00:05:28Z
https://github.com/pallets/flask/issues/5023
[]
albcunha
1
yt-dlp/yt-dlp
python
12,617
Nicovideo json video link output doesn't work
### Checklist

- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required

### Region

Worldwide (some videos are Japan-only)

### Provide a description that is worded well enough to be understood

I tried to play a Nicovideo (Niconico Douga) video in PotPlayer using yt-dlp with a script, but it doesn't work. The PotPlayer script uses yt-dlp's JSON output, so I checked that output and found that the video URL in the JSON output for Nicovideo doesn't work. The download feature itself does work, so the broken video URL in the JSON output is the only problem on this site. Also, the `-f best` parameter is not working.

Here is an example video: https://www.nicovideo.jp/watch/sm44179023 (no region lock)

### Provide verbose output that clearly demonstrates the problem

- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below

### Complete Verbose Output

```shell
[debug] Command-line config: ['-vU', '--no-playlist', '--all-subs', '-j', '--', 'https://www.nicovideo.jp/watch/sm44179023']
[debug] Encodings: locale cp949, fs utf-8, pref cp949, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.03.13.232844 from yt-dlp/yt-dlp-nightly-builds [4432a9390] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.26100-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.1-full_build-www.gyan.dev (setts), ffprobe 7.1-essentials_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2025.01.31, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-15.0.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Plugin directories: none
[debug] Loaded 1844 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2025.03.13.232844 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2025.03.13.232844 from yt-dlp/yt-dlp-nightly-builds)
[niconico] Extracting URL: https://www.nicovideo.jp/watch/sm44179023
[niconico] sm44179023: Downloading webpage
[niconico] sm44179023: Downloading JSON metadata
[niconico] sm44179023: Downloading m3u8 information
[niconico] sm44179023: Downloading comments
[info] sm44179023: Downloading subtitles: comments
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr,
proto, vext, aext, hasaud, source, id [debug] Default format spec: bestvideo*+bestaudio/best [info] sm44179023: Downloading 1 format(s): video-h264-720p+audio-aac-192kbps {"id": "sm44179023", "_api_data": {"ads": null, "category": null, "channel": null, "client": {"nicosid": "1742009383.1621927832", "watchId": "sm44179023", "watchTrackId": "rmS9ywDHdP_1742009383963"}, "comment": {"server": {"url": ""}, "keys": {"userKey": ""}, "layers": [{"index": 0, "isTranslucent": false, "threadIds": [{"id": 1728078906, "fork": 1, "forkLabel": "owner"}]}, {"index": 1, "isTranslucent": false, "threadIds": [{"id": 1728078906, "fork": 0, "forkLabel": "main"}, {"id": 1728078906, "fork": 2, "forkLabel": "easy"}]}], "threads": [{"id": 1728078906, "fork": 1, "forkLabel": "owner", "videoId": "sm44179023", "isActive": false, "isDefaultPostTarget": false, "isEasyCommentPostTarget": false, "isLeafRequired": false, "isOwnerThread": true, "isThreadkeyRequired": false, "threadkey": null, "is184Forced": false, "hasNicoscript": true, "label": "owner", "postkeyStatus": 0, "server": ""}, {"id": 1728078906, "fork": 0, "forkLabel": "main", "videoId": "sm44179023", "isActive": true, "isDefaultPostTarget": true, "isEasyCommentPostTarget": false, "isLeafRequired": true, "isOwnerThread": false, "isThreadkeyRequired": false, "threadkey": null, "is184Forced": false, "hasNicoscript": false, "label": "default", "postkeyStatus": 0, "server": ""}, {"id": 1728078906, "fork": 2, "forkLabel": "easy", "videoId": "sm44179023", "isActive": true, "isDefaultPostTarget": false, "isEasyCommentPostTarget": true, "isLeafRequired": true, "isOwnerThread": false, "isThreadkeyRequired": false, "threadkey": null, "is184Forced": false, "hasNicoscript": false, "label": "easy", "postkeyStatus": 0, "server": ""}], "ng": {"ngScore": {"isDisabled": false}, "channel": [], "owner": [], "viewer": null}, "isAttentionRequired": false, "nvComment": {"threadKey": 
"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJqdGkiOiI2N2Q0ZjQyN2YzYWZmIiwiZXhwIjoxNzQyMDEwMTM2LCJ0eXAiOiJUaHJlYWQtS2V5IiwidGlkcyI6WyIxNzI4MDc4OTA2Il0sImYxODRzIjpbXX0.eTYn_RNDqI6jHDTM1aGGIQrRhzfvfUU3CrLlwFnGRbFmUXiZ-pPzV5qvztWh83z4d3rwuWLfejuJ6xV6ACDRWg", "server": "https://public.nvcomment.nicovideo.jp", "params": {"targets": [{"id": "1728078906", "fork": "owner"}, {"id": "1728078906", "fork": "main"}], "language": "en-us"}}}, "community": null, "easyComment": {"phrases": []}, "external": {"commons": {"hasContentTree": true}, "ichiba": {"isEnabled": true}}, "genre": {"key": "cooking", "label": "\u6599\u7406", "isImmoral": false, "isDisabled": false, "isNotSet": false}, "marquee": {"isDisabled": false, "tagRelatedLead": null}, "media": {"domand": {"videos": [{"id": "video-h264-1080p", "isAvailable": false, "label": "1080p", "bitRate": 4131927, "width": 1920, "height": 1080, "qualityLevel": 4, "recommendedHighestAudioQualityLevel": 1}, {"id": "video-h264-720p", "isAvailable": true, "label": "720p", "bitRate": 2060905, "width": 1280, "height": 720, "qualityLevel": 3, "recommendedHighestAudioQualityLevel": 1}, {"id": "video-h264-480p", "isAvailable": true, "label": "480p", "bitRate": 1626432, "width": 854, "height": 480, "qualityLevel": 2, "recommendedHighestAudioQualityLevel": 1}, {"id": "video-h264-360p", "isAvailable": true, "label": "360p", "bitRate": 620083, "width": 640, "height": 360, "qualityLevel": 1, "recommendedHighestAudioQualityLevel": 1}, {"id": "video-h264-144p", "isAvailable": true, "label": "144p", "bitRate": 153600, "width": 256, "height": 144, "qualityLevel": 0, "recommendedHighestAudioQualityLevel": 1}], "audios": [{"id": "audio-aac-192kbps", "isAvailable": true, "bitRate": 167165, "samplingRate": 48000, "integratedLoudness": -19.960732, "truePeak": -1.50952, "qualityLevel": 1, "loudnessCollection": [{"type": "video", "value": 1}, {"type": "pureAdPreroll", "value": 0.39991104599173466}, {"type": "houseAdPreroll", "value": 0.39991104599173466}, {"type": 
"networkAdPreroll", "value": 0.39991104599173466}, {"type": "pureAdMidroll", "value": 0.39991104599173466}, {"type": "houseAdMidroll", "value": 0.39991104599173466}, {"type": "networkAdMidroll", "value": 0.39991104599173466}, {"type": "pureAdPostroll", "value": 0.39991104599173466}, {"type": "houseAdPostroll", "value": 0.39991104599173466}, {"type": "networkAdPostroll", "value": 0.39991104599173466}, {"type": "nicoadVideoIntroduce", "value": 0.5034581782561807}, {"type": "nicoadBillboard", "value": 1}, {"type": "marquee", "value": 0.6338162943823035}], "label": {"quality": "\u6a19\u6e96\u97f3\u8cea", "bitrate": "192kbps"}}, {"id": "audio-aac-64kbps", "isAvailable": true, "bitRate": 68762, "samplingRate": 48000, "integratedLoudness": -19.960732, "truePeak": -1.50952, "qualityLevel": 0, "loudnessCollection": [{"type": "video", "value": 1}, {"type": "pureAdPreroll", "value": 0.39991104599173466}, {"type": "houseAdPreroll", "value": 0.39991104599173466}, {"type": "networkAdPreroll", "value": 0.39991104599173466}, {"type": "pureAdMidroll", "value": 0.39991104599173466}, {"type": "houseAdMidroll", "value": 0.39991104599173466}, {"type": "networkAdMidroll", "value": 0.39991104599173466}, {"type": "pureAdPostroll", "value": 0.39991104599173466}, {"type": "houseAdPostroll", "value": 0.39991104599173466}, {"type": "networkAdPostroll", "value": 0.39991104599173466}, {"type": "nicoadVideoIntroduce", "value": 0.5034581782561807}, {"type": "nicoadBillboard", "value": 1}, {"type": "marquee", "value": 0.6338162943823035}], "label": {"quality": "\u4f4e\u97f3\u8cea", "bitrate": "64kbps"}}], "isStoryboardAvailable": false, "accessRightKey": 
"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJqdGkiOiI2N2Q0ZjQyN2Y0MTNjIiwiZXhwIjoxNzQyMDA5OTgzLCJ0eXAiOiJBY2Nlc3MtUmlnaHQtS2V5IiwidmlkIjoic200NDE3OTAyMyIsInJpZCI6Im5pY292aWRlby1zbTQ0MTc5MDIzIiwiZmlkIjo2LCJ1aWQiOiI2LXJtUzl5d0RIZFBfMTc0MjAwOTM4Mzk2MyIsImQiOjUwMiwidiI6WyJ2aWRlby1oMjY0LTcyMHAiLCJ2aWRlby1oMjY0LTQ4MHAiLCJ2aWRlby1oMjY0LTM2MHAiLCJ2aWRlby1oMjY0LTE0NHAiXSwiYSI6WyJhdWRpby1hYWMtMTkya2JwcyIsImF1ZGlvLWFhYy02NGticHMiXSwicyI6ZmFsc2UsInNoIjpmYWxzZX0.Ray55inFHN9kl1OqpqGSx1zXtAiYJ0ZyrZYEW6WX8cd_ipwR1nExSFlQ2ZN2uDPxQN1pzZ_RTb-M7bAk4Ll47A"}, "delivery": null, "deliveryLegacy": null}, "okReason": "PURELY", "owner": {"id": 64865991, "nickname": "\u30ed\u30a6\u30a2\u30a4\u30ad\u30e5\u30fc", "iconUrl": "https://secure-dcdn.cdn.nimg.jp/nicoaccount/usericon/6486/64865991.jpg?1485862794", "channel": null, "live": null, "isVideosPublic": true, "isMylistsPublic": true, "videoLiveNotice": null, "viewer": null}, "payment": {"video": {"isPpv": false, "isAdmission": false, "isContinuationBenefit": false, "isPremium": false, "watchableUserType": "all", "commentableUserType": "all", "billingType": "free"}, "preview": {"ppv": {"isEnabled": false}, "admission": {"isEnabled": false}, "continuationBenefit": {"isEnabled": false}, "premium": {"isEnabled": false}}}, "pcWatchPage": {"tagRelatedBanner": null, "videoEnd": {"bannerIn": null, "overlay": null}, "showOwnerMenu": false, "showOwnerThreadCoEditingLink": false, "showMymemoryEditingLink": false}, "player": {"initialPlayback": null, "comment": {"isDefaultInvisible": false}, "layerMode": 0}, "ppv": null, "ranking": {"genre": {"rank": 1, "genre": "Cooking", "dateTime": "2024-10-05T20:00:00+09:00"}, "popularTag": [{"tag": "\u304a\u83d3\u5b50\u4f5c\u308a", "regularizedTag": "\u30aa\u83d3\u5b50\u4f5c\u30ea", "rank": 1, "genre": "Cooking", "dateTime": "2024-10-05T08:00:00+09:00"}, {"tag": "\u6599\u7406", "regularizedTag": "\u6599\u7406", "rank": 1, "genre": "Cooking", "dateTime": "2024-10-05T20:00:00+09:00"}], "teiban": {"featuredKey": 
"lq8d5918", "label": "\u6599\u7406", "rank": 66}}, "series": null, "smartphone": null, "system": {"serverTime": "2025-03-15T12:29:43+09:00", "isPeakTime": true, "isStellaAlive": true}, "tag": {"items": [], "hasR18Tag": false, "isPublishedNicoscript": false, "edit": {"isEditable": false, "uneditableReason": "NEED_LOGIN", "editKey": null}, "viewer": null}, "video": {"id": "sm44179023", "title": "\u3010\u8d85\u89e3\u8aac\u3011\u30b5\u30a4\u30bc\u306e\u30c9\u30ea\u30f3\u30af\u30d0\u30fc\u5168\u90e8\u3076\u3063\u8fbc\u307f\u30a2\u30a4\u30b9\u306e\u4f5c\u308a\u65b9", "description": "\u300c\u4ed6\u306e\u6599\u7406\u52d5\u753b\u300d<a href=\"https://www.nicovideo.jp/mylist/58245198\" target=\"_blank\" rel=\"noopener\">mylist/58245198</a><br><br>-----\u3053\u3061\u3089\u3082\u3088\u308d\u3057\u304f\u304a\u9858\u3044\u3057\u307e\u3059-----<br>\u25c6\u30c4\u30a4\u30c3\u30bf\u30fc <a href=\"https://twitter.com/IowlQ\" target=\"_blank\" rel=\"noopener nofollow\">https://twitter.com/IowlQ</a><br>\u25c6YouTube <a href=\"https://t.co/TFNhE3XaHe?amp=1\" target=\"_blank\" rel=\"noopener nofollow\">https://t.co/TFNhE3XaHe?amp=1</a><br>\u25c6\u30a4\u30f3\u30b9\u30bf <a href=\"https://t.co/B6AAm09HD2\" target=\"_blank\" rel=\"noopener nofollow\">https://t.co/B6AAm09HD2</a><br><br>", "count": {"view": 109735, "comment": 0, "mylist": 319, "like": 6279}, "duration": 502, "thumbnail": {"url": "https://nicovideo.cdn.nimg.jp/thumbnails/44179023/44179023.52467209", "middleUrl": "https://nicovideo.cdn.nimg.jp/thumbnails/44179023/44179023.52467209.M", "largeUrl": "https://nicovideo.cdn.nimg.jp/thumbnails/44179023/44179023.52467209.L", "player": "https://img.cdn.nimg.jp/s/nicovideo/thumbnails/44179023/44179023.52467209.original/a960x540l?key=dc5b718dd1829915fc0d2b0a955ceefb4a25e521c84e6a38b6526c3d31a42a33", "ogp": "https://img.cdn.nimg.jp/s/nicovideo/thumbnails/44179023/44179023.52467209.original/r1280x720l?key=c0260bc50c28d6e68aae777dc5d0c3b3807f39f070c9ad01549080dcfa93ff9f"}, "rating": 
{"isAdult": false}, "registeredAt": "2024-10-05T06:55:06+09:00", "isPrivate": false, "isDeleted": false, "isNoBanner": false, "isAuthenticationRequired": false, "isEmbedPlayerAllowed": true, "isGiftAllowed": true, "viewer": null, "watchableUserTypeForPayment": "all", "commentableUserTypeForPayment": "all", "9d091f87": false}, "videoAds": {"additionalParams": {"videoId": "sm44179023", "videoDuration": 502, "isAdultRatingNG": false, "isAuthenticationRequired": false, "isR18": false, "nicosid": "1742009383.1621927832", "lang": "en-us", "watchTrackId": "rmS9ywDHdP_1742009383963", "genre": "cooking"}, "items": [{"type": "preroll", "timingMs": null, "additionalParams": {"linearType": "preroll", "adIdx": 0, "skipType": 1, "skippableType": 1, "pod": 1}}, {"type": "postroll", "timingMs": null, "additionalParams": {"linearType": "postroll", "adIdx": 0, "skipType": 1, "skippableType": 1, "pod": 2}}], "reason": "non_premium_user_ads"}, "videoLive": null, "viewer": null, "waku": {"information": null, "bgImages": [], "addContents": null, "addVideo": null, "tagRelatedBanner": null, "tagRelatedMarquee": null}}, "title": "\u3010\u8d85\u89e3\u8aac\u3011\u30b5\u30a4\u30bc\u306e\u30c9\u30ea\u30f3\u30af\u30d0\u30fc\u5168\u90e8\u3076\u3063\u8fbc\u307f\u30a2\u30a4\u30b9\u306e\u4f5c\u308a\u65b9", "formats": [{"format_id": "audio-aac-64kbps", "format_note": "Main Audio", "format_index": null, "url": "https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/media/audio-aac-64kbps.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Expires=1742095784&Signature=IDbB3kWQ6NZ9Spv4ALsx5l24CCsHHFwTWnClokBN~NH40KSmmJWN2-IMoU8qc7~je68HDfUgLHzU2w-7-p2PSntOH2zYWC7Cy8WnLI9qsvO0IJ4tHLOD1t6EH0lnZA6sSDVTMLpWPwRlC-nmlhpJyDgkCNChmuZT5zjnY8oyDRsRwLI7aQ4iWMJCWenz0LNV4D2cGcKiIRRPnKUAbLo7ugq7SCK6OKljA-gjmk8X3nBeZDbKZ2-75c5drwUEQN4u99PQKrY4sMJMwVx9JCg96P7oW3jpS1CZGEhCLHJs4J74XUZFKb3Hb96Opm5Y2bTdTNFaJk52MJgGlRMTpCvikg__&Key-Pair-Id=K11RB80NFXU134", 
"manifest_url": "https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/variants/c30b4ebbcc83d787.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9kZWxpdmVyeS5kb21hbmQubmljb3ZpZGVvLmpwL2hsc2JpZC82NzAwNWU2NWY1NzA3NDIyZmUwYWNiMWYvcGxheWxpc3RzL3ZhcmlhbnRzL2MzMGI0ZWJiY2M4M2Q3ODcubTN1OFxcPyoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE3NDIwOTU3ODR9fX1dfQ__&Signature=Dr34UVdukroVxDFErXbaZNID34RlGQVQSZVm1~FyvUAvHwwPVFmXQziOqS0ftjqPS67nSbvFl-PRSXu3DQYjyFBjqPviTFW51~91h4hZTcQkooVf5qJ5mcqOZAHvx3yXMAaVROKYx95P6NdD3yGudqN--nxOIfkHvLkL0CLso08TMs377DwZOOi3HIqBxjC1ljB37J3CgzTZ5nsoIQRnNaZSIYsP0fqfhMngBnr4CtvfcOQsxIAnFI5zVLh~u-BwjA57bLjZ13kno-raVpXfWehzqnb19suGFcNtMPNlr3OLjdXsyIN~ENqtf9X0YXOPSbAZHts38NAmmalMMpaV7Q__&Key-Pair-Id=K11RB80NFXU134", "language": null, "ext": "mp4", "protocol": "m3u8_native", "preference": null, "quality": 0, "has_drm": false, "vcodec": "none", "abr": 68.762, "asr": 48000, "acodec": "aac", "audio_ext": "mp4", "video_ext": "none", "vbr": 0, "tbr": 68.762, "resolution": "audio only", "aspect_ratio": null, "cookies": "nicosid=1742009383.1621927832; Domain=.nicovideo.jp; Path=/; Expires=2057369384; domand_bid=08e8eed99fdce20fed514af7ea5aca7af8ffe1374448ea2e1707cc45ff4a47eb; Domain=.nicovideo.jp; Path=/; Secure; Expires=1742095784", "http_headers": {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.115 Safari/537.36", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-us,en;q=0.5", "Sec-Fetch-Mode": "navigate"}, "format": "audio-aac-64kbps - audio only (Main Audio)"}, {"format_id": "audio-aac-192kbps", "format_note": "Main Audio", "format_index": null, "url": 
"https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/media/audio-aac-192kbps.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Expires=1742095784&Signature=fJabGphdRSuN1l8S6N27okVtNjqTHXJKY2SUtJ7K0A4m1u76KP01g~IUIfJ8BNw4RL2z8uik2YEvGsZfPpJmziSZv99Btc-Zk14q55kJ3hmV5gjQu8v7rY-XBvAyYprcMbMEKlTY~qYTjEfmNiWPSowfx44H3H4oMNJuxsiXhwrZVnmQlJTHXSHzr7asEdr8MVXTa1O-ojMkZmk~SzYmJhrMYhV4DC1Y-Rey5YXG71KtdVXshfVS3ECeCwNBUHS4VGc8mEaZmV3uHP9jUzjN86Eb9QOgTGEApvBofdR1-mkzgoKghn6s01H3xJb23-el10GsO8~CiE--6NQ9a4df-w__&Key-Pair-Id=K11RB80NFXU134", "manifest_url": "https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/variants/c30b4ebbcc83d787.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9kZWxpdmVyeS5kb21hbmQubmljb3ZpZGVvLmpwL2hsc2JpZC82NzAwNWU2NWY1NzA3NDIyZmUwYWNiMWYvcGxheWxpc3RzL3ZhcmlhbnRzL2MzMGI0ZWJiY2M4M2Q3ODcubTN1OFxcPyoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE3NDIwOTU3ODR9fX1dfQ__&Signature=Dr34UVdukroVxDFErXbaZNID34RlGQVQSZVm1~FyvUAvHwwPVFmXQziOqS0ftjqPS67nSbvFl-PRSXu3DQYjyFBjqPviTFW51~91h4hZTcQkooVf5qJ5mcqOZAHvx3yXMAaVROKYx95P6NdD3yGudqN--nxOIfkHvLkL0CLso08TMs377DwZOOi3HIqBxjC1ljB37J3CgzTZ5nsoIQRnNaZSIYsP0fqfhMngBnr4CtvfcOQsxIAnFI5zVLh~u-BwjA57bLjZ13kno-raVpXfWehzqnb19suGFcNtMPNlr3OLjdXsyIN~ENqtf9X0YXOPSbAZHts38NAmmalMMpaV7Q__&Key-Pair-Id=K11RB80NFXU134", "language": null, "ext": "mp4", "protocol": "m3u8_native", "preference": null, "quality": 1, "has_drm": false, "vcodec": "none", "abr": 167.165, "asr": 48000, "acodec": "aac", "audio_ext": "mp4", "video_ext": "none", "vbr": 0, "tbr": 167.165, "resolution": "audio only", "aspect_ratio": null, "cookies": "nicosid=1742009383.1621927832; Domain=.nicovideo.jp; Path=/; Expires=2057369384; domand_bid=08e8eed99fdce20fed514af7ea5aca7af8ffe1374448ea2e1707cc45ff4a47eb; Domain=.nicovideo.jp; Path=/; Secure; 
Expires=1742095784", "http_headers": {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.115 Safari/537.36", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-us,en;q=0.5", "Sec-Fetch-Mode": "navigate"}, "format": "audio-aac-192kbps - audio only (Main Audio)"}, {"format_id": "video-h264-144p", "format_index": null, "url": "https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/media/video-h264-144p.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Expires=1742095784&Signature=GhbIqiHYiUEy0PMAHBw9cKaxXQghHk5hHvWVqL6uEpsBpVNYbJZpPYm-qCnZaxs4BlbgaEltx5V91rqXBgY8Dk-HMBamLD-DRTfjU1NvespeHgKkY9ROipxi-9UE2wK1FkqtYunJ4TcsyJhn4qL083Bi4-z-aqeqwLu5BIx0gBqAWIvacVbLz~CWRyFeHS7nmGViMVkxLopgLFX5ibj0Q2IGYshpbSrb2eOu9kYZM3jAiYJbk62~ZjRzYxFIn9TlK3z3xzbO7ZKmqHW9lazgBHS19zWRyJtBZuYXt8s4U7VsFKa9LlkbZ4LLCwB1DY-Zy4YVwPmbOp4Fp6CCVgCSqg__&Key-Pair-Id=K11RB80NFXU134", "manifest_url": "https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/variants/c30b4ebbcc83d787.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9kZWxpdmVyeS5kb21hbmQubmljb3ZpZGVvLmpwL2hsc2JpZC82NzAwNWU2NWY1NzA3NDIyZmUwYWNiMWYvcGxheWxpc3RzL3ZhcmlhbnRzL2MzMGI0ZWJiY2M4M2Q3ODcubTN1OFxcPyoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE3NDIwOTU3ODR9fX1dfQ__&Signature=Dr34UVdukroVxDFErXbaZNID34RlGQVQSZVm1~FyvUAvHwwPVFmXQziOqS0ftjqPS67nSbvFl-PRSXu3DQYjyFBjqPviTFW51~91h4hZTcQkooVf5qJ5mcqOZAHvx3yXMAaVROKYx95P6NdD3yGudqN--nxOIfkHvLkL0CLso08TMs377DwZOOi3HIqBxjC1ljB37J3CgzTZ5nsoIQRnNaZSIYsP0fqfhMngBnr4CtvfcOQsxIAnFI5zVLh~u-BwjA57bLjZ13kno-raVpXfWehzqnb19suGFcNtMPNlr3OLjdXsyIN~ENqtf9X0YXOPSbAZHts38NAmmalMMpaV7Q__&Key-Pair-Id=K11RB80NFXU134", "tbr": 153.553, "ext": "mp4", "fps": 23.976, "protocol": "m3u8_native", 
"preference": null, "quality": -1, "has_drm": false, "width": 256, "height": 144, "vcodec": "avc1.4d401e", "acodec": "none", "dynamic_range": "SDR", "video_ext": "mp4", "audio_ext": "none", "abr": 0, "vbr": 153.553, "resolution": "256x144", "aspect_ratio": 1.78, "cookies": "nicosid=1742009383.1621927832; Domain=.nicovideo.jp; Path=/; Expires=2057369384; domand_bid=08e8eed99fdce20fed514af7ea5aca7af8ffe1374448ea2e1707cc45ff4a47eb; Domain=.nicovideo.jp; Path=/; Secure; Expires=1742095784", "http_headers": {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.115 Safari/537.36", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-us,en;q=0.5", "Sec-Fetch-Mode": "navigate"}, "format": "video-h264-144p - 256x144"}, {"format_id": "video-h264-360p", "format_index": null, "url": "https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/media/video-h264-360p.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Expires=1742095784&Signature=O3puTCTyKzNR8bf5Jsy8KrYrHWyIyWMuFqSbllyGs~qUGraUVOPYaa2EBcJMXqn2UR1T8JFmD2S7Tt5E8iuG0LA9bkXTjwxpirpzQFdip82BoQ~RqK5sHCJlrzxE0fbsV7gasnvmLXGSZGOAMdmujJD1wiqVvjW~NQb7yeVq1JXhlSjxzagCIWG3mKXNxBgO3gkaENhFfxietWRUPoVSHHaO4YYOcUXV7Lx54H-PjkoX~9Cj7x6WblVgHYox5gSIhjpQgAA2l49~NyQkm-lt-BUt9vrzUcDqu6aqPZo8zRxKiRPx1~sZkhG~Xc5-~TBxqTpXn58EczMbJOLUWv68zQ__&Key-Pair-Id=K11RB80NFXU134", "manifest_url": 
"https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/variants/c30b4ebbcc83d787.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9kZWxpdmVyeS5kb21hbmQubmljb3ZpZGVvLmpwL2hsc2JpZC82NzAwNWU2NWY1NzA3NDIyZmUwYWNiMWYvcGxheWxpc3RzL3ZhcmlhbnRzL2MzMGI0ZWJiY2M4M2Q3ODcubTN1OFxcPyoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE3NDIwOTU3ODR9fX1dfQ__&Signature=Dr34UVdukroVxDFErXbaZNID34RlGQVQSZVm1~FyvUAvHwwPVFmXQziOqS0ftjqPS67nSbvFl-PRSXu3DQYjyFBjqPviTFW51~91h4hZTcQkooVf5qJ5mcqOZAHvx3yXMAaVROKYx95P6NdD3yGudqN--nxOIfkHvLkL0CLso08TMs377DwZOOi3HIqBxjC1ljB37J3CgzTZ5nsoIQRnNaZSIYsP0fqfhMngBnr4CtvfcOQsxIAnFI5zVLh~u-BwjA57bLjZ13kno-raVpXfWehzqnb19suGFcNtMPNlr3OLjdXsyIN~ENqtf9X0YXOPSbAZHts38NAmmalMMpaV7Q__&Key-Pair-Id=K11RB80NFXU134", "tbr": 620.038, "ext": "mp4", "fps": 23.976, "protocol": "m3u8_native", "preference": null, "quality": 1, "has_drm": false, "width": 640, "height": 360, "vcodec": "avc1.4d401e", "acodec": "none", "dynamic_range": "SDR", "video_ext": "mp4", "audio_ext": "none", "abr": 0, "vbr": 620.038, "resolution": "640x360", "aspect_ratio": 1.78, "cookies": "nicosid=1742009383.1621927832; Domain=.nicovideo.jp; Path=/; Expires=2057369384; domand_bid=08e8eed99fdce20fed514af7ea5aca7af8ffe1374448ea2e1707cc45ff4a47eb; Domain=.nicovideo.jp; Path=/; Secure; Expires=1742095784", "http_headers": {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.115 Safari/537.36", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-us,en;q=0.5", "Sec-Fetch-Mode": "navigate"}, "format": "video-h264-360p - 640x360"}, {"format_id": "video-h264-480p", "format_index": null, "url": 
"https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/media/video-h264-480p.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Expires=1742095784&Signature=gGj0n0Ypcelv-mUa5A25hGwvBUqRIY1820T4pkHHymWWcVPGKO4lk0B5iljBIZaiQWlXbNapICmHgUM9J5MsIe7VhgnRfCjU2qOxsuSFsa3Q-C7xmEPQtmPq23cIdtXsiBq3aR-VmjwfN4LBA5N9O~Eg4v-NX6hBo0QymCwPESEsfgNHe1OgkD6uuISagCMo4MC92qEMwzf6xkavzs32b6rUhHa9xdf634WOISaJAgJOcbJPjM0myBGzfwTgdttV23ylU19JY2WYibcYT0dnNJR1vYIpOTnthq~3wFaTUfDqoyzNIX3CWDdOtrL4Rp0eetY2y30ALecsMGJemdbaLA__&Key-Pair-Id=K11RB80NFXU134", "manifest_url": "https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/variants/c30b4ebbcc83d787.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9kZWxpdmVyeS5kb21hbmQubmljb3ZpZGVvLmpwL2hsc2JpZC82NzAwNWU2NWY1NzA3NDIyZmUwYWNiMWYvcGxheWxpc3RzL3ZhcmlhbnRzL2MzMGI0ZWJiY2M4M2Q3ODcubTN1OFxcPyoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE3NDIwOTU3ODR9fX1dfQ__&Signature=Dr34UVdukroVxDFErXbaZNID34RlGQVQSZVm1~FyvUAvHwwPVFmXQziOqS0ftjqPS67nSbvFl-PRSXu3DQYjyFBjqPviTFW51~91h4hZTcQkooVf5qJ5mcqOZAHvx3yXMAaVROKYx95P6NdD3yGudqN--nxOIfkHvLkL0CLso08TMs377DwZOOi3HIqBxjC1ljB37J3CgzTZ5nsoIQRnNaZSIYsP0fqfhMngBnr4CtvfcOQsxIAnFI5zVLh~u-BwjA57bLjZ13kno-raVpXfWehzqnb19suGFcNtMPNlr3OLjdXsyIN~ENqtf9X0YXOPSbAZHts38NAmmalMMpaV7Q__&Key-Pair-Id=K11RB80NFXU134", "tbr": 1626.3880000000001, "ext": "mp4", "fps": 23.976, "protocol": "m3u8_native", "preference": null, "quality": 2, "has_drm": false, "width": 854, "height": 480, "vcodec": "avc1.4d4020", "acodec": "none", "dynamic_range": "SDR", "video_ext": "mp4", "audio_ext": "none", "abr": 0, "vbr": 1626.3880000000001, "resolution": "854x480", "aspect_ratio": 1.78, "cookies": "nicosid=1742009383.1621927832; Domain=.nicovideo.jp; Path=/; Expires=2057369384; 
domand_bid=08e8eed99fdce20fed514af7ea5aca7af8ffe1374448ea2e1707cc45ff4a47eb; Domain=.nicovideo.jp; Path=/; Secure; Expires=1742095784", "http_headers": {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.115 Safari/537.36", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-us,en;q=0.5", "Sec-Fetch-Mode": "navigate"}, "format": "video-h264-480p - 854x480"}, {"format_id": "video-h264-720p", "format_index": null, "url": "https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/media/video-h264-720p.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Expires=1742095784&Signature=xD-RdnCt-PlYvkQRGTCZdc~PORKtYkuv9QcpfoAI9z9pnzytcp4m57KgTpAFIfC6NtA5NJxwfQ3HWXJhD8wwQlH7lZRIvCbzV2bM53KS9FnAnCsaIKfdpej-7h7lGl6inEkg2qbOFZg2I6LW-SB6KYL5gyK-RXvDsvdVbfR~sGgFNtj6cj76PMHm6boAJQEjdJP1BjtVQOfc8vpru90XCFwen0-TS~w1BFL9s23vQjhX8Lz9Wsab2gBIA-6FF8s8gnyDk7tIx-Eh1ghQYeNW1ZwcL66oY0wcrJi83Xsv7f5eslOMOBDD~9nsj0Tq~FpW6z1sFny0wsgGUg09-O10ZQ__&Key-Pair-Id=K11RB80NFXU134", "manifest_url": 
"https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/variants/c30b4ebbcc83d787.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9kZWxpdmVyeS5kb21hbmQubmljb3ZpZGVvLmpwL2hsc2JpZC82NzAwNWU2NWY1NzA3NDIyZmUwYWNiMWYvcGxheWxpc3RzL3ZhcmlhbnRzL2MzMGI0ZWJiY2M4M2Q3ODcubTN1OFxcPyoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE3NDIwOTU3ODR9fX1dfQ__&Signature=Dr34UVdukroVxDFErXbaZNID34RlGQVQSZVm1~FyvUAvHwwPVFmXQziOqS0ftjqPS67nSbvFl-PRSXu3DQYjyFBjqPviTFW51~91h4hZTcQkooVf5qJ5mcqOZAHvx3yXMAaVROKYx95P6NdD3yGudqN--nxOIfkHvLkL0CLso08TMs377DwZOOi3HIqBxjC1ljB37J3CgzTZ5nsoIQRnNaZSIYsP0fqfhMngBnr4CtvfcOQsxIAnFI5zVLh~u-BwjA57bLjZ13kno-raVpXfWehzqnb19suGFcNtMPNlr3OLjdXsyIN~ENqtf9X0YXOPSbAZHts38NAmmalMMpaV7Q__&Key-Pair-Id=K11RB80NFXU134", "tbr": 2060.861, "ext": "mp4", "fps": 23.976, "protocol": "m3u8_native", "preference": null, "quality": 3, "has_drm": false, "width": 1280, "height": 720, "vcodec": "avc1.4d4020", "acodec": "none", "dynamic_range": "SDR", "video_ext": "mp4", "audio_ext": "none", "abr": 0, "vbr": 2060.861, "resolution": "1280x720", "aspect_ratio": 1.78, "cookies": "nicosid=1742009383.1621927832; Domain=.nicovideo.jp; Path=/; Expires=2057369384; domand_bid=08e8eed99fdce20fed514af7ea5aca7af8ffe1374448ea2e1707cc45ff4a47eb; Domain=.nicovideo.jp; Path=/; Secure; Expires=1742095784", "http_headers": {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.115 Safari/537.36", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-us,en;q=0.5", "Sec-Fetch-Mode": "navigate"}, "format": "video-h264-720p - 1280x720"}], "availability": null, "thumbnails": [{"id": "url", "url": "https://nicovideo.cdn.nimg.jp/thumbnails/44179023/44179023.52467209", "ext": "jpg", "preference": 0}, {"id": "middleUrl", "url": 
"https://nicovideo.cdn.nimg.jp/thumbnails/44179023/44179023.52467209.M", "ext": "jpg", "preference": 1}, {"id": "largeUrl", "url": "https://nicovideo.cdn.nimg.jp/thumbnails/44179023/44179023.52467209.L", "ext": "jpg", "preference": 2}, {"id": "player", "url": "https://img.cdn.nimg.jp/s/nicovideo/thumbnails/44179023/44179023.52467209.original/a960x540l?key=dc5b718dd1829915fc0d2b0a955ceefb4a25e521c84e6a38b6526c3d31a42a33", "ext": "jpg", "preference": 3, "width": 960, "height": 540, "resolution": "960x540"}, {"id": "ogp", "url": "https://img.cdn.nimg.jp/s/nicovideo/thumbnails/44179023/44179023.52467209.original/r1280x720l?key=c0260bc50c28d6e68aae777dc5d0c3b3807f39f070c9ad01549080dcfa93ff9f", "ext": "jpg", "preference": 4, "width": 1280, "height": 720, "resolution": "1280x720"}], "description": "\u300c\u4ed6\u306e\u6599\u7406\u52d5\u753b\u300dmylist/58245198\n\n-----\u3053\u3061\u3089\u3082\u3088\u308d\u3057\u304f\u304a\u9858\u3044\u3057\u307e\u3059-----\n\u25c6\u30c4\u30a4\u30c3\u30bf\u30fc https://twitter.com/IowlQ\n\u25c6YouTube https://t.co/TFNhE3XaHe?amp=1\n\u25c6\u30a4\u30f3\u30b9\u30bf https://t.co/B6AAm09HD2", "uploader": "\u30ed\u30a6\u30a2\u30a4\u30ad\u30e5\u30fc", "uploader_id": "64865991", "timestamp": 1728078906, "channel": null, "channel_id": null, "view_count": 109735, "tags": [], "genre": "\u6599\u7406", "comment_count": 0, "duration": 502.0, "webpage_url": "https://www.nicovideo.jp/watch/sm44179023", "subtitles": {"comments": [{"ext": "json", "data": "[]"}]}, "original_url": "https://www.nicovideo.jp/watch/sm44179023", "webpage_url_basename": "sm44179023", "webpage_url_domain": "nicovideo.jp", "extractor": "niconico", "extractor_key": "Niconico", "playlist": null, "playlist_index": null, "thumbnail": "https://img.cdn.nimg.jp/s/nicovideo/thumbnails/44179023/44179023.52467209.original/r1280x720l?key=c0260bc50c28d6e68aae777dc5d0c3b3807f39f070c9ad01549080dcfa93ff9f", "display_id": "sm44179023", "fulltitle": 
"\u3010\u8d85\u89e3\u8aac\u3011\u30b5\u30a4\u30bc\u306e\u30c9\u30ea\u30f3\u30af\u30d0\u30fc\u5168\u90e8\u3076\u3063\u8fbc\u307f\u30a2\u30a4\u30b9\u306e\u4f5c\u308a\u65b9", "duration_string": "8:22", "upload_date": "20241004", "release_year": null, "genres": ["\u6599\u7406"], "requested_subtitles": {"comments": {"ext": "json", "data": "[]"}}, "_has_drm": null, "epoch": 1742009385, "requested_formats": [{"format_id": "video-h264-720p", "format_index": null, "url": "https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/media/video-h264-720p.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Expires=1742095784&Signature=xD-RdnCt-PlYvkQRGTCZdc~PORKtYkuv9QcpfoAI9z9pnzytcp4m57KgTpAFIfC6NtA5NJxwfQ3HWXJhD8wwQlH7lZRIvCbzV2bM53KS9FnAnCsaIKfdpej-7h7lGl6inEkg2qbOFZg2I6LW-SB6KYL5gyK-RXvDsvdVbfR~sGgFNtj6cj76PMHm6boAJQEjdJP1BjtVQOfc8vpru90XCFwen0-TS~w1BFL9s23vQjhX8Lz9Wsab2gBIA-6FF8s8gnyDk7tIx-Eh1ghQYeNW1ZwcL66oY0wcrJi83Xsv7f5eslOMOBDD~9nsj0Tq~FpW6z1sFny0wsgGUg09-O10ZQ__&Key-Pair-Id=K11RB80NFXU134", "manifest_url": "https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/variants/c30b4ebbcc83d787.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9kZWxpdmVyeS5kb21hbmQubmljb3ZpZGVvLmpwL2hsc2JpZC82NzAwNWU2NWY1NzA3NDIyZmUwYWNiMWYvcGxheWxpc3RzL3ZhcmlhbnRzL2MzMGI0ZWJiY2M4M2Q3ODcubTN1OFxcPyoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE3NDIwOTU3ODR9fX1dfQ__&Signature=Dr34UVdukroVxDFErXbaZNID34RlGQVQSZVm1~FyvUAvHwwPVFmXQziOqS0ftjqPS67nSbvFl-PRSXu3DQYjyFBjqPviTFW51~91h4hZTcQkooVf5qJ5mcqOZAHvx3yXMAaVROKYx95P6NdD3yGudqN--nxOIfkHvLkL0CLso08TMs377DwZOOi3HIqBxjC1ljB37J3CgzTZ5nsoIQRnNaZSIYsP0fqfhMngBnr4CtvfcOQsxIAnFI5zVLh~u-BwjA57bLjZ13kno-raVpXfWehzqnb19suGFcNtMPNlr3OLjdXsyIN~ENqtf9X0YXOPSbAZHts38NAmmalMMpaV7Q__&Key-Pair-Id=K11RB80NFXU134", "tbr": 2060.861, "ext": "mp4", "fps": 23.976, 
"protocol": "m3u8_native", "preference": null, "quality": 3, "has_drm": false, "width": 1280, "height": 720, "vcodec": "avc1.4d4020", "acodec": "none", "dynamic_range": "SDR", "video_ext": "mp4", "audio_ext": "none", "abr": 0, "vbr": 2060.861, "resolution": "1280x720", "aspect_ratio": 1.78, "cookies": "nicosid=1742009383.1621927832; Domain=.nicovideo.jp; Path=/; Expires=2057369384; domand_bid=08e8eed99fdce20fed514af7ea5aca7af8ffe1374448ea2e1707cc45ff4a47eb; Domain=.nicovideo.jp; Path=/; Secure; Expires=1742095784", "http_headers": {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.115 Safari/537.36", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-us,en;q=0.5", "Sec-Fetch-Mode": "navigate"}, "format": "video-h264-720p - 1280x720"}, {"format_id": "audio-aac-192kbps", "format_note": "Main Audio", "format_index": null, "url": "https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/media/audio-aac-192kbps.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Expires=1742095784&Signature=fJabGphdRSuN1l8S6N27okVtNjqTHXJKY2SUtJ7K0A4m1u76KP01g~IUIfJ8BNw4RL2z8uik2YEvGsZfPpJmziSZv99Btc-Zk14q55kJ3hmV5gjQu8v7rY-XBvAyYprcMbMEKlTY~qYTjEfmNiWPSowfx44H3H4oMNJuxsiXhwrZVnmQlJTHXSHzr7asEdr8MVXTa1O-ojMkZmk~SzYmJhrMYhV4DC1Y-Rey5YXG71KtdVXshfVS3ECeCwNBUHS4VGc8mEaZmV3uHP9jUzjN86Eb9QOgTGEApvBofdR1-mkzgoKghn6s01H3xJb23-el10GsO8~CiE--6NQ9a4df-w__&Key-Pair-Id=K11RB80NFXU134", "manifest_url": 
"https://delivery.domand.nicovideo.jp/hlsbid/67005e65f5707422fe0acb1f/playlists/variants/c30b4ebbcc83d787.m3u8?session=fe22af498f05f007b134490d51d1a616c30b4ebbcc83d7870000000067d645a8fd7ab12b3baeef17&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9kZWxpdmVyeS5kb21hbmQubmljb3ZpZGVvLmpwL2hsc2JpZC82NzAwNWU2NWY1NzA3NDIyZmUwYWNiMWYvcGxheWxpc3RzL3ZhcmlhbnRzL2MzMGI0ZWJiY2M4M2Q3ODcubTN1OFxcPyoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE3NDIwOTU3ODR9fX1dfQ__&Signature=Dr34UVdukroVxDFErXbaZNID34RlGQVQSZVm1~FyvUAvHwwPVFmXQziOqS0ftjqPS67nSbvFl-PRSXu3DQYjyFBjqPviTFW51~91h4hZTcQkooVf5qJ5mcqOZAHvx3yXMAaVROKYx95P6NdD3yGudqN--nxOIfkHvLkL0CLso08TMs377DwZOOi3HIqBxjC1ljB37J3CgzTZ5nsoIQRnNaZSIYsP0fqfhMngBnr4CtvfcOQsxIAnFI5zVLh~u-BwjA57bLjZ13kno-raVpXfWehzqnb19suGFcNtMPNlr3OLjdXsyIN~ENqtf9X0YXOPSbAZHts38NAmmalMMpaV7Q__&Key-Pair-Id=K11RB80NFXU134", "language": null, "ext": "mp4", "protocol": "m3u8_native", "preference": null, "quality": 1, "has_drm": false, "vcodec": "none", "abr": 167.165, "asr": 48000, "acodec": "aac", "audio_ext": "mp4", "video_ext": "none", "vbr": 0, "tbr": 167.165, "resolution": "audio only", "aspect_ratio": null, "cookies": "nicosid=1742009383.1621927832; Domain=.nicovideo.jp; Path=/; Expires=2057369384; domand_bid=08e8eed99fdce20fed514af7ea5aca7af8ffe1374448ea2e1707cc45ff4a47eb; Domain=.nicovideo.jp; Path=/; Secure; Expires=1742095784", "http_headers": {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.115 Safari/537.36", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en-us,en;q=0.5", "Sec-Fetch-Mode": "navigate"}, "format": "audio-aac-192kbps - audio only (Main Audio)"}], "format": "video-h264-720p - 1280x720+audio-aac-192kbps - audio only (Main Audio)", "format_id": "video-h264-720p+audio-aac-192kbps", "ext": "mp4", "protocol": "m3u8_native+m3u8_native", "language": null, "format_note": "Main Audio", "filesize_approx": null, 
"tbr": 2228.026, "width": 1280, "height": 720, "resolution": "1280x720", "fps": 23.976, "dynamic_range": "SDR", "vcodec": "avc1.4d4020", "vbr": 2060.861, "stretched_ratio": null, "aspect_ratio": 1.78, "acodec": "aac", "abr": 167.165, "asr": 48000, "audio_channels": null, "_filename": "\u3010\u8d85\u89e3\u8aac\u3011\u30b5\u30a4\u30bc\u306e\u30c9\u30ea\u30f3\u30af\u30d0\u30fc\u5168\u90e8\u3076\u3063\u8fbc\u307f\u30a2\u30a4\u30b9\u306e\u4f5c\u308a\u65b9 [sm44179023].mp4", "filename": "\u3010\u8d85\u89e3\u8aac\u3011\u30b5\u30a4\u30bc\u306e\u30c9\u30ea\u30f3\u30af\u30d0\u30fc\u5168\u90e8\u3076\u3063\u8fbc\u307f\u30a2\u30a4\u30b9\u306e\u4f5c\u308a\u65b9 [sm44179023].mp4", "_type": "video", "_version": {"version": "2025.03.13.232844", "current_git_head": null, "release_git_head": "4432a9390c79253ac830702b226d2e558b636725", "repository": "yt-dlp/yt-dlp-nightly-builds"}} ```
closed
2025-03-15T03:40:31Z
2025-03-15T08:57:39Z
https://github.com/yt-dlp/yt-dlp/issues/12617
[ "question" ]
lkdjfalnnlvz
4
plotly/dash
plotly
2,738
Send categorical color data on click/hover/select
Hi there, I just discovered Dash recently and while making my first app I noticed that the data from mouse actions used in callbacks doesn't contain the color data of the active point. I tried this for bar plots and scatter plots: when I use `clickData` in a callback, for example, I get the point's color data when it's numerical but not when it's categorical. The solution here would be to add a field to the JSON data sent to the callback, ideally under the same `marker.color` name used for numerical data, to ensure consistency. I thought about making a mapping from my categorical data to a numeric one. The problem with this approach is that it implies an ordering between the categorical values, which is not always desirable. Have a great day.
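Until such a field exists, a common workaround (names below are illustrative, not from the issue) is to attach the category to each point via `customdata`, e.g. `px.scatter(df, ..., custom_data=["species"])`, since Plotly echoes it back inside the `clickData` payload. A minimal sketch of unpacking that payload:

```python
# Shape of a hypothetical clickData payload as a Dash callback receives it;
# "customdata" carries whatever was attached to the trace at build time.
click_data = {
    "points": [
        {"x": 5.1, "y": 3.5, "curveNumber": 0, "customdata": ["setosa"]},
    ]
}

def category_of(click_data):
    # The categorical value rides along in customdata instead of marker.color.
    return click_data["points"][0]["customdata"][0]

print(category_of(click_data))  # setosa
```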
open
2024-01-31T15:45:22Z
2024-08-13T19:45:31Z
https://github.com/plotly/dash/issues/2738
[ "feature", "P2" ]
Voltini
4
dynaconf/dynaconf
fastapi
247
[RFC] key restrictions for configuration files in certain positions
In insights-QE we have a setup where we have a number of plug-ins, and each plug-in has its own config file with the default configuration for that plug-in. Unfortunately it regularly happens that someone accidentally commits global configuration as part of the plugin configuration (as setting it works even if it is the wrong thing). In order to alleviate that we want to warn about sections in plug-in configuration files which do not belong, and optionally also ignore those on CI builds.

Ideally Dynaconf would enable us to configure load locations so they would include valid "toplevel keys".

In a proof of concept I override the built-in `yaml_loader.load` function. Untested excerpt:

```python
WHITELIST_EP_NAMES = {"_internal_envs"}
CONFIGPATH_TO_EP = {}


def load(obj, env=None, silent=True, key=None, filename=None):
    """
    Reads and loads in to "obj" a single key or all keys from source file.

    :param obj: the settings instance
    :param env: settings current env default='development'
    :param silent: if errors should raise
    :param key: if defined load a single key, else load all in env
    :param filename: Optional custom filename to load
    :return: None
    """

    def yaml_reader(fp):
        path = Path(fp.name)
        try:
            ep = CONFIGPATH_TO_EP[path]
        except KeyError:
            # XXX: log?
            return yaml.full_load(fp)
        else:
            return _check_yaml_config_global_keys(path, ep)

    loader = BaseLoader(
        obj=obj,
        env=env,
        identifier="yaml",
        extensions=YAML_EXTENSIONS,
        file_reader=yaml_reader,
        string_reader=yaml_reader,
    )
    loader.load(filename=filename, key=key, silent=silent)


yaml_loader.load = load  # xxx HACK


def _check_yaml_config_global_keys(path, ep):
    if ep.name in WHITELIST_EP_NAMES:
        return
    with path.open("r") as fp:
        data = yaml.full_load(fp)
    if data is None:
        return
    drop = []
    for env, content in sorted(data.items()):
        for key in content:
            if key.lower() != ep.name.lower():
                MissplacedConfigurationSectionWarning.trigger(ep, env, key, path)
                drop.append((env, key))
    # we dont drop until we fix the code
    # for env, key in drop:
    #     del data[env][key]
    return data


for ep in sorted(set(entrypoint.maybe_group("_internal_.application_plugins"))):
    try:
        package_name = ep.value.split(":")[0]
        package_config_files = _glob_in_package(package_name, "conf", "settings.*.yaml")
        for package_config_file in package_config_files:
            if package_config_file.name == "settings.local.yaml":
                pass  # true custom config is allowed to override
            CONFIGPATH_TO_EP[package_config_file] = ep
        conf_files.extend(package_config_files)
        data_files.extend(_glob_in_package(package_name, "data", "*_data.yaml"))
    except Exception as e:
        error_print(f"Loading plugin settings [{ep.name}]: {str(e)}")
        raise

conf_files.sort(key=_configfile_name_sort_key)


def _make_global_configuration(env=None):
    env = env or os.environ.get("ENV_FOR_DYNACONF") or "qa"
    return Settings(
        SETTINGS_FILE_FOR_DYNACONF=",".join(map(str, conf_files)),
        MERGE_ENABLED_FOR_DYNACONF=True,
        ENV_FOR_DYNACONF=env,
    )
    return conf
```
closed
2019-10-15T05:13:21Z
2022-07-02T20:12:19Z
https://github.com/dynaconf/dynaconf/issues/247
[ "wontfix", "hacktoberfest", "Not a Bug", "RFC" ]
RonnyPfannschmidt
2
scikit-multilearn/scikit-multilearn
scikit-learn
266
Some tests are failing
open
2023-03-14T15:02:36Z
2023-03-14T15:02:36Z
https://github.com/scikit-multilearn/scikit-multilearn/issues/266
[ "help wanted" ]
ChristianSch
0
healthchecks/healthchecks
django
435
Use Microsoft Teams Card Title instead of Text
Currently the integration uses the "text" attribute of a MS Teams Card to notify the up/down status of checks. ![image](https://user-images.githubusercontent.com/70552043/94623561-9c72a680-02f3-11eb-8d27-2e38369822ba.png) This makes notifications unhelpful: ![image](https://user-images.githubusercontent.com/70552043/94623703-fa9f8980-02f3-11eb-8a0a-8237d50ca244.png) If you instead use the title field of the Card (instead of text) you should get a better result.
closed
2020-09-29T22:42:51Z
2020-10-06T13:44:10Z
https://github.com/healthchecks/healthchecks/issues/435
[]
andrewbrock-sahmri
2
skypilot-org/skypilot
data-science
4,051
`sky down -a` with GCP clusters sometimes crashes with TimeoutError
<!-- Describe the bug report / feature request here -->

`sky down -a`:

```
...
  File "/Users/zongheng//sky/cli.py", line 2890, in _down_or_stop_clusters
    subprocess_utils.run_in_parallel(_down_or_stop, clusters)
  File "/Users/zongheng//sky/utils/subprocess_utils.py", line 65, in run_in_parallel
    return list(p.imap(func, args))
  File "/Users/zongheng/anaconda/envs//lib/python3.10/multiprocessing/pool.py", line 873, in next
    raise value
  File "/Users/zongheng/anaconda/envs//lib/python3.10/multiprocessing/pool.py", line 125, in worker
    result = (True, func(*args, **kwds))
  File "/Users/zongheng//sky/cli.py", line 2862, in _down_or_stop
    core.down(name, purge=purge)
  File "/Users/zongheng//sky/utils/common_utils.py", line 388, in _record
    return f(*args, **kwargs)
  File "/Users/zongheng//sky/core.py", line 406, in down
    backend.teardown(handle, terminate=True, purge=purge)
  File "/Users/zongheng//sky/utils/common_utils.py", line 388, in _record
    return f(*args, **kwargs)
  File "/Users/zongheng//sky/utils/common_utils.py", line 367, in _record
    return f(*args, **kwargs)
  File "/Users/zongheng//sky/backends/backend.py", line 121, in teardown
    self._teardown(handle, terminate, purge)
  File "/Users/zongheng//sky/backends/cloud_vm_ray_backend.py", line 3579, in _teardown
    self.teardown_no_lock(
  File "/Users/zongheng//sky/backends/cloud_vm_ray_backend.py", line 3917, in teardown_no_lock
    provisioner.teardown_cluster(repr(cloud),
  File "/Users/zongheng//sky/provision/provisioner.py", line 206, in teardown_cluster
    provision.terminate_instances(cloud_name, cluster_name.name_on_cloud,
  File "/Users/zongheng//sky/provision/__init__.py", line 47, in _wrapper
    return impl(*args, **kwargs)
  File "/Users/zongheng//sky/provision/gcp/instance.py", line 556, in terminate_instances
    handler.terminate(project_id, zone, instance))
  File "/Users/zongheng//sky/provision/gcp/instance_utils.py", line 358, in terminate
    ).execute()
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper
    return wrapped(*args, **kwargs)
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/googleapiclient/http.py", line 923, in execute
    resp, content = _retry_request(
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/googleapiclient/http.py", line 222, in _retry_request
    raise exception
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/googleapiclient/http.py", line 191, in _retry_request
    resp, content = http.request(uri, method, *args, **kwargs)
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/google_auth_httplib2.py", line 209, in request
    self.credentials.before_request(self._request, method, uri, request_headers)
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/google/auth/credentials.py", line 228, in before_request
    self._blocking_refresh(request)
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/google/auth/credentials.py", line 191, in _blocking_refresh
    self.refresh(request)
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/google/oauth2/credentials.py", line 431, in refresh
    ) = reauth.refresh_grant(
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/google/oauth2/reauth.py", line 333, in refresh_grant
    response_status_ok, response_data, retryable_error = _client._token_endpoint_request_no_throw(
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/google/oauth2/_client.py", line 191, in _token_endpoint_request_no_throw
    response = request(
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/google_auth_httplib2.py", line 119, in __call__
    response, data = self.http.request(
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/httplib2/__init__.py", line 1724, in request
    (response, content) = self._request(
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/httplib2/__init__.py", line 1444, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/httplib2/__init__.py", line 1366, in _conn_request
    conn.connect()
  File "/Users/zongheng/anaconda/envs//lib/python3.10/site-packages/httplib2/__init__.py", line 1156, in connect
    sock.connect((self.host, self.port))
TimeoutError: timed out
```

It may be a cluster that's in the process of being autodown'd or already autodown'd. We should be robust against such cases and not crash.

<!-- If relevant, fill in versioning info to help us troubleshoot -->

_Version & Commit info:_
* `sky -v`: PLEASE_FILL_IN
* `sky -c`: 836c5cde2
open
2024-10-08T22:17:42Z
2024-12-19T23:08:56Z
https://github.com/skypilot-org/skypilot/issues/4051
[]
concretevitamin
0
gunthercox/ChatterBot
machine-learning
1,677
[Django][Chatterbot]Two identical strings did not match while using SpecificResponseAdapter
Hello, I've tried to add SpecificResponseAdapter to the Django example app and I've encountered the following problem: the string from the statement did not match the one from input_text; basically, two identical strings did not match. This line of code generated the problem https://github.com/gunthercox/ChatterBot/blob/9bd6e57347d2efc1e4b691a147db6fca8bde9851/chatterbot/logic/specific_response.py#L25 Changing it like this fixed it for me `if statement.text == self.input_text:` Unfortunately I couldn't understand why this happened, since the `__str__` method already returns `self.text`. Any idea?
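One plausible explanation (an assumption; the class below mirrors, but is not, ChatterBot's `Statement`): `__str__` only affects explicit string conversion, while `==` between an object and a `str` falls back to identity comparison unless `__eq__` is defined, so `statement == self.input_text` can be `False` even when the texts are identical:

```python
class Statement:
    """Minimal stand-in for chatterbot's Statement (hypothetical)."""
    def __init__(self, text):
        self.text = text

    def __str__(self):
        return self.text

s = Statement("hello")
print(s == "hello")       # False: default __eq__ compares identity, not text
print(str(s) == "hello")  # True: __str__ only kicks in on explicit conversion
print(s.text == "hello")  # True: the fix from the issue compares the raw text
```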
open
2019-03-24T19:51:13Z
2019-03-30T17:19:27Z
https://github.com/gunthercox/ChatterBot/issues/1677
[ "possible bug" ]
robertcrasmaru
0
gunthercox/ChatterBot
machine-learning
2,054
dedicated bot for each client
I'm trying to create a dedicated bot for each of my clients, meaning: each client on my website will have their own instance of ChatterBot that was trained on different data. The only solution that I came up with was to save a separate database file for each of them, but that seems to me the wrong way to do so. Is there a better way to load different chatbots? Thanks
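One way to keep per-client databases manageable (a sketch with hypothetical names; `make_bot` stands in for constructing `chatterbot.ChatBot` with a per-client `database_uri`) is a small lazy factory, so each client's bot and storage file are only created on first use:

```python
_bots = {}

def make_bot(client_id):
    # Stand-in for: ChatBot(client_id, database_uri=f"sqlite:///db/{client_id}.sqlite3")
    return {"name": client_id, "database_uri": f"sqlite:///db/{client_id}.sqlite3"}

def bot_for(client_id):
    # Create each client's bot once, then reuse it for subsequent requests.
    if client_id not in _bots:
        _bots[client_id] = make_bot(client_id)
    return _bots[client_id]

print(bot_for("acme")["database_uri"])  # sqlite:///db/acme.sqlite3
```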
closed
2020-10-06T12:02:20Z
2023-07-21T10:44:15Z
https://github.com/gunthercox/ChatterBot/issues/2054
[]
adamcohenhillel
1
Asabeneh/30-Days-Of-Python
matplotlib
260
nested if issue
I am a beginner who is still learning how to use Python syntax. I tried to write code using if conditions as an exercise, but I had an issue. The idea of the code is to write a program that allows the user to enter his country and the course he wants to apply for, then the program will give him a discount based on this information. Issue: every time I write ksa as the input it gives me a discount for Canada and I don't know why. ![if1](https://user-images.githubusercontent.com/105638271/180653345-085e2298-5864-43e5-afd4-9b374e5dbde3.PNG) ![if2](https://user-images.githubusercontent.com/105638271/180653349-ed982c9e-e906-4235-9e51-7dad85b05898.PNG)
closed
2022-07-24T15:06:34Z
2022-07-24T16:48:44Z
https://github.com/Asabeneh/30-Days-Of-Python/issues/260
[]
YouseefSaad
0
WZMIAOMIAO/deep-learning-for-image-processing
deep-learning
738
Error when running hrnet on multiple GPUs
This error appeared right after the first training epoch finished. Could you tell me what the problem is?

```
IoU metric: keypoints
 Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.358
 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.679
 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.335
 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.364
 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.361
 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.451
 Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.755
 Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.457
 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.426
 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.487
```

```
Traceback (most recent call last):
  File "/media/data1/sangrg/HRNet/train_multi_GPU.py", line 272, in <module>
    main(args)
  File "/media/data1/sangrg/HRNet/train_multi_GPU.py", line 168, in main
    if args.rank in [-1, 0]:
AttributeError: 'Namespace' object has no attribute 'rank'
```
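For what it's worth, the traceback says the `argparse` namespace never got a `rank` attribute on this code path; a defensive fallback (a sketch, not the repository's actual fix) is to read the `RANK` environment variable that `torch.distributed` launchers set, defaulting to -1 for a non-distributed run:

```python
import argparse
import os

parser = argparse.ArgumentParser()
# ...the training script's real arguments would be declared here...
args = parser.parse_args([])

# init_distributed_mode usually sets args.rank; if it never ran (e.g. the
# env vars were missing), fall back to RANK or mark the run non-distributed.
if not hasattr(args, "rank"):
    args.rank = int(os.environ.get("RANK", -1))

print(isinstance(args.rank, int))  # True
```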
open
2023-05-22T08:44:30Z
2024-10-27T00:10:06Z
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/738
[]
srg1234567
1
akfamily/akshare
data-science
5,917
AKShare interface problem report: stock_info_a_code_name()
> Welcome to join the 【数据科学实战】 (Data Science in Practice) knowledge community focused on financial data and quantitative research.
> You will get the "AKShare Financial Data Handbook", which is updated in sync with AKShare,
> collects practical experience and guides on financial data, and shares many domestic and international financial data sources.
> Welcome to join us, discuss financial data questions, and explore the world of quantitative research!
> Details: https://akshare.akfamily.xyz/learn.html

## Important prerequisite

Before reporting any AKShare usage problem, please first upgrade your local AKShare to the **latest version** with the following command:

```
pip install akshare --upgrade  # Python 3.9 or later is required
```

## How to submit an issue

Please provide the following information so the problem can be solved more precisely. **Issues that do not follow the submission guidelines will be closed!**

> Since maintaining an open-source project takes considerable effort, this issue tracker only accepts interface error reports. For questions about financial data, please ask in the 【数据科学实战】 knowledge community. Thanks for your support!

**Detailed problem description**

1. First read the usage instructions for the corresponding interface in the AKShare documentation: https://akshare.akfamily.xyz
2. Check your operating system version; only 64-bit mainstream operating systems are supported
3. Check your Python version; only versions 3.9 and above are supported
4. Confirm your AKShare version and reproduce the problem on the latest release
5. Provide the name of the interface and the corresponding calling code
6. A screenshot or description of the interface error
7. The expected correct result

Python version: 3.11
AKShare version: 1.16.45

The information this interface reads from the Shanghai Stock Exchange is: col_stock_code: "证券代码" (security code), "COMPANY_ABBR": "证券简称" (security short name), "FULL_NAME": "公司全称" (full company name), "LIST_DATE": "上市日期" (listing date). However, the SSE has not updated the security short name for some stocks, for example: "COMPANY_ABBR_EN": "CTCG", "STOCK_TYPE": "1", "LIST_BOARD": "1", "COMPANY_ABBR": "国旅联合", "A_STOCK_CODE": "600358", "AREA_NAME": "360000", "DELIST_DATE": "-", "NUM": "1", "SEC_NAME_CN": "ST联合", "AREA_NAME_DESC": "江西省", "FULL_NAME_IN_ENGLISH": "CHINA TOURISM AND CULTURE INVESTMENT GROUP CO.,LTD", "SEC_NAME_FULL": "ST联合", "STATE_CODE": "2", "B_STOCK_CODE": "-", "STATE_CODE_STOCK": "4", "LIST_DATE": "20000922", "CSRC_CODE": "L", "PRODUCT_STATUS": " S F N ", "CSRC_CODE_DESC": "租赁和商务服务业", "COMPANY_CODE": "600358", "FULL_NAME": "国旅文化投资集团股份有限公司". The interface returns the name 国旅联合, but it has actually been changed to ST联合. Could the full security name be added to the returned data?
closed
2025-03-18T03:15:04Z
2025-03-18T06:59:11Z
https://github.com/akfamily/akshare/issues/5917
[ "bug" ]
AccommA
1
tflearn/tflearn
data-science
761
tf.device is not working on tflearn.DNN model
Hi, I have used tflearn for quick prototyping, and I found that although I set the device to CPU for parallel running, the models were still using GPU memory. I captured part of my code here:

```python
with tf.device('/cpu:0'):
    # Network building
    net = tflearn.input_data([None, FLAGS.input_length, len(FLAGS.select_list.split(","))])
    net = tflearn.lstm(net, len(FLAGS.select_list.split(",")) * FLAGS.lstm_num)
    net = tflearn.fully_connected(net, len(FLAGS.select_list.split(",")) * FLAGS.output_length)
    net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, metric='R2', loss='mean_square')

    # Training
    model = tflearn.DNN(net, tensorboard_verbose=0)
```

Thanks.
closed
2017-05-17T03:08:52Z
2017-10-18T00:04:04Z
https://github.com/tflearn/tflearn/issues/761
[]
jsikyoon
3
ultralytics/yolov5
pytorch
12,502
Modifying Anchor Box Representation to XYWH in YOLOv5
Hello YOLOv5 Community, I am working on integrating a new label assignment technique into YOLOv5. For this purpose, I require the anchor boxes to be represented in the XYWH format. However, it appears that in the YOLOv5 implementation, anchor boxes are primarily represented using only width and height (WH). Could you please guide me on where and how I can modify the code to use XYWH representation for anchor boxes? Specifically, I'm looking at the following section of the code where the anchor boxes are involved: https://github.com/ultralytics/yolov5/blob/f400bba7836c1dbc2771db251984f20009b5fa81/utils/loss.py#L177 Any pointers or examples on how to achieve this modification would be greatly appreciated. Thank you in advance for your assistance!
closed
2023-12-13T14:31:38Z
2024-10-20T19:34:16Z
https://github.com/ultralytics/yolov5/issues/12502
[ "Stale" ]
sinanutkuulu
3
mithi/hexapod-robot-simulator
plotly
58
Improve code quality
# `hexapod.Linkage` - [ ] Format this better https://github.com/mithi/hexapod-robot-simulator/blob/35cac5c60dc93e8fd2442869097313d6612925f9/hexapod/linkage.py#L158 - [ ] Use a for loop instead https://github.com/mithi/hexapod-robot-simulator/blob/35cac5c60dc93e8fd2442869097313d6612925f9/hexapod/linkage.py#L142
closed
2020-04-17T18:21:27Z
2020-04-18T14:48:56Z
https://github.com/mithi/hexapod-robot-simulator/issues/58
[]
mithi
4
strawberry-graphql/strawberry
asyncio
3,788
OpenTelemetry Extension groups all resolver in one old span
<!-- Provide a general summary of the bug in the title above. --> <!--- This template is entirely optional and can be removed, but is here to help both you and us. --> <!--- Anything on lines wrapped in comments like these will not show up in the final text. -->

## Describe the Bug

With the OpenTelemetry extension enabled, we get distributed tracing telemetry with Azure (or another telemetry viewer), but it seems like all the resolvers use the same SPAN parent, created on the first operation run on the service. All subsequent queries will use the same span for the resolvers, but not for the Parsing and Validation lifecycle. ![Image](https://github.com/user-attachments/assets/908e9964-33c7-4d3b-a190-4c521de0c893) These "GraphQL Resolving: events" are all separate GraphQL queries made to the service; the timestamp of the "GraphQL Query: historicalData" server span is "07:40:00" and the last "GraphQL Resolving: events" is "08:01:28". There are also other spans for "GraphQL Parsing" and "Validation" that appear in the telemetry, but they are missing the resolvers. ![Image](https://github.com/user-attachments/assets/b273335d-98af-4a47-990d-6072fd1629d8) This trace is the "08:01:28" one that should include the last "GraphQL Resolving: events" of the previous screenshot.

## System Information

- Operating system: Linux
- Strawberry version (if applicable): 0.260.2

## Additional Context

https://github.com/strawberry-graphql/strawberry/blob/main/strawberry/extensions/tracing/opentelemetry.py#L172 It seems like on that line, `self._span_holder[LifecycleStep.OPERATION]` is always the same object: the one created during the first operation ever run on the service. I'm not too familiar with the details of Python's coroutines and async machinery, but it feels like the `_span_holder` used by the `async resolve` coroutine might be a memory copy of the initial invocation. Both the `on_parse` and `on_validate` methods seem to resolve the correct span, and these are not async.
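The suspected mechanism can be reproduced without Strawberry (a sketch, not the extension's actual code): a dict stored once and reused across operations behaves like `shared` below, where every later request keeps seeing the first request's value, whereas per-context state via `contextvars` stays isolated:

```python
import contextvars

shared = {}                                # one dict reused across requests
span_var = contextvars.ContextVar("span")  # per-context storage

def handle_shared(name):
    shared.setdefault("operation", name)   # only the FIRST request ever wins
    return shared["operation"]

def handle_ctx(name):
    span_var.set(name)
    return span_var.get()

print(handle_shared("req-1"))                                # req-1
print(handle_shared("req-2"))                                # still req-1
print(contextvars.copy_context().run(handle_ctx, "req-1"))   # req-1
print(contextvars.copy_context().run(handle_ctx, "req-2"))   # req-2
```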
open
2025-02-19T13:21:55Z
2025-03-24T17:54:41Z
https://github.com/strawberry-graphql/strawberry/issues/3788
[ "bug" ]
pgelinas
3
jadore801120/attention-is-all-you-need-pytorch
nlp
116
How is it possible to use the code for text generation, not translation
Hi, I realized in the readme you explained how to pass wmt data, preprocess that and run train.py. How is it possible to run train.py on a dataset with similar source and target language. source and target being in separate text file.
closed
2019-08-19T04:59:51Z
2019-12-08T09:53:05Z
https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/116
[]
fabrahman
2
BayesWitnesses/m2cgen
scikit-learn
353
Does m2cgen support CART algorithm?
Does m2cgen support CART (Classification and Regression Tree) algorithm?
closed
2021-03-03T07:53:46Z
2021-04-02T20:15:18Z
https://github.com/BayesWitnesses/m2cgen/issues/353
[]
dawangda
1
google-research/bert
nlp
1,305
Which datasets used to generate the models ?
Hi ! Thank you very much for this great repo and work. I would be interested in working with your **bert-tiny** model. For this work, it would be of importance to know what data exactly this model was obtained from. I could not find a concrete pointer. Would you be able to guide me to a specification of this information? Thank you a lot in advance and kind regards!
open
2022-05-03T18:12:46Z
2022-08-17T13:13:34Z
https://github.com/google-research/bert/issues/1305
[]
ycattan
1
encode/httpx
asyncio
2,992
Add option to disable cookie persistence in Client/AsyncClient
Initially raised as discussion #1533

I've been using `httpx`'s `AsyncClient` as a web spider, however I have noticed that cookies are automatically persisted and there's no easy way to disable this. My workaround has been to subclass Python's `http.cookiejar.CookieJar` like so:

```python
from http.cookiejar import CookieJar


class NullCookieJar(CookieJar):
    """A CookieJar that rejects all cookies."""

    def extract_cookies(self, *_):
        """For extracting and saving cookies. This implementation does nothing"""
        pass

    def set_cookie(self, _):
        """Normally for setting a cookie. This implementation does nothing"""
        pass
```

Would it be possible to:
1. add an option to disable cookie persistence to `Client`/`AsyncClient` or
2. include this implementation of `NullCookieJar` in `httpx` as a utility class?

Thanks!
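As a quick sanity check, the subclass really does stay empty no matter what is pushed at it (the commented client line is an assumption that httpx's `cookies=` parameter accepts any `CookieJar`):

```python
from http.cookiejar import CookieJar

class NullCookieJar(CookieJar):
    """A CookieJar that rejects all cookies."""
    def extract_cookies(self, *_):
        pass
    def set_cookie(self, _):
        pass

jar = NullCookieJar()
jar.set_cookie(object())  # would normally store a Cookie; here it's ignored
print(len(jar))           # 0

# usage sketch: client = httpx.AsyncClient(cookies=NullCookieJar())
```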
open
2023-12-08T01:05:50Z
2024-12-18T06:08:37Z
https://github.com/encode/httpx/issues/2992
[ "enhancement" ]
fastily
5
mouredev/Hello-Python
fastapi
3
Python
closed
2022-12-04T03:57:32Z
2024-02-22T11:02:56Z
https://github.com/mouredev/Hello-Python/issues/3
[]
mxsrxm
0
RobertCraigie/prisma-client-py
pydantic
83
Help with Azure Functions
Hey, thanks for the client. I am trying to use it with azure functions, but it is not finding the client. If I enter with ssh and run pip install and prisma generate, it works, but I am not being able to do so. Is there any way to handle this issue? Thanks!
closed
2021-10-27T17:20:26Z
2021-10-30T10:21:26Z
https://github.com/RobertCraigie/prisma-client-py/issues/83
[ "kind/question" ]
danielweil
2
plotly/dash
data-science
2,422
[BUG] Dash DCC.Dropdown using Dynamic Options, nested within dcc.Loading returns empty list
```
dash                         2.7.1
dash-ag-grid                 0.0.1
dash-bootstrap-components    1.3.0
dash-core-components         2.0.0
dash-html-components         2.0.0
dash-table                   5.0.0
```

- OS: Windows 11
- Browser: Edge - Version 110.0.1587.41

**Describe the bug** Using Dash dcc.Dropdown with dynamic options, nested within dcc.Loading, returns an empty list. Upon inspection the callback fires twice: once returning the options list for the dropdown as expected, and once returning an empty list even though PreventUpdate is used. This happens on any single key pressed in the search box of the dropdown. Removing the dcc.Dropdown from being nested in dcc.Loading resolves the issue. The code example used is exactly as in the [dash documentation](https://dash.plotly.com/dash-core-components/dropdown#dynamic-options), except for the dcc.Loading wrapper.

```python
dcc.Loading(
    [
        dcc.Dropdown(
            id="add_new_wave_modal_entity_sel_id",
        ),
    ]
),


@callback(
    Output("add_new_wave_modal_entity_sel_id", "options"),
    Input("add_new_wave_modal_entity_sel_id", "search_value"),
)
def update_options(search_value):
    if not search_value:
        raise PreventUpdate
    return [o for o in glb.wave_entity_signals_dd_options if search_value in o["label"]]
```

**Expected behavior** The dcc.Dropdown list will populate according to the typed filter on the fly. The spinner of the dcc.Loading will appear if the list takes time to appear.

**Screenshots** ![Screenshot 2023-02-13 020120](https://user-images.githubusercontent.com/7108242/218348954-51f12682-7e92-480d-a0cc-4411d1f4779c.jpg)
closed
2023-02-13T01:03:59Z
2024-03-11T16:38:19Z
https://github.com/plotly/dash/issues/2422
[]
urifeigin
1
explosion/spaCy
data-science
12,898
Sentence-terminal periods not tokenized properly in Malayalam text
## How to reproduce the behaviour

```python
import spacy

nlp = spacy.blank('ml')
doc = nlp('ഇന്ത്യയിൽ കേരള സംസ്ഥാനത്തിലും കേന്ദ്രഭരണപ്രദേശങ്ങളായ ലക്ഷദ്വീപിലും പോണ്ടിച്ചേരിയുടെ ഭാഗമായ മാഹിയിലും തമിഴ്നാട്ടിലെ കന്യാകുമാരി ജില്ലയിലും നീലഗിരി ജില്ലയിലെ ഗൂഡല്ലൂർ താലൂക്കിലും സംസാരിക്കപ്പെടുന്ന ഭാഷയാണ് മലയാളം.')
print(doc[-1])  # Prints 'മലയാളം.', should be '.'
```

## Your Environment

* Operating System: Windows 11 x64
* Python Version Used: 3.10.12
* spaCy Version Used: 3.6.0
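For context, spaCy peels trailing punctuation off tokens with language-specific suffix rules; the missing behaviour for Malayalam is conceptually the rule sketched below (a pure-regex stand-in, not spaCy's actual implementation):

```python
import re

SENTENCE_FINAL_PERIOD = re.compile(r"\.$")

def split_final_period(token):
    """Peel a sentence-final '.' off a token, as a suffix rule would."""
    if SENTENCE_FINAL_PERIOD.search(token) and len(token) > 1:
        return [token[:-1], "."]
    return [token]

print(split_final_period("മലയാളം."))  # ['മലയാളം', '.']
```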
open
2023-08-09T10:04:56Z
2023-09-05T12:59:00Z
https://github.com/explosion/spaCy/issues/12898
[ "bug", "lang / ml" ]
BLKSerene
3
zihangdai/xlnet
nlp
136
train_gpu
closed
2019-07-08T11:26:26Z
2019-07-08T11:26:48Z
https://github.com/zihangdai/xlnet/issues/136
[]
Bagdu
0
thomaxxl/safrs
rest-api
99
Make many to many relationship methods extendable
Since the API takes SQLAlchemy table objects and uses those in SAFRSBase column relationships, there is no opportunity to override the created relationship endpoints to add validation to the calls. For example:

```python
site_customer = db.Table(
    'site_customer',
    db.Column('site_id', db.Integer, db.ForeignKey('site.id'), primary_key=True),
    db.Column('customer_id', db.Integer, db.ForeignKey('customers.id'), primary_key=True))


class Site(SAFRSBase):
    __tablename__ = 'sites'
    id = db.Column(db.Integer, primary_key=True)
    status = db.Column(db.String(100), nullable=False)
    customers = \
        db.relationship(
            'Customer',
            secondary=site_customer,
            backref='sites')


class Customer(SAFRSBase):
    __tablename__ = 'customers'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(100), nullable=False)
```

In the above, I'd like to make it so a customer can only be attached to a site if the site's status is 'Active'. For properties on the objects we can override the methods _s_post() etc. However, there is no explicit method that can be overridden to modify the patch function for /sites/$SITE_ID/customers because the methods for the relationships are implied.
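With SQLAlchemy itself this guard is usually expressed as an `append` event listener on the relationship (roughly `@event.listens_for(Site.customers, "append")`); conceptually the listener enforces what this pure-Python sketch (hypothetical names) does:

```python
class Site:
    def __init__(self, status):
        self.status = status
        self.customers = []

    def add_customer(self, customer):
        # what the "append" listener would check before SQLAlchemy stores the row
        if self.status != "Active":
            raise ValueError("customers may only be attached to Active sites")
        self.customers.append(customer)

active = Site("Active")
active.add_customer("acme")
print(len(active.customers))  # 1

inactive = Site("Closed")
try:
    inactive.add_customer("acme")
except ValueError as exc:
    print(exc)  # customers may only be attached to Active sites
```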
closed
2021-09-30T23:50:45Z
2021-10-02T00:20:21Z
https://github.com/thomaxxl/safrs/issues/99
[]
lapierreni
2
ageitgey/face_recognition
machine-learning
970
how to use gpu limit
Hello, I use face_recognition and I have a question: can I limit GPU usage?
open
2019-11-07T02:13:23Z
2019-12-06T19:21:18Z
https://github.com/ageitgey/face_recognition/issues/970
[]
mickey-kim
1
wkentaro/labelme
computer-vision
1,020
Unable to run,lots of TypeError
First off, I performed the following commands when installing (Windows 10, using Anaconda): `conda create --name=labelme python=3` `conda activate labelme` `pip install labelme` When I try to run labelme, I get the following error: > `Traceback (most recent call last): File "E:\ProgramData\Anaconda3\envs\Image-style-transfer-gp\lib\site-packages\labelme\widgets\canvas.py", line 618, in paintEvent p.translate(self.offsetToCenter()) File "E:\ProgramData\Anaconda3\envs\Image-style-transfer-gp\lib\site-packages\labelme\widgets\canvas.py", line 659, in offsetToCenter return QtCore.QPoint(x, y) TypeError: arguments did not match any overloaded call: QPoint(): too many arguments QPoint(int, int): argument 1 has unexpected type 'float' QPoint(QPoint): argument 1 has unexpected type 'float' I don't know what caused the problem, maybe a version update of Qt?
closed
2022-05-17T07:40:26Z
2022-10-23T12:30:48Z
https://github.com/wkentaro/labelme/issues/1020
[ "issue::bug", "priority: high" ]
FallenSequoia
3
slackapi/bolt-python
fastapi
702
How can I forward an image from an older message without making the image public?
I created a slash command that will send images from older messages. But I couldn't find a way to send such images without making them public. I know this is possible because I can do it manually. How can this be achieved? ### Reproducible in: More of a generic question, no need to reproduce anything #### The `slack_bolt` version slack-bolt==1.14.0 slack-sdk==3.17.2 #### Python runtime version Python 3.10.5 #### OS info ProductName: macOS ProductVersion: 11.2.3 BuildVersion: 20D91 Darwin Kernel Version 20.3.0: Thu Jan 21 00:06:51 PST 2021; root:xnu-7195.81.3~1/RELEASE_ARM64_T8101 #### Steps to reproduce: Don't need to reproduce anything ### Expected result: I should be able to forward images from previous messages (in public channels) without making the image public ### Actual result: The message/preview doesn't work (no errors, they just don't appear), which forces me to make the image public.
closed
2022-08-18T19:16:09Z
2022-09-02T10:06:05Z
https://github.com/slackapi/bolt-python/issues/702
[ "question", "need info" ]
ArturFortunato
10
ultrafunkamsterdam/undetected-chromedriver
automation
896
Getting detected while using proxy
Hi, I'm getting detected while using a proxy, but when I don't add the --proxy-server argument the request passes through. Here is the code that I'm using: ` try: import undetected_chromedriver.v2 as uc except ImportError: print('Error, Module undetected_chromedriver is required') print("run 'pip install undetected_chromedriver' and try again") import time import argparse if __name__ == "__main__": parser = argparse.ArgumentParser(description = "Scrape from URL while avoiding bot detection") parser.add_argument("URL", nargs = 1, metavar = "url", type = str, help = "input URL to be scraped") args = parser.parse_args() if len(args.URL) == 1: request = args.URL[0] else: raise Exception("Invalid url") options = uc.ChromeOptions() options.headless=True options.add_argument('--headless') options.add_argument('--proxy-server=xxxxx:xxx') driver = uc.Chrome(options=options) driver.get(request) time.sleep(1) print(driver.page_source) ` I also tried loading an extension, but that does not work in headless mode, and I want to run it inside Docker. Any help?
open
2022-11-15T08:26:19Z
2022-12-03T10:33:31Z
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/896
[]
mbernatovic
1
dynaconf/dynaconf
fastapi
548
[RFC] hjson
Does someone use `hjson`? https://github.com/hjson/hjson-py It will be implemented, but only if this issue is upvoted.
closed
2021-03-09T22:38:04Z
2022-07-02T20:12:29Z
https://github.com/dynaconf/dynaconf/issues/548
[ "wontfix", "Not a Bug", "RFC" ]
rochacbruno
1
Johnserf-Seed/TikTokDownload
api
297
[BUG] A '/' in the user profile name causes the download to fail
**Describe the bug** A '/' in the user profile name causes the download to fail **Reproduction** Steps to reproduce this behavior: **Screenshots** ![image](https://user-images.githubusercontent.com/32392758/215475632-c665ffaa-0a89-4c32-b67c-23b9ef8839ff.png) ![image](https://user-images.githubusercontent.com/32392758/215475859-d6eebbf7-df79-4850-9813-3c3cae378f85.png) ![image](https://user-images.githubusercontent.com/32392758/215475917-650c0cef-84b7-433a-88aa-5a7d8e9276d1.png) **Desktop (please complete the following information):** - OS: Windows 10 64-bit - VPN proxy: off - Version: 13045 **Additional context** Add any other context about the problem here.
closed
2023-01-30T12:24:16Z
2023-02-06T14:08:33Z
https://github.com/Johnserf-Seed/TikTokDownload/issues/297
[ "故障(bug)", "额外求助(help wanted)", "无效(invalid)" ]
sansi98h
0
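The crash above comes from using the raw nickname as a directory name; '/' (and, on Windows, characters such as ':' and '?') are not legal in path components. A common fix — sketched here with a hypothetical `sanitize_filename` helper, not TikTokDownload's actual code — is to strip or replace such characters before building the save path:

```python
import re

WINDOWS_FORBIDDEN = r'[\\/:*?"<>|]'  # characters not allowed in Windows file names

def sanitize_filename(name, replacement="_"):
    """Replace path-hostile characters so the name is safe as a directory/file name."""
    cleaned = re.sub(WINDOWS_FORBIDDEN, replacement, name)
    # Also trim trailing dots/spaces, which Windows rejects in path components.
    return cleaned.rstrip(". ")

print(sanitize_filename("cat/dog: best?"))  # cat_dog_ best_
```

Any place the downloader turns a profile name into a folder name could route it through a helper like this first.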
hyperspy/hyperspy
data-visualization
2,534
Real/live time handling for EDS spectrum images
Following the discussion in #2362 and #2364, it appears that currently there are inconsistencies in handling real/live time for EDS spectrum images. Specifically, there are several questions to address: 1. Should live and real time values in metadata correspond to time per pixel or per entire spectrum image? 2. I/O plugins for different file formats should provide conforming values. 3. [Metadata structure docs](http://hyperspy.org/hyperspy-doc/current/user_guide/metadata_structure.html) should be updated to clarify this.
open
2020-09-03T13:21:09Z
2022-03-29T06:58:52Z
https://github.com/hyperspy/hyperspy/issues/2534
[ "type: proposal" ]
askorikov
4
google-research/bert
tensorflow
1,232
Unable to include emojis in masked language modelling?
Hi, I am new to Hugging Face and masked language modelling (MLM), and I was wondering how to include emojis when doing such a task. I have a dataset with tweets, with each tweet containing an emoji at the end - here is a sample of my data: | ID | Tweet | | -------- | -------------- | | 1 | Looking good today 😎 | | 2 | Weather is so hot, lol ☀️ | | 3 | I hate you!!! 🤬 | At the moment, I have fully trained my masked language model using my dataset, but when I predict something, it does **NOT** output or predict the emojis. It just predicts words. This is my desired input from using my dataset for MLM: ``` "You look great [MASK]" ``` This is my desired output from using my dataset for MLM: ``` [{'score': 0.26041436195373535, 'sequence': 'You look great 😎"', 'token': 72, 'token_str': '."'}, {'score': 0.1813151091337204, 'sequence': 'you look great 💯"', 'token': 2901, 'token_str': '!"'}, {'score': 0.14516998827457428, 'sequence': 'you look great 👌', 'token': 328, 'token_str': '!'},] ``` However, this is what I am actually getting from my output: ``` [{'score': 0.26041436195373535, 'sequence': 'You look great?"', 'token': 72, 'token_str': '."'}, {'score': 0.1813151091337204, 'sequence': 'You look great."', 'token': 2901, 'token_str': '!"'}, {'score': 0.14516998827457428, 'sequence': 'You look great!', 'token': 328, 'token_str': '!'},] ``` I know it is possible to do this, but how do I do it? I am close, but not very. Likewise, I have my model fully trained on my dataset, but it just does not seem to output emojis, even though I have included them in the training. Does something need to be included to accept emoji? If so, what? Thanks - I would really appreciate the help!
open
2021-06-07T08:28:51Z
2021-06-07T08:43:03Z
https://github.com/google-research/bert/issues/1232
[]
AnandP2812
1
TencentARC/GFPGAN
pytorch
119
how to get "StyleGAN2GeneratorClean" pretrained model?
I have noticed that the "StyleGAN2GeneratorClean" arch version is given, but I did not find its corresponding pretrained model. Where can I get it?
closed
2021-12-15T11:28:19Z
2021-12-21T08:37:27Z
https://github.com/TencentARC/GFPGAN/issues/119
[]
zhangxuzju
1
paperless-ngx/paperless-ngx
machine-learning
8,024
[BUG] Missing page numbers
### Description The new page count function (#7750) does not enter a page number for documents consumed by Gotenberg (*.docx, *.eml). ### Steps to reproduce 1. Consume docx or eml document 1. Look in the list of documents 1. Pages entry is empty/missing ### Webserver logs ```bash n/a ``` ### Browser logs _No response_ ### Paperless-ngx version 2.13.0 ### Host OS Synology DSM ### Installation method Docker - official image ### System status _No response_ ### Browser _No response_ ### Configuration changes _No response_ ### Please confirm the following - [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation. - [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools. - [X] I have already searched for relevant existing issues and discussions before opening this report. - [X] I have updated the title field above with a concise description.
closed
2024-10-26T11:07:05Z
2024-11-26T03:14:44Z
https://github.com/paperless-ngx/paperless-ngx/issues/8024
[ "not a bug" ]
leo-022
3
deepset-ai/haystack
machine-learning
9,065
Verify the documentation section on Helm chart
Bilge mentioned that this section seems outdated and there was interest in this during the conference. https://docs.haystack.deepset.ai/docs/kubernetes#deploy-with-helm
open
2025-03-19T10:42:45Z
2025-03-20T15:44:30Z
https://github.com/deepset-ai/haystack/issues/9065
[ "type:documentation", "P2" ]
dfokina
0
marcomusy/vedo
numpy
326
Lidar Data sequence Screenshot
@marcomusy Thanks for the wonderful library. I am using vedo to plot lidar point cloud density data and I have a few queries: 1. Can we draw bounding boxes for objects on point cloud data? 2. I have a sequence of lidar frames; I need to set the perspective and take screenshots without changing it. I know how to save the screenshot, but each time new data comes in the perspective is reset to default. Please share your answers. Thanks in advance
closed
2021-03-01T14:14:22Z
2021-03-03T17:25:06Z
https://github.com/marcomusy/vedo/issues/326
[]
abhigoku10
3
CTFd/CTFd
flask
1,985
Fix non-clickable checkbox label in user creation form in Admin side
The user creation form checkbox label for email user credentials isn't clickable. Small fix.
closed
2021-09-09T20:33:38Z
2021-09-13T07:54:21Z
https://github.com/CTFd/CTFd/issues/1985
[ "easy" ]
ColdHeat
0
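For context, the usual cause of a non-clickable checkbox label is a missing `for`/`id` pairing (or the input not being nested inside the label). A generic sketch of the fix — illustrative markup, not CTFd's actual template code — is:

```html
<!-- Before: clicking the text does nothing -->
<input type="checkbox" name="notify"> <label>Email account credentials to user</label>

<!-- After: the label's `for` matches the input's `id`, so clicking the text toggles the box -->
<input type="checkbox" name="notify" id="notify"> <label for="notify">Email account credentials to user</label>
```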
noirbizarre/flask-restplus
flask
770
Flask-RESTPlus is dead, long life to Flask-RestX
Hello everyone, It has now been almost a year since we first started discussions about the future of Flask-RESTPlus in #593. During the past months, a few of us have been granted maintainer access on GitHub: @SteadBytes, @j5awry, @a-luna and myself. Unfortunately, we never had access to the Flask-RESTPlus Pypi project, hence we have never been able to release any new version. This situation led us to fork the project under a different name so we can move forward. The new project name is [flask-restx](https://github.com/python-restx/flask-restx) and it will be handled by the [python-restx](https://github.com/python-restx) organization. We are at the early stages and we still have a lot of setup/organization to do, such as CI, renaming, copyright checking, etc. Our current milestone is to deliver the equivalent of the v0.14.0 that has been discussed in #743. After that, we will start migrating issues and PRs that are still relevant and build a roadmap. We'll let you know here when things are ready.
open
2020-01-10T21:49:14Z
2021-07-08T04:08:35Z
https://github.com/noirbizarre/flask-restplus/issues/770
[]
ziirish
1
nvbn/thefuck
python
650
error: zsh: command not found: thefuck
![2017-05-26 12 47 10](https://cloud.githubusercontent.com/assets/12124524/26460659/e994b220-41ac-11e7-82d2-55e9910cae72.png) I have already installed python-dev and sourced .bash_profile, but I still get "command not found".
closed
2017-05-25T16:48:36Z
2023-10-31T16:11:02Z
https://github.com/nvbn/thefuck/issues/650
[]
Zane96
12
ansible/ansible
python
84,147
‎URL safe b64encode and ‎b64decode
### Summary The current ‎b64encode and ‎b64decode filters do not allow for producing encoded strings according to [RFC 4648 section 5](https://datatracker.ietf.org/doc/html/rfc4648#section-5), commonly referred to as base64url, which are intended for use in URLs and file names. I recently had cause to require this for building a URL in a template, and had to resort to a custom filter plugin. This wasn't difficult at all, however I thought the community may benefit from this being made available to all in the core filters. Python's base64 module used by the current filter implementation provides this out of the box; so before I rattle off a PR, are we in agreement that such a thing would be useful? Should such a feature be implemented by adding an optional parameter to the existing base64 filters, or by implementation of an altogether new pair? Python's module provides different functions, but my thoughts were that fewer similarly named filters in the same namespace would be the preferable option for Ansible. ### Issue Type Feature Idea ### Component Name plugins ### Additional Information <!--- Paste example playbooks or commands between quotes below --> ``` # My config file url_param = {{ my_variable | b64encode(urlsafe=True) }} ``` Or alternately something like ``` # My config file url_param = {{ my_variable | urlsafe_b64encode }} ``` ### Code of Conduct - [X] I agree to follow the Ansible Code of Conduct
open
2024-10-20T03:16:44Z
2024-11-25T12:05:54Z
https://github.com/ansible/ansible/issues/84147
[ "waiting_on_contributor", "feature" ]
andrensairr
7
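The custom filter plugin described above is small because Python's `base64` module already implements RFC 4648 §5. A sketch of such a plugin — the filter names here are illustrative, not an accepted Ansible API:

```python
import base64

def b64encode_urlsafe(value):
    """Encode text with the URL- and filename-safe base64 alphabet (- and _ instead of + and /)."""
    return base64.urlsafe_b64encode(value.encode("utf-8")).decode("ascii")

def b64decode_urlsafe(value):
    """Decode a URL-safe base64 string back to text."""
    return base64.urlsafe_b64decode(value.encode("ascii")).decode("utf-8")

class FilterModule(object):
    """Ansible filter plugin entry point (drop the file into a filter_plugins/ directory)."""
    def filters(self):
        return {
            "urlsafe_b64encode": b64encode_urlsafe,
            "urlsafe_b64decode": b64decode_urlsafe,
        }

print(b64encode_urlsafe("??>"))  # Pz8- (standard base64 would give Pz8+)
```

Whether core should instead add an optional parameter to the existing `b64encode`/`b64decode` filters is exactly the design question the issue raises; either way the underlying call is `base64.urlsafe_b64encode`.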
stanfordnlp/stanza
nlp
483
Data conversion Python Object to CoNLL gives error
**Describe the bug** Trying to convert the python object of the annotated Document (`List[List[Dict]]`) to the CoNLL format of (`List[List[List]]`) following the example [in the docs](https://stanfordnlp.github.io/stanza/data_conversion.html) produces an error. **To Reproduce** Steps to reproduce the behavior: 1. Run the following lines: ``` >>> from stanza.utils.conll import CoNLL >>> dicts = [[{'id': '1', 'text': 'Test', 'upos': 'NOUN', 'xpos': 'NN', 'feats': 'Number=Sing', 'misc': 'start_char=0|end_char=4'}, {'id': '2', 'text': 'sentence', 'upos': 'NOUN', 'xpos': 'NN', 'feats': 'Number=Sing', 'misc': 'start_char=5|end_char=13'}, {'id': '3', 'text': '.', 'upos': 'PUNCT', 'xpos': '.', 'misc': 'start_char=13|end_char=14'}]] # dicts is List[List[Dict]], representing each token / word in each sentence in the document >>> conll = CoNLL.convert_dict(dicts) ``` 2. See error **Expected behavior** `conll` should be a `List[List[List]]`, representing each token / word in each sentence in the document **Environment (please complete the following information):** - OS: Ubuntu 18.04 - Python version: Python 3.6.9 - Stanza version: v1.1.1 708c935 **Additional context** Produced error: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/m0re/projects/phd/stanza/stanza/utils/conll.py", line 112, in convert_dict token_conll = CoNLL.convert_token_dict(token_dict) File "/home/m0re/projects/phd/stanza/stanza/utils/conll.py", line 132, in convert_token_dict token_conll[FIELD_TO_IDX[HEAD]] = str((token_dict[ID] if isinstance(token_dict[ID], int) else token_dict[ID][0]) - 1) # evaluation script requires head: int TypeError: unsupported operand type(s) for -: 'str' and 'int' ```
closed
2020-10-13T08:58:07Z
2020-10-13T16:10:50Z
https://github.com/stanfordnlp/stanza/issues/483
[ "bug" ]
m0re4u
1
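The traceback above comes from `convert_token_dict` doing arithmetic on `token_dict[ID]` while the example feeds string ids ('1', '2', …). A caller-side workaround — independent of Stanza's internals, with a hypothetical `coerce_ids` helper — is to coerce ids to ints (tuples of ints for multi-word tokens) before conversion:

```python
def coerce_ids(dicts):
    """Return a copy of a List[List[Dict]] document with 'id' values as int (or tuple of ints)."""
    out = []
    for sentence in dicts:
        new_sentence = []
        for token in sentence:
            token = dict(token)  # shallow copy; don't mutate the caller's data
            raw = token["id"]
            if isinstance(raw, str):
                # '3' -> 3; a multi-word range like '2-3' -> (2, 3)
                token["id"] = tuple(int(p) for p in raw.split("-")) if "-" in raw else int(raw)
            new_sentence.append(token)
        out.append(new_sentence)
    return out

fixed = coerce_ids([[{"id": "1", "text": "Test"}, {"id": "2-3", "text": "isn't"}]])
print(fixed[0][0]["id"], fixed[0][1]["id"])  # 1 (2, 3)
```

Passing `coerce_ids(dicts)` to `CoNLL.convert_dict` sidesteps the str−int subtraction; note that later Stanza releases may already handle string ids.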
ipython/ipython
data-science
14,464
Could it be worthy to make `VerboseTB._tb_highlight_style` a public attribute?
<!-- This is the repository for IPython command line, if you can try to make sure this question/bug/feature belong here and not on one of the Jupyter repositories. If it's a generic Python/Jupyter question, try other forums or discourse.jupyter.org. If you are unsure, it's ok to post here, though, there are few maintainer so you might not get a fast response. --> Hi! Over QtConsole we are doing some work to use the changes done at https://github.com/ipython/ipython/pull/14138 to customize traceback syntax highlighting (see https://github.com/jupyter/qtconsole/pull/608). Following https://github.com/jupyter/qtconsole/pull/608#discussion_r1638413158 and https://github.com/jupyter/qtconsole/pull/608#discussion_r1643501066, I wanted to check if making `VerboseTB._tb_highlight_style` a public attribute makes sense/could be possible. Also, are there any plans related to changing the traceback syntax highlighting logic in the future? Any info is greatly appreciated!
closed
2024-06-19T19:30:53Z
2024-10-01T07:39:10Z
https://github.com/ipython/ipython/issues/14464
[]
dalthviz
1
MaxHalford/prince
scikit-learn
90
How to change colors of plot_row_coordinates?
Using famd.plot_row_coordinates, how can I change the colors of each group specified in color_labels?
closed
2020-06-04T21:59:33Z
2023-02-27T11:48:53Z
https://github.com/MaxHalford/prince/issues/90
[]
insuquot
2
widgetti/solara
flask
69
[Bug] In new jupyter notebook (7+), the fullscreen button is missing from app
In jupyter version 7+, when you run a Solara app in the notebook, there's no option to go fullscreen anymore. simple app ``` import solara as sl @sl.component def Page(): sl.Markdown("# Hi there!") sl.Markdown("This is my page") Page() ``` <img width="1362" alt="image" src="https://user-images.githubusercontent.com/22605641/232238343-7ef410b8-5705-4934-913e-778a4b4a4683.png"> Jupyter versions ``` Selected Jupyter core packages... IPython : 8.12.0 ipykernel : 6.22.0 ipywidgets : 8.0.6 jupyter_client : 8.2.0 jupyter_core : 5.3.0 jupyter_server : 2.5.0 jupyterlab : not installed nbclient : 0.7.3 nbconvert : 7.3.1 nbformat : 5.8.0 notebook : 6.5.4 qtconsole : 5.4.2 traitlets : 5.9.0 ``` Solara version: `'1.10.0'`
open
2023-04-15T16:36:58Z
2023-04-15T16:37:10Z
https://github.com/widgetti/solara/issues/69
[]
Ben-Epstein
0
jupyter/nbviewer
jupyter
181
index.ipynb
When a GitHub repo or gist has an `index.ipynb` or `Index.ipynb`, we should directly render that document rather than rendering the list of repo contents. @Carreau and @rgbkrk do you think we can get this done before 1.0? I want to create one of these files for our examples notebooks to use as an entry page for our notebook docs.
open
2014-01-26T22:25:24Z
2014-03-17T04:55:21Z
https://github.com/jupyter/nbviewer/issues/181
[ "type:Enhancement" ]
ellisonbg
10
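The feature amounts to a special case when listing a directory: if an index notebook exists, render it instead of the listing. A minimal sketch of that selection rule — a hypothetical helper, not nbviewer's handler code:

```python
INDEX_CANDIDATES = ("index.ipynb", "Index.ipynb")

def pick_entry(filenames):
    """Return the notebook to render directly, or None to fall back to the file listing."""
    for candidate in INDEX_CANDIDATES:
        if candidate in filenames:
            return candidate  # first match wins: lowercase preferred over capitalized
    return None

print(pick_entry(["README.md", "Index.ipynb", "demo.ipynb"]))  # Index.ipynb
print(pick_entry(["a.ipynb", "b.ipynb"]))                      # None
```

The precedence order here (lowercase before capitalized) is an arbitrary choice; the issue leaves it open which spelling should win when both exist.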
tensorlayer/TensorLayer
tensorflow
186
Undefined names
flake8 testing of https://github.com/zsdonghao/tensorlayer on Python 2.7.13 $ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics ``` ./example/_tf0.12/tutorial_dynamic_rnn.py:89:13: F821 undefined name 'incoming' incoming if isinstance(X, tf.Tensor) else tf.stack(X))#tf.pack(X)) ^ ./example/_tf0.12/tutorial_translate.py:407:45: F821 undefined name 'model_checkpoint_path' raise Exception("no %s exist" % model_checkpoint_path) ^ ./tensorlayer/layers.py:265:56: F821 undefined name 'name_reuse' if (name in set_keep['_layers_name_list']) and name_reuse == False: ^ ./tensorlayer/layers.py:5742:49: F821 undefined name 'single_cell' cell = tf.contrib.rnn.MultiRNNCell([single_cell] * num_layers) ^ ./tensorlayer/layers.py:5744:49: F821 undefined name 'single_cell' cell = tf.nn.rnn_cell.MultiRNNCell([single_cell] * num_layers) ^ ./tensorlayer/layers.py:5992:34: F821 undefined name 'W' self.all_params.extend( [W, b] ) ^ ./tensorlayer/layers.py:5992:37: F821 undefined name 'b' self.all_params.extend( [W, b] ) ^ ./tensorlayer/prepro.py:941:11: F821 undefined name 'flatx' print(flatx.shape) # (160, 176, 1) ^ ./tensorlayer/prepro.py:942:21: F821 undefined name 'flatx' whitex = np.dot(flatx, principal_components) ^ ./tensorlayer/prepro.py:1001:8: F821 undefined name 'is_random' if is_random: ^ ./tensorlayer/prepro.py:1255:25: F821 undefined name 'image' x = binary_dilation(image, selem=mask) ^ ```
closed
2017-08-03T11:39:40Z
2017-08-11T16:25:27Z
https://github.com/tensorlayer/TensorLayer/issues/186
[]
cclauss
1
facebookresearch/fairseq
pytorch
5,019
KeyError: 'best_loss' when finetuning PyTorch Wav2Vec2ForCTC model with fairseq
## ❓ Questions and Help Using the pytorch model: chcaa/xls-r-300m-danish-nst-cv9 from HuggingFace directly in fairseq-train results in KeyError. See below. Any suggestions to how to convert a pytorch model into a format fairseq accepts or in any other way circumvent the error? I have seen there is a script for converting from fairseq to pytorch, but not the other way around. Help is much appreciated! ``` ... model = task.build_model(cfg.model) File "/home/kikop/miniconda3/envs/xls-r/lib/python3.10/site-packages/fairseq/tasks/audio_finetuning.py", line 193, in build_model model = super().build_model(model_cfg, from_checkpoint) File "/home/kikop/miniconda3/envs/xls-r/lib/python3.10/site-packages/fairseq/tasks/audio_pretraining.py", line 197, in build_model model = super().build_model(model_cfg, from_checkpoint) File "/home/kikop/miniconda3/envs/xls-r/lib/python3.10/site-packages/fairseq/tasks/fairseq_task.py", line 338, in build_model model = models.build_model(cfg, self, from_checkpoint) File "/home/kikop/miniconda3/envs/xls-r/lib/python3.10/site-packages/fairseq/models/__init__.py", line 106, in build_model return model.build_model(cfg, task) File "/home/kikop/miniconda3/envs/xls-r/lib/python3.10/site-packages/fairseq/models/wav2vec/wav2vec2_asr.py", line 208, in build_model w2v_encoder = Wav2VecEncoder(cfg, len(task.target_dictionary)) File "/home/kikop/miniconda3/envs/xls-r/lib/python3.10/site-packages/fairseq/models/wav2vec/wav2vec2_asr.py", line 377, in __init__ state = checkpoint_utils.load_checkpoint_to_cpu(cfg.w2v_path, arg_overrides) File "/home/kikop/miniconda3/envs/xls-r/lib/python3.10/site-packages/fairseq/checkpoint_utils.py", line 343, in load_checkpoint_to_cpu state = _upgrade_state_dict(state) File "/home/kikop/miniconda3/envs/xls-r/lib/python3.10/site-packages/fairseq/checkpoint_utils.py", line 585, in _upgrade_state_dict {"criterion_name": "CrossEntropyCriterion", "best_loss": state["best_loss"]} KeyError: 'best_loss' ``` #### Code ``` fairseq-hydra-train task.data="/home/kikop/speech_finetune/data/processed/synthesised/VCTK_238a5fd5-64c4-4fc4-8665-130e47fac1eb/" \ common.wandb_project=individual_models model.w2v_path=//home/kikop/speech_finetune/models/speech-recognition/pytorch_model.bin \ checkpoint.reset_dataloader=True checkpoint.reset_optimizer=True common.reset_logging=True \ --config-dir /home/kikop/speech_finetune/speech_finetune/models/speech-recognition/config --config-name asr_individual_finetune ``` #### What's your environment? - fairseq==0.12.2 - torch==1.13.0 - OS = Linux - How you installed fairseq: pip - Python 3.10.6 - CUDA Version: 11.6
open
2023-03-10T13:36:53Z
2024-01-02T21:22:21Z
https://github.com/facebookresearch/fairseq/issues/5019
[ "question", "needs triage" ]
KiriKoppelgaard
1
opengeos/leafmap
jupyter
919
`import leafmap.foliumap as leafmap` is unable to `add_gdf` if `style` column is present
<!-- Please search existing issues to avoid creating duplicates. --> ### Environment Information - leafmap version: 0.38.5 - Python version: 3.10.12 - Operating System: Ubuntu 20.04.6 LTS ### Description Trying to use `add_gdf` function fails with `foliumap` version of `leafmap` if the `gdf` has `style` column. The same functionality works with `import leafmap.leafmap as leafmap`. ```py show_what_works = False if show_what_works: import leafmap.leafmap as leafmap else: import leafmap.foliumap as leafmap import geopandas as gpd from shapely.geometry import Polygon dict_with_style = {"name": ["A", "B", "C"], "geometry": [Polygon([(77.0, 28.0), (77.0, 29.0), (78.0, 29.0)]), Polygon([(77.0, 29.0), (77.0, 30.0), (78.0, 30.0)]), Polygon([(77.0, 30.0), (77.0, 31.0), (78.0, 31.0)])], "style": [{"color": "red"}, {"color": "green"}, {"color": "blue"}]} gdf_with_style = gpd.GeoDataFrame.from_dict(dict_with_style, crs="EPSG:4326") print(gdf_with_style) m = leafmap.Map() m.add_gdf(gdf_with_style, zoom_to_layer=True) m ```
closed
2024-10-15T15:44:58Z
2024-10-17T15:09:05Z
https://github.com/opengeos/leafmap/issues/919
[ "bug" ]
patel-zeel
2
flairNLP/flair
pytorch
3,458
[Question]: Resume training
### Question I'm trying to resume training according to [this code](https://github.com/flairNLP/flair/blob/8bcc3d9dac0b0e318e0bd0290af5a36f4d414fab/resources/docs/TUTORIAL_TRAINING_MORE.md?plain=1#L73) where it says: # 7. continue training at later point. Load previously trained model checkpoint, then resume trained_model = SequenceTagger.load(path + '/checkpoint.pt') # resume training best model, but this time until epoch 25 trainer.resume(trained_model, base_path=path + '-resume', max_epochs=25, ) but `resume` is not defined in [class ModelTrainer(Pluggable)](https://github.com/flairNLP/flair/blob/8bcc3d9dac0b0e318e0bd0290af5a36f4d414fab/flair/trainers/trainer.py#L41). I'm sure it's a common task using your awesome library, yet I cannot get it working. Any information would be much appreciated.
open
2024-05-17T07:28:23Z
2024-06-24T02:20:42Z
https://github.com/flairNLP/flair/issues/3458
[ "question" ]
alfredwallace7
5
microsoft/nni
deep-learning
5,657
TypeError: to() takes from 0 to 4 positional arguments but 5 were given
When normalizing my input images with the following function: def normalize(self, image): # image_mean = [0.485, 0.456, 0.406], image_std = [0.229, 0.224, 0.225] dtype, device = image.dtype, image.device mean = torch.as_tensor(self.image_mean, dtype=dtype, device=device) std = torch.as_tensor(self.image_std, dtype=dtype, device=device) return (image - mean[:, None, None]) / std[:, None, None] I get: TypeError: to() takes from 0 to 4 positional arguments but 5 were given And the stack: __call__, jit_translate.py:244 __init__, infer_mask.py:80 update_direct_sparsity, compressor.py:235 infer_modules_masks, compressor.py:381 speedup_model, compressor.py:544 <module>, prac.py:84
open
2023-08-04T02:57:46Z
2023-08-11T06:59:51Z
https://github.com/microsoft/nni/issues/5657
[]
RToF
0
pytorch/pytorch
numpy
149,604
DISABLED test_linear (__main__.TestLazyModules)
Platforms: linux, rocm This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_linear&suite=TestLazyModules&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39077249396). Over the past 3 hours, it has been determined flaky in 6 workflow(s) with 8 failures and 6 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_linear` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. <details><summary>Sample error message</summary> ``` Traceback (most recent call last): File "/var/lib/jenkins/workspace/test/nn/test_lazy_modules.py", line 126, in test_linear self.assertTrue( File "/opt/conda/envs/py_3.10/lib/python3.10/unittest/case.py", line 687, in assertTrue raise self.failureException(msg) AssertionError: False is not true To execute this test, run the following from the base repo dir: python test/nn/test_lazy_modules.py TestLazyModules.test_linear This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0 ``` </details> Test file path: `nn/test_lazy_modules.py` cc @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki @clee2000
open
2025-03-20T06:44:22Z
2025-03-20T06:44:27Z
https://github.com/pytorch/pytorch/issues/149604
[ "module: nn", "triaged", "module: flaky-tests", "skipped" ]
pytorch-bot[bot]
1
hyperspy/hyperspy
data-visualization
3,029
Velox EMD Complex dtype issue
#### Bug Description There remains a small bug in reading Velox EMD files with FFTs. In _read_image(): fft_dtype = [('realFloatHalfEven', '<f4'), ('imagFloatHalfEven', '<f4')]; however, this is not always the case. If, for example, an FFT is taken of a region with odd number of pixels, then Velox saves the data as fft_dtype = [('realFloatHalfOdd', '<f4'), ('imagFloatHalfOdd', '<f4')] This issue causes hs.load() to fail in the presence of an FFT with odd number of pixels. #### To Reproduce In Velox, create an "FFT from selection" that has an odd number of pixels (i.e. 2047x2047) Save .emd file Attempt to load in hyperspy ``` import hyperspy.api as hs hs.load('image_with_oddFFT.emd') # ERROR:hyperspy.io:If this file format is supported, please report this error to the HyperSpy developers. ``` #### Expected behavior Hyperspy should manage to import odd pixel width FFTs just as it can even pixel FFTs from Velox EMD files. #### Python environement: - HyperSpy version: 1.7.1 - Python version: 3.10.5 Pull request generated [https://github.com/hyperspy/hyperspy/pull/3028#issue-1380192233](https://github.com/hyperspy/hyperspy/pull/3028#issue-1380192233)
closed
2022-09-21T01:50:13Z
2024-02-16T17:15:43Z
https://github.com/hyperspy/hyperspy/issues/3029
[ "type: bug" ]
bryandesser
2
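Rather than hard-coding `realFloatHalfEven`/`imagFloatHalfEven`, the reader can discover whichever real/imag field pair the file actually uses. A sketch of that lookup on plain field-name lists — the hyperspy fix itself lives in the Velox EMD reader, this is only the idea:

```python
def find_complex_fields(field_names):
    """Given a structured dtype's field names, return the (real, imag) pair,
    tolerating both ...HalfEven and ...HalfOdd suffixes."""
    real = next((n for n in field_names if n.startswith("real")), None)
    imag = next((n for n in field_names if n.startswith("imag")), None)
    if real is None or imag is None:
        raise ValueError("no real/imag field pair in %r" % (field_names,))
    return real, imag

print(find_complex_fields(["realFloatHalfOdd", "imagFloatHalfOdd"]))
# ('realFloatHalfOdd', 'imagFloatHalfOdd')
```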
polakowo/vectorbt
data-visualization
679
vbt.CCXTData.pull
ccxt_data_eth = vbt.CCXTData.pull( "ETH/USDT", start="2020-01-03", end="2020-01-06") ccxt_data_eth.close Why do I get "No symbols could be fetched"?
closed
2024-01-10T14:32:59Z
2024-03-16T10:51:15Z
https://github.com/polakowo/vectorbt/issues/679
[]
happy-mint
2
sktime/pytorch-forecasting
pandas
1,219
TimeSeriesTransformer Learning Rate Finder gives a KeyError after finding initial LR.
- PyTorch-Forecasting version: `0.10.3` - PyTorch version: `1.13.0+cu116` - PyTorch-Lightning version: `1.8.6` - Python version: `3.8.16` - Operating System: `Linux 2301df3359e4 5.10.147+ #1 SMP Sat Dec 10 16:00:40 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux` (colab) ### Expected behavior Either doing a manual LR finder or setting "use_learning_rate_finder=True" in the TimeSeriesTransformer optimize_hyperparameters function, should find the best learning rate using the built in learning rate finder ### Actual behavior It completes one epoch to find initial LR, then throws a KeyError: 'radam_buffer' ### Code to reproduce the problem Copied almost entirely from the Stallion Example. Code and traceback can be found here: https://gist.github.com/ifeelagood/f29137abbfcbfa956c87ab965ad3b499 Please note, this was also tested on python 3.10.8 locally, with newest versions
open
2023-01-07T04:47:15Z
2023-02-17T10:35:11Z
https://github.com/sktime/pytorch-forecasting/issues/1219
[]
ifeelagood
3
netbox-community/netbox
django
18,311
Call out requirement for PostgreSQL 13+ in v4.2 release notes
### Change Type Addition ### Area Other ### Proposed Changes Django 5.1 [dropped support for PostgreSQL 12](https://docs.djangoproject.com/en/5.1/releases/5.1/#dropped-support-for-postgresql-12) and we failed to capture it in the NetBox v4.2 release notes. PostgreSQL 12 support was dropped upstream in November so hopefully few people are still running it.
closed
2025-01-06T21:52:27Z
2025-01-06T22:04:15Z
https://github.com/netbox-community/netbox/issues/18311
[ "type: documentation", "status: accepted" ]
jeremystretch
0
python-restx/flask-restx
flask
548
Swagger and Apache curl's do not match
I'm running flask restx behind an apache proxy. My team only gets one server for all our tools so it's kind of necessary. I'm setting my namespace up like so: `namespace = Namespace('hosts', 'host endpoints', validate=True, authorizations=authorizations, path="/hosts")` The two endpoints are for a virtual machine name and a PUT that takes a json at '/'. But in order to work w/ port 443 and https, I'm having to proxy my requests through apache and gunicorn. Apache config: ``` <VirtualHost *:443> ServerName webdev02.mycompany.com ## Vhost docroot DocumentRoot "/etc/httpd/htdocs" <Directory "/etc/httpd/htdocs"> Options Indexes FollowSymLinks MultiViews AllowOverride All Require all granted </Directory> ## Logging ErrorLog "/var/log/httpd/httpd_error_log" ServerSignature Off CustomLog "/var/log/httpd/httpd_access_log" "mainserver" ## Server aliases ServerAlias webdev02.mycompany.com ## Custom fragment ## Custom fragment SetEnvIf X-Forwarded-For "^.*\..*\..*\..*" forwarded CustomLog "/var/log/httpd/httpd_access_log" combined env=!forwarded CustomLog "/var/log/httpd/httpd_access_log" proxy env=forwarded <Location "literally 11 other applications"> RequestHeader set X-Forwarded-Port 443 ProxyPass http://127.0.0.1:80##/ </Location> <Location /my_application> ProxyPass http://127.0.0.1:8082/ </Location> <Location /swaggerui> # this has to be done to find the swaggerui page ProxyPass http://127.0.0.1:8082/swaggerui </Location> <Location /swagger.json> # because my_application represents the project running on port 8082, I have to redirect here to retrieve the swagger.json file ProxyPass http://webdev02.mycompany.com/my_application/swagger.json </Location> <Location /hosts/> # this is the start of my problems ProxyPass http://webdev02.mycompany.com/my_application/hosts/ </Location> SetEnv HTTPS On </VirtualHost> ``` A curl requests works like so: ``` curl -X 'GET' \ 'https://webdev02.mycompany.com/my_application/hosts/myserver.mycompany.com' \ -H 'accept: application/json' 
{"vmname": "myserver.mycompany.com", "OS": "Red Hat Enterprise Linux 8", "PatchWindow": "1_Mon_AM_6"} ``` The problem: my swagger page does not recognize that it is behind Apache and Gunicorn. When I go to my doc page, select that same endpoint above and hit execute, it outputs the CURL like this: ``` curl -X 'GET' \ 'https://webdev02.mycompany.com/hosts/myserver.mycompany.com' \ -H 'accept: application/json' ``` I've tried ProxyFix from werkzeug.middleware.proxy_fix and that has done nothing. How do I get a flask-restx project to recognize it's behind a proxy? I should mention I've seen a [previous post](https://github.com/python-restx/flask-restx/issues/58) on this but it hasn't resolved my issue.
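A minimal, stdlib-only sketch of the usual fix for prefix-mounted apps (the middleware name and header choice are illustrative, not flask-restx API): have Apache forward the mount prefix, e.g. `RequestHeader set X-Forwarded-Prefix /my_application` inside the `<Location /my_application>` block, and rewrite `SCRIPT_NAME` before the app sees the request so Flask's URL building (and thus Swagger's generated curl) includes the prefix:

```python
# Sketch of a prefix-aware WSGI middleware. The header name X-Forwarded-Prefix
# is a convention, not something Apache sends by default -- it must be set with
# a RequestHeader directive in the proxy config.
class PrefixMiddleware:
    def __init__(self, wsgi_app):
        self.wsgi_app = wsgi_app

    def __call__(self, environ, start_response):
        prefix = environ.get("HTTP_X_FORWARDED_PREFIX", "")
        if prefix:
            # Tell the app it is mounted under the prefix...
            environ["SCRIPT_NAME"] = prefix
            # ...and strip the prefix from the request path if present.
            path = environ.get("PATH_INFO", "")
            if path.startswith(prefix):
                environ["PATH_INFO"] = path[len(prefix):]
        return self.wsgi_app(environ, start_response)
```

Werkzeug's `ProxyFix(app.wsgi_app, x_prefix=1)` does essentially the same job, but only if Apache actually sends the `X-Forwarded-Prefix` header; without the `RequestHeader` line it has nothing to act on, which may be why it appeared to do nothing.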
open
2023-06-28T21:40:27Z
2023-07-09T01:26:58Z
https://github.com/python-restx/flask-restx/issues/548
[ "question" ]
itinneed2022
3
dfki-ric/pytransform3d
matplotlib
101
sensor2img
Enhancement: a more general form for the camera projection, implementing sensor2image for sonar sensors (single- and multibeam) too.
closed
2021-01-21T16:03:15Z
2021-05-07T08:32:33Z
https://github.com/dfki-ric/pytransform3d/issues/101
[]
Mateus224
1
paperless-ngx/paperless-ngx
machine-learning
8,857
[BUG] After last update celery will freeze the whole machine
### Description I run paperless in a docker container on a machine which also runs parts of my smarthome automation. I have already invested some time to investigate the problem. Celery will use 25% of CPU within 5 minutes of starting paperless. After another 5 minutes the whole system will freeze. I guess it's due to the Gmail import. But it was running before the update. I created a filter to look for attached PDFs, but I guess paperless chokes in the process. It starts a new scan before the previous one is completed. My first attempt at a workaround was to disable the email import. Then I tried to reduce the maximum age (from 600 to 10). This seems to help. But it worked before the update. Did something change from the last Docker release (November) to the current one? ### Steps to reproduce Create a mail import (the Gmail account has ~20,000 mails). ### Webserver logs ```bash [2025-01-22 10:05:00,001] [INFO] [celery.beat] Scheduler: Sending due task Train the classifier (documents.tasks.train_classifier) [2025-01-22 10:05:00,011] [INFO] [celery.beat] Scheduler: Sending due task Check and run scheduled workflows (documents.tasks.check_scheduled_workflows) [2025-01-22 10:10:00,000] [INFO] [celery.beat] Scheduler: Sending due task Check all e-mail accounts (paperless_mail.tasks.process_mail_accounts) [2025-01-22 10:20:00,001] [INFO] [celery.beat] Scheduler: Sending due task Check all e-mail accounts (paperless_mail.tasks.process_mail_accounts) [2025-01-22 10:30:00,001] [INFO] [celery.beat] Scheduler: Sending due task Check all e-mail accounts (paperless_mail.tasks.process_mail_accounts) M2M reverse owner <ManyToOneRel: documents.document> M2M reverse correspondent <ManyToOneRel: documents.document> M2M reverse storage_path <ManyToOneRel: documents.document> M2M reverse document_type <ManyToOneRel: documents.document> M2M forward tags M2M reverse owner <ManyToOneRel: documents.document> M2M reverse correspondent <ManyToOneRel: documents.document> M2M reverse storage_path <ManyToOneRel: 
documents.document> M2M reverse document_type <ManyToOneRel: documents.document> M2M forward tags ``` ### Browser logs ```bash ``` ### Paperless-ngx version 2.14.4 ### Host OS Debian ### Installation method Docker - official image ### System status ```json ``` ### Browser _No response_ ### Configuration changes _No response_ ### Please confirm the following - [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation. - [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools. - [x] I have already searched for relevant existing issues and discussions before opening this report. - [x] I have updated the title field above with a concise description.
closed
2025-01-22T09:22:16Z
2025-01-22T10:39:06Z
https://github.com/paperless-ngx/paperless-ngx/issues/8857
[ "not a bug" ]
BigDi
1
facebookresearch/fairseq
pytorch
5,044
About extracting the feature of the 12th layer for finetuned hubert model
#### What is your question? In examples/hubert/simple_kmeans, there are methods for extracting features from a _pretrained_ model. Now, to compare the similarity of different models and layers, I am working on extracting 12th-layer features from the **_finetuned_ HuBERT models** given in README.md, such as the HuBERT LARGE model and HuBERT Extra LARGE model. What should I do?
open
2023-03-24T02:19:32Z
2024-11-17T14:51:03Z
https://github.com/facebookresearch/fairseq/issues/5044
[ "question", "needs triage" ]
LYPinASR
4
seleniumbase/SeleniumBase
web-scraping
2,829
How to bypass captcha input text
I found a website that uses the https://github.com/igoshev/laravel-captcha library. So we must input text to solve the captcha. This is an example of the captcha image ![image](https://github.com/seleniumbase/SeleniumBase/assets/46953807/7471e9bf-4ed3-4c9a-982d-6cfebede7419) And this is the HTML element ``` <div class="btn-two-cols-wrapper"> <!-- <div> <div required="required" data-callback="enableSubmitPassengerData" data-sitekey="6LeCD6woAAAAAHz-bCQAI0Bg2b1SGsb9ZxXRMUA7" class="g-recaptcha"></div> </div> <div class="help-block with-errors text-danger"></div> --> <img src="https://booking.kai.id/captcha/image?_=884327940" alt="https://github.com/igoshev/laravel-captcha" style="cursor:pointer;width:180px;height:50px;" title="Perbarui" onclick="this.setAttribute('src','https://booking.kai.id/captcha/image?_=884327940&amp;_='+Math.random());var captcha=document.getElementById('captcha');if(captcha){captcha.focus()}"> <br> <span class="error">Invalid Captcha</span> <div class="row"> <div class="col-md-4"> <input class="form-control" type="text" id="captcha" name="captcha" autocomplete="off" style="margin-top:10px;"> </div> </div> </div> ``` Do you have an idea how to bypass it?
closed
2024-06-04T16:54:16Z
2024-06-04T17:23:53Z
https://github.com/seleniumbase/SeleniumBase/issues/2829
[ "question", "UC Mode / CDP Mode" ]
adarmawan117
1
wemake-services/django-test-migrations
pytest
40
properly apply migrations before running test
The current `django-test-migrations` test setup runs all migrations forward (done either by Django's test case or pytest's `--migrations` CLI option) and then migrations are reverted by `.before()` to those specified in its arguments. Such an approach can cause really ugly and unintuitive tracebacks. The proper solution would be to defer running migrations until `.before()` is called. This way we can avoid applying unnecessary migration operations that will be reverted anyway in `.before()`, and migration exceptions will be raised only when the particular migration is being tested (currently, if any migration, even the last one in the plan, is incorrect and raises an exception, then all migration tests are interrupted by that exception, which produces really unintuitive tracebacks). If anyone has other ideas or is against it, please raise your hand, so we can figure out the best solution for this issue.
closed
2020-03-08T14:28:22Z
2020-05-10T17:08:54Z
https://github.com/wemake-services/django-test-migrations/issues/40
[]
skarzi
0
jupyter/nbviewer
jupyter
835
Fully formed Github repo urls leading to 400 error
When giving nbviewer a fully formed GitHub repo URL, say `https://github.com/jupyter-widgets/tutorial`, a 400 error results: **To Reproduce** 1. Go to https://nbviewer.jupyter.org. 2. Paste into the input text box: `https://github.com/jupyter-widgets/tutorial` 3. Nbviewer turns the above into `https://nbviewer.jupyter.org/urls/github.com/jupyter-widgets/tutorial` and produces a `400 : Bad Request` error. **Expected behavior** The repo should be rendered. That works as expected if instead one types only the org/repo part: `jupyter-widgets/tutorial`, leading to the URL `https://nbviewer.jupyter.org/github/jupyter-widgets/tutorial/tree/master/`. I'm pretty sure this used to work, but I don't know when the change occurred. Oddly this seems like it should be an often-reported problem, but I couldn't find a dupe.
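For reference, the rewrite the input box presumably needs can be sketched in a few lines (a guess at the intended behavior, not nbviewer's actual handler code; the real route also appends `/tree/master/` for repos):

```python
from urllib.parse import urlparse

def normalize_github_url(url: str) -> str:
    # Map a fully formed GitHub repo URL onto nbviewer's /github/ route
    # instead of the generic /urls/ fallback, which 400s on HTML pages.
    parsed = urlparse(url)
    if parsed.netloc in ("github.com", "www.github.com"):
        return "https://nbviewer.jupyter.org/github" + parsed.path
    return url

print(normalize_github_url("https://github.com/jupyter-widgets/tutorial"))
# -> https://nbviewer.jupyter.org/github/jupyter-widgets/tutorial
```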
closed
2019-06-12T07:00:05Z
2019-07-07T14:30:25Z
https://github.com/jupyter/nbviewer/issues/835
[ "type:Enhancement" ]
fperez
5
numba/numba
numpy
9,859
Version `0.61.0rc2` leaks Numba source code into coverage report
<!-- Thanks for opening an issue! To help the Numba team handle your information efficiently, please first ensure that there is no other issue present that already describes the issue you have (search at https://github.com/numba/numba/issues?&q=is%3Aissue). --> ## Reporting a bug <!-- Before submitting a bug report please ensure that you can check off these boxes: --> - [X] I have tried using the latest released version of Numba (most recent is visible in the release notes (https://numba.readthedocs.io/en/stable/release-notes-overview.html). - [ ] I have included a self contained code sample to reproduce the problem. i.e. it's possible to run as 'python bug.py'. <!-- Please include details of the bug here, including, if applicable, what you expected to happen! --> Hello, I am trying to make our code base support Python 3.13 proactively, and since we use Numba, we need to start adopting the `0.61.0`+ version. Since you are on RC2, we decided to give it a try. One strange issue we are running into is: when we run coverage + pytest, some numba code from `lib/site-packages` gets leaked into the coverage report. See below image. I re-ran our test suite with stable `0.60.0` and we don't see this happen. This also makes our combined coverage reporting (multiple python version run reports merged at the end) fail because numba is not needed and not installed during merging of the reports in CI, and we don't install it. ![image](https://github.com/user-attachments/assets/b3aeadaf-186e-4fa2-994d-c49cc8a530a7) Any ideas what may be going on?
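Until the root cause is found, a common workaround is to restrict what coverage measures to the project's own sources, so site-packages can never leak into the report (the package name below is a placeholder for the actual project):

```ini
# .coveragerc -- limit measurement to the project package
# ("my_package" is a placeholder; adjust to the real package name)
[run]
source = my_package
omit =
    */site-packages/*
```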
closed
2024-12-19T20:21:27Z
2025-01-14T16:25:37Z
https://github.com/numba/numba/issues/9859
[ "bug", "bug - incorrect behavior" ]
tasansal
6
flavors/django-graphql-jwt
graphql
278
Automated mutation testing with input variables
Hello, I am trying to test a mutation with an input variable. I have tried adding the input to variables, but get an error: 'Variable "$input" of required type [...] was not provided.' Could someone please help with this issue? Many thanks! Here is the code: ```python class GQLTestCase(JSONWebTokenTestCase): def setUp(self): self.user = get_user_model().objects.create(username='testuser') self.client.authenticate(self.user) def test_mutation(self): mutation = ''' mutation updateModel($input: UpdateModelInputType!) { updateModel(input: $input) { model { id name } } } ''' model_id = 1 variables = { 'username': self.user.username, 'inuput':{'id': model_id }, } response = self.client.execute( update_mutation, variables ) self.assertIsNone(response.errors) ``` Others seem to be having the same issue: [https://stackoverflow.com/questions/59509280/django-graphql-endpoint-testing-unable-to-use-variables-dictionary](url)
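The error says `$input` was never provided, and one likely cause is visible in the snippet itself: the variables dict spells the key `'inuput'`, so GraphQL's exact-key lookup for `$input` finds nothing. A tiny stdlib-only illustration (names hypothetical):

```python
# The variables dict as written in the test case:
bad_variables = {
    "username": "testuser",
    "inuput": {"id": 1},   # typo: the mutation declares "$input"
}

# With the key spelled correctly, "$input" can be resolved:
fixed_variables = {"input": {"id": 1}}

print("input" in bad_variables)    # -> False: "$input" is never provided
print("input" in fixed_variables)  # -> True
```

With the corrected dict, `self.client.execute(mutation, variables)` should be able to bind `$input` (check the `variables` argument name against your graphene-django version).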
closed
2021-07-23T05:08:48Z
2021-07-25T02:32:05Z
https://github.com/flavors/django-graphql-jwt/issues/278
[]
ayding11
2
pallets-eco/flask-wtf
flask
205
Is recaptcha v2 supported?
It seems recaptcha v2 is not available yet. Is someone working on it?
closed
2015-10-25T10:45:07Z
2021-05-28T01:03:50Z
https://github.com/pallets-eco/flask-wtf/issues/205
[ "recaptcha" ]
synergetic
1
Sanster/IOPaint
pytorch
607
image size issue
In batch processing there is no preset batch size; images are processed one at a time, so there is no parallelism involved. Why can an image of the same size be processed in the WebUI, but run out of GPU memory in batch processing mode? In the source code, the WebUI and batch processing should call the same API.
closed
2024-12-06T08:06:19Z
2025-01-21T01:58:02Z
https://github.com/Sanster/IOPaint/issues/607
[ "stale" ]
zzhanghj
2
pydantic/pydantic
pydantic
10,821
AttributeError raised in property as masked and reported as a pydantic error instead
### Initial Checks - [X] I confirm that I'm using Pydantic V2 ### Description When a pydantic model has property attributes that do computation, and any of those computations may result in an `AttributeError`, this error is masked as if it is a pydantic `__getattr__` error. This is a confusing result mainly because the actual cause of the `AttributeError` becomes extremely opaque and hard to chase down. This issue was uncovered in #10810 after much goose chasing. It would be good if pydantic were able to distinguish between `AttributeErrors` caused by actual inability to access the model attribute vs `AttributeErrors` raised from within a calculated field. ### Example Code ```Python import traceback import pydantic class Model(pydantic.BaseModel): model_config = pydantic.ConfigDict(extra='allow') main: str @property def something(self) -> str: raise AttributeError instance = Model.model_validate(dict(main='test')) try: instance.something except AttributeError: traceback.print_exc() instance = Model.model_validate(dict(main='test', another='test')) try: instance.something except AttributeError: traceback.print_exc() ``` ### Python, Pydantic & OS Version ```Text pydantic version: 2.9.2 pydantic-core version: 2.23.4 pydantic-core build: profile=release pgo=false install path: /home/jmassucco/devel/monorepo/dist/export/python/virtualenvs/python-default/3.8.16/lib/python3.8/site-packages/pydantic python version: 3.8.16 (default, Jun 5 2024, 17:31:41) [GCC 9.4.0] platform: Linux-5.4.0-195-generic-x86_64-with-glibc2.2.5 related packages: fastapi-0.115.2 pydantic-extra-types-2.9.0 mypy-1.12.0 pydantic-settings-2.6.0 typing_extensions-4.12.2 commit: unknown ```
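The masking itself is plain Python attribute protocol rather than anything pydantic-specific: when a property getter raises `AttributeError`, attribute lookup falls back to `__getattr__` (which pydantic defines to serve extra fields), and the fallback's generic message replaces the real one. A pydantic-free sketch of the mechanism:

```python
class Masking:
    # Stand-in for pydantic's extra-field fallback
    def __getattr__(self, name):
        raise AttributeError(f"'Masking' object has no attribute {name!r}")

    @property
    def something(self):
        # The real cause, buried in a computed attribute
        raise AttributeError("the real cause, buried in a computation")

try:
    Masking().something
except AttributeError as exc:
    # The property's AttributeError triggers the __getattr__ fallback,
    # so only the generic message surfaces:
    print(exc)   # -> 'Masking' object has no attribute 'something'
```

The original exception is still reachable as `exc.__context__`, which is one way to chase these down while the masking stands.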
closed
2024-11-12T10:31:51Z
2024-11-12T20:22:08Z
https://github.com/pydantic/pydantic/issues/10821
[ "bug V2", "pending" ]
jmassucco17
2
biosustain/potion
sqlalchemy
12
Add Blueprint support
closed
2015-01-15T13:49:04Z
2015-12-02T10:10:21Z
https://github.com/biosustain/potion/issues/12
[]
lyschoening
0
comfyanonymous/ComfyUI
pytorch
7,172
Problem with image generation with simple workflow
### Your question Hi, I'm a new ComfyUI user. I installed the program, nodes and models, and during my first attempts to generate an image, unfortunately I get noise instead of a proper result. What could be the reason? I'll add that during JPG generation no information appears in Comfy. I have a computer with an NVIDIA GeForce RTX 4070 Ti graphics card. ![Image](https://github.com/user-attachments/assets/cf829ad3-4d51-41cd-b6c1-fec33948ef49) ### Logs ```powershell ``` ### Other _No response_
open
2025-03-10T13:16:14Z
2025-03-10T16:23:13Z
https://github.com/comfyanonymous/ComfyUI/issues/7172
[ "User Support" ]
dawmil
4
ageitgey/face_recognition
machine-learning
874
512 points using Euclidean distance
Hello @ageitgey 1. I have face encodings (512 points extracted from insightface) for one face and I want to calculate the Euclidean distance between them using this code, any idea? I want to build face recognition using those encodings because they are accurate for detecting faces. To find the Euclidean distance I use `dists = distance.euclidean(f1, f2)` from scipy.spatial and `np.linalg.norm(f1-f2)` from numpy, and got the same output `0.026715927`. 2. This was for 2 images; how can I do this for 100 images?
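For point 2, the usual approach is to compute the distance from a probe encoding to every stored encoding and take the minimum; a stdlib-only sketch using toy 4-dimensional vectors in place of the 512-dimensional insightface embeddings (all names illustrative):

```python
import math

def euclidean(a, b):
    # Same result as scipy's distance.euclidean / np.linalg.norm(a - b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy stand-ins for the encodings of many images:
encodings = {f"img{i}": [0.01 * i] * 4 for i in range(5)}
probe = [0.0] * 4

# Distance from one probe face to every stored face:
distances = {name: euclidean(probe, enc) for name, enc in encodings.items()}
best = min(distances, key=distances.get)
print(best)   # -> img0 (the closest match)
```

For 100 images the loop is the same; only the dict of encodings grows, and a distance threshold can be applied to `distances[best]` to reject non-matches.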
open
2019-07-05T15:05:37Z
2019-07-05T15:17:33Z
https://github.com/ageitgey/face_recognition/issues/874
[]
tomriddle54
0
pydata/xarray
pandas
9,871
layout of tests
### What is your issue? Currently the tests are stored in a flat on-disk layout, but some test files are _logically_ grouped, e.g. the relationship between `test_backends_api.py` and `test_backends_common.py` could be expressed via a directory structure, like `test/backends/test_api.py` and `test/backends/test_common.py`. Relatedly, some of the test files are a bit long -- [`test_backends.py`](https://github.com/pydata/xarray/blob/main/xarray/tests/test_backends.py) is 6500 lines, and it contains tests for zarr and netcdf (and maybe other things). If someone is just working on the zarr side of things, then the netcdf tests are thousands of lines of clutter that could in principle be entirely contained in a separate test file. So my proposal would be to judiciously group logically related tests into modules , e.g. `test_backends` would be one, and also split large test files into smaller independent components, e.g. `test_backends/test_zarr.py` would just test the zarr backend stuff. Does this seem reasonable?
open
2024-12-10T09:11:54Z
2025-03-21T19:25:22Z
https://github.com/pydata/xarray/issues/9871
[]
d-v-b
2
ets-labs/python-dependency-injector
asyncio
574
`@inject` breaks `inspect.iscoroutinefunction`
After upgrade to 4.39.0, while `asyncio.coroutines.iscoroutinefunction` is preserved, `inspect.iscoroutinefunction` is not -- tested on Python 3.10.4: ```python from dependency_injector.wiring import inject import inspect async def foo(): pass @inject async def bar(): pass print(inspect.iscoroutinefunction(foo)) print(inspect.iscoroutinefunction(bar)) ``` This e.g. makes FastAPI incorrectly recognize `bar` as a "normal" function when used with `dep = fastapi.Depends(bar)` syntax in router dependencies.
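The breakage can be reproduced with a plain decorator, independent of dependency-injector: a sync `def` wrapper hides the coroutine-ness from `inspect`, while wrapping in an `async def` preserves it (a sketch of the failure mode, not the library's actual implementation):

```python
import functools
import inspect

async def foo():
    pass

def sync_wrapping_inject(fn):
    # A plain-function wrapper: inspect.iscoroutinefunction sees a normal def
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

def async_wrapping_inject(fn):
    # An async wrapper keeps inspect.iscoroutinefunction returning True
    @functools.wraps(fn)
    async def wrapper(*args, **kwargs):
        return await fn(*args, **kwargs)
    return wrapper

print(inspect.iscoroutinefunction(sync_wrapping_inject(foo)))   # -> False
print(inspect.iscoroutinefunction(async_wrapping_inject(foo)))  # -> True
```

This is also why frameworks like FastAPI, which rely on `inspect`, misclassify the decorated function even though `asyncio`'s (legacy marker-based) check still passes.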
closed
2022-03-29T12:00:24Z
2022-03-30T08:03:09Z
https://github.com/ets-labs/python-dependency-injector/issues/574
[ "bug" ]
burritoatspoton
3
iterative/dvc
machine-learning
10,333
DVC fails to read path to python.exe if forward slashes "/" are used on Windows
# Bug Report dvc repro fails to read path/to/python.exe on Windows because of forward slashes "/". There is no problem with path/to/my_script.py ## Issue name repro: failed to reproduce 'test': "'.venv' is not recognized as an internal or external command. ERROR: failed to run: .venv/Scripts/python.exe script/test.py" ## Description I try to use another python executable originating in my virtual environment. I prefer to use forward slashes, as sometimes I want to test them also in WSL. Even though I can run the command line with forward slashes, dvc repro requires backslashes. ### Reproduce ```powershell git init -q dvc init -q python -m venv .venv dvc stage add -n test .venv/Scripts/python.exe script/test.py dvc repro # won't work dvc stage add -f -n test .venv\Scripts\python.exe script/test.py dvc repro # will work # This will also work .venv/Scripts/python.exe script/test.py ``` ### Expected I expect a Linux OS not to run with backslashes, but Windows doesn't have any issue with either backslashes or forward slashes in the path. ### Environment information dvc == 3.26.0
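A portable workaround while this stands is to normalize the interpreter path for the host OS before writing the stage; `pathlib` makes the two spellings explicit (a sketch, not DVC's internals):

```python
from pathlib import PurePosixPath, PureWindowsPath

cmd = ".venv/Scripts/python.exe"

# On Windows cmd.exe, the first token of a command must use backslashes:
print(PureWindowsPath(cmd))   # -> .venv\Scripts\python.exe

# The same source string stays usable on WSL/Linux:
print(PurePosixPath(cmd))     # -> .venv/Scripts/python.exe
```

So a stage-generation script could keep forward slashes in its own config and emit the OS-appropriate form only when calling `dvc stage add`.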
closed
2024-03-01T20:03:27Z
2024-03-04T08:10:10Z
https://github.com/iterative/dvc/issues/10333
[ "awaiting response" ]
HarryKalantzopoulos
6
twopirllc/pandas-ta
pandas
285
Numpy ImportError: cannot import name 'sliding_window_view' from 'numpy.lib.stride_tricks'
**Which version are you running? The lastest version is on Github. Pip is for major releases.** version v0.2.75 Running Windows 10 **Describe the bug** I just installed the version 0.2.75 from github by downloading the .zip file, then installed using pip3 install pandas-ta-master.zip Received a notification that I didn't have 'wheels' installed so it used legacy method of install, but installation was successful. But when I tried to add the library I get the error shown below. I uninstalled pandas-ta, then I installed wheels. I then reinstalled pandas-ta successfully: ```sh C:\Users\chuck\Downloads>pip3 install pandas-ta-master.zip <snip>a bunch of installation details....</snip> Successfully built pandas-ta Installing collected packages: pandas-ta Successfully installed pandas-ta-0.2.75b0 C:\Users\chuck\Downloads> ``` === Below is the result of simply trying to import the library ===== ```python >>> import pandas_ta as pta Traceback (most recent call last): File "<pyshell#0>", line 1, in <module> import pandas_ta as pta File "C:\Users\chuck\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas_ta\__init__.py", line 116, in <module> from pandas_ta.core import * File "C:\Users\chuck\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas_ta\core.py", line 4, in <module> from pandas_ta.candles.cdl_pattern import ALL_PATTERNS File "C:\Users\chuck\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas_ta\candles\__init__.py", line 2, in <module> from .cdl_doji import cdl_doji File "C:\Users\chuck\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas_ta\candles\cdl_doji.py", line 2, in <module> from pandas_ta.overlap import sma File "C:\Users\chuck\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas_ta\overlap\__init__.py", line 6, in <module> from .hilo import hilo File "C:\Users\chuck\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas_ta\overlap\hilo.py", line 4, in <module> from .ma import ma File 
"C:\Users\chuck\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas_ta\overlap\ma.py", line 8, in <module> from .linreg import linreg File "C:\Users\chuck\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas_ta\overlap\linreg.py", line 6, in <module> from numpy.lib.stride_tricks import sliding_window_view ImportError: cannot import name 'sliding_window_view' from 'numpy.lib.stride_tricks' (C:\Users\chuck\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\lib\stride_tricks.py) >>> ```
closed
2021-05-06T18:57:19Z
2021-07-28T21:25:23Z
https://github.com/twopirllc/pandas-ta/issues/285
[ "duplicate", "info" ]
ctilly
18
plotly/dash
data-science
3,007
Pattern matching callbacks do not warn if no matches exist
**Describe your context** Please provide us your environment, so we can easily reproduce the issue. - replace the result of `pip list | grep dash` below ``` dash 2.18.1 A Python framework ... dash-bootstrap-components 1.6.0 Bootstrap themed co... dash-core-components 2.0.0 Core component suit... dash-html-components 2.0.0 Vanilla HTML compon... dash-table 5.0.0 Dash table ``` **Describe the bug** When registering a pattern-matching callback, no warning is issued if the pattern does not match the ID of any of the DOM elements. For example if we create a button with an ID like this: ```python id={ "type": "delete-list-item", "deletion-target": "some_path", "extras": "..." } ``` and then define a callback that only defines 2/3 of the keys present in the `id` dict: ```python @app.callback( Input( {"type": "delete-list-item", "deletion-target": "some_path", }, "n_clicks" ), ) def delete_list_item(n_clicks): print(n_clicks) ``` The callback does not get attached to any element but does show up on the dev tools page. **Expected behavior** Under the default conditions (`app.config.suppress_callback_exceptions = False`), a warning should be emitted when no matching `id`s are found. **Screenshots** Not needed
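For background on why this is silent: Dash matches a pattern dict against component ids key-by-key, with `ALL`/`MATCH` acting as value wildcards, so a pattern that omits one of the id's keys simply matches nothing. A toy model of that rule (a sketch of the matching idea, not Dash's actual implementation):

```python
ALL = object()  # stand-in for dash.dependencies.ALL

def pattern_matches(pattern: dict, component_id: dict) -> bool:
    # Sketch of Dash's rule: the key sets must be identical, and every
    # non-wildcard value must match exactly.
    if pattern.keys() != component_id.keys():
        return False
    return all(v is ALL or component_id[k] == v for k, v in pattern.items())

cid = {"type": "delete-list-item", "deletion-target": "some_path", "extras": "..."}

# Two of the three keys -> never matches, and no warning is raised:
print(pattern_matches(
    {"type": "delete-list-item", "deletion-target": "some_path"}, cid))  # -> False

# All three keys, wildcarding the one we don't care about -> matches:
print(pattern_matches(
    {"type": "delete-list-item", "deletion-target": "some_path", "extras": ALL}, cid))  # -> True
```

A key-set check like the first branch above is exactly the place where the requested warning could be emitted at callback-registration time.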
open
2024-09-19T02:06:23Z
2024-09-23T14:31:26Z
https://github.com/plotly/dash/issues/3007
[ "bug", "P3" ]
farhanhubble
0
PrefectHQ/prefect
data-science
17,434
Cannot deploy flow from within a flow
### Bug summary Related to / manifestation of #15008: ``` from prefect import flow from util.prefect import hello_flow # just some flow @flow def deploy(): hello_flow.deploy( name="my-deployment", work_pool_name="data-default", build=False, _sync=True, ) if __name__ == "__main__": deploy() ``` fails with ``` File ~/model/.venv/lib/python3.10/site-packages/prefect/flows.py:1496, in Flow.deploy(self, name, work_pool_name, image, build, push, work_queue_name, job_variables, interval, cron, rrule, paused, schedule, schedules, concurrency_limit, triggers, parameters, description, tags, version, enforce_parameter_schema, entrypoint_type, print_next_steps, ignore_warnings, _sla) 1493 if TYPE_CHECKING: 1494 assert inspect.isawaitable(to_deployment_coro) -> 1496 deployment = await to_deployment_coro 1498 from prefect.deployments.runner import deploy 1500 deploy_coro = deploy( 1501 deployment, 1502 work_pool_name=work_pool_name, (...) 1507 ignore_warnings=ignore_warnings, 1508 ) TypeError: object RunnerDeployment can't be used in 'await' expression ``` ### Version info ```Text Version: 3.2.11 API version: 0.8.4 Python version: 3.10.16 Git commit: 9481694f Built: Wed, Mar 5, 2025 10:00 PM OS/Arch: darwin/arm64 Profile: default Server type: cloud Pydantic version: 2.10.6 Integrations: prefect-gcp: 0.6.2 prefect-dask: 0.3.3 prefect-kubernetes: 0.5.3 ``` ### Additional context _No response_
closed
2025-03-10T14:11:01Z
2025-03-11T14:34:41Z
https://github.com/PrefectHQ/prefect/issues/17434
[ "bug" ]
bnaul
4
giotto-ai/giotto-tda
scikit-learn
2
Building manylinux wheels
From @rth: Currently wheels are built based on the [ubuntu-16.04 image](https://github.com/giotto-learn/giotto-learn/blob/fefbd7c058421a9250ac017cfe796d6655e4ac56/azure-pipelines.yml#L4) which means that they would work with that Ubuntu version and later, but probably not other linux distributions. A way to make wheels that would work for any linux distribution is with manylinux wheels https://github.com/pypa/manylinux. Traditionally, manylinux1 wheels were built with CentOS 5, but a more recent manylinux2010 standard (CentOS 6) was recently adopted. Docker images are available and could be used in that same ubuntu VM.
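In Azure Pipelines this usually means running the compile step inside the manylinux container rather than directly on the VM image; roughly (the mount path and script name are illustrative, not the project's actual setup):

```yaml
# azure-pipelines.yml fragment (sketch)
- script: |
    docker run --rm -v $(Build.SourcesDirectory):/io quay.io/pypa/manylinux2010_x86_64 \
      /io/build-wheels.sh   # e.g. pip wheel + auditwheel repair inside the container
  displayName: Build manylinux2010 wheels
```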
closed
2019-10-15T07:52:10Z
2019-10-22T11:42:45Z
https://github.com/giotto-ai/giotto-tda/issues/2
[]
gtauzin
1
apragacz/django-rest-registration
rest-api
44
User verification works multiple times
Shouldn't the link that gets sent to activate an account only work once? Seems especially relevant when having `'REGISTER_VERIFICATION_AUTO_LOGIN': True`
closed
2019-04-19T09:33:32Z
2020-04-25T07:52:55Z
https://github.com/apragacz/django-rest-registration/issues/44
[ "type:bug", "priority:high" ]
kris7ian
3