| column | dtype | range |
|---|---|---|
| repo_name | stringlengths | 9-75 |
| topic | stringclasses | 30 values |
| issue_number | int64 | 1-203k |
| title | stringlengths | 1-976 |
| body | stringlengths | 0-254k |
| state | stringclasses | 2 values |
| created_at | stringlengths | 20-20 |
| updated_at | stringlengths | 20-20 |
| url | stringlengths | 38-105 |
| labels | listlengths | 0-9 |
| user_login | stringlengths | 1-39 |
| comments_count | int64 | 0-452 |
mwaskom/seaborn
data-science
2,788
[Feature] Multiple rugplots on same plot
Hi there,

Is there a way to add multiple rugplots to the same figure? This would be useful when having unique values on one axis and multiple grouping variables to color by. I understand this is achievable with shapes/colors for groups of low cardinality, but I believe it can quickly get confusing. Currently, when calling `rugplot` twice, the second `rugplot` draws over the first one.

Example:

```python
import string
import random

import numpy as np
import pandas as pd
import seaborn as sns

N = 20
groups_a = random.choices(string.ascii_letters[0:5], k=N)
groups_b = random.choices(string.ascii_uppercase[0:5], k=N)
Y = np.random.random(N)
X = range(N)
df = pd.DataFrame.from_dict({"X": X, "Y": Y, "A": groups_a, "B": groups_b})

sns.scatterplot(data=df, x="X", y="Y")
sns.rugplot(data=df, x="X", hue="A", linewidth=12)
sns.rugplot(data=df, x="X", hue="B", linewidth=12)
```

This yields:

![image](https://user-images.githubusercontent.com/35219306/166219583-2f49e0fb-8ad6-402b-a59c-0941fc82fe66.png)

Expected output would be something like:

![image](https://user-images.githubusercontent.com/35219306/166219606-ee33c480-9633-44d3-8c56-f359d1592867.png)

(of course with more modifications of cmaps etc. to make the plot more clear)
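Until something like the expected output is supported, the stacked-rug layout can be approximated directly in matplotlib. The sketch below is illustrative only (synthetic integer groups stand in for the `A`/`B` hue columns, and it bypasses seaborn entirely): each grouping variable gets its own band of tick marks along the bottom edge, drawn with a blended transform so x stays in data coordinates while y is in axes coordinates.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripting
import matplotlib.pyplot as plt
from matplotlib.transforms import blended_transform_factory

rng = np.random.default_rng(0)
N = 20
x = np.arange(N)
groups_a = rng.integers(0, 5, N)  # stand-ins for the "A" and "B" hue columns
groups_b = rng.integers(0, 5, N)

fig, ax = plt.subplots()
ax.scatter(x, rng.random(N))

# x in data coordinates, y in axes coordinates, so the ticks hug the bottom edge
trans = blended_transform_factory(ax.transData, ax.transAxes)
for row, (groups, cmap) in enumerate([(groups_a, plt.cm.tab10),
                                      (groups_b, plt.cm.Set2)]):
    y0 = 0.04 * row  # each grouping variable gets its own band of ticks
    for xi, gi in zip(x, groups):
        ax.plot([xi, xi], [y0, y0 + 0.03], transform=trans,
                color=cmap(int(gi)), linewidth=3, clip_on=False)
```

Two distinct colormaps keep the two grouping variables visually separate, mirroring the mockup in the issue.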
closed
2022-05-02T10:20:48Z
2022-05-02T13:18:32Z
https://github.com/mwaskom/seaborn/issues/2788
[]
jeskowagner
8
vllm-project/vllm
pytorch
14,789
[Bug]: Clarification on LoRA Support for Gemma3ForConditionalGeneration
### Your current environment

<img width="886" alt="Image" src="https://github.com/user-attachments/assets/056cca27-67e9-4399-9411-cffa94760c04" />

### 🐛 Describe the bug

Hey vLLM Team,

I’d like to clarify the LoRA support for `Gemma3ForConditionalGeneration`. The [supported models documentation](https://docs.vllm.ai/en/latest/models/supported_models.html) states that this model supports LoRA, but after reviewing the code, it doesn’t seem to have LoRA support implemented.

Could you confirm if LoRA is indeed supported for this model, or if the documentation needs an update? Thanks!

![Image](https://github.com/user-attachments/assets/5e6a0a7c-5720-4814-bf2c-5fe5eb6c0d20)

<img width="597" alt="Image" src="https://github.com/user-attachments/assets/eeb1d626-d84d-4f23-ae0b-44587ff77544" />

### Before submitting a new issue...

- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
closed
2025-03-14T01:37:07Z
2025-03-15T01:21:18Z
https://github.com/vllm-project/vllm/issues/14789
[ "bug" ]
angkywilliam
2
holoviz/panel
plotly
6,822
Panel ChatInterface examples should not use TextAreaInput
I think `pn.chat.ChatAreaInput` should be mentioned here: https://panel.holoviz.org/reference/chat/ChatInterface.html
closed
2024-05-10T17:22:00Z
2024-05-13T19:24:15Z
https://github.com/holoviz/panel/issues/6822
[ "type: docs" ]
ahuang11
0
plotly/dash
dash
2,221
[BUG] Background callbacks with different outputs not working
**Describe your context**

Please provide us your environment, so we can easily reproduce the issue.

- replace the result of `pip list | grep dash` below

```
dash                      2.6.1
dash-bootstrap-components 1.2.0
dash-core-components      2.0.0
dash-html-components      2.0.0
dash-table                5.0.0
```

**Describe the bug**

Background callbacks don't work if they are generated with a function/loop. For example, I've created a function `gen_callback` that creates a new callback given a CSS id:

```python
def gen_callback(css_id, x):
    @app.callback(
        Output(css_id, 'children'),
        Input('my-dropdown', 'value'),
        background=True,
    )
    def callback_name(value):
        print(f"Inside callback_name for {css_id}")
        return int(value) + x
```

The background callback manager uses a hash function to know the key to get. This hash function takes into account only the code of the function, but not the variables used to generate that function.

https://github.com/plotly/dash/blob/c897b2b094543e930a862fe51d48abfb78057df7/dash/long_callback/managers/__init__.py#L101-L105

I think the hash should also take into account the `Output` list of the callback.

**Expected behavior**

Using a variable as the id of one of the outputs creates multiple different, valid callbacks that should work fine with `background=True`.

**Possible temporary solution**

Generate a function with `exec()`, replacing parts of a template code, and decorate the compiled function.
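The collision is easy to demonstrate without dash at all: two closures produced by the same factory share identical bytecode, so any key derived from the function body alone cannot tell them apart. The sketch below (names are illustrative, not dash's real API) shows the proposed fix of mixing the `Output` identifiers into the key:

```python
import hashlib

def make_cache_key(func, outputs):
    # hashing only the function body: closures from the same factory
    # share bytecode, so that alone is not a unique key
    h = hashlib.sha1(func.__code__.co_code)
    # mixing in the Output identifiers (the proposed fix) separates them
    h.update(repr(sorted(outputs)).encode())
    return h.hexdigest()

def gen_callback(css_id, x):
    def callback(value):
        return int(value) + x
    return callback, [f"{css_id}.children"]

cb_a, out_a = gen_callback("graph-a", 1)
cb_b, out_b = gen_callback("graph-b", 2)
```

Here `cb_a` and `cb_b` have byte-identical code objects, so a code-only hash would assign both callbacks the same key; including the outputs makes the keys distinct.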
closed
2022-09-08T08:05:57Z
2023-06-25T22:28:46Z
https://github.com/plotly/dash/issues/2221
[]
daviddavo
5
koxudaxi/datamodel-code-generator
pydantic
1,541
Issue with generating model from JSON Compound Schema
**Describe the bug**

Trying to generate a pydantic model from a compound JSON schema fails and returns the following error (just the end of the otherwise very long message):

`yaml.scanner.ScannerError: mapping values are not allowed in this context in "<unicode string>", line 11, column 25`

**To Reproduce**

Taken from the [json-schema doc](https://json-schema.org/understanding-json-schema/structuring.html). Example schema:

```json
{
  "$id": "https://example.com/schemas/customer",
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "first_name": { "type": "string" },
    "last_name": { "type": "string" },
    "shipping_address": { "$ref": "/schemas/address" },
    "billing_address": { "$ref": "/schemas/address" }
  },
  "required": ["first_name", "last_name", "shipping_address", "billing_address"],
  "$defs": {
    "address": {
      "$id": "/schemas/address",
      "$schema": "http://json-schema.org/draft-07/schema#",
      "type": "object",
      "properties": {
        "street_address": { "type": "string" },
        "city": { "type": "string" },
        "state": { "$ref": "#/definitions/state" }
      },
      "required": ["street_address", "city", "state"],
      "definitions": {
        "state": { "enum": ["CA", "NY", "... etc ..."] }
      }
    }
  }
}
```

Used command line:

```
$ datamodel-codegen --input .\json_schema\test.json --input-file-type jsonschema --output test.py
```

**Expected behavior**

Expected the generation of a pydantic model.

**Version:**

- OS: Windows 11 22H2
- Python version: 3.11.4
- datamodel-code-generator version: 0.21.4

**Additional context**

A similar issue was described in "jsonschema $ref parsed as yaml?" #564; however, I tried to replace the base URI with a blank value and now I get a FileNotFound error when referring to the sub `$id`.
closed
2023-09-11T01:16:23Z
2023-11-19T17:05:54Z
https://github.com/koxudaxi/datamodel-code-generator/issues/1541
[ "enhancement" ]
clementboutaric2
2
erdewit/ib_insync
asyncio
250
How to get the pre-market-price?
I would like to get the pre-market price, i.e. the price between 8:00 and 9:30. However, I don't know how to retrieve that price. Could you please point me in the right direction? I really appreciate your help and look forward to your reply. Thanks.
closed
2020-05-04T15:21:24Z
2020-05-07T10:28:40Z
https://github.com/erdewit/ib_insync/issues/250
[]
lovetrading10
1
onnx/onnx
pytorch
6,579
Undefined Symbol when importing onnx
# Bug Report

### Is the issue related to model conversion?

No.

### Describe the bug

I am trying to build onnx on ppc64le arch with external protobuf and with shared libraries. When I install the onnx wheel and try importing it, I see undefined-symbol errors:

```
>>> import onnx
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/builder/dev1/lib64/python3.12/site-packages/onnx/__init__.py", line 77, in <module>
    from onnx.onnx_cpp2py_export import ONNX_ML
ImportError: /home/builder/new_scripts/onnx/onnx/.setuptools-cmake-build/libonnx.so: undefined symbol: _ZN4onnx25TensorProto_DataType_NameB5cxx11ENS_20TensorProto_DataTypeE
```

### System information

- OS Platform and Distribution: ppc64le
- ONNX version: v1.17.0
- Python version: 3.12
- GCC/Compiler version (if compiling from source): GCC 13
- CMake version: 3.31.1
- Protobuf version: 4.25.3
- Visual Studio version (if applicable): NA

### Reproduction instructions

<!-- - Describe the code to reproduce the behavior.
```
import onnx
model = onnx.load('model.onnx')
...
```
- Attach the ONNX model to the issue (where applicable) -->

### Expected behavior

Importing onnx should succeed, and `onnx.__version__` should show the version.

### Notes

Listing my cmake args:

```
export CMAKE_ARGS="${CMAKE_ARGS} -DCMAKE_INSTALL_PREFIX=$ONNX_PREFIX"
export CMAKE_ARGS="${CMAKE_ARGS} -DCMAKE_AR=${AR}"
export CMAKE_ARGS="${CMAKE_ARGS} -DCMAKE_LINKER=${LD}"
export CMAKE_ARGS="${CMAKE_ARGS} -DCMAKE_NM=${NM}"
export CMAKE_ARGS="${CMAKE_ARGS} -DCMAKE_OBJCOPY=${OBJCOPY}"
export CMAKE_ARGS="${CMAKE_ARGS} -DCMAKE_OBJDUMP=${OBJDUMP}"
export CMAKE_ARGS="${CMAKE_ARGS} -DCMAKE_RANLIB=${RANLIB}"
export CMAKE_ARGS="${CMAKE_ARGS} -DCMAKE_STRIP=${STRIP}"
export CMAKE_ARGS="${CMAKE_ARGS} -DBUILD_SHARED_LIBS=ON"
export CMAKE_ARGS="${CMAKE_ARGS} -DONNX_BUILD_SHARED_LIBS=ON"
export CMAKE_ARGS="${CMAKE_ARGS} -DCMAKE_CXX_STANDARD=17"
export CMAKE_ARGS="${CMAKE_ARGS} -DONNX_USE_PROTOBUF_SHARED_LIBS=ON"
export CMAKE_ARGS="${CMAKE_ARGS} -DONNX_USE_LITE_PROTO=ON"
export CMAKE_ARGS="${CMAKE_ARGS} -DProtobuf_PROTOC_EXECUTABLE=$ENV_PREFIX/bin/protoc -DProtobuf_LIBRARY=$ENV_PREFIX/lib/libprotobuf.so"
export CMAKE_ARGS="${CMAKE_ARGS} -DCMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH"
```

Creating the wheel using `python -m pip wheel -w dist -vv --no-build-isolation --no-deps .`
open
2024-12-09T13:01:23Z
2024-12-09T13:01:23Z
https://github.com/onnx/onnx/issues/6579
[ "bug" ]
Aman-Surkar
0
miguelgrinberg/microblog
flask
68
SearchableMixin limited to Post objects
Hi,

At the moment `SearchableMixin` is limited to the `Post` class only. Could it be made more universal with the following change?

```python
db.case(when, value=cls.id)), total
```

And then the same mixin could be reused with the `User` class:

```python
class User(SearchableMixin, UserMixin, db.Model):
    __searchable__ = ['username']
    ...

db.event.listen(db.session, 'before_commit', User.before_commit)
db.event.listen(db.session, 'after_commit', User.after_commit)
```
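The core of the proposal is replacing a hard-coded `Post` reference with `cls`, so the mixin binds to whichever model inherits it. As a minimal, SQLAlchemy-free sketch of that classmethod pattern (the `index_name` helper is illustrative, not part of the microblog code):

```python
class SearchableMixin:
    @classmethod
    def index_name(cls):
        # cls resolves to the inheriting model, so one mixin can serve
        # Post, User, or any other searchable class
        return cls.__name__.lower()

class Post(SearchableMixin):
    __searchable__ = ["body"]

class User(SearchableMixin):
    __searchable__ = ["username"]
```

With this pattern, `Post.index_name()` returns `"post"` and `User.index_name()` returns `"user"`, which is exactly why `cls.id` generalizes where `Post.id` does not.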
closed
2018-01-11T13:24:29Z
2018-03-06T01:32:58Z
https://github.com/miguelgrinberg/microblog/issues/68
[ "bug" ]
avoidik
1
healthchecks/healthchecks
django
1,132
Additional blank checks are created through API?
Hi,

I'm having this odd issue and I could use a bit of a sanity check. I create checks like this:

```bash
curl --silent "$HC_CHECK_URL/api/v3/checks/" \
    --header "X-Api-Key: $HC_API_KEY" \
    --data @- | jq <<-EOF
{
    "name": "${image_local}",
    "slug": "${image_slug}",
    "timeout": 5184000,
    "grace": 3600,
    "tz": "Redacted",
    "channels": "Redacted",
    "unique": ["slug"]
}
EOF
```

And this seems to trigger the correct check slug. I see a new event/ping there, but it also creates these empty checks.

To note, the check name (`image_local` variable) changes on every build, but the slug (`image_slug`) doesn't change. I've noticed that after the initial check creation, new calls to the same check don't update the name. Is this by design?

![Image](https://github.com/user-attachments/assets/1895c096-5125-4c9a-9c3a-0c7a8a842adc)
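One thing worth ruling out is the heredoc itself: if a shell variable expands to something that breaks the JSON (quotes, newlines, or an empty value), the API receives a body different from the one intended. Building the payload programmatically sidesteps that class of problem. A sketch (field values copied from the curl call above, not a recommendation of specific settings):

```python
import json

def build_check_payload(name, slug):
    # json.dumps escapes quotes and newlines inside the variables,
    # which a raw shell heredoc does not
    return json.dumps({
        "name": name,
        "slug": slug,
        "timeout": 5184000,
        "grace": 3600,
        "unique": ["slug"],
    })

payload = build_check_payload('image "latest"', "my-image")
```

The resulting string is guaranteed to be valid JSON even when the name contains quotes, which a template-substituted heredoc cannot promise.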
closed
2025-03-05T23:15:04Z
2025-03-05T23:54:03Z
https://github.com/healthchecks/healthchecks/issues/1132
[]
rwjack
2
danimtb/dasshio
dash
109
support for 'dash wand'?
Hello there,

I've been using the Dash Buttons in my brand-new HA setup for a few weeks now. Then I learned about the Dash Wand and bought one on eBay, but had no success with the provided config URL you've posted here:

`http://192.168.0.1/?amzn_ssid=<SSID>&amzn_pw=<PASSWORD>`

Google also has no clue. Could anyone lead me in the right direction? Is this perhaps a future feature?
closed
2022-03-12T01:24:39Z
2023-06-12T07:20:02Z
https://github.com/danimtb/dasshio/issues/109
[]
sebaschn
1
kubeflow/katib
scikit-learn
2,132
After updating to version 0.15.0, the name argument in KatibClient().get_success_trial_details() has disappeared.
/kind bug

**What steps did you take and what happened:**

The code below, which worked well in version 0.14.0, fails because of the `name` parameter in version 0.15.0.

```python
trial_details_log = kclient.get_success_trial_details(
    name=experiment, namespace=namespace
)
```

**What did you expect to happen:**

I wonder if this was done intentionally in version 0.15.0 or if a bug fix is needed. Is it something that will be changed in a future update?

- Katib version (check the Katib controller image version): 0.15.0
- Kubernetes version (`kubectl version`): v1.21.11
- OS (`uname -a`): `Linux control-plane.minikube.internal 5.4.0-144-generic #161~18.04.1-Ubuntu SMP Fri Feb 10 15:55:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux`

---

<!-- Don't delete this message to encourage users to support your issue! -->

Impacted by this bug? Give it a 👍 We prioritize the issues with the most 👍
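On the SDK-consumer side, until the intended signature is confirmed, one defensive pattern for code that must run against both 0.14.0 and 0.15.0 is to filter keyword arguments against the target function's actual signature. This is a generic sketch — `new_api` below is a hypothetical stand-in, not Katib's real function:

```python
import inspect

def call_with_supported_kwargs(func, **kwargs):
    # drop keyword arguments the target function no longer accepts,
    # useful when pinning the SDK version is not an option
    params = inspect.signature(func).parameters
    accepted = {k: v for k, v in kwargs.items() if k in params}
    return func(**accepted)

def new_api(experiment_name, namespace="default"):
    # hypothetical renamed signature in a newer SDK release
    return experiment_name, namespace

result = call_with_supported_kwargs(new_api, name="exp", experiment_name="exp")
```

Here the stale `name` keyword is silently dropped and only `experiment_name` reaches the function; the trade-off is that typos in keyword names are also silently dropped, so this belongs in compatibility shims, not general code.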
closed
2023-03-24T00:50:49Z
2023-03-27T00:51:45Z
https://github.com/kubeflow/katib/issues/2132
[ "kind/bug" ]
moey920
3
pydantic/pydantic-ai
pydantic
1,028
Create llms.txt and llms-full.txt
Right now we have only `llms.txt`, but that shows the full docs. We should follow the spec properly, and add them to the hub: https://llmstxthub.com/
open
2025-03-02T10:06:59Z
2025-03-20T12:22:43Z
https://github.com/pydantic/pydantic-ai/issues/1028
[]
Kludex
10
desec-io/desec-stack
rest-api
528
GUI: allow declaring API tokens read-only during creation
It would be useful to have a capability for read-only API tokens. The use case I have is that we'd like to check if all values are up to date via `terraform plan -detailed-exitcode || fail` as part of a CI job. At the moment that could only be done by giving that CI job a token with read/write access. This is a "least authority" concern similar to #347.
open
2021-04-14T13:18:03Z
2024-10-07T17:11:35Z
https://github.com/desec-io/desec-stack/issues/528
[ "enhancement", "help wanted", "gui" ]
Valodim
6
ipython/ipython
data-science
14,113
error with jupyter_notebook_config.json file
**Hi, I face this error with my `jupyter_notebook_config.json` file when I use `jupyter notebook`:**

```
Exception while loading config file /data/Wu_Feizhen/wfz05/.jupyter/jupyter_notebook_config.json
Traceback (most recent call last):
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/traitlets/config/application.py", line 858, in _load_config_files
    config = loader.load_config()
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/traitlets/config/loader.py", line 576, in load_config
    dct = self._read_file_as_dict()
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/traitlets/config/loader.py", line 582, in _read_file_as_dict
    return json.load(f)
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 3 column 13 (char 33)

Traceback (most recent call last):
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/bin/jupyter-notebook", line 11, in <module>
    sys.exit(main())
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/jupyter_core/application.py", line 277, in launch_instance
    return super().launch_instance(argv=argv, **kwargs)
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/traitlets/config/application.py", line 991, in launch_instance
    app.initialize(argv)
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/traitlets/config/application.py", line 113, in inner
    return method(app, *args, **kwargs)
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/notebook/notebookapp.py", line 2169, in initialize
    self.init_server_extension_config()
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/notebook/notebookapp.py", line 2026, in init_server_extension_config
    section = manager.get(self.config_file_name)
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/notebook/services/config/manager.py", line 25, in get
    recursive_update(config, cm.get(section_name))
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/notebook/config_manager.py", line 100, in get
    recursive_update(data, json.load(f))
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 3 column 13 (char 33)
```

**Then I run `jupyter notebook --debug`, and I got this:**

```
Searching ['/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/etc/jupyter', '/data/Wu_Feizhen/wfz05/.jupyter', '/data/Wu_Feizhen/wfz05/.local/etc/jupyter', '/usr/local/etc/jupyter', '/etc/jupyter'] for config files
[D 21:29:21.651 NotebookApp] Looking for jupyter_config in /etc/jupyter
[D 21:29:21.651 NotebookApp] Looking for jupyter_config in /usr/local/etc/jupyter
[D 21:29:21.651 NotebookApp] Looking for jupyter_config in /data/Wu_Feizhen/wfz05/.local/etc/jupyter
[D 21:29:21.651 NotebookApp] Looking for jupyter_config in /data/Wu_Feizhen/wfz05/.jupyter
[D 21:29:21.651 NotebookApp] Looking for jupyter_config in /data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/etc/jupyter
[D 21:29:21.651 NotebookApp] Looking for jupyter_notebook_config in /etc/jupyter
[D 21:29:21.651 NotebookApp] Looking for jupyter_notebook_config in /usr/local/etc/jupyter
[D 21:29:21.651 NotebookApp] Looking for jupyter_notebook_config in /data/Wu_Feizhen/wfz05/.local/etc/jupyter
[D 21:29:21.651 NotebookApp] Looking for jupyter_notebook_config in /data/Wu_Feizhen/wfz05/.jupyter
[D 21:29:21.652 NotebookApp] Loaded config file: /data/Wu_Feizhen/wfz05/.jupyter/jupyter_notebook_config.py
[E 21:29:21.652 NotebookApp] Exception while loading config file /data/Wu_Feizhen/wfz05/.jupyter/jupyter_notebook_config.json
    Traceback (most recent call last):
      File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/traitlets/config/application.py", line 858, in _load_config_files
        config = loader.load_config()
      File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/traitlets/config/loader.py", line 576, in load_config
        dct = self._read_file_as_dict()
      File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/traitlets/config/loader.py", line 582, in _read_file_as_dict
        return json.load(f)
      File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/__init__.py", line 293, in load
        return loads(fp.read(),
      File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/__init__.py", line 346, in loads
        return _default_decoder.decode(s)
      File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/decoder.py", line 337, in decode
        obj, end = self.raw_decode(s, idx=_w(s, 0).end())
      File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/decoder.py", line 355, in raw_decode
        raise JSONDecodeError("Expecting value", s, err.value) from None
    json.decoder.JSONDecodeError: Expecting value: line 3 column 13 (char 33)
[D 21:29:21.653 NotebookApp] Looking for jupyter_notebook_config in /data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/etc/jupyter
[D 21:29:21.653 NotebookApp] Raising open file limit: soft 1024->4096; hard 4096->4096
[D 21:29:21.673 NotebookApp] Paths used for configuration of jupyter_notebook_config: /etc/jupyter/jupyter_notebook_config.json
[D 21:29:21.673 NotebookApp] Paths used for configuration of jupyter_notebook_config: /usr/local/etc/jupyter/jupyter_notebook_config.json
[D 21:29:21.673 NotebookApp] Paths used for configuration of jupyter_notebook_config: /data/Wu_Feizhen/wfz05/.local/etc/jupyter/jupyter_notebook_config.json
[D 21:29:21.673 NotebookApp] Paths used for configuration of jupyter_notebook_config: /data/Wu_Feizhen/wfz05/.jupyter/jupyter_notebook_config.json
Traceback (most recent call last):
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/bin/jupyter-notebook", line 11, in <module>
    sys.exit(main())
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/jupyter_core/application.py", line 277, in launch_instance
    return super().launch_instance(argv=argv, **kwargs)
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/traitlets/config/application.py", line 991, in launch_instance
    app.initialize(argv)
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/traitlets/config/application.py", line 113, in inner
    return method(app, *args, **kwargs)
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/notebook/notebookapp.py", line 2169, in initialize
    self.init_server_extension_config()
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/notebook/notebookapp.py", line 2026, in init_server_extension_config
    section = manager.get(self.config_file_name)
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/notebook/services/config/manager.py", line 25, in get
    recursive_update(config, cm.get(section_name))
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/site-packages/notebook/config_manager.py", line 100, in get
    recursive_update(data, json.load(f))
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/data/Wu_Feizhen/wfz05/anaconda3/envs/jupyter_R/lib/python3.11/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 3 column 13 (char 33)
```

**Please, can anyone help me with this issue? Many thanks!**
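The traceback itself names the culprit: a JSON syntax error at line 3, column 13 of `jupyter_notebook_config.json`. A small helper (a sketch, not part of Jupyter) can pinpoint such errors before launching the server; `JSONDecodeError` exposes the error location via its `lineno`/`colno` attributes.

```python
import json
import pathlib
import tempfile

def find_json_error(path):
    # returns (line, column) of the first syntax error, or None if valid
    try:
        json.loads(pathlib.Path(path).read_text())
        return None
    except json.JSONDecodeError as exc:
        return exc.lineno, exc.colno

# a config with a bare (unquoted) value on line 3 -- a common
# hand-editing mistake that triggers "Expecting value"
bad = '{\n  "NotebookApp": {\n    "open_browser": no\n  }\n}'
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write(bad)

loc = find_json_error(f.name)
```

Running this against the real `~/.jupyter/jupyter_notebook_config.json` would report the same line/column the traceback shows, so the offending value can be fixed directly.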
closed
2023-07-11T13:44:47Z
2023-07-13T12:06:43Z
https://github.com/ipython/ipython/issues/14113
[]
BioVictory
1
AntonOsika/gpt-engineer
python
113
Add deployment instructions to output
Let me preface this by saying I am not a developer, but I have a decent level of technical understanding and understand how code works.

In my instructions for the code I was generating, I said that I wanted step-by-step instructions for deploying the application on a server. In my case, this is a Telegram bot. It added that request to the specification, but did not provide the instructions in the output.

It might be good to have an option available to the user that does not generate code, but allows them to request instructions for installation or some other component of the app delivery.
closed
2023-06-17T16:51:48Z
2023-06-18T19:20:20Z
https://github.com/AntonOsika/gpt-engineer/issues/113
[ "enhancement", "help wanted", "good first issue" ]
clickbrain
3
CorentinJ/Real-Time-Voice-Cloning
python
468
SyntaxError: invalid syntax when running python demo_cli.py
```
$ python demo_cli.py
Traceback (most recent call last):
  File "demo_cli.py", line 2, in <module>
    from utils.argutils import print_args
  File "Real-Time-Voice-Cloning-master/utils/argutils.py", line 22
    def print_args(args: argparse.Namespace, parser=None):
                       ^
SyntaxError: invalid syntax
```
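The `args: argparse.Namespace` annotation in that signature is Python 3 syntax (PEP 3107 parameter annotations); a Python 2 interpreter fails at the colon with exactly this `SyntaxError`, which suggests `python` is resolving to Python 2 here. A quick way to confirm, as a sketch:

```python
import ast
import sys

# parameter annotations parse fine under any Python 3 interpreter...
ast.parse("def print_args(args: Namespace, parser=None):\n    pass")

# ...so if demo_cli.py raises SyntaxError on that line, the `python`
# command is almost certainly resolving to a Python 2 interpreter
print(sys.version_info[:2])
```

Running the script explicitly as `python3 demo_cli.py` (or inside an environment where `python` is Python 3) is the usual fix for this class of error.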
closed
2020-08-04T15:48:21Z
2020-08-04T15:56:43Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/468
[]
biplab1
3
clovaai/donut
nlp
220
Training one common model on two different type of images
I have been trying to train a common model that can extract data from two different types of images. I am getting all the other details, but it is not extracting the ID numbers for these two types of images. Can anyone suggest something to improve the results?
open
2023-07-01T04:16:28Z
2023-07-01T04:56:23Z
https://github.com/clovaai/donut/issues/220
[]
NavneetSingh20
0
flairNLP/flair
pytorch
2,699
NER on custom data fails to start training and gets stuck before 1st epoch
I'm trying to train a custom NER with flair using my data (BIO format).

```python
from flair.data import Corpus
from flair.datasets import ColumnCorpus

columns = {0: 'text', 1: 'ner'}
data_folder = 'Data_french/'
corpus: Corpus = ColumnCorpus(data_folder, columns,
                              train_file='train.txt',
                              test_file='test.txt',
                              dev_file='eval.txt')
```

```
2022-04-01 13:59:25,399 Reading data from Data_french
2022-04-01 13:59:25,401 Train: Data_french/train.txt
2022-04-01 13:59:25,401 Dev: Data_french/eval.txt
2022-04-01 13:59:25,401 Test: Data_french/test.txt
CPU times: user 49min 20s, sys: 5min 54s, total: 55min 15s
Wall time: 55min 18s
```

```python
print(len(corpus.train))
print(corpus.train[0].to_tagged_string('ner'))
```

```
4948186
Myélogramme en sortie d'aplasie le 17.04.2013 <B-DATE> : rémission cytologique .
```

```python
tag_type = 'ner'
tag_dictionary = corpus.make_label_dictionary(label_type=tag_type)
```

```
2022-04-01 15:37:39,082 Computing label dictionary. Progress: 100%|██████████| 4948186/4948186 [05:20<00:00, 15435.43it/s]
2022-04-01 15:42:59,660 Corpus contains the labels: ner (#179449386)
2022-04-01 15:42:59,660 Created (for label 'ner') Dictionary with 17 tags: <unk>, O, B-DATE, B-PATIENT, B-VILLE, I-DATE, B-DOCTOR, B-ZIP, I-PATIENT, I-DOCTOR, B-STR, I-STR, B-PHONE, I-PHONE, B-EMAIL, I-EMAIL, I-VILLE
```

```python
tag_dictionary.get_items()
# ['<unk>', 'O', 'B-DATE', 'B-PATIENT', 'B-VILLE', 'I-DATE', 'B-DOCTOR', 'B-ZIP',
#  'I-PATIENT', 'I-DOCTOR', 'B-STR', 'I-STR', 'B-PHONE', 'I-PHONE', 'B-EMAIL',
#  'I-EMAIL', 'I-VILLE']

embedding_types = [
    # GloVe embeddings
    WordEmbeddings("fr"),
    # contextual string embeddings, forward
    FlairEmbeddings('fr-forward'),
    # contextual string embeddings, backward
    FlairEmbeddings('fr-backward'),
]
embeddings: StackedEmbeddings = StackedEmbeddings(embeddings=embedding_types)

from flair.models import SequenceTagger

tagger: SequenceTagger = SequenceTagger(hidden_size=256,
                                        embeddings=embeddings,
                                        tag_dictionary=tag_dictionary,
                                        tag_type=tag_type,
                                        use_crf=True)

from flair.trainers import ModelTrainer

trainer: ModelTrainer = ModelTrainer(tagger, corpus)
trainer.train('flair_all_data_model',
              # train_with_dev=True,
              learning_rate=0.1,
              mini_batch_size=64,
              max_epochs=150,
              embeddings_storage_mode='none')
```

Now everything works fine and training starts, but it never reaches epoch one and gets stuck at this stage:

```
2022-04-01 15:53:22,942 ----------------------------------------------------------------------------------------------------
2022-04-01 15:53:22,942 Model: "SequenceTagger(
  (embeddings): StackedEmbeddings(
    (list_embedding_0): WordEmbeddings(
      'fr'
      (embedding): Embedding(1000000, 300)
    )
    (list_embedding_1): FlairEmbeddings(
      (lm): LanguageModel(
        (drop): Dropout(p=0.5, inplace=False)
        (encoder): Embedding(275, 100)
        (rnn): LSTM(100, 1024)
        (decoder): Linear(in_features=1024, out_features=275, bias=True)
      )
    )
    (list_embedding_2): FlairEmbeddings(
      (lm): LanguageModel(
        (drop): Dropout(p=0.5, inplace=False)
        (encoder): Embedding(275, 100)
        (rnn): LSTM(100, 1024)
        (decoder): Linear(in_features=1024, out_features=275, bias=True)
      )
    )
  )
  (word_dropout): WordDropout(p=0.05)
  (locked_dropout): LockedDropout(p=0.5)
  (embedding2nn): Linear(in_features=2348, out_features=2348, bias=True)
  (rnn): LSTM(2348, 256, batch_first=True, bidirectional=True)
  (linear): Linear(in_features=512, out_features=19, bias=True)
  (beta): 1.0
  (weights): None
  (weight_tensor) None
)"
2022-04-01 15:53:22,943 ----------------------------------------------------------------------------------------------------
2022-04-01 15:53:22,943 Corpus: "Corpus: 4948186 train + 608305 dev + 620581 test sentences"
2022-04-01 15:53:22,943 ----------------------------------------------------------------------------------------------------
2022-04-01 15:53:22,944 Parameters:
2022-04-01 15:53:22,944  - learning_rate: "0.1"
2022-04-01 15:53:22,944  - mini_batch_size: "64"
2022-04-01 15:53:22,944  - patience: "3"
2022-04-01 15:53:22,945  - anneal_factor: "0.5"
2022-04-01 15:53:22,945  - max_epochs: "150"
2022-04-01 15:53:22,945  - shuffle: "True"
2022-04-01 15:53:22,945  - train_with_dev: "False"
2022-04-01 15:53:22,946  - batch_growth_annealing: "False"
2022-04-01 15:53:22,946 ----------------------------------------------------------------------------------------------------
2022-04-01 15:53:22,946 Model training base path: "flair_all_data_model"
2022-04-01 15:53:22,946 ----------------------------------------------------------------------------------------------------
2022-04-01 15:53:22,947 Device: cuda:0
2022-04-01 15:53:22,947 ----------------------------------------------------------------------------------------------------
2022-04-01 15:53:22,947 Embeddings storage mode: none
2022-04-01 15:53:22,949 ----------------------------------------------------------------------------------------------------
```
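With 4.9M training sentences, the first epoch can simply take a very long time before any progress is printed, so it is worth confirming the pipeline actually runs end to end on a small slice first (flair's `Corpus` has a `downsample` method for this). The sketch below mirrors the idea with plain Python, independent of flair:

```python
import random

def downsample(items, fraction, seed=42):
    # same idea as flair's Corpus.downsample: verify the training loop
    # starts and completes on a small random slice before committing to
    # 4.9M sentences, where a single epoch alone can take hours
    rng = random.Random(seed)
    k = max(1, int(len(items) * fraction))
    return rng.sample(items, k)

subset = downsample(list(range(4_948_186)), 0.01)
```

If training progresses normally on the downsampled corpus, the "stuck" behavior on the full data is more likely slow throughput than a hang.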
closed
2022-04-01T13:56:19Z
2022-06-22T12:40:37Z
https://github.com/flairNLP/flair/issues/2699
[]
elazzouzi1080
1
huggingface/transformers
tensorflow
36,541
Wrong dependency: `"tensorflow-text<2.16"`
### System Info

- `transformers` version: 4.50.0.dev0
- Platform: Windows-10-10.0.26100-SP0
- Python version: 3.10.11
- Huggingface_hub version: 0.29.1
- Safetensors version: 0.5.3
- Accelerate version: not installed
- Accelerate config: not found
- DeepSpeed version: not installed
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: no

### Who can help?

@stevhliu @Rocketknight1

### Information

- [ ] The official example scripts
- [ ] My own modified scripts

### Tasks

- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

I am trying to install the packages needed for creating a PR to test my changes. Running `pip install -e ".[dev]"` from this [documentation](https://huggingface.co/docs/transformers/contributing#create-a-pull-request) results in the following error:

```
ERROR: Cannot install transformers and transformers[dev]==4.50.0.dev0 because these package versions have conflicting dependencies.

The conflict is caused by:
    transformers[dev] 4.50.0.dev0 depends on tensorflow<2.16 and >2.9; extra == "dev"
    tensorflow-text 2.8.2 depends on tensorflow<2.9 and >=2.8.0; platform_machine != "arm64" or platform_system != "Darwin"
    transformers[dev] 4.50.0.dev0 depends on tensorflow<2.16 and >2.9; extra == "dev"
    tensorflow-text 2.8.1 depends on tensorflow<2.9 and >=2.8.0

To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip to attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
```

This happens because of the specification of `tensorflow-text<2.16` here:

https://github.com/huggingface/transformers/blob/c0c5acff077ac7c8fe68a0fdbad24306dbd9d4e3/setup.py#L179

### Expected behavior

`transformers[dev]` requires `tensorflow` above version 2.9, while `tensorflow-text` explicitly restricts TensorFlow to versions below 2.9. Also, there is no 2.16 version of either `tensorflow` or `tensorflow-text`:

https://pypi.org/project/tensorflow/#history
https://pypi.org/project/tensorflow-text/#history

```
INFO: pip is looking at multiple versions of transformers[dev] to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement tensorflow<2.16,>2.9; extra == "dev" (from transformers[dev]) (from versions: 2.16.0rc0, 2.16.1, 2.16.2, 2.17.0rc0, 2.17.0rc1, 2.17.0, 2.17.1, 2.18.0rc0, 2.18.0rc1, 2.18.0rc2, 2.18.0, 2.19.0rc0)
ERROR: No matching distribution found for tensorflow<2.16,>2.9; extra == "dev"
```

What is the correct `tensorflow-text` version?
open
2025-03-04T15:23:19Z
2025-03-07T00:14:03Z
https://github.com/huggingface/transformers/issues/36541
[ "bug" ]
d-kleine
6
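The conflict in the record above can be checked mechanically: no TensorFlow version can satisfy both specifier ranges at once. A minimal pure-Python sketch, with the two ranges taken from the pip output quoted in the issue (the candidate version list is illustrative):

```python
def parse(v):
    # Turn "2.15.1" into a comparable tuple (2, 15, 1).
    return tuple(int(x) for x in v.split("."))

def satisfies_dev(v):
    # transformers[dev] pin from the log above: tensorflow >2.9, <2.16
    return parse("2.9") < parse(v) < parse("2.16")

def satisfies_text_28(v):
    # tensorflow-text 2.8.x pin from the log above: tensorflow >=2.8.0, <2.9
    return parse("2.8.0") <= parse(v) < parse("2.9")

candidates = ["2.8.4", "2.10.0", "2.15.1", "2.16.1"]
both = [v for v in candidates if satisfies_dev(v) and satisfies_text_28(v)]
# both is empty: the two ranges are disjoint, hence pip's ResolutionImpossible
```

Since one range ends below 2.9 and the other starts above it, any `tensorflow-text` release that pins `tensorflow<2.9` is unresolvable against the `>2.9` dev pin.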
strawberry-graphql/strawberry
fastapi
3,501
`UnallowedReturnTypeForUnion` Error thrown when having a response of Union Type and using Generics since 0.229.0 release
## Describe the Bug

An `UnallowedReturnTypeForUnion` error is thrown when a response has a Union type and uses generics (with an interface), since the 0.229.0 release. Sharing the playground [link](https://play.strawberry.rocks/?gist=a3c87c1b7816a2a3251cdf442b4569e1) showing the error, for ease of replication. This feature was working but appears to have broken with the 0.229.0 release.

## System Information

- Operating system:
- Strawberry version (if applicable): 0.229.0

## Additional Context

Playground link: https://play.strawberry.rocks/?gist=a3c87c1b7816a2a3251cdf442b4569e1
closed
2024-05-15T13:19:10Z
2025-03-20T15:56:43Z
https://github.com/strawberry-graphql/strawberry/issues/3501
[ "bug" ]
jibujacobamboss
2
wkentaro/labelme
computer-vision
658
Backward compatibility Issues
I have old json files created with an older version of labelme just a few months old. But a new version packed with Ubuntu Focal Fossa 20.04 failed to open the json file, and saying that lineColor is missing. ```json { "shapes": [ { "shape_type": "polygon", "points": [ [ 95.10204081632652, 159.1020408163265 ], [ 117.55102040816325, 147.26530612244898 ], [ 145.30612244897958, 147.6734693877551 ], [ 153.87755102040816, 162.77551020408163 ], [ 156.734693877551, 185.22448979591834 ], [ 153.0612244897959, 186.85714285714283 ], [ 152.6530612244898, 195.42857142857142 ], [ 150.6122448979592, 197.0612244897959 ], [ 138.77551020408163, 206.44897959183672 ], [ 97.14285714285714, 208.89795918367346 ], [ 96.3265306122449, 202.77551020408163 ], [ 89.38775510204081, 186.0408163265306 ], [ 88.16326530612244, 177.0612244897959 ] ], "flags": {}, "group_id": null, "label": "Minibus" }, { "shape_type": "polygon", "points": [ [ 0.0, 184.81632653061223 ], [ 26.530612244897956, 185.22448979591834 ], [ 40.0, 198.6938775510204 ], [ 37.14285714285714, 226.85714285714283 ], [ 21.224489795918366, 233.38775510204079 ], [ 16.3265306122449, 235.0204081632653 ], [ 12.244897959183673, 230.53061224489795 ], [ 0.0, 232.16326530612244 ] ], "flags": {}, "group_id": null, "label": "Car" }, { "shape_type": "polygon", "points": [ [ 215.91836734693877, 166.0408163265306 ], [ 211.83673469387753, 170.93877551020407 ], [ 207.34693877551018, 177.0612244897959 ], [ 204.48979591836732, 185.22448979591834 ], [ 200.40816326530611, 191.75510204081633 ], [ 207.34693877551018, 194.20408163265304 ], [ 202.85714285714283, 202.77551020408163 ], [ 197.55102040816325, 209.30612244897958 ], [ 202.44897959183672, 209.7142857142857 ], [ 211.42857142857142, 196.6530612244898 ], [ 217.55102040816325, 199.91836734693877 ], [ 223.6734693877551, 208.89795918367346 ], [ 226.1224489795918, 206.44897959183672 ], [ 220.40816326530611, 194.61224489795916 ], [ 215.1020408163265, 188.89795918367346 ], [ 215.51020408163265, 
173.3877551020408 ], [ 219.99999999999997, 172.57142857142856 ] ], "flags": {}, "group_id": null, "label": "Pedestrian" }, { "shape_type": "polygon", "points": [ [ 233.46938775510202, 131.75510204081633 ], [ 229.79591836734693, 135.0204081632653 ], [ 232.6530612244898, 138.28571428571428 ], [ 228.97959183673467, 139.91836734693877 ], [ 231.0204081632653, 141.95918367346937 ], [ 233.87755102040813, 147.6734693877551 ], [ 234.28571428571428, 158.6938775510204 ], [ 235.1020408163265, 158.6938775510204 ], [ 239.59183673469386, 158.28571428571428 ], [ 237.55102040816325, 147.6734693877551 ], [ 237.95918367346937, 144.0 ], [ 239.99999999999997, 141.14285714285714 ], [ 238.36734693877548, 137.46938775510202 ], [ 235.51020408163262, 137.87755102040816 ] ], "flags": {}, "group_id": null, "label": "Pedestrian" }, { "shape_type": "polygon", "points": [ [ 33.87755102040816, 139.91836734693877 ], [ 31.83673469387755, 143.59183673469386 ], [ 28.97959183673469, 149.30612244897958 ], [ 31.020408163265305, 155.0204081632653 ], [ 32.244897959183675, 159.51020408163265 ], [ 33.46938775510204, 168.89795918367346 ], [ 36.326530612244895, 168.48979591836732 ], [ 36.73469387755102, 157.0612244897959 ], [ 35.51020408163265, 151.34693877551018 ], [ 38.367346938775505, 150.53061224489795 ] ], "flags": {}, "group_id": null, "label": "Pedestrian" }, { "shape_type": "polygon", "points": [ [ 126.53061224489795, 147.26530612244898 ], [ 131.83673469387753, 139.91836734693877 ], [ 135.91836734693877, 139.91836734693877 ], [ 140.0, 133.79591836734693 ], [ 159.18367346938774, 133.3877551020408 ], [ 166.12244897959184, 139.91836734693877 ], [ 165.7142857142857, 151.75510204081633 ], [ 165.7142857142857, 156.6530612244898 ], [ 162.44897959183672, 155.0204081632653 ], [ 159.18367346938774, 158.6938775510204 ], [ 153.0612244897959, 159.51020408163265 ], [ 146.53061224489795, 146.85714285714283 ] ], "flags": {}, "group_id": null, "label": "Car" }, { "shape_type": "polygon", "points": [ [ 
144.6629213483146, 133.14606741573033 ], [ 148.03370786516854, 123.87640449438202 ], [ 149.7191011235955, 121.06741573033707 ], [ 151.68539325842696, 119.9438202247191 ], [ 156.46067415730337, 116.29213483146067 ], [ 170.22471910112358, 116.29213483146067 ], [ 175.0, 117.97752808988764 ], [ 178.93258426966293, 127.24719101123596 ], [ 179.4943820224719, 131.46067415730337 ], [ 177.52808988764045, 146.34831460674158 ], [ 168.82022471910113, 150.28089887640448 ], [ 166.01123595505618, 148.87640449438203 ], [ 166.29213483146066, 139.8876404494382 ], [ 159.2696629213483, 133.14606741573033 ] ], "flags": {}, "group_id": null, "label": "Minibus" }, { "shape_type": "polygon", "points": [ [ 140.4494382022472, 132.58426966292134 ], [ 137.07865168539325, 128.37078651685394 ], [ 137.07865168539325, 120.78651685393258 ], [ 140.1685393258427, 115.73033707865169 ], [ 144.1011235955056, 120.78651685393258 ], [ 144.9438202247191, 125.56179775280899 ] ], "flags": {}, "group_id": null, "label": "Motorcycle" }, { "shape_type": "polygon", "points": [ [ 196.91011235955057, 128.93258426966293 ], [ 192.97752808988764, 122.75280898876404 ], [ 193.53932584269663, 116.57303370786516 ], [ 196.62921348314606, 112.07865168539325 ], [ 200.8426966292135, 116.29213483146067 ], [ 201.40449438202248, 119.9438202247191 ], [ 202.52808988764045, 125.84269662921348 ] ], "flags": {}, "group_id": null, "label": "Motorcycle" }, { "shape_type": "polygon", "points": [ [ 214.8876404494382, 124.71910112359551 ], [ 215.1685393258427, 117.41573033707866 ], [ 212.92134831460675, 117.69662921348315 ], [ 216.29213483146066, 110.11235955056179 ], [ 220.22471910112358, 115.4494382022472 ], [ 221.06741573033707, 118.53932584269663 ], [ 220.5056179775281, 124.71910112359551 ], [ 218.82022471910113, 128.37078651685394 ] ], "flags": {}, "group_id": null, "label": "Motorcycle" }, { "shape_type": "polygon", "points": [ [ 205.8988764044944, 125.56179775280899 ], [ 204.2134831460674, 122.75280898876404 ], [ 
203.65168539325842, 116.85393258426966 ], [ 203.65168539325842, 115.4494382022472 ], [ 205.3370786516854, 107.86516853932584 ], [ 217.13483146067415, 105.33707865168539 ], [ 219.9438202247191, 106.46067415730337 ], [ 219.9438202247191, 111.51685393258427 ], [ 223.03370786516854, 109.26966292134831 ], [ 225.28089887640448, 113.48314606741573 ], [ 225.28089887640448, 114.8876404494382 ], [ 224.7191011235955, 119.10112359550561 ], [ 224.1573033707865, 123.59550561797752 ], [ 220.78651685393257, 125.56179775280899 ] ], "flags": {}, "group_id": null, "label": "Car" }, { "shape_type": "polygon", "points": [ [ 237.64044943820224, 123.87640449438202 ], [ 236.23595505617976, 120.50561797752809 ], [ 235.3932584269663, 112.07865168539325 ], [ 237.92134831460675, 103.93258426966293 ], [ 248.87640449438203, 103.08988764044943 ], [ 259.2696629213483, 104.7752808988764 ], [ 263.4831460674157, 115.1685393258427 ], [ 264.0449438202247, 119.3820224719101 ], [ 260.39325842696627, 125.84269662921348 ], [ 257.5842696629214, 125.84269662921348 ], [ 257.02247191011236, 122.47191011235955 ], [ 255.3370786516854, 122.47191011235955 ], [ 255.0561797752809, 124.71910112359551 ], [ 252.80898876404493, 124.71910112359551 ], [ 252.24719101123594, 123.87640449438202 ], [ 244.9438202247191, 122.75280898876404 ], [ 244.9438202247191, 125.84269662921348 ], [ 243.25842696629212, 126.12359550561797 ], [ 241.01123595505618, 125.84269662921348 ] ], "flags": {}, "group_id": null, "label": "Minibus" }, { "shape_type": "polygon", "points": [ [ 260.00884955752207, 151.20353982300884 ], [ 260.00884955752207, 141.91150442477874 ], [ 257.3539823008849, 138.81415929203538 ], [ 257.3539823008849, 134.83185840707964 ], [ 259.1238938053097, 133.50442477876103 ], [ 259.5663716814159, 129.5221238938053 ], [ 261.33628318584067, 129.5221238938053 ], [ 262.2212389380531, 134.38938053097343 ], [ 265.3185840707964, 135.27433628318582 ], [ 264.87610619469024, 152.5309734513274 ] ], "flags": {}, "group_id": null, "label": 
"Pedestrian" }, { "shape_type": "polygon", "points": [ [ 213.99115044247785, 101.64601769911502 ], [ 214.43362831858406, 94.12389380530972 ], [ 229.92035398230087, 92.79646017699113 ], [ 235.2300884955752, 93.23893805309733 ], [ 235.6725663716814, 96.7787610619469 ], [ 233.90265486725662, 98.99115044247786 ], [ 233.4601769911504, 102.53097345132741 ], [ 231.69026548672565, 100.31858407079645 ], [ 224.16814159292034, 99.87610619469025 ], [ 224.16814159292034, 108.72566371681415 ] ], "flags": {}, "group_id": null, "label": "Car" } ], "imagePath": "../AnnoImages/936.jpeg", "flags": {}, "version": "4.2.9", "imageData": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCADwAUADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD3A3KI21pVU+hanebhgN/J7ZrhPtUu4sfmY925qRL+ZbqO4K5KDABYkYq+Uzudv5p7FjTHuVRd0kqoo7lq4691u5nl3xuYBjG1D1rOe7leLy3ZnXdnk5pcoXPQhcx+V5gkBTuwbj86qQa1a3EwhjmbcfbiuFS5cIU3sEbqueKfBcPbyrIrcg0+ULs9BLsJwQSRjmsPW9St4pzumzugZMLyc5z+HWuZn1G5c7jOzMf9vFZE8r+fHvGcEjOfakkguejReIrK6tghmaN3jHLrxkj1q+dTtVwWuUGenzV5nbmQ28eCowMbR09KsCRioPPTtRYLs7wa7YHpc4+bowNMvtXT7Owt72NJR0J5zXC+Z70u/wBKfKFzol1u+jQvNcb42OMDr+FMtPEdxFLmZmdPTcawN/qaQutFkB3c2v2MIH78uSBwnNQQeJLaSYq5ZUP3WP8AWuKzQZKVgOun8Sol9GELNACQxB6+4q3c6/bWzICXfem4FOf61wZlwMkkUCfPGTkUWFc7O58UwRtiFZJD6k4FUh4puCGJCjB+
4M8/jXMlzR5mFppILl2S/lkld8sCxLEAml/tC4H3ZpFGegc1SBBOTxTi4f8AwFAGla61e2aBI52CjJweeaaNY1DzWc3c2W64aswn3/A0gJOOfyNAzTutTubwqZJCdowMGqv2iUN/rHx/vGo4LpxCmDgY5zQTlsk9aQrCy3M5glQyP8yMOGPoa4PUZJX0m4BdvmhPG72rvdkgk2EHnj1rg5vntnU90K4P0xUsuJ52HbaPmNG5v7zfnTQMilpFDSWz94/nQCf7x+tLikxTA1DBbKPLkvW3ZPIORiqp+yhZ/wB65Yf6s/3qq4oxSEXknsY1P7uV2Pqab9vRXbbADGcYVvYVSxRQBaF/Mjl0wpLbhjovGMUf2ld7k/enCDGPX/GqtFO4Ekt1cSyF5JnYnoc1GWY87jz60YoxSuB9IjANDkDkDimoeOCDj1qKSdQ3JI9q2MiKWRRnI5qqZWGewqSVw+QOM1WkIHU0WGSC4+Y55FEk5OCOB3zVUsO1KDlTmkMeHzxweepqvO/72Ijsx4/CnbqhmPzRk92osBaglIVgDyG6VOkoC4J6dqoxkiUqe461Nn1pAWdwIbJ696RXIwCcj1qEE7hjn2p+4njFFwJN/wAvFJk9uaYenGOelAVjgAnP0ouFiQudwzTTIOuaYN23H6VG/POcH0xRcCR5F8s5PbpTUk+Y4HGBUbkKvDbjimxFnXKqzfLklRmk2Fi1vFPJULnrUBR/IMuCFHQmoRIeOenamncdi8JV3cjOO1M8/DnAAU9KiyHxnCnHenGEIuBIGOe1AWFMpLcUzzW5FPitWlXcpwPcVFIcOVBzjvRcLEkUmEAOMZP86siUMp29c96p24BWQMM+lSRIzyBQCAe/SkIueasagFvrxXCzkCSZRziRxn/gRrsidrnI4HrXGXzH+0rsAH/XNgD35/rUsqJ5442SuPRj/Okrc/4R65mmkk8xEVmJAIOetTp4XP8AFdDr2Sp5rFHOYJpcV1SeGrYffmkb1xgVYj0HTk6xMx9S5pcwWOMxShSTxg/Su7TTrCL7tpCPquf51OEhT7sca49FApczGcEllcPwsMjZ9FNTpo1+/S2f8eP5125lVR94D8arPf2sRIaZB7Ci7GcunhzUXPMaL/vPVuLwpck/vJ4l+gLVrNrFqGwu9mP92M0h1Un7ltMwPcjFK7EVF8KQj/WXUhPoqgVOnhzTk6iVj7t/hSnUrkkbLUY9TJVe51W8iI+SNcn0osFz18xlh0De56/nVediqYP5N/jXNX/jFYbkJbvHJHsOckgdPpWNbeLblItkkjSEn5dw6V086MbM655go4JH8qiZ2cZxkeoql9vtpWj8s5d8ZGdvatNLOIxkpcrgEgsD0/KjnQWHwQCQjGMjtSXFu0TggDBqOzkJuzbfNkISGAz0962JYEwSzqQr43PKMdR6c0rjsYZVlwSMVFLn5O/zCtO6ltvNs1Uw5MgEhUlhgqev6Ul1bWkLowu4s7+ignH50XApw28srqY1ymcZNWxYS92Ue2KsvqFoIgqh2MfKhRjJq2dQsptkn2ZWDLnYzHNRcqxRjhSLhsM2ee1KyoWI2r39fSrH223DlvsrIo+6iYA/E96je/tVz/oqf8Denp1Ya9hoBHOVH0X2xUqIQM7hgAcZxVNr+GW5TbHAP4QinrW2mn+bbpMpTLKCQRUSlYcU3uY5tUd8mUgHpgZoFgjy48w4HU4o1a9bSvK+RXLEggN0rDfX7pHYCSFhuOMDt2o1aHZI6A2Fsig5b6moYLhYGAkkYxDPG7jr6VNpUsOqwiWQhApwSxGc8U0x6aHHm3Cn94eN+OKm7vqPQzCTtJzkE5xn8qcYgrxCXMYb7zMOBz1rY+06BD96eFgOgGW/pWJrOqJdyCKzCx28Z+9twWx/IVak9iGicRxoxDXcAX0XJ/pU8UFtL/y/AZ7CM1lC1uyqlcK/qGxQ1hdCE+ZPwB93Jp3RNmbDCytZUDXMzlh1XgYof7D9q8tEd2xk
ZbrWSoLxR5BDBQMVIkc/mr5Zwc/ez0obSKSZrWzRFnCQE4x0NLIh28xlQexeobd7hHYi3c56ttPNJdpe3e0C3ZQM8kgUuddwsyLMRYDzAB6muc1Gxnk1e5mgt2MDsCr9jwAe9b/9k3w5Plxj/af/AOtUZt5o/wByZlIZTkqufypcyYzItNHu7rOBGmD/ABP/AIVDd2E9rMY2kjyOpANapniikwZiG9B1qvcGDLOwdz6kY7UWVx3ZXtNLF31uWBHUBRTNQ0+C08grcO4kYgjNTIWj2PgrWTrcrfY1cHlZhg/XNDsEU2zOuHlGovGoYBMAKG+8p6n61disNmxJm4BGXY8tx09qzJb0wJBORmXdgHviqcuqTzxMxY57c9KC3BpnTmWEWpMZG1crxWHbhJdQn7jaD/OltLkDS4twIWTdkg9DmqtvKltfOZGIBiOCB3zQXyWgbAhRewpcIOiisU6ldPGpUqGL7eFrVR3g08PIvnXL9ABkKPcCpdjElyO1Z2qRrLGOcMOg9fapUjuJ7eQSIyluBzjb706O3MTKxCLjjLtk/pWc6rWyLUerKDbhhiQMf3uv0xUSoFkTBJxzwM0biy7zyx6E1PE8QZTJkbem3/PNaXIsRy3Egc5kZhjit/QLlfLaNrkIWx+7yRnmse5SOSVTEdoOMZHrzWha2kVtd+c0gcgKyYb2B5x+NNp9Bq19S9q8Rs5oRHNLtbcGy59a2kZLZADImzAYLu9qwtZnNwschBBDHnB6Y96m81p2iMqt5e0AlFAbHtniizsUrJmkskZMLeYC29cjdnqcVLcTQFQodSu4EkN+dWJtK0OWw+1abruySEbzbXkW1mxgkAjiuYcOJ5I8k7WPSiKTBs2pZWW9M8MkYB42bu1NMyq7t57KxPaTHHYVjxht2SMgDoasDErFGHGPvDiqIuXRLE+cyyEDuzGlWNXfKK5I7Gn6dBA6TxuAVCdTUkCodRkADFTCjjDfWocmUIQ8DpIwVB656UXWoXQdoEvZGhX7qrIcfpUNykJ1FwwBBjU8n3qkY23EgHbk8iiLuSxJdxcE8+uTT+PSmCMhTgZxUmOnygcVdxEgI2tgqoweWpS4GCrAY69KfHbK1qzk5IJ4xxVORAk0MmBg5GSPbP8ASldMbL4iTyQ0lyI+zBj/ACA61Gbu2imwczIOpB27v8Kpz3JlYtuJ571TLsWPHWjViOvh1nTCqk2c59S0xPFapv8ATVjWSO03qehAz/WvPIpTsQZxgdQK0LS9nglXZIAhODuHFS4JjudomsKCfJsFGM8t3/SkbW75siO2iUgck/8A66oQsAgw+4HnJNK8mwk4JGB796FBICwdQv3zmSNOOCBUJuLxjl7psDsKqxXaT/NG4YA8sBxUhkULkyYB6HijlQEkV9HC5MkjtuGFBJOTnnrVK4ubm4lRkhaJC2zcBkniopJYoyJZFZ1WXgE9iM0l7rSGJGjLfK33CuKbuthq3UYdOmjXzWK7COpPNU5jlWJmJUDqfpUouJ7lgJHUKOdpbA65q8beyn/drPjJxkKMH1460rtbidjljqjvcQxAMo81dpPcdKdrbyNaEBWEasC42+9WW062xDK6lJVwwznJPbFU9VcvZzqcn5DyT+NG6Ki7M527lJ2NnGG7VV8zCMOeTT5keVMqMhRk0iWlzIMxxswNUi5yuzViO3T4UbOxkGDj+LvVWCfyL6JmBYcjAGc8U6EzPYNETxAd2M/gaV7Vk05bvB3FwFA9wR/WjS5q3eBafWUH7tYkDA/daVcj64FM/tLUHGIrRDkcHzM/1FDOlrBHvJTMY2gDr75FQNfktmNIWUesmOfWnYwsrbjZr2+EJbzI1bjhFyMHvk1HYXN0+oQiWYvG2cYYY6VsWOlzXaNdXPlkNGFCr0I9azbmCKy1y0jhQopwcE556U7aGd1c1bPSnMKq4EbJncSDk+1NvNLkRkMUO5WPRT0qZLyGFj5b7SPQ1ZF7MRuRw3HIZazRRg37sWEZ
hAMe5NyseTmt/TDBDpw+02iSSMhKs08iH06KwFVpYjJDJI6jJfdgD2/+tU5LeTCQVPysMEe+asRFfvEyv5Z25YFVViVA/E09Jw8MahRwBziormMmEscEKo5q1p8UUlqhMe5wTk7sZpN6CFgDyxyIqjIBOP8AP0q025pGIwM4I4x1ANSRgIzCONSueRjNPRwYIwyjmMc/QY/pUOQXIjlckqGI7YpyxBM7gQSegqxlmUYGaYOCPkwM96nmEpDIro2t0ijDRyZVgPpSbovtducgo0bLlie3So7tFRoWAKgSDOPrioHKpqCRuFCLIQPxFHmVqy1KIUvo+U2tEwHT2NWkcbQyMHTH3TVK+jiVrcKUzkjarD+6cVMk32Rk8xMyFASyvkAYyBRy6AXwYPKyzKuR69KrvLZbGEbBnCnBweKoyorzSMXYIe2f/rVJFbD5SGO0fez3p6Idy5BBI0RG4A4yQKy7mdDNbRlSFMo+9+XStNAViGZAT6n+VZeqxCJI5AScSDrRFq4OxMdLuXkOFVVDYBY4z71VltlhfEhcBgdoIx371oGdpW3RswDehpkkBaWIyiZwTjpVc1hWILKySS3EkjYXeVFPihSN13HLCRMZHH3gO/Wpba2uPOmggtp32tuCqhJxgGkuba6tFLTwNEwIZfNHXkdKfNcR0ErrbRk+XtUDgYxmqx1EDHGADzk1Qub55YlSZ4WZSQiRrz9SakeJnHAJAY+lUgNHw1Y6ZcpcW+oarBbrvBUMOWzUd5LZSM0VoCIllIRnPLKOAcdBmsaGxM99LChVTjcS7YFad2gt0WIFHIQfMnNRLRlrValOZ8o4wdowxIqhc5dmjbA3HoBVt5AMBSRn7xz1qG5k837o4HehXZLsRHhclulS21yYnUrgEHqaqKvPfNWEQOwzkewFDQiy8pkiEec8dz0qhcwA6dclgNxicc/Q4q4EmDgLGzc8HbnNSSwsYZg6n7rDBGOxqBq7Zx62cQSN8bt2Dz7r/jTN0llfKIQSSqgr13djTrW7RbSEP1UAYrRsb+B5UXBEh4+7Ti5X8jZD4rCDTrp5DIzLIuNm3IA9/Wma7Gz6YzR4URkSEfSrVwheXJaUqRwqdqSdPO0yeM5UsjD5j7daq+oNs5IyefELeV9jA5jJ6DPUe1Jpdg9/qkdoQR82ZMdgOtWNYltpfsxgAEkce2XHcirFkTbXk12C4YtwF7Z5NbpXOds7ry1RAoAVR0AFcV4ikVNftgP4VXP51tpr2xVEgD5HJU4P41xt/e/b9ZNyBgFxtBPYYxSkmgjqacqNt3Rx4UjnB6f4VrWastvhmLfKeD2rbmsUcsAqkE8HGCB7+tVntFHJmRCByE5rG6RaRFKM2hx3VSfzqJEL2kGM53n+VWpJYDEwAZpNvOe1SQFzZsI4sbWGCDz6UcwjNdHCuNrn5SDgVPpSPJbnapYBwMj3rQczvbeWTkdxv56dazdJMqbovLZgxH8WMc9eKlyuhaGn5JgYhwoyBj5gTUqRxeVGTIgAZlIxk9c1XmZC5YvGdvcIT+pNRvexIQCdw37uBjsKx5tA0LqmEsqtI+COg78U9ZIjcbEjLAHABPWs59WAP7uNTjrntUdpqsbzkuwPPAx/hQr9hpmhq06y2sxjh8sKnGeuQQf6VS1VluJUnWMM7EEgHt71allmuWmhWMbTGc7Vyeh/LtVeSKabSLaQAgFQMjA7Y+tXEG3YpO7r5e8ouJBwPvda13gZCfLKuQeMk1gXEDIjMSMqeADW7JcWLomZ3OMMdiH698VTu9iVdlaSecfKYVXHaiK5lbuFPvT3uoWfIa4kPoEQf4mr9hPLAHcwPASn3o15b25HH4VVkikisHmfgSM2f7oqveW7zWE8kccjiJfMdgCdozjP612uhRT6rqEELXk8ZdSRvl68ZxgVz18YPsk9s0rRuUYFEBwxxkbiDRbUrQ0dA8K6rdwW91HZyJBIgZZEZMnjqMtVPVmaKZA13dswlXln/DgjjvUO
mXEQ0i2OLgyBSG2zZVeT27fhTJ9RilidBPuCbfKUcYIIJ4NOwXELmK9lKieTci8hjn0zUt7bSvp85EaBNm594ySRgkgHoeOtRi/WXUWeMhlCleT2yCK09SNtP4WEiXYF0JGVoCfvJtyDStLqN2SMXS4pJdUEJ2cq2AzADpVuMgxIfkJKj+EntWZAdk0THgnqfwq4FbYpEmBgcBzVxM+hNaxg6nkjblDwB1q5d2sUrR5L5P4VlCc211Gy/eC9/wCKtMyusCs5Vgo3Ahupx0pPVjMp4kVnUdj0NVZCN2MEYHYVd86KVm8wSBmb7qD/ABqwdIQ4MlzDbjptlnTd+hp2sIysZUYHJHQCtDToyqknGT3q1HBY2YYG5V3BKscEgH06U9Ykdg9oWkAGGwmOfapkUIRlT39MU6OKAo/mTFGIwECZNRylomIdSGPRTUiQFwdm1pSCAx9agpHJ/wDCLS+aQhxEScEtk1o6V4Yg+1mMzsrhckrg/wBa1X0a5kYlIZnA6lVyM4yaTSjFFelI3DHYcsv06VfTQG2ZUFg03hS51NnbzoZgmA2B94Dp+NYNyxNrckM26NTg10NvqhtfDmt6auCtw3BJ6cg5H5CubSJi94SG2SqcfiKasiW2znUIVxJJzjnHrXVwaVJbWMchmUEqGkVuBk88Vg6ba/atQQSYWJWBbP8AKuuvb1FRVjKsxbBxzgVspWIkmzn71IrayknjUqW+VCOnvXOHg59Oa6DU4Jp1WJHQRKc/M2Mmsp7IoVXzEJPdTwPrUOabGoNI9d0i2/tiAhrU3CqB5iwTLHz6ZwePpzV6HwdA0snnfaYGDDbH9lYgAn+9kk49cCsGO9mdmIuZFDHPyEDp06CteC9tQoNzdsXOSQ0rc8+mahajbaKuo6Fb2OR56NjcAsAZm/HIAH51FbaZAlq5k1SxhAAyHzvB69MEdqs3FxZSxnywjL/uf/WrPMoMK7QWP+7jNDiguZupyNBbyxytMCRhGBGCe3ArKtNRlhyFdkbHJBxu/Kr+rzyxbBtKtnoy1hiUpK4JCkHqB1NQ4hoTS3c7q68gA9T35ppumnYxhAcEfNmppIpUtWPmRFWbPPWqcUWFDg7xgHaKOVCLiSoyGOMLHnqTzinpbI+6RXBJbJCrgA1WgaEksRgjoKcb1ycKoUDsKhp9AL0UjFsFmDMDkgmo4GZQMcbeAadbNFKxAJZyBzmniIJG5LHcGJ2+1WnpZj5WxzoXhZ2J3MM802A7QjHDDAODU0+PKJHAAAxWhp2j3F7ps08UMTRQfeZ5lQn2GTn9KIslKxjG9lBIX5VPZeKktNTnilXEjYJ6GnG0WT+NUOeQ3akGnIjEtMxxgkqKLJopJs34NctY2HmSXCSbuqBcYxgjJFJbTWmF8sDGBknvx3rnJUPnnJ+QHjJ/pV6GFowpWQbQM7h2qGuUTuTWkskVm0Mf3s/eJzikEUao/wC6xtPJJ6CpYIwkj/vVJbsKldlDPwGG3kfjWkZ2QFWynjVXOCcnjA9qtvcxMu0BzuBHUccVGsC3P7zJT1VRgmmiARbTI27/AGSOop86Y7Exiti2yMMzKgOcfxY5pIpnDCI7lIHOOPeoSDKwmhCRjPCjgcUGEu7HncSScccUXsOxYn+TyZpclSONrBjn39KJHV2O5gAMFVznFTwWjSRxxmFpEGSfmwCfrTzoyC4M1yTFa/3Yzuf6DPFHOgsUIclo5N3Bb5s9hV1ksWjbyy+/aWJU53fpxXQaeNAgAb7PKw2ksZ7kDAHsBWBq0mnPf40UyR28iYPJO/n36CrDQeLm0CmR4g2fmZWPf16Vs6VZaVq7XEVyt1CbeXCCJQu5T0zmuQH3sCSQ49Fqzp93KmpCYvISGMjFueg4zSdrE6mjqtpAtwRpUkzRRs0KvKf4lOGGfY102g6FNb+ULmawKzRhlaSX51OB/CR2+tV9N8R38nmQ20ccgWTotupJyNxPT1pZ9cviCLmO2Y9f3lpGf6Vna42+xo6k
i2NhezXNqHtIU+aS2vRuPQcrt4/M1x2jC0aWeaOB4c42IzZO3696lvbmIwzTGaCCSU7PKSIKrAjGcDisfRb2R7sJIxZmjxkt3pySS0KptvcrSIkGp6hbwKApKlQ3piqkqsUeQ8jPOBxmrGp74deycfOik459RUlzt+yMBwd2a55Ts0U1qc5DbIjE/MAD2q0tuZWLqXbcOBsFRXl2ba3PkkghhuyuBVeyv5Z7+OGSTEbHmulVXuiOVmh9mMSqPJByNpBGfxp4srcNmSG3CjsOtWWaGOaNFjt5CzAEgnj9a02ge1sWu2S2QRsUKA84xw3481PtdR8rtYrWzW8EKjLKCOmM4P41aN1EW2Kc47kYrmjfs9sVYBVXgEU+K/3Splm2+nWpbl0Iub4v0jYg569cUovd7YSGacj+6KynupYkYZyp9OtLHcXOd6TmPruA4+tHtHbUDQvb2K/iaNrRVZj8gyuR6YwK546dNKJpY8SLFkyEnBHPXFTSxym6HlOzcZGKhkgaM45Hy84p86fUNSm7sRjGAOtMVmdhGD1PSpCJJMxoCcDgDmp47R4wrnOD/EBVXQFdMbSrAZHekAU5H8jSyApMwq5DpssyeYAQB1Y8CplOMFqNJsZbzeQwBXOe5qe4uMMQO46intaSJuQqSOw6Z+lVZEUS7SdoA+UmpjJS1RopuKJhK5Tg8Hsa0LC/eFiAQ2PWsqBS6NjJJxwK0UtGWESAoc+vHenO1g5kiecyzsnmEFiTzx0rQtfmcqwUbVBHv7ms0Ww8wOblAVPQ05lVwxEjAY5JfP8A+uiDS0BEL27XF6wjBZWJ2noOp71JcRPaoI3wrleAGzx68VZjglEBKKwC9Dj2qiXEiCR53EhX7oH5Vd00Q4u4olZ1JjG3aOma37LTFdFkvNT0+zYnkPNvb8lz+XWnaPEtiRdvbQtKeAHUOBwMnB4rYHiG/Q5TyI/dLaMf+y0cikDbQ6LSNHchj4ktN2P4bZyPzyKbq/hiWzhjuBNFPA4GGI8on6AnNWLTxTewSMZJndGVhtXCjd2PSue1S5n1lpHmmd2JYfN/Cw5GPyo9mkClqXoNJKLjCRjHbk/4UlxZLHfWhTecMdzMc8elaEUzy2sLyKVkKjcCMHOOaa5AGQB+NZ21NEzCvr+QakJI4z5dscH5sA962xd6ZGBNfW80hkXKBH7e9c1exGK6mjwWMhyCP5VbktHbTYQI2aVc8Ee9aaCabNPV7nTHsj9httkvdy/QY571lW14llC1v5scsaqHBXg5x938M0sdpMYcNFMHGcbcAfzqumlX16wTGxVOCzn+lJyQKLNGLSpTB+9jkLHJP77A9fSnQ2y6ZM5cKonb5FDZ6DnJrZG4L24A71m6vbz3cCi3CrKjhlYnGKy9pruV7Mr3XiP+y5BHAkbg8tvz1GRjg+9Z7eMpXck2tvyMYA9sVDdaXqDzSSfZUkXPBZ1JqmmmXzFM2DY3cghentWiqon2bLd5r/8AauyIWsNvIWwrxk/limWmnzG5tJUBYK/znbjjOD+laOjaY1rdSzzWgQ9IicZx781oX935EJ2AF/4R61jKrd2RtCktzB1qKE6zaBAdrRnv3zxVCWYvuAVtvTOKmuX33VncMcDzOWPaor145ZgSd5HAA7UtOpM7IydUjf8As92KHYcc/jWXYkHULclSwLrlR1bkcVvTxtNatCfusMdawZFNveHBOY36j61omjG50sfkpNJ9uEtusRJVbZBkt2GWqB9VhuriQzwzyIUEYVJAGPvypA/KrTxI+0Md2Wzg037PGDhY0z1yBWXOkx85n+QhVgkZB9Tzn8KhggxLlgVx2FbxiidMBFHuOKRLaMMHTcCOhLVXtmyErspRTtDNkDaoPU9xUU99mQuB8zHOTWnJFFO5L7iR1NK8VkilHAOR3bFRKsl0G07mXNeHYNpAY9cUWj79wJ4Jy2KuabEjpJJHChw5VD7dqn+zIUkaNBvbufWn7Rdh7FKRI0ikSBsBgQwx
37GpLC0lulUjCoVAJZuCPSoWsrlNx2kjHUUiPcCARhWyF6DpTldr3WCauakekLFdCaSTcMcptyPSpppzEygRp5Y7Ien4VkG9u2QBo2EgPXp+dTf6RdsrsQp7Ac1zOjNu82P0Llxswnlqp3fxMe/oayb2CSRgBGpI7xjitmDTpS+8xsD6mrX2QIB52361tSXJsUqcpHOQQSiEgKMkdTV+C2aeJQUk3DrW3BaBz+5tw59QMCtSPSXZ/mkKJ/dQc+/NaObZoqKW5jR6dbLFm4UgY5ZiP6VNBYRyjbbWjMPULgfma6WDSLaLDeSCw/ic5NaAjUAAD8KEmXeMdEc7Hoc8ke2SQRqeoUZNULnwtJEsP2YmZSQHHTHHWupub6C1nFtISJmG5V2nn8aIIVkUSybST3PStacDGpMrw6FaJHCn23bhfmBAJHOakl0y0twrPqZKc7gYk6dsd60oo7XuqfpVDxPawyaBdtHKiTRwtJER13AcD8a22Rg9SgieFUbM2ozvhvujgf8AoNQi/sCuyxkVyhJZgmCxyccn8K4u6icPuMjA7sSBx0NT2V1HZIzO5Iwev6VLasNHWwSPLCZHIJZjyDkU8Rs+MCk0a3J0m2LY5XPpWkkQjyTz6CsGzeKM9tLSZlMgJ2nIGe9WktkTt+dWMZGelBK9qlstIYIlAyAKRiOwodz0GKaBjms2ylEgu5zbWc04iaRo1LCNerewqraajFe2UFySsSz/AHFY8k9xUmqmY6bMLbmcj5V/veorhrG4klvtOtI4ZG8iX92jdF5yeO2KErmmiVzvNhc9OO1O8vFPyBUM84jUnvUjMfV72+sr63FssUkRicvGxweOhz2qnc3BnkydoA9DkCqHiF3nuN4n4xtCHgfTNRRyrFCsfC4XoOapbGU5OImpyRGOMR8YbJpo2gZVNue5qpcSGV8gjFJGGlfAz06U+xyyk2xzu65WMbh6E1QFiTL5kwOS2TV24me0xMFV1DYdT0IqXz7S9mVYJHOVztcdPWhtoOV2uIkmzGOWHr2qWKUvhioYHvQ8IjYnyhgDqTTHLggEFgeiDjArNyuQTvcoCSgdiedoWp0EsxKoAPYnGKcZUGJHmKr1KjGaqS3EUVx5izHBGSB/KlzSfwo1joSyrFERHI3JHJU5FUdXZY7QNGh/u793bFST3drKMxIxkxwT61G6Ne2hjcjB6GnTjLm5mKWpU0Kd42kQgmM9wfumt+GRvu5BB7gYNYem23k3U0RySpx9a2UhY85VPcmtKkVzBGLZY3HPOPfIzQzrxx19Kmg0+aUDy0kf3PArSttClIHmsqD+6n+NQomiovqc/LbvKvC7PdqsWNlqCMRDG0gPcrgfXJrsbbRoIiCseW9WNXltgBz+QFbJNmvLCJzdvpF46g3EyqPSMc/nWpb6REhDFSx9X5rVConYU7cDxjkdqapidQhjtkUYxx7VYigLsI40LE9hyavWsekwlW1TUIkkP/LBW+YfXHNayeJdAs02wyEKP7kLf4VfIZOoZ8Wh3UsjAQlVBxvc43fQdalk8MXjIRHPDGT3wSasHxzow6NO30j/APr04+MrALu8mbbt3fw5/nVezZnzkFv4Umjb99dRyLycGPnP1zUr+HI4Vz50CKOpZP8A69Mn8b6emnXNyqSh4Ymk2MvUCvNtQ8ZXms3Mf74i3mh8xMH8xjtinyvuF7mjrWrxWGsG1hEcsWCPMQ7QtZXivVWe2sraNFVXjEj4m3GT5iBwDxXJ6rEElWQEsTkMSfxqCB5TJGEiLEEcqmT1qr6Ba5uXdpNdW0SW6FpGO5lHU0+w8NXUocXC+WjY4P3q0tE067i1SS/kUrG6bVjY5IrpQTjpWEpGsYENrEYIUhUkqi4Gam5z1yacMngDApdhA4/Gs7miQwkjjrTCQM08jrzULOAeealstIb3zkCo5JdvAJJpjyFyQBj1NIFAqCrDgCSDkk571w9sfK8ZFcfMJ2H0yM13AwmCOMVzd3ZxN4mjuxvG
JELMAdp+U/1ArSLtciUWzo5JAi5yOBWJqV+QSqHLEflT9Qv1jXPBz90A1iIxlnBkYgEjLf1qEjR6Gdq6NKsBHPzn+VOcCRy5BCjoKu3pSJmRDvQdwOtVQmxA8/Ck8AU+fQ46krsjEW8FvLCKO5P9KkTb5SgYyV6Ch0ku2KghI8dOnapltkTarOWKgYAqJT01IsircQb7WTeSPM6cd6wdOmeC8GASR2Arq50jnVQVIA+6AaYltbwszrAqzYOCDyKUatkO9lYVMTqduGA4zjqacYCc5LDHanCSURKFwu0AZxSPJggmQElh26e1Z31JIMAMcHBYYINEdsskoMdvgjuOlacFjI2PKtic9C1asGiu+POkC89Err2OpUu5hDS7YHMgVWxkhepq3Z6VGSfJt3HoxBxXSQabChG2Mt05bmr6WilcEcegp6sp8iRw2naY03iC9gkkMZCKSF78V1NpodtEwYRF2Hdq1UhijGVjVSe4H9ak9u3Wq5W2T7RIiS2VeuB7Cpgqr0ApC2R2pPrV8qRm5NjywFJuJpvAoyaCRTgjk4oyEV5GO1UUsTTSR2rl7K11eGbUjfO7W8uNpL7hjOeg6VS1B6ElzcsSpA4K9cY5qoJZNxOTg9atmS0JXcWdiPlCozZ/IUiy/LI0ei3rrH99xbttH1JrZGJW3s/U/jSAnPXr1rVFpLdWyyR6LfIWGd3kHGKRdA1d0WSPTJQjDhpGVP0JzVIVy3rcENtYwymUrp9xZuhaTHUrxwOcZ9q85sEJ0iEZ897afyleLPKsOevvXW6jpeo2zRxXsdutvcHyvlnWRlJ78ZAplrp9rpGmmxhH2hpG3Tyt0J9APSpkNGKdPn1ARlUwu4MWb7vBrv4oIgqskaLnnhcVz08jQskMcZlcj5VXhVFdDbuWtkLEZxziueozeCJgMCjAApByPb3pCwxxzWDZskLnHrTHlAHNRvIBnnPtVaWZQuTx7VDZRK856nhaqvMXJC8Z71CZGlbJ+6OgpRvJAWM5J4pxjdlXsPLrGuXICjqxNSLIrqGDAqehBzWL4xsidGRBcrJcMQfIQcr9fSqGji6stKEUxZG3Z2selXKnyoiM7s6Z5gvfA96x7vUV2kj7oOMf3vasy7ugd24kqPvZasU3rSTZP3OgHpSVO+pbmjReR5XLMcsalntprJFeQbd43KD3pbCPzpAeqjkmq2pXbX184Gdo4Az1qWzOo7RK8t35KAhcuzcFqakjCQNM43nrTxEGCtJGp4wCe34UfYluJskHaOpFRdI5vUntpIyjSZLMCMLVmR0kkLQgqx6ITn9adaWaQICwQqv3lLY3flVmL7OVIiwrjhjJ2/OsZTDQoJKCp3ELg8A0B02liQT1AxViK7to1G6FFQYYnOC3t61JJOJWVowvlgc4NJK4JIzbmbDuuSu0fMrNlvyqHdLKC2QqkcAHG78O1Xp0t08wgBmBOWIzzUCvDIpyNzHsoOeK1VhK56DFp0u0GSUY9EGKvW9miuAFyffmu4TQtORQCpb/AHnqZNK05T8tvHn65r0VRLdY4eRApAAA4FRkAHrWp4i1PTbC48iOyVnx9/nP4VhwX0F221BtJPQnNPlsLnuTEntR160pGDSfWo1GLwKTNB9qQketIYE0hYD3NITnim8L7mgqw7PBOOlLFf2BsJI5r8xLIeQq/e9uaYAxbAGAe5rnNZudP02WNLiRhIVbbmLI5pwepMlodGNb0qyhJj1S6YqOigAe30pyeO7KKGRJJzMjdQ8w/XC15zeX2n3du0dvcFGOA8jqQPyrLQWCNzfggdwv/wBeunnZhyI9UfxzpPksH8wE4wIX+b/x4YFQnxfonCi0nkfgYkkySa80e6sC4L3TMB02qBSprFtbXgkkgmlIO4IZwi/ouf1pXCx6Jc+JrWJSf7HRMcgsOh9cZrLsnN+Jbg8KWOB2x+FcvN4pUxeYthD88nR3ZxntxnBrRs9WfbpzkqFmDmREXAJ7dKzm2XBG3qFyLGzE
20OpkRAM+pAz+Fa9jKstruQHbvYL74OM1x+u3LXElrbEjcX3FAeQP6V18SrBCIoxhVHFc8zoiiwXABz+VQs5bpx7UwuCeTUMk4HCj8awNUiSeQIhOQW9KqBHc7mJPoKcAWPNSgYXFI0toNVQF6UGWKEM0hAIQkDPWngAEda4291VP+EjvY5SFCxtFGWOBnr9K2o25jGq9Cle6iZbq3IYESTjP5119y6oxRcZHQ4ry23ljg1GBpctGr7mAPv2rup9RW4iMkLArIOtViF7wsPsVNQcXAMQxgHr71heWyTFCMEHmtVBhgB0q7Fp8crGWQ7cDqO9Qp8qLnG5ShunsrTywBlvUc1BEpG6RgAO2akkTfKW52g/KDVgLnkjauAeTWUpdjllJt2Gx2zXcMhAK4xhv51Ik3kW8cUYDFcZO3nrmq8t28cbRRjAJ4C1XdzArESbicg4+vWs+VsmTNCW4jmhCSyFCO5XJb8qpS3XlqFJY7TxuJqlnew8xicGn7/mOOnStI00jO4T3czoWSLAYenJojvZEiBUqc8c9qZ5hLdTmmOibeRweuKvlQ7ltL5wyscNg88daR7ne2SMAHcoHbnNVdg2jjp0FMyx6/pTshH0DBcqFwNpbtiuj0ra8JfbznqawotFv0PyW4OP70m3+ldFpsE8FvtnVFOeitmvQuTY5HxxpxUJLaQPJPJxtjXJ9c1yGmaTfvrEEAtrhZGPzllxj3r2O7M4gP2faH9W7Vj29/DarIRIjzMcuVGKllGVrWmJYhfLJORyTWL061sanqMNzKIDMPPbsT0FZGOfWs2axEJJ6Ckx6/pS4zTguazLIypPQU5Yx35NS4IHB/CkLc8mpuVYAo6c1ynjXThcW8dx9jM+wYIV9pA/KupLc1T1AK9jMDjJQ4zQnqDPJrq0cw+VFaNCzEEK024n/Cq0WjTOT5zLGMcEOCTTRPNLfIl1IzKrfNlu1XLm7hHywBuOhLV0XMbEZ0RBHnz23egxU6ZRgkclooAALTxhm/DNZBnlH3WIB7CmEyMehJoA2Xs3vsma/gXy/uqFCr9cLVmymSKZJN5e2tDtLrwxB9K58JKw+6xx6VuaFpd3c71aFkhfGXNTOVikrjLV5p9eaaCOSRXckBucLnoa9D+1sQOMcVStLCGziCRqAPXuasMQBxj6Cuac7m8IkhnJ68Ui5Y5I/CmIhJDHirCLg81kbpWHDI680GVQO/vxQQSTjjiqksMpPAQj/aoQMle7XHByR3BrjvF8ttLYlzHEJyw2sD8x9a37yMpbs8hhjCjJbb0rz6UvreoAKGW3U8kjhR61rTWtzKe1ivbaYZ7F7jftkJ/dr61b0O7dLz7NKThvXtWjKIlURxDESjCg1kzxPFdLcA4IPUVrJ3REFys6uKBp5QgHJ7irOokJFHbo/wAw+9gdsVFpF+htjKykkDrj9ao3cheWR9wJB65rke9i6k0loJ5rOygYJP8AepLuWWFWRwfMI4xzVcNKIycgAnjP61HLPgeYSAdhALGmo6nK3YjFztRvnDsT27U0Pls4x9ahG3CgDcT3xV6C0MqneSo6gjvVOSRk9Srlm5znHHAqzHZzSYJBXHNaUFokTrHtUj1FWXjPmkwuwBHOB+YNZurqKxlfZIhhTIS3BJK9KX+zm5UsQynnIrTCKjGSQryeRjken1prNnoyrubB/wAannbL5TJNpuKjdz7+lO+wMOOmeh3VphSV4VTg9SKdvd5RJIgV89OlNVHcfKfSdRmWMPsLruPbPNcHLe6lKG8zXbSNVGWAl3H8SKTR7rT21AEao92+ckwoQo57k16lydzqtX1a3tf9F85RPIOFB5rjFVkZmkJUMenc1yWq6n5PjWSQuSp7lsnhjT9Y8SOttLJbrgDOGJ5Y0mUkRtObjx1sB+VVxgH3rrApBwa878JM0+um5lYFmGWI616EJAe5/Os5msUSAAdePagkAccVGXz0GaUAnrWZdhckjikwBTt2OBTOT1pDsMkJI+XA
PqazLuxmulIM5A9AK1TtHpimkg8DvSuOxyEnhC2diWkbJPUCkXwZZE8s5+hrsBEMdfwprBQOtHOw5EcqPCGnIeVZqlXwzp4I/cg/WuhK7sYyFpHCqpAwPc0nJlciMhNEsIcYhQn3FWAiRLgAADtT3YAnGfqaiIL9PzrNybKURrsDwASaFixgnBJqZECr65607HFI0GhcdqdzilAOKUg0h2GH2qNvrTyccVWuZhFHg9T0pDKupItzayW0n3JBhsf41xoiTT7loIpGMD9d3rW7e3bH5QeT3rKlg89eOG6g1pFtIzmkyrPEUORnBqW0t0mYeYoK9xUtuDcKVYc4wQatm0e1iywCkntzVTnpYxmrakV3OsaJHFGAgPUVVULLIWYcHsRTiN4Zc5PXBPFSSqqRksB04OcDNYoxlJleYqVwGLYHABrHIczFJM7lbpWqIpjvAGwnGSR2qSK0TarOV3Y5x9arnUSL3GadaFD50m31UY5+taJkPqNrdjSOn7zIwg6DBz2pcEqQdo2ngCsZybJY52VTwd2OnsanXciGNsqoBG7tkiojAVYkAHd1U/WnLDIJT5jhioBYD+L2qdhoiLsLj5o8ZHcdOKa+CykHGKkL2xYiRiCc7VHHSo5SgAdSQvAZWXvjpmj3mUrjhL8qxqAoI6/56Uvm+YjyEHecZ5/DpUAnR1JCMCTxUjsAseVYqVyGHFFncLM3rTVNPgtAl3d25OclIxnH4mnL4y0m0YCGOWVv9nCivOUs52IxG5z6A1oWuhX88gEdu4B7sMV7F0FmXNUvZL+/a7UbSegqWy0q71FQGZhHnqa29M8LOjK90QxHO0dK6aCzVFxgAegFRKZcYGXpGjQacMxglyOWNbyJgcnj3pQgQU/OfpWTNbAMDoaUMTTAueelOyBUhYWkL+nNNJJ9qUL3NIYcnrRkCgkLUbOT0pDQpbNMwWPP4UAc5pjygA46UhiuzDgDj1qB5Cc5OcCmyy/LjP4ZqMZPJ4X0qGy4xuJy59vWnomKPQDpTs+lIrYUEAUDnNMHNLkCgaJOKjkcLwKCx25qu75yTSZQSSYUnv8AWsS+u9rE5y3YGrFzcCNiScKBzWJI7SuWPWnFCbEAy3zEkHqTUiJs69c8UxPQ9Ku2ls055GAOpqpMkZFZnzftJOxB3HeqlzOVc7WJjz25/StDUXSIeXCqgDO5jWRcBYhvBzkd+MGs7ts561S2hEkhD4Hyk5+9VryjPGquucjoTnJ9az7aN7mUtJ8sQzgA45rUaXymOQR6D19KJabHJqKkoRRGVJAPQ03MKOQSoiJ6mqcupFGIIJO0DI42mucubueWeSSSZjyeOnWnCi5FqDerOnn1G3g2jh1kJ2OvPQc8VUfUgo3BgDjhT19q5eCQiUybsYz3qU3JbG5juJzmt1h0i4xRtvfymIx+Xlj8xdjkZp8ty5QRicHaMYx3+nes6J1Kud+/CniqonCSFjkuOhJ4p+zTexpZG2u2CVCTIsoB+ZiDz+dXI4ZZX8tZA5Ub8bgST3/H6VzyX4lypwWPVxxn8KvXFw0aLKAFckEKf5cUSgPSxqLFMFJAYKTwAufx9qtyL8y7mKBRyDz3rDtLiSaQWy7iwA+6SM8DgmtSKBIXbzmCkHGXfG36Vi4NMytdH//Z", "imageWidth": 320, "imageHeight": 240 } ``` I've added the json file but it won't read it. Any help is appreciated cause this is due for review soon and i have more two hundred images to annotate.
closed
2020-05-17T21:30:07Z
2020-06-23T18:20:04Z
https://github.com/wkentaro/labelme/issues/658
[]
juniorkibirige
7
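A common workaround for the compatibility problem described in the record above is a defensive loader that back-fills keys (such as `lineColor`) that older labelme JSON files may lack. A minimal sketch; the default color values here are placeholders, not labelme's actual defaults:

```python
import json

# Hypothetical defaults for keys a newer reader may expect but older
# labelme files omit (the concrete values are assumptions).
TOP_LEVEL_DEFAULTS = {
    "lineColor": [0, 255, 0, 128],
    "fillColor": [255, 0, 0, 128],
}

def load_annotation(text):
    # Parse the annotation and fill in any missing top-level keys,
    # leaving keys that are already present untouched.
    data = json.loads(text)
    for key, default in TOP_LEVEL_DEFAULTS.items():
        data.setdefault(key, default)
    return data

doc = load_annotation('{"shapes": [], "imagePath": "936.jpeg"}')
# doc now carries lineColor/fillColor defaults alongside the original keys
```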
microsoft/MMdnn
tensorflow
891
AttributeError: 'NoneType' object has no attribute 'name'
tensorflow convert caffe:

    from ._conv import register_converters as _register_converters
    Parse file [model.ckpt-61236.meta] with binary format successfully.
    Tensorflow model file [model.ckpt-61236.meta] loaded successfully.
    Tensorflow checkpoint file [model.ckpt-61236] loaded successfully.
    [435] variables loaded.
    WARNING:tensorflow:From /home/lsp/.local/lib/python3.6/site-packages/mmdnn/conversion/tensorflow/tensorflow_parser.py:269: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.
    Instructions for updating:
    Use tf.compat.v1.graph_util.extract_sub_graph
    TensorflowEmitter has not supported operator [IteratorV2] with name [IteratorV2].
    TensorflowEmitter has not supported operator [IteratorGetNext] with name [IteratorGetNext].
    Traceback (most recent call last):
      File "/home/lsp/.local/bin/mmconvert", line 8, in <module>
        sys.exit(_main())
      File "/home/lsp/.local/lib/python3.6/site-packages/mmdnn/conversion/_script/convert.py", line 102, in _main
        ret = convertToIR._convert(ir_args)
      File "/home/lsp/.local/lib/python3.6/site-packages/mmdnn/conversion/_script/convertToIR.py", line 120, in _convert
        parser.run(args.dstPath)
      File "/home/lsp/.local/lib/python3.6/site-packages/mmdnn/conversion/common/DataStructure/parser.py", line 22, in run
        self.gen_IR()
      File "/home/lsp/.local/lib/python3.6/site-packages/mmdnn/conversion/tensorflow/tensorflow_parser.py", line 421, in gen_IR
        func(current_node)
      File "/home/lsp/.local/lib/python3.6/site-packages/mmdnn/conversion/tensorflow/tensorflow_parser.py", line 815, in rename_FusedBatchNorm
        self.set_weight(source_node.name, 'mean', self.ckpt_data[mean.name])
    AttributeError: 'NoneType' object has no attribute 'name'

So, what can I do?
open
2020-08-21T02:09:29Z
2020-08-21T02:10:58Z
https://github.com/microsoft/MMdnn/issues/891
[]
lsplsplsp1111
1
amdegroot/ssd.pytorch
computer-vision
292
This line is wrong: a tensor is used as an index while the dimensions do not match
layers/modules/multibox_loss.py

    # Hard Negative Mining
    loss_c[pos] = 0  # filter out pos boxes for now
    loss_c = loss_c.view(num, -1)

should be:

    loss_c = loss_c.view(num, -1)
    loss_c[pos] = 0  # filter out pos boxes for now
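The shape mismatch can be demonstrated with a small standalone sketch (numpy stands in for torch here; the shapes and mask values are made up for illustration):

```python
import numpy as np

num, num_priors = 2, 4
# Simulates loss_c as produced upstream: shape (num * num_priors, 1)
loss_c = np.arange(num * num_priors, dtype=float).reshape(-1, 1)
# Boolean mask of positive boxes, shape (num, num_priors)
pos = np.array([[True, False, False, True],
                [False, True, False, False]])

# Correct order: reshape first so the mask's shape matches loss_c's
loss_c = loss_c.reshape(num, -1)  # (num, num_priors)
loss_c[pos] = 0                   # filter out pos boxes for now
```

Applying the mask before the reshape fails (or indexes the wrong axis) because `pos` has shape `(num, num_priors)` while `loss_c` is still `(num * num_priors, 1)`.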
open
2019-02-21T03:44:28Z
2019-03-17T05:34:44Z
https://github.com/amdegroot/ssd.pytorch/issues/292
[]
caijch
3
replicate/cog
tensorflow
1,244
Rethink/redesign validation of prediction responses
We recently ran into an issue in the Replicate production environment where a model with an output type of `cog.File` returned a string rather than a file-like object. This highlighted multiple issues relating to output handling and validation of responses from the model, namely: 1. Cog's synchronous prediction API validates the entire prediction response before returning it to the user, but the asynchronous prediction API doesn't do this anywhere. 2. [`upload_files`](https://github.com/replicate/cog/blob/2b4515b3635c4d1a8ca5d8f94c2e390e383b96d4/python/cog/json.py#L46) quietly ignores things that don't look like file handles. This results in the value returned by the model being passed back to Replicate, even though it was a 10MB base64-encoded blob and not a URL. It's not immediately obvious how and where to fix this in the code. There are at least two things that seem a bit wrong here: - While returning an invalid type is clearly a model error, it seems to me that`POST /predictions` should probably return a prediction with a `failed` status and an appropriate error message rather than a 500. - The async update path (both polling and webhooks) should probably refuse to propagate invalid payloads.
open
2023-08-02T11:08:13Z
2024-01-31T11:45:02Z
https://github.com/replicate/cog/issues/1244
[]
nickstenning
4
adamerose/PandasGUI
pandas
48
PandasGUI (Not Responding)
Hi, Trying to use PandasGUI on Windows 10. Installed via pip in Python 3.8.6 Tried in both Jupyter Notebook and VS Code in .ipynb file. Added `from pandasgui import show` As soon as I use `show(df)` it opens and hangs on the below. ![dwm_mcfci0BLJl](https://user-images.githubusercontent.com/32893752/96835063-c4869d00-148e-11eb-8d34-dfbfae7a9766.png)
closed
2020-10-22T06:49:22Z
2020-11-05T18:00:45Z
https://github.com/adamerose/PandasGUI/issues/48
[]
eddylit
5
lepture/authlib
django
5
Cannot log in with Facebook: "Missing client_id parameter."
From the authorize_access_token call I receive: '{"error":{"message":"Missing client_id parameter.","type":"OAuthException","code":101,"fbtrace_id":"AkqIKeKkCiT"}}' During debugging I found in oauth.py fetch_access_token(...) that when the body object is formatted, it has one of the query parameters as "client=....", which in fact should be "client_id=..." for Facebook. Unfortunately, there is no compliance_hook here for it. I would suggest adding a compliance_hook for it. In fact, I want to contribute this to authlib myself.
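As a rough illustration of the idea (this is only a string-level sketch; Authlib's real compliance-hook API operates on request/response objects and has a different signature, and the function name below is made up):

```python
# Hypothetical compliance fix: rewrite the access-token request body so the
# "client" query parameter (which Facebook rejects) becomes "client_id".
def facebook_compliance_fix(body):
    if "client_id=" in body:
        return body  # already correct, nothing to do
    return body.replace("client=", "client_id=")

fixed = facebook_compliance_fix("client=12345&grant_type=authorization_code")
```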
closed
2017-12-05T17:29:49Z
2017-12-07T03:22:44Z
https://github.com/lepture/authlib/issues/5
[ "bug" ]
anikolaienko
5
deezer/spleeter
tensorflow
601
[Bug] ModuleNotFoundError: No module named 'tensorflow.contrib'
![image](https://user-images.githubusercontent.com/36926346/113030487-3bebf580-91c0-11eb-8e5f-6b31f66de245.png) ![image](https://user-images.githubusercontent.com/36926346/113030708-7786bf80-91c0-11eb-9d02-b657097d945c.png) How to fix it up?
closed
2021-03-30T17:29:54Z
2021-04-02T13:34:10Z
https://github.com/deezer/spleeter/issues/601
[ "bug", "invalid" ]
cfuncode
1
dpgaspar/Flask-AppBuilder
rest-api
1,949
Select2 3.5.2 XSS vulnerability
Select2 3.5.2 is affected by an XSS vulnerability. https://security.snyk.io/package/npm/select2/3.5.2-browserify https://security.snyk.io/package/npm/select2 I tried updating select2.js and select2.css to the last stable release (4.0.13) in the Superset application and it works.
closed
2022-11-07T09:07:38Z
2022-12-22T08:24:09Z
https://github.com/dpgaspar/Flask-AppBuilder/issues/1949
[]
n1k9
1
holoviz/panel
matplotlib
7,416
Opening notebook from url with panelite, fromURL parameter?
#### Is your feature request related to a problem? Please describe. A way to open notebooks from urls with panelite such as https://github.com/YoraiLevi/interactive_matplotlib/blob/b09c7720069ad41337d8acf2540cda223c50a9dc/examples/draggable_line_matplotlib_widgets.ipynb#L7 #### Describe the solution you'd like the following link should open the notebook `https://panelite.holoviz.org/lab/index.html?fromURL=https://raw.githubusercontent.com/YoraiLevi/interactive_matplotlib/refs/heads/master/examples/draggable_line_matplotlib_widgets.ipynb` #### Describe alternatives you've considered jupyterlite does open the following notebook with this url using `fromURL` parameter `https://jupyter.org/try-jupyter/lab/index.html?fromURL=https://raw.githubusercontent.com/YoraiLevi/interactive_matplotlib/refs/heads/master/examples/draggable_line_matplotlib_widgets.ipynb` #### Additional context https://discourse.holoviz.org/t/opening-notebook-from-url-with-panelite/8346
open
2024-10-17T20:57:28Z
2024-10-17T20:57:28Z
https://github.com/holoviz/panel/issues/7416
[]
YoraiLevi
0
bmoscon/cryptofeed
asyncio
1,014
BITMEX: Failed to parse symbol information: 'expiry'
**Describe the bug**

    2024-03-01 14:27:05,508 : ERROR : BITMEX: Failed to parse symbol information: 'expiry'
    Traceback (most recent call last):
      File "/home/ec2-user/.cache/pypoetry/virtualenvs/feed-CdS8cHT0-py3.9/lib64/python3.9/site-packages/cryptofeed/exchange.py", line 105, in symbol_mapping
        syms, info = cls._parse_symbol_data(data if len(data) > 1 else data[0])
      File "/home/ec2-user/.cache/pypoetry/virtualenvs/feed-CdS8cHT0-py3.9/lib64/python3.9/site-packages/cryptofeed/exchanges/bitmex.py", line 61, in _parse_symbol_data
        s = Symbol(base, quote, type=stype, expiry_date=entry['expiry'])
    KeyError: 'expiry'
    2024-03-01 14:27:05,510:ERROR:An error occurred: 'expiry'

**To Reproduce**

    fh.add_feed(Bitmex(timeout=5000, symbols=Bitmex.symbols(), channels=[LIQUIDATIONS], callbacks={LIQUIDATIONS: liquidations, OPEN_INTEREST: oi, FUNDING: funding}))

**Expected behavior**

    expiry = entry.get('expiry')  # Use .get() to avoid KeyError
    if not expiry:
        continue  # Skip symbols without expiry or handle appropriately
    s = Symbol(base, quote, type=stype, expiry_date=expiry)

**Operating System:** ubuntu

**Cryptofeed Version** 2.4.0

**Python Version** 3.9.16

I have tested using .get() to avoid the KeyError, as in **Expected behavior**, by editing packages/cryptofeed/exchanges/bitmex.py, line 61, in _parse_symbol_data, and it now works. If anyone has the same issue, is this the correct way to work around it? Thanks! Cheers! Sam
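A standalone sketch of that workaround follows (the entry contents here are made up; real BitMEX instrument records have many more fields):

```python
# Use .get() so entries without an 'expiry' key are skipped instead of
# raising KeyError, as proposed in "Expected behavior" above.
entries = [
    {"symbol": "XBTUSD"},                          # perpetual: no expiry
    {"symbol": "XBTH25", "expiry": "2025-03-28"},  # dated future
]

parsed = []
for entry in entries:
    expiry = entry.get("expiry")  # returns None instead of raising
    if not expiry:
        continue                  # skip symbols without an expiry date
    parsed.append((entry["symbol"], expiry))
```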
closed
2024-03-01T17:16:39Z
2024-03-01T19:01:58Z
https://github.com/bmoscon/cryptofeed/issues/1014
[ "bug" ]
gawaboga
0
ijl/orjson
numpy
406
Tests fail with pydebug
Hello! In our environment we use Python builds with enabled pydebug. Tests provided with `orjson` fail with Segmentation fault. Probably related to https://github.com/ijl/orjson/issues/277 ```pytest test =========================================================================================== test session starts =========================================================================================== platform linux -- Python 3.11.4+, pytest-7.4.0, pluggy-1.2.0 rootdir: /tmp/orjson plugins: Faker-18.13.0 collected 1182 items test/test_api.py .......................... [ 2%] test/test_append_newline.py ..... [ 2%] test/test_canonical.py ... [ 2%] test/test_circular.py ... [ 3%] test/test_dataclass.py ................... [ 4%] test/test_datetime.py ................................................... [ 9%] test/test_default.py ................... [ 10%] test/test_dict.py ....... [ 11%] test/test_enum.py ............. [ 12%] test/test_error.py ............... [ 13%] test/test_fake.py . [ 13%] test/test_fixture.py ..... [ 14%] test/test_fragment.py ................s............................................................................................................................................................ [ 28%] .................................................................................................................................................................. [ 42%] test/test_indent.py ........ [ 43%] test/test_issue221.py .. [ 43%] test/test_issue331.py ...... [ 43%] test/test_jsonchecker.py .................................... [ 46%] test/test_memory.py ..........s. [ 47%] test/test_non_str_keys.py ............................... 
[ 50%] test/test_numpy.py .............Fatal Python error: Segmentation fault Current thread 0x00007fece382a740 (most recent call first): File "/tmp/orjson/test/test_numpy.py", line 136 in test_numpy_array_d1_datetime64_years File "/tmp/venv/lib/python3.11/site-packages/_pytest/python.py", line 194 in pytest_pyfunc_call File "/tmp/venv/lib/python3.11/site-packages/pluggy/_callers.py", line 80 in _multicall File "/tmp/venv/lib/python3.11/site-packages/pluggy/_manager.py", line 112 in _hookexec File "/tmp/venv/lib/python3.11/site-packages/pluggy/_hooks.py", line 433 in __call__ File "/tmp/venv/lib/python3.11/site-packages/_pytest/python.py", line 1788 in runtest File "/tmp/venv/lib/python3.11/site-packages/_pytest/runner.py", line 169 in pytest_runtest_call File "/tmp/venv/lib/python3.11/site-packages/pluggy/_callers.py", line 80 in _multicall File "/tmp/venv/lib/python3.11/site-packages/pluggy/_manager.py", line 112 in _hookexec File "/tmp/venv/lib/python3.11/site-packages/pluggy/_hooks.py", line 433 in __call__ File "/tmp/venv/lib/python3.11/site-packages/_pytest/runner.py", line 262 in <lambda> File "/tmp/venv/lib/python3.11/site-packages/_pytest/runner.py", line 341 in from_call File "/tmp/venv/lib/python3.11/site-packages/_pytest/runner.py", line 261 in call_runtest_hook File "/tmp/venv/lib/python3.11/site-packages/_pytest/runner.py", line 222 in call_and_report File "/tmp/venv/lib/python3.11/site-packages/_pytest/runner.py", line 133 in runtestprotocol File "/tmp/venv/lib/python3.11/site-packages/_pytest/runner.py", line 114 in pytest_runtest_protocol File "/tmp/venv/lib/python3.11/site-packages/pluggy/_callers.py", line 80 in _multicall File "/tmp/venv/lib/python3.11/site-packages/pluggy/_manager.py", line 112 in _hookexec File "/tmp/venv/lib/python3.11/site-packages/pluggy/_hooks.py", line 433 in __call__ File "/tmp/venv/lib/python3.11/site-packages/_pytest/main.py", line 349 in pytest_runtestloop File 
"/tmp/venv/lib/python3.11/site-packages/pluggy/_callers.py", line 80 in _multicall File "/tmp/venv/lib/python3.11/site-packages/pluggy/_manager.py", line 112 in _hookexec File "/tmp/venv/lib/python3.11/site-packages/pluggy/_hooks.py", line 433 in __call__ File "/tmp/venv/lib/python3.11/site-packages/_pytest/main.py", line 324 in _main File "/tmp/venv/lib/python3.11/site-packages/_pytest/main.py", line 270 in wrap_session File "/tmp/venv/lib/python3.11/site-packages/_pytest/main.py", line 317 in pytest_cmdline_main File "/tmp/venv/lib/python3.11/site-packages/pluggy/_callers.py", line 80 in _multicall File "/tmp/venv/lib/python3.11/site-packages/pluggy/_manager.py", line 112 in _hookexec File "/tmp/venv/lib/python3.11/site-packages/pluggy/_hooks.py", line 433 in __call__ File "/tmp/venv/lib/python3.11/site-packages/_pytest/config/__init__.py", line 166 in main File "/tmp/venv/lib/python3.11/site-packages/_pytest/config/__init__.py", line 189 in console_main File "/tmp/venv/bin/pytest", line 8 in <module> Extension modules: pendulum.parsing._iso8601, pendulum._extensions._helpers, psutil._psutil_linux, psutil._psutil_posix, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator (total: 17) Segmentation fault (core dumped) ``` Steps to reproduce on Docker `ubuntu:latest`: ```bash cd /tmp git clone https://github.com/python/cpython.git cd cpython git switch 3.11 apt-get install nano nano /etc/apt/sources.list apt-get update apt-get build-dep python3 apt-get install pkg-config apt-get install build-essential gdb lcov pkg-config libbz2-dev libffi-dev libgdbm-dev libgdbm-compat-dev liblzma-dev libncurses5-dev libreadline6-dev libsqlite3-dev libssl-dev lzma lzma-dev tk-dev uuid-dev zlib1g-dev 
./configure --with-pydebug make make install cd .. git clone https://github.com/ijl/orjson.git python3.11 -m venv venv source venv/bin/activate python -m pip install --upgrade pip setuptools wheel cython pip install orjson cd orjson pip install -r test/requirements.txt pytest test ```
closed
2023-07-10T17:13:19Z
2023-07-12T19:36:16Z
https://github.com/ijl/orjson/issues/406
[]
serjflint
1
zappa/Zappa
django
636
[Migrated] Organize Zappa community governance structure?
Originally from: https://github.com/Miserlou/Zappa/issues/1610 by [brylie](https://github.com/brylie) As an open source project matures, and in order to promote the longevity of the project, it is oftentimes useful to define a community governance structure. Currently, Zappa has a single primary maintainer/contributor, and the [contributions seem to be dwindling a bit](https://github.com/Miserlou/Zappa/graphs/contributors). With gratitude, and no disrespect, to @Miserlou, may we discuss what a community governance structure for Zappa would look like?
closed
2021-02-20T12:26:58Z
2022-07-16T06:49:02Z
https://github.com/zappa/Zappa/issues/636
[]
jneves
1
piskvorky/gensim
nlp
3,341
Clean up aarch64 wheel builds
in .travis.yml: - [ ] Build Python 3.10 wheels - [ ] Perform the wheel builds regularly, so we know when something breaks - [ ] Document the separate travis.yml file in the wiki (e.g. "we use Travis for aarch64 because github actions don't support aarch64 builds yet")
open
2022-05-02T12:35:12Z
2022-05-02T12:35:12Z
https://github.com/piskvorky/gensim/issues/3341
[ "help wanted", "housekeeping" ]
mpenkov
0
fastapi-admin/fastapi-admin
fastapi
40
demo id password incorrect
I can't log in with the ID and password you provided.
closed
2021-03-19T16:06:19Z
2023-04-19T02:29:50Z
https://github.com/fastapi-admin/fastapi-admin/issues/40
[]
tbop02k
5
Avaiga/taipy
automation
1,895
Systematically batch front-end updates
### Description The current implementation of `State` as a context manager is beneficial because it enables batching of front-end updates. This can lead to significant performance improvements, particularly when handling numerous variables. However, I believe this should be the default behavior for all callback invocations. Is there ever a scenario where the front-end should be updated before the backend code has finished executing? In most cases, updating the front-end mid-execution could lead to inefficiencies or potential inconsistencies. Would it make sense to adopt this as the default to ensure smoother and more efficient performance? ### Impact of Solution Better performance is expected. We might want to hide the context manager nature of State after this. ### Acceptance Criteria - [ ] Ensure new code is unit tested, and check code coverage is at least 90%. - [ ] Create related issue in taipy-doc for documentation and Release Notes. - [ ] Check if a new demo could be provided based on this, or if legacy demos could be benefit from it. - [ ] Ensure any change is well documented. ### Code of Conduct - [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+). - [ ] I am willing to work on this issue (optional)
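As an illustration of the batching idea (a standalone sketch, not Taipy's actual `State` implementation; every name below is made up):

```python
# Collect variable updates inside a context manager and flush them to the
# front end once, on exit, instead of pushing each change immediately.
class BatchedState:
    def __init__(self):
        self._pending = {}
        self.flushed = []          # stands in for front-end update calls

    def __enter__(self):
        return self

    def set(self, name, value):
        self._pending[name] = value  # no front-end round-trip yet

    def __exit__(self, *exc):
        self.flushed.append(dict(self._pending))  # one update for all vars
        self._pending.clear()
        return False

state = BatchedState()
with state as s:
    s.set("x", 1)
    s.set("y", 2)   # both changes reach the "front end" in a single flush
```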
open
2024-10-03T11:11:49Z
2025-02-07T13:47:31Z
https://github.com/Avaiga/taipy/issues/1895
[ "📈 Improvement", "🟨 Priority: Medium", "🔒 Staff only", "Gui: Back-End", "💬 Discussion" ]
FabienLelaquais
4
taverntesting/tavern
pytest
699
Use parametrize for HTTP method
As a tester, I want to assure that certain verbs like POST,PUT,DELETE,PATCH are causing a fixed behavior for a set of urls. I can parametrize the url, but using this approach for the method is not possible yet. ```yaml marks: - parametrize: key: verb vals: - POST - PUT - DELETE - PATCH - parametrize: key: path vals: - /odata/TestDataSet - /doc - / stages: - name: "Verb {verb} on {path} returns whatever" request: url: "{service}{path}" method: "{verb}" ``` Currently this yields an error: ``` Enum '{verb}' does not exist. Path: '/stages/0/request/method' Enum: ['GET', 'PUT', 'POST', 'DELETE', 'PATCH', 'OPTIONS', 'HEAD']. ``` I would need this for easy regression testing...
closed
2021-06-16T11:50:00Z
2021-10-31T15:40:31Z
https://github.com/taverntesting/tavern/issues/699
[]
GitifyMe
2
manbearwiz/youtube-dl-server
rest-api
34
Python 2 ChainMap and pathlib
In Python 2, there is no ChainMap inside the collections module. So I would suggest changing the code from `from collections import ChainMap` to:

```
try:
    from collections import ChainMap
except ImportError:
    from chainmap import ChainMap
```

Meanwhile, add pathlib and ChainMap to requirements.txt.
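A self-contained sketch of the fallback import, plus a tiny usage example (the option names are illustrative, not youtube-dl-server's actual config keys):

```python
# Stdlib ChainMap on Python 3, the backport package on Python 2.
try:
    from collections import ChainMap        # Python 3
except ImportError:
    from chainmap import ChainMap           # Python 2: pip install chainmap

defaults = {"format": "mp4", "quality": "best"}
overrides = {"format": "webm"}
merged = ChainMap(overrides, defaults)      # earlier mappings win on lookup
```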
closed
2019-04-21T23:01:32Z
2020-12-04T21:43:29Z
https://github.com/manbearwiz/youtube-dl-server/issues/34
[]
jeffli678
1
lensacom/sparkit-learn
scikit-learn
83
ImportError: cannot import name _check_numpy_unicode_bug
I got the error when importing the SparkitLabelEncoder module with scikit-learn version 0.19.1. ``` >>> from splearn.preprocessing import SparkLabelEncoder Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.7/site-packages/splearn/preprocessing/__init__.py", line 1, in <module> from .label import SparkLabelEncoder File "/usr/lib/python2.7/site-packages/splearn/preprocessing/label.py", line 3, in <module> from sklearn.preprocessing.label import _check_numpy_unicode_bug ImportError: cannot import name _check_numpy_unicode_bug ```
open
2018-02-05T02:18:00Z
2018-02-05T02:18:00Z
https://github.com/lensacom/sparkit-learn/issues/83
[]
dankiho
0
scikit-learn/scikit-learn
data-science
30,037
Implement the two-parameter Box-Cox transform variant
### Describe the workflow you want to enable

Currently, only the single-parameter Box-Cox is implemented in sklearn.preprocessing.power_transform. The two-parameter variant is defined as ![](https://wikimedia.org/api/rest_v1/media/math/render/svg/f0bcf29e7ad0c8261a9f15f4abd9468c9e73cbaf) where both parameters are to be fit from data via MLE.

### Describe your proposed solution

Add the two-parameter variant as a new method to sklearn.preprocessing.power_transform.

### Describe alternatives you've considered, if relevant

Of course, the default Yeo-Johnson transform can be used for negative data, but that is mathematically different.

### Additional context

Wikipedia page: https://en.wikipedia.org/wiki/Power_transform
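A minimal sketch of the two-parameter transform itself (fitting both lambdas by maximum likelihood is omitted; the helper name is made up, and this is not scikit-learn code):

```python
import numpy as np

def boxcox_two_param(y, lmbda1, lmbda2):
    """Two-parameter Box-Cox: ((y + l2)**l1 - 1) / l1, or log(y + l2)
    when l1 == 0. Requires y + l2 > 0 for every sample."""
    shifted = np.asarray(y, dtype=float) + lmbda2
    if np.any(shifted <= 0):
        raise ValueError("y + lambda2 must be strictly positive")
    if lmbda1 == 0:
        return np.log(shifted)
    return (shifted ** lmbda1 - 1.0) / lmbda1

x = np.array([-0.5, 0.0, 1.0])
out = boxcox_two_param(x, lmbda1=1.0, lmbda2=1.0)  # identity when l1 = l2 = 1
```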
open
2024-10-09T12:35:03Z
2024-10-09T14:59:44Z
https://github.com/scikit-learn/scikit-learn/issues/30037
[ "New Feature" ]
jachymb
1
deepinsight/insightface
pytorch
2,739
Issue with Installing the python packages.
Hi there, I clone this repo and try to install `setup.py` in `python-package` folder. I faced this error below --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[1], [line 1](vscode-notebook-cell:?execution_count=1&line=1) ----> [1](vscode-notebook-cell:?execution_count=1&line=1) import insightface File c:\Users\Mohankrishnan\Downloads\insightface-master\insightface-master\python-package\insightface\__init__.py:18 [16](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/__init__.py:16) from . import model_zoo [17](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/__init__.py:17) from . import utils ---> [18](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/__init__.py:18) from . import app [19](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/__init__.py:19) from . import data [20](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/__init__.py:20) from . 
import thirdparty File c:\Users\Mohankrishnan\Downloads\insightface-master\insightface-master\python-package\insightface\app\__init__.py:2 [1](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/app/__init__.py:1) from .face_analysis import * ----> [2](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/app/__init__.py:2) from .mask_renderer import * File c:\Users\Mohankrishnan\Downloads\insightface-master\insightface-master\python-package\insightface\app\mask_renderer.py:8 [6](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/app/mask_renderer.py:6) from .face_analysis import FaceAnalysis [7](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/app/mask_renderer.py:7) from ..utils import get_model_dir ----> [8](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/app/mask_renderer.py:8) from ..thirdparty import face3d [9](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/app/mask_renderer.py:9) from ..data import get_image as ins_get_image [10](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/app/mask_renderer.py:10) from ..utils import DEFAULT_MP_NAME File c:\Users\Mohankrishnan\Downloads\insightface-master\insightface-master\python-package\insightface\thirdparty\face3d\__init__.py:3 [1](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/thirdparty/face3d/__init__.py:1) #import mesh ... 
----> [9](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/thirdparty/face3d/mesh/__init__.py:9) from .cython import mesh_core_cython [10](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/thirdparty/face3d/mesh/__init__.py:10) from . import io [11](file:///C:/Users/Mohankrishnan/Downloads/insightface-master/insightface-master/python-package/insightface/thirdparty/face3d/mesh/__init__.py:11) from . import vis ImportError: cannot import name 'mesh_core_cython' from 'insightface.thirdparty.face3d.mesh.cython' (unknown location) when i go the `'insightface.thirdparty.face3d.mesh.cython` this file ![Image](https://github.com/user-attachments/assets/d0b56b1e-33a3-4a75-ae2c-6253ef126425) those cpp. c files how to import those?
open
2025-03-21T06:36:39Z
2025-03-21T06:39:34Z
https://github.com/deepinsight/insightface/issues/2739
[]
Mohankrish08
0
dolevf/graphw00f
graphql
44
Headers are set empty before using them
In main.py, line 90, only the header values passed as parameters to the script are considered https://github.com/dolevf/graphw00f/blob/701e4c16262481f9a7094a4b142f435a961497f5/main.py#L90 This leads the script to ignore the values set in the `HEADERS` variable in `conf.py`.
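A sketch of the intended precedence (the names `CONF_HEADERS` and `build_headers` are illustrative, not graphw00f's actual code):

```python
# Start from the config-file HEADERS and let CLI-supplied headers override
# them, instead of discarding the config values entirely.
CONF_HEADERS = {"User-Agent": "graphw00f"}

def build_headers(cli_headers=None):
    headers = dict(CONF_HEADERS)        # keep conf.py values as the base
    headers.update(cli_headers or {})   # CLI values win on conflict
    return headers
```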
closed
2024-09-13T19:41:55Z
2024-10-01T22:07:00Z
https://github.com/dolevf/graphw00f/issues/44
[]
azuax
2
Evil0ctal/Douyin_TikTok_Download_API
api
30
Douyin image gallery parsing failed
Function name | Cause | Input value
Scraper.douyin() | local variable 'album_music' referenced before assignment | https://v.douyin.com/XXXXXX/
closed
2022-05-21T18:20:10Z
2022-05-22T09:38:46Z
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/30
[]
mrzhaoer
3
mitmproxy/mitmproxy
python
6,303
Source Distributions for mitmproxy_rs
#### Problem Description On Windows, mitmproxy_rs currently bundles [windows-redirector.exe](https://github.com/mitmproxy/mitmproxy_rs/tree/main/windows-redirector) for OS proxy mode. This executable is built as part of CI, and then copied over into the Python package in build.rs. @emanuele-em is in the process of introducing similar shenanigans on macOS for traffic redirection (where it's going to be a Swift app). As part of https://github.com/mitmproxy/mitmproxy/issues/6299, I realized that this actually is a problem for source distributions (sdists): As we currently build sdists on Linux, windows-redirector.exe is not built during CI and the produced sdist is incomplete for Windows users. The same problem will appear on macOS. We need to fix that somehow. Some proposals to address this: 1. We could somehow add the platform-specific sources to the sdist and have platforms-specific build steps in the sdist. The downside of this approach is that we would basically need to hijack `maturin build` to add custom steps around it. I'd outright reject this approach due to the complexity it brings. 1. We could make them mitmproxy_rs CI a two stage process: The first stage builds the platform-specific parts (windows-redirector.exe on Windows, macos-redirector.app on macOS, etc.), and then the second stage pulls in all these artifacts and includes them all in the sdist. One downside here is that the sdist is partially a binary distribution and not really a _source_ distribution. This would be the easiest approach for us, but downstream would rightfully complain I guess. 2. Another alternative would be to start producing `mitmproxy_rs_windows`/`mitmproxy_rs_macos` packages containing the platform-specific bits, and have `mitmproxy_rs` declare conditional dependencies on these. This would keep things nicely separated, and we could actually avoid the build.rs lets-copy-over-binaries nonsense. The downside is that we have more Python packages and more related compexity. 
@mitmproxy/devs, @emanuele-em, any thoughts?
closed
2023-08-07T14:04:26Z
2023-08-23T09:43:31Z
https://github.com/mitmproxy/mitmproxy/issues/6303
[ "area/infra", "RFC", "area/rust" ]
mhils
6
xlwings/xlwings
automation
1,772
Hi I cannot open a saved file with xlwings, it looks like it's not responding and creating error.
Hi, I cannot open a saved file with xlwings; it looks like it's not responding and it creates an error. I also used r at the beginning of the code line. I am working with Windows. Please see below: line 1 "C:\\\Users\\\finance\\\Desktop\\\Test1p.xlsx" File "<ipython-input-71-817a3f5000db>", line 1 SyntaxError: invalid character in identifier
closed
2021-11-23T20:55:36Z
2022-05-21T18:00:03Z
https://github.com/xlwings/xlwings/issues/1772
[]
ivonkal
1
OFA-Sys/Chinese-CLIP
nlp
124
Script run error
Hello, I placed the dataset in the corresponding folder as instructed and ran the shell script after configuring the parameters, but it errored out. Could you please take a look at what the cause might be? bash run_scripts/muge_finetune_vit-b-16_rbt-base.sh datadata /opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated and will be removed in future. Use torchrun. Note that --use_env is set by default in torchrun. If your script expects `--local_rank` argument to be set, please change it to read from `os.environ['LOCAL_RANK']` instead. See https://pytorch.org/docs/stable/distributed.html#launch-utility for further instructions warnings.warn( Loading vision model config from cn_clip/clip/model_configs/ViT-B-16.json Loading text model config from cn_clip/clip/model_configs/RoBERTa-wwm-ext-base-chinese.json Traceback (most recent call last): File "cn_clip/training/main.py", line 301, in <module> main() File "cn_clip/training/main.py", line 134, in main find_unused_parameters = torch_version_str_compare_lessequal(torch.__version__, "1.8.0") File "cn_clip/training/main.py", line 40, in torch_version_str_compare_lessequal v1 = [int(entry) for entry in version1.split("+")[0].split(".")] File "cn_clip/training/main.py", line 40, in <listcomp> v1 = [int(entry) for entry in version1.split("+")[0].split(".")] ValueError: invalid literal for int() with base 10: '0a0' ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 29322) of binary: /opt/conda/bin/python Traceback (most recent call last): File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module> main() File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main launch(args) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
run(args) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run elastic_launch( File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: cn_clip/training/main.py FAILED Failures: <NO_OTHER_FAILURES> Root Cause (first observed failure): [0]: time : 2023-05-29_00:05:28 host : task-20230528140505-13208 rank : 0 (local_rank: 0) exitcode : 1 (pid: 29322) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
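The ValueError comes from `int('0a0')` failing on a pre-release torch version string. For illustration, a more tolerant version comparison that survives suffixes like `0a0` might look like the sketch below (illustrative only, not Chinese-CLIP's actual code):

```python
import re

def torch_version_lessequal(version1, version2):
    """Strip any non-digit suffix (e.g. the 'a0' in '1.14.0a0+410ce96')
    before converting each component to int."""
    def parse(v):
        parts = []
        for entry in v.split("+")[0].split("."):
            m = re.match(r"\d+", entry)       # leading digits only
            parts.append(int(m.group()) if m else 0)
        return parts
    return parse(version1) <= parse(version2)
```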
closed
2023-05-29T00:08:36Z
2023-05-31T02:48:25Z
https://github.com/OFA-Sys/Chinese-CLIP/issues/124
[]
huhuhuqia
4
Yorko/mlcourse.ai
data-science
608
Potentially incorrect statement about .map vs .replace in topic1_pandas_data_analysis.ipynb
In the **Applying Functions to Cells, Columns and Rows** section of the topic1_pandas_data_analysis.ipynb exercise, when explaining how to replace values in a column, it is stated that `.replace` does the same thing as `.map`, which, I think, is only partially correct. While `.map` applied to the dataframe produces NaN values for keys not found in the map, `.replace` only updates values matching keys in the map.
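The difference can be demonstrated with a short snippet:

```python
import pandas as pd

s = pd.Series(["a", "b", "c"])
mapping = {"a": 1}

mapped = s.map(mapping)        # keys absent from the dict become NaN
replaced = s.replace(mapping)  # values with no matching key stay as-is
```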
closed
2019-09-04T19:06:22Z
2019-09-08T13:59:36Z
https://github.com/Yorko/mlcourse.ai/issues/608
[]
andrei-khveras
1
assafelovic/gpt-researcher
automation
909
SearXNG: Exception: Tavily API key not found. Please set the TAVILY_API_KEY environment variable.
**Describe the bug** This is the log: ```log INFO: [11:03:05] 🔍 Running research for ... ERROR: Exception in ASGI application Traceback (most recent call last): File "/home/admin/Github/gpt-researcher/gpt_researcher/retrievers/tavily/tavily_search.py", line 39, in get_api_key api_key = os.environ["TAVILY_API_KEY"] ~~~~~~~~~~^^^^^^^^^^^^^^^^^^ File "<frozen os>", line 679, in __getitem__ KeyError: 'TAVILY_API_KEY' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 242, in run_asgi result = await self.app(self.scope, self.asgi_receive, self.asgi_send) # type: ignore[func-returns-value] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__ return await self.app(scope, receive, send) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__ await super().__call__(scope, receive, send) File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/applications.py", line 113, in __call__ await self.middleware_stack(scope, receive, send) File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/middleware/errors.py", line 152, in __call__ await self.app(scope, receive, send) File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/middleware/cors.py", line 77, in __call__ await self.app(scope, receive, send) File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__ await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send) File 
"/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app raise exc File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app await app(scope, receive, sender) File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/routing.py", line 715, in __call__ await self.middleware_stack(scope, receive, send) File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/routing.py", line 735, in app await route.handle(scope, receive, send) File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/routing.py", line 362, in handle await self.app(scope, receive, send) File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/routing.py", line 95, in app await wrap_app_handling_exceptions(app, session)(scope, receive, send) File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app raise exc File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app await app(scope, receive, sender) File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/starlette/routing.py", line 93, in app await func(session) File "/home/admin/Github/gpt-researcher/venv/lib/python3.11/site-packages/fastapi/routing.py", line 383, in app await dependant.call(**solved_result.values) File "/home/admin/Github/gpt-researcher/backend/server/server.py", line 142, in websocket_endpoint await handle_websocket_communication(websocket, manager) File "/home/admin/Github/gpt-researcher/backend/server/server_utils.py", line 117, in handle_websocket_communication await handle_start_command(websocket, data, manager) File "/home/admin/Github/gpt-researcher/backend/server/server_utils.py", line 28, in handle_start_command 
report = await manager.start_streaming( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/admin/Github/gpt-researcher/backend/server/websocket_manager.py", line 61, in start_streaming report = await run_agent(task, report_type, report_source, source_urls, tone, websocket, headers) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/admin/Github/gpt-researcher/backend/server/websocket_manager.py", line 95, in run_agent report = await researcher.run() ^^^^^^^^^^^^^^^^^^^^^^ File "/home/admin/Github/gpt-researcher/backend/report_type/basic_report/basic_report.py", line 41, in run await researcher.conduct_research() File "/home/admin/Github/gpt-researcher/gpt_researcher/master/agent/master.py", line 82, in conduct_research self.context = await self.research_conductor.conduct_research() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/admin/Github/gpt-researcher/gpt_researcher/master/agent/researcher.py", line 73, in conduct_research self.researcher.context = await self.__get_context_by_search(self.researcher.query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/admin/Github/gpt-researcher/gpt_researcher/master/agent/researcher.py", line 161, in __get_context_by_search context = await asyncio.gather( ^^^^^^^^^^^^^^^^^^^^^ File "/home/admin/Github/gpt-researcher/gpt_researcher/master/agent/researcher.py", line 220, in __process_sub_query scraped_data = await self.__scrape_data_by_query(sub_query) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/admin/Github/gpt-researcher/gpt_researcher/master/agent/researcher.py", line 275, in __scrape_data_by_query retriever = retriever_class(sub_query) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/admin/Github/gpt-researcher/gpt_researcher/retrievers/tavily/tavily_search.py", line 25, in __init__ self.api_key = self.get_api_key() ^^^^^^^^^^^^^^^^^^ File "/home/admin/Github/gpt-researcher/gpt_researcher/retrievers/tavily/tavily_search.py", line 41, in 
get_api_key raise Exception( Exception: Tavily API key not found. Please set the TAVILY_API_KEY environment variable. INFO: connection closed ``` **To Reproduce** This is the `.env`: ```bash #export TAVILY_API_KEY=tvly-********** #export RETRIEVER=tavily export RETRIEVER=searx export SEARX_URL="http://10.4.0.101:32768" export DOC_PATH=./my-docs export LLM_PROVIDER=ollama export OLLAMA_BASE_URL="http://10.4.0.100:11434" export FAST_LLM=ollama:llama3.2 export SMART_LLM=ollama:llama3.2 export TEMPERATURE="0.1" export EMBEDDING_PROVIDER=ollama export OLLAMA_EMBEDDING_MODEL=nomic-embed-text ``` **Expected behavior** A clear and concise description of what you expected to happen. **Screenshots** If applicable, add screenshots to help explain your problem. **Desktop (please complete the following information):** - OS: Debian 12 LXC on Proxmox VE 8.2 - Browser LibreWolf (Firefox) - Version - **Smartphone (please complete the following information):** - Device: - - OS: - - Browser - - Version - **Additional context** Add any other context about the problem here.
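A defensive lookup sketch (a hypothetical helper, not part of gpt-researcher) matching what the traceback suggests: read the key with `os.environ.get` so that a missing `TAVILY_API_KEY` only fails, with an actionable message, when the Tavily retriever is actually selected:

```python
import os

def get_required_env(name: str, hint: str = "") -> str:
    """Return the value of an environment variable or raise a clear error."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} is not set. {hint}".strip())
    return value

# Only the selected retriever should demand its own key.
os.environ["RETRIEVER"] = "searx"
if os.environ["RETRIEVER"] == "tavily":
    api_key = get_required_env("TAVILY_API_KEY", "Set it in your .env file.")
else:
    api_key = None  # searx needs SEARX_URL instead
```

With `RETRIEVER=searx` as in the `.env` above, no Tavily key is required at all, which is the behavior the reporter expected.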
closed
2024-10-14T09:17:41Z
2024-11-09T20:21:08Z
https://github.com/assafelovic/gpt-researcher/issues/909
[]
PieBru
5
nltk/nltk
nlp
3,324
Unable to use word_tokenize function
This is my first time working on an NLP project, and I'm unable to use the `word_tokenize` function, which throws an error. I tried the following code to resolve it: import nltk nltk.download('punkt') nltk.download('stopwords') but the code above also throws an error. How do I solve this? An image is attached below. ![Screenshot 2024-09-13 152742](https://github.com/user-attachments/assets/22f00b37-69e9-4144-9d73-903e00667601)
closed
2024-09-13T14:34:31Z
2024-09-25T06:38:12Z
https://github.com/nltk/nltk/issues/3324
[]
beingEniola
4
gunthercox/ChatterBot
machine-learning
1,534
chatterbot 1.0.0a3: how to train the django_app example
closed
2018-12-17T06:35:34Z
2019-10-13T09:45:44Z
https://github.com/gunthercox/ChatterBot/issues/1534
[]
ayershub
1
Evil0ctal/Douyin_TikTok_Download_API
fastapi
230
[BUG] I don't understand coding
I don't understand coding, so I use the douyin.wtf web app. When I click "video download no watermark" I get the result shown below; how can I download the video? It does work if I click "video url no watermark", but the video is saved with a random title. What if I want the video title to match the title on Douyin? I download a lot of videos one by one, and renaming them is very time-consuming. ![image_906](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/assets/132964012/8aec7331-6ab3-40f1-b455-34d8f9ac375f) ![image_907](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/assets/132964012/72777528-6699-461f-9a2d-4e759ed789d9) ![image_908](https://github.com/Evil0ctal/Douyin_TikTok_Download_API/assets/132964012/b90e0774-3732-417f-af59-8c63e62c0cbc)
closed
2023-07-30T09:53:58Z
2023-08-04T09:31:08Z
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/230
[ "help wanted" ]
radenhendriawan
1
python-restx/flask-restx
api
401
Error handlers registered to a namespace are not confined to that namespace
https://github.com/python-restx/flask-restx/blob/88497ced96674916403fa7829de693eaa3485a08/flask_restx/api.py#L589 api.py harvests all error handlers from all namespaces, and creates a dictionary from exception to handler. However, the dictionary does not keep track of namespaces --> therefore, the namespace error handlers apply to matching exceptions from *any* namespace. Instead, the error handlers registered to specific namespaces should only catch the relevant exceptions when in the context of the registered namespace.
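A minimal sketch (plain Python, not flask-restx internals) of the proposed fix: key the handler map by `(namespace, exception)` and fall back to API-level handlers only when the current namespace has no match:

```python
class HandlerRegistry:
    def __init__(self):
        # (namespace_name or None for API-level, exception_class) -> handler
        self._handlers = {}

    def register(self, exc_type, handler, namespace=None):
        self._handlers[(namespace, exc_type)] = handler

    def resolve(self, exc, namespace):
        """Look up namespace-scoped handlers first, then API-level ones."""
        for ns in (namespace, None):
            for exc_type in type(exc).__mro__:
                handler = self._handlers.get((ns, exc_type))
                if handler is not None:
                    return handler
        return None

registry = HandlerRegistry()
registry.register(ValueError, lambda e: "ns1 handler", namespace="ns1")
registry.register(ValueError, lambda e: "global handler")
```

With this shape, a `ValueError` raised inside `ns1` uses the namespace handler, the same exception in any other namespace falls back to the API-level handler, and unregistered exceptions propagate.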
open
2021-12-29T19:09:37Z
2021-12-29T19:10:02Z
https://github.com/python-restx/flask-restx/issues/401
[]
singular-value
0
waditu/tushare
pandas
1,584
600519.SH forward-adjusted (qfq) data incorrect?
The forward-adjusted (qfq) data for 600519.SH on 20150708: df = ts.pro_bar(ts_code="600519.SH", start_date='20000101', end_date='', adj='qfq') 1512,600519.SH,**20150708,187.6672,200.7712,180.095,194.1086,200.1074**,-5.998800000000017,-2.9978,289140.54,6738026.659 None of the OHLC values match other platforms; only the trading volume does. Eastmoney, unadjusted ![600519_no_fq](https://user-images.githubusercontent.com/4914520/133873118-7e18e922-186d-455b-bedb-d1882565c85c.png) Eastmoney, forward-adjusted ![600519_qfq](https://user-images.githubusercontent.com/4914520/133873119-da482eab-9cab-4ee4-aeb4-57ceb19fb647.png)
open
2021-09-18T04:58:16Z
2021-09-18T04:58:16Z
https://github.com/waditu/tushare/issues/1584
[]
liyuling
0
opengeos/leafmap
streamlit
752
Panel Leafmap: Error: Could not process update msg for model id
I'm trying to get [leafmap](https://github.com/opengeos/leafmap) working with Panel similarly to [Solara-Leafmap](https://github.com/opengeos/solara-geospatial/blob/main/pages/01_leafmap.py). A lot of features work. But when I click the tool icon nothing visible happens. But in the console the error below is logged. ```bash Error: Could not process update msg for model id: d5ea3f69eff947928f88b58f36d3ceaa at ipywidgets_bokeh.js?v=562368b20a64be95651bb8246b7420f7fce55538dff999aefaa78662a14d0d39:8:1474147 at async _._handleCommMsg (ipywidgets_bokeh.js?v=562368b20a64be95651bb8246b7420f7fce55538dff999aefaa78662a14d0d39:2:623644) at async _._handleMessage (ipywidgets_bokeh.js?v=562368b20a64be95651bb8246b7420f7fce55538dff999aefaa78662a14d0d39:2:625144) ``` ![image](https://github.com/bokeh/ipywidgets_bokeh/assets/42288570/2bc0934a-7b12-4097-8a6d-6bc159d77a53) ```python import leafmap import panel as pn pn.extension("ipywidgets") widget = leafmap.Map() layout = pn.Column( widget, ).servable() ``` You can see how it should work [here](https://giswqs-solara-geospatial.hf.space/leafmap) ![image](https://github.com/bokeh/ipywidgets_bokeh/assets/42288570/aa9c9e66-019d-43be-b8ce-ecca6b061f44) I don't know if this issue is caused by `ipywidgets_bokeh` or by `leafmap`. 
Thus I have crossposted in [ipywidgets_bokeh #106](https://github.com/bokeh/ipywidgets_bokeh/issues/106) ```bash Name: bokeh Version: 3.4.1 Location: /home/jovyan/repos/private/panel/.venv/lib/python3.11/site-packages Requires: contourpy, jinja2, numpy, packaging, pandas, pillow, pyyaml, tornado, xyzservices Required-by: ipywidgets-bokeh, panel --- Name: ipywidgets-bokeh Version: 1.6.0 Location: /home/jovyan/repos/private/panel/.venv/lib/python3.11/site-packages Requires: bokeh, ipykernel, ipywidgets Required-by: --- Name: leafmap Version: 0.33.0 Location: /home/jovyan/repos/private/panel/.venv/lib/python3.11/site-packages Requires: bqplot, colour, duckdb, folium, gdown, geojson, ipyevents, ipyfilechooser, ipyleaflet, ipywidgets, matplotlib, numpy, pandas, plotly, pyshp, pystac-client, python-box, scooby, whiteboxgui, xyzservices Required-by: --- Name: panel Version: 1.5.0a3.post1.dev124+g5b59d248.d20240606 Location: /home/jovyan/repos/private/panel/.venv/lib/python3.11/site-packages Editable project location: /home/jovyan/repos/private/panel Requires: bleach, bokeh, linkify-it-py, markdown, markdown-it-py, mdit-py-plugins, packaging, pandas, param, pyviz-comms, requests, tqdm, typing-extensions Required-by: ```
closed
2024-06-12T15:11:50Z
2024-07-21T02:08:53Z
https://github.com/opengeos/leafmap/issues/752
[ "bug" ]
MarcSkovMadsen
3
iMerica/dj-rest-auth
rest-api
65
Add new endpoint to verify password reset confirm token
Hello, For web clients it's essential to have an endpoint to verify a password reset confirm token, so the client can easily and securely route to the password reset confirm form page and render it for the user to interact with.
open
2020-05-11T08:11:47Z
2020-05-19T07:40:56Z
https://github.com/iMerica/dj-rest-auth/issues/65
[ "enhancement", "question" ]
mohmyo
5
keras-team/keras
python
20,946
validation_split
Hi, if validation_split (of model.fit) is assigned a percentage, does Keras shuffle the entire training data _before_ splitting it? Sorry if this is a repeated question; I saw some discussion from years ago saying that it does not shuffle, and the Keras docs do not explicitly mention shuffling in the validation_split section. But ChatGPT says that in more recent versions of Keras (later than 2.2.3) validation_split shuffles the entire data before splitting. Thanks for the help.
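For reference, the Keras documentation states that the validation data is selected from the *last* samples of the provided arrays, before any shuffling; `shuffle=True` then applies only to the remaining training portion each epoch. A plain-Python sketch of that slicing rule (my reading of the docs; worth double-checking against your installed Keras version):

```python
def split_train_val(x, validation_split):
    """Mimic Keras: validation data is the trailing fraction, taken before shuffling."""
    n_val = int(len(x) * validation_split)
    split_at = len(x) - n_val
    return x[:split_at], x[split_at:]

data = list(range(10))
train, val = split_train_val(data, 0.2)
```

Because the split happens before shuffling, ordered or class-sorted data should be shuffled manually before calling `fit` with `validation_split`.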
closed
2025-02-23T08:49:35Z
2025-02-24T15:19:21Z
https://github.com/keras-team/keras/issues/20946
[ "type:support" ]
cuneyt76
6
apify/crawlee-python
web-scraping
695
encoding errors when using the BeautifulSoupCrawlingContext
When running a crawler using the BeautifulSoupCrawlingContext, I am getting unfixable encoding errors. They are thrown even before the handler function is called. "encoding error : input conversion failed due to input error, bytes 0xEB 0x85 0x84 0x20" ``` async def main() -> None: # async with Actor: crawler = BeautifulSoupCrawler() @crawler.router.default_handler async def request_handler(context: BeautifulSoupCrawlingContext) -> None: url = context.request.url print(f"Processing URL: {url}") ``` The error occurs in about 30% of requests when trying to scrape reviews from booking. Some example links for replication: https://www.booking.com/reviewlist.en-gb.html?cc1=cz&pagename=hotel-don-giovanni-prague&rows=25&sort=f_recent_desc&offset=25 https://www.booking.com/reviewlist.en-gb.html?cc1=cz&pagename=hotel-don-giovanni-prague&rows=25&sort=f_recent_desc&offset=50 I found relevant issue stating "Libxml2 does not support the GB2312 encoding so a way to get around this problem is to convert it to utf-8. I did it and it works for me:" https://github.com/mitmproxy/mitmproxy/issues/657 but I did not manage to fix your BeautifulSoupCrawlingContext code by specifying the encoding.
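A best-effort decoding sketch (a hypothetical workaround along the lines of the mitmproxy suggestion, not a Crawlee API) that tries UTF-8 first, then common Korean encodings for pages like these, and falls back to replacement characters rather than erroring:

```python
def decode_best_effort(raw: bytes, encodings=("utf-8", "euc-kr", "cp949")) -> str:
    """Try each encoding in turn; never raise on undecodable input."""
    for enc in encodings:
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            continue
    # Last resort: substitute undecodable bytes with U+FFFD.
    return raw.decode("utf-8", errors="replace")

# The bytes from the error message (0xEB 0x85 0x84) are in fact valid UTF-8
# for the Hangul syllable '년', so the input itself may be fine.
text = decode_best_effort(b"\xeb\x85\x84")
```

That the reported bytes decode cleanly as UTF-8 suggests the failure is in the parser backend (libxml2) rather than in the response bytes themselves.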
closed
2024-11-13T23:20:22Z
2024-11-25T20:06:12Z
https://github.com/apify/crawlee-python/issues/695
[ "t-tooling" ]
Rigos0
10
lonePatient/awesome-pretrained-chinese-nlp-models
nlp
22
What do you all think is currently the best-performing open-source Chinese large language model?
open
2023-10-19T13:17:29Z
2023-10-30T03:11:12Z
https://github.com/lonePatient/awesome-pretrained-chinese-nlp-models/issues/22
[]
zhawenxuan
1
JaidedAI/EasyOCR
deep-learning
766
UserWarning from torchvision
I am running the demo in the readme, but I got several warnings. ``` import easyocr reader = easyocr.Reader(['en']) result = reader.readtext(imgpath + 'sample.png', detail = 0) print(result) ``` And I got the following warnings: ```C:\Users\22612\AppData\Local\Programs\Python\Python39\lib\site-packages\torchvision\models\_utils.py:252: UserWarning: Accessing the model URLs via the internal dictionary of the module is deprecated since 0.13 and will be removed in 0.15. Please access them via the appropriate Weights Enum instead. warnings.warn( C:\Users\22612\AppData\Local\Programs\Python\Python39\lib\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead. warnings.warn( C:\Users\22612\AppData\Local\Programs\Python\Python39\lib\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=None`. warnings.warn(msg)```
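If the goal is just to silence these (they are informational deprecation notices from torchvision, not EasyOCR errors), the standard-library `warnings` module can filter them. A sketch, with a locally defined warning standing in for the real torchvision call:

```python
import warnings

def noisy():
    # Stand-in for the torchvision model loading that emits the UserWarnings.
    warnings.warn("The parameter 'pretrained' is deprecated", UserWarning)
    return "ok"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore", UserWarning)  # drop UserWarnings in this block
    result = noisy()
```

Scoping the filter to a `with` block keeps other, potentially useful warnings visible elsewhere in the program.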
open
2022-06-29T16:04:11Z
2023-06-22T14:14:24Z
https://github.com/JaidedAI/EasyOCR/issues/766
[ "PR WELCOME" ]
majunze2001
5
Neoteroi/BlackSheep
asyncio
185
GoogleDoc style to describe parameters doesn't work properly.
**Describe the bug** GoogleDoc style to describe parameters doesn't work properly. Consider the following example: ```python @app.router.get("/api/orders") async def get_orders( page: FromQuery[int] = FromQuery(1), page_size: FromQuery[int] = FromQuery(30), search: FromQuery[str] = FromQuery(""), ) -> PaginatedSet[Order]: """ Returns a paginated set of orders. Args: page: Page number. page_size: The number of items to display per page. search: Optional text search. """ ```
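A rough sketch of what parsing the Google-style `Args:` section involves (plain `re`, illustrative only, not BlackSheep's actual parser), useful for checking whether a given docstring is shaped the way a parser would expect:

```python
import re

def parse_google_args(docstring: str) -> dict:
    """Extract {param: description} from a Google-style Args: section."""
    match = re.search(r"Args:\n((?:\s+\w+:.*\n?)+)", docstring)
    if not match:
        return {}
    params = {}
    for line in match.group(1).splitlines():
        name, _, desc = line.strip().partition(":")
        if name:
            params[name] = desc.strip()
    return params

doc = """
Returns a paginated set of orders.

Args:
    page: Page number.
    page_size: The number of items to display per page.
    search: Optional text search.
"""
params = parse_google_args(doc)
```

The docstring in the report looks well-formed under this reading, which points at the framework's parser rather than the docstring itself.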
open
2021-07-16T18:00:55Z
2021-07-16T18:01:10Z
https://github.com/Neoteroi/BlackSheep/issues/185
[ "low priority" ]
RobertoPrevato
0
FactoryBoy/factory_boy
sqlalchemy
224
UserFactory example / performance tips
Hi, we're using factory_boy to generate test data during development and were running into unusual performance issues when it came to generating users. After some profiling, the culprit turned out to be a post generation hook on `set_password()`. To generate 1000 users, it was taking ~134 seconds. Since we don't care about unique passwords, generating the password once and reusing it dropped the time to 3 seconds. Kind of obvious in retrospect (who knew password hashers were intentionally slow `¯\_(ツ)_/¯`), but figured it was worth mentioning in case someone else ran into a similar 'performance' issue. Not sure if it's worth adding a tip/warning to the documentation. ``` python from django.contrib.auth.hashers import make_password class UserFactory(DjangoModelFactory): ... class Meta: model = models.User @factory.post_generation def password(self, create, extracted, **kwargs): if extracted is None: self.password = UserFactory.PASSWORD else: self.password = make_password(extracted) UserFactory.PASSWORD = make_password('password') ``` Edit: updated password method to be correct
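The same trick generalizes beyond Django: hash once, reuse everywhere. A stdlib-only sketch using `hashlib.pbkdf2_hmac` (deliberately slow, like Django's password hashers) with `functools.lru_cache`, assuming unique per-user passwords don't matter in test data:

```python
import functools
import hashlib

@functools.lru_cache(maxsize=None)
def make_password(raw: str, salt: bytes = b"test-salt", iterations: int = 200_000) -> str:
    """Intentionally slow KDF; lru_cache makes repeated calls essentially free."""
    digest = hashlib.pbkdf2_hmac("sha256", raw.encode(), salt, iterations)
    return digest.hex()

# The first call pays the KDF cost; the next 999 users reuse the cached hash.
h1 = make_password("password")
h2 = make_password("password")
```

Fixing the salt is fine for test fixtures but defeats the purpose in production, so a helper like this belongs in factory code only.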
closed
2015-08-12T03:11:20Z
2015-08-12T19:54:27Z
https://github.com/FactoryBoy/factory_boy/issues/224
[]
rpkilby
2
JaidedAI/EasyOCR
pytorch
493
easyocr crash On CPU while using high resolution images
The program is killed automatically: on high-resolution images EasyOCR consumes all available RAM, then stalls for a while before the process is killed.
closed
2021-07-20T03:47:40Z
2021-07-21T07:21:59Z
https://github.com/JaidedAI/EasyOCR/issues/493
[]
Rushi07555
1
BlinkDL/RWKV-LM
pytorch
32
Question about the training compute
Great work! I am working on a survey and would be interested to know the total training compute (FLOPs) of the RWKV-14B model. What was the training time (GPU-hours), and on how many A100s? Also, any idea of the GPU utilization rate?
closed
2023-02-17T09:04:27Z
2023-02-17T10:02:35Z
https://github.com/BlinkDL/RWKV-LM/issues/32
[]
ogencoglu
1
frappe/frappe
rest-api
29,851
Backport python3.13
Backport python3.13 support to version-15
open
2025-01-19T18:34:52Z
2025-01-19T18:34:52Z
https://github.com/frappe/frappe/issues/29851
[ "feature-request" ]
mahsem
0
litestar-org/litestar
asyncio
3,492
Docs: Migrating to Litestar from Django
### Summary Would be great to have some summary for Django users about what's possible with Litestar already w.r.t. functionality used in a typical Django project in the [docs](https://docs.litestar.dev/latest/migration/index.html).
open
2024-05-14T11:59:24Z
2025-03-20T15:54:42Z
https://github.com/litestar-org/litestar/issues/3492
[ "Documentation :books:" ]
fkromer
1
pydata/pandas-datareader
pandas
854
Timezone not taken into consideration
https://github.com/pydata/pandas-datareader/blob/90f155ac6dcfa53a81441d8886d306c3790049bb/pandas_datareader/yahoo/daily.py#L128 _get_params() of YahooDailyReader doesn't consider local timezone of the request (nor the timezone of the requested data), it just does a hard shift by 4 hours.
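For comparison, computing the request timestamps with explicit timezones instead of a hard-coded 4-hour shift might look like the sketch below (standard library only; how Yahoo's endpoint interprets the epoch values is an assumption here):

```python
from datetime import datetime, timezone, timedelta

def to_epoch(dt: datetime, tz: timezone) -> int:
    """Attach the market's timezone and convert to a UTC Unix timestamp."""
    return int(dt.replace(tzinfo=tz).timestamp())

# US/Eastern standard time as a fixed offset; real code would need DST handling
# (e.g. zoneinfo.ZoneInfo("America/New_York")) rather than a constant -5 hours.
eastern_winter = timezone(timedelta(hours=-5))
start = to_epoch(datetime(2021, 1, 4, 0, 0), eastern_winter)
```

The point is that the offset then comes from the timezone of the requested data, not from a fixed `+4h` baked into `_get_params()`.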
open
2021-02-26T21:54:18Z
2021-02-26T22:10:39Z
https://github.com/pydata/pandas-datareader/issues/854
[]
mickbo32
0
dgtlmoon/changedetection.io
web-scraping
2,450
[feature] Extend "Visual filter selector" with a way to deselect elements (add to 'ignore elements')
As mentioned in [this GitHub FR](https://github.com/dgtlmoon/changedetection.io/issues/550) and specifically in [this comment](https://github.com/dgtlmoon/changedetection.io/issues/550#issuecomment-1109410438), the current Visual Filter Selector lacks a feature to deselect elements, unlike Distill. At present, users must manually identify the CSS selector they wish to exclude and add it themselves. Implementing this functionality in the UI would be a significant improvement.
closed
2024-06-30T12:56:49Z
2024-07-10T11:29:05Z
https://github.com/dgtlmoon/changedetection.io/issues/2450
[ "enhancement" ]
Shasoosh
2
voila-dashboards/voila
jupyter
866
Request: Attach a template call to the Voila button in a Jupyter Notebook
# Goal I want to create a single workflow between developers and non-programmer dashboard users. For this to work, starting the dashboard should involve no terminal. This has to run all local. # Idea 1. Setup [nbopen](https://github.com/takluyver/nbopen) to allow a user to double click to open a .ipynb (made by the developer) in a web browser. 2. Click the Voila button to render the dashboard (A single double click would be possible if this issue is implemented: https://github.com/voila-dashboards/voila/issues/692) # Ask for help The Voila button renders the Dashboard with the default template, but I would like it to be beautifully based on voila-vuetify (https://github.com/voila-dashboards/voila-vuetify). In a terminal it would look like: ``` voila --template vuetify-default my_GUI.ipynb ``` Is it possible to attach the `--template vuetify-default` argument to the Voila button in a Jupyter Notebook? If not, where should I look in the code to make this modification?
open
2021-04-07T13:08:10Z
2021-04-22T04:10:49Z
https://github.com/voila-dashboards/voila/issues/866
[]
NumesSanguis
1
uriyyo/fastapi-pagination
fastapi
1,028
AssertionError: missing pagination params
Hi, love the lib!! I'm upgrading the dependencies versions in my repo to use python 3.12 and getting this error: ``` /Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/bin/python -m uvicorn sample_service.main:app --reload --host=0.0.0.0 --port=10020 INFO: Will watch for changes in these directories: ['/Users/.../Software/GitLab/sample-service'] INFO: Uvicorn running on http://0.0.0.0:10020 (Press CTRL+C to quit) INFO: Started reloader process [45664] using WatchFiles Process SpawnProcess-1: Traceback (most recent call last): File "/Users/.../.pyenv/versions/3.12.1/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap self.run() File "/Users/.../.pyenv/versions/3.12.1/lib/python3.12/multiprocessing/process.py", line 108, in run self._target(*self._args, **self._kwargs) File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/uvicorn/_subprocess.py", line 78, in subprocess_started target(sockets=sockets) File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/uvicorn/server.py", line 62, in run return asyncio.run(self.serve(sockets=sockets)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/.../.pyenv/versions/3.12.1/lib/python3.12/asyncio/runners.py", line 194, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File "/Users/.../.pyenv/versions/3.12.1/lib/python3.12/asyncio/runners.py", line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/uvicorn/server.py", line 69, in serve config.load() File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/uvicorn/config.py", line 458, in load self.loaded_app = import_from_string(self.app) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/uvicorn/importer.py", line 21, in import_from_string module = importlib.import_module(module_str) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/.../.pyenv/versions/3.12.1/lib/python3.12/importlib/__init__.py", line 90, in import_module return _bootstrap._gcd_import(name[level:], package, level) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "<frozen importlib._bootstrap>", line 1387, in _gcd_import File "<frozen importlib._bootstrap>", line 1360, in _find_and_load File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 935, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 994, in exec_module File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed File "/Users/.../Software/GitLab/sample-service/sample_service/main.py", line 41, in <module> app = get_application() ^^^^^^^^^^^^^^^^^ File "/Users/.../Software/GitLab/sample-service/sample_service/main.py", line 37, in get_application add_pagination(application) File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/fastapi_pagination/api.py", line 366, in add_pagination _add_pagination(parent) File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/fastapi_pagination/api.py", line 362, in _add_pagination _update_route(route) File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/fastapi_pagination/api.py", line 345, in _update_route get_parameterless_sub_dependant( File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/fastapi/dependencies/utils.py", line 124, in get_parameterless_sub_dependant return get_sub_dependant(depends=depends, 
dependency=depends.dependency, path=path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/fastapi/dependencies/utils.py", line 147, in get_sub_dependant sub_dependant = get_dependant( ^^^^^^^^^^^^^^ File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/fastapi/dependencies/utils.py", line 268, in get_dependant sub_dependant = get_param_sub_dependant( ^^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/fastapi/dependencies/utils.py", line 111, in get_param_sub_dependant return get_sub_dependant( ^^^^^^^^^^^^^^^^^^ File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/fastapi/dependencies/utils.py", line 147, in get_sub_dependant sub_dependant = get_dependant( ^^^^^^^^^^^^^^ File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/fastapi/dependencies/utils.py", line 289, in get_dependant add_param_to_fields(field=param_field, dependant=dependant) File "/Users/.../Library/Caches/pypoetry/virtualenvs/sample-service-n-Or4Wts-py3.12/lib/python3.12/site-packages/fastapi/dependencies/utils.py", line 483, in add_param_to_fields assert ( AssertionError: non-body parameters must be in path, query, header or cookie: size ``` Versions: ``` [tool.poetry.dependencies] python = "^3.12" fastapi = { version = "^0.109.2", extras = ["all"] } pydantic = "^2.6.1" fastapi-pagination = "^0.12.15" uvicorn = "^0.27.0" ``` It worked fine in with the versions I used before: ``` [tool.poetry.dependencies] python = "^3.10" fastapi = { version = "^0.95.1", extras = ["all"] } pydantic = "^1.10.2" fastapi-pagination = "^0.11.1" uvicorn = "^0.22.0" ``` Any ideas what could be the issue?
closed
2024-02-14T14:52:39Z
2024-02-15T06:44:18Z
https://github.com/uriyyo/fastapi-pagination/issues/1028
[]
dor1202
2
man-group/notebooker
jupyter
176
How do you guys keep up with the deprecated npm packages?
I was spinning up the Docker Compose setup to test out Notebooker and it seems some npm packages are deprecated. How do you check for and update them during installation? Just curious, I'm new to this stuff.
open
2024-04-09T22:33:24Z
2024-04-09T22:33:24Z
https://github.com/man-group/notebooker/issues/176
[]
bojrick
0
QuivrHQ/quivr
api
3,460
File Parsing through MegaParse SDK
closed
2024-11-07T08:29:42Z
2024-11-08T14:07:32Z
https://github.com/QuivrHQ/quivr/issues/3460
[]
chloedia
1
samuelcolvin/dirty-equals
pytest
59
`IsFloat` should have an `exact` argument
allowing you to check that something is a float and also check its exact value.
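A minimal sketch of what such an `exact` argument could look like (hypothetical, not the dirty-equals implementation):

```python
class IsFloat:
    """Matches any float, or exactly one value when `exact` is given."""
    _unset = object()

    def __init__(self, exact=_unset):
        self.exact = exact

    def __eq__(self, other):
        if not isinstance(other, float):
            return False
        return self.exact is self._unset or other == self.exact

assert 1.5 == IsFloat()             # type check only
assert 1.5 == IsFloat(exact=1.5)    # type check plus value check
```

Using a sentinel rather than `None` as the default keeps `IsFloat(exact=None)` distinguishable from "no exact value given".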
closed
2023-04-10T08:58:06Z
2023-04-27T18:32:56Z
https://github.com/samuelcolvin/dirty-equals/issues/59
[]
samuelcolvin
0
tortoise/tortoise-orm
asyncio
1,432
Incorrect filtering using expressions:Q
**Description** I am experiencing an issue with Tortoise ORM while trying to filter users (User) based on related Performer and Project objects. The code is not working correctly and is not returning the expected results. **To Reproduce** ```python3 from tortoise import Tortoise, fields from tortoise.models import Model from tortoise.expressions import Q # User, Project, and Performer model definitions class User(Model): """User model""" id = fields.IntField(pk=True) username = fields.CharField(max_length=32, unique=True, index=True) creator: fields.ForeignKeyRelation["User"] = fields.ForeignKeyField( "models.User", null=True, on_delete=fields.SET_NULL, related_name="children" ) class Project(Model): id = fields.IntField(pk=True) creator: fields.ForeignKeyRelation[User] = fields.ForeignKeyField( "models.User", related_name="projects" ) class Performer(Model): user: fields.ForeignKeyRelation[User] = fields.ForeignKeyField( "models.User", related_name="performers" ) project: fields.ForeignKeyRelation[Project] = fields.ForeignKeyField( "models.Project", related_name="performers" ) # Initializing Tortoise and creating the schema await Tortoise.init( # type: ignore db_url="sqlite://:memory:", modules={"models": ["__main__"]}, ) await Tortoise.generate_schemas() # Creating users, projects, and their relationships user1 = await User.create(username="user1") project = await Project.create(creator=user1) user2 = await User.create(username="user2", creator=user1) await User.create(username="user3", creator=user1) await Performer.create(user=user2, project=project) # Executing queries and getting results performers = Q(performers__project=project) res1 = await project.creator.children.filter(performers) # Expected result: [<User: 2>] res2 = await project.creator.children.filter(~performers) # Expected result: [<User: 3>] children = Q(creator=project.creator) res3 = await User.filter(children & ~performers) # Expected result: [<User: 3>] ``` **Expected behavior** Upon 
executing queries res1, res2, and res3, I expect to get the following results: - res1: [<User: 2>] - res2: [<User: 3>] - res3: [<User: 3>] **Actual Behavior** However, the current behavior of the code leads to incorrect results: - res1: [<User: 2>] (This is correct) - res2: [] (Expected: [<User: 3>]) - res3: [] (Expected: [<User: 3>]) **Versions** Tortoise ORM: 0.19.3 Python: 3.11.0
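A likely culprit is how NOT interacts with the join: negating a condition on joined `performers` rows filters the rows produced by an inner join, rather than performing an anti-join ("users having no matching performer"). A pure-Python sketch of the difference (illustrative only, not Tortoise internals):

```python
users = [{"id": 2}, {"id": 3}]
performers = [{"user_id": 2, "project_id": 1}]

# NOT applied per joined row: user 3 has no performer rows at all, so the
# inner join produces nothing for it and the negated filter drops it too.
joined = [(u, p) for u in users for p in performers if p["user_id"] == u["id"]]
not_over_join = [u for (u, p) in joined if not p["project_id"] == 1]

# Anti-join: keep users with no matching performer row for the project.
anti_join = [u for u in users
             if not any(p["user_id"] == u["id"] and p["project_id"] == 1
                        for p in performers)]
```

`not_over_join` comes out empty, matching the buggy `res2`/`res3` results above, while `anti_join` produces user 3, matching the expected results.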
open
2023-07-20T19:10:27Z
2023-07-20T19:11:00Z
https://github.com/tortoise/tortoise-orm/issues/1432
[]
rilshok
0
nicodv/kmodes
scikit-learn
137
No module named 'kmodes.Kprototypes'
## Expected Behavior from kmodes.Kprototypes import KPrototypes ## Actual Behavior Traceback (most recent call last): File "<pyshell#15>", line 1, in <module> from kmodes.Kprototypes import KPrototypes ModuleNotFoundError: No module named 'kmodes.Kprototypes' ## Steps to Reproduce the Problem 1.import kmodes 2.from kmodes.Kprototypes import KPrototypes ## Specifications - Version:python3.8.0 - Platform: windows 64 Besides, when I input "import scikit-learn", it shows "SyntaxError: invalid syntax". But I have installed scikit-learn successfully already.
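Two separate naming issues seem to be at play. Module paths are case-sensitive, so per the kmodes docs the submodule is `kmodes.kprototypes` (lowercase k), and the PyPI distribution name `scikit-learn` is not even a legal Python identifier; the importable package is `sklearn`. A quick sketch of both points:

```python
# A hyphenated distribution name can never be imported directly:
assert not "scikit-learn".isidentifier()   # `import scikit-learn` is a SyntaxError
assert "sklearn".isidentifier()            # `import sklearn` is the correct form

# Module paths are case-sensitive; each dotted part must match exactly:
correct = "kmodes.kprototypes"   # from kmodes.kprototypes import KPrototypes
wrong = "kmodes.Kprototypes"     # raises ModuleNotFoundError
assert all(part.isidentifier() for part in correct.split("."))
```

So the fix is `from kmodes.kprototypes import KPrototypes` and `import sklearn`.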
closed
2019-10-28T13:55:25Z
2020-06-13T16:12:50Z
https://github.com/nicodv/kmodes/issues/137
[ "bug" ]
Jane419
2
xonsh/xonsh
data-science
4,935
Captured subprocess stderr is not readable
## xonfig

```python
(shuziren) cxu@DESKTOP-T821EV3 D:\workspace\odl-final-pipeline main
$ xconfig
xonsh: For full traceback set: $XONSH_SHOW_TRACEBACK = True
xonsh: subprocess mode: command not found: xconfig
Did you mean one of the following?
    xonfig: Alias
    XONFIG: Command (XONFIG)
```

### My code

```python
#!/usr/bin/env xonsh
# -*- coding:utf-8 -*-

def test():
    out = !(ls fuccc)
    # x = out.stderr.read().decode("utf-8")
    # print(out.stdout.read())
    print(out.errors)
    if out.returncode != 0:
        x = out.stderr.read().decode("utf-8")
        print(x)
    else:
        print("good")

if __name__ == "__main__":
    test()
```

<details>

```python
(shuziren) cxu@DESKTOP-T821EV3 D:\workspace\odl-final-pipeline main
$ xconfig
xonsh: For full traceback set: $XONSH_SHOW_TRACEBACK = True
xonsh: subprocess mode: command not found: xconfig
Did you mean one of the following?
    xonfig: Alias
    XONFIG: Command (XONFIG)

(shuziren) cxu@DESKTOP-T821EV3 D:\workspace\odl-final-pipeline main
[1] $ xonsh testerr.xsh
None
Traceback (most recent call last):
  File "testerr.xsh", line 22, in <module>
    test()
  File "testerr.xsh", line 14, in test
    x = out.stderr.read().decode("utf-8")
ValueError: I/O operation on closed file.

(shuziren) cxu@DESKTOP-T821EV3 D:\workspace\odl-final-pipeline main
[1] $
```

</details>

## Expected Behavior

The stderr should read something like `ls: cannot access 'fuccc': No such file or directory`.

## Current Behavior

```python
(shuziren) cxu@DESKTOP-T821EV3 D:\workspace\odl-final-pipeline main
[1] $ xonsh testerr.xsh
None
Traceback (most recent call last):
  File "testerr.xsh", line 22, in <module>
    test()
  File "testerr.xsh", line 14, in test
    x = out.stderr.read().decode("utf-8")
ValueError: I/O operation on closed file.
```

And when I type the same code in the xonsh console, I can read from `out.stderr.read().decode("utf-8")` and get the right error message.

## Steps to Reproduce

Save my code as testerr.xsh, then execute `xonsh testerr.xsh` (on Linux or Windows) to run the file.

## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
closed
2022-09-09T06:24:06Z
2024-06-23T01:28:07Z
https://github.com/xonsh/xonsh/issues/4935
[ "docs", "windows" ]
drunkpig
4
deezer/spleeter
deep-learning
298
[Bug] NoBaseEnvironmentError: This conda installation has no default base environment.
## Description

## Step to reproduce

1. Installed using the steps in the README:

```
$ git clone https://github.com/Deezer/spleeter
$ cd spleeter
$ conda install -c conda-forge spleeter
NoBaseEnvironmentError: This conda installation has no default base environment.
Use 'conda create' to create new environments and 'conda activate' to activate environments.
```

(I previously installed conda via dnf, Fedora's package manager)

## Environment

| | |
| ----------------- | ------------------------------- |
| OS | GNU/Linux Fedora 31 |
| Installation type | Conda as in readme |
| RAM available | 16GB |
| Hardware spec | not sure this is relevant since install didn't even work |

## Additional context
closed
2020-03-21T15:16:29Z
2020-04-05T12:24:28Z
https://github.com/deezer/spleeter/issues/298
[ "documentation", "conda" ]
ghost
1
allenai/allennlp
pytorch
4,839
Superfluous warning when extending the vocab in the `Embedding`
<!-- Please fill this template entirely and do not erase any of it. We reserve the right to close without a response bug reports which are incomplete. If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here. --> ## Checklist <!-- To check an item on the list replace [ ] with [x]. --> - [x] I have verified that the issue exists against the `master` branch of AllenNLP. - [x] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/master/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs. - [x] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports. - [x] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes. - [x] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/master/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/master) to find out if the bug was already fixed in the master branch. - [x] I have included in the "Description" section below a traceback from any exceptions related to this bug. - [x] I have included in the "Related issues or possible duplicates" section beloew all related issues and possible duplicate issues (If there are none, check this box anyway). - [x] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug. - [x] I have included in the "Environment" section below the output of `pip freeze`. - [x] I have included in the "Steps to reproduce" section below a minimally reproducible example. ## Description <!-- Please provide a clear and concise description of what the bug is here. 
--> If one creates an `allennlp.modules.token_embedders.embedding.Embedding` without a `pretrained_file`, you still get a warning when extending the vocab that no `pretrained_file` is found. I would expect the warning only to trigger if one specified a `pretrained_file` when creating the `Embedding`, or when an `extension_pretrained_file` is passed on to `Embedding.extend_vocab`.

I would be more than happy to provide a PR if you think this is actually an issue and should be addressed.

<details>
<summary><b>Python traceback:</b></summary>
<p>

```
WARNING:root:Embedding at model_path, None cannot locate the pretrained_file. If you are fine-tuning and want to use using pretrained_file for embedding extension, please pass the mapping by --embedding-sources argument.
```

</p>
</details>
3.7.1) --> Python version: 3.8.0 <details> <summary><b>Output of <code>pip freeze</code>:</b></summary> <p> <!-- Paste the output of `pip freeze` in between the next two lines below --> ``` attrs==20.3.0 blis==0.7.3 boto3==1.16.29 botocore==1.19.29 catalogue==1.0.0 certifi==2020.11.8 chardet==3.0.4 click==7.1.2 cymem==2.0.4 dataclasses==0.6 filelock==3.0.12 future==0.18.2 h5py==3.1.0 idna==2.10 iniconfig==1.1.1 jmespath==0.10.0 joblib==0.17.0 jsonnet==0.17.0 jsonpickle==1.4.2 murmurhash==1.0.4 nltk==3.5 numpy==1.19.4 overrides==3.1.0 packaging==20.7 plac==1.1.3 pluggy==0.13.1 preshed==3.0.4 protobuf==3.14.0 py==1.9.0 pyparsing==2.4.7 pytest==6.1.2 python-dateutil==2.8.1 regex==2020.11.13 requests==2.25.0 s3transfer==0.3.3 sacremoses==0.0.43 scikit-learn==0.23.2 scipy==1.5.4 sentencepiece==0.1.91 six==1.15.0 spacy==2.3.4 srsly==1.0.4 tensorboardX==2.1 thinc==7.4.3 threadpoolctl==2.1.0 tokenizers==0.9.3 toml==0.10.2 torch==1.7.0 tqdm==4.54.0 transformers==3.5.1 typing-extensions==3.7.4.3 urllib3==1.26.2 wasabi==0.8.0 ``` </p> </details> ## Steps to reproduce <details> <summary><b>Example source:</b></summary> <p> <!-- Add a fully runnable example in between the next two lines below that will reproduce the bug --> ```python from allennlp.data import Token, Instance, Vocabulary from allennlp.data.fields import TextField from allennlp.data.token_indexers import SingleIdTokenIndexer from allennlp.modules.token_embedders.embedding import Embedding instance = Instance({"token": TextField([Token("test")], {"tokens": SingleIdTokenIndexer()})}) vocab = Vocabulary.from_instances([instance]) instance2 = Instance({"token": TextField([Token("this")], {"tokens": SingleIdTokenIndexer()})}) vocab2 = Vocabulary.from_instances([instance, instance2]) embedder = Embedding(1, vocab=vocab) embedder.extend_vocab(vocab2) ``` </p> </details>
closed
2020-12-04T16:17:48Z
2020-12-16T02:09:45Z
https://github.com/allenai/allennlp/issues/4839
[ "bug", "Contributions welcome" ]
dcfidalgo
1
noirbizarre/flask-restplus
api
547
Could any help me to figure out how to list all of the api and url?
Hello,

I just want to list all of the URLs of the API, just as we can see them on the Swagger web page in the browser. I tried to find a method and found these: http://flask.pocoo.org/snippets/117/ and https://stackoverflow.com/questions/13317536/get-a-list-of-all-routes-defined-in-the-app, and saved all of them into a dict. Could anyone share a more elegant method for this with the flask_restplus package?

Thanks a lot,
Oliver
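For reference, plain Flask already exposes the full route table via `app.url_map`, and routes registered by flask-restplus end up there as ordinary Flask rules. A small sketch using only Flask itself (the `/api/items` route is just a placeholder):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/api/items")
def list_items():
    return "[]"

# Collect every registered rule with its endpoint name and HTTP methods.
routes = {
    rule.rule: {
        "endpoint": rule.endpoint,
        "methods": sorted(m for m in rule.methods if m not in ("HEAD", "OPTIONS")),
    }
    for rule in app.url_map.iter_rules()
}
print(routes)
```

The same dict comprehension works on a flask-restplus `Api` app, since its resources register regular URL rules on the underlying Flask application.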
closed
2018-11-01T02:32:50Z
2018-12-18T08:30:11Z
https://github.com/noirbizarre/flask-restplus/issues/547
[]
hanleilei
2
plotly/dash-table
plotly
776
Takes several seconds for fixed headers to resize to column widths after mouse up
Discovered while investigating https://github.com/plotly/dash-table/issues/775 The columns aren't aligned when resizing the window. That's fine, however it takes about 5-10 seconds for them to resize appropriately after mouse-up ![resize](https://user-images.githubusercontent.com/1280389/81458517-9ff07000-914f-11ea-8306-9164815ca667.gif)
closed
2020-05-09T00:16:31Z
2021-04-09T05:59:11Z
https://github.com/plotly/dash-table/issues/776
[ "bug" ]
chriddyp
2
junyanz/pytorch-CycleGAN-and-pix2pix
computer-vision
1,010
why does use_bias= False for pix2pix model?
I used the pix2pix model for my experiments. I have figured out that the network does not use the bias, only the weights. May I know the reason for this?
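One common explanation (not confirmed here for this specific repo) is that conv layers followed by a normalization layer are built with `use_bias=False` because any constant bias added before normalization is cancelled by the mean subtraction, making it a wasted parameter. A small numeric sketch of that cancellation with a hand-rolled instance norm:

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Normalize each (sample, channel) plane of an (N, C, H, W) array."""
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3, 4, 4))
bias = rng.standard_normal((1, 3, 1, 1))  # shaped like a per-channel conv bias

# Adding a per-channel bias before the norm changes nothing after the norm:
same = np.allclose(instance_norm(x), instance_norm(x + bias))
print(same)
```

The same argument applies to batch norm; only conv layers not followed by a norm (or followed by one with the norm's own affine shift disabled) gain anything from a bias term.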
closed
2020-04-28T07:57:05Z
2020-06-26T13:51:54Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1010
[]
kalai2033
1
aimhubio/aim
data-visualization
2,702
Dashboard shows log messages multiple times
## 🐛 Bug As soon as you log a message and view it on the dashboard, the newest message is duplicated every second. When you refresh the page, it is gone and starts over again. ### To reproduce 1. Log a message using log_debug/info etc. 2. View it on the dashboard. ### Expected behavior Displaying the log messages without the new message being displayed repeatedly. ### Environment - Aim Version: 3.17.3 - Python version 3.9.16 - pip version 23.1.1 - OS MacOS 13.3.1 - Browsers: Safari / Firefox ### Additional context https://user-images.githubusercontent.com/53063597/235601527-1589c919-b958-44a6-b815-7cd99f7a4f98.mov
closed
2023-05-02T07:07:26Z
2023-05-22T11:28:54Z
https://github.com/aimhubio/aim/issues/2702
[ "type / bug", "help wanted", "area / Web-UI", "phase / shipped" ]
Robert27
3
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
920
Evaluate the quality of the predicted segmentation masks.
In my case, I have a pair of images (img_real, img_fake); the channel count is 3 (RGB). How can I evaluate the quality of the predicted segmentation masks?
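In case it helps: segmentation masks are usually compared with overlap metrics such as IoU (Jaccard) and Dice, computed per class after binarizing the RGB masks (for example by thresholding, or by mapping mask colors to class ids). A minimal NumPy sketch of both metrics:

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-union of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0  # empty-vs-empty counts as perfect

def dice(pred, target):
    """Dice coefficient (pixel-level F1) of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2 * inter / total if total else 1.0

pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
print(iou(pred, target), dice(pred, target))
```

For multi-class masks, compute these per class and average (mean IoU); pixel accuracy alone tends to be misleading when classes are imbalanced.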
open
2020-02-17T09:15:10Z
2020-02-20T09:50:37Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/920
[]
houzeyu2683
4
microsoft/nni
tensorflow
5,394
proxyless example can't train because the grad is none
**Describe the issue**:

The proxyless example can't train because the grad is None:

```
name: module.blocks.1.mobile_inverted_conv.ops.0.inverted_bottleneck.conv.weight
    -->grad_requirs: True --weight tensor(0.0037, device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.0.inverted_bottleneck.bn.weight
    -->grad_requirs: True --weight tensor(1., device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.0.inverted_bottleneck.bn.bias
    -->grad_requirs: True --weight tensor(0., device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.0.depth_conv.conv.weight
    -->grad_requirs: True --weight tensor(-0.0017, device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.0.depth_conv.bn.weight
    -->grad_requirs: True --weight tensor(1., device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.0.depth_conv.bn.bias
    -->grad_requirs: True --weight tensor(0., device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.0.point_linear.conv.weight
    -->grad_requirs: True --weight tensor(-0.0035, device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.0.point_linear.bn.weight
    -->grad_requirs: True --weight tensor(1., device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.0.point_linear.bn.bias
    -->grad_requirs: True --weight tensor(0., device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.1.inverted_bottleneck.conv.weight
    -->grad_requirs: True --weight tensor(0.0092, device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.1.inverted_bottleneck.bn.weight
    -->grad_requirs: True --weight tensor(1., device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.1.inverted_bottleneck.bn.bias
    -->grad_requirs: True --weight tensor(0., device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.1.depth_conv.conv.weight
    -->grad_requirs: True --weight tensor(-0.0001, device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.1.depth_conv.bn.weight
    -->grad_requirs: True --weight tensor(1., device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.1.depth_conv.bn.bias
    -->grad_requirs: True --weight tensor(0., device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.1.point_linear.conv.weight
    -->grad_requirs: True --weight tensor(0.0023, device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.1.point_linear.bn.weight
    -->grad_requirs: True --weight tensor(1., device='cuda:0') -->grad_value: None
name: module.blocks.1.mobile_inverted_conv.ops.1.point_linear.bn.bias
    -->grad_requirs: True --weight tensor(0., device='cuda:0') -->grad_value: None
```

The layers built by LayerChoice have a None grad_value, but common layers such as the first layer have grads. Please check it. Thx

**Environment**:

- NNI version: 2.7
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu
- Server OS (for remote mode only): Ubuntu
- Python version: 3.8
- PyTorch/TensorFlow version: PyTorch
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
closed
2023-02-22T06:28:56Z
2023-02-27T02:39:20Z
https://github.com/microsoft/nni/issues/5394
[]
miaott1234
6
dynaconf/dynaconf
django
613
[RFC] provide a way to add extra globals/context vars to the jinja formatter
**Is your feature request related to a problem? Please describe.**
I want to use my own helper objects for the Jinja rendering; to make them available I need to hack around way too much.

**Describe the solution you'd like**

```
settings.DYNACONF_JINJA_FORMATTER_GLOBALS["myhelper"] = MyHelper()
```

**Describe alternatives you've considered**
We are currently manually replacing `@jinja` text with lazy objects that implement the feature; it's not nice.
closed
2021-07-12T19:07:54Z
2024-01-08T11:00:04Z
https://github.com/dynaconf/dynaconf/issues/613
[ "wontfix", "Not a Bug", "RFC" ]
RonnyPfannschmidt
1
aleju/imgaug
machine-learning
675
Shifted augmented segmentation mask
I am getting a shifted augmented segmentation mask. It keeps its proportions, which is good, but it is shifted. Why such weird behavior?
closed
2020-05-22T10:12:09Z
2020-05-25T10:04:45Z
https://github.com/aleju/imgaug/issues/675
[]
Adblu
2
geopandas/geopandas
pandas
2,463
Reuse existing transaction in `to_postgis` if present
When calling `to_postgis()`, a new transaction is attempted to be started every time (see [here](https://github.com/geopandas/geopandas/blob/main/geopandas/io/sql.py#L32-L39)). However, you might instead want to reuse an already ongoing transaction. Consider this example: ```python with conn.begin(): df.to_postgis('mytable', con=conn) conn.execute(another_query) ``` I would want the insertion caused by `to_postgis` to fail and be rolled back, if my second, "manual" query fails and vice versa. I'd propose to either (a) somehow check if a transaction is already open and if yes, reuse it or start a nested transaction inside it, or (b) allow to pass a `Transaction` object alternatively to passing an `Engine` or `Connection`.
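For illustration, option (a) can be sketched in plain SQLAlchemy (2.x style) using `Connection.in_transaction()`; `maybe_begin` is a hypothetical helper name and not geopandas API, and SQLite stands in for PostGIS here:

```python
from contextlib import nullcontext

from sqlalchemy import create_engine, text

def maybe_begin(conn):
    """Reuse an ongoing transaction if one exists, otherwise start one."""
    return nullcontext() if conn.in_transaction() else conn.begin()

engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE mytable (x INTEGER)"))

with engine.connect() as conn:
    try:
        with conn.begin():  # the caller's outer transaction
            with maybe_begin(conn):  # what to_postgis could do internally
                conn.execute(text("INSERT INTO mytable VALUES (1)"))
            raise RuntimeError("boom")  # stand-in for the failing second query
    except RuntimeError:
        pass
    # The insert was rolled back together with the failing outer transaction.
    count = conn.execute(text("SELECT COUNT(*) FROM mytable")).scalar()
print(count)
```

A nested transaction (`conn.begin_nested()`, i.e. a SAVEPOINT) would be the alternative when the library wants to be able to roll back only its own writes inside the caller's transaction.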
open
2022-06-13T20:24:58Z
2024-08-05T17:55:01Z
https://github.com/geopandas/geopandas/issues/2463
[ "enhancement", "postgis" ]
muety
3
pandas-dev/pandas
python
60,816
BUG: Union of two DateTimeIndexes is incorrectly calculated
### Pandas version checks

- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.

### Reproducible Example

```python
from pandas import DatetimeIndex

l = DatetimeIndex(['2023-05-24 00:00:00+00:00', '2023-05-24 00:15:00+00:00',
                   '2023-05-24 00:30:00+00:00', '2023-05-24 00:45:00+00:00',
                   '2023-05-24 01:00:00+00:00'],
                  dtype='datetime64[ms, UTC]', name='ts', freq='15min')
r = DatetimeIndex(['2023-05-24 00:00:00+00:00', '2023-05-24 00:30:00+00:00',
                   '2023-05-24 01:00:00+00:00'],
                  dtype='datetime64[ms, UTC]', name='ts', freq='30min')

union = r.union(l)
print(union)
assert len(union) == len(l)
assert all(r.union(l) == l)
```

### Issue Description

The union of two datetime indexes as given in the reproducible example is calculated incorrectly; the result on newer Pandas versions is

```python
DatetimeIndex(['2023-05-24 00:00:00+00:00', '2051-11-29 16:00:00+00:00',
               '2080-06-06 08:00:00+00:00'],
              dtype='datetime64[ms, UTC]', name='ts', freq='15T')
```

The first failing version is the one I put into "Installed Versions". The error happens exactly from Pandas 2.1.0 onwards; Pandas 1.* and up to 2.0.3 work fine. Neither the numpy nor the Python version matters.

### Expected Behavior

The expected result in the given case is that `l` is returned.
### Installed Versions

```
INSTALLED VERSIONS
------------------
commit      : ba1cccd19da778f0c3a7d6a885685da16a072870
python      : 3.10.16.final.0
python-bits : 64
OS          : Linux
OS-release  : 6.12.10-200.fc41.x86_64
Version     : #1 SMP PREEMPT_DYNAMIC Fri Jan 17 18:05:24 UTC 2025
machine     : x86_64
processor   :
byteorder   : little
LC_ALL      : None
LANG        : en_US.UTF-8
LOCALE      : en_US.UTF-8
pandas      : 2.1.0
numpy       : 1.26.4
pytz        : 2024.2
dateutil    : 2.9.0.post0
tzdata      : 2025.1
```
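Until this is fixed, a possible workaround (untested against every affected version) is to do the union at nanosecond resolution, since the report only reproduces with non-nanosecond (`ms`) indexes; on pandas >= 2.0 an `ms` index can be converted with `.as_unit("ns")`. For illustration, constructing the indexes at the default `ns` resolution gives the expected result:

```python
import pandas as pd

# Same timestamps as in the report, but at default "ns" resolution;
# an existing "ms" index could be converted via idx.as_unit("ns") (pandas >= 2.0).
l = pd.DatetimeIndex(
    ["2023-05-24 00:00:00+00:00", "2023-05-24 00:15:00+00:00",
     "2023-05-24 00:30:00+00:00", "2023-05-24 00:45:00+00:00",
     "2023-05-24 01:00:00+00:00"],
    name="ts",
)
r = pd.DatetimeIndex(
    ["2023-05-24 00:00:00+00:00", "2023-05-24 00:30:00+00:00",
     "2023-05-24 01:00:00+00:00"],
    name="ts",
)

union = r.union(l)
print(union)
```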
open
2025-01-29T15:45:23Z
2025-02-02T14:27:14Z
https://github.com/pandas-dev/pandas/issues/60816
[ "Bug", "Regression", "Needs Discussion", "Non-Nano" ]
filmor
5
wkentaro/labelme
deep-learning
936
[BUG] with Conda and Python 3.6
Hi,

When I created a new Conda environment with Python=3.6 on Ubuntu 20.04 and installed labelme via pip, this error shows:

```
Traceback (most recent call last):
  File "/home/ben/anaconda3/envs/labelme/bin/labelme", line 5, in <module>
    from labelme.__main__ import main
  File "/home/ben/anaconda3/envs/labelme/lib/python3.6/site-packages/labelme/__main__.py", line 14, in <module>
    from labelme.app import MainWindow
  File "/home/ben/anaconda3/envs/labelme/lib/python3.6/site-packages/labelme/app.py", line 46, in <module>
    LABEL_COLORMAP = imgviz.label_colormap(value=200)
  File "/home/ben/anaconda3/envs/labelme/lib/python3.6/site-packages/imgviz/label.py", line 40, in label_colormap
    r = np.bitwise_or.reduce(np.left_shift(bitget(i, 0), j), axis=1)
  File "/home/ben/anaconda3/envs/labelme/lib/python3.6/site-packages/imgviz/label.py", line 28, in bitget
    return np.unpackbits(byteval, bitorder="little").reshape(shape)[
TypeError: 'bitorder' is an invalid keyword argument for this function
```

Creating a conda environment with python=3.7 works as expected. Consider changing the README to recommend 3.7?
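For context, the `bitorder` keyword of `np.unpackbits` was only added in NumPy 1.17, so the Python 3.6 environment most likely resolved to an older NumPy; upgrading numpy inside the 3.6 env may also fix this, independent of labelme. A quick check of the keyword's behavior:

```python
import numpy as np

# `bitorder` was added to np.unpackbits in NumPy 1.17; on older versions
# this call raises exactly the TypeError from the traceback above.
b = np.array([5], dtype=np.uint8)          # 5 == 0b00000101
little = np.unpackbits(b, bitorder="little")  # least-significant bit first
print(little)
```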
closed
2021-10-19T08:28:18Z
2021-10-21T19:40:02Z
https://github.com/wkentaro/labelme/issues/936
[]
BenSpex
3
netbox-community/netbox
django
18,942
Comments do not word-wrap in view page for Circuits, Devices, etc.
### NetBox version NetBox Community v4.2.5 (2025-03-06) ### Feature type Change to existing functionality ### Proposed functionality Comments are only properly viewable from the Edit page of Devices, Circuits etc. because the text seems to have all line breaks removed in the View page, which makes no sense, as it's almost impossible to properly read them there as a result. We have to go into Edit to view them with proper line breaks. ### Use case We go into the Circuits category and click on any circuit item entry to view it. The comments section is all smushed together with IP ranges and gateways etc. all printed as a long string over a few lines, making it very hard to read. We click on Edit and voila! the comments area is displayed correctly with line breaks and we can read it clearly, but only from Edit view. It's not practical to always have to jump into Edit in order to correctly display the comments section. ### Database changes _No response_ ### External dependencies _No response_
open
2025-03-18T16:54:27Z
2025-03-20T17:55:43Z
https://github.com/netbox-community/netbox/issues/18942
[ "type: feature", "status: revisions needed" ]
Hestichan
1
twopirllc/pandas-ta
pandas
195
Possible to use pandas-ta with backtrader
Please share an example - how to use pandas-ta with [backtrader](https://github.com/mementum/backtrader)
closed
2021-01-22T09:43:40Z
2021-11-19T21:55:41Z
https://github.com/twopirllc/pandas-ta/issues/195
[ "enhancement", "help wanted", "info" ]
vijay-r
3
pydantic/FastUI
pydantic
338
Abnormal running of demo project
This is my first time using this project, and I can't run the demo project normally. The following are the steps I ran, on Windows 10:

```shell
git clone project source code
npm install
npm dev
```

I followed the README.md in the demo directory, but when I accessed an address such as `http://localhost:3000/api/tables`, an error was returned: `vite-proxy: Proxy connection refused`.

Console error log:

```shell
[vite] http proxy error at /api/tables: AggregateError [ECONNREFUSED]:
    at internalConnectMultiple (node:net:1116:18)
    at afterConnectMultiple (node:net:1683:7)
```

I realized that this might only start the front end, so I then set up a Python 3.11 environment:

```shell
conda create --name FastUI python=3.11
pip install fastapi fastui
```

After configuring the Python environment, I executed demo/main.py:

```shell
python main.py
```

But there was no output and the program exited immediately. I don't know if my steps are correct; if there is any error, I hope you can correct it.
closed
2024-07-08T14:51:48Z
2024-07-09T15:25:25Z
https://github.com/pydantic/FastUI/issues/338
[]
MoncozGC
1
ddbourgin/numpy-ml
machine-learning
57
Using numpy.tensordot for Conv2D
From these links: https://stackoverflow.com/questions/56085669/convolutional-layer-in-python-using-numpy and https://numpy.org/doc/stable/reference/generated/numpy.tensordot.html

`Z = np.tensordot(X_pad, weights, axes=3) + self.bias`

Is this function more suitable than using im2col?
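For comparison, here is a self-contained sketch of a valid 2-D convolution (NHWC layout, no padding or stride) done with `sliding_window_view` plus a single `tensordot`, checked against an explicit loop. This mirrors the `axes=3` contraction from the linked snippet rather than numpy-ml's im2col path:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 8, 8, 3))   # (N, H, W, C_in)
W = rng.standard_normal((4, 4, 3, 5))   # (kh, kw, C_in, C_out)
kh, kw = W.shape[:2]

# (N, out_h, out_w, C_in, kh, kw) -> (N, out_h, out_w, kh, kw, C_in)
windows = np.lib.stride_tricks.sliding_window_view(X, (kh, kw), axis=(1, 2))
windows = windows.transpose(0, 1, 2, 4, 5, 3)

# Contract the trailing (kh, kw, C_in) axes against W in one call.
Z = np.tensordot(windows, W, axes=3)    # (N, out_h, out_w, C_out)

# Reference: explicit loop over output positions.
out_h, out_w = X.shape[1] - kh + 1, X.shape[2] - kw + 1
ref = np.empty((X.shape[0], out_h, out_w, W.shape[3]))
for i in range(out_h):
    for j in range(out_w):
        patch = X[:, i:i + kh, j:j + kw, :]            # (N, kh, kw, C_in)
        ref[:, i, j, :] = (patch[..., None] * W[None]).sum(axis=(1, 2, 3))

print(np.allclose(Z, ref))
```

Both approaches ultimately reduce to a matrix multiplication; `tensordot` mainly saves the explicit im2col copy and bookkeeping, so whether it is faster depends on the shapes and the BLAS backend rather than being a universal win.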
open
2020-08-02T17:34:00Z
2020-08-20T15:51:09Z
https://github.com/ddbourgin/numpy-ml/issues/57
[]
tetrahydra
1
uriyyo/fastapi-pagination
fastapi
691
How to use joinedload options when using sqlalchemy
Hello! I am trying to use joinedload with sqlalchemy, but it seems to overwrite the original query because it is passed by reference.

https://github.com/uriyyo/fastapi-pagination/blob/0ce8f30e321caece96a2ba6c0261f9a230d78ec2/fastapi_pagination/ext/sqlalchemy.py#L55

Is there any way to use options when using sqlalchemy?
closed
2023-05-31T13:44:25Z
2023-06-28T09:53:06Z
https://github.com/uriyyo/fastapi-pagination/issues/691
[ "question" ]
fshmng09
9
huggingface/diffusers
pytorch
11,006
Broken video output with Wan 2.1 I2V pipeline + quantized transformer
### Describe the bug

Since there is no proper documentation yet, I'm not sure if there is a difference to other video pipelines that I'm unaware of – but with the code below, the video results are reproducibly broken.

There is a warning:

`Expected types for image_encoder: (<class 'transformers.models.clip.modeling_clip.CLIPVisionModel'>,), got <class 'transformers.models.clip.modeling_clip.CLIPVisionModelWithProjection'>.`

which I assume I'm expected to ignore.

Init image:

![Image](https://github.com/user-attachments/assets/776aafca-f8d6-4f7b-81e6-c6d41d20dcee)

Result: https://github.com/user-attachments/assets/c2e591e7-4cd5-4849-bec4-5938058c0775

Result with different seed: https://github.com/user-attachments/assets/7006e400-3018-4891-9c4f-06d44ebc704f

Result with different prompt: https://github.com/user-attachments/assets/42f15f68-bd2b-4b22-b6da-6d5182bc6b22

### Reproduction

```
# Tested on Google Colab with an A100 (40GB).
# Uses ~21 GB VRAM, takes ~150 sec per step, ~75 min in total.

!pip install git+https://github.com/huggingface/diffusers.git
!pip install -U bitsandbytes
!pip install ftfy

import os

import torch
from diffusers import (
    BitsAndBytesConfig,
    WanImageToVideoPipeline,
    WanTransformer3DModel
)
from diffusers.utils import export_to_video
from PIL import Image

model_id = "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers"

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)

transformer = WanTransformer3DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    quantization_config=quantization_config
)

pipe = WanImageToVideoPipeline.from_pretrained(
    model_id,
    transformer=transformer
)
pipe.enable_model_cpu_offload()

def render(
    filename,
    image,
    prompt,
    seed=0,
    width=832,
    height=480,
    num_frames=81,
    num_inference_steps=30,
    guidance_scale=5.0,
    fps=16
):
    video = pipe(
        image=image,
        prompt=prompt,
        generator=torch.Generator(device=pipe.device).manual_seed(seed),
        width=width,
        height=height,
        num_frames=num_frames,
        num_inference_steps=num_inference_steps,
        guidance_scale=guidance_scale
    ).frames[0]
    os.makedirs(os.path.dirname(filename), exist_ok=True)
    export_to_video(video, filename, fps=fps)

render(
    filename="/content/test.mp4",
    image=Image.open("/content/test.png"),
    prompt="a woman in a yellow coat is dancing in the desert",
    seed=42
)
```

### Logs

```shell
```

### System Info

- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Running on Google Colab?: Yes
- Python version: 3.11.11
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): 0.10.4 (gpu)
- Jax version: 0.4.33
- JaxLib version: 0.4.33
- Huggingface_hub version: 0.28.1
- Transformers version: 4.48.3
- Accelerate version: 1.3.0
- PEFT version: 0.14.0
- Bitsandbytes version: 0.45.3
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA A100-SXM4-40GB, 40960 MiB

### Who can help?

_No response_
open
2025-03-07T17:25:50Z
2025-03-23T17:37:13Z
https://github.com/huggingface/diffusers/issues/11006
[ "bug" ]
rolux
6
nikitastupin/clairvoyance
graphql
90
[CD] Make 'tests.yml' workflow more DRY
Currently the following lines are repetitive. It would be nice to make them follow the Don't-Repeat-Yourself principle by extracting them into a separate action/workflow.

```yaml
- uses: actions/checkout@v4
- name: Install and configure poetry
  run: |
    pipx install poetry
    poetry config virtualenvs.in-project true
- uses: actions/setup-python@v5
  with:
    python-version: ${{ env.PYTHON_VERSION }}
    cache: 'poetry'
- name: Setup poetry
  run: poetry install
```
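One way to do this (a sketch only; the file path and input name are placeholders) is a local composite action that each job then references:

```yaml
# .github/actions/setup/action.yml  (hypothetical location)
name: Set up Python and Poetry
inputs:
  python-version:
    required: true
runs:
  using: composite
  steps:
    - name: Install and configure poetry
      shell: bash
      run: |
        pipx install poetry
        poetry config virtualenvs.in-project true
    - uses: actions/setup-python@v5
      with:
        python-version: ${{ inputs.python-version }}
        cache: 'poetry'
    - name: Setup poetry
      shell: bash
      run: poetry install
```

Each job would keep its own `actions/checkout@v4` step (checkout has to run before a local action can be resolved), followed by `- uses: ./.github/actions/setup` with the `python-version` input.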
closed
2024-04-30T11:11:04Z
2024-08-10T15:30:39Z
https://github.com/nikitastupin/clairvoyance/issues/90
[]
Privat33r-dev
0
mwouts/itables
jupyter
4
Fix itables on nteract
Opening the README.ipynb in nteract yields a JS error: _TypeError: require.config is not a function_.
closed
2019-04-24T20:09:29Z
2022-01-06T21:05:19Z
https://github.com/mwouts/itables/issues/4
[]
mwouts
1
scikit-image/scikit-image
computer-vision
7,221
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
### Description:

Hi, I am getting the following error:

```
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
```

I wonder why this is happening, because I have skimage `0.21.0` installed together with numpy `1.21.5`, and [that should be fine](https://github.com/scikit-image/scikit-image/blob/v0.21.x/requirements/default.txt)?

### Way to reproduce:

```python
import skimage
```

### Version information:

```Shell
3.8.17 | packaged by conda-forge | (default, Jun 16 2023, 07:06:00) [GCC 11.4.0]
```

```
Linux-5.15.0-82-generic-x86_64-with-glibc2.10
```

```
Cell In[16], line 1
import skimage; print(f'scikit-image version: {skimage.__version__}')
File ~/.conda/envs/csp_wiesner_johannes/lib/python3.8/site-packages/skimage/__init__.py:122
from ._shared import geometry
File geometry.pyx:1 in init skimage._shared.geometry
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject
```

```
numpy version: 1.21.5
```
closed
2023-10-24T16:19:32Z
2023-11-17T08:35:27Z
https://github.com/scikit-image/scikit-image/issues/7221
[ ":bug: Bug" ]
JohannesWiesner
8
sammchardy/python-binance
api
1,018
future_create_order() ------ how to run faster?
In my test environment, I ran the following code:

`order = client.futures_create_order(symbol='ETHUSDT', side='SELL', type='MARKET', quantity=20)`

I found that the runtime was over 0.5 seconds! How can I make it run faster? Preferably within a few milliseconds.
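As a first step it may help to measure where the 0.5 s actually goes; most of it is usually the HTTPS round-trip to the exchange rather than Python overhead, which is why millisecond-level order placement generally requires being network-close to the exchange and/or streaming APIs rather than a faster client call. A tiny helper (hypothetical, stdlib only) for timing any call:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (result, elapsed_seconds)."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - t0

# In the real code this would be, e.g.:
#   order, elapsed = timed(client.futures_create_order, symbol="ETHUSDT", ...)
# Demonstrated here on a stand-in function:
result, elapsed = timed(sum, range(1000))
print(result, elapsed)
```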
open
2021-09-12T02:48:11Z
2021-09-12T08:27:50Z
https://github.com/sammchardy/python-binance/issues/1018
[]
bmw7
4
wagtail/wagtail
django
12,779
Add Thumbnail to ViewSet
### This proposal is related to a problem Currently it is **not possible to add an image column** to the listing views of the `ModelViewSet` and `SnippetViewSet`. ### The solution I'd like Only the `PageListingViewSet` allows a list of **customizable columns**. I would love to see this for `ModelViewSet` and `SnippetViewSet` as well. Here is an approach to add an `ImageColumn` to the `PageListingViewSet`: [https://stackoverflow.com/a/79133160/5071435](https://stackoverflow.com/a/79133160/5071435) I think an `ImageColumn` could be added anyway to `wagtail.admin.ui.tables` **and** the documentation. ### Alternatives I've considered I do not really want to move back to the `ModelAdmin` app. But it provides an easy-enough-to-use `ThumbnailMixin` for this task
open
2025-01-15T18:08:57Z
2025-01-17T20:14:47Z
https://github.com/wagtail/wagtail/issues/12779
[ "type:Enhancement" ]
andre-fuchs
5
smarie/python-pytest-cases
pytest
258
[Question] Pre-filter cases
Hi @smarie,

I was looking through the documentation and couldn't find any way to pre-filter cases into some kind of iterable that can then be passed to `parametrize_with_cases("x", cases=...)`. My use case is generally to have cases in a `cases.py` file that is shared throughout the test module.

```
# x/y/z folder
__init__.py
cases.py      # Contains cases with labels "A" and "B" and some have "banana" too
test_base.py  # Should test cases "A" and "B"
test_A.py     # Should test cases "A"
test_B.py     # Should test cases "B"
```

This is useful for testing hierarchies where `A` and `B` inherit from `Base`, so much of their setup is the same and makes sense to be done in the same file `cases.py`, instead of splitting them out into `cases_A.py` and `cases_B.py`.

Is there some way to do the following:

```python
# test_A.py
import x.y.z.cases as cases  # Has cases for both A and B

cases_with_tag_A = something(cases)

@parametrize_with_cases("x", cases=cases_with_tag_A, filter=~ft.has_tag("banana"))
def test_A_prop(x):
    ...

@parametrize_with_cases("x", cases=cases_with_tag_A, has_tag="banana")
def test_A_prop_that_has_bananas(x):
    ...
```

I'm aware I could just use a filter on each test, but there are other tags, and the filters tend to get repetitive and I am prone to forgetting them. I could also use a `partial`, but this kind of obscures things and doesn't allow additional filtering, i.e. `has_tag=["A", "banana"]`.

Suggestions on how to go about this or how this could be implemented would be greatly appreciated :)

Best,
Eddie
closed
2022-02-19T20:33:13Z
2022-03-21T15:20:19Z
https://github.com/smarie/python-pytest-cases/issues/258
[]
eddiebergman
4
ydataai/ydata-profiling
jupyter
1,130
Feature Request: pre-commit hook workflow to mimic github PR actions
### Missing functionality

At the moment, there is no easy way to check that a PR will pass basic actions such as the linter, commit message format, etc. Therefore, when someone opens a PR, it will most likely fail, and the contributor then has to either add more commits just to solve these issues or, worse, reword/amend/rebase with a force-push (e.g. when a commit message is not well formatted).

See for instance this PR: https://github.com/ydataai/pandas-profiling/pull/1127

It failed on the length of the commit message. The contributor had no other way than rewriting history that had already been pushed to the remote branch. Not a good practice.

This can be demotivating for contributors. They should be able to check these things before pushing/opening their PR.

### Proposed feature

Use pre-commit hooks to mimic the behavior of the GitHub Actions workflow triggered when a PR is opened. https://pre-commit.com/

The contributor would then see their commits checked before pushing and can adjust accordingly.

### Alternatives considered

_No response_

### Additional context

_No response_
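As a concrete illustration (the repo pins and hook ids below are placeholders, to be aligned with whatever the PR workflow actually runs), a `.pre-commit-config.yaml` along these lines would let contributors run the same checks locally:

```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 22.10.0
    hooks:
      - id: black
  - repo: https://github.com/pycqa/isort
    rev: 5.10.1
    hooks:
      - id: isort
  - repo: https://github.com/pycqa/flake8
    rev: 5.0.4
    hooks:
      - id: flake8
  # commit-message checks (e.g. style/length) can run on the commit-msg
  # stage so they fail before anything is pushed
  - repo: https://github.com/compilerla/conventional-pre-commit
    rev: v2.1.1
    hooks:
      - id: conventional-pre-commit
        stages: [commit-msg]
```

`pre-commit install --hook-type pre-commit --hook-type commit-msg` then runs these on every commit, and `pre-commit run --all-files` mimics a full CI pass locally.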
closed
2022-10-27T08:13:25Z
2022-11-15T16:20:46Z
https://github.com/ydataai/ydata-profiling/issues/1130
[ "code quality 📈" ]
aquemy
0