| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
pywinauto/pywinauto | automation | 1,122 | Couldn't get my way around this for more than 5 hours | ## Expected Behavior
Sometimes the script prints the value of `dlg.print_control_identifiers()`, but pywinauto is inconsistent: I use the same window title every time, yet I always get the error `MatchError: Could not find 'Miracle 9.0 (Rel 6.0) - Standard Copy (Single User) ' in 'dict_keys([])'` | open | 2021-10-01T10:29:42Z | 2022-02-20T12:11:47Z | https://github.com/pywinauto/pywinauto/issues/1122 | [
"bug",
"duplicate"
] | meet1919 | 2 |
ultralytics/yolov5 | deep-learning | 12,424 | Multi-GPU and multi-node performance | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
I'm training YOLOv5 on a custom dataset on an HPC machine with 4 GPUs per node. I'm running performance tests with different numbers of GPUs; each test runs for 100 epochs. The timing results are:
```
4 GPU: 2h, 30 min
8 GPU: 2h 21 min
16 GPU: 2h 25 min
32 GPU: 2h 46 min
```
The AP is more or less the same. I'm a bit confused: why doesn't YOLOv5 training time scale with more GPUs? Each epoch, and hence the total execution time, should finish faster with more GPUs, right? Could someone explain this behaviour? Thanks.
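One way such flat scaling can happen is if something other than GPU compute is the bottleneck (data loading from shared storage, validation, or fixed per-epoch overhead). A toy model (all numbers below are invented for illustration, not measured from this run) shows how a shared, serial input pipeline keeps epoch time roughly constant as GPUs are added:

```python
# Toy scaling model: DDP splits the epoch across GPUs, but if a shared
# storage system feeds all ranks serially, the I/O cost per global step
# grows with the number of GPUs and exactly cancels the step-count savings.
def epoch_time(num_gpus, dataset_size=118_000, per_gpu_batch=16,
               gpu_step_s=0.10, shared_io_s_per_image=0.002):
    steps = dataset_size / (per_gpu_batch * num_gpus)        # fewer steps per epoch
    io_s = shared_io_s_per_image * per_gpu_batch * num_gpus  # but slower steps
    return steps * max(gpu_step_s, io_s)                     # bottleneck wins

for n in (4, 8, 16, 32):
    print(f"{n:>2} GPUs: {epoch_time(n):.0f} s/epoch")       # identical for all n
```

If per-step GPU time dominated instead, the time would roughly halve with each doubling; checking GPU utilization and dataloader throughput during a run would distinguish the two cases.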
### Additional
_No response_ | closed | 2023-11-24T13:06:21Z | 2024-01-09T00:21:50Z | https://github.com/ultralytics/yolov5/issues/12424 | [
"question",
"Stale"
] | unrue | 10 |
sherlock-project/sherlock | python | 2,375 | Instagram results not showing in Sherlock | ### Installation method
Other (indicate below)
### Package version
Sherlock v0.15.0
### Description
When using Sherlock to check for the existence of an Instagram profile by providing a valid username, the expected result should be that the tool returns whether the profile exists or not.
When performing this action, however, the actual result was that no output was displayed for **Instagram profiles**, even for valid usernames.
This is undesirable because it prevents users from accurately checking Instagram profiles, which is a key feature of the tool.
**Thank you for looking into this issue!**
### Steps to reproduce
1. Open the Sherlock tool.
2. Enter a valid Instagram username (e.g., instagram) and run the tool.
3. Observe that no result is displayed for Instagram, regardless of the username's validity.
### Additional information
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct | closed | 2024-11-30T10:18:47Z | 2025-02-17T05:16:31Z | https://github.com/sherlock-project/sherlock/issues/2375 | [
"bug"
] | ZuhairZeiter | 6 |
horovod/horovod | tensorflow | 3,497 | NotFoundError: Local Variable does not exist with backward_passes_per_step > 1 | **Environment:**
1. Framework: tensorflow.keras
2. Framework version: 2.5.0
3. Horovod version: 0.22.0
4. MPI version:
5. CUDA version: 11.0
6. NCCL version:
7. Python version: 3.8
8. Spark / PySpark version:
9. Ray version:
10. OS and version:
11. GCC version:
12. CMake version:
**Checklist:**
1. Did you search issues to find if somebody asked this question before? yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)? --
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? --
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
Hi, I am getting a NotFoundError: Resource localhost/_AnonymousVar<xxx>/N10tensorflow3VarE does not exist when setting backward_passes_per_step > 1.
I looked through both tensorflow and horovod code and might have found the origin of this:
My traceback ends up inside tensorflow/python/eager/def_function during the self.train_function(iterator) call inside model.fit().
The horovod DistributedOptimizer is initialized with a LocalGradientAggregationHelperEager, though, because context.executing_eagerly() is called before model.fit() and therefore inside the default eager execution environment. The train_function itself, however, is wrapped in a tf.function and executed in graph mode.
Summary: horovod initializes the optimizer with the eager aggregation helper outside the model.fit() loop, while gradient aggregation is called inside the model.fit() loop under graph execution.
I guess this is the error. I don't know if it still exists in the newest versions of horovod / tensorflow.keras, but the code did not change on that end, so it is most likely also a bug in the current version.
If someone can recreate this on another version, please change the code to initialize the aggregation helper according to the `tf.config.run_functions_eagerly()` setting instead; and hopefully you can provide a quick workaround for me, as I cannot change the installation on the HPC cluster I am working with.
**Update:**
I tried the following two workarounds:
1. Manually create an instance of LocalGradientAggregationHelper and feed it into the optimizer's attribute. This unfortunately raised the same exception as in #2110, caused by line 103 inside horovod.tensorflow.gradient_aggregation: `zero_grad = tf.zeros(shape=grad.get_shape().as_list(), dtype=grad.dtype)`. I do not see the exact problem here. (Maybe something else has to be manually wrapped by tf.function?)
2. Compile the model in eager execution mode. This works fine, although I lose some speedup here of course.
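For anyone landing here, workaround 2 boils down to a one-line change at compile time. This is only a fragment, not runnable on its own: `model`, `opt`, and the loss are whatever your training script already defines, and the kwarg names are taken from the tf.keras and horovod.tensorflow.keras APIs.

```python
# Force the Keras train step to run eagerly, matching the
# LocalGradientAggregationHelperEager that horovod selected at init time.
model.compile(
    optimizer=hvd.DistributedOptimizer(opt, backward_passes_per_step=4),
    loss="categorical_crossentropy",
    run_eagerly=True,  # tf.keras Model.compile() supports this kwarg
)
```

This trades away the graph-mode speedup until the helper selection itself is fixed, as suggested above.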
| open | 2022-03-28T09:21:57Z | 2022-04-26T21:55:39Z | https://github.com/horovod/horovod/issues/3497 | [
"bug"
] | lenroed | 1 |
ipython/ipython | jupyter | 14,232 | IPython.display.IFrame running on remote JupyterLab references localhost on client machine |
I have a localhost server running on the JupyterLab machine serving a web application, which I would like to display in JupyterLab on a client machine. However, when calling IPython.display.IFrame() on the client, it references localhost on the client machine instead of the remote server. Here is an example, where http://localhost:57967 is where the web application is served on the JupyterLab server:
Calling IPython.display.IFrame() displays "refused to connect" error page.

However, http://localhost:57967 serves a web application and is accessible from the remote server

On the client machine, I have a web application running on http://localhost:58541 which is accessible by IPython.display.IFrame().

However, http://localhost:58541 cannot be accessed from the remote server.

In summary, how would I be able to display http://localhost:57967, a web application served on the JupyterLab server? | closed | 2023-10-31T15:12:21Z | 2024-09-30T07:34:41Z | https://github.com/ipython/ipython/issues/14232 | [] | jcaxle | 2 |
raphaelvallat/pingouin | pandas | 47 | Add support for Aligned Rank Transformed data Anova | Good morning,
I had to work with non-normal and heteroscedastic variables, but also had to compute interactions between those variables. Unfortunately, no method in your package allows doing so.
I found ART to be quite straightforward and easy to use. I think it could be very valuable to include ART in your library.
The current solution is to run ARTool through rpy2, which is neither performant nor easy to do.
Here is the original paper: http://faculty.washington.edu/wobbrock/pubs/chi-11.06.pdf
And here are some R documentation:
- https://cran.r-project.org/web/packages/ARTool/index.html
- https://cran.r-project.org/web/packages/ARTool/vignettes/art-contrasts.html
Thanks, and have a nice day,
Clément. | closed | 2019-07-02T10:19:24Z | 2019-09-28T19:05:08Z | https://github.com/raphaelvallat/pingouin/issues/47 | [
"feature request :construction:"
] | clementpoiret | 3 |
taverntesting/tavern | pytest | 364 | Would like to be able to define a variable in a stage to use in various places in that stage | In this stage:
    - name: -007C PUT update organization incomplete payload
      request:
        url: "{host}/v2/institutions/{institution_data.id}/organizations/{organization_data.id}/"
        method: PUT
        json:
          branded: !bool "true"
          organization_type: !int 8
          import_comment: "BP-5845-007C"
        headers:
          Accept: 'application/json, text/plain, text/html, */*'
          Accept-Encoding: 'gzip, deflate, br'
          Accept-Language: 'en-US,en;q=0.9'
          Authorization: "JWT {encode_web_token}"
          case_name: "BP-5845-007C"
      response:
        status_code: 400
        headers:
          content-type: "application/json"
        save:
          body:
            put_meta: meta
        body:
          $ext:
            function: api_helpers:organization_error
            extra_kwargs:
              expected: "{organization_incomplete_expected}"
              targets: "{error_targets}"
              stage: "BP-5845-007C"
I'd like to be able to define a variable with the value "BP-5845-007C" that can be passed to import_comment, case_name, stage, etc.
I'm using this to identify which test stage created data and/or which stage is running when an error is encountered while stepping through a debugger. I could then set a watch on the variable in the debugger (using PyCharm).
Any way of doing this?
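One approach that may already cover this (an assumption on my part: plain YAML anchors, which tavern test files generally accept as long as the definition precedes its uses in the same document) is to anchor the value the first time it appears and reference it everywhere else:

```yaml
- name: -007C PUT update organization incomplete payload
  request:
    json:
      import_comment: &case_id "BP-5845-007C"   # defined once here
    headers:
      case_name: *case_id                        # reused
  response:
    body:
      $ext:
        extra_kwargs:
          stage: *case_id                        # reused again
```

This gives a single point of definition, though it won't show up as a named variable you can watch in a debugger.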
If not, can it be put on the list of enhancements? | open | 2019-05-30T20:19:21Z | 2019-08-30T14:51:37Z | https://github.com/taverntesting/tavern/issues/364 | [
"Type: Enhancement",
"Priority: Low"
] | pmneve | 3 |
wkentaro/labelme | deep-learning | 533 | How to add a point to my annotated polygons? | Hi, thanks for your great work, but I recently ran into a problem. I previously annotated some data for semantic segmentation, and now I want to add some points to each polygon in the images I annotated. I don't want to delete the annotated polygons and annotate them again; I just want to add one or two points to an already-annotated polygon, but I can't find a way to do it. Does labelme support such an operation? Thanks. | closed | 2019-12-31T08:07:31Z | 2021-04-13T11:20:28Z | https://github.com/wkentaro/labelme/issues/533 | [] | chegnyanjun | 4 |
faif/python-patterns | python | 384 | What's the difference between builder.py and abstract_factory.py? | closed | 2022-01-02T09:51:57Z | 2022-01-03T08:24:29Z | https://github.com/faif/python-patterns/issues/384 | [
"question"
] | SeekPoint | 2 | |
hzwer/ECCV2022-RIFE | computer-vision | 233 | no module named moviepy | 
Can anyone help? | closed | 2022-01-23T09:14:05Z | 2022-01-23T09:28:48Z | https://github.com/hzwer/ECCV2022-RIFE/issues/233 | [] | danieltan007 | 1 |
coqui-ai/TTS | deep-learning | 3,627 | Please update to be able to use with Python 3.12 | RuntimeError: TTS requires python >= 3.9 and < 3.12 but your Python version is 3.12.0 (tags/v3.12.0:0fb18b0, Oct 2 2023, 13:03:39) [MSC v.1935 64 bit (AMD64)]
This happens even when trying to install with:
`pip3 install --ignore-requires-python TTS`
| closed | 2024-03-11T02:42:57Z | 2024-10-07T00:36:54Z | https://github.com/coqui-ai/TTS/issues/3627 | [
"wontfix",
"feature request"
] | Aphexus | 7 |
tflearn/tflearn | data-science | 963 | how to use pretrained word2vec in tflearn for text classification | Hi Guys,
I found tflearn to be a good tool for deep learning.
I am just wondering if I can load a pretrained word2vec model (like Google's word2vec, i.e. https://code.google.com/archive/p/word2vec/), and how to use it to initialise an embedding layer in tflearn.
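There's no built-in loader, but the usual pattern is to build an initialization matrix and assign it to the embedding layer's variable after the model is created. Below is a sketch; the gensim loader and the tflearn weight-assignment idiom are assumptions drawn from their respective docs and may need adapting:

```python
# Sketch: turn a pretrained word2vec model into an init matrix for an
# embedding layer. The gensim call and file name are assumptions:
# from gensim.models import KeyedVectors
# w2v = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin",
#                                         binary=True)

def build_embedding_matrix(vocab, w2v, dim):
    """Row i is the pretrained vector for vocab[i], zeros if out-of-vocabulary."""
    matrix = [[0.0] * dim for _ in vocab]
    for i, word in enumerate(vocab):
        if word in w2v:
            matrix[i] = list(w2v[word])
    return matrix

# Toy stand-in for a real KeyedVectors object:
fake_w2v = {"hello": [0.1, 0.2], "world": [0.3, 0.4]}
print(build_embedding_matrix(["hello", "unk", "world"], fake_w2v, dim=2))
# [[0.1, 0.2], [0.0, 0.0], [0.3, 0.4]]

# Assigning it in tflearn (idiom assumed from tflearn's variable helpers):
# emb = tflearn.embedding(net, input_dim=len(vocab), output_dim=300, name="emb")
# model = tflearn.DNN(network)
# model.set_weights(tflearn.get_layer_variables_by_name("emb")[0], matrix)
```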
Thank You,
Biswajit | open | 2017-11-21T05:43:28Z | 2017-11-21T05:48:13Z | https://github.com/tflearn/tflearn/issues/963 | [] | Biswajit2902 | 0 |
ydataai/ydata-profiling | jupyter | 795 | Profile contains incorrect types and rejected columns | **Description:**
Profile contains incorrect types and rejected columns.
**Reproduction:**
```python
import datetime
import pandas
import pandas_profiling
df = pandas.DataFrame({
    "time": [datetime.datetime(2021, 5, 11)] * 4,
    "symbol": ["AAPL", "AMZN", "GOOG", "TSLA"],
    "price": [125.91, 3223.91, 2308.76, 617.20],
})
profile = pandas_profiling.ProfileReport(df)
# `time` column is rejected.
# `price` column has type Categorical not Numeric.
```
**Version:**
* _Python version_: Python 3.8
* _Environment_: Jupyter Lab
<details><summary>Click to expand <strong><em>requirements.txt</em></strong></summary>
<p>
```
beautifulsoup4==4.9.3
croniter==1.0.13
defopt==6.1.0
findspark==1.4.2
folium==0.12.1
fsspec==2021.5.0
google-cloud-bigquery==2.16.1
humanize==3.5.0
imagehash==4.2.0
ipython==7.23.1
jedi==0.18.0
jinja2==3.0.1
jupyter-contrib-nbextensions==0.5.1
jupyter-nbextensions-configurator==0.4.1
jupyterlab==3.0.15
jupytext==1.11.2
koalas==1.5.0
marshmallow-pyspark==0.2.2
notebook==6.4.0
numpy==1.20.3
openpyxl==3.0.7
overrides==6.1.0
pandas==1.2.4
pandas-profiling==3.0.0
paramiko==2.7.2
pdpyras==4.2.1
pebble==4.6.1
praw==7.2.0
premailer==3.8.0
psutil==5.8.0
pyarrow==4.0.0
pydeequ==0.1.6
pyspark==3.0.2
pytest==6.2.4
pytest-cov==2.12.0
pytest-icdiff==0.5
pytest-securestore==0.1.3
pytest-timeout==1.4.2
python-dateutil==2.8.1
pytimeparse==1.1.8
ratelimit==2.2.1
requests==2.25.1
rich==10.2.1
scipy==1.6.3
sentry-sdk==1.1.0
slack-sdk==3.5.1
structlog==21.1.0
testing.postgresql==1.3.0
typeguard==2.12.0
typing-utils==0.0.3
xlrd==2.0.1
```
</p>
</details> | closed | 2021-05-19T15:06:14Z | 2021-06-30T10:16:26Z | https://github.com/ydataai/ydata-profiling/issues/795 | [] | ashwin153 | 1 |
pallets/quart | asyncio | 145 | Not working with PyLTI even after 'import quart.flask_patch' | I'm trying to make an app using Quart together with [PyLTI](https://pypi.org/project/PyLTI/).
PyLTI is originally for Flask, so I'm using `import quart.flask_patch`.
However, the following error happens when I run the app with `python main.py`.
Could you please give me some advice?
I'm not sure where I should ask for help.
If this is not an appropriate place to ask this kind of question, please just close the issue.
Thank you in advance.
Error
```
Traceback (most recent call last):
  File "/home/xxx/Project/quart+lti/main.py", line 13, in <module>
    @lti(request='session')
  File "/home/xxx/.cache/pypoetry/virtualenvs/quart+lti-rQY8ksPW-py3.10/lib/python3.10/site-packages/pylti/flask.py", line 188, in wrapper
    the_lti = LTI(lti_args, lti_kwargs)
  File "/home/xxx/.cache/pypoetry/virtualenvs/quart+lti-rQY8ksPW-py3.10/lib/python3.10/site-packages/pylti/flask.py", line 37, in __init__
    LTIBase.__init__(self, lti_args, lti_kwargs)
  File "/home/xxx/.cache/pypoetry/virtualenvs/quart+lti-rQY8ksPW-py3.10/lib/python3.10/site-packages/pylti/common.py", line 470, in __init__
    self.nickname = self.name
  File "/home/xxx/.cache/pypoetry/virtualenvs/quart+lti-rQY8ksPW-py3.10/lib/python3.10/site-packages/pylti/common.py", line 478, in name
    if 'lis_person_sourcedid' in self.session:
  File "/home/xxx/.cache/pypoetry/virtualenvs/quart+lti-rQY8ksPW-py3.10/lib/python3.10/site-packages/werkzeug/local.py", line 278, in __get__
    obj = instance._get_current_object()
  File "/home/xxx/.cache/pypoetry/virtualenvs/quart+lti-rQY8ksPW-py3.10/lib/python3.10/site-packages/werkzeug/local.py", line 407, in _get_current_object
    return self.__local() # type: ignore
  File "/home/xxx/.cache/pypoetry/virtualenvs/quart+lti-rQY8ksPW-py3.10/lib/python3.10/site-packages/quart/globals.py", line 26, in _ctx_lookup
    raise RuntimeError(f"Attempt to access {name} outside of a relevant context")
RuntimeError: Attempt to access session outside of a relevant context
```
main.py
```python
import quart.flask_patch
from pylti.flask import lti
from quart import Quart, redirect, url_for
app = Quart(__name__)
@app.route('/lti', methods=['POST'])
@lti(request='initial')
async def lti(lti):
    return redirect(url_for('index'), code=303)

@app.route('/')
@lti(request='session')
async def index(lti):
    return 'index'

app.run(host='0.0.0.0')
``` | closed | 2022-04-13T15:10:17Z | 2022-10-03T00:26:21Z | https://github.com/pallets/quart/issues/145 | [] | yuttie | 2 |
nerfstudio-project/nerfstudio | computer-vision | 3,611 | Throw an error on non-zero k4, k5 or k6 | Hi @jb-ye @devernay, why is there a problem when k4 is not equal to 0? #3355 #3381 When reading the camera distortion coefficients, such as [here](https://github.com/nerfstudio-project/nerfstudio/blob/73fe54dda0b743616854fc839889d955522e0e68/nerfstudio/process_data/colmap_utils.py#L262C2-L278C42), k4 is correctly assigned as the fourth-order radial distortion coefficient rather than the fourth coefficient. The current approach makes data with a non-zero k4 coefficient (like fisheye) unusable.
| open | 2025-03-12T12:24:12Z | 2025-03-15T04:07:31Z | https://github.com/nerfstudio-project/nerfstudio/issues/3611 | [] | Yubel426 | 3 |
pyeve/eve | flask | 1,222 | Demo fails | QuickStart:
    root@xtgl-app-009:/root/a#cat run.py
    from eve import Eve
    app = Eve()
    app.run()
    root@xtgl-app-009:/root/a#cat settings.py
    DOMAIN = {'people': {}}
    root@xtgl-app-009:/root/a#
--------------------------------------------------------------------------------------
Then I run the demo:
    root@xtgl-app-009:/root/a#python3 run.py
     * Serving Flask app "eve" (lazy loading)
     * Environment: production
       WARNING: Do not use the development server in a production environment.
       Use a production WSGI server instead.
     * Debug mode: off
     * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
--------------------------------------------------------------------------------------
Then I follow the guide:
    root@xtgl-app-009:/root/a#curl -i http://127.0.0.1:5000/people
    curl: (52) Empty reply from server
    root@xtgl-app-009:/root/a#curl -i http://127.0.0.1:5000/
    curl: (52) Empty reply from server
-----------------------------------------------------------------------------------
Finally, I have no idea what is wrong. How should I deal with this? | closed | 2019-01-23T09:17:26Z | 2019-03-28T15:13:26Z | https://github.com/pyeve/eve/issues/1222 | [] | SuperHighMan | 1 |
awesto/django-shop | django | 727 | Add tag for version 0.12.1 | Apparently, the git tag for version 0.12.1 was forgotten. Please add it (as documented [here](/awesto/django-shop/blob/master/shop/__init__.py#L6-L18)). | closed | 2018-04-29T07:07:02Z | 2018-04-30T00:37:34Z | https://github.com/awesto/django-shop/issues/727 | [] | r4co0n | 1 |
python-gitlab/python-gitlab | api | 2,738 | ldap_group_links.list() not usable | ## Description of the problem, including code/CLI snippet
The current implementation of the list() method of the ldap_group_links API is not usable. The CLI throws an exception, and the API returns unusable data because an id is missing.
    $ gitlab group-ldap-group-link list --group-id xxxxxx
    [<GroupLDAPGroupLink id:None provider:ldapmain>, <GroupLDAPGroupLink id:None provider:ldapmain>, <GroupLDAPGroupLink id:None provider:ldapmain>, <GroupLDAPGroupLink id:None provider:ldapmain>, <GroupLDAPGroupLink id:None provider:ldapmain>, <GroupLDAPGroupLink id:None provider:ldapmain>, <GroupLDAPGroupLink id:None provider:ldapmain>]
    Traceback (most recent call last):
      File "/home/rof/.local/bin/gitlab", line 8, in <module>
        sys.exit(main())
                 ^^^^^^
      File "/home/rof/.local/lib/python3.12/site-packages/gitlab/cli.py", line 389, in main
        gitlab.v4.cli.run(
      File "/home/rof/.local/lib/python3.12/site-packages/gitlab/v4/cli.py", line 556, in run
        printer.display_list(data, fields, verbose=verbose)
      File "/home/rof/.local/lib/python3.12/site-packages/gitlab/v4/cli.py", line 518, in display_list
        self.display(get_dict(obj, fields), verbose=verbose, obj=obj)
      File "/home/rof/.local/lib/python3.12/site-packages/gitlab/v4/cli.py", line 487, in display
        id = getattr(obj, obj._id_attr)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/home/rof/.local/lib/python3.12/site-packages/gitlab/base.py", line 134, in __getattr__
        raise AttributeError(message)
    AttributeError: 'GroupLDAPGroupLink' object has no attribute 'id'
I think the problem is that the ldap_group_links API has two possible (mutually exclusive) id fields: cn and filter. I did not find a way to reflect this in python-gitlab. As we do not use LDAP filters in our GitLab instance, my "fix" was to add
`_id_attr = "cn"`
to the `GroupLDAPGroupLink` class.
## Expected Behavior
The data returned shall contain either a cn or filter field.
## Actual Behavior
Returned data only has the provider attribute, which is useless without the other field(s).
## Specifications
- python-gitlab version: 4.2.0
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): 16.5.3
| open | 2023-12-04T20:19:26Z | 2024-01-21T19:10:49Z | https://github.com/python-gitlab/python-gitlab/issues/2738 | [
"bug"
] | zapp42 | 5 |
skfolio/skfolio | scikit-learn | 6 | [BUG] Python Version Request: Reduce python >= 3.9 | Thanks for the great-looking package!
Hopefully this is a small request. Currently you have:
https://github.com/skfolio/skfolio/blob/a1e79d3cf732467dcd078cd414b466b889a2b1c5/pyproject.toml#L16
Is it possible to change this to `3.9` to reduce the restrictiveness?
scikit-learn doesn't actually specify a Python version in its pyproject.toml, and I don't see a need for the restriction given the current dependencies:
``` bash
dependencies = [
"numpy>=1.23.4",
"scipy>=1.8.0",
"pandas>=1.4.1",
"cvxpy>=1.4.1",
"scikit-learn>=1.3.2",
"joblib>=1.3.2",
"plotly>=5.15.0"
]
``` | closed | 2024-01-12T14:39:31Z | 2024-01-16T20:51:33Z | https://github.com/skfolio/skfolio/issues/6 | [
"bug"
] | mdancho84 | 3 |
mwouts/itables | jupyter | 199 | Pandas Style fails to render in Colab | Pandas Style objects fail to render in Google Colab.
This is because the HTML representation of the style object generated by `to_html` is
```
<table id="T_4b0528c8-6c7e-4821-bad4-20c3dbda01a8" class="dataframe">
```
while `itables` expects it to be exactly
```
<table id="T_4b0528c8-6c7e-4821-bad4-20c3dbda01a8">
``` | closed | 2023-10-01T13:54:02Z | 2023-10-01T22:33:52Z | https://github.com/mwouts/itables/issues/199 | [] | mwouts | 0 |
tflearn/tflearn | data-science | 1,001 | no pip3 package | The installation of the stable release, as described in the docs, does not work when using pip3:
$ pip3 install tflearn
Collecting tflearn
Could not find a version that satisfies the requirement tflearn (from versions: )
No matching distribution found for tflearn | open | 2018-01-19T23:19:30Z | 2018-01-30T21:59:19Z | https://github.com/tflearn/tflearn/issues/1001 | [] | Eezzeldin | 1 |
jazzband/django-oauth-toolkit | django | 711 | authorization_code should use PKCE to verify public clients | According to [RFC 7636 §4](https://tools.ietf.org/html/rfc7636#section-4), for public clients with the `authorization_code` grant, PKCE's `code_verifier` must be used to authenticate the client. Currently, authentication for public clients is dropped altogether.
**Current behaviour:**
The client's `code_challenge` is not used to authenticate public clients. This leaves public clients vulnerable to authorization code interception attacks.
**Expected behaviour:**
Public clients on the `authorization_code` grant type must be verified by the method described in the RFC linked above.
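For concreteness, the verification the RFC asks the token endpoint to perform is small. Here is a sketch in plain Python, independent of django-oauth-toolkit's internals, checked against the RFC 7636 Appendix B test vector:

```python
import base64
import hashlib
import secrets

def s256_challenge(code_verifier: str) -> str:
    """BASE64URL(SHA256(ASCII(code_verifier))) without padding, per RFC 7636 §4.2."""
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

def verify_code_verifier(code_verifier, stored_challenge, method="S256"):
    # Constant-time comparison against the challenge saved at /authorize time.
    if method == "S256":
        return secrets.compare_digest(s256_challenge(code_verifier), stored_challenge)
    return secrets.compare_digest(code_verifier, stored_challenge)  # "plain"

# Test vector from RFC 7636 Appendix B:
verifier = "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
print(s256_challenge(verifier))  # E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```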
I am not sure how much making this mandatory would affect existing deployments, but there should be some option / setting switch that we can flip to enable this security feature.
I'm open to working on this; let me know if I should. | closed | 2019-04-25T07:34:41Z | 2019-05-29T19:20:23Z | https://github.com/jazzband/django-oauth-toolkit/issues/711 | [] | Abhishek8394 | 3 |
ageitgey/face_recognition | python | 1,351 | Is it possible to get an accuracy/confidence level when predicting a face? | I want to get the accuracy/confidence level of the prediction result. Is that possible?
I'm using [this example](https://github.com/ageitgey/face_recognition/blob/master/examples/facerec_ipcamera_knn.py). | open | 2021-08-04T11:03:44Z | 2021-09-03T20:12:43Z | https://github.com/ageitgey/face_recognition/issues/1351 | [] | moinologics | 1 |
StackStorm/st2 | automation | 6,021 | action_service.list_values no limit or offset support? | ## SUMMARY
action_service.list_values not working properly?
### STACKSTORM VERSION
3.7.0
##### OS, environment, install method
OS install on RHEL
## Steps to reproduce the problem
As per the contrib/runners/python_runner/python_runner/python_action_wrapper.py the action service allows for listing of datastore values. The relevant method I think is this:
    def list_values(self, local=True, prefix=None):
        return self.datastore_service.list_values(local=local, prefix=prefix)
However, the above is calling below method without the "limit" parameter:
st2common/st2common/services/datastore.py line 71
    def list_values(self, local=True, prefix=None, limit=None, offset=0):
        """
        Retrieve all the datastores items.
        :param local: List values from a namespace local to this pack/class. Defaults to True.
        :type: local: ``bool``
        :param prefix: Optional key name prefix / startswith filter.
        :type prefix: ``str``
        :param limit: Number of keys to get. Defaults to the configuration set at 'api.max_page_size'.
        :type limit: ``integer``
        :param offset: Number of keys to offset. Defaults to 0.
        :type offset: ``integer``
        :rtype: ``list`` of :class:`KeyValuePair`
        """
        client = self.get_api_client()
        self._logger.debug("Retrieving all the values from the datastore")
        limit = limit or cfg.CONF.api.max_page_size
        key_prefix = self._get_full_key_prefix(local=local, prefix=prefix)
        kvps = client.keys.get_all(prefix=key_prefix, limit=limit, offset=offset)
        return kvps
Using action_service.list_values, I'm not able to provide the limit parameter, and thus the action fails with:
oslo_config.cfg.NoSuchOptError: no such option in group api: max_page_size
even though I put max_page_size = 100 into st2.conf and restarted st2.
Is there a reason why I can't provide the limit to the action_service.list_values method?
(another missing param is "offset", which would also be useful in certain situations)
## Expected Results
A list of key-value pairs.
## Actual Results
    st2.actions.python.SetAccountsObject: DEBUG Creating new Client object.
    st2.actions.python.SetAccountsObject: DEBUG Retrieving all the values from the datastore
    Traceback (most recent call last):
      File "/opt/stackstorm/st2/lib/python3.8/site-packages/oslo_config/cfg.py", line 2262, in _get
        return self.__cache[key]
    KeyError: ('api', 'max_page_size')

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/opt/stackstorm/st2/lib/python3.8/site-packages/python_runner/python_action_wrapper.py", line 395, in <module>
        obj.run()
      File "/opt/stackstorm/st2/lib/python3.8/site-packages/python_runner/python_action_wrapper.py", line 214, in run
        output = action.run(**self._parameters)
      File "/opt/stackstorm/internal_packs/preprocessing_shared_scripts/actions/set_accounts_obj.py", line 8, in run
        accts_objects = self.get_overrides(capability_name)
      File "/opt/stackstorm/internal_packs/preprocessing_shared_scripts/actions/set_accounts_obj.py", line 12, in get_overrides
        return self.action_service.list_values(local=False, prefix=f'{capability}_accounts_')
      File "/opt/stackstorm/st2/lib/python3.8/site-packages/python_runner/python_action_wrapper.py", line 136, in list_values
        return self.datastore_service.list_values(local=local, prefix=prefix)
      File "/opt/stackstorm/st2/lib/python3.8/site-packages/st2common/services/datastore.py", line 92, in list_values
        limit = limit or cfg.CONF.api.max_page_size
      File "/opt/stackstorm/st2/lib/python3.8/site-packages/oslo_config/cfg.py", line 2547, in __getattr__
        return self._conf._get(name, self._group)
      File "/opt/stackstorm/st2/lib/python3.8/site-packages/oslo_config/cfg.py", line 2264, in _get
        value = self._do_get(name, group, namespace)
      File "/opt/stackstorm/st2/lib/python3.8/site-packages/oslo_config/cfg.py", line 2282, in _do_get
        info = self._get_opt_info(name, group)
      File "/opt/stackstorm/st2/lib/python3.8/site-packages/oslo_config/cfg.py", line 2415, in _get_opt_info
        raise NoSuchOptError(opt_name, group)
    oslo_config.cfg.NoSuchOptError: no such option in group api: max_page_size
| open | 2023-09-06T13:26:59Z | 2023-09-06T13:27:45Z | https://github.com/StackStorm/st2/issues/6021 | [] | fdrab | 0 |
jpadilla/django-rest-framework-jwt | django | 394 | Help configuring JWT_PRIVATE_KEY/JWT_PUBLIC_KEY | I'm using `django-rest-framework-jwt` for REST API authentication, and I'd like to have the same web token authorize access to another HTTP service (CouchDB).
To create a JWT-enabled reverse proxy, I'm looking at jwtproxy (https://github.com/coreos/jwtproxy), which (AFAIK) can use a preshared RSA key, so I'm trying to configure RSA private/public keys in `django-rest-framework-jwt`.
The docs mention `JWT_PUBLIC_KEY` is an object of type cryptography.hazmat.primitives.asymmetric.rsa.RSAPublicKey, so I'm using `cryptography` to try to load a private key file (https://github.com/coreos/jwtproxy/blob/master/examples/httpserver/mykey.key):
```
def load_private_key(filepath):
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.backends import default_backend
    with open(filepath, "rb") as key_file:
        private_key = serialization.load_pem_private_key(key_file.read(), password=None, backend=default_backend())
    return private_key

private_key = load_private_key('../../jwtproxy/examples/httpserver/mykey.key')

JWT_AUTH = {
    'JWT_ALLOW_REFRESH': True,
    'JWT_PRIVATE_KEY': private_key,
    'JWT_PUBLIC_KEY': private_key.public_key(),
    'JWT_ALGORITHM': 'HS256'
}
```
But I get an error about the key not being a string type:
```
value = <cryptography.hazmat.backends.openssl.rsa._RSAPrivateKey object at 0x10ca57f60>

    def force_bytes(value):
        if isinstance(value, text_type):
            return value.encode('utf-8')
        elif isinstance(value, binary_type):
            return value
        else:
>           raise TypeError('Expected a string value')
E           TypeError: Expected a string value
```
If I convert the key object to bytes:
```
private_key.public_key().public_bytes(serialization.Encoding.PEM, serialization.PublicFormat.PKCS1)
```
i get an exception about the key format:
```
jwt.exceptions.InvalidKeyError: The specified key is an asymmetric key or x509 certificate and should not be used as an HMAC secret.
```
Could I get a nudge in the right direction on how to set proper values for JWT_PRIVATE_KEY/JWT_PUBLIC_KEY from an RSA key?
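For what it's worth, the final `InvalidKeyError` suggests (my reading of the traceback, not verified against your project) that `JWT_ALGORITHM` is still the HMAC algorithm `'HS256'`; an asymmetric key pair needs an RSA algorithm such as `'RS256'`. A settings fragment, reusing `private_key` from `load_private_key()` above:

```python
JWT_AUTH = {
    'JWT_ALLOW_REFRESH': True,
    'JWT_PRIVATE_KEY': private_key,
    'JWT_PUBLIC_KEY': private_key.public_key(),
    'JWT_ALGORITHM': 'RS256',  # was 'HS256' in the snippet above
}
```

With an HMAC algorithm, PyJWT deliberately rejects asymmetric keys to prevent algorithm-confusion attacks, which matches the "should not be used as an HMAC secret" message.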
| closed | 2017-10-31T20:35:54Z | 2018-03-02T11:46:17Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/394 | [] | zemanel | 3 |
httpie/cli | python | 571 | Should build official Docker image and add usage instructions | I created this: https://github.com/teracyhq/docker-files/tree/master/httpie-jwt-auth
and I'd like to do the same for an official Docker image of httpie (instead of under the teracy/ umbrella).
Let's discuss here; anyone interested is welcome, and I could lead the effort.
related: https://github.com/jakubroztocil/httpie/pull/236 | open | 2017-03-23T18:23:12Z | 2021-05-29T06:08:03Z | https://github.com/httpie/cli/issues/571 | [
"packaging"
] | hoatle | 2 |
axnsan12/drf-yasg | rest-api | 473 | DeprecationWarning on Python 3.7/3.8 and breakage on master/3.10 due to coreapi dependency | Originally posted at https://github.com/encode/django-rest-framework/issues/6991
Relates to #389
___________
Python 3.8 was released today, but this warning is still triggered by `djangorestframework` 3.10.3 and `drf-yasg` 1.17.0:
https://github.com/tomchristie/itypes/issues/11
This causes users who run their DRF tests with `-Werror` to fail the build if they also use `drf-yasg`.
From: https://docs.python.org/3/library/collections.html#module-collections
> Deprecated since version 3.3, will be removed in version 3.10: Moved Collections Abstract Base Classes to the collections.abc module.
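The compatibility shim that downstream libraries eventually adopted looks roughly like this (a generic sketch, not the actual `itypes` patch):

```python
# Works on Python 2.7 through 3.10+: the ABCs moved to collections.abc
# in 3.3 and were removed from the top-level module in 3.10.
try:
    from collections.abc import Mapping, Sequence
except ImportError:  # Python < 3.3
    from collections import Mapping, Sequence

print(issubclass(dict, Mapping), issubclass(list, Sequence))  # → True True
```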
## Steps to reproduce
```py
from rest_framework.test import APIClient
```
Run `pytest` tests with`-Werror` under Python 3.7.
```
$ python3 -Werror -m pytest
```
```
During handling of the above exception, another exception occurred:
.tox/docker/lib/python3.7/site-packages/_pytest/config/__init__.py:462: in _importconftest
mod = conftestpath.pyimport()
.tox/docker/lib/python3.7/site-packages/py/_path/local.py:701: in pyimport
__import__(modname)
<frozen importlib._bootstrap>:983: in _find_and_load
???
<frozen importlib._bootstrap>:967: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:677: in _load_unlocked
???
.tox/docker/lib/python3.7/site-packages/_pytest/assertion/rewrite.py:142: in exec_module
exec(co, module.__dict__)
rest_api/tests/conftest.py:2: in <module>
from rest_framework.test import APIClient
.tox/docker/lib/python3.7/site-packages/rest_framework/test.py:16: in <module>
from rest_framework.compat import coreapi, requests
.tox/docker/lib/python3.7/site-packages/rest_framework/compat.py:98: in <module>
import coreapi
.tox/docker/lib/python3.7/site-packages/coreapi/__init__.py:2: in <module>
from coreapi import auth, codecs, exceptions, transports, utils
.tox/docker/lib/python3.7/site-packages/coreapi/codecs/__init__.py:2: in <module>
from coreapi.codecs.base import BaseCodec
.tox/docker/lib/python3.7/site-packages/coreapi/codecs/base.py:1: in <module>
import itypes
.tox/docker/lib/python3.7/site-packages/itypes.py:2: in <module>
from collections import Mapping, Sequence
<frozen importlib._bootstrap>:1032: in _handle_fromlist
???
.tox/docker/lib/python3.7/collections/__init__.py:52: in __getattr__
DeprecationWarning, stacklevel=2)
E DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
``` | closed | 2019-10-16T12:43:42Z | 2020-09-17T12:36:20Z | https://github.com/axnsan12/drf-yasg/issues/473 | [] | johnthagen | 5 |
eriklindernoren/ML-From-Scratch | data-science | 23 | MatplotlibWrapper is an undefined name | MatplotlibWrapper is an undefined name in gaussian_mixture_model.py and k_means.py. Undefined names can raise [NameError](https://docs.python.org/3/library/exceptions.html#NameError) at runtime.
flake8 testing of https://github.com/eriklindernoren/ML-From-Scratch on Python 2.7.13
$ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics
```
./mlfromscratch/unsupervised_learning/gaussian_mixture_model.py:137:9: F821 undefined name 'MatplotlibWrapper'
p = MatplotlibWrapper()
^
```
The same issue is __also present in k_means.py__ but [the * import](https://github.com/eriklindernoren/ML-From-Scratch/blob/master/mlfromscratch/unsupervised_learning/k_means.py#L11) masks it from flake8. | closed | 2017-09-14T09:19:28Z | 2017-09-18T14:31:12Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/23 | [] | cclauss | 1 |
piskvorky/gensim | machine-learning | 3,025 | Custom Keyword inclusion | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
We need the generated summary to contain specific keywords from the input text.
#### Steps/code/corpus to reproduce
We need the `summarize` function to accept keywords as an input parameter.
output = summarize(text, word_count=9, **custom_keywords=keywords**)
**For example,**
```
from gensim.summarization.summarizer import summarize
keywords = ['apple', 'red', 'yellow']
text = "Apple is red. Grape is black. Banana is yellow"
output = summarize(text, word_count=9, custom_keywords=keywords)
print(output)
```
#### Output
**Apple** is **red.**
Banana is **yellow**
As in above example, we need a parameter to include custom keywords and those keywords must be present in the summarized text.
(i.e., the sentences containing the keywords should be present in the output).
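Since `summarize` currently has no such parameter, one workaround is to post-filter sentences by the keywords. The helper below is hypothetical and not part of gensim; it is only a naive sketch of the idea:

```python
def keyword_filter_summary(text, keywords):
    # Naive sketch: keep only sentences containing at least one keyword.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    wanted = {k.lower() for k in keywords}
    kept = [s for s in sentences if wanted & set(s.lower().split())]
    return ". ".join(kept)

keywords = ["apple", "red", "yellow"]
text = "Apple is red. Grape is black. Banana is yellow"
print(keyword_filter_summary(text, keywords))  # → Apple is red. Banana is yellow
```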
| closed | 2021-01-12T07:33:17Z | 2021-01-12T10:28:36Z | https://github.com/piskvorky/gensim/issues/3025 | [] | Vignesh9395 | 1 |
encode/databases | asyncio | 208 | aiopg engine raises ResourceWarning in transactions | Steps to reproduce:
Python 3.7.7
```python
import asyncio
from databases import Database
url = "postgresql+aiopg://localhost:5432"
async def generate_series(db, *args):
async with db.connection() as conn:
async for row in conn.iterate( # implicitly starts transaction
f"select generate_series({', '.join(f'{a}' for a in args)}) as i"
):
yield row["i"]
async def main():
async with Database(url) as db:
print([i async for i in generate_series(db, 1, 10)])
async with db.transaction():
print(await db.fetch_val("select 1"))
asyncio.run(main())
```
Result:
```
.../python3.7/site-packages/aiopg/connection.py:256: ResourceWarning: You can only have one cursor per connection. The cursor for connection will be closed forcibly <aiopg.connection::Connection isexecuting=False, closed=False, echo=False, cursor=<aiopg.cursor::Cursor name=None, closed=False>>. ' {!r}.').format(self), ResourceWarning)
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
.../python3.7/site-packages/aiopg/connection.py:256: ResourceWarning: You can only have one cursor per connection. The cursor for connection will be closed forcibly <aiopg.connection::Connection isexecuting=False, closed=False, echo=False, cursor=<aiopg.cursor::Cursor name=None, closed=False>>. ' {!r}.').format(self), ResourceWarning)
1
``` | open | 2020-05-20T10:55:25Z | 2021-03-16T16:59:38Z | https://github.com/encode/databases/issues/208 | [] | nkoshell | 3 |
sktime/sktime | scikit-learn | 7,587 | [ENH] Forecast reconciliation with Machine Learning | One of the new approaches to forecast reconciliation is using ML regressors that leverage the base forecasts as inputs to predict forecasts of the bottom levels (similar to stacking models). The reconciled forecast is then the aggregation of such predicted values. For more information, refer to [1] and [2]
**Describe the solution you'd like**
A new transformation or new ReconcilerForecaster that is able to receive a sklearn regressor (capable of regressing multiple variables, which can be obtained by using sklearn's MultiOutputRegressor wrapper). The dataset for such regressor can be created by cross-validation.
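As a toy illustration of the approach in [1] and [2], with plain least squares standing in for the ML regressor and an invented two-series hierarchy:

```python
import numpy as np

# Hypothetical hierarchy: total = b0 + b1.
# Columns of `base` are base forecasts for [total, b0, b1]; each row is
# one cross-validation window.
base = np.array([[10.5, 4.0, 6.2],
                 [12.1, 5.1, 6.8],
                 [ 9.7, 3.9, 5.9],
                 [11.3, 4.6, 6.5]])
actual_bottom = np.array([[4.2, 6.1],
                          [5.0, 7.0],
                          [4.0, 5.8],
                          [4.8, 6.4]])

# "Trainable" reconciliation: regress the actual bottom series on ALL base
# forecasts (an sklearn MultiOutputRegressor would replace lstsq here).
coef, *_ = np.linalg.lstsq(base, actual_bottom, rcond=None)

new_base = np.array([[11.0, 4.5, 6.4]])
bottom_pred = new_base @ coef           # predicted bottom levels
S = np.array([[1.0, 1.0],               # summing matrix: rows total, b0, b1
              [1.0, 0.0],
              [0.0, 1.0]])
reconciled = (S @ bottom_pred.T).T      # coherent by construction
print(reconciled.round(3))
```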
**Additional context**
[1] Spiliotis, Evangelos, et al. "Hierarchical forecast reconciliation with machine learning." Applied Soft Computing 112 (2021): 107756.
[2] Burba, Davide, and Trista Chen. "A trainable reconciliation method for hierarchical time-series." arXiv preprint arXiv:2101.01329 (2021).
| open | 2024-12-30T13:29:11Z | 2024-12-30T13:29:11Z | https://github.com/sktime/sktime/issues/7587 | [
"enhancement"
] | felipeangelimvieira | 0 |
mwaskom/seaborn | data-visualization | 3,546 | Interest in seaborn.objects API contributions? | I've seen a few contributions floating around for the seaborn.objects API (e.g., #3320). Sounds like there's a reluctance to move too fast while the API is settling, but I happen to have a very good student looking to do some work, and I'd _love_ to see the objects API have better integration with modelling tools (especially adding SEs to `PolyFit` and adding lowess (with SEs, ideally).
Is that the kind of proposal that would be welcomed?
yinkaisheng/Python-UIAutomation-for-Windows | automation | 1 | Adding ProcessId property to Control class | Hi !
I tried to add a ProcessId property:
``` python
class Control():
@property
def ProcessId(self):
'''Return process id'''
return ClientObject.dll.GetProcessId(self.Element)
```
and when I call the property I get this error message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\python27\lib\ctypes\__init__.py", line 378, in __getattr__
func = self.__getitem__(name)
File "C:\python27\lib\ctypes\__init__.py", line 383, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'GetElementProcessId' not found
```
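For what it's worth, that `AttributeError` is simply how ctypes reports a symbol the DLL does not export; note that the property above calls `GetProcessId` while the traceback mentions `GetElementProcessId`, so the called name and the exported name need to match. The failure mode can be reproduced against any shared library (POSIX-only sketch):

```python
import ctypes

lib = ctypes.CDLL(None)  # handle to the current process's symbols (POSIX)
try:
    lib.no_such_export_12345  # attribute access triggers a dlsym lookup
except AttributeError:
    print("missing export")
```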
| closed | 2015-12-28T21:27:05Z | 2015-12-29T07:00:48Z | https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/1 | [] | thu2004 | 1 |
jschneier/django-storages | django | 1,207 | Make django-storages compatible with the new Django's settings `STORAGES` | In Django 4.2 the new settings ``STORAGES`` has been added.
https://github.com/django/django/commit/1ec3f0961fedbe01f174b78ef2805a9d4f3844b1
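For illustration, the new-style configuration would look roughly like this (the backend paths are just examples):

```python
# settings.py, Django >= 4.2: one STORAGES dict replaces both
# DEFAULT_FILE_STORAGE and STATICFILES_STORAGE.
STORAGES = {
    "default": {
        "BACKEND": "storages.backends.s3boto3.S3Boto3Storage",
    },
    "staticfiles": {
        "BACKEND": "django.contrib.staticfiles.storage.StaticFilesStorage",
    },
}
```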
The ``DEFAULT_FILE_STORAGE`` and ``STATICFILES_STORAGE`` settings are deprecated in Django 4.2 and will be removed in Django 5.1.
https://github.com/django/django/commit/32940d390a00a30a6409282d314d617667892841 | closed | 2023-01-12T11:05:07Z | 2023-04-05T12:23:46Z | https://github.com/jschneier/django-storages/issues/1207 | [
"Help Wanted 💕"
] | pauloxnet | 3 |
dynaconf/dynaconf | flask | 1,252 | [RFC] Combining settings from different applications / packages | **Is your feature request related to a problem? Please describe.**
I am currently working on setting up an inheritance-based package and applications. In this model, I would have a base package (core) which has an abstract class with defined dynaconf settings and a config file. For example, the core class (which is packaged as a whl) would have specific flags that we want all applications that inherit from this class to use. Then I would have an application that imports this package and inherits from the core class. As mentioned above, I would want to use the settings from the core class.
**Describe the solution you'd like**
Is it possible to merge the LazySettings objects such that I import the core settings (LazySettings) and then merge them with the new one that the application creates? That way I can merge settings and utilize the inheritance model.
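Dynaconf itself documents options like `merge_enabled=True` and loading several settings files in order, which may already cover this. As a language-level illustration of what merging two settings trees means, here is a generic sketch; `deep_merge` is a hypothetical helper, not dynaconf API:

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base (override wins)."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

core = {"LOGGING": {"level": "INFO"}, "FLAG_X": True}
app = {"LOGGING": {"level": "DEBUG"}, "FLAG_Y": False}
print(deep_merge(core, app))
# → {'LOGGING': {'level': 'DEBUG'}, 'FLAG_X': True, 'FLAG_Y': False}
```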
| closed | 2025-02-13T21:19:00Z | 2025-02-17T23:05:41Z | https://github.com/dynaconf/dynaconf/issues/1252 | [
"Not a Bug",
"RFC"
] | omri-cavnue | 2 |
pytest-dev/pytest-html | pytest | 815 | Crashes with KeyError: 'retried' when test is retried with pytest-retry | It looks like the html reporter does not handle tests with the outcome `retried`.
It's quite easy to reproduce:
1. Create a failing test with the mark `@pytest.mark.flaky(retries=3, delay=1)`
2. Run the test and wait for it to fail so that it is retried
3. Pytest crashes with:
```
File "/__w/xi/xi/tests/venv/lib/python3.9/site-packages/pytest_html/report_data.py", line 94, in outcomes
self._outcomes[outcome.lower()]["value"] += 1
KeyError: 'retried'
``` | open | 2024-05-30T13:51:00Z | 2024-12-02T10:40:36Z | https://github.com/pytest-dev/pytest-html/issues/815 | [] | angelos-p | 3 |
seleniumbase/SeleniumBase | web-scraping | 2635 | UC Mode not working on Windows Server 2022 | Last week my code worked fine, but after updating, my code couldn't bypass the Cloudflare bot. For information, I use Windows Server 2022.
This is my code:
```
def you_message(text: str, out_type: str = 'json', timeout: int = 20):
"""Function to send a message and get results from YouChat.com
Args:
text (str): text to send
out_type (str): type of result (json, string). Defaults to 'json'.
timeout (int): timeout in seconds to wait for a result. Defaults to 20.
Returns:
str: response of the message
"""
qoted_text = urllib.parse.quote_plus(text)
result = {}
data = ""
with SB(uc=True) as sb:
sb.open(
f"https://you.com/api/streamingSearch?q={qoted_text}&domain=youchat")
timeout_delta = time.time() + timeout
stream_available = False
while time.time() <= timeout_delta:
try:
sb.assert_text("event: youChatIntent", timeout=8.45)
if 'error' in result:
result.pop('error')
data = sb.get_text("body pre")
break
except Exception:
pass
# Try to easy solve captcha challenge
try:
if sb.assert_element('iframe'):
sb.switch_to_frame("iframe")
sb.find_element(".ctp-checkbox-label", timeout=1).click()
# sb.save_screenshot('sel2.png') # Debug
except Exception:
result['error'] = 'Selenium was detected! Try again later. Captcha not solved automaticly.'
finally:
# Force exit from iframe
sb.switch_to_default_content()
if time.time() > timeout_delta:
# sb.save_screenshot('sel-timeout.png') # Debug
result['error'] = 'Timeout while getting data from Selenium! Try again later.'
res_message = ""
for line in data.split("\n"):
if line.startswith("data: {"):
json_data = json.loads(line[5:])
if 'youChatToken' in json_data:
res_message += json_data['youChatToken']
result['generated_text'] = res_message
if out_type == 'json':
return json.dumps(result)
else:
str_res = result['error'] if (
'error' in result) else result['generated_text']
return str_res
``` | closed | 2024-03-24T17:45:54Z | 2024-03-24T18:14:44Z | https://github.com/seleniumbase/SeleniumBase/issues/2635 | [
"invalid usage",
"UC Mode / CDP Mode"
] | zing75blog | 1 |
jupyter/nbgrader | jupyter | 969 | Add link to the Jupyter in Education map in third-party resources | Here is the link: https://elc.github.io/jupyter-map/ | closed | 2018-05-19T13:10:49Z | 2018-10-07T11:14:54Z | https://github.com/jupyter/nbgrader/issues/969 | [
"documentation",
"good first issue"
] | jhamrick | 2 |
plotly/dash-component-boilerplate | dash | 150 | Import errors | I'm trying to create a component, and when I tried to run the usage.py file I'm getting an import error:
I have recently received similar errors when trying to run some Dash apps, and they seem to come from an old version of Dash. I would have thought that running this inside of this repo would use an updated version of Dash, and I don't want to mess anything else up in this process. I'm still in my base env, but when I created this project I followed the instructions, including these steps:
pip install cookiecutter
pip install virtualenv
The project has the package.json file. The instructions didn't say to go into an env? I assumed once I created that venv folder that was OK, so now I'm not sure what to do. Should I pip install a newer version of Dash? Do I do that while I'm in the base env, and in which file/folder?
Error message:
File "usage.py", line 2, in <module>
from dash import Dash, callback, html, Input, Output
ImportError: cannot import name 'callback' from 'dash'
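For anyone landing here: `callback` (and `html`) only became importable from the top-level `dash` package in Dash 2.0, so this ImportError usually means an older Dash is on the path. A typical fix, assuming the boilerplate's suggested virtualenv workflow:

```shell
# from the project folder
python -m venv venv
source venv/bin/activate      # on Windows: venv\Scripts\activate
pip install --upgrade dash
python usage.py
```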
Thanks!
| closed | 2023-01-13T01:10:10Z | 2023-05-31T13:38:11Z | https://github.com/plotly/dash-component-boilerplate/issues/150 | [] | Tedmcm | 1 |
e2b-dev/code-interpreter | jupyter | 7 | CodeInterpreter.reconnect() not working as expected | I ran into this issue and am extremely confused; is this a problem on my end or with the library?

| closed | 2024-04-08T20:00:36Z | 2024-04-08T20:25:11Z | https://github.com/e2b-dev/code-interpreter/issues/7 | [] | im-calvin | 2 |
gee-community/geemap | streamlit | 623 | Not getting same interactive map as shown in tutorial video | I am getting the following via this code:
Map = geemap.Map()
Map
We did not get the inspector and other tools on the map, which is why we are facing problems adding layers.

The video tutorial shows the following:

| closed | 2021-08-14T10:12:25Z | 2021-08-15T01:22:19Z | https://github.com/gee-community/geemap/issues/623 | [
"bug"
] | dileep0 | 1 |
psf/requests | python | 6,601 | Since migration of urllib3 to 2.x data-strings with umlauts in a post request are truncated | Before requests 2.30 it was possible to just pass a Python-string with umlauts (äöü...) to a `requests.post` call. Since urllib3 2.x this causes the body of the request to be truncated. It seems that the Content-Length is calculated based on the length of the string and the string itself is handed over to the call as a multibyte representation causing the string to be truncated in the request because with multibyte characters there are more bytes than characters.
## Expected Result
All characters of the input string should have been sent to the target.
## Actual Result
Input string is truncated. See output of code below:
```
data:application/octet-stream;base64,RGFzIHNpbmQgUG9zdC1EYXRlbiBtaXQgVW1sYXV0ZW46IMOkww==
data:application/octet-stream;base64,RGFzIHNpbmQgUG9zdC1EYXRlbiBtaXQgVW1sYXV0ZW46IOT89g==
Das sind Post-Daten mit Umlauten: äüö
```
## Reproduction Steps
```python
import requests
import json
data_as_string = "Das sind Post-Daten mit Umlauten: äüö"
data_array = [
data_as_string,
bytes(data_as_string,'iso-8859-1'),
bytes(data_as_string,'utf-8')
]
post_url = "https://httpbin.org/post"
headers = {
"Content-Type": "text/plain",
"Host": "httpbin.org",
}
def main():
for d in data_array:
response = requests.post(
url=post_url,
headers=headers,
data=d
)
r = json.loads(response.content)
print(r['data'])
if __name__ == '__main__':
main()
```
The behaviour was also verified using Portswigger Burp Suite:
First Request:
```
50 4F 53 54 20 2F 70 6F 73 74 20 48 54 54 50 2F 32 0D 0A 48 6F 73 74 3A 20 68 74 74 70 62 69 6E 2E 6F 72 67 0D 0A 55 73 65 72 2D 41 67 65 6E 74 3A 20 70 79 74 68 6F 6E 2D 72 65 71 75 65 73 74 73 2F 32 2E 33 31 2E 30 0D 0A 41 63 63 65 70 74 2D 45 6E 63 6F 64 69 6E 67 3A 20 67 7A 69 70 2C 20 64 65 66 6C 61 74 65 2C 20 62 72 0D 0A 41 63 63 65 70 74 3A 20 2A 2F 2A 0D 0A 43 6F 6E 6E 65 63 74 69 6F 6E 3A 20 63 6C 6F 73 65 0D 0A 43 6F 6E 74 65 6E 74 2D 54 79 70 65 3A 20 74 65 78 74 2F 70 6C 61 69 6E 0D 0A 43 6F 6E 74 65 6E 74 2D 4C 65 6E 67 74 68 3A 20 33 37 0D 0A 0D 0A 44 61 73 20 73 69 6E 64 20 50 6F 73 74 2D 44 61 74 65 6E 20 6D 69 74 20 55 6D 6C 61 75 74 65 6E 3A 20 C3 A4 C3
```
```
POST /post HTTP/2
Host: httpbin.org
User-Agent: python-requests/2.31.0
Accept-Encoding: gzip, deflate, br
Accept: */*
Connection: close
Content-Type: text/plain
Content-Length: 37
Das sind Post-Daten mit Umlauten: äÃ
```
Second Request:
```
50 4F 53 54 20 2F 70 6F 73 74 20 48 54 54 50 2F 32 0D 0A 48 6F 73 74 3A 20 68 74 74 70 62 69 6E 2E 6F 72 67 0D 0A 55 73 65 72 2D 41 67 65 6E 74 3A 20 70 79 74 68 6F 6E 2D 72 65 71 75 65 73 74 73 2F 32 2E 33 31 2E 30 0D 0A 41 63 63 65 70 74 2D 45 6E 63 6F 64 69 6E 67 3A 20 67 7A 69 70 2C 20 64 65 66 6C 61 74 65 2C 20 62 72 0D 0A 41 63 63 65 70 74 3A 20 2A 2F 2A 0D 0A 43 6F 6E 6E 65 63 74 69 6F 6E 3A 20 6B 65 65 70 2D 61 6C 69 76 65 0D 0A 43 6F 6E 74 65 6E 74 2D 54 79 70 65 3A 20 74 65 78 74 2F 70 6C 61 69 6E 0D 0A 43 6F 6E 74 65 6E 74 2D 4C 65 6E 67 74 68 3A 20 33 37 0D 0A 0D 0A 44 61 73 20 73 69 6E 64 20 50 6F 73 74 2D 44 61 74 65 6E 20 6D 69 74 20 55 6D 6C 61 75 74 65 6E 3A 20 E4 FC F6
```
```
POST /post HTTP/2
Host: httpbin.org
User-Agent: python-requests/2.31.0
Accept-Encoding: gzip, deflate, br
Accept: */*
Connection: keep-alive
Content-Type: text/plain
Content-Length: 37
Das sind Post-Daten mit Umlauten: äüö
```
Third Request:
```
50 4F 53 54 20 2F 70 6F 73 74 20 48 54 54 50 2F 32 0D 0A 48 6F 73 74 3A 20 68 74 74 70 62 69 6E 2E 6F 72 67 0D 0A 55 73 65 72 2D 41 67 65 6E 74 3A 20 70 79 74 68 6F 6E 2D 72 65 71 75 65 73 74 73 2F 32 2E 33 31 2E 30 0D 0A 41 63 63 65 70 74 2D 45 6E 63 6F 64 69 6E 67 3A 20 67 7A 69 70 2C 20 64 65 66 6C 61 74 65 2C 20 62 72 0D 0A 41 63 63 65 70 74 3A 20 2A 2F 2A 0D 0A 43 6F 6E 6E 65 63 74 69 6F 6E 3A 20 6B 65 65 70 2D 61 6C 69 76 65 0D 0A 43 6F 6E 74 65 6E 74 2D 54 79 70 65 3A 20 74 65 78 74 2F 70 6C 61 69 6E 0D 0A 43 6F 6E 74 65 6E 74 2D 4C 65 6E 67 74 68 3A 20 34 30 0D 0A 0D 0A 44 61 73 20 73 69 6E 64 20 50 6F 73 74 2D 44 61 74 65 6E 20 6D 69 74 20 55 6D 6C 61 75 74 65 6E 3A 20 C3 A4 C3 BC C3 B6
```
```
POST /post HTTP/2
Host: httpbin.org
User-Agent: python-requests/2.31.0
Accept-Encoding: gzip, deflate, br
Accept: */*
Connection: keep-alive
Content-Type: text/plain
Content-Length: 40
Das sind Post-Daten mit Umlauten: äüö
```
## System Information
$ python -m requests.help
```json
{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "3.3.2"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.6"
},
"implementation": {
"name": "CPython",
"version": "3.12.0"
},
"platform": {
"release": "10",
"system": "Windows"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.31.0"
},
"system_ssl": {
"version": "300000b0"
},
"urllib3": {
"version": "2.1.0"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
```
| closed | 2023-12-13T14:09:36Z | 2024-12-13T00:06:48Z | https://github.com/psf/requests/issues/6601 | [] | secorvo-jen | 1 |
amidaware/tacticalrmm | django | 1,898 | At the top of the devices list, it would be great to have number of items | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| closed | 2024-06-21T09:47:47Z | 2024-06-21T15:29:05Z | https://github.com/amidaware/tacticalrmm/issues/1898 | [] | JCbarreau | 0 |
microsoft/nni | pytorch | 5,790 | Error in model speedup when using a single logit output layer | **Describe the issue**:
**Environment**:
- NNI version: 3.0
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu 18.04.4 LTS
- Server OS (for remote mode only):
- Python version: 3.9
- PyTorch/TensorFlow version: 1.12.0
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: no
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
```
AttributeError Traceback (most recent call last)
Input In [19], in <cell line: 9>()
22 _, masks = pruner.compress()
23 pruner.unwrap_model()
---> 25 model = ModelSpeedup(model, dummy_input, masks).speedup_model()
File ~/miniconda3/envs/optha/lib/python3.9/site-packages/nni/compression/speedup/model_speedup.py:435, in ModelSpeedup.speedup_model(self)
433 self.initialize_update_sparsity()
434 self.update_direct_sparsity()
--> 435 self.update_indirect_sparsity()
436 self.logger.info('Resolve the mask conflict after mask propagate...')
437 # fix_mask_conflict(self.masks, self.graph_module, self.dummy_input)
File ~/miniconda3/envs/optha/lib/python3.9/site-packages/nni/compression/speedup/model_speedup.py:306, in ModelSpeedup.update_indirect_sparsity(self)
304 for node in reversed(self.graph_module.graph.nodes):
305 node: Node
--> 306 self.node_infos[node].mask_updater.indirect_update_process(self, node)
307 sp = f', {sparsity_stats(self.masks.get(node.target, {}))}' if node.op == 'call_module' else ''
308 sp += f', {sparsity_stats({"output mask": self.node_infos[node].output_masks})}'
File ~/miniconda3/envs/optha/lib/python3.9/site-packages/nni/compression/speedup/mask_updater.py:229, in LeafModuleMaskUpdater.indirect_update_process(self, model_speedup, node)
227 for k, v in node_info.module.named_parameters():
228 if isinstance(v, torch.Tensor) and model_speedup.tensor_propagate_check(v) and v.dtype in torch_float_dtype:
--> 229 grad_zero = v.grad.data == 0
230 node_info.param_masks[k][grad_zero] = 0
AttributeError: 'NoneType' object has no attribute 'data'
```
**How to reproduce it?**:
```
import torch
import torch.nn as nn
import torchvision.models as tvmodels
from nni.compression.pruning import L1NormPruner
from nni.compression.utils import auto_set_denpendency_group_ids
from nni.compression.speedup import ModelSpeedup
if __name__ == '__main__':
model = tvmodels.resnet18()
model.fc = nn.Linear(in_features=512, out_features=1, bias=True)
config_list = [{
'op_types': ['Conv2d'],
'sparse_ratio': 0.1
}]
dummy_input = torch.rand(1, 3, 224, 224)
config_list = auto_set_denpendency_group_ids(model, config_list, dummy_input)
pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()
pruner.unwrap_model()
model = ModelSpeedup(model, dummy_input, masks).speedup_model()
``` | open | 2024-05-30T10:10:29Z | 2024-05-30T10:10:29Z | https://github.com/microsoft/nni/issues/5790 | [] | rishabh-WIAI | 0 |
recommenders-team/recommenders | machine-learning | 1718 | [ASK] How can I save SASRec model for re-training and prediction? | I have tried to save a trained SASRec model.
pickle, tf.saved_model.save, model.save(), and surprise.dump are not working.
While saving, I got a warning saying 'Found untraced functions',
and while loading, 'AttributeError: 'SASREC' object has no attribute 'seq_max_len''.
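A common workaround for subclassed Keras models like SASREC (which plain pickling chokes on) is to persist hyperparameters and weights separately, then rebuild the model and load the weights back. The snippet below mocks the weights list so it stays runnable; `SASREC(**hparams)` and the weight-loading calls in the comments are the intended real usage, and the hyperparameter names other than `seq_max_len` are made up:

```python
import json
import os
import pickle
import tempfile

# Stand-ins for the constructor kwargs and model.get_weights() output.
hparams = {"seq_max_len": 50, "num_blocks": 2, "hidden_units": 100}
weights = [[0.1, 0.2], [0.3]]

path = os.path.join(tempfile.mkdtemp(), "sasrec")
with open(path + ".json", "w") as f:
    json.dump(hparams, f)
with open(path + ".pkl", "wb") as f:
    pickle.dump(weights, f)

# Later / elsewhere:
#   model = SASREC(**json.load(open(path + ".json")))  # rebuild architecture
#   model.set_weights(pickle.load(open(path + ".pkl", "rb")))
with open(path + ".pkl", "rb") as f:
    restored = pickle.load(f)
print(restored == weights)  # → True
```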
Please let me know how to save and load a SASRec model! | open | 2022-05-13T18:23:23Z | 2023-08-30T14:03:13Z | https://github.com/recommenders-team/recommenders/issues/1718 | [
"help wanted"
] | beomso0 | 2 |
praw-dev/praw | api | 1,090 | Submission stream returning duplicates | If you stream submissions from r/all, the stream returns duplicate copies of items. See this example code: https://github.com/Watchful1/Sketchpad/blob/master/streamTester.py
Streaming 5000 submissions resulted in 3508 duplicates, many 5 or 6 times each.
Increasing the size of the seen_attributes BoundedSet in stream_generator to some large number, I used 2000, fixed the problem, but that feels like a bandaid.
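Until the server-side behaviour is fixed, a client-side guard with a larger window does the same job as the bandaid. `deduplicate` below is a hypothetical helper, not PRAW API:

```python
from collections import OrderedDict

def deduplicate(stream, maxsize=2000):
    """Drop items whose id was seen among the last `maxsize` ids (sketch)."""
    seen = OrderedDict()
    for item_id in stream:
        if item_id in seen:
            continue
        seen[item_id] = None
        if len(seen) > maxsize:
            seen.popitem(last=False)  # evict oldest, like a bounded set
        yield item_id

dup_stream = ["a", "b", "a", "c", "b", "d"]
print(list(deduplicate(dup_stream)))  # → ['a', 'b', 'c', 'd']
```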
Originally [reported by u/boib](https://old.reddit.com/r/redditdev/comments/c5k2lx/anyone_notice_some_weirdness_with_praw/). | closed | 2019-06-26T04:03:13Z | 2019-07-01T15:41:35Z | https://github.com/praw-dev/praw/issues/1090 | [] | Watchful1 | 2 |
tableau/server-client-python | rest-api | 1,102 | [Type 1] Support `vizWidth` and `vizHeight` parameters of Query View PDF endpoint | ## Description
Currently these are not exposed anywhere. The `PDFRequestOptions` object is shared between the "Query View PDF" and the "Download Workbook PDF" endpoints; however, the workbook variant does not support filters, for example, and the query view variant has the vizWidth/vizHeight parameters. These are not exposed and cannot be set, to my knowledge.
| open | 2022-09-08T16:56:05Z | 2023-03-23T19:14:41Z | https://github.com/tableau/server-client-python/issues/1102 | [
"enhancement"
] | septatrix | 0 |
marcomusy/vedo | numpy | 505 | addSlider2D: TypeError: DestroyTimer argument 1: an integer is required (got type NoneType) | Hi @marcomusy,
Here is another error that I could not figure out by myself. I am still trying to play with the same data as I used in Issue #504, and this time I am playing with the slider.
Here is the code I used:
```Python
#!/usr/bin/env python3
import numpy as np
from vedo import TetMesh, show, screenshot, settings, Picture, buildLUT, Box, \
Plotter, Axes
from vedo.applications import Animation
import time as tm
def slider(widget, event):
region = widget.GetRepresentation().GetValue()
plot_mesh(region)
def plot_mesh(max_reg):
"""Plots the mesh based on the maximum region index"""
# This will get rid of the background Earth unit and air unit in the model
# which leaves us with the central part of the model
tet_tmp = tet.clone().threshold(name='cell_scalars', above=0, below=max_reg)
# So, we now cut the TetMesh object with a mesh (that Box object)
tet_tmp.cutWithMesh(box, wholeCells=True)
# And we need to convert it to a mesh object for later plotting
msh = tet_tmp.tomesh().lineWidth(1).lineColor('w')
# We need to build a look up table for our color bar, and now it supports
# using category names as labels instead of the numerical values
# This was implemented upon my request
lut_table = [
# Value, color, alpha, category
(12.2, 'dodgerblue', 1, 'C1'),
(15.2, 'skyblue', 1, 'C1-North'),
(16.2, 'lightgray', 1, 'Overburden'),
(18.2, 'yellow', 1, 'MFc'),
(19.2, 'gold', 1, 'MFb'),
(21.2, 'red', 1, 'PW'),
(23.2, 'palegreen', 1, 'BSMT1'),
(25.2, 'green', 1, 'BSMT2'),
]
lut = buildLUT(lut_table)
msh.cmap(lut, 'cell_scalars', on='cells')
# msh.cmap("coolwarm", 'cell_scalars', on='cells')
msh.addScalarBar3D(
categories=lut_table,
pos=(508000, 6416500, -1830),
title='Units',
titleSize=1.5,
sx=100,
sy=4000,
titleXOffset=-2,
)
zlabels = [(500, '500'), (0, '0'), (-500, '-500'), (-1000, '-1000'),
(-1500, '-1500')]
axes = Axes(msh,
xtitle='Easting (m)',
ytitle='Northing (m)',
ztitle='Elevation (m)',
xLabelSize=0.015,
xTitlePosition=0.65,
yTitlePosition=0.65,
yTitleOffset=-1.18,
yLabelRotation=90,
yLabelOffset=-1.6,
# yShiftAlongX=1,
zTitlePosition=0.85,
# zTitleOffset=0.04,
zLabelRotation=90,
zValuesAndLabels=zlabels,
# zShiftAlongX=-1,
axesLineWidth=3,
yrange=msh.ybounds(),
xTitleOffset=0.02,
# yzShift=1,
tipSize=0.,
yzGrid=True,
xyGrid=True,
gridLineWidth=5,
)
plt.show(msh, size=size)
# screenshot('model_mesh_vedo.png')
# from wand import image
# with image.Image(filename='model_mesh_vedo.png') as imag:
# imag.trim(color=None, fuzz=0)
# imag.save(filename='model_mesh_vedo.png')
# Do some settings
settings.useDepthPeeling=False # Useful to show the axes grid
font_name = 'Theemim'
settings.defaultFont = font_name
settings.multiSamples=8
# settings.useParallelProjection = True # avoid perspective parallax
# Create a TetMesh object form the vtk file
tet = TetMesh('final_mesh.vtk')
# This will get rid of the background Earth unit and air unit in the model
# which leaves us with the central part of the model
tet_tmp = tet.clone().threshold(name='cell_scalars', above=0, below=25)
msh = tet_tmp.tomesh().lineWidth(1).lineColor('w')
# Set the camera position
plt = Plotter()
plt.camera.SetPosition( [513381.314, 6406469.652, 6374.748] )
plt.camera.SetFocalPoint( [505099.133, 6415752.321, -907.462] )
plt.camera.SetViewUp( [-0.4, 0.318, 0.86] )
plt.camera.SetDistance( 14415.028 )
plt.camera.SetClippingRange( [7259.637, 23679.065] )
# Crop the entire mesh using a Box object (which is considered to be a mesh
# object in vedo)
# First build a Box object with its centers and dimensions
cent = [504700, 6416500, -615]
box = Box(pos=cent, size=(3000, 5000, 2430))
size = [3940, 2160]
# Add a slider to control the maximum region number in the mesh that we want
# to show
plt.addSlider2D(slider,
2, 25,
value=2,
pos=([0.1,0.1],
[0.4,0.1]),
title="Maximum region number",)
plt.show(interactive=1).close()
```
While dragging the slider, I get this error: "TypeError: DestroyTimer argument 1: an integer is required (got type NoneType)".
Any idea? Thanks again!
Xushan | closed | 2021-11-02T22:29:11Z | 2022-01-11T13:49:42Z | https://github.com/marcomusy/vedo/issues/505 | [] | XushanLu | 2 |
gradio-app/gradio | data-science | 10373 | Is there any possible way to specify the editability of columns or rows | Hello buddy,
I'm using the Dataframe component in Gradio to represent a CSV file. In some scenarios, some columns or rows should be editable and others read-only. Is there any way for me to do that? I just want to do it in the front end, not the back end.
Thanks.
| closed | 2025-01-16T06:26:25Z | 2025-01-17T16:38:04Z | https://github.com/gradio-app/gradio/issues/10373 | [
"pending clarification"
] | Yb2S3Man | 3 |
google-research/bert | tensorflow | 468 | Can I use low-level TF APIs to fine-tune BERT for my task? Do I have to use Estimators? | I am not clear about the optimizer in `optimization.create_optimizer(
total_loss, learning_rate, num_train_steps, num_warmup_steps, False)`. | closed | 2019-03-01T03:00:37Z | 2019-03-01T09:58:38Z | https://github.com/google-research/bert/issues/468 | [] | yumath | 1 |
flairNLP/flair | pytorch | 3,416 | [Question]: Pre-Tagging information in Sequence-Tagging? | ### Question
My specific use case:
I'm trying to solve an event-extraction task which I model as a sequence-tagging problem. So this event-tagger should be able to identify the event trigger, actors and objects in a given span. Now, I've got pretty reliable NER-tags which I would like as a additional information for my event-tagger to use as information (as for example, only PER and ORGs may be actors).
Is there a best practice to use this information? I'm thinking the easy way would be to put annotations inside the train/dev/test data so they can be part of the encoding? Is there a known best way to write these anntotations?
`Barack Obama went to New York`
`[PER] Barack Obama [/PER] went to [LOC] New York [/LOC]`
`Barack [B-PER] Obama [I-PER] went to New [B-LOC] York [I-LOC]`
I guess I'm asking less for a technical solution and more if there is an established way to do this or at least some experience?
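For what it's worth, the inline-marker variant can be generated mechanically from BIO-style NER tags. A minimal, hypothetical sketch (the bracket format follows the examples above; it is not an established Flair convention):

```python
def inline_ner_tags(tokens, bio_tags):
    """Turn BIO NER tags into inline bracket markers, e.g.
    ['Barack', 'Obama', 'went'] + ['B-PER', 'I-PER', 'O']
    -> ['[PER]', 'Barack', 'Obama', '[/PER]', 'went'].
    Assumes a well-formed BIO sequence."""
    out, open_type = [], None
    for tok, tag in zip(tokens, bio_tags):
        if tag.startswith("B-"):
            if open_type is not None:      # close a directly preceding entity
                out.append(f"[/{open_type}]")
            open_type = tag[2:]
            out.append(f"[{open_type}]")
        elif tag == "O" and open_type is not None:
            out.append(f"[/{open_type}]")
            open_type = None
        out.append(tok)
    if open_type is not None:              # entity runs to the end of the span
        out.append(f"[/{open_type}]")
    return out


print(" ".join(inline_ner_tags(
    "Barack Obama went to New York".split(),
    ["B-PER", "I-PER", "O", "O", "B-LOC", "I-LOC"])))
# -> [PER] Barack Obama [/PER] went to [LOC] New York [/LOC]
```

Whether such markers actually help will depend on the embeddings; with char-level embeddings the model at least sees them as ordinary tokens.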
(also to the devs: thank you for this awesome framework and the char-based embeddings, pretty much none of my research would be possible without Flair) | open | 2024-03-04T09:01:29Z | 2024-04-04T22:07:50Z | https://github.com/flairNLP/flair/issues/3416 | [
"question"
] | raykyn | 4 |
mwaskom/seaborn | data-science | 3593 | Discrepancy in seaborn.objects.Dodge groupby order | Hi, I would like to report a strange behavior in the `Move` object `Dodge`.
Seaborn version: 0.13.0
Matplotlib version: 3.8.2
Everything started because I wanted to play around with the `objects` namespace.
As dataset I use the penguins dataset; I drop both the NaNs and all the values that I do not consider outliers, i.e. everything between the 0.05 and 0.95 quantiles (for each combination of species and sex).
Here the code to replicate the dataset:
```
import pandas as pd
import seaborn as sns


def get_quantile_df(df, val_col, lower, upper) -> tuple:
lower_limit = df[val_col].quantile(lower)
upper_limit = df[val_col].quantile(upper)
sub_df = df[(df[val_col] <= lower_limit) | (df[val_col] >= upper_limit)]
return sub_df, lower_limit, upper_limit
def main():
penguins = sns.load_dataset("penguins")
penguins = penguins.dropna(how="any")
category = "species"
value="body_mass_g"
hue="sex"
lower_out_qt = 0.05
upper_out_qt = 0.95
outliers = None
for c in penguins[category].unique():
c_df = penguins[penguins[category] == c].copy()
for h in c_df[hue].unique():
hue_df = c_df[c_df[hue] == h].copy()
sub_df, l, u = get_quantile_df(hue_df, value, lower_out_qt, upper_out_qt)
outliers = sub_df.copy() if outliers is None else pd.concat([outliers, sub_df])
print(f"{c}-{h}: [{l}-{u}] -> {len(sub_df)}")
```
The print in the nested for loop is just to get an idea of how many points to expect.
Then I try to plot the outliers with the species on the x axis and body_mass_g on the y axis, as follows:
```
import seaborn.objects as so

(
so.Plot(penguins, x="species", y="body_mass_g", color="sex")
.add(so.Dot(marker="x"), so.Dodge(), data=outliers)
.save("test_figure_outliers1.png")
)
(
so.Plot(outliers, x="species", y="body_mass_g", color="sex")
.add(so.Dot(marker="x"), so.Dodge())
.save("test_figure_outliers2.png")
)
```
As you can see, the only difference is that in the first plot I pass the outliers dataset in the add layer, while in the second plot I use the outliers dataset directly at the plot level.
I expected the two resulting plots to be identical, but I get two different results, and I think this is caused by the ordering of the groupby operation in the `Dodge` move.
Attached are the two plots, in the same order as the code:


I found a solution on how to make the two plots identical, in the first plot code include also the `groupby` argument as follows:
```
(
so.Plot(penguins, x="species", y="body_mass_g", color="sex")
.add(so.Dot(marker="x"), so.Dodge(), data=outliers, groupby="sex")
.save("test_figure_outliers1.png")
)
```
The cause of the problem is that the first row of the outliers dataset is a "female" penguin, while in the original dataset the first row is a "male" penguin.
I can see that, without specifying anything, the groupby operation is executed on the new dataset, producing a possibly different order.
But I don't see why, when I specify the groupby variable at that level, the groupby order is instead computed on the `penguins` dataset.
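The dependence on first-seen order can be illustrated without seaborn at all — a toy sketch of how a grouper that enumerates categories in encounter order ends up assigning different dodge slots for the two dataframes (illustrative only, not seaborn internals):

```python
def first_seen_order(values):
    """Category order as a groupby with sort=False would see it."""
    seen = []
    for v in values:
        if v not in seen:
            seen.append(v)
    return seen


full_data_sex = ["Male", "Male", "Female", "Male", "Female"]   # first row: Male
outliers_sex  = ["Female", "Male", "Female"]                   # first row: Female

print(first_seen_order(full_data_sex))  # -> ['Male', 'Female']: Male gets the left slot
print(first_seen_order(outliers_sex))   # -> ['Female', 'Male']: Female gets the left slot
```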
As a solution I would suggest the following:
1) if nothing except the new dataset is specified and the new dataset contains the same color column, the groupby order is inherited from the Plot level (if specified there, otherwise recomputed on the original dataset);
2) if the groupby object is specified at that level and a new dataset is also provided, then the groupby should be executed on the new dataset rather than the original one.
This way, with solution one, the order of the components would be the same across all layers even when providing different sub-datasets at each level.
Otherwise the user has the second option to redefine the groupby operation per level. | open | 2023-12-14T15:38:57Z | 2023-12-19T12:05:22Z | https://github.com/mwaskom/seaborn/issues/3593 | [] | tiamilani | 6 |
pytest-dev/pytest-xdist | pytest | 187 | different tests collected between workers in Python 3.5 | I'm trying to run my tests via xdist in python 3.5 and it fails saying different tests were collected between gw0 and gw1. It works fine in python 2.7. This is the same issue as #149 but that issue does not explain the solution. | closed | 2017-07-14T14:46:13Z | 2017-08-07T19:44:49Z | https://github.com/pytest-dev/pytest-xdist/issues/187 | [
"question"
] | havok2063 | 4 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,397 | uc wont find element | this is my code:
```
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException
from flask import Flask, request
import threading
app = Flask(__name__)
def my_function(sitekey):
browser.execute_async_script(f"""
document.write(`<script src="https://hcaptcha.com/1/api.js"></script> <div id="captcha"></div>`);
let widget = null;
function waitForHCaptcha() {{
if (typeof hcaptcha !== 'undefined') {{
var element = document.getElementById('captcha');
var sitekey = '{sitekey}';
var widgetId = hcaptcha.render(element, {{
sitekey: sitekey
}});
widget = widgetId;
console.log(hcaptcha);
craft();
}} else {{
// hcaptcha is not yet available, wait and check again
setTimeout(waitForHCaptcha, 100);
}}
}}
function craft() {{
let resp = hcaptcha.getResponse(widget);
if (widget !== null && resp.length > 4) {{
document.write(`<p id="solvedkey">${{resp}}</p>`);
window.hcaptchaResponse = resp;
}} else {{
setTimeout(craft, 100);
}}
}}
waitForHCaptcha(); // start waiting for hcaptcha
""")
hektCaptchaPath = "C:\\Users\\Ken Pira\\Desktop\\DualBot\\hektCaptcha"
browser = None
def run_code(sitekey):
global browser
# Create a new browser instance
if browser is None:
options = Options()
# options.add_argument("--headless") # Run in headless mode
options.add_argument("--window-size=1400,900")
options.add_argument("--load-extension=" + hektCaptchaPath)
browser = webdriver.Chrome(service=Service(executable_path='path_to_undetected_chromedriver'), options=options)
browser.execute_script("window.open('about:blank', '_blank');")
browser.switch_to.window(browser.window_handles[-1])
try:
# Navigate to discord.com
browser.get('https://discord.com')
# Inject and run the provided JavaScript code
threading.Thread(target=my_function,args=(sitekey,)).start()
print("harkinian mode")
print("OpenHack 1"); wait = WebDriverWait(browser, 10); print("OpenHack 3")
result=wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, 'p#solvedkey'))).get_attribute("text")
print("OpenHack 2")
browser.close()
return result
except Exception as e:
print(str(e))
return 'An error occurred', 500
finally:
browser.quit()
@app.route('/captcha')
def captcha():
sitekey = request.args.get('sitekey')
if not sitekey:
return 'Missing sitekey', 400
try:
solved_content = run_code(sitekey)
return solved_content
except Exception as e:
print(str(e))
return 'An error occurred', 500
if __name__ == '__main__':
app.run(port=8085)
```
the body of the site after a while becomes like this:
```
<body><div id="captcha"><iframe src="https://newassets.hcaptcha.com/captcha/v1/d796875/static/hcaptcha.html#frame=checkbox&id=0jfi1gftxcw&host=discord.com&sentry=undefined&reportapi=https%3A%2F%2Faccounts.hcaptcha.com&recaptchacompat=true&custom=false&tplinks=on&sitekey=4c672d35-0701-42b2-88c3-78380b0db560&theme=light&origin=https%3A%2F%2Fdiscord.com" tabindex="0" frameborder="0" scrolling="no" title="Widget containing checkbox for hCaptcha security challenge" data-hcaptcha-widget-id="0jfi1gftxcw" data-hcaptcha-response="P1_eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.hKdwYXNza2V5xQhniUfmVxCyC6ejZmLLI31GjwREnvkfsIbdxldrb3Iil0ZpH6zrumnHtP6dAlGGBfJCfIENBsPdyRW1BcPiFK9UjenGa2Ac-7Ufd7a08oFK0_-bzcNKTiWsxD4EvfdctPVMLSBMWUjUjhP-r0NG2vTRL7m_YkSXaiMkNTPfommoqKkr0WwGVQH6jPF_QCR6PvhhZwMJgb9-PLXELAy-CjFivfBonr5ccaH_Z8SAxJDx4_LAz2eNBDoqn39lk_uxZsIml6EfFMYMgf-QgWOPua0j37F5h3Xpi_DXcy5WFrJyCu97TJ7UgzcFJY285UC7RPz7ESj79fzUI6Rgu-1OrPOu5ye4IjJlaKqh0A2RIL-P5zwu0EQ5N4qjB7w6ej5vFF1WeYCvH21MeAPzF3XuBuU_Z44hy0DUJnwyykLTH1Fzd0m6fuJpL0WKhmtUdfTAaw8pZCcY9YoYK6toiTBCs_GwPXTeOjVlCns63clffk9oqF5Kqru339bCeBT47ulprc4xIu03aqJVhO8gu2EaSU8umWZMLUy9zLzGX2195BOeniKu56Bn65ejTsQtxCjFlDiRUp2ZwnEPdso4OLdeKWFmWVihdmhND1fC6dE1YGx191hRV-MMPta2IB6Vf4wA42OOVRHX1KqVF6f6IiQ15YddW6nmwCDFgFYNRUCCynV4eo8xNiUxj7eXVWCPRpzK1P9nTOgg2qKHQ7nLlLylnXjcqtt_o0l09bAdv8f9b2sDNhpbkzOWRQP56L4_MMoYL5ud2_zBV-wskWvHTWeorSxgaVj7kQO1t131M5v27lidgximaZOKCkQCPMfFRKiHIXdQI7m5RCBt2me3_gJG1crt0BL78oBC7sR3CgZyGxQiRdNH9EGF4GmhkSWvqTipjcTjE4A_2sdT4FJgMe7o8g3PW9oj73kwdrdbXsNcrgWyLz_VUrj7E4rI1bVB4Ny-xULgQoCz3lPKCqLclEqmHXItp0cCuaokMDSVYYsqkl4ckWNI1iFrxbq-keLnJ_Rd-PbYkkR3YAxDuSoyDnhwZ2FqVUB1XFDxV6eKfbJZoGsQZ4ZtqyAXCugAmGZP7RSdeBu66RlNMmia6EXLspMX5TNfsIjG0S4y_51Q4ENZTuT3MsrvfKZh1K2ySrIG6HPn8x35iDUDUoZlMo0lHv0Sp441QkXpAyvDaQVWyF-NQg2z5wYWdVZzJVDGST2PT2dkN3khHLKwG_kckASmz0BdZLkYlJwsA_3hzew-JtkjOVVESKfHVYgA58jGwyn3zZNF20WAWPlSRo-Utxonlt79zozZhqcZsNNSuMYqNAGch8ov1qy-ZPKykLJbw-ccOao8lCqpx0zRIFUZecPFxBsFLHIZ9GRuU_-imUByb5BExJMfhZ-gVhcquXv9eQgZMOJ3e5FUImuPQQKHv
Punxa5qwtFH04duLDnnsgNc-aLxRrajbOQ35Q5KzMsS0S4HrekmANgY72jBwLN4-LS8cVLEJ5BPrurGQyQOwJJ2jYRGaQrZFGJLyPxOV3-ruo0b97yyNEfJD8vZORYTwcvRQj7b0cd9Eniy4PHPBWnJZcFBO3nFw_4g9H7uEf2E163zf4zE8hWwZaV59hoVMnK58pO9N7dl8Yps7Rrhec2DkJfcOnl7aX0rHrev-xqnr9YA-kB-nay8MfXV-RGG9zSdAY5DsIrRiiEUVpWvKzBHw-LtUkyqOvgPAE_9len0Y9Walrc_Tm9fJc4Da6NcPs0QZBrgsnEmto5n_roKieCfpu14WTHJOSzMswJXLC1AV4a4GwLjHrIHq2wxJz1aw_XYXW2JEQvPgYlt6u_HSeLo0QC-7nAb0NMWzG-0V-4BUtAZZcxSPknZ9Hw8w1zu0DOYwkY7lY_6I3rjpfih02vs8CXVoyd7TR2v9g2VTEwaASv-QZYYPHOMF59n7YuoKuhozEPGsV5dYdkXjTZ8IvuoAoQbbVhMP7Qc8CQymaB7A0DBumaSfeltjwR3DhUPLxTXC0oY5bX0iPJZEMKMANm45izerRP_O3EptQ3oQ5rCASZxHFOQZ6bAZnG_UlVMcRXB2NRYyaQ6A6zDrvAUaAMCminUu-6RW9RsfY4n0Ew6T-Ux24LUCxNA7CIB0tyZUmaLKoAmwE5LNp7Sxsi2YwAT-MEIT-iDnvQyBLM--rvVXMsBvymLdE-TBLX-Erd7hSflgJYZSw243oGbm_p5mI7Z62cLjop-wZgxQl0sB-lMe32o12pB0TTqVbvpitR99D4aCurRtpJIuWXY7k2KHn6922luhIrc5jkLB7_J3Fjy7Q20r3pVoZK5FVSizqzPdoU_ovVNXoFhuTjv2PLUH14CcfoQdJVcYn2J75U615mo2t8GZpzZHvPSDqMpiKU7sZs7JqAfi1UbCq-l8nEUt8quKf7ukC2BJaxO4qF2ksYHMJjttTH6KNxyMZ8_wnJ9kkFow0VhDuw0Oyy7QwQ9BuwWOQu8kJmpKERfw5eYpg-NgYxLLLHQXqaw6UxDQjpw21xcluCz7FZ9vtg-DDkqs462tE1klC8Pq5tc9qf_niH2ri7q6nnQ8D5ICEqb0Amksf16OaefM_B5AojPf2crgJe9qEhoXJaYrnNptKhZHYPY7wkDjNaGBud-lVy7qpxcY50mTM-NWcOeiAPr1ejLvGx1aGKrAa-rMrckU2Un94XhDBBHobbMHsLkpsvQJMcBfCmh7XHs8LZik_5OzDn_yGRd6p5jNyTSdWlyso-bj5Se4E5bxJNsTuZSlDJB_2hKNYeSGVRegYh040IiTQ3PiCBjC61iFsZ6ML9l7HR5W7Z4ofNiwlCW589GXF07ei68ayGuuFAc07XEqU4CgPMkOJA0GUTCNbftnkuXadwfoj5bo2V4cM5ksyCBqHNoYXJkX2lkzgMxg2-icGQA.qKGHd_vYkXhErnhruL5cUAifnfRpfI50zl-2S3DP5A4" style="width: 303px; height: 78px; overflow: hidden;"></iframe><textarea id="g-recaptcha-response-0jfi1gftxcw" name="g-recaptcha-response" style="display: none;"></textarea><textarea id="h-captcha-response-0jfi1gftxcw" name="h-captcha-response" style="display: none;"></textarea></div><div aria-hidden="true" style="background-color: rgb(255, 255, 255); border: 1px solid rgb(215, 215, 215); box-shadow: rgba(0, 0, 0, 0.1) 0px 0px 4px; border-radius: 4px; left: 
auto; top: -10000px; z-index: -2147483648; position: absolute; transition: opacity 0.15s ease-out 0s; opacity: 0; visibility: hidden; display: block;"><div style="position: relative; z-index: 1; width: 400px; height: 600px;"><iframe src="https://newassets.hcaptcha.com/captcha/v1/d796875/static/hcaptcha.html#frame=challenge&id=0jfi1gftxcw&host=discord.com&sentry=undefined&reportapi=https%3A%2F%2Faccounts.hcaptcha.com&recaptchacompat=true&custom=false&tplinks=on&sitekey=4c672d35-0701-42b2-88c3-78380b0db560&theme=light&origin=https%3A%2F%2Fdiscord.com" frameborder="0" scrolling="no" title="Main content of the hCaptcha challenge" style="border: 0px; z-index: 2000000000; position: relative; width: 400px; height: 600px;"></iframe></div><div style="width: 100%; height: 100%; position: fixed; pointer-events: none; top: 0px; left: 0px; z-index: 0; background-color: rgb(255, 255, 255); opacity: 0.05; cursor: default;"></div><div style="border-width: 11px; position: absolute; pointer-events: none; margin-top: -11px; z-index: 1; right: 100%; top: 26px;"><div style="border-width: 10px; border-style: solid; border-color: transparent rgb(255, 255, 255) transparent transparent; position: relative; top: 10px; z-index: 1; display: block;"></div><div style="border-width: 11px; border-style: solid; border-color: transparent rgb(215, 215, 215) transparent transparent; position: relative; top: -11px; z-index: 0; display: block;"></div></div></div><p 
id="solvedkey">P1_eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.hKdwYXNza2V5xQhniUfmVxCyC6ejZmLLI31GjwREnvkfsIbdxldrb3Iil0ZpH6zrumnHtP6dAlGGBfJCfIENBsPdyRW1BcPiFK9UjenGa2Ac-7Ufd7a08oFK0_-bzcNKTiWsxD4EvfdctPVMLSBMWUjUjhP-r0NG2vTRL7m_YkSXaiMkNTPfommoqKkr0WwGVQH6jPF_QCR6PvhhZwMJgb9-PLXELAy-CjFivfBonr5ccaH_Z8SAxJDx4_LAz2eNBDoqn39lk_uxZsIml6EfFMYMgf-QgWOPua0j37F5h3Xpi_DXcy5WFrJyCu97TJ7UgzcFJY285UC7RPz7ESj79fzUI6Rgu-1OrPOu5ye4IjJlaKqh0A2RIL-P5zwu0EQ5N4qjB7w6ej5vFF1WeYCvH21MeAPzF3XuBuU_Z44hy0DUJnwyykLTH1Fzd0m6fuJpL0WKhmtUdfTAaw8pZCcY9YoYK6toiTBCs_GwPXTeOjVlCns63clffk9oqF5Kqru339bCeBT47ulprc4xIu03aqJVhO8gu2EaSU8umWZMLUy9zLzGX2195BOeniKu56Bn65ejTsQtxCjFlDiRUp2ZwnEPdso4OLdeKWFmWVihdmhND1fC6dE1YGx191hRV-MMPta2IB6Vf4wA42OOVRHX1KqVF6f6IiQ15YddW6nmwCDFgFYNRUCCynV4eo8xNiUxj7eXVWCPRpzK1P9nTOgg2qKHQ7nLlLylnXjcqtt_o0l09bAdv8f9b2sDNhpbkzOWRQP56L4_MMoYL5ud2_zBV-wskWvHTWeorSxgaVj7kQO1t131M5v27lidgximaZOKCkQCPMfFRKiHIXdQI7m5RCBt2me3_gJG1crt0BL78oBC7sR3CgZyGxQiRdNH9EGF4GmhkSWvqTipjcTjE4A_2sdT4FJgMe7o8g3PW9oj73kwdrdbXsNcrgWyLz_VUrj7E4rI1bVB4Ny-xULgQoCz3lPKCqLclEqmHXItp0cCuaokMDSVYYsqkl4ckWNI1iFrxbq-keLnJ_Rd-PbYkkR3YAxDuSoyDnhwZ2FqVUB1XFDxV6eKfbJZoGsQZ4ZtqyAXCugAmGZP7RSdeBu66RlNMmia6EXLspMX5TNfsIjG0S4y_51Q4ENZTuT3MsrvfKZh1K2ySrIG6HPn8x35iDUDUoZlMo0lHv0Sp441QkXpAyvDaQVWyF-NQg2z5wYWdVZzJVDGST2PT2dkN3khHLKwG_kckASmz0BdZLkYlJwsA_3hzew-JtkjOVVESKfHVYgA58jGwyn3zZNF20WAWPlSRo-Utxonlt79zozZhqcZsNNSuMYqNAGch8ov1qy-ZPKykLJbw-ccOao8lCqpx0zRIFUZecPFxBsFLHIZ9GRuU_-imUByb5BExJMfhZ-gVhcquXv9eQgZMOJ3e5FUImuPQQKHvPunxa5qwtFH04duLDnnsgNc-aLxRrajbOQ35Q5KzMsS0S4HrekmANgY72jBwLN4-LS8cVLEJ5BPrurGQyQOwJJ2jYRGaQrZFGJLyPxOV3-ruo0b97yyNEfJD8vZORYTwcvRQj7b0cd9Eniy4PHPBWnJZcFBO3nFw_4g9H7uEf2E163zf4zE8hWwZaV59hoVMnK58pO9N7dl8Yps7Rrhec2DkJfcOnl7aX0rHrev-xqnr9YA-kB-nay8MfXV-RGG9zSdAY5DsIrRiiEUVpWvKzBHw-LtUkyqOvgPAE_9len0Y9Walrc_Tm9fJc4Da6NcPs0QZBrgsnEmto5n_roKieCfpu14WTHJOSzMswJXLC1AV4a4GwLjHrIHq2wxJz1aw_XYXW2JEQvPgYlt6u_HSeLo0QC-7nAb0NMWzG-0V-4BUtAZZcxSPknZ9Hw8w1zu0DOYwkY7lY_6I3rjpfih02vs8CXVoyd7TR2v9g2VTEwaASv-QZYYPHOMF59n7YuoK
uhozEPGsV5dYdkXjTZ8IvuoAoQbbVhMP7Qc8CQymaB7A0DBumaSfeltjwR3DhUPLxTXC0oY5bX0iPJZEMKMANm45izerRP_O3EptQ3oQ5rCASZxHFOQZ6bAZnG_UlVMcRXB2NRYyaQ6A6zDrvAUaAMCminUu-6RW9RsfY4n0Ew6T-Ux24LUCxNA7CIB0tyZUmaLKoAmwE5LNp7Sxsi2YwAT-MEIT-iDnvQyBLM--rvVXMsBvymLdE-TBLX-Erd7hSflgJYZSw243oGbm_p5mI7Z62cLjop-wZgxQl0sB-lMe32o12pB0TTqVbvpitR99D4aCurRtpJIuWXY7k2KHn6922luhIrc5jkLB7_J3Fjy7Q20r3pVoZK5FVSizqzPdoU_ovVNXoFhuTjv2PLUH14CcfoQdJVcYn2J75U615mo2t8GZpzZHvPSDqMpiKU7sZs7JqAfi1UbCq-l8nEUt8quKf7ukC2BJaxO4qF2ksYHMJjttTH6KNxyMZ8_wnJ9kkFow0VhDuw0Oyy7QwQ9BuwWOQu8kJmpKERfw5eYpg-NgYxLLLHQXqaw6UxDQjpw21xcluCz7FZ9vtg-DDkqs462tE1klC8Pq5tc9qf_niH2ri7q6nnQ8D5ICEqb0Amksf16OaefM_B5AojPf2crgJe9qEhoXJaYrnNptKhZHYPY7wkDjNaGBud-lVy7qpxcY50mTM-NWcOeiAPr1ejLvGx1aGKrAa-rMrckU2Un94XhDBBHobbMHsLkpsvQJMcBfCmh7XHs8LZik_5OzDn_yGRd6p5jNyTSdWlyso-bj5Se4E5bxJNsTuZSlDJB_2hKNYeSGVRegYh040IiTQ3PiCBjC61iFsZ6ML9l7HR5W7Z4ofNiwlCW589GXF07ei68ayGuuFAc07XEqU4CgPMkOJA0GUTCNbftnkuXadwfoj5bo2V4cM5ksyCBqHNoYXJkX2lkzgMxg2-icGQA.qKGHd_vYkXhErnhruL5cUAifnfRpfI50zl-2S3DP5A4</p></body>
```
but it can't find the `<p id="solvedkey">` element | open | 2023-07-15T22:40:42Z | 2023-07-17T21:26:36Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1397 | [] | x3n1al | 2 |
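As a side note, once the page source contains the token, it can be recovered without a locator at all — a minimal sketch that pulls the `solvedkey` paragraph out of raw HTML with a regex (an alternative to the failing wait, assuming the element has actually been written; in Selenium one would feed it `browser.page_source`):

```python
import re


def extract_solved_key(html: str):
    """Return the text of <p id="solvedkey">...</p>, or None if absent."""
    match = re.search(r'<p\s+id="solvedkey">(.*?)</p>', html, re.S)
    return match.group(1) if match else None


print(extract_solved_key('<body><p id="solvedkey">P1_abc123</p></body>'))  # -> P1_abc123
print(extract_solved_key('<body></body>'))                                 # -> None
```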
dropbox/PyHive | sqlalchemy | 220 | create hive table and get 'timestamp is not supported' | **I have used HiveDate and HiveTimestamp but still get the error:**
TExecuteStatementResp(status=TStatus(statusCode=3, infoMessages=['*org.apache.hive.service.cli.HiveSQLException:Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask.
**My code is:**
```python
import sqlalchemy
from sqlalchemy import Column, MetaData, Table
from pyhive.sqlalchemy_hive import HiveDate, HiveDecimal, HiveTimestamp

# engine is a Hive SQLAlchemy engine created elsewhere
cols = []
cols.append(Column('t_int', sqlalchemy.INT))
cols.append(Column('t_decimal', HiveDecimal))
cols.append(Column('t_boolean', sqlalchemy.BOOLEAN))
cols.append(Column('t_smallint', sqlalchemy.SMALLINT))
cols.append(Column('t_date', HiveDate))
cols.append(Column('t_timestamp', HiveTimestamp))
table = Table('gyy_test_table', MetaData(bind=engine), *cols, schema='default')
table.drop(checkfirst=True)
table.create()
``` | open | 2018-07-03T12:41:06Z | 2018-07-03T12:41:06Z | https://github.com/dropbox/PyHive/issues/220 | [] | glorialove323 | 0 |
sigmavirus24/github3.py | rest-api | 1,187 | Can we expose the `timeout` parameter to `Repository.create_tree()` to create large tree objects? | Currently, when trying to create a tree with large data there is a timeout error:
```python3
import github3
gh = github3.login(token=<token>)
repo = gh.repository(<org>, <repo>)
...
# Make big tree_data and tree_sha here.
...
tree = repo.create_tree(tree_data, tree_sha)
```
> github3.exceptions.ConnectionError: <class 'requests.exceptions.ReadTimeout'>: A connection-level exception occurred: HTTPSConnectionPool(host='api.github.com', port=443): Read timed out. (read timeout=10)
Specifically, that `tree_data` object has 1610 blobs and is about 300 KB serialized.
Manually adding a `timeout=60` to the `self._post()` in that function: https://github.com/sigmavirus24/github3.py/blob/62367c9fd7d3e2232c3a2d3c7751759fc8e404b8/src/github3/repos/repo.py#L1392
allows the tree to successfully be created.
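Until such a parameter exists, the patch described above can be applied from the outside without editing the library — a generic, hypothetical helper that injects a default keyword into any method (demonstrated on a dummy object, not github3 itself; whether `_post` accepts `timeout` as a keyword should be checked against the installed version):

```python
import functools


def patch_default_kwarg(obj, method_name, **defaults):
    """Rebind obj.method so the given keyword defaults apply
    unless the caller overrides them explicitly."""
    original = getattr(obj, method_name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        for key, value in defaults.items():
            kwargs.setdefault(key, value)
        return original(*args, **kwargs)

    setattr(obj, method_name, wrapper)


class DummySession:
    def _post(self, url, timeout=10):
        return timeout  # echo the effective timeout for demonstration


session = DummySession()
patch_default_kwarg(session, "_post", timeout=60)
print(session._post("https://api.github.com/..."))  # -> 60
```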
My goal is to not get the timeout when executing `create_tree`. Would it be a helpful addition to `github3` if that timeout could be specified without patching the `_post` function? If not, would a PR that adds `timeout` to the `create_tree` function contract be welcome? | open | 2024-07-25T19:59:51Z | 2024-07-25T19:59:51Z | https://github.com/sigmavirus24/github3.py/issues/1187 | [] | rduve | 0 |
benbusby/whoogle-search | flask | 124 | [FEATURE] Password protected search engine | Hi,
I think it would be a good option to add password protection to the search engine. I would like to run Whoogle on a VPS and only allow people who know the password to use it.
Thanks
| closed | 2020-09-11T12:41:23Z | 2023-10-24T11:57:22Z | https://github.com/benbusby/whoogle-search/issues/124 | [
"enhancement"
] | joan-carles | 4 |
noirbizarre/flask-restplus | api | 541 | Can it declare more than one model? | I declare 2 models like this
```
fields = api.model('MyModel', {
'id_siswa': fields.String(),
'nama_siswa': fields.String(),
'kelas': fields.String(),
'hasil': fields.List(fields.Integer),
'id_penilai': fields.String(),
'nama_penilai':fields.String(),
})
indicators = api.model('ModelIndikator', {
'sikap_spritual': fields.String(),
'sikap_sosial': fields.String(),
})
```
with `@api.doc()`
but I get this error:
`Traceback (most recent call last): File "app.py", line 40, in <module> 'sikap_spritual': fields.String(), AttributeError: 'Model' object has no attribute 'String'` | closed | 2018-10-18T09:53:19Z | 2018-10-19T03:24:49Z | https://github.com/noirbizarre/flask-restplus/issues/541 | [] | kafey | 2 |
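Worth noting about the snippet above: the first model is assigned to a variable named `fields`, which shadows the `flask_restplus.fields` module, so by the second `api.model` call `fields` is a `Model`, not the module — exactly what the traceback says. A dependency-free stand-in reproduces the mechanism (the classes here are fakes, not flask-restplus):

```python
class FakeFieldsModule:
    """Stand-in for flask_restplus.fields."""
    @staticmethod
    def String():
        return "a string field"


class FakeModel(dict):
    """Stand-in for the Model returned by api.model()."""


def reproduce():
    fields = FakeFieldsModule                    # like: from flask_restplus import fields
    first = FakeModel(id_siswa=fields.String())  # works: fields is still the module
    fields = first                               # the bug: model reuses the name `fields`
    try:
        FakeModel(sikap_spritual=fields.String())
    except AttributeError as exc:
        return str(exc)
    return "no error"


print(reproduce())  # -> 'FakeModel' object has no attribute 'String'
```

Renaming the first variable (e.g. `my_model = api.model('MyModel', ...)`) should let both declarations work.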
jupyter-incubator/sparkmagic | jupyter | 397 | Update the environment variables of the SparkSubmit process (yarn spark client spawned by Livy Server) | I need to update the environment variables of the SparkSubmit process (I need to add PYSPARK_PYTHON to support using a conda environment).
However, I can't figure out how to do this (apart from updating the global $SPARK_HOME/conf/spark-env.sh). Can I do this through configuration, or does it need a PR (which I would be willing to contribute)?
piskvorky/gensim | data-science | 3,268 | Can't suppress lifecycle events | #### Problem description
I'm trying to use gensim inside another tool that is using `logging` module and want to suppress logging messages from gensim.
`gensim.models.word2vec.logger.level = logging.ERROR` removes the training progress but the lifecycle messages still appear.
According to https://radimrehurek.com/gensim/utils.html#gensim.utils.SaveLoad.add_lifecycle_event, `model.lifecycle_events = None` should work but it does nothing.
Also that instruction won't suppress the `created` event because there is no way to set the variable before creating the object.
#### Steps/code/corpus to reproduce
```python
from gensim.test.utils import common_texts
from gensim.models import Word2Vec
import gensim
import logging
import sys
logger = logging.getLogger()
logger.handlers = []
logger.setLevel(logging.INFO)
h = logging.StreamHandler(sys.stderr)
h.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
logger.addHandler(h)
gensim.models.word2vec.logger.level = logging.ERROR
model = Word2Vec(sentences=None, vector_size=100, window=5, min_count=0, workers=1)
model.lifecycle_events = None
model.build_vocab(common_texts)
model.train(common_texts, total_examples=100000, epochs=1)
```
produces:
```
2021-11-16 12:42:00,680 - INFO - Word2Vec lifecycle event {'params': 'Word2Vec(vocab=0, vector_size=100, alpha=0.025)', 'datetime': '2021-11-16T12:42:00.680299', 'gensim': '4.1.2', 'python': '3.8.5 (default, Sep 4 2020, 07:30:14) \n[GCC 7.3.0]', 'platform': 'Linux-4.15.0-151-generic-x86_64-with-glibc2.10', 'event': 'created'}
2021-11-16 12:42:00,680 - INFO - Word2Vec lifecycle event {'msg': 'effective_min_count=0 retains 12 unique words (100.0%% of original 12, drops 0)', 'datetime': '2021-11-16T12:42:00.680577', 'gensim': '4.1.2', 'python': '3.8.5 (default, Sep 4 2020, 07:30:14) \n[GCC 7.3.0]', 'platform': 'Linux-4.15.0-151-generic-x86_64-with-glibc2.10', 'event': 'prepare_vocab'}
2021-11-16 12:42:00,680 - INFO - Word2Vec lifecycle event {'msg': 'effective_min_count=0 leaves 29 word corpus (100.0%% of original 29, drops 0)', 'datetime': '2021-11-16T12:42:00.680634', 'gensim': '4.1.2', 'python': '3.8.5 (default, Sep 4 2020, 07:30:14) \n[GCC 7.3.0]', 'platform': 'Linux-4.15.0-151-generic-x86_64-with-glibc2.10', 'event': 'prepare_vocab'}
2021-11-16 12:42:00,681 - INFO - Word2Vec lifecycle event {'msg': 'downsampling leaves estimated 3.5001157321504532 word corpus (12.1%% of prior 29)', 'datetime': '2021-11-16T12:42:00.681521', 'gensim': '4.1.2', 'python': '3.8.5 (default, Sep 4 2020, 07:30:14) \n[GCC 7.3.0]', 'platform': 'Linux-4.15.0-151-generic-x86_64-with-glibc2.10', 'event': 'prepare_vocab'}
2021-11-16 12:42:00,681 - INFO - Word2Vec lifecycle event {'update': False, 'trim_rule': 'None', 'datetime': '2021-11-16T12:42:00.681958', 'gensim': '4.1.2', 'python': '3.8.5 (default, Sep 4 2020, 07:30:14) \n[GCC 7.3.0]', 'platform': 'Linux-4.15.0-151-generic-x86_64-with-glibc2.10', 'event': 'build_vocab'}
2021-11-16 12:42:00,682 - INFO - Word2Vec lifecycle event {'msg': 'training model with 1 workers on 12 vocabulary and 100 features, using sg=0 hs=0 sample=0.001 negative=5 window=5 shrink_windows=True', 'datetime': '2021-11-16T12:42:00.682030', 'gensim': '4.1.2', 'python': '3.8.5 (default, Sep 4 2020, 07:30:14) \n[GCC 7.3.0]', 'platform': 'Linux-4.15.0-151-generic-x86_64-with-glibc2.10', 'event': 'train'}
2021-11-16 12:42:00,683 - INFO - Word2Vec lifecycle event {'msg': 'training on 29 raw words (3 effective words) took 0.0s, 2185 effective words/s', 'datetime': '2021-11-16T12:42:00.683442', 'gensim': '4.1.2', 'python': '3.8.5 (default, Sep 4 2020, 07:30:14) \n[GCC 7.3.0]', 'platform': 'Linux-4.15.0-151-generic-x86_64-with-glibc2.10', 'event': 'train'}
```
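The lifecycle messages are emitted by loggers in other gensim modules (`gensim.utils` etc.), so raising the level of `gensim.models.word2vec.logger` alone cannot reach them. A plain logging-hierarchy approach — not a gensim-specific API — is to raise the level of the parent `"gensim"` logger, which every `gensim.*` child with an unset level inherits:

```python
import logging

logging.getLogger("gensim").setLevel(logging.ERROR)

# Every child logger with an unset level now inherits ERROR:
child = logging.getLogger("gensim.utils")
print(child.getEffectiveLevel() == logging.ERROR)  # -> True
print(child.isEnabledFor(logging.INFO))            # -> False
```

Because this takes effect before any model object exists, it should also cover the `created` event.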
#### Versions
```
Linux-4.15.0-151-generic-x86_64-with-glibc2.10
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0]
Bits 64
NumPy 1.18.5
SciPy 1.7.1
gensim 4.1.2
FAST_VERSION 1
```
| closed | 2021-11-16T12:50:38Z | 2021-11-17T12:22:16Z | https://github.com/piskvorky/gensim/issues/3268 | [] | ZJaume | 2 |
pytorch/vision | machine-learning | 8,071 | How to tell if Faster RCNN Detection model is overfitting | I'm confused as to how I can tell if the Faster RCNN Detection model I'm training is overfitting or not given that the validation loss is not computed in the `evaluate` function seen [here](https://github.com/pytorch/vision/blob/main/references/detection/engine.py#L75C1-L115C26) and below.
Any help would be greatly appreciated.
```
@torch.inference_mode()
def evaluate(model, data_loader, device):
n_threads = torch.get_num_threads()
# FIXME remove this and make paste_masks_in_image run on the GPU
torch.set_num_threads(1)
cpu_device = torch.device("cpu")
model.eval()
metric_logger = utils.MetricLogger(delimiter=" ")
header = "Test:"
coco = get_coco_api_from_dataset(data_loader.dataset)
iou_types = _get_iou_types(model)
coco_evaluator = CocoEvaluator(coco, iou_types)
for images, targets in metric_logger.log_every(data_loader, 100, header):
images = list(img.to(device) for img in images)
if torch.cuda.is_available():
torch.cuda.synchronize()
model_time = time.time()
outputs = model(images)
outputs = [{k: v.to(cpu_device) for k, v in t.items()} for t in outputs]
model_time = time.time() - model_time
res = {target["image_id"]: output for target, output in zip(targets, outputs)}
evaluator_time = time.time()
coco_evaluator.update(res)
evaluator_time = time.time() - evaluator_time
metric_logger.update(model_time=model_time, evaluator_time=evaluator_time)
# gather the stats from all processes
metric_logger.synchronize_between_processes()
print("Averaged stats:", metric_logger)
coco_evaluator.synchronize_between_processes()
# accumulate predictions from all images
coco_evaluator.accumulate()
coco_evaluator.summarize()
torch.set_num_threads(n_threads)
return coco_evaluator
``` | open | 2023-10-27T00:03:39Z | 2024-01-16T14:49:37Z | https://github.com/pytorch/vision/issues/8071 | [] | 1andDone | 2 |
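Since `evaluate` reports COCO mAP rather than a loss, one pragmatic signal is to record train-split and validation-split mAP per epoch and watch for divergence — validation dropping while train keeps climbing. A toy, framework-free sketch of that check (names are illustrative, not torchvision API):

```python
def first_overfit_epoch(train_map, val_map):
    """Return the first epoch index where validation mAP falls
    while train mAP still improves, or None if that never happens."""
    for i in range(1, min(len(train_map), len(val_map))):
        if val_map[i] < val_map[i - 1] and train_map[i] > train_map[i - 1]:
            return i
    return None


train = [0.20, 0.45, 0.60, 0.72, 0.80]
val   = [0.25, 0.40, 0.48, 0.46, 0.41]
print(first_overfit_epoch(train, val))  # -> 3
```

Alternatively, actual validation losses can be obtained by running the model in train mode under `torch.no_grad()` on the validation loader, since torchvision detection models return the loss dict only in training mode — though that changes evaluation semantics (e.g. batch-norm behavior) and is worth doing carefully.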
django-import-export/django-import-export | django | 1,764 | The error message for invalid column names is misleading | **Describe the bug**
The error message for invalid column names is misleading. It also breaks the interface because `FieldError` is raised and thrown from the process, ignoring the `raise_error` flag.
**To Reproduce**
- See test in PR
**Versions (please complete the following information):**
- Django Import Export: 4.0.0 rc0
- Python 3.11
- Django 5.0.2
**Expected behavior**
A meaningful error message and can be controlled via `raise_errors`.
| closed | 2024-02-29T20:22:30Z | 2024-03-13T10:15:26Z | https://github.com/django-import-export/django-import-export/issues/1764 | [
"bug",
"v4"
] | matthewhegarty | 0 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 690 | Model overfitting, smooth triplet margin loss | Hi there!
Thank you for the awesome library!
I'm currently working on training a model using the CARS196 dataset with the following parameters:
```python
distance = distances.CosineSimilarity()
reducer = reducers.ThresholdReducer(low=0)
loss_func = losses.TripletMarginLoss(margin=0.2, distance=distance, reducer=reducer, smooth_loss=True)
mining_func = miners.TripletMarginMiner(margin=0.2, distance=distance, type_of_triplets="all")
```
Initially, I didn't include the smooth_loss parameter, which caused the loss to get stuck at the margin of 0.2. However, after setting smooth_loss=True, I encountered an overfitting issue. Do you think the margin and type of triplets I've used are appropriate to begin with? Should I consider adjusting them?
Additionally, I'm using the VIT base 16 224 model and freezing the early layers to reduce parameters. Do you see any mistakes in my approach, or do you have any suggestions on what I should try next? I think It is possible to achieve at least Precision@1 of 0.8. Currently, I'm at 0.6 with some overfitting. | open | 2024-03-23T05:11:37Z | 2024-04-01T14:19:16Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/690 | [] | taaresh7 | 2 |
axnsan12/drf-yasg | django | 358 | Best practice for documenting permissions | Hi,
AFAIU there are two general use-cases for schema generation - schema for user, in which case I should use `get_schema_view(public=False,)` and second use-case - schema as API documentation, in which case I should use `get_schema_view(public=True,)`.
Probably permissions documentation is not important in first case - use will see already filtered API subset, but is important in second case.
I'm generating documentation for project with lots of custom permissions. It's strange that there are view, fields and paginator inspectors, but no permissions inspectors.
I was wondering if there is some best-practice or recommended approach for this.
At the moment I use following hack, which does the job, but feels wrong:
```
class ViewInspector(SwaggerAutoSchema):
"""View inspector with some project-specific logic."""
def get_summary_and_description(self):
"""Return summary and description extended with permission docs."""
summary, description = super().get_summary_and_description()
permissions_description = self._get_permissions_description()
if permissions_description:
description += permissions_description
return summary, description
def _get_permissions_description(self):
permission_descriptions = []
for permission_class in getattr(self.view, 'permission_classes', []):
if hasattr(permission_class, 'get_description'):
permission_descriptions.append(
permission_class.get_description(self.view))
else:
permission_descriptions.append(
permission_class.__doc__.replace('\n', ' ').strip())
if permission_descriptions:
return '\n**Permissions**:\n' + '\n'.join(
'+ ' + description for description in permission_descriptions
)
else:
return None
``` | closed | 2019-05-01T17:19:50Z | 2022-11-22T16:38:05Z | https://github.com/axnsan12/drf-yasg/issues/358 | [] | K0Te | 6 |
aio-libs/aiopg | sqlalchemy | 471 | Is there a way to wrap all transactions in test to rollback them at the end of a test? | I use aiopg.sa.create_engine and I'm looking for a way to wrap all transactions in my tests to rollback them at the end of each test to remove all data.
Is it possible? | closed | 2018-05-03T14:51:38Z | 2018-05-04T11:58:57Z | https://github.com/aio-libs/aiopg/issues/471 | [] | GregEremeev | 3 |
SYSTRAN/faster-whisper | deep-learning | 474 | adding initial_prompt is changing the segment duration | For the same audio without inital_prompt, the segments are smaller and more accurate
[0.00s -> 6.20s] text ...
[6.30s -> 10.80s] text ...
[11.30s -> 13.80s] text ...
[13.90s -> 18.60s] text ...
[20.60s -> 23.80s] text ...
[27.88s -> 30.88s] text ...
[31.48s -> 36.98s] text ...
[37.08s -> 39.08s] text ...
[39.98s -> 42.88s] text ...
[44.48s -> 50.68s] text ...
-----
But if I provide some words as inital_prompt the the segments become bigger and inaccurate.
[0.00s -> 31.38s]
[31.58s -> 39.06s]
[40.06s -> 50.54s]
[52.34s -> 56.54s] | closed | 2023-09-14T08:45:05Z | 2024-11-14T13:59:44Z | https://github.com/SYSTRAN/faster-whisper/issues/474 | [] | abdulnim | 3 |
python-visualization/folium | data-visualization | 1,581 | Dynamization in Folium | **Describe the solution you'd like**
I would like to update marker data live based on actual data. Same goes for the data in the marker pop-up. I would also like to draw circles and maybe add or remove markers based on live data.
**Describe alternatives you've considered**
Writing in JavaScript with Leaflet.js itself, but my JavaScript is not good.
**Additional context**
Maybe there is already another Python package that supports this, but I do not think so, if there is I would appreciate if anyone could let me know!
**Implementation**
I am not sure how this could be implemented, since my Python and JavaScript knowledge is limited.
Thanks for developing Folium, makes it a lot easier to use for non JavaScript people!
| closed | 2022-04-07T18:14:23Z | 2023-01-11T16:55:09Z | https://github.com/python-visualization/folium/issues/1581 | [] | Theagainmen | 15 |
QingdaoU/OnlineJudge | django | 426 | Cause of the "system error" result | Hello, and thank you for providing such a great platform.
I have set up the platform, but after submitting a test solution the result shown is "system error".
Logging in with a regular user account and submitting also produces "system error";
in fact, that is the only result I ever get. May I ask what the cause is, and how I should handle it? Thank you.
<img width="210" alt="image" src="https://user-images.githubusercontent.com/75984915/185625350-cb3d2cfa-1c13-4636-9227-dc79ee0fbc77.png">
| open | 2022-08-19T13:10:18Z | 2024-10-07T07:14:06Z | https://github.com/QingdaoU/OnlineJudge/issues/426 | [] | r07341010 | 1 |
plotly/dash | jupyter | 2,969 | Remove all JavaScript warnings from our libraries | See https://github.com/plotly/dash-core/issues/284 for details. (GitHub won't let me transfer the issue to this repository.) | open | 2024-08-28T15:50:29Z | 2024-08-28T15:50:30Z | https://github.com/plotly/dash/issues/2969 | [
"bug",
"P2"
] | gvwilson | 0 |
tensorflow/tensor2tensor | deep-learning | 1,490 | Hyper parameter tuning for local execution | ### Description
I know t2t has a hyperparameter tuning function, but it only works with ML Engine.
I implemented hyperparameter tuning with Optuna for t2t v1.10.0.
https://github.com/Drunkar/tensor2tensor-optuna

Optuna is an open-source hyperparameter optimization framework developed by Preferred Networks.
https://optuna.org/
I want to ask whether anyone needs this feature, since ML Engine can be expensive.
Thanks.
### Environment information
```
OS: Ubuntu 16.04.6 LTS
$ pip freeze | grep tensor
mesh-tensorflow==0.0.5
tensor2tensor==1.10.0
tensorboard==1.12.2
tensorflow==1.12.0
tensorflow-datasets==1.0.1
tensorflow-hub==0.3.0
tensorflow-metadata==0.13.0
tensorflow-probability==0.6.0
tensorflow-serving-api==1.13.0
$ python -V
Python 3.6.7 :: Anaconda custom (64-bit)
```
| open | 2019-03-15T07:58:26Z | 2020-03-09T07:59:25Z | https://github.com/tensorflow/tensor2tensor/issues/1490 | [] | Drunkar | 3 |
python-restx/flask-restx | flask | 124 | Contributing getting started questions | Hi everyone!
I've been following this project since back when it was still flask-restplus, and I've always wanted to contribute to it, since I've used it and liked it quite a bit.
So, now that I find myself with some time for contributing, I wanted to know two things:
1. Do you have a specific messaging channel, like Discord, Slack, Gitter, etc.?
2. Do you already have in mind a list of issues that are good "first contribution" candidates?
Thanks in advance!
| closed | 2020-04-21T21:30:02Z | 2020-04-22T21:49:01Z | https://github.com/python-restx/flask-restx/issues/124 | [
"question"
] | ramarivera | 2 |
huggingface/diffusers | deep-learning | 10,842 | train_text_to_image_lora.py has many problems when debugged; is there a version that works for training? | I debugged train_text_to_image_lora.py and found a lot of problems. Could you point me to a version of this file that can actually be used for training?
| open | 2025-02-20T14:38:11Z | 2025-03-22T15:02:48Z | https://github.com/huggingface/diffusers/issues/10842 | [
"stale"
] | llm8047 | 4 |
huggingface/transformers | machine-learning | 36,640 | [Feature Request]: refactor _update_causal_mask to a public utility | ### Feature request
refactor _update_causal_mask to a public utility
### Motivation
After this PR: https://github.com/huggingface/transformers/pull/35235/files#diff-06392bad3b9e97be9ade60d4ac46f73b6809388f4d507c2ba1384ab872711c51
all the attention implementations were refactored to use `ALL_ATTENTION_FUNCTIONS`, and people can register their own implementation very easily.
I notice that there is still another function, `_update_causal_mask`, that is copy-pasted everywhere and is closely tied to the attention modules:
if someone registers a custom attention implementation, `_update_causal_mask` will still add an `attention_mask` whenever the implementation is not `flash_attention_2`, so I hope this function can be refactored too.
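A minimal, framework-free sketch of the registry pattern being requested (the names here are illustrative, not the actual transformers API): mask construction is looked up per attention implementation instead of being hard-coded behind an "is it flash_attention_2?" branch.

```python
# Illustrative registry: each attention implementation registers how (or whether)
# it wants the causal mask materialized. Names are hypothetical, not the real
# transformers API.
MASK_FUNCTIONS = {}

def register_mask_fn(impl_name):
    def deco(fn):
        MASK_FUNCTIONS[impl_name] = fn
        return fn
    return deco

@register_mask_fn("flash_attention_2")
def no_mask(seq_len):
    return None  # flash attention handles causality internally

@register_mask_fn("eager")
def dense_causal_mask(seq_len):
    # lower-triangular boolean mask: position i may attend to j <= i
    return [[j <= i for j in range(seq_len)] for i in range(seq_len)]

def update_causal_mask(impl_name, seq_len):
    # default to the dense mask for unknown implementations
    fn = MASK_FUNCTIONS.get(impl_name, dense_causal_mask)
    return fn(seq_len)

print(update_causal_mask("flash_attention_2", 4))  # -> None
print(update_causal_mask("eager", 2))              # -> [[True, False], [True, True]]
```

With such a registry, a third-party implementation could register its own mask behavior instead of relying on a hard-coded exception list.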
### Your contribution
I can do some testing and submit a PR; we could add a Ulysses implementation as a third-party example.
"Feature request"
] | Irvingwangjr | 2 |
yt-dlp/yt-dlp | python | 12,159 | WARNING: [Instagram] unable to extract shared data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
WARNING: [Instagram] unable to extract shared data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
WARNING: [Instagram] unable to extract shared data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
``` | closed | 2025-01-22T08:37:00Z | 2025-01-22T14:42:33Z | https://github.com/yt-dlp/yt-dlp/issues/12159 | [
"spam"
] | Thim19954 | 1 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,241 | Can any of these TTS models run on a regular Win11 i7 laptop, or is NVIDIA gfx card obligatory? | Label: Not an issue, but a question rather.
As the title states: Can any of these TTS models run on a regular Win11 Intel i7 laptop, w/ Intel Iris Xe graphics, or is NVIDIA gfx card obligatory?
I'm a little confused. Most, if not all of these models mention the need for NVIDIA GPU.
Does that mean just the drivers from the NVIDIA website, or the (graphics) card as well? I'm a hardware n00b, so pardon my slightly ignorant question. Not sure what a gfx card has to do with training ;-)
Thanks. | open | 2023-08-12T16:55:34Z | 2023-08-12T19:07:05Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1241 | [] | inpresif | 0 |
JaidedAI/EasyOCR | deep-learning | 459 | Error in Google Colaboratory | I clicked the link in your GitHub repo for 'Google Colaboratory'; it runs until it hits the cell that has the following code and then generates an error.
```
# Create a reader to do OCR.
# If you change to GPU instance, it will be faster. But CPU is enough.
# (by MENU > Runtime > Change runtime type > GPU, then redo from beginning )
import easyocr
reader = easyocr.Reader(['th','en'])
```
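A hedged note before the error output below: the missing `bidi` module is distributed on PyPI under a different name, `python-bidi`, so the likely fix in the Colab notebook is `!pip install python-bidi` (this assumes the default Colab environment; other setups may differ). A small helper illustrating the import-name vs. pip-name mismatch that trips this up (the mapping entries are well-known examples, not an EasyOCR API):

```python
def missing_module_hint(exc: ModuleNotFoundError) -> str:
    """Suggest a pip command for imports whose distribution name differs."""
    # Well-known import-name -> pip-distribution-name mismatches.
    pip_names = {"bidi": "python-bidi", "cv2": "opencv-python", "PIL": "Pillow"}
    module = exc.name or ""
    return "pip install " + pip_names.get(module, module)

err = ModuleNotFoundError("No module named 'bidi'", name="bidi")
print(missing_module_hint(err))  # -> pip install python-bidi
```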
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-5-eb3fec48561d> in <module>()
2 # If you change to GPU instance, it will be faster. But CPU is enough.
3 # (by MENU > Runtime > Change runtime type > GPU, then redo from beginning )
----> 4 import easyocr
5 reader = easyocr.Reader(['th','en'])
1 frames
/usr/local/lib/python3.7/dist-packages/easyocr/easyocr.py in <module>()
7 make_rotated_img_list, set_result_with_confidence
8 from .config import *
----> 9 from bidi.algorithm import get_display
10 import numpy as np
11 import cv2
ModuleNotFoundError: No module named 'bidi'
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
``` | closed | 2021-06-14T02:23:13Z | 2024-07-21T12:16:23Z | https://github.com/JaidedAI/EasyOCR/issues/459 | [] | datatalking | 3 |
recommenders-team/recommenders | data-science | 1,673 | [ASK] MIND competition submission error using nrms_MIND.ipynb | ### Description
Hi, thanks for creating the repo and MIND competition.
I tried to generate a valid MIND test submission using the latest version of [examples/00_quick_start/nrms_MIND.ipynb](https://github.com/microsoft/recommenders/blob/main/examples/00_quick_start/nrms_MIND.ipynb) by changing only this line to use the original MIND dataset:
```python
MIND_type = 'large'
```
After the `prediction.zip` was uploaded to [MIND competition](https://competitions.codalab.org/competitions/24122#participate) with user name: `leemeng`, I encountered an error message which is hard to debug by myself:
```bash
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
/tmp/codalab/tmpbVMoKT/run/program/evaluate.py:17: VisibleDeprecationWarning: boolean index did not match indexed array along dimension 0; dimension is 28 but corresponding boolean dimension is 16
false_score = score[label==0]
/tmp/codalab/tmpbVMoKT/run/program/evaluate.py:18: VisibleDeprecationWarning: boolean index did not match indexed array along dimension 0; dimension is 28 but corresponding boolean dimension is 16
positive_score = score[label==1]
Traceback (most recent call last):
File "/tmp/codalab/tmpbVMoKT/run/program/evaluate.py", line 125, in
auc, mrr, ndcg, ndcg10,mrrs = scoring(truth_file, submission_answer_file)
File "/tmp/codalab/tmpbVMoKT/run/program/evaluate.py", line 91, in scoring
mrr = mrr_score(y_true,y_score)
File "/tmp/codalab/tmpbVMoKT/run/program/evaluate.py", line 36, in mrr_score
y_true = np.take(y_true, order)
File "/opt/conda/lib/python2.7/site-packages/numpy/core/fromnumeric.py", line 124, in take
return take(indices, axis, out, mode)
IndexError: index 21 is out of bounds for size 16
```
Previously, there was a similar issue (#1192), but that issue was considered to be caused by a small number of lines (7,538).
I have confirmed my second submitted `prediction.txt` in `prediction.zip` has many lines (376,471 lines):
```python
import os
import pandas as pd
predictions = pd.read_csv(
os.path.join(data_path, 'prediction.txt'), sep=" ", header=None,
names=['id', 'rank']
)
len(predictions)
# output: 376471
```
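Since the evaluator's `IndexError: index 21 is out of bounds for size 16` suggests a length mismatch between some prediction lines and the truth impressions, a per-line sanity check might help narrow it down. This assumes the standard MIND submission format, `impressionID [rank1,rank2,...]`, where the ranks must be a permutation of 1..n; it cannot check against the truth file's impression sizes, but it catches malformed rank lists:

```python
def bad_prediction_lines(lines):
    """Return (line_number, reason) for lines whose ranks are not a 1..n permutation."""
    bad = []
    for i, line in enumerate(lines, start=1):
        try:
            imp_id, ranks_str = line.strip().split(" ", 1)
            ranks = [int(r) for r in ranks_str.strip("[]").split(",")]
        except ValueError:
            bad.append((i, "unparseable"))
            continue
        if sorted(ranks) != list(range(1, len(ranks) + 1)):
            bad.append((i, "ranks are not a permutation of 1..n"))
    return bad

sample = [
    "1 [2,1,3]",  # valid: a permutation of 1..3
    "2 [1,2,4]",  # invalid: contains 4 but only 3 entries
]
print(bad_prediction_lines(sample))  # -> [(2, 'ranks are not a permutation of 1..n')]
```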
Any help would be really appreciated. 🙏 | closed | 2022-03-14T15:51:30Z | 2024-03-24T09:14:05Z | https://github.com/recommenders-team/recommenders/issues/1673 | [
"help wanted"
] | leemengtw | 7 |
serengil/deepface | machine-learning | 1,395 | [FEATURE]: <There are multiple people in a picture> | ### Description
If we have the information from an ID card, can the individuals in picture 2 be compared against it one by one?
### Additional Info
_No response_ | closed | 2024-12-05T09:04:42Z | 2024-12-05T09:09:18Z | https://github.com/serengil/deepface/issues/1395 | [
"enhancement",
"question"
] | jhluaa | 2 |
schemathesis/schemathesis | graphql | 1,889 | [BUG] hypothesis-max-examples is ignored when using docker run | ### Checklist
- [x ] I checked the [FAQ section](https://schemathesis.readthedocs.io/en/stable/faq.html#frequently-asked-questions) of the documentation
- [ x] I looked for similar issues in the [issue tracker](https://github.com/schemathesis/schemathesis/issues)
- [ x] I am using the latest version of Schemathesis
### Describe the bug
Clearly describe the issue you're facing.
### To Reproduce
🚨 **Mandatory** 🚨: Steps to reproduce the behavior:
1. Run this command
```shell
docker run --network=my-network \
  -v /home/didier/projects/ck_api_design:/app \
  schemathesis/schemathesis:stable run /app/api/v4/users.yaml \
  --code-sample-style curl \
  --workers auto \
  --checks all \
  --base-url http://qa-users-api:50103/api/v4 \
  --header "X-Api-Key: xxxxxxxxxx" \
  --validate-schema=true \
  --endpoint ^/api/v4/users$ \
  --method POST \
  --hypothesis-max-examples 1
```
2. See error
After some minutes of execution (instead of the expected matter of seconds) we get:
=================================== SUMMARY ====================================
Performed checks:
not_a_server_error 5677 / 5677 passed PASSED
status_code_conformance 5674 / 5677 passed FAILED
content_type_conformance 5677 / 5677 passed PASSED
response_headers_conformance 5677 / 5677 passed PASSED
response_schema_conformance 3 / 5677 passed FAILED
If I use a previous version of schemathesis (v3.17.0):
Execution is a matter of seconds and the hypothesis-max-examples option works as expected:
Performed checks:
not_a_server_error 7 / 7 passed PASSED
status_code_conformance 4 / 7 passed FAILED
content_type_conformance 7 / 7 passed PASSED
response_headers_conformance 7 / 7 passed PASSED
response_schema_conformance 4 / 7 passed FAILED
Please include a minimal API schema causing this issue:
```yaml
{ "openapi": "3.0.2", ... }
```
### Expected behavior
Clearly describe your expected outcome.
### Environment
```
- Linux
- Python version: Depends on schemathesis image used in stable
- Schemathesis version: [e.g. 3.20.0] 3.21.0
- Spec version: [e.g. Open API 3.0.2] 3.0.1
```
### Additional context
Include any other relevant details, like logs or screenshots.
| closed | 2023-11-15T09:13:43Z | 2023-12-06T17:36:15Z | https://github.com/schemathesis/schemathesis/issues/1889 | [
"Type: Bug",
"Status: Needs Triage"
] | dbire | 5 |
scikit-learn/scikit-learn | python | 30,540 | Failure generating a pdf of the documentations using make latexpdf | ### Describe the bug
Hi sklearn team and fans,
I am trying to generate a PDF of the documentation to be able to read/use the sklearn documentation offline. On multiple systems, ranging from macOS (ARM or AMD processors) to Ubuntu, I am facing this issue and I am unable to troubleshoot it further:
```
Configuration error:
There is a programmable error in your configuration file:
Traceback (most recent call last):
File "/opt/homebrew/Caskroom/miniconda/base/envs/sklearn_docs/lib/python3.10/site-packages/sphinx/config.py", line 509, in eval_config_file
exec(code, namespace) # NoQA: S102
File "/Users/myself/Documents/sklearn_docs/scikit-learn/doc/conf.py", line 22, in <module>
from sklearn.externals._packaging.version import parse
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1002, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 945, in _find_spec
File "/opt/homebrew/Caskroom/miniconda/base/envs/sklearn_docs/lib/python3.10/site-packages/_scikit_learn_editable_loader.py", line 311, in find_spec
tree = self._rebuild()
File "/opt/homebrew/Caskroom/miniconda/base/envs/sklearn_docs/lib/python3.10/site-packages/_scikit_learn_editable_loader.py", line 345, in _rebuild
subprocess.run(self._build_cmd, cwd=self._build_path, env=env, stdout=subprocess.DEVNULL, check=True)
File "/opt/homebrew/Caskroom/miniconda/base/envs/sklearn_docs/lib/python3.10/subprocess.py", line 501, in run
with Popen(*popenargs, **kwargs) as process:
File "/opt/homebrew/Caskroom/miniconda/base/envs/sklearn_docs/lib/python3.10/subprocess.py", line 966, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/opt/homebrew/Caskroom/miniconda/base/envs/sklearn_docs/lib/python3.10/subprocess.py", line 1842, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/private/var/folders/2k/k1vr0xc11hd8_n10w_v242k80000gn/T/pip-build-env-8s67iqiw/overlay/bin/ninja'
make: *** [latexpdf] Error 2
```
First, are we able to use `make latexpdf` to automate the generation of a PDF for the entire documentation? If not, just indicate so. If yes, how would you recommend I resolve or troubleshoot this issue further? As you can see, the error trace points to multiple files in sklearn, so I thought I would bring this issue up.
Thanks and best regards
### Steps/Code to Reproduce
```
pip install sphinx sphinx-gallery numpydoc matplotlib Pillow pandas scikit-image joblib
cd doc
pip install --editable ..
make latexpdf
```
### Expected Results
No error is thrown and a pdf for the documentations is generated
### Actual Results
```
Configuration error:
There is a programmable error in your configuration file:

Traceback (most recent call last):
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sklearn_docs/lib/python3.10/site-packages/sphinx/config.py", line 509, in eval_config_file
    exec(code, namespace)  # NoQA: S102
  File "/Users/myself/Documents/sklearn_docs/scikit-learn/doc/conf.py", line 22, in <module>
    from sklearn.externals._packaging.version import parse
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1002, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 945, in _find_spec
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sklearn_docs/lib/python3.10/site-packages/_scikit_learn_editable_loader.py", line 311, in find_spec
    tree = self._rebuild()
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sklearn_docs/lib/python3.10/site-packages/_scikit_learn_editable_loader.py", line 345, in _rebuild
    subprocess.run(self._build_cmd, cwd=self._build_path, env=env, stdout=subprocess.DEVNULL, check=True)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sklearn_docs/lib/python3.10/subprocess.py", line 501, in run
    with Popen(*popenargs, **kwargs) as process:
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sklearn_docs/lib/python3.10/subprocess.py", line 966, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sklearn_docs/lib/python3.10/subprocess.py", line 1842, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: '/private/var/folders/2k/k1vr0xc11hd8_n10w_v242k80000gn/T/pip-build-env-8s67iqiw/overlay/bin/ninja'
make: *** [latexpdf] Error 2
```
### Versions
```shell
here are all the packages installed in the conda env I am trying to use to generate this pdf document:
alabaster==0.7.16
babel==2.16.0
certifi==2024.12.14
charset-normalizer==3.4.1
contourpy==1.3.1
cycler==0.12.1
Cython==3.0.11
distlib==0.3.6
docutils==0.21.2
filelock==3.10.4
fonttools==4.55.3
idna==3.10
imageio==2.36.1
imagesize==1.4.1
Jinja2==3.1.5
joblib==1.4.2
kiwisolver==1.4.8
lazy_loader==0.4
MarkupSafe==3.0.2
matplotlib==3.10.0
networkx==3.4.2
numpy==2.2.1
numpydoc==1.8.0
packaging==24.2
pandas==2.2.3
pillow==11.0.0
Pygments==2.18.0
pyparsing==3.2.0
python-dateutil==2.9.0.post0
pytz==2024.2
requests==2.32.3
scikit-image==0.25.0
-e git+https://github.com/scikit-learn/scikit-learn.git@970503f839f44b4f78390e6069f8e13c0dd2f185#egg=scikit_learn
scipy==1.14.1
six==1.17.0
snowballstemmer==2.2.0
Sphinx==7.3.7
sphinx-gallery==0.18.0
sphinxcontrib-applehelp==2.0.0
sphinxcontrib-devhelp==2.0.0
sphinxcontrib-htmlhelp==2.1.0
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==2.0.0
sphinxcontrib-serializinghtml==2.0.0
tabulate==0.9.0
threadpoolctl==3.5.0
tifffile==2024.12.12
tomli==2.2.1
tzdata==2024.2
urllib3==2.3.0
virtualenv==20.21.0
```
| closed | 2024-12-25T19:16:30Z | 2024-12-28T19:12:55Z | https://github.com/scikit-learn/scikit-learn/issues/30540 | [
"Bug",
"Needs Triage"
] | hassanshallal | 6 |
fastapi/sqlmodel | pydantic | 534 | [Querying] negating `Model.boolean` in `where()` | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Optional
from sqlmodel import select, Field, Session, SQLModel, create_engine
class Hero(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
test: bool
name: str
secret_name: str
age: Optional[int] = None
hero_1 = Hero(name="Deadpond", test=False, secret_name="Dive Wilson")
engine = create_engine("sqlite:///database.db")
SQLModel.metadata.create_all(engine)
with Session(engine) as session:
session.add(hero_1)
session.commit()
query = select(Hero).where(Hero.test) # FIXME: this has to be `not Hero.test`
heros = session.execute(query).all()
print(heros)
```
### Description
- Create model with `bool` field
- Query `not field` in `select`
- Throws error OR evaluates to `False` instead of generating `WHERE` clause
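For what it's worth, the root cause here is a general Python rule rather than anything SQLModel-specific: `not x` cannot be overloaded, because Python always coerces `x` through `bool()` and returns a plain `True`/`False`, whereas `~x` (`__invert__`) and `==` can be overloaded. That is why SQLAlchemy-style expression builders accept `~Hero.test` or `Hero.test == False` but not `not Hero.test`. A toy expression class showing the difference (illustrative only, not SQLModel's implementation):

```python
class Column:
    """Toy stand-in for an ORM column expression (illustrative only)."""
    def __init__(self, name):
        self.name = name
    def __invert__(self):     # ~col can be overloaded -> SQL NOT
        return f"NOT {self.name}"
    def __eq__(self, other):  # col == value can be overloaded -> SQL comparison
        return f"{self.name} = {other}"
    # There is no hook that lets `not col` build SQL: Python always
    # routes `not` through bool() and returns a plain True/False.

col = Column("test")
print(~col)          # -> NOT test
print(col == False)  # -> test = False   (the `== False` is the SQL idiom, on purpose)
print(not col)       # -> False  (a plain bool; no SQL was generated)
```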
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.8
### Python Version
Python 3.10.9
### Additional Context
_No response_ | closed | 2023-01-21T22:50:44Z | 2023-01-22T15:07:49Z | https://github.com/fastapi/sqlmodel/issues/534 | [
"question"
] | Pk13055 | 2 |
OpenBB-finance/OpenBB | machine-learning | 6,694 | [IMPROVE] Can't complete openbb.build() when using Spark due to failure in renaming temporary cache file | When installing the `multpl` openbb extension and building the Python interface as recommended by the [docs](https://docs.openbb.co/platform/installation#post-installation), I encounter an error when executing the build command as the temporary cache file cannot be renamed and subsequently I encounter an error pulling data from the `multpl` provider.
I only encounter this issue when using a Spark cluster (specifically in Databricks) but when I run the same code on my local Desktop there are no issues, the build succeeds and I can pull data as usual from the `multpl` provider. I believe the failure when using a Spark cluster is due to multiple processes running simultaneously using multiple `ruff` invocations as described in [this issue](https://github.com/astral-sh/ruff/issues/12284).
I am wondering if renaming the cache file is critical to the building step and can be ignored if it fails so the build can still succeed and I can continue using `obb` to pull data from the given provider.
**To Reproduce**
```
%pip install openbb openbb-multpl
# To restart Python using Databricks utils cmd
dbutils.library.restartPython()
import openbb
openbb.build()
# To restart Python using Databricks utils cmd
dbutils.library.restartPython()
from openbb import obb
obb.index.sp500_multiples(series_name='pe_month', provider='multpl')
```
**Screenshots**


**Desktop (see more details [here](https://learn.microsoft.com/en-us/azure/databricks/release-notes/runtime/14.3lts)):**
- Apache Spark 3.5.0, Scala 2.12
- Operating System: Ubuntu 22.04.3 LTS
- Java: Zulu 8.74.0.17-CA-linux64
- Python: 3.10.12
- R: 4.3.1
- Delta Lake: 3.1.0 | closed | 2024-09-25T03:49:10Z | 2024-09-26T18:07:29Z | https://github.com/OpenBB-finance/OpenBB/issues/6694 | [] | eram576 | 4 |
waditu/tushare | pandas | 1,038 | Incorrect and duplicated fix_assets values in balance sheet data |
```
ts_code    ann_date  f_ann_date  end_date  total_cur_assets  fix_assets    total_cur_liab  total_liab
600038.SH  20190426  20190426    20190331  1.997876e+10      0.000000e+00  1.440746e+10    1.515330e+10
600038.SH  20190426  20190426    20190331  1.997876e+10      0.000000e+00  1.440746e+10    1.515330e+10
600038.SH  20190321  20190321    20181231  2.073679e+10      2.182306e+09  1.528940e+10    1.601735e+10
600038.SH  20181026  20181026    20180930  1.936912e+10      0.000000e+00  1.416455e+10    1.498835e+10
600038.SH  20180825  20180825    20180630  1.881996e+10      2.270699e+09  1.382745e+10    1.461474e+10
```
The first and second rows are duplicates, and fix_assets is shown as 0 (the correct value should be 2,133,647,617). Could this data please be fixed?
yzhao062/pyod | data-science | 240 | HBOS negative scores | Hello,
I have been reading the Histogram-Based Outlier Score paper and testing a little with this Python module's implementation. Looking at the paper, the minimum score we can get should be 0, because of the normalization and the logarithm in the formula, but when I try it with the PyOD module I get some negative scores.
- Can we still apply the rule "the bigger the score, the more anomalous the instance"?
- Why are we getting these negative values?
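A possible explanation, hedged (it is based on the paper's formula, not on PyOD's exact internals): HBOS sums log(1/hist_i(p)) over features, and when a normalized bin height is greater than 1 (which can happen with narrow bins under density normalization), log(1/h) is negative, so the total can go below 0. The ordering is unaffected, so "bigger score = more anomalous" should still hold. Numerically:

```python
import math

def hbos_score(bin_heights):
    """HBOS as in the paper: sum over features of log(1 / hist height at the point)."""
    return sum(math.log(1.0 / h) for h in bin_heights)

# A point landing in dense (tall, height > 1) bins gets a negative score...
dense_point = hbos_score([2.5, 3.0])
# ...while a point in sparse bins scores higher, i.e. more anomalous.
sparse_point = hbos_score([0.1, 0.2])

print(dense_point < 0)             # -> True
print(sparse_point > dense_point)  # -> True
```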
Thanks in advance for any reply. | open | 2020-10-22T08:48:44Z | 2023-06-21T23:40:50Z | https://github.com/yzhao062/pyod/issues/240 | [] | nacheteam | 2 |
mwaskom/seaborn | pandas | 3,457 | Deprecation warnings when seaborn is used with Pandas 2.1.0 | Pandas 2.1.0 has [deprecated](https://pandas.pydata.org/docs/whatsnew/v2.1.0.html#other-deprecations) a number of functions, and this results in `FutureWarning`s when Seaborn is used.
For example:
```py
import seaborn as sns
tips = sns.load_dataset("tips")
sns.relplot(data=tips, x="total_bill", y="tip")
sns.relplot(data=tips, x="total_bill", y="tip")
#> /opt/homebrew/lib/python3.11/site-packages/seaborn/_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
#> /opt/homebrew/lib/python3.11/site-packages/seaborn/_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
#> /opt/homebrew/lib/python3.11/site-packages/seaborn/_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
#> /opt/homebrew/lib/python3.11/site-packages/seaborn/_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
#> /opt/homebrew/lib/python3.11/site-packages/seaborn/_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
#> /opt/homebrew/lib/python3.11/site-packages/seaborn/_oldcore.py:1498: FutureWarning: is_categorical_dtype is deprecated and will be removed in a future version. Use isinstance(dtype, CategoricalDtype) instead
#> <seaborn.axisgrid.FacetGrid object at 0x292c403d0>
``` | closed | 2023-08-31T14:44:49Z | 2023-08-31T14:58:26Z | https://github.com/mwaskom/seaborn/issues/3457 | [] | wch | 1 |
airtai/faststream | asyncio | 1,507 | feature: concurrent Redis consuming | **Describe the bug**
It seems tasks don't run in parallel
**How to reproduce**
Include source code:
```python
import asyncio
from faststream import FastStream
from faststream.redis import RedisBroker
from pydantic import BaseModel
redis_dsn = 'xxxx'
rb = RedisBroker(redis_dsn)
class User(BaseModel):
name: str
age: int = 0
@rb.subscriber(list="users")
async def my_listener(user: User):
await asyncio.sleep(3)
print(user, 'from faststream')
async def producer():
for i in range(10):
await rb.publish(User(name="Bob", age=i), list="users")
async def main():
await rb.connect()
asyncio.create_task(producer())
app = FastStream(rb)
await app.run()
if __name__ == '__main__':
asyncio.run(main())
```
And/Or steps to reproduce the behavior:
run the script above
**Expected behavior**
The tasks should run in parallel.
**Observed behavior**
The tasks run one after another.
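To make the expected behavior concrete, here is a plain-asyncio timing demo (this is not FastStream's API, just an illustration of sequential awaiting vs. concurrent scheduling): if each handler is awaited before the next one starts, ten 3-second handlers take about 30 s; scheduled concurrently, they take about 3 s.

```python
import asyncio
import time

async def handler(msg):
    await asyncio.sleep(0.05)  # stands in for the 3-second handler
    return msg

async def sequential(msgs):
    # one message handled at a time, like the observed behavior
    return [await handler(m) for m in msgs]

async def concurrent(msgs):
    # all handlers scheduled at once, like the expected behavior
    return await asyncio.gather(*(handler(m) for m in msgs))

t0 = time.perf_counter()
asyncio.run(sequential(range(5)))
seq_time = time.perf_counter() - t0

t0 = time.perf_counter()
asyncio.run(concurrent(range(5)))
conc_time = time.perf_counter() - t0

print(f"sequential ~{seq_time:.2f}s, concurrent ~{conc_time:.2f}s")
```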
| open | 2024-06-07T04:16:37Z | 2024-11-08T10:41:08Z | https://github.com/airtai/faststream/issues/1507 | [
"enhancement",
"good first issue",
"Redis"
] | ryanrain2016 | 14 |
nltk/nltk | nlp | 2,717 | FreqDist Relative Frequencies (Corpus Linguistics) | `FreqDist` is a very useful tool for corpus linguists trying to generate frequency tables. However, in many cases relative frequencies (i.e., normalized frequencies) are desired.
There are two common ways of doing this: Either the frequency of an item is divided by the total number of items (e.g., words), or the frequency is normalized to something like "X per million." This allows us to compare frequencies between corpora of different sizes.
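For illustration, here is what the two normalizations look like, using `collections.Counter` as a stdlib stand-in for `FreqDist` (note: `FreqDist` already exposes `N()` for the token total and, if I recall correctly, `freq()` for the proportional variant; the per-million variant is the part that seems to be missing):

```python
from collections import Counter

tokens = "the cat sat on the mat the end".split()
fd = Counter(tokens)
total = sum(fd.values())  # the role FreqDist.N() plays

def rel_freq(word):
    """Proportional frequency: count / total tokens."""
    return fd[word] / total

def per_million(word):
    """Frequency normalized to occurrences per million words."""
    return fd[word] / total * 1_000_000

print(rel_freq("the"))     # -> 0.375  (3 of 8 tokens)
print(per_million("the"))  # -> 375000.0
```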
I believe that this could be a useful addition to the `FreqDist` that is relatively straightforward to implement. If I haven't overlooked this functionality and if there's interest, I'd be happy to work on a PR. | closed | 2021-05-25T13:09:59Z | 2021-06-02T12:48:17Z | https://github.com/nltk/nltk/issues/2717 | [] | IngoKl | 3 |
GibbsConsulting/django-plotly-dash | plotly | 303 | Remove multiple return value check for clientside_callbacks | As reported in #301 there is a check to prevent use of multiple server-side callback values as this is not handled at present.
However, the check appears to also prevent the use of client-side callbacks with multiple return values, and should be relaxed to not prevent this use case.
| closed | 2021-01-15T16:52:54Z | 2021-01-24T03:35:28Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/303 | [
"bug"
] | GibbsConsulting | 1 |
RobertCraigie/prisma-client-py | pydantic | 962 | Error: spawn prisma-client-py ENOENT} | I've been using Prisma in Node.js and I'm loving it.
Right now I have a FastAPI backend that I want to use Prisma with, yet I'm facing this issue whenever I run `prisma db push`:
```
prisma db push
Environment variables loaded from .env
Prisma schema loaded from schema.prisma
Datasource "db": PostgreSQL database "postgres", schema "public" at "HIDDEN.pooler.supabase.com"
The database is already in sync with the Prisma schema.
Running generate... (Use --skip-generate to skip the generators)
Error: spawn prisma-client-py ENOENT}
```
Prisma version:
```
prisma                  : 5.13.0
@prisma/client          : 5.13.0
Computed binaryTarget   : windows
Operating System        : win32
Architecture            : x64
Node.js                 : v20.12.0
Query Engine (Node-API) : libquery-engine b9a39a7ee606c28e3455d0fd60e78c3ba82b1a2b (at ........\AppData\Roaming\npm\node_modules\prisma\node_modules@prisma\engines\query_engine-windows.dll.node)
Schema Engine           : schema-engine-cli b9a39a7ee606c28e3455d0fd60e78c3ba82b1a2b (at ........\AppData\Roaming\npm\node_modules\prisma\node_modules@prisma\engines\schema-engine-windows.exe)
Schema Wasm             : @prisma/prisma-schema-wasm 5.13.0-23.b9a39a7ee606c28e3455d0fd60e78c3ba82b1a2b
Default Engines Hash    : HIDDEN
Studio                  : 0.500.0
```
Schema.prisma:

```prisma
generator client {
  provider = "prisma-client-py"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model User {
  id               String  @id @unique
  name             String?
  email            String  @unique
  profileImage     String?
  ntfyChannel      String
  stripeCustomerId String? @unique
}
```
My system:

```
Windows 11 64 bit
Python 3.12.2
```
| open | 2024-05-13T16:37:28Z | 2024-11-27T03:21:48Z | https://github.com/RobertCraigie/prisma-client-py/issues/962 | [] | AideFlood | 3 |
horovod/horovod | machine-learning | 3,655 | Collective ops: support for GatherOP for model parallel use cases. | **Is your feature request related to a problem? Please describe.**
In model-parallel use cases where each rank trains a part of the model, the full model must be constructed and saved after training, usually on rank 0, which needs to gather the weights from the other ranks.
**Describe the solution you'd like**
An ideal solution would be Horovod supporting a GatherOp for this use case.
**Describe alternatives you've considered**
Currently the alternative is using the existing broadcast op, which is wasteful because the other ranks also receive the weights for the full model even though they don't need them.
**Additional context**
N/A
| closed | 2022-08-15T18:11:08Z | 2022-08-23T23:23:52Z | https://github.com/horovod/horovod/issues/3655 | [
"enhancement"
] | MrAta | 1 |
AirtestProject/Airtest | automation | 848 | Two issues on a real iOS device: 1. UI matching is inaccurate in landscape mode; 2. iOS pop-up dialogs cannot be tapped, the tap lands on the app UI layer underneath. Is there a solution? | (Please fill in the sections below as completely as you can; it helps us locate and resolve the problem quickly, thanks for your cooperation. Otherwise the issue will be closed directly.)
**(Important! Issue category)**
* AirtestIDE usage problems in the test/development environment -> https://github.com/AirtestProject/AirtestIDE/issues
* Widget recognition, UI tree structure, poco library errors -> https://github.com/AirtestProject/Poco/issues
* Image recognition and device control problems -> follow the steps below
**Describe the bug**
(Concisely describe the problem you ran into, or paste the error traceback.)
1. UI matching is inaccurate in landscape mode; 2. iOS pop-up dialogs cannot be tapped, the tap lands on the app UI layer underneath
```
(paste the traceback or other error messages here)
```
**Screenshots**
(Attach screenshots taken when the problem occurred, if any.)
(For image- and device-related problems produced in AirtestIDE, please also paste the relevant errors from the AirtestIDE console window.)
**Steps to reproduce**
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
(What you expected to get or see.)
**Python version:** `python3.9`
**Airtest version:** `1.0.69`
> The airtest version can be found with the `pip freeze` command
**Devices:**
- Model: [e.g. google pixel 2]
- OS: [e.g. Android 8.1]
- (other information)
**Other relevant environment information**
(Other runtime environments, e.g. abnormal on Linux Ubuntu 16.04 but normal on Windows.) | closed | 2021-01-04T04:09:53Z | 2021-02-21T08:54:59Z | https://github.com/AirtestProject/Airtest/issues/848 | [] | wslyyy | 1 |
deepspeedai/DeepSpeed | machine-learning | 7,136 | [BUG]When I use deepspeed ZeRO3 to train the vision-language-action model ,it met error of loading weights | **Describe the bug**
I want to train the vision-language-action model OpenVLA with ZeRO3, and it cannot load the weights:
Loading checkpoint shards: 100%|██████████| 3/3 [00:26<00:00, 7.72s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:26<00:00, 8.99s/it]
[rank0]: Traceback (most recent call last):
[rank0]: File "/fs1/private/user/chenzengjue/hdp/openvla/vla-scripts/grpo.py", line 136, in <module>
[rank0]: main(script_args, training_args, model_args)
[rank0]: File "/fs1/private/user/chenzengjue/hdp/openvla/vla-scripts/grpo.py", line 112, in main
[rank0]: trainer = trainer_cls(
[rank0]: ^^^^^^^^^^^^
[rank0]: File "/fs1/private/user/chenzengjue/hdp/openvla/vla-scripts/grpo_trainer.py", line 186, in __init__
[rank0]: vla = AutoModelForVision2Seq.from_pretrained(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/chenzengjue/anaconda3/envs/r1-v/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py", line 559, in from_pretrained
[rank0]: return model_class.from_pretrained(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/chenzengjue/anaconda3/envs/r1-v/lib/python3.11/site-packages/transformers/modeling_utils.py", line 262, in _wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/chenzengjue/anaconda3/envs/r1-v/lib/python3.11/site-packages/transformers/modeling_utils.py", line 4313, in from_pretrained
[rank0]: ) = cls._load_pretrained_model(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/home/chenzengjue/anaconda3/envs/r1-v/lib/python3.11/site-packages/transformers/modeling_utils.py", line 4949, in _load_pretrained_model
[rank0]: raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
[rank0]: RuntimeError: Error(s) in loading state_dict for OpenVLAForActionPrediction:
[rank0]: size mismatch for vision_backbone.featurizer.blocks.0.ls1.scale_factor: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([0]).
[rank0]: size mismatch for vision_backbone.featurizer.blocks.0.ls2.scale_factor: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([0]).
[rank0]: size mismatch for vision_backbone.featurizer.blocks.1.ls1.scale_factor: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([0]).
[rank0]: size mismatch for vision_backbone.featurizer.blocks.1.ls2.scale_factor: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([0]).
[rank0]: size mismatch for vision_backbone.featurizer.blocks.2.ls1.scale_factor: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([0]).
[rank0]: size mismatch for vision_backbone.featurizer.blocks.2.ls2.scale_factor: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([0]).
[rank0]: size mismatch for vision_backbone.featurizer.blocks.3.ls1.scale_factor: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([0]).
[rank0]: size mismatch for vision_backbone.featurizer.blocks.3.ls2.scale_factor: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([0]).
[rank0]: size mismatch for vision_backbone.featurizer.blocks.4.ls1.scale_factor: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([0]).
[rank0]: size mismatch for vision_backbone.featurizer.blocks.4.ls2.scale_factor: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([0]).
[rank0]: size mismatch for vision_backbone.featurizer.blocks.5.ls1.scale_factor: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([0]).
[rank0]: size mismatch for vision_backbone.featurizer.blocks.5.ls2.scale_factor: copying a param with shape torch.Size([1024]) from checkpoint, the shape in current model is torch.Size([0]).
[rank0]: size mismatch for vision_backbone.featurizer.blocks.6.ls1.scale_factor:
**To Reproduce**
I rewrote the GRPO Trainer code and use DeepSpeed ZeRO3 to train the OpenVLA model, following the original way of loading OpenVLA:

```python
AutoConfig.register("openvla", OpenVLAConfig)
AutoImageProcessor.register(OpenVLAConfig, PrismaticImageProcessor)
AutoProcessor.register(OpenVLAConfig, PrismaticProcessor)
AutoModelForVision2Seq.register(OpenVLAConfig, OpenVLAForActionPrediction)

# Load OpenVLA Processor and Model using HF AutoClasses
quantization_config = None
print(f"Loading model {model}...")
processing_class = AutoProcessor.from_pretrained(
    "/home/chenzengjue/hdp/openvla/openvla-7b", trust_remote_code=True
)
vla = AutoModelForVision2Seq.from_pretrained(
    "/home/chenzengjue/hdp/openvla/openvla-7b",
    torch_dtype=torch.bfloat16,
    quantization_config=quantization_config,
    # low_cpu_mem_usage=True,
    trust_remote_code=True,
)
vla = vla.to(device_id)
```

The error is then raised inside `AutoModelForVision2Seq.from_pretrained()`. Without DeepSpeed, loading works and the error disappears.
| open | 2025-03-14T02:33:21Z | 2025-03-14T02:35:57Z | https://github.com/deepspeedai/DeepSpeed/issues/7136 | [
"bug",
"training"
] | hahans | 0 |
InstaPy/InstaPy | automation | 6,815 | selenium.common.exceptions.WebDriverException: Message: Failed to decode response from marionette | selenium.common.exceptions.WebDriverException: Message: Failed to decode response from marionette

| open | 2024-07-01T14:15:45Z | 2024-07-01T14:15:45Z | https://github.com/InstaPy/InstaPy/issues/6815 | [] | DavidFFerreira | 0 |
matplotlib/matplotlib | data-visualization | 29,653 | [ENH]: Make fill_between 'step' argument consistent with plot | ### Problem
When I use fill_between, I typically want a plotted line as well. A nice way of doing this on several axes is to use a dict of keyword arguments, e.g.
```python
kw = dict(color='red', ls='--', lw=1)
ax.plot(..., **kw)
ax.fill_between(..., **kw)
```
One thing that can't be set this way is the `drawstyle`, because for some reason `fill_between` has its own syntax for this which differs from other methods.
Both the argument name and the way it's processed are different:
- [plot](https://matplotlib.org/stable/api/_as_gen/matplotlib.lines.Line2D.html#matplotlib.lines.Line2D.set_drawstyle) takes `drawstyle` or `ds`, with options 'default', 'steps', 'steps-pre', 'steps-mid', 'steps-post'
- [fill_between](https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.fill_between.html) takes `step` with options 'pre', 'post', 'mid', and requires you to not pass in a `step` argument to get the default interpolating behaviour
This makes it really annoying to use, especially when automating.
Also has created some confusion in the past: https://github.com/matplotlib/matplotlib/issues/19524
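Until the APIs are unified, a small adapter can translate the shared kwargs for automation; `split_kwargs` is my own helper, not a Matplotlib API, and the mapping below is based on the documented option names:

```python
_DS_TO_STEP = {
    "steps": "pre",      # 'steps' is an alias for 'steps-pre' in plot()
    "steps-pre": "pre",
    "steps-mid": "mid",
    "steps-post": "post",
}

def split_kwargs(kw):
    """Split shared style kwargs into (plot_kwargs, fill_between_kwargs)."""
    plot_kw = dict(kw)
    # fill_between rejects drawstyle/ds, so strip them...
    fill_kw = {k: v for k, v in kw.items() if k not in ("drawstyle", "ds")}
    # ...and translate them into the equivalent 'step' argument.
    ds = kw.get("drawstyle", kw.get("ds", "default"))
    if ds in _DS_TO_STEP:
        fill_kw["step"] = _DS_TO_STEP[ds]
    return plot_kw, fill_kw

kw = dict(color="red", ls="--", lw=1, drawstyle="steps-mid")
plot_kw, fill_kw = split_kwargs(kw)
print(fill_kw["step"])         # mid
print("drawstyle" in fill_kw)  # False
```

With proposal 1 below, this adapter would become unnecessary.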
### Proposed solution
1. Add `drawstyle` and `ds` arguments to `fill_between`, that are processed the same as for `plot`
2. Deprecate the `step` argument to `fill_between` | open | 2025-02-21T11:25:49Z | 2025-02-24T11:15:35Z | https://github.com/matplotlib/matplotlib/issues/29653 | [
"New feature",
"API: consistency"
] | MichaelClerx | 1 |
plotly/dash-table | dash | 569 | Filter does not support empty strings | Running the following code will fail with an AST error
```
import dash
from dash_table import DataTable
app = dash.Dash(__name__)
app.layout = DataTable(
id='table',
columns=[{
'name': x,
'id': x,
'selectable': True
} for x in ['a', 'b', 'c']],
data=[{
'a': str(x) if x % 2 == 0 else '',
'b': x if x % 3 == 1 else '',
'c': str(x*x) if x % 4 == 2 else ''
} for x in range(0,100)],
style_data_conditional=[{
'if': {
'column_id': x,
'filter_query': '{{{}}} eq ""'.format(x)
},
'backgroundColor': 'pink'
} for x in ['a', 'b', 'c']]
)
if __name__ == '__main__':
app.run_server(debug=True)
```
This is due to https://github.com/plotly/dash-table/blob/master/src/dash-table/syntax-tree/lexeme/expression.ts#L7 requiring at least one character inside the string. Removing this requirement will make the above code work as expected. | closed | 2019-09-05T15:01:40Z | 2019-09-05T16:46:13Z | https://github.com/plotly/dash-table/issues/569 | [
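The lexeme in question boils down to a quoted-string pattern. The exact expression in dash-table may differ; this is an illustrative reduction of the one-character requirement versus the proposed relaxation:

```python
import re

# Requires at least one character between the quotes (current behaviour).
STRICT = re.compile(r'^"[^"]+"$')
# Allows the empty string (proposed fix).
RELAXED = re.compile(r'^"[^"]*"$')

print(bool(STRICT.match('""')))    # False -> '{a} eq ""' fails to parse
print(bool(RELAXED.match('""')))   # True
print(bool(RELAXED.match('"x"')))  # True
```

With the relaxed form, both empty and non-empty string literals parse, which is what the filter query in the repro needs.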
"dash-type-bug",
"size: 0.5"
] | Marc-Andre-Rivet | 1 |
tensorflow/tensor2tensor | deep-learning | 1,031 | Trouble with ASR example in Windows OS | ### Description
For various reasons I'm trying to run tensor2tensor on Windows,
and I have trouble executing the example described in https://github.com/tensorflow/tensor2tensor/blob/master/docs/tutorials/asr_with_transformer.md
Running the ASR with transformer example in cmd.exe by
`python t2t_trainer.py --model=transformer --hparams_set=transformer_librispeech_tpu --problem=librispeech --train_steps=120000 --eval_steps=3 --local_eval_frequency=100 --data_dir=D:/ASR_E2E/180821/tmp --output_dir=D:/ASR_E2E/180821/output`
gives me an error that looks like
> tensorflow.python.framework.errors_impl.UnknownError: NewRandomAccessFile failed to Create/Open: D:\ASR_E2E\180821\tmp\librispeech-train : \udcbe\udcbc\udcbd\udcba\udcb0\udca1 \udcb0źεǾ\udcfa\udcbd\udcc0\udcb4ϴ\udcd9.
> ; Input/output error
It seems that this error has something to do with 'utf-8' encoding due to the sequence of strange symbols like \udcbd\udcc0\udcb4ϴ\udcd9...
How can I solve this problem?
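The `\udc..` sequences are bytes that could not be decoded as UTF-8, carried along via Python's `surrogateescape` error handler; the underlying message is most likely a Korean (CP949) Windows error string. The mechanism, and the fact that it is lossless, can be demonstrated like this (illustrative; the sample message is my guess, not the actual one from the traceback):

```python
# A CP949-encoded Korean message, as Windows might emit it.
original = "액세스가 거부되었습니다."  # "Access is denied."
raw = original.encode("cp949")

# Decoding those bytes as UTF-8 with surrogateescape smuggles the
# undecodable bytes through as \udcXX lone surrogates, exactly like
# the ones in the traceback above.
mangled = raw.decode("utf-8", errors="surrogateescape")
print(repr(mangled[:6]))

# The escape is lossless, so the original text can be recovered:
recovered = mangled.encode("utf-8", errors="surrogateescape").decode("cp949")
assert recovered == original
```

So the garbled symbols are a display problem, not data loss; the real failure is the `UnknownError` from `NewRandomAccessFile` (likely a file-access problem on the Windows path).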
### Environment information
OS: Windows7
$ pip freeze | grep tensor
> tensor2tensor==1.8.0
> tensorboard==1.8.0
> tensorflow==1.8.0
$ python -V
> Python 3.6.5 :: Anaconda, Inc.
### For bugs: reproduction and error logs
```
# Steps to reproduce:
...
```
```
# Error logs:
python t2t_trainer.py --model=transformer --hparams_set=transformer_librispeech_tpu --problem=librispeech --train_steps=120000 --eval_steps=3 --local_eval_frequency=100 --data_dir=D:/ASR_E2E/180821/tmp --output_dir=D:/ASR_E2E/180821/output
C:\ProgramData\Anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
WARNING:tensorflow:From C:\ProgramData\Anaconda3\lib\site-packages\tensor2tensor\utils\trainer_lib.py:198: RunConfig.__init__ (from tensorflow.contrib.learn.python.learn.estimators.run_config) is deprecated and will be removed in a future version.
Instructions for updating:
When switching to tf.estimator.Estimator, use tf.estimator.RunConfig instead.
INFO:tensorflow:schedule=continuous_train_and_eval
INFO:tensorflow:worker_gpu=1
INFO:tensorflow:sync=False
WARNING:tensorflow:Schedule=continuous_train_and_eval. Assuming that training is running on a single machine.
INFO:tensorflow:datashard_devices: ['gpu:0']
INFO:tensorflow:caching_devices: None
INFO:tensorflow:ps_devices: ['gpu:0']
INFO:tensorflow:Using config: {'_task_type': None, '_task_id': 0, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x0000000011F0A780>, '_master': '', '_num_ps_replicas': 0, '_num_worker_replicas': 0, '_environment': 'local', '_is_chief': True, '_evaluation_master': '', '_train_distribute': None, '_tf_config': gpu_options {
  per_process_gpu_memory_fraction: 1
}
, '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_secs': None, '_log_step_count_steps': 100, '_session_config': gpu_options {
  per_process_gpu_memory_fraction: 0.95
}
allow_soft_placement: true
graph_options {
  optimizer_options {
  }
}
, '_save_checkpoints_steps': 100, '_keep_checkpoint_max': 20, '_keep_checkpoint_every_n_hours': 10000, '_model_dir': 'D:/ASR_E2E/180821/output', 'use_tpu': False, 't2t_device_info': {'num_async_replicas': 1}, 'data_parallelism': <tensor2tensor.utils.expert_utils.Parallelism object at 0x0000000011F0AAC8>}
WARNING:tensorflow:Estimator's model_fn (<function T2TModel.make_estimator_model_fn.<locals>.wrapping_model_fn at 0x0000000011FC1268>) includes params argument, but params are not passed to Estimator.
WARNING:tensorflow:ValidationMonitor only works with --schedule=train_and_evaluate
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after 600 secs (eval_spec.throttle_secs) or training is finished.
INFO:tensorflow:Reading data files from D:/ASR_E2E/180821/tmp\librispeech-train*
INFO:tensorflow:partition: 0 num_data_files: 347
Traceback (most recent call last):
  File "t2t_trainer.py", line 389, in <module>
    tf.app.run()
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
    _sys.exit(main(argv))
  File "t2t_trainer.py", line 385, in main
    execute_schedule(exp)
  File "t2t_trainer.py", line 326, in execute_schedule
    getattr(exp, FLAGS.schedule)()
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensor2tensor\utils\trainer_lib.py", line 331, in continuous_train_and_eval
    self._eval_spec)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\estimator\training.py", line 439, in train_and_evaluate
    executor.run()
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\estimator\training.py", line 518, in run
    self.run_local()
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\estimator\training.py", line 650, in run_local
    hooks=train_hooks)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\estimator\estimator.py", line 363, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\estimator\estimator.py", line 843, in _train_model
    return self._train_model_default(input_fn, hooks, saving_listeners)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\estimator\estimator.py", line 853, in _train_model_default
    input_fn, model_fn_lib.ModeKeys.TRAIN))
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\estimator\estimator.py", line 691, in _get_features_and_labels_from_input_fn
    result = self._call_input_fn(input_fn, mode)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\estimator\estimator.py", line 798, in _call_input_fn
    return input_fn(**kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensor2tensor\data_generators\problem.py", line 735, in estimator_input_fn
    dataset_kwargs=dataset_kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensor2tensor\data_generators\problem.py", line 847, in input_fn
    dataset = skip_random_fraction(dataset, data_files[0])
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensor2tensor\data_generators\problem.py", line 1188, in skip_random_fraction
    num_skip = random.randint(0, _file_num_records_cached(data_file))
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensor2tensor\data_generators\problem.py", line 165, in _file_num_records_cached
    for _ in tf.python_io.tf_record_iterator(filename):
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\lib\io\tf_record.py", line 74, in tf_record_iterator
    compat.as_bytes(path), 0, compat.as_bytes(compression_type), status)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 519, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.UnknownError: NewRandomAccessFile failed to Create/Open: D:\ASR_E2E\180821\tmp\librispeech-train : \udcbe\udcbc\udcbd\udcba\udcb0\udca1 \udcb0źεǾ\udcfa\udcbd\udcc0\udcb4ϴ\udcd9.
; Input/output error
```
| open | 2018-08-30T11:29:35Z | 2018-08-30T11:29:35Z | https://github.com/tensorflow/tensor2tensor/issues/1031 | [] | Minuk101 | 0 |
ivy-llc/ivy | tensorflow | 28,089 | Fix Frontend Failing Test: paddle - tensor.paddle.Tensor.rsqrt_ | To-do List: https://github.com/unifyai/ivy/issues/27500 | closed | 2024-01-27T15:32:33Z | 2024-01-30T17:58:05Z | https://github.com/ivy-llc/ivy/issues/28089 | [
"Sub Task"
] | Sai-Suraj-27 | 0 |
harry0703/MoneyPrinterTurbo | automation | 341 | Could this be made friendlier for complete beginners? | This open-source project is great. Thanks to the author for the effort.
I tried it today. For a complete beginner like me, the API part is a headache. Could the author publish a detailed deployment tutorial (how to use the API part)? Thanks.
Also: the WeChat group has exceeded 200 members. | closed | 2024-05-08T23:12:39Z | 2024-05-10T00:48:50Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/341 | [] | stella741 | 1 |
fastapi/sqlmodel | sqlalchemy | 156 | How can I change the string column type to Unicode? | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Any, Callable, Optional, Union
import orjson
from sqlalchemy.types import Unicode
from sqlmodel import Field, SQLModel
__all__ = (
"create",
"read",
"update",
"data",
)
def orjson_dumps(value: Any, *, default: Callable[[Any], Union[Any, None]]) -> str:
return orjson.dumps(value, default=default).decode()
class basemodel(SQLModel):
class Config(SQLModel.Config):
json_loads = orjson.loads
json_dumps = orjson_dumps
class base(basemodel):
name: str = Field(
..., index=False, nullable=False, sa_column_kwargs={"type_": Unicode}
)
json_data: str = Field(
..., index=False, nullable=False, sa_column_kwargs={"type_": Unicode}
)
class create(base):
pass
class update(base):
pass
class read(base):
pass
class data(base, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
```
### Description
In MSSQL, the `name` and `json_data` columns are created as varchar.
Since I want nvarchar columns, I specified `type_` separately via the `sa_column_kwargs` option.
However, I confirmed that an error occurs in SQLAlchemy.
This error seems to occur because the positional argument and keyword argument for `type_` overlap.
`File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 1586, in __init__`
`raise exc.ArgumentError(`
`sqlalchemy.exc.ArgumentError: May not pass type_ positionally and as a keyword.`
In this case, I would appreciate it if you could tell me what to do.
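The traceback comes from SQLAlchemy's `Column.__init__`, which refuses `type_` passed both positionally and as a keyword; SQLModel already passes the inferred column type positionally, so putting `type_` into `sa_column_kwargs` collides with it. A stripped-down mimic of that check (not the real SQLAlchemy code):

```python
class ArgumentError(Exception):
    pass

def column_init(*args, **kwargs):
    """Mimic of the type_ handling in sqlalchemy Column.__init__."""
    has_positional_type = len(args) > 0  # SQLModel passes the inferred type here
    if has_positional_type and "type_" in kwargs:
        raise ArgumentError("May not pass type_ positionally and as a keyword.")
    return args[0] if has_positional_type else kwargs.get("type_")

column_init("VARCHAR")                        # fine: positional only
column_init(type_="NVARCHAR")                 # fine: keyword only
try:
    column_init("VARCHAR", type_="NVARCHAR")  # what sa_column_kwargs triggers
except ArgumentError as e:
    print(e)
```

As a possible workaround (an assumption on my part, not verified against this setup), passing the whole column object instead of `sa_column_kwargs` usually avoids the collision, e.g. `name: str = Field(sa_column=Column(Unicode, nullable=False))`.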
### Operating System
Linux
### Operating System Details
in docker,
image: python:3.9-buster
### SQLModel Version
0.0.4
### Python Version
3.9.8
### Additional Context
_No response_ | closed | 2021-11-16T01:27:20Z | 2023-10-29T08:13:23Z | https://github.com/fastapi/sqlmodel/issues/156 | [
"question"
] | phi-friday | 5 |
PeterL1n/RobustVideoMatting | computer-vision | 257 | Possible to use from within Nuke, as with BackgroundMatting? | Is this project yet possible to use from within Nuke, as with BackgroundMatting?
I came here from https://community.foundry.com/cattery where I found Background Matting.
| open | 2023-11-06T12:52:29Z | 2024-02-05T19:53:45Z | https://github.com/PeterL1n/RobustVideoMatting/issues/257 | [] | haakonstorm | 1 |