| repo_name (string, 9–75 chars) | topic (30 classes) | issue_number (int64, 1–203k) | title (string, 1–976 chars) | body (string, 0–254k chars) | state (2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38–105 chars) | labels (list, 0–9 items) | user_login (string, 1–39 chars) | comments_count (int64, 0–452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
deezer/spleeter | tensorflow | 67 | [Bug] Typo In README | Under the **Quick start** subheading there is a typo in `git clone https://github.com/Deezer/spleeter`. Instead of the capital **D**, it should be a lowercase **d**.
`git clone https://github.com/deezer/spleeter` -- this is the correct link
Fixing this will allow anyone to clone the repository effortlessly. | closed | 2019-11-09T17:08:35Z | 2019-11-11T19:12:16Z | https://github.com/deezer/spleeter/issues/67 | [
"bug",
"invalid",
"wontfix"
] | rohit-yadav | 3 |
wkentaro/labelme | computer-vision | 793 | How to generate a single-channel color mask image? | Hello! I want to know how labelme generates a single-channel color image. Recently I have been trying to generate mask images as ground truth automatically, but whenever I convert a three-channel RGB image into a single-channel image I get a grayscale image, not a single-channel color image like labelme produces. This confuses me. Do you know how to deal with this? Thank you in advance. | closed | 2020-10-26T15:13:38Z | 2022-06-25T04:53:02Z | https://github.com/wkentaro/labelme/issues/793 | [] | lucafei | 1 |
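The single-channel color images the report refers to are typically indexed, palette-mode ('P') PNGs: each pixel stores a class id, and a palette maps ids to colors. A sketch of the PASCAL-VOC-style palette often used for such masks (pure Python; attaching it to an image would go through something like PIL's `Image.putpalette`, mentioned here only as a pointer, not verified against labelme's code):

```python
# PASCAL-VOC-style label palette: class id k -> an (R, G, B) color built
# by spreading k's bits across the channels' high bits.
def voc_colormap(n=256):
    def bitget(value, idx):
        return (value >> idx) & 1

    cmap = []
    for k in range(n):
        r = g = b = 0
        c = k
        for j in range(8):
            r |= bitget(c, 0) << (7 - j)
            g |= bitget(c, 1) << (7 - j)
            b |= bitget(c, 2) << (7 - j)
            c >>= 3
        cmap.append((r, g, b))
    return cmap

palette = voc_colormap()
assert palette[0] == (0, 0, 0)    # background stays black
assert palette[1] == (128, 0, 0)  # class 1 renders dark red
# A 'P'-mode PIL image would take this palette flattened:
# img.putpalette([v for rgb in palette for v in rgb])
```

Because the color lives in the palette rather than in the pixels, the file stays one channel while still displaying in color.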
LibreTranslate/LibreTranslate | api | 240 | I can't seem to install the models, I suspect some kind of timeout. | Running `./LibreTranslate/install_models.py`,
I get:
```sh
Updating language models
Found 54 models
Downloading Arabic → English (1.0) ...
Traceback (most recent call last):
File "/home/libretranslate/./LibreTranslate/install_models.py", line 6, in <module>
check_and_install_models(force=True)
File "/home/libretranslate/LibreTranslate/app/init.py", line 54, in check_and_install_models
download_path = available_package.download()
File "/home/libretranslate/.local/lib/python3.9/site-packages/argostranslate/package.py", line 198, in download
data = response.read()
File "/usr/lib/python3.9/http/client.py", line 476, in read
s = self._safe_read(self.length)
File "/usr/lib/python3.9/http/client.py", line 628, in _safe_read
raise IncompleteRead(b''.join(s), amt)
http.client.IncompleteRead: IncompleteRead(50763840 bytes read, 31105830 more expected)
```
If this is a timeout, how would I go about increasing it? | closed | 2022-04-05T16:00:38Z | 2022-04-09T00:07:21Z | https://github.com/LibreTranslate/LibreTranslate/issues/240 | [] | Tarcaxoxide | 17 |
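One plausible workaround for the truncated download above: `IncompleteRead` means the connection dropped mid-transfer rather than a classic timeout, so retrying the fetch often succeeds. A generic sketch (`fetch` is a stand-in callable, not argostranslate's real API):

```python
# Retry wrapper for flaky downloads; IncompleteRead is the same exception
# raised in the traceback above.
import http.client
import time

def download_with_retries(fetch, attempts=3, delay=1.0):
    for attempt in range(1, attempts + 1):
        try:
            return fetch()
        except http.client.IncompleteRead:
            if attempt == attempts:
                raise
            time.sleep(delay)

# Simulated flaky server: fails twice, then succeeds.
calls = []
def flaky_fetch():
    calls.append(1)
    if len(calls) < 3:
        raise http.client.IncompleteRead(b"partial data")
    return b"full payload"

data = download_with_retries(flaky_fetch, attempts=3, delay=0)
assert data == b"full payload" and len(calls) == 3
```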
saulpw/visidata | pandas | 1,789 | [readme] Add a link to @VisiData@fosstodon.org | On Twitter, I see VisiData has joined the Fediverse. I think the README should also link to the new address.
It looks like Fosstodon already has 26 followers.
Follow up to https://github.com/saulpw/visidata/discussions/1614 | closed | 2023-03-05T18:32:51Z | 2023-03-06T04:17:21Z | https://github.com/saulpw/visidata/issues/1789 | [
"wishlist",
"wish granted"
] | frosencrantz | 1 |
CTFd/CTFd | flask | 1,939 | Be able to export scoreboard to CTFtime format | After CTFd moved away from the CTFtime integration, they removed the capability of having a live-updating scoreboard on CTFtime. Is it possible to export the scoreboard to the CTFtime JSON format after the event has ended to still have the scoreboard there? If not, can this be added? | closed | 2021-07-06T20:48:02Z | 2021-07-11T03:00:31Z | https://github.com/CTFd/CTFd/issues/1939 | [] | 0xmmalik | 2 |
jazzband/django-oauth-toolkit | django | 874 | question: How do I get user info from o/callback? | I am just getting started with this toolkit. I have been able to call o/code and o/callback to get a response with these fields:
* access_token
* expires_in
* token_type
* scope
* refresh_token
I would like to extend the response to include things from my User model:
* id
* full_name
What is the recommended way to do that? I tried finding this in the docs, so apologies in advance if this is already documented, and doc links are welcome, of course.
| closed | 2020-10-02T12:30:42Z | 2020-10-22T15:20:18Z | https://github.com/jazzband/django-oauth-toolkit/issues/874 | [
"question"
] | showell | 4 |
yeongpin/cursor-free-vip | automation | 231 | Bug: Unable to Pass Verification on Cursor Website | When the script attempts to create an account on the Cursor website, it fails at the verification step. After completing the first step, the website displays an error message:
"Can't verify the user is human."
As a result, the process crashes, and account creation cannot proceed.
OS: Windows
Shell: PowerShell | open | 2025-03-14T21:04:28Z | 2025-03-16T11:57:25Z | https://github.com/yeongpin/cursor-free-vip/issues/231 | [] | Amir-Mohamad | 6 |
lanpa/tensorboardX | numpy | 323 | tensorboardX add_graph() ImportError: cannot import name 'OperatorExportTypes' | When I used `add_graph()`, I got this error:
```
Traceback (most recent call last):
File "F:/PycharmProjects/Pytorch/cbam/show_model.py", line 17, in <module>
writer.add_graph(model,(inputs,))
File "E:\SoftWare\Anaconda3\lib\site-packages\tensorboardX\writer.py", line 566, in add_graph
self.file_writer.add_graph(graph(model, input_to_model, verbose))
File "E:\SoftWare\Anaconda3\lib\site-packages\tensorboardX\pytorch_graph.py", line 171, in graph
from torch.onnx.utils import OperatorExportTypes
ImportError: cannot import name 'OperatorExportTypes'
```
My PyTorch version is 0.4.0, and my tensorboardX version is 1.6.
How can I deal with it? | closed | 2019-01-10T08:41:56Z | 2019-01-11T05:37:26Z | https://github.com/lanpa/tensorboardX/issues/323 | [] | Carl-Lei | 2 |
sinaptik-ai/pandas-ai | data-visualization | 1,333 | Support for Figure object output type | ### 🚀 The feature
I would love to have the ability to set the output type to a plotly or matplotlib figure, instead of saving plots to PNG and returning filepaths, where the usefulness is quite limited.
### Motivation, pitch
I recently started using pandasai for building custom data analysis apps and I like it quite a lot so far. I was wondering why it is limited to the four output datatypes and even more why the (cumbersome) way of saving images to disk and returning a filepath has been chosen. Maybe it has a security related reason or it is due to the client-server architecture of pandasai? Instead returning objects (like already implemented for the dataframe) would open much more potential, especially regarding plots and figures.
I have already tinkered with modifying the `output_type_template.tmpl` and `output_validator.py` in order to make pandasai return figure objects. However I do not really know about potential problems/implications of this and thus am proposing this here as a feature request, since my "hacky" implementation is probably not how it should be implemented.
Here you can see the resulting prompt used and the generated code, which works fine for now in a standalone app.
**Prompt used:**
```
<dataframe>
dfs[0]:150x5
Sepal_Length,Sepal_Width,Petal_Length,Petal_Width,Class
7.2,3.4,6.4,0.3,Iris-setosa
4.5,4.1,3.5,1.7,Iris-virginica
6.0,2.6,5.9,2.5,Iris-versicolor
</dataframe>
Update this initial code:
"""python
# TODO: import the required dependencies
import pandas as pd
# Write code here
# Declare result var:
type (must be "figure"), value must be a matplotlib.figure or plotly.graph_objects.Figure. Example: { "type": "figure", "value": go.Figure(...) }
"""
### QUERY
Plot the sepal length and width of the data and color points by class
Variable `dfs: list[pd.DataFrame]` is already declared.
At the end, declare "result" variable as a dictionary of type and value.
If you are asked to plot a chart, use "plotly" for charts, save as png.
Generate python code and return full updated code:
```
**Resulting Code:**
```python
import plotly.express as px

df = dfs[0]
fig = px.scatter(df, x='Sepal_Length', y='Sepal_Width', color='Class', title='Sepal Length vs Sepal Width', labels={'Sepal_Length': 'Sepal Length', 'Sepal_Width': 'Sepal Width'})
result = {'type': 'figure', 'value': fig}
```
### Alternatives
I know its also possible to convert plotly figures from/to json. So maybe this could be another option to return (or potentially also save) the figure as json instead.
### Additional context
**Final Result in Chatbot App:**

| open | 2024-08-21T12:52:46Z | 2025-03-05T10:24:01Z | https://github.com/sinaptik-ai/pandas-ai/issues/1333 | [
"enhancement"
] | Blubbaa | 3 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 254 | [BUG] Briefly and clearly describe the problem | ***On which platform did the error occur?***
Douyin
***On which endpoint did the error occur?***
api
It works fine on my local machine, but after deploying to Ubuntu I get:
TypeError: Cannot read property 'JS_MD5_NO_COMMON_JS' of null
| closed | 2023-08-26T14:20:16Z | 2023-08-26T23:01:42Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/254 | [
"BUG"
] | xhacker5000 | 2 |
iperov/DeepFaceLab | machine-learning | 641 | Feature request: average milliseconds/iteration and/or iteration/second. | THIS IS NOT TECH SUPPORT FOR NEWBIE FAKERS
POST ONLY ISSUES RELATED TO BUGS OR CODE
## Expected behavior
I want to measure actual performance when modifying parameters and/or hardware. The program should show the average milliseconds spent per iteration over the last 10, 30, or 60 seconds.
## Actual behavior
The program shows the milliseconds spent in the current iteration only.
## Steps to reproduce
Just start training any model with any settings.
## Other relevant information
With this feature, a standard benchmark test could be run that would reflect performance changes from modified training settings and/or hardware.
THANKS!!! | open | 2020-02-28T11:25:23Z | 2023-06-08T21:05:48Z | https://github.com/iperov/DeepFaceLab/issues/641 | [] | tokafondo | 3 |
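The requested metric can be sketched as a sliding-window average; everything here (class name, window length, API) is illustrative rather than DeepFaceLab's actual code:

```python
# Average iteration time over a sliding time window: old samples fall out
# of the deque once they are older than the window.
from collections import deque

class IterTimer:
    def __init__(self, window_seconds=30.0):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, iteration_ms)

    def record(self, now, iteration_ms):
        self.samples.append((now, iteration_ms))
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()

    def average_ms(self):
        if not self.samples:
            return 0.0
        return sum(ms for _, ms in self.samples) / len(self.samples)

timer = IterTimer(window_seconds=30.0)
for t, ms in [(0, 100), (10, 120), (50, 80)]:
    timer.record(t, ms)
# At t=50 the samples recorded at t=0 and t=10 are older than 30s and
# are dropped, so only the last iteration contributes to the average.
```

In the training loop, `record()` would be called once per iteration with the wall-clock time and the measured milliseconds.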
dpgaspar/Flask-AppBuilder | rest-api | 1,561 | AUTH_ROLES_MAPPING doesn't work in LDAP config | Flask-Appbuilder version: 3.1.1
pip freeze output:
aiohttp==3.7.2
alembic==1.4.3
amqp==2.6.1
apispec==3.3.2
async-timeout==3.0.1
attrs==20.2.0
Babel==2.8.0
backoff==1.10.0
billiard==3.6.3.0
bleach==3.2.1
boto3==1.16.10
botocore==1.19.10
Brotli==1.0.9
cached-property==1.5.2
cachelib==0.1.1
certifi==2020.6.20
cffi==1.14.3
chardet==3.0.4
click==7.1.2
colorama==0.4.4
contextlib2==0.6.0.post1
convertdate==2.3.0
cron-descriptor==1.2.24
croniter==0.3.36
cryptography==3.2.1
decorator==4.4.2
defusedxml==0.6.0
dnspython==2.0.0
email-validator==1.1.1
et-xmlfile==1.0.1
Flask==1.1.2
Flask-AppBuilder==3.1.1
Flask-Babel==1.0.0
Flask-Caching==1.9.0
Flask-Compress==1.8.0
Flask-Cors==3.0.9
Flask-JWT-Extended==3.24.1
Flask-Login==0.4.1
Flask-Migrate==2.5.3
Flask-OpenID==1.2.5
Flask-SQLAlchemy==2.4.4
flask-talisman==0.7.0
Flask-WTF==0.14.3
future==0.18.2
geographiclib==1.50
geopy==2.0.0
gunicorn==20.0.4
holidays==0.10.3
humanize==3.1.0
idna==2.10
ijson==3.1.2.post0
importlib-metadata==2.1.1
iso8601==0.1.13
isodate==0.6.0
itsdangerous==1.1.0
jdcal==1.4.1
Jinja2==2.11.2
jmespath==0.10.0
jsonlines==1.2.0
jsonschema==3.2.0
kombu==4.6.11
korean-lunar-calendar==0.2.1
linear-tsv==1.1.0
Mako==1.1.3
Markdown==3.3.3
MarkupSafe==1.1.1
marshmallow==3.9.0
marshmallow-enum==1.5.1
marshmallow-sqlalchemy==0.23.1
msgpack==1.0.0
multidict==5.0.0
natsort==7.0.1
numpy==1.19.4
openpyxl==3.0.5
packaging==20.4
pandas==1.1.4
parsedatetime==2.6
pathlib2==2.3.5
pgsanity==0.2.9
Pillow==7.2.0
polyline==1.4.0
prison==0.1.3
py==1.9.0
pyarrow==1.0.1
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
PyJWT==1.7.1
PyMeeus==0.3.7
pyparsing==2.4.7
pyrsistent==0.16.1
python-dateutil==2.8.1
python-dotenv==0.15.0
python-editor==1.0.4
python-geohash==0.8.5
python-ldap==3.3.1
python3-openid==3.2.0
pytz==2020.4
PyYAML==5.3.1
redis==3.5.3
requests==2.24.0
retry==0.9.2
rfc3986==1.4.0
s3transfer==0.3.3
sasl==0.2.1
simplejson==3.17.2
six==1.15.0
SQLAlchemy==1.3.20
SQLAlchemy-Utils==0.36.8
sqlparse==0.3.0
tableschema==1.20.0
tabulator==1.52.5
thrift==0.13.0
thrift-sasl==0.4.2
typing-extensions==3.7.4.3
unicodecsv==0.14.1
urllib3==1.25.11
vine==1.3.0
webencodings==0.5.1
Werkzeug==1.0.1
WTForms==2.3.3
WTForms-JSON==0.3.3
xlrd==1.2.0
yarl==1.6.2
zipp==3.4.0
### Describe the expected results
AUTH_ROLES_MAPPING doesn't work in LDAP config
AUTH_TYPE = AUTH_LDAP
AUTH_USER_REGISTRATION = True
AUTH_LDAP_SERVER = "ldap://x.x.x.x:389"
AUTH_LDAP_BIND_USER = "uid=test,cn=users,cn=accounts,dc=example,dc=com"
AUTH_LDAP_BIND_PASSWORD = "pass"
AUTH_LDAP_SEARCH = "dc=example,dc=com"
AUTH_LDAP_UID_FIELD = "uid"
AUTH_USER_REGISTRATION_ROLE = "Admin"
AUTH_LDAP_USE_TLS = False
AUTH_ROLES_MAPPING = {
"cn=users,cn=accounts,dc=example,dc=com": ["Gamma"],
"cn=testers,cn=groups,cn=accounts,dc=example,dc=com": ["Alpha"],
}
AUTH_LDAP_EMAIL_FIELD = "mail"
AUTH_LDAP_GROUP_FIELD = "memberOf"
Also does not receive mail from LDAP
I tried to test the connection via the python-ldap library. If I do `search_s("dc=example,dc=com", ldap.SCOPE_SUBTREE, 'uid=test', ['memberOf'])` without `bind_s("user", "pass")`, then the LDAP server does not return the 'memberOf' and 'mail' fields. If I do the search after `bind_s`, then I get the required data.
Help, what could be the reason? | closed | 2021-02-09T14:34:42Z | 2021-02-10T09:49:32Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1561 | [] | mikle7771 | 1 |
iMerica/dj-rest-auth | rest-api | 22 | The generated password reset token is not long enough and always has a fixed length | I have added the password_reset_confirm URL to my urls.py as mentioned in the docs, inspired by the demo project
```python
re_path(r'^password-reset/confirm/(?P<uidb64>[0-9A-Za-z_\-]+)/(?P<token>[0-9A-Za-z]{1,13}-[0-9A-Za-z]{1,20})/$',
TemplateView.as_view(),
name='password_reset_confirm'),
```
Everything is working just fine, but I have a question regarding the token: I expected it to be longer, like the one from the registration endpoint, but I always get something like this:
`/password-reset/confirm/Mw/5f8-8a381855ed4a6cadb9fe/`
So, the first part always has a length of 3 characters and the second one always has a length of 20 characters. What is the point of giving it a range if it always has a fixed length? Also, why is it not longer? Am I missing something?
| closed | 2020-03-30T20:25:16Z | 2020-04-01T07:22:15Z | https://github.com/iMerica/dj-rest-auth/issues/22 | [] | mohmyo | 6 |
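For context on why the lengths look fixed, here is a sketch based on Django's password-reset token format around that release (an inference from Django's implementation, not from dj-rest-auth itself, so worth verifying against your version): the part before the dash is `int_to_base36(days since 2001-01-01)`, and the part after it is a fixed-length truncated HMAC of the user's state. The regex ranges exist because the base-36 timestamp grows over time. (The "Mw" path segment is simply the urlsafe-base64 of the user's pk, "3".)

```python
# Reproducing the reporter's token prefix "5f8" from the issue's date.
from datetime import date

def int_to_base36(i):
    # Simplified mirror of django.utils.http.int_to_base36.
    chars = "0123456789abcdefghijklmnopqrstuvwxyz"
    if i == 0:
        return "0"
    out = ""
    while i:
        i, rem = divmod(i, 36)
        out = chars[rem] + out
    return out

# Days between Django's 2001-01-01 epoch and the issue's creation date:
days = (date(2020, 3, 30) - date(2001, 1, 1)).days
assert int_to_base36(days) == "5f8"  # matches the token in the report
```

So the token is not random padding of arbitrary length; its security comes from the HMAC half, which is fixed-length by construction.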
davidsandberg/facenet | tensorflow | 694 | What is the dummy model and how do I use it? | I see a new model named 'dummy model' was committed, but I don't know how to use it or what that model is.
By the way, will other compressed models, such as MobileNet or ShuffleNet, be added in the future? And how can such models be used to train the network? | open | 2018-04-11T13:57:31Z | 2018-04-11T16:01:42Z | https://github.com/davidsandberg/facenet/issues/694 | [] | boyliwensheng | 1 |
healthchecks/healthchecks | django | 633 | API: Return "status" and "started" as separate fields | The web UI displays "up + started" and "down + started" states by showing an animated progress spinner under the "up" or "down" icon:

In comparison, the API currently cannot express the "up + started" and "down + started" states. The API returns a single "status" field with the possible values: `new`, `started`, `up`, `grace`, `down`, and `paused`. Therefore custom dashboards are disadvantaged compared to the standard web UI (see https://github.com/healthchecks/dashboard/issues/8).
Update API to return two fields:
* status = new|up|grace|down|paused
* started = true|false
This would be a backwards-incompatible change, so it would probably have to go under /api/**v2**/.
| closed | 2022-04-08T08:21:17Z | 2022-12-20T08:30:46Z | https://github.com/healthchecks/healthchecks/issues/633 | [] | cuu508 | 0 |
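To make the proposal concrete, a small sketch of how the split fields would relate to the current single value (field names follow the proposal; the collapse rule is an assumption about v1 semantics, where a running check reports "started" and masks up/down):

```python
# Fold the proposed v2 (status, started) pair back into a v1 status
# value, illustrating exactly which information v1 throws away.
def v1_status(status, started):
    return "started" if started and status in ("up", "down") else status

assert v1_status("up", True) == "started"    # v1 loses the up/down detail
assert v1_status("down", True) == "started"  # ...for both directions
assert v1_status("grace", False) == "grace"  # other states pass through
```

A custom dashboard consuming v2 could then render the spinner from `started` while still coloring the icon from `status`, matching the web UI.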
nltk/nltk | nlp | 3,276 | nltk unsafe deserialization vulnerability |
NLTK through 3.8.1 allows remote code execution if untrusted packages have pickled Python code, and the integrated data package download functionality is used. This affects, for example, averaged_perceptron_tagger and punkt.
| closed | 2024-07-08T07:20:04Z | 2024-08-21T07:19:40Z | https://github.com/nltk/nltk/issues/3276 | [
"critical"
] | JohnJyong | 2 |
NullArray/AutoSploit | automation | 1,122 | Unhandled Exception (255100047) | Autosploit version: `3.1.2`
OS information: `Linux-4.19.0-kali3-686-pae-i686-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py -c -q *******`
Error message: `'results'`
Error traceback:
```
Traceback (most recent call):
File "/root/Autosploit/autosploit/main.py", line 109, in main
AutoSploitParser().single_run_args(opts, loaded_tokens, loaded_exploits)
File "/root/Autosploit/lib/cmdline/cmd.py", line 191, in single_run_args
save_mode=search_save_mode
File "/root/Autosploit/api_calls/censys.py", line 45, in search
raise AutoSploitAPIConnectionError(str(e))
errors: 'results'
```
Metasploit launched: `False`
| closed | 2019-06-30T03:53:02Z | 2019-07-24T10:51:02Z | https://github.com/NullArray/AutoSploit/issues/1122 | [] | AutosploitReporter | 0 |
zappa/Zappa | flask | 527 | [Migrated] An error occurred (IllegalLocationConstraintException) during zappa deploy | Originally from: https://github.com/Miserlou/Zappa/issues/1398 by [mcmonster](https://github.com/mcmonster)
See https://github.com/Miserlou/Zappa/issues/569
I also experienced this issue when attempting to deploy following the out-of-the-box README commands. Zappa's deploy procedure is obfuscating the cause of the issue, namely that my bucket name is not unique. It would be nice if Zappa would suggest this as a possible issue source when the deploy command is run. | closed | 2021-02-20T09:43:57Z | 2023-08-17T01:08:28Z | https://github.com/zappa/Zappa/issues/527 | [
"bug",
"aws"
] | jneves | 1 |
biolab/orange3 | data-visualization | 6,250 | File widget: URL is lost when saving and reopening OWS file |
**What's wrong?**
If I specify a (specific type of?) URL referring to a data file in the File widget, then save the workflow and reopen it, the URL field is empty and the widget produces an error "No file selected"
**How can we reproduce the problem?**
Place a File widget in the canvas, double-click on it, select URL and enter https://drive.google.com/uc?export=download&id=1dyrDfu2yow5ydnbOkgiJW_NUMPnKEpu2 and press Reload. The preview will now correctly show the variables in the file. Now close the widget, save the workflow and reopen it. The URL has gone and the widget produces an error "No file selected".
**What's your environment?**
<!-- To find your Orange version, see "Help → About → Version" or `Orange.version.full_version` in code -->
- Operating system: Mac OS 13.0.1 on Silicon
- Orange version: 3.33.0
- How you installed Orange: from DMG
| closed | 2022-12-09T11:34:56Z | 2023-01-20T07:41:15Z | https://github.com/biolab/orange3/issues/6250 | [
"bug"
] | wvdvegte | 1 |
jacobgil/pytorch-grad-cam | computer-vision | 52 | I cannot understand why GuidedBackpropReLU is applied recursively. | I tried a non-recursive guided backprop version like this:
```
for idx, module in module_top._modules.items(): => for name, module in module_top.named_modules():
recursive_relu_apply(module) => #recursive_relu_apply(module)
module_top._modules[idx] = GuidedBackpropReLU.apply => module = GuidedBackpropReLU.apply
```
but the result was different.
As far as I know, using `named_modules()` (or `modules()`) we can access all the layers in a module. Then why can proper results only be obtained in a recursive manner? Can someone explain this?
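A likely explanation, offered as an assumption rather than something verified against the repo: `named_modules()` yields the module objects, and `module = GuidedBackpropReLU.apply` only rebinds the local loop variable, leaving the model's ReLUs untouched, whereas the recursive version writes back into each parent's `_modules` dict. A torch-free sketch of the difference:

```python
# `Node` stands in for nn.Module and the string "relu" for an nn.ReLU
# instance; nothing here is pytorch-grad-cam's actual code.
class Node:
    def __init__(self, **children):
        self._modules = dict(children)

model = Node(block=Node(act="relu"))

# Non-recursive attempt: rebinding the loop variable changes nothing,
# because `module` is just a local name pointing at the child.
for name, module in model._modules["block"]._modules.items():
    module = "guided_relu"
assert model._modules["block"]._modules["act"] == "relu"  # unchanged

# Assigning through the owning container does mutate the model, which is
# what the recursive version effectively does at every nesting level.
for name, module in list(model._modules["block"]._modules.items()):
    if module == "relu":
        model._modules["block"]._modules[name] = "guided_relu"
assert model._modules["block"]._modules["act"] == "guided_relu"
```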
Thank you. | closed | 2021-01-02T06:22:26Z | 2021-01-05T01:40:41Z | https://github.com/jacobgil/pytorch-grad-cam/issues/52 | [] | hcw-00 | 1 |
cvat-ai/cvat | pytorch | 8,744 | Unable to export data to Shared Path | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
Follow the steps as defined in https://docs.cvat.ai/docs/administration/basics/installation/#share-path
### Expected Behavior
Being able to see the path listed as a Cloud Storage item when exporting a job/dataset.
### Possible Solution
I would like to be able to export my annotations to the path specified in the `cvat_share` volume. Although I am able to import images from this shared volume, I am unable to export to this path.
### Context
_No response_
### Environment
```Markdown
- Ubuntu 20.04
- Running local instance of CVAT via docker
- CVAT v2.14.4
```
| closed | 2024-11-26T12:28:26Z | 2024-11-26T14:57:20Z | https://github.com/cvat-ai/cvat/issues/8744 | [
"bug"
] | scotgopal | 1 |
ultralytics/yolov5 | machine-learning | 12,798 | How to load Custom Models in VsCode on windows | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question

I already read this one https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading/#custom-models
How to fix it?
Thank you
### Additional
_No response_ | closed | 2024-03-07T16:47:26Z | 2024-04-18T00:20:36Z | https://github.com/ultralytics/yolov5/issues/12798 | [
"question",
"Stale"
] | Waariss | 2 |
wkentaro/labelme | deep-learning | 443 | I didn't find an Annotations folder after using labelme2voc.py | Kentaro, I thought that when I used labelme2voc.py it would generate VOC XML format files. But I didn't find any; I only got .npy files. According to #241, I thought this had been added.
To be honest, I am a newbie at labeling this stuff, so maybe there is something wrong on my end? I will be very grateful for your reply.
Thanks... Have a good day.
| closed | 2019-07-06T00:27:07Z | 2019-07-06T04:03:20Z | https://github.com/wkentaro/labelme/issues/443 | [] | EricksonLu | 1 |
influxdata/influxdb-client-python | jupyter | 305 | Support for numpy.int64 as input type | When `Point` objects are serialized in `point.py`, `_append_fields(fields)` type-checks the values. A `numpy.int64` value fails this check, so serialization fails. This can be bothersome when values are retrieved from a `pandas.DataFrame` and put into a `Point` instance.
This is the current implementation:
```
def _append_fields(fields):
_return = []
for field, value in sorted(iteritems(fields)):
if value is None:
continue
if isinstance(value, float) or isinstance(value, Decimal):
if not math.isfinite(value):
continue
s = str(value)
# It's common to represent whole numbers as floats
# and the trailing ".0" that Python produces is unnecessary
# in line-protocol, inconsistent with other line-protocol encoders,
# and takes more space than needed, so trim it off.
if s.endswith('.0'):
s = s[:-2]
_return.append(f'{_escape_key(field)}={s}')
elif isinstance(value, int) and not isinstance(value, bool):
_return.append(f'{_escape_key(field)}={str(value)}i')
elif isinstance(value, bool):
_return.append(f'{_escape_key(field)}={str(value).lower()}')
elif isinstance(value, str):
_return.append(f'{_escape_key(field)}="{_escape_string(value)}"')
else:
raise ValueError(f'Type: "{type(value)}" of field: "{field}" is not supported.')
return f"{','.join(_return)}"
```
I would suggest adding this:
```
elif isinstance(value, numpy.int64):
_return.append(f'{_escape_key(field)}={str(value)}i')
``` | closed | 2021-08-13T08:52:59Z | 2021-08-16T11:38:08Z | https://github.com/influxdata/influxdb-client-python/issues/305 | [] | wiseboar-9 | 1 |
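Until something like the suggested branch lands, a caller-side workaround is to coerce numpy scalars before they reach the `Point`; this sketch relies on the `.item()` method that numpy scalars provide (`FakeInt64` is a hypothetical stand-in so the example runs without numpy installed):

```python
# Coerce numpy scalar types (numpy.int64, numpy.float64, ...) to builtin
# Python types before building the Point. numpy scalars expose .item(),
# which returns the closest builtin type; builtins pass through unchanged.
def coerce(value):
    if hasattr(value, "item") and not isinstance(value, (bool, int, float, str, bytes)):
        return value.item()  # numpy scalar -> builtin int/float
    return value

class FakeInt64:  # stand-in for numpy.int64 in this sketch
    def item(self):
        return 7

assert coerce(FakeInt64()) == 7  # would-be numpy.int64 -> plain int
assert coerce(3) == 3            # builtins are untouched
```

Applied to the DataFrame case, one would call `coerce()` on each cell value when assembling field dictionaries.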
pydantic/logfire | fastapi | 310 | Add the `logfire.instrument_mysql()` | ### Description
[OpenTelemetry MySQL Instrumentation](https://opentelemetry-python-contrib.readthedocs.io/en/latest/instrumentation/mysql/mysql.html#module-opentelemetry.instrumentation.mysql) | closed | 2024-07-12T11:39:14Z | 2024-08-05T09:20:24Z | https://github.com/pydantic/logfire/issues/310 | [
"Feature Request",
"P3"
] | Kludex | 2 |
jupyter-book/jupyter-book | jupyter | 2,068 | Adding parse.myst_enable_extensions to _config.yml breaks callouts | ### Describe the bug
**context**
I added `parse.myst_enable_extensions.substitution` to `_config.yml` while fiddling with #2067 and that caused callout styling to break. I swapped `substitution` with `html_image` and then `foo` but it did not help.
**expectation**
I expected callouts to be unaffected by enabling myst extensions.
**bug**
But instead the callouts were not rendered:

Before enabling extensions:

```console
$ jupyter-book build .
Running Jupyter-Book v0.15.1
Source Folder: /Users/rylo/proj/courses/stat999-jupyter-book
Config Path: /Users/rylo/proj/courses/stat999-jupyter-book/_config.yml
Output Path: /Users/rylo/proj/courses/stat999-jupyter-book/_build/html
Running Sphinx v5.0.2
loading pickled environment... done
myst v0.18.1: MdParserConfig(commonmark_only=False, gfm_only=False, enable_extensions=['html_image', 'substitution'], disable_syntax=[], all_links_external=False, url_schemes=['mailto', 'http', 'https'], ref_domains=None, highlight_code_blocks=True, number_code_blocks=[], title_to_header=False, heading_anchors=None, heading_slug_func=None, footnote_transition=True, words_per_minute=200, sub_delimiters=('{', '}'), linkify_fuzzy_links=True, dmath_allow_labels=True, dmath_allow_space=True, dmath_allow_digits=True, dmath_double_inline=False, update_mathjax=True, mathjax_classes='tex2jax_process|mathjax_process|math|output_area')
myst-nb v0.17.2: NbParserConfig(custom_formats={}, metadata_key='mystnb', cell_metadata_key='mystnb', kernel_rgx_aliases={}, execution_mode='force', execution_cache_path='', execution_excludepatterns=[], execution_timeout=30, execution_in_temp=False, execution_allow_errors=False, execution_raise_on_error=False, execution_show_tb=False, merge_streams=False, render_plugin='default', remove_code_source=False, remove_code_outputs=False, code_prompt_show='Show code cell {type}', code_prompt_hide='Hide code cell {type}', number_source_lines=False, output_stderr='show', render_text_lexer='myst-ansi', render_error_lexer='ipythontb', render_image_options={}, render_figure_options={}, render_markdown_format='commonmark', output_folder='build', append_css=True, metadata_to_fm=False)
Using jupyter-cache at: /Users/rylo/proj/courses/stat999-jupyter-book/_build/.jupyter_cache
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 0 source files that are out of date
updating environment: [config changed ('myst_enable_extensions')] 4 added, 0 changed, 0 removed
reading sources... [100%] syllabus
looking for now-outdated files... none found
pickling environment... done
checking consistency... /Users/rylo/proj/courses/stat999-jupyter-book/about.md: WARNING: document isn't included in any toctree
done
preparing documents... done
writing output... [100%] syllabus
/Users/rylo/proj/courses/stat999-jupyter-book/index.md:53: WARNING: 'myst' reference target not found: link
/Users/rylo/proj/courses/stat999-jupyter-book/index.md:54: WARNING: 'myst' reference target not found: link2
/Users/rylo/proj/courses/stat999-jupyter-book/syllabus.md:9: WARNING: 'myst' reference target not found: #about-data-100
/Users/rylo/proj/courses/stat999-jupyter-book/syllabus.md:10: WARNING: 'myst' reference target not found: #goals
/Users/rylo/proj/courses/stat999-jupyter-book/syllabus.md:11: WARNING: 'myst' reference target not found: #prerequisites
generating indices... genindex done
writing additional pages... search done
copying static files... done
copying extra files... done
dumping search index in English (code: en)... done
dumping object inventory... done
build succeeded, 6 warnings.
The HTML pages are in _build/html.
===============================================================================
Finished generating HTML for book.
Your book's HTML pages are here:
_build/html/
You can look at your book by opening this file in a browser:
_build/html/index.html
Or paste this line directly into your browser bar:
file:///Users/rylo/proj/courses/stat999-jupyter-book/_build/html/index.html
===============================================================================
```
### Reproduce the bug
1. Add this to _config.yml (can substitute any extensions)
```
parse:
myst_enable_extensions:
- html_image
- substitution
```
2. Rebuild book
3. See callout styling vanish
### List your environment
```
Jupyter Book : 0.15.1
External ToC : 0.3.1
MyST-Parser : 0.18.1
MyST-NB : 0.17.2
Sphinx Book Theme : 1.0.1
Jupyter-Cache : 0.6.1
NbClient : 0.7.2
```
macOS 13.6, python 3.11.0 | open | 2023-10-11T00:22:56Z | 2023-10-13T20:42:21Z | https://github.com/jupyter-book/jupyter-book/issues/2068 | [
"bug"
] | ryanlovett | 2 |
saulpw/visidata | pandas | 2,566 | Feature request: Option to customize refline color | closed | 2024-10-15T10:56:34Z | 2024-10-18T07:44:34Z | https://github.com/saulpw/visidata/issues/2566 | [
"wishlist",
"wish granted"
] | cool-RR | 4 | |
openapi-generators/openapi-python-client | rest-api | 103 | Refactor to better mirror OpenAPI terminology | **Is your feature request related to a problem? Please describe.**
Currently some of the terms / organization in this project don't line up very well with OpenAPI terminology.
Examples:
1. "Schema" in this project refers only to something declared in the "schemas" section of the OpenAPI document. In reality, the things we call "Response" and "Property" are also schemas in OpenAPI.
2. "Property" refers to some property within an including schema. Properties are themselves schemas, so should be named as such.
3. "Responses", rather than containing a schema, currently duplicate some of the behavior of schemas. This should be fixed.
**Describe the solution you'd like**
The best course of action, I think, is to separate the parsing of the OpenAPI document from the reorganization of data for client generation. Currently this is done all in one pass, mixing the logic and verbiage of the parsing with the client generation.
It may be possible to find a library which already exists which will parse the document into strongly typed objects. If not, I'll write one. The terminology of the parsed document should mirror the OpenAPI specification as closely as possible.
This project will then reorganize that already-parsed data into something that makes sense for client generation. Hopefully in the process some of the schema-logic can be simplified to make more interesting response types possible.
| closed | 2020-07-22T18:32:14Z | 2020-07-23T16:49:42Z | https://github.com/openapi-generators/openapi-python-client/issues/103 | [
"✨ enhancement"
] | dbanty | 1 |
sigmavirus24/github3.py | rest-api | 325 | Removing token attribute from Authorizations API responses | https://developer.github.com/changes/2014-12-08-removing-authorizations-token/
| closed | 2014-12-08T20:54:48Z | 2018-03-13T20:18:55Z | https://github.com/sigmavirus24/github3.py/issues/325 | [
"Deprecation Issue"
] | esacteksab | 0 |
GibbsConsulting/django-plotly-dash | plotly | 338 | Is it meant to run with dash-bio? | Hello,
Is it meant to run with dash-bio also? I cant get them to run together.
| open | 2021-05-19T19:29:05Z | 2021-05-20T13:07:41Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/338 | [
"question"
] | michalstepniewskiada | 1 |
plotly/dash | data-science | 2,953 | Error loading dependencies | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
I can reproduce this very easily; unfortunately the app is quite large, and I suspect this is due to something specific to the app. I have rolled back many commits and cannot identify when it started.
- replace the result of `pip list | grep dash` below
```
$ pip freeze | grep dash
dash==2.17.1
dash-bootstrap-components==1.5.0
dash-bootstrap-templates==1.1.2
dash-core-components==2.0.0
dash-extensions==1.0.14
dash-html-components==2.0.0
dash-iconify==0.1.2
dash-table==5.0.0
dash_mantine_components==0.14.4
```
- if frontend related, tell us your Browser, Version and OS
- OS: macOS
- Browser: Chrome
- Version: 127.0.6533.120 (Official Build) (arm64)
**Describe the bug**
When running in development mode locally, the first start of Dash returns "Error loading dependencies". A refresh allows the app to load.
**Expected behavior**
Better debug-ability as it is incredibly difficult to identify what is causing this. The problem disappears after the first hard page refresh. Can provide data if someone can provide direction on how best to determine the cause.
**Screenshots**
Console log
<img width="1708" alt="Screenshot 2024-08-20 at 10 17 13 AM" src="https://github.com/user-attachments/assets/6cc77088-3eb1-4933-850d-2e490a7f36ac">
Page only displaying "Error loading dependencies"
<img width="251" alt="Screenshot 2024-08-20 at 10 37 39 AM" src="https://github.com/user-attachments/assets/635c9439-a693-48f9-af8c-253d0c2a4660">
In further testing in incognito mode i get the following:
<img width="1298" alt="Screenshot 2024-08-20 at 10 49 11 AM" src="https://github.com/user-attachments/assets/aa4d94b0-4776-434d-906d-9cd37d8af77d">
| closed | 2024-08-20T15:44:50Z | 2024-08-21T16:16:15Z | https://github.com/plotly/dash/issues/2953 | [
"bug",
"P3"
] | micmizer | 2 |
AutoGPTQ/AutoGPTQ | nlp | 462 | Quantization config name | Is a small difference in default quantization configs expected between optimum.gptq and auto-gptq?
* optimum.gptq config filename: _quantization_config.json_ with full list of quantization arguments
* auto-gptq: _quantize_config.json_ with these arguments only: 'bits', 'group_size', 'damp_percent', 'desc_act', 'static_groups', 'sym', 'true_sequential', 'model_name_or_path', 'model_file_base_name'
Example:
```
# docs:https://huggingface.co/docs/optimum/llm_quantization/usage_guides/quantization
from transformers import AutoModelForCausalLM, AutoTokenizer
from optimum.gptq import GPTQQuantizer, load_quantized_model
import torch
model_name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
quantizer = GPTQQuantizer(bits=4, dataset="c4", block_name_to_quantize = "model.decoder.layers", model_seqlen = 2048)
quantized_model = quantizer.quantize_model(model, tokenizer)
save_folder = "./"
quantizer.save(model,save_folder)
```
Loading the saved model back with auto-gptq then results in an error:
```
from auto_gptq import AutoGPTQForCausalLM
device = "cuda:0"
model = AutoGPTQForCausalLM.from_quantized(save_folder, device=device)
```
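A possible bridge between the two defaults could be sketched like this (untested workaround idea; the key list comes from the auto-gptq arguments listed above, the filenames from each library's defaults):

```python
import json

# Subset of arguments that auto-gptq's quantize_config.json carries:
AUTOGPTQ_KEYS = ["bits", "group_size", "damp_percent", "desc_act",
                 "static_groups", "sym", "true_sequential",
                 "model_name_or_path", "model_file_base_name"]


def bridge_config(optimum_cfg):
    """Project optimum's full quantization config down to auto-gptq's fields."""
    return {key: optimum_cfg.get(key) for key in AUTOGPTQ_KEYS}


# Hypothetical usage, reading/writing the two default filenames:
# with open("quantization_config.json") as f:
#     bridged = bridge_config(json.load(f))
# with open("quantize_config.json", "w") as f:
#     json.dump(bridged, f)
```

Keys that optimum does not record end up as `None`, so the two libraries' defaults would still need to be reconciled by hand.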
versions:
```
optimum==1.14.1
auto-gptq==0.5.1
```
| closed | 2023-12-02T18:37:34Z | 2023-12-07T14:51:15Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/462 | [] | upunaprosk | 1 |
coqui-ai/TTS | deep-learning | 2,839 | [Bug] pip install TTS seems not to bring in required dependecies (mecab-python3 and unidic-lite) | ### Describe the bug
Trying the latest version of the TTS package on Google Colab, it complains about missing dependencies, namely mecab-python3 and unidic-lite.
After installing those packages I was able to initialize the application.
### To Reproduce
!pip install TTS
from TTS.api import TTS
### Expected behavior
Initialization is expected not to fail; `pip install` of the package should bring in all required dependencies.
### Logs
_No response_
### Environment
```shell
{
"CUDA": {
"GPU": [],
"available": false,
"version": "11.8"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.1+cu118",
"TTS": "0.16.1",
"numpy": "1.22.0"
},
"System": {
"OS": "Linux",
"architecture": [
"64bit",
"ELF"
],
"processor": "x86_64",
"python": "3.10.12",
"version": "#1 SMP Fri Jun 9 10:57:30 UTC 2023"
}
}
```
### Additional context
_No response_ | closed | 2023-08-05T05:52:43Z | 2023-08-14T09:11:41Z | https://github.com/coqui-ai/TTS/issues/2839 | [
"bug"
] | JackNova | 3 |
wkentaro/labelme | computer-vision | 1,565 | Opening labelme on Windows 11 shows 2025-03-21 16:39:00.876 | WARNING | labelme.app:__init__:799 - Default AI model is not found: %r | ### Provide environment information
Opening labelme shows `2025-03-21 16:39:00.876 | WARNING | labelme.app:__init__:799 - Default AI model is not found: %r`; when using AI annotation it crashes and shows:
2025-03-21 16:46:01.296 | DEBUG | labelme.__main__:write:23 - D:\anaconda\conda\envs\labelme\Lib\site-packages\gdown\cached_download.py:102: FutureWarning: md5 is deprecated in favor of hash. Please use hash='md5:xxx...' instead.
warnings.warn(
Unhandled Python exception
### What OS are you using?
Windows11
### Describe the Bug
。
### Expected Behavior
_No response_
### To Reproduce
_No response_ | open | 2025-03-21T08:46:49Z | 2025-03-21T08:46:49Z | https://github.com/wkentaro/labelme/issues/1565 | [] | newpython6 | 0 |
autogluon/autogluon | computer-vision | 4,162 | [tabular] Add logging of inference throughput of best model at end of fit | [From user](https://www.kaggle.com/competitions/playground-series-s4e5/discussion/499495#2789917): "It wasn't really clear that predict was going to be going for a long time"
I think we can make this a bit better by mentioning at the end of training the estimated inference throughput of the selected best model, which the user can refer to when gauging how long it will take to do inference on X rows. We have the number already calculated, we just haven't put it as part of the user-visible logging yet.
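For illustration, such a log line could be produced by something like this (hypothetical helper, not AutoGluon API; it assumes the per-prediction timing is already measured during fit):

```python
def throughput_message(n_rows, predict_seconds):
    """Format an end-of-fit log line estimating inference throughput."""
    rows_per_sec = n_rows / predict_seconds
    return (f"Best model inference throughput: ~{rows_per_sec:,.0f} rows/s "
            f"(measured on {n_rows:,} rows)")


msg = throughput_message(10_000, 2.5)
```

A user can then divide their own row count by the reported rows/s figure to estimate how long `predict` will take.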
| closed | 2024-05-02T22:32:18Z | 2024-05-16T23:53:30Z | https://github.com/autogluon/autogluon/issues/4162 | [
"API & Doc",
"module: tabular"
] | Innixma | 0 |
aio-libs-abandoned/aioredis-py | asyncio | 596 | Why keyword arguments for SortedSet commands should be bytes | Argument type for many SortedSet commands (for `min`/`max` kw) is enforced by isinstance check.
In one place it even has this comment:
```python
if not isinstance(max, bytes): # FIXME Why only bytes?
raise TypeError("max argument must be bytes")
```
I think it is more convenient (for me as a user) to pass strings instead of bytes.
Some of these commands are for "lexicographical" operations (so the user mostly works with strings, not blobs of bytes).
Strings would also be more consistent with other commands.
Should I work on a PR for the change?
If there is a deep reason for it, could you please explain the logic behind it?
Thanks.
| closed | 2019-05-13T05:13:17Z | 2021-03-18T23:55:31Z | https://github.com/aio-libs-abandoned/aioredis-py/issues/596 | [
"help wanted",
"easy",
"pr-available",
"resolved-via-latest"
] | gyermolenko | 1 |
keras-team/autokeras | tensorflow | 848 | Where is my kerastuner? | Error raised as follows:

Could anyone help me? | closed | 2019-12-03T03:56:09Z | 2019-12-03T05:24:05Z | https://github.com/keras-team/autokeras/issues/848 | [] | Kantshun | 0 |
albumentations-team/albumentations | deep-learning | 1,798 | Can't do any data augmentation on my custom dataset in FiftyOne | ## Describe the bug
The cause is unclear, but whenever I apply any augmentations using the Albumentations library in FiftyOne, the created images don't load at all.
### To Reproduce
Here is what I did and the outputs I got:
1. Created a custom dataset in FiftyOne and labelled it with CVAT
2. Followed this tutorial https://docs.voxel51.com/tutorials/data_augmentation.html to create 3 augmented images based on selected samples.
3. Ran 'View last Albumentations run'. The output was the message: 'Failed to execute operator @voxel51/operators/set_view. See console for details.'
4. Opened the 'augmented' tag in FiftyOne. Output:

When opened, each of those displayed the message: 'This image failed to load. The file may not exist, or its type (image/jpeg) may be unsupported.'
The labels do seem to be correct, but the image doesn't load.
5. Went to the folder where the dataset is located. Its size didn't change; I'd expect it to contain 3 extra images.
I am sorry if I submitted this in a wrong place or category. I can't really find any information on this problem that I'm having and would appreciate any help. | closed | 2024-06-19T15:19:40Z | 2024-06-19T17:47:23Z | https://github.com/albumentations-team/albumentations/issues/1798 | [
"bug"
] | iza611 | 1 |
marcomusy/vedo | numpy | 525 | Can I use the mouse to rotate an actor like this? | https://www.weiy.city/2019/08/vtk-rotate-cone-with-ring/
Thanks. | closed | 2021-11-10T10:12:08Z | 2021-11-11T04:38:20Z | https://github.com/marcomusy/vedo/issues/525 | [
"enhancement"
] | timeanddoctor | 11 |
tflearn/tflearn | data-science | 876 | Installation on Mac OS X | I have installed TensorFlow in a virtual env using the command `virtualenv --system-site-packages tensorflow`. Should I install tflearn into the same virtual env? If so, how? | open | 2017-08-18T08:50:16Z | 2017-08-18T08:50:16Z | https://github.com/tflearn/tflearn/issues/876 | [] | augmen | 0 |
deepinsight/insightface | pytorch | 2,472 | app/face_analysis.py passes the full image to face recognition | In app/face_analysis.py, face recognition is given the full image as input, but the recognition model needs a cropped face image. | closed | 2023-11-14T17:22:23Z | 2023-11-15T13:58:09Z | https://github.com/deepinsight/insightface/issues/2472 | [] | deisler134 | 3 |
plotly/dash | data-visualization | 3,067 | allow optional callback inputs | ## Problem
In a number of cases, I have callbacks with Inputs/States that are only conditionally present on the page, which causes console errors (ReferenceError) and prevents the callback from running.
Examples where this could happen:
* A global store at the app level being updated from inputs on a given page -> this will raise the ReferenceError log when I am on a different page
* Several objects that are conditionally present on a page and update another element
## Expected solution
I would like to be able to mark inputs as optional, in which case they would default to None in the callback when the element is not present.
## Current workaround
In some cases I can use a pattern matching callback with `ALL` to make the input optional (no input will give an empty list), however this feels hackish and does not always work with existing pattern-matching ids.
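The desired semantics can be sketched framework-agnostically (illustrative pseudologic only, not Dash's actual dispatcher): inputs marked optional fall back to `None` when their component is absent, instead of raising.

```python
def dispatch(callback, present_values, required, optional):
    """Invoke a callback with values from components currently on the page.

    `present_values` maps component id -> value for components that exist.
    Required ids must be present (today's behaviour); optional ids fall
    back to None instead of raising, which is the behaviour requested here.
    """
    args = []
    for cid in required:
        if cid not in present_values:
            raise ReferenceError(f"component {cid!r} not found on page")
        args.append(present_values[cid])
    for cid in optional:
        args.append(present_values.get(cid))
    return callback(*args)


# A global-store callback keeps working even when 'page-input' is absent:
result = dispatch(lambda store, page: (store, page),
                  {"global-store": 42}, ["global-store"], ["page-input"])
```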
| open | 2024-11-08T05:42:03Z | 2024-11-11T14:45:47Z | https://github.com/plotly/dash/issues/3067 | [
"feature",
"P3"
] | RenaudLN | 0 |
plotly/dash-table | plotly | 736 | Tooltip doesn't work with horizontal scroll | As reported here: https://community.plotly.com/t/datatable-tooltip-view-appears-limited-to-scope-of-page-only/37576 | closed | 2020-04-13T18:30:53Z | 2020-10-28T18:17:17Z | https://github.com/plotly/dash-table/issues/736 | [
"regression",
"size: 1",
"bug"
] | chriddyp | 18 |
pallets-eco/flask-wtf | flask | 597 | Adding support for X-Frame-Options | Hello,
Is it possible to add the following two configurations to allow values for `X-Frame-Options` other than `SAMEORIGIN`:
example:
`WTF_CSRF_FRAME_OPTIONS = 'ALLOW-FROM'`
and
`WTF_CSRF_FRAME_OPTIONS_ALLOW_FROM = 'mydomain.site'`
which will produce the following header:
'X-Frame-Options': 'ALLOW-FROM mydomain.site'
I've checked on documentation and previous issues but, unfortunately, I didn't find any way to do it.
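For illustration, a helper building the header from the two proposed config keys might look like this (hypothetical code; the config names come from this proposal, not from existing Flask-WTF settings):

```python
def build_frame_options_header(config):
    """Build the X-Frame-Options header from the two proposed config keys.

    WTF_CSRF_FRAME_OPTIONS selects the directive; when it is ALLOW-FROM,
    WTF_CSRF_FRAME_OPTIONS_ALLOW_FROM supplies the allowed origin.
    """
    option = config.get("WTF_CSRF_FRAME_OPTIONS", "SAMEORIGIN")
    if option == "ALLOW-FROM":
        origin = config["WTF_CSRF_FRAME_OPTIONS_ALLOW_FROM"]
        return {"X-Frame-Options": f"ALLOW-FROM {origin}"}
    return {"X-Frame-Options": option}
```

Worth noting: most modern browsers have dropped support for `ALLOW-FROM` in favour of the CSP `frame-ancestors` directive, so such an option may need a compatibility note in the docs.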
Thank you | closed | 2024-05-16T14:44:24Z | 2024-05-31T00:55:12Z | https://github.com/pallets-eco/flask-wtf/issues/597 | [] | scicco | 1 |
plotly/plotly.py | plotly | 4,428 | Non-determinism of `mode` for a plotly express line plot with markers | The following block of code introduces nondeterminism in the `mode` of a plotly express trace, because it involves iterating over a `set`:
https://github.com/plotly/plotly.py/blob/v5.18.0/packages/python/plotly/plotly/express/_core.py#L1922-L1931
Among other things, this causes the output of `plotly.graph_objects.Figure.to_html` to be non-deterministic, because sometimes the `mode` for a line plot with markers is `"lines+markers"` and other times it is `"markers+lines"`. This makes it difficult to test for consistency of results during automated tests.
One potential solution might be to replace the following:
```diff
- trace_patch["mode"] = "+".join(modes)
+ trace_patch["mode"] = "+".join(sorted(modes))
```
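A quick stdlib check of why the sorted join helps: it yields one canonical string regardless of the set's insertion order (and of hash randomization across interpreter runs).

```python
def canonical_mode(modes):
    """Render a set of trace modes as a single deterministic string."""
    return "+".join(sorted(modes))


# Insertion order no longer matters:
a = canonical_mode({"lines", "markers"})
b = canonical_mode({"markers", "lines"})
```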
Alternatively, `modes` could be constructed as a list instead of a set. | closed | 2023-11-17T00:35:43Z | 2023-11-17T22:48:00Z | https://github.com/plotly/plotly.py/issues/4428 | [] | alev000 | 1 |
google/seq2seq | tensorflow | 52 | During evaluation metric_fn is called epoch_size/batch_size times with the same data | During debugging of bug #39 I found that metric_fn was called many, many times with the same data, so I probed further.
I dumped every call to metric_fn to a file. It grows by batch_size with every call, until it's called with the entire (dev) dataset. The first 32 rows in metric-dump-02-hyp are equal to the rows in metric-dump-01-hyp, and so forth. This seems redundant.
I'm worried about how this affects the reported metrics. Is it the last call? The first? The average?
```
1 metric-dump-00-hyp.txt
1 metric-dump-00-ref.txt
32 metric-dump-01-hyp.txt
32 metric-dump-01-ref.txt
64 metric-dump-02-hyp.txt
64 metric-dump-02-ref.txt
96 metric-dump-03-hyp.txt
96 metric-dump-03-ref.txt
128 metric-dump-04-hyp.txt
128 metric-dump-04-ref.txt
160 metric-dump-05-hyp.txt
160 metric-dump-05-ref.txt
192 metric-dump-06-hyp.txt
192 metric-dump-06-ref.txt
224 metric-dump-07-hyp.txt
224 metric-dump-07-ref.txt
256 metric-dump-08-hyp.txt
256 metric-dump-08-ref.txt
288 metric-dump-09-hyp.txt
288 metric-dump-09-ref.txt
320 metric-dump-10-hyp.txt
320 metric-dump-10-ref.txt
352 metric-dump-11-hyp.txt
352 metric-dump-11-ref.txt
384 metric-dump-12-hyp.txt
384 metric-dump-12-ref.txt
416 metric-dump-13-hyp.txt
416 metric-dump-13-ref.txt
448 metric-dump-14-hyp.txt
448 metric-dump-14-ref.txt
480 metric-dump-15-hyp.txt
480 metric-dump-15-ref.txt
512 metric-dump-16-hyp.txt
512 metric-dump-16-ref.txt
544 metric-dump-17-hyp.txt
544 metric-dump-17-ref.txt
576 metric-dump-18-hyp.txt
576 metric-dump-18-ref.txt
608 metric-dump-19-hyp.txt
608 metric-dump-19-ref.txt
640 metric-dump-20-hyp.txt
640 metric-dump-20-ref.txt
672 metric-dump-21-hyp.txt
672 metric-dump-21-ref.txt
704 metric-dump-22-hyp.txt
704 metric-dump-22-ref.txt
736 metric-dump-23-hyp.txt
736 metric-dump-23-ref.txt
768 metric-dump-24-hyp.txt
768 metric-dump-24-ref.txt
800 metric-dump-25-hyp.txt
800 metric-dump-25-ref.txt
832 metric-dump-26-hyp.txt
832 metric-dump-26-ref.txt
864 metric-dump-27-hyp.txt
864 metric-dump-27-ref.txt
893 metric-dump-28-hyp.txt
893 metric-dump-28-ref.txt
893 metric-dump-29-hyp.txt
893 metric-dump-29-ref.txt
```
| closed | 2017-03-15T09:19:18Z | 2017-03-21T16:52:50Z | https://github.com/google/seq2seq/issues/52 | [] | rasmusbergpalm | 2 |
koaning/scikit-lego | scikit-learn | 133 | [FEATURE] Meta Model Discussion | At some point in time we're bound to have:
- DecayAdder
- ThresholdCutter
and before you know it, a model in a pipeline might start to look like this:
```
mod = ThresholdCutter(DecayAdder(LogisticRegression())
```
This is hard to read at some point.
We could make a `ModelPipeline` to make declaration a bit nicer. Something like:
```
ModelPipeline([
("logistic", LogisticRegression()),
("decay", DecayAdder()),
("threshold", ThresholdCutter())
])
```
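One possible shape for such a `ModelPipeline` (purely illustrative; this is not scikit-lego API): treat the first step as the base estimator and each later step as a wrapper applied to everything before it.

```python
from functools import reduce


class ModelPipeline:
    """Flatten nested meta-model wrapping into a readable list of steps.

    steps[0] is (name, base_model); every later step is (name, wrap),
    where wrap(model) returns the model wrapped in a meta-estimator.
    """
    def __init__(self, steps):
        self.steps = steps
        _, base = steps[0]
        self.model = reduce(lambda inner, step: step[1](inner), steps[1:], base)


# Toy string wrappers standing in for DecayAdder / ThresholdCutter:
pipe = ModelPipeline([
    ("base", "LogisticRegression"),
    ("decay", lambda m: f"DecayAdder({m})"),
    ("threshold", lambda m: f"ThresholdCutter({m})"),
])
```

With real meta-estimators, each wrap step would presumably be something like `lambda m: DecayAdder(m)`.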
Ah, class is starting. Will return later for more details. | closed | 2019-05-09T07:01:56Z | 2020-01-24T21:43:05Z | https://github.com/koaning/scikit-lego/issues/133 | [
"enhancement"
] | koaning | 0 |
httpie/cli | rest-api | 935 | Refactor client.py | @kbanc and I were working on #932 and found the `client.py` file a bit hard to go through, so we thought we could refactor it to look a bit like `sessions.py`: there would be a `Client` class, and all the logic for preparing and sending requests would live in this file.
The idea is pretty rough but @jakubroztocil if you have some thoughts we would really appreciate them! | closed | 2020-06-18T20:35:25Z | 2021-12-28T12:04:41Z | https://github.com/httpie/cli/issues/935 | [] | gmelodie | 3 |
huggingface/transformers | nlp | 36,817 | Add EuroBert Model To Config | ### Model description
I would like to have the EuroBert model added to the config (configuration_auto.py) :)
Especially the 210M version:
https://huggingface.co/EuroBERT
This would probably solve an issue in Flair:
https://github.com/flairNLP/flair/issues/3630
```
File "C:\Users\nick\PycharmProjects\flair\.venv\Lib\site-packages\flair\embeddings\transformer.py", line 1350, in from_params
config_class = CONFIG_MAPPING[model_type]
~~~~~~~~~~~~~~^^^^^^^^^^^^
File "C:\Users\nick\PycharmProjects\flair\.venv\Lib\site-packages\transformers\models\auto\configuration_auto.py", line 794, in __getitem__
raise KeyError(key)
KeyError: 'eurobert'
```
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
@tomaarsen | open | 2025-03-19T09:56:20Z | 2025-03-19T15:27:30Z | https://github.com/huggingface/transformers/issues/36817 | [
"New model"
] | zynos | 1 |
huggingface/datasets | tensorflow | 7,421 | DVC integration broken | ### Describe the bug
The DVC integration seems to be broken.
Followed this guide: https://dvc.org/doc/user-guide/integrations/huggingface
### Steps to reproduce the bug
#### Script to reproduce
~~~python
from datasets import load_dataset
dataset = load_dataset(
"csv",
data_files="dvc://workshop/satellite-data/jan_train.csv",
storage_options={"url": "https://github.com/iterative/dataset-registry.git"},
)
print(dataset)
~~~
#### Error log
~~~
Traceback (most recent call last):
File "C:\tmp\test\load.py", line 3, in <module>
dataset = load_dataset(
^^^^^^^^^^^^^
File "C:\tmp\test\.venv\Lib\site-packages\datasets\load.py", line 2151, in load_dataset
builder_instance.download_and_prepare(
File "C:\tmp\test\.venv\Lib\site-packages\datasets\builder.py", line 808, in download_and_prepare
fs, output_dir = url_to_fs(output_dir, **(storage_options or {}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: url_to_fs() got multiple values for argument 'url'
~~~
### Expected behavior
Integration would work and the indicated file is downloaded and opened.
### Environment info
#### Python version
~~~
python --version
Python 3.11.10
~~~
#### Venv (pip install datasets dvc):
~~~
Package Version
---------------------- -----------
aiohappyeyeballs 2.4.6
aiohttp 3.11.13
aiohttp-retry 2.9.1
aiosignal 1.3.2
amqp 5.3.1
annotated-types 0.7.0
antlr4-python3-runtime 4.9.3
appdirs 1.4.4
asyncssh 2.20.0
atpublic 5.1
attrs 25.1.0
billiard 4.2.1
celery 5.4.0
certifi 2025.1.31
cffi 1.17.1
charset-normalizer 3.4.1
click 8.1.8
click-didyoumean 0.3.1
click-plugins 1.1.1
click-repl 0.3.0
colorama 0.4.6
configobj 5.0.9
cryptography 44.0.1
datasets 3.3.2
dictdiffer 0.9.0
dill 0.3.8
diskcache 5.6.3
distro 1.9.0
dpath 2.2.0
dulwich 0.22.7
dvc 3.59.1
dvc-data 3.16.9
dvc-http 2.32.0
dvc-objects 5.1.0
dvc-render 1.0.2
dvc-studio-client 0.21.0
dvc-task 0.40.2
entrypoints 0.4
filelock 3.17.0
flatten-dict 0.4.2
flufl-lock 8.1.0
frozenlist 1.5.0
fsspec 2024.12.0
funcy 2.0
gitdb 4.0.12
gitpython 3.1.44
grandalf 0.8
gto 1.7.2
huggingface-hub 0.29.1
hydra-core 1.3.2
idna 3.10
iterative-telemetry 0.0.10
kombu 5.4.2
markdown-it-py 3.0.0
mdurl 0.1.2
multidict 6.1.0
multiprocess 0.70.16
networkx 3.4.2
numpy 2.2.3
omegaconf 2.3.0
orjson 3.10.15
packaging 24.2
pandas 2.2.3
pathspec 0.12.1
platformdirs 4.3.6
prompt-toolkit 3.0.50
propcache 0.3.0
psutil 7.0.0
pyarrow 19.0.1
pycparser 2.22
pydantic 2.10.6
pydantic-core 2.27.2
pydot 3.0.4
pygit2 1.17.0
pygments 2.19.1
pygtrie 2.5.0
pyparsing 3.2.1
python-dateutil 2.9.0.post0
pytz 2025.1
pywin32 308
pyyaml 6.0.2
requests 2.32.3
rich 13.9.4
ruamel-yaml 0.18.10
ruamel-yaml-clib 0.2.12
scmrepo 3.3.10
semver 3.0.4
setuptools 75.8.0
shellingham 1.5.4
shortuuid 1.0.13
shtab 1.7.1
six 1.17.0
smmap 5.0.2
sqltrie 0.11.2
tabulate 0.9.0
tomlkit 0.13.2
tqdm 4.67.1
typer 0.15.1
typing-extensions 4.12.2
tzdata 2025.1
urllib3 2.3.0
vine 5.1.0
voluptuous 0.15.2
wcwidth 0.2.13
xxhash 3.5.0
yarl 1.18.3
zc-lockfile 3.0.post1
~~~ | open | 2025-02-25T13:14:31Z | 2025-03-03T17:42:02Z | https://github.com/huggingface/datasets/issues/7421 | [] | maxstrobel | 1 |
mars-project/mars | scikit-learn | 2,848 | [core] Optimize mars graph building performance | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Is your feature request related to a problem? Please describe.**
```python
num_rows = 100_0000_0000
df1 = md.DataFrame(
mt.random.rand(num_rows, 4, chunk_size=100_0000),
columns=list('abcd'))
df2 = md.DataFrame(
mt.random.rand(num_rows, 4, chunk_size=100_0000),
columns=list('abcd'))
df1.merge(df2, left_on='a', right_on='a').execute()
```
20000 subtasks and 50 16C32G workers
* without coloring, graph build time:
  * tile: 128s
  * assign: 40min


**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| closed | 2022-03-22T09:08:33Z | 2022-03-24T14:33:07Z | https://github.com/mars-project/mars/issues/2848 | [
"type: bug",
"mod: task service"
] | chaokunyang | 1 |
psf/requests | python | 6,299 | How to change Keep-Alive parameters? | We have tried setting these headers:
```
Connection: keep-alive
Keep-Alive: timeout=5, max=2
```
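For context (general HTTP behaviour, independent of requests): `Keep-Alive` is a hop-by-hop header whose `timeout`/`max` parameters are advisory values a *server* sends about its own connection policy; a client sending them cannot force the server, or requests' connection pool, to adopt those values. Reading the server's advertised values could be sketched like this (hypothetical helper, not part of requests):

```python
def parse_keep_alive(header_value):
    """Parse a Keep-Alive header value like 'timeout=5, max=2' into a dict."""
    params = {}
    for part in header_value.split(","):
        key, _, value = part.strip().partition("=")
        params[key] = int(value)
    return params


params = parse_keep_alive("timeout=5, max=2")
```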
But it is not working as expected | closed | 2022-11-30T10:43:41Z | 2023-12-01T00:03:47Z | https://github.com/psf/requests/issues/6299 | [] | mehtaanshul | 2 |
zihangdai/xlnet | tensorflow | 45 | Getting the following error when trying to run tpu_squad_large.sh | ```
W0624 16:40:52.848234 140595823699392 __init__.py:44] file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/googleapiclient/discovery_cache/__init__.py", line 41, in autodetect
from . import file_cache
File "/usr/local/lib/python2.7/dist-packages/googleapiclient/discovery_cache/file_cache.py", line 41, in <module>
'file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth')
ImportError: file_cache is unavailable when using oauth2client >= 4.0.0 or google-auth
I0624 16:40:53.032814 140595823699392 model_utils.py:32] Use TPU without distribute strategy.
W0624 16:40:53.034595 140595823699392 estimator.py:1924] Estimator's model_fn (<function model_fn at 0x7fded610ded8>) includes params argument, but params are not passed to Estimator.
I0624 16:40:53.035511 140595823699392 estimator.py:201] Using config: {'_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
, '_keep_checkpoint_max': 5, '_task_type': 'worker', '_train_distribute': None, '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fded610c2d0>, '_model_dir': 'gs://question-answering/experiment/squad', '_protocol': None, '_save_checkpoints_steps': 1000, '_keep_checkpoint_every_n_hours': 10000, '_service': None, '_num_ps_replicas': 0, '_tpu_config': TPUConfig(iterations_per_loop=1000, num_shards=8, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None), '_tf_random_seed': None, '_save_summary_steps': 100, '_device_fn': None, '_cluster': None, '_experimental_distribute': None, '_num_worker_replicas': 1, '_task_id': 0, '_log_step_count_steps': None, '_evaluation_master': u'grpc://10.240.1.2:8470', '_eval_distribute': None, '_global_id_in_cluster': 0, '_master': u'grpc://10.240.1.2:8470'}
I0624 16:40:53.035886 140595823699392 tpu_context.py:202] _TPUContext: eval_on_tpu True
I0624 16:40:53.036292 140595823699392 run_squad.py:940] Input tfrecord file glob gs://question-answering/proc_data/squad/spiece.model.*.slen-512.qlen-64.train.tf_record
I0624 16:40:53.103672 140595823699392 run_squad.py:943] Find 0 input paths []
I0624 16:40:53.243366 140595823699392 tpu_system_metadata.py:59] Querying Tensorflow master (grpc://10.240.1.2:8470) for TPU system metadata.
2019-06-24 16:40:53.244997: W tensorflow/core/distributed_runtime/rpc/grpc_session.cc:354] GrpcSession::ListDevices will initialize the session with an empty graph and other defaults because the session has not yet been created.
I0624 16:40:53.250566 140595823699392 tpu_system_metadata.py:120] Found TPU system:
I0624 16:40:53.250852 140595823699392 tpu_system_metadata.py:121] *** Num TPU Cores: 8
I0624 16:40:53.251368 140595823699392 tpu_system_metadata.py:122] *** Num TPU Workers: 1
I0624 16:40:53.251487 140595823699392 tpu_system_metadata.py:124] *** Num TPU Cores Per Worker: 8
I0624 16:40:53.251578 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:CPU:0, CPU, -1, 13676165870058292740)
I0624 16:40:53.251995 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:XLA_CPU:0, XLA_CPU, 17179869184, 18431886415160989968)
I0624 16:40:53.252130 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:0, TPU, 17179869184, 1709911759425913454)
I0624 16:40:53.252240 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:1, TPU, 17179869184, 10844450437283158931)
I0624 16:40:53.252331 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:2, TPU, 17179869184, 6304466678072412335)
I0624 16:40:53.252414 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:3, TPU, 17179869184, 1347834186282897648)
I0624 16:40:53.252512 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:4, TPU, 17179869184, 2010934665306124677)
I0624 16:40:53.252598 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:5, TPU, 17179869184, 1558411301377583255)
I0624 16:40:53.252691 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:6, TPU, 17179869184, 15582409736436553171)
I0624 16:40:53.252773 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU:7, TPU, 17179869184, 13427578911967334923)
I0624 16:40:53.252856 140595823699392 tpu_system_metadata.py:126] *** Available Device: _DeviceAttributes(/job:tpu_worker/replica:0/task:0/device:TPU_SYSTEM:0, TPU_SYSTEM, 17179869184, 17740777277430650014)
W0624 16:40:53.257469 140595823699392 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/resource_variable_ops.py:435: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
I0624 16:40:53.268704 140595823699392 estimator.py:1111] Calling model_fn.
W0624 16:40:53.273418 140595823699392 deprecation.py:323] From run_squad.py:1001: parallel_interleave (from tensorflow.contrib.data.python.ops.interleave_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.experimental.parallel_interleave(...)`.
I0624 16:40:53.275295 140595823699392 error_handling.py:70] Error recorded from training_loop: Tensor conversion requested dtype string for Tensor with dtype float32: 'Tensor("arg0:0", shape=(), dtype=float32, device=/job:tpu_worker/task:0/device:CPU:0)'
I0624 16:40:53.275455 140595823699392 error_handling.py:93] training_loop marked as finished
W0624 16:40:53.275588 140595823699392 error_handling.py:127] Reraising captured error
Traceback (most recent call last):
File "run_squad.py", line 1310, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "run_squad.py", line 1209, in main
estimator.train(input_fn=train_input_fn, max_steps=FLAGS.train_steps)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2457, in train
rendezvous.raise_errors()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/error_handling.py", line 128, in raise_errors
six.reraise(typ, value, traceback)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2452, in train
saving_listeners=saving_listeners)
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 358, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1124, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1154, in _train_model_default
features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2251, in _call_model_fn
config)
File "/usr/local/lib/python2.7/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 1112, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2547, in _model_fn
input_holders.generate_infeed_enqueue_ops_and_dequeue_fn())
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1167, in generate_infeed_enqueue_ops_and_dequeue_fn
self._invoke_input_fn_and_record_structure())
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 1243, in _invoke_input_fn_and_record_structure
self._inputs_structure_recorder, host_device, host_id))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 830, in generate_per_host_v2_enqueue_ops_fn_for_host
inputs = _Inputs.from_input_fn(input_fn(user_context))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/tpu/python/tpu/tpu_estimator.py", line 2423, in _input_fn
return input_fn(**kwargs)
File "run_squad.py", line 1001, in input_fn
cycle_length=cycle_length))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 1605, in apply
return DatasetV1Adapter(super(DatasetV1, self).apply(transformation_func))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 1127, in apply
dataset = transformation_func(self)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/experimental/ops/interleave_ops.py", line 88, in _apply_fn
buffer_output_elements, prefetch_input_elements)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/readers.py", line 133, in __init__
cycle_length, block_length)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 2827, in __init__
super(InterleaveDataset, self).__init__(input_dataset, map_func)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 2798, in __init__
map_func, self._transformation_name(), dataset=input_dataset)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 2124, in __init__
self._function.add_to_graph(ops.get_default_graph())
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/function.py", line 490, in add_to_graph
self._create_definition_if_needed()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/function.py", line 341, in _create_definition_if_needed
self._create_definition_if_needed_impl()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/function.py", line 355, in _create_definition_if_needed_impl
whitelisted_stateful_ops=self._whitelisted_stateful_ops)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/function.py", line 883, in func_graph_from_py_func
outputs = func(*func_graph.inputs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/dataset_ops.py", line 2099, in tf_data_structured_function_wrapper
ret = func(*nested_args)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/readers.py", line 247, in __init__
filenames, compression_type, buffer_size, num_parallel_reads)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/data/ops/readers.py", line 199, in __init__
filenames = ops.convert_to_tensor(filenames, dtype=dtypes.string)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1039, in convert_to_tensor
return convert_to_tensor_v2(value, dtype, preferred_dtype, name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1097, in convert_to_tensor_v2
as_ref=False)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1175, in internal_convert_to_tensor
ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 977, in _TensorTensorConversionFunction
(dtype.name, t.dtype.name, str(t)))
ValueError: Tensor conversion requested dtype string for Tensor with dtype float32: 'Tensor("arg0:0", shape=(), dtype=float32, device=/job:tpu_worker/task:0/device:CPU:0)'
``` | closed | 2019-06-24T16:44:04Z | 2019-06-24T17:31:25Z | https://github.com/zihangdai/xlnet/issues/45 | [] | rakshanda22 | 2 |
pydantic/pydantic-core | pydantic | 1,115 | (🐞) `ValidationError` can't be instantiated | ```py
from pydantic import ValidationError
ValidationError() # TypeError: No constructor defined
```
```
> mypy -c "
from pydantic import ValidationError
ValidationError()
"
Success: no issues found in 1 source file
```
# Reasoning:
I want to raise `ValidationError` within my validators, because I think it looks more semantic, and it avoids Ruff's TRY004 rule:
https://docs.astral.sh/ruff/rules/type-check-without-type-error/
```py
def parse_date(val: object) -> date:
    """pydantic validator to parse date strings from <secret internal system>"""
    if isinstance(val, date):
        return val
    if not isinstance(val, str):
        # value error needs to be raised to signal Pydantic to try the next validator
        raise ValueError  # noqa: TRY004
    ...  # do the actual work
```
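Another workaround sketch I considered (illustrative only, not a pydantic API): keep raising a `ValueError`, but use an app-level subclass so the raise reads semantically while still counting as a `ValueError` for pydantic's "try the next validator" behavior and for Ruff's TRY004. The names below are hypothetical:

```python
class InputValidationError(ValueError):
    """Hypothetical app-level error: semantically a validation failure, and
    still a ValueError, which pydantic treats as 'try the next validator'."""


def require_str(val: object) -> str:
    if not isinstance(val, str):
        # No noqa needed: we raise a ValueError subclass, not a TypeError.
        raise InputValidationError(f"expected str, got {type(val).__name__}")
    return val
```

This keeps the raise site semantic without needing to construct a `ValidationError` directly.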
# What else have you tried?
I could instead `return val`, but then I would have to update the signature to be `-> object`, which isn't as tight.
- Related #959 | closed | 2023-12-06T04:53:46Z | 2023-12-06T05:58:08Z | https://github.com/pydantic/pydantic-core/issues/1115 | [
"unconfirmed"
] | KotlinIsland | 1 |
facebookresearch/fairseq | pytorch | 4,953 | Where to find the list of source languages? | I'm using the below code which will try to translate from Romanian to English
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M", src_lang="ron_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")

article = "Şeful ONU spune că nu există o soluţie militară în Siria"
inputs = tokenizer(article, return_tensors="pt")

translated_tokens = model.generate(
    # note: the English code in NLLB/FLORES-200 is "eng_Latn", not "en_Latn"
    **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"], max_length=30
)
tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0]
```
Can anyone please share the link to the source_lang list, or the path if it is present in this GitHub repo? | open | 2023-01-24T15:14:20Z | 2024-02-23T21:11:41Z | https://github.com/facebookresearch/fairseq/issues/4953 | [
"question",
"needs triage"
] | nithinreddyy | 2 |
huggingface/diffusers | deep-learning | 10,697 | Inconsistent random transform between source and target image in train_instruct_pix2pix | ### Describe the bug
Currently, random cropping and random flipping in `train_transform` of [train_instruct_pix2pix.py](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix.py#L701) and [train_instruct_pix2pix_sdxl.py](https://github.com/huggingface/diffusers/blob/main/examples/instruct_pix2pix/train_instruct_pix2pix_sdxl.py#L772) are applied independently to the source and target images, which leads to discrepancies.
This inconsistency can cause misalignment between the source and target images (the source could be flipped but not the target, or vice versa)
### Reproduction
Following InstructPix2Pix training example, but set resolution to 512 to turn off random cropping. My edit model tends to return symmetric images.
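A minimal sketch of the fix I would expect (a pure-Python stand-in with hypothetical names, not the diffusers API): sample the random transform parameters once per pair and apply them to both the source and the target, so the two images cannot diverge:

```python
import random


def paired_random_hflip(source, target, p=0.5, rng=random):
    """Flip source and target together: a single coin toss drives both,
    so the pair stays aligned (images as row-major nested lists)."""
    if rng.random() < p:
        source = [row[::-1] for row in source]
        target = [row[::-1] for row in target]
    return source, target


src = [[1, 2], [3, 4]]
tgt = [[5, 6], [7, 8]]
# Seed 1's first draw is below 0.5, so both images flip together:
flipped_src, flipped_tgt = paired_random_hflip(src, tgt, rng=random.Random(1))
print(flipped_src, flipped_tgt)  # → [[2, 1], [4, 3]] [[6, 5], [8, 7]]
```

The same idea applies to random cropping: sample the crop box once and crop both images with it.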
### Logs
```shell
```
### System Info
0.33.0.dev0
### Who can help?
@sayakpaul | closed | 2025-01-31T16:04:18Z | 2025-01-31T18:29:30Z | https://github.com/huggingface/diffusers/issues/10697 | [
"bug"
] | Luvata | 0 |
modin-project/modin | data-science | 7,382 | DOC: Add documentation on how to use Modin Native query compiler | **Is your feature request related to a problem? Please describe.**
Add a paragraph in optimisation notes on how to use [native_query_compiler](https://github.com/arunjose696/modin/blob/arun-sqc-interop/modin/core/storage_formats/pandas/native_query_compiler.py#L568)
| closed | 2024-09-02T16:43:45Z | 2024-09-06T13:33:10Z | https://github.com/modin-project/modin/issues/7382 | [
"new feature/request 💬",
"Triage 🩹"
] | arunjose696 | 0 |
biolab/orange3 | scikit-learn | 6,360 | Make Ward Linkage the default method for hierarchical clustering | **What's your proposed solution?**
Make Ward the default.
**Are there any alternative solutions?**
No. :) Besides status quo.
| closed | 2023-03-10T09:33:51Z | 2023-03-10T13:32:26Z | https://github.com/biolab/orange3/issues/6360 | [] | janezd | 0 |
aminalaee/sqladmin | sqlalchemy | 822 | Allow to attach sessionmaker after configuring starlette app | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
I want to set up sqladmin for a project in which we're using [Google IAM](https://cloud.google.com/sql/docs/postgres/iam-authentication) auth for PostgreSQL, and our function that creates the SQLAlchemy engine is async:
```python
from typing import TYPE_CHECKING

from google.cloud.sql.connector import Connector, create_async_connector
from sqlalchemy.ext.asyncio import AsyncEngine, create_async_engine

if TYPE_CHECKING:
    from asyncpg import Connection


class AsyncEngineWrapper:
    """
    Reflects the interface of the AsyncEngine but have reference to the Cloud SQL
    Connector to close it properly when disposing the engine.
    """

    def __init__(self, engine: AsyncEngine, connector: Connector):
        self.engine = engine
        self.connector = connector

    def __getattr__(self, attr):
        return getattr(self.engine, attr)

    async def dispose(self, close: bool = False) -> None:
        await self.connector.close_async()
        await self.engine.dispose(close)


async def create_cloud_sql_async_engine(
    cloud_sql_instance: str,
    *,
    cloud_sql_user: str,
    cloud_sql_database: str,
    cloud_sql_password: str | None = None,
    enable_iam_auth: bool = True,
    **kwargs,
) -> AsyncEngine:
    """
    Use Cloud SQL IAM role authentication mechanism
    https://cloud.google.com/sql/docs/postgres/iam-authentication
    to create new SqlAlchemy async engine.
    """
    connector = await create_async_connector()

    async def get_conn() -> "Connection":
        return await connector.connect_async(
            cloud_sql_instance,
            "asyncpg",
            user=cloud_sql_user,
            password=cloud_sql_password,
            db=cloud_sql_database,
            enable_iam_auth=enable_iam_auth,
        )

    engine = create_async_engine(
        "postgresql+asyncpg://", async_creator=get_conn, **kwargs
    )
    return AsyncEngineWrapper(engine, connector)  # type: ignore[return-value]
```
and now it's quite tricky to get an instance of the engine when creating the FastAPI app (and the Admin instance).

Someone who shares a connection pool between the app and the admin and creates the engine inside of the lifespan (https://fastapi.tiangolo.com/advanced/events/), which happens after the FastAPI app is created, might have a similar problem.
### Describe the solution you would like.
I would like to have an option to create the app with an admin instance without passing an engine or sessionmaker yet, and then have an option (like a dedicated Admin method) to set the proper sessionmaker or engine inside of the `lifespan`. Something like:
```python
async def lifespan(app: FastAPI):
    engine = await create_async_engine(...)
    app.state.admin.attach_engine(engine)
    yield
    await engine.dispose()


def create_app():
    app = FastAPI(lifespan=lifespan)
    admin = Admin(app)
    app.state.admin = admin

    @admin.add_view
    class UserAdmin(ModelView, model=User):
        column_list = [User.id, User.username]

    return app
```
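A dependency-free sketch of the late-binding idea behind the proposed `attach_engine` (all names hypothetical): views are wired against a holder at app-creation time, while the real engine is supplied later, inside the lifespan:

```python
class LateBoundEngine:
    """Hypothetical holder: routes/views can be wired against it immediately,
    while the real (async) engine is attached later, e.g. inside the app
    lifespan, after the awaitable engine factory has run."""

    def __init__(self):
        self._engine = None

    def attach(self, engine) -> None:
        self._engine = engine

    def __getattr__(self, name):
        if self._engine is None:
            raise RuntimeError("engine not attached yet; call attach() in lifespan")
        return getattr(self._engine, name)


class FakeEngine:
    """Stand-in for the real SQLAlchemy engine in this sketch."""

    def connect(self) -> str:
        return "connected"


holder = LateBoundEngine()   # created when the admin is configured
holder.attach(FakeEngine())  # later, inside lifespan()
print(holder.connect())      # → connected
```

Using the engine before `attach()` raises immediately, which makes misconfiguration obvious.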
### Describe alternatives you considered
I tried to create the whole Admin inside of the `lifespan`, but it didn't work. It looks like it's already too late, and we need to set it up when creating the FastAPI app.
### Additional context
_No response_ | open | 2024-09-27T11:59:44Z | 2024-10-15T09:39:31Z | https://github.com/aminalaee/sqladmin/issues/822 | [] | kamilglod | 3 |
flairNLP/flair | pytorch | 2,943 | Fine tuning `sentence-transformers/all-MiniLM-L6-v2` | Hello! I'm looking to find out what the training code would look like to fine-tune `sentence-transformers/all-MiniLM-L6-v2`. Right now, I have the following for generating embeddings for the fine-tune procedure:
```
from flair.data import Sentence, Token
from flair.embeddings import TransformerWordEmbeddings

embeddings = TransformerWordEmbeddings('sentence-transformers/all-MiniLM-L6-v2', fine_tune=True, layers='-1')
all_texts = open("ml/data/features.txt").read().split("\n")
sentences = []
for text in all_texts:
    if text:
        sentence = Sentence(text)
        embeddings.embed(sentence)
        sentences.append(sentence)
# .....?
```
This is based on my reading of https://github.com/flairNLP/flair/blob/cebd2b1c81be4507f62e967f8a2e7701e332dbd3/resources/docs/TUTORIAL_9_TRAINING_LM_EMBEDDINGS.md, which does not include what the downstream train call would look like. Any and all advice is welcome! | closed | 2022-09-19T18:34:44Z | 2023-02-02T07:57:12Z | https://github.com/flairNLP/flair/issues/2943 | [
"question",
"wontfix"
] | DGaffney | 1 |
pyppeteer/pyppeteer | automation | 121 | Screenshot Quality argument not working? | The quality argument of the screenshot coroutine is not working correctly.

The code I tested with:
```
browser = await launch(
    headless=True,
    executablePath=EXEC_PATH
)
page = await browser.newPage()
await page.goto(link)
await page.screenshot({'fullPage': True, 'path': './FILES/584512526/3726/Python Dictionarieswebshotbot.jpeg', 'type': 'jpeg', 'quality': 1})
```
Unfortunately, the output quality is the same if I change the quality argument to 100.
Below is an example of a screenshot taken with quality 1

**Is there something wrong with my code?**
| open | 2020-05-28T23:06:00Z | 2020-05-29T17:33:57Z | https://github.com/pyppeteer/pyppeteer/issues/121 | [
"bug",
"good first issue",
"fixed-in-2.1.1"
] | alenpaulvarghese | 8 |
run-llama/rags | streamlit | 22 | Implement using llama.cpp as the LLM model | I am trying to implement this using an open-source LLM with llama.cpp, but I am getting this error:

"ValueError: Must pass in vector index for CondensePlusContextChatEngine."

I am new to LlamaIndex; can anyone help me with what exactly I need to configure in order to run RAGs? | open | 2023-11-24T12:52:41Z | 2023-12-24T21:42:04Z | https://github.com/run-llama/rags/issues/22 | [] | adeelhasan19 | 4
microsoft/nni | deep-learning | 5,261 | "Hello, NAS!" tutorial performance issue with zero-cost model evaluation function | I ran the "Hello, NAS!" tutorial from the documentation in a GPU Jupyter notebook (Kaggle and Colab give the same results), but I changed the evaluate_model function to:
```
def evaluate_model(model_cls):
    accuracy = 0.57
    nni.report_final_result(accuracy)
```
Using this evaluation function, I expected instant experiment execution, but it ran for 1 minute 20 seconds. When I changed max_trial_number from 4 to 8, the experiment ran for 2 minutes and 30 seconds...
Full code:
```
!pip install pytorch-lightning
!pip install nni
```
```
import torch
import torch.nn.functional as F
import nni.retiarii.nn.pytorch as nn
import nni.retiarii.strategy as strategy
from nni.retiarii import model_wrapper
from nni.retiarii.evaluator import FunctionalEvaluator
from nni.retiarii.experiment.pytorch import RetiariiExperiment, RetiariiExeConfig
import nni


class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


@model_wrapper
class ModelSpace(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.LayerChoice([
            nn.Conv2d(32, 64, 3, 1),
            DepthwiseSeparableConv(32, 64)
        ])
        self.dropout1 = nn.Dropout(nn.ValueChoice([0.25, 0.5, 0.75]))
        self.dropout2 = nn.Dropout(0.5)
        feature = nn.ValueChoice([64, 128, 256])
        self.fc1 = nn.Linear(9216, feature)
        self.fc2 = nn.Linear(feature, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.max_pool2d(self.conv2(x), 2)
        x = torch.flatten(self.dropout1(x), 1)
        x = self.fc2(self.dropout2(F.relu(self.fc1(x))))
        output = F.log_softmax(x, dim=1)
        return output


model_space = ModelSpace()
search_strategy = strategy.Random(dedup=True)


def evaluate_model(model_cls):
    accuracy = 0.57
    nni.report_final_result(accuracy)


evaluator = FunctionalEvaluator(evaluate_model)
exp = RetiariiExperiment(model_space, evaluator, [], search_strategy)
exp_config = RetiariiExeConfig('local')
exp_config.experiment_name = 'mnist_search'
exp_config.max_trial_number = 4
exp_config.trial_concurrency = 1
exp_config.trial_gpu_number = 1
exp_config.training_service.use_active_gpu = True
```
```
%%time
exp.run(exp_config, 8081)
```
| open | 2022-12-03T11:16:49Z | 2022-12-06T00:31:24Z | https://github.com/microsoft/nni/issues/5261 | [] | miracle-111 | 3 |
aleju/imgaug | machine-learning | 785 | AttributeError: module 'imgaug' has no attribute 'deepcopy' | Hi, I met a problem when augmenting my dataset. My code is as follows:
```
annos = imgs_anns["shapes"]
for anno in annos:
    polygon = []
    points = anno["points"]
    px = [point[0] for point in points]
    py = [point[1] for point in points]
    poly = [(x + 0.5, y + 0.5) for x, y in zip(px, py)]
    ia.Polygon(poly)
    polygon.append(ia)
polygons = ia.PolygonsOnImage(polygon, shape=image.shape)
images_aug, polygons_aug = seq(images=[image], polygons=polygons)
```
But the last line gives the error
```
Traceback (most recent call last):
File "tools/augmentation.py", line 58, in <module>
augment_images("../pandents(colored)/dataset/train")
File "tools/augmentation.py", line 51, in augment_images
images_aug, polygons_aug = seq(images=[image], polygons=polygons)
File "/home/jason/Documents/vist/deeplearning/venv/lib/python3.6/site-packages/imgaug/augmenters/meta.py", line 2027, in __call__
return self.augment(*args, **kwargs)
File "/home/jason/Documents/vist/deeplearning/venv/lib/python3.6/site-packages/imgaug/augmenters/meta.py", line 1998, in augment
batch_aug = self.augment_batch_(batch, hooks=hooks)
File "/home/jason/Documents/vist/deeplearning/venv/lib/python3.6/site-packages/imgaug/augmenters/meta.py", line 597, in augment_batch_
batch_inaug = batch_norm.to_batch_in_augmentation()
File "/home/jason/Documents/vist/deeplearning/venv/lib/python3.6/site-packages/imgaug/augmentables/batches.py", line 456, in to_batch_in_augmentation
polygons=_copy(self.polygons_unaug),
File "/home/jason/Documents/vist/deeplearning/venv/lib/python3.6/site-packages/imgaug/augmentables/batches.py", line 447, in _copy
return utils.copy_augmentables(var)
File "/home/jason/Documents/vist/deeplearning/venv/lib/python3.6/site-packages/imgaug/augmentables/utils.py", line 19, in copy_augmentables
result.append(augmentable.deepcopy())
File "/home/jason/Documents/vist/deeplearning/venv/lib/python3.6/site-packages/imgaug/augmentables/polys.py", line 2107, in deepcopy
polygons = [poly.deepcopy() for poly in self.polygons]
File "/home/jason/Documents/vist/deeplearning/venv/lib/python3.6/site-packages/imgaug/augmentables/polys.py", line 2107, in <listcomp>
polygons = [poly.deepcopy() for poly in self.polygons]
AttributeError: module 'imgaug' has no attribute 'deepcopy'
```
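A likely cause is visible in the snippet itself: `ia.Polygon(poly)` is constructed but its result is discarded, and `polygon.append(ia)` appends the imgaug *module* object, so the later `poly.deepcopy()` call inside imgaug hits the module and fails. The presumed fix is `polygon.append(ia.Polygon(poly))`. A stdlib stand-in reproducing the same pattern:

```python
# Stand-in demonstration of the bug pattern using a stdlib module instead of
# `imgaug` (`ia`): the constructed object is thrown away and the *module*
# object is appended, so later attribute access on list items fails.
import fractions

polygon = []
fractions.Fraction(1, 2)       # result discarded, like `ia.Polygon(poly)`
polygon.append(fractions)      # appends the module, like `polygon.append(ia)`
print(hasattr(polygon[0], "deepcopy"))  # → False, same failure mode as the traceback

# The presumed fix for the original snippet is to append the constructed object:
# polygon.append(ia.Polygon(poly))
polygon_fixed = [fractions.Fraction(1, 2)]
print(type(polygon_fixed[0]).__name__)  # → Fraction
```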
What's wrong? I tried reinstalling imgaug as follows, but no luck:
```
pip3 uninstall imgaug && pip3 install git+https://github.com/aleju/imgaug.git
```
| closed | 2021-08-21T17:29:02Z | 2021-08-23T02:32:43Z | https://github.com/aleju/imgaug/issues/785 | [] | WorstCodeWay | 1 |
automl/auto-sklearn | scikit-learn | 1,659 | [Question] Can I add a regressor without tuning any hyperparameters, i.e., return a blank configuration space? | ```
class AbessRegression(AutoSklearnRegressionAlgorithm):
    def __init__(self, random_state=None):
        self.random_state = random_state
        self.estimator = None

    def fit(self, X, y):
        from abess import LinearRegression
        self.estimator = LinearRegression()
        self.estimator.fit(X, y)
        return self

    def predict(self, X):
        if self.estimator is None:
            raise NotImplementedError
        return self.estimator.predict(X)

    @staticmethod
    def get_properties(dataset_properties=None):
        return {
            'shortname': 'abess',
            'name': 'abess linear regression',
            'handles_regression': True,
            'handles_classification': False,
            'handles_multiclass': False,
            'handles_multilabel': False,
            'handles_multioutput': True,
            'is_deterministic': True,
            'input': (SPARSE, DENSE, UNSIGNED_DATA, SIGNED_DATA),
            'output': (PREDICTIONS,)
        }

    @staticmethod
    def get_hyperparameter_search_space(dataset_properties=None):
        cs = ConfigurationSpace()
        return cs


# Add abess component to auto-sklearn.
autosklearn.pipeline.components.regression.add_regressor(AbessRegression)
cs = AbessRegression.get_hyperparameter_search_space()
print(cs)
```
```
regaallp = autosklearn.regression.AutoSklearnRegressor(
    time_left_for_this_task=60,
    per_run_time_limit=10,
    include={
        "data_preprocessor": ["NoPreprocessing"],
        'regressor': ['AbessRegression'],
        'feature_preprocessor': [
            # 'feature_agglomeration',
            'no_preprocessing', 'polynomial',
            'Bspline', 'kBinsDiscretizer',
        ],
    },
    memory_limit=6144,
    # ensemble_size=1,
)
regaallp.fit(X_train.values, y_train)
yaallp_pred = regaallp.predict(X_test.values)
```
The error:
TypeError Traceback (most recent call last)
Cell In [25], line 16
1 regaallp = autosklearn.regression.AutoSklearnRegressor(
2 time_left_for_this_task=60,
3 per_run_time_limit=10,
(...)
14 #ensemble_size=1,
15 )
---> 16 regaallp.fit(X_train.values, y_train)
17 yaallp_pred = regaallp.predict(X_test.values)
File ~/miniconda3/envs/p38/lib/python3.8/site-packages/autosklearn/estimators.py:1191, in AutoSklearnRegressor.fit(self, X, y, X_test, y_test, feat_type, dataset_name)
1178 raise ValueError("Regression with data of type {} is "
1179 "not supported. Supported types are {}. "
1180 "You can find more information about scikit-learn "
(...)
1186 )
1187 )
1189 # Fit is supposed to be idempotent!
1190 # But not if we use share_mode.
-> 1191 super().fit(
1192 X=X,
1193 y=y,
1194 X_test=X_test,
1195 y_test=y_test,
1196 feat_type=feat_type,
1197 dataset_name=dataset_name,
1198 )
1200 return self
File ~/miniconda3/envs/p38/lib/python3.8/site-packages/autosklearn/estimators.py:375, in AutoSklearnEstimator.fit(self, **kwargs)
373 if self.automl_ is None:
374 self.automl_ = self.build_automl()
--> 375 self.automl_.fit(load_models=self.load_models, **kwargs)
377 return self
File ~/miniconda3/envs/p38/lib/python3.8/site-packages/autosklearn/automl.py:2133, in AutoMLRegressor.fit(self, X, y, X_test, y_test, feat_type, dataset_name, only_return_configuration_space, load_models)
2122 def fit(
2123 self,
2124 X: SUPPORTED_FEAT_TYPES,
(...)
2131 load_models: bool = True,
2132 ):
-> 2133 return super().fit(
2134 X, y,
2135 X_test=X_test,
2136 y_test=y_test,
2137 feat_type=feat_type,
2138 dataset_name=dataset_name,
2139 only_return_configuration_space=only_return_configuration_space,
2140 load_models=load_models,
2141 is_classification=False,
2142 )
File ~/miniconda3/envs/p38/lib/python3.8/site-packages/autosklearn/automl.py:931, in AutoML.fit(self, X, y, task, X_test, y_test, feat_type, dataset_name, only_return_configuration_space, load_models, is_classification)
898 _proc_smac = AutoMLSMBO(
899 config_space=self.configuration_space,
900 dataset_name=self._dataset_name,
(...)
926 trials_callback=self._get_trials_callback
927 )
929 try:
930 self.runhistory_, self.trajectory_, self._budget_type = \
--> 931 _proc_smac.run_smbo()
932 trajectory_filename = os.path.join(
933 self._backend.get_smac_output_directory_for_run(self._seed),
934 'trajectory.json')
935 saveable_trajectory = \
936 [list(entry[:2]) + [entry[2].get_dictionary()] + list(entry[3:])
937 for entry in self.trajectory_]
File ~/miniconda3/envs/p38/lib/python3.8/site-packages/autosklearn/smbo.py:498, in AutoMLSMBO.run_smbo(self)
495 if self.trials_callback is not None:
496 smac.register_callback(self.trials_callback)
--> 498 smac.optimize()
500 self.runhistory = smac.solver.runhistory
501 self.trajectory = smac.solver.intensifier.traj_logger.trajectory
File ~/miniconda3/envs/p38/lib/python3.8/site-packages/smac/facade/smac_ac_facade.py:720, in SMAC4AC.optimize(self)
718 incumbent = None
719 try:
--> 720 incumbent = self.solver.run()
721 finally:
722 self.solver.save()
File ~/miniconda3/envs/p38/lib/python3.8/site-packages/smac/optimizer/smbo.py:273, in SMBO.run(self)
266 # Skip the run if there was a request to do so.
267 # For example, during intensifier intensification, we
268 # don't want to rerun a config that was previously ran
269 if intent == RunInfoIntent.RUN:
270 # Track the fact that a run was launched in the run
271 # history. It's status is tagged as RUNNING, and once
272 # completed and processed, it will be updated accordingly
--> 273 self.runhistory.add(
274 config=run_info.config,
275 cost=float(MAXINT)
276 if num_obj == 1
277 else np.full(num_obj, float(MAXINT)),
278 time=0.0,
279 status=StatusType.RUNNING,
280 instance_id=run_info.instance,
281 seed=run_info.seed,
282 budget=run_info.budget,
283 )
285 run_info.config.config_id = self.runhistory.config_ids[run_info.config]
287 self.tae_runner.submit_run(run_info=run_info)
File ~/miniconda3/envs/p38/lib/python3.8/site-packages/smac/runhistory/runhistory.py:257, in RunHistory.add(self, config, cost, time, status, instance_id, seed, budget, starttime, endtime, additional_info, origin, force_update)
223 """Adds a data of a new target algorithm (TA) run;
224 it will update data if the same key values are used
225 (config, instance_id, seed)
(...)
253 Forces the addition of a config to the history
254 """
256 if config is None:
--> 257 raise TypeError("Configuration to add to the runhistory must not be None")
258 elif not isinstance(config, Configuration):
259 raise TypeError(
260 "Configuration to add to the runhistory is not of type Configuration, but %s"
261 % type(config)
262 )
TypeError: Configuration to add to the runhistory must not be None
| closed | 2023-04-14T14:54:24Z | 2023-04-17T11:19:13Z | https://github.com/automl/auto-sklearn/issues/1659 | [] | belzheng | 3 |
thtrieu/darkflow | tensorflow | 736 | Cloud service recommendation | Which cloud service do you use for training `darkflow` models?
Or do you usually use your own GPU?
Any suggested links? | closed | 2018-04-24T13:57:48Z | 2018-05-06T13:55:40Z | https://github.com/thtrieu/darkflow/issues/736 | [] | offchan42 | 5
google/seq2seq | tensorflow | 132 | Issues with CUDA_OUT_OF_MEMORY | Hi
When trying out the pipeline unit test I found [here](https://google.github.io/seq2seq/getting_started/), I got the following two errors:
```
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 0 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.8095
pciBusID 0000:02:00.0
Total memory: 7.92GiB
Free memory: 7.81GiB
W tensorflow/stream_executor/cuda/cuda_driver.cc:590] creating context when one is currently active; existing: 0x3863510
I tensorflow/core/common_runtime/gpu/gpu_device.cc:885] Found device 1 with properties:
name: GeForce GTX 1080
major: 6 minor: 1 memoryClockRate (GHz) 1.8095
pciBusID 0000:01:00.0
Total memory: 7.92GiB
Free memory: 7.81GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:906] DMA: 0 1
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 0: Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:916] 1: Y Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:02:00.0)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 1080, pci bus id: 0000:01:00.0)
E tensorflow/stream_executor/cuda/cuda_driver.cc:1002] failed to allocate 7.92G (8508145664 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
E tensorflow/stream_executor/cuda/cuda_driver.cc:1002] failed to allocate 7.92G (8506048512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
INFO:tensorflow:Saving checkpoints for 1 into /tmp/tmp4emg5gzs/model.ckpt.
```
As there is nothing running on the two GPUs (I checked with nvidia-smi) and I am the only person at my internship trying stuff out on them, I don't find a reasonable explanation. Could someone point me in the right direction? As I'm a newbie to TensorFlow and GPUs in general, I find it hard to know where to start.
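One thing I plan to try (a hedged sketch; flag names as in the TF 1.x docs, so adjust for your version): by default TF 1.x creates a context on, and pre-allocates nearly all memory of, every visible GPU, so even a single training script that initializes twice can hit these allocation failures on the second card. Pinning the process to one GPU before TensorFlow is imported sidesteps that:

```python
import os

# Restrict this process to GPU 0 *before* TensorFlow is imported, so it
# cannot initialize a context on (and reserve ~7.9 GiB of) the second card.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# TF 1.x can also grow its allocation instead of grabbing everything up front
# (left as comments since TensorFlow may not be installed here):
# import tensorflow as tf
# config = tf.ConfigProto()
# config.gpu_options.allow_growth = True
# sess = tf.Session(config=config)
```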
Thanks in advance | closed | 2017-03-31T10:33:50Z | 2017-07-05T13:45:26Z | https://github.com/google/seq2seq/issues/132 | [] | ghost | 5 |
slackapi/bolt-python | fastapi | 811 | Trying to deploy to AWS Lambda - boto3 Module not found | Trying to deploy on AWS Lambda, so I switched from `__init__` to `def handler` and started using `SlackRequestHandler`. Now I am getting this "Module not found" error.
### Reproducible in:
```bash
pip freeze | grep slack
python --version
sw_vers && uname -v # or `ver`
```
#### The `slack_bolt` version
slack-bolt==1.16.1
slack-sdk==3.19.5
#### Python runtime version
Python 3.10.9
#### OS info
ProductName: macOS
ProductVersion: 12.5
BuildVersion: 21G72
Darwin Kernel Version 21.6.0: Sat Jun 18 17:07:22 PDT 2022; root:xnu-8020.140.41~1/RELEASE_ARM64_T6000
| closed | 2023-01-28T06:21:01Z | 2023-01-28T19:44:54Z | https://github.com/slackapi/bolt-python/issues/811 | [
"question",
"area:adapter"
] | asontha | 2 |
serengil/deepface | deep-learning | 924 | Unable to find good face match with deepface and Annoy | Hi
I am using the following code from Serengil's YouTube tutorial for finding the best face match. The embeddings are from deepface, and Annoy's ANN-based search is used to find the best matching face.

This code does not give a good matching face, as was highlighted in Serengil's YouTube video.

Looking for help on why this code would not give the best matching face.
Thanks
```python
import os
from deepface.commons import functions
from keras_facenet import FaceNet
from deepface.basemodels import Facenet
from annoy import AnnoyIndex
import matplotlib.pyplot as plt
import random
import time

facial_images = []
for root, directories, files in os.walk("../../deepface/deepface/tests/dataset"):
    for file in files:
        if ('.jpg' in files):
            exact_path = root + files
            facial_images.append(exact_path)

embedder = FaceNet()
model = Facenet.loadModel()

representations = []
min = 100000000000
max = -10000000000
for face in facial_images:
    img = functions.preprocess_face(img = face, target_size = (160,160))
    # embedding = embedder.embeddings(img)
    embedding = model.predict(img)[0,:]
    temp = embedding.min()
    if (min > temp):
        min = temp
    temp = embedding.max()
    if (max < temp):
        max = temp
    representation = []
    representation.append(img)
    representation.append(embedding)
    representations.append(representation)

# synthetic data to enlarge data size
for i in range(len(representations), 100000):
    filename = "dummy_%d.jpg" % i
    vector = [random.gauss(min, max) for z in range(128)]
    dummy_item = []
    dummy_item.append(filename)
    dummy_item.append(vector)
    representations.append(dummy_item)

t = AnnoyIndex(128, 'euclidean')
for i in range(len(representations)):
    vector = representations[i][1]
    t.add_item(i, vector)
t.build(3)

idx = 0
k = 2
start = time.time()
neighbors = t.get_nns_by_item(idx, k)
end = time.time()
print("get_nns_by_item took", end - start, "seconds")
print(neighbors)
```
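One thing worth noting about the synthetic-data step above: `random.gauss(mu, sigma)` takes a mean and a standard deviation, not a lower and upper bound, so `random.gauss(min, max)` centers every dummy vector on the smallest embedding value with a very large spread, rather than sampling across the embedding range. A small stdlib check (the bounds are illustrative):

```python
import random

# `random.gauss(mu, sigma)` draws from a normal distribution with mean `mu`
# and standard deviation `sigma`; it does not draw between two bounds.
rng = random.Random(0)
lo, hi = -1.0, 1.0

gauss_samples = [rng.gauss(lo, hi) for _ in range(10_000)]      # like gauss(min, max)
uniform_samples = [rng.uniform(lo, hi) for _ in range(10_000)]  # actually spans [lo, hi]

gauss_mean = sum(gauss_samples) / len(gauss_samples)
uniform_mean = sum(uniform_samples) / len(uniform_samples)

# The gauss() samples cluster around `lo` (the mean), not the middle of [lo, hi]:
assert abs(gauss_mean - lo) < 0.1
assert abs(uniform_mean - 0.0) < 0.1
```

So the synthetic vectors end up in a very different region than the real embeddings, which can distort the Annoy index (and `t.build(3)` uses only 3 trees, which also lowers recall).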
| closed | 2023-12-20T08:22:34Z | 2023-12-20T08:46:59Z | https://github.com/serengil/deepface/issues/924 | [
"question"
] | dumbogeorge | 2 |
encode/uvicorn | asyncio | 1,872 | Ctrl+C does not close uvicorn on Windows when workers>1 because SIGINT is not caught by the parent process | ```python
import uvicorn
from fastapi import FastAPI

app = FastAPI()

if __name__ == "__main__":
    uvicorn.run("ccc:app", host="0.0.0.0", workers=2, port=8484)
```
> https://github.com/encode/uvicorn/blob/master/uvicorn/supervisors/multiprocess.py
``` python
class Multiprocess:
    def __init__(
        self,
        config: Config,
        target: Callable[[Optional[List[socket]]], None],
        sockets: List[socket],
    ) -> None:
        self.config = config
        self.target = target
        self.sockets = sockets
        self.processes: List[SpawnProcess] = []
        self.should_exit = threading.Event()
        self.pid = os.getpid()

    def signal_handler(self, sig: int, frame: Optional[FrameType]) -> None:  # Multiprocess.signal_handler never runs on Windows
        """
        A signal handler that is registered with the parent process.
        """
        self.should_exit.set()

    def run(self) -> None:
        self.startup()
        self.should_exit.wait()
        self.shutdown()

    def startup(self) -> None:
        message = "Started parent process [{}]".format(str(self.pid))
        color_message = "Started parent process [{}]".format(
            click.style(str(self.pid), fg="cyan", bold=True)
        )
        logger.info(message, extra={"color_message": color_message})

        for sig in HANDLED_SIGNALS:
            signal.signal(sig, self.signal_handler)

        for idx in range(self.config.workers):
            process = get_subprocess(
                config=self.config, target=self.target, sockets=self.sockets
            )
            process.start()
            self.processes.append(process)

    def shutdown(self) -> None:
        for process in self.processes:
            process.terminate()
            process.join()

        message = "Stopping parent process [{}]".format(str(self.pid))
        color_message = "Stopping parent process [{}]".format(
            click.style(str(self.pid), fg="cyan", bold=True)
        )
        logger.info(message, extra={"color_message": color_message})
```
When I send SIGINT to the parent process, it is still blocked by `self.should_exit.wait()`.
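For reference, a stdlib-only sketch of the polling workaround I have in mind (names are hypothetical, not uvicorn's actual code): on Windows, CPython cannot interrupt an indefinitely blocking `Event.wait()` with Ctrl+C, because the handler only gets a chance to run when control returns to the interpreter, so the wait has to use a timeout:

```python
import threading


def wait_interruptibly(event: threading.Event, poll_interval: float = 0.5) -> None:
    """Hypothetical replacement for `self.should_exit.wait()`: waiting with a
    timeout returns to the Python level periodically, which gives a SIGINT
    handler registered in the main thread a chance to run on Windows."""
    while not event.is_set():
        event.wait(timeout=poll_interval)


should_exit = threading.Event()
# Simulate a signal handler firing from elsewhere after a short delay.
threading.Timer(0.1, should_exit.set).start()
wait_interruptibly(should_exit, poll_interval=0.05)
print(should_exit.is_set())  # → True
```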
| closed | 2023-02-21T12:07:20Z | 2023-07-11T06:21:29Z | https://github.com/encode/uvicorn/issues/1872 | [
"need confirmation",
"windows",
"multiprocessing"
] | xiaotushaoxia | 5 |
ultralytics/yolov5 | pytorch | 12,943 | Improve training speed | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
How can I improve the training speed of the yolov5-seg model? The GPU is being used, but its utilization is very low; one epoch over eight thousand images takes nine minutes on a 3070 Ti card.
### Additional
_No response_ | closed | 2024-04-18T21:03:59Z | 2024-05-30T00:22:03Z | https://github.com/ultralytics/yolov5/issues/12943 | [
"question",
"Stale"
] | 2375963934a | 2 |
google-research/bert | tensorflow | 1,184 | Question about NSP in get_next_sentence_output | Hello, I have a question about NSP in the get_next_sentence_output function.
As shown in the function (https://github.com/google-research/bert/blob/master/run_pretraining.py#L285)
The size of the output layer is (hidden_size, 2), and the function calls the softmax function to calculate the NSP probability.
As commented in the function, the NSP task can be shown as binary classification.
Therefore, we could make the size of the output layer (hidden_size, 1) and call the sigmoid function instead, which is a slightly more computationally efficient approach.
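For reference, the equivalence behind this question can be checked numerically: a 2-way softmax over logits (z0, z1) gives exactly sigmoid(z1 - z0), so the (hidden_size, 1) + sigmoid variant expresses the same model up to reparameterization. A small stdlib sketch:

```python
import math


def softmax2_p1(z0: float, z1: float) -> float:
    """P(class 1) under a two-way softmax over logits (z0, z1)."""
    e0, e1 = math.exp(z0), math.exp(z1)
    return e1 / (e0 + e1)


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


# The two parameterizations agree for any logits:
for z0, z1 in [(0.3, -1.2), (2.0, 2.0), (-5.0, 4.0)]:
    assert abs(softmax2_p1(z0, z1) - sigmoid(z1 - z0)) < 1e-12
```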
Here is my question:
Why does the code treat this as a two-class problem and apply the softmax function, rather than treating it as a single-logit problem and applying the sigmoid function? Does the two-class formulation have any advantages? | open | 2020-12-08T02:37:03Z | 2020-12-08T02:37:03Z | https://github.com/google-research/bert/issues/1184 | [] | Kyeongpil | 0 |
errbotio/errbot | automation | 1,206 | Errbot should fail if git binary is not present | In order to let us help you better, please fill out the following fields as best you can:
### I am...
* [x] Reporting a bug
### I am running...
* Errbot version: Latest
* OS version: Fedora 28
* Python version: 3
* Using a virtual environment: yes
### Issue description
Installing a plugin fails silently if the `git` binary is not present on the system.
### Steps to reproduce
* Install errbot
* ensure git is not installed
* run `errbot -T`, then try to install a plugin via `!repos install x`
### Additional info
This was brought up on gitter.im.
| closed | 2018-05-07T20:12:34Z | 2019-06-20T01:11:45Z | https://github.com/errbotio/errbot/issues/1206 | [
"type: bug"
] | sijis | 3 |
plotly/dash | jupyter | 3,226 | add/change type annotations to satisfy mypy and other tools | With the release of dash 3.0 our CI/CD fails for stuff that used to work.
Here's a minimum working example:
```python
from dash import Dash, dcc, html
from dash.dependencies import Input, Output
from typing import Callable
app = Dash(__name__)
def create_layout() -> html.Div:
return html.Div([
dcc.Input(id='input-text', type='text', value='', placeholder='Enter text'),
html.Div(id='output-text')
])
app.layout = create_layout
@app.callback(Output('output-text', 'children'), Input('input-text', 'value'))
def update_output(value: str) -> str:
return f'You entered: {value}'
if __name__ == '__main__':
app.run(debug=True, port=9000)
```
running mypy on this file results in:
```bash
$ mypy t.py --strict
t.py:1: error: Skipping analyzing "dash": module is installed, but missing library stubs or py.typed marker [import-untyped]
t.py:2: error: Skipping analyzing "dash.dependencies": module is installed, but missing library stubs or py.typed marker [import-untyped]
t.py:2: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
t.py:15: error: Untyped decorator makes function "update_output" untyped [misc]
Found 3 errors in 1 file (checked 1 source file)
```
which we used to solve by adding `# type: ignore[misc]` to every `callback` call and by adding
```toml
[[tool.mypy.overrides]]
module = [
"dash.*",
"dash_ag_grid",
"dash_bootstrap_components.*",
"plotly.*",
]
ignore_missing_imports = true
```
to our `pyproject.toml`.
However, when updating to version 3, we get:
```bash
$ mypy --strict t.py
t.py:8: error: Returning Any from function declared to return "Div" [no-any-return]
t.py:13: error: Property "layout" defined in "Dash" is read-only [misc]
t.py:15: error: Call to untyped function "Input" in typed context [no-untyped-call]
t.py:15: error: Call to untyped function "Output" in typed context [no-untyped-call]
t.py:15: error: Call to untyped function "callback" in typed context [no-untyped-call]
t.py:15: note: Error code "no-untyped-call" not covered by "type: ignore" comment
Found 5 errors in 1 file (checked 1 source file)
```
without changing anything else.
This can't be intended behavior, right? How to fix that?
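Until type stubs cover these call sites, one generic workaround (illustrative only, not an official dash recipe) is to cast the untyped decorator to a typed signature so mypy accepts the call:

```python
from typing import Any, Callable, TypeVar, cast

F = TypeVar("F", bound=Callable[..., Any])

def untyped_decorator(func):  # stand-in for the untyped @app.callback(...)
    return func

# tell mypy the decorator preserves the decorated function's signature
typed_callback = cast(Callable[[F], F], untyped_decorator)

@typed_callback
def update_output(value: str) -> str:
    return f"You entered: {value}"

print(update_output("dash"))  # You entered: dash
```

The same `cast` trick (or a targeted `# type: ignore[no-untyped-call]`) can be applied per call site without loosening the whole module's strictness.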
community post: https://community.plotly.com/t/dash-3-0-fails-mypy/91308 | open | 2025-03-18T10:25:57Z | 2025-03-20T16:30:13Z | https://github.com/plotly/dash/issues/3226 | [
"bug",
"P2"
] | gothicVI | 6 |
jofpin/trape | flask | 295 | ImportError: No module named requests | `pip install -r requirements.txt`
`Requirement already satisfied`
but when I run `python2 trape.py --url https://facebook.com --port 8080` it returns an error:
```
Traceback (most recent call last):
File "trape.py", line 23, in <module>
from core.utils import utils #
File "/home/nipon/Desktop/trape/core/utils.py", line 21, in <module>
import requests, json
ImportError: No module named requests
```
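One common cause (an assumption here, not confirmed from the report) is that the `pip` on PATH installed into a Python 3 environment while `trape.py` runs under Python 2. A quick check of what the current interpreter can actually see (Python 3 syntax shown):

```python
import importlib.util
import sys

def has_module(name: str) -> bool:
    # True if the *current* interpreter can import `name`
    return importlib.util.find_spec(name) is not None

print(sys.version_info.major, has_module("requests"))
```

If this prints `False` under the interpreter used to run trape, install with that same interpreter, e.g. `python2 -m pip install -r requirements.txt`.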
| open | 2021-01-16T02:49:40Z | 2021-01-19T15:58:07Z | https://github.com/jofpin/trape/issues/295 | [] | Tajmirul | 1 |
deepset-ai/haystack | pytorch | 8,194 | unknown variant `function`, expected one of `system`, `user`, `assistant`, `tool` | **Describe the bug**
unknown variant `function`, expected one of `system`, `user`, `assistant`, `tool`
**Error message**
openai.UnprocessableEntityError: Failed to deserialize the JSON body into the target type: messages[1].role: unknown variant `function`, expected one of `system`, `user`, `assistant`, `tool`
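The backend only accepts the four listed roles, while older OpenAI-style payloads may still carry the legacy `function` role. A hypothetical pre-processing shim (not a Haystack API) that remaps such messages before sending:

```python
def remap_legacy_roles(messages: list) -> list:
    # replace the deprecated "function" role with "tool", leaving others untouched
    return [
        {**m, "role": "tool"} if m.get("role") == "function" else m
        for m in messages
    ]

msgs = [
    {"role": "system", "content": "You are helpful."},
    {"role": "function", "content": "{}"},
]
print(remap_legacy_roles(msgs))
```

Note that real `tool` messages typically also need a `tool_call_id`, so this only sketches the role mapping, not a full payload fix.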
**FAQ Check**
- [X] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS: Windows 11
- GPU/CPU: AMD X86_64
- Haystack version (commit or version number):2.3.0
| closed | 2024-08-12T10:59:47Z | 2024-12-08T02:10:35Z | https://github.com/deepset-ai/haystack/issues/8194 | [
"stale",
"community-triage"
] | springrain | 4 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 66 | v0.4 model: recorded a simple "你好你好" clip and the recognition result is bizarre | I recorded a "你好你好" ("hello hello") clip with arecord, and it was recognized as "兮好你行黑好好事". The recording format is 16-bit, 16 kHz, mono WAV. The recording command was:
arecord -f S16_LE -r 16000 test.wav | closed | 2019-01-04T02:29:41Z | 2019-03-16T05:45:23Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/66 | [] | blood0708 | 1 |
PaddlePaddle/ERNIE | nlp | 80 | Error when running run_ChnSentiCorp.sh (CPU version) | When running run_ChnSentiCorp.sh (CPU version), the process always gets killed automatically during model loading. Monitoring memory consumption shows that the 128 GB of RAM fills up quickly. Is this a memory leak? The corresponding parameters are shown below:


| closed | 2019-04-08T07:44:34Z | 2019-04-08T09:13:14Z | https://github.com/PaddlePaddle/ERNIE/issues/80 | [] | akina2016 | 1 |
jupyterhub/repo2docker | jupyter | 916 | Integration with Jupyter enterprise gateway and jupyterhub | Hello,
I'm deploying a Kubernetes cluster with jupyterhub and jupyter enterprise gateway, and I was wondering how to integrate repo2docker to permit docker build on-demand. I don't see any issue / proposal on the subject, but it would be the perfect modular jupyter installation: isolation and customization of notebooks via container.
### Proposed change
- instead of starting the container with the select image, jupyter entreprise gateway checks if the repo2docker recognized configuration files are present (Dockerfile, etc.).
- If no, it starts the default image
- If yes, it calls a docker build with repo2docker and kaniko in a dedicated pod. After the build is succesful, the image reference is given to jupyter entreprise gateway which can start the image.
### Alternative options
At the present time, choosing a specific notebook container image can be done using form during jupyterhub authentication. The cluster administrator has to build specific image based on user request and add its to the form choices.
### Who would use this feature?
Notebook user who wants customization of notebook environments.
### How much effort will adding it take?
I don't really know; I see three main tasks:

- modify repo2docker to just build the image without starting a container
- dedicate the build in a specific pod (using kaniko)
- pass the image name to jupyter enterprise gateway
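The first step of that flow can be sketched as a plain filesystem check; the file names below are real repo2docker configuration files, while the function itself is hypothetical:

```python
from pathlib import Path

# configuration files that repo2docker recognizes as build instructions
R2D_CONFIG_FILES = {
    "Dockerfile", "environment.yml", "requirements.txt",
    "runtime.txt", "apt.txt", "postBuild",
}

def needs_custom_build(repo_dir: str) -> bool:
    # True when the repo carries repo2docker-style build configuration,
    # i.e. the gateway should trigger a dedicated build instead of the default image
    root = Path(repo_dir)
    return any((root / name).is_file() for name in R2D_CONFIG_FILES)
```

If this returns False, the gateway starts the default image; otherwise it would hand the repo to the kaniko build pod and wait for the resulting image reference.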
### Who can do this work?
Devs from jupyter entreprise gateway and repo2docker.
| closed | 2020-06-21T16:51:48Z | 2021-02-22T08:07:49Z | https://github.com/jupyterhub/repo2docker/issues/916 | [
"needs: discussion"
] | kartoch | 2 |
Esri/arcgis-python-api | jupyter | 2,205 | v2.4 map widget not loading on Databricks | **Describe the bug**
Upgraded to v2.4 - now map widgets no longer render in Databricks.
Have attempted with Databricks runtime 13.3 LTS and 15.4 LTS. arcgis and arcgis-mapping libraries are both properly installed and able to be imported.
**To Reproduce**
```python
import arcgis
from arcgis.gis import GIS
from arcgis.map import Map
portal = GIS(url = "https://my_organization.com/portal", username = username, password = password)
print(portal.users.me.username) # Shows my user name
my_content = portal.content.search(query="owner:" + portal.users.me.username)
print(my_content) # Shows my content
m = portal.map()
m
# Alternatively, have tried this way as well, and it doesn't work:
import arcgis.map
m2 = arcgis.map.Map()
m2
```
Error:
```python
Loading the widget is taking longer than expected. We suggest the following…
Make sure that your cluster runs on DBR 11.0 (DBR 11.1 for GCP user) or higher.
Try to run the cell again.
Keep waiting if the widget is large or your internet is slow.
Reach out to your Databricks support.
```
**Screenshots**

**Expected behavior**
A widget should appear with an interactive map
**Platform**
Databricks - Runtime (DBR) 13.3 LTS and 15.4 LTS both do not work
| closed | 2025-01-22T19:13:12Z | 2025-03-11T14:21:59Z | https://github.com/Esri/arcgis-python-api/issues/2205 | [
"on-hold",
"As-Designed"
] | pmd84 | 5 |
drivendataorg/erdantic | pydantic | 22 | Add option to specify terminal nodes for splitting up large/deep trees | If a composition tree is enormous, it may be useful to split it up into multiple diagrams.
One way to do that is to specify terminal nodes indicated in a way to show that there is a cut there. | closed | 2021-02-11T21:58:59Z | 2021-02-15T05:47:55Z | https://github.com/drivendataorg/erdantic/issues/22 | [
"enhancement"
] | jayqi | 0 |
pyqtgraph/pyqtgraph | numpy | 2,416 | movable InfiniteLine grabbing does not respect the bounding rect |
### Short description
I use a custom implementation [found here](https://github.com/danielhrisca/asammdf/blob/1e7ac3c2f4e9e4c751c2ea2f79a2f04afbb0f311/asammdf/gui/widgets/cursor.py#L7) that has the following differences compared to a plain InfinteLine:
1. the line color does not change when the mouse cursor hovers over the line
2. the mouse cursor changes to a `Qt::SplitHCursor` (see https://doc.qt.io/qt-6/qt.html#CursorShape-enum) when the mouse hovers the line. This is used as a visual indicator that the line can be grabbed and moved.
I've noticed that coming from the left side of the line the mouse cursor changes to `Qt::SplitHCursor` 1 pixel before the line is actually movable (you can see this running the slightly modified example code bellow). Coming from the right side of the line there is no discrepancy.
### Code to reproduce
```python
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore
app = pg.mkQApp("InfiniteLine Example")
win = pg.GraphicsLayoutWidget(show=True, title="Plotting items examples")
win.resize(1000,600)
pg.setConfigOptions(antialias=True)
p1 = win.addPlot(title="Plot Items example", y=np.random.normal(size=100, scale=10), pen=0.5)
p1.setYRange(-40, 40)
inf1 = pg.InfiniteLine(movable=True, angle=90, label='x={value:0.2f}',
labelOpts={'position':0.1, 'color': (200,200,100), 'fill': (200,200,200,50), 'movable': True})
# this will change the cursor when hovering over the inifinte line
# ideally the cursor change and the line color change should happend at the same time
inf1.setCursor(QtCore.Qt.SplitHCursor)
inf2 = pg.InfiniteLine(movable=True, angle=0, pen=(0, 0, 200), bounds = [-20, 20], hoverPen=(0,200,0), label='y={value:0.2f}mm',
labelOpts={'color': (200,0,0), 'movable': True, 'fill': (0, 0, 200, 100)})
inf3 = pg.InfiniteLine(movable=True, angle=45, pen='g', label='diagonal',
labelOpts={'rotateAxis': [1, 0], 'fill': (0, 200, 0, 100), 'movable': True})
inf1.setPos([2,2])
p1.addItem(inf1)
p1.addItem(inf2)
p1.addItem(inf3)
targetItem1 = pg.TargetItem()
targetItem2 = pg.TargetItem(
pos=(30, 5),
size=20,
symbol="star",
pen="#F4511E",
label="vert={1:0.2f}",
labelOpts={
"offset": QtCore.QPoint(15, 15)
}
)
targetItem2.label().setAngle(45)
targetItem3 = pg.TargetItem(
pos=(10, 10),
size=10,
symbol="x",
pen="#00ACC1",
)
targetItem3.setLabel(
"Third Label",
{
"anchor": QtCore.QPointF(0.5, 0.5),
"offset": QtCore.QPointF(30, 0),
"color": "#558B2F",
"rotateAxis": (0, 1)
}
)
def callableFunction(x, y):
return f"Square Values: ({x**2:.4f}, {y**2:.4f})"
targetItem4 = pg.TargetItem(
pos=(10, -10),
label=callableFunction
)
p1.addItem(targetItem1)
p1.addItem(targetItem2)
p1.addItem(targetItem3)
p1.addItem(targetItem4)
lr = pg.LinearRegionItem(values=[70, 80])
p1.addItem(lr)
label = pg.InfLineLabel(lr.lines[1], "region 1", position=0.95, rotateAxis=(1,0), anchor=(1, 1))
if __name__ == '__main__':
pg.exec()
```
### Expected behavior
The cursor change and the line color change should happen at the same time. Once the cursor has changed, dragging should be possible.
### Real behavior
On the left side of the infinite line the cursor change is activated 1 pixel before the color change. Trying to drag the line at this point will result in a viewbox change instead of the line movement.
### Tested environment(s)
* PyQtGraph version: 0.12.4
* Qt Python binding: PySide6.2.0 and PySide 6.3.2
* Python version: 3.10
* NumPy version: 1.23.3
* Operating system: Windows 10 x64
* Installation method: pip
### Additional context
| open | 2022-09-15T12:43:32Z | 2022-10-01T02:32:46Z | https://github.com/pyqtgraph/pyqtgraph/issues/2416 | [
"InfiniteLine"
] | danielhrisca | 6 |
ivy-llc/ivy | pytorch | 28,335 | Fix Frontend Failing Test: torch - math.tensorflow.math.is_strictly_increasing | To-do List: https://github.com/unifyai/ivy/issues/27498 | closed | 2024-02-19T17:28:14Z | 2024-02-20T09:26:02Z | https://github.com/ivy-llc/ivy/issues/28335 | [
"Sub Task"
] | Sai-Suraj-27 | 0 |
tensorflow/tensor2tensor | deep-learning | 1,004 | a question about the code of universal transformer | https://github.com/tensorflow/tensor2tensor/blob/2f8423a7daf39c549fa4f87d369d3ff95e719e6c/tensor2tensor/models/research/universal_transformer_util.py#L1207
Is this supposed to be "(previous_state * (1 - update_weights))" instead of "(previous_state * 1 - update_weights)"?
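The two spellings are not equivalent in Python, since `*` binds tighter than `-`; a standalone check with made-up numbers (not the t2t tensors themselves):

```python
previous_state, update_weights = 2.0, 0.25

with_parens = previous_state * (1 - update_weights)  # convex-combination reading
as_written = previous_state * 1 - update_weights     # multiply first, then subtract

print(with_parens, as_written)  # 1.5 1.75
```

So the parenthesization genuinely changes the value being computed, which is exactly what the question is asking about.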
Thanks. | closed | 2018-08-17T20:18:36Z | 2018-08-30T00:42:49Z | https://github.com/tensorflow/tensor2tensor/issues/1004 | [] | yinboc | 1 |
streamlit/streamlit | data-visualization | 10,193 | Version information doesn't show in About dialog in 1.41 | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
We used to show which Streamlit version is running in the About dialog, but apparently that's broken in 1.41:

### Reproducible Code Example
_No response_
### Steps To Reproduce
Run any Streamlit app, go on app menu > About.
### Expected Behavior
_No response_
### Current Behavior
_No response_
### Is this a regression?
- [X] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.41
- Python version:
- Operating System:
- Browser:
### Additional Information
_No response_ | closed | 2025-01-15T19:31:52Z | 2025-01-16T15:26:20Z | https://github.com/streamlit/streamlit/issues/10193 | [
"type:bug",
"status:awaiting-user-response"
] | jrieke | 3 |
giotto-ai/giotto-tda | scikit-learn | 570 | [BUG] Cannot call module 'gtda' | Hi there,
I have just discovered Giotto TDA and am looking forward to getting stuck in. However, I am having a ModuleNotFoundError when I try to use the library.
I have tried both venv within PyCharm as well as my global interpreter in my laptop, but both come up with the error despite both being verified by using:
pip list
Interestingly enough, PyCharm recognises the module and autofills classes and the gtda when I am scripting, but despite being in the interpreter, I receive the same error.
I understand this is very likely on my end, but I have tried uninstalling and reinstalling python, gtda, creating new venv etc but have not seemed to make any progress. I was wondering if anyone else has had this issue and if so, what they did to resolve it.
| closed | 2021-03-26T22:24:14Z | 2021-06-05T12:55:20Z | https://github.com/giotto-ai/giotto-tda/issues/570 | [
"bug"
] | glacey1 | 5 |
yt-dlp/yt-dlp | python | 12,243 | [FranceTVSite] [FranceTV] - Unable to extract video ID | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
France
### Provide a description that is worded well enough to be understood
Can't download video from france.tv - Unable to extract video ID. Same issue on Linux / W11. Tested with different ISP.
Tested also with other episodes from this title ; the same.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
yt-dlp -vU -F https://www.france.tv/enfants/six-huit-ans/tortues-ninja-les-chevaliers-d-ecaille/saison-1/6136895-le-collectionneur-fou.html
[debug] Command-line config: ['-vU', '-F', 'https://www.france.tv/enfants/six-huit-ans/tortues-ninja-les-chevaliers-d-ecaille/saison-1/6136895-le-collectionneur-fou.html']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2025.01.26 from yt-dlp/yt-dlp [3b4531934] (pip)
[debug] Python 3.12.3 (CPython x86_64 64bit) - Linux-6.8.0-51-generic-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)
[debug] exe versions: none
[debug] Optional libraries: certifi-2023.11.17, requests-2.31.0, secretstorage-3.3.3, sqlite3-3.45.1, urllib3-2.0.7
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1839 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2025.01.26 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2025.01.26 from yt-dlp/yt-dlp)
[FranceTVSite] Extracting URL: https://www.france.tv/enfants/six-huit-ans/tortues-ninja-les-chevaliers-d-ecaille/saison-1/6136895-le-collectionneur-fou.html
[FranceTVSite] 6136895-le-collectionneur-fou: Downloading webpage
ERROR: [FranceTVSite] 6136895-le-collectionneur-fou: Unable to extract video ID; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/usr/local/lib/python3.12/dist-packages/yt_dlp/extractor/common.py", line 742, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/yt_dlp/extractor/francetv.py", line 349, in _real_extract
video_id = self._html_search_regex(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/yt_dlp/extractor/common.py", line 1382, in _html_search_regex
res = self._search_regex(pattern, string, name, default, fatal, flags, group)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/dist-packages/yt_dlp/extractor/common.py", line 1346, in _search_regex
raise RegexNotFoundError(f'Unable to extract {_name}')
``` | closed | 2025-01-31T10:32:24Z | 2025-01-31T11:38:04Z | https://github.com/yt-dlp/yt-dlp/issues/12243 | [
"duplicate",
"site-bug"
] | DraGula | 1 |
mwaskom/seaborn | matplotlib | 3,027 | JointGrid can't rotation the xticks | 
## How to rotation the xticks like the picture above ?
``` python
plt.figure(figsize = (10,8))
g = sns.JointGrid(x='u',y='t',data = tdata)
g.plot(sns.scatterplot, sns.histplot)
plt.show()
```
And "plt.figure" also can't work on "JoinGrid" | closed | 2022-09-15T14:54:43Z | 2022-09-15T15:26:29Z | https://github.com/mwaskom/seaborn/issues/3027 | [] | duckbill | 1 |
onnx/onnx | pytorch | 5,945 | how to print the weights & biases values of model.onnx file using c++ | # Ask a Question
How can I print the weights & biases values of a model.onnx file using C++?
### Question
Unable to get the weights & biases values; how can I do it? I am able to get their type and shape using C++.
### Further information
- Relevant Area: trying to get the values as shown in Netron
- Is this issue related to a specific model?
  **Model name**: a model with IR version > 3
  **Model opset**: N/A
### Notes
```cpp
// Print weights & biases of all graph initializers
auto graph = m_Model.graph();
if (graph.initializer_size() > 0)
{
    std::cout << "Weight & Biases Info:" << std::endl;
    for (auto& initializer : graph.initializer())
    {
        std::string name = initializer.name();
        std::cout << name << ": ";
        std::cout << initializer.DataType_Name(initializer.data_type()) << " ";
        if (initializer.dims_size() > 0)
        {
            std::cout << "[ ";
            for (auto& dim : initializer.dims())
            {
                std::cout << dim << " ";
            }
            std::cout << "]";
        }
        switch (initializer.data_type())
        {
            case onnx::TensorProto_DataType::TensorProto_DataType_FLOAT:
            {
                // often empty: exporters usually store values in raw_data instead
                for (const float& value : initializer.float_data())
                {
                    std::cout << value << ",";
                }
                break;
            }
            case onnx::TensorProto_DataType::TensorProto_DataType_INT64:
            {
                for (const int64_t& value : initializer.int64_data())
                {
                    std::cout << value << ",";
                }
                break;
            }
            default:
            {
                std::cout << "no TensorProto_DataType matched";
            }
        }
        std::cout << std::endl;
    }
}
```
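The usual reason the `float_data()` loop above prints nothing is that most exporters store initializer values in the `raw_data` field as little-endian bytes rather than in the typed repeated fields. A small Python illustration of decoding such bytes (synthetic data, not the actual model):

```python
import struct

# stand-in for initializer.raw_data(): four little-endian float32 values
raw = struct.pack("<4f", 1.0, -2.5, 0.0, 3.25)

count = len(raw) // 4
values = struct.unpack(f"<{count}f", raw)
print(values)  # (1.0, -2.5, 0.0, 3.25)
```

In C++ the analogous step is to reinterpret `initializer.raw_data().data()` as `const float*` when `raw_data` is non-empty; in Python, `onnx.numpy_helper.to_array(initializer)` handles both storage forms for you.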
| closed | 2024-02-18T12:10:37Z | 2024-02-19T01:29:46Z | https://github.com/onnx/onnx/issues/5945 | [
"question"
] | abhishek27m1992github | 1 |
biolab/orange3 | scikit-learn | 6,753 | Import Images Widget Info Typo |
**What's wrong?**
I imported an `image-data` folder with 3 labeled subfolders - `broccoli`, `capsicum`, `tomato` using the Image Analytics Add-on.

It showed `3 categorys`, instead of `3 categories`.

I tested by adding these two lines to [orange3/Orange/widgets/utils/localization/__init__.py](https://github.com/biolab/orange3/blob/master/Orange/widgets/utils/localization/__init__.py) locally, it somehow fixed the typo.

**What's your environment?**
- Operating system: Windows 10
- Orange version: 3.36.2
- How you installed Orange: `pip install Orange3`
| closed | 2024-03-04T11:53:34Z | 2024-03-15T11:36:16Z | https://github.com/biolab/orange3/issues/6753 | [
"bug",
"snack"
] | foongminwong | 1 |
keras-team/keras | machine-learning | 20,115 | PyDataset shape error | I constructed a network with an input size of (360,) and a generator with an output size of (96,360). I confirmed that the generator output size is (96,360) by calling the getitem method. But when using fit(), there will be an error message indicating inconsistent input sizes as 'expected shape=(None, 360), found shape=(1, 96, 360)'. How to solve it? | closed | 2024-08-13T07:08:48Z | 2024-09-12T01:58:55Z | https://github.com/keras-team/keras/issues/20115 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | Sticcolet | 11 |
huggingface/datasets | deep-learning | 7,473 | Webdataset data format problem | ### Describe the bug
Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1
Error code: FileFormatMismatchBetweenSplitsError
All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted as being the webdataset format? (I don't think there is currently a way, but happy to be told that I am wrong.)
### Steps to reproduce the bug
```
import datasets
datasets.load_dataset("ejschwartz/idioms")
```
### Expected behavior
The dataset loads. Alternatively, there is a YAML syntax for manually specifying the format.
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.8.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.28.1
- PyArrow version: 19.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0 | closed | 2025-03-21T17:23:52Z | 2025-03-21T19:19:58Z | https://github.com/huggingface/datasets/issues/7473 | [] | edmcman | 1 |
huggingface/datasets | pandas | 6,720 | TypeError: 'str' object is not callable | ### Describe the bug
I am trying to get the HPLT datasets on the hub. Downloading/re-uploading would be too time- and resource consuming so I wrote [a dataset loader script](https://huggingface.co/datasets/BramVanroy/hplt_mono_v1_2/blob/main/hplt_mono_v1_2.py). I think I am very close but for some reason I always get the error below. It happens during the clean-up phase where the directory cannot be removed because it is not empty.
My only guess would be that this may have to do with zstandard
```
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1744, in _prepare_split_single
writer.write(example, key)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write
self.write_examples_on_file()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 434, in write_examples_on_file
if self.schema
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 409, in schema
else (pa.schema(self._features.type) if self._features is not None else None)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1643, in type
return get_nested_type(self)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in get_nested_type
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in <dictcomp>
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1221, in get_nested_type
value_type = get_nested_type(schema.feature)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1228, in get_nested_type
return schema()
TypeError: 'str' object is not callable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1753, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 588, in finalize
self.write_examples_on_file()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 434, in write_examples_on_file
if self.schema
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 409, in schema
else (pa.schema(self._features.type) if self._features is not None else None)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1643, in type
return get_nested_type(self)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in get_nested_type
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in <dictcomp>
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1221, in get_nested_type
value_type = get_nested_type(schema.feature)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1228, in get_nested_type
return schema()
TypeError: 'str' object is not callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 959, in incomplete_dir
yield tmp_dir
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1767, in _download_and_prepare
super()._download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1100, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1605, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1762, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pricie/vanroy/.config/JetBrains/PyCharm2023.3/scratches/scratch_5.py", line 4, in <module>
ds = load_dataset(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/load.py", line 2549, in load_dataset
builder_instance.download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 985, in download_and_prepare
with incomplete_dir(self._output_dir) as tmp_output_dir:
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 966, in incomplete_dir
shutil.rmtree(tmp_dir)
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/shutil.py", line 731, in rmtree
onerror(os.rmdir, path, sys.exc_info())
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/shutil.py", line 729, in rmtree
os.rmdir(path)
OSError: [Errno 39] Directory not empty: '/home/pricie/vanroy/.cache/huggingface/datasets/BramVanroy___hplt_mono_v1_2/ky/1.2.0/7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete'
```
Interestingly, though, this directory _does_ appear to be empty:
```shell
> cd /home/pricie/vanroy/.cache/huggingface/datasets/BramVanroy___hplt_mono_v1_2/ky/1.2.0/7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete
> ls -lah
total 0
drwxr-xr-x. 1 vanroy vanroy 0 Mar 7 12:01 .
drwxr-xr-x. 1 vanroy vanroy 304 Mar 7 11:52 ..
> cd ..
> ls
7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47_builder.lock 7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset(
"BramVanroy/hplt_mono_v1_2",
"ky",
trust_remote_code=True
)
```
### Expected behavior
No error.
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0
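A possible interim workaround (my own sketch, not an official fix): clear the stale `.incomplete` directory and its sibling `_builder.lock` file before retrying `load_dataset`, since a leftover lock or an invisibly held file is a plausible reason `shutil.rmtree` sees the directory as non-empty. The helper name and path layout are illustrative (the layout is taken from the traceback above):

```python
import shutil
from pathlib import Path

def clear_stale_cache(config_dir: Path) -> None:
    """Remove *.incomplete dirs and *_builder.lock files left by a crashed run.

    `config_dir` is the versioned cache directory, e.g.
    ~/.cache/huggingface/datasets/<namespace>___<name>/<config>/<version>/
    """
    for incomplete in config_dir.glob("*.incomplete"):
        shutil.rmtree(incomplete, ignore_errors=True)
    for lock in config_dir.glob("*_builder.lock"):
        lock.unlink(missing_ok=True)
```

After running this against the `1.2.0` directory from the traceback, the retried `load_dataset` call starts from a clean slate instead of tripping over the half-written state.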
| closed | 2024-03-07T11:07:09Z | 2024-03-08T07:34:53Z | https://github.com/huggingface/datasets/issues/6720 | [] | BramVanroy | 2 |
roboflow/supervision | deep-learning | 1,362 | sv changes bbox shape for object detection with YOLOv8?? | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
I've been using Supervision, its tracker, its annotators, ... Nice work!! However, I've noticed that when doing object detection with YOLOv8, the bbox coordinates from Ultralytics are changed by Supervision even though they refer to the same detection. The following screenshot shows a detected object as provided by YOLO (`ultralytics.Result`), both before doing `supervision_tracker.update(results[0])` and after passing it to `supervision_tracker`.

The bboxes are different. I expected them not to change...
Can this bbox shape change be removed? I would like to keep the original bbox shape.
Thanks!!
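As far as I can tell (this is my reading, not confirmed by the maintainers), the conversion itself (`sv.Detections.from_ultralytics`) copies `boxes.xyxy` verbatim; it is the ByteTrack update that replaces the boxes with its Kalman-filter state, which would explain the drift. A small diagnostic sketch (pure NumPy; the commented usage lines are illustrative) to confirm which step changes them:

```python
import numpy as np

def boxes_match(a, b, atol: float = 1.0) -> bool:
    """True if two (N, 4) xyxy box arrays agree within `atol` pixels."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return a.shape == b.shape and bool(np.allclose(a, b, atol=atol))

# Hypothetical usage against the two stages:
#   raw     = sv.Detections.from_ultralytics(results[0]).xyxy   # conversion only
#   tracked = tracker.update_with_detections(raw_detections).xyxy  # after ByteTrack
#   boxes_match(results[0].boxes.xyxy.cpu().numpy(), raw)   -> conversion at fault if False
#   boxes_match(raw, tracked)                               -> tracker smoothing if False
```

If the first comparison already fails, the conversion is at fault; otherwise the change comes from the tracker's smoothing and is presumably by design.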
### Additional
_No response_ | open | 2024-07-15T10:20:40Z | 2024-07-16T18:01:17Z | https://github.com/roboflow/supervision/issues/1362 | [
"question"
] | abelBEDOYA | 10 |
MaxHalford/prince | scikit-learn | 95 | CA. method fit doesn't work | Hello
I tried to fit a CA model and then build the CA factor map, but it doesn't work.
I checked my data: the df looks good (like in your example).
I attach a screenshot:

So the method that builds the plot never ran, because `ca.fit(X)` had failed. | closed | 2020-08-21T10:58:19Z | 2020-08-21T12:15:23Z | https://github.com/MaxHalford/prince/issues/95 | [] | Muilton | 3
allure-framework/allure-python | pytest | 481 | How to create a custom Allure step function for sensitive data |
#### I'm submitting a ...
- [ ] bug report
- [x] feature request
- [ ] support request => Please do not submit support request here, see note at the top of this template.
#### What is the current behavior?
With a step decorator like this:

It takes an element (a text box) and enters the value into it. In the step title I show the value that I want to enter. No matter what I put in the step title, the report always shows the info that was passed as arguments to the function.
My report looks like the following image:

#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
Not a bug
#### What is the expected behavior?
Thus, the "value" argument will always be displayed, and that is something I cannot have on certain projects. I could live with not showing the value at all, or with changing it to something like '*****'. Is there any way to make a custom step function that solves my problem?
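A possible approach (a sketch, not an official Allure API for this): wrap the call in a plain `with allure.step(title)` context manager instead of the `@allure.step` decorator — the context-manager form records only the title and never captures the function's arguments. The `step_factory` parameter below is my own addition so the pattern can be exercised without a live Allure run:

```python
import functools

def masked_step(title, step_factory=None):
    """Run the wrapped function inside a step that shows only `title`.

    Unlike @allure.step, the arguments (e.g. a password) are never
    attached to the report -- only the fixed title is.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            factory = step_factory
            if factory is None:
                import allure          # imported lazily; the real reporter
                factory = allure.step
            with factory(title):       # title only -- no parameter capture
                return func(*args, **kwargs)
        return wrapper
    return decorator
```

Usage would look like `@masked_step("Type '*****' into the password field")` on the send-keys helper: the report shows the masked title while the real value still reaches the element.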
#### What is the motivation / use case for changing the behavior?
I am currently working on a test-automation team, using Python and Allure to generate reports of all the test cases that we run. Sometimes we deal with sensitive data (e.g., passwords) that I can't show directly in the reports due to data-protection policies.
#### Please tell us about your environment:
- Allure version: 2.13.0
- Test framework: pytest@3.6.1
- Allure adaptor: pytest-allure-adaptor@1.7.10
| open | 2020-03-30T07:38:11Z | 2023-01-23T08:31:04Z | https://github.com/allure-framework/allure-python/issues/481 | [
"theme:core",
"task:new feature"
] | CarlosHernandezP | 2 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,646 | Return object by default for python_type, types that have union types should just return it | ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10177
<div type='discussions-op-text'>
<sup>Originally posted by **amacfie-tc** August 2, 2023</sup>
The [`python_type`](https://docs.sqlalchemy.org/en/20/core/type_api.html#sqlalchemy.types.TypeEngine.python_type) for JSONB columns in Postgres is `dict`. Shouldn't it be `JSONType` where
```python
JSONType = Union[str, int, float, bool, None, dict[str, "JSONType"], list["JSONType"]]
```
?
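For context, the current behavior can be checked directly (a sketch; `python_type` on the JSON family returns the container class, per the premise above, with no hint that scalars and lists are also valid values):

```python
from sqlalchemy.types import JSON
from sqlalchemy.dialects.postgresql import JSONB

# Both report plain `dict`:
print(JSON().python_type)    # <class 'dict'>
print(JSONB().python_type)   # <class 'dict'>
```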
| closed | 2023-11-16T14:13:01Z | 2025-02-13T16:32:15Z | https://github.com/sqlalchemy/sqlalchemy/issues/10646 | [
"datatypes",
"typing"
] | CaselIT | 2 |
deepinsight/insightface | pytorch | 2,378 | Install mac numpy h import error | ```
ipython
import numpy
numpy.get_include()
sudo cp -r /Users/xxxxxxxxxx/anaconda3/envs/sd/lib/python3.10/site-packages/numpy/core/include/numpy /usr/local/include
```
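A less invasive alternative sketch (my suggestion, not from the original report): point the compiler at NumPy's headers via `CFLAGS` for one build, instead of copying them into `/usr/local/include`:

```shell
# Locate NumPy's bundled headers for the current interpreter ...
NUMPY_INC="$(python3 -c 'import numpy; print(numpy.get_include())')"
echo "${NUMPY_INC}"

# ... and expose them only for this one build (hypothetical usage):
# CFLAGS="-I${NUMPY_INC}" pip install insightface
```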
Copying the headers this way perfectly solves the problem. | open | 2023-07-20T02:21:07Z | 2023-07-20T02:21:07Z | https://github.com/deepinsight/insightface/issues/2378 | [] | zdxpan | 0
2noise/ChatTTS | python | 147 | ModuleNotFoundError: No module named '_pynini' (to those running this on Win10: I didn't change anything about pynini, so why does it report this error?) | INFO:ChatTTS.core:Load from local: ./ChatTTS
WARNING:ChatTTS.utils.gpu_utils:No GPU found, use CPU instead
INFO:ChatTTS.core:use cpu
INFO:ChatTTS.core:vocos loaded.
INFO:ChatTTS.core:dvae loaded.
INFO:ChatTTS.core:gpt loaded.
INFO:ChatTTS.core:decoder loaded.
INFO:ChatTTS.core:tokenizer loaded.
INFO:ChatTTS.core:All initialized.
INFO:ChatTTS.core:All initialized.
Traceback (most recent call last):
File "E:\Python310\lib\idlelib\run.py", line 578, in runcode
exec(code, self.locals)
File "D:\IDM Download\ChatTTS-main\test.py", line 10, in <module>
wavs = chat.infer(texts, use_decoder=True)
File "D:\IDM Download\ChatTTS-main\ChatTTS\core.py", line 145, in infer
self.init_normalizer(_lang)
File "D:\IDM Download\ChatTTS-main\ChatTTS\core.py", line 182, in init_normalizer
from nemo_text_processing.text_normalization.normalize import Normalizer
File "E:\Anaconda3/Lib/site-packages\nemo_text_processing\text_normalization\__init__.py", line 15, in <module>
from nemo_text_processing.text_normalization.normalize import Normalizer
File "E:\Anaconda3/Lib/site-packages\nemo_text_processing\text_normalization\normalize.py", line 28, in <module>
import pynini
File "E:\Anaconda3/Lib/site-packages\pynini\__init__.py", line 1, in <module>
from _pynini import *
ModuleNotFoundError: No module named '_pynini'

<img width="536" alt="微信图片_20240531232135" src="https://github.com/2noise/ChatTTS/assets/29656860/a28caeeb-0733-4fb8-b707-ef10ea903d76">
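Two things stand out in the traceback (my reading, not a confirmed diagnosis): the script runs under `E:\Python310` (IDLE) while packages are imported from `E:\Anaconda3/Lib/site-packages`, so the compiled `_pynini` extension may simply not match the interpreter; and pynini ships no official Windows wheels on PyPI, so a pip-installed copy on Win10 is often broken. A possible fix sketch, assuming a conda setup is acceptable (conda-forge provides pynini builds primarily for Linux/macOS; on Windows, running under WSL is the commonly reported route):

```shell
# Run everything inside one conda env so the interpreter and
# site-packages match, and take pynini from conda-forge:
conda create -y -n chattts python=3.10
conda activate chattts
conda install -y -c conda-forge pynini=2.1.5
pip install nemo_text_processing
```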
| closed | 2024-05-31T15:23:17Z | 2024-07-18T04:01:48Z | https://github.com/2noise/ChatTTS/issues/147 | [
"stale"
] | MiniSnk | 3 |
flasgger/flasgger | flask | 162 | Using marshmallow schemas gives "not defined!" error in output | For instance, when running the ["colors_with_schema" example](https://github.com/rochacbruno/flasgger/blob/master/examples//colors_with_schema.py), instead of rendering the schema (as the [base_model_view](https://github.com/rochacbruno/flasgger/blob/master/examples//base_model_view.py) example does), the output is: `<span class="strong">Palette is not defined!</span>`
This can also be seen [live](http://flasgger.pythonanywhere.com/colors_with_schema/apidocs/#!/default/get_colors_palette). | closed | 2017-10-25T15:04:24Z | 2018-01-02T13:27:11Z | https://github.com/flasgger/flasgger/issues/162 | [] | kibernick | 1 |