| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
chiphuyen/stanford-tensorflow-tutorials | nlp | 46 | Cannot run chatbot.py in python=3.5, tensorflow=1.2.1?? | I have trouble running chatbot.py. Is it because of the TensorFlow version? How can I fix it?
```
python chatbot.py --mode train
Data ready!
Bucketing conversation number 9999
Bucketing conversation number 19999
Bucketing conversation number 9999
Bucketing conversation number 19999
Bucketing conversation number 29999
Bucketing conversation number 39999
Bucketing conversation number 49999
Bucketing conversation number 59999
Bucketing conversation number 69999
Bucketing conversation number 79999
Bucketing conversation number 89999
Bucketing conversation number 99999
Bucketing conversation number 109999
Bucketing conversation number 119999
Bucketing conversation number 129999
Bucketing conversation number 139999
Bucketing conversation number 149999
Bucketing conversation number 159999
Bucketing conversation number 169999
Bucketing conversation number 179999
Bucketing conversation number 189999
Number of samples in each bucket:
[103459]
Bucket scale:
[1.0]
Initialize new model
Create placeholders
Create inference
Creating loss...
It might take a couple of minutes depending on how many buckets you have.
Traceback (most recent call last):
  File "chatbot.py", line 262, in <module>
    main()
  File "chatbot.py", line 257, in main
    train()
  File "chatbot.py", line 138, in train
    model.build_graph()
  File "C:\Users\Alan\Documents\Udacity Deep learning\chatbot\stanford-tensorflow-tutorials-master\stanford-tensorflow-tutorials-master\assignments\chatbot\model.py", line 132, in build_graph
    self._create_loss()
  File "C:\Users\Alan\Documents\Udacity Deep learning\chatbot\stanford-tensorflow-tutorials-master\stanford-tensorflow-tutorials-master\assignments\chatbot\model.py", line 100, in _create_loss
    softmax_loss_function=self.softmax_loss_function)
  File "C:\Program Files\Anaconda3\envs\rnn2\lib\site-packages\tensorflow\contrib\legacy_seq2seq\python\ops\seq2seq.py", line 1221, in model_with_buckets
    softmax_loss_function=softmax_loss_function))
  File "C:\Program Files\Anaconda3\envs\rnn2\lib\site-packages\tensorflow\contrib\legacy_seq2seq\python\ops\seq2seq.py", line 1134, in sequence_loss
    softmax_loss_function=softmax_loss_function))
  File "C:\Program Files\Anaconda3\envs\rnn2\lib\site-packages\tensorflow\contrib\legacy_seq2seq\python\ops\seq2seq.py", line 1089, in sequence_loss_by_example
    crossent = softmax_loss_function(labels=target, logits=logit)
TypeError: sampled_loss() got an unexpected keyword argument 'logits'
``` | open | 2017-08-06T16:47:31Z | 2018-03-24T17:41:01Z | https://github.com/chiphuyen/stanford-tensorflow-tutorials/issues/46 | [] | akirannz | 4 |
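For the `TypeError` in the traceback above, the usual cause is the TF ≥ 1.0 API change: `legacy_seq2seq` now calls the user-supplied loss as `softmax_loss_function(labels=..., logits=...)`, while the tutorial-era code defined `sampled_loss(inputs, labels)`. A framework-free sketch of the signature change (function names come from the traceback; the old-style body is a stand-in for the tutorial's actual `tf.nn.sampled_softmax_loss` call):

```python
# seq2seq.py (TF >= 1.0) invokes the custom loss with keyword arguments:
#     crossent = softmax_loss_function(labels=target, logits=logit)
# An old-style definition cannot accept those keyword names:
def sampled_loss_old(inputs, labels):        # pre-1.0 parameter order
    return ("loss-for", labels, inputs)      # stand-in for sampled_softmax_loss

# Fix: accept the new keyword names and map them onto the old body.
# (Inside, tf.nn.sampled_softmax_loss in TF >= 1.2 likewise takes
# labels=... and inputs=... keyword arguments.)
def sampled_loss(labels, logits):
    return sampled_loss_old(inputs=logits, labels=labels)

# The framework-style call that used to raise TypeError now succeeds:
result = sampled_loss(labels="target", logits="logit")
```

With this change applied to the chatbot's `model.py`, the `_create_loss` step should get past the keyword-argument mismatch.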
dask/dask | scikit-learn | 11824 | read_parquet with empty columns gives incorrect partitions |
**Describe the issue**:
Reading parquet files and either subsetting to empty columns or passing `columns=[]` results in inconsistent `map_partitions` behavior compared with non-empty columns.
**Minimal Complete Verifiable Example**:
```python
import pandas as pd

import dask.dataframe as dd
# Write some dummy data w/ 2 partitions
ddf = dd.from_pandas(pd.DataFrame({"a": range(12)}), npartitions=2)
ddf.to_parquet("/tmp/to_parquet/")
# Reload normally and it has 2 partitions; mapping gives 2 outputs
reloaded = dd.read_parquet("/tmp/to_parquet")
print(f"{reloaded.map_partitions(len).compute()}")
# Output:
# 0 6
# 1 6
# dtype: int64
# Reload with empty columns, or reload and then subset to no columns, and suddenly mapping gives 1 output
print(f"{reloaded[[]].map_partitions(len).compute()}")
rereloaded = dd.read_parquet("/tmp/to_parquet", columns=[])
print(f"{rereloaded.npartitions=}")
print(f"{rereloaded.map_partitions(len).compute()}")
# Output:
# 0 12
# dtype: int64
# rereloaded.npartitions=2
# 0 12
# dtype: int64
```
Even though the number of partitions is still listed as 2, mapping gives just a single partition w/ all the rows.
**Anything else we need to know?**:
**Environment**:
- Dask version: `dask, version 2025.2.0`
- Python version: `Python 3.10.16`
- Operating System: MacOS
- Install method (conda, pip, source): uv pip
| closed | 2025-03-12T00:23:51Z | 2025-03-20T13:31:47Z | https://github.com/dask/dask/issues/11824 | [
"dataframe",
"dask-expr"
] | bnaul | 2 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 827 | [FEATURE]: Separate out Personal Information logs from app logs | ### Feature summary
Enhance logger such that separate logs are generated for PII
### Feature description
Regular logs won't contain any PII, giving contributors confidence to upload logs without worrying about exposing PII.
### Motivation
_No response_
### Alternatives considered
_No response_
### Additional context
_No response_ | closed | 2024-11-12T23:10:28Z | 2025-01-31T23:39:09Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/827 | [
"enhancement",
"stale"
] | surapuramakhil | 1 |
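One way to implement the request above, sketched with the stdlib `logging` module (the class and flag names below are mine, not from the AIHawk codebase): a `logging.Filter` routes records marked via `extra={"pii": True}` to a dedicated handler, so the regular app log never sees PII.

```python
import io
import logging

class PIIFilter(logging.Filter):
    """Pass only records whose `pii` flag matches `want_pii`."""
    def __init__(self, want_pii: bool):
        super().__init__()
        self.want_pii = want_pii

    def filter(self, record: logging.LogRecord) -> bool:
        return bool(getattr(record, "pii", False)) == self.want_pii

logger = logging.getLogger("aihawk")
logger.setLevel(logging.INFO)

app_stream, pii_stream = io.StringIO(), io.StringIO()  # stand-ins for log files
app_handler = logging.StreamHandler(app_stream)
app_handler.addFilter(PIIFilter(want_pii=False))       # app.log: no PII ever
pii_handler = logging.StreamHandler(pii_stream)
pii_handler.addFilter(PIIFilter(want_pii=True))        # pii.log: PII only
logger.addHandler(app_handler)
logger.addHandler(pii_handler)

logger.info("applied to job 12345")                    # safe to share
logger.info("name=Jane Doe", extra={"pii": True})      # goes to PII log only
```

Uploading `app.log` for a bug report is then safe by construction, since nothing unflagged can carry PII into it only by mistake in the message itself.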
mjhea0/flaskr-tdd | flask | 58 | Consider renaming app.test.py to something else | This is mostly an FYI as I don't really want or expect you to change the name everywhere. Maybe you could add a note about it to the tutorial: VSCode's python test extensions don't handle dots in the filename per [this](https://github.com/microsoft/vscode-python/issues/8490#issuecomment-553181223). Test discovery fails, etc. If you rename it to app_test.py suddenly everything works as desired. They say this is intentional. | closed | 2019-11-20T22:38:12Z | 2020-10-14T00:04:55Z | https://github.com/mjhea0/flaskr-tdd/issues/58 | [] | markdall | 2 |
ultralytics/ultralytics | deep-learning | 19753 | Export RT-DETR to ONNX gets wrong parameters | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I used the following code to export a pre-trained RT-DETR model to ONNX:
```
from ultralytics import RTDETR
model = RTDETR("detection/ultralytics/runs/rtdert_ad/rtdert_resnet50_tiny_x_e3003/weights/best.pt")
model.export(
format="onnx",
imgsz=(960, 480),
half=False,
simplify=False,
opset=None,
device="0",
dynamic=False)
```
But the exported ONNX model has different parameters from the RT-DETR model.
For example, for the first 7x7 conv, the ONNX model's values (viewed in Netron) are (too large):
```
min: -2.9261393547058105
max: 3.2974531650543213
```
while the RT-DETR model's first 7x7 conv values are:
```
min: -0.305908203125
max: 0.3447265625
```
Here is the version of installed packages:
```
onnx 1.16.1
onnxruntime 1.20.1
onnxruntime-gpu 1.20.0
onnxsim 0.4.36
onnxslim 0.1.48
torch 2.4.1+cu118
ultralytics 8.3.73
ultralytics-thop 2.0.14
```
Any ideas on how to deal with this problem?
Thanks a lot.
### Additional
_No response_ | closed | 2025-03-18T07:44:18Z | 2025-03-18T09:01:05Z | https://github.com/ultralytics/ultralytics/issues/19753 | [
"question",
"detect",
"exports"
] | civat | 7 |
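A likely explanation for the weight mismatch above (hedged, since it cannot be confirmed from the issue alone): exporters commonly fold BatchNorm into the preceding convolution, which rescales the stored weights without changing the computed function, so the min/max seen in Netron differ from the checkpoint values. A scalar sketch of the folding arithmetic:

```python
import math

# One scalar "conv weight" followed by batch-norm with learned gamma/beta
# and running statistics. BN folding rewrites:
#   w' = w * gamma / sqrt(var + eps)
#   b' = beta + (b - mean) * gamma / sqrt(var + eps)
w, b = 0.34, 0.0
gamma, beta, mean, var, eps = 9.5, 0.1, 0.02, 1.0, 1e-5

scale = gamma / math.sqrt(var + eps)
w_fused = w * scale
b_fused = beta + (b - mean) * scale

x = 0.7
original = (w * x + b - mean) * scale + beta   # conv followed by batch-norm
fused = w_fused * x + b_fused                  # single folded conv
# same output, but the stored weight changed from 0.34 to roughly 3.23
```

The illustrative numbers above are mine; the point is only that identical outputs are compatible with very different stored weight ranges, which matches the ~10x gap reported in the issue.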
aleju/imgaug | deep-learning | 693 | String format expects number but gets string | In imgaug/augmenters/meta.py, the string formatter expects a number but receives a string from `type(image).__name__`.
| open | 2020-06-24T21:21:50Z | 2020-06-24T21:21:50Z | https://github.com/aleju/imgaug/issues/693 | [] | r0mac09 | 0 |
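The mismatch reported above can be illustrated without imgaug. The exact format string in `meta.py` is not shown in the issue, so the `%d` below is illustrative of the pattern, not a quote of the code:

```python
name = type(123).__name__        # "int": a str, even when the value is numeric

failed = False
try:
    "expected %d" % name         # numeric format spec applied to a str
except TypeError:
    failed = True                # raises TypeError, as in the report

msg = "got image of type {}".format(name)   # type-agnostic placeholder works
```

The generic fix is to use a plain `{}`/`%s` placeholder whenever the value being interpolated is a type name rather than a number.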
ageitgey/face_recognition | python | 1509 | [Bug] Fails to run facerec_from_webcam_faster.py | * face_recognition version: 1.3.0
* Python version: 3.10.10
* Operating System: MacOS
### Description
Failed to run the real-time example code; I got the following error after installing the dependencies:
```
Traceback (most recent call last):
  File ".../face_recognition/examples/facerec_from_webcam_faster.py", line 55, in <module>
    face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
  File ".../face_recognition/face_recognition/api.py", line 214, in face_encodings
    return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]
  File ".../face_recognition/face_recognition/api.py", line 214, in <listcomp>
    return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]
TypeError: compute_face_descriptor(): incompatible function arguments. The following argument types are supported:
1. (self: _dlib_pybind11.face_recognition_model_v1, img: numpy.ndarray[(rows,cols,3),numpy.uint8], face: _dlib_pybind11.full_object_detection, num_jitters: int = 0, padding: float = 0.25) -> _dlib_pybind11.vector
2. (self: _dlib_pybind11.face_recognition_model_v1, img: numpy.ndarray[(rows,cols,3),numpy.uint8], num_jitters: int = 0) -> _dlib_pybind11.vector
3. (self: _dlib_pybind11.face_recognition_model_v1, img: numpy.ndarray[(rows,cols,3),numpy.uint8], faces: _dlib_pybind11.full_object_detections, num_jitters: int = 0, padding: float = 0.25) -> _dlib_pybind11.vectors
4. (self: _dlib_pybind11.face_recognition_model_v1, batch_img: List[numpy.ndarray[(rows,cols,3),numpy.uint8]], batch_faces: List[_dlib_pybind11.full_object_detections], num_jitters: int = 0, padding: float = 0.25) -> _dlib_pybind11.vectorss
5. (self: _dlib_pybind11.face_recognition_model_v1, batch_img: List[numpy.ndarray[(rows,cols,3),numpy.uint8]], num_jitters: int = 0) -> _dlib_pybind11.vectors
Invoked with: <_dlib_pybind11.face_recognition_model_v1 object at 0x112682530>, array([[[121, 114, 101],
[121, 114, 102],
[121, 113, 103],
...,
[250, 253, 251],
[251, 253, 252],
[250, 253, 252]],
[[120, 112, 100],
[123, 115, 105],
[120, 112, 102],
...,
[253, 255, 255],
[253, 255, 255],
[252, 254, 254]],
[[126, 118, 106],
[123, 115, 104],
[124, 116, 106],
...,
[253, 255, 255],
[253, 255, 255],
[253, 255, 255]],
...,
[[138, 196, 193],
[157, 217, 213],
[144, 204, 200],
...,
[227, 210, 191],
[226, 208, 187],
[229, 206, 177]],
[[162, 222, 217],
[146, 205, 203],
[163, 225, 222],
...,
[234, 215, 198],
[228, 210, 188],
[229, 207, 181]],
[[137, 196, 192],
[164, 226, 221],
[135, 194, 191],
...,
[ 90, 74, 59],
[227, 211, 192],
[234, 213, 188]]], dtype=uint8), <_dlib_pybind11.full_object_detection object at 0x1126824b0>, 1
``` | open | 2023-05-23T07:21:35Z | 2023-09-03T06:53:28Z | https://github.com/ageitgey/face_recognition/issues/1509 | [] | WaveBird | 1 |
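A frequent cause of this dlib `TypeError` (hedged — it is one of several possibilities): `rgb_small_frame` was produced by the BGR-to-RGB slice `frame[:, :, ::-1]`, which yields a negative-stride *view* that newer dlib/pybind11 bindings reject; `np.ascontiguousarray` (or `cv2.cvtColor`) produces an array the binding accepts. A sketch, assuming NumPy is available:

```python
import numpy as np

frame = np.zeros((4, 4, 3), dtype=np.uint8)   # stand-in for a captured BGR frame

rgb_view = frame[:, :, ::-1]                  # channel-reversed view: not contiguous
rgb_copy = np.ascontiguousarray(frame[:, :, ::-1])  # contiguous copy dlib accepts
```

In the example script, replacing the bare slice with the `ascontiguousarray` copy before calling `face_recognition.face_encodings` is the commonly reported workaround.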
tensorflow/tensor2tensor | machine-learning | 1530 | Is there a good example tutorial on how to use Tensor2Tensor? | Is there a good example tutorial on how to use Tensor2Tensor? | closed | 2019-04-03T05:43:35Z | 2019-04-08T15:37:34Z | https://github.com/tensorflow/tensor2tensor/issues/1530 | [] | zqs01 | 1 |
scrapy/scrapy | web-scraping | 6355 | Fails to fetch request with hyphens in 3rd and 4th position of domain |
### Description
An exception may occur for domain names similar to the following:
`https://hg--69.imgbb.com/following`
### Steps to Reproduce
1. yield scrapy.Request('https://hg--69.imgbb.com/following')
2. run scrapy
**Expected behavior:**
```
2024-05-12 10:15:35 [scrapy.core.scraper] ERROR: Error downloading <GET https://hg--69.imgbb.com/following>
Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\site-packages\twisted\internet\defer.py", line 1656, in _inlineCallbacks
    result = current_context.run(
  File "C:\ProgramData\Anaconda3\lib\site-packages\twisted\python\failure.py", line 489, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "C:\ProgramData\Anaconda3\lib\site-packages\scrapy\core\downloader\middleware.py", line 52, in process_request
    return (yield download_func(request=request, spider=spider))
  File "C:\ProgramData\Anaconda3\lib\site-packages\scrapy\utils\defer.py", line 73, in mustbe_deferred
    result = f(*args, **kw)
  File "C:\ProgramData\Anaconda3\lib\site-packages\scrapy\core\downloader\handlers\__init__.py", line 79, in download_request
    return handler.download_request(request, spider)
  File "C:\ProgramData\Anaconda3\lib\site-packages\scrapy\core\downloader\handlers\http11.py", line 72, in download_request
    return agent.download_request(request)
  File "C:\ProgramData\Anaconda3\lib\site-packages\scrapy\core\downloader\handlers\http11.py", line 376, in download_request
    d = agent.request(
  File "C:\ProgramData\Anaconda3\lib\site-packages\twisted\web\client.py", line 1148, in request
    endpoint = self._getEndpoint(parsedURI)
  File "C:\ProgramData\Anaconda3\lib\site-packages\twisted\web\client.py", line 1132, in _getEndpoint
    return self._endpointFactory.endpointForURI(uri)
  File "C:\ProgramData\Anaconda3\lib\site-packages\twisted\web\client.py", line 1003, in endpointForURI
    connectionCreator = self._policyForHTTPS.creatorForNetloc(
  File "C:\ProgramData\Anaconda3\lib\site-packages\scrapy\core\downloader\contextfactory.py", line 91, in creatorForNetloc
    return ScrapyClientTLSOptions(
  File "C:\ProgramData\Anaconda3\lib\site-packages\scrapy\core\downloader\tls.py", line 43, in __init__
    super().__init__(hostname, ctx)
  File "C:\ProgramData\Anaconda3\lib\site-packages\twisted\internet\_sslverify.py", line 1124, in __init__
    self._hostnameBytes = _idnaBytes(hostname)
  File "C:\ProgramData\Anaconda3\lib\site-packages\twisted\internet\_idna.py", line 31, in _idnaBytes
    return idna.encode(text)
  File "C:\ProgramData\Anaconda3\lib\site-packages\idna\core.py", line 360, in encode
    s = alabel(label)
  File "C:\ProgramData\Anaconda3\lib\site-packages\idna\core.py", line 258, in alabel
    ulabel(label_bytes)
  File "C:\ProgramData\Anaconda3\lib\site-packages\idna\core.py", line 297, in ulabel
    check_label(label_bytes)
  File "C:\ProgramData\Anaconda3\lib\site-packages\idna\core.py", line 231, in check_label
    check_hyphen_ok(label)
  File "C:\ProgramData\Anaconda3\lib\site-packages\idna\core.py", line 128, in check_hyphen_ok
    raise IDNAError('Label has disallowed hyphens in 3rd and 4th position')
idna.core.IDNAError: Label has disallowed hyphens in 3rd and 4th position
```
**Actual behavior:**
```
>>> import idna
>>> idna.encode('hg--69')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\ProgramData\Anaconda3\lib\site-packages\idna\core.py", line 355, in encode
    s = alabel(label)
  File "C:\ProgramData\Anaconda3\lib\site-packages\idna\core.py", line 258, in alabel
    ulabel(label_bytes)
  File "C:\ProgramData\Anaconda3\lib\site-packages\idna\core.py", line 292, in ulabel
    check_label(label_bytes)
  File "C:\ProgramData\Anaconda3\lib\site-packages\idna\core.py", line 235, in check_label
    check_hyphen_ok(label)
  File "C:\ProgramData\Anaconda3\lib\site-packages\idna\core.py", line 128, in check_hyphen_ok
    raise IDNAError('Label has disallowed hyphens in 3rd and 4th position')
idna.core.IDNAError: Label has disallowed hyphens in 3rd and 4th position
```
So I modified the Twisted `_idna` module and fixed it, in `xxx\twisted\internet\_idna.py`:
```
try:
    import idna
except ImportError:
    return text.encode("idna")
else:
    # return idna.encode(text)  # original
    return text.encode("idna") if text.isascii() else idna.encode(text)  # corrected
```
### Versions
Scrapy : 2.11.1 | closed | 2024-05-12T10:29:07Z | 2024-05-12T10:29:26Z | https://github.com/scrapy/scrapy/issues/6355 | [] | vlln | 0 |
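The patched fallback can be checked against the failing hostname with nothing but the standard library: Python's built-in `idna` codec accepts ASCII labels with hyphens in positions 3 and 4, while the third-party `idna` package rejects them because they collide with the `xn--` ACE-prefix rule:

```python
host = "hg--69.imgbb.com"

# This is the fallback path the proposed patch takes for pure-ASCII hostnames:
encoded = host.encode("idna") if host.isascii() else None
```

Routing already-ASCII hostnames through the stdlib codec therefore sidesteps the strict-IDNA hyphen check without touching non-ASCII domains.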
twopirllc/pandas-ta | pandas | 241 | Are the indicators always in a different order in the dataframe? | Hello, I am using this:
```python
import pandas as pd
import pandas_ta  # registers the .ta DataFrame accessor
import yfinance as yf


def get_data():
    """
    Gets data from CSV (0 - AAPL, 1 - MSI, 2 - SBUX).
    :return: T x 3 Stock Prices
    """
    ticker = yf.Ticker("AAPL")
    df = ticker.history(period='1y', interval='1d')
    df = df.dropna()
    df.ta.strategy()
    df = df.fillna(0)
    with pd.option_context('display.max_rows', None, 'display.max_columns', None):  # more options can be specified also
        print(df[0:10][0:10])
    exit()
    indexess = df.index
    retdats = df.copy(deep=True)
    return df, indexess, retdats
```
Every time I use it, the indicators are in different columns. Is there a way for the indicators to keep their order in the dataframe on every run? Sorry, this is probably not a bug, but I need a way for the indicators to stay in the same columns on every run. Thank you for your time. | closed | 2021-03-08T20:00:58Z | 2021-03-10T21:12:36Z | https://github.com/twopirllc/pandas-ta/issues/241 | [
"info"
] | copypasteearth | 4 |
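Until the library guarantees an order, a hedged workaround for the issue above is to impose a canonical column order yourself after `df.ta.strategy()` runs, e.g. `df = df[sorted(df.columns)]`. The ordering step itself is just a sort, shown here with plain lists (the indicator names are illustrative):

```python
# Two runs can emit the same indicator columns in different orders:
run_one = ["EMA_10", "SMA_10", "RSI_14", "MACD_12_26_9"]
run_two = ["RSI_14", "MACD_12_26_9", "EMA_10", "SMA_10"]

canonical_one = sorted(run_one)
canonical_two = sorted(run_two)
# identical every run; with pandas: df = df[sorted(df.columns)]
```

Any fixed ordering rule works (alphabetical, or an explicit list of expected names); the only requirement is applying the same rule on every run before downstream code indexes columns positionally.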
netbox-community/netbox | django | 18835 | bash: pip: command not found | ### Deployment Type
NetBox Cloud
### NetBox Version
v4.2.5
### Python Version
3.12
### Steps to Reproduce
I tried to install a plugin, but "pip not found":
```bash
root@netbox-0:/opt/netbox/netbox# source /opt/netbox/venv/bin/activate
(venv) root@netbox-0:/opt/netbox/netbox# pip install netbox-bgp
bash: pip: command not found
```
In "/opt/netbox/venv/bin" I don't see a _pip_ binary:
```bash
(venv) root@netbox-0:/opt/netbox/venv/bin# ls -l
total 168
-rw-r--r-- 1 root root 3692 Feb 15 05:49 activate
-rw-r--r-- 1 root root 2248 Feb 15 05:49 activate.bat
-rw-r--r-- 1 root root 2599 Feb 15 05:49 activate.csh
-rw-r--r-- 1 root root 4163 Feb 15 05:49 activate.fish
-rw-r--r-- 1 root root 3848 Feb 15 05:49 activate.nu
-rw-r--r-- 1 root root 2762 Feb 15 05:49 activate.ps1
-rw-r--r-- 1 root root 2383 Feb 15 05:49 activate_this.py
-rw-r--r-- 1 root root 1728 Feb 15 05:49 deactivate.bat
-rwxr-xr-x 1 root root 358 Mar 7 05:50 django-admin
-rwxr-xr-x 1 root root 1273 Mar 7 05:50 dul-receive-pack
-rwxr-xr-x 1 root root 1269 Mar 7 05:50 dul-upload-pack
-rwxr-xr-x 1 root root 305 Mar 7 05:50 dulwich
-rwxr-xr-x 1 root root 304 Mar 7 05:50 ghp-import
-rwxr-xr-x 1 root root 1707 Mar 7 05:50 jp.py
-rwxr-xr-x 1 root root 308 Mar 7 05:50 jsonschema
-rwxr-xr-x 1 root root 315 Mar 7 05:50 markdown-it
-rwxr-xr-x 1 root root 309 Mar 7 05:50 markdown_py
-rwxr-xr-x 1 root root 307 Mar 7 05:50 mkdocs
-rwxr-xr-x 1 root root 316 Mar 7 05:50 mkdocs-get-deps
-rwxr-xr-x 1 root root 305 Mar 7 05:50 netaddr
-rwxr-xr-x 1 root root 321 Mar 7 05:50 normalizer
-rwxr-xr-x 1 root root 317 Mar 7 05:50 pybabel
-rw-r--r-- 1 root root 1215 Feb 15 05:49 pydoc.bat
-rwxr-xr-x 1 root root 310 Mar 7 05:50 pygmentize
-rwxr-xr-x 1 root root 307 Mar 7 05:50 pyrsa-decrypt
-rwxr-xr-x 1 root root 307 Mar 7 05:50 pyrsa-encrypt
-rwxr-xr-x 1 root root 305 Mar 7 05:50 pyrsa-keygen
-rwxr-xr-x 1 root root 328 Mar 7 05:50 pyrsa-priv2pub
-rwxr-xr-x 1 root root 301 Mar 7 05:50 pyrsa-sign
-rwxr-xr-x 1 root root 305 Mar 7 05:50 pyrsa-verify
lrwxrwxrwx 1 root root 16 Feb 15 05:49 python -> /usr/bin/python3
lrwxrwxrwx 1 root root 6 Feb 15 05:49 python3 -> python
lrwxrwxrwx 1 root root 6 Feb 15 05:49 python3.12 -> python
-rwxr-xr-x 1 root root 306 Mar 7 05:50 pytkdocs
-rwxr-xr-x 1 root root 300 Mar 7 05:50 rq
-rwxr-xr-x 1 root root 300 Mar 7 05:50 rqinfo
-rwxr-xr-x 1 root root 304 Mar 7 05:50 rqworker
-rwxr-xr-x 1 root root 9517 Mar 7 05:50 shopify_api.py
-rwxr-xr-x 1 root root 311 Mar 7 05:50 sqlformat
-rwxr-xr-x 1 root root 303 Mar 7 05:50 stone
-rwxr-xr-x 1 root root 306 Mar 7 05:50 strawberry
-rwxr-xr-x 1 root root 312 Mar 7 05:50 watchmedo
```
### Expected Behavior
install plugin
### Observed Behavior
see: "bash: pip: command not found" | closed | 2025-03-07T06:46:08Z | 2025-03-07T13:32:40Z | https://github.com/netbox-community/netbox/issues/18835 | [] | janhlavin | 0 |
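A likely cause (hedged): the venv was created without pip (e.g. with `--without-pip`), which matches the missing launcher scripts in the listing above. The stdlib `ensurepip` module can restore it. The sketch below reproduces and repairs the situation in a throwaway venv; for NetBox the same repair commands apply inside `/opt/netbox/venv`:

```shell
# Recreate a pip-less venv like the one above, then repair it:
python3 -m venv --without-pip /tmp/demo-venv
/tmp/demo-venv/bin/python -m ensurepip --upgrade   # installs pip from bundled wheels
/tmp/demo-venv/bin/python -m pip --version         # pip answers again
# For NetBox itself:
#   /opt/netbox/venv/bin/python -m ensurepip --upgrade
#   /opt/netbox/venv/bin/python -m pip install netbox-bgp
```

Using `python -m pip` (rather than the bare `pip` command) also avoids depending on a launcher script existing in the venv's `bin/` at all.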
floodsung/Deep-Learning-Papers-Reading-Roadmap | deep-learning | 6 | Deep learning book link has moved | The new address is http://www.deeplearningbook.org/
| closed | 2016-10-21T08:12:46Z | 2016-10-21T08:22:46Z | https://github.com/floodsung/Deep-Learning-Papers-Reading-Roadmap/issues/6 | [] | willprice | 1 |
dmlc/gluon-nlp | numpy | 679 | [usability] print warnings when Pad() is used | When `data.batchify.Pad()` is used, the default padding value is 0. However, this usually does not correspond to the pad token id in the vocab. We should print a warning if 0 is used for `Pad()` so that users are aware. | closed | 2019-04-24T06:11:19Z | 2019-06-05T04:17:51Z | https://github.com/dmlc/gluon-nlp/issues/679 | [
"enhancement"
] | eric-haibin-lin | 0 |
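A sketch of the proposed warning behavior (the function name and message below are mine; in gluon-nlp the correct value would come from something like `vocab[vocab.padding_token]`):

```python
import warnings

def pad_sequences(seqs, pad_val=0):
    """Right-pad token-id sequences to equal length, like batchify.Pad()."""
    if pad_val == 0:
        warnings.warn(
            "pad_val defaults to 0, which may not be the vocabulary's padding "
            "token id; pass the vocab's pad id explicitly.",
            UserWarning,
        )
    width = max(len(s) for s in seqs)
    return [list(s) + [pad_val] * (width - len(s)) for s in seqs]

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    batch = pad_sequences([[5, 6, 7], [8]])   # default pad_val triggers warning
```

Emitting the warning only when the caller relied on the default (rather than explicitly passing `pad_val=0`) would keep it from firing for users who genuinely want zero padding.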
encode/databases | sqlalchemy | 215 | execute_many should not discard results | Consider the following:
```sql
CREATE TABLE my_table
(
id BIGSERIAL,
item INTEGER
)
```
```python
query = """
INSERT INTO
my_table (item)
VALUES (:item) RETURNING id
"""
values = [{'item': 100}, {'item': 200}]
my_ids = await database.execute_many(query=query, values=values)
```
I would want to get the id for each value.
https://github.com/encode/databases/blob/45519d7ea1183d8d1ce24de21836ccb7c2174039/databases/backends/postgres.py#L182 | open | 2020-06-04T15:28:00Z | 2022-12-20T15:42:07Z | https://github.com/encode/databases/issues/215 | [] | charterchap | 3 |
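Until `execute_many` surfaces results, a workaround for the request above is to execute row-by-row and collect each generated key — with `databases` that would be one `await database.fetch_val(query, row)` per row against the `RETURNING id` statement. The same pattern is sketched here against stdlib `sqlite3`, with `lastrowid` standing in for `RETURNING`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, item INTEGER)")

values = [{"item": 100}, {"item": 200}]

# executemany-style APIs discard per-row results, so issue one execute per
# row and read the generated key from each statement instead.
ids = []
for row in values:
    cur = conn.execute("INSERT INTO my_table (item) VALUES (:item)", row)
    ids.append(cur.lastrowid)
```

The loop trades some batching efficiency for the ids; wrapping it in a single transaction recovers most of the cost for moderate batch sizes.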
jupyter-book/jupyter-book | jupyter | 1988 | Build a book within an existing Sphinx documentation | ### Context
I am planning a Python3-based software package that allows to build books from a set of shipped notebooks and a book template.
My project has its Sphinx-based documentation and I was looking for a way to build an example book _within_ my documentation, instead of e.g. a gallery of notebooks (because the point is in the end to use the much nicer book structure).
I looked here https://jupyterbook.org/en/stable/sphinx/index.html, but it's not really what I mean.
I mean, the code for the book is already in the source code of the sphinx documentation and when I do `make html` I build also the book which is linked to the docs from, say, a `example_book.rst` file.
Apologies if this is already possible!
### Proposal
_No response_
### Tasks and updates
_No response_ | open | 2023-04-02T20:04:16Z | 2024-02-24T14:19:11Z | https://github.com/jupyter-book/jupyter-book/issues/1988 | [
"enhancement"
] | HealthyPear | 4 |
SYSTRAN/faster-whisper | deep-learning | 87 | How do I get the avg_logprob, compression_ratio and no_speech_prob of the output? | Thank you for porting this to CTranslate2; it really helps running the large model on my 3060 laptop (not possible with OpenAI's implementation).
Is there a way to get the properties of the model output? In OpenAI's implementation a dictionary is returned when model.transcribe is called, something like this:
This is Python.
{'text': ' This is Python.', 'segments': [{'id': 0, 'seek': 0, 'start': 0.0, 'end': 4.0, 'text': ' This is Python.', 'tokens': [50364, 639, 307, 15329, 13, 50564], 'temperature': 0.0, 'avg_logprob': -0.6939573287963867, 'compression_ratio': 0.7142857142857143, 'no_speech_prob': 0.13947993516921997}], 'language': 'en'}
I looked at the source code; it seems like generate_with_fallback() returns avg_logprob but not the rest...?
TIA | closed | 2023-03-29T10:51:18Z | 2023-04-03T14:57:03Z | https://github.com/SYSTRAN/faster-whisper/issues/87 | [] | palladium123 | 4 |
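faster-whisper returns per-segment objects rather than openai-whisper's dict, and the per-segment values are read as attributes. A stand-in `NamedTuple` shows the access pattern (the class below is mine, mirroring only the fields of interest); with the real library you would iterate the result of `segments, info = model.transcribe(path)`:

```python
from typing import NamedTuple

class Segment(NamedTuple):
    """Stand-in for a transcription segment's probability-related fields."""
    text: str
    avg_logprob: float
    compression_ratio: float
    no_speech_prob: float

segments = [Segment(" This is Python.", -0.694, 0.714, 0.139)]

# Attribute access per segment, exactly as you would with the real objects:
stats = [(s.avg_logprob, s.compression_ratio, s.no_speech_prob)
         for s in segments]
```

If a field is genuinely absent on the installed version, checking `seg._fields` (for namedtuples) or `dir(seg)` on a real segment confirms what that release exposes.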
521xueweihan/HelloGitHub | python | 2320 | [Open-source self-recommendation] quic-tun: build fast and secure TCP transport tunnels | ## Recommended project
- Project URL: https://github.com/kungze/quic-tun
- Category: Go
- Project title: A fast and secure TCP tunnel tool that speeds up TCP forwarding in weak-network environments (networks with packet loss).
- Project description: QUIC is a new reliable transport-layer network protocol designed by Google. Compared with traditional TCP, it performs outstandingly in both transfer performance and security; HTTP/3 is specified on top of the QUIC protocol. So besides new applications such as HTTP/3, is there a way for traditional TCP applications to ride on QUIC as well? The answer is of course yes: quic-tun provides exactly such a solution.
- Highlights: multiplies network transfer performance; a secure transport tunnel; multiple tunnels over a single UDP port
- Upcoming plans:
  - Show the status of each tunnel: traffic and type
  - Support UDP hole punching
  - Integrate Prometheus monitoring
| closed | 2022-08-10T02:02:25Z | 2022-08-22T11:18:16Z | https://github.com/521xueweihan/HelloGitHub/issues/2320 | [] | jeffyjf | 1 |
skypilot-org/skypilot | data-science | 4105 | [UX] Showing logs for automatic translation of file uploads in `jobs launch` |
We have the logs showing for the normal `sky launch`'s file_mounts, but we don't show the logs for `jobs launch`'s automatic translation. We should have a log file for that. (requested by a user)
_Version & Commit info:_
* `sky -v`: PLEASE_FILL_IN
* `sky -c`: PLEASE_FILL_IN
| closed | 2024-10-17T17:31:33Z | 2024-12-25T06:40:57Z | https://github.com/skypilot-org/skypilot/issues/4105 | [
"good first issue",
"P0",
"interface/ux"
] | Michaelvll | 0 |
allenai/allennlp | data-science | 5724 | SRL BERT performing poorly for German dataset | I am trying to train an SRL model for German text by translating the OntoNotes dataset and propagating the labels from the English sentences to the German sentences. When I train the model with this dataset, as well as with a manually annotated dataset, I seem to be stuck at a maximum F1 score of 0.62. I am using the deepset/gbert-large BERT model for training with a learning rate of 5e-5. I have updated the Ontonotes.py file to read the CoNLL-formatted files, and I checked the SRL frames to ensure the labels are being picked up correctly. Is there something else I am missing that I need to take care of when training a model in a different language, or might it just be the low quality of the data that is causing the issue?
Thanks | closed | 2022-10-21T23:21:58Z | 2022-11-07T16:09:59Z | https://github.com/allenai/allennlp/issues/5724 | [
"question",
"stale"
] | stevemanavalan | 1 |
docarray/docarray | pydantic | 1600 | Weaviate cannot handle tensors with multiple axes | # Context
It seems that Weaviate cannot handle tensors with multiple axes (n-dimensional arrays).
If this is expected, we should have a better error message, and it should be documented somewhere.
## How to reproduce.
weaviate version: 1.18.3
client version: 3.19.2
Before running it, please start the Weaviate database (from the docarray repo):
```bash
cd tests/index/weaviate
docker-compose up
```
```python
from typing import Optional
from docarray.index import WeaviateDocumentIndex
from docarray.typing import ImageUrl, TorchTensor
from pydantic import Field
from docarray import BaseDoc, DocList
import torch
class NestedDoc(BaseDoc):
    embedding: Optional[TorchTensor] = Field(is_embedding=True)
    tensor: TorchTensor


class MyDoc(BaseDoc):
    product: NestedDoc
# Weaviate connection
batch_config = {
    "batch_size": 20,
    "dynamic": False,
    "timeout_retries": 3,
    "num_workers": 1,
}
runtimeconfig = WeaviateDocumentIndex.RuntimeConfig(batch_config=batch_config)
dbconfig = WeaviateDocumentIndex.DBConfig(host="http://localhost:8080")
store = WeaviateDocumentIndex[MyDoc](db_config=dbconfig)
store.configure(runtimeconfig)
docs = DocList[MyDoc]([MyDoc(product=NestedDoc(embedding=torch.zeros(128), tensor=torch.zeros(3,224,224)))])
store.index(docs)
```
```bash
{'error': [{'message': "invalid number array property 'product__tensor' on class 'MyDoc': invalid integer array value: [[[0.0 0
```
| open | 2023-05-31T12:40:23Z | 2023-05-31T15:23:29Z | https://github.com/docarray/docarray/issues/1600 | [
"index/weaviate"
] | samsja | 6 |
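A hypothetical workaround sketch for the issue above (not a confirmed docarray feature): since the backend's numeric array properties are one-dimensional, store a flattened copy of the tensor plus its shape and rebuild on read — with torch that would be `tensor.reshape(-1).tolist()` and `list(tensor.shape)`. A pure-Python equivalent of the flattening step:

```python
def flatten(x):
    """Depth-first flatten of a nested-list 'tensor' into a 1-D list."""
    if not isinstance(x, list):
        return [x]
    out = []
    for item in x:
        out.extend(flatten(item))
    return out

def shape_of(x):
    """Shape of a regularly nested list, e.g. [2, 2, 2]."""
    dims = []
    while isinstance(x, list) and x:
        dims.append(len(x))
        x = x[0]
    return dims

tensor = [[[0.0, 0.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]]]  # 2x2x2
flat, shape = flatten(tensor), shape_of(tensor)
```

Storing `flat` in the backend field and `shape` alongside it keeps the document round-trippable until n-dimensional fields are supported natively.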
Lightning-AI/pytorch-lightning | data-science | 19585 | Universal resume checkpoint from DeepSpeed | ### Description & Motivation
One pain point in training with DeepSpeed is that when resuming from a checkpoint you have to use the same number of GPUs as the checkpoint was trained on; otherwise, you will see the following error:
```
deepspeed.runtime.zero.utils.ZeRORuntimeException: The checkpoint being loaded used a DP world size of 32 but the current world size is 128. Automatic adjustment of ZeRO's optimizer state partitioning with a new world size is not currently supported.
```
see this issue. https://github.com/microsoft/DeepSpeed/issues/3810
Also, when the model is well trained and we want to play with inference, it would be a problem to load a DeepSpeed checkpoint for inference, as it requires the same number of GPUs as training.
Currently, DeepSpeed proposes universal checkpointing to convert a DeepSpeed checkpoint into a universal checkpoint, which can be loaded on any number of GPUs.
Please refer to the link
https://github.com/microsoft/Megatron-DeepSpeed/tree/main/examples_deepspeed/universal_checkpointing#zero-stage-2-training
Would lightning integrate this feature?
Also, a somewhat unusual use case: is there any way to load only the model weights from the checkpoint while ignoring other state, such as the optimizer?
### Pitch
_No response_
### Alternatives
_No response_
### Additional context
_No response_
cc @borda @awaelchli | closed | 2024-03-06T20:37:10Z | 2024-03-14T15:08:32Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19585 | [
"feature",
"strategy: deepspeed"
] | pengzhangzhi | 1 |
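On the last question above — loading only the model weights — a checkpoint is ultimately a dict of sub-states, so one pragmatic (hedged) approach is to strip it down before loading. The key names below follow the common `state_dict` convention and are illustrative, not taken from a real checkpoint:

```python
checkpoint = {
    "state_dict": {"layer.weight": [0.1, 0.2]},  # model weights to keep
    "optimizer_states": [{"step": 1000}],        # optimizer state to drop
    "epoch": 7,                                  # loop state to drop
}

# Keep only the model weights before loading:
model_only = {"state_dict": checkpoint["state_dict"]}

# With torch this pattern becomes roughly:
#   ckpt = torch.load("last.ckpt", map_location="cpu")
#   model.load_state_dict(ckpt["state_dict"])
```

For ZeRO-sharded DeepSpeed checkpoints this only works after consolidating the shards into a single state dict first; the filtering step itself is the same either way.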
aleju/imgaug | deep-learning | 749 | Tiler augmenter proposal | I needed to create tile image from different objects of an image for object detection task.
I couldn't find anything similar so I implemented it with your `iaa.meta.Augmenter` class.
For those who are looking for such functionality, I attach the `Tiler` class, client code with sequence of augmenters including the Tiler augmenter and an example of launch.
**Required imports**:
```
import cv2
import imgaug as ia
from imgaug import augmenters as iaa
import random
from random import randint
from imutils import resize  # assumed source of the `resize()` helper used in Tiler
from imgaug.augmentables.bbs import BoundingBox, BoundingBoxesOnImage
from imgaug.augmentables.batches import _BatchInAugmentation
```
**Tiler class**:
```
class Tiler(iaa.meta.Augmenter):
    seq = iaa.Sequential([
        iaa.GammaContrast(),
        iaa.Multiply(mul=(0.6, 1.2)),
        iaa.Affine(translate_percent={"x": (-0.1, 0.1), 'y': (-0.1, 0.1)}, scale=(0.7, 1.5), rotate=(-180, 180), fit_output=True),
        iaa.PerspectiveTransform(scale=(0, 0.1), keep_size=False),
        iaa.AdditiveGaussianNoise()
    ], random_order=True)
    min_patch_size_frac = 0.005

    def generate_patches(self, img, bbs):
        image_aug, bbs_aug = Tiler.seq(image=img, bounding_boxes=bbs)
        bbs_aug = bbs_aug.remove_out_of_image_fraction(0.3)
        patches = [(bb.label, bb.extract_from_image(image_aug)) for bb in bbs_aug]
        return random.sample(patches, len(patches))

    def get_parameters(self):
        pass

    def _augment_batch_(self, batch, random_state, parents, hooks):
        images = batch.images
        bbs = batch.bounding_boxes
        assert len(images) == 1
        assert len(bbs) == 1
        image = images[0]
        bounding_boxes = bbs[0]
        src_img_h, src_img_w = image.shape[:2]
        src_img_area = src_img_h * src_img_w
        patch_min_size = src_img_area * Tiler.min_patch_size_frac
        bounding_boxes = BoundingBoxesOnImage([bb for bb in bounding_boxes if patch_min_size <= bb.area], bounding_boxes.shape)
        if len(bounding_boxes) == 0:
            return batch
        blur_img = iaa.GaussianBlur(sigma=(21, 21))(image=image)
        blur_img = iaa.Affine(translate_percent=0, scale=(0.8, 1.2), fit_output=True)(image=blur_img)
        img_h, img_w = blur_img.shape[:2]
        img_area = img_h * img_w
        patches_area = 0
        patches = []
        while True:
            new_patches = self.generate_patches(image, bounding_boxes)
            for label, patch in new_patches:
                h, w = patch.shape[:2]
                h_ratio = h / img_h
                w_ratio = w / img_w
                if h_ratio > 0.3 or w_ratio > 0.3:
                    new_ratio = random.uniform(0.1, 0.3)
                    if h_ratio > w_ratio:
                        new_h = int(img_h * new_ratio)
                        patch = resize(patch, height=new_h)
                    else:
                        new_w = int(img_w * new_ratio)
                        patch = resize(patch, width=new_w)
                h, w = patch.shape[:2]
                patch_area = h * w
                if patches_area + patch_area > img_area:
                    break
                else:
                    patches.append((patch_area, (label, patch)))
                    patches_area += patch_area
            else:
                continue
            break
        patches = sorted(patches, reverse=False, key=lambda patch: patch[0])
        if randint(0, 1):
            random.shuffle(patches)
        margin_range = (5, 30)
        top_y = bottom_y = 0
        cur_x = randint(*margin_range)
        boxes = []
        for _, (label, patch) in patches:
            h_p, w_p = patch.shape[:2]
            if cur_x + w_p > img_w:
                top_y = bottom_y
                cur_x = randint(*margin_range)
                if cur_x + w_p > img_w:
                    cur_x = 0
            y_margin = randint(*margin_range)
            if y_margin + top_y + h_p > img_h:
                y_margin = 0
            if top_y + h_p > img_h:
                break
            x1, y1 = cur_x, y_margin + top_y
            x2, y2 = cur_x + w_p, y_margin + top_y + h_p
            try:
                blur_img[y1:y2, x1:x2] = patch
            except Exception:
                print(blur_img.shape, img_h, img_w)
                print(patch.shape)
                print((x1, y1), (x2, y2))
            boxes.append(BoundingBox(x1=x1, y1=y1, x2=x2, y2=y2, label=label))
            cur_x += w_p + randint(*margin_range)
            if y_margin + h_p > bottom_y - top_y:
                bottom_y = y_margin + top_y + h_p
        bbs = BoundingBoxesOnImage(boxes, shape=blur_img.shape)
        return _BatchInAugmentation(images=[blur_img], bounding_boxes=[bbs])
```
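A side note on the `while True` / `for` / `else` construct used in `_augment_batch_` above (my explanation, not the issue author's): Python's loop `else` runs only when the `for` completes without `break`, so `else: continue` here means "all generated patches fit, generate another round", while the `break` on the budget check skips the `else` and reaches the trailing `break` that exits the `while`. A minimal standalone sketch of the same control flow:

```python
def fill_budget(sizes, budget):
    """Accumulate sizes in rounds until adding one more would exceed budget."""
    total, taken = 0, []
    while True:
        for s in sizes:
            if total + s > budget:
                break          # budget hit -> leave the for-loop, skip else
            taken.append(s)
            total += s
        else:
            continue           # every size fit -> do another round
        break                  # reached only via the inner break
    return taken

print(fill_budget([3, 4], 10))  # -> [3, 4, 3]
```

The sketch assumes `sizes` is non-empty; like the original, it keeps generating rounds until the budget check fires.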
**Client code**:
```python
img = cv2.imread('your_image.jpg')
bbs = ...  # your BoundingBoxesOnImage object

transformer = iaa.Sequential([
    iaa.Grayscale(),
    iaa.Sometimes(
        p=0.1,
        then_list=Tiler(),
        else_list=iaa.Sequential([
            iaa.GammaContrast(),
            iaa.Multiply(mul=(0.7, 1.3)),
            iaa.Affine(translate_percent={"x": (-0.1, 0.1), 'y': (-0.1, 0.1)}, scale=(0.5, 1.5), rotate=(-40, 40), fit_output=True),
            iaa.PerspectiveTransform(scale=(0, 0.1), keep_size=False),
            iaa.Sometimes(0.3, iaa.AdditiveGaussianNoise(3))
        ], random_order=True)
    )
])

img, bbs = transformer(image=img, bounding_boxes=bbs)
ia.imshow(bbs.draw_on_image(img))
```
Source img:

Example of executing an ordinary sequence of augmentations:

Examples of executing the Tiler augmenter:


| open | 2021-02-25T11:59:56Z | 2021-03-01T16:17:08Z | https://github.com/aleju/imgaug/issues/749 | [] | chamecall | 0 |
exaloop/codon | numpy | 206 | print(~1) assert | print(~1)
produces:
```
Assert failed: type not set for (unary "~" (int* 1 #:type "int")) [x.py:1:1]
Expression: expr->type
Source: /home/jplevyak/projects/codon/codon/parser/visitors/typecheck/typecheck.cpp:63
Aborted (core dumped)
```
closed | 2023-03-11T22:08:20Z | 2023-03-25T23:56:19Z | https://github.com/exaloop/codon/issues/206 | [] | jplevyak | 2
zappa/Zappa | flask | 808 | [Migrated] Zappa requires "future" package under Python 3.x | Originally from: https://github.com/Miserlou/Zappa/issues/1966 by [rudyryk](https://github.com/rudyryk)
Trying to minify the uploaded installation ZIP, I noticed that the `future` package is more than 1.5 MB in size, far larger than most. But it should not be necessary under Python 3.
## Expected Behavior
Don't install and don't require "future" package for Python 3.
## Actual Behavior
Zappa CLI can't ZIP package without "future" installed.
## Steps to Reproduce
1. Initialize Zappa project
2. Uninstall "future" with `pip uninstall future`
3. Run `zappa --help` and get error
## Your Environment
* Zappa version used: 0.48.2
* Operating System and Python version: Python 3.7.4, macOS 10.13.6
* `pip freeze` result
```bash
argcomplete==1.9.3
boto3==1.10.25
botocore==1.13.25
certifi==2019.9.11
cfn-flip==1.2.2
chardet==3.0.4
Click==7.0
docutils==0.15.2
durationpy==0.5
Flask==1.1.1
future==0.16.0
hjson==3.0.1
idna==2.8
itsdangerous==1.1.0
Jinja2==2.10.3
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
MarkupSafe==1.1.1
peewee==3.11.2
placebo==0.9.0
psycopg2-binary==2.8.4
PyJWT==1.7.1
python-dateutil==2.6.1
python-slugify==1.2.4
PyYAML==5.1.2
requests==2.22.0
s3transfer==0.2.1
sentry-sdk==0.13.2
signa==0.2.1
six==1.13.0
structlog==19.2.0
toml==0.10.0
tqdm==4.19.1
troposphere==2.5.2
Unidecode==1.1.1
urllib3==1.25.7
Werkzeug==0.16.0
wsgi-request-logger==0.4.6
zappa==0.48.2
```
closed | 2021-02-20T12:51:53Z | 2022-08-05T10:37:24Z | https://github.com/zappa/Zappa/issues/808 | [] | jneves | 1
scikit-optimize/scikit-optimize | scikit-learn | 531 | ValueError: The truth value of an array with more than one element is ambiguous | Hi,
when trying to run the following code:
```
res_gbrt = gbrt_minimize(tagpowercalc_electron_train.evaluate, x0_gp_limits, n_calls=5000, n_random_starts=10, x0=x0_gp, random_state=42, n_jobs=4)
```
I get the following error:
```
/afs/cern.ch/work/v/vibattis/miniconda2/lib/python2.7/site-packages/skopt/space/space.pyc in __contains__(self, point)
235
236 def __contains__(self, point):
--> 237 return self.low <= point <= self.high
238
239 @property
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
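For background on the error above (this explanation is mine, not from the issue): NumPy raises it whenever an array with more than one element is coerced to a single `bool`, and that is exactly what a chained comparison does, since `low <= point <= high` evaluates as `(low <= point) and (point <= high)`:

```python
import numpy as np

low, high = 0.0, 1.0
point = np.array([0.2, 0.5])  # an array where a scalar was expected

try:
    ok = low <= point <= high  # `and` calls bool() on an array -> ambiguous
except ValueError as err:
    ok = None
    print(err)

# Elementwise checks must be reduced explicitly instead:
inside = bool(np.logical_and(low <= point, point <= high).all())
print(inside)  # True
```

This matches the traceback: `__contains__` in `space.py` receives an array-valued `point`, so the chained comparison at line 237 cannot produce a single truth value.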
I have numpy 1.12.1 installed. This used to work in the past (several months ago), but unfortunately I can't figure out which version of what I was using at that time. Do you have any idea? Thanks. | closed | 2017-10-09T09:24:57Z | 2017-10-09T14:13:16Z | https://github.com/scikit-optimize/scikit-optimize/issues/531 | [] | VINX89 | 2
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 297 | [BUG] TikTok short-link videos cannot be parsed | ***Platform where the error occurred?***
e.g.: TikTok
***Endpoint where the error occurred?***
e.g.: API-V1
***Input value submitted?***
e.g.: [short video link](https://vm.tiktok.com/ZM2kkTThP/)
***Did you try again?***
e.g.: Yes
***Have you checked this project's README or API documentation?***
e.g.: Yes
| closed | 2023-10-14T04:00:40Z | 2024-02-07T03:41:38Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/297 | [
"BUG"
] | JayFang1993 | 0 |
DistrictDataLabs/yellowbrick | matplotlib | 795 | PCA projections on biplot are not clear for datasets with large number of features | **Problem**
The PCA projections visualized on a biplot are not clear, and various features' projections can be seen to overlap with each other when working with a dataset with a large number of features, e.g. with the credit dataset from yellowbrick.datasets:

This could be solved by considering only certain features in a particular dimension, to avoid overlapping of vectors. Work is required in selecting criteria for features.
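As one illustration of such a criterion (a sketch of my own, not yellowbrick's API): keep only the k features whose loading vectors are longest in the projected 2-D plane, since those are the arrows that dominate the biplot:

```python
import numpy as np

def top_loading_features(components, feature_names, k=5):
    """Pick the k features with the largest loading magnitude across the
    first two principal components (illustrative heuristic only)."""
    comps = np.asarray(components)[:2]          # shape (2, n_features)
    magnitude = np.linalg.norm(comps, axis=0)   # per-feature arrow length
    keep = np.argsort(magnitude)[::-1][:k]
    return [feature_names[i] for i in sorted(keep)]

components = [[0.9, 0.1, 0.4], [0.1, 0.05, 0.8]]
print(top_loading_features(components, ["limit", "sex", "age"], k=2))
```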
| closed | 2019-03-25T12:52:07Z | 2019-08-28T23:51:56Z | https://github.com/DistrictDataLabs/yellowbrick/issues/795 | [
"level: expert"
] | percygautam | 2 |
Avaiga/taipy | data-visualization | 2,421 | Generate stub classes for extension libraries | ### Description
Once an extension library has been created, there's no easy way to expose the new element(s) to the Page Builder API.
#### Workaround
The only situation where this can be achieved is making sure the element name is not already used by the regular Taipy components (like 'table' or 'scenario'). Then these elements can be created using `tgb.element_name()`, assuming the library has been registered *before* this code (using `Gui.add_library()`).
### Solution Proposed
A Python script could be provided to load the library and generate the classes definition from the result of invoking `get_elements()`. This generation would produce an `__init__.pyi` file next to the extension library's `__init__.py` file, declaring the elements with their API (their properties, types and default values).
The API generation is performed when the library is registered.
Contrary to the current situation, element APIs must be created in the library module, at the same level as the library itself.
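To make the proposal concrete, here is a rough sketch of such a generator; the element and property shapes and names are hypothetical and do not reflect Taipy's actual `get_elements()` return type:

```python
def generate_stub(elements):
    """Render a toy __init__.pyi body from {element: {prop: (type, default)}}.

    Hypothetical data shapes, for illustration of the stub-generation idea only.
    """
    lines = []
    for name, props in sorted(elements.items()):
        args = ", ".join(
            f"{p}: {t} = {d!r}" for p, (t, d) in sorted(props.items())
        )
        lines.append(f"def {name}(*, {args}) -> None: ...")
    return "\n".join(lines)

elements = {"dial": {"value": ("int", 0), "label": ("str", "")}}
print(generate_stub(elements))
```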
### Impact of Solution
An optional additional step is required when building Taipy GUI extension libraries if the Page Builder API is required.
This needs documentation.
### Acceptance Criteria
- [x] If applicable, a new demo code is provided to show the new feature in action.
- [x] Integration tests exhibiting how the functionality works are added.
- [x] Any new code is covered by a unit tested.
- [x] Check code coverage is at least 90%.
- [x] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2025-01-22T17:04:13Z | 2025-02-10T12:03:48Z | https://github.com/Avaiga/taipy/issues/2421 | [
"📄 Documentation",
"🟨 Priority: Medium",
"✨New feature",
"🔒 Staff only",
"Gui: Back-End"
] | FabienLelaquais | 0 |
seleniumbase/SeleniumBase | pytest | 3,155 | Retrofit SeleniumBase with `pyproject.toml` structure | ## Retrofit SeleniumBase with `pyproject.toml` structure.
Lots of the "cool kids" have already switched over to using `pyproject.toml`. Now, I'm forced to switch over due to deprecation warnings when calling `"pip install -e ."`:
> `DEPRECATION: Legacy editable install of <PACKAGE> from <PACKAGE_PATH> (setup.py develop) is deprecated. pip 25.0 will enforce this behaviour change. A possible replacement is to add a pyproject.toml or enable --use-pep517, and use setuptools >= 64. If the resulting installation is not behaving as expected, try using --config-settings editable_mode=compat. Please consult the setuptools documentation for more information. Discussion can be found at https://github.com/pypa/pip/issues/11457`
Python.org has instructions: https://packaging.python.org/en/latest/guides/writing-pyproject-toml/
Thankfully, `pyproject.toml` has a `[project.dynamic]` field that can auto-populate data from `setup.py` for the following items:
`['version', 'description', 'readme', 'requires-python', 'license', 'authors', 'maintainers', 'keywords', 'classifiers', 'urls', 'scripts', 'gui-scripts', 'entry-points', 'dependencies', 'optional-dependencies']`.
This means that I can reuse a lot of my existing `setup.py` definitions when creating the `pyproject.toml` file. | closed | 2024-09-21T17:47:25Z | 2024-09-23T05:46:23Z | https://github.com/seleniumbase/SeleniumBase/issues/3155 | [
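A minimal sketch of what the resulting file could look like, using setuptools' dynamic metadata hooks; the field choices below are illustrative, not SeleniumBase's actual packaging:

```toml
[build-system]
requires = ["setuptools>=64", "wheel"]
build-backend = "setuptools.build_meta"

[project]
name = "seleniumbase"
dynamic = ["version", "dependencies"]

[tool.setuptools.dynamic]
version = {attr = "seleniumbase.__version__"}
dependencies = {file = "requirements.txt"}
```

With `setuptools >= 64` declared in `[build-system]`, this should also restore `"pip install -e ."` without the deprecation warning.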
"enhancement",
"requirements"
] | mdmintz | 1 |
yeongpin/cursor-free-vip | automation | 270 | [Bug]: User is unauthorized | ### Commit before submitting
- [x] I understand that Issues are used to provide feedback and solve problems, not to complain in the comments section, and will provide more information to help solve the problem.
- [x] I have checked the top Issue and searched for existing [open issues](https://github.com/yeongpin/cursor-free-vip/issues) and [closed issues](https://github.com/yeongpin/cursor-free-vip/issues?q=is%3Aissue%20state%3Aclosed%20), and found no similar issues.
- [x] I have filled out a short and clear title, so that developers can quickly determine the general problem when browsing the Issue list. Not "a suggestion", "stuck", etc.
### Platform
Windows x32
### Version
v0.46
### Description
After I've created a new account using this tool and then try to use Cursor AI, it tells me the user is unauthorized.
### Related log output
```shell
```
### Additional information
_No response_ | open | 2025-03-17T07:39:06Z | 2025-03-18T04:36:48Z | https://github.com/yeongpin/cursor-free-vip/issues/270 | [
"bug"
] | EKALAT | 17 |
oegedijk/explainerdashboard | plotly | 18 | Temporary failure in name resolution - linux | Tested on my linux machine
Installation:
```
$ conda create -n test_env python=3.8
$ conda activate test_env
$ pip install explainerdashboard
```
and tested this https://explainerdashboard.readthedocs.io/en/latest/index.html#a-more-extended-example
This may be outside of explainerdashboard
```
>>> from sklearn.ensemble import RandomForestClassifier
>>>
>>> from explainerdashboard import ClassifierExplainer, ExplainerDashboard
>>> from explainerdashboard.datasets import titanic_survive
>>>
>>> X_train, y_train, X_test, y_test = titanic_survive()
>>>
>>> model = RandomForestClassifier(n_estimators=50, max_depth=5)
>>> model.fit(X_train, y_train)
RandomForestClassifier(max_depth=5, n_estimators=50)
>>>
>>> explainer = ClassifierExplainer(
... model, X_test, y_test,
... # optional:
... cats=['Sex', 'Deck', 'Embarked'],
... labels=['Not survived', 'Survived'])
Note: shap=='guess' so guessing for RandomForestClassifier shap='tree'...
Note: model_output=='probability', so assuming that raw shap output of RandomForestClassifier is in probability space...
Generating self.shap_explainer = shap.TreeExplainer(model)
Detected RandomForestClassifier model: Changing class type to RandomForestClassifierExplainer...
>>>
>>> db = ExplainerDashboard(explainer, title="Titanic Explainer",
... whatif=False, # you can switch off tabs with bools
... shap_interaction=False,
... decision_trees=False)
Building ExplainerDashboard..
Generating layout...
Calculating shap values...
Calculating dependencies...
Calculating permutation importances (if slow, try setting n_jobs parameter)...
Calculating categorical permutation importances (if slow, try setting n_jobs parameter)...
Calculating prediction probabilities...
Calculating predictions...
Calculating pred_percentiles...
Registering callbacks...
>>> db.run(port=8051)
Starting ExplainerDashboard on http://localhost:8051
Dash is running on http://x86_64-conda_cos6-linux-gnu:8051/
* Serving Flask app "explainerdashboard.dashboards" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/site-packages/explainerdashboard/dashboards.py", line 674, in run
self.app.run_server(port=port, **kwargs)
File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/site-packages/dash/dash.py", line 1716, in run_server
self.server.run(host=host, port=port, debug=debug, **flask_run_options)
File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/site-packages/flask/app.py", line 990, in run
run_simple(host, port, self, **options)
File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/site-packages/werkzeug/serving.py", line 1052, in run_simple
inner()
File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/site-packages/werkzeug/serving.py", line 996, in inner
srv = make_server(
File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/site-packages/werkzeug/serving.py", line 847, in make_server
return ThreadedWSGIServer(
File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/site-packages/werkzeug/serving.py", line 740, in __init__
HTTPServer.__init__(self, server_address, handler)
File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/socketserver.py", line 452, in __init__
self.server_bind()
File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/http/server.py", line 138, in server_bind
socketserver.TCPServer.server_bind(self)
File "/home/ray/local/bin/anaconda3/envs/test_env/lib/python3.8/socketserver.py", line 466, in server_bind
self.socket.bind(self.server_address)
socket.gaierror: [Errno -3] Temporary failure in name resolution
```
closed | 2020-11-17T14:38:27Z | 2020-12-05T04:05:28Z | https://github.com/oegedijk/explainerdashboard/issues/18 | [] | raybellwaves | 6
jowilf/starlette-admin | sqlalchemy | 309 | Enable sub properties of relationships to appear as columns | ### Discussed in https://github.com/jowilf/starlette-admin/discussions/307
<div type='discussions-op-text'>
<sup>Originally posted by **samearcher** September 25, 2023</sup>
Hi,
I have a relationship, "mat_class" of my table, "A1_A3_Data", as follows:
```python
class A1_A3_Data(A1_A3_Data_Base, ActiveRecordMixin, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    mat_class: Optional[Material_Class] = Relationship(back_populates="a1_a3_data")
```
This is in relationship (Many to one) with the Material_Class table:
```python
class Material_Class(Material_Class_Base, ActiveRecordMixin, table=True):
    description: str = Field(min_length=3, max_length=200, unique=True)
    id: Optional[int] = Field(default=None, primary_key=True)
    a1_a3_data: List["A1_A3_Data"] = Relationship(back_populates="mat_class")
```
I'd like to be able to display (and sort) a column representing the mat_class description field as follows:
```python
class A1_A3_Data_View(BaseModelView):
    fields = [
        A1_A3_Data.mat_class.description,
        "sheet_title",
        etc, etc
    ]
```
At the moment this gives the following error:
AttributeError: Neither 'InstrumentedAttribute' object nor 'Comparator' object associated with A1_A3_Data.mat_class has an attribute 'description'
I think this is because it's referencing the Class rather than the instances of the class?
Thanks</div> | closed | 2023-09-26T18:34:40Z | 2023-10-28T04:41:13Z | https://github.com/jowilf/starlette-admin/issues/309 | [] | samearcher | 3 |
scikit-image/scikit-image | computer-vision | 7,455 | enable to pass 'seed' to skimage.feature.plot_matches() | ### Description:
The lines drawn by skimage.feature.plot_matches can be given random colors, which is very useful for distinguishing them instead of having lines of the same color.
However, it would be beneficial to be able to pass a seed to the random function to achieve consistent results when the function is called multiple times with the same arguments.
https://github.com/scikit-image/scikit-image/blob/bbf94f879ffe197f96305c37593a86b43206b4c9/skimage/feature/util.py#L153
-> ` rng = np.random.default_rng(seed=seed)`
with seed defined as seed=None in the plot_matches() arguments.
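To illustrate the request (the `seed` parameter here is the proposed addition, not an existing `plot_matches` argument): seeding `default_rng` makes repeated draws, and therefore the line colors, deterministic:

```python
import numpy as np

def random_colors(n, seed=None):
    """Draw n RGB colors in [0, 1); identical across calls when seed is fixed."""
    rng = np.random.default_rng(seed=seed)
    return rng.uniform(size=(n, 3))

a = random_colors(4, seed=42)
b = random_colors(4, seed=42)
print(np.array_equal(a, b))  # True: same seed, same colors
```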
Thanks | open | 2024-07-05T15:58:11Z | 2025-03-10T02:26:44Z | https://github.com/scikit-image/scikit-image/issues/7455 | [
":pray: Feature request",
":sleeping: Dormant"
] | patquem | 6 |
dgtlmoon/changedetection.io | web-scraping | 2,782 | [feature] Create A Copy with Immediate Edit | **Version and OS**
0.47.06 docker
**Is your feature request related to a problem? Please describe.**
When adding multiple watches where the watched url is the only change, Create A Copy button creates an exact active copy and goes back to the list of watches from which I must quickly click edit and make my changes. This process also carries forward the history so there is always a change notification after saving the updated watch.
**Describe the solution you'd like**
An option for current Create A Copy or a new button where the copy is created and the user is immediately pushed to the edit page for the copied watch. The history of the copied watch is cleared. No new watch is created, however, until the Save button is pressed on the edit screen. Bonus: if the “Extract <title> from document and use as watch title” is checked, clear the title of the copied watch so it can be populated on first check of the new watch. Effectively using an existing watch as a template or prototype for a new but distinct watch.
**Describe the use-case and give concrete real-world examples**
Using to monitor new books for authors on GoodReads, with the best settings configured on an existing watch. Adding a new author’s books watch, I would edit existing watch, click Create A Copy And Edit and then update the url to point to the to-be-added author’s book list page and save the watch. Since the prototype’s watch history and title is cleared, the newly created watch functions as a brand new watch so there isn’t a massive change notification detailing the prototype’s author’s book list versus the new author’s book list and the title is updated with the watched URL’s webpage title instead of retaining the prototype’s existing title.
**Additional context**
Add any other context or screenshots about the feature request here.
| open | 2024-11-15T15:24:45Z | 2024-11-16T10:54:29Z | https://github.com/dgtlmoon/changedetection.io/issues/2782 | [
"enhancement",
"user-interface"
] | jolpadgett | 3 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 295 | Make please google collab notebook version | Hi. Please make a Google Colab notebook version. | closed | 2020-03-08T14:26:25Z | 2020-07-04T22:18:09Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/295 | [] | shadowzoom | 2
ray-project/ray | machine-learning | 50,958 | [RayServe] Documentation is misleading about what services get created by a RayService | ### Description
I tried creating a sample RayService, following the documentation: https://docs.ray.io/en/latest/serve/production-guide/kubernetes.html#deploying-a-serve-application :
```
$ kubectl apply -f https://raw.githubusercontent.com/ray-project/kuberay/5b1a5a11f5df76db2d66ed332ff0802dc3bbff76/ray-operator/config/samples/ray-service.text-ml.yaml
rayservice.ray.io/rayservice-sample created
```
With KubeRay operator 1.3.0:
```
$ helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
kuberay-operator default 2 2025-02-27 12:13:41.616923988 -0800 PST deployed kuberay-operator-1.3.0
```
Although the documentation implied that this would create services with names such as:
```
rayservice-sample-head-svc ClusterIP ... 8080/TCP,6379/TCP,8265/TCP,10001/TCP,8000/TCP,52365/TCP XXs
rayservice-sample-raycluster-454c4-dashboard-svc ClusterIP ... 52365/TCP XXs
rayservice-sample-raycluster-454c4-head-svc ClusterIP ... 8000/TCP,52365/TCP,8080/TCP,6379/TCP,8265/TCP,10001/TCP XXs
rayservice-sample-serve-svc ClusterIP ... 8000/TCP XXs
```
No service with a name `rayservice-sample-head-svc` was created:
```
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 2d18h
rayservice-sample-raycluster-47q27-head-svc ClusterIP None <none> 10001/TCP,8265/TCP,6379/TCP,8080/TCP,8000/TCP 4m10s
```
### Link
https://docs.ray.io/en/latest/serve/production-guide/kubernetes.html#deploying-a-serve-application | open | 2025-02-27T21:17:46Z | 2025-02-27T21:22:44Z | https://github.com/ray-project/ray/issues/50958 | [
"triage",
"docs"
] | boyleconnor | 0 |
indico/indico | sqlalchemy | 6,073 | [A11Y] Semantic UI date input overrides the associated label with placeholder text | **Describe the bug**
When using the date picker with a screen reader, the input is always read as if the placeholder is the label, regardless of whether there is a proper field label associated with it.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to any form that includes a date picker
2. If necessary, manually associate the label with the input using a developer tool (usually they aren't associated)
3. While using a screen reader, focus the date input
4. Observe that the field is announced as "why why why why em em dee dee"
**Expected behavior**
The associated label is read
**Additional context**
https://www.w3.org/WAI/WCAG21/Understanding/labels-or-instructions.html
| closed | 2023-11-30T07:51:21Z | 2023-12-05T08:35:11Z | https://github.com/indico/indico/issues/6073 | [
"bug"
] | foxbunny | 2 |
wemake-services/django-test-migrations | pytest | 6 | Skip/XFail tests depending on migrator fixture if --nomigrations option was applied. | closed | 2019-11-25T21:44:14Z | 2020-02-25T14:04:21Z | https://github.com/wemake-services/django-test-migrations/issues/6 | [] | proofit404 | 0 | |
scikit-multilearn/scikit-multilearn | scikit-learn | 29 | get_params and set_params for MLClassifierBase erroneous | Hi,
While writing the test case for cross validation I noticed that I missed something in my implementation of said functions, which gives me time to discuss some shortcomings of the code.
In the scikit-learn implementation, there is quite a lot of code to find out the attributes of each object for the get_param function, namely `_get_param_names` and `signature`. I think for simple cases like the meta estimators we have here this would be just overhead. Thus I'd advocate for a list of attributes to be copied, defined by the very estimators. Or at least the MLClassifierBase could define the common ones, like `copyable_attrs = ["require_dense", "classifier"]`, and if needed, the heiring classifiers would rewrite this attribute with some of their own attributes, or add them or whatever.
In `get_params` we can then run over the keys and copy them, if deep-copyable and `deep=True` copy them too. Also we can check for attributes in `set_params`, providing another safety measure to prevent writing attribues to the object that weren't set in the first place. (This could result in some nasty bugs I think.)
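A minimal sketch of what that could look like (illustrative only, not the actual scikit-multilearn code):

```python
import copy

class MLClassifierBase:
    # attributes that get_params/set_params are allowed to touch
    copyable_attrs = ["require_dense", "classifier"]

    def __init__(self, classifier=None, require_dense=False):
        self.classifier = classifier
        self.require_dense = require_dense

    def get_params(self, deep=True):
        out = {}
        for attr in self.copyable_attrs:
            value = getattr(self, attr)
            out[attr] = copy.deepcopy(value) if deep else value
        return out

    def set_params(self, **params):
        for attr, value in params.items():
            if attr not in self.copyable_attrs:
                raise ValueError(f"Unknown parameter: {attr}")
            setattr(self, attr, value)
        return self

clf = MLClassifierBase(require_dense=True)
print(clf.get_params(deep=False))
```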
What is your take on this?
PS: I'll commit the bug fixes. Additionally, I've removed get_params/set_params from the label powerset meta estimator. They were wrong anyway (missing `self` as the first argument).
| closed | 2016-02-26T11:29:20Z | 2016-05-17T00:43:59Z | https://github.com/scikit-multilearn/scikit-multilearn/issues/29 | [
"bug"
] | ChristianSch | 4 |
microsoft/nni | pytorch | 4,958 | HPO problem | When I set the search space using quniform as below in HPO:
<img width="234" alt="image" src="https://user-images.githubusercontent.com/11451001/175040897-0a8cbe73-fbb9-4a9e-85ff-f8025bb16c1a.png">
The parameters often become long decimals; how can I fix it?
<img width="392" alt="image" src="https://user-images.githubusercontent.com/11451001/175040684-ea85af97-62c7-47f1-9314-5657e908737b.png">
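For background, a generic sketch (not NNI's actual implementation): `quniform` is typically defined as `round(x / q) * q`, and binary floating point can make such products print with long decimals. Rounding the received parameter to the quantum's precision inside the trial code is a common workaround:

```python
def quantize(x, q):
    """Typical quniform behaviour: snap x to the nearest multiple of q."""
    return round(x / q) * q

def quantize_clean(x, q, ndigits=10):
    """Round away binary floating-point noise after quantizing."""
    return round(quantize(x, q), ndigits)

print(quantize_clean(0.47, 0.05))  # 0.45
```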
| closed | 2022-06-22T13:29:47Z | 2022-06-27T03:05:20Z | https://github.com/microsoft/nni/issues/4958 | [
"question"
] | woocoder | 2 |
xinntao/Real-ESRGAN | pytorch | 530 | Using GFPGANv1.pth (Papermodel) in esrgan | I tried using the paper model of GFPGAN (GFPGANv1.pth) separately, which worked. But when I try to use it in ESRGAN, it gives an error.
<img width="1142" alt="image" src="https://user-images.githubusercontent.com/104137976/209955622-0475c12c-f133-4a16-8e00-68822e6ac2a5.png">
Has anyone tried the same? | open | 2022-12-29T13:11:30Z | 2022-12-29T13:11:30Z | https://github.com/xinntao/Real-ESRGAN/issues/530 | [] | sumit27kumar | 0 |
ray-project/ray | deep-learning | 51,273 | [core][gpu-objects] Actor sends the same ObjectRef twice to another actor | ### Description
If an actor sends the same ObjectRef twice to another actor, we should not call NCCL send twice. Instead, we should only call NCCL send once, and the receiver retrieves the tensor from its in-actor storage twice.
### Use case
_No response_ | open | 2025-03-11T22:08:47Z | 2025-03-11T22:13:36Z | https://github.com/ray-project/ray/issues/51273 | [
"enhancement",
"P1",
"core",
"gpu-objects"
] | kevin85421 | 0 |
newpanjing/simpleui | django | 108 | Major bug: entering or refreshing the admin page always flashes an ugly unrendered template page | **Bug description**
When entering the admin page or refreshing it, an ugly unrendered template page always flashes briefly.
**Steps to reproduce**
1. Enter the admin page, or refresh the admin management page
2. An ugly unrendered template page always flashes
3. This is a huge bug: it sits right at the entry point, the admin home page

| closed | 2019-07-05T13:57:45Z | 2019-07-09T05:48:30Z | https://github.com/newpanjing/simpleui/issues/108 | [
"bug"
] | insseek | 1 |
timkpaine/lantern | plotly | 143 | allow adjustments to comm interval | `time.sleep(1)` in comm.py should be a configurable parameter. Maybe instantaneous is best (let the source rate-limit?)
| closed | 2018-01-31T20:10:49Z | 2018-02-27T22:31:41Z | https://github.com/timkpaine/lantern/issues/143 | [
"feature"
] | timkpaine | 0 |
deepspeedai/DeepSpeed | machine-learning | 5,583 | different setting for same (num_gpus * batch_size * grad_accum_steps) output different loss and gradient norm | Hi, I have observed significant performance degradation in multi-GPU settings with grad accum > 1.
I'm sorry I haven't uploaded the profiling code yet (it will be uploaded soon), but I tested a 7B-scale LLM using same-size input data (fixed seed, with the same sequence repeated along the batch dimension).
I expected the loss and gradients to be the same across the different training settings because their (num_gpus * batch_size * grad_accum_steps) products are all equal, but each experiment produced different losses and gradients.
The (num_gpus / batch_size / grad_accum_steps) settings are as follows:
- 1 / 4 / 1
- 2 / 2 / 1
- 2 / 1 / 2
I ran fwd+bwd 4 times using AdamW with lr=0.01 (I set the lr high for strict profiling) and CPU-offloaded ZeRO-3 (for the 1-GPU run too, because of memory constraints).
Here is my question:
Is it correct that the outputs should all be the same when (num_gpus * batch_size * grad_accum_steps) is equal?
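For reference, the equivalence being assumed can be checked on toy per-sample gradients (my own sketch, independent of DeepSpeed): averaging one batch of 4 equals averaging two accumulated micro-batches of 2 in exact arithmetic, so only tiny floating-point differences from summation order, not large divergences, are expected:

```python
import numpy as np

rng = np.random.default_rng(0)
grads = rng.normal(size=(4, 3))          # per-sample gradients, 4 samples

g_full = grads.mean(axis=0)              # one batch of 4 on one GPU

# two micro-batches of 2 with accumulation (average of the two averages):
g_accum = (grads[:2].mean(axis=0) + grads[2:].mean(axis=0)) / 2.0

print(np.allclose(g_full, g_accum))      # True
```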
| closed | 2024-05-29T09:27:56Z | 2024-11-01T21:03:28Z | https://github.com/deepspeedai/DeepSpeed/issues/5583 | [
"bug",
"training"
] | SeunghyunSEO | 2 |
albumentations-team/albumentations | deep-learning | 1,645 | [Feature Request] Make RandomGridShuffle work when `size % num_splits != 0` | Right now if side is non divisible by the number of splits => we do not do anything.
Better would be:
image shape `(20, 17)`
Want `(3, 4)` splits
=> 20 = 6 + 7 + 7
=> 17 = 4 + 4 + 5 + 4
After that perform shuffle on every tuple:
Let it become: (7, 6, 7) и (4, 5, 4, 4)
Now we have 12 tiles with different sized => shuffle those that have the same size.
=>
6 tiles 7x4
2 tiles 7x5
3 tiles 6x4 | closed | 2024-04-10T21:38:38Z | 2024-04-12T02:39:16Z | https://github.com/albumentations-team/albumentations/issues/1645 | [
"enhancement"
] | ternaus | 1 |
JaidedAI/EasyOCR | pytorch | 750 | Free ocr tool | Hi I hope you are doing well!
Our dhurvaa's OCR Tool Transform any image, scanned document, or printed PDF to editable text in seconds using our FREE online Optical Character Recognition (OCR) feature. Use our FREE online OCR feature to recognize text from images.
https://dhurvaa.com/online_ocr_tool
Feel free to share your feedback 😊 | closed | 2022-06-07T16:16:21Z | 2022-06-08T08:45:07Z | https://github.com/JaidedAI/EasyOCR/issues/750 | [] | vishal-nayak1 | 0 |
roboflow/supervision | machine-learning | 928 | [Smoother] - `tracker_id` is `None` when there is no detection for an extended period | ### Bug
I have code that performs detections on video, which operates smoothly without the smoother function. However, upon integrating the smoother function, I encounter the following error:
```
Traceback (most recent call last):
File "/content/script_detection.py", line 165, in <module>
processor.process_video()
File "/content/script_detection.py", line 67, in process_video
annotated_frame = self.process_frame(frame)
File "/content/script_detection.py", line 121, in process_frame
return self.annotate_frame(frame, detections)
File "/content/script_detection.py", line 86, in annotate_frame
in zip(detections.tracker_id, detections.confidence, detections.class_id)
TypeError: 'NoneType' object is not iterable
```
### Environment
- Supervision: 0.18.0
### Minimal Reproducible Example
```python
def annotate_frame(
    self, frame: np.ndarray, detections: sv.Detections
) -> np.ndarray:
    annotated_frame = frame.copy()
    labels = [
        f"#{tracker_id} {class_id}"
        for tracker_id, confidence, class_id
        in zip(detections.tracker_id, detections.confidence, detections.class_id)
    ]
    annotated_frame = self.trace_annotator.annotate(
        scene=annotated_frame, detections=detections
    )
    annotated_frame = self.bounding_box_annotator.annotate(
        scene=annotated_frame, detections=detections
    )
    annotated_frame = self.label_annotator.annotate(
        scene=annotated_frame, detections=detections, labels=labels
    )
    annotated_frame = self.percentage_bar_annotator.annotate(
        scene=annotated_frame, detections=detections
    )
    return annotated_frame

def process_frame(self, frame: np.ndarray) -> np.ndarray:
    results = self.model(
        frame, verbose=False, conf=self.conf_threshold, iou=self.iou_threshold
    )[0]
    detections = sv.Detections.from_ultralytics(results)
    detections = detections[detections.confidence > args.confidence_threshold]
    detections = detections.with_nms(threshold=args.iou_threshold)
    detections = self.tracker.update_with_detections(detections=detections)
    detections = self.smoother.update_with_detections(detections=detections)
    return self.annotate_frame(frame, detections)
```
closed | 2024-02-20T09:59:47Z | 2024-02-28T13:25:32Z | https://github.com/roboflow/supervision/issues/928 | [
"bug"
] | vhuard | 8 |
stanfordnlp/stanza | nlp | 844 | Options to turn off or alter sentence segmentation | I have a document where each line is a sentence already. Therefore, the delimiter is `\n` and not `\n\n`. I want stanza to respect the segmentation that is already present in the data. It is not the case that tokens are split by ` ` or any other nice delimiter; tokenization must still be performed. So then ...
1) Is there a way to accomplish this?
2) Is there a general way to set a delimiter or list of delimiters which serve as an end of sentence marker? As crude as it is, for example, a list containing periods, end of lines and question marks. This is just an example, as I said, the only recognized delimiter in my documents would be a single `\n` | closed | 2021-10-14T15:04:06Z | 2021-12-24T20:41:38Z | https://github.com/stanfordnlp/stanza/issues/844 | [
"question",
"stale"
] | demongolem-biz | 4 |
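A possible approach for the stanza question above: stanza's `tokenize_no_ssplit` option is documented to split sentences only on blank lines, so converting each single `\n` into a blank line makes a one-sentence-per-line file line up with that behavior (the option name and semantics should be verified against the installed stanza version). A sketch of the preprocessing, with the stanza call itself left as comments:

```python
# One sentence per line in, blank-line-delimited sentences out.
doc_text = "this line is a sentence\nso is this one no punctuation needed\n"
pre_split = "\n\n".join(line for line in doc_text.splitlines() if line.strip())
print(pre_split)

# Then, assuming stanza is installed and its English models are downloaded:
# import stanza
# nlp = stanza.Pipeline(lang="en", processors="tokenize", tokenize_no_ssplit=True)
# doc = nlp(pre_split)   # expected: one stanza sentence per original line
```

Tokenization still runs normally inside each "sentence"; only the sentence-splitting step is bypassed.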
aleju/imgaug | machine-learning | 124 | Release of 2.5 / 2.6? | Any idea of when the current master (2.5 or 2.6 I think) might be released? Is there a release schedule / procedure? | open | 2018-04-19T14:06:51Z | 2018-06-04T19:20:53Z | https://github.com/aleju/imgaug/issues/124 | [] | Erotemic | 2 |
pydantic/logfire | pydantic | 879 | Add support for structured pydantic log bodies | ### Description
Hey folks! Our team just did a hands-on trial day yesterday to analyze the capabilities of pydantic-logfire, and the
first thing I'd like to say:
- the ease of use is amazing! :rocket:
pydantic logfire might be one of those desperately needed tools that finally simplify the huge technical complexity
(digesting tons of documentation and concept pages, understanding / reverse-engineering the logs, metrics, and trace
SDKs) and infrastructure complexity (running your own collector) of OpenTelemetry, making it more easily adoptable by a
less technical audience as well.
Integrating pydantic logfire into one of our microservices has been a no-brainer and stupid simple :rocket:
However, we ran into one thing that really puzzled us:
- pydantic logfire does not accept models from its own companion library (pydantic) as log bodies and enforces log record bodies to be strings
This appears to be a huge technical limitation. We are running a highly automated distributed mlops stack, and we put a
decent effort into ensuring that all platform components (microservices running fastapi, orchestration engines
(apache-airflow), processing jobs (spark), training jobs and many more) issue fully structured logs. And we use pydantic
to model our log events, which helps a lot to publish fully structured and schema-safe log events.
In a nutshell, this is what we do more or less:
```python
import pydantic


class Model(pydantic.BaseModel):
    name: str
    version: str


class Instance(pydantic.BaseModel):
    model_config = pydantic.ConfigDict(populate_by_name=True)
    type_: str = pydantic.Field(alias="type")
    count: int


class EndpointCreated(pydantic.BaseModel):
    name: str
    model: Model
    instance: Instance

somelogger.info(
EndpointCreated(
name="my_ml_endpoint",
model=Model(name="foo", version="2"),
instance=Instance(type_="medium", count=2)
))
```
Due to the above-mentioned pain points, our logs are (alas) not OpenTelemetry-compatible yet, but this is roughly what
the resulting log entry could look like in otel:
```yaml
{
"timestamp": 174012134465303,
"observed_timestamp": 174012134565816,
"trace_id": "019523d43b42cf4394594759b699305d",
"span_id": "a9c6c9ec18b1472a",
"severity_text": "INFO",
"severity_number": 9,
"body": {
"endpoint_name": "my_ml_endpoint",
"model": {
"name": "foo",
"version": "2"
},
"instance": {
"type": "medium",
"count": 2
}
},
"resource": {
"service.namespace": "model-serving",
"service.name": "api",
"service.version": "3.0.1"
},
"attributes": {
# attributes derived from the pydantic class object
"log.record.type": "EndpointCreated",
"log.record.schema.major": "1",
"log.record.schema.minor": "3",
# other standardized otel attributes
}
}
```
Using fully structured logs enables a whole new world of automation if log entries are finally fully structured and
machine parseable (instead of human-readable unstructured prose).
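To make the idea concrete, here is a stdlib-only sketch of emitting such a machine-parseable body from a typed event (all names are hypothetical, `dataclasses` stands in for pydantic, and the envelope is only loosely modeled on the otel record shape above):

```python
import dataclasses
import json
import logging


@dataclasses.dataclass
class EndpointCreated:
    endpoint_name: str
    model_name: str
    model_version: str


class JsonBodyFormatter(logging.Formatter):
    """Serialize the record's msg (a typed event) as a structured JSON body."""

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps(
            {
                "severity_text": record.levelname,
                "body": dataclasses.asdict(record.msg),
                "attributes": {"log.record.type": type(record.msg).__name__},
            }
        )


handler = logging.StreamHandler()
handler.setFormatter(JsonBodyFormatter())
logger = logging.getLogger("model_serving")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# the consumer gets a machine-parseable body instead of interpolated prose
logger.info(EndpointCreated("my_ml_endpoint", "foo", "2"))
```

With pydantic instead of dataclasses, `record.msg.model_dump()` would take the place of `dataclasses.asdict`.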
Don't get me wrong. I do understand that a logging library needs to support "good" old f-string logging, but having
no support for fully structured log events feels somewhat strange, **especially** coming from a team that essentially
created a powerful and enjoyable python serde library in the first place 🤔
- Has this been considered, but rejected?
- If so, for what reasons? 🤔
Amongst lots of other things, think of use cases such as this (creating log views that instantly show all ml models ever deployed, alongside other standardized metadata validated by pydantic):
```sql
SELECT
trace_id
, span_id
    , body ->> 'endpoint_name' as endpoint_name
    , body -> 'model' ->> 'name' as model_name
    , body -> 'model' ->> 'version' as model_version
    , body -> 'instance' ->> 'type' as instance_type
    , body -> 'instance' ->> 'count' as instance_count
FROM
records
```
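The same kind of projection can be sketched against SQLite's built-in JSON functions, which makes the idea runnable without a server (the query above looks like Postgres jsonb, so this is an illustration of the concept rather than the same engine):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (trace_id TEXT, span_id TEXT, body TEXT)")
body = {
    "endpoint_name": "my_ml_endpoint",
    "model": {"name": "foo", "version": "2"},
    "instance": {"type": "medium", "count": 2},
}
conn.execute("INSERT INTO records VALUES (?, ?, ?)", ("t1", "s1", json.dumps(body)))

# json_extract walks the nested body, mirroring the nested key lookups above
row = conn.execute(
    """
    SELECT trace_id,
           json_extract(body, '$.endpoint_name'),
           json_extract(body, '$.model.name'),
           json_extract(body, '$.model.version'),
           json_extract(body, '$.instance.type'),
           json_extract(body, '$.instance.count')
    FROM records
    """
).fetchone()
print(row)  # → ('t1', 'my_ml_endpoint', 'foo', '2', 'medium', 2)
```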
Kind regards! | open | 2025-02-21T07:58:27Z | 2025-02-21T07:58:27Z | https://github.com/pydantic/logfire/issues/879 | [
"Feature Request"
] | ddluke | 0 |
microsoft/nni | machine-learning | 5,776 | I cannot make it work on GPU for training | ## Description of the issue
I cannot run any experiment on GPU.
I have tried with a Tesla P4, a P100, and a GTX 1060. I can only make it work using the CPU.
I have tried many configs, setting useActiveGpu to True or False, trialGpuNumber to 1, and gpuIndices: '0'. However, it could never complete a single architecture training run.
I have tried both outside and inside a Docker container.
## Configuration
- Experiment config: `nni/examples/trials/mnist-pytorch/config.yml`
## Outside a Docker container
### Environment
- NNI version: 3.0
- Training service: local
- Client OS: Debian 10
- Python version: 3.10.13
- PyTorch/TensorFlow version: 2.3.0+cu121
- Is conda/virtualenv/venv used?: yes
### Log message
#### nnimanager.log
```
[2024-05-03 10:54:56] WARNING (pythonScript) Python command [nni.tools.nni_manager_scripts.collect_gpu_info] has stderr: Traceback (most recent call last):
File "/opt/conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.10/site-packages/nni/tools/nni_manager_scripts/collect_gpu_info.py", line 174, in <module>
main()
File "/opt/conda/lib/python3.10/site-packages/nni/tools/nni_manager_scripts/collect_gpu_info.py", line 34, in main
print(json.dumps(data), flush=True)
File "/opt/conda/lib/python3.10/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/opt/conda/lib/python3.10/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/opt/conda/lib/python3.10/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/opt/conda/lib/python3.10/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type bytes is not JSON serializable
[2024-05-03 10:54:56] INFO (ShutdownManager) Initiate shutdown: training service initialize failed
[2024-05-03 10:54:56] ERROR (GpuInfoCollector) Failed to collect GPU info, collector output:
[2024-05-03 10:54:56] ERROR (TrainingServiceCompat) Training srevice initialize failed: Error: TaskScheduler: Failed to collect GPU info
at TaskScheduler.init (/opt/conda/lib/python3.10/site-packages/nni_node/common/trial_keeper/task_scheduler/scheduler.js:16:19)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async TaskSchedulerClient.start (/opt/conda/lib/python3.10/site-packages/nni_node/common/trial_keeper/task_scheduler_client.js:20:13)
at async Promise.all (index 0)
at async TrialKeeper.start (/opt/conda/lib/python3.10/site-packages/nni_node/common/trial_keeper/keeper.js:48:9)
at async LocalTrainingServiceV3.start (/opt/conda/lib/python3.10/site-packages/nni_node/training_service/local_v3/local.js:28:9)
at async V3asV1.start (/opt/conda/lib/python3.10/site-packages/nni_node/training_service/v3/compat.js:235:29
```
There, the GPU info cannot be retrieved.
#### experiment.log
```
[2024-05-03 13:52:31] INFO (nni.experiment) Starting web server...
[2024-05-03 13:52:32] INFO (nni.experiment) Setting up...
[2024-05-03 13:52:33] INFO (nni.experiment) Web portal URLs: http://127.0.0.1:8081 http://10.164.0.8:8081 http://172.17.0.1:8081
[2024-05-03 13:53:03] INFO (nni.experiment) Stopping experiment, please wait...
[2024-05-03 13:53:03] INFO (nni.experiment) Saving experiment checkpoint...
[2024-05-03 13:53:03] INFO (nni.experiment) Stopping NNI manager, if any...
[2024-05-03 13:53:23] ERROR (nni.experiment) HTTPConnectionPool(host='localhost', port=8081): Read timed out. (read timeout=20)
Traceback (most recent call last):
File "/opt/conda/envs/nni/lib/python3.9/site-packages/urllib3/connectionpool.py", line 537, in _make_request
response = conn.getresponse()
File "/opt/conda/envs/nni/lib/python3.9/site-packages/urllib3/connection.py", line 466, in getresponse
httplib_response = super().getresponse()
File "/opt/conda/envs/nni/lib/python3.9/http/client.py", line 1377, in getresponse
response.begin()
File "/opt/conda/envs/nni/lib/python3.9/http/client.py", line 320, in begin
version, status, reason = self._read_status()
File "/opt/conda/envs/nni/lib/python3.9/http/client.py", line 281, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "/opt/conda/envs/nni/lib/python3.9/socket.py", line 704, in readinto
return self._sock.recv_into(b)
socket.timeout: timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/conda/envs/nni/lib/python3.9/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/opt/conda/envs/nni/lib/python3.9/site-packages/urllib3/connectionpool.py", line 847, in urlopen
retries = retries.increment(
File "/opt/conda/envs/nni/lib/python3.9/site-packages/urllib3/util/retry.py", line 470, in increment
raise reraise(type(error), error, _stacktrace)
File "/opt/conda/envs/nni/lib/python3.9/site-packages/urllib3/util/util.py", line 39, in reraise
raise value
File "/opt/conda/envs/nni/lib/python3.9/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
File "/opt/conda/envs/nni/lib/python3.9/site-packages/urllib3/connectionpool.py", line 539, in _make_request
self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
File "/opt/conda/envs/nni/lib/python3.9/site-packages/urllib3/connectionpool.py", line 370, in _raise_timeout
raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='localhost', port=8081): Read timed out. (read timeout=20)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/nni/lib/python3.9/site-packages/nni/experiment/experiment.py", line 171, in _stop_nni_manager
rest.delete(self.port, '/experiment', self.url_prefix)
File "/opt/conda/envs/nni/lib/python3.9/site-packages/nni/experiment/rest.py", line 52, in delete
request('delete', port, api, prefix=prefix)
File "/opt/conda/envs/nni/lib/python3.9/site-packages/nni/experiment/rest.py", line 31, in request
resp = requests.request(method, url, timeout=timeout)
File "/opt/conda/envs/nni/lib/python3.9/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/opt/conda/envs/nni/lib/python3.9/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/opt/conda/envs/nni/lib/python3.9/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/opt/conda/envs/nni/lib/python3.9/site-packages/requests/adapters.py", line 532, in send
raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPConnectionPool(host='localhost', port=8081): Read timed out. (read timeout=20)
[2024-05-03 13:53:23] WARNING (nni.experiment) Cannot gracefully stop experiment, killing NNI process...
```
There is a timeout since the data cannot be retrieved.
## Inside a Docker container
### Dockerfile
```
# Copyright (c) Microsoft Corporation.
# Licensed under the MIT license.
FROM nvidia/cuda:11.3.1-cudnn8-runtime-ubuntu20.04
ARG NNI_RELEASE
LABEL maintainer='Microsoft NNI Team<nni@microsoft.com>'
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get -y update
RUN apt-get -y install \
automake \
build-essential \
cmake \
curl \
git \
openssh-server \
python3 \
python3-dev \
python3-pip \
sudo \
unzip \
wget \
zip
RUN apt-get clean
RUN rm -rf /var/lib/apt/lists/*
RUN ln -s python3 /usr/bin/python
RUN python3 -m pip --no-cache-dir install pip==22.0.3 setuptools==60.9.1 wheel==0.37.1
RUN python3 -m pip --no-cache-dir install \
lightgbm==3.3.2 \
numpy==1.22.2 \
pandas==1.4.1 \
scikit-learn==1.0.2 \
scipy==1.8.0
RUN python3 -m pip --no-cache-dir install \
torch==1.10.2+cu113 \
torchvision==0.11.3+cu113 \
torchaudio==0.10.2+cu113 \
-f https://download.pytorch.org/whl/cu113/torch_stable.html
RUN python3 -m pip --no-cache-dir install pytorch-lightning==1.6.1
RUN python3 -m pip --no-cache-dir install tensorflow==2.9.1
RUN python3 -m pip --no-cache-dir install azureml==0.2.7 azureml-sdk==1.38.0
# COPY dist/nni-${NNI_RELEASE}-py3-none-manylinux1_x86_64.whl .
# RUN python3 -m pip install nni-${NNI_RELEASE}-py3-none-manylinux1_x86_64.whl
# RUN rm nni-${NNI_RELEASE}-py3-none-manylinux1_x86_64.whl
ENV PATH=/root/.local/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/bin:/usr/bin:/usr/sbin
WORKDIR /root
RUN pip install nni
RUN git clone https://github.com/microsoft/nni.git
RUN apt-get -y update
RUN apt-get -y install nano
```
### Log message
#### nnimanager.log
```
root@1b02414e6d3e:~/nni-experiments/_latest/log# cat nnimanager.log
[2024-05-03 14:46:11] DEBUG (WsChannelServer.tuner) Start listening tuner/:channel
[2024-05-03 14:46:11] INFO (main) Start NNI manager
[2024-05-03 14:46:11] INFO (RestServer) Starting REST server at port 8080, URL prefix: "/"
[2024-05-03 14:46:11] INFO (RestServer) REST server started.
[2024-05-03 14:46:11] DEBUG (SqlDB) Database directory: /root/nni-experiments/o21hdgqs/db
[2024-05-03 14:46:11] INFO (NNIDataStore) Datastore initialization done
[2024-05-03 14:46:11] DEBUG (main) start() returned.
[2024-05-03 14:46:12] DEBUG (NNIRestHandler) GET: /check-status: body: {}
[2024-05-03 14:46:12] DEBUG (NNIRestHandler) POST: /experiment: body: {
experimentType: 'hpo',
searchSpaceFile: '/root/nni/examples/trials/mnist-pytorch/search_space.json',
searchSpace: {
batch_size: { _type: 'choice', _value: [Array] },
hidden_size: { _type: 'choice', _value: [Array] },
lr: { _type: 'choice', _value: [Array] },
momentum: { _type: 'uniform', _value: [Array] }
},
trialCommand: 'python3 mnist.py',
trialCodeDirectory: '/root/nni/examples/trials/mnist-pytorch',
trialConcurrency: 1,
trialGpuNumber: 1,
useAnnotation: false,
debug: false,
logLevel: 'info',
experimentWorkingDirectory: '/root/nni-experiments',
tuner: { name: 'TPE', classArgs: { optimize_mode: 'maximize' } },
trainingService: {
platform: 'local',
trialCommand: 'python3 mnist.py',
trialCodeDirectory: '/root/nni/examples/trials/mnist-pytorch',
trialGpuNumber: 1,
debug: false,
useActiveGpu: true,
maxTrialNumberPerGpu: 1,
reuseMode: false
}
}
[2024-05-03 14:46:12] INFO (NNIManager) Starting experiment: o21hdgqs
[2024-05-03 14:46:12] INFO (NNIManager) Setup training service...
[2024-05-03 14:46:12] DEBUG (LocalV3.local) Training sevice config: {
platform: 'local',
trialCommand: 'python3 mnist.py',
trialCodeDirectory: '/root/nni/examples/trials/mnist-pytorch',
trialGpuNumber: 1,
debug: false,
useActiveGpu: true,
maxTrialNumberPerGpu: 1,
reuseMode: false
}
[2024-05-03 14:46:12] INFO (NNIManager) Setup tuner...
[2024-05-03 14:46:12] DEBUG (NNIManager) dispatcher command: /usr/bin/python3,-m,nni,--exp_params,eyJleHBlcmltZW50VHlwZSI6ImhwbyIsInNlYXJjaFNwYWNlRmlsZSI6Ii9yb290L25uaS9leGFtcGxlcy90cmlhbHMvbW5pc3QtcHl0b3JjaC9zZWFyY2hfc3BhY2UuanNvbiIsInRyaWFsQ29tbWFuZCI6InB5dGhvbjMgbW5pc3QucHkiLCJ0cmlhbENvZGVEaXJlY3RvcnkiOiIvcm9vdC9ubmkvZXhhbXBsZXMvdHJpYWxzL21uaXN0LXB5dG9yY2giLCJ0cmlhbENvbmN1cnJlbmN5IjoxLCJ0cmlhbEdwdU51bWJlciI6MSwidXNlQW5ub3RhdGlvbiI6ZmFsc2UsImRlYnVnIjpmYWxzZSwibG9nTGV2ZWwiOiJpbmZvIiwiZXhwZXJpbWVudFdvcmtpbmdEaXJlY3RvcnkiOiIvcm9vdC9ubmktZXhwZXJpbWVudHMiLCJ0dW5lciI6eyJuYW1lIjoiVFBFIiwiY2xhc3NBcmdzIjp7Im9wdGltaXplX21vZGUiOiJtYXhpbWl6ZSJ9fSwidHJhaW5pbmdTZXJ2aWNlIjp7InBsYXRmb3JtIjoibG9jYWwiLCJ0cmlhbENvbW1hbmQiOiJweXRob24zIG1uaXN0LnB5IiwidHJpYWxDb2RlRGlyZWN0b3J5IjoiL3Jvb3Qvbm5pL2V4YW1wbGVzL3RyaWFscy9tbmlzdC1weXRvcmNoIiwidHJpYWxHcHVOdW1iZXIiOjEsImRlYnVnIjpmYWxzZSwidXNlQWN0aXZlR3B1Ijp0cnVlLCJtYXhUcmlhbE51bWJlclBlckdwdSI6MSwicmV1c2VNb2RlIjpmYWxzZX19
[2024-05-03 14:46:12] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: RUNNING
[2024-05-03 14:46:12] DEBUG (tuner_command_channel) Waiting connection...
[2024-05-03 14:46:12] DEBUG (NNIRestHandler) GET: /check-status: body: {}
[2024-05-03 14:46:13] DEBUG (WsChannelServer.tuner) Incoming connection __default__
[2024-05-03 14:46:13] DEBUG (WsChannel.__default__) Epoch 0 start
[2024-05-03 14:46:13] INFO (NNIManager) Add event listeners
[2024-05-03 14:46:13] DEBUG (NNIManager) Send tuner command: INITIALIZE: [object Object]
[2024-05-03 14:46:13] INFO (LocalV3.local) Start
[2024-05-03 14:46:13] INFO (NNIManager) NNIManager received command from dispatcher: ID,
[2024-05-03 14:46:13] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"batch_size": 32, "hidden_size": 128, "lr": 0.001, "momentum": 0.47523697672790355}, "parameter_index": 0}
[2024-05-03 14:46:14] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 0,
hyperParameters: {
value: '{"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"batch_size": 32, "hidden_size": 128, "lr": 0.001, "momentum": 0.47523697672790355}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2024-05-03 14:46:15] INFO (GpuInfoCollector) Forced update: {
gpuNumber: 1,
driverVersion: '550.54.15',
cudaVersion: 12040,
gpus: [
{
index: 0,
model: 'Tesla T4',
cudaCores: 2560,
gpuMemory: 16106127360,
freeGpuMemory: 15642263552,
gpuCoreUtilization: 0,
gpuMemoryUtilization: 0
}
],
processes: [],
success: true
}
[2024-05-03 14:46:17] INFO (LocalV3.local) Register directory trial_code = /root/nni/examples/trials/mnist-pytorch
```
#### experiment.log
```
root@1b02414e6d3e:~/nni-experiments/_latest/log# cat experiment.log
[2024-05-03 14:46:11] INFO (nni.experiment) Creating experiment, Experiment ID: o21hdgqs
[2024-05-03 14:46:11] INFO (nni.experiment) Starting web server...
[2024-05-03 14:46:12] INFO (nni.experiment) Setting up...
[2024-05-03 14:46:12] INFO (nni.experiment) Web portal URLs: http://127.0.0.1:8080 http://172.17.0.2:8080
[2024-05-03 14:46:42] INFO (nni.experiment) Stopping experiment, please wait...
[2024-05-03 14:46:42] INFO (nni.experiment) Saving experiment checkpoint...
[2024-05-03 14:46:42] INFO (nni.experiment) Stopping NNI manager, if any...
```
## When I'm using CPU only:
I obtain what I want: the WebUI, the experiment trials, and so on...
```
root@6dcd2267cf44:~# nnictl create --config nni/examples/trials/mnist-pytorch/config.yml --foreground --debug
[2024-05-03 14:37:54] Creating experiment, Experiment ID: tcq192jf
[2024-05-03 14:37:54] Starting web server...
[2024-05-03 14:37:55] DEBUG (WsChannelServer.tuner) Start listening tuner/:channel
[2024-05-03 14:37:55] INFO (main) Start NNI manager
[2024-05-03 14:37:55] INFO (RestServer) Starting REST server at port 8080, URL prefix: "/"
[2024-05-03 14:37:55] INFO (RestServer) REST server started.
[2024-05-03 14:37:55] DEBUG (SqlDB) Database directory: /root/nni-experiments/tcq192jf/db
[2024-05-03 14:37:55] INFO (NNIDataStore) Datastore initialization done
[2024-05-03 14:37:55] DEBUG (main) start() returned.
[2024-05-03 14:37:55] DEBUG (NNIRestHandler) GET: /check-status: body: {}
[2024-05-03 14:37:55] Setting up...
[2024-05-03 14:37:55] DEBUG (NNIRestHandler) POST: /experiment: body: {
experimentType: 'hpo',
searchSpaceFile: '/root/nni/examples/trials/mnist-pytorch/search_space.json',
searchSpace: {
batch_size: { _type: 'choice', _value: [Array] },
hidden_size: { _type: 'choice', _value: [Array] },
lr: { _type: 'choice', _value: [Array] },
momentum: { _type: 'uniform', _value: [Array] }
},
trialCommand: 'python3 mnist.py',
trialCodeDirectory: '/root/nni/examples/trials/mnist-pytorch',
trialConcurrency: 1,
trialGpuNumber: 0,
useAnnotation: false,
debug: false,
logLevel: 'info',
experimentWorkingDirectory: '/root/nni-experiments',
tuner: { name: 'TPE', classArgs: { optimize_mode: 'maximize' } },
trainingService: {
platform: 'local',
trialCommand: 'python3 mnist.py',
trialCodeDirectory: '/root/nni/examples/trials/mnist-pytorch',
trialGpuNumber: 0,
debug: false,
maxTrialNumberPerGpu: 1,
reuseMode: false
}
}
[2024-05-03 14:37:55] INFO (NNIManager) Starting experiment: tcq192jf
[2024-05-03 14:37:55] INFO (NNIManager) Setup training service...
[2024-05-03 14:37:55] DEBUG (LocalV3.local) Training sevice config: {
platform: 'local',
trialCommand: 'python3 mnist.py',
trialCodeDirectory: '/root/nni/examples/trials/mnist-pytorch',
trialGpuNumber: 0,
debug: false,
maxTrialNumberPerGpu: 1,
reuseMode: false
}
[2024-05-03 14:37:55] INFO (NNIManager) Setup tuner...
[2024-05-03 14:37:55] DEBUG (NNIManager) dispatcher command: /usr/bin/python3,-m,nni,--exp_params,eyJleHBlcmltZW50VHlwZSI6ImhwbyIsInNlYXJjaFNwYWNlRmlsZSI6Ii9yb290L25uaS9leGFtcGxlcy90cmlhbHMvbW5pc3QtcHl0b3JjaC9zZWFyY2hfc3BhY2UuanNvbiIsInRyaWFsQ29tbWFuZCI6InB5dGhvbjMgbW5pc3QucHkiLCJ0cmlhbENvZGVEaXJlY3RvcnkiOiIvcm9vdC9ubmkvZXhhbXBsZXMvdHJpYWxzL21uaXN0LXB5dG9yY2giLCJ0cmlhbENvbmN1cnJlbmN5IjoxLCJ0cmlhbEdwdU51bWJlciI6MCwidXNlQW5ub3RhdGlvbiI6ZmFsc2UsImRlYnVnIjpmYWxzZSwibG9nTGV2ZWwiOiJpbmZvIiwiZXhwZXJpbWVudFdvcmtpbmdEaXJlY3RvcnkiOiIvcm9vdC9ubmktZXhwZXJpbWVudHMiLCJ0dW5lciI6eyJuYW1lIjoiVFBFIiwiY2xhc3NBcmdzIjp7Im9wdGltaXplX21vZGUiOiJtYXhpbWl6ZSJ9fSwidHJhaW5pbmdTZXJ2aWNlIjp7InBsYXRmb3JtIjoibG9jYWwiLCJ0cmlhbENvbW1hbmQiOiJweXRob24zIG1uaXN0LnB5IiwidHJpYWxDb2RlRGlyZWN0b3J5IjoiL3Jvb3Qvbm5pL2V4YW1wbGVzL3RyaWFscy9tbmlzdC1weXRvcmNoIiwidHJpYWxHcHVOdW1iZXIiOjAsImRlYnVnIjpmYWxzZSwibWF4VHJpYWxOdW1iZXJQZXJHcHUiOjEsInJldXNlTW9kZSI6ZmFsc2V9fQ==
[2024-05-03 14:37:55] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: RUNNING
[2024-05-03 14:37:55] DEBUG (tuner_command_channel) Waiting connection...
[2024-05-03 14:37:55] Web portal URLs: http://127.0.0.1:8080 http://172.17.0.2:8080
[2024-05-03 14:37:55] DEBUG (NNIRestHandler) GET: /check-status: body: {}
[2024-05-03 14:37:57] DEBUG (WsChannelServer.tuner) Incoming connection __default__
[2024-05-03 14:37:57] DEBUG (WsChannel.__default__) Epoch 0 start
[2024-05-03 14:37:57] INFO (NNIManager) Add event listeners
[2024-05-03 14:37:57] DEBUG (NNIManager) Send tuner command: INITIALIZE: [object Object]
[2024-05-03 14:37:57] INFO (LocalV3.local) Start
[2024-05-03 14:37:57] INFO (NNIManager) NNIManager received command from dispatcher: ID,
[2024-05-03 14:37:57] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"batch_size": 128, "hidden_size": 1024, "lr": 0.001, "momentum": 0.6039114358987745}, "parameter_index": 0}
[2024-05-03 14:37:57] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 0,
hyperParameters: {
value: '{"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"batch_size": 128, "hidden_size": 1024, "lr": 0.001, "momentum": 0.6039114358987745}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2024-05-03 14:37:58] INFO (LocalV3.local) Register directory trial_code = /root/nni/examples/trials/mnist-pytorch
[2024-05-03 14:37:58] INFO (LocalV3.local) Created trial wcvTY
[2024-05-03 14:38:00] INFO (LocalV3.local) Trial parameter: wcvTY {"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"batch_size": 128, "hidden_size": 1024, "lr": 0.001, "momentum": 0.6039114358987745}, "parameter_index": 0}
[2024-05-03 14:38:05] DEBUG (NNIRestHandler) GET: /check-status: body: {}
...
```
## How to reproduce it?
If from a Docker container:
```
docker build -t "nas-experiment" .
nvidia-docker run -it -p 8081:8081 nas-101-experiment
```
Then in both cases:
1. Both outside and inside a Docker container, I modified the file /nni/examples/trials/mnist-pytorch/config.yml in order to run the process on GPU.
2. Then I ran the following command so I could watch the logs live.
```
nnictl create --config /nni/examples/trials/mnist-pytorch/config.yml --port 8081 --debug --foreground
```
As a result, the WebUI wouldn't start due to a timeout while trying to retrieve data, since the experiment won't load on GPU.
## Notes
- I am very available to answer questions and to receive help on the subject, as I currently work on NAS.
- I'm going to look at what Archai is and how it differs from NNI until I can use the GPU for training.
- I'm using GCP instances to run this search.
adbar/trafilatura | web-scraping | 557 | Readme.md table is broken. | 
| closed | 2024-04-14T23:15:06Z | 2024-04-19T14:56:29Z | https://github.com/adbar/trafilatura/issues/557 | [
"bug"
] | AnishPimpley | 1 |
aminalaee/sqladmin | fastapi | 835 | Edit database 400 error: Too many fields. Maximum number of fields is 1000. | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
I have an object that I am trying to edit from the admin panel. The model field is a list of enum values, to which I want to add one more value. The annoying thing is that some objects of the same model can be edited fine. However, for some objects the admin page shows the message:

I know this is not much to go on but does anyone have any idea what it might be and how I can further my investigation? Please let me know if you need more info
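For what it's worth, the error text matches the message Starlette's form parser raises when a submitted form exceeds its default field cap (1000), which a list-of-enums widget rendering one input per element could plausibly trip. Both the 1000 default and the `request.form(max_fields=...)` escape hatch are assumptions about the underlying stack and worth verifying against the versions in use. A stdlib-only sketch of the failure mode:

```python
from urllib.parse import parse_qsl

MAX_FIELDS = 1000  # the cap implied by the error message


def count_form_fields(encoded_body: str) -> int:
    # count fields in an application/x-www-form-urlencoded body
    return len(parse_qsl(encoded_body, keep_blank_values=True))


# a list widget that posts one field per selected value scales with list size
body = "&".join(f"tags={i}" for i in range(1500))
print(count_form_fields(body) > MAX_FIELDS)  # → True
```

If that is the cause, objects with short lists would edit fine while objects whose widget renders over 1000 fields would hit the 400, which matches the observed per-object behavior.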
### Steps to reproduce the bug
_No response_
### Expected behavior
_No response_
### Actual behavior
_No response_
### Debugging material
_No response_
### Environment
- The code is deployed with FastAPI using Dockerfile to a kubernetes cluster
- The docker base image is python:3.12.3-slim https://hub.docker.com/layers/library/python/3.12.3-slim/images/sha256-a9922d9d812c99034d23bef370270abe95def980c1f2664afac002f3e6a9428e
### Additional context
_No response_ | open | 2024-10-16T13:53:30Z | 2024-12-25T14:15:24Z | https://github.com/aminalaee/sqladmin/issues/835 | [] | pimverschuuren | 2 |
deepfakes/faceswap | deep-learning | 543 | Install went fine, extraction fine, training no errors but no feedback either. | (i answered all of the items in the post outline form below, but i just condensed to paragraph)
Hey, thanks for the awesome program (cool gui as well) I came from openswap, and this gui is great cant wait to delve into the alignments file operations and maintenance options. (even though ill probably use the console, its nice having a gui to quickly visualize and save configs)
Anyway, so I had an issue with splicing first, I ended up just using ffmpeg on a separate machine and porting the stills over. Extraction went just fine, lots of faces found etc. After removing the false positives, I went to train, however it never showed any progress after a half an hour. I switched to 25 save interval, still nothing. I am using original trainer, batch 64. I am working with a k80, and usually that can breeze through the first 100 iterations in less than a couple minutes. So I think something is wrong. I'm also seeing very low cpu usage in task manager, and usually it would be almost maxed out when training in the past.
I followed the windows installation guide to a T, have all the proper drivers/cuda/cudnn/py3.5 versions, all recognized etc. ATX yes etc. The extraction went fine, so not sure why training is not showing anything. I used the verbose option, but cant really see any output anywhere at all.
Any ideas? thanks.
| closed | 2018-12-09T05:34:49Z | 2019-01-09T20:01:16Z | https://github.com/deepfakes/faceswap/issues/543 | [] | IEWbgfnYDwHRoRRSKtkdyMDUzgdwuBYgDKtDJWd | 2 |
ray-project/ray | tensorflow | 51,518 | [Autoscaler] Issue when using ray start --head with --autoscaling-config | ### What happened + What you expected to happen
## Describe
When running `ray start --head` with `--autoscaling-config`, the head node is deployed, but a head node creation request for the `head_node_type` defined in the autoscaling config remains in a pending state in the autoscaler, even though the head node is already running.
In this case, no `head_node_type` node actually needs to be created, yet the field is required in the configuration. Is this behavior intentional, or is there something I might be overlooking?
### autoscaling-config
```yaml
...
# Specify the node type of the head node (as configured above).
head_node_type: head.default
```
## Expected Behavior
- Run the head node using the following command:
- ray start --head --dashboard-agent-listen-port=52365 --memory=0 --num-gpus=1 --num-cpus=0 --autoscaling-config=/tmp/dev/test.yaml
- The head node should be deployed, and since worker.default is set with min_workers: 10, 10 worker.default nodes should be pending in the autoscaler.
- ray list nodes
- Total: 1(head node)
- Pending:
- worker.default, 10
## Actual Behavior

- ray list nodes
- Total: 1(head node)
- Pending:
- head.default, 1
- worker.default, 10
### Versions / Dependencies
Ray-2.43.0
### Reproduction script
## /tmp/dev/test.yaml
```yaml
cluster_name: test
provider:
type: local
head_ip: localhost
coordinator_address: "127.0.0.1:8000"
# How Ray will authenticate with newly launched nodes.
auth:
ssh_user: root
ssh_private_key: /root/.ssh/id_rsa
worker_liveness_check: False
worker_rpc_drain: True
disable_node_updaters: True
disable_launch_config_check: True
use_internal_ips: True
max_workers: 100
# The default behavior for manually managed clusters is
# min_workers == max_workers == len(worker_ips),
# meaning that Ray is started on all available nodes of the cluster.
# For automatically managed clusters, max_workers is required and min_workers defaults to 0.
# The autoscaler will scale up the cluster faster with higher upscaling speed.
# E.g., if the task requires adding more nodes then autoscaler will gradually
# scale up the cluster in chunks of upscaling_speed*currently_running_nodes.
# This number should be > 0.
upscaling_speed: 1.0
idle_timeout_minutes: 5
# Files or directories to copy to the head and worker nodes. The format is a
# dictionary from REMOTE_PATH: LOCAL_PATH, e.g.
file_mounts: {
# "/path1/on/remote/machine": "/path1/on/local/machine",
# "/path2/on/remote/machine": "/path2/on/local/machine",
}
# Files or directories to copy from the head node to the worker nodes. The format is a
# list of paths. The same path on the head node will be copied to the worker node.
# This behavior is a subset of the file_mounts behavior. In the vast majority of cases
# you should just use file_mounts. Only use this if you know what you're doing!
cluster_synced_files: []
# Whether changes to directories in file_mounts or cluster_synced_files in the head node
# should sync to the worker node continuously
file_mounts_sync_continuously: False
# Patterns for files to exclude when running rsync up or rsync down
rsync_exclude:
- "**/.git"
- "**/.git/**"
# Pattern files to use for filtering out files when running rsync up or rsync down. The file is searched for
# in the source directory and recursively through all subdirectories. For example, if .gitignore is provided
# as a value, the behavior will match git's behavior for finding and using .gitignore files.
rsync_filter:
- ".gitignore"
# List of commands that will be run before `setup_commands`. If docker is
# enabled, these commands will run outside the container and before docker
# is setup.
initialization_commands: []
# List of shell commands to run to set up each nodes.
setup_commands: [
]
# Note: if you're developing Ray, you probably want to create a Docker image that
# has your Ray repo pre-cloned. Then, you can replace the pip installs
# below with a git checkout <your_sha> (and possibly a recompile).
# To run the nightly version of ray (as opposed to the latest), either use a rayproject docker image
# that has the "nightly" (e.g. "rayproject/ray-ml:nightly-gpu") or uncomment the following line:
# - pip install -U "ray[default] @ https://s3-us-west-2.amazonaws.com/ray-wheels/latest/ray-2.0.0.dev0-cp37-cp37m-manylinux2014_x86_64.whl"
# Custom commands that will be run on the head node after common setup.
head_setup_commands: [
]
# Custom commands that will be run on worker nodes after common setup.
worker_setup_commands: [
]
# Command to start ray on the head node. You don't need to change this.
head_start_ray_commands: [
]
# Command to start ray on worker nodes. You don't need to change this.
worker_start_ray_commands: [
]
# Tell the autoscaler the allowed node types and the resources they provide.
# The key is the name of the node type, which is just for debugging purposes.
# The node config specifies the launch config and physical instance type.
available_node_types:
worker:
min_workers: 10
max_workers: 10
resources: {"memory": 16384, "num_cpus": 1}
worker2:
min_workers: 0
max_workers: 10
node_config:
InstanceType: t1.large
resources: {"num_cpus": 100}
head.default:
min_workers: 0
max_workers: 0
# You can override the resources here. For GPU, currently only Nvidia GPU is supported. If no ESXi host can
# fulfill the requirement, the Ray node creation will fail. The number of created nodes may not meet the desired
# minimum number. The vSphere node provider will not distinguish the GPU type. It will just count the quantity:
resources: {"memory": 0, "num_cpus": 0, "num_gpus": 1}
# Specify the node type of the head node (as configured above).
head_node_type: head.default
```
## Run
```sh
ray start --head --dashboard-agent-listen-port=52365 --memory=0 --num-gpus=1 --num-cpus=0 --autoscaling-config=/tmp/dev/test.yaml
```
### Issue Severity
High: It blocks me from completing my task. | open | 2025-03-19T09:29:43Z | 2025-03-20T00:33:48Z | https://github.com/ray-project/ray/issues/51518 | [
"bug",
"P2",
"core"
] | nadongjun | 2 |
ets-labs/python-dependency-injector | asyncio | 121 | Create DelegatedCallable provider | closed | 2015-12-27T16:10:07Z | 2015-12-28T15:49:48Z | https://github.com/ets-labs/python-dependency-injector/issues/121 | [
"feature"
] | rmk135 | 0 | |
hankcs/HanLP | nlp | 1,718 | HanLP 1.x text recommendation logic error |
**Describe the bug**
In the `suggest` method of the `Suggester` class (package `com.hankcs.hanlp.suggest`), the scores from different scorers are summed incorrectly:
` scoreMap.put(entry.getKey(), score / max + entry.getValue() * scorer.boost);`
Here `max` is the best score of the current scorer, but it is mistakenly divided into the running sum of the previous scorers' scores, which makes the final recommendation inaccurate.
It should be changed to:
` scoreMap.put(entry.getKey(), score + entry.getValue() * scorer.boost / max);`
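For illustration, the difference between the two formulas can be sketched in a few lines of stand-alone Python (hypothetical scorer values, not HanLP's real data): in the buggy form the already-accumulated total is divided by the current scorer's max, while in the fixed form only the new scorer's contribution is normalized, so each round adds at most `boost`.

```python
def buggy_update(total, value, max_value, boost=1.0):
    # mirrors: scoreMap.put(key, score / max + value * boost)
    return total / max_value + value * boost

def fixed_update(total, value, max_value, boost=1.0):
    # mirrors: scoreMap.put(key, score + value * boost / max)
    return total + value * boost / max_value

# Hypothetical follow-up rounds for one candidate: (its value, the best value
# over all candidates) for two further scorers, after a perfect first round.
rounds = [(0.5, 0.5), (5.0 / 3.0, 5.0 / 3.0)]
buggy = fixed = 1.0
for value, max_value in rounds:
    buggy = buggy_update(buggy, value, max_value)
    fixed = fixed_update(fixed, value, max_value)

print(fixed)  # 3.0: every round contributes exactly its normalized share
print(buggy)  # greater than 3.0: earlier rounds get rescaled by later maxima
```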
**Code to reproduce the issue**
Test code:
```java
public class TEST {
public static void main(String[] args)
{
Suggester suggester = new Suggester();
String[] titleArray =
(
"wuqi\n" +"服务器"
).split("\\n");
for (String title : titleArray)
{
suggester.addSentence(title);
}
System.out.println(suggester.suggest("务器", 1));
}
}
```
The actual run recommends `wuqi`.
**Describe the current behavior**
Because the scorer scores are summed incorrectly, the model's scores are skewed and it tends to pick the sentence with the higher pinyin-check score.
**Expected behavior**
The expected output is shown below: the score gained in each round should lie in (0, 1), and the final suggestion should be 服务器.
```
---IdVectorScorer------
当前总得分 1.0 当前评价器给出得分 1.1457421786266213E-10 当前评价器给出得分最大值 1.1457421786266213E-10 候选句子内容 服务器 本轮增长分数 1.0
---EditDistanceScorer------
当前总得分 2.0 当前评价器给出得分 0.5 当前评价器给出得分最大值 0.5 候选句子内容 服务器 本轮增长分数 1.0
当前总得分 0.4 当前评价器给出得分 0.2 当前评价器给出得分最大值 0.5 候选句子内容 wuqi 本轮增长分数 0.4
---PinyinScorer------
当前总得分 2.7 当前评价器给出得分 1.1666666666666665 当前评价器给出得分最大值 1.6666666666666665 候选句子内容 服务器 本轮增长分数 0.7000000000000002
当前总得分 1.4 当前评价器给出得分 1.6666666666666665 当前评价器给出得分最大值 1.6666666666666665 候选句子内容 wuqi 本轮增长分数 0.9999999999999999
```
**System information**
- Windows
- jdk11.0.5
- HanLP version:1.x
**Other info / logs**
Actual run output: some rounds gain a score greater than 1, i.e. the scorer scores accumulate abnormally.
```
---IdVectorScorer------
当前总得分 1.1457421786266213E-10 当前评价器给出得分 1.1457421786266213E-10 当前评价器给出得分最大值 1.1457421786266213E-10 候选句子内容 服务器 本轮增长分数 1.1457421786266213E-10
---EditDistanceScorer------
当前总得分 0.5000000002291485 当前评价器给出得分 0.5 当前评价器给出得分最大值 0.5 候选句子内容 服务器 本轮增长分数 0.5000000001145742
当前总得分 0.2 当前评价器给出得分 0.2 当前评价器给出得分最大值 0.5 候选句子内容 wuqi 本轮增长分数 0.2
---PinyinScorer------
当前总得分 1.4666666668041557 当前评价器给出得分 1.1666666666666665 当前评价器给出得分最大值 1.6666666666666665 候选句子内容 服务器 本轮增长分数 0.9666666665750072
当前总得分 1.7866666666666666 当前评价器给出得分 1.6666666666666665 当前评价器给出得分最大值 1.6666666666666665 候选句子内容 wuqi 本轮增长分数 1.5866666666666667
```
* [x] I've completed this form and searched the web for solutions.
| closed | 2022-04-09T17:12:34Z | 2022-04-09T17:41:40Z | https://github.com/hankcs/HanLP/issues/1718 | [
"bug"
] | QAQ516284797 | 1 |
gradio-app/gradio | python | 10,320 | Option "editable" in gr.Chatbot only works for the first message in a group of consecutive messages. | ### Describe the bug
The "editable" option in gr.Chatbot only works for the first message in a group of consecutive messages.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
examples = [
{"role": "user", "content": "User message 1."},
{"role": "user", "content": "User message 2."},
{
"role": "assistant",
"content": "This is a plain text.",
},
]
with gr.Blocks() as demo:
chatbot = gr.Chatbot(
type="messages",
editable="all",
group_consecutive_messages=False,
)
demo.load(lambda: examples, None, chatbot)
demo.launch(server_name="0.0.0.0", server_port=19011)
```
### Screenshot

### Logs
_No response_
### System Info
```shell
Gradio==5.10.0
Python==3.10.14
```
### Severity
Blocking usage of gradio | open | 2025-01-09T03:44:13Z | 2025-03-22T11:57:33Z | https://github.com/gradio-app/gradio/issues/10320 | [
"bug"
] | tankgit | 1 |
seleniumbase/SeleniumBase | pytest | 2,990 | error 'Chrome' object has no attribute 'uc_gui_click_captcha' | Why do I get the error `'Chrome' object has no attribute 'uc_gui_click_captcha'` on the driver?
How can I use this method, and on which object?
Please help. | closed | 2024-08-03T20:17:04Z | 2024-08-03T22:01:56Z | https://github.com/seleniumbase/SeleniumBase/issues/2990 | [
"question",
"UC Mode / CDP Mode"
] | alish511 | 1 |
mitmproxy/mitmproxy | python | 6,822 | WireGuard mode in mitmproxy: where is the client config file? How does it work? Any documentation? | #### Problem Description
In mitmproxy's WireGuard mode (plain `mitmproxy --mode wireguard@1111`, not mitmweb), where can I find the client config file?
#### Steps to reproduce the behavior:
1. Looked in `~./wireguard`, but there is nothing there.
#### System Information
Using the latest mitmproxy.
Also, how does it work? I can't find any documentation. Do I need to start WireGuard itself too (alongside mitmproxy), or set up some networks? What do I need to do to start mitmproxy in WireGuard mode on my server, share the config with my iPhone, and see all requests (GET, POST, PUT, PATCH)? | closed | 2024-04-27T03:08:35Z | 2024-04-30T10:04:53Z | https://github.com/mitmproxy/mitmproxy/issues/6822 | [
"kind/triage"
] | Vasa211 | 1 |
strawberry-graphql/strawberry-django | graphql | 40 | Unwanted argument in model Types for O2O and ForeignKeys | For models with a ForeignKey (or an O2O), a `pk` arg appears on the generated type, which, to my knowledge, doesn't do anything.
Here is the schema used in tests:
```
type BerryFruit {
id: ID!
name: String!
nameUpper: String!
nameLower: String!
}
type Group {
id: ID!
name: String!
users: [User!]
}
type Query {
user(pk: ID): User!
users: [User!]!
group(pk: ID): Group!
groups: [Group!]!
berries: [BerryFruit!]!
}
type User {
id: ID!
name: String!
group(pk: ID): Group
}
```
The `type User` has a `group(pk: ID): Group`. `group: Group` should be enough. | closed | 2021-06-18T13:21:18Z | 2021-06-21T20:23:58Z | https://github.com/strawberry-graphql/strawberry-django/issues/40 | [] | g-as | 1 |
noirbizarre/flask-restplus | flask | 655 | Validating both a request parser and a model object in the expect decorator causes a problem | I'm trying to validate both an api.model and a request parser object in the `expect` decorator, but it seems it's not possible.
```
profile_pic_args = reqparse.RequestParser()
profile_pic_args.add_argument(
'profile_pic',
type=werkzeug.datastructures.FileStorage,
location='files',
required=False,
help='Profile Picture'
)
```
```
profile = api.model(
'Instagram Profile', {
'id': fields.Integer(required=True),
'username': fields.String(required=True),
'full_name': fields.String(required=True),
'is_private': fields.Boolean(),
})
```
```
@users.expect(profile_pic_args, profile, validate=True)
def put(self, id):
....
```
When I remove `validate=True` from the decorator it works, but I still can't access `api.payload`; it's empty.
Is there a workaround for this problem? I'm using flask-restplus version 0.12.1. | open | 2019-06-15T22:19:52Z | 2019-06-15T22:21:29Z | https://github.com/noirbizarre/flask-restplus/issues/655 | [] | alicmp | 0 |
dropbox/PyHive | sqlalchemy | 34 | Installation problem: UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 42 | Hello,
My laptop is running Ubuntu 14.04 server 6.4, and python2.7-dev has been installed. When installing PyHive, the errors below got printed:
``` python
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/sasl/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-hRzjMT-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/sasl
Traceback (most recent call last):
File "/usr/bin/pip", line 9, in <module>
load_entry_point('pip==1.5.4', 'console_scripts', 'pip')()
File "/usr/lib/python2.7/dist-packages/pip/__init__.py", line 235, in main
return command.main(cmd_args)
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 161, in main
text = '\n'.join(complete_log)
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 42: ordinal not in range(128)
```
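The final traceback comes from pip 1.5.4 joining its build log with an ASCII decode; any non-ASCII byte in the sasl build output trips it, and 0xe2 is the first byte of common UTF-8 characters such as curly quotes. A stand-alone illustration (hypothetical log text, not the real sasl output):

```python
# Hypothetical log line containing a curly apostrophe; its UTF-8 encoding
# starts with byte 0xe2, which is not valid ASCII -- the same kind of byte
# pip 1.5.4 chokes on when it ASCII-decodes the captured build log.
log_bytes = "compiler error: can’t find sasl/sasl.h".encode("utf-8")
assert 0xE2 in log_bytes

try:
    log_bytes.decode("ascii")
    error = None
except UnicodeDecodeError as exc:
    error = exc

print(error)  # 'ascii' codec can't decode byte 0xe2 in position ...
```

The practical fix at the time was usually to install the sasl build headers (e.g. `libsasl2-dev` on Ubuntu) so the underlying build succeeds, and/or to upgrade pip, whose newer versions handle non-ASCII build logs.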
| closed | 2016-01-14T09:29:22Z | 2016-01-14T18:11:31Z | https://github.com/dropbox/PyHive/issues/34 | [] | AlexLuya | 1 |
scanapi/scanapi | rest-api | 554 | curl generation doesn't take verify: false into account | ## Bug report
### Environment
- Operating System: not relevant
- Python version: not relevant
### Description of the bug
When using `verify: false`, the curl command is generated without the `--insecure` / `-k` flag.
### Expected behavior ?
The generated curl should include `--insecure / -k` when the request is not verified.
### How to reproduce the bug ?
```yaml
endpoints:
- name: scanapi-demo
path: http://demo.scanapi.dev/api/
options:
verify: false
requests:
- name: list_all_devs
path: devs/
method: get
```
then
```
scanapi run
```
Then observe the generated curl.
### Anything else we need to know?
curlify2 supports a `verify` argument on the `to_curl` method: https://github.com/marcuxyz/curlify2/blob/fc0fd402f77dea7c8c28840eeaa2886dc287f040/curlify2/curlify.py#L1
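As a hypothetical sketch of the expected behavior (illustrative names only, not scanapi's or curlify2's real API), the generated command should gain `--insecure` whenever the request was made with `verify: false`:

```python
def to_curl(method, url, verify=True):
    # Illustrative only: build a bare-bones curl command and append
    # --insecure when certificate verification is disabled.
    parts = ["curl", "-X", method.upper(), url]
    if not verify:
        parts.append("--insecure")
    return " ".join(parts)

cmd = to_curl("get", "http://demo.scanapi.dev/api/devs/", verify=False)
print(cmd)  # curl -X GET http://demo.scanapi.dev/api/devs/ --insecure
```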
| closed | 2022-11-21T15:41:45Z | 2023-03-02T16:44:11Z | https://github.com/scanapi/scanapi/issues/554 | [
"Bug"
] | Crocmagnon | 1 |
onnx/onnx | tensorflow | 6,141 | Export onnx model from pytorch with dynamic input size, output tensor doesn't match. | - python 3.10.12
- onnx 1.15.0
- onnxruntime 1.17.1
- torch 2.1.1
##### I exported a transformer model from PyTorch to ONNX. The model requires cyclic inference, with the size of the third input node expanding on each inference. Although no errors were encountered during the conversion, the first output matches the original model, while subsequent outputs fail to align.
- This is the code for my model transfer:
```python
torch.onnx.export(decoder, (src[0].to(device), src_mask[0].to(device), input_ids.to(device)),
                  "decoder_0520_ds.onnx", input_names=["input1", "input2", "input3"], output_names=["output"],
                  dynamic_axes={"input1": {2: "input_width"}, "input2": {2: "input_width"}, "input3": {1: "length"}})
```
thank you | open | 2024-05-20T07:19:40Z | 2024-05-28T18:53:03Z | https://github.com/onnx/onnx/issues/6141 | [
"question",
"topic: converters"
] | younghuvee | 9 |
noirbizarre/flask-restplus | api | 728 | Defining variable keys in the case of a nested field model | ```
"preview": {
"9:16": {
"thumbnail": "",
"video": ""
}, "16:9": {
"thumbnail": "",
"video": ""
}, "1:1": {
"thumbnail": "",
"video": ""
}
}
```
I have the above data coming in the request, for which I want to create a model. I tried implementing wildcard fields but didn't have any success. In the above case, the aspect ratio is dynamic and can be anything, but it will always be in `*:*` format. Whenever a ratio is present, `thumbnail` and `video` are in a nested model with `required=True`.
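For illustration, here is a plain-Python sketch of the constraint being described (a hypothetical helper, not flask-restplus's actual `fields.Wildcard`/`fields.Nested` validation): keys must match the `*:*` aspect-ratio shape, and each value must carry the required `thumbnail` and `video` fields.

```python
import re

RATIO = re.compile(r"^\d+:\d+$")  # aspect-ratio keys like "9:16", "1:1"

def validate_preview(preview):
    # Collect human-readable errors instead of raising, for easy inspection.
    errors = []
    for ratio, media in preview.items():
        if not RATIO.match(ratio):
            errors.append(f"bad aspect-ratio key: {ratio!r}")
            continue
        for field in ("thumbnail", "video"):
            if field not in media:
                errors.append(f"{ratio}: missing required field {field!r}")
    return errors

payload = {"9:16": {"thumbnail": "", "video": ""}, "16:9": {"thumbnail": ""}}
errs = validate_preview(payload)
print(errs)  # ["16:9: missing required field 'video'"]
```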
Any leads would be appreciated. Thanks in advance!
| closed | 2019-10-10T11:40:59Z | 2019-10-27T17:56:00Z | https://github.com/noirbizarre/flask-restplus/issues/728 | [
"enhancement",
"in progress"
] | knowBalpreet | 15 |
ranaroussi/yfinance | pandas | 1,294 | yfinance 0.2.3: Cannot load Ticker Info | Dear all,
I am facing the following situation:
yfinance module suddenly stopped providing Info today.
The history request seems to be working; however, asking for "actions" returns an empty DataFrame.
See the output below.
I found this issue from last month resembling my current situation; however, its solution didn't solve my problem:
https://github.com/ranaroussi/yfinance/issues/1246
My questions would be:
Is anybody else facing this same problem currently?
Could you please help me figure out what the problem may be and how to solve it?
**What I tried so far:**
- uninstall/reinstall the yfinance package, with a PyCharm restart (and a reboot as well, just in case)
- update yfinance: `pip install yfinance --upgrade --no-cache` (it says "Requirement already satisfied" for all the packages)
- opening a new project with dummy requests
- debugging for the root cause; however, I am quite a beginner in Python, so no progress
**Environment information:**
I am using Windows, python 3.10, yfinance 0.2.3
**The example code I tried:**
```python
asset = yf.Ticker("AAPL")
print(asset.history())
print(asset.actions)
print(asset.info["exchange"])
```
**Its output:**
```
Open High ... Dividends Stock Splits
Date ...
2022-12-13 00:00:00-05:00 149.500000 149.970001 ... 0.0 0.0
...
2023-01-13 00:00:00-05:00 132.029999 133.960007 ... 0.0 0.0
[22 rows x 7 columns]
Empty DataFrame
Columns: [Dividends, Stock Splits]
Index: []
Traceback (most recent call last):
  File "D:\...\main.py", line 613, in <module>
    print(asset.info["exchange"])
  File "D:\...\lib\site-packages\yfinance\ticker.py", line 138, in info
    return self.get_info()
  File "D...\lib\site-packages\yfinance\base.py", line 894, in get_info
    data = self._quote.info
  File "D:\...\lib\site-packages\yfinance\scrapers\quote.py", line 27, in info
    self._scrape(self.proxy)
  File "D:\...\lib\site-packages\yfinance\scrapers\quote.py", line 58, in _scrape
    quote_summary_store = json_data['QuoteSummaryStore']
TypeError: string indices must be integers
```
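The final `TypeError` is the generic error Python raises when string data is indexed with a string key, which suggests the scraper received an error/HTML string where it expected parsed JSON. A stand-alone illustration (hypothetical payload, unrelated to Yahoo's real response):

```python
json_data = "Too Many Requests"  # hypothetical: an error string, not a parsed dict

try:
    json_data["QuoteSummaryStore"]
    message = None
except TypeError as exc:
    message = str(exc)

print(message)  # string indices must be integers...
```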
Thanks for the help in advance!
| closed | 2023-01-13T17:42:34Z | 2023-01-14T09:28:13Z | https://github.com/ranaroussi/yfinance/issues/1294 | [] | Tatcher91 | 2 |
huggingface/datasets | pytorch | 6,863 | Revert temporary pin huggingface-hub < 0.23.0 | Revert temporary pin huggingface-hub < 0.23.0 introduced by
- #6861
once the following issue is fixed and released:
- huggingface/transformers#30618 | closed | 2024-05-03T05:53:55Z | 2024-05-27T10:14:41Z | https://github.com/huggingface/datasets/issues/6863 | [] | albertvillanova | 0 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 508 | GPU memory usage differs before and after merging the LoRA model | Loading the base model + LoRA,
versus loading the merged model: the latter always OOMs. What could be causing this?
The merged model's file size doesn't seem to have increased either, which looks normal, so why does inference OOM after merging? | closed | 2023-06-05T06:08:53Z | 2023-06-13T23:59:18Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/508 | [
"stale"
] | lucasjinreal | 5 |
deeppavlov/DeepPavlov | nlp | 892 | Downgrade tensorflow version AttributeError | There is no fixed tensorflow version in requirements.txt.
So, if I downgrade tensorflow to version 1.10, I'll get the error: "AttributeError: module 'tensorflow' has no attribute 'init_scope'". | closed | 2019-06-20T14:43:11Z | 2019-07-15T13:14:32Z | https://github.com/deeppavlov/DeepPavlov/issues/892 | [] | artyerokhin | 2 |
jschneier/django-storages | django | 1,007 | changing 'DEFAULT_FILE_STORAGE' causing high TTFB ( waiting time ) | Basically, it's a webpage with a few photos, referenced from MySQL and stored on an S3 server (DigitalOcean)...
So I realized that every time I reload the page, it actually downloads every photo chunk and then loads the page (maybe), and it does this only with the images uploaded to MySQL, not with my other normal static images.
I tracked it via an app that tracks data usage.
Sorry, I'm new to Django...
### **My settings**
```
...
AWS_ACCESS_KEY_ID = 'MY_KEY'
AWS_SECRET_ACCESS_KEY = 'MY_SECRET_KEY'
AWS_STORAGE_BUCKET_NAME = 'wallpapers'
AWS_S3_ENDPOINT_URL = 'https://sgp1.digitaloceanspaces.com'
AWS_S3_CUSTOM_DOMAIN = 'wallpapers.sgp1.cdn.digitaloceanspaces.com'
AWS_QUERYSTRING_AUTH = False
AWS_S3_OBJECT_PARAMETERS = {
'CacheControl': 'max-age=86400',
}
AWS_LOCATION = 'static'
AWS_DEFAULT_ACL = 'public-read'
STATIC_URL = 'https://%s/%s/' % (AWS_S3_CUSTOM_DOMAIN, AWS_LOCATION)
MEDIA_URL = 'https://%s/%s/' % (AWS_S3_CUSTOM_DOMAIN, 'media')
MEDIA_ROOT = 'media/'
STATICFILES_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
...
```
when I open 'http://127.0.0.1:8000/'
### **this causes high TTFB...**
<img width="362" alt="Screenshot 2021-04-26 at 7 16 43 PM" src="https://user-images.githubusercontent.com/62889318/116093411-42d52d80-a6c4-11eb-92d1-06684c57558f.png">
but when I comment out/ remove this in settings.py...
`#DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'`
**then everything works fine**
<img width="364" alt="Screenshot 2021-04-26 at 7 15 16 PM" src="https://user-images.githubusercontent.com/62889318/116093386-3cdf4c80-a6c4-11eb-84bf-22957a0616f5.png">
| closed | 2021-04-26T15:10:51Z | 2021-04-27T10:05:15Z | https://github.com/jschneier/django-storages/issues/1007 | [] | harshjadon9 | 1 |
plotly/dash-table | dash | 72 | index column edit/delete button unusable | We should consider removing the edit/delete buttons (remove altogether or make inactive), as it's not possible to remove this column:

| closed | 2018-09-11T13:41:14Z | 2019-09-20T00:28:46Z | https://github.com/plotly/dash-table/issues/72 | [
"dash-type-bug"
] | cldougl | 1 |
plotly/dash-bio | dash | 116 | Why are generated files in the version control? | I noticed that generated files in dash-bio/dash_bio (such as bundle.js etc.), as well as the .tar.gz, are versioned in this repository. I wonder why this is the case: usually (at least in my experience) files which can be generated are not kept under version control, particularly large files. | closed | 2019-01-22T22:09:27Z | 2019-01-25T15:19:07Z | https://github.com/plotly/dash-bio/issues/116 | [] | emmanuelle | 8 |
keras-team/keras | python | 20,524 | Accuracy is lost after save_weights/load_weights | ### Keras version: 3
### TensorFlow version
2.16.1
### Current behavior?
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 354ms/step - accuracy: 0.5000 - loss: 1.1560
[1.1560312509536743, 0.5]
Epoch 1/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 596ms/step - accuracy: 0.5000 - loss: 1.1560
Epoch 2/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 28ms/step - accuracy: 0.5000 - loss: 14.5018
Epoch 3/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 30ms/step - accuracy: 0.5000 - loss: 9.9714
Epoch 4/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 31ms/step - accuracy: 0.7500 - loss: 1.3363
Epoch 5/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 26ms/step - accuracy: 1.0000 - loss: 8.9407e-08
Epoch 6/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 29ms/step - accuracy: 1.0000 - loss: 4.7684e-07
Epoch 7/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 31ms/step - accuracy: 0.7500 - loss: 0.2545
Epoch 8/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 27ms/step - accuracy: 0.7500 - loss: 0.8729
Epoch 9/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 28ms/step - accuracy: 1.0000 - loss: 9.1682e-04
Epoch 10/10
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 25ms/step - accuracy: 1.0000 - loss: 2.6822e-07
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 27ms/step - accuracy: 1.0000 - loss: 0.0000e+00
[0.0, 1.0]
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 335ms/step - accuracy: 0.2500 - loss: 0.8475 # this should be acc 1.0 loss 0
[0.847506046295166, 0.25]
### Standalone code to reproduce the issue
```python
import tensorflow as tf
class CusModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.dense = tf.keras.layers.Dense(units=2, activation='softmax', name='output')
def call(self, x):
return self.dense(x)
dummy_data_x = tf.convert_to_tensor([[0, 0],
[1, 0],
[0, 1],
[1, 1]])
dummy_data_y = tf.convert_to_tensor([0, 1, 0, 1])
model = CusModel()
model.compile(optimizer=tf.keras.optimizers.Adam(10.0),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
print(model.evaluate(x=dummy_data_x, y=dummy_data_y))
model.fit(x=dummy_data_x, y=dummy_data_y, epochs=10)
print(model.evaluate(x=dummy_data_x, y=dummy_data_y))
model.save_weights('test_model.weights.h5')
model = CusModel()
model.load_weights('test_model.weights.h5')
model.compile(optimizer=tf.keras.optimizers.Adam(10.0),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
print(model.evaluate(x=dummy_data_x, y=dummy_data_y))
```
| closed | 2024-11-20T08:56:00Z | 2024-12-26T02:01:36Z | https://github.com/keras-team/keras/issues/20524 | [
"stat:awaiting response from contributor",
"stale"
] | Pandaaaa906 | 5 |
ymcui/Chinese-BERT-wwm | nlp | 16 | Are the weights of the TensorFlow version and the Hugging Face PyTorch version identical? | In my tests so far, the TF and PyTorch embeddings for the same text seem to differ | closed | 2019-06-28T06:08:04Z | 2019-07-10T05:20:09Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/16 | [] | iamlxb3 | 2 |
dpgaspar/Flask-AppBuilder | rest-api | 2,170 | Order of Lists in Multiview | The MultipleView is displaying views in the order in which they were coded, despite being declared as a list.
```python
def list(self):
pages = get_page_args()
page_sizes = get_page_size_args()
orders = get_order_args()
views_widgets = list()
for view in self._views:
if orders.get(view.__class__.__name__):
order_column, order_direction = orders.get(view.__class__.__name__)
else:
order_column, order_direction = "", ""
page = pages.get(view.__class__.__name__)
page_size = page_sizes.get(view.__class__.__name__)
views_widgets.append(
view._get_view_widget(
filters=view._base_filters,
order_column=order_column,
order_direction=order_direction,
page=page,
page_size=page_size,
)
)
self.update_redirect()
return self.render_template(
self.list_template, views=self._views, views_widgets=views_widgets
)
```
[it's there](https://github.com/dpgaspar/Flask-AppBuilder/blob/ba63c5c1629101dadf43e68a373402ee116f6ba9/flask_appbuilder/views.py#L779C1-L804C1)
Just change this function with
```python
def list(self):
pages = get_page_args()
page_sizes = get_page_size_args()
orders = get_order_args()
views_widgets = list()
for view in self._views:
if orders.get(view.__class__.__name__):
order_column, order_direction = orders.get(view.__class__.__name__)
else:
order_column, order_direction = "", ""
page = pages.get(view.__class__.__name__)
page_size = page_sizes.get(view.__class__.__name__)
views_widgets.append(
view._get_view_widget(
filters=view._base_filters,
order_column=order_column,
order_direction=order_direction,
page=page,
page_size=page_size,
)
)
# reorder views_widgets (instance) in the same order as self.views (class)
views_order = [v.__name__ for v in self.views]
views_widgets = sorted(views_widgets, key=lambda x: views_order.index(x.template_args['modelview_name']))
# reorder self._views (instance) in the same order as self.views (class)
_views = sorted(self._views, key=lambda x: views_order.index(x.__class__.__name__))
return self.render_template(
self.list_template, views=_views, views_widgets=views_widgets, title=self.title
)
```
It will be ordered according to the MultipleView declaration.
For example
```python
class LesReleves(MultipleView):
title = "Les Relevés"
list_template = 'releve/multiple_views.html'
views = [
Vue_releve_A_Valider_vsql,
Vue_releve_A_Confirmer_vsql,
Vue_releve_OT_ilex_vsql,
Vue_releve_avant_vente_chiffre_vsql
]
```
This will show the lists according to the `views` order in this last piece of code.
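The reordering trick itself can be sketched stand-alone (toy classes, not real Flask-AppBuilder views): instances get sorted back into the order in which their classes were declared.

```python
class A: ...
class B: ...
class C: ...

declared = [C, A, B]         # the order written in the MultipleView subclass
instances = [A(), B(), C()]  # the order in which the instances happen to exist

order = [cls.__name__ for cls in declared]
reordered = sorted(instances, key=lambda obj: order.index(obj.__class__.__name__))

print([obj.__class__.__name__ for obj in reordered])  # ['C', 'A', 'B']
```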
@dpgaspar could you confirm passing _views instead of self._views is ok? Many THANKS. | closed | 2023-11-24T09:39:51Z | 2023-12-20T08:50:54Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/2170 | [] | gbrault | 3 |
ultralytics/ultralytics | machine-learning | 19,745 | TorchScript export does not support dynamic batch and NMS | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
Code:
```
def export(model_path='yolo11x-pose.pt', imagz=832, batch=8):
model = YOLO(model_path)
# dynamic_axes = {
# 'images': {0: 'batch', 2: 'height', 3: 'width'}, # 让 batch, height, width 为动态维度
# 'output0': {0: 'batch', 1: 300, 2: 6} # 假设你需要设置输出维度
# }
onnx_model_path = model.export(imgsz=imagz, format="torchscript",
**{'batch': batch, 'half': False, 'simplify': True,
'nms': True, })
print(f"✅ 模型已导出为 torchscript 格式: {onnx_model_path}")
# # 转换为 FP16 格式
# output_path = onnx_model_path.replace('.onnx', '_fp16.onnx')
# utils.convert_to_fp16(onnx_model_path, output_path)
return onnx_model_path
imagz = 640
model_path = 'det_31c.pt'
batch = 8
script_path = export(model_path, imagz=imagz, batch=batch)
```
If I use `nms=False`, I also get the same error.
The error:
```
Traceback of TorchScript, original code (most recent call last):
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(1576): forward
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/nn/modules/module.py(1729): _slow_forward
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/nn/modules/module.py(1750): _call_impl
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/nn/modules/module.py(1739): _wrapped_call_impl
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/jit/_trace.py(1276): trace_module
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/jit/_trace.py(696): _trace_impl
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/jit/_trace.py(1000): trace
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(492): export_torchscript
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(175): outer_func
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(407): __call__
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/model.py(741): export
/tmp/wyx/yolo_onnx_demo/yolo_export_script.py(32): export
/tmp/wyx/yolo_onnx_demo/yolo_export_script.py(48): <module>
RuntimeError: select(): index 5 out of range for tensor of size [5, 8400, 4] at dimension 0
Thread [77] had error: PyTorch execute failure: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/__torch__/ultralytics/engine/exporter.py", line 277, in forward
_169 = torch.pad(dets3, _168)
_170 = torch.copy_(torch.select(out, 0, 4), _169)
box4 = torch.select(boxes, 0, 5)
~~~~~~~~~~~~ <--- HERE
cls9 = torch.select(classes, 0, 5)
score9 = torch.select(scores0, 0, 5)
Traceback of TorchScript, original code (most recent call last):
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(1576): forward
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/nn/modules/module.py(1729): _slow_forward
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/nn/modules/module.py(1750): _call_impl
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/nn/modules/module.py(1739): _wrapped_call_impl
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/jit/_trace.py(1276): trace_module
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/jit/_trace.py(696): _trace_impl
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/jit/_trace.py(1000): trace
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(492): export_torchscript
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(175): outer_func
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(407): __call__
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/model.py(741): export
/tmp/wyx/yolo_onnx_demo/yolo_export_script.py(32): export
/tmp/wyx/yolo_onnx_demo/yolo_export_script.py(48): <module>
RuntimeError: select(): index 5 out of range for tensor of size [5, 8400, 4] at dimension 0
Thread [91] had error: PyTorch execute failure: The following operation failed in the TorchScript interpreter.
```
I want to modify `exporter.py` to solve this.
If I use `torch.jit.script` to export (patching files such as `task.py`, `head.py`, `module.py`), I get errors that I don't know how to fix.
And if I apply it after `torch.jit.trace`, as below, I get the same error when using a different batch size for inference:
```
@try_export
def export_torchscript(self, prefix=colorstr("TorchScript:")):
"""YOLO TorchScript model export."""
LOGGER.info(f"\n{prefix} starting export with torch {torch.__version__}...")
f = self.file.with_suffix(".torchscript")
ts = torch.jit.trace(NMSModel(self.model, self.args) if self.args.nms else self.model, self.im, strict=False)
ts = torch.jit.script(ts)
extra_files = {"config.txt": json.dumps(self.metadata)} # torch._C.ExtraFilesMap()
if self.args.optimize: # https://pytorch.org/tutorials/recipes/mobile_interpreter.html
LOGGER.info(f"{prefix} optimizing for mobile...")
from torch.utils.mobile_optimizer import optimize_for_mobile
optimize_for_mobile(ts)._save_for_lite_interpreter(str(f), _extra_files=extra_files)
else:
ts.save(str(f), _extra_files=extra_files)
return f, None
```
I don't know how to solve my problem
### Environment
Ultralytics 8.3.76 🚀 Python-3.11.11 torch-2.6.0+cu124 CPU (Intel Xeon Gold 6462C)
Model summary (fused): 112 layers, 43,630,509 parameters, 0 gradients, 164.9 GFLOPs
PyTorch: starting from 'det_31c.pt' with input shape (8, 3, 640, 640) BCHW and output shape(s) (8, 35, 8400) (83.6 MB)
### Minimal Reproducible Example
```
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-03-17T10:03:57Z | 2025-03-24T00:05:26Z | https://github.com/ultralytics/ultralytics/issues/19745 | [
"exports"
] | 631068264 | 15 |
nok/sklearn-porter | scikit-learn | 6 | How to read the multi-layer perceptrons model in Java written using python | I am using the wrapper of scikit-learn Multilayer Perceptron in Python [https://github.com/aigamedev/scikit-neuralnetwork](url) to train the neural network and save it to a file. Now, I want to expose it on production to predict in real time. So, I was thinking to use Java for better concurrency than Python. Hence, my question is whether can we read the model using this library written using Python or above wrapper? The code below I am using for training the model and last three lines I want to port to Java to expose it on production
```
import pickle
import numpy as np
import pandas as pd
from sknn.mlp import Classifier, Layer
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
f = open("TrainLSDataset.csv")
data = np.loadtxt(f,delimiter = ',')
x = data[:, 1:]
y = data[:, 0]
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3)
nn = Classifier(
layers=[
Layer("Rectifier", units=5),
Layer("Softmax")],
learning_rate=0.001,
n_iter=100)
nn.fit(X_train, y_train)
filename = 'finalized_model.txt'
pickle.dump(nn, open(filename, 'wb'))
**Below code i want to write in Java/GoLang for exposing it on Production** :
loaded_model = pickle.load(open(filename, 'rb'))
result = loaded_model.score(X_test, y_test)
y_pred = loaded_model.predict(X_test)
``` | closed | 2017-01-25T08:54:39Z | 2017-01-30T18:49:44Z | https://github.com/nok/sklearn-porter/issues/6 | [
"enhancement",
"question"
] | palaiya | 4 |
TencentARC/GFPGAN | deep-learning | 237 | Can I use your latent code for real-world image face editing? | Thanks for sharing your amazing work. It performs the best as far as I know. BTW, can I learn a direction like "smile" with your latent code and do face editing with your model's stylegan_decoder? Looking forward to your valuable advice. | open | 2022-08-12T09:16:58Z | 2022-08-15T05:43:17Z | https://github.com/TencentARC/GFPGAN/issues/237 | [] | tengshaofeng | 0 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,535 | ### Features: | ### Features:
* A lot of performance improvements (see below in Performance section)
* Stable Diffusion 3 support ([#16030](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16030), [#16164](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16164), [#16212](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16212))
* Recommended Euler sampler; DDIM and other timestamp samplers currently not supported
* T5 text model is disabled by default, enable it in settings
* New schedulers:
* Align Your Steps ([#15751](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15751))
* KL Optimal ([#15608](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15608))
* Normal ([#16149](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16149))
* DDIM ([#16149](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16149))
* Simple ([#16142](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16142))
* Beta ([#16235](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16235))
* New sampler: DDIM CFG++ ([#16035](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16035))
### Minor:
* Option to skip CFG on early steps ([#15607](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15607))
* Add --models-dir option ([#15742](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15742))
* Allow mobile users to open context menu by using two fingers press ([#15682](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15682))
* Infotext: add Lora name as TI hashes for bundled Textual Inversion ([#15679](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15679))
* Check model's hash after downloading it to prevent corrupted downloads ([#15602](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15602))
* More extension tag filtering options ([#15627](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15627))
* When saving AVIF, use JPEG's quality setting ([#15610](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15610))
* Add filename pattern: `[basename]` ([#15978](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15978))
* Add option to enable clip skip for clip L on SDXL ([#15992](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15992))
* Option to prevent screen sleep during generation ([#16001](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16001))
* ToggleLivePriview button in image viewer ([#16065](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16065))
* Remove ui flashing on reloading and fast scrolling ([#16153](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16153))
* option to disable save button log.csv ([#16242](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16242))
### Extensions and API:
* Add process_before_every_sampling hook ([#15984](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15984))
* Return HTTP 400 instead of 404 on invalid sampler error ([#16140](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16140))
### Performance:
* [Performance 1/6] use_checkpoint = False ([#15803](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15803))
* [Performance 2/6] Replace einops.rearrange with torch native ops ([#15804](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15804))
* [Performance 4/6] Precompute is_sdxl_inpaint flag ([#15806](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15806))
* [Performance 5/6] Prevent unnecessary extra networks bias backup ([#15816](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15816))
* [Performance 6/6] Add --precision half option to avoid casting during inference ([#15820](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15820))
* [Performance] LDM optimization patches ([#15824](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15824))
* [Performance] Keep sigmas on CPU ([#15823](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15823))
* Check for nans in unet only once, after all steps have been completed
* Added option to run torch profiler for image generation
### Bug Fixes:
* Fix for grids without comprehensive infotexts ([#15958](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15958))
* feat: lora partial update precede full update ([#15943](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15943))
* Fix bug where file extension had an extra '.' under some circumstances ([#15893](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15893))
* Fix corrupt model initial load loop ([#15600](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15600))
* Allow old sampler names in API ([#15656](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15656))
* more old sampler scheduler compatibility ([#15681](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15681))
* Fix Hypertile xyz ([#15831](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15831))
* XYZ CSV skipinitialspace ([#15832](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15832))
* fix soft inpainting on mps and xpu, torch_utils.float64 ([#15815](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15815))
* fix extension update when not on main branch ([#15797](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15797))
* update pickle safe filenames
* use relative path for webui-assets css ([#15757](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15757))
* When creating a virtual environment, upgrade pip in webui.bat/webui.sh ([#15750](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15750))
* Fix AttributeError ([#15738](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15738))
* use script_path for webui root in launch_utils ([#15705](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15705))
* fix extra batch mode P Transparency ([#15664](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15664))
* use gradio theme colors in css ([#15680](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15680))
* Fix dragging text within prompt input ([#15657](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15657))
* Add correct mimetype for .mjs files ([#15654](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15654))
* QOL Items - handle metadata issues more cleanly for SD models, Loras and embeddings ([#15632](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15632))
* replace wsl-open with wslpath and explorer.exe ([#15968](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15968))
* Fix SDXL Inpaint ([#15976](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15976))
* multi size grid ([#15988](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15988))
* fix Replace preview ([#16118](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16118))
* Possible fix of wrong scale in weight decomposition ([#16151](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16151))
* Ensure use of python from venv on Mac and Linux ([#16116](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16116))
* Prioritize python3.10 over python3 if both are available on Linux and Mac (with fallback) ([#16092](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16092))
* stoping generation extras ([#16085](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16085))
* Fix SD2 loading ([#16078](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16078), [#16079](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16079))
* fix infotext Lora hashes for hires fix different lora ([#16062](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16062))
* Fix sampler scheduler autocorrection warning ([#16054](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16054))
* fix ui flashing on reloading and fast scrolling ([#16153](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16153))
* fix upscale logic ([#16239](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16239))
* [bug] do not break progressbar on non-job actions (add wrap_gradio_call_no_job) ([#16202](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16202))
* fix OSError: cannot write mode P as JPEG ([#16194](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16194))
### Other:
* fix changelog #15883 -> #15882 ([#15907](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15907))
* ReloadUI backgroundColor --background-fill-primary ([#15864](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15864))
* Use different torch versions for Intel and ARM Macs ([#15851](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15851))
* XYZ override rework ([#15836](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15836))
* scroll extensions table on overflow ([#15830](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15830))
* img2img batch upload method ([#15817](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15817))
* chore: sync v1.8.0 packages according to changelog ([#15783](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15783))
* Add AVIF MIME type support to mimetype definitions ([#15739](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15739))
* Update imageviewer.js ([#15730](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15730))
* no-referrer ([#15641](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15641))
* .gitignore trace.json ([#15980](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/15980))
* Bump spandrel to 0.3.4 ([#16144](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16144))
* Defunct --max-batch-count ([#16119](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16119))
* docs: update bug_report.yml ([#16102](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16102))
* Maintaining Project Compatibility for Python 3.9 Users Without Upgrade Requirements. ([#16088](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16088), [#16169](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16169), [#16192](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16192))
* Update torch for ARM Macs to 2.3.1 ([#16059](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16059))
* remove deprecated setting dont_fix_second_order_samplers_schedule ([#16061](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16061))
* chore: fix typos ([#16060](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16060))
* shlex.join launch args in console log ([#16170](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16170))
* activate venv .bat ([#16231](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16231))
* add ids to the resize tabs in img2img ([#16218](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16218))
* update installation guide linux ([#16178](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16178))
* Robust sysinfo ([#16173](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16173))
* do not send image size on paste inpaint ([#16180](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16180))
* Fix noisy DS_Store files for MacOS ([#16166](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/16166))
<hr /><em>This discussion was created from the release <a href='https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.10.0'>1.10.0</a>.</em>
_Originally posted by @AUTOMATIC1111 in https://github.com/AUTOMATIC1111/stable-diffusion-webui/discussions/16271_ | closed | 2024-10-05T22:57:35Z | 2024-10-06T01:06:05Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16535 | [] | lawchingman | 0 |
asacristani/fastapi-rocket-boilerplate | pydantic | 17 | Monitoring: add logging system | Include logs using different levels for the following events:
- db interactions
- celery task interactions
- endpoint interactions | open | 2023-10-11T10:56:06Z | 2023-10-11T11:07:03Z | https://github.com/asacristani/fastapi-rocket-boilerplate/issues/17 | [
"enhancement"
] | asacristani | 0 |
PaddlePaddle/ERNIE | nlp | 217 | kpi.py where it is? not in models | thanks | closed | 2019-07-18T12:12:27Z | 2020-05-28T12:53:03Z | https://github.com/PaddlePaddle/ERNIE/issues/217 | [
"wontfix"
] | SupDataKing | 2 |
kennethreitz/responder | graphql | 227 | docs_route at base of other routes results in 404 | I like to have the docs render at `/api`, with other endpoints being at `/api/other_route/...`. This used to work with earlier versions of responder, but recently gives a 404 at `/api`.
Minimal example:
```python
import responder
from marshmallow import Schema, fields
api = responder.API(title="Web Service", version="1.0", openapi="3.0.0", docs_route='/api')
@api.schema("Pet")
class PetSchema(Schema):
    name = fields.Str()


@api.route("/api/pet")
def route(req, resp):
    """A cute furry animal endpoint.
    ---
    get:
      description: Get a random pet
      responses:
        200:
          description: A pet to be returned
          schema:
            $ref: "#/components/schemas/Pet"
    """
    resp.media = PetSchema().dump({"name": "little orange"})


api.run(port=5000, address='0.0.0.0')
```
Calling `127.0.0.1:5000/api` results in a 404 `Not found.`
Is this intentional or a bug? | closed | 2018-11-15T10:15:15Z | 2018-11-15T11:23:52Z | https://github.com/kennethreitz/responder/issues/227 | [] | mmanhertz | 1 |
scikit-optimize/scikit-optimize | scikit-learn | 779 | Issues trying to use BayesianCV on an xgb classifier estimator | I seem to have stumbled into some bug trying to do a Bayesian CV with an XGB Classifier estimator. My skopt version is `0.5.2` and my xgb version is `0.90`. Here's the code and the error output;
```py
from skopt import BayesSearchCV
# include below until https://github.com/scikit-optimize/scikit-optimize/issues/718 is resolved
class BayesSearchCV(BayesSearchCV):
    def _run_search(self, x): raise BaseException('Use newer skopt')

xgb_opt_all = XGBClassifier()

param_xgb = {
    'base_score': (0.01, 0.99, 'uniform'),
    'learning_rate': (0.01, 1, 'uniform'),
}

xgb_grid = BayesSearchCV(
    estimator=xgb_opt_all, search_spaces=param_xgb,
    scoring='accuracy', n_jobs=-1, cv=10)
```
The error output is like this:

```py
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-53-b1c76ec62c4e> in <module>
14 xgb_grid = BayesSearchCV(
15 estimator=xgb_opt_all, search_spaces=param_xgb,
---> 16 scoring='accuracy', n_jobs=-1, cv=10)
~\AppData\Local\Continuum\anaconda3\envs\classifier-v2\lib\site-packages\skopt\searchcv.py in __init__(self, estimator, search_spaces, optimizer_kwargs, n_iter, scoring, fit_params, n_jobs, n_points, iid, refit, cv, verbose, pre_dispatch, random_state, error_score, return_train_score)
295 n_jobs=n_jobs, iid=iid, refit=refit, cv=cv, verbose=verbose,
296 pre_dispatch=pre_dispatch, error_score=error_score,
--> 297 return_train_score=return_train_score)
298
299 def _check_search_space(self, search_space):
TypeError: __init__() got an unexpected keyword argument 'fit_params'
```
Am I doing something wrong or is this an upstream bug? | closed | 2019-07-04T08:19:16Z | 2020-01-20T20:00:17Z | https://github.com/scikit-optimize/scikit-optimize/issues/779 | [] | PyDataBlog | 8 |
piskvorky/gensim | machine-learning | 3,411 | Python11 can not install gensim, if it is possible, I wish Python11 can have the right version for gensim too |
#### Problem description
What are you trying to achieve? What is the expected result? What are you seeing instead?
#### Steps/code/corpus to reproduce
Include full tracebacks, logs and datasets if necessary. Please keep the examples minimal ("minimal reproducible example").
If your problem is with a specific Gensim model (word2vec, lsimodel, doc2vec, fasttext, ldamodel etc), include the following:
```python
print(my_model.lifecycle_events)
```
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import struct; print("Bits", 8 * struct.calcsize("P"))
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
| closed | 2022-12-09T08:38:34Z | 2022-12-09T09:49:36Z | https://github.com/piskvorky/gensim/issues/3411 | [] | Victorrrrr86 | 2 |
sktime/pytorch-forecasting | pandas | 1,718 | Improving docs, examples, and tutorials of `pytorch-forecasting` | This list is a quick bucket list of items that can be potentially improved from what I've seen in the docs.
As requested by @yarnabrina, we should create simple examples, reducing complexity of the library as much as possible, to encourage users who are not too familiar with `pytorch-forecasting` or time series machine learning in general to try out the library. I think for example, the 'Example' inside https://pytorch-forecasting.readthedocs.io/en/stable/getting-started.html# is way too complicated, and will deter users from using the library because it is too complex to understand and read through as a first time user.
Thus for now, I'm proposing we
- refactor the main example inside the getting-started page
- improve upon the tutorials [page](https://pytorch-forecasting.readthedocs.io/en/stable/tutorials.html) that is currently setup in the library.
I think these two items can improved in the following way:
- Restructure the tutorials page into sections, with increasing 'complexity' of tutorials. That way, new users unexperience with the library will instantly be guided to the intro notebooks, while experience users familiar with the model will be directed to more complex notebooks, specific to their needs.
1. This first section will be the 'introduction' section, which will contain one extremely basic introductory tutorial with minimal code, featuring all the different modules (`TimeSeriesDataset`, different `Model`s, `Trainers`, etc), but with little to no explanations. This basic tutorial is designed to essentially be a minimal walkthrough on how to use the library. The preceding tutorials will explain more in depth about each module. For example, we will have one tutorial explaining `TimeSeriesDataset` and so on.
2. The second section will be "Modelling tutorials", where most of the autoregressive models or forecasting model tutorials will be located here. For example, [this](https://pytorch-forecasting.readthedocs.io/en/stable/tutorials/deepar.html) and [this](https://pytorch-forecasting.readthedocs.io/en/stable/tutorials/nhits.html).
3. The last section will be used for any other miscellaneous and additional tutorials, for example, [this](https://pytorch-forecasting.readthedocs.io/en/stable/tutorials/building.html)
- For the tutorial in the getting-started page, we can re-use the very first intro notebook that we create to provide an extremely simple tutorial to explain to the user how it works.
I am also proposing that we split the directory that is used to load the [data](https://github.com/sktime/pytorch-forecasting/tree/main/examples/data) into a new directory inside the root folder named `datasets`. This folder will be the primarily method to load datasets for tutorials or for modelling purposes. To reduce duplication, we can just import loading methods from the sktime library.
This will allow us to keep the examples [directory](https://github.com/sktime/pytorch-forecasting/tree/main/examples) standalone, and the files inside will be used to code up simple functions and variables that can be used inside tutorials.
| open | 2024-11-20T02:35:04Z | 2024-12-17T18:43:12Z | https://github.com/sktime/pytorch-forecasting/issues/1718 | [] | julian-fong | 6 |
numpy/numpy | numpy | 27,918 | BUG: eigh can't work when N = 45000 | ### Describe the issue:
When N is small, the error is very low; when testing with N = 45000, the program ends very quickly and gives a wrong answer.
### Reproduce the code example:
```python
import numpy as np

def error_analysis(B, B_eigenvalue, B_eigenvector, N):
    """
    Perform error analysis for eigenvalue decomposition.

    Parameters:
        B: numpy.ndarray
            Incremental matrix B.
        B_eigenvalue: numpy.ndarray
            Diagonal matrix of eigenvalues of B.
        B_eigenvector: numpy.ndarray
            Matrix of eigenvectors of B.
        N: int
            Size of matrix B.

    Returns:
        R: float
            Residual.
        O: float
            Orthogonality.
    """
    # Residual calculation
    residual = np.linalg.norm(B - B_eigenvector @ B_eigenvalue @ B_eigenvector.T) / (np.linalg.norm(B) * (N + 1) * np.finfo(float).eps)

    # Orthogonality check
    ERRORve = np.linalg.norm(B_eigenvector @ B_eigenvector.T - np.eye(N + 1))
    orthogonality = ERRORve / ((N + 1) * np.finfo(float).eps)

    return residual, orthogonality


N = 45000
A = np.random.rand(N, N)
A = (A + A.T) / 2  # Ensure A is symmetric

Alpha = np.random.randn(N, 1)
w = np.random.randn(1, 1)[0]

print('--- Incremental matrix B ---')
B = np.block([[A, Alpha], [Alpha.T, w]])
EB_, QB_ = np.linalg.eigh(B)

R, O = error_analysis(B, np.diag(EB_), QB_, N)
print(f'Residual: {R:.4e}')
print(f'Orthogonality: {O:.4e}')
```
### Error message:
```shell
--- ERROR ANALYSIS ---
Residual: 1.0008e+11
Orthogonality: 2.1230e+13
```
### Python and NumPy Versions:
>>> print(numpy.__version__); print(sys.version)
1.21.4
3.9.12 (main, Aug 29 2022, 00:54:58)
[GCC 11.2.0]
### Runtime Environment:
_No response_
### Context for the issue:
_No response_ | open | 2024-12-06T14:01:02Z | 2024-12-06T21:51:30Z | https://github.com/numpy/numpy/issues/27918 | [
"00 - Bug"
] | werwrewe | 1 |
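Not asserting this is the root cause, but one thing worth ruling out before treating it as an `eigh` bug: at this size the problem is enormous, and a fast exit with garbage residuals is a classic symptom of an allocation or workspace failure in an older NumPy/LAPACK stack (the reporter is on NumPy 1.21). The sketch below only does the arithmetic — it allocates nothing:

```python
N = 45001  # the bordered matrix B is (N+1) x (N+1) for N = 45000
bytes_per_float64 = 8

matrix_bytes = N * N * bytes_per_float64
print(f"one {N}x{N} float64 matrix: {matrix_bytes / 1e9:.1f} GB")

# eigh with eigenvectors needs at least the input copy plus the
# eigenvector matrix, so a rough lower bound on peak memory:
rough_peak = 2 * matrix_bytes
print(f"rough lower bound on peak: {rough_peak / 1e9:.1f} GB")

# element count vs. 32-bit LAPACK integers: close to, but under, INT32_MAX,
# so intermediate workspace sizes can still overflow on 32-bit-int builds
print(f"{N * N} elements vs INT32_MAX = {2**31 - 1}")
```

So the script needs on the order of 30+ GB of RAM before any LAPACK workspace is counted; checking available memory (or retrying on a current NumPy built with 64-bit LAPACK integers) would separate an environment problem from a genuine bug.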
LAION-AI/Open-Assistant | machine-learning | 3,060 | Create a direct messaging system for the Dataset Collection dashboard | ### Problem Statement:
Currently, we are experiencing an influx of new users who are eager to contribute to our dataset collection. While these users have good intentions, we often receive messages that do not meet our quality standards. It is crucial to maintain the integrity of our dataset and prevent the inclusion of low-quality responses.
### Proposed Solution:
To address this issue, we need to implement a Direct Messaging (DM) system within the Dataset Collection dashboard. This DM system will allow moderators to communicate directly with users who have submitted responses that do not meet our quality standards. By notifying these users of their mistakes and providing guidance, we can ensure that their future contributions align with our requirements.
### Benefits:
- **Maintaining Dataset Quality:** The DM system will enable us to address low-quality responses promptly, ensuring that only high-quality contributions are included in the dataset.
- **Positive User Experience:** By directly messaging users, we can maintain a supportive environment and foster a sense of community, guiding users towards better contributions.
- **Efficient Moderation:** The DM system will streamline the moderation process by providing a dedicated channel for communication, making it easier for moderators to do their job.
### Implementation Details:
The proposed DM system should include the following features:
- A messaging interface accessible to moderators within the Dataset Collection dashboard.
- The ability to view the user's submitted response alongside the message.
- Basic formatting options (e.g., bold, italics) for clearer communication.
- Notification alerts for new messages.
- Threaded conversations to maintain context and allow for efficient communication.
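As a rough illustration only (all names are hypothetical, not an actual Open-Assistant schema), the feature list above implies a message record roughly like the one below, with threading, a link back to the flagged submission, and a read flag to drive notification alerts:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from itertools import count
from typing import Optional

_ids = count(1)

@dataclass
class Message:
    thread_id: int                  # groups one moderator/user conversation
    sender_id: int
    body: str                       # may use basic formatting (bold, italics)
    related_response_id: Optional[int] = None  # the flagged dataset submission
    read: bool = False              # unread messages drive notification alerts
    sent_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    id: int = field(default_factory=lambda: next(_ids))

def unread_for(messages, user_id):
    """Messages that should light up a notification badge for user_id."""
    return [m for m in messages if not m.read and m.sender_id != user_id]

thread = [
    Message(thread_id=7, sender_id=1,
            body="Your reply #123 doesn't meet the quality guidelines.",
            related_response_id=123),
    Message(thread_id=7, sender_id=2, body="Thanks, I'll revise it."),
]
print(len(unread_for(thread, user_id=2)))
```

The `related_response_id` link is what lets the UI show the user's submitted response alongside the message, per the requirements above.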
### Action Plan:
1. **Backend Development:** Implement the necessary backend infrastructure to handle messaging between moderators and users.
2. **Frontend Integration:** Integrate the DM system into the existing Dataset Collection dashboard, ensuring a seamless user experience.
3. **Testing and Feedback:** Conduct thorough testing to identify any issues or areas for improvement. Gather feedback from moderators to refine the system.
4. **Deployment and Documentation:** Once the DM system is stable and reliable, deploy it to the production environment. Provide detailed documentation for moderators on how to use the system effectively.
### Dependencies:
This issue is dependent on the completion of the following tasks:
- Backend infrastructure for messaging and notifications.
- Integration with the existing Dataset Collection dashboard.
- User authentication and authorization mechanisms.
### Additional Notes:
- Consider implementing user-friendly guidelines and tips within the DM system to help educate users and improve the quality of their contributions.
Let's work together to create a direct messaging system that empowers moderators to guide users towards better-quality responses and maintain the integrity of our dataset. | open | 2023-05-06T08:44:10Z | 2023-05-11T18:34:54Z | https://github.com/LAION-AI/Open-Assistant/issues/3060 | [
"feature",
"backend",
"website"
] | Subarasheese | 2 |
d2l-ai/d2l-en | machine-learning | 2,575 | Not able to render :begin_tab:toc | Hello I have installed all the dependencies but my notebook is not able to render markers like `:begin_tab:toc....:end_tab:`. | open | 2023-12-21T23:34:10Z | 2024-12-25T11:06:15Z | https://github.com/d2l-ai/d2l-en/issues/2575 | [] | roychowdhuryrohit-dev | 4 |
noirbizarre/flask-restplus | api | 424 | Swagger specification adds host name to basePath | I'm setting up my Flask application like so:
```
from flask import Flask
from flask_restplus import Api, Namespace
app = Flask(__name__)
app.config['SERVER_NAME'] = 'localhost:5000'
api = Api(app, prefix="/v0.1")
user_api = Namespace('users', description='User-related operations')
api.add_namespace(user_api)
```
and generate a Swagger specification through:
```
import json

with app.app_context():
    print(json.dumps(api.__schema__, indent=4))
```
However, the resulting JSON output will then contain the following lines:
```
{
    "basePath": "http://localhost:5000/v0.1",
    …
    "host": "localhost:5000"
}
```
This violates the [definition of `basePath`](https://swagger.io/docs/specification/2-0/api-host-and-base-path/) which says:
> `basePath` is the URL prefix for all API paths, **relative** to the host root.
(emphasis mine) In particular, a code generator like swagger-codegen for TypeScript/Angular will then infer that the base URL for calls to the API is:
```
protected basePath = 'https://localhost:5000http://localhost:5000/v0.1';
```
which will obviously lead to errors.
As a temporary workaround, I now set the `basePath` manually when generating the Swagger specification:
```
import json

with app.app_context():
    schema = api.__schema__
    schema['basePath'] = '/v0.1'
    print(json.dumps(schema, indent=4))
```
| open | 2018-04-29T19:27:47Z | 2018-04-29T19:33:32Z | https://github.com/noirbizarre/flask-restplus/issues/424 | [] | codethief | 0 |
HIT-SCIR/ltp | nlp | 562 | When loading an LTP model: Can't get attribute '_gpus_arg_default' on <module 'pytorch_lightning.utilities.argparse' | I have tried all of the 4.* versions, and every one of them has this problem:
Can't get attribute '_gpus_arg_default' on <module 'pytorch_lightning.utilities.argparse' from '/data/user/conda/envs/pretrain/lib/python3.7/site-packages/pytorch_lightning/utilities/argparse.py'> | closed | 2022-04-06T07:34:06Z | 2022-09-11T10:14:24Z | https://github.com/HIT-SCIR/ltp/issues/562 | [] | Reuben-Yang | 1 |
litestar-org/litestar | api | 3,414 | Enhancement(Packaging): Add/Maintain Debian package | ### Summary
Add `python3-litestar` to the Debian repos by working with/joining the DBT and publishing the Litestar package for Debian and Debian derivatives
### Basic Example
_No response_
### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_ | open | 2024-04-22T14:35:44Z | 2025-03-20T15:54:37Z | https://github.com/litestar-org/litestar/issues/3414 | [
"Enhancement",
"Upstream",
"Infrastructure",
"Package"
] | JacobCoffee | 1 |
flasgger/flasgger | rest-api | 47 | Authorization Header + UI input view | Anyone can help me with how to implement custom headers e.g. "Authorization header" in requests + adding UI element for it with flasgger?
I need to use JWT with some of the endpoints.
I was able to achieve this by modifying flasgger source code, but that shouldn't be the way! | open | 2017-01-01T15:56:58Z | 2023-10-08T06:27:20Z | https://github.com/flasgger/flasgger/issues/47 | [
"enhancement",
"help wanted",
"hacktoberfest"
] | saeid | 4 |
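For the record, the usual way to do this without touching flasgger's source is a Swagger 2.0 `securityDefinitions` entry in the app-wide template — flasgger's `Swagger(app, template=...)` accepts one, and Swagger UI then renders an Authorize input for the header. A sketch (title and description strings are placeholders):

```python
# Swagger 2.0 security definition for a JWT sent as
# "Authorization: Bearer <token>".
swagger_template = {
    "swagger": "2.0",
    "info": {"title": "My API", "version": "1.0"},  # placeholder metadata
    "securityDefinitions": {
        "Bearer": {
            "type": "apiKey",
            "name": "Authorization",
            "in": "header",
            "description": 'Type "Bearer <your JWT>" into the box.',
        }
    },
    # apply globally; individual views can instead declare a "security"
    # key inside their own docstring spec
    "security": [{"Bearer": []}],
}

# With a Flask app this would be wired up as:
#   from flasgger import Swagger
#   Swagger(app, template=swagger_template)
```

Validating the JWT itself still happens in your view code (or a decorator); the template only documents the header and adds the UI input.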
jmcnamara/XlsxWriter | pandas | 757 | Issue with inserting an image into the cell (not on top of it) | Hi,
I am using XlsxWriter to generate a file where each row represents an artwork and the first column always displays a thumbnail of the work. I am using the `insert_image` method to insert the thumbnail, but it appears floating on top of the cell instead of being contained within it.
I am using Python version 3.7.7 and XlsxWriter 1.3.7 and Excel for Mac version 16.38.
Here is some code that demonstrates the problem:
```python
import io
from urllib.request import urlopen

import xlsxwriter

xlsx_buffer = io.BytesIO()
workbook = xlsxwriter.Workbook(xlsx_buffer)
worksheet = workbook.add_worksheet('Works')

# Resize cells
worksheet.set_column('A:G', 14)
worksheet.set_default_row(80)

image_url = 'https://placekitten.com/300/200'
image_data = io.BytesIO(urlopen(image_url).read())

image_params = {
    'image_data': image_data,
    'x_scale': 0.5,
    'y_scale': 0.5,
    'x_offset': 10,
    'y_offset': 10,
    'positioning': 1,
}

worksheet.insert_image('A2', image_url, image_params)
worksheet.insert_image('A3', image_url, image_params)

workbook.close()
The code results in the following:

When what we need is this:

I tried different `positioning` options but that didn't do the trick. Please provide the guidance on how to fit and lock the image within the cell. | closed | 2020-10-29T19:07:09Z | 2022-06-25T16:19:05Z | https://github.com/jmcnamara/XlsxWriter/issues/757 | [
"question",
"awaiting user feedback"
] | striveforbest | 6 |
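Context that may help here (hedged — based on how the classic `.xlsx` format works, not on anything XlsxWriter-specific): pictures in pre-2023 Excel live on a drawing layer anchored *to* cells, never *inside* them, so no `positioning` value will truly embed the image. The practical workaround is to size the image to the cell. The sketch below derives a scale factor from the cell's dimensions using common rule-of-thumb unit conversions (character widths and point heights to pixels are approximations that depend on the default font):

```python
# Rule-of-thumb unit conversions for Excel's default font (approximate):
def col_width_to_pixels(width_chars):
    return int(round(width_chars * 7.0 + 5))      # character units -> px

def row_height_to_pixels(height_points):
    return int(round(height_points * 4.0 / 3.0))  # points -> px

cell_w = col_width_to_pixels(14)   # worksheet.set_column('A:G', 14)
cell_h = row_height_to_pixels(80)  # worksheet.set_default_row(80)

image_w, image_h = 300, 200        # the placekitten thumbnail

# keep the aspect ratio: use the smaller scale factor for both axes
scale = min(cell_w / image_w, cell_h / image_h)
print(cell_w, cell_h, round(scale, 3))
```

Passing `x_scale=scale, y_scale=scale` (with small offsets) keeps the thumbnail visually inside its cell, which is usually close enough to the desired result.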
graphql-python/graphene | graphql | 729 | A new way to write ObjectType with python3's annotation | _This issue is a **feature discussion**._
For now, the way to write a ObjectType is as below:
```Python
class Hero(ObjectType):
    name = Field(List(String), only_first_name=Boolean())
    age = Int()

    def resolve_name(self, info, only_first_name=False):
        if only_first_name:
            return [self._first_name]
        return [self._first_name, self._last_name]

    def resolve_age(self, info):
        return self._age
```
I have to define `name` twice (`Hero.name` and `Hero.resolve_name`) because I need to declare the types of the arguments and the return value. This makes the class harder to read and write.
Since python3, annotation feature bring a native way to describe types. By using annotation, we can rewrite this class with a clearer way:
```python
class Heroine(AnnotationObjectType):
    def name(self, info, only_first_name: Boolean() = False) -> List(String):
        if only_first_name:
            return [self._first_name]
        return [self._first_name, self._last_name]

    def age(self, info) -> Int():
        return self._age


print(Heroine.name.__annotations__)
# {'only_first_name': <graphene.types.scalars.Boolean object at 0x104cb8550>, 'return': <graphene.types.structures.List object at 0x105742f28>}
```
`AnnotationObjectType` shouldn't be difficult to write if we somehow transform it into `ObjectType`. But I think a looooot of cases should be tested before being released.
Of course, even if we write an `AnnotationObjectType` class, the `ObjectType` class will not be dropped, since Python 2.7 doesn't support annotations.
I would like to hear your comments and suggestions before doing this. | closed | 2018-05-20T15:12:32Z | 2023-07-26T07:50:13Z | https://github.com/graphql-python/graphene/issues/729 | [
"✨ enhancement"
] | ocavue | 29 |
yezyilomo/django-restql | graphql | 303 | Is it possible to exclude fields by default? | Is it possible to exclude fields by default? I have a serializer with a serialized field with a lot of data. This data will be used in some specific cases, but not always.
```python
class BookSerializer(DynamicFieldsMixin, serializers.ModelSerializer):
    ....

    class Meta:
        model = Book
        fields = ['id', 'name', '...']


class UserSerializer(DynamicFieldsMixin, serializers.ModelSerializer):
    books = BookSerializer(many=True, read_only=True)
    ....

    class Meta:
        model = User
        fields = ['id', 'username', 'email', '...', 'books']
```
Imagine a user with 500 books. In my logic I normally don't need to know information about those 500 books, but I do need to know information about the user (this is not a real example).
I could exclude it using the query `GET /user/1?query={username, -books}`, but that forces me to repeat the exclusion everywhere the endpoint is consumed.
The idea would be something like:
```python
class UserSerializer(DynamicFieldsMixin, serializers.ModelSerializer):
    books = BookSerializer(many=True, read_only=True)
    ....

    class Meta:
        model = User
        fields = ['id', 'username', 'email', '...', 'books']
        default_exclude = ['books']
```
Default:
```python
# `GET /user/1`
{
    "id": 100,
    "username": "dummy",
    "....": "...."  # without the "books" field
}
```
With books field:
```python
# `GET /user/1?query={id, username, books}`
{
    "id": 100,
    "username": "dummy",
    "books": [ ..... ]
}
```
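The proposed rule can be sketched as a plain function (`resolve_fields` is a hypothetical helper, not django-restql API): fields listed in `default_exclude` are dropped unless the query names them explicitly.

```python
def resolve_fields(declared, default_exclude, requested=None):
    """Hypothetical helper: apply Meta.default_exclude unless a query overrides it."""
    if requested is not None:
        # an explicit query wins, so `books` can still be fetched on demand
        return [f for f in declared if f in requested]
    return [f for f in declared if f not in default_exclude]

declared = ['id', 'username', 'email', 'books']

resolve_fields(declared, default_exclude=['books'])
# ['id', 'username', 'email']

resolve_fields(declared, default_exclude=['books'], requested={'id', 'username', 'books'})
# ['id', 'username', 'books']
```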
Thank you for everything!
| open | 2022-03-18T21:10:31Z | 2023-12-06T17:01:35Z | https://github.com/yezyilomo/django-restql/issues/303 | [] | Alejandroid17 | 2 |
collerek/ormar | sqlalchemy | 358 | Allow ignoring extra fields in ormar Model | Hi!
I've been trying to set up a demo app to get familiar with ormar - everything has been going very well so far, so thanks a lot for creating this 👏
I just ran into something confusing that I'm not sure is a bug or intended behaviour:
### Issue
When sending in an empty field name (`"": None`) in a request body, ormar (from [here](https://github.com/collerek/ormar/blob/master/ormar/models/newbasemodel.py#L268)) raises a `ModelError`, which results in a 500 being returned from my endpoint.
### Context
This happens in my update endpoint (but I think it would happen anywhere), which looks like this:
```python
@author_router.put('/{item_id}/', status_code=200, response_model=Author, responses=responses['not_found'])
async def update(item_id: int, updated_author: Author) -> Author:
    try:
        author = await Author.objects.get(id=item_id)
    except NoMatch:
        raise HTTPException(status_code=404, detail="Item not found")

    await author.update(**updated_author.dict(exclude_unset=True))
    return author
```
The payload that triggers the error looks like this:
```json
{"name": "", "short_name": "", "": null}
```
And my model looks like this:
```python
class Author(ormar.Model):
    class Meta(BaseMeta):
        tablename = 'authors'

    id: int = ormar.Integer(primary_key=True, autoincrement=True)
    name: str = ormar.String(max_length=100, unique=True)
    short_name: str = ormar.String(max_length=100)
### Expected behavior
I would expect Ormar to ignore this empty field value since it doesn't correspond to anything in my model. Alternatively, I would like the `ModelError` to result in a 422 rather than a 500, since a 500 means I have to add handling logic to every endpoint.
If I substitute the Ormar model with a Pydantic model, Pydantic seems to just ignore the field. This works:
```python
AuthorUpdateModel = Author.get_pydantic(exclude={'id', 'books'})
@author_router.put('/{item_id}/', status_code=200, response_model=Author, responses=responses['not_found'])
async def update(item_id: int, updated_author: AuthorUpdateModel) -> Author:
try:
author = await Author.objects.get(id=item_id)
except NoMatch:
raise HTTPException(status_code=404, detail="Item not found")
await author.update(**updated_author.dict(exclude_unset=True))
return author
```
-----------------------
Is this a bug, or am I missing something important in terms of how to structure my APIs? 🙂
_Originally posted by @sondrelg in https://github.com/collerek/ormar/discussions/353_ | closed | 2021-09-26T10:44:57Z | 2021-09-26T12:28:02Z | https://github.com/collerek/ormar/issues/358 | [
"enhancement"
] | collerek | 1 |
Anjok07/ultimatevocalremovergui | pytorch | 615 | An error occurred with VR Architecture | Sometimes it occurs, sometimes it doesn’t.
Last Error Received:
Process: VR Architecture
If this error persists, please contact the developers with the error details.
Raw Error Details:
ParameterError: "Audio buffer is not finite everywhere"
Traceback Error: "
File "UVR.py", line 4716, in process_start
File "separate.py", line 696, in seperate
File "separate.py", line 826, in spec_to_wav
File "lib_v5\spec_utils.py", line 322, in cmb_spectrogram_to_wave
File "librosa\util\decorators.py", line 104, in inner_f
File "librosa\core\audio.py", line 606, in resample
File "librosa\util\decorators.py", line 88, in inner_f
File "librosa\util\utils.py", line 294, in valid_audio
"
Error Time Stamp [2023-06-14 21:12:02]
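For context, librosa's `valid_audio` raises this `ParameterError` when the audio buffer contains NaN or Inf samples, which usually points at a corrupt stretch in the input file. A minimal pure-Python way to locate the bad sample (with a numpy array, the vectorized equivalent is `np.isfinite(audio).all()`):

```python
import math

def first_nonfinite(samples):
    """Return the index of the first NaN/Inf sample, or None if the buffer is clean."""
    for i, x in enumerate(samples):
        if not math.isfinite(x):
            return i
    return None

first_nonfinite([0.0, 0.5, float('nan'), 0.1])
# 2
first_nonfinite([0.0, 0.5, 0.1])
# None
```

If the check fails on the source file, re-encoding it (e.g. to WAV with ffmpeg) before running UVR may avoid the crash.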
Full Application Settings:
vr_model: 5_HP-Karaoke-UVR
aggression_setting: 10
window_size: 320
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: v3 | UVR_Model_Bag
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Main
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: True
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
save_format: MP3
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems | open | 2023-06-14T13:25:00Z | 2023-06-14T13:25:00Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/615 | [] | kmsmdn | 0 |
modelscope/data-juicer | data-visualization | 490 | Merge local and API LLM calling | ### Search before continuing 先搜索,再继续
- [X] I have searched the Data-Juicer issues and found no similar feature requests. 我已经搜索了 Data-Juicer 的 issue 列表但是没有发现类似的功能需求。
### Description 描述
目前API LLM相关算子可能还需要支持一下vllm或hugging face调用本地开源LLM。这里改动涉及:
1. Prepare model接口的统一
2. 不同接口的response需要一个统一的解析函数
### Use case 使用场景
_No response_
### Additional 额外信息
_No response_
### Are you willing to submit a PR for this feature? 您是否乐意为此功能提交一个 PR?
- [X] Yes I'd like to help by submitting a PR! 是的!我愿意提供帮助并提交一个PR! | closed | 2024-11-15T08:49:57Z | 2024-12-31T08:52:52Z | https://github.com/modelscope/data-juicer/issues/490 | [
"enhancement"
] | BeachWang | 0 |