| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
sloria/TextBlob | nlp | 33 | Default feature extractor for NaiveBayesianClassifier marks all features as False | I'm trying to use the NaiveBayesianClassifier to classify some text, as below:
``` python
train = [('CARD PAYMENT TO ASDA SUPERSTORE ON ', 'Supermarket'),
         ('MONTHLY ACCOUNT FEE', 'Bill'),
         ('CARD PAYMENT TO SAINSBURYS SMKT GBP RATE GBP ON ', 'Supermarket'),
         ('CARD PAYMENT TO ORDNANCE SURVEY GBP RATE GBP ON ', 'Eating Out'),
         ('CARD PAYMENT TO TEXQUEENSWAYSSTN GBP RATE GBP ON ', 'Petrol')]
c = NaiveBayesClassifier(train)
```
However, it doesn't seem to classify properly, and when I get it to extract the features, I find that all of the features are `False`:
``` python
c.extract_features('CARD PAYMENT')
{u'contains(ACCOUNT)': False,
u'contains(ASDA)': False,
u'contains(CARD)': False,
u'contains(FEE)': False,
u'contains(GBP)': False,
u'contains(MONTHLY)': False,
u'contains(ON)': False,
u'contains(ORDNANCE)': False,
u'contains(PAYMENT)': False,
u'contains(RATE)': False,
u'contains(SAINSBURYS)': False,
u'contains(SMKT)': False,
u'contains(SUPERSTORE)': False,
u'contains(SURVEY)': False,
u'contains(TEXQUEENSWAYSSTN)': False,
u'contains(TO)': False}
```
I assume this is a problem with the default feature extractor. When I write my own extractor, as below, it all works fine.
``` python
def extractor(doc):
    tokens = doc.split()
    features = {}
    for token in tokens:
        if token == "":
            continue
        features[token] = True
    return features
```
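For comparison, here is a stand-alone sketch (not TextBlob's actual implementation) of what a `contains(word)`-style extractor should return; if every value comes back `False` even for words that are clearly in the document, the tokenization step is the likely suspect:

```python
def contains_extractor(document, train_words):
    # For every word seen during training, record whether it occurs
    # in the document being classified.
    tokens = set(document.split())
    return {f"contains({w})": (w in tokens) for w in sorted(train_words)}

train_words = {"CARD", "PAYMENT", "ASDA", "FEE"}
features = contains_extractor("CARD PAYMENT", train_words)
# CARD and PAYMENT should be True; ASDA and FEE should be False.
```

With TextBlob itself, a custom extractor like the one above can reportedly be passed in as `NaiveBayesClassifier(train, feature_extractor=extractor)` (parameter name assumed from the docs).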
| closed | 2013-09-29T15:37:50Z | 2013-09-29T21:20:28Z | https://github.com/sloria/TextBlob/issues/33 | [
"bug"
] | robintw | 1 |
twopirllc/pandas-ta | pandas | 432 | MACD is not working | macd = ta.macd(data['close'], fast=12, slow=26, signal=9, talib=True, offset=None)
Output is missing | closed | 2021-11-16T20:19:00Z | 2021-11-23T22:26:16Z | https://github.com/twopirllc/pandas-ta/issues/432 | [
"good first issue",
"question"
] | graphicgeared | 3 |
ufoym/deepo | tensorflow | 44 | About the caffe2.detectron | Hi @ufoym, does the deepo:caffe2 image include the Detectron module? I didn't find it after I pulled the image. | closed | 2018-06-18T05:45:11Z | 2018-06-26T03:40:25Z | https://github.com/ufoym/deepo/issues/44 | [] | Hsintao | 1 |
microsoft/nni | deep-learning | 5,248 | how to add a channel constraint in the pruning config | Hi, I've looked through the docs and could not find how to set up the pruning config (suppose we use the L1-norm pruner) so that the remaining convs have a channel number divisible by a certain number, say 8.
Thanks. | open | 2022-11-26T14:33:13Z | 2022-11-29T03:14:53Z | https://github.com/microsoft/nni/issues/5248 | [] | RunningLeon | 4 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 593 | Encoder's embeds_loss becomes NaN | Hi, I tried training the Encoder on my own dataset (which is much smaller, around 500 speakers) to see how the model behaves. However, at around step 25,000 I encountered an error:
```
ValueError: Input contains NaN, infinity or a value too large for dtype('float32').
```
Does anyone know what the error is here? My two cents: it may be running into an exploding/vanishing gradient problem (possibly because the dataset is too small). If that is the case, should I decrease the learning rate? Any help or comments are appreciated :D. | closed | 2020-11-13T02:09:54Z | 2021-02-23T13:39:45Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/593 | [] | tranctan | 4 |
psf/requests | python | 6,087 | Python3.6 - requests exceptions SSLError | Please refer to our [Stack Overflow tag](https://stackoverflow.com/questions/tagged/python-requests) for guidance.
Hi,
I am having a problem with a Python 3 script that uses requests to perform an API GET request.
The communication has to go through a proxy. I had a similar issue with `python3.6 -m pip install package_name` and solved it by setting `http_proxy` and `https_proxy` to `http://proxy_ip:port`. Doing the same in the Python script keeps failing, though: the target destination only allows HTTPS communication, so routing the traffic through `http_proxy` won't work in this case. The `https_proxy` works fine from my browser, but when running the request I get the error below:
"""
Traceback (most recent call last):
File "/home/integration/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 700, in urlopen
self._prepare_proxy(conn)
File "/home/integration/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 994, in _prepare_proxy
conn.connect()
File "/home/integration/.local/lib/python3.6/site-packages/urllib3/connection.py", line 364, in connect
conn = self._connect_tls_proxy(hostname, conn)
File "/home/integration/.local/lib/python3.6/site-packages/urllib3/connection.py", line 507, in _connect_tls_proxy
ssl_context=ssl_context,
File "/home/integration/.local/lib/python3.6/site-packages/urllib3/util/ssl_.py", line 453, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls)
File "/home/integration/.local/lib/python3.6/site-packages/urllib3/util/ssl_.py", line 495, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock)
File "/opt/rh/rh-python36/root/usr/lib64/python3.6/ssl.py", line 407, in wrap_socket
_context=self, _session=session)
File "/opt/rh/rh-python36/root/usr/lib64/python3.6/ssl.py", line 817, in __init__
self.do_handshake()
File "/opt/rh/rh-python36/root/usr/lib64/python3.6/ssl.py", line 1077, in do_handshake
self._sslobj.do_handshake()
File "/opt/rh/rh-python36/root/usr/lib64/python3.6/ssl.py", line 689, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:852)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/integration/.local/lib/python3.6/site-packages/requests/adapters.py", line 450, in send
timeout=timeout
File "/home/integration/.local/lib/python3.6/site-packages/urllib3/connectionpool.py", line 786, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "/home/integration/.local/lib/python3.6/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='ruh-services.sec.ibm.com', port=443): Max retries exceeded with url: /micro/ticket_detail (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:852)'),))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/integration/.local/lib/python3.6/site-packages/requests/api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "/home/integration/.local/lib/python3.6/site-packages/requests/api.py", line 61, in request
return session.request(method=method, url=url, **kwargs)
File "/home/integration/.local/lib/python3.6/site-packages/requests/sessions.py", line 529, in request
resp = self.send(prep, **send_kwargs)
File "/home/integration/.local/lib/python3.6/site-packages/requests/sessions.py", line 645, in send
r = adapter.send(request, **kwargs)
File "/home/integration/.local/lib/python3.6/site-packages/requests/adapters.py", line 517, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='ruh-services.sec.ibm.com', port=443): Max retries exceeded with url: /micro/ticket_detail (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:852)'),))
"""
Below is the Python script I am using:
```python
import requests

url = "https://any_website"
response = requests.get(
    url,
    headers={"User-Agent": "Opera/9.80 (X11; Linux x86_64; U; de) Presto/2.2.15 Version/10.00"},
    auth=("UserName", "Password"),
    proxies={"https": "https://proxy_ip:port"},
)
```
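One workaround often reported for this exact `SSLEOFError` with urllib3 >= 1.26 is to give the proxy an `http://` scheme even for the `https` key, so the request is tunnelled through a plain-HTTP `CONNECT` instead of TLS-in-TLS (the proxy address below is hypothetical):

```python
# Hypothetical proxy host/port; note the http:// scheme on BOTH keys.
# An https:// scheme makes urllib3 wrap the connection to the proxy itself
# in TLS first (TLS-in-TLS), which many corporate proxies reject with an
# EOF like the one in the traceback above.
proxies = {
    "http": "http://proxy_ip:8080",
    "https": "http://proxy_ip:8080",
}
# response = requests.get(url, proxies=proxies, ...)
```

Whether this applies here depends on the proxy; it is a sketch of a common configuration, not a verified fix for this environment.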
Note: the error message is the same for all destinations, and I get the error almost immediately after running the script.
Appreciate your support in identifying the cause of this problem.
Thanks | closed | 2022-03-16T09:41:05Z | 2023-03-21T00:03:14Z | https://github.com/psf/requests/issues/6087 | [] | MrSled | 4 |
StackStorm/st2 | automation | 6,219 | Unable to install packs that have a .git file in them | I was trying to install packs hosted on GitHub, but the install fails with this error:
```
result:
  errors:
  - message: Execution failed. See result for details.
    result:
      exit_code: 1
      result: None
      stderr: "Traceback (most recent call last):
        File "/opt/stackstorm/st2/lib/python3.8/site-packages/python_runner/python_action_wrapper.py", line 395, in <module>
          obj.run()
        File "/opt/stackstorm/st2/lib/python3.8/site-packages/python_runner/python_action_wrapper.py", line 214, in run
          output = action.run(**self._parameters)
        File "/opt/stackstorm/packs/packs/actions/pack_mgmt/download.py", line 94, in run
          pack_result = download_pack(
        File "/opt/stackstorm/st2/lib/python3.8/site-packages/st2common/util/pack_management.py", line 161, in download_pack
          clone_repo(
        File "/opt/stackstorm/st2/lib/python3.8/site-packages/st2common/util/pack_management.py", line 205, in clone_repo
          repo = Repo.clone_from(repo_url, temp_dir)
        File "/opt/stackstorm/st2/lib/python3.8/site-packages/git/repo/base.py", line 1328, in clone_from
          return cls._clone(
        File "/opt/stackstorm/st2/lib/python3.8/site-packages/git/repo/base.py", line 1237, in _clone
          finalize_process(proc, stderr=stderr)
        File "/opt/stackstorm/st2/lib/python3.8/site-packages/git/util.py", line 437, in finalize_process
          proc.wait(**kwargs)
        File "/opt/stackstorm/st2/lib/python3.8/site-packages/git/cmd.py", line 602, in wait
          raise GitCommandError(remove_password_if_present(self.args), status, errstr)
        git.exc.GitCommandError: Cmd('git') failed due to: exit code(128)
          cmdline: git clone -v -- git@github.com:exampleuser/stackstorm-examplepack.git /root/.st2packs/006re5b146af7979431fc9e001da0d79f3
          stderr: 'Cloning into '/root/.st2packs/0065b146af7979431fc9e001dase0d79f3'...
        git@github.com: Permission denied (publickey).
        fatal: Could not read from remote repository.
        Please make sure you have the correct access rights
        and the repository exists.
        '
        "
      stdout: ''
    task_id: download_pack
    type: error
  output:
    conflict_list: null
    message: ''
    packs_list: null
    warning_list: null
```
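The `Permission denied (publickey)` means git is cloning over SSH without a key that the st2 pack-download process can read. A common workaround, assuming the repository is public, is to install the pack via its HTTPS URL instead; the rewrite from SSH form is mechanical:

```python
import re

def ssh_to_https(url):
    # git@github.com:user/repo.git -> https://github.com/user/repo.git
    return re.sub(r"^git@([^:]+):", r"https://\1/", url)

print(ssh_to_https("git@github.com:exampleuser/stackstorm-examplepack.git"))
```

Then run `st2 pack install <https-url>`. For private repositories, an SSH deploy key readable by the user running the pack-download action would be needed instead.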
| closed | 2024-07-09T05:39:48Z | 2024-07-09T12:42:18Z | https://github.com/StackStorm/st2/issues/6219 | [] | anshul-axl | 2 |
django-import-export/django-import-export | django | 1,396 | Documentation for customizing admin forms is out-of-date / unclear | The documentation describes how to [customize the import forms](https://django-import-export.readthedocs.io/en/latest/getting_started.html#customize-admin-import-forms). The example shows how you can add an `author` field, but it doesn't make it clear what the purpose of this is, and that you have to provide any filtering capabilities yourself. This has led to some [confusion](https://stackoverflow.com/q/71161607/39296).
I propose to update this section so that it is clearer why you would want to customize the import forms, and to give an example of how to filter using a dropdown. | closed | 2022-02-18T17:19:22Z | 2024-07-24T15:34:13Z | https://github.com/django-import-export/django-import-export/issues/1396 | [
"docs",
"good first issue"
] | matthewhegarty | 5 |
koxudaxi/datamodel-code-generator | fastapi | 1,427 | since v0.16.0 `--use-default` is broken when `allOf` is present | **Describe the bug**
In v0.15.0 `--use-default` works as expected. Since `v0.16.0`, this is only the case when no `allOf` is present in the schema.
**To Reproduce**
Example schema:
```json
{
  "type": "object",
  "title": "Item",
  "allOf": [
    {
      "title": "Entity",
      "type": "object"
    }
  ],
  "required": [
    "test",
    "testarray"
  ],
  "properties": {
    "test": {
      "type": "string",
      "default": "test123"
    },
    "testarray": {
      "title": "test array",
      "type": "array",
      "items": {
        "type": "string"
      },
      "minItems": 1,
      "default": [
        "test123"
      ]
    }
  }
}
```
Used commandline:
```
$ datamodel-codegen.exe --input "RequiredWithDefaultTest.json" --input-file-type jsonschema --output "testmodel.py" --use-default
```
**Expected behavior**
With `v0.15.0` or `allOf` removed from the schema, the result is:
```python
class Item(BaseModel):
    test: Optional[str] = 'test123'
    testarray: Optional[List[str]] = Field(['test123'], min_items=1, title='test array')
```
**Actual behavior**
With `v0.16.0` and `allOf` present in the schema, the result is:
```python
class Item(BaseModel):
    test: str
    testarray: List[str] = Field(..., min_items=1, title='test array')
```
**Version:**
- OS: Windows 10
- Python version: 3.9.5
- datamodel-code-generator version: >= v0.16.0
**Additional context**
It is likely that this is related to #1009 / #1012
| closed | 2023-07-16T06:21:05Z | 2024-05-11T05:28:23Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1427 | [
"bug"
] | simontaurus | 2 |
jadore801120/attention-is-all-you-need-pytorch | nlp | 142 | About Layernorm | I think there is a problem in your code regarding the use of layer norm.
In the forward function of MultiHeadAttention, you only layer-norm the q vector at the beginning of the function. I think you should instead layer-norm (residual + attention(q, k, v, mask)) (ignoring the feed-forward and dropout here). | closed | 2020-02-23T11:00:04Z | 2020-06-07T14:35:53Z | https://github.com/jadore801120/attention-is-all-you-need-pytorch/issues/142 | [] | BUCTwangkun | 2 |
2noise/ChatTTS | python | 197 | Question about long-text handling |
I haven't worked on this kind of project before, so I'd like to ask for advice. I started by feeding in some text I found online, and noticed random characters appearing after text processing.
I guessed the text was too long or contained special characters, so I added character cleanup and split the text into segments. How should I judge the speed of this segmented processing? It feels very slow to me.
The test case is 550 characters long, and the total processing time is about 11 minutes.
<img width="1647" alt="Snipaste_2024-06-02_20-50-38" src="https://github.com/2noise/ChatTTS/assets/50728459/9bd7d046-27ca-452a-a0a7-5e4308167826">
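For what it's worth, a minimal sentence-packing splitter along the lines described (pure Python, not ChatTTS code; the 80-character limit is an arbitrary assumption):

```python
import re

def split_text(text, max_len=80):
    # Split after sentence-ending punctuation (CJK or Western), then pack
    # consecutive sentences into chunks of at most max_len characters.
    sentences = [s for s in re.split(r"(?<=[。！？.!?])", text) if s.strip()]
    chunks, cur = [], ""
    for s in sentences:
        if cur and len(cur) + len(s) > max_len:
            chunks.append(cur)
            cur = s
        else:
            cur += s
    if cur:
        chunks.append(cur)
    return chunks
```

Each chunk can then be synthesized independently; splitting mainly bounds per-call input length rather than total compute, so an 11-minute wall time for 550 characters may point more at hardware (CPU vs. GPU inference) than at the splitting itself.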
| closed | 2024-06-02T12:59:02Z | 2024-07-19T04:01:38Z | https://github.com/2noise/ChatTTS/issues/197 | [
"stale"
] | Oliver-Lief | 3 |
pykaldi/pykaldi | numpy | 7 | Use Cmake for building extension modules | Building pykaldi takes a long time since we can not parallelize the build using setuptools. Also our current build system does not detect the changes in some dependencies like the C++ headers in pykaldi source tree. Building pykaldi will become much worse as the code base grows. We should try and see if we can build the extension modules in CMake. | closed | 2017-07-24T23:37:35Z | 2017-08-01T15:27:09Z | https://github.com/pykaldi/pykaldi/issues/7 | [] | dogancan | 0 |
piskvorky/gensim | nlp | 3,244 | Self provided normalization function is not used. | https://github.com/RaRe-Technologies/gensim/blob/5bec27767ad40712e8912d53a896cb2282c33880/gensim/models/tfidfmodel.py#L525
`self.normalize = matutils.unitvec` does not allow users to supply their own normalization function. | open | 2021-10-05T23:00:33Z | 2021-10-06T22:35:27Z | https://github.com/piskvorky/gensim/issues/3244 | [] | cosmozhang | 3 |
huggingface/transformers | pytorch | 36,875 | Qwen2-VL-7B-Instruct shape error when using TP=4 | ### System Info
- `transformers` version: 4.49.0
- Platform: Linux-5.4.250-2-velinux1u1-amd64-x86_64-with-glibc2.31
- Python version: 3.10.15
- Huggingface_hub version: 0.29.2
- Safetensors version: 0.5.3
- Accelerate version: 1.4.0
- Accelerate config: not found
- DeepSpeed version: 0.16.3
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A800-SXM4-80GB
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
pass
### Expected behavior
Hi, I am trying to use transformers tensor parallelism (TP) to run inference with a reward model in my RL project.
I followed [this page](https://huggingface.co/docs/transformers/main/perf_infer_gpu_multi) and initialized the Qwen2-VL model with `tp_plan='auto'`; however, I ran into this error when calling generate:
```
File "/root/miniconda3/lib/python3.10/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 790, in forward
[rank0]: attn_output = attn_output.view(bsz, q_len, self.hidden_size)
[rank0]: RuntimeError: shape '[1, 359, 3584]' is invalid for input of size 321664
```
`attn_output.view(bsz, q_len, self.hidden_size)` is apparently wrong, since with tensor parallelism the attention heads are split across ranks, so we should not expect `self.hidden_size` in the last dim; instead it should be `(bsz, q_len, self.hidden_size / tp_size)`. | open | 2025-03-21T07:03:27Z | 2025-03-21T09:02:17Z | https://github.com/huggingface/transformers/issues/36875 | [
"bug"
] | KimmiShi | 2 |
vitalik/django-ninja | pydantic | 1,095 | Creating data with POST request | I have a model in which one field is a ManyToManyField referencing another model. Creating an instance throws an error saying the data can't be assigned directly and suggesting the use of `.set(data)`. Is there any other way to create the data?
```
PrimaryModel:
    id = UUIDField(primary_key=True, editable=False)
    actors = ManyToManyField(Actor, blank=True)
```
```
Actor:
    id = UUIDField(primary_key=True, editable=False)
    name = CharField(max_length=50)
```
When a `POST` request is made to the PrimaryModel endpoint, the `actors` payload comes as a list of IDs (`[id, id]`).
```Python
@router.post("/", response={201: MovieOut})
def create_movie(request, data: MovieIn):
    result = PrimaryModel.objects.create(**data.dict(exclude={"actors"}))
    result.actors.set(data.actors)
    return 201, result
```
| closed | 2024-02-23T14:36:18Z | 2024-02-24T16:17:35Z | https://github.com/vitalik/django-ninja/issues/1095 | [] | Harshav0423 | 2 |
igorbenav/fastcrud | pydantic | 197 | [BUG] Pagination does not work properly | **Describe the bug or question**
Pagination in EndpointCreator produces inconsistent results, because no ordering is applied together with the offset and limit.
Postgres documentation on why ORDER BY is needed when paginating: https://www.postgresql.org/docs/current/queries-limit.html
**To Reproduce**
--
**Description**
The arguments `sort_columns` and `sort_orders` of `FastCRUD.get_multi` should be set whenever pagination is applied: https://github.com/igorbenav/fastcrud/blob/82b6b17ec6bdae0ef6069f7ad7fdaf7518d6714b/fastcrud/endpoint/endpoint_creator.py#L413. One could, for example, use the `_get_primary_keys` function with `"desc"` as the default sort order to achieve this.
This will have a performance impact, as SQL ORDER BY is not a free lunch.
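A minimal stand-alone illustration with sqlite3 (not FastCRUD code) of the suggested fix, ordering by the primary key descending before applying offset/limit so that pages are deterministic:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO item (name) VALUES (?)",
                 [("n%d" % i,) for i in range(10)])

# Without ORDER BY, OFFSET/LIMIT may return rows in whatever order the
# engine chooses; with a unique sort key, every page is reproducible.
page2 = [row[0] for row in conn.execute(
    "SELECT id FROM item ORDER BY id DESC LIMIT 3 OFFSET 3"
)]
print(page2)  # ids run 10..1, so the second page is [7, 6, 5]
```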
| open | 2025-02-06T11:29:53Z | 2025-02-07T22:33:04Z | https://github.com/igorbenav/fastcrud/issues/197 | [
"bug",
"enhancement",
"Automatic Endpoint"
] | Simon128 | 6 |
psf/requests | python | 6,536 | Requests are blocked by Cloudflare inside Docker | Hello all, I need some help here.
I have a script where I need to POST to a URL to get the cookie authorization, then use those cookies for another request on the website.
It's a government site, and I really need some help.
When I run this script on my computer it works normally, but when I run it in a container on Docker (on my VPS) I get a 403 and cannot continue my code.
Can someone help me?
Has anyone had a similar issue?
Everything is correct: the certificate path, the user and password, everything. But I keep being blocked by Cloudflare.

The Cloudflare HTML message and the 403 in Docker:

Thank you all!
| closed | 2023-09-24T10:30:26Z | 2023-09-24T10:42:03Z | https://github.com/psf/requests/issues/6536 | [] | GabrielYudenich | 1 |
cvat-ai/cvat | computer-vision | 9,000 | How can I import two different annotation files into a single task? | I want to import segmentation masks in the "Segmentation Mask 1.1" format, but I also want to import bounding boxes in COCO format. If I import one after the other, the second import overwrites the first. | closed | 2025-01-27T18:25:49Z | 2025-01-28T09:09:11Z | https://github.com/cvat-ai/cvat/issues/9000 | [] | beatrizpinheiro | 1 |
vitalik/django-ninja | rest-api | 398 | schema_extra in Config class from schema | Greetings, I am loading an example inside the schema configuration class and it does not appear in the autodocumentation (OpenAPI).
```python
class Config:
schema_extra = {
"examples": [
{
"type": "Car"
}
]
}
```
but in the documentation it appears as follows

| closed | 2022-03-18T23:39:12Z | 2022-03-19T14:46:03Z | https://github.com/vitalik/django-ninja/issues/398 | [] | facundopadilla | 2 |
labmlai/annotated_deep_learning_paper_implementations | machine-learning | 241 | Wrong Image Scale in DDPM | The [torchvision.transforms.ToTensor](https://pytorch.org/vision/master/generated/torchvision.transforms.ToTensor.html) scale images from range **(0, 255)** to range **(0.0, 1.0)**, but in original paper, it should be scaled to range **(-1.0, 1.0)**. | open | 2024-02-10T04:17:31Z | 2025-03-16T07:00:26Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/241 | [] | ww-rm | 4 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,354 | Edit signup enabling manual activation of tenants | This ticket tracks the changes needed to edit the signup module to enable manual activation of tenants.
This feature is considered valuable because it makes it possible to collect users' interest in running a whistleblowing project without immediately enabling the requested platform, postponing activation until further evaluation.
"T: Enhancement",
"C: Client",
"C: Backend"
] | evilaliv3 | 0 |
holoviz/panel | jupyter | 7,298 | pn.chat.WebLLM module | With the release of Panel's JSComponent integration and the ability to couple it with [WebLLM](https://panel.holoviz.org/pyodide/webllm), I believe Panel has a great opportunity to distinguish itself from other dashboarding libraries, beyond its excellent integration with interactive plotting (HoloViews), by leveraging its ability to run LLMs directly on the client side with minimal setup.
I propose we migrate the example from [this page](https://panel.holoviz.org/gallery/webllm.html) into a module and make it easy for developers to use or extend it.
- [x] I may be interested in making a pull request to address this | closed | 2024-09-18T23:05:19Z | 2025-01-20T19:36:43Z | https://github.com/holoviz/panel/issues/7298 | [
"type: feature",
"type: discussion"
] | ahuang11 | 4 |
horovod/horovod | tensorflow | 3,791 | error: 'ncclCommGetAsyncError' was not declared in this scope whem pip install horovod | **Environment:**
1. Framework: PyTorch
2. Framework version:
3. Horovod version:0.26.1
4. MPI version:
5. CUDA version: 11.1
6. NCCL version: nccl-local-repo-rhel7-2.8.4-cuda11.1-1.0-1.x86_64.rpm
7. Python version: 3.8
8. Spark / PySpark version:
9. Ray version:
10. OS and version: CentOS 7.6
11. GCC version:
12. CMake version:3.22.1
**Checklist:**
1. Did you search issues to find if somebody asked this question before? yes
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)? yes
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? yes
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? yes
**Bug report:**
I followed the [instructions](https://github.com/horovod/horovod/blob/master/docs/gpus.rst) to install Horovod. The commands are shown below:
- rpm -i nccl-local-repo-rhel7-2.8.4-cuda11.1-1.0-1.x86_64.rpm
- yum install libnccl-2.8.4-1+cuda11.1 libnccl-devel-2.8.4-1+cuda11.1 libnccl-static-2.8.4-1+cuda11.1 -y
- gunzip -c openmpi-4.1.4.tar.gz | tar xf - && cd openmpi-4.1.4 && ./configure --prefix=/usr/local && make all install
- HOROVOD_NCCL_INCLUDE=/usr/include HOROVOD_NCCL_LIB=/usr/lib64 HOROVOD_GPU_OPERATIONS=NCCL HOROVOD_WITH_PYTORCH=1 HOROVOD_WITHOUT_MXNET=1 HOROVOD_WITHOUT_TENSORFLOW=1 python3 -m pip install --no-cache-dir horovod[pytorch] -i https://pypi.hobot.cc/simple --extra-index-url https://pypi.hobot.cc/hobot-local/simple
When I ran the last step, the build failed with the errors below. Do I need to upgrade the NCCL version? Thanks in advance!
```
Requirement already satisfied: cloudpickle in /usr/local/lib/python3.8/site-packages (from horovod[pytorch]) (2.1.0)
Requirement already satisfied: psutil in /usr/local/lib/python3.8/site-packages (from horovod[pytorch]) (5.9.1)
Requirement already satisfied: pyyaml in /usr/local/lib/python3.8/site-packages (from horovod[pytorch]) (5.4.1)
Requirement already satisfied: packaging in /usr/local/lib/python3.8/site-packages (from horovod[pytorch]) (21.3)
Requirement already satisfied: cffi>=1.4.0 in /usr/local/lib/python3.8/site-packages (from horovod[pytorch]) (1.15.0)
Requirement already satisfied: torch in /usr/local/lib/python3.8/site-packages (from horovod[pytorch]) (1.10.2+cu111)
Requirement already satisfied: pycparser in /usr/local/lib/python3.8/site-packages (from cffi>=1.4.0->horovod[pytorch]) (2.21)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.8/site-packages (from packaging->horovod[pytorch]) (3.0.9)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.8/site-packages (from torch->horovod[pytorch]) (4.2.0)
Building wheels for collected packages: horovod
Building wheel for horovod (setup.py): started
Building wheel for horovod (setup.py): still running...
Building wheel for horovod (setup.py): finished with status 'error'
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [1446 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.8
creating build/lib.linux-x86_64-3.8/horovod
copying horovod/__init__.py -> build/lib.linux-x86_64-3.8/horovod
creating build/lib.linux-x86_64-3.8/horovod/_keras
copying horovod/_keras/__init__.py -> build/lib.linux-x86_64-3.8/horovod/_keras
copying horovod/_keras/callbacks.py -> build/lib.linux-x86_64-3.8/horovod/_keras
copying horovod/_keras/elastic.py -> build/lib.linux-x86_64-3.8/horovod/_keras
creating build/lib.linux-x86_64-3.8/horovod/common
copying horovod/common/__init__.py -> build/lib.linux-x86_64-3.8/horovod/common
copying horovod/common/basics.py -> build/lib.linux-x86_64-3.8/horovod/common
copying horovod/common/elastic.py -> build/lib.linux-x86_64-3.8/horovod/common
copying horovod/common/exceptions.py -> build/lib.linux-x86_64-3.8/horovod/common
copying horovod/common/process_sets.py -> build/lib.linux-x86_64-3.8/horovod/common
copying horovod/common/util.py -> build/lib.linux-x86_64-3.8/horovod/common
creating build/lib.linux-x86_64-3.8/horovod/data
copying horovod/data/__init__.py -> build/lib.linux-x86_64-3.8/horovod/data
copying horovod/data/data_loader_base.py -> build/lib.linux-x86_64-3.8/horovod/data
creating build/lib.linux-x86_64-3.8/horovod/keras
copying horovod/keras/__init__.py -> build/lib.linux-x86_64-3.8/horovod/keras
copying horovod/keras/callbacks.py -> build/lib.linux-x86_64-3.8/horovod/keras
copying horovod/keras/elastic.py -> build/lib.linux-x86_64-3.8/horovod/keras
creating build/lib.linux-x86_64-3.8/horovod/mxnet
copying horovod/mxnet/__init__.py -> build/lib.linux-x86_64-3.8/horovod/mxnet
copying horovod/mxnet/compression.py -> build/lib.linux-x86_64-3.8/horovod/mxnet
copying horovod/mxnet/functions.py -> build/lib.linux-x86_64-3.8/horovod/mxnet
copying horovod/mxnet/mpi_ops.py -> build/lib.linux-x86_64-3.8/horovod/mxnet
creating build/lib.linux-x86_64-3.8/horovod/ray
copying horovod/ray/__init__.py -> build/lib.linux-x86_64-3.8/horovod/ray
copying horovod/ray/adapter.py -> build/lib.linux-x86_64-3.8/horovod/ray
copying horovod/ray/driver_service.py -> build/lib.linux-x86_64-3.8/horovod/ray
copying horovod/ray/elastic.py -> build/lib.linux-x86_64-3.8/horovod/ray
copying horovod/ray/elastic_v2.py -> build/lib.linux-x86_64-3.8/horovod/ray
copying horovod/ray/ray_logger.py -> build/lib.linux-x86_64-3.8/horovod/ray
copying horovod/ray/runner.py -> build/lib.linux-x86_64-3.8/horovod/ray
copying horovod/ray/strategy.py -> build/lib.linux-x86_64-3.8/horovod/ray
copying horovod/ray/utils.py -> build/lib.linux-x86_64-3.8/horovod/ray
copying horovod/ray/worker.py -> build/lib.linux-x86_64-3.8/horovod/ray
creating build/lib.linux-x86_64-3.8/horovod/runner
copying horovod/runner/__init__.py -> build/lib.linux-x86_64-3.8/horovod/runner
copying horovod/runner/gloo_run.py -> build/lib.linux-x86_64-3.8/horovod/runner
copying horovod/runner/js_run.py -> build/lib.linux-x86_64-3.8/horovod/runner
copying horovod/runner/launch.py -> build/lib.linux-x86_64-3.8/horovod/runner
copying horovod/runner/mpi_run.py -> build/lib.linux-x86_64-3.8/horovod/runner
copying horovod/runner/run_task.py -> build/lib.linux-x86_64-3.8/horovod/runner
copying horovod/runner/task_fn.py -> build/lib.linux-x86_64-3.8/horovod/runner
creating build/lib.linux-x86_64-3.8/horovod/spark
copying horovod/spark/__init__.py -> build/lib.linux-x86_64-3.8/horovod/spark
copying horovod/spark/conf.py -> build/lib.linux-x86_64-3.8/horovod/spark
copying horovod/spark/gloo_run.py -> build/lib.linux-x86_64-3.8/horovod/spark
copying horovod/spark/mpi_run.py -> build/lib.linux-x86_64-3.8/horovod/spark
copying horovod/spark/runner.py -> build/lib.linux-x86_64-3.8/horovod/spark
creating build/lib.linux-x86_64-3.8/horovod/tensorflow
copying horovod/tensorflow/__init__.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow
copying horovod/tensorflow/compression.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow
copying horovod/tensorflow/elastic.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow
copying horovod/tensorflow/functions.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow
copying horovod/tensorflow/gradient_aggregation.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow
copying horovod/tensorflow/gradient_aggregation_eager.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow
copying horovod/tensorflow/mpi_ops.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow
copying horovod/tensorflow/sync_batch_norm.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow
copying horovod/tensorflow/util.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow
creating build/lib.linux-x86_64-3.8/horovod/torch
copying horovod/torch/__init__.py -> build/lib.linux-x86_64-3.8/horovod/torch
copying horovod/torch/compression.py -> build/lib.linux-x86_64-3.8/horovod/torch
copying horovod/torch/functions.py -> build/lib.linux-x86_64-3.8/horovod/torch
copying horovod/torch/mpi_ops.py -> build/lib.linux-x86_64-3.8/horovod/torch
copying horovod/torch/optimizer.py -> build/lib.linux-x86_64-3.8/horovod/torch
copying horovod/torch/sync_batch_norm.py -> build/lib.linux-x86_64-3.8/horovod/torch
creating build/lib.linux-x86_64-3.8/horovod/runner/common
copying horovod/runner/common/__init__.py -> build/lib.linux-x86_64-3.8/horovod/runner/common
creating build/lib.linux-x86_64-3.8/horovod/runner/driver
copying horovod/runner/driver/__init__.py -> build/lib.linux-x86_64-3.8/horovod/runner/driver
copying horovod/runner/driver/driver_service.py -> build/lib.linux-x86_64-3.8/horovod/runner/driver
creating build/lib.linux-x86_64-3.8/horovod/runner/elastic
copying horovod/runner/elastic/__init__.py -> build/lib.linux-x86_64-3.8/horovod/runner/elastic
copying horovod/runner/elastic/constants.py -> build/lib.linux-x86_64-3.8/horovod/runner/elastic
copying horovod/runner/elastic/discovery.py -> build/lib.linux-x86_64-3.8/horovod/runner/elastic
copying horovod/runner/elastic/driver.py -> build/lib.linux-x86_64-3.8/horovod/runner/elastic
copying horovod/runner/elastic/registration.py -> build/lib.linux-x86_64-3.8/horovod/runner/elastic
copying horovod/runner/elastic/rendezvous.py -> build/lib.linux-x86_64-3.8/horovod/runner/elastic
copying horovod/runner/elastic/settings.py -> build/lib.linux-x86_64-3.8/horovod/runner/elastic
copying horovod/runner/elastic/worker.py -> build/lib.linux-x86_64-3.8/horovod/runner/elastic
creating build/lib.linux-x86_64-3.8/horovod/runner/http
copying horovod/runner/http/__init__.py -> build/lib.linux-x86_64-3.8/horovod/runner/http
copying horovod/runner/http/http_client.py -> build/lib.linux-x86_64-3.8/horovod/runner/http
copying horovod/runner/http/http_server.py -> build/lib.linux-x86_64-3.8/horovod/runner/http
creating build/lib.linux-x86_64-3.8/horovod/runner/task
copying horovod/runner/task/__init__.py -> build/lib.linux-x86_64-3.8/horovod/runner/task
copying horovod/runner/task/task_service.py -> build/lib.linux-x86_64-3.8/horovod/runner/task
creating build/lib.linux-x86_64-3.8/horovod/runner/util
copying horovod/runner/util/__init__.py -> build/lib.linux-x86_64-3.8/horovod/runner/util
copying horovod/runner/util/cache.py -> build/lib.linux-x86_64-3.8/horovod/runner/util
copying horovod/runner/util/lsf.py -> build/lib.linux-x86_64-3.8/horovod/runner/util
copying horovod/runner/util/network.py -> build/lib.linux-x86_64-3.8/horovod/runner/util
copying horovod/runner/util/remote.py -> build/lib.linux-x86_64-3.8/horovod/runner/util
copying horovod/runner/util/streams.py -> build/lib.linux-x86_64-3.8/horovod/runner/util
copying horovod/runner/util/threads.py -> build/lib.linux-x86_64-3.8/horovod/runner/util
creating build/lib.linux-x86_64-3.8/horovod/runner/common/service
copying horovod/runner/common/service/__init__.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/service
copying horovod/runner/common/service/compute_service.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/service
copying horovod/runner/common/service/driver_service.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/service
copying horovod/runner/common/service/task_service.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/service
creating build/lib.linux-x86_64-3.8/horovod/runner/common/util
copying horovod/runner/common/util/__init__.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/util
copying horovod/runner/common/util/codec.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/util
copying horovod/runner/common/util/config_parser.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/util
copying horovod/runner/common/util/env.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/util
copying horovod/runner/common/util/host_hash.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/util
copying horovod/runner/common/util/hosts.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/util
copying horovod/runner/common/util/network.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/util
copying horovod/runner/common/util/safe_shell_exec.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/util
copying horovod/runner/common/util/secret.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/util
copying horovod/runner/common/util/settings.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/util
copying horovod/runner/common/util/timeout.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/util
copying horovod/runner/common/util/tiny_shell_exec.py -> build/lib.linux-x86_64-3.8/horovod/runner/common/util
creating build/lib.linux-x86_64-3.8/horovod/spark/common
copying horovod/spark/common/__init__.py -> build/lib.linux-x86_64-3.8/horovod/spark/common
copying horovod/spark/common/_namedtuple_fix.py -> build/lib.linux-x86_64-3.8/horovod/spark/common
copying horovod/spark/common/backend.py -> build/lib.linux-x86_64-3.8/horovod/spark/common
copying horovod/spark/common/cache.py -> build/lib.linux-x86_64-3.8/horovod/spark/common
copying horovod/spark/common/constants.py -> build/lib.linux-x86_64-3.8/horovod/spark/common
copying horovod/spark/common/datamodule.py -> build/lib.linux-x86_64-3.8/horovod/spark/common
copying horovod/spark/common/estimator.py -> build/lib.linux-x86_64-3.8/horovod/spark/common
copying horovod/spark/common/params.py -> build/lib.linux-x86_64-3.8/horovod/spark/common
copying horovod/spark/common/serialization.py -> build/lib.linux-x86_64-3.8/horovod/spark/common
copying horovod/spark/common/store.py -> build/lib.linux-x86_64-3.8/horovod/spark/common
copying horovod/spark/common/util.py -> build/lib.linux-x86_64-3.8/horovod/spark/common
creating build/lib.linux-x86_64-3.8/horovod/spark/data_loaders
copying horovod/spark/data_loaders/__init__.py -> build/lib.linux-x86_64-3.8/horovod/spark/data_loaders
copying horovod/spark/data_loaders/pytorch_data_loaders.py -> build/lib.linux-x86_64-3.8/horovod/spark/data_loaders
creating build/lib.linux-x86_64-3.8/horovod/spark/driver
copying horovod/spark/driver/__init__.py -> build/lib.linux-x86_64-3.8/horovod/spark/driver
copying horovod/spark/driver/driver_service.py -> build/lib.linux-x86_64-3.8/horovod/spark/driver
copying horovod/spark/driver/host_discovery.py -> build/lib.linux-x86_64-3.8/horovod/spark/driver
copying horovod/spark/driver/job_id.py -> build/lib.linux-x86_64-3.8/horovod/spark/driver
copying horovod/spark/driver/mpirun_rsh.py -> build/lib.linux-x86_64-3.8/horovod/spark/driver
copying horovod/spark/driver/rendezvous.py -> build/lib.linux-x86_64-3.8/horovod/spark/driver
copying horovod/spark/driver/rsh.py -> build/lib.linux-x86_64-3.8/horovod/spark/driver
creating build/lib.linux-x86_64-3.8/horovod/spark/keras
copying horovod/spark/keras/__init__.py -> build/lib.linux-x86_64-3.8/horovod/spark/keras
copying horovod/spark/keras/bare.py -> build/lib.linux-x86_64-3.8/horovod/spark/keras
copying horovod/spark/keras/datamodule.py -> build/lib.linux-x86_64-3.8/horovod/spark/keras
copying horovod/spark/keras/estimator.py -> build/lib.linux-x86_64-3.8/horovod/spark/keras
copying horovod/spark/keras/optimizer.py -> build/lib.linux-x86_64-3.8/horovod/spark/keras
copying horovod/spark/keras/remote.py -> build/lib.linux-x86_64-3.8/horovod/spark/keras
copying horovod/spark/keras/tensorflow.py -> build/lib.linux-x86_64-3.8/horovod/spark/keras
copying horovod/spark/keras/util.py -> build/lib.linux-x86_64-3.8/horovod/spark/keras
creating build/lib.linux-x86_64-3.8/horovod/spark/lightning
copying horovod/spark/lightning/__init__.py -> build/lib.linux-x86_64-3.8/horovod/spark/lightning
copying horovod/spark/lightning/datamodule.py -> build/lib.linux-x86_64-3.8/horovod/spark/lightning
copying horovod/spark/lightning/estimator.py -> build/lib.linux-x86_64-3.8/horovod/spark/lightning
copying horovod/spark/lightning/legacy.py -> build/lib.linux-x86_64-3.8/horovod/spark/lightning
copying horovod/spark/lightning/remote.py -> build/lib.linux-x86_64-3.8/horovod/spark/lightning
copying horovod/spark/lightning/util.py -> build/lib.linux-x86_64-3.8/horovod/spark/lightning
creating build/lib.linux-x86_64-3.8/horovod/spark/task
copying horovod/spark/task/__init__.py -> build/lib.linux-x86_64-3.8/horovod/spark/task
copying horovod/spark/task/gloo_exec_fn.py -> build/lib.linux-x86_64-3.8/horovod/spark/task
copying horovod/spark/task/mpirun_exec_fn.py -> build/lib.linux-x86_64-3.8/horovod/spark/task
copying horovod/spark/task/task_info.py -> build/lib.linux-x86_64-3.8/horovod/spark/task
copying horovod/spark/task/task_service.py -> build/lib.linux-x86_64-3.8/horovod/spark/task
creating build/lib.linux-x86_64-3.8/horovod/spark/tensorflow
copying horovod/spark/tensorflow/__init__.py -> build/lib.linux-x86_64-3.8/horovod/spark/tensorflow
copying horovod/spark/tensorflow/compute_worker.py -> build/lib.linux-x86_64-3.8/horovod/spark/tensorflow
creating build/lib.linux-x86_64-3.8/horovod/spark/torch
copying horovod/spark/torch/__init__.py -> build/lib.linux-x86_64-3.8/horovod/spark/torch
copying horovod/spark/torch/estimator.py -> build/lib.linux-x86_64-3.8/horovod/spark/torch
copying horovod/spark/torch/remote.py -> build/lib.linux-x86_64-3.8/horovod/spark/torch
copying horovod/spark/torch/util.py -> build/lib.linux-x86_64-3.8/horovod/spark/torch
creating build/lib.linux-x86_64-3.8/horovod/tensorflow/data
copying horovod/tensorflow/data/__init__.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow/data
copying horovod/tensorflow/data/compute_service.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow/data
copying horovod/tensorflow/data/compute_worker.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow/data
creating build/lib.linux-x86_64-3.8/horovod/tensorflow/keras
copying horovod/tensorflow/keras/__init__.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow/keras
copying horovod/tensorflow/keras/callbacks.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow/keras
copying horovod/tensorflow/keras/elastic.py -> build/lib.linux-x86_64-3.8/horovod/tensorflow/keras
creating build/lib.linux-x86_64-3.8/horovod/torch/elastic
copying horovod/torch/elastic/__init__.py -> build/lib.linux-x86_64-3.8/horovod/torch/elastic
copying horovod/torch/elastic/sampler.py -> build/lib.linux-x86_64-3.8/horovod/torch/elastic
copying horovod/torch/elastic/state.py -> build/lib.linux-x86_64-3.8/horovod/torch/elastic
running build_ext
Running CMake in build/temp.linux-x86_64-3.8/RelWithDebInfo:
cmake /tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63 -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELWITHDEBINFO=/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/build/lib.linux-x86_64-3.8 -DPYTHON_EXECUTABLE:FILEPATH=/usr/local/bin/python3
cmake --build . --config RelWithDebInfo -- -j8 VERBOSE=1
-- Could not find CCache. Consider installing CCache to speed up compilation.
-- The CXX compiler identification is GNU 5.4.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/local/gcc-5.4.0/bin/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Build architecture flags: -mf16c -mavx -mfma
-- Using command /usr/local/bin/python3
-- Found MPI_CXX: /usr/mpi/gcc/mvapich2-2.2/lib/libmpicxx.so (found version "3.0")
-- Found MPI: TRUE (found version "3.0")
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - /usr/local/cuda/bin/nvcc
-- The CUDA compiler identification is NVIDIA 11.1.74
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Found CUDAToolkit: /usr/local/cuda/include (found version "11.1.74")
-- Looking for C++ include pthread.h
-- Looking for C++ include pthread.h - found
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- Linking against static NCCL library
-- Found NCCL: /usr/include
-- Determining NCCL version from the header file: /usr/include/nccl.h
-- NCCL_MAJOR_VERSION: 2
-- Found NCCL (include: /usr/include, library: /usr/lib64/libnccl_static.a)
-- Found NVTX: /usr/local/cuda/include
-- Found NVTX (include: /usr/local/cuda/include, library: dl)
-- The C compiler identification is GNU 5.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/local/gcc-5.4.0/bin/gcc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Gloo build as STATIC library
-- Found MPI_C: /usr/mpi/gcc/mvapich2-2.2/lib/libmpi.so (found version "3.0")
-- Found MPI: TRUE (found version "3.0")
-- MPI include path: /usr/mpi/gcc/mvapich2-2.2/include
-- MPI libraries: /usr/mpi/gcc/mvapich2-2.2/lib/libmpicxx.so/usr/mpi/gcc/mvapich2-2.2/lib/libmpi.so
-- Found Pytorch: 1.10.2+cu111 (found suitable version "1.10.2+cu111", minimum required is "1.5.0")
-- HVD_NVCC_COMPILE_FLAGS = -O3 -Xcompiler -fPIC -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_72,code=sm_72 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=\"sm_86,compute_86\"
-- Gloo build as STATIC library
-- MPI include path: /usr/mpi/gcc/mvapich2-2.2/include
-- MPI libraries: /usr/mpi/gcc/mvapich2-2.2/lib/libmpicxx.so/usr/mpi/gcc/mvapich2-2.2/lib/libmpi.so
-- Configuring done
CMake Warning (dev) in horovod/common/ops/cuda/CMakeLists.txt:
Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
empty CUDA_ARCHITECTURES not allowed. Run "cmake --help-policy CMP0104"
for policy details. Use the cmake_policy command to set the policy and
suppress this warning.
CUDA_ARCHITECTURES is empty for target "compatible_horovod_cuda_kernels".
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) in horovod/common/ops/cuda/CMakeLists.txt:
Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
empty CUDA_ARCHITECTURES not allowed. Run "cmake --help-policy CMP0104"
for policy details. Use the cmake_policy command to set the policy and
suppress this warning.
CUDA_ARCHITECTURES is empty for target "compatible_horovod_cuda_kernels".
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) in horovod/common/ops/cuda/CMakeLists.txt:
Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
empty CUDA_ARCHITECTURES not allowed. Run "cmake --help-policy CMP0104"
for policy details. Use the cmake_policy command to set the policy and
suppress this warning.
CUDA_ARCHITECTURES is empty for target "horovod_cuda_kernels".
This warning is for project developers. Use -Wno-dev to suppress it.
CMake Warning (dev) in horovod/common/ops/cuda/CMakeLists.txt:
Policy CMP0104 is not set: CMAKE_CUDA_ARCHITECTURES now detected for NVCC,
empty CUDA_ARCHITECTURES not allowed. Run "cmake --help-policy CMP0104"
for policy details. Use the cmake_policy command to set the policy and
suppress this warning.
CUDA_ARCHITECTURES is empty for target "horovod_cuda_kernels".
This warning is for project developers. Use -Wno-dev to suppress it.
-- Generating done
-- Build files have been written to: /tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/build/temp.linux-x86_64-3.8/RelWithDebInfo
... ...
[ 91%] Building CXX object horovod/torch/CMakeFiles/pytorch.dir/__/common/gloo/gloo_controller.cc.o
cd /tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/build/temp.linux-x86_64-3.8/RelWithDebInfo/horovod/torch && /usr/local/gcc-5.4.0/bin/g++ -DEIGEN_MPL2_ONLY=1 -DHAVE_CUDA=1 -DHAVE_GLOO=1 -DHAVE_GPU=1 -DHAVE_MPI=1 -DHAVE_NCCL=1 -DHAVE_NVTX=1 -DHOROVOD_GPU_ALLGATHER=78 -DHOROVOD_GPU_ALLREDUCE=78 -DHOROVOD_GPU_ALLTOALL=78 -DHOROVOD_GPU_BROADCAST=78 -DHOROVOD_GPU_REDUCESCATTER=78 -DPYTORCH_VERSION=1010002000 -DTORCH_API_INCLUDE_EXTENSION_H=1 -Dpytorch_EXPORTS -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/HTTPRequest/include -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/boost/assert/include -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/boost/config/include -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/boost/core/include -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/boost/detail/include -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/boost/iterator/include -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/boost/lockfree/include -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/boost/mpl/include -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/boost/parameter/include -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/boost/predef/include -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/boost/preprocessor/include -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/boost/static_assert/include -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/boost/type_traits/include -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/boost/utility/include 
-I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/lbfgs/include -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/gloo -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/eigen -I/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/third_party/flatbuffers/include -isystem /usr/mpi/gcc/mvapich2-2.2/include -isystem /usr/local/cuda/targets/x86_64-linux/include -isystem /usr/local/cuda/include -isystem /usr/local/lib/python3.8/site-packages/torch/include -isystem /usr/local/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -isystem /usr/local/lib/python3.8/site-packages/torch/include/TH -isystem /usr/local/lib/python3.8/site-packages/torch/include/THC -isystem /usr/local/include/python3.8 -D_GLIBCXX_USE_CXX11_ABI=0 -pthread -fPIC -Wall -ftree-vectorize -mf16c -mavx -mfma -O3 -g -DNDEBUG -fPIC -std=c++14 -MD -MT horovod/torch/CMakeFiles/pytorch.dir/__/common/gloo/gloo_controller.cc.o -MF CMakeFiles/pytorch.dir/__/common/gloo/gloo_controller.cc.o.d -o CMakeFiles/pytorch.dir/__/common/gloo/gloo_controller.cc.o -c /tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/gloo/gloo_controller.cc
In file included from /tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/ops/nccl_operations.cc:18:0:
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/ops/nccl_operations.h: In constructor 'horovod::common::NCCLTorusAllreduce::NCCLTorusAllreduce(horovod::common::NCCLContext*, horovod::common::NCCLContext*, horovod::common::GPUContext*, horovod::common::HorovodGlobalState*)':
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/ops/nccl_operations.h:276:16: warning: 'horovod::common::NCCLTorusAllreduce::cross_nccl_context_' will be initialized after [-Wreorder]
NCCLContext* cross_nccl_context_;
^
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/ops/nccl_operations.h:275:17: warning: 'horovod::common::NCCLOpContext horovod::common::NCCLTorusAllreduce::local_nccl_op_context_' [-Wreorder]
NCCLOpContext local_nccl_op_context_;
^
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/ops/nccl_operations.h:255:3: warning: when initialized here [-Wreorder]
NCCLTorusAllreduce(NCCLContext* local_nccl_context, NCCLContext* cross_nccl_context,
^
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/ops/nccl_operations.cc: In function 'void horovod::common::commDestroyOrAbort(ncclComm*&, bool)':
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/ops/nccl_operations.cc:51:67: error: 'ncclCommGetAsyncError' was not declared in this scope
auto nccl_err = ncclCommGetAsyncError(nccl_comm, &nccl_async_err);
^
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/ops/nccl_operations.cc:58:28: error: 'ncclCommAbort' was not declared in this scope
ncclCommAbort(nccl_comm);
^
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/ops/nccl_operations.cc: In member function 'void horovod::common::NCCLContext::ErrorCheck(std::string, ncclResult_t, ncclComm*&)':
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/ops/nccl_operations.cc:65:28: error: 'ncclCommAbort' was not declared in this scope
ncclCommAbort(nccl_comm);
^
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/ops/nccl_operations.cc: In member function 'void horovod::common::NCCLOpContext::AsyncErrorCheck()':
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/ops/nccl_operations.cc:128:69: error: 'ncclCommGetAsyncError' was not declared in this scope
auto nccl_err = ncclCommGetAsyncError(*nccl_comm_, &nccl_async_err);
^
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/ops/nccl_operations.cc: In member function 'virtual horovod::common::Status horovod::common::NCCLTorusAllreduce::Execute(std::vector<horovod::common::TensorTableEntry>&, const horovod::common::Response&)':
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/ops/nccl_operations.cc:729:11: warning: unused variable 'total_buffer_len' [-Wunused-variable]
int64_t total_buffer_len = is_root_rank
^
gmake[2]: *** [horovod/torch/CMakeFiles/pytorch.dir/__/common/ops/nccl_operations.cc.o] Error 1
gmake[2]: *** Waiting for unfinished jobs....
In file included from /tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/gloo/../mpi/mpi_context.h:25:0,
from /tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/gloo/gloo_context.h:25,
from /tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/gloo/gloo_controller.h:19,
from /tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/gloo/gloo_controller.cc:16:
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/gloo/../mpi/../half.h: In function 'void horovod::common::HalfBits2Float(const short unsigned int*, float*)':
/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/horovod/common/gloo/../mpi/../half.h:76:44: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
*res = *reinterpret_cast<float const*>(&f);
^
gmake[2]: Leaving directory `/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/build/temp.linux-x86_64-3.8/RelWithDebInfo'
gmake[1]: *** [horovod/torch/CMakeFiles/pytorch.dir/all] Error 2
gmake[1]: Leaving directory `/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/build/temp.linux-x86_64-3.8/RelWithDebInfo'
gmake: *** [all] Error 2
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/setup.py", line 213, in <module>
setup(name='horovod',
File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/usr/local/lib/python3.8/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/local/lib/python3.8/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.8/site-packages/wheel/bdist_wheel.py", line 299, in run
self.run_command('build')
File "/usr/local/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.8/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/usr/local/lib/python3.8/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/local/lib/python3.8/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/usr/local/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/usr/local/lib/python3.8/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/usr/local/lib/python3.8/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/tmp/pip-install-7k5yrk_n/horovod_f4ad254eb4854941818414c07d6e4c63/setup.py", line 145, in build_extensions
subprocess.check_call(command, cwd=cmake_build_dir)
File "/usr/local/lib/python3.8/subprocess.py", line 364, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'RelWithDebInfo', '--', '-j8', 'VERBOSE=1']' returned non-zero exit status 2.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for horovod
Running setup.py clean for horovod
Failed to build horovod
``` | open | 2022-12-06T12:32:36Z | 2022-12-07T07:38:06Z | https://github.com/horovod/horovod/issues/3791 | [
"bug"
] | DRosemei | 1 |
JaidedAI/EasyOCR | machine-learning | 729 | Own Detection Modell | Hi,
I am trying to train my own text detection model on pictures with the following format:



I just want to extract the clock and the current quarter:
1. Picture: [1ST, 13:52]
2. Picture: [1ST, 8:38]
3. Picture: [1ST, 722]
I do not have a labeled test dataset, so I thought of creating my own time and quarter images with
[TextRecognitionDataGenerator](https://github.com/Belval/TextRecognitionDataGenerator) and replacing the given time and quarter
with the small picture of the timestamp and quarter.

I am already using the provided script for training,
but I have some questions about the training parameters:
1. Would you use the method described above to generate the test dataset, or is there a simpler solution?
2. Does it matter that there is other text in the box which should not get recognized?
3. How many training and validation pictures do I need for high-accuracy results around 0.9?
4. The training script takes a list of characters and symbols which should get detected. Should I provide only the necessary ones, or do you think it is better to provide all characters?
5. Which batch size and how many iterations would you suggest for this amount of training data?
6. Is there a way to let the model know where in the picture to search, or does the built-in CRAFT model already detect everything that looks like a character or symbol?
| open | 2022-05-18T19:40:27Z | 2022-08-09T11:28:29Z | https://github.com/JaidedAI/EasyOCR/issues/729 | [] | Schubert-Tom | 1 |
Tinche/aiofiles | asyncio | 167 | Async os.walk attempted solution | Async-adapted version of os.walk; it will only do "topdown" walking.
Feel free to add it in if it makes sense~
```
import os
import aiofiles.os
async def _walk(top, onerror, followlinks):
dirs = []
nondirs = []
walk_dirs = []
# We may not have read permission for top, in which case we can't
# get a list of the files the directory contains. os.walk
# always suppressed the exception then, rather than blow up for a
# minor reason when (say) a thousand readable directories are still
# left to visit. That logic is copied here.
try:
scandir_it = await aiofiles.os.scandir(top)
except OSError as error:
if onerror is not None:
onerror(error)
return
with scandir_it:
while True:
try:
try:
entry = next(scandir_it)
except StopIteration:
break
except OSError as error:
if onerror is not None:
onerror(error)
return
try:
is_dir = entry.is_dir()
except OSError:
# If is_dir() raises an OSError, consider that the entry is not
# a directory, same behaviour than os.path.isdir().
is_dir = False
if is_dir:
dirs.append(entry.name)
else:
nondirs.append(entry.name)
yield top, dirs, nondirs
# Recurse into sub-directories
islink, join = aiofiles.os.path.islink, os.path.join
for dirname in dirs:
new_path = join(top, dirname)
if followlinks or not await islink(new_path):
async for x in _walk(new_path, onerror, followlinks):
yield x
async def walk(top, onerror=None, followlinks=False):
async for top, dirs, nondirs in _walk(top, onerror, followlinks):
yield top, dirs, nondirs
```
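For comparison, here is a self-contained, stdlib-only sketch of the same top-down walk that offloads the blocking `os.scandir` call to a worker thread via `asyncio.to_thread` — handy where `aiofiles` isn't available (error handling and the symlink options are omitted for brevity):

```python
import asyncio
import os
import tempfile

async def simple_walk(top):
    def _scan(path):
        # Blocking directory listing, run in a worker thread.
        dirs, files = [], []
        with os.scandir(path) as it:
            for entry in it:
                (dirs if entry.is_dir(follow_symlinks=False) else files).append(entry.name)
        return sorted(dirs), sorted(files)

    dirs, files = await asyncio.to_thread(_scan, top)
    yield top, dirs, files
    for d in dirs:  # top-down recursion, like the implementation above
        async for item in simple_walk(os.path.join(top, d)):
            yield item

async def demo():
    with tempfile.TemporaryDirectory() as root:
        os.makedirs(os.path.join(root, "a", "b"))
        open(os.path.join(root, "a", "f.txt"), "w").close()
        return [(os.path.relpath(t, root), d, f) async for t, d, f in simple_walk(root)]

result = asyncio.run(demo())
print(result)
```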
#160
Un-deleted, I thought there was a bug, but there wasn't 🙃 | open | 2023-06-10T02:45:52Z | 2024-05-09T23:52:33Z | https://github.com/Tinche/aiofiles/issues/167 | [] | BeRT2me | 1 |
django-cms/django-cms | django | 7,258 | [BUG] |
## Description
Running `djangocms` stops immediately after start with `ModuleNotFoundError: No module named 'pytz'`.
## Steps to reproduce
1) python3.7 -m venv env
2) source env/bin/activate
3) pip install --upgrade pip
4) pip install djangocms-installer
5) djangocms
## Expected behaviour
It should stop with the usual argument error: `djangocms: error: the following arguments are required: project_name`.
## Actual behaviour
Instead it stops after start because of `ModuleNotFoundError: No module named 'pytz'`.
## Additional information (CMS/Python/Django versions)
python3.7, djangocms-installer 2.0.0, pip 22.0.3
## Do you want to help fix this issue?
Add `pytz` as an additional requirement in setup.py or in the requirements file.
* [x] No, I only want to report the issue.
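Besides declaring the dependency in the packaging metadata, CLI tools can also fail fast with an actionable message when a runtime dependency is missing — a generic sketch of that pattern, not djangocms-installer's actual code:

```python
import importlib.util

def require(module_name: str) -> None:
    """Exit with a clear message if a runtime dependency is missing."""
    if importlib.util.find_spec(module_name) is None:
        raise SystemExit(
            f"Missing required dependency {module_name!r}; "
            f"install it with: pip install {module_name}"
        )

require("json")  # stdlib module, always importable
print("dependencies ok")
```

A check like this turns the bare `ModuleNotFoundError` traceback above into a one-line hint the user can act on.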
| closed | 2022-03-03T14:09:28Z | 2022-03-04T19:07:49Z | https://github.com/django-cms/django-cms/issues/7258 | [] | michalnik | 5 |
jina-ai/serve | machine-learning | 5,453 | patch Otel to allow new protobuf to be used | closed | 2022-11-25T13:14:39Z | 2022-12-15T17:35:18Z | https://github.com/jina-ai/serve/issues/5453 | [] | JoanFM | 0 | |
ckan/ckan | api | 7,963 | Dataset search view returns inconsistent Errors | ## CKAN version
master
## Describe the bug
Our current dataset search view is neither consistent nor clear in its handling of Solr errors. It is also not well documented, so it is difficult to improve without understanding the Solr interface and its errors.
Some searches return just an "Error" message, and the underlying code in the view does not provide any information on why this is happening. When trying to trace it, we end up in two **custom** Solr error classes, `SearchError` and `SearchQueryError`, which do not provide any documentation on how or when to use them.
### Steps to reproduce
In our search dataset view, run a query that will return an error. For example, try to search for the following text `search: :error` and it will fail.
On Demo: https://demo.ckan.org/dataset/?q=search%3A+%3Aerror

### Expected behavior
- Search errors should be more descriptive so users can understand what is happening and why their query failed.
- Our custom error classes `SearchQueryError` and `SearchError` should have better documentation on how to use them.
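As a sketch of what that documentation could look like — hypothetical docstrings drawing a parse-error vs. infrastructure-error line (the real classes live in CKAN's search module; the docstring policy here is an assumption, not CKAN's current behaviour):

```python
class SearchError(Exception):
    """Base class for search-backend failures.

    Raised for infrastructure problems (Solr unreachable, timeout, bad
    response) where the user's query itself is not at fault.
    """

class SearchQueryError(SearchError):
    """The user-supplied query could not be parsed by the backend.

    Raised for malformed input -- e.g. the stray ':' in `search: :error` --
    so the view can tell the user what to fix instead of a bare "Error".
    """

def run_query(q: str) -> str:
    if ":" in q:  # crude stand-in for Solr rejecting an unparsable query
        raise SearchQueryError(f"could not parse query: {q!r}")
    return f"results for {q!r}"

try:
    run_query("search: :error")
except SearchQueryError as exc:  # caught via the subclass...
    message = str(exc)
except SearchError:              # ...before the generic base class
    message = "search backend unavailable"
print(message)
```

With the split above, the view can show parse errors to the user verbatim while logging infrastructure errors and showing a generic message.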
| open | 2023-12-06T10:10:30Z | 2024-01-30T21:54:52Z | https://github.com/ckan/ckan/issues/7963 | [
"UX",
"Good for Contribution"
] | pdelboca | 1 |
google-research/bert | nlp | 832 | TRAINED_CLASSIFIER | If TRAINED_CLASSIFIER == BERT_BASE_DIR, what happens? Is that OK? | open | 2019-08-30T12:07:42Z | 2019-08-30T12:07:42Z | https://github.com/google-research/bert/issues/832 | [] | SuMeng123 | 0 |
StackStorm/st2 | automation | 5,885 | Action Stuck in Scheduled & Running | I previously installed StackStorm 3.7 using the StackStorm HA Helm chart.
StackStorm ran fine for the first month; recently, however, a flood of automations has been kicking in, leaving a bunch of actions stuck in running/scheduled for a few days.

Any recommendation on how to fix this? What is the potential issue that caused this?
| closed | 2023-02-05T05:31:21Z | 2023-02-05T12:24:23Z | https://github.com/StackStorm/st2/issues/5885 | [] | mgazzali | 1 |
matplotlib/matplotlib | matplotlib | 29,578 | [Bug]: button event key error when using %matplotlib widget in jupyter lab | ### Bug summary
Python 3.11
import ipympl
import ipywidgets
import matplotlib
print("ipympl:", ipympl.__version__)
print("ipywidgets:", ipywidgets.__version__)
print("matplotlib:", matplotlib.__version__)
ipympl: 0.9.3
ipywidgets: 8.1.5
matplotlib: 3.10.0
### Code for reproduction
```Python
%matplotlib widget
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
def on_click(event):
    print("Click detected at", event.xdata, event.ydata)
fig.canvas.mpl_connect('button_press_event', on_click)
plt.show()
```
### Actual outcome
Nothing is printed when a click event occurs.
In the info log:
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
File ~/anaconda3/envs/catan_test/lib/python3.11/site-packages/ipympl/backend_nbagg.py:279, in Canvas._handle_message(self, object, content, buffers)
    276     self.manager.handle_json(content)
    278 else:
--> 279     self.manager.handle_json(content)
File ~/anaconda3/envs/catan_test/lib/python3.11/site-packages/matplotlib/backends/backend_webagg_core.py:474, in FigureManagerWebAgg.handle_json(self, content)
    473 def handle_json(self, content):
--> 474     self.canvas.handle_event(content)
File ~/anaconda3/envs/catan_test/lib/python3.11/site-packages/matplotlib/backends/backend_webagg_core.py:264, in FigureCanvasWebAggCore.handle_event(self, event)
    261 e_type = event['type']
    262 handler = getattr(self, f'handle_{e_type}',
    263                   self.handle_unknown_event)
--> 264 return handler(event)
File ~/anaconda3/envs/catan_test/lib/python3.11/site-packages/matplotlib/backends/backend_webagg_core.py:288, in FigureCanvasWebAggCore._handle_mouse(self, event)
    286 e_type = event['type']
    287 button = event['button'] + 1  # JS numbers off by 1 compared to mpl.
--> 288 buttons = {  # JS ordering different compared to mpl.
    289     button for button, mask in [
    290         (MouseButton.LEFT, 1),
    291         (MouseButton.RIGHT, 2),
    292         (MouseButton.MIDDLE, 4),
    293         (MouseButton.BACK, 8),
    294         (MouseButton.FORWARD, 16),
    295     ] if event['buttons'] & mask  # State *after* press/release.
    296 }
    297 modifiers = event['modifiers']
    298 guiEvent = event.get('guiEvent')
File ~/anaconda3/envs/catan_test/lib/python3.11/site-packages/matplotlib/backends/backend_webagg_core.py:295, in <setcomp>(.0)
    286 e_type = event['type']
    287 button = event['button'] + 1  # JS numbers off by 1 compared to mpl.
    288 buttons = {  # JS ordering different compared to mpl.
    289     button for button, mask in [
    290         (MouseButton.LEFT, 1),
    291         (MouseButton.RIGHT, 2),
    292         (MouseButton.MIDDLE, 4),
    293         (MouseButton.BACK, 8),
    294         (MouseButton.FORWARD, 16),
--> 295     ] if event['buttons'] & mask  # State *after* press/release.
    296 }
    297 modifiers = event['modifiers']
    298 guiEvent = event.get('guiEvent')
KeyError: 'buttons'
12:05:24 PM
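The traceback shows the backend indexing `event['buttons']` directly while decoding the JS mouse-button bitmask. A stdlib-only sketch of that decoding with a tolerant default (names simplified; this is an illustration, not matplotlib's actual fix):

```python
def pressed_buttons(event: dict) -> set:
    """Decode a JS MouseEvent.buttons bitfield, tolerating a missing key."""
    state = event.get("buttons", 0)  # ipympl 0.9.3 omits "buttons" entirely
    masks = [("LEFT", 1), ("RIGHT", 2), ("MIDDLE", 4), ("BACK", 8), ("FORWARD", 16)]
    return {name for name, mask in masks if state & mask}

print(pressed_buttons({"button": 0}))                # set() instead of KeyError
print(pressed_buttons({"button": 0, "buttons": 5}))  # {'LEFT', 'MIDDLE'} (order may vary)
```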
### Expected outcome
print("Click detected at", event.xdata, event.ydata). This should print out the click coordinates
### Additional information
_No response_
### Operating system
Mac 15.1.1 (24B91), M2
### Matplotlib Version
matplotlib: 3.10.0
### Matplotlib Backend
module://ipympl.backend_nbagg
### Python version
Python 3.11.11
### Jupyter version
4.3.4
### Installation
conda | closed | 2025-02-04T18:10:10Z | 2025-02-04T18:36:00Z | https://github.com/matplotlib/matplotlib/issues/29578 | [
"status: duplicate",
"backend: ipympl"
] | GanshengT | 1 |
adap/flower | scikit-learn | 4,317 | bug in flwr/common/config.py when replacing config args using --run_config | ### Describe the bug
flwr --run_config XXX won't work; the script still uses the default args in pyproject.toml
line 185 in flwr/simulation/run_simulation.py:
`override_config = parse_config_args([args.run_config] if args.run_config else args.run_config)`
- in parse_config_args(), the override_config is parsed and **toml.load**ed
line 100 in flwr/common/config.py:
`default_config = get_project_config(project_dir)["tool"]["flwr"]["app"].get("config", {})`
`flat_default_config = flatten_dict(default_config)`
`return fuse_dicts(flat_default_config, override_config)`
- in get_project_config(), the default_config is **toml.loaded** from dir
- but then **flattened**
- however override_config is **not flattened**
- so later in fuse_dicts(flat_default_config, override_config) the keys of them are not properly matched
line 105 in flwr/common/config.py:
- change to:
- `flat_default_config = flatten_dict(default_config)`
- `flat_override_config = flatten_dict(override_config)`
- `return fuse_dicts(flat_default_config, flat_override_config)`
- will fix the bug
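A stdlib-only sketch of the mismatch described above (`flatten_dict` and `fuse_dicts` here are simplified stand-ins for the helpers in `flwr.common.config`):

```python
def flatten_dict(d, prefix=""):
    # Collapse nested dicts into dotted keys, e.g. {"opt": {"name": ...}} -> {"opt.name": ...}
    flat = {}
    for key, value in d.items():
        full = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten_dict(value, prefix=full + "."))
        else:
            flat[full] = value
    return flat

def fuse_dicts(base, override):
    # Only keys already present in base can be overridden.
    return {k: override.get(k, v) for k, v in base.items()}

default = flatten_dict({"lr": 0.1, "opt": {"name": "sgd"}})  # flattened, as in config.py
override = {"opt": {"name": "adam"}}                         # toml.loads output: still nested

# Nested "opt" never matches flattened "opt.name", so the override is silently dropped:
assert fuse_dicts(default, override) == {"lr": 0.1, "opt.name": "sgd"}
# Flattening the override first (the proposed fix) makes the keys line up:
assert fuse_dicts(default, flatten_dict(override)) == {"lr": 0.1, "opt.name": "adam"}
```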
### Steps/Code to Reproduce
any command with ' --run_config XXX '
### Expected Results
config args being overrided
### Actual Results
not overrided, still the default args. | closed | 2024-10-10T07:56:53Z | 2024-10-10T19:44:15Z | https://github.com/adap/flower/issues/4317 | [
"bug"
] | xiliguguagua | 5 |
Miserlou/Zappa | django | 1,934 | zappa update errors in docker container with AWS profile/env vars | <!--- Provide a general summary of the issue in the Title above -->
`zappa update dev `in docker container errors:
`Warning! Couldn't get function prop-scrape-serverless-dev in ap-southeast-2 - have you deployed yet?`
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
I get this error in either of these two scenarios:
1. Set AWS_XXXX env vars and removed "profile_name" from zappa_settings.json.
2. Defined "default" AWS profile and set "profile_name": "default"
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
Yes virtual on 3.7
## Expected Behavior
<!--- Tell us what should happen -->
Should update AWS lambda function
## Actual Behavior
<!--- Tell us what happens instead -->
`Warning! Couldn't get function prop-scrape-serverless-dev in ap-southeast-2 - have you deployed yet?`
## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->
Verbose logging would help to understand what the actual problem is
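For what it's worth, the function name in the warning matches a `<project_name>-<stage>` naming convention (an assumption inferred from the warning text, with values from the settings below):

```python
import json

# Settings excerpt mirroring the zappa_settings.json in this report.
settings = json.loads('{"dev": {"project_name": "prop-scrape-serverless", "aws_region": "ap-southeast-2"}}')

def lambda_name(settings: dict, stage: str) -> str:
    # Assumed convention: "<project_name>-<stage>".
    return f"{settings[stage]['project_name']}-{stage}"

print(lambda_name(settings, "dev"))  # the name the warning complains about
```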
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
0. Set environment vars on docker host (passed in in next step)
1. docker run --rm -it -e AWS_DEFAULT_REGION="%AWS_DEFAULT_REGION%" -e AWS_ACCESS_KEY_ID="%AWS_ACCESS_KEY_ID%" -e AWS_SECRET_ACCESS_KEY="%AWS_SECRET_ACCESS_KEY%" -v "%CD%":/var/task lambci/lambda:build-python3.7 bash
2. python -m venv aws_venv
3. source aws_venv/bin/activate
4. pip install -r aws_requirements.txt
5. zappa update
Get error
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.48.2
* Operating System and Python version: Docker host: Windows 10
Docker container: lambci/lambda:build-python3.7
* The output of `pip freeze`:
argcomplete==1.9.3
boto3==1.9.233
botocore==1.12.233
certifi==2019.9.11
cfn-flip==1.2.1
chardet==3.0.4
Click==7.0
docutils==0.14
durationpy==0.5
future==0.16.0
hjson==3.0.1
idna==2.8
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
placebo==0.9.0
python-dateutil==2.7.5
python-slugify==1.2.4
PyYAML==5.1.2
requests==2.22.0
s3transfer==0.2.1
selenium==3.141.0
six==1.11.0
SQLAlchemy==1.3.8
toml==0.10.0
tqdm==4.19.1
troposphere==2.5.1
Unidecode==1.1.1
urllib3==1.24.1
Werkzeug==0.16.0
wsgi-request-logger==0.4.6
zappa==0.48.2
* Link to your project (optional):
* Your `zappa_settings.py`:
{
"dev": {
"aws_region": "ap-southeast-2",
"project_name": "prop-scrape-serverless",
"use_apigateway": false,
"runtime": "python3.7",
"s3_bucket": "gaia-zappa-deploy",
"keep_warm": false,
"timeout_seconds": 120,
"memory_size": 1024,
"environment_variables": {
"PATH": "/var/task/chrome"
},
"events": [
{
"function": "prop_search_scope.lambda_handler",
"event_source": {
"arn": "arn:aws:sqs:ap-southeast-2:XXXXXXX:YYYYYYYY",
"batch_size": 1,
"enabled": true
}
},
{
"function": "prop_search_page.lambda_handler",
"event_source": {
"arn": "arn:aws:sqs:ap-southeast-2:XXXXXXX:YYYYYYYY",
"batch_size": 1,
"enabled": true
}
},
{
"function": "prop_listing_page.lambda_handler",
"event_source": {
"arn": "arn:aws:sqs:ap-southeast-2:XXXXXXX:YYYYYYYY",
"batch_size": 1,
"enabled": true
}
}
]
}
}
| open | 2019-09-30T19:57:31Z | 2020-06-25T16:28:26Z | https://github.com/Miserlou/Zappa/issues/1934 | [] | SmileSydney | 5 |
ethanopp/fitly | plotly | 24 | Issues connecting to Strava | I set up a new app in Strava and filled in user_id and secret in `settings.ini` but I have a hard time pulling Strava data. Docker logs show that the connection is indeed established, I can see API requests on Strava app's page, but Fitly doesn't display any info from Strava. Some of the logs:
```
fitly | Unable to set attribute nickname on entity <Shoe id=g5815688 name='Whitin trail Barefoot'>
fitly | Unable to set attribute retired on entity <Shoe id=g5815688 name='Whitin trail Barefoot' resource_state=2>
fitly | Unable to set attribute converted_distance on entity <Shoe id=g5815688 name='Whitin trail Barefoot' resource_state=2>
fitly | Strava connected
fitly | 'NoneType' object has no attribute 'tokens'
fitly | Spotify not connected
fitly | Error in https://api.spotify.com/v1/me/top/tracks?time_range=medium_term&limit=10&offset=0:
fitly | 401: Invalid access token
```
The app looks gorgeous on the screenshots, but when deployed the main page simply says "Provide oura credentials" and nothing else. I do not own oura, I just want Strava data.
I wonder if Strava has changed their API access rules since the last commit made to this app. Looking forward to using it. | open | 2023-08-20T03:28:45Z | 2025-03-11T18:33:51Z | https://github.com/ethanopp/fitly/issues/24 | [] | 0x09AF | 1 |
pydata/bottleneck | numpy | 128 | Preparing to release bottleneck 1.1.0 | I am getting ready to release bottleneck 1.1.0. The only thing left to do is testing.
@toobaz, I made some changes to setup.py after your PR. Is bottleneck 1.1.0 ready for Debian?
@cgohlke, I don't dare release bottleneck 1.1.0 without testing on windows. Can you test it?
@shoyer, I bet most people who use bottleneck do so because of xarray and pandas. Can you test bottleneck against those?
Everyone else, please send it test reports. Does bn.test() pass on your computer? Does bn 1.1.0 run with your code that uses bn?
| closed | 2016-06-20T20:05:24Z | 2016-06-30T18:43:37Z | https://github.com/pydata/bottleneck/issues/128 | [] | kwgoodman | 9 |
rthalley/dnspython | asyncio | 496 | Invalid Canonical Representation Format for SRV Records | [RFC 4034 defines in Sec. 6.2 item 3](https://tools.ietf.org/html/rfc4034#section-6.2) that for record types NS, MD, MF, CNAME, SOA, MB, MG, MR, PTR, HINFO, MINFO, MX, HINFO, RP, AFSDB, RT, SIG, PX, NXT, NAPTR, KX, SRV, DNAME, A6, RRSIG, or NSEC, "all uppercase US-ASCII letters in the DNS names contained within the RDATA are replaced by the corresponding lowercase US-ASCII letters". However, dnspython does not do so for SRV records:
```python3
from dns import rdata, rdataclass, rdatatype
import dns
for type_, presentation_format in [
    ('NS', 'EXAMPLE.com.'),
    ('SRV', '100 1 5061 EXAMPLE.com.')
]:
    one = rdata.from_text(
        rdclass=rdataclass.IN,
        rdtype=rdatatype.from_text(type_),
        tok=presentation_format,
        relativize=False
    )
    two = one.to_digestable()
    three = rdata.from_wire(
        rdclass=rdataclass.IN,
        rdtype=rdatatype.from_text(type_),
        wire=two,
        current=0,
        rdlen=len(two)
    )
    four = three.to_text()
    print(f'{type_}: {presentation_format}\n'
          f' ==from_text======> {one}\n'
          f' ==to_digestable==> {two}\n'
          f' ==from_wire======> {three}\n'
          f' ==> {four}\n')
```
Code above will output for dnspython version 28bde999ee9b5f57c06b4761d130a193f0e51972:
```
NS: EXAMPLE.com.
==from_text======> EXAMPLE.com.
==to_digestable==> b'\x07example\x03com\x00'
==from_wire======> example.com.
==> example.com.
SRV: 100 1 5061 EXAMPLE.com.
==from_text======> 100 1 5061 EXAMPLE.com.
==to_digestable==> b'\x00d\x00\x01\x13\xc5\x07EXAMPLE\x03com\x00'
==from_wire======> 100 1 5061 EXAMPLE.com.
==> 100 1 5061 EXAMPLE.com.
```
I.e., for NS, lowercasing happens as expected, whereas for SRV, no lowercasing was applied. This may break signing or signature validation built on top of dnspython, if the signer/validator is given SRV records containing uppercase.
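The canonicalization rule at issue is small; a character-level sketch of RFC 4034 §6.2's lowercasing (text form only — the real change operates on wire-format names):

```python
def canonicalize_name(name: str) -> str:
    # Lowercase only uppercase US-ASCII letters; leave everything else as-is.
    return "".join(c.lower() if "A" <= c <= "Z" else c for c in name)

print(canonicalize_name("EXAMPLE.com."))            # example.com.
print(canonicalize_name("100 1 5061 EXAMPLE.com."))  # 100 1 5061 example.com.
```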
I'm working on a fix. | closed | 2020-06-01T13:41:42Z | 2020-06-01T15:26:27Z | https://github.com/rthalley/dnspython/issues/496 | [
"Bug",
"Fixed"
] | nils-wisiol | 8 |
wkentaro/labelme | deep-learning | 1,206 | labelme | ### Provide environment information
(labelme) C:\Users\s\Desktop\example>python labelme2voc.py images target --labels labels.txt
Creating dataset: target
class_names: ('_background_', 'SAT', 'VAT', 'Muscle', 'Bone', 'Gas', 'Intestine')
Saved class_names: target\class_names.txt
Generating dataset from: images\A004.json
Traceback (most recent call last):
File "labelme2voc.py", line 106, in <module>
main()
File "labelme2voc.py", line 95, in main
viz = imgviz.label2rgb(
TypeError: label2rgb() got an unexpected keyword argument 'img'
### What OS are you using?
python3.6 labelme3.16
### Describe the Bug
My file is labelme2voc.py. I can't find an `img` argument in `label2rgb()`, and everything I tried failed.
### Expected Behavior
_No response_
### To Reproduce
_No response_ | closed | 2022-10-29T15:42:04Z | 2023-07-10T07:29:20Z | https://github.com/wkentaro/labelme/issues/1206 | [
"status: wip-by-author"
] | she-666 | 7 |
tensorflow/tensor2tensor | machine-learning | 1,609 | Tensor2Tensor Model-Based Reinforcement Learning - Rainbow hyperparameters used | For the (great) paper Model Based Reinforcement Learning for Atari which I believe is represented by [this](https://github.com/tensorflow/tensor2tensor/tree/master/tensor2tensor/rl) part of the repo - **does anyone know where in the repo can I find the tuned hyperparameters used for the Rainbow model comparison?**
| open | 2019-06-21T08:37:13Z | 2019-06-21T08:37:28Z | https://github.com/tensorflow/tensor2tensor/issues/1609 | [] | p-christ | 0 |
slackapi/bolt-python | fastapi | 298 | Timepicker integration | Hey! Just a short question. I know that the timepicker is in beta, but is it possible to use it via slack-bolt? I gave it a try, but Slack denies my attempt to use it, even though I enabled beta features on my app's settings page. Is there a workaround for it? Many thanks
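For reference, a minimal Block Kit payload of the kind being attempted, per Slack's documentation at the time (block and action ids are placeholders):

```python
blocks = [
    {
        "type": "section",
        "block_id": "pick_time",           # placeholder
        "text": {"type": "mrkdwn", "text": "Pick a time:"},
        "accessory": {
            "type": "timepicker",          # the beta element in question
            "action_id": "time_selected",  # placeholder
            "initial_time": "13:30",
            "placeholder": {"type": "plain_text", "text": "Select time"},
        },
    }
]
print(blocks[0]["accessory"]["type"])
```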
slack-bolt 1.4.4
#### Python runtime version
3.7.4
#### OS info
ProductName: macOS
ProductVersion: 11.2.3
BuildVersion: 20D91
Darwin Kernel Version 20.3.0: Thu Jan 21 00:07:06 PST 2021; root:xnu-7195.81.3~1/RELEASE_X86_64
### Expected result:
(Tell what you expected to happen)
### Actual result:
I got unexpected type timepicker, skipping....
| closed | 2021-04-20T07:22:44Z | 2021-04-20T08:15:38Z | https://github.com/slackapi/bolt-python/issues/298 | [
"question"
] | tomeszmh | 5 |
pyg-team/pytorch_geometric | deep-learning | 9,311 | MoleculeNet's BBBP dataset incorrectly batched | ### 🐛 Describe the bug
While batching the BBBP dataset, one graph ends up associated with no nodes. This causes a discrepancy between the number of graph labels in the batch and the output shape of the downstream model, which affects loss calculations: a shape mismatch is observed.
Minimal code for reproducibility:
```python
import torch
from torch_geometric.loader import DataLoader
from torch_geometric.datasets import MoleculeNet
import random
# Ensure reproducibility
seed = 42
random.seed(seed)
torch.manual_seed(seed)
# Load the BBBP dataset
dataset = MoleculeNet(root='.', name='BBBP')
loader = DataLoader(dataset, batch_size=64, shuffle=True, drop_last=True)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Check unique graphs in batch match number of graphs in batch
for data in loader:
    print(data.batch.unique(), data.batch.unique().shape, data.num_graphs)
    assert data.batch.unique().shape[0] == data.num_graphs
```
Expected output:
tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53,
54, 55, 56, 57, 58, 59, 60, 61, 63]) torch.Size([63]) 64
AssertionError
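The failing invariant can be illustrated without PyTorch: if one graph contributes zero nodes, its index never appears in the per-node graph ids, so the number of unique ids undercounts `num_graphs` (the 63-vs-64 case above):

```python
batch_ids = [0, 0, 1, 3, 3]        # node -> graph id; graph 2 has no nodes
num_graphs = 4
unique_ids = sorted(set(batch_ids))
print(unique_ids)                  # [0, 1, 3] — id 2 is missing
assert len(unique_ids) != num_graphs
```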
### Versions
PyTorch version: 2.2.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: NVIDIA DGX Server (x86_64)
GCC version: (GCC) 5.4.0
Clang version: Could not collect
CMake version: version 3.28.3
Libc version: glibc-2.34
Python version: 3.10.4 | packaged by conda-forge | (main, Mar 24 2022, 17:39:04) [GCC 10.3.0] (64-bit runtime)
Python platform: Linux-5.14.0-162.23.1.el9_1.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 80GB HBM3
Nvidia driver version: 535.161.08
cuDNN version: Probably one of the following:
/usr/lib64/libcudnn.so.8.9.7
/usr/lib64/libcudnn_adv_infer.so.8.9.7
/usr/lib64/libcudnn_adv_train.so.8.9.7
/usr/lib64/libcudnn_cnn_infer.so.8.9.7
/usr/lib64/libcudnn_cnn_train.so.8.9.7
/usr/lib64/libcudnn_ops_infer.so.8.9.7
/usr/lib64/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 224
On-line CPU(s) list: 0-223
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8480CL
CPU family: 6
Model: 143
Thread(s) per core: 2
Core(s) per socket: 56
Socket(s): 2
Stepping: 7
CPU max MHz: 3800.0000
CPU min MHz: 800.0000
BogoMIPS: 4000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cat_l2 cdp_l3 invpcid_single intel_ppin cdp_l2 ssbd mba ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect avx_vnni avx512_bf16 wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid bus_lock_detect cldemote movdiri movdir64b enqcmd fsrm md_clear serialize tsxldtrk pconfig arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
L1d cache: 5.3 MiB (112 instances)
L1i cache: 3.5 MiB (112 instances)
L2 cache: 224 MiB (112 instances)
L3 cache: 210 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-55,112-167
NUMA node1 CPU(s): 56-111,168-223
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.21.2
[pip3] torch==2.2.0
[pip3] torch_cluster==1.6.3+pt22cu121
[pip3] torch-geometric==2.3.1
[pip3] torch_scatter==2.1.2+pt22cu121
[pip3] torch_sparse==0.6.18+pt22cu121
[pip3] torch-spline-conv==1.2.2
[pip3] torchaudio==2.2.0
[pip3] torchdata==0.7.1
[pip3] torchmetrics==0.11.4
[pip3] torchvision==0.17.0
[pip3] torchviz==0.0.2
[pip3] triton==2.2.0
[pip3] tsne-torch==1.0.1
[conda] numpy 1.21.2 pypi_0 pypi
[conda] torch 2.2.0 pypi_0 pypi
[conda] torch-cluster 1.6.3+pt22cu121 pypi_0 pypi
[conda] torch-geometric 2.3.1 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt22cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt22cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2 pypi_0 pypi
[conda] torchaudio 2.2.0 pypi_0 pypi
[conda] torchdata 0.7.1 pypi_0 pypi
[conda] torchmetrics 0.11.4 pypi_0 pypi
[conda] torchvision 0.17.0 pypi_0 pypi
[conda] torchviz 0.0.2 pypi_0 pypi
[conda] triton 2.2.0 pypi_0 pypi
[conda] tsne-torch 1.0.1 pypi_0 pypi | closed | 2024-05-11T20:01:44Z | 2024-05-13T13:31:43Z | https://github.com/pyg-team/pytorch_geometric/issues/9311 | [
"bug"
] | apurvakokate | 1 |
encode/httpx | asyncio | 2,492 | Refactor test cases that import from private namespace. | We have some test cases that import from private namespace inside `httpx`.
This is clearly a code-smell, because our test cases *ought* to be tests against our public API, rather than testing implementation details. Perhaps there's some cases where it's a necessary hack, but... perhaps not?
It'd be worthwhile reviewing if we're able to remove all the cases where we're doing this. I'd suggest that any pull requests resolving this are handled on a one-module-at-a-time basis.
| open | 2022-12-06T11:18:31Z | 2024-10-07T00:42:15Z | https://github.com/encode/httpx/issues/2492 | [
"refactor"
] | tomchristie | 5 |
streamlit/streamlit | data-science | 10,307 | Support copy/pasting files into `st.chat_input` | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
We'll soon launch file upload support for `st.chat_input`. This lets you manually upload a file from your machine to the chat. It would be great to also support copy/pasting a file into the chat input.
### Why?
More convenient in some situations.
### How?
When you copied a file, focus on the chat input, and paste, upload the file to the chat just as if you would have selected it from your disk.
### Additional Context
_No response_ | open | 2025-01-31T16:52:43Z | 2025-02-21T00:59:14Z | https://github.com/streamlit/streamlit/issues/10307 | [
"type:enhancement",
"feature:st.chat_input"
] | sfc-gh-jrieke | 6 |
hankcs/HanLP | nlp | 714 | Time recognition is inaccurate; the rules in CheckDateElements are flawed | <!--
The notes and version number are required; otherwise there will be no reply. If you would like a reply as soon as possible, please fill in the template carefully. Thank you for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the documents below and found no answer in any of them:
  - [Home page documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have already searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and found no answer either.
* I understand that the open-source community is a voluntary community brought together by shared interest, and it bears no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I put an x in these brackets to confirm all of the above.
## Version
<!-- For release versions, give the jar file name without its extension; for the GitHub repository version, state whether it is the master or the portable branch -->
The current latest version is: 1.5.2
The version I am using is: 1.5.0
<!-- The above is required; feel free below -->
## My question
<!-- Please describe the problem in detail; the more detail, the more likely it is to be solved -->
When segmenting with the NShortSegment tokenizer, time recognition is inaccurate, with the following problems:
1. Chinese and English commas are recognized as type /m
2. If a comma is immediately followed by a digit, the comma and the digit are merged and recognized together as type /m
3. Times written with Chinese numerals, such as 九点 ("nine o'clock"), are recognized as type /t, but 9点 is mis-recognized as type /m
4. Times written in the 18:00 format are mis-recognized as type /m
The relevant rules in the CheckDateElements method of WordBasedGenerativeModelSegment should be revised.
## Reproducing the problem
<!-- What did you do to trigger the problem? For example, did you modify the code? Did you modify a dictionary or a model? -->
### Steps
1. First……
2. Then……
3. Next……
### Triggering code
```
public void testIssue1234() throws Exception
{
    Segment segment = new NShortSegment();
    List<Term> list1 = segment.seg("想出去玩,9点要上班,18:00下班,9成的人会加班。");
    System.out.println(list1.toString());
    List<Term> list2 = segment.seg("想出去玩但9点要上班,然后18:00下班,9成的人会加班。");
    System.out.println(list2.toString());
}
```
### Expected output
<!-- What correct result do you expect? -->
```
[想/v, 出去/vf, 玩/v, ,/w, 9点/t, 要/v, 上班/vi, ,/w, 18:00/t, 下班/vi, ,/w, 9成/m, 的/ude1, 人/n, 会/v, 加班/vi, 。/w]
[想/v, 出去/vf, 玩/v, 但/c, 9点/t, 要/v, 上班/vi, ,/w, 然后/c, 18:00/t, 下班/vi, ,/w, 9成/m, 的/ude1, 人/n, 会/v, 加班/vi, 。/w]
```
### Actual output
<!-- What did HanLP actually output? What was the effect? Where is it wrong? -->
```
[想/v, 出去/vf, 玩/v, ,9点/m, 要/v, 上班/vi, ,18:00/m, 下班/vi, ,9成/m, 的/ude1, 人/n, 会/v, 加班/vi, 。/w]
[想/v, 出去/vf, 玩/v, 但/c, 9点/m, 要/v, 上班/vi, ,/m, 然后/c, 18:00/m, 下班/vi, ,9成/m, 的/ude1, 人/n, 会/v, 加班/vi, 。/w]
```
## Other information
<!-- Any information that might be useful, including screenshots, logs, configuration files, related issues, etc. -->
| closed | 2017-12-12T11:16:02Z | 2020-01-01T10:51:26Z | https://github.com/hankcs/HanLP/issues/714 | [
"ignored"
] | laobaicai | 1 |
django-import-export/django-import-export | django | 1,547 | XSS vulnerability in HTML export | **Describe the bug**
Triggering a HTML export with a model that has javascript code in one of its fields results in the unsanitized JS code to be present in the HTML export file, resulting in a potential XSS vector attack.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Versions (please complete the following information):**
- Django Import Export: 2.9.0 (but happens in the latest version too)
- Python 3.8
- Django 2.2
**Expected behavior**
JS code should be escaped in HTML report.
**Additional context**
A PR with a fix has been submitted: https://github.com/django-import-export/django-import-export/pull/1546
| closed | 2023-02-17T09:54:28Z | 2023-02-21T19:02:44Z | https://github.com/django-import-export/django-import-export/issues/1547 | [
"bug"
] | samupl | 0 |
scikit-optimize/scikit-optimize | scikit-learn | 388 | skopt.plots in 1-dimension | I'm working on simple examples with optimization with respect to a single variable.
Both
```
from skopt.plots import plot_evaluations
from skopt.plots import plot_objective
```
seem to fail if I'm only optimizing wrt a single variable
```
/Users/cranmer/anaconda/lib/python3.5/site-packages/skopt/plots.py in plot_objective(result, levels, n_points, n_samples, zscale)
305 for j in range(space.n_dims):
306 if i == j:
--> 307 xi, yi = partial_dependence(space, result.models[-1], i,
308 j=None,
309 sample_points=rvs_transformed,
IndexError: list index out of range
``` | closed | 2017-06-06T01:47:45Z | 2020-02-28T10:35:57Z | https://github.com/scikit-optimize/scikit-optimize/issues/388 | [
"Bug",
"API",
"Easy"
] | cranmer | 9 |
ray-project/ray | python | 51,348 | Release test map_groups.few_groups (sort_shuffle_pull_based) failed | Release test **map_groups.few_groups (sort_shuffle_pull_based)** failed. See https://buildkite.com/ray-project/release/builds/35758#0195916e-c0ca-44a7-9bdb-92c6d6035682 for more details.
Managed by OSS Test Policy | closed | 2025-03-13T22:07:26Z | 2025-03-14T22:58:40Z | https://github.com/ray-project/ray/issues/51348 | [
"bug",
"P0",
"triage",
"data",
"release-test",
"jailed-test",
"ray-test-bot",
"weekly-release-blocker",
"stability"
] | can-anyscale | 3 |
MentatInnovations/datastream.io | jupyter | 36 | Google Colab | Hey man, great package, if there is a way to get this to run on Google Colab, it would be very much appreciated. I have been struggling to convert it so that I can add it to the awesome google colab repo.
Thanks.
Derek | open | 2019-12-05T20:17:58Z | 2019-12-05T20:17:58Z | https://github.com/MentatInnovations/datastream.io/issues/36 | [] | firmai | 0 |
scrapy/scrapy | python | 5,954 | test_utf16 fails on big-endian architectures | Found by running tests on Debian s390x:
```
/tmp/autopkgtest-lxc.k2eym0yr/downtmp/autopkgtest_tmp/tests/test_http_response.py:174: in _assert_response_values
self.assertEqual(response.body, body_bytes)
E AssertionError: b'\xff\xfeh\x00i\x00' != b'\xfe\xff\x00h\x00i'
``` | open | 2023-06-19T10:56:53Z | 2025-01-10T12:50:54Z | https://github.com/scrapy/scrapy/issues/5954 | [
"bug",
"help wanted",
"CI"
] | wRAR | 3 |
dsdanielpark/Bard-API | nlp | 286 | How can I automatically get the cookie values on servers without a GUI? | How can I automatically get the cookie values on servers without a GUI?
I need to obtain the tokens automatically on a no-GUI server such as pythonanywhere.com | closed | 2024-02-22T14:54:23Z | 2024-02-23T13:05:50Z | https://github.com/dsdanielpark/Bard-API/issues/286 | [] | Usercodersystem | 2
aiortc/aiortc | asyncio | 690 | Is there any example of sending the webcam video through a STUN server (p2p)? | The webcam example appears to use the browser's JS WebRTC API to connect to the STUN ICE server.
I have not found any example of configuring aiortc itself with a STUN server.
| closed | 2022-04-13T09:09:25Z | 2022-04-14T12:55:45Z | https://github.com/aiortc/aiortc/issues/690 | [] | diybl | 2 |
marcomusy/vedo | numpy | 439 | example can not run | When I try to run the example, it does not work.
Can anyone tell me the reason?
I have already installed it.
<img width="879" alt="22" src="https://user-images.githubusercontent.com/60449974/128636366-bbdbf1f3-e36f-43d0-ae55-d2abb38d7c4d.png">
| closed | 2021-08-08T14:56:37Z | 2021-08-10T11:12:15Z | https://github.com/marcomusy/vedo/issues/439 | [] | subenyu | 7 |
gunthercox/ChatterBot | machine-learning | 1,536 | Corpus categorization | Hi all,
my question is related to the logic used by ChatterBot when processing multiple corpora. More specifically, I made a chatbot for use in a company-internal e-learning system: when a user asks a question about the behaviour of a specific piece of software, the bot should provide the corresponding response.
This is the use case: my chatbot is trained with multiple corpora, each one about a specific topic. I would like it to provide information that is more or less technical according to the company user's role.
I wrote corpora containing the information to be used as answers for the different roles: for example, if I'm a simple user, when asking "What is a car?" I would like to receive a simple response like "It is a system composed of wheels and an engine"; if I'm a more technical user, on the other hand, the bot should provide a more technical and detailed description of a car, such as "A car (or automobile) is a wheeled motor vehicle used for transportation, composed of an engine, which is...".
So how can I tell the bot to search for the response according to the role of the user, given that there are different answers (at different levels of detail, in different corpora) for the same question?
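One workable pattern is to keep a separately trained bot per role and route on the user's role. A hedged sketch with placeholder callables (with ChatterBot, each entry would be a per-role `ChatBot` trained via `ChatterBotCorpusTrainer` on that role's corpus, and the callable would be its `get_response`; role names and answer strings here are placeholders):

```python
def answer(bots_by_role, role, question, fallback="user"):
    """Route the question to the responder trained for this user's role."""
    bot = bots_by_role.get(role, bots_by_role[fallback])
    return bot(question)

# Placeholders standing in for per-role ChatBot.get_response callables:
bots = {
    "user": lambda q: "It is a system composed of wheels and an engine.",
    "engineer": lambda q: "A car is a wheeled motor vehicle used for transportation, composed of an engine, ...",
}
print(answer(bots, "engineer", "What is a car?"))
print(answer(bots, "intern", "What is a car?"))  # unknown role falls back to "user"
```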
Thank you so much in advance
Alan | closed | 2018-12-19T12:29:07Z | 2019-10-23T08:09:59Z | https://github.com/gunthercox/ChatterBot/issues/1536 | [] | ghost | 4 |
huggingface/diffusers | deep-learning | 11,023 | Getting blur images on playground v2.5 model when used with 'lpw_stable_diffusion_xl' custom pipeline | ### Describe the bug
I am getting blurred images from the Playground v2.5 model when it is used with 'lpw_stable_diffusion_xl'. I see that Playground v2.5 uses the same architecture as SDXL.
Please help me fix this @sayakpaul @hlky
Thank you!
### Reproduction
```python
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "playgroundai/playground-v2.5-1024px-aesthetic",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
    custom_pipeline="lpw_stable_diffusion_xl",
)
pipe.to("cuda")

image = pipe(
    prompt="Astronaut in a jungle, cold color palette, muted colors, detailed, 8k",
    num_inference_steps=50,
).images[0]
image.save("image.png")
```

### Logs
```shell
```
### System Info
- 🤗 Diffusers version: 0.32.2
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Running on Google Colab?: Yes
- Python version: 3.11.11
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): 0.10.4 (gpu)
- Jax version: 0.4.33
- JaxLib version: 0.4.33
- Huggingface_hub version: 0.28.1
- Transformers version: 4.48.3
- Accelerate version: 1.3.0
- PEFT version: 0.14.0
- Bitsandbytes version: not installed
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: Tesla T4, 15360 MiB
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
_No response_ | open | 2025-03-10T20:03:35Z | 2025-03-11T20:50:24Z | https://github.com/huggingface/diffusers/issues/11023 | [
"bug"
] | kotlasaicharan | 5 |
jupyter-incubator/sparkmagic | jupyter | 103 | Error when visualizing empty dataframe | Steps to reproduce:
1. `%sql SHOW TABLES` or `%hive SHOW TABLES` when there are no tables (i.e. the result dataframe is empty).
2. The data viz widget pops up. Switch from "table" to any of the other chart styles.
3. You get an exception. This exception doesn't go away even if you switch back to the table graph type.
```
ValueError: cannot label index with a null key
```
| closed | 2016-01-07T19:49:11Z | 2016-01-08T19:30:43Z | https://github.com/jupyter-incubator/sparkmagic/issues/103 | [
"kind:bug"
] | msftristew | 1 |
onnx/onnx | machine-learning | 5,901 | numpy_helper.to_array modifies TensorProto on big endian systems | # Bug Report
### Is the issue related to model conversion?
Yes, but debugged to an issue within ONNX
### Describe the bug
Using tf2onnx to export a TF model to ONNX, some items were byteswapped incorrectly. Tracing of the code showed that the TensorProto holding the value had the correct contents, but at some point reexamination of the same TensorProto showed byteswapped data. Further investigation showed that `numpy_helper.to_array()` was byteswapping the data within the `TensorProto` as opposed to returning a byteswapped copy of the data, yielding the inconsistent format. A local fix produced correct data.
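A minimal NumPy illustration of the copy-vs-mutate distinction at the heart of this (generic NumPy, not the ONNX helper itself):

```python
import numpy as np

a = np.array([1], dtype="<i4")
before = a.tobytes()

swapped_copy = a.byteswap()          # returns a swapped COPY; `a` is untouched
assert a.tobytes() == before

a.byteswap(inplace=True)             # mutates the caller's buffer in place,
assert a.tobytes() == before[::-1]   # the hazard behind the double-read failure
```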
### System information
Linux RHEL 9.3 on s390x
ONNX 1.15.0
Python 3.10
gcc (compiling ONNX via pip install) 11.4.0
CMake 3.22.1
protobuf 3.20.3
### Reproduction instructions
Simplified testcase:
```python
from onnx import numpy_helper, TensorProto
import numpy as np
t = np.array(1, dtype=np.int32)
onnx_tensor = numpy_helper.from_array(t, "Tensor")
assert t == numpy_helper.to_array(onnx_tensor)
assert t == numpy_helper.to_array(onnx_tensor) # Currently blows up on big endian
```
### Expected behavior
In the above testcase, the expectation is that `onnx_tensor` is treated as immutable during `to_array()`
### Notes
| closed | 2024-02-02T16:49:39Z | 2024-02-02T23:49:18Z | https://github.com/onnx/onnx/issues/5901 | [
"bug"
] | tehbone | 2 |
sigmavirus24/github3.py | rest-api | 707 | Consider replacing requests with urllib3 | See also https://github.com/kennethreitz/requests/issues/4069
| open | 2017-05-27T12:53:56Z | 2018-12-17T13:27:31Z | https://github.com/sigmavirus24/github3.py/issues/707 | [] | sigmavirus24 | 3
huggingface/datasets | tensorflow | 6,441 | Trouble Loading a Gated Dataset For User with Granted Permission | ### Describe the bug
I have granted several users permission to access a gated Hugging Face dataset. The users accepted the invite, but when they try to load the dataset using their access token they get
`FileNotFoundError: Couldn't find a dataset script at .....`. Also, when they click the URL link for the dataset they get a 404 error.
### Steps to reproduce the bug
1. Grant access to gated dataset for specific users
2. Users accept invitation
3. Users login to hugging face hub using cli login
4. Users run load_dataset
### Expected behavior
Dataset is loaded normally for users who were granted access to the gated dataset.
### Environment info
datasets==2.15.0
| closed | 2023-11-21T19:24:36Z | 2023-12-13T08:27:16Z | https://github.com/huggingface/datasets/issues/6441 | [] | e-trop | 3 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,328 | [Bug]: can't find the preprocess Tab | ### Checklist
- [ ] The issue exists after disabling all extensions
- [x] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [x] The issue exists in the current version of the webui
- [x] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I wanted to train an embedding after pulling the newest version, but I can't find the Preprocess tab.
I made a new install and the issue is still there.

### Steps to reproduce the problem
I don't know.
### What should have happened?
I should have been able to find it.
### What browsers do you use to access the UI ?
Mozilla Firefox
### Sysinfo
[sysinfo-2024-03-19-21-50.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/14657230/sysinfo-2024-03-19-21-50.json)
### Console logs
```Shell
I don't have one
```
### Additional information
_No response_ | open | 2024-03-19T21:50:40Z | 2024-03-21T04:23:24Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15328 | [
"bug-report"
] | Moineau54 | 3 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 696 | Continuing training with niter_decay | Hello!
Since I have limited uptime for the GPU, I'm training my model in sequences of 15 epochs, each time saving the weights, loading them, and continuing the training. I have trained 100 epochs with a fixed learning rate (niter), but now I would like to split the next 100 epochs of training with a decaying learning rate (niter_decay) in the same way.
If I'm not mistaken, there isn't a way to specify that the learning rate should decay over 100 epochs while training in chunks of 15. From what I understand, if I run only the first 15 epochs (out of 100), it will drive the learning rate to 0 by the end of that 15th epoch.
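If it helps, my reading of the repo's 'linear' scheduler is that the decay multiplier depends on `epoch + epoch_count`, so resuming each chunk with `--continue_train --epoch_count <next epoch>` should keep a 100-epoch decay on schedule. A sketch (option names and the rule below mirror `lambda_rule` in `models/networks.py` as I understand it; treat as an assumption to verify):

```python
# Linear-decay rule used with --niter / --niter_decay; epoch_count is the
# epoch you resume from when continuing training in chunks.
def lr_multiplier(epoch, epoch_count, niter, niter_decay):
    return 1.0 - max(0, epoch + epoch_count - niter) / float(niter_decay + 1)

# Chunked 15-epoch runs stay on one 100-epoch decay schedule as long as
# epoch_count is advanced on each resume:
print(lr_multiplier(1, 101, 100, 100))   # first decay epoch
print(lr_multiplier(15, 186, 100, 100))  # final epoch of the last chunk (0.0)
```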
Any help is appreciated! | closed | 2019-07-09T10:23:00Z | 2019-07-17T11:35:43Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/696 | [] | ibro45 | 2 |
coqui-ai/TTS | pytorch | 3,398 | correctly reading | I tried it with the following expression
$\int_{a}^{b} f(x)\,dx = F(b) - F(a)$
The result was terrible and included some sort of jungle scream; it should have read:
"the riemann integral from a to b of f of x with respect to x is equal to the difference between the antiderivative evaluated at the upper limit and the antiderivative at the lower limit" | closed | 2023-12-11T00:04:17Z | 2023-12-12T11:36:23Z | https://github.com/coqui-ai/TTS/issues/3398 | [
"feature request"
] | scronge | 1 |
pytest-dev/pytest-cov | pytest | 557 | pytest-cov is incompatible with pytest-xdist 3.x due to injection of rsync option | # Summary
See https://github.com/pytest-dev/pytest-xdist/issues/825#issuecomment-1292306864, which reports a deprecation warning related to the rsyncdir(s) options. This breaks the build for anyone who runs with warnings-as-errors.
## Code
https://github.com/pytest-dev/pytest-cov/blob/f7fced579e36b72b57e14768026467e4c4511a40/src/pytest_cov/engine.py#L263
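Roughly, the fix amounts to gating the injected option on the installed xdist major version. An illustrative sketch only (function and option handling below are not pytest-cov's actual code):

```python
# Decide whether the rsync option may still be injected, based on the
# installed pytest-xdist version string.
def should_inject_rsyncdir(xdist_version: str) -> bool:
    """rsync options were deprecated in xdist 2.x and removed in 3.x."""
    return int(xdist_version.split(".", 1)[0]) < 3

print(should_inject_rsyncdir("2.5.0"), should_inject_rsyncdir("3.0.2"))  # True False
```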
## Solution
We need to stop adding this option on newer versions, as it was already deprecated and will be removed in the next version of xdist. | closed | 2022-10-26T17:40:46Z | 2024-03-13T06:50:24Z | https://github.com/pytest-dev/pytest-cov/issues/557 | [] | ssbarnea | 11
JaidedAI/EasyOCR | deep-learning | 692 | ocr not reading word by word | Hi, how do I configure EasyOCR to print the text word by word with the coordinates? | open | 2022-03-23T11:40:28Z | 2022-04-04T10:30:56Z | https://github.com/JaidedAI/EasyOCR/issues/692 | [] | sreebalaji2418 | 1
dropbox/PyHive | sqlalchemy | 272 | thrift.transport.TTransport.TTransportException: failed to resolve sockaddr for host | from pyhive import hive
conn = hive.Connection(host="host", username="hive", auth="NOSASL", port=10000)
cur = conn.cursor()
I wrote this code. **I got this error: thrift.transport.TTransport.TTransportException: failed to resolve sockaddr for host**. I am using the tweepy and PyHive libraries.
pure-sasl==0.6.1
PyHive==0.6.1
pyhs2==0.6.0
pyOpenSSL==17.2.0
PySAL==1.14.4.post1
pysasl==0.4.1
sasl==0.2.1
thrift==0.11.0
tweepy==3.5.0
twitter==1.18.0
twython==3.7.0 | open | 2019-03-15T06:19:06Z | 2019-03-15T06:19:06Z | https://github.com/dropbox/PyHive/issues/272 | [] | OguzKircicek | 0 |
ultralytics/yolov5 | pytorch | 12,585 | train the picture without the target | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
When training YOLOv5, how do I train on pictures that contain no target? Is an empty .txt label file acceptable?
### Additional
_No response_ | closed | 2024-01-05T12:01:47Z | 2024-02-17T00:20:07Z | https://github.com/ultralytics/yolov5/issues/12585 | [
"question",
"Stale"
] | ZhaoMonica | 4 |
tflearn/tflearn | tensorflow | 688 | Does build_hdf5_image_dataset support 'tif' images? | I got this error while creating hdf5 dataset from folder
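A hedged guess based on the shapes in the traceback below: the trailing 4 in `(32, 32, 4)` suggests the `.tif` files carry an alpha channel (RGBA), while the dataset expects 3-channel `(32, 32, 3)` images; converting to RGB before building the dataset sidesteps the mismatch. A sketch separate from tflearn's own loader:

```python
from PIL import Image

# An RGBA image has 4 bands; .convert("RGB") drops the alpha channel,
# leaving the 3 channels the HDF5 dataset slots expect.
rgba = Image.new("RGBA", (32, 32), (255, 0, 0, 128))
rgb = rgba.convert("RGB")
print(len(rgba.getbands()), len(rgb.getbands()))  # 4 3
```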
File "model.py", line 442, in <module>
m.fileupload_train()
File "model.py", line 84, in fileupload_train
normalize=True, files_extension=['.tif', '.jpg'])
File "/usr/local/lib/python3.5/dist-packages/tflearn/data_utils.py", line 414, in build_hdf5_image_dataset
dataset['X'][i] = img
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-huypgcah-build/h5py/_objects.c:2840)
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-huypgcah-build/h5py/_objects.c:2798)
File "/usr/local/lib/python3.5/dist-packages/h5py/_hl/dataset.py", line 629, in __setitem__
for fspace in selection.broadcast(mshape):
File "/usr/local/lib/python3.5/dist-packages/h5py/_hl/selections.py", line 299, in broadcast
raise TypeError("Can't broadcast %s -> %s" % (target_shape, count))
TypeError: Can't broadcast (32, 32, 4) -> (1, 32, 32, 3) | open | 2017-03-30T03:22:04Z | 2017-03-30T03:22:04Z | https://github.com/tflearn/tflearn/issues/688 | [] | iandecks45 | 0 |
gunthercox/ChatterBot | machine-learning | 1,789 | Blacklist words | First of all very good project, well done to the creator.
How do I create a blacklist of (banned) words,
and respond with a specific phrase each time one appears (e.g. "That isn't acceptable", "stop", or "conversation over")?
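As far as I know ChatterBot doesn't ship a banned-word filter, but a thin wrapper around the bot's response function covers it. A hedged sketch (the word list and canned reply are placeholders):

```python
BANNED = {"badword", "slur"}  # placeholder word list

def moderated_response(get_response, text):
    """Return a fixed reply when the input contains a banned word,
    otherwise defer to the bot's normal response function."""
    if any(word in BANNED for word in text.lower().split()):
        return "That isn't acceptable, conversation over."
    return get_response(text)

# With a real bot this would be moderated_response(chatbot.get_response, text);
# a stand-in callable shows the behaviour:
print(moderated_response(lambda t: "Hello!", "hi there"))
print(moderated_response(lambda t: "Hello!", "you badword"))
```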
thanks | closed | 2019-08-02T13:12:12Z | 2025-02-19T12:49:06Z | https://github.com/gunthercox/ChatterBot/issues/1789 | [] | ayo-dami | 3 |
BeanieODM/beanie | asyncio | 501 | [BUG] Migration is stuck even though all the documents have been transformed | **Describe the bug**
Currently, one of our collections has around 60k documents. Running a migration to add one field to all the documents through the iterative migration results in the migration process being stuck. It even ran for 30 minutes but was still stuck. But what was interesting was that all the documents in DB had the updates.
Even tried setting the `batch_size` to 100. But even in that case, this will take a lot of time.
Tested
**To Reproduce**
```python
from decimal import Decimal

class Forward:
@iterative_migration(document_models=[xyz])
async def add_to_xyz(
self, input_document: Oldxyz, output_document: Newxyz
):
output_document.field = Decimal(0.0)
```
| closed | 2023-03-13T22:02:41Z | 2023-04-27T02:21:18Z | https://github.com/BeanieODM/beanie/issues/501 | [
"Stale"
] | prabhumarappan | 5 |
hyperspy/hyperspy | data-visualization | 3,491 | Personalized ROI | From a previous analysis, I extracted the coordinates of the pixels belonging to a cluster. I now want to analyze the signal of those pixels in an EELS hypermap. I use the `inav` function to extract the pixels from the map and add them to a list, which looks like this:
[dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>,
dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>,
dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>,
dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>,
dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>,
dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>,
dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>,
dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>,
dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>,
dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>,
dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>,
dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>,
dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>,
dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>,
dask.array<getitem, shape=(1782,), dtype=float32, chunksize=(1782,), chunktype=numpy.ndarray>]
Each array is a pixel. What I want to do now is to reconstruct the cluster in the EELS hypermap, highlight it, and run further analyses on it.
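Assuming the (row, col) coordinates were kept alongside each extracted spectrum, a plain-NumPy sketch of both goals, a navigation mask to highlight the cluster and a mean spectrum for further analysis, could look like this (all names and shapes below are placeholders, not HyperSpy API):

```python
import numpy as np

# Placeholder inputs: cluster pixel positions and the matching 1-D spectra
# (call .compute() first if the entries are dask arrays).
coords = [(0, 1), (2, 3), (4, 4)]
spectra = [np.full(1782, float(i)) for i in range(3)]

nav_shape = (5, 5)                    # placeholder navigation-space shape
mask = np.zeros(nav_shape, dtype=bool)
for r, c in coords:
    mask[r, c] = True                 # True where the cluster sits in the map

mean_spectrum = np.stack(spectra).mean(axis=0)  # cluster-averaged EELS signal
print(mask.sum(), mean_spectrum.shape)
```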
#### Describe the context
The clusterization is based on crystallographic analysis
| closed | 2025-02-10T14:03:41Z | 2025-02-10T17:13:17Z | https://github.com/hyperspy/hyperspy/issues/3491 | [] | andreacicconardi94 | 1 |
mirumee/ariadne | api | 211 | Recommended way to wrap mutation in a transaction | What is the recommended way of wrapping a mutation in a transaction? I can think of 2 alternatives:
1. Create a middleware that detects if it is at the top of the mutation; if so, manage the transaction, otherwise pass through doing nothing.
2. Create a custom `graphql.ExecutionContext` that overrides the `execute_operation` method. I can't figure out how to pass a custom `execution_context_class` into asgi.GraphQL, however. | closed | 2019-07-05T09:04:46Z | 2019-07-05T10:04:25Z | https://github.com/mirumee/ariadne/issues/211 | [] | alexchamberlain | 5 |
autogluon/autogluon | computer-vision | 3,904 | Issue with RandomForestMSE Running Over Time Limit | I've been using AutoGluon without any problems for a while. Recently, after I added some more columns to my dataset, I ran into a problem with the RandomForestMSE model. I had set the model to run for 14 hours using 20 CPUs and 1 GPU at full capacity, but it didn't finish even after waiting another 5 hours.
I tried running it again with a 1-hour time limit and set it to medium_quality, hoping it would finish faster. But, the RandomForestMSE model still took longer than expected and somehow ended up showing a completion time of "-4k seconds," which doesn't make sense.
Can anyone help me figure out why this is happening and how to fix it? Is there a problem with adding more columns, or is there something else I should adjust for RandomForestMSE to handle bigger datasets without these delays?
```
from autogluon.tabular import TabularDataset, TabularPredictor
train_data = TabularDataset(train_df)
kwargs = {
'ignored_columns': ['ig_col1','ig_col2','ig_col3'], # Replace with columns you want to ignore
}
predictor = TabularPredictor(label=label,log_to_file=True,eval_metric='rmse',learner_kwargs=kwargs).fit(
train_data=train_df,
num_gpus=1, # Grant 1 gpu for the entire Tabular Predictor
presets='medium_quality',
time_limit=3600, #*17
ag_args_fit={'num_gpus':1}
)
```
```
Presets specified: ['medium_quality']
Beginning AutoGluon training ... Time limit = 3600s
AutoGluon will save models to "AutogluonModels/ag-20240206_010021"
=================== System Info ===================
AutoGluon Version: 1.0.0
Python Version: 3.10.13
Operating System: Linux
Platform Machine: x86_64
Platform Version: #102-Ubuntu SMP Wed Jan 10 09:33:48 UTC 2024
CPU Count: 20
Memory Avail: 75.65 GB / 125.64 GB (60.2%)
Disk Space Avail: 314.21 GB / 914.78 GB (34.3%)
===================================================
Train Data Rows: 415888
Train Data Columns: 1150
Label Column: future_rev
AutoGluon infers your prediction problem is: 'regression' (because dtype of label-column == float and many unique label-values observed).
Label info (max, min, mean, stddev): (2845273751552.0, 0.0, 4371510784.0, 28876195840.0)
If 'regression' is not the correct problem_type, please manually specify the problem_type parameter during predictor init (You may specify problem_type as one of: ['binary', 'multiclass', 'regression'])
Problem Type: regression
Preprocessing data ...
Using Feature Generators to preprocess the data ...
Dropping user-specified ignored columns: ['item', 'date', 'calendardate']
Fitting AutoMLPipelineFeatureGenerator...
Available Memory: 75626.51 MB
Train Data (Original) Memory Usage: 1816.54 MB (2.4% of available memory)
Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features.
Stage 1 Generators:
Fitting AsTypeFeatureGenerator...
Stage 2 Generators:
Fitting FillNaFeatureGenerator...
Stage 3 Generators:
Fitting IdentityFeatureGenerator...
Fitting CategoryFeatureGenerator...
Fitting CategoryMemoryMinimizeFeatureGenerator...
Stage 4 Generators:
Fitting DropUniqueFeatureGenerator...
Stage 5 Generators:
Fitting DropDuplicatesFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('category', []) : 3 | [...]
('float', []) : 1144 | [...]
Types of features in processed data (raw dtype, special dtypes):
('category', []) : 3 | [...]
('float', []) : 1144 | [...]
9.3s = Fit runtime
1147 features in original data used to generate 1147 features in processed data.
Train Data (Processed) Memory Usage: 1816.53 MB (2.4% of available memory)
Data preprocessing and feature engineering runtime = 10.44s ...
AutoGluon will gauge predictive performance using evaluation metric: 'root_mean_squared_error'
This metric's sign has been flipped to adhere to being higher_is_better. The metric score can be multiplied by -1 to get the metric value.
To change this, specify the eval_metric parameter of Predictor()
Automatically generating train/validation split with holdout_frac=0.01, Train Rows: 411729, Val Rows: 4159
User-specified model hyperparameters to be fit:
{
'NN_TORCH': {},
'GBM': [{'extra_trees': True, 'ag_args': {'name_suffix': 'XT'}}, {}, 'GBMLarge'],
'CAT': {},
'XGB': {},
'FASTAI': {},
'RF': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'XT': [{'criterion': 'gini', 'ag_args': {'name_suffix': 'Gini', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'entropy', 'ag_args': {'name_suffix': 'Entr', 'problem_types': ['binary', 'multiclass']}}, {'criterion': 'squared_error', 'ag_args': {'name_suffix': 'MSE', 'problem_types': ['regression', 'quantile']}}],
'KNN': [{'weights': 'uniform', 'ag_args': {'name_suffix': 'Unif'}}, {'weights': 'distance', 'ag_args': {'name_suffix': 'Dist'}}],
}
Fitting 11 L1 models ...
Fitting model: KNeighborsUnif ... Training model for up to 3589.56s of the 3589.56s of remaining time.
-6216838656.0 = Validation score (-root_mean_squared_error)
7.34s = Training runtime
12.37s = Validation runtime
Fitting model: KNeighborsDist ... Training model for up to 3568.76s of the 3568.76s of remaining time.
-4719784960.0 = Validation score (-root_mean_squared_error)
7.33s = Training runtime
12.37s = Validation runtime
Fitting model: LightGBMXT ... Training model for up to 3547.96s of the 3547.96s of remaining time.
Training LightGBMXT with GPU, note that this may negatively impact model quality compared to CPU training.
-2584059392.0 = Validation score (-root_mean_squared_error)
467.87s = Training runtime
0.2s = Validation runtime
Fitting model: LightGBM ... Training model for up to 3079.78s of the 3079.78s of remaining time.
Training LightGBM with GPU, note that this may negatively impact model quality compared to CPU training.
-2731340800.0 = Validation score (-root_mean_squared_error)
505.27s = Training runtime
0.19s = Validation runtime
Fitting model: RandomForestMSE ... Training model for up to 2574.18s of the 2574.18s of remaining time.
-5517256192.0 = Validation score (-root_mean_squared_error)
7533.37s = Training runtime
0.13s = Validation runtime
Fitting model: WeightedEnsemble_L2 ... Training model for up to 360.0s of the -4959.61s of remaining time.
Ensemble Weights: {'LightGBMXT': 0.605, 'LightGBM': 0.395}
-2468598016.0 = Validation score (-root_mean_squared_error)
0.06s = Training runtime
0.0s = Validation runtime
AutoGluon training complete, total runtime = 8561.65s ... Best model: "WeightedEnsemble_L2"
TabularPredictor saved. To load, use: predictor = TabularPredictor.load("AutogluonModels/ag-20240206_010021")
``` | closed | 2024-02-06T03:43:40Z | 2025-01-08T19:36:07Z | https://github.com/autogluon/autogluon/issues/3904 | [
"module: tabular",
"bug: unconfirmed",
"Needs Triage"
] | sideshot | 1 |
aws/aws-sdk-pandas | pandas | 2,319 | IAM Authentication for RDS | My company is migrating away from having permanent credentials for RDS instances. Instead, they are considering using IAM Authentication to generate passwords, with `aws rds generate-db-auth-token` or something similar.
Before this, what I did was take credentials (which are hosted in AWS Parameter Store, we don't use Secrets Manager), and create durable Glue Catalog connections, once. This way, I am able to use your SDK's `connection` parameter.
However, with frequently rotating credentials, this is no longer feasible.
Thus, my questions are:
- Have you considered adding IAM Authentication or similar mechanism to authenticate RDS instances in AWS SDK Pandas?
- If not, have you considered allowing passing a JDBC URL or dictionary of connection string values?
Personally I think the second option would be more ideal since most JDBC connection handlers I've seen allow this (so pretty orthodox solution), and allows interesting security setups like the above.
Otherwise we'd have to, in the background, constantly need to generate new passwords and either reset the Glue connection string or Secrets Manager secret after the TTL expires. | closed | 2023-06-04T17:48:15Z | 2023-06-06T03:01:27Z | https://github.com/aws/aws-sdk-pandas/issues/2319 | [
"question"
] | m1hawkgsm | 2 |
plotly/dash-table | plotly | 723 | Header row overlaps Dropdown when fixed_rows is set | Hi all,
Here's a minimal example:
```python
import dash
import dash_core_components as dcc
import dash_html_components as html
import dash_table
app = dash.Dash(__name__, eager_loading=True)
app.layout = html.Div([
# Note that "one" isn't visible because the table header overlaps it
dcc.Dropdown(id='dropdown', options=[{'label': c, 'value': c} for c in ['one', 'two', 'three']]),
dash_table.DataTable(
id='table',
fixed_rows=dict(headers=True, data=0),
columns=[{'name': c, 'id': c} for c in ['A', 'B']],
data=[
{'A': 'aa', 'B': 'bb'},
],
),
])
if __name__ == '__main__':
app.run_server(debug=True)
```
When opening the dropdown I see the following:

I.e., **the table header overlaps the Dropdown**, which is clearly wrong.
Here's the output of `conda list dash` - I'm pretty much on the lastest versions:
```
dash 1.9.1
dash-core-components 1.8.1
dash-dangerously-set-inner-html 0.0.2
dash-html-components 1.0.2
dash-renderer 1.2.4
dash-table 4.6.1
dash_utils 0.19
```
| open | 2020-03-18T15:29:29Z | 2021-11-01T16:31:50Z | https://github.com/plotly/dash-table/issues/723 | [] | kasparthommen | 5 |
onnx/onnxmltools | scikit-learn | 592 | How to write entirely new converter | We have GBM models written in C# using our proprietary algorithms, and we were wondering whether it is possible to convert them to ONNX?
These models do not use any traditional framework as they are entirely written inhouse.
What would be required to write a converter for these? I cannot seem to find documentation on the process. | closed | 2022-10-20T10:19:25Z | 2022-11-07T06:21:21Z | https://github.com/onnx/onnxmltools/issues/592 | [] | camer314 | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 931 | error test.py |
Hello.
I am working on pix2pix with my own custom data.
Partway through the process, I tried to run a test using only the saved weights.
When I ran test.py on test data arranged in the 'facades' format, the following error occurred.
How can I solve it?
Thank you.
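For what it's worth, "Unexpected key(s) ... num_batches_tracked" in the traceback below usually means the checkpoint was saved with a newer PyTorch (0.4.1+, which added those BatchNorm counters) than the code loading it expects. One hedged workaround is to filter those bookkeeping keys before loading (`strip_bn_counters`, and the paths in the usage note, are illustrative):

```python
def strip_bn_counters(state_dict):
    """Drop BatchNorm 'num_batches_tracked' entries (introduced in
    PyTorch 0.4.1) so the checkpoint matches older model definitions."""
    return {k: v for k, v in state_dict.items()
            if not k.endswith("num_batches_tracked")}

# Usage sketch (loading itself left to torch):
# state = torch.load("checkpoints/depth_pix2pix/latest_net_G.pth")
# netG.load_state_dict(strip_bn_counters(state))
```

Matching the PyTorch versions used for training and testing avoids the issue entirely.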
dataset [AlignedDataset] was created
initialize network with normal
model [Pix2PixModel] was created
loading the model from ./checkpoints\depth_pix2pix\latest_net_G.pth
Traceback (most recent call last):
RuntimeError: Error(s) in loading state_dict for UnetGenerator:
Unexpected key(s) in state_dict: "model.model.1.model.2.num_batches_tracked", "model.model.1.model.3.model.2.num_batches_tracked", "model.model.1.model.3.model.3.model.2.num_batches_tracked", "model.model.1.model.3.model.3.model.3.model.2.num_batches_tracked", "model.model.1.model.3.model.3.model.3.model.3.model.2.num_batches_tracked", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.2.num_batches_tracked", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.3.model.4.num_batches_tracked", "model.model.1.model.3.model.3.model.3.model.3.model.3.model.6.num_batches_tracked", "model.model.1.model.3.model.3.model.3.model.3.model.6.num_batches_tracked", "model.model.1.model.3.model.3.model.3.model.6.num_batches_tracked", "model.model.1.model.3.model.3.model.6.num_batches_tracked", "model.model.1.model.3.model.6.num_batches_tracked", "model.model.1.model.6.num_batches_tracked". | closed | 2020-02-25T08:27:28Z | 2020-02-25T08:30:51Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/931 | [] | kimtaehyeong | 1 |
collerek/ormar | pydantic | 1,391 | `load_all()`/`select_related()` pydantic validation errors for ormar.JSON() field type | **Describe the bug**
When running `load_all()` or `select_related()` on a model whose related model has an `ormar.JSON()` field, we see a pydantic validation error.
**To Reproduce**
```python
import asyncio
import databases
import ormar
import sqlalchemy
DATABASE_URL = "sqlite:///db.sqlite"
base_ormar_config = ormar.OrmarConfig(
database=databases.Database(DATABASE_URL),
metadata=sqlalchemy.MetaData(),
engine=sqlalchemy.create_engine(DATABASE_URL),
)
class Author(ormar.Model):
ormar_config = base_ormar_config.copy(tablename="authors")
id: int = ormar.Integer(primary_key=True)
class Book(ormar.Model):
ormar_config = base_ormar_config.copy(tablename="books")
id: int = ormar.Integer(primary_key=True)
author: Author = ormar.ForeignKey(Author, name="author_id")
my_data: dict = ormar.JSON(nullable=True)
async def main():
base_ormar_config.metadata.drop_all(base_ormar_config.engine)
base_ormar_config.metadata.create_all(base_ormar_config.engine)
author = await Author.objects.create()
book = await Book.objects.create(
author=author,
my_data={}
)
# this errors out
authors = await Author.objects.select_related(Author.books).all()
# this also errors out
authors = await Author.objects.select_all().all()
asyncio.run(main())
```
<details><summary>Error traceback</summary>
<p>
```
Traceback (most recent call last):
File "/Users/tonyyao/workspace/ormar-bug/main.py", line 50, in <module>
asyncio.run(main())
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/Users/tonyyao/workspace/ormar-bug/main.py", line 47, in main
authors = await Author.objects.select_all().all()
File "/Users/tonyyao/.local/pipx/.cache/f0aff6fe1a77366/lib/python3.9/site-packages/ormar/queryset/queryset.py", line 1084, in all
result_rows = await self._process_query_result_rows(rows)
File "/Users/tonyyao/.local/pipx/.cache/f0aff6fe1a77366/lib/python3.9/site-packages/ormar/queryset/queryset.py", line 189, in _process_query_result_rows
self.model.from_row(
File "/Users/tonyyao/.local/pipx/.cache/f0aff6fe1a77366/lib/python3.9/site-packages/ormar/models/model_row.py", line 104, in from_row
instance = cast("Model", cls(**item))
File "/Users/tonyyao/.local/pipx/.cache/f0aff6fe1a77366/lib/python3.9/site-packages/ormar/models/newbasemodel.py", line 138, in __init__
self.__pydantic_validator__.validate_python(
pydantic_core._pydantic_core.ValidationError: 12 validation errors for Author
books.int
Input should be a valid integer [type=int_type, input_value=Book({'id': 1, 'author': ...040>]}), 'my_data': {}}), input_type=Book]
For further information visit https://errors.pydantic.dev/2.5/v/int_type
books.Book.my_data
JSON input should be string, bytes or bytearray [type=json_type, input_value={}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.5/v/json_type
books.PkOnlyBookrybjdp
Input should be a valid dictionary or instance of PkOnlyBookrybjdp [type=model_type, input_value=Book({'id': 1, 'author': ...040>]}), 'my_data': {}}), input_type=Book]
For further information visit https://errors.pydantic.dev/2.5/v/model_type
books.`list[nullable[union[int,Book,...]]]`.0.int
Input should be a valid integer [type=int_type, input_value=('id', 1), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.5/v/int_type
books.`list[nullable[union[int,Book,...]]]`.0.Book
Input should be a valid dictionary or object to extract fields from [type=model_attributes_type, input_value=('id', 1), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.5/v/model_attributes_type
books.`list[nullable[union[int,Book,...]]]`.0.PkOnlyBookrybjdp
Input should be a valid dictionary or instance of PkOnlyBookrybjdp [type=model_type, input_value=('id', 1), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.5/v/model_type
books.`list[nullable[union[int,Book,...]]]`.1.int
Input should be a valid integer [type=int_type, input_value=('author', Author({'id': ...Book at 0x102391040>]})), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.5/v/int_type
books.`list[nullable[union[int,Book,...]]]`.1.Book
Input should be a valid dictionary or object to extract fields from [type=model_attributes_type, input_value=('author', Author({'id': ...Book at 0x102391040>]})), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.5/v/model_attributes_type
books.`list[nullable[union[int,Book,...]]]`.1.PkOnlyBookrybjdp
Input should be a valid dictionary or instance of PkOnlyBookrybjdp [type=model_type, input_value=('author', Author({'id': ...Book at 0x102391040>]})), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.5/v/model_type
books.`list[nullable[union[int,Book,...]]]`.2.int
Input should be a valid integer [type=int_type, input_value=('my_data', {}), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.5/v/int_type
books.`list[nullable[union[int,Book,...]]]`.2.Book
Input should be a valid dictionary or object to extract fields from [type=model_attributes_type, input_value=('my_data', {}), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.5/v/model_attributes_type
books.`list[nullable[union[int,Book,...]]]`.2.PkOnlyBookrybjdp
Input should be a valid dictionary or instance of PkOnlyBookrybjdp [type=model_type, input_value=('my_data', {}), input_type=tuple]
For further information visit https://errors.pydantic.dev/2.5/v/model_type
```
</p>
</details>
**Expected behavior**
Unless I'm using these functions the wrong way, I expect neither of these calls to hit any pydantic validation errors.
**Versions (please complete the following information):**
```
# /// script
# dependencies = [
# "databases[aiosqlite]==0.7.0",
# "pydantic==2.5.3",
# "ormar==0.20.1",
# "sqlalchemy==1.4.52"
# ]
# ///
```
**Additional context**
- does **NOT** hit validation errors if json field is optional and not populated
- does **NOT** hit validation errors for other field types
- does **NOT** hit validation errors if using `prefetch_related` instead of `selected_related`
- one mitigation that avoids the validation errors is the following: `author = await author.load(); books = await author.books.all()`
Thanks in advance! | open | 2024-08-03T15:31:06Z | 2025-01-08T15:37:57Z | https://github.com/collerek/ormar/issues/1391 | [
"bug"
] | dingxuanyao | 3 |
ymcui/Chinese-BERT-wwm | tensorflow | 232 | Loss diverges when fine-tuning RoBERTa-wwm-ext-large | lr: 2e-5
batch_size:16

| closed | 2023-04-15T00:07:47Z | 2023-05-18T23:30:30Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/232 | [
"stale"
] | fword | 2 |
qubvel-org/segmentation_models.pytorch | computer-vision | 167 | Could someone share an example of multiclass segmentation? | Hi
I have tried before but always failed at multi-class segmentation — the predicted mask is always empty. Could someone share an example of multi-class segmentation?
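In my experience, empty multi-class masks often come down to mask encoding rather than the model itself. Below is a minimal pure-Python sketch of converting a class-index mask into one-hot planes — whether your particular loss expects one-hot targets (rather than index masks) is an assumption to verify against the loss you use:

```python
def one_hot(mask, num_classes):
    """Convert a 2-D class-index mask into per-class binary planes."""
    h, w = len(mask), len(mask[0])
    planes = [[[0] * w for _ in range(h)] for _ in range(num_classes)]
    for i in range(h):
        for j in range(w):
            planes[mask[i][j]][i][j] = 1
    return planes

# 2x2 mask containing classes 0, 1, 2
planes = one_hot([[0, 1], [2, 1]], num_classes=3)
print(planes[1])  # → [[0, 1], [0, 1]]
```

If the model always predicts background, it is worth checking that the target tensor matches this layout (one channel per class) and that class 0 isn't silently absorbing everything.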
Thank you very much! | closed | 2020-03-13T07:00:53Z | 2022-02-04T07:34:47Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/167 | [
"Stale"
] | zhongqiu1245 | 8 |
alteryx/featuretools | data-science | 2,125 | Design doc for integration with SQL | - As a user of Featuretools, I want to create an EntitySet from a SQL database.
- An EntitySet in Featuretools is like a database: it contains data types and relationships.
- For this issue, let's create a design doc. | closed | 2022-06-21T16:00:32Z | 2022-07-29T20:19:45Z | https://github.com/alteryx/featuretools/issues/2125 | [] | gsheni | 1 |
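A rough sketch of the table/relationship discovery step such a design doc would need to cover, using stdlib `sqlite3` as a stand-in backend — the schema and the tuple layout for relationships here are illustrative assumptions, not a proposed API:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id)
    );
""")

# Enumerate tables and foreign keys — the raw material for an EntitySet
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
relationships = []
for t in tables:
    for fk in conn.execute(f"PRAGMA foreign_key_list({t})"):
        # fk columns: (id, seq, ref_table, from_col, to_col, ...)
        relationships.append((t, fk[3], fk[2], fk[4]))

print(tables)         # → ['customers', 'orders']
print(relationships)  # → [('orders', 'customer_id', 'customers', 'id')]
```

The design doc would then map discovered tables to dataframes and foreign keys to EntitySet relationships.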
skypilot-org/skypilot | data-science | 4,140 | [Example] Make `rdvz` work with multi-node SkyPilot clusters | <!-- Describe the bug report / feature request here -->
rdvz fails to work with multi-node SkyPilot clusters (probably on k8s).
https://github.com/stas00/ml-engineering/blob/master/network/benchmarks/all_reduce_bench.py
<!-- If relevant, fill in versioning info to help us troubleshoot -->
_Version & Commit info:_
* `sky -v`: PLEASE_FILL_IN
* `sky -c`: PLEASE_FILL_IN
| open | 2024-10-22T07:06:57Z | 2024-12-19T23:08:34Z | https://github.com/skypilot-org/skypilot/issues/4140 | [
"P0"
] | Michaelvll | 1 |
FlareSolverr/FlareSolverr | api | 518 | [mteamtp] (updating) The cookies provided by FlareSolverr are not valid | **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:2.2.6
* **Last working FlareSolverr version**:2.2.6
* **Operating system**:synology dsm 7.1
* **Are you using Docker**: yes
* **FlareSolverr User-Agent (see log traces or / endpoint)**:chrome
* **Are you using a proxy or VPN?** yes
* **Are you using Captcha Solver:** no
* **If using captcha solver, which one:**
* **URL to test this issue:**kp-tema.cc
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
FlareSolverr is installed on the Synology system using Docker. Important note: FlareSolverr can only open ip:8191 normally through a network proxy tool. Adding a search site always gives "Invalid cookie provided by FlareSolverr".
### Logged Error Messages
The cookies provided by FlareSolverr are not valid
[Place any relevant error messages you noticed from the logs here.]
2022-09-20T09:00:15.758571134Z stdout 2022-09-20T09:00:15+00:00 INFO REQ-5 Incoming request => GET /favicon.ico
2022-09-20T09:00:15.443839859Z stdout 2022-09-20T09:00:15+00:00 INFO REQ-4 Incoming request => GET /
2022-09-20T08:52:57.007973330Z stdout 2022-09-20T08:52:57+00:00 INFO REQ-3 Response in 17.71 s
2022-09-20T08:52:50.400207832Z stdout 2022-09-20T08:52:50+00:00 INFO REQ-3 Challenge solved
2022-09-20T08:52:49.166043581Z stdout 2022-09-20T08:52:49+00:00 INFO REQ-3 Cloudflare detected
2022-09-20T08:52:39.297552959Z stdout 2022-09-20T08:52:39+00:00 INFO REQ-3 Incoming request => POST /v1 body: {"postData":"username=XXXX&password=ghcr.io%2Fflaresolverr%2Fflaresolverr%3Alatest","maxTimeout":55000,"cmd":"request.post","url":"https://kp.m-team.cc/takelogin.php"}
2022-09-20T08:52:02.798386812Z stdout 2022-09-20T08:52:02+00:00 INFO REQ-2 Incoming request => GET /favicon.ico
2022-09-20T08:52:02.427485403Z stdout 2022-09-20T08:52:02+00:00 INFO REQ-1 Incoming request => GET /
2022-09-20T08:52:02.229561685Z stdout 2022-09-20T08:52:02+00:00 INFO Listening on http://0.0.0.0:8191
2022-09-20T08:52:02.226822433Z stdout 2022-09-20T08:52:02+00:00 INFO Test successful
2022-09-20T08:52:02.132821553Z stdout 2022-09-20T08:52:02+00:00 INFO FlareSolverr User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:94.0) Gecko/20100101 Firefox/94.0
2022-09-20T08:51:49.005945294Z stdout 2022-09-20T08:51:49+00:00 INFO Testing web browser installation...
2022-09-20T08:51:49.004399727Z stdout 2022-09-20T08:51:49+00:00 INFO FlareSolverr v2.2.6
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]

| closed | 2022-09-20T11:04:49Z | 2022-09-21T03:22:54Z | https://github.com/FlareSolverr/FlareSolverr/issues/518 | [
"more information needed"
] | yjiawqgj | 4 |
sinaptik-ai/pandas-ai | data-science | 993 | File exists error when creating a `SmartDataframe` object | ### System Info
OS version: Ubuntu 20.04.6 LTS
Python version: 3.11.8
The current version of `pandasai` being used: 2.0.3
### 🐛 Describe the bug
Here is the code (simple flask API) that I'm using right now:
```python
# Route to run a PandasAI query
@app.route('/run', methods=['POST'])
def run_pandasai():
    data = request.get_json()
    engine = create_engine(SQLALCHEMY_BASE_DATABASE_URI)
    df = None
    with engine.connect() as conn:
        df = pd.read_sql(text('SELECT * FROM some_table;'), conn)
    llm = OpenAI(api_token='<my_api_key>')
    df = SmartDataframe(df, config={"llm": llm})
    response = df.chat('some prompt?')
    return jsonify({'response': response})
```
I get the following error while running this:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 1463, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 872, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 870, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 855, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/app.py", line 20, in run_pandasai
df = SmartDataframe(df, config={"llm": llm})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pandasai/smart_dataframe/__init__.py", line 64, in __init__
self._agent = Agent([df], config=config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pandasai/agent/base.py", line 75, in __init__
self.context = PipelineContext(
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pandasai/pipelines/pipeline_context.py", line 35, in __init__
self.cache = cache if cache is not None else Cache()
^^^^^^^
File "/usr/local/lib/python3.11/site-packages/pandasai/helpers/cache.py", line 29, in __init__
os.makedirs(cache_dir, mode=DEFAULT_FILE_PERMISSIONS, exist_ok=True)
File "<frozen os>", line 225, in makedirs
FileExistsError: [Errno 17] File exists: '/app/cache'
```
I understand the error comes from trying to create a directory that already exists, and that even though `exist_ok` is `True`, the restrictive `DEFAULT_FILE_PERMISSIONS` mode apparently makes the call fail anyway. Is this a bug? | closed | 2024-03-04T15:06:35Z | 2024-03-07T18:54:46Z | https://github.com/sinaptik-ai/pandas-ai/issues/993 | [
"wontfix"
] | araghuvanshi-systango | 2 |
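For reference, `exist_ok=True` normally suppresses this error for an existing directory regardless of `mode`, so something else is likely going on in the issue above. A stdlib sketch of the one common case where `FileExistsError` still surfaces — the path existing as a regular file — which is an assumption here, not a confirmed diagnosis for `/app/cache`:

```python
import os
import tempfile

base = tempfile.mkdtemp()
target = os.path.join(base, "cache")

# exist_ok=True suppresses FileExistsError for an existing *directory*,
# even when a restrictive mode is passed.
os.makedirs(target, mode=0o700, exist_ok=True)
os.makedirs(target, mode=0o700, exist_ok=True)  # no error on the second call
print(os.path.isdir(target))  # → True

# But if the path exists as a regular *file*, makedirs still raises.
file_path = os.path.join(base, "cache_file")
open(file_path, "w").close()
try:
    os.makedirs(file_path, exist_ok=True)
    raised = False
except FileExistsError:
    raised = True
print(raised)  # → True
```

Checking whether `/app/cache` exists as a file (or is shadowed by a mount) would rule this case in or out.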
automl/auto-sklearn | scikit-learn | 965 | ConvergenceWarning while training | Hello,
auto-sklearn works great for me. But during training I often get messages like this:
```
/usr/local/lib/python3.6/dist-packages/sklearn/linear_model/_stochastic_gradient.py:557: ConvergenceWarning: Maximum number of iteration reached before convergence. Consider increasing max_iter to improve the fit.
ConvergenceWarning)
```
Sometimes these lines are repeated about 100-200 times one after the other.
Please tell me: is there any way to increase `max_iter` for the stochastic gradient models used by auto-sklearn?
I tried increasing the time by using "time_left_for_this_task" and "per_run_time_limit". But it didn't help.
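For context, auto-sklearn builds its own configuration space, and as far as I can tell `max_iter` for the SGD-based models isn't directly exposed to users. A common stopgap is to silence the warning category instead of raising the iteration limit. A minimal sketch — the `ConvergenceWarning` stand-in below replaces `sklearn.exceptions.ConvergenceWarning` only so the snippet is self-contained:

```python
import warnings

# Stand-in for sklearn.exceptions.ConvergenceWarning; with sklearn installed
# you would import the real class instead.
class ConvergenceWarning(UserWarning):
    pass

def noisy_fit():
    warnings.warn("Maximum number of iteration reached before convergence.",
                  ConvergenceWarning)
    return "fitted"

# Suppress only this warning category while fitting
with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=ConvergenceWarning)
    result = noisy_fit()

print(result)  # → fitted
```

Note that when fitting runs in separate worker processes, the filter may need to be installed inside each worker rather than only in the parent process.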
| open | 2020-09-26T15:41:57Z | 2021-11-17T10:50:02Z | https://github.com/automl/auto-sklearn/issues/965 | [
"enhancement"
] | medphisiker | 3 |
drivendataorg/erdantic | pydantic | 83 | Incompatible with Pydantic V2 | GitHub Actions workflow [tests #285](https://github.com/drivendataorg/erdantic/actions/runs/5433724070) failed.
Event: schedule
Branch: [main](https://github.com/drivendataorg/erdantic/tree/main)
Commit: [c094337c79029d3d6cc530748b5fb80a46d58ab0](https://github.com/drivendataorg/erdantic/commit/c094337c79029d3d6cc530748b5fb80a46d58ab0)
<sup><i>Created by [jayqi/failed-build-issue-action](https://github.com/jayqi/failed-build-issue-action)</i></sup> | closed | 2023-07-02T00:07:12Z | 2025-03-11T13:32:14Z | https://github.com/drivendataorg/erdantic/issues/83 | [
"build failed"
] | github-actions[bot] | 1 |
ijl/orjson | numpy | 548 | orjson 3.10.15 release breaks installs due to missing dependency | Your most-recent release 3.10.15 just broke the installation procedure of OneTrainer and probably other software projects:
```
Collecting orjson>=3.2.1 (from fastapi[all]>=0.94.0->runpod==1.7.4->-r requirements-global.txt (line 45))
Using cached orjson-3.10.15.tar.gz (5.3 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/
Checking for Rust toolchain....
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
Manually installing cargo using `apt-get install cargo` on a linux system doesn't easily fix it:
```
Building wheel for orjson (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for orjson (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [12 lines of output]
Running `maturin pep517 build-wheel -i /OneTrainer/venv/bin/python --compatibility off`
📦 Including license file "/tmp/pip-install-is4ca3me/orjson_570f903b7c7c4d3895d7c5e6ef1b5011/LICENSE-APACHE"
📦 Including license file "/tmp/pip-install-is4ca3me/orjson_570f903b7c7c4d3895d7c5e6ef1b5011/LICENSE-MIT"
🍹 Building a mixed python/rust project
🔗 Found pyo3-ffi bindings
🐍 Found CPython 3.11 at /OneTrainer/venv/bin/python
error: package `orjson v3.10.15 (/tmp/pip-install-is4ca3me/orjson_570f903b7c7c4d3895d7c5e6ef1b5011)` cannot be built because it requires rustc 1.82 or newer, while the currently active rustc version is 1.75.0
💥 maturin failed
Caused by: Failed to build a native library through cargo
Caused by: Cargo build finished with "exit status: 101": `env -u CARGO PYO3_ENVIRONMENT_SIGNATURE="cpython-3.11-64bit" PYO3_PYTHON="/OneTrainer/venv/bin/python" PYTHON_SYS_EXECUTABLE="/OneTrainer/venv/bin/python" "cargo" "rustc" "--message-format" "json-render-diagnostics" "--manifest-path" "/tmp/pip-install-is4ca3me/orjson_570f903b7c7c4d3895d7c5e6ef1b5011/Cargo.toml" "--release" "--lib"`
Error: command ['maturin', 'pep517', 'build-wheel', '-i', '/OneTrainer/venv/bin/python', '--compatibility', 'off'] returned non-zero exit status 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for orjson
Successfully built diffusers
Failed to build orjson
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (orjson)
```
Please remove this new dependency
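Until the packaging is sorted out, a hedged workaround on the consumer side is to pin the previous release or refuse source builds outright — the version number below assumes 3.10.14 was the last release with a usable wheel for the affected platform:

```text
# constraints.txt — pass with `pip install -c constraints.txt -r requirements.txt`
orjson==3.10.14
```

Alternatively, `pip install --only-binary=:all: orjson` makes pip fail fast with a clear message instead of attempting a Rust build from the sdist.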
| closed | 2025-01-18T16:49:26Z | 2025-01-18T18:18:52Z | https://github.com/ijl/orjson/issues/548 | [] | dxqbYD | 11 |
Avaiga/taipy | data-visualization | 1,608 | [🐛 BUG] Submit button inactive after data node lock | ### What went wrong? 🤔
When using a data node viewer together with a scenario viewer, editing the data node locks the scenario, and finishing the edit should make the scenario submittable again.

However, the "Submit" button stays grey and inactive even after the edit is done. Refreshing the scenario or the page re-activates the button.

This happens every time an input data node is edited.
### Expected Behavior
Finishing the edition should reactivate the button automatically if the scenario can be submitted without the need to refresh the scenario or the page.
### Steps to Reproduce Issue
Run this code, choose a scenario, note that the scenario can be submitted, then edit the data value to 2. The button becomes grey and inactive.
```python
# Import necessary libraries
import pandas as pd
import taipy as tp
from taipy import Config, Scope, Frequency
from taipy.gui import notify
import time
import datetime as dt

Config.configure_job_executions(mode="standalone", max_nb_of_workers=2)


# Function to run a Dataiku scenario
def run_something(input_1, input_2):
    datetime = dt.datetime.now()
    date = dt.date(2018, 1, 1)
    int_var = 10
    string_var = "String"
    return datetime, date, int_var, string_var


data = {"toto": [i for i in range(10_000)],
        "titi": [2 * i for i in range(10_000)],
        "tata": [4 * i for i in range(10_000)]}

input_1_cfg = Config.configure_data_node(
    id="input_1_data_node",
    default_data=1,
)

input_2_cfg = Config.configure_data_node(
    id="input_2_data_node",
    default_data=data,
)

datetime_cfg = Config.configure_data_node(id="datetime_data_node")
date_cfg = Config.configure_data_node(id="date_data_node")
int_cfg = Config.configure_data_node(id="int_data_node")
string_cfg = Config.configure_data_node(id="string_data_node")

# Scenario and task configuration in Taipy
scenario_task_cfg = Config.configure_task(
    id="scenario_task",
    function=run_something,
    input=[input_1_cfg, input_2_cfg],
    output=[datetime_cfg, date_cfg, int_cfg, string_cfg]
)

scenario_cfg = Config.configure_scenario(
    id="scenario",
    task_configs=[scenario_task_cfg],
    frequency=Frequency.DAILY)

data_node = None

# GUI Markdown content
scenario_md = """
<|{scenario}|scenario_selector|>
<|{data_node}|data_node|>
<|{scenario}|scenario|>
"""


def on_change(state, var_name, var_value):
    if var_name == "scenario" and isinstance(var_value, tp.Scenario):
        state.data_node = state.scenario.input_1_data_node


# Main execution block with GUI setup
if __name__ == "__main__":
    tp.Core().run()
    scenario = tp.create_scenario(scenario_cfg)
    tp.Gui(scenario_md).run(title="Bug replication", port=3248)
```
### Version of Taipy
develop and 4.0.0.dev0
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-07-30T10:05:20Z | 2024-10-03T13:37:32Z | https://github.com/Avaiga/taipy/issues/1608 | [
"Core",
"🖰 GUI",
"💥Malfunction",
"🟧 Priority: High"
] | FlorianJacta | 7 |
matterport/Mask_RCNN | tensorflow | 2,470 | How do I improve the mask prediction by Mask RCNN? | The bounding box and class prediction are okay but the masks are not. However, masks for the small objects are okay compared to large objects. The story is similar for other images as well. Here's my configurations:
RPN_ANCHOR_SCALES = (16, 32, 64, 128, 256)
TRAIN_ROIS_PER_IMAGE = 64
MAX_GT_INSTANCES = 50
POST_NMS_ROIS_INFERENCE = 500
POST_NMS_ROIS_TRAINING = 1000
USE_MINI_MASK True
MASK_SHAPE [28, 28]
MINI MASK_SHAPE [56, 56]
LEARNING_RATE = 0.001
LEARNING_MOMENTUM = 0.9
WEIGHT_DECAY = 0.0001
EPOCHS = 500
Can anybody advise on this, please?

| open | 2021-01-28T15:49:32Z | 2021-01-28T15:49:32Z | https://github.com/matterport/Mask_RCNN/issues/2470 | [] | BishwaBS | 0 |
tensorflow/tensor2tensor | machine-learning | 1,739 | t2t-decoder regression predictions | ### Description
I have several regression-based tasks that involve inferring a single scalar value from a block of text. I've created a new Problem class Text2RegressionProblem that's analogous to the existing Text2ClassProblem, but for problems where the output is a single scalar. I'm able to successfully train models on this problem. However, when I run inference with trained models using t2t-decoder, I get dozens of values back.
Example:
```
t2t-decoder \
--t2t_usr_dir=$USR_DIR \
--data_dir=$DATA_DIR \
--problem=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$TRAIN_DIR \
--hparams="max_length=1" \
--decode_hparams="beam_size=$BEAM_SIZE,alpha=$ALPHA,extra_length=0" \
--decode_from_file=$DECODE_FILE \
--decode_to_file=/tmp/decoded.txt \
Inference results INPUT: <Some tokenized text>
Inference results OUTPUT: 5.3629727 5.303556 5.144407 5.17002 5.3072686 5.2574806 5.2288465 5.2287025 5.2602153 5.2170677 5.2564077 5.225711 5.1793923 5.1638227 5.2056766 5.314677 5.2889915 5.4006414 5.4020224 5.370219 5.247168 5.2442055 5.2933908 5.2927985 5.328215 5.353526 5.3514934 5.3330526 5.369112 5.3747473 5.329636 5.2914467 5.260538 5.2635546 5.2717643 5.310262 5.372777 5.3689275 5.396057 5.379212 5.338006 5.3245363 5.3512154 5.3231683 5.2917695 5.3729806 5.369797 5.3465395 5.3407426 5.370006 5.386562 5.3699274 5.3307123 5.33022 5.3411913 5.337703 5.3624287 5.389234 5.4132752 5.4185905 5.378593 5.34707 5.332845 5.335811 5.34976 5.3591514 5.3616056 5.3694997 5.387833 5.401154 5.390521 5.358708 5.3527293 5.3604765 5.359528 5.3743944 5.391922 5.407466 5.4318314
```
**How can I run inference and get back a single scalar per example?**
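While investigating a proper fix, one hedged stopgap is to post-process the decoder output and collapse the per-position values to one scalar — a sketch, with the sample line shortened from the output above:

```python
# Sample line shortened from the decoder output above
line = "5.3629727 5.303556 5.144407"
values = [float(v) for v in line.split()]

scalar = values[0]                # first decode step's prediction...
mean = sum(values) / len(values)  # ...or average across steps
print(scalar)  # → 5.3629727
```

This only papers over the symptom; the real question of constraining the decoder to emit one target remains.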
Problem definition is below:
```
class Text2RegressionProblem(Text2TextProblem):
  """Base class for text regression problems."""

  def generate_samples(self, data_dir, tmp_dir, dataset_split):
    """Generate samples of text and label pairs.

    Each yielded dict will be a single example. The inputs should be raw text.
    The label should be a list containing a single float.

    Args:
      data_dir: final data directory. Typically only used in this method to copy
        over user-supplied vocab files (for example, if vocab_type ==
        VocabType.TOKEN).
      tmp_dir: temporary directory that you can use for downloading and scratch.
      dataset_split: problem.DatasetSplit, which data split to generate samples
        for (for example, training and evaluation).

    Yields:
      {"inputs": text, "label": [float]}
    """
    raise NotImplementedError()

  def generate_text_for_vocab(self, data_dir, tmp_dir):
    for i, sample in enumerate(
        self.generate_samples(data_dir, tmp_dir, problem.DatasetSplit.TRAIN)):
      yield sample["inputs"]
      if self.max_samples_for_vocab and (i + 1) >= self.max_samples_for_vocab:
        break

  def generate_encoded_samples(self, data_dir, tmp_dir, dataset_split):
    generator = self.generate_samples(data_dir, tmp_dir, dataset_split)
    encoder = self.get_or_create_vocab(data_dir, tmp_dir)
    for sample in generator:
      inputs = encoder.encode(sample["inputs"])
      inputs.append(text_encoder.EOS_ID)
      yield {"inputs": inputs, "targets": sample["targets"]}

  def feature_encoders(self, data_dir):
    encoder = self.get_or_create_vocab(data_dir, None, force_get=True)
    return {
        "inputs": encoder,
        "targets": text_encoder.RealEncoder(),
    }

  def hparams(self, defaults, unused_model_hparams):
    p = defaults
    p.modality = {
        "inputs": modalities.ModalityType.SYMBOL,
        "targets": modalities.ModalityType.REAL_L2_LOSS,
    }
    p.vocab_size = {"inputs": self._encoders["inputs"].vocab_size, "targets": 1}

  def max_length(self, model_hparams):
    return model_hparams.batch_size

  def example_reading_spec(self):
    data_fields = {
        "inputs": tf.VarLenFeature(tf.int64),
        "targets": tf.FixedLenFeature([1], tf.float32),
    }
    data_items_to_decoders = None
    return (data_fields, data_items_to_decoders)

  def eval_metrics(self):
    return [metrics.Metrics.RMSE, metrics.Metrics.PEARSON, metrics.Metrics.R2]
``` | closed | 2019-11-07T22:58:12Z | 2019-11-14T21:46:07Z | https://github.com/tensorflow/tensor2tensor/issues/1739 | [] | gabegrand | 2 |
autogluon/autogluon | scikit-learn | 4,322 | feature importance all shows 0 under timeseries models | The following models are trained with AutoGluon TimeSeries:
'Chronos':{'model_path': 'base'},
"DeepAR":{},
"DLinear":{},
"PatchTST":{},
"SimpleFeedForward":{},
"RecursiveTabular":{},
"DirectTabular":{},
"AutoETS":{},
"Theta":{},
The train info detail is as followings:
Warning: path already exists! This predictor may overwrite an existing predictor! path="V_price_D_0"
Beginning AutoGluon training...
AutoGluon will save models to 'V_price_D_0'
=================== System Info ===================
AutoGluon Version: 1.1.0
Python Version: 3.9.19
Operating System: Windows
Platform Machine: AMD64
Platform Version: 10.0.22631
CPU Count: 12
GPU Count: 1
Memory Avail: 27.46 GB / 63.76 GB (43.1%)
Disk Space Avail: 201.39 GB / 931.50 GB (21.6%)
===================================================
Setting presets to: best_quality
Fitting with arguments:
{'enable_ensemble': True,
'eval_metric': RMSE,
'hyperparameters': {'AutoETS': {},
'Chronos': {'model_path': 'base'},
'DLinear': {},
'DeepAR': {},
'DirectTabular': {},
'PatchTST': {},
'RecursiveTabular': {},
'SimpleFeedForward': {},
'Theta': {}},
'known_covariates_names': [],
'num_val_windows': 5,
'prediction_length': 1,
'quantile_levels': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
'random_seed': 123,
'refit_every_n_windows': 1,
'refit_full': False,
'skip_model_selection': False,
'target': 'Chg',
'verbosity': 2}
Inferred time series frequency: 'D'
Provided train_data has 11624829 rows, 5293 time series. Median time series length is 2748 (min=136, max=2748).
Provided data contains following columns:
target: 'Chg'
past_covariates:
categorical: []
continuous (float): ['IQR', 'Transaction_volume_ln', 'Turnover_rate', 'chip_concentration_90', 'K', 'D', ...]
To learn how to fix incorrectly inferred types, please see documentation for TimeSeriesPredictor.fit
AutoGluon will gauge predictive performance using evaluation metric: 'RMSE'
This metric's sign has been flipped to adhere to being higher_is_better. The metric score can be multiplied by -1 to get the metric value.
===================================================
Starting training. Start time is 2024-07-13 11:36:59
Models that will be trained: ['RecursiveTabular', 'DirectTabular', 'Theta', 'AutoETS', 'Chronos[base]', 'DeepAR', 'PatchTST', 'SimpleFeedForward', 'DLinear']
Training timeseries model RecursiveTabular.
-0.0425 = Validation score (-RMSE)
1096.62 s = Training runtime
7.31 s = Validation (prediction) runtime
Training timeseries model DirectTabular.
-0.0325 = Validation score (-RMSE)
978.43 s = Training runtime
11.77 s = Validation (prediction) runtime
Training timeseries model Theta.
Warning: Exception caused Theta to fail during training... Skipping this model.
Unable to allocate 8.62 GiB for an array with shape (107, 10810221) and data type float64
Training timeseries model AutoETS.
Warning: Exception caused AutoETS to fail during training... Skipping this model.
Unable to allocate 8.62 GiB for an array with shape (107, 10810221) and data type float64
Training timeseries model Chronos[base].
Warning: Exception caused Chronos[base] to fail during training... Skipping this model.
CUDA out of memory. Tried to allocate 242.00 MiB. GPU 0 has a total capacity of 2.00 GiB of which 0 bytes is free. Of the allocated memory 1.58 GiB is allocated by PyTorch, and 54.40 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
Training timeseries model DeepAR.
-0.0310 = Validation score (-RMSE)
742.75 s = Training runtime
14.07 s = Validation (prediction) runtime
Training timeseries model PatchTST.
-0.0307 = Validation score (-RMSE)
760.91 s = Training runtime
10.32 s = Validation (prediction) runtime
Training timeseries model SimpleFeedForward.
-0.0307 = Validation score (-RMSE)
696.61 s = Training runtime
20.55 s = Validation (prediction) runtime
Training timeseries model DLinear.
-0.0303 = Validation score (-RMSE)
611.82 s = Training runtime
25.24 s = Validation (prediction) runtime
Warning: Exception caused ensemble to fail during training... Skipping this model.
Unable to allocate 88.7 MiB for an array with shape (1, 11624829) and data type float64
Training complete. Models trained: ['RecursiveTabular', 'DirectTabular', 'DeepAR', 'PatchTST', 'SimpleFeedForward', 'DLinear']
Total runtime: 5358.75 s
Best model: DLinear
Best model score: -0.0303
Then I tried to get the feature importance of all the factors used in the fitting process:
models_to_try = ['DeepAR', 'DLinear', 'PatchTST']
for model_name in models_to_try:
    try:
        feature_importances = predictor.feature_importance(data=train_data, model=model_name)
        print(f'Feature importances for {model_name}:')
        print(feature_importances)
    except Exception as e:
        print(f'Failed to compute feature importance for {model_name}: {e}')
The feature importance results all come out as 0 or NaN, as follows:
Feature importances for DeepAR:
importance stdev n p99_low p99_high
Turnover_rate_weighted_momentum_8 0.0 0.0 0.0 NaN NaN
ILLIQ_ma_13 0.0 0.0 0.0 NaN NaN
Ret_21 0.0 0.0 0.0 NaN NaN
Momentum_21 0.0 0.0 0.0 NaN NaN
Turnover_rate_stable_3 0.0 0.0 0.0 NaN NaN
... ... ... ... ... ...
IQR_ma_21 0.0 0.0 0.0 NaN NaN
Turnover_rate_weighted_momentum_21 0.0 0.0 0.0 NaN NaN
Sharp_ratio_21 0.0 0.0 0.0 NaN NaN
CCI_8 0.0 0.0 0.0 NaN NaN
chip_concentration_90_ma_5 0.0 0.0 0.0 NaN NaN
[106 rows x 5 columns]
Feature importances for DLinear:
importance stdev n p99_low p99_high
Turnover_rate_weighted_momentum_8 0.0 0.0 0.0 NaN NaN
ILLIQ_ma_13 0.0 0.0 0.0 NaN NaN
Ret_21 0.0 0.0 0.0 NaN NaN
Momentum_21 0.0 0.0 0.0 NaN NaN
Turnover_rate_stable_3 0.0 0.0 0.0 NaN NaN
... ... ... ... ... ...
IQR_ma_21 0.0 0.0 0.0 NaN NaN
Turnover_rate_weighted_momentum_21 0.0 0.0 0.0 NaN NaN
Sharp_ratio_21 0.0 0.0 0.0 NaN NaN
CCI_8 0.0 0.0 0.0 NaN NaN
chip_concentration_90_ma_5 0.0 0.0 0.0 NaN NaN
[106 rows x 5 columns]
Feature importances for PatchTST:
importance stdev n p99_low p99_high
Turnover_rate_weighted_momentum_8 0.0 0.0 0.0 NaN NaN
ILLIQ_ma_13 0.0 0.0 0.0 NaN NaN
Ret_21 0.0 0.0 0.0 NaN NaN
Momentum_21 0.0 0.0 0.0 NaN NaN
Turnover_rate_stable_3 0.0 0.0 0.0 NaN NaN
... ... ... ... ... ...
IQR_ma_21 0.0 0.0 0.0 NaN NaN
Turnover_rate_weighted_momentum_21 0.0 0.0 0.0 NaN NaN
Sharp_ratio_21 0.0 0.0 0.0 NaN NaN
CCI_8 0.0 0.0 0.0 NaN NaN
chip_concentration_90_ma_5 0.0 0.0 0.0 NaN NaN
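As a sanity check on my understanding: permutation-style feature importance is exactly zero whenever shuffling a column leaves predictions unchanged — for example, when the model never actually uses the past covariates. A toy illustration (the linear "model" here is made up purely to show the mechanism):

```python
import random

random.seed(0)
a = [1.0, 2.0, 3.0, 4.0]
b = [5.0, 1.0, 7.0, 2.0]  # covariate the "model" never reads
y = [2.0, 4.0, 6.0, 8.0]  # depends only on a

def model(a_i, b_i):
    return 2 * a_i  # ignores b_i entirely

def mse(pred, truth):
    return sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)

base = mse([model(x, z) for x, z in zip(a, b)], y)
b_shuffled = b[:]
random.shuffle(b_shuffled)
perm = mse([model(x, z) for x, z in zip(a, b_shuffled)], y)

print(perm - base)  # → 0.0  (zero permutation importance for b)
```

So one hypothesis is that the trained models simply ignore the provided covariates, which would make the all-zero table above expected rather than a bug.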
How to sort this out? | closed | 2024-07-14T12:34:07Z | 2025-01-30T16:15:24Z | https://github.com/autogluon/autogluon/issues/4322 | [] | luochixq | 2 |
yeongpin/cursor-free-vip | automation | 71 | Ran into a problem | Hi, here's the situation: right after registering, the official page showed 150 and unlimited.
However, after I used it once,
it changed to:
50 / unlimited.
What's going on? | closed | 2025-02-16T13:28:38Z | 2025-02-19T07:09:32Z | https://github.com/yeongpin/cursor-free-vip/issues/71 | [] | duoduo666 | 8 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,540 | [Bug]: Color Discrepancies in Facial Restoration with ADetailer | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [X] The issue has been reported before but has not been fixed yet
### What happened?
I encountered noticeable color differences when using the ADetailer extension for facial restoration. I have tried switching different models, various versions of the web UI, different versions of ADetailer, and different versions of PyTorch, as well as upgrading to the latest graphics card driver, but the issue persists. Whenever ADetailer is enabled, the generated images have obvious color discrepancies. My system environment is Ubuntu 24.02, NVIDIA-SMI 560.35.03, Driver Version: 560.35.03, CUDA Version: 12.6, GPU: 4090.
open ADetailer:

close ADetailer:

The difference is here; it is very noticeable on my monitor.

### Steps to reproduce the problem
In text2img, enable ADetailer, select face_yolov8m.pt, and start generating images.
### What should have happened?
No color differences before and after restoration.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
In text2img, enable ADetailer, select face_yolov8m.pt, and start generating images.
### Console logs
```Shell
The logs outputted in the terminal show no errors.
```
### Additional information
_No response_ | open | 2024-10-08T08:42:53Z | 2024-10-23T02:33:10Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16540 | [
"bug-report"
] | guodong1994 | 1 |
tqdm/tqdm | jupyter | 1,270 | Cannot append elements to shell array while using tqdm | - [ ] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [ ] visual output bug
+ [x] Command-line unintentional behaviour
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```bash
python -c "import tqdm, sys; print(tqdm.__version__, sys.version, sys.platform)"
4.60.0 3.8.0 (default, Oct 6 2020, 11:07:52)
[GCC 10.2.0] linux
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
I have encountered an issue where elements are not appended to shell arrays when using the tqdm bar from the terminal. The following example produces an empty array
```
$ test_arr=()
total=10
for i in $(seq $total)
do
echo "$i"
test_arr+=("$i")
done | tqdm --total $total --null
echo "Size of array: ${#test_arr[@]}"
echo "with elements: ${test_arr[@]}"
100%|████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 242445.32it/s]
Size of array: 0
with elements:
```
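For what it's worth, this matches bash's usual pipeline behavior rather than anything tqdm-specific: each segment of a pipeline runs in a subshell, so the `test_arr+=` mutations happen in a child process and are discarded when it exits. A sketch of a workaround that keeps the loop in the current shell (here `cat > /dev/null` stands in for `tqdm --total $total --null`, which is an assumption about the setup):

```shell
#!/usr/bin/env bash
# Redirect the loop instead of piping it, so the loop body stays in the
# current shell and the array mutations survive.
test_arr=()
total=10
tmp=$(mktemp)
for i in $(seq "$total"); do
    echo "$i"
    test_arr+=("$i")
done > "$tmp"
# Feed the captured output to the consumer afterwards; `cat > /dev/null`
# stands in for `tqdm --total "$total" --null` here.
cat "$tmp" > /dev/null
rm -f "$tmp"
echo "Size of array: ${#test_arr[@]}"   # → Size of array: 10
```

If live progress matters, bash's process substitution has the same property: `done > >(tqdm --total "$total" --null)` keeps the loop in the current shell while streaming lines to tqdm as they are produced.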
Whereas the elements are properly added if you remove the bar
```
$ test_arr=()
total=10
for i in $(seq $total)
do
echo "$i"
test_arr+=("$i")
done
echo "Size of array: ${#test_arr[@]}"
echo "with elements: ${test_arr[@]}"
1
2
3
4
5
6
7
8
9
10
Size of array: 10
with elements: 1 2 3 4 5 6 7 8 9 10
``` | open | 2021-11-02T12:26:51Z | 2021-11-02T12:26:51Z | https://github.com/tqdm/tqdm/issues/1270 | [] | jakob1379 | 0 |
marcomusy/vedo | numpy | 1,187 | 2D images become non-pickable after changing cmap | Hi, I found a strange thing when running the spline_draw example.
Below is the officially provided code (vedo\examples\advanced\spline_draw.py) which works fine.

But when I changed the pic cmap to binary_r, I could no longer select the points on the image because the event.actor became None.

I am not sure if this is a feature or bug, but it seems not making sense if the pickability depends on the cmap. Any information would be appreciated! | closed | 2024-09-03T22:10:32Z | 2025-01-30T13:39:56Z | https://github.com/marcomusy/vedo/issues/1187 | [
"bug"
] | sudmat | 2 |
mwaskom/seaborn | data-visualization | 3,382 | sns.scatterplot | **_**hello,sir!
i find a question,When I customized the color range, I found through Searbon that it didn't follow my custom colors,the legend shows color is wrong
codeing:
```python
# imports implied by the snippet
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches

merged_df1 = pd.read_csv("C:\\Users\\Administrator\\Desktop\\data.csv")
plt.figure(figsize=(8.5, 8))
thresholds = [5, 50, 100, 200]
colors = ['darkslategrey', 'skyblue', 'deepskyblue', 'white', 'orange']
legend_patches = [
    mpatches.Patch(color=colors[0], label=f'<{thresholds[0]}'),
    mpatches.Patch(color=colors[1], label=f'{thresholds[0]} - {thresholds[1]}'),
    mpatches.Patch(color=colors[2], label=f'{thresholds[1]} - {thresholds[2]}'),
    mpatches.Patch(color=colors[3], label=f'{thresholds[2]} - {thresholds[3]}'),
    mpatches.Patch(color=colors[4], label=f'>{thresholds[3]}')
]
conditions = [
    (merged_df1['logpvalue'] < thresholds[0]),
    (merged_df1['logpvalue'] >= thresholds[0]) & (merged_df1['logpvalue'] < thresholds[1]),
    (merged_df1['logpvalue'] >= thresholds[1]) & (merged_df1['logpvalue'] < thresholds[2]),
    (merged_df1['logpvalue'] >= thresholds[2]) & (merged_df1['logpvalue'] < thresholds[3]),
    (merged_df1['logpvalue'] >= thresholds[3])
]
color_array = np.select(conditions, colors)
cmap = sns.color_palette(colors, as_cmap=True)
sns.scatterplot(x=merged_df1['group'], y=merged_df1['gene'], hue=color_array,
                size=merged_df1['tpm'], sizes=(5, 250), legend='auto',
                palette=cmap, edgecolor='black')
plt.title('Bubble Chart')
plt.xlabel('tissue')
plt.ylabel('motif')
sizes = [30, 100, 200]
legend_sizes = [plt.scatter([], [], s=size, color='grey', alpha=0.6) for size in sizes]
legend_labels = [f'TPM: {size}' for size in sizes]
a = plt.legend(loc='upper right', bbox_to_anchor=(1.2, 1), prop={'size': 10},
               handles=legend_patches, title='-log pvalue')
plt.legend(legend_sizes, legend_labels, title='Size', loc='upper right',
           bbox_to_anchor=(1.2, 0.75), prop={'size': 10})
plt.gca().add_artist(a)
```
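As a side note, the threshold binning that `np.select` performs here can be sanity-checked without NumPy; a stdlib sketch with the same thresholds and colors:

```python
from bisect import bisect_right

thresholds = [5, 50, 100, 200]
colors = ['darkslategrey', 'skyblue', 'deepskyblue', 'white', 'orange']

def color_for(value):
    # The number of thresholds <= value is exactly the index of the color bin,
    # matching the np.select conditions (< 5, 5-50, 50-100, 100-200, >= 200).
    return colors[bisect_right(thresholds, value)]

print(color_for(4))    # → darkslategrey
print(color_for(150))  # → white
print(color_for(250))  # → orange
```

On the plotting side, because `hue=color_array` makes the hue levels the color strings themselves, a discrete mapping such as `palette={c: c for c in colors}` (instead of a continuous colormap) should make the drawn points agree with the legend patches; whether that fully resolves the mismatch in this report is an assumption.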



| closed | 2023-06-12T09:33:48Z | 2023-06-12T12:36:22Z | https://github.com/mwaskom/seaborn/issues/3382 | [] | gavinjzg | 2 |
horovod/horovod | deep-learning | 4,013 | Error install horovod with python 3.11.5 on macOS 11.3.1 | **Environment:**
1. Framework: TensorFlow
2. Framework version: 2.12.0
3. Horovod version: v0.28.1
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version: 3.11.5
8. Spark / PySpark version:
9. Ray version: 2.9.0
10. OS and version: macOS 11.3.1
11. GCC version: 9.5.0
12. CMake version: 3.28.1
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
The install cmd:
```shell
pip install 'horovod[tensorflow]'
```
The envs:
```shell
HOROVOD_WITH_TENSORFLOW=1
HOROVOD_WITHOUT_PYTORCH=1
HOROVOD_WITHOUT_MXNET=1
```
The errors:
```shell
running install_scripts
[WARNING] This wheel needs a higher macOS version than the version your Python interpreter is compiled against. To silence this warning, set MACOSX_DEPLOYMENT_TARGET to at least 11_0 or recreate these files with lower MACOSX_DEPLOYMENT_TARGET:
build/bdist.macosx-10.9-x86_64/wheel/horovod/tensorflow/mpi_lib.cpython-311-darwin.so
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/7t/kj0yg5_x5c398pp4jht7p6tm0000gp/T/pip-install-q40x46lh/horovod_611b3167fef94970beb553a171d240da/setup.py", line 213, in <module>
setup(name='horovod',
File "/myhome/anaconda3/envs/ray-examples/lib/python3.11/site-packages/setuptools/__init__.py", line 103, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/myhome/anaconda3/envs/ray-examples/lib/python3.11/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "/myhome/anaconda3/envs/ray-examples/lib/python3.11/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/myhome/anaconda3/envs/ray-examples/lib/python3.11/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/myhome/anaconda3/envs/ray-examples/lib/python3.11/site-packages/setuptools/dist.py", line 989, in run_command
super().run_command(command)
File "/myhome/anaconda3/envs/ray-examples/lib/python3.11/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/myhome/anaconda3/envs/ray-examples/lib/python3.11/site-packages/wheel/bdist_wheel.py", line 328, in run
impl_tag, abi_tag, plat_tag = self.get_tag()
^^^^^^^^^^^^^^
File "/myhome/anaconda3/envs/ray-examples/lib/python3.11/site-packages/wheel/bdist_wheel.py", line 278, in get_tag
assert tag in supported_tags, "would build wheel with unsupported tag {}".format(tag)
^^^^^^^^^^^^^^^^^^^^^
AssertionError: would build wheel with unsupported tag ('cp311', 'cp311', 'macosx_11_0_x86_64')
```
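For what it's worth, the `[WARNING]` in the log itself points at a possible workaround; a sketch (the pip line is left commented out, and whether this resolves the `bdist_wheel` tag assertion is an untested assumption):

```shell
# Build with a deployment target matching the wheel's requirement, as the
# warning suggests, then rebuild Horovod from scratch.
export MACOSX_DEPLOYMENT_TARGET=11.0
export HOROVOD_WITH_TENSORFLOW=1
export HOROVOD_WITHOUT_PYTORCH=1
export HOROVOD_WITHOUT_MXNET=1
# Then rebuild without cached artifacts:
# pip install --no-cache-dir 'horovod[tensorflow]'
```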
| open | 2023-12-22T10:36:38Z | 2023-12-22T10:36:38Z | https://github.com/horovod/horovod/issues/4013 | [
"bug"
] | DriverSong | 0 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 109 | Add verbosity flag to remove print statements | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
The print statements make the outputs too verbose - it would be useful to optionally disable them using a flag
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
Add an optional flag in the node classes to remove print statements
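One common way to implement this (a sketch, not ScrapeGraphAI's actual node API — class and method names are assumptions):

```python
class BaseNode:
    """Sketch of a graph node with an opt-in verbosity switch
    (not ScrapeGraphAI's actual class; names are assumptions)."""

    def __init__(self, node_name, verbose=False):
        self.node_name = node_name
        self.verbose = verbose

    def _log(self, message):
        # Centralised print: silent unless the caller opted in.
        if self.verbose:
            print(f"[{self.node_name}] {message}")

    def execute(self, state):
        self._log("executing node")
        return state
```

Nodes then replace bare `print(...)` calls with `self._log(...)`, and the graph constructor can thread a single `verbose` flag down to every node it builds.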
| closed | 2024-04-29T19:48:45Z | 2024-04-30T15:00:16Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/109 | [
"enhancement"
] | 0xj7r | 2 |
strawberry-graphql/strawberry | graphql | 3,413 | Surprising FIFO behaviour of lifecycle hooks | ## Describe the (maybe) Bug
I'm surprised that various lifecycle hooks (`on_operation`, `on_parse`, `on_validate`, `on_execute`) are completed in a FIFO fashion, rather than LIFO.
I would expect that if we're wrapping an operation with `on_operation` with 2 extension the following will happen (LIFO):
* First extension starts (before `yield` part)
* Second extension starts (before `yield` part)
* Second extension completes (after `yield` part)
* First extension completes (after `yield` part)
However, the order I'm _actually_ seeing is the following (FIFO):
* First extension starts (before `yield` part)
* Second extension starts (before `yield` part)
* First extension completes (after `yield` part)
* Second extension completes (after `yield` part)
I'm concerned about it because extension can mutate state, so it would be good for them to behave like a context manager. [Example of state mutation.](https://strawberry.rocks/docs/guides/custom-extensions#execution-context)
In fact, I do find it surprising that this is how things work out. Notably, overriding `resolve` doesn't have the same problem – but it also happens in a slightly different way.
## Repro details
Here're some toy extensions I built to investigate things:
```python
class MyCustomExtension(SchemaExtension):
id = "?"
@override
async def on_validate(self) -> AsyncGenerator[None, None]:
print(f"GraphQL validation start ({self.id})")
yield
print(f"GraphQL validation end ({self.id})")
@override
def on_parse(self) -> Generator[None, None, None]:
print(f"GraphQL parsing start ({self.id})")
yield
print(f"GraphQL parsing end ({self.id})")
@override
def on_execute(self) -> Generator[None, None, None]:
print(f"GraphQL execution start ({self.id})")
yield
print(f"GraphQL execution end ({self.id})")
@override
def on_operation(self) -> Generator[None, None, None]:
print(f"GraphQL operation start ({self.id})")
yield
print(f"GraphQL operation end ({self.id})")
@override
async def resolve(
self,
_next: Callable[..., object],
root: object,
info: GraphQLResolveInfo,
*args,
**kwargs,
) -> AwaitableOrValue[object]:
random_id = randint(0, 1000)
print(f"GraphQL resolver {random_id} start ({self.id})")
result = await await_maybe(_next(root, info, *args, **kwargs))
print(f"GraphQL resolver {random_id} end ({self.id})")
return result
class MyCustomExtensionA(MyCustomExtension):
id = "A"
class MyCustomExtensionB(MyCustomExtension):
id = "B"
```
I'm testing it by running a simple query against a GraphQL Schema:
```python
@strawberry.type
class Me:
id: str
@strawberry.type
class Query:
@strawberry.field
@staticmethod
async def me() -> Me:
return Me(id="foo")
schema = MySchema(
query=Query,
extensions=[MyCustomExtensionA, MyCustomExtensionB],
)
```
When running a simple GraphQL query against this schema:
```graphql
query {
me { id }
}
```
I see the following lines being printed:
```
GraphQL operation start (A)
GraphQL operation start (B)
GraphQL parsing start (A)
GraphQL parsing start (B)
GraphQL parsing end (A)
GraphQL parsing end (B)
GraphQL validation start (A)
GraphQL validation start (B)
GraphQL validation end (A)
GraphQL validation end (B)
GraphQL execution start (A)
GraphQL execution start (B)
GraphQL resolver 598 start (B)
GraphQL resolver 975 start (A)
GraphQL resolver 975 end (A)
GraphQL resolver 598 end (B)
GraphQL resolver 196 start (B)
GraphQL resolver 638 start (A)
GraphQL resolver 638 end (A)
GraphQL resolver 196 end (B)
GraphQL execution end (A)
GraphQL execution end (B)
GraphQL operation end (A)
GraphQL operation end (B)
```
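A toy model of the two completion orders, independent of Strawberry's actual hook machinery (all names here are made up): advancing each generator to its `yield` and later resuming them in the same order reproduces the FIFO trace above, while resuming in reverse gives the LIFO, context-manager-like trace I expected.

```python
# Toy model of generator-based lifecycle hooks (not Strawberry's API).
def hook(name):
    def make(log):
        log.append(f"{name} start")
        yield
        log.append(f"{name} end")
    return make

def run(hook_factories, lifo):
    """Start every hook up to its `yield`, do the work, then resume them."""
    log = []
    gens = []
    for make in hook_factories:
        gen = make(log)
        next(gen)                      # run the "start" half
        gens.append(gen)
    log.append("operation")
    resume_order = reversed(gens) if lifo else gens
    for gen in resume_order:
        try:
            next(gen)                  # run the "end" half
        except StopIteration:
            pass
    return log

print(run([hook("A"), hook("B")], lifo=False))
# → ['A start', 'B start', 'operation', 'A end', 'B end']   (FIFO, as observed)
print(run([hook("A"), hook("B")], lifo=True))
# → ['A start', 'B start', 'operation', 'B end', 'A end']   (LIFO, as expected)
```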
## System Information
- Strawberry version (if applicable): `0.220.0` | closed | 2024-03-19T15:27:12Z | 2025-03-20T15:56:37Z | https://github.com/strawberry-graphql/strawberry/issues/3413 | [
"bug"
] | kkom | 2 |
tensorpack/tensorpack | tensorflow | 857 | model without input source. | Hi,
My model does not require an input dataflow; all I need is to tick off the steps.
How can I implement this? | closed | 2018-08-08T12:23:44Z | 2018-08-14T21:56:08Z | https://github.com/tensorpack/tensorpack/issues/857 | [
"usage"
] | mikeun | 1 |
jupyter-book/jupyter-book | jupyter | 1,940 | Issue on page /publish/gh-pages.html | In the section on GitHub Actions, the example YAML refers to `master`; this needs to be changed to `main` to reflect GitHub's change to branch naming conventions. This is needed for the workflow to execute correctly. | open | 2023-02-24T13:41:55Z | 2023-09-30T12:12:28Z | https://github.com/jupyter-book/jupyter-book/issues/1940 | [] | mfernandes61 | 2
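For the record above, the described fix amounts to a one-line change in the workflow trigger; a sketch, assuming the standard gh-pages deploy workflow (the file name and surrounding keys are assumptions):

```yaml
# .github/workflows/deploy.yml (fragment; names assumed)
on:
  push:
    branches:
      - main   # was: master
```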
automl/auto-sklearn | scikit-learn | 1,365 | `AutoML::fit_ensemble` with `ensemble_size =0` causes crash | It seems there is no validation on `fit_ensemble` when ensemble size is `0`, causing an issue to appear as seen in #1327 | closed | 2022-01-10T10:57:58Z | 2022-03-07T21:10:23Z | https://github.com/automl/auto-sklearn/issues/1365 | [
"Good first issue",
"maintenance"
] | eddiebergman | 4 |
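The missing validation described in the auto-sklearn record above could look something like this guard (a sketch with a hypothetical signature, not the library's actual code):

```python
def fit_ensemble(ensemble_size, models):
    """Sketch of the kind of guard `fit_ensemble` could use
    (hypothetical signature, not auto-sklearn's actual code)."""
    if not isinstance(ensemble_size, int) or ensemble_size < 1:
        raise ValueError(
            f"ensemble_size must be a positive integer, got {ensemble_size!r}"
        )
    # Stand-in for the real ensemble-building work:
    return models[:ensemble_size]
```

Validating at the entry point turns the downstream crash into an immediate, descriptive `ValueError`.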
clovaai/donut | computer-vision | 212 | No answer in docVQA | I tried the inference of docVQA, but I don't get any answers. There is only the question in the output. | open | 2023-06-14T13:22:58Z | 2023-06-14T13:22:58Z | https://github.com/clovaai/donut/issues/212 | [] | GingL | 0