| repo_name (string, len 9-75) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, len 1-976) | body (string, len 0-254k) | state (string, 2 classes) | created_at (string, len 20) | updated_at (string, len 20) | url (string, len 38-105) | labels (list, len 0-9) | user_login (string, len 1-39) | comments_count (int64, 0-452) |
|---|---|---|---|---|---|---|---|---|---|---|---|
pyppeteer/pyppeteer | automation | 151 | How to disguise browser fingerprint? | I didn't find any documentation about injecting JavaScript before the page loads.
How can I modify the browser fingerprint information shown in the screenshot?
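For reference, pyppeteer does let you inject JavaScript before any page script runs via `page.evaluateOnNewDocument`, which is the usual hook for overriding fingerprint properties. A minimal sketch (the overridden properties and the function name are illustrative, not from this issue):

```python
# JavaScript evaluated in the page context before the page's own scripts run,
# so fingerprinting code sees the overridden values.
FINGERPRINT_OVERRIDES = """() => {
    Object.defineProperty(navigator, 'webdriver', {get: () => undefined});
    Object.defineProperty(navigator, 'languages', {get: () => ['en-US', 'en']});
}"""

async def open_stealth_page(url):
    # Imported here so the override string above stays importable without pyppeteer.
    from pyppeteer import launch
    browser = await launch()
    page = await browser.newPage()
    await page.evaluateOnNewDocument(FINGERPRINT_OVERRIDES)
    await page.goto(url)
    return browser, page
```

Calling `await open_stealth_page('https://example.com')` would apply the overrides before the page's own scripts execute.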

| closed | 2020-07-12T01:14:20Z | 2020-07-12T04:10:39Z | https://github.com/pyppeteer/pyppeteer/issues/151 | [
"invalid"
] | xiaohuimc | 1 |
allenai/allennlp | nlp | 5,260 | I think the implementation of bimpm_matching is wrong | <!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [ ] I have verified that the issue exists against the `main` branch of AllenNLP.
- [x] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/main/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [ ] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [ ] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [ ] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/main/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/main) to find out if the bug was already fixed in the main branch.
- [ ] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [ ] I have included in the "Related issues or possible duplicates" section below all related issues and possible duplicate issues (If there are none, check this box anyway).
- [ ] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [ ] I have included in the "Environment" section below the output of `pip freeze`.
- [ ] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
I read the source code of the Bi-MPM model (https://github.com/allenai/allennlp/blob/main/allennlp/modules/bimpm_matching.py) and found that the Attentive-Matching implementation is very different from what is described in the original paper.
For example, on lines 350-351, softmax is applied over the hidden_state dimension (the last dimension). However, to match the paper, I think we just need to divide the weighted sum by the sum of the weights.
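To make the reported difference concrete, here is a small numeric sketch (made-up numbers, not AllenNLP code): the paper's attentive matching divides the weighted sum of hidden states by the sum of the attention weights, whereas a softmax over the last dimension normalizes across hidden features instead:

```python
import numpy as np

alpha = np.array([0.2, 0.5, 0.3])         # attention weights over 3 timesteps
h = np.arange(12.0).reshape(3, 4) / 10    # 3 hidden states of size 4

# Paper: weighted sum of hidden states, divided by the sum of the weights
h_mean_paper = (alpha[:, None] * h).sum(axis=0) / alpha.sum()

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# What the issue describes: softmax over the hidden_state dimension,
# which forces the result to sum to 1 across features instead
h_mean_softmax = softmax((alpha[:, None] * h).sum(axis=0))
```

The two results differ: the softmax version always sums to 1 across the hidden dimension, which the paper's weighted mean generally does not.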
<!-- Please provide a clear and concise description of what the bug is here. -->
<details>
<summary><b>Python traceback:</b></summary>
<p>
<!-- Paste the traceback from any exception (if there was one) in between the next two lines below -->
```
```
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS:
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version:
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
```
</p>
</details>
## Steps to reproduce
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
```
```
</p>
</details>
| closed | 2021-06-15T02:41:30Z | 2021-07-28T16:13:55Z | https://github.com/allenai/allennlp/issues/5260 | [
"bug",
"stale"
] | zhaowei-wang-nlp | 6 |
healthchecks/healthchecks | django | 1,006 | Discord Webhook integration | Hello,
thanks for healthchecks !
Would it be possible to get [Discord Webhook integration](https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks) ?
They are simpler to set up than the Discord App integration.
| open | 2024-05-25T16:12:58Z | 2024-08-28T16:45:04Z | https://github.com/healthchecks/healthchecks/issues/1006 | [
"good-first-issue"
] | r3mi | 9 |
MaartenGr/BERTopic | nlp | 1,109 | `_preprocess_text` does not remove stop words | I tried to aggregate documents by topics using the following code:
```
# Aggregate documents by topics
documents = pd.DataFrame({"Document": docs, "ID": range(len(docs)), "Topic": topics})
documents_per_topic = documents.groupby(['Topic'], as_index=False).agg({'Document': ' '.join})
cleaned_docs = topic_model._preprocess_text(documents_per_topic.Document.values)
```
However, I found that the stop words do not seem to have been removed, as seen in the word cloud (there are many `s`, `ing`, etc.):

BTW, I have explicitly set stop words in the BERTopic model config:
```
import os
import pandas as pd
path_dataset = 'Dataset'
df_all = pd.read_json(os.path.join(path_dataset, 'all_filtered.json'))
docs = df_all['Challenge_original_content_gpt_summary'].tolist()
# visualize the best challenge topic model
from sklearn.feature_extraction.text import TfidfVectorizer
from bertopic.vectorizers import ClassTfidfTransformer
from sentence_transformers import SentenceTransformer
from bertopic.representation import KeyBERTInspired
from bertopic import BERTopic
from hdbscan import HDBSCAN
from umap import UMAP
# Step 1 - Extract embeddings
embedding_model = SentenceTransformer("all-mpnet-base-v2")
# Step 2 - Reduce dimensionality
umap_model = UMAP(n_components=5, metric='manhattan',
random_state=42, low_memory=False)
# Step 3 - Cluster reduced embeddings
min_samples = int(35 * 0.5)
hdbscan_model = HDBSCAN(min_cluster_size=35,
min_samples=min_samples, prediction_data=True)
# Step 4 - Tokenize topics
vectorizer_model = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
# Step 5 - Create topic representation
ctfidf_model = ClassTfidfTransformer(reduce_frequent_words=True)
# Step 6 - (Optional) Fine-tune topic representation
representation_model = KeyBERTInspired()
# All steps together
topic_model = BERTopic(
embedding_model=embedding_model,
umap_model=umap_model,
hdbscan_model=hdbscan_model,
vectorizer_model=vectorizer_model,
ctfidf_model=ctfidf_model,
representation_model=representation_model,
calculate_probabilities=True
)
topics, probs = topic_model.fit_transform(docs)
``` | closed | 2023-03-21T06:33:50Z | 2023-03-21T10:06:31Z | https://github.com/MaartenGr/BERTopic/issues/1109 | [] | zhimin-z | 1 |
dmlc/gluon-cv | computer-vision | 1,514 | temporal segment network load increases on inference | I tried inferencing on a pretrained TSN model for Action recognition from Gluon zoo. On inferencing the first few frames the CPU consumption was lower, but it gradually increased on inferencing on later frames | closed | 2020-11-11T05:31:07Z | 2021-05-22T06:40:20Z | https://github.com/dmlc/gluon-cv/issues/1514 | [
"Stale"
] | athulvingt | 1 |
AntonOsika/gpt-engineer | python | 896 | pip metadata problem in 0.2.0? (downgrades install to 0.1.0) |
## Expected Behavior
Using `python -m pip install gpt-engineer` should install version 0.2.0 by now.
## Current Behavior
pip seems to reject 0.2.0 (as the package metadata is "0.0.0"???) and installs 0.1.0 instead.
## Failure Information
Running the below pip command shows a message:
```
Discarding https://files.pythonhosted.org/packages/17/d3/adbca4a7f982636fc8a57f41bd174105f5b78b557749fc1e5d19d6f89dea/gpt_engineer-0.2.0.tar.gz (from https://pypi.org/simple/gpt-engineer/) (requires-python:>=3.8.1,<3.12): Requested gpt-engineer from https://files.pythonhosted.org/packages/17/d3/adbca4a7f982636fc8a57f41bd174105f5b78b557749fc1e5d19d6f89dea/gpt_engineer-0.2.0.tar.gz has inconsistent version: expected '0.2.0', but metadata has '0.0.0'
Downloading gpt_engineer-0.1.0-py3-none-any.whl.metadata (7.5 kB)
```
### Steps to Reproduce
Using an anaconda installation on MacOS
conda create -n gpt-engineer python=3.11.5
conda activate gpt-engineer
python -m pip install gpt-engineer
| closed | 2023-12-11T15:57:10Z | 2023-12-14T17:53:00Z | https://github.com/AntonOsika/gpt-engineer/issues/896 | [
"bug",
"triage"
] | IanRogers | 2 |
kornia/kornia | computer-vision | 2,173 | Using images from tutorials is breaking in some places | ## 📚 Documentation
Using images from tutorials is crashing in some places
- Face detection - https://kornia.readthedocs.io/en/latest/applications/face_detection.html

similar to https://github.com/kornia/kornia/pull/2167/commits/b0f2e61c95b1a1ad8290bb589ffeeb864839fc6d | open | 2023-01-23T21:52:30Z | 2023-01-24T16:08:43Z | https://github.com/kornia/kornia/issues/2173 | [
"bug :bug:",
"docs :books:"
] | johnnv1 | 3 |
pennersr/django-allauth | django | 3,161 | Changing primary key for user model causes No Reverse Match | Reverse for 'account_reset_password_from_key' with keyword arguments '{'uidb36': 'mgodhrawala402@gmail.com', 'key': 'bbz25w-9c6941d5cb69a49883f15bc8e076f504'}' not found. 1 pattern(s) tried: ['accounts/password/reset/key/(?P<uidb36>[0-9A-Za-z]+)-(?P<key>.+)/$']
| closed | 2022-09-19T00:18:17Z | 2022-12-10T21:55:04Z | https://github.com/pennersr/django-allauth/issues/3161 | [] | mustansirgodhrawala | 1 |
plotly/dash | data-visualization | 2,449 | dcc.Upload doesn't support rendering of tif image files with html.Img | **Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.8.1
dash-canvas 0.1.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
dash-uploader 0.6.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: Ubuntu
- Browser Chrome
- Version Version 110.0.5481.177 (Official Build) (64-bit)
**Describe the bug**
I am trying to render .tif image files from `dcc.Upload` using `html.Img` components. The tifs render perfectly fine using a direct path read:
```
html.Img(src=Image.open(os.path.join(os.path.dirname(os.path.abspath(__file__)),
"sample.tif")))
```
but when using dcc.Upload and the base64 string, nothing appears.
**Expected behavior**
I would expect Plotly Dash components to be able to render .tif files uploaded via dcc.Upload, just as html.Img can render them from a direct path. Is some additional conversion required, is this a bug, or is this feature simply not supported?
This lack of image rendering can also be seen here: https://dash.plotly.com/dash-core-components/upload when attempting to upload .tifs, nothing will appear.
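Browsers cannot render TIFF data URLs natively, which would explain why `html.Img` stays blank for uploaded tifs while PNG/JPEG uploads work. One workaround is to transcode the uploaded TIFF to PNG server-side before handing it to `html.Img` (a sketch with Pillow; the function name is illustrative):

```python
import base64
import io

from PIL import Image

def tif_contents_to_png_src(contents):
    """Turn a dcc.Upload 'contents' data URL holding a TIFF into a PNG data URL."""
    _header, b64data = contents.split(",", 1)
    img = Image.open(io.BytesIO(base64.b64decode(b64data)))
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="PNG")
    return "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode()
```

The returned string can be passed straight to `html.Img(src=...)` in the upload callback.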
| open | 2023-03-10T15:31:05Z | 2024-08-13T14:29:22Z | https://github.com/plotly/dash/issues/2449 | [
"bug",
"P3"
] | matt-sd-watson | 2 |
RobertCraigie/prisma-client-py | asyncio | 351 | Add foreign key constraint failed error | closed | 2022-04-01T17:51:39Z | 2022-04-30T04:07:06Z | https://github.com/RobertCraigie/prisma-client-py/issues/351 | [
"kind/subtask"
] | RobertCraigie | 0 | |
google-deepmind/graph_nets | tensorflow | 31 | Pretrained Networks | closed | 2018-12-01T06:38:10Z | 2019-03-27T23:03:00Z | https://github.com/google-deepmind/graph_nets/issues/31 | [] | ferreirafabio | 0 | |
jina-ai/serve | machine-learning | 5,401 | Wrong host IP is actually served | **Describe the bug**
I start a service
```python
f = Flow(port=22456, host_in='127.0.0.1', host='127.0.0.1').add(uses=xxxx)
with f:
f.block()
```
And when I run `netstat -ant`:

It seems that it binds to 0.0.0.0, not 127.0.0.1. Why?
**Environment**
jina 3.10.1
docarray 0.17.0
jcloud 0.0.35
jina-hubble-sdk 0.19.1
jina-proto 0.1.13
protobuf 3.20.0
proto-backend cpp
grpcio 1.48.0
pyyaml 5.3.1
python 3.7.0
platform Linux
platform-release 3.10.0-862.14.1.5.h442.eulerosv2r7.x86_64
platform-version #1 SMP Fri May 15 22:01:58 UTC 2020
architecture x86_64
processor x86_64
uid 171880166524263
session-id 22b0ff44-65b2-11ed-8def-9c52f8450567
uptime 2022-11-16T21:25:27.563941
ci-vendor (unset)
internal False
JINA_DEFAULT_HOST (unset)
JINA_DEFAULT_TIMEOUT_CTRL (unset)
JINA_DEPLOYMENT_NAME (unset)
JINA_DISABLE_UVLOOP (unset)
JINA_EARLY_STOP (unset)
JINA_FULL_CLI (unset)
JINA_GATEWAY_IMAGE (unset)
JINA_GRPC_RECV_BYTES (unset)
JINA_GRPC_SEND_BYTES (unset)
JINA_HUB_NO_IMAGE_REBUILD (unset)
JINA_LOG_CONFIG (unset)
JINA_LOG_LEVEL (unset)
JINA_LOG_NO_COLOR (unset)
JINA_MP_START_METHOD (unset)
JINA_OPTOUT_TELEMETRY (unset)
JINA_RANDOM_PORT_MAX (unset)
JINA_RANDOM_PORT_MIN (unset)
| closed | 2022-11-17T02:56:57Z | 2022-11-17T09:00:07Z | https://github.com/jina-ai/serve/issues/5401 | [] | wqh17101 | 1 |
pytest-dev/pytest-mock | pytest | 312 | #note-about-usage-as-context-manager leads to no message | https://github.com/pytest-dev/pytest-mock/blob/35e2dca0ab5e0a0e1580359f7effd6ef99a7c8e6/src/pytest_mock/plugin.py#L212-L220
It links to a section of the README.rst that no longer exists:
https://github.com/pytest-dev/pytest-mock/blob/4c3caaf2260f77ed10e855a20207023dded12c07/README.rst#L277-L315 | closed | 2022-09-09T10:25:48Z | 2022-09-09T11:58:12Z | https://github.com/pytest-dev/pytest-mock/issues/312 | [] | stdedos | 1 |
ydataai/ydata-profiling | pandas | 1,426 | to_html ignores sensitive parameter and exposes data | ### Current Behaviour
In ydata-profiling v4.5.0, `ProfileReport.to_html()` ignores the `sensitive` parameter and exposes data, similar to the bug reported in #1300.
### Expected Behaviour
No sensitive data shown.
### Data Description
A list of integers from 0 - 9, inclusive.
### Code that reproduces the bug
```Python
import pandas as pd
from ydata_profiling import ProfileReport
data = [[i] for i in range(10)]
df = pd.DataFrame(data, columns=['sensitive_column'])
displayHTML(ProfileReport(df, sensitive=True).to_html())
```
### pandas-profiling version
v4.5.0
### Dependencies
```Text
pandas==1.1.5
```
### OS
_No response_
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | open | 2023-08-11T18:25:03Z | 2023-08-24T15:38:41Z | https://github.com/ydataai/ydata-profiling/issues/1426 | [
"information requested ❔"
] | ch-nickgustafson | 1 |
piskvorky/gensim | nlp | 3,184 | Reduce duplication in word2vec.pyx source code | OK, we can deal with this separately.
_Originally posted by @mpenkov in https://github.com/RaRe-Technologies/gensim/pull/3169#discussion_r660297089_ | open | 2021-06-29T05:44:26Z | 2021-06-29T05:44:42Z | https://github.com/piskvorky/gensim/issues/3184 | [
"housekeeping"
] | mpenkov | 0 |
HIT-SCIR/ltp | nlp | 655 | Is it still necessary to call ltp.to("cuda")? | When loading the model, I saw that the initialization already includes a check for whether a GPU is available

I also saw a similar check in the documentation, so is this call actually necessary?

| closed | 2023-06-30T02:35:46Z | 2023-07-04T07:56:42Z | https://github.com/HIT-SCIR/ltp/issues/655 | [] | liyanfu520 | 1 |
pydata/pandas-datareader | pandas | 125 | Treasury returns | To my knowledge, pandas-datareader does not support loading of US treasury returns from the federal reserve. These are implemented in zipline: https://github.com/quantopian/zipline/blob/master/zipline/data/treasuries.py
Is there interest in adding these to `pandas-datareader`? What's the preferred style?
| closed | 2015-11-23T11:28:54Z | 2018-01-18T17:27:50Z | https://github.com/pydata/pandas-datareader/issues/125 | [] | twiecki | 2 |
coqui-ai/TTS | python | 2,884 | [Bug] Audio plays too fast when synthesizing one-word speech like "hello" | ### Describe the bug
Hello, I want to synthesize speech that contains only one word, like "Hello".
I tried the model named "tts_models/en/ek1/tacotron2". I got the wav file, but it plays so fast that the word cannot be heard clearly. Is there any way to solve this?
### To Reproduce
tts --text "hello" --model_name tts_models/en/ek1/tacotron2 --vocoder_name vocoder_models/en/ek1/wavegrad --out_path ./hello.wav
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
-TTS 1.3.
```
### Additional context
_No response_ | closed | 2023-08-23T10:42:18Z | 2023-08-26T20:29:32Z | https://github.com/coqui-ai/TTS/issues/2884 | [
"bug"
] | travisCxy | 2 |
PokeAPI/pokeapi | api | 441 | Egg Group Missing: Field | As described here:
https://bulbapedia.bulbagarden.net/wiki/Egg_Group
https://bulbapedia.bulbagarden.net/wiki/Field_(Egg_Group)
The field group is missing (or does not seem to work.)
Its other group name would be **ground** but that does not seem to work either.
Ex.
I've tried both **egg-group/field** and **egg-group/ground**.
Neither of them yields a result. | closed | 2019-08-05T07:05:51Z | 2019-08-06T15:26:02Z | https://github.com/PokeAPI/pokeapi/issues/441 | [] | bausshf | 3 |
Kitware/trame | data-visualization | 623 | Bug with exporting the plotter to an HTML file | I found a bug with exporting the plotter to an HTML file ```self.plotter.export_html()```.
When I open it in a browser, it works normally.
However, when I open trame in desktop mode (using the line `sys.argv.append('--app')`),
not only does the HTML file fail to export, but a strange dialog box also pops up.
I look forward to a resolution to this issue.
Thanks.
Windows 11
Python 3.11
Trame 3.6.5
Pyvista 0.44.0
``` python
import sys
import pyvista as pv
from pyvista.trame import PyVistaLocalView
from trame.decorators import TrameApp
from trame_vtk.modules.vtk.serializers import encode_lut
from trame.app import get_server
from trame.widgets import vuetify3 as vuetify
from trame.ui.vuetify3 import SinglePageLayout
encode_lut(True)
@TrameApp()
class KlGModelApp:
def __init__(self):
self.view = None
self.server = get_server(client_type="vue3")
self.plotter = pv.Plotter(off_screen=True)
sphere = pv.Sphere(center=(0, 0, 0))
self.plotter.add_mesh(sphere)
self.build_ui()
@property
def state(self):
return self.server.state
@property
def ctrl(self):
return self.server.controller
def export_to_html(self):
self.plotter.export_html("pv.html")
def build_ui(self):
with SinglePageLayout(_server=self.server) as layout:
layout.title.set_text("(ο´・д・)??")
with layout.toolbar:
vuetify.VDivider(vertical=True, classes="mx-2")
vuetify.VBtn(children="Export HTML", click=self.export_to_html)
with layout.content:
with vuetify.VContainer(fluid=True, classes="pa-0 fill-height"):
self.view = PyVistaLocalView(self.plotter)
# -----------------------------------------------------------------------------
# Main
# -----------------------------------------------------------------------------
if __name__ == "__main__":
app = KlGModelApp()
sys.argv.append('--app')
app.server.start(width=1800, height=900, port=8081)
```
| closed | 2024-10-28T10:02:28Z | 2024-11-04T14:54:55Z | https://github.com/Kitware/trame/issues/623 | [] | Brandon-Xu | 2 |
jupyterhub/zero-to-jupyterhub-k8s | jupyter | 2,995 | Customizing jupyter docker image not working | <!-- Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! -->
### Bug description
I tried creating a customized notebook image following https://z2jh.jupyter.org/en/latest/jupyterhub/customizing/user-environment.html#customize-an-existing-docker-image. But after building the image, pushing it to Docker Hub, and changing the config.yaml file, it is not working: the image puller pod is in a CrashLoopBackOff state.
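For reference, the pattern from that guide ends with a config.yaml override pointing at the pushed image, roughly like the sketch below (name and tag are placeholders). A reference that the nodes cannot actually pull (wrong name, missing tag, or a private repository without pull credentials) is a common cause of the puller pods going into Init:CrashLoopBackOff:

```yaml
singleuser:
  image:
    name: <dockerhub-user>/custom-notebook   # placeholder: your pushed repository
    tag: v1                                  # a tag that actually exists on Docker Hub
```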
#### Expected behaviour
continuous image puller is working
#### Actual behaviour
The continuous image puller is in the Init:CrashLoopBackOff state.
### How to reproduce
<!-- Use this section to describe the steps that a user would take to experience this bug. -->
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
### Your personal set up
<!--
Tell us a little about the system you're using.
Please include information about how you installed,
e.g. are you using a distribution such as zero-to-jupyterhub or the-littlest-jupyterhub.
-->
- OS:
<!-- [e.g. ubuntu 20.04, macOS 11.0] -->
- Version(s):
<!-- e.g. jupyterhub --version, python --version --->
<details><summary>Full environment</summary>
<!-- For reproduction, it's useful to have the full environment. For example, the output of `pip freeze` or `conda list` --->
```
# paste output of `pip freeze` or `conda list` here
```
</details>
<details><summary>Configuration</summary>
<!--
For JupyterHub, especially include information such as what Spawner and Authenticator are being used.
Be careful not to share any sensitive information.
You can paste jupyterhub_config.py below.
To exclude lots of comments and empty lines from auto-generated jupyterhub_config.py, you can do:
grep -v '\(^#\|^[[:space:]]*$\)' jupyterhub_config.py
-->
```python
# jupyterhub_config.py
```
</details>
<details><summary>Logs</summary>
<!--
Errors are often logged by jupytehub. How you get logs depends on your deployment.
With kubernetes it might be:
kubectl get pod # hub pod name starts with hub...
kubectl logs hub-...
# or for a single-user server
kubectl logs jupyter-username
Or the-littlest-jupyterhub:
journalctl -u jupyterhub
# or for a single-user server
journalctl -u jupyter-username
-->
```
# paste relevant logs here, if any
```
</details>
| closed | 2023-01-10T09:22:32Z | 2023-01-10T09:27:52Z | https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/2995 | [
"support"
] | rafmacalaba | 2 |
tpvasconcelos/ridgeplot | plotly | 171 | [DISCUSSION] Using `ridgeplot` for `sktime` and `skpro` distributional predictions? | For a while I have now been thinking about what a good plotting modality would be for fully distributional predictions, i.e., the output of `predict_proba` in `sktime` or `skpro`.
The challenge is that you have a (marginal) distribution for each entry in a `pandas`-like table, which seems hard to visualize. I've experimented with panels (`matplotlib.subplots`) but I wasn't quite happy with the result.
Now, by accident (just curious clicking), I've discovered `ridgeplot`.
What would you think of using the look & feel of `ridgeplot` as a plotting function in `BaseDistribution`, where rows are the rows of the data-frame-like structure, and maybe there are also columns? (I am happy with the single-variable case too.)
The main difference is that the distribution does not need to be estimated via KDE; you already have it in a form where you can access `pdf`, `cdf`, etc. exactly, and you have the quantile function too, which helps with selecting the x-axis range.
Plotting `cdf` and other distribution-defining functions would also be neat; of course `pdf` (if it exists) or `cdf` (for survival) are already great.
Imagined usage, something like
```python
fcst = BuildSth(Complex(params), more_params)
fcst.fit(y, fh=range(1, 10))
y_dist = fcst.predict_proba()
y_dist.plot() # default is pdf for continuous distributions
y_dist.plot("cdf")
```
Dependencies-wise, one could imagine `ridgeplot` as an optional plotting dependency of `skpro` (like `matplotlib` or `seaborn`), and therefore indirectly of `sktime`.
What do you think? | closed | 2024-01-31T02:36:32Z | 2024-02-01T11:48:15Z | https://github.com/tpvasconcelos/ridgeplot/issues/171 | [] | fkiraly | 2 |
TencentARC/GFPGAN | pytorch | 519 | Image blending problem while caching the gfpgan model | I have created an API for Real-ESRGAN using FastAPI, and it is working properly for multiple user requests. However, when I am initially loading the models (Real-ESRGAN and GFPGAN) using lru_cache (functools) to decrease the inference time, I am encountering following two errors during execution.
**1. Sometimes I have getting faces of one user request mixed up with another user request.**

**2. In some requests, I get the following error.**
```
Traceback (most recent call last):
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/starlette/middleware/errors.py", line 164, in _call_
await self.app(scope, receive, _send)
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/starlette/middleware/exceptions.py", line 62, in _call_
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/starlette/routing.py", line 758, in _call_
await self.middleware_stack(scope, receive, send)
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/starlette/routing.py", line 778, in app
await route.handle(scope, receive, send)
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/starlette/routing.py", line 299, in handle
await self.app(scope, receive, send)
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/starlette/routing.py", line 79, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/starlette/routing.py", line 74, in app
response = await func(request)
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/fastapi/routing.py", line 299, in app
raise e
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/fastapi/routing.py", line 294, in app
raw_response = await run_endpoint_function(
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/fastapi/routing.py", line 193, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/starlette/concurrency.py", line 42, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
return await future
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
File "D:/Image Super Resolution/Models/Real-ESRGAN/api.py", line 102, in process_image
intermediate_image = hd_process(img_array)
File "D:/Image Super Resolution/Models/Real-ESRGAN/api.py", line 58, in hd_process
, , output = face_enhancer.enhance(img_array, has_aligned=False, only_center_face=False, paste_back=True)
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/gfpgan/utils.py", line 144, in enhance
restored_img = self.face_helper.paste_faces_to_input_image(upsample_img=bg_img)
File "D:/Image Super Resolution/Models/Real-ESRGAN/env/lib/site-packages/facexlib/utils/face_restoration_helper.py", line 291, in paste_faces_to_input_image
assert len(self.restored_faces) == len(self.inverse_affine_matrices), ('length of restored_faces and affine_matrices are different.')
AssertionError: length of restored_faces and affine_matrices are different.
```
This is the small code snippet from my api:
```
@lru_cache()
def loading_model():
    real_esrgan_model_path = "D:/Image Super Resolution/Models/Real-ESRGAN/weights/RealESRGAN_x4plus.pth"
    gfpgan_model_path = "D:/Image Super Resolution/Models/Real-ESRGAN/env/Lib/site-packages/gfpgan/weights/GFPGANv1.3.pth"
    model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4)
    netscale = 4
    upsampler = RealESRGANer(scale=netscale, model_path=real_esrgan_model_path, dni_weight=0.5, model=model, tile=0, tile_pad=10, pre_pad=0, half=False)
    face_enhancer = GFPGANer(model_path=gfpgan_model_path, upscale=4, arch='clean', channel_multiplier=2, bg_upsampler=upsampler)
    return face_enhancer

def hd_process(file):
    filename = file.filename.split('.')[0]
    save_path = os.path.join("temp_images", f"(unknown).jpg")
    content = file.file.read()
    with open(save_path, 'wb') as image_file:
        image_file.write(content)
    img_array = cv2.imread(save_path, cv2.IMREAD_UNCHANGED)
    face_enhancer = loading_model()
    with torch.no_grad():
        _, _, output = face_enhancer.enhance(img_array, has_aligned=False, only_center_face=False, paste_back=True)
    output_rgb = cv2.cvtColor(output, cv2.COLOR_BGR2RGB)
    del face_enhancer
    torch.cuda.empty_cache()
    return output_rgb
```
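Both symptoms are consistent with the single cached `GFPGANer` instance being shared across FastAPI's threadpool workers while `enhance` mutates per-call state (the facexlib face lists). One sketch of a fix, assuming a lock is acceptable for your throughput (names are illustrative):

```python
import threading

_enhance_lock = threading.Lock()

def enhance_serialized(enhancer, img_array):
    # GFPGANer.enhance mutates internal lists (restored_faces, affine matrices),
    # so concurrent calls on one shared instance must not interleave.
    with _enhance_lock:
        return enhancer.enhance(
            img_array, has_aligned=False, only_center_face=False, paste_back=True)
```

An alternative is one model instance per worker thread or process, at the cost of extra GPU memory.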
So, when I went through the code of GFPGAN, I found that GFPGANer contains an "enhance" function which calls the "facexlib" library for face enhancement and face-related operations. The "enhance" function clears all list variables of "facexlib" after every execution by reinitializing them. This type of behavior is only observed when I load the model into the cache; otherwise, it works properly. Is there any way to cache the model and also resolve this error? | open | 2024-02-21T10:25:05Z | 2024-03-15T08:04:52Z | https://github.com/TencentARC/GFPGAN/issues/519 | [] | dummyuser-123 | 8 |
jeffknupp/sandman2 | sqlalchemy | 235 | Is it possible to serialize the models/code that sandman2 generates? | My understanding is that `sandmanctl` generates SQLAlchemy models and Flask routes for my DB on the fly.
Is it in any way possible to store the generated code so it can be reviewed, put under version control, etc.?
PaddlePaddle/PaddleHub | nlp | 2,164 | 向容器中的服务发送请求,报错:(External) CUDA error(3), initialization error. | 报错信息:
{"msg":"(External) CUDA error(3), initialization error.
[Hint: 'cudaErrorInitializationError'. The API call failed because the CUDA driver and runtime could not be initialized. ] (at /paddle/paddle/phi/backends/gpu/cuda/cuda_info.cc:172)
","results":"","status":"101"}
Thank you for reporting a PaddleHub usage issue, and many thanks for your contribution to PaddleHub!
When posting your question, please also provide the following information:
1) Version and environment information
Container (CE) version: Docker 20.10.21, build baeda1f
Base image: paddlepaddle/paddle 2.3.2-gpu-cuda11.2-cudnn8
Paddle-related packages inside the resulting container:
paddle-bfloat 0.1.7
paddle2onnx 1.0.1
paddlefsl 1.1.0
paddlehub 2.3.0
paddlenlp 2.4.2
paddlepaddle-gpu 2.3.2.post112
Python version: 3.7.13
I am building a chitchat service with PLATO-2 from Knover, using the 24-layer model. After roughly finishing the model training stage, I deployed the service with PaddleHub. Before deploying, I had already tested it with a script inside the container; although the responses sounded a bit odd, it ran without errors. I then started the service with hub serving start, and when testing the service with JMeter, the error above was raised.
I renamed my model to "plato2_cn24".
The module.py in my project is adapted from another developer's open-source project; its contents are as follows:
```python
# coding:utf-8
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import ast
import os
import json
import sys
import argparse
import contextlib
from collections import namedtuple

import paddle.fluid as fluid
import paddlehub as hub
from paddlehub.module.module import runnable
from paddlehub.module.nlp_module import DataFormatError
from paddlehub.common.logger import logger
from paddlehub.module.module import moduleinfo, serving

import plato2_cn_small.models as plato_models
from plato2_cn_small.tasks.dialog_generation import DialogGeneration
from plato2_cn_small.utils import check_cuda, Timer
from plato2_cn_small.utils.args import parse_args
import translate as trans
import jieba
from collections import namedtuple


@moduleinfo(
    name="plato2_cn_small",
    version="1.0.0",
    summary=
    "A novel pre-training model for dialogue generation, incorporated with latent discrete variables for one-to-many relationship modeling. "
    "This model is a minor revision from plato2_en_base, making it be able to do conversation in Chinese and English (translated)",
    author="baidu-nlp, Dongyang Yan",
    author_email="dongyangyan@bjtu.edu.cn",
    type="nlp/text_generation",
)
class Plato(hub.NLPPredictionModule):
    def _initialize(self):
        """
        initialize with the necessary elements
        """
        if "CUDA_VISIBLE_DEVICES" not in os.environ:
            raise RuntimeError(
                "The module only support GPU. Please set the environment variable CUDA_VISIBLE_DEVICES."
            )
        args = self.setup_args()
        self.task = DialogGeneration(args)
        self.model = plato_models.create_model(args, fluid.CUDAPlace(0))
        self.Example = namedtuple("Example", ["src", "data_id"])
        self._interactive_mode = False
        self._from_lang = "cn"
        self._to_lang = "cn"
        self._trans_en2cn = trans.Translator("zh-cn", 'en')
        self._trans_cn2en = trans.Translator('en', 'zh-cn')

    def setup_args(self, tokenized=False):
        """
        Setup arguments.
        """
        assets_path = os.path.join(self.directory, "assets")
        vocab_path = os.path.join(assets_path, "vocab.txt")
        init_pretraining_params = os.path.join(assets_path, "12L", "Plato")
        spm_model_file = os.path.join(assets_path, "spm.model")
        nsp_inference_model_path = os.path.join(assets_path, "12L", "NSP")
        config_path = os.path.join(assets_path, "12L.json")

        # ArgumentParser.parse_args use argv[1:], it will drop the first one arg, so the first one in sys.argv should be ""
        if not tokenized:
            sys.argv = [
                "", "--model", "Plato", "--vocab_path",
                "%s" % vocab_path, "--do_lower_case", "False",
                "--init_pretraining_params",
                "%s" % init_pretraining_params, "--spm_model_file",
                "%s" % spm_model_file, "--nsp_inference_model_path",
                "%s" % nsp_inference_model_path, "--ranking_score", "nsp_score",
                "--do_generation", "True", "--batch_size", "1", "--config_path",
                "%s" % config_path
            ]
        else:
            sys.argv = [
                "", "--model", "Plato", "--data_format", "tokenized", "--vocab_path",
                "%s" % vocab_path, "--do_lower_case", "False",
                "--init_pretraining_params",
                "%s" % init_pretraining_params, "--spm_model_file",
                "%s" % spm_model_file, "--nsp_inference_model_path",
                "%s" % nsp_inference_model_path, "--ranking_score", "nsp_score",
                "--do_generation", "True", "--batch_size", "1", "--config_path",
                "%s" % config_path
            ]

        parser = argparse.ArgumentParser()
        plato_models.add_cmdline_args(parser)
        DialogGeneration.add_cmdline_args(parser)
        args = parse_args(parser)
        args.load(args.config_path, "Model")
        args.run_infer = True  # only build infer program
        return args

    @serving
    def generate(self, texts):
        """
        Get the robot responses of the input texts.

        Args:
            texts(list or str): If not in the interactive mode, texts should be a list in which every element is the chat context separated with '\t'.
                Otherwise, texts shoule be one sentence. The module can get the context automatically.

        Returns:
            results(list): the robot responses.
        """
        if not texts:
            return []
        if self._from_lang == 'cn':
            if not self._interactive_mode:
                texts = [' '.join(list(jieba.cut(text))) for text in texts]
            else:
                texts = ' '.join(list(jieba.cut(texts)))
        if self._interactive_mode:
            if isinstance(texts, str):
                if self._from_lang == 'en':
                    texts = self._trans_en2cn.translate(texts)
                self.context.append(texts.strip())
                texts = [" [SEP] ".join(self.context[-self.max_turn:])]
            else:
                raise ValueError(
                    "In the interactive mode, the input data should be a string."
                )
        elif not isinstance(texts, list):
            raise ValueError(
                "If not in the interactive mode, the input data should be a list."
            )
        else:
            if self._from_lang == 'en':
                texts = [self._trans_en2cn.translate(text) for text in texts]
        bot_responses = []
        for i, text in enumerate(texts):
            example = self.Example(src=text.replace("\t", " [SEP] "), data_id=i)
            record = self.task.reader._convert_example_to_record(
                example, is_infer=True)
            data = self.task.reader._pad_batch_records([record], is_infer=True)
            pred = self.task.infer_step(self.model, data)[0]  # batch_size is 1
            bot_response = pred["response"]  # ignore data_id and score
            bot_responses.append(bot_response)
        if self._interactive_mode:
            self.context.append(bot_responses[0].strip())
        if self._to_lang == 'en':
            bot_responses = [self._trans_cn2en.translate(resp) for resp in bot_responses]
        if self._to_lang == 'cn':
            bot_responses = [''.join(resp.split()) for resp in bot_responses]
        return bot_responses

    @serving
    def generate_for_test(self, records):
        """
        Get the robot responses of the input texts.

        Args:
            list of dicts: numerical data, [field_values, ...]
                field_values = {
                    "token_ids": src_token_ids,
                    "type_ids": src_type_ids,
                    "pos_ids": src_pos_ids,
                    "tgt_start_idx": tgt_start_idx
                }

        Returns:
            results(list): the robot responses.
        """
        if not records:
            return []
        if self._interactive_mode:
            print("Warning: This function is not suitable for interactive mode.")
        elif not isinstance(records, list):
            raise ValueError(
                "If not in the interactive mode, the input data should be a list.")
        fields = ["token_ids", "type_ids", "pos_ids", "tgt_start_idx", "data_id"]
        Record = namedtuple("Record", fields, defaults=(None,) * len(fields))
        record_all = []
        for i, record in enumerate(records):
            record["data_id"] = i
            record = Record(**record)
            record_all.append(record)
        data = self.task.reader._pad_batch_records(record_all, is_infer=True)
        pred = self.task.infer_step(self.model, data)
        bot_responses = [p["response"] for p in pred]
        return bot_responses

    def set_dialog_mode(self, from_lang='cn', to_lang='cn'):
        """
        To set the mode of dialog, from_lang is the language type of input, and
        to_lang is the language type from the robot. "cn": Chinese; "en": English.
        Default: from_lang is "cn", to_lang is "cn".
        """
        self._from_lang = from_lang
        self._to_lang = to_lang

    @contextlib.contextmanager
    def interactive_mode(self, max_turn=6):
        """
        Enter the interactive mode.

        Args:
            max_turn(int): the max dialogue turns. max_turn = 1 means the robot can only remember the last one utterance you have said.
        """
        self._interactive_mode = True
        self.max_turn = max_turn
        self.context = []
        yield
        self.context = []
        self._interactive_mode = False

    @runnable
    def run_cmd(self, argvs):
        """
        Run as a command
        """
        self.parser = argparse.ArgumentParser(
            description='Run the %s module.' % self.name,
            prog='hub run %s' % self.name,
            usage='%(prog)s',
            add_help=True)
        self.arg_input_group = self.parser.add_argument_group(
            title="Input options", description="Input data. Required")
        self.arg_config_group = self.parser.add_argument_group(
            title="Config options",
            description="Run configuration for controlling module behavior, optional.")
        self.add_module_input_arg()
        args = self.parser.parse_args(argvs)
        try:
            input_data = self.check_input_data(args)
        except DataFormatError and RuntimeError:
            self.parser.print_help()
            return None
        results = self.generate(texts=input_data)
        return results


if __name__ == "__main__":
    module = Plato()
    for result in module.generate([
            "你是机器人吗?",
            "如果你不是机器人,那你得皮肤是什么颜色的呢?"
    ]):
        print(result)
    """
    import paddlehub as hub
    import os

    os.environ["CUDA_VISIBLE_DEVICES"] = "0"
    module = hub.Module("plato2_en&cn_base")
    # change the dialog language.
    module.set_dialog_mode(from_lang='en', to_lang='cn')
    """
    with module.interactive_mode(max_turn=3):
        while True:
            human_utterance = input()
            robot_utterance = module.generate(human_utterance)
            print("Robot: %s" % robot_utterance[0])
```
| open | 2022-12-05T09:46:47Z | 2022-12-05T10:58:10Z | https://github.com/PaddlePaddle/PaddleHub/issues/2164 | [] | what-is-perfect | 2 |
qubvel-org/segmentation_models.pytorch | computer-vision | 334 | Is it possible to increase the U-Net context (input size different than the output size)? | Nowadays, most implementations have inputs of the same size as the outputs. However, the original U-Net has a 572x572 image as input and a 388x388 mask as output. I think this extra context is useful in many applications. Would it be possible to add this additional context in the segmentation_models.pytorch?
Thanks in advance! | closed | 2021-01-23T21:52:25Z | 2022-02-28T01:54:55Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/334 | [
"Stale"
] | bpmsilva | 2 |
PokeAPI/pokeapi | graphql | 324 | Asynchronous Python Wrapper? | I was deep into my project when I embarrassingly noticed that the pokeapi wrapper I used ([PokeBase](https://github.com/GregHilmes/pokebase) by Greg Hilmes) is not asynchronous.
Now I have the issue that I am not particularly experienced in this field and the wrapper is already fairly deeply integrated into the project.
So does anybody already have some sort of asynchronous python wrapper or would be willing to create one?
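Until a native async wrapper exists, one low-effort stopgap is to push the blocking wrapper calls onto a thread so they stop stalling the event loop. A sketch (the `fetch` function below is a stand-in for any blocking PokeBase call, not PokeBase's real API):

```python
import asyncio
from functools import partial

async def run_blocking(func, *args, **kwargs):
    """Run a blocking function in the default executor without blocking the loop."""
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, partial(func, *args, **kwargs))

def fetch(name):  # pretend this is a blocking library call such as pokebase.pokemon(name)
    return {"name": name}

async def main():
    # Several blocking calls now run concurrently on worker threads.
    results = await asyncio.gather(*(run_blocking(fetch, n) for n in ["pikachu", "eevee"]))
    print([r["name"] for r in results])  # ['pikachu', 'eevee']

asyncio.run(main())
```

This keeps the existing synchronous wrapper deeply integrated in the project while letting the rest of the code stay async.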
Any help would be greatly appreciated as I don't think that I could create something efficient in a timely manner! | closed | 2018-03-01T21:06:31Z | 2018-03-02T13:20:56Z | https://github.com/PokeAPI/pokeapi/issues/324 | [] | AtomToast | 1 |
Yorko/mlcourse.ai | seaborn | 773 | Proofread topic 9 | - Fix issues
- Fix typos
- Correct the translation where needed
- Add images where necessary | open | 2024-08-25T07:53:55Z | 2024-08-25T08:11:36Z | https://github.com/Yorko/mlcourse.ai/issues/773 | [
"enhancement",
"articles"
] | Yorko | 0 |
matterport/Mask_RCNN | tensorflow | 2,713 | ValueError: operands could not be broadcast together with shapes (571,800,3) (300,506,3) | I was trying to remove the background and keep only the segmented object in my image. It worked for one image, but the rest raise this error. Has anyone faced this issue?
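That broadcast error usually just means the mask and the image no longer share the same height/width, so the element-wise combine fails. A hedged sketch of the usual fix — resizing the mask to the image's shape before applying it (shown here with plain NumPy nearest-neighbour indexing; `cv2.resize` would do the same job):

```python
import numpy as np

def resize_mask(mask, out_h, out_w):
    """Nearest-neighbour resize so the mask matches the image's height/width."""
    h, w = mask.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return mask[rows][:, cols]

image = np.zeros((571, 800, 3), dtype=np.uint8)
mask = np.ones((300, 506, 3), dtype=np.uint8)   # shapes taken from the error message

mask = resize_mask(mask, image.shape[0], image.shape[1])
result = image * mask                            # broadcasts cleanly now
print(result.shape)  # (571, 800, 3)
```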

| closed | 2021-10-26T09:25:58Z | 2021-11-17T17:30:51Z | https://github.com/matterport/Mask_RCNN/issues/2713 | [] | hcyeow | 1 |
521xueweihan/HelloGitHub | python | 2,200 | [Self-recommendation] ACNumpad: add a numeric keypad to the main keyboard area | ## Project recommendation
- Project URL: [ACNumpad](https://github.com/AstronChen/ACNumpad)
- Category: AutoHotkey
- Platform: Windows
- Planned updates:
  - Add a customizable key-mapping feature.
  - Keep the software lightweight.
- Description:
  - Runs in the background and adds a numeric keypad, toggleable at any time, to the main key area.
  - Portable, free, simple, and efficient.
  - Compared with similar tools, toggling is more convenient and there are fewer hotkey conflicts.
- Problems it solves:
① Fast numeric input.
② Applications where numpad shortcuts greatly improve efficiency, such as several well-known audio/video production tools.
## 1. Installation
a. Extract anywhere and run.
b. (Optional) Add a shortcut to the Startup folder.
For the current user:
```
%USERPROFILE%\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup
```
For all local users:
```
%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup
```
c. (Optional) Go to Settings → Personalization → Taskbar → Other system tray icons, and toggle ACNumpad on so it is always shown.
## 2. Usage
ACNumpad runs silently in the background. It shows a tray icon that tells you whether the numeric keypad is enabled.
When ACNumpad starts, it is suspended by default: the tray icon is red, and no keys are changed.
When you press right Alt+, (comma) or click "Toggle Suspend" in the tray icon's context menu, the numpad activates and the tray icon turns green.
The key changes are as follows:

## 3. Uninstall
Exit ACNumpad and delete its files.
# Screenshots


# Related tools
AutoHotkey: https://www.autohotkey.com/
Ahk2Exe: https://github.com/AutoHotkey/Ahk2Exe
| closed | 2022-05-12T12:28:50Z | 2022-05-24T03:23:56Z | https://github.com/521xueweihan/HelloGitHub/issues/2200 | [] | AstronChen | 1 |
ageitgey/face_recognition | machine-learning | 1,447 | What is the image size sweet-spot? | I have been crawling the discussions here hoping to find any recommended sizes for training images.
We are building an app that will capture multiple images of each person so we can control the input.
The app will capture a full face and neck. So the face will occupy most of the image's available space.
Can anyone recommend the smallest sized file we should make to keep processing times to a minimum?
For instance is 400px x 500px suitable at 72dpi?
What would be suitable jpeg quality settings as a percentage?
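One practical note: `face_recognition`'s pipeline works on pixels, not DPI, so the 72 dpi setting is irrelevant; what matters is that the face region stays comfortably above the size of the internal aligned face chip (roughly 150 px in dlib) while keeping the total pixel count low. A small helper for scaling captures down to a target long edge — the 800 px target is just an illustrative assumption, not an official recommendation:

```python
def scaled_size(width, height, max_long_edge=800):
    """Return (w, h) scaled so the longer edge is at most max_long_edge."""
    long_edge = max(width, height)
    if long_edge <= max_long_edge:
        return width, height
    scale = max_long_edge / long_edge
    return round(width * scale), round(height * scale)

print(scaled_size(400, 500))    # already small enough -> (400, 500)
print(scaled_size(3024, 4032))  # typical phone photo -> (600, 800)
```

So a 400×500 crop where the face fills most of the frame should already be near the sweet spot; shrinking much further risks dropping the face below the detector's working size.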
Thanks in advance.
| closed | 2022-09-18T14:51:05Z | 2023-01-18T00:44:05Z | https://github.com/ageitgey/face_recognition/issues/1447 | [] | julianadormon | 5 |
zappa/Zappa | flask | 1,179 | How to add regions to existing deployment? | I'd like to either add select regions to an existing deployment or switch to deploying globally. Ideally the former but I don't know if it's possible. I've seen many articles/docs referencing the option in the `init` function for deploying globally, but I haven't seen anywhere what the zappa settings file should look like or how to update an existing application to this. Can anyone point me in the right direction? | closed | 2022-09-29T13:59:03Z | 2024-04-13T20:13:02Z | https://github.com/zappa/Zappa/issues/1179 | [
"documentation",
"no-activity",
"auto-closed"
] | davidgolden | 6 |
adamerose/PandasGUI | pandas | 213 | Installing Pandasgui breaks opencv/matplotlib compatibility | Ubuntu 20.04.5 LTS
To reproduce:
Install matplotlib, and opencv, then install pandasgui. Note that you can no longer plot anything with matplotlib due to the following error:
```
QObject::moveToThread: Current thread (0x2c2de30) is not the object's thread (0x36f3050).
Cannot move to target thread (0x2c2de30)
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/.../venv/lib/python3.8/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: xcb, eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, webgl.
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
```
I suspect this is because pandasgui installs PyQt5 as a dependency, breaking opencv's/matplotlib's dependence on the Qt installed on the system. Pandasgui should handle this gracefully and make use of existing backends rather than force-installing PyQt5.
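A common workaround while the packages fight over Qt plugins is to pin matplotlib to a non-Qt backend before any GUI toolkit gets imported (hedged: this sidesteps Qt rather than fixing the plugin clash; swapping `opencv-python` for `opencv-python-headless` is the other usual remedy):

```python
import io

import matplotlib

# Select a backend that never touches Qt; must run before importing pyplot.
matplotlib.use("Agg")

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
fig.savefig(io.BytesIO(), format="png")  # rendering works without any Qt display
print(matplotlib.get_backend().lower())  # agg
```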
| open | 2022-09-29T14:46:47Z | 2022-09-29T14:46:47Z | https://github.com/adamerose/PandasGUI/issues/213 | [
"bug"
] | ckyleda | 0 |
widgetti/solara | fastapi | 145 | TypeError: set_parent() takes 3 positional arguments but 4 were given | When trying the first-script example from the Quickstart in the docs, it works correctly when executed in a Jupyter notebook, but it won't work as a script run directly via the solara executable.
When doing:
`solara run .\first_script.py`
the server starts but then it keeps logging the following error:
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\uvicorn\protocols\websockets\websockets_impl.py", line 254, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\middleware\errors.py", line 149, in __call__
    await self.app(scope, receive, send)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\middleware\gzip.py", line 26, in __call__
    await self.app(scope, receive, send)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\routing.py", line 341, in handle
    await self.app(scope, receive, send)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\starlette\routing.py", line 82, in app
    await func(session)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\solara\server\starlette.py", line 197, in kernel_connection
    await thread_return
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\anyio\to_thread.py", line 34, in run_sync
    func, *args, cancellable=cancellable, limiter=limiter
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\solara\server\starlette.py", line 190, in websocket_thread_runner
    anyio.run(run)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\anyio\_core\_eventloop.py", line 68, in run
    return asynclib.run(func, *args, **backend_options)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\anyio\_backends\_asyncio.py", line 204, in run
    return native_run(wrapper(), debug=debug)
  File "c:\users\jicas\anaconda3\envs\ml\lib\asyncio\runners.py", line 43, in run
    return loop.run_until_complete(main)
  File "c:\users\jicas\anaconda3\envs\ml\lib\asyncio\base_events.py", line 587, in run_until_complete
    return future.result()
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\anyio\_backends\_asyncio.py", line 199, in wrapper
    return await func(*args)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\solara\server\starlette.py", line 182, in run
    await server.app_loop(ws_wrapper, session_id, connection_id, user)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\solara\server\server.py", line 148, in app_loop
    process_kernel_messages(kernel, msg)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\solara\server\server.py", line 179, in process_kernel_messages
    kernel.set_parent(None, msg)
  File "c:\users\jicas\anaconda3\envs\ml\lib\site-packages\solara\server\kernel.py", line 294, in set_parent
    super().set_parent(ident, parent, channel)
TypeError: set_parent() takes 3 positional arguments but 4 were given
```
Is there anything I can do to avoid this error?
Thanks in advance. | closed | 2023-06-06T10:05:14Z | 2023-07-28T09:55:25Z | https://github.com/widgetti/solara/issues/145 | [
"bug"
] | jicastillow | 5 |
tflearn/tflearn | data-science | 338 | Is it possible to change the tensor shape in the model define process | Hello, there,
I want to define the data as
```
Input:
Images:(NxM) x height x width x channel,
label: N x L
```
Is it possible to change the shape of the tensor in the network definition, such as
```
net = input_data(inputs, [-1, NxM, height, width, channel]) # inputs
net = conv_2d(net, 32, 3) # convolutional neural network
net = do some things here # change the shape of tensor, maybe tf.reshape(net, (N, -1))?
net = fully_connected(net, L)
```
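Yes — layer outputs in tflearn are ordinary TensorFlow tensors, so `tf.reshape(net, ...)` mid-network is legal as long as the element counts match. The bookkeeping is sketched here with NumPy, which follows the same reshape semantics (the concrete sizes are made-up examples):

```python
import numpy as np

N, M, height, width, channel = 4, 3, 8, 8, 2

# What the conv stack hands you: one feature map per (n, m) image.
features = np.zeros((N * M, height, width, channel))

# Equivalent of tf.reshape(net, (N, -1)): regroup per original sample n,
# flattening the M images and their spatial dims into one feature vector.
per_sample = features.reshape(N, -1)
print(per_sample.shape)  # (4, 384) -> feed this into fully_connected(net, L)
```

The one caveat is that `N` must be known (or expressible via `-1` on exactly one axis) at graph-construction time for the reshape to be well-defined.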
| closed | 2016-09-12T16:29:58Z | 2016-09-15T16:00:46Z | https://github.com/tflearn/tflearn/issues/338 | [] | ShownX | 3 |
man-group/arctic | pandas | 628 | enum34 should be used via enum-compat for python 3.6+ compatibility | **enum34** (one of arctic's dependencies in [setup.py](https://github.com/manahl/arctic/blob/5ef7f322481fcee7a275e3b3708c6c3ecdab6304/setup.py#L83)) should be used via [enum-compat](https://pypi.org/project/enum-compat/0.0.2/) for **python 3.6+ compatibility**
Please see: https://stackoverflow.com/questions/43124775/why-python-3-6-1-throws-attributeerror-module-enum-has-no-attribute-intflag
enum34 should not be installed anymore starting from python 3.6
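For reference, `enum-compat` is one option; plain PEP 508 environment markers in `install_requires` achieve the same result without an extra shim package. A sketch of the marker logic (the `needs_enum34` helper is only an illustration of what the marker evaluates, not part of any real setup script):

```python
# In setup.py, an environment marker makes pip skip enum34 on modern interpreters:
#     install_requires=['enum34; python_version < "3.4"', ...]

def needs_enum34(version_info):
    """Mirror of the environment marker: backport only below Python 3.4."""
    return tuple(version_info[:2]) < (3, 4)

print(needs_enum34((2, 7)))   # True  -> enum34 gets installed
print(needs_enum34((3, 10)))  # False -> stdlib enum is used, IntFlag stays intact
```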
| closed | 2018-09-20T14:53:30Z | 2018-11-13T14:11:19Z | https://github.com/man-group/arctic/issues/628 | [] | fersarr | 1 |
graphql-python/graphene-sqlalchemy | graphql | 337 | Sorting not working | ```
def int_timestamp():
    return int(time.time())


class UserActivity(TimestampedModel, Base, DictModel):
    __tablename__ = 'user_activities'

    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("user.id", ondelete='SET NULL'))
    username = Column(String())
    timestamp = Column(Integer, default=int_timestamp)


class UserActivityModel(SQLAlchemyObjectType):
    class Meta:
        model = UserActivity
        only_fields = ()
        exclude_fields = ('user_id',)
        interfaces = (relay.Node,)


class Query(ObjectType):
    list_user_activities = SQLAlchemyConnectionField(
        type=UserActivityModel,
        sort=UserActivityModel.sort_argument()
    )

    def resolve_list_user_activities(self, info: ResolveInfo, sort=None, first=None, after=None, **kwargs):
        # Build the query
        query = UserActivityModel.get_query(info)
        query = query.filter_by(**kwargs)
        return query.all()


graphql_app = GraphQLApp(schema=Schema(query=Query))
```
My query in GQL:
```
query {
listUserActivities(first: 3, sort: TIMESTAMP_DESC) {
edges {
node {
username
timestamp
}
}
}
}
```
The result:
```
{
"data": {
"listUserActivities": {
"edges": [
{
"node": {
"username": "adis@ulap.co",
"timestamp": 1644321703
}
},
{
"node": {
"username": "adis@ulap.co",
"timestamp": 1644334763
}
},
{
"node": {
"username": "adis@ulap.co",
"timestamp": 1644344156
}
}
]
}
}
}
```
What's really strange is the `first` argument appears to apply a limit to the result, but the sort argument appears to just be swallowed and unused. Switching to `TIMESTAMP_ASC` produces the same result. I'm trying to find examples online to help with this. What am I doing wrong or what can i try here?
I'm using `2.3.0` | closed | 2022-04-27T16:53:47Z | 2023-02-25T00:48:48Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/337 | [
"question"
] | kastolars | 11 |
thunlp/OpenPrompt | nlp | 282 | What is the principle behind training the PromptModel via backpropagation? | In the README example, the prompt model is a (template, plm, verbalizer) triple, where the template and verbalizer are given manually (Manual*) and never change. So how can "Step 7: Train and inference" possibly train the prompt model? Does it modify the plm? | open | 2023-06-18T16:47:35Z | 2023-06-18T16:48:36Z | https://github.com/thunlp/OpenPrompt/issues/282 | [] | 2catycm | 1
man-group/arctic | pandas | 425 | Metadata for Tickstore | Hi, I see from the docs that metadata can be saved in a VersionStore. However, I'm using TickStore, and currently I can only save each symbol's metadata in a separate Mongo library, which is very inconvenient.
Is there a way to save metadata in a TickStore? | closed | 2017-09-26T08:47:32Z | 2017-12-03T23:25:14Z | https://github.com/man-group/arctic/issues/425 | [] | SnowWalkerJ | 3 |
PokemonGoF/PokemonGo-Bot | automation | 5,716 | UBUNTU 16 | Hi,
I have installed the bot on Ubuntu 16.04,
but I can't get the bot working. If I start the bot with the default config, it just goes to sleep after 2 seconds, and when the sleep is done it just sleeps again.
If I run it with the optimizer, it stops at:
2016-09-27 20:59:14,676 [pokemongo_bot.health_record.bot_event] [INFO] Health check is enabled. For more information:
2016-09-27 20:59:14,676 [pokemongo_bot.health_record.bot_event] [INFO] https://github.com/PokemonGoF/PokemonGo-Bot/tree/dev#analytics
2016-09-27 20:59:14,677 [PokemonGoBot] [INFO] Starting bot...
2016-09-27 20:59:14,694 [PokemonOptimizer] [INFO] Buddy Dragonite walking: 0.00 / 5.00 km
2016-09-27 20:59:14,694 [PokemonOptimizer] [INFO] Pokemon Bag: 209 / 250
I've been trying to figure this out all day, but now I have given up. Is there something wrong with the bot at the moment?
| closed | 2016-09-27T18:59:59Z | 2016-09-27T19:51:26Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5716 | [] | perpysling2 | 3 |
tqdm/tqdm | pandas | 1,380 | Unnecessary | - [ ] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [ ] visual output bug
- [ ] I have visited the [source website], and in particular
read the [known issues]
- [ ] I have searched through the [issue tracker] for duplicates
- [ ] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
| closed | 2022-10-07T03:33:55Z | 2022-10-13T21:13:23Z | https://github.com/tqdm/tqdm/issues/1380 | [
"invalid ⛔"
] | soheil | 0 |
littlecodersh/ItChat | api | 217 | Auto-reply messages get no response | My version is Python 3.5.
This is the program process:

The source code is here:
```python
import itchat

@itchat.msg_register(itchat.content.TEXT)
def text_reply(msg):
    return msg['Text']

itchat.auto_login()
itchat.run()
``` | closed | 2017-01-27T07:43:53Z | 2017-02-02T14:51:14Z | https://github.com/littlecodersh/ItChat/issues/217 | [
"question"
] | Ericxiaoshuang | 1 |
aiortc/aiortc | asyncio | 381 | Ice connection state stuck in checking | Hi!
I have been trying to handle frames from a Wowza streaming server using Python.
I have been working on this code for a while, and I cannot understand why my ICE connection is still in the checking state and why I'm not receiving any frames or any feedback from the server.
I've looked at a lot of examples in this repository, but I cannot reproduce the behavior in my code.
Here is my code.
```python
import asyncio

import cv2
from aiortc import (
    RTCIceCandidate,
    MediaStreamTrack,
    RTCPeerConnection,
    RTCSessionDescription,
    RTCConfiguration,
)

pc = RTCPeerConnection(RTCConfiguration(iceServers=[]))


@pc.on("iceconnectionstatechange")
async def on_iceconnectionstatechange():
    print(f"ICE connection state is {pc.iceConnectionState}")
    if pc.iceConnectionState == "failed":
        await pc.close()
    if pc.iceConnectionState == "checking":
        candidates = pc.localDescription.sdp.split("\r\n")
        for candidate in candidates:
            if "a=candidate:" in candidate:
                print("added ice candidate")
                candidate = candidate.replace("a=candidate:", "")
                splitted_data = candidate.split(" ")
                remote_ice_candidate = RTCIceCandidate(
                    foundation=splitted_data[0],
                    component=splitted_data[1],
                    protocol=splitted_data[2],
                    priority=int(splitted_data[3]),
                    ip=splitted_data[4],
                    port=int(splitted_data[5]),
                    type=splitted_data[7],
                    sdpMid=0,
                    sdpMLineIndex=0,
                )
                pc.addIceCandidate(remote_ice_candidate)


@pc.on("track")
async def on_track(track):
    print(f"Track {track.kind} received")
    if track.kind == "video":
        local_video = VideoTransformTrack(track)
        pc.addTrack(local_video)
        await local_video.recv()

    @track.on("ended")
    def on_ended():
        print(f"Track {track.kind} ended")


@pc.on("datachannel")
async def on_datachannel(channel):
    print(f"changed datachannel to {channel}")


@pc.on("signalingstatechange")
async def on_signalingstatechange():
    print(f"changed signalingstatechange {pc.signalingState}")


@pc.on("icegatheringstatechange")
async def on_icegatheringstatechange():
    print(f"changed icegatheringstatechange {pc.iceGatheringState}")


class VideoTransformTrack(MediaStreamTrack):
    """
    A video stream track that transforms frames from an another track.
    """

    kind = "video"

    def __init__(self, track):
        super().__init__()  # don't forget this!
        self.track = track

    async def recv(self):
        print("trying to retrieve frame...")
        frame = await self.track.recv()
        print("framed retrieved.")
        return frame


async def offer(sdp, sdp_type):
    offer = RTCSessionDescription(sdp=sdp, type=sdp_type)
    # handle offer
    await pc.setRemoteDescription(offer)
    # send answer
    answer = await pc.createAnswer()
    await pc.setLocalDescription(answer)
    print("finished offer.")


if __name__ == "__main__":
    sdp_type = "offer"
    sdp = "v=0\r\no=WowzaStreamingEngine-next 948965951 2 IN IP4 127.0.0.1\r\ns=-\r\nt=0 0\r\na=fingerprint:sha-256 53:21:2B:54:27:E2:4F:16:7F:A9:70:0D:21:D0:0A:DF:9D:8A:E1:A7:6B:7B:A9:2B:57:D9:42:EB:C9:7A:76:8C\r\na=group:BUNDLE video\r\na=ice-options:trickle\r\na=msid-semantic:WMS *\r\nm=video 9 RTP/SAVPF 97\r\na=rtpmap:97 H264/90000\r\na=fmtp:97 packetization-mode=1;profile-level-id=42001f;sprop-parameter-sets=Z00AH52oFAFum4CAgKAAAAMAIAAAAwMQgA==,aO48gA==\r\na=cliprect:0,0,720,1280\r\na=framesize:97 1280-720\r\na=framerate:12.0\r\na=control:trackID=1\r\nc=IN IP4 0.0.0.0\r\na=sendrecv\r\na=ice-pwd:30f8917b33a334eb74fb468068b9b492\r\na=ice-ufrag:206f59a4\r\na=mid:video\r\na=msid:{af74b9ec-8e25-42a6-829c-ef935ac422c2} {4b9b5630-3b6b-41c9-9ca9-50bf00d7be78}\r\na=rtcp-fb:97 nack\r\na=rtcp-fb:97 nack pli\r\na=rtcp-fb:97 ccm fir\r\na=rtcp-mux\r\na=setup:actpass\r\na=ssrc:1715144048 cname:{a4361dd7-133c-4777-90dc-671394ecafe9}\r\n"
    loop = asyncio.get_event_loop()
    loop.run_until_complete(offer(sdp, sdp_type))
    loop.run_forever()
```
This is the output from the code.
```txt
Track video received
trying to retrieve frame...
changed signalingstatechange stable
changed signalingstatechange stable
changed icegatheringstatechange gathering
changed icegatheringstatechange complete
finished offer.
ICE connection state is checking
added ice candidate
added ice candidate
added ice candidate
added ice candidate
added ice candidate
added ice candidate
```
In this case, "framed retrieved." is never printed, and the ICE connection state stays in checking.
Any feedback would be appreciated!
Cheers
Jcanabarro | closed | 2020-06-17T18:58:55Z | 2021-08-05T12:38:58Z | https://github.com/aiortc/aiortc/issues/381 | [
"invalid"
] | jcanabarro | 8 |
plotly/dash | data-visualization | 2,608 | [BUG] adding restyleData to input causing legend selection to clear automatically | Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.11.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: [MacOS Ventura 13.5, Apple M1]
- Browser [chrome]
- Version [chrome Version 115.0.5790.114 (Official Build) (arm64)]
**Describe the bug**
After adding `restyleData` in `Input` in `@app.callback` like below:
```python
@app.callback(
    Output("graph", "figure"),
    Input("input_1", "value"),
    Input('upload-data', 'contents'),
    Input("graph", "restyleData"),
)
```
When I click or double click on a legend, initially the graph is updated correctly (e.g. removing the data series if single clicked, or only showing the data series is double clicked). But then the graph is reverted back to its initial state automatically (i.e. all data series are shown) after a certain period of time (depending on the size of the df). If I click or double click the legend the second time, the graph does not revert back automatically. If I click or double click the third time, the graph reverts back again automatically...and so on.
To isolate the issue, the `restyleData` input is not used anywhere in the function (e.g. `def update_line_chart(input_1, contents, restyleData)`) but in a `print` statement. But the `restyleData` content prints out correctly it seems.
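One likely mechanism (hedged — inferred from the description, not confirmed against Dash internals): the legend click fires `restyleData`, the callback re-runs, and the callback returns a freshly built figure, wiping the user's legend selection. The usual remedy is to inspect `dash.callback_context.triggered` and skip the rebuild (e.g. `raise PreventUpdate`) when the restyle event itself fired the callback. The decision logic, stripped down to a plain function for illustration:

```python
def should_rebuild_figure(triggered_prop_id):
    """Skip rebuilding the figure when the restyle event itself fired the callback.

    triggered_prop_id mirrors callback_context.triggered[0]["prop_id"],
    which Dash formats as "<component_id>.<property>".
    """
    return triggered_prop_id != "graph.restyleData"

print(should_rebuild_figure("input_1.value"))      # True  -> rebuild as normal
print(should_rebuild_figure("graph.restyleData"))  # False -> raise PreventUpdate instead
```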
**Expected behavior**
When I click or double click on a legend, the update to the graph retains.
**Screenshots**
If applicable, add screenshots or screen recording to help explain your problem.
https://github.com/plotly/dash/assets/5752865/79f3cdfd-2c13-46c7-8097-98e9fa046217
| closed | 2023-08-01T04:39:47Z | 2024-07-25T13:39:35Z | https://github.com/plotly/dash/issues/2608 | [] | crossingchen | 3 |
httpie/cli | python | 1,388 | No such file or directory: '~/.config/httpie/version_info.json' | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
1. Use the `http` command, e.g. `http GET http://localhost:8004/consumers/test`
## Current result
```bash
❯ http GET http://localhost:8004/consumers/test
HTTP/1.1 200
Connection: keep-alive
Content-Length: 0
Date: Fri, 06 May 2022 06:28:00 GMT
Keep-Alive: timeout=60
X-B3-TraceId: baf0d94787afeb82
~ on ☁️ (ap-southeast-1)
❯ Traceback (most recent call last):
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/__main__.py", line 19, in <module>
sys.exit(main())
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/__main__.py", line 9, in main
exit_status = main()
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/core.py", line 162, in main
return raw_main(
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/core.py", line 44, in raw_main
return run_daemon_task(env, args)
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/internal/daemon_runner.py", line 47, in run_daemon_task
DAEMONIZED_TASKS[options.task_id](env)
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/internal/update_warnings.py", line 51, in _fetch_updates
with open_with_lockfile(file, 'w') as stream:
File "/opt/homebrew/Cellar/python@3.10/3.10.4/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/utils.py", line 287, in open_with_lockfile
with open(file, *args, **kwargs) as stream:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/na/.config/httpie/version_info.json'
```
## Expected result
```bash
❯ http GET http://localhost:8004/consumers/test
HTTP/1.1 200
Connection: keep-alive
Content-Length: 0
Date: Fri, 06 May 2022 06:28:00 GMT
Keep-Alive: timeout=60
X-B3-TraceId: baf0d94787afeb82
```
(without the `FileNotFoundError`)
---
## Debug output
Please re-run the command with `--debug`, then copy the entire command & output and paste both below:
```bash
❯ http --debug GET http://localhost:8004/consumers/test
HTTPie 3.2.0
Requests 2.27.1
Pygments 2.12.0
Python 3.10.4 (main, Apr 26 2022, 19:36:29) [Clang 13.1.6 (clang-1316.0.21.2)]
/opt/homebrew/Cellar/httpie/3.2.0/libexec/bin/python3.10
Darwin 21.4.0
<Environment {'apply_warnings_filter': <function Environment.apply_warnings_filter at 0x104179d80>,
'args': Namespace(),
'as_silent': <function Environment.as_silent at 0x104179c60>,
'colors': 256,
'config': {'default_options': []},
'config_dir': PosixPath('/Users/nico.arianto/.config/httpie'),
'devnull': <property object at 0x104153b00>,
'is_windows': False,
'log_error': <function Environment.log_error at 0x104179cf0>,
'program_name': 'http',
'quiet': 0,
'rich_console': <functools.cached_property object at 0x104169570>,
'rich_error_console': <functools.cached_property object at 0x10416b0a0>,
'show_displays': True,
'stderr': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>,
'stderr_isatty': True,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>,
'stdin_encoding': 'utf-8',
'stdin_isatty': True,
'stdout': <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>
<PluginManager {'adapters': [],
'auth': [<class 'httpie.plugins.builtin.BasicAuthPlugin'>,
<class 'httpie.plugins.builtin.DigestAuthPlugin'>,
<class 'httpie.plugins.builtin.BearerAuthPlugin'>],
'converters': [],
'formatters': [<class 'httpie.output.formatters.headers.HeadersFormatter'>,
<class 'httpie.output.formatters.json.JSONFormatter'>,
<class 'httpie.output.formatters.xml.XMLFormatter'>,
<class 'httpie.output.formatters.colors.ColorFormatter'>]}>
>>> requests.request(**{'auth': None,
'data': RequestJSONDataDict(),
'headers': <HTTPHeadersDict('User-Agent': b'HTTPie/3.2.0')>,
'method': 'get',
'params': <generator object MultiValueOrderedDict.items at 0x104483220>,
'url': 'http://localhost:8004/consumers/test'})
HTTP/1.1 200
Connection: keep-alive
Content-Length: 0
Date: Fri, 06 May 2022 06:37:17 GMT
Keep-Alive: timeout=60
X-B3-TraceId: 5c2a368fd5b3f98a
~ on ☁️ (ap-southeast-1)
❯ Traceback (most recent call last):
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/__main__.py", line 19, in <module>
sys.exit(main())
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/__main__.py", line 9, in main
exit_status = main()
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/core.py", line 162, in main
return raw_main(
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/core.py", line 44, in raw_main
return run_daemon_task(env, args)
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/internal/daemon_runner.py", line 47, in run_daemon_task
DAEMONIZED_TASKS[options.task_id](env)
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/internal/update_warnings.py", line 51, in _fetch_updates
with open_with_lockfile(file, 'w') as stream:
File "/opt/homebrew/Cellar/python@3.10/3.10.4/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/opt/homebrew/Cellar/httpie/3.2.0/libexec/lib/python3.10/site-packages/httpie/utils.py", line 287, in open_with_lockfile
with open(file, *args, **kwargs) as stream:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/nico.arianto/.config/httpie/version_info.json'
```
## Additional information, screenshots, or code examples
Installation via `homebrew`
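The traceback suggests the config directory did not exist when the daemonized update check tried to write `version_info.json`. A plausible mitigation (a sketch of the idea, not HTTPie's actual patch) is to create the parent directory before opening the file:

```python
from pathlib import Path

def open_for_write_creating_dirs(path):
    """Open `path` for writing, creating any missing parent directories first.

    Avoids the FileNotFoundError seen above when e.g. ~/.config/httpie
    does not exist yet.
    """
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    return open(p, "w")
```

The same `mkdir` call before `open` inside HTTPie's `open_with_lockfile` would have the equivalent effect.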
| closed | 2022-05-06T06:38:24Z | 2022-05-07T00:43:46Z | https://github.com/httpie/cli/issues/1388 | [
"bug",
"new"
] | nico-arianto | 3 |
microsoft/unilm | nlp | 1,501 | Image decoder download for beitv3 | **Describe**
For my personal research, I would like to obtain the pretrained parameters for the decoder of the BEiT3-base-indomain version.
Is there anywhere I can download them?
| open | 2024-04-07T11:50:28Z | 2024-04-07T11:50:28Z | https://github.com/microsoft/unilm/issues/1501 | [] | YangSun22 | 0 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 322 | Linked in | closed | 2024-09-08T17:25:06Z | 2024-09-08T22:31:12Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/322 | [] | TheGeoHaze | 1 | |
ultralytics/ultralytics | machine-learning | 19,669 | minor but critical bug in /examples/YOLOv8-ONNXRuntime/main.py | ### Search before asking
- [ ] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Other
### Bug
In line 184,
`gain = min(self.input_height / self.img_height, self.input_width / self.img_height)`
it should be
`gain = min(self.input_height / self.img_height, self.input_width / self.img_width)`
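A minimal stand-alone illustration (variable names assumed from the report) of why the typo matters for non-square images:

```python
def letterbox_gain(input_h, input_w, img_h, img_w):
    """Corrected scale factor: each model dimension divided by the matching image dimension."""
    return min(input_h / img_h, input_w / img_w)

def buggy_gain(input_h, input_w, img_h, img_w):
    """Version currently in main.py: the second ratio divides width by height."""
    return min(input_h / img_h, input_w / img_h)

# 640x640 model input, 480x800 (h x w) source image:
# the correct gain is limited by the wide side (640 / 800 = 0.8),
# while the buggy version never looks at the image width at all.
print(letterbox_gain(640, 640, 480, 800))  # 0.8
print(buggy_gain(640, 640, 480, 800))
```

With the buggy gain, boxes on non-square images are rescaled by the wrong factor, which matches the wrong-bbox symptom described above; for square images the two expressions coincide, which is why the bug can go unnoticed.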
### Environment
Ultralytics 8.3.80 🚀 Python-3.11.11 torch-2.6.0 CPU (Apple M1 Max)
Setup complete ✅ (10 CPUs, 32.0 GB RAM, 846.0/926.4 GB disk)
OS macOS-15.3.1-arm64-arm-64bit
Environment Darwin
Python 3.11.11
Install pip
RAM 32.00 GB
Disk 846.0/926.4 GB
CPU Apple M1 Max
CPU count 10
GPU None
GPU count None
CUDA None
numpy ✅ 2.1.1<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.15.2>=1.4.1
torch ✅ 2.6.0>=1.8.0
torch ✅ 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.21.0>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 5.9.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.3>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
### Minimal Reproducible Example
just run the current code and you will get wrong bbox.
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-03-13T01:15:49Z | 2025-03-14T01:29:03Z | https://github.com/ultralytics/ultralytics/issues/19669 | [
"bug",
"exports"
] | FitzWang | 2 |
hack4impact/flask-base | flask | 183 | Can't install postgresql | For some reason I can't install postgresql; obviously there's a missing resource, but has anyone else had this problem before?
<img width="920" alt="screen shot 2019-02-26 at 6 29 28 pm" src="https://user-images.githubusercontent.com/13212319/53456553-80f73980-39f4-11e9-94cf-b32fce872b76.png"> | closed | 2019-02-27T00:31:45Z | 2019-03-01T20:39:11Z | https://github.com/hack4impact/flask-base/issues/183 | [] | JeanPierreFig | 1 |
lepture/authlib | django | 654 | Algorithm confusion when verifying JSON Web Tokens with asymmetric public keys | # Issue description
If the `algorithm` field is left unspecified when calling `jwt.decode`, the library will allow HMAC verification with ANY asymmetric public key. The library does no checks whatsoever to mitigate this. This applies to verification with the algorithms HS256, HS384, and HS512 in lieu of the asymmetric algorithm. This issue is also persistent in [joserfc](https://github.com/authlib/joserfc). This vulnerability is similar to CVE-2022-29217 and CVE-2024-33663, however severity is higher as this applies to ALL verification with asymmetric public keys, regardless of format.
The [Authlib documentation on JWTs](https://docs.authlib.org/en/latest/jose/jwt.html) starts off with a code snippet demonstrating JWT signing and verification of claims using RSA. The code snippet shown is vulnerable to this issue. Halfway down the page, the documentation does go on to describe the danger of not checking the algorithm header; however, it does not adequately stress the importance of this, nor does the library implement adequate protections against it.
# Proposed solution
Same solution as for the patch for CVE-2022-29217 and CVE-2024-33663. A thorough, comprehensive check of whether the verifying key is asymmetric, see [here](https://github.com/jpadilla/pyjwt/blob/7b4bc844b9d4c38a8dbba1e727f963611124dd5b/jwt/utils.py#L100). When performing signature verification with HMAC, first check whether the verifying key is not actually a PEM or SSH-encoded asymmetric public key; this is a clear sign of algorithm confusion.
Also make non-usage of the algorithms keyword throw an exception when using the `jwt.decode` method, or at the very least a warning, so that the developer at least knows they are doing something silly by not using it. Alternatively, deprecate the method and instead only allow usage of the `JsonWebToken` class, with algorithm as a mandatory parameter, and disallow usage of multiple algorithms in a single instance.
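A sketch of such a guard (an assumption modeled on PyJWT's fix for the analogous CVEs, not Authlib's actual code): before HMAC verification, reject any key that looks like an encoded asymmetric public key:

```python
def looks_like_asymmetric_key(key) -> bool:
    """Heuristic: True if `key` appears to be a PEM/SSH-encoded asymmetric key.

    An HMAC verifier should refuse such keys outright, since using a public
    key as an HMAC secret is the classic algorithm-confusion setup.
    """
    if isinstance(key, str):
        key = key.encode("utf-8")
    key = key.strip()
    prefixes = (b"-----BEGIN", b"ssh-rsa ", b"ssh-ed25519 ", b"ecdsa-sha2-")
    return key.startswith(prefixes)

# an RSA public key in PEM form must be rejected as an HMAC secret
assert looks_like_asymmetric_key(b"-----BEGIN PUBLIC KEY-----\nMIIB...")
# an ordinary shared secret is fine
assert not looks_like_asymmetric_key(b"s3cr3t-hmac-key")
```

Raising an error from the HS256/HS384/HS512 verifiers when this predicate fires would block the proof-of-concept while leaving legitimate shared-secret use untouched.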
# Proof-of-Concept
Here is a simplified Proof-of-Concept using pycryptodome for key generation that illustrates one way this could be exploited
```py
from authlib.jose import jwt
from Crypto.PublicKey import RSA
from Crypto.Hash import HMAC, SHA256
import base64
# ----- SETUP -----
# generate an asymmetric RSA keypair
# !! signing should only be possible with the private key !!
KEY = RSA.generate(2048)
# PUBLIC KEY, AVAILABLE TO USER
# CAN BE RECOVERED THROUGH E.G. PUBKEY RECOVERY WITH TWO SIGNATURES:
# https://crypto.stackexchange.com/questions/26188/rsa-public-key-recovery-from-signatures
# https://github.com/FlorianPicca/JWT-Key-Recovery
PUBKEY = KEY.public_key().export_key(format='PEM')
# Sanity check
PRIVKEY = KEY.export_key(format='PEM')
token = jwt.encode({"alg": "RS256"}, {"pwned":False}, PRIVKEY)
claims = jwt.decode(token, PUBKEY)
assert not claims["pwned"]
# ---- CLIENT SIDE -----
# without knowing the private key, a valid token can be constructed
# YIKES!!
b64 = lambda x:base64.urlsafe_b64encode(x).replace(b'=',b'')
payload = b64(b'{"alg":"HS256"}') + b'.' + b64(b'{"pwned":true}')
hasher = HMAC.new(PUBKEY, digestmod=SHA256)
hasher.update(payload)
evil_token = payload + b'.' + b64(hasher.digest())
print("😈",evil_token)
# ---- SERVER SIDE -----
# verify and decode the token using the public key, as is custom
# algorithm field is left unspecified
# but the library will happily still verify without warning, trusting the user-controlled alg field of the token header
data = jwt.decode(evil_token, PUBKEY)
if data["pwned"]:
print("VULNERABLE")
```
## Disclaimer
As per the security policy, I contacted both the author and Tidelift about this issue in early April of this year. I received a response from Tidelift that they would follow up the issue, however the issue remains unpatched and I have still not heard further from either. As such, I am opening a public issue on this vulnerability. | closed | 2024-06-03T13:51:05Z | 2024-06-10T16:40:33Z | https://github.com/lepture/authlib/issues/654 | [
"bug"
] | milliesolem | 5 |
paperless-ngx/paperless-ngx | django | 9,304 | [BUG] Server hang up when multiple consecutive requests | ### Description
Using the API, I would like to delete an existing document and replace it with a new one with the same name.
First I verify that the document exists, then I delete it, and finally I upload the new document.
I can perform the get-by-name of the document, but immediately afterwards, when issuing the delete, the server responds with a hang-up.
I have tried every way I know, but the same thing happens whenever I execute a second API request right after the first.
Sorry if I opened a bug, but the REPLACE DOCUMENT functionality is fundamental to my application.
If you have any suggestions I would greatly appreciate it.
### Steps to reproduce
I have open a [discussion](https://github.com/paperless-ngx/paperless-ngx/discussions/9285) with part of the source code used.
### Webserver logs
```bash
These messages are expected, because the deletion of the existing document was not executed due to the hang-up error.
Not consuming sample-sso.pdf: It is a duplicate of sample-sso (#72). Note: existing document is in the trash.
[2025-03-05 17:41:22,828] [ERROR] [paperless.tasks] ConsumeTaskPlugin failed: sample-sso.pdf: Not consuming sample-sso.pdf: It is a duplicate of sample-sso (#72). Note: existing document is in the trash.
```
### Browser logs
```bash
```
### Paperless-ngx version
2.14.7 in Docker environment
### Host OS
Linux-5.15.167.4-microsoft-standard-WSL2-x86_64-with-glibc2.36
### Installation method
Docker - official image
### System status
```json
```
### Browser
_No response_
### Configuration changes
_No response_
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description. | closed | 2025-03-05T16:44:04Z | 2025-03-05T16:53:24Z | https://github.com/paperless-ngx/paperless-ngx/issues/9304 | [
"not a bug"
] | clabnet | 1 |
plotly/dash | jupyter | 2,871 | Address typing issues | #2841 addresses some longstanding issues with Python typing, and could be extended to add typing for methods as well. | closed | 2024-05-29T19:22:05Z | 2024-08-13T19:51:09Z | https://github.com/plotly/dash/issues/2871 | [
"feature",
"P3"
] | gvwilson | 0 |
noirbizarre/flask-restplus | api | 274 | Having 'strict' fields when using JSON schema models | After creating an JSON Schema model and using it on api.expect(<schema_model>, strict=True). It would only allow fields that are on the schema model and throw error if other args are sent. Similar to "... parse_args() with strict=True ensures that an error is thrown if the request includes arguments your parser does not define." Ex:
```python
address = api.schema_model('Address', {
    'properties': {
        'road': {'type': 'string'},
    },
    'type': 'object'
})

@api.route('/address')
class AcountInfo(Resource):
    @api.expect(address, strict=True)
    def post(self):
        ...
```
And if someone tried to send a POST payload of `{ "road": "123 South", "city": "Miami" }`, it would give an error because "city" is not in the schema_model payload | open | 2017-04-13T23:16:00Z | 2017-04-20T17:58:03Z | https://github.com/noirbizarre/flask-restplus/issues/274 | [] | apires03 | 2 |
litestar-org/polyfactory | pydantic | 466 | Use default values from BaseModel in ModelFactory | ### Summary
I think it would be great if `ModelFactory` can use defaults values from `BaseModel`
### Basic Example
For example I have class `AppSettings`, where I set `app_title` as APP_TITLE by default
```
from pydantic import BaseModel, Field
from polyfactory.factories.pydantic_factory import ModelFactory
APP_TITLE = 'title'
class AppSettings(BaseModel):
    app_title: str = Field(APP_TITLE)
    some_another_field: int
```
And right now I can only get `app_title` set to `APP_TITLE` in `ModelFactory` if I set it on the factory class explicitly:
```
class AppSettingsFactory(ModelFactory):
    __model__ = AppSettings
    app_title = APP_TITLE
```
So I think it would be great to have an extra param for using defaults in `ModelFactory`. Maybe something like this
```
class AppSettingsFactory(ModelFactory):
    __model__ = AppSettings
    __use_defaults__ = True
```
### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_ | closed | 2023-12-20T09:13:31Z | 2025-03-20T15:53:13Z | https://github.com/litestar-org/polyfactory/issues/466 | [
"enhancement"
] | ShtykovaAA | 4 |
JaidedAI/EasyOCR | pytorch | 1,058 | Can I get the result of negative number? | Hello I hope you are doing well.
In my code, I cannot get the result of negative number.
I didn't train my custom model.
```
import easyocr
reader = easyocr.Reader(['en'])
result = reader.readtext('directory')
```
Other characters are detected well (English characters and digits), but only the negative sign "-" is not detected.
How can I solve this?
Thank you.
| open | 2023-06-20T08:27:59Z | 2024-11-26T06:46:47Z | https://github.com/JaidedAI/EasyOCR/issues/1058 | [] | chungminho1 | 1 |
plotly/dash | flask | 3,094 | Allow_duplicate=True Fails with More Than Two Duplicate Callbacks | ## Bug Report: `allow_duplicate=True` Fails with More Than Two Duplicate Callbacks
**Description:**
The `allow_duplicate=True` parameter does not function correctly when there are more than two duplicate callbacks.
**Reproducible Example:**
The following examples demonstrate the issue:
**Working Examples (Two Duplicate Callbacks):**
```python
# Example 1: Works
Output("layout_ctx-train", "children")
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button2', 'n_clicks'),
...
```
```python
# Example 2: Works
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button2', 'n_clicks'),
...
```
```python
# Example 3: Works
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button2', 'n_clicks'),
...
```
**Failing Examples (More Than Two Duplicate Callbacks):**
```python
# Example 4: Fails
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button2', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button3', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button4', 'n_clicks'),
...
```
```python
# Example 5: Fails
Output("layout_ctx-train", "children")
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button2', 'n_clicks'),
...
Output("layout_ctx-train", "children")
Input('button3', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button4', 'n_clicks'),
...
```
```python
# Example 6: Fails
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button1', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button2', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button3', 'n_clicks'),
...
Output("layout_ctx-train", "children", allow_duplicate=True)
Input('button4', 'n_clicks'),
...
```
**Expected Behavior:**
Duplicate callbacks should function correctly when at least one of the components has `allow_duplicate=True` set.
**Additional Comments:**
This functionality worked correctly in Dash version 2.9.1 for more than two duplicate callbacks as long as `allow_duplicate=True` was present on all relevant components. The issue was encountered in Dash versions 2.17.1+. | closed | 2024-11-26T12:01:25Z | 2024-11-27T15:35:24Z | https://github.com/plotly/dash/issues/3094 | [
"bug",
"P2"
] | Kissabi | 1 |
google-research/bert | tensorflow | 1,191 | Where is 'token_is_max_context' used in run_squad.py? | For input features, there is an attributre called `token_is_max_context` in `run_squad.py`. However, I don't find where it has been used apart from checking validity of an answer prediction.
I would be grateful if you could provide with a description and how and where it is being used.
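For reference, the selection logic behind this attribute (`_check_is_max_context` in `run_squad.py`) can be paraphrased as follows — this is a from-memory reconstruction, so treat the names and the exact scoring constant as approximations: a token that appears in several overlapping doc spans is attributed to the span in which it has the most surrounding context, scored as `min(left_context, right_context) + 0.01 * span_length`:

```python
def check_is_max_context(doc_spans, cur_span_index, position):
    """Return True if token `position` has its maximum context in span `cur_span_index`.

    Each span is a (start, length) pair over the tokenized document.
    """
    best_score, best_span_index = None, None
    for span_index, (start, length) in enumerate(doc_spans):
        end = start + length - 1
        if position < start or position > end:
            continue
        num_left = position - start
        num_right = end - position
        score = min(num_left, num_right) + 0.01 * length
        if best_score is None or score > best_score:
            best_score, best_span_index = score, span_index
    return cur_span_index == best_span_index

# two overlapping 64-token windows with stride 32:
spans = [(0, 64), (32, 64)]
print(check_is_max_context(spans, 0, 60))  # False: token 60 is more central in span 1
print(check_is_max_context(spans, 1, 60))  # True
```

At prediction time, this matches the validity check the issue mentions: candidate answers whose start token lacks max context in the current span are discarded for that span.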
Thanks,
Gunjan | closed | 2021-01-14T00:25:14Z | 2021-01-21T16:03:07Z | https://github.com/google-research/bert/issues/1191 | [] | gchhablani | 1 |
MagicStack/asyncpg | asyncio | 544 | Serializing Connection/Pool/Record objects | I'm trying to use asyncpg in combination with Dask, but I'm running into the problem that Pool, Connection or asyncpg.Record objects cannot be serialized (pickled) to and from my workers. (I need to supply a Pool or Connection to a worker, and expect Record objects back)
Any suggestions?
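A common workaround (a sketch under assumptions, not official asyncpg guidance) is to avoid pickling the connection machinery at all — create the Pool inside each worker — and to flatten `Record` rows into plain dicts before they cross process boundaries, since `Record` supports the mapping protocol:

```python
import pickle

def rows_to_dicts(rows):
    """Flatten mapping-like rows (e.g. asyncpg.Record) into plain dicts.

    Plain dicts pickle fine, so they can travel to/from Dask workers.
    """
    return [dict(r) for r in rows]

# stand-in for rows fetched by a worker-local pool
rows = [{"id": 1, "name": "ada"}, {"id": 2, "name": "grace"}]
wire = pickle.dumps(rows_to_dicts(rows))
assert pickle.loads(wire) == rows
```

For the Pool itself, the usual Dask pattern is to construct it lazily per worker (e.g. in a worker plugin or a module-level cache) rather than shipping it between processes.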
Regards, | open | 2020-03-19T15:38:45Z | 2024-01-14T06:48:20Z | https://github.com/MagicStack/asyncpg/issues/544 | [] | MennoNij | 1 |
redis/redis-om-python | pydantic | 646 | redis-om 0.3.2 no longer supports pydantic<2 | https://github.com/redis/redis-om-python/blob/c5068e561116d6d19e571aa336175de91311d695/pyproject.toml#L40
```bash
pip install "redis-om" "pydantic<2"
```
```python
from redis_om import JsonModel
```
```python
File.../site-packages/redis_om/__init__.py", line 4, in <module>
from .model.migrations.migrator import MigrationError, Migrator
File.../site-packages/redis_om/model/__init__.py", line 2, in <module>
from .model import (
File.../site-packages/redis_om/model/model.py", line 2216, in <module>
class EmbeddedJsonModel(JsonModel, abc.ABC):
File.../site-packages/redis_om/model/model.py", line 1311, in __new__
new_class = super().__new__(cls, name, bases, attrs, **kwargs)
File "pydantic/main.py", line 282, in pydantic.main.ModelMetaclass.__new__
File "/usr/lib/python3.10/abc.py", line 106, in __new__
cls = super().__new__(mcls, name, bases, namespace, **kwargs)
File.../site-packages/redis_om/model/model.py", line 1896, in __init_subclass__
cls.redisearch_schema()
File.../site-packages/redis_om/model/model.py", line 1965, in redisearch_schema
schema_parts = [schema_prefix] + cls.schema_for_fields()
File.../site-packages/redis_om/model/model.py", line 1983, in schema_for_fields
fields[name] = PydanticFieldInfo.from_annotation(field)
AttributeError: type object 'FieldInfo' has no attribute 'from_annotation'
```
It works for redis-om 0.3.1 so the issue was introduced in 0.3.2. | closed | 2024-08-07T16:42:04Z | 2024-10-30T14:47:17Z | https://github.com/redis/redis-om-python/issues/646 | [] | woutdenolf | 5 |
jupyter/nbviewer | jupyter | 758 | Error 503 No healthy backends | Hi guys,
Any idea why it's not possible to access https://nbviewer.jupyter.org/
Error 503 No healthy backends
No healthy backends
Guru Mediation:
Details: cache-bos8234-BOS 1516977539 2493363251
Varnish cache server | closed | 2018-01-26T14:39:46Z | 2019-10-16T18:11:53Z | https://github.com/jupyter/nbviewer/issues/758 | [
"tag:HackIllinois"
] | igorrates | 5 |
jmcnamara/XlsxWriter | pandas | 494 | Modification to enable producing consistent binary output | I think it'd be very useful to have a way to make `xlsxwriter` produce identical binary result every time a workbook with identical contents is generated.
Two scenarios I have personally in mind:
- it would be useful to keep some slow-changing data in worksheets in version control without wasting space every time a report is regenerated,
- it would be helpful to quickly check binary hash of a report to verify the same contents were generated as a kind of unit test
Unfortunately, that's currently not possible. I've seen that there are two moving pieces here:
1. one can set constant creation/modification date that is written to the metadata using "document properties", but
1. the zipping process always sets file timestamps to current time what results in the archive being different on every run.
I've tracked down the problem to the zipping library which internally makes use of timestamps of temporary files created by `xlsxwriter` or current time when in-memory mode is used.
In my pull request I propose a solution where creation time is taken from the metadata (if available) and:
1. a `ZipInfo` structure with that date is used for in-memory zipping
1. temporary files' modification time is set to that date with file-based zipping.
All of that can be done in `Workbook`'s `_store_workbook` method. I wondered whether setting the temporary files' date in `Packager` would be more natural but decided it's better to have it all in one place for both file-based and in-memory modes.
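The in-memory half of that idea can be sketched with the standard library alone (a simplified stand-in, not the actual pull-request code): pin every entry's `ZipInfo` timestamp to the document's creation date, and the archive bytes become reproducible:

```python
import io
import zipfile

def deterministic_zip(files, date_time=(2018, 3, 28, 0, 0, 0)):
    """Zip `files` ({name: bytes}) with a fixed per-entry timestamp.

    With identical inputs and the same fixed date, the output bytes are
    identical on every run, so hashes and VCS diffs stay stable.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name in sorted(files):  # stable member order
            info = zipfile.ZipInfo(name, date_time=date_time)
            zf.writestr(info, files[name])
    return buf.getvalue()

a = deterministic_zip({"xl/workbook.xml": b"<workbook/>"})
b = deterministic_zip({"xl/workbook.xml": b"<workbook/>"})
assert a == b  # byte-identical across runs
```

For the file-based path, the analogue is setting the temporary files' modification time (e.g. via `os.utime`) to the same date before zipping, as described above.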
I think the enhancement would be useful not only to me (if properly advertised), but I tried to keep a low profile: "constant output mode" is only activated when a creation date is set in the properties, which I expect to be rare.
For implementation details and usage code see my pull request https://github.com/jmcnamara/XlsxWriter/pull/495.
All existing unit tests passed (linux, py27). I've seen your encouragement to create new ones for any additional functionality, but since all existing tests check internal XML data, I haven't spotted a natural place to add an "outside" binary-level test.
Last but not least, thanks for your great library; I use it for all kinds of things and am glad I can give something back. | closed | 2018-03-28T19:10:05Z | 2018-04-23T19:09:58Z | https://github.com/jmcnamara/XlsxWriter/issues/494 | [
"feature request",
"medium term"
] | ziembla | 6 |
StackStorm/st2 | automation | 5,239 | Benchmark and prototype compressing message bus payloads | Right now we send raw pickled object byte strings over message bus (yeah, pickling is not great and IIRC, there is like a 3-4 year old ticket to move from pickle to something else, but sadly that never materialized and it's not an easy change).
In some scenarios such as when dispatching whole ExecutionDB and TriggerInstanceDB objects, those payloads can be quite large since they contain the whole result / payload values.
We should prototype and benchmark compressing the payload using something like zstandard. Micro benchmarks should be a good starting point.
Since we already send byte strings over wire, I think adding compression should be pretty straight forward and we could also make it backward compatible relatively easily.
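A micro-benchmark along those lines can start from the standard library (zlib here as a stdlib stand-in, since zstandard is a third-party dependency), comparing raw vs. compressed sizes and timings for a payload-like blob:

```python
import json
import time
import zlib

# synthetic mid-sized execution-result payload
payload = json.dumps({"result": [{"stdout": "x" * 200, "rc": 0}] * 100}).encode()

t0 = time.perf_counter()
compressed = zlib.compress(payload, 3)
t1 = time.perf_counter()
restored = zlib.decompress(compressed)
t2 = time.perf_counter()

assert restored == payload
print(f"raw={len(payload)}B compressed={len(compressed)}B "
      f"ratio={len(compressed) / len(payload):.2f} "
      f"compress={(t1 - t0) * 1e3:.2f}ms decompress={(t2 - t1) * 1e3:.2f}ms")
```

For the real prototype, swapping `zlib.compress` for zstandard's `ZstdCompressor` and running the same loop over small/mid/large payload fixtures gives the micro-benchmark this issue asks for.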
That functionality should also be behind a feature flag and only enabled by default if it shows speed improvements for all the common scenarios (small, mid and large sized payloads). | closed | 2021-04-18T20:55:15Z | 2021-04-22T11:05:50Z | https://github.com/StackStorm/st2/issues/5239 | [
"performance",
"rabbitmq"
] | Kami | 1 |
unytics/bigfunctions | data-visualization | 16 | add "private preview" label for remote functions | closed | 2022-12-06T22:54:45Z | 2022-12-24T08:33:57Z | https://github.com/unytics/bigfunctions/issues/16 | [
"website"
] | unytics | 0 | |
DistrictDataLabs/yellowbrick | scikit-learn | 514 | The Manifold visualizer doesn't work with 'tsne' | **Describe the bug**
calling _fit_transform()_ on a _Manifold_ object fails when using **'tsne'**. It seems that it calls the _transform()_ function, which doesn't exist on [sklearn.manifold.TSNE](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html)
**To Reproduce**
```
from yellowbrick.features.manifold import Manifold
X = np.random.rand(250,20)
y = np.random.randint(0,2,20)
visualizer = Manifold(manifold='tsne', target='discrete')
visualizer.fit_transform(X,y)
visualizer.poof()
```
**Traceback**
```
AttributeError Traceback (most recent call last)
<ipython-input-524-025714ae6ba9> in <module>()
5
6 visualizer = Manifold(manifold='tsne', target='discrete')
----> 7 visualizer.fit_transform(X,y)
8 visualizer.poof()
...\lib\site-packages\sklearn\base.py in fit_transform(self, X, y, **fit_params)
518 else:
519 # fit method of arity 2 (supervised transformation)
--> 520 return self.fit(X, y, **fit_params).transform(X)
521
522
...\lib\site-packages\yellowbrick\features\manifold.py in transform(self, X)
328 Returns the 2-dimensional embedding of the instances.
329 """
--> 330 return self.manifold.transform(X)
331
332 def draw(self, X, y=None):
AttributeError: 'TSNE' object has no attribute 'transform'
```
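The mechanism behind the traceback can be reproduced without sklearn or yellowbrick (a minimal stand-in; the class names are illustrative): sklearn's default `fit_transform` is `fit(X, y).transform(X)`, so any estimator that only implements `fit`/`fit_transform` semantics, like `TSNE`, breaks as soon as something delegates to its missing `transform`:

```python
class MixinLike:
    # mirrors sklearn.base.TransformerMixin's default behaviour
    def fit_transform(self, X, y=None):
        return self.fit(X, y).transform(X)

class TSNELike(MixinLike):
    # embeds during fit; deliberately offers no transform(), like TSNE
    def fit(self, X, y=None):
        self.embedding_ = [[0.0, 0.0] for _ in X]
        return self

try:
    TSNELike().fit_transform([[1, 2], [3, 4]])
except AttributeError as exc:
    print(exc)  # 'TSNELike' object has no attribute 'transform'
```

One plausible fix on the visualizer side (an assumption, not necessarily the maintainers' choice) is for `Manifold` to call `self.manifold.fit_transform(X)` directly instead of `fit` followed by `transform`.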
**Desktop**
- OS: [Windows 10]
- Python Version [3.4.5]
- Scikit-Learn version is 0.19.1
- Yellowbrick Version [0.8]
| closed | 2018-07-20T15:22:19Z | 2018-07-20T15:31:25Z | https://github.com/DistrictDataLabs/yellowbrick/issues/514 | [] | imad24 | 2 |
newpanjing/simpleui | django | 39 | In raw_id_fields mode, after the selection list pops up, items cannot be selected once you click search | In raw_id_fields mode, after the selection list pops up, if you click search, items can no longer be selected; clicking goes straight to the edit page.
I found that after clicking search the page is refreshed; the URL used to contain &_popup=1, but after the refresh it is gone.
**Steps to reproduce**
1. Use raw_id_fields
2. Open the selection popup
3. Filter for an item; after that, the desired item can no longer be selected
**Environment**
1. Operating system:
2. Python version: 3.6
3. Django version: 2.1
4. simpleui version: 2.0
**Other notes**
| closed | 2019-05-16T10:37:24Z | 2019-05-21T02:31:16Z | https://github.com/newpanjing/simpleui/issues/39 | [
"bug"
] | JohnYan2017 | 2 |
3b1b/manim | python | 1,389 | First example command returns error (get_monitors) | ### Describe the error
I want to execute :
manimgl example_scenes.py OpeningManimExample
### Code and Error
**Code**:
example_scenes.py
**Error**:
Warning: Using the default configuration file, which you can modify in d:\videos\manim\manimlib\default_config.yml
If you want to create a local configuration file, you can create a file named custom_config.yml, or run manimgl --config
Traceback (most recent call last):
File "C:\Users\bob\AppData\Local\Programs\Python\Python37-32\Scripts\manimgl-script.py", line 33, in <module>
sys.exit(load_entry_point('manimgl', 'console_scripts', 'manimgl')())
File "d:\videos\manim\manimlib\__main__.py", line 13, in main
config = manimlib.config.get_configuration(args)
File "d:\videos\manim\manimlib\config.py", line 237, in get_configuration
monitor = get_monitors()[custom_config["window_monitor"]]
File "c:\users\bob\appdata\local\programs\python\python37-32\lib\site-packages\screeninfo\screeninfo.py", line 37, in get_monitors
raise ScreenInfoError("No enumerators available")
screeninfo.common.ScreenInfoError: No enumerators available
### Environment
**OS System**: windows 8.1
**manim version**: master
**python version**: Python 3.7.2
| open | 2021-02-14T16:14:40Z | 2025-03-09T13:24:39Z | https://github.com/3b1b/manim/issues/1389 | [] | ultravision3d | 8 |
ray-project/ray | tensorflow | 51,276 | [core][gpu-objects] Support collective operations | ### Description
Support collective operations of GPU objects such as gather / scatter / all-reduce.
### Use case
_No response_ | open | 2025-03-11T22:43:35Z | 2025-03-11T22:43:51Z | https://github.com/ray-project/ray/issues/51276 | [
"enhancement",
"P2",
"core",
"gpu-objects"
] | kevin85421 | 0 |
lepture/authlib | flask | 263 | Hello! Using Authlib for Alibaba DingTalk third-party login | While working on Alibaba DingTalk today, I found that
the field DingTalk's request endpoint uses is not client_id but appid,
so the client_id in the URL generated by calling auth.dingding.authorize_redirect() is invalid for DingTalk's endpoint.
The help I'm looking for is: how can I turn client_id into appid?
| closed | 2020-09-04T11:25:31Z | 2020-09-17T07:24:24Z | https://github.com/lepture/authlib/issues/263 | [] | kanhebei | 1 |
sloria/TextBlob | nlp | 151 | Issue with .correct() | I'm just testing out TextBlob, in particular the spelling functionality. Perhaps I'm missing something, but the basic example of the .correct() method is returning an empty TextBlob. (I'm using Python 2.7 in Jupyter Notebooks.) Other TextBlob functionality has worked as expected/demonstrated in the docs.
input:
phrase = TextBlob("Do you get thaksgiving day off?")
phrase.correct()
output:
TextBlob("")
| closed | 2017-02-13T19:28:59Z | 2017-03-20T21:22:21Z | https://github.com/sloria/TextBlob/issues/151 | [] | nyborrobyn | 2 |
graphdeco-inria/gaussian-splatting | computer-vision | 1,176 | SIBR_viewer cmake build fail | `cmake -Bbuild . -DCMAKE_BUILD_TYPE=Release`
I overcame some errors before this point and it worked fine until now,
but I can't handle this problem.
Can anyone help me?
After I run
`cmake --build build -j24 --target install`
I get
```
~/gaussian-splatting/SIBR_viewers$ cmake --build build -j24 --target install
[ 0%] Built target sibr_graphics_resources
[ 4%] Built target imgui
[ 6%] Built target mrf
[ 6%] Built target SIBR_texturedMesh_app_resources
[ 8%] Built target xatlas
[ 11%] Built target nativefiledialog
[ 13%] Built target CudaRasterizer
[ 13%] Built target sibr_gaussian_shaders
[ 23%] Built target sibr_system
[ 35%] Built target sibr_graphics
[ 39%] Built target sibr_video
[ 44%] Built target sibr_assets
[ 44%] Built target sibr_renderer_shaders
[ 50%] Built target sibr_raycaster
[ 50%] Built target sibr_view_shaders
[ 50%] Built target PREBUILD
[ 55%] Built target sibr_imgproc
[ 60%] Built target sibr_scene
[ 76%] Built target sibr_view
[ 87%] Built target sibr_renderer
[ 89%] Built target sibr_basic
[ 91%] Linking CXX executable SIBR_texturedMesh_app
[ 91%] Linking CXX executable SIBR_PointBased_app
[ 93%] Built target sibr_remote
[ 95%] Built target sibr_gaussian
[ 95%] Linking CXX executable SIBR_remoteGaussian_app
[ 95%] Linking CXX executable SIBR_gaussianViewer_app
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_closure_alloc@LIBFFI_CLOSURE_7.0'
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_prep_closure_loc@LIBFFI_CLOSURE_7.0'
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_type_uint8@LIBFFI_BASE_7.0'
/bin/ld: /lib/libgdal.so.26: undefined reference to `TIFFReadRGBATileExt@LIBTIFF_4.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_void@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_prep_cif@LIBFFI_BASE_7.0'
/bin/ld: /lib/libgdal.so.26: undefined reference to `TIFFReadRGBAStripExt@LIBTIFF_4.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_uint32@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_sint32@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_call@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_pointer@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_type_uint64@LIBFFI_BASE_7.0'
collect2: error: ld returned 1 exit status
make[2]: *** [src/projects/basic/apps/pointBased/CMakeFiles/SIBR_PointBased_app.dir/build.make:180: src/projects/basic/apps/pointBased/SIBR_PointBased_app] Error 1
make[1]: *** [CMakeFiles/Makefile2:1424: src/projects/basic/apps/pointBased/CMakeFiles/SIBR_PointBased_app.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_closure_alloc@LIBFFI_CLOSURE_7.0'
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_prep_closure_loc@LIBFFI_CLOSURE_7.0'
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_type_uint8@LIBFFI_BASE_7.0'
/bin/ld: /lib/libgdal.so.26: undefined reference to `TIFFReadRGBATileExt@LIBTIFF_4.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_void@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_prep_cif@LIBFFI_BASE_7.0'
/bin/ld: /lib/libgdal.so.26: undefined reference to `TIFFReadRGBAStripExt@LIBTIFF_4.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_uint32@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_sint32@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_call@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_pointer@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_type_uint64@LIBFFI_BASE_7.0'
collect2: error: ld returned 1 exit status
make[2]: *** [src/projects/basic/apps/texturedMesh/CMakeFiles/SIBR_texturedMesh_app.dir/build.make:180: src/projects/basic/apps/texturedMesh/SIBR_texturedMesh_app] Error 1
make[1]: *** [CMakeFiles/Makefile2:1335: src/projects/basic/apps/texturedMesh/CMakeFiles/SIBR_texturedMesh_app.dir/all] Error 2
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_closure_alloc@LIBFFI_CLOSURE_7.0'
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_prep_closure_loc@LIBFFI_CLOSURE_7.0'
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_type_uint8@LIBFFI_BASE_7.0'
/bin/ld: /lib/libgdal.so.26: undefined reference to `TIFFReadRGBATileExt@LIBTIFF_4.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_void@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_prep_cif@LIBFFI_BASE_7.0'
/bin/ld: /lib/libgdal.so.26: undefined reference to `TIFFReadRGBAStripExt@LIBTIFF_4.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_uint32@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_sint32@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_call@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_pointer@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_type_uint64@LIBFFI_BASE_7.0'
collect2: error: ld returned 1 exit status
make[2]: *** [src/projects/remote/apps/remoteGaussianUI/CMakeFiles/SIBR_remoteGaussian_app.dir/build.make:181: src/projects/remote/apps/remoteGaussianUI/SIBR_remoteGaussian_app] Error 1
make[1]: *** [CMakeFiles/Makefile2:1871: src/projects/remote/apps/remoteGaussianUI/CMakeFiles/SIBR_remoteGaussian_app.dir/all] Error 2
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_closure_alloc@LIBFFI_CLOSURE_7.0'
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_prep_closure_loc@LIBFFI_CLOSURE_7.0'
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_type_uint8@LIBFFI_BASE_7.0'
/bin/ld: /lib/libgdal.so.26: undefined reference to `TIFFReadRGBATileExt@LIBTIFF_4.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_void@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_prep_cif@LIBFFI_BASE_7.0'
/bin/ld: /lib/libgdal.so.26: undefined reference to `TIFFReadRGBAStripExt@LIBTIFF_4.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_uint32@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_sint32@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_call@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libwayland-client.so.0: undefined reference to `ffi_type_pointer@LIBFFI_BASE_7.0'
/bin/ld: /usr/lib/x86_64-linux-gnu/libp11-kit.so.0: undefined reference to `ffi_type_uint64@LIBFFI_BASE_7.0'
collect2: error: ld returned 1 exit status
make[2]: *** [src/projects/gaussianviewer/apps/gaussianViewer/CMakeFiles/SIBR_gaussianViewer_app.dir/build.make:182: src/projects/gaussianviewer/apps/gaussianViewer/SIBR_gaussianViewer_app] Error 1
make[1]: *** [CMakeFiles/Makefile2:1609: src/projects/gaussianviewer/apps/gaussianViewer/CMakeFiles/SIBR_gaussianViewer_app.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
```
Alternatively, is there any other way to use a pre-built SIBR_viewer?
My environment is:
OS : Ubuntu 20.04
CUDA : 11.8
GPU : 2080TI | open | 2025-02-28T05:34:20Z | 2025-02-28T05:36:39Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/1176 | [] | qkrwnsdn0427 | 0 |
scikit-learn/scikit-learn | data-science | 30,449 | duck typed estimators fail in check_estimator | ### Describe the bug
I believe these 5 lines, which check for specific types,
https://github.com/scikit-learn/scikit-learn/blob/76ae0a539a0e87145c9f6fedcd7033494082fa17/sklearn/utils/estimator_checks.py#L4439-L4443
break the documentation at https://scikit-learn.org/stable/developers/develop.html#rolling-your-own-estimator,
where it says: "We tend to use "duck typing" instead of checking for isinstance, which means it's technically possible to implement an estimator without inheriting from scikit-learn classes."
Since \_\_sklearn\_tags\_\_ appears to now be a requirement, and if those specific Tag classes are required to be returned from \_\_sklearn\_tags\_\_, then it is no longer possible to implement scikit-learn estimators through duck typing. I believe either the tests should be changed, or the documentation updated. I would prefer the tests to change.
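For reference, a duck-typed estimator in the guide's sense implements the expected protocol without inheriting from any scikit-learn class. This is a minimal generic sketch of that idea (it deliberately does not import scikit-learn, and the class and parameter names are my own):

```python
class DuckEstimator:
    """Implements the estimator protocol without inheriting BaseEstimator."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha

    def get_params(self, deep=True):
        return {"alpha": self.alpha}

    def set_params(self, **params):
        for key, value in params.items():
            setattr(self, key, value)
        return self

    def fit(self, X, y=None):
        # Trailing-underscore attributes mark the estimator as fitted
        self.fitted_ = True
        return self

    def predict(self, X):
        return [self.alpha for _ in X]
```

Under the quoted documentation, an object like this should be usable as an estimator; the isinstance-style checks in `check_estimator` are what reject it.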
### Steps/Code to Reproduce
see above
### Expected Results
see above
### Actual Results
see above
### Versions
```shell
1.6.0
```
| closed | 2024-12-10T00:43:08Z | 2024-12-21T18:31:27Z | https://github.com/scikit-learn/scikit-learn/issues/30449 | [
"Bug"
] | paulbkoch | 8 |
AirtestProject/Airtest | automation | 403 | Running scripts on multiple phones with multiple threads: the phones conflict with each other and ValueError: generator already executing is raised | **Describe the bug**
When running scripts on multiple phones with multiple threads, a ValueError: generator already executing is raised during the run.
```
Traceback (most recent call last):
File "D:\Users\tangzt\AppData\Local\Programs\Python\Python35\lib\site-packages\airtest\cli\runner.py", line 65, in runTest
six.reraise(*sys.exc_info())
File "D:\Users\tangzt\AppData\Local\Programs\Python\Python35\lib\site-packages\six.py", line 693, in reraise
raise value
File "D:\Users\tangzt\AppData\Local\Programs\Python\Python35\lib\site-packages\airtest\cli\runner.py", line 61, in runTest
exec(compile(code.encode("utf-8"), pyfilepath, 'exec'), self.scope)
File "D:\git\MobileAutoTest\completePage\debugCase\订单信息_乘客行程_国际多程.air\订单信息_乘客行程_国际多程.py", line 19, in <module>
assert_exists(Template(r"tpl1557384351372.png", record_pos=(0.158, -0.669), resolution=(1080, 2340)), "一程航班")
File "D:\Users\tangzt\AppData\Local\Programs\Python\Python35\lib\site-packages\airtest\utils\logwraper.py", line 72, in wrapper
res = f(*args, **kwargs)
File "D:\Users\tangzt\AppData\Local\Programs\Python\Python35\lib\site-packages\airtest\core\api.py", line 446, in assert_exists
pos = loop_find(v, timeout=ST.FIND_TIMEOUT, threshold=ST.THRESHOLD_STRICT)
File "D:\Users\tangzt\AppData\Local\Programs\Python\Python35\lib\site-packages\airtest\utils\logwraper.py", line 72, in wrapper
res = f(*args, **kwargs)
File "D:\Users\tangzt\AppData\Local\Programs\Python\Python35\lib\site-packages\airtest\core\cv.py", line 42, in loop_find
screen = G.DEVICE.snapshot(filename=None)
File "D:\Users\tangzt\AppData\Local\Programs\Python\Python35\lib\site-packages\airtest\core\android\android.py", line 216, in snapshot
screen = self.minicap.get_frame_from_stream()
File "D:\Users\tangzt\AppData\Local\Programs\Python\Python35\lib\site-packages\airtest\core\android\minicap.py", line 24, in wrapper
return func(inst, *args, **kwargs)
File "D:\Users\tangzt\AppData\Local\Programs\Python\Python35\lib\site-packages\airtest\core\android\minicap.py", line 326, in get_frame_from_stream
return six.next(self.frame_gen)
ValueError: generator already executing
```
**Relevant screenshots**
(Paste screenshots of the problem here, if any)
(For image- and device-related problems produced in AirtestIDE, please paste the relevant error output from the AirtestIDE console window)
**Steps to reproduce**
There are two devices in total. When t1.start() and t2.start() are executed separately, each one succeeds. But when the two threads concurrently run the scripts in the queue, there seems to be a conflict.
```python
from airtest.cli.runner import AirtestCase, run_script
from airtest.core.api import *
from argparse import *
import os
import time
import sys
import queue
import re
import threading

sys.path.append("../helper")
import helper


class CustomAirtestCase(AirtestCase):
    def setUp(self):
        ST.rgb = True
        ST.OPDELAY = 0.5
        ST.THRESHOLD_STRICT = 0.7
        ST.THRESHOLD = 0.7
        wake()
        super(CustomAirtestCase, self).setUp()

    def tearDown(self):
        stop_app("ctrip.android.view")
        super(CustomAirtestCase, self).setUp()

    def caseQueue(self, root_dir='.\\dailyTestCase'):
        q = queue.Queue()
        for f in os.listdir(root_dir):
            if f.endswith(".air"):
                script = os.path.join(root_dir, f)
                q.put(script)
        return q

    def run_air(self, caseQueue, device=['android://127.0.0.1:5037/GWY0217406000373']):
        while not caseQueue.empty():
            script = caseQueue.get()
            airName = re.search(r'[^\\]+$', script).group(0)
            caseName = airName.replace('.air', '')
            root_log = 'D:\\testcase\\'
            log = root_log + '\\' + caseName
            args = Namespace(device=device, log=log, recording=None, script=script)
            try:
                run_script(args, CustomAirtestCase)
            except:
                pass


if __name__ == '__main__':
    test = CustomAirtestCase()
    caseTasks = test.caseQueue('.\\debugCase')
    # vivo
    device1 = ['android://127.0.0.1:5037/d5c7f0e8']
    # oppo
    device2 = ['android://127.0.0.1:5037/ee03674a']
    devs = [device1, device2]
    res1 = []
    res2 = []
    t1 = threading.Thread(target=test.run_air, args=(caseTasks, device1,))
    t2 = threading.Thread(target=test.run_air, args=(caseTasks, device2,))
    t1.start()
    time.sleep(5)
    t2.start()
    t1.join()
    t2.join()
```
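For context on the exception itself: `ValueError: generator already executing` is what CPython raises when a second thread calls `next()` on a generator whose frame is already running. In the traceback above, both threads end up consuming the same Minicap frame generator (`six.next(self.frame_gen)`). A minimal stdlib reproduction, unrelated to Airtest:

```python
import threading
import time

def slow_gen():
    while True:
        time.sleep(0.5)  # simulate work done inside the generator frame
        yield 1

g = slow_gen()
errors = []

def consume():
    try:
        next(g)
    except ValueError as exc:
        errors.append(str(exc))

t1 = threading.Thread(target=consume)
t2 = threading.Thread(target=consume)
t1.start()
time.sleep(0.1)  # make sure t1 is already inside the generator frame
t2.start()
t1.join()
t2.join()
print(errors)  # typically ['generator already executing']
```

This suggests the conflict is about both device threads sharing one generator-backed stream, not about the test scripts themselves.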
**Expected behavior**
**Python version:** `python3.5`
**airtest version:** `1.0.69`
> The airtest version can be found with the `pip freeze` command
**Devices:**
- Model: [e.g. google pixel 2]
- OS: [e.g. Android 8.1]
- (other info)
**Other relevant environment info**
Windows environment
| closed | 2019-05-14T07:05:36Z | 2019-05-14T10:32:10Z | https://github.com/AirtestProject/Airtest/issues/403 | [] | rossoneri520 | 3 |
pyg-team/pytorch_geometric | pytorch | 8,774 | It is hoped that the explanation class can add a function to output important results (list/dictionary /df). | ### 🚀 The feature, motivation and pitch
It would be helpful if the explanation class added a function that returns the importance results as data (a list, dictionary, or DataFrame), because currently you can only produce an image through `visualize_feature_importance`.
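For illustration, a hypothetical sketch of the kind of helper being requested. The function name and output shape are my own assumptions, not part of torch_geometric:

```python
def feature_importance_table(feat_labels, scores, top_k=None):
    """Pair feature labels with importance scores, sorted descending by score."""
    rows = sorted(zip(feat_labels, scores), key=lambda row: row[1], reverse=True)
    if top_k is not None:
        rows = rows[:top_k]
    # A list of dicts converts directly to a pandas DataFrame if desired
    return [{"feature": label, "score": score} for label, score in rows]
```

Returning plain data like this would let users feed the same values into a DataFrame or a custom plot, instead of only getting the rendered image.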
### Alternatives
_No response_
### Additional context
_No response_ | open | 2024-01-16T06:37:13Z | 2024-01-18T10:24:04Z | https://github.com/pyg-team/pytorch_geometric/issues/8774 | [
"feature"
] | lck-handsome | 4 |
pydantic/pydantic-ai | pydantic | 895 | AssertionError: OpenAI requires `tool_call_id` | # Description
Hi. I ran into an issue when switching models.
Basically, I have implemented an API endpoint where I can change models.
Here is what happened:
- I started with `gemini-1.5-flash`, asking it what time is now, which would call my `now()` tool.
- It runs without any problem returning the current datetime
- Then I switched to `gpt-4o-mini` and asked the same question again, passing the message history I got after using Gemini
- This causes the following exception: `AssertionError: OpenAI requires `tool_call_id` to be set: ToolCallPart(tool_name='now', args={}, tool_call_id=None, part_kind='tool-call')`
## [Edit] Minimal working example
```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.models.gemini import GeminiModel
from datetime import datetime
open_ai_api_key = ...
gemini_api_key = ...
openai_model = OpenAIModel(
model_name='gpt-4o-mini',
api_key=open_ai_api_key,
)
gemini_model = GeminiModel(
model_name='gemini-2.0-flash-exp', # could be gemini-1.5-flash also
api_key=gemini_api_key,
)
agent = Agent(gemini_model)
@agent.tool_plain
def now():
return datetime.now().isoformat()
r1 = agent.run_sync('what is the current date time?')
print(r1.all_messages_json())
r2 = agent.run_sync( # this will fail
'what time is now?',
model=openai_model,
message_history=r1.all_messages(),
)
print(r2.all_messages_json())
```
## Message history (stored until call gpt-4o-mini)
```python
[ModelRequest(parts=[SystemPromptPart(content='\nYou are a test agent.\n\nYou must do what the user asks.\n', dynamic_ref=None, part_kind='system-prompt'), UserPromptPart(content='call now', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 5, 628330, tzinfo=TzInfo(UTC)), part_kind='user-prompt')], kind='request'),
ModelResponse(parts=[TextPart(content='I am sorry, I cannot fulfill this request. The available tools do not provide the functionality to make calls.\n', part_kind='text')], model_name='gemini-1.5-flash', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 6, 59052, tzinfo=TzInfo(UTC)), kind='response'),
ModelRequest(parts=[UserPromptPart(content='call the tool now', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 14, 394461, tzinfo=TzInfo(UTC)), part_kind='user-prompt')], kind='request'),
ModelResponse(parts=[TextPart(content='I cannot call a tool. The available tools are functions that I can execute, not entities that I can call in a telephone sense. Is there something specific you would like me to do with one of the available tools?\n', part_kind='text')], model_name='gemini-1.5-flash', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 15, 449295, tzinfo=TzInfo(UTC)), kind='response'),
ModelRequest(parts=[UserPromptPart(content='what time is now?', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 23, 502937, tzinfo=TzInfo(UTC)), part_kind='user-prompt')], kind='request'),
ModelResponse(parts=[ToolCallPart(tool_name='now', args={}, tool_call_id=None, part_kind='tool-call')], model_name='gemini-1.5-flash', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 24, 151395, tzinfo=TzInfo(UTC)), kind='response'),
ModelRequest(parts=[ToolReturnPart(tool_name='now', content='2025-02-11T12:55:24.153651-03:00', tool_call_id=None, timestamp=datetime.datetime(2025, 2, 11, 15, 55, 24, 153796, tzinfo=TzInfo(UTC)), part_kind='tool-return')], kind='request'),
ModelResponse(parts=[TextPart(content='The current time is 2025-02-11 12:55:24 -03:00.\n', part_kind='text')], model_name='gemini-1.5-flash', timestamp=datetime.datetime(2025, 2, 11, 15, 55, 24, 560881, tzinfo=TzInfo(UTC)), kind='response')]
```
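As a hypothetical workaround until this is fixed, the history could be patched so every tool-call/tool-return pair gets an id before it is handed to the OpenAI model. The attribute names (`parts`, `part_kind`, `tool_name`, `tool_call_id`) are taken from the printed history above; whether the parts are mutable in your pydantic-ai version is an assumption.

```python
import itertools

def patch_tool_call_ids(messages, prefix="call_"):
    """Fill in missing tool_call_id values on tool-call/tool-return parts."""
    counter = itertools.count()
    pending = {}  # tool_name -> generated id, consumed by the matching return
    for message in messages:
        for part in getattr(message, "parts", []):
            if getattr(part, "tool_call_id", "absent") is not None:
                continue  # already set, or the attribute does not exist
            kind = getattr(part, "part_kind", "")
            name = getattr(part, "tool_name", "")
            if kind == "tool-call":
                new_id = f"{prefix}{next(counter)}"
                pending[name] = new_id
                part.tool_call_id = new_id
            elif kind == "tool-return":
                part.tool_call_id = pending.pop(name, f"{prefix}{next(counter)}")
    return messages
```

You would call this on `r1.all_messages()` before passing the result as `message_history` to the OpenAI-backed run.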
## Traceback
```
Traceback (most recent call last):
File "/app/agents/_agents/_wrapper.py", line 125, in run_stream
async with self._agent.run_stream(
~~~~~~~~~~~~~~~~~~~~~~^
user_prompt=user_prompt,
^^^^^^^^^^^^^^^^^^^^^^^^
...<2 lines>...
deps=self.deps,
^^^^^^^^^^^^^^^
) as result:
^
File "/usr/local/lib/python3.13/contextlib.py", line 214, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/pydantic_ai/agent.py", line 595, in run_stream
async with node.run_to_result(GraphRunContext(graph_state, graph_deps)) as r:
~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/contextlib.py", line 214, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/pydantic_ai/_agent_graph.py", line 415, in run_to_result
async with ctx.deps.model.request_stream(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
ctx.state.message_history, model_settings, model_request_parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
) as streamed_response:
^
File "/usr/local/lib/python3.13/contextlib.py", line 214, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/pydantic_ai/models/openai.py", line 160, in request_stream
response = await self._completions_create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
messages, True, cast(OpenAIModelSettings, model_settings or {}), model_request_parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/usr/local/lib/python3.13/site-packages/pydantic_ai/models/openai.py", line 203, in _completions_create
openai_messages = list(chain(*(self._map_message(m) for m in messages)))
File "/usr/local/lib/python3.13/site-packages/pydantic_ai/models/openai.py", line 267, in _map_message
tool_calls.append(self._map_tool_call(item))
~~~~~~~~~~~~~~~~~~~^^^^^^
File "/usr/local/lib/python3.13/site-packages/pydantic_ai/models/openai.py", line 284, in _map_tool_call
id=_guard_tool_call_id(t=t, model_source='OpenAI'),
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.13/site-packages/pydantic_ai/_utils.py", line 200, in guard_tool_call_id
assert t.tool_call_id is not None, f'{model_source} requires `tool_call_id` to be set: {t}'
^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: OpenAI requires `tool_call_id` to be set: ToolCallPart(tool_name='now', args={}, tool_call_id=None, part_kind='tool-call')
``` | open | 2025-02-11T16:05:52Z | 2025-02-27T12:42:16Z | https://github.com/pydantic/pydantic-ai/issues/895 | [
"bug",
"good first issue"
] | AlexEnrique | 6 |
JaidedAI/EasyOCR | pytorch | 1,340 | Using CPU. Note: This module is much faster with a GPU. | Hi,
Is this a bug, or is something wrong with my CPU? Why is it giving me empty output?
My Code:

```python
import cv2
import easyocr

reader = easyocr.Reader(['en', 'hi'], gpu=False)

image_path = r"yoboyS_20230522065533539_.jpg"
image = cv2.imread(image_path)
if image is None:
    print(f"Error: Could not read the image from {image_path}")
else:
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    results = reader.readtext(image_rgb)
    for bbox, text, prob in results:
        print(f"Detected text: '{text}' with confidence: {prob:.2f}")
```

Result:

```
Using CPU. Note: This module is much faster with a GPU.
```

| open | 2024-11-30T16:36:48Z | 2025-01-10T02:57:46Z | https://github.com/JaidedAI/EasyOCR/issues/1340 | [] | parvinders347 | 2 |
graphql-python/graphql-core | graphql | 12 | Got invalid value wrong interpretation | Code in graphql-core
https://github.com/graphql-python/graphql-core/blob/master/graphql/execution/values.py#L71-L76
Code in graphql-core-next:
https://github.com/graphql-python/graphql-core-next/blob/master/graphql/execution/values.py#L96-L99
Graphql-core next leaks the inner Python representation of an object.
In general, after reviewing a lot of tests and code, there is a lot of usage of `repr` when it should be used just for debugging, not for uniform error messages. | closed | 2018-10-04T10:12:18Z | 2018-10-22T18:17:10Z | https://github.com/graphql-python/graphql-core/issues/12 | [] | syrusakbary | 4 |
gevent/gevent | asyncio | 1,721 | Failure to build on Python 3.9.1 / Apple arm64 | * gevent version: 20.9.0
* Python version: cPython 3.9.1 downloaded from python.org
* Operating System: macOS 11.1 (on M1 arm64)
### Description:
I have tried two ways to get a python project, which uses gevent, running on a new M1 macbook.
The first way was to use this method https://stackoverflow.com/a/64885034/202168
i.e. clone the Terminal.app and run it under Rosetta, install everything as usual for the project (Python 3.8.6, virtualenv) and in theory it's running as emulated x64 arch.
In that case pip install completed fine but I get an error when I import gevent:
```
Python 3.8.6 (default, Dec 15 2020, 19:24:48)
Type 'copyright', 'credits' or 'license' for more information
IPython 7.14.0 -- An enhanced Interactive Python. Type '?' for help.
In [1]: import gevent
zsh: illegal hardware instruction ipython
```
(other packages like `psycopg2` import fine, so I don't think it's an ipython problem)
(Activity Monitor reports this ipython `python3.8` process as "intel" architecture, so Rosetta must be active, but clearly was unable to emulate something properly)
The second method I tried was to use the arm64 build of Python 3.9.1, with the installer downloaded from here: https://www.python.org/downloads/release/python-391/
This time I get an error in install phase for gevent:
```
% VENV/bin/pip install --no-binary :all: --no-cache-dir gevent
Collecting gevent
Downloading gevent-20.9.0.tar.gz (5.8 MB)
|████████████████████████████████| 5.8 MB 2.1 MB/s
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... error
ERROR: Command errored out with exit status 1:
command: /Users/anentropic/Documents/Dev/myproject/VENV/bin/python3 /Users/anentropic/Documents/Dev/myproject/VENV/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/tmp45tz0spa
cwd: /private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-install-pffddqig/gevent_6639e99d258f4982b9369a7d01ae5be8
Complete output (43 lines):
Traceback (most recent call last):
File "/Users/anentropic/Documents/Dev/myproject/VENV/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 280, in <module>
main()
File "/Users/anentropic/Documents/Dev/myproject/VENV/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 263, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/Users/anentropic/Documents/Dev/myproject/VENV/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py", line 133, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-build-env-jqwff0br/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 161, in prepare_metadata_for_build_wheel
self.run_setup()
File "/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-build-env-jqwff0br/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 253, in run_setup
super(_BuildMetaLegacyBackend,
File "/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-build-env-jqwff0br/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 145, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 471, in <module>
run_setup(EXT_MODULES)
File "setup.py", line 338, in run_setup
setup(
File "/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-build-env-jqwff0br/overlay/lib/python3.9/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/core.py", line 108, in setup
_setup_distribution = dist = klass(attrs)
File "/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-build-env-jqwff0br/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 423, in __init__
_Distribution.__init__(self, {
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/dist.py", line 292, in __init__
self.finalize_options()
File "/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-build-env-jqwff0br/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 695, in finalize_options
ep(self)
File "/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-build-env-jqwff0br/overlay/lib/python3.9/site-packages/setuptools/dist.py", line 702, in _finalize_setup_keywords
ep.load()(self, ep.name, value)
File "/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-build-env-jqwff0br/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 219, in cffi_modules
add_cffi_module(dist, cffi_module)
File "/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-build-env-jqwff0br/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 49, in add_cffi_module
execfile(build_file_name, mod_vars)
File "/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-build-env-jqwff0br/overlay/lib/python3.9/site-packages/cffi/setuptools_ext.py", line 25, in execfile
exec(code, glob, glob)
File "src/gevent/libev/_corecffi_build.py", line 31, in <module>
ffi = FFI()
File "/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-build-env-jqwff0br/overlay/lib/python3.9/site-packages/cffi/api.py", line 48, in __init__
import _cffi_backend as backend
ImportError: dlopen(/private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-build-env-jqwff0br/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so, 2): Symbol not found: _ffi_prep_closure
Referenced from: /private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-build-env-jqwff0br/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
Expected in: flat namespace
in /private/var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/pip-build-env-jqwff0br/overlay/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so
----------------------------------------
ERROR: Command errored out with exit status 1: /Users/anentropic/Documents/Dev/myproject/VENV/bin/python3 /Users/anentropic/Documents/Dev/myproject/VENV/lib/python3.9/site-packages/pip/_vendor/pep517/_in_process.py prepare_metadata_for_build_wheel /var/folders/w1/_vgkxyln4c7bk8kr29s1y1k00000gn/T/tmp45tz0spa Check the logs for full command output.
```
| closed | 2020-12-16T22:47:28Z | 2023-12-21T12:06:21Z | https://github.com/gevent/gevent/issues/1721 | [
"Status: not gevent",
"Type: Question"
] | anentropic | 24 |
allenai/allennlp | data-science | 5,009 | @Registrable.register decorator hinders annotation-based suggestions in IDEs | I have a fix for your consideration that I'll open a PR for.
<!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Checklist
<!-- To check an item on the list replace [ ] with [x]. -->
- [x] I have verified that the issue exists against the `master` branch of AllenNLP.
- [x] I have read the relevant section in the [contribution guide](https://github.com/allenai/allennlp/blob/master/CONTRIBUTING.md#bug-fixes-and-new-features) on reporting bugs.
- [x] I have checked the [issues list](https://github.com/allenai/allennlp/issues) for similar or identical bug reports.
- [x] I have checked the [pull requests list](https://github.com/allenai/allennlp/pulls) for existing proposed fixes.
- [x] I have checked the [CHANGELOG](https://github.com/allenai/allennlp/blob/master/CHANGELOG.md) and the [commit log](https://github.com/allenai/allennlp/commits/master) to find out if the bug was already fixed in the master branch.
- [x] I have included in the "Description" section below a traceback from any exceptions related to this bug.
- [x] I have included in the "Related issues or possible duplicates" section below all related issues and possible duplicate issues (If there are none, check this box anyway).
- [x] I have included in the "Environment" section below the name of the operating system and Python version that I was using when I discovered this bug.
- [x] I have included in the "Environment" section below the output of `pip freeze`.
- [x] I have included in the "Steps to reproduce" section below a minimally reproducible example.
## Description
<!-- Please provide a clear and concise description of what the bug is here. -->
The `@<cls>.register(...)` decorator masks IDE completions derived from type annotations, due to the annotations in [registrable.py](../blob/main/allennlp/common/registrable.py).
Consider the following
```py3
import abc
from allennlp.common import Registrable
class Interface(Registrable, abc.ABC):
@abc.abstractmethod
def method(self) -> object:
raise NotImplementedError
@Interface.register("implementation")
class Implementation(Interface):
def method(self) -> object:
return object()
instance = Implementation()
# Tab completion will not work for `instance.method()`
obj = instance.method()
```
Neither Jedi nor Pylance will offer tab completions for anything that has to do with `Implementation`, except for methods on `Registrable`.
mypy will get the type information correct, but Pylance will not.
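For concreteness, this is a hedged sketch of the annotation shape that keeps completions working: a generic `TypeVar` so the decorator is known to return the class it received. It is my own minimal version, not AllenNLP's actual implementation.

```python
from typing import Callable, Dict, Type, TypeVar

T = TypeVar("T")

class Registrable:
    _registry: Dict[type, Dict[str, type]] = {}

    @classmethod
    def register(cls, name: str) -> Callable[[Type[T]], Type[T]]:
        def decorator(subclass: Type[T]) -> Type[T]:
            Registrable._registry.setdefault(cls, {})[name] = subclass
            # Returning the class unchanged (and typed as Type[T]) preserves
            # the decorated class's identity for IDE completion engines.
            return subclass
        return decorator
```

With `Callable[[Type[T]], Type[T]]`, type checkers see that `@Interface.register("implementation")` leaves `Implementation` as itself, so `obj.method` completions come back.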
<details>
<summary><b>Python traceback: N/A</b></summary>
<p>
<!-- Paste the traceback from any exception (if there was one) in between the next two lines below -->
```
```
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
<!-- Provide the name of operating system below (e.g. OS X, Linux) -->
OS: macOS 11.2.1
<!-- Provide the Python version you were using (e.g. 3.7.1) -->
Python version: 3.9.1
<details>
<summary><b>Output of <code>pip freeze</code>:</b></summary>
<p>
<!-- Paste the output of `pip freeze` in between the next two lines below -->
```
```
</p>
</details>
## Steps to reproduce
Visual Studio Code version: 1.53.2
Pylance version: 2021.2.3
mypy version: 0.800
Jedi version: 0.18.0
<details>
<summary><b>Example source:</b></summary>
<p>
<!-- Add a fully runnable example in between the next two lines below that will reproduce the bug -->
```py3
import abc

from allennlp.common import Registrable


class Interface(Registrable, abc.ABC):
    @abc.abstractmethod
    def method(self) -> object:
        raise NotImplementedError


@Interface.register("implementation")
class Implementation(Interface):
    def method(self) -> object:
        return object()


obj = Implementation()
# Tab completion will not work for `obj.method`
obj.method()
```
</p>
</details>
| closed | 2021-02-22T19:16:54Z | 2021-02-24T01:37:29Z | https://github.com/allenai/allennlp/issues/5009 | [
"bug"
] | willfrey | 0 |
ultralytics/ultralytics | pytorch | 19,413 | First epoch's val mAP very low when fine-tuning | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I'm fine-tuning YOLOv11n object detection on my custom dataset. The first epoch's val mAP is very low (0.154). Is this normal? I'm not using the COCO classes, only my own custom classes.
### Additional
task: detect
mode: train
model: yolo11n.pt
data: /Users/sina/train_yolo/dataset.yaml
epochs: 100
time: null
patience: 100
batch: 8
imgsz: 640
save: true
save_period: -1
cache: false
device: cpu
workers: 1
project: null
name: train21
exist_ok: false
pretrained: true
optimizer: auto
verbose: true
seed: 0
deterministic: true
single_cls: false
rect: false
cos_lr: false
close_mosaic: 10
resume: false
amp: true
fraction: 1.0
profile: false
freeze: 10
multi_scale: false
overlap_mask: true
mask_ratio: 4
dropout: 0.0
val: true
split: val
save_json: false
save_hybrid: false
conf: null
iou: 0.7
max_det: 300
half: false
dnn: false
plots: true
source: null
vid_stride: 1
stream_buffer: false
visualize: false
augment: false
agnostic_nms: false
classes: null
retina_masks: false
embed: null
show: false
save_frames: false
save_txt: false
save_conf: false
save_crop: false
show_labels: true
show_conf: true
show_boxes: true
line_width: null
format: torchscript
keras: false
optimize: false
int8: false
dynamic: false
simplify: true
opset: null
workspace: null
nms: false
lr0: 0.01
lrf: 0.01
momentum: 0.937
weight_decay: 0.0005
warmup_epochs: 3.0
warmup_momentum: 0.8
warmup_bias_lr: 0.0
box: 7.5
cls: 0.5
dfl: 1.5
pose: 12.0
kobj: 1.0
nbs: 64
hsv_h: 0.015
hsv_s: 0.7
hsv_v: 0.4
degrees: 0.0
translate: 0.1
scale: 0.5
shear: 0.0
perspective: 0.0
flipud: 0.0
fliplr: 0.5
bgr: 0.0
mosaic: 0
mixup: 0.0
copy_paste: 0.0
copy_paste_mode: flip
auto_augment: randaugment
erasing: 0.4
crop_fraction: 1.0
cfg: null
tracker: botsort.yaml
save_dir: /Users/sina/runs/detect/train21 | closed | 2025-02-25T04:01:55Z | 2025-02-25T12:38:47Z | https://github.com/ultralytics/ultralytics/issues/19413 | [
"question",
"detect"
] | nisrinaam29 | 2 |
modelscope/modelscope | nlp | 282 | TP-Aligner speech timestamp prediction runtime error | OS: windows
Python: python3.7
Package versions: pytorch=1.13.1, modelscope=1.5.0, funasr=0.4.1
Model: TP-Aligner speech timestamp prediction, 16k, offline
Command: the example from the model card
```python
inference_pipeline = pipeline(
    task=Tasks.speech_timestamp,
    model='damo/speech_timestamp_prediction-v1-16k-offline',)
rec_result = inference_pipeline(
    audio_in='https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/test_audio/asr_example_timestamps.wav',
    text_in='一 个 东 太 平 洋 国 家 为 什 么 跑 到 西 太 平 洋 来 了 呢',)
print(rec_result)
```
Problem description: running the example from the model card raises an error
Error log:

UnicodeDecodeError: 'gbk' codec can't decode byte 0x8b in position 4210: illegal multibyte sequence
TypeError: function takes exactly 5 arguments (1 given) | closed | 2023-04-21T06:59:31Z | 2023-05-28T02:00:59Z | https://github.com/modelscope/modelscope/issues/282 | [
"Stale"
] | Axiaozhu1 | 2 |
cvat-ai/cvat | computer-vision | 9,009 | Task annotations/backup download fails in online version | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Import .png images to a previously created project that used to work, as train or val subset
2. Annotate with box annotation tool, save annotations
3. Export task dataset > Ultralytics YOLO Detection 1.0 or Backup task
4. Request starts, click on download upon completion
5. Browser displays 'Unable to download, file doesn't exist'. On inspection of the error, the browser displays the following page :

### Expected Behavior
Annotation labels to export as a .zip file
### Possible Solution
Maybe one of the files or annotations is corrupted?
### Context
Training a detection model for a research project.
### Environment
```Markdown
- CVAT online version
- Windows 10 Enterprise LTSC version 1809
``` | closed | 2025-01-29T09:47:08Z | 2025-01-30T05:13:36Z | https://github.com/cvat-ai/cvat/issues/9009 | [
"bug"
] | boninale | 10 |
timkpaine/lantern | plotly | 161 | bars dropped from legend | closed | 2018-05-29T22:23:06Z | 2018-09-19T04:25:12Z | https://github.com/timkpaine/lantern/issues/161 | [
"bug",
"matplotlib/seaborn"
] | timkpaine | 1 | |
explosion/spaCy | data-science | 13,484 | Can I retokenize at the start of a training pipeline? | I need to perform a lot of _retokenization_ before running a _training pipeline_, but from the doc I cannot understand if that is possible and, if yes, how to specify that in the config file.
In https://github.com/explosion/spaCy/discussions/5921 , @svlandeg showed how to deal with a similar issue, in the case at hand (i.e. training NER), without adding a custom component; thus, she didn't explicitly answer the original question and my question.
As I explained in issue https://github.com/explosion/spaCy/issues/13248 and, more extensively, in discussion https://github.com/explosion/spaCy/discussions/7146 , I'm struggling to develop a viable tokenizer for the _Arabic language_. For doing that, I think I need both to extend the data (the configuration files) in the current implementation of the tokenizer and to add a considerable amount of post-processing.
In the past, I've implemented the post-processing with some _Cython_ code and I began to get significantly improved results from the _data debug_ and _train_ commands. Then, I installed _spaCy from source_, but in this case I wasn't able to integrate my Cython code with the spaCy codebase, more precisely to import _tokenizer.Tokenizer_ and _vocab.Vocab_.
Now, I guess that being able to put a component just after the spaCy _Tokenizer_ in the training pipeline (and in the production pipeline) would be much cleaner and probably more efficient.
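If it helps to make the question concrete, a rough sketch of what this could look like in a training config. This is an assumption on my part, not something verified against a running pipeline: the `retokenizer` factory name below is hypothetical and would have to be registered with `@Language.factory` in a file passed via `--code`, so that the component runs first and every later component (and training) sees the merged tokens.

```ini
[nlp]
lang = "ar"
pipeline = ["retokenizer", "tok2vec", "ner"]

[components.retokenizer]
factory = "retokenizer"
```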
Could somebody answer my question and/or suggest a solution for my problem? Thanks in advance! | open | 2024-05-12T15:11:00Z | 2024-05-15T11:04:49Z | https://github.com/explosion/spaCy/issues/13484 | [
"feat / tokenizer"
] | gtoffoli | 0 |
dsdanielpark/Bard-API | nlp | 289 | Reflection on the Bard API Project | # Thank you for loving Bard API
Your support means a lot!
## 1. Appreciation for Contributors
Before reflecting on the project, I want to express my gratitude to all the contributors. Especially, a big thank you to [Antonio Cheang](https://github.com/acheong08), and to everyone who enriched the Bard API with more features than I had initially envisioned.
In truth, the stars on this GitHub repository belong to all the contributors, not just me. If I could transfer GitHub stars, I would gladly send them to everyone involved.
Once again, I bow my head in gratitude to all the contributors. While I regret not maintaining the code as efficiently as I could have with my current skills, or anticipating that Bard API would remain active for so long, I will strive to become a better and more skilled developer in the future.
Also, as mentioned in the [Gemini API](https://github.com/dsdanielpark/Gemini-API), I apologize for moving the repository without the consent of the contributors, in pursuit of making it better. I have also added badges for all contributors to the [Gemini API](https://github.com/dsdanielpark/Gemini-API) package.
I hope this project has been of some help to the contributors.
<a href="https://github.com/dsdanielpark/Bard_API/graphs/contributors">
<img src="https://contrib.rocks/image?repo=dsdanielpark/Bard_API" />
</a>
[CBoYXD](https://github.com/CBoYXD), [veonua](https://github.com/veonua), [thewh1teagle](https://github.com/thewh1teagle), [jjkoh95](https://github.com/jjkoh95), [yihong0618](https://github.com/yihong0618), [nishantchauhan949](https://github.com/nishantchauhan949), [MeemeeLab](https://github.com/MeemeeLab), [kota113](https://github.com/kota113), [sachnun](https://github.com/sachnun), [amit9021](https://github.com/amit9021), [zeelsheladiya](https://github.com/zeelsheladiya), [ayansengupta17](https://github.com/ayansengupta17), [thecodekitchen](https://github.com/thecodekitchen), [SalimLouDev](https://github.com/SalimLouDev), [Qewertyy](https://github.com/Qewertyy), [senseibence](https://github.com/senseibence), [mirusu400](https://github.com/mirusu400), [szv99](https://github.com/szv99), [sudoAlireza](https://github.com/sudoAlireza)
## 2. How Bard API Came to Be
In truth, while working as an AI engineer for several years, I had a desire to develop a package that would allow me to monitor the status of my code remotely, especially when running projects with limited computing power over weekends or after work hours. Hence, I quickly developed a package called [Except Notifier](https://github.com/dsdanielpark/ExceptNotifier), which allowed monitoring of code status remotely from various messenger apps. During this process, I also designed the Bard API to be compatible with Chat GPT and Bard, so they could be used together. (Of course, this also needs refactoring, but there's so much to do.)
Additionally, while working on [Co-Coder](https://github.com/dsdanielpark/co-coder), which returns Python and iPython errors, I wanted to enable developers to receive debugging information about errors without having to search for them. Once again, Bard API came into play during this process.
Although both packages were ambitiously prepared, they didn't receive much attention.
However, I plan to refactor both packages soon to make them more user-friendly.
## 3. Challenges Faced while Curating Bard API
Firstly, it was psychologically challenging not knowing when Google's application interface would change. It was the most difficult part, as I thought that even if I developed a package, it would become unusable within two months.
So, it was tough to invest so much time in the project, not knowing when it might become obsolete.
Looking back, if I had taken the time to prepare then, a quick transition to Gemini API would have been possible.
Secondly, Bard Web underwent frequent interface changes with experimental features being added and removed, leading to issues where implemented features wouldn't work due to these changes. I should have written test code to automate CI, but my fear of not knowing when things might not work prevented me from doing so easily.
Thirdly, feedback on Google account status and differences between regions/countries was the most challenging. Therefore, in an effort to quickly apply it to other cookie values or interfaces, I simply modified the existing structure and deployed it. However, I couldn't proceed easily due to the fear of creating new issues for users who downloaded and used the package due to these changes.
In essence, it was difficult because I couldn't debug various Google accounts, regions, and countries one by one.
Lastly, it was about operating the open-source community. If it had been an API I provided as a service, I would have fundamentally modified the source to prevent various issues from arising. But in reality, it was JSON hijacking, so I couldn't predict and develop various scenarios.
This also contributed to some inefficiencies and messy code.
In conclusion, reflecting on it, the package was effective for a long time, and now it's back in action with Antonio Cheang via Gemini API.
Furthermore, while restructuring Bard API, I also planned to implement it more efficiently asynchronously. However, HanaokaYuzu had been preparing for this attempt since January and implemented it cleanly, so I aim to quickly finish the new Gemini API for my personal projects and refactoring purposes.
## 4. Reasons Bard API Was Loved
Firstly, although it became a bit messy later on, the initial code was very lean, which I believe was effective. It was easy to pull together into various projects because it was a small structure that could be attached here and there. Also, providing easy-to-understand usage instructions in the readme and offering complete code examples in Colab might have contributed to its effectiveness.
Later on, the addition of various features by many contributors and the inclusion of gifs and various code examples to make it easier for people to use may have also been effective.
In short, the principle that easy and convenient code for me to use is easy for others to use seemed to work to some extent.
Moreover, I realized that striving for both a complex structure and user-friendly package requires a lot of consideration.
## 5. Personal Insights
- Providing modules for minimal usage across various projects
- Strong encapsulation from the beginning is not ideal; it's better to leave some duplication in the code so that other contributors can easily modify and test it, then factor out constants and shared variables later during refactoring.
- Providing easy documentation and minimum samples encourages people to use it. Therefore, it's good to assume that readers use very simple code. In other words, even if it's messy, it's better to write functions that are easy to modify.
## 6. Lessons Learned
- I was able to communicate with various people in the open-source project. In particular, it was interesting to see how users used my package and their derivative repositories, through which I learned how to write more efficient and better code. Since I understand the overall process of my package well, I was able to confirm these aspects and make contributions to readme when there were some parts where users declared sessions inefficiently.
- I realized that frequently asked questions were due to the unkindness of my documentation or code, so I modified the package to avoid repeated questions.
- Overall, I think it was good to make the code user-friendly even if it's messy and takes a little longer. It is also good to consider how much users will use the feature when feature requests come in and compare the development cost accordingly.
- I became a bit tired as I continued to communicate with developers of various nationalities, but it was also good to see them find answers and be passionate.
## 7. Conclusion
Other developers' efforts to find answers and their passion motivated me to work harder.
Once again, I sincerely thank all the contributors who have loved and contributed to this imperfect package.
I hope you all have a happy and fulfilling 2024. Stay healthy, developers! Adiós! | open | 2024-03-11T02:54:05Z | 2024-03-11T03:00:11Z | https://github.com/dsdanielpark/Bard-API/issues/289 | [
"project ending credit"
] | dsdanielpark | 0 |
PaddlePaddle/ERNIE | nlp | 692 | retrospective feed mechanism的开启机制 | 请问retrospective feed mechanism是根据repeat_input来判断是否开启吗?
那做文本分类任务时,是默认没有开启吗?
我试着改成TRUE,迭代了三次以后,效果并没有变好,请问是什么原因呢?
可能是我理解错了,期待您的回答,谢谢 | closed | 2021-06-02T09:52:19Z | 2021-08-08T10:33:46Z | https://github.com/PaddlePaddle/ERNIE/issues/692 | [
"wontfix"
] | zpp13 | 1 |
ets-labs/python-dependency-injector | flask | 628 | Error while overriding container with copy | Hello,
I have an issue while overriding a base container with providers using new sub-dependencies:
```
from dependency_injector import containers, providers


class SessionA:
    def __init__(self) -> None:
        print("init sessionA")


class SessionB:
    def __init__(self, config) -> None:
        print("init sessionB", config)
        self.config = config


class Usecase:
    def __init__(self, session) -> None:
        print("init usecase", session.config)
        self.session = session


class BaseContainer(containers.DeclarativeContainer):
    session = providers.Factory(SessionA)
    usecase = providers.Factory(Usecase, session=session)


@containers.copy(BaseContainer)
class NewContainer(BaseContainer):
    config = providers.Configuration()
    session = providers.Factory(SessionB, config=config)


def create_container():
    container = NewContainer()
    container.config.value.from_value("foo")

    assert container.config()["value"] == "foo"
    assert container.session().config["value"] == "foo"
    assert container.usecase().session.config["value"] == "foo"  # KeyError: 'value'
```
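A guess at the mechanism, sketched without dependency_injector itself (plain Python, illustrative names): the `usecase` provider captures a reference to the provider object bound to `session` at class-definition time, so redefining the `session` name in the subclass does not rewire the already-captured dependency.

```python
# Plain-Python sketch of provider capture (not dependency_injector's code).
class Provider:
    def __init__(self, factory, **deps):
        self.factory = factory
        self.deps = deps  # dependency references are captured here, once

    def __call__(self):
        return self.factory(**{name: dep() for name, dep in self.deps.items()})


session = Provider(lambda: "SessionA")
usecase = Provider(lambda session: f"usecase({session})", session=session)

# Rebinding the *name* does not change what `usecase` already captured:
session = Provider(lambda: "SessionB")
print(usecase())  # prints: usecase(SessionA)
```

If that framing applies here, one workaround would be to also redeclare `usecase` in `NewContainer` so it captures the new `session` provider.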
| open | 2022-10-03T17:04:18Z | 2022-10-03T17:04:59Z | https://github.com/ets-labs/python-dependency-injector/issues/628 | [] | giodall | 0 |
ultralytics/ultralytics | machine-learning | 18,925 | OBB Model Prediction | ### Search before asking
- [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
To the best of my knowledge, YOLO OBB models currently ignore labels that are partially outside the image area and cannot produce such predictions.
But for the use case of OBB models, we need to cover the entire area of the target. (see the attached image)
Models like MMRotate can make partially out of image area predictions. Would you mind adding such feature as well?

### Use case
This feature would be used to make predictions that cover the entire area of the target which is at the edge. The user may convert the prediction to polygon and clip it to get triangular results.
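The clipping step mentioned above can be sketched in plain Python with Sutherland-Hodgman clipping of the oriented box against the image rectangle (illustrative code, not part of Ultralytics):

```python
def clip_polygon(points, width, height):
    """Clip a convex polygon (e.g. an OBB as corner points) to the image
    rectangle using Sutherland-Hodgman clipping. Points are (x, y) tuples."""

    def clip_edge(pts, inside, intersect):
        out = []
        for i, cur in enumerate(pts):
            prev = pts[i - 1]
            if inside(cur):
                if not inside(prev):
                    out.append(intersect(prev, cur))
                out.append(cur)
            elif inside(prev):
                out.append(intersect(prev, cur))
        return out

    def cross_x(p, q, x):  # intersection with the vertical line x = const
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def cross_y(p, q, y):  # intersection with the horizontal line y = const
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    pts = list(points)
    for inside, intersect in [
        (lambda p: p[0] >= 0, lambda p, q: cross_x(p, q, 0.0)),
        (lambda p: p[0] <= width, lambda p, q: cross_x(p, q, float(width))),
        (lambda p: p[1] >= 0, lambda p, q: cross_y(p, q, 0.0)),
        (lambda p: p[1] <= height, lambda p, q: cross_y(p, q, float(height))),
    ]:
        if not pts:
            break
        pts = clip_edge(pts, inside, intersect)
    return pts


# An OBB hanging off the left edge of a 100x100 image:
obb = [(-20, 10), (30, 10), (30, 60), (-20, 60)]
print(clip_polygon(obb, 100, 100))  # clipped corners land on x = 0
```

The clipped result can have more or fewer vertices than four, which is where the triangular edge cases mentioned above come from.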
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-01-28T08:12:37Z | 2025-02-11T23:54:53Z | https://github.com/ultralytics/ultralytics/issues/18925 | [
"enhancement",
"OBB"
] | oguz-hanoglu | 4 |
ageitgey/face_recognition | python | 1,161 | Dlib install error: must use git to get the latest version | * face_recognition version:1.3
* Python version:3.8
* Operating System: ubuntu 20
### Description
When I install dlib, it always fails and throws these errors. I have updated the version to the latest and installed python3.8-dev.
```
c++: fatal error: Killed signal terminated program cc1plus
compilation terminated.
make[2]: *** [CMakeFiles/dlib_python.dir/build.make:330: CMakeFiles/dlib_python.dir/src/face_recognition.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:137: CMakeFiles/dlib_python.dir/all] Error 2
make: *** [Makefile:104: all] Error 2
Traceback (most recent call last):
  File "setup.py", line 223, in <module>
    setup(
  File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 144, in setup
    return distutils.core.setup(**attrs)
  File "/usr/lib/python3.8/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/lib/python3.8/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 67, in run
    self.do_egg_install()
  File "/usr/lib/python3/dist-packages/setuptools/command/install.py", line 109, in do_egg_install
    self.run_command('bdist_egg')
  File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/bdist_egg.py", line 172, in run
    cmd = self.call_command('install_lib', warn_dir=0)
  File "/usr/lib/python3/dist-packages/setuptools/command/bdist_egg.py", line 158, in call_command
    self.run_command(cmdname)
  File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/install_lib.py", line 23, in run
    self.build()
  File "/usr/lib/python3.8/distutils/command/install_lib.py", line 109, in build
    self.run_command('build_ext')
  File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "setup.py", line 135, in run
    self.build_extension(ext)
  File "setup.py", line 175, in build_extension
    subprocess.check_call(cmake_build, cwd=build_folder)
  File "/usr/lib/python3.8/subprocess.py", line 364, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'Release', '--', '-j1']' returned non-zero exit status 2.
```
### What I Did
I tried:
`pip3 install dlib`
When that failed, I cloned the repository, changed into its directory, and ran:
`python3 setup.py install`
| closed | 2020-06-17T10:07:05Z | 2020-06-17T12:03:39Z | https://github.com/ageitgey/face_recognition/issues/1161 | [] | Windyskys | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 586 | Compatibility with Pytorch 1.8 and CUDA 11 | Hi,
In order to take advantage of the Nvidia Ampere architecture, I was wondering how one could make the synthesizer and vocoder training programs use PyTorch 1.8 and CUDA 11.
Any lead on the procedure to follow would be very helpful.
Thank you | closed | 2020-11-02T17:42:47Z | 2020-11-04T15:37:58Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/586 | [] | rallandr | 2 |
huggingface/datasets | computer-vision | 6,771 | Datasets FileNotFoundError when trying to generate examples. | ### Discussed in https://github.com/huggingface/datasets/discussions/6768
<div type='discussions-op-text'>
<sup>Originally posted by **RitchieP** April 1, 2024</sup>
Currently, I have a dataset hosted on Huggingface with a custom script [here](https://huggingface.co/datasets/RitchieP/VerbaLex_voice).
I'm loading my dataset as below.
```py
from datasets import load_dataset, IterableDatasetDict
dataset = IterableDatasetDict()
dataset["train"] = load_dataset("RitchieP/VerbaLex_voice", "ar", split="train", use_auth_token=True, streaming=True)
dataset["test"] = load_dataset("RitchieP/VerbaLex_voice", "ar", split="test", use_auth_token=True, streaming=True)
```
And when I try to see the data I have loaded with
```py
list(dataset["train"].take(1))
```
And it gives me this stack trace
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[2], line 1
----> 1 list(dataset["train"].take(1))
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:1388, in IterableDataset.__iter__(self)
1385 yield formatter.format_row(pa_table)
1386 return
-> 1388 for key, example in ex_iterable:
1389 if self.features:
1390 # `IterableDataset` automatically fills missing columns with None.
1391 # This is done with `_apply_feature_types_on_example`.
1392 example = _apply_feature_types_on_example(
1393 example, self.features, token_per_repo_id=self._token_per_repo_id
1394 )
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:1044, in TakeExamplesIterable.__iter__(self)
1043 def __iter__(self):
-> 1044 yield from islice(self.ex_iterable, self.n)
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:234, in ExamplesIterable.__iter__(self)
233 def __iter__(self):
--> 234 yield from self.generate_examples_fn(**self.kwargs)
File ~/.cache/huggingface/modules/datasets_modules/datasets/RitchieP--VerbaLex_voice/9465eaee58383cf9d7c3e14111d7abaea56398185a641b646897d6df4e4732f7/VerbaLex_voice.py:127, in VerbaLexVoiceDataset._generate_examples(self, local_extracted_archive_paths, archives, meta_path)
125 for i, audio_archive in enumerate(archives):
126 print(audio_archive)
--> 127 for path, file in audio_archive:
128 _, filename = os.path.split(path)
129 if filename in metadata:
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:869, in _IterableFromGenerator.__iter__(self)
868 def __iter__(self):
--> 869 yield from self.generator(*self.args, **self.kwargs)
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:919, in ArchiveIterable._iter_from_urlpath(cls, urlpath, download_config)
915 @classmethod
916 def _iter_from_urlpath(
917 cls, urlpath: str, download_config: Optional[DownloadConfig] = None
918 ) -> Generator[Tuple, None, None]:
--> 919 compression = _get_extraction_protocol(urlpath, download_config=download_config)
920 # Set block_size=0 to get faster streaming
921 # (e.g. for hf:// and https:// it uses streaming Requests file-like instances)
922 with xopen(urlpath, "rb", download_config=download_config, block_size=0) as f:
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:400, in _get_extraction_protocol(urlpath, download_config)
398 urlpath, storage_options = _prepare_path_and_storage_options(urlpath, download_config=download_config)
399 try:
--> 400 with fsspec.open(urlpath, **(storage_options or {})) as f:
401 return _get_extraction_protocol_with_magic_number(f)
402 except FileNotFoundError:
File /opt/conda/lib/python3.10/site-packages/fsspec/core.py:100, in OpenFile.__enter__(self)
97 def __enter__(self):
98 mode = self.mode.replace("t", "").replace("b", "") + "b"
--> 100 f = self.fs.open(self.path, mode=mode)
102 self.fobjects = [f]
104 if self.compression is not None:
File /opt/conda/lib/python3.10/site-packages/fsspec/spec.py:1307, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)
1305 else:
1306 ac = kwargs.pop("autocommit", not self._intrans)
-> 1307 f = self._open(
1308 path,
1309 mode=mode,
1310 block_size=block_size,
1311 autocommit=ac,
1312 cache_options=cache_options,
1313 **kwargs,
1314 )
1315 if compression is not None:
1316 from fsspec.compression import compr
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:180, in LocalFileSystem._open(self, path, mode, block_size, **kwargs)
178 if self.auto_mkdir and "w" in mode:
179 self.makedirs(self._parent(path), exist_ok=True)
--> 180 return LocalFileOpener(path, mode, fs=self, **kwargs)
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:302, in LocalFileOpener.__init__(self, path, mode, autocommit, fs, compression, **kwargs)
300 self.compression = get_compression(path, compression)
301 self.blocksize = io.DEFAULT_BUFFER_SIZE
--> 302 self._open()
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:307, in LocalFileOpener._open(self)
305 if self.f is None or self.f.closed:
306 if self.autocommit or "w" not in self.mode:
--> 307 self.f = open(self.path, mode=self.mode)
308 if self.compression:
309 compress = compr[self.compression]
FileNotFoundError: [Errno 2] No such file or directory: '/kaggle/working/h'
```
After looking into the stack trace and referring to the source code, it looks like it's trying to access a directory in the notebook's environment, and I don't understand why.
Not sure if it's a bug in the Datasets library, so I'm opening a discussion first. Feel free to ask for more information if needed. Appreciate any help in advance!</div>
Hi, referring to the discussion title above, after further digging, I think it's an issue within the datasets library, but I'm not quite sure where.
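One detail worth noting (this is a guess, not a confirmed diagnosis): the failing path ends in `/h`, which is exactly what you get when code iterates over a URL *string* instead of a *list* of URLs. Iterating a string yields single characters, and a bare `h` then resolves relative to the working directory:

```python
# Illustrative reproduction of the suspected bug, not the dataset script itself.
archive_urls = "https://example.com/data/archive.tar.gz"  # hypothetical URL

chars = [p for p in archive_urls]  # iterating a string yields characters
print(chars[0])  # prints: h (and a bare 'h' would resolve to <cwd>/h)

# Wrapping the single URL in a list restores the intended behavior:
paths = [p for p in [archive_urls]]
print(paths[0])  # prints the full URL
```

If that is what's happening, the fix would be in how `archives` is built in `VerbaLex_voice.py` before `_generate_examples` iterates over it.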
If you require any more info or actions from me, please let me know. Appreciate any help in advance! | closed | 2024-04-02T10:24:57Z | 2024-04-04T14:22:03Z | https://github.com/huggingface/datasets/issues/6771 | [] | RitchieP | 2 |
liangliangyy/DjangoBlog | django | 443 | Set up successfully | Thanks to the author for the contribution: http://0kqs2do1op.52http.tech
| closed | 2021-01-28T06:21:17Z | 2021-03-02T09:30:29Z | https://github.com/liangliangyy/DjangoBlog/issues/443 | [] | echo0110 | 0 |
mirumee/ariadne-codegen | graphql | 212 | Future-proofing generated types | One of our teams at Mirumee would like to future-proof their generated client.
They already know how the GraphQL API they are integrating with will change in the future and would like the generated client to already target the future API, but include a backwards-compatibility layer.
The initial idea for solving this would be implementing a plugin that adds a mapping in generated client methods from the current object representations to the old ones, but this requires further study before we can decide between implementing something in codegen/a plugin or just offering a general recommendation. | open | 2023-09-12T11:12:19Z | 2023-09-12T11:12:19Z | https://github.com/mirumee/ariadne-codegen/issues/212 | [
"decision needed"
] | rafalp | 0 |
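The backwards-compatibility mapping described in that issue could be sketched like this (all names are illustrative, not ariadne-codegen output): the generated method returns the future API's shape, and a thin adapter maps it back to the representation existing callers expect.

```python
from dataclasses import dataclass


@dataclass
class ProductV2:
    """Shape the future GraphQL API will return (illustrative)."""
    id: str
    display_name: str


@dataclass
class Product:
    """Current shape existing callers depend on (illustrative)."""
    id: str
    name: str


def to_legacy(product: ProductV2) -> Product:
    # The compat layer: map new field names back onto the old representation.
    return Product(id=product.id, name=product.display_name)


print(to_legacy(ProductV2(id="p1", display_name="Widget")))
# prints: Product(id='p1', name='Widget')
```

A codegen plugin would presumably emit such adapters automatically for each changed type, which is the part that needs further study.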
explosion/spaCy | data-science | 13,076 | LLM models in spaCy requiring OpenAI key | #The following code will throw the error (marked below)
import spacy
nlp = spacy.blank("en")
#this next line throws the error below
llm_ner = nlp.add_pipe("llm_ner")
spaCy Error:
C:\Program Files\Python311\Lib\site-packages\spacy_llm\models\rest\openai\model.py:25: UserWarning: Could not find the API key to access the OpenAI API. Ensure you have an API key set up via https://platform.openai.com/account/api-keys, then make it available as an environment variable 'OPENAI_API_KEY'.
Why is this defaulting to the OpenAI model? Is there a way to bypass this such that other models from HuggingFace (e.g. Dolly) or spaCy's own LLM models can be used for NER recognition?
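For what it's worth, spacy-llm appears to fall back to an OpenAI REST model when no model is configured for the component. A config along these lines should select a HuggingFace-backed model instead; treat the registry name and model name below as assumptions to verify against the spacy-llm documentation for your installed version:

```ini
[components.llm_ner]
factory = "llm_ner"

[components.llm_ner.model]
@llm_models = "spacy.Dolly.v2"
name = "dolly-v2-3b"
```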
Thanks for your help.
Ronny
My Environment
============================== Info about spaCy ==============================
spaCy version 3.7.2
Location C:\Program Files\Python\Lib\site-packages\spacy
Platform Windows-11-10.0.22621-SP0
Python version 3.12.0
Pipelines en_core_web_lg (3.7.0), en_core_web_md (3.7.0), en_core_web_sm (3.7.0) | closed | 2023-10-21T14:47:54Z | 2023-10-23T09:31:16Z | https://github.com/explosion/spaCy/issues/13076 | [
"feat/llm"
] | rshahrabani | 1 |
gunthercox/ChatterBot | machine-learning | 1,399 | Request for new API under Storage class to list all conversations for a specific conversation id | Hi,
I am looking for a new API under the Storage class to list all conversations for a specific conversation id.
I am not sure if it already exists. If not, it would be helpful to have something like the following:
`bot.storage.get_conversations(conversation_id)`, where `bot` is an instance of a ChatterBot class.
Thanks
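A sketch of the requested behavior (note: `get_conversations` is NOT an existing ChatterBot API, and this in-memory store is purely illustrative of what the adapter method would do):

```python
class InMemoryStorage:
    """Toy statement store, standing in for a ChatterBot storage adapter."""

    def __init__(self):
        self.statements = []  # each entry: (conversation_id, text)

    def add(self, conversation_id, text):
        self.statements.append((conversation_id, text))

    def get_conversations(self, conversation_id):
        # Hypothetical API: all statements belonging to one conversation.
        return [text for cid, text in self.statements if cid == conversation_id]


store = InMemoryStorage()
store.add("c1", "Hello")
store.add("c2", "Hi there")
store.add("c1", "How are you?")
print(store.get_conversations("c1"))  # prints: ['Hello', 'How are you?']
```

A real implementation would filter on the conversation field of the storage adapter's statement model instead of an in-memory list.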
| closed | 2018-09-13T13:34:41Z | 2019-07-19T02:36:47Z | https://github.com/gunthercox/ChatterBot/issues/1399 | [
"answered"
] | faraazc | 4 |
microsoft/nni | deep-learning | 5,427 | Res2Net/Res2Next series models prune error with assert len(set(num_channels_list)) == 1 | With models from https://github.com/Res2Net/Res2Net-PretrainedModels,
pruning with FPGMPruner,
I got the same error:
File "/usr/local/lib/python3.7/dist-packages/nni/compression/pytorch/speedup/compressor.py", line 518, in speedup_model
fix_mask_conflict(self.masks, self.bound_model, self.dummy_input)
File "/usr/local/lib/python3.7/dist-packages/nni/compression/pytorch/utils/mask_conflict.py", line 54, in fix_mask_conflict
masks = fix_channel_mask.fix_mask()
File "/usr/local/lib/python3.7/dist-packages/nni/compression/pytorch/utils/mask_conflict.py", line 264, in fix_mask
assert len(set(num_channels_list)) == 1
How could I get rid of such problems? | open | 2023-03-08T07:19:31Z | 2023-03-14T12:13:28Z | https://github.com/microsoft/nni/issues/5427 | [] | moonlightian | 2 |
Crinibus/scraper | web-scraping | 8 | Change soup.find_all(...)[0] to soup.find(...) | closed | 2020-07-05T11:33:19Z | 2020-07-05T21:54:41Z | https://github.com/Crinibus/scraper/issues/8 | [] | Crinibus | 0 | |
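The change in that title is behavioral as well as stylistic. A stand-in class (BeautifulSoup itself is not used here) mimics the relevant difference: `find_all(...)[0]` builds a full list and raises `IndexError` when nothing matches, while `find(...)` stops at the first match and returns `None`.

```python
class Soup:
    """Minimal stand-in mimicking the find / find_all distinction."""

    def __init__(self, tags):
        self.tags = tags

    def find_all(self, name):
        return [t for t in self.tags if t == name]  # always builds a list

    def find(self, name):
        return next((t for t in self.tags if t == name), None)  # stops early


soup = Soup(["div", "span"])
print(soup.find("span"))   # prints: span
print(soup.find("table"))  # prints: None (no IndexError)
try:
    soup.find_all("table")[0]
except IndexError:
    print("find_all()[0] raises when nothing matches")
```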
babysor/MockingBird | pytorch | 229 | AttributeError: 'HParams' object has no attribute 'tts_schedule' | This error appeared during training today. What could be causing it?

| closed | 2021-11-23T05:55:38Z | 2021-11-23T10:57:25Z | https://github.com/babysor/MockingBird/issues/229 | [] | ffffreeyu | 1 |
LAION-AI/Open-Assistant | python | 3,663 | [Feature]: I want to add "please wait..." loading on continue with email button | I want to add a "please wait..." loading state on the "continue with email" button while it is disabled for a few seconds after the user opens the login page. | open | 2023-08-20T09:36:45Z | 2023-08-29T15:29:06Z | https://github.com/LAION-AI/Open-Assistant/issues/3663 | [] | taqui-786 | 1 |
PaddlePaddle/ERNIE | nlp | 337 | Could you provide a production demo for named entity recognition? | 1) A demo for exporting an inference_model for entity recognition
2) A demo for online prediction with entity recognition | closed | 2019-10-11T09:22:21Z | 2020-05-28T10:52:46Z | https://github.com/PaddlePaddle/ERNIE/issues/337 | [
"wontfix",
"feature request"
] | nx04 | 2 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 131 | How fast is the recognition? I need high-concurrency recognition of short utterances (3-5 seconds) | How fast is the recognition? I need high-concurrency recognition of short utterances (3-5 seconds) | closed | 2019-07-23T09:42:43Z | 2019-07-25T02:41:16Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/131 | [] | BadDeveloper2022 | 3 |