| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
dropbox/sqlalchemy-stubs | sqlalchemy | 240 | Assigning to a Union of a nullable and non-nullable column fails | I'm running the latest (at the time of writing) `mypy` (0.950) and `sqlalchemy-stubs` (0.4) and hitting this issue:
```python
from sqlalchemy import Column, Integer
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class Dog(Base):
    __tablename__ = 'dogs'
    age = Column(Integer)

class Cat(Base):
    __tablename__ = 'cats'
    age = Column(Integer, nullable=False)
Animal = Dog | Cat
animal: Animal = Cat()
animal.age = 20 # Mypy error, should be fine!
```
The error I get is:
```
error: Incompatible types in assignment (expression has type "int", variable has type "Union[Column[Optional[int]], Column[int]]")
```
but I don't think there should be an error.
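For comparison, the equivalent pattern with plain class attributes type-checks fine, since `int` is assignable to both `Optional[int]` and `int`; the stand-in classes below are hypothetical, with no SQLAlchemy involved:

```python
from typing import Optional, Union

class PlainDog:
    age: Optional[int] = None  # analogue of the nullable column

class PlainCat:
    age: int = 0               # analogue of the non-nullable column

animal: Union[PlainDog, PlainCat] = PlainCat()
animal.age = 20  # mypy accepts this, unlike the Column-based version above
assert animal.age == 20
```

So the error seems specific to how the stubs type assignment through the `Column` descriptor, not to union-attribute assignment in general.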
My mypy config in my pyproject.toml is just:
```
[tool.mypy]
plugins = "sqlmypy"
```
| open | 2022-05-16T20:00:08Z | 2022-05-16T20:00:08Z | https://github.com/dropbox/sqlalchemy-stubs/issues/240 | [] | Garrett-R | 0 |
holoviz/panel | plotly | 7,343 | Directly export notebook app into interactive HTML |
#### Is your feature request related to a problem? Please describe.
I have many situations where I need to export a notebook app to HTML rendered by Panel for sharing purposes. I understand there is a `.save` method to export to HTML, but it requires me to explicitly concatenate and arrange the panel objects before exporting, which is not very handy. Jupyter nbconvert can export to HTML, but the result is not interactive like Panel.
#### Describe the solution you'd like
It would be great if we could have a command line to export a whole notebook to HTML in the default layout or the `.servable()` layout, e.g.:
```
panel export notebook.ipynb --servable
```
#### Describe alternatives you've considered
I have to explicitly concatenate and arrange the panel objects before exporting.
| open | 2024-09-29T09:51:01Z | 2025-02-20T15:04:53Z | https://github.com/holoviz/panel/issues/7343 | [
"type: feature"
] | YongcaiHuang | 5 |
deezer/spleeter | deep-learning | 514 | Deleted pretrained_models folder and now it tracebacks when redownloading | I was cleaning up my home directory and deleted the pretrained_models folder a while ago. Now when I run Spleeter I get the traceback below. I'm not sure if this is user error or not. I solved it by manually downloading and extracting the models, so this isn't a blocker or anything.
```
C:\Users\Simon Jaeger>c:\python37\python -m spleeter separate -o spleeter -p spleeter:2stems -i test.flac
INFO:spleeter:Downloading model archive https://github.com/deezer/spleeter/releases/download/v1.4.0/2stems.tar.gz
Traceback (most recent call last):
  File "c:\python37\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\python37\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "c:\python37\lib\site-packages\spleeter\__main__.py", line 58, in <module>
    entrypoint()
  File "c:\python37\lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
    main(sys.argv)
  File "c:\python37\lib\site-packages\spleeter\__main__.py", line 46, in main
    entrypoint(arguments, params)
  File "c:\python37\lib\site-packages\spleeter\commands\separate.py", line 45, in entrypoint
    synchronous=False
  File "c:\python37\lib\site-packages\spleeter\separator.py", line 310, in separate_to_file
    sources = self.separate(waveform, audio_descriptor)
  File "c:\python37\lib\site-packages\spleeter\separator.py", line 271, in separate
    return self._separate_librosa(waveform, audio_descriptor)
  File "c:\python37\lib\site-packages\spleeter\separator.py", line 247, in _separate_librosa
    sess = self._get_session()
  File "c:\python37\lib\site-packages\spleeter\separator.py", line 228, in _get_session
    get_default_model_dir(self._params['model_dir']))
  File "c:\python37\lib\site-packages\spleeter\utils\estimator.py", line 25, in get_default_model_dir
    return model_provider.get(model_dir)
  File "c:\python37\lib\site-packages\spleeter\model\provider\__init__.py", line 67, in get
    model_directory)
  File "c:\python37\lib\site-packages\spleeter\model\provider\github.py", line 97, in download
    with requests.get(url, stream=True) as response:
AttributeError: __enter__
```
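For what it's worth, the failing line uses `requests.get(...)` as a context manager, which older requests releases (roughly pre-2.18, if I recall correctly) don't support on `Response`; upgrading requests, or avoiding the `with` form, sidesteps it. A minimal stand-in class reproduces the same failure mode:

```python
class OldResponse:
    """Mimics a requests.Response from a version without context-manager support."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

resp = OldResponse()
try:
    with resp:  # no __enter__/__exit__ defined, so this raises, like the traceback above
        pass
except (AttributeError, TypeError):  # AttributeError on Python <= 3.10, TypeError on 3.11+
    failed_like_traceback = True

# Version-agnostic alternative: close explicitly instead of using `with`.
resp2 = OldResponse()
try:
    pass  # ... stream the download here ...
finally:
    resp2.close()
```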
| closed | 2020-11-08T07:19:21Z | 2020-11-20T14:20:07Z | https://github.com/deezer/spleeter/issues/514 | [] | Simon818 | 1 |
ageitgey/face_recognition | python | 776 | face_landmarks not accurate | Hi, I am building a toy robot head with a camera to imitate human facial expressions.
If I raise my eyebrow in front of the camera, the eyebrows in the face_landmarks output don't rise as much as mine do; in fact, the landmarks change only a little. It seems like the algorithm is predicting where the eyebrow should be, rather than detecting where the eyebrow really is.
Do you have any suggestions how I can achieve my goal?
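One idea I've been considering (sketch with made-up coordinates, using the same point-list shape that `face_landmarks` returns): rather than relying on absolute landmark accuracy, track the eyebrow-to-eye distance relative to a neutral baseline and amplify the small detected change before driving the robot:

```python
def brow_eye_gap(landmarks):
    """Mean vertical gap between the left eyebrow and left eye, in pixels."""
    brow = landmarks["left_eyebrow"]
    eye = landmarks["left_eye"]
    brow_y = sum(y for _, y in brow) / len(brow)
    eye_y = sum(y for _, y in eye) / len(eye)
    return eye_y - brow_y  # image y grows downward, so a bigger gap means a raised brow

# Made-up landmark points for illustration:
neutral = {"left_eyebrow": [(100, 140), (120, 138)], "left_eye": [(105, 160), (118, 160)]}
raised  = {"left_eyebrow": [(100, 128), (120, 126)], "left_eye": [(105, 160), (118, 160)]}

gain = 3.0  # amplify the small detected change before sending it to the servo
delta = gain * (brow_eye_gap(raised) - brow_eye_gap(neutral))
assert delta > 0
```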
Thank you. | open | 2019-03-18T16:21:28Z | 2019-03-18T16:21:28Z | https://github.com/ageitgey/face_recognition/issues/776 | [] | hyansuper | 0 |
idealo/imagededup | computer-vision | 74 | Duplicates not found, even if the source and test images are the same | I took the [CIFAR 10 example code](https://idealo.github.io/imagededup/examples/CIFAR10_deduplication/)
But I get the following error, even though the source and test folders contain the same images, so `duplicates_test` should have two entries with matches; instead it comes back empty: `{'labels12-source.jpg': [], 'labels9-source.jpg': []}`
This is the error that I'm getting
```
plot_duplicates(image_dir=image_dir, duplicate_map=duplicates_test, filename=list(duplicates_test.keys())[0])
  File "D:\ProgramData\Anaconda3\lib\site-packages\imagededup\utils\plotter.py", line 123, in plot_duplicates
    assert len(retrieved) != 0, 'Provided filename has no duplicates!'
AssertionError: Provided filename has no duplicates!
```
| closed | 2019-11-15T15:01:52Z | 2019-11-27T15:23:49Z | https://github.com/idealo/imagededup/issues/74 | [] | zubairahmed-ai | 3 |
deepinsight/insightface | pytorch | 1,963 | Can't train | I encountered the following problem when retraining on the data; how should I solve it?

| open | 2022-04-05T12:36:31Z | 2022-04-06T01:41:48Z | https://github.com/deepinsight/insightface/issues/1963 | [] | NewtOliver | 2 |
dbfixtures/pytest-postgresql | pytest | 1,087 | Remove ability to pre-populate database on a client fixture level | closed | 2025-02-12T17:38:58Z | 2025-02-15T11:14:22Z | https://github.com/dbfixtures/pytest-postgresql/issues/1087 | [] | fizyk | 0 | |
widgetti/solara | fastapi | 497 | Fullscreen scrolling example is broken | The [fullscreen scrolling example](https://solara.dev/examples/fullscreen/scrolling) is currently broken:
<img width="261" alt="image" src="https://github.com/widgetti/solara/assets/37669773/d081f32c-7624-43ec-a31e-1e4a722b36a2">
| open | 2024-02-07T13:05:50Z | 2024-02-07T14:46:09Z | https://github.com/widgetti/solara/issues/497 | [] | langestefan | 3 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,487 | [Bug]: No checkpoints found. Can't run without a checkpoint. | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I get the same error as [this old bug report][1], but the proposed [solution][2] does not work - it references the [installation instructions][3], specifically downloading the model, but I think that's out of date as I don't see that now. Where exactly do I get the model from, and where do I place it?
I'm new to all this, sorry if I'm doing something stupid but I've tried to get this working for a while on a couple of different machines. I'm on Fedora 40 if it makes any difference.
[1]: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5134
[2]: https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5134#issuecomment-1328347775
[3]: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs
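From the error text, I assume any `.ckpt`/`.safetensors` file placed in the `models/Stable-diffusion` folder would satisfy the search, e.g. (paths and filename hypothetical):

```shell
# Create the directory the webui scans, if it doesn't exist yet:
mkdir -p stable-diffusion-webui/models/Stable-diffusion
# Then move a downloaded checkpoint into it, e.g.:
# mv ~/Downloads/some-model.safetensors stable-diffusion-webui/models/Stable-diffusion/
```

But I still don't know where the recommended model download lives now, which is the main question.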
### Steps to reproduce the problem
Install on a new install of Fedora 40:
```
pyenv install 3.11
pyenv global 3.11
wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
export COMMANDLINE_ARGS="--skip-torch-cuda-test" # I found I needed these as my GPU wouldn't get recognised
python_cmd=python3 bash -x webui.sh
```
### What should have happened?
Run and work correctly on a first-time install
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
Sorry, I don't see the prompted option in the web UI and if I run webui.sh with --dump-sysinfo I get an AttributeError from python
### Console logs
```Shell
loading stable diffusion model: FileNotFoundError
Traceback (most recent call last):
  File "/home/john/.pyenv/versions/3.11.10/lib/python3.11/threading.py", line 1002, in _bootstrap
    self._bootstrap_inner()
  File "/home/john/.pyenv/versions/3.11.10/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
    self.run()
  File "/home/john/src/stable-diffusion/download/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/home/john/src/stable-diffusion/download/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "/home/john/src/stable-diffusion/download/stable-diffusion-webui/modules/ui.py", line 1165, in <lambda>
    update_image_cfg_scale_visibility = lambda: gr.update(visible=shared.sd_model and shared.sd_model.cond_stage_key == "edit")
  File "/home/john/src/stable-diffusion/download/stable-diffusion-webui/modules/shared_items.py", line 175, in sd_model
    return modules.sd_models.model_data.get_sd_model()
  File "/home/john/src/stable-diffusion/download/stable-diffusion-webui/modules/sd_models.py", line 693, in get_sd_model
    load_model()
  File "/home/john/src/stable-diffusion/download/stable-diffusion-webui/modules/sd_models.py", line 788, in load_model
    checkpoint_info = checkpoint_info or select_checkpoint()
                                         ^^^^^^^^^^^^^^^^^^^
  File "/home/john/src/stable-diffusion/download/stable-diffusion-webui/modules/sd_models.py", line 234, in select_checkpoint
    raise FileNotFoundError(error_message)
FileNotFoundError: No checkpoints found. When searching for checkpoints, looked at:
- file /home/john/src/stable-diffusion/download/stable-diffusion-webui/model.ckpt
- directory /home/john/src/stable-diffusion/download/stable-diffusion-webui/models/Stable-diffusion
Can't run without a checkpoint. Find and place a .ckpt or .safetensors file into any of those locations.
```
### Additional information
_No response_ | closed | 2024-09-14T18:27:39Z | 2024-09-15T05:25:50Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16487 | [
"bug-report"
] | johngavingraham | 3 |
bendichter/brokenaxes | matplotlib | 85 | How to use a different scale in one part of the axis | I want to use a different scale in one part of the axis.

I tried to change the limits and tick count in the first part of the y-axis, but it did not work.
```
ax = bax.axs[1]
start, end = ax.get_ylim()
ax.yaxis.set_ticks(np.arange(start, end, 1000))
```
I want to expand the scale of the first part of the y-axis. Any help would be appreciated. Thanks in advance!
| closed | 2022-04-30T15:29:32Z | 2022-04-30T20:30:28Z | https://github.com/bendichter/brokenaxes/issues/85 | [] | sammy17 | 1 |
LibreTranslate/LibreTranslate | api | 134 | Add a parameter in the API to translate html |
Maybe you could add a parameter (format: text or html) to the API to allow translating HTML?
This should be possible thanks to [argosopentech/translate-html](https://github.com/argosopentech/translate-html).
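Concretely, the request shape I have in mind (the `format` field is the proposed addition, not an existing parameter):

```python
import json

# Proposed /translate payload; "format" would default to "text".
payload = {
    "q": "<p>Bonjour le monde</p>",
    "source": "fr",
    "target": "en",
    "format": "html",  # new: translate text nodes, preserve the markup
}
body = json.dumps(payload).encode("utf-8")
# e.g. urllib.request.urlopen(Request(url, data=body,
#                                     headers={"Content-Type": "application/json"}))
```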
I can do a pull request if interested. | closed | 2021-09-09T07:01:39Z | 2021-09-11T20:03:54Z | https://github.com/LibreTranslate/LibreTranslate/issues/134 | [
"enhancement",
"good first issue"
] | dingedi | 4 |
hack4impact/flask-base | sqlalchemy | 160 | Documentation on http://hack4impact.github.io/flask-base outdated. Doesn't match with README | Hi,
It seems that part of the documentation on https://hack4impact.github.io/flask-base/ are outdated.
For example, the **setup section** of the documentation mentions
```
$ pip install -r requirements/common.txt
$ pip install -r requirements/dev.txt
```
But there is no **requirements** folder.
Whereas the setup section in the README mentions
```
pip install -r requirements.txt
```
I find it confusing to have two sources with different information. | closed | 2018-03-20T09:35:42Z | 2018-05-31T17:57:06Z | https://github.com/hack4impact/flask-base/issues/160 | [] | s-razaq | 0 |
ultralytics/ultralytics | pytorch | 19,743 | How to Train a Custom YOLO Pose Estimation Model for Object 6D Pose | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Dear Ultralytics Team,
I am currently working on 6D pose estimation for objects (specifically apples) and would like to train a custom YOLO pose estimation model using Ultralytics YOLO.
Could you please provide guidance on how to train a YOLO model for object pose estimation, similar to how YOLOv8-Pose is used for human keypoint detection? Specifically:
Dataset Preparation:
What kind of annotations are required for training an object pose estimation model?
1. Should I label keypoints on the object (e.g., specific points on an apple)?
2. How should the dataset be formatted (e.g., COCO-style keypoints, YOLO format)?
Model Training:
1. Which YOLO version is best suited for object pose estimation (YOLOv7-Pose, YOLOv8-Pose, or custom modifications)?
2. What training configurations should be used for pose estimation?
3. Can I modify YOLOv8-Pose to predict 3D keypoints instead of 2D keypoints?
Pose Conversion:
1. Once YOLO predicts keypoints, how can I use them to obtain the 6D object pose?
2. Would PnP (Perspective-n-Point) be a good approach to convert 2D keypoints into a 6D pose?
I would greatly appreciate any guidance, official documentation, or references that can help me train a YOLO-based object pose estimation model.
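For reference, my current understanding (please correct me if wrong) is that Ultralytics pose datasets use YOLO-format label files with keypoint triplets appended to each box, declared via `kpt_shape` in the dataset YAML. A sketch for a hypothetical 4-keypoint apple dataset:

```yaml
# data.yaml (paths, names, and keypoint count are hypothetical)
path: datasets/apples
train: images/train
val: images/val
kpt_shape: [4, 3]   # 4 keypoints per object, each stored as (x, y, visibility)
names:
  0: apple

# Each label .txt line would then look like (all values normalized 0-1):
# class x_center y_center width height kx1 ky1 v1 kx2 ky2 v2 kx3 ky3 v3 kx4 ky4 v4
```

From the predicted 2D keypoints, I believe `cv2.solvePnP` with a 3D model of the object's keypoints would then recover the 6D pose.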
### Additional
_No response_ | open | 2025-03-17T09:20:31Z | 2025-03-23T23:37:38Z | https://github.com/ultralytics/ultralytics/issues/19743 | [
"question",
"pose"
] | mike55688 | 5 |
AntonOsika/gpt-engineer | python | 835 | Azure OpenAI Integration is not working anymore | The last working version I think was around v0.0.6. v0.1.0 is not working anymore with the following effects:
## Expected Behavior
When using the `--azure` parameter, the Azure OpenAI endpoint should be used. (X.openai.azure.com)
## Current Behavior
Instead of the Azure OpenAI endpoint, the usual OpenAI endpoint is being used. (api.openai.com)
## Failure Information
```
gpt-engineer --azure https://X.openai.azure.com -v ./projects/example/ gpt-4-32k
DEBUG:openai:message='Request to OpenAI API' method=get path=https://api.openai.com/v1/models/gpt-4-32k
DEBUG:openai:api_version=None data=None message='Post details'
DEBUG:urllib3.util.retry:Converted retries value: 2 -> Retry(total=2, connect=None, read=None, redirect=None, status=None)
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): api.openai.com:443
DEBUG:urllib3.connectionpool:https://api.openai.com:443 "GET /v1/models/gpt-4-32k HTTP/1.1" 401 262
DEBUG:openai:message='OpenAI API response' path=https://api.openai.com/v1/models/gpt-4-32k processing_ms=3 request_id=XX response_code=401
INFO:openai:error_code=invalid_api_key error_message='Incorrect API key provided: XX. You can find your API key at https://platform.openai.com/account/api-keys.' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
```
### Steps to Reproduce
1. Use the command as seen above with the --azure parameter.
2. Use verbose output to verify that it's not going to the AzOAI Endpoint, but to the default OAI endpoint.
| closed | 2023-11-02T08:40:17Z | 2023-12-24T09:52:48Z | https://github.com/AntonOsika/gpt-engineer/issues/835 | [
"bug"
] | niklasfink | 8 |
ultralytics/yolov5 | machine-learning | 13,228 | RuntimeError: Caught RuntimeError in replica 0 on device 0 | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
When training with yolov5x6.pt, setting the image size to 640 works for training, but changing it to 1280 results in an error: RuntimeError: Caught RuntimeError in replica 0 on device 0
### Additional
_No response_ | closed | 2024-07-29T05:01:37Z | 2024-10-20T19:51:01Z | https://github.com/ultralytics/yolov5/issues/13228 | [
"question"
] | Bailin-He | 2 |
hyperspy/hyperspy | data-visualization | 3,223 | scipy.interp1d legacy | `interp1d` is a legacy function in SciPy that will be deprecated in the future: https://docs.scipy.org/doc/scipy/tutorial/interpolate/1D.html#legacy-interface-for-1-d-interpolation-interp1d
It is currently used 6 times in the codebase: (3 in hs, 1 in rsciio, 2 in eels): https://github.com/search?q=repo%3Ahyperspy%2Fhyperspy%20interp1d&type=code
In view of the legacy nature, it would make sense to use the HyperSpy 2.0 release to actually replace all uses of `interp1d`; since the kwargs might be slightly different, this is an API break.
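As a migration sketch for the common linear case (assuming NumPy and SciPy are available), `make_interp_spline` with `k=1` reproduces `interp1d`'s default linear behavior:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 2.0, 4.0])

# Legacy: f = scipy.interpolate.interp1d(x, y)   # default kind="linear"
# Replacement: a degree-1 B-spline through the same points.
f = make_interp_spline(x, y, k=1)
assert abs(float(f(1.5)) - 3.0) < 1e-12
```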
In #3214 `scipy.interpolate.make_interp_spline` is used instead, which has similar behavior to `interp1d` and is probably suitable for most other occurrences. | closed | 2023-09-01T23:11:37Z | 2023-09-28T07:31:15Z | https://github.com/hyperspy/hyperspy/issues/3223 | [] | jlaehne | 1 |
tensorlayer/TensorLayer | tensorflow | 615 | Failed: TensorLayer (a17229d4) | *Sent by Read the Docs (readthedocs@readthedocs.org). Created by [fire](https://fire.fundersclub.com/).*
---
TensorLayer build #7203498 
Build Failed for TensorLayer (1.3.2)
You can find out more about this failure here:
[TensorLayer build #7203498](https://readthedocs.org/projects/tensorlayer/builds/7203498/) \- failed
If you have questions, a good place to start is the FAQ:
<https://docs.readthedocs.io/en/latest/faq.html>
You can unsubscribe from these emails in your [Notification Settings](https://readthedocs.org/dashboard/tensorlayer/notifications/)
Keep documenting,
Read the Docs
<https://readthedocs.org>
| closed | 2018-05-17T07:58:11Z | 2018-05-17T08:00:30Z | https://github.com/tensorlayer/TensorLayer/issues/615 | [] | fire-bot | 0 |
serengil/deepface | machine-learning | 1,316 | [BUG]: broken weight files | ### Before You Report a Bug, Please Confirm You Have Done The Following...
- [X] I have updated to the latest version of the packages.
- [X] I have searched for both [existing issues](https://github.com/serengil/deepface/issues) and [closed issues](https://github.com/serengil/deepface/issues?q=is%3Aissue+is%3Aclosed) and found none that matched my issue.
### DeepFace's version
v0.0.93
### Python version
3.9
### Operating System
Debian
### Dependencies
-
### Reproducible example
```Python
-
```
### Relevant Log Output
_No response_
### Expected Result
_No response_
### What happened instead?
_No response_
### Additional Info
As mentioned in [this issue](https://github.com/serengil/deepface/issues/1315), the weight file is sometimes corrupted during download, and the load_weights command then throws an exception. We should raise a meaningful message if load_weights fails, saying something like "the weights file seems broken; try deleting it, downloading it again from this URL, and copying it to that folder".
- We have many load_weights lines. We may consider wrapping them in a common function.
- We may also consider comparing the hash of the target file, but this comes with a cost. | closed | 2024-08-21T12:59:13Z | 2024-08-31T15:56:10Z | https://github.com/serengil/deepface/issues/1316 | [
"bug"
] | serengil | 0 |
proplot-dev/proplot | data-visualization | 86 | You also depend on pyyaml | https://github.com/lukelbd/proplot/blob/168df5109cc87e1f308711b2657f6126b82a19ff/proplot/rctools.py#L13 | closed | 2019-12-15T14:54:12Z | 2019-12-16T03:42:06Z | https://github.com/proplot-dev/proplot/issues/86 | [
"distribution"
] | hmaarrfk | 1 |
great-expectations/great_expectations | data-science | 10,607 | Improve OpenSSF Scorecard Report - remove critical issue by changing pr-title-checker.yml CI workflow | **Describe the bug**
Noticed when viewing the OpenSSF scorecard for the Great Expectations library at:
https://scorecard.dev/viewer/?uri=github.com/great-expectations/great_expectations
There is a critical *Dangerous-Workflow* pattern detected - the error is as follows:
> Warn: script injection with untrusted input ' github.event.pull_request.title ': .github/workflows/pr-title-checker.yml:17
The workflow checks passes the pull_request.title value to a script command, which introduces a possible script injection issue.
**To Reproduce**
The issue is in the github workflow in this file:
- https://github.com/great-expectations/great_expectations/blob/develop/.github/workflows/pr-title-checker.yml
**Expected behavior**
To avoid the OpenSSF warning about script injection in the pull_request.title value, we could use GitHub Actions' built-in if condition to validate the PR title. This approach avoids using shell commands that could be vulnerable to injection attacks. Here's a revised version of the workflow:
```yaml
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - name: Check PR title validity
        if: "!contains(github.event.pull_request.title, '[FEATURE]') && !contains(github.event.pull_request.title, '[BUGFIX]') && !contains(github.event.pull_request.title, '[DOCS]') && !contains(github.event.pull_request.title, '[MAINTENANCE]') && !contains(github.event.pull_request.title, '[CONTRIB]') && !contains(github.event.pull_request.title, '[RELEASE]')"
        run: |
          echo "Invalid PR title - please prefix with one of: [FEATURE] | [BUGFIX] | [DOCS] | [MAINTENANCE] | [CONTRIB] | [RELEASE]"
          exit 1
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
This version uses the `if` condition to check if the PR title contains any of the accepted prefixes. If none of the prefixes are found, it outputs an error message and exits with a status of 1.
This approach mitigates the risk of script injection by avoiding the use of shell commands to process the PR title.
**Environment (please complete the following information):**
- Operating System: any
- Great Expectations Version: 1.2.0
- Data Source: n/a
- Cloud environment: n/a
**Additional context**
- https://github.com/ossf/scorecard/blob/367426ed5d9cc62f4944dc4a2174f3bbb5e22169/docs/checks.md#dangerous-workflow
- https://docs.github.com/en/actions/security-for-github-actions/security-guides/security-hardening-for-github-actions#understanding-the-risk-of-script-injections
| closed | 2024-10-31T15:04:40Z | 2024-11-06T21:54:32Z | https://github.com/great-expectations/great_expectations/issues/10607 | [] | nils-woxholt | 0 |
ResidentMario/missingno | pandas | 81 | Problem exporting to PDF | When using msno.matrix() and trying to export to pdf using matplotlib:
`fig.savefig('fig1.pdf', format='pdf', bbox_inches='tight')`
I get a PDF file with an empty plot. All font components appear normally (ticks, labels, titles, etc.), but the plot itself is blank, as can be seen in the attached figure.

| closed | 2019-01-09T13:29:39Z | 2019-03-17T04:25:14Z | https://github.com/ResidentMario/missingno/issues/81 | [] | aguinaldoabbj | 3 |
ipyflow/ipyflow | jupyter | 27 | better logo for JupyterLab startup kernel | Maybe the Python logo where the snake is wearing a helmet, or something like that. | closed | 2020-05-12T18:45:44Z | 2020-05-13T22:31:55Z | https://github.com/ipyflow/ipyflow/issues/27 | [] | smacke | 0 |
pydantic/FastUI | pydantic | 154 | Question: Is there a way to redirect the response to an endpoint or URL that's not part of the FastUI endpoints? | I tried using RedirectResponse from starlette.responses, like `return RedirectResponse('/logout')` or `return RedirectResponse('logout.html')`. I also tried `return [c.FireEvent(event=GoToEvent(url='/logout'))]`, but it always gives me this error: "Request Error Response not valid JSON". It seems the URL is always captured by FastUI's /api endpoint. I'd really like to have a way to redirect the page to an external link.
thanks!
| closed | 2024-01-16T10:43:50Z | 2024-02-18T10:50:42Z | https://github.com/pydantic/FastUI/issues/154 | [] | fmrib00 | 2 |
plotly/dash-table | plotly | 430 | [dash-table] Display problem for editable (dropdown) datatable with row_selectable | Hey :slightly_smiling_face:
This is my first post here…
I have a problem (see image): a shift between the rows of my datatable and the checkboxes.
This problem appeared when I added the ability to modify the data with a dropdown on one column.
If I set `editable=False`, the problem disappears.
Do you have an idea of what could cause this?
Thank you ! :slightly_smiling_face:
### Package version ###
3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0]
Dash : 0.42.0
Dash_table : 3.6.0
dash_core_components : 0.47.0
dash_renderer : 0.23.0
Google chrome :
Version 73.0.3683.103 (Build officiel) (64 bits)

Source code :
dt.DataTable(
id='TAB1_datatable',
columns=[{}],
data=[{}],
pagination_mode=False,
sorting=True,
sorting_type="multi",
style_table={
'maxHeight': '400',
'minWidth': '100%',
'border': 'thin lightgrey solid',
},
style_header={
'backgroundColor': '#a6ebff',
'fontWeight': 'bold',
'textAlign': 'left'
},
style_cell={
'textAlign': 'left',
'minWidth': '100px',
'maxWidth': '1000px',
},
style_cell_conditional=[{
'if': {'row_index': 'odd'},
'backgroundColor': 'rgb(248, 248, 248)'
},
],
style_data_conditional=[{}],
row_selectable=True,
selected_rows=[],
n_fixed_rows=[1],
n_fixed_columns=[1],
editable=True,
column_static_dropdown=[
{
'id': 'Global_Status',
'dropdown': [
{'label': i, 'value': i} for i in ['OK', 'NOT OK']
]
},
]
),
style_data_conditional, columns and data are created with callbacks | open | 2019-05-13T15:47:34Z | 2019-05-13T15:50:22Z | https://github.com/plotly/dash-table/issues/430 | [] | cedricperrotey | 0 |
modelscope/data-juicer | data-visualization | 603 | FT-Data Ranker LLM fine-tuning data competition: could the competition data be shared for use with the Data-Juicer project? | Dear Data-Juicer framework developers, hello. We recently have a need to process data for large language models. Through the paper "Data-Juicer: A One-Stop Data Processing System for Large Language Models" we learned about the Data-Juicer open-source LLM data processing framework, and we would like to use and explore it further. We also noticed the "FT-Data Ranker LLM fine-tuning data competition (7B model track)" that you published on Tianchi, but the competition has ended and the original data can no longer be obtained. Could you provide the original data so that we can explore and use the Data-Juicer framework? Many thanks 🙏. | open | 2025-03-03T09:45:30Z | 2025-03-04T06:27:35Z | https://github.com/modelscope/data-juicer/issues/603 | [
"question"
] | user2311717757 | 1 |
vitalik/django-ninja | django | 527 | Call the result of one URL from another URL | In order to get results without an intermediate classic HTTP call, I wanted to call the result of one URL from another, like so (pseudocode):
```python
@api.post("/test-dependent")
def composite_result(request):
    result = {"icons": media_icon(request), "other": "whatever"}
    return result

@api.get("/media-icon", response=List[MediaIconSchema])
def media_icon(request):
    objs = MediaIcon.objects.all()
    return list(objs)
```
How could I get the result "icons", parsed into the MediaIconSchema structure, from another function? Is there a way to call the result of a function that has been wired up in the django-ninja fashion, in order to avoid code repetition?
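In plain Python terms (framework details stripped out, names hypothetical), the pattern I'm hoping django-ninja supports is essentially this:

```python
def icon_list():
    """Shared plain-Python logic both endpoints could delegate to."""
    return [{"name": "star"}, {"name": "heart"}]

def media_icon_view():        # would back GET /media-icon
    return icon_list()

def composite_view():         # would back POST /test-dependent
    return {"icons": icon_list(), "other": "whatever"}

assert composite_view()["icons"] == media_icon_view()
```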
Thanks for the info,
| open | 2022-08-12T15:09:57Z | 2022-08-14T08:24:54Z | https://github.com/vitalik/django-ninja/issues/527 | [] | martinlombana | 1 |
Farama-Foundation/PettingZoo | api | 710 | Error running tutorial: 'ProcConcatVec' object has no attribute 'pipes' | I'm running into an error with this long stack trace when I try to run the 13 line tutorial:
```
/Users/erick/.local/share/virtualenvs/rl-0i49mzF7/lib/python3.9/site-packages/torch/utils/tensorboard/__init__.py:4: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
if not hasattr(tensorboard, '__version__') or LooseVersion(tensorboard.__version__) < LooseVersion('1.15'):
/Users/erick/.local/share/virtualenvs/rl-0i49mzF7/lib/python3.9/site-packages/torch/utils/tensorboard/__init__.py:4: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
if not hasattr(tensorboard, '__version__') or LooseVersion(tensorboard.__version__) < LooseVersion('1.15'):
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/spawn.py", line 125, in _main
    prepare(preparation_data)
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 268, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/erick/dev/rl/main_pettingzoo.py", line 22, in <module>
    env = ss.concat_vec_envs_v1(env, 8, num_cpus=4, base_class="stable_baselines3")
  File "/Users/erick/.local/share/virtualenvs/rl-0i49mzF7/lib/python3.9/site-packages/supersuit/vector/vector_constructors.py", line 60, in concat_vec_envs_v1
    vec_env = MakeCPUAsyncConstructor(num_cpus)(*vec_env_args(vec_env, num_vec_envs))
  File "/Users/erick/.local/share/virtualenvs/rl-0i49mzF7/lib/python3.9/site-packages/supersuit/vector/constructors.py", line 38, in constructor
    return ProcConcatVec(
  File "/Users/erick/.local/share/virtualenvs/rl-0i49mzF7/lib/python3.9/site-packages/supersuit/vector/multiproc_vec.py", line 144, in __init__
    proc.start()
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 42, in _launch
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "/usr/local/Cellar/python@3.9/3.9.12/Frameworks/Python.framework/Versions/3.9/lib/python3.9/multiprocessing/spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
Exception ignored in: <function ProcConcatVec.__del__ at 0x112cf1310>
Traceback (most recent call last):
  File "/Users/erick/.local/share/virtualenvs/rl-0i49mzF7/lib/python3.9/site-packages/supersuit/vector/multiproc_vec.py", line 210, in __del__
    for pipe in self.pipes:
AttributeError: 'ProcConcatVec' object has no attribute 'pipes'
```
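For reference, the guard the RuntimeError asks for looks like the sketch below (the function here is a placeholder, not the tutorial's actual code):

```python
import multiprocessing


def make_envs():
    # placeholder for code that spawns worker processes,
    # e.g. ss.concat_vec_envs_v1(env, 8, num_cpus=4, ...)
    return "envs"


if __name__ == "__main__":
    # With the "spawn" start method (the default on macOS and Windows),
    # child processes re-import this module, so anything that starts
    # subprocesses must run under this guard to avoid recursive spawning.
    multiprocessing.freeze_support()
    envs = make_envs()
    print(envs)
```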
Maybe the issue here is some mismatch in library versioning, but I found no reference to which `supersuit` version is supposed to run with the tutorial (or with the rest of the code).
I am running python 3.9 with `supersuit` 3.4 and `pettingzoo` 1.18.1 | closed | 2022-05-29T13:17:28Z | 2022-08-14T18:22:20Z | https://github.com/Farama-Foundation/PettingZoo/issues/710 | [] | erickrf | 16 |
amidaware/tacticalrmm | django | 1,950 | Add info to Automation policy manager output summary | Add counts of each type of item and total and increase title to more chars
<img width="801" alt="2024-07-31_031728 - automation output" src="https://github.com/user-attachments/assets/a63695c5-1776-43d6-95e5-787e0ef3b434">
| open | 2024-07-31T07:20:29Z | 2024-11-04T15:15:09Z | https://github.com/amidaware/tacticalrmm/issues/1950 | [
"enhancement"
] | silversword411 | 1 |
scrapy/scrapy | web-scraping | 6,478 | Get rid of `testfixtures` | We only use `testfixtures.LogCapture` and I expect it should be easy, [though not trivial](https://docs.pytest.org/en/stable/how-to/unittest.html), to replace it with the pytest `caplog` fixture, reducing our test dependencies.
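For comparison, here is roughly what a `LogCapture`-style check looks like with the stdlib's `assertLogs`; a pytest `caplog` version would be structurally similar, with the fixture supplying the captured records. This is an illustrative sketch, not Scrapy's actual test code:

```python
import logging
import unittest


class LogCaptureReplacement(unittest.TestCase):
    # roughly equivalent to: with LogCapture() as log: ...; log.check(...)
    def test_warning_is_logged(self):
        with self.assertLogs("scrapy", level="WARNING") as captured:
            logging.getLogger("scrapy").warning("dropped item")
        self.assertIn("dropped item", captured.output[0])


if __name__ == "__main__":
    unittest.main(exit=False)
```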
Alternatively we can try moving to [`twisted.logger.capturedLogs`](https://docs.twisted.org/en/stable/core/howto/logger.html#capturing-log-events-for-testing) but it was only added in Twisted 19.7.0. | open | 2024-09-20T16:52:42Z | 2025-03-13T17:47:19Z | https://github.com/scrapy/scrapy/issues/6478 | [
"enhancement",
"CI"
] | wRAR | 8 |
AirtestProject/Airtest | automation | 376 | airtest reports an error when running a script | **Describe the BUG**
When running a script, adb.py raises an AdbError.
Log:
```
adb server version (40) doesn't match this client (39); killing...
[pocoservice.apk] stdout: b'\r\ncom.netease.open.pocoservice.InstrumentedTestAsLauncher:'
[pocoservice.apk] stderr: b''
[pocoservice.apk] retrying instrumentation PocoService
[05:00:12][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell am force-stop com.netease.open.pocoservice ; echo ---$?---
2019-04-26 17:00:12 adb.py[line:142] DEBUG /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell am force-stop com.netease.open.pocoservice ; echo ---$?---
Exception in thread Thread-2:
Traceback (most recent call last):
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 917, in _bootstrap_inner
self.run()
File "/usr/local/Cellar/python/3.7.3/Frameworks/Python.framework/Versions/3.7/lib/python3.7/threading.py", line 865, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.7/site-packages/poco/drivers/android/uiautomation.py", line 211, in loop
self._start_instrument(port_to_ping) # 尝试重启 (try to restart)
File "/usr/local/lib/python3.7/site-packages/poco/drivers/android/uiautomation.py", line 235, in _start_instrument
self.adb_client.shell(['am', 'force-stop', PocoServicePackage])
File "/usr/local/lib/python3.7/site-packages/airtest/core/android/adb.py", line 354, in shell
out = self.raw_shell(cmd).rstrip()
File "/usr/local/lib/python3.7/site-packages/airtest/core/android/adb.py", line 326, in raw_shell
out = self.cmd(cmds, ensure_unicode=False)
File "/usr/local/lib/python3.7/site-packages/airtest/core/android/adb.py", line 187, in cmd
raise AdbError(stdout, stderr)
airtest.core.error.AdbError: stdout[b''] stderr[b"* daemon not running; starting now at tcp:5037\nADB server didn't ACK\nFull server startup log: /var/folders/nx/8qylv0hs66v2zhbm22505kkr0000gn/T//adb.501.log\nServer had pid: 33767\n--- adb starting (pid 33767) ---\nadb I 04-26 17:00:12 33767 453132 main.cpp:56] Android Debug Bridge version 1.0.40\nadb I 04-26 17:00:12 33767 453132 main.cpp:56] Version 4986621\nadb I 04-26 17:00:12 33767 453132 main.cpp:56] Installed as /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb\nadb I 04-26 17:00:12 33767 453132 main.cpp:56] \nadb E 04-26 17:00:12 33767 453138 usb_osx.cpp:340] Could not open interface: e00002c5\nadb E 04-26 17:00:12 33767 453138 usb_osx.cpp:301] Could not find device interface\nadb I 04-26 17:00:12 33765 453135 usb_osx.cpp:308] reported max packet size for a83a6617d030 is 512\nadb I 04-26 17:00:12 33765 453128 adb_auth_host.cpp:416] adb_auth_init...\nadb I 04-26 17:00:12 33765 453128 adb_auth_host.cpp:174] read_key_file '/Users/anonymous/.android/adbkey'...\nadb I 04-26 17:00:12 33765 453128 adb_auth_host.cpp:467] Calling send_auth_response\nerror: could not install *smartsocket* listener: Address already in use\n\n* failed to start daemon\nerror: cannot connect to daemon\n"]
* daemon started successfully
[05:01:04][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 uninstall com.chainsguard.safebox
2019-04-26 17:01:04 adb.py[line:142] DEBUG /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 uninstall com.chainsguard.safebox
[05:01:08][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 install ./apk/com.chainsguard.safebox.apk
2019-04-26 17:01:08 adb.py[line:142] DEBUG /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 install ./apk/com.chainsguard.safebox.apk
[05:01:10][INFO]<airtest.core.api> Try finding:
Template(./air/_chainsguard.air/tpl1546589231057.png)
2019-04-26 17:01:10 cv.py[line:39] INFO Try finding:
Template(./air/_chainsguard.air/tpl1546589231057.png)
[05:01:10][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell pm path jp.co.cyberagent.stf.rotationwatcher ; echo ---$?---
2019-04-26 17:01:10 adb.py[line:142] DEBUG /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell pm path jp.co.cyberagent.stf.rotationwatcher ; echo ---$?---
[05:01:11][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell export CLASSPATH=/data/app/jp.co.cyberagent.stf.rotationwatcher-1/base.apk;exec app_process /system/bin jp.co.cyberagent.stf.rotationwatcher.RotationWatcher
2019-04-26 17:01:11 adb.py[line:142] DEBUG /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell export CLASSPATH=/data/app/jp.co.cyberagent.stf.rotationwatcher-1/base.apk;exec app_process /system/bin jp.co.cyberagent.stf.rotationwatcher.RotationWatcher
[05:01:12][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell ls /data/local/tmp/minicap ; echo ---$?---
2019-04-26 17:01:12 adb.py[line:142] DEBUG /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell ls /data/local/tmp/minicap ; echo ---$?---
[05:01:12][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell ls /data/local/tmp/minicap.so ; echo ---$?---
2019-04-26 17:01:12 adb.py[line:142] DEBUG /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell ls /data/local/tmp/minicap.so ; echo ---$?---
[05:01:12][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -v 2>&1
2019-04-26 17:01:12 adb.py[line:142] DEBUG /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -v 2>&1
[05:01:12][DEBUG]<airtest.core.android.minicap> version:5
2019-04-26 17:01:12 minicap.py[line:72] DEBUG version:5
[05:01:12][DEBUG]<airtest.core.android.minicap> skip install minicap
2019-04-26 17:01:12 minicap.py[line:79] DEBUG skip install minicap
[05:01:12][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 forward --no-rebind tcp:15156 localabstract:minicap_15156
2019-04-26 17:01:12 adb.py[line:142] DEBUG /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 forward --no-rebind tcp:15156 localabstract:minicap_15156
[05:01:12][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i ; echo ---$?---
2019-04-26 17:01:12 adb.py[line:142] DEBUG /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i ; echo ---$?---
[05:01:12][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -n 'minicap_15156' -P 720x1280@720x1280/0 -l 2>&1
2019-04-26 17:01:12 adb.py[line:142] DEBUG /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -n 'minicap_15156' -P 720x1280@720x1280/0 -l 2>&1
[05:01:12][DEBUG]<airtest.utils.nbsp> [minicap_server]b'PID: 8428'
2019-04-26 17:01:12 nbsp.py[line:37] DEBUG [minicap_server]b'PID: 8428'
[05:01:12][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: Using projection 720x1280@720x1280/0'
2019-04-26 17:01:12 nbsp.py[line:37] DEBUG [minicap_server]b'INFO: Using projection 720x1280@720x1280/0'
[05:01:12][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:240) Creating SurfaceComposerClient'
2019-04-26 17:01:12 nbsp.py[line:37] DEBUG [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:240) Creating SurfaceComposerClient'
[05:01:12][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:243) Performing SurfaceComposerClient init check'
2019-04-26 17:01:12 nbsp.py[line:37] DEBUG [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:243) Performing SurfaceComposerClient init check'
[05:01:12][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:250) Creating virtual display'
2019-04-26 17:01:12 nbsp.py[line:37] DEBUG [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:250) Creating virtual display'
[05:01:12][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:256) Creating buffer queue'
2019-04-26 17:01:12 nbsp.py[line:37] DEBUG [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:256) Creating buffer queue'
[05:01:12][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:261) Creating CPU consumer'
2019-04-26 17:01:12 nbsp.py[line:37] DEBUG [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:261) Creating CPU consumer'
[05:01:12][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:265) Creating frame waiter'
2019-04-26 17:01:12 nbsp.py[line:37] DEBUG [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:265) Creating frame waiter'
[05:01:12][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:269) Publishing virtual display'
2019-04-26 17:01:12 nbsp.py[line:37] DEBUG [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:269) Publishing virtual display'
[05:01:12][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (jni/minicap/JpgEncoder.cpp:64) Allocating 2766852 bytes for JPG encoder'
2019-04-26 17:01:12 nbsp.py[line:37] DEBUG [minicap_server]b'INFO: (jni/minicap/JpgEncoder.cpp:64) Allocating 2766852 bytes for JPG encoder'
[05:01:12][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (/home/lxn3032/minicap_for_ide/jni/minicap/minicap.cpp:473) Server start'
2019-04-26 17:01:12 nbsp.py[line:37] DEBUG [minicap_server]b'INFO: (/home/lxn3032/minicap_for_ide/jni/minicap/minicap.cpp:473) Server start'
[05:01:12][DEBUG]<airtest.core.android.minicap> (1, 24, 8428, 720, 1280, 720, 1280, 0, 2)
2019-04-26 17:01:12 minicap.py[line:239] DEBUG (1, 24, 8428, 720, 1280, 720, 1280, 0, 2)
[05:01:12][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (/home/lxn3032/minicap_for_ide/jni/minicap/minicap.cpp:475) New client connection'
2019-04-26 17:01:12 nbsp.py[line:37] DEBUG [minicap_server]b'INFO: (/home/lxn3032/minicap_for_ide/jni/minicap/minicap.cpp:475) New client connection'
[05:01:13][DEBUG]<airtest.core.android.adb> /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i ; echo ---$?---
2019-04-26 17:01:13 adb.py[line:142] DEBUG /usr/local/lib/python3.7/site-packages/airtest/core/android/static/adb/mac/adb -s a83a6617d030 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i ; echo ---$?---
[05:01:13][DEBUG]<airtest.core.api> resize: (219, 72)->(146, 48), resolution: (1080, 1920)=>(720, 1280)
2019-04-26 17:01:13 cv.py[line:216] DEBUG resize: (219, 72)->(146, 48), resolution: (1080, 1920)=>(720, 1280)
[05:01:13][DEBUG]<airtest.core.api> try match with _find_template
2019-04-26 17:01:13 cv.py[line:155] DEBUG try match with _find_template
```
**Python version:** `3.7.3`
**airtest version:** `1.0.25`
**Device**
- Model: Redmi 3S
- MIUI: 10 9.3.28 developer build
- OS: Android 6.0.1
**Other relevant environment info**
macOS 10.14.4 | closed | 2019-04-26T09:50:30Z | 2019-04-26T10:25:06Z | https://github.com/AirtestProject/Airtest/issues/376 | [] | i11m20n | 2 |
Kav-K/GPTDiscord | asyncio | 447 | [BUG] Taggable mentions overriding converse opener | **Describe the bug**
During a GPT converse with a custom opener set, using an @mention of the bot reverts back to the default
**To Reproduce**
Steps to reproduce the behaviour:
1. Start a /gpt converse with an opener or opener_file
2. Inside of the thread / conversation, send a message that begins with the taggable name (eg @bot I think today is Monday)
**Expected behaviour**
Inside of a converse thread with a custom opener or opener_file the bot should reply in its proper context whether it is tagged or not
| closed | 2023-12-11T07:11:23Z | 2023-12-31T10:05:17Z | https://github.com/Kav-K/GPTDiscord/issues/447 | [
"bug"
] | jeffe | 1 |
docarray/docarray | fastapi | 1,830 | Error on subindex Embedding type for Torch Tensor moving from GPU to CPU | ### Initial Checks
- [X] I have read and followed [the docs](https://docs.docarray.org/) and still think this is a bug
### Description
I have found an issue with the following sequence:
Object
SubIndiceObject with an attribute Optional[AnyEmbedding]
I am running an operation on GPU and storing the TorchEmbedding on that attribute
Then I am converting the value back into a NDArrayEmbedding on CPU and storing in the same attribute.
When indexing the data into weaviate, it still tracks that attribute as a TorchTensor on GPU and tries to convert to a NDArrayEmbedding.
In order to fix it, I ran the cpu() operation prior to saving that tensor on the attribute.
I don't know if it is a bug or by design but logging the scenario nonetheless.
### Example Code
_No response_
### Python, DocArray & OS Version
```Text
docarray version: 0.39.1
```
### Affected Components
- [X] [Vector Database / Index](https://docs.docarray.org/user_guide/storing/docindex/)
- [ ] [Representing](https://docs.docarray.org/user_guide/representing/first_step)
- [ ] [Sending](https://docs.docarray.org/user_guide/sending/first_step/)
- [ ] [storing](https://docs.docarray.org/user_guide/storing/first_step/)
- [ ] [multi modal data type](https://docs.docarray.org/data_types/first_steps/) | open | 2023-11-06T19:44:17Z | 2023-12-23T14:52:32Z | https://github.com/docarray/docarray/issues/1830 | [] | vincetrep | 3 |
streamlit/streamlit | machine-learning | 9,947 | More consistent handling of relative paths | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
We have two main ways relative paths are parsed:
- For Page and navigation commands, paths are relative to the entrypoint file
- For most other command (like media), paths are relative to the current working directory
Although this may be the same for many people, Community Cloud in particular used the root of the repository as the current working directory, which can necessitate some rearranging of files/path handling for complex apps. When it comes to Community Cloud's handling of Python dependencies, both the entrypoint-file directory and the root of the repository are searched, in that order. (This is not the case for packages.txt, nor for Streamlit's config.toml. The former must be handled in Cloud, but the latter may be handled in open source.)
It would be nice if all of Streamlit's paths prioritized "relative to the entrypoint file" then fell back to "relative to the current working directory." This could be implemented for `.streamlit/config.toml` (or anything in the `.streamlit` folder before falling back to the user's global settings). This could also be used for all local file paths (`st.image`, `st.video`, etc).
Related to #7731, #7578 (though more restrictive than those requests).
cc @tvst @sfc-gh-tteixeira
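A rough sketch of the "entrypoint first, then cwd" fallback described above (function and variable names here are hypothetical, purely illustrative):

```python
from pathlib import Path


def resolve_app_path(relative_path, entrypoint_dir, cwd):
    """Prefer a path relative to the entrypoint file's directory,
    falling back to the current working directory."""
    candidate = Path(entrypoint_dir) / relative_path
    if candidate.exists():
        return candidate
    return Path(cwd) / relative_path
```

With a rule like this, `st.image("logo.png")` in `app/streamlit_app.py` would find `app/logo.png` even when the server is started from the repository root.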
### Why?
Consistency.
### How?
_No response_
### Additional Context
_No response_ | open | 2024-11-29T09:14:30Z | 2024-11-29T09:16:40Z | https://github.com/streamlit/streamlit/issues/9947 | [
"type:enhancement"
] | sfc-gh-dmatthews | 1 |
nltk/nltk | nlp | 3,165 | A lot of NLTK data does not state its license | Dear NLTK.
I read "NLTK corpora are provided under the terms given in the README file for each corpus; all are redistributable and available for non-commercial use."
But many NLTK data packages do not state their license.
Please see below.
https://www.nltk.org/nltk_data/
This is very unfriendly for users. Please state the licenses of all NLTK corpora. | open | 2023-06-16T17:02:04Z | 2023-06-16T17:09:26Z | https://github.com/nltk/nltk/issues/3165 | [] | hiDevman | 0 |
timkpaine/lantern | plotly | 96 | Plotly probplot | closed | 2017-10-19T16:57:35Z | 2017-11-26T05:13:24Z | https://github.com/timkpaine/lantern/issues/96 | [
"feature",
"plotly/cufflinks"
] | timkpaine | 1 | |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,744 | "HTTP Error 404: Not Found" Showing when driver is initialised | For this simple code:
```
import undetected_chromedriver as uc
import time
if __name__ == '__main__':
    driver = uc.Chrome()
    time.sleep(20)
```
I get this error:
```
Traceback (most recent call last):
  File "/home/omkmorendha/Desktop/Work/JSX_scraping/exp.py", line 5, in <module>
    driver = uc.Chrome()
  File "/home/omkmorendha/.local/share/virtualenvs/JSX_scraping-34LRwH2V/lib/python3.10/site-packages/undetected_chromedriver/__init__.py", line 258, in __init__
    self.patcher.auto()
  File "/home/omkmorendha/.local/share/virtualenvs/JSX_scraping-34LRwH2V/lib/python3.10/site-packages/undetected_chromedriver/patcher.py", line 178, in auto
    self.unzip_package(self.fetch_package())
  File "/home/omkmorendha/.local/share/virtualenvs/JSX_scraping-34LRwH2V/lib/python3.10/site-packages/undetected_chromedriver/patcher.py", line 287, in fetch_package
    return urlretrieve(download_url)[0]
  File "/usr/lib/python3.10/urllib/request.py", line 241, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/usr/lib/python3.10/urllib/request.py", line 216, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib/python3.10/urllib/request.py", line 525, in open
    response = meth(req, response)
  File "/usr/lib/python3.10/urllib/request.py", line 634, in http_response
    response = self.parent.error(
  File "/usr/lib/python3.10/urllib/request.py", line 563, in error
    return self._call_chain(*args)
  File "/usr/lib/python3.10/urllib/request.py", line 496, in _call_chain
    result = func(*args)
  File "/usr/lib/python3.10/urllib/request.py", line 643, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
``` | closed | 2024-02-15T12:25:30Z | 2024-02-15T12:43:12Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1744 | [] | omkmorendha | 2 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 792 | Datasets_Root | In the instructions to run the preprocess. With the datasets how to I figure where the dataset root is. What is the command line needed to do so.
I'm using Windows 10 BTW.
It launches and works without any dataset_root listed, but I want to make it work better, and without the datasets I can't. | closed | 2021-07-08T17:46:29Z | 2021-08-25T09:42:20Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/792 | [] | C4l1b3r | 10 |
sebp/scikit-survival | scikit-learn | 159 | Terminal Node Constraint for Random Survival Forests | Is there any parameter/variable which states the minimum number of "uncensored samples" required to be at a leaf node? I think it's called "a minimum of d0 > 0 unique deaths" in the original paper. | open | 2020-12-29T07:31:02Z | 2021-01-27T09:21:45Z | https://github.com/sebp/scikit-survival/issues/159 | [
"enhancement"
] | mastervii | 4 |
Yorko/mlcourse.ai | matplotlib | 358 | Add athlete_events.csv to data folder | The athlete_events.csv file seems to be missing from the data folder. Sure, it can be downloaded via the [Kaggle link][0] but it's not immediately obvious.
[0]: https://www.kaggle.com/heesoo37/120-years-of-olympic-history-athletes-and-results/version/2 | closed | 2018-10-01T11:37:05Z | 2018-10-04T14:11:54Z | https://github.com/Yorko/mlcourse.ai/issues/358 | [
"invalid"
] | morcmarc | 1 |
0b01001001/spectree | pydantic | 21 | falcon endpoint function doesn't get the right `self` | This is due to the decorator.
```py
class Demo:
    def test(self): pass

    def on_get(self, req, resp):
        self.test()  # this will raise AttributeError
        pass
```
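For reference, a decorator written with `functools.wraps` that forwards `self` keeps the bound method intact and preserves `__name__` for name-based lookups. A minimal sketch of the pattern (not spectree's actual implementation):

```python
import functools


def validate(func):
    @functools.wraps(func)  # keeps __name__, so name-based lookups still work
    def wrapper(self, *args, **kwargs):
        # validation logic would run here; self is the real resource instance
        return func(self, *args, **kwargs)
    return wrapper


class Demo:
    def test(self):
        return "ok"

    @validate
    def on_get(self, req, resp):
        return self.test()  # no AttributeError: self is bound correctly


print(Demo().on_get(None, None))  # -> ok
```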
And the `parse_name` function for falcon endpoint function should return the class name instead of the method name. Since the method names are all the same. | closed | 2020-01-09T08:25:43Z | 2020-01-12T10:20:55Z | https://github.com/0b01001001/spectree/issues/21 | [] | kemingy | 2 |
benbusby/whoogle-search | flask | 580 | Ratelimited | Hello, I still don't know how to solve this problem, all was working ok until today that I received this message when trying to use Whoogle.
Thanks | closed | 2021-12-15T11:02:38Z | 2021-12-24T00:07:44Z | https://github.com/benbusby/whoogle-search/issues/580 | [
"question"
] | gaditano66 | 10 |
shibing624/text2vec | nlp | 29 | 'Word2VecKeyedVectors' object has no attribute 'key_to_index' | Hi, how to fix 'Word2VecKeyedVectors' object has no attribute 'key_to_index' after loading GoogleNews-vectors-negative300.bin file in w2v = KeyedVectors.load_word2vec_format(word2vec_path, binary=True)
| closed | 2021-09-04T16:12:36Z | 2021-09-04T16:14:37Z | https://github.com/shibing624/text2vec/issues/29 | [
"bug"
] | AdhyaSuman | 0 |
gee-community/geemap | jupyter | 1,445 | The code "cartoee_subplots" does not work | Dear,
When I run "cartoee_subplots.ipynb" I get the following graphic, with a projection problem for the image (srtm):

I think the problem is in the following line:
cartoee.add_layer(ax, srtm, region=region, vis_params=vis, cmap="terrain")
Code: https://github.com/giswqs/geemap/blob/master/examples/notebooks/cartoee_subplots.ipynb
Raul | closed | 2023-02-23T14:44:50Z | 2023-06-30T18:43:32Z | https://github.com/gee-community/geemap/issues/1445 | [
"bug"
] | raulpoppiel | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,222 | 'Namespace' object has no attribute 'n_epochs' | Hi, I was trying to run **train.py** for png sets.
The number of training images = 11036
`python3 train.py --dataroot datasets/AS2SE/ --name AS2SE_1 --model cycle_gan --phase train --gpu_ids 0,1,2 --batch_size 16 --preprocess none --load_size 170 --print_freq 1000 --save_epoch_freq 200 --input_nc 1 --output_nc 1`
(I set input_nc and output_nc to 1 because my png images are grayscale. Right?)
Then I got the error message below.
There is no n_epochs option in the **train_options.py** file.
What is n_epochs and how can I solve this issue?
```
Traceback (most recent call last):
File "train.py", line 34, in <module>
for epoch in range(opt.epoch_count, opt.n_epochs + opt.n_epochs_decay + 1): # outer loop for different epochs; we save the model by <epoch_count>, <epoch_count>+<save_latest_freq>
AttributeError: 'Namespace' object has no attribute 'n_epochs'
```
Thank you in advance.
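For what it's worth, `--n_epochs`/`--n_epochs_decay` replaced the older `--niter`/`--niter_decay` options in the repo at some point, so a `train.py` that is newer than your `options/` files will look up attributes that were never registered; updating the repo to a consistent revision is the clean fix. If you instead patch the old options file yourself, the additions would look roughly like this (defaults below are assumptions):

```python
import argparse

parser = argparse.ArgumentParser()
# the attribute names train.py expects; default values here are placeholders
parser.add_argument('--n_epochs', type=int, default=100,
                    help='number of epochs with the initial learning rate')
parser.add_argument('--n_epochs_decay', type=int, default=100,
                    help='number of epochs to linearly decay the learning rate to zero')

opt = parser.parse_args([])
print(opt.n_epochs + opt.n_epochs_decay)  # -> 200
```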
| open | 2021-01-11T13:05:36Z | 2021-01-12T01:40:50Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1222 | [] | lucid0921 | 1 |
pywinauto/pywinauto | automation | 906 | Announcement: I'm making a PyWinAutoUI (graphical) GUI with advanced features | My end goal is to do on-screen lessons with overlayed arrows as done on bubble.is (and their set of lessons), but since I don't want to always be coding to create a simple enough lesson, I thought I should give PyWinAuto a GUI frontend that detects / records clicks in .py script format.
Here's a screenshot:

I'm assuming I should post back here when it's ready.
| closed | 2020-04-01T21:55:36Z | 2020-04-28T05:46:10Z | https://github.com/pywinauto/pywinauto/issues/906 | [
"enhancement",
"New Feature",
"success_stories"
] | enjoysmath | 30 |
aleju/imgaug | machine-learning | 119 | Problem loading and showing images | I tried but failed. This is my code and error, thanks:
```python
import imgaug as ia
from imgaug import augmenters as iaa
import numpy as np
import cv2
import scipy
from scipy import misc

ia.seed(1)

# Example batch of images.
# The array has shape (32, 64, 64, 3) and dtype uint8.
#scipy.ndimage.imread('cat.jpg', flatten=False, mode="RGB")
images = cv2.imread('C:\python3.6.4.64bit\images\cat.jpg', 1)
images = np.array(
    [ia.quokka(size=(64, 64)) for _ in range(32)],
    dtype=np.uint8
)

seq = iaa.Sequential([
    iaa.Fliplr(0.5),  # horizontal flips
    iaa.Crop(percent=(0, 0.1)),  # random crops
    # Small gaussian blur with random sigma between 0 and 0.5.
    # But we only blur about 50% of all images.
    iaa.Sometimes(0.5,
        iaa.GaussianBlur(sigma=(0, 0.5))
    ),
    # Strengthen or weaken the contrast in each image.
    iaa.ContrastNormalization((0.75, 1.5)),
    # Add gaussian noise.
    # For 50% of all images, we sample the noise once per pixel.
    # For the other 50% of all images, we sample the noise per pixel AND
    # channel. This can change the color (not only brightness) of the
    # pixels.
    iaa.AdditiveGaussianNoise(loc=0, scale=(0.0, 0.05*255), per_channel=0.5),
    # Make some images brighter and some darker.
    # In 20% of all cases, we sample the multiplier once per channel,
    # which can end up changing the color of the images.
    iaa.Multiply((0.8, 1.2), per_channel=0.2),
    # Apply affine transformations to each image.
    # Scale/zoom them, translate/move them, rotate them and shear them.
    iaa.Affine(
        scale={"x": (0.8, 1.2), "y": (0.8, 1.2)},
        translate_percent={"x": (-0.2, 0.2), "y": (-0.2, 0.2)},
        rotate=(-25, 25),
        shear=(-8, 8)
    )
], random_order=True)  # apply augmenters in random order

images_aug = seq.augment_images(images)
cv2.imshow("Original", images_aug)
```
and it shows this error:

```
Traceback (most recent call last):
  File "C:/python3.6.4.64bit/augg.py", line 52, in <module>
    cv2.imshow("Original", images_aug)
cv2.error: C:\projects\opencv-python\opencv\modules\core\src\array.cpp:2493: error: (-206) Unrecognized or unsupported array type in function cvGetMat
```
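A likely cause (stated as an assumption, since it depends on the installed versions): `cv2.imshow` expects a single `HxWxC` image, but `seq.augment_images` on a `(32, 64, 64, 3)` batch returns a 4-D array, which OpenCV's `cvGetMat` rejects. Displaying one image from the batch sidesteps the error; a sketch with numpy standing in for the augmented batch:

```python
import numpy as np

# stand-in for: images_aug = seq.augment_images(images)
images_aug = np.zeros((32, 64, 64, 3), dtype=np.uint8)

single = images_aug[0]  # one HxWxC uint8 image, a shape cv2.imshow accepts
assert single.shape == (64, 64, 3)
# cv2.imshow("Original", single)
# cv2.waitKey(0)
```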
| open | 2018-04-05T17:55:33Z | 2018-04-16T15:49:38Z | https://github.com/aleju/imgaug/issues/119 | [] | ghost | 7 |
modoboa/modoboa | django | 3,263 | Incorrect DNS status for subdomains | OS: Debian 12
Modoboa: 2.2.4
Installer used: Yes
Webserver: Nginx
Database: MySQL
I configured a subdomain in Modoboa, and all the records are correct in Cloudflare, and emails are sending and receiving. The issue is with the DNS status in Modoboa, it shows "Domain has no MX record" ("No MX record found for this domain."), DNSBL shows "No information available for this domain.", and SPF shows "No record found".
Interestingly, the DKIM, DMARC and autoconfig are showing "green".
My Cloudflare records are:
- A: mail.sub (points to the IP)
- CNAME: autoconfig.sub
- CNAME: autodiscover.sub
- MX: sub (points to: mail.sub.domain.com)
- TXT: _dmarc.sub (with DMARC configuration)
- TXT: sub (with SPF configuration)
- TXT: modoboa._domainkey.sub (with DKIM) | closed | 2024-06-13T19:06:27Z | 2024-07-15T15:46:04Z | https://github.com/modoboa/modoboa/issues/3263 | [
"feedback-needed"
] | hugohamelcom | 17 |
nltk/nltk | nlp | 3,120 | nltk.download('punkt') not working | 
Please help me with this issue
| open | 2023-02-08T15:56:34Z | 2023-02-08T16:02:46Z | https://github.com/nltk/nltk/issues/3120 | [] | Bhargav2193 | 1 |
piskvorky/gensim | nlp | 2,844 | Conflicts between hyperparameters for negative sampling? | Hi,
I wonder if there are possible interactions/conflicts when you use negative sampling (`negative>0`) and have hierarchical softmax accidentally activated (`hs=1`)?
The docs say that negative sampling is only used when `hs=0` (and `negative>0`). So can I hope that if `hs=1` and `negative>0`, _**no**_ negative sampling is used?
Win 10
NumPy 1.18.1
SciPy 1.1.0
gensim 3.8.1
| closed | 2020-05-18T16:30:41Z | 2020-10-28T02:08:32Z | https://github.com/piskvorky/gensim/issues/2844 | [
"question"
] | datistiquo | 1 |
snarfed/granary | rest-api | 163 | MF2-Atom missing published/updated dates | Using granary.io on aaronparecki.com (@aaronpk) is failing to parse the dt-published fields and include them in the atom feed.
https://granary.io/url?input=html&output=atom&url=https%3A//aaronparecki.com/&hub=https%3A//switchboard.p3k.io/ | closed | 2019-03-28T19:58:01Z | 2019-03-29T04:05:49Z | https://github.com/snarfed/granary/issues/163 | [] | alexmingoia | 1 |
MaartenGr/BERTopic | nlp | 1,153 | Change node colour in visualize_documents based on class | Hi,
Love the BERTopic library!
I wanted to ask whether in <code>.visualize_documents</code>, it would be possible to change the colour of the document nodes according to my own class labels. All my documents have labels for 'blue', 'white', and 'red', representing slower, faster or normal time perception. It would be great to compare how topics are clustered in <code>.visualize_documents</code> (right) and my own method (left).

Appreciate the help!
Best
Akseli
| closed | 2023-04-04T13:50:55Z | 2023-04-10T06:28:50Z | https://github.com/MaartenGr/BERTopic/issues/1153 | [] | Akseli-Ilmanen | 3 |
httpie/cli | python | 778 | body data by parts not printed even if written on a socket | Hi Devs,
I really like your software and the presentation of the data in the terminal!
However, in the case of a GET whose response body is sent in multiple parts over a keep-alive connection, only the headers are printed on the client side (httpie) once they are flushed to the socket. Curl, on the other hand, shows each new piece of data as it is written to the socket. Therefore, I must conclude it's a bug!
Server-side sample:
```
[2019-05-17 13:52:11.270] TRACE: [connection:7] append response (#0), flags: { final_parts, connection_keepalive }, write group size: 2
[2019-05-17 13:52:11.270] TRACE: [connection:7] start next write group for response (#0), size: 2
[2019-05-17 13:52:11.270] TRACE: [connection:7] start response (#0): HTTP/1.1 200 OK
[2019-05-17 13:52:11.270] TRACE: [connection:7] sending resp data, buf count: 2, total size: 213
[2019-05-17 13:52:11.270] TRACE: [connection:7] outgoing data was sent: 213 bytes
[2019-05-17 13:52:11.270] TRACE: [connection:7] finishing current write group
[2019-05-17 13:52:11.270] TRACE: [connection:7] should keep alive
[2019-05-17 13:52:11.270] TRACE: [connection:7] start waiting for request
[2019-05-17 13:52:11.270] TRACE: [connection:7] continue reading request
...
[2019-05-17 13:53:17.892] WARN: [connection:1] try to write response, while socket is closed
[2019-05-17 13:53:17.892] TRACE: [connection:2] append response (#0), flags: { not_final_parts, connection_keepalive }, write group size: 3
[2019-05-17 13:53:17.892] TRACE: [connection:2] start next write group for response (#0), size: 3
[2019-05-17 13:53:17.892] TRACE: [connection:2] sending resp data, buf count: 3, total size: 86
[2019-05-17 13:53:17.892] TRACE: [connection:5] append response (#0), flags: { not_final_parts, connection_keepalive }, write group size: 3
[2019-05-17 13:53:17.892] TRACE: [connection:5] start next write group for response (#0), size: 3
[2019-05-17 13:53:17.892] TRACE: [connection:5] sending resp data, buf count: 3, total size: 86
[2019-05-17 13:53:17.892] TRACE: [connection:2] outgoing data was sent: 86 bytes
[2019-05-17 13:53:17.892] TRACE: [connection:2] finishing current write group
[2019-05-17 13:53:17.892] TRACE: [connection:5] outgoing data was sent: 86 bytes
[2019-05-17 13:53:17.893] TRACE: [connection:5] finishing current write group
[2019-05-17 13:53:17.893] TRACE: [connection:5] should keep alive
[2019-05-17 13:53:17.892] TRACE: [connection:2] should keep alive
...
```
Curl Listener:
```
n0t ~ $ curl -v 127.0.0.1:8080/tata/listen
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
> GET /tata/listen HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: keep-alive
< Server: RESTinio
< Content-Type: application/json
< Access-Control-Allow-Origin: *
< Transfer-Encoding: chunked
<
{"data":"d2FmZmVzIG5ldmVy","id":"633971162421690157","type":0}
{"data":"d2FmZmVzIG5ldmVy","id":"9359431324407924376","type":0}
{"data":"d2FmZmVzIG5ldmVy","id":"12979840589662321868","type":0}
{"data":"bmV2ZXIgc2F5IG5ldmVy","id":"3961064829737946963","type":0}
{"data":"bmV2ZXIgc2F5IG5ldmVy","id":"14572780874153121707","type":0}
{"data":"bmV2ZXIgc2F5IG5ldmVy","id":"5299785977823115375","type":0}
{"data":"d2FmZmVzIG5ldmVy","expired":true,"id":"12979840589662321868","type":0}
{"data":"d2FmZmVzIG5ldmVy","expired":true,"id":"633971162421690157","type":0}
{"data":"d2FmZmVzIG5ldmVy","expired":true,"id":"9359431324407924376","type":0}
```
Httpie Listener:
```
n0t ~ $ http 127.0.0.1:8080/tata/listen
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Type: application/json
Server: RESTinio
Transfer-Encoding: chunked
^C
```
Cheers,
Seva | closed | 2019-05-17T18:02:39Z | 2019-08-29T11:50:16Z | https://github.com/httpie/cli/issues/778 | [] | binarytrails | 1 |
lepture/authlib | flask | 561 | OpenID Connect Front-Channel Logout | I suggest implementing helpers for [OpenID Connect Front-Channel Logout](https://openid.net/specs/openid-connect-frontchannel-1_0.html)
> This specification defines a logout mechanism that uses front-channel communication via the User Agent between the OP and RPs being logged out that does not need an OpenID Provider iframe on Relying Party pages, as [OpenID Connect Session Management 1.0](https://openid.net/specs/openid-connect-frontchannel-1_0.html#OpenID.Session) [OpenID.Session] does. Other protocols have used HTTP GETs to RP URLs that clear login state to achieve this; this specification does the same thing.
Related issues #292 #500 #560 | open | 2023-07-03T15:59:17Z | 2025-02-20T20:36:59Z | https://github.com/lepture/authlib/issues/561 | [
"spec",
"feature request"
] | azmeuk | 0 |
dpgaspar/Flask-AppBuilder | flask | 1,813 | OAUTH : Gitlab - The redirect URI included is not valid | ### Environment
Flask-Appbuilder version:
```
Flask 1.1.2
Flask-AppBuilder 3.4.4
Flask-Babel 2.0.0
Flask-Caching 1.10.1
Flask-JWT-Extended 3.25.1
Flask-Login 0.4.1
Flask-OpenID 1.3.0
Flask-Session 0.4.0
Flask-SQLAlchemy 2.5.1
Flask-WTF 0.14.3
```
pip freeze output:
```
alembic==1.7.6
amqp==5.0.9
anyio==3.5.0
apache-airflow==2.2.4
apache-airflow-providers-celery==2.1.0
apache-airflow-providers-ftp==2.0.1
apache-airflow-providers-http==2.0.3
apache-airflow-providers-imap==2.2.0
apache-airflow-providers-postgres==3.0.0
apache-airflow-providers-sqlite==2.1.0
apispec==3.3.2
argcomplete==1.12.3
attrs==20.3.0
Authlib==0.15.5
aws-cfn-bootstrap==2.0
Babel==2.9.1
billiard==3.6.4.0
blinker==1.4
boto3==1.21.7
botocore==1.24.7
cached-property==1.5.2
cachelib==0.6.0
cattrs==1.10.0
celery==5.2.3
certifi==2020.12.5
cffi==1.15.0
charset-normalizer==2.0.12
click==8.0.3
click-didyoumean==0.3.0
click-plugins==1.1.1
click-repl==0.2.0
clickclick==20.10.2
cloudpickle==1.4.1
colorama==0.4.4
colorlog==4.8.0
commonmark==0.9.1
connexion==2.11.1
croniter==1.3.4
cryptography==3.4.8
dask==2021.6.0
defusedxml==0.7.1
Deprecated==1.2.13
dill==0.3.1.1
distributed==2.19.0
dnspython==2.2.0
docutils==0.16
email-validator==1.1.3
eventlet==0.33.0
Flask==1.1.2
Flask-AppBuilder==3.4.4
Flask-Babel==2.0.0
Flask-Caching==1.10.1
Flask-JWT-Extended==3.25.1
Flask-Login==0.4.1
Flask-OpenID==1.3.0
Flask-Session==0.4.0
Flask-SQLAlchemy==2.5.1
Flask-WTF==0.14.3
flower==1.0.0
fsspec==2022.1.0
gevent==21.12.0
graphviz==0.19.1
greenlet==1.1.2
gunicorn==20.1.0
h11==0.12.0
HeapDict==1.0.1
httpcore==0.14.7
httpx==0.22.0
humanize==4.0.0
idna==3.3
importlib-metadata==4.11.1
importlib-resources==5.4.0
inflection==0.5.1
iso8601==1.0.2
isodate==0.6.1
itsdangerous==1.1.0
Jinja2==3.0.3
jmespath==0.10.0
jsonschema==3.2.0
kombu==5.2.3
lazy-object-proxy==1.4.3
locket==0.2.1
lockfile==0.12.2
Mako==1.1.6
Markdown==3.3.6
MarkupSafe==2.0.1
marshmallow==3.14.1
marshmallow-enum==1.5.1
marshmallow-oneofschema==3.0.1
marshmallow-sqlalchemy==0.26.1
msgpack==1.0.3
numpy==1.20.3
openapi-schema-validator==0.1.6
openapi-spec-validator==0.3.3
packaging==21.3
pandas==1.3.5
partd==1.2.0
pendulum==2.1.2
prison==0.2.1
prometheus-client==0.13.1
prompt-toolkit==3.0.28
psutil==5.9.0
psycopg2-binary==2.9.3
pycparser==2.21
Pygments==2.11.2
PyJWT==1.7.1
pyparsing==2.4.7
pyrsistent==0.16.1
pystache==0.5.4
python-daemon==2.3.0
python-dateutil==2.8.2
python-nvd3==0.15.0
python-slugify==4.0.1
python3-openid==3.2.0
pytz==2021.3
pytzdata==2020.1
PyYAML==5.4.1
requests==2.27.1
rfc3986==1.5.0
rich==11.2.0
s3transfer==0.5.2
sentry-sdk==1.5.5
setproctitle==1.2.2
simplejson==3.2.0
six==1.16.0
sniffio==1.2.0
sortedcontainers==2.4.0
SQLAlchemy==1.3.24
SQLAlchemy-JSONField==1.0.0
SQLAlchemy-Utils==0.38.2
statsd==3.3.0
swagger-ui-bundle==0.0.9
tabulate==0.8.9
tblib==1.7.0
tenacity==8.0.1
termcolor==1.1.0
text-unidecode==1.3
toolz==0.11.2
tornado==6.1
typing-extensions==3.10.0.2
unicodecsv==0.14.1
urllib3==1.26.8
vine==5.0.0
wcwidth==0.2.5
Werkzeug==1.0.1
wrapt==1.13.3
WTForms==2.3.3
zict==2.0.0
zipp==3.7.0
zope.event==4.5.0
zope.interface==5.4.0
```
### Describe the expected results
Apache Airflow uses FAB for its UI & authentication; in my use case I'm trying to use OAuth with my GitLab instance (GitLab CE).
https://airflow.apache.org/docs/apache-airflow/2.2.4/security/webserver.html
My Airflow instance is behind an AWS ALB; the ALB configuration forwards requests to my Airflow instance: no issue with this point.
My Gitlab instance should be the authentication reference
<img width="701" alt="image" src="https://user-images.githubusercontent.com/17825769/156008592-d43dab25-baba-487f-a0e8-b603375f9b38.png">
### Describe the actual results
When I try to connect with the OAuth button, an error occurs...
"Sign up with" -> An error has occurred "The redirect URI included is not valid."

[DEBUG] webserver log :
```
x.x.x.x - - [28/Feb/2022:17:16:05 +0000] "GET /health HTTP/1.1" 200 159 "-" "ELB-HealthChecker/2.0"
x.x.x.x - - [28/Feb/2022:17:16:05 +0000] "GET /health HTTP/1.1" 200 159 "-" "ELB-HealthChecker/2.0"
[2022-02-28 17:16:16,146] {views.py:615} DEBUG - Provider: None
[2022-02-28 17:16:16,146] {views.py:615} DEBUG - Provider: None
x.x.x.x - - [28/Feb/2022:17:16:16 +0000] "GET /login/ HTTP/1.1" 200 16179 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36"
[2022-02-28 17:16:20,830] {views.py:615} DEBUG - Provider: gitlab
[2022-02-28 17:16:20,830] {views.py:615} DEBUG - Provider: gitlab
[2022-02-28 17:16:20,831] {views.py:628} DEBUG - Going to call authorize for: gitlab
[2022-02-28 17:16:20,831] {views.py:628} DEBUG - Going to call authorize for: gitlab
x.x.x.x - - [28/Feb/2022:17:16:20 +0000] "GET /login/gitlab?next= HTTP/1.1" 302 915 "https://airflow.mydomain/login/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36"
x.x.x.x - - [28/Feb/2022:17:16:30 +0000] "GET /health HTTP/1.1" 200 159 "-" "ELB-HealthChecker/2.0"
```
I checked the redirect URL, the URI used is :
https://gitlab.mydomain.abc/oauth/authorize?response_type=code&client_id=1cxxxxxxx0fa9a2&redirect_uri=https://airflow.mydomain.abc/oauth-authorized/gitlab&scope=read_user&state=eyxxxxxx.eyzzzzzzzzzz
When I compare the URI with one of my services (Superset) currently using OAUTH with Gitlab, I see that the URL is different
https://gitlab.mydomain.abc/oauth/authorize?response_type=code&client_id=xxxxxxxxxxx&redirect_uri=https://superset.mydomain.abc/oauth-authorized/gitlab&scope=read_user&state=eyXXXXXXXXX.eyXXXXXXXX.XXXXX-XXXXX-XXXXX_XXXXX-Y
### Steps to reproduce

airflow.cfg :
```
proxy_fix_x_for = 1
proxy_fix_x_host = 3
...
rbac = True
```
webserver_config.py :
```
rbac = True
from airflow.www.fab_security.manager import AUTH_OAUTH
AUTH_TYPE = AUTH_OAUTH
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = 'Admin'
OAUTH_PROVIDERS = [{
    'name': 'gitlab',
    'icon': 'fa-gitlab',
    'whitelist': ['@xxxxxxxx.com'],
    'token_key': 'access_token',
    'remote_app': {
        'api_base_url': 'https://gitlab.mydomain.abc/api/v4/',
        'client_kwargs': {
            'scope': 'read_user'
        },
        'access_token_url': 'https://gitlab.mydomain.abc/oauth/token',
        'authorize_url': 'https://gitlab.mydomain.abs/oauth/authorize',
        'request_token_url': None,
        'client_id': 'xxxxxxxxx',
        'client_secret': 'xxxxxxxxxxxxx',
    }
}]
```
Related :
https://github.com/apache/airflow/discussions/21850 | closed | 2022-02-28T17:23:33Z | 2024-04-25T14:16:16Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1813 | [] | vparmeland | 1 |
onnx/onnxmltools | scikit-learn | 130 | new "Transpose" ops when transform keras to onnx | Hi,
Thank you for sharing this nice tool for converting keras to onnx.
I followed the [examples](https://github.com/onnx/onnxmltools#examples) to convert my keras model to onnx model, but encountered with some weird results.
The overall model structure stays the same before and after the conversion, but there are a lot of new "Transpose" ops in the converted ONNX model.
In particular, a new "Transpose" op appears before and after [BatchNorm, Padding, Conv] ops.
Do you know what may cause this weird result? How can I eliminate those "Transpose" ops?
Here is the plotted keras and ONNX model:
[keras](https://user-images.githubusercontent.com/5886506/44519642-340e6f80-a700-11e8-8243-8cc387132bad.png)
[onnx](https://user-images.githubusercontent.com/5886506/44519643-340e6f80-a700-11e8-96ee-bd39ae502011.png)
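One common explanation, offered here only as an assumption since the converter internals are not shown in this issue: Keras (with the TensorFlow backend) usually keeps tensors in NHWC layout, while ONNX Conv/BatchNorm operate on NCHW, so converters often bridge the two layouts by inserting Transpose ops around such nodes. The layout difference itself is easy to see in plain NumPy:

```python
import numpy as np

nhwc = np.zeros((1, 24, 24, 3), dtype=np.float32)  # batch, height, width, channels
nchw = np.transpose(nhwc, (0, 3, 1, 2))            # batch, channels, height, width
print(nchw.shape)  # (1, 3, 24, 24)
```

If this is the cause, the extra Transpose ops are layout adapters rather than a functional change.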
| closed | 2018-08-23T10:14:44Z | 2018-08-29T06:39:23Z | https://github.com/onnx/onnxmltools/issues/130 | [] | yanghanxy | 3 |
hbldh/bleak | asyncio | 506 | AccessDenied when enumerating services on Windows | * bleak version: 0.11.0
* Python version: 3.8.x
* Operating System: Windows 10
* BlueZ version (`bluetoothctl -v`) in case of Linux: N/A
### Description
> Describe what you were trying to get done.
Trying to connect to LEGO Technic hub bootloader using Bleak.
> Tell us what happened, what went wrong, and what you expected to happen.
This is described in more detail at https://github.com/pybricks/pybricksdev/issues/15
The short story is that we get an AccessDenied error from `GetCharacteristicsAsync()` due to the fact that the device claims that it has a Service Changed characteristic that supports indications. Apparently Windows sees this and tries to automatically enable indications for this characteristic. But the device replies with an error.
This device works with Web Bluetooth on Windows, so I'm hoping that there is something in Bleak that can be tweaked.
For example, we might be able to use `GetGattServicesForUuidAsync()` and `GetCharacteristicsForUuidAsync()` instead of `GetGattServicesAsync()` and `GetCharacteristicsAsync()` to only enumerate the services and characteristics we are actually interested in.
Or we could separate out discovering characteristics from discovering services. But this would mean that this would have to be manually called by users later.
Actually, I suppose we could do all of this and keep things backwards compatible by adding a keyword argument to connect (and the object constructors for use with async with) to disable scanning for services/characteristics on connect. Then add new API to get services that takes an optional list of UUIDs as an argument. Likewise, APIs would be needed to get characteristics and descriptors with optional UUIDs. Windows and Mac have OS APIs for this, but we would probably just have to fake it on BlueZ.
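To make the proposal above concrete, here is a rough sketch of the backward-compatible shape it could take (every name and signature below is invented for illustration and is not existing Bleak API):

```python
import inspect

class ClientSketch:
    """Illustrative only: a client whose connect() can skip service discovery."""

    async def connect(self, *, discover_services=True):
        # With discover_services=False, nothing would be enumerated on connect,
        # so problem characteristics (e.g. Service Changed) are never touched.
        self.discover_services = discover_services

    async def get_services(self, uuids=None):
        # uuids=None would enumerate everything; a list would restrict the
        # enumeration to the given service UUIDs (faked on BlueZ if needed).
        return list(uuids or [])

print(inspect.iscoroutinefunction(ClientSketch.connect))  # True
```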
| closed | 2021-04-03T22:22:14Z | 2021-10-06T23:01:11Z | https://github.com/hbldh/bleak/issues/506 | [
"bug",
"Backend: pythonnet",
"Backend: WinRT"
] | dlech | 5 |
onnx/onnx | tensorflow | 6,428 | [Feature request] Support 1D vector for w_scale in QLinearConv reference implementation | ### System information
1.17.0 release
### What is the problem that this feature solves?
Documentation for `w_scale` argument to `QLinearConv` operator (https://github.com/onnx/onnx/blob/41cba9d512620781257163cb2ba072871456f8d3/docs/Operators.md#QLinearConv) states:
> It could be a scalar or a 1-D tensor, which means a per-tensor/layer or per output channel quantization. If it's a 1-D tensor, its number of elements should be equal to the number of output channels (M).
If you use QLinearConv with a 1D vector for `w_scale` in `onnx.reference.ReferenceEvaluator`, though, you will get a NumPy broadcast error. It can be seen by running this file:
https://gist.github.com/mcollinswisc/bccaa6730089c4221f53415a39b6835b
I see error:
```
Traceback (most recent call last):
File "onnx_qlinearconv.py", line 71, in <module>
ref_result = ref_eval.run(None, {"x": x})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/qlinearconv/lib/python3.12/site-packages/onnx/reference/reference_evaluator.py", line 599, in run
outputs = node.run(*inputs, **linked_attributes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/qlinearconv/lib/python3.12/site-packages/onnx/reference/op_run.py", line 466, in run
res = self._run(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "env/qlinearconv/lib/python3.12/site-packages/onnx/reference/ops/op_qlinear_conv.py", line 53, in _run
R = res * (x_scale * w_scale / y_scale)
~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ValueError: operands could not be broadcast together with shapes (1,8,24,24) (8,)
```
Thrown from here:
https://github.com/onnx/onnx/blob/41cba9d512620781257163cb2ba072871456f8d3/onnx/reference/ops/op_qlinear_conv.py#L53
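For illustration only (this sketches the broadcasting problem in plain NumPy; it is not a claim about how the reference implementation itself should be patched), reshaping the per-channel `w_scale` so its values line up with the output-channel axis makes the multiplication broadcast:

```python
import numpy as np

# Shapes taken from the traceback: conv result (1, 8, 24, 24), per-channel w_scale (8,)
res = np.ones((1, 8, 24, 24), dtype=np.float32)
w_scale = np.full(8, 0.5, dtype=np.float32)

# `res * w_scale` fails: (1, 8, 24, 24) and (8,) do not broadcast.
# Reshaping w_scale to (8, 1, 1) aligns it with the output-channel axis:
scaled = res * w_scale.reshape(-1, 1, 1)
print(scaled.shape)  # (1, 8, 24, 24)
```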
### Alternatives considered
N/A
### Describe the feature
For getting full reference implementation results from QLinearConv.
### Will this influence the current api (Y/N)?
No
### Feature Area
ReferenceEvaluator
### Are you willing to contribute it (Y/N)
Yes
### Notes
_No response_ | open | 2024-10-06T21:57:03Z | 2024-10-06T21:57:03Z | https://github.com/onnx/onnx/issues/6428 | [
"topic: enhancement"
] | mcollinswisc | 0 |
ijl/orjson | numpy | 130 | Raspberry Pi 4 - Cannot install orjson | I am trying to install orjson with Python 3.8. So far I have installed wheel, Rust nightly, and maturin "successfully". When installing orjson, it fails at the maturin step and I can't figure out a solution.
```
$ pip3 install orjson
Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Collecting orjson
Using cached orjson-3.4.0.tar.gz (655 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Building wheels for collected packages: orjson
Building wheel for orjson (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 /home/pi/.local/lib/python3.8/site-packages/pip/_vendor/pep517/_in_process.py build_wheel /tmp/tmpx8a20kfc
cwd: /tmp/pip-install-34awlfxz/orjson
Complete output (7 lines):
💥 maturin failed
Caused by: The given list of python interpreters is invalid
Caused by: python is not a valid python interpreter
Caused by: Failed to get information from the python interpreter at python
Caused by: Only python >= 3.5 is supported, while you're using python 2.7
Running `maturin pep517 build-wheel -i python --manylinux=off --strip=on`
Error: Command '['maturin', 'pep517', 'build-wheel', '-i', 'python', '--manylinux=off', '--strip=on']' returned non-zero exit status 1.
----------------------------------------
ERROR: Failed building wheel for orjson
Failed to build orjson
ERROR: Could not build wheels for orjson which use PEP 517 and cannot be installed directly
```
Both my `pip` and `pip3` are using Python 3.8
I did `$ alias python=python3` without success.
Note: I am building it on a Raspberry Pi 4, so it's an ARM architecture.
```
$ pip -V
pip 20.2.3 from /home/pi/.local/lib/python3.8/site-packages/pip (python 3.8)
$ python --version
Python 3.8.6
$ maturin -V
maturin 0.8.3
$ wheel version
wheel 0.35.1
$ rustc -V
rustc 1.48.0-nightly (7f7a1cbfd 2020-09-27)
```
________________
_I have to admit I wasn't ready to spend that much time on this package as I ran it so smoothly on my x64 computer 😅 (I didn't expect it to have so many requirements! At least orjson is fast!)_
| closed | 2020-09-29T06:46:19Z | 2020-10-02T12:36:52Z | https://github.com/ijl/orjson/issues/130 | [] | QuentinDanjou | 4 |
chaoss/augur | data-visualization | 2,663 | repo_deps_libyear current_release_date and latest_release_date data type | current_release_date and latest_release_date are string types, not date types, and the format is not convertible with pandas | open | 2024-01-02T20:07:16Z | 2025-02-10T23:14:50Z | https://github.com/chaoss/augur/issues/2663 | [
"add-feature"
] | cdolfi | 1 |
ageitgey/face_recognition | python | 1,098 | face_recognition.face_locations TypeError: 'list' object is not callable | * face_recognition version: 1.3.0
* Python version: 3.6.1
* Operating System: Manjaro
### Description
I installed dlib and face_recognition via pip in a conda environment. I am trying to do facial recognition via webcam. Is it possible that the way I installed face_recognition is causing a problem, or is there an actual error in the code below?
### What I Did
```
# import libraries
import cv2
import face_recognition
# Get a reference to webcam
video_capture = cv2.VideoCapture(0)
# Initialize variables
face_locations = []
while True:
    # Grab a single frame of video
    ret, frame = video_capture.read()
    # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)
    rgb_frame = frame[:, :, ::-1]
    # Find all the faces in the current frame of video
    face_locations = face_recognition.face_locations(rgb_frame)
    # Display the results
    for top, right, bottom, left in face_locations:
        # Draw a box around the face
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
    # Display the resulting image
    cv2.imshow('Video', frame)
    # Hit 'q' on the keyboard to quit!
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# Release handle to the webcam
video_capture.release()
cv2.destroyAllWindows()
```
I am getting an error of:
face_locations = face_recognition.face_locations(rgb_frame)
TypeError: 'list' object is not callable
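As an aside, this exact message appears whenever the name `face_locations` resolves to a list instead of the library function. A minimal, stdlib-only reproduction (an assumption about a possible cause, not a confirmed diagnosis of the issue above):

```python
import types

# Hypothetical reproduction: if the name `face_recognition` resolves to
# something whose `face_locations` attribute is a list (e.g. a shadowing
# local file or a stray assignment), calling it raises exactly this error.
fake_module = types.ModuleType("face_recognition")
fake_module.face_locations = []  # a list, not a function

try:
    fake_module.face_locations("rgb_frame")
except TypeError as exc:
    print(exc)  # 'list' object is not callable
```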
| closed | 2020-03-29T15:37:09Z | 2020-03-29T15:41:56Z | https://github.com/ageitgey/face_recognition/issues/1098 | [] | biographie | 2 |
python-restx/flask-restx | flask | 231 | When used with Flask-jwt-extended the exception in handler is thrown | ```python
from flask import Flask, jsonify
from flask_jwt_extended import JWTManager, jwt_required
from flask_restx import Api, Resource
app = Flask(__name__)
app.config['JWT_SECRET_KEY'] = 'super-secret'
jwt = JWTManager(app)
api = Api(app)
jwt._set_error_handler_callbacks(api)
@jwt.unauthorized_loader
def unauthorized(msg):
    return jsonify({'nananana': 'batman'}), 401 #<==== This causes the exception
@app.route('/foo', methods=['GET'])
@jwt_required
def protected():
    return jsonify({'foo': 'bar'})
@api.route('/bar')
class HelloWorld(Resource):
    @jwt_required
    def get(self):
        return {'hello': 'world'}
```
### **Repro Steps** (if applicable)
1. In Postman, run Get method http://localhost:5000/bar
### **Expected Behavior**
The error handler returns the response with status 401 and json content {'nananana': 'batman'}
### **Actual Behavior**
Exception is thrown:
```
File "c:\Projects\Flask-JWT-Rest\venv\lib\site-packages\flask_restx\api.py", line 698, in handle_error
default_data["message"] = default_data.get("message", str(e))
AttributeError: 'Response' object has no attribute 'get'
```
If add to app app.config['ERROR_INCLUDE_MESSAGE'] = False another exception is thrown:
```
File "d:\Projects\LearningProjects\Flask-JWT-Rest\venv\lib\site-packages\flask_restx\api.py", line 402, in make_response
resp = self.representations[mediatype](data, *args, **kwargs)
File "d:\Projects\LearningProjects\Flask-JWT-Rest\venv\lib\site-packages\flask_restx\representations.py", line 25, in output_json
dumped = dumps(data, **settings) + "\n"
File "C:\Development\Python36\lib\json\__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "C:\Development\Python36\lib\json\encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\Development\Python36\lib\json\encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "C:\Development\Python36\lib\json\encoder.py", line 180, in default
o.__class__.__name__)
TypeError: Object of type 'Response' is not JSON serializable
```
### **Error Messages/Stack Trace**
see above
### **Environment**
- Python3.6
Flask==1.1.2
Flask-JWT-Extended==3.24.1
flask-restx==0.2.0
> Note. The same code but with restx replaced by restplus is working as expected.
| open | 2020-09-29T18:43:10Z | 2021-04-04T15:45:35Z | https://github.com/python-restx/flask-restx/issues/231 | [
"bug"
] | KathRains | 8 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 958 | I have the problem No Module named torch | please send help | closed | 2021-12-25T12:15:31Z | 2021-12-28T12:34:21Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/958 | [] | santaonholidays | 0 |
pydata/pandas-datareader | pandas | 387 | I can't get kosdaq historical from yahoo and google. | Hello, all.
I am Korean.
There are two kinds of stock markets in Korea: KRX and KOSDAQ.
I can get KRX data, but I cannot get KOSDAQ data.
I can get KRX data with this code:
`df = data.DataReader("KRX:035420", "google", start, end)`
But I cannot get KOSDAQ data with this code:
`df = data.DataReader("KOSDAQ:003100", "google", start, end)`
Could you tell me how to get it?
| closed | 2017-09-04T12:47:05Z | 2018-01-23T10:16:52Z | https://github.com/pydata/pandas-datareader/issues/387 | [
"google-finance",
"yahoo-finance"
] | ryulstory | 1 |
facebookresearch/fairseq | pytorch | 4,748 | Changes outputs directory during training |
#### What is your question?
Hi, I mistakenly changed the outputs directory during training; I just created a new directory right away.
Now the log and training info are no longer appended to hydra_train.log (I am using hydra-train).
In this situation, my question is: will new checkpoints be stored in the changed directory?
#### What's your environment?
- fairseq Version (e.g., 1.0 or main): 0.12.2
- PyTorch Version (e.g., 1.0) 1.12
- OS (e.g., Linux): ubuntu 20.04
- How you installed fairseq (`pip`, source): pip
- Build command you used (if compiling from source):
- Python version: 3.8.10
- CUDA/cuDNN version: cuda 11.3
- GPU models and configuration:
- Any other relevant information:
| closed | 2022-09-29T04:50:23Z | 2022-09-29T08:04:28Z | https://github.com/facebookresearch/fairseq/issues/4748 | [
"question",
"needs triage"
] | Macsim2 | 1 |
chatanywhere/GPT_API_free | api | 217 | well done~! | Which bodhisattva created this project? Offering incense in gratitude~
| open | 2024-04-19T15:47:49Z | 2024-08-16T07:48:28Z | https://github.com/chatanywhere/GPT_API_free/issues/217 | [] | shiker1996 | 3 |
biolab/orange3 | numpy | 6,945 | Problems with Excel/csv reader in Orange 3.38 | **What's wrong?**
#6862 fixed reading of flags in 1-line header format, but it "recognizes" flags that are not.
The attached file has an attribute *1999-12-01 Towards a nuclear-weapon-free world:#the need for a new agenda*. The part before `#` is recognized as a flag (I don't know which flags it is supposed to contain), and reading fails when converting the column data into floats.
I think that the code should be more strict about what is recognized as a flag. Probably it has to be something like `[cm]?[CDST]?#` and that's it.
**How can we reproduce the problem?**
Load the file [un-resolutions-1.csv](https://github.com/user-attachments/files/17970212/un-resolutions-1.csv) with the File widget. (CSV Reader works.)
**What's your environment?**
Orange 3.38 - dmg or from GitHub.
"bug report"
] | janezd | 1 |
tfranzel/drf-spectacular | rest-api | 960 | Serializer Extension Mapping Applies to Request Body for POST Requests | - Django 4.1.4
- DRF 3.14
- drf-spectacular 0.26.0
**Describe the bug**
My DRF serializers are designed to return a custom response body by overriding the `to_representation` method. Each field of the response object points to an object rather than to the raw field value. E.g.:
<img width="323" alt="image" src="https://user-images.githubusercontent.com/47714027/225296852-68461c69-1d1f-41fc-aa38-4d1ccb0f2746.png">
I'm using a custom `OpenApiSerializerExtension` and overriding the `map_serializer` method to achieve this structure in my schema response body:
```python
def map_serializer(self, auto_schema, direction):
    schema = super().map_serializer(auto_schema, direction)
    if direction == 'response':
        for field_name, field_schema in schema['properties'].items():
            field_meta_schema = {
                "type": "object",
                "properties": {
                    "value": field_schema,
                    "readonly": build_basic_type(bool)
                }
            }
            schema['properties'][field_name] = field_meta_schema
    return schema
```
As you can see, I'm only doing this transformation if the direction is `response`, because the request body structure should remain the same. However, for the `POST` endpoint specifically, the request body in the schema is also taking on this modified structure:
<img width="340" alt="image" src="https://user-images.githubusercontent.com/47714027/225297893-68cb5907-4d0e-4e4e-a472-4d1a4dd0ce68.png">
This does not happen for the `PATCH` method request body of the same endpoint though.
I did some debugging and found the following line `drf_spectacular/openapi.py:1285`. If I change this line to `if self.method == 'PATCH' or self.method == 'POST':`, everything works fine, i.e., for both the `POST` and `PATCH` methods, the request body stays the same (isn't modified by my extension) and the response bodies are modified.
**To Reproduce**
- Use a `OpenApiSerializerExtension` extension and override the `map_serializer` method to modify the schema properties only when the direction is `response`.
- Create both a `POST` and `PATCH` endpoint which uses the serializer that this extension modifies the behaviour of.
- Create the schema and view the resulting request bodies for the two methods.
**Expected behavior**
The request bodies in the resulting schema should be unmodified, regardless of method, while the response bodies should be modified by the extension. | closed | 2023-03-15T11:54:25Z | 2023-03-21T10:27:03Z | https://github.com/tfranzel/drf-spectacular/issues/960 | [] | Lawrence-Godfrey | 2 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 127 | Multi lingual | Hi,
How can I make this multilingual so that the bot reads text in French (right now the title moderator does not work) and generates the resume in French too?
If you point me to where in the code, I might be able to participate in the evolution. | closed | 2024-08-29T09:51:15Z | 2024-09-03T14:46:03Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/127 | [
"enhancement"
] | MaraScott | 1 |
littlecodersh/ItChat | api | 500 | AttributeError: module 'itchat' has no attribute 'auto_login | Traceback (most recent call last):
File "C:\Users\Administrator\Desktop\auto_aswer.py", line 2, in <module>
import itchat
File "C:\Users\Administrator\Desktop\itchat.py", line 3, in <module>
itchat.auto_login(hotReload=True)
AttributeError: module 'itchat' has no attribute 'auto_login'
What is this problem? Running it directly in IDLE shows this error (version 3.62), but writing the code step by step in the shell, or in Jupyter, runs fine. Is it a version compatibility issue? | closed | 2017-08-29T05:42:34Z | 2023-01-12T11:14:49Z | https://github.com/littlecodersh/ItChat/issues/500 | [
"duplicate"
] | M737 | 6 |
simple-login/app | flask | 1,052 | [Feature Request] Send outgoing email using form on SimpleLogin | Hello,
I have structured this feature request using a user story:
**As a** user who often utilizes the reverse-alias functionality
**I want to** be able to send emails by replying to a reverse-alias or by sending an email to a recipient that I did not earlier contact, directly via a web form on SimpleLogin
**So that** I am not required to visit SimpleLogin to solely copy the reverse-alias for a recipient and then paste it in my email client, but I also have the option to skip the mail client for this purpose and reply directly via the SimpleLogin web app
**Acceptance criteria**
1. Use a web form on SimpleLogin to reply to a known contact that previously contacted an alias
2. Use a web form on SimpleLogin to send an email from an alias to an unknown recipient | closed | 2022-06-06T01:22:52Z | 2022-06-06T01:34:07Z | https://github.com/simple-login/app/issues/1052 | [] | ghost | 1 |
matterport/Mask_RCNN | tensorflow | 2,315 | inference very slow | I have tested one image (1000x1200x3), but the speed is really slow! It unbelievably takes about 3 minutes (160 seconds) to run inference on one image. My inference device is a GTX 1660 Super. I would be very grateful if someone could help me. | open | 2020-08-11T00:55:05Z | 2020-12-27T09:35:05Z | https://github.com/matterport/Mask_RCNN/issues/2315 | [] | shining-love | 2
keras-team/keras | pytorch | 20,564 | Unable to Assign Different Weights to Classes for Functional Model | I'm struggling to assign weights to different classes for my functional Keras model. I've looked for solutions online, and nothing that I have tried has worked. I have tried the following:
1) Passing `class_weight` as an argument to `model.fit`. I convert my datasets to numpy iterators first, as `tf.data.Dataset` does not work with `class_weight`
```python
history = model.fit(
train_dataset.as_numpy_iterator(),
validation_data=val_dataset.as_numpy_iterator(),
epochs=epochs,
steps_per_epoch=steps_per_epoch,
validation_steps=validation_steps,
class_weight={0: 0.1, 1: 5.0}
)
```
2) Mapping sample weights directly to my `tf.data.Dataset` object.
```python
def add_sample_weights(features, labels):
class_weights = tf.constant([0.1, 5.0])
sample_weights = tf.gather(class_weights, labels)
return features, labels, sample_weights
train_dataset = train_dataset.shuffle(buffer_size=10).batch(4).repeat().map(add_sample_weights)
val_dataset = val_dataset.batch(4).repeat().map(add_sample_weights)
# ...
model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=['accuracy'], weighted_metrics=['accuracy'])
```
For context, my dataset contains an imbalanced split of 70 / 30, for classes 0 and 1 respectively. I've been assigning the 0 a class weight of 0.1 and class 1 a class weight of 5. To my understanding, this should mean that the model "prioritizes" class 1 over class 0. However, my model still classifies everything as 0. How can I resolve this and is there anything that I'm missing?
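As a standalone sanity check of the mapping I expect `tf.gather(class_weights, labels)` to perform, here is the same lookup reproduced in plain Python (illustrative only, no TensorFlow involved):

```python
# Plain-Python stand-in for tf.gather(class_weights, labels):
# each integer label indexes into the per-class weight table.
class_weights = [0.1, 5.0]
labels = [0, 1, 1, 0]
sample_weights = [class_weights[label] for label in labels]
print(sample_weights)  # [0.1, 5.0, 5.0, 0.1]
```

So every class-1 sample should carry 50x the weight of a class-0 sample, which is why the all-0 predictions surprise me.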
Let me know if you need any additional information to help me with this issue. | closed | 2024-11-29T02:25:58Z | 2024-11-29T06:49:47Z | https://github.com/keras-team/keras/issues/20564 | [] | Soontosh | 1 |
scikit-learn/scikit-learn | machine-learning | 30,692 | Inaccurate error message for parameter passing in Pipeline with enable_metadata_routing=True | ### Describe the issue linked to the documentation
**The following error message is inaccurate:**
```
Passing extra keyword arguments to Pipeline.transform is only supported if enable_metadata_routing=True, which you can set using sklearn.set_config.
```
**This can easily be done using `**params` as described in the documentation for sklearn.pipeline:** https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html#sklearn.pipeline.Pipeline.fit
**Please consider the following example:**
```py
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from scipy.sparse import csr_matrix
import pandas as pd
import numpy as np
class DummyTransformer(BaseEstimator, TransformerMixin):
def __init__(self):
self.feature_index_sec = None # initialize attribute
def transform(self, X, feature_index_sec=None, **fit_params):
if feature_index_sec is None:
raise ValueError("Missing required argument 'feature_index_sec'.")
print(f"Transform Received feature_index_sec with shape: {feature_index_sec.shape}")
return X
def fit(self, X, y=None, feature_index_sec=None, **fit_params):
print(f"Fit Received feature_index_sec with shape: {feature_index_sec.shape}")
return self
def fit_transform(self, X, y=None, feature_index_sec=None, **fit_params):
self.fit(X, y, feature_index_sec, **fit_params) # feature_index_sec is passed with other parameters
return self.transform(X, feature_index_sec, **fit_params)
feature_matrix = csr_matrix(np.random.rand(10, 5))
train_idx = pd.DataFrame({'FileDate_ClosingPrice': np.random.rand(10)})
transformer = DummyTransformer()
pipe = Pipeline(steps=[('DummyTransformer', transformer)])
pipe.fit_transform(feature_matrix, DummyTransformer__feature_index_sec=train_idx)
# this line creates the error
pipe.transform(feature_matrix, DummyTransformer__feature_index_sec=train_idx)
```
**Which outputs:**
```
Fit Received feature_index_sec with shape: (10, 1)
Transform Received feature_index_sec with shape: (10, 1)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File /tmp/test.py:35
32 pipe.fit_transform(feature_matrix, DummyTransformer__feature_index_sec=train_idx)
34 # this line creates the error
---> 35 pipe.transform(feature_matrix, DummyTransformer__feature_index_sec=train_idx)
File ~/micromamba/lib/python3.12/site-packages/sklearn/pipeline.py:896, in Pipeline.transform(self, X, **params)
863 @available_if(_can_transform)
864 def transform(self, X, **params):
865 """Transform the data, and apply `transform` with the final estimator.
866
867 Call `transform` of each transformer in the pipeline. The transformed
(...)
894 Transformed data.
895 """
--> 896 _raise_for_params(params, self, "transform")
898 # not branching here since params is only available if
899 # enable_metadata_routing=True
900 routed_params = process_routing(self, "transform", **params)
File ~/micromamba/lib/python3.12/site-packages/sklearn/utils/_metadata_requests.py:158, in _raise_for_params(params, owner, method)
154 caller = (
155 f"{owner.__class__.__name__}.{method}" if method else owner.__class__.__name__
156 )
157 if not _routing_enabled() and params:
--> 158 raise ValueError(
159 f"Passing extra keyword arguments to {caller} is only supported if"
160 " enable_metadata_routing=True, which you can set using"
161 " `sklearn.set_config`. See the User Guide"
162 " <https://scikit-learn.org/stable/metadata_routing.html> for more"
163 f" details. Extra parameters passed are: {set(params)}"
164 )
ValueError: Passing extra keyword arguments to Pipeline.transform is only supported if enable_metadata_routing=True, which you can set using `sklearn.set_config`. See the User Guide <https://scikit-learn.org/stable/metadata_routing.html> for more details. Extra parameters passed are: {'DummyTransformer__feature_index_sec'}
```
**Request**
- The error message should be updated to clarify that parameters can already be passed using the **params (e.g., StepName__param_name) syntax, which is unrelated to metadata routing.
- Additionally, I was unable to find any example of using enable_metadata_routing=True to pass parameters, either in the documentation or in the wild. It would be helpful if the documentation provided a working example of passing parameters using metadata routing, especially for custom transformers.
### Suggest a potential alternative/fix
_No response_ | closed | 2025-01-21T19:08:21Z | 2025-01-28T09:02:51Z | https://github.com/scikit-learn/scikit-learn/issues/30692 | [
"Documentation"
] | jakemdrew | 7 |
NVlabs/neuralangelo | computer-vision | 100 | Images instead of videos | Can I use already-captured images and obtain the JSON files without using videos? | open | 2023-09-02T03:11:39Z | 2023-09-18T01:58:19Z | https://github.com/NVlabs/neuralangelo/issues/100 | [
"enhancement"
] | mowangmodi | 4 |
RayVentura/ShortGPT | automation | 48 | ✨ [Feature Request / Suggestion]: GPT3 error: Rate limit | ### Suggestion / Feature Request
ERROR | Exception : GPT3 error: Rate limit reached for default-gpt-3.5-turbo in organization on requests per min. Limit: 3 / min. Please try again in 20s. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method.
The free ChatGPT API has rate limits. How can I adjust the code to decrease the request rate?
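For example, something like this might work (a hypothetical, minimal client-side throttle; the wrapper and interval are my own sketch, not part of ShortGPT):

```python
import time

def throttled(fn, min_interval=21.0):
    """Wrap fn so consecutive calls are at least min_interval seconds apart.

    21 s between calls keeps us under the free tier's 3-requests-per-minute limit.
    """
    last_call = [0.0]  # mutable cell so the closure can update it

    def wrapper(*args, **kwargs):
        wait = last_call[0] + min_interval - time.monotonic()
        if wait > 0:
            time.sleep(wait)
        last_call[0] = time.monotonic()
        return fn(*args, **kwargs)

    return wrapper
```

Any function making an OpenAI request could then be wrapped once and called normally.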
### Why would this be useful?
_No response_
### Screenshots/Assets/Relevant links
_No response_ | closed | 2023-07-26T06:21:35Z | 2023-10-01T18:24:29Z | https://github.com/RayVentura/ShortGPT/issues/48 | [] | donghao95 | 13 |
jacobgil/pytorch-grad-cam | computer-vision | 476 | gradcam for binary segmentation network | The final output of my segmentation network is a single-channel prediction map. How should the approach in the tutorial be adapted for this case? | open | 2024-01-10T07:29:49Z | 2024-01-10T07:29:49Z | https://github.com/jacobgil/pytorch-grad-cam/issues/476 | [] | King-king424 | 0
opengeos/leafmap | streamlit | 1,027 | add_raster doesn't work in container environment | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.42.6
- Python version: 3.11.6
- Operating System: Mac OS 14.6.1 (but running container giswqs/leafmap:latest)
### Description
I am having difficulty getting leafmap to display a local COG when running in a container. I have tried multiple permutations of building leafmap into a Docker container, all of which show slightly different failures. These are described below.
### What I Did
I will start with the most recent, which is simply directly running the image giwqs/leafmap:latest, with the following command:
```
docker run -it -p 8888:8888 -v $(pwd):/home/jovyan/work -v /Users/me/images/tiles:/home/tiles giswqs/leafmap:latest
```
Note I am mounting a few directories here to get access to local files, the most important of which is a directory containing the local COGs I want to display, which I form paths to and then try to display using these lines, adapted from several notebook cells:
```
import os
import sys
import pandas as pd
import re
import numpy as np
import leafmap.leafmap as leafmap
# traverse directory containing COGs to make catalog
paths = []
for root, _, files in os.walk("/home/tiles/", topdown=True):
for f in files:
if f.endswith(".tif"):
file_bits = f.split("_")
file_dict = {
"tile": int(re.sub("tile", "", file_bits[0])),
"year": file_bits[1].split("-")[0],
"month": file_bits[1].split("-")[1],
"file": f,
"path": os.path.join(root, f)
}
paths.append(file_dict)
image_catalog = pd.DataFrame(paths)
# pluck out tile of interest
image_catalogr = (
image_catalog[image_catalog['tile']==842099]
.groupby(["tile", "year"])
.first()
)
tile = image_catalogr.path.iloc[0]
# display
m = leafmap.Map(zoom=20,
center=[pt.y.iloc[0], pt.x.iloc[0]]
)
m.add_basemap("SATELLITE")
m.add_raster(tile, bands=[1,2,3], layer_name='TRUE COLOR')
m
```
The first failure is that `add_raster` says `xarray` isn't installed. To fix this, I simply opened a terminal in the jupyter lab environment and ran `mamba install xarray`, restarted the kernel, and ran again.
This time it ran, but the COG does not display. Following previous guidance, I then also tried adding `jupyter-server-proxy` and added the following two lines to my imports:
```
import localtileserver
os.environ['LOCALTILESERVER_CLIENT_PREFIX'] = '/proxy/{port}'
```
Restarted and ran again. The COG still doesn't show up.
So I am stuck here. I should note I have also tried to build my own containers, using several different variants. One of these is as follows, which is simply adapting the Dockerfile in this repo, and adding the missing xarray and rioxarray, and a few other bits:
```
FROM jupyter/scipy-notebook:latest
RUN mamba install -c conda-forge leafmap geopandas "localtileserver>=0.10.0" osmnx -y && \
pip install -U leafmap jsonschema==4.18.0 lonboard h5py xarray==2024.11.0 rioxarray==0.17.0 && \
fix-permissions "${CONDA_DIR}" && \
fix-permissions "/home/${NB_USER}"
ENV PROJ_LIB='/opt/conda/share/proj'
USER root
# Set up working directory
RUN mkdir -p /home/workdir
WORKDIR /home/workdir
# Activate the Conda environment and ensure it's available in the container
ENV PATH="/opt/conda/base:$PATH"
ENV CONDA_DEFAULT_ENV=geo
# Expose Jupyter Lab's default port
EXPOSE 8888
# Run Jupyter Lab
ENTRYPOINT ["jupyter", "lab", "--ip=0.0.0.0", "--port=8888", "--no-browser", "--allow-root"]
```
Running the same code as above, I get a different error, one that has bedevilled me across multiple attempts at this: it says `localtileserver` doesn't exist, when it does. It is actually failing because:
```
ModuleNotFoundError: No module named 'rio_tiler.io'
```
A different, pip-based version of the build script bypasses the error:
```
# build script fixed and optimized by ChatGPT
FROM continuumio/miniconda3:24.9.2-0
# Use a single RUN command where possible for efficiency
RUN apt-get update && \
apt-get --allow-releaseinfo-change update && \
apt-get --allow-releaseinfo-change-suite update && \
apt-get install -y binutils libproj-dev gdal-bin libgdal-dev g++ && \
rm -rf /var/lib/apt/lists/*
# solution from here: https://stackoverflow.com/a/73101774
RUN cp /usr/lib/aarch64-linux-gnu/libstdc++.so.6.0.30 /opt/conda/lib/ && \
cd /opt/conda/lib/ && \
rm -f libstdc++.so.6 && \
ln -s libstdc++.so.6.0.30 libstdc++.so.6
# Upgrade pip, pip-tools, and setuptools
RUN pip install --no-cache-dir --upgrade pip pip-tools setuptools
# Copy requirements.txt and install dependencies
COPY requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt && \
rm /tmp/requirements.txt
# Set up working directory
RUN mkdir -p /home/workdir
WORKDIR /home/workdir
# Expose Jupyter Lab's default port
EXPOSE 8888
# Define entrypoint with corrected Jupyter Lab options
ENTRYPOINT ["jupyter", "lab", "--ip=0.0.0.0", "--port=8888", "--no-browser", "--allow-root"]
```
With requirements:
```
jupyterlab
ipywidgets==8.1.5
ipyleaflet==0.19.2
leafmap==0.41.2
localtileserver==0.10.5
numpy==2.1.3
geopandas==1.0.1
pandas==2.2.3
shapely==2.0.6
matplotlib==3.9.2
```
But this has the same problem with displaying COGs--they just don't show up (and I have tried adding jupyter-server-proxy).
So I am stuck here. I will note that an ordinary, non-containerized install of leafmap using conda/mamba does show the COGs, but I want to containerize this for use on a cluster account.
This is probably an issue with localtileserver, but perhaps there is a fix known here for getting it to work with `add_raster`. Any solutions will be most appreciated.
| closed | 2025-01-05T16:06:45Z | 2025-02-25T16:17:02Z | https://github.com/opengeos/leafmap/issues/1027 | [
"bug"
] | ldemaz | 11 |
TheKevJames/coveralls-python | pytest | 73 | Coveralls and Tox 2.0 | I just found out today, that with the new tox release (https://testrun.org/tox/latest/changelog.html#id1), coveralls stops working on travis.
The reason is that tox 2.0 has environment isolation. You have to pass
the `TRAVIS` and `TRAVIS_JOB_ID` (and probably `TRAVIS_BRANCH`) environment variables in your tox settings, or coveralls will think it is not on travis: https://testrun.org/tox/latest/example/basic.html#passing-down-environment-variables.
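For example, an (untested) sketch of the tox configuration those docs describe:

```ini
# tox.ini sketch: forward the Travis variables coveralls checks for
[testenv]
passenv = TRAVIS TRAVIS_JOB_ID TRAVIS_BRANCH
```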
You can see my log on travis (scroll to the bottom): https://travis-ci.org/Pytwitcher/pytwitcherapi/jobs/62437830
The error message has become confusing because of the tox update. My suggestion is to add a hint to the error message and state this behavior in the documentation.
| closed | 2015-05-13T18:28:42Z | 2015-05-16T20:35:58Z | https://github.com/TheKevJames/coveralls-python/issues/73 | [] | storax | 3 |
kizniche/Mycodo | automation | 1,357 | Add notes via API | **Is your feature request related to a problem? Please describe.**
When on the farm I don't want to open a web app to add a note.
I'd like to be able to add them via an existing farm management app
**Describe the solution you'd like**
Expose the note functionality in the Rest API so that notes can be added by other apps.
**Describe alternatives you've considered**
Running https://v1.farmos.org/guide/app/ and FarmOS as separate apps and syncing the data later somehow
**Additional context**
I may be able to fund this feature request depending on the costs involved.
| closed | 2023-12-16T19:45:28Z | 2024-10-04T03:54:54Z | https://github.com/kizniche/Mycodo/issues/1357 | [
"enhancement",
"Implemented"
] | samuk | 5 |
SciTools/cartopy | matplotlib | 2,199 | BUG: incompatibility with scipy 1.11.0 | ### Description
Scipy 1.11.0 (released yesterday) appear to break [use cases exercised in yt's CI](https://github.com/yt-project/yt/issues/4540)
More specifically the breaking change seems to be https://github.com/scipy/scipy/pull/18502
#### Code to reproduce
```python
import numpy as np
import cartopy.crs as ccrs
from matplotlib.figure import Figure
fig = Figure()
ax = fig.add_axes((0, 0, 1, 1), projection=ccrs.Mollweide())
ax.imshow([[0, 1], [0, 1]], transform=ccrs.PlateCarree())
```
#### Traceback
```python-traceback
Traceback (most recent call last):
File "/private/tmp/t.py", line 7, in <module>
ax.imshow([[0, 1], [0, 1]], transform=ccrs.PlateCarree())
File "/private/tmp/.venv/lib/python3.11/site-packages/cartopy/mpl/geoaxes.py", line 318, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/tmp/.venv/lib/python3.11/site-packages/cartopy/mpl/geoaxes.py", line 1331, in imshow
img, extent = warp_array(img,
^^^^^^^^^^^^^^^
File "/private/tmp/.venv/lib/python3.11/site-packages/cartopy/img_transform.py", line 192, in warp_array
array = regrid(array, source_native_xy[0], source_native_xy[1],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/tmp/.venv/lib/python3.11/site-packages/cartopy/img_transform.py", line 278, in regrid
_, indices = kdtree.query(target_xyz, k=1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "_ckdtree.pyx", line 795, in scipy.spatial._ckdtree.cKDTree.query
ValueError: 'x' must be finite, check for nan or inf values
```
<details>
<summary>Full environment definition</summary>
<!-- fill in the following information as appropriate -->
### Operating system
Discovered in CI on Ubuntu, and reproduced on macOS
### Cartopy version
0.21.1
### pip list
```
Package Version
--------------- --------
Cartopy 0.21.1
certifi 2023.5.7
contourpy 1.1.0
cycler 0.11.0
fonttools 4.40.0
kiwisolver 1.4.4
matplotlib 3.7.1
numpy 1.25.0
packaging 23.1
Pillow 9.5.0
pip 23.1.2
pyparsing 3.1.0
pyproj 3.6.0
pyshp 2.3.1
python-dateutil 2.8.2
scipy 1.11.0
setuptools 65.5.0
shapely 2.0.1
six 1.16.0
```
</details>
| closed | 2023-06-26T18:17:01Z | 2023-07-14T14:12:03Z | https://github.com/SciTools/cartopy/issues/2199 | [] | neutrinoceros | 7 |
aeon-toolkit/aeon | scikit-learn | 1,766 | [DOC] Add Raises to docstrings for methods that can raise exceptions | ### Describe the issue linked to the documentation
We mostly do not document the errors raised and the reasons for them. It would be good to do so. This should be done incrementally and is a good first issue.
1. Pick and estimator and try break it/see what exceptions are present
2. Look at the docstrings (may be base class for some items i.e. input datatype) and see if its documented
3. If not add under "Raises"
This should cover only errors and exceptions we intentionally raise, known edge cases, or those which provide generally useful information to the user, i.e. for invalid parameter values. We do not need to document every item which can possibly raise an exception.
Raises
-------
We may need to discuss where it should be documented and the effect this has on the documentation, @MatthewMiddlehurst. I'll look for a good example.
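For instance, a minimal numpydoc-style sketch of what such a section might look like (the estimator and messages below are hypothetical, purely for illustration):

```python
class DummyEstimator:
    """Hypothetical estimator used only to illustrate a ``Raises`` section."""

    def fit(self, X, y=None):
        """Fit the estimator to training data.

        Raises
        ------
        ValueError
            If ``X`` is empty.
        TypeError
            If ``X`` is not a list-like collection.
        """
        if not hasattr(X, "__len__"):
            raise TypeError("X must be a list-like collection")
        if len(X) == 0:
            raise ValueError("X is empty")
        return self

print("Raises" in DummyEstimator.fit.__doc__)  # True
```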
### Suggest a potential alternative/fix
_No response_ | open | 2024-07-05T11:20:24Z | 2025-03-21T22:04:45Z | https://github.com/aeon-toolkit/aeon/issues/1766 | [
"documentation",
"good first issue"
] | TonyBagnall | 8 |
BeanieODM/beanie | asyncio | 306 | [Question] not insert None value | ```python
from datetime import datetime
from typing import Optional

from beanie import Document

class Student(Document):
    name: Optional[str]
    birth: Optional[datetime]

new_student = Student(name="New Name")
```
I don't want to insert `Student.birth` into MongoDB because it's `None`.
How can I do this?
Thanks to you all
| closed | 2022-07-19T06:47:22Z | 2022-07-20T13:57:45Z | https://github.com/BeanieODM/beanie/issues/306 | [] | joshung | 2 |
docarray/docarray | fastapi | 1,425 | Bug: cannot stack empty torch tensor | How to reproduce:
```python
from typing import Optional

from docarray import BaseDoc, DocVec
from docarray.typing import TorchTensor
class MyDoc(BaseDoc):
tens: Optional[TorchTensor]
vec = DocVec[MyDoc]([MyDoc(), MyDoc()])
```
```bash
Traceback (most recent call last):
File "/home/johannes/.cache/pypoetry/virtualenvs/docarray-EljsZLuq-py3.8/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3553, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-11-e510c10fcd92>", line 1, in <cell line: 1>
vec = DocVec[MyDoc]([MyDoc(), MyDoc()])
File "/home/johannes/Documents/jina/docarrayv2/docarray/array/doc_vec/doc_vec.py", line 160, in __init__
cast(AbstractTensor, tensor_columns[field_name])[i] = val
``` | closed | 2023-04-19T23:57:12Z | 2023-04-20T00:01:51Z | https://github.com/docarray/docarray/issues/1425 | [] | JohannesMessner | 1 |
plotly/dash-bio | dash | 488 | Rename async modules | Same as https://github.com/plotly/dash-core-components/issues/745 | closed | 2020-02-27T00:48:46Z | 2020-03-10T19:19:43Z | https://github.com/plotly/dash-bio/issues/488 | [
"dash-type-bug",
"size: 0.2"
] | Marc-Andre-Rivet | 0 |
xlwings/xlwings | automation | 2,310 | xlwings-error-python process exit before it was possible to create interface object | #### OS Windows 11
#### Versions of xlwings, Excel and Python (e.g. 0.30.10, Office 365, Python 3.11)
#### Describe your issue

I tried to run my Python code in Excel using xlwings, but every time I import the function or click the button, this error pops up. My Python code runs smoothly in Spyder. I tried changing my code to a simple function, but the same error kept popping up. So I guess it's a problem with my xlwings installation or the way Python and xlwings are connected.
Could anyone help me with this?

```python
# Some search internet and save pdf functions
```

```vba
Sub DowJones()
    RunPython ("from xw import main; main()")
End Sub
```

| open | 2023-07-30T17:21:25Z | 2024-12-16T20:14:01Z | https://github.com/xlwings/xlwings/issues/2310 | [] | LucyZZZen | 2 |
reiinakano/xcessiv | scikit-learn | 58 | Base Learner Correlation Matrix | First of all, big props for this project! A big help in constructing big stacking models.
It would maybe be interesting to get some visualizations in the tool, like e.g. a correlation matrix between the meta-features.
If I ever get some time to spare, I'll start reading up on the code base and see if I can integrate it. | open | 2018-02-16T10:19:29Z | 2018-02-16T10:19:29Z | https://github.com/reiinakano/xcessiv/issues/58 | [] | GillesVandewiele | 0 |
wkentaro/labelme | computer-vision | 925 | [Feature] human pose | Does it support human pose labeling?
For example:
The image has two people.
And the label should indicate which person each keypoint belongs to.
| closed | 2021-10-01T02:55:31Z | 2023-02-28T05:26:31Z | https://github.com/wkentaro/labelme/issues/925 | [] | alicera | 4 |
lepture/authlib | django | 130 | Requesting empty scope removes scope from response | When this [request is made](https://httpie.org/doc#forms):
http -a client:secret -f :/auth/token grant_type=client_credentials scope=
I get a response without `scope`, even though it was given in the request.
Code responsible for this is here:
https://github.com/lepture/authlib/blob/master/authlib/oauth2/rfc6750/wrappers.py#L98-L99
Is this a bug? I would expect `scope` to be present in the response since it was given in the request, even if the given scope was an empty string. | closed | 2019-05-13T06:29:32Z | 2019-05-14T05:27:16Z | https://github.com/lepture/authlib/issues/130 | [] | sirex | 2
docarray/docarray | fastapi | 1,192 | Pre-commit hook fails due to poetry issues | **Describe the bug**
When I tried to commit, the pre-commit hook failed and gave the following stack trace.
```
stderr:
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [14 lines of output]
Traceback (most recent call last):
File "/home/rik/.cache/pre-commit/repohmn7vo7q/py_env-python3.8/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/home/rik/.cache/pre-commit/repohmn7vo7q/py_env-python3.8/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/home/rik/.cache/pre-commit/repohmn7vo7q/py_env-python3.8/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 149, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
File "/tmp/pip-build-env-wtpx5r0z/overlay/lib/python3.8/site-packages/poetry/core/masonry/api.py", line 40, in prepare_metadata_for_build_wheel
poetry = Factory().create_poetry(Path(".").resolve(), with_groups=False)
File "/tmp/pip-build-env-wtpx5r0z/overlay/lib/python3.8/site-packages/poetry/core/factory.py", line 57, in create_poetry
raise RuntimeError("The Poetry configuration is invalid:\n" + message)
RuntimeError: The Poetry configuration is invalid:
- [extras.pipfile_deprecated_finder.2] 'pip-shims<=0.3.4' does not match '^[a-zA-Z-_.0-9]+$'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
```
**To Reproduce**
Steps to reproduce the behavior:
1. Make a fresh clone of `docarray` with the v2 branch
2. Try to make a commit
3. A stack trace similar to above should appear
**Expected behavior**
The pre-commit hook should pass through
**Screenshots**

_Note: This is running on wsl2 terminal_
**Additional context**
[This](https://levelup.gitconnected.com/fix-runtimeerror-poetry-isort-5db7c67b60ff) medium article might be of help | closed | 2023-03-01T10:41:08Z | 2023-05-06T09:36:33Z | https://github.com/docarray/docarray/issues/1192 | [] | hrik2001 | 0 |
TracecatHQ/tracecat | fastapi | 342 | [FEATURE IDEA] Add UI to show action if `run_if` is specified | ## Why
- It's hard to tell if a node has a conditional attached unless you select the node or gave it a meaningful title
## Suggested solution
- Add a greenish `#C1DEAF` border that shows up around the node
- Prompt the user to give the "condition" a human-readable name (why? because the expression will probably be too long)
- If no human-readable name is given, take the last `.attribute` and `operator value` part of the expression as the condition name.
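A hypothetical sketch of that fallback-naming heuristic (the function name and regex are mine, not Tracecat's):

```python
import re

def condition_name(expr: str) -> str:
    # Hypothetical heuristic: keep only the trailing `.attribute <op> value`
    # fragment of a long expression to use as the display name.
    match = re.search(r"(\.\w+\s*(?:==|!=|>=|<=|>|<|in)\s*\S+)\s*$", expr)
    return match.group(1).lstrip(".") if match else expr

print(condition_name("ACTIONS.detect.result.severity == 'high'"))
# severity == 'high'
```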
"enhancement",
"frontend"
] | topher-lo | 2 |
deepset-ai/haystack | machine-learning | 8,631 | Port Tools from experimental | After new `ChatMessage` is introduced, we should port into Haystack all the work done for Tools in haystack-experimental.
(most of the resources on Tools are collected here: https://github.com/deepset-ai/haystack-experimental/discussions/98)
```[tasklist]
### Tasks
- [x] Tool dataclass
- [x] HF API Chat Generator
- [x] Tool Invoker component
- [x] OpenAI Chat Generator
```
| closed | 2024-12-12T10:29:17Z | 2024-12-20T14:35:30Z | https://github.com/deepset-ai/haystack/issues/8631 | [
"P1",
"topic:agent"
] | anakin87 | 1 |
stanfordnlp/stanza | nlp | 936 | relation changes in stanza (ex: dobj -> obj) | Hello,
I noticed this https://github.com/stanfordnlp/stanza/commit/b6d83e20a65a8cd46005f96dfa3a6d49d863759b commit, where the relation 'dobj' has been changed to 'obj'. Are there any other relations that have been changed in stanza compared to the previous Java-based CoreNLP server?
Thanks. | closed | 2022-02-02T06:22:24Z | 2022-02-09T17:55:35Z | https://github.com/stanfordnlp/stanza/issues/936 | [
"question"
] | swatiagarwal-s | 2 |
twopirllc/pandas-ta | pandas | 424 | Psar loop starting at wrong index 1 | **Pandas ta version**
```python
0.3.14b0
```
**Describe the bug**
When reviewing the psar code (https://github.com/twopirllc/pandas-ta/blob/main/pandas_ta/trend/psar.py#L46), I found that the loop starts at the wrong index, 1; it should start at 2 (line 46).
Because after we do this computation:
_sar = max(high.iloc[row - 1], high.iloc[row - 2], _sar)
or this one
_sar = min(low.iloc[row - 1], low.iloc[row - 2], _sar)
when `row` is 1, we compare `high.iloc[0]` and `high.iloc[-1]`, which is the last element of the series, not the previous one.
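The wrap-around can be seen with plain Python indexing (`.iloc` follows the same negative-index semantics):

```python
# Demonstrates the off-by-one: at row == 1, `row - 2` is -1,
# which indexes the LAST element rather than a previous bar.
high = [10.0, 11.0, 12.0, 13.0]
row = 1
print(high[row - 1])  # 10.0 -> high[0], the previous bar (fine)
print(high[row - 2])  # 13.0 -> high[-1], the last element (wrong bar)
```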
| closed | 2021-11-05T14:06:31Z | 2022-02-09T17:15:47Z | https://github.com/twopirllc/pandas-ta/issues/424 | [] | DevDaoud | 6 |
nalepae/pandarallel | pandas | 167 | ValueError: cannot find context for fork |
'3.8.12 (default, Oct 12 2021, 03:01:40) [MSC v.1916 64 bit (AMD64)]'
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [1], in <module>
----> 1 import pandarallel
File ~\Anaconda3\envs\pepti\lib\site-packages\pandarallel\__init__.py:3, in <module>
1 __version__ = "1.5.4"
----> 3 from .pandarallel import pandarallel
File ~\Anaconda3\envs\pepti\lib\site-packages\pandarallel\pandarallel.py:27, in <module>
23 from pandarallel.utils.tools import ERROR, INPUT_FILE_READ, PROGRESSION, VALUE
25 # Python 3.8 on MacOS by default uses "spawn" instead of "fork" as start method for new
26 # processes, which is incompatible with pandarallel. We force it to use "fork" method.
---> 27 context = get_context("fork")
29 # By default, Pandarallel use all available CPUs
30 NB_WORKERS = context.cpu_count()
File ~\Anaconda3\envs\pepti\lib\multiprocessing\context.py:239, in DefaultContext.get_context(self, method)
237 return self._actual_context
238 else:
--> 239 return super().get_context(method)
File ~\Anaconda3\envs\pepti\lib\multiprocessing\context.py:193, in BaseContext.get_context(self, method)
191 ctx = _concrete_contexts[method]
192 except KeyError:
--> 193 raise ValueError('cannot find context for %r' % method) from None
194 ctx._check_available()
195 return ctx
ValueError: cannot find context for 'fork'
``` | closed | 2022-02-03T07:47:20Z | 2022-02-06T17:51:05Z | https://github.com/nalepae/pandarallel/issues/167 | [] | jckkvs | 1 |
ultralytics/ultralytics | pytorch | 19,691 | Error occurred when training YOLOV11 on dataset open-images-v7 | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
train.py:
```python
from ultralytics import YOLO

# Load a COCO-pretrained YOLO11n model
model = YOLO("yolo11n.pt")

# Train the model on the Open Images V7 dataset
results = model.train(data="open-images-v7.yaml", epochs=100, imgsz=640)
```
error output:
Ultralytics 8.3.85 🚀 Python-3.10.15 torch-2.5.0+cu124 CUDA:0 (NVIDIA GeForce RTX 4080, 16076MiB)
engine/trainer: task=detect, mode=train, model=yolo11n.pt, data=open-images-v7.yaml, epochs=100, time=None, patience=100, batch=16, imgsz=640, save=True, save_period=-1, cache=False, device=None, workers=8, project=None, name=train14, exist_ok=False, pretrained=True, optimizer=auto, verbose=True, seed=0, deterministic=True, single_cls=False, rect=False, cos_lr=False, close_mosaic=10, resume=False, amp=True, fraction=1.0, profile=False, freeze=None, multi_scale=False, overlap_mask=True, mask_ratio=4, dropout=0.0, val=True, split=val, save_json=False, save_hybrid=False, conf=None, iou=0.7, max_det=300, half=False, dnn=False, plots=True, source=None, vid_stride=1, stream_buffer=False, visualize=False, augment=False, agnostic_nms=False, classes=None, retina_masks=False, embed=None, show=False, save_frames=False, save_txt=False, save_conf=False, save_crop=False, show_labels=True, show_conf=True, show_boxes=True, line_width=None, format=torchscript, keras=False, optimize=False, int8=False, dynamic=False, simplify=True, opset=None, workspace=None, nms=False, lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=7.5, cls=0.5, dfl=1.5, pose=12.0, kobj=1.0, nbs=64, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, bgr=0.0, mosaic=1.0, mixup=0.0, copy_paste=0.0, copy_paste_mode=flip, auto_augment=randaugment, erasing=0.4, crop_fraction=1.0, cfg=None, tracker=botsort.yaml, save_dir=/media/user/新加卷/zxc_ubuntu/code/ultralytics/runs/detect/train14
Dataset 'open-images-v7.yaml' images not found ⚠️, missing path '/media/user/新加卷/zxc_ubuntu/code/datasets/open-images-v7/images/val'
WARNING ⚠️ Open Images V7 dataset requires at least **561 GB of free space. Starting download...
Downloading split 'train' to '/media/user/新加卷/zxc_ubuntu/code/datasets/fiftyone/open-images-v7/open-images-v7/train' if necessary
Only found 744299 (<1743042) samples matching your requirements
Necessary images already downloaded
Existing download of split 'train' is sufficient
Subprocess ['/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/db/bin/mongod', '--dbpath', '/home/user/.fiftyone/var/lib/mongo', '--logpath', '/home/user/.fiftyone/var/lib/mongo/log/mongo.log', '--port', '0', '--nounixsocket'] exited with error 127:
/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/db/bin/mongod: error while loading shared libraries: libcrypto.so.3: cannot open shared object file: No such file or directory
Traceback (most recent call last):
File "/media/user/新加卷/zxc_ubuntu/code/ultralytics/ultralytics/engine/trainer.py", line 564, in get_dataset
data = check_det_dataset(self.args.data)
File "/media/user/新加卷/zxc_ubuntu/code/ultralytics/ultralytics/data/utils.py", line 385, in check_det_dataset
exec(s, {"yaml": data})
File "<string>", line 21, in <module>
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/zoo/datasets/__init__.py", line 399, in load_zoo_dataset
if fo.dataset_exists(dataset_name):
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/core/dataset.py", line 103, in dataset_exists
conn = foo.get_db_conn()
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/core/odm/database.py", line 394, in get_db_conn
_connect()
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/core/odm/database.py", line 233, in _connect
establish_db_conn(fo.config)
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/core/odm/database.py", line 195, in establish_db_conn
port = _db_service.port
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/core/service.py", line 277, in port
return self._wait_for_child_port()
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/core/service.py", line 171, in _wait_for_child_port
return find_port()
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/retrying.py", line 56, in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/retrying.py", line 266, in call
raise attempt.get()
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/retrying.py", line 301, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/six.py", line 719, in reraise
raise value
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/retrying.py", line 251, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File "/home/user/anaconda3/envs/yolo/lib/python3.10/site-packages/fiftyone/core/service.py", line 169, in find_port
raise ServiceListenTimeout(etau.get_class_name(self), port)
fiftyone.core.service.ServiceListenTimeout: fiftyone.core.service.DatabaseService failed to bind to port
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/media/user/新加卷/zxc_ubuntu/code/ultralytics/train.py", line 7, in <module>
results = model.train(data="open-images-v7.yaml", epochs=100, imgsz=640)
File "/media/user/新加卷/zxc_ubuntu/code/ultralytics/ultralytics/engine/model.py", line 804, in train
self.trainer = (trainer or self._smart_load("trainer"))(overrides=args, _callbacks=self.callbacks)
File "/media/user/新加卷/zxc_ubuntu/code/ultralytics/ultralytics/engine/trainer.py", line 134, in __init__
self.trainset, self.testset = self.get_dataset()
File "/media/user/新加卷/zxc_ubuntu/code/ultralytics/ultralytics/engine/trainer.py", line 568, in get_dataset
raise RuntimeError(emojis(f"Dataset '{clean_url(self.args.data)}' error ❌ {e}")) from e
RuntimeError: Dataset 'open-images-v7.yaml' error ❌ fiftyone.core.service.DatabaseService failed to bind to port
open-images-v7.yaml:
# Ultralytics 🚀 AGPL-3.0 License - https://ultralytics.com/license
# Open Images v7 dataset https://storage.googleapis.com/openimages/web/index.html by Google
# Documentation: https://docs.ultralytics.com/datasets/detect/open-images-v7/
# Example usage: yolo train data=open-images-v7.yaml
# parent
# ├── ultralytics
# └── datasets
# └── open-images-v7 ← downloads here (561 GB)
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/open-images-v7 # dataset root dir
train: images/train # train images (relative to 'path') 1743042 images
val: images/val # val images (relative to 'path') 41620 images
test: # test images (optional)
# Classes
names:
0: Accordion
1: Adhesive tape
2: Aircraft
3: Airplane
4: Alarm clock
5: Alpaca
6: Ambulance
7: Animal
8: Ant
9: Antelope
10: Apple
11: Armadillo
12: Artichoke
13: Auto part
14: Axe
15: Backpack
16: Bagel
17: Baked goods
18: Balance beam
19: Ball
20: Balloon
21: Banana
22: Band-aid
23: Banjo
24: Barge
25: Barrel
26: Baseball bat
27: Baseball glove
28: Bat (Animal)
29: Bathroom accessory
30: Bathroom cabinet
31: Bathtub
32: Beaker
33: Bear
34: Bed
35: Bee
36: Beehive
37: Beer
38: Beetle
39: Bell pepper
40: Belt
41: Bench
42: Bicycle
43: Bicycle helmet
44: Bicycle wheel
45: Bidet
46: Billboard
47: Billiard table
48: Binoculars
49: Bird
50: Blender
51: Blue jay
52: Boat
53: Bomb
54: Book
55: Bookcase
56: Boot
57: Bottle
58: Bottle opener
59: Bow and arrow
60: Bowl
61: Bowling equipment
62: Box
63: Boy
64: Brassiere
65: Bread
66: Briefcase
67: Broccoli
68: Bronze sculpture
69: Brown bear
70: Building
71: Bull
72: Burrito
73: Bus
74: Bust
75: Butterfly
76: Cabbage
77: Cabinetry
78: Cake
79: Cake stand
80: Calculator
81: Camel
82: Camera
83: Can opener
84: Canary
85: Candle
86: Candy
87: Cannon
88: Canoe
89: Cantaloupe
90: Car
91: Carnivore
92: Carrot
93: Cart
94: Cassette deck
95: Castle
96: Cat
97: Cat furniture
98: Caterpillar
99: Cattle
100: Ceiling fan
101: Cello
102: Centipede
103: Chainsaw
104: Chair
105: Cheese
106: Cheetah
107: Chest of drawers
108: Chicken
109: Chime
110: Chisel
111: Chopsticks
112: Christmas tree
113: Clock
114: Closet
115: Clothing
116: Coat
117: Cocktail
118: Cocktail shaker
119: Coconut
120: Coffee
121: Coffee cup
122: Coffee table
123: Coffeemaker
124: Coin
125: Common fig
126: Common sunflower
127: Computer keyboard
128: Computer monitor
129: Computer mouse
130: Container
131: Convenience store
132: Cookie
133: Cooking spray
134: Corded phone
135: Cosmetics
136: Couch
137: Countertop
138: Cowboy hat
139: Crab
140: Cream
141: Cricket ball
142: Crocodile
143: Croissant
144: Crown
145: Crutch
146: Cucumber
147: Cupboard
148: Curtain
149: Cutting board
150: Dagger
151: Dairy Product
152: Deer
153: Desk
154: Dessert
155: Diaper
156: Dice
157: Digital clock
158: Dinosaur
159: Dishwasher
160: Dog
161: Dog bed
162: Doll
163: Dolphin
164: Door
165: Door handle
166: Doughnut
167: Dragonfly
168: Drawer
169: Dress
170: Drill (Tool)
171: Drink
172: Drinking straw
173: Drum
174: Duck
175: Dumbbell
176: Eagle
177: Earrings
178: Egg (Food)
179: Elephant
180: Envelope
181: Eraser
182: Face powder
183: Facial tissue holder
184: Falcon
185: Fashion accessory
186: Fast food
187: Fax
188: Fedora
189: Filing cabinet
190: Fire hydrant
191: Fireplace
192: Fish
193: Flag
194: Flashlight
195: Flower
196: Flowerpot
197: Flute
198: Flying disc
199: Food
200: Food processor
201: Football
202: Football helmet
203: Footwear
204: Fork
205: Fountain
206: Fox
207: French fries
208: French horn
209: Frog
210: Fruit
211: Frying pan
212: Furniture
213: Garden Asparagus
214: Gas stove
215: Giraffe
216: Girl
217: Glasses
218: Glove
219: Goat
220: Goggles
221: Goldfish
222: Golf ball
223: Golf cart
224: Gondola
225: Goose
226: Grape
227: Grapefruit
228: Grinder
229: Guacamole
230: Guitar
231: Hair dryer
232: Hair spray
233: Hamburger
234: Hammer
235: Hamster
236: Hand dryer
237: Handbag
238: Handgun
239: Harbor seal
240: Harmonica
241: Harp
242: Harpsichord
243: Hat
244: Headphones
245: Heater
246: Hedgehog
247: Helicopter
248: Helmet
249: High heels
250: Hiking equipment
251: Hippopotamus
252: Home appliance
253: Honeycomb
254: Horizontal bar
255: Horse
256: Hot dog
257: House
258: Houseplant
259: Human arm
260: Human beard
261: Human body
262: Human ear
263: Human eye
264: Human face
265: Human foot
266: Human hair
267: Human hand
268: Human head
269: Human leg
270: Human mouth
271: Human nose
272: Humidifier
273: Ice cream
274: Indoor rower
275: Infant bed
276: Insect
277: Invertebrate
278: Ipod
279: Isopod
280: Jacket
281: Jacuzzi
282: Jaguar (Animal)
283: Jeans
284: Jellyfish
285: Jet ski
286: Jug
287: Juice
288: Kangaroo
289: Kettle
290: Kitchen & dining room table
291: Kitchen appliance
292: Kitchen knife
293: Kitchen utensil
294: Kitchenware
295: Kite
296: Knife
297: Koala
298: Ladder
299: Ladle
300: Ladybug
301: Lamp
302: Land vehicle
303: Lantern
304: Laptop
305: Lavender (Plant)
306: Lemon
307: Leopard
308: Light bulb
309: Light switch
310: Lighthouse
311: Lily
312: Limousine
313: Lion
314: Lipstick
315: Lizard
316: Lobster
317: Loveseat
318: Luggage and bags
319: Lynx
320: Magpie
321: Mammal
322: Man
323: Mango
324: Maple
325: Maracas
326: Marine invertebrates
327: Marine mammal
328: Measuring cup
329: Mechanical fan
330: Medical equipment
331: Microphone
332: Microwave oven
333: Milk
334: Miniskirt
335: Mirror
336: Missile
337: Mixer
338: Mixing bowl
339: Mobile phone
340: Monkey
341: Moths and butterflies
342: Motorcycle
343: Mouse
344: Muffin
345: Mug
346: Mule
347: Mushroom
348: Musical instrument
349: Musical keyboard
350: Nail (Construction)
351: Necklace
352: Nightstand
353: Oboe
354: Office building
355: Office supplies
356: Orange
357: Organ (Musical Instrument)
358: Ostrich
359: Otter
360: Oven
361: Owl
362: Oyster
363: Paddle
364: Palm tree
365: Pancake
366: Panda
367: Paper cutter
368: Paper towel
369: Parachute
370: Parking meter
371: Parrot
372: Pasta
373: Pastry
374: Peach
375: Pear
376: Pen
377: Pencil case
378: Pencil sharpener
379: Penguin
380: Perfume
381: Person
382: Personal care
383: Personal flotation device
384: Piano
385: Picnic basket
386: Picture frame
387: Pig
388: Pillow
389: Pineapple
390: Pitcher (Container)
391: Pizza
392: Pizza cutter
393: Plant
394: Plastic bag
395: Plate
396: Platter
397: Plumbing fixture
398: Polar bear
399: Pomegranate
400: Popcorn
401: Porch
402: Porcupine
403: Poster
404: Potato
405: Power plugs and sockets
406: Pressure cooker
407: Pretzel
408: Printer
409: Pumpkin
410: Punching bag
411: Rabbit
412: Raccoon
413: Racket
414: Radish
415: Ratchet (Device)
416: Raven
417: Rays and skates
418: Red panda
419: Refrigerator
420: Remote control
421: Reptile
422: Rhinoceros
423: Rifle
424: Ring binder
425: Rocket
426: Roller skates
427: Rose
428: Rugby ball
429: Ruler
430: Salad
431: Salt and pepper shakers
432: Sandal
433: Sandwich
434: Saucer
435: Saxophone
436: Scale
437: Scarf
438: Scissors
439: Scoreboard
440: Scorpion
441: Screwdriver
442: Sculpture
443: Sea lion
444: Sea turtle
445: Seafood
446: Seahorse
447: Seat belt
448: Segway
449: Serving tray
450: Sewing machine
451: Shark
452: Sheep
453: Shelf
454: Shellfish
455: Shirt
456: Shorts
457: Shotgun
458: Shower
459: Shrimp
460: Sink
461: Skateboard
462: Ski
463: Skirt
464: Skull
465: Skunk
466: Skyscraper
467: Slow cooker
468: Snack
469: Snail
470: Snake
471: Snowboard
472: Snowman
473: Snowmobile
474: Snowplow
475: Soap dispenser
476: Sock
477: Sofa bed
478: Sombrero
479: Sparrow
480: Spatula
481: Spice rack
482: Spider
483: Spoon
484: Sports equipment
485: Sports uniform
486: Squash (Plant)
487: Squid
488: Squirrel
489: Stairs
490: Stapler
491: Starfish
492: Stationary bicycle
493: Stethoscope
494: Stool
495: Stop sign
496: Strawberry
497: Street light
498: Stretcher
499: Studio couch
500: Submarine
501: Submarine sandwich
502: Suit
503: Suitcase
504: Sun hat
505: Sunglasses
506: Surfboard
507: Sushi
508: Swan
509: Swim cap
510: Swimming pool
511: Swimwear
512: Sword
513: Syringe
514: Table
515: Table tennis racket
516: Tablet computer
517: Tableware
518: Taco
519: Tank
520: Tap
521: Tart
522: Taxi
523: Tea
524: Teapot
525: Teddy bear
526: Telephone
527: Television
528: Tennis ball
529: Tennis racket
530: Tent
531: Tiara
532: Tick
533: Tie
534: Tiger
535: Tin can
536: Tire
537: Toaster
538: Toilet
539: Toilet paper
540: Tomato
541: Tool
542: Toothbrush
543: Torch
544: Tortoise
545: Towel
546: Tower
547: Toy
548: Traffic light
549: Traffic sign
550: Train
551: Training bench
552: Treadmill
553: Tree
554: Tree house
555: Tripod
556: Trombone
557: Trousers
558: Truck
559: Trumpet
560: Turkey
561: Turtle
562: Umbrella
563: Unicycle
564: Van
565: Vase
566: Vegetable
567: Vehicle
568: Vehicle registration plate
569: Violin
570: Volleyball (Ball)
571: Waffle
572: Waffle iron
573: Wall clock
574: Wardrobe
575: Washing machine
576: Waste container
577: Watch
578: Watercraft
579: Watermelon
580: Weapon
581: Whale
582: Wheel
583: Wheelchair
584: Whisk
585: Whiteboard
586: Willow
587: Window
588: Window blind
589: Wine
590: Wine glass
591: Wine rack
592: Winter melon
593: Wok
594: Woman
595: Wood-burning stove
596: Woodpecker
597: Worm
598: Wrench
599: Zebra
600: Zucchini
# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
  from ultralytics.utils import LOGGER, SETTINGS, Path, is_ubuntu, get_ubuntu_version
  from ultralytics.utils.checks import check_requirements, check_version

  check_requirements('fiftyone')
  if is_ubuntu() and check_version(get_ubuntu_version(), '>=22.04'):
      # Ubuntu>=22.04 patch https://github.com/voxel51/fiftyone/issues/2961#issuecomment-1666519347
      check_requirements('fiftyone-db-ubuntu2204')

  import fiftyone as fo
  import fiftyone.zoo as foz
  import warnings

  name = 'open-images-v7'
  fo.config.dataset_zoo_dir = Path(SETTINGS["datasets_dir"]) / "fiftyone" / name
  fraction = 1.0  # fraction of full dataset to use
  LOGGER.warning('WARNING ⚠️ Open Images V7 dataset requires at least **561 GB of free space. Starting download...')
  for split in 'train', 'validation':  # 1743042 train, 41620 val images
      train = split == 'train'

      # Load Open Images dataset
      dataset = foz.load_zoo_dataset(name,
                                     split=split,
                                     label_types=['detections'],
                                     classes=["Ambulance","Bicycle","Bus","Boy","Car","Motorcycle","Man","Person","Stop sign","Girl","Truck","Traffic light","Traffic sign","Cat", "Dog","Unicycle","Vehicle","Woman","Land vehicle","Snowplow","Van"],
                                     max_samples=round((1743042 if train else 41620) * fraction))

      # Define classes
      if train:
          classes = dataset.default_classes  # all classes
          # classes = dataset.distinct('ground_truth.detections.label')  # only observed classes

      # Export to YOLO format
      with warnings.catch_warnings():
          warnings.filterwarnings("ignore", category=UserWarning, module="fiftyone.utils.yolo")
          dataset.export(export_dir=str(Path(SETTINGS['datasets_dir']) / name),
                         dataset_type=fo.types.YOLOv5Dataset,
                         label_field='ground_truth',
                         split='val' if split == 'validation' else split,
                         classes=classes,
                         overwrite=train)
### Additional

_No response_ | closed | 2025-03-14T06:35:34Z | 2025-03-19T10:43:33Z | https://github.com/ultralytics/ultralytics/issues/19691 | ["question", "dependencies", "detect"] | 1623021453 | 10
aio-libs/aiomysql | asyncio | 154 | how does aiomysql know to reuse a mysql pool? | In my program I made a global engine by storing it in a list.
However, when I use ab to test a simple MySQL proxy server with: ab -n 10000 -c 1000 http://localhost
(the proxy server runs 2 workers on a 2-CPU machine)
I got a very strange result. I set the connection pool to minsize 10, maxsize 20; after the first ab run, MySQL showed 2 connections, and running it again and again the count grew to 40 by the end and never went higher.
I was wondering why this happens. How can I control all my connections? | open | 2017-03-07T06:24:47Z | 2022-01-13T01:00:45Z | https://github.com/aio-libs/aiomysql/issues/154 | ["question"] | ihjmh | 12
manrajgrover/halo | jupyter | 25 | emojis not show | 
| open | 2017-10-11T02:07:09Z | 2020-03-07T23:06:31Z | https://github.com/manrajgrover/halo/issues/25 | [
"bug"
] | likezjuisee | 12 |
robotframework/robotframework | automation | 5,002 | "Parsing type failed"/"Type name missing" error message appears on the wrong argument | ```py
# a.py
from __future__ import annotations
from typing import Callable
def foo(a: Callable[[], None], b: asdf) -> None: ...
```
```
[ ERROR ] Error in library 'a': Adding keyword 'foo' failed: Parsing type 'Callable[[], None]' failed: Error at index 9: Type name missing.
```
removing the invalid `asdf` type annotation fixes the issue, so the error should be complaining about the `b` argument instead of the `a` argument | closed | 2024-01-05T05:57:17Z | 2024-01-06T23:42:25Z | https://github.com/robotframework/robotframework/issues/5002 | ["task", "priority: low"] | DetachHead | 1
tortoise/tortoise-orm | asyncio | 1,520 | Join the same table twice | **Is your feature request related to a problem? Please describe.**
When joining two tables, the generated query contains the same table name twice, and this happens only when the second joined table is the same as the base table (`self.model._meta.basetable`) in `queryset.AwaitableQuery`. According to this masterpiece of a [pypika issue](https://github.com/kayak/pypika/issues/248), for a correct join each table must have its own alias, and that is what `queryset.AwaitableQuery.resolve_filters` does. But as always, in the `QueryModifier` value
> where_criterion, joins, having_criterion = modifier.get_query_modifiers()

**joins** is an array of tables, and every instance of `self.model._meta.basetable` in this array is a *reference* to the same `self.model._meta.basetable` object, so every join on this table gets the same alias.
> Hence the error from the RDBMS: "the **tablename** appears several times"

**Describe the solution you'd like**
The simplest solution is to make a copy each time we encounter an instance of `self.model._meta.basetable` — to say it correctly, **it's a reference** to `self.model._meta.basetable`.
My solution is to add two lines at line 139 in `queryset.py`:

```py
.....
for join in joins:
    if join[0] not in self._joined_tables:
        if join[0] is self.model._meta.basetable:  # 1
            join = (copy(self.model._meta.basetable), join[1])  # 2
        join[0].alias = "U" + str(len(self._joined_tables))
        self.query = self.query.join(join[0], how=JoinType.left_outer).on(join[1])
        self._joined_tables.append(join[0])
....
```

**Describe alternatives you've considered**
Or simply write a model manager with a new `CustomAwaitableQuery` which inherits from `AwaitableQuery` and adds the two lines above.
**Additional context**
N/A
| open | 2023-11-29T00:14:05Z | 2023-11-29T00:14:05Z | https://github.com/tortoise/tortoise-orm/issues/1520 | [] | edimedia | 0 |
littlecodersh/ItChat | api | 386 | Error on startup: SSL certificate error | itchat v1.3.7 reports an SSL certificate error on startup; I tried it on three servers (Alibaba Cloud, Tencent Cloud, Bandwagon) and they all behave the same way.
All three servers have had their SSL certificates renewed once — I don't know whether that's related. Why would logging in to WeChat from my server need my server's SSL certificate at all? And why does renewing the certificate once make it stop working? 😑
The log is as follows:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/itchat/utils.py", line 125, in test_connect
r = requests.get(config.BASE_URL)
File "/usr/local/lib/python3.6/site-packages/requests/api.py", line 72, in get
return request('get', url, params=params, **kwargs)
File "/usr/local/lib/python3.6/site-packages/requests/api.py", line 58, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 513, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.6/site-packages/requests/sessions.py", line 623, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.6/site-packages/requests/adapters.py", line 514, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: ("bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)",)
You can't get access to internet or wechat domain, so exit.
```
| closed | 2017-06-02T01:48:05Z | 2021-02-01T14:56:23Z | https://github.com/littlecodersh/ItChat/issues/386 | ["question"] | rikumi | 6